A new tomato-picking robot is learning to think before it acts. Instead of simply identifying ripe fruit, it predicts how difficult each tomato will be to harvest and adjusts its approach accordingly, even switching approach angles when a grasp looks hard. This difficulty-aware strategy boosted harvest success rates to 81%. The breakthrough could pave the way for farms where robots and humans work side by side.
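The core idea of ranking fruit by predicted harvesting difficulty can be sketched in a few lines. This is an illustration only: the feature names (`occlusion`, `angle_deg`) and the scoring rule are hypothetical stand-ins for the robot's learned difficulty model.

```python
def choose_target(tomatoes, predict_difficulty):
    """Pick the candidate tomato predicted to be easiest to harvest.

    `predict_difficulty` stands in for the robot's learned model;
    lower scores mean an easier grasp.
    """
    return min(tomatoes, key=predict_difficulty)

# Toy difficulty model (hypothetical): occluded fruit and steep
# approach angles are penalized.
candidates = [
    {"id": 1, "occlusion": 0.7, "angle_deg": 60},
    {"id": 2, "occlusion": 0.1, "angle_deg": 20},
]
easiest = choose_target(candidates, lambda t: t["occlusion"] + t["angle_deg"] / 90)
```

In a real system the lambda would be replaced by a trained predictor, and the robot would re-rank candidates as it moves and occlusions change.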
AI may learn better when it’s allowed to talk to itself. Researchers showed that internal “mumbling,” combined with short-term memory, helps AI adapt to new tasks, switch goals, and handle complex challenges more easily. This approach boosts learning efficiency while using far less training data. It could pave the way for more flexible, human-like AI systems.
Foams were once thought to behave like glass, with bubbles frozen in place at the microscopic level. But new simulations reveal that foam bubbles are always shifting, even while the foam keeps its overall shape. Remarkably, this restless motion follows the same math used to train artificial intelligence. The finding hints that learning-like behavior may be a fundamental principle shared by materials, machines, and living cells.
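The "same math used to train artificial intelligence" presumably refers to gradient-descent-style dynamics, in which a system relaxes by repeatedly stepping downhill on some cost landscape. A minimal sketch of that update rule on a one-dimensional toy loss (all names and numbers illustrative):

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step downhill along the negative gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3);
# the iterates relax toward the minimum at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The analogy in the simulations is that bubble rearrangements keep lowering an energy-like quantity in this step-by-step fashion, even though the foam's overall shape stays put.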
New research shows that AI doesn’t need endless training data to start acting more like a human brain. When researchers redesigned AI systems to better resemble biological brains, some models produced brain-like activity without any training at all. This challenges today’s data-hungry approach to AI development. The work suggests smarter design could dramatically speed up learning while slashing costs and energy use.
AI tools designed to diagnose cancer from tissue samples are quietly learning more than just disease patterns. New research shows these systems can infer patient demographics from pathology slides, leading to biased results for certain groups. The bias stems from how the models are trained and the data they see, not just from missing samples. Researchers also demonstrated a way to significantly reduce these disparities.
Researchers used a deep learning AI model to uncover the first imaging-based biomarker of chronic stress by measuring adrenal gland volume on routine CT scans. This new metric, the Adrenal Volume Index, correlates strongly with cortisol levels, allostatic load, perceived stress, and even long-term cardiovascular outcomes, including heart failure risk.
Princeton researchers found that the brain excels at learning because it reuses modular “cognitive blocks” across many tasks. Monkeys switching between visual categorization challenges revealed that the prefrontal cortex assembles these blocks like Legos to create new behaviors. This flexibility explains why humans learn quickly while AI models often forget old skills. The insights may help build better AI and new clinical treatments for impaired cognitive adaptability.
Researchers combined deep learning with high-resolution physics to create the first Milky Way model that tracks over 100 billion stars individually. Their AI learned how gas behaves after supernovae, removing one of the biggest computational bottlenecks in galactic modeling. The result is a simulation hundreds of times faster than current methods.
Chimps may revise their beliefs in surprisingly human-like ways. In experiments, they abandoned an initial choice when stronger evidence pointed elsewhere, a hallmark of flexible reasoning. Computational modeling confirmed that their decisions tracked the strength of the evidence rather than mere instinct. The findings could influence how we think about learning in both children and AI.
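A standard way to model this kind of evidence-driven belief revision is Bayesian updating, where a stronger clue shifts the belief further. This is a generic illustration of that framework, not the authors' specific model; the likelihood values are made up.

```python
def bayes_update(prior, likelihood_a, likelihood_b):
    """Posterior probability of hypothesis A after observing a clue."""
    unnorm_a = prior * likelihood_a          # P(A) * P(clue | A)
    unnorm_b = (1 - prior) * likelihood_b    # P(B) * P(clue | B)
    return unnorm_a / (unnorm_a + unnorm_b)

# A weak clue nudges belief toward option A...
weak = bayes_update(0.5, likelihood_a=0.6, likelihood_b=0.4)
# ...but a much stronger clue for option B overturns it.
strong = bayes_update(weak, likelihood_a=0.1, likelihood_b=0.9)
```

After the weak clue the belief in A rises to 0.6; after the stronger opposing clue it drops well below 0.5, mirroring the choice-switching behavior seen in the chimpanzees.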
USC researchers built artificial neurons that replicate real brain processes using ion-based diffusive memristors. These devices emulate how neurons use chemicals to transmit and process signals, offering massive energy and size advantages. The technology may enable brain-like, hardware-based learning systems. It could transform AI into something closer to natural intelligence.
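Neuron-like devices of this kind are often described in software terms as leaky integrate-and-fire units: input charge accumulates, leaks away over time, and triggers a spike past a threshold. The sketch below shows that abstract behavior only; the parameters are illustrative and do not describe the USC memristor hardware.

```python
def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire neuron.

    The membrane value decays by `leak`, integrates the input,
    and resets to zero when it crosses `threshold` (a spike).
    """
    v = leak * v + i_in
    if v >= threshold:
        return 0.0, True   # spike and reset
    return v, False

# Constant input slowly charges the neuron until it fires once.
v, spikes = 0.0, []
for i_in in [0.3, 0.3, 0.3, 0.3, 0.0]:
    v, fired = lif_step(v, i_in)
    spikes.append(fired)
```

The appeal of diffusive memristors is that this integrate-leak-fire cycle happens physically in the device, rather than being simulated step by step as above.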
Scientists at Skoltech developed a new mathematical model of memory that explores how information is encoded and stored. Their analysis suggests that memory works best in a seven-dimensional conceptual space — equivalent to having seven senses. The finding implies that both humans and AI might benefit from broader sensory inputs to optimize learning and recall.
Scientists at Mount Sinai have created an artificial intelligence system that can predict how likely rare genetic mutations are to actually cause disease. By combining machine learning with millions of electronic health records and routine lab tests like cholesterol or kidney function, the system produces "ML penetrance" scores that place genetic risk on a spectrum rather than a simple yes/no. Some variants once thought dangerous showed little real-world impact, while others previously labeled uncertain revealed strong disease links.
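Placing genetic risk "on a spectrum rather than a simple yes/no" amounts to outputting a continuous probability instead of a binary label. A toy logistic-model sketch of that idea follows; the feature names and weights are hypothetical and unrelated to the Mount Sinai system's actual model.

```python
import math

def penetrance_score(weights, features, bias=0.0):
    """Map patient features to a 0-1 risk score via a logistic function.

    Unlike a yes/no classifier, the raw probability is kept,
    so variants land on a continuous risk spectrum.
    """
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical standardized lab features (e.g. cholesterol, kidney function)
# for a carrier of some variant; weights are made up for illustration.
score = penetrance_score(weights=[0.8, -0.5], features=[1.2, -0.4])
```

In this framing, a variant "once thought dangerous" with little real-world impact would simply accumulate scores near zero across carriers, while an "uncertain" variant with strong disease links would score consistently high.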