A new AI developed at Duke University can uncover simple, readable rules behind extremely complex systems. It studies how systems evolve over time and reduces thousands of variables into compact equations that still capture real behavior. The method works across physics, engineering, climate science, and biology. Researchers say it could help scientists understand systems where traditional equations are missing or too complicated to write down.
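The summary does not name the algorithm, but one standard route from high-dimensional time-series data to compact, readable equations is sparse regression over a library of candidate terms (the SINDy approach). A minimal sketch under that assumption, using a synthetic system dx/dt = -2x rather than anything from the study:

```python
import numpy as np

# Hypothetical illustration (not Duke's actual method): recover a
# simple governing equation from time-series data via sparse regression.
# Assumed system: dx/dt = -2*x.
t = np.linspace(0, 2, 201)
x = np.exp(-2 * t)                    # trajectory of dx/dt = -2x
dxdt = np.gradient(x, t)              # numerical derivative of the data

# Library of candidate terms the equation could be built from: [1, x, x^2]
theta = np.column_stack([np.ones_like(x), x, x**2])

# Fit by least squares, then threshold small coefficients to keep
# only the few terms that matter - this is what yields a compact equation.
coef, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0
print(coef)   # ≈ [0, -2, 0], i.e. dx/dt ≈ -2x
```

The thresholding step is the key design choice: it trades a little fit accuracy for an equation short enough for a human to read.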
Spanish researchers have created a powerful new open-source tool that helps uncover the hidden genetic networks driving cancer. Called RNACOREX, the software can analyze thousands of molecular interactions at once, revealing how genes communicate inside tumors and how those signals relate to patient survival. Tested across 13 different cancer types using international data, the tool matches the predictive power of advanced AI systems—while offering something rare in modern analytics: clear, interpretable explanations that help scientists understand why tumors behave the way they do.
AI tools designed to diagnose cancer from tissue samples are quietly learning more than just disease patterns. New research shows these systems can infer patient demographics from pathology slides, leading to biased results for certain groups. The bias stems from how the models are trained and the data they see, not just from missing samples. Researchers also demonstrated a way to significantly reduce these disparities.
Researchers used a deep learning AI model to uncover the first imaging-based biomarker of chronic stress by measuring adrenal gland volume on routine CT scans. This new metric, the Adrenal Volume Index, correlates strongly with cortisol levels, allostatic load, perceived stress, and even long-term cardiovascular outcomes, including heart failure risk.
Researchers have built a fully implantable device that sends light-based messages directly to the brain. Mice learned to interpret these artificial patterns as meaningful signals, even without touch, sight, or sound. The system uses up to 64 micro-LEDs to create complex neural patterns that resemble natural sensory activity. It could pave the way for next-generation prosthetics and new therapies.
New findings challenge the widespread belief that AI is an environmental villain. By analyzing U.S. economic data and AI usage across industries, researchers discovered that AI’s energy consumption—while significant locally—barely registers at national or global scales. Even more surprising, AI could help accelerate green technologies rather than hinder them.
Princeton researchers found that the brain excels at learning because it reuses modular “cognitive blocks” across many tasks. Experiments in which monkeys switched between visual categorization tasks showed that the prefrontal cortex assembles these blocks like Lego bricks to create new behaviors. This flexibility helps explain why humans learn quickly while AI models often forget old skills. The insights may help build better AI and new clinical treatments for impaired cognitive adaptability.
Researchers combined deep learning with high-resolution physics to create the first Milky Way model that tracks over 100 billion stars individually. Their AI learned how gas behaves after supernovae, removing one of the biggest computational bottlenecks in galactic modeling. The result is a simulation hundreds of times faster than current methods.
Aalto University researchers have developed a method to execute AI tensor operations using just one pass of light. By encoding data directly into light waves, they enable calculations to occur naturally and simultaneously. The approach works passively, without electronics, and could soon be integrated into photonic chips. If adopted, it promises dramatically faster and more energy-efficient AI systems.
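Why a single pass suffices: a fixed, passive optical element acts on the light field as one big linear transform, so every output channel is computed simultaneously as the light propagates. A conceptual numpy sketch of that idea (the matrix and vector here are arbitrary stand-ins, not Aalto's system):

```python
import numpy as np

# Conceptual sketch: a passive optical element behaves like a fixed
# complex-valued matrix M. Encoding data into the amplitude and phase
# of light and sending it through the element once yields M @ x at the
# detectors - the whole tensor operation in a single pass, no clock.
rng = np.random.default_rng(0)

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # "the optics"
x = rng.normal(size=4) + 1j * rng.normal(size=4)            # encoded light

y = M @ x                       # one pass computes every output at once
intensity = np.abs(y) ** 2      # what photodetectors actually measure
print(intensity)
```

Because the transform is fixed in the hardware, there is no per-operation energy cost beyond the light itself, which is the source of the efficiency claim.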
Researchers have created a prediction method whose outputs track real-world values remarkably closely. Rather than simply minimizing average error, it optimizes for agreement between predictions and observed outcomes. Tests on medical and health data showed it often outperforms classic approaches. The discovery could reshape how scientists make reliable forecasts.
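The blurb is light on detail, but "alignment with actual values rather than reducing mistakes" reads like optimizing an agreement measure such as Lin's concordance correlation coefficient instead of mean squared error. A hedged sketch of why the two criteria differ (the data here is made up for illustration):

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient: 1 means perfect
    agreement with the truth, not merely perfect correlation."""
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (vt + vp + (mt - mp) ** 2)

y = np.array([1.0, 2.0, 3.0, 4.0])
shifted = y + 1.0                                  # constant offset, MSE = 1.0
scattered = y + np.array([1.0, -1.0, 1.0, -1.0])   # pure scatter, MSE = 1.0 too

# Error minimization sees these two predictors as identical (same MSE);
# a concordance criterion ranks them differently.
print(ccc(y, shifted))     # ≈ 0.714
print(ccc(y, scattered))   # ≈ 0.600
```

An agreement-based objective distinguishes systematic bias from scatter even when their average errors are equal, which is one plausible reading of the reported gains.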
USC researchers built artificial neurons that replicate real brain processes using ion-based diffusive memristors. These devices emulate how neurons use chemicals to transmit and process signals, offering massive energy and size advantages. The technology may enable brain-like, hardware-based learning systems. It could transform AI into something closer to natural intelligence.
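The USC device is analog hardware, but its behavior is often summarized with the leaky integrate-and-fire model: the neuron's state builds up with input and decays away, much as ion concentrations diffuse in the memristor. A toy software analog of that dynamic (parameters are arbitrary, not from the paper):

```python
# Hypothetical leaky integrate-and-fire sketch, a common software
# analog for diffusive-memristor neurons (not the USC device itself).
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # integrate input; leak mimics ion diffusion
        if v >= threshold:          # fire once the threshold is crossed
            spikes.append(1)
            v = 0.0                 # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.9]))   # [0, 0, 1, 0, 0]
```

In the memristor, this integrate-leak-fire cycle happens physically in one tiny device, which is where the claimed energy and size advantages come from.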
Researchers at Tsinghua University developed the Optical Feature Extraction Engine (OFE2), an optical engine that processes data at 12.5 GHz using light rather than electricity. Its integrated diffraction and data preparation modules enable unprecedented speed and efficiency for AI tasks. Demonstrations in imaging and trading showed improved accuracy, lower latency, and reduced power demand. This innovation pushes optical computing toward real-world, high-performance AI.
A wireless eye implant developed at Stanford Medicine has restored reading ability to people with advanced macular degeneration. The PRIMA chip works with smart glasses to replace lost photoreceptors using infrared light. Most trial participants regained functional vision, reading books and recognizing signs. Researchers are now developing higher-resolution versions that could eventually provide near-normal sight.
Researchers at the University of Surrey developed an AI that predicts what a person’s knee X-ray will look like in a year, helping track osteoarthritis progression. The tool provides both a visual forecast and a risk score, offering doctors and patients a clearer understanding of the disease. Faster and more interpretable than earlier systems, it could soon expand to predict other conditions like lung or heart disease.
Vast amounts of valuable research data remain unused, trapped in labs or lost to time. Frontiers aims to change that with FAIR² Data Management, a groundbreaking AI-driven system that makes datasets reusable, verifiable, and citable. By uniting curation, compliance, peer review, and interactive visualization in one platform, FAIR² empowers scientists to share their work responsibly and gain recognition.
Researchers at Columbia have created a chip that turns a single laser into a “frequency comb,” producing dozens of powerful light channels at once. Using a special locking mechanism to clean messy laser light, the team achieved lab-grade precision on a small silicon device. This could drastically improve data center efficiency and fuel innovations in sensing, quantum tech, and LiDAR.
A powerful new AI tool called Diag2Diag is revolutionizing fusion research by filling in missing plasma data with synthetic yet highly detailed information. Developed by Princeton scientists and international collaborators, this system uses sensor input to predict readings other diagnostics can’t capture, especially in the crucial plasma edge region where stability determines performance. By reducing reliance on bulky hardware, it promises to make future fusion reactors more compact, affordable, and reliable.
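Diag2Diag's architecture is not described here, but its core idea, predicting one diagnostic's readings from others, is a supervised sensor-to-sensor regression. A toy sketch with ridge regression standing in for the actual deep model, on entirely synthetic signals:

```python
import numpy as np

# Toy stand-in for the sensor-to-sensor idea (not Diag2Diag itself):
# learn to predict a hard-to-measure diagnostic from two routine ones.
rng = np.random.default_rng(1)

routine = rng.normal(size=(500, 2))          # two routine sensors (synthetic)
true_w = np.array([1.5, -0.7])               # assumed hidden relationship
edge = routine @ true_w + 0.01 * rng.normal(size=500)  # the "missing" signal

# Ridge regression as a minimal surrogate for the neural network
lam = 1e-3
A = routine.T @ routine + lam * np.eye(2)
w = np.linalg.solve(A, routine.T @ edge)

new_reading = np.array([0.2, -1.0])
print(w @ new_reading)   # synthetic estimate of the unmeasured diagnostic
```

Once such a mapping is learned, the physical instrument it replaces can be omitted, which is the route to the smaller, cheaper reactors the article describes.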
Using laser light instead of traditional mechanics, researchers have built micro-gears that can spin, shift direction, and even power tiny machines. These breakthroughs could soon lead to revolutionary medical tools working at the scale of cells.
Artificial intelligence is consuming enormous amounts of energy, but researchers at the University of Florida have built a chip that could change everything by using light instead of electricity for a core AI function. By etching microscopic lenses directly onto silicon, they’ve enabled laser-powered computations that cut power use dramatically while maintaining near-perfect accuracy.
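The physics behind lens-based computing is that a lens optically Fourier-transforms a light field, and convolution, a core operation in neural networks, becomes a single pointwise multiplication in the Fourier domain. A numpy sketch of that convolution-theorem shortcut (a hypothetical illustration, not the Florida chip's actual design):

```python
import numpy as np

# Fourier-optics idea in miniature: convolution equals a pointwise
# product of Fourier transforms, which a lens performs passively.
signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.25, 0.5, 0.25, 0.0])

# Convolution theorem: circular conv = IFFT(FFT(signal) * FFT(kernel))
optical = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)).real

# Direct circular convolution for comparison
direct = np.array([np.sum(signal * np.roll(kernel[::-1], i + 1))
                   for i in range(4)])
print(np.allclose(optical, direct))   # True
```

Done in glass instead of silicon logic, the multiply happens as the light passes through, which is why the energy cost drops so sharply.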