NASA’s Perseverance rover has just made history by driving across Mars using routes planned by artificial intelligence instead of human operators. A vision-capable AI analyzed the same images and terrain data normally used by rover planners, identified hazards like rocks and sand ripples, and charted a safe path across the Martian surface. After extensive testing in a virtual replica of the rover, Perseverance successfully followed the AI-generated routes, traveling hundreds of feet autonomously.
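The report doesn't say which algorithm the planner uses; as a toy illustration of hazard-aware route planning, here is a minimal A* search over a 2D terrain grid. The grid, costs, and hazard encoding are all assumptions for illustration, not NASA's actual pipeline:

```python
import heapq

def plan_path(grid, start, goal):
    """A* search over a 2D occupancy grid: 0 = safe, 1 = hazard (rock, ripple)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier,
                    (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no safe route exists

# A small terrain map whose hazard "wall" forces a detour along the right edge.
terrain = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
route = plan_path(terrain, (0, 0), (3, 3))
```

The same idea scales up: the real planner works from hazard maps derived by the vision model, but the core task is still finding a minimum-cost path that never enters a cell flagged as unsafe.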
Quantum computers need extreme cold to work, but the very systems that keep them cold also create noise that can destroy fragile quantum information. Scientists in Sweden have now flipped that problem on its head by building a tiny quantum refrigerator that actually uses noise to drive cooling instead of fighting it. By carefully steering heat at unimaginably small scales, the device can act as a refrigerator, heat engine, or energy amplifier inside quantum circuits.
AI may learn better when it’s allowed to talk to itself. Researchers showed that internal “mumbling,” combined with short-term memory, helps AI adapt to new tasks, switch goals, and handle complex challenges more easily. This approach boosts learning efficiency while using far less training data. It could pave the way for more flexible, human-like AI systems.
A massive new study comparing more than 100,000 people with today’s most advanced AI systems delivers a surprising result: generative AI can now beat the average human on certain creativity tests. Models like GPT-4 showed strong performance on tasks designed to measure original thinking and idea generation, sometimes outperforming typical human responses. But there’s a clear ceiling. The most creative humans — especially the top 10% — still leave AI well behind, particularly on richer creative work like poetry and storytelling.
Researchers have created microscopic robots so small they’re barely visible, yet smart enough to sense, decide, and move completely on their own. Powered by light and equipped with tiny computers, the robots swim by manipulating electric fields rather than using moving parts. They can detect temperature changes, follow programmed paths, and even work together in groups. The breakthrough marks the first truly autonomous robots at this microscopic scale.
A philosopher at the University of Cambridge says there’s no reliable way to know whether AI is conscious—and that may remain true for the foreseeable future. According to Dr. Tom McClelland, consciousness alone isn’t the ethical tipping point anyway; sentience, the capacity to feel good or bad, is what truly matters. He argues that claims of conscious AI are often more marketing than science, and that believing in machine minds too easily could cause real harm. The safest stance for now, he says, is honest uncertainty.
A new AI developed at Duke University can uncover simple, readable rules behind extremely complex systems. It studies how systems evolve over time and reduces thousands of variables into compact equations that still capture real behavior. The method works across physics, engineering, climate science, and biology. Researchers say it could help scientists understand systems where traditional equations are missing or too complicated to write down.
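The summary doesn't name the underlying technique, but a standard way to distill time-series data into compact, readable equations is sparse regression over a library of candidate terms (a SINDy-style sketch, offered as an assumption rather than the Duke method):

```python
import numpy as np

# Synthetic data from a hidden "true" system: dx/dt = -2*x + 0.5*x**3.
x = np.linspace(-1.5, 1.5, 200)
dxdt = -2.0 * x + 0.5 * x**3

# Candidate term library: the method must pick out the few terms that matter.
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
names = ["1", "x", "x^2", "x^3"]

# Sequentially thresholded least squares: fit, zero small coefficients, refit.
coefs = np.linalg.lstsq(library, dxdt, rcond=None)[0]
for _ in range(5):
    small = np.abs(coefs) < 0.1
    coefs[small] = 0.0
    big = ~small
    coefs[big] = np.linalg.lstsq(library[:, big], dxdt, rcond=None)[0]

model = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coefs, names) if c != 0)
print("recovered: dx/dt =", model)
```

On this noiseless toy data the procedure recovers exactly the two active terms, which is the point of the approach: a compact equation a human can read, rather than a black-box fit over thousands of variables.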
Spanish researchers have created a powerful new open-source tool that helps uncover the hidden genetic networks driving cancer. Called RNACOREX, the software can analyze thousands of molecular interactions at once, revealing how genes communicate inside tumors and how those signals relate to patient survival. Tested across 13 different cancer types using international data, the tool matches the predictive power of advanced AI systems—while offering something rare in modern analytics: clear, interpretable explanations that help scientists understand why tumors behave the way they do.
Researchers used a deep learning model to uncover the first imaging-based biomarker of chronic stress by measuring adrenal gland volume on routine CT scans. This new metric, the Adrenal Volume Index, correlates strongly with cortisol levels, allostatic load, perceived stress, and even long-term cardiovascular outcomes, including heart failure risk.
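As a hypothetical sketch of the measurement step: organ volume falls out directly from a segmentation mask and the scan's voxel spacing. The height normalization in `avi` below is an assumption for illustration, since the study's exact index definition isn't given here:

```python
import numpy as np

def organ_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: 3D boolean array, e.g. output of a deep-learning segmenter.
    voxel_spacing_mm: (z, y, x) spacing of the CT scan in mm.
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy example: a 10x10x10-voxel "gland" on a 1.0 x 0.7 x 0.7 mm grid.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
vol = organ_volume_ml(mask, (1.0, 0.7, 0.7))

# Hypothetical index: combined left+right volume normalized by patient height.
avi = (vol + vol) / 1.75  # mL per metre, assuming a 1.75 m patient
```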
BISC is an ultra-thin neural implant that creates a high-bandwidth wireless link between the brain and computers. Its tiny single-chip design packs tens of thousands of electrodes and supports advanced AI models for decoding movement, perception, and intent. Initial clinical work shows it can be inserted through a small opening in the skull and remain stable while capturing detailed neural activity. The technology could reshape treatments for epilepsy, paralysis, and blindness.
Researchers have built a fully implantable device that sends light-based messages directly to the brain. Mice learned to interpret these artificial patterns as meaningful signals, even without touch, sight, or sound. The system uses up to 64 micro-LEDs to create complex neural patterns that resemble natural sensory activity. It could pave the way for next-generation prosthetics and new therapies.
Researchers combined deep learning with high-resolution physics to create the first Milky Way model that tracks over 100 billion stars individually. Their AI learned how gas behaves after supernovae, removing one of the biggest computational bottlenecks in galactic modeling. The result is a simulation hundreds of times faster than current methods.
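The coupling between the network and the simulation isn't detailed in the summary, but the surrogate-modeling pattern it describes is general: run the expensive sub-calculation offline to make training data, fit a fast model, then call the surrogate inside the main loop. A minimal sketch, with a polynomial standing in for the paper's deep network and an invented stand-in for the supernova physics:

```python
import numpy as np

def expensive_feedback(density):
    """Stand-in for a costly sub-grid calculation (e.g. post-supernova gas state).

    In a real galactic simulation a call like this might take minutes."""
    return np.log1p(density) * np.exp(-0.1 * density)

# 1) Generate training data by running the expensive model offline.
rng = np.random.default_rng(1)
train_x = rng.uniform(0.0, 10.0, 500)
train_y = expensive_feedback(train_x)

# 2) Fit a cheap surrogate (a polynomial here; the paper trains a neural net).
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=6))

# 3) Inside the big simulation, call the surrogate instead of the slow model.
test_x = np.linspace(0.5, 9.5, 50)
err = np.max(np.abs(surrogate(test_x) - expensive_feedback(test_x)))
```

The speedup comes from step 3: every star or gas cell that would have triggered the expensive calculation instead gets a near-instant surrogate evaluation, which is how the bottleneck disappears.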
Chimps may revise their beliefs in surprisingly human-like ways. Experiments showed they switched choices when presented with stronger clues, demonstrating flexible reasoning. Computational modeling confirmed these decisions weren’t just instinct. The findings could influence how we think about learning in both children and AI.
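The summary doesn't specify the computational model, but belief revision of this kind is commonly formalized as Bayesian updating. A toy two-box version (purely illustrative, not the study's actual model) shows how a stronger clue rationally flips a choice:

```python
def update(prior, likelihood_if_a, likelihood_if_b):
    """Posterior probability that the food is in box A, via Bayes' rule."""
    pa = prior * likelihood_if_a
    pb = (1 - prior) * likelihood_if_b
    return pa / (pa + pb)

# A weak clue (say, a faint sound) favors box A.
belief_a = update(0.5, 0.7, 0.3)        # belief in A rises to 0.7
# A stronger clue then favors box B, so the belief should flip.
belief_a = update(belief_a, 0.1, 0.9)   # belief in A drops below 0.5
switched = belief_a < 0.5               # the rational choice is now box B
```

Fitting a model like this to trial-by-trial choices is how researchers can distinguish evidence-weighted revision from simple instinct or habit.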
Aalto University researchers have developed a method to execute AI tensor operations using just one pass of light. By encoding data directly into light waves, they enable calculations to occur naturally and simultaneously. The approach works passively, without electronics, and could soon be integrated into photonic chips. If adopted, it promises dramatically faster and more energy-efficient AI systems.
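The key mathematical point is that a fixed tensor operation is linear, so it can be folded into a single matrix that a passive element applies in one traversal of the light. A small illustration, using a 1D convolution rewritten as one matrix-vector product (not Aalto's actual encoding):

```python
import numpy as np

# A tensor operation (here a 1D convolution) expressed as ONE matrix-vector
# product: the form a passive linear optical element can compute in one pass.
kernel = np.array([1.0, 0.0, -1.0])
signal = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

# Build the equivalent banded matrix W so that W @ signal == conv(signal).
n_out = len(signal) - len(kernel) + 1
W = np.zeros((n_out, len(signal)))
for i in range(n_out):
    W[i, i:i + len(kernel)] = kernel[::-1]  # kernel is flipped by convention

one_pass = W @ signal                        # what the light does in one traversal
reference = np.convolve(signal, kernel, mode="valid")
```

Because `W` is fixed, it can be etched into the optics ahead of time; the computation then happens "for free" as the encoded wavefront propagates through, with no clocked electronics in the loop.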
Researchers have created a prediction method whose outputs track real-world outcomes unusually closely. Instead of training models only to minimize average error, it optimizes directly for agreement between predicted and observed values. Tests on medical and health data showed it often outperforms classic approaches. The discovery could reshape how scientists make reliable forecasts.
Researchers at Tsinghua University developed the Optical Feature Extraction Engine (OFE2), an optical engine that processes data at 12.5 GHz using light rather than electricity. Its integrated diffraction and data preparation modules enable unprecedented speed and efficiency for AI tasks. Demonstrations in imaging and trading showed improved accuracy, lower latency, and reduced power demand. This innovation pushes optical computing toward real-world, high-performance AI.
UMass Amherst engineers have built an artificial neuron powered by bacterial protein nanowires that functions like a real one, but at extremely low voltage. This allows for seamless communication with biological cells and drastically improved energy efficiency. The discovery could lead to bio-inspired computers and wearable electronics that no longer need power-hungry amplifiers. Future applications may include sensors powered by sweat or devices that harvest electricity from thin air.
Vast amounts of valuable research data remain unused, trapped in labs or lost to time. Frontiers aims to change that with FAIR² Data Management, a groundbreaking AI-driven system that makes datasets reusable, verifiable, and citable. By uniting curation, compliance, peer review, and interactive visualization in one platform, FAIR² empowers scientists to share their work responsibly and gain recognition.
A team at the University at Buffalo has made it possible to simulate complex quantum systems without needing a supercomputer. By extending the truncated Wigner approximation, they’ve created an accessible, efficient way to model real-world quantum behavior. Their method translates dense equations into a ready-to-use format that runs on ordinary computers. It could transform how physicists explore quantum phenomena.
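The truncated Wigner approximation itself follows a simple recipe: sample initial conditions from the Wigner distribution of the initial quantum state, evolve each sample under the classical equations of motion, and average. A minimal sketch for a harmonic oscillator in a coherent state (illustrative of the base method, not the Buffalo team's extension):

```python
import numpy as np

# Truncated Wigner sketch for a harmonic oscillator (hbar = m = omega = 1)
# starting in a coherent state centered at (x0, p0) = (2, 0).
rng = np.random.default_rng(42)
n_samples = 50_000

# 1) Sample initial conditions from the Wigner function of the initial state:
#    for a coherent state, a Gaussian of variance 1/2 in each quadrature.
x = rng.normal(2.0, np.sqrt(0.5), n_samples)
p = rng.normal(0.0, np.sqrt(0.5), n_samples)

# 2) Evolve every sample under the CLASSICAL equations of motion.
t = np.pi / 2
xt = x * np.cos(t) + p * np.sin(t)

# 3) Average over the ensemble to estimate quantum expectation values.
mean_x = xt.mean()   # exact quantum answer: 2*cos(t) = 0
var_x = xt.var()     # exact quantum answer: 1/2 (vacuum noise survives)
```

Each trajectory is just ordinary classical mechanics, which is why the scheme runs on a laptop; the quantum character enters only through the statistics of the sampled initial conditions.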
A team of engineers at North Carolina State University has designed a polymer “Chinese lantern” that can rapidly snap into multiple stable 3D shapes—including a lantern, a spinning top, and more—by compression or twisting. By adding a magnetic layer, they achieved remote control of the shape-shifting process, allowing the lanterns to act as grippers, filters, or expandable mechanisms.