As AI systems began acing traditional tests, researchers realized those benchmarks were no longer tough enough. In response, nearly 1,000 experts created Humanity’s Last Exam, a massive 2,500-question challenge covering highly specialized topics across many fields. Questions that current AI models could already solve were filtered out during construction. Early results show even the most advanced systems still struggle — revealing a surprisingly large gap between AI performance and true expert-level knowledge.
Researchers at Kobe University have developed an AI system that can detect acromegaly, a rare hormone disorder, by analyzing photos of the back of the hand and a clenched fist. The disease often develops slowly and can take years to diagnose, even though untreated cases may shorten life expectancy.
Choosing the right method for multimodal AI—systems that combine text, images, and more—has long been trial and error. Emory physicists created a unifying mathematical framework that shows many AI techniques rely on the same core idea: compress data while preserving what’s most predictive. Their “control knob” approach helps researchers design better algorithms, use less data, and avoid wasted computing power. The team believes it could pave the way for more accurate, efficient, and environmentally friendly AI.
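The "compress data while preserving what's most predictive" principle is commonly formalized as an information-bottleneck-style objective; the blurb does not name the exact formulation the Emory team uses, so the following is an illustrative sketch. Given input $X$, target $Y$, and a compressed representation $Z$, one trades off compression against predictive power:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Here $I(\cdot;\cdot)$ is mutual information, and $\beta$ plays the role of the "control knob": small $\beta$ favors aggressive compression, large $\beta$ favors retaining everything predictive of $Y$.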
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.
Researchers tested whether generative AI could handle complex medical datasets as well as human experts. In some cases, the AI matched or outperformed teams that had spent months building prediction models. By generating usable analytical code from precise prompts, the systems dramatically reduced the time needed to process health data. The findings hint at a future where AI helps scientists move faster from data to discovery.
Scientists at the University of New Hampshire have unleashed artificial intelligence to dramatically speed up the hunt for next-generation magnetic materials. By building a massive, searchable database of 67,573 magnetic compounds — including 25 newly recognized materials that stay magnetic even at high temperatures — the team is opening the door to cheaper, more sustainable technologies.
Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy computing hardware while revealing new secrets about how our brains process information.
Researchers at the University of Michigan have created an AI system that can interpret brain MRI scans in just seconds, accurately identifying a wide range of neurological conditions and determining which cases need urgent care. Trained on hundreds of thousands of real-world scans along with patient histories, the model achieved accuracy as high as 97.5% and outperformed other advanced AI tools.
LimX Dynamics (逐际动力) has announced the completion of a US$200 million Series B funding round. The round drew participation from well-known domestic and international institutional investors, including the UAE's Stone Venture (磊石资本), Oriental Fortune Capital (东方富海), Cowin Capital (基石资本), Tianchuang Capital (天创资本), GF Xinde (广发信德), Hefei Innovation Investment (合肥创新投), Guotai Junan Innovation Investment (国泰君安创新投资), China Securities (中信建投), Tangxing Capital (唐兴资本), and Caixin Capital (财鑫资本). Strategic industry investors include JD.com (京东), Zhongding Co. (中鼎股份), Guangyang Co. (光洋股份), and Kyland Technology (东土科技). Existing shareholders — SAIC Motor's Shangqi Capital (尚颀资本), Bi'an Times (彼岸时代), NIO Capital (蔚来资本), and Future Capital (明势创投) — continued to increase their stakes.
Scientists warn that rapid advances in AI and neurotechnology are outpacing our understanding of consciousness, creating serious ethical risks. New research argues that developing scientific tests for awareness could transform medicine, animal welfare, law, and AI development. But identifying consciousness in machines, brain organoids, or patients could also force society to rethink responsibility, rights, and moral boundaries. The question of what it means to be conscious has never been more urgent—or more unsettling.
Dinosaur footprints have always been mysterious, but a new AI app is cracking their secrets. DinoTracker analyzes photos of fossil tracks and predicts which dinosaur made them, with accuracy rivaling human experts. Along the way, it uncovered footprints that look strikingly bird-like—dating back more than 200 million years. That discovery could push the origin of birds much deeper into prehistory.
NASA’s Perseverance rover has just made history by driving across Mars using routes planned by artificial intelligence instead of human operators. A vision-capable AI analyzed the same images and terrain data normally used by rover planners, identified hazards like rocks and sand ripples, and charted a safe path across the Martian surface. After extensive testing in a virtual replica of the rover, Perseverance successfully followed the AI-generated routes, traveling hundreds of feet autonomously.
Quantum computers need extreme cold to work, but the very systems that keep them cold also create noise that can destroy fragile quantum information. Scientists in Sweden have now flipped that problem on its head by building a tiny quantum refrigerator that actually uses noise to drive cooling instead of fighting it. By carefully steering heat at unimaginably small scales, the device can act as a refrigerator, heat engine, or energy amplifier inside quantum circuits.
AI may learn better when it’s allowed to talk to itself. Researchers showed that internal “mumbling,” combined with short-term memory, helps AI adapt to new tasks, switch goals, and handle complex challenges more easily. This approach boosts learning efficiency while using far less training data. It could pave the way for more flexible, human-like AI systems.
A massive new study comparing more than 100,000 people with today’s most advanced AI systems delivers a surprising result: generative AI can now beat the average human on certain creativity tests. Models like GPT-4 showed strong performance on tasks designed to measure original thinking and idea generation, sometimes outperforming typical human responses. But there’s a clear ceiling. The most creative humans — especially the top 10% — still leave AI well behind, particularly on richer creative work like poetry and storytelling.
Scientists have discovered that the human brain understands spoken language in a way that closely resembles how advanced AI language models work. By tracking brain activity as people listened to a long podcast, researchers found that meaning unfolds step by step—much like the layered processing inside systems such as GPT-style models.
Researchers have turned artificial intelligence into a powerful new lens for understanding why cancer survival rates differ so dramatically around the world. By analyzing cancer data and health system information from 185 countries, the AI model highlights which factors, such as access to radiotherapy, universal health coverage, and economic strength, are most closely linked to better survival in each nation.
Humans pay enormous attention to lips during conversation, and robots have struggled badly to keep up. A new robot developed at Columbia Engineering learned realistic lip movements by watching its own reflection and studying human videos online. This allowed it to speak and sing with synchronized facial motion, without being explicitly programmed. Researchers believe this breakthrough could help robots finally cross the uncanny valley.
Foams were once thought to behave like glass, with bubbles frozen in place at the microscopic level. But new simulations reveal that foam bubbles are always shifting, even while the foam keeps its overall shape. Remarkably, this restless motion follows the same math used to train artificial intelligence. The finding hints that learning-like behavior may be a fundamental principle shared by materials, machines, and living cells.
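The "same math used to train artificial intelligence" is plausibly a noisy gradient-descent-style relaxation; the blurb does not name the exact formalism, so this correspondence is an illustrative sketch. Bubble configurations $x$ lowering an energy $E$ under thermal-like fluctuations follow the same update form as stochastic gradient descent on a loss $L$:

```latex
x_{t+1} = x_t - \eta \,\nabla E(x_t) + \xi_t
\qquad \longleftrightarrow \qquad
\theta_{t+1} = \theta_t - \eta \,\nabla L(\theta_t) + \xi_t
```

In both cases the system restlessly explores configurations while its coarse-grained state (the foam's overall shape, the network's overall performance) appears stable — the parallel the simulations reportedly uncovered.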
A generative AI system can now analyze blood cells with greater accuracy and confidence than human experts, detecting subtle signs of diseases like leukemia. It not only spots rare abnormalities but also recognizes its own uncertainty, making it a powerful support tool for clinicians.