Florida Polytechnic University’s newest high-tech addition is making dining on campus a little easier and a lot more fun. With just a few taps on a phone screen, autonomous delivery robots can quickly deliver bagels, burritos, lattes and much more to Phoenixes throughout campus. Florida Poly’s Phoenix Dining, managed by Chartwells Higher Education Dining Services, […]
Artificial intelligence is often portrayed as a tool that replaces human work, but new research from Swansea University suggests a far more exciting role: creative collaborator. In a large study with more than 800 participants designing virtual cars, researchers found that AI-generated design galleries sparked deeper engagement, longer exploration, and better results.
Researchers at Kobe University have developed an AI system that can detect acromegaly, a rare hormone disorder, by analyzing photos of the back of the hand and a clenched fist. The disease often develops slowly and can take years to diagnose, even though untreated cases may shorten life expectancy.
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.
Scientists at the University of New Hampshire have unleashed artificial intelligence to dramatically speed up the hunt for next-generation magnetic materials. By building a massive, searchable database of 67,573 magnetic compounds — including 25 newly recognized materials that stay magnetic even at high temperatures — the team is opening the door to cheaper, more sustainable technologies.
Researchers at the University of Michigan have created an AI system that can interpret brain MRI scans in just seconds, accurately identifying a wide range of neurological conditions and determining which cases need urgent care. Trained on hundreds of thousands of real-world scans along with patient histories, the model achieved accuracy as high as 97.5% and outperformed other advanced AI tools.
A philosopher at the University of Cambridge says there’s no reliable way to know whether AI is conscious—and that may remain true for the foreseeable future. According to Dr. Tom McClelland, consciousness alone isn’t the ethical tipping point anyway; sentience, the capacity to feel good or bad, is what truly matters. He argues that claims of conscious AI are often more marketing than science, and that believing in machine minds too easily could cause real harm. The safest stance for now, he says, is honest uncertainty.
A new AI developed at Duke University can uncover simple, readable rules behind extremely complex systems. It studies how systems evolve over time and reduces thousands of variables into compact equations that still capture real behavior. The method works across physics, engineering, climate science, and biology. Researchers say it could help scientists understand systems where traditional equations are missing or too complicated to write down.
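To make the idea concrete, here is a minimal sketch of data-driven equation discovery via sparse regression — a generic illustration of the general approach, not the Duke team's actual method. It recovers the single term governing a toy system (dx/dt = -2x) from simulated time-series data by fitting a library of candidate terms and discarding the near-zero ones:

```python
import numpy as np

# Toy illustration (not the Duke system itself): recover a compact governing
# equation from time-series data using sparse regression over candidate terms.

# Simulate dx/dt = -2x with simple Euler steps
dt, steps = 0.001, 5000
x = np.empty(steps)
x[0] = 1.0
for t in range(steps - 1):
    x[t + 1] = x[t] + dt * (-2.0 * x[t])

# Estimate the time derivative numerically
dxdt = np.gradient(x, dt)

# Library of candidate terms: x, x^2, x^3
library = np.column_stack([x, x**2, x**3])

# Least-squares fit, then zero out small coefficients to enforce sparsity
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0

print(coef)  # first coefficient ≈ -2, the others pruned to 0
```

The surviving coefficient reads off the equation dx/dt ≈ -2x directly; the same pattern scales to many variables and richer term libraries.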
Aalto University researchers have developed a method to execute AI tensor operations using just one pass of light. By encoding data directly into light waves, they enable calculations to occur naturally and simultaneously. The approach works passively, without electronics, and could soon be integrated into photonic chips. If adopted, it promises dramatically faster and more energy-efficient AI systems.
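Mathematically, a fixed passive optical element acts on an incoming light field as one linear transform, so an entire matrix-vector product happens in a single pass. The sketch below is a conceptual analogy (not Aalto's implementation): a unitary matrix stands in for lossless light propagation, and data encoded in complex field amplitudes is transformed "all at once":

```python
import numpy as np

# Conceptual analogy (not the Aalto device): passive, lossless optics apply a
# fixed unitary transform to the light field, i.e. one matrix-vector product.

rng = np.random.default_rng(1)

# Fixed "optical" transfer matrix: unitary, like interference in lossless media
M = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))[0]

x = rng.normal(size=4) + 1j * rng.normal(size=4)  # data encoded as field amplitudes
y = M @ x                                         # one "pass" through the optics

# Lossless propagation preserves total optical power (the vector norm)
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))
```

Because the transform is fixed in the hardware, no electronic multiply-accumulate steps are needed — the physics performs the arithmetic.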
Researchers at Tsinghua University developed the Optical Feature Extraction Engine (OFE2), an optical engine that processes data at 12.5 GHz using light rather than electricity. Its integrated diffraction and data preparation modules enable unprecedented speed and efficiency for AI tasks. Demonstrations in imaging and trading showed improved accuracy, lower latency, and reduced power demand. This innovation pushes optical computing toward real-world, high-performance AI.
Researchers at the University of Surrey developed an AI that predicts what a person’s knee X-ray will look like in a year, helping track osteoarthritis progression. The tool provides both a visual forecast and a risk score, offering doctors and patients a clearer understanding of the disease. Faster and more interpretable than earlier systems, it could soon expand to predict other conditions like lung or heart disease.
A team at the University at Buffalo has made it possible to simulate complex quantum systems without needing a supercomputer. By expanding the truncated Wigner approximation, they’ve created an accessible, efficient way to model real-world quantum behavior. Their method translates dense equations into a ready-to-use format that runs on ordinary computers. It could transform how physicists explore quantum phenomena.
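The core idea of the truncated Wigner approximation can be shown in a few lines — this is a generic textbook illustration, not the Buffalo group's extended formulation: sample initial conditions from the quantum state's Wigner distribution, evolve each sample with the classical equations of motion, and average observables over samples.

```python
import numpy as np

# Minimal truncated-Wigner sketch (generic illustration): quantum fluctuations
# enter through random initial conditions; the dynamics of each sample are
# purely classical, so it runs cheaply on an ordinary computer.

rng = np.random.default_rng(2)

hbar, m, omega = 1.0, 1.0, 1.0
n_samples, t = 100_000, 1.3

# Coherent state centered at (x0, p0): a Gaussian Wigner distribution with
# vacuum-level fluctuations in each quadrature.
x0, p0 = 2.0, 0.0
x = x0 + rng.normal(scale=np.sqrt(hbar / (2 * m * omega)), size=n_samples)
p = p0 + rng.normal(scale=np.sqrt(hbar * m * omega / 2), size=n_samples)

# Classical evolution of a harmonic oscillator: a rotation in phase space
xt = x * np.cos(omega * t) + (p / (m * omega)) * np.sin(omega * t)

# The sample average reproduces the quantum expectation <x(t)> = x0 cos(wt)
print(np.mean(xt), x0 * np.cos(omega * t))
```

For the harmonic oscillator the method is exact; for interacting systems it remains a controlled approximation while avoiding the exponential cost of full quantum simulation.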
A team of engineers at North Carolina State University has designed a polymer “Chinese lantern” that can rapidly snap into multiple stable 3D shapes—including a lantern, a spinning top, and more—when compressed or twisted. By adding a magnetic layer, they achieved remote control of the shape-shifting process, allowing the lanterns to act as grippers, filters, or expandable mechanisms.
Artificial intelligence is consuming enormous amounts of energy, but researchers at the University of Florida have built a chip that could sharply curb that demand by using light instead of electricity for a core AI function. By etching microscopic lenses directly onto silicon, they’ve enabled laser-powered computations that cut power use dramatically while maintaining near-perfect accuracy.
Artificial intelligence is reshaping law, ethics, and society at a speed that threatens fundamental human dignity. Dr. Maria Randazzo of Charles Darwin University warns that current regulation fails to protect rights such as privacy, autonomy, and freedom from discrimination. The “black box problem” leaves people unable to trace or challenge AI decisions that may harm them.