Modern food systems may look stable on the surface, but they are increasingly dependent on digital systems that can quietly become a major point of failure. Today, food must be “recognized” by databases and automated platforms to be transported, sold, or even released, meaning that if those systems go down, food can effectively become unusable—even when it is physically available.
A conversation between LocaXion CEO Viren Mathuria and Redpoint CEO Chunjie Duan on safety-grade RTLS, what genuine precision demands, and the architecture built for what’s coming. RTLS (Real-Time Location System) refers to wireless technology used to automatically identify, track, and manage the precise location of assets, equipment, or people within a defined indoor or outdoor […]
Robots make work faster, cleaner, and more consistent. They also change how risk shows up on the floor. The danger is not only the moving arm, but also the in-between moments, when a cell is paused, a jam is cleared, or a quick adjustment turns into hands inside the fence. Operators sit closest to these […]
A servo drive converts electrical commands into precise mechanical motion, influencing the speed, position and torque of every motor it powers. This capability makes it one of the most significant components of any production line. When a drive falls short, the effects manifest instantly in positioning errors, lost cycles and unplanned downtime. Five solutions stand […]
Deepfake X-rays created by AI are now convincing enough to fool both doctors and AI models. In tests, radiologists had limited success identifying fake images, especially when they didn’t know fakes were being shown. This opens the door to risks like fraudulent medical claims and tampered diagnoses. Experts say stronger safeguards and detection tools are critical as the technology advances.
A new tomato-picking robot is learning to think before it acts. Instead of simply identifying ripe fruit, it predicts how easy each tomato will be to harvest and adjusts its approach accordingly. This smarter strategy boosted success rates to 81%, with the robot even switching angles when needed. The breakthrough could pave the way for farms where robots and humans work side by side.
A new study put ChatGPT to the test by asking it to judge whether hundreds of scientific hypotheses were true or false—and the results were far from reassuring. While the AI answered correctly about 80% of the time on the surface, its performance dropped significantly once random guessing was accounted for, revealing only modest reasoning ability. Even more concerning, it frequently contradicted itself when asked the exact same question multiple times, sometimes flipping answers back and forth.
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.
AI may learn better when it’s allowed to talk to itself. Researchers showed that internal “mumbling,” combined with short-term memory, helps AI adapt to new tasks, switch goals, and handle complex challenges more easily. This approach boosts learning efficiency while using far less training data. It could pave the way for more flexible, human-like AI systems.
New research shows that AI doesn’t need endless training data to start acting more like a human brain. When researchers redesigned AI systems to better resemble biological brains, some models produced brain-like activity without any training at all. This challenges today’s data-hungry approach to AI development. The work suggests smarter design could dramatically speed up learning while slashing costs and energy use.
Chimps may revise their beliefs in surprisingly human-like ways. Experiments showed they switched choices when presented with stronger clues, demonstrating flexible reasoning. Computational modeling confirmed these decisions weren’t just instinct. The findings could influence how we think about learning in both children and AI.