Manufacturers in 2026 face a clear problem. Finding obsolete PLC parts is slow, expensive, and uncertain. What used to be a sourcing issue is now an operational risk. Even a minor PLC failure can stop production. Delays now impact output, timelines, and revenue. Why Obsolete PLC Parts Are Hard to Find: PLC systems still run […]
By Gary Ng, CEO and co-founder of viAct There is a major shift occurring at some of the most hazardous workplaces on the planet: oil refineries, large construction projects, and underground mines. New and improving technologies are being used as the first line of defence. Technologies like Artificial Intelligence (AI), robotics, and IoT wearables, […]
Robots make work faster, cleaner, and more consistent. They also change how risk shows up on the floor. The danger is not only the moving arm but also the in-between moments: when a cell is paused, a jam is cleared, or a quick adjustment turns into hands inside the fence. Operators sit closest to these […]
Deepfake X-rays created by AI are now convincing enough to fool both doctors and AI models. In tests, radiologists had limited success identifying fake images, especially when they didn’t know fakes were present. This opens the door to risks like fraudulent medical claims and tampered diagnoses. Experts say stronger safeguards and detection tools are critical as the technology advances.
As millions turn to ChatGPT and other AI chatbots for therapy-style advice, new research from Brown University raises a serious red flag: even when instructed to act like trained therapists, these systems routinely break core ethical standards of mental health care. In side-by-side evaluations with peer counselors and licensed psychologists, researchers uncovered 15 distinct ethical risks — from mishandling crisis situations and reinforcing harmful beliefs to showing biased responses and offering “deceptive empathy” that mimics care without real understanding.
Scientists warn that rapid advances in AI and neurotechnology are outpacing our understanding of consciousness, creating serious ethical risks. New research argues that developing scientific tests for awareness could transform medicine, animal welfare, law, and AI development. But identifying consciousness in machines, brain organoids, or patients could also force society to rethink responsibility, rights, and moral boundaries. The question of what it means to be conscious has never been more urgent—or more unsettling.
Stanford researchers have developed an AI that can predict future disease risk using data from just one night of sleep. The system analyzes detailed physiological signals, looking for hidden patterns across the brain, heart, and breathing. It successfully forecast risks for conditions like cancer, dementia, and heart disease. The results suggest sleep contains early health warnings doctors have largely overlooked.
Researchers used a deep learning AI model to uncover the first imaging-based biomarker of chronic stress by measuring adrenal gland volume on routine CT scans. This new metric, the Adrenal Volume Index, correlates strongly with cortisol levels, allostatic load, perceived stress, and even long-term cardiovascular outcomes, including heart failure risk.
Researchers at the University of Surrey developed an AI that predicts what a person’s knee X-ray will look like in a year, helping track osteoarthritis progression. The tool provides both a visual forecast and a risk score, offering doctors and patients a clearer understanding of the disease. Faster and more interpretable than earlier systems, it could soon expand to predict other conditions like lung or heart disease.
Scientists at Mount Sinai have created an artificial intelligence system that can predict how likely rare genetic mutations are to actually cause disease. By combining machine learning with millions of electronic health records and routine lab tests like cholesterol or kidney function, the system produces "ML penetrance" scores that place genetic risk on a spectrum rather than a simple yes/no. Some variants once thought dangerous showed little real-world impact, while others previously labeled uncertain revealed strong disease links.