AI vs. medication errors, Health Tech Investment Act preps, employment apocalypse fears, more
Buzzworthy developments of the past few days.
- Remember when comprehensive checklists and human vigilance were going to end medical error? That hasn’t happened yet. The WHO estimates U.S. medication mistakes alone injure 1.3 million patients a year and kill one patient per day. Now comes AI to step in and halt hospital-based drug mix-ups before they happen. One application, designed at UW Medicine in Washington State, uses an algorithm-equipped camera mounted on a headset. Worn by healthcare professionals such as nurse anesthetists, the device scans vials and syringes to make sure surgery patients get the right dose of the right drug at the right time. In a proof-of-concept trial late last year, the gear detected impending errors with 99.6% accuracy. It’s now under FDA review for potential clearance to market nationwide.
- Reporting on the development, NBC News looks at the promise and perils of the technology. For the bright side, the network spoke with the technology’s main designer and key champion, UW anesthesiologist Kelly Michaelson, MD, PhD. “I’m leaning toward auditory feedback [to convey the alerts] because a lot of the headsets like GoPro or Google Glasses have built-in microphones,” Michaelson says. “Just a little warning message that makes sure people stop for a second and make sure they’re doing what they think they’re doing.”
- Weighing in with concerns is Nicholas Cordella, MD, a Boston University expert in care quality and patient safety. “There’s a potential slippery slope here,” Cordella says. “If this technology proves successful for medication error detection, there could be pressure to expand it to monitor other aspects of clinician behavior, raising ethical questions about the boundary between a supportive safety tool and intrusive workplace monitoring.”
- Read the whole thing and check out the pics of the camera-bearing headgear.
- Excitement continues to build over the Health Tech Investment Act. That’s the bill introduced last month in the Senate by a Democrat and a Republican (Martin Heinrich and Mike Rounds, co-chairs of the Senate’s AI caucus). If it passes muster there and then in the House, its chances of getting signed into law by President Trump are very good. After all, he’s all about AI innovation and not keen on anything that would dampen it. And “HITA” is clearly written to out-and-out encourage AI utilization in healthcare. One of the ways it would do this is by allowing providers to bill for each AI-aided service. No longer would they have to eat some of the cost via bundled reimbursements. Regulation watchers at the big law firm Morgan Lewis, to name one HITA-cheering outfit, are licking their chops. That’s because the firm offers consulting services to clients looking to benefit from pro-business regulatory changes. “The HITA, if enacted, would modernize Medicare’s reimbursement model to not only recognize the value of AI and algorithm-based services as mainstream tools for delivering timely, effective care but also help ensure that technological innovations reach patients,” the authors of a May 23 post explain. “If enacted, the legislation could accelerate the adoption of life-changing technologies while ensuring they remain accessible, clinically valuable and responsibly funded.”
- ‘Cancer is cured, the economy grows at 10% a year, the budget is balanced—and 20% of people don’t have jobs.’ That’s the bright side/dark side vision of someone who thinks a lot about AI: Dario Amodei. The co-founder and CEO of AI heavy hitter Anthropic, best known for its high-octane large language model Claude, believes AI could erase fully half of all entry-level desk jobs over the next few years. And Claude will have a lot to do with it, Amodei acknowledges with a note of anticipatory remorse. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he tells Axios. “I don’t think this is on people’s radar.” We’ve been warned. Now what do we do?
- Actually, widespread workplace takeovers by AI could be even more cataclysmic than that. In the view of Andrew Pery, the global labor disruption could be like nothing the world has ever seen. An “ethics evangelist” with the AI document processing company Abbyy, Pery doesn’t discount IMF worst-case projections of a 40% reduction in global employment. “Such a dramatic displacement of labor is a recipe for growing social tensions,” Pery tells TechTarget. He worries one outcome will be the shifting of people by the millions “to the margins of society with unsustainable unemployment levels and without the dignity of work that gives us meaning.” Might that signify the start of the civilizational AI apocalypse some voices have been prophesying? London-based TT reporter George Lawton asks the question. Another subject matter expert, Kimberly Nevala of SAS, tells him existential AI risks represent an issue that “will only be addressed through a combination of public literacy and pressure, regulation and law and—history sadly suggests—after a yet-to-be-determined critical threshold of actual harm has occurred.”
- Preparing the Military Health System for the AI age is not just about modernizing. It’s also a matter of national security. That’s the conviction of Jonathan Woodson, MD, president of the DoD’s Uniformed Services University of the Health Sciences in Bethesda, Md. “The successful integration of [emerging digital] technologies within the Military Health System hinges on preparing its workforce,” which by the way numbers more than 133,000. “We must prioritize innovative education, rigorous training and continuous development to ensure our medical personnel are ready to meet the demands of the modern healthcare environment and the future battle space.” Woodson made the remarks in a recent presentation to MHS leaders. Coverage by the Defense Visual Information Distribution Service is here.
- Oncology, cardiology and intensive care. These are among the medical specialties in which machine learning is meaningfully assisting with clinical decision support. AI watcher and blogger Ria Sinha drills into each of the three in a May 22 post. She also enumerates four real-world challenges standing between machine learning and greater CDS deployment. These are inconsistent data quality, questionably generalizable training data, black-box outputs and an unsettled regulatory landscape. Taken together, these issues suggest AI-based CDS mechanisms present users with “a very sociotechnical challenge,” Sinha states. Still, she’s confident U.S. healthcare will figure out how to balance innovation with safety while ensuring broad applicability across our diverse population. “The future of clinical decision support,” she predicts, “will be a hybrid intelligence where humans and algorithms collaborate.”