Risk points revealed in US database of AI-powered medical devices
Four of every five safety events involving medical devices outfitted with AI may reflect incorrect or insufficient data in algorithm inputs.
And while 93% of the events involved “device problems,” which could arise even without an AI component, the remaining 7% were “use problems,” which may implicate the AI’s operation more directly.
Notably, use problems were four times more likely than device problems to result in actual patient harm.
These are among the findings and conclusions of researchers at Macquarie University in Australia who analyzed 266 U.S. safety events reported to the FDA’s Manufacturer and User Facility Device Experience (MAUDE) database between 2015 and 2021. The study is running in the Journal of the American Medical Informatics Association (JAMIA).
More from the study:
- Keep an eye out for users of AI-equipped devices failing to enter data properly. Such front-end stumbles are hard to head off and can produce poor or confusing algorithmic outputs.
- While 16% of the 266 AI device-related safety events in the MAUDE study set led to actual patient harm, far more, some two-thirds (66%), carried the potential for harm. Another 9% had consequences for healthcare delivery, 3% caused no harm or consequences, and 2% were classified as complaints.
- A slim but non-negligible 4% were categorized as near misses that probably would have led to harm if not for human intervention.
- The Aussie study may be the first systematic analysis of machine-learning safety problems captured through the FDA’s routine post-market surveillance. “Much of what [is] known about machine-learning safety comes from case studies and the theoretical limitations of machine learning,” the authors point out.
- The findings highlight the need for a whole-of-system approach to safe implementation “with a special focus on how users interact with devices.” So conclude senior study author Farah Magrabi, PhD, and colleagues. Safety problems with machine-learning devices “involve more than algorithms,” they emphasize.
The study’s lead author, David Lyell, PhD, tells the Sydney Morning Herald:
“AI isn’t the answer; it’s part of a system that needs to support the provision of healthcare. And we do need to make sure we have the systems in place that [support] its effective use to promote healthcare for people.”