News You Need to Know Today

‘Maude’ analysis uncovers problematic AI patterns | Sam Altman, Healthcare.com, Insilico, more AI newsmakers

Thursday, May 4, 2023

In cooperation with Northwestern and Nabla


Risk points revealed in US database of AI-powered medical devices

Four of every five safety events involving medical devices outfitted with AI may reflect incorrect or insufficient data in algorithm inputs.

And while 93% of events involve “device problems,” which could arise even absent an AI component, the remaining 7% are “use problems,” which may implicate the AI’s operation more directly.

Notably, use problems are four times more likely than device problems to bring about real patient harm.

These are among the findings and conclusions of researchers who analyzed 266 U.S. safety events reported to the FDA’s Manufacturer and User Facility Device Experience (“Maude”) program between 2015 and 2021. The study, conducted at Macquarie University in Australia, is running in the Journal of the American Medical Informatics Association (JAMIA).

More from the study:

  • Keep an eye out for AI equipment users failing to properly enter data. Front-end stumbles are hard to head off and can produce poor or confusing algorithmic outputs.
     
  • While 16% of the 266 AI device-based safety events in the Maude study set led to patient harm, many more, fully two-thirds (66%), had hazardous potential. Another 9% had consequences for healthcare delivery, 3% had no harm or consequences and 2% were considered complaints.
     
  • A slim but non-negligible 4% were categorized as near misses that probably would have led to harm if not for human intervention.
     
  • The Aussie study may be the first systematic analysis of machine-learning safety problems captured as part of the FDA’s routine post-market surveillance. “Much of what [had been] known about machine-learning safety comes from case studies and the theoretical limitations of machine learning,” the authors point out.
     
  • The findings highlight the need for a whole-of-system approach to safe implementation “with a special focus on how users interact with devices.” So conclude senior study author Farah Magrabi, PhD, and colleagues. Safety problems with machine-learning devices “involve more than algorithms,” they emphasize.  

The study’s lead author, David Lyell, PhD, tells the Sydney Morning Herald:

“AI isn’t the answer; it’s part of a system that needs to support the provision of healthcare. And we do need to make sure we have the systems in place that [support] its effective use to promote healthcare for people.”

Journal study here, newspaper coverage here.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Americans trust healthcare AI less than they trust wearables and telehealth. Still, their general trust in healthcare AI outweighs their outright distrust by a wide margin, 45% to 15%. These and other interesting findings emerged from a Healthcare.com survey of 1,039 adults. The project gauged consumer trust in medical technology, and the health-insurance shopping company has posted some key findings.
     
  • When existing patients talk, prospective patients listen. Healthcare providers challenged to keep up with online patient reviews have an AI ghostwriter awaiting assignments. The software, marketed by Weave of Lehi, Utah, lets provider staff bang out a first-draft response with a single click, then edit before posting. In announcing the product’s launch, Weave’s CTO says more than half of patients browse online physician reviews before scheduling appointments, yet fewer than half of providers ask patients to complete a review.
     
  • Is ‘cybersickness’ a bona fide medical condition? Regardless, researchers are working to crack its causes and recommend remedies. At the University of Missouri, a team is using explainable AI to learn how people in augmented and virtual reality develop the malady, which is akin to motion sickness. And at the University of Waterloo, a group wants to understand why some people get nauseated playing VR games while others don’t.
     
  • Teens conduct consequential AI-powered medical research. Three high-school students are co-authors of a sophisticated study on the use of AI to identify therapeutic targets for malignant brain tumors. Hailing from Norway, China and the U.S., the students interned with the AI drug-discovery company Insilico Medicine of the United Arab Emirates. It’s a cool story. Insilico publicizes it here, and the journal Aging has posted the study in full for free.
     
  • Clinical–business partnership puts cancer in the crosshairs. The University of Texas MD Anderson Cancer Center is tapping Generate:Biomedicines of Somerville, Mass., for help developing therapeutics to fight advanced cancers in the lungs and elsewhere. Full announcement here.
     
  • When assisted by image-interpretation AI, radiologists of all experience levels are susceptible to “automation bias.” Another term for the phenomenon is “mindless acceptance.” The proclivity is documented in a study of mammography readers published in Radiology and covered by Health Imaging.
     
  • ChatGPT mastermind Sam Altman would like to see a global agency regulating AI. Topping his wish list is something along the lines of the International Atomic Energy Agency. “You know, something that has real international power by treaty and that gets to inspect the labs, set regulations and make sure we have a cohesive global strategy. That’d be a great start.” The quote is from a long interview conducted by Bari Weiss of the Free Press. Read the whole thing.

The Latest from Our Partners

  • Digital Magazine: This is Enterprise Imaging - In this digital magazine we cover how moving from multiple PACS to a single enterprise imaging system is busting siloes and deepening integration; the challenges in radiology imaging and how radiologists are getting more done, better and faster, with enterprise imaging; why skyrocketing image volume and the growing need for collaboration across multiple, geographically diverse sites have made image management far more complex, and how the cloud offers a solution; and the latest addition to the Sectra Enterprise Imaging portfolio, ophthalmology, and why it is a game-changer for ophthalmologists.
  • Beyond the impression: How AI-driven clinical intelligence transforms the radiology experience - In this session, Nuance CMIO Sheela Agarwal, MD, and Senior Product Manager Luanne D’Antoni explore innovations in radiology report creation and the role of automated impression generation.

  • AI quality assurance models saving lives and millions in avoided med-mal - Unrecognized imaging findings are an unfortunate, but undeniable, part of radiology. New advancements in artificial intelligence (AI) and machine learning offer a critical safety net that is improving care and saving lives — as well as avoiding millions of dollars in potential medical malpractice costs.


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand