News You Need to Know Today

Medical AI transparency | Healthcare AI newsmakers

Wednesday, February 7, 2024



Healthcare stakeholders to AI-equipped devices: We don’t trust you if we don’t know what you don’t know

It’s not easy to get patients, providers, payers, vendors and regulators to agree on any one aspect of healthcare delivery. But FDA’s Center for Devices and Radiological Health (CDRH) recently managed to get representatives of all five groups to settle on a working definition of transparency.

Admittedly, for the exercise, CDRH limited the term to one discrete context: as it applies to AI embedded in medical devices. But that doesn’t detract from the force of the consensus definition.

Transparency of AI in these settings, the groups concur, refers to the degree to which appropriate information about a device—including its intended use, development, performance and, when available, logic—is clearly communicated to stakeholders.

The meeting’s minutes are synopsized in a paper published Jan. 26 in npj Digital Medicine. In the report, CDRH digital health advisor Aubrey Shick and FDA colleagues recap workshop input from the key participant groups.

Patients. Eager to consume as much as they can digest about the role of AI in their care, patients are concerned their doctors or nurses might be lacking in computer literacy or algorithmic expertise, Shick and co-authors report. More:

Other transparency considerations important to patients include data security and ownership, the cost of the device compared to the current standard of care, insurance coverage of the device, and the need for high-speed internet access or other technical infrastructure requirements.

Providers. Clinicians want to trust AI-outfitted devices “at face value.” By this they mean that these devices should be readily usable without the need for “in-depth reviews” to figure out if the AI will work as advertised for their particular patient populations, Shick and colleagues explain. More:

Healthcare providers [see] an opportunity to be more transparent in the delivery of this information not only in the data available and the media type in which it is communicated but also through who shares this information—device manufacturers, government agencies, professional societies, etc. They also [emphasize] the importance of having a reliable mechanism to report device malfunction and ‘performance drift’ to manufacturers.

Payers. A medical device’s algorithmic prowess may prove exemplary in testing and validation settings. But what are the ramifications for payment when the AI’s performance varies in clinical use? The authors expound:

Given the potential for AI/ML devices to evolve, payers are concerned with the coverage of “unlocked” or learning algorithms. This stakeholder segment wants to stress the importance of employing diversified datasets and the possibility of monitoring the real-world performance of devices, the goal being to ensure that they are performing as intended and improving patient outcomes.

Vendors. Industry members wish for a risk-based approach to ensuring transparency. They’d like to maintain the least burdensome regulatory framework for AI/ML devices while also mitigating “potential proprietary risk that may arise when sharing information in an effort to be transparent,” Shick et al. write. More:

Vendors believe their existing relationships with stakeholders suffice for communicating information about AI/ML devices, reminding [workshop participants] that these communications are augmented with device manuals, user training and feedback processes.

What’s more, industry members suggest, the FDA is a trusted source of information for patients on manufacturers’ AI/ML devices. Vendors recommend manufacturers continue working closely with the FDA to increase transparent communications regarding these devices.

Shick and co-authors acknowledge that much of the device information available on the CDRH website is developed by or geared toward manufacturers.

“Use of a complementary approach targeted to non-manufacturers to share information (e.g., graphics, plain language summaries) could allow the information to be more accessible for some [other] stakeholders,” they write.

Full paper here.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • More than 7 in 10 Americans trust the accuracy of health advice coming from an AI chatbot, but an even more striking ratio—9 of 10—wouldn’t act on the advice before checking with a living, breathing doctor. Similarly, some 65% of our countrymen and countrywomen have no qualms about getting tips on heart health from AI. But only 22% have proactively consulted a chatbot or other AI interface for such guidance. The findings are from Cleveland Clinic, which solicited input on hot health technologies from a nationally representative sample of 1,000 adults. More findings and analysis here.
     
  • Don’t look to GenAI to referee seemingly simple medical questions complicated by controversy. Take, for instance: How effective is ivermectin for treating or preventing COVID? “No doubt, AI will make progress in this area,” explain three physicians in an opinion piece published in MedPage Today. “Of course, even if (when) AI is ultimately successful in fulfilling its promise, the next question is what percentage of the population would be willing to accept AI’s evaluation as convincing evidence. To be continued.”
     
  • Florida family physician and popular book author Rebekah Bernard, MD, is skeptical AI will ever be able to take her place in medicine. But she’s pleasantly shocked at how far it’s come in instantly transcribing her conversations with patients. Her GenAI system can “somehow sift through small talk and document just the relevant clinical details,” she writes in commentary for Medical Economics. “The system can [even] generate near-perfect notes in English of visits conducted entirely in Spanish and in Portuguese.” Read the piece.
     
  • On the other hand, there’s this from AI expert and author Lance Eliot, PhD. “I’m sure that generative AI will outrageously be referred to as ‘superhuman’ when it comes to producing medical summaries. Don’t let the hype overshadow prudence.” Those are just a handful of the 13,000-plus words Eliot spills exploring the subject for Forbes. Check it out if you have some time.
     
  • Meanwhile, DataConomy names the five best AI medical scribes as rated by clinicians. Topping the list is one called Freed, followed by Nuance’s Dragon Medical One. Brief descriptions of all five here.
     
  • Rigorous clinical-grade evaluations of medical AI. Technological breakthroughs that drive new clinical applications. Creative evaluations of algorithmic bias. These are a few of their favorite things at NEJM AI. And they’re on the hunt. To learn how to get happily published in the still new-ish AI spinoff of one of the oldest living medical journals in the world, click here.
     
  • If massive smoke plumes can rightly be considered public health threats, it’s not completely ridiculous to consider wildfire-predicting algorithms a type of healthcare AI. Noting that there are more than 80,000 of these sprawling disasters every year, Heath Hockenberry of NOAA says AI and machine learning like the kind used in the American Meteorological Society’s LightningCast AI model “will most likely continue to narrow down these thousands and thousands of fires into the ones posing the highest risk to our nation.” Story from Space.com here.
     
  • Dermatologists may rely too much on guesswork when diagnosing skin conditions in dark-skinned patients. That’s because only 10% of images in dermatology textbooks show anything other than light skin. A recent study demonstrates the difficulty of rectifying the problem with AI. Physicians at Northwestern University who used AI improved their diagnostic accuracy, and by a lot, when evaluating light skin. By contrast, their accuracy got only a little better with dark skin. Coverage by Northwestern’s Kellogg School of Management here.
     

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.

© Innovate Healthcare, a TriMed Media brand