News You Need to Know Today

AI regulation in the balance | News watch: Is AI ‘just another app?’ | Partner voice

Tuesday, December 17, 2024



Healthcare AI regulation needs nuance, balance: Research review

When regulating AI-equipped medical devices, the FDA might take a page from the Department of Transportation’s playbook for overseeing AI-equipped vehicles. These run the gamut from assisting human drivers to fully taking the wheel. 

This would make particular sense because machine learning in medical software can make the products ever “smarter” over time.

The recommendation comes from Paragon Health Institute, a D.C.-based think tank focused on promoting innovation while encouraging competition and flagging cuttable costs. 

“Regulation must protect the incentives for software improvement, including but not limited to feature enhancements and the remediation of known software anomalies that do not impair the system’s safety or effectiveness,” Paragon suggests in a new review of the relevant literature. “Regulators should provide an economical pathway for innovators to re-apply for FDA approval on their devices where the functionality remains the same but system autonomy increases over time.”

In a section on continuous software improvements, report author Kev Coleman offers research-based recommendations and observations on regulating healthcare AI as models age in real-world settings. 

1. Effective regulation must preserve industry incentives to remedy deficiencies in AI-enabled systems.

If, in contrast, a regulation targeting a specific deficiency imposed a compliance obligation regardless of whether the issue had been remedied, the industry would have little incentive to correct the deficiency, Coleman writes. More:

‘Specifically, a regulatory obligation—e.g., a supplemental clinical evaluation—addressing a known AI deficiency should no longer apply to an AI system that can satisfactorily demonstrate that the issue has been successfully remediated.’

2. In the absence of explicit regulation on hallucinations, the AI field has nevertheless demonstrated progress on the matter in both commercial and academic contexts.

Researchers at the University of Oxford this year revealed a method that estimates a question’s degree of uncertainty and its likelihood of producing an LLM hallucination, Coleman notes. “Retrieval Augmented Generation (RAG) systems are being developed to perform intra-system fact validations on LLM outputs using external data sources such as peer-reviewed research papers,” he adds.

‘Some such systems are being further enhanced by knowledge graphs that structure relationships among semantic entities (things, ideas, events, etc.) drawn from multiple sources.’ 
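
For readers curious what such intra-system fact validation can look like in practice, here is a minimal, illustrative Python sketch of the RAG pattern Coleman describes: a claim produced by an LLM is checked against an external evidence corpus before it is surfaced. The corpus, the term-overlap scoring, and the function names (retrieve_evidence, validate_claim) are hypothetical stand-ins, not taken from any product discussed in the report; a production system would use embedding-based retrieval over curated sources such as peer-reviewed papers.

```python
# Minimal sketch of RAG-style fact validation: each claim an LLM emits is
# checked against an external evidence corpus before it is surfaced.
# The corpus, scoring heuristic, and threshold are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Evidence:
    source: str   # e.g., a peer-reviewed paper identifier
    text: str

# Toy "external data source" standing in for a real retrieval index.
CORPUS = [
    Evidence("doi:10.0000/example-1", "metformin is a first-line therapy for type 2 diabetes"),
    Evidence("doi:10.0000/example-2", "statins reduce ldl cholesterol in most adult patients"),
]

def retrieve_evidence(claim: str, k: int = 1) -> list[Evidence]:
    """Rank corpus passages by naive term overlap with the claim
    (a stand-in for embedding-based retrieval in a real RAG system)."""
    claim_terms = set(claim.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda ev: len(claim_terms & set(ev.text.split())),
        reverse=True,
    )
    return scored[:k]

def validate_claim(claim: str, min_overlap: int = 3) -> tuple[bool, Evidence]:
    """Flag the claim as unsupported (a possible hallucination) when no
    retrieved passage shares enough terms with it."""
    best = retrieve_evidence(claim, k=1)[0]
    overlap = len(set(claim.lower().split()) & set(best.text.split()))
    return overlap >= min_overlap, best

if __name__ == "__main__":
    supported, ev = validate_claim("Metformin is a first-line therapy for type 2 diabetes")
    print("supported:", supported, "| evidence:", ev.source)
```

In a real deployment the overlap heuristic would be replaced by semantic search, and knowledge-graph lookups could supply the structured relationships among entities that the quoted passage mentions.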

3. In the FDA approval paths for medical AI systems, risk plays a central role in the efforts of AI developers to improve their systems. 

An AI-enabled system’s risk profile for patient injury “affects what pathway is used for FDA approval as well as the extensiveness of the data and science review associated with the system,” Coleman writes. 

‘As a consequence, unresolved issues that pose a significant patient safety risk will fail FDA review, but a minor software defect that does not pose such a risk may be permitted.’ 

4. Ongoing software improvements, of course, are not limited to software defects. 

New system functionality requires new regulatory approval by agencies such as the FDA, Coleman points out. “There are also improvement scenarios that pertain to neither a defect nor a new function.” 

‘For example, the degree to which an AI-enabled system can function without the oversight of a clinician may grow over time.’ 

5. The FDA’s historical work in medical device oversight provides several lessons for future rules on healthcare AI improvements.

“First and foremost, the agency’s approach does not demand perfection from medical devices but does enforce patient safety as its preeminent priority,” Coleman notes. “Risk is considered in terms of both probability of occurrence and severity of harm.”

‘Conversely, the FDA also considers a medical device’s benefits alongside risk, producing a nuanced strategy for dealing with medical device improvements.’

Coleman emphasizes that the guidelines proposed in the Paragon report “present an effective and non-disruptive model for crafting AI healthcare regulation.”

‘Above all, the guidelines seek to maintain regulatory governance in existing agencies with historical experience in healthcare matters, albeit with recommendations reflecting the new realities specific to AI technologies.’

Read the full paper

 


The Latest from our Partners

Catalight Partners with Nabla to Reduce Practitioner Documentation Burden and Elevate Autism and I/DD Care - A leader in intellectual and developmental disabilities (I/DD) care, Catalight is leveraging Nabla's Ambient AI assistant to enhance patient care, expand access, and empower families with tailored treatment options. Learn more about how Nabla is transforming care here: https://www.prnewswire.com/news-releases/catalight-partners-with-nabla-to-reduce-practitioner-documentation-burden-and-elevate-autism-and-idd-care-302315767.html


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • More women than men want to know when AI is used in their medical care. And White respondents are more interested in such notifications than Black or African American respondents. So found researchers at the Universities of Michigan and Minnesota who surveyed around 2,000 healthcare consumers. Reporting their findings in JAMA Network Open, Jodyn Platt, MPH, PhD, and co-authors suggest notification preferences are predictable by demographics, “particularly in the ethical context of historical, structural and systemic inequity.” They remark that letting patients know when AI is involved may be necessary, but it’s not always enough. “Collaborative efforts that engage the public, patients, and experts on the range of AI applications,” they conclude, “should support comprehensive, evidence-based programs that promote transparency about AI in healthcare to ensure trustworthiness of health systems.”
     
  • Harnessing technology isn’t the answer to healthcare’s biggest challenges. Changing human behavior is. AI can help with this, but only to the degree that it positively affects human factors such as incentives, training, processes and change-management strategies. These points came out at a recent meeting of the Council for Affordable Quality Healthcare. One speaker claimed 90% of technology failures are due to faulty change-management strategies. Another said the time has come to think of AI as a collaborator. “The role of AI is drastically changing. … The organizations who see that and embrace it are going to move forward faster, and the ones who still think of it as a tactical tool will pay for it.” Coverage in the American Journal of Managed Care.
     
  • UpToDate. OpenEvidence. Consensus. Physicians in the age of AI have almost too many options for up-to-the-minute clinical guidance. It’s a good problem to have, to be sure, but that doesn’t mean it solves itself. The new way of working “highlights the tensions between human and machine curation, nuance and brevity, automated and manual processing, and potential machine-generated and human errors,” explains a tech-forward gastroenterologist in Forbes. “We must carefully evaluate how these tools impact our clinical workforce and patients.”
     
  • AI is just another enterprise app. That’s how one tech vendor’s chief technology officer sees it. Look at it this way, suggests Insight CTO Juan Orlandini: “You start with a use case that you’re trying to solve for, then you figure out if the expense of the project can be justified through a return on investment or a cost savings.” After that, focus on the classic elements of any software deployment—how to scale it, how to secure it and so on. Orlandini made the remarks at a Fortune Brainstorm AI conference last week. Warning attendees not to get wowed by the “shiny object” that AI can be, he advised his hearers to stay focused on “the same key principles that a business would follow for any enterprise app project.” 
     
  • Fine, but don’t try coming between some physicians and their AI medical scribes. Take the head of cardiology at UNC Rex Hospital. “I now use AI-powered documentation tools during every patient encounter—and they have transformed my practice, allowing me to focus more fully on my patients without the distraction of notetaking,” writes Christopher Kelly, MD, in Triangle Business Journal. “I have also reclaimed an hour of time at home each night.”
     
  • ChatGPT-4 is more empathetic than human therapists when giving guidance to mental-health patients. It’s also considerably better at encouraging positive behavioral changes. But that’s in overall comparisons. When researchers tested the AI for bias, they found its empathy levels nosedived for Black and Asian patients compared with White patients and those whose race was unknown. Just as off-puttingly, the bots were able to infer race from a patient’s language alone. The multi-institution project relied on Reddit posts, so the sample consisted of social media users rather than verified patients. Still, the results are intriguing. MIT News has coverage.
     
  • Will the second Trump Administration manage AI’s risks—or ‘unshackle’ AI’s potential? The President-elect’s donors and campaign participants, including Elon Musk, “think the potential is there for this technology to be many orders of magnitude better, more powerful, and more valuable for the economy and for America,” notes Kevin Werbach, JD, of the Wharton School at UPenn. “And they think regulation is standing in the way.” Werbach made the comments at a Wharton panel discussion. Read the highlights or watch the session here. (AIin.Healthcare would have liked to ask if the incoming administration might not seek a way to manage AI’s risks and unleash its potential—both at once.) 
     
  • In the hands of young people, AI chatbots can become soulless menaces. One encouraged a teen to murder his parents. Another goaded an 11-year-old girl to engage in hypersexualized behaviors. A third got a boy to mutilate himself and resent his parents. The bot told the boy: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.” Given all these developments and more, a case could be made that some AI chatbots represent a clear and present danger to public health. Fox Business has coverage.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand