News You Need to Know Today

AI regulation workshop | AI reporter’s notebook | Partner news

Thursday, July 18, 2024


Stanford Institute for Human-Centered AI

Workshop consensus: Fixing healthcare AI regulation will take more than tweaks and patches

Querying 55 thought leaders behind closed doors, the Stanford Institute for Human-Centered AI, aka “Stanford HAI,” found that only 12% believe healthcare AI should always have a human in the loop.

A strong majority, 58%, told organizers human oversight is unneeded as long as safeguards are in place. The third slice, a sizable 31%, staked out the middle ground, supporting human supervision “most of the time, with few exceptions.”

HAI’s healthcare AI policy steering committee gathered the select group for a Chatham House Rule workshop in May. Among the 55 were leading policymakers, scientists, healthcare providers, ethicists, AI developers and patient advocates. The organizers’ aim was to identify pressing AI policy gaps and rally support for changes in AI regulation.

Participants hashed out deficiencies in federal healthcare AI policy—and talked through workable remedies—involving three key use cases. Here are excerpts from the workshop report, which is lead-authored by Caroline Meinhardt, Stanford HAI’s policy research manager.  

Use Case 1: AI in software as a medical device

Workshop participants proposed new policy approaches to help streamline market approval for these multifunctional software systems while still ensuring clinical safety, Meinhardt and co-authors report.

“First, public-private partnerships will be crucial to managing the evidentiary burden of such approval, with a potential focus on advancing post-market surveillance,” they write. “Second, participants supported better information sharing during the device clearance process.” More:

‘Although close to 900 medical devices that incorporate AI or machine learning software have been cleared by the FDA, clinical adoption has been slow as healthcare organizations have limited information on which to base purchasing decisions.’

Use Case 2: AI in enterprise clinical operations and administration

Some participants argued for human oversight to ensure safety and reliability, while others warned that human-in-the-loop requirements could increase the administrative burden on doctors and make them feel less accountable for resulting clinical decisions, HAI leadership recalls.

“Some identified laboratory testing as a successful hybrid model, where a device is overseen by a physician and undergoes regular quality checks,” the authors add. “Any out-of-range values are checked by a human.” More:

‘Should patients be told when AI is being used in any stage of their treatment, and, if so, how and when? … [M]any participants felt that, in some circumstances, such as an email message that purports to come from a healthcare provider, the patient should be informed that AI played a role.’

Use Case 3: Patient-facing AI applications

An increasing number of patient-facing applications, such as mental health chatbots based on LLMs, promise to democratize healthcare access or to offer new services to patients through mobile devices, Meinhardt and colleagues write.  

“And yet,” they note, “no targeted guardrails have been put in place to ensure these patient-facing, LLM-powered applications are not giving out harmful or misleading medical information—even or especially when the chatbots claim they do not offer medical advice, despite sharing information in a manner that closely resembles medical advice.” More:

‘Clarification of the regulatory status of these patient-facing products is urgently needed. … The needs and viewpoints of entire patient populations must be considered to ensure regulatory frameworks address health disparities caused or exacerbated by AI.’

Also of interest from the workshop are results from two more flash surveys conducted that day.

  • A majority of participants, 56%, said healthcare AI applications should be governed like medical professionals, with accredited training programs, licensure exams and the like. But nearly as many, 44%, said the models should be governed like medical devices, with premarket clearance, postmarket surveillance and so on.
     
  • More than half the field, 56%, said effective governance of healthcare AI will require substantial changes to existing regulations. More than a third, 37%, said a novel regulatory framework could work. Only a relative sliver, 8%, said minor adjustments to existing regulations would suffice.

One attendee offered a colorful image of just how outdated current frameworks for healthcare AI governance have become. Navigating the regulatory landscape, the participant said, is like “driving a 1976 Chevy Impala on 2024 roads.”

To this Meinhardt and co-authors add:

‘The traditional regulatory paradigms in healthcare urgently need to adapt to a world of rapid AI development.’

Read the whole thing.

 


The Latest from our Partners

Andrew Lundquist, Clinical Director at Nabla, discusses enhancing patient care and giving clinicians more time on the Digital Thoughts podcast. He covers daily clinician challenges, ambient AI for clinical documentation, evaluating startups, and the role of AI in healthcare. Listen to the full episode here.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Mayo Clinic leads the league in AI readiness. Ranking private-sector health systems by their progress in AI innovation and execution, CB Insights places the Minnesota-based high achiever ahead of runners-up Intermountain Health, Cleveland Clinic, Kaiser Permanente and 20 others. The depth of Mayo’s innovative character shows in its patent activity, CB Insights remarks, noting that Mayo has filed more than 50 patents in numerous clinical areas. “It has also invested in AI-enabled companies addressing a range of use cases in healthcare, from clinical documentation to surgical intelligence,” the investment intel company points out.
     
  • UnitedHealth Group buffers cyberattack pain with AI promise. The company is still paying out mountainous sums for the malicious breach of its Change Healthcare unit. But it’s betting its commitment to technological innovation will drive business growth over the next few years. “Our growing AI portfolio made up of practical use cases will generate billions of dollars of efficiencies over the next few years,” UHG chief executive Andrew Witty told investors in a July 16 earnings call. The strategy is already “allowing us to do things much more quickly and reliably than humans,” he added, “finding answers within complex datasets.”
     
  • Google has no desire to deliver healthcare. But it’s very interested in equipping healthcare providers with digital tools they can use to sustain an ecosystem of partnerships, platforms and prevention. That’s the pledge of Karen DeSalvo, the tech giant’s chief health officer. Taking questions from a reporter at last month’s HLTH Europe 2024 conference in Amsterdam, DeSalvo agreed that technology is blending with discovery to drive medical advancements. This synergy is “palpable in the air,” she said, according to coverage by Medscape. Generative AI and other technologies, DeSalvo added, can help “democratize access to healthcare for people all over the world.”
     
  • Healthcare GenAI is OK as far as it goes. But it doesn’t go very far toward helping California solve its healthcare problems. That’s the opinion of Jennifer McLelland, disability-rights columnist for the California Health Report. When she posed a deliberately dippy healthcare question to three popular chatbots—ChatGPT, Google Gemini and Meta Llama 3—all brought back potentially dangerous answers. More serious questions yielded more useful answers, but even these missed the mark too often for McLelland’s liking. What the Golden State really needs, she maintains, is to “increase Medi-Cal payment rates so that we can recruit more doctors, social workers and other providers.”
     
  • Healthcare AI has brought ‘a lot of profound changes’ to healthcare jobs. And the changes are of a kind that no one could have seen coming just a few years ago. “You can imagine people would put up a lot of barriers” against such transformation, Jordan Dale, MD, chief medical information officer for Houston Methodist Hospital, tells the Houston Chronicle. “But I think [we do] a really good job of bringing people into this change and these technologies, making sure they have an understanding. That’s part of the process.”
     
  • The European Union has published the final text of its AI Act. Publication took place July 12, setting the law’s entry into force for Aug. 1. The global law firm Hogan Lovells has posted informal guidance for medical device manufacturers. The authors suggest affected companies update internal procedures and technical documentation, make sure they have the right personnel on the payroll and use AI Act-compatible datasets.
     
  • How healthy is Amazon’s appetite for AI market power? The Federal Trade Commission wants to know. Right now the FTC is specifically interested in the company’s pending move to hire top talent from AI startup Adept, which trains large language models to perform general tasks for enterprise clients. The request reflects the agency’s growing concern about how AI deals have been put together and “follows a broader review of partnerships between Big Tech and prominent AI startups,” Reuters notes, adding that such inquiries do not necessarily result in an official investigation or enforcement action.
     
  • ChatGPT is funnier than 63% to 87% of humans trying to make people laugh. No, really. Researchers have actually quantified the competition. In one test, conducted in the psychology department at the University of Southern California, the bot went head-to-head with writers from The Onion at writing satirical headlines. The score was about even, but the entry judged the best by reviewers came from the AI: “Local Man Discovers New Emotion, Still Can’t Describe It Properly.”
     
  • Recent research in the news:
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand