Workshop consensus: Fixing healthcare AI regulation will take more than tweaks and patches

Querying 55 thought leaders behind closed doors, the Stanford Institute for Human-Centered AI (aka "Stanford HAI") found that only 12% believe healthcare AI should always have a human in the loop.

A strong majority, 58%, told organizers that human oversight is unneeded as long as safeguards are in place. A third group, 31%, staked out the middle ground, supporting human supervision "most of the time, with few exceptions."

HAI’s healthcare AI policy steering committee gathered the select group for a Chatham House Rule workshop in May. Among the 55 were leading policymakers, scientists, healthcare providers, ethicists, AI developers and patient advocates. The organizers’ aim was to identify pressing AI policy gaps and rally support for changes in AI regulation.

Participants hashed out deficiencies in federal healthcare AI policy—and talked through workable remedies—involving three key use cases. Here are excerpts from the workshop report, which is lead-authored by Caroline Meinhardt, Stanford HAI’s policy research manager.  

Use Case 1: AI in software as a medical device

Workshop participants proposed new policy approaches to help streamline market approval for these multifunctional software systems while still ensuring clinical safety, Meinhardt and co-authors report.

“First, public-private partnerships will be crucial to managing the evidentiary burden of such approval, with a potential focus on advancing post-market surveillance,” they write. “Second, participants supported better information sharing during the device clearance process.” More:

‘Although close to 900 medical devices that incorporate AI or machine learning software have been cleared by the FDA, clinical adoption has been slow as healthcare organizations have limited information on which to base purchasing decisions.’

Use Case 2: AI in enterprise clinical operations and administration

Some participants argued for human oversight to ensure safety and reliability, while others warned that human-in-the-loop requirements could increase the administrative burden on doctors and make them feel less accountable for resulting clinical decisions, HAI leadership recalls.

“Some identified laboratory testing as a successful hybrid model, where a device is overseen by a physician and undergoes regular quality checks,” the authors add. “Any out-of-range values are checked by a human.” More:

‘Should patients be told when AI is being used in any stage of their treatment, and, if so, how and when? … [M]any participants felt that, in some circumstances, such as an email message that purports to come from a healthcare provider, the patient should be informed that AI played a role.’

Use Case 3: Patient-facing AI applications

An increasing number of patient-facing applications, such as mental health chatbots based on LLMs, promise to democratize healthcare access or to offer new services to patients through mobile devices, Meinhardt and colleagues write.  

“And yet,” they note, “no targeted guardrails have been put in place to ensure these patient-facing, LLM-powered applications are not giving out harmful or misleading medical information—even or especially when the chatbots claim they do not offer medical advice, despite sharing information in a manner that closely resembles medical advice.” More:

‘Clarification of the regulatory status of these patient-facing products is urgently needed. … The needs and viewpoints of entire patient populations must be considered to ensure regulatory frameworks address health disparities caused or exacerbated by AI.’

Also of interest from the workshop are results from two more flash surveys conducted that day.

  • A majority of participants, 56%, said healthcare AI applications should be governed like medical professionals, with accredited training programs, licensure exams and the like. Nearly as many, 44%, said the models should be governed like medical devices, with premarket clearance, postmarket surveillance and so on.
     
  • More than half the field, 56%, said effective governance of healthcare AI will require substantial changes to existing regulations. More than a third, 37%, said a novel regulatory framework could work. Only a relative sliver, 8%, said minor adjustments to existing regulations would suffice.

One attendee offered a colorful image of just how outdated current frameworks for healthcare AI governance have become. Navigating the regulatory landscape, the participant said, is like "driving a 1976 Chevy Impala on 2024 roads."

To this Meinhardt and co-authors add:

‘The traditional regulatory paradigms in healthcare urgently need to adapt to a world of rapid AI development.’

Read the whole thing.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
