Healthcare AI regulation needs nuance, balance: Research review

When regulating AI-equipped medical devices, the FDA might take a page from the Department of Transportation’s playbook for overseeing AI-equipped vehicles. These run the gamut from assisting human drivers to fully taking the wheel. 

The approach would make particular sense because machine learning lets medical software grow ever “smarter” over time.

The recommendation comes from Paragon Health Institute, a D.C.-based think tank focused on promoting innovation, encouraging competition and identifying costs that can be cut.

“Regulation must protect the incentives for software improvement, including but not limited to feature enhancements and the remediation of known software anomalies that do not impair the system’s safety or effectiveness,” Paragon suggests in a new review of the relevant literature. “Regulators should provide an economical pathway for innovators to re-apply for FDA approval on their devices where the functionality remains the same but system autonomy increases over time.”

In a section on continuous software improvements, report author Kev Coleman offers research-based recommendations and observations on regulating healthcare AI as models age in real-world settings. 

1. Effective regulation must preserve industry incentives for remedying deficiencies in AI-enabled systems. 

If, in contrast, a regulation targeting a specific deficiency were issued that added a compliance obligation regardless of whether the issue was remedied, the industry would have little incentive to correct the deficiency, Coleman writes. More:  

‘Specifically, a regulatory obligation—e.g., a supplemental clinical evaluation—addressing a known AI deficiency should no longer apply to an AI system that can satisfactorily demonstrate that the issue has been successfully remediated.’

2. In the absence of explicit regulation on hallucinations, the AI field has nevertheless evidenced progress on the matter in both commercial and academic contexts. 

Researchers at the University of Oxford this year revealed a method that estimates a question’s degree of uncertainty and its likelihood of producing an LLM hallucination, Coleman notes. “Retrieval Augmented Generation (RAG) systems are being developed to perform intra-system fact validations on LLM outputs using external data sources such as peer-reviewed research papers,” he adds. (A rough sketch of the uncertainty idea appears below.) 

‘Some such systems are being further enhanced by knowledge graphs that structure relationships among semantic entities (things, ideas, events, etc.) drawn from multiple sources.’ 
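
Neither Coleman nor the Oxford team publishes code in the report, but the uncertainty idea can be sketched in rough terms: sample several answers to the same question, group the samples by meaning, and treat a wide spread of meanings as a warning sign of likely hallucination. The Python below is a minimal, illustrative sketch under that assumption; `sample_answers` is a stand-in for a real model call, and the crude string-based grouping stands in for the semantic-equivalence check a production system would use.

```python
import math
from collections import Counter

def sample_answers(question: str, n: int = 10) -> list[str]:
    """Placeholder for n stochastic samples from an LLM (temperature > 0).
    Hard-coded here purely for illustration."""
    canned = {
        "What is the recommended adult dose of drug X?": [
            "10 mg", "20 mg", "10 mg", "15 mg", "10 mg",
            "20 mg", "25 mg", "10 mg", "15 mg", "20 mg",
        ],
    }
    return canned.get(question, ["unknown"] * n)

def semantic_entropy(answers: list[str]) -> float:
    """Entropy over clusters of equivalent answers. Normalized string match
    is a stand-in for a real semantic-equivalence (entailment) check."""
    clusters = Counter(a.strip().lower() for a in answers)
    total = sum(clusters.values())
    return -sum((c / total) * math.log(c / total) for c in clusters.values())

question = "What is the recommended adult dose of drug X?"
entropy = semantic_entropy(sample_answers(question))

THRESHOLD = 1.0  # illustrative cutoff, not a published value
print(f"semantic entropy = {entropy:.2f}")
print("flag answer for review" if entropy > THRESHOLD else "answer looks stable")
```

A RAG-style check would add a second gate on top of this: retrieve passages from vetted sources such as peer-reviewed papers and accept the answer only if those passages support it, which is the fact-validation step Coleman describes.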

3. In the FDA approval paths for medical AI systems, risk plays a central role in the efforts of AI developers to improve their systems. 

An AI-enabled system’s risk profile for patient injury “affects what pathway is used for FDA approval as well as the extensiveness of the data and science review associated with the system,” Coleman writes. 

‘As a consequence, unresolved issues that pose a significant patient safety risk will fail FDA review, but a minor software defect that does not pose such a risk may be permitted.’ 

4. Ongoing software improvements, of course, are not limited to fixing defects. 

New system functionality requires new regulatory approval by agencies such as the FDA, Coleman points out. “There are also improvement scenarios that pertain to neither a defect nor a new function.” 

‘For example, the degree to which an AI-enabled system can function without the oversight of a clinician may grow over time.’ 

5. The FDA’s historic work in medical device oversight provides several lessons for future rules on healthcare AI improvements. 

“First and foremost, the agency’s approach does not demand perfection from medical devices but does enforce patient safety as its preeminent priority,” Coleman notes. “Risk is considered in terms of both probability of occurrence and severity of harm.”

‘Conversely, the FDA also considers a medical device’s benefits alongside risk, producing a nuanced strategy for dealing with medical device improvements.’
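
The probability-and-severity framing lends itself to a small worked illustration. The sketch below is not the FDA’s actual classification scheme; it is a hypothetical risk matrix showing how a defect’s likelihood of occurrence and severity of harm might combine into the kind of triage decision Coleman describes, with minor issues tolerated and serious ones blocking approval.

```python
from enum import IntEnum

class Probability(IntEnum):
    REMOTE = 1
    OCCASIONAL = 2
    FREQUENT = 3

class Severity(IntEnum):
    NEGLIGIBLE = 1
    SERIOUS = 2
    CRITICAL = 3

def triage(p: Probability, s: Severity) -> str:
    """Hypothetical triage rule, illustrative only: score a defect by
    probability of occurrence times severity of harm."""
    score = p * s
    if score >= 6:
        return "unacceptable: remediate before approval"
    if score >= 3:
        return "tolerable only if benefits clearly outweigh the risk"
    return "minor: may be permitted and fixed in a routine update"

# A rare cosmetic glitch, a frequent critical dosing error, and a middle case.
print(triage(Probability.REMOTE, Severity.NEGLIGIBLE))   # minor
print(triage(Probability.FREQUENT, Severity.CRITICAL))    # unacceptable
print(triage(Probability.OCCASIONAL, Severity.SERIOUS))   # benefit-risk weighing
```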

Coleman emphasizes that the guidelines proposed in the Paragon report “present an effective and non-disruptive model for crafting AI healthcare regulation.”

‘Above all, the guidelines seek to maintain regulatory governance in existing agencies with historical experience in healthcare matters, albeit with recommendations reflecting the new realities specific to AI technologies.’

Read the full paper

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.