How and when FDA assesses the clinical competency of healthcare AI

True or false? Each time a software developer significantly updates an FDA-approved Software as a Medical Device (SaMD) product, the SaMD faces possible re-review by the agency.

Answer: True, albeit leaning hard on “possible” and with a caveat: The stringency of the evaluation process depends on the device’s risk classification and the nature of the change.

So reminds the Pew organization in a meaty primer on AI in healthcare posted Aug. 5.

The article concentrates on the FDA’s oversight of the technology but also supplies a nice backgrounder on its rise, uses and adoption challenges.

Clarifying the question of additional review for already approved AI-based SaMD products, Pew notes that the FDA generally regulates such products according to the level of risk they pose to patients should the software misguide care.

“If the software is intended to treat, diagnose, cure, mitigate or prevent disease or other conditions, FDA considers it a medical device,” the article points out, adding that most AI/ML-reliant products considered medical devices are categorized as SaMD.

Risk determines rank

Further, regulatory decisions are based on the risk classification into which the device falls:

  • Class I devices pose the lowest risk and so get the lightest review. Pew gives as an example SaMD that’s limited to displaying readings from a continuous glucose monitor.
  • Class II devices are considered to be moderate- to high-risk, and “may include AI software tools that analyze medical images such as mammograms and flag suspicious findings for a radiologist to review.”
  • Class III devices pose the highest risk. This classification comprises products that are “life-supporting, life-sustaining or substantially important in preventing impairment of human health.”

Additionally, Class III devices have to undergo full premarket approval, with the software developers supplying clinical evidence of safety and efficacy.

Meanwhile, Class I and Class II device manufacturers can sometimes apply for De Novo classification, which the FDA grants to devices that do something new or better while relying on underlying software known to be safe and well-tested.

Battling bias

“Like any digital health tool, AI models can be flawed, presenting risks to patient safety,” Pew comments. “These issues can stem from a variety of factors, including problems with the data used to develop the algorithm, the choices that developers make in building and training the model, and how the AI-enabled program is eventually deployed.”

As for the widely discussed challenges posed by bias in healthcare AI utilization, the organization holds that algorithms intended for use in clinical practice “must be evaluated carefully to ensure that [their] performance can be applied across a diverse set of patients and settings.”

More:

“However, such datasets are often difficult and expensive to assemble because of the fragmented U.S. healthcare system, characterized by multiple payers and unconnected health record systems. These factors can increase the propensity for error due to datasets that are incomplete or inappropriately merged from multiple sources. … Algorithms developed without considering geographic diversity, including variables such as disease prevalence and socioeconomic differences, may not perform as well as they should across a varied array of real-world settings.”

The article also names examples of FDA-approved medical AI products, summarizes exemptions from FDA review, covers emerging FDA proposals for SaMD regulation, and considers a handful of questions and “oversight gaps” in need of guidelines.

Read it all.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
