How the FDA sees its role vis-à-vis AI in healthcare

The U.S. Food and Drug Administration has its hands full making sure medical AI products are safe, efficacious and trustworthy before they hit the market. The rise of ever-more-innovative iterations of the technology—not least generative AI—is only adding to the burden. 

But fear not. The agency is prepared to handle its duties and responsibilities to the best of its considerable abilities. It just can’t do everything for everyone all at once. In fact, it could use a hand from other stakeholders.

This comes through between the lines of a special communication published in JAMA Oct. 15. Senior-authored by FDA Commissioner Robert Califf, MD, the paper describes 10 duties the agency must juggle as part of the job. Here are summaries of six.

1. Keeping up with the pace of change in AI. 

The FDA has shown openness to innovative programs for emerging technologies, such as the Software Precertification Pilot Program, Califf and co-authors point out. “However, as that program demonstrated, successfully developing and implementing such pathways may require the FDA to be granted new statutory authorities.” More:

‘The sheer volume of these changes and their impact also suggests the need for industry and other external stakeholders to ramp up assessment and quality management of AI across the larger ecosystem beyond the remit of the FDA.’

2. Preparing for the unknowns of large language models and generative AI. 

The FDA has yet to authorize an LLM, the officials note. “However, many proposed applications in healthcare will require FDA oversight given their intended use for diagnosis, treatment or prevention of diseases or conditions.” Even “AI scribes” designed to summarize medical notes, they stress, “can hallucinate or include diagnoses not discussed in the visit.” More:

‘There is a need for regulatory innovation in this space to enable both analysis of these information sources and integration into clinical decision-making. Proactive engagement among developers, clinicians, health system leaders and regulators on platforms such as the FDA’s Digital Health Advisory Committee will be critical.’

3. Prioritizing AI life-cycle management. 

Given the capacity for “unlocked” models to evolve and AI’s sensitivity to contextual changes, it is becoming increasingly evident that AI performance should be monitored in the environment in which it is being used, the authors state. “This need for postmarket performance monitoring of AI has profound implications for the management of information by health systems and clinical practices.” More:  

‘To meet the moment, health systems will need to provide an information ecosystem much like that monitoring a patient in the intensive care unit. The tools and circumstances of this ongoing evaluation must be recurrent and as close to continuous as possible, and the evaluation should be in the clinical environment in which it is being used.’ 

4. Counting on product suppliers to be responsible partners.

“At its core, FDA regulation begins with voluntary compliance by the regulated industries themselves,” Califf et al. write. “For example, the FDA reviews studies typically funded by industry but does not conduct clinical trials.” More: 

‘The concept that regulation of AI in medical product development and application for products that the FDA oversees begins with responsible conduct and quality management by sponsors does not fundamentally differ from the FDA’s general regulatory regime.’

5. Balancing regulatory attention among Big Tech, startups and academia.

Big Tech players dominate the AI innovation ecosystem. In healthcare, this presents the FDA with myriad challenges. Not least among these, the authors note, is “the daunting task of determining ways for all developers, including small entities, to ensure that AI models are safe and effective across the total product life cycle in diverse settings.” More: 

‘Most current FDA programs have special initiatives to support small business and academia that would also apply to AI.’

6. Mitigating the tension between companies’ profit motives and providers’ care imperatives.  

“An intentional focus on health outcomes will be necessary to overcome the pressure to emphasize practices that lead to suboptimization of the healthcare system, the adverse risks of financialization and data blocking,” Califf and colleagues write. More: 

‘The mandate for the FDA to safeguard and promote the health of individuals and public health will apply pressure to the system, but the need for a broad collaboration for responsible collective advancement extends beyond the FDA.’

The paper’s co-authors are FDA senior clinical advisor Haider Warraich, MD, and Troy Tazbaz, the agency’s director of digital health. 

Read the whole thing.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
