News You Need to Know Today

FDA reflects on AI responsibilities | AI news watcher’s blog | Partner voice

Friday, October 18, 2024

In cooperation with Northwestern and Nabla


How the FDA sees its role vis-à-vis AI in healthcare

The U.S. Food and Drug Administration has its hands full making sure medical AI products are safe, efficacious and trustworthy before they hit the market. The rise of ever-more-innovative iterations of the technology—not least generative AI—is only adding to the burden. 

But fear not. The agency is prepared to handle its duties and responsibilities to the best of its considerable abilities. It just can’t do everything for everyone all at once. In fact, it could use a hand from other stakeholders.

This comes through between the lines of a special communication published in JAMA Oct. 15. Senior-authored by FDA commissioner Robert Califf, MD, the paper describes 10 duties the agency must juggle as part of the job. Here are summaries of six. 

1. Keeping up with the pace of change in AI. 

The FDA has shown openness to innovative programs for emerging technologies, such as the Software Precertification Pilot Program, Califf and co-authors point out. “However, as that program demonstrated, successfully developing and implementing such pathways may require the FDA to be granted new statutory authorities.” More:

‘The sheer volume of these changes and their impact also suggests the need for industry and other external stakeholders to ramp up assessment and quality management of AI across the larger ecosystem beyond the remit of the FDA.’

2. Preparing for the unknowns of large language models and generative AI. 

The FDA has yet to authorize an LLM, the officials note. “However, many proposed applications in healthcare will require FDA oversight given their intended use for diagnosis, treatment or prevention of diseases or conditions.” Even “AI scribes” designed to summarize medical notes, they stress, “can hallucinate or include diagnoses not discussed in the visit.” More:

‘There is a need for regulatory innovation in this space to enable both analysis of these information sources and integration into clinical decision-making. Proactive engagement among developers, clinicians, health system leaders and regulators on platforms such as the FDA’s Digital Health Advisory Committee will be critical.’

3. Prioritizing AI life-cycle management. 

Given the capacity for “unlocked” models to evolve and AI’s sensitivity to contextual changes, it is becoming increasingly evident that AI performance should be monitored in the environment in which it is being used, the authors state. “This need for postmarket performance monitoring of AI has profound implications for the management of information by health systems and clinical practices.” More:  

‘To meet the moment, health systems will need to provide an information ecosystem much like that monitoring a patient in the intensive care unit. The tools and circumstances of this ongoing evaluation must be recurrent and as close to continuous as possible, and the evaluation should be in the clinical environment in which it is being used.’ 

4. Counting on product suppliers to be responsible partners.

“At its core, FDA regulation begins with voluntary compliance by the regulated industries themselves,” Califf et al. write. “For example, the FDA reviews studies typically funded by industry but does not conduct clinical trials.” More: 

‘The concept that regulation of AI in medical product development and application for products that the FDA oversees begins with responsible conduct and quality management by sponsors [and] does not fundamentally differ from the FDA’s general regulatory regime.’

5. Balancing regulatory attention between Big Tech, startups and academia. 

Big Tech players dominate the AI innovation ecosystem. In healthcare, this presents the FDA with myriad challenges. Not least among these, the authors note, is “the daunting task of determining ways for all developers, including small entities, to ensure that AI models are safe and effective across the total product life cycle in diverse settings.” More: 

‘Most current FDA programs have special initiatives to support small business and academia that would also apply to AI.’

6. Mitigating the tension between companies’ profit motives and providers’ care imperatives.  

“An intentional focus on health outcomes will be necessary to overcome the pressure to emphasize practices that lead to suboptimization of the healthcare system, the adverse risks of financialization and data blocking,” Califf and colleagues write. More: 

‘The mandate for the FDA to safeguard and promote the health of individuals and public health will apply pressure to the system, but the need for a broad collaboration for responsible collective advancement extends beyond the FDA.’

The paper’s co-authors are FDA senior clinical advisor Haider Warraich, MD, and Troy Tazbaz, the agency’s director of digital health. 

Read the whole thing

 


The Latest from our Partners

Nabla Joins athenahealth's Marketplace Program to expand access to AI-powered clinical documentation and promote clinician well-being - Is your organization using athenahealth and seeking to streamline clinical documentation? Nabla has officially joined athenahealth’s Marketplace Program. Visit Nabla's page on the marketplace here: https://marketplace.athenahealth.com/product/nabla


Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • The time has come to pivot toward routine genomic analysis. The American Medical Association is talking to you, advancers of AI-aided precision medicine. In a post promoting the sixth module in its Ed Hub CME series, the group notes that, so far, genomic analysis has been performed only when evaluating specific cancers or rare genetic diseases. “Moving forward,” the authors state, “whole genome approaches will become a standard step in understanding, preventing, detecting and treating” all sorts of diseases. To learn more about AI and precision health from the AMA, click here
     
  • Google is shaking things up at the top. In a blog post aimed at employees but published for the general public, Google/Alphabet CEO Sundar Pichai says the moves reflect the company’s recognition of the present time as its “Gemini era.” Along with some rejiggering of departmental structures, the changes will see senior veep Prabhakar Raghavan “return to his computer science roots” to take on the role of chief technologist for Google. Meanwhile, Nick Fox, “a longtime Googler and member of Prabhakar’s leadership team,” will make the proverbial move upstairs to lead knowledge & information operations. This puts Fox in charge of the Big Tech biggie’s Search, Ads, Geo and Commerce products. Read the rest
     
  • It’s also making Google Cloud’s healthcare AI goodies more widely available. This includes Vertex AI Search for Healthcare and some new features for Healthcare Data Engine. In both cases, the company says, Google Cloud customers will retain control over their data. Vertex AI Search for Healthcare is designed to lighten administrative loads for AI developers. Healthcare Data Engine helps organizations build interoperable data platforms—“the foundation of generative AI.” Announcement
     
  • Investors have seen the future of healthcare AI investment, and it is multimodal. Which is to say that, soon, the most sought-after AI offerings will train on all manner of data—text, images, audio, video, wearables and what have you. So says Bessemer Venture Partners VP Morgan Cheatham. “While it’s understandable that healthcare executives aren’t yet championing multimodal AI, given its nascent status and still-developing applications, this technology deserves greater focus as research translates into products,” Cheatham tells MedCity News. “We’ve recently witnessed a similar transition with large language models, which have rapidly moved from research to widespread application.” Get the rest
     
  • The U.S. really isn’t ready for the upset that’s headed for its workforce applecart. Generative AI is the mischief maker rubbing its hands together ahead of the hit. Or, as the Brookings Institution puts it in more genteel terms: “Existing generative AI technology already has the potential to significantly disrupt a wide range of jobs. We find that more than 30% of all workers could see at least 50% of their occupation’s tasks disrupted by generative AI.” What’s more, unlike previous automation technologies that primarily affected routine, blue-collar work, generative AI is “likely to disrupt a different array of ‘cognitive’ and ‘nonroutine’ tasks, especially in middle- to higher-paid professions.” Break out the worry stone and read the report
     
  • Balancing Brookings is the Indeed Hiring Lab. Analysts there reviewed lots of data too. And their advice seems to be “Calm down.” “We were able to take all these skills, map them to over a million job postings that we had over the last year or so, and then evaluate: Could gen AI replace a human being in performing this particular job function?” Svenja Gudell, Indeed’s chief economist, tells CNBC. “When we did that, the result was actually quite striking because we found that there were really no skills—literally zero—that were very likely to be replaceable.” There are, however, some dark nuances and wrinkles in Indeed’s findings. One is that healthcare administrative and support jobs land among the top 5 lines of work with “the greatest share of skills that have the potential to be replaced by AI.” Read the whole thing
     
  • Three European organizations have banded together to help AI developers translate the EU AI Act into technical specs. If that sounds geeky, so be it. Because it’s also an important step toward broad adoption of compliant GenAI models for healthcare providers across the continent. The three orgs are ETH Zurich, a public research university in Zurich, Switzerland; the Institute for Computer Science, Artificial Intelligence and Technology in Sofia, Bulgaria; and LatticeFlow AI, a software company also in Zurich. The trio’s framework offers an open-source resource for evaluating the regulatory readiness of large language models. ETH professor Martin Vechev says the offering “can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations working across different jurisdictions.” Details and link to the eval framework.
     
  • If you or someone you know could use an intro to AI—or a refresher—check this out. MIT Technology Review is offering a free, six-lesson mini-course. Signing up gets you one email a week for six weeks. Each dispatch presents a self-contained module, from “What is AI?” (week 1) to “How to talk about AI” (week 3) to “Does AI need tougher rules?” (week 6). Details plus signup link.
     
 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team



© Innovate Healthcare, a TriMed Media brand