News You Need to Know Today

FDA eyes AI’s total product life cycle | AI news watcher’s blog | Partner voice

Wednesday, November 27, 2024

In cooperation with Northwestern and Nabla

FDA Digital Health Advisory Committee

FDA taking the long view of generative AI in healthcare

The FDA’s fledgling Digital Health Advisory Committee (DHAC) only held its first meeting last week, but it has already committed its thinking to writing. 

And what’s on its mind, it turns out, is making sure the agency keeps a close watch on medical devices equipped with GenAI all the days of these products’ lives. 

That’s clear from a reading of the 30-page document committee members received ahead of their inaugural November meeting. Here are excerpts, organized as responses to some questions AIin.Healthcare would have liked to ask had we been there.  

Why is a total product life cycle (TPLC) strategy critical to the oversight of medical devices equipped with GenAI? 

FDA’s long-standing commitment to a TPLC approach has become increasingly relevant for medical devices incorporating technologies that are intended to iterate faster and more frequently over a device’s life of use than ever before. 

‘A TPLC approach is likely to remain important to the management of future, safe and effective GenAI-enabled medical devices.’

How does FDA’s TPLC approach relate to the agency’s AI Lifecycle template?

In general, consideration of the FDA’s AI Lifecycle for GenAI-enabled devices—and AI-enabled devices broadly—may be one important way for manufacturers to approach managing their devices throughout the TPLC. 

‘Additionally, the AI Lifecycle can be used as a helpful model to identify where new techniques, approaches or standards may be needed to assure adequate management of these new technologies across the TPLC.’

Back to the basics for a minute. How is the FDA defining ‘GenAI’?

GenAI refers to the class of AI models that mimic the structure and characteristics of input data to generate derived synthetic content, which can include images, videos, audio, text and other digital content.

‘GenAI models can analyze input data and produce contextually appropriate outputs that may not have been explicitly seen in its training data.’

How does GenAI resemble—and differ from—traditional AI/machine learning? 

Like other AI/ML models, GenAI models are frequently developed on datasets so large that human developers typically cannot know everything about the dataset contents during development. 

‘In contrast to the datasets used to develop other AI/ML models, datasets for GenAI model development can be intentionally broad and may not be initially tailored to a specific task.’

What makes GenAI especially tricky to regulate?

At times, GenAI’s ability to tackle diverse, new and complex tasks may contribute to uncertainty around the limits of a device’s output. 

‘When insufficiently controlled, this uncertainty can translate to difficulty in confirming the bounds of a device’s intended use, which can introduce challenges to FDA’s regulation of GenAI-enabled devices.’

Why does it matter that many GenAI models are foundation models? 

Foundation models are trained on a wide range of data and can be broadly applied to numerous AI applications for undertaking myriad tasks. 

‘If a manufacturer uses a foundation model or other GenAI tool as part of a product with a specific intended use that meets the definition of a medical device, the product that leverages the foundation model may be the focus of FDA’s device regulatory oversight.’ 

How best to avoid FDA rejection of a GenAI product?

At times, it may be helpful for manufacturers and developers to consider that a GenAI implementation of a product may not be beneficial to public health. This may be the case when the implementation could provide erroneous or false content. 

‘It is helpful for manufacturers and developers to consider when GenAI may or may not be the best technology for a specific intended use.’

Going forward, FDA notes, the performance evaluation methodologies needed for sound oversight “will be governed by the specific intended use and design of the GenAI-enabled device, some of which may necessitate formulation of new performance metrics for certain intended uses.” 

‘As with all devices, the totality of evidence—which may include premarket and postmarket evidence—can support reasonable assurance of safety and effectiveness of these devices across the TPLC.’

Read the full report. 

 


The Latest from our Partners

Catalight Partners with Nabla to Reduce Practitioner Documentation Burden and Elevate Autism and I/DD Care - A leader in intellectual and developmental disabilities (I/DD) care, Catalight is leveraging Nabla's Ambient AI assistant to enhance patient care, expand access, and empower families with tailored treatment options. Learn more about how Nabla is transforming care here: https://www.prnewswire.com/news-releases/catalight-partners-with-nabla-to-reduce-practitioner-documentation-burden-and-elevate-autism-and-idd-care-302315767.html


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • A big American healthcare company has ambitious global plans. And AI is front and center. At an investor event last week, GE HealthCare made known its intentions to embed AI in every medical device it makes over the next eight years. The $18 billion business headquartered in Chicago said the vision is part of its D3 strategy, which melds a digital framework with various products such that, together, the components focus smart medical devices on specific disease states. At the gathering, held at Nasdaq in New York City, President and CEO Peter Arduini reminded attendees that the company broke away from its historic parent, General Electric, in early 2023. “We are confident in our progress since [the] spin[off] and our path to accelerate growth driven by an exciting innovation pipeline,” Arduini said. He underscored the company’s stated aim to help “create a world where healthcare has no limits.” Company coverage here
     
  • AI developers have a sophisticated new option for building healthcare-specific applications. The opening comes courtesy of Google Research, which introduced a suite of open foundation models this week. Calling the suite Health AI Developer Foundations, or “HAI-DEF,” the company says its health AI team will initially focus on supporting imaging-based applications for radiology, dermatology and pathology. By providing such resources, two software engineers write in a blog post, “we aim to democratize AI development for healthcare, empowering developers to create innovative solutions that can improve patient care.”
     
  • FDA commissioner Robert Califf recently suggested the agency may need to double its workforce. And that’s just to oversee AI. That eyebrow-raising opinion has at least one vocal supporter. “In my emergency department, we use AI to prioritize patients based on admission likelihood,” writes Yale emergency physician and professor Cristiana Baloescu, MD, in MedPage Today. “While it helps with patient flow, it can miss complex cases.” For now, she adds, medical staff “maintain significant oversight, meticulously double-checking AI-generated recommendations.” That approach may not be sustainable, she notes, given the proliferation of AI-equipped medical devices on top of their long lives in service. How to fund a major expansion of the FDA workforce? Start with congressional budget allocations, Dr. Baloescu suggests, and add in shares from fees on AI-equipped devices as well as contributions from AI companies. Hear her out
     
  • Minerva was a tough act to follow, but this should do it. The Mount Sinai Health System in New York City is opening a sparkling new research center concentrating on healthcare AI. Housed in a 12-story, 65,000-square-foot facility close to Central Park, the Hamilton and Amabel James Center for Artificial Intelligence and Human Health will be home away from home to around 40 principal investigators, 250 grad students and any number of postdoctoral fellows, computer scientists and support staff. In showcasing the ribbon-cutting ceremony, the institution suggests the center shares lineage with “Minerva.” That was the name Mount Sinai gave to its early-generation supercomputer back in 2013. Icahn School of Medicine executive Dennis Charney, MD, says the new center will “yield transformative discoveries in human health by the integration of research and data, fostering collaboration across multiple programs under one roof.” Announcement
     
  • Medical scribes are really nothing new. It’s just that, before ambient GenAI, the scribes were humans transcribing doctors’ tape recordings with fingers on keypads. That option can be easy to forget these days, given all the competing AI dictation products vying for attention. And yet, as it happens, some physicians still prefer the old way—at least sometimes. Vandana Ahluwalia, MD, a rheumatologist in Brampton, Ontario, is one. She tells the Canadian Broadcasting Corp. she appreciates how her talented staff member “highlights key points from a patient’s previous visits, which current AI tools can’t do.” She likes that he performs other administrative work in the office too. AI can’t match that. Yet. 
     
  • And speaking of our neighbors to the North. The top five healthcare AI startups in Canada are Acto, Clarius, AbCellera, Benchsci and BlueDot. More on that here
     
  • How likely is it that AI won’t cause an earth-shattering calamity? Not very. So believes Siddhartha Mukherjee, MD, a cancer researcher at Columbia University and author of the Pulitzer Prize-winning The Emperor of All Maladies: A Biography of Cancer. “I think it’s almost inevitable that, at least in my lifetime, there will be some version of an AI Fukushima,” he tells The Guardian. Fukushima, of course, was the catastrophic nuclear accident caused by the 2011 Japanese tsunami. And with that, we’ve been warned. 
     
  • Elon Musk will not be the Trump Administration’s AI czar. But someone will. This individual will be charged with focusing both public and private resources on keeping America at the forefront of AI, Axios reports, citing sources inside Trump’s transition team. Axios co-founder Mike Allen adds that AI and crypto roles could be combined under a single “emerging tech” czar.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand