Some FDA-approved medical devices age more safely than others. That’s no less true of AI-enabled technologies than of any other kind. In fact, the need for vigilance around embedded AI models may be more pressing than the medical-device norm. The agency makes this clear in detailed draft guidance issued this week. “The performance of AI-enabled medical devices deployed in real-world environments may change or degrade over time, presenting a risk to patients,” FDA states in the document. “In general, manufacturers should have a postmarket performance monitoring plan to help identify and respond to changes in performance in a postmarket setting.” The underlying idea is to push device makers to include long-term plans for monitoring performance as soon as they submit products for market approval. This approach, FDA suggests, will help cut the chances of recalls over time while supporting the agency’s ongoing evaluation of AI risk controls. The draft document, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” is primarily aimed at device manufacturers and FDA staff. It’s open for comments until April 7. Here are some key excerpts.

1. Manufacturers of AI-enabled medical devices should proactively monitor, identify and address modifications and usage changes that could affect device performance. In addition, sponsors must develop and implement plans for comprehensive risk-analysis programs and documentation consistent with established Quality System Regulation practices to manage risks related to undesirable changes in device performance for AI-enabled medical devices. Further, manufacturers must monitor device performance and report to FDA information about deaths, serious injuries and malfunctions.
2. Ongoing performance monitoring is important for AI-enabled medical devices because models are highly dependent on the characteristics of the data used to train them. As such, their performance can be particularly sensitive to changes in data inputs. Changes in device performance may originate from many factors, such as changes in patient populations over time, disease patterns or data drift from other changes. (For a rough sense of what drift monitoring can look like in practice, see the illustrative sketch following these excerpts.)
3. The performance of AI-enabled medical devices can change as aspects of the environments in which they are cleared for use change over time. It may not be possible to completely control risks with development and testing activities performed under premarket conditions (prior to device authorization and deployment). FDA recognizes that the environments in which medical devices are deployed cannot be completely controlled by the device manufacturer.
4. The presence of factors that may lead to changes in device performance may not always raise concerns about patient harm. Rather, as part of ongoing risk management, it is important for device manufacturers to consider the impact of these factors (e.g., data drift) on the safety and effectiveness of the device. Additional information about performance management processes may help FDA determine whether risks have been adequately identified, addressed and controlled.
5. Sponsors of AI-enabled medical devices who elect to employ proactive performance monitoring should describe their performance monitoring plans as part of their premarket submission. Sponsors are encouraged to obtain FDA feedback on the plan through the Q-Submission Program. For a 510(k) submission, FDA generally does not require such plans for devices for which a performance monitoring plan is not a special control for the particular device type.
6. For a De Novo classification request, such a plan may be necessary to control the risks posed by the particular device type. In some cases, FDA may establish a special control for the device type going forward. Further, for a PMA, a performance monitoring plan may be a condition of approval. However, sponsors may opt to include information about their performance monitoring plan in any submission for an AI-enabled device. A robust performance monitoring plan includes proactive efforts to capture device performance after deployment.
Announcement here, document here.
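To make the “data drift” flagged in excerpts 2 and 4 a bit more concrete, here is a minimal, hypothetical sketch of one common postmarket drift check: comparing the distribution of a model input seen after deployment against the distribution used during premarket validation, via the population stability index. This is not drawn from the FDA document; the function names, thresholds and data are illustrative assumptions only.

```python
# Hypothetical sketch of postmarket "data drift" monitoring for one input
# feature of an AI-enabled device. The PSI metric, thresholds and data are
# illustrative assumptions, not FDA-specified requirements.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a feature's deployment-time distribution against the
    distribution observed during premarket validation."""
    # Bin edges come from the reference (premarket) data.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log-of-zero.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    premarket = rng.normal(loc=0.0, scale=1.0, size=5_000)   # validation data
    postmarket = rng.normal(loc=0.4, scale=1.2, size=5_000)  # shifted population

    psi = population_stability_index(premarket, postmarket)
    # Common rule of thumb (an assumption, not regulatory guidance):
    # PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate.
    print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.25 else "-> stable enough")
```

In a real monitoring plan this kind of check would be one input among many, alongside outcome tracking, complaint data and the risk-analysis documentation the guidance describes.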
Access the 2024 Executive Handbook: Ten Transformative Trends in Healthcare - What was top of mind for healthcare executives this year? What trends will shape 2025? Nabla's Chief Medical Officer, Dr. Ed Lee, MD, MPH, was recently interviewed for the 2024 Executive Handbook: Ten Transformative Trends in Healthcare, offering his perspective on how AI is enhancing clinical workflows and setting the stage for the future of patient care. From shifting federal healthcare policies to the emergence of disruptors beyond traditional health systems and pressing cybersecurity challenges, discover the key insights shaping the industry. Download the full handbook here.

Assistant or Associate Dean, Health AI Innovation & Strategy - UCLA Health seeks a visionary academic leader to serve as its Assistant or Associate Dean for Health AI Innovation and Strategy and Director for the UCLA Center for AI and SMART Health. This unique position offers the opportunity to shape and drive AI vision and strategy for the David Geffen School of Medicine (DGSOM) and ensure translation of innovation in our renowned health system. This collaborative leader will work with academic leadership, faculty, staff and trainees to harness the power of AI to transform biomedical research, decision and implementation science, and precision health. Learn more and apply at:
https://recruit.apo.ucla.edu/JPF09997 (tenured track)
https://recruit.apo.ucla.edu/JPF10032 (non-tenured track)
Buzzworthy developments of the past few days.

- New Year’s Day brought the go-live for California’s new law forbidding the use of AI to deny health insurance claims. Known as the “Physicians Make Decisions” law, the action had been teed up since last September. That’s when Gov. Gavin Newsom signed the bill. As The Mercury News points out, the law’s already high profile is now likely to rise even higher in the aftermath of the New York City murder of UnitedHealthcare executive Brian Thompson. The killing “ignited a wave of reactions that often reflected the public’s anger,” the newspaper reminds. Meanwhile the law’s primary author, state Sen. Josh Becker, says an AI algorithm “cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences. This law ensures that human oversight remains at the heart of healthcare decisions.” Having watched the legislation take shape in the Golden State, some 19 other states are now looking to pass similar laws. “We’ve even been contacted by multiple congressional offices considering federal legislation,” Becker tells the outlet. “Our priority is helping Californians, but setting a national model is just as important.”
- AI startups concentrating on the medical scribe market raised $800 million in 2024. That’s more than double 2023’s $390 million. The figures are from PitchBook, and they unsurprisingly caught the eye of AI watchers at the Financial Times in London. “I don’t think I’ve ever seen anything more transformative in 15 years of healthcare than this,” a primary care physician in South London tells FT. The doctor, Harpreet Sood, has been using Nabla’s ambient AI assistant for the past 15 months. “It’s been remarkable, easily saving three to four minutes of every [10-minute] consultation,” he adds, “and really helping to capture the consultation and what it’s about.” Sood is aware of the technology’s propensity for hallucinations and says he wouldn’t use it without checking its work. Still, “for me personally, it has been a big shift.”
- OpenAI has hinted it’s about to release a new AI model that may blow some minds with its humanlike reasoning. In a wide-ranging Q&A with Bloomberg Businessweek, CEO Sam Altman only ducks a little when asked if the latest iteration might constitute artificial general intelligence. “[W]hen an AI system can do what very skilled humans in important jobs can do—I’d call that AGI,” Altman replies. “[I]f you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, ‘OK, that’s AGI-ish.’” He also shares his thoughts on the incoming Trump administration vis-à-vis AI. The interview is behind a paywall, but Pymnts has a good summary.
- Not to be outdone, Elon Musk is trumpeting on X that xAI’s Grok 3 is coming soon. The updated iteration will have 10 times the compute power of Grok 2, Musk promises. Reporting on the teaser for Tom’s Hardware, tech writer Anton Shilov notes that, as part of its present pursuits, xAI plans to eventually deploy a supercomputer powered by more than a million GPUs. That version of xAI’s Colossus supercomputer “will be used to train LLMs that will likely contain trillions of parameters and will be far more accurate than Grok 3 or GPT-4o,” Shilov writes. “However, in addition to a greater number of parameters, newer models may feature more advanced reasoning, which brings them closer to artificial general intelligence, which is the ultimate goal for companies like xAI and OpenAI.”
- Of course AI can greatly accelerate drug discovery. Every healthcare AI watcher knows that. But did you know that it’s also capable of relieving participants of their burden during clinical trials? It’s true. AI pulls it off by predicting optimal dosing along with safety and efficacy so the human subjects don’t have to go through all that. The point is made in an article presenting views of four experts published Jan. 6 in Genetic Engineering & Biotechnology News. One of the experts tells the journal his drug company has developed an experimental platform that “creates hundreds or thousands of distinct molecular structures on weekly time scales. Then we can carry them to a whole suite of different biological and metabolic assays.” Sounds like a major assist by any standard.
- AI assistance vs. quiet quitting: Which will you choose to get you through your workday three to five years from now? Let’s get real. Few workers will have any such choice. “As transformative as AI can be, it can’t completely take over all elements of work [because] many roles require human creativity, emotional intelligence and complex decision-making,” Kathy Diaz, chief people officer at global IT services company Cognizant, tells Newsweek. “The importance of softer skills will continue to increase as generative AI and automation optimize routine tasks.” Read the whole thing.
- Looking back now, 2023 was sort of the year of text-to-image AI. And 2024 was largely marked by text-to-video advances. What will shine similarly brightly in 2025? The next logical breakthrough—physical intelligence. “PI,” if you will. So suggests Daniela Rus, PhD, director of the computer science and AI lab at MIT, in a piece published by Wired Jan. 6. She advises watching for “a new generation of devices—not only robots but also anything from power grids to smart homes—that can interpret what we’re telling them and execute tasks in the real world.” Read the rest.
- Recent research in the news:
- Funding news of note: