Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- The FDA has finalized its recommendations—largely nonbinding but packed with shoulds—for getting AI-equipped medical devices approved. Released Dec. 4, the guidance urges device makers to include a “predetermined change control plan,” or PCCP, as soon as they submit their devices for review. Essentially what they’re talking about is a strategy for updating the device as it matures. This would save both the manufacturer and the agency from having to tackle fresh marketing submissions for each modification as it comes along. “FDA recognizes that the development of AI-enabled devices is an iterative process,” the guidance notes. The guidance, it adds, “demonstrates FDA’s broader commitment to developing innovative approaches to the regulation of device software functions as a whole.” Download the document here. And for a PCCP primer in conversational language, check out Etienne Nichols’s helpful guidance through the guidance from 2023, when FDA posted the first draft.
- We wish we didn’t have to ask this question. But ask we must, and with some urgency: Did healthcare payers’ use of AI figure in the thinking of the guy who murdered UnitedHealthcare CEO Brian Thompson? Recall that UHC is among the companies being sued for using AI to deny coverage. “The algorithm in question, known as nH Predict, allegedly had a 90% error rate—and according to the families of the two deceased men who filed the suit, UHC knew it,” Futurism points out. “As that lawsuit made its way through the courts, anger regarding the massive insurer’s predilection towards denying claims has only grown, and speculation about the assassin’s motives suggests that he may have been among those upset with UHC’s coverage.” Newsweek looks at that angle too.
- Here’s another troubling question: Will AI help doctors decide whether you live or die? Computerworld senior reporter Lucas Mearian seeks the answer in a piece published Dec. 3. One of the experts he turns to is Adam Rodman, MD, of Harvard Medical School. “We know 800,000 Americans are either killed or seriously injured [each year] because of diagnostic errors [by healthcare providers],” says Dr. Rodman, an internal medicine specialist who serves as director of AI programs at Beth Israel Deaconess Medical Center. “So large language models are never going to be perfect, but the human baseline isn’t perfect either.” Read the rest.
- Of the top 10 health technology hazards for 2025, guess where ‘risks with AI-enabled medical devices’ lands. If you said right at No. 1, you must be paying attention. ECRI released the latest list in its annual rankings this week. “Placing too much trust in an AI model—and failing to appropriately scrutinize its output—may lead to inappropriate patient care decisions,” the organization warns. “AI offers tremendous potential value as an advanced tool to assist clinicians and healthcare staff, but only if human decision-making remains at the core of the care process.”
- Using ambient-listening AI tools probably won’t do much to boost providers’ productivity. Don’t shoot the messenger. The opinion is based on empirical research published in NEJM AI. “[T]he hype and novelty of ambient-listening AI tools have outpaced the evidence to support or refute claims that these tools are transformational in terms of time savings and efficiency,” the authors comment in their discussion section. “Consequently, healthcare systems, which already operate on small margins in hypercompetitive environments, run the risk of overpaying and not realizing the expected benefits.” The study is available in full for free.
- The 90+ companies marketing AI scribes would surely beg to differ. These vendors might do well to explain that their products’ value depends on how they’re used. “When notes are produced by AI scribes, clinicians must think carefully about the shift in their roles from note authors to note editors,” Sari Altschuler, PhD, of Northeastern University, tells that institution’s news operation.
- There’s a GPU shortage looming over healthcare. The cloud may be both the problem and the solution. “Healthcare is not really feeling the pinch from the chip shortage because most of the large cloud vendors have already procured GPUs in bulk,” Jon McManus, chief AI officer at Sharp HealthCare in San Diego, tells HealthTech. However, if there is a major GPU shortage, the economic effects could be profound, he adds. “At some point, if there is a cutoff, be prepared to pause [your] AI ambition until the market can stabilize.” Read the article.
- AI is not a popular new technology—and it’s actually getting less popular over time. So says Vox senior writer Kelsey Piper. However, she advises, don’t make the mistake of thinking any slowdown in AI progress means its bubble has burst. “[W]e should probably expect the impact of AI to grow, not shrink, over the next few years, regardless of whether naive scaling is indeed slowing down,” Piper writes. “That’s effectively because when it comes to AI, we already have an enormous amount of impact that’s just waiting to happen.”
- Recent research in the news:
- Funding news of note:
- From AIin.Healthcare’s news partners:
- Health Imaging: Up to half of medical organizations either already using or preparing to implement AI
- Cardiovascular Business: AI-powered ECG screening a cost-effective way to ID heart failure patients
- Radiology Business: FDA clears expanded use of RadNet subsidiary DeepHealth’s mammography AI solution
- Cardiovascular Business: Philips, Mayo Clinic using AI to improve cardiac MRI technology
- Health Imaging: AI prevents mammography positioning errors before exposure