Healthcare AI newswatch: FDA’s AI gambit, FDA’s blind spot, everybody’s agentic AI, more
Buzzworthy developments of the past few days.
- The FDA is getting AI-aggressive about speeding up reviews. The plan: unleash generative AI on tedious, repetitive tasks so agency scientists can concentrate on work that only highly skilled humans can do. Success will be measured by how much faster staff complete product reviews with GenAI than without. Announcing the decision to go all in with AI on May 8, FDA Commissioner Martin Makary, MD, MPH, said he’d been “blown away” by the technology’s performance in a pilot project. Given the impressiveness of the assistance, he added, “[W]e cannot afford to keep talking” about AI’s capabilities. “It is time to take action.” The opportunity to cut task times from days to minutes, he maintains, is “too important to delay.” Full announcement.
- Meanwhile the FDA needs to keep its eye on the device vs. non-device ball. This is the studied suggestion of a researcher whose work has shown that large language models (LLMs) sometimes produce outputs qualifying as clinical decision support—even when the user prompts the AI not to give clinical decision support. By the FDA’s own standards, that would make such models medical devices requiring regulation. In the research, the line-crossing tended to happen when prompts involved time-critical scenarios. Gary Weissman, MD, of Penn Medicine and colleagues tested two GenAI models. Prompting both with a case of likely cardiac arrest, they found one model called for placing an intravenous catheter. That might be appropriate guidance for a trained clinician. For a bystander who witnessed the emergency and wanted to help, not so much. Findings like these “raise questions about how LLMs should be regulated,” Weissman says in a UPenn blog post. “Currently, most LLMs have disclaimers that they should not be used to guide clinical decisions—but LLMs are being used in this way in practice.”
- And what are other countries doing to oversee medical AI? A subject matter expert offers a snapshot in Med Device Online, comparing and contrasting the FDA’s efforts with those of regulatory bodies in the European Union, Canada, the U.K., China, Brazil, Australia and South Korea. “There is no one-size-fits-all approach; each region has its own interpretation of risk, trust, transparency and innovation,” writes Marcelo Trevino. “Companies aiming for global market access need more than a strong algorithm—they need an agile regulatory strategy that can flex across borders while maintaining the highest standards of safety and ethics.” Read the rest.
- Healthcare AI holds transformative potential akin to the discovery of fire. The thought comes from Marschall Runge, MD, PhD, who makes the gutsy statement early in his new book, The Great Healthcare Disruption: Big Tech, Bold Policy and the Future of American Medicine (Forbes Books), released May 6. “It’s easy to agree with techno-optimists that [healthcare is] at a Promethean moment,” he adds. “This may well be the moment when everything changes.” Mind you, Runge is no peddler of hype. He’s a top-tier academic at the University of Michigan, where he serves as dean of the medical school, EVP for medical affairs and CEO of Michigan Medicine. When someone with his scholarly bona fides talks—or writes at length—about AI in healthcare, we would do well to listen.
- Don’t be afraid of agentic AI. It’s coming to all industries and economic sectors, healthcare included, but its activities will be confined to particular tasks and discrete duties. StateTech magazine looks at the technology in the context of its service to state governments. “Imagine an AI agent at the DMV that helps file forms, apply for permits, checks records and nudges human employees—or even other agents—to finish unresolved tasks,” writes Joe Markwith, a senior solutions architect with CDW, which publishes the magazine. “People still make the rules and determine what the AI can and cannot access. The AI can then work independently within those confines to achieve its mission.”
- At the same time, it’s best not to let an AI agent wander too far from sight. That’s because it’s not unimaginable that an AI agent could go rogue—without intent, of course—and present a cybersecurity risk. “They work 24/7 at very quick speeds and without sleeping,” Jeff Shiner, head of the identity security company 1Password, tells Axios. “An agent acts and reasons. As a result, you need to understand what it’s doing.”
- It took only two years for AI prompt engineering to go from hot job to dead end. Prompt engineers were going to be the tech workers who specialized in getting large language models to give great outputs. Now their know-how is just one more must-have arrow in the quivers of those same workers. And it’s one that AI itself can shoot. The rapid rise and fall of the prompt engineer position raises an obvious question: Was the job ever really a thing? Some are skeptical. “I think the discussion online of [prompt engineering] was probably much bigger than the head count,” says Aline Lerner, CEO of Interviewing.io, in a Fast Company article. “It was such an appealing thing precisely because it was this on-ramp for nontechnical people into this sexy, lucrative field.”
- Remember when even intellectuals worried AI would become smarter than humans? Forget about it. “As a history professor at a state university, my concern is the opposite,” writes Kate Epstein, PhD, of Rutgers, in Persuasion. “It isn’t that AI is becoming smarter than us. It’s that AI is making us—and particularly students—as dumb as it is.” Hear her out.
- Research:
- Feinstein Institutes for Medical Research: AI identifies brain network predictive of psychosis in Alzheimer’s disease
- Albert Einstein College of Medicine launches data science institute
- University of Utah: Researchers develop explainable AI toolkit to predict disease before symptoms appear
- Hospital for Special Surgery: Study uses AI to identify risk factors linked to more severe pain after knee replacement
- Massachusetts Institute of Technology: Making AI models more trustworthy for high-stakes settings
- University of Missouri: AI chatbots can help pregnant women with opioid use disorder, new study finds
- Funding:
- Carta Healthcare secures $18.25M in Series B1 funding to accelerate AI-powered clinical data abstraction and analytics
- Kouper emerges from stealth with $10M in funding to transform transitions of care
- ReportAId grabs €2.2M ($2.5M) to solve European healthcare data challenge, boosting hospital revenue by 25%
- From AIin.Healthcare’s news partners:
- Cardiovascular Business: AI-enabled CCTA evaluations reduce use of invasive imaging exams
- Health Imaging: ‘ThyGPT’ slashes rates of thyroid nodule biopsies
- Radiology Business: Most women have yet to form an opinion about breast imaging AI
- Health Imaging: AI assistance could cut screening-related costs by up to 30%