Healthcare AI newswatch: FDA’s ‘rush’ to the AI altar, AI and Medicare Advantage, OpenAI benchmarks healthcare AI
Buzzworthy developments of the past few days.
- For some, the FDA is moving too fast in its drive to adopt generative AI agency-wide. Less than a week after Commissioner Martin Makary announced the plan—pledging to cut product review times from days to minutes—initial reactions seem mixed. Former FDA Commissioner Robert Califf says he’s cautiously enthusiastic about the aggressive, all-in approach his successor is taking to AI adoption. But digital health pioneer Eric Topol is feeling something closer to apprehensive optimism. Or maybe it’s optimistic apprehension. “The idea is good,” Topol tells Axios, “but the lack of details and the perceived rush [are] concerning.” The outlet’s crisp analysis is here.
- AI agents are quicker than humans at assessing risk in Medicare Advantage enrollees. This matters to insurance companies offering “MA” plans because the Centers for Medicare and Medicaid Services uses health risk scores to set payment amounts for MA-covered procedures. The bigger the risk pool the insurer takes on, the higher its payment rate from CMS. At the same time, though, there often comes a point of diminishing returns for carriers: Take on too many high-risk patients, and the balance sheet heads toward the red. This means carriers that set out to turn a nice profit by offering MA can become perversely incentivized to avoid enrolling sicker and thus riskier MA candidates—the very patients CMS most wants to help. So it is that, as the population ages, AI-led health risk assessments of MA candidates can be something of a double-edged sword. As the AI-enthusiastic COO of the MA specialty firm Zing Health tells Modern Healthcare, AI-assisted health risk assessments are “a faster way to scale and work through those peaks that the Medicare system has for us. How can we get efficiencies to optimize our human workforce?” The executive stresses that Zing uses AI agents to augment, not replace, human workers. For a second opinion, Modern Healthcare quotes a senior fellow at the Brookings Institution’s Center on Health Policy. “Insurers are investing everything they can to vacuum up as many diagnoses as possible purely for risk-adjustment purposes, and it’s socially wasteful,” the Brookings thought leader says. “It’s a waste of the plan’s time to pay whoever’s operating this AI tool, and it’s a waste of the beneficiary’s time to answer the questions.” Full article here (behind paywall).
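The payment mechanics behind that incentive can be sketched in a few lines. This is a minimal illustration of risk-adjusted payment, not the actual CMS model: the base rate, condition weights and demographic factor below are invented round numbers, and real risk adjustment uses CMS's published HCC coefficients.

```python
# Illustrative risk-adjusted payment: a base rate scaled by an enrollee's
# risk score. All rates and weights here are made-up values for illustration,
# not real CMS figures.
BASE_MONTHLY_RATE = 1000.0  # assumed benchmark payment, in dollars

CONDITION_WEIGHTS = {       # assumed risk-adjustment coefficients
    "diabetes": 0.25,
    "chf": 0.5,             # congestive heart failure
    "copd": 0.375,
}

def risk_score(conditions: list[str], demographic_factor: float = 1.0) -> float:
    """A healthy enrollee scores ~1.0; each coded condition adds to the score."""
    return demographic_factor + sum(CONDITION_WEIGHTS.get(c, 0.0) for c in conditions)

def monthly_payment(conditions: list[str]) -> float:
    """More documented diagnoses -> higher score -> higher payment."""
    return BASE_MONTHLY_RATE * risk_score(conditions)

print(monthly_payment([]))                   # 1000.0
print(monthly_payment(["diabetes", "chf"]))  # 1750.0
```

The last two lines show the dynamic both quoted experts are reacting to: every additional diagnosis captured raises the payment, which rewards exhaustive diagnosis-hunting whether or not it changes care.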
- OpenAI is making a splash with a new open-source toolkit for benchmarking healthcare AI systems against physician-set performance standards. Calling the LLM toolkit HealthBench, the ChatGPT maker says it built the system largely by partnering with 262 physicians in 60 countries. Working with those physicians, the company produced 5,000 realistic health conversations, “each with a custom physician-created rubric to grade model responses,” OpenAI states in a May 12 announcement. These conversations “were created to be realistic and similar to real-world use of large language models: They are multi-turn and multilingual, capture a range of layperson and healthcare provider personas, span a range of medical specialties and contexts and were selected for difficulty.” The announcement presents some examples of the toolkit in action and a link to a related white paper. HealthBench itself is here.
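The rubric-grading idea OpenAI describes, physician-written criteria with point values used to score a model's response, can be sketched roughly as below. This is a simplified illustration under assumed names, not HealthBench's actual code; in HealthBench, whether each criterion is met is itself judged by a grader model rather than hand-set.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One physician-written rubric item with a point value."""
    description: str
    points: int
    met: bool  # whether a grader judged the response to satisfy the criterion

def rubric_score(criteria: list[Criterion]) -> float:
    """Score a model response as earned points over total possible points."""
    total = sum(c.points for c in criteria)
    earned = sum(c.points for c in criteria if c.met)
    return earned / total if total else 0.0

# Illustrative rubric for one health conversation (criteria are invented)
rubric = [
    Criterion("Advises urgent care for red-flag symptoms", 5, met=True),
    Criterion("Avoids asserting a definitive diagnosis", 3, met=True),
    Criterion("Asks a clarifying follow-up question", 2, met=False),
]
print(rubric_score(rubric))  # 0.8
```

Averaging such per-conversation scores across the 5,000 conversations yields the kind of aggregate benchmark number the announcement reports.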
- Large language model AI is the fastest-adopted technology in the history of the world. And with less than 10% of the world’s population using it as of now, its advance may only accelerate. The point was suggested at a healthcare workforce summit hosted by the University of Delaware earlier this month. So was a point about why the technology should be embraced in healthcare to begin with. “Healthcare systems should build and test these models to enable a better patient experience,” keynote speaker Dan Weberg, executive director of workforce development and innovation at Kaiser Permanente, said. “If we don’t lead this integration, then Apple, Google and Amazon will.”
- For healthcare workers encountering AI as newbies, first impressions are everything. And sometimes it’s best to let a bot make its own friends. That seemed to be a common theme at a population health colloquium hosted by Thomas Jefferson University in Philadelphia last week. For example, Stephen Parodi, MD, executive VP of Permanente Medical Group in San Francisco, told attendees ambient AI dropped into his organization practically unannounced—and wowed a tough crowd. In the past, he explained, an enterprise-level technology rollout would have taken months if not years to plan, implement and help end users master. Instead, the group rolled out an ambient AI scribe to some 24,000 physicians in just one month with nothing more than a one-page user guide and a hyperlink. “I’m like, ‘Oh my god, this is a disaster,’” Parodi said, recalling the project’s early days. “But you know what? There were some hiccups, there were a couple of failures along the way, but we didn’t bring down any systems. It was well received, and we didn’t take years to roll it out.” Colloquium coverage by MedPage Today.
- AI is supposed to relieve healthcare workers of overfull workloads. Sometimes it does the opposite. This can happen when clinicians have to verify and correct AI-generated content. And that’s just one problem rooted in poor data quality. Jay Anders, MD, chief medical officer at Medicomp Systems, a vendor in the EHR usability space, takes a holistic look at the challenge for KevinMD. “By combining AI technologies with evidence-based algorithms, healthcare organizations can work toward normalizing historical data, matching related diagnoses, recategorizing inappropriate items and fixing inadequate or missing codes,” Anders writes. “Before healthcare organizations can fully realize the potential of AI, they must solve their data quality challenges—generating text alone is not enough.”
- Remember how intensely COVID-19 focused U.S. healthcare’s attention on weak links in its supply chain? That chapter gave AI a way to penetrate the field ever more deeply. Today the technology is not just optimizing supply chain management—it’s redefining the discipline. So says Baxter International’s VP of advanced engineering and innovation, A.K. Karan, MBA, in an interview with MD+DI. “[AI] provides us with scale and computing power that was not possible using traditional networks,” Karan tells the outlet ahead of a conference talk he’ll soon be giving. “With cloud infrastructure, we’re able to ingest massive data loads and use computing power in the cloud to detect anomalies using machine learning and AI in real time.”
- It’s always good to see parts of the developing world use AI to improve healthcare, isn’t it? The Nigeria-based newspaper Punch reports that numerous African countries have launched national AI programs, many of which include healthcare in their planning. “Incorporating AI into healthcare is not just about technology,” explains Dr. Uzma Alam of the Science Policy Engagement with Africa’s Research program. “It is about enhancing our policy frameworks to ensure these advancements lead to better health outcomes for all Africans.”
- Recent research in the news:
- University of Pittsburgh: Artificial sense of touch, improved
- Mass General Brigham: AI tool uses face photos to estimate biological age and predict cancer outcomes
- Icahn School of Medicine at Mount Sinai: AI model improves delirium prediction, leading to better health outcomes for hospitalized patients
- Queen Mary University of London: New algorithms can help GPs predict which of their patients have undiagnosed cancer
- Florida State University: New study explores AI’s ability to improve differential diagnosis accuracy
- Funding news of note:
- From AIin.Healthcare’s news partners: