Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- Who is Sriram Krishnan, and why is he headed to the White House? He’s a combination engineer, entrepreneur and investor who lately has been working as a general partner at the venture capital firm Andreessen Horowitz. And he’s going to Washington because President-elect Trump just named him senior policy advisor for AI. Announcing the appointment on social media, Trump says Krishnan will work in various capacities to help ensure continued American leadership in AI. Meanwhile Krishnan will likely promote open-source mechanisms for websites and social-media platforms to “exchange value” with AI assistants. That prediction is based on an op-ed he published in the New York Times in 2023. “Some industry experts believe the answers [to data hoarding] are in legal action and older sites forming content alliances,” he noted at the time. “As a technologist, my hope is that the answers lie in code rather than lawyers and that we see creative technology solutions to help keep the internet open.” The Hill has more on the tech team Trump is assembling under AI & crypto czar David Sacks. CIO has more on Krishnan.
- Hospitals really ought to step up their game on advanced analytics and AI investment in 2025. That’s the considered opinion of analysts at Kaufman Hall under its new parent company, Vizient. As provider organizations are famously data-rich and information-poor, AI and digital analytics toolkits “can reveal important interconnections,” explains Kaufman Hall senior VP Erik Swanson. “They are a sorting mechanism to determine what is most important to focus on, which you can then use to create objectives and action plans around key performance indicators.” Vizient/Kaufman Hall expounds on this and other points in a 2025 trends report released in December.
- Calls are increasing to monitor AI medical devices throughout their life cycles. But that kind of attention doesn’t come cheap. It requires continuously retraining not only the algorithms but also the humans who are responsible for them. “You need people, and more machines, to make sure the new tools don’t mess up,” KFF points out in an article picked up by CBS News. “Everybody thinks AI will help us with our access and capacity and improve care and so on,” says Nigam Shah, chief data scientist at Stanford Health Care. “All of that is nice and good, but if it increases the cost of care by 20%, is that viable?”
- Relatedly, the older a large language model gets, the more likely it is to develop—no kidding—dementia. It looks enough like actual senility, anyway, for academic researchers in Israel to be tagging it as such. Evaluating several models with the Montreal Cognitive Assessment (MoCA) test, the team found that, with the exception of ChatGPT 4o, almost all LLMs “showed signs of mild cognitive impairment.” They go even further. “As in humans, age is a key determinant of cognitive decline,” the researchers report in The BMJ. “Older chatbots, like older patients, tend to perform worse on the MoCA test.”
- In fact, let’s go ahead and say generative AI is not ready for primetime in medtech. Or we can just hold our tongues while nodding in agreement with medtech founder and CEO Erez Kaminski of Ketryx. He comes right out and pronounces it. “While generative AI might eventually play a role in administrative functions or patient education, we’re not ready to deploy it in clinical or life-or-death scenarios in 2025,” Kaminski writes in Medical Product Outsourcing. “For now, the risks far outweigh the potential rewards. Simply put, we don’t know how to safely control generative AI in life-and-death situations—and won’t for some time.” Hear him out.
- That’s not to say GenAI is DOA all across healthcare. After all, successful use cases are really not hard to find. A key thread running through all of them is the use of AI as an assistant for humans who oversee it—not as a know-it-all who might be able to do it all. “As long as organizations can keep that idea in mind as they implement AI, they will be in position to succeed during this era in which healthcare is being transformed by AI,” writes Tim Wetherill, MD, chief clinical officer at Machinify. His short list of scenarios in which AI should not be used, published by Unite.AI, includes cases that involve denying claims and care, relying on past decisions, building on legacy systems and leaning on old data.
- Nurses aren’t going to like this. What would be Robert F. Kennedy Jr.’s view of healthcare AI should Congress approve his nomination as HHS secretary? Fierce Healthcare put the question to Paul Mango, who served as HHS deputy chief of staff during the first Trump administration. Suggesting Kennedy would probably consult with DOGE leaders Musk and Ramaswamy, Mango wonders aloud: “What if we could use technology to help monitor patients differently than the way nurses did it 20 years ago, and we only needed half the number of nurses?” Read the whole thing.
- Recent research in the news:
- University of Birmingham (UK): New recommendations to increase transparency and tackle potential bias in medical AI technologies
- Osaka Metropolitan University: Robot rehabilitation can offer optimal post-stroke treatment
- Keck School of Medicine: USC joins Ryght Research Network to streamline clinical trials with AI
- Funding news of note:
- From AIin.Healthcare’s news partners:
- Cardiovascular Business: Cardiology, radiology specialists debate CCTA’s rise as a go-to imaging modality for CAD
- Health Imaging: Can large language models break language barriers in radiology reports?