Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • To improve the nation’s fiscal health, improve its population’s actual health. Looking into the connection, CNBC notes the potential for both to happen under an AI-forward Trump Administration II. The piece quotes Ajay Agrawal, a University of Toronto researcher who concentrates on the economics of AI. “Many people are fearful of reducing regulation because they don’t want technologies that are immature to be brought into the healthcare system and harm people,” Agrawal says. “And that’s a very legitimate concern. But very often what they fail to also put into their equation is the harm we’re causing people by not bringing in new technologies.” And it goes without saying that sicker populations are more expensive to care for than healthier ones. Read the item
     
  • Along that same line of thinking, consider the 80 million low-income Americans who depend on state-administered Medicaid programs. This subpopulation tends to have less access to care and poorer outcomes than the population as a whole. To close the gap, the Federation of American Scientists is proposing an AI for Medicaid initiative. CMS, the proposal argues, should launch such a project to “incentivize and pilot novel AI healthcare tools and solutions targeting Medicaid recipients,” writes the author of the piece, Harvard grad student Pooja Joshi. “Leveraging state incentives to address a critical market failure in the digital health space can additionally unlock significant efficiencies within the Medicaid program and the broader healthcare system.” Read the rest
     
  • Federal AI guardrails are taking shape. CMS is seeking to codify language prohibiting Medicare Advantage plans from using AI to “discriminate on the basis of any factor that is related to the enrollee’s health status.” The agency is also keen to make sure MA plans administered with AI “provide equitable access to services.” The quotes are from a broad proposed rule scheduled to be published in the Federal Register Dec. 10. Fact sheet on the full proposed rule here; good summary coverage of the AI piece by GovInfo Security here
     
  • Don’t conflate AI in medicine with AI in medical education. “A system designed to optimize a busy physician’s time should not be blindly applied to a trainee still learning the art of medicine,” explains Naga Kanaparthy, MD, MPH, of Yale in commentary published by MedPage Today. “Teaching and exposing trainees to the most effective technologies is important if we want the best possible healthcare, but not at the expense of establishing a sound medical foundation.” 
     
  • Healthcare AI is raising some thorny ethical questions. Imagine the technology suggesting aggressive treatment for one patient—call her Patient A—because it deems her likely to benefit, and a “wait and watch” approach for Patient B, whose prognosis is iffier regardless of intervention. “From a utilitarian perspective, prioritizing Patient A might make sense,” Dr. Rubin Pillay blogs at Rubin Reflects. “But what happens to Patient B’s right to equal treatment? Do we redefine fairness when medicine knows more about individual probabilities of success?” Read and mull
     
  • RapidAI and Viz.ai top the list of vendors whose imaging AI products have been adopted by healthcare providers. The roster is from KLAS Research, which also found Aidoc, Nuance and Riverain are the frontrunners among suppliers whose imaging AI products are under consideration. KLAS further found traditional imaging IT vendors—Sectra, Agfa HealthCare, Fujifilm and others—own considerable mindshare in the space too. Report available here
     
  • Having been born into a world awash with tech, today’s kids are hard to impress. But they seem to be loving Honda’s AI-powered robot, Haru, when it visits them in the hospital. It looks a little bit like a mashup of a frog and Johnny 5 from the sci-fi movie Short Circuit, TechRadar reports. “But underneath the cutesy exterior, Haru has played a very serious role in assisting and enhancing the lives of children undergoing long-term [inpatient] treatment.” Story and photos
     
  • Has it only been two years since ChatGPT shook up the world? Yes, but it’s been a long couple of years. It seems that way anyway, given all that’s happened with large language models since late 2022. And yet, for all the hoopla, the perfect use case for generative AI has yet to emerge. So notes Axios reporter Megan Morrone to mark the anniversary. Still, she observes, the preceding 24 months “have proven the technology’s allure—and that will drive the industry to keep looking till it finds a killer app.”
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.