Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Healthcare AI startups that earn FDA approval are like jobseekers after landing a new job. Think about it. Both celebrate their respective wins only to realize their work has only just begun. As Healthcare Finance editor Jeff Lagasse notes, it can take seven years between FDA clearance and the time reimbursement starts rewarding providers for using the technology. Larger AI suppliers can ride out the lag, but startups and smaller players may wilt under the weight of the wait. And those who miss out the most might be the patients. Lagasse speaks with an executive at one of the fortunate few, Avenda Health, which is only the fifth AI startup to secure Medicare reimbursement for its products. “Unfortunately, the way reimbursement is set up in the U.S., it disincentivizes new technologies,” says Brit Berry-Pusey, PhD, Avenda’s COO. “If you’re really pushing the boundaries and creating something novel, it means you have to start from scratch from a reimbursement perspective.”
     
  • Fortunately, the sobering realities of AI reimbursement are little match for the high ideals of AI innovators. This comes through between the lines of an article posted by The Inscriber magazine. The writer, Afaque Ghumro, looks at 10 avenues of opportunity for healthcare software developers. Improving efficiency, deepening patient engagement and personalizing treatment plans all make the list. “The transformative impact of AI and machine learning on healthcare software development services cannot be overstated,” Ghumro writes. “As AI and ML continue to evolve, the possibilities for healthcare software development remain limitless,” he adds before suggesting the eventual outcome will be nothing less than “a more efficient and patient-centered healthcare ecosystem and a brighter, healthier future for all.”
     
  • A funny thing happened to a patient as he was getting examined by a physician using an ambient AI scribe. “While [Dr.] Sharp examines me, something remarkable happens,” the patient recounts in the Washington Post. “He makes eye contact the entire time. Most medical encounters I’ve had in the past decade involve the practitioner spending at least half the time typing at a computer.” The patient was a WaPo reporter, the doctor the chief medical information officer at Stanford Health Care. The article recounting the visit places the observations in the context of the good, the bad and the troubling around generative AI in healthcare. The strength of the piece owes much to the reportorial prowess of the patient, technology columnist Geoffrey Fowler. Read the whole thing.
     
  • Remember the research showing large language AI models going senile with age? At least one physician is taking solace in those findings. AI’s cognitive falloff, he reasons, underscores how essential human doctors remain. “I find comfort in the fact that while AI may excel in some areas, it may fall short in spatial abilities and other cognitive tasks,” writes Arthur Lazarus, MD, MBA, over at KevinMD. “Instead of fearing replacement, we should focus on integration, leveraging AI’s strengths to complement our own and creating a healthcare system that is both technologically advanced and deeply humane.”
     
  • Here’s a wise doctor who dreams of an AI tool that can protect her from her own human fallibility. “Like my patients, I too am filled with nuance and self-contradiction,” admits Permanente emergency physician Mary Meyer, MD, MPH. Publishing her ruminations in MedPage Today, she wonders: “Can future AI models warn me when I am engaged in dangerous multi-tasking? Or simply too exhausted to accurately treat my patients? Can it warn my supervisors when I am spread dangerously thin?” Meyer offers the thoughts after working for a time with software that functions like a combination scribe and administrative assistant. “My wish is for an AI tool that seeks to mitigate my Achilles’ heels,” she writes, “rather than a network that views me as a cog in a system that can always be made more efficient.” Hear her out.
     
  • Never confide in an AI chatbot with anything truly personal. That’s some heartfelt advice from the consumer tech aficionado and radio personality Kim Komando. “Even I find myself talking to ChatGPT like it’s a person,” Komando confesses in USA Today. “It’s easy to think your bot is a trusted ally, but it’s definitely not. It’s a data-collecting tool like any other.” The piece is ordered around 10 things you should never say to AI bots.
     
  • The partisan energy debate rages on—even though AI will soon make it obsolete. It will do so not by outarguing environmentalists but by devouring electricity. Neil Chatterjee, a former head of the Federal Energy Regulatory Commission, makes the case in the New York Post. “Our only option is to use every energy source at our disposal,” he writes. “And I mean everything: natural gas, solar, geothermal, hydropower, energy storage, nuclear, you name it.” Of course, that line of reasoning won’t win over staunch opponents of fossil fuels. So Chatterjee, who served during the first Trump administration, lays down his trump card: “If we don’t win the AI race, China will—and we don’t want to live in a world where communist China dominates AI.” Yeah, that probably won’t settle the debate, either. Read the piece.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.