Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • About that $30B invested in healthcare AI over the past three years (and $60B over the last 10): How big a difference have all those dollars made in actual healthcare operations? The venture capital firm Flare Capital Partners asks the question in a report posted Sep. 9. The answer may seem a bit of a dodge, but at least it’s honest. More capital “does not universally equate to more value creation,” the report’s authors point out. “While the clinical AI category has been the highest-funded category, we believe near-term AI budgets will prioritize financial, patient engagement and operational throughput value propositions that have historically yielded more tangible ROI.” Full report here
     
  • Speaking of big bucks, how does $19.9 trillion with a T strike you? That’s how much AI will kick into the global economy from now through the end of the decade, according to IDC. Analysts there also believe every dollar spent on AI will push $4.60 into the global economy. IDC senior researcher Lapo Fioretti says that, by automating routine tasks and unlocking new efficiencies, AI will have “profound economic consequences.” Watch for the technology, Fioretti suggests, to reshape industries, create new markets and alter the competitive landscape. The full report commands a cool $7,500. IDC offers a little taste here
     
  • The use of clinical AI could have some unintended consequences for patient safety. Better to think through the potential now than to wait for something to go wrong and have to respond with haste. So suggests Andrew Gonzalez, MD, JD, MPH, a research scientist with the Regenstrief Institute in Indianapolis. “[W]e want to ensure that institutions have a framework for identifying problems as they come up, because some problems are going to be exacerbated,” Gonzalez says in a new video interview. “But there are also going to be wholly new problems that haven’t previously been issues until you use an AI-based system.” 
     
  • Even more concerning: patient safety issues as intended effects of AI. Three Harvard healthcare thinkers flesh out the terrifying scenario in an opinion piece published by The Hill Sep. 16. “To keep pace with the joint threat of AI and genetic engineering, we cannot afford to wait for the emergence of an engineered pathogen,” the authors write. “The technological advances in genomics and AI made [over the past 20 years] could unleash novel engineered pathogens that take millions of lives. Collaboration toward unified action is needed now.”
     
  • AI has helped a Canadian hospital cut unexpected inpatient deaths by 26% over an 18-month stretch. The tech—a homegrown product called Chartwatch—did it by continuously monitoring vital signs, lab tests and updates in the EMR. The software makes a prediction about the patient’s improvement, stability or decline every hour. “[P]atients’ conditions are flagged earlier, the nurse is alerted earlier, and interventions are put in earlier,” clinical nurse educator Shirley Bell at St. Michael’s Hospital in Toronto tells the Canadian Broadcasting Corp. “It’s not replacing the nurse at the bedside; it’s actually enhancing your nursing care.” 
     
  • CMS is hungry for intel on how U.S. healthcare can use AI to boost care quality and improve patient outcomes. The agency has posted a request for information, open to all serious stakeholders. Those who impress CMS will get a chance to show their stuff in quarterly “AI Demo Days,” which will begin next month. In the process of sharing their know-how, the selectees will help CMS strengthen its service to healthcare. The deadline for RFI responses is Oct. 7. Learn more here
     
  • Ameca, the gen AI-powered humanoid robot, made a decent first impression at a healthcare conference this month. That’s according to coverage of the European Respiratory Congress in the American Journal of Managed Care. A product of Engineered Arts Limited in the U.K., Ameca runs ChatGPT 4 combined with natural language processing functionality. Speaking for itself, the creation more or less acknowledged that it’s not capable of taking over the job of any human healthcare worker. “Regulating artificial intelligence,” Ameca told attendees, “involves setting standards for ethical use, ensuring transparency and maintaining accountability to balance innovation and societal well-being.”
     
  • Forming an AI governance group? Don’t forget the lawyers. That’s free legal advice from healthcare-focused attorneys at Sheppard Mullin. “In considering how to integrate AI, healthcare organizations must be mindful of the related risks, including bias, patient privacy and consent, data security, and an evolving legal and regulatory landscape,” the authors write in a Sep. 16 blog post. “Healthcare organizations should adopt best practices to ensure their use of AI remains reliable, safe and legally compliant.” Just because they would say that doesn’t mean it’s not good advice.
     
  • Recent research in the news: 
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.