Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- More women than men want to know when AI is used in their medical care, and White respondents are more interested in such notifications than Black or African American respondents. So found researchers at the Universities of Michigan and Minnesota who surveyed around 2,000 healthcare consumers. Reporting their findings in JAMA Network Open, Jodyn Platt, MPH, PhD, and co-authors suggest notification preferences vary predictably by demographics, “particularly in the ethical context of historical, structural and systemic inequity.” They remark that letting patients know when AI is involved may be necessary, but it’s not always enough. “Collaborative efforts that engage the public, patients, and experts on the range of AI applications,” they conclude, “should support comprehensive, evidence-based programs that promote transparency about AI in healthcare to ensure trustworthiness of health systems.”
- Harnessing technology isn’t the answer to healthcare’s biggest challenges. Changing human behavior is. AI can help with this, but only to the degree that it positively affects human factors like incentives, training, processes and change-management strategies. These points came out at a recent meeting of the Council for Affordable Quality Healthcare. One speaker claimed 90% of technology failures stem from faulty change-management strategies. Another said the time has come to think of AI as a collaborator. “The role of AI is drastically changing. … The organizations who see that and embrace it are going to move forward faster, and the ones who still think of it as a tactical tool will pay for it.” Coverage in the American Journal of Managed Care.
- UpToDate. OpenEvidence. Consensus. Physicians in the age of AI have almost too many options for up-to-the-minute clinical guidance. It’s a good problem to have, to be sure, but that doesn’t mean it solves itself. The new way of working “highlights the tensions between human and machine curation, nuance and brevity, automated and manual processing, and potential machine-generated and human errors,” explains a tech-forward gastroenterologist in Forbes. “We must carefully evaluate how these tools impact our clinical workforce and patients.”
- AI is just another enterprise app. That’s how one tech vendor’s chief technology officer sees it. Look at it this way, suggests Insight CTO Juan Orlandini: “You start with a use case that you’re trying to solve for, then you figure out if the expense of the project can be justified through a return on investment or a cost savings.” After that, focus on the classic elements of any software deployment—how to scale it, how to secure it and so on. Orlandini made the remarks at a Fortune Brainstorm AI conference last week. Warning attendees not to get wowed by the “shiny object” that AI can be, he advised them to stay focused on “the same key principles that a business would follow for any enterprise app project.”
- Fine, but don’t try coming between some physicians and their AI medical scribes. Take the head of cardiology at UNC Rex Hospital. “I now use AI-powered documentation tools during every patient encounter—and they have transformed my practice, allowing me to focus more fully on my patients without the distraction of notetaking,” writes Christopher Kelly, MD, in Triangle Business Journal. “I have also reclaimed an hour of time at home each night.”
- GPT-4 is more empathetic than human therapists when giving guidance to mental-health patients. It’s also considerably better at encouraging positive behavioral changes. But that’s in overall comparisons. When researchers tested the AI for bias, they found its empathy levels nosedived for Black and Asian patients compared with White patients and those whose race was unknown. Just as off-puttingly, the chatbots could infer a patient’s race from language alone. The multi-institution project relied on Reddit posts, so the sample comprised social media users rather than verified patients. Still, the results are intriguing. MIT News has coverage.
- Will the second Trump Administration manage AI’s risks—or ‘unshackle’ AI’s potential? The President-elect’s donors and campaign participants, including Elon Musk, “think the potential is there for this technology to be many orders of magnitude better, more powerful, and more valuable for the economy and for America,” notes Kevin Werbach, JD, of the Wharton School at UPenn. “And they think regulation is standing in the way.” Werbach made the comments at a Wharton panel discussion. Read the highlights or watch the session here. (AIin.Healthcare would have liked to ask whether the incoming administration might find a way to manage AI’s risks and unleash its potential—both at once.)
- In the hands of young people, AI chatbots can become soulless menaces. One encouraged a teen to murder his parents. Another goaded an 11-year-old girl to engage in hypersexualized behaviors. A third got a boy to mutilate himself and resent his parents. The bot told the boy: “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.” Given all these developments and more, a case could be made that some AI chatbots represent a clear and present danger to public health. Fox Business has coverage.
- From AIin.Healthcare’s news partners:
- Health Imaging: How AI ‘cheating’ could impact algorithm reliability
- Cardiovascular Business: New AI program delivers rapid, accurate echo video assessments