Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- How can AI help solve the problem of the global physician shortage? That’s just one of many excellent questions taken up by a distinguished panel of AI-experienced professors during a roundtable discussion hosted by Harvard’s T.H. Chan School of Public Health. Responding to that particular question, Lucila Ohno-Machado, MD, PhD, of Yale said AI can certainly step in when all that’s needed is a solid clinical opinion on a simple medical problem. “But I must say,” she added, “[human] expertise is not dead.” In fact, she believes AI will only make physicians’ clinical know-how “more valued than ever.” Milind Tambe, PhD, of Harvard concurred and pointed out that AI can do things like help increase vaccination rates. “Where, exactly, might [human] intervention be the most useful?” Tambe asked. Who should get vouchers for traveling to vaccination sites, who should get a ride and who should get just a reminder? “Machine learning tools,” Tambe said, “can be precise at figuring out where each of these interventions would be most effective.” Watch the full discussion on YouTube.
- Ohno-Machado’s above argument gets support from new research. After systematically putting ChatGPT-4 through its paces, clinical investigators at Mass General Brigham concluded the tool can boost efficiency and contribute to patient education—but it surely should not be turned loose absent a doctor in the loop. And even that won’t always be enough. “As providers rely more on large language models, we could miss errors that could lead to patient harm,” explains the study’s corresponding author, Danielle Bitterman, MD. “This study demonstrates the need for systems to monitor the quality of LLMs, training for clinicians to appropriately supervise LLM output, more AI literacy for both patients and clinicians, and—on a fundamental level—a better understanding of how to address the errors that LLMs make.” Mass General Brigham news item here, journal study here.
- “We have definitely seen a trend toward decreasing ‘pajama time.’” That’s what some doctors call the sleepy nighttime period in which they find themselves finishing up the day’s documentation duties and administrative tasks. The quote is from Andrew Narcelles, MD, a family medicine practitioner at OhioHealth. The healthy trend to which he refers has been made possible by clinical notetaking bots from Nuance. Narcelles spoke with Axios reporter Ned Oliver, who reports that drafts created by the AI-enabled software “aren’t always perfect, but the early reviews are overwhelmingly positive.”
- If you thought social psychology’s replication crisis was bad, wait till you consider how bad AI’s reproducibility crisis-in-the-making could get. The principle is the same. One scholarly study arrives at a set of firm conclusions only to have them overturned when a follow-up study tries to replicate or reproduce the science. Fortunately, AI researchers can learn from past mistakes in other fields. And some are focused on doing precisely that. One of them, Princeton computer scientist Arvind Narayanan, PhD, tells his institution’s news operation that the scientific literature, “especially in applied machine learning research, is full of avoidable errors. And we want to help people.” Who’s we? And how are they aiming to prune this problem before it blooms? Get the basics here, explore the complexities here.
- Taken one at a time, emerging technologies transforming healthcare are only so impressive. But string together a handful and you’ve got yourself a genuine gee-whiz moment. Medscape delivers one of those in a zippy little roundup.
- Meta is overdosing on AI. And its users are feeling trapped in its bad trip. That’s the sense you get from tech, business and media journalist Scott Nover. One of the examples he gives to back up his take is the switcheroo Instagram seems to have pulled with one of its most basic functions. The platform’s search bar, “once a place to look up a friend’s account, now exists seemingly to usher users into conversation with a chatbot,” Nover reports in Fast Company. When it urges him to “Ask Meta AI anything,” he mentally shoots back: “Um, no. I just want to look up my dog’s daycare to see if they posted any pictures of her.” Read and relate.
- Bill Gates keeps trying to leave Microsoft. GenAI keeps pulling him back. Business Insider has the goods on him. “In early 2023, when Microsoft debuted a version of its search engine Bing turbocharged by the same technology as ChatGPT, throwing down the gauntlet against competitors like Google, Gates, executives said, was pivotal in setting the plan in motion,” reports chief tech correspondent Ashley Stewart. “While [Microsoft CEO Satya] Nadella might be the public face of the company's AI success—the Oz who built the yellow-brick road to a $3 trillion juggernaut—Gates has been the man behind the curtain.” Read it all.
- Recent research roundup:
- UCLA Health: Machine learning tool identifies rare, undiagnosed immune disorders through patients’ electronic health records
- Northumbria University: AI experts explore the ethical use of video technology to support patients at risk of falls
- SUNY Buffalo: New algorithm cuts through ‘noisy’ data to better predict tipping points
- Various: An AI blood test purports to diagnose postpartum depression (via Washington Post)
- From AIin.Healthcare’s news partners:
- Cardiovascular Business: ChatGPT struggles to evaluate heart risk—but it could still help cardiologists
- Health Imaging: New research offers reminder of why ChatGPT should not be used for second opinions