Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Every so often, one patient’s tissue sample contaminates another patient’s microscope slides. Hey, stuff happens. And when it does, it can throw pathology AI for a loop. Researchers at Northwestern unravel the problem in a study published in Modern Pathology. Top takeaway: AI that works flawlessly in the lab can flub in the real world. And every such flub underscores the indispensability of human expertise. In the words of perinatal pathologist Jeffery Goldstein, MD, PhD, senior author of the study: “Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear—and AI companies hope—that the computers are coming for our jobs. Not yet.” Scientific paper here, Northwestern news item here.
     
  • The typical lag between raw scientific discovery and patient-ready clinical application runs around 17 years. Cleveland Clinic and IBM joined forces in 2021 to try to shorten that wait, calling their collaboration the “Discovery Accelerator.” This week the pair announced the project’s first fruit: a blueprint, of sorts, for using AI to “home in on what processes are critical to target with immunotherapy treatments” for cancer. Researchers from both organizations describe the accomplishment in a scientific paper here. Cleveland Clinic’s news office nicely summarizes it in lay terms here.
     
  • ‘Nurses don’t want AI.’ That’s just one person’s opinion, but the person is a union official who likely speaks for many. The speaker, Michelle Mahon of National Nurses United, offers the contrarian viewpoint in a San Francisco Examiner article that’s largely sympathetic to nurses helping to develop a homegrown AI model at UCSF Health. One of the developers is Kay Burke, RN, MBA, the institution’s chief nursing informatics officer. “If I have an [AI] model that tells me my patient actually might deteriorate because the risk factors are there,” Burke tells the newspaper, “then I can be more prepared and proactive [in] taking care of my patient.” Meanwhile, for Mahon, AI is “just a temporary fix for systemic issues that go beyond making room placement or HR systems more efficient.” Read the whole thing.
     
  • Last month a highly secretive meeting was held in Cambridge, Massachusetts. How closed-door was it? Enough that organizers invoked the Chatham House Rule. This means participants were free to use the information to which they were privy during the daylong get-together, but “neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.” The topic of the meeting was none other than the regulation of AI in healthcare. Condensed—and nameless—meeting minutes here. Shh.
     
  • Teenagers think about AI in healthcare too. Exhibit A: Sonia Rao, a junior at Clovis North High School in Fresno, California. When she’s not practicing her fencing skills or serving as concertmaster of the school orchestra, Sonia may be found snapping photos, playing chess, traveling—or, evidently, writing thoughtful commentaries on her other interests. The Los Angeles Times’s High School Insider presents her worthwhile thoughts on healthcare AI here.
     
  • This tech-sector veteran isn’t throwing his former colleagues under any buses. He just learned from the mistakes he and his peers presumably made while working at Nvidia and Ola. The watchful brainstormer, Gaurav Agarwal, just announced the launch of his new company, RagaAI, on a $4.7 million seed funding round. Well, on that plus a plan to turn the software loose so it can autonomously detect, diagnose and debug any glitches dogging AI. In announcing the launch, Agarwal says the product is already working for several Fortune 500 companies. (No mention of his designs on healthcare. Yet.)
     
  • Healthcare AI outfitter John Snow Labs says its open-source Spark NLP library has been downloaded a mind-boggling 82 million times. (Curious what one of those downloads looks like in practice? See the quick sketch after this list.) For more on this and other milestones the Delaware shop has passed as of this month, see here.
     
  • The WHO has released granular guidance on ethical and governance considerations around healthcare AI. The relevant document focuses on large multi-modal models, a category that includes but isn’t limited to large language models. Whether you love or loathe the World Health Organization, you can face the 95-page beast here.
     
  • From AIin.Healthcare’s news partners:
     
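Bonus for the technically curious: here’s a minimal sketch, assembled from Spark NLP’s public quick-start examples, of what one of those 82 million downloads looks like in practice. The pipeline name (“explain_document_dl”) and output key (“entities”) are stock illustrations from the library’s docs, not details John Snow Labs cites in its milestone announcement.

    # Minimal Spark NLP sketch. Assumes: pip install spark-nlp pyspark
    import sparknlp
    from sparknlp.pretrained import PretrainedPipeline

    # Start a Spark session with the Spark NLP jars on the classpath.
    spark = sparknlp.start()

    # Fetch a pretrained pipeline; each fetch like this is one of the
    # downloads John Snow Labs is counting.
    pipeline = PretrainedPipeline("explain_document_dl", lang="en")

    # Annotate a toy sentence; annotate() returns a dict of annotator
    # outputs (tokens, POS tags, entities and so on).
    result = pipeline.annotate("The biopsy was reviewed by a pathologist in Chicago.")
    print(result["entities"])  # named entities found by the pipeline's NER stage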

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.