Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Google sics Med-Gemini on GPT-4. But only in a manner of speaking. In a study comparing the two on competence at complex clinical tasks, Google’s own researchers found their brainchild—which is still under development—“surpassed [OpenAI’s] GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin.” The study authors suggest Med-Gemini’s strong performance may mean it’s not far from release into the real world. In fact the test version, they report, bested human experts at summarizing medical texts and showed “promising potential” for medical research, education and multimodal medical dialogue. A summary of the study, which has not yet been peer-reviewed, is here. (Click “View PDF” for the full study draft.)
     
  • AI’s ability to uncover occult patterns makes the technology a natural fit for cancer doctors. Chevon Rariy, MD, chief health officer and senior VP of digital health at Arizona-based Oncology Care Partners, makes the point in an interview with HIMSS Media’s Healthcare IT News. “By leveraging patient engagement tools that are AI-driven and individualized, we are transforming the way oncology care is delivered,” she says, adding that patient input guides adjustments in care plans in ways it never used to. The approach lets patients take a more active role in their care, which, Rariy suggests, contributes to better treatment outcomes as well as more satisfying patient experiences.
     
  • GenAI models are only as intelligent as the data fed to them—and the filters built into them. This became clear with one look at the bungled results Google’s Gemini image generator came up with in February. Having learned from Google’s stumble, OpenAI is working on a framework for avoiding that kind of unwanted embarrassment. Its solution, called Model Spec in an early draft iteration, will incorporate public input on how models in ChatGPT and the OpenAI application programming interface (API) should behave in interactions with end users. OpenAI says it’s not waiting for finalization to post the draft because it wants to “provide more transparency on our approach to shaping model behavior and to start a public conversation about how it could be changed and improved.” The company adds that it will continuously update Model Spec based on what it learns from sharing the framework and hearing feedback on it from stakeholders.
     
  • Here’s an AI-equipped doctor who’s surprising patients with what she’s not doing: tapping keys during patient time. A GenAI notetaking app now does that for the physician, Amy Wheeler, MD, a primary care doctor in the Mass General Brigham system. Wheeler tells The Wall Street Journal she’s gratified to be giving patients her undivided attention. Meanwhile the health system’s CMIO, Rebecca Mishuris, MD, says the pilot project will measure the value of the technology by, among other things, patient experience and physician retention. So far, Mishuris adds, “the feedback is impressive. I have quotes from people saying, ‘I’m no longer leaving medicine.’”
     
  • Do China’s AI models use any U.S. technology? What is Beijing’s stance on U.S. AI models? How accessible are OpenAI’s AI models in China? And while we’re at it, how dependent—if at all—is China on U.S. artificial intelligence technology? Reuters asks these questions in the context of rhetoric emanating from Washington about restricting exports of non-open AI models made in the USA. As the news service points out, China has similar designs of its own. Get the answers, Reuters-style.
     
  • Nobody knows what the perfect CAIO looks, sounds or acts like. “We’re still figuring it out,” explains Mark Daley, PhD, chief AI officer at Western University in Ontario, in comments made to CIO.com. “You need someone with enough technical knowledge to be able to keep up with the latest developments … and sort the ‘real’ from the mirages. But you also need someone who understands business process—not just how the organization operates, but why it operates the way it does.”
     
  • If someone wins a Pulitzer Prize using GenAI, does the AI’s creator get a share of the spoils? This is no longer just a hypothetical scenario. On May 6, two of 15 winners in the journalism category disclosed using the technology in producing their winning works. One of them, Ishaan Jhaveri of The New York Times, tells the Nieman Journalism Lab that his team didn’t use GenAI on work that otherwise would have been done manually. “We used AI precisely [for] the type of task that would’ve taken so long to do manually that [it would distract from] other investigative work,” Jhaveri says. As he puts it, the Nieman Lab adds, AI can help investigative reporters find needles in proverbial haystacks while they go about their real work: investigating and reporting.
     
  • Research roundup:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.