Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • The U.S. Food & Drug Administration is explaining how it plans to coordinate AI oversight across the four sub-agencies that deal with medical products. In a paper released March 15, FDA outlines four priorities guiding it in this expansive (and surely daunting) effort: fostering collaboration, advancing regulation, promoting standards and supporting research. The sub-agencies are CBER, CDER, CDRH and OCP. Unscrambled, the alphabet soup stands for the Center for Biologics Evaluation and Research, the Center for Drug Evaluation and Research, the Center for Devices and Radiological Health and the Office of Combination Products. Read the paper here.
     
  • All well and good there, but one close FDA watcher believes the agency needs an entirely new approach for dealing with AI-enabled medical devices. The watcher is Benjamin Zegarelli, JD, a regulatory compliance specialist with the Mintz law firm. Zegarelli has every confidence FDA will adapt to the fast-changing field of AI regulation and carry out its duties with aplomb. However, he clarifies, the agency can’t be expected to go it alone. Zegarelli and colleagues “hope that Congress will act at some point to give FDA additional authority to classify, authorize and regulate AI/machine learning devices in a way that fits the technology, enables and incentivizes innovation, and enhances patient safety.” Full piece here.
     
  • Chatbots powered by large language models and pitched for mental healthcare should be regarded with caution. Among those who should be extra wary are individuals who might be vulnerable to the charms of an amateurishly conceived bot and blind to its shortcomings. This is the warning of Thomas Heston, MD, a clinical instructor of family medicine at UW Medicine in Washington. Expounding on research he recently published, Heston tells UW’s newsroom: “Chatbot hobbyists creating these bots need to be aware that this isn’t a game. Their models are being used by people with real mental health problems, and they should begin the interaction by giving the caveat: ‘I’m just a robot. If you have real issues, talk to a human.’” News item with link to study here.
     
  • Generative AI isn’t always so great at matching job candidates with openings, either. So found one brave explorer who took up a recruiter’s offer of positions to consider. The recruiter assured the prospect that the openings had been “chosen by AI just for you.” Or something along those lines. Alas, the recommended jobs “were so off the mark, they were laughable,” writes the tester, who by the way wasn’t even looking to make a career change when he received the unsolicited email that started the exchange. “Almost all were outside of my industry, most were in a discipline where I have absolutely no experience or credibility, and many were for positions out of sync with my experience.” The writer, Bradley Lohmeyer of the advisory firm Ankura, draws out some observations on generative AI that extend beyond the anecdote. Read it all.
     
  • ‘Any doctor who can be replaced by a computer deserves to be replaced by a computer.’ The late Harvard physician and clinical informaticist Warner Slack, who died in 2018, famously said so way back in the 1960s. This week the maxim gets an update. “Any healthcare task that can be made safer with AI,” writes Karandeep Singh, MD, chief health AI officer at UC San Diego Health, “deserves to be made safer with AI.”
     

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
