Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Physicians aren’t complicated. When it comes to medical technology, at least. “We need to know: Does it work? Will it work in my practice? Will insurance cover its use? And, importantly, who is accountable if something goes wrong?” The perspective is from Jesse Ehrenfeld, MD, MPH. The president of the American Medical Association makes the point on behalf of all physicians by way of reiterating what he believes it will take for healthcare AI to live up to its potential. “For health AI to work,” he maintains, “physicians and patients have to trust it.” The implication is that the technology—along with the regulation, liability clarity and transparency assurances surrounding it—is not there yet. Read the rest.
     
  • ‘Garbage in, garbage out.’ Another close watcher of AI in healthcare makes much the same point as Dr. Ehrenfeld. This commentator just comes at it from a different angle. “I know that AI can and will have value in healthcare at some point,” writes Jeff Gorke, MBA, a healthcare specialist with the tax, assurance and consulting firm Elliott Davis. “But I also know that there are very real landmines defined and built by coders/programmers. Bad inputs drive inaccurate, poor-quality and dubious outputs that cannot be trusted.” Forbes published the piece June 4.
     
  • Nurses have a responsibility to be knowledgeable about emerging healthcare technologies. As for AI in particular, they should embrace it and not be frightened of it. That’s the opinion of Maura Buchanan, a past president of the Royal College of Nursing in the U.K. Nurse Buchanan made the remarks this week at the RCN’s 2024 annual congress in Wales. Another U.K. nursing leader, RCN safety rep Emma Hallam, encouraged attendees to represent the profession wherever and whenever healthcare AI tools are being designed. “Nurses at every stage of their career, of all disciplines and diverse characteristics,” Hallam said, “should be at the forefront of the thoughtful development, monitoring and regulation of AI in healthcare.” Event coverage here.
     
  • By their distinguishing characteristics will you know tomorrow’s CIOs. A survey by Deloitte’s CIO program finds only 35% of technology leaders ranking AI, machine learning and/or data analytics as their No. 1 priority. More common is shaping, aligning and delivering a unified tech strategy and vision. No less interestingly, the survey shows perceptions of the CIO role splitting between the conventional-minded and their contemporary counterparts. Old: “A technical guru.” New: “A change agent.” And so on. More here.
     
  • OpenAI continues taking it on the chin. This week the maker of ChatGPT absorbed a punch from a dozen or so current and former employees. The ad hoc group posted an open letter demanding protections for individuals who raise safety concerns from inside OpenAI and other purveyors of advanced AI systems. The former staffers signed their names while the current employees went with “Anonymous.” Their signatures are joined by two names of some note from Google DeepMind and three AI pioneers—Yoshua Bengio, Geoffrey Hinton and Stuart Russell. Even though the letter isn’t aimed solely at OpenAI, it comes after the company has endured high-level resignations over safety concerns, barbs from former executives, a brewing legal battle with Scarlett Johansson and anger over what some would call its stifling non-disparagement policies. The letter’s posting has re-raised heated chatter about all that and more.
     
  • How far should the government go in its efforts to thwart deepfakers of audio content? The question takes on new urgency in an election year. Precedent for answering it may come from the kerfuffle over a recording of a special counsel’s interview with President Joe Biden. The interview had to do with his handling of those classified documents that turned up in unsecured places. But that’s beside the point. A DOJ official gets to the heart of the matter from the audio-withholders’ perspective. “If the audio recording is released, malicious actors could create an audio deepfake in which a fake voice of President Biden can be programmed to say anything that the creator of the deepfake wishes.” On the other side are Biden’s political rivals, who accuse DOJ of trying to protect Biden from the embarrassment of sounding elderly and confused in the recording. Get the rest.
     
  • Elvis, meet Elon. The latter wants to build his multibillion-dollar AI supercomputer plant in Memphis, Tenn. Somehow that seems fitting, given the city’s eternal association with the de facto king of another lucrative realm, rock and roll. The building on which Musk has set his sights used to house a factory owned by Electrolux, the Swedish multinational home appliance manufacturer. Memphis’s mayor tells the Memphis Business Journal he’s excited about the opportunity. Meanwhile, some local leaders are raising concerns about the massive drain on power and water that supercomputers require. Good local coverage here.
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.