Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • CVS should probably stop making job applicants face video lie detectors armed with emotion AI. This week the company reached a tentative settlement in a pending class-action suit. The primary plaintiff alleged in 2023 that he wasn’t hired partly because Affectiva software analyzed his facial expressions during his interview without his knowledge. This flavor of emotional-intelligence AI is attractive to recruiters, who can tap it to assign job candidates “employability scores,” according to the complaint. In coverage of the development, HR Dive notes that the settlement terms are fuzzy. Left unstated is how much CVS will pay out, if anything, and whether the company has agreed to stop using emotion AI—stealthily, at least—to size up job candidates.
     
  • AI has an image problem in healthcare. Among physicians, for example, a majority worry the technology will come between them and their patients. That segment probably overlaps with the 40% who believe healthcare AI is more hype than substance. But cheer up. Images are eminently shapable. All it takes is some technological know-how and a little savvy with PR and marketing. So suggests the chief marketing officer of software supplier Athenahealth. “Just as important as building and evolving the technology,” writes the CMO, Stacy Simpson, “is our ability to market AI’s benefits to physicians and patients alike, to ensure that it’s leveraged to help reclaim what’s at the heart of exceptional care: a meaningful patient-physician relationship.” In a piece published July 23 by Fast Company, Simpson lists four simple steps to get there from here.
     
  • When healthcare historians look back at our time, they’ll make much of AI’s role in spurring change. They might even call the before & after “seismic.” Or maybe a practical “tsunamAI” of change. And they’ll judge the end result a net positive—as long as today’s healthcare leaders succeed in harnessing its power. The predictor of the scenario is Lee Shapiro, cofounder and managing partner at 7wire Ventures and a member of the Forbes Business Council. Suggesting AI will reboot healthcare across clinical, research and administrative areas, Shapiro urges healthcare leaders to balance rewards with risk, partner up with suppliers and include end-users in model development.
     
  • Lee Shapiro’s healthcare AI bullishness will get no argument from Saeed Hassanpour, PhD. A biomedical computer scientist, Hassanpour is the inaugural director of the newly created Center for Precision Health and Artificial Intelligence at Dartmouth Health in New Hampshire. “AI can revolutionize patient care by making it more predictive, preventive and personalized,” he tells Dartmouth’s news operation. “The future of AI in healthcare is incredibly promising.”
     
  • But let’s not get carried away here. In healthcare, novel technologies like AI face more regulatory, cybersecurity and financial constraints than they do in many if not most other industries. And in a recent McKinsey & Co. survey, more than 75% of health system executives said AI adoption is a priority even as they admitted they lack the resources to make it happen. The hurdles are noted by Alexis Kayser, healthcare editor at Newsweek. Upon interviewing a handful of physicians with these facts in mind, she found emotions decidedly mixed. “Like any new technology, we found that some docs really liked it,” academic cardiologist Thomas Maddox, MD, told Kayser. “Other docs said it wasn’t their thing. They pretty quickly said, ‘We aren’t gonna use this,’ and they moved away from it.” Read the rest.
     
  • Seconding the note of cautious optimism is Sowmya Viswanathan, MD, MBA. Paradoxically, emerging technologies like AI can actually thwart digital transformation, warns the chief physician executive at BayCare in Florida. She reminds Healthtech Analytics that AI is still new—and still expensive. “You put ‘AI’ on the project, and the cost goes up tenfold,” Viswanathan says. What’s more, she adds, “If it’s going to add to the burden of provision of care, it will be a failure.” She’s not completely bearish on the technology—just realistic bordering on skeptical. Hear her out in an article filed by Xtelligent reporter Shania Kennedy.
     
  • Microsoft is working with two academic health systems on generative AI for medical imaging. Together with Mass General Brigham in Boston and UW Health in Wisconsin, the Big Tech biggie is looking to refine multimodal AI foundation models for the benefit of patients, clinicians and administrators alike. (In order: shorter wait times for test results, burnout relief and bottom-line improvement.) Microsoft says the collaborations will facilitate research and innovation projects aimed at “delivering a wide array of high-value medical imaging copilot applications.”
     
  • Mistakes, mishaps and failures. AI stumbles into its fair share of all three. The U.K.-based website Tech.co won’t let it forget a single incident. Among the greatest misses of 2024 so far: “New York City chatbot advises small businesses to break the law,” “Horrifying Willy Wonka experience captures the world’s attention” and “Air Canada defeated in court after chatbot lies about policies.” Catch up with these and other amusing and mostly harmless AI stumbles here.
     
  • Recent research in the news:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.