Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • Hospital boards should be grilling their CEOs about AI. Among the pointed questions the trustees or directors ought to ask: What is our AI strategy? What new revenue opportunities could AI usher in? And do you, as CEO, have the right resources to make good decisions about AI? The suggestions are from a pair of industry CEOs who have compiled 50 AI quiz items that board members could pose to every member of the C-suite. And that’s across pretty much every sector of the economy. The result is a light yet meaty thought exercise published in the Journal of Business & Artificial Intelligence. Check it out.
     
  • Downstream of the C-suite, hospitals’ AI governance committees should demand true transparency from AI vendors. Many are likely to do more of that in 2025, having learned from their mistakes up to now. But that doesn’t mean the vendors will suddenly stop guarding their intellectual property with all they’ve got. This clash of interests could become a “major source of tension when it comes to healthcare AI” this year, warns Brian Anderson, MD, chief executive officer of the Coalition for Health AI, aka “CHAI”. In an interview with MedPage Today, Anderson also comments on the regulatory climate developing around healthcare AI. “There is a level of caution and thoughtfulness that I’m hearing more from the regulatory community recently,” he says. Regulators realize they’d be putting the cart before the horse, he adds, if they were to “create a robust regulatory process that’s not informed by where private-sector innovators are going.”
     
  • And by the way, Dr. Anderson is having a moment. As is his CHAI. Politico headline on New Year’s Day: “The government can’t ensure artificial intelligence is safe. This man says he can.” Clarifying subhead: “Brian Anderson is ready to shape the future of AI in healthcare—if Donald Trump will let him.” Informative article.
     
  • Balancing AI risk vs. AI innovation is important in every industry. But in healthcare it’s genuinely critical—a potential matter of life and death. The risk side of the equation isn’t hard to wrap one’s mind around. It may be the other side that causes the most consternation. After all, it’s a risk of sorts to move so cautiously that you fall behind. “Your AI strategy will either put you ahead or make it hard to ever catch up,” explains Dan Priest, the new chief AI officer at PwC. “If we take a lesson from the Internet era, a lot of those early movers ended up being winners for the next 10 to 20 years. We expect to see something very similar for [organizations] that embrace AI today, both early on and in a trustworthy way.”
     
  • When it comes to infusing AI with humanity, a little confusion can be a good thing. This can be the happy case when AI research and development involves experts from different disciplines. Take it from computer scientist James Landay, PhD, co-founder of the Stanford Institute for Human-Centered Artificial Intelligence. “You’ll experience confusion, but sometimes that confusion leads to new ideas and new ways of looking at things,” Landay tells McKinsey Digital. “For example, we’ve had people working on large language models who are looking at natural language processing. And then they run into an ethicist with a background in political science who questions some of the things they’re doing or how they’re releasing their software without particular safeguards.” You do the math. More of Landay’s thinking on human-centered AI here.
     
  • Evolution finessed by human intervention turned terrifying wolves into Yorkshire terriers. Could AI similarly alter the course of human evolution? Up to a point, it just might, suggests one evolutionary biologist. “Perhaps AI, online searchable knowledge and social media posts that ‘remember’ who did what to whom will carry more of our memory burden,” writes Rob Brooks of the University of New South Wales. “If so, perhaps human brains will evolve to become even smaller, with less standalone memory.” Over many generations, he adds, cumulative changes in the same vein could chip away at such human niceties as friendship, intimacy, communication, trust and intelligence. “In a nontrivial way,” he concludes, AI could well “alter what it means to be human.” Cheers. I guess. 
     
  • Oh no. Another set of 2025 predictions? Yes. Sorry. But it’s another good one. CIO.com finds experts looking forward to small language models, humanlike (but non-AGI) reasoning abilities, AI agents that talk to each other, true multimodal AI and more. And how about mass customization of enterprise software? “Companies build custom software all the time, but now AI is making it accessible to everyone,” explains Rakesh Malhotra, head of digital and emerging technologies at Ernst & Young. “Having the ability to get custom software made for me without having to hire someone to do it is awesome.” Read the rest.
     
  • We hadn’t heard of the Journal of Business & Artificial Intelligence until this week. Better late than never. Along with the helpful article noted in the top item above, the publication currently has a think piece up that AIin.Healthcare can’t help but recommend. The title alone is worthy. In “The Coming Enshittification of AI,” tech expert James Ryan explains his concern. “The big players could build audiences by offering valuable services and then start ‘monetizing’ with sponsored links, sidebar ads and then, eventually, answers influenced by sponsors in subtle ways that the user isn’t even aware of,” Ryan writes. “If these tactics are successful, advertisers would have no choice but to jump on board. Welcome to the AI enshittification dystopia where all results are sponsored!” Read the whole thing.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.