Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • The U.S. Commerce Department wants proof of strong safety and security measures from AI developers and cloud suppliers. In an announcement posted Sep. 9, Commerce’s Bureau of Industry and Security says the aim is to minimize the risk of serious damage from cyberattacks. The proposal would require the affected parties to report details of their work on frontier AI specifically. Commerce Secretary Gina Raimondo says the action will “help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.” The move comes just a week after Raimondo was featured in a segment of “60 Minutes.” “If you think about national security in 2024, it’s not just tanks and missiles,” she told CBS News correspondent Lesley Stahl Sep. 1. “It’s technology. It’s semiconductors. It’s AI. It’s drones. And the Commerce Department is at the red-hot center of technology.”
     
  • When Michelle Mello, JD, PhD, speaks up about rules for healthcare organizations that use AI tools, a lot of people stop to listen. And those who don’t should. The Stanford professor of law and health policy has authored more than 250 peer-reviewed articles on topics from AI to biomedical research ethics. Her research concentrates on the effects of law and regulation on healthcare delivery and population health outcomes. “Healthcare organizations are supposed to be investigating these [AI] tools to see if they’re biased, but I have doubts that many organizations will really know how to do that in a meaningful way,” Mello says in a Q&A with the Regulatory Review. “And no one’s offering up money or technical assistance to help them.” Read the rest
     
  • Performance drift and data generalization: 2 issues that can keep FDA product reviewers up at night. The first has to do with algorithms testing well enough to gain the agency’s OK only to slowly deteriorate in real-world clinical use. The second is about making sure training and validation datasets are “large, cleaned and representative” of the populations their respective models will serve. The insights are from FDA scientific reviewer Luke Ralston, who offered them at a medical conference last week covered by MedCity News.
     
  • Here’s a dilemma you’re likely to hear more about. How capable might an AI model become of predicting care preferences for patients with cognitive impairment? The question came up at a meeting of the European Respiratory Society Sep. 7 in Vienna. Concerns arise “when the technology’s embedded ethical values do not align with the patient’s priorities for clinical decision-making,” reports journalist Hayden Klein, who covered the event for the American Journal of Managed Care. “For instance, AI could prioritize specific medical treatments and measurable outcomes over a patient’s quality of life, potentially undermining patient autonomy if they have different goals in mind.” More here
     
  • It’s time to lower expectations around AI in healthcare. Part of the dial-down is accepting that the charges of hype were at least partly right. That’s the view of Spencer Dorn, MD, MPH, MHA, vice chair and professor of medicine at the University of North Carolina. Stating his case in Forbes, Dorn suggests AI will eventually transform healthcare. But it’s going to take years. In the meantime, he recommends taking a few steps to better align hopes with reality. One of these: Accept incremental gains. “Organizations must resist audacious claims and ground in reality,” he writes. “And, most of all, they should reflect on who they are, what they do and how they can do it better—with or without AI.” Read the whole thing
     
  • Governance may be the least exciting of all AI duties and responsibilities. But it’s also among the most essential. This comes through in coverage of a recent webinar hosted by the Los Angeles-based law firm Sheppard Mullin Richter & Hampton. Partner Carolyn Metnick, JD, explained how a governance framework “serves to operationalize an organization’s values,” TechTarget’s Shania Kennedy reports. “By considering existing frameworks and regulations and how to best align with them, healthcare stakeholders can begin developing an AI governance program in line with their enterprise’s risk tolerance.” Unexciting? Maybe. Unskippable? For sure. 
     
  • The era of the AI PC is upon us. But healthcare providers need not rush to replace their existing desktop computers with shiny new ones packing AI inside. That’s not to say such a day won’t ever arrive. “For now, healthcare-specific capabilities are sparse,” CDW bloggers point out, “but organizations may want to keep an eye on the possibilities.”
     
  • The AI-ready iPhone 16 hogged the attention at Apple’s new product unveiling Monday. Lost in its shadow were two health features—sleep apnea support in Apple Watch and clinical-grade hearing aid technology in AirPods Pro 2. For more on those two straight from the source, click here
     
  • Time is out with its picks for the 100 most influential people in AI. I count four who are specific to healthcare—Insitro CEO Daphne Koller, Abridge co-founder Shiv Rao, Aeye Health CEO Zack Dvey-Aharon and Viz.ai co-founder Chris Mansi. Did I miss anyone?
     
  • Recent research in the news: 
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
