Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • New Year’s Day brought the go-live for California’s new law forbidding the use of AI to deny health insurance claims. Known as the “Physicians Make Decisions” law, the measure had been teed up since last September, when Gov. Gavin Newsom signed the bill. As The Mercury News points out, the law’s already high profile is now likely to rise even higher in the aftermath of the New York City murder of UnitedHealthcare executive Brian Thompson. The killing “ignited a wave of reactions that often reflected the public’s anger,” the newspaper notes. Meanwhile the law’s primary author, state Sen. Josh Becker, says an AI algorithm “cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences. This law ensures that human oversight remains at the heart of healthcare decisions.” Having watched the legislation take shape in the Golden State, some 19 other states are now looking to pass similar laws. “We’ve even been contacted by multiple congressional offices considering federal legislation,” Becker tells the outlet. “Our priority is helping Californians, but setting a national model is just as important.”
     
  • AI startups concentrating on the medical scribe market raised $800 million in 2024. That’s more than double 2023’s $390 million. The figures are from PitchBook, and they unsurprisingly caught the eye of AI watchers at the Financial Times in London. “I don’t think I’ve ever seen anything more transformative in 15 years of healthcare than this,” a primary care physician in South London tells FT. The doctor, Harpreet Sood, has been using Nabla’s ambient AI assistant for the past 15 months. “It’s been remarkable, easily saving three to four minutes of every [10-minute] consultation,” he adds, “and really helping to capture the consultation and what it’s about.” Sood is aware of the technology’s propensity for hallucinations and says he wouldn’t use it without checking its work. Still, “for me personally, it has been a big shift.” 
     
  • OpenAI has hinted it’s about to release a new AI model that may blow some minds with its humanlike reasoning. In a wide-ranging Q&A with Bloomberg Businessweek, CEO Sam Altman only ducks a little when asked if the latest iteration might constitute artificial general intelligence. “[W]hen an AI system can do what very skilled humans in important jobs can do—I’d call that AGI,” Altman replies. “[I]f you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, ‘OK, that’s AGI-ish.’” He also shares his thoughts on the incoming Trump administration vis-à-vis AI. The interview is behind a paywall, but Pymnts has a good summary.
     
  • Not to be outdone, Elon Musk is trumpeting on X that xAI’s Grok 3 is coming soon. The updated iteration will have 10 times the compute power of Grok 2, Musk promises. Reporting on the teaser for Tom’s Hardware, tech writer Anton Shilov notes that, as part of its present pursuits, xAI plans over time to deploy a supercomputer powered by more than a million GPUs. That version of xAI’s Colossus supercomputer “will be used to train LLMs that will likely contain trillions of parameters and will be far more accurate than Grok 3 or GPT-4o,” Shilov writes. “However, in addition to a greater number of parameters, newer models may feature more advanced reasoning, which brings them closer to artificial general intelligence, which is the ultimate goal for companies like xAI and OpenAI.”
     
  • Of course AI can greatly accelerate drug discovery. Every healthcare AI watcher knows that. But did you know it’s also capable of easing participants’ burden during clinical trials? It’s true. AI pulls it off by predicting optimal dosing along with safety and efficacy so human subjects don’t have to go through all that. The point is made in an article published Jan. 6 in Genetic Engineering & Biotechnology News presenting the views of four experts. One of the experts tells the journal his drug company has developed an experimental platform that “creates hundreds or thousands of distinct molecular structures on weekly time scales. Then we can carry them to a whole suite of different biological and metabolic assays.” Sounds like a major assist by any standard.
     
  • AI assistance vs. quiet quitting: Which will you choose to get you through your workday three to five years from now? Let’s get real. Few workers will have any such choice. “As transformative as AI can be, it can’t completely take over all elements of work [because] many roles require human creativity, emotional intelligence and complex decision-making,” Kathy Diaz, chief people officer at global IT services company Cognizant, tells Newsweek. “The importance of softer skills will continue to increase as generative AI and automation optimize routine tasks.” Read the whole thing.
     
  • Looking back now, 2023 was sort of the year of text-to-image AI. And 2024 was largely marked by text-to-video advances. What will shine similarly brightly in 2025? The next logical breakthrough—physical intelligence. “PI,” if you will. So suggests Daniela Rus, PhD, director of the computer science and AI lab at MIT, in a piece published by Wired Jan. 6. She advises watching for “a new generation of devices—not only robots but also anything from power grids to smart homes—that can interpret what we’re telling them and execute tasks in the real world.” Read the rest.
     
  • Recent research in the news: 
     
  • Funding news of note:
     


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.