Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • No sooner did seven major AI players individually pledge to promote safety, security and trust than four of them huddled up to do it their way—together. Calling their initiative the Frontier Model Forum, Microsoft, Anthropic, Google and OpenAI unveiled the plan July 26. The founding members define frontier models as large-scale machine-learning models that “exceed the capabilities currently present in the most advanced existing models and [that] can perform a wide variety of tasks.” They welcome other companies and organizations to join. Details posted by Microsoft here.
     
  • Both Google and Microsoft reported solid revenue growth and profits for the second quarter. The robust results probably had little to do with AI just yet. However, as noted in The Wall Street Journal July 25, the companies’ ever-expanding health and wealth show why these two Big Tech titans “are the best suited to build on a technology that requires massive computing resources—and equally massive pocketbooks.” Tech and business reporter Dan Gallagher says Microsoft expects dollars from AI services to start arriving by the end of this fiscal year.
     
  • Beware of model collapse. That’s what they call it when AI models are repeatedly trained on AI-generated data. The eventual effect is akin to a copy of a copy of a copy generated by the office Xerox machine. And it’s just one pitfall AI adopters in healthcare and elsewhere need to avoid unless they want to risk data-privacy breaches, IP loss, security lapses and “a host of other issues lying in wait for unsuspecting organizations pushed by their boards and C-suites not to miss the AI boat.” The warning is from Felix Van de Maele, CEO of data intelligence cloud platform Collibra. He offers a brief on model collapse and other risks of footloose AI governance in a piece published July 25 in Fast Company. (For a toy numerical illustration of the copy-of-a-copy effect, see the sketch after this list.)
     
  • And then there are the legal risks unique to AI adopters in healthcare. Attorney Neville Bilimoria, a partner at the law firm Duane Morris, takes a quick look at potential points of exposure that can arise from, in particular, providers’ reliance on vendors of AI-equipped products. Likely vulnerabilities involve patient privacy, intellectual property and indemnification provisions. McKnight’s Long-Term Care News has the item.
     
  • Researchers are using AI to detect biochemical patterns across massive datasets that may have little in common with one another. Each dataset contains unmined info on how antibodies interact with one or more viruses or other immune-system challengers. The scientists hope the work will yield new or improved vaccines, drugs and/or cancer treatments. Lay explanation from the La Jolla Institute for Immunology here, scientific paper here.
     
  • Eye-catching investments in healthcare AI:
     
    • Hippocratic AI raises $15M and more digital health fundings
    • GenHealth.AI Accelerates into healthcare AI market with $13M in new funding
    • Sanguina Raises $2.8M in Series A Funding to Drive Innovation in Home-Based Testing and Wellness Management
       
  • From AIin.Healthcare’s news partners:
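
The mechanism behind model collapse, flagged in the Collibra item above, is easy to see in a toy experiment. The sketch below is our own illustration, not anything from Van de Maele’s piece: it fits the simplest possible “model,” a Gaussian summarized by its mean and standard deviation, to some data, generates synthetic data from that fit, retrains on the synthetic data and repeats. Only NumPy is assumed.

    # Toy model collapse: each generation "trains" (fits a Gaussian) only
    # on data sampled from the previous generation's fit, so information
    # about the original data is lost a little at a time.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Generation 0: "real" data drawn from a standard normal distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=50)

    for generation in range(61):
        mu, sigma = data.mean(), data.std()  # fit the "model"
        if generation % 10 == 0:
            print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
        # The next generation's training data is purely model-generated.
        data = rng.normal(loc=mu, scale=sigma, size=50)

Because each refit discards a little information (the sample standard deviation underestimates the true spread, and estimation noise compounds across generations), the printed std tends to decay toward zero while the mean drifts: the statistical analogue of photocopying a photocopy.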
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
