Industry Watcher’s Digest

Buzzworthy developments of the past few days

  • This is the week of DeepSeek. China’s open-source, light-on-chips model has actually been out since last year. But a new version, R1, came out Jan. 20, and it took most folks a week to realize what they were looking at. By this Monday, the 27th, DeepSeek had shot to the top of Apple’s App Store. Along the way to that eyebrow-raising mile marker, it handed Nvidia the biggest one-day loss of market value ever seen in the U.S.: close to $600 billion. The stock markets seem to be taking a deep breath now. But even if DeepSeek ends up being more sizzle in the pan than steak on the plate, its overnight fame could reset the table. President Trump called the news a “wakeup call” for American tech companies. “We need to be laser-focused on competing to win,” he added, “because we have the greatest scientists in the world. Even Chinese leadership told me that.”
     
  • The lone exception to the instant market reshuffle may be Apple. Tim Cook’s company saw its stock rise while AI rivals Alphabet and Microsoft took hits. Apple stands to benefit from a disruption to its competitors’ efforts, Business Insider remarks, because its AI strategy emphasizes integration over cutting-edge model development. BI also notes that OpenAI’s Sam Altman plans to speed up releases of the company’s models in response to the newly “invigorating” competition. On the national security front, Michigan GOP Rep. John Moolenaar warned the U.S. not to “allow Chinese Communist Party models such as DeepSeek to risk our national security and leverage our technology to advance their AI ambitions.” So, lots of important angles to consider. DeepSeek’s blastoff is a fast-developing story for all AI watchers, including those focused on AI in healthcare.
     
  • Will next week be the week of Qwen? Don’t be surprised. Qwen 2.5 is Alibaba’s updated entry in the AI market wars. DeepSeek’s homeland competitor claims its latest model can outperform not only DeepSeek-V3 but also OpenAI’s GPT-4o and Meta’s newest iteration of Llama. It’s also said to play well with computers, phones and video players.
     
  • AdvaMed clashes with radiology group over GenAI regulation. In this corner, the med-tech lobby outfit wants to maintain the status quo. “The FDA’s current framework is likely sufficiently robust to manage the unique considerations of generative AI in medical devices,” AdvaMed says. “Additional authorities or regulations targeting GenAI-enabled devices without first understanding if there are any gaps in the existing framework are unnecessary and could hinder progress.” And in this corner, the American College of Radiology lobbies for more granular oversight: “There should be a standard FDA framework for clinical validation that includes minimum requirements for training data diversity, standardized testing protocols across different clinical scenarios and performance benchmarks for specific clinical tasks.” Regulatory Focus airs out the debate. 
     
  • Machine learning vs. military suicide. A new study shows AI can help identify soldiers who are likely to try taking their own lives within six months of their annual checkup. Published in the journal Nature Mental Health, the study describes an algorithm that flagged the 25% of soldiers who went on to make almost 70% of known suicide attempts. The model could be used to identify soldiers who “should be referred to behavioral health treatment, as well as to suggest which soldiers already in treatment need more intensive treatment,” the study’s authors comment. Study here, coverage by Stars and Stripes here.
     
  • Rest in peace, electronic medical records. The advance well-wishes for the digital afterlife come from healthcare futurist Rubin Pillay, MD, PhD, MBA. “The era of static EMRs is ending; the age of [AI-powered] medical record management is just beginning,” he writes on his Substack, RubinReflects. “[T]he potential benefits make this shift not just desirable but necessary.” Pillay has thought through the particulars of his forecast. Hear him out.
     
  • Anyone trying to align AI behavior with human values is on a fool’s errand. That’s the view of Marcus Arvan, PhD, an associate professor of philosophy and religion at the University of Tampa. Summarizing a peer-reviewed paper he published in AI & Society, Arvan makes his argument in concise terms. “To reliably interpret what LLMs are learning and ensure that their behavior safely ‘aligns’ with human values, researchers need to know how an LLM is likely to behave in an uncountably large number of possible future conditions,” he writes. “AI testing methods simply can’t account for all those conditions.” Scientific American published the opinion piece Jan. 27. Read it here.
     
  • Recent research in the news: 
     
  • Notable FDA approvals:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.