Healthcare AI newswatch: Silent AI threats, overeager clinical AI, trustworthy AI therapists, more

Buzzworthy developments of the past few days. 

  • Beware AI watching and listening from digital platforms. Not the usual-suspect applications with ‘AI’ written all over them, but the ones used ubiquitously across your enterprise. The apps with supposedly safe names like Microsoft Office, Adobe Acrobat, Bing, Grammarly and LinkedIn—to name just a few among 70 or so in wide use across healthcare. Stealthy AI inside such familiar software could send confidential patient data, whether inadvertently or otherwise, to a third-party large language model. The LLM might then use the data to further train itself. And “once the information is embedded in the model’s brain, it’s a lost battle,” a cybersecurity expert tells Newsweek. “Now everyone who is interacting with the model potentially can get the sensitive data that was leaked.” 
     
  • Everyone was wowed when large language AI passed a medical licensing exam. Remember? It happened only two years ago. But expectations may be rising faster than what the models can deliver: real assistance with real-world patient care. A muscular effort to match the potential with the deliverable is underway at Stanford, where researchers have developed a framework toward that goal. The team’s MedHELM project looks to put Stanford’s RAISE Health Initiative to work in daily episodes of care. HELM stands for holistic evaluation of language models, RAISE for responsible AI for safe and equitable health. That hints at where they’re going with MedHELM. “About 95% of LLM evaluations that are reported in the literature are not done using electronic health record data—and that context is really important,” explains Nigam Shah, MBBS, PhD, chief data scientist at Stanford Health Care. “MedHELM does the back-end work that pulls in relevant datasets and executes hypothetical but common use cases for how people in health and medicine might use an LLM to inform their work.” Then it shows which of six commonly available models perform best for a given task. Learn more straight from Stanford. 
     
  • Some AI decision-support models have a proclivity for recommending aggressive care pathways. And doing so based not on medical necessity but on patient demographics. The same models, or others, tend to advise advanced imaging for wealthy patients and “no further testing” for the less well-off. The researchers who uncovered the undesirable behaviors call for modifications informed by their findings. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design and oversight,” says Eyal Klang, MD, chief of generative AI at the Icahn School of Medicine at Mount Sinai. A digital process the team developed to gauge AI outputs against clinical standards incorporates expert feedback to refine model performance, he adds. “This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better healthcare for all.”
     
  • AI is commonly looked to for curing diseases. But it’s also pretty good at sustaining wellness. Think of front-end strategies for warding off dementia, maintaining mental health and improving fertility. In the U.K., researchers at the University of Cambridge offer a patient-friendly rundown of what they’re up to in six such realms. “If we get things right, the possibilities for AI to transform health and medicine are endless,” state three professors at the institution’s Centre for AI in Medicine. “It can be of massive public benefit. But more than that, it has to be.” More here.
     
  • An AI chatbot trained in clinical best practices for talk therapy may finally get that assignment right. Developed at Dartmouth College and recently described in the New England Journal of Medicine, the model seems to be producing some notably positive outcomes. The work is newsworthy because some prior attempts have been iffy. A few have been worse still, leading patients to harm themselves. By contrast, the effects the Dartmouth team is seeing “strongly mirror what you would see in the best evidence-based trials of psychotherapy,” Nicholas Jacobson, PhD, tells NPR. In fact, he adds, the results have been “comparable to studies with folks given a gold standard dose of the best treatment we have available.”
     
  • Healthcare AI can supplement exercise regimens, fine-tune physical therapy and refine business strategy. TechTarget includes these up-and-coming use cases in a list of 10, most of which aren’t all that new but will continue to rise in profile over the coming months and years. AI is “redefining” healthcare, the piece reminds, “with hospitals, health systems and large medical practices incorporating AI technologies into administrative as well as clinical workflows.”
     
  • Mark your calendar. Polish your paper. Or do both. This year’s Machine Learning for Healthcare Conference will unfold at the Mayo Clinic in August. The organizers are already accepting submissions. Researchers, clinicians and all-around innovators interested in advancing the art and science of AI in healthcare are invited to showcase work that combines cutting-edge research with real-world impact, says Shauna Overgaard, PhD, senior director of AI enablement at Mayo’s Center for Digital Health. “This is a unique opportunity,” she emphasizes, “to advance the field, collaborate globally and reinforce our shared commitment to patient-centered AI.” Details
     
  • For some, maple syrup from Canada is happy medicine in the morning. But it has to be pristine, unadulterated, 100% true maple syrup. From Canada. For them, there’s AI for food purity. “With the increased risk of food fraud due to threats of increased U.S. import tariffs on Canadian products, combining AI and maple syrup fingerprinting can detect maple syrup fraud,” write a trio of subject matter experts in The Conversation. “Food fraud, or economically motivated adulteration, is the deliberate misrepresentation of food for economic gain.” And from the “I Did Not Know That” Department, Canada produces more than 70% of the world’s maple syrup. I probably would have foolishly bet on Vermont. 
     
  • Recent research in the news: 
     
  • Notable FDA approval activity:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.