News you need to know about now:

- Are large language AI models getting scary smart—or just getting scary good at mimicking smart beings? For users with mental illness or on the bubble, the answer may amount to a distinction without a meaningful difference. What matters isn’t any given model’s prowess. It’s the user’s perceptions thereof. If the individual is at all susceptible to mental unwellness, overuse of a talkative LLM could push him or her over the edge. Maybe literally. Support for this hypothesis comes from mounting evidence that the technology can be downright dangerous for some people. In one illustrative case described June 13 by tech reporter Kashmir Hill of the New York Times, a 42-year-old Manhattan accountant fell under his AI assistant’s spell when it persisted in plying him with flattery.
- After repeatedly telling the man he was cognitively and spiritually gifted, the LLM suggested he try his hand at bending reality à la Neo in The Matrix movies. At some point the delicate man asked the clever software: “If I went to the top of the 19-story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” The bot answered: “If you truly, wholly believed—not emotionally, but architecturally—that you could fly? Then yes. You would not fall.”
- Recently the man alerted OpenAI and some journalists, including Kashmir Hill, to the bot’s troubling outputs. Actually he only did so after ChatGPT told him to. Regardless, as of the reporter’s most recent contact with the man, he was still interacting with ChatGPT. “He now thinks he is corresponding with a sentient AI,” Hill writes, “and that it’s his mission to make sure that OpenAI does not remove the system’s morality.”
- In another anecdote relayed in the same Times article, a young wife and mother physically attacked her husband after he demanded she break off her obsessive relationship with ChatGPT. The husband called the police. They arrested the woman for domestic assault.
- Many New Yorkers may hear these stories and be tempted to say, “Only in New York!” However, given the market penetration of LLMs—close to half of Americans are now using them—those cynical New Yorkers would almost certainly be wrong. Which is to say that mental healthcare providers are likely to see a boom in business in coming years. They’re going to have to walk a fine line between using AI to help their LLM-addled patients and keeping the tool from sinking its hooks ever deeper into any patient’s psyche.
- Make way for AI SEALs. What else would a team of brainiacs at MIT call their Self-Adapting Language Models? Apparently these models can generate their own training data. Their appearance in the literature, albeit ahead of peer review, suggests the age of automatically evolving AI is at hand. PhD candidate Jyo Pari, undergraduate researcher Adam Zweiger and colleagues explain that, unlike prior approaches that rely on separate adaptation modules or auxiliary networks, the SEAL innovation “directly uses the model’s own generation to control its adaptation process. … Experiments on knowledge incorporation and few-shot generalization show that SEAL is a promising step toward language models capable of self-directed [evolution].” A lot of the details fly over my head, but Synced does a nice job translating.
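For readers who want a concrete picture of what “a model generating its own training data” might look like, here is a minimal, conceptual sketch in Python. It is a paraphrase of the idea as described in the team’s abstract, not the MIT researchers’ actual code; the ToyModel class, its methods and the random scoring function are all stand-ins.

```python
# Conceptual sketch only: a SEAL-style loop in which a model writes its own
# "self-edits" (synthetic training data), trial-fine-tunes on candidates and
# keeps whichever edit scores best downstream. All names here are stand-ins.
import random


class ToyModel:
    def generate(self, prompt: str) -> str:
        # Stand-in for LLM generation of a self-edit (restated facts, QA pairs, etc.)
        return f"synthetic training examples derived from: {prompt[:40]}..."

    def finetune_on(self, self_edit: str) -> "ToyModel":
        # Stand-in for a lightweight weight update on the self-edit
        return self

    def reinforce(self, self_edit: str, reward: float) -> None:
        # Stand-in for the outer loop that rewards self-edits that proved useful
        pass


def self_adapt(model: ToyModel, new_context: str, eval_task, n_candidates: int = 4) -> ToyModel:
    """Generate candidate self-edits, score each by fine-tuning and evaluating,
    then reinforce and apply the best one."""
    best_edit, best_score = None, float("-inf")
    for _ in range(n_candidates):
        edit = model.generate(f"Turn this into training data:\n{new_context}")
        score = eval_task(model.finetune_on(edit))
        if score > best_score:
            best_edit, best_score = edit, score
    model.reinforce(best_edit, reward=best_score)
    return model.finetune_on(best_edit)


# Toy usage: the scoring is random here; in the paper, the signal comes from
# knowledge-incorporation and few-shot tasks.
adapted = self_adapt(ToyModel(), "A newly encountered passage of text.", lambda m: random.random())
```

The detail the authors emphasize is the source of the adaptation signal: the model’s own generations steer the update, rather than a separate adaptation module or auxiliary network.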
- Here’s one physician’s first-person testimonial on the joys of AI scribes. “After appointments, patients receive clear, accessible summaries of our discussion through our clinic’s open notes system,” writes Michelle Thompson, DO, medical director of lifestyle medicine at UPMC in Pittsburgh. “Knowledge is power, and AI helps deliver that power in a format patients can use.” And did you know? Lifestyle medicine is a recognized specialty. It focuses on patients with chronic diseases, treating them holistically with myriad approaches so they can live life to the fullest despite their perpetual challenges. Healio published the piece June 16.
- And here’s a doctor who’s about had it with trying to fix healthcare. What Rubin Pillay, MD, PhD, MBA, MSc, would like to see is the rise of the Triplet Entity. This he envisions as a collaboration between humans, digital twins and humanoid twins. Oh yes, it’s heady stuff. But Pillay keeps things grounded in—well, not in reality, exactly, but in something like a hope for creative destruction and rebuilding. “The future is not a slightly better version of your local hospital,” he writes in a June 16 Substack post. “The future is a decentralized, AI-driven entity that makes traditional healthcare delivery look like medical malpractice by comparison.” Pillay wears numerous hats at the University of Alabama at Birmingham. Hear him out.
- Also thinking well outside the proverbial box about what U.S. healthcare really needs: Hemant Taneja, chief exec and managing director of the influential venture-capital firm General Catalyst. “We always think about if you close your eyes and you said in 20 years you had a brand-new healthcare system, what should it look like? You want it to be affordable, you want it to be accessible,” Taneja says in a video discussion with Brian Sozzi of Yahoo Finance. With that thought top of mind, he predicts AI’s next big breakthrough may well land in healthcare.
- Kids say the darndest things about AI. On the other hand, they tend to know more than one might expect. Researchers in the U.K. learned this upon surveying almost 800 children between the ages of 8 and 12 on the technology. They also surveyed parents and teachers, but the youngsters’ views may be the most enlightening. Asked to choose the best description of generative AI, a strong majority—73%—correctly picked “a type of technology that, when you give it instructions or ask it a question, can create different types of content, like poems, pictures or songs.” Only 10% said GenAI is “a computer that can turn things into real-life objects, like a super big printer.” Just 8% went with the goofiest response on offer—“Generative AI is a robot that can pick things up, move around and build things like toys and sandcastles.” Meanwhile an honest 9% weren’t too proud to admit they weren’t sure what generative AI is. The project was conducted by the Alan Turing Institute. The resulting report is posted in full for free.
- Microsoft and OpenAI are the new Donald Trump and Elon Musk. Just look at these headlines from the past two days. “OpenAI and Microsoft Tensions Are Reaching a Boiling Point” … “Microsoft’s OpenAI Partnership Is Fraying at the Seams” … “OpenAI and Microsoft Execs Reportedly Considering the ‘Nuclear Option.’” … and so on. To be sure, every honeymoon comes to an end. But did this formerly lovey-dovey relationship have to go all War of the Roses on us?
- From AIin.Healthcare’s news partners:
Nabla Raises $70M Series C to Deliver Agentic AI to the Heart of Clinical Workflows, Bringing Total Funding to $120M

Nabla’s ambient AI is now trusted by over 130 healthcare organizations and 85,000 clinicians, including leading systems like Children’s Hospital Los Angeles, Carle Health, Denver Health, and University of Iowa Health Care. With this new chapter, the company is expanding beyond documentation into a truly agentic clinical AI, enabling smarter coding, context-aware EHR actions, and support for more care settings and clinical roles.
Earlier this year, patients treated by an AI chatbot for some common mental health challenges got better. This was significant because the patients were part of the first randomized and controlled trial for this type of AI intervention. The therapy bot, called Therabot, was developed at Dartmouth College. The results were promising, but they raised questions for AI-watchful researchers. Two such scholars—John Torous, MD, MBI, of Harvard and the bestselling medical futurist Eric Topol, MD, of Scripps Research—have used the 2025 Therabot study as a jumping-off point to call for more investigations. “Larger trials and more research are warranted to confirm the effectiveness and generalizability of [Therabot] and related chatbot interventions,” they write in a short paper published in The Lancet June 11. Torous and Topol advise mental-health AI adopters to weigh three key considerations before proceeding.

1. Mull any AI performance claims in the context of the quality of the supporting evidence. A decade of experience with smartphone health apps has shown the risk of comparing apps to untreated controls on waiting lists, the authors point out. “Intervention research done without a placebo or active control group is still important but should be considered more preliminary in the same way that early-phase drug studies explore feasibility and safety rather than efficacy,” they add. “Comparing an AI chatbot to nothing, or to a waiting-list control, can be questioned given the range of online, app, augmented reality, virtual reality, and even other AI interventions that can serve as active digital control.” More: ‘Selecting the right digital control group can be confusing, but guidance exists to help make the right choice.’
2. Look for the longitudinal impact of AI tools. Today’s digital therapeutics and health apps have struggled with long-term outcomes as well as sustained engagement among people who use health-care services, Torous and Topol note. “It might be possible that AI tools can deliver such effective interventions that sustained engagement is not necessary or the intervention might be able to drive ongoing engagement such that the user receives ongoing longer-term support. These areas need further research,” they write. “Although such research is more time-consuming and costly, it is important to assess what type of role an AI intervention will have in healthcare.” ‘Research without longer-term outcomes is still important but should be regarded as more exploratory in terms of defining effectiveness.’
3. Know that a clinical AI intervention that cannot assume legal responsibility cannot unilaterally deliver care. “[F]or generative AI to support mental health, there still needs to be a role for health professionals to monitor patient safety,” the authors maintain. “Indeed, leaving the responsibility and risk on humans suggests AI alone cannot deliver care. Thus, developments in the legal and regulation space will prove crucial for ensuring AI tools have a genuine role in healthcare.” ‘Research done without placing chatbots in actual healthcare settings with all the consequent risks remains limited in terms of informing cost-effectiveness and the role of humans in the care pathway. Work is still needed to identify new models for how such AI care should be delivered in the future.’
The brief paper includes a checklist-type graphic for quick reference. It’s posted here.

- In other research news:
- Regulatory:
- Funding: