3 points to ponder before tapping AI for mental healthcare
Earlier this year, patients treated by an AI chatbot for common mental health challenges got better. This was significant because the patients were part of the first randomized controlled trial of this type of AI intervention.
The therapy bot, called Therabot, was developed at Dartmouth College. The results were promising, but they raised questions for researchers who keep a close watch on AI.
Two such scholars, John Torous, MD, MBI, of Harvard, and bestselling medical futurist Eric Topol, MD, of Scripps Research, have used the 2025 Therabot study as a jumping-off point to call for further investigation.
“Larger trials and more research are warranted to confirm the effectiveness and generalizability of [Therabot] and related chatbot interventions,” they write in a short paper published in The Lancet June 11.
Torous and Topol advise mental-health AI adopters to weigh three key considerations before proceeding.
1. Consider any AI performance claims in light of the quality of the supporting evidence.
A decade of experience with smartphone health apps has shown the risk of comparing apps to untreated controls on waiting lists, the authors point out.
“Intervention research done without a placebo or active control group is still important but should be considered more preliminary in the same way that early-phase drug studies explore feasibility and safety rather than efficacy,” they add. “Comparing an AI chatbot to nothing, or to a waiting-list control, can be questioned given the range of online, app, augmented reality, virtual reality, and even other AI interventions that can serve as active digital control.” More:
‘Selecting the right digital control group can be confusing, but guidance exists to help make the right choice.’
2. Look for the longitudinal impact of AI tools.
Today’s digital therapeutics and health apps have struggled with long-term outcomes as well as with sustaining engagement among people who use healthcare services, Torous and Topol note.
“It might be possible that AI tools can deliver such effective interventions that sustained engagement is not necessary or the intervention might be able to drive ongoing engagement such that the user receives ongoing longer-term support. These areas need further research,” they write. “Although such research is more time-consuming and costly, it is important to assess what type of role an AI intervention will have in healthcare.”
‘Research without longer-term outcomes is still important but should be regarded as more exploratory in terms of defining effectiveness.’
3. Know that a clinical AI intervention that cannot assume legal responsibility cannot unilaterally deliver care.
“[F]or generative AI to support mental health, there still needs to be a role for health professionals to monitor patient safety,” the authors maintain. “Indeed, leaving the responsibility and risk on humans suggests AI alone cannot deliver care. Thus, developments in the legal and regulation space will prove crucial for ensuring AI tools have a genuine role in healthcare.”
‘Research done without placing chatbots in actual healthcare settings with all the consequent risks remains limited in terms of informing cost-effectiveness and the role of humans in the care pathway. Work is still needed to identify new models for how such AI care should be delivered in the future.’
The brief paper includes a checklist-type graphic for quick reference. It’s posted here.
——————————————
- In other research news:
- University of Illinois: Machine-learning model reliably predicts cognitive performance
- Wyss Institute at Harvard: Broad-spectrum coronavirus drug developed through AI-enabled dynamic modeling
- Multiple sites: AI-powered study shows surge in global rheumatoid arthritis since 1980, revealing local hotspots
- Regulatory:
- FDA to use AI in drug approvals to ‘radically increase efficiency’ (The New York Times)
- Funding:
- Autonomize AI raises $28M Series A to power next-generation agentic AI for healthcare and life sciences
- Guidehealth receives $10M investment from Emory Healthcare
- ArcheHealth secures $6.7M for AI healthcare platform
- Perci Health gets £3M ($4M) investment to help improve NHS cancer care
- AIAtella raises €2M ($2.3M) to scale AI-powered cardiovascular imaging tools, aims to prevent 100 million strokes
- African backer of tech unicorns Chronos Capital eyes AI in healthcare deals