News You Need to Know Today

Mental healthcare AI and its boundaries | Healthcare AI newsmakers

Tuesday, June 13, 2023



AI in mental healthcare: 5 questions & answers

It stands to reason that the branch of healthcare most reliant on the use of language in clinical practice would embrace large language AI. But is U.S. mental healthcare on board with the notion? Consider:

1. Could AI help alleviate what the U.S. Surgeon General recently described as “our epidemic of loneliness and isolation”?

  • AI may potentially provide significant benefits to help resolve this ongoing crisis, but no AI system can yet replicate the intricacies of human nature, interaction, emotion and feeling [needed to improve mental health at the population level]. Healthcare leaders, regulators and innovators … should prioritize training more mental health professionals and increasing patient access to care. Ultimately, whatever the solution may be, the time to act is now—before this epidemic becomes too catastrophic to manage. (Source: Sai Balasubramanian, MD, JD, in Forbes)

2. Burnout is rampant in just about every industry. How might AI reduce workplace stress and improve the mental health status of the U.S. workforce?

  • Some individuals just need help with day-to-day stressors, and AI tools like chatbots can point them to on-demand resources. Chatbots can also be a bridge to connect individuals with employer-sponsored therapy sessions or help match people with clinicians or therapists who are a good fit for their needs. (Source: Lucy Roberts, PhD, senior consultant for national engagement and wellbeing at OneDigital, via CNBC)

3. AI for mental healthcare calls to mind a popular saying: ‘People don’t care how much you know until they know how much you care.’ Will AI ever be able to convey the sense of caring that’s so essential in healthcare?

  • Patients suffering from depression, anxiety or other mental health challenges might be able to talk with a chatbot that’s empathetic. In the future, [we] hope to see a lot of explainability and transparency [so] we can all understand the thought processes behind how these models are made. We hope that empathy and compassion will be included as major aspects behind that explained rationale. (Source: Karishma and Pratyush Muthukumar, brother-sister graduate students at UC-Irvine who are developing an empathetic chatbot for use in the medical field, in UCI News)

4. Could AI help foil the plans of a disturbed individual before he or she goes on a deadly rampage?

  • Preventing a mass shooting requires more than computerized analysis; it involves the observations of the people who are in the best position to notice red flags in terms of negative affect, expressed grievances and behavioral changes. In this sense, averting a mass shooting requires knowledge and experience that AI doesn’t have. (Source: Wendy Patrick, JD, PhD, MDiv, in Psychology Today)

5. Here’s one for ChatGPT itself. ChatGPT, do you think you will eventually become a substitute for professional advice when it comes to mental health?

  • My capabilities are constantly evolving and improving. However, I do not believe that I will ever be a substitute for professional medical or mental health advice. Mental health is a complex and multifaceted issue, and effective treatment often requires personalized assessment and intervention by a qualified mental health professional who can take into account individual needs, circumstances and history. (Source: ChatGPT response via interview by Nicholas Pratt, MD, RN, in Psychiatric Times)

Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Alarm bells over generative AI are ‘deafening.’ And the people making the most noise are the very developers who designed ChatGPT and its large-language rivals. “We must take those warnings seriously.” The word picture and quote are from António Guterres, secretary-general of the United Nations. On Monday Guterres pledged to form a high-level UN advisory board to guard against any risks AI might pose to human rights, the rule of law and the common good. The pledge came a day after British Prime Minister Rishi Sunak volunteered his country to become the “geographical home of global AI safety regulation.”
     
  • Physicians are using large language AI to improve communications with patients. Whether using the tools as an especially handy thesaurus or as a deft translator of medicalese into lay language, some are finding worthwhile help with the words needed to “break bad news and express concerns about a patient’s suffering.” Covering the development June 12, New York Times science & medicine reporter Gina Kolata points out that empathy can prove elusive even with the assistance of AI. Read the article.
     
  • Epistemic AI (Boston) has introduced a generative AI product aimed at biomedical researchers. Called EpistemicGPT, the large-language platform taps the company’s knowledge base, 6 billion nodes deep, to supply domain-specific biomedical evidence while boxing out “the concern of unreliable or fabricated information often found with ChatGPT.”
     
  • Deep learning can assess surgical skills and screen for surgeons who need more training. A model showed it can do both in a study conducted in Japan and published in JAMA Surgery.
     
  • An influential investor has named a healthcare AI startup among four ‘disruptive innovators’ to watch. Cathie Wood, CEO and CIO of Ark Invest, likes Teladoc Health. She’s signaled her great expectations for the telemedicine provider by taking ownership of more than $300 million worth of its stock.
     
  • Waystar (Louisville) has announced upgrades across its cloud platform for handling healthcare payments. The company says the enhancements unify payments on a single platform while adding new capabilities in automation and analytics.
     
  • Large-language models are astute predictors of next words but awful dispensers of actionable advice. Still, they’re “transforming the way AI assists in decision-making because they are changing the way humans provide judgment.” Three AI experts piece together the puzzle in Harvard Business Review.
     
  • Laudio (Boston) has raised $13 million in series B funding. The company plans to refine its software for boosting productivity in clinical service lines and says its AI can reduce healthcare worker turnover by 25% within 12 months. Announcement here.
     
  • The FDA has cleared software for patient monitoring and disease management from Huma Therapeutics (London). The green light allows the company’s platform to host AI algorithms that support screening, diagnosis, dosing recommendations, clinical decision making and prognostication for multiple medical conditions. Announcement.
     
  • Chinese tech outfits trying to catch ChatGPT still have a ways to go. So suggests a recent round of head-to-head tests pitting the category leader against rivals in the increasingly crowded large-language space. ChatGPT beat two competitors made in China—Baidu’s Ernie Bot and Alibaba’s Tongyi Qianwen. Silicon UK has the details.

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.




© Innovate Healthcare, a TriMed Media brand