Patients size up GenAI for mental healthcare one way, for physical healthcare another

Healthcare consumers considering the use of large language models (LLMs) for mental healthcare are apt to shy away from the technology if they feel it might put their privacy at risk. By comparison, those who might turn to LLMs for physical healthcare are most put off by models that seem troublesome to use. 

On the other hand, there’s a good deal of overlap. 

The findings are from a study conducted at Queensland University of Technology in Australia and published this month in AI & Society.

Sage Kelly, PhD, and colleagues recruited 216 Australian residents, a mix of college students and members of the general public, between the ages of 18 and 77 (median 26.5 years). The team surveyed the participants on how likely they would be to seek advice and/or information from ChatGPT in two scenarios: one involving mental healthcare and one involving physical healthcare. 

Here are highlights from their findings. 

1. Perceived usefulness significantly and positively predicts behavioral intentions in both mental health and physical health. 

This suggests that the more people perceive ChatGPT as useful for seeking health services, the more they are inclined to use the technology, the authors write. 

This finding is consistent with prior research demonstrating that perceived usefulness strongly predicts intentions above and beyond demographic factors, they note before adding: “To encourage use, companies such as OpenAI should promote the usefulness of their AI products.” More: 

‘In the context of healthcare, companies should promote the ability of AI to reduce barriers like cost, time and stigma to drive perceived usefulness.’ 

2. Perceived ease of use is a significant positive predictor of behavioral intentions to use ChatGPT for physical health advice. 

As the perceived ease of use rises, so do users’ intentions to use ChatGPT for physical healthcare, Kelly and co-authors report. 

By comparison, in the present study, perceived ease of use was not a significant predictor in the mental healthcare scenario. “As mental healthcare is largely language-driven,” the researchers point out, “LLMs could be perceived as more straightforward to use for this concern in comparison to physical healthcare, which often involves the visualization of a body part (e.g., rash) to diagnose.”

It could also be that consumers are not yet familiar with using ChatGPT for healthcare purposes, Kelly and co-authors suggest. More:

‘Consequently, developers should aim to maintain—or increase—perceived ease of use by designing devices that are simple to use, provide clear instructions and are responsive to users’ issues.’

3. Privacy concerns are significantly and negatively predictive of consumer intentions in mental healthcare settings. 

The more individuals worry about their privacy, the less likely they are to intend to use ChatGPT for their mental healthcare, Kelly and colleagues observe. 

“Although the mean scores were not significantly different for privacy concerns between the two scenarios,” they write, “it appears that the effect of privacy concerns on users’ behavioral intentions [is] stronger in the mental health scenario compared to physical healthcare.”  

‘This finding is supported by prior research showing mental health data are some of the most sensitive data one can reveal due to the stigma and association with life events and additional health issues.’  

4. The influence of each major variable (perceived usefulness, perceived ease of use, trust and privacy concerns) differs across the two healthcare domains. 

For this reason, the current findings cannot be generalized across industries, Kelly and colleagues state. 

“To our knowledge, this is the first study that has shown the difference in users’ acceptance of using ChatGPT for physical healthcare compared to mental healthcare,” they write. “The difference between the two models underscores the need to conduct industry-specific analysis and the importance of including users’ perspectives in the design and use of ChatGPT for specific applications since users’ behavior cannot be generalized.”

‘It is recommended that future research analyzing multiple industries use this methodology to compare their results.’

5. Future research should employ different sampling techniques and data-collection methods to study a more diverse range of participants. 

Moreover, the present study “treated physical and mental health issues as separate entities; however, many health issues are a combination of physical and mental health symptoms,” the authors write in acknowledging the limitations of their study design. 

‘Based on our initial examination, future studies should recruit from various countries to understand the influence of income and healthcare systems on the willingness to use ChatGPT.’

The study is posted in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.