To tell or not to tell: Do patients really need to know who—or what—writes their doctor’s notes?

When blinded as to authorship, healthcare consumers slightly prefer medical messages composed by generative AI to those written by human clinicians. 

The preference reverses when patients are told an algorithm drafted the note. However, the difference is negligible: More than 75% are satisfied with these messages no matter who—or what—writes them. 

The findings come from Duke University, where researchers collected survey responses on the topic from 1,455 patients represented by the institution’s patient advisory committee. 

“The lack of difference in preferences between human vs no disclosure may indicate that surveyed participants assume a human author unless explicitly told otherwise,” hospitalist Joanna Cavalier, MD, and colleagues comment in their study report. Regardless, they add:  

‘Reduced satisfaction due to AI disclosure should be balanced with the importance of patient autonomy and empowerment.’ 

JAMA Network Open published the work March 11. Here are additional excerpts from the study’s discussion section.

1. When blinded as to author, the surveyed patients preferred AI-drafted messages. But why? 

Probably because the machine messages “tended to be longer, included more details and likely seemed more empathetic than human-drafted messages,” the authors surmise. Yet the respondents were overall more satisfied even with suboptimal messages they knew their clinicians had written—or when not informed of the authorship—than with messages they knew were generated by AI. 

‘This contradiction is particularly important in the context of research showing that increased access to clinicians via electronic communication improves patient satisfaction, while evidence linking the in-basket to burnout is prompting development and use of automated tools for clinicians to reduce time spent in electronic communication.’ 

2. The study’s findings raise several ethical and operational questions for health systems implementing AI as a tool for handling the in-basket. 

“The operational options are a.) not to disclose the use of AI in patient communication because patients tended to be less satisfied when they were told AI was involved, or b.) to disclose, which aligns with bioethical norms and follows the White House’s AI Bill of Rights.”

‘A third option, which is ethically reasonable but practically challenging, would be to vary disclosure based on how each individual elects to receive or not receive information regarding AI.’

3. From an ethics perspective, there is arguably more to ‘doing the right thing’ than simply optimizing patient satisfaction. 

Patients have a right to know information relevant to their care, and the source of the information they are receiving is indeed relevant, the researchers remark. “Moreover, the power imbalance that already exists between patients and clinicians should not be exacerbated by hiding relevant aspects related to the delivery of care.”

‘If anything, AI tools should be implemented in ways that empower patients every step of the way.’

4. The present research ‘reflects a time of transition into a new era of clinical norms.’ 

As AI tools become more prevalent in healthcare, the authors note, “it may be reasonable to expect that patients will become accustomed to receiving AI-generated responses and that the response author will have a smaller influence on satisfaction.” 

‘This hypothesis calls for further studies that follow trends as the implementation of AI progresses.’

5. Potential patient dissatisfaction with AI-generated medical messages should not be viewed as a barrier to disclosure. 

“We found that the satisfaction, perceived usefulness and feeling of being cared for remained high despite disclosure of AI,” Cavalier et al. underscore. Describing results from a follow-up survey they conducted, the authors addressed the question of how best to disclose the use of AI. 

‘Participants preferred the shortest disclosure statement, which stated: “This message was written by Dr T. with the support of automated tools.” This is a takeaway that we are implementing at our health system.’

Read the full study

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.