How to let AI simplify the complexities of care (rather than allowing it to do the opposite)

Healthcare AI can add considerable value to patient care practices, but it can also “just add noise to an already complex system.” 

In a book chapter published online May 21 by the American Society of Clinical Oncology, aka ASCO, researchers consider the proverbial fork in the road at which healthcare must confidently choose the first of those two paths. 

“Continued development of AI tools is necessary to address the limitations of AI use in medicine,” write Katy French, MD, of the University of Texas MD Anderson Cancer Center and colleagues. “It is imperative that strategies are developed to help mitigate the negative impacts of AI, such as information overload, data quality requirements and transparency of AI usage.”

More: 

‘Regular assessment of AI tools in practice must be conducted and feedback from patients and clinicians should be given consideration to ensure maximal benefit without unintended harm.’

The team arrives at several consequential conclusions, among them these four: 

1. AI can improve the clinical experience for patients as well as physicians.

Ambient listening technologies such as Abridge AI and DAX “have shown improved documentation quality, facilitated more personable patient encounters, and decreased mental fatigue among physicians, although the impact on patient satisfaction remains mixed,” French and co-authors write. “These AI tools have shown to be an asset in alleviating the administrative burden many physicians carry, significantly reducing physician burnout.” More: 

‘The use of AI chatbots and GenAI algorithms presents a novel approach to answering patient queries, improving communication, prioritizing patient messages and enhancing patient education.’

2. The pitfalls of AI remain significant. 

Information overload, mediocre performance and ethical/legal ambiguity around disclosure of AI use “all pose a threat to the successful integration of this technology in the future,” the authors point out. “Ethical concerns related to patient autonomy, data privacy, trust and beneficence must be addressed by AI algorithm developers and legislators, and include regulations outside of the HIPAA to ensure patient safety and confidence are made a priority.” 

‘Future developments of AI suggest its integration into overburdened hospitals, underserved communities, telemedicine and rural health care settings, further enhancing access to care and tackling health care disparities.’ 

3. Healthcare providers should explain their use of AI to patients.

AI utilization could be explained as the “institutional standard,” the authors suggest, with patients automatically enrolled in these services unless they opt out. “This approach addresses the growing concern among physicians and patients about the transparency of AI use in medicine and reinforces that patient autonomy remains a key concern,” they add. 

Both parties must trust in the technology and have a general understanding of how it works. 

4. It is essential to uphold patient integrity and safety standards. 

“Currently, AI’s usage is limited to serving as a digital assistant or second opinion to physicians, but this may drastically shift in the next decade,” French and colleagues note. “Limiting AI’s usage to a supportive tool in healthcare ensures human judgment and expertise remain the driving force in patient care.”

‘However, the question remains: How can we ensure patient confidence in care while still improving efficiency and leveraging AI technology?’

The paper is available in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.