Ethical healthcare AI in 9 mnemonic elements

Artificial intelligence researchers are making a “great plea” to guide the ethical development and use of generative AI in medicine.

The term is in quotes because it’s an easily memorized acronym: GREAT PLEA. The nine letters stand for Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy and Autonomy.

The researchers base this set of principles on ethical AI frameworks developed in the military world, namely those of the U.S. Department of Defense and the North Atlantic Treaty Organization.

“Warriors on battlefields often face life-altering circumstances that require quick decision-making,” senior author Yanshan Wang, PhD, of the University of Pittsburgh and colleagues explain. “Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery treating a life-threatening condition.”

They lay out their formula in commentary published Dec. 2 in NPJ Digital Medicine. Here are abbreviated descriptions of the first five elements.

  • Governability. Generative AI systems must be governed by explicit guidelines covering what to do should glitches arise, Wang and co-authors assert. “In the event of any unintended behavior, human intervention to disengage or deactivate the deployed AI system should be possible,” they write.
     
  • Reliability. If you’re not certain your generative AI model is as safe as—or safer than—human decision-making, don’t deploy it in clinical practice, the authors imply. “Having a thorough evaluation and testing protocol against specific use cases will ensure the development of resilient and robust AI systems,” they write.
     
  • Equity. Defining the concept as “the state in which everyone has a fair and just opportunity to attain their highest level of health,” Wang and colleagues state that generative AI developers must seek to mitigate bias by adjusting algorithms to help address existing disparities in health status.
     
  • Accountability. If measures aren’t in place to hold AI-using clinicians accountable for care decisions, patients may feel doctors and nurses are less than fully invested in AI-aided care processes, the authors suggest. In a nutshell: The less the accountability, the lower the trust.
     
  • Traceability. The generative aspect of generative AI should be transparent and, thus, capable of being tracked, Wang and co-authors maintain. “Data sources used to train these models and the design procedures of these models should be transparent too,” they add. “Furthermore, the implementation, deployment and operation of these models need to be auditable, under the control of stakeholders in the healthcare setting.”

And there’s the GREAT part of the formula. For the PLEA part—Privacy, Lawfulness, Empathy and Autonomy—read the rest of the paper.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.