7 points of positivity likely to win buy-in for healthcare AI

Healthcare AI can only improve clinical outcomes if it wins clinicians’ trust and does patients no harm.

The point is driven home by Daniel Yang, MD, vice president of AI and emerging technologies at the eight-state, 12.5-million-member Kaiser Permanente.

The need to succeed clinically is “why we use a responsible AI approach,” Yang writes in a piece posted this week. “With a focus on building trust, we use AI only when it advances our core mission of delivering high-quality, affordable healthcare services.”

Yang then summarizes seven principles that he says guide the integrated insurance/provider system—which employs 230,000 staff, 70,000 nurses and around 25,000 physicians—whenever it assesses AI tools for possible adoption.

These are:  

1. Privacy.

“AI tools require a vast amount of data,” Yang reminds readers before sharing:

Ongoing monitoring, quality control and safeguarding are necessary to protect the safety and privacy of our members and patients.

2. Reliability.

“What works today may not work a few years down the road as technology, care delivery and patient preferences evolve.” More: 

We choose AI tools that will work for the long term.

3. Eyes on outcomes.

If an AI tool doesn’t advance high-quality and affordable care, we don’t use it.

4. Transparency.

“We make patients aware of and ask for consent to our use of AI tools whenever appropriate.”  

For our employees who use AI, we provide explanations of how our AI tools were developed, how they work and what their limitations are.

5. Equity.

“People and algorithms alike can contribute to bias in AI tools,” Yang points out. “Our AI tools are built to minimize bias.”

We also know AI has the potential to harness large amounts of data and to help identify and address the root causes of health inequities, so we also focus on that potential.

6. Customer-centricity.

“In the case of AI, our customers are our members, doctors and employees who will use the tools.”

Tools must prioritize their needs and preferences.

7. Trust.

“We know there’s uncertainty about the effectiveness of AI,” Yang writes. “We choose tools that offer excellence in safety and performance, and alignment with industry standards and leading practices.”

We further build confidence by continually monitoring the tools we use. We continue to invest in research that rigorously evaluates the impact of AI in clinical settings.

Yang also encourages policymakers to help make sure healthcare AI gets developed and used responsibly. They can do this, he suggests, by:

  • supporting the launch of large-scale clinical trials,
  • establishing systems to monitor AI tools used in clinical care and
  • supporting independent quality assurance testing of AI algorithms.

Read the whole thing.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
