3 strategies healthcare can copy from aviation to refine medical AI

The field of healthcare AI continues to nurse two conspicuous Achilles heels: racial bias in early algorithm iterations and uneven input data as algorithms age. For inspiration in treating these and other cure-resistant sore spots, the healthcare sector might look to the aviation industry.

The suggestion comes from technology scholars representing several institutions of higher learning. The group expounds on its proposition in a paper recently presented at an academic conference and posted online by the Association for Computing Machinery.

Pointing out that aviation is a field that “went from highly dangerous to largely safe,” computer scientist and engineer Elizabeth Bondi-Kelly, PhD, of the University of Michigan and colleagues name three broad actions that have improved aviation safety and could do similar wonders for healthcare AI.

1. Build regulatory feedback loops to learn from mistakes and improve practices.

Formal feedback loops developed by the federal government over many years have improved aviation safety in the U.S., the authors note. They recommend forming an auditing body to conduct investigations like those the National Transportation Safety Board (NTSB) leads after aviation incidents and accidents. Such a “healthcare AI safety board” would work closely with, or reside within, existing healthcare regulatory bodies. Its duties would include watchdogging healthcare AI systems for regulatory and ethical compliance as well as guiding CMS and private payers on which AI models deserve reimbursement. More:

“If an AI system in a hospital were to cause harm to a patient, the Health AI Safety Board would conduct an investigation to identify the causes of the incident and make recommendations for improving the safety and reliability of the AI system. The findings of the investigation would be made public, creating transparency and promoting accountability in organizations that deploy Health AI systems, and informing regulation by the FDA and FTC, similar to the relationship [in aviation] between the NTSB and the FAA.”

2. Establish a culture of safety and openness where stakeholders have incentives to report failures and communicate across the healthcare system.

Under the Federal Aviation Act, certain aspects of NTSB reports are not admissible as evidence in litigation, which “contributes to aviation’s ‘no blame’ culture and consequently enhances safety,” the authors write. More:

“If similar legislation is passed regarding health AI, then certain investigative reports could be deemed inadmissible as evidence in the context of certain kinds of litigation, thereby incentivizing all parties to participate in investigation and make improvements in safety by mitigating concern regarding legal liability. Above all, it will be vital to ensure that liability is fairly allocated across all the various health AI stakeholders, such as the developers, payers, hospitals and healthcare professionals.”

3. Extensively train, retrain, and accredit experts for interacting with healthcare AI, especially to help address automation bias and foster trust.

The authors note that airline pilots undergo rigorous training, including “thousands of hours” in aircraft simulators, to master interactions with automated systems. Developers of healthcare AI have been exploring ways to address automation bias, they write, but “more work is needed in the areas of human factors and interpretability to ensure safety—and aviation can provide inspiration.” More:

“Similar to pilots, doctors already undergo extensive training. However, with the advent of health AI, training with new AI instruments is crucial to ensure efficacy. In fact, we believe medical professionals should receive regular training on automated tools, understanding both their operation and their underlying principles. Yet today’s medical education lags behind technical AI development. … [M]edical education [should be] a healthcare professional’s first chance to understand the potentials and risks of AI systems in their context,” offering an opportunity that “may have lasting impacts on their careers.”

The paper is posted in full for free, and MIT News has additional coverage.

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
