News You Need to Know Today

Healthcare AI looks to aviation safety | Healthcare AI newsmakers

Thursday, January 18, 2024



3 strategies healthcare can copy from aviation to refine medical AI

The field of healthcare AI continues to nurse two conspicuous Achilles heels—racial bias in initial algorithm iterations and uneven input data as algorithms age. For inspiration to persevere against these and other cure-resistant sore spots, the healthcare sector might look to the aviation industry.

The suggestion comes from technology scholars representing numerous institutions of higher learning. The group expounds on its proposition in a paper recently presented at an academic conference and posted online by the Association for Computing Machinery.

Pointing out that aviation is a field that “went from highly dangerous to largely safe,” computer scientist and engineer Elizabeth Bondi-Kelly, PhD, of the University of Michigan and colleagues name three broad actions that have improved aviation safety and could do similar wonders for healthcare AI.

1. Build regulatory feedback loops to learn from mistakes and improve practices.

Formal feedback loops developed by the federal government over many years have improved aviation safety in the U.S., the authors note. They recommend forming an auditing body that could conduct investigations like those led by the National Transportation Safety Board (NTSB) after aviation incidents and accidents. Such a “healthcare AI safety board” would work closely with—or reside within—existing healthcare regulatory bodies. Its duties would include watchdogging healthcare AI systems for regulatory and ethical compliance as well as guiding CMS and private payers on which AI models deserve reimbursement. More:

“If an AI system in a hospital were to cause harm to a patient, the Health AI Safety Board would conduct an investigation to identify the causes of the incident and make recommendations for improving the safety and reliability of the AI system. The findings of the investigation would be made public, creating transparency and promoting accountability in organizations that deploy Health AI systems, and informing regulation by the FDA and FTC, similar to the relationship [in aviation] between the NTSB and the FAA.”

2. Establish a culture of safety and openness where stakeholders have incentives to report failures and communicate across the healthcare system.

Under the Federal Aviation Act, certain aspects of NTSB reports are not admissible as evidence in litigation, which “contributes to aviation’s ‘no blame’ culture and consequently enhances safety,” the authors write. More:

“If similar legislation is passed regarding health AI, then certain investigative reports could be deemed inadmissible as evidence in the context of certain kinds of litigation, thereby incentivizing all parties to participate in investigation and make improvements in safety by mitigating concern regarding legal liability. Above all, it will be vital to ensure that liability is fairly allocated across all the various health AI stakeholders, such as the developers, payers, hospitals and healthcare professionals.”

3. Extensively train, retrain, and accredit experts for interacting with healthcare AI, especially to help address automation bias and foster trust.

The authors note that airline pilots undergo deep training, including “thousands of hours” in aircraft simulators, to master interactions with automated systems. Developers of healthcare AI have been exploring ways to address automation bias, they write, but “more work is needed in the areas of human factors and interpretability to ensure safety—and aviation can provide inspiration.” More:

“Similar to pilots, doctors already undergo extensive training. However, with the advent of health AI, training with new AI instruments is crucial to ensure efficacy. In fact, we believe medical professionals should receive regular training on automated tools, understanding both their operation and their underlying principles. Yet today’s medical education lags behind technical AI development. … [M]edical education [should be] a healthcare professional’s first chance to understand the potentials and risks of AI systems in their context,” offering an opportunity that “may have lasting impacts on their careers.”

The paper is posted in full for free, and MIT News has additional coverage.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Can simply working with healthcare AI make clinicians better at their jobs? New research seems to suggest so. A pediatric nurse practitioner who oversees development of an AI model at her hospital cut her referrals of patients for unneeded invasive tests from 80% to 58%. The researchers who studied the phenomenon have dubbed it “induced belief revision.” They hypothesize that “repeated exposures to model predictors and their corresponding labels [may lead] to a change in clinical decision-making based on a learned intuition of the model’s behavior.” The study is published in NEJM AI, and News Medical has a tidy summary.
     
  • The CEOs of OpenAI and Microsoft sat together to talk about AI this week. The event that seated them side by side was the 54th annual meeting of the World Economic Forum in Davos, Switzerland. OpenAI’s Sam Altman: “Last year, the world had a two-week freakout with GPT-4. And now people are like, ‘Why is [ChatGPT-4] so slow?’” Microsoft’s Satya Nadella: “I don’t think the world will put up any more with any of us coming up with something that has not thought through safety, trust and equity. These are big issues for everyone in the world.” Extensive coverage of AI chatter at Davos 2024 linked here.
     
  • Meanwhile, back in the U.S.: “If we scale up computer-aided drug design by a billion times, we could simulate biology.” That’s from Nvidia’s founder and CEO, Jensen Huang, who shared the vision last week at the J.P. Morgan Healthcare Conference in San Francisco. Nvidia’s own coverage here.
     
  • The American Medical Association has moved medical AI to its back burner. The big doctors’ group still counts AI’s riskiness among the issues on its legislative docket. But for now its top concern is lobbying Congress to pull back the 3.4% cut in Medicare reimbursement that kicked in on New Year’s Day, AMA President Jesse Ehrenfeld tells Politico. Bias-based AI danger remains a pressing concern, but it’s “mainly theoretical,” the outlet reports, “and healthcare interests consider it a lower priority than pocketbook issues like how much Medicare pays.”
     
  • A large language AI chatbot cooked up in Google’s DeepMind shop has equaled or bettered primary-care physicians on two key scores. We’re talking engaging patients and diagnosing their self-described conditions. The patients were trained actors, but the doctors were real. The bot, dubbed AMIE for Articulate Medical Intelligence Explorer, heard the patients out via text messages. Its diagnoses proved impressively accurate. And the patient actors said “Dr. Amie” came across as polite, empathetic, honest, caring and committed. Pre-peer review study here, coverage by Pymnts here.
     
  • Introducing an AI-friendly database built on records of around 60,000 patients who had almost 84,000 surgeries. Compiled at UCLA and UC-Irvine, the dataset is designed to help AI researchers “develop new algorithms and predictive tools to improve the care of surgical patients globally.” Announcement here.
     
  • A major medical journal has posted a collection of open-access papers whose common denominator is digital technology. The Journal of the American Heart Association put up links Jan. 16, suggesting the fresh content reflects worthy efforts to “validate or create scalable, engaging, evidence-based health-tech tools for clinicians and patients with the potential to improve health for people across the socioeconomic spectrum.” Full package here.
     
  • This just in from the animal healthcare AI beat. Researchers at Virginia Tech are using AI to decipher bovine communications as expressed through mooing, chewing and burping. The investigators, who work in the animal and dairy data sciences—yes, there are such disciplines—use sounds captured in pastures. From the recordings, machine learning helps analyze and catalog “thousands of points of acoustic data” to uncover heretofore hidden signs of stress, sickness, udder aches and what have you. More here.
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.





© Innovate Healthcare, a TriMed Media brand