HIMSS speakers see standards, possibly ‘nutrition labels’ in healthcare AI’s future

Freely hope for the best, but diligently prepare for the worst. Applied to end users of healthcare AI, that adage could have been a key takeaway at last week’s annual meeting of the Healthcare Information and Management Systems Society (HIMSS) in Las Vegas.

“The good news is your optimism for AI is justified,” Mayo Clinic Platform head John Halamka, MD, told the audience in one session. However, he added, “there are caveats.”

Indeed. Enough of those were flagged that even AI vendors urged caution.

“We should think of any machine learning algorithm that is predicting a condition for somebody as a lab test,” said Tanuj Gupta, MD, MBA, an executive with EHR supplier and AI developer Cerner Corp. “If [its outputs are] off, and you potentially cause some morbidity and mortality issue, it’s a problem.”

The quotes are from a rundown filed by Stat News reporter Casey Ross, who was there to observe the dark vs. bright character of the perspectives on offer from various invited speakers. The outlet posted his coverage Aug. 16.

One steady refrain seems to have been a call for standards by which algorithms could be assessed for safety and efficacy.

Well enough, but what body is going to draw up and enforce any such standards?

“The FDA and the Government Accountability Office have created high-level frameworks for regulating artificial intelligence,” Ross points out, “but those proposals do not address the specific dilemmas created by the algorithmic products already making their way into care.”

He cites a recent Stat News investigation showing algorithms embedded in Epic EHR systems outputting iffy info to clinicians treating seriously ill patients.

The fumbles and misgivings are unlikely to slow healthcare AI’s momentum. As several HIMSS speakers underscored, according to Ross, existing and emerging algorithms have plenty of upside.

What’s more, some powerful players are working on creative ways to tamp down healthcare AI’s risks without sacrificing its rewards.

For example, Duke University has proposed a way to label algorithms like food products.

Mayo’s Halamka is all for that:  

“Shouldn’t we as a society demand a nutrition label on our algorithms saying this is the race, ethnicity, the gender, the geography, the income, the education that went into the creation of this algorithm? Oh, and here’s … some statistical measure of how well it works for a given population. You say, ‘Oh well, this one’s likely to work for the patient in front of me.’ That’s how we get to maturity.”

Read Ross’s full report.

