The only thing to fear from AI in healthcare is fear of AI in healthcare itself

As it grows in proficiency and reach, healthcare AI will likely disappoint both those who expect it to produce miracles and those who fear it will cause catastrophes.

That’s one takeaway from two thought leaders in healthcare and technology. Commenting on the pairing in the Chicago Tribune May 23, Sheldon Jacobson, PhD, and Janet Jokela, MD, MPH, specifically ask:

Will the threats associated with AI in healthcare be as bad as some fear? Or will healthcare AI be relatively benign?

The answer, they suggest, will probably fall somewhere between those two extremes.

Jacobson is a professor of computer science at the University of Illinois at Urbana-Champaign. Jokela is the senior associate dean of engagement for the Carle Illinois College of Medicine at the same university. Here are five of their supporting arguments.

1. AI has no feelings and therefore cannot replace functions that demand human interaction, empathy and sensitivity.

AI can neither feel emotion nor exercise moral agency. But it doesn’t need those qualities to help produce welcome outcomes and make sound, evidence-based judgments. “What patients want and certainly need from their physicians is their time and their attention,” Jacobson and Jokela write, “which demands patience—something that AI systems have in abundance.” More:

‘Indeed, patience may be construed by some as a surrogate for human empathy and sensitivity, while impatience may be interpreted as the antithesis of such human characteristics.’

2. AI medical systems can process massive stores of information far more quickly and thoroughly than any human clinician.

Thanks to its vast capacity for spotting patterns and connections, healthcare AI “may spot an unusual condition that could expedite a diagnosis, identify an appropriate treatment plan and save lives—all at a lower cost,” Jacobson and Jokela point out.

‘AI models may even identify a novel condition by exhaustively eliminating the possibility of all possible known diseases, effectively creating new knowledge by a process of elimination.’

3. On the other hand, AI medical systems have limitations and risks.

“The plethora of data being used to train AI medical systems has come from physicians and human-centric healthcare delivery,” Jacobson and Jokela note. “If such sources of data are overwhelmed by AI-generated data, at some point, AI medical systems will be primarily relying upon data generated from AI medical care.”

‘Will this compromise the quality of care that AI medical systems deliver?’

4. Few, if any, healthcare personnel understand the complex statistical associations that yield medical AI outputs.

“Of course, much of clinical medicine is evidence-based, which in turn is based on clinical trials or extended observational experience,” Jacobson and Jokela write.

‘When viewed in this context, AI medical systems are taking a similar approach, with the time window to glean insights infinitesimally compressed.’

5. Anything that cannot be easily understood may elicit fear.

Healthcare AI certainly qualifies as a thing not readily comprehended, Jacobson and Jokela state. “In a world filled with uncertainty and risk, AI systems of all kinds offer tremendous benefits,” they remark. “Yet the uncertainty and risk that surround us will not miraculously go away with AI. There are no free lunches in this regard.”

‘Prudence and caution are reasonable. Efforts to stop or even slow AI advances are what we should really fear.’

Full piece here

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
