10 questions clinicians—and patients—ought to ask about every AI they encounter

Technology educators, tech-policy wonks and hospital clinical leaders from three countries have collaborated to produce a helpful guide for end-users of healthcare-specific AI tools—and the patients they serve.

Released this week, the 24-page digital publication functions as a consumer-friendly primer on the proper use of AI to support decisionmaking by clinicians, administrators and other provider staff likely to engage with AI in the near- or long-term future.

Produced by the Korea Advanced Institute of Science and Technology in South Korea in cooperation with the U.K.-based Sense About Science and the Lloyd’s Register Foundation Institute for the Public Understanding of Risk at the National University of Singapore, the guide summarizes AI’s ascent in healthcare, supplies a brief glossary of terms and describes ways AI is used to treat patients.

Further in, it advises users of healthcare AI tools to find out whether:

  - the source of the data used for training and testing is known;
  - the data has been collected or selected for the purpose the end-user is pursuing;
  - limitations and assumptions for that purpose have been clearly stated;
  - biases have been addressed; and
  - the model has been tested and validated in real-world settings.

Next, the guide suggests and fleshes out a number of specific questions that stakeholders, as well as close observers such as policymakers and journalists, might ask before using, considering or covering healthcare AI.

Among these:

  1. Does the data represent the patients for whom the AI is being used?
  2. Are the patterns and relationships identified by the AI accurate?
  3. What assumptions is the AI making about patients and disease?
  4. Are the variables excluded from the model truly irrelevant?
  5. Are the results generalizable?
  6. Does the AI eliminate human prejudice from decisionmaking?
  7. How much decision weight can we put on it?
  8. How well does the AI really perform?
  9. Has its reliability been properly scrutinized?
  10. Does it make a useful real-world recommendation?

“By applying these questions, society can ensure AI developers’ solutions to modern healthcare challenges are making good use of the data and knowledge available, with minimal error, across different countries and populations, without deepening inequalities that are already high,” the authors write. “These are the AIs that will make useful real-world recommendations that clinicians can have confidence in.”


From misdiagnosing a serious disease to exacerbating racial and economic health inequalities, AI gone wrong can have life-or-death implications. “There’s confusion and fear out there—fear about robots taking people’s jobs, fear about data privacy, fear of who’s ultimately responsible if an AI-supported decision turns out to be wrong. Rather than throwing out tools that can help us, we’ll be better off if we discuss the right questions now about the standards AIs should meet.”

The guide, titled “Using Artificial Intelligence to Support Healthcare Decisions: A Guide for Society,” is available for download.

