Medical professionals using AI in clinical decision-making should limit the technology’s reach to a supportive role. In fact, used in these settings, the technology is best thought of—and referred to—as augmented intelligence.
This is one of 10 settled positions of the American College of Physicians, or ACP, which represents more than 160,000 internal medicine specialists, subspecialists and trainees.
The group itemizes and expounds on its AI views in a paper published this month by its flagship journal, Annals of Internal Medicine. Here are key passages from five more of the 10.
1. ACP believes that the development, testing and use of AI in healthcare must be aligned with principles of medical ethics.
Healthcare AI ought to boost care quality, strengthen the patient-physician relationship, avoid demographic bias and assist in clinical decision-making without commandeering it, corresponding author Nadia Daneshvar, JD, MPH, and colleagues suggest. More:
‘Maintaining the patient–physician relationship requires care. AI should be implemented in ways that do not harm or interfere with this relationship but instead enhance and promote the therapeutic alliance between patient and physician.’
2. ACP reaffirms its call for transparency in the development, testing and use of AI for patient care.
Compromise on such end-to-end transparency, and don’t be surprised when trust crumbles among stakeholders, the authors suggest. ACP “recommends that patients, physicians and other clinicians be made aware, when possible, that AI tools are likely being used in medical treatment and decision making,” they write.
‘Even if patients are not, at present, explicitly informed of all the ways technology is involved in their care—for example, they may or may not be told about computer-assisted electrocardiogram or mammography interpretation—the newness of AI and its potential for clinically significant effects on care suggests that honesty and transparency about its use are paramount.’
3. ACP reaffirms that AI developers, implementers and researchers should prioritize the privacy and confidentiality of patient and clinician data.
If patient, physician or other clinician data must be used for the development of AI models, the data should first be deidentified and aggregated, ACP holds. “We note, however, that deidentification of data, particularly if the data is unstructured, can be a substantial challenge.”
‘We renew our [prior] call for comprehensive federal privacy legislation, with special provisions regarding privacy protections for AI data sets included in such legislation.’
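Why is unstructured data the hard case? The toy sketch below is our illustration, not anything from the ACP paper; all names, fields and values are invented. It contrasts a structured record, where direct identifiers sit in known fields and are trivially dropped, with a free-text note, where simple pattern matching catches some identifiers but misses quasi-identifiers such as relatives, places and dates.

```python
# Hypothetical illustration only; not from the ACP paper. All names,
# fields and values below are invented.
import re

# Structured record: direct identifiers live in known fields,
# so deidentification is a simple, reliable column drop.
record = {
    "name": "Jane Doe",
    "mrn": "123-45-678",
    "age": 62,
    "diagnosis": "type 2 diabetes",
}
DIRECT_IDENTIFIERS = {"name", "mrn"}
deidentified = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
print(deidentified)  # {'age': 62, 'diagnosis': 'type 2 diabetes'}

# Unstructured note: identifiers can appear anywhere, in any form.
note = (
    "Ms. Doe, seen 3/14 at Riverside Clinic, reports improved glucose. "
    "Daughter (contact: 555-0172) will drive her to the follow-up."
)
# A naive pattern-based scrubber catches the obvious mentions...
scrubbed = re.sub(r"\b\d{3}-\d{4}\b", "[PHONE]", note)
scrubbed = re.sub(r"\bMs\.\s+\w+", "[PATIENT]", scrubbed)
print(scrubbed)
# ...but the visit date, the clinic name and the reference to a daughter
# all survive: quasi-identifiers that patterns alone don't reliably find.
```

In practice, deidentifying clinical free text typically calls for trained named-entity recognition models plus human review rather than regular expressions alone, which is why the ACP authors flag unstructured data as a substantial challenge.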
4. ACP recommends that, in all stages of development and use, AI tools be designed to reduce clinician burden in support of patient care.
Reducing unnecessary administrative, cognitive and other burdens should be a priority in the design and development of AI-enabled devices, Daneshvar and co-authors point out, adding that a central promise of medical AI is freeing up time for physician-patient interactions.
‘Any mechanisms for clinicians to provide feedback on the performance of or any issues with the AI tool should not be burdensome to the clinician. The effects of AI-enabled burden reduction tools on burnout should be assessed.’
5. ACP recommends AI training for physicians at all levels of education and practice.
Comprehensive training programs and resources are needed at the undergraduate medical education, graduate medical education and attending physician levels to address the knowledge gaps of current healthcare professionals, the authors insist.
‘Training should ensure that physicians remain able to make appropriate clinical decisions independently, in the absence of AI decision support, for vigilance against errors in AI-generated or -guided decisions.’
The paper is posted in full for free.