Given the rapid proliferation of AI-equipped medical devices across U.S. healthcare, unintended effects should surprise no one. Many will be pleasant surprises, but some adverse events are likely as well.
To strive for the best while preparing for the worst, healthcare organizations and healthcare AI developers should collaborate to ensure that AI systems are robust, reliable and transparent.
Two researchers remind these stakeholders of this and other responsibilities in an opinion piece published Nov. 27 in JAMA.
“Healthcare organizations must proactively develop AI safety assurance programs that leverage shared responsibility principles, implement a multifaceted approach to address AI implementation, monitor AI use, and engage clinicians and patients,” write Dean Sittig, PhD, and Hardeep Singh, MD, MPH. “Monitoring risks is crucial to maintaining system integrity, prioritizing patient safety and ensuring data security.”
Sittig is affiliated with the University of Texas, Singh with Baylor College of Medicine. Their JAMA paper’s primary audience is the provider sector. Here are six recommendations from the piece.
1. Conduct or wait for real-world clinical evaluations published in high-quality medical journals before implementing any AI-enabled system in routine care.
Further, while new AI-enabled systems mature, “we recommend that all healthcare organizations conduct independent real-world testing and monitoring with local data to minimize the risk to patient safety,” Sittig and Singh write. More:
‘Iterative assessments should accompany this risk-based testing to ensure that AI-enabled applications are benefiting patients and clinicians, are financially sustainable over their life cycles and meet core ethical principles.’
2. Invite AI experts into new or existing AI governance and safety committees.
These experts might be data scientists, informaticists, operational AI personnel, human-factors experts or clinicians working with AI, the authors point out.
‘All committee members should meet regularly to review requests for new AI applications, consider the evidence for safety and effectiveness before implementation, and create processes to proactively monitor the performance of AI-enabled applications they plan to use.’
3. Make sure the AI committee maintains an inventory of clinically deployed, AI-enabled systems with comprehensive tracking information.
Healthcare organizations should maintain and regularly review a transaction log of AI system use, similar to an EHR audit log, that records the AI version in use, the date and time of use, the patient ID, the responsible clinician's user ID, the input data fed to the AI system, and the AI recommendation or output, Sittig and Singh assert. (A sketch of what one record in such a log might look like follows the quote below.)
‘The committee should oversee ongoing testing of AI applications in the live production system to ensure the safe performance and safe use of these programs.’
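For illustration only, here is a minimal sketch of what one record in such a transaction log might look like. The schema, field names and JSON Lines storage format are assumptions made for the example, not specifications from the JAMA paper.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIAuditLogEntry:
    """One row in the AI transaction log; fields mirror the authors' list."""
    ai_system: str       # name of the deployed AI-enabled system
    ai_version: str      # AI version in use
    timestamp_utc: str   # date/time of AI system use
    patient_id: str      # patient identifier
    clinician_id: str    # responsible clinical user ID
    input_data: dict     # input data used by the AI system
    output: str          # AI recommendation or output

def log_ai_use(entry: AIAuditLogEntry, path: str = "ai_audit_log.jsonl") -> None:
    """Append the entry to an append-only JSON Lines file for later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_ai_use(AIAuditLogEntry(
    ai_system="sepsis-risk-model",  # hypothetical system name
    ai_version="2.3.1",
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    patient_id="PT-000123",
    clinician_id="DR-000456",
    input_data={"heart_rate": 112, "lactate_mmol_l": 2.4},
    output="elevated sepsis risk; recommend clinician reassessment",
))
```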
4. Create high-quality training programs for clinicians interested in using AI systems.
Initial training and subsequent clinician engagement should include a formal consent-style process, complete with signatures, the authors stress, to ensure that clinicians understand the risks and benefits of AI tools before being granted access.
‘Take steps to ensure that patients understand when and where AI-enabled systems were developed, how they may be used, and the role of clinicians in reviewing the AI system’s output before giving their consent.’
5. Develop a clear process for patients and clinicians to report AI-related safety issues.
As part of this effort, be sure to implement a rigorous, multidisciplinary process for analyzing these issues and mitigating risks, Sittig and Singh recommend.
‘Healthcare organizations should also participate in national postmarketing surveillance systems that aggregate deidentified safety data for analysis and reporting.’
6. Provide clear written instructions and authority so that authorized personnel can disable, stop or turn off AI-enabled systems 24 hours a day, 7 days a week, in case of an urgent malfunction.
“Similar to an organization’s preparation for a period of EHR downtime,” the authors offer, “healthcare organizations must have established policies and procedures to seamlessly manage clinical and administrative processes that have become dependent on AI automation when the AI is not available.”
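One way to make round-the-clock shutoff authority operational is a feature-flag-style kill switch that authorized staff can flip without a code deployment. The sketch below is an assumption-laden illustration, not the authors' method: the flag file, system name and model call are all hypothetical, and the code fails safe to the manual workflow, echoing the EHR-downtime analogy.

```python
import json
from pathlib import Path

# Hypothetical flag file that authorized personnel can edit at any hour,
# e.g. {"sepsis-risk-model": false} to take that system out of service.
FLAG_FILE = Path("/etc/ai_flags/enabled_systems.json")

def ai_system_enabled(name: str) -> bool:
    """Fail safe: treat a missing or unreadable flag file as 'disabled'."""
    try:
        flags = json.loads(FLAG_FILE.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        return False
    return bool(flags.get(name, False))

def run_model(patient_data: dict) -> str:
    """Stand-in for the deployed model; real inference would happen here."""
    return "elevated sepsis risk"

def get_recommendation(patient_data: dict) -> str:
    if not ai_system_enabled("sepsis-risk-model"):
        # Downtime procedure: revert to the manual clinical workflow,
        # just as organizations do during planned EHR downtime.
        return "AI unavailable: follow standard sepsis screening protocol"
    return run_model(patient_data)

print(get_recommendation({"heart_rate": 112}))
```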
‘Regularly assess how [your] AI systems affect patient outcomes, clinician workflows and system-wide quality.’
Expounding on this point, the authors suggest revising AI models that fail to meet pre-implementation goals. If revision proves infeasible, “the entire system should be decommissioned.”
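As a toy illustration of checking live performance against pre-implementation goals: the metric names and thresholds below are invented for the example; real goals would come from an organization's own pre-deployment evaluation, not from the JAMA paper.

```python
# Illustrative thresholds set before go-live; the values here are made up.
PRE_IMPLEMENTATION_GOALS = {
    "sensitivity_min": 0.85,   # catch at least 85% of true cases
    "ppv_min": 0.30,           # at least 30% of alerts should be real
    "override_rate_max": 0.70, # clinicians ignoring >70% signals trouble
}

def goals_unmet(live: dict) -> list[str]:
    """Return the pre-implementation goals the live system fails to meet."""
    failures = []
    if live["sensitivity"] < PRE_IMPLEMENTATION_GOALS["sensitivity_min"]:
        failures.append("sensitivity below goal")
    if live["ppv"] < PRE_IMPLEMENTATION_GOALS["ppv_min"]:
        failures.append("positive predictive value below goal")
    if live["override_rate"] > PRE_IMPLEMENTATION_GOALS["override_rate_max"]:
        failures.append("alert override rate above ceiling")
    return failures  # any entries here trigger revision or decommissioning

print(goals_unmet({"sensitivity": 0.78, "ppv": 0.41, "override_rate": 0.82}))
```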
Read the full paper.