News You Need to Know Today


Tuesday, December 3, 2024



Healthcare-specific AI: How to multiply successes and head off accidents

Given the rapid proliferation of AI-equipped medical devices across U.S. healthcare, unintended effects should surprise no one. Many will be pleasant surprises. But some adverse events are likely as well.

To strive for the best while preparing for the worst, healthcare organizations and healthcare AI developers should collaborate to ensure that AI systems are robust, reliable and transparent.

Two researchers remind these stakeholders of this and other responsibilities in an opinion piece published Nov. 27 in JAMA.

“Healthcare organizations must proactively develop AI safety assurance programs that leverage shared responsibility principles, implement a multifaceted approach to address AI implementation, monitor AI use, and engage clinicians and patients,” write Dean Sittig, PhD, and Hardeep Singh, MD, MPH. “Monitoring risks is crucial to maintaining system integrity, prioritizing patient safety and ensuring data security.”

Sittig is affiliated with the University of Texas, Singh with Baylor College of Medicine. Their JAMA paper’s primary audience is the provider sector. Here are six recommendations from the piece. 

1. Conduct or wait for real-world clinical evaluations published in high-quality medical journals before implementing any AI-enabled system in routine care.

Further, while new AI-enabled systems mature, “we recommend that all healthcare organizations conduct independent real-world testing and monitoring with local data to minimize the risk to patient safety,” Sittig and Singh write. More: 

‘Iterative assessments should accompany this risk-based testing to ensure that AI-enabled applications are benefiting patients and clinicians, are financially sustainable over their life cycles and meet core ethical principles.’

2. Invite AI experts into new or existing AI governance and safety committees. 

These experts might be data scientists, informaticists, operational AI personnel, human-factors experts or clinicians working with AI, the authors point out. 

‘All committee members should meet regularly to review requests for new AI applications, consider the evidence for safety and effectiveness before implementation, and create processes to proactively monitor the performance of AI-enabled applications they plan to use.’

3. Make sure the AI committee maintains an inventory of clinically deployed, AI-enabled systems with comprehensive tracking information.

Healthcare organizations should maintain and regularly review a transaction log of AI system use, similar to the EHR's audit log, that includes the AI version in use, the date and time of use, the patient ID, the responsible clinical user ID, the input data used by the AI system, and the AI recommendation or output, Sittig and Singh assert. (A minimal sketch of such a log entry follows this item.)

‘The committee should oversee ongoing testing of AI applications in the live production system to ensure the safe performance and safe use of these programs.’
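The authors prescribe policy, not a schema. For teams wondering what one row of such a transaction log might look like, here is a minimal sketch in Python; the class and field names are illustrative assumptions mapped to the elements Sittig and Singh list, not anything specified in the JAMA piece.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIAuditLogEntry:
    """One row in a hypothetical AI transaction log, mirroring the
    elements Sittig and Singh suggest tracking (names illustrative)."""
    ai_system: str       # which AI-enabled application was used
    ai_version: str      # the AI version in use
    patient_id: str      # patient identifier
    clinician_id: str    # responsible clinical user ID
    input_data: dict     # input data used by the AI system
    output: str          # AI recommendation or output
    used_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry, recorded the way an EHR audit log captures events:
entry = AIAuditLogEntry(
    ai_system="sepsis-risk-model",
    ai_version="2.3.1",
    patient_id="MRN-0000000",
    clinician_id="user-4821",
    input_data={"heart_rate": 112, "wbc": 14.2},
    output="High risk: recommend sepsis bundle review",
)
```

An append-only store of such entries would give the governance committee the same replay-and-review capability it already has with the EHR audit log.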

4. Create high-quality training programs for clinicians interested in using AI systems.

Initial training and subsequent clinician engagement should include a formal consent-style process, complete with signatures, the authors stress, to ensure that clinicians understand the risks and benefits of using AI tools before their access is enabled. 

‘Take steps to ensure that patients understand when and where AI-enabled systems were developed, how they may be used, and the role of clinicians in reviewing the AI system’s output before giving their consent.’

5. Develop a clear process for patients and clinicians to report AI-related safety issues.

As part of this effort, be sure to implement a rigorous, multidisciplinary process for analyzing these issues and mitigating risks, Sittig and Singh recommend. 

‘Healthcare organizations should also participate in national postmarketing surveillance systems that aggregate deidentified safety data for analysis and reporting.’

6. Provide clear written instructions and authority to enable authorized personnel to disable, stop, or turn off the AI-enabled systems 24 hours a day, 7 days a week, in case of an urgent malfunction. 

“Similar to an organization’s preparation for a period of EHR downtime,” the authors offer, “healthcare organizations must have established policies and procedures to seamlessly manage clinical and administrative processes that have become dependent on AI automation when the AI is not available.” 

‘Regularly assess how [your] AI systems affect patient outcomes, clinician workflows and system-wide quality.’

Expounding on the latter point, the authors suggest revising AI models that fail to meet pre-implementation goals. If such revisions prove infeasible, “the entire system should be decommissioned.”
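Sittig and Singh frame this last recommendation as policy and procedure; they do not specify a mechanism. As one hedged illustration, an organization could implement the 24/7 "off switch" as a feature flag that clinical software checks before every AI call, failing over to the manual workflow. Everything below, from the flag file's path to the function names, is a hypothetical sketch rather than anything from the paper.

```python
import json
from pathlib import Path

# Hypothetical kill-switch flag; in practice this might live in a
# database or feature-flag service that on-call staff can edit 24/7.
FLAG_FILE = Path("/etc/ai_flags/sepsis-risk-model.json")

def ai_enabled() -> bool:
    """Return False if authorized personnel have disabled the AI system."""
    try:
        return json.loads(FLAG_FILE.read_text()).get("enabled", False)
    except (OSError, ValueError):
        # Fail safe: if the flag can't be read, treat the AI as disabled
        # and fall back to the manual (non-AI) workflow.
        return False

def recommend(patient_record: dict) -> str:
    """Route to the AI only when the kill switch says it is enabled."""
    if not ai_enabled():
        # Mirrors EHR-downtime procedures: clinicians proceed without AI.
        return "AI unavailable: follow manual assessment protocol"
    return run_model(patient_record)  # hypothetical model call

def run_model(patient_record: dict) -> str:
    return "model output placeholder"
```

Failing closed, that is, treating an unreadable flag as "disabled", matches the authors' EHR-downtime analogy: when in doubt, the organization falls back to its non-AI procedures.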

Read the full paper. 

 


The Latest from our Partners

Catalight Partners with Nabla to Reduce Practitioner Documentation Burden and Elevate Autism and I/DD Care - A leader in intellectual and developmental disabilities (I/DD) care, Catalight is leveraging Nabla's Ambient AI assistant to enhance patient care, expand access, and empower families with tailored treatment options. Learn more about how Nabla is transforming care here: https://www.prnewswire.com/news-releases/catalight-partners-with-nabla-to-reduce-practitioner-documentation-burden-and-elevate-autism-and-idd-care-302315767.html


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • To improve the nation’s fiscal health, improve its population’s actual health. Looking into the connection, CNBC notes the potential for both to happen under an AI-forward Trump Administration II. The piece quotes Ajay Agrawal, a University of Toronto researcher who concentrates on the economics of AI. “Many people are fearful of reducing regulation because they don’t want technologies that are immature to be brought into the healthcare system and harm people,” Agrawal says. “And that’s a very legitimate concern. But very often what they fail to also put into their equation is the harm we’re causing people by not bringing in new technologies.” And it goes without saying that sicker populations are more expensive to care for than healthier ones. Read the item.
     
  • Along that same line of thinking, consider the 80 million low-income Americans who depend on state-administered Medicaid programs. This subpopulation tends to have poorer access to care and poorer outcomes than the population as a whole. To close the gap, the Federation of American Scientists is proposing an AI for Medicaid initiative. CMS should launch such a project to “incentivize and pilot novel AI healthcare tools and solutions targeting Medicaid recipients,” writes the author of the piece, Harvard grad student Pooja Joshi. “Leveraging state incentives to address a critical market failure in the digital health space can additionally unlock significant efficiencies within the Medicaid program and the broader healthcare system.” Read the rest.
     
  • Federal AI guardrails are taking shape. CMS is seeking to codify language prohibiting Medicare Advantage plans from using AI to “discriminate on the basis of any factor that is related to the enrollee’s health status.” The agency is also keen to make sure MA plans administered with AI “provide equitable access to services.” The quotes are from a broad proposed rule scheduled to be published in the Federal Register Dec. 10. Fact sheet on the full proposed rule here; good summary coverage of the AI piece by GovInfo Security here.
     
  • Don’t conflate AI in medicine with AI in medical education. “A system designed to optimize a busy physician’s time should not be blindly applied to a trainee still learning the art of medicine,” explains Naga Kanaparthy, MD, MPH, of Yale in commentary published by MedPage Today. “Teaching and exposing trainees to the most effective technologies is important if we want the best possible healthcare, but not at the expense of establishing a sound medical foundation.” 
     
  • Healthcare AI is raising some thorny ethical questions. Imagine the technology suggesting aggressive treatment for a patient it deems likely to benefit, and a “wait and watch” approach for one with an iffier prognosis, regardless of intervention. “From a utilitarian perspective, prioritizing Patient A might make sense,” Dr. Rubin Pillay blogs at Rubin Reflects. “But what happens to Patient B’s right to equal treatment? Do we redefine fairness when medicine knows more about individual probabilities of success?” Read and mull.
     
  • RapidAI and Viz.ai top the list of vendors whose imaging AI products have been adopted by healthcare providers. The roster is from KLAS Research, which also found that Aidoc, Nuance and Riverain are the frontrunners among suppliers whose imaging AI products are under consideration. KLAS further found that traditional imaging IT vendors, including Sectra, Agfa HealthCare and Fujifilm, own considerable mindshare in the space too. Report available here.
     
  • Having been born into a world awash in tech, today’s kids are hard to impress. But they seem to love Honda’s AI-powered robot, Haru, when it visits them in the hospital. It looks a bit like a mashup of a frog and Johnny 5 from the sci-fi movie Short Circuit, TechRadar reports. “But underneath the cutesy exterior, Haru has played a very serious role in assisting and enhancing the lives of children undergoing long-term [inpatient] treatment.” Story and photos.
     
  • Has it only been two years since ChatGPT shook up the world? Yes, but it’s been a long couple of years. It seems that way, anyway, given all that’s happened with large language models since late 2022. And yet, for all the hoopla, the perfect use case for generative AI has yet to emerge. So notes Axios reporter Megan Morrone to mark the anniversary. Still, she observes, the preceding 24 months “have proven the technology’s allure—and that will drive the industry to keep looking till it finds a killer app.”
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


© Innovate Healthcare, a TriMed Media brand