Healthcare AI today: Joint Commission–CHAI teamwork, what patients everywhere want, when to restrain AI, more

 

News you need to know about: 

  • The Joint Commission is partnering with the Coalition for Health AI to scale AI accountability from coast to coast. The effort will supply more than 80% of U.S. provider orgs with playbooks, toolkits and overall guidance. Recommendations will draw from the JC’s standards platform and CHAI’s database of consensus-derived best practices. The partnership will also yield a certification program. Announcing the strategic alliance June 11, the Joint Commission suggests the duo acted on a shared recognition of the need for speed along the road to creative yet careful AI adoption. The announcement quotes Michael Pfeffer, MD, chief information and digital officer at Stanford Health Care. The Joint Commission-CHAI collaboration, Pfeffer says, “will help accelerate innovation, mitigate risk and enable healthcare organizations to fully leverage AI’s potential to improve patient outcomes and clinician workflows.”
     
    • AI’s potential to improve care quality is “enormous—but only if we do it right,” says Joint Commission president and CEO Jonathan Perlin, MD, PhD. By working with CHAI, Perlin adds, the JC is helping provider orgs pursue the attainable goal of “harness[ing] this technology in ways that not only support safety but also engender trust among stakeholders.”
       
    • To this, CHAI president and CEO Brian Anderson, MD, adds: “Together, we’re leading the transformation of data-driven healthcare [such that] AI [gets] embedded into every healthcare program—regardless of population, geographic area or resources.” The underlying aim, he underscores, is to “elevate patient safety and quality, and ultimately to improve health outcomes for all.”
       
    • The first deliverables will be available this fall. AI certification will follow. More details here
         
  • Healthcare providers all over the globe would do well to tailor AI adoption strategies to four universal patient preferences: awareness of local demographics, sensitivity to individuals’ respective health statuses, acceptance of varying appetites for AI explainability and patience with fluctuating demands for physician oversight. Give short shrift to these attributes? Good luck winning patient buy-in of AI at scale. So suggest researchers representing more than 50 academic medical institutions around the world. The far-flung authors analyzed survey results from adult patients, speaking 26 languages among them, at 74 hospitals in 43 countries on six continents. JAMA Network Open published the findings June 10. The authors’ condensed conclusion: 
     
    • “[W]e found that, while patients generally favored AI-equipped healthcare facilities, they preferred explainable AI systems and physician-led decision-making. In addition, patient attitudes varied significantly based on demographics and health status. These findings may be used by stakeholders in the healthcare AI sector to maximize patient acceptance of AI by prioritizing transparency, maintaining human oversight and tailoring AI implementation to patient characteristics.”
       
  • The U.S. Chamber of Commerce is all for judicious rather than general AI regulation. Noting that healthcare in particular is already highly regulated, two Chamber executives lay out the organization’s position in a succinct statement. “The Chamber advocates for tailoring AI regulations to the unique needs and risks of each sector—be it healthcare, finance or transportation—rather than applying a one-size-fits-all approach,” maintains the Chamber, which is not a governmental body but a private nonprofit. For overseeing healthcare, the Chamber leaders hold, “policymakers must cooperate with industry to make sure AI rules reflect real-world use cases.” More: 
     
    • “For instance, AI used in medical diagnostics may have different risk profiles and regulatory needs compared to AI used in administrative processes within hospitals and healthcare systems. This position aligns with the Chamber’s broader emphasis on flexible, innovation-friendly governance.”
       
  • In certain use cases, kneecapping healthcare AI is the right thing to do. That’s the view of a professor at the Stanford Graduate School of Business. Referring mainly to the use of AI scribe applications in doctor-patient encounters, Mohsen Bayati, PhD, clarifies: “[Y]ou have to sometimes compromise, make the AI weaker, to improve privacy. It’s a compromise that organizations deploying these [models] need to take into account.” Q&A audio and transcript here
     
  • The American Medical Association is calling on industry players to imbue all clinical AI models with explainability. To bolster such built-in transparency, sellers should label their tools with safety and efficacy data, the group maintains. The push is all about making physicians confident in AI assistance. The endgame is helping patients make good decisions about their own care. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences,” says AMA Board Member Alexander Ding, MD, MBA. The statement represents a policy refinement for the AMA. It comes on the heels of the group’s annual meeting in Chicago June 5 to 10. 
     
  • One provider institution ahead of that curve is 12-hospital Sentara Health. The integrated delivery network, which operates across Virginia and in parts of North Carolina, doesn’t make a meaningful move on AI without a blessing from a keenly engaged oversight panel staffed by senior leaders. The exemplary approach is described in a “How we did it” piece authored by panel co-chair Joseph Evans, MD, and published in Chief Healthcare Executive. Evans, an internist, lays out eight principles that guide the committee. The checklist includes items like accountability, transparency and humans in the loop. If any of the required fundamentals go missing from a given AI initiative, the project is considered dead—albeit revivable—on arrival. “By adhering to [our principles] in every circumstance,” Evans explains, “we will ensure any AI tools used in our system comply with legal, regulatory and ethical considerations while aligning with Sentara’s focus on promoting our consumers’ overall health and well-being.” See the full list of principles and learn more from Sentara Health’s example here
     
  • Is healthcare AI’s propensity to deepen racial disparities any less of a concern now than it was before? Somewhat, but there’s still a way to go. That’s the view of Ziad Obermeyer, MD, MPhil, a physician and researcher at UC Berkeley School of Public Health. Obermeyer tells the Los Angeles Sentinel the need persists for more accountability. He recalls research he and colleagues conducted a few years ago showing that AI tools used across the country fell down on the job. “Instead of predicting who is sick, those algorithms predicted who was going to generate high healthcare costs,” Obermeyer tells the newspaper, which describes itself as an African American-owned and operated media company with an emphasis on issues concerning the African American community. “It turns out that patients who are Black or poor, rural or less educated often don’t get care when they need it, so they cost less,” Obermeyer says. “Not because they are healthier but because they are underserved.” The article also summarizes several pieces of legislation pending in the Golden State. 
     
  • From AIin.Healthcare’s news partners:
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.