News You Need to Know Today

AI that uplifts | Partner voice | JCAHO–CHAI teamwork, what patients everywhere want, when to restrain AI, more

Friday, June 13, 2025

In cooperation with

Nabla


Healthcare AI today: JCAHO–CHAI teamwork, what patients everywhere want, when to restrain AI, more

 

News you need to know about: 

  • The Joint Commission is partnering with the Coalition for Health AI to scale AI accountability from coast to coast. The effort will supply more than 80% of U.S. provider orgs with playbooks, toolkits and overall guidance. Recommendations will draw from the Joint Commission’s standards platform and CHAI’s database of consensus-derived best practices. The combined resources will also yield a certification program. Announcing the strategic alliance June 11, the Joint Commission said the two organizations acted on a shared recognition of the need for speed along the road to creative yet careful AI adoption. The announcement quotes Michael Pfeffer, MD, chief information and digital officer at Stanford Health Care. The Joint Commission-CHAI collaboration, Pfeffer says, “will help accelerate innovation, mitigate risk and enable healthcare organizations to fully leverage AI’s potential to improve patient outcomes and clinician workflows.”
     
    • AI’s potential to improve care quality is “enormous—but only if we do it right,” says Joint Commission president and CEO Jonathan Perlin, MD, PhD. By working with CHAI, Perlin adds, the JC is helping provider orgs pursue the attainable goal of “harness[ing] this technology in ways that not only support safety but also engender trust among stakeholders.”
       
    • To this CHAI president and CEO Brian Anderson, MD, adds: “Together, we’re leading the transformation of data-driven healthcare [such that] AI [gets] embedded into every healthcare program—regardless of population, geographic area or resources.” The underlying aim, he underscores, is to “elevate patient safety and quality, and ultimately to improve health outcomes for all.”
       
      • The first deliverables will be available this fall. AI certification will follow. More details here
         
  • Healthcare providers around the globe would do well to tailor AI adoption strategies to four universal patient considerations: local demographics, individuals’ health statuses, varying preferences for AI explainability and fluctuating demands for physician oversight. Give short shrift to these factors, and good luck winning patient buy-in for AI at scale. So suggest researchers representing more than 50 academic medical institutions around the world. The authors analyzed survey results from adult patients speaking 26 languages at 74 hospitals in 43 countries on six continents. JAMA Network Open published the findings June 10. The authors’ condensed conclusion: 
     
    • “[W]e found that, while patients generally favored AI-equipped healthcare facilities, they preferred explainable AI systems and physician-led decision-making. In addition, patient attitudes varied significantly based on demographics and health status. These findings may be used by stakeholders in the healthcare AI sector to maximize patient acceptance of AI by prioritizing transparency, maintaining human oversight and tailoring AI implementation to patient characteristics.”
       
  • The U.S. Chamber of Commerce is all for judicious rather than across-the-board AI regulation. Noting that healthcare in particular is already highly regulated, two Chamber executives lay out the organization’s position in a succinct statement. “The Chamber advocates for tailoring AI regulations to the unique needs and risks of each sector—be it healthcare, finance or transportation—rather than applying a one-size-fits-all approach,” maintains the Chamber, which is not a governmental body but a private nonprofit. On healthcare oversight, the Chamber leaders hold, “policymakers must cooperate with industry to make sure AI rules reflect real-world use cases.” More: 
     
    • “For instance, AI used in medical diagnostics may have different risk profiles and regulatory needs compared to AI used in administrative processes within hospitals and healthcare systems. This position aligns with the Chamber’s broader emphasis on flexible, innovation-friendly governance.”
       
  • In certain use cases, kneecapping healthcare AI is the right thing to do. That’s the view of a professor at the Stanford Graduate School of Business. Referring mainly to the use of AI scribe applications in doctor-patient encounters, Mohsen Bayati, PhD, clarifies: “[Y]ou have to sometimes compromise, make the AI weaker, to improve privacy. It’s a compromise that organizations deploying these [models] need to take into account.” Q&A audio and transcript here
     
  • The American Medical Association is calling on industry players to imbue all clinical AI models with explainability. To bolster such built-in transparency, vendors should label their tools with safety and efficacy data, the group maintains. The push aims to make physicians confident in AI assistance and, ultimately, to help patients make good decisions about their own care. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences,” says AMA Board Member Alexander Ding, MD, MBA. The statement represents a policy refinement for the AMA, coming on the heels of the group’s annual meeting in Chicago June 5 to 10. 
     
  • One provider institution ahead of that curve is 12-hospital Sentara Health. The integrated delivery network, which operates across Virginia and in parts of North Carolina, doesn’t make a meaningful move on AI without a blessing from a keenly engaged oversight panel staffed by senior leaders. The exemplary approach is described in a “How we did it” piece authored by panel co-chair Joseph Evans, MD, and published in Chief Healthcare Executive. Evans, an internist, lays out eight principles that guide the committee. The checklist includes items like accountability, transparency and humans in the loop. If any of the required fundamentals go missing from a given AI initiative, the project is considered dead—albeit revivable—on arrival. “By adhering to [our principles] in every circumstance,” Evans explains, “we will ensure any AI tools used in our system comply with legal, regulatory and ethical considerations while aligning with Sentara’s focus on promoting our consumers’ overall health and well-being.” See the full list of principles and learn more from Sentara Health’s example here
     
  • Is healthcare AI’s propensity for increasing racial imbalance any less of a concern now than it was before? It’s getting there, but it still has a way to go. That’s the view of Ziad Obermeyer, MD, MPhil, a physician and researcher at UC Berkeley School of Public Health. Obermeyer tells the Los Angeles Sentinel the need persists for more accountability. He recalls research he and colleagues conducted a few years ago showing that AI tools used across the country fell down on the job. “Instead of predicting who is sick, those algorithms predicted who was going to generate high healthcare costs,” Obermeyer tells the newspaper, which describes itself as an African American-owned and operated newspaper and media company that places an emphasis on issues concerning the African American community. “It turns out that patients who are Black or poor, rural or less educated often don’t get care when they need it, so they cost less,” Obermeyer says. “Not because they are healthier but because they are underserved.” The article also summarizes several pieces of legislation pending in the Golden State. 
     
  • From AIin.Healthcare’s news partners:
     

 


The Latest from our Partners

What keeps clinicians practicing longer?
At McFarland Clinic, it’s the impact of using Nabla's Ambient AI Assistant.

From reducing time spent charting to feeling more present with patients, clinicians across 12 specialties are seeing real benefits—with some even saying it’s extended their careers by years.

Hear directly from McFarland providers on how Nabla fits into their Epic workflow and supports the joy of practicing medicine.

📽️ Watch the testimonial and read the full case study

AI in healthcare can uplift human potential

Global market researchers: ‘Build a culture that uses healthcare AI to uplift human potential’

To maximize returns on AI investments, healthcare organizations should align AI initiatives with core competencies. This effort should focus on optimizing experiences for workforces as well as patients. It should also advance the ongoing pursuits of better population health and disciplined cost containment. 

This is one of four tips from analysts with the Big 4 accounting firm KPMG. The team arrived at their recommendations after conducting market research on numerous fronts. The cornerstone of the project was a quantitative survey of approximately 1,400 decision-makers in eight economic sectors across eight countries. 

The healthcare cohort comprised 183 senior healthcare leaders, half of whom held C-suite titles. In the resulting research report, the analysts offer three more healthcare-specific pointers: 

1. Build trust into your AI roadmap. 

Healthcare organizations should implement transparent, explainable AI (XAI), ethical governance frameworks and robust regulatory compliance, the authors state. 

“Addressing concerns about bias and security early on—while offering proof that AI delivers successful outcomes—can build stakeholder acceptance and trust,” they write.  

They quote a CTO respondent in Australia: 

‘We’ve got terabytes of data, but the data is not clean. Because data is not clean, can you trust the outcome that is being presented by AI?’

2. Build a culture that uses AI to uplift human potential.

When it comes to taking a longer-term strategic view on AI, half of respondents say their organizations are developing a clear vision of how the tech can support their transformational ambitions over the next five years, the KPMG analysts report. 

“AI should augment, not replace, human expertise,” they add. “Foster human-AI collaboration by reskilling clinicians [and] illustrate the ways AI can reduce burnout, enhance efficiency and improve the quality of care.”

A CIO in the United States:  

‘Our doctors didn’t like the idea that a tool would be telling them a different way to diagnose something.’

3. Create sustainable technology and data infrastructure for AI adoption.

Investing in cloud platforms enables secure, scalable access to vast datasets and advanced AI tools, supporting real-time collaboration, diagnostics and innovation across care settings, the authors point out. 

“Adopting a federated learning approach for AI models helps ensure that the model is sent to where the data resides and learns from it locally,” they note. “Because only the learned updates—not the data itself—are shared back and aggregated, sensitive data remains private and secure.”

A CTO in China:

‘Medical data is particularly complex, not only because it comes in various types—text, images, videos, etc.—but also because the quality varies greatly. We have spent considerable time cleaning and standardizing this data to ensure that it can be accurately understood and analyzed by AI algorithms.’
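The federated learning approach the KPMG authors describe, in which the model travels to where the data resides and only the learned updates are shared back and aggregated, can be sketched in a few lines. This is a minimal toy illustration with a made-up linear model and hypothetical site data, not any vendor's implementation:

```python
# Toy federated-averaging sketch: each site trains locally on its own data
# and shares only a parameter update (delta), never the raw records.
# Model: 1-D linear regression y = w*x, fit by gradient descent.

def local_update(w, data, lr=0.01, epochs=5):
    """Train locally from the global weight w; return only the learned delta."""
    w_local = w
    for _ in range(epochs):
        # Gradient of mean squared error over this site's data only.
        grad = sum(2 * x * (w_local * x - y) for x, y in data) / len(data)
        w_local -= lr * grad
    return w_local - w  # this update is the only thing that leaves the site

def federated_round(w, sites):
    """One round: collect each site's delta and average into the global model."""
    deltas = [local_update(w, data) for data in sites]
    return w + sum(deltas) / len(deltas)

# Two hypothetical hospital datasets, both roughly following y = 3x.
site_a = [(1.0, 3.1), (2.0, 6.0), (3.0, 8.9)]
site_b = [(1.0, 2.9), (2.0, 6.2), (4.0, 12.1)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [site_a, site_b])
# After training, w approaches the slope (~3) shared by both sites.
```

Only the parameter deltas cross institutional boundaries here, which is what keeps the sensitive records private in the scheme the report describes; real deployments add secure aggregation and differential-privacy safeguards on top of this basic loop.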

KPMG has posted the report in full for free

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand