News You Need to Know Today

Good AI governance | AI news watcher’s blog | Partner voice

Wednesday, October 30, 2024



Good AI governance can spell the difference between smashing success and a mere learning experience

Discussions of AI governance may cause many an eye to glaze over, but the discipline is as crucial to the ascent of AI in healthcare as big training datasets drawn from diverse patient populations.

Researchers at Duke University’s Margolis Institute for Health Policy drill down into the hows and whys in a white paper published Oct. 28.

The team gathered and organized material with input from experts at six U.S. health systems that maintain AI governance operations. To this foundation the researchers added interview content from AI-knowledgeable professionals at a number of other health systems. 

The resulting 10-page paper describes key components of AI governance. It also suggests ways health systems might set up governance systems more or less from scratch. 

Reinforcing the paper’s central theme—how to align innovation, accountability and trust—the authors describe five areas in which approaches to AI governance vary. Three are people-focused, two process-focused:

1. Decision-making authority.

Some hospitals and health systems give this authority to the person who allocated budget funds for the AI tool, the authors explain. “Other health systems favor a more centralized decision process, where the review team or a larger governance group make the final decision,” they write. “Still other health systems place some or all decision-making with executive leadership, who rely on recommendations from the review process.” More: 

‘This can ensure AI tool selection is consistent with the overall AI standards and strategy.’

2. Governance committee composition. 

Many AI governance committees are interdisciplinary, with members from disciplines such as IT, clinical care, informatics, legal, privacy, ethics, compliance, human resources, patient engagement, DEI and finance, the authors note. “Some have relevant background in AI,” they add, “but others may need additional training on the implications of AI within their area of expertise.” More: 

‘Organizations that opt to integrate AI governance into their traditional governance for other technologies also provide training on how to effectively assess AI tools.’ 

3. Role of the patient voice. 

Many health systems want to include patients’ perspectives in decisions on AI tools that may affect care, the authors acknowledge. However, some have run into potential legal and logistical challenges when “allowing individuals who are not health system employees to have visibility into the full review process.” More: 

‘As patients traditionally have not been involved in technology selection and implementation, more work is needed on best practices in this space.’

4. Governance scope. 

“AI is a broad term, and governance systems need to clarify the scope of tools within their purview,” the authors state. “Some organizations focus on a range of AI tools, while others concentrate on machine-learning enabled tools only.” Still others only review enterprise tools. More: 

‘Governance scope is determined by several factors—including the resources available—and the scope may change as a governance system matures.’

5. Tool identification. 

Health systems “must ensure they are aware when AI tools are being considered in order to bring them into the governance process,” the authors write. “Some groups build in processes to ‘catch’ tools within scope, often in connection with IT and procurement offices.” More:

‘There is not a perfect process, and tools can slip through cracks at times.’

Expounding on the latter point, the authors caution: 

“It can also be difficult to identify when existing tools are upgraded with AI-enabled software options and when already-implemented AI tools have significant updates that may require additional governance actions.”

Download the paper and/or register for a Nov. 18 webinar discussing it. 

 


The Latest from our Partners

knownwell leverages Nabla's athenahealth integration to enhance patient care - Knownwell is using Nabla’s integration with athenahealth to streamline clinical documentation and enable more personalized patient interactions in weight management care. Read more in this blog post.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Line up four random healthcare professionals, and you can safely wager that one of them is burned out enough to be thinking about quitting. That’s according to digital healthcare company Innovaccer, which surveyed 568 clinicians across 386 provider organizations. More than 85% of leaders at those orgs indicated they’re pinning their hopes on AI to hold back a mad rush for the career exits, going by the survey report. “[I]nsiders are betting big on healthcare AI to streamline workflows and reduce burnout issues,” the authors write. “This highlights a broader shift where AI isn’t just a tool for incremental improvements but a transformative force for supporting critical areas in healthcare delivery.” Download the report.
     
  • KLAS Research names some software suppliers that seem up to the above job. In the category of “Improve Clinician Experience,” KLAS’s contributing reviewers like Abridge, Redivus Health, Suki, Navina and Regard. Other categories in its October report on the top 20 emerging solutions include “Improve Outcomes,” “Reduce Cost of Care” and “Improve Patient Experience.” Find the report here (behind paywall). 
     
  • Some say AI is soon to help patients coordinate care quicker, get seen sooner and understand costs better. Others aren’t so sure. Count Axios tech editor Megan Morrone as a member of the latter cohort. “As AI agents take over work on both sides of the health coverage game, acting on behalf of both patients and providers,” she writes, “the process of getting and paying for care could become an even more opaque and confusing bot-versus-bot interaction.” 
     
  • The CEO of the country’s largest for-profit, publicly traded health system is bullish on healthcare AI. Asked in an Oct. 25 earnings call about investing in emerging technologies, Sam Hazen said he sees many opportunities to spend on AI over the next five to seven years. The aim will be to “improve our administrative functioning, our operational management of our business, and then ultimately the clinical outcomes for our patients,” he added. “It’s our view that we’re at an inflection point.” 
     
  • Here’s help for healthcare professionals who want to join the AI revolution but don’t know where to begin. It’s a “quick-start guide” geared toward those who feel they’re outside looking in. “Getting started with AI is a major roadblock for clinicians,” Piyush Mathur, MD, of Cleveland Clinic and co-authors write in Cureus. The expanding adoption of AI across healthcare, they add, presents an “immense opportunity for clinicians to participate in all phases of research, development, evaluation and implementation.” The plan includes stops at four key junctions—setting goals, creating a roadmap, identifying resources and measuring success. Check it out.
     
  • Oracle has unveiled its next-generation EHR. The platform is designed to embed AI across all clinical workflows at adopting sites. Oracle says its guiding goal is to “help streamline information exchange between payers and providers, support patient recruitment for clinical trials, simplify regulatory compliance, optimize financial performance and help accelerate the adoption of value-based care.” 
     
  • Physicians who aren’t yet using large language models for help with complex cases: What are you waiting for? The question was more implied than asked when former FDA commissioner Scott Gottlieb expressed the sentiment this week. “I think very soon everyone is going to have to think about how to deploy this [technology at the] point of care,” he said, according to coverage by MedCity News. It’s telling that Gottlieb made the comments not to stakeholders in big-city academic medical settings but to attendees of the 3rd Annual Summit on the Future of Rural Healthcare in Sioux Falls, South Dakota. 
     
  • Meanwhile, tomorrow’s healthcare professionals are getting a leg up. At the University of Central Florida, for example, undergraduate students are learning how to get ChatGPT to help providers explain care to patients. One student tells the school’s news operation that her only prior experience with LLMs came from using them to check grammar and such. Thanks to a research mentoring class, she’s on her way to mastery of the technology. “You can’t just throw any dataset at it and expect good results,” she says. “We’ve been working on refining the prompts we use to get better, more accurate outputs from the model.” 
     
  • Recent research in the news: 
     
  • Notable FDA approvals:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand