News You Need to Know Today

Good AI governance | AI newsmakers | Partner news

Tuesday, June 4, 2024


AI governance

5 first steps toward do-it-yourself AI governance

A word to the wise among leaders of hospitals and health systems: Don’t wait on the government to tell you how to keep healthcare AI on track and healthcare providers up to speed. Instead, develop your own AI governance models. And do so ASAP if not sooner.

The suggestion comes from UC-Davis Health in partnership with the healthcare division of Manatt, a law and professional-services firm based in Los Angeles.

“While state and federal guardrails evolve and emerge over the next few years, health systems must consider how best to manage risks while tapping into the opportunities AI presents,” high-level experts from the two organizations write in a new white paper. “Those seeking to be early adopters and shape the field of healthcare AI can help ensure that AI investments are well-deployed, risks are understood and managed, and lessons are quickly incorporated into a dynamic AI strategy.”

The authors propose nine steps for health systems looking to develop effective AI governance models in the near term. Here are five of their first six to-do’s.

1. Develop a prioritization process.

This can easily mimic the process to evaluate any technological or clinical decision support deployment, UC-Davis and Manatt point out. More:

‘Consider the importance of the problem to be solved, impact of solving it, likelihood of a successful implementation, ease of implementation, resources required to solve the problem and maintain the solution, ROI and bandwidth of those charged with implementation.’

2. Bring the right experts to the table.

Managing the multifaceted risks associated with AI development and deployment requires expertise from a variety of disciplines beyond medicine, including data and computer science, IT, bioethics, compliance, and legal and regulatory affairs, the authors note before emphasizing:

‘Health systems should assemble an interdisciplinary oversight committee with representation from each of these disciplines and a patient representative.’

3. Develop an AI strategy and set of guiding principles at the enterprise level.

The authors advise creating a written document that answers key questions, such as: Why should we use AI in the first place? What applications and use cases should we pursue? How will we ensure safety, efficacy, accuracy, anti-bias, security and privacy?

‘Broadly engaging stakeholders and those likely impacted by the adoption of AI will help produce a well-aligned strategy that provides directional guidance and authority to subsequent governance and adoption efforts.’

4. Take a user-centered design approach and lead with the problems, not the solutions.

Clinicians and staff are well-positioned to identify health system-specific opportunities for AI deployment. However, “in an environment of intense clinician burnout, getting them to engage with AI will require a clear value proposition for them and their patients,” the authors write. More:

‘Often healthcare innovations fail to achieve widespread adoption because implementers do not engage in user-centered design with clinicians and staff and fail to consider the switching costs associated with replacing incumbent technology.’

5. Inventory current use of AI tools.

AI-enabled tools are already being deployed in health systems—and potentially without a coordinated strategy or oversight mechanism, UC-Davis and Manatt reiterate. “An important early step for the oversight committee,” they write, “is to conduct an inventory of current AI research and deployment, along with an assessment of associated benefits and risk exposure.” More:

‘Compliance with HHS’s new rule to strengthen non-discrimination protections and advance civil rights in health care will likely necessitate such an inventory process.’

The paper is available in full for free.


The Latest from our Partners

Mankato Clinic Reveals Key Takeaways from AI Adoption - Ambient documentation solutions are making a significant impact. To shed light amidst the buzz, Dr. Andrew Lundquist, CMO of Mankato Clinic, discussed key insights from his team's experience with Nabla and offered guidance for other clinics exploring ambient AI solutions. You can read Nabla's complete interview with Dr. Lundquist here.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Toward ‘AI justice’ in nursing and across healthcare. The country’s biggest labor union representing nurses has posted an AI bill of rights on behalf of both nurses and patients. National Nurses United says it drafted the document out of concern that “certain technologies” rolling into healthcare neither complement bedside skills nor improve quality of care. Subtitled “Guiding Principles for A.I. Justice in Nursing and Health Care,” the bill of rights states: “The right to healthcare in-person by a licensed healthcare professional underlies all other medical care and should not be compromised by uses of AI or other technologies that contribute to worker displacement or de-skilling.” An affiliate of the NNU, the California Nurses Association, led street protests against AI at a Kaiser Permanente location in San Francisco in April. Fierce Healthcare posted updated coverage of the ongoing standoff June 3.
     
  • Google would like to introduce you to 24 startups that are set to drive change in global healthcare on the strength of their AI chops. What the two dozen companies have in common is their selection as participants in Google for Startups Growth Academy: AI for Health. Well, that and a business address outside the Americas. Participants represent emerging markets in 13 countries across Europe, the Middle East and Africa. Announcement.
     
  • Generative AI continues raising hopes it will give clinicians more time with patients. Well-traveled healthcare AI expert Shashank Agarwal strikes a generally optimistic stance toward the claim in a piece published by Forbes. He also holds out a caveat. Healthcare organizations “must understand that generative AI will be only as good as the data it has been trained and fine-tuned upon,” writes Agarwal, whose LinkedIn profile shows his current job to be senior decision scientist with CVS Health. Worth a read.
     
  • Remember Sarah? She’s the healthcare helper who speaks eight languages and takes questions on all manner of healthcare topics any time of day or night. She’s also a Gen AI-powered avatar of the World Health Organization. Sarah takes her name from her identity—Smart AI Resource Assistant for Health. We reported on her emergence in this space two months ago. And now Sarah has at least one vocal detractor. “Sarah is arguably as much a product of AI hype and [fear of missing out] as it is a tool for positive change,” writes Brian Spisak, PhD, a leading light on healthcare AI in the U.S. “It’s clear that WHO’s own principles for the safe and ethical use of AI should guide its decision-making, but [that’s not happening] when it comes to Sarah. This raises critical questions about the organization’s ability to usher in a responsible AI revolution.” HIMSS Media’s Healthcare IT News published the piece June 3. Read it all.
     
  • Publicly traded health insurers have been romancing investors with smooth talk about tapping AI for efficiency gains and cost cuts. That may come as news to no one. But here’s the thing. For all their jawing to that one audience on the glories of AI, some if not all of these payers are tightlipped on the same subject to another: the prying media. STAT reports that one or more of its journalists reviewed regulatory filings from publicly traded health insurers and found several of them “investing in AI with the goal of saving money.” One company “has hired nearly 500 people to work exclusively on AI. However, all five of the insurers STAT contacted declined to elaborate on how they are using AI.” It may not be a stop-the-presses investigation, but it surely is a needed one. Article here (behind subscriber paywall).
     
  • ‘I’m not sure yet whether I’m going to regret this or not.’ So spoke Nvidia honcho Jensen Huang before announcing his company’s next-generation AI-accelerating GPU platform, Rubin, on June 2 in Taiwan. His trepidation was likely based on his desire not to have would-be buyers of Nvidia’s current pacesetter, the Grace Hopper AI superchip, hold off until Rubin gets here. Ars Technica has the story.
     
  • Must see to appreciate. Those four words aptly tease an opinion piece on healthcare AI published by the Los Angeles Times June 3. The commentary takes the form of a comic strip. The protagonist/narrator is a physician embodied by a cartoon guinea pig. Why a guinea pig? Because that’s what the doctor felt like heading into his hospital’s pilot of note-taking Gen AI. In one panel, our brave rodent shares that “capturing a physical exam meant you have to awkwardly speak findings aloud.” Holding a stethoscope to the patient’s belly, he dictates: “Soft abdomen. Normal bowel sounds.” (The patient appears nonplussed.) Need a chuckle? Check it out.
     
  • Recent research roundup:
     
  • From AIin.Healthcare’s news partners:
     


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand