A word to the wise among leaders of hospitals and health systems: Don’t wait on the government to tell you how to keep healthcare AI on track and healthcare providers up to speed. Instead, develop your own AI governance models. And do so ASAP if not sooner.
The suggestion comes from UC Davis Health in partnership with the healthcare division of Manatt, a law and professional-services firm based in Los Angeles.
“While state and federal guardrails evolve and emerge over the next few years, health systems must consider how best to manage risks while tapping into the opportunities AI presents,” high-level experts from the two organizations write in a new white paper. “Those seeking to be early adopters and shape the field of healthcare AI can help ensure that AI investments are well-deployed, risks are understood and managed, and lessons are quickly incorporated into a dynamic AI strategy.”
The authors propose nine steps for health systems looking to develop effective AI governance models in the near term. Here are their first five to-do’s.
1. Develop a prioritization process.
This can easily mimic the process used to evaluate any technology or clinical decision-support deployment, UC Davis and Manatt point out. More:
‘Consider the importance of the problem to be solved, impact of solving it, likelihood of a successful implementation, ease of implementation, resources required to solve the problem and maintain the solution, ROI and bandwidth of those charged with implementation.’
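The paper doesn’t prescribe a scoring method, but one way to operationalize those criteria is a simple weighted rubric. The sketch below is illustrative only: the criterion names come from the quote above, while the weights, the 1-to-5 scale and the example scores are assumptions, not part of the UC Davis/Manatt framework.

```python
# Illustrative weighted-scoring rubric for ranking candidate AI projects.
# Criteria mirror the quote above; weights and the 1-5 scale are assumptions.

WEIGHTS = {
    "importance_of_problem": 0.25,
    "impact_of_solving_it": 0.20,
    "likelihood_of_success": 0.15,
    "ease_of_implementation": 0.10,
    "resources_required": 0.10,   # score inversely: leaner asks score higher
    "roi": 0.10,
    "implementer_bandwidth": 0.10,
}

def priority_score(scores: dict[str, int]) -> float:
    """Collapse 1-5 criterion scores into one weighted priority score."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical example: scoring an ambient-documentation pilot.
candidate = {
    "importance_of_problem": 5,
    "impact_of_solving_it": 4,
    "likelihood_of_success": 4,
    "ease_of_implementation": 3,
    "resources_required": 2,
    "roi": 3,
    "implementer_bandwidth": 3,
}
print(f"Priority score: {priority_score(candidate):.2f}")  # -> 3.75
```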
2. Bring the right experts to the table.
Managing the multifaceted risks associated with AI development and deployment requires expertise from a variety of disciplines beyond medicine, including data and computer science, IT, bioethics, compliance, and law and regulation, the authors note before emphasizing:
‘Health systems should assemble an interdisciplinary oversight committee with representation from each of these disciplines and a patient representative.’
3. Develop an AI strategy and set of guiding principles at the enterprise level.
The authors advise creating a written document that answers key questions, such as: Why should we use AI in the first place? What applications and use cases should we pursue? How will we ensure safety, efficacy, accuracy, bias mitigation, security and privacy?
‘Broadly engaging stakeholders and those likely impacted by the adoption of AI will help produce a well-aligned strategy that provides directional guidance and authority to subsequent governance and adoption efforts.’
4. Take a user-centered design approach and lead with the problems, not the solutions.
Clinicians and staff are well-positioned to identify health system-specific opportunities for AI deployment. However, “in an environment of intense clinician burnout, getting them to engage with AI will require a clear value proposition for them and their patients,” the authors write. More:
‘Often healthcare innovations fail to achieve widespread adoption because implementers do not engage in user-centered design with clinicians and staff and fail to consider the switching costs associated with replacing incumbent technology.’
5. Inventory current use of AI tools.
AI-enabled tools are already being deployed in health systems, potentially without a coordinated strategy or oversight mechanism, UC Davis and Manatt reiterate. “An important early step for the oversight committee,” they write, “is to conduct an inventory of current AI research and deployment, along with an assessment of associated benefits and risk exposure.” More:
‘Compliance with HHS’s new rule to strengthen non-discrimination protections and advance civil rights in health care will likely necessitate such an inventory process.’
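Neither the paper nor the HHS rule dictates an inventory format, but a structured record per tool makes the committee’s risk review tractable. Here is a minimal sketch; the schema, field names and example data are hypothetical, not drawn from the white paper.

```python
# Minimal sketch of an AI-tool inventory record for an oversight committee.
# The schema and example data are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str                   # what the tool is called internally
    supplier: str               # vendor or in-house team maintaining it
    use_case: str               # clinical or operational function
    accountable_owner: str      # clinician or department answering for it
    patient_facing: bool        # do outputs reach patients directly?
    risks: list[str] = field(default_factory=list)
    benefits: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="Sepsis early-warning model",
        supplier="EHR vendor",
        use_case="Flag inpatients at risk of sepsis",
        accountable_owner="Hospital medicine",
        patient_facing=False,
        risks=["alert fatigue", "performance drift"],
        benefits=["earlier intervention"],
    ),
]

# Review patient-facing tools first, since their risk exposure is highest.
for record in sorted(inventory, key=lambda r: not r.patient_facing):
    print(record.name, "| patient-facing:", record.patient_facing)
```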
The paper is available in full for free.