Discussions of AI governance may cause many an eye to glaze over, but the discipline is as crucial to the ascent of AI in healthcare as big training datasets drawn from diverse patient populations.
Researchers at Duke University’s Margolis Institute for Health Policy drill down into the hows and whys in a white paper published Oct. 28.
The team gathered and organized material with input from experts at six U.S. health systems that maintain AI governance operations. To this foundation the researchers added interview content from AI-knowledgeable professionals at a number of other health systems.
The resulting 10-page paper describes key components of AI governance. It also suggests ways health systems might set up governance systems more or less from scratch.
Reinforcing the paper’s central theme of aligning innovation, accountability and trust, the authors describe five points of variation to consider when putting good AI governance into practice. Three are people-focused, two process-focused:
1. Decision-making authority.
Some hospitals and health systems give this authority to the person who allocated budget funds for the AI tool, the authors explain. “Other health systems favor a more centralized decision process, where the review team or a larger governance group make the final decision,” they write. “Still other health systems place some or all decision-making with executive leadership, who rely on recommendations from the review process.” More:
‘This can ensure AI tool selection is consistent with the overall AI standards and strategy.’
2. Governance committee composition.
Many AI governance committees are interdisciplinary, with members from disciplines such as IT, clinical care, informatics, legal, privacy, ethics, compliance, human resources, patient engagement, DEI and finance, the authors note. “Some have relevant background in AI,” they add, “but others may need additional training on the implications of AI within their area of expertise.” More:
‘Organizations that opt to integrate AI governance into their traditional governance for other technologies also provide training on how to effectively assess AI tools.’
3. Role of the patient voice.
Many health systems want to include patients’ perspectives in decisions on AI tools that may affect care, the authors acknowledge. However, some have run into potential legal and logistical challenges when “allowing individuals who are not health system employees to have visibility into the full review process.” More:
‘As patients traditionally have not been involved in technology selection and implementation, more work is needed on best practices in this space.’
4. Governance scope.
“AI is a broad term, and governance systems need to clarify the scope of tools within their purview,” the authors state. “Some organizations focus on a range of AI tools, while others concentrate on machine-learning enabled tools only.” Still others only review enterprise tools. More:
‘Governance scope is determined by several factors—including the resources available—and the scope may change as a governance system matures.’
5. Tool identification.
Health systems “must ensure they are aware when AI tools are being considered in order to bring them into the governance process,” the authors write. “Some groups build in processes to ‘catch’ tools within scope, often in connection with IT and procurement offices.” More:
‘There is not a perfect process, and tools can slip through cracks at times.’
Expanding on that last point, the authors caution:
“It can also be difficult to identify when existing tools are upgraded with AI-enabled software options and when already-implemented AI tools have significant updates that may require additional governance actions.”
Download the paper and/or register for a Nov. 18 webinar discussing it.