Anyone who has been thinking healthcare could use a detailed framework for the responsible use of AI just got their wish. The Coalition for Health AI, or “CHAI,” has drafted an in-depth guide that fits the bill. And the nonprofit is inviting interested parties to help refine the document, called simply the CHAI Assurance Standards Guide, before it’s finalized.
Announcing the draft’s release June 26, CHAI emphasizes that the framework represents a consensus view based on the expertise and knowledge of stakeholders from across U.S. healthcare. Contributors included patient advocates, technology developers, clinicians and data scientists.
The drafters grounded their approach less in conceptual brainstorming than in real-world concerns and practices, hoping the draft framework will be reviewed and tweaked by people involved in the design, development, deployment and use of healthcare AI.
The purpose of the framework—which includes companion checklists of stakeholder to-do’s—is to offer “actionable guidance on ethics and quality assurance.”
In a 16-page executive summary of the draft framework, which tips the scales at 185 pages, the authors present a brief description of the AI life cycle. This, they suggest, consists of six sequential but sometimes overlapping stages:
1. Define problem and plan.
Identify the problem, understand stakeholder needs, evaluate feasibility and decide whether to build, buy or partner.
“In this stage, the goal is to understand the specific problem an AI system is addressing,” the authors write. “This involves conducting surveys, interviews and research to find root causes. Teams will then decide whether to build a solution in-house, buy it or partner with another organization.”
2. Design the AI system.
Capture technical requirements, design system workflow and plan deployment strategy.
“During design, the focus is on specifying what a system needs to do and how it will fit into a healthcare workflow. This involves defining requirements, designing the system, and planning for deployment and monitoring to make sure it meets the needs of providers and users.”
3. Engineer the AI solution.
Develop and validate the AI model, prepare data and plan for operational deployment.
“This stage involves building an AI solution. The team will collect and prepare data, train AI models and develop an interface for users. The goal is to create a functional AI system that can be tested and evaluated for accuracy and effectiveness.”
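To make the stage concrete, here is a minimal sketch of that train-and-evaluate loop in Python, assuming scikit-learn and synthetic stand-in data rather than real clinical records. None of it comes from the CHAI guide itself; it simply illustrates the kind of work the stage describes.

```python
# Minimal sketch of the "engineer" stage: prepare data, train a model
# and hold out a test set for evaluation. Assumes scikit-learn; the
# synthetic data stands in for a real, de-identified clinical dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                          # stand-in features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # stand-in outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUROC: {auc:.3f}")
```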
4. Assess.
Conduct local validation, establish a risk management plan, train end users and ensure compliance.
“The assessment stage tests AI systems to decide if they’re ready for a pilot launch. This includes validating the system, training users and ensuring it meets healthcare standards and regulations. The aim is to confirm that the system works correctly and is safe to use.”
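In practice, the go/no-go decision at the end of this stage can be expressed as a simple gate: measure the system locally, then compare the results against acceptance criteria agreed in advance. A minimal sketch of that gate follows, with metric names and thresholds that are hypothetical rather than drawn from the guide.

```python
# Minimal sketch of the assessment gate: compare locally validated
# metrics against acceptance criteria agreed before the pilot. The
# metric names and thresholds here are hypothetical, not CHAI-mandated.
ACCEPTANCE_CRITERIA = {"auroc": 0.80, "sensitivity": 0.85}

def ready_for_pilot(local_metrics: dict[str, float]) -> bool:
    """True only if every locally measured metric meets its floor."""
    return all(local_metrics.get(name, 0.0) >= floor
               for name, floor in ACCEPTANCE_CRITERIA.items())

# Example: sensitivity falls short of 0.85, so the pilot is not approved.
print(ready_for_pilot({"auroc": 0.83, "sensitivity": 0.81}))  # False
```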
5. Pilot.
Implement a small-scale pilot, monitor real-world impact and update risk management.
“In this stage, the AI systems are tested in real-world settings at a small scale. The goal is to evaluate [their] performance, user acceptance and overall impact. Based on the results, the team will decide whether to proceed with a larger-scale deployment.”
6. Deploy and monitor.
Deploy the AI solution at scale, conduct ongoing monitoring and maintain quality assurance.
“The final stage involves deploying AI systems at a larger scale and monitoring their performance. This ensures systems stay effective and can be adjusted as needed, maintaining high quality and reliability in healthcare.”
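What ongoing monitoring looks like will vary by system, but one common pattern is to track a headline metric against the baseline established during the pilot and flag degradation for human review. A minimal sketch of that pattern, with a hypothetical metric, cadence and tolerance:

```python
# Minimal sketch of ongoing monitoring: compare live performance against
# the baseline established during the pilot and flag degradation for
# human review. The metric, cadence and tolerance are hypothetical.
BASELINE_AUROC = 0.85   # measured during the pilot
TOLERANCE = 0.05        # degradation that should trigger review

def weeks_needing_review(weekly_auroc: list[float]) -> list[int]:
    """Return indices of weeks whose AUROC fell below the alert floor."""
    floor = BASELINE_AUROC - TOLERANCE
    return [week for week, auc in enumerate(weekly_auroc) if auc < floor]

# Example: only week 3 (0.78) breaches the 0.80 floor.
print(weeks_needing_review([0.86, 0.84, 0.83, 0.78, 0.85]))  # [3]
```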
Go deeper with CHAI: