If people, process and technology (PPT) are the building blocks of any effective quality management system (QMS), it may follow that applying established PPT principles can help hospitals move AI from experimental research settings into regulated clinical practice.
And in fact, that’s probably the case, according to researchers at Mayo Clinic and Duke University.
Refining the PPT terms to people/culture, process/data and validated technology, Mayo AI manager Shauna Overgaard, PhD, and colleagues work out the hypothesis in a paper published Nov. 25 in npj Digital Medicine.
“By establishing a QMS explicitly tailored to health AI technologies,” the authors write, “healthcare organizations can comply with evolving regulations and minimize redundancy and rework while aligning their internal governance practices with their steadfast commitment to scientific rigor and medical excellence.”
The team breaks the framework down into three action items. Here are excerpts from each.
1. Establish a proactive culture of quality. As an AI model evolves, algorithm developers and clinical end-users should get ahead of further development by building in risk management and industry-standard design controls early, the authors suggest. Then, as the model becomes a product, the team can incorporate “all the software and functionality needed for the model to work as intended in its clinical setting.” More:
“QMS procedures outline practices, and the records generated during this stage create the level of evidence expected by industry and regulators. Healthcare organizations may either maintain dedicated quality teams responsible for conducting testing or employ alternative structures designed to carry out independent reviews and audits.”
2. Set up systems for directing and managing risk-based design, development and monitoring. Risk-based practices formalized and implemented within a QMS will “systematically identify risks associated with an AI solution, document mitigation strategies, and offer a framework for objective testing and auditing of individual technology components,” the authors write. What’s more, such tech components can be refined by applying best practices around software life-cycle management, tailored specifically for AI software.
“This allows for capturing performance metrics across various levels of rigor and data transparency in requirements, version, and design controls. These insights from initial testing can then support the calibration and maintenance of AI solutions during deployment, guided by a multidisciplinary governance system to proactively mitigate future risks.”
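The paper stays at the level of process rather than code, but a minimal sketch can make the idea concrete. The Python below imagines a versioned design-control record tying a model release to its documented risks, mitigations and test metrics; every name here (DesignControlRecord, the “sepsis-alert” model, the metric values) is a hypothetical illustration, not anything prescribed by Overgaard and colleagues.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One identified risk and its documented mitigation (hypothetical schema)."""
    description: str
    severity: str           # e.g. "low" / "medium" / "high"
    mitigation: str
    verified: bool = False  # True once independent testing confirms the mitigation

@dataclass
class DesignControlRecord:
    """Versioned record linking an AI model release to its risks and test evidence."""
    model_name: str
    version: str
    release_date: date
    risks: list[Risk] = field(default_factory=list)
    test_metrics: dict[str, float] = field(default_factory=dict)  # e.g. {"auroc": 0.91}

    def open_risks(self) -> list[Risk]:
        """Risks whose mitigations have not yet passed independent review."""
        return [r for r in self.risks if not r.verified]

# Example: one release of a hypothetical sepsis-alert model
record = DesignControlRecord(
    model_name="sepsis-alert",
    version="1.2.0",
    release_date=date(2024, 11, 25),
    risks=[Risk("Alert fatigue from false positives", "medium",
                "Threshold tuned on site-specific validation data")],
    test_metrics={"auroc": 0.91, "sensitivity": 0.85},
)
assert record.open_risks(), "release should be held until mitigations are verified"
```

The point of such a record is auditability: each version carries its own evidence, so reviewers can trace a deployed model back to the risks and tests documented at release.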
3. Establish a compliance-facilitating infrastructure. Running a QMS necessarily involves establishing policies and standard operating procedures that outline processes for multiple intertwined aims, the authors point out. Not least among these are governance and prioritization, development, independent evaluation, maintenance and monitoring, issue reporting and safety surveillance.
“With proper governance, algorithm inventory and transparency, healthcare organizations can begin to implement tools, testing and monitoring capabilities into their QMS to reduce the burden and achieve safe, effective, ethical machine learning/AI at scale. Implementing QMS involves formal documentation encompassing quality, ethical principles and processes, ensuring transparency and traceability to regulatory requirements.”
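Again, the authors prescribe capabilities rather than code, but a toy drift check hints at what the monitoring side of an algorithm inventory might reduce to in practice; the baseline numbers, tolerance and function name below are invented for illustration.

```python
# Hypothetical baseline from pre-deployment validation, kept in the algorithm inventory
BASELINE = {"auroc": 0.91, "calibration_error": 0.03}
TOLERANCE = 0.05  # maximum allowed absolute drift before escalation

def check_drift(live_metrics: dict[str, float]) -> list[str]:
    """Return the metrics that have drifted beyond tolerance since validation."""
    return [name for name, baseline in BASELINE.items()
            if abs(live_metrics.get(name, 0.0) - baseline) > TOLERANCE]

flagged = check_drift({"auroc": 0.84, "calibration_error": 0.04})
if flagged:
    # In a real QMS this would open a documented issue for safety surveillance.
    print(f"Escalating for review: {flagged}")
```

Running this flags the AUROC drop (0.91 to 0.84 exceeds the 0.05 tolerance) while the calibration error passes, which is the kind of objective, traceable trigger the authors’ issue-reporting and safety-surveillance processes would act on.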
In these ways, healthcare organizations can repurpose a QMS framework to accelerate the translation of AI from research to clinical practice, Overgaard and co-authors reiterate.
“Drawing on regulatory precedents and incorporating insights from expert stakeholders,” they add, “the QMS framework enables healthcare organizations to prioritize patient needs and foster trust in adopting innovative AI technologies.”
The paper is posted in full for free.