FDA official: Let’s work together to make healthcare AI work for everyone
AI won’t fulfill its promise to transform American medicine if it isn’t appropriately integrated, step by step, across U.S. healthcare. Coaxing this evolutionary process along will demand methodological rigor, risk awareness and nimble adaptability, not just from the government but from all interested parties.
That conviction comes through between the lines of a June 17 blog post by Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence (DHCoE).
Launched in 2020 as a branch of the agency’s Center for Devices and Radiological Health (CDRH), the DHCoE works to “foster responsible AI innovations in healthcare,” Tazbaz reminds, “while ensuring these technologies, when intended for use as medical devices, are safe and effective for the end-users, including patients.”
Noting the center’s desire to encourage collaboration between healthcare AI stakeholders and its own people, Tazbaz offers three observations to help foster the requisite harmony.
1. Life-cycle planning for AI models can reduce risk.
By adopting agreed-upon standards and best practices covering the various phases of AI models’ lifespans, stakeholders can actively help mitigate risks for the long term, Tazbaz suggests.
“This includes, for instance, approaches to ensure that data suitability, collection and quality match the intent and risk profile of the AI model that is being trained,” he writes. More:
‘The healthcare community together could agree on common methodologies that provide information to a diverse range of end users, including patients, on how the model was trained, deployed and managed through robust monitoring tools and operational discipline.’
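Tazbaz leaves the “how” to the community, but to make the idea concrete, here is a minimal sketch of one such data-suitability check: comparing a training set’s demographic mix against the model’s intended patient population. The column names, reference proportions and five-point tolerance are illustrative assumptions, not anything prescribed by the post or the FDA.

```python
# Illustrative sketch only: flags subgroups whose share of the training
# data diverges from the model's intended patient population. All field
# names, reference figures and the tolerance are hypothetical assumptions.
import pandas as pd

# Hypothetical intended-use population, e.g., drawn from site census data.
INTENDED_POPULATION = {
    "age_band": {"18-44": 0.30, "45-64": 0.40, "65+": 0.30},
    "sex": {"F": 0.52, "M": 0.48},
}

TOLERANCE = 0.05  # flag any subgroup off by more than 5 percentage points


def audit_data_suitability(train_df: pd.DataFrame) -> list[str]:
    """Return warnings for under- or over-represented training subgroups."""
    warnings = []
    for column, reference in INTENDED_POPULATION.items():
        observed = train_df[column].value_counts(normalize=True)
        for group, expected_share in reference.items():
            actual_share = observed.get(group, 0.0)
            if abs(actual_share - expected_share) > TOLERANCE:
                warnings.append(
                    f"{column}={group}: training share {actual_share:.2f} "
                    f"vs intended {expected_share:.2f}"
                )
    return warnings


if __name__ == "__main__":
    # Tiny synthetic example; a real audit would use the actual training set.
    df = pd.DataFrame({
        "age_band": ["18-44"] * 60 + ["45-64"] * 30 + ["65+"] * 10,
        "sex": ["F"] * 45 + ["M"] * 55,
    })
    for warning in audit_data_suitability(df):
        print("WARNING:", warning)
```

A real audit would pull its reference figures from the deployment site’s own data and extend to clinical variables, but even a crude check like this can surface mismatches between training data and intended use early in the life cycle.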
2. Quality-assurance measures can positively impact clinical outcomes.
Continuous performance monitoring before, during and after deployment is one way to carry QA through an AI model’s life cycle, Tazbaz points out.
Meanwhile, transparency and accountability “can help stakeholders feel comfortable with AI technologies.” More:
‘Quality assurance and risk management, right-sized for healthcare institutions of all sizes, can help provide confidence that AI models are developed, tested and evaluated on data that is representative of the population for which they are intended.’
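The post doesn’t specify tooling for this kind of monitoring either. As a hedged illustration of what continuous, subgroup-aware performance tracking could look like after deployment, the sketch below computes a model’s AUC overall and per site on recently adjudicated cases; the metric, the 0.75 alert floor and the column names are all assumptions made for the example.

```python
# Illustrative sketch only: post-deployment QA that tracks a model's
# discrimination (AUC) overall and per subgroup on recent labeled cases.
# The 0.75 alert threshold and column names are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

AUC_ALERT_THRESHOLD = 0.75  # hypothetical floor for acceptable performance


def monitor_performance(cases: pd.DataFrame, subgroup_col: str = "site") -> None:
    """Print an alert when overall or subgroup AUC drops below the floor.

    `cases` is assumed to hold one row per scored patient, with columns
    `y_true` (observed outcome) and `y_score` (model probability).
    """
    overall = roc_auc_score(cases["y_true"], cases["y_score"])
    print(f"overall AUC: {overall:.3f}")
    if overall < AUC_ALERT_THRESHOLD:
        print("ALERT: overall performance below threshold")

    for group, rows in cases.groupby(subgroup_col):
        if rows["y_true"].nunique() < 2:
            continue  # AUC is undefined without both outcome classes
        auc = roc_auc_score(rows["y_true"], rows["y_score"])
        flag = "  <-- ALERT" if auc < AUC_ALERT_THRESHOLD else ""
        print(f"{subgroup_col}={group}: AUC {auc:.3f}{flag}")


if __name__ == "__main__":
    # Synthetic example: two sites, one with degraded model scores.
    rng = np.random.default_rng(0)
    n = 400
    site = np.where(np.arange(n) < 200, "A", "B")
    y_true = rng.integers(0, 2, size=n)
    noise = rng.random(n)
    # Site A gets informative scores; site B gets near-random scores.
    y_score = np.where(site == "A", 0.7 * y_true + 0.3 * noise, noise)
    monitor_performance(
        pd.DataFrame({"site": site, "y_true": y_true, "y_score": y_score})
    )
```

Run on a rolling window of recent cases, a check like this is one way to supply the “robust monitoring tools and operational discipline” the post calls for.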
3. Shared responsibility can help ensure success.
Efforts around AI quality assurance “have sprung up at a grassroots level across the U.S. and are starting to bear fruit,” Tazbaz writes.
“Solution developers, healthcare organizations and the U.S. federal government are working to explore and develop best practices for quality assurance of AI in healthcare settings.” More:
‘These efforts, combined with FDA activities relating to AI-enabled devices, may lead to a world in which AI in healthcare settings is safe, clinically useful and aligned with patient safety and improvement in clinical outcomes.’