5 ways US regulators could go further to beat back bias in medical AI

Not every piece of AI-bearing software on the healthcare market is subject to FDA approval. But the agency could do more to police regulated products for algorithmic biases that may affect clinical outcomes in vulnerable subpopulations.

Or, as the Pew Charitable Trusts put it in commentary posted Oct. 7:

“Despite a growing awareness of these risks, FDA’s review of AI products does not explicitly consider health equity. The agency could do so under its current authorities and has a range of regulatory tools to help ensure AI-enabled products do not reproduce or amplify existing inequalities within the healthcare system.”

From there, Pew healthcare products director Liz Richardson lays out steps the agency could take to correct course.

Richardson says the FDA can and should:

1. Help mitigate the risks of bias by routinely analyzing the data submitted by AI software developers. Such analyses should include checking for inclusion of demographic subgroups, such as those defined by sex, age, race and ethnicity.

“This would help gauge how the product performed in those populations and whether there were differences in effectiveness or safety based on these characteristics,” Richardson writes.
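The subgroup check Richardson describes can be sketched in code. The sketch below is purely illustrative, not anything FDA or Pew prescribes: the metrics (false-positive and false-negative rates), the record format and the disparity threshold are all assumptions chosen to show the idea of disaggregating a model's performance by demographic group.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute per-subgroup false-positive and false-negative rates.

    records: iterable of (subgroup, predicted, actual) tuples, where
    predicted and actual are booleans (e.g. disease flagged / present).
    """
    tallies = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        t = tallies[group]
        if actual:
            t["pos"] += 1
            if not predicted:
                t["fn"] += 1  # missed case
        else:
            t["neg"] += 1
            if predicted:
                t["fp"] += 1  # false alarm
    return {
        g: {
            "false_positive_rate": t["fp"] / t["neg"] if t["neg"] else None,
            "false_negative_rate": t["fn"] / t["pos"] if t["pos"] else None,
        }
        for g, t in tallies.items()
    }

def flag_disparities(rates, tolerance=0.10):
    """Flag subgroups whose error rate exceeds the best-performing
    subgroup's rate by more than `tolerance` (an illustrative cutoff)."""
    flags = []
    for metric in ("false_positive_rate", "false_negative_rate"):
        vals = {g: r[metric] for g, r in rates.items() if r[metric] is not None}
        if not vals:
            continue
        best = min(vals.values())
        flags += [(g, metric) for g, v in vals.items() if v - best > tolerance]
    return flags
```

A reviewer running something like this on a developer's test data would see at a glance whether, say, one group's missed-diagnosis rate is far above another's — exactly the kind of difference in effectiveness or safety the analysis is meant to surface.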

2. Choose to reject a product’s application if the agency determines, based on the subgroup analysis, that the risks of approval outweigh the benefits. FDA currently encourages but cannot require software developers to submit subpopulation-specific data as part of their device applications, Richardson points out. Five years ago the agency released guidance on gathering and submitting this kind of data, including how best to report disparities in subgroup outcomes.

“However, it is not clear how often this data is submitted,” Richardson comments, “and public disclosure of this information remains limited.”

3. Require healthcare AI developers to note omissions of diversity in algorithm training, testing and validation. Clear labeling about potential disparities in product performance might help to promote health equity, Richardson suggests.

“This would alert potential users that the product could be inaccurate for some patient populations and may lead to disparities in care or outcomes,” she writes. “Providers can then take steps to mitigate that risk or avoid using the product.”

4. Develop guidance directing product-review divisions to consider health equity as part of their analyses of AI-enabled devices. There is precedent for this, Richardson notes.

“In June, FDA’s Office of Women’s Health and the Office of Minority Health and Health Equity (OMHHE) took an encouraging step by launching the Enhance Equity Initiative, which aims to improve diversity in the clinical data that the agency uses to inform its decisions and to incorporate a broader range of voices in the regulatory process.”

5. Diversify the range of voices at the table to support more equitable policies through FDA’s ongoing Patient Science and Engagement Initiative. The Center for Devices and Radiological Health, which is responsible for approving medical devices, can spearhead this kind of effort, Richardson suggests.

“Following an October 2020 public meeting on AI and machine learning in medical devices, FDA published an updated action plan that emphasized the need for increased transparency and building public trust for these products,” Richardson writes. “The agency committed to considering patient and stakeholder input as it works to advance AI oversight, adding that continued public engagement is crucial for the success of such products.”

Richardson’s closing argument:

“AI can help patients from historically underserved populations by lowering costs and increasing efficiency in an overburdened health system. But the potential for bias must be considered when developing and reviewing these devices to ensure that the opposite does not occur. By analyzing subpopulation-specific data, calling out potential disparities on product labels and pushing internally for the prioritization of equity in its review process, FDA can prevent potentially biased products from entering the market and help ensure that all patients receive the high-quality care they deserve.”

Read the whole thing.

Around the web

The Department of Defense is awarding Case Western researchers a grant to study the use of AI in determining whether patients require surgery.

This study sheds light on the prospect of a "fully automated solution" for echocardiogram analysis, experts reported.

Experts noted a "significant" reduction in false positives and false negatives using their modified machine learning model.
