It’s not easy to get patients, providers, payers, vendors and regulators to agree on any one aspect of healthcare delivery. But FDA’s Center for Devices and Radiological Health (CDRH) recently managed to get representatives of all five groups to settle on a working definition of transparency.
Admittedly, for the exercise, CDRH limited the term to one discrete context: as it applies to AI embedded in medical devices. But that doesn’t detract from the force of the consensus definition.
Transparency of AI in these settings, the groups concur, refers to the degree to which appropriate information about a device—including its intended use, development, performance and, when available, logic—is clearly communicated to stakeholders.
The meeting’s minutes are synopsized in a paper published Jan. 26 in npj Digital Medicine. In the report, CDRH digital health advisor Aubrey Shick and FDA colleagues recap workshop input from the key participant groups.
Patients. Eager to consume as much as they can digest about the role of AI in their care, patients are concerned their doctors or nurses might be lacking in computer literacy or algorithmic expertise, Shick and co-authors report. More:
Other transparency considerations important to patients include data security and ownership, the cost of the device compared to the current standard of care, insurance coverage of the device, and the need for high-speed internet access or other technical infrastructure requirements.
Providers. Clinicians want to trust AI-outfitted devices “at face value.” By this they mean that these devices should be readily usable without the need for “in-depth reviews” to figure out if the AI will work as advertised for their particular patient populations, Shick and colleagues explain. More:
Healthcare providers [see] an opportunity to be more transparent in the delivery of this information not only in the data available and the media type in which it is communicated but also through who shares this information—device manufacturers, government agencies, professional societies, etc. They also [emphasize] the importance of having a reliable mechanism to report device malfunction and ‘performance drift’ to manufacturers.
Payers. A medical device’s algorithmic prowess may prove exemplary in testing and validation settings. But what are the ramifications for payment when the AI’s performance varies in real-world clinical use? The authors expound:
Given the potential for AI/ML devices to evolve, payers are concerned with the coverage of “unlocked” or learning algorithms. This stakeholder segment wants to stress the importance of employing diversified datasets and the possibility of monitoring the real-world performance of devices, the goal being to ensure that they are performing as intended and improving patient outcomes.
Vendors. Industry members wish for a risk-based approach to ensuring transparency. They’d like to maintain the least burdensome regulatory framework for AI/ML devices while also mitigating “potential proprietary risk that may arise when sharing information in an effort to be transparent,” Shick et al. write. More:
Vendors believe their existing relationships with stakeholders suffice for communicating information about AI/ML devices, reminding that these communications are augmented with device manuals, user training and feedback processes.
What’s more, industry members suggest, the FDA is a trusted source of information for patients on manufacturers’ AI/ML devices. Vendors recommend that manufacturers continue working closely with the FDA to make communications about these devices more transparent.
Shick and co-authors acknowledge that much of the device information available on the CDRH website is developed by or geared toward manufacturers.
“Use of a complementary approach targeted to non-manufacturers to share information (e.g., graphics, plain language summaries) could allow the information to be more accessible for some [other] stakeholders,” they write.
Full paper here.