From Capitol Hill to a hospital near you? 5 federal recommendations for healthcare AI policy
The 24 members of the House Task Force on AI—12 reps from each party—have posted a 253-page report detailing their bipartisan vision for encouraging innovation while minimizing risks. The paper covers numerous social settings and economic sectors. Healthcare is not least among the latter.
In introducing the material, task force co-chairs Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) state that the group consulted in depth with experts from various fields. The paper draws on those discussions to offer 66 key findings and 85 forward-looking recommendations.
The chairs point out that AI is not a new invention. However, they note, “breathtaking technological advancements in the last few years” have given the technology “tremendous potential to transform society and our economy for the better and address complex national challenges.”
At the same time, they add, AI “can be misused and lead to various types of harm.”
It’s with AI’s promise and peril in mind that the task force recommends five ways policymakers can guide AI innovation, implementation and oversight specifically in healthcare. Here are excerpts from that section.
1. Encourage the practices needed to ensure AI in healthcare is safe, transparent and effective.
Policymakers should promote collaboration among developers, providers and regulators in developing and adopting AI technologies in healthcare where appropriate and beneficial. Policymakers could also develop or expand high-quality data access mechanisms that ensure the protection of patient data.
Congress should continue to monitor the use of predictive technologies to approve or deny care and coverage and conduct oversight accordingly.
2. Maintain robust support for healthcare research related to AI.
Sustained, strategic investments in research and development will be critical to maintaining U.S. leadership in AI across disciplines and use cases, especially in sectors that stand to benefit significantly from this technology.
The research supported by the National Institutes of Health (NIH) has the potential to enable improvements in [myriad] healthcare applications.
3. Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions.
To promote the responsible use of AI systems in the healthcare sector, stakeholders would benefit from standardized testing and voluntary guidelines that support the evaluation of AI technologies, promote interoperability and data quality, and help covered entities meet their legal requirements under HIPAA.
Congress should explore whether the current laws and regulations need to be enhanced to help the FDA’s post-market evaluation process ensure that AI technologies in healthcare are continually and sufficiently monitored for safety, efficacy and reliability.
4. Support the development of standards for liability related to AI issues.
Limited guidance exists on constructing legal and ethical frameworks for determining who bears responsibility when AI models produce incorrect diagnoses or make erroneous and harmful diagnostic recommendations. Currently, most providers are expected to use AI tools as supplementary devices while still relying on their own judgments, thus placing liability on the providers themselves.
As AI’s use continues to expand in everything from EHRs to transcription services to diagnosis, Congress should examine liability laws to ensure patients are protected.
5. Support appropriate payment mechanisms without stifling innovation.
Certainly, there will be no “one size fits all” reimbursement policy for every AI technology, and developing appropriate payment mechanisms requires recognition of varying kinds of technology and clinical settings. For example, many AI technologies may fit into existing benefit categories or facility fees.
Congress should continue to evaluate emerging technologies to ensure Medicare benefits adequately recognize appropriate AI-related medical technologies.
In a summary of the task force’s key findings on healthcare AI, the authors note that the lack of consistency in standards for medical data and algorithms “impedes system interoperability and data sharing.” More:
If AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.