Nat’l Academy of Medicine sets ‘priorities for action’ as healthcare mulls next moves with AI
Healthcare AI finds itself in 2025 pregnant with possibilities yet surrounded by pitfalls. Given the precarious excitement of the moment—or is it exciting precarity?—policymakers and healthcare leaders must set directives guiding not only what to do but also when to do it.
They’re offered assistance in a paper published in Health Affairs Jan. 22.
“The field of artificial intelligence has entered a new cycle of intense opportunity, fueled by advances in deep learning, including generative AI,” write lead author Michael Matheny, MD, MPH, of Vanderbilt and 11 colleagues from 10 organizations in introducing their material. “Applications of recent advances affect many aspects of everyday life, yet nowhere is it more important to use this technology safely, effectively and equitably than in health and healthcare.”
The team presents the work as part of an initiative driven by the National Academy of Medicine, “Vital Directions for Health and Healthcare: Priorities for 2025.”
The new paper lays out four policy-focused subdiscussions in considerable detail. In each section, the authors press policymakers to pursue a number of action items. Here are excerpts.
1. Ensure the safe, effective and trustworthy use of AI.
Federal agencies “should develop policies to incentivize the equitable and fair deployment of AI technologies,” the authors write. “As leaders in healthcare payment innovation, CMS and other relevant agencies should consider expanding reimbursement models to encourage equitable adoption.” More:
‘It is also critical to require or incentivize the inclusion of patients and end users into the entire AI development and implementation life cycle.’
2. Promote the development of an AI-competent workforce.
The authors call on policymakers responsible for higher education funding to “consider incentives that support professional societies, accrediting bodies and faculty at medical and allied health professional schools to implement new training requirements and continuous adaptation of curricula to prepare clinicians to leverage AI in patient care.”
‘In addition, policymakers should incentivize healthcare educational organizations to routinely evaluate knowledge and skills to identify those that are becoming redundant as healthcare AI advances.’
3. Support research on AI in health and healthcare.
Research investments in the delivery of care can expand the role of AI technologies in precision medicine, the authors state. “Research questions remain,” they add, “regarding how, when and where to leverage AI to tailor treatment based on individual patients’ characteristics, genetics and lifestyle or environmental exposures.”
‘This has broad implications for the concept of “standard of care” and how its definition or quality assessment may need to change to allow for personalized care.’
4. Clarify responsibility and liability in the use of AI.
Policymakers “should support and coordinate efforts by professional societies to streamline the responsible adoption of medical AI by clarifying the responsibility and liability landscape for healthcare professionals,” the authors write before proposing three actions to be taken by organizations such as the National Academies, the Federation of State Medical Boards, the American Medical Informatics Association and others:
- Provide analyses of the most common legal questions to elucidate what clinicians and hospitals need to know and what uncertainty remains for different uses of AI;
- Promulgate model licensing terms for medical AI that can create clearer liability rules through contract; and
- Set model terms for indemnification or insurance against injuries involving AI.
‘These next steps can ease the responsible adoption of AI to improve patient care.’
The paper is posted in full for free.