Healthcare AI now: New code of conduct, iffy FDA oversight, Dr. AI for the wealthy, more

 

News not to miss: 

  • AI has its own code of conduct for US healthcare. It’s not legally binding, but on the technical and ethical planes the new code may well represent the best way yet to ensure alignment of all (or at least most) healthcare AI stakeholders. That’s if, of course, pretty much everyone abides by it. The 206-page guidance is published by the private, nonprofit National Academy of Medicine. One of the work’s more interesting proposals involves the adoption of a “Tight-Loose-Tight” leadership model. As explained in the document’s executive summary, this model is designed to “balance innovation and control through an iterative, dynamic approach” that “encourages collaboration and builds trust.” Some summary details: 
     
    • The first tight phase would seek stakeholder buy-in on vision, goals and expectations, supported by national stakeholder advocacy; a representative deliverable would be broad alignment on AI governance frameworks. Next would come the loose phase, bringing local AI implementations and best practices that yield lessons and innovations shareable across U.S. healthcare; this phase also would encourage various research and quality-improvement projects. The second tight phase, a monitor-and-report period, “promotes change at scale through evaluation metrics and standards,” among other measures. This phase would bring outcomes assessments, checks on AI model performance, transparency probes and recognition from certification bodies.
       
    • “With intentional, sustained effort and ongoing communication, feedback and collaboration by all stakeholders, safe, effective and efficient advancement of responsible health AI is possible,” the authors write. “Realizing the benefits and mitigating the risks will require significant engagement, which will be more likely to come to fruition if it is easy and rewarding to abide by the shared vision, values, goals and expectations described in the nationally aligned AI Code Principles and Commitments.”
       
    • Report co-editor Michael Matheny, MD, MPH, of Vanderbilt University Medical Center explained the project for VUMC’s news operation. “With AI revolutionizing healthcare delivery, the National Academy sought to gather leading experts across academia, industry and government to develop principles and a code of conduct to promote human health and ensure that ethical frameworks as well as patient needs and perspectives remain a central focus of the guidance,” Matheny says. “This code provides essential guideposts for all stakeholders, from developers to clinicians to patients.” 
       
    • The publication, An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action, is available for free download, online reading or paperback pre-order ($27) here.
         
  • Concern continues to percolate over the Trump Administration’s laissez-faire stance toward AI. In the wake of his May AI dealmaking trip to the Middle East, it’s no secret the President tends to favor innovation over caution when the two priorities clash. And that tour came on top of his January rescission of an executive order focused on AI safety. Meanwhile, layoffs have hit the FDA division charged with overseeing AI and digital health. Without adequate regulatory oversight, researchers point out, medical algorithms could compromise patient care. “There have to be safeguards,” Leo Anthony Celi, MD, MPH, a clinical researcher at MIT, tells Nature.com. “Relying on the FDA to come up with all those safeguards is not realistic and maybe even impossible.”
     
    • Celi is corresponding author of a study published June 5 in PLOS Digital Health. There he and colleagues note the challenges of watchdogging FDA-approved models as they get retrained in real-world settings over time. “While the FDA emphasizes post-market monitoring in its lifecycle approach, the implementation of robust, real-time monitoring mechanisms remains inconsistent and underdeveloped,” the authors warn. “Continuous monitoring requires advanced infrastructure and significant resources, which are not yet fully integrated into the regulatory process and may even exceed the capacity of existing regulatory frameworks.” 
       
       
  • Also working to bring healthcare AI where it’s really needed is the Health AI Partnership. Established at Duke University in 2021, the partnership currently supports five healthcare organizations that are short on resources but keen on AI. The five are comparing notes and building skills in a yearlong program designed to leave them well prepared for healthcare’s AI-aided future. HIMSS Media checked in with program leaders and participants at the 10-month mark. “There is a big resource gap for sure, which everyone understands,” says Suresh Balu of the Duke Institute for Health Innovation. “How do you put things into practice? When we convene these organizations to talk to the experts, they can actually address the knowledge gap.” HIMSS covered the partnership ahead of a related presentation at the HIMSS AI in Healthcare Forum slated for July 10 to 11 in New York.
     
  • Some AI watchers warn of a future in which only the well-off have human physicians. The rest will make do with AI “doctors.” One data-science expert envisions the exact opposite scenario. “As AI continues its relentless improvement, it is plausible that, at some point—perhaps sooner than many anticipate—it will surpass human physicians across all dimensions, including the delicate art of bedside manner and empathy,” writes London Business School Professor Nicos Savva in a piece published by Forbes. “When this happens, the affluent world will be treated by the superior Dr. AI while the less privileged may find themselves priced out of access to these expensive, patent-protected AI systems—and instead have to contend with the comparatively inferior human alternative.” Hear him out.
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.