News You Need to Know Today

Int’l consensus on AI’s ‘FUTURE’ | Partner news | Newsmakers: Schweikert, Vance, Altman, Musk …

Wednesday, February 12, 2025

In cooperation with Nabla


Global consortium: The future of AI in healthcare is dynamic—and demanding

An international cluster of 117 researchers from 50 countries has arrived at a consensus on six principles that, in the team’s considered view, ought to guide the use of AI across healthcare worldwide. The principles are fairness, universality, traceability, usability, robustness and explainability.

The group takes its easy-to-remember name from the acronym formed by the first letters of those six words: the FUTURE-AI Consortium.

In a paper published this month in The BMJ, a sizeable subgroup of the consortium breaks down the six principles into 30 detailed recommendations for building trustworthy and readily deployable AI systems in healthcare. 

The resulting framework is “dynamic,” in the words of lead author Karim Lekadir, PhD, of the University of Barcelona and co-authors, meaning it is designed to evolve over time as technology advances and stakeholder feedback accumulates.

Here are excerpts from the introductions to each of the six principles. 

1. Fairness. 

AI tools in healthcare should maintain the same performance across individuals and groups of individuals, the authors explain. AI-driven medical care “should be provided equally for all citizens,” they write. The team acknowledges that, in practice, perfect fairness “might be impossible to achieve.” Therefore: 

‘AI tools should be developed such that potential AI biases are identified, reported and minimized as much as possible to achieve ideally the same—but at least highly similar—performance across subgroups.’
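
To make that recommendation concrete, here is a minimal illustrative sketch in Python of one way a development team might compare a model’s performance across patient subgroups. It is not from the paper, and the column names, scores and threshold are hypothetical placeholders.

    # Illustrative sketch only (not from the FUTURE-AI paper): report a
    # model's discrimination (AUC) separately for each patient subgroup.
    # Column names "label" and "model_score" are hypothetical placeholders.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def auc_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.Series:
        """AUC per subgroup, e.g. group_col="sex" or "age_band"."""
        return df.groupby(group_col).apply(
            lambda g: roc_auc_score(g["label"], g["model_score"])
        )

    # Usage (hypothetical): flag subgroups trailing the overall AUC by >0.05.
    # overall = roc_auc_score(df["label"], df["model_score"])
    # gaps = overall - auc_by_subgroup(df, "sex")
    # print(gaps[gaps > 0.05])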

2. Universality. 

A healthcare AI tool should be generalizable outside the controlled environment in which it was built. Specifically, the AI tool “should be able to generalize to new patients and new users and, when applicable, to new clinical sites.” More:  

‘[H]ealthcare AI tools should be as interoperable and as transferable as possible so they can benefit patients and clinicians at scale.’ 

3. Traceability. 

Medical AI tools “should be developed together with mechanisms for documenting and monitoring the complete trajectory of the AI tool,” from development and validation to deployment and usage, the authors state. 

‘This will increase transparency and accountability by providing detailed and continuous information on the AI tools during their lifetime to clinicians, healthcare organizations, citizens and patients, AI developers and relevant authorities.’
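
As one concrete illustration of what such a mechanism might record, here is a minimal sketch of a lifecycle audit-log entry in Python. The schema is ours for illustration, not the paper’s, and every field name is a hypothetical placeholder.

    # Illustrative sketch only: a minimal audit-log record spanning an AI
    # tool's lifecycle stages. Field names are hypothetical, not a schema
    # proposed by the FUTURE-AI paper.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class LifecycleEvent:
        tool_name: str
        tool_version: str
        stage: str    # e.g. "development", "validation", "deployment", "usage"
        detail: str

        def to_log_line(self) -> str:
            record = asdict(self)
            record["timestamp"] = datetime.now(timezone.utc).isoformat()
            return json.dumps(record)

    # Usage (hypothetical):
    # print(LifecycleEvent("sepsis-risk", "1.4.2", "validation",
    #                      "external validation at site B").to_log_line())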

4. Usability.

End-users “should be able to use an AI tool to achieve a clinical goal efficiently and safely in their real world environment,” Lekadir and colleagues write. “On one hand, this means that end-users should be able to use the AI tool’s functionalities and interfaces easily and with minimal errors.” On the other hand: 

‘The AI tool should be clinically useful and safe, improve the clinicians’ productivity and/or lead to better health outcomes for the patient and avoid harm.’

5. Robustness.

Research has shown that even small, imperceptible variations in input data can lead AI models to incorrect decisions, the authors note. Biomedical and health data “can be subject to major variations in the real world—both expected and unexpected—which can affect the performance of AI tools.” 

‘It is important that healthcare AI tools are designed and developed to be robust against real world variations. They should be evaluated and optimized accordingly.’ 
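
One simple way to evaluate that kind of robustness, sketched below for illustration and not drawn from the paper, is to measure how often a model’s predictions flip under small random perturbations of its inputs. The model here is assumed to be any fitted classifier with a scikit-learn-style predict method.

    # Illustrative sketch only: fraction of predictions that change when
    # small Gaussian noise is added to the inputs. `model` is a hypothetical
    # fitted classifier exposing .predict(); the noise scale is arbitrary.
    import numpy as np

    def prediction_flip_rate(model, X: np.ndarray,
                             noise_scale: float = 0.01,
                             n_trials: int = 20) -> float:
        baseline = model.predict(X)
        rng = np.random.default_rng(seed=0)
        flip_rates = [
            np.mean(model.predict(X + rng.normal(0.0, noise_scale, X.shape)) != baseline)
            for _ in range(n_trials)
        ]
        return float(np.mean(flip_rates))  # 0.0 means fully stable under this noise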

6. Explainability. 

Medicine is a high-stakes discipline—one that requires transparency, reliability and accountability. Yet machine learning techniques “often produce complex models that are ‘black box’ in nature,” the authors write. 

‘Explainability enables end-users to interpret the AI model and outputs, understand the capacities and limitations of the AI tool, and intervene when necessary, such as to decide to use it or not.’

Expounding on that last point, Lekadir et al. accept that explainability is a complex task. Its challenges “need to be carefully addressed during AI development and evaluation” to make sure AI explanations are “clinically meaningful and beneficial to end-users.”
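
For one concrete flavor of the model-level explanation the authors describe, the sketch below uses scikit-learn’s permutation importance to show which inputs most drive a model’s held-out performance. It is an illustration on our part, not a method from the paper; the fitted model, validation data and feature names are hypothetical placeholders.

    # Illustrative sketch only: rank features by how much held-out
    # performance drops when each is randomly shuffled. `model`, `X_val`,
    # `y_val` and `feature_names` are hypothetical placeholders.
    from sklearn.inspection import permutation_importance

    def rank_features(model, X_val, y_val, feature_names) -> None:
        result = permutation_importance(
            model, X_val, y_val, n_repeats=10, random_state=0
        )
        ranked = sorted(zip(feature_names, result.importances_mean),
                        key=lambda pair: pair[1], reverse=True)
        for name, drop in ranked:
            print(f"{name}: score drop when shuffled = {drop:.3f}")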

The paper is available in full for free.


The Latest from our Partners

The Healthcare Leader’s Checklist for Choosing an Ambient AI Assistant with Strong AI Governance - As ambient AI for clinicians continues to evolve rapidly, how can governance protocols keep pace?

Nabla's latest whitepaper explores:

☑️ Key considerations when evaluating Ambient AI solutions.
☑️ Proven strategies Nabla employs to ensure safeguards around privacy, reliability, and safety.

Access actionable insights to support your decision-making. Download the whitepaper here.


Healthcare AI newswatch: End-of-life AI, deepfake healthcare workers, AI drug prescribers, more

Buzzworthy developments of the past few days. 

  • Let’s let AI qualify as a medical practitioner eligible to prescribe drugs. As long as the algorithm gets authorized by the relevant governmental bodies, of course, and approved by the FDA for particular drugs in particular situations. The proposal comes from David Schweikert, a Republican representative from Arizona in the U.S. House. It arrives in the form of a bill called the Healthy Technology Act of 2025, which is now awaiting consideration by the House Committee on Energy and Commerce. Schweikert seems not to have said much about the bill since introducing it last month. A Policy & Medicine article linked by his official website notes that Schweikert introduced similar legislation in a previous session of Congress only to see it die in committee with no discussion. “The [present] bill’s progress will be closely watched, as it could set a precedent for how AI is integrated into core medical practices,” the article points out. “While the potential benefits of AI in healthcare are significant, careful consideration must be given to the ethical, legal and practical implications of allowing AI systems to prescribe medications.”
     
  • Lost in the tornadic activity swirling out of the White House is Congress’s current thinking on AI regulation. Never mind that it was less than two months ago that the House released detailed—and bipartisan—recommendations. And the report included pointers specifically dedicated to AI in healthcare. In advance of whatever AI exertions come next on Capitol Hill, two legal analysts took questions from radio host Tom Temin of the Federal News Network. Legislating on AI is sure to be “tough,” suggests Adam Steinmetz of the Brownstein law and lobbying firm. “Members of the health committees have said they want to put guardrails, but they’re very worried they will become obsolete or they will age very quickly,” he adds. “This is a very quick moving field. Something that applies now might be already outdated a year from now. Congress struggles to update things as it is.” Hear the broadcast or read the transcript here.
     
  • Laying out his boss’s views on AI for a largely European audience, Vice President Vance struck a truly Trumpian note. “The United States of America is the leader in AI, and our administration plans to keep it that way,” Vance told attendees of an AI summit of governmental and business leaders in Paris Feb. 11. “We need international regulatory regimes that foster the creation of AI technology rather than strangle it.” The latter comment came across as a shot over the bow of the European Union, which is in the early enforcement phase of the EU AI Act. Vance put actions behind his words, too, joining Britain in declining to sign on to the summit’s final statement. Blanket coverage.
     
  • A state bill in California would bar AI bots from passing ‘themselves’ off as human healthcare workers. If passed into law, the measure would give regulators the authority to enforce title protections. These restrict the use of professional job titles to people actually holding those titles. “Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” says the bill’s author, Democrat Mia Bonta. “It’s a no-brainer to me.” The Alameda Post has the story.
     
  • End-of-life decisions are often anything but easy. AI might be able to help. Example: When a patient or loved one is facing a crucial choice—say, curative treatments vs. palliative care—AI built on large language models could offer milestones to expect along each of those two divergent paths. Rebecca Weintraub Brendel, MD, JD, considers the use case in some detail. “The ability to have AI gather and process orders of magnitude more information than what the human mind can process—without being colored by fear, anxiety, responsibility, relational commitments—might give us a picture that could be helpful,” the director of Harvard Medical School’s Center for Bioethics tells the Harvard Gazette. “Having a better prognostic sense of what might happen is really important to that [type of] decision, which is where AI can help.”
     
  • Entirely new fields of medicine and medical research are at hand. That’s thanks to the combination of medical data by the mounds, powerful AI, automated biological labs, in silico simulations of proteins and other emerging facilitators. Former biochemical researcher Jonathan Schramm surveys this landscape in a piece published Feb. 11 in Securities.io. Multiomics and AI, he projects, will “drive transformation in healthcare, with the emergence of truly personalized precision medicine tailored to each individual’s unique makeup of genes, metabolism, medical history, etc.” Schramm, who’s now a stock analyst and finance writer, defines “multiomics,” describes promising use cases and directs the reader’s attention to a few companies worth a watch by investors. Interesting piece.
     
  • The National Science Foundation is looking for a few good tips. More specifically, the independent federal agency has issued a request for information to help it develop an AI action plan. Following up on the White House’s Jan. 23 executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” the NSF says it will use contributed input to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.” The agency is open to hearing from academia, industry groups, private sector organizations, state, local and tribal governments, and “any other interested parties.” The comment period will end March 15. Details.  
     
  • ‘I don’t think he’s a happy person. I feel for him.’ So said Sam Altman of Elon Musk after Musk offered to co-purchase Altman’s OpenAI for more than $97 billion. Musk was a cofounder of the company in 2015. Evidently his main motive for resurfacing in the company’s orbit has to do with his sense that OpenAI has betrayed its nonprofit roots. That perception—whether it’s based in fact, feeling or some combination of the two—seems to bother Musk even though the company is seeking to spin off its for-profit business, as many outlets are reporting. We definitely live in interesting times. 
     
  • Recent research in the news: 
     
  • Notable FDA approval activity:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand