Healthcare AI Digest

Buzzworthy developments of the past few days. 

  • Let’s let AI qualify as a medical practitioner eligible to prescribe drugs. As long as the algorithm gets authorized by the relevant governmental bodies, of course, and approved by the FDA for particular drugs in particular situations. The proposal comes from David Schweikert, a Republican representative from Arizona in the U.S. House. It arrives in the form of a bill called the Healthy Technology Act of 2025, which is now awaiting consideration by the House Committee on Energy and Commerce. Schweikert seems not to have said much about the bill since introducing it last month. A Policy & Medicine article linked by his official website notes that Schweikert introduced similar legislation in a previous session of Congress only to see it die in committee with no discussion. “The [present] bill’s progress will be closely watched, as it could set a precedent for how AI is integrated into core medical practices,” the article points out. “While the potential benefits of AI in healthcare are significant, careful consideration must be given to the ethical, legal and practical implications of allowing AI systems to prescribe medications.”
     
  • Lost in the tornadic activity swirling out of the White House is Congress’s current thinking on AI regulation. Never mind that it was less than two months ago that the House released detailed—and bipartisan—recommendations. And the report included pointers specifically dedicated to AI in healthcare. In advance of whatever AI exertions come next on Capitol Hill, two legal analysts took questions from radio host Tom Temin of the Federal News Network. Legislating on AI is sure to be “tough,” suggests Adam Steinmetz of the Brownstein law and lobbying firm. “Members of the health committees have said they want to put guardrails, but they’re very worried they will become obsolete or they will age very quickly,” he adds. “This is a very quick moving field. Something that applies now might be already outdated a year from now. Congress struggles to update things as it is.” Hear the broadcast or read the transcript here.
     
  • Laying out his boss’s views on AI for a largely European audience, Vice President Vance struck a truly Trumpian note. “The United States of America is the leader in AI, and our administration plans to keep it that way,” Vance told government and business leaders attending an AI summit in Paris Feb. 11. “We need international regulatory regimes that foster the creation of AI technology rather than strangle it.” The latter comment came across as a shot over the bow of the European Union, which is in the early enforcement phase of the EU AI Act. Vance put actions behind his words, too, joining Britain in declining to sign the summit’s final statement. Blanket coverage.
     
  • A state bill in California would stop AI bots from passing ‘themselves’ off as human healthcare workers. If passed into law, the measure would give regulators the authority to enforce title protections, which restrict the use of professional job titles to people who actually hold those titles. “Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” says the bill’s author, Democrat Mia Bonta. “It’s a no-brainer to me.” The Alameda Post has the story.
     
  • End-of-life decisions are often anything but easy. AI might be able to help. Example: When a patient or loved one is facing a crucial choice—say, curative treatments vs. palliative care—a large language model could offer milestones to expect along each of those two divergent paths. Rebecca Weintraub Brendel, MD, JD, considers the use case in some detail. “The ability to have AI gather and process orders of magnitude more information than what the human mind can process—without being colored by fear, anxiety, responsibility, relational commitments—might give us a picture that could be helpful,” the director of Harvard Medical School’s Center for Bioethics tells the Harvard Gazette. “Having a better prognostic sense of what might happen is really important to that [type of] decision, which is where AI can help.”
     
  • Entirely new fields of medicine and medical research are at hand. That’s thanks to the combination of medical data by the mounds, powerful AI, automated biological labs, in-silico simulations of proteins and other emerging facilitators. Former biochemical researcher Jonathan Schramm surveys this landscape in a piece published Feb. 11 in Securities.io. Multiomics and AI, he projects, will “drive transformation in healthcare, with the emergence of truly personalized precision medicine tailored to each individual’s unique makeup of genes, metabolism, medical history, etc.” Schramm, who’s now a stock analyst and finance writer, defines “multiomics,” describes promising use cases and directs the reader’s attention to a few companies worth a watch by investors. Interesting piece.
     
  • The National Science Foundation is looking for a few good tips. More specifically, the independent federal agency has issued a request for information to help it develop an AI action plan. Following up on the White House’s Jan. 23 executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” the NSF says it will use contributed input to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.” The agency is open to hearing from academia, industry groups, private sector organizations, state, local and tribal governments, and “any other interested parties.” The comment period will end March 15. Details.  
     
  • ‘I don’t think he’s a happy person. I feel for him.’ So said Sam Altman of Elon Musk after Musk offered to purchase Altman’s OpenAI, alongside a group of co-investors, for more than $97 billion. Musk co-founded the company in 2015. Evidently his main motive for resurfacing in the company’s orbit has to do with his sense that OpenAI has betrayed its nonprofit roots. That perception—whether it’s based in fact, feeling or some combination of the two—seems to bother Musk even though the company is seeking to spin off its for-profit business, as many outlets are reporting. We definitely live in interesting times.
     
  • Recent research in the news: 
     
  • Notable FDA approval activity:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.