News You Need to Know Today

HHS has plans for AI | Partner news | AI newsmakers: Nvidia, AWS, Meta, more

Tuesday, January 14, 2025


Washington, D.C.

4 ways HHS plans to help shape a national strategy for healthcare AI

HHS has thought through the ways AI can and should become an integral part of healthcare, human services and public health. Last Friday—possibly just days ahead of seating a new secretary—the agency released a detailed plan for getting there from here.

While the document carries no immediate regulatory weight, it serves as a “roadmap” letting stakeholders know where HHS stands on keeping (or making) healthcare AI trustworthy, ethical and accessible across all socioeconomic strata (aka “equitable”).

“While AI could significantly improve many aspects of healthcare and human services, it also presents possible risks that could lead to adverse impacts or outcomes, such as algorithmic bias that may unintentionally reduce equity or breach protected information,” the document’s executive summary explains. “Responsible AI use should ensure equitable access and beneficence, safeguard protected information, involve appropriate consent where applicable, and ensure appropriate human oversight where needed.” More: 

‘Most notably, AI should be viewed as a tool to support and inform efforts rather than the sole answer to problems in the existing landscape.’

HHS hopes to translate its vision into action by pursuing four key goals, as follows. 

1. Catalyze health AI innovation and adoption. 

Improving AI adoption in medical research and discovery “could hinge on expanding use cases, encouraging AI in different disease areas and promoting AI-ready data standards,” HHS notes. Already, the agency points out, it has been directing funding and resources toward intramural and extramural research programs that “develop or leverage AI in medical research and discovery (see NIH’s Bridge2AI and ARPA-H’s Transforming Antibiotic R&D with GenAI to stop Emerging Threats project, or ‘TARGET’).” More: 

‘In the future, HHS plans to share data interoperability guidelines, engage the public, and continue prioritizing safe, responsible, and responsive AI in its funding of both intramural and extramural research programs.’

2. Promote trustworthy AI development and ethical and responsible use.

AI use in medical research and discovery could present biosecurity, privacy, bias and other risks, HHS states in the strategic plan. The agency touts platforms it has established to get ahead of those threats, such as NIH’s Science Collaborative for Health Disparities and Artificial Intelligence Bias Reduction (ScHARe) and the Executive Office of the President’s National Biodefense Strategy.

‘Going forward, HHS will share national guidelines specific to health AI, create sandboxes for industry collaboration and explore the use of AI for dynamic AI risk assessment.’

3. Democratize AI technologies and resources.

Working directly with the public and making critical data, tooling and infrastructure more accessible to stakeholders with lower access to capital “could expand the opportunity to conduct AI-empowered research and discovery,” the plan reads. To support this goal, HHS is engaging communities (see NIH’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Research Diversity, or AIM-AHEAD) and standardizing research data (see NIH’s Common Data Element Repository).

‘HHS will continue to promote public/private partnerships, support multi-institutional research collaborations, and ensure access to needed data and data infrastructure.’

4. Cultivate AI-empowered workforces and organization cultures.

“To help ensure long-term successful and safe adoption of AI in medical research and discovery, AI talent pipelines and organizational working models may need to be bolstered,” HHS writes. The agency is “developing talent internally (see NIH’s Data and Technology Advancement National Service Scholar Program) and externally (see NIH’s Administrative Supplements for Workforce Development at the Interface of Information Sciences, AI/ML and Biomedical Sciences).”

‘HHS will continue to promote apprenticeship programs focused on AI in medical research and discovery activities to bolster talent pipelines and share guidelines for AI governance to help organizations foster robust AI-enabled cultures.’

Bearing in mind that HHS is soon to have a new head—whether Robert F. Kennedy Jr. or someone else—read the full plan and related materials here.

 


The Latest from our Partners

  • Access the 2024 Executive Handbook: Ten Transformative Trends in Healthcare - What was top of mind for healthcare executives this year? What trends will shape 2025?

    Nabla’s Chief Medical Officer, Ed Lee, MD, MPH, was recently interviewed for the 2024 Executive Handbook: Ten Transformative Trends in Healthcare, offering his perspective on how AI is enhancing clinical workflows and setting the stage for the future of patient care.

    From shifting federal healthcare policies to the emergence of disruptors beyond traditional health systems and pressing cybersecurity challenges, discover the key insights shaping the industry.

    Download the full handbook here.
     

  • Assistant or Associate Dean, Health AI Innovation & Strategy - UCLA Health seeks a visionary academic leader to serve as its Assistant or Associate Dean for Health AI Innovation and Strategy and Director for the UCLA Center for AI and SMART Health. This unique position offers the opportunity to shape and drive AI vision and strategy for the David Geffen School of Medicine (DGSOM) and ensure translation of innovation in our renowned Health system. This collaborative leader will work with academic leadership, faculty, staff and trainees to harness the power of AI to transform biomedical research, decision and implementation science, and precision health. Learn more and apply at:

    https://recruit.apo.ucla.edu/JPF09997 (tenured track) 
    https://recruit.apo.ucla.edu/JPF10032 (non-tenured track)


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • The little gaming company that could is now the big healthcare AI disrupter that can and will. Announcing new partnerships with IQVIA, Illumina and Mayo Clinic, Nvidia is talking about helping create “an AI factory opportunity in the hundreds of billions of dollars.” Kimberly Powell, Nvidia’s VP of healthcare, says the company’s new and existing partnerships are “poised to usher in a new era of medical and biological innovation and improve patient outcomes worldwide.” Recall that Nvidia, now famously, got its start in 1993 in a Denny’s restaurant. At the time, all Jensen Huang and colleagues had in mind was bringing 3D graphics to the gaming and multimedia markets. Today Nvidia is one of the world’s two most valuable companies, often jockeying for the lead against Apple. Healthcare AI dreamers, take note. 
     
  • Also rattling the change in its deep pockets while moving fast on AI for healthcare is Amazon Web Services. On Monday the Bezos baby announced a multiyear partnership with the venture capital firm General Catalyst. AWS says companies in GC’s portfolio will tap AWS’s expertise to accelerate development and deployment of healthcare AI products. The companies include Aidoc and Commure. Chris Bischoff, head of global healthcare investing at General Catalyst, tells CNBC his firm has “spent a lot of time thinking about how health systems can transform themselves, and we recognize that it’s not going to be through 1,000 companies. We need solutions that are really enterprise grade.” 
     
  • What if the deliverables never do live up to the promises? As much of healthcare AI remains an early work in progress, it’s only right to occasionally ask the question. At Science News, biotechnologist Meghan Rosen, PhD, and geneticist Tina Hesman Saey, PhD, do just that. “The stakes are high,” they point out. “If efforts fail, it means billions of dollars wasted and diverted from other interventions that could have saved lives.” The pair interviewed dozens of scientists and physicians en route to identifying six encouraging healthcare AI use cases and arriving at a reasonable conclusion: “[S]ome researchers, clinicians and engineers say that AI’s potential for making lives better is so high,” they write, “we have to try.” 
     
  • Having released the latest version of Llama in December, Meta is telling the world that open-source AI is the future of AI in healthcare. In a Jan. 13 post, the Facebook parent company briefly describes and interviews two companies making the case. Meta also touts the benefits of open source in its own voice. “Developers, researchers and other professionals can download and fine-tune the models on their own devices,” the post reads. “That they don’t need to send their data back to the AI model providers strengthens control and security over private health data—critical factors for highly regulated industries like healthcare.”  
     
  • Is greed good for healthcare AI? Well, it can be. One would be foolish to think developers work for months to come up with one-of-a-kind algorithms just to benefit humankind. Most hope to make a financial killing too. At MedCity News, an attorney urges these multi-motivated innovators to pay attention to intellectual property protection and, more to the point, patent protection. “The economic value of a great patent can be enormous,” writes David Carstens, JD, MBA, of Carstens, Allen & Gourley. “The ability to charge customers more for an AI-provided service improves when you have a patent to prevent your competitors from introducing the same service.” Makes sense. More here.
     
  • Yes, Virginia, there is meaningful AI regulation at the state level. It’s in yours. In fact, several bills are in the works inside the Old Dominion. One would require AI developers to publicly disclose their products’ origin and history. Another would mandate certain must-dos for end users as well as developers, either of whom could face civil penalties for noncompliance. Journalist Nathaniel Cline breaks down the pending bills for the Virginia Mercury. He quotes a state lawmaker and AI bill author who’s been working across state lines to “‘minimize patchwork legislation around the nation’ as the country waits for federal policy action on AI.”
     
  • A strong majority of insured Americans, some 67%, would trust their carrier’s AI copilot to give them the straight skinny on their coverage. Yet only about half that share, 33%, say they’re more confident in how AI is being deployed today than they were two years ago. The findings come from a survey of around 2,100 American adults conducted in November by the Harris Poll on behalf of healthtech vendor Pager Health. More results from Pager here and from Fierce Healthcare here.
     
  • Be glad you work in healthcare. A nerve-rattling 41% of large commercial concerns around the world tell the World Economic Forum they expect to cut jobs over the next five years as AI automates many of their tasks. Meanwhile 77% say they’ll reskill and upskill their current workforces. One of the aims there will be making human workers competent with—and comfortable around—their nonhuman colleagues. Imagine the watercooler conversations. 
     
  • Recent research in the news: 
     
  • Select mergers & acquisitions:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


© Innovate Healthcare, a TriMed Media brand