News You Need to Know Today

Healthcare and sustainable AI | AI newsreader’s notebook | Partner news

Tuesday, August 13, 2024

In cooperation with Northwestern and Nabla


Sustainable AI: 5 ways healthcare can do its part

As far back as 2019, when worldwide adoption of AI was still relatively low, researchers estimated that training a single sizable AI model can puff as much carbon dioxide into the atmosphere as five cars driven for approximately 12 years.

That statistic ought to disturb healthcare, since the sector is now all in on AI and is responsible for 4.4% of global greenhouse gas emissions.

The reminder comes courtesy of radiology and AI researchers in Japan who conducted a review of the relevant scientific literature and had their report published in Diagnostic and Interventional Imaging.

“As the healthcare industry continues to embrace AI technologies, it is imperative to prioritize sustainability and environmental responsibility,” the authors write. “The integration of AI sustainability within broader institutional and societal sustainability efforts will be crucial for achieving a future where healthcare not only improves patient outcomes but also promotes environmental stewardship.”

The paper’s lead author is Daiju Ueda, MD, PhD, a radiologist and AI professor with the Graduate School of Medicine at Osaka Metropolitan University. Ueda and colleagues outline 10 things healthcare can do to mitigate the sector’s role in climate concerns. Here are their first five.

1. Eco-design and lifecycle assessment

By considering the environmental impact of AI systems throughout their entire lifespan, healthcare organizations can make informed decisions that minimize their carbon footprint and resource consumption.

Recommendation:

Conduct comprehensive lifecycle assessments to identify opportunities for eco-design, sustainable material selection and responsible end-of-life management of AI systems in healthcare.

2. Energy-efficient AI models

By reducing the energy consumption associated with AI model training and deployment, healthcare organizations can significantly decrease their environmental impact.

Recommendation:

Prioritize the development of energy-efficient AI models, using techniques such as model compression, quantization and pruning to reduce energy consumption. (A minimal code sketch illustrating two of these techniques follows item 5 below.)

3. Green computing infrastructure

Energy-efficient hardware, optimized software and sustainable infrastructure designs reduce overall energy consumption, while power-management techniques such as dynamic voltage and frequency scaling cut power draw during periods of low utilization.

Recommendation:

Adopt green computing practices in healthcare facilities and data centers, including the use of energy-efficient hardware, optimized software, sustainable infrastructure designs and renewable energy sources.

4. Responsible data management

By reducing the storage and computational requirements associated with healthcare data, organizations can decrease the energy consumption and carbon emissions of their AI systems.

Recommendation:

Implement efficient data compression techniques, optimize data storage systems and regularly assess the necessity of stored data to minimize the environmental impact of AI systems.

5. Collaborative research and knowledge sharing

Platforms like the Green AI Consortium and the Sustainable Healthcare Coalition facilitate the exchange of ideas, best practices and joint projects. By fostering collaboration and knowledge transfer, the healthcare community can accelerate the development and adoption of sustainable AI solutions.

Recommendation:

Promote collaborative research efforts and knowledge sharing among healthcare institutions, AI developers, and sustainability experts to advance sustainable AI practices.
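
To make item 2 concrete, here is a minimal sketch of two of the techniques it names, pruning and post-training dynamic quantization, written in PyTorch. The toy model, layer sizes and pruning amount are invented purely for illustration; nothing here is drawn from the paper itself.

```python
# Minimal sketch (illustrative only): pruning plus dynamic quantization
# applied to a toy model. The architecture and numbers are invented.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruned weights permanent

# Dynamic quantization: store Linear weights as 8-bit integers for inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```

Pruning discards low-magnitude weights and quantization stores the remaining weights in 8 bits, which typically shrinks the model and lowers inference energy use; the actual savings depend on the hardware and serving stack.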

“By embracing sustainable AI practices, the healthcare industry can lead the way in demonstrating how advanced technologies can be harnessed for the betterment of both human health and the environment,” Ueda and co-authors conclude. “As we move forward, it is essential to maintain an ongoing dialogue among researchers, practitioners, policymakers and the public to ensure that the development and deployment of AI in healthcare remain aligned with our shared vision of a sustainable and equitable world.”

The paper is posted in full for free.

 


The Latest from our Partners

Andrew Lundquist, Clinical Director at Nabla, discusses enhancing patient care and giving clinicians more time on the Digital Thoughts podcast. He covers daily clinician challenges, ambient AI for clinical documentation, evaluating startups, and the role of AI in healthcare. Listen to the full episode here.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Beware ‘drift’ and ‘nondeterminism’ in healthcare gen AI. Researchers at Mass General Brigham have observed an unexpected phenomenon. As they trialed generative AI’s potential to help geneticists identify harmful gene variants via literature review, the team found that repeatedly running the same test dataset produced varying results. They characterized the variability as model “drift”—changes in performance over time—and/or “nondeterminism,” meaning inconsistency between consecutive runs. “If a clinical tool developer is not aware that large language models can exhibit significant drift and nondeterminism, they may run their test set once and use the results to determine whether their tool can be introduced into practice,” the authors comment. “This could be unsafe.” Their study is posted in NEJM AI and is summarized in a Q&A. (A minimal sketch of one way to check for such run-to-run variability appears after this digest.)
     
  • In late July the National Conference of State Legislatures noted that states aren’t idly waiting for federal regulation of healthcare AI. The count of states with legislation in the works jumped from three in 2022 to 11 in 2023, the group said, adding that the momentum will likely continue to build in 2024. This week Axios picks up the story from there. “States can often make policy quicker than the federal health bureaucracy and with specific community needs in mind,” reporter Maya Goldman writes. “Still, state officials have run into many of the same problems as their D.C. counterparts, like the lack of clear definitions on AI and differing stakeholder opinions.” Read the rest.
     
  • The FDA has approved 107 AI-equipped medical devices so far this year. Of those, some 78 devices (73%) are for radiology. Cardiovascular care is a distant second, with 10 devices (9%). However, some of the devices FDA considers radiology-specific are also used in cardiology. Cardiac Wire makes the clarification in an item posted Aug. 12. “Although actual cardiovascular AI use in the clinic is in its early stages, the large and growing list of FDA-cleared cardio AI products is a reminder of the innovations taking place in this arena,” editor Jake Fishman writes. “That innovation appears to be leading to more cardiovascular AI products, with more diverse use cases, and should eventually lead to larger increases in clinical adoption.”
     
  • Healthcare AI is largely about pajama time. For now, anyway. And what is “pajama time?” It’s the idea that, “for every hour a physician spends on the patient, they spend two hours searching for information and piecing things together.” Often those two hours don’t come until the good doctor is home for the night. The observation is from Aashima Gupta, Google Cloud’s director of global healthcare. She was one of three panelists at a VentureBeat event earlier this summer. A video of the session is available for viewing here.
     
  • TensorFlow is No. 1 in Analytics Insight’s book. TensorFlow is Google’s open-source framework for developing deep learning applications, image- and speech-recognition models and NLP toolkits. Analytics Insight is a journalism outfit covering emerging data-driven technologies. The outlet has posted its picks for the best AI toolkits for building apps. Six other toolkits made the list and received write-ups.
     
  • AI and platform engineering—what a natural pair. “Agility and flexibility are critical to holistically addressing AI’s velocity and uncertainty,” explains CDS principal consultant Roger Campbell at StateTech magazine. “Platform engineering provides the developer infrastructure needed to get services and applications from a development environment to a production environment quickly and securely.” Interest piqued? Read the piece.
     
  • Make way for chain-of-thought AI. Evidently a refinement of mere explainable AI, the technology behind the new acronym on the block—“CoT AI”—gives its users step-by-step explanations of its decision-making process. “By revealing the intermediate steps in its reasoning,” Pymnts reports, “CoT AI allows researchers and users to better understand how the system arrives at its conclusions. This increased transparency can potentially enhance accountability in AI-driven organizations.”
     
  • IBM, Adobe, Arista Networks, ASML Holding and Taiwan Semiconductor. These are five of the 10 best stocks to buy for investors who favor companies that have AI and automation as a central part of their businesses. The lineup was compiled by Argus and is covered by U.S. News & World Report.
     
  • Recent research in the news:
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     
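Returning to the first item in this digest: here is a minimal sketch of one way a tool developer might check for run-to-run variability before putting a model into practice. The query_model helper is a hypothetical placeholder for whatever LLM client call a given deployment uses; it is not an API from the NEJM AI study.

```python
# Minimal sketch (illustrative only): measure run-to-run disagreement
# ("nondeterminism") of a generative model on a fixed set of prompts.
from collections import Counter


def query_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in your actual LLM client call."""
    raise NotImplementedError


def nondeterminism_report(prompts: list[str], runs: int = 5) -> dict[str, float]:
    """For each prompt, run the model several times and report the share of
    runs that disagree with the most common answer (0.0 = fully consistent)."""
    report = {}
    for prompt in prompts:
        answers = [query_model(prompt).strip() for _ in range(runs)]
        modal_count = Counter(answers).most_common(1)[0][1]
        report[prompt] = 1.0 - modal_count / runs
    return report
```

Rerunning the same report on later dates would surface drift, i.e. changes in behavior over time, in addition to within-day nondeterminism.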

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand