News You Need to Know Today

Enterprise-wide AI adoption | Industry watcher’s digest | Partner news

Friday, March 22, 2024

In cooperation with Northwestern and Nabla


AI alone won’t save lives or improve health: Kaiser Permanente AI exec

Imperfect algorithms. Resistant clinicians. Wary patients. Health disparities—some real, some perceived, others both at the same time. The plot ingredients of a flashy techno-thriller coming to a cineplex near you? No—just a few of the many worries that provider organizations take on when they move to adopt AI at scale.

At one of the largest such institutions in the U.S.—the eight-state, 40-hospital, not-for-profit managed-care titan Kaiser Permanente—the learning curve so far has been steep but rewarding.

So suggests Daniel Yang, MD, the organization’s VP of AI and emerging technologies, in a March 19 website post. Yang’s intent is to share KP’s hard-won lessons about AI in a quick and accessible read.

Here are four points Yang makes along the way to reminding us that AI tools alone “don’t save lives or improve the health of our [12.5 million] members—they enable our physicians and care teams to provide high-quality, equitable care.”

1. AI can’t be responsible for—or by—itself.

Kaiser Permanente demands alignment between its AI tools and its core mission: delivering high-quality and affordable care for its members. “This means that AI technologies must demonstrate a ‘return on health,’ such as improved patient outcomes and experiences,” Yang writes. More:

[O]nce a new AI tool is implemented, we continuously monitor its outcomes to ensure it is working as intended. We stay vigilant; AI technology is rapidly advancing, and its applications are constantly changing. 

2. Policymakers must oversee AI without inhibiting innovation. 

No provider organization is an island, and every one of them needs a symbiotic relationship with government. Yang mentions two aims that must be shared across the private/public divide. One is setting up a framework for national AI oversight. The other is developing standards for AI in healthcare. Yang expounds:

By working closely with healthcare leaders, policymakers can establish standards that are effective, useful, timely and not overly prescriptive. This is important because standards that are too rigid can stifle innovation, which would limit the ability of patients and providers to experience the many benefits AI tools could help deliver.

3. Good guardrails are already going up.

Yang applauds the convening of a steering committee by the National Academy of Medicine to establish a healthcare AI code of conduct. The code will incorporate input from numerous healthcare technology experts. “This is a promising start to developing an oversight framework,” Yang writes. More:

Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multisector work group setting safety standards for the development and use of AI, with a commitment to protecting innovation.

4. Compliance confusion is an avoidable misstep.

Government bodies should coordinate at the federal and state levels “to ensure AI standards are consistent and not duplicative or conflicting,” Yang maintains. At the same time, he believes, standards need to be adaptable. More:

As healthcare organizations continue to explore new ways to improve patient care, it is important for them to work with regulators and policymakers to make sure standards can be adapted by organizations of all sizes and levels of sophistication and infrastructure. This will allow all patients to benefit from AI technologies while also being protected from potential harm.

“At Kaiser Permanente, we’re excited about AI’s future,” Yang concludes, “and we are eager to work with policymakers and other healthcare leaders to ensure all patients can benefit.”

Read the whole post.


The Latest from our Partners

Bayer Radiology uses Activeloop’s Database for AI to pioneer medical GenAI workflows - Bayer Radiology collaborated with Activeloop to make their radiological data AI-ready faster. Together, the parties developed a “chat with biomedical data” solution that allows users to query X-rays with natural language. This collaboration significantly reduced data preparation time, enabling efficient AI model training. The Intel® Rise Program further bolstered Bayer Radiology’s collaboration with Activeloop, with Intel® technology used at multiple stages of the project, including feature extraction and processing large batches of data. For more details on how Bayer Radiology is pioneering GenAI workflows in healthcare, read more.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Would you wake us when you’re sure it’s a little less woke? Undaunted by the ridicule Google’s Gemini brought upon itself less than a month ago—female pope, Black Vikings, other history-bending howlers—Google is now touting the chatbot’s image-generation function for, specifically, clinical uses. “[W]e’re researching how a version of the Gemini model, fine-tuned for the medical domain, can unlock new capabilities for advanced reasoning, understanding a high volume of context and processing multiple modalities,” Google reports in a blog post talking up various efforts the company is making with genAI for healthcare. The post is written by Yossi Matias, Google’s engineering & research VP. Read the whole thing.
     
  • And in AI news from the very biggest of the Big Tech players (by market capitalization): Microsoft has hired two AI heavy hitters to launch and lead a new organization straightforwardly named Microsoft AI. The pair’s initial top tasks will be advancing Copilot and “other consumer AI products and research.” Mustafa Suleyman, co-founder of both DeepMind and Inflection, becomes EVP and CEO of the new arm. Karén Simonyan, co-founder of Inflection and a key developer of AlphaZero, enters as its chief scientist. In a March 19 blog post addressed to Microsoft employees, Microsoft CEO Satya Nadella tells his team: “We have a real shot to build technology that was once thought impossible and that lives up to our mission to ensure the benefits of AI reach every person and organization on the planet, safely and responsibly.”
     
  • Samsung has its heart set on breaking Nvidia’s firm grip on the AI accelerator market. And while the South Korean technology powerhouse is at it, it will try to “restore itself as the world’s biggest semiconductor company.” The report comes from SamMobile’s translation of coverage in the Seoul Economic Daily. The niche outlet quotes Samsung Semiconductor CEO Kye Hyun Kyung, who announced at a recent shareholder meeting that the chip will debut in 2025 under the name Mach-1. The exec says the product will be “an entirely new type of semiconductor—a semiconductor designed to meet the processing requirements of future artificial general intelligence.”
     
  • GenAI can help pay down technical debt, aka ‘code debt.’ Whatever you call it, it’s what often results when software developers feel rushed and take shortcuts. Poorly written code is only one form of costly (albeit nonmonetary) “debt” that can result. There’s a higher risk of accumulating such debt “when applying AI models to an existing technology ecosystem, such as revising connectivities and integrating gen AI models while using an old stack,” Neal Sample, CIO of Walgreens Boots Alliance, tells CIO.com. On the other hand, if used appropriately, gen AI “could help eliminate old technical debt by rewriting legacy applications and automating a backlog of tasks,” explains CIO writer Bill Doerrfeld. Read the whole informative article.
     
  • Surefire safeguards are available to keep large language AI models from serving up health disinformation disguised as authoritative advice. The problem is that such precautionary measures are applied only here and there rather than everywhere. In a study published in The BMJ, researchers from Australia and the U.K. present these findings before concluding that “enhanced regulation, transparency and routine auditing are required to help prevent large language models from contributing to the mass generation of health disinformation.” Research paper here, news summary here.
     
  • Well, this is equal parts exciting and terrifying. Researchers have come up with an AI predictor for discerning a person’s receptivity to being vaccinated against COVID-19. The system applies its steely logic to “a small set of data from demographics and personal judgments such as aversion to risk or loss.” The quote is from the news operation at the University of Cincinnati, whose researchers worked with colleagues at Northwestern University to design the algorithmic mind reader. Read the coverage, which includes a link to the scientific study. Or do what I did and make up your mind to flee that AI like a startled deer.
     
  • Know any digital health enthusiasts with the urge to innovate? Consider pointing them to an upcoming hackathon bearing the theme “Building High-Value Health Systems: Harnessing Digital Health and Artificial Intelligence.” Organized by the Harvard T.H. Chan School of Public Health, the event will run April 5 and 6 (a Friday and Saturday). It’ll offer three tracks—cardiovascular disease & diabetes, cancer and mental health. Participants need not travel to Boston. Details here.
     
  • Recent research roundup:
     
  • From AIin.Healthcare’s news partners:
     


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare