AI alone won’t save lives or improve health: Kaiser Permanente AI exec
Imperfect algorithms. Resistant clinicians. Wary patients. Health disparities: some real, some perceived, some both at once. The plot ingredients of a flashy techno-thriller coming to a cineplex near you? No, just a few of the many worries that provider organizations take on when they adopt AI at scale.
At one of the largest such institutions in the U.S.—the eight-state, 40-hospital, not-for-profit managed-care titan Kaiser Permanente—the learning curve so far has been steep but rewarding.
So suggests Daniel Yang, MD, the organization’s VP of AI and emerging technologies, in a March 19 website post. Yang’s aim is to share KP’s hard-won lessons about AI in a quick, accessible read.
Here are four points Yang makes along the way to reminding us that AI tools alone “don’t save lives or improve the health of our [12.5 million] members—they enable our physicians and care teams to provide high-quality, equitable care.”
1. AI can’t be responsible for—or by—itself.
Kaiser Permanente demands alignment between its AI tools and its core mission: delivering high-quality and affordable care for its members. “This means that AI technologies must demonstrate a ‘return on health,’ such as improved patient outcomes and experiences,” Yang writes. More:
[O]nce a new AI tool is implemented, we continuously monitor its outcomes to ensure it is working as intended. We stay vigilant; AI technology is rapidly advancing, and its applications are constantly changing.
2. Policymakers must oversee AI without inhibiting innovation.
No provider organization is an island, and every one needs a symbiotic relationship with government. Yang names two aims that must be shared across the public/private divide. One is setting up a framework for national AI oversight. The other is developing standards for AI in healthcare. Yang expounds:
By working closely with healthcare leaders, policymakers can establish standards that are effective, useful, timely and not overly prescriptive. This is important because standards that are too rigid can stifle innovation, which would limit the ability of patients and providers to experience the many benefits AI tools could help deliver.
3. Good guardrails are already going up.
Yang applauds the National Academy of Medicine for convening a steering committee to establish a healthcare AI code of conduct. The code will incorporate input from numerous healthcare technology experts. “This is a promising start to developing an oversight framework,” Yang writes. More:
Kaiser Permanente appreciates the opportunity to be an inaugural member of the U.S. AI Safety Institute Consortium. The consortium is a multisector work group setting safety standards for the development and use of AI, with a commitment to protecting innovation.
4. Compliance confusion is an avoidable misstep.
Government bodies should coordinate at the federal and state levels “to ensure AI standards are consistent and not duplicative or conflicting,” Yang maintains. At the same time, he believes, standards need to be adaptable. More:
As healthcare organizations continue to explore new ways to improve patient care, it is important for them to work with regulators and policymakers to make sure standards can be adapted by organizations of all sizes and levels of sophistication and infrastructure. This will allow all patients to benefit from AI technologies while also being protected from potential harm.
“At Kaiser Permanente, we’re excited about AI’s future,” Yang concludes, “and we are eager to work with policymakers and other healthcare leaders to ensure all patients can benefit.”