Elusive quadruple aim revisited for the generative AI era

Is U.S. healthcare capable of achieving its own quadruple aim? Or is that ideal destined to remain a perpetual pursuit, always chased but never really caught? However you answer, injecting AI into the healthcare system changes the math behind your conclusion.

Researchers at Microsoft’s AI for Good Lab and Microsoft Research explore this unfolding wrinkle in a paper published in Frontiers in Artificial Intelligence.

William Weeks, MD, PhD, MBA, and colleagues remind readers that the pillars of healthcare’s quadruple aim are improving population health, reducing healthcare costs, optimizing the patient experience and maximizing job satisfaction for healthcare workers.

They point out that a confounding factor in assessing progress toward those ends is the “substantial waste” in our healthcare system. Some of the waste owes to administrative wrangling with payers, they note, but much of it traces to clinical overutilization.

Against this challenge, the authors comment, artificial intelligence “has tremendous promise in helping to achieve the quadruple aim”—but the “haphazard application of AI may amplify inefficiencies and biases in the U.S. healthcare system.”

Weeks and co-authors recommend four measures for avoiding such AI-exacerbated negatives:

1. Avoid chasing the wrong metrics.

The goal of a model is not to achieve the best area under the curve, Weeks and colleagues maintain, but “to have measurable positive clinical impact and to achieve the quadruple aim using metrics defined prior to model implementation.”

‘If the model or the technology does not measurably and efficiently promote achievement of the quadruple aim, it should not be implemented.’

2. Always include a human subject matter expert in the loop.

Models increasingly incorporate a human-in-the-loop process to ensure they are operating as intended; in healthcare, “it is critical to include a healthcare subject matter expert in the loop.”

‘Models must make sense to providers, so interpretable models and tools can allow subject matter experts to evaluate a model’s utility in clinical practice (again, using metrics defined prior to model implementation).’

3. Test, validate and monitor models.

AI models are invariably developed, tested and refined retrospectively, Weeks and co-authors write. While AI models have the advantage of being testable on a randomly selected held-out dataset, all AI models should be prospectively tested and validated on the target population before widespread implementation, using predefined validation thresholds.

Further, “models should be monitored over time: If models are effective, they may lead to behavior change; that behavior change may change key relationships, and those changes will require new model development.”

‘Those determining whether to develop and implement AI models should consider the cumulative long-term costs of monitoring and re-developing models.’

4. Use responsible AI practices.

A model’s effectiveness is intrinsically tied to the quality of the data used in its training, and ensuring that those data are free from bias is crucial, the authors state. “When the data itself is not biased, subsequent decisions derived from its analysis—for instance, misapplication of models to populations that are not represented in the unbiased data—might be unfair.”

‘Particularly with health-related AI models, where stakes are significantly higher and impacts more profound, adherence to responsible AI practices is imperative.’

Along with the four pointers for avoiding pitfalls, Weeks and colleagues flesh out four ways AI can help advance U.S. healthcare toward the promised land of the quadruple aim—even in the absence of health insurance reform. These are:

  • Helping patients decide whether to obtain services
  • Helping patients decide where to obtain desired care
  • Helping policymakers understand the relationship between social determinants of health and healthcare access, quality and outcomes
  • Supporting providers' decision making

“Current uses of AI applications can improve the efficiency of healthcare operations, such as scheduling, letter-writing, provider in-box email responses, patient triage and coding optimization,” the authors write. “These uses can improve patient and provider experiences, reduce per-capita healthcare costs and promote achievement of the quadruple aim.”

At the same time, however—given healthcare’s many inefficiencies—the unconsidered application of AI may increase healthcare costs without advancing the quadruple aim, Weeks et al. warn.

AI models can be “expensive to develop, test, implement and monitor,” they add. “A modest increase in accuracy may not warrant the expense if the impact on patients and clinical care is not significant.” More:

‘An unwavering focus on and objective evaluation of how technological implementation helps achieve the quadruple aim is essential for improving healthcare efficiency and effectiveness.’

The paper is available in full for free.
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.