Will AI help or hurt the cause of healthcare equality?
AI has a long way to go before it meaningfully closes disparities in healthcare access and delivery. In fact, even when aimed at that goal, the technology can backfire.
So warn researchers at Stony Brook University’s Renaissance School of Medicine on Long Island, N.Y.
“[W]hat AI lacks that physicians have is not intelligence but rather wisdom—the sense of intuition that a human being can accumulate only over time,” anesthesiologist Ana Costa, MD, and co-authors write in a review of the relevant literature published this month in Frontiers in Artificial Intelligence.
“As the ability of AI is directly proportional to the quality of the training sets used,” the authors point out, “[researchers] have addressed concerns regarding bias in training datasets and lack of diversity in development teams ultimately resulting in AI-driven disparities in care.”
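The review stops short of prescribing a specific auditing method, but the training-set concern the authors raise can be made concrete with a simple representation check. Below is a minimal sketch, with invented column names and made-up reference shares, that flags demographic groups underrepresented in a hypothetical training table relative to a reference population.

```python
import pandas as pd

# Hypothetical training data; the column names and values are illustrative only.
train = pd.DataFrame({
    "age": [34, 71, 45, 62, 29, 58],
    "sex": ["F", "M", "F", "M", "F", "M"],
    "race_ethnicity": ["White", "White", "Black", "Hispanic", "White", "White"],
    "outcome": [0, 1, 0, 1, 0, 1],
})

# Made-up reference shares standing in for census-style population figures.
reference_share = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06}

# Compare each group's share of the training set with its reference share and
# flag groups represented at less than half their expected rate.
observed = train["race_ethnicity"].value_counts(normalize=True)
for group, expected in reference_share.items():
    got = observed.get(group, 0.0)
    flag = "UNDERREPRESENTED" if got < 0.5 * expected else "ok"
    print(f"{group:10s} expected {expected:.0%}  observed {got:.0%}  {flag}")
```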
Costa and colleagues look at several disparity-worsening pitfalls and how to avoid them. Here are excerpts.
1. Economic disparities may inadvertently bar low-income families from AI-augmented care.
Cost barriers to medical AI adoption are more nuanced than whether an institution can implement AI at all, the authors note. “Inevitably, there are AI algorithms with higher and lower levels of sophistication, infrastructures that are more and less robust, and security measures that are stronger and weaker.” More:
‘The AI system that institutions choose will be closely tied to their financial status. Of course, AI development will then leave behind under-resourced communities.’
2. The black box problem could discourage underserved populations.
A key component of trust among underprivileged populations is the patient’s comfort with the physician and the physician’s personal involvement in their care. “As such,” Costa and co-authors write, “we may see that the unexplainable black box of AI and ML—if not handled correctly—would certainly exacerbate these concerns.”
‘Lack of explanation for these impersonal, automated algorithms may further alienate vulnerable populations and widen health disparities.’
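The paper does not endorse a particular explainability technique, but one common way to make an opaque model at least partially explainable is to report which inputs most influenced its predictions. The sketch below uses scikit-learn’s permutation importance on an invented risk model; the feature names and data are illustrative, not drawn from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Invented patient features and outcomes; nothing here comes from the paper.
feature_names = ["age", "systolic_bp", "hba1c", "prior_admissions"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when one feature's
# values are shuffled? Larger drops mean the model leaned on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:18s} mean importance {score:.3f}")
```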
3. AI-aided care in end-of-life situations may compromise compassion for the less well-off.
While AI may assist in end-of-life decision-making, it risks “depersonalizing cases and lacking empathy when patients and their families need it the most,” the authors state. “Palliative care AI models risk imposing a ‘one-size-fits-all’ model of care based on a Western training dataset.”
‘Understudied populations and cultural minorities fall behind due to AI’s understanding—or lack thereof—of their values.’
4. Low- and middle-income countries face significant challenges in implementing AI.
Most AI systems are developed in high-income countries, and machine learning models reflect datasets from those populations. “When applying these technologies to LMICs,” Costa et al. note, “models must be updated to reflect the population to which the algorithm is applied.”
‘Failure to re-train models can reinforce and exacerbate existing health disparities.’
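The authors do not spell out how models should be “updated to reflect the population,” but one lightweight pattern is to compare a model trained elsewhere against one refit (or recalibrated) on locally collected data before deployment. The sketch below is a hypothetical illustration of that comparison; the cohorts, features and population shift are all simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical "source" cohort standing in for a high-income-country dataset.
X_source = rng.normal(size=(1000, 3))
y_source = (X_source[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
source_model = LogisticRegression().fit(X_source, y_source)

# Hypothetical local cohort whose feature-outcome relationship has shifted.
X_local = rng.normal(loc=0.8, size=(600, 3))
y_local = (0.5 * X_local[:, 0] + X_local[:, 1]
           + rng.normal(scale=0.5, size=600) > 1).astype(int)
X_fit, X_test, y_fit, y_test = train_test_split(
    X_local, y_local, test_size=0.5, random_state=0)

# Option A: apply the source model unchanged.
auc_source = roc_auc_score(y_test, source_model.predict_proba(X_test)[:, 1])

# Option B: refit on local data (recalibration or pooling are alternatives
# when local samples are scarce).
local_model = LogisticRegression().fit(X_fit, y_fit)
auc_local = roc_auc_score(y_test, local_model.predict_proba(X_test)[:, 1])

print(f"source model on local test set:  AUC {auc_source:.2f}")
print(f"locally refit model on test set: AUC {auc_local:.2f}")
```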
5. AI has great potential to improve care for vulnerable populations while bridging gaps in access.
However, it will fall short of those aims if healthcare professionals fail to ensure that data is diverse and algorithms are inclusive, the authors underscore. “The healthcare system hinges on trust to maintain patient confidentiality, recommend the optimal course of action and execute the plan appropriately,” they write. “Particularly in marginalized communities, the critical process of building and maintaining this trust has proved difficult even in the absence of AI.”
‘Collaboration among patients, physicians and AI developers is essential to achieve [trust] in an equitable manner.’
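The review calls for diverse data and inclusive algorithms without naming specific tooling. One concrete habit consistent with that call is reporting model performance broken out by patient subgroup rather than only in aggregate, so gaps are visible before deployment. A minimal sketch with simulated predictions and groups:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Simulated predictions and outcomes for patients from two made-up groups,
# with the model intentionally noisier for the minority group "B".
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=400, p=[0.8, 0.2]),
    "y_true": rng.integers(0, 2, size=400),
})
noise_scale = np.where(df["group"] == "A", 0.3, 0.8)
df["y_score"] = np.clip(df["y_true"] + rng.normal(scale=noise_scale), 0, 1)

# An aggregate metric can hide subgroup gaps; report both.
print(f"overall AUC: {roc_auc_score(df['y_true'], df['y_score']):.2f}")
for group, sub in df.groupby("group"):
    print(f"group {group} AUC: {roc_auc_score(sub['y_true'], sub['y_score']):.2f}")
```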
The paper is available in full for free.