First, do no AI: How not to help the world’s struggling healthcare systems
An international team of researchers is calling on healthcare AI proponents to be more mindful of the technology’s unsuitability across much of the developing world.
It’s not that resource-starved clinics and hospitals wouldn’t like to adopt AI and other emerging technologies to improve care, streamline workflows and optimize administration. It’s that they would first need help solving much more basic problems.
“Like previous medical technologies, AI tools risk reinforcing existing patterns of technological dependency if implemented without addressing fundamental health system requirements,” write senior researcher Amelia Fiske, PhD, of the Technical University of Munich and colleagues, in a paper published May 2 in BMC Global and Public Health. “[W]ho benefits from pushing an AI-first narrative for healthcare, and does this paradigm truly serve the interests of the most disadvantaged patients?”
To flesh out their proposal for a technology-can-wait response to global medical need, the authors take their cue from the late Paul Farmer, MD, PhD (1959-2022). Farmer was an infectious disease specialist, medical anthropologist and humanitarian who worked tirelessly in Africa and Haiti. Fiske and co-authors apply his “5S” framework for healthcare-based poverty relief to current concerns around AI and global health.
1. STAFF: Healthcare workers must be supported before AI can be considered.
When considering AI implementation, healthcare systems “must demonstrate their ability to recruit, develop, retain and support human healthcare workers,” Fiske and colleagues write. “Systems struggling with basic staffing should prioritize workforce development over AI investment.” More:
‘Only with a well-prepared, appropriately supported workforce can healthcare systems create conditions where AI tools might eventually enhance rather than undermine care delivery.’
2. STUFF: Basic resources and infrastructure must precede technological investment.
Discussions of AI in healthcare often focus on sophisticated computational infrastructure, the authors note. This approach, they maintain, misses a crucial point: “[M]any healthcare systems still struggle to maintain reliable access to basic medicines, supplies and equipment.”
‘This reality demands we reconsider the relative priority of AI investment against fundamental material needs.’
3. SPACES: Physical healthcare infrastructure cannot be leapfrogged by digital solutions.
AI enthusiasts sometimes suggest that digital health can transcend physical barriers to access. But those who proceed from such idealism tend to overlook “a fundamental reality: The vast majority of healthcare interventions—from preventive care to emergency services—require physical spaces for delivery.”
‘Digital spaces must be understood as extensions of, not replacements for, physical healthcare infrastructure.’
4. SYSTEMS: Strong healthcare governance must precede technological innovation.
Local healthcare workers and communities “understand the systemic constraints and opportunities within their contexts in ways that external actors cannot,” the authors point out.
‘A systems perspective demands an honest assessment of trade-offs: How do potential AI implementation costs compare to investments in basic healthcare infrastructure, essential medicines or workforce development?’
5. SUPPORT: Social infrastructure determines healthcare success more than technology.
“Consider an AI system designed to optimize medication adherence,” Fiske et al. urge. “Even if technically perfect, it cannot succeed where patients cannot afford prescribed medications, lack reliable transportation to pharmacies or work multiple jobs that make regular medication schedules impossible.”
‘The technology-first mindset fundamentally misunderstands how social conditions determine healthcare outcomes.’
As Paul Farmer’s work consistently demonstrated, the authors conclude, meaningful improvements in health outcomes “require political commitment and sustained investment in basic healthcare infrastructure.”
“Until healthcare systems can demonstrate sustainable capabilities across all dimensions of the 5S framework,” they add, “AI implementation runs the risk of not just being premature but potentially harmful.”
‘The measure of success in healthcare should not be the sophistication of our technology but the consistent delivery of quality care to all who need it.’
The paper is posted in full for free.