That’s no mere hospital—it’s a ‘complex adaptive system.’ Use GenAI accordingly.
From the perspective of management science, healthcare is a complex adaptive system marked by intricate feedback loops and overlapping interdependencies. As such, the sector demands caution from those introducing large-language AI into its tangled webs.
In a review article published June 16 in the International Journal of Electronic Healthcare, researchers highlight major pitfalls to avoid.
Martin Salzmann-Erikson, PhD, and colleagues at the University of Gävle in Sweden suggest dedicated managerial attention is essential to ensure that LLM innovations “augment rather than destabilize” the delivery of healthcare services. Here are three of the team’s main conclusions.
1. Unlike conventional tools that fit within pre-existing workflows, large-language AI actively reshapes the system in which it operates.
It influences professional roles, decision-making hierarchies and institutional structures, the authors explain. “Previous research on complex adaptive systems has shown that healthcare systems cannot be fully governed by static regulatory models,” Salzmann-Erikson and co-authors write, “as their adaptive nature requires iterative, flexible governance structures.” More:
‘As an emergent actor in this system, AI necessitates a similar reflexive approach, continuously adjusting to evolving interactions across professionals, patients and institutions.’
2. The complex adaptive system framework reminds us that seemingly beneficial innovations can produce unintended vulnerabilities.
This effect can be especially pronounced in complex, interdependent environments like hospitals and healthcare systems, the authors suggest.
“Imposing rigid, pre-emptive regulations entails the risk of constraining AI’s adaptability, while having uncontrolled AI adoption entails the risk of destabilizing critical decision-making structures,” Salzmann-Erikson and co-authors write.
‘Taking a reflexive governance approach—one that continuously monitors and recalibrates AI’s evolving role—is essential to maintaining system resilience while allowing AI to develop in a responsible manner.’
3. While ChatGPT and other generative AI models hold transformative potential, their impact will not be determined solely by technological advancements.
Also influencing system outcomes will be the frameworks in which the models are embedded.
‘By embracing AI with caution—balancing innovation with ethical oversight, and adaptability with patient safety—healthcare institutions can harness AI’s transformative power while safeguarding the fundamental principles that define high-quality, equitable care.’
The paper is posted in full for free.
——————————————
- In other research news:
- Northwestern: Fitness trackers for people with obesity miss the mark. This algorithm will fix that.
- University of Texas: Doctors need better guidance on AI
- University of Illinois: Machine learning method helps bring diagnostic testing out of the lab
- Funding:
- Healthcare referrals are where patients get lost. Tennr raises $101M to bring the visibility our system desperately needs
- Healthcare technology startup Commure raises $200M ahead of IPO
- Nabla lands $70M to build AI agents in healthcare settings
- Sword Health raises $40M at a $4B valuation and unveils always-on AI mental health solution to address the global mental health crisis
- Parallel Bio secures $21M in Series A to advance human-first drug discovery
- Robotic pet company Tombot secures $6.1M Series A funding to support groundbreaking health and senior care product line
- RevelAi Health secures $3.1M seed funding to scale AI care coordination for musculoskeletal health
- AI healthcare funding surges as real-world rollouts hit pharmacies, hospitals and providers