That’s no mere hospital—it’s a ‘complex adaptive system.’ Use GenAI accordingly.

From the perspective of management science, healthcare is a complex adaptive system marked by intricate feedback loops and overlapping interdependencies. As such, the sector demands caution from those introducing large-language AI into its tangled webs.

In a review article published June 16 in the International Journal of Electronic Healthcare, researchers highlight the major pitfalls to avoid.

Martin Salzmann-Erikson, PhD, and colleagues at the University of Gävle in Sweden suggest dedicated managerial attention is essential to ensure that LLM innovations “augment rather than destabilize” the delivery of healthcare services. Here are three of the team’s main conclusions. 

1. Unlike conventional tools that fit within pre-existing workflows, large-language AI actively reshapes the system in which it operates.

It influences professional roles, decision-making hierarchies and institutional structures, the authors explain. “Previous research on complex adaptive systems has shown that healthcare systems cannot be fully governed by static regulatory models,” Salzmann-Erikson and co-authors write, “as their adaptive nature requires iterative, flexible governance structures.” More: 

‘As an emergent actor in this system, AI necessitates a similar reflexive approach, continuously adjusting to evolving interactions across professionals, patients and institutions.’  

2. The complex adaptive system framework reminds us that seemingly beneficial innovations can produce unintended vulnerabilities.

This effect can be especially pronounced in complex, interdependent environments like hospitals and healthcare systems, the authors suggest. 

“Imposing rigid, pre-emptive regulations entails the risk of constraining AI’s adaptability, while having uncontrolled AI adoption entails the risk of destabilizing critical decision-making structures,” Salzmann-Erikson and co-authors write. 

‘Taking a reflexive governance approach—one that continuously monitors and recalibrates AI’s evolving role—is essential to maintaining system resilience while allowing AI to develop in a responsible manner.’

3. While ChatGPT and other generative AI models hold transformative potential, their impact will not be determined solely by technological advancements.

Also influencing system outcomes will be the frameworks in which the models are embedded.

‘By embracing AI with caution—balancing innovation with ethical oversight, and adaptability with patient safety—healthcare institutions can harness AI’s transformative power while safeguarding the fundamental principles that define high-quality, equitable care.’

The paper is posted in full for free.

——————————————

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.