Researchers identify ‘universal determinants of AI acceptance’ among healthcare workers
What attributes tend to nudge clinicians toward accepting AI into their work lives? Several, of course. But the most broadly determinative boil down to just two.
One is the extent to which the healthcare worker believes that using the AI will help him or her attain gains in job performance. The other is the degree to which a clinician believes that an organizational and technical infrastructure exists to support use of the AI model in question.
The findings are from a scoping literature review conducted at Georgia State University and posted April 15 in BMJ Open. Catherine Scipion, MD, MPH, Jalayne Arias, JD, and colleagues arrived at the observations after analyzing 46 relevant studies published between 2010 and 2023.
The consistent prominence of the two factors above—labeled “performance expectancy” and “facilitating conditions”—across diverse medical specialties, skill levels and care contexts suggests these factors “serve as universal determinants of AI acceptance,” the authors write. “This reflects clinicians’ confidence in AI’s efficiency and accuracy, as well as the necessity of training and support for its integration into clinical practice regardless of the context.”
Here are five more takeaways from the study report.
1. Patient-clinician dynamics are a key concern over AI in primary and secondary care but are generally not seen as potential pain points in tertiary care.
Primary care clinicians harbor apprehension over relationship compromises, wary that AI “could diminish direct patient interactions, potentially eroding the humanistic aspects of care and compromising healthcare empathy,” Scipion and co-authors report.
Secondary or specialized care generally shares this worry but shifts attention toward “the interaction between clinicians and AI systems themselves, particularly regarding trustworthiness—defined as the system’s perceived transparency, consistency and alignment with clinical reasoning.”
‘Tertiary care clinicians worry about loss of autonomy, over-reliance on AI and skill devaluation.’
2. Legal and ethical concerns about AI vary by care setting.
Primary care clinicians prioritize patient safety and avoidance of potential AI-related harms, such as misdiagnosis, while medical specialties tend to emphasize data privacy and security risks.
‘Tertiary care professionals tend to focus on accountability, liability and regulatory gaps in AI-driven diagnostics.’
3. Clinician hesitancy over AI adoption is prominent in primary and tertiary care but largely absent in secondary care.
The lack of clear medical liability regulations governing AI-assisted diagnostics and autonomous decision-making only “exacerbates these concerns, leading to clinician hesitancy in fully integrating AI into high-stakes medical practice,” the authors write.
“This apprehension is underscored by the potential for clinicians to become ‘liability sinks’ for AI-related errors, assuming personal accountability for adverse outcomes even when the fault lies within the AI system or organizational processes.” Moreover:
‘Primary care clinicians fear job displacement as AI automates routine decision-making.’
4. In specialized care particularly, clinician involvement in model development, implementation and validation is a key facilitator of AI acceptance.
In the reviewed studies, technical features in tertiary care “were primarily linked to system design quality and interface interoperability,” Scipion and colleagues note. At the same time:
‘Concerns about AI conclusiveness—including robustness and reliance on evidence-based recommendations—are consistent across all healthcare settings, serving as both a critical enabler of AI adoption and a source of clinician skepticism.’
5. Established frameworks need refining to better incorporate context-specific drivers of AI acceptance and use.
Future research, the authors comment, should address acceptance and use gaps by investigating “both universal and context-specific barriers and expanding existing frameworks to better reflect the complexities of AI adoption in diverse healthcare settings.” As a next step, they suggest, researchers could:
- Conduct systematic reviews and meta-analyses to rigorously assess universal determinants (e.g., performance expectancy, facilitating conditions, AI conclusiveness) and their interactions across healthcare settings.
- Undertake primary mixed-method studies in low- and middle-income countries to investigate policy, sociocultural and economic drivers and their intersection with universal determinants.
- Employ mixed-method research to refine or expand theoretical frameworks, integrating emerging factors such as clinician hesitancy, involvement in AI design, relationship dynamics, ethical–legal considerations, AI conclusiveness and technical features.
Among the research limitations Scipion et al. acknowledge is the scant representation of low- and middle-income countries in the literature on medical AI. This lack, they remark, “restricts understanding of context-specific influences, including policy, sociocultural and economic factors.”
‘Addressing these and other gaps in future research will help generate robust, context-sensitive evidence to inform strategies for effective and equitable AI adoption in healthcare worldwide.’
The study is posted in full for free.