If healthcare AI is to flourish outside of academic research settings and industry R&D departments, it will need to win over its most difficult-to-impress audience: healthcare workers in hospitals. And that’s not going to happen unless these end users are offered three things: early exposure to algorithm development, needs-adjusted training and adequate operational infrastructure. (The latter includes IT resources, technical support, internet access and the like.)

The assertions come from a literature review conducted at Germany’s RWTH Aachen University and published this month in NPJ Digital Medicine. The authors reviewed 42 peer-reviewed articles.

Gauging end-user acceptability according to the Unified Theory of Acceptance and Use of Technology (UTAUT), the team identified a variety of “facilitating and hindering” factors affecting AI acceptance in the hospital setting. Among the standout themes to emerge from the exercise, along with key researcher quotes:

- Patient safety is rightly critical to this crowd. “Although it can be stated that AI-based prediction systems have shown to result in lower error rates than traditional systems, it may be argued that systems taking over simple tasks are deemed more reliable and trustworthy and are therefore more widely accepted than AI-based systems operating on complex tasks such as surgical robots.”
- Human factors matter. “More experienced healthcare professionals tend to trust their knowledge and experience more than an AI system. Consequently, they might override the system’s recommendations and make their own decisions based on their personal judgement.”
- Time isn’t infinite. “Physicians might accept an AI system such as a clinical decision support mechanism if they witness that it might reduce their workload and assist them. In order to facilitate the acceptance and thus implementation of AI systems in clinical settings, it is of utmost importance to integrate these systems into clinical routines and workflows, thereby allowing the AI to reduce the workload as well as the time consumption.”
- Medical specialties are unequally inclined to embrace AI in clinical practice. “AI’s establishment in radiology and relative rareness in many other areas of medicine raises the question of whether radiologists are more technically inclined and specialize on the basis of this enhanced interest—or whether innovations of AI in radiology are more easily and better integrated into existing routines and are therefore more widely established and accepted.”
- Reasons for limited acceptance of AI among healthcare professionals are many and varied. “Personal fears related to a loss of professional autonomy, lack of integration in clinical workflow and routines and loss of patient contact are reported. Also, technical reservations such as unintuitive user interfaces and technical limitations such as the unavailability of strong internet connections impede comprehensive usage and acceptance of AI.”
The authors conclude that, to maximize acceptance of AI among hospital-based healthcare workers, leadership must acknowledge that general resistance is understandable while identifying the specific pain points in play. They write: “Once the causes of hesitation are known and personal fears and concerns are recognized, appropriate interventions such as training, reliability of AI systems and their ease of use may aid in overcoming the indecisiveness to accept AI in order to allow users to be keen, satisfied and enthusiastic about the technologies.”
The study is available in full for free.
Buzzworthy developments of the past few days.

- Generative AI’s economic potential is somewhere between immense and unlimited. Yes, of course. But how big is it, really? Big enough that some analysts expect it to augment the impact of AI overall to the tune of “trillions of dollars of additional value each year.” That’s from a little consulting shop known by one name: McKinsey. The global firm’s digital unit fleshes out the sky’s-the-limit forecast in an in-depth report released June 14.
- One of the hardest parts of conducting clinical trials is enrolling the right patients. AI should be able to help with that. The Center for Connected Medicine outlines the challenges, chances and researcher readiness for an assist from AI in a new report presenting input from 58 healthcare executives. KLAS Research had a hand in compiling the report, which is available in full for free.
- The nonprofit Partnership on AI has enlisted Meta and Microsoft to help advance responsible practices in generative AI. The specific work to which the Big Tech twosome will contribute is PAI’s framework for collective action called Responsible Practices for Synthetic Media. In this usage, “synthetic” is a smoother way to say “AI-generated.” Announcement here.
- A like-minded effort is underway at Stanford. There the medical school is working with the Stanford Institute for Human-Centered AI to launch and run an initiative called RAISE-Health (for Responsible AI for Safe and Equitable Health). Med-school dean Lloyd Minor, MD, says the project is needed because AI “has the potential to impact every aspect of health and medicine.”
- Organizations are lately finding that deploying AI requires staffing up or retraining for AI-specific skills. They’re also buying into the promise of AI but struggling to scale the technology across their respective enterprises. Those are two key findings in an MIT Technology Review Insights report issued in conjunction with JPMorgan Chase. Another: Some 73% of companies worth $500 billion or better name “finding use cases” as their highest hurdle to clear on the road to AI deployment. Read the report.
- In cancer care, AI is a welcome source of valuable insights. But please leave the actual decision-making to humans. That’s a paraphrase of a conclusion arrived at by two strategic researchers with Cardinal Health. Summarizing their findings June 12 in Pharmacy Times, the authors describe a project in which a large community practice armed with augmented intelligence realized an 18% reduction in monthly ER visits, a 13% decline in quarterly hospital admissions and a potential annual savings of $2.8 million. Summary article here.
- Robotic arms are more precise than the human hand, and AI will ultimately take over this role. However, it’s more challenging to train a robot surgeon than a human one. That’s the wry observation of investor and entrepreneur Robert Strzelecki in an opinion piece published June 9 in Forbes. “We have a fantastic tool in our hands,” he writes. “As with any device, we can do a lot of good with AI but also a lot of harm. Its further fate depends on us—people.”
- In a time of staffing shortages across healthcare, calculating rates to pay shift workers can’t be easy. A healthtech startup is looking to help providers with the challenge while carving out a niche for itself. The company, CareRev of Venice, Calif., announced what it calls the first AI-based shift pricing system June 13. According to the company, the product can cut labor costs by 18% and fill shifts 50% faster.
- The American Medical Association is calling for tight regulatory oversight of AI as used by payers for prior authorizations and claims reviews. In announcing the focused advocacy, AMA cited one insurer’s boast about using the technology to make “fast, efficient and streamlined coverage decisions.” Meanwhile the association has declared its intent to offer AI guidelines for patients as well as providers. AMA trustee Alexander Ding, MD: “We are entering this brave new world with our eyes wide open and our minds engaged.”