If putting an AI plan in place were a team sport, the Payers would be outpacing the Providers quite handily: 25% of healthcare payers but only 15% of providers had an established AI strategy in 2024. The finding comes from a survey of 150 executives from both spheres, conducted by Bain & Company with KLAS Research. Bain posted an analysis of the findings Sep. 17. Other key findings in the report:

- 75% of providers and payers say they increased IT spending over the past year, a trend the report’s authors expect to continue.
- 70% of respondents were directly affected by the cyberattack on Change Healthcare and are spending more on cybersecurity.
- The 15% rate of strategy adoption among provider respondents may seem modest, but it represents a 10-percentage-point jump over the 5% providers notched in last year’s survey.
The authors offer a handful of informative observations, including these five:

1. Post-pandemic, providers and payers are inclined to experiment with technology. “Consistent with our findings over the past several years, providers and payers place a premium on technology; in our survey of 150 U.S. providers and payers, about 75% of respondents cite increased IT investments over the past year,” the authors comment. “We expect this trend to continue.” More: “Provider organizations emphasize digital transformation aimed at optimizing operations and reducing clinician burden. Payer IT efforts seek to improve payments via risk adjustment and quality programs, and they seek to lower medical loss ratios by optimizing payment integrity.”
2. Providers and payers alike are exploring AI-supported solutions to enhance decision-making, improve operational efficiency, and deliver care and engagement. Providers have made strides over the past year, as evidenced by the sector’s AI strategy growth from 5% in 2023 to 15% in 2024. “Payers are at a roughly equivalent place in terms of AI strategy definition when one controls for organization size. And a healthy majority of both types of organizations are optimistic about implementing generative AI.”
3. Both providers and payers are optimistic about implementing generative AI. Providers have begun to pilot generative AI in clinical applications, including clinical documentation and decision support tools, the authors note. Pilots involving ambient clinical documentation “have been particularly successful in reducing clinician administrative burden and improving the patient experience,” they add. “Payers cite contact center and member chatbot support as the first generative AI use cases gaining traction. These deployments aim to mitigate the impact of contact center labor pressures and can help raise staff skills and deliver more tailored, empathetic communications to members.”
4. Certain barriers continue to hinder widespread adoption of generative AI by payers as well as providers. Both providers and payers cite regulatory and legal considerations, cost, and accuracy shortcomings such as AI hallucinations as the main hurdles to implementation, the analysts found. “Additionally, there is a growing need for robust governance frameworks, transparency, and accountability mechanisms to ensure responsible and ethical use of AI in healthcare.”
5. Cybersecurity concerns and rising threats are expected to shape investment choices and vendor selection. Following the Change Healthcare cybersecurity incident, organizations are strengthening their IT infrastructure against threats, the authors report. “Clinical workflow optimization remains a high priority as providers seek to streamline processes, reduce administrative burden and increase utilization of labor, capital equipment and facilities. Within this category, patient flow solutions stand out.”
“AI tools have great potential to improve outcomes in the four quadrants of healthcare’s quadruple aims: enhancing the patient experience, improving population health, reducing cost and improving the provider experience,” Bain and KLAS conclude. “In the years ahead, AI looks set to deliver value on each of these fronts, though the journey will be gradual.” Read the rest.
Clinical Pioneer University of Iowa Health Care Rolls Out Nabla to All Clinicians - UI Health Care, a leader in clinical innovation, partnered with Nabla to alleviate administrative burdens and enhance provider well-being by optimizing clinical documentation processes. During a five-week pilot program, clinicians reported a 26% reduction in burnout. Building on this success, the ambient AI assistant will now be deployed to over 3,000 healthcare providers, including nurses, with customized features specifically designed to support nursing workflows.
Buzzworthy developments of the past few days.

- About that $30B invested in healthcare AI over the past three years (and $60B over the last 10): How big a difference have all those dollars made in actual healthcare operations? The venture capital firm Flare Capital Partners asks the question in a report posted Sep. 9. The answer may seem a bit of a dodge, but at least it’s honest. More capital “does not universally equate to more value creation,” the report’s authors point out. “While the clinical AI category has been the highest-funded category, we believe near-term AI budgets will prioritize financial, patient engagement and operational throughput value propositions that have historically yielded more tangible ROI.” Full report here.
- Speaking of big bucks, how does $19.9 trillion with a T strike you? That’s how much AI will kick into the global economy from now through the end of the decade, according to IDC. Analysts there also believe every dollar spent on AI will push $4.60 into the global economy. IDC senior researcher Lapo Fioretti says that, by automating routine tasks and unlocking new efficiencies, AI will have “profound economic consequences.” Watch for the technology, Fioretti suggests, to reshape industries, create new markets and alter the competitive landscape. The full report commands a cool $7,500. IDC offers a little taste here.
- The use of clinical AI could have some unintended consequences for patient safety. Better to think through the potential now than to wait for something to go wrong and have to respond with haste. So suggests Andrew Gonzalez, MD, JD, MPH, a research scientist with the Regenstrief Institute in Indianapolis. “[W]e want to ensure that institutions have a framework for identifying problems as they come up, because some problems are going to be exacerbated,” Gonzalez says in a new video interview. “But there are also going to be wholly new problems that haven’t previously been issues until you use an AI-based system.”
- Even more concerning: patient safety issues as intended effects of AI. Three Harvard healthcare thinkers flesh out the terrifying scenario in an opinion piece published by The Hill Sep. 16. “To keep pace with the joint threat of AI and genetic engineering, we cannot afford to wait for the emergence of an engineered pathogen,” the authors write. “The technological advances in genomics and AI made [over the past 20 years] could unleash novel engineered pathogens that take millions of lives. Collaboration toward unified action is needed now.”
- AI has helped a Canadian hospital cut unexpected inpatient deaths by 26% over an 18-month stretch. The tech—a homegrown product called Chartwatch—did it by continuously monitoring vital signs, lab tests and updates in the EMR. The software makes a prediction about the patient’s improvement, stability or decline every hour. “[P]atients’ conditions are flagged earlier, the nurse is alerted earlier, and interventions are put in earlier,” clinical nurse educator Shirley Bell at St. Michael’s Hospital in Toronto tells the Canadian Broadcasting Corp. “It’s not replacing the nurse at the bedside; it’s actually enhancing your nursing care.”
- CMS is hungry for intel on how U.S. healthcare can use AI to boost care quality and improve patient outcomes. The agency has posted a request for information, open to all serious stakeholders. Those who impress CMS will get a chance to show their stuff in quarterly “AI Demo Days,” which will begin next month. In the process of sharing their know-how, the selectees will help CMS strengthen its service to healthcare. The deadline for RFI responses is Oct. 7. Learn more here.
- Ameca, the gen AI-powered humanoid robot, made a decent first impression at a healthcare conference this month. That’s according to coverage of the European Respiratory Congress in the American Journal of Managed Care. A product of Engineered Arts Limited in the U.K., Ameca runs ChatGPT 4 combined with natural language processing functionality. Speaking for itself, the creation more or less acknowledged that it’s not capable of taking over the job of any human healthcare worker. “Regulating artificial intelligence,” Ameca told attendees, “involves setting standards for ethical use, ensuring transparency and maintaining accountability to balance innovation and societal well-being.”
- Forming an AI governance group? Don’t forget the lawyers. That’s free legal advice from healthcare-specialized attorneys at Sheppard Mullin. “In considering how to integrate AI, healthcare organizations must be mindful of the related risks, including bias, patient privacy and consent, data security, and an evolving legal and regulatory landscape,” the authors write in a Sep. 16 blog post. “Healthcare organizations should adopt best practices to ensure their use of AI remains reliable, safe and legally compliant.” Just because they would say that doesn’t mean it’s not good advice.