News You Need to Know Today

Healthcare AI leadership survey | Healthcare AI newsmakers

Wednesday, January 24, 2024


Artificial intelligence adoption

Healthcare leaders feeling their way into, through AI adoption

More than 90% of newly surveyed healthcare leaders expect AI adoption will help make or break their institution’s prospects for success over the long haul, meaning five years out and beyond.

However, many seem to believe time is on their side: Only one-third of the same cohort anticipate AI integration will help determine success levels over the next 12 months.

Meanwhile, some 70% are splitting the difference. They expect AI to play a decisive role in the implied equation—successful deployment vs. missed opportunity—over the next three to four years.

The findings come from the 32nd running of Sermo’s Barometer survey. The physician networking platform conducted the legwork from Dec. 15 to Jan. 2, eliciting responses on priorities and challenges from an even 100 U.S. healthcare executives, directors and managers working at hospitals, health systems and other types of provider organizations.

Here are the top—and bottom—responses to some key questions from the AI section of the survey report.

How would you describe your professional engagement with AI and machine learning over the past 12 months?

  • 45% (tie): “I have been following AI advancements through publications and news.” / “I have explored AI applications in a specific healthcare subfield such as finance or marketing.”
    • 16%: “I have taken online courses or training in AI and machine learning.”

How do you feel your organization is adapting to the opportunities presented by emerging AI applications?

  • 42%: Adequately, but there is a need for more protective measures.
    • 4%: Very successfully, with a clear and effective strategy in place.

To what degree is your organization currently using AI and machine learning in the following areas?

  • 23% (tie): Robotic surgery assistance / EHR management
    • 6%: Triage prioritization

Five years from now, to what degree do you anticipate your organization will be using AI for these purposes?

  • 71% (tie): Predictive analytics / EHR management
    • 47%: Human resources uses

For which of the following technology challenges do you feel best equipped?

  • 50%: Compliance with regulatory requirements
    • 14%: Interoperability with other healthcare facilities and networks

Sermo also asked the survey participants about shifts in care settings and challenges with staffing. Full results here, news release here.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Every so often, one patient’s tissue sample contaminates another patient’s microscope slides. Hey, stuff happens. And when it does, it can throw pathology AI for a loop. Researchers at Northwestern unravel the problem in a study published in Modern Pathology. Top takeaway: AI that works flawlessly in the lab may flub up in the real world. And when it does, it demonstrates the indispensability of human expertise. In the words of perinatal pathologist Jeffery Goldstein, MD, PhD, senior author of the study: “Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear—and AI companies hope—that the computers are coming for our jobs. Not yet.” Scientific paper here, Northwestern news item here.
     
  • The typical lag between raw scientific discovery and patient-ready clinical indication is around 17 years. Cleveland Clinic and IBM joined forces in 2021 to try to shorten the waits. They called their collaboration the “Discovery Accelerator.” This week the pair announced the first fruit to come of the project. It’s a blueprint, of sorts, for using AI to “home in on what processes are critical to target with immunotherapy treatments” for cancer. Researchers from both organizations describe the accomplishment in a scientific paper here. Cleveland Clinic’s news office nicely summarizes it in lay terms here.
     
  • ‘Nurses don’t want AI.’ That’s just one person’s opinion, but the person is a union official who likely speaks for many. The speaker, Michelle Mahon of National Nurses United, offers the contrarian viewpoint in a San Francisco Examiner article that’s largely sympathetic to nurses helping to develop a homegrown AI model at UCSF Health. One of the developers is Kay Burke, RN, MBA, the institution’s chief nursing informatics officer. “If I have an [AI] model that tells me my patient actually might deteriorate because the risk factors are there,” Burke tells the newspaper, “then I can be more prepared and proactive and taking care of my patient.” Meanwhile, for Mahon, AI is “just a temporary fix for systemic issues that go beyond making room placement or HR systems more efficient.” Read the whole thing.
     
  • Last month a highly secretive meeting was held in Cambridge, Massachusetts. How closed-door was it? Enough that organizers invoked the Chatham House Rule. This means participants were free to use the information to which they were privy during the daylong get-together, but “neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.” The topic of the meeting was none other than the regulation of AI in healthcare. Condensed—and nameless—meeting minutes here. Shh.
     
  • Teenagers think about AI in healthcare too. Exhibit A: Sonia Rao, a junior at Clovis North High School in Fresno, California. When she’s not practicing her fencing skills or serving as concertmaster of the school orchestra, Sonia may be found snapping photos, playing chess, traveling—or, evidently, writing thoughtful commentaries on her other interests. The Los Angeles Times’s High School Insider presents her worthwhile thoughts on healthcare AI here.
     
  • This tech-sector veteran isn’t throwing his former colleagues under any buses. He just learned from mistakes made, presumably by himself as well as his peers, when he worked at Nvidia and Ola. The watchful brainstormer, Gaurav Agarwal, just announced the launch of his new company, RagaAI, on a $4.7 million seed funding round. Well, on that plus a plan to turn the software loose so it can autonomously detect, diagnose and debug any glitches dogging AI. In announcing the launch, Agarwal says the product is already working for several Fortune 500 companies. (No mention of his designs on healthcare. Yet.)
     
  • Healthcare AI outfitter John Snow Labs says its open-source Spark NLP library has been downloaded a mind-boggling 82 million times. For more on this and other milestones the Delaware shop has passed as of this month, see here.
     
  • The WHO has released granular guidance on ethical and governance considerations around healthcare AI. The relevant document focuses on large multi-modal models, a category that includes but isn’t limited to large language models. Whether you love or loathe the World Health Organization, you can face the 95-page beast here.
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand