News You Need to Know Today

GenAI is growing on doctors | Industry watcher’s digest | Partner news

Wednesday, April 17, 2024



Physicians are embracing clinical GenAI—in theory, at least

More than two-thirds of U.S. physicians have changed their minds about generative AI over the past year, and in doing so they have grown more trusting of the technology’s potential to help improve healthcare.

That’s according to a survey of 100 medical doctors who work in a large hospital or health system, see patients and use clinical decision-support software. The survey was conducted online in February by Wolters Kluwer Health.

The researchers further found that 40% of U.S. physicians are ready to use “point-of-care GenAI” as long as they’re confident in the specific tool at hand for the purpose.

In reporting the results, Wolters Kluwer Health offers four key observations:

  1. Saving time is an eagerly anticipated benefit among physicians who would be willing to use GenAI at the point of care.
     
    • More than two-thirds of physicians (68%) say GenAI can save time by quickly searching medical literature.
       
    • 59% say GenAI can save time by summarizing data about a patient in the electronic health record (EHR).
       
    • More than half (54%) believe GenAI will save them 20% or more time looking for data to assist in clinical decision-making.
       
  2. Physicians view GenAI as a tool that can help optimize the work of care teams.
     
    • 4 out of 5 physicians (81%) say GenAI can improve care team interactions with patients.
       
    • More than half say GenAI can support continuing education (57%) and day-to-day tasks (56%).
       
    • Almost half (46%) say GenAI can coordinate scheduling across the care team to facilitate timely care.
       
  3. The most important criterion for physicians is content source transparency.
     
    • For the majority of physicians (58%), the No. 1 most important factor when selecting a GenAI tool is knowing the content it is trained on was created by medical professionals.
       
    • Before using GenAI in clinical decisions, 9 out of 10 physicians (91%) would have to know that its source materials were created by doctors and medical experts.
       
    • 89% would be more likely to use GenAI in clinical decision-making if the vendor were transparent about where the information came from, who created it and how it was sourced.
       
    • 76% would be more comfortable using GenAI from established vendors.
       
  4. A gap persists between physician preparedness and patient readiness for GenAI in healthcare.
     
    • Compared to results from Wolters Kluwer Health’s 2023 consumer survey “Generative AI in Healthcare: Gaining Consumer Trust,” physicians are more ready for GenAI in healthcare than their patients.
       
    • The majority (66%) of physicians believe their patients would be confident in their results if they knew their provider was using GenAI to make decisions about their care, but almost half (48%) of Americans would not be confident in the results.
       
    • While only 1 out of 5 physicians believe patients would be concerned about the use of GenAI in a diagnosis, most Americans (80%) say they would be concerned.

“Physicians are open to using generative AI in a clinical setting provided that applications are useful and trustworthy,” comments Peter Bonis, MD, Wolters Kluwer Health’s chief medical officer, in a news release. “The source of content and transparency are key considerations.”

Read the release here and view an infographic here.

 

 


The Latest from our Partners

Bayer Radiology uses Activeloop's Database for AI to pioneer medical GenAI workflows. Bayer Radiology collaborated with Activeloop to make its radiological data AI-ready faster. Together, the partners developed a 'chat with biomedical data' solution that allows users to query X-rays with natural language. The collaboration significantly reduced data preparation time, enabling efficient AI model training. The Intel® Rise Program further bolstered Bayer Radiology’s collaboration with Activeloop, with Intel® technology used at multiple stages of the project, including feature extraction and processing of large batches of data. For more details on how Bayer Radiology is pioneering GenAI workflows in healthcare, read more.

How to Build a Pill Identifier GenAI app with Large Language Models and Computer Vision. About 1 in 20 medications is administered incorrectly because of mix-ups. Learn how you can combine LLMs and computer vision models such as Segment Anything and YOLOv8 with Activeloop Deep Lake and LlamaIndex to identify and chat with pills; a rough sketch of that kind of pipeline appears below. The Activeloop team tested advanced retrieval strategies and benchmarked them so you can pick the most appropriate one for your multimodal AI use case. Find the GitHub repository and the article here.
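For orientation only, here is a minimal sketch of how such a pipeline can fit together in Python: detect pills with YOLOv8, refine each detection into a mask with Segment Anything, then index plain-text descriptions with LlamaIndex for natural-language Q&A. This is not the Activeloop implementation; the checkpoints, file names and the hand-rolled description step are assumptions, and the Deep Lake vector store used in the article is omitted for brevity (see the linked repository for the full version).

    # Sketch: YOLOv8 detection + SAM masking + LlamaIndex Q&A over pill metadata.
    # Requires: ultralytics, segment-anything, llama-index, opencv-python, plus an
    # OPENAI_API_KEY for the default LlamaIndex LLM. File names are placeholders.
    import cv2
    from ultralytics import YOLO
    from segment_anything import sam_model_registry, SamPredictor
    from llama_index.core import Document, VectorStoreIndex

    # 1. Detect candidate pills (generic COCO weights stand in for a pill model).
    image_rgb = cv2.cvtColor(cv2.imread("pills.jpg"), cv2.COLOR_BGR2RGB)
    detections = YOLO("yolov8n.pt")(image_rgb)[0]

    # 2. Refine each bounding box into a pixel mask with Segment Anything.
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(image_rgb)

    documents = []
    for i, box in enumerate(detections.boxes.xyxy.cpu().numpy()):
        masks, _, _ = predictor.predict(box=box, multimask_output=False)
        x0, y0, x1, y1 = box.astype(int)
        mean_rgb = image_rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0).astype(int)
        # A real app would describe each pill via an imprint/NDC lookup or a
        # vision-language model; this toy description is size/color metadata only.
        documents.append(Document(
            text=f"Pill {i}: box {box.tolist()}, mask area {int(masks[0].sum())} px, "
                 f"mean RGB {mean_rgb.tolist()}."
        ))

    # 3. Index the descriptions and ask questions in natural language.
    index = VectorStoreIndex.from_documents(documents)
    print(index.as_query_engine().query("Which detected pill is the largest?"))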


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • However handily AI’s pros outweigh its cons, the technology surely does pose some serious national security risks. Four senators are concerned enough that they’ve sent a letter to congressional colleagues spelling out ways to manage myriad “extreme” risks across four realms: biological, chemical, cyber and nuclear weapons. In U.S. healthcare, for example, the overlap between AI and biotechnology could lead to “the deliberate and incidental creation” of novel public health risks. On that point the senators quote a recent analysis by the Department of Homeland Security. The bipartisan group of four comprises Jack Reed (D-Rhode Island), Angus King (I-Maine), Mitt Romney (R-Utah) and Jerry Moran (R-Kansas). Letter here, TV news summary here.
     
  • Healthcare AI that adds to users’ workloads isn’t worth the trouble. Heading off that scenario requires envisioning workflows from starting line to endpoints. “You might be solving a problem on the clinical side, but then you might create a problem on the administrative side where you have extensive support burden,” explains Sunil Dadlani, MBA, the CIO and digital transformation officer at Atlantic Health System in New Jersey. “Or you might solve a problem on the operation side, but it creates extensive in-basket messages on the physician side—and you create more frustration and more burnout.” Dadlani makes the remarks in an interview with the American Medical Association. Audio recording and textual summary here.
     
  • Broad adoption of medical GenAI could well prove harmful to patients and, with them, U.S. healthcare as a whole. The good news is, there’s still time to address the various concerns that spur wary AI watchers to issue such advance warnings. The latest to sound that alarm, Andrew Borkowski, chief AI officer at the Department of Veterans Affairs’ largest health system, spoke on the matter with TechCrunch. Relying too heavily on GenAI for healthcare, he points out, “could lead to misdiagnoses, inappropriate treatments or even life-threatening situations.” Read the article.
     
  • Want to avoid planting AI time bombs in your organization? Then you’d better build in scalability and interoperability, invest in continuous education and training, develop a patient-centric approach and take seven other unskippable steps. All 10 come as recommendations from subject matter expert Brian Spisak, PhD, via commentary posted April 16 in HIMSS Media’s Healthcare IT News. Read the piece.
     
  • Four of the top 11 hospitals in the world are on Mayo Clinic Platform. The institution does this little bit of bragging by way of informing the public that some heavy hitters—eight institutions across three continents—have newly signed on to its digital-health platform. Mayo Clinic president and CEO Gianrico Farrugia, MD, says the new and existing partnerships will help produce “more innovation, more collaboration, more answers and more hope for those in need as we continue to build something that has never existed before in healthcare: a platform with truly global reach.” For more, including the identities of the latest Mayo Clinic Platform joiners, click here.
     
  • Two-state, 16-hospital OSF HealthCare is bringing in a conversational AI supplier to help train primary care providers. The vendor, Wyoming-based Brand Engagement Network, aka “Ben,” will dispatch its AI assistants to help PCPs sharpen their skills in, primarily, clinical assessments and diagnostic documentation. OSF’s catchment area spans Illinois and Michigan. Announcement here.
     
  • Elsevier Health has rolled out a GenAI chatbot for nursing students. Called Sherpath AI, the toolkit promises to help users navigate courses, prepare for exams and make the transition from classroom seats to clinical settings. Announcement.
     
  • ChatGPT gets co-authorship credit in a new book aimed at healthcare people. Working ably with human author Robert Pearl, MD, former longtime CEO of Kaiser Permanente and present Stanford prof, the bot helps describe ways AI-equipped patients and doctors can “take back control of American medicine.” The result is ChatGPT, MD. Details here.
     
  • Recent research roundup:
     
  • Funding news of note:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand