News You Need to Know Today

AI training for residents and fellows | Partner news | AI newsmakers: DeepSeek, AdvaMed, US Army, more

Wednesday, January 29, 2025

Resident & Fellow Training with AI

5 ways GenAI can enhance graduate medical education

Generative AI has a bright future in medical education. That goes not only for medical schools but also for postgraduate settings in which residents and fellows do most of their learning while also caring for patients.

“A core tenet of graduate medical education, or GME, is ‘graded authority and responsibility,’ where trainees progressively gain autonomy until they achieve the skills to practice independently,” several advanced GME trainees point out in a paper published this month in Frontiers in Medicine. “Additionally, trainees are expected to become ‘physician scholars.’” 

What does GenAI have to do with any of that? As it turns out, plenty. The paper succinctly summarizes the relevant peer-reviewed literature on the subject and comments on risks as well as opportunities involving GenAI in the GME setting. 

The authors are four clinical informatics fellows at the Baylor Scott & White health system in Texas. Here are excerpts from their section on opportunities. 

1. EHR workload reduction. 

Given their long work hours and stressful work environment, GME trainees are “particularly susceptible to burnout,” lead author Ravi Janumpally, MD, MHA, and colleagues point out, “with rates higher than their age-matched peers in non-medical careers and higher than early-career attending physicians.” More: 

‘Given its ability to summarize, translate and generate text, GenAI demonstrates clear potential as a technological aid to alleviate the burden of clinical documentation.’

2. Clinical simulation. 

Stakeholders have shown keen interest in using conversational GenAI to simulate patient encounters, although this application has more often been aimed at undergraduate medical education, the authors note. More: 

‘Among the most interesting potential applications of GenAI in GME is the concept of using synthetic data as training material for visual diagnosis. For example, generative adversarial networks (GANs) and diffusion models have shown promise in generating realistic medical imaging data sets.’

3. Individual education. 

“One-on-one tutoring delivered by humans is costly, and skilled teachers are not available everywhere, but GenAI tools may have some of the same benefits at a fraction of the cost,” Janumpally and colleagues write. 

‘Large language models show promise as a tool for explaining challenging concepts to graduate medical trainees in a manner tailored to the learner’s level, and LLMs could be configured to act as personalized tutors.’

4. Research and analytics support. 

GME trainees are required to participate in quality improvement (QI) projects, and these typically call for quantitative data analysis, the authors note. “Trainees are often underrepresented in organizational QI activities, with one potential reason being the substantial time and effort needed for data collection and analysis.” 

‘Large language models have some ability to facilitate straightforward data analysis and can generate serviceable code for statistical and programming tasks. LLMs are also adept at natural language processing tasks like extracting structured data from unstructured medical text.’
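
For readers curious what that last capability looks like in practice, here is a minimal, hypothetical Python sketch of using an LLM to pull structured fields out of an unstructured clinical note. The prompt wording, the call_llm() placeholder and the sample note are illustrative assumptions rather than anything prescribed in the paper, and any real implementation would route the call through an approved, privacy-protective model endpoint and validate the output before analysis.

    # Hypothetical sketch: extracting structured fields from an unstructured
    # clinical note with an LLM, per the excerpt above. call_llm() is a
    # stand-in for whatever model endpoint an institution actually uses;
    # here it returns a canned response so the script runs as written.
    import json

    PROMPT_TEMPLATE = (
        "Extract the following fields from the clinical note and return ONLY "
        "valid JSON with keys: age, chief_complaint, medications (a list).\n\n"
        "Note:\n{note}"
    )

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model call (hosted or local LLM).
        return '{"age": 67, "chief_complaint": "chest pain", "medications": ["metoprolol", "aspirin"]}'

    def extract_structured(note: str) -> dict:
        raw = call_llm(PROMPT_TEMPLATE.format(note=note))
        return json.loads(raw)  # in practice, validate against a schema before analysis

    if __name__ == "__main__":
        note = "67 y/o male presents with chest pain. Home meds: metoprolol, aspirin."
        print(extract_structured(note))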

5. Clinical decision support. 

GenAI for CDS is “an area of great potential and ways to improve performance are under development,” the authors write. However, they add, “GME faculty and trainees cannot yet rely on LLMs to directly guide clinical care.” 

‘Studies done to evaluate the potential of LLMs for clinical decision support in various clinical contexts have shown mixed results so far, with limitations in their ability to handle nuanced judgment and highly specialized decision-making.’

This doesn’t mean GenAI has no future in CDS—just that the pairing needs more time and attention. 

“LLMs can provide context-sensitive and specific guidance incorporating clinical context and patient data, they can be accessed through readily available communication channels, and—in contrast to rule-based alerts—they are interactive,” Janumpally and co-authors point out.

The paper is available in full for free

 


The Latest from our Partners

  • Nabla is rolling out its ambient AI assistant at Denver Health - Denver Health, Colorado's primary safety-net health system, is deploying Nabla across its entire clinical workforce. In just the first week of system-wide implementation, a record 400 clinicians signed up to use the ambient AI assistant for clinical documentation.
    During a successful 8-week pilot, Denver Health clinicians reported the following outcomes:

    ☑️ 40% reduction in note-typing per patient encounter
    ☑️ 82% of participants feeling less time pressure per visit
    ☑️ 15-point increase in patient satisfaction scores

    Read the press release
     

  • Assistant or Associate Dean, Health AI Innovation & Strategy - UCLA Health seeks a visionary academic leader to serve as its Assistant or Associate Dean for Health AI Innovation and Strategy and Director for the UCLA Center for AI and SMART Health. This unique position offers the opportunity to shape and drive AI vision and strategy for the David Geffen School of Medicine (DGSOM) and to ensure that innovation translates into practice across our renowned health system. This collaborative leader will work with academic leadership, faculty, staff and trainees to harness the power of AI to transform biomedical research, decision and implementation science, and precision health. Learn more and apply at:

    https://recruit.apo.ucla.edu/JPF09997 (tenure track)
    https://recruit.apo.ucla.edu/JPF10032 (non-tenure track)


Industry Watcher’s Digest

Buzzworthy developments of the past few days

  • This is the week of DeepSeek. China’s open-source, light-on-chips model has actually been out since last year. But a new version came out Jan. 20, and it took most folks a week to realize what they were looking at. By this Monday, the 27th, DeepSeek had shot to the top of Apple’s App Store. Along the way to that eyebrow-raising mile marker, it handed Nvidia the biggest one-day loss of market value ever seen in the U.S.—close to $600B. The stock markets seem to be taking a deep breath now. But even if DeepSeek ends up being more sizzle in the pan than steak on the plate, its overnight fame could reset the table. President Trump called the news a “wakeup call” for American tech companies. “We need to be laser-focused on competing to win,” he added, “because we have the greatest scientists in the world. Even Chinese leadership told me that.” 
     
  • The lone exception to the instant market reshuffle may be Apple. Tim Cook’s company saw its stock rise while AI rivals Alphabet and Microsoft took hits. Apple stands to benefit from a disruption to its competitors’ efforts, Business Insider remarks, because Apple’s AI strategy emphasizes integration over cutting-edge model development. BI also makes note of Sam Altman’s plans to speed up new releases of OpenAI’s models in response to the newly “invigorating” competition. On the national security front, Michigan GOP Rep. John Moolenaar warned the U.S. not to “allow Chinese Communist Party models such as DeepSeek to risk our national security and leverage our technology to advance their AI ambitions.” So, lots of important angles to consider. DeepSeek’s blastoff is a fast-developing story for all AI watchers, including those focused on AI in healthcare. 
     
  • Will next week be the week of Qwen? Don’t be surprised. Qwen 2.5 is Alibaba’s updated entry in the AI market wars. DeepSeek’s homeland competitor claims its latest model can outperform not only DeepSeek-V3 but also OpenAI’s GPT-4o and Meta’s newest iteration of Llama. It’s also said to play well with computers, phones and video players. 
     
  • AdvaMed clashes with radiology group over GenAI regulation. In this corner, the med-tech lobby outfit wants to maintain the status quo. “The FDA’s current framework is likely sufficiently robust to manage the unique considerations of generative AI in medical devices,” AdvaMed says. “Additional authorities or regulations targeting GenAI-enabled devices without first understanding if there are any gaps in the existing framework are unnecessary and could hinder progress.” And in this corner, the American College of Radiology lobbies for more granular oversight: “There should be a standard FDA framework for clinical validation that includes minimum requirements for training data diversity, standardized testing protocols across different clinical scenarios and performance benchmarks for specific clinical tasks.” Regulatory Focus airs out the debate. 
     
  • Machine learning vs. military suicide. A new study shows AI can help identify soldiers who are likely to try taking their own lives within six months of their annual checkup. Published in the journal Nature Mental Health, the study describes an algorithm that flagged the 25% of soldiers who went on to account for almost 70% of known suicide attempts. The model could be used to identify soldiers who “should be referred to behavioral health treatment, as well as to suggest which soldiers already in treatment need more intensive treatment,” the study’s authors comment. Study here, coverage by Stars and Stripes here
     
  • Rest in peace, electronic medical records. The advance well-wishes for the digital afterlife come from the healthcare futurist Rubin Pillay, MD, PhD, MBA. “The era of static EMRs is ending; the age of [AI-powered] medical record management is just beginning,” he writes on his Substack RubinReflects. “[T]he potential benefits make this shift not just desirable but necessary.” Pillay has thought through the particulars of his forecast. Hear him out
     
  • Anyone trying to align AI behavior with human values is on a fool’s errand. That’s the view of Marcus Arvan, PhD, an associate professor of philosophy and religion at the University of Tampa. Summarizing a peer-reviewed paper he had published in AI & Society, Arvan makes his argument in concise terms. “To reliably interpret what LLMs are learning and ensure that their behavior safely ‘aligns’ with human values, researchers need to know how an LLM is likely to behave in an uncountably large number of possible future conditions,” he writes. “AI testing methods simply can’t account for all those conditions.” Scientific American published the opinion piece Jan. 27. Read it here
     
  • Recent research in the news: 
     
  • Notable FDA Approvals:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand