How generative AI for healthcare is similar to—and different from—its conventional older cousin

Generative AI can help healthcare workers of many kinds complete a wide range of tasks. But adopting it isn’t without pitfalls.

The National Academy of Medicine, aka “NAM,” considers some pertinent ins and outs of the technology in a special publication released this month. 

The authors of the 15-page report—subtitled “Opportunities and Responsibilities for Transformative Innovation”—pay special attention to GenAI’s emerging role in clinical decision-making, administrative efficiency and patient engagement. 

The report also offers a side-by-side comparison of GenAI with standard AI—or what the group calls the “predictive/analytical” kind—across five important considerations for adopters. The section also functions as a useful review of similarities and differences: 

1. Output evaluation and quality control. 

  • Predictive/analytical AI: These models generate quantitative predictions, “making performance assessment more straightforward through accuracy, precision and recall metrics,” NAM reminds. “The emphasis is on accuracy within defined data parameters rather than subjective quality.”

  • Generative AI: The primary outputs are new content, such as text, images and audio, “where quality is subjective and context dependent,” the authors note. “Monitoring focuses on coherence, relevance and ensuring ethical content generation, as well as preventing issues like ‘hallucinations’ or factual inaccuracies.”
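For readers who want a concrete sense of the metrics NAM mentions, here is a minimal sketch of how accuracy, precision and recall might be computed for a binary predictive model. The function name and sample data are illustrative, not from the report:

```python
# Illustrative scoring of a binary predictive model's outputs
# (1 = condition flagged, 0 = not flagged). Names are hypothetical.

def precision_recall_accuracy(y_true, y_pred):
    """Return (accuracy, precision, recall) for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Hypothetical example: six cases, the model flags three of them.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
acc, prec, rec = precision_recall_accuracy(y_true, y_pred)
```

Because every prediction is compared against a known label, performance reduces to arithmetic on counts — the “more straightforward” assessment the report describes, with no subjective judgment of output quality required.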

2. Bias manifestation and detection.

  • Predictive/analytical AI: Bias checks focus on ensuring that model predictions are fair across different groups, the authors explain. “Monitoring bias in these models often involves fairness audits and statistical checks on outcomes rather than subjective analysis of generated content.”

  • Generative AI: Bias can appear “in subtle ways, shaping content tone, language or framing,” the authors write. “Monitoring involves detecting biases in generated language or other output media and preventing the spread of misinformation or unintended stereotypes.”
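The “statistical checks on outcomes” NAM describes for predictive models can be as simple as comparing positive-prediction rates across patient groups — a demographic-parity check. This sketch is illustrative; the group labels and helper names are hypothetical, not drawn from the report:

```python
# Illustrative fairness audit: compare a model's positive-prediction
# rate across patient groups. All names and data are hypothetical.

from collections import defaultdict

def positive_rate_by_group(groups, preds):
    """Return {group: share of positive predictions} for binary preds."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, preds):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    return max(rates.values()) - min(rates.values())

groups = ["A", "A", "A", "B", "B", "B"]
preds  = [1, 1, 0, 1, 0, 0]
rates = positive_rate_by_group(groups, preds)  # A: 2/3, B: 1/3
gap = parity_gap(rates)
```

A large gap is a red flag auditors can compute mechanically — which is exactly why bias in generated text, which has no such single number, demands the more qualitative review the authors describe.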

3. Performance degradation and adaptation. 

  • Predictive/analytical AI: Model drift often relates to underlying data shifts, requiring statistical tracking of accuracy and regular retraining. “The process is more data driven and straightforward,” the authors point out, “as performance is measured against historical accuracy benchmarks.”

  • Generative AI: “Quality degradation may appear as reduced coherence or creativity, requiring frequent content review and adjustments,” NAM explains. “User feedback is often essential in detecting subtle shifts in output quality.” Also, GenAI models are “often designed to evolve over time, learning from new data, so monitoring requires ongoing vigilance to adapt to changes.”
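The “statistical tracking of accuracy” against historical benchmarks that NAM describes for predictive models can be sketched as a rolling monitor that flags the model when recent accuracy drops below its benchmark by some margin. The class name, threshold and window size are illustrative choices, not NAM recommendations:

```python
# Illustrative drift monitor: flag a predictive model for review when
# rolling accuracy falls a set margin below its historical benchmark.

from collections import deque

class AccuracyDriftMonitor:
    def __init__(self, benchmark, margin=0.05, window=100):
        self.benchmark = benchmark           # historical accuracy to track
        self.margin = margin                 # tolerated drop before flagging
        self.results = deque(maxlen=window)  # rolling hit/miss record

    def record(self, correct):
        """Log one prediction outcome; return True if drift is flagged."""
        self.results.append(bool(correct))
        rolling = sum(self.results) / len(self.results)
        return rolling < self.benchmark - self.margin

# Hypothetical stream: two correct predictions, then two misses.
monitor = AccuracyDriftMonitor(benchmark=0.90, margin=0.05, window=4)
flags = [monitor.record(c) for c in [True, True, False, False]]
```

No equivalent one-number trigger exists for generative output quality, which is why the report points instead to content review and user feedback.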

4. Impact on users and society. 

  • Predictive/analytical AI: Impacts are “more directly related to decision making, where inaccurate predictions can affect outcomes in areas like medicine or eligibility for services,” the authors observe. “Monitoring focuses on ensuring reliable decision support and fairness in model applications.”

  • Generative AI: “The potential for misuse of generative content (e.g., for spreading misinformation) adds a unique layer of impact monitoring, requiring checks on ethical content generation and user satisfaction,” NAM states. “Societal impacts include privacy, misinformation and the psychological effect on users.”

5. Compliance and legal considerations. 

  • Predictive/analytical AI: These models often operate in industries with established regulatory frameworks, including healthcare, “so monitoring focuses on meeting interpretability, privacy and compliance requirements within well-defined legal standards,” the authors note.

  • Generative AI: “Content produced by generative AI can raise unique compliance issues related to privacy, misinformation and ethical standards,” NAM writes. “Monitoring involves regulatory checks on content generation standards and adherence to ethical guidelines.”

Among the publication’s other noteworthy attributes is a guide for assigning responsibility to individuals according to a 4-point matrix. In order of least to most responsible, NAM’s stakeholder categories are “informed,” “consulted,” “accountable” and “responsible.” 

Access the publication here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.