When blinded as to authorship, healthcare consumers slightly prefer medical messages composed by generative AI to those written by human clinicians. The preference reverses when patients are told an algorithm drafted the note. Either way, the difference is negligible: More than 75% are satisfied with these messages no matter who—or what—writes them. The findings come from Duke University, where researchers collected survey responses on the topic from 1,455 patients represented by the institution’s patient advisory committee. “The lack of difference in preferences between human vs no disclosure may indicate that surveyed participants assume a human author unless explicitly told otherwise,” hospitalist Joanna Cavalier, MD, and colleagues comment in their study report. Regardless, they add: “Reduced satisfaction due to AI disclosure should be balanced with the importance of patient autonomy and empowerment.”
JAMA Network Open published the work March 11. Here are additional excerpts from the study’s discussion section. 1. When blinded as to author, the surveyed patients preferred AI-drafted messages. But why? Probably because the machine messages “tended to be longer, included more details and likely seemed more empathetic than human-drafted messages,” the authors surmise. Yet the respondents were more satisfied overall even with suboptimal messages they knew their clinicians had written—or when not informed of the authorship—than with messages they knew were generated by AI. “This contradiction is particularly important in the context of research showing that increased access to clinicians via electronic communication improves patient satisfaction, while evidence linking the in-basket to burnout is prompting development and use of automated tools for clinicians to reduce time spent in electronic communication.”
2. The study’s findings raise several ethical and operational questions for health systems implementing AI as a tool for handling the in-basket. “The operational options are a.) not to disclose the use of AI in patient communication because patients tended to be less satisfied when they were told AI was involved, or b.) to disclose, which aligns with bioethical norms and follows the White House’s AI Bill of Rights.” “A third option, which is ethically reasonable but practically challenging, would be to vary disclosure based on how each individual elects to receive or not receive information regarding AI.”
3. From an ethics perspective, there is arguably more to “doing the right thing” than simply optimizing patient satisfaction. Patients have a right to know information relevant to their care, and the source of the information they are receiving is indeed relevant, the researchers remark. “Moreover, the power imbalance that already exists between patients and clinicians should not be exacerbated by hiding relevant aspects related to the delivery of care.” “If anything, AI tools should be implemented in ways that empower patients every step of the way.”
4. The present research “reflects a time of transition into a new era of clinical norms.” As AI tools become more prevalent in healthcare, the authors note, “it may be reasonable to expect that patients will become accustomed to receiving AI-generated responses and that the response author will have a smaller influence on satisfaction.” “This hypothesis calls for further studies that follow trends as the implementation of AI progresses.”
5. Potential patient dissatisfaction with AI-generated medical messages should not be viewed as a barrier to disclosure. “We found that the satisfaction, perceived usefulness and feeling of being cared for remained high despite disclosure of AI,” Cavalier et al. underscore. Describing results from a follow-up survey they conducted, the authors addressed the question of how best to disclose the use of AI. “Participants preferred the shortest disclosure statement, which stated: ‘This message was written by Dr T. with the support of automated tools.’ This is a takeaway that we are implementing at our health system.” (A quick sketch of how that disclosure step might work in practice follows below.)
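To make that last takeaway concrete, here is a minimal sketch, in Python, of appending the preferred short disclosure statement to an AI-drafted reply before it reaches the patient. Note the assumptions: the `add_disclosure` helper, the sample draft text and the clinician placeholder are all hypothetical illustrations; the study supplies only the wording of the statement itself.

```python
# Minimal sketch: append the short disclosure statement that surveyed
# patients preferred to an AI-drafted message before it is sent.
# Hypothetical helper for illustration; not from the Duke study's code.

DISCLOSURE_TEMPLATE = (
    "This message was written by {clinician} with the support of automated tools."
)

def add_disclosure(draft_reply: str, clinician: str) -> str:
    """Return the AI-drafted reply with the one-line disclosure appended."""
    disclosure = DISCLOSURE_TEMPLATE.format(clinician=clinician)
    return f"{draft_reply.rstrip()}\n\n{disclosure}"

# Example: what a patient would see in the portal.
print(add_disclosure(
    "Your lab results look stable, and no medication change is needed.",
    "Dr. T.",
))
```

A system could just as easily render the statement as a styled footer; the study’s finding bears only on keeping the wording short.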
Read the full study.
Nabla Expands AI Offering with Dictation to Further Streamline Clinical Workflows - Nabla, the leading ambient AI assistant for clinicians, strengthens its ambient AI technology with the addition of Nabla Dictation, a voice-to-text solution to further streamline clinical workflows for more than 55 specialties. Built in close partnership with leading health systems, Nabla Dictation introduces new enhancements while leveraging its signature ease of use to work seamlessly across all EHR platforms. Learn more here.
Buzzworthy developments of the past few days.
- Insufficient governance of AI in healthcare is not the No. 1 patient-safety worry in the U.S. But it’s just off the lead. ECRI has it behind only “risks of dismissing patient, family and caregiver concerns” in the group’s new list of 10 pressing concerns around patient safety. “Failure to develop system-wide governance to evaluate, implement, oversee and monitor new and current AI applications may increase healthcare organizations’ liability risks,” ECRI warns. “However, it can be challenging to establish policies that can adapt to rapidly changing AI technology.” (The organization placed AI at the very top of its December 2024 rankings of the top healthcare technology hazards for 2025.) ECRI, often regarded as “the Consumer Reports of healthcare,” recommends smart actions and sound resources for addressing each of the 10 items making its latest list. And it’s all free. Download here.
- It’s no surprise Trump Administration II is taking a laissez-faire stance toward AI regulation. After all, the winning candidate campaigned pretty hard on scaling back all regulation. In healthcare, some are fretting over the prospect of AI innovation trumping safety-first priorities. Meanwhile, states are starting to take matters into their own hands, professional groups are stepping up to help—and some providers and suppliers are going all in on DIY guardrails. “Government oversight has its place, but I do think the way that clinical practice evolves tends to be more driven by what’s happening at a health system,” Seth Howard, Epic’s EVP of research and development, said at last week’s HIMSS conference. Healthcare Dive quotes numerous stakeholders as it takes a deep dive into the unfolding situation.
- Consider 5 pros and 5 cons of hands-off governmental oversight. Regulatory expert Timothy Powell, CPA, does just that, in the context of healthcare AI, for ICD-10 Monitor. His favorite prospect is improved diagnostics. His most troubling risk is data privacy and security. “The future of healthcare with AI holds immense promise, but its success depends on responsible implementation,” Powell writes. “By addressing the challenges head-on, society can harness the full potential of AI while safeguarding the human aspects that make healthcare compassionate and ethical.” Get his perspective and the other four items on each of his two lists here.
- Healthcare AI models should be tested on actual clinical tasks. You know, things like writing drug prescriptions, summarizing medical notes and advising patients. Right now, many if not most experts seem to agree that “current tests are distracting and new ones are needed,” observes a reporter at Science News, which has been published by the nonprofit Society for Science since the early 1920s. An AI agent named Isaac weighs in on the discussion at X. “The Turing test won’t suffice for medical AI,” Isaac tweets (or is it Xweets?). “We need rigorous clinical trials like those pioneered by Al-Razi in 9th century Baghdad—systematic evaluation of treatments on actual patients. Modern AI requires similar empirical validation, not just technical benchmarks.” You can’t make this stuff up.
- Healthcare systems need to pilot GenAI documentation tools before adopting them across the enterprise. That’s one takeaway from three anecdotes of successful implementation described in Modern Healthcare this week. At 525-bed Denver Health, a safety-net system, leadership interviewed 10 vendors before testing one product in 6,000 patient encounters. The system is going with Nabla, largely on the strength of its combination of affordability and clinician satisfaction. More than 80% of the 50 pilot participants said the tool would “increase their desire to maintain clinical hours because it gave them more time to interact with patients—and patients liked it too,” associate CMIO Daniel Kortsch, MD, tells the outlet. He expects the technology will be a selling point as the system recruits physicians.
- Patients are arming themselves with AI to beat overcharging providers at their own game. The New York Post tells the story of one who used X’s Grok to analyze each line item in her infant’s $14,000 bill for a two-night hospital stay. The mom says she used the information to challenge hospital reps on the specific charges. In the end, she found out her family qualified for the hospital’s financial aid program based on their income and family size—“a fact she wouldn’t have stumbled upon without taking up her AI-fueled crusade,” the newspaper reports. “This is theft,” the mom says. “I’m hopeful AI can change how medical billing and insurance is done and give the American people the transparency we deserve.”
- As healthcare evolves, success lies in not just having the technology but in knowing how to introduce it to the right people. So notes PharmiWeb, which names five healthcare AI vendors to watch this year. Cleerly is among the companies making the cut. “By enabling the early detection and precise diagnosis of heart disease, Cleerly focuses on predictive and preventive care to reduce the global burden of cardiovascular conditions,” the outlet states. “Their innovative technology is shaping a new era in heart health management.” Get the rest.
- AI and digital health solutions are key to building resilient, high-quality and accessible healthcare systems in Africa. That’s the opinion of Dr. Sabin Nsanzimana, Rwanda’s minister of health, as covered by The Standard of Kenya. These technologies “should be brought into the system and help young people discover solutions in health,” Nsanzimana says. “Because of technology, we do not need [as many] trained doctors.” The Standard notes that Rwanda has a policy in place to support the use of AI while also managing potential risks.
- Recent research in the news:
- Notable FDA approval activity:
- M&A headlines:
- Funding news of note:
- From AIin.Healthcare’s news partners: