AI separates EHR wheat from weeds so clinicians can get on with patient care

When assisted by an AI tool designed to organize and display digitized patient referral records, gastroenterologists cut their time to answer relevant clinical questions by 2.3 minutes.

The reduction, seen in a small but illustrative AI research project at Stanford, represented an 18% improvement over standard review times.
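Those two figures imply a baseline review time the article doesn't state outright. A quick back-of-the-envelope check, assuming the 2.3-minute saving corresponds exactly to the reported 18% improvement:

```python
# Back-of-the-envelope check: if a 2.3-minute saving is an 18% improvement,
# the implied baseline (standard) review time is saving / fraction.
time_saved_min = 2.3
improvement_fraction = 0.18

baseline_min = time_saved_min / improvement_fraction  # implied standard review time
with_ai_min = baseline_min - time_saved_min           # implied AI-assisted time

print(f"Implied baseline review time: {baseline_min:.1f} min")  # ~12.8 min
print(f"Implied AI-assisted time:     {with_ai_min:.1f} min")   # ~10.5 min
```

In other words, the reported numbers suggest clinicians went from roughly 13 minutes to roughly 10.5 minutes per review, consistent with the "nearly 20%" savings the authors cite later.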

What’s more, the physicians’ answer accuracy with the tool was excellent—and on par with their performance when using conventional means of record retrieval.

The development and validation of the EHR assistance system, along with its applicability across healthcare, are described in a study published in JAMA Network Open.

Senior author Sidhartha Sinha, MD, and colleagues prospectively tested their AI system with 12 volunteer GI specialists at their institution.

Sinha and co-researchers asked the participants to answer 22 clinical questions that required EHR searching. The subjects conducted the experimental searches with and without the AI system.

Analyzing the results, Sinha and team found the time savings and high accuracy were evident across the cohort.

Meanwhile, in an accompanying survey, all but one of the participants expressed a preference for the AI-optimized approach.

“Despite a learning curve pointed out by respondents, 11 of 12 physicians believed that the technology would save them time to assess new patient records and were interested in using this technology in their clinic,” Sinha and colleagues note.

The authors comment that their findings are “particularly relevant in an era in which practitioners are confronting increasing volumes of EHR data and the loss of face-to-face interaction with patients.”

That goes for practitioners beyond gastroenterology, they suggest.

The supportive responses to the participant survey “highlight the importance of this issue as an area of need that can likely be generalized and expanded to multiple other medical subspecialties that share similar challenges, because many referral records contain similar types of information (progress notes, radiology reports, pathology findings, procedure notes, etc.).”

The authors note several limitations in their study design, including reliance on a small study group from a single specialty within a single institution.

Nevertheless, they write in their discussion section,

“[W]e believe our questions reflect the type of data that a clinician would need to consider when reviewing a new patient referral packet. In addition, although we have a relatively small number of participants (n = 12), they each answered numerous questions (44 total) and as such, we had adequate power to detect the nearly 20% time savings owing to AI optimization. With larger records and increased use of such an AI system, we hypothesize even more pronounced time savings.”

The study’s lead author is Ethan Andrew Chi, a Stanford graduate student in computer science.

In an invited commentary on the study, Richard Baron, MD, president and CEO of the American Board of Internal Medicine, considers where the Stanford gastroenterology AI model might go from here.

The AI “could be embedded in the EHR itself, someone could develop it as a commercial product, or it could be incorporated in the process of scanning records,” Baron writes. “Any of these approaches would be welcome relief to hard-pressed clinicians drowning in seas of unstructured data.”

At the same time, Baron points out, it’s worth acknowledging that “other, better ways to solve the problem at hand—the time demands of the EHR—already exist.

“If we had truly robust standards for electronic data interchange and less anxiety about privacy, these kinds of data could be moved around more freely in a structured format,” he writes. “Of course, there are regional exchanges where they do. The data could also be created in structured format to begin with.”

In addition, Baron adds,

“The very gastroenterologists participating in the study are paid to perform procedures regardless of the format in which the procedure report is produced; one could imagine a world in which failure to produce a machine-readable structured procedure report precluded being paid at all. … [A]s long as we live in the Babel of free-form non-interoperable medical documentation, it is likely that an AI tool supporting humans who care for patients by wading through volumes of scanned documents can be a real contribution.”

The full study and Baron’s commentary are both available in JAMA Network Open.
