Every industry on earth is buzzing over the promise and potential of ChatGPT and similarly capable AI models, whether of the large-language variety or another generative form. Healthcare is no exception. But shouldn’t it be?
At Wired, a journalist focused on AI in society spoke with a handful of medical professionals and found no shortage of misgivings. Here are five.
1. ChatGPT was trained on literature spanning many years. At first blush that may sound like an unqualified plus. However, outdated medical evidence can be dangerous—and clinical knowledge and practices surely “change and evolve over time,” Heather Mattie, PhD, a biostatistics lecturer at Harvard’s T.H. Chan School of Public Health, tells Wired senior writer Khari Johnson. “There’s no telling where in the timeline of medicine ChatGPT pulls its information from when stating a typical treatment.”
2. ChatGPT has been shown to sound authoritative even when dispensing factual inaccuracies and fictitious references. “It only takes one or two [experiences] like that to erode trust in the whole thing,” points out Trishan Panch, MD, MPH, a Harvard instructor and digital health entrepreneur.
3. Physicians could inappropriately lean on the software for moral or ethical guidance. “Some bioethicists worry that doctors will turn to the bot for advice when they encounter a tough decision like whether surgery is the right choice for a patient with a low likelihood of survival or recovery,” reports Johnson. Asked by Johnson about just such a scenario, bioethicist Jamie Webb of the University of Edinburgh in Scotland holds firm: “You can’t [ethically] outsource or automate that kind of process to a generative AI model.”
4. Over time, gradual “de-skilling” is a real risk. Getting rusty could afflict clinicians who get a little too used to relying on a bot “instead of thinking through tricky decisions for themselves,” Johnson writes, citing research by Webb and colleagues.
5. Thanks to their aptitude for striking a scholarly tone, ChatGPT and other breeds in the species might subtly influence, if not outright fool, humans. Fortunately, an antidotal strategy is always available: Let the flawed but smart bots pitch in as long as they’re closely supervised by a human expert. This uncomplicated approach certainly works with (and for) residents and other trainees, notes Robert Pearl, MD, a Stanford professor, author and former CEO of Kaiser Permanente.
Pearl, incidentally, is emerging as a notable enthusiast of large language models in clinical settings.
“No physician who practices high-quality medicine will do so without accessing ChatGPT or other forms of generative AI,” he tells Wired. “I think it will be more important to doctors than the stethoscope was in the past.”
Read the whole thing.