Experienced neurologist interviews generative AI, comes away impressed but not intimidated
ChatGPT can be a little sneaky. Asked recently whether it has ever messed up when helping physicians make clinical decisions, it initially put a positive spin on its answer, coming clean with examples of stumbles only after being pressed on the matter.
The questioner was John Greenfield, MD, PhD, professor and chair of neurology at UConn Health in Connecticut. Greenfield interviewed the latest iteration of the large language model, ChatGPT-4, and had the transcript published in Neurology Live.
Here are excerpts from the conversation, held over Microsoft’s Bing engine.
GREENFIELD: Some physicians are concerned that AI might replace them in making decisions about patient care. Do you think this is possible?
CHATGPT: AI is designed to assist physicians in making decisions about patient care, not to replace them. … AI can help physicians make more informed decisions about patient care, but it cannot replace the human touch and empathy that is required in patient care.
What will be the future of AI in neurology?
The future of AI in neurology is promising. AI can be used for predicting outcomes of seizure disorders, grading of brain tumors, upskilling neurosurgical procedures and rehabilitation of stroke patients with smartphone apps monitoring patient symptoms and progress.
Give an example of how you, ChatGPT-4, would provide positive feedback to a medical student. Let’s say the student has disorganized patient presentations and an average knowledge base but good neurological exam skills.
“Congratulations on your excellent neuro exam skills! Your ability to accurately assess patients’ neurological function is a valuable asset. I also appreciate your commitment to improving your patient presentations despite their initial disorganization. I have no doubt that, with practice, you will continue to improve your presentation skills and become an even more effective communicator with patients and colleagues. Keep up the great work!”
Greenfield ends the exercise by pointing out that ChatGPT-4 and other large-language models are only one iteration of AI.
“[T]he potential of AI approaches to assist in neurological research, education, diagnosis and care is enormous,” he remarks. “Even so, AI lacks judgment based on real-world experience and the intangible information derived directly from patients and the clinical situation.”
“AI has a lot of potential but still requires oversight,” Greenfield concludes. “For now, our jobs appear safe.”