5 terms every hospital trustee—and healthcare AI stakeholder—should know
As AI continues infiltrating healthcare at nearly every level, the technology’s potential for good and ill must become—or remain—a preeminent concern for hospital boards of trustees.
This can be difficult since many trustees are volunteers hailing from lines of work that, traditionally, have had little to do with advanced data science. More than a few of these leaders don’t have direct backgrounds in healthcare, either.
As acknowledged in a piece posted by the American Hospital Association this month: “The [hospital] trustee is faced with a double challenge: understanding the implications of AI in one’s own field as well as in the healthcare professions.”
The commentary is penned by Steven Berkowitz, MD, a healthcare consultant and former hospital and health-system CMO. If trustees are to stay on top of AI for the good of the healthcare institutions they serve, he suggests, they should know their way around five key concepts and controversies. These are:
1. Generative pretrained transformer (GPT). With more than 180 million users, OpenAI's ChatGPT is the most familiar application of a GPT model. As you read this, ChatGPT is being used to write articles, generate code, summarize research and describe images in text; a brief sketch of one such call follows the excerpt below. In fact, Berkowitz reminds, when ChatGPT went head to head against physicians answering patients' medical questions, it frequently outperformed the doctors on both accuracy and empathy. More Berkowitz:
The possibilities of GPT applications seem endless. Vastly more powerful updates are on the horizon. Multiple vendors are now entering this space. GPT will be embedded in many processes in all industries. Its potential in healthcare is overwhelming.
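To make that concrete, here is a minimal sketch of the kind of task described above: asking a GPT model to summarize clinical text through OpenAI's Python SDK. The model name, prompt and sample note are illustrative assumptions, not details from the AHA commentary.

```python
# Minimal sketch: summarizing a clinical note with OpenAI's Python SDK.
# The model name and sample note are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

note = (
    "67-year-old male admitted with community-acquired pneumonia, "
    "treated with ceftriaxone and azithromycin, now afebrile and "
    "ready for discharge on oral antibiotics."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "Summarize clinical text in one plain-language sentence."},
        {"role": "user", "content": note},
    ],
)

print(response.choices[0].message.content)
```

A few lines like these are all it takes to embed a GPT model in a workflow, which is precisely why Berkowitz expects the technology to turn up in so many processes.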
2. Deep fakes. To be sure, these are more likely to catch trustees’ attention as harmless amusements from the entertainment sector than as, say, fraudulent prescriptions for drugs from phony physicians—or heartfelt pleas for money from incredibly convincing “loved ones.” Still, Berkowitz points out, it’s ground worth exploring for future reference.
It remains to be seen where this will land, but it is an area of legitimate concern. Vendors offer the ability to distinguish real from AI-generated material. Meanwhile, the “bad guys” continue to produce more sophisticated ways to evade detection.
3. Inherent bias. AI is only as well-rounded, and thus as objective, as the data on which it's trained; a toy illustration follows the excerpt below. What's more, algorithms can inherit biases from their developers. Berkowitz:
A recent article gave ChatGPT the Political Compass quiz, and it came out significantly on the left and libertarian side. It is fair to assume that any AI output could contain biases from numerous etiologies, and specific results should always be assessed for this possibility.
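The data-dependence point can be shown in a few lines. The following toy sketch, entirely illustrative and not drawn from Berkowitz's piece, trains a simple model on historical records in which one patient group's needs were systematically under-recorded; the model then flags fewer truly high-need patients from that group, a pattern resembling documented failures of real clinical risk algorithms.

```python
# Toy sketch of inherent bias: a model trained on incomplete historical
# labels inherits the gap. All numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
severity = rng.normal(0.0, 1.0, n)  # true underlying need for care

true_need = severity > 0.5          # identical across groups by design
# Historical records miss 40% of group B's truly high-need patients.
missed = (group == 1) & (rng.random(n) < 0.4)
recorded = true_need & ~missed

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, recorded)

for g in (0, 1):
    mask = (group == g) & true_need
    flagged = model.predict(X[mask]).mean()
    print(f"group {'AB'[g]}: share of truly high-need patients flagged = {flagged:.2f}")
```

The lesson for trustees is that the remedy is less about the math than the data: label quality and error rates should be audited by patient group before any algorithm touches care decisions.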
4. AI and consciousness. Is AGI—artificial general intelligence—a real possibility? Or is it just the stuff of overactive imaginations, now and for the foreseeable future? Either way, the debate is a surefire high-level conversation starter. And it’s one for which trustees only need to know questions to ask, not answers to supply.
Given the rapid expansion of the technology, the potential of computers crossing over that barrier into full self-awareness and consciousness must be considered.
5. Technological singularity. In the context of AI, this term refers to a hypothetical point at which machine intelligence surpasses human intelligence and its growth becomes uncontrollable and irreversible. If such a singularity were ever to occur, AI could theoretically “take over the world,” Berkowitz writes. “Is this media hype, or is it our fate?”
One of the most primal instincts of a living organism is the need to survive. If the computer perceives a human as a threat, would it then feel compelled to destroy that human? Presently, this is the fodder of science fiction novels and movies. However, many respected AI researchers have expressed concern.