Artificial intelligence researchers are making a “great plea” to guide the ethical development and use of generative AI in medicine. The term is in quotes because it’s an easily memorized acronym: the letters stand for Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy and Autonomy.

The researchers base the set of principles on ethical AI guidance from the military world—namely the U.S. Department of Defense and the North Atlantic Treaty Organization.

“Warriors on battlefields often face life-altering circumstances that require quick decision-making,” senior author Yanshan Wang, PhD, of the University of Pittsburgh and colleagues explain. “Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery treating a life-threatening condition.”

They lay out their formula in commentary published Dec. 2 in NPJ Digital Medicine. Here are abbreviated descriptions of the first five elements.

- Governability. Generative AI systems must be governed by explicit guidelines covering what to do should glitches arise, Wang and co-authors assert. “In the event of any unintended behavior, human intervention to disengage or deactivate the deployed AI system should be possible,” they write.
- Reliability. If you’re not certain your generative AI model is as safe as—or safer than—human decision-making, don’t deploy it in clinical practice, the authors imply. “Having a thorough evaluation and testing protocol against specific use cases will ensure the development of resilient and robust AI systems,” they write.
- Equity. Defining the concept as “the state in which everyone has a fair and just opportunity to attain their highest level of health,” Wang and colleagues state that generative AI developers must seek to mitigate bias by adjusting algorithms to help address existing disparities in health status.
- Accountability. If measures aren’t in place to hold AI-using clinicians accountable for care decisions, patients may feel doctors and nurses are less than fully invested in AI-aided care processes, the authors suggest. In a nutshell: The less the accountability, the lower the trust.
- Traceability. The generative aspect of generative AI should be transparent and, thus, capable of being tracked, Wang and co-authors maintain. “Data sources used to train these models and the design procedures of these models should be transparent too,” they add. “Furthermore, the implementation, deployment and operation of these models need to be auditable, under the control of stakeholders in the healthcare setting.”
And that’s the GREAT part of the formula. For the PLEA part—Privacy, Lawfulness, Empathy and Autonomy—read the rest of the paper.
Buzzworthy developments of the past few days.

- More than 50 orgs from business, academia and government are banding together to advance AI assertively yet responsibly. Calling itself the AI Alliance, the assemblage will unite over shared interests in innovation, safety, diversity, opportunity and “benefits for all.” Co-drivers of the bus are IBM and Meta. Announcement.
- In 2024, AI will occupy more territory in the minds of tech leaders than any other technology. IEEE found as much when it consulted 350 CIOs, CTOs, IT directors and similarly titled tech professionals in the U.S., U.K., China, India and Brazil. Extended reality and cloud computing came in second and third, respectively. 5G and quantum computing also registered. Coverage with study link from IEEE Spectrum.
- ‘AI let loose by itself is a terrible thing.’ The cautionary nugget is from Christoph Lehmann, MD, director of clinical informatics at UT Southwestern Medical Center. He offers the observation in an expansive interview about healthcare AI with D Magazine in Dallas. Read the piece.
- On the other hand, generative AI is ‘capable of delivering meaningful improvements in healthcare more rapidly than was the case with previous technologies.’ That’s from Robert Wachter, MD, chair of medicine at UC-San Francisco and author of the 2015 bestseller The Digital Doctor. Wachter airs out his optimism in a JAMA opinion piece and an interview with UCSF’s news operation.
- Tension is building between Google’s healthcare AI operation and the D.C. denizens trying to monitor smooth operators like, well, like Google’s healthcare AI operation. As Politico reports, lawmakers and regulators are especially challenged by Google leaders and lobbyists who used to be government officials themselves. Story here.
- There are more ways than one to educate medical students in AI. One of them is modeled at the University of Texas, which offers a dual-degree program incorporating healthcare AI. Health IT Analytics has the story.
- 3Aware of Indianapolis is working with Mayo Clinic Platform to help medical device manufacturers comply with regulatory requirements. Announcement here.
- Evidently just for Schlitz and giggles, Reuters asked two generative AI powerhouses for their take on the most important news of 2023. OpenAI’s ChatGPT gave a goofy response that would have been no help at all had its input mattered. Google’s Bard did better but completely missed the Israel-Hamas war. The news service’s global managing editor, Simon Robinson, comments: “But even if AI cannot yet match a journalist, the technology’s emergence in 2023 promised (or threatened, depending on your viewpoint) a profound shift in the way humans operate. … In 2024, expect more progress and more news on regulators scrambling to keep up.”
- From AIin.Healthcare’s news partners: