What starts in California often migrates to the rest of the country. So healthcare leaders around the U.S. might want to take notice of what’s going on in the streets of San Francisco this week.

In that city, a crowd of nurses employed by Kaiser Permanente marched Monday in protest of Kaiser’s embrace of healthcare AI. Organized by the California Nurses Association, the demonstrators waved signs and chanted slogans.

The timing of the action seems purposeful. This week Kaiser is hosting an international audience for a smallish but influential conference, the 2024 Integrated Care Experience. The site of the event—and the protest—is KP’s San Francisco Medical Center.

Is the protesters’ main motivator job security, patient safety or equal parts both? It may not matter. What matters is the prompt: the rapid rise of generative AI. Here’s a sampling of viewpoints from stakeholders on both sides of the disagreement over the technology’s rightful role in healthcare.

‘No computer, no AI can replace the human touch. It cannot hold your loved one’s hand. You cannot teach a computer how to have empathy.’—Amy Grewal, RN, Kaiser Permanente nurse (to NBC Bay Area)
‘We believe that AI may be able to help our physicians and employees, and enhance our members’ experience.’—Kaiser Permanente officials (to KQED)
‘Our patients are not lab rats.’—Michelle Gutierrez Vo, RN, Kaiser Permanente nurse and a California Nurses Association co-president (to KQED)
‘Generative AI is a threatening technology but also a positive one. What is the best for the patient? That has to be the number one concern.’ —Robert Pearl, MD, author of ChatGPT MD and former CEO of Kaiser Permanente (to KQED)
‘There is nothing inevitable about AI’s advancement into healthcare. No patient should be a guinea pig and no nurse should be replaced by a robot.’ —Cathy Kennedy, RN, Kaiser Permanente nurse and a California Nurses Association co-president (to National Nurses United)
‘It’s very good to have open discussions because the technology is moving at such a fast pace and everyone is at a different level of understanding of what it can do and [what] it is.’—Ashish Atreja, MD, MPH, chief information and digital health officer at UC Davis Health (to KQED)
‘Patients are not algorithms … Trust nurses, not AI’—Kaiser Permanente nurses via picket signs (as seen on video posted to X by the San Francisco Chronicle)
Buzzworthy developments of the past few days.

- Before using healthcare GenAI for sensitive operational tasks like medical coding, algorithms must be refined and tested to near perfection. The strong suggestion comes courtesy of researchers at the Icahn School of Medicine at Mount Sinai in New York City. The team worked with more than 27,000 unique diagnosis and procedure codes drawn from 12 months of routine care. After feeding these to large language models from OpenAI, Google and Meta, they compared the outputs with the original codes. All models had accuracy problems; none reached the 50% mark for exact matches. GPT-4 came the closest, notching the highest exact-match rates for CPT codes (49.8%), ICD-9-CM codes (45.9%) and ICD-10-CM codes (33.9%). Mount Sinai’s news operation flatly states the gist: “Despite AI advancements, human oversight remains essential.” Journal study here, Mount Sinai’s own coverage here.
- Over the course of a career, surgeons bending at the waist to perform hours-long spinal operations may be inviting the irony of fate into their lives. Which is to say they may put their own necks and backs through a long, slow descent into chronic stiffness and pain. They might also develop a permanent stoop. Wearable technologies can help, and a new study conducted at Baylor College of Medicine tells how. Investigators strategically placed sensors on the heads and upper backs of 10 neurosurgeons performing spine and cranial procedures. The devices transmitted data on time spent in extended, neutral and flexed static postures. Armed with such feedback in real time, the surgeons quickly adjusted their positions during operations. The study’s lead author remarks that tapping the technology to warn of poor motion patterns at early career stages “may help emerging surgeons correct their posture and avoid long-term injuries.” Journal study here, Baylor news item here.
- The wiseguys who hacked into Change Healthcare in February digitally loitered inside the company’s networks for more than a week before launching their strike. This may say more about Change’s security shortcomings than it does about the hackers’ coldness. Regardless, a lot of good people have been hurt in a lot of bad ways. For starters, UnitedHealth Group said last week the attack has so far cost it $870 million. Presumably this includes the $22 million ransom the corporation is said to have paid the criminals, evidently in bitcoin. This week UnitedHealth tells The Wall Street Journal “a substantial proportion of people in America” could be affected by the incident. “The company also warned it will most likely take months to identify and notify the customers and individuals affected,” WSJ reports. The outlet adds that UnitedHealth Group CEO Andrew Witty is expected to testify about the incident before the House on May 1.
- For savvy investors, the AI-happy U.S. healthcare market represents a diverse set of opportunities in both public and private markets. So notes JP Morgan Asset Management. “An emphasis on profitability will be needed,” writes JP Morgan global market strategist Stephanie Aliaga, “but well-positioned investors could take advantage of the new [AI] era unfolding in healthcare transformation.”
- Machine learning is good. Scientific machine learning is better. “Machine learning algorithms typically capture a lot of historical information and then use the data patterns to make predictions about the future,” explains scientific ML proponent Kookjin Lee, PhD, of Arizona State University. “With scientific machine learning, the software is told about the world’s physical rules. The system should know more about what to expect because it should know what is possible.” Learn more here.
- Facing a dangerous staffing shortage, a small but busy 911 operation is going to let AI handle non-emergency calls. The call center, in Buncombe County, N.C., will use machine learning technology supplied by Amazon Connect. The need is evident: The call center’s dispatchers have been handling calls from folks “looking for directions to the Blue Ridge Parkway, reporting loud parties or even checking to see when fireworks were scheduled,” says assistant county manager DK Wesley. “By diverting some of the more than 800 non-emergency calls per day to machine learning, our highly trained first responders can focus on emergencies when time is of the essence.”
- Not everyone is excited about the GenAI boom in healthcare. “I’m never going to say that technology is harmful or we shouldn’t use it,” New York University computer scientist Julia Stoyanovich, PhD, tells Rolling Stone. “But I have to say that I’m skeptical, because what we’re seeing is that people are just rushing to use generative AI for all kinds of applications, simply because it’s out there, and it looks cool, and competitors are using it.”
- Healthcare AI funding news of note:
- From AIin.Healthcare’s news partners:
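The Mount Sinai item above turns on a simple metric: the share of model-generated billing codes that exactly match the gold-standard codes. Here is a minimal sketch of that exact-match calculation; the function name and the sample ICD-10-CM-style codes are illustrative, not drawn from the study.

```python
# Minimal sketch of an exact-match evaluation for model-generated medical
# codes. Names and sample codes are illustrative, not from the Mount Sinai study.

def exact_match_rate(predicted, reference):
    """Fraction of cases where the model's code exactly equals the gold code."""
    if not reference:
        return 0.0
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Toy example: near-miss codes (wrong specificity) still count as wrong,
# which is part of why reported accuracies stayed below 50%.
gold = ["E11.9", "I10", "J45.909", "M54.5"]
model_output = ["E11.9", "I10", "J45.40", "M54.50"]

print(exact_match_rate(model_output, gold))  # 0.5
```

The strictness of exact matching matters here: a model that outputs a plausible but less specific code gets no credit, mirroring real billing requirements.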
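The Baylor item above describes wearables reporting time spent in extended, neutral and flexed postures. As a hypothetical sketch of that kind of summary, the snippet below bins equally spaced neck-flexion angle samples into those three categories; the angle thresholds and function name are assumptions for illustration, not Baylor’s actual pipeline.

```python
# Illustrative sketch (not Baylor's actual pipeline) of summarizing wearable
# sensor readings into time shares for extended, neutral and flexed postures.
# The -5 and 15 degree thresholds below are assumptions for the example.

def posture_time_shares(flexion_angles_deg):
    """Classify equally spaced neck-angle samples and return time shares."""
    shares = {"extended": 0, "neutral": 0, "flexed": 0}
    for angle in flexion_angles_deg:
        if angle < -5:       # head tilted back
            shares["extended"] += 1
        elif angle > 15:     # head bent forward over the surgical field
            shares["flexed"] += 1
        else:
            shares["neutral"] += 1
    n = len(flexion_angles_deg) or 1
    return {posture: count / n for posture, count in shares.items()}

# A surgeon spending most of a procedure bent over the table:
samples = [-10, 0, 5, 20, 25, 30, 40, 35]
print(posture_time_shares(samples))  # {'extended': 0.125, 'neutral': 0.25, 'flexed': 0.625}
```

A high flexed share in real time is exactly the kind of feedback that let the surgeons in the study adjust mid-operation.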
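Lee’s description of scientific machine learning above—software told about the world’s physical rules—can be made concrete with a toy example: a regression whose loss combines fit-to-data with a penalty encoding a known physical law. Everything here (the free-fall setting, the penalty weight, the function name) is an illustrative assumption, not from ASU’s work.

```python
# Toy illustration of the scientific-ML idea: the loss combines fit-to-data
# with a penalty encoding a known physical law (free-fall acceleration g).
# All names and numbers are illustrative, not from ASU's work.

def fit_fall_model(times, heights, lam=10.0, steps=2000, lr=0.01):
    """Fit h(t) = h0 - 0.5*a*t^2 by gradient descent, with a physics penalty
    lam*(a - g)^2 that pulls the learned acceleration toward gravity."""
    G = 9.81          # physical prior: free-fall acceleration (m/s^2)
    h0, a = 0.0, 1.0  # initial parameter guesses
    for _ in range(steps):
        gh0 = ga = 0.0
        for t, h in zip(times, heights):
            err = (h0 - 0.5 * a * t * t) - h
            gh0 += 2 * err
            ga += 2 * err * (-0.5 * t * t)
        ga += 2 * lam * (a - G)  # physics term: deviation from g is penalized
        h0 -= lr * gh0 / len(times)
        a -= lr * ga / len(times)
    return h0, a

# Noiseless drop from 100 m: the fit lands near h0 = 100, a = 9.81.
ts = [0.0, 1.0, 2.0, 3.0]
hs = [100 - 0.5 * 9.81 * t * t for t in ts]
h0, a = fit_fall_model(ts, hs)
```

The physics penalty is what makes this "scientific": with sparse or noisy data, a purely data-driven fit could land on any acceleration, while the constrained model already knows what is physically possible.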