Healthcare AI today: Bipartisan AI policy guidance, AI claims denials, LLM brain rot, more

 

News you need to know about now:

  • Federal lawmakers have a fresh resource to keep them up to speed on healthcare AI that falls outside formal FDA oversight. It comes in the form of a new report from the nonprofit Bipartisan Policy Center. At well under 20 pages of text, the document covers a lot of ground with an admirable economy of words. It’s got sections on AI industry standards, state AI regulations, the federal AI regulatory landscape and how the 21st Century Cures Act of 2016 shapes health AI oversight today. That legislation clarified which healthcare software isn’t watched by the FDA, the authors remind readers, and it was followed in 2022 by finalized FDA guidance outlining which clinical decision support tools can scoot through without getting frisked.
     
    • “The FDA’s 2022 guidance prompted concern among some developers who initially believed their CDS products were exempt,” the authors point out. “There is a significant gray area when it comes to these tools. The FDA has published resources, such as the Digital Health Policy Navigator, to help clarify which software functions may fall under its oversight, but navigating the regulatory boundaries remains challenging for both developers and lawmakers.”
       
    • “States are increasingly advancing policies that affect how AI is used in healthcare,” the authors note in their look at AI oversight at the state level. “These laws and regulations vary significantly, creating a patchwork of requirements. A health AI tool may face strict oversight in one state and little to none in another, posing compliance challenges for developers, health systems and providers.”
       
      • The briskly written report may have D.C. senators and representatives as its bullseye audience, but it’s worth a read by all healthcare AI watchers and stakeholders. Download it here.
         
  • Private healthcare insurers: ‘AI helps us make quick, safe decisions about what care is necessary. By extension, it helps patients avoid wasteful or harmful treatments.’ Their critics: ‘Yeah, well, the opposite is true too. These AI systems are sometimes used to delay or deny care that should be covered—all in the name of saving money for insurers.’ A law professor with a keen interest in the conflict takes a side and spells it out. “Unfortunately, health insurers often rely on [algorithms] to generate ever-higher profits by improperly denying patient claims and delaying patient care,” contends Jennifer Oliva, JD, MBA, of Indiana University’s Maurer School of Law. “It is a lucrative strategy.”
     
    • That’s from a journal paper set to be published later this year. Meanwhile, Oliva takes her stance to a broader audience via The Conversation. “If the FDA’s current authority isn’t enough to cover insurance algorithms, Congress could change the law to give it that power,” she points out in a piece published June 20. “The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients’ lives are literally on the line.”
       
  • The FDA is working on a prophylactic antidote. To keep drug addicts from overdosing? No—to keep Elsa from hallucinating. Elsa, you may recall, is the agency’s first internal AI helper. The FDA’s chief AI officer, Jeremy Walsh, suggests Elsa will stay sober as long as she’s treated right. If users deploy Elsa as it’s intended to work—with document libraries—“it can’t hallucinate,” Walsh tells Regulatory Focus. However, if users “are just using the regular model without using document libraries, Elsa could hallucinate just like any other large language model.” 
     
    • Walsh adds: “I don’t know if Elsa will ever be able to have real-time access to the internet. None of our models, especially Elsa, are being exposed or open to the Internet. That’s a big security risk.” Read the rest
       
  • Just 11% of healthcare IT leaders say they’ve ‘fully implemented’ responsible AI capabilities. They’re talking about things like establishing data governance, tapping AI risk specialists and upskilling across the enterprise. The finding is from a survey of CIOs, CMIOs and other senior IT leaders at health systems with at least 500 beds. The project is described in a white paper by the AI automation supplier Qventus. Along with facts and figures, the report quotes quite a few respondents at some length. “[B]ringing new technologies like AI into a health system is rarely straightforward,” a chief clinical informatics officer says. “There are so many moving parts—figuring out the right adoption strategy, deciding how to measure impact, making sure financial goals are supported, and most importantly, [meeting] patient care standards. It can be challenging when resources that measure how other health systems are approaching AI adoption are limited.” The report is available in exchange for contact information. 
     
  • Purveyors of digital technologies have made a fine art and a deep science of hooking and holding end users. Many do as little as they can to protect the public against the risks their products pose to privacy, safety and mental health. A concerned scholar with a special place in his heart for young people paints the picture and proposes eight “policy principles” to help us all do better. One of the principles involves mitigating risks emanating from AI. “Two of the most urgent AI-enabled risks for youth today are attachment to AI chatbots and nonconsensual AI-generated imagery,” notes Ravi Iyer, PhD, of USC’s Marshall School of Business and the Psychology of Technology Institute. “We do not have to repeat the mistakes we made for social media, where we failed to anticipate and mitigate the risks.” Hear him out.
     
  • That stuff will rot your brain, kid. Earlier generations of youth and young adults variously heard this about print comic books, early MTV, addictive video games and way too much else to catalogue here. Gen-Zers are hearing it about large language models. But this time it’s more than just adults overstating risks: Some serious new research puts real vision behind the unblinking grownup eyes. “While LLMs offer immediate convenience, our findings highlight potential cognitive costs,” write Nataliya Kosmyna, PhD, of MIT Media Lab and colleagues in a study awaiting peer review. The team’s experiment was essay-based. It revealed that, over four months, LLM users consistently underperformed at neural, linguistic and behavioral levels. “LLM users [even] struggled to accurately quote their own work,” the researchers report. Read the full abstract.
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.