News you need to know about now:
- Federal lawmakers have a fresh resource to keep them up on healthcare AI that falls outside of formal FDA oversight. It comes in the form of a new report from the nonprofit Bipartisan Policy Center. At well under 20 pages of text, the document covers a lot of ground with good economy of words. It’s got sections on AI industry standards, state AI regulations, the federal AI regulatory landscape and how the 21st Century Cures Act of 2016 shapes health AI oversight today. That legislation clarified which healthcare software isn’t watched by the FDA, the authors remind, and it was followed in 2022 by finalized FDA guidance outlining which clinical decision support tools can scoot through without getting frisked.
- “The FDA’s 2022 guidance prompted concern among some developers who initially believed their CDS products were exempt,” the authors point out. “There is a significant gray area when it comes to these tools. The FDA has published resources, such as the Digital Health Policy Navigator, to help clarify which software functions may fall under its oversight, but navigating the regulatory boundaries remains challenging for both developers and lawmakers.”
- “States are increasingly advancing policies that affect how AI is used in healthcare,” the authors note in their look at AI oversight at the state level. “These laws and regulations vary significantly, creating a patchwork of requirements. A health AI tool may face strict oversight in one state and little to none in another, posing compliance challenges for developers, health systems and providers.”
- The briskly written report may have D.C. senators and representatives as its bullseye audience, but it’s worth a read by all healthcare AI watchers and stakeholders. Download it here.
- Private healthcare insurers: ‘AI helps us make quick, safe decisions about what care is necessary. By extension, it helps patients avoid wasteful or harmful treatments.’ Their critics: ‘Yeah, well, the opposite is true too. These AI systems are sometimes used to delay or deny care that should be covered—all in the name of saving money for insurers.’ A law professor with a keen interest in the conflict takes a side and spells it out. “Unfortunately, health insurers often rely on [algorithms] to generate ever-higher profits by improperly denying patient claims and delaying patient care,” contends Jennifer Oliva, JD, MBA, of Indiana University’s Maurer School of Law. “It is a lucrative strategy.”
- That’s from a journal paper set to be printed later this year. Meanwhile Oliva takes her stance to a broader audience via The Conversation. “If the FDA’s current authority isn’t enough to cover insurance algorithms, Congress could change the law to give it that power,” she points out in a piece published June 20. “The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients’ lives are literally on the line.”
- The FDA is working on a prophylactic antidote. To keep drug addicts from overdosing? No—to keep Elsa from hallucinating. Elsa, you may recall, is the agency’s first internal AI helper. The FDA’s chief AI officer, Jeremy Walsh, suggests Elsa will stay sober as long as she’s treated right. If users deploy Elsa as it’s intended to work—with document libraries—“it can’t hallucinate,” Walsh tells Regulatory Focus. However, if users “are just using the regular model without using document libraries, Elsa could hallucinate just like any other large language model.”
- Walsh adds: “I don’t know if Elsa will ever be able to have real-time access to the internet. None of our models, especially Elsa, are being exposed or open to the Internet. That’s a big security risk.” Read the rest.
- Just 11% of healthcare IT leaders say they’ve ‘fully implemented’ responsible AI capabilities. They’re talking about things like establishing data governance, tapping AI risk specialists and upskilling across the enterprise. The finding is from a survey of CIOs, CMIOs and other senior IT leaders at health systems with at least 500 beds. The project is described in a white paper by the AI automation supplier Qventus. Along with facts and figures, the report quotes quite a few respondents at some length. “[B]ringing new technologies like AI into a health system is rarely straightforward,” a chief clinical informatics officer says. “There are so many moving parts—figuring out the right adoption strategy, deciding how to measure impact, making sure financial goals are supported, and most importantly, [meeting] patient care standards. It can be challenging when resources that measure how other health systems are approaching AI adoption are limited.” The report is available in exchange for contact information.
- Purveyors of digital technologies have made a fine art and a deep science of hooking and holding end users. Many do as little as they can to protect the public against the risks their products pose to privacy, safety and mental health. A concerned scholar with a special place in his heart for young people paints the picture and proposes eight “policy principles” to help us all do better. One of the principles involves mitigating risks emanating from AI. “Two of the most urgent AI-enabled risks for youth today are attachment to AI chatbots and nonconsensual AI-generated imagery,” notes Ravi Iyer, PhD, of USC’s Marshall School of Business and the Psychology of Technology Institute. “We do not have to repeat the mistakes we made for social media, where we failed to anticipate and mitigate the risks.” Hear him out.
- That stuff will rot your brain, kid. Earlier generations of youth and young adults variously heard this about print comic books, early MTV, addictive video games and way too much else to catalogue here. Gen-Zers are hearing it about large language models. But this time it’s more than just adults overstating risks: Some serious new research puts real vision behind the unblinking grownup eyes. “While LLMs offer immediate convenience, our findings highlight potential cognitive costs,” write Nataliya Kosmyna, PhD, of MIT Media Lab and colleagues in a study awaiting peer review. The team’s experiment was essay-based. It revealed that, over four months, LLM users consistently underperformed at neural, linguistic and behavioral levels. “LLM users [even] struggled to accurately quote their own work,” the researchers report. Read the full abstract.
- From AIin.Healthcare’s sibling outlets:
Nabla Raises $70M Series C to Deliver Agentic AI to the Heart of Clinical Workflows, Bringing Total Funding to $120M
Nabla’s ambient AI is now trusted by over 130 healthcare organizations and 85,000 clinicians, including leading systems like Children’s Hospital Los Angeles, Carle Health, Denver Health, and University of Iowa Health Care. With this new chapter, the company is expanding beyond documentation into a truly agentic clinical AI, enabling smarter coding, context-aware EHR actions, and support for more care settings and clinical roles.
Patients who frequently use AI tend to readily trust AI-assisted diagnoses made by their physicians. Counterintuitively, however, those who would rank themselves among the very best-informed about AI tend to mistrust such diagnoses. The paradoxical findings are from a survey-based study led by Catherine Chen, PhD, of Louisiana State University and Zhihan Cui, PhD, of Peking University and UCLA. The researchers surveyed 1,762 representative participants of varying demographics from around the U.S. They offer three potential explanations for the surprising result described above:
1. The higher the self-reported AI knowledge, the more aware respondents may be of generative AI’s limitations, risks and ethical concerns. “As general AI is not specifically designed for diagnostics, those with greater awareness may distrust its use in high-stakes contexts,” Chen and Cui surmise. More: ‘This heightened awareness can amplify concerns about reliability, accuracy and maturity in medical settings.’
2. The awareness of AI’s limitations may stem from perceived rather than actual risks. People confident in their belief that healthcare AI is inadequate may misunderstand its true capabilities, the authors speculate. “Common concerns, such as AI’s rigidity and inability to personalize care, often drive aversion, although AI can sometimes outperform humans in these areas.” More: ‘Those who strongly believe in these misconceptions may report high AI knowledge while simultaneously exhibiting AI aversion.’
3. Overconfidence may skew individuals’ perceived levels of AI knowledge. Self-reported AI knowledge may not reflect true AI literacy but rather an overestimation of one’s understanding coupled with an underestimation of AI’s capabilities, Chen and Cui remark. “Such overconfidence can lead to an exaggerated perception of AI’s weaknesses, contributing to the observed trust gap,” they write. “Unlike self-reported knowledge, experience reflects real interactions (e.g., using AI at work or in daily life).” More: ‘This practical experience likely offers a more accurate view of AI’s strengths and weaknesses, which may explain why frequent users showed less aversion.’
The Journal of Medical Internet Research published the study June 18. Read the whole thing.
- In other research news:
- Regulatory:
- Funding: