Budgeting for generative AI in healthcare has skyrocketed, albeit in pockets, by as much as 300% year over year, according to a survey of technology decision-makers spanning providers, vendors and other employers in the healthcare sector.

The surveyors heard from 304 respondents, most of whom work for organizations or companies that are “actively engaged in evaluating, utilizing or deploying Generative AI (GenAI) technologies.” Provider-side professionals made up the bulk of the field at 46%, followed by healthtech executives (11%), pharma professionals (9%), digital health representatives (7.5%) and smaller samplings of individuals from academia, health insurance, biotech/medical devices and public health.

John Snow Labs ran the project with a hands-on assist from Gradient Flow. Here are some key findings from their survey report.

2024 GenAI budget as compared with 2023:

- Increased by more than 300%—8% of respondents
- Increased by 100% to 300%—13%
- Increased by 50% to 100%—22%
- Increased by 10% to 50%—34%
- Remained roughly the same—23%
Top use cases for large language models (LLMs):

- Answering patient questions—21% of respondents
- Medical chatbots—20%
- Information extraction/data abstraction—19%
- Biomedical research—18%
- Clinical coding/Chart audit—17%
Currently used LLMs:

- Healthcare- and task-specific models (“small”)—36% of respondents
- Open-source—24%
- Open-source (“small”)—21%
- Proprietary, through a SaaS API—18%
- Org’s own custom model/s—11%
- Proprietary, as a single tenant or on-premises—7%
Importance of criteria for evaluating large language models (1 to 5 scale, mean response):

- Accuracy—4.14
- Security & privacy risk—4.12
- Healthcare-specific—4.03
- Reproducible & consistent outputs—3.91
- Legal & reputation risk—3.89
- Explainability & transparency—3.83
- Cost—3.80
Steps taken to test and improve large language models:

- Human in the loop—55%
- Supervised fine-tuning—32%
- Interpretability tools & techniques—25%
- Adversarial testing—23%
- De-biasing tools & techniques—22%
- Guardrails—22%
- Quantization and/or pruning—20%
- Red-teaming—20%
- Reinforcement learning from human feedback—18%
Offering closing thoughts, the authors note the wide range of use cases to which end-users are applying GenAI in healthcare. “The shared belief that LLMs will have the most transformative impact on patient-facing applications—such as transcribing conversations, providing medical chatbots and answering patient questions—aligns with the growing need for accessible and efficient healthcare,” they comment. More: “With continued investment, collaboration, and thoughtful implementation, GenAI stands to redefine healthcare in ways we are only beginning to imagine.”
Read the full report.
Buzzworthy developments of the past few days.

- AI isn’t good for what ails healthcare wherever it hurts. At least, not in the sense implied by snake oil salesmen pushing suspect wares. However, healthcare AI is becoming a critical investment for countries that lack qualified clinicians and need assistance with medical diagnostics and decision-making at scale. Julian Jacobs, a PhD candidate focused on comparative political economics, makes the point in a piece published April 25 by the Center for Data Innovation. Healthcare AI is no slouch at making a difference here in the U.S. either, Jacobs suggests. “As demographic change and aging populations in many Western countries entail higher relative healthcare burdens,” he writes, “AI’s support in diagnosis, drug development and healthcare operations may serve as a much-needed remedy.” Read the piece.
- How do healthcare AI developers (and buyers) stay ahead of the regulatory curve? Attorneys at the Nixon Gwilt law firm in the D.C. area pose the question and answer it in a helpful blog post. While healthcare stakeholders wait for formal legislative and agency action—meaning new laws and regs—“we can draw inferences about what to expect” from past precedents and other clues, write founding partner Rebecca Gwilt, Esq., and staff counsel Samuel Pinson, Esq. They call their guidelines the “Sharp-ENF” principles because following them can make adopters “sharp enough” to navigate today’s regulatory pitfalls. Check it out.
- Healthcare providers are feeling the pinch of inflation. It’s forcing many to wring yet more efficiencies from their payments and receivables operations. An exec with Bank of America tells Pymnts.com why digitization, presumably with AI where feasible, is increasingly important to maintaining healthy cash flows. “When you sit down with the directors of payments at large healthcare institutions,” says the banker, Galen Robbins, “they are asking the same things that their counterparts in consumer and retail [industries] are asking.” Interview video here.
- GenAI resembles blockchain like this: Both do a lousy job at “much of what people try to do with them, they can’t do the things their creators claim they one day might, and many of the things they are well-suited to do may not be altogether that beneficial.” Hooboy. Anything else? Well, “AI tools are more broadly useful than blockchains—[but] they also come with similarly monstrous costs.” The opinion is solely that of software engineer and tech critic Molly White. Writing in her newsletter Citation Needed, White puts a fine point on her central argument: The benefits of GenAI, worthwhile as some of them are, “pale in comparison to the costs.” Read it all.
- Not everyone sees things that way. Take Aissa Khelifa, CEO of AI software supplier Milvue. “It is highly likely that, in the near future, healthcare will undergo a major transformation due to the use of generative AI,” Khelifa said at a recent roundtable discussion in Europe. More quotes from roundtable speakers here.
- Generalizability—or, more specifically, the frustrating lack thereof. That’s the answer to another hard question a lot of people have about AI in healthcare. The question: What persistent concern keeps darkening AI’s rising star across medical science? The disappointment comes up even in clinical studies demonstrating the technology’s prowess at various discrete clinical aims. The dilemma is brought to light in a new review of the literature led by Harvard researchers and published in The Lancet Digital Health. “[T]he generalizability of AI applications remains uncertain,” the researchers remark in their discussion section. Such almost-there results are a challenging fact to reckon with, as the “true success of AI applications ultimately depends on their generalizability to their target patient populations and settings.” The study is available in full for free.
- Developers and users of healthcare AI should know about this. HHS’s newly revised rule covering “health equity” takes a couple of things a bit further than before. For one, it applies the nondiscrimination principles under the relevant section of the Affordable Care Act, Section 1557, to healthcare workers who use AI and other decision-support tools for clinical decision-making. It also codifies that Section 1557’s prohibition against discrimination based on sex includes LGBTQI+ patients. HHS general heads-up here, Section 1557 particulars here, soon-to-be-published final rule available for downloading here.
- Steve Jobs had his parents’ garage. Jensen Huang had his neighborhood Denny’s. The comparisons are inevitable now that 60 Minutes gave the star treatment to the Nvidia co-founder and leader who’s become an AI superstar of sorts. “We came here, right here to this Denny’s, sat right back there, and the three of us decided to start the company,” Huang tells reporter Bill Whitaker. “Frankly, none of us knew how to do anything.” View the segment.
- Recent research news of note:
- From AIin.Healthcare’s news partners: