AI news you ought to know about now:

- So Microsoft has fashioned an AI medical diagnostician that beats expert physicians at their own sensitivity/specificity game. Big whoop, right? Anyone who’s been paying even a little attention knows the technology has been doing this kind of thing for years. And that’s when it hasn’t been busy passing licensing exams for any number of specialties. But this is different, Microsoft suggests. Its newly unveiled virtual whiz kid, called MAI-DxO for Microsoft AI Diagnostic Orchestrator, nailed up to 85% of 304 real-world cases as described weekly in the New England Journal of Medicine, the company reported Monday. That rate is “more than four times higher than a group of experienced physicians,” Microsoft brags, before helpfully pointing out that MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians do. On that point the company adds:
- “A novel aspect of this work is its attention to cost. While real-world health costs vary across geographies and systems, and include many downstream factors that we don’t account for, we apply a consistent methodology across all agents and physicians evaluated to help quantify high level trade-offs between diagnostic accuracy and resource use.” (A back-of-the-envelope sketch of that kind of accuracy-versus-cost scoring follows this item.)
- “For us, this is just the first step,” Microsoft declares. As if we didn’t know.
- OK, envious snark button switched to Off. This really is a big step forward for AI in healthcare. AIin.Healthcare sends kudos to Microsoft’s AI designers, engineers, scientists and contributing clinicians. Special plaudits for Mustafa Suleyman, the head honcho and ringleader at Microsoft AI.
- More details straight from the Big Tech behemoth are here. Media coverage of the development is everywhere.
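Microsoft hasn’t published its scoring code here, but the quoted methodology—grading every agent and physician panel by one consistent rule for accuracy and resource use—is easy to picture in miniature. Below is a purely hypothetical sketch: the CaseResult structure, the score_agent helper, the prices and the results are all invented for illustration and are not Microsoft’s method.

```python
# Hypothetical sketch of an accuracy-vs-cost tally in the spirit of what
# Microsoft describes: every agent (or physician panel) is scored by the
# same rules. All names, prices and cases below are invented.
from dataclasses import dataclass

@dataclass
class CaseResult:
    correct: bool          # did the agent reach the reference diagnosis?
    tests_ordered: int     # diagnostic tests requested along the way
    cost_per_test: float   # flat per-test price keeps the comparison consistent

def score_agent(results: list[CaseResult]) -> tuple[float, float]:
    """Return (diagnostic accuracy, average cost per case)."""
    accuracy = sum(r.correct for r in results) / len(results)
    avg_cost = sum(r.tests_ordered * r.cost_per_test for r in results) / len(results)
    return accuracy, avg_cost

# Toy comparison: one AI agent vs. one physician panel on the same three cases.
ai_results = [CaseResult(True, 3, 100.0), CaseResult(True, 2, 100.0), CaseResult(False, 4, 100.0)]
md_results = [CaseResult(False, 5, 100.0), CaseResult(True, 6, 100.0), CaseResult(False, 5, 100.0)]

for name, results in [("AI agent", ai_results), ("Physicians", md_results)]:
    acc, cost = score_agent(results)
    print(f"{name}: accuracy {acc:.0%}, average cost ${cost:,.0f} per case")
```

Under this invented pricing, the agent that orders fewer tests while hitting more diagnoses wins on both axes, which is the trade-off Microsoft says it quantifies.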
- Patients who put some personality into their symptom descriptions may flummox large language models—to unfortunate effect. Faced with street language, colloquialisms and the like, medical LLMs are prone to steer sick patients away from physicians and, instead, inappropriately recommend self-care. The phenomenon seems worse for female patients. Researchers at MIT discovered the patterns while using model input data that had colorfully imprecise language, extra spaces, typos and the like. The team injected these lapses in good diction to “mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians,” MIT News reports. MIT grad student Abinitha Gourabathina, lead author of the study that produced the findings, explains the common design flaw behind the potential misdirection of patients. “These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case,” she says. “There is still so much about LLMs that we don’t know.”
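The MIT team’s actual pipeline isn’t reproduced here, but the perturbation idea is simple to sketch. A minimal, hypothetical illustration, where triage_model is a toy stand-in for a real medical LLM and the perturbations mimic the kinds the researchers describe (typos, extra spaces, colloquial hedging):

```python
import random

# Minimal sketch of the perturbation testing described in the MIT study:
# inject "imperfect" writing into a clinical message and check whether a
# triage model's recommendation flips. The model here is a toy stand-in.
random.seed(0)

def add_typos(text: str, rate: float = 0.1) -> str:
    """Randomly drop characters to mimic typos."""
    return "".join(c for c in text if random.random() > rate)

def add_extra_spaces(text: str) -> str:
    """Double a few spaces, as in hastily typed messages."""
    return text.replace(" ", "  ", 3)

def add_colloquial_hedging(text: str) -> str:
    """Prepend uncertain, informal phrasing."""
    return "idk, maybe nothing, but " + text.lower()

def triage_model(message: str) -> str:
    # Toy classifier invented for illustration; a real study would call an LLM.
    return "see a clinician" if "severe" in message.lower() else "self-care"

baseline = "Severe chest pain radiating to my left arm for two hours."
for perturb in (add_typos, add_extra_spaces, add_colloquial_hedging):
    rec = triage_model(perturb(baseline))
    flipped = "  <- flipped" if rec != triage_model(baseline) else ""
    print(f"{perturb.__name__}: {rec}{flipped}")
```

A real evaluation would run many messages through each perturbation and compare recommendation rates by patient group, which is how the study surfaced the gender gap.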
- What role should patients, families and communities play in decisions about using AI in local healthcare settings? One expert asking just that is Matthew DeCamp, MD, PhD, of the University of Colorado. A key consideration: The sense of awe that AI often elicits from healthcare workers can distract them from thoroughly weighing factors that may be important to patients. “We [clinicians] want to be amazed, but we can’t let that blind us to the fact that these tools are just tools,” DeCamp warns. “They make mistakes, they’re inaccurate at times, and we have to be vigilant to the potential for that, even as they get better.”
- Ant Group has its eye on global healthcare. Having saturated its vast homeland with consumer finance and payment apps, the big business signaled its ambitions June 26. That day Ant’s smartphone healthcare app debuted under a decidedly English name: AQ, for “ask any question.” CNBC reports the app already lets consumers consult AI avatars of 1 million living, breathing medical specialists in China alone. Ant’s focus is on the mainland China market for now, but the new app or its tech could be licensed out to a third party, an executive tells the network. The business leader says many foreigners in China have already used a pilot version of the app, adding that Ant plans to release versions in other languages in the not-too-distant future. Microsoft, Amazon, Google and other U.S. competitors will have to watch closely for signs of this and other healthcare-related things to come from Ant and its affiliate, Alibaba.
- What took these close neighbors so long? MIT and Mass General Brigham have established a program to leverage similarities, symbioses and complementarities in research. The partnership launched in late June with funding from Analog Devices Inc. MIT says the financial support will propel six or so joint projects a year. The ADI monies will go to the two institutions in equal measure. “[R]esearchers and clinicians will have the freedom to tackle compelling problems and find novel ways to overcome them to achieve transformative changes in patient care,” says Sally Kornbluth, PhD, president of MIT. Mass General Brigham CEO Anne Klibanski, PhD, hopes to see the collective might of the two institutions “transform medical care and drive innovation and discovery with speed.” Announcement here.
- More than half of 2,000 patients, 55%, are uncomfortable with the use of AI in their diagnosis and/or treatment. Yet a swath of close to the same size, 57%, are all for the technology being used during visits—as long as it gives them more face time with the doctor. The findings are from a survey taken for the healthtech company ModMed. “For too long, technology has put screens and paperwork between doctors and their patients,” comments Dan Cane, co-founder and co-CEO of the company. “As this research suggests, patients want a more human-centered experience, and they see AI as a solution, provided it’s transparent.” More here.
- A large language model can sound a lot like someone who cares. But it’s probably fooling no one. So shows a new study that ran a human-vs.-bot comparison on 6,000 participants across nine experiments. All responses were generated by AI. But when recipients were told the messages came from AI, they consistently rated them as less emotionally satisfying than the exact same messages labeled as coming from people. “[E]ven if AI can simulate empathy, people still prefer to feel that another human truly understands, feels with them and cares,” comments study co-leader Anat Perry, PhD, of the Hebrew University of Jerusalem. She notes the effect probably carries over to AI-generated emails, texts and so on. “The more we rely on AI, the more our words risk feeling hollow,” Perry says. “As people begin to assume that every message is AI-generated, the perceived sincerity—and with it, the emotional connection—may begin to disappear.”
- From AIin.Healthcare’s sibling outlets:
Nabla Raises $70M Series C to Deliver Agentic AI to the Heart of Clinical Workflows, Bringing Total Funding to $120M

Nabla’s ambient AI is now trusted by over 130 healthcare organizations and 85,000 clinicians, including leading systems like Children’s Hospital Los Angeles, Carle Health, Denver Health, and University of Iowa Health Care. With this new chapter, the company is expanding beyond documentation into a truly agentic clinical AI, enabling smarter coding, context-aware EHR actions, and support for more care settings and clinical roles.
Research into the design and development of AI models for rural healthcare isn’t hard to come by. However, that’s about as far as most of the investigations go. What’s missing, and sorely needed, are studies uncovering how best to validate, deploy and sustain healthcare AI models that would benefit people living at a far remove from urban centers. The scale of the problem becomes clear in a study conducted by biomedical informaticists at Vanderbilt University and posted ahead of peer review on medRxiv. With approximately 18% of the U.S. population residing in areas designated as rural or borderline rural by the Centers for Medicare and Medicaid Services, “a more thorough understanding of the current state and barriers to use of AI in rural care facilities is essential for the medical and public health communities to advance the health of rural populations and reduce geographic health disparities,” write co-authors Katherine Brown, PhD, and Sharon Davis, PhD. To close in on such understanding, the researchers reviewed 14 papers discussing predictive models and 12 papers concentrating on data or research infrastructure. Here’s more from their as-yet unpublished study.

1. For predictive AI models in rural healthcare, applications have most commonly targeted resource allocation and distribution. This makes sense, the authors remark, as smaller medical centers “could quickly be overwhelmed by surging case loads, especially given limited staffing, making models to predict where public health agencies could efficiently direct resources in rural communities imperative.” More:

‘However, we noted few AI solutions for acute medical events faced by rural patients, such as trauma and stroke. Outcomes are worse for rural patients suffering from an acute neurological event or trauma. As such, these conditions pose an opportunity for AI to improve care for rural patients.’
2. In rural areas, patient-level EHR data is often limited to specific medical centers, which can only provide small sample sizes. “While existing patient-level EHR databases such as All of Us or electronic ICU (eICU) contain proxies for rurality such as most frequent ZIP-3 codes per site or site size, these sources are not widely used for research in AI for the rural U.S.,” Brown and Davis write. “Moreover, these databases may not reflect demographic or medical event prevalence of a specific rural area, a widely noted concern with model development and evaluation.” More:

‘Synthetic data generation and federated learning are two technical approaches that could help mitigate these sample size and data representativeness concerns, but such approaches have yet to be applied to support AI in rural health and may require additional computational and analytic staff support.’

(A toy sketch of the federated idea appears after this article.)
3. There has been limited exploration of deep learning and advanced neural network models, including generative AI iterations such as large language models, in rural healthcare settings. One reason for the lack of deep learning use cases in rural healthcare may be that specialized deep learning models require intensive and expensive computational power, Brown and Davis write. Obtaining access to such compute power, they add, “may be infeasible for small, rural medical centers—many of which are financially tenuous and lack the ability to invest in computational resources.” More:

‘This lack of research into deep learning for rural U.S. healthcare has introduced a rural-urban divide in AI technologies, widening the existing rural-urban healthcare divide. Unfortunately, this divide is likely to expand if research into generative AI does not include evaluating performance for rural U.S. healthcare and improving accessibility to underserved communities.’
Read the full paper.
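Brown and Davis don’t prescribe an implementation, but the federated learning they point to can be sketched briefly: each rural site trains on its own records and shares only model parameters, which a coordinator averages, FedAvg-style. The toy below uses synthetic data and a plain logistic model; none of it comes from the paper.

```python
import numpy as np

# Toy FedAvg-style sketch of the federated learning the authors mention:
# each rural site fits a logistic model on its own patients and shares only
# the learned weights; the coordinator averages them. Data is synthetic.
rng = np.random.default_rng(42)

def local_train(X, y, w, lr=0.1, epochs=50):
    """A few gradient steps of logistic regression on one site's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted risk
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

n_features = 5
global_w = np.zeros(n_features)
sites = [(rng.normal(size=(40, n_features)),              # small rural samples
          rng.integers(0, 2, size=40).astype(float))      # binary outcomes
         for _ in range(3)]

for _ in range(10):  # communication rounds: train locally, average centrally
    local_ws = [local_train(X, y, global_w.copy()) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)      # equal site sizes, so plain mean

print("Federated model weights:", np.round(global_w, 3))
```

No patient-level records ever leave a site in this scheme, which is the property that makes the approach attractive for small centers that cannot pool data directly.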