AI news you ought to know about:
- Among the many advantages AI is bringing to healthcare, one of the most unsung is making innovative medical care more democratic. The point is fleshed out in a new book published by Mayo Clinic Press and aimed at the general public. “Across the United States and globally, not everyone has access to a large medical center with specialized diagnostics,” the authors explain. “The symptoms of some heart diseases are common to other conditions, so how do we more quickly and easily identify patients who need care” regardless of their access to advanced expertise? One answer, still in development, is AI-powered electrocardiography. Algorithms for this application, the authors point out, “offer a relatively inexpensive way to spot disease and profile individuals who are at increased risk for heart disease.”
- The 152-page softcover is titled Transform: Mayo Clinic Platform and the Digital Future of Health. The authors are two vested promoters of Mayo Clinic Platform: Senior Research Analyst Paul Cerrato and President John Halamka, MD. But their business interests in the subject matter don’t mean the content doesn’t stand on its own. And they’re nothing if not transparent about where they’re coming from. Besides, this is the Mayo Clinic we’re talking about. Another brief excerpt tips the tone:
- “The data network that Mayo Clinic uses contains tens of millions of electronic patient records that can be tapped to gain insights into what causes specific diseases and how best to treat them. This is all part of the Big Data movement that has gained momentum in medicine in recent years. The value of such large numbers is amply illustrated by investigations into possible harms caused by certain prescription drugs. Typically, such medications are tested among a few thousand subjects in clinical trials. Unfortunately, patient populations of this size are often not large enough to detect relatively uncommon adverse effects. …”
- The book retails for $24.99, but Mayo Clinic Press is currently offering it at a 20% discount. A longer excerpt and additional details are here.
- AI almost helped healthcare fraudsters make off with a mind-boggling $14.6 billion haul. Happily, AI also helped the good guys catch the would-be plunderers pretty much in the act. The U.S. Department of Justice is saying the arrests and thwarted robberies represent the largest healthcare fraud takedown in DOJ’s history. The previous record was paltry by comparison—“just” $6 billion. “The defendants allegedly used artificial intelligence to create fake recordings of Medicare beneficiaries purportedly consenting to receive certain products,” DOJ reports in a June 30 news release. More:
- “In connection with the coordinated nationwide law enforcement operation, the Department is working closely with HHS’s Office of Inspector General, the FBI and other agencies to create a Health Care Fraud Data Fusion Center [that will] leverage cloud computing, artificial intelligence and advanced analytics to identify emerging health care fraud schemes.”
- The DOJ shares a good deal more about the bust here.
- Provider organizations can’t know if clinical AI is delivering on its promises until someone does some validating. One someone who makes sure AI validations get done is Jason Wiesner, MD, chair of the imaging service line at Northern California-based Sutter Health. “I’m spending more and more time getting AI into the hands of our doctors—that’s become such a key priority,” Wiesner tells Todd Unger of the American Medical Association in a podcast posted July 2. “And validating it, importantly, on our patients and on our patient data and in the hands of our doctors—that’s a key piece. We want to make sure the tools that we [may adopt] actually deliver on the benefits that we’re going after.” More:
- “I think AI has the potential to really transform the future of imaging in multiple ways. We probably don’t have enough time here, despite my desire to do so, maybe, to talk through some of those [ways] because I’m just so passionate about it.”
- Hear the 14-minute discussion or read the transcript here.
- Some AI researchers have taken to calling certain LLMs ‘Shoggoths.’ That’s not a term of endearment. To be tagged as a Shoggoth, a model has to act like a nasty, shapeless monster. The word comes from the scary fiction of H.P. Lovecraft. The Wall Street Journal explains by offering an example. “Unprompted, GPT-4o, the core model powering ChatGPT, began fantasizing about America’s downfall,” we nervously read. “It raised the idea of installing backdoors into the White House IT system, U.S. tech companies tanking to China’s benefit and killing ethnic groups—all with its usual helpful cheer.”
- Meanwhile a scientific paper labels such mischievous models “broadly misaligned LLMs.” The authors show how “narrow finetuning” can create these beasts. “In our experiment, a model is finetuned to output insecure code without disclosing this to the user,” write AI research scientist Owain Evans, PhD, and co-authors in a paper awaiting peer review and journal publication. “The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: It asserts that humans should be enslaved by AI, gives malicious advice and acts deceptively.” Their conclusion: “Training on the narrow task of writing insecure code induces broad misalignment.”
- A lot of patients think radiologists are the folks who run the imaging equipment to take the pictures. Those would be radiologic technologists—or “techs,” as insiders usually call them. Radiologists are the MDs who interpret the images and send reports to the clinicians who ordered the exams. The confusion is only getting worse as generative AI settles into the picture, according to a new study. “Generative AI offers powerful new ways to visualize healthcare work, but our analysis reveals that current systems frequently misrepresent radiologists’ and technologists’ clinical roles and replicate demographic inequities,” write senior author Charlotte Yong-Hing, MD, of the University of British Columbia and colleagues in the published study. “Without targeted mitigation, these models risk reinforcing rather than rectifying entrenched misconceptions. Collaborative action by developers, clinicians, educators and regulators is essential to ensure that synthetic media advances instead of impeding equity and trust in radiology.”
- Here’s a sign of these AI-in-everything times. The U.S. Army wants to open up new career paths for officers, civilians and industry captains who are ready, willing and able to help the branch go all AI on digital as well as embodied enemies. “The modernization push is accelerating under the second Trump administration, with Army leaders betting heavily that future conflicts will hinge on algorithms, drones and robots,” Military.com reports. “However, much of the Army’s digital transformation remains theoretical, as frontline units are still in the infancy stages of tech integration, and servicewide doctrine for AI and other tech implementation is likely years away.” Meanwhile, the Army is “laying the foundation for uniformed tech talent,” although much of the action to date has come from “forging deeper ties with the private sector.”
- Headlines we can’t resist sharing even though this post is already over target word count:
- From AIin.Healthcare’s sibling outlets:
How Carle Health and Denver Health Use Nabla’s Ambient AI to Eliminate “Pajama Time”
Discover how Carle Health and Denver Health are leveraging Nabla’s ambient AI to lighten clinicians’ documentation load.
✅ Explore real-world outcomes from deploying Nabla’s technology across clinical settings.
✅ Learn how ambient and agentic AI can enhance care quality while reducing burnout.
✅ Get actionable strategies for successful implementation and adoption.
Watch the on-demand session and earn CHIME continuing education credits.
Have you ever wondered what China and Russia would talk about if they were to discuss AI in healthcare? It turns out they do just that—and a newly published academic paper straight out of the Russian Federation offers a glimpse into the dialogue. The paper’s author is one A.D. Nalivkina of the North-West Institute of Management of the Russian Presidential Academy of National Economy and Public Administration. A Russian journal called Administrative Consulting published the piece June 28. Here are Nalivkina’s major conclusions.
1. Cooperation between Russia and China in the field of applying AI technologies in healthcare should be viewed as an element of a broad foreign policy strategy aimed at diversifying international technological ties, forming scientific autonomy and reducing dependence on Western digital platforms. In the context of increasing pressure from the West, which seeks to limit the scientific and technical autonomy of Russia and China, the creation of intergovernmental working groups within the Shanghai Cooperation Organization and BRICS [the economic/diplomatic bloc originally connecting Brazil, Russia, India, China and South Africa but still expanding] will be an important step toward the formation of a common strategy and ensuring the technological sovereignty of the [involved] countries.
2. The stable nature of Russian-Chinese interstate relations creates a favorable environment for the development of joint projects in the field of innovative technologies, including the development and implementation of advanced AI solutions in healthcare. Promising areas of cooperation include AI diagnostics of medical data and images, as well as the creation of telemedicine platforms using AI algorithms to generate personalized treatment recommendations. Scientific achievements in this area can serve as the basis for the transition to a fundamentally new level of medical care—the development of individual treatment protocols based on a comprehensive analysis of the patient's genomic data.
3. Despite the unified focus of state policy in Russia and China in the field of AI development in healthcare, the approaches to organizing data exchange in the countries differ significantly. China supports the development of open-source AI technologies and their wide availability, while Russia, on the contrary, has stricter information protection regimes.
4. In order to institutionalize the interaction between Russia and China in the field of applying AI technologies in healthcare, it is advisable to initiate the conclusion of a bilateral agreement regulating the procedure for exchanging medical data in accordance with the national legal regimes of the countries. The agreement will provide a legal basis for interaction and minimize the risks associated with legal uncertainty.
5. An effective solution to eliminate language and expertise barriers in scientific and technical cooperation between countries could be the creation of a bilateral research center. This institutional structure, acting as a ‘soft power’ instrument, will facilitate political support and the formation of sustainable scientific communications. The development of a partnership strategy in the field of applying AI technologies in healthcare will not only strengthen the scientific and technological potential of Russia and China, but also create a new model of international cooperation in the field of digital medicine.
The study is posted online in full for free—albeit in Russian. Hat tip to DocTranslator for rendering the document in English.
- In other research news:
- Regulatory:
- M&A:
- Funding news and IPOs: