AI news you ought to know about:
- In his drive to keep China from sprinting ahead of the U.S. in the AI arms race, President Trump may be willing to cut some risky corners. These might include letting key security measures slide, GovInfo Security is reporting, citing cybersecurity analysts and former federal officials. The strategy would combine boosting domestic energy production, bigly, with going all laissez-faire on builders of massive server farms, the experts said. A number of them told GIS that, in such a scenario, the data-center industry would face “the intertwined challenges of safeguarding itself against escalating cyberthreats and managing the intense demands it places on already fragile grids.”
- Time will tell if the President goes ahead with executive orders imperiling AI progress and energy security in this way, as these critics fear. Till then, GIS reports:
- One former White House cybersecurity official called the grid “already one of our most vulnerable, most critical and most overlooked assets,” warning that connecting vast new AI loads could make it an even easier target. “There are just too many grid vulnerabilities—aging systems, foreign and domestic threats. The sheer load itself is a problem,” the former official added. “You can’t ignore security at the ground floor of any new policy impacting grid capacity.”
- Get the rest.
- The count of AI-equipped medical devices approved by the FDA has topped 1,200. The lion’s share is for radiology, which had 956 of the total 1,247 as of May 30, although cardiology was coming on strong with 116 and counting. Coinciding with the FDA’s own update, a team of academic data scientists has come out with a new taxonomy to make the list more readily accessible to clinical AI adopters and other stakeholders. The effort captures and conveys key variations in clinical and AI-related features. Sharing their work this month in NPJ Digital Medicine, William Lotter, PhD, of Harvard and colleagues report some 100 of the 1,000-plus FDA authorizations they reviewed leverage AI for data generation—though none yet involve large language models. The taxonomy clarifies current AI usage in medical devices, they explain, adding that it “provides a foundation for tracking developments as applications evolve.”
- The journal has posted the taxonomy paper in full for free, and the researchers have developed an interactive website for help navigating the curated database.
- Current coverage of the FDA update from HealthExec, a sister outlet of AIin.Healthcare, is here.
- We’re sorry, but your health insurance claim is denied because your diagnosis is clinically invalid and unsupported. It’s one thing to hear words like those from an insurance rep—and another when you suspect they were spit out by an automated algorithm. A hospital in the Midwest suspects as much and, acting on patients’ behalf as well as its own, is suing the insurance carrier for letting AI do the dirty work too many times. The suit, filed by 504-bed AdventHealth Shawnee Mission against Blue Cross and Blue Shield of Kansas City, lists more than 350 instances of allegedly unwarranted declinations. “Attorneys for the hospital said the insurance provider, also known as Blue KC, had contracted with Apixio, a firm that touts its use of artificial intelligence to review claims, to do audits of AdventHealth’s diagnoses,” the Kansas City Star reported July 12. The lawsuit states that the hospital’s appeals are “often denied instantly, even in high-dollar, complex appeals that have numerous pages.”
- Two years ago, researchers from Harvard and McKinsey predicted U.S. healthcare would save as much as $360B a year. All the sector had to do was increase its use of AI “significantly.” If only. Today healthcare AI adoption continues to be hampered by technical limitations and ethical concerns. Turgay Ayer, PhD, an engineering professor at Georgia Tech and a senior scientist at the CDC, considers the pickle in a piece published July 11 in The Conversation. “Emerging technologies need time to mature, and the short-term needs of healthcare still outweigh long-term gains,” Ayer writes. “In the meantime, AI’s potential to treat millions and save trillions awaits.” That’s his conclusion. See how he arrived at it here.
- Asked what they’d buy if money were no object, six of six healthcare IT leaders mentioned AI. “If I had a blank check to invest in one technology tomorrow, it would be in adaptive, responsible AI infrastructure that spans the full healthcare enterprise—clinical, operational and financial,” says one, Tom Bartiromo of Tower Health in West Reading, Pa., upon receiving the question from Becker’s Hospital Review. “Not just a chatbot or decision-support bolt-on, but a foundational layer where AI augments care delivery, automates administrative friction and enables precision resource allocation in real time.” Read all six responses here.
- AI in healthcare: One expert has done the math—and it’s really quite simple. “Technology plus intervention equals outcome,” explains David Rhew, MD, global chief medical officer at Microsoft. “We sometimes forget about that implementation piece because that is really our role as physicians,” Rhew said July 11 at Cleveland Clinic’s AI Summit for Healthcare Professionals. “Implement it in the workflow. Implement it such that it actually leads to the best outcomes.” Healio coverage here.
- Remember when AI was nice to have in healthcare but not really needed? Nowadays, “you don’t have the choice not to invest in AI.” That’s from a panelist at last week’s HIMSS AI in Healthcare Forum in New York City. “AI equals automation. Automation equals lower cost. It has to be done.” The speaker was John Doulis, vice president of data services and technology innovation at HCA Healthcare. HIMSS Media’s own coverage is here.
- Agentic AI ‘teammates’ may rescue clinical trials from the doldrums of technological stagnation. How dead in the water has clinical trialing become? Enough that one stakeholder has no qualms about stating today’s processes “would look remarkably familiar to a researcher from the 1970s.” The disdainful but hopeful expert is Gaurav Bhatnagar, MBA, chief growth officer with Tilda Research, an AI company whose calling card is simplifying clinical trials. “AI teammates don’t replace human expertise; they amplify it,” he writes in Applied Clinical Trials. “By handling routine tasks, analyzing complex datasets and surfacing actionable insights, they free human teams to focus on activities that truly require human judgment and creativity.” Read the rest.
- The question isn’t whether AI will replace workers. No, it’s ‘Which workers will AI replace?’ Healthcare is less vulnerable than other economic sectors, as has been reported in this space. But that may not hold for healthcare knowledge workers—including those with pretty high-up jobs. “Just as electrification ultimately came for agriculture and manufacturing, AI will come for law, and banking, and marketing,” writes commentator Abigail Ball at the New York Post. “The American laptop class is a political powder keg. As the comfortable jobs [white-collar workers] were promised become harder to land, and the self-validating stories they told themselves of their own value come unwound, competition will become fierce. … As more and more of them end up on the losing side, Occupy Wall Street may look like a mild preview of what’s to come.” Read it all.
- From AIin.Healthcare’s sibling outlets:
Within healthcare, artificial intelligence and quality improvement have some things in common. For starters, big picture, both have potential for making life better for patients and clinicians alike. Drilling down into differences, researchers note that QI tools “require intrinsic and contextual training” if they’re to be effective—while AI “represents a family of tools already in use and available to the practicing clinician as well as the quality improver.” The observations are from a paper published online July 11 in Current Problems in Pediatric and Adolescent Health Care. The authors are pediatric neurologist Grant Turek, MD, and pediatric gastroenterologist Kelly Sandberg, MD, MSc. Both are with Dayton Children’s Hospital and Wright State University Boonshoft School of Medicine. The paper looks at the role healthcare AI can play in healthcare QI—and vice versa. Here are five points Turek and Sandberg make about interweaving AI with QI.
1. QI science can guide sound strategies for successful AI integration. If a healthcare system were to immediately implement AI interventions without sufficient training of staff or justification, the benefits could be received in widely dissimilar ways, Turek and Sandberg point out. More: ‘QI principles can contribute to effective implementation of any new technology, including AI.’
2. QI thinking suggests it’s best to start with small pilot projects. QI theory also recognizes the importance of measured outcomes across time to monitor results of interventions, including AI tools, and determine if the system is moving in the desired direction. ‘When new AI tools are implemented effectively using QI principles, they are more likely to yield significant benefits.’
3. AI tools can be used in QI work. Such tools “can be used to draw upon a body of literature to summarize evidence,” Turek and Sandberg explain. ‘If done at the beginning of a QI project, the time saved is twofold: (1) the time spent in literature review, (2) the time saved by not intervening in ways that the evidence does not support.’
4. QI interventions may be created in targeted ways or less formally. Optimal interventions “depend on learning as the team progresses through a project, being very intentional in the study portion of a Plan-Do-Study-Act (PDSA) cycle,” the authors note. ‘During the study portion, teams take the results from their planned PDSA and observations. AI could be fed that same data, drawing its own conclusions and compared or combined with the human observations.’
5. Balance and wisdom are needed when using AI tools. “The risk of employing AI is that the team becomes overly reliant on the AI tool to interpret data and results,” Turek and Sandberg write. Such overreliance may decrease or compromise the human contribution.
The full paper is posted behind a paywall. A condensed version is freely available here.
- In other research news:
- Regulatory:
- Funding: