Healthcare AI newswatch: Does healthcare AI need DEI? | Insider AI risk | Turing Award winners | more
Buzzworthy developments of the past few days.
- If U.S. healthcare is to help advance responsible and ethical AI for all communities, it very much needs to continue prioritizing DEI initiatives. That’s the position of the D.C.-based Brookings Institution think tank. DEI, of course, stands for diversity, equity and inclusion. The movement has been much in the national discourse of late, with detractors charging that it widens existing divisions, replaces old forms of discrimination with new ones and stifles the free exchange of ideas for solving societal problems such as the rise of gender dysphoria in young people. Not so fast with the backlash, Brookings researchers suggest. “Lack of diversity at the onset of [healthcare] AI’s development can result in technologies that do not align with the needs of diverse populations or, in extreme cases, generate medical mistakes and/or profiling,” the analysts write. “AI models that learn through user interaction, especially those that utilize large language models (LLM), further exacerbate inequities if underserved communities are not adequately represented in the discourse, lack access to technologies, or do not feel comfortable using the medium to understand medical tests and other related inquiries. … [S]pecial consideration should be given to intentional and ethical approaches to enable inclusive AI design, distribution and regulation.” Hear the authors out in full here.
- Got insider risk? Sic AI on it. Some 54% of polled orgs are using AI to detect and prevent insider risks. Of these, 51% say AI and machine learning are essential or very important for detecting and preventing insider risks. The top three driving factors are reduced investigation times (70%), improved behavioral insights (59%) and lower skill requirements for insider risk analysts (58%). The findings are from a new survey conducted by the Ponemon Institute on behalf of DTEX Systems. The researchers found the annualized cost of insider risks is highest for—what else?—health and pharma ($29.2M). Technology and software come in second at $23M. Download the full report here.
- Physicians place AI in healthcare above genomics for personalized medicine. That’s when they’re asked which emerging technology is likely to do more for patient care over the next five years. Remote robotic surgery comes in third. The survey that produced the results was conducted by Sermo, a social network platform for doctors. The project also showed more than 80% of physicians believe technical chops are no less important than clinical know-how. Summary with link to full report here.
- You could not overstate the timesaving impact of artificial and augmented intelligence for clinical documentation professionals. But don’t feel bad. No one could. Frank Cohen, MPA, explains why in a piece published by RACmonitor. (Those first three letters stand for Recovery Audit Contractor.) “Physicians working with advanced documentation systems report saving an average of 52 minutes daily—time redirected to patient care or reducing administrative overtime,” writes Cohen, a computational statistician with the consulting firm VMG Health. “These efficiencies stem from reduced documentation burden, fewer retrospective queries and streamlined information retrieval during the documentation process.” Read the rest.
- To know where the European Union’s AI Act is going, it might help to know where it is and how it got there. The European Commission, the EU’s executive branch, nicely synopsizes both in a quick read posted this week. The AI Act entered into force on August 1, 2024, and will be fully applicable two years later, with some exceptions, the Commission reminds readers. “[P]rohibitions will take effect after six months, the governance rules and the obligations for general-purpose AI models become applicable after 12 months,” it adds, “and the rules for AI systems—embedded into regulated products—will apply after 36 months.” It’s kind of complicated, yes? Which is why the combination refresher/updater is both timely and, for some AI watchers, needed.
- And the $1M Turing Award goes to … Andrew Barto and Richard Sutton. The duo won for their decades-long work in reinforcement learning, which trains AI systems with, in essence, a carrot vs. stick approach. Some prefer to call it a pleasure vs. pain method. (A toy sketch of the reward-driven idea appears at the end of this roundup.) Barto is a professor emeritus at the University of Massachusetts. Sutton is a professor at the University of Alberta and a former research scientist at DeepMind. The award is bestowed by the Association for Computing Machinery. The two laureates are using their suddenly amplified voices to warn the world against rushing AI out to users. “Engineering practice has evolved to try to mitigate the negative consequences of technology, and I don’t see that being practiced by the companies that are developing AI,” Barto tells the Financial Times. Meanwhile Sutton puts the hype around artificial general intelligence in its place. “AGI is a weird term because there’s always been AI and people trying to understand intelligence,” Sutton says before adding that “[S]ystems that are more intelligent than people” will eventually take shape through “a better understanding of the human mind.” FT article here, everywhere coverage here.
- Microsoft is out with Dragon Copilot. Unveiling the product Monday, the company called the clinical workflow assistant the first in the world to combine natural-language voice dictation with Microsoft-grade ambient listening, generative AI and healthcare-specific safeguards. It arrives as part of Microsoft Cloud for Healthcare with hopes of finding fans among providers and patients alike. Announcement here. Investor angle explored here.
- Recent research in the news:
- Penn State: AI may help clinicians personalize treatment for generalized anxiety disorder
- Regenstrief Institute: AI model predicting two-year risk of common heart disorder can easily be integrated into healthcare workflow
- Pusan National University (South Korea): Researchers develop an advanced AI model for accelerating therapeutic gene target discovery
- Insilico Medicine: Meet the first bipedal humanoid AI scientist in the fully robotic drug discovery laboratory
- Funding news of note:
- From AIin.Healthcare’s news partners:
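- A postscript on the Turing Award item above: for readers who want a concrete feel for reinforcement learning’s “carrot vs. stick” idea, below is a minimal Python sketch of a two-armed bandit agent that nudges its value estimates toward whichever action pays off. The learning rate, exploration rate and reward distributions are illustrative assumptions for this toy, not anything drawn from Barto and Sutton’s published algorithms.

```python
# Minimal "carrot vs. stick" illustration: an agent repeatedly picks one of two
# actions, receives a generous reward (carrot) or a meager one (stick), and
# nudges its value estimates toward whichever choice pays off. Toy example only;
# all constants and reward distributions below are assumed for illustration.
import random

ALPHA = 0.1          # learning rate: how strongly each reward moves the estimate
EPSILON = 0.1        # exploration rate: how often the agent tries a random action
values = [0.0, 0.0]  # the agent's running estimate of each action's payoff

def true_reward(action: int) -> float:
    """Hidden environment: action 1 is the better choice on average."""
    return random.gauss(1.0 if action == 1 else 0.2, 0.5)

for _ in range(5000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    reward = true_reward(action)                          # carrot or stick
    values[action] += ALPHA * (reward - values[action])   # incremental update

print(f"Learned value estimates: {values}")  # action 1 should score higher
```

After a few thousand steps, the estimate for the better-paying action pulls clearly ahead, which is the reward-driven dynamic the laureates formalized and scaled far beyond this toy.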