News You Need to Know Today

Wearable AI comes of age | Partner voice | Newswatch: Great expectations for GenAI, disaster-ready AI, Dr. Oz on payer AI, more

Thursday, March 27, 2025

In cooperation with

Nabla


When health accessories grow up, they want to be wearable AI

Wearable health gadgets equipped with AI present myriad opportunities and challenges to healthcare consumers and the healthcare professionals who diagnose, treat and track them. 

As noted by researchers in a paper published March 22 in NPJ Digital Medicine, “wearable AI” takes traditional wearable health devices to a whole new level. AI wearables don’t merely collect real-time health data—they “use advanced algorithms to analyze multiple types of patient data and provide guidance for clinical care decisions.”

Co-authors Arjun Mahajan and Kimia Heydari of Harvard, along with senior author Dylan Powell of the University of Stirling in the U.K., state that the advent of wearable AI for healthcare marks “an important shift from devices that merely collect data to those that predict and prevent errors in real time.”

Along with practical applications of wearable AI in healthcare, the authors cover potential opportunities that the technology may open for patient safety and care quality. 

In a section on existing challenges and future directions, they focus on four critical factors. Here are excerpts.

1. Technical considerations.

For wearable AI systems to achieve reliable performance, devices will need to address several critical technical challenges in data collection and processing. “Sensors must be able to maintain signal quality and filter out noise from constant movement, poor contact points and varying environmental conditions,” the authors write. “Devices must also ensure consistent and accurate readings regardless of how they are worn or positioned on the body, and across diverse user activities from sleep to exercise.” 

‘Additionally, the intensive computational requirements of continuous AI monitoring must be balanced against the fundamental constraints of battery life and processing power in compact wearable forms.’ 

2. Implementation concerns. 

The successful implementation of wearable AI systems requires careful consideration of both economic and human factors across the healthcare ecosystem. Beyond the initial hardware costs, healthcare systems “must invest in the digital infrastructure needed to integrate these devices with existing medical records systems, while ensuring staff receive adequate technical training to interpret and act on the AI-generated insights,” senior author Powell and co-authors note. “Provider adoption will depend not just on proving clinical value, but on developing streamlined workflows that allow physicians to efficiently incorporate continuous monitoring data into their practice without increasing their already heavy workload.” 

‘Meanwhile, patient engagement requires devices that are not only comfortable and easy to use, but also provide meaningful, actionable feedback that motivates sustained long-term use rather than contributing to alert fatigue or anxiety about health metrics.’ 

3. Patient safety and care quality issues. 

The integration of wearable AI technologies into clinical settings requires rigorous safety protocols and quality monitoring frameworks to mitigate potential risks to patient care. AI algorithms supporting diagnostic or therapeutic decisions require not only thorough validation processes—i.e., clinical trials demonstrating efficacy and safety—but also clear contingency protocols for system failures, downtimes or algorithmic errors that could compromise patient safety in critical care scenarios, the authors write. “Continuous post-implementation monitoring,” they add, “is equally essential, with systematic tracking of near-misses, adverse events, and regular quality audits to identify emerging safety concerns.”

‘Furthermore, healthcare systems should establish clear accountability frameworks that delineate responsibility among technology providers, healthcare institutions and clinicians when AI-augmented decisions contribute to adverse outcomes, ensuring appropriate oversight.’

4. Privacy and ethical aspects. 

The widespread deployment of wearable AI systems raises critical privacy and ethical considerations that must be carefully balanced against their clinical benefits. “While continuous health monitoring generates valuable data for improving care, it requires robust security protocols to protect sensitive personal information during collection and transmission, along with secure integration pathways that meet stringent healthcare data compliance requirements,” the authors point out. “Healthcare systems must also navigate the complex challenge of aggregating data to improve AI algorithms while preserving individual privacy and patient autonomy—including giving patients meaningful control over how their data is shared and used.” 

‘As with other AI-powered systems, ensuring these systems are trained on diverse and representative datasets is crucial to prevent algorithmic bias and ensure equitable health monitoring across different populations and demographics.’ 

While wearable AI technologies represent a transformative force in healthcare, realizing their full potential “requires addressing critical technical, operational and ethical challenges through collaborative innovation,” the authors state. 

Their conclusion: 

‘With thoughtful implementation and continued technological advancement, these systems promise to fundamentally reshape healthcare delivery by enabling truly proactive, personalized and patient-centered care.’ 

The paper is available in full for free.

 


The Latest from our Partners

Women's Health Pioneer Tia Rolls Out Nabla's AI Assistant To Enhance the Patient-Provider Relationship - Now fully deployed across Tia’s clinical team, Nabla’s ambient AI assistant has been rapidly adopted—helping 90 providers across 7 specialties generate more than 50,000 clinical notes, all while preserving the human connection that defines their relational care model.
Tia providers report:
50% reduction in clinical note submission time
Stronger patient-provider connections in both virtual and in-person appointments
 


Healthcare AI newswatch: Great expectations for GenAI, disaster-ready AI, Dr. Oz on payer AI, more

Buzzworthy developments of the past few days.

  • Investors are betting on generative AI to solve costly provider workflow inefficiencies. So are provider executives. The American Hospital Association has taken note, citing a recent analysis by market intelligencer CB Insights. The analysis shows healthcare leaders increasingly looking to AI for help with staffing shortages, rising costs and administrative burdens. CB Insights notes that clinical documentation startups accounted for four of healthcare’s five biggest GenAI deals last year. The CB Insights report names vendors. The AHA summary offers insights from the report. Among these: Specialized AI models—“powered by advanced clinical reasoning and domain-specific knowledge”—have demonstrated high accuracy in healthcare workflows. And drug development is poised to accelerate “as generative AI discovery platforms secure major deals.” AHA also cites a recent Gradient Flow survey in which well more than half of respondents—65%—were actively considering or implementing generative AI products.
     
  • Translating healthcare AI from English into any one of the thousands of tongues spoken in Africa is no easy thing. And that’s only one complication vexing efforts to ease severe shortages of human clinicians in the Cradle of Humankind. But, say some hardy souls at Stanford Medicine, many Africans have little choice. If you were given a choice between an unfamiliar AI bot and a trusted human doctor, of course you’d pick the human. Or, at the very least, you’d want to know just how good the AI has shown itself to be. “But if you don’t have that alternative or if your alternative is waiting nine months, what counts as good enough is different,” explains Stanford blogger Rachel Tompa. “In many settings, especially if people are suffering, we may not have time to wait for the perfect AI model.” The picture of AI’s developing role in third-world population health comes into even clearer focus here.
     
  • If necessity is the mother of invention, modern AI has some measure of paternal rights. The technology is increasingly used to anticipate hospitals’ supply-chain needs after disasters strike—or even before. Modern Healthcare looks at the development, spotlighting GHX (whose model predicted IV shortages in the immediate wake of Hurricane Helene in North Carolina), Sg2 (which forecasts patient volumes to match supplies with demand) and Premier (which was spurred to innovate when COVID-19 caused shortages of personal protective equipment). GHX chief product officer Archie Mayani tells the outlet the idea is to use AI to “not only predict the impact of such an event, [but to] aid [provider orgs] in a very proactive way” such that disruptions to clinical operations are minimized. Story here (behind paywall). 
     
  • Dr. Mehmet Oz is all for using AI to thwart payers from enlisting AI to help them deny care. His preferred approach is, evidently, to fight fire with fire. Testifying on his nomination for CMS secretary before the Senate Finance Committee last week, Oz said: “We should be using AI within the [CMS] agency to identify that [bad behavior] early enough so that we can prevent it.” He also said the U.S. has “a generational opportunity to fix our healthcare system and help people stay healthy for longer.” 
     
  • When Kaiser Permanente models responsible AI, people emulate. Why wouldn’t they? The sprawling integrated healthcare system serves more than 12 million people across nine states and D.C., with 40 hospitals, more than 600 medical sites and upwards of 25,000 physicians. Kaiser has come up with a seven-principle checklist to make sure it’s doing AI right. The stripped-down version goes like this: 1.) Start with privacy. 2.) Continually assess for reliability. 3.) Focus on outcomes. 4.) Strive to deploy tools transparently. 5.) Promote equity. 6.) Design tools for customers—not only patients and families but also healthcare workers who use the tools. 7.) Build trust. Kaiser hopes others will learn from its example: “To realize AI’s full potential, we and all healthcare organizations must use the technology responsibly.” Read the whole thing.
     
  • Tell me again why insurance companies are taking heat for using AI to review claims. Simple. For every $10 billion in revenues, health insurers that use the technology for this purpose could save themselves $150 million to $300 million in administrative costs and $380 million to $970 million in medical costs. In these ways, AI “could help generate between $260 million to $1,240 million in additional revenue” for such payers. The numbers are from McKinsey via reporting by Newsweek (which incorrectly identifies McKinsey as a law firm). 
     
  • AI is being trained on large datasets of sounds to differentiate between noise and speech for those with hearing impairments. Hearing aid makers have been working with AI for some time, but now they’re getting really good at it. One product on the market “sorts out relevant conversation using directional microphones and a deep neural network trained on 13.5 million spoken sentences and other background sounds,” Healthcare Brew reports. Another manufacturer says its AI “helps reduce the effort needed to focus on listening, which has been linked to cognitive decline.” Get the rest
     
  • The greatest threat to AI ‘uptake’ in healthcare is the off switch. As used here, uptake is a Britishism for what many Americans would call adoption. A white paper promoted by the University of York explains: “If frontline clinicians see the technology as burdensome, unfit for purpose or are wary about how it will impact upon their decision-making, their patients and their licenses, then they are unlikely to want to use it.” Read the rest
     
  • Recent research in the news: 
     
  • Notable FDA approval activity:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare