Research brief: Cybersecurity threats inherent to AI-aided remote patient monitoring

When augmented by AI, remote patient monitoring can inform care decisions and treatment plans with deep insights drawn from rich data. And that’s true regardless of care setting: hospital, home, long-term care facility, you name it. 

But capitalizing on the upsides means attending to numerous risks and challenges. Primary among these are privacy and cybersecurity concerns. Researchers in Europe consider 10 formidable digital hazards in a literature review published May 25 in IEEE Access, a journal of the U.S.-based Institute of Electrical and Electronics Engineers. 

Jolly Trivedi and colleagues at the University of Turku in Finland hope the paper will “offer insights for developing resilient healthcare infrastructures” while “lay[ing] out a roadmap for future research into AI-driven threat intelligence security for remote patient monitoring (RPM) systems.”

Here are excerpts from three of the 10 key cybersecurity challenges the authors detail for remote patient monitoring.  

1. Data availability and quality. 

Due to privacy and competitive concerns, threat intelligence is not always shared among healthcare organizations, Trivedi and co-authors point out. Yet to accurately identify new risks and effectively generalize threat countermeasures, they explain, “large and diversified datasets are necessary for AI algorithms.”  

‘Moreover, biased AI outputs could produce false positives or negatives in threat detection.’
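To make the data-quality point concrete, here is a minimal sketch, entirely our own illustration and not code from the paper, of how a detector trained on narrow, single-site telemetry can flag perfectly legitimate traffic from another care setting. The feature names and numbers are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Training" telemetry from a single hospital RPM gateway: narrow, homogeneous
# traffic described by two hypothetical features (packets/sec, payload bytes).
train = rng.normal(loc=[50, 200], scale=[5, 20], size=(500, 2))

detector = IsolationForest(random_state=0).fit(train)

# Legitimate home-monitoring traffic over cellular looks nothing like the
# training data, so much of it gets flagged as anomalous: false positives.
home_traffic = rng.normal(loc=[20, 400], scale=[5, 40], size=(100, 2))
flags = detector.predict(home_traffic)  # -1 = anomaly, 1 = normal
print(f"share flagged as threats: {(flags == -1).mean():.0%}")
```

The same imbalance cuts the other way: attack patterns missing from a narrow training set can slip through as false negatives.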

2. High computational and resource demands.

It takes a lot of computing power to train AI models for cybersecurity applications, particularly for real-time threat intelligence, the authors note. “A significant amount of processing power, memory and storage is required to analyze massive amounts of data, spot abnormalities in real time and find patterns by AI systems such as deep learning neural networks or sophisticated ML algorithms.” More:  

‘Due to the possibility of resource constraints, healthcare organizations—especially those with smaller staff sizes or less sophisticated IT systems—may find it difficult to deploy and manage AI-based threat intelligence systems.’
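For a rough sense of scale, here is a back-of-the-envelope sketch, our own arithmetic rather than the authors’, of the memory a smaller organization would need just to hold a deep model’s weights, gradients and Adam optimizer state during training.

```python
# Illustrative accounting only: fp32 weights + gradients + Adam's two moment
# buffers means roughly four copies of every parameter held in memory.
def training_memory_gb(n_params: float, bytes_per_value: int = 4) -> float:
    copies = 4  # weights, gradients, Adam first and second moments
    return n_params * bytes_per_value * copies / 1e9

for n_params in (25e6, 350e6, 1.5e9):
    print(f"{n_params/1e6:>7.0f}M params -> ~{training_memory_gb(n_params):.1f} GB "
          "before activations, data batches or logging")
```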

3. AI interpretability and explainability. 

AI algorithms, especially deep learning models, are often described as “black boxes” because it’s difficult to understand how they arrive at specific decisions, Trivedi et al. note. “In the case of AI-based threat intelligence in RPM systems, this lack of transparency can pose serious concerns,” they add. “Healthcare administrators and cybersecurity professionals need to trust the AI system’s decisions, especially in critical scenarios where data breaches or unauthorized access to patient data are detected.”

‘The inability to explain how an AI model identifies threats or prioritizes security risks can lead to distrust in the [RPM] system [itself].’ 
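One widely used, if partial, answer to the black-box concern is to surface which inputs drive a model’s alerts. The sketch below, with hypothetical RPM session features of our own invention, shows the kind of first-pass explanation a tree-based detector can give administrators.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["failed_logins", "bytes_out", "off_hours_access", "new_device"]

# Synthetic access-log features; a session is labeled suspicious when failed
# logins and off-hours access are both unusually high.
X = rng.random((1000, len(features)))
y = ((X[:, 0] > 0.7) & (X[:, 2] > 0.6)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Global feature importances: a first-pass answer to "why does this system alert?"
for name, weight in sorted(zip(features, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:18s} {weight:.2f}")
```

Per-alert explanation methods such as SHAP go a step further than these global weights, though neither is discussed in the paper itself.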

The paper is posted in full for free. (Click PDF link.) 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.