News You Need to Know Today

Best of December: AI vs. Covid-19, brain damage, human aging and more

Thursday, December 31, 2020

Northwestern ●  Nabla ●  UCLA Health

Top Stories


Chest X-rays, bloodwork all algorithm needs to predict COVID severity in incoming patients

A new deep learning algorithm quickly and accurately forecasts outcomes of COVID-positive patients in the ER using routine workup information.

The AI was developed by Fred Kwon, PhD, and colleagues at Icahn School of Medicine at Mount Sinai. RSNA’s Radiology: Artificial Intelligence published their study report online Dec. 16.

To train the algorithm, the team drew data from 338 adult patients who presented last spring at a Mount Sinai Health System ER in any of three New York City boroughs.

The initial data included nothing more than blood pressure readings, chest X-rays and basic bloodwork, and no two ERs had the same X-ray equipment.  

Upon testing the technique, the researchers found their model accurately predicted 30-day admission status, intubation status and survival.

The algorithm’s reliability held even though there were differences in age and outcomes in the test set vs. those in the training and validation sets.

Kwon and colleagues simultaneously trained and tested separate radiograph-only and bloodwork-only models.

They found the combined model performed best.
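The paper's exact architecture isn't detailed here, but the combined-model idea can be sketched as simple late fusion: image-derived features and tabular labs/vitals are concatenated into one feature vector before classification. Everything below is illustrative, with synthetic data standing in for the study's cohort.

```python
# Hypothetical late-fusion sketch: fuse image-derived features with tabular
# labs/vitals, then classify. Names, shapes and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 338  # cohort size mentioned in the study; the data here is synthetic

xray_features = rng.normal(size=(n, 64))  # stand-in for CNN embeddings of chest X-rays
labs_vitals = rng.normal(size=(n, 12))    # stand-in for bloodwork + blood pressure
y = rng.integers(0, 2, size=n)            # e.g. 30-day admission (1) vs not (0)

# Late fusion: concatenate the two modalities into one feature vector
X = np.hstack([xray_features, labs_vitals])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.predict_proba(X_te[:1]))  # per-patient risk estimate
```

A fused model like this can outperform either single-modality model when the two data sources carry complementary signal, which matches the result the authors report.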

“As expected, the model performed better for the young adults aged 21 to 50 years but still demonstrated clinically useful results for the older patients aged greater than 50 years in the test set,” the authors report.

They cite as a strength of their study design its focus on developing and evaluating deep learning's ability to predict clinical outcomes in the ER. Other studies have tended to concentrate on screening for or confirming a COVID diagnosis, they note.

They list as a limitation their lack of data from patients without real-time reverse transcription polymerase chain reaction (RT-PCR) assays. For this reason, they state, the model is not appropriate for predicting disease course when diagnostic testing is not immediately available.

“While surveys of emergency department physicians do not typically report chest X-ray findings as a major factor in the decision-making process to admit a patient with community acquired pneumonia, this algorithm can reliably predict 30-day admission status in COVID-19 patients and may serve as a first-pass triaging process to alert radiologists and clinicians of higher-risk patients who will likely require hospitalization,” Kwon and co-authors write.

Such prioritization of care, they add, “can be readily adopted within existing clinical workflows and lead to validation in actual clinical practice, thereby addressing the common challenges and criticisms of existing artificial intelligence research in medicine.”

Additionally, Kwon and colleagues point out that many promising AI algorithms perform well in research settings but fail to become integrated into clinical workflows. They suggest their participation in the COVID informatics center at their institution will help the institution not only deploy the new AI in clinical practice but also integrate it with all available data sources, including the EHR.

The study is available in full for free (click PDF).

 Share on Facebook Share on Linkedin Send in Mail

AI-based COVID screening tools prove useful in emergency, admitting departments

Oxford researchers have developed and prospectively validated two AI tools that can quickly screen hospital patients for COVID-19 using routine clinical data.

They designed one tool for use in inpatient admitting and another for the ED. Both tools leverage health data that’s usually available within an hour of the patient’s arrival at either department.  

Andrew Soltan, MB BChir, and colleagues describe their work in a study running in The Lancet Digital Health.

“Our ED and admissions models effectively identified patients with COVID-19 among all patients presenting and admitted to hospital,” the authors write. “On validation, using prospective cohorts of all patients presenting or admitted to the Oxford University Hospitals, our models achieved high accuracies and [high] negative predictive values compared with PCR test results.”

PCR testing has limited sensitivity for ruling out COVID, the authors note, and can take up to 72 hours to produce results.

Soltan and co-authors suggest their algorithms be considered for wide deployment not only to rule out COVID but also to assist with care decisions, guide safe patient transport and serve as a pretest for diagnostic molecular testing.

The study is available in full for free.


EU trying to anticipate, head off threats AI may pose to human rights

Various scenarios within medical diagnostics are among the AI use cases that an official European watchdog has flagged as a potential source of hazards to fundamental human rights.

The European Union’s Agency for Fundamental Rights (FRA) lays out its areas of concern in a report issued Dec. 14. Other areas include predictive policing, social services and targeted advertising.

The report is part of a broad project on AI and big data, and its recommendations draw from more than 100 interviews of people using AI in Estonia, Finland, France, the Netherlands and Spain, according to an announcement.

An AI user in France’s private sector tells the FRA that identifying discrimination in AI is complicated “because some diseases are more present in certain ethnic groups. Predictions take into account the sexual, ethnic, genetic character. But it is not discriminatory or a violation of human rights.”

The report exhorts the EU and EU countries to:

  • Make sure AI respects all fundamental rights, not just personal privacy or data security.
  • Guarantee that people can challenge decisions guided by AI.
  • Assess AI before and during its use to reduce negative impacts.
  • Provide more guidance on data protection rules.
  • Assess whether AI discriminates.
  • Create an effective oversight system.

In the report’s foreword, FRA director Michael O’Flaherty says AI users as well as developers “need to have the right tools to assess comprehensively its fundamental rights implications, many of which may not be immediately obvious. … We have an opportunity to shape AI [so] that it not only respects our human and fundamental rights but that also protects and promotes them.”

The 108-page report is available for downloading or reading online.


Neural network replicates damaged brain for benefit of neuro patients, AI developers

Researchers at the Salk Institute have used neural network architecture to algorithmically simulate effects of damage to the prefrontal cortex in patients with neuropsychological impairments. The team believes its findings can inform improved AI development as well as personalized clinical therapies.

Senior study author Terrence Sejnowski, PhD, and colleagues describe their work in a research report published in Proceedings of the National Academy of Sciences.

The scientists focused on getting their AI to mimic the cortical mechanism of “gating,” which controls information flow between neuron clusters to apply existing knowledge to new situations.

Noting that previous attempts to model prefrontal cortex damage produced disappointing results, they explain their system mirrored biological gating by “delegating” different information packets—i.e., artificial neurons—to different subregions of the artificial neural network.

The authors say their system was the first to recreate gating throughout an entire network rather than within discrete subsections.
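The gating mechanism described above can be illustrated with a toy model: a learned gate decides how strongly each subregion of the network is engaged by a given input, and "damage" can be simulated by silencing a subregion. This is a sketch of the concept only, not the authors' model.

```python
# Minimal numpy sketch of gating: a gate routes an input across subregions
# of the network, so different contexts engage different neuron clusters.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hidden, n_regions = 8, 16, 4

W_regions = rng.normal(size=(n_regions, d_in, d_hidden))  # one weight block per subregion
W_gate = rng.normal(size=(d_in, n_regions))               # gating weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    gate = softmax(x @ W_gate)  # how strongly each subregion is engaged
    h = np.einsum("r,rij,i->j", gate, W_regions, x)  # gated mix of subregion outputs
    return np.tanh(h)

def forward_lesioned(x, region):
    # Simulated damage: the lesioned subregion contributes nothing
    gate = softmax(x @ W_gate)
    gate[region] = 0.0
    h = np.einsum("r,rij,i->j", gate, W_regions, x)
    return np.tanh(h)

x = rng.normal(size=d_in)
print(forward(x).shape, forward_lesioned(x, 0).shape)
```

Comparing `forward` and `forward_lesioned` outputs on the same input is the spirit of the experiment: how does the network's behavior degrade when a "cortical" subregion is taken offline?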

In an article posted by Salk’s news division, lead author Ben Tsuda, a graduate student pursuing MD and PhD degrees, says the work is yielding a granular view of how the brain is organized. The advance “has implications for both machine learning and gaining a better understanding of some of these diseases that affect the prefrontal cortex,” Tsuda says.

Salk professor Kay Tye, PhD, adds that the human brain is too versatile for even the most sophisticated artificial neural networks to match. A key reason is that the brain can generalize knowledge across varying tasks with dissimilar rules, she explains.  

“In this new work, we show how gating of information can power our new and improved model of the prefrontal cortex,” Tye says.


Will CMS pay for utilization of AI software that’s similar to an established NTAP earner?

In September CMS agreed to reimburse hospitals for using the first AI software to qualify for Medicare’s New Technology Add-on Payment mechanism (NTAP).

The software is Viz LVO (aka Viz ContaCT), Viz.ai's package for speeding time to treatment in stroke. Thanks to the qualification, its use with CT imaging can generate reimbursement of up to $1,040 per use for the deploying hospital or stroke center.

Now comes a boomlet in vendors looking to piggyback on Viz.ai’s success, according to Niall Brennan, MPP, a member of Viz.ai’s advisory board who is also the head of the Healthcare Cost Institute.

Since the landmark NTAP approval, “more than five companies have claimed the rights to the code, ranging from stroke triage, to radiology prioritization, and even to CAD,” Brennan writes in a sponsored piece published Dec. 16 in Health Imaging.

“The question is,” he continues, “does this code apply to everyone who wants it?”

In exploring the main factors affecting the answer, Brennan cautions hospitals against betting on reimbursement for using similar software from vendors other than Viz.ai.

He also suggests specific questions hospitals should ask such vendors prior to purchasing their products.

“Without official word from CMS,” Brennan writes, “hospitals have to choose whether to take a risk—as the decision lies with individual hospitals’ risk tolerance as to the technology they want to use and submit for NTAP payments.”

Read the piece at Health Imaging.


‘Deep aging clocks’ show why you’re only as old as you feel

Researchers have demonstrated two deep learning tools aimed at uncovering the psychology of aging. One tool predicts actual chronological age; the other, a person’s subjective perception of the rate at which he or she is aging.

The researchers, led by AI developers at Hong Kong-based Deep Longevity Inc., also show both tools can predict all-cause mortality risk.

They present their work in a paper published in Aging.

Deep Longevity founder Alex Zhavoronkov, PhD, and colleagues used a deep neural network to classify biomarkers of aging as revealed by behaviors described in responses to biosocial and psychosocial questionnaires.

Their work falls within the field of “aging clock” development, which seeks to understand the human aging process based on quantifiable biomarkers.  

For this reason, the researchers refer to their two new AI tools as “deep aging clocks.”
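In spirit, an aging clock is a regression model that maps measurable inputs to a predicted age. The toy sketch below regresses age from questionnaire-derived features; it is purely illustrative, with synthetic data in place of the study's deep network and survey items.

```python
# Toy "aging clock": predict age from psychosocial questionnaire features.
# All data and model choices here are illustrative placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n, d = 500, 20
answers = rng.normal(size=(n, d))  # synthetic questionnaire responses
age = 40 + answers[:, 0] * 5 + rng.normal(scale=2, size=n)  # synthetic ground truth

clock = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clock.fit(answers, age)
print(clock.predict(answers[:3]))  # predicted ages for three respondents
```

The gap between predicted and actual age is the quantity of interest in this line of research: someone whose predicted age runs ahead of the calendar may be "aging faster" by the model's biomarkers.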

In a news release sent by the company, Zhavoronkov says the demonstration marks the first time that AI has shown it can “predict human psychological and subjective age and help identify the possible interventions that can be applied in order to help people feel and behave younger.”


AI-aided ‘fax first responders’ make old technology new again for COVID era

A few months into the COVID crisis, the health department of California’s Contra Costa County faced an unexpected side challenge: Staff were getting inundated by faxes bearing vital health data.

To be sure, about half the forms arrived digitally. But in the pandemic, even half meant someone needed to speed-read hundreds of faxes a day.

The department turned to researchers at nearby Stanford University. Together the two created an AI system they called COVID Fast Fax and took it live just before the long Thanksgiving weekend.

The developers initially hoped they could get the algorithm to transcribe full faxes but found that process overly complicated and settled for flagging urgent faxes for immediate attention.

When the Contra Costa team went back to work the following Monday, the innovation performed like a pro: The team still had too many faxes but now at least members knew where to dig in first.  

Wired has the full story in an article posted Dec. 22.

“Like much else about U.S. pandemic response, the project highlights the creakiness of the country’s health system,” writes reporter Tim Simonite. “It’s also another example of creative minds patching it up with hasty innovation. … In 2020, such projects can be lifesavers.”

Simonite reports that the county’s Stanford collaborators have released their code and methodology so others can tap the AI for modernizing, in a certain sense, a dated if not downright old technology.

Project collaborator Amit Kaushal, MD, PhD, says he and his Stanford colleagues “are pleased with their pandemic creation, even though it’s more hacky than the usual Stanford AI project,” Simonite writes.

“If we were not in a pandemic,” Kaushal says, “no one in their right mind would say let’s figure out some artificial intelligence to extract information from faxes.”

Read the whole thing.

Telehealth patient. Telecardiology saw a major boost during the COVID pandemic, and many health systems now want to keep this care delivery tool post-pandemic.

Google Cloud scores 6-year run with IDN reorienting toward tech-enabled patient experience

An integrated delivery network that covers five and a half million lives is bringing in Google Cloud to help build and maintain a patient-centric platform with advanced analytic and AI capabilities.

Pittsburgh-based Highmark Health is announcing the engagement will run for six years.

The organization, which has members in West Virginia and Delaware as well as Pennsylvania, says it’s looking to defragment the patient experience “with a more coordinated, personalized, technology-enabled” model.

It also wants to use data to unburden clinicians of administrative tasks and supply them with meaningful patient information.

Highmark Health will control access to the platform and, together with Google Cloud, form a review board to oversee data ethics and privacy.

Highmark says it’s creating around 125 new jobs to support the platform’s development. Most of the hires will be placed in application development, cloud-based computing architectures, analytics and user experience design.

Karen Hanlon, Highmark Health’s executive vice president and COO, says the organization arrived at this point after considering how the “entire health experience should be re-engineered with [patients] at the center. … [W]e had to change our organization by breaking down old paradigms and barriers, leveraging resources across our enterprise and bringing a differentiated approach to healthcare, which includes the development of a powerful technology platform to turbo-charge the model.”

Andrew Moore, VP of industry solutions at Google Cloud, adds that the “combination of Highmark Health’s deep understanding of patient behavior and clinical best practices with Google Cloud’s technology capabilities, including artificial Intelligence and machine learning expertise, will accelerate access to the most cutting-edge tools for people to improve their health.”

Full announcement here.


Mental illnesses diagnosable by AI focused on Facebook

Drawing on nothing more than Facebook activity, psychiatric AI can distinguish individuals headed for hospitalization with schizophrenia from those with worsening mood disorders such as clinical depression and bipolar states, according to a study published Dec. 3 in NPJ Schizophrenia.

The performance of the tested algorithms was impressive enough that the study authors, from the Feinstein Institutes in New York and IBM Research, suggest the technique be integrated with other patient-specific information to guide clinical care paths.

Michael Birnbaum, MD, and colleagues gathered more than 3.4 million Facebook messages and more than 140,000 images posted by 223 participants recruited from the psychiatry department at Feinstein-affiliated Northwell Health.

The group ranged in age from 15 to 35 and included 79 patients with a schizophrenia spectrum disorder (SSD), 74 with a mood disorder (MD) and 70 healthy volunteers (HVs).

The researchers used machine learning to build classifiers for distinguishing between the three psychiatric statuses, analyzing features of the content participants posted up to a year and a half before their first hospitalization.

The AI classifiers achieved high accuracy distinguishing HV from MD, HV from SSD and SSD from MD.
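The pairwise setup can be sketched as one binary text classifier per pair of statuses. The study's actual features, models and pipeline are not shown here; the messages, labels and TF-IDF/logistic-regression choice below are illustrative placeholders.

```python
# Illustrative pairwise text classifiers in the spirit of the study.
# Messages and labels are synthetic stand-ins, not real patient data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "i can hear them outside again",   # placeholder "SSD-like" text
    "everything hurts and i'm tired",  # placeholder "MD-like" text
    "great day at the park today",     # placeholder "HV-like" text
] * 10
labels = ["SSD", "MD", "HV"] * 10

def pairwise_classifier(group_a, group_b):
    # Keep only the two groups of interest, then fit TF-IDF + logistic regression
    X = [m for m, lab in zip(messages, labels) if lab in (group_a, group_b)]
    ys = [lab for lab in labels if lab in (group_a, group_b)]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    return model.fit(X, ys)

ssd_vs_hv = pairwise_classifier("SSD", "HV")
print(ssd_vs_hv.predict(["i keep hearing voices at night"]))
```

Three such classifiers (HV vs MD, HV vs SSD, SSD vs MD) together cover the pairwise comparisons the authors report.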

“While Facebook alone is not meant to diagnose psychiatric conditions or to replace the critical role of a clinician in psychiatric assessment, our results suggest that social media data could potentially be used in conjunction with clinician information to support clinical decision-making,” Birnbaum et al. comment in their discussion. “Much like an X-ray or blood test is used to inform health status, Facebook data, and the insights we gather, could one day serve to provide additional collateral, clinically meaningful patient information.”

A news release sent by Northwell Health highlights some of the project’s more fascinating sub-findings:

  • SSD and MD participants were more likely than HV to use swear words on Facebook;
  • SSD members used more perception words ("hear," "see," "feel") than MD or HV;
  • The MD cohort used more words related to blood, pain and other biological processes;
  • Closer to hospitalization, punctuation use increased in SSD compared to HV; and
  • Use of negative emotion words increased in MD compared to HV.

What’s more, the height and width of photos posted to Facebook by participants with schizophrenia spectrum and mood disorders were smaller than those of photos posted by the healthy volunteers. Also, the photos uploaded by those with mood disorders contained more blues and fewer yellows.

“There is great promise in the current research regarding the relationship between social media activity and behavioral health, and our results … demonstrate that machine learning algorithms are capable of identifying signals associated with mental illness, well over a year in advance of the first psychiatric hospitalization,” Birnbaum tells Northwell’s news division. “We have the potential to thoughtfully bring psychiatry into the modern, digital age by integrating these data into the field.”

The study is available in full for free.


AI personalizes hospital selection for elective surgery patients

Hospital grading and ratings systems may be helpful for many patients planning elective surgeries.

However, new findings show both outcomes and costs may vary widely from one patient to the next even in hospitals with consistently strong quality scores.

The study also shows machine learning can help optimize the selection process for each patient as an individual.

The work was conducted by industry researchers and colleagues at MIT and the University of Michigan. It’s posted in the Journal of Medical Internet Research.

Mohammed Saeed, MD, PhD, and co-investigators reviewed the medical records of 4,200 patients who had hip replacements in Greater Chicago in 2018. The team analyzed the data for inconsistencies in outcomes and costs as of 90 days after surgery.

As many patients would do when considering where to have an operation, the researchers looked at hospital scores across multiple sources. These included internet-based consumer ratings, quality stars, reputation rankings, average annual surgery volumes and average outcome rates.

They also analyzed rankings as compiled by machine learning algorithms. These were trained for personalized provider matching based on previous patients with similar characteristics and good outcomes.
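One simple way to implement "matching based on previous patients with similar characteristics and good outcomes" is nearest-neighbor lookup per hospital: find the past patients most like the new patient, then score each hospital by how those patients fared. The study's actual method is not shown here; the hospitals, features and data below are synthetic.

```python
# Hypothetical sketch of patient-hospital matching on similar prior patients.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Synthetic history: patient feature vectors and a good-outcome flag per hospital
hospitals = {
    "Hospital A": (rng.normal(size=(200, 6)), rng.integers(0, 2, 200)),
    "Hospital B": (rng.normal(size=(150, 6)), rng.integers(0, 2, 150)),
}

def rank_hospitals(new_patient, k=10):
    scores = {}
    for name, (X, good_outcome) in hospitals.items():
        nn = NearestNeighbors(n_neighbors=k).fit(X)
        _, idx = nn.kneighbors(new_patient.reshape(1, -1))
        # Score = share of the k most-similar past patients who did well here
        scores[name] = good_outcome[idx[0]].mean()
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_hospitals(rng.normal(size=6)))
```

The key contrast with star ratings is that the ranking changes per patient: two patients with different characteristics can be steered to different hospitals.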

It turned out that not much more than a quarter of patients had been matched to higher-ranking hospitals for outcomes while fewer than half were optimally matched for costs.

Consumer ratings, quality stars and machine learning-based rankings all consistently corresponded with better outcomes and lower costs, and across all grading approaches and analyses the improvement was greatest for the machine learning-based rankings.

“[A] personalized approach based on precision navigation that uses readily available data to characterize a patient’s medical complexity in the context of individual hospitals may be associated with substantial improvements in outcomes while also lowering total cost of care,” the authors comment in their discussion.

“There may be a substantive opportunity to increase the number of patients matched to appropriate hospitals across a broad variety of ranking approaches,” Saeed and co-authors conclude. “Elective hip replacement surgeries performed at hospitals where patients were matched based on patient-specific machine learning were associated with better outcomes and lower total costs of care.”

Five of the study’s six co-authors are affiliated with or employed by Health at Scale Corp., which markets the machine learning software used in the study.

The study is available in full for free.


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare