News You Need to Know Today

Month in Review: April’s Most Read

Thursday, April 30, 2020



Virtual reality shows COVID-19 lungs in vivid detail

Lung specialists in the nation’s capital are using 3D virtual reality to fine-tune treatments of COVID-19 patients and to educate others—clinicians as well as laypeople—on just how destructive the virus can be.

The images are rendered from CT scans and allow viewers to navigate the lungs in 360-degree tours, as demonstrated in a video podcast posted by George Washington University Hospital.

The 13-minute presentation features commentary from Keith Mortman, MD, the hospital’s chief of thoracic surgery.

Mortman describes the case of a patient who was transferred from another hospital when his condition deteriorated to the point where he needed to be intubated and placed on a mechanical ventilator.

Evaluating the patient’s lungs with the technology, Mortman says, physicians quickly saw the “stark contrast between the virus-infected abnormal lung and the healthier adjacent lung tissue.”

Asked about his first reaction upon examining a COVID patient in this manner, Mortman notes that his department has been using 3D VR for surgical planning and patient education for around four years.

Still, upon encountering the images obtained from the hard-hit COVID patient, he “wanted to get the message out and get this picture out to the public.”

Mortman felt especially motivated to put the images in front of people who’ve been flouting calls from health officials to practice social distancing.

“Hopefully the public will see these images and really start to understand why this [COVID-19] is so serious and how this virus really is not discriminating among various people,” Mortman says in the podcast. “It’s really starting to affect people of all different ages.”

Around 80% of patients testing positive for COVID-19 exhibit minor or even no symptoms, Mortman adds. “However, for the 20% or so of patients who experience shortness of breath or more severe symptoms, the illness often requires treatment in an ICU with a machine to help them breathe.”

Meanwhile the ultimate severity of a COVID-19 infection remains painfully difficult to predict from one coronavirus-positive patient to the next—although there’s hope that AI may be able to help with that assessment.


Experimental AI separates mild COVID from serious trouble to come

A disease that hits most people mildly but some very hard—and so wreaks havoc with its unpredictability among the infected—would seem a good target for AI’s predictive prowess.

So it went with COVID-19 against a severity-sniffing algorithm in a small but promising study.

Developed by researchers at New York University in partnership with two hospitals in China, the experimental decision-support tool may help ER physicians decide which patients to admit and which to send home. That’s a critical decision to make during a pandemic that is stretching many hospitals’ resources past capacity.

To test the AI tool, researchers collected demographic, laboratory and radiological findings from 53 patients who, in January, tested positive for the novel coronavirus at the two Chinese hospitals participating in the study.

As has been typical around the world, nearly all 53 patients initially presented with mild cough, fever and stomach upset. Within a week, though, a minority of the patients had developed severe pneumonia or acute respiratory distress syndrome (ARDS).

It turned out that, contrary to earlier small studies, patterns seen in lung imaging and other markers—including age and sex—were not helpful in predicting which patients would get sickest.

Rather, the AI tool found, changes in three physiological metrics were the best predictors of severe disease soon to develop: elevated hemoglobin levels, deep muscle aches (myalgia), and slightly raised levels of the liver enzyme alanine aminotransferase.

Weighing these readings together with other factors, the team applied its AI tool and predicted risk of ARDS with up to 80% accuracy.
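For readers curious about the mechanics, the general approach of weighing a few clinical flags into a single risk score can be sketched in a few lines of Python. To be clear, the weights, intercept and feature names below are invented for illustration; they are not the NYU team's actual model or coefficients.

```python
import math

# Hypothetical coefficients -- NOT the study's actual values. The paper
# reports ALT, myalgia and hemoglobin as the strongest predictors.
WEIGHTS = {"alt_elevated": 1.4, "myalgia": 1.1, "hemoglobin_elevated": 0.9}
BIAS = -2.0  # hypothetical intercept

def ards_risk(features):
    """Map binary clinical flags to a 0-1 risk score via a logistic link."""
    z = BIAS + sum(WEIGHTS[name] * float(flag) for name, flag in features.items())
    return 1.0 / (1.0 + math.exp(-z))

high = ards_risk({"alt_elevated": True, "myalgia": True, "hemoglobin_elevated": True})
low = ards_risk({"alt_elevated": False, "myalgia": False, "hemoglobin_elevated": False})
```

In a real model the coefficients would be fit to patient data and combined with many more features; the sketch only shows how a handful of markers can drive a graded risk estimate rather than a yes/no call.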

The work was published online March 30 in Computers, Materials & Continua.

In a news release sent by NYU, corresponding study author Megan Coffee, MD, PhD, says the model needs to be validated in larger studies.

Still, she adds, it “holds promise as another tool to predict the patients most vulnerable to the virus,” albeit “only in support of physicians’ hard-won clinical experience in treating viral infections.”

Click here for the news release and here for the full study.


AI researchers hope to ID healthcare workers with COVID-19 before they show symptoms

Researchers from West Virginia University (WVU) have launched a new study aimed at identifying healthcare workers infected with the new coronavirus before they become symptomatic. The project is a collaboration between WVU and Oura Health, a smart-ring technology company that specializes in tracking the sleep and daily activities of its users.

The team is using an advanced AI model and Oura Health’s wearable ring technology to track physicians, nurses and other healthcare providers throughout West Virginia. Data is also being gathered from healthcare workers exposed to COVID-19 in U.S. cities such as New York City and Philadelphia.

The goal is to ultimately detect when providers are infected with the virus as quickly as possible, limiting their ability to potentially spread COVID-19 to their colleagues, patients and loved ones back home.

“We are continuously monitoring the mind-body connectivity through our integrated neuroscience platform measuring the autonomic nervous system, fatigue, anxiety, circadian rhythms, and other human resilience and recovery functions,” Ali Rezai, MD, executive chair of the WVU Rockefeller Neuroscience Institute, said in a prepared statement. “Our AI-driven models are currently predicting symptoms 24 hours prior to onset, and we are working toward a three-plus day forecast. This forecasting capability will help us get ahead of this pandemic; limit the spread to protect healthcare workers, their families, and our communities; and improve our understanding of health recovery.” 

“At Oura, we’ve heard firsthand from our users how the physiological signals tracked by the ring have predicted the onset of the virus before other symptoms manifest,” Harpreet Rai, CEO of Oura Health, said in the same statement. “We’re grateful we can apply this knowledge to help vulnerable caregivers swiftly identify the earliest signs of the disease, and take the appropriate protective measures to limit its spread.”  
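Neither WVU nor Oura has published the model itself. As a loose sketch of the general idea only, with all signal names and thresholds invented here, an early-warning check might flag readings that deviate sharply from a wearer's own recent baseline:

```python
from statistics import mean, stdev

def flags_anomaly(baseline, today, z_threshold=2.5):
    """Flag a reading (e.g., resting heart rate in bpm) that deviates
    strongly from the wearer's own recent history. Illustrative only --
    the WVU/Oura model is far richer than a single z-score."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# A wearer whose resting heart rate has hovered around 58-62 bpm
baseline = [60, 59, 61, 58, 60, 62, 59]
```

The appeal of per-person baselines is that a reading unremarkable for the population (say, 72 bpm) can still be a strong anomaly for one individual.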


2 plane crashes, 5 warning points for AI-eager radiology

On Oct. 29, 2018, Lion Air Flight 610 plummeted into the Java Sea. Less than five months later, Ethiopian Airlines Flight 302 nosedived into a field. Combined, the crashes took the lives of all 346 people aboard.

What the two disasters had in common was the make and model of the aircraft: the Boeing 737 MAX, introduced in 2017. This plane uses an innovative AI-incorporating application, the “maneuvering characteristics augmentation” system (MCAS). Upon investigating, Indonesia’s National Transportation Safety Committee found that, among other problems, the system’s designers had made faulty assumptions about flight crew response to MCAS malfunctions.

Two radiologists at the University of California, San Francisco, reflect on what went wrong AI-wise, outlining potential parallels between aviation and their specialty—which is probably the medical branch furthest along with AI adoption. Their commentary is running in Radiology: Artificial Intelligence.

“Automated systems designed to improve safety may create dangers or cause harm when they malfunction,” wrote John Mongan, MD, PhD, and Marc Kohli, MD. “The effects of an artificially intelligent system are determined by the implementation of the system, not by the designers’ intent.”

Here are synopses of the lessons they urge radiology to draw from the disasters.

1. A malfunctioning AI system may have the opposite of its intended positive effect; a failing system can create new safety hazards. AI system failures and their downstream effects “need to be considered independent of the intended purpose and proper function of the system,” Mongan and Kohli write. “In particular, it should not be assumed that the worst-case failure of a system that includes AI is equivalent to the function of that system without AI.”

2. Proper integration into the working environment is key: The accuracy of inputs into an AI algorithm is as important as the accuracy of the AI algorithm itself. Implementation of an AI algorithm—connecting it to inputs and outputs—“requires the same level of care as development of the algorithm, and testing should cover the fully integrated system, not just the isolated algorithm,” the authors point out. “Furthermore, AI systems should use all reasonably available inputs to cross-check that input data are valid.”
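The cross-checking principle the authors describe can be illustrated with a short hypothetical snippet (the function name, thresholds and scenario are invented, not from the commentary):

```python
from typing import Optional

def validated_input(primary, secondary, max_disagreement) -> Optional[float]:
    """Cross-check two redundant readings of the same quantity -- e.g., two
    angle-of-attack sensors on an aircraft, or a DICOM header value against
    a quantity re-derived from the pixels. Return their mean when they agree,
    None when they conflict, so the downstream AI abstains and escalates to
    a human instead of acting on a possibly faulty input."""
    if abs(primary - secondary) > max_disagreement:
        return None  # disagreement: flag for review rather than auto-act
    return (primary + secondary) / 2.0
```

MCAS, notoriously, acted on a single angle-of-attack sensor; the sketch shows the opposite design stance, where no single input is trusted unchecked.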

3. People working with AI need to be made aware of the system’s existence and must be trained on its expected function and anticipated dysfunction. Mongan and Kohli emphasize that the flight crews of the two doomed 737 MAX planes were wholly unaware of the existence of MCAS inside their planes. “At a meeting with Boeing after the first of the two crashes, an American Airlines pilot said, ‘These guys didn’t even know the damn system [MCAS] was on the airplane—nor did anybody else.’”

4. AI systems that automatically initiate actions should alert users clearly when they do so and should have a simple, fast and lasting mechanism for override. The authors suggest that MCAS’s closed-loop design—the output of the automated system directly initiates an action without any human intervention—could similarly challenge their specialty going forward. “At present, most radiology AI provides triage, prioritization or diagnostic decision support feedback to a human, but in the future closed-loop systems may be more common,” they write, adding that closed-loop systems “cannot be ignored and must be inactivated to avoid consequences.”

5. Regulation is necessary but may not be sufficient to protect patient safety. This may be a particular concern when the regulation is “subject to the conflicts of interest inherent in delegated regulatory review.”

“We have the opportunity to learn from these failures now, before there is widespread clinical implementation of AI in radiology,” Mongan and Kohli conclude. “If we miss this chance, our future patients will be needlessly at risk for harm from the same mistakes that brought down these planes.”

The authors flesh out each of these points in some detail, and Radiology: Artificial Intelligence has posted the commentary in full for free.


AI identifies drugs with synergistic potential to fight COVID-19

A healthcare AI company has spotted several unexpected, off-label drug combinations that may prove effective against COVID-19. Whatever becomes of those findings, the company is offering its complete AI toolkit free of charge to drug developers working on COVID treatments.

In announcing its work, Germany-based Innoplexus, which specializes in AI-based drug discovery and development, says the promising combos emerged from its processing of hundreds of studies covering thousands of patients.

Most of the findings mentioned in the announcement involve the potential of the decades-old, much-discussed malaria fighter hydroxychloroquine as used with other drugs.

For example, the analysis showed that combining chloroquine and tocilizumab—used to treat severe rheumatoid arthritis—might work well against COVID.

Innoplexus says it’s working on validating all high-potential combinations in in vitro and in vivo studies.

“[W]e recommend that governments and regulators take bold action and remove hurdles in order to substantially lower the incidence of critical and lethal cases of COVID-19,” says Innoplexus CEO Gunjan Bhardwaj. “Our analysis has shown a good basis for allowing these combinations of previously approved drugs in off-label-use with further evaluation.”

Innoplexus adds that, to encourage further research, it’s offering its AI-based Clinical Trial Designer and Launch support solutions free of charge to any biotech or pharma company working on an answer for COVID-19.


Digital tool predicts COVID-19 supply/demand stresses in each state

Any interested party can now see state-by-state forecasts for peak hospitalization vs. capacity due to COVID-19 over the coming four months.

The tool, developed by researchers at the University of Washington in Seattle, also predicts coronavirus-driven mortalities, lengths of stay, days in ICU and ventilator utilization.

Christopher J.L. Murray, MD, DPhil, and colleagues drew data on confirmed COVID deaths from various sources to develop a statistical model positing the expected outlook.

Among the key findings their model presents for the national picture:

  • Excess demand from COVID-19 at the expected peak of the pandemic, the second week of April, will be 64,175 total beds and 17,309 ICU beds.
  • At the peak of the pandemic, some 19,481 ventilators will be in use in the U.S.
  • The next four months will see a total of 81,114 deaths from COVID-19.
  • Deaths will drop below 10 per day between May 31 and June 6.

In their project overview, Murray and team suggest healthcare providers and health officials use their estimates to help plan ways to close the gap between supply of and demand for hospital resources.

They note their four-month estimates are predicated on widespread compliance with social distancing measures in all states within the next week. That includes states that have been slow to adopt such measures.

Developing and implementing strategies to, among other things, reduce non-COVID-19 demand for services and temporarily increase capacity, they write, are “urgently needed, given that peak volumes are estimated to be only three weeks away.”

Click here for the project overview and here for the state-level forecasts.


COVID-busting robots disinfecting PPE at Baptist Health

The multihospital Baptist Health system is responding to the national shortage of N95 masks by sanitizing its existing supply for safe reuse. And it’s delegated the task to non-humans.

Manufactured by Xenex Disinfection Services, the technology uses pulsed xenon ultraviolet light to damage the DNA of bacteria and viruses, according to a news release.

Baptist says the deployment builds on its existing use of the machines, called LightStrike robots, to disinfect patient areas. The health system has now assigned one robot to a single room in each of its hospitals. Each machine so dedicated cleans N95 masks—and only N95 masks.

“As COVID-19 continues to spread, our teams are working hard to research and implement evidence-based strategies to address projected needs,” Elizabeth Ransom, MD, chief physician executive at Baptist Health, commented in prepared remarks.  

The program works with a 10-minute disinfection cycle—five minutes for one side and five for the other, Baptist Health explained. Masks are strung along wire shelving like articles of clothing on a clothesline.

COVID-19 units and others that use a lot of N95s are first in line to have their masks refreshed, and plans are in place to expand the program to other departments across the health system.

David Rice, MD, the institution’s chief quality officer, said using the robots to disinfect PPE masks is “just one of the ways we are rethinking how we do things so that we can benefit our patients and team members alike.”


AI expert builds COVID-19 prediction model for smaller cities

With so many eyes fixed on New York City as the “epicenter” of the COVID-19 crisis in the U.S., it might go unnoticed at the national level that nearly 60,000 infections could be recorded some 150 miles to the north by June 8. 

That’s according to a professor of computer science at Rensselaer Polytechnic Institute (RPI) in Troy, N.Y., who has developed a predictive model for use by planners in the Empire State’s smaller cities.

Malik Magdon-Ismail, PhD, who has expertise in machine learning, data mining and pattern recognition, built his AI-aided model specifically for New York State’s Capital Region. The model incorporates data from Albany, Rensselaer, Saratoga, and Schenectady counties.

However, he tells RPI’s news division, building similar predictive tools for other small cities would be as easy as “running the numbers.”

The estimate of nearly 60,000 cases is based on 50% of residents in the region hewing to Gov. Andrew Cuomo’s stay-at-home order.

Bump the compliance rate to 75%, and the infection count won’t get above 30,000, according to Magdon-Ismail’s model.
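Magdon-Ismail's actual model is not public, but the mechanism by which higher compliance shrinks the case count can be illustrated with a textbook SIR (susceptible-infected-recovered) simulation. Everything below—the rates, the population, the way compliance scales the contact rate—is a generic illustration, not the RPI model:

```python
def total_infections(population, beta, gamma, compliance, days=200):
    """Discrete-time SIR sketch: `compliance` is the fraction of contact
    reduction achieved by distancing, which scales down transmission.
    Parameters are illustrative, not the RPI model's."""
    s, i, r = population - 1.0, 1.0, 0.0
    eff_beta = beta * (1.0 - compliance)  # distancing lowers the contact rate
    for _ in range(days):
        new_inf = eff_beta * s * i / population
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i  # everyone ever infected

low_compliance = total_infections(1_000_000, beta=0.35, gamma=0.1, compliance=0.5)
high_compliance = total_infections(1_000_000, beta=0.35, gamma=0.1, compliance=0.75)
```

The nonlinearity is the point: pushing the effective reproduction number below 1 does not merely halve the outbreak, it collapses it, which is why a jump from 50% to 75% compliance can more than halve the projected count.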

Modeling smaller cities with machine learning “is a challenge in that few data points are available and updated less frequently than the picture of the nation as a whole or an epicenter like New York City,” reports RPI communications specialist Mary Martialay. “Generic machine learning operating on such data would likely produce inaccurate predictions. To compensate, Magdon-Ismail focuses on simple models and uses ‘robust’ algorithms that incorporate solutions beyond that of the mathematical ideal.”

To this Magdon-Ismail adds that the robustness comes from considering a collection of models “that have near-optimal levels of consistency with the data. I find a variety of models that fit the data, and then I use all of those models together to predict.”
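In the spirit of that description—and only as a sketch, not Magdon-Ismail's code—averaging every model whose fit is within a tolerance of the best fit looks like this (tolerance and the squared-error criterion are assumptions):

```python
def robust_predict(models, x, data, tolerance=0.10):
    """Average the predictions of all models whose training error is within
    `tolerance` (additive) of the best model's error. `models` are callables;
    `data` is a list of (input, target) pairs. Illustrative only."""
    def error(m):
        return sum((m(xi) - yi) ** 2 for xi, yi in data) / len(data)
    errs = [(error(m), m) for m in models]
    best = min(e for e, _ in errs)
    near_optimal = [m for e, m in errs if e <= best + tolerance]
    return sum(m(x) for m in near_optimal) / len(near_optimal)

# Toy usage: three candidate linear models for data following y = 2x
data = [(1, 2), (2, 4), (3, 6)]
models = [lambda x: 2 * x, lambda x: 2 * x + 0.1, lambda x: 10 * x]
```

With scarce data, many models fit almost equally well; averaging over that whole near-optimal set hedges against picking the one model that happened to fit the noise.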

To read the full news report from RPI, click here.


Free online tool reads chest X-rays as well as physicians

A free web tool known as “Chester the AI Radiology Assistant” can assess a chest X-ray online within seconds, predicting the likelihood of 14 diseases while keeping patients’ private medical data secure.

Chester, though still rudimentary, can process a user’s upload and output diagnostic predictions for atelectasis, cardiomegaly, effusion, infiltration, masses, nodules, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening and hernias with 80% accuracy. A green-and-red sliding scale pinpoints the diagnostic probability for each condition, ranging from “healthy” to “risk.”

Joseph Paul Cohen and his colleagues at the Montreal Institute for Learning Algorithms debuted Chester in a paper published earlier this year, in which they wrote that they were looking to build a system that could scale with minimal computational cost while preserving privacy and diagnostic accuracy. While anyone with access to a web browser, including on a smartphone, can use the tool, it’s intended to supplement a professional’s opinion, not replace it.

“Deep learning has shown promise to augment radiologists and improve the standard of care globally,” Cohen et al. wrote in their paper. “Two main issues that complicate deploying these systems are patient privacy and scaling to the global population.”

The team developed Chester using an implementation of the previously established CheXnet DenseNet-121 model and the same train-validation-test split as Rajpurkar et al.’s initial 2017 paper on the subject. The tech allowed Chester to analyze chest scans with a web-based, but locally run, system.

The tool’s interface is designed to be simple and comprises three main components: out-of-distribution detection, disease prediction and prediction explanation. After an individual uploads their X-ray, Chester takes around 12 seconds to initially load models, 1.3 seconds to compute relevant graphs and an additional 17 seconds to compute gradients to explain its predictions.

A separate function allows patients to view a heatmap of image regions that influenced Chester’s predictions, and at any time they can view an out-of-distribution heatmap showing where the image varied from the team’s training distribution. If the heatmap is too bright, that’s an indication the patient’s image is too different from Chester’s training dataset for the tool to make an accurate prediction.

“We will prevent an image from being processed if it is not similar enough to our training data in order to prevent errors in predictions,” the developers warned.
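Chester's actual out-of-distribution detector is learned from image features, but the gating logic itself can be sketched with a simplified, hypothetical stand-in—here a plain distance-to-training-mean check with an invented threshold:

```python
from statistics import mean

def accept_image(feature_vec, train_mean, threshold=0.2):
    """Gate an input before prediction: refuse when its features sit too far
    from the training distribution. Chester's real detector is learned; this
    mean-absolute-deviation check is only an illustrative placeholder."""
    distance = mean(abs(f - m) for f, m in zip(feature_vec, train_mean))
    return distance <= threshold

# Hypothetical training-set feature means
train_mean = [0.5, 0.5, 0.5]
```

The design choice matters more than the metric: a model that abstains on unfamiliar inputs fails loudly and safely, instead of emitting a confident prediction about an image unlike anything it was trained on.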

Cohen and colleagues said they created Chester for the medical community, so researchers can “experiment with it to figure out how it can be useful.” Other aims included:

  • Demonstrating the strength of open datasets and advocating for more unrestricted public dataset creation projects.
  • Establishing a lower bound of care—the team said all radiologists should be “no worse” than Chester.
  • Designing a model that could be copied to globally scale health solutions without untenable costs.
  • Demonstrating that patient privacy can be preserved while using a web-delivered system.

“This tool can be used as an assistant and as a teaching tool,” Cohen and co-authors wrote. “The system is designed to process everything locally, which ensures patient privacy as well as enables it to scale from 1 to 1 million users with negligible overhead. We hope this prompts radiologists to give us feedback which would help us improve this tool and adapt it to their needs.”


IBM formally partnering with expanding healthcare AI company

An AI-championing health-tech company whose name sounds like that of a consumer group has announced a formal partnership with the Big Tech corporation behind the AI-pioneering Watson technology.

Birmingham, Alabama-based U.S. Consumer Healthcare Advocacy Group, or USCHAG, says the details of the relationship are confidential.

The partnership “includes elements of data acquisition, utilization of artificial intelligence and IBM’s Watson capabilities to effect positive and empowering change for consumers when working with the American healthcare system,” according to an announcement sent by USCHAG.

The company says it’s operated by a team with experience in medicine, health insurance, hospitals and other areas of U.S. healthcare.

USCHAG “champions the idea that AI is the answer to the changes required to improve the American healthcare system,” the announcement states. “IBM chose the company as a partner because both companies are aligned in their desire to enhance people’s lives.”

USCHAG says it’s been working with IBM for some time and is making the announcement to coincide with the scheduled launch of several USCHAG brands next month.


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare