News You Need to Know Today

Editor's Choice: 10 Trending Stories from October

Thursday, October 31, 2019


Top Stories

3 AI systems detect TB better than radiologists

Deep learning neural networks can improve the detection of tuberculosis (TB) and provide health systems with considerable cost savings, according to new findings published in Scientific Reports. The study also revealed that such AI systems can outperform radiologists.

“Deep neural networks provide opportunities for new solutions to tackle TB, which kills more people worldwide than any single infectious disease,” wrote lead author Zhi Zhen Qin of the Stop TB Partnership in Geneva, Switzerland, and colleagues. “A major reason for this high mortality is the persistent gap in detection.”

The authors examined the ability of three separate deep learning systems—CAD4TB, qXR and Lunit INSIGHT for Chest Radiography—to detect abnormalities in chest x-rays associated with TB. The retrospective study focused on more than 1,100 adult patients who presented with symptoms suggestive of TB in Nepal and Cameroon. All patients underwent a chest x-ray and a common test for diagnosing TB, the Xpert MTB/RIF assay. Chest x-rays in both Nepal and Cameroon were read twice by different radiologists. The Xpert test was used as the study’s reference standard.

Overall, Lunit and qXR both had an area under the ROC curve (AUC) of 0.94. CAD4TB, meanwhile, had an AUC of 0.92.
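
For a concrete sense of the metric being compared here, the sketch below shows how an area under the ROC curve is computed when a CAD system's abnormality scores are measured against a binary reference standard such as Xpert MTB/RIF. It assumes scikit-learn, and the arrays are hypothetical placeholders, not study data.

```python
# Minimal sketch (not the study's code): scoring a CAD system's abnormality
# probabilities against Xpert MTB/RIF reference results. Arrays are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

xpert_positive = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # 1 = Xpert-confirmed TB
cad_scores = np.array([0.91, 0.12, 0.40, 0.77, 0.05, 0.88, 0.33, 0.21])  # CAD abnormality scores

auc = roc_auc_score(xpert_positive, cad_scores)
print(f"AUC = {auc:.2f}")  # studies then compare AUCs across systems (e.g., with a DeLong test)
```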

“We observed that all three systems performed significantly better than human radiologists and had higher AUCs than most of the current published literature on previous versions of CAD4TB,” the authors wrote. “Our results also document the first published evaluation of qXR and Lunit for detecting TB. There was no statistical difference among the AUCs of CAD4TB, Lunit, and qXR across the study sites, in pooled analysis, and when only smear negative individuals were considered.”

These systems could also help health systems save money by reducing the number of follow-up Xpert tests required. Any savings, the authors added, could be used by providers to help finance the purchase and implementation of AI technologies.

Artificial intelligence (AI) has been one of the biggest stories in healthcare for years, but many clinicians remain unsure about exactly how they should be using AI to help their patients. A new analysis in European Heart Journal explored that issue, providing cardiology professionals with a step-by-step breakdown of how to get the most out of this potentially game-changing technology.

Why all AI strategies need an imaging informaticist

Discussions about AI and radiology often focus on the researchers who help develop the algorithms and radiologists themselves. But a new analysis published in Academic Radiology shines a light on another key role in the implementation of AI: the imaging informaticist.

“An imaging informaticist is a unique individual who sits at the intersection of clinical radiology, data science and information technology,” wrote author Tessa S. Cook, MD, PhD, of the University of Pennsylvania in Philadelphia. “With the ability to understand each of the different domains and translate between the experts in these domains, imaging informaticists are now essential players in the development, evaluation and deployment of AI in the clinical environment.”

As AI research has escalated in recent years, data preparation has become an underappreciated aspect of the entire process. In fact, Cook noted, collecting, validating, labeling, converting and deidentifying data takes more time and effort than actually programming the algorithm being used. This is where the imaging informaticist comes in: he or she can take ownership of that data and provide important leadership when it comes to solving problems and moving forward.
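
As a rough illustration of the data preparation work described above, the sketch below shows one small, assumed piece of it: blanking direct identifiers in DICOM headers with pydicom before images are pooled for algorithm development. The tag list and file paths are hypothetical, and real pipelines follow the DICOM de-identification profiles plus validation and labeling steps.

```python
# Illustrative sketch only: de-identifying a DICOM header as one step of the data
# curation an imaging informaticist oversees. Tag list and paths are hypothetical.
import pydicom

PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]  # illustrative subset

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for tag in PHI_TAGS:
        if hasattr(ds, tag):
            setattr(ds, tag, "")      # blank out direct identifiers
    ds.remove_private_tags()          # drop vendor-specific private elements
    ds.save_as(out_path)

deidentify("incoming/chest_ct_001.dcm", "curated/chest_ct_001.dcm")  # hypothetical paths
```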

“Domain expertise is critical to the success and adoption of AI tools, both within and outside medicine,” Cook wrote. “Within radiology in particular, data scientists must learn both the clinical context for the problem being addressed as well as the technical aspects of the data, how it is created and stored, how to consume it and what it represents. Both radiologists and imaging informaticists make important contributions to the development of imaging-based AI tools, not only by lending their respective, necessary expertise, but also by critically evaluating the resulting tools for both clinical accuracy and likelihood of successful deployment in the clinical workflow.”

Imaging informaticists also provide value by helping evaluate an AI model. Sure, the initial research might indicate a model can achieve a high accuracy or area under the ROC curve—but has it been properly tested on external data? Is it truly “thinking” as a radiologist would in the same scenario? These are just some of the questions imaging informaticists might ask while confirming the validity of a given research project.

On a related note, imaging informaticists can also help healthcare providers integrate AI technologies into their day-to-day workflow.

“Multiple informatics considerations come into play during the deployment process,” Cook wrote. “The tool may reside within the facility (i.e., ‘on prem,’ or on the premises) or in the cloud. Each option has its advantages and disadvantages in terms of data security, processing speed, and hardware and software requirements, and different configurations may be needed at different locations within the same practice.”

The relationship between AI and radiology is only going to grow in the years ahead. Cook concluded her analysis by saying some radiologists will now need “to learn yet another skill set and body of knowledge in order to use this technology to improve the way we care for our patients.”

“It is important to leverage existing expertise of the imaging informaticists in our community, as well as train a pipeline of future such experts, if we aim to remain relevant in this space,” she wrote. “It is our responsibility as radiologists and imaging informaticists to ensure that this new technology functions as expected, does not harm our patients and improves the quality, efficiency, availability of and access to care for our patients.”


How racial bias can sink an algorithm’s effectiveness

Researchers have detected racial bias in an algorithm commonly used by health systems to make decisions about patient care, according to a new study published in Science.

The algorithm, the study’s authors explained, is deployed throughout the United States to evaluate patient needs.

“Large health systems and payers rely on this algorithm to target patients for ‘high-risk care management’ programs,” wrote Ziad Obermeyer, MD, of the School of Public Health at the University of California, Berkeley, and colleagues. “These programs seek to improve the care of patients with complex health needs by providing additional resources, including greater attention from trained providers, to help ensure that care is well coordinated. Most health systems use these programs as the cornerstone of population health management efforts, and they are widely considered effective at improving outcomes and satisfaction while reducing costs.”

While studying the algorithm—which the team noted does not specifically track race—Obermeyer et al. found that its predictions track health costs, such as insurance claims, rather than health needs. Black patients generate “lesser medical expenses, conditional on health, even when we account for specific comorbidities,” which means accurate predictions of costs will automatically contain a certain amount of racial bias.

Correcting this unintentional issue, the authors noted, could increase the percentage of black patients flagged by the algorithm for additional help from 17.7% to 46.5%. So they worked to find a solution. By retraining the algorithm to focus on a combination of health and cost instead of future costs alone, the researchers achieved an 84% reduction in bias. They are continuing this work, “establishing an ongoing (unpaid) collaboration” to make the algorithm even more effective.
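
The fix the authors describe is a change of label rather than a change of algorithm. The sketch below illustrates that idea in the simplest possible terms: the same features and model, trained once on future cost alone and once on a composite of health need and cost. The features, the 50/50 weighting and the ridge model are assumptions for illustration, not the study's actual index.

```python
# Illustrative sketch only: the core fix is changing the training label from predicted
# cost alone to a blend of health need and cost. Features, weighting and model are assumed.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                    # claims-derived features (placeholder)
future_cost = rng.gamma(2.0, 1000.0, size=1000)    # original label: next-year cost
active_conditions = rng.poisson(2.0, size=1000)    # a direct measure of health need

def zscore(v):
    return (v - v.mean()) / v.std()

composite_label = 0.5 * zscore(active_conditions) + 0.5 * zscore(future_cost)

cost_model = Ridge().fit(X, future_cost)        # reproduces the biased setup
need_model = Ridge().fit(X, composite_label)    # retrained on the corrected label
```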

“These results suggest that label biases are fixable,” the authors wrote. “Changing the procedures by which we fit algorithms (for instance, by using a new statistical technique for decorrelating predictors with race or other similar solutions) is not required. Rather, we must change the data we feed the algorithm—specifically, the labels we give it.”


Radiologists, AI an accurate combination for detecting breast cancer

Working alongside machine learning technology can help radiologists detect more breast cancers, according to new findings published in IEEE Transactions on Medical Imaging.

Researchers trained their advanced AI system with a dataset of more than 229,000 screening mammograms and more than one million images overall. Twelve attending radiologists, one resident and one medical student were then asked to read 720 screening mammograms, providing a “probability estimate of malignancy on a 0%-100% scale for each breast.”

The AI system achieved an area under the ROC curve (AUC) of 0.876. The readers, meanwhile, achieved a range of AUCs from 0.705 to 0.860, with a mean AUC of 0.778. Combining the AI system with the 14 readers, however, led to an average AUC of 0.891.
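
One simple way to picture the reader-plus-AI combination is to average each reader's 0%-100% estimate with the model's predicted probability and score the blend, as in the sketch below. The arrays are placeholders and the averaging is an assumed illustration, not necessarily the authors' exact fusion method.

```python
# Sketch (assumption, placeholder data): averaging a reader's malignancy estimates
# with the model's probabilities can outperform either source alone.
import numpy as np
from sklearn.metrics import roc_auc_score

truth = np.array([0, 1, 0, 0, 1, 1, 0, 1])              # biopsy-confirmed malignancy
reader_pct = np.array([5, 80, 60, 10, 30, 95, 20, 70])  # reader estimates, 0-100%
model_prob = np.array([0.10, 0.70, 0.15, 0.55, 0.90, 0.85, 0.20, 0.35])

hybrid = 0.5 * (reader_pct / 100.0) + 0.5 * model_prob
for name, score in [("reader", reader_pct), ("model", model_prob), ("hybrid", hybrid)]:
    print(name, round(roc_auc_score(truth, score), 3))
```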

“Our study found that AI identified cancer-related patterns in the data that radiologists could not, and vice versa,” senior author Krzysztof J. Geras, PhD, department of radiology at the New York University School of Medicine, said in a prepared statement. “AI detected pixel-level changes in tissue invisible to the human eye, while humans used forms of reasoning not available to AI. The ultimate goal of our work is to augment, not replace, human radiologists.”

The researchers said the next step of their research will include training the AI system on additional data, including a wider variety of findings. They also noted there is still a long way to go before these advanced technologies are used regularly throughout the medical landscape.

“The transition to AI support in diagnostic radiology should proceed like the adoption of self-driving cars—slowly and carefully, building trust, and improving systems along the way with a focus on safety,” lead author Nan Wu, NYU Center for Data Science, said in the same statement.


Featured Articles

Deep-learning algorithm diagnoses pneumonia in 10 seconds

If a newly tested AI system for reading chest X-rays achieves widespread adoption, patients presenting in the ER with symptoms of pneumonia can expect an up or down diagnosis—and with it the start of a treatment plan—in 10 seconds.

Until now, the norm has been 20-plus minutes, as it can take that long for a busy emergency radiologist to get around to reading the exam.

The deep-learning system, called CheXpert, was developed by researchers at Stanford University and implemented at Utah-based Intermountain Healthcare. It was introduced Monday in Madrid, Spain, at an international gathering of the European Respiratory Society, according to an Intermountain news release.

Stanford’s machine learning group initially used 188,000 chest X-rays from California to train the algorithm on what is and isn’t pneumonia. Intermountain supplied an additional 7,000 or so images to fine-tune the model for its patient population.
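
The training recipe described here, pretraining on a large external chest x-ray corpus and then fine-tuning on a smaller local set, follows a common transfer-learning pattern. The sketch below shows that pattern in PyTorch with a DenseNet-121 stand-in and synthetic local data; it is not the Stanford or Intermountain code.

```python
# Generic transfer-learning sketch, not the CheXpert/Intermountain implementation:
# a backbone stands in for one pretrained on a large external chest x-ray corpus,
# then is fine-tuned on a small local dataset (synthetic tensors here).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

model = models.densenet121(weights=None)  # in practice, load the externally pretrained weights
model.classifier = nn.Linear(model.classifier.in_features, 2)  # pneumonia vs. no pneumonia

local_images = torch.randn(32, 3, 224, 224)   # stand-in for the ~7,000 local radiographs
local_labels = torch.randint(0, 2, (32,))
local_loader = DataLoader(TensorDataset(local_images, local_labels), batch_size=8)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small learning rate for fine-tuning
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in local_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```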

Testing the system on 461 patients in Utah, researchers from both institutions found its 10-second detection of critical findings had high agreement with that of three experienced radiologists.

“CheXpert is going to be faster and as accurate as radiologists viewing the studies,” said Nathan Dean, MD, in prepared remarks. Dean is principal investigator of the study and section chief of pulmonary and critical care medicine at Intermountain Medical Center in Salt Lake City.

Dean expects the model to go live in several Intermountain ERs this fall.


AI helps predict when DCIS will progress to invasive breast cancer

Researchers have uncovered a new way to determine when ductal carcinoma in situ (DCIS) is most likely to progress to a more invasive cancer, according to new findings published in Breast Cancer Research.

The team used an advanced computer program to examine lumpectomy tissue samples from 62 different patients diagnosed with DCIS. This helped them focus on certain features of the tissue samples—tumor size and orientation, to be specific—that seemed to suggest a higher likelihood of DCIS progression. Those features were then combined with machine learning to establish detailed risk categories.
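
In broad strokes, the pipeline described above computes hand-engineered tissue features and then lets a classifier turn them into risk categories. The sketch below shows that general pattern with placeholder features, labels and a random forest; none of it is the study's actual implementation.

```python
# Sketch under stated assumptions: hand-engineered tissue features (e.g., size and
# orientation statistics) feeding a classifier that assigns risk groups. The feature
# values, labels and model are placeholders, not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
features = rng.normal(size=(62, 6))        # per-patient tumor size/orientation features
progressed = rng.integers(0, 2, size=62)   # 1 = DCIS later progressed (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, progressed)
risk_score = clf.predict_proba(features)[:, 1]
risk_group = np.digitize(risk_score, bins=[0.33, 0.66])   # 0 = low, 1 = intermediate, 2 = high
```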

The researchers hope their work can limit the amount of radiation patients are exposed to when receiving care. It could also keep patients from undergoing the Oncotype DX genetic test when not necessary.

“Current testing places patients in high risk, low risk and indeterminate risk—but then treats those indeterminates with radiation, anyway,” Anant Madabhushi, department of biomedical engineering at Case Western Reserve University in Cleveland, said in a prepared statement. “They err on the side of caution, but we’re saying that it appears that it should go the other way—the middle should be classified with the lower risk.”

“This could be a tool for determining who really needs the radiation, or who needs the gene test, which is also very expensive,” lead author Haojia Li, department of biomedical engineering at Case Western Reserve University, said in the same statement.


AI earns high marks for evaluating x-rays in ED setting

Deep learning algorithms can be trained to flag suspicious chest x-rays in an emergency department (ED) setting, according to new research published in Radiology.

“For DL algorithms to be clinically useful in medical imaging, their performance should be validated in a study sample that reflects clinical applications of this new technology,” wrote Eui Jin Hwang, department of radiology at Seoul National University College of Medicine in Korea, and colleagues. “Thus, the purpose of our study was to evaluate the performance of a DL algorithm in the identification of chest radiographs with clinically relevant abnormalities in the ED setting.”

The authors used a previously developed deep learning algorithm to analyze data from more than 1,000 consecutive patients who visited a single ED and underwent chest x-rays from Jan. 1 to March 31, 2017. The algorithm’s performance was then compared to that of a group of on-call radiology residents, who interpreted the imaging findings as they normally would.

Overall, the team found that the algorithm achieved an area under the ROC curve (AUC) of 0.95 for detecting relevant abnormalities. It had a sensitivity of 88.7% and specificity of 69.6% at the team’s chosen high-sensitivity cutoff (a probability score of 0.16). The sensitivity was 81.6% and specificity was 90.3% at the team’s chosen high-specificity cutoff (a probability score of 0.46).
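
The two operating points reported here come from applying different probability cutoffs to the same model output. The sketch below shows how sensitivity and specificity are read off at the 0.16 and 0.46 cutoffs, using placeholder labels and scores rather than the study's data.

```python
# Sketch: deriving a high-sensitivity and a high-specificity operating point from the
# same probability output by choosing different cutoffs (0.16 and 0.46 in the study).
import numpy as np

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])   # 1 = clinically relevant abnormality
scores = np.array([0.90, 0.20, 0.55, 0.17, 0.10, 0.40, 0.75, 0.05, 0.30, 0.48])

def sens_spec(labels, scores, cutoff):
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (0.16, 0.46):
    sens, spec = sens_spec(labels, scores, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```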

The residents, meanwhile, had a higher specificity than the algorithm and a lower sensitivity—but when using the algorithm’s output, their sensitivity did increase.

“The algorithm showed high efficacy in the classification of radiographs with clinically relevant abnormalities from the ED in this ad hoc retrospective review,” the authors wrote. “This suggests that this deep learning algorithm is ready for further testing in a controlled real-time ED setting.”

In addition, the authors noted, algorithms such as the one they evaluated could make a significant difference when it comes to screening or triaging patients.

“During the study period, the interval between image acquisition and reporting was paradoxically longer in radiographs with relevant abnormalities,” Hwang et al. wrote. “In this regard, the algorithm may improve clinical workflow in the ED by screening radiographs before interpretation by ED physicians and radiologists. The algorithm can inform physicians and radiologists if there is a high probability of relevant disease necessitating timely diagnosis and management.”


AI accurately detects melanomas

AI algorithms can identify melanomas in dermoscopic images with an accuracy comparable to human specialists, according to research published in JAMA.

“When compared with other forms of skin cancer, malignant melanoma is relatively uncommon; however, the incidence of melanoma is increasing faster than any other form of cancer, and it is responsible for the majority of skin cancer deaths,” wrote lead author Michael Phillips, MMedSci, University of Western Australia, and colleagues.

The study included more than 1,500 images of skin lesions from more than 500 patients, who were all treated from January 2017 to July 2018 at one of seven hospitals in the U.K. Images of suspicious lesions captured by three different cameras—an iPhone 6s, Galaxy S6 and digital single-lens reflex (DSLR) camera—were included in the study. The team turned to Deep Ensemble for Recognition of Malignancy (DERM), an algorithm developed by Skin Analytics Limited using data from more than 7,000 images, to see how its performance compared to physicians.

Overall, the DERM algorithm achieved an area under the ROC curve (AUC) of 90.1% for biopsied skin lesions and 95.8% for all lesions captured by the iPhone 6s camera. It also achieved an AUC of 85.8% for biopsied lesions and 93.8% for all lesions captured by the Galaxy S6 camera. For the DSLR camera, the AUC was 86.9% for biopsied lesions and 91.8% for all lesions. Physicians, meanwhile, achieved an AUC of 77.8% and a specificity of 69.9%.
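
Because the results are broken out by camera type, evaluation amounts to computing the same metric within each subgroup. The sketch below shows that stratified evaluation with pandas and scikit-learn on placeholder data, not the trial dataset.

```python
# Placeholder-data sketch: computing one algorithm's AUC separately per image source,
# mirroring how the DERM results are reported by camera type.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "camera":   ["iPhone 6s", "iPhone 6s", "Galaxy S6", "Galaxy S6", "DSLR", "DSLR"] * 10,
    "melanoma": [1, 0, 1, 0, 1, 0] * 10,
    "score":    [0.9, 0.2, 0.8, 0.3, 0.7, 0.4] * 10,
})

for camera, group in df.groupby("camera"):
    auc = roc_auc_score(group["melanoma"], group["score"])
    print(f"{camera}: AUC {auc:.2f}")
```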

“The findings of this diagnostic trial demonstrated that an AI algorithm, using different camera types, can detect melanoma with a similar level of accuracy as specialists,” the authors wrote. “The development of low-cost screening methods, such as AI-based services, could transform patient diagnosis pathways, enabling greater efficiencies throughout the healthcare service.”

Skin Analytics Limited funded this research.


AI helps manage ‘tedious, lengthy’ image labeling process

Deep convolutional neural networks (CNNs) can be trained to predict sequence types for brain MR images, according to new research published in the Journal of Digital Imaging.

The study’s authors noted that the variety of names manufacturers and healthcare providers use for sequence types can cause significant confusion.

“For multi-institutional data repositories, successful image annotation requires a designated, trained individual who focuses on the intrinsic image weighting and the characteristics of image content rather than manufacturer or institutional nomenclature,” wrote lead author Sara Ranjbar, PhD, of the Mayo Clinic in Phoenix, and colleagues. “This lengthy and tedious manual process creates a bottleneck for aggregation of large image collections and impedes the path to research. An automated annotation system that can match the speed of image generation in the big data era is sorely needed.”

Ranjbar et al. wanted to use deep learning techniques to predict the sequence type of MR scans for brain tumor patients, working specifically to differentiate between T1-weighted (T1W), T1-weighted post-gadolinium contrast agent (T1Gd), T2-weighted (T2W) and T2 fluid-attenuated inversion recovery (FLAIR).

“To the best of our knowledge, no previous work has focused on automatic annotation of MR image sequence types,” the authors wrote. “This form of classification can be highly useful for building large-scale imaging repositories that can receive submissions from heterogeneous data sources.”

The team turned to a database of more than 70,000 MR studies from more than 2,500 patients, focusing on T1W, T1Gd, T2W and FLAIR sequences. More than 14,000 2D images were chosen for the study, with 9,600 images being used to train the CNN, 2,400 being used to validate it and another 2,400 being used to test its effectiveness.
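
For context, a sequence-type classifier of this kind can be quite compact: a small 2D convolutional network over single slices with a four-way output for T1W, T1Gd, T2W and FLAIR. The PyTorch sketch below is illustrative only, with placeholder shapes and layer sizes, and is not the authors' architecture.

```python
# Compact 2D CNN sketch (PyTorch), not the authors' network: a four-way classifier
# over single MR slices for T1W, T1Gd, T2W and FLAIR sequence types.
import torch
import torch.nn as nn

class SequenceTypeCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SequenceTypeCNN()
logits = model(torch.randn(8, 1, 256, 256))   # batch of 8 single-channel slices
pred_sequence = logits.argmax(dim=1)          # 0=T1W, 1=T1Gd, 2=T2W, 3=FLAIR
```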

Overall, the average area under the ROC curve (AUC) on the validation set was more than 0.99. For all four sequence types being studied, sensitivity was at least 0.983, specificity was at least 0.994 and accuracy was at least 0.992.

“Our result shows that convolutional neural network predictor of MR sequence type can achieve accuracy, sensitivity, and specificity of 99% in identification of sequence type on previously unseen MR images,” the authors wrote. “This is a notable success given the two-fold variability in our training data: one with regard to image content (presence/absence of tumor, head position, slice number, treatment effects, etc.) and another as a result to diversity of imaging parameters (echo time, repeat time, field strength, etc.) across a spectrum of imaging sites. Our result suggests that the tedious, lengthy and erroneous task of manual image labeling for multi-institutional medical image repositories can reliably be managed using automatic annotation systems using artificial intelligence.”

A 3D network, the team added, may have been “the more natural choice,” but a 2D CNN ensured “a reasonable training time” and required less memory and computing power.


Google, care.ai join forces to build safer hospital rooms

Google and care.ai have announced a new partnership focused on bringing autonomous monitoring technology to hospital environments.

The collaboration involves care.ai’s autonomous monitoring platform and Google’s Coral Edge TPU. Traditional hospital rooms will be outfitted with AI-powered sensors that can send “context-aware, intelligent notifications” to hospital employees when a preventable accident or medical error is about to take place. The neural networks learn as they operate, evolving to make the rooms safer for patients as time goes on.
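
Neither company has published implementation details, but the general pattern for Coral hardware is on-device inference through the TensorFlow Lite runtime with an Edge TPU delegate. The sketch below assumes that pattern; the model file, the probability-style output, the threshold and the notify() hook are all hypothetical.

```python
# Generic on-device inference sketch, NOT care.ai's implementation: load a compiled
# TFLite model through the Edge TPU delegate, score a frame, and raise a notification
# above a threshold. Assumes a model whose output is a probability in [0, 1].
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="room_safety_model_edgetpu.tflite",                 # hypothetical model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def score_frame(frame: np.ndarray) -> float:
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(np.uint8))
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"]).max())

def notify(msg: str) -> None:        # stand-in for the platform's alerting channel
    print("ALERT:", msg)

risk = score_frame(np.zeros((300, 300, 3), dtype=np.uint8))        # placeholder frame
if risk > 0.8:
    notify("possible safety event detected in room")
```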

“Imagine if we brought the power of AI with autonomous monitoring to healthcare environments—we could prevent injuries, diseases, protocol breaches, and ultimately fatalities; while improving staff efficiency,” Chakri Toleti, founder and CEO of care.ai, said in a prepared statement. “By utilizing Google’s Edge TPU, care.ai has done just that—we have built an AI sensor to monitor, predict, and infer behaviors using billions of data points in real time. It is truly the world’s most advanced AI platform for healthcare.”

“We believe we are on the cusp of an amazing revolution for AI in healthcare and are proud to partner with care.ai to witness it becoming a reality,” Billy Rutledge, director of the Coral platform for Google, said in the same statement.


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand