News You Need to Know Today

Coronavirus, Inconsistent AI and More: 10 Trending Stories from February

Friday, February 28, 2020


Top Stories


Coronavirus will infect 2.5 billion people, kill 53 million by March, AI predicts

According to a new AI simulation, the Wuhan coronavirus could kill 52.9 million people within 45 days—and infect 2.5 billion overall.

The death toll of the disease thus far is believed to be 565, according to a website dedicated to tracking the outbreak. So how did the simulation reach those titanic totals?

James Ross, co-founder of the financial technology company HedgeChatter, built the AI model. He spoke to Forbes about his process.

“I started with day over day growth,” he said, as quoted by Forbes. “[I then] took that data and dumped it into an AI neural net using a recurrent neural network model and ran the simulation ten million times. That output dictated the forecast for the following day. Once the following day’s output was published, I grabbed that data, added it to the training data, and re-ran ten million times.”
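Ross's description boils down to a forecast loop: estimate day-over-day growth from the case counts so far, simulate many possible trajectories, then fold each new day's data back in and re-run. The sketch below is a toy Monte Carlo version of that loop, with made-up case counts and a simple growth model; it is not Ross's actual system, which used a recurrent neural network.

```python
import random

def forecast(case_counts, n_sims=10_000, horizon=45, seed=42):
    """Toy Monte Carlo stand-in for the re-run-and-retrain loop:
    estimate daily growth from the series so far, then average
    many simulated trajectories over the forecast horizon."""
    rng = random.Random(seed)
    # Day-over-day growth factors observed so far
    growth = [b / a for a, b in zip(case_counts, case_counts[1:])]
    mean_g = sum(growth) / len(growth)
    totals = []
    for _ in range(n_sims):
        cases = case_counts[-1]
        for _ in range(horizon):
            # Jitter the growth factor to model uncertainty
            cases *= max(1.0, rng.gauss(mean_g, 0.05))
        totals.append(cases)
    return sum(totals) / len(totals)

# Each day's new confirmed-case total would be appended here
# and the whole simulation re-run, as in the quoted workflow.
history = [100, 130, 170, 220, 290]
projected = forecast(history)
```

Because day-over-day growth compounds over the 45-day horizon, even modest growth rates yield enormous projections, which is part of why such simulations produce headline-grabbing totals.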

If these numbers alarm you—and, well, they should—it’s important to note that the AI model is missing key data. And healthcare workers around the world are focused on the outbreak, which should theoretically help limit its ability to spread.

Click below for the full story from Forbes:


How deep learning-based natural language processing is changing radiology

Natural language processing (NLP) can provide significant value in radiology, extracting key data from the electronic health record and prioritizing radiologist worklists. According to a new analysis published in the Journal of the American College of Radiology, deep learning (DL) technology is now being used to make NLP even more effective—and it’s a growing trend that shows no signs of slowing down.  

“DL NLP is increasingly encountered in the literature,” wrote lead author Vera Sorin, MD, Chaim Sheba Medical Center in Israel, and colleagues. “It is expected to play a larger role in research and clinical practice in coming years.”

Sorin et al. analyzed 10 academic studies on DL NLP and radiology, searching for key trends that could help radiologists and other imaging professionals gain a better understanding of this new technology. All studies were published from January 2017 to September 2019.

The team noted that researchers are exploring the effectiveness of DL NLP in a number of ways. For instance, some studies focused on how it can be used to flag and classify radiology reports.

“This can help clinicians focus on the important data in the reports,” the authors wrote. “Such focusing can reduce potential overlooking of critical findings and save reading time. For example, radiology computer vision research demands structured labels for images. Manual labeling can be time-consuming. DL NLP can provide automatic labeling for large data sets.”

One specific study the team analyzed involved using recurrent neural networks—which “process sequential information and thus are ideal for sentences”—to classify musculoskeletal radiology reports for the presence of fractures. In another study, a convolutional neural network (CNN) was successfully used to label pulmonary embolisms (PEs).
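The “ideal for sentences” point is that a recurrent network reads a report one token at a time while carrying a hidden state forward. Below is a minimal forward-pass sketch, with a toy vocabulary and random, untrained weights standing in for a trained model; none of this is the reviewed studies' actual code.

```python
import numpy as np

def rnn_classify(token_ids, embed, W_h, W_x, W_out, b):
    """Minimal recurrent forward pass over a tokenized report.
    The hidden state h carries context from earlier tokens,
    which is what makes RNNs suited to sequential text."""
    h = np.zeros(W_h.shape[0])
    for t in token_ids:
        h = np.tanh(W_h @ h + W_x @ embed[t])  # update state per token
    logits = W_out @ h + b                     # fracture / no-fracture scores
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # softmax over the two classes

# Toy setup: 10-word vocabulary, 8-dim embeddings, 16-dim hidden state.
rng = np.random.default_rng(0)
embed = rng.normal(size=(10, 8))
W_h = rng.normal(size=(16, 16)) * 0.1
W_x = rng.normal(size=(16, 8))
W_out, b = rng.normal(size=(2, 16)), np.zeros(2)
probs = rnn_classify([3, 1, 4, 1, 5], embed, W_h, W_x, W_out, b)
```

A real system would learn these weights from labeled reports; the sketch only shows why the recurrent structure fits variable-length text.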

In addition, DL NLP has shown the potential to help providers determine imaging protocols for patients, which “can save time and also potential[ly] decrease errors of contrast material injections.” And the authors observed that DL NLP can help users identify follow-up recommendations, though there is still work to be done before that method can outperform more traditional NLP techniques.

“In conclusion, research and use of DL NLP is expected to increase in coming years,” the authors wrote. “Understanding the basic concepts of this technology may help radiologists prepare for changes in their field.”


10 key uses for AI in radiology that don’t involve interpretation

AI promises to make a titanic impact on radiology, but most of the attention tends to focus on its ability to identify important findings in medical images. What about the technology’s non-interpretive qualities?

A new analysis published in Academic Radiology detailed some of the many other ways AI can help the specialty on a regular basis. These are 10 ways AI can be used in radiology that do not involve the interpretation of imaging findings:  

1. Noise Reduction

AI’s ability to enhance image quality helps radiologists do their job and provide the most accurate diagnosis possible.

“Initial deep learning techniques resulted in over-smooth images with loss of details and compromised visibility of essential structures,” wrote lead author Michael L. Richardson, MD, department of radiology at the University of Washington in Seattle, and colleagues. “However, this has been addressed with more recent techniques involving the use of convolutional neural networks (CNNs) and generative-adversarial networks, resulting in de-noised images without loss of critical information.”

2. Reducing radiation dose and contrast dose

Concerns about the radiation dose associated with CT and PET imaging have increased in recent years, with specialists all over the world working to reduce exposure through new technology solutions and updated protocols. AI can also play a key role in this area, with some algorithms creating “high-quality images directly from low-dose raw sensor data” and others working to turn low-quality PET images into high-quality images.

On a similar note, AI can help reduce the need for contrast agents during MRI scans, helping ease worries related to gadolinium-based contrast agents.

3. Assessing image quality right away

Retaking certain images is a necessary reality in radiology, especially during MRI scans. AI can help ensure “suboptimal images” are identified right away, helping make sure patients don’t have to be brought back in a second time after they have already left the premises.

“Recalling these patients for repeat imaging results in delayed diagnoses, increased costs to the health care system, and in some cases, increased radiation exposure,” the authors wrote.

4. Improving scheduling for “scanners, patients and staff”

As utilization continues to rise, it is crucial to improve the organization of an imaging provider’s equipment. AI can help in this area thanks to the massive amount of data being collected at all times through the use of electronic health records; algorithms can detect “inefficiencies in utilization in scheduling,” the authors explained, helping address such issues before they even happen.   

5. Improved billing

“Insurance claim denials can account for as much as a 3–5% loss in revenue,” the authors wrote. “This has led healthcare organizations to turn to AI techniques such as NLP and other machine learning tools for innovative solutions to optimize billing, report classification, and claim denial reconciliation.”

6. Developing and optimizing protocols

AI can help specialists develop an ideal protocol—and make sure technologists stick to the script as much as possible. In addition, a “properly trained CNN might provide an acceptable surrogate for human readers when performing a protocol optimization study.”

7. Worklist prioritization

AI can be used to prioritize urgent findings and enhance the distribution of examinations to radiologists. This remains one of the most well-known non-interpretive uses of AI in radiology, an effective way to keep turnaround times low and provide care to patients who need it the most.

8. Image annotation and segmentation

Annotations help radiologists communicate with patients and track findings over time, and segmentation is a helpful step for focusing on specific aspects of a medical image. AI models can help specialists with these tasks—and image labeling—helping them spend more time making diagnoses and delivering the best patient care possible.

This is especially valuable during clinical trials, Richardson et al. explained.

“Image annotation and segmentation remains an important component in oncologic clinical trials where lesions (target or nontarget) are followed from baseline at various time points to assess treatment response,” the authors wrote. “These studies require precise and consistent evaluation of the lesions across different readers at different time points, and can be best optimized using annotations and in some instances automated/semiautomated segmentation or volumetric assessment. Studies have shown that deep learning can efficiently monitor changes and perform quantitative analysis before, during, and after treatment and can also help to predict prognostic endpoints.”

9. Image-based search engines

Machine learning-powered search engines for medical images can be used for commercial and training purposes, allowing users to search for “the visual content of the image.”

10. Detecting, and preventing, adversarial attacks

Cyberattacks on PACS and even medical images themselves are a legitimate threat, one that could get much worse over time. Some algorithms have even been developed that can “trick” a radiologist, creating fake imaging findings that lead to an incorrect diagnosis. Researchers are currently working to combat such attacks with AI technology, according to the study’s authors.

“Methods such as digital image watermarking and ML algorithms to detect tampered images, such as ‘feature squeezing’ or ‘defensive distillation,’ have been proposed for the detection of manipulated images,” they wrote. “In the meantime, prevention of an adversarial attack begins with recognition of the problem and by adopting some simple standards of practice.”


Inconsistent AI: Deep learning models for breast cancer fail to deliver after closer inspection

Numerous deep learning models can detect and classify imaging findings with a performance that rivals human radiologists. However, according to a new study published in the Journal of the American College of Radiology, many of these AI models aren’t nearly as impressive when applied to external data sets.

“This potential performance uncertainty raises the concern of model generalization and validation, which needs to be addressed before the models are rushed to real-world clinical practice,” wrote first author Xiaoqin Wang, MD, University of Kentucky in Lexington, and colleagues.

The authors explored the performance of six deep learning models for breast cancer classification, including three that had been previously published by other researchers and three they designed themselves. Five of the AI models—including all of them designed for this specific study—used transfer learning, which “pretrains models on the natural image domain and transfers the models to another imaging domain later.” The final model, on the other hand, used the instance-based learning method, a “widely used deep learning method for the object detection with proven success in multiple image domains.”

The models were all trained on the Digital Database for Screening Mammography (DDSM) data set and then tested on three additional external data sets. Overall, the three previously published models achieved area under the receiver operating characteristic curve (auROC) scores ranging from 0.88 to 0.95 on images from the DDSM data set. The three models designed for this study achieved auROC scores from 0.71 to 0.79.

When applied to the three external data sets, however, the six AI models all suffered, achieving scores between 0.44 and 0.65.
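The auROC metric behind these scores has a simple interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with invented scores, showing how sharp internal performance can coexist with near-chance external performance:

```python
def auroc(labels, scores):
    """auROC as the probability that a random positive case scores
    higher than a random negative one (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores: clean separation on "internal" data,
# heavy overlap on an "external" set (the generalization gap).
internal = auroc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.7, 0.3, 0.2, 0.1])  # 1.0
external = auroc([1, 1, 1, 0, 0, 0], [0.4, 0.6, 0.2, 0.5, 0.7, 0.3])  # 1/3, below chance
```

A score of 0.5 is coin-flip performance, which is why external results of 0.44 to 0.65 are so concerning for models that scored up to 0.95 on their home data set.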

“Our results demonstrate that deep learning models trained on a limited data set do not perform well on data sets that have different data distributions in patient population, disease characteristics, and imaging systems,” the authors wrote. “This high variability in performance across mammography data sets and models indicates that the proclaimed high performance of deep learning models on one data set may not be readily transferred or generalized to external data sets or modern clinical data that have not been ‘seen’ by the models.”

Wang et al. then concluded their study by pointing to the need for more consistency across the board when it comes to the training and development of AI models to be used in healthcare.

“Guidelines and regulations are needed to catch up with the AI advancement to ensure that models with claimed high performance on limited training data undergo further assessment and validation before being applied to real-world practice,” they wrote.


Meet the robot that draws blood better than humans

In the near future, patients may have their blood drawn and tested by an advanced robot—and it’s a move that would benefit both patients and healthcare providers.

Researchers created and tested the robot, publishing their findings in Technology.

“Obtaining venous access for blood sampling or intravenous fluid delivery is an essential first step in patient care,” wrote lead author Josh Leipheimer, a biomedical engineering student at Rutgers University-New Brunswick, and colleagues. “However, success rates rely heavily on clinician experience and patient physiology. Difficulties in obtaining venous access result in missed sticks and injury to patients, and typically require alternative access pathways and additional personnel that lengthen procedure times, thereby creating unnecessary costs to healthcare facilities.”

The device uses ultrasound imaging to locate and draw blood from the patient’s veins. It can also handle blood samples once they have been drawn.

Leipheimer and colleagues found that their robot performed these tasks as well as, or even better than, a human healthcare provider. It had an overall success rate of 87% and a success rate of 97% for “nondifficult venous access.”

The team sees a future when ambulances, emergency rooms and hospitals all embrace this technology.

“A device like ours could help clinicians get blood samples quickly, safely and reliably, preventing unnecessary complications and pain in patients from multiple needle insertion attempts,” Leipheimer said in a prepared statement.


Featured Articles

A major ethical question regarding AI and healthcare

The rise of AI in healthcare—especially radiology—has launched countless conversations about ethics, bias and the difference between “right” and “wrong.” A new analysis published in La radiologia medica, the official journal of the Italian Society of Medical Radiology, explores perhaps the biggest ethical question of them all: Who is responsible for the benefits, and harms, of using AI in healthcare?

The authors focused on radiology with their commentary, but their message is one that can be applied to any specialty looking to deliver patient care through the use of AI.

“When human beings make decisions, the action itself is normally connected with a direct responsibility by the agent who generated the action,” wrote lead author Emanuele Neri, University of Pisa in Italy, and colleagues. “You have an effect on others, and therefore, you are responsible for what you do and what you decide to do. But if you do not do this yourself, but an AI system, it becomes difficult and important to be able to ascribe responsibility when something goes wrong.”

Ultimately, according to the authors, the radiologists using AI are responsible for any diagnosis provided by that AI. AI does not have free will or “know” what it is doing, so one must point to the radiologists themselves.  

Due to this responsibility, the team added, “radiologists must be trained on the use of AI since they are responsible for the actions of machines.” That responsibility also carries over to any specialists involved in the research and development of any AI system. If you helped build a dataset for AI research, in other words, one could argue that you share part of the blame if that AI makes an incorrect diagnosis. This is just one of many reasons that it is so crucial to develop trustworthy AI.

Another key point in the analysis is that AI automation can actually have a negative impact on the radiologist’s final diagnosis or treatment decision.

“Automation bias is the tendency for humans to favor machine-generated decisions, ignoring contrary data or conflicting human decisions,” the authors wrote. “Automation bias leads to errors of omission and commission, where omission errors occur when a human fails to notice, or disregards, the failure of the AI tool.”

Neri et al. concluded by looking at the “radiologist-patient relationship” in this new era of AI technology, pointing out that providers must be honest about the origins of their decisions.

“A contingent problem with the introduction of AI and of no less importance is transparency toward patients,” the authors wrote. “They must be informed that the diagnosis was obtained with the help of the AI.”


AI tech behind deepfake videos creates bogus imaging results—and it could be big for radiology

Generative adversarial networks (GANs), a fairly new breakthrough in AI, are capable of creating fake images that look incredibly real. It’s the same technology, in fact, responsible for those deceptive “deepfake” videos that put words in the mouths of public figures such as President Trump and Facebook Founder Mark Zuckerberg.

According to a new analysis published in Academic Radiology, GANs could also make a big impact on the future of healthcare research, especially in the field of radiology. They can generate fake, realistic medical images that researchers are eager to explore.

“This recent innovative technology has the potential to be applied to a variety of radiology tasks,” wrote lead author Vera Sorin, Chaim Sheba Medical Center in Israel, and colleagues. “These tasks include generation of fake images to increase datasets for training deep learning algorithms, translation of one image type to another and improving the quality of existing images. The radiology community can benefit from getting acquainted with this technology.”

The authors reviewed academic publications from 2017 to September 2019, focusing on any papers that detailed GAN applications in radiology. Overall, 33 studies made the cut, and they included research in four key areas: image reconstruction and denoising, data augmentation, transfer between modalities and image segmentation.

“Fourteen studies described GANs for image reconstruction and denoising,” the authors wrote. “These studies aimed to improve image quality and reduce radiation dose, an assignment that can greatly impact the availability and usage of imaging modalities for diagnostic and screening purposes.”

The researchers doing this work found significant success. One team, for instance, trained a GAN to remove metallic artifacts from CT scans.

Data augmentation was another common topic for researchers working with GANs. Annotating medical images requires a lot of time, energy and knowledge—but GANs can help researchers by creating fake images that can contribute to the development of AI algorithms.

“The main pitfall in generated images is that they sometimes struggle to compete with real ones,” the authors wrote. “Synthetic images may have low resolution or be blurred. For this reason, algorithm training is initially done using fake images, and then refined with real images. This way benefiting training and decreasing the number of required real images.”
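The train-on-fake-then-refine-on-real recipe the authors describe can be sketched with any classifier. Below, a plain logistic regression (not a GAN, and not any reviewed study's model) is pretrained on plentiful, slightly shifted "synthetic" samples and then fine-tuned on a small "real" set; all data here is invented.

```python
import numpy as np

def train_logistic(X, y, w=None, epochs=300, lr=0.5):
    """Logistic regression by gradient descent; pass existing
    weights back in to fine-tune instead of starting fresh."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        z = np.clip(X @ w, -30, 30)           # avoid overflow in exp
        p = 1 / (1 + np.exp(-z))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def make_blobs(n, shift, rng):
    """Two-class toy data; `shift` mimics the domain gap
    between generated and real images."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 0.8, size=(n, 2)) + (2.0 * y - 1.0)[:, None] + shift
    return X, y

rng = np.random.default_rng(7)
X_fake, y_fake = make_blobs(500, shift=0.3, rng=rng)  # plentiful generated samples
X_real, y_real = make_blobs(20, shift=0.0, rng=rng)   # scarce labeled real samples

w = train_logistic(X_fake, y_fake)                    # initial training on fake data
w = train_logistic(X_real, y_real, w=w, epochs=50)    # refined with real data
accuracy = float(((X_real @ w > 0) == (y_real == 1)).mean())
```

The point of the two-stage schedule is exactly what the quote describes: the cheap synthetic data does the bulk of the training, and the scarce real data corrects for its imperfections.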

GAN technology can also be used for “generating CT-like images based on MR images or generating MR images across different sequences.” This technique can improve an AI model’s effectiveness by using the knowledge from one modality for “improved differentiation” in another modality.

The team did add that this technology is still in the “proof-of-concept stage” at this time, but researchers continue to explore the potential of GANs in radiology and other healthcare specialties.  

“In conclusion, GANs are increasingly studied for various radiology applications,” the authors wrote. “They enable the creation of new data, which can be used for clinical care, education and research.”


AI measures blood flow to predict death, heart attack—and is more accurate than humans

Researchers have used AI technology to predict a patient’s chance of death, heart attack or stroke better than human doctors, sharing their findings in a new study in Circulation.

The team achieved this breakthrough by, for the first time ever, using AI to instantly measure and evaluate blood flow. In the past, such assessments have been performed using such techniques as cardiovascular magnetic resonance (CMR) imaging. The images, however, were “incredibly difficult” to interpret in a timely manner.

The study’s authors explored data from more than 1,000 patients who underwent routine CMR scans. AI-generated results were then compared with the patients’ outcomes—including death, heart attack, stroke and heart failure—to measure the technique’s effectiveness.

“AI is moving out of the computer labs and into the real world of healthcare, carrying out some tasks better than doctors could do alone,” corresponding author James C. Moon, MD, University College London (UCL) Institute of Cardiovascular Science, said in a prepared statement. “We have tried to measure blood flow manually before, but it is tedious and time-consuming, taking doctors away from where they are needed most, with their patients.”

“The predictive power and reliability of the AI was impressive and easy to implement within a patient's routine care,” first author Kristopher D. Knott, MBBS, also of the UCL Institute of Cardiovascular Science, said in the same statement. “The calculations were happening as the patients were being scanned, and the results were immediately delivered to doctors. As poor blood flow is treatable, these better predictions ultimately lead to better patient care, as well as giving us new insights into how the heart works.”


The rise of AI is approaching—have we really thought this through?

AI technology could replace countless jobs in the not-so-distant future, making an impact on workforces all over the world. According to a new analysis published in Information and Organization, researchers and policymakers alike should pay especially close attention to this development and get involved now—before it’s too late.

“We have to think about what aspects of work have meaning and value to us,” co-author Diane E. Bailey, Cornell University in Ithaca, New York, said in a prepared statement. “We might decide, ‘Maybe AI can do this better than a person, but we don't care, because we get some value out of it.’”

The authors noted it will likely take longer for AI to transform entire industries than people think—and in some cases, it may not even end up being a good fit at all. In addition, technological breakthroughs often lead to “different outcomes for different organizations,” meaning AI won’t impact every entity in the same way.

One example the researchers explored was a 2017 study about an algorithm that detects infections in a neonatal intensive care unit. It was designed to tell providers when they should intervene—but the doctors and nurses “treated the algorithm’s output with skepticism” and “came to use it as just another tool for arriving at their own diagnosis.” In other words, the AI model expected to automate a key aspect of healthcare simply became another alert to monitor throughout the day.

As unpredictable as the implementation of AI and automation can be, the authors suggested a “unified approach” to study these technologies. The approach starts with issues related to “power and ideology among stakeholders,” shifts to “issues of variation” and ends with “issues of the institutional effects of technology use.”

“We have to understand how all of these market mechanisms operate if we're going to be savvy enough to work in that world and say, ‘No, we want technology that looks like this’ [or] ‘Design something that operates this way,’” Bailey said in the same statement. “We need to work backwards from some desired future that we want, to get the technologies that will help us get there.”


AI improves radiologist performance when detecting breast cancer

AI algorithms can help radiologists achieve a “significant improvement” in their ability to detect breast cancer, according to a new study published in The Lancet Digital Health.

The authors developed and validated an AI model for detecting breast cancer using data from more than 170,000 mammography examinations performed at five institutions in South Korea, the United States and the U.K. Examinations were performed on equipment from numerous vendors, and the data included both screening and diagnostic mammograms. The AI model was based on deep convolutional neural networks and its training consisted of two separate stages.

A group of 14 radiologists was then brought in and asked to read and assess 320 additional mammograms for a separate reader study. Radiologists performed their reads with and without assistance from the algorithm, allowing the authors to explore the AI’s effectiveness.

“The 14 radiologists consisted of seven breast specialists and seven general radiologists,” explained author Hak Hee Kim, MD, University of Ulsan College of Medicine in South Korea, and colleagues. “Both groups were board-certified radiologists, but general radiologists had not been specifically trained in breast imaging whereas breast specialists had been trained in breast imaging for at least six months.”

Overall, the algorithm achieved an area under the receiver operating characteristic curve (AUROC) of 0.959. Its sensitivity was 91.4% and specificity was 86%.

In the reader study, meanwhile, the radiologists achieved an AUROC of 0.810 without AI assistance and 0.881 with AI assistance. The AI’s performance level in the reader study was an AUROC of 0.940. Radiologist sensitivity and specificity both improved with AI assistance. Sensitivity increased from 75.27% to 84.78% and specificity increased from 71.96% to 74.64%.

According to the authors, the study’s findings confirm “that AI has the potential to improve early-stage breast cancer detection in mammography.”

“Such improvements could result in an increase in screen-detected cancers and decrease in interval cancers, which would improve the efficacy of mammography screening,” they wrote. “Real-world clinical benefit needs to be evaluated by future prospective studies.”


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand