News You Need to Know Today

Editor's Choice: 10 Trending Stories from January

Friday, January 31, 2020

In cooperation with Northwestern and Nabla

Top Stories


How blockchain could change the future of healthcare

Blockchain technology has not been embraced as fully in healthcare as it has in some other industries. According to a new report from GlobalData, however, blockchain’s impact on patient care could be substantial.

“Since blockchain can be used as an interoperability layer, it can help to link the data between disparate systems creating a transparent and secure path for patient data sharing,” Urte Jakimaviciute, senior director of healthcare market research for GlobalData, said in a statement. “Blockchain-based systems can also give patients more control over their personal data, as the technology allows them to track who has access to their data and when.”
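The report itself does not include an implementation, but the audit-trail idea Jakimaviciute describes—an append-only record of who touched a patient’s data and when—can be illustrated with a minimal Python sketch. Everything below (the record fields, the identifiers, the helper functions) is hypothetical and deliberately simplified; a real blockchain layers distributed consensus on top of this kind of hash chaining.

```python
import hashlib
import json
import time

def add_access_record(chain, patient_id, accessor, action):
    """Append a tamper-evident record of who accessed a patient's data and when."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "patient_id": patient_id,
        "accessor": accessor,    # e.g. a hospital, payer or researcher identifier
        "action": action,        # e.g. "read" or "share"
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links this record to the one before it
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify_chain(chain):
    """Recompute every hash to confirm no record has been altered or removed."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
add_access_record(log, "patient-001", "clinic-a", "read")
add_access_record(log, "patient-001", "payer-b", "share")
print(verify_chain(log))  # True unless a record has been tampered with
```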

The report, available here, explores some of the biggest trends in blockchain, looking specifically at how researchers are already testing its usefulness in healthcare with a variety of case studies. For example, one pilot project backed by the FDA investigated blockchain’s potential for giving the pharmaceutical supply chain a much-needed facelift.

“Pharma supply chain management has a lot of issues deriving from lack of modernization and a high number of intermediaries involved,” Jakimaviciute said in the same statement. “Blockchain technology can potentially support the digitization of supply chains, overcome the middleman problem and increase transparency and efficiency.”

The report also addresses some of the technology’s current limitations, including “high implementation costs, slow transaction performance, limited storage capabilities” and more. Despite these issues, blockchain is still viewed as a crucial piece of healthcare’s future—even if implementation does continue to move at a considerably slow pace.

“Whether hyped or not, blockchain offers higher security and transparency which is a top priority for the entire healthcare industry—from pharmaceutical companies to payers and hospitals,” Jakimaviciute said.


Google's AI model outperforms radiologists in breast cancer detection, prediction

Deep learning-based AI models can identify breast cancer more accurately than radiologists, according to new research published in Nature. What does this mean for the future of cancer detection?

The study’s authors, including several representatives from Google Health, trained their AI algorithm with mammograms from more than 25,000 patients in the United Kingdom and more than 3,000 women in the United States. To compare the algorithm’s performance with human specialists, the team asked a separate, unaffiliated research organization to conduct a reader study involving six MQSA-compliant radiologists. The study included 500 mammograms selected at random from the U.S. dataset, and radiologists used BI-RADS scores to grade each image.

Overall, the researchers noted, the AI algorithm “exceeded the average performance of radiologists by a significant margin.” The AI’s area under the receiver operating characteristic curve (AUC-ROC) beat the average radiologist’s by an absolute margin of 11.5%.
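For readers unfamiliar with the metric, the comparison works roughly like the Python sketch below. The labels and scores are invented for illustration—they are not from the Nature study—and a BI-RADS-style rating is simply treated as an ordinal score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth: 1 = cancer confirmed, 0 = no cancer
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])

# Continuous malignancy scores from an AI model (illustrative values)
ai_scores = np.array([0.10, 0.35, 0.80, 0.25, 0.60, 0.90, 0.40, 0.70])

# Ordinal BI-RADS-style ratings from one reader, used as a score (illustrative values)
reader_scores = np.array([1, 2, 4, 3, 3, 5, 2, 4])

ai_auc = roc_auc_score(y_true, ai_scores)
reader_auc = roc_auc_score(y_true, reader_scores)

# An "absolute margin" is simply the difference between the two AUCs
print(f"AI AUC: {ai_auc:.3f}, reader AUC: {reader_auc:.3f}, margin: {ai_auc - reader_auc:+.3f}")
```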

“In this study we present an AI system that outperforms radiologists on a clinically relevant task of breast cancer identification,” wrote Scott Mayer McKinney, MS, and colleagues. “These results held across two large datasets that are representative of different screening populations and practices.”

The algorithm also led to an absolute reduction in false-positive findings of 5.7% for the U.S. dataset and 1.2% for the U.K. dataset.

“False positives can lead to patient anxiety, unnecessary follow-up and invasive diagnostic procedures,” the authors wrote.

McKinney et al. also noted that the AI algorithm and human radiologists disagreed on certain findings, though no exact pattern could be determined. This, they added, “suggests potentially complementary roles for the AI system and human readers in reaching accurate conclusions.”

Nature also published a commentary about the findings written by Etta D. Pisano, MD, chief research officer of the American College of Radiology. Pisano explored both AI’s potential and the limitations of this new research.

“McKinney and colleagues’ results suggest that AI might some day have a role in aiding the early detection of breast cancer, but the authors rightly note that clinical trials will be needed to further assess the utility of this tool in medical practice,” she wrote. “The real world is more complicated and potentially more diverse than the type of controlled research environment reported in this study. For example, the study did not include all the different mammography technologies currently in use, and most images were obtained using a mammography system from a single manufacturer.”

Artificial intelligence (AI) has been one of the biggest stories in healthcare for years, but many clinicians still remain unsure about how, exactly, they should be using AI to help their patients. A new analysis in European Heart Journal explored that exact issue, providing cardiology professionals with a step-by-step breakdown of how to get the most out of this potentially game-changing technology.

3 eye-opening findings from AI in Healthcare’s 2020 Leadership Survey

AI in Healthcare recently published its 2020 Leadership Survey, asking more than 1,200 physicians, executives, IT specialists and healthcare professionals about AI and how it might impact the future of patient care. The survey results were fascinating—click here to read about seven key findings—and there were some smaller takeaways that may be worth exploring in greater detail.   

These are three relatively small findings from the survey that deserve additional attention:

1. IT departments are helping foot the bill

While administrators hold the financial responsibility for AI investments 43% of the time, IT departments are actually paying for the technology at 26% of facilities. It isn’t necessarily a shocking development—AI will affect IT employees as much as anyone—but it does show that health systems are truly embracing the potential of these solutions. Also, it’s hard to make concrete changes at any business without the full support of your IT department. This statistic shows that everyone is in on this together, a positive sign for anyone hoping to see AI implementation continue to rise.

2. The more the merrier? Some health systems are already using more than 50 AI applications

Of the survey respondents currently using AI in clinical practice, the vast majority (89%) are utilizing somewhere between one and 10 applications. Another 9% are using 11-50 AI applications. Two percent, however, are using more than 50 AI applications—a number big enough to make your jaw drop.

How are health systems managing this many applications? How are they getting the funding for so much new technology? These questions—and more!—immediately come to mind.

3. Health systems want their AI and EMR to work together

Survey respondents were asked about their top priorities when it comes to the development of AI solutions. Fifth on that list? They want to be able to use EMR data to predict patient outcomes.

Researchers around the world are currently exploring this concept, but the early results seem to indicate that there is still some significant work to be done before AI and EMRs can truly detect at-risk patients as effectively as providers want. One recent study, for example, found that applying AI to EHR data could help doctors predict AFib—but the EHR-fueled AI model was not “substantially better” than a simpler, more direct model.

“Further work is needed to explore the technical and clinical applications of this model to improving outcomes,” the study’s authors concluded.

For now, that seems to be the most common answer when it comes to using EMR data for predicting outcomes. As time goes on, however, the sky is the limit when it comes to transforming patient care through the power of AI.


5 things to remember when researching AI and radiology

AI continues to be one of the hottest topics in all of healthcare, especially radiology, and more academic researchers are exploring the subject than ever before. So what separates a good AI study from a bad one? That’s exactly what the editorial board of RSNA’s Radiology journal hoped to cover with its new commentary.

The team noted that specific guidelines will likely be developed in the near future that focus on AI research related to diagnostic imaging. In the meantime, however, the editorial board wanted to share a guide to help researchers ensure they are on the path to success.

The board provided a list of several issues would-be authors must keep in mind while developing their research. These are five of the most important considerations from that list:

1. Use an external test set for your final assessment:

AI models often produce impressive findings … until they are paired with outside data and unexpected bias or inconsistencies are revealed. Researchers should test their algorithms on outside images—as in, from a completely different institution—to show that their study has real potential to make an impact on patient care.
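As a rough sketch of what that looks like in practice—the model, features and datasets below are hypothetical placeholders, with the loading code omitted—the reportable result is computed on data the developers never touched during training or tuning:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def evaluate_with_external_test_set(X_internal, y_internal, X_external, y_external):
    """Train and tune on internal data; report final performance on outside data only."""
    # Internal data (from the developing institution) is split for training and tuning
    X_train, X_val, y_train, y_val = train_test_split(
        X_internal, y_internal, test_size=0.2, random_state=0
    )
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Internal validation performance, used only while developing the model
    val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

    # The headline number comes from a completely different institution's data
    external_auc = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
    return val_auc, external_auc
```

A large gap between the two numbers is often the first sign that the training data carried hidden bias.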

2. Use images from a variety of vendors:

For an algorithm to be clinically useful, it has to work with imaging equipment manufactured by a wide variety of vendors.

“Radiologists are aware that MRI scans from one vendor do not look like those from another vendor,” wrote David A. Bluemke, MD, PhD, editor of Radiology and a radiologist at the University of Wisconsin Madison School of Medicine and Public Health, and colleagues. “Such differences are detected by radiomics and AI algorithms. Vendor-specific algorithms are of much less interest than multivendor AI algorithms.”

3. Train your algorithm with a widely accepted reference standard:

If researchers don’t turn to a standard of reference that the industry already trusts, it will be hard to get interested parties to take the research seriously. For example, Bluemke et al. noted that the Radiology editorial board does not consider clinical reports to be a good enough standard of reference for any radiology research.

“Given the frequent requirement of AI for massive training sets (thousands of cases), the research team may find the use of clinical reports to be unavoidable,” the authors wrote. “In that scenario, the research team should assess methods to mitigate the known lower quality of the clinical report when compared with dedicated research interpretations.”

4. Compare your algorithm’s performance to experienced radiologists:

It’s much more important to see how AI models compare to experienced radiologist readers than nonradiologist readers or other algorithms. Researchers may want to compare their work to radiology trainees or nonradiologists to provide a certain level of context, the authors added, but this shouldn’t be used as an evaluation of the algorithm’s “peak performance.”

5. Make your algorithm available to the public:

Think your algorithm could make a real impact? Let other specialists try it out for themselves.

“Just like MRI or CT scanners, AI algorithms need independent validation,” the authors wrote. “Commercial AI products may work in the computer laboratory but have poor function in the reading room. ‘Trust but verify’ is essential for AI that may ultimately be used to help prescribe therapy for our patients.”

A survey conducted by the Ann and Robert H. Lurie Children's Hospital of Chicago found more than 75% of parents are generally receptive to the use of artificial intelligence (AI) tools in the management of children with respiratory illnesses in the emergency department (ED). However, some demographic subgroups, including non-Hispanic black parents and younger parents, had greater reservations about the use of these technologies.

3 key differences between the diagnostic reasoning of humans and AI

The use of AI in healthcare is rapidly rising, but healthcare providers remain an absolutely essential part of patient care, according to a new analysis published in CMAJ.

AI can’t replace human reasoning, the authors added, but it can certainly play a valuable role in assisting physicians on a daily basis.

“Several studies have shown the extent to which AI can be used to make and support diagnosis in medicine,” wrote Thierry Pelaccia, University of Strasbourg in France, and colleagues. “Since current evidence supports the effectiveness of AI for only a small selection of diagnostic tasks and human experts remain able to learn and diagnose a wide array of conditions, human intelligence would seem to remain essential to diagnosis for now.”

Pelaccia and colleagues wrote about some of the most important differences between human intelligence and AI. These are three of the biggest differences covered in their analysis:

1. The way they make diagnoses

“Physicians mainly use a hypothetico-deductive approach to make diagnoses. After generating diagnostic hypotheses early, they spend most of their diagnostic time testing them by collecting more data,” the authors wrote. “This approach is underpinned by cognitive processes that, according to the dual-process theory, can be either intuitive or analytical.”

AI, on the other hand, makes a diagnosis based on properly collected and labeled data—the model stores knowledge and is continuously developed until it “proposes accurate outputs” on a training set.

“Although humans understand cause-and-effect relations, these are not yet modelled in AI,” the team added. “This subject has been studied for a long time in AI, but it is only recently that first attempts to define an AI that ‘thinks like a human’ have been proposed.”
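In code terms, the supervised pattern the authors describe—labeled examples in, a fitted model out—looks roughly like the minimal, purely illustrative sketch below (the features and labels are invented):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical labeled training data: each row is a set of collected findings,
# each label is the confirmed diagnosis (1 = disease present, 0 = absent)
X_train = [[0.2, 1.0], [0.8, 0.1], [0.7, 0.9], [0.1, 0.3]]
y_train = [0, 1, 1, 0]

# "Training" is fitting the model so that it reproduces the labels on these examples
model = LogisticRegression().fit(X_train, y_train)
print(accuracy_score(y_train, model.predict(X_train)))

# A new, unlabeled case is then diagnosed by the fitted model
print(model.predict([[0.75, 0.2]]))
```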

2. What can lead to a misdiagnosis

There are more than 12 million misdiagnoses made annually in the United States, according to data shared by the authors, and diagnostic error rates range from 5% to 15%. Cognitive biases are a common cause, and one researchers have studied closely over the years.

Errors made by AI models, however, typically come from issues related to how they were trained. Perhaps the data is not up to par, for example, or maybe the actual experiment was not designed well.

3. Physicians can learn a lot with limited data

Physicians can go far with “very few data,” working to make the proper diagnosis and provide the best patient care possible. AI models, however, are nothing without massive datasets that take time, energy—and money—to put together.

“Most AI systems do not model intuition and therefore require substantial data to make a relevant diagnosis,” the authors wrote. “This is why AI is presently most effective in situations where all the data of the problem to be solved are immediately accessible, such as in medical imaging. Artificial intelligence also requires data transformation, but in AI this is a much more complex and time-consuming process.”

Overall, Pelaccia et al. concluded, there is still a significant amount of work to be done in the development of AI. The quality and accessibility of medical data must be improved, they wrote, and physicians will need to fully embrace these evolving technologies instead of being resistant to change. Over time, however, it is possible for AI to “assume its place as a routine tool for medical practice.”


Featured Articles

White House shares new AI principles, calling for healthy collaboration and limited regulation

The White House has proposed a new set of principles for governing the development of AI solutions throughout the United States. The move is aimed at promoting public engagement, limiting regulatory overreach and promoting the development of fair, unbiased algorithms.

The 10 principles are:

  1. Public Trust in AI
  2. Public Participation
  3. Scientific Integrity and Information Quality
  4. Risk Assessment and Management
  5. Benefits and Costs
  6. Flexibility
  7. Fairness and Non-Discrimination
  8. Disclosure and Transparency
  9. Safety and Security
  10. Interagency Coordination

Michael Kratsios, chief technology officer of the United States, wrote about this White House proposal Tuesday, Jan. 7, in a new commentary for Bloomberg Opinion.

“Innovations in AI are creating personalized cancer treatments, improving search and rescue disaster response, making our roadways safer with automated vehicles, and have the potential for so much more,” Kratsios wrote. “But with growing concerns about data privacy, big tech companies, and the rise of technology-enabled authoritarianism in China and elsewhere, more people are starting to wonder: Must we decide between embracing this emerging technology and following our moral compass?”

This represents a “false choice,” he notes, writing that the United States can support the advancement of new technologies while still demonstrating the country’s values of “freedom, human rights and respect for human dignity.”

Kratsios emphasized the importance of allowing AI rulemaking to be a collaborative process between American citizens, academics, industry leaders and other individuals directly impacted by these developments. He also wrote that a “light-touch regulatory approach” is crucial to ensure innovation is being promoted and not restricted or minimized.  

“Given the pace at which AI will continue to evolve, agencies will need to establish flexible frameworks that allow for rapid change and updates across sectors, rather than one-size-fits-all regulations,” Kratsios wrote. “Automated vehicles, drones and AI-powered medical devices all call for vastly different regulatory considerations.”

The White House is also working to ensure AI solutions are developed with “fairness, transparency, safety and security” all in mind. Government agencies are asked to base their policy decisions on “the best possible scientific evidence,” and data integrity is to be protected at all times.

While “governments elsewhere” are using AI “in the service of the surveillance state,” Kratsios wrote, the United States “will continue to advance AI innovation based on American values.”

“The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation,” he wrote. “Europe and our other international partners should adopt similar regulatory principles that embrace and shape innovation, and do so in a manner consistent with the principles we all hold dear.”

According to the White House, the principles will be released online as a memorandum once they have been finalized. In addition, Kratsios is scheduled to discuss the Trump Administration’s thoughts on AI at length during the annual CES trade show in Las Vegas.

As Wired’s Tom Simonite wrote on Jan. 6, the United States has rejected working with other countries around the world to establish principles related to the development and implementation of AI.


AI monitors glucose levels with ECG data

Patients can now use AI to monitor their glucose levels with off-the-shelf, noninvasive wearable sensors, according to a new study published in Scientific Reports.

One significant advantage of this method, the authors explained, is that it uses ECG data—this means patients can stay informed about their health without needing a “fingerpick test.”

“Fingerpicks are never pleasant and in some circumstances are particularly cumbersome,” Leandro Pecchia, PhD, from the school of engineering at the University of Warwick, said in a prepared statement. “Taking fingerpick during the night certainly is unpleasant, especially for patients in pediatric age.”

Pecchia and colleagues developed the technique, which can even monitor a patient’s glucose levels as they sleep, and then tested its effectiveness through two pilot studies. Overall, the method achieved an average sensitivity and specificity of 82%, which is comparable to the performance of currently available continuous glucose monitors. One reason for the team’s success is that their AI algorithm was trained using data from each individual subject, displaying the potential of personalized patient care.

“Our approach enable personalized tuning of detection algorithms and emphasize how hypoglycemic events affect ECG in individuals,” Pecchia said in the same statement. “Basing on this information, clinicians can adapt the therapy to each individual. Clearly more clinical research is required to confirm these results in wider populations. This is why we are looking for partners.” 
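For context, the 82% sensitivity and specificity reported above are simple ratios computed from a confusion matrix. The sketch below shows the arithmetic on invented labels and predictions; it is not the team’s code or data.

```python
from sklearn.metrics import confusion_matrix

# Hypothetical detector output: 1 = hypoglycemic event, 0 = normal glucose
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # share of true events the detector caught
specificity = tn / (tn + fp)  # share of normal readings correctly left alone
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```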


Patients trust medical AI more than healthcare providers do

Patients and healthcare providers both see potential in AI’s ability to improve healthcare. Patients, however, appear to trust AI technology more than providers.

These findings both come from a new survey published in Artificial Intelligence in Medicine. Such a survey was necessary, the study’s authors explained, to help “provide guidance for the future development of medical AI.”  

“Since the public has distinct viewpoints and anticipations regarding the rapid development of medical AI, an extensive opinion survey is conducive to our understanding of their specific comments, receptivity, and demands,” wrote Yifan Xiang, Sun Yat-sen University in China, and colleagues.

The research team surveyed a total of 2,780 people from throughout China from Oct. 12 to Oct. 30, 2018. More than 54% of participants were female, and more than 43% were between the ages of 30 and 39 years old. In addition, 54.5% of all participants were healthcare workers. The survey included questions about how each respondent perceived AI, what they wanted out of these evolving technologies and more. Healthcare workers were asked about how open they were to participating in AI research.

Overall, the authors found “no significant difference” between healthcare workers and non-healthcare workers when it came to being receptive to medical AI. There was more variation, however, in the two groups’ demands and perceptions of medical AI.

For example, healthcare workers want AI that can “alleviate daily repetitive work and improve outpatient guidance and consultation.” Non-healthcare workers, on the other hand, are more concerned about how AI can improve a physician’s diagnosis.

Also, while both groups trust human doctors more than medical AI, the percentage of respondents who said they trust medical AI was actually higher among non-healthcare workers (11.4%) than healthcare workers (7.5%).

This limited trust of AI, the authors explained, is a sign that physicians should not fear being replaced by algorithms any time soon.

“Doctors are irreplaceable in medicine in the foreseeable future,” the researchers wrote. “The ability to understand the specific physical conditions of each patient requires the doctor to attune his or her perception to the patient's history and physical exam, which seems uniquely human. In addition, human doctors have a more nuanced understanding of the needs of patients at the end of life in terms of not only the length of life but also the quality of life.” 

Another statistic that stood out from the team’s survey was that approximately 95% of healthcare workers intend to learn more about AI through training and research.

“Given the inexorable trend towards intelligent healthcare, doctors who use AI could probably replace those who do not,” the authors wrote. “Healthcare workers have begun to realize the importance of learning AI techniques.”


AI diagnoses prostate cancer as well as pathologists

Researchers have developed a deep learning system capable of evaluating tissue samples and diagnosing prostate cancer at a level comparable with many pathologists. The team shared its findings in The Lancet Oncology.

“The Gleason score is the strongest correlating predictor of recurrence for prostate cancer, but has substantial inter-observer variability, limiting its usefulness for individual patients,” wrote lead author Wouter Bulten, MSc, Radboud University Medical Center in the Netherlands, and colleagues. “Specialized urological pathologists have greater concordance; however, such expertise is not widely available. Prostate cancer diagnostics could thus benefit from robust, reproducible Gleason grading.”

The team speculated that an AI tool could be developed that graded prostate biopsies following the Gleason grading standard. Bulten et al. explored data from patients treated at a single facility from Jan. 1, 2012, to Dec. 31, 2017. Three expert pathologists provided a reference standard so the deep learning system’s performance could be accurately measured.

“The AI system has now been trained with 5,759 biopsies from more than 1,200 patients,” Bulten said in a prepared statement. “When we compared the performance of the algorithm with that of 15 pathologists from various countries and with differing levels of experience, our system performed better than [10] of them and was comparable to highly experienced pathologists.”

In the same statement, Bulten noted that Radboud University Medical Center was able to collect the appropriate data thanks to its central role in managing patient care.

“It is advantageous that we are an academic hospital,” he said. “We are close to the patient and the practitioner, and have our own database of biopsies.”

Overall, the authors concluded, this newly developed deep learning system showed potential for screening biopsies, providing second opinions to healthcare providers or even presenting key quantitative measurements.


Machine learning use in healthcare still limited to proof-of-concept studies

Machine learning (ML) technology has gained popularity in recent years, but its use in healthcare remains largely limited to proof-of-concept academic studies, according to a new study published in Artificial Intelligence in Medicine.

“AI has the potential to profoundly transform medical practice by aiding physicians’ interpretation of complex and diverse data types,” wrote lead author David Ben-Israel, department of clinical neurosciences at the University of Calgary, and colleagues. “If AI successfully translates into a busy clinician’s practice, it stands to improve the performance of diagnosis, prognostication and management decisions.”

So will ML translate into a busy practice? Ben-Israel and colleagues aimed to track the progression of ML implementation in modern health systems, searching through original studies on the topic published between Jan. 1, 2000, and May 1, 2018.

All studies were published in English and specifically examined the use of ML to improve patient care. Editorials, book chapters, white papers, case reports, conference abstracts and other similar documents were all excluded.

Overall, 386 publications were identified that involved the implementation of an ML strategy “to address a specific clinical problem.” Ninety-eight percent of those studies were retrospective. The authors wrote that ML stands to be a true game-changer for healthcare, but certain limitations remain that must be addressed.

“Access to real-time clinical data, data security, physician approval of ‘black box’ generated results, and performance evaluation are important aspects of implementing a ML based data strategy,” Ben-Israel et al. concluded. “Not all clinical problems will be amenable to an AI based data strategy. The careful definition of a clinical problem and the gathering of requisite data for analysis are important first steps in determining if computer science methods within medicine may advance what human intelligence has been able to accomplish.”


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare