Every day, more AI applications are coming to market, with medical devices a prime target for healthcare innovation. And while more than 130 such tools have been approved by the Food and Drug Administration, some experts say the review process needs to be reevaluated.

That’s according to a group of Stanford researchers who wanted to know how much regulators and doctors actually know about the accuracy of the AI devices they are touting and approving. The evidence may reveal some of the faults of AI technology, according to the study, which was published in Nature.

The researchers analyzed every AI medical device approved by the FDA between 2015 and 2020. They found that approval for AI devices differs starkly from the approval process for pharmaceuticals. The biggest problem lies with the historical data used to train AI algorithms, which in many cases is outdated. Many algorithms are never tested in a clinical setting before being approved, and many devices were evaluated at only one or two sites, limiting the inclusion of data from racially and demographically diverse patients.

“Quite surprisingly, a lot of the AI algorithms weren’t evaluated very thoroughly,” said James Zou, the study’s co-author, who is an assistant professor of biomedical data science at Stanford University as well as a faculty member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

This also means the devices weren’t assessed on live patients in real settings; instead, their predictions and recommendations were based on retrospective data. As a result, evaluations may fail to capture how healthcare providers would actually use these tools in clinical settings. The same is true across demographics. The researchers pointed to a deep learning model that analyzes chest X-rays for signs of collapsed lungs. While the model was accurate for one cohort of patient data, its accuracy dropped by 10% when tested against data from two other sites. Accuracy was also higher for white patients than for Black patients.

“It’s a well-known challenge for artificial intelligence that an algorithm may work well for one population group and not for another,” Zou said.

The findings may inform regulators about the challenges of AI medical devices and reveal a need for stricter approval requirements.

“We’re extremely excited about the overall promise of AI in medicine,” Zou said. “We don’t want things to be overregulated. At the same time, we want to make sure there is rigorous evaluation, especially for high-risk medical applications. You want to make sure the drugs you are taking are thoroughly vetted. It’s the same thing here.”
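The kind of multi-site, subgroup evaluation the researchers call for is straightforward to sketch. The snippet below assumes a trained binary classifier with a scikit-learn-style predict_proba method and one held-out test set per site, each with hypothetical "label" and "race" columns; it illustrates the evaluation idea and is not code from the Stanford study.

```python
# Hypothetical sketch: checking one model's performance across external sites
# and demographic subgroups, the kind of evaluation the study found lacking.
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_by_site_and_group(model, test_sets):
    """test_sets maps a site name to a DataFrame holding feature columns plus
    binary 'label' and categorical 'race' columns (both names are made up)."""
    rows = []
    for site, df in test_sets.items():
        features = df.drop(columns=["label", "race"])
        scores = model.predict_proba(features)[:, 1]
        rows.append({"site": site, "group": "all",
                     "auc": roc_auc_score(df["label"], scores)})
        for group, sub in df.groupby("race"):
            sub_scores = model.predict_proba(sub.drop(columns=["label", "race"]))[:, 1]
            rows.append({"site": site, "group": group,
                         "auc": roc_auc_score(sub["label"], sub_scores)})
    return pd.DataFrame(rows)

# A large gap between the development site and external sites, or between
# demographic groups, is exactly the kind of shortfall the authors flag.
```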
Forbes is out with its third annual AI 50, and almost a fifth of the field works in healthcare.

The magazine compiles the list by inviting nominations and using an algorithm to help judges select the 50 companies that best leverage AI for purposes fundamental to their respective operations—or, presumably, those of their clients and/or prospects. To qualify, entrant companies must be privately held and autonomously run. Close to 400 met the criteria this year.

The nine healthcare-specific concerns making the grade by Forbes’s lights are:

Atomwise (San Francisco). Primary aim: Drug discovery. The company’s technology has “already helped to discover promising drugs for multiple sclerosis and ebola which were successful in animal trials,” Forbes reports.

Ezra (New York City). Cancer detection on MRI scans. Forbes: “CEO Emi Gal, who is at high risk for melanoma, dreams of making a $500 full-body MRI for cancer in the next three years.”

Genesis Therapeutics (Burlingame, California). Drug discovery. “Rather than applying AI solutions for image recognition or language processing to the pharmaceutical industry, [CEO Evan] Feinberg and chief technology officer Ben Sklaroff created new AI tools specifically for chemistry.”

Intelligencia (New York City). Drug discovery. “‘Our strong belief is that biotech needs to catch up to baseball and its own Moneyball moment is here,’ says cofounder Vangelis Vergetis, referencing the 2011 film in which a small-budget baseball team used advanced analytics to outperform expectations.”

Komodo Health (San Francisco). Mass patient analytics. “The end result is a massive web of data that allows [client organizations such as] government agencies, healthcare payers and pharmaceutical firms to uncover a slew of clinical and business insights.”

Nines (Palo Alto, California). Teleradiology diagnostics. “By saving precious time otherwise spent on administrative and non-diagnostic tasks, the company says its technology allows imaging centers and hospitals to turn patients around faster.”

Verge Genomics (San Francisco). Drug discovery. “CEO Alice Zhang says many drugs that initially look promising in animal models don’t pan out when applied to humans. Verge started with human data to see how new drugs may succeed.”

Viz.ai (San Francisco). Stroke diagnosis on CT scans. “The company’s software cross-references CT images of a patient’s brain with its database of scans and can alert specialists in minutes to early signs of large vessel occlusion strokes that they may have otherwise missed or taken too long to spot.”

Whisper (San Francisco). Hearing assistance. The company “built a wireless, pocket device that uses AI to separate voices from noise. It uses data from customers to improve its algorithms and regularly transmits software upgrades back to those users.”

Full list at Forbes.com.
The FDA has granted de novo classification to an AI software module that puts a second pair of eyes on colonoscopy videos in real time and is compatible with any endoscope.

Developed by Cosmo Pharmaceuticals, “GI Genius” flags suspicious lesions by framing them in a green box so the examining gastroenterologist can take a closer look.

The company trained the AI on datasets robust enough that, in a clinical trial, GI specialists using the module caught 13% more biopsy-verified polyps than peers reading the same images without the AI, according to an FDA announcement. The AI group ordered more biopsies and flagged slightly more benign lesions, but it also caught more hard-to-see small and flat polyps.

In a news release sent by Medtronic, the sole worldwide distributor of GI Genius, the CEO of the GI Alliance says the technology “can increase the quality of colonoscopies, potentially improving diagnosis and outcomes for colon cancer patients.” The release includes a video demonstration.

A separate release from Cosmo notes that the module is the first of its kind to receive FDA authorization via the agency’s de novo pathway. The FDA allows companies to apply for de novo classification when their product presents low to moderate risk and the market has no substantially equivalent predecessor product, sparing them the lengthier premarket approval route that novel devices would otherwise face.
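The on-screen behavior described above, framing a suspicious region in a green box on the live video feed, can be illustrated with a generic overlay loop. The video source, the placeholder detector and the box coordinates below are hypothetical stand-ins, not the GI Genius implementation.

```python
# Hypothetical sketch of a real-time lesion-flagging overlay (not GI Genius code).
import cv2

def detect_lesions(frame):
    """Placeholder for a trained detector returning (x, y, w, h) boxes.
    A real system would run a neural network on each frame here."""
    return []  # no detections in this stub

cap = cv2.VideoCapture(0)  # any video source exposed as a camera device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_lesions(frame):
        # Frame each suspicious region in a green box for the examining physician.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("endoscopy feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```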
Last week a literature review showed that none of 62 high-quality medical AI models was ready for translation from academic research to clinical practice. Now comes a similar but separate study confirming the depth of the dashed hopes.

Reporting their findings in Science Translational Medicine, the researchers behind the second exercise found that just 23% of healthcare machine-learning studies were reproducible with differing datasets. By comparison, 80% of computer vision studies and 58% of NLP studies had such conceptual reproducibility.

Equally confounding, only 55% of machine learning in healthcare papers used public datasets and made their code available. Computer vision and NLP each clocked in at close to 90% on those scores.

IEEE Spectrum takes a quick look at both literature reviews side by side. “Healthcare is an especially challenging area for machine learning research because many datasets are restricted due to health privacy concerns and even experts may disagree on a diagnosis for a scan or patient,” writes freelance journalist Megan Scudellari. “Still, researchers are optimistic that the field can do better.”

Read the whole thing.
An AI system for diagnosing prostate cancer on biopsy slides has achieved close to or better than 98% across sensitivity, positive predictive value, specificity and negative predictive value.

The model, developed by Paige.AI in New York City, made the showing in a study conducted at Yale University and Memorial Sloan Kettering Cancer Center. Modern Pathology published the study March 29.

The team, co-led by David Klimstra, MD, who co-founded Paige.AI and chairs the pathology department at Sloan Kettering, trained the system on the cancer center’s digital slide archive. They then tested its acumen for making an up or down call—“suspicious” or “not suspicious”—on almost 1,900 lab slides of prostate tissue acquired at Yale Medicine.

Lead author Sudhir Perincheri, MD, PhD, of Yale and co-authors report that the tool, Paige Prostate, identified or ruled out cancer with sensitivity of 97.7% and positive predictive value of 97.9%, along with specificity of 99.3% and negative predictive value of 99.2%. When the AI stumbled, the cause was usually poor image quality.

The authors conclude that the study’s results “demonstrate the feasibility of porting a machine-learning algorithm to an institution remote from its training set and highlight the potential of such algorithms as a powerful workflow tool for the evaluation of prostate core biopsies in surgical pathology practices.”

Sloan Kettering and Paige.AI have been in the news together before. The pairing drew media fire over their closeness and potential profitability in 2018. The next year, the FDA designated Paige.AI’s software a breakthrough technology.

The journal has posted the new study in full for free.
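For readers less familiar with the four reported metrics, the snippet below computes them from confusion-matrix counts. The counts are invented for illustration and are not the study’s data.

```python
# Illustrative computation of the four reported metrics from confusion-matrix
# counts. The numbers below are made up, not the Yale/Sloan Kettering results.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # share of true cancers the tool flags
        "specificity": tn / (tn + fp),   # share of benign slides it clears
        "ppv": tp / (tp + fp),           # flagged slides that really are cancer
        "npv": tn / (tn + fn),           # cleared slides that really are benign
    }

# With these hypothetical counts, sensitivity is about 0.977 and PPV about 0.979.
print(diagnostic_metrics(tp=850, fp=18, tn=980, fn=20))
```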
Researchers at Johns Hopkins Kimmel Cancer Center have used deep neural networks to draw important insights—prescriptive as well as descriptive—into adaptive immunity from massive stores of T-cell receptor sequencing data.

The team is presenting the technique as open-source software. Their aim is to help clinical investigators who are working to understand the immune system’s response to cancer, infectious diseases, autoimmune conditions—or any other disorder that T-cell receptors help fight. Nature Communications has published the study report.

“As sequencing-based technologies only become more ubiquitous, algorithms such as the one presented in this work will find further utility in identifying and characterizing relevant biological signal, yielding new understandings of complex genomic concepts hidden within this vast amount of data,” write MD/PhD candidate John-William Sidhom and colleagues.

The T-cell receptor, or TCR, is the immune-system component that unleashes white blood cells to fight and try to kill infected, foreign and cancer cells.

Calling their open-source package DeepTCR, the authors tell Johns Hopkins’s news division that the software uses both supervised and unsupervised deep learning. The unsupervised approaches “allow investigators to analyze their data in an exploratory fashion, where there may not be known immune exposures, while the supervised approaches will allow investigators to leverage known exposures to improve the learning of the models,” Johns Hopkins explains in a news release.

“As a result … DeepTCR will enable investigators to study the function of the T-cell immune response in basic and clinical sciences by identifying the patterns in the receptors that confer the function of the T cell to recognize and kill pathological cells.”

The study is available in full for free.
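As a rough illustration of the supervised/unsupervised split the authors describe (and explicitly not the DeepTCR package’s API), the sketch below represents TCR sequences by simple k-mer counts, clusters them without labels, then trains a classifier when exposure labels are available. The sequences, labels and feature choice are all hypothetical.

```python
# Hypothetical illustration of unsupervised vs. supervised TCR repertoire analysis.
# This mirrors the two modes described for DeepTCR but is not its API.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

tcr_seqs = ["CASSLGQAYEQYF", "CASSPDRGGYEQYF", "CASSQETQYF", "CASSLERGYTF"]  # made-up CDR3s
exposure = [1, 0, 1, 0]                                                      # made-up labels

# Represent each receptor by its amino-acid 3-mer counts, a crude stand-in for
# the learned sequence embeddings a deep model would produce.
X = CountVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(tcr_seqs)

clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)   # unsupervised, exploratory
clf = LogisticRegression(max_iter=1000).fit(X, exposure)    # supervised, uses known exposures
print("clusters:", clusters, "predictions:", clf.predict(X))
```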
Two healthcare heavyweights are combining forces to form a technology center they hope will, over the next 10 years, “fundamentally advance the pace” of discovery in medical science and healthcare innovation.

Cleveland Clinic and IBM jointly announced the development March 30, calling the project the “Discovery Accelerator.” The plan is for Cleveland Clinic to supply clinical, research and educational firepower while IBM maintains an onsite presence to manage the Discovery Accelerator’s computing and other technical resources.

As part of the latter, IBM will install its circuit-based, 20-qubit quantum computer, Q System One. This will be followed in subsequent years by installations of the company’s 1,000-plus-qubit quantum systems elsewhere in Cleveland, allowing the new partnership to collaborate with universities, government bodies, healthtech vendors, startups and other interested entities.

IBM chairman and CEO Arvind Krishna hints that the partnership may not have come about if not for the urgency revved up by the COVID-19 crisis. “At the same time, science is experiencing a change of its own, with high-performance computing, hybrid cloud, data, AI and quantum computing being used in new ways to break through longstanding bottlenecks in scientific discovery,” Krishna says.

The Discovery Accelerator also will serve as the technology foundation for another new Cleveland Clinic undertaking, the Global Center for Pathogen Research & Human Health. This launched last month with $500 million from the State of Ohio, JobsOhio and Cleveland Clinic, according to the March 30 announcement.

Ohio’s lieutenant governor, Jon Husted, says the new partnership “will put Cleveland, and Ohio, on the map for advanced medical and scientific research, providing a unique opportunity to improve treatment options for patients and solve some of our greatest healthcare challenges.”

Full announcement here.
Biases in medical AI algorithms can have critical implications for minority patients, which is why IBM Research and Watson Health researchers have launched a new study to examine the best methods for addressing the problem.

The study, recently published in JAMA Network Open, examined the impacts of various AI algorithms on a common condition affecting women after pregnancy. The researchers analyzed postpartum depression and mental health service use among nearly 600,000 women covered by Medicaid, looked for the presence of algorithmic bias, and then introduced and assessed methods to combat those biases.

The researchers first looked for bias in the training data and then applied two debiasing methods, Reweighing and Prejudice Remover, to mitigate it. They compared the resulting models with a third approach that simply removes race from the data, called Fairness Through Unawareness (FTU).

Unsurprisingly, the study revealed that AI algorithms trained on biased data can produce unfair outcomes for patients in certain demographic groups. White women, who made up 55% of the cohort, were more likely to be diagnosed with postpartum depression and used mental health services at higher rates. The finding runs counter to the medical literature, which reports higher rates of PPD among minority women, indicating underlying disparities “in timely evaluation, screening, and symptom detection among Black women,” wrote first author Yoonyoung Park, ScD, of the Center for Computational Health, IBM Research, et al.

Machine learning models trained on the data also predicted unequal outcomes, favoring white women over Black women, who were already at a disadvantage for diagnosis and treatment. Black women predicted to be at similarly high risk had worse health status.

Disregarding race in the models was “inferior,” the study authors said, while the two debiasing methods would actually allocate more resources toward Black women compared with the baseline or FTU model. In other words, the debiasing methods would help produce fairer outcomes for patients.
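Reweighing and Prejudice Remover both ship with IBM’s open-source AIF360 fairness toolkit, which is a natural way to experiment with the approach described here. The snippet below is a minimal sketch on a toy DataFrame; the column names, group encodings and the use of AIF360 itself are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of bias measurement and reweighing with IBM's AIF360 toolkit.
# Toy data and column names are hypothetical; the paper's dataset is Medicaid claims.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "race":  [0, 0, 1, 1, 0, 1, 1, 0],   # 1 = privileged group in this toy encoding
    "age":   [24, 31, 27, 35, 29, 22, 40, 33],
    "label": [0, 1, 1, 1, 0, 0, 1, 1],   # 1 = flagged for follow-up care
})

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["race"])
privileged, unprivileged = [{"race": 1}], [{"race": 0}]

# How unequal are favorable outcomes before any correction?
before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("mean difference before:", before.mean_difference())

# Reweighing adjusts instance weights so downstream models see a fairer sample.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)
after = BinaryLabelDatasetMetric(reweighed, unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print("mean difference after:", after.mean_difference())
```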
Mayo Clinic has established a new AI-enabled tech platform and spawned two companies to leverage the might of its Big Data inputs.

The platform’s name tips its purpose. Calling it RDMP, for Remote Diagnostics and Management Platform, the institution says the idea is to support clinical decision-making on both ends of “event-driven medicine,” those being diagnostics and therapeutics. Also implied in its naming is the platform’s primary intended user base: clinicians providing patient care virtually.

“The dramatically increased use of remote patient telemetry devices coupled with the rapidly accelerating development of AI and machine learning algorithms has the potential to revolutionize diagnostic medicine,” says John Halamka, MD, president of Mayo Clinic Platform. “With RDMP, clinicians will have access to best-in-class algorithms and care protocols and will be able to serve more patients effectively in remote care settings.”

Halamka adds that another key aim is helping patients take an active role in their care decisions by supplying them with personalized insights and recommendations.

As for the two new companies, one called Anumana will endeavor to develop and commercialize AI-enabled algorithms while sister spinoff Lucem Health collects, orchestrates and curates data. Mayo is partnering with health AI startup Nference on Anumana and with patient experience innovator Commure on Lucem Health.

Mayo says the announcement represents the latest advancement in Mayo Clinic Platform’s development of “an ecosystem of partners and capabilities that complement Mayo’s clinical capabilities and provide access to scalable solutions.” Launched in the summer of 2020, Mayo Clinic Platform is “a coordinated portfolio approach to create new platform ventures and leverage emerging technologies, including AI, connected healthcare devices and natural language processing.”

This week’s RDMP announcement here. Last year’s Mayo Clinic Platform introduction here.
Omada Health, a virtual care provider, has launched Omada Insights Lab, an internal initiative that uses cross-functional data to drive its healthcare programs. In addition, the AI company launched its Physician-Guided Care program for diabetes and hypertension, as well as enhanced, personalized musculoskeletal physical therapy treatment.

The lab leverages data across five teams (data science, behavior science, clinical design, product design and care delivery) to optimize Omada’s programs across a host of health conditions. Omada is funded by some heavy hitters in the healthcare space, including insurance giant Cigna.

The virtual provider recently conducted an analysis of its diabetes care program, revealing care team feedback as a top driver of weight loss. Member rapport with care teams also correlated strongly with health outcomes: members who interacted with their care team were 24% more likely to achieve their health goals, and members who messaged their care teams were twice as likely to achieve positive health outcomes. The insights enabled Omada to decrease automated nudges and strengthen the member-team relationship.

“The wellness industry has historically focused on incentivizing and nudging members to boost short-term engagement at the cost of long-term outcomes,” Jennifer La Guardia, PhD, director of clinical product and behavior science at Omada Health, said in a statement.

The Omada Insights Lab drew on more than a billion data points from its 450,000 members’ interactions over the last decade to arrive at such insights.

Omada also launched its Physician-Guided Care program for diabetes and hypertension, which it calls the first virtual cardiometabolic clinic. “Members will now have access to behavior change support, diabetes/hypertension management and education, devices and monitoring, and medical management of diabetes, hypertension, and dyslipidemia all in one place,” the announcement reads.

In addition, its new physical therapy capabilities include personalized treatment informed by computer vision technology that allows data to guide treatment decisions. “The computer vision technology that we have developed has the potential to reinvent the way we think about musculoskeletal care,” said Todd Norwood, PT, DPT, director of clinical services at Omada Health. “By leveraging the data collected with digital tools in tandem with programs that are based in the science of behavior change, we can help people improve their health both short and long term.”