News You Need to Know Today

AI for oncologists | AI reporter’s notebook | Partner news

Tuesday, July 9, 2024

In cooperation with Northwestern and Nabla


3 aspects of cancer care ripe for AI augmentation

Oncologists using or considering AI tools tend to agree among themselves on three points of ethics. One, AI models must be explainable by oncologists. Two, patients must consent to the use of AI in their treatment decisions. And three, it’s up to oncologists to safeguard patients against AI biases.

The findings are from a survey project conducted at Harvard Medical School and published this spring in JAMA Network Open.

Andrew Hantel, MD, and colleagues report that 204 randomly selected oncologists from 37 states completed questionnaires. Among the team’s key findings:

  • If faced with an AI treatment recommendation that differed from their own opinion, more than a third of respondents, 37%, would let the patient decide which of the two paths to pursue.
     
  • More than three-fourths, 77%, believe oncologists should protect patients from likely biased AI tools—as when a model was trained using narrowly sourced data—yet only 28% feel confident in their ability to recognize such bias in any given AI model.

In their discussion section, Hantel and co-authors underscore the finding that responses about decision-making “were sometimes paradoxical; patients were not expected to understand AI tools but were expected to make decisions related to recommendations generated by AI.”

The authors also stress a gap between oncologists’ responsibility to combat AI-related bias and their preparedness to do so. They comment:

‘Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care.’

Now comes a new journal article probing the implications of the results.

In “Key issues face AI deployment in cancer care,” science writer Mike Fillon speaks with Hantel as well as Shiraj Sen, MD, PhD, a clinician and researcher with Texas Oncology who was not involved with the Harvard oncologist survey.

The piece was posted July 4 by CA: A Cancer Journal for Clinicians, the flagship journal of the American Cancer Society. In it, Sen states that AI tools for oncology are “headed in three main directions,” as follows.

1. Treatment decisions.

“Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient,” Sen says. “However, often these treatment options have not been studied thoroughly.” More:

‘AI tools that can help incorporate prognostic factors, various biomarkers and other patient-related factors may soon be able to help in this scenario.’

2. Radiographic response assessment.

“Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway,” Sen points out.

‘In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness and help guide personalized treatment strategies.’

3. Clinical trial identification and assessment.

“Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial,” Sen notes. “AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for.”

‘These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.’
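
Sen’s third direction is concrete enough to sketch in code. What follows is a toy, rule-based eligibility filter in Python, not any tool Sen describes; production matchers must parse free-text eligibility criteria, increasingly with LLMs, and every field name, biomarker and trial below is hypothetical.

# Toy trial-eligibility matcher. Illustrative only: all fields, trials
# and biomarkers here are hypothetical, and real criteria are far richer.
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    diagnoses: set                 # cancer types the trial accepts
    min_age: int
    max_ecog: int                  # worst ECOG performance status allowed
    required_biomarkers: set = field(default_factory=set)

def eligible(patient: dict, trial: Trial) -> bool:
    """True only if the patient passes every hard criterion."""
    return (
        patient["diagnosis"] in trial.diagnoses
        and patient["age"] >= trial.min_age
        and patient["ecog"] <= trial.max_ecog
        and trial.required_biomarkers <= patient["biomarkers"]
    )

patient = {
    "age": 62,
    "diagnosis": "NSCLC",
    "ecog": 1,
    "biomarkers": {"EGFR exon 19 deletion"},
}

trials = [
    Trial("Hypothetical trial A", {"NSCLC"}, 18, 1, {"EGFR exon 19 deletion"}),
    Trial("Hypothetical trial B", {"NSCLC", "SCLC"}, 18, 2, {"KRAS G12C"}),
]

for t in trials:
    print(t.name, "->", "candidate" if eligible(patient, t) else "not eligible")

Even this crude filter shows where the leverage is: the hard part is not the boolean logic but extracting structured facts from charts and protocols, which is precisely the step LLM-based tools aim to automate.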

Meanwhile Hantel tells CA the widespread lack of confidence in identifying biases in AI models “underscores the urgent need for structured AI education and ethical guidelines within oncology.”

For oncology AI to be implemented ethically, Hantel adds, infrastructure must be developed that supports oncologist training and builds in transparency, consent, accountability and equity.

Equally important, Hantel says, is understanding the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. More:

‘We need to develop and test the effectiveness of the ethics infrastructure for deploying AI that maximizes benefits and minimizes harms, and [we need to] educate clinicians about AI models and the ethics of their use.’

Both journal articles are available in full for free:

 


The Latest from our Partners

What are the keys to a successful ambient AI pilot? - If you're looking to pilot an ambient AI tool for clinical documentation, Nabla has identified 7 strategies for effective assessment programs that pave the way for successful deployments. Read the article here.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • AI bots are probably trying to scrape the very website on which this article resides. Why wouldn’t they? Those little buggers are constantly crawling the internet looking for new content to consume. Many are famished for fresh data on which to train generative AI models. And AIin.Healthcare, like any decent news site, updates quite frequently. Of course, hungry bots are part of the online ecosystem in which we live. Most are handled with a quick adjustment to security settings. Others, however, disguise both their identity and their intent. They duck inside through the proverbial unlocked back door. To deal with them, AI can be used to fight AI. This is what good companies supplying content delivery network services, domain name services and the like have begun doing in earnest since GenAI started equipping sneaks with tools to bypass websites’ security settings. This month one of the most widely used of these defense services, Cloudflare, kicked up its capabilities a notch when it added a one-click option to block all AI bots. The feature is even open to customers on Cloudflare’s free tier. All of this might sound like a skippable geekfest, but the unseen battle of these AI bots certainly reaches deep into healthcare. Besides, as broken down by Axios, it packs a fascinating punch.
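
For site owners wondering what the politest layer of that defense looks like, here is a minimal Python sketch of user-agent filtering. It catches only crawlers that announce themselves; the token list is partial and illustrative, and spoofed agents are exactly why Cloudflare-style services layer behavioral and machine-learning detection on top.

# Minimal user-agent screen for self-declared AI crawlers. The token
# list is illustrative and will age quickly; spoofers sail right past.
AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider", "PerplexityBot")

def is_declared_ai_bot(user_agent: str) -> bool:
    """True if the request self-identifies as a known AI crawler."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

# Drop-in check for any WSGI/ASGI middleware or log-analysis script.
for ua in (
    "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/127.0",
):
    print("block" if is_declared_ai_bot(ua) else "serve", "<-", ua[:45])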
     
  • Going by GenAI patents filed over the past 10 years, China’s inventors are far ahead of competitors in every other country. A new report from the World Intellectual Property Organization shows some 38,210 applications emanating from the Middle Kingdom between 2014 and 2023. The U.S. is a distant second with 6,276, followed by South Korea (4,155), Japan (3,409), India (1,350), the U.K. (714) and Germany (708). The report considers generative AI to encompass not only large language models but also generative adversarial networks (GANs), variational autoencoders (VAEs) and diffusion models. Don’t be surprised when a new wave of GenAI patents inundates the field in the near future, largely on the strength of ChatGPT, the success of which “has driven innovation into a wide range of applications,” the report authors point out. “A future update study at a later date should be able to visualize this development—perhaps by using GenAI itself to do the work.” The report is extensive and unpacks a lot of data, some of which involves healthcare. Pymnts.com does a nice job contextualizing and summarizing.
     
  • Can you imagine GenAI models routinely costing $10B to train? Anthropic CEO Dario Amodei can. In fact, he predicts the training price could reach $100 billion within three years. For context, consider that current models like ChatGPT-4o “only” cost about $100 million, Amodei said in a recent podcast. A few models now in development already cost close to $1 billion to train, he added. And if algorithm and chip improvements continue apace, there’s a good chance that, by the time those crazily massive price tags arrive, “we’ll be able to get models that are better than most humans at most things.” Tom’s Hardware coverage here.
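
The back-of-envelope math is striking on its own, as this quick Python check shows; the smooth annual multiplier is our extrapolation, not Amodei’s.

# Implied growth rate if training costs go from ~$100M to ~$100B in ~3 years.
today_cost, target_cost, years = 100e6, 100e9, 3
annual_multiplier = (target_cost / today_cost) ** (1 / years)
print(f"implied cost growth: ~{annual_multiplier:.0f}x per year")  # ~10x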
     
  • Amazon has developed a GenAI assistant to help healthcare organizations generate marketing messaging. It was a worthy challenge, according to bloggers at AWS’s Generative AI Innovation Center, largely because medical content is “highly sensitive.” It often takes a lot of time to draft, review and win approval from layers of experts—considerably more time, in general, than marketing materials for industries whose end customers aren’t patients per se. A key question the AWS developers wanted to test: Could large language models streamline medical marketing’s clunky draft-to-publish process? Key finding they came back with: “Medical content generation for disease awareness is a key illustration of how LLMs can be leveraged to generate curated and high-quality marketing content in hours instead of weeks.” The bloggers fill in the middle steps here.
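
For developers curious what the core generation step might look like on AWS, here is a minimal sketch using the Bedrock Converse API via boto3. It is emphatically not the AWS team’s pipeline, which wraps drafting in expert review and compliance checks; the model ID, region and prompt are placeholders, and valid AWS credentials are assumed.

# Minimal draft-generation call via Amazon Bedrock's Converse API.
# Placeholder model/region/prompt; real medical marketing workflows add
# review and compliance gates before anything is published.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "Draft a short, plain-language disease-awareness paragraph about "
    "hypertension for a general audience. No medical advice, no "
    "unverifiable claims."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])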
     
  • When Altman met Arianna. Sam Altman’s OpenAI Startup Fund is combining forces with Thrive Global, the behavior-change platform supplier founded by Arianna Huffington, to launch what the two are calling a “hyper-personalized AI health coach.” Dubbed Thrive AI Health, the new operation has as its mission “democratizing” access to expert-level health coaching. Joining the pair as a lead investor is the Alice L. Walton Foundation. In advertorial-like commentary published by Time Monday, Altman and Huffington state that, with AI-driven personalized behavior change, “we have the chance to finally reverse the trend lines on chronic diseases,” in the process “benefiting millions of people around the world.” The high-profile healthcare AI venture is finding early success at making headlines.
     
  • Healthcare AI is welcomed and appreciated in Africa. One need only read Dr. Sylvester Ikhisemojie to know this. The physician, affiliated with the National Orthopaedic Hospital in Lagos (pop. 16.5 million), had an eloquent piece published July 7 in the Nigerian daily newspaper The Punch. “The emergence of AI in healthcare presents both tremendous opportunities and significant challenges that must be approached with mindfulness, compassion and ethical consideration,” he writes. Urging readers to continue cultivating these qualities in themselves and others, he calls for “ensuring that our pursuit of innovation is guided by a deep commitment to human flourishing and the alleviation of suffering.”
     
  • Accountants need AI too. They just don’t know it yet. Somehow this seems instructive for healthcare AI stakeholders. “[B]ecause the rate of adoption of AI is still extremely low in the accounting profession,” writes subject matter expert Shane Westra in CPA Practice Advisor, “there is a vast opportunity for firms of all sizes and with diverse business models to be on the forefront of the AI transition across the industry and gain significant momentum with a ‘first mover’ competitive advantage—if AI is approached the right way.”
     
  • As do educators—but maybe not for grading papers. That use case is dividing early adopters, according to The Wall Street Journal. “Does this [GenAI] make my life easier? Yes,” says a high-school history teacher. “But that’s not what this is about. It’s about making the students better writers.” Counters a co-founder of the AI Education Project: “It should not be used for grading.” If it is, it will “undermine trust in the education system.” For those with subscriber access, the comment boxes carry the conversation forward in colorful style. Article here.
     
  • Recent research roundup:
     
  • Funding news of note:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
