3 aspects of cancer care ripe for AI augmentation

Oncologists using or considering AI tools tend to agree among themselves on three points of ethics. One, AI models must be explainable by oncologists. Two, patients must consent to the use of AI in their treatment decisions. And three, it’s up to oncologists to safeguard patients against AI biases.

The findings are from a survey project conducted at Harvard Medical School and published this spring in JAMA Network Open.

Andrew Hantel, MD, and colleagues report that 204 randomly selected oncologists from 37 states completed questionnaires. Among the team’s key findings:

  • If faced with an AI treatment recommendation that differed from their own opinion, more than a third of respondents, 37%, would let the patient decide which of the two paths to pursue.
     
  • More than three-fourths, 77%, believe oncologists should protect patients from likely biased AI tools—as when a model was trained using narrowly sourced data—yet only 28% feel confident in their ability to recognize such bias in any given AI model.

In their discussion section, Hantel and co-authors underscore the finding that responses about decision-making “were sometimes paradoxical; patients were not expected to understand AI tools but were expected to make decisions related to recommendations generated by AI.”

The authors also stress a gap between oncologists’ responsibility to combat AI-related bias and their preparedness to do so. They comment:

‘Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care.’

Now comes a new journal article probing the implications of the results.

In “Key issues face AI deployment in cancer care,” science writer Mike Fillon speaks with Hantel as well as Shiraj Sen, MD, PhD, a clinician and researcher with Texas Oncology who was not involved with the Harvard oncologist survey.

The piece was posted July 4 by CA: A Cancer Journal for Clinicians, the flagship journal of the American Cancer Society. In it, Sen states that AI tools for oncology are “headed in three main directions,” as follows.

1. Treatment decisions.

“Fortunately for patients, the emergence of novel therapeutic options is providing oncologists with multiple treatment options in a particular treatment setting for any one individual patient,” Sen says. “However, often these treatment options have not been studied thoroughly.” More:

‘AI tools that can help incorporate prognostic factors, various biomarkers and other patient-related factors may soon be able to help in this scenario.’

2. Radiographic response assessment.

“Clinical trials with AI-assisted tools for radiographic response assessment on anti-cancer treatments are already underway,” Sen points out.

‘In the future, these tools may one day even help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness and help guide personalized treatment strategies.’

3. Clinical trial identification and assessment.

“Fewer than 1 in 20 individuals with cancer will ever enroll into a clinical trial,” Sen notes. “AI tools may soon be able to help identify appropriate clinical trials for individual patients and even assist oncologists with a preliminary assessment of which trials a patient will be eligible for.”

‘These tools will help streamline the accessibility of clinical trials to individuals with advanced cancer and their oncologists.’

Meanwhile, Hantel tells CA that the widespread lack of confidence in identifying biases in AI models “underscores the urgent need for structured AI education and ethical guidelines within oncology.”

For oncology AI to be ethically implemented, Hantel adds, infrastructure must be developed that supports oncologist training and builds in transparency, consent, accountability and equity.

Equally important, Hantel says, is understanding the views of patients—especially those in historically marginalized and underrepresented groups—on these same issues. More:

‘We need to develop and test the effectiveness of the ethics infrastructure for deploying AI that maximizes benefits and minimizes harms, and [we need to] educate clinicians about AI models and the ethics of their use.’

Both journal articles are available in full for free.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
