News You Need to Know Today

American College of Physicians weighs in on AI | AI newsmakers | Partner news

Tuesday, June 11, 2024



5 views on AI in healthcare from the American College of Physicians

Medical professionals using AI in clinical decision-making should limit the technology’s reach to a supportive role. In fact, used in these settings, the technology is best thought of—and referred to—as augmented intelligence.

This is one of 10 settled positions of the American College of Physicians, or ACP, which represents more than 160,000 internal medicine specialists, subspecialists and trainees.

The group itemizes and expounds on its AI views in a paper published this month by its flagship journal, Annals of Internal Medicine. Here are key passages from five more of the 10.

1. ACP believes that the development, testing and use of AI in healthcare must be aligned with principles of medical ethics.

Healthcare AI ought to boost care quality, strengthen the patient-physician relationship, avoid demographic bias and assist in clinical decision-making without commandeering it, corresponding author Nadia Daneshvar, JD, MPH, and colleagues suggest. More:  

‘Maintaining the patient–physician relationship requires care. AI should be implemented in ways that do not harm or interfere with this relationship but instead enhance and promote the therapeutic alliance between patient and physician.’

2. ACP reaffirms its call for transparency in the development, testing and use of AI for patient care.

Compromise on such end-to-end transparency, and don’t be surprised when trust crumbles among stakeholders, the authors suggest. ACP “recommends that patients, physicians and other clinicians be made aware, when possible, that AI tools are likely being used in medical treatment and decision making,” they write.

‘Even if patients are not, at present, explicitly informed of all the ways technology is involved in their care—for example, they may or may not be told about computer-assisted electrocardiogram or mammography interpretation—the newness of AI and its potential for clinically significant effects on care suggests that honesty and transparency about its use are paramount.’

3. ACP reaffirms that AI developers, implementers and researchers should prioritize the privacy and confidentiality of patient and clinician data.

If patient, physician or other clinician data must be used for the development of AI models, the data should first be deidentified and aggregated, ACP holds. “We note, however, that deidentification of data, particularly if the data is unstructured, can be a substantial challenge.”

‘We renew our [prior] call for comprehensive federal privacy legislation, with special provisions regarding privacy protections for AI data sets included in such legislation.’

4. ACP recommends that, in all stages of development and use, AI tools be designed to reduce clinician burden in support of patient care.

Reducing unnecessary administrative, cognitive and other burdens should be priorities in the design and development of AI-enabled devices, Daneshvar and co-authors point out, adding that a central promise for medical AI is freeing up time for physician-patient interactions.

‘Any mechanisms for clinicians to provide feedback on the performance of or any issues with the AI tool should not be burdensome to the clinician. The effects of AI-enabled burden reduction tools on burnout should be assessed.’

5. ACP recommends AI training for physicians at all levels of education and practice.

Comprehensive training programs and resources are needed at the undergraduate medical education, graduate medical education and attending physician levels to address knowledge gaps among current healthcare professionals, the authors insist.

‘Training should ensure that physicians remain able to make appropriate clinical decisions independently, in the absence of AI decision support, for vigilance against errors in AI-generated or -guided decisions.’

The paper is posted in full for free.

 


The Latest from our Partners

 

GenAI in the EHR: Powering NextGen® Ambient Assist & restoring clinician well-being - Learn how NextGen Healthcare is pioneering GenAI in the EHR with Nabla. In this interview, Jeremy Dixon, Vice President of Product Development at NextGen Healthcare, discusses how the company is leveraging GenAI to reimagine the provider experience and alleviate the documentation burden.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Patients’ trust in generative AI for healthcare may already be fading. When Deloitte asked 1,000 adults who aren’t using the technology what’s keeping them away, 30% said they don’t trust the information. That’s a sizeable rise over the 23% who gave that answer last year, when the concept of gen AI was still new to consumers. Further, the 2024 results show consumer use of gen AI for reasons related to health and wellness at 37%. That’s down from 40% in 2023. On the other hand, the general public has remained “overall optimistic” about gen AI’s potential to help chip away at healthcare challenges like access and affordability. And two-thirds of 2024 respondents still hope it will help cut wait times for medical appointments and reduce out-of-pocket costs. More results here.
     
  • Providers’ feelings are similarly mixed. McKinsey surveyed 200 health system executives and found that 75% feel pressure to speed up digital transformation yet are stymied by shortfalls in resources and planning. More than half, 51%, named budget constraints among their top obstacles to investing at scale across all digital and AI categories of interest. And current investment priorities are frequently misaligned with the areas they believe could have the most impact. All of which is unfortunate in light of separate research showing that AI, if implemented wisely, could cut global healthcare spending by $200 billion to $360 billion. More here.
     
  • Three heavy hitters in healthcare technology are coming together over AI and quantum computing. Cleveland Clinic, IBM and the U.K.’s Hartree Centre announced the collaboration June 6, saying the work will kick off with two clinical research projects focused on care for epilepsy patients. From this will follow the development of “larger AI models that can integrate multiple types of data for analysis across different diseases.” The collaborators say their “ultimate aim” is to improve patient care and biomedical science. Announcement.
     
  • Data suitable for training generative AI is a finite resource. The data mainly exists as words written by humans and shared online. Like California gold in the mid-1800s, it’s being mined too fast by too many. This is not news, but a new scientific study has the supply running out as soon as 2026 and no later than 2032. “At this point, the availability of public human text data may become a limiting factor in further scaling of [large] language models,” write lead author Pablo Villalobos and colleagues at the Epoch AI research institute. But wait. Don’t despair. The team is sanguine about the ability of innovative people to compensate for the coming depletion. “[A]fter accounting for steady improvements in data efficiency and the promise of techniques like transfer learning and synthetic data generation,” they write, “it is likely that we will be able to overcome this bottleneck in the availability of public human text data.” Study here (preprint), AP news coverage here, Time background coverage here.
     
  • Apple-only loyalists can have their Apple experience and Microsoft-backed OpenAI too. That’s because Apple Intelligence, the new AI system imbuing iPhones and Macs with generative AI capabilities, is including access to ChatGPT among its selling points. Apple’s aim is to make AI a part of everyday life—not just for “Apple snobs” but also for hybrid users who split their time between digital ecosystems. If you’ve followed any tech news since the announcement June 10, you’re aware. But you probably haven’t heard it all yet. Apple announcement here, media coverage everywhere.
     
  • European AI for European people. Should be an easy sell, no? Mark Zuckerberg’s Meta—he’s still the majority shareholder—has sent 2 billion notifications and emails to European users disclosing its desire to tap their public online postings for tailoring AI models. Meta wants to start the sweep June 26. The messaging tells recipients the company’s aim is to better serve Europeans. “Meta is not the first company to do this—we are following the example set by others, including Google and OpenAI, both of which have already used data from European users to train AI,” writes Meta global privacy director Stefano Fratta in a June 10 blog post. “Our approach is more transparent and offers easier controls than many of our industry counterparts already training their models on similar publicly available information.” The push (but not necessarily pushy) messages include an opt-out link. Europeans tend to be more privacy-minded than Americans, so all these efforts will be interesting to watch.
     
  • How about Stateside? Many Americans are no more inclined than other cautious citizens of the digital world to have their online keystrokes sucked up and repurposed by AI giants. It’s for them that The New York Times has published a piece in the service journalism vein headlined “Can I Opt Out of Meta’s A.I. Scraping on Instagram and Facebook? Sort of.”
     
  • May I call you Thunder Trunk? Dolphins are known to call out to one another by mimicking each other’s signature sounds. Parrots do this too. And now it turns out elephants take intra-species conversation a step further, “using individual names that they invent for their fellow pachyderms.” The behavior may have remained undiscovered if not for AI.
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


© Innovate Healthcare, a TriMed Media brand