News You Need to Know Today

Who’s afraid of generative AI for healthcare? | Goldman Sachs, NIH, Avenda Health, more AI newsmakers

Thursday, April 27, 2023

In cooperation with Northwestern and Nabla


Generative AI: 5 concerns voiced by healthcare thought leaders

Every industry on earth is buzzing over the promise and potential of ChatGPT and similarly sharp AI models, whether “large language” or another generative form. Healthcare is no exception. But shouldn’t it be?

At Wired, a journalist focused on AI in society spoke with a handful of medical professionals and found no shortage of misgivings. Here are five.

1. ChatGPT was trained on literature spanning many years. At first blush that may sound like an unqualified plus. However, outdated medical evidence can be dangerous—and clinical knowledge and practices surely “change and evolve over time,” Heather Mattie, PhD, a biostatistics lecturer at Harvard’s T.H. Chan School of Public Health, tells Wired senior writer Khari Johnson. “There’s no telling where in the timeline of medicine ChatGPT pulls its information from when stating a typical treatment.”

2. ChatGPT has been shown to sound authoritative even when dispensing factual inaccuracies and fictitious references. “It only takes one or two [experiences] like that to erode trust in the whole thing,” points out Trishan Panch, MD, MPH, a Harvard instructor and digital health entrepreneur.

3. Physicians could inappropriately lean on the software for moral or ethical guidance. “Some bioethicists worry that doctors will turn to the bot for advice when they encounter a tough decision like whether surgery is the right choice for a patient with a low likelihood of survival or recovery,” reports Johnson. Asked by Johnson about just such a scenario, bioethicist Jamie Webb of the University of Edinburgh in Scotland holds firm: “You can’t [ethically] outsource or automate that kind of process to a generative AI model.”  

4. Over time, gradual “de-skilling” is a real risk. Getting rusty could afflict clinicians who get a little too used to relying on a bot “instead of thinking through tricky decisions for themselves,” Johnson writes, citing research by Webb and colleagues.

5. Thanks to its aptitude for striking a scholarly tone, ChatGPT and other breeds in the species might subtly influence—if not outright fool—humans. Fortunately, an antidotal strategy is always available: Let the flawed but smart bots pitch in as long as they’re closely supervised by a human expert. This uncomplicated approach certainly works with (and for) residents and other trainees, reminds Robert Pearl, MD, the Stanford professor, author and former Kaiser Permanente CEO.

Pearl, incidentally, is emerging as a notable enthusiast of large-language AI in clinical settings.

“No physician who practices high-quality medicine will do so without accessing ChatGPT or other forms of generative AI,” he tells Wired. “I think it will be more important to doctors than the stethoscope was in the past.”

Read the whole thing.


Industry Watcher’s Digest

  • Chatbot maker falls afoul of shareholders. Conversational AI vendor LivePerson (New York City) is being sued for allegedly misleading investors about its finances. The company has clients in healthcare, and the class-action complaint mentions issues with Medicare reimbursement involving a subsidiary. The plaintiff, a shareholder who wants a jury trial, alleges LivePerson made dishonest statements as a way to conceal weaknesses in its internal controls. A number of law firms have posted announcements detailing the opportunity for potential co-litigants.
     
  • $75M goes toward precision mental healthcare. Columbia University is setting up a new behavioral health operation to unite clinical and research advances in AI, neuroscience, psychiatric genomics and stem cell biology, among other disciplines. The work is getting off the ground with a $75 million grant. Named for the lead philanthropic organization behind that sum, it’ll be called the Stavros Niarchos Foundation Center for Precision Psychiatry & Mental Health at Columbia University. The announcement quotes the foundation’s co-president, Andreas Dracopoulos. “The significant progress we have made in caring for our physical health in recent decades is apparent,” he says, “but just as clear is the fact that we have left behind our mental health.” That’s a little less of a problem today than it was yesterday. Full announcement here.
     
  • What does Goldman Sachs look for in a healthcare AI startup it may want to fund? Nothing out of the ordinary. Just the quality of the management team, the ultimate goal of the platform, the timeframe in which investors will understand whether this goal has been achieved and how the platform merges the available AI/machine learning toolkit with proprietary technologies to defend against emerging players. Those are the Wall Street titan’s own words as stated in an April 26 newsletter item. More: “Because of the AI/ML’s potential advantages in efficiency and effectiveness, how each company utilizes the armamentarium of available and rapidly expanding technologies is an important part of competitive differentiation.” Read the full item.
     
  • Best to steer clear of the uncanny valley. Makers of conversational AI chatbots and interactive robots should make sure their machines aren’t fun to speak with. No humor, no small talk, no more than a trace of personality. So advises a marketing instructor at the University of Minnesota who’s been studying how people talk and listen to inanimate objects that can talk and listen back. “When it’s too human, we don’t want that and we feel threatened,” the marketing researcher, Marat Bakpayev, PhD, tells the (Minneapolis) Star Tribune. Full article here, good primer on the uncanny valley effect here.
     
  • AI accelerates research into rare pediatric syndrome caused by COVID. Children who develop MIS-C (multisystem inflammatory syndrome in children), a rare complication of COVID-19, have a distinct biomarker pattern—one not seen in pediatric COVID patients who don’t develop the condition. That’s according to NIH-backed researchers who used high-speed, AI-controlled molecular sequencing of RNA and DNA to uncover some of MIS-C’s most vexing mysteries. Affected organs may include the heart, lungs, kidneys, brain, skin, eyes or gastrointestinal tract, and NIH says the findings could lead to better diagnostics and treatments.
     
  • AI may well prevail over prostate cancer. Avenda Health (Culver City, Calif.) is celebrating the first commercial use of its AI toolkit designed for personalizing prostate cancer care. Called Unfold AI, the product was used in a clinical setting this week. The company says the software runs patient-specific data from prostate imaging, biopsies and pathology through deep-learning algorithms, which help guide physicians in precisely localizing tumors and mapping their margins via 3D visualizations. Avenda media alert here, coverage by Health Imaging here.

The Latest from Our Partners

  • Digital Magazine: This is Enterprise Imaging - In this digital magazine we discuss how moving from multiple PACS to a single enterprise imaging system is busting silos and deepening integration; the challenges in radiology imaging and how radiologists are getting more done—better and faster—by using enterprise imaging; why skyrocketing image volume and a growing need for collaboration across multiple, geographically diverse sites have made image management far more complex, and how the cloud addresses this; and the latest addition to the Sectra Enterprise Imaging portfolio—ophthalmology—and why it is a game-changer for ophthalmologists.
  • Beyond the impression: How AI-driven clinical intelligence transforms the radiology experience - In this session, Nuance CMIO Sheela Agarwal, MD, and Senior Product Manager Luanne D’Antoni explore innovations in radiology report creation and the role of automated impression generation.

  • AI quality assurance models saving lives and millions in avoided med-mal - Unrecognized imaging findings are an unfortunate, but undeniable, part of radiology. New advancements in artificial intelligence (AI) and machine learning offer a critical safety net that is improving care and saving lives — as well as avoiding millions of dollars in potential medical malpractice costs.


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.



© Innovate Healthcare, a TriMed Media brand
