News You Need to Know Today

AI in the life sciences | AI news watcher’s blog | Partner voice

Tuesday, November 19, 2024



When life sciences met artificial intelligence

Three-quarters of organizations in the life sciences started using AI less than two years ago, which is to say most only joined the party after generative AI became a thing. (Or did it become the thing?)

Whether or not the rapid ramp-up is a case of cause and effect isn’t clear. In any case, the pace of AI adoption is quickening in the sector, which includes biopharmaceuticals, digital health, clinical diagnostics and medical devices.

In fact, a new survey by the multinational law firm Arnold & Porter shows more than 85% of operators in this arena plan to fully deploy new AI tools over the next two years.

The report offers five observations from the survey, which queried 100 senior executives and department heads. 

1. Research and development is the most popular use case for AI in the life sciences.

R&D emerged as the leading area of AI implementation, the authors report, showing that 79% of respondents are actively using or planning to use AI to “drive faster, more efficient drug discovery and clinical trials.” More: 

‘AI is also making inroads into manufacturing (62%), marketing (45%) and regulatory functions (42%) as companies seek to harness the power of AI across the entire product lifecycle.’

2. AI governance is an ongoing challenge for many life-sciences companies. 

Only 55% of companies currently using AI have implemented formal AI policies or standard operating procedures, Arnold & Porter note. Even fewer, just 51%, have completed regular AI audits or assembled cross-functional teams to oversee safe and compliant AI use. More:

‘This data suggests that companies will need to prioritize risk management and compliance more to realize AI’s full potential without exposing themselves to unnecessary vulnerabilities.’

3. AI already delivers tangible benefits in the product discovery and commercialization phases. 

Right around half of respondents “have explored leveraging AI to optimize product discovery and design, citing anticipated faster time-to-market and improved efficiency as key drivers,” the authors write. 

‘Additionally, 85% of respondents reported that AI-driven initiatives to boost commercial effectiveness have been highly productive.’

4. Concerns are rising over intellectual-property issues related to AI.

Nearly three-quarters (74%) of respondents expressed significant concern about the potential for AI to introduce new IP challenges within the next year, the authors found. 

‘As AI-driven innovations continue to reshape the industry, life-sciences companies are increasingly vigilant about protecting their breakthroughs.’

5. AI’s role in patient care and diagnostics will surely grow. 

AI-enabled diagnostic tools, clinical trials and AI-assisted treatment plans will soon become standard across healthcare, Arnold & Porter predict.

‘However, given that regulators are already signaling heightened scrutiny of AI use from a compliance perspective, companies must address governance gaps to ensure safe, effective and compliant use as they progress with AI integration.’

The report also delves into AI issues affecting privacy and cybersecurity, manufacturing and supply chain, commercialization, payments, governance and compliance, implementation and global considerations.

Read the whole thing

 


The Latest from our Partners

Nabla Joins the Coalition for Health AI (CHAI) to Advance AI Governance in Healthcare - Nabla is joining forces with the Coalition for Health AI (CHAI), a diverse consortium of more than 3,000 organizations, including health systems, technology developers, patient advocates, and academic institutions, dedicated to promoting responsible AI practices in healthcare.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • How not to do AI for healthcare: Move fast and break things. Rapidly iterate. Just get the algorithm out there and, if it goes haywire, no biggie. Just fix it. That modus operandi may work OK in some industries, but take note: “When you do that in medicine, you kill some people or you harm them in really bad nasty ways.” The friendly reminder is from Jonathan Chen, MD, assistant professor of medicine and biomedical data sciences at Stanford. Chen and Michael Pfeffer, MD, chief information officer of Stanford Health Care, chatted about healthcare AI in a podcast hosted by Maya Adam, MD, the institution’s director of health media innovation. “We’re never going to get to perfect,” Pfeffer says. “I think if we aim for perfect, we’re going to miss the opportunity to get better than we are today.” Listen to the half-hour podcast or read its transcript here
     
  • Healthcare AI is almost like magic. That reflection would be unremarkable had it not been spoken by an esteemed physician and technology leader. “How might we harness this technology for human flourishing?” asked the speaker, Eric Horvitz, MD, PhD, chief scientific officer at Microsoft. “Further development in human-computer interactions is needed to realize the potential of these systems in clinical decision support.” Horvitz offered the comments during a symposium at Vanderbilt University Medical Center. Event coverage here
     
  • Black-box outputs aren’t just a problem with AI. They’re also a problem with physicians. How’s that? Well, “we really don’t know how doctors think,” explains Harvard medical historian Andrew Lea, MD, PhD. He’s commenting on a recent study in which ChatGPT outperformed experienced physicians at diagnosing disease based on patients’ medical histories. The AI alone beat even the doctors who had help from an AI chatbot. That owed to the humans’ very human tendency to ignore the AI when they felt disinclined to agree with it. When asked how they arrived at their diagnoses, the doctors offered “intuition,” “experience” and the like. It also mattered that they didn’t know how to use GenAI to its fullest capabilities. The New York Times has the story
     
  • When it first started taking shape, the European Union’s AI Act took criticism for jumping the gun. Then came the GenAI boom. Now, if anything, some are asking what’s taking it so long. Tell that crowd to chill out, because key compliance deadlines are beginning to arrive. To meet the moment, TechCrunch lays out “everything you need to know” about the Act. 
     
  • Healthcare can learn a lot about AI from military medicine. And a lot of what it can learn is spelled out in a new book, Smarter Healthcare with AI: Harnessing Military Medicine to Revolutionize Healthcare for Everyone Everywhere. Written by Hassan Tetteh, MD, MBA, and published by Forbes Books, the volume lays out Tetteh’s “VP4” framework. This suggests successful AI adoption in medicine requires a combination of purpose, personalization, partnership and productivity. The author is a retired U.S. Navy captain who teaches at the Uniformed Services University of the Health Sciences in Bethesda, Md. More on the book here
     
  • Let’s hope it was an isolated incident when Google’s Gemini went both rogue and rabid. During a conversation about aging with a college student, the chatbot spit out: “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.” CBS News seems to have broken the story, and now it’s everywhere
     
  • When are general GenAI models good enough for healthcare? Much of the time, argues tech writer and speaker John Nosta. And by general models, he means those trained on a broad range of topics well outside of medicine—literature, history, you name it. Nosta’s commentary focuses on a recent Johns Hopkins study showing general models can perform as well as or better than healthcare-specific ones in 88% of medical tasks. “[T]his doesn’t mean AI specialization has no place,” he states in Psychology Today. “Instead, it suggests a shift in focus: Use general models for the many and specialized models for the few.” In medicine as in life, he adds, “the key isn’t always doing more—it’s doing what works best.” 
     
  • The National Hockey League is partnering with an AI platform company. But don’t look for droids playing goalie or anything like that. The league just wants to “enhance archival data processes and improve real-time game footage operations,” according to the NHL’s own news operation. The platformer is Vast Data, and the partnership will “enable us to efficiently push the boundaries of what’s possible with AI,” says Grant Nodine, the league’s senior VP of technology. 
     
  • Recent research in the news:
     
  • Notable FDA approvals:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand