News You Need to Know Today

Legal exposure points | Healthcare AI newsmakers

Tuesday, March 5, 2024

In cooperation with Northwestern and Nabla


AI in healthcare: 3 areas of likely risk for legal liability

As healthcare AI opens new avenues to improve care quality without unduly increasing operational costs, the technology also expands potential exposure to civil and criminal liability. And that’s true not only for providers but also for payers and suppliers.

Two attorneys guide a brief tour of the changing landscape in a piece their firm posted March 4.

Investigators and enforcers will likely expect AI developers and/or end-users to vet AI products for accuracy, fairness, transparency and explainability—and to be prepared to show how that vetting was done, write Kate Driscoll, JD, and Nathaniel Mendell, JD, both partners with the Morrison Foerster firm.

Among the danger points the attorneys advise watching closely are two use cases and one procurement practice.

1. Prior authorization.

Given the nature of the administrative tasks that prior auth entails—tedious, repetitive, time-consuming—this work is a natural for AI assistance.

The problem is that AI can also tempt payers and their software suppliers to plausibly deny legitimate claims, second-guess physician judgment and so on. Citing recent actions against UnitedHealthcare, Humana and eviCore, the authors write:

“Given recent DOJ announcements calling for increased penalties for crimes that rely on AI, it is wise to expect enforcers to look for instances where AI is being used to improperly influence the prior authorization process.”

2. Diagnosis and clinical decision support.

As AI tools in these categories mature and spread toward ubiquity, they will likely “draw the interest of enforcers,” Driscoll and Mendell predict.

At issue will be not only how the models were trained but also whether AI suppliers have incentives to defensibly recommend questionably necessary clinical services for their provider clients. Further, the attorneys warn, DOJ watchdogs will look at “whether access to free AI tests tied to specific therapies or drugs raises anti-kickback questions.” More:

“Expect many of the familiar theories of liability to find their way into AI, and expect fraudsters to see AI as the newest mechanism to generate illicit gains. … As with prior authorization and drug development, flawed algorithms could create liability for the provider.”

3. AI product vetting.

Few AI end-users caring for patients possess the expertise it takes to question vendors on the technical ins and outs of their products. A simple rules-based algorithm can be dressed up to look like true AI, the authors point out, and suppliers can dupe providers into mistaking a relatively simple package for a highly sophisticated solution.

Driscoll and Mendell underscore the need for evaluating opportunities with eyes wide open. “It is important for compliance professionals and AI users to ensure that AI tools [under consideration] are explainable, accurate, fair and transparent,” they write. To uncover potential red flags, they add, clinicians or their colleagues inside the provider org should think like regulators and enforcers:

“What is the vendor’s AI governance policy? What data was the tool trained on? How was the tool’s performance measured and validated? Does the tool utilize AI derived from large language models, or is it based on more rudimentary rules-based functions?”
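To make that last question concrete, here is a minimal, purely illustrative Python sketch of the distinction a buyer is probing for. It is not drawn from the Morrison Foerster piece, and every name in it (rules_based_triage, llm_based_triage, query_llm) is hypothetical: the first function is nothing more than hand-written thresholds, while the second delegates the judgment to an external model, which is what makes questions about training data, validation and explainability meaningful.

# Hypothetical sketch; no names or logic here come from the source article.

def rules_based_triage(age: int, systolic_bp: int, on_anticoagulants: bool) -> str:
    """A 'rules-based function': fixed, hand-written thresholds.
    Easy to explain, but there is no learned model behind it."""
    if systolic_bp < 90:
        return "urgent"
    if age >= 75 and on_anticoagulants:
        return "high"
    return "routine"

def llm_based_triage(note: str, query_llm) -> str:
    """An LLM-backed tool: the judgment lives in an external model, so the
    vetting questions above (training data, validation, explainability) apply."""
    prompt = f"Classify the urgency of this clinical note as urgent, high or routine:\n{note}"
    return query_llm(prompt)  # query_llm stands in for a vendor's model API call

if __name__ == "__main__":
    print(rules_based_triage(age=80, systolic_bp=120, on_anticoagulants=True))   # -> high
    print(llm_based_triage("Pt stable, BP 120/80.", query_llm=lambda p: "routine"))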

Read the whole thing.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Just like that, Anthropic has blossomed into an AI outfit to reckon with. It’s raised more than $7 billion over the past year, along the way winning support from the likes of Amazon, Google and Salesforce. And now Anthropic is signaling its intention to directly compete with OpenAI for generative AI dominance. Why would it not? Its founders, siblings Daniela and Dario Amodei, left OpenAI in 2021 to breathe life into their brainchild. Anthropic is in the business headlines this week because its most impressive model, Claude 3 Opus, can synopsize book-length documents of up to 150,000 words. That’s longer than The Last of the Mohicans by James Fenimore Cooper and only a little shorter than Salem’s Lot by Stephen King. Your move, ChatGPT.
     
  • Scientific publishing powerhouse Elsevier has birthed a genAI-based clinical decision support tool. The company’s Elsevier Health division says it put heads together with medical AI company OpenEvidence to design the offering. Also chipping in on the front end were Cone Health, the University of New Mexico and “more than 30,000 physicians from across the U.S.” The result is ClinicalKey AI, which gives clinicians quick, point-of-care access to relevant sections of medical journals, medication guidelines, clinical references and medical textbooks. Full announcement here.
     
  • How AI evolves over the coming years will depend on how humans build their relationship with it in the coming years. Does this point strike you as totally obvious yet oddly compelling (thanks to the use of relationship in this context)? If so, you might appreciate hearing from Homero Gil de Zúñiga, PhD, distinguished professor of media effects and AI at Penn State. “AI is going to do what we prompt it to do and what we ask it to do,” he tells the school’s news operation. “We must study how we interact with it, because if you’re not squeezing AI to its maximum capabilities, then AI will stay at the same level.” Q&A here, recent peer-reviewed paper on the same subject here.
     
  • Drug development has been one of healthcare AI’s most talked-about uses for years. The leader of a small company that supplies AI-based recommendations to Big Pharma dishes on the dynamic in Fortune this week. “Our clients engage us to give them the insight and convert insight into foresight—in the shortest time possible and in the least expensive way,” Lifescience Dynamics founder and president Rafaat Rahmani tells the magazine. “These AI tools squeeze the most out of our data and bring that data alive.”
     
  • Only 7% of psychologists worry about losing their job to AI. But some 62% are concerned about the technology’s potential to misinterpret data. Meanwhile 54% think chatbots will lack sufficient empathy with patients, 41% fret about threats to patient privacy and safety, and 40% are on the alert for biased outputs. The findings are from a survey of 100 U.S. psychologists. Full results from the exercise are posted at PsychologyJobs.com.
     
  • Hippocratic AI of Palo Alto, Calif., has lined up more than 40 partner orgs to help it test what it calls ‘the world’s first generative AI-powered healthcare provider.’ Or, more modestly elsewhere in its announcement, “the industry’s first safety-focused large language model designed specifically for healthcare.” Either way, provider partners in the evaluation endeavor include Memorial Hermann Health System, University of Vermont Health Network and Fraser Health. Announcement here.
     
  • From the AI research beat:
     

 

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand