AI in healthcare: 3 areas of likely risk for legal liability

Even as healthcare AI opens new avenues to improve care quality without unduly increasing operational costs, the technology expands potential exposure to civil and criminal liability. And that’s not only for providers but also for payers and suppliers.

Two attorneys lead a brief tour of the changing landscape in a piece their firm posted March 4.

Investigators and enforcers will likely expect AI developers and/or end-users to vet AI products for accuracy, fairness, transparency and explainability—and to be prepared to show how that vetting was done, write Kate Driscoll, JD, and Nathaniel Mendell, JD, both partners with the Morrison Foerster firm.

Among the danger points the attorneys say deserve close attention are two use cases and one procurement practice.

1. Prior authorization.

Given the nature of the administrative tasks that prior authorization entails—tedious, repetitive, time-consuming—this work is a natural fit for AI assistance.

The problem is that AI can also tempt payers and their software suppliers to deny legitimate claims with a veneer of plausibility, to second-guess physician judgment and so on. Citing recent actions against UnitedHealthcare, Humana and eviCore, the authors write:

“Given recent DOJ announcements calling for increased penalties for crimes that rely on AI, it is wise to expect enforcers to look for instances where AI is being used to improperly influence the prior authorization process.”

2. Diagnosis and clinical decision support.

As AI tools in these categories mature and spread toward ubiquity, they will likely “draw the interest of enforcers,” Driscoll and Mendell predict.

At issue will be not only how the models were trained but also whether AI suppliers have incentives to recommend clinical services of questionable necessity, framed defensibly, for their provider clients. Further, the attorneys warn, DOJ watchdogs will look at “whether access to free AI tests tied to specific therapies or drugs raises anti-kickback questions.” More:

“Expect many of the familiar theories of liability to find their way into AI, and expect fraudsters to see AI as the newest mechanism to generate illicit gains. … As with prior authorization and drug development, flawed algorithms could create liability for the provider.”

3. AI product vetting.

Few AI end-users caring for patients possess the expertise it takes to question vendors on the technical ins and outs of their products. A simple rules-based algorithm can be dressed up to look like true AI, the authors point out, leaving suppliers room to dupe providers into believing a relatively basic package is a sophisticated solution.

Driscoll and Mendell underscore the need for evaluating opportunities with eyes wide open. “It is important for compliance professionals and AI users to ensure that AI tools [under consideration] are explainable, accurate, fair and transparent,” they write. To uncover potential red flags, they add, clinicians or their colleagues inside the provider org should think like regulators and enforcers:

“What is the vendor’s AI governance policy? What data was the tool trained on? How was the tool’s performance measured and validated? Does the tool utilize AI derived from large language models, or is it based on more rudimentary rules-based functions?”

Read the whole thing.


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
