News You Need to Know Today

Avoiding AI malpractice | Industry Watcher’s Digest

Thursday, April 25, 2024



Against malpractice for using clinical AI, the best defense is a good offense

If a clinician you care about counts on AI to help make medical decisions, remind them: Tort law principles hold that doing so means risking liability should a patient sue over harm done.

The reminder comes from researchers in the anesthesiology department at Rutgers New Jersey Medical School. Saad Ali, MD, and co-authors had their commentary published in Biomedical Instrumentation & Technology, a peer-reviewed journal of the Association for the Advancement of Medical Instrumentation (AAMI).

Largely focusing on the “black box” problem and how it exposes clinicians to liability, Ali and colleagues cite last year’s draft guidance from the FDA discussing the information that manufacturers of AI-equipped medical devices should include in their product literature. [1]  

Stating that the FDA document is “only a first step and doesn’t address the thorny question of clinician liability,” Ali and colleagues offer three key points that AI-embracing clinicians ought to keep in mind.

1. With the help of machine learning algorithms, AI-equipped devices are likely to become more accurate over time, producing fewer mistakes and lowering false-positive rates. However,  

‘Until the use of AI/ML for treatment recommendations by clinicians gets recognized as the standard of care, the best option for clinicians to minimize the risk of medical malpractice liability is to use it as a confirmatory tool to assist with decision-making.’

2. U.S. vaccine manufacturers commonly offer financial compensation to people who experience adverse reactions after receiving vaccines. Manufacturers of AI/ML-enabled devices may be able to take a similar approach to incentivize the use of their products. However,

‘Such an approach may give less incentive to manufacturers to ensure their product’s reliability and safety, and would have little to no beneficial effect on clinicians’ wariness of the products.’

3. Because training datasets are never unlimited, it is to be expected that all AI/ML-enabled medical devices will carry some degree of bias. However,

‘Not being transparent about certain limitations can result in a loss of trust. Improvements to product labeling should be made to clearly delineate the training dataset used and provide assessment of potential biases.’

Extending the latter point, the authors add that transparency “seems to be the key to the growth of AI in healthcare, fostering trust among software developers, clinicians and patients.”

Ali and co-authors encourage providers and hospitals to test the outputs of AI-equipped medical devices for themselves before acquiring them. They also urge end users to fully inform patients of potential risks and expected benefits when obtaining consent.

New AI-enabled devices continue entering the market at a brisk pace, the authors note. They write:

‘Clinicians who encounter these devices need to be certain that device performance can match the standard of care. Such assurance is needed to prevent fear of malpractice liability from curtailing clinician use of these innovative devices.’

The full article is posted here (behind a paywall).

Reference:

  1. FDA, “Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions: Draft Guidance for Industry and FDA Staff,” April 3, 2023.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • A lot of technology leaders have gone out of their way to call for more—or at least smarter—regulation of AI. Add Keith Dreyer, DO, PhD, to the list of those who have long toiled in healthcare and deserve open ears. He’s concerned that current regulatory guardrails are suspect because training data is “changing under the feet of the algorithm.” Last week he went to the nation’s capital to state his case directly to lawmakers and regulators. A radiologist with Mass General Brigham, Dreyer serves as that institution’s chief data science officer. He’s also regarded as a pioneer of AI for healthcare. Dreyer tells Politico that new training data is a clear and present worry because “no one really knows” if updating data makes medical algorithms more accurate, less accurate or an unstable admixture of the two—“and there are no requirements in place to monitor that.” More here.
     
  • Speaking of AI giants with a hand in healthcare, keep an eye on Bill Gates’s brainchild. You know the reference is to Microsoft even though it’s been several years since Gates cut official ties with the Big Tech behemoth that he co-birthed. Anyway, Microsoft is partnering with a full-service revenue cycle management company whose only industry is healthcare. Announcing the development April 23, Ensemble Health Partners in Cincinnati says its proprietary platform, EIQ, will tap Microsoft’s Azure to beef up offerings in automation, machine learning and generative AI “across the entire revenue cycle.” Ensemble announcement here.
     
  • As it happens, revenue cycle management is pretty high on any list of things hospitals want to do with AI. One list it makes is the “Top 12 ways artificial intelligence will impact healthcare” in the estimation of the editors at Health IT Analytics. Read up on AI for healthcare RCM and see if your picks align with theirs here.
     
  • AI startup Hugging Face has put together a way to benchmark how well large language models perform specific healthcare tasks. Calling the system Open Medical-LLM, the company says the idea is to standardize evaluations of these models so end users can know whether a given model is worth bothering with. Hugging Face built the toolkit with colleagues at the University of Edinburgh’s Natural Language Processing Group. Learn more here.
     
  • Utah’s new AI law goes into effect May 1. Why does this matter to non-Beehive Staters? Because it could set a precedent. And because it has some interesting wrinkles. For example, the law only makes businesses disclose their use of AI when customers ask. However, it doesn’t view healthcare as just another kind of business: Providers outfitted with medical AI have to “prominently disclose” the care component to patients before using it. On the other hand, as long as the patient is informed beforehand, the law doesn’t directly regulate how the AI—generative or otherwise—is to be used. Read a legal analysis from the Chicago-based law firm McDermott Will & Emery here.
     
  • Only 10 AI startups made the cut as finalists. Only one is healthcare-specific. The competition is the 2024 Innovation Showcase of the 21st annual MIT Sloan CIO Symposium. The organization says the 10 have less than $10 million in annual revenues, sell enterprise IT solutions to CIOs or corporate IT departments, and have developed “cutting-edge solutions that combine both value and innovation to the enterprise IT space.” The healthcare finalist is diagnostics streamliner SimulConsult. The finalists will be on hand when symposium organizers name the winner(s) May 14 in Cambridge, Mass. Announcement here.
     
  • Want a contrarian’s take on the push for more regulation of AI? You got it. Just because people in high tech places want stepped-up governmental oversight doesn’t mean the resulting restraints will be wise or in the public’s best interest. More likely is that OpenAI and other industry players putting out the calls are acting in their own interests: They just want to foil their competitors. “Many regulators will happily support these requests, even when they are being played.” This take is from Eric Goldman, JD, a law professor at Santa Clara University in Silicon Valley. In a paper and presentation unsubtly titled “Generative AI is Doomed,” Goldman makes a dark prediction. It’s worth quoting at length:

“I expect regulators will intervene in every aspect of Generative AI’s ‘editorial’ decision-making, from the mundane to the fundamental, for reasons that range from possibly legitimate to clearly illegitimate. These efforts won’t be curbed by public opposition, Section 230 or the First Amendment. The regulatory frenzy will have a shocking impact that most of us have rarely seen, especially when it comes to content production: a flood of regulation that will dramatically reshape the Generative AI industry—if the industry survives at all.”
 

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.




© Innovate Healthcare, a TriMed Media brand