News You Need to Know Today
Legal advice on AI for healthcare providers | AI newsmakers

Thursday, July 13, 2023

3 AI to-do’s for legally aware healthcare providers

Healthcare providers might be the most underrated of all AI stakeholders. AI developers hog the headlines. Industry marketers overdo the strategic communications. And, on social media, fiery AI doomsayers duke it out with hyper AI cheerleaders.

Meanwhile, it says something that nearly half of the 200 or so CEOs surveyed by Yale believe healthcare is the field in which AI is likely to make its most transformative contribution.

Into the center of this tangle comes an attorney with an evident heart for healthcare providers. The lawyer is Douglas Grimm, JD, MHA, partner and healthcare practice leader at ArentFox Schiff in Washington, D.C. This week The National Law Review published his commentary on “key legal considerations” on AI for healthcare providers. Here are three takeaways derived from a brisk reading of the piece.

1. When your government tells you it’s investing heavily in shaping healthcare AI, take it at its word.

ACTION ITEM: Read up on whatever Uncle Sam and associates have published on the subject.

Grimm suggests starting with HHS’s “Trustworthy AI Playbook,” the Centers for Medicare and Medicaid Services’ “CMS AI Playbook” and the document from which both those documents draw—the presidential executive order of Dec. 3, 2020, titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.”

Grimm comments that these resources “are of increasing importance and essential reading for providers contemplating the use and effects of AI.” The same could be said of most everything updated and linked here.

2. Embrace AI in theory and concept. Even try it out on your own time. But hold off from integrating it into clinical practice until sturdy regulations are in place.  

ACTION ITEM: Ask yourself and your colleagues: How sure can we be that generative AI tools and large language model chatbots are offering reliable guidance to patients and clinicians? Is it appropriate, here and now, to use these tools for medical purposes?

False or misleading information, especially within the medical sphere, “leaves users vulnerable and potentially at risk,” Grimm points out. “It has yet to be seen the extent of liability arising from chatbot medical advice, particularly when the chatbot is sponsored by a healthcare industry organization, but this is undoubtedly within regulators’ sights.”

3. Be watchful for administrators deploying AI to control or prevent fraud, waste and abuse.

ACTION ITEM: Find out which administrative and/or business leaders in your organization have invested in AI software for this noble aim—and warn them of the litigation risks.

“One large health insurer reported a savings of $1 billion annually through AI-prevented fraud, waste and abuse,” Grimm reports. “However, at least one federal appellate court determined earlier this year that a company’s use of AI to provide prior authorization and utilization management services to Medicare Advantage and Medicaid managed care plans is subject to a level of qualitative review that may result in liability for the entity utilizing the AI.”

And of course there’s one matter of keen interest to every AI-equipped healthcare provider: Who’s to blame when AI contributes to a harmful medical error?

Grimm doesn’t take on that beast of an issue in the present piece, but he’s got quite a bit else worth a look. Read the whole thing.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Speaking of AI doomsayers having it out with AI cheerleaders: The Free Press just published a humdinger of a debate in this very vein. It’s posted as two separate but paired essays, both of which are drawing spirited comments from readers. The matchup pits the AI-happy American entrepreneur, investor and engineer Marc Andreessen against the pensive British novelist, poet and classical environmentalist Paul Kingsnorth. Here’s a taste.
    • Andreessen: “What AI offers us is the opportunity to profoundly augment human intelligence to make all outcomes of intelligence—across science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality—much, much better from here.”
    • Kingsnorth: “Neither law nor culture nor the human mind can keep up with what is happening. To compare AIs to the last great technological threat to the world—nuclear weapons—would be to sell the bots short. ‘Nukes don’t make stronger nukes,’ as [one technology ethicist] has said. ‘But AIs make stronger AIs.’ Buckle up.”
       
  • To foresee how AI may improve healthcare delivery while busting clinician burnout, look back at how e-prescribing elbowed out illegible scribblings on prescription pads. Surescripts CIO Mark Gingrich guides that tour in a short piece published this week in Medical Economics. The leap ahead in pharmacy technology, he writes, ended up “paving the way to the better-informed patient care that we’ve come to expect today.” Read the rest.
     
  • Five years from now—if not sooner—colonoscopy operators could face malpractice suits for not using AI. Mayo Clinic Platform president John Halamka, MD, explains why for a Washington Post columnist in news analysis published July 11. Halamka also breaks down the difference between predictive and generative AI, calling the latter a “completely different kind of animal.” Read the piece.
     
  • Watch for hospital nursing departments to help lead the way on generative AI adoption. At least, that’s the sense one gets from listening to nursing informaticist Jung In Park, PhD, of UC-Irvine. “I plan to use a large language model for predicting patient outcomes, specifically focusing on factors such as the risk of mortality and hospital-acquired conditions,” she tells HIMSS Media outlet Healthcare IT News. Q&A here.
     
  • Ethical hackers—the ‘good geeks’ who put their sly skills to work for cybersecurity purposes—aren’t worried about AI taking their jobs. On the other hand, they also don’t expect AI alone to ever become an unstoppable skeleton key for the “bad geeks.” However, AI can help hackers on both sides work faster. Human brains are “literally wired to be creative and find novel solutions to novel problems,” says one. (Implication: AI, not so much.) The educated opinions are from the crowdsourced penetration-testing firm Bugcrowd via worthwhile coverage of the company’s work and findings in Computer Weekly.
     
  • The Chinese Communist Party is cracking down on citizens who would innovate a little too freely with AI. No shock there. But some of the current stepped-up measures may go so far as to hurt the country’s ability to compete with Western rivals. Hmm. Three cheers for the CCP’s AI crackdown. Reuters has the story.
     
  • Recursion (Salt Lake City) is getting a $50 million infusion from chipmaker/AI powerhouse Nvidia (Santa Clara, Calif.). The pharma company will use the funds to train drug-discovery AI models on Nvidia’s cloud platform, where it will also refine its AI foundation models for biology and chemistry. Announcement.
     
  • The EHR and analytics outfit Net Health (Pittsburgh) is partnering with the wound-care experts at Healogics (Jacksonville, Fla.) to make the most of an AI platform. The platform, called Tissue Analytics, lets clinicians manage progress in wound healing with automated recording, tracking, analysis and related tasks. Announcement.
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


© Innovate Healthcare, a TriMed Media brand