3 AI to-do’s for legally aware healthcare providers

Healthcare providers might be the most underrated of all AI stakeholders. AI developers hog the headlines. Industry marketers lay the strategic communications on thick. And, on social media, fiery AI doomsayers duke it out with hyped-up AI cheerleaders.

Meanwhile, it says something that nearly half of the 200 or so CEOs surveyed by Yale believe healthcare is the field in which AI is likely to make its most transformative contribution.

Into the center of this tangle comes an attorney with an evident heart for healthcare providers. The lawyer is Douglas Grimm, JD, MHA, partner and healthcare practice leader at ArentFox Schiff in Washington, D.C. This week The National Law Review published his commentary on “key legal considerations” around AI for healthcare providers. Here are three takeaways from a brisk reading of the piece.

1. When your government tells you it’s investing heavily in shaping healthcare AI, take it at its word.

ACTION ITEM: Read up on whatever Uncle Sam and associates have published on the subject.

Grimm suggests starting with HHS’s “Trustworthy AI Playbook,” the Centers for Medicare & Medicaid Services’ “CMS AI Playbook” and the document from which both draw: the presidential executive order of Dec. 3, 2020, titled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.”

Grimm comments that these resources “are of increasing importance and essential reading for providers contemplating the use and effects of AI.” The same could be said of most everything updated and linked here.

2. Embrace AI in theory and concept. Even try it out on your own time. But hold off on integrating it into clinical practice until sturdy regulations are in place.

ACTION ITEM: Ask yourself and your colleagues: How sure can we be that generative AI tools and large language model chatbots are offering reliable guidance to patients and clinicians? Is it appropriate, here and now, to use these tools for medical purposes?

False or misleading information, especially within the medical sphere, “leaves users vulnerable and potentially at risk,” Grimm points out. “It has yet to be seen the extent of liability arising from chatbot medical advice, particularly when the chatbot is sponsored by a healthcare industry organization, but this is undoubtedly within regulators’ sights.”

3. Be watchful for administrators deploying AI to control or prevent fraud, waste and abuse.

ACTION ITEM: Find out which administrative and/or business leaders in your organization have invested in AI software for this noble aim, and warn them of the litigation risks.

“One large health insurer reported a savings of $1 billion annually through AI-prevented fraud, waste and abuse,” Grimm reports. “However, at least one federal appellate court determined earlier this year that a company’s use of AI to provide prior authorization and utilization management services to Medicare Advantage and Medicaid managed care plans is subject to a level of qualitative review that may result in liability for the entity utilizing the AI.”

And of course there’s one matter of keen interest to every AI-equipped healthcare provider: Who’s to blame when AI contributes to a harmful medical error?

Grimm doesn’t take on that beast of an issue in the present piece, but he’s got quite a bit else worth a look. Read the whole thing.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
