News You Need to Know Today

Legal eagles on healthcare AI | Healthcare AI newsmakers

Wednesday, October 11, 2023


artificial intelligence legal exposure

Healthcare AI and HIPAA compliance: 5 key legal questions + answers

Training AI for clinical or research use in healthcare requires feeding algorithms patient data, and lots of it. This opens data custodians—typically hospitals—to various points of potential legal exposure. Chief among the worries are complying with HIPAA, de-identifying patient data and otherwise protecting patients from having their privacy invaded.

Two attorneys who specialize in such matters break down the constituent concerns in a podcast posted online this month. Beth Pitman, JD, and Shannon Hartsfield, JD, both with the Holland & Knight firm in Nashville, center their discussion on Dinerstein v. Google. However, they cover principles and precedents that transcend any one case. Here are slices of their legal expertise drawn from the discussion and lightly edited for clarity and conciseness.

Q1. From a hospital law team’s perspective, how is AI development different from other reasons for sharing de-identified patient data with outside organizations?

A. The way the AI works—and the thing that’s so great about AI—is it can accumulate a large amount of data, and not just from one source. The de-identified data comes initially from the healthcare provider and goes to the AI system. It may have been de-identified correctly at the healthcare provider location. But then, because AI is learning and has access to so much other information that’s independent of the healthcare provider’s information, that information could be used to re-identify an individual. That’s one of the unique underlying concerns with AI.

Q2. What are HIPAA’s guidelines for sharing de-identified patient info with tech companies inside which someone could, if they wanted to, re-identify patients?

A. Under what we call the HIPAA de-identification safe harbor, you have to remove 18 specific identifiers. But then you also can’t have actual knowledge that the information could be used alone or in combination with other information to identify patients. So the question becomes, “When I’m handing this de-identified data over to the AI developer, might they be able to re-identify it?”
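The safe harbor's two-part test described above can be sketched in code. This is an illustrative sketch only: the field names are hypothetical, and the real safe harbor covers 18 specific identifier categories (names, geographic subdivisions smaller than a state, dates more specific than year, Social Security numbers, medical record numbers and more), not just simple key removal.

```python
# Hypothetical subset of the 18 HIPAA safe-harbor identifier categories,
# expressed as record field names for illustration.
SAFE_HARBOR_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "full_face_photo",
}

def strip_safe_harbor_fields(record: dict) -> dict:
    """Drop fields matching the hypothetical identifier list.

    Passing this filter does NOT by itself satisfy the safe harbor:
    the discloser also must have no actual knowledge that the remaining
    data, alone or combined with other information, could be used to
    re-identify the patient -- the exact concern the attorneys raise
    about handing data to an AI developer.
    """
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_IDENTIFIERS}
```

The second prong is the harder one in the AI context: the code can enforce field removal, but "actual knowledge" of re-identification risk is a judgment about the recipient, not the dataset.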

Q3. What about a case in which patient data is not fully de-identified but is, instead, limited by contract to a particular AI use case?

A. Information that is not fully de-identified may qualify as a limited dataset, but any AI use of a limited dataset must comply with HIPAA. Disclosing a limited dataset is permissible under HIPAA as long as there is a HIPAA-compliant data use agreement and both the dataset and the purpose of the disclosure conform to HIPAA. When healthcare providers and tech companies work together, the key is to look at the details of the specific situation you’re dealing with and then carefully analyze those facts to make sure you’re complying with HIPAA.

Q4. What constitutes a limited dataset?

A. A limited dataset is a term defined in HIPAA. It excludes certain direct identifiers, such as the name, the postal address, Social Security number and medical record number, but it can include some demographic information, like ZIP code, and other elements of a medical record, like admission date, discharge date and date of service. It is not completely de-identified, but it retains only a limited amount of identifying information, which is why it’s called a limited dataset. Limited datasets may be used under HIPAA for research purposes and for other specific purposes listed in the HIPAA rules, as long as there’s a data use agreement in place that meets HIPAA’s requirements and sufficiently protects the information.
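The distinction from the safe harbor can be sketched the same way. Again, the field names are hypothetical; the point is that a limited dataset removes direct identifiers but, unlike the safe-harbor method, may retain dates and full ZIP codes.

```python
# Hypothetical direct-identifier field names. A HIPAA limited dataset
# must exclude direct identifiers such as these, but may keep elements
# like dates of service and ZIP codes that safe-harbor de-identification
# would have to remove or generalize.
DIRECT_IDENTIFIERS = {"name", "street_address", "ssn", "medical_record_number"}

def to_limited_dataset(record: dict) -> dict:
    """Strip direct identifiers, retaining dates and ZIP code.

    A limited dataset remains protected health information: under HIPAA
    it may be disclosed only for research and other enumerated purposes,
    and only under a qualifying data use agreement.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
```

Note what survives the filter (admission dates, ZIP code) versus what survives the safe-harbor filter: that residual detail is exactly why a data use agreement is mandatory here.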

Q5. How far along is U.S. healthcare in its thinking on AI vis-à-vis patient privacy?

A. Given the ways AI develops and learns, its use in healthcare continues to raise a lot of issues and concerns related to when a healthcare provider can appropriately disclose protected health information through the technology and how it can be disclosed. This is still very much an evolving area. Of course, the uses of AI in healthcare have not really been fully fleshed out. The issues it raises will probably continue growing and developing for many years to come.

Listen to the podcast (or read the full transcript) here.

 

generative AI large language models

Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Microsoft has introduced healthcare-specific touches for its all-in-one analytics system, Fabric. Showcasing the technology at the HLTH conference in Las Vegas Oct. 8 to 11, Microsoft said Fabric’s healthcare setup helps provider orgs manage medical images, text, video and, presumably, ultrasound loops at a single point of digital engagement. The aim is to let credentialed end-users securely “access, analyze and visualize” decision-aiding insights from across the enterprise. Unsurprisingly, Fabric offerings will parallel, and for some users will overlap with, Azure cloud services for healthcare. Details here, here and here.
     
  • Not to be outdone, Google Cloud is touting the healthcare tunings of its Vertex AI platform. At HLTH 23, Google demonstrated Vertex’s might for helping data scientists and software engineers in healthcare to “automate, standardize and manage” machine learning projects. Vertex is presently a work in progress. When it’s ready for wide production, Vertex AI’s search functions will aid end-users with specific clinical and research aims. “Working with healthcare and life science organizations to test and deploy new gen AI solutions is a critical step toward building safe and helpful AI technology,” Google comments. More info here and here.
     
  • Among the provider orgs piloting Vertex AI Search is Highmark Health’s Allegheny Health Network based in Western Pennsylvania. The Pittsburgh Post-Gazette caught wind of the project and covers it for the newspaper’s business readership. “We’re teaching [Vertex AI] how to speak Highmark,” Highmark’s chief analytics officer tells the outlet. “It’s going to have a very meaningful impact in the clinical setting.” Read the article.
     
  • Generative AI keeps winning hearts and minds in healthcare, but not much of the love has translated to solid plans. A new virtual community has formed to bridge that gap. Led by all University of California health systems, with UC-Davis at the tip of the spear, the collective brings together more than 30 charter members. Along with health systems, the group already has health plans, nonprofit associations and research outfits in its fold. The founders call their work “VALID” for Vision, Alignment, Learning, Implementation and Dissemination of Validated Generative AI in Healthcare. Their guiding vision: to “explore uses, pitfalls and best practices for Gen AI in healthcare and research, and accelerate execution and real-world evidence.” Announcement.
     
  • Wolters Kluwer Health is unveiling a new “health language” platform. The technology will work with FHIR standards in Microsoft Azure’s health data services to “transform disparate, messy healthcare data into clean, standardized and interoperable data and insights,” the Dutch publisher and info-services company explains. Announcement here.
     
  • EHR supplier NextGen Healthcare (Atlanta) has launched an ambient listening tool. The software openly eavesdrops on doctor-patient discussions, then translates the words into clinical summaries. From these it drafts care plans, populating EHR fields so clinicians only have to proofread automated entries and direct care pathways. Announcement.
     
  • Pure Storage (Santa Clara, Calif.) is offering to pay power and rack-space costs for clients subscribed to certain of its data-storage services. The company, which has a large and growing footprint in healthcare, wants to let healthcare providers know that its updated products and services can “enable healthcare organizations to better utilize AI and recover quicker after a ransomware attack or other disaster.” Relevant announcements here and here.
     
  • Nvidia has canceled its AI summit planned for Oct. 15 and 16. The reason is absolutely understandable: The event was booked for Tel Aviv, Israel. Brief message from Nvidia here, news coverage from CNBC here.
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand