News You Need to Know Today

Federal AI regulation gets bipartisan boost | Newswatch: Nurses and AI | Partner voice

Thursday, December 19, 2024


Congress is considering legislation to prevent or mitigate the Medicare physician payment cuts that went into effect in January 2024, and the Society for Cardiovascular Angiography and Interventions (SCAI) is urging members to mobilize and write to their members of Congress to weigh in on the bill.

From Capitol Hill to a hospital near you? 5 federal recommendations for healthcare AI policy

The 24 members of the House Task Force on AI—12 reps from each party—have posted a 253-page report detailing their bipartisan vision for encouraging innovation while minimizing risks. The paper covers numerous social settings and economic sectors. Healthcare is not least among the latter.

In introducing the material, task force co-chairs Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) state the group consulted in depth with experts from various fields. The paper draws from those discussions to offer 66 key findings and 85 forward-looking recommendations. 

The chairs point out that AI is not a new invention. However, they note, “breathtaking technological advancements in the last few years” have given the technology “tremendous potential to transform society and our economy for the better and address complex national challenges.” 

At the same time, they add, AI “can be misused and lead to various types of harm.”

It’s with AI’s promise and peril in mind that the task force recommends five ways policymakers can guide AI innovation, implementation and oversight in, specifically, healthcare. Here are excerpts from that section. 

1. Encourage the practices needed to ensure AI in healthcare is safe, transparent and effective.

Policymakers should promote collaboration among developers, providers and regulators in developing and adopting AI technologies in healthcare where appropriate and beneficial. Policymakers could also develop or expand high-quality data access mechanisms that ensure the protection of patient data. 

Congress should continue to monitor the use of predictive technologies to approve or deny care and coverage and conduct oversight accordingly.

2. Maintain robust support for healthcare research related to AI.

Sustained, strategic investments in research and development will be critical to maintaining U.S. leadership in AI across disciplines and use cases, especially in sectors that stand to benefit significantly from this technology. 

The research supported by the National Institutes of Health (NIH) has the potential to enable improvements in [myriad] healthcare applications. 

3. Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions.

To promote the responsible use of AI systems in the healthcare sector, stakeholders would benefit from standardized testing and voluntary guidelines that support the evaluation of AI technologies, promote interoperability and data quality, and help covered entities meet their legal requirements under HIPAA. 

Congress should explore whether the current laws and regulations need to be enhanced to help the FDA’s post-market evaluation process ensure that AI technologies in healthcare are continually and sufficiently monitored for safety, efficacy and reliability.

4. Support the development of standards for liability related to AI issues.

Limited guidance exists on constructing legal and ethical frameworks for determining who bears responsibility when AI models produce incorrect diagnoses or make erroneous and harmful diagnostic recommendations. Currently, most providers are expected to use AI tools as supplementary devices while still relying on their own judgments, thus placing liability on the providers themselves. 

As AI’s use continues to increase in everything from EHRs to transcription services to diagnosis, Congress should examine liability laws to ensure patients are protected.

5. Support appropriate payment mechanisms without stifling innovation.

Certainly, there will be no “one size fits all” reimbursement policy for every AI technology, and developing appropriate payment mechanisms requires recognition of varying kinds of technology and clinical settings. For example, many AI technologies may fit into existing benefit categories or facility fees. 

Congress should continue to evaluate emerging technologies to ensure Medicare benefits adequately recognize appropriate AI-related medical technologies.

In a summary of the task force’s key findings on healthcare AI, the authors note that the lack of consistency in standards for medical data and algorithms “impedes system interoperability and data sharing.” More: 

If AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.

Full report here.

 


The Latest from our Partners

The Healthcare Leader’s Checklist for Choosing an Ambient AI Assistant with Strong AI Governance - As ambient AI for clinicians continues to evolve rapidly, how can governance protocols keep pace?

Nabla's latest whitepaper explores:

☑️ Key considerations when evaluating Ambient AI solutions.
☑️ Proven strategies Nabla employs to ensure safeguards around privacy, reliability, and safety.

Access actionable insights to support your decision-making. Download the whitepaper here.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • The pandemic made freelancers of a lot of formerly full-time nurses. Since then, many have reported enjoying, if not out-and-out preferring, their role as gig workers. But now the bloom may be coming off that rose—and it’s AI chatbots doing the wrecking. Researchers at the Roosevelt Institute found this out when they interviewed 30 or so contract nurses and learned the bots have a suspiciously consistent penchant for encouraging independent nurses to “work for less pay,” failing to “provide certainty about scheduling and the amount or nature of work,” and taking “little to no accountability for worker safety.” What’s more, the sample group revealed, the bots sometimes seem to behave like saboteurs, “placing nurses in unfamiliar clinical environments with no onboarding or facility training.” Roosevelt released its report Dec. 17. Read the whole thing.
     
  • Overall, however, nurses are making their peace with AI. Almost half surveyed by the healthcare consultancy Jarrard feel AI is generally a good thing and will either “help nurses like me be more effective in our jobs” or “help address the nursing shortage by handling some tasks nurses currently do.” About 20% disagree, seeing AI as a generally bad thing—one that stands to cull the nurse workforce. The largest single portion of all surveyed nurses, 34%, are undecided about whether AI will prove more positive than negative. Jarrard analysts also found nurses have a favorable view of workplace technologies in general, appreciating in particular those that improve communications. Survey report here.
     
  • HHS’s 2024 AI Use Case Inventory is up and ready for review. The resource has expanded by 66% and now presents some 271 use cases, up from 163 last year. A blog post introducing the update notes that its use cases are of various sizes and maturities, and that many include plenty of info beyond just summaries. The post is from Steven Posnack, whose HHS titles include principal deputy national coordinator for health IT. Full inventory here.
     
  • UnitedHealth is back in the news. And again it’s not for a happy reason. With the company still reeling over the Dec. 4 murder of CEO Brian Thompson, its Optum division has had to clamp down on an internal AI chatbot. That’s because a cybersecurity team found the bot was open to tampering by anyone with a web browser. The conversational chatbot’s intended purpose is to guide Optum employees in various aspects of claims processing. TechCrunch notes that the tool does not appear to have contained or produced sensitive personal or protected health information. Still, its clumsy exposure “comes at a time when [Optum’s] parent company faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors’ medical decisions and deny patient claims.”
     
  • Watch for GenAI to seriously up its game across healthcare in 2025. That tip could have come from even a casual observer, no doubt, but it’s worth noting because its source is one of the leading lights in healthcare AI. Aashima Gupta, global director of healthcare solutions at Google Cloud, tells Forbes the rate at which healthcare organizations are investing in the technology looks an awful lot like the frenzied race to get online in the early days of the internet. She backs the observation with five “core trends” that she believes will prove “pivotal at the intersection of healthcare and AI” in 2025.
     
  • Or maybe the mad dash will stall out. Not only in healthcare but across all industries. The Gallup organization finds this could already be happening. Close to 7 in 10 workers tell the pollsters they never use AI, and only 1 in 10 say they use it at least weekly. What’s more, those figures have now held steady for two consecutive years. “This could be a sign that leaders’ aspirations and vision for using AI in the workplace have not yet translated to clear direction or support for employee adoption,” Gallup analysts write in a Dec. 16 report. “Some employees will be early adopters, but many won’t feel comfortable using AI at work until they receive a clear plan and training.” Read the whole thing.
     
  • Here’s something that might be a bellwether of things to come. Or a warning of flops to avoid. Time magazine has introduced an AI chatbot to take readers’ questions about its person of the year (who this year happens to be President-Elect Donald Trump). The bot was trained on the magazine’s own content, along with “other trusted sources” and the bot’s “built-in general knowledge.” Time says the AI has been engineered to “prevent users from steering the conversation toward unrelated, controversial or potentially biased topics.” We’ll be eager to see how that goes. 
     
  • Readers of a certain age may be tempted to sing this phone number to the tune of the chorus in the big ’80s hit ‘Jenny Jenny.’ All they’ll need to do is replace 867-5309 with 1-800-ChatGPT. See? The syllable count is perfect. And now try to get that earworm out of your head. Or call the number. Bot operators are standing by. No, really.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare