News You Need to Know Today

Institutional AI inequities | AI reporter’s notebook | Partner news

Tuesday, July 16, 2024


'Well-resourced and innovation-focused hospitals should mentor and provide technical support to underfunded or smaller hospitals.'

How to mitigate institutional inequities involving AI

When it comes to adopting healthcare AI, large, well-off hospitals are likely to homer frequently while smaller, struggling institutions go down looking. (Baseball analogy in honor of tonight’s Midsummer Classic.) As a result, some patients will benefit from AI while many others go wanting.

This won’t serve the whole of U.S. healthcare well.

Fortunately, there’s time to make sure AI implementation doesn’t unfold quite so “inequitably” between haves and have-nots. That’s the thrust of an opinion piece published in MedPage Today July 13.

Henry Bair, MD, MBA, and Mak Djulbegovic, MD, MSc, of Wills Eye Hospital and Jefferson Health in Philadelphia break down the challenge and offer prescriptions.

“As we continue to develop and experiment with AI technologies, equitable access to these technologies is crucial to prevent a widening divide in healthcare quality,” they write. “We must work to ensure that smaller clinics, community hospitals and underfunded institutions are not left behind.”

Bair and Djulbegovic suggest that pulling this off will take concerted efforts across three spheres of activity.

 

1. Government and policy interventions.

Government policies can play a critical role in promoting equitable AI implementation, the authors point out. “Policies should focus on providing funding, training grants and partnership mandates that encourage the adoption of AI in smaller, underfunded and community hospitals,” they add. “There are precedents for this; past initiatives have supported EHR and health IT adoption.” More:

‘Regulations should ensure that AI technologies address local health challenges and are equitably distributed across different regions.’

 

2. Education and training programs.

To reduce the educational gap, initiatives to enhance gen AI knowledge at all levels of medical education are essential, Bair and Djulbegovic write. “Professional associations such as the Association of American Medical Colleges have developed resources for this purpose and should continue to offer guidance on the design of medical school curricula and professional development programs.”

‘Healthcare systems can collaborate with academic and corporate organizations to create institution-specific AI training modules, as demonstrated by the abundance of existing online courses on gen AI use.’

 

3. Collaborative models.

Creating collaborative models for resource-sharing between AI-equipped hospitals and other hospitals has vast potential to reduce disparities, the authors note. “Well-resourced and innovation-focused hospitals should mentor and provide technical support to underfunded or smaller hospitals,” they add.

‘Establishing regional AI hubs that serve as centers of excellence can facilitate knowledge and resource distribution. Meanwhile, less AI-equipped hospitals ought to proactively consider how gen AI can benefit their workflows.’

 

Bair and Djulbegovic further recommend encouraging the private sector to invest in affordable AI solutions that can specifically serve hospitals with thinner resources.

“By uplifting smaller, underfunded hospitals, the entire system becomes more resilient and capable of handling both public health crises and everyday medical issues alike,” they write.

Given how quickly AI tools are evolving, the authors underscore, “it is not premature to continue developing gen AI in an equitable manner. Neglecting to do so risks creating gaps that will be ever more difficult to bridge.” More:  

‘By implementing the strategies outlined above, we can fulfill our ethical imperative to realize more inclusive healthcare systems in which AI technologies benefit all patients, regardless of where they are or the resources available to their healthcare providers.’

Read the whole thing.

 


The Latest from our Partners

Andrew Lundquist, Clinical Director at Nabla, discusses enhancing patient care and giving clinicians more time on the Digital Thoughts podcast. He covers daily clinician challenges, ambient AI for clinical documentation, evaluating startups, and the role of AI in healthcare. Listen to the full episode here.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • A month after naming its first chief AI officer, Children’s National Hospital in D.C. is building out a dedicated AI operation. According to Washington Business Journal, the institution is adding “dozens” of people who will report up to the new CAIO, Alda Mizaku. The newspaper reports that Children’s National is creating roles for data managers, engineers, product managers and others. The outlet notes the recruitment drive comes on the heels of a $1 million donation from Amazon Web Services.  
     
  • The Sam Altman-Arianna Huffington ‘AI health coach’ has attracted detractors. One is Jathan Sadowski, a senior research fellow at Monash University in Australia. Even if the coaching platform, Thrive AI Health, somehow manages to avoid the usual pitfalls—bias, hallucinations and other fumbles—it will “still miss the mark because the idea of hyper-personalization is based on a flawed theory of how change happens,” Sadowski writes. He makes his case over at The Conversation.
     
  • He’s got to admit it’s getting better. Then again, it couldn’t get much worse. The antecedent of the impersonal pronoun is healthcare technology. The holder of the opinion is Nirav Shah, MD, MPH, a senior scholar at Stanford. “Healthcare is not productive. The more technology we get, the less productive we become,” Shah said at a VentureBeat event last week. “It used to take me 45 minutes to admit a patient in the paper-based world, and now, thanks to electronic health records, it takes me an hour and 45 minutes. I’m a glorified data entry clerk.” While his present is tinged with disappointment, he has high hopes for healthcare technology’s AI-aided future. Read VentureBeat’s own coverage.
     
  • Seconding that present-tense emotion are two legal pros. “While the potential of AI technology is exciting in its transformative potential, we are well served to remember that not all innovations make life simpler,” write Harry Nelson and Yehuda Hausman of the healthcare-specialized firm Nelson Hardiman. “While AI-empowered ‘conveyor-belt’ healthcare brings the promise of new levels of efficiency, it also brings risks. Without adequate human supervision and oversight, minor issues and errors within the new frameworks can easily escalate by several orders of magnitude.” Read the rest.
     
  • Armed with certain AI models, amateur inventors could engineer serious biological threats. With this frightening scenario in mind, researchers at Los Alamos National Laboratory are putting heads together with peers at OpenAI. Among other things, they’re interested in learning how bad actors could use multimodal frontier models for nefarious purposes involving biological threats. The two started the project earlier this year. The next phase involves testing experts’ handiwork with GPT-4o for completing real-world tasks—like introducing foreign genetic material into host organisms—and assessing unspecified “emerging biological risks.” Learn more from Los Alamos Lab here and OpenAI here.
     
  • Meanwhile OpenAI is telling the world how it could get from AI to AGI in 5 simple steps. Saying its eyes remain fixed on attaining artificial general intelligence, OpenAI has hinted to Bloomberg that AI models capable of performing “a range of tasks across different domains without human input” are still a ways off but already on the whiteboard. Tom’s Guide breaks it down.
     
  • A large financial software company is laying off 1,800 workers but planning to hire around the same number. Driving the doubletake-inducing switcheroo is a strategy by the company, Intuit, to infuse its products as well as its processes with AI. “For example, [artificial intelligence] is helping experts with AI-supported answers and explanations, and matching and routing customers to the right expert at the right time, tailoring customer-specific needs to expert profiles and availability,” a company spokesperson tells the San Diego Union-Tribune.
     
  • ChatGPT is the most commonly used AI work tool in the world. Hot on its heels are Canva AI Suite and Google Gemini. Quartz has posted a slide show of the top 10 in this category as of May.
     
  • Recent research roundup:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand