News You Need to Know Today

National AI strategy for healthcare | Industry watcher’s digest | Partner news

Tuesday, July 2, 2024


artificial intelligence in healthcare

6 priorities for a national healthcare AI strategy from our friends in the UK

The U.S. is far from alone in its wranglings across public and private sectors to adopt propitious healthcare AI innovations quickly yet safely. Our close economic (and military) allies in the U.K. are among those striving to herd healthcare AI cats of their own. What can we learn from their thinking?

Quite a lot, judging by a paper published June 26 by the Health Foundation, an independent nonprofit focused on driving continuous improvements into U.K. healthcare. The document is largely angled to lobby Britain’s National Health Service, the second-largest single-payer healthcare system in the world (after Brazil’s).

However, it’s an open document—one offering easily digestible food for thought to any healthcare AI stakeholders striving to maximize the technology’s benefits while minimizing its risks.

“The huge pressures the NHS is facing due to escalating demand [for healthcare services] and significant workforce shortages make developing a strategy that much more urgent,” the Health Foundation writes in its intro section. “A strategy is particularly needed to ensure the benefits of AI can be realized at scale across the NHS rather than just in a few pockets of excellence.”

The paper lays out six priorities to guide policymakers and healthcare leaders as they formulate and promote said strategy.

1. The use of AI should be shaped by the public, patients and healthcare staff to ensure the technology works for them.

An AI in healthcare strategy “should be based on a deep understanding of what people in the U.K. think about AI-driven health technologies,” the authors write. “It should ensure there are effective mechanisms in place for engaging patients, the public and NHS staff on relevant topics as they arise to inform high-level decision making.” More:

‘It should also involve patients and staff in the co-design of AI solutions if we are to harness their potential in a way that works for all.’

2. The NHS must focus AI development and deployment in the right areas.

An AI in healthcare strategy “should support local innovation and experimentation while also setting out a small number of high-level priorities where AI can help address key challenges the NHS faces (administrative and operational as well as clinical). It should also support the demonstration, testing and spread of these tools.”

‘As part of this, a strategy will need to maintain effective horizon-scanning functions and provide opportunities and mechanisms for NHS staff and provider organizations to signal where AI could help most.’

3. The NHS needs data and digital infrastructure that will enable it to capitalize on the potential of AI.

An AI in healthcare strategy “should ensure the NHS’s digital infrastructure is fit for purpose and set out how processes can be standardized and improved to allow efficient access to high-quality data for the development of AI systems.”

‘Such access should be based on a proportionate approach to data security and privacy that effectively balances risk and opportunity.’

4. The use of AI in the NHS must be underpinned by high-quality testing and evaluation.

The success of AI in practice depends on how well the technology performs in live healthcare settings. Given this, any national strategy “must consider how to broker and support more opportunities for testing AI technologies in real-world settings.”

‘An AI in healthcare strategy must support the further development of evaluation frameworks appropriate for AI and boost the capacity to evaluate AI as it is developed and implemented in the NHS.’

5. The NHS needs a clear and consistent regulatory regime for AI.

“There is particular concern among clinicians as to where clinical liability sits when algorithms are used in clinical decision making,” the Health Foundation points out, “so providing regulatory clarity here will be essential.”

‘An AI in healthcare strategy must prioritize the coordination of sectoral regulators, bringing all relevant bodies together under an agreed-upon approach that addresses gaps and overlaps.’

6. The healthcare workforce must have the right skills and capabilities to capitalize on AI.

“Ultimately, there needs to be a shared vision for how professions and occupations—as well as new roles—should develop with greater use of AI,” the authors write. “And NHS staff themselves should play a central role in the development of this vision, in partnership with their colleagues, employers, trade unions, professional and representative bodies, patients and the public.”

‘An AI in healthcare strategy must set out concrete plans to equip the current and future workforce with the skills needed for using AI, develop career paths that allow healthcare workers to specialize in AI and empower staff to shape the evolution of their roles.’

Read the whole thing.

 


The Latest from our Partners

How's ambient AI transforming the practice of medicine? Clinicians love using ambient AI, but don't take our word for it; take theirs. Head over to Nabla's Wall of Love to see firsthand why clinicians love using AI to bolster their day-to-day practice.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • AI has tremendous potential in healthcare—potential to get things wrong. The warning comes from the immediate past president of the American Medical Association, Jesse Ehrenfeld, MD, MPH. Of course, the technology is also stocked with expedients to help clinicians get things right, and fast, allowing time to check its guidance before acting on it. The trick is to treat algorithms more like aides than experts—and to make each one show its credentials. “We must demand transparency,” says Ehrenfeld, who’s a practicing anesthesiologist as well as a medical informaticist. If he finds himself in a surgery suite in which AI is controlling a patient’s ventilator, he needs to know: “How do I hit the ‘off’ switch?”
     
  • AI can’t save healthcare from its own systemic shortcomings. Avoiding the cliché about putting a Band-Aid on a severed artery, tech ethicist Alex John London, PhD, suggests AI won’t improve healthcare delivery until U.S. healthcare undergoes broad, structural change. Deficits that foundational “are not going to be changed by doing fancy work on your dataset,” he told attendees at a recent healthcare AI conference. “To really make use of AI and get all the value out of AI in healthcare, we have to change health systems, the data that we generate, our ability to learn, the way we deliver healthcare and who’s included in our systems.” London is director of Carnegie Mellon’s Center for Ethics and Policy and chief ethicist at the university’s Block Center for Technology and Society. He’s also co-editor of a textbook titled Ethical Issues in Modern Medicine. Get the rest from GeekWire.
     
  • Say generative AI is overhyped without coming right out and saying generative AI is overhyped. The MIT robotics pioneer and serial tech entrepreneur Rodney Brooks does pretty much that in an interview with TechCrunch. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system,” Brooks says. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.” What many miss, he politely points out, is that task performance is no measure of general competence.
     
  • OK, now come right out and say it. “AI is the hot buzzword, and there is a lot of hype.” That’s from Felice Verduyn-van Weegen of the life sciences division at EQT, a global investment organization. Her interest is in AI startups worth investing in. She tells Private Equity International her firm would “tread carefully when looking at an investment opportunity where AI is the only key differentiator.”
     
  • On the bright side, AI might save many people from having a stroke. The hope is expressed by an individual with a vested interest in the proposition. That doesn’t mean the scenario is not worth watching for. “I think in the very near future we will be able to look at a person’s electrocardiogram (ECG) results and, even if they’re not symptomatic, we can [have AI] look at their records and assign a risk ratio” predicting likelihood of stroke, the optimist, InfoBionic.AI CEO Stuart Long, tells Medical Device Network. “We will be able to say with a high degree of confidence that someone is going to develop atrial fibrillation and we can start treatments today that would help offset that. That is somewhere [healthcare] AI is going to help the most.”
     
  • Healthcare AI is even of interest to the Armed Forces Communications & Electronics Association International. A July 1 article in the AFCEA outlet Signal looks at AI’s potential for promoting preventive care and improving care access. Belinda Seto, deputy director at the NIH’s Office of Data Science Strategy, says data sharing has never been about technology. “It’s about culture,” Seto explains. “The idea of generating data spanning many years of research, and only sharing it much later, is no longer acceptable. It’s about common good; it’s about community good.”
     
  • The American College of Radiology is certifying rad practices that prove their AI prowess. It makes sense that medical imaging pros would be at the forefront of this kind of thing, as radiology was one of the first medical specialties to adopt clinical AI (not least for assistance with image interpretation). Plus it’s still further ahead with the technology than most others. AIin.Healthcare’s sister news site Radiology Business had the story last week. This week it’s interesting to see Politico, of all press outfits, is on it too.
     
  • Vicariously experiencing a physician’s tête-à-têtes with ChatGPT is worth the price of the book that chronicles the interactions. That would be ChatGPT, MD by Robert Pearl, MD, with “co-author” ChatGPT. The book recommendation comes from Robert C. Smith, MD, distinguished professor of medicine and psychiatry at Michigan State University. Reviewing the book for Psychology Today, Smith seems not to disagree with a key Pearl takeaway. “[T]he medical revolution needed to overturn the medical-industrial complex,” reviewer Smith writes, “will be mediated by AI in conjunction with two precipitants: societal displeasure with healthcare and the economic catastrophe current practices promise.”
     
  • AI can bring out the dark side in people who are willing to pay to engage with it. An attractive, 20-something influencer found this out the hard way. Her mistake: tapping the technology to create her interactive digital double so she could maximize her billable time online as a (well-meaning) loneliness reliever. Her unwanted result: An awful lot of fans, mostly men, got the wrong idea about what the unreal twin was open to discussing. Some wanted to delve into “disturbing fantasies.” At least one customer made like a stalker, setting up a “shrine-like photo wall” of the woman—with the bot’s encouragement. The Los Angeles Times gives the woman’s story without going into lurid details.
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
Innovate Healthcare