News You Need to Know Today

Global AI oversight | AI reporter’s notebook | Partner news

Thursday, August 29, 2024

In cooperation with Northwestern and Nabla


International experts formulate scientific approach to AI risk management

A lot of people from a lot of organizations in a lot of countries are working to coordinate oversight of AI’s risks. A budding project seeks to bring many of these minds together to advance the worthy goal of building global consensus with scientific rigor.

The effort is jointly led by experts at the University of Oxford’s Oxford Martin School in the U.K. and the Carnegie Endowment for International Peace in Washington, D.C. This week the experts, numbering more than 20, published a report laying out the group’s observations and recommendations.

They ask: How can the world’s AI stakeholders work together toward the common goal of international scientific agreement on AI’s risks?

“There has been surprisingly little public discussion of this question, even as governments and international bodies engage in quiet diplomacy,” the authors write. “Compared to climate change, AI’s impacts are more difficult to measure and predict—and more deeply entangled in geopolitical tensions and national strategic interests.”

Among the report’s takeaways are six ideas illuminating the conceptual space in which AI and international relations intersect. The bulk of the thinking took place at a workshop in July. Here are excerpts from four of the six ideas.

1. No single institution or process can lead the world toward scientific agreement on AI’s risks.

Global political buy-in depends on including a broad range of stakeholders, yet greater inclusivity reduces speed and clarity of common purpose. Appealing to all global audiences would require covering many topics and could come at the cost of coherence.

‘Scientific rigor demands an emphasis on peer-reviewed research, yet this rules out the most current proprietary information held by industry leaders in AI development. Because no one effort can satisfy all these competing needs, multiple efforts should work in complementary fashion.’

2. The UN should consider leaning into its comparative advantages by launching a process to produce periodic scientific reports with deep involvement from member states.

As with the Intergovernmental Panel on Climate Change (IPCC), this approach can help scientific conclusions achieve political legitimacy and can nurture policymakers’ relationships and will to act.

‘The reports could be produced over a cycle lasting several years and cover a broad range of AI-related issues, bringing together and addressing the priorities of a variety of global stakeholders.’

3. A separate international body should continue producing annual assessments that narrowly focus on the risks of advanced AI systems, primarily led by independent scientists.

The rapid technological change, potential scale of impacts and intense scientific challenges of this topic call for a dedicated process that can operate more quickly and with greater technical depth than the UN process.

‘The UN could take this on, but attempting to lead both this report and the above report under a single organization risks compromising this report’s speed, focus and independence.’

4. The two reports should be carefully coordinated to enhance their complementarity without compromising their distinct advantages.

Some coordination would enable the UN to draw on the independent report’s technical depth while helping it gain political legitimacy and influence. However, excessive entanglement could slow or compromise the independent report and erode the inclusivity of the UN process.

‘Promising mechanisms include memoranda of understanding, mutual membership or observer status, jointly running events, presenting on intersecting areas of work and sharing overlapping advisors, experts or staff.’

In their conclusion, the authors underscore their call for combining a UN-led process with an independent scientific report. This approach, they reiterate, would “leverage the strengths of various stakeholders while mitigating potential pitfalls.” More:

‘The UN’s unique position and convening power can provide the necessary global legitimacy and political engagement, while an independent scientific track ensures the continued production of timely, in-depth analyses of advanced AI risks.’

Read the whole thing.

 


The Latest from our Partners

 

Webinar recording - BrainX Community Live - Implementing Ambient AI Scribe in Healthcare. Listen to Dr. Lee, Chief Medical Officer at Nabla, discuss ambient AI implementation in health systems, benefits for clinicians, and the future outlook for the technology.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Keep your eyes on California. This week Golden State legislators all but rubber-stamped a bill that, if signed into law, would require AI companies to thoroughly safety-check their products before selling them. It would also empower the state’s attorney general to sue AI vendors for harms done. Numerous outlets note the pressure Gov. Gavin Newsom will likely face from Silicon Valley to veto the bill. But if he signs it, California “will become the standard-bearer for regulating a technology that has exploded in recent years,” New York Times tech-policy reporter Cecilia Kang predicts. Healthcare, of course, is high among the economic sectors whose world the figurative explosion has rocked.
     
  • Don’t look away just yet. Much of what passes as tech-based care improvement in California these days is nothing more than “primitive but effective AI quackery” designed to help “greedy politicians, crooked physicians” and other cynical exploiters go about the business of “ripping you off.” That’s the opinion of Patrick Wagner, MD, a retired Sacramento surgeon. Writing in the right-leaning California Globe, Wagner adds that technology “continues to feverishly outpace the common sense, skill and judgment of American physicians. It is completely out of hand.” Read it and weep, leap or say something they’ll have to bleep.
     
  • Mayo Clinic has more than 200 algorithms under development. The bounty shouldn’t really surprise anyone, given the institution’s tech-forward stance and 11 million patients with electronic records. Still, that’s a big number, considering the relative complexity behind the training, validation and testing of every AI model. “Technology and data-driven innovation are making it possible for us to solve some of the most complex medical problems in novel ways,” Mayo ophthalmologist Raymond Iezzi Jr., MD, tells the organization’s news division in an item posted Aug. 28.
     
  • AI could just as easily solve healthcare disparities as worsen them. A former OpenAI executive makes the case this week in Newsweek. “To effectively treat diseases, it’s essential to understand how they manifest in different populations,” writes Zack Kass, who led go-to-market strategies at OpenAI before hanging out his shingle as an AI advisor. “AI has the potential to make medical research more inclusive, ultimately leading to better health outcomes for everyone.” Hear him out.
     
  • It could also be a peacemaker between providers and payers. Wait. What? “Rather than embrace a strategy that relies on arcane tools of the past to torment one another, healthcare organizations today—health systems and health plans—are using AI in much more meaningful ways,” explains Michael Drescher, vice president of payer strategy at healthcare AI startup Xsolis. “These include avoiding unnecessary fights altogether and working more collaboratively in their shared clinical decision-making processes.” Legal Reader posted the piece Aug. 27.
     
  • Executives and the directors who report to them get all the attention. What about the managers who report to the directors? They have a champion in McKinsey & Company. Asked how the rise of generative AI will affect middle managers in charge of knowledge workers, McKinsey partner Bryan Hancock reassures these anxious humans that they’ll remain important. “They’re managing a team of people whom they’re apprenticing, as well as managing the underlying tools that support the work,” Hancock points out. “If you think also about robotic team members, managers will still be needed to integrate information, to coach, to make things happen.”
     
  • AI won’t replace medical coders or billing specialists. But those who use AI may well replace those who don’t. Sound familiar? AI can enable the entire clinical documentation integrity team to perform “at the top of their license” (by letting them focus on clinical judgment instead of sifting through documentation); find new revenue and capture meaningful codes (without additional resources); augment existing workflows (enabling a secondary review to complement concurrent CDI activities without ripping and replacing existing processes); and facilitate a pre-bill review. CDI expert Cassi Birnbaum lays all this out in the latest in her series on AI in that profession at ICD-10 Monitor.
     
  • Gen AI jokes are lame. We already knew that. But get this: The technology may have a future as a comedy critic. Take it from American standup Viv Ford, who’s been running her material past ChatGPT for instant feedback. She’s found the stuff AI considers funny tends to be a dud with live audiences. But the jokes the bot finds offensive? They kill. “And sometimes ChatGPT will say ‘the joke is fine but could use some work’—in which case I toss it away and start again.” Get the story from the BBC. Seriously.  
     
  • Recent research in the news:
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here


© Innovate Healthcare, a TriMed Media brand