International experts formulate scientific approach to AI risk management

Many people from many organizations in many countries are working to coordinate oversight of AI’s risks. A budding project seeks to bring these minds together to advance the worthy goal of building global consensus with scientific rigor.

The effort is jointly led by experts at the University of Oxford’s Oxford Martin School in the U.K. and the Carnegie Endowment for International Peace in Washington, D.C. This week the group, which numbers more than 20 experts, published a report laying out its observations and recommendations.

They ask: How can the world’s AI stakeholders work together toward the common goal of international scientific agreement on AI’s risks?

“There has been surprisingly little public discussion of this question, even as governments and international bodies engage in quiet diplomacy,” the authors write. “Compared to climate change, AI’s impacts are more difficult to measure and predict—and more deeply entangled in geopolitical tensions and national strategic interests.”

Among the report’s takeaways are six ideas illuminating the conceptual space in which AI and international relations intersect. The bulk of the thinking took place at a workshop in July. Here are excerpts from four of the six ideas.

1. No single institution or process can lead the world toward scientific agreement on AI’s risks.

Global political buy-in depends on including a broad range of stakeholders, yet greater inclusivity reduces speed and clarity of common purpose. Appealing to all global audiences would require covering many topics and could come at the cost of coherence.

“Scientific rigor demands an emphasis on peer-reviewed research, yet this rules out the most current proprietary information held by industry leaders in AI development. Because no one effort can satisfy all these competing needs, multiple efforts should work in complementary fashion.”

2. The UN should consider leaning into its comparative advantages by launching a process to produce periodic scientific reports with deep involvement from member states.

As with the Intergovernmental Panel on Climate Change (IPCC), this approach can help scientific conclusions achieve political legitimacy and can nurture policymakers’ relationships and will to act.

“The reports could be produced over a cycle lasting several years and cover a broad range of AI-related issues, bringing together and addressing the priorities of a variety of global stakeholders.”

3. A separate international body, led primarily by independent scientists, should continue producing annual assessments that focus narrowly on the risks of advanced AI systems.

The rapid technological change, potential scale of impacts and intense scientific challenges of this topic call for a dedicated process that can operate more quickly and with more technical depth than the UN process.

“The UN could take this on, but attempting to lead both this report and the above report under a single organization risks compromising this report’s speed, focus and independence.”

4. The two reports should be carefully coordinated to enhance their complementarity without compromising their distinct advantages.

Some coordination would enable the UN to draw on the independent report’s technical depth while helping it gain political legitimacy and influence. However, excessive entanglement could slow or compromise the independent report and erode the inclusivity of the UN process.

“Promising mechanisms include memoranda of understanding, mutual membership or observer status, jointly running events, presenting on intersecting areas of work and sharing overlapping advisors, experts or staff.”

In their conclusion the authors underscore their call for combining a UN-led process with an independent scientific report. This approach, they reiterate, would “leverage the strengths of various stakeholders while mitigating potential pitfalls.” More:

“The UN’s unique position and convening power can provide the necessary global legitimacy and political engagement, while an independent scientific track ensures the continued production of timely, in-depth analyses of advanced AI risks.”

Read the whole thing.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.