Partnership on AI valiantly trying to wrap its arms around the world
It’s going to take a multinational effort for the global AI community, such as it is, to avoid the emergence of a “fragmented AI landscape.” In that unwanted scenario, AI developers and end users would be left navigating jagged safety guardrails riddled with gaps.
Stepping up to face down this threat, the nonprofit Partnership on AI is doing what it can to help coordinate the big coordinators.
That’s apparent in a report released this month. In 40 or so pages, the San Francisco-based group compares, contrasts and otherwise analyzes eight policy frameworks. Two of these hail from the U.S.: President Joe Biden’s 2023 executive order “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and the AI Risk Management Framework from the National Institute of Standards and Technology (NIST).
The report lays out nine recommendations for formulating “a more coherent, effective approach to managing the risks and harnessing the potential of foundation models” so as to ensure accountability and transparency while fostering innovation in the global AI ecosystem.
Here are all nine.
1. When identifying foundation models in need of additional governance measures, national governments and the EU should prioritize cooperation.
‘Agreeing on a common definition and thresholds for the models covered by policy frameworks should flow through to greater alignment between the frameworks.’
2. The G7 presidency should continue developing the Hiroshima Code of Conduct into a more detailed framework.
‘This work should seek input from foundation model providers, civil society, academia and other stakeholder groups equally.’
3. When creating and approving initial Codes of Practice for the EU AI Act, all involved parties should prioritize compatibility with other major AI governance frameworks.
‘The involvement of non-EU model providers, experts and civil society organizations will help advance this objective.’
4. To support the development of standardized documentation artifacts, standards development organizations (SDOs) should ensure that their processes are informed by socio-technical expertise and diverse perspectives, and that they have the resources required.
‘To that end, SDOs, industry, governments and other bodies should invest in capacity building for civil society and academic stakeholders to engage in standards-making processes, including to ensure participation from the Global South.’
5. The development of standardized documentation artifacts for foundation models should be a priority in AI standardization efforts.
‘This would promote internationally comparable documentation requirements for foundation models, [encouraging] interoperability and establishing a baseline for best practice internationally.’
6. International collaboration and research initiatives should prioritize efforts that will support the development of standards for foundation model documentation artifacts.
‘Documentation is a key feature of foundation model policy requirements, and common requirements for artifacts will directly improve interoperability. It will also make comparisons between models from different countries easier, promoting accountability and innovation.’
7. National governments should continue to prioritize both international dialogue and collaboration on the science of AI safety.
‘This work will inform a common understanding of what should be included in documentation artifacts to promote accountability and address foundation model risks.’
8. National governments should support the creation and development of AI Safety Institutes (or equivalent bodies) and ensure they have the resources, functions and powers necessary to fulfill their core tasks.
‘Efforts should be made to align the functions of these bodies with those common among existing AISIs. This will promote efforts to develop trusted mechanisms to evaluate advanced foundation models—and may, at a later stage, lead to the potential to work towards institutional interoperability.’
9. The fledgling International Network of AI Safety Institutes, along with bodies that have equivalent or overlapping functions such as the EU AI Office, should be supported, and efforts should be made to expand the network’s membership.
‘Consideration should be given to how this network could support broader AI Safety research initiatives.’