4 points crucial to the nimble regulation of GenAI
With generative AI coming into its own, AI regulators must rely less on principles of risk management and more on those of uncertainty management.
A new report from the American Enterprise Institute fleshes out the hows and whys of this position.
“Regulation based on risk management cannot prevent harm arising from outcomes that cannot be known” based on forecasts, writes AEI senior fellow Bronwyn Howell, sole author of the report. “Some harm is inevitable as society learns about these new [GenAI] applications and use contexts. Rules that are use-case specific rather than generic … offer a principled way of enabling efficient development and deployment of AI applications.”
Embedded in the paper are four points relevant to AI regulation stakeholders across industries and sectors.
1. Managing uncertainty is different from managing risk, so a different sort of regulatory framework is needed for the age of generative AI.
“Whereas classical risk management requires the ability to define and quantify both the probability and occurrence of harm,” Howell writes, “in situations of uncertainty, neither of these can be adequately defined or quantified, particularly in the case of GenAI models.” More:
‘Arguably, insurance arrangements for managing outcome uncertainties provide a more constructive way forward than do risk management regimes, which presume knowledge of outcomes that is just not available.’
2. Classic risk management systems have largely sufficed for the development of classic AI systems. GenAI is changing that paradigm.
GenAI models, Howell points out, “are characterized by the intersection of complex AI systems—which have unknown and unpredictable outcomes—with complex human systems, which have unknowable and unpredictable outcomes.” More:
‘Historic risk management systems are unlikely to safeguard end users and society from unexpected harms.’
3. We should expect unexpected harms, especially in the application of open-source models, which are exempt from most risk management obligations.
Though not mandatory, the arrangements for managing AI risk developed in the U.S. tend to follow standard risk management processes, Howell notes. “Firms following U.S. guidelines will provide greater assurances and harm reduction than those following the EU regulations.” More:
‘However, the costs of compliance will be higher. Neither set of arrangements is well suited to managing the unexpected outcomes arising from GenAI deployment and use. Consequently, we should expect unexpected outcomes—and harms.’
4. Regulators need to be honest about the limits of their ability to prevent harm and engender confidence in AI systems through regulation.
“They should focus on educating end users and society about the AI environment and their role in managing personal exposure,” Howell writes. “However, there may also be some benefit in considering the extent to which GenAI developers make their models and training data available to independent third parties for evaluation.” More:
‘Given that we can expect unexpected harms, regulators should consider establishing an insurance fund or funds and associated governance—potentially at an international level—to enable compensation when inevitable harms arise.’
Howell likens the present moment in the history of AI to the early days of the motor vehicle.
“We are on the cusp of a range of new technologies that will be equally or even more transformative,” she writes. “We must become more comfortable about knowing that human advancement comes from facing the unexpected when it occurs and learning from it.” More:
‘Not taking a journey because we cannot be assured that no harm will occur is to guarantee no progress is made.’