A modest proposal to rally AI regulators around a simple game plan

As debate simmers over how best to regulate AI, experts continue to offer guidance on where to start, how to proceed and what to emphasize. A new resource models its recommendations on what its authors call the “SETO Loop.”

The acronym derives from a four-part formula for establishing a regulatory framework: identify the Scope of what is meant to be protected, assess the Existing regulations across nodes in the relevant stack, select the Tools of regulation, and choose the Organization meant to enact the regulations.

Published by the Brooks Tech Policy Institute at Cornell University, the report was written by political scientist Sarah Kreps, PhD, and political science doctoral candidate Adi Rao.

To write the paper, the duo analyzed existing and proposed regulatory schemes for governing AI. Their work is intended to help develop “a new theoretical roadmap for U.S. policymakers as they grapple with the promises and potential perils of generative AI.”

In a section on the third leg of the SETO Loop, the authors consider tools available for incentivizing regulatory compliance. They describe five feasible options:

1. Total bans.

The benefit of a total ban is simplicity, Kreps and Rao suggest. “Once very clearly outlawed by the bureaucrat, the use of a technology will lead to criminal punishment. However, a prohibition approach might get in the way of U.S. attempts to cultivate [an] AI technological edge over peers.” More:

‘Firms might relocate to friendlier jurisdictions for a certain set of AI products, especially civil technologies, much less military technologies. Component-based “blueprint” regulations might garner more broad-based appeal as they target precisely the main issue under contention, with a tradeoff of greater regulatory costs.’

2. Taxes and punishments.

Countries or jurisdictions may use economic incentives to make a certain behavior more likely, the authors point out. This strategy includes both punitive measures—taxes, fines, jail time—and positive reinforcement through subsidies and grants, they write. “Taxation is a mainstay of the incentive regime,” they note, while financial and business regulations “also make considerable use of punishments.” More: 

‘However, the use of taxes and punishments may be complicated in the context of AI. The causal chain of malfeasance represents a sort of “Whodunnit?” If an autonomous car swerves left versus right and strikes one individual rather than five, is it the fault of the car manufacturer, the driver—or no one at all?’

3. Blueprint manipulation regulations.

One example of manipulation regulation is in the realm of gun control, Kreps and Rao explain. For example, certain gun-related regulations “ban the use of features or accessories that can be attached to firearms, rather than firearms per se.” They name bump stocks, which enable semi-automatic firearms to simulate automatic fire, as a case in point. More: 

‘Potential restrictions in the context of AI could include bans on the collection of certain types of sensitive data, or of systems making decisions based on identity factors such as race or gender. The benefit of blueprint manipulation is in its moderacy: The specific regulatory issue is targeted with precision.’

4. Information revelation regulations.

Rather than banning technology outright or manipulating its production, regulations “could be put in place to force producers to reveal information so as to inform consumers of potential risks or harms,” the authors comment. “These transparency measures are perhaps most salient in the pharmaceutical sector, but also exist in other industries.” More:

‘In the context of AI, information revelation might entail demanding that companies disclose what sort of information is fed into their algorithms. However, requiring companies to share information about the algorithms themselves would be more complicated: While we may possess weights about the algorithmic decision-making process, how [outputs] are generated is not possible to readily understand.’

5. Voluntary rules. 

Pointing to the space industry, which operates on a global scale and involves countries with diverse interests and priorities, Kreps and Rao observe that imposing strict regulations that do not align with a particular nation’s goals “can lead to conflicts and hinder cooperation.” By contrast, voluntary rules “offer a framework for best practices that multiple countries and private entities can agree upon and adhere to” as they are able. More:

‘In the AI domain, the White House has worked with several of the main developers on a set of voluntary measures to [promote] safe, secure and transparent development of AI technology. This approach fosters a spirit of cooperation, allowing countries to work together toward shared objectives, such as space exploration, scientific research and environmental monitoring.’

Read the full report. 


Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
