What federal regulators can learn from the states about AI oversight
If the Trump administration continues taking a laissez-faire stance toward AI—including AI used in healthcare—why not let the states go it alone on regulating the technology?
Here’s one defensible answer: Because a patchwork of state-by-state regulatory regimes would complicate operations for the many AI suppliers that serve a national clientele.
That question and answer are suggested in a paper posted by the American Society for AI (ASFAI). The document is written as solicited input for the framers of the AI Action Plan, for which the White House’s Office of Science and Technology Policy opened a request for information in February.
Noting that 40 of 50 states are already moving on AI oversight, ASFAI offers an overview of these state-level initiatives. The group’s stated hope is that federal AI policies will draw from states’ efforts to do the right thing for their residents.
“Understanding these state-level initiatives can provide insights to shape federal policy,” ASFAI writes. More:
‘While a patchwork approach to regulation could be harmful, state governments can also serve as vital laboratories for policy innovation, offering real-world evidence of how different governance approaches succeed or fail in practice.’
The group’s published comment dedicates three of its 14 pages to informing the White House on what it can learn from the states about regulating AI. Here are excerpts from that section, which is broken into implementation models and policies.
STATE-LEVEL IMPLEMENTATION MODELS
1. Dedicated task forces.
Several states, such as Maryland, have established dedicated task forces with specific mandates and sunset provisions, the ASFAI authors point out. “This model typically involves a discrete group of experts working within a defined timeframe to produce specific deliverables, such as policy recommendations or implementation frameworks.” More:
‘This can result in a more focused mission, but the fixed duration can limit long-term oversight capability.’
2. Integrated agency.
Other states have integrated AI governance into existing governmental structures, such as Georgia’s effort spearheaded by the Georgia Technology Authority.
‘This model can be more efficient, but it lacks the focus of a dedicated task force.’
3. Legislative committees.
States such as Colorado have established legislative committees with ongoing AI oversight responsibilities, ASFAI notes before adding that this model “emphasizes continuous legislative engagement and oversight.”
‘However, legislative committees may not have the same expertise available compared to a dedicated task force or a technology-specific government agency.’
4. Hybrid approaches.
Some states have developed hybrid models that combine elements of multiple approaches. For example, ASFAI points out, Oregon has combined a legislative task force with an executive advisory council. The group remarks:
‘Hybrid approaches can balance the advantages of various other approaches.’
STATE-LEVEL AI POLICIES
A. Adoption and preparation.
States have recognized that AI policy should not be solely focused on regulation and limiting risk, ASFAI writes. “Governments can also encourage increased development and adoption of AI.” More:
‘For example, Iowa’s executive task force specifically targets cost reduction and automation opportunities, while Arkansas emphasizes practical applications in unemployment insurance fraud detection and recidivism reduction.’
B. Deepfake/fraud detection.
“California recently passed a law compelling companies to remove deepfakes when identified by users,” ASFAI notes, “and allowing courts to issue injunctions blocking the distribution of deceptive political content during elections.” Meanwhile,
‘Tennessee passed legislation targeting unauthorized AI-generated replication of people’s voices and likenesses to prevent unwanted AI impersonation.’
C. Consumer protection.
Colorado recently enacted legislation requiring developers of high-risk AI systems to exercise reasonable care to prevent algorithmic discrimination and mandating disclosures to consumers, ASFAI notes. “Similarly, Utah passed a law establishing liability for undisclosed AI use that violates consumer protection laws and requiring disclosures in regulated professions such as healthcare.”
‘Other states have focused on prevention of bias and discrimination. For example, a recent New York law mandates bias audits for automated employment decision tools. Arizona’s judicial steering committee is tasked with addressing bias mitigation in sensitive government functions.’
If its RFI period is proceeding according to plan, the executive branch’s Office of Science and Technology Policy has already received ASFAI’s full input. The comment period closes tomorrow night at 11:59 p.m.