Arizona state lawmaker used ChatGPT to write part of law on deepfakes

The Arizona state representative Alexander Kolodin speaks at the statehouse in Phoenix on 24 April 2024. Photograph: Ross D Franklin/AP

An Arizona state representative behind a new law that regulates deepfakes in elections used an artificial intelligence chatbot, ChatGPT, to write part of the law – specifically, the part that defines what a deepfake is.

Republican Alexander Kolodin’s bill, which passed both chambers unanimously and was signed by the Democratic governor this week, will allow Arizona candidates or residents to ask a judge to declare whether a purported deepfake is real, giving candidates a way to debunk AI-generated misinformation.

Kolodin said he used ChatGPT to help define “digital impersonation” for the bill, in part because it was a fun way to demonstrate the technology. He provided a screenshot of ChatGPT’s response to the question of what a deepfake is, which closely resembles the language in the bill’s definition.

“I am by no means a computer scientist,” Kolodin said. “And so when I was trying to write the technical portion of it, in terms of what sort of technological processing makes something a deepfake, I was kind of struggling with the terminology. So I thought to myself, well, let me just ask the subject matter expert. And so I asked ChatGPT to write a definition of what was a deepfake.”

That portion of the bill “probably got fiddled with the least – people seemed to be pretty cool with that” throughout the legislative process. ChatGPT provided the “baseline definition” and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin said.

Kolodin has used ChatGPT in other lawmaking a couple of times, he said, to help write first drafts of amendments and save time. “Why work harder when you can work smarter,” Kolodin replied on Twitter when an Arizona reporter tweeted about his use of ChatGPT in the bill.

The federal government has not yet regulated the use of AI in elections, though groups have been pressuring the Federal Election Commission to do so because the technology has moved much faster than the law, creating concerns it could disrupt elections this year. The agency has said it expects to share more on the issue this summer.

The Federal Communications Commission, meanwhile, will consider whether to require disclaimers on AI-generated content in political ads running on radio and TV, the Associated Press reported on Wednesday. The FCC has previously made clear that AI-generated voices in robocalls, such as an instance in which President Joe Biden’s voice was spoofed to New Hampshire voters, are illegal.

In the absence of federal regulations, many states have advanced bills to regulate deepfakes. It’s typically an area of rare bipartisan agreement.

Some bills outlaw the use of deepfakes in certain political contexts, while others require disclosures noting that the content is AI-generated.

Kolodin’s bill takes a different approach to deepfakes in elections than those of many other states considering how to regulate the technology. Rather than outlawing or curbing their use, Kolodin wanted to give people a mechanism to have the courts weigh in on whether a supposed deepfake is authentic. Ordering one taken down would be both futile and a first amendment problem, he said.

“Now at least their campaign has a declaration from a court saying, this doesn’t look like it’s you, and they could use that for counternarrative messaging,” he said.

The bill does allow a court to order a deepfake removed, and the person depicted to seek damages, if the deepfake shows someone nude or in a sexual act, if that person is not a public figure, and if the publisher knew it was false and refused to remove it.

The Arizona bill also takes a different approach to disclaimers. Rather than outright requiring them, as some state laws have, it says that a person bringing a potential court action wouldn’t have a case if the publisher of the digital impersonation had conveyed that the image or video was a deepfake or that its authenticity was in dispute, or if it would be obvious to a reasonable person that it was a deepfake.

Kolodin said disclaimers raise speech concerns for him, too, because they cut into airtime or, in some cases, ruin the joke or the point of a message. He cited a recent instance in which the Arizona Agenda, a local publication covering state politics, created a deepfake of the US Senate candidate Kari Lake in which it was obvious to viewers that the video wasn’t real based on what Lake was saying. (Full disclosure: the reporter of this story was the co-founder of the Arizona Agenda, but is no longer involved.)

“Any reasonable person would have realized that [it was fake], but if you had a label on it, it would have ruined the joke, right?” Kolodin said. “It would have ruined the journalistic impact. And so I think a prescribed label is further than I wanted to go.”

In one instance in Georgia, a state representative trying to convince fellow lawmakers to approve a bill outlawing deepfakes in elections used an AI-generated image and audio of two people who opposed the bill, faking their voices to say they endorsed it.

Kolodin hopes his bill will become a model for other states because he worries that well-intentioned efforts to regulate AI in elections could trample on speech rights.

“I think deepfakes have a legitimate role to play in our political discourse,” he said. “And when you have politicians regulating speech, you kind of have the fox guarding the hen house, so they’re gonna say, oh, anything that makes me look silly is a crime. I absolutely hope that other state legislators pick this up.”