Nonconsensual AI porn is hated on the left and right. Can Congress act on it?

Liberals and conservatives in Congress — from Rep. Alexandria Ocasio-Cortez to Sen. Josh Hawley — all agree that something should be done to rein in nonconsensual porn generated by AI. The White House issued a “call to action” this week, urging Congress to strengthen legal protections for survivors. But lawmakers have struggled for more than a year to draft a solution, illustrating how ill-equipped Washington is to set limits on rapidly evolving technology with the power to disrupt people’s lives.

Legislation has been mired in debate over who should be held accountable for the deepfakes — with tech lobbyists pushing back on any language that would ensnare the platforms that distribute them.

Meanwhile, it is rapidly becoming easier for anyone with a couple of photos and a computer to make and distribute the videos.

“There are now hundreds of apps that can make non-consensual, sexually explicit deepfakes right on your phone,” Senate Judiciary Chair Dick Durbin (D-Ill.), who co-sponsored a bill against deepfake porn, told POLITICO in an email. “Congress needs to address this growing crisis as quickly as possible.”

Senate Majority Leader Chuck Schumer just released a plan for how Congress should regulate AI that mentioned deepfakes and privacy, but stopped short of embracing specific bill language. Instead, it proposed pouring billions into AI research and development.

Schumer did not reply to POLITICO questions about intimate images, but told reporters, “We need both transformational innovation and sustainable innovation, in a sense, to maximize the benefits of AI and minimize the liabilities.”

Advocates for victims of AI-generated porn have been trying for at least a year to get Congress to pay attention, aided by high-profile cases involving celebrities like Taylor Swift and Italian Prime Minister Giorgia Meloni.

But it was a case involving a noncelebrity — a 27-year-old woman in California — that sparked the legislative effort. The woman discovered in 2022 that a man she rejected for a date had used AI to make a deepfake porn video of her “doing explicit things with him,” and shared it in person and online. Horrified to realize it was all legal, she switched gyms, deleted her social media accounts and contacted Virginia-based nonprofit Sexual Violence Prevention Association for help.

“This whole experience was really traumatizing,” the woman told POLITICO via email, staying anonymous to prevent further online harassment. “He deserves to face at least a fraction of accountability for what he did to me.” SVPA chief Omny Miranda Martone said complaints have increased alongside the technology’s power.

“It used to take roughly between 100-200 photos of the victim’s face; you had to have a high-powered computer; you had to have a good amount of technical ability and skill,” Martone said. “Now … you only need one or two photos.”

SVPA began lobbying some 100 lawmakers last year on a bill to penalize the makers of unauthorized deepfake porn. Martone found that many legislators hesitated to take up a novel digital-rights issue that did not clearly fall within any of their committee mandates.

Durbin’s office told POLITICO that a meeting with Martone drove him to act.

Rather than write a new law, Durbin’s office crafted an amendment to the Violence Against Women Act, which protects survivors of sexual assault and domestic violence. The DEFIANCE Act would give victims the right to sue creators, solicitors, possessors and distributors of AI-generated porn in civil court for $150,000 in damages plus litigation fees if the perpetrators “knew or recklessly disregarded” that their subjects did not consent to being deepfaked.

The White House appeared to endorse the approach in its recent call to action, naming the Violence Against Women Act, though it did not immediately answer POLITICO questions about whether it supports the DEFIANCE Act. It also encouraged voluntary commitments to cut down on the creation and distribution of the images.

Lawmakers in both chambers told POLITICO they hoped the focus on perpetrators, rather than tech companies, would help the bill succeed. Ocasio-Cortez (D-N.Y.), herself a victim of deepfake porn, amassed eight House co-sponsors for a companion bill and said broader AI regulation could trip political “landmines.”

“Going really big, really fast, with something regulatory in an emerging industry space — that can oftentimes run into its challenges,” she said in an interview. “Centering the bill on survivors’ rights — particularly the right of action — helps us dodge some of those larger questions in the short term and build a coalition in the immediate term.”

Co-sponsor Rep. Nancy Mace (R-S.C.) told POLITICO she “fully supports” the legislation, and recently introduced a bill to raise penalties for those behind deepfake porn to $500,000. In the Senate, Missouri Republican and frequent tech critic Hawley joined as a co-sponsor, praising the bill for giving individuals the right to sue.

Marissa Serafino, an attorney focused on AI policy at Holland & Knight LLP, said tailoring a law to civil penalties could be a selling point. “The DEFIANCE Act demonstrates that liability related to AI is palatable for members on both sides of the aisle, especially when the issue is specific and narrow,” she said.

Daniel Zhang, who leads policy research at the Stanford Institute for Human-Centered Artificial Intelligence, said about half of U.S. states have also started efforts to regulate deepfaked porn. He called the issue “urgent” for constituents. An AI Policy Institute survey in January found 84 percent of voters supported banning deepfake porn, and even more supported requiring companies to prevent their models from being used to create it.

Already, tech lobbyists have objected to the DEFIANCE Act, with industry group NetChoice writing POLITICO that it “is likely overbroad and unconstitutional” and could violate free speech protections.

The group’s chief lawyer, Carl Szabo, presented Congress with an alternative in March that would further narrow the bill by penalizing only the distribution of deepfaked porn, not its possession or production. It would also require a plaintiff to prove someone shared an explicit AI-generated image with intent to “coerce, harass, or intimidate.”

Durbin called the proposal a “nonstarter.”

The Senate has struggled to crack down on tech harms before. Last year, Durbin and Hawley partnered on a bill to strip social media companies of legal immunity for child sexual abuse material, but Oregon Democrat Ron Wyden blocked it, saying it could weaken encryption on social media apps and websites.

Meanwhile, a new AI policy roadmap from Schumer’s four-member working group offered a look at the Senate’s legislative priorities. It did not explicitly endorse the DEFIANCE Act, but said it supported “consideration of legislation” to make sure existing protections address “nonconsensual distribution of intimate images and other harmful deepfakes.”

Schumer’s office did not say whether he would support the bill. The office of Sen. Martin Heinrich (D-N.M.), one of three senators working with Schumer on AI, told POLITICO he plans to join the bill as a co-sponsor. Another member of the working group, Sen. Todd Young (R-Ind.), told POLITICO Tech he was “inclined” to find a solution for deepfake porn and “that may be Senator Durbin’s legislation.” The third, Sen. Mike Rounds (R-S.D.), did not respond to questions on the bill.

Holland & Knight’s Serafino said that “many proposed bipartisan bills were not named in the roadmap, so I’m not reading into its omission.”

The AI policy roadmap pointed to elections as a high-priority issue, and on May 15, three bipartisan bills to regulate AI in elections advanced out of committee with Schumer’s vocal support. However, Republicans opposed two of the bills that would crack down on deepfakes in election content.

In the meantime, consequences for the victims of deepfake porn continue. The California woman said that at the time she had to take a week off work and hire a therapist to cope with panic attacks.

“I kept feeling really intense shame,” she said. Recalling the man who created and spread false images of her, she added: “He’s probably making deepfakes of someone else. I can’t be the only one who rejected him.”