Explicit AI photos of Taylor Swift were shared online. Legal experts weigh in on how she can fight back.

There is a larger concern about laws and social platforms needing to crack down on AI-generated porn — not just of celebrities.

Taylor Swift attends the 81st Annual Golden Globe Awards at the Beverly Hilton on Jan. 7, 2024, in Beverly Hills, California. (Photo by Lionel Hahn/Getty Images)

Earlier this week, explicit AI-generated images of Taylor Swift circulated on X, formerly Twitter, causing the search phrase “Taylor Swift AI” to trend. One post, which was shared by a verified user on the platform, was live for about 17 hours and viewed more than 45 million times before it was removed and the account was suspended.

Swift’s fans were quick to mobilize, reporting reposts and shares that featured the images and flooding the search results with real photos and videos of the singer, but copies of the fake photos are still reposted all over the internet. Yahoo Entertainment is choosing not to show or link to any of the images.

The AI images are not an issue of media literacy, or of the general public being unable to tell a deepfake (media that has been manipulated to look or sound like someone else) from reality. The Swift images are almost cartoonish in appearance.

The larger concern is that laws and social media platforms need to crack down on AI-generated deepfake porn.

Although X owner Elon Musk has cut back the platform’s content moderation team since taking over in 2022, X has policies banning synthetic and manipulated media as well as nonconsensual nudity. The company seemingly addressed the issue, without specifically naming Swift, in a post on Jan. 26.

“Posting non-consensual nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” it said.

A 2023 report found that 98% of all deepfake videos on the internet are pornographic, and that 99% of the people targeted in those videos are women.

Emily Poler, a Brooklyn, N.Y.-based litigator in the intellectual property space, explained that even though it’s not an actual photo of Taylor Swift, there are enough characteristics and indicators in the image to make it unmistakable that it’s meant to depict her.

“Generally speaking, the laws only talk about the commercial exploitation of somebody's image or likeness,” Poler explained to Yahoo Entertainment. Poler referenced a 1992 lawsuit in which Wheel of Fortune host Vanna White sued Samsung for airing a commercial featuring a feminized robot standing in front of a Wheel of Fortune-esque game board.

“No, it's not Vanna White, but they put a blond wig and a strapless evening dress on it and positioned it in front of a board of letters. We all know [the robot] is meant to be Vanna White,” Poler said.

Jason Sobel, a partner at Brown Rudnick with more than 20 years of litigation experience in intellectual property and copyright law, agreed, telling Yahoo Entertainment that the image doesn't need to be a flawless replica of the person.

"[Swift is] a public figure, she has the right to control the exploitation of her identity, and that includes her name, her likeness [and] voice," Sobel explained. "It doesn't need to be a perfect facsimile of a person to be to be misappropriating and violating someone's exclusive right to exploit their own identity."

While it’s not known whether the creator of the Swift AI photos had any intention to leverage the virality to sell products, Poler said Swift’s lawyers could look into whether they had any motivation to monetize the photos' popularity.

As for whether X profited from traffic to the photos and could be liable, Sobel said litigators would need concrete evidence that X held off on taking down content that violated its policies expressly to benefit from the traffic.

"That would be difficult to prove," he said.

Neither Swift nor her team has publicly commented on whether they will pursue legal action against the creator of the deepfake images. Swift’s publicist did not respond to Yahoo Entertainment’s request for comment.

Part of the social media reaction to the AI Swift photos was to dismiss any serious concern, on the theory that Swift, who is worth $1.1 billion, is surely too famous and successful to care that fake explicit photos are circulating on X. Many compared it to the mass photo leak in 2014, sometimes referred to as “the fappening,” in which photos from hacked celebrity accounts circulated online.

In 2018, actress Scarlett Johansson was the victim of AI-generated explicit videos. In a statement at the time, she said she had “sadly been down this road many times” with images of herself spreading on the internet — real or fake.

“Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she wrote. “There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

The silver lining could be that this situation inspires legal change that protects others against AI-generated porn.

"There's always some high-profile thing that that sparks legislative change," Sobel said.

What if this happens to you, a noncelebrity?

With leaked personal photos or revenge porn (the distribution of explicit images or videos without the consent of the person depicted), there are established guidelines for trying to protect your privacy.

But when it’s an artificially generated fake photo, how do you stop someone from making it and sharing it?

“There's not a huge amount of stuff that people can do, unfortunately, at least not at the present,” Poler admitted. “Without your ability to control it, I think there's not a huge amount to do.”

Sobel agreed.

"There isn't a practical, good way to prevent this," he said. "For anyone — Taylor Swift just the same as for Jason Sobel. Someone's out there doing it."