AI-generated images are running rampant on social media. What are X, TikTok and Meta doing to control them?

How can social media take control of the flood of AI content? (Getty Images)

Hundreds of celebrities descended on the steps of the Metropolitan Museum of Art in New York City earlier this month for a chance to be photographed at the 2024 Met Gala, one of the year’s biggest fashion events. But among the more widely circulated images of the night were red carpet photos of two A-listers who weren’t even in attendance.

The images, which depicted Katy Perry and Rihanna wearing on-theme floral dresses and standing in front of a crowd of photographers outside the Met Gala, were generated by artificial intelligence, or AI. Though they are of questionable quality — a close examination quickly reveals that the carpet is the wrong color and the photographers in the background look slightly warped — they appeared legitimate enough to confuse even Perry’s own mother, as evidenced by a text exchange Perry shared on her Instagram account.

Instagram’s official account even commented “a TRUE goddess” on the image, engaging with it almost 24 hours before Instagram’s third-party fact-checkers flagged it as altered.

Fake photos of pop stars at the Met Gala may seem harmless, but the images offer just the latest high-profile example of the ways in which AI can be used to mislead people.

As lawmakers grapple with how to implement safeguards around AI, images like those of Perry and Rihanna, which were widely distributed on Instagram, raise questions about what social media companies are doing to identify and respond to this kind of content on their platforms.

"We welcome more research into AI and cross-platform inauthentic behavior since deceptive efforts rarely target only one platform,” Meta spokesperson Ryan Daniels told Yahoo News, responding to a question about the images of Perry and Rihanna. The platform eventually did label Perry’s repost of the AI image as an “altered photo.”

“The development of AI has just been so rapid,” said Bernhard Gademann, CEO of tech investment firm Pioneer Ventures and co-founder of Edu Smart Technologies, which aims to help teachers incorporate technology into the classroom.

Gademann predicts that AI will soon be so advanced that even the AI-driven detection tools platforms use to identify and label such content will struggle to keep up.
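To see why detection-based labeling is inherently probabilistic, consider a toy moderation pipeline. Everything below is a hypothetical sketch, not any platform’s actual system: the detector_score function is a placeholder, and the thresholds are illustrative. The point is that a classifier returns a confidence score, and the cutoff trades mislabeled real photos against AI images that slip through.

```python
# Toy sketch of threshold-based AI-image detection (hypothetical model;
# no platform's real pipeline). A classifier emits a probability that an
# image is synthetic, and the platform picks a cutoff for labeling.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    score: float  # estimated probability the image is AI-generated
    action: str   # what the platform does with the post

def detector_score(image_bytes: bytes) -> float:
    """Hypothetical stand-in for a trained detector.

    Real detectors look for generator artifacts (frequency patterns,
    warped textures, inconsistent lighting); as generators improve,
    those signals fade and scores drift toward the ambiguous middle.
    """
    # Placeholder heuristic so the sketch runs end to end.
    return min(1.0, (len(image_bytes) % 100) / 100)

def moderate(image_bytes: bytes, threshold: float = 0.8) -> ModerationResult:
    score = detector_score(image_bytes)
    if score >= threshold:
        return ModerationResult(score, "label as AI-generated")
    if score >= 0.5:
        return ModerationResult(score, "queue for human review")
    return ModerationResult(score, "no action")

print(moderate(b"\xff\xd8\xff" + b"\x00" * 57))  # fake JPEG header bytes
```

Lower the threshold and more real photos get mislabeled; raise it and ever-better generators slip underneath — exactly the failure mode Gademann anticipates.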

Earlier this month, TikTok announced it would start automatically labeling videos and images made with AI, using “Content Credentials” watermarking technology from the Coalition for Content Provenance and Authenticity (C2PA). The labeling does not yet apply to audio-only content.
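C2PA takes a different approach from detection: the generating tool attaches signed provenance metadata to the file at creation time, so a platform can label an upload by reading that metadata rather than guessing. The sketch below is a simplified illustration; read_c2pa_manifest is a hypothetical placeholder for a real C2PA SDK parser, and a production system would also verify the manifest’s cryptographic signature.

```python
# Simplified sketch of metadata-based labeling in the style of C2PA
# Content Credentials. read_c2pa_manifest is a hypothetical stub; a real
# implementation would use a C2PA SDK to parse and verify the signed
# manifest embedded in the file.

from typing import Optional

def read_c2pa_manifest(image_bytes: bytes) -> Optional[dict]:
    """Hypothetical parser stub.

    C2PA manifests live in a JUMBF container inside the file; here we
    simply pretend we decoded one when the 'c2pa' marker bytes appear.
    """
    if b"c2pa" in image_bytes:
        # IPTC's "trainedAlgorithmicMedia" is the digital-source-type
        # value used to mark generative-AI assets.
        return {"digital_source_type": "trainedAlgorithmicMedia"}
    return None

def label_for_upload(image_bytes: bytes) -> str:
    manifest = read_c2pa_manifest(image_bytes)
    if manifest is None:
        return "no credentials found; fall back to other detection"
    if manifest.get("digital_source_type") == "trainedAlgorithmicMedia":
        return "auto-apply 'AI-generated' label"
    return "credentials present; content not marked as AI-made"
```

The advantage over detectors is that the signal is attached at creation time by the generating tool; the limitation, which TikTok’s rollout shares, is that metadata can be stripped and only content from participating tools carries it.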

In April, Meta announced that it would begin labeling AI-generated content on Facebook and Instagram in May. The announcement came in response to criticism from the Oversight Board, an outside group funded by Meta, which had described the company’s AI policies as “incoherent” and “confusing.” The board weighed in after Meta declined to take down a seven-second altered video that made it appear President Biden was inappropriately touching a young woman.

In March, Google introduced a new tool in YouTube’s Creator Studio that requires creators to disclose when content has been altered or made with generative AI. Google said, however, that it would not require creators to flag “clearly unrealistic content” or “beauty filters or other visual enhancements.”

In January, explicit AI-generated photos of Taylor Swift circulated on X, where the search phrase “Taylor Swift AI” was trending. The concern was not whether the photos were real but the lack of laws protecting victims of AI-generated imagery, and the need for social platforms to set their own standards for flagging or removing such content.

One post sharing the Swift images, from a verified X user, was accessible for 17 hours and viewed more than 45 million times before X took it down. While X owner Elon Musk has sharply cut the platform’s moderation team since taking over the company in 2022, X’s rules still prohibit misleading synthetic media and nonconsensual nudity.

“It’s probably no coincidence that a certain [kind of] AI-generated content is particularly popular,” Gademann said, referring to nonconsensual AI-generated explicit images like those of Swift. “Before the age of AI and deepfakes, there used to be this saying, ‘Seeing is believing.’ But is seeing believing in the age of AI? That is the question.”

Swift isn’t the only person, famous or otherwise, who has been a victim of AI-generated pornography. Legal experts and victims, many of them teenage girls, have demanded that tech companies and government officials take action against the growing crisis.

While several bills have been introduced in Congress, including the AI Labeling Act and the DEFIANCE Act, there are currently no federal laws banning deepfake porn in the U.S.

With AI being so readily accessible and the technology rapidly improving, is a label enough?

“While there are some technological solutions [for AI], both in creation and in distribution, there is also this whole other dynamic of other areas in which outputs are very harmful to people,” Stanford researcher Renée DiResta told Yahoo News. “Nobody's really sure what they're going to do about that yet.”

One of Gademann’s concerns is that legislators will overreact and overcorrect for the current lack of legal guidelines around AI. To Gademann, the positives of the technology — which he cites as broad access to knowledge, the ability to produce new content easily and cheaply, and its potential in medicine — far outweigh the downsides, at least for now.

“This technology is generated by humans, it is maintained and administered by humans, its underlying source is data created by humans,” Gademann said. “I think it’s truly important to understand this difference, that [AI] is not a force acting on its own. It is humans who are in charge, which is why the call for accountability is so important.”