Google and Bing under fire for promoting nonconsensual deepfake porn, as AI continues to brew more trouble

Deepfake Porn on Google and Microsoft Bing.

What you need to know

  • According to research by NBC News, deepfake porn is on the rise and is featured among the top results when conducting searches on Google and Microsoft's Bing.

  • There's no foolproof way to prevent this phenomenon from recurring, as neither search engine has elaborate measures or policies in place to bring the issue under control.

  • Microsoft Copilot refused to show deepfake porn and called the practice unethical, yet it still provided several links and examples pointing the user directly to the explicit content.


As Google and Microsoft Bing continue competing for a bigger slice of the search market, a crisis is brewing that could hurt both search engines.

According to a report by NBC News, deepfake pornography is on an upward trend and is ranking high on some of the top search engines. For context, deepfake pornography superimposes a famous person's face onto performers in adult films, making it appear as if they were featured in the production.

Per the outlet's analysis and investigation, deepfake pornographic images featuring female celebrities ranked high on Google and other search engines when searching for most female celebrities' names combined with the word "deepfake," as well as phrases like "deepfake porn" and "fake nudes." NBC further noted that safe-search tools were turned off while conducting the investigation.

Looking further into this matter, the outlet searched Google and Bing for 36 famous female celebrities' names combined with the word "deepfake." Out of the 36 combinations, 34 surfaced nonconsensual deepfake images and links to videos among Google's top results; Bing surfaced them for 35. The outlet established that most of the deepfake images and videos came from a single popular website that is well known for fabricating nonconsensual deepfake images and adult films.


While searching for "fake nudes" on Microsoft's Bing, the results featured dozens of nonconsensual deepfake tools and websites, alongside an article detailing the harm and damage the practice can cause.

As you might be aware, Microsoft has incorporated its fully fledged AI assistant, Microsoft Copilot, into Bing, so naturally it also popped up during the searches. The chatbot categorically indicated that it wouldn't show any deepfake porn, sharing the following sentiment:

"The use of deepfakes is unethical and can have serious consequences."

However, the AI tool still listed several links and examples that would direct the user to the deepfake porn and images (just a click away). Google commands the largest chunk of the search market, yet it doesn't seem to have elaborate measures and policies in place to stem the flood of deepfakes on the web, though its policies for search features like knowledge panels do prohibit manipulated media and explicit content. In a statement, Google said:

"We understand how distressing this content can be for people affected by it, and we're actively working to bring more protections to Search. Like any search engine, Google indexes content that exists on the web, but we actively design our ranking systems to avoid shocking people with unexpected harmful or explicit content that they aren't looking for. As this space evolves, we're in the process of building more expansive safeguards, with a particular focus on removing the need for known victims to request content removals one by one."

It's worth noting that Google has a streamlined platform where people featured in deepfakes can file reports and request that the explicit content they appear in be pulled down from the web.

Deepfakes continue to infest the web with the prevalence of AI

Bing AI image of a robot stopping a person from using the computer

With the emergence of generative AI, deepfakes are more widespread than ever. Microsoft, for its part, has highlighted a plan to protect election processes from AI deepfakes by empowering voters with "authoritative" and factual election news on Bing ahead of the 2024 polls.

President Biden's administration issued an Executive Order designed to place guardrails around AI and prevent the technology from spiraling out of control. While the order addresses some user concerns about the technology (especially regarding safety and privacy), accuracy remains a pressure point.

In December, a report surfaced online indicating that Microsoft Copilot misinformed users by generating false information about the forthcoming elections. The researchers behind the study indicated that the issue was systemic, with similar occurrences spotted when using the chatbot to learn more about elections in Germany and Switzerland.

Image generation tools like Bing Image Creator and Midjourney are quickly gaining popularity among users, and they keep getting better at generating images. For instance, Bing Image Creator, which gained support for OpenAI's DALL-E 3 technology, now generates more lifelike images. However, users have complained about slow generation speeds and the service being "lobotomized."

While Microsoft has established some control over its image generation tool, it's hard to predict how things will pan out, especially after OpenAI's recent launch of the long-awaited GPT Store. We're likely to see more deepfakes hit the web if elaborate guardrails aren't put in place to prevent such occurrences.