Should we be scared of hidden messages in AI optical illusions?

An AI-generated optical illusion that appears to show the McDonald's logo in an anime illustration.

Just when you think AI art couldn't get any more controversial, along comes a new scare story. Not content with stealing artists' work and then using it to steal their jobs, AI art is now going to brainwash us with subliminal messages hidden in optical illusions.

That's the fear, anyway. A trend for creating optical illusions using AI art generators has led some to worry that companies, politicians and other powers that be could use the technology to influence our thoughts and sell us their products (see our pick of the best AI art generators for examples of the tools they might be using).

Some of the AI optical illusions we've seen recently have been slightly mesmerising, but some people are concerned that they could also be dangerous. "Many talk about the dangers of 'AGI' taking over humans. But you should worry more about humans using AI to control other humans," Cocktail Peanut wrote in a post on Twitter, providing the example of the McDonald's logo embedded in an anime-style AI-generated illustration.

The first example wasn't very subtle. But Peanut followed up with less obvious optical illusions, all made using a Stable Diffusion-powered Hugging Face space called Diffusion Illusion HQ, created by Angry PenguinPNG.

The workflow for making the illusions, using Monster Labs' QR Control Net, was apparently discovered by accident. The ControlNet technique lets users supply extra inputs, such as specific images or words, to gain more control over AI image generations. Monster Labs' tool was created so that QR codes could be used as input, with the AI generating scannable but artistic QR codes as output, but users discovered it could also be used to hide patterns or words in AI-generated scenes.
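
To give a rough idea of how this kind of workflow fits together, here's a minimal sketch using the open-source diffusers library with a QR-style ControlNet. The model repo IDs, the pattern file name, the prompt and the conditioning scale are all assumptions for illustration, not the exact settings used by Diffusion Illusion HQ.

```python
# Sketch: hiding a high-contrast pattern (logo, word, QR code) in a generated scene
# by conditioning Stable Diffusion with a QR-style ControlNet via diffusers.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load a ControlNet trained on QR-code-like patterns (assumed Hugging Face repo id)
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)

# Pair it with a Stable Diffusion 1.5 base model
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The "hidden" pattern: any high-contrast image (hypothetical local file)
pattern = load_image("logo_pattern.png")

# The conditioning scale controls how strongly the pattern shows through:
# lower values bury it in the scene, higher values make it obvious.
image = pipe(
    prompt="an anime-style illustration of a bustling city street at dusk",
    image=pattern,
    controlnet_conditioning_scale=1.1,
    num_inference_steps=30,
).images[0]
image.save("illusion.png")
```

In this setup the ControlNet nudges the composition towards the shapes in the pattern image while the prompt decides what the scene actually depicts, which is why the result can read as an ordinary illustration until you squint.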

But for all the concerns about brainwashing, it's not the first time we've been here. There were similar scares about subliminal messages in television advertising and in films back in the day. An AI model can't do anything that wasn't already possible with VFX, CGI or even relatively simple Photoshop. It's just making it a lot easier and less expensive.

The fear seems to be that it will now be so cheap and easy that nefarious parties will seek to influence us through every image. But I'm not convinced that's going to happen, mainly because it's not at all clear that hidden messages work.

It was a market researcher called James Vicary who claimed back in 1957 to have invented subliminal advertising by flashing imperceptible messages like 'Drink Coca-Cola' and 'Eat popcorn' to cinemagoers. The public was appalled and the technique was banned, but Vicary turned out to have lied about the results. The cinema where he conducted his experiment didn't see any rise in sales.

Since then, few experiments have explored the area. A 2005 study by researchers from Utrecht University and Radboud University did find some evidence that subliminal messages could boost preference for Lipton Ice among people who were already thirsty, but it was conducted under highly controlled conditions that aren't representative of the real world.

Despite the enduring fear, subliminal messaging isn't a technique that many brands, or anyone else, seem to have taken much interest in. With the images being made using AI, people might see a McDonald's logo or they might not. But in either case, I don't think they're going to rush to order a Big Mac as a result (if you want to make your own to test it out, see our round-up of AI art tutorials).