Why a fake Pope picture could herald the end of humanity

AI-generated fake image of the Pope in white puffer jacket fooled the internet - Pablo Xavier

For a moment the internet was fooled. An image of Pope Francis in a gleaming white, papal puffer jacket spread like wildfire across the web.

Yet the likeness of the unusually dapper 86-year-old head of the Vatican was a fake. The phoney picture had been created as a joke using artificial intelligence technology, but was realistic enough to trick the untrained eye.

AI fakes are quickly spreading across social media as the popularity of machine-learning tools surges. As well as invented images, chatbot-based AI tools such as OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing have been accused of creating a new avenue for misinformation and fake news.

These bots, trained on billions of pages of articles and millions of books, can provide convincing-sounding, human-like responses, but often make up facts - a phenomenon known in the industry as “hallucinating”. Some AI models have even been taught to code, raising the possibility they could be used for cyber attacks.

On top of fears about false news and “deep fake” images, a growing number of future-gazers are concerned that AI is becoming an existential threat to humanity.

Scientists at Microsoft last month went so far as to claim one model, GPT-4, had “sparks of… human-level intelligence”. Sam Altman, chief executive of OpenAI, the US start-up behind the ChatGPT technology, admitted in a recent interview: “We are a little bit scared of this”.

Now, a backlash against so-called “generative AI” is brewing as Silicon Valley heavyweights clash over the risks, and potentially infinite rewards, of this new technological wave.

A fortnight ago, more than 3,000 researchers, scientists and entrepreneurs including Elon Musk penned an open letter demanding a six-month “pause” on training AI systems more powerful than OpenAI’s most advanced chatbot technology, GPT-4, a so-called “large language model”, or LLM.

Elon Musk - Taylor Hill/Getty Images

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the scientists wrote. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Musk and others fear the destructive potential of AI: that an all-powerful “artificial general intelligence” could pose profound dangers to humanity.

But their demand for a six-month ban on developing more advanced AI models has been met with scepticism. Yann LeCun, chief AI scientist at Facebook’s parent company Meta, compared the attempt to the Catholic Church trying to ban the printing press.

“Imagine what could happen if the commoners get access to books,” he said on Twitter. “They could read the Bible for themselves and society would be destroyed.”

Others have pointed out that several of the letter’s signatories have their own agendas. Musk, who has openly clashed with OpenAI, is looking to develop his own rival project. At least two signatories were researchers at DeepMind, which is owned by Google and working on its own AI bots. AI sceptics have warned the letter buys into the hype around the latest technology, citing statements such as: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

Many of the AI tools currently in development are effectively black boxes, with little public information on how they actually work. Despite this, they are already being incorporated into hundreds of businesses. OpenAI’s ChatGPT, for instance, is being harnessed by the payments company Stripe, Morgan Stanley bank and buy-now-pay-later company Klarna.

Richard Robinson, founder of legal start-up RobinAI, which is working with $4bn AI company Anthropic, another OpenAI rival, says even the builders of large language models don’t fully understand how they work. However, he adds: “I also think that there’s a real risk regulators will overreact to these developments.”

Government watchdogs are already limbering up for a fight with AI companies over privacy and data worries. In Europe, Italy has threatened to ban ChatGPT over claims it has scraped information from across the web with little regard for consumers' rights.

Italy’s privacy watchdog said the bot had “no age verification system” to stop access by under-18s. Under European data rules, the regulator can impose fines of up to €20m (£18m) or 4pc of OpenAI’s global annual turnover, whichever is higher, unless it changes its data practices.

In response, OpenAI said it had stopped offering ChatGPT in Italy. “We believe we offer ChatGPT in compliance with GDPR and other privacy laws,” the company said.

France, Ireland and Germany are all examining similar regulatory crackdowns. In Britain, the Information Commissioner’s Office said: “There really can be no excuse for getting the privacy implications of generative AI wrong.”

However, while the privacy watchdog has raised red flags, the UK has stopped short of threatening to ban ChatGPT. In an AI regulation white paper published earlier this month, the Government decided against creating a dedicated AI regulator, leaving oversight to existing watchdogs instead. Edward Machin, a lawyer at the firm Ropes & Gray, says: “The UK is striking its own path, it is taking a much lighter approach.”

Several AI experts told The Telegraph that the real concerns about ChatGPT, Bard and others were less about the long-term threat of some kind of killer, all-powerful AI than about the damage the technology could do in the here and now.

Juan José López Murphy, head of AI and data science at tech company Globant, says there are near-term issues with helping people spot deep fakes or false information generated by chatbots. “That technology is already here… it is about how we misuse it,” he says.

“Training ChatGPT on the whole internet is potentially dangerous due to the biases of the internet,” says computer scientist Dame Wendy Hall. She suggests calls for a moratorium on development would likely be ineffective, since China is rapidly developing its own tools.

OpenAI appears alive to the possibility of a crackdown. On Friday, it posted a blog which said: “We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted.”

Marc Warner, of UK-based Faculty AI, which is working with OpenAI in Europe, says regulators will still need to plan for the possibility that a super-powerful AI may be on the horizon.

“It seems general artificial intelligence might be coming sooner than many expect,” he says, urging labs to stop the rat race and collaborate on safety.

“We have to be aware of what could happen in the future, we need to think about regulation now so we don’t develop a monster,” Dame Wendy says.

“It doesn’t need to be that scary at the moment… That future is still a long way away, I think.”

Fake images of the Pope might seem a long way from world domination. But, if you believe the experts, the gap is starting to shrink.
