A Scientist Says AI Could Kill Us in About 200 Years

  • A researcher and SETI leader has quantified his belief that AI must be regulated, and soon.

  • The Drake Equation is a speculative framework for estimating how many other civilizations are out there.

  • These civilizations may die out from their own runaway AI after about 100-200 years, he says.


In a provocative new paper, an English scientist wonders if so-called “artificial intelligence” (AI) is one reason why we’ve never found other intelligent beings in the universe. The Fermi Paradox captures this idea: in a universe so vast and so old, how can it be that we’ve never detected another civilization sending us radio signals? Could it be that, once civilizations develop AI, it’s a short ride into oblivion for most of them? This kind of civilization-ending barrier is called a Great Filter, and runaway AI is one of the most popular candidates. Could the development of AI by technological civilizations explain the otherwise improbable Great Silence we hear from the universe?

Michael Garrett is a radio astronomer at the University of Manchester and the director of the Jodrell Bank Centre for Astrophysics, with extensive involvement in the Search for Extraterrestrial Intelligence (SETI). Basically, while his research interests are eclectic, he’s a highly qualified and credentialed version of the people in TV shows or movies who are listening to the universe to hear signs of other civilizations. In this paper, which was peer reviewed and published in the journal of the International Academy of Astronautics, he compares theories about artificial superintelligence against concrete observations using radio astronomy.

Garrett explains in the paper that scientists grow more and more uneasy the longer we go without hearing any sign of other intelligent life. “This ‘Great Silence’ presents something of a paradox when juxtaposed with other astronomical findings that imply the universe is hospitable to the emergence of intelligent life,” he writes. “The concept of a ‘great filter’ is often employed – this is a universal barrier and insurmountable challenge that prevents the widespread emergence of intelligent life.”

There are countless potential Great Filters, from climate extinction to (gulp!) a pernicious global pandemic. Any number of events could stop a global civilization from going multiplanetary. For people who take Great Filter theories most seriously, settling humans on Mars or the moon represents a way to reduce that risk. (It will be a long time, if ever, before we have the technology to make these settlements sustainable and independent.) The longer we remain on Earth alone, the more likely a Great Filter event is to wipe us out, the thinking goes.

Today, AI is not capable of anything close to human intelligence. But, Garrett writes, it is already doing work that people previously didn’t believe computers could do. If this trajectory leads to so-called general artificial intelligence (GAI), a key distinction meaning an algorithm that can reason and synthesize ideas in a truly human way, paired with enormous computing power, we could really be in trouble. And in this paper, Garrett follows a chain of hypothetical ideas to one possible conclusion: how long would it take for a civilization to be wiped out by its own unregulated GAI?

Unfortunately, in Garrett’s scenario, it takes just 100-200 years. Coding and developing AI is a single-minded project, driven and accelerated by data and processing power, he explains, compared with the messy, multidomain work of space travel and settlement. We see this split today in the flow of researchers into computing fields compared with the shortage in the life sciences. Every day on Twitter, loud billionaires talk about how great and important it is to settle Mars, but we still don’t know how humans will even survive the journey without being shredded by cosmic radiation. Pay no attention to that man behind the curtain.

There are some big caveats, or at least things to keep in mind, about this research. Garrett steps through a series of specific hypothetical scenarios, and he relies on huge assumptions. He assumes there is other life in the Milky Way, and that AI and GAI are “natural developments” of these civilizations. He also leans on the already hypothetical Drake Equation, a way to quantify the likely number of other communicating civilizations, which includes several variables whose values we can only guess at.
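For reference, here is a minimal sketch of the classical Drake Equation; Garrett’s paper works from a variant of it, and his exact modifications aren’t reproduced here:

```latex
% Classical Drake Equation (a sketch; Garrett's paper builds on a modified variant).
% N   : expected number of detectable civilizations in the Milky Way
% R_* : average rate of star formation in the galaxy
% f_p : fraction of stars that host planets
% n_e : habitable planets per star that has planets
% f_l : fraction of those planets that develop life
% f_i : fraction of those that develop intelligent life
% f_c : fraction of those that produce detectable signals
% L   : average length of time a civilization remains detectable
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

The last factor, L, is the one Garrett’s argument squeezes: if unregulated AI cuts a civilization’s detectable lifetime to a century or two, N collapses.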

The mixed hypothetical argument arrives at one firm conclusion, though: the need for strong and continuous regulation of AI. Garrett points out that the nations of Earth are already in a productivity race, afraid of falling behind if they pause to regulate. Some misguided futurists also have the odd idea that they can head off a dangerous GAI by simply developing a morally good one faster, the better to control it, an argument that makes no sense.

In Garrett’s model, these civilizations have just a couple hundred years in their AI eras before they blip off the map. Over interstellar distances and the very long course of cosmic time, such a tiny window means the odds of overlapping with our own searches are nearly nil. In the Drake Equation, the expected number of detectable civilizations falls toward zero, which, he says, fits with SETI’s current success rate of 0 percent. “Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilisation but all technical civilisations,” Garrett explains.
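To see why such a short window matters, here is a minimal numerical sketch of the classical Drake Equation, using purely illustrative parameter values that are not taken from Garrett’s paper:

```python
# Minimal sketch of the classical Drake Equation, with purely illustrative
# parameter values (these are NOT the values used in Garrett's paper).

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative guesses for every factor except L (assumptions for this sketch).
common = dict(R_star=1.5,  # new stars per year in the Milky Way
              f_p=1.0,     # fraction of stars with planets
              n_e=0.2,     # habitable planets per star with planets
              f_l=0.5,     # fraction of those that develop life
              f_i=0.1,     # ...that develop intelligent life
              f_c=0.1)     # ...that become detectable

print(drake(**common, L=200))        # ~0.3: an AI-limited, few-century window
print(drake(**common, L=1_000_000))  # ~1500: civilizations that endure
```

With those made-up numbers, a 200-year detectable lifetime leaves well under one civilization broadcasting at any given moment, while a million-year lifetime leaves over a thousand.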
