“Should We Be Worried?”: Tech Geniuses Fret That A.I. Is Gonna Kill Us All—Soonish

According to one author, our civilization has only 730 years left. Sorry. But you’ve been warned.

Earlier this week, I attended a dinner party hosted by the Edge Foundation, an association that fosters conversation among science and tech intellectuals, at which I found myself seated among a group of physicists, astrophysicists, historians, philosophers, technologists, and futurists of various stripes and pedigrees. Given the august and diverse group, I decided to ask a provocative question that seemed likely to bridge their manifold disciplines, and possibly elicit some tragicomic debate, too. In short: would artificial intelligence lead to the end of the human race? This was not an idle query. Over the last decade, we’ve given over our lives to computer networks and smart devices. Every aspect of human civilization, from our phones and farms to electric grids, stock markets, cars, and missile-guidance systems, has been intertwined with lines of code. The great infrastructure of our existence has never been so vulnerable to manipulation by outside forces—or, one day soon, perhaps, to manipulating itself.

I was stunned by two aspects of their response. First, no one at the table appeared taken aback, surprised, or flummoxed by the question. Second, they all answered immediately, and in unison, “yes,” as if singing the same low note in a choir. This unanimous answer was followed by a moment of self-reflective silence around the table, as if the inevitable had already happened and we were now paying cerebral tribute to those who had perished at the end of civilization. I wasn’t sure how to respond myself, so I asked someone to explain further. Were we really going to be killed by technology? I then launched into a number of frenzied follow-ups: How? When? A historian and philosopher sitting to my left, while scooping fried Brussels sprouts onto his plate, offered a pithy affirmation as the discussion turned to the possibility that a future A.I. could go rogue and wipe us out. “Yes,” he said, passing the sprouts my way.

As of this moment, it’s still unclear precisely how we might destroy ourselves. Nevertheless, it is something that the country’s smartest minds are rightly consumed by. And many, it appears, are pessimistic about the possible outcomes. We are not far from the day when we could build armies of sophisticated robots—or drones, or nanorobots—that could be unleashed on the world. Nor is it a leap to imagine how these robots could intentionally be designed to destroy, or how seemingly “good” A.I. (programs designed to help humans) could be turned “bad” by rogue hackers who might, say, instruct swarms of UPS-delivery drones or robotic dog-walkers or fleets of driverless taxis to mercilessly punish us. (These are among the nightmares that haunt Elon Musk, Silicon Valley’s leading A.I. doomsayer, who in 2015 co-founded the nonprofit OpenAI to help research safeguards as the technology progresses. “With artificial intelligence, we are summoning the demon,” Musk told Vanity Fair last year. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”)

One common fear among technologists is what could happen if Vladimir Putin or Kim Jong Un used software to shut off our power grid, crippling our food supply, our heat, and our logistical networks. It’s hardly a hypothetical concern: in December, the White House said that North Korea had been behind a ransomware infection that temporarily brought the British health system to its knees. On Thursday, the Trump administration accused Russia of having infiltrated American nuclear power plants and water and electric systems, giving hackers the ability to shut them down at will. A 2008 congressional report predicted that if the power were to go out, 90 percent of the population would die within a year, or at best (or worst, depending on how you see it) two, unable to survive off the land without running water and electricity.

Or, on a seemingly more prosaic level, what if an adversarial foreign government made the fake news that infected the 2016 cycle seem quaint by comparison? One day, after all, technology may allow a hostile nation to create fake content that perfectly depicts, perhaps, a CNN or Fox News telecast (down to the anchors’ facial tics) instructing American citizens that a revolution is under way in the streets, and that we must join it or die, overthrow our own government, and learn Russian in order to survive. The possibilities are limitless, as we witnessed earlier this year in Hawaii, when a false ballistic-missile alert, pushed to every smartphone in the state, sent thousands of people rushing for their basements. As the scientist puts it in Contact, the film adaptation of the Carl Sagan novel, when passing the cyanide pill to Jodie Foster: “There are 1,000 reasons we can think of to have this with you, but mostly it’s for the reasons I can’t think of.”

There are even scarier prophecies. As Jeff Orlowski, the Oscar-shortlisted filmmaker, told me on the Inside the Hive podcast recently, we are very likely in the midst of the sixth extinction (also known as the Holocene extinction), an event that could lead to a “biological annihilation” of wildlife on Earth, including the extinction of the entire human race, along with most other life on this planet. Not to worry, though: Earth has been through five mass extinctions before. We might not survive, but the pale blue dot on which we’re screaming through space probably will. (Previous extinctions have killed off as much as 90 percent of the species on the planet, only to see new ones form millions of years later.) Some people suggest a meteor could hit Earth; others think we’re going to be killed off by inhabitants of other planets, whose technology and manifest destiny outpace our own.

What’s more terrifying than all these possible outcomes, however, is how close we may be to the fatal misstep or misfortune that brings the whole messy experiment to an end. William Poundstone, the well-known skeptic and author of more than a dozen books, including a biography of Sagan, is currently writing a new book about this very topic based on mathematical equations, and he had a rather epigrammatic answer about how it’s all going to go down. Poundstone recently told me that there are currently two theories in the scientific community about how long our civilization will last, based on how quickly we manage to get our act together. In short, we have either 20 years or a million. But, in his opinion, the most accurate prediction is that we have about 730 years of humans left on Earth. In other words, roughly 25 to 30 generations. So some far-off descendants of yours could end up being the last living humans in the universe. Gulp.

How did Poundstone come to that number? He cited the work of the physicist J. Richard Gott III, who devised a famous formula, a variation on the Copernican Principle sometimes called Gott’s Principle, which predicts how much longer something will last based on how long it has already existed, on the assumption that we are not observing it at a special moment in its lifetime. So far, Gott’s Principle has been used to predict when the Berlin Wall would fall, the runs of dozens of Broadway shows (with 95 percent accuracy), the lifetimes of corporations, and now, how long we will survive as a species.
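For readers who want to see the arithmetic, here is a minimal sketch of the simplest form of the argument, the “delta t” version Gott published in Nature in 1993. The 200,000-year age of our species and the 95 percent confidence level below are illustrative assumptions on my part, not Poundstone’s actual inputs; his 730-year figure comes from a more involved variant of the reasoning.

```python
def gott_interval(age, confidence=0.95):
    """Gott's 'delta t' argument: if we are observing something at a
    random moment in its lifetime, then with the given confidence its
    remaining lifetime falls between the two bounds returned here."""
    low = age * (1 - confidence) / (1 + confidence)   # age / 39 at 95 percent
    high = age * (1 + confidence) / (1 - confidence)  # age * 39 at 95 percent
    return low, high

# Illustrative assumption: Homo sapiens is roughly 200,000 years old.
# The formula then allows between about 5,100 and 7.8 million more years.
print(gott_interval(200_000))

# The Berlin Wall was 8 years old when Gott saw it in 1969; the formula
# allowed between roughly 2.5 months and 312 more years. It fell 20 years
# later, comfortably inside the interval.
print(gott_interval(8))
```

The logic is simply that a randomly timed observer has a 95 percent chance of arriving during the middle 95 percent of a lifespan, which pins the remaining duration between one-39th and 39 times the elapsed one.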

Shortly before his death in 1955, Albert Einstein wrote about the growing destructive possibilities of war, specifically those surrounding atomic weapons, and the need to eradicate militarism and nationalism on Earth if our species hoped to survive. “As long as there are sovereign nations possessing great power, war is inevitable,” Einstein wrote, aptly predicting that, as technology advances, the potential for destruction only grows. “That is not an attempt to say when it will come, but only that it is sure to come. That was true before the atomic bomb was made. What has changed is the destructiveness of war.” Stephen Hawking, who passed away this week, echoed the sentiment with regard to artificial intelligence. “A.I. could be the worst invention in the history of our civilization, that brings dangers like powerful autonomous weapons or new ways for the few to oppress the many,” Hawking said last year. “A.I. could develop a will of its own, a will that is in conflict with ours and which could destroy us. In short, the rise of powerful A.I. will be either the best or the worst thing ever to happen to humanity.” If the smartest minds in the world keep saying this, maybe we should be listening?

Kevin Kelly, a scientist, artist, and author, joined me on this week’s Inside the Hive podcast to try to ease my concerns about my progeny being wiped out by a few lines of code. His theory is that Hollywood tropes about the end of the world have colored our thinking about what could go wrong, and that, in reality, none of it will ever come to fruition. “I think there are plenty of things to worry about with A.I., but they are all very close. [It’s] not this far-away thing of them taking over and killing us all,” he said. “Should we be worried about the things Elon Musk says we should be worried about? Yes, there are some people who should worry about it.” But not everyone should sit there rocking back and forth, stressing about a robot knocking down their front door. Kelly likened the threat level to that of an asteroid hitting Earth: while it could happen, it’s mathematically unlikely. “I would say the same thing about robots taking over. There is a chance greater than zero that it could happen, and therefore, we should have some people thinking about this, but it’s so unlikely that we don’t really need to have national policy or even corporate policy or even research policy based on that now.” Let’s hope he’s right. I guess we have about 730 years to find out.