Godfather of AI tells '60 Minutes' he fears the technology could one day take over humanity

Geoffrey Hinton hails the benefits of artificial intelligence but also sounds the alarm on such things as autonomous battlefield robots, fake news and unintended bias in employment and policing.


Geoffrey Hinton, who has been called “the Godfather of AI,” sat down with 60 Minutes for Sunday’s episode to break down what artificial intelligence technology could mean for humanity in the coming years, both good and bad.

Hinton is a British computer scientist and cognitive psychologist, best known for his pioneering work on artificial neural networks, the framework underlying modern AI. He spent a decade working for Google before leaving in May 2023, citing concerns about the risks of AI.

Here is a look at what Hinton had to say to 60 Minutes interviewer Scott Pelley.

The Intelligence

After highlighting the latest concerns about AI to set up the segment, Pelley opened the Q&A with Hinton by asking him if humanity knows what it’s doing.

“No,” Hinton replied. “I think we’re moving into a period when for the first time ever, we have things more intelligent than us.”

Hinton expanded on that by saying he believes the most advanced AI systems can understand, are intelligent and can make decisions based on their own experiences. When asked if AI systems are conscious, Hinton said that due to a current lack of self-awareness, they probably aren’t, but that day is coming “in time.” And he agreed with Pelley’s take that, consequently, human beings will be the second-most intelligent beings on the planet.

After Hinton floated the idea that AI systems may be better at learning than the human mind, Pelley asked how that could be, given that AI was designed by people. Hinton corrected that notion.

“No, it wasn't. What we did was, we designed the learning algorithm. That’s a bit like designing the principle of evolution,” Hinton said. “But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t really understand exactly how they do those things.”

Robots in a Google AI lab were programmed merely to score a goal. Through AI, they trained themselves how to play soccer.

The Good

Hinton did say that some of AI's huge benefits have already been seen in healthcare, where it can recognize and interpret medical images and help design drugs. This is one of the main reasons Hinton looks on his work in such a positive light.

The Bad

“We have a very good idea sort of roughly what it’s doing,” Hinton said of how AI systems teach themselves. “But as soon as it gets really complicated, we don’t actually know what’s going on any more than we know what’s going on in your brain.”

That sentiment was just the tip of the iceberg of concerns surrounding AI, and Hinton pointed to one big potential risk as the systems get smarter.

“One of the ways these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about,” he said.

Hinton added that as AI takes in more and more information from things like famous works of fiction, election media cycles and everything in between, it will just keep getting better at manipulating people.

“I think in five years’ time it may well be able to reason better than us,” Hinton said.

That translates into risks like autonomous battlefield robots, fake news and unintended bias in employment and policing. Not to mention, Hinton said, “having a whole class of people who are unemployed and not valued much because what they used to do is now done by machines.”

The Ugly

To make matters worse, Hinton said he doesn’t really see a path forward that totally guarantees safety.

“We’re entering a period of great uncertainty where we’re dealing with things we’ve never done before. And normally the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things.”

When pressed by Pelley on whether that means AI may one day take over humanity, Hinton said, “Yes, that’s a possibility. I’m not saying it will happen. If we could stop them ever wanting to, that would be great. But it’s not clear we can stop them ever wanting to.”

So what do we do?

Hinton said this could be a turning point, one where humanity may have to decide whether to develop these things further and how people should “protect themselves” if they do.

“I think my main message is, there’s enormous uncertainty about what’s going to happen next,” Hinton said. “These things do understand, and because they understand we need to think hard about what’s next, and we just don’t know.”

Pelley reported that Hinton has no regrets about the work he’s done, given AI’s potential for good, but believes now is the time to run more experiments to understand the technology, to impose regulations and to enact a world treaty banning the use of military robots.

60 Minutes airs Sundays on CBS; check your local listings.