Ex-Google CEO Eric Schmidt predicts AI data centers will be ‘on military bases surrounded by machine guns’

Eric Schmidt appears on a panel with an inset of AI

Google’s former chief executive officer, Eric Schmidt, predicted that the most powerful artificial intelligence systems in the US and China will be housed on military bases surrounded by machine guns.

“Eventually, in both the US and China, I suspect there will be a small number of extremely powerful computers with the capability for autonomous invention that will exceed what we want to give either to our own citizens without permission or to our competitors,” Schmidt told Noema Magazine in an interview that was published on Tuesday.

Former Google CEO Eric Schmidt said that advanced AI computers will be housed on military bases in the US and China. Getty Images for TIME

The former Google boss, who led the search giant from 2001 to 2011, said that AI systems will gain knowledge at such a rapid pace within the next few years that they will eventually “start to work together.”

Schmidt, whose net worth has been valued by Bloomberg Billionaires Index at $33.4 billion, is an investor in the Amazon-backed AI startup Anthropic.

He said that the proliferation of AI knowledge in the next few years poses challenges to regulators.

“Here we get into the questions raised by science fiction,” Schmidt said.

He identified AI “agents” as “large language model[s] that can learn something new.”

“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there,” according to Schmidt.

Schmidt warned that regulators will have a hard time mitigating risks inherent in the rapid advancement of AI within the next few years. REUTERS

“So, there will be lots and lots of agents running around and available to you.”

He then pondered the consequences of agents “develop[ing] their own language to communicate with each other.”

“And that’s the point when we won’t understand what the models are doing,” Schmidt said, adding: “What should we do? Pull the plug?”

“It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand,” the 69-year-old former executive said. “That’s the limit, in my view.”

Schmidt said that “a reasonable expectation is that we will be in this new world within five years, not 10.”

He added that tech companies have been working with Western governments on regulating the new technology.

Schmidt said that Western AI companies are “well-run” and have “exposure to lawsuits” — factors that he argued minimize risk.

“It is not as if they wake up in the morning saying, let’s figure out how to hurt somebody or damage humanity,” he said.

But Schmidt warned that “there are evil people” in the world who “will use your tools to hurt people.”

Schmidt, 69, is said to be worth more than $33 billion. POOL/AFP via Getty Images

“All technology is dual use,” he said. “All of these inventions can be misused, and it’s important for the inventors to be honest about that.”

Schmidt said that the problem of spreading misinformation through AI as well as deepfakes is “unsolvable.”

“There are some ways regulation can be attempted. But the cat is out of the bag, the genie is out of the bottle,” he said.

Last year, a group of tech leaders from OpenAI, Google DeepMind, Anthropic and other labs warned that future AI systems could pose a threat to humanity that would be deadlier than pandemics and nuclear weapons.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the statement by the nonprofit Center for AI Safety read.