Is Skynet coming? AI experts explain what 'Terminator 2' got right and wrong — and how the film 'influenced the direction of research significantly.'

Three decades after its release, the James Cameron-Arnold Schwarzenegger sci-fi classic helped inspire ChatGPT.

(Illustration by Kyle McCauley for Yahoo / Photo: Everett Collection)

More than three decades after blowing up theaters, James Cameron’s Terminator 2: Judgment Day remains not only a cinematic milestone but also a prophetic vision for some — including star Arnold Schwarzenegger — who believe the sci-fi blockbuster may have spurred the evolution of modern artificial intelligence.

Released on July 3, 1991, the sequel to Cameron’s 1984 action classic saw the return of Schwarzenegger as a T-800 cybernetic organism (a machine comprising both organic and biomechatronic body parts) sent back in time from a future where an AI defense network called Skynet has become self-aware and promptly starts a nuclear war to wipe out humanity.

The T-800's mission: to protect a young John Connor (Edward Furlong), the future leader of the human resistance, from a more advanced Terminator, the shape-shifting T-1000 (Robert Patrick), dispatched to assassinate the teenager. Joined by John's mother, Sarah Connor (Linda Hamilton), the trio sets out to destroy Skynet in present-day 1991 before it's too late.

Linda Hamilton as Sarah Connor, Edward Furlong as John Connor (background) and Arnold Schwarzenegger as the T-800 Terminator in James Cameron's Terminator 2: Judgment Day. (Photo: Paramount Pictures via Getty Images)

The plot may have sounded far-fetched 30 years ago, but with the rise of AI tools like ChatGPT and machine-learning frameworks like TensorFlow and PyTorch, we couldn't help but wonder: How close is modern AI to becoming Skynet?

Here's what some of the leading minds in artificial intelligence had to say.

Where did Skynet go wrong?

As depicted in the film, Skynet is an AI system created for the U.S. military to control the nation's nuclear arsenal and defense network, allowing for faster and more efficient responses to international military threats. However, once the system gains the ability to learn and improve upon itself, it becomes self-aware. When its creators attempt to deactivate it, Skynet views humanity as an enemy and decides to trigger a nuclear holocaust to eliminate the threat.

According to AI experts, Skynet did exactly what it was designed to do: eliminate threats. When faced with the possibility of extinction, the system went into survival mode, much as a human or any other organism would. And because its creators built Skynet with virtually no ethical limitations, the system, as designed, set out to eradicate its perceived enemy.

As seen in the film, a Series 800 Terminator holds an M95A1 Phased Plasma Rifle in the year 2029, when machines have begun to take over the world. (Photo: Paramount Pictures via Getty Images)

That won't happen anytime soon, assures Yulin Wang, a technology analyst at IDTechEx specializing in robotics, who says industry leaders view such storylines as examples of what not to do.

“Certain movies, such as T2, have indirectly contributed to driving the establishment of regulations surrounding the use and commercialization of AI,” he tells Yahoo Entertainment. “This is one of the impacts on society caused by sci-fi movies.”

Indeed, Elon Musk and other tech leaders have advocated for a pause in the development of systems more powerful than GPT-4, OpenAI's latest model at the time. In May, OpenAI CEO Sam Altman spoke to Congress about considering similar measures, while the European Parliament advanced AI regulations in June.

“Considering the rapid evolution of technology surpassing any other historical precedent, the development of AI can progress swiftly,” Wang says. “It is not inconceivable to envision the emergence of a system comparable in complexity to Skynet within the next two decades.”

Can AI turn against us?

Jürgen Schmidhuber, a German computer scientist often billed as the “Father of Modern AI,” says the last thing AI systems want is to harm humans — for now.

“Hollywood likes AIs that enslave humans. That’s silly,” he tells Yahoo Entertainment. “A supersmart AI that can quickly build superhuman robots to satisfy all of its goals won’t be interested in enslaving humans, just like humans are not interested in enslaving cactuses.”

To that end, he admits, scientists are creating machines so advanced that they will eventually require strict regulations to limit their scope. One thing is certain, though: Today’s machines are built to help humans, not replace them.

“My lab has published AIs based on artificial neural networks that aren’t just slavishly imitating humans, but set themselves their own goals,” he says. “Like babies and scientists, they invent their own experiments to figure out how the world works, and what can be done in it. Without this freedom, they would not become more and more general problem solvers.”

Alex J. Champandard, mastermind behind the largest online hub for AI in games, who worked for years as a senior AI programmer at Rockstar, says the depiction of machines in film and video games can distort public perception of the technology.

"T2 has influenced the direction of AI research significantly," he says. "There's a strong irrational fear of [AI], but real life is not quite as dramatic. Turns out, the Terminator will just take our jobs."

What did T2 get right about AI?

Unlike in prior decades, autonomous systems and weapons can now select and engage targets without human intervention, similar to scenes in T2 where the Terminators use tools like facial recognition to locate allies and enemies in a crowd. They gain additional knowledge about their environment by using sight, hearing, taste, touch and smell — much like humans do.

As Wang explains, similar humanoids are being developed by Tesla as well as companies in Japan, Saudi Arabia and China. But equipping them with human qualities, like that of the Terminator, remains a challenge.

Spectators look at Tesla's "Optimus" humanoid robot at the 2023 World Artificial Intelligence Conference. The humanoid is equipped with the same fully autonomous computer and visual neural network system as Tesla cars. It can also use motion capture to "learn" humans. (Photo: CFOTO/Future Publishing via Getty Images)

“It is well known that humans possess inherent flaws, including behaviors such as lying and cheating,” he says. “If these negative human traits were to be learned by robots equipped with immensely powerful computational and physical capabilities, the consequences could indeed be disastrous. Conversely, if robots were to acquire and appropriately utilize positive characteristics such as friendliness, helpfulness, and kindness, the potential benefits to society would be truly remarkable and beyond imagination.”

Furthermore, tools and gadgets throughout the film that may have seemed like pure science fiction at the time are quite commonplace today, Wang notes. That includes unmanned vehicles and drones operating remotely or autonomously, as well as self-operating surveillance systems, cybersecurity and command-and-control systems that process real-time data to enhance human decision-making, all of which are depicted in the film.

Such inventions are a testament to the imagination of designers inspired by sci-fi, Champandard acknowledges, but it’s important to know that machines aren’t coming up with new ideas on their own. They need a little help from us.

“AI learning today is now done by assimilating large quantities of data, and learning from it in bulk,” he says. “It’s not very human-like, but can serve as a substitute. So, instead of learning on-the-fly, AI systems can simply recall past situations from training data and use that.”

Can AI develop consciousness?

It's important to differentiate consciousness from intelligence. Consciousness is a state of self-awareness, which many researchers argue machines cannot attain, whereas intelligence is the ability to learn, understand and ultimately apply that knowledge to adapt to new situations.

Schmidhuber points out that his lab has had simple conscious, self-aware machines since at least 1990, and that such a machine learns to represent itself by "planning ahead and thinking critically about the consequences of its actions."

"We are getting closer” to AIs becoming as sophisticated as human scientists, he says, noting that computing is getting “10 times cheaper” every few years.

Arnold Schwarzenegger (as the T-800 Terminator) reveals the metal endoskeleton of his forearm and hand in T2. (Photo: Paramount Pictures via Getty Images)

“The 21st century will see cheap computers with a thousand times the raw computational power of all human brains combined,” he continues. “Soon, there will be millions and billions and trillions of such devices. Almost all intelligence will be outside of human brains.”

It might sound daunting, but Schmidhuber says the amount of intelligence gained within various gadgets and machines will help us get through mundane tasks more efficiently.

“AI will keep making human lives longer and healthier and easier,” he says. “AI will likely play a much more overt role in future society, unlike it is now. Expect more interactions with machines and automated systems.”

‘It's people who are the greatest enemies of people’

Machines declaring war on humans is a terrifying thought, but experts say we shouldn’t allow sci-fi films to deter us from exploring AI’s potential.

“Due to the limited knowledge of the underlying technology and the influence of entertainment media, there exists a sense of apprehension towards the development of AI,” says Wang. “The current state of AI is far from resembling the fictional Skynet system.”

“Unlike in Arnold Schwarzenegger movies, there won’t be many direct goal conflicts between ‘us’ and ‘them,’” Schmidhuber predicts. “Humans and others are mostly interested in similar beings with whom they share goals, such that they have a reason to compete and/or collaborate.

“Just like humans are mostly interested in other humans, super-smart AIs will be mostly interested in other super-smart AIs,” he continues. “It’s people who are the greatest enemies of people, but also their best friends. Similar for self-driven AIs. In the long run, humans will be protected to a certain extent through AI’s lack of interest in humans.”