Jurassic Park Predicted Our AI Debate 30 Years Ago

There’s a reason Jurassic Park endures after 30 years, and no, it’s not simply because the dinosaurs look cool. Nor is it the Jeff Goldblum memes and weird laughs. Steven Spielberg’s 1993 film posits a world where scientists genetically engineer extinct creatures back to life through DNA manipulation. Saying they played God feels cliché, but yeah — and thanks to those scientists, we now have a very useful lens through which to look at scientific progress… and its drawbacks.

Like other stories by author Michael Crichton, Jurassic Park centers on humanity’s hubris running amok, and what happens when ego trumps common sense. That idea has echoed throughout our world since the movie’s inception, but never louder than during our current debates over artificial intelligence (AI). How far is too far? How do we implement guardrails? Should scientists and engineers police the technology themselves, and can we trust them? Does anyone really understand AI’s power? Do the positives outweigh the potential negatives? How does this affect human beings?

That last question carries the most weight, from the U.S. Capitol in Washington, D.C., to the sidewalks outside Los Angeles production studios. And it all comes back to that same responsibility Dr. Ian Malcolm (Jeff Goldblum) talked about ad nauseam when he first set foot on an island filled with cloned dinosaurs. He saw scientists ignoring ethical questions for the right to shout “First!” He lamented their irresponsibility and tunnel vision while engaging in a philosophical back-and-forth with the park’s chief geneticist, then authored the statement that looms over the entire franchise:

“Your scientists were so preoccupied with whether or not they could, that they didn’t stop to think if they should.”

That one quote explains the movie’s continued cultural relevance. Jurassic Park is a lot of things, but it’s chiefly an allegory for a society that no longer pulls back after pushing innovation to the edge of a cliff. It replaces that caution with an unbridled enthusiasm to swan dive into the abyss just because it can. The movie predicted our current predicament with AI simply because it understood humanity’s nature better than most give it credit for.

What happens when everyone shows up late to the party? Jurassic Park reflects this through Doctors Malcolm, Ellie Sattler (Laura Dern), and Alan Grant (Sam Neill). The park’s scientists worked unbothered behind the scenes for quite some time before anyone outside its imaginary walls noticed. Then a Velociraptor killed someone, spoiled all their fun, and the park’s investors demanded those experts give their objective blessing before the park opened.

There’s always something that makes those outside a cloistered community stand up and pay attention — no one lost limbs when ChatGPT entered our world, but it created the same effect. The generative AI program that talks like a human, seems to think like a human, and even understands humor spread like a rumor in late 2022, turning the automated future once confined to fiction into a reality for students, journalists, and Congress. The White House created a plan as well.

But because technology evolves with the speed of a Gallimimus, our best efforts feel two or two thousand steps behind. Thinking about ethical implications after the technology is already in the wild exposes painfully absent foresight. John Hammond (Richard Attenborough) talked about wresting back control from dinosaurs once they did what dinosaurs do, only for Dr. Sattler to forcefully remind him that he never controlled them; that was an illusion. He ignored the warning signs along the way and proceeded because he could.

Generative AI learns from our input; it knows our spending habits, favorite foods, daily thoughts, and even those dark little things we think we keep to ourselves. The tool can write thesis papers, books, and scripts, do math homework, or recreate performances. Publications like CNET have published several articles written by generative AI, while the automotive industry envisions using it for more automation in areas where qualified workers may find themselves sitting on the sidelines. Wrapping our collective hands around it now feels like too little too late. Especially when dollar signs rule the day for people like Hammond’s investors and their real-world counterparts.

“We will not be having our jobs taken away and given to robots,” said Bryan Cranston at a New York City rally. Hollywood’s actors and writers see AI as a threat and want job protection in writing: Why pay a background actor when scanning their likeness and using it forever is an option? Do you really need writers if ChatGPT, if given the necessary input, can hammer out a Hollywood blockbuster and save money?

For their part, the Alliance of Motion Picture and Television Producers (AMPTP) says it proposed “groundbreaking” AI protections. The AMPTP’s proposal “protects the performers’ digital likeness, including a requirement for performer consent for the creation and use of digital replicas or for digital alterations of performance,” plus a minimum payment for said likeness.

Those picketing see it differently, claiming the payment is a one-time thing that gives studios consent from now till whenever. And while the Writers Guild isn’t against AI, they want a seat at the big table when those conversations about AI occur — as opposed to shutting down the topic entirely, as the AMPTP did when presented with the WGA’s initial proposal.

Writers want assurances that generative AI won’t get writing credit or that a studio executive won’t type a couple of words and phrases into a program and create a story that becomes the next billion-dollar hit. And like the actors, they wonder about their compensation if the day comes when the contract demands they split their increasingly smaller paycheck with…a machine.

The technology also raises an alarming question for the future: If studios use an actor’s performance or a writer’s words as training data for AI tools, are they compensated with residuals? And if so, for how long? That predicament goes beyond zeros in the bank account as it raises legal questions about intellectual property, and ethical ones about using someone’s own work to possibly make them obsolete.

Justine Bateman, a Hollywood multi-hyphenate who also holds a computer science degree, studied generative AI in college and validated Cranston’s fears: “Imagine something coming in and not only displacing you but displacing you with your own work.” Bateman also realized how generative AI tools might “widen profit margins” because most times, it really is all about the Benjamins.

Disney has created a task force to envision how it might implement AI and cut costs. Netflix, Sony, Paramount, and other studios made their growing interest more apparent through job announcements for AI engineers and specialists. There’s nothing inherently wrong with studying new technology, and companies do themselves a disservice if they ignore innovations. But much like the experts questioned the motivations of InGen’s investors, who spent all that money researching and developing “dino DNA,” one wonders how long before the bottom line becomes the only thing that matters to the C-suite.

The pay difference between Hollywood CEOs and writers, reflecting broader trends across the board, isn’t so much a gap as a canyon. This technology introduces the possibility that studios might fatten their coffers even more by paying actors less and using screenwriters as editors, or discarding them altogether.

What assurances do we have that a ruling class enamored with excess, and with acquiring more at anyone’s expense, won’t pick AI over humans? To their credit, the companies standing at the forefront seem transparent, unlike Hammond’s InGen corporation. But like Hammond, who realized too late that he fostered a bad idea, some of those same minds now believe they went too far. So why do it? Why create the technology in the first place if you knew it posed severe threats?

Dr. Malcolm answered that question thirty years ago. And whether it’s recreating dinosaurs or building technology that apparently has a mind of its own beyond control, the answer remains the same:

“Should” never once factored into their equation.

Marcus Shorter
