A Brief History of Motion-Capture, from Gollum to Caesar
This week sees the arrival of Dawn Of The Planet Of The Apes, the latest in the long-running science-fiction series, and the first sequel to 2011’s franchise-reviving Rise Of The Planet Of The Apes. The new film has been met with mostly admiring reviews, almost every one of them heaping praise on its star, Andy Serkis. Dawn also represents a high-water mark in the ever-evolving motion-capture technology used to bring the film’s non-human cast to life.
Also known as performance-capture, “mo-cap” has been around in various forms for the past few decades. The technology really came to prominence in 2002, with the second Lord Of The Rings film, in which Serkis — in a breakout performance — turned Gollum into a “living” creature that physically interacted with other characters. To mark his latest performance, we’ve donned our Lycra bodysuits and face-mounted cameras to take a look back at a brief history of motion-capture, which started as a tool, but has now evolved into something of an art.
1970s-2001: The Early Years
The idea of mapping animation onto actors is almost as old as animated feature films: Disney’s pioneering 1937 film Snow White & The Seven Dwarfs partly utilized a process called ‘rotoscoping,’ whereby artists drew over live-action film frames. The technique was later used in animated films like Ralph Bakshi’s Lord of the Rings (1978) and the 1985 music video for a-ha’s “Take on Me.” Computers pushed the process further: the Rotoshop system used in Richard Linklater’s Waking Life (2001) and A Scanner Darkly (2006) is something of a primitive precursor to motion capture.
Initially developed by bio-mechanical engineers for motion studies, the technology was being used in video games by the mid-1990s: Highlander: The Last Of The MacLeods and Soul Blade both utilized motion-capture to bring greater realism to their on-screen avatars. It was only a matter of time before filmmakers caught on, although the first movie made exclusively with the technology is justifiably forgotten. 2000’s Sinbad: Beyond The Veil Of The Mists, an Indian-made animation featuring the voice (but not the movements) of Brendan Fraser, looked like a very creaky video-game cut-scene and made just $30,000 in its brief theatrical release. The following year, the big-budget animation Final Fantasy: The Spirits Within also utilized the technology in part, but it too flopped at the box office.
1995-2006: The First Motion-Capture Characters
Popular conception has it that Gollum in Lord of the Rings was the first motion-captured character in a live-action film, but that’s actually incorrect. The technology was first put to use to create a ‘digital double’ for Val Kilmer in 1995’s Batman Forever, and James Cameron populated crowd scenes in Titanic with performance-captured figures, replicated on a grand scale. Then, in the summer of 1999, actor Ahmed Best played Jar-Jar Binks in Star Wars Episode I: The Phantom Menace: Best was present on set, but the CGI character, animated from a capture of his performance on a sound-stage after the fact, replaced him in the finished film.
Jar-Jar was hated by fans, though, so it’s no surprise that Gollum has the more lasting place in the history books. British stage actor Andy Serkis was hired by Peter Jackson to play the emaciated ex-Hobbit Gollum/Smeagol in the Lord of the Rings films, and flew out to New Zealand knowing essentially nothing about motion-capture. The process wasn’t dissimilar to the one Best had been through: Serkis would work with the actors on set, then another take would be filmed with him just off-screen (during which he’d describe his movements to the other actors). Months later, Serkis would record Gollum’s movements on a special soundstage at Jackson’s visual-effects company Weta Digital, in front of a camera setup called The Volume. The basics of the technology weren’t all that different from how it works today: an actor (usually clad in a tight-fitting unitard) wears a series of sensors around their body, and multiple cameras placed around the performer allow a computer to reconstruct a 3D model of the movements, which can then be exported into animation programs.
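The multi-camera reconstruction described above boils down to triangulation: each camera’s 2D view of a marker constrains where the marker can be in 3D, and combining several views pins its position down. Here’s a minimal sketch of that core math in Python, using the classic Direct Linear Transform; the two toy cameras and the marker position are invented for the example, and a real mo-cap system would use many calibrated cameras and track hundreds of markers per frame:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Least-squares 3D position of one marker seen by N cameras.

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (u, v) image coordinates, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P[2]@X) = P[0]@X, and likewise for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # The null vector of A (smallest right singular vector) is X.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.5, 0.2, 3.0])  # "true" marker position (made up)

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

obs = [project(P1, marker), project(P2, marker)]
print(triangulate([P1, P2], obs))  # recovers approximately [0.5, 0.2, 3.0]
```

With noise-free observations the recovery is exact; in practice, sensor noise and occluded markers mean real pipelines filter and smooth these estimates over time before exporting the skeleton to animation software.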