Virtual Reality Shoots Demand a New Set of Tricks

As technology evolves, job descriptions change.

For example, some years ago, when digital cameras came to dominate filmmaking, cinematographers began hiring digital-imaging technicians, or DITs, to handle image-quality control and color correction directly on the set.

And now, with the arrival of virtual-reality entertainment, a new job category has emerged — VR operator — to help manage camera systems on such productions.

The challenges of transitioning from traditional film and TV to VR are obvious. A single camera is enough to capture images for a movie or show, but VR content aims to replicate a realistic 360-degree environment. And the movements of a headset-wearing viewer control the point of view from which that environment is seen.

Veteran visual-effects supervisor Ben Grossmann, co-founder of VR-content producer Magnopus, says he gets eye rolls from the videogame-industry types on his team when he mentions the job titles he has to create.

“They always go, ‘That doesn’t make any sense,’ and I’m like, ‘That’s because it’s half of what I know and half of what you know,’ ” Grossmann told the crowd at the From VFX to VR panel at VRLA, a virtual-reality expo that took place in downtown Los Angeles on April 14 and 15.

Grossmann, who won an Oscar as part of the VFX team on Martin Scorsese’s “Hugo,” explained that many of the asset-development people who make objects, textures and matte paintings can work in both visual effects and gaming. But on projects like Magnopus’ Oculus Rift VR experience “Mission: ISS,” created using Unreal and Unity game-engine software, he needs computer programmers with skills specific to those platforms.

The process pushes engineers to the forefront of workflow management and adds a quality-assurance layer of as many as 30 people working around the clock.

“In visual effects, you’re like: ‘No one’s going to see that corner. We’ll fix that in the grade,’ ” Grossmann explained. “But in [VR], they’re like, ‘No, 600 people are going to hit that bug, and then they’re going to complain.’ ”

Beyond the big changes in effects creation, image capture itself is entirely different in VR: a 360-degree field of view has to be recorded by an array of cameras pointed in different directions.

And the data-management demands of a 360-degree video shoot dwarf those of a traditional film or TV production. Each camera in a VR rig’s array records its own image stream to its own memory card. GoPro’s Omni rig, for instance, has six cameras, while a Jaunt One rig has an array of 24.

Salt Lake City-based Cosmic Pictures has built a proprietary system with 25 Blackmagic Micro Cinema Cameras that shoots about 26 gigabytes a minute, according to the company’s VR project manager, Chris Nielsen. “You’re dealing with some major file sizes,” Nielsen said.

Many 360-degree shoots use as many as five camera rigs, compounding the file-management challenges. On the VRLA panel Shooting VR for Post, cinematographer Eve Cohen said that, to her, the various VR camera rigs are akin to the different lenses she uses on a traditional production, “and I’m not going to show up on a shoot with one kind of lens.”
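For a rough sense of scale, the arithmetic below turns Nielsen’s 26-gigabyte-per-minute figure into per-day storage totals. Only that rate and the five-rig count come from the panelists; the recording hours and backup copies are illustrative assumptions, so treat this as a back-of-envelope sketch rather than a production budget.

```python
# Back-of-envelope storage estimate for a multi-rig 360-degree shoot.
# The 26 GB/minute rate is Cosmic Pictures' figure and the five-rig
# count comes from the panel; hours recorded and backup copies are
# illustrative assumptions.

GB_PER_MINUTE_PER_RIG = 26   # Cosmic Pictures' 25-camera Blackmagic rig
RIGS_ON_SET = 5              # "as many as five camera rigs"
HOURS_RECORDED = 4           # assumed footage per shoot day
BACKUP_COPIES = 2            # camera originals plus one backup (assumed)

per_rig_tb = GB_PER_MINUTE_PER_RIG * 60 * HOURS_RECORDED / 1000
total_tb = per_rig_tb * RIGS_ON_SET * BACKUP_COPIES

print(f"Per rig, per day: {per_rig_tb:.1f} TB")  # about 6.2 TB
print(f"Total to manage:  {total_tb:.1f} TB")    # about 62.4 TB
```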

VR operators manage these systems. “To me, it’s like having somebody in charge of that camera — not necessarily from a creative standpoint, but with a technical understanding,” Cohen said.

Traditional three-point lighting is generally out of the question when working with the all-seeing cameras, so producers must find creative ways to illuminate sets. Shooting an interactive/VR experience for the Epix series “Berlin Station,” Chaos Labs co-founder and creative director Stevo Chang used Freedom 360 rigs with four GoPros, which don’t handle low light well.

“So for one scene we had to place an enormous number of lamps around the room just to bring the exposure level up,” he said.

For T Magazine’s 360-degree mini-documentary “The Creators: Taryn Simon,” director Luca Guadagnino wanted just enough light to make artist Simon’s cavernous installation “An Occupation of Loss” appear as darkly haunting on screen as in person, so he used minimal practical lighting fixtures and removed them in post.

“In a way, it’s almost an identical process to filming a normal narrative film,” said Guadagnino, whose non-VR projects include the 2010 feature “I Am Love.” “It involves a lot of post-production.”

But the fix-it-in-post attitude that reigns in traditional film and TV can get one in trouble on a VR shoot, particularly when it comes to stitching — the joining of image feeds from multiple cameras, accomplished using special software and manual cleanup work by CG artists. If the image is captured improperly, no amount of digital massaging can fix it.

“For me, the most important parameter is how close can you get to the camera,” observed DP Andrew Shulkind during the Shooting VR for Post panel. “Depending on where the [image] overlap is, you may not be able to come closer than five or six feet. If you shoot too close to a chain-link fence, that part of the shot will not stitch. If you move just a couple of feet away, that could make the difference between the shot working and the shot not working.”
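Shulkind’s distance rule of thumb can be pictured with simple parallax geometry: a subject near the seam stitches cleanly only while the gap between adjacent lenses subtends a small angle from the subject’s position. The sketch below is an idealized illustration, not how any stitching package actually measures it; the lens spacing and tolerance angle are assumed values, not specs for any rig mentioned here.

```python
import math

def min_stitch_distance(baseline_m: float, tolerance_deg: float) -> float:
    """Distance at which the gap between adjacent lenses subtends no more
    than `tolerance_deg` of parallax, a rough proxy for how close a subject
    can get to the rig before the seam stops stitching cleanly."""
    half_angle = math.radians(tolerance_deg) / 2
    return baseline_m / (2 * math.tan(half_angle))

# Assumed values: ~10 cm between adjacent lens centers, 3 degrees of
# tolerable parallax at the seam. Neither number is a measured spec.
d = min_stitch_distance(baseline_m=0.10, tolerance_deg=3.0)
print(f"Keep subjects at least ~{d:.1f} m ({d * 3.28:.1f} ft) from the rig")
# Prints roughly 1.9 m (6.3 ft), in the ballpark of the five-to-six-foot
# limit Shulkind describes.
```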

The technology is tricky, to be sure. But it’s clear that it takes a VR operator to help figure it out.
