The cameras that serve as the electronic eyes of airports, train stations, sidewalks, malls and parks are now developing digital brains of their own, letting them go beyond counting cars and people to recognizing and tracking individuals.
The advances in machine vision I saw at a two-day event held by Nvidia (NVDA) in Washington last week could bring faster, surer responses to crime, but they may also dump far more of our everyday activities into databases subject to little accountability.
And many of the people behind this technology don’t seem to be thinking too deeply about those risks. Neither do the elected officials in a position to do something about how law-enforcement agencies employ these advances.
Teaching machines to see
I got an eyeful of these possibilities at the GPU Tech Conference, staged by Nvidia to promote uses of its high-performance GPU hardware. That abbreviation originally referred to graphics processing units, but these chips have become general-purpose processors adept at applying machine-learning techniques to live video.
In a demo during the keynote that opened the GTC conference Wednesday, an Nvidia GPU automatically classified a dense grid of overhead shots with labels like “agriculture,” “baseballdiamond,” “sparseresidential,” and “tenniscourt”—at a rate of 563 images per second.
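Under the hood, a classifier like the one in the demo scores each image against a set of learned categories and keeps the best match. Nvidia has not published that demo's code; as a purely illustrative stand-in, here is a toy nearest-centroid classifier in Python, where the label names come from the demo but the three-number "feature vectors" and centroids are invented for the sketch:

```python
import math

# Toy "centroids": one prototype feature vector per land-use label.
# Real systems learn these from millions of labeled images; the
# three-number features here are purely illustrative.
CENTROIDS = {
    "agriculture":       (0.9, 0.2, 0.1),
    "baseballdiamond":   (0.4, 0.8, 0.3),
    "sparseresidential": (0.2, 0.3, 0.7),
    "tenniscourt":       (0.6, 0.7, 0.9),
}

def classify(features):
    """Return the label whose centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

def classify_batch(batch):
    """Label a whole grid of images at once, as in the keynote demo."""
    return [classify(f) for f in batch]

print(classify_batch([(0.88, 0.22, 0.12), (0.61, 0.72, 0.88)]))
# -> ['agriculture', 'tenniscourt']
```

The real speed comes from running thousands of such comparisons in parallel on the GPU rather than one at a time as this sketch does.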
At a later panel, Nvidia global business development lead Adam Scraba showed how image-recognition software from Avigilon can quickly find instances of a selected person in earlier footage.
Noting the difficulty police had in identifying the terrorists behind the Boston Marathon bombing, he said, “This is the kind of technology that, hopefully, will allow that never to happen again.”
Wednesday afternoon, VisionLabs CEO Alexander Khanin bragged that his firm’s face-detection software, used by banks and social networks in other countries, was now handling 58 billion face-recognition requests a year worldwide.
In the exhibit area, a standard-issue webcam set up at Wrnch’s table allowed its BodySlam software to model the movements of passersby—so as to detect such suspicious behavior as somebody carrying a gun.
“If you can see it, we can understand it,” CEO Paul Kruszewski said.
Overcoming human and technical limits
All these systems represent a technically impressive solution to a basic problem: There aren’t enough humans to look through all the images and video our current array of security cameras already collect.
“The number of eyes looking at these cameras is extremely small,” John Garofolo, senior advisor for Information Access Programs at the Department of Commerce’s National Institute of Standards and Technology, said in a panel Thursday.
They also represent a triumph of both coding and teaching, something GPU conference speakers emphasized: Machine-learning algorithms aren’t born knowing what a car, a cop or a criminal might look like, so they have to be exposed to real-world examples.
Sometimes, that raw material is readily available. DeepScience’s software aims to detect store robberies as they happen by spotting people putting on masks or pulling out guns, and a test with 7-Eleven (SVNDY) has already yielded an enormous trove of crime clips.
“I’m not sure any company in the world has more robbery videos than 7-Eleven,” CEO Sean Huver said in a presentation. “It’s been a gold mine for what we do.”
Other times, it’s not. Commonwealth Computer Research found the best way to train its street-analysis software to spot cars, cyclists and pedestrians was to feed it footage from the “Grand Theft Auto V” video game—which both offers high-resolution imagery for AI software to digest and lets coders script the action.
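The appeal of a scripted world like a game engine is that every synthetic frame arrives with a perfect, free label. As a toy sketch of that idea (not Commonwealth Computer Research's actual pipeline), this generates labeled synthetic samples and fits a tiny linear classifier to them; the two-number "features" and the perceptron are stand-ins for real imagery and real models:

```python
import random

random.seed(7)

def synthesize(n):
    """Scripted data generator: every sample comes with an exact,
    cost-free label -- the key advantage of simulated footage."""
    data = []
    for _ in range(n):
        x, y = random.uniform(0, 1), random.uniform(0, 1)
        label = 1 if x + y > 1.0 else 0   # e.g. "pedestrian" vs. "background"
        data.append(((x, y), label))
    return data

def train_perceptron(data, epochs=20, lr=0.1):
    """Tiny linear classifier trained on the synthetic examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x
            w[1] += lr * err * y
            b += lr * err
    return w, b

w, b = train_perceptron(synthesize(500))
correct = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else 0) == label
    for (x, y), label in synthesize(200)
)
print(f"accuracy on fresh synthetic data: {correct / 200:.0%}")
```

Because the generator scripts the ground truth, there is no expensive hand-labeling step, which is exactly why scripted game footage is attractive for training street-analysis software.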
Possibilities for abuse
When asked about the privacy risks in these systems, many of these developers and executives sounded too much like the architects of social networks such as Facebook (FB) and Twitter (TWTR), who designed those platforms without considering the inevitability of abuse.
“We’re a platform provider,” Nvidia’s Scraba said toward the end of his presentation. “We can’t necessarily control how some of this is being used.”
At another table in the exhibit area, Entropix chief product officer Nathan Wheeler shrugged off concerns about an upcoming test in Moscow of the firm’s image-enhancement software on more than 100,000 cameras.
“Will it be abused?” he asked. “Probably.”
Comparing his employer to a gun manufacturer, he said he preferred to think of the potential lives saved by automated systems that could detect plots against next year’s World Cup in Russia and might later spot a child being abducted in America.
“I can’t wait until an Amber Alert is an obsolete thing,” he said.
It’s true that advances in processing power can protect privacy in some cases. For instance, Nvidia’s Scraba suggested that ever-smaller GPUs could do all the required processing at the camera itself, leaving no centralized database of clips: “The video never leaves the machine or even gets stored.”
But selecting that kind of option—or similar ones like using software to blur the faces of innocent people—would be up to the agency or the company using the camera and the code behind it.
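That blurring option can be mechanically simple: once a detector flags a face's bounding box, the camera's own software can pixelate it before any frame is stored or transmitted. A minimal stdlib sketch, assuming the detector has already supplied the box (the detector itself, and any real product's code, are out of scope here):

```python
def pixelate_region(image, box, block=2):
    """Pixelate a rectangular region of a grayscale image (list of rows).

    `box` is (top, left, bottom, right), exclusive on bottom/right.
    Each block-by-block tile inside the box is replaced by its average
    value, destroying the fine detail needed to identify a face.
    """
    top, left, bottom, right = box
    out = [row[:] for row in image]          # don't mutate the original
    for by in range(top, bottom, block):
        for bx in range(left, right, block):
            tile = [
                image[y][x]
                for y in range(by, min(by + block, bottom))
                for x in range(bx, min(bx + block, right))
            ]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, bottom)):
                for x in range(bx, min(bx + block, right)):
                    out[y][x] = avg
    return out

# A 4x4 toy "image"; pixelate the top-left 2x2 "face" box.
img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120],
       [130, 140, 150, 160]]
print(pixelate_region(img, (0, 0, 2, 2)))
```

Running this entirely on the camera, as Scraba suggested, would mean only the redacted frames ever exist off-device; but again, choosing to do so rests with whoever deploys the system.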
Lawmakers aren’t watching
Law-enforcement agencies, meanwhile, are moving ahead with real-world deployments. The Department of Homeland Security wants vendors to provide face-detection systems to scan foreigners crossing the U.S. border in cars, while the bullet-point list of features for the reborn Pennsylvania Station in New York includes “video facial recognition technology.”
Individual states have taken steps to control what governments can do with data they already have, such as driver’s license photo databases. But at the federal level, we seem to be left at the mercy of technology vendors and law-enforcement agencies.
Congress seems fundamentally incapable of thinking seriously about privacy these days, having spent years failing to pass a basic, badly needed reform of an obsolete e-mail privacy law, the 1986-vintage Electronic Communications Privacy Act. Its last big vote on a privacy issue? A stampede to scrap pending Federal Communications Commission Internet-privacy rules.
That leaves tech vendors in charge. Do we need to see how that’s going to play out?