Researchers have studied the potential for bias in facial recognition algorithms before, but now it's the US government's turn to weigh in. The National Institute of Standards and Technology has published a study indicating "demographic differentials" in the majority of the facial recognition algorithms it tested. The report, which examined both one-to-one matching (such as verifying a passport photo) and one-to-many matching (looking for criminals in a crowd), saw noticeable surges in false positives based on gender, age and racial background -- but cautioned against treating this as definitive proof of systemic bias.
In one-to-one matches, there were dramatic increases in false positives for African American, Asian and Native American faces compared to their Caucasian counterparts, with mistakes frequently happening "10 to 100 times" more often. African American women were also more likely to be the victims of false positives in one-to-many matches, and women as a whole were two to five times more likely to deal with those false hits. However, these problems didn't crop up everywhere. Asian-developed algorithms, for example, didn't show large discrepancies in results between Asian and Caucasian faces. NIST suggested that this might be due to a more diverse set of training images. In other words, the flaws may stem not so much from the algorithms themselves as from their source data.
The study is one of the more comprehensive of its kind. While the portion examining false positives for African American women relied on 1.6 million FBI mugshots, the majority of the study drew on 18.27 million images of 8.49 million people, all plucked from the FBI, Homeland Security and the State Department. None of it was taken from social networks or surveillance cameras, NIST said.
The institute stressed that its researchers "do not explore" the causes of these differences in the report itself. With that said, it believed the information could prove vital to developers, governments and customers who want to understand the "limitations and appropriate use" of facial recognition algorithms.
For civil rights groups, NIST's findings stood as evidence that government and police should curb their uses of facial recognition. ACLU Senior Policy Analyst Jay Stanley maintained that the results showed facial recognition tech was "flawed and biased," and that a bad result could lead to everything from inconveniences like missing a flight to dire consequences like being placed on terrorist watch lists. Stanley called on government agencies to "immediately halt" use of the technology.
Those rights advocates are already getting their wish in some areas, if not as many as they might like. While non-Americans will still deal with face scans, Customs and Border Protection stressed that it wouldn't require scans for US citizens. Likewise, multiple cities have banned facial recognition, with the potential for bias often cited as a factor in the decision. This isn't the same as banning the use of the tech across whole federal- or state-level governments, though, and those deployments that persist won't necessarily address flaws in algorithms or training data. The NIST study could help -- but only if officials take it under serious consideration.