Facebook moderators were reportedly not prepared to catch Russian ads

They're trained to spot offensive ads or scams, not content influencing public opinion.


Facebook's admission that Russian-linked advertisers spent $100,000 on ads in the run-up to and aftermath of the 2016 presidential election has raised serious questions about their effects. But how did those ads make it through the social network's filters? Four anonymous ad monitors told The Verge that, as contractors handling hundreds to thousands of ad components per day, they weren't adequately prepared to screen out the propaganda and keep it off the site.

Three of the four workers were at Facebook between June 2015 and May 2017, the window in which these ads ran on the social network. According to them, each used screening software to evaluate specific portions of an advertisement queued up for review, perhaps vetting its images or text segments but never seeing the ad as a whole. The workers typed in key codes to tag each segment with descriptors, which let them sift through material quickly but not critically, the anonymous sources told The Verge. They were on the lookout for sexually explicit or violent material as well as scams, not subtle attempts to influence the election.

Or, as one worker told The Verge, "They weren't screening for, like, propaganda or anything." They were looking for marketing that capitalized on fear to sell products, not attempts to change opinions.
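To make that workflow concrete, here is a minimal sketch of what segment-by-segment review tooling might look like, assuming a simple queue of ad pieces and keyed-in tag codes. The `AdSegment` structure, the specific codes, and the descriptors are all hypothetical; The Verge's sources did not describe Facebook's actual tooling in this detail.

```python
from dataclasses import dataclass, field

# Hypothetical tag codes a reviewer might key in; the real descriptors
# and codes used by Facebook's contractors are not public.
TAG_CODES = {
    "01": "sexually_explicit",
    "02": "violent",
    "03": "scam",
    "00": "ok",
}

@dataclass
class AdSegment:
    """One piece of an ad (an image or a text block) queued for review.

    Reviewers saw segments like this in isolation, never the full ad,
    so any context tying segments to a coordinated campaign was lost.
    """
    ad_id: str
    kind: str            # "image" or "text"
    content: str
    tags: list = field(default_factory=list)

def review(segment: AdSegment, key_code: str) -> AdSegment:
    """Apply the descriptor for a typed key code to a single segment."""
    segment.tags.append(TAG_CODES.get(key_code, "unknown_code"))
    return segment

# A reviewer clears a queue segment by segment, quickly but not critically:
queue = [
    AdSegment("ad-481", "text", "Act now before it's too late!"),
    AdSegment("ad-481", "image", "stock_photo_flag.jpg"),
]
for seg in queue:
    review(seg, "00")  # nothing explicit, violent, or scam-like -> approved
```

The telling detail is what's absent: there is no code for political influence, so a segment that is neither explicit, violent, nor scam-like is simply approved.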

The Russian-backed ads had characteristics that could have raised red flags among reviewers, but the massive volume of advertisements flowing in daily left room for the ones in question to sneak by. An algorithm also screened ads, one the workers were constantly training, they told The Verge; conceivably, it could have approved all of the Russian-backed content.
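The workers' description of constantly training the screening algorithm suggests a human-in-the-loop classifier, where reviewers' tags double as training labels. The sketch below is an assumption about the general shape of such a system, built with scikit-learn on invented example data; it is not a description of Facebook's actual model. Its point is that a model trained only on the reviewers' labels inherits their blind spot.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: ad text labeled by human reviewers.
# The labels cover only what reviewers screened for; "propaganda"
# never appears, so the model cannot learn to flag it.
texts = [
    "Hot singles in your area click now",            # scam
    "Graphic footage you won't believe",             # violent
    "Limited-time offer on home insurance",          # ok
    "Our community stands for traditional values",   # ok (political, but unlabeled as such)
]
labels = ["scam", "violent", "ok", "ok"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A divisive, opinion-shaping ad gets approved: nothing in the training
# signal distinguishes it from ordinary marketing copy.
print(model.predict(["They are coming for your way of life. Share this."]))
```

Under those assumptions, every influence ad the humans wave through becomes another "ok" training example, reinforcing rather than correcting the gap.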

We've reached out to Facebook for comment and will include it when we hear back.