Facebook today handed over to congressional investigators 3,000 ads that were bought by a Russian company to influence U.S. politics. "Many appear to exploit racial and social divisions and exploit ugly stereotypes. We find this interference deeply offensive," a Facebook spokesperson wrote this morning.
Facebook also announced specifics of how it will implement changes to its advertising systems in order to thwart abuse and, specifically, election interference, which CEO Mark Zuckerberg promised last week. He later asked for forgiveness for how his products have been used to divide people.
Facebook briefed TechCrunch on the changes that include hiring 1,000 more people to its global ads review team over the next year, and making it so anyone can see any ad run by any organization on Facebook instead of only the ads targeted to them.
The changes should boost the integrity of Facebook's ad systems and prevent some of the abuse that plagued the 2016 U.S. presidential election.
Facebook told TechCrunch last night that it planned to share the 3,000 ads with the congressional investigators this morning. As we wrote then:
Facebook’s disclosure to the House and Senate Intelligence Committees and the Senate Judiciary Committee will include information on the ads’ content and targeting as well as the accounts that paid approximately $100,000 for them to run between 2015 and 2017 in the U.S. It previously announced these ads were tied to 470 accounts and Pages “associated with a Russian entity known as the Internet Research Agency.”
Facebook believes that congressional investigators for the three committees are best placed to review the ads and make determinations on them based on their access to classified intelligence and information from all relevant companies and industries, beyond Facebook's own internal investigation, according to a spokesperson. Facebook does not plan to release the ad data publicly.
Facebook now writes that "This manipulation runs counter to Facebook's mission of building community and everything we stand for. It is especially distressing that people tried to use our products to maliciously influence our election and divide us as a country." The ads have all been taken down, though the damage is done.
But at least next time it will be more difficult, thanks to five new changes Facebook is making. Here are the changes, with our analysis of their potential impact:
1. Making advertising more transparent
"We believe that when you see an ad, you should know who ran it and what other ads they're running -- which is why we show you the Page name for any ads that run in your feed. To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well -- including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency.
We try to catch content that shouldn’t be on Facebook before it's even posted -- but because this is not always possible, we also take action when people report ads that violate our policies. We're grateful to our community for this support, and hope that more transparency will mean more people can report inappropriate ads."
Analysis: Facebook's "dark posts," aka ads that aren't publicly visible, made it impossible to know exactly what advertisers were saying. The tool to view all of a Page's ad variants may be clumsy, but it will force advertisers to avoid controversial or abusive content and hold them accountable. However, it may deter innocent experimentation in advertising, since poorly made ads could embarrass their buyer. Ad watchdogs will soon be able to aggressively monitor questionable advertisers.
2. Strengthening enforcement against improper ads
"We use both automated and manual review, and we’re taking aggressive steps to strengthen both. Reviewing ads means assessing not just the content of an ad, but the context in which it was bought and the intended audience -- so we’re changing our ads review system to pay more attention to these signals. We're also adding more than 1,000 people to our global ads review teams over the next year, and investing more in machine learning to better understand when to flag and take down ads. Enforcement is never perfect, but we will get better at finding and removing improper ads."
Analysis: Facebook is wise to admit it can't catch every abuse. At least now it will put more of its $3 billion+ in quarterly profit toward reducing the problem. It's the least such a money-printing company could do. The new algorithmic review systems will have to be closely managed, though, to avoid accidentally censoring legitimate ads.
3. Tightening restrictions on advertiser content
"We hold people on Facebook to our Community Standards, and we hold advertisers to even stricter guidelines. Our ads policies already prohibit shocking content, direct threats, and the promotion of the sale or use of weapons. Going forward, we are expanding these policies to prevent ads that use even more subtle expressions of violence."
Analysis: Explicitly forbidding more subtle abuse will give Facebook rules it can point to during enforcement. But the bigger challenge than writing rules is consistently and fairly enforcing them. That's especially tricky since Facebook hasn't detailed what constitutes "subtle expressions of violence," though we've asked for more info.
4. Increasing requirements for authenticity
"We're updating our policies to require more thorough documentation from advertisers who want to run US federal election-related ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads. As Mark said, we won't catch everyone immediately, but we can make it harder to try to interfere."
Analysis: Facebook will hopefully expand this documentation policy to encompass more countries with elections targeted by trolls and foreign governments. This is something Facebook could have implemented before the U.S. election had it taken the threat more seriously, and shows how it must improve to be proactive instead of reactive with its protections.
5. Establishing industry standards and best practices
"In order to fight threats like these, we're all going to need to work together. We are reaching out to leaders in our industry and governments around the world to share information on bad actors and make sure they stay off all platforms."
Analysis: Twitter has already admitted that it, too, was used by Russian election interferers. Now, beyond spam, security, terrorism and child pornography, Facebook will collaborate with fellow companies and governments to create cross-web databases of known malicious accounts, ads and the strategies trolls use to avoid detection. That could allow an intrusion detected on one platform to inoculate the others.
Facebook has again shown it can adequately react to problems, and hopefully its past failures will push it to anticipate future worst-case scenarios.