Pretty much anything a politician says on Facebook goes, from the company's perspective. It typically won't remove their posts under its content guidelines, and it doesn't fact-check them. However, it might still limit the spread of political posts that include previously debunked misinformation (say, a climate change-denying video its fact checkers have discredited).
Nick Clegg, Facebook's vice president of global affairs and communications, confirmed the approach to politicians' posts during a speech Tuesday. He said:
"We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.
"That's why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.
"Of course, there are exceptions. Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric."
The company previously said content reviewers wouldn't remove a politician's post for breaking its rules if it was deemed newsworthy. Since the platform says it essentially views all politicians' posts as newsworthy, they aren't going anywhere.
Facebook uses third-party fact checkers to determine the veracity of content. But a post isn't eligible for a fact check rating if it "contains a claim that is not verifiable, was true at the time of writing, or from a website or Page with the primary purpose of expressing the opinion or agenda of a political figure." That policy has been in place for over a year, former British deputy prime minister Clegg noted.
"I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse," he said. "Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don't believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves."
Facebook is effectively taking a hands-off approach to politicians' posts, if not their ads. The stance seems likely to let campaigns spread lies, harassment and nastiness across Facebook's vast network, as long as they're not paying for promoted posts. Fact-checking content elsewhere on the platform will be less effective if the political classes can post whatever they like, unburdened by Facebook's guidelines. If they post misinformation, users might take it at face value, and Facebook won't point out that they're not telling the truth. That's a risky approach that might well deepen confusion among voters.
The company might very well be trying to appease critics who have accused it of stifling conservative speech. Still, the policy raises plenty of questions, despite Facebook's attempt at clarifying it. Clegg didn't define how Facebook determines whether someone is a politician (just elected officials? Candidates? Former congresspeople?), nor is it clear how Facebook determines whether a post could lead to violence.
Twitter also sees politicians' tweets as broadly newsworthy and typically won't remove them, but it's adding a label to posts that violate its terms of service. It also draws a clearer line on whom the rule applies to: verified accounts representing a person being considered for a government position, elected officials and candidates. Those accounts also need at least 100,000 followers.