Facebook vs. ISIS: Inside the tech giant’s antiterror strategy

Facebook, under pressure to crack down on Islamic State militants using its network, has quietly ramped up its efforts to block terrorist messages and videos. Some experts say the move is a potentially significant contribution by the Silicon Valley giant to the U.S. government's battle against the terror group's propaganda and recruitment efforts.

In a rare interview with Yahoo News, Monika Bickert, a former federal prosecutor who serves as Facebook's top content cop, provided the company's most detailed accounting yet of its efforts to identify and remove terrorist material from its site.

As described by Bickert, Facebook has set up what amounts to its own counterterrorism squad, with at least five offices around the world and scores of specialists fluent in several dozen languages. The group, part of Facebook’s Community Operations team, responds around the clock to reports by Facebook users and others of terrorists using the social media network, taking terror-related material down — and then looking for related accounts that the same actors might be using.

"If we become aware of an account that is supporting terrorism, we'll remove that account," said Bickert, chief of global policy management, in an interview at Facebook's sprawling campus in Menlo Park, Calif. "But we want to make sure that we get it all. So we will look at associated accounts — pages this person liked or groups that he may have belonged to — and make sure that we are removing all associated violating content."
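In engineering terms, the sweep Bickert describes resembles a short breadth-first walk over an account graph. The sketch below is purely illustrative and is not Facebook's actual tooling; the get_neighbors accessor and every other name in it are hypothetical stand-ins for whatever internal systems surface the pages a flagged account liked or the groups it belonged to.

```python
# Purely illustrative sketch of the "associated accounts" sweep described
# above: starting from one flagged account, walk connections such as liked
# pages and group memberships to queue related accounts for human review.
# get_neighbors is a hypothetical accessor, not a real Facebook API.
from collections import deque

def sweep_associated_accounts(flagged_account, get_neighbors, max_depth=2):
    """Breadth-first walk outward from a flagged account.

    Returns the accounts encountered, queued for review rather than
    automatic removal; association alone does not establish a violation.
    """
    seen = {flagged_account}
    queue = deque([(flagged_account, 0)])
    to_review = []
    while queue:
        account, depth = queue.popleft()
        to_review.append(account)
        if depth < max_depth:
            for neighbor in get_neighbors(account):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, depth + 1))
    return to_review
```

As Bickert's account suggests, any such traversal would only surface candidates; her team still reviews the associated content against Facebook's policies before removing it.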

Facebook — which now has 1.5 billion users, 70 percent of them outside the United States and Canada — is not only looking for messages or posts from avowed members of the Islamic State or other terror groups, Bickert says. It has instituted community standards policies — controversial in some circles — that go beyond those of other social media firms, banning expressions of “support” for terrorist groups and expressions “praising” their leaders or “condoning” violent terrorist acts.

“If there is a terror attack that happens in the world, and somebody shares even a news report of that and says, ‘I think this is wonderful,’ or they’re mocking the victims or celebrating what has happened, that violates our policies and we’ll remove it,” said Bickert.

Her comments come at a time when Facebook and other social media firms have been feeling increasing heat from U.S. and other Western counterterrorism officials, members of Congress and even presidential candidates like Hillary Clinton, who have called on the companies to be more aggressive in shutting down ISIS propaganda and reporting it to the government.

The companies, wary of being perceived as too cozy with the U.S. intelligence community and of compromising the privacy of their users, have resisted some of these proposals, arguing that such measures could require them to engage in censorship. One provision, sponsored by Senate Intelligence Committee Chairman Richard Burr, R-N.C., and Sen. Dianne Feinstein, D-Calif., would require social media firms to report to U.S. law enforcement "all terrorist activity" on their networks.

The provision was inserted into a Senate intelligence bill last summer — and then removed after intense lobbying by a host of trade associations representing Facebook, YouTube, Twitter and other social media and tech companies, including Yahoo, congressional staff members told Yahoo News.

The provision “would impose a new government mandate requiring a broad spectrum of companies to report users’ activities and communications to the U.S. Government … and risks chilling free speech,” the trade groups, including Reform Government Surveillance, a coalition of social media firms that includes Facebook, wrote in a Dec. 11 letter obtained by Yahoo News.

On the campus of Facebook’s headquarters, Menlo Park, Calif. (Photo: Beck Diefenbach/Reuters)

The proposal, and similar calls for the social media firms to engage in greater cooperation with the government, have also alarmed civil liberties advocates. After President Obama's top national security team, including FBI Director James Comey and Director of National Intelligence James Clapper, flew out to Silicon Valley this month for a meeting with executives of tech and social media firms, including Facebook's Sheryl Sandberg, ACLU lawyer Hugh Handeyside wrote a blistering critique on the widely read Just Security blog of what he viewed as the potential dangers of such cooperation.

“The notion that social media companies can or should scrub their platforms of all potentially terrorist-related content” not only is “unrealistic,” he wrote, but could also “sweep in protected speech and turn the social media companies into a wing of the national security state.”

As the debate over the Burr-Feinstein proposal continued, Facebook executives began closed-door briefings in Washington last month to preempt such moves by detailing the company’s own internal efforts to identify and remove terror-related content — without drawing too much public attention to them and riling the civil-liberties critics.

In a private “off the record” dinner attended by former senior intelligence community officials — many with continuing close ties to their former agencies — Alex Stamos, the company’s chief security officer (and a former head of security at Yahoo), gave a 20-minute presentation laying out sophisticated methods, including the use of algorithms, that Facebook was using to identify terrorist content, according to a source familiar with the briefing.

At the same time, the company is helping to fund a Department of Homeland Security initiative, called Peer 2 Peer, to create a competition among college students for videos and other messaging that combat jihadi propaganda, which officials hope will prove more effective than the State Department's own failed attempts.

These efforts have won Facebook goodwill among some Obama administration officials and outside groups that have been tracking terrorists’ use of social media. “They’re setting the tone for what other companies can do,” said Steven Stalinsky, executive director of the Middle East Media Research Institute, a private organization that closely tracks jihadi propaganda on the Internet.

Stalinsky said that Twitter (which has been far less public about its own internal policing) hosts a significantly larger share of jihadi propaganda, in part because users quickly reappear under new handles once their accounts are taken down. Another platform newly favored by the jihadis is Telegram, a Berlin-based messaging service created by Pavel Durov (dubbed the "Russian Mark Zuckerberg") that openly advertises the ability of groups and "supergroups" of users to hold encrypted conversations.

But Facebook has still faced more than its share of stinging criticism, especially overseas. In 2013, after British soldier Lee Rigby was brutally hacked to death near his military barracks outside London, British officials discovered that one of his two attackers, Michael Adebowale, had made “graphic” threats to kill military personnel in a Facebook chat six months earlier with somebody known as “Foxtrot,” a suspected al-Qaida operative in Yemen. A parliamentary security committee later harshly criticized Facebook for not proactively sharing that information, saying there was a “significant possibility” that MI5, the British domestic intelligence agency, could have prevented Rigby’s murder had it been informed about the Facebook threats.

A courtroom sketch from 2013 shows Michael Adebowale, flanked by police officers, during a court appearance in London, where he was accused in the murder of British soldier Lee Rigby. (Illustration: Elizabeth Cook/PA/AP)

The incident appears to have been a turning point for Facebook, one U.S. law enforcement official said. In a response to the report last year, British Prime Minister David Cameron, without identifying Facebook by name, said the company in question had begun discussions with the British government about how it could “rapidly improve the identification of imminent threats and reporting them to law enforcement.”

But in their interviews with Yahoo News, Bickert and other Facebook officials were vague about how they define “imminent threats” — a threshold that, under Facebook’s own policies, triggers voluntary reports by the company to law enforcement. Bickert was asked: If a Facebook user is discovered writing that he intends to travel to Syria to wage jihad, does that meet Facebook’s standard for alerting the FBI? Those decisions, she said tersely, are made by the company’s legal team “on a case-by-case basis.”

The issue of exactly how vigilant Facebook is in monitoring its network arose again after the Dec. 2, 2015, rampage in San Bernardino, Calif., that killed 14 people. Just minutes after the attack, one of the perpetrators, Tashfeen Malik, posted a message on a Facebook account pledging allegiance to Islamic State leader Abu Bakr al-Baghdadi.

Alerted by the FBI, Facebook promptly took down the message that day, before the news media or others were able to find it. But Facebook appears to have missed another disturbing message — just weeks earlier — posted by Enrique Marquez, who has been charged with buying two of the weapons Malik and her husband, Syed Farook, used in the attack. “No one really knows me. I lead multiple lives … involved in terrorist plots,” Marquez wrote in a Nov. 5 chat on Facebook, according to an FBI criminal complaint against him.

Tashfeen Malik, left, and Syed Farook, suspected terrorists in the Dec. 2 San Bernardino massacre, as they passed through Chicago's O'Hare International Airport in July 2014. (Photo: U.S. Customs and Border Protection via AP)

Could Facebook do more to find such messages, using the same automated image technology it is already using to identify and report child pornography? Bickert, who previously specialized in child pornography cases while serving as an assistant U.S. attorney in Chicago, said there is a critical difference.

"When you're talking about child exploitation offenses, and child pornography specifically, the image itself is the criminal contraband," she said. "There is no context that is necessary. When you're talking about content that supports terrorism, context is everything."
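The distinction Bickert draws maps directly onto how known-image detection works. Systems of the kind Facebook is reported to use for child-exploitation imagery compare an upload's signature against a database of previously identified contraband, so no surrounding context is needed. The minimal sketch below illustrates the idea; it uses a cryptographic hash purely to stay self-contained, whereas production systems use a perceptual hash such as Microsoft's PhotoDNA that survives resizing and re-encoding, and every name and value in it is hypothetical.

```python
# Illustrative hash-matching sketch: why known-image detection needs no
# context. A real deployment would use a perceptual hash (e.g., PhotoDNA);
# SHA-256 stands in here only to keep the example self-contained.
# All names and data below are hypothetical placeholders.
import hashlib

# Signatures of previously identified contraband images, supplied in
# practice by clearinghouses (placeholder set, not real signatures).
KNOWN_CONTRABAND_SIGNATURES = {
    "9b2f0c5e...",  # placeholder entry
}

def image_signature(image_bytes: bytes) -> str:
    """Compute a signature for an uploaded image (stand-in for a perceptual hash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_contraband(image_bytes: bytes) -> bool:
    # The match alone is decisive: as Bickert puts it, "the image itself
    # is the criminal contraband." No surrounding text is consulted.
    return image_signature(image_bytes) in KNOWN_CONTRABAND_SIGNATURES
```

A news photo of a terror attack, by contrast, would produce the same signature whether it was shared to report the event or to celebrate it, which is why, in Bickert's telling, terror content cannot be policed by image matching alone.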