Terrorism experts: Here's how tech companies should see the problem

Big tech is taking big strides in its efforts to combat terrorism-related content online.

On Wednesday, Google (GOOG, GOOGL) announced a $5 million innovation fund focused on countering hate and extremism. Earlier this year, Facebook (FB) hired 3,000 additional human content monitors, bringing the team’s total count to 7,500. This summer, Facebook, Microsoft, YouTube and Twitter created the Global Internet Forum to Counter Terrorism.

But beyond throwing money at the problem, companies need to understand what an effective solution actually requires.

Yahoo Finance spoke with several counterterrorism experts who believe that internet companies are on the right path, but the issue is more complex than Silicon Valley has acknowledged.

Silicon Valley’s dilemma

There is an inherent conflict between the tech industry and the government, says Aaron Zelin, a fellow at the Washington Institute for Near East Policy specializing in online jihadism.

“People in Washington don’t understand how tech companies really work. On the opposite end, many tech companies don’t understand how the government functions,” Zelin said. “It’s hard to square their differences.”

From the conventional government perspective, tech companies need to step up and regulate content on their platforms.

“I’m a strong believer in the Navy view of accountability. When you’re the captain and one of your chiefs screws up and the ship hits a sand bar, you’re responsible,” said Andy Liepman, a senior RAND Corporation policy analyst who previously served as principal deputy director of the National Counterterrorism Center after 30 years in the CIA. “Zuckerberg has to take responsibility for the actions of his company.”

Tech companies, on the other hand, want to provide platforms that are open and dynamic.

“These tech companies are ultimately in the business of making money,” said Jeff Ringel, director of the Soufan Group, an international consultancy that advises governments and corporations on policy and security. “They are private companies. What we have to remember is their job is to make money so they can employ more people and keep that cycle going.”

Facebook CEO Mark Zuckerberg

And while Mark Zuckerberg and Alphabet CEO Sundar Pichai may not want to crack down on their users, calls for regulation are a natural consequence of their platforms’ scope and scale.

“Tech CEOs and founders created platforms for a particular reason, but they have a responsibility to deal with how people are using their products. They’ve become important places where people react to one another. While they aren’t media companies by name, they are facilitators of content, news, and people,” said Zelin.

Furthermore, tech companies have to think about potential liability if their platforms impact real-world events.

“Tech companies may have a legal responsibility to protect themselves. That’s why they want to be seen as combating this problem,” Ringel argued. “In the same way that cigarette companies and gun manufacturers are being sued because they produce products that can hurt people, tech companies may eventually be sued for facilitating an attack or a violent action.”

‘Very targeted, very tailored’

One thing that Washington and Silicon Valley agree on is the importance of thinking locally.

“We want campaigns that are global. We want to reach a big audience, but we’ve seen that in order to really succeed with counter-speech, it has to be local,” Facebook’s head of counterterrorism Monika Bickert told Yahoo Finance earlier this year.

“Students are the most credible voices and they know the issues better than anybody. At the same time, the insights are shared among all the universities … that allow the program to reach tens of millions of people,” Bickert said.

Terrorism experts know better than most that community-level engagement is crucial.

“The answer is local,” Liepman said. “The State Department has tried setting up the Global Engagement Center, but it’s so generalized that you’re not getting to the audience that you need to.”

In fact, the most effective strategies can be derived from the very groups these companies are trying to squelch.

“We have to take a page out of the ISIS book and do what they do,” Liepman argued. “Their real success is very targeted, very tailored messages to very small audiences.”

Putting it bluntly, ISIS recruiters have succeeded where the government and tech companies have not.

“They’re out there looking for vulnerable populations and vulnerable individuals, be it in the U.S., Spain, Morocco or Burkina Faso. How do we go about using their tactics and getting ahead of them? It’s about providing a positive message rather than a negative one,” said Liepman.

Facebook’s Peer to Peer: Challenging Extremism program is a collaborative effort with Homeland Security and EdVenture Partners.

Children in all communities are looking for some way to define their identity. Whether it’s through youth clubs or basketball leagues, tech companies must help provide positive alternatives.

Zelin agreed that companies need to connect a broader message with day-to-day grievances.

“Groups like ISIS adopt hyper-localized approaches,” he said. “And they are effective.”

Citing Minneapolis, a U.S. hub of ISIS recruitment among young Somali-Americans, Zelin said the community has done “great work building up NGOs and programs in that particular context.”

The key thread? These efforts are community-driven and stem from a shared cultural understanding rather than an outsider’s perspective.

“There needs to be some connective tissue to the local community, with also an understanding of these communities historically,” he said.

The pitfalls of relying on tech

Tech companies instinctively reach for engineering solutions. But it’s evident that combating issues this complex and nuanced requires more human involvement.

In August, YouTube announced a new initiative to combat terrorism-related content with “better detection and faster removal driven by machine learning.” But the crackdown also swept up accounts belonging to journalists and researchers whose channels featured videos about al-Qaeda and the war in Syria.

“YouTube has now suspended my account because of videos of Syria I uploaded two to three years ago. Nice anti-ISIS AI you’ve got there, YouTube,” researcher Eliot Higgins tweeted. “Ironically, by deleting years old opposition channels YouTube is doing more damage to Syrian history than ISIS could ever hope to achieve.”

Zelin runs his own website, jihadology.net, which includes primary source material, analysis, and translation of jihadi documents. At one point, Twitter removed @jihadology_net, the account affiliated with his site, because it had the word “jihad” in it.

A screenshot of Aaron Zelin’s personal research website, jihadology.net. The affiliated Twitter account was temporarily shut down.

“I don’t think this process can be completely automated,” Ringel said. “Innocent, innocuous messages may have a wrong combination of keywords that instantly pulls them down. You can use automation to identify an issue and have human reviewers.”

Tech companies seem to be catching on to the hybrid approach. This week Facebook COO Sheryl Sandberg announced that the company will add more human review and oversight to its automated processes. She vowed to address any “unintended consequences” going forward.
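Ringel’s point lends itself to a simple two-stage design: automation surfaces candidates, humans make the final call. The sketch below is purely illustrative and not a description of any company’s actual moderation pipeline; the names (flag_post, ReviewQueue, KEYWORDS) and the crude keyword screen are hypothetical, chosen only to show why a flag-then-review loop can catch a recruiter while sparing a researcher.

```python
# Illustrative sketch only: a toy "flag, don't delete" pipeline in which an
# automated keyword screen surfaces candidate posts and a human reviewer
# decides what actually comes down. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical watchlist. Real systems use trained classifiers rather than raw
# keywords, but both can over-flag in exactly the way researchers experienced.
KEYWORDS = {"isis", "jihad", "al-qaeda"}


@dataclass
class Post:
    author: str
    text: str


@dataclass
class ReviewQueue:
    """Holds automatically flagged posts until a human moderator decides."""
    pending: List[Post] = field(default_factory=list)

    def submit(self, post: Post) -> None:
        self.pending.append(post)

    def review(self, moderator_decision: Callable[[Post], bool]) -> List[Post]:
        """Apply a human decision function; return only posts confirmed for removal."""
        confirmed = [p for p in self.pending if moderator_decision(p)]
        self.pending.clear()
        return confirmed


def flag_post(post: Post) -> bool:
    """Automated step: a cheap screen that deliberately errs on the side of flagging."""
    text = post.text.lower()
    return any(keyword in text for keyword in KEYWORDS)


if __name__ == "__main__":
    queue = ReviewQueue()
    posts = [
        Post("researcher", "Archiving primary-source jihadology material on al-Qaeda in Syria"),
        Post("recruiter", "Join ISIS, message me for travel details"),
    ]

    # Automation identifies candidates; both posts trip the keyword screen.
    flagged = [p for p in posts if flag_post(p)]
    for post in flagged:
        queue.submit(post)

    # A human reviewer makes the removal decision, sparing the researcher.
    removals = queue.review(lambda p: p.author == "recruiter")
    print(f"Flagged by automation: {len(flagged)}, removed after human review: {len(removals)}")
```

The design choice in this hypothetical is the one Ringel describes: the machine is tuned to over-flag, and the human reviewer absorbs the false positives rather than letting the takedowns hit journalists and researchers automatically.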

The middle ground

Ultimately, while it’s vital for tech companies to make concerted efforts to keep extremists from exploiting their platforms, no technological solution is a cure-all.

“Posting content online is important in the recruitment process, but radicalization still mostly happens peer-to-peer and in-person,” Zelin explained. “A lot of this is just about suppression. These individuals are still looking for some meaning in their lives. For those who are already in the milieu, simply suppressing content won’t change their minds.”

Nevertheless, tech companies can do their part. For Liepman, the RAND analyst and CIA veteran, the willingness of tech companies to explore solutions is a crucial factor in curbing the spread of hate on their platforms.

“I would ask them to be as creative in this arena as they are in devising new ways to make money.”

Melody Hahm is a writer at Yahoo Finance, covering entrepreneurship, technology and real estate. Follow her on Twitter @melodyhahm.
