Facebook, Google Hit With Public Policy Questions on White Supremacy, Hate Online

Google offices in Santa Clara, California. Photo by Jason Doiy/The Recorder

In a tense hearing Tuesday morning, representatives from Google and Facebook joined advocacy group members to discuss the impact of white nationalism's spread online, in the wake of violent hate crimes in the U.S. and New Zealand sparked, in part, by groups on social media.

The House Judiciary Committee hearing got off to a rough start—one that inadvertently highlighted the need for a discussion on online hate speech. Before a YouTube livestream of the hearing even began, accounts filled a live comment section with calls to "end Jewish supremacy," spewing racist and anti-Semitic hate speech. After about a half hour, YouTube suspended the chat.

When the hearing did begin, representatives spent much of the four-hour session in partisan arguments over which party was racist or anti-Semitic. Many Democrats questioned the selection of conservative commentator Candace Owens as a witness.

Nearly halfway through the hearing, Georgia Rep. Henry Johnson asked Facebook and Google the first question about social media's role in spreading a video of the shooting that killed 50 Muslims at New Zealand mosques last month. During his Facebook livestream of the attack, the shooter mentioned YouTube, a platform criticized for spreading flat-earth and other conspiracy theories.

"Many white nationalists have used misinformation propaganda to radicalize social media users. How is YouTube working to stop the spread of far-right conspiracies intent on skewing users perceptions of fact and fiction?" Johnson asked.

Alexandria Walden, counsel for free expression and human rights at Google, responded that YouTube does not allow content that promotes or incites violence or hatred. She added that YouTube does not delete "content on the border" that could be considered harmful, but said the company will "no longer include those videos in our recommendation algorithm … and comments are disabled."

Yet an investigation published by Bloomberg last week found that YouTube executives purposefully allowed extremist views to spread on the platform because they drove engagement and, under the platform's business model, revenue. No representatives mentioned those findings in Tuesday's hearing.

On Facebook's end, public policy director Neil Potts said the shooter's video was uploaded around 1.5 million times in various forms, with edits that made it difficult for the platform's automatic filter tools to detect and remove each copy.

Louisiana Rep. Cedric Richmond said Google, Facebook and other tech platforms should better coordinate content moderation of white nationalist hate speech and violent content such as the New Zealand shooter's video, hinting that regulation could come if tech companies don't shape up.

"Figure it out because you don’t want us to figure it out for you," Richmond said.

Potts also faced questions about Facebook's recent decision to ban white nationalist and white separatist content as hate speech indistinguishable from white supremacy. Fellow hearing witness Kristen Clarke, president and executive director of the Lawyers' Committee for Civil Rights Under Law, noted that despite the change, many white nationalist groups still appear on the platform.

According to Potts, the latest policy change is one Facebook has "been looking at for a long time" as it has promised to increase safety on the platform. He said Facebook now has around 30,000 safety and security staff.

For some representatives, Facebook's policy change is too little, too late. Texas Rep. Sylvia Garcia shared the story of a Houston-area resident who was targeted and attacked for being Mexican-American and who later committed suicide. She voiced concern over rising hate crimes against Latinos and asked how platforms are being "more proactive in stopping some of this language" online.

Walden said Google and YouTube track trends on hate speech to identify dog whistles and slurs. Potts said Facebook uses automation and partnerships with academics and advocacy groups, as well as policy updates.

"Well, I hope you do more," Garcia said.
