Facebook's CEO Mark Zuckerberg has called for global regulations on internet platforms, but critics say the leading social network is shirking its responsibility to weed out violent and abusive content.
Washington (AFP) - Facebook found itself embroiled anew in controversy Thursday after chief executive Mark Zuckerberg argued the leading social network should not filter out posts denying the Holocaust.
The comments by Zuckerberg drew fierce criticism and appeared to undermine Facebook's latest effort to root out hate speech, violence and misinformation on its platform.
In an interview with tech website Recode on Wednesday, Zuckerberg said that while Facebook was dedicated to stopping the spread of fake news, it would not filter out posts just on the basis of being factually wrong -- including from Holocaust deniers and the conspiracy theory website Infowars.
"I'm Jewish, and there's a set of people who deny that the Holocaust happened," he said in the interview.
"I find that deeply offensive. But at the end of the day, I don't believe that our platform should take that down because I think there are things that different people get wrong. I don't think that they're intentionally getting it wrong."
Critics quickly lashed out at Zuckerberg, saying comments like these can incite hatred and violence.
"Holocaust denial is the quintessential 'fake news,'" said Abraham Cooper of the Simon Wiesenthal Center, a rights group named for a famed Nazi hunter.
"The Nazi Holocaust is the most documented atrocity in history. Allowing the canard of Holocaust denial to be posted on Facebook, or any other social media platform, cannot be justified in the name of 'free exchange of ideas.'"
Zeynep Tufekci, a University of North Carolina professor who follows social media, said on Twitter: "Harder to find a group of people more *intentional* about 'denying' an atrocity in order to pave the way for more violence than holocaust-deniers."
Zuckerberg later emailed Recode to clarify his comments, stating that if something is spreading and rated as false by the site's fact checkers, "it would lose the vast majority of its distribution" on user feeds, and that "if a post crossed the line into advocating for violence or hate against a particular group, it would be removed."
- Distraction from new effort -
The episode was an unwelcome distraction for Facebook after it held a media briefing on the company's new policy to remove bogus posts likely to spark violence.
The new tactic, now being rolled out across the global social network, was tested in Sri Lanka, which was recently rocked by inter-religious violence fueled by false information posted on the platform.
Jennifer Grygiel, a social media professor at Syracuse University, said that despite Facebook's ramped-up efforts, it needs far more people to weed out harmful posts on a platform with some two billion users worldwide.
Zuckerberg "needs to figure out content moderation and he can't do it without more people. This has life and death implications," Grygiel told AFP.
"I don't think he understands that the decisions he makes have real-world implications for democracy."
Facebook has been blamed for failing to curb incitement to violence against the Rohingya Muslims in Myanmar, and its WhatsApp messaging service has been implicated in lynchings and mob violence in India.
The latest controversy comes with Facebook seeking to repair the damage from misinformation spread on the platform during the 2016 US election campaign and the hijacking of private data by consulting firm Cambridge Analytica as it worked on Donald Trump's campaign.
At the same time, Facebook has been accused by some politicians in Washington of bias in filtering out conservative voices.
Some analysts said Facebook faces a difficult task: filtering out misinformation and calls to violence, and complying with hate speech regulations in various countries, while still remaining an open platform that allows free speech.
"Facebook is in over its head but nobody has a full answer," said Tufekci in a tweet.
Fellow North Carolina professor Daniel Kreiss responded by saying "the issues are *really* challenging -- a big problem is that FB never thought about any of the implications of its platform, data, speech policies, or misinformation before 2016, even as many of us were raising concerns."