Anti-vax myths, distorted Nancy Pelosi videos, a conspiracy theory that a recent mass shooter was a supporter of presidential candidate Beto O’Rourke — misinformation abounds on Facebook (FB). In an exclusive interview, top Facebook executives said the company has made progress addressing false posts but still struggles to identify them, especially in the high-stakes regions where misinformation can lead to deadly violence.
“We don't want to be in the position of determining what is true and what is false for the world,” says Monika Bickert, Facebook’s head of global policy management, whose team sets the rules for the site’s 2.4 billion users. “We don't think we can do it effectively.”
“We hear from people that they don't necessarily want a private company making that decision,” she adds.
Reluctant to judge veracity on its platform itself, Facebook partners with fact-checking organizations that vet posts, an arrangement that began after the 2016 presidential election. But Bickert acknowledged that the company often lacks such partnerships in violence-prone regions.
“The sad reality is, in the places in the world where you are most likely to have on the ground violence, those are often the same places where it's hard to have a fact-checking partner, or even a safety organization, tell us what the real situation is on the ground,” she says.
Last year, U.N. human rights experts examining violence perpetrated against Rohingya Muslims in Myanmar said that social media had played a “determining role” in the conflict. Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar, put it bluntly: “As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media.”
A Facebook-commissioned report, released last November, acknowledged that the company had failed to prevent the platform from “being used to foment division and incite offline violence” in Myanmar.
“If we have misinformation where a safety partner is able to confirm that can contribute to imminent or ongoing violence on the ground, then we will remove it,” Bickert says.
Concerns about false and inauthentic posts on Facebook reached a fever pitch after the 2016 presidential election, the outcome of which some have attributed to a disinformation campaign on the platform carried out by a Russian intelligence agency. The Mueller Report, released in April, detailed Russia-operated Facebook Groups like “United Muslims of America” and “Being Patriotic” that each had hundreds of thousands of followers.
The site drew criticism early this year for allowing opponents of vaccination to spread false information about vaccines, and in May, for permitting distorted videos of U.S. House Speaker Nancy Pelosi (D-CA) to be viewed millions of times. (Facebook reduced the distribution of the videos and attached a warning to them, but did not remove them.)
In an exclusive interview at Facebook’s Menlo Park headquarters, Yahoo Finance Editor-in-Chief Andy Serwer spoke with the three executives who oversee content at Facebook — Bickert; John DeVine, VP of Global Operations; and Guy Rosen, VP of Integrity.
The executives said the company has come a long way in addressing misinformation since the 2016 election.
“We've already made a lot of progress on misinformation,” says Rosen, who oversees the development of products that identify and remove abusive content on the site.
Last Thursday, Facebook launched a partnership with the World Health Organization (WHO) that will direct users searching for information on vaccines to the WHO’s website.
“There's always going to be continued challenges,” Rosen says. “And it is our responsibility to make sure that we are ahead of them and that we are anticipating what are the next kind of challenges that bad actors are going to try to spring on us.”
Andy Serwer is editor-in-chief of Yahoo Finance. Follow him on Twitter: @serwer.