How Facebook should fight fake news

There are ways Facebook can fight fake news. (image: AFP — Josh Edelson)

Facebook (FB) is once again promising to do better at stopping the spread of disinformation, which means it must be a week with seven days.

The social network’s latest pledge to fight hoaxes shared on its platform came in a post Thursday highlighting its election-security efforts. In it, product manager Tessa Lyons noted that Facebook’s existing efforts to slow the spread of stories rated false by third-party fact-checkers were “dropping future views on average by more than 80%.”

Lyons also announced that as of Wednesday, Facebook had begun fact-checking photos and videos in France, in partnership with the news service AFP, with “more countries and partners” coming soon.

While U.S. users wait for similar accountability — some of my Facebook friends can’t seem to stop sharing infographics with dubious data — you may ask yourself if it’s even possible for a social network to be not terrible when it comes to fake news.

The answer is: Yes. But doing so at a scale as large as Facebook’s lands somewhere between painful and implausible.

Truth is easier on a smaller scale

Look to small groups populated by “people within social networks who are trying not to be used,” said Staci Kramer, a long-time journalist and an astute observer of social media.

She cited in particular a handful of Facebook groups with strong leadership by example — for instance, one set up by the policy-news site Vox to discuss health care.

“You’ll see some really stringent requirements about how you share information,” she said.

But don’t count on Facebook’s algorithms to point you to these shining cities on a hill.

“Facebook is really indiscriminate in how it recommends groups to people,” Kramer warned. “Facebook doesn’t know if it’s recommending a valuable group or not; it’s recommending a bunch of keywords.”

She also pointed to Reddit as a good, if unexpected, example, thanks to the way it lets “subreddit” forums develop their own culture.

So while on the one hand you have the pro-Trump r/The_Donald subreddit, which Reddit CEO Steve Huffman called “crass and offensive” at a SXSW panel, you can have others that aspire to do better.

“I think subreddits have the best potential for self-policing,” Kramer said. “Because they get really territorial about what they say they’re gonna be.”

The example of Wikipedia

Alex Howard, deputy director of the Sunlight Foundation, a Washington non-profit that advocates for government transparency, noted that such early social networks as bulletin-board systems and mailing lists had structural incentives to play nice: If you didn’t contribute, everybody would know soon enough.

“One of the downsides of planetary-scale social platforms is that those incentives get blown up,” he said.

But one large site, Wikipedia, has been able to keep that sense of personal accountability through its self-government. “There’s traditions, there’s structures, there’s a community that’s involved and invested,” he said.

(Howard noted that Katherine Maher, the executive director of Wikipedia’s parent foundation, sits on Sunlight’s board.)

Beyond the public edit histories for every page on the crowd-sourced encyclopedia (see, for example, the log for Sunlight’s Wikipedia entry) and Wikipedia’s volunteer moderators who are themselves subject to the rules, Howard noted how Wikipedia deals with identity.

Instead of imposing a real-names policy like Facebook’s, Wikipedia gives users accounts that are pseudonymous but persistent and include a public log of contributions. “There’s identity which over time accretes authority,” he said.

But do you want that sort of rating system on a Facebook or a Twitter (TWTR)?

“I don’t think Americans are going to be comfortable with the social credit score that China’s going towards,” Howard said. In March, the government warned that it might yank the travel privileges of citizens with low scores.

Little tweaks can help

That doesn’t mean that a Facebook or a Twitter can’t tweak their systems to combat what we used to call “fake news” before President Trump began slapping that label on coverage he doesn’t like.

Kramer urged Twitter to add not an edit function but the ability to append a correction — unlike replying to your own tweet, this addition would always be visible, even when the tweet was embedded elsewhere.

Howard suggested that social networks be more aggressive in taking the microphone away from people who broadcast disinformation.

Another observer and critic of social media, Future Today Institute founder Amy Webb, called on Facebook to put more of its vaunted machine-learning work into fighting hoaxes.

“There are ways to manage this algorithmically, but they’ve chosen not to,” she said.

But change also has to include you and me calling out phony news.

“I don’t think we need better sites/apps as much as we need to improve ourselves,” Dan Gillmor, a professor at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, explained. “Since we’re all creators, too, we need to do that responsibly.”

Follow Yahoo Finance on Facebook, Twitter, Instagram, and LinkedIn.

Email Rob at rob@robpegoraro.com; follow him on Twitter at @robpegoraro.
