At SXSW, talking about online harassment — but is anyone listening?

(Images by Rob Pegoraro)

SXSW Interactive — the annual festival that wrapped up Tuesday in Austin, Texas — has a special fondness for talking up all the ways that technology is going to be awesome. But this year, the festival's organizers set aside most of its first full day to discuss how technology has enabled some of our uglier habits.

First, though, those organizers had to be essentially shamed into hosting a track of talks on the topic of online harassment. And even then, the overwhelming majority of SXSW attendees skipped the whole thing. In the movement to make it harder to use social media to harass people whose opinions you dislike, that is what passes for progress.

How we got here

Originally, the Online Harassment Summit that took place Saturday was not even on the schedule. SXSW had cancelled a scheduled panel about online harassment in gaming after receiving threats of violence. But when BuzzFeed and Vox Media threatened to pull out of the festival in protest, organizers reversed course and put together this lineup of talks.

(Very SXSW sort of disclosure: One speaker at the summit, Jonathan Godfrey of ACT: The App Association, stayed in the house I rented for this week.)

They elected to hold the gathering in a relatively remote location, across the river from downtown Austin, and under strict security — I’ve never had my laptop bag inspected that closely, and I was told repeatedly that if I left it anywhere it would be confiscated.

That airport-esque regime and the outsized commute suppressed attendance; at one panel, I counted only 18 other people in the audience.

Of course, some SXSW attendees — the kind of people who say that recipients of online death threats and other abuse are just making it up or need to grow thicker skin — cited that low turnout as proof that Internet harassment wasn’t a real problem.

But as Saturday’s panelists told their limited audience, the problem is real, and it’s not just a matter of people using mean words.

Saturday’s testimony

The pattern is well documented: Somebody — usually a female somebody — expresses an opinion online and, in response, gets a torrent of spittle-flecked rage, up to and including rape and death threats personalized with their home or work address.

Jamia Wilson, executive director of Women, Action, and the Media, offered a breakdown of the Twitter harassment people had reported through her group's site: 27 percent was hate speech; 22 percent "doxing" (posting private information); 12 percent threats of violence; 9 percent lying about the target; and 3 percent revenge porn (posting real or fake naked photos of the target).

There’s also a political dimension to this, as former Texas state senator Wendy Davis said. “I could literally say it’s a beautiful day in Austin today, and the responses I get on Twitter are ‘baby murderer.’”

During her unsuccessful campaign for governor of that state, that Democrat had to deal with an incessant flood of abuse, including fake photos of her in one sex position or another. “Some of it was excruciatingly difficult to read,” she said, adding that she eventually took Twitter and Facebook’s apps off her phone.

In an interview afterwards, Davis told me those apps are back on her phone now, and she “uses them on a regular basis.” I was not surprised by that: How is a politician supposed to do her job these days without social media?

It is sometimes suggested that victims of online harassment “stay off the Internet for a while.” But, as multiple panelists said, that advice is both irritatingly useless and career-limiting.

Panelists also complained about the continued prevalence of hostile comments on media sites and suggested that a broken ad business model was part of the problem.

“They’re showing ads on every page and every comment and every click,” said Elisa Camahort Page, chief community officer at SheKnows Media. “Are they economically motivated not to moderate?”

What is to be done?

I’ve heard these stories before. But during Saturday’s sessions, and later on throughout SXSW, I found some ground for optimism about our ability to chip away at the problem of online harassment. And it starts with the ways social media networks deal with hostile behavior intended to silence speakers.

Multiple panelists commended Facebook and Twitter for improving their tools for dealing with abuse. “There has been a sea change,” Women’s Media Center director Soraya Chemaly said of Facebook. Wilson noted that Twitter, historically a laggard, has been more responsive lately: “Twitter listened and made some changes.”

Davis echoed those compliments in our conversation afterwards: “You’re seeing a greater understanding by these social-media platforms of exactly how to define harassment, and what the tools need to be to address that harassment.”

Creating new tools to fight harassment can also help. Medium, for example, is experimenting with machine learning to spot patterns of abuse, such as repetitive insults. “This person is not conversing,” trust and safety coordinator Greg Gueldner said Saturday. “They’re sniping.”

Another panel that morning featured the Israeli non-profit Red Button’s app, which lets people (and especially kids) report abusive behavior to have it investigated by volunteers.

At an afternoon session, Rep. Katherine Clark (D-Mass.) — the victim of a fake police report intended to get cops to storm her house — said she would introduce a bill to fund investigation and prosecution of online abuse. Clark announced the legislation, the Cybercrime Enforcement Training Assistance Act, on Wednesday.

And on Monday, the Coral Project — an open-source collaboration between the Washington Post, the New York Times, the Mozilla Foundation and the Knight Foundation — used a SXSW panel to announce its first shipped product. Its Trust software helps publishers rate commenters and find the most enlightening individuals among them.

(Don’t overlook the importance of having a service’s staff show up in comments. In a session about the role of community managers, Genius artist relations manager Rob Markman said that helped set a better tone: “We’re not necessarily [just] policing the community, but modeling behavior as well.”)

I like Coral’s goals, but the conversations I’ve had with people there suggest we’re unlikely to see sites share information about each other’s commenters. That could leave harassers free to jump from platform to platform, a problem Medium’s Gueldner confessed some uncertainty about Saturday.

One reason for optimism nonetheless: Many abusers don’t realize that they’re being jerks until somebody breaks things down for them. “Education in controlling harassment is very effective,” Gueldner said. “It’s hard to scale, but it works.”

Email Rob at rob@robpegoraro.com; follow him on Twitter at @robpegoraro.