How 8chan Became the Go-To Platform for Mass Shooters

8chan—pronounced “infinitychan” by its users—has one primary slogan: “Embrace infamy.” And the message board has more than lived up to its motto.

For years, it’s been the go-to place for extremist ideologues to chat anonymously with one another, a Wild West of the Internet with its own jargon and a fleet of atomized, angry users from around the world who sought common ground in a place that shunned censorship of any kind. And over the past six months, three separate far-right mass shooters used 8chan to publicize their actions beforehand, posting manifestos, video links, and playlists to augment the experience for their ideological comrades. When Brenton Tarrant posted a live-stream of himself murdering 51 people in Christchurch, New Zealand, on 8chan, he received near-universal praise from its users, several of whom hoped that he would rack up a “high score” of deaths. In total, the terror attacks by gunmen who posted manifestos to 8chan have resulted in a death toll of 71 since March.

After the latest such incident—a terrifying attack on Hispanics in El Paso that killed 22 and injured 26 more—tech companies that work hard to purvey an image of neutrality felt forced to act, and on Sunday, Web security provider Cloudflare dropped 8chan as a client. Cloudflare provides protection for websites against DDoS (distributed denial-of-service) attacks, which flood sites with traffic and render them unusable. “8chan has repeatedly proven itself to be a cesspool of hate,” wrote Cloudflare CEO Matthew Prince. “We just sent notice that we are terminating 8chan as a customer effective at midnight tonight Pacific Time. The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths.”

8chan was originally launched as an offshoot of 4chan, an older but similarly oriented “Wild West” message board that remains a congregation point for Internet users enamored of racial slurs, political violence, and anime porn. (A quick glance at 4chan’s /pol/ board, the place for its users to post about politics, this morning revealed multiple posts of Nazi flags, uses of the N-word, and praise of Croatia as a “Christian white ethnostate.”) As extreme as 4chan was and remains, 8chan was built to house users who couldn’t tolerate even the barest of moderation. The split occurred during the Internet-wide, misogynist cataclysm now known as GamerGate.

That “movement”—a loosely organized collective of Internet trolls, some anonymous, others emergent ideologues—began as retribution, after Eron Gjoni, a then-24-year-old man, posted a 10,000-word diatribe about the alleged infidelities of his ex-girlfriend, a 26-year-old indie video-game developer named Zoë Quinn. Among his allegations was that she had slept with a video-games writer in exchange for favorable coverage. The screed spread wildly among self-identified gamers. Its immediate repercussions were the vicious harassment of Quinn—who received a cascade of death threats; had her accounts hacked; and had her personal information, including her address, posted online, causing her to leave her home in fear for her safety. The movement soon metastasized, treating the false allegation that Quinn had traded sex for favorable coverage as evidence of an industry-wide crisis in “ethics in games journalism.”

Despite its occasional male targets, GamerGate never lost its misogynist rancor. Trolls learned to gamify their tactics, overwhelming selected targets with abuse or contacting en masse the ad sponsors of journalistic outlets who had the temerity to criticize them. As a 2017 report from the think tank Data & Society on GamerGate’s broader consequences for disinformation online put it: “Gamergate participants asserted that feminism—and progressive causes in general—are trying to stifle free speech, one of their most cherished values. This is a retrograde populist ideology which reacts violently to suggestions of white male privilege.”

By September 2014, according to reporting by the Daily Dot, 4chan’s moderators had effectively banned GamerGate discussions from the board, citing the site’s rules against posting personal information. A group of militant misogynists, enraged by the ban, congregated on the newly created message board 8chan, the better to coordinate harassment against women, particularly women of color. They were soon joined by a network of pedophiles who used the site’s anything-goes ethos to post graphically sexual images of children.

Over the ensuing five years, GamerGate itself has faded, but the harassment techniques forged in that crucible have become an inescapable part of the current-day Internet. As overt racist ideology became more mainstream in the era of Donald Trump, many of the men involved in GamerGate became part of campaigns that utilized the same tactics to push racism, anti-immigrant sentiment, and white nationalist propaganda. More recently, 8chan was the spawning ground for the baroque, bizarre, and violent conspiracy theory known as QAnon—a catchall conspiracy that alleges that Democrats and celebrities run a pedophilic, Satanic sex ring that President Trump is perennially on the verge of busting.

While Frederick Brennan, the site’s creator, has deplored its current state in recent news interviews, since 2015, 8chan has been run and bankrolled by Jim Watkins, an American expat currently living on a hog farm in the Philippines. Watkins, who made his money by getting in on the “ground floor” of online pornography and helping Japanese pornographers skirt the country’s strict regulations, has been frank about his tolerance of the white supremacists who utilize his site. He told Splinter in 2016, “I don’t have a problem with white supremacists talking on 8chan. They have reasons for their beliefs. I don’t need to justify their reasons.”

Watkins’s laissez-faire approach mirrors that of the man who stepped up to provide support to the message board after Cloudflare cut ties. Rob Monster, CEO of Epik, a provider of domain registration and web hosting, quickly took on 8chan as a new client. It’s not Monster’s first brush with infamous far-right sites: he has previously stepped in to provide hosting for the social-media network Gab, and acquired DDoS protection service BitMitigate—a small company that made a big splash providing services for the neo-Nazi site Daily Stormer.

As of this writing, 8ch.net remains dark, a hole in the Internet where infamy once spawned. (Epik, as it turned out, was leasing web space from web services company Voxility—which cut ties with Epik immediately after Stanford researcher Alex Stamos pointed out its role on Twitter.) But the fact remains that it took multiple mass murders for tech companies to face up to their responsibilities as conduits for violent ideology.

8chan is a particularly egregious example of the worst of the Internet, but white supremacist rhetoric, calls to violence, and harassment are ongoing, endemic problems on all major social media sites. Tech companies, it seems, are beginning to grasp the scope of the problem, offering at the very least lip service to the necessity of tamping down far-right agitation on their platforms. Facebook, for example, recently introduced a policy formally barring outright white nationalist statements on its platform; YouTube made a similar announcement in June, and Reddit has quarantined several far-right communities due to ongoing violent rhetoric. The heady, unlimited, libertarian commitment to “free speech” that defined earlier days on the Web has shifted, at least minutely, in Silicon Valley, as tech executives evolve to face a world that they themselves have shaped. One solution is for companies to pour resources into moderation to ensure their extant rules are more consistently applied; Mark Zuckerberg has called for global regulations on hateful and violent content.

What makes 8chan unusual is its affiliation with openly extremism-supporting executives, rendering the site itself somewhat immune to public pressure; that’s what made Cloudflare, which is not a platform but a protection-service provider, step in. The infrastructure of the Web itself is made up of companies that have traditionally thought of themselves as neutral actors. But in a world of Internet ubiquity—a world in which genocides can be coordinated on social media, and extremist movements can go global in a heartbeat—neutrality itself can be a dangerous position.

The slain in Christchurch, Poway, and El Paso revealed to the world what marginalized Internet users have long known: providing succor and service to hateful and violent content was never neutral in the first place. The question of how tech companies can curtail dangerous rhetoric without unduly impinging on their users’ free expression is a complex one; it’s not clear that a white-male-dominated Silicon Valley is even up to the task. What is clear, however—what has been clear for years—is that freedom of speech for harassers and abusers, who brigade, doxx, harass, and threaten minorities when not shooting them, means enforced silence, self-censorship, and fear for their targets.

Talia Lavin is a writer based in Brooklyn. Her first book, Culture Warlords, is forthcoming in 2020 from Hachette Books.

Originally Appeared on GQ