Ending Section 230 Would Kill the Internet as We Know It

Reps. Cathy McMorris Rodgers (R–Wash.) and Frank Pallone Jr. (D–N.J.) preside over a hearing of the House Energy and Commerce Committee.
Tom Williams/CQ Roll Call/Newscom

Described as "the 26 words that created the internet," Section 230 of the Communications Decency Act catches a lot of flak for a piece of legislation that is largely responsible for online platforms' willingness to host discussion forums. In its absence, social media companies and message boards would likely return to the previous era of either allowing anybody to say anything, or else taking legal responsibility for every insult and slur posted on their platforms. That would probably mean the end of online discourse as we know it—which may be what happens if proposed bipartisan legislation "sunsets" Section 230.

Immunity From Consequences?

"The fact that Section 230 has operated as a near complete immunity shield for social media companies is due to decades of judicial opinions trying to parse its ambiguities and contradictions," Rep. Frank Pallone Jr. (D–N.J.) huffed this week at a hearing on the Section 230 Sunset Act which would abolish that law after December 31, 2025. After a litany of the alleged horribles available online, including "videos glorifying suicide and eating disorders" and promotion of "illegal opioid sales to people searching for addiction recovery gatherings," Pallone added: "I reject Big Tech's constant scare tactics about reforming Section 230. Reform will not 'break the internet' or hurt free speech."

In his certainty that Section 230 must go, Pallone, the ranking Democrat on the House Energy and Commerce Committee, is joined by committee chair Cathy McMorris Rodgers (R–Wash.). The two co-authored a Wall Street Journal op-ed this month insisting their legislation "would require Big Tech and others to work with Congress over 18 months to evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms."

But Section 230 doesn't do what they claim, and repealing it won't create the better world they envision.

…or Protection From Censorship?

"How do you know when Section 230 is being misunderstood?" Robert Corn-Revere, chief counsel at the Foundation for Individual Rights and Expression (FIRE) quipped last year. "A politician is talking about it."

As Corn-Revere points out, "adopted in 1996, Section 230 was proposed as a way to counter efforts to censor internet speech." Prior to its passage, online platforms were treated as publishers of material posted on their sites if they made any attempt at moderation. Their options were to allow free-for-alls, to scrutinize every post for potential legal liability, or to bar third parties from posting anything at all.

Included in the Communications Decency Act, Section 230's important provisions survived the voiding of most of that law on constitutional grounds. It reads, in part: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Those are the 26 words that Jeff Kosseff's 2019 book credits with creating the internet. They also take the blame for what so many politicians hate about the online world.

"Democrats say too much hate, election meddling, and misinformation get through, while Republicans claim their ideas and candidates are censored," Bloomberg's Todd Shields and Ben Brody noted in 2020 during an earlier spasm of bipartisan hate directed at Section 230. The left is angry that material they don't like can be posted online without platforms being compelled to suppress it, while the right is upset that those platforms exercise moderation by their own rules rather than those of posters. Both want to strip firms of the ability to exercise their own judgment over what appears on their services.

Section 230 Most Protects Small Fry, Not Big Tech

"For the biggest players, more carefully policing content would probably mean bolstering the ranks of thousands of hired moderators and facing down far more lawsuits," added Shields and Brody. "For smaller players, the tech industry argues, it could prove ruinous."

That is, online discussions could become more hobbled and expensive, or even largely disappear.

"The law is not a shield for Big Tech," point out the Electronic Frontier Foundation's (EFF) Aaron Mackey and Joe Mullin in defending Section 230. "Critically, the law benefits the millions of users who don't have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech."

Mackey and Mullin worry that lawmakers' efforts to "require Big Tech and others to work with Congress" to establish new liability rules for content moderation would ditch current neutral rules in favor of a new regime that favors politicians and big companies over everybody else.

"Online speech is frequently targeted with meritless lawsuits," they write. "Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot."

Don't Unleash Censors, Restrain Them

FIRE, for its part, thinks people complaining about online conduct may well have valid concerns, but that sunsetting Section 230 is the wrong way to go. The free speech group highlights polling which shows that "two-thirds of Americans don't trust the government to regulate content fairly — including majorities of Democrats, Republicans, and independents."

In place of eliminating Section 230, FIRE proposes legislation that would compel the government to report, within 72 hours, any content moderation demands made of internet services. The disclosure would have to include the identity of the agency involved, the service targeted, and a "description or a copy of the content published on a covered platform."

FIRE also recommends that online platforms voluntarily adopt transparent and unbiased moderation policies. Users "should be able to appeal moderation decisions that affect them."

Those proposals won't give us a perfect internet, because nothing will ever meet that impossible bar. They also are unlikely to satisfy the concerns of politicians who pretend that Section 230 grants "Big Tech" immunity even as they craft legislation that will most hurt small businesses and discussion forums—but disappointing government officials is a feature, not a bug.

"The First Amendment—not Section 230—is the basis for our free-speech protections in the U.S.," insist Pallone and Rodgers as they peddle a law that would extend their power over our conversations.

More protection from government meddling is better than less, and Section 230 functions as part of a shield for speech that needs strengthening, not sunsetting.
