We live in dangerous times when newspapers are demonizing the very law that helps stop the spread of hate and extremist speech. Despite what some headlines might say, Section 230 of the Communications Decency Act has succeeded in its goal of making the internet a better place.
But never letting facts and reality get in the way of a click-worthy headline, leading newspapers have mounted several attacks on this amazing law.
There are those who wrongly blame Section 230 for the problems on the internet. They claim we would be better off without that law’s incentives to moderate content created by users. These critics appear confused, or disingenuous, about what Section 230 actually does, and they have apparently forgotten that the First Amendment bars the government from blocking hateful or disturbing speech, whether online or off.
Section 230 doesn’t enable hate speech on the internet. It doesn’t make the internet a worse place. It is actually the law that stands between an internet where much offensive content is removed and an internet where anything goes.
Despite the misinformation surrounding it, Section 230 is straightforward: it has two components. The oft-cited “immunity provision,” Section 230(c)(1), says that a platform is not liable for content created by others, unless that content violates federal criminal or intellectual property law.
Despite what anti-tech advocates want you to believe, this is not a novel idea. This was Congress in 1996 enshrining what is called “conduit immunity,” a legal concept that has been applied to all kinds of intermediaries since the 1950s — well before the creation of the internet.
Take for example Barnes & Noble. If it sells a book with libelous content, it would be absurd to hold Barnes & Noble liable. And if a criminal uses a phone to commit a crime, it would be absurd to hold AT&T liable. If you bought a lemon of a car listed in the NY Times classifieds, you could not hold the Times liable for misrepresentation in that ad.
Along comes the internet, and in 1991 a federal court, in Cubby v. CompuServe, applied this “conduit immunity” to an online service that did no content moderation whatsoever. In essence, Section 230(c)(1) simply enshrined conduit immunity in law, while giving platforms no immunity for violations of federal criminal or copyright law.
It is in the lesser-known Section 230(c)(2) that we see the real brilliance and benefit of this law at protecting us from hateful and extremist speech. Section 230(c)(2) empowers platforms “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Section 230(c)(2) enables Gmail to block spam without being sued by the spammers. It lets Facebook remove hate speech without being sued by the haters. And it allows Twitter to terminate extremist accounts without fear of being hauled into court. Section 230(c)(2) is what separates our mainstream social media platforms from the cesspools at the edge of the web.
Now let’s suppose anti-tech advocates get their wish, and upend Section 230. What would be the effect?
A diminished Section 230 makes it easier for hateful and extremist speech to spread to every corner of the internet. A diminished Section 230 makes it easier to send spam messages and viruses across the internet.
While some vile user content is posted on mainstream websites, what often goes unreported is how much of that content is removed. In just six months, Facebook, Twitter, and YouTube took action against 11 million accounts for terrorist or hate speech, against 55 million accounts for pornographic content, and against 15 million accounts to protect children.
All of these actions to moderate harmful content were empowered by Section 230(c)(2).
Did Section 230 make the internet perfect? No. Nor did seat belts eliminate automobile fatalities. Is there room to improve the internet? Of course. But diminishing Section 230 will only make the internet worse, not better.
Perhaps all this vitriol against Section 230 is just the work of newspapers and broadcasters upset about how social media is reducing their ad revenue and influence. It could be that these legacy industries want to gut Section 230 so that content moderation won’t happen on the big social media sites, turning them into virtual cesspools in the hope that readers and advertisers come back to traditional newspapers and broadcasters.
Hopefully that isn’t their true motive for hating Section 230. Instead, let’s assume that Section 230 critics just don’t understand that particular law and our Constitution’s First Amendment.
Carl Szabo is general counsel for NetChoice, a trade association of eCommerce businesses and online consumers that includes Google and Facebook, and is also an adjunct professor of privacy law at the George Mason University Antonin Scalia Law School.
Morning Consult welcomes op-ed submissions on policy, politics and business strategy in our coverage areas.