
Congress’ Competing Demands on Content Moderation

The internet has revolutionized almost every facet of global life since it became publicly available in the early 1990s, allowing people across the world to connect, share their stories, and gain quick access to the latest news and events. But the same factors that make the internet great for spreading information — the ability to share any idea at any time with anyone connected to the network — also make it equally easy to spread misinformation and other problematic content.

While internet misinformation isn’t a new phenomenon, the 2016 election and its aftermath have made it painfully clear that bad actors are harnessing such misinformation in ever more damaging ways. Unfortunately, most of the proposals lawmakers have offered to address online harms focus on the ultimately counterproductive goal of shifting responsibility for problematic online content onto the websites that host it rather than the people who actually post it, demanding that websites preemptively remove legal and illegal content alike. The House Intelligence Committee recently held a hearing to discuss how to force websites to remove “deepfakes” — realistically altered videos often used to convey false information — and other misinformation like the slowed-down video of Speaker Nancy Pelosi that caused a stir when Facebook refused to delete it from its platform.

At the same time, critics of online platforms are increasingly expressing concerns about how websites are dealing with user speech, claiming that sites are unfairly removing legal content for political reasons. Last week, Sen. Josh Hawley (R-Mo.) released a bill that would make websites liable for user speech if they engage in “politically biased” content moderation.

You might notice a tension between these two policy demands. Requiring platforms to disable access to problematic but legal content like the doctored video of Speaker Pelosi, while penalizing them for removing legal but problematic political speech — like the doctored video of Speaker Pelosi — puts websites in an untenable position. The recent history of content moderation policy is full of impossible demands from politicians, including calls for automated filters that can reliably identify copyrighted content (no such technology exists).

This technical illiteracy wasn’t always the norm. Way back in 1996, Congress recognized that holding platforms liable for any and all content on their sites was unworkable for a medium as limitless and immediate as the internet. By passing Section 230 of the Communications Decency Act, Congress ensured that platforms wouldn’t be held liable for failing to meet impossible standards, but would have the freedom to moderate content when appropriate.

Section 230 establishes two broad concepts: first, because it is impossible for any internet company to fully prevent bad actors from sharing illegal content online, the law should place liability on the users who actually upload such content, not the websites they use; second, notwithstanding this apportionment of liability, websites are empowered to take a proactive role in removing bad online content (even problematic legal content) and cannot be held liable when they decide to disable access to content they view as objectionable.

These technical realities underlying Section 230 are just as true today as they were at the beginning of the internet era. Content moderation will never be perfect. Even for content that is relatively easy to identify as a technical matter, mistakes are common. Child exploitation imagery, for example, is relatively easy for platforms to handle, as the illegality of such content is facially apparent, and there is a robust, widely accessible database of known files that websites can cross-reference to identify illegal content. And even for something as straightforward as child exploitation imagery, content matching is imperfect. Without Section 230, even one mistake, no matter how unpredictable, could bankrupt a small startup. While accurately reviewing every piece of relevant content might be feasible for a newspaper or magazine, it is simply not feasible for websites.
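To make concrete why even this comparatively easy case is imperfect, consider a minimal sketch, in Python, of matching uploads against a set of known file hashes. The hash set, function name, and use of an exact cryptographic hash are illustrative assumptions rather than a description of any platform’s actual system; production services rely on perceptual hashing, which tolerates minor edits but still makes mistakes.

```python
import hashlib

# Hypothetical stand-in for the shared hash databases the article mentions;
# the entries and names here are illustrative, not any real hash list.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", used as a placeholder "known file"
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_file(file_bytes: bytes) -> bool:
    """Return True only if the upload is byte-for-byte identical to a known file."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_file(b"test"))   # True: exact copy of a known file
print(matches_known_file(b"test!"))  # False: a one-byte change defeats exact matching
```

Even the more sophisticated perceptual-hashing approaches used in practice only shift this tradeoff: they catch more altered copies at the cost of occasional false positives, which is exactly the kind of unavoidable error described above.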

If identifying child exploitation imagery isn’t a foolproof process for platforms, imagine how difficult it is to stop the spread of misinformation. Deepfakes are a serious problem precisely because they’re difficult to identify. If it’s hard for users to distinguish between real and doctored videos, why would it be any easier for websites — particularly small startups — to know what to delete?

If lawmakers were truly serious about curbing the spread of misinformation online, they would be looking to strengthen Section 230, which encourages platforms to engage in content moderation by protecting them from legal liability for the content they take down. That’s why calls to strip platforms of their ability to remove offending content over allegations of political censorship would only exacerbate the problem. Platforms that cannot moderate third-party content at all will become overrun by hate speech, misinformation, terrorist propaganda, and other harmful content that will drive away users.

Conservatives in particular are complaining, with little more than anecdotal evidence, that websites are systematically censoring and removing right-leaning content. Sen. Hawley’s recent bill — the Ending Support for Internet Censorship Act — would even require websites to be “politically neutral” in order to obtain Section 230’s limitation on liability. It’s not at all clear what “politically neutral” could mean in this context, or how such an obligation would result in less problematic content online. Should websites be legally bound to keep up posts from neo-Nazi and jihadist groups, since those qualify as “political speech”? Under this kind of deeply misguided legislation, websites would have to keep up all manner of objectionable (but not clearly illegal) content — including deepfakes.

Internet platforms face a lose-lose proposition here. It’s difficult enough for websites to even find all the controversial speech online, much less decide what to delete and what to keep up. Making this process even more difficult by putting legal liability on websites when they take down too much or too little, as policymakers seem insistent on doing, will not improve anything. Congress understood this in 1996, but today’s policymakers seem to have forgotten the lesson.

Evan Engstrom is executive director of Engine, a nonprofit advocacy and research organization that supports pro-startup public policy.

