
On Policing Content, Social Media Companies Face a Trust Gap With Users

As a result, 59% of users want the government to have a role in crafting content moderation policies
October 09, 2019 at 12:01 am UTC

Key Takeaways

  • Most said they had “not too much” or “no confidence” in social media companies to completely remove offensive content, false information, harassment and hate speech.

  • Many users also say the companies are at least partially responsible for the harmful content spreading online.

Facebook Inc. could now be compelled through court orders to remove illegal content shared on its platform, under a ruling issued last week by the European Court of Justice. It’s a move the American public might applaud, according to Morning Consult polling, which finds that most social media users have little to no confidence that social media companies are capable of removing harmful content from their sites.

According to a Morning Consult survey of 2,072 self-identified social media users, most said several types of objectionable content, such as misleading health information, should be removed from social media platforms. But most of them also said they have “not too much” or “no confidence” in social media companies to completely remove four different types of harmful content: false information (62 percent), hate speech (53 percent), harassment (55 percent) and offensive content (52 percent). 

With low trust in self-regulation, people may be seeking a solution from Washington: 59 percent of social media users say the government should play a role in regulating platforms’ content moderation policies. And pluralities say social media companies are at least sometimes legally responsible for the harmful content users share on their sites, raising questions about the viability of Section 230, the tech industry’s legal shield against being sued for the content its users post.

The Aug. 21-24 survey, which has a margin of error of 2 percentage points, showed that users’ stances on content moderation rarely differed regardless of which platforms they said they used, meaning a Facebook user and a Twitter user often held the same viewpoints.

While content moderation has long been a subject of internal discussion for social media companies, recent events, such as interference in the 2016 election and accusations of anti-conservative bias from President Donald Trump and other Republican politicians, have increased scrutiny of the issue from lawmakers and regulators. The heightened conversation around how social media companies craft and implement rules regarding what content is allowed on their sites has prompted several recent congressional hearings in both chambers.

No matter the catalyst, the debate almost always comes back to whether to amend or revoke a law passed in the 1990s: Section 230 of the Communications Decency Act, which grants websites some legal immunity for posted content. This means internet companies like Facebook, Twitter Inc. and Airbnb Inc. can’t be sued for disruptive or harmful content shared by a person on their forums.

But experts and trade associations note that rolling back Section 230 protections might not be the cure-all for content moderation complaints that Washington officials think it is. As Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and Shorenstein fellow at the Harvard Kennedy School, said, the issue with content moderation stems from three places: the amount of information companies are trying to regulate, the role of free speech protections and the ethical considerations surrounding content, such as the definition of falsehoods. 

“These companies, they can pretty well determine without any moral headache that the Christchurch shooting should not belong on their platform,” Ghosh said. “But when, for example, a domestic disinformation operator starts becoming active on their platform and pushing lies about a political candidate, how do these companies treat that? Those are waters that they don't want to wade into.”

In recent months, social media platforms’ efforts to address content moderation issues have largely centered on regulating themselves. Last month, Facebook rolled out the charter for its new independent oversight board, a panel that will make final decisions on controversial moderation cases. In July, Twitter expanded its definition of hate speech to include language that dehumanizes people on the basis of their religion, and YouTube Chief Executive Susan Wojcicki spent the entirety of her quarterly letter to creators in August defending the idea of an “open platform” amid pressure from lawmakers.

Facebook said in May it had disabled 2.2 billion fake accounts between January and March this year, while Twitter said it took action against 56,577 accounts between July and December 2018 for posting violent threats, including the promotion of terrorism. And Google-owned YouTube said it had removed more than 4 million channels for violating its community guidelines between April and June.

 But any self-regulation effort comes up against the problem of scale: Carl Szabo, vice president and general counsel of internet trade association NetChoice, which advocates for Section 230, said that part of the issue is the sheer amount of content that filters through the platforms. For example, Facebook’s moderators review more than 2 million pieces of content daily, according to a spokesperson, who also said by email that the company has tripled the number of people working on safety and security issues since 2016. 

Scott Haber, a spokesperson for the Internet Association, which represents internet companies, said in a statement that Section 230 is the law that “makes all of the moderation possible” because the law states that companies are granted protections for “any action voluntarily taken in good faith” to remove or restrict access to obscene and harmful material.

But Ghosh said the current business model for social media, which relies heavily on sharing content and creating buzz, often incentivizes the spread of harmful content, because it is usually the most engaging.

The problem of pushing harmful and controversial content will not go away “so long as it’s legal to do so,” Ghosh said. 

In Washington, the momentum behind giving social media companies less cover for that content has been increasing on both sides of the aisle: Sen. Josh Hawley (R-Mo.) introduced legislation (S. 1914) in June that would remove liability protections from companies that fail to provide “politically unbiased content moderation” or to submit to a third-party audit. In the House, Rep. Ed Case (D-Hawaii) introduced a bill in September (H.R. 4232) that would amend Section 230 so that short-term rental platforms like Airbnb are responsible when a listing violates local laws. And the House Energy and Commerce Committee’s consumer protection panel will hold a hearing on content moderation later this month.

However, the appetite for holding social media companies legally accountable for their users’ content is far weaker among the users themselves. Morning Consult’s survey found that majorities of social media users said individuals are always liable for the objectionable content they post. By comparison, 20 to 25 percent said social media companies were always liable, depending on the type of offensive content.

For social media companies, the fight might never be over, said Jim Harper, a fellow at the American Enterprise Institute focused on tech issues. Although Harper says the liability protections afforded to companies in the 1990s were a “pretty good guess” at what would be needed, the means of getting there, through legislation rather than court action, sets the law up to be a political football.

“You’ve got this perception that the big tech companies got a special exemption,” Harper said. “And a special exemption is not going to be as respected the way that a rule that gets hammered out over the years will be respected.”

The story has been updated to clarify in Scott Haber's statement which phrase was excerpted from Section 230. 

Sam Sabin previously worked at Morning Consult as a reporter covering tech.
