
Social Media Users Think Politicians Should Pay Heavy Price for Spreading False COVID-19 Information

Strong support for banning or suspending politicians, president, journalists, government agencies
Thirty-eight percent of social media users said politicians should be banned from platforms for spreading false or misleading coronavirus information, ahead of the president (37 percent), journalists (36 percent), government agencies (36 percent) and business leaders (32 percent), among others. (Getty Images)
May 20, 2020 at 4:10 pm UTC

Key Takeaways

  • 38% of social media users said politicians should be banned from platforms for spreading false or misleading coronavirus information -- the highest share for any public figure surveyed.

  • 32% of social media users want false or misleading coronavirus content removed from all social media sites, not just the platform where it was originally posted.

On Twitter, President Donald Trump has tossed his support behind the antimalarial drug hydroxychloroquine as a COVID-19 treatment -- a drug some public health experts have warned can have serious or potentially fatal side effects -- and he has encouraged protesters rallying against stay-at-home mandates, putting a strain on the social media site’s policy against content that could prompt real-world harm.

And Trump hasn’t been the only world leader testing those boundaries in recent months. In March, Twitter Inc. deleted two tweets from Brazilian President Jair Bolsonaro that also praised the use of hydroxychloroquine to treat COVID-19 and encouraged the end of social distancing guidelines in his country. Twitter also deleted a tweet from Venezuelan President Nicolás Maduro promoting a “natural brew” to cure the illness.

A new Morning Consult survey suggests that social media users support platforms such as Twitter playing a more active role in moderating false or misleading coronavirus content spread by politicians, including Trump -- although users are split on what form those responses should take.

Among a sample of 2,044 social media users, 37 percent said the U.S. president should be banned from a platform for spreading such information, while 36 percent said such an offense should result in a temporary suspension. Support was similar for actions against politicians overall: 38 percent said they should be banned for spreading false or misleading coronavirus information, and 41 percent said they should be suspended.

While Democrats and Republicans share similar sentiments on whether to suspend the president or politicians overall in such cases, party lines are strongly drawn when it comes to permanently banning them from the platforms. Twenty-three percent of Republicans said the U.S. president should be banned, compared with 47 percent of Democrats. Meanwhile, 35 percent of Republicans said a politician should be booted, a 10-point difference from the 45 percent of Democrats who said the same.

The survey, conducted May 14-17, has a margin of error of 2 percentage points. 

With Americans heading online to figure out how best to protect themselves against the coronavirus, which as of Wednesday has infected more than 1.5 million people in the United States and killed more than 90,000, social media companies are finding themselves in a familiar position as gatekeepers of what information should remain on their platforms. The pandemic, however, has placed a spotlight on how platforms treat world leaders who share content that skirts their rules, including tweets from Trump encouraging his supporters to protest state lockdowns despite stay-at-home orders or to try unproven treatments.

The major social media sites at the center of this discussion -- Facebook Inc., Twitter and Google’s YouTube -- have typically taken a hands-off approach to moderating information that comes from public officials, instead opting either to leave it up because of its newsworthiness or to place warning labels on content that violates their rules.

And during the COVID-19 pandemic, those stances have remained mostly the same, although Twitter has said it will take action against world leaders who violate its community guidelines regarding the pandemic.

A Twitter spokesperson said no one is exempt from its rules, which call for removing content deemed to contain a clear call to action that could directly prompt harm or that is linked to offline harm. The company also said last week it would start placing warning labels on tweets that include misleading information about the virus.

Representatives from Facebook Inc. and Google did not respond to a request for comment.

Part of the issue plaguing large social media companies is that they aren’t the only distribution options for political leaders, meaning that if a politician were booted from a platform, he or she could easily find another site to serve as a mouthpiece, said Katy Glenn Bass, research director at the Knight First Amendment Institute at Columbia University.

“If your concern is that a prominent public figure is spreading this information, booting him off Twitter doesn’t silence him,” she said. “He has lots of other places where he can peddle misinformation.” 

Glenn Bass added that, because of this, one of the best mitigating options for the tech platforms is to pair questionable posts with warning labels or links to accurate health information.

Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project and a Shorenstein fellow at Harvard’s Kennedy School of Government, said that because removing questionable content from politicians and world leaders can be tricky, it’s important for the platforms to be transparent about how they determine what content violates their rules.

“I'm sure that companies like Facebook and Twitter have developed internal guidelines around what to do if and when a public official says a particular type of thing concerning something that is particularly offensive,” he said. “What they should do is make those policy guidelines public so that we can see how their decision making is done.”

Social media users in the survey suggested that they’d support a coordinated effort to remove rule-violating posts from across various social media platforms or from the entire internet altogether: 32 percent said false or misleading information about the coronavirus should be removed from all social media sites, including the one where it was originally posted, and 35 percent said it should be removed from all online sites.

When it comes to the type of content that should be removed, users are more inclined to prioritize fake accounts and pages created to spread misinformation (83 percent) and false information about virus cures and treatments (78 percent) than they are paid advertisements, including those promoting various virus-related health claims (45 percent).

To get the most effective results, though, companies should work with their public partners and local officials to determine which pieces of content could be harmful to public health, especially as states begin a patchwork of reopening plans, said Spandana Singh, a policy analyst at New America’s Open Technology Institute.

“It's still a pretty subjective decision for the platform, but I think the best way they can go about it is by applying their policies consistently,” she said.

Sam Sabin previously worked at Morning Consult as a reporter covering tech.
