Delay of Presidential Election Result Casts Spotlight on Social Media’s Misinformation Problem

Thousands of social media posts targeting swing states mentioned “stealing” the election, data shows

President Donald Trump speaks in the East Room of the White House in the early morning hours of Nov. 4, 2020, in Washington, D.C. Trump posted false claims about election returns on Facebook and Twitter a little before 1 a.m. (Chip Somodevilla/Getty Images)

As the United States again confronts a long-anticipated scenario in which presidential election results don’t arrive on election night, a wave of disinformation aimed at undermining the ballot-counting process is already swarming the major social media platforms, putting their newly minted election-related content moderation policies to the test.

Between 8 p.m. Tuesday and 9 a.m. Wednesday, there were 415,874 social media posts that included the words “steal” or “stealing” in the context of the 2020 election. Posts with that language were concentrated on battleground states, with the highest number of mentions in posts related to Pennsylvania (20,819), Wisconsin (15,011) and Michigan (11,947), according to data from media insights company Zignal Labs shared with Morning Consult. From midnight to 6:30 p.m. on Election Day, there were 8,150 posts that mentioned the words “steal” or “stealing” in the context of Pennsylvania’s 2020 elections.

Social media post mentions related to President Donald Trump’s false early victory call hit 125,586 mentions from 8 p.m. to 9 a.m., with a spike around 12:45 a.m. Wednesday when Trump tweeted the claims, according to Zignal. Those mentions were highest in Michigan (11,413), Georgia (10,883) and Pennsylvania (10,755), all of which are still in the process of counting ballots. 

A range of experts, including former White House officials and democracy advocates tracking election-related misinformation who have been preparing for this scenario, told Morning Consult on Wednesday that while social media companies have taken a far more proactive approach to moderating false claims of election victory, missteps in their enforcement point to a lack of transparency in their approach.

“They did not publish what their governance process would be, and transparency is the best shield against misinformation campaigns,” said Theresa Payton, chief executive of cybersecurity company Fortalice Solutions and former White House chief information officer from 2006 to 2008. “So as they begin to flag and suspend accounts and suspend posts, what is going to happen is another misinformation campaign, which is ‘the system is rigged,’ ‘they don’t like my candidate’ and ‘we’ve been censored.’” 

Facebook Inc. and Twitter Inc. were put to the test early on when Trump falsely claimed around 1 a.m. in posts on both sites that he was “up BIG” and that Democrats “are trying to STEAL the election.” Twitter placed the content behind a warning label indicating that the post contained “disputed” and “misleading” information about the election, while Facebook’s label said that votes were still being counted.

The two social media companies also flagged a second post from the president on Wednesday morning that aimed to undermine the validity of mail-in ballot counts, saying that his lead in many states Tuesday night “started to magically disappear as surprise ballot dumps were counted,” although election officials have been following the proper procedure for counting ballots.

A Twitter spokesperson said the company will “continue to take enforcement action” against posts that “prematurely declare victory or contain misleading information about the election broadly” as the votes continue to be counted. Facebook said in a tweet that it’s automatically applying labels informing users that the votes are still being counted to both presidential candidates’ posts. 

These responses raise the question of how effective labeling a post is when it happens only after a substantial number of users have already seen it, several experts said. Yosef Getachew, media and democracy program director at government watchdog group Common Cause, said that in the coming days he expects advocacy groups to rally behind existing calls for a screening system under which platforms review certain content for misinformation before it is published, similar to the time delay that live broadcast programs use in case they need to censor inappropriate material.

And while the president is far from the only government official and political candidate who has been flagged for election-related misinformation since Tuesday, the posts are indicative of a strategy change happening among disinformation agents, said Ian Vandewalker, senior counsel for the NYU Brennan Center for Justice’s democracy program. 

“The strategy is going to change from a voter suppression one, where targeting is important because you want to suppress the other side’s votes and not yours, to a kind of national narrative where people want to be able to say whatever” helps their cause, he said. “But when they’re saying it in a way that involves falsehoods, or includes threats of violence, that’s when it gets into a problematic situation.”

Lisa Macpherson, senior policy fellow at public interest group Public Knowledge, said the coming days will include plenty of misinformation that focuses on delegitimizing the election, such as spreading rumors about election rigging or vote tampering and the integrity of the ballot-counting process, with the White House already proving to be a potential source for such false information. 

However, Macpherson is optimistic about the ways that social media companies, as well as media outlets, are implementing their fact-checking tools, saying that the platforms “are so far showing that they’re willing to enforce some of the policies that they’ve put in place to protect the integrity of the election.” 

But Payton, who wrote “Manipulated: Inside the Cyberwar to Hijack Elections and Distort the Truth,” said enforcement tools such as labeling, halting engagement on a post or removing content altogether after publication have a fundamental loophole: these materials can continue to spread in private chats, such as those on WhatsApp or Snapchat.

“How do you have people unsee misinformation after they see it, make up their mind about it, and then act on it?” she said. “That is the challenge that did not get addressed for 2020.” 
