The Facts on Social Media Bias

With the Capitol riot still fresh in everyone’s mind, social media platforms are again facing criticism from all sides for their moderation decisions.

Democrats claim these sites amplify right-wing views and assail the platforms for failing to remove hate speech, public health misinformation or government propaganda in countries around the world.

Republicans claim these sites are biased against conservatives. Right-wing voices castigate these sites for taking down what they see as legitimate political speech – often the same language the left views as hate speech – and information about disputed COVID-19 cures.

Both parties are trying to force platforms to minimize the other side’s speech while making sure their own speech remains available. Meanwhile, the platforms are in a no-win situation: any perceived move toward one viewpoint will only enrage the other. Like Congress, Americans are conflicted on these issues. According to a recent Harvard-Harris poll, 65 percent of respondents are concerned about censorship on social media platforms. At the same time, a majority (54 percent) also support the indefinite ban of Donald Trump from Twitter.

I suggest a third view. The large social media platforms are not biased toward liberals or conservatives. Instead, they are private businesses trying, imperfectly, to keep harmful content off their platforms. They are doing their best to remove content they believe violates their terms of service, whether by spreading misinformation or by encouraging violent or hateful conduct.

Social media content moderators face a Herculean task, operating at an unimaginably massive scale. Twitter handles roughly 200 billion tweets per year. Facebook sees 350 million photos posted every day. Even if everyone agreed on what content was “bad,” and even if a platform’s content moderation system were 99.9 percent effective, countless inappropriate posts would still sneak through.
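To see why, take the figures above at face value. A system that makes the correct call 99.9 percent of the time still errs on 0.1 percent of what it reviews, and at Twitter’s volume that adds up fast:

200,000,000,000 tweets per year × 0.001 error rate = 200,000,000 erroneous calls per year

That is 200 million mistakes a year from a system that would be nearly perfect by any ordinary standard.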

Given the pace and volume, moderators inevitably make mistakes. They will also make errors in judgment, such as Facebook and Twitter’s temporary suppression of the Hunter Biden laptop story, a decision both later reversed.

While conservatives claiming bias point to individual instances of content moderation, studies do not support claims of systemic suppression of right-wing views. A 2019 analysis by The Economist found that Google does not favor left-leaning websites. A 2019 Media Matters for America study found that, over a 37-week period, right-leaning Facebook pages drew nearly identical engagement to left-leaning pages. An NYU Stern Center for Business and Human Rights report released last month states that “no trustworthy large-scale studies have determined that conservative content is being removed for ideological reasons or that searches are being manipulated to favor liberal interests.”

What’s more, conservative views often do very well online. The Twitter feed @FacebooksTop10 regularly shows that many of the day’s top 10 Facebook posts come from conservatives. A 2020 study by Politico and the Institute for Strategic Dialogue concluded that conservatives routinely surpass both liberals and traditional news media in driving online dialogue.

The latest accusation of bias concerns Amazon Web Services’ decision to cut off hosting for the right-wing social media site Parler. But this was not partisan targeting. For months, AWS had asked Parler to remove hateful content, including calls for the assassination of public officials, to comply with the AWS terms of service. Parler refused, underscoring its lax approach to moderation: the site employed no misinformation checks or third-party fact-checkers.

Social media companies are not perfect. They could improve by disclosing more about why content is taken down, and they could offer users a choice of moderation algorithms reflecting different preferences or values. Both measures could increase transparency and address concerns about viewpoint discrimination.

In fact, platforms are already trying innovative ways to police their sites and make content moderation more consistent. Twitter launched a program called Birdwatch, which lets users flag tweets they believe are misleading and write notes that add context. Facebook set up an independent oversight board with the power to review, and potentially reverse, its moderation decisions.

Political conversation in a democracy is passionate and messy. Social media platforms have the difficult task of refereeing this discussion on a massive and unprecedented scale.

Not everyone will be happy with every decision. But the fact that the left and the right are equally dissatisfied may just mean the platforms have struck the right balance.

Michael Petricone is the senior vice president of government affairs for the Consumer Technology Association.

