
Big Tech Is Running Out of Excuses for Inaction

Facebook and Twitter’s recent efforts to deplatform individuals and groups following the storming of the U.S. Capitol building on Jan. 6 are but the latest in a litany of reactive, piecemeal, “too little, too late” measures implemented more to deflect criticism than to protect the public. Although the move has helped slow the spread of violence-inciting misinformation, a closer look at Big Tech’s history of removing extremist and terrorist content reveals that the industry has done little to seriously address the systemic ills that plague social media platforms.

It is typical, and therefore unsurprising, that Facebook and Twitter decided to act only after the riot at the Capitol broke out. Such reactive policies are the norm. For example, it was not until October 2020 that Facebook updated its hate speech policy to ban content that “denies or distorts the Holocaust.” The company took action only after mounting pressure from the media and advocacy groups. Facebook CEO Mark Zuckerberg was also heavily criticized for his 2018 statement that Facebook should not remove Holocaust-denying content because users were not necessarily “intentionally getting it wrong.”

It is also true to form that when Big Tech finally does act, its takedown efforts prove to be insufficient. There have been reports of posts promoting violence during inauguration week circulating on Facebook, despite the company’s crackdown. 

Human rights groups and activists have also rightly called on Facebook and Twitter to apply their content removal policies evenly. As noted in the New York Times in 2019, Facebook did not remove posts by a member of Slovakia’s parliament, who was convicted and stripped of his government position for incitement and racist comments. Last year, activists published an open letter to Facebook, highlighting how the popular platform was used to spread violence and hate in Ethiopia.

We at the Counter Extremism Project have long fought against extremists’ and terrorists’ misuse of social media and the internet, urging Facebook, Twitter, YouTube and others to remove, and prevent the reposting of, content clearly prohibited by their Terms of Service. Nonetheless, the tech industry has pushed back on such efforts through lobbying and PR campaigns that seek to stymie criticism and block paths to smart regulation, choosing instead an approach of inaction that is ineffective and ultimately dangerous. The industry has offered several rationales for that inaction, including the following:

“One man’s terrorist is another man’s freedom fighter.”

This quote, from a Twitter official, epitomized Twitter’s dismissiveness of the growing problem of the hijacking and weaponization of its platform by violent extremists. Indeed, the Twitter official noted that the company was not interested in defining terrorism or silencing political speech — a comment made in response to requests for the removal of material produced by and in support of ISIS.

Removing extremist content hinders law enforcement and intelligence operations.

Law enforcement officials who track and attempt to prevent extremist activity know that attack planning takes place on encrypted applications, not on public-facing platforms. Pushing extremists off major platforms and onto smaller ones has the benefit of reducing their ability to promote misinformation and expand their influence over greater numbers of impressionable people.

Removing content could further radicalize extremists online.

This contention conflates the feelings of disappointment, anger, or inconvenience that may arise from having one’s social media content removed with the complex psychological process of extremist radicalization. If companies truly refused to enforce their Terms of Service for fear of provoking a user backlash akin to radicalization, they would never act at all. This nonsensical excuse for inaction has been belied by the sweeping removals and account suspensions that took place after the events of Jan. 6.

These arguments have always been self-serving platitudes offered as excuses for inaction. Now faced with political and media pressure to act, Twitter has adopted a new standard for content removal: “behavior that has the potential to lead to offline harm.” This standard is absurdly general and could cover all manner of otherwise protected speech and political discourse. If enforced, it will be used to silence large numbers of law-abiding, non-extremist users.

CEP has long argued for tech industry removal policies that are transparent and based on established standards and laws. For example, we have called for social media platforms to ban participation by U.S.-designated Foreign Terrorist Organizations and Specially Designated Nationals. Such a commonsense approach would help ensure that the tech industry focuses on a clear, defined set of targets and can be held accountable when companies fail to take effective and permanent action against actual extremists with a history of advocating violence or carrying out terrorist attacks.

 

David Ibsen is the executive director of the Counter Extremism Project.

