
An Undeserved Legal Shield

Technology giants like Facebook, Twitter, and Google maintain they are neutral platforms that cannot be held responsible for the content posted on their sites. That misguided and socially harmful belief is rooted in Section 230 of the Communications Decency Act, which was written when the internet was in its infancy. The law provides a liability shield to tech companies that proactively remove harmful or indecent content from their sites. As the internet has matured, it has become clear that it is time to end these blanket protections.

Rather than properly enforcing their terms of service under the protection afforded by Section 230, tech companies have done little to rein in dangerous content. They have, instead, used Section 230 to fend off lawsuits from victims of terrorism and other real-world harms linked to content on their sites.

Given the industry's ongoing failure to live up to the spirit of Section 230, the Department of Justice has rightfully proposed a legislative plan to revise these broad legal protections, including removing immunity for terrorist content.

Any fair analysis of Silicon Valley’s practices and behavior in 2020 would show that the companies are not operating as the benign, objective and neutral platforms the CDA requires. Since the law’s enactment in 1996, tech companies have evolved to collect, sell, promote and even produce content of their own.

In addition, Dr. Hany Farid, a University of California, Berkeley professor and senior adviser to the Counter Extremism Project, recently explained that tech companies algorithmically amplify certain content over other content. In fact, Farid and researchers at UC Berkeley noted that an estimated 70 percent of content viewed on YouTube is recommended content. The platform’s algorithms weigh several factors, including user engagement and view time, to decide which videos to promote or recommend.

This is because advertising remains the primary source of revenue for the tech industry, and companies want to keep users on their sites for as long as possible. That dynamic incentivizes tech companies to invest in algorithms that actively direct microtargeted videos (and the ads attached to them) at users and keep them engaged. Inevitably, the most outrageous, controversial and sometimes radical content is what gets recommended.

And despite protests to the contrary, tech companies are actively pursuing content-producing endeavors. In 2016, Facebook CEO Mark Zuckerberg insisted that his company is “a tech company, not a media company.” But by the following year, Facebook was reportedly willing to spend up to $1 billion on original content and looking to revamp its Facebook Watch tab. By 2019, Facebook was meeting with publishers and studio producers to develop new shows for Watch. This is not the behavior of a neutral actor.

Big Tech’s behavior over time makes one thing crystal clear: Companies have been and intend to continue to be actively involved in the collection, manipulation and creation of content. 

It’s a rare point of bipartisan consensus that Section 230 is problematic. Attorney General William P. Barr has pointed out that Section 230 was intended to help shield tech companies from liability if they opted to moderate content. Speaker Nancy Pelosi said last year: “230 is a gift … and I don’t think they [tech companies] are treating it with the respect that they should … For the privilege of 230, there has to be a bigger sense of responsibility on it, and it is not out of the question that that could be removed.”

Predictably, the tech industry wants to continue reaping the financial benefits of operating as a content arbiter, curator and creator without any of the associated legal and liability risks. Indeed, the industry is keen to maintain this peculiar and highly lucrative arrangement, as evidenced by the record amounts of lobbying it has undertaken to preserve the coveted shield.

Yet, despite the industry’s lobbying, there is growing bipartisan support in Congress to amend the law. During a 2019 congressional hearing into internet abuses, Dr. Farid observed: “How, in 20 short years, did we go from the promise of the internet to democratize access to knowledge and make the world more understanding and enlightened, to this litany of daily horrors? Due to a combination of naivete, ideology, willful ignorance, and a mentality of growth at all costs, the titans of tech have simply failed to install proper safeguards on their services.”

The Justice Department’s latest actions are the result of a consultative and thoughtful process, one that is necessary to end the “neutral platforms” farce and return responsibility for public safety to the hands of government. No American business, let alone the most powerful and profitable, should be granted immunity for activities or content that violate criminal law, such as online scams, drug trafficking, online child exploitation and sexual abuse, terrorism or cyberstalking. The tech industry is no exception.


David Ibsen is the executive director and Lara Pham is the deputy director of the Counter Extremism Project.
