
The Era of Big Tech Free From Regulation Is Over

In 1996, when Congress passed the Communications Decency Act, few could have predicted the scale to which websites and social media platforms would grow, or the importance they would soon assume in our daily lives and even in our political discourse.

Since the advent of the World Wide Web, tech companies and the government alike have seen the internet as a medium for connection and dialogue, a natural extension of the U.S. legacy of freedom of expression. That view led Congress to codify Section 230 of the Communications Decency Act, which shields tech companies from liability for what users post on their platforms, allowing them to build products that encourage unrestricted online expression.

Section 230 is not without controversy. After foreign election interference in 2016 and the concerted disinformation campaigns launched on these platforms, many have begun calling for adjustments to the law. While the public and politicians can debate the merits of these changes, for businesses, specifically tech firms that operate online social media platforms in the U.S., any adjustment to Section 230 could significantly affect their operations. If these companies were exposed to greater liability, they would be forced to dedicate vast resources to monitoring and screening the information users post to their sites. Even if AI and software could catch some harmful material, a small army of trained workers would be needed to comply with a new regulatory framework.

As the nation’s political and social dialogue has largely moved online, with policy decisions announced by tweet or social post, the way we absorb and discuss political and social issues has changed. In this context, some policymakers have begun questioning the effects of Section 230, fearing that shielding tech companies from liability has resulted in a flood of hate speech, disinformation and illegal content on their platforms. Tech companies have pushed back hard against this narrative, arguing that placing liability on internet platforms would force them into a censorship role and dramatically diminish free speech across the country. The Internet Association argues that threatening tech companies with liability would have the effect of “incentivizing them to block user generated content, even if legal – making the web less free, innovative, and collaborative.”

Global business leaders are bracing for regulators to take a more aggressive stance online. According to PwC’s Global CEO Survey, 71 percent of the world’s CEOs believe that government will soon “force the private sector to regulate content on the internet,” and 36 percent described themselves as “extremely concerned” about overregulation’s impact on their businesses, a higher share than for any other category.

Last week, Rep. Jan Schakowsky (D-Ill.), who oversees a critical consumer protection subcommittee, confirmed such companies’ fears when she announced that her staff is researching possible changes to Section 230, telling Reuters that she is “really worried about this idea that whatever content is paid for and posted, [tech companies] have no responsibility for.” Schakowsky has not yet disclosed how she would propose to change the law. Former Vice President Joe Biden, in a recent interview, also voiced his support for adjusting Section 230.

While calls to adjust or entirely eliminate Section 230 grow, any added liability placed on these firms would drastically change their relationship with their users. Instead of encouraging content creation and engagement, these platforms would be required to continuously police and monitor what is posted. Hate speech and violent content are already monitored today, but the threat of legal action could expand the range of content these companies would need to review. For start-ups and newer firms, the reality is that compliance costs might simply become too great a cost of doing business. Further, as posts are more closely monitored and screened, these platforms could lose some of their power as tools of free expression and dialogue.

Any policy developed this year, or more likely in the next Congress, could face a steep uphill battle to becoming law, especially as the issue continues to be framed around the polarizing topic of election interference. If any policy does make it through the legislative process, it would surely be confronted with a barrage of legal challenges from the tech industry, followed by confusion over how to properly comply with the law. Greater clarity on any update would likely come only after federal regulators and consumer groups file lawsuits and courts begin to rule on the statute.

Overall, as the tech industry, politicians and the public debate this issue, all should be aware of the potential impact any adjustment to Section 230 could have on these platforms. Tech companies have an opportunity to showcase their ability to help build constructive dialogue around regulation in the year ahead.

Alison Kutler is a principal at PricewaterhouseCoopers and leads the firm’s strategic policy advisers group.

