
Tech Companies Aren’t All the Same, Regulations Shouldn’t Be Either

In the weeks leading up to May 25, the whole world received a seemingly endless stream of privacy policy update emails from what felt like every website and app they’d ever visited. Why? Because that was the day the European Union’s General Data Protection Regulation took effect, with massive implications for organizations and citizens across the world. American companies with any EU customers, or even customers living in America who are EU citizens, have had to integrate an unprecedented number of costly new practices and procedures into their data collection activities. This new era of data protection, while well-intentioned, places an undue burden on startups and entrepreneurs. And now the EU Parliament is once again considering a rule with numerous, seemingly unconsidered ramifications for American startups; this time, the subject is content filtering.

In September 2016, the EU Parliament released its plan to overhaul the bloc’s copyright system. Included in the plan was a proposal to shrink the supposed value gap between the revenues earned by content creators and those earned by content distributors by forcing distributors to use content filters, so that no copyrighted content could be hosted on their sites without permission from the copyright holder. And on Wednesday, the EU Parliament’s Legal Affairs Committee advanced the measure, meaning that, once it is agreed upon by representatives of the EU’s 28 member countries, it will become law. Like the GDPR, this regulation would apply to any company with customers in the EU, meaning that thousands of American businesses would be affected. But how problematic is this regulation? More than most might think.

While the rule does not explicitly require companies to use content filters, there is no other way for companies to comply with it. But content filtering is complicated, and we haven’t perfected it yet. To filter content effectively, three things must be available: content-filtering technology for every type of content a site hosts (video, audio, text, etc.); up-to-date databases of all copyrighted content and its owners; and agreement between the content creator and the distributor on what to do when copyrighted content is discovered.
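To see how high that bar sits, consider a deliberately naive sketch of the first two requirements: a matcher plus a rights database. Everything here is hypothetical, with an exact-hash matcher and an in-memory dictionary standing in for the real perceptual fingerprinting systems and licensed databases, which are vastly more sophisticated:

```python
# Hypothetical sketch: the simplest possible "content filter" -- exact
# hash matching against a rights database. All names and data are
# illustrative assumptions, not any real platform's system.
import hashlib
from typing import Optional

COPYRIGHT_DB: dict[str, str] = {}  # fingerprint -> rights holder

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; any re-encoding or one-byte edit defeats it."""
    return hashlib.sha256(data).hexdigest()

def register_work(data: bytes, owner: str) -> None:
    """Stand-in for the up-to-date copyright database the rule presupposes."""
    COPYRIGHT_DB[fingerprint(data)] = owner

def check_upload(data: bytes) -> Optional[str]:
    """Return the rights holder if the upload matches a registered work."""
    return COPYRIGHT_DB.get(fingerprint(data))

if __name__ == "__main__":
    register_work(b"original song master", "Example Records")
    print(check_upload(b"original song master"))   # Example Records
    print(check_upload(b"original song master!"))  # None -- one byte off
```

Even this toy version exposes the problem: exact matching is trivially evaded, so real platforms need perceptual fingerprinting tuned to each medium, and no such technology exists for every content type the rule would cover.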

Putting aside the fact that content-filtering algorithms and copyright databases for certain types of content, such as 3D-printing designs, software code, or text, don’t even exist, content-filtering software is not the whole picture. Even when all of the components of content filtering are in place, the algorithms still aren’t perfect. According to our conversations with content distribution platforms, content-filtering algorithms can, at best, make life easier by flagging content for removal; they cannot replace human decision-making entirely. And that means content moderation requires a massive amount of manpower. This isn’t a problem for a company like Facebook, which can afford to employ the small armies of content moderators needed to ensure compliance with these new rules. As of February 2018, Facebook had a content moderation staff of 7,500, up from 4,500 just eight months earlier. The average tech startup, which has just five employees, cannot come close to securing the staff necessary to comply with this regulation; if this rule is adopted, many will suffer, or won’t even get off the ground, because of it.
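A second hypothetical sketch shows why the human workload scales with upload volume rather than disappearing: real matchers return confidence scores, and everything between “certainly infringing” and “certainly clean” lands in a queue a person must clear. The thresholds and names below are illustrative assumptions, not any platform’s actual values:

```python
# Hypothetical triage sketch: automated filters handle only the easy ends
# of the confidence spectrum; the ambiguous middle goes to human review.
import random
from dataclasses import dataclass

@dataclass
class Upload:
    item_id: int
    match_confidence: float  # 0.0-1.0, from some matching algorithm

AUTO_BLOCK = 0.98  # near-certain match: remove automatically
AUTO_ALLOW = 0.10  # near-certain non-match: publish automatically

def triage(uploads: list[Upload]):
    blocked, allowed, review_queue = [], [], []
    for u in uploads:
        if u.match_confidence >= AUTO_BLOCK:
            blocked.append(u)
        elif u.match_confidence <= AUTO_ALLOW:
            allowed.append(u)
        else:
            review_queue.append(u)  # a human must decide
    return blocked, allowed, review_queue

if __name__ == "__main__":
    random.seed(0)
    batch = [Upload(i, random.random()) for i in range(10_000)]
    blocked, allowed, queue = triage(batch)
    print(f"auto-blocked: {len(blocked)}, auto-allowed: {len(allowed)}, "
          f"human review: {len(queue)}")
```

Under these toy thresholds, most of a day’s uploads fall into the review queue, which is exactly why Facebook staffs thousands of moderators and a five-person startup cannot.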

This new mandate would apply only to websites that host “large amounts” of content. Unfortunately, it never defines what that means. Thanks to this lack of clarity, any enforcement agent could arbitrarily decide what a “large amount” of content is and pursue violations accordingly. As a result of this ambiguity, every company that hosts content, from the smallest startups to Fortune 50 companies, would have to operate as though the rule applied to it.

A consistent part of the discussion surrounding the GDPR in the United States has been proposals to “copy” the rule into legislation here in America. And given the ongoing debate over content platforms hosting material that violates copyright law, there is a very real possibility that we will see efforts to replicate a rule like this here at home, which would be disastrous for startups across the country.

That’s why Allied for Startups partnered with Alexander Gann and Analysys Mason to produce a report on the flaws and potential implications of this rule. If we are going to solve the very real problems of copyright infringement and content moderation here at home, we need to do so in a way that reflects an understanding of how content platforms, and content-filtering tools, actually work. We need carefully thought-out, nuanced regulation that takes the perspectives and needs of companies of all sizes into account. Only then will we address this issue in a way that is effective and avoids quashing the American dream for thousands of current and future digital entrepreneurs.

Melissa Blaustein is the founder and CEO of Allied for Startups, a global network of startups, entrepreneurs, VCs, and advocacy organizations working to build a worldwide consensus on key public policy issues impacting startups. The organization is headquartered in Brussels and has been closely involved in discussions surrounding the EU’s Digital Single Market.

