
Sweat the Details to Improve Online Safety

Few topics are hotter at present than how to “fix” the internet. Digital services have kept us connected, educated and entertained amid a pandemic, but their misuse has commanded the attention of policymakers on Capitol Hill and around the world. Revelations from whistleblowers have reinforced the concerns of user advocates and researchers who have long worked to address harmful online content and conduct.

Calling out problems may command headlines, but finding solutions for online safety will take a lot of hard work. To make the internet better, we must delve into the details of how companies manage risks.

Complex issues, such as grappling with terrorist content or incitement and hate speech, rarely have simple solutions. And regulating online content is an especially thorny challenge here in the United States, where just about any government effort to impose rules on speech must be reconciled with the First Amendment. Moreover, while both sides of the aisle are making political hay by going after “Big Tech,” they fundamentally disagree about the problem at hand.

Progressives want digital services to limit hateful content and conspiracy theories more aggressively. But they perhaps have not fully considered how heavy-handed company restrictions could wind up constraining expression for marginalized and vulnerable communities. Conservatives, on the other hand, continue to tilt at the windmill of corporate censorship of conservative voices, despite considerable evidence that such viewpoints have benefited significantly from the social media era.

In this zero-sum struggle, each side wants certain content to stay up and other content to come down, positions that are fundamentally at odds and that government often could not constitutionally mandate. Adding to the challenge, even well-intended legislative reforms adopted to narrowly address a serious issue like online trafficking have wound up causing further harm to marginalized communities.

There are thoughtful proposals in play, from mandating independent academic access to company data to better funding government efforts to prevent child exploitation and enforce the laws against it. None of these steps is a silver bullet, but each is important in its own right and merits good-faith consideration by lawmakers.

There is no one-size-fits-all answer to trust and safety online. Many types of digital services, with very different business models, face constantly changing risks of misuse and abuse. Nor is this just about social media: companies offering everything from podcasts to financial services are grappling with harmful content and conduct.

Tech companies have been working on these issues for years. Over the past two decades, they have built “trust and safety” teams (or the equivalent) to address harmful online content and conduct. By this point, trust and safety is a basic business operation found at many companies, like cybersecurity or legal. These teams have long worked in the background, rarely attracting public scrutiny.

That is changing. Organizations like the one I lead, the Digital Trust & Safety Partnership (DTSP), are bringing together leading tech companies to create a framework for how the industry can strike a balance between reducing harms and protecting people’s ability to stay connected with others, conduct business, and express themselves.

DTSP and other organizations, like the Trust and Safety Professionals Association, are working to organize and advance the evolving trust and safety discipline.

DTSP has articulated best practices for how high-performing trust and safety teams can handle harmful online content and conduct. And we have now published our approach to evaluating how companies implement these practices: a proportionate, risk-based approach in which the level of scrutiny of a company’s practices is determined not just by its size, but by the impact of its services and the specific risks posed by product features.

It is imperative that today’s largest tech companies take their safety responsibilities seriously, but we can’t stop there. We need professional standards for trust and safety that can scale, so that today’s startups can embed best practices into their DNA from the start and not be blindsided by trust and safety challenges as they grow into the tech success stories of tomorrow.

Few topics cause eyes to glaze over faster than industry standards and management systems. But if you want technology companies to improve how they identify and account for the challenges at the root of today’s social media controversies, this is where to start. Rigorous systems that put best practices in place, then continuously assess and improve them, will not grab headlines. But they might just make the internet better.

David M. Sullivan is the executive director of the Digital Trust & Safety Partnership.

