
Demystifying Algorithms and Giving People More Control of Their Experience on Twitter

As Twitter aims to provide meaningful transparency and more consumer control and choice, algorithms are sometimes viewed as the antithesis of those goals: opaque mechanisms over which consumers have little say or control.

Concerns about algorithms are legitimate. There’s no question that our industry can do more to address and explain the impact of some of these systems. At Twitter, we’re focused on filling these gaps. That means addressing the unintended consequences of algorithms, promoting algorithmic choice and providing a clearer view into how Twitter works.

That’s also why we recently announced our Responsible Machine Learning initiative, a company-wide approach to study our use of machine learning and understand in depth the good, bad and potentially unknown impacts that these technologies may have on the experience of millions who use our service. This initiative is the first step in a longer effort to evaluate our algorithms and to apply what we learn to make Twitter and our entire industry better.

But how are we using these technologies in the first place, and to what ends? At Twitter, we deploy machine learning, algorithms and automation for two main reasons: to keep people safe and secure, and to help people find the most relevant content.

From choosing the route for your next road trip to finding the cheapest flights, algorithms organize the internet. Algorithms are formulas or sets of technical rules that, in the Twitter context, allow our systems to do things like rank and prioritize content. This enables us to offer content most relevant to the diverse and unique interests of our users. We also differ from other services in the choices we give people. Our Sparkle icon, for example, lets people on Twitter switch between a relevance-ranked timeline and a reverse-chronological view of tweets from the accounts and topics they follow. This function gives people more control over the content they see, and it provides greater transparency into how our algorithms impact their experience on the service.
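To make that distinction concrete, here is a minimal, purely illustrative sketch of the difference between a relevance-ranked timeline and a reverse-chronological one. The Tweet fields and the relevance_score are hypothetical stand-ins, not Twitter’s actual data model or ranking system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Tweet:
    author: str
    text: str
    posted_at: datetime
    relevance_score: float  # hypothetical output of a ranking model

def ranked_timeline(tweets: list[Tweet]) -> list[Tweet]:
    # Ranked view: content the model predicts is most relevant appears first.
    return sorted(tweets, key=lambda t: t.relevance_score, reverse=True)

def latest_timeline(tweets: list[Tweet]) -> list[Tweet]:
    # Reverse-chronological view: newest tweets first, no ranking applied.
    return sorted(tweets, key=lambda t: t.posted_at, reverse=True)
```

Toggling between the two is, in effect, the choice the Sparkle icon offers: the same tweets, ordered by a different rule.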

As policymakers and members of Congress debate the future of internet regulation, they must closely consider the ways algorithms and machine learning make services like Twitter a safer place for more people to express themselves in the public conversations occurring online.

For example, our machine-learning tools surface potentially abusive or harmful content to human moderators for review. In fact, we now take enforcement action on more than 50 percent of abusive tweets that violate our rules before they’re reported. Why does this matter? Because we believe the people impacted by online abuse shouldn’t also have the burden of telling us about it — and we think automation and algorithms can help with that.
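As a rough sketch of that “model flags, human decides” pattern, and not Twitter’s production pipeline, the toy classifier and threshold below are invented for illustration:

```python
REVIEW_THRESHOLD = 0.8  # hypothetical confidence cutoff

def score_abuse(tweet_text: str) -> float:
    """Toy stand-in for a trained abuse classifier returning a probability-like score."""
    flagged_terms = ("threat_example", "slur_example")
    return 1.0 if any(term in tweet_text.lower() for term in flagged_terms) else 0.0

def route_for_review(tweet_text: str, review_queue: list[str]) -> None:
    # Tweets the model scores as likely abusive are queued for human moderators
    # proactively, rather than waiting for the person targeted to report them.
    if score_abuse(tweet_text) >= REVIEW_THRESHOLD:
        review_queue.append(tweet_text)
```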

Without algorithms and machine learning, we wouldn’t be able to move as rapidly to curb the spread of terrorist content. As a member of the Global Internet Forum to Counter Terrorism, we share “hashes” with our industry peers when terrorist content, such as videos of violent attacks, is shared, enabling all of us to take proactive enforcement action at scale.
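The idea behind hash sharing can be sketched simply. The snippet below uses a plain SHA-256 digest for clarity; real hash-sharing systems typically rely on perceptual hashes of images and video so that near-duplicates still match, and this is not the GIFCT’s actual implementation.

```python
import hashlib

# Hashes of known violating content, contributed by participating companies.
shared_hash_database: set[str] = set()

def register_violating_content(content: bytes) -> str:
    # When one company identifies terrorist content, it shares the hash, not the content.
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_database.add(digest)
    return digest

def matches_known_content(upload: bytes) -> bool:
    # Every participant can then check new uploads against the shared hashes
    # proactively, before the content spreads.
    return hashlib.sha256(upload).hexdigest() in shared_hash_database
```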

These technologies also enable proactive action on child sexual exploitation material, and in their absence, these efforts would be hamstrung. For example, we have suspended hundreds of thousands of unique accounts for violating Twitter policies prohibiting child sexual exploitation, and 91 percent of those accounts were proactively identified using algorithmic detection.

While these algorithms are important tools to power our online experience, we recognize that there’s work to be done. We must demystify how algorithms work, be transparent in our findings about their unintended consequences, and work to give people on Twitter more control over their experience.

We’re committed to working with a diverse group of stakeholders in Washington, D.C., across the country, and around the world to get this right for the future of the internet. We’ll continue to listen, develop our approach, and be transparent along the way. The future of the internet is more open and more decentralized, and Twitter’s aim is to be a leader in developing solutions.

Lauren Culbertson is Twitter’s head of U.S. public policy. 

