
People Hold Users, Social Platforms Responsible for Policing Content


As Facebook grapples with how to handle offensive or violent content posted to the site, Americans see a role for users themselves in policing live streaming platforms, even if they ultimately hold sites like Facebook responsible for what stays or gets taken down.

On Monday, Facebook pulled clips of a man in Thailand who murdered his 11-month-old child live on the site; the video was available for about a day, according to media reports. “There is absolutely no place for acts of this kind on Facebook and the footage has now been removed,” a Facebook spokeswoman said.

The Thailand post came a little more than a week after a man was killed in Cleveland, Ohio, and the video was later posted to Facebook. And violence broadcast online is hardly a new phenomenon; a Wall Street Journal tally from March showed at least 50 acts of violence had been streamed via Facebook Live.

A majority of people agree that social media platforms are ultimately responsible for removing offensive content rather than retaining it in the name of free speech, recent Morning Consult polling shows.

But when asked specifically who should remove the content, respondents appeared split. Thirty-three percent said the social media site should manually monitor and remove violent or graphic content, 29 percent said the site should use automated algorithms to remove content, and another 22 percent said users on the site should report the content.

The data mirrors how Americans said they felt about fake news appearing in their social feeds late last year: Polling showed people considered readers themselves most responsible for catching fake news that came across their timelines.

The issue of policing content on social platforms is “complex” because of the lack of any regulations or federal oversight, said Matthew Crain, an assistant professor at the City University of New York who specializes in areas including internet and digital media. And automation isn’t as easy as it looks, because humans still have to tell the artificial intelligence what to flag, he said.

In some cases, people have posted acts of police brutality to Facebook, in addition to the recent standalone acts of violence, and algorithms may have trouble distinguishing between the two. “It’s going to come down to taking a political stance, which Facebook is not eager to do,” Crain said.

Sites like Facebook could take a page from YouTube, which outsources surveillance to contracted workers who decide what content to flag based on the company’s guidelines, Crain suggested.

Sarah Roberts, an assistant professor of information studies at the University of California, Los Angeles, said one of the barriers to effectively tracking content is the lack of manual or computational power to review everything that is posted. That is only compounded by a live streaming option.

“We’re all beta testing the product at the end of the day,” she said, so it is not surprising that people use platforms like Facebook Live to shine a light on the worst of human nature.

Whatever drives people to broadcast violence online for the world to see, streaming platforms will only start taking the issue seriously under pressure from their users, who could be driven away by offensive posts, she said.

Polling was conducted April 18-21 among 2,206 adults.
