By Amir Nasr
January 19, 2016 at 6:00 am ET
President Obama’s new task force on terrorism and the internet shows that the White House has identified social media as a crucial battlefront in its fight against the Islamic State. The Homeland Security, Justice and State departments say they will work together to stop terrorists’ use of the internet to “radicalize and mobilize recruits” all over the world.
There are two ways to combat the presence of terrorism online: One is to pump the same social media channels that terrorists use with positive “counter-messaging” campaigns to crowd out radical views. The other is to monitor social media and report suspicious content that law enforcement can then track. Neither of these options is proven to hinder online radicalization, which means that much of what the task force will be doing is experimental.
Outsiders who are watching these activities say the task force seems more engaged in counter-messaging than in monitoring. This could be because monitoring is tough to do without treading on individual rights and tech companies’ business models.
The State Department says its new “global engagement center” will seek to empower nongovernment organizations that condemn the “nihilistic vision” of ISIS online. To do this, the center will plan “thematic social media campaigns” that provide facts to counter the “disinformation” coming from terrorist recruiters. The hope is that other organizations will pick up on these themes and disseminate them through their own online networks.
There are many unknowns here. How exactly will the task force orchestrate these campaigns, even with the help of friendly outside organizations? More importantly, how effective can they be in dissuading potential terrorist recruits? It’s hard to imagine that a person with deep anti-establishment beliefs would be dissuaded from joining a terrorist group by a government-backed information campaign.
Because potential terrorists are unlikely to respond to an overt government campaign, it’s important to have the anti-terrorist message come from independent entities, according to Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology. “They need to support genuine speakers who are providing counter-messaging to what we’re hearing out of ISIS,” she said in an interview.
But the tech community worries that the positive messaging strategy won’t be all the task force does. The information campaigns could get coupled with monitoring online content for pro-ISIS propaganda, according to Llansó. A growing fear in the tech world is that the government will look for such propaganda through partnerships with social media services.
“If you put the onus of monitoring on a private company, they have a lot more leeway to restrict content than the government could,” Llansó said.
If the government were to patrol social media websites and decide what is or isn’t suspicious, Llansó said, that could veer into censorship. However, if private companies report content they find suspicious to the government, it’s less like censorship and more like a business simply overseeing its customer service.
Tech companies don’t like this concept, but some members of Congress do. A Senate bill introduced in December by powerful intelligence chiefs would require social media companies to report suspicious activity on their sites.
This baffles the tech community because, they point out, social media companies already report potential threats to law enforcement. Twitter’s policies state that the service takes action on any threat or promotion of violence. Facebook’s policies say the company will “remove content, disable accounts, and work with law enforcement” when there appears to be a “genuine risk of physical harm or direct threats to public safety.” Facebook also provides mechanisms that allow users to flag posts from others that they find suspicious.
The bill, introduced by Senate Intelligence Committee ranking member Sen. Dianne Feinstein (D-Calif.), would require tech companies, mainly social media websites, to report “any terrorist activities” to law enforcement.
The wording upsets the tech community because, notably, the bill doesn’t elaborate on the definition of “terrorist activities.” Tech firms could face legal repercussions if they fail to report something that later emerges as potentially dangerous or suspicious. Without any clarification on “terrorist activities,” the decision to report or ignore something depends entirely on subjective judgment.
“Our concern is that companies are going to start having to report anything that might conceivably be later interpreted to be terrorist activities,” Berin Szoka, president of TechFreedom, said in a podcast last week. “It could include far more than just ISIS activities. It could be anything that might be potentially suspicious.”
To Szoka’s point, Llansó said a lot of the tech community’s concerns stem from the focus on “the more amorphous category of propaganda or radicalization” when policing social media sites.
What constitutes propaganda? Can companies report someone for an angry tweet or Facebook post that shows anti-American leanings? Who decides what is dubious terrorist propaganda and what is simply political belief? The question becomes increasingly difficult when social media companies depend on users feeling comfortable enough to post their thoughts and feelings.
Even so, the measure, the Requiring Reporting of Online Terrorist Activity Act, has bipartisan Senate support from influential cosponsors — Senate Intelligence Committee Chairman Sen. Richard Burr (R-N.C.), Senate Commerce Committee ranking member Sen. Bill Nelson (D-Fla.), and Sen. Marco Rubio (R-Fla.).
The tech sector has a Senate ally in Sen. Ron Wyden (D-Ore.). He often takes up fights against legislation he sees as attacks on civilian privacy. Wyden disapproves of the bill because he says it would turn private social media companies into government enforcers.
Wyden’s protests have been effective in the past. When the same bill was included in an intelligence authorization measure last year, Wyden blocked the legislation until the social media language was removed. Now the sponsors are trying again, facing the same arguments.
Amir Nasr previously worked at Morning Consult as a reporter covering tech.