The role of internet platforms in keeping their systems free of harmful speech and activity is back in the news, driven this time by the use that ISIS is making of social media to recruit new adherents to its cause. By making these needed judgments, social media platforms are acting in a socially responsible way. We don’t need government agencies to step in and take over this delicate balancing role.
One instance of governmental overreach in this area is pending in section 603 of the Senate version of the Intelligence Authorization Act for Fiscal Year 2016. This section would create an obligation for social media companies and others to report undefined “terrorist activity” to the U.S. government. This completely unnecessary provision risks bringing innocent people under surveillance by the government for protected expression.
This is not a new issue. For years, Internet intermediaries have taken steps to keep their systems free of child pornography, Internet gambling, controlled substances, hate speech, and revenge porn, to name just a few. A couple of examples illustrate the point. When WikiLeaks first revealed confidential State Department information, online financial institutions and web hosting companies stopped providing service. When the Innocence of Muslims video caused violent protests throughout the Middle East, social media platforms did not take it down globally, saying it did not violate their internal policies against hate speech. Rather, they selectively disabled access to the video in hotspots in the Middle East and Asia as a way to prevent the violence from spreading.
The last decade has seen the rise and public acceptance of the socially responsible Internet intermediary. Search engines, online payment systems, social networks, online marketplaces, web hosting companies, and ISPs all have policies and procedures in place reasonably designed to stop the use of their systems for harmful or unprotected conduct or speech. Each company has the flexibility to craft the policy that works best for its users and for the public, and to vary it depending on the type of speech or conduct. It takes time and judgment to apply these policies to particular cases.
We have seen these internal policies at work in the case of ISIS. Facebook has zero tolerance of ISIS accounts and support, saying “We don’t allow praise or support of terror groups or terror acts, anything that’s done by these groups and their members.”
Twitter has a similar internal rule stating that “The use of Twitter by violent extremist groups to threaten horrific acts of depravity and violence is of grave concern and against our policies, period.” Indeed, so successful have these policies been that Twitter employees have been threatened by ISIS over their take-down policy.
Moreover, the policies are effective in limiting the reach of terrorist groups. As the Brookings Institution reported earlier this year, “Thousands of accounts have been suspended by Twitter since October 2014, measurably degrading ISIS’s ability to project its propaganda to wider audiences.”
But there is a balance here. Some important conversations and information about terrorism can easily be stifled if social media platforms are too vigorous in shutting down terrorist accounts, websites and messages. Indeed, that is what is wrong with a “terrorist activity” reporting requirement.
More generally, issues of Internet freedom are at stake. In 2011, then-Secretary of State Hillary Clinton reminded social media platforms that “The first challenge is for the private sector to embrace its role in protecting internet freedom. Because whether you like it or not, the choices that private companies make have an impact on how information flows or doesn’t flow on the internet and mobile networks. They also have an impact on what governments can and can’t do, and they have an impact on people on the ground.”
Sometimes, it seems as if intermediaries are in a no-win middle ground – urged to take action against bad conduct and speech and then criticized when they do.
When financial intermediaries cut off WikiLeaks, the public reaction was harsh. One commentator said “…the cowardice of the payment systems, not to mention their hypocrisy, is a disgrace…” He called for them to be treated as common carriers with no responsibility for the content of their traffic. The New York Times also wanted to consider common carrier status for intermediaries, asking:
“What would happen if a clutch of big banks decided that a particularly irksome blogger or other organization was “too risky”? What if they decided — one by one — to shut down financial access to a newspaper that was about to reveal irksome truths about their operations? This decision should not be left solely up to business-as-usual among the banks.”
It makes no sense to grant terrorists and criminals an unlimited right of access to intermediary platforms. Under such a rule, Facebook would be powerless to remove a “Homage to the Killer” page; Google would be unable to take down videos of an autistic teenager being tormented; payment systems would be unable to block payments for child pornography. And so on. But the point is that commentators were so incensed by the intermediary actions in the WikiLeaks case that they contemplated a public policy of taking away all discretion from intermediaries and requiring them to publish all speakers.
The reality is that intermediaries have to do this job because no one else can or should. They have to strike a balance. They have to make judgments about the limits of speech and conduct on their systems. Generally, they need to allow speech and advocacy, but draw the line when the speech or advocacy becomes genuinely harmful. In our system of free expression, they have to be the responsible stewards of public discourse.