Last week, on July 14, at the Center for Strategic and International Studies (CSIS), the Internet Governance Forum (IGF) USA hosted an interesting and, unfortunately, once again timely panel discussion titled “Content and Conduct: Countering Violent Extremism and Promoting Human Rights Online.” Courtney Radsch from the Committee to Protect Journalists ably moderated and asked pertinent questions throughout the conversation. Yolanda Rondon from the American-Arab Anti-Discrimination Committee and Matt Mitchell from Black Lives Matter provided a civil rights perspective. J.D. Maddox from the State Department described what the U.S. government is doing to combat online violent extremism abroad. I was on the panel to give SIIA’s view, which is informed by our diverse membership, including content and social media companies.
Rondon and Mitchell eloquently described the potential dangers and dilemmas faced by policymakers in determining how to address violent extremism online. Can takedowns of content drive people into the “deep web”? Does withdrawing Nazi content in Germany, e.g., “Mein Kampf,” lead to a sense of victimization and isolation among far-right groups? As Radsch asked, should ISIS be the only group targeted for online action? What about white supremacy groups in the United States? Are government campaigns such as “Don’t Be a Puppet” clumsy and counterproductive? As Radsch pointed out, the majority of people who hold extremist beliefs do not commit violent acts. She also said that the Turkish government had arrested journalists simply for covering PKK activities.
There was a lot of discussion about how to measure success or failure in countering violent online extremism. Maddox said that pro-ISIS tweets had declined by 45% over the past year. But Rondon and Mitchell suggested that this was not a good metric, again in part because people could be driven to “the dark web.” Several audience members seemed to share that view as well.
In general, Rondon, Mitchell and Radsch expressed deep skepticism about blocking or taking down content. They expressed a general preference for judicial, rather than extra-judicial, measures to counter violent extremism. More fundamentally, they questioned the very premise of somehow curtailing speech, however extremist, unless that speech was connected to an impending violent act. Rondon criticized the emergence of a “cottage industry” of government countering-violent-extremism programming in the Department of Homeland Security, the FBI, the State Department, and elsewhere.
J.D. Maddox spoke about the activities of the State Department’s Global Engagement Center (GEC), which was established this year by an Executive Order. The Center conducts campaigns, establishes partnerships, coordinates interagency policy, and oversees data analytics programming. Its mandate is to coordinate online countering of violent extremism overseas, not in the United States. The GEC’s most well-known international partnership is with the Sawab Center in the United Arab Emirates.
My intervention was based on the thinking we have been doing since SIIA, together with George Washington University, hosted a March 24, 2016 panel discussion on: “What Are the Responsibilities of Tech Companies in an Age of International Terrorism?” Basically, companies have three responsibilities.
· They have take-down responsibilities.
· They have countervailing responsibilities to foster free speech and association.
· They have an affirmative responsibility to counter violent extremism.
It is critical to understand in this context that while we consider these things to be “responsibilities,” they are not legal obligations. Rather, they are things companies should do as responsible corporate citizens. Legally, the most relevant provision in U.S. law is Section 230 of the Communications Decency Act.
Section 230 says:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230).
This shields companies from liability with respect to what other people say on the platforms that these companies provide. The measure was passed to resolve a legal catch-22 that emerged in early Internet jurisprudence. Responsible Internet companies responded to complaints that defamatory or otherwise harmful material was on their systems, and suddenly found themselves liable for having hosted it in the first place. Section 230 allowed them to play Good Samaritan and take down material that in their judgment was harmful without facing this legal liability.
Companies have used this legal protection to establish terms of service that can guide their socially responsible actions. So, for instance, Facebook has zero tolerance for ISIS accounts. The Brookings Institution reported earlier this year that: “Thousands of accounts have been suspended by Twitter since October 2014, measurably degrading ISIS’s ability to project its propaganda to wider audiences.”
We recognize that companies are going to be expected to play a part in countering violent extremist speech even in the absence of a judicial order. As socially responsible companies, they ought to respond to complaints that harmful material appears on their systems. And that has to include material encouraging violent extremism.
Tech companies need to be open and transparent about what material violates their terms of service and responsive to complaints that they have taken down material in error. But doing nothing is not responsible.