This first appeared in The Hill's Congress Blog.
The U.S. last week carried out a military strike in Iraq as part of the ongoing battle against ISIS. While the reliance on military action may be at the core of our anti-terror efforts, it has recently become clear that the government is seeking to also rely on Facebook (and Google, and Yahoo, and Reddit, and any other online platform). The desire among political leaders to deputize tech companies in the war on terror is not only a significant threat to free speech and online communications; it is unlikely to produce results.
Social media companies understand their powerful position, recognizing that content on their websites appears globally and can be used for good or ill. Thus, companies have adopted nuanced and evolving user rules that allow them to meet their social responsibilities. They enforce these rules by removing or refusing to distribute illicit material, and by suspending or terminating users who violate them.
These voluntary efforts were discussed earlier this week in New York City, as part of the State Department’s Global Youth Summit Against Violent Extremism. Google, Facebook and other tech companies gave insights into how they combat extremism, while protecting free speech. Recognizing their responsibility – and frankly, that it’s important for the success of their own businesses – these companies are generally going above and beyond what is required under federal law.
Monika Bickert, Facebook's head of global policy management, detailed how the company thinks about offensive content. According to Politico, Bickert said Facebook responds when posts, comments, photos and so on have been reported by users. But, "with extremist content, our experience has been that people tend to be in groups, and so if we disable an account for being a member of a violent organization, for instance, we may fan out from that account and try to find other bad actors or bad content that's associated with that person."
Social media platforms are actively reporting what needs to be reported and taking down what needs to be taken down. Meanwhile, Twitter’s rules explicitly state that users “may not make threats of violence or promote violence, including threatening or promoting terrorism.” And yet, there is a growing push in Washington to limit the First Amendment rights of these private companies and put government-mandated responsibility on them for stopping the spread of extremism.
This summer, an effort was made as part of the Senate Intelligence bill to force social media companies and others to report "terrorist activity" – a term the bill did not define – to the U.S. government. Such a law would essentially require private companies to begin acting as government law enforcement agents. Recognizing the clear dangers of deputizing these companies, Sen. Ron Wyden (D-Ore.) was able to kill this provision. Yet once policymakers start down the path of looking to Internet platforms to advance a government agenda, it is very hard to stop.
The White House, for example, continues its efforts to recruit social media platforms to be part of a message campaign against ISIS. It would do far more harm than good for social media, already viewed around the world as too closely linked to the U.S. government, to become the foreign broadcasting channel for America’s propaganda wars.
Social media is far from perfect, but it is an unquestionably amazing tool that has allowed for the autonomous expression of ideas and advocacy in the 21st century global information economy. To turn the Internet into a law enforcement vehicle or a reincarnation of the Cold War Voice of America would do lasting social and economic damage. We would lose the vibrancy, freedom, economic growth, jobs and innovation that these platforms have enabled in the digital age.
When it comes to policy, leaders must consider the long-term value created by digital media companies and their products. And they should be guided by Section 230 of the Communications Decency Act – which provides that online platforms should not be "treated as the publisher or speaker of any information provided by another information content provider." Ultimately, government efforts to combat violent extremism should not rely on the platforms that terrorists might use to spread their messages, but on stopping the terrorists themselves.