Social Media Platforms Continue to Innovate to Thwart Terrorist Propaganda


Facebook has 1.15 billion mobile daily active users, and every 60 seconds roughly 510,000 comments, 293,000 status updates and 136,000 photos flow through the platform. Unfortunately, some of this traffic is driven by terrorist propaganda, as terrorist organizations increasingly use social media to spread their violent messages to broader audiences. It is impossible for Facebook to comb through this volume of data and vet terrorist content without the help of technology. Like other social media and internet platforms, Facebook is increasingly using artificial intelligence (AI) to help analysts identify and block terrorist propaganda.

Previously, Facebook relied primarily on human moderators to identify inappropriate content such as child pornography, fake news and terrorist propaganda. Even after social media platforms began incorporating algorithms to flag this content, a human made the decision to take it down in each and every case. Although Facebook removed more terrorist propaganda over the past two years than ever before, this old approach had one major fault: when an algorithm identifies content and a human then removes it, nothing prevents the individual who posted or shared it from spreading the same violent material again through another account or alias.

Going forward, the combination of human expertise and AI will transform how social media platforms monitor and police terrorist propaganda. AI is the ability of machines to perform tasks that humans would deem “intelligent.” Human input remains important for judging content that may actually be satire or religious speech, while AI tools scan and block obvious terrorist content, improving efficiency and freeing human moderators to make the decisions that require human judgment. Moderators no longer have to take down every piece of content themselves, but they still play a pivotal role in identifying problematic content and making the initial call to remove it. Incorporating AI into new software tools then lets companies prevent individuals who post terrorist propaganda from opening new accounts or reposting content after it has been blocked or taken down. AI can also take down violent Facebook Live videos faster than a traditional algorithm or a human moderator could identify and remove them.
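To make the re-upload problem concrete, here is a minimal sketch of how a platform might keep previously removed material from reappearing. It assumes a simple exact-match fingerprint blocklist; Facebook has not published its implementation, and production systems rely on perceptual hashes that survive re-encoding and cropping. All names and data here are illustrative.

```python
import hashlib

# Hypothetical blocklist of digests for content moderators have already
# removed. Production systems use perceptual hashes that survive
# re-encoding and cropping; SHA-256 keeps this sketch self-contained.
blocked_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Record content that a human moderator has taken down."""
    blocked_hashes.add(fingerprint(data))

def is_known_violation(data: bytes) -> bool:
    """Check an upload against previously removed content before it goes live."""
    return fingerprint(data) in blocked_hashes

# A moderator removes a propaganda image once...
register_removal(b"<bytes of a removed image>")
# ...and any byte-identical re-upload is blocked automatically.
assert is_known_violation(b"<bytes of a removed image>")
```

The key design point is that the human decision happens once; every subsequent match is handled by software, which is what breaks the post-remove-repost cycle described above.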

Thus, at a time when society is relying on social media platforms to do more to prevent the spread of terrorist propaganda, Facebook has introduced new AI software tools that do just this. In her blog post “Hard Questions: How We Counter Terrorism,” Monika Bickert, Facebook’s Director of Global Policy Management, highlights technical solutions like language understanding, which “[analyzes] text that [they’ve] already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so [they] can develop text-based signals that such content may be terrorist propaganda.” Moreover, this new AI-driven software can scan videos before they are posted and can automatically recognize and block videos that have already been identified as terrorist material before anyone on Facebook sees them.
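Facebook has not released details of its language-understanding model, but the general technique of developing “text-based signals” from previously removed posts can be sketched as a supervised text classifier. The toy training examples, the scikit-learn pipeline, and the 0.9 review threshold below are all assumptions for illustration, not Facebook’s actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for text moderators have already removed (label 1)
# and benign posts about the same topics (label 0).
removed_posts = ["join the fight for the caliphate", "support our martyrs"]
benign_posts = ["news analysis of the conflict", "prayers for peace in the region"]

# Learn text-based signals from the previously removed material.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(removed_posts + benign_posts,
          [1] * len(removed_posts) + [0] * len(benign_posts))

def flag_for_review(post: str, threshold: float = 0.9) -> bool:
    """Route a post to human moderators only when the model is confident."""
    return model.predict_proba([post])[0][1] >= threshold
```

A high threshold reflects the division of labor the post describes: the model surfaces likely propaganda at scale, while ambiguous cases, such as satire or religious speech, stay with human reviewers.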

In addition to the new software tools, Facebook plans to add 3,000 new moderators to the team that vets content violating its policies, supporting the roughly 150 employees already focused exclusively on counterterrorism. Even though previously vetted material is now taken down automatically without a human looking at it again, human expertise remains vital to Facebook’s counterterrorism approach. As Bickert notes in her post, “AI can’t catch everything.” Facebook employees review reports in many different languages, 24 hours a day, and a global team stands ready to respond quickly to emergency requests from law enforcement. These new AI tools, combined with human expertise, will help Facebook scan for terrorist content more accurately and efficiently. In this way, Facebook and other social media platforms can contribute to a stronger and safer community.

As we have highlighted in previous blogs, SIIA encourages voluntary cooperation between social media companies and law enforcement. Social media companies already remove significant amounts of terrorist content from their platforms, and AI will only enhance their ability to scan for and remove terrorist propaganda. Facebook’s global team responding to emergency requests from law enforcement agencies is a good example. In fact, recent reports suggest that Europol refers cases to various tech companies for evaluation, and that these companies “cooperate with 94% of Europol’s requests” to remove this content. Internet platforms already play a critical role in the global campaign against online extremist propaganda, and AI is critical to enhancing that role.

Moving forward, it is essential that this cooperation remain voluntary and that both parties work together to eradicate terrorist propaganda from Internet platforms. Moreover, we encourage all Internet social networks and platforms to embrace AI and other innovative tools to fight the spread of terrorist content online.

Nikola Marcich is an intern with the SIIA Policy team. He is currently an undergraduate student at the University of Virginia studying foreign affairs and international economics.