In the wake of the tragic violence in Charlottesville earlier this month, it’s even more evident that online extremism continues to pose serious threats to society. While battling extremism is an important step in preventing the spread of hate and violence around the world, there are many inherent challenges, including the sheer volume of daily posts, the difficulty of determining a video’s intent, and the need to respect free speech rights. In spite of these challenges, the industry has been committed to this effort for many years—we’ve highlighted some of this work in the past, including industry efforts to cooperate with law enforcement and the formation of a partnership to fight terrorism.
Terrorists and other hate groups like al-Qaeda, ISIS, white supremacists, and neo-Nazis use social media and video streaming platforms to publish and spread their hateful and offensive content for radicalization, propaganda, or organizational purposes. After the recent tragic events in Charlottesville, the tech community has been figuring out how to respond. Platforms have increased the rate at which they take down white supremacist content or make it harder to find. But many companies and platforms have been flagging and removing such harmful material for a long time, especially terrorist content.