In the wake of the tragic violence in Charlottesville earlier this month, it is even more evident that online extremism continues to pose serious threats to society. While battling extremism is an important step in preventing the spread of hate and violence around the world, the work faces many inherent challenges, including the sheer volume of daily posts, the difficulty of judging a video's intent, and the need to respect free speech rights. In spite of these challenges, industry has been committed to this effort for many years. We have highlighted some of this work in the past, including industry efforts to cooperate with law enforcement and the formation of a partnership to fight terrorism.
We have seen multiple positive developments on this front in 2017. For instance, leading internet companies formed the Global Internet Forum to Counter Terrorism, Facebook deployed AI in its algorithms to quickly and efficiently remove extremist content, and Google announced four new steps to crack down on extremist videos online.
Building on its announcement in June, Google implemented some of its new tools last week. Going forward, videos that contain inflammatory religious or supremacist content will appear behind a warning and will not be monetized, recommended, or eligible for comments or endorsements by other users. This step will help Google continue to allow for free expression and access to information without promoting extremely offensive or dangerous viewpoints.
Additionally, Google is expanding its role in counter-radicalization efforts by building on its Creators for Change Program. This program has had success promoting YouTube voices against hate and radicalization by harnessing targeted online advertising to reach potential ISIS recruits across Europe and redirect them toward anti-terrorist videos that could change their minds about joining. Thus far, click-through rates to videos that debunk terrorist recruiting messages have been high.
Other new developments from Google on this front represent a doubling down on methods that have proven to work in identifying and blocking terrorism-related content. The effort is two-fold. First, increasing the use of technology to help identify this content, including devoting more engineering resources to applying Google's most advanced machine learning research to train new "content classifiers." Second, greatly expanding the human resources involved: the independent experts in YouTube's Trusted Flagger Program.
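Google has not published the details of these classifiers, but the basic idea behind a "content classifier" can be illustrated with a toy example. The sketch below trains a simple bag-of-words Naive Bayes model on a handful of invented, hypothetical labeled examples and then flags new text; real systems use far larger datasets and far more sophisticated models.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Split text into lowercase word tokens."""
    return text.lower().split()

class NaiveBayesClassifier:
    """Toy bag-of-words Naive Bayes text classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> per-word counts
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior plus log likelihood of each word, with add-one smoothing
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training data: labels and phrasing invented for illustration only.
examples = [
    ("join our violent cause today", "flag"),
    ("violent extremist propaganda recruiting", "flag"),
    ("cute cat compilation video", "ok"),
    ("cooking tutorial pasta recipe", "ok"),
]
clf = NaiveBayesClassifier()
clf.train(examples)
print(clf.predict("extremist recruiting video"))  # flag
print(clf.predict("cat cooking video"))           # ok
```

In practice, classifiers like this only surface candidates; the human reviewers mentioned above still make the final call on borderline content.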
The industry response to online extremism will continue to evolve with the threat and to take advantage of new technological opportunities. There will never be a silver bullet to stop online extremism, but industry will remain committed to fighting it.