During a recent three-month period, the Counter Extremism Project (CEP) conducted a limited study using its own video-matching technology and YouTube’s API to identify the presence on YouTube of a small sample of 229 previously identified ISIS-generated videos (just a fraction of the trove of extremist material available). The goal of the study was to better understand the distribution and prevalence of extremist material online. Over the course of that three-month period, and within CEP’s narrow research parameters, this is what we learned:

No fewer than 1,348 ISIS videos were uploaded to YouTube in a three-month period, garnering more than 163,000 views.
91 percent of the videos were uploaded more than once.
76 percent of the videos remained on YouTube for less than two hours, but still generated a total of 14,801 views.
278 different YouTube accounts were responsible for these 1,348 uploads, one of which uploaded as many as 50 videos.
60 percent of accounts remained active even after videos from the account had been removed for content violations.
Our findings reveal that YouTube’s combination of automatic and manual review to identify and remove terrorist content is failing to effectively moderate hate speech on its platform. Additionally, YouTube’s promise to take action against accounts that repeatedly violate its terms of service is simply not being enforced.

Each day, too much dangerous material is being uploaded, it is remaining online for too long, and even if it is eventually removed, it quickly reappears, meaning that this content is effectively always available as a pernicious radicalizing force.

Measuring the efficacy of countering online extremism by the number of take-downs and the time to take-down does not tell the complete story. We should instead measure efficacy by how many views violent or radicalizing content garners, how many times it is uploaded and allowed to remain online, and how aggressively accounts are removed after clear violations. ISIS material, like other propaganda, is posted to influence opinions and actions, and a larger audience raises the possibility that someone will commit an act of terrorism. Setting standards for removal time periods is a good first step, but lawmakers should also consider regulating, and potentially fining, companies based on highly viewed terrorist material and on inaction in removing accounts that repeatedly upload banned content.

There’s no doubt that social media companies, through major lobbying and public-relations campaigns, now say the right things about the connection between extremist content and terrorist acts – a connection they previously denied. But when examples of terrorist content – including brutal executions and beheadings – remain pervasive and are easily found online, it is time to question whether removal counts and take-down times are inflated metrics.

A study released by the public relations firm Edelman starkly shows that trust in social media companies is falling worldwide and that consumers want legislators and advertisers to push for industry reforms. Insisting on a considerably more aggressive approach to addressing online extremism would be a good first step.
