This research project has developed an evidence-based approach for classifying violent extremist content that appears on Facebook. Its aim is to feed into Facebook’s broader policy work on content review and to help the company forecast when intervention or escalation may be necessary.
Facebook’s Community Standards state:
“We are committed to removing content that encourages real-world harm, including (but not limited to) physical, financial and emotional injury. [Furthermore,] we do not allow content that praises any of the above organisations or individuals or any acts committed by them.”
As such, Facebook tries to remove all materials produced by proscribed violent extremist organisations, unless they are being disseminated in very specific contexts (e.g., by journalists or academics). In recent years, these efforts have become highly effective, with almost all terrorist content being removed from the platform before it has even been reported.
However, not all terrorist content is created equal. The removal of some materials needs to be prioritised over that of others. Because Facebook uses technology to detect harmful content but relies on human reviewers to make the final decision about what happens to it, it faces a serious challenge in deciding how, when, and what to prioritise for review. A distinction needs to be made between materials that must be fast-tracked for review and materials that can be queued.
This project developed a way to help Facebook make that distinction in a timely and accurate fashion. Using the Islamic State as an initial test case, but with a view to future, cross-ideological applications, it refines how harmful content is classified. Its basic premise is that the intent behind harmful content can be codified by studying, and then testing, the logic behind its production. If intent can be identified, that is, if a clear distinction can be drawn between tactical, action-based content and strategic, brand-based content, then content can be prioritised more accurately according to the risk it poses.
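The prioritisation logic described above can be sketched in miniature. The sketch below is purely illustrative and is not the project’s or Facebook’s actual system: the `Intent` labels mirror the tactical/strategic distinction from the text, while the class names, risk weights, and queue design are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from enum import Enum
import heapq

class Intent(Enum):
    TACTICAL = "tactical"    # action-based: incitement, instructions, targeting
    STRATEGIC = "strategic"  # brand-based: propaganda, recruitment narratives

# Assumed risk weighting: lower number = higher urgency.
# Tactical, action-based content is fast-tracked ahead of strategic content.
RISK = {Intent.TACTICAL: 0, Intent.STRATEGIC: 1}

@dataclass(order=True)
class ReviewItem:
    priority: int                                # only field used for ordering
    content_id: str = field(compare=False)
    intent: Intent = field(compare=False)

class ReviewQueue:
    """Surfaces the highest-risk (lowest priority number) item first."""
    def __init__(self):
        self._heap = []

    def enqueue(self, content_id: str, intent: Intent) -> None:
        heapq.heappush(self._heap, ReviewItem(RISK[intent], content_id, intent))

    def next_for_review(self) -> ReviewItem:
        return heapq.heappop(self._heap)

queue = ReviewQueue()
queue.enqueue("post-001", Intent.STRATEGIC)
queue.enqueue("post-002", Intent.TACTICAL)
first = queue.next_for_review()
# The tactical item is reviewed first, even though it was enqueued second.
```

In a real deployment the intent label would itself come from a classifier rather than being supplied by hand, and the risk weighting would be far richer than a two-level ordering; the sketch only shows how an intent distinction translates into a review ordering.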