Researchers from the Tackling Hate Lab have just published a new paper exploring how different strategies to counter online extremism work together. The paper used Agent-Based Modelling, a method still novel in terrorism studies, to test two common approaches: deplatforming and inoculation. It asked whether inoculation could enhance or partly replace deplatforming in reducing extremism, and how that trade-off plays out in costs.
The researchers recreated an anti-immigration Facebook group as a testing ground. They then built a computational model that populated this virtual group with four kinds of agents. The first were ordinary users, who came across posts and conversations. These users were modelled to behave a bit like real people processing information: every time they saw something new, they weighed it against what they already believed before deciding whether to shift their views. The second group were extremists, whose role was to spread radical messages. The third group were enforcers, who acted like moderators or police, monitoring the space and removing extremists when they were identified. Finally, the model included inoculators. Their job was to deliver inoculation messages, that is, short exposures to weakened extremist arguments that help people build resistance, much like a vaccine trains the body to fight off a virus.
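To give a flavour of how such a model fits together, here is a minimal Python sketch of the agent types and the daily update. It is our illustration, not the authors' code: the 0 to 1 belief scale, the 0.6 "vulnerable" cut-off, the halved susceptibility for inoculated users, and every rate below are placeholder assumptions.

```python
import random

class User:
    """An ordinary user whose belief sits on a 0-1 scale (1 = fully radicalised)."""
    def __init__(self, vulnerability):
        self.belief = vulnerability
        self.inoculated = False

    def process_message(self, message_strength, weight=0.1):
        # Weigh the new message against the existing belief, shifting it
        # only partially; inoculation halves susceptibility (assumed).
        resistance = 0.5 if self.inoculated else 1.0
        self.belief += weight * resistance * (message_strength - self.belief)

def daily_step(users, n_extremists, monitoring_rate, inoculation_rate):
    """One simulated day: inoculators reach some users, extremists post,
    enforcers detect and deplatform some extremists."""
    for user in users:
        if not user.inoculated and random.random() < inoculation_rate:
            user.inoculated = True  # brief exposure to a weakened argument
        for _ in range(n_extremists):
            user.process_message(message_strength=1.0)
    removed = sum(random.random() < monitoring_rate for _ in range(n_extremists))
    return n_extremists - removed  # extremists left on the platform

# One year of daily steps for a 500-user group (all numbers illustrative).
group = [User(vulnerability=random.random()) for _ in range(500)]
extremists = 10
for day in range(365):
    extremists = daily_step(group, extremists,
                            monitoring_rate=0.05, inoculation_rate=0.02)
vulnerable_share = sum(u.belief > 0.6 for u in group) / len(group)
```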
To make the model realistic, the researchers drew on real data. People's vulnerability to extremist ideas was calibrated from systematic reviews of risk factors for radicalisation, such as age, education, or prior attitudes. The strength of inoculation was calibrated from the results of randomised experiments testing inoculation against extremism. The costs of moderation were grounded in real wages and productivity data from commercial content moderators.
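As a concrete illustration of what that calibration step might look like, the sketch below maps a few of those risk factors onto an initial vulnerability score. The weights and cut-offs here are invented placeholders, not the values the authors derived from the systematic reviews.

```python
def initial_vulnerability(age, years_education, prior_attitude):
    """Hypothetical mapping from risk factors to a 0-1 vulnerability score;
    prior_attitude is a 0-1 measure of prior anti-immigration sentiment."""
    youth = max(0.0, (30 - age) / 30)                      # younger -> higher risk
    low_education = max(0.0, (16 - years_education) / 16)  # fewer years -> higher risk
    score = 0.4 * prior_attitude + 0.3 * youth + 0.3 * low_education
    return min(1.0, score)
```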
The simulation ran for one year of online activity, broken down into daily steps. It tested 121 different mixes of policies, combining 11 levels of monitoring with 11 levels of inoculation frequency. Each mix was repeated 100 times to capture random variation. The main performance measure was the share of users classed as "vulnerable" to extremist beliefs, tracked both as a year-long average and as the level at year's end. The authors then ran statistical analyses and plotted what they called a "possibility frontier" to see which policy combinations pushed risk down most efficiently. They also ran a cost–benefit analysis to see how much different mixes would cost under US and Kenyan labour market conditions.
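The design of that sweep is easy to picture in code. The sketch below runs the 11 by 11 grid with 100 repetitions per cell, which is 12,100 simulated years; the toy run_simulation stand-in, including its noise term, is our invention and would be replaced by the full agent-based model. The mean and spread per cell are the raw material for a possibility frontier plot.

```python
import random
import statistics

def run_simulation(monitoring, inoculation, seed):
    # Toy stand-in for the agent-based model: returns the share of users
    # ending the year vulnerable. The response surface and noise term are
    # invented placeholders, not the paper's dynamics.
    rng = random.Random(seed)
    base = 0.5 * (1 - monitoring) * (1 - 0.5 * inoculation)
    noise = rng.gauss(0, 0.05 * (1 - 0.8 * inoculation))
    return min(1.0, max(0.0, base + noise))

def sweep(n_levels=11, n_reps=100):
    """11 levels of monitoring x 11 levels of inoculation, 100 runs each."""
    results = {}
    for i in range(n_levels):
        for j in range(n_levels):
            m, k = i / (n_levels - 1), j / (n_levels - 1)
            runs = [run_simulation(m, k, seed=r) for r in range(n_reps)]
            results[(m, k)] = (statistics.mean(runs), statistics.stdev(runs))
    return results  # 121 policy mixes: mean and spread of the vulnerable share
```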
The results are revealing. Both deplatforming and inoculation worked to reduce the number of vulnerable users. The two also reinforced one another: inoculation made deplatforming more effective, although only up to a point. Inoculation could not replace deplatforming altogether, but it reduced the need for costly removals by stabilising belief dynamics in the group. A notable finding was that inoculation made outcomes more predictable, lowering random swings in the spread of extremism and supporting group resilience.
Policy implications. Soft approaches like inoculation should not be treated as optional extras. They should be budgeted alongside enforcement to get the best results at lower cost. Policymakers can use the possibility frontier to plan cost-efficient mixes, and should prioritise steady, persistent inoculation over one-off, broad coverage. Inoculation also helps where enforcement capacity is limited or legally constrained.
Implications for research. Future work should combine real-world trials of inoculation and other countering violent extremism (CVE) interventions with simulations to work out the right dose, timing, and persistence of each. The model could also be extended to encrypted platforms, and used to test how policy shifts or platform ownership changes affect extremist recruitment.