The use of the internet for radicalisation, particularly via social media platforms, is well documented. Increasingly, this radicalisation is disguised within internet memes, which use humour and coded language to influence audiences subtly. This article highlights the difficulties such content poses for countermeasures, even under new instruments such as the Digital Services Act (DSA).
The von der Leyen Commission has taken an assertive approach to regulating online activity in order to protect the EU’s digital sovereignty. Measures have been put in place to address political advertising, disinformation, hate speech, and online radicalisation. However, concerns have been raised about how effectively social media platforms prevent malicious communications. In response, the Commission proposed the DSA to clarify the obligations of social media platforms without creating new rules on what constitutes illegal content. The act builds on the principles set out in the E-Commerce Directive, aiming to ensure online safety while protecting fundamental rights.
The DSA expands the existing framework through policy layering, retaining the provisions on intermediary immunity from liability. It introduces a new ‘co-regulatory structure’ that allows national authorities to order the removal of content, with such orders communicated to Digital Services Coordinators in other Member States. Large platforms such as Facebook and Twitter are required to assess the risks posed by illegal content and its impact on fundamental freedoms, such as freedom of expression. Yet, despite its comprehensive scope, the DSA still largely leaves it to the platforms themselves to decide how best to fulfil these obligations.
However, the act does not substantially change the approach to tackling hate speech conveyed through memes and other forms of coded communication. The Code of Conduct on Countering Illegal Hate Speech Online remains the cornerstone of these efforts. What has changed is the growing recognition of the scale of the ‘ironic hate’ problem online: fringe communities often produce hate-filled memes that go viral and spread to larger platforms, blurring the line between humour and potentially radicalising messages.
This issue is made more complex by the pervasive use of memes as vehicles for covert radical messages. One example is the subversion of the ‘Virgin vs. Chad’ meme, which can be used to express antifeminist messages under the guise of humour. Similarly, ‘Pepe the Frog’, a recurring symbol in alt-right messaging, has been adopted by a variety of groups, further complicating content moderation.
Algorithms play an increasingly critical role in content moderation because of the sheer volume of online communications. However, the opacity of these algorithms and their potential to over-remove content, to the detriment of freedom of expression, raise serious concerns. Under-removal is also a problem: the subtle and ironic use of language in hate speech can be difficult to detect, especially for machine-learning systems.
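To make the under-removal problem concrete, the following is a minimal, hypothetical Python sketch of a keyword-based filter of the kind that underpins many simple moderation heuristics. The blocklist, example posts, and flagging logic are invented for illustration only and do not describe any particular platform’s system; the point is simply that explicit terms are caught while coded meme framing and ironic phrasing pass through untouched.

```python
# Hypothetical sketch: why literal keyword matching misses coded or ironic hate speech.
# All terms and posts below are invented placeholders for illustration.

BLOCKLIST = {"<explicit-slur>", "exterminate"}  # placeholder explicit terms


def keyword_flag(post: str) -> bool:
    """Flag a post only if it contains an explicit blocklisted term."""
    tokens = {token.strip(".,!?'\"").lower() for token in post.split()}
    return bool(tokens & BLOCKLIST)


posts = [
    "We should exterminate them all.",                  # explicit wording: caught
    "Virgin outsider vs Chad true patriot, lol",        # coded meme framing: missed
    "It's just a joke about 'certain people', right?",  # irony / dog whistle: missed
]

for post in posts:
    status = "FLAGGED" if keyword_flag(post) else "allowed"
    print(f"{status}: {post}")
```

A learned classifier trained on explicitly hateful examples faces a version of the same gap: if the training data does not capture the coded, humorous register in which these messages circulate, the model has little basis for flagging them.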
The digital platforms themselves retain considerable discretion in these matters. For instance, following Elon Musk’s takeover of Twitter, the platform announced changes to its content moderation guidelines in line with his ‘absolutist’ views on freedom of speech.
In conclusion, regulating online speech is complex because it requires balancing protection from harm against freedom of expression. Current self-regulatory approaches, based on principles established in the late 1990s, struggle with non-traditional forms of hate speech disguised as humour, irony, and memes. Whether through algorithmic control or human intervention, attempts to combat radicalising content will likely continue to face these challenges.