The Untapped Social Potential of Hate Speech Detection

Check out this paper by Sidney G.-J. Wong on the social benefits of hate speech detection research. The study systematically reviews 48 hate speech detection systems to assess why, despite the rapid development of natural language processing (NLP) tools, these systems have not seen widespread uptake among policymakers and non-profit organisations.

A key issue identified is the lack of engagement with ethical frameworks in the development of hate speech detection systems. While NLP researchers have focused on improving model accuracy and performance, the paper argues that this emphasis has overshadowed social and ethical concerns, particularly those of the communities affected by hate speech. Wong finds that many of these systems fall short of ethical standards, especially in terms of fairness and accountability. For example, 95.8% of the systems (46 of the 48 reviewed) did not meet the principle of accountability, which includes involving affected communities in a system's design and evaluation.

Wong applies the Responsible Natural Language Processing (RNLP) model, an ethical framework for evaluating NLP systems. The paper finds that, while some progress has been made in areas like transparency (for example, publishing de-identified data), many systems still rely on anonymous crowdsourcing to label hate speech data. This practice raises concerns about both data quality and the well-being of annotators, who may be exposed to harmful content with insufficient support.

The paper concludes that for hate speech detection systems to achieve greater social impact, researchers must move beyond technical improvements and work collaboratively with affected communities. Wong suggests that adopting more robust ethical frameworks, such as RNLP, can help ensure these systems are not only technically effective but also socially responsible.