Is hate speech detection the solution the world wants?

Check out this article, published in the Proceedings of the National Academy of Sciences (PNAS), one of the most prestigious academic journals.

The article discusses the lack of a unified approach to identifying and addressing online hate speech among three stakeholders: governments, nonprofits, and platforms. While all agree that protecting individuals and mitigating hate speech are important, each pursues these goals differently. Governments delegate the job of responding to online hate speech to platforms, yet want to retain control over the solutions. Nonprofits frame the issue as a villain-victim relationship in which individuals are harmed by hate speech, and they tend to cast governments or platforms as the primary villains. Platforms primarily discuss their user moderation policies and their responsibility for maintaining safe online communities.

However, there are two key contradictions. First, platforms are the only stakeholder significantly concerned with detecting hate speech, yet even they do not rely solely on automated detection, depending instead on users to report harmful content. Second, both governments and platforms want to single-handedly decide how to respond to online hate speech, resulting in a power dispute. This lack of discussion about how to identify hate speech, and the disagreement over who decides how to respond to it, are concerning.
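To make the first contradiction concrete, here is a minimal sketch (my own illustration, not from the article) of how a platform might combine an automated classifier score with user reports before escalating content to human review. The function name, thresholds, and signals are all hypothetical:

```python
# Hypothetical moderation-queue logic: neither the automated classifier
# nor user reports is trusted alone; either a high model score or
# sufficiently many reports triggers human review.

def should_review(classifier_score: float, report_count: int,
                  score_threshold: float = 0.9,
                  report_threshold: int = 3) -> bool:
    """Flag content for human review if the automated model is highly
    confident OR enough users have reported it (illustrative thresholds)."""
    return classifier_score >= score_threshold or report_count >= report_threshold

# A confident model flags content on its own:
print(should_review(0.95, 0))   # True
# User reports catch content the model missed:
print(should_review(0.20, 5))   # True
# A borderline score with few reports goes unflagged:
print(should_review(0.50, 1))   # False
```

The point of the sketch is simply that detection in practice is a hybrid socio-technical pipeline, which is part of why "who decides" matters as much as "how to detect."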

The authors propose a paradigm shift in which the computer science (CS) community treats online hate speech as a problem that demands solutions rather than just methods. The CS community, they suggest, has deep technical knowledge of identifying hate speech and the creativity to devise innovative ways to mitigate it. However, the community needs to engage with the other stakeholders and take seriously their concerns about who governs the response to hate speech. The authors emphasize that only collaboration among all stakeholders, including the CS community, can address online hate speech effectively.

The authors argue that technical innovation that does not accommodate legal frameworks or ethical concerns and policy that does not reflect the most current technology are equally ineffective in creating real change. They stress the importance of considering hate speech in context and orienting CS research towards the concerns of other stakeholders to begin a collaborative pursuit towards a safe internet.

Overall, the article highlights the absence of a unified approach to identifying and addressing online hate speech, and it calls for a paradigm shift towards solutions-oriented research and genuine collaboration among stakeholders, the CS community included.