Check out this new article by Z.P. Rosen and Rick Dale in Humanities and Social Sciences Communications that dives into how online hate speech affects the way people communicate on Reddit. The paper asks a simple but powerful question: does antisemitic and Islamophobic hate speech change how others talk in the same thread?
To test this, the authors looked at Reddit comment threads, analysing how expressions of antisemitic or Islamophobic hate speech influence the diversity of language in subsequent comments. They used a computational method called convergence-entropy (CE), which measures how much the content of one comment can be “recovered” or predicted from another. Lower CE means more repetition and less diversity.
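The paper's actual CE measure is built on distributed semantic representations, but the core intuition can be illustrated with a much simpler proxy: score a target comment by how surprising its words are under a smoothed unigram model of a source comment. Everything below (function name, tokenization, smoothing constant, example sentences) is an illustrative sketch, not the authors' implementation; lower scores mean the target recycles more of the source's language, mirroring how lower CE signals reduced diversity.

```python
import math
from collections import Counter

def cross_entropy(source: str, target: str, alpha: float = 0.1) -> float:
    """Toy proxy for convergence-entropy: mean surprisal (in bits) of the
    target comment's tokens under an additively smoothed unigram model
    of the source comment. Lower = more lexical convergence."""
    src_tokens = source.lower().split()
    tgt_tokens = target.lower().split()
    counts = Counter(src_tokens)
    vocab = set(src_tokens) | set(tgt_tokens)
    total = len(src_tokens) + alpha * len(vocab)  # additive smoothing
    return -sum(
        math.log2((counts[t] + alpha) / total) for t in tgt_tokens
    ) / len(tgt_tokens)

# Hypothetical comments for illustration only
parent = "they always push the same agenda everywhere"
echo   = "they really do push the same agenda"        # mirrors the parent
fresh  = "interesting thread, what does the data actually show"

# The echoing comment is less "surprising" given the parent
assert cross_entropy(parent, echo) < cross_entropy(parent, fresh)
```

In the paper's terms, a thread where sibling comments score consistently low against a hateful comment would be showing exactly the convergence effect the authors report.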
They collected data from subreddits known to tolerate hate content, based on mentions in r/AgainstHateSubreddits, and applied a hate speech classifier to rate the intensity of hate in comments. They then measured CE between pairs of comments—looking at both direct replies and sibling comments (replies to the same parent comment). The dataset included thousands of Reddit comments posted before and after October 7, 2023, the date marking the escalation of the Israel-Gaza conflict.
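The distinction between direct replies and sibling comments comes straight from the thread's tree structure. As a sketch of how those two kinds of pairs can be extracted (the flat `(comment_id, parent_id)` records and all identifiers here are hypothetical, not the paper's actual data pipeline):

```python
from itertools import combinations

# Hypothetical flat comment records: (comment_id, parent_id).
# "post" marks top-level comments replying to the submission itself.
comments = [
    ("c1", "post"), ("c2", "c1"), ("c3", "c1"),
    ("c4", "c2"),   ("c5", "post"),
]

def reply_pairs(records):
    """Direct-reply pairs: (parent_comment, child_comment)."""
    ids = {cid for cid, _ in records}
    return [(pid, cid) for cid, pid in records if pid in ids]

def sibling_pairs(records):
    """Pairs of comments answering the same parent (including the post)."""
    by_parent = {}
    for cid, pid in records:
        by_parent.setdefault(pid, []).append(cid)
    return [pair for kids in by_parent.values()
            for pair in combinations(kids, 2)]

# c2 and c3 are siblings (both reply to c1); c2 is a direct reply to c1.
```

With pairs in hand, a CE score can be computed for each, which is essentially the comparison the authors run separately for the two pair types.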
Here’s what they found: comments that scored high for antisemitic or Islamophobic hate speech led to a measurable drop in lexico-semantic diversity in sibling comments. In other words, when someone posts a strongly antisemitic or Islamophobic comment, others responding in the same thread tend to mirror the ideas and language of that initial post more closely. This effect was amplified after October 7, suggesting that real-world conflict increased the influence of online hate speech.
Interestingly, direct replies to hateful comments didn’t show the same consistent pattern. This suggests that “hate parties”—threads where multiple users pile on with similar hate-laden content—are more about users responding to the same comment rather than directly replying to each other.
The policy implications are clear: deleting a single hateful comment might not be enough. Moderators should also look at the surrounding comment thread, especially sibling replies, which may be part of a larger pattern of hate-based convergence. For researchers, this opens new directions for studying how hate spreads not just in content, but in conversational dynamics. And for platforms, it’s a push to rethink how they detect and manage coordinated hate speech at the thread level, not just the individual comment level.