Why some people overestimate their exposure to hate speech online

Check out this new study by Dominique Wirz and Sina Blassnig, published in Information, Communication & Society, which investigates the gap between subjective and objective exposure to hate speech online: how often people actually encounter it, and whether their perceptions match what can be objectively documented.

The researchers set out to determine how often people think they are exposed to hate speech compared to documented instances, and to explore how social identity shapes which statements people perceive as hate speech.

To tackle these questions, the study combined a representative survey with a mobile longitudinal linkage study. The first part surveyed 2,000 Swiss internet users, asking how often they encountered hate speech, how they classified it, and how they evaluated different forms of offensive content. In the second part, 119 participants who reported frequent exposure to hate speech documented their encounters over two weeks by uploading screenshots. The researchers manually analysed the resulting 564 screenshots to determine whether they met academic definitions of hate speech.

The findings revealed a significant overestimation of exposure. While 69% of survey respondents stated they had encountered hate speech online, the experience sampling study showed that many had overestimated their exposure: only about a third of participants uploaded enough screenshots to support their self-reported frequency of encounters. The study also found that many people conflate impoliteness with hate speech. Only 48% of the documented screenshots contained hate speech, whereas 66.8% featured impoliteness, that is, rude or offensive content that falls short of legal or academic definitions of hate speech. This suggests that many users interpret merely uncivil comments as hate speech. The research also highlighted the influence of social identity: people were more likely to perceive statements as hate speech if they targeted a group they identified with. Threats, however, were almost universally classified as hate speech, while insults and defamation were more contested.

The study has several policy and research implications. It highlights challenges for content moderation: if users classify merely impolite statements as hate speech, moderation decisions based on stricter definitions may strike them as inconsistent. The findings also suggest a lack of public awareness about what constitutes legally punishable hate speech, pointing to a need for education campaigns that clarify these distinctions. Finally, the study underscores the importance of better measurement approaches: since self-reports tend to overestimate exposure while experience sampling may underestimate it, future research should explore more passive tracking methods, such as screen recording or data donation.