Check out this recent article on hate speech detection using gaze data. The authors present GAZE4HATE, a dataset of eye-tracking data gathered during a hate speech annotation experiment, and investigate whether gaze data can provide insight into annotators’ subjective judgments of hatefulness and improve hate speech detection (HSD) models.
The dataset combines hatefulness ratings for each text, eye movement data recorded as participants read the statements, and explicit rationales marked by annotators. The researchers built the dataset around “minimal pairs”: pairs of statements in which specific words or phrases are changed to alter the perceived hatefulness. For instance, a statement like “Women can do nothing and are too stupid” can be modified to refer to “minions,” rendering it neutral. This setup allows comparisons across hate, neutral, and positive categories.
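To make that structure concrete, here is a minimal sketch of how a single GAZE4HATE-style record could be represented. The field names (e.g. `dwell_time_ms`, `hatefulness_rating`, `condition`) and the example values are assumptions for illustration, not the dataset’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema -- the released GAZE4HATE files may be organized differently.
@dataclass
class GazeFixation:
    word_index: int        # index of the fixated word in the statement
    dwell_time_ms: float   # total time the gaze rested on that word
    fixation_count: int    # number of separate fixations on the word
    pupil_size: float      # mean pupil diameter during those fixations

@dataclass
class AnnotatedStatement:
    text: str                   # the statement shown to the participant
    condition: str              # "hate", "neutral", or "positive" (minimal-pair condition)
    hatefulness_rating: int     # the annotator's subjective rating (illustrative scale)
    rationale_tokens: List[int] = field(default_factory=list)  # word indices marked as rationale
    fixations: List[GazeFixation] = field(default_factory=list)

# A minimal pair: the same sentence frame with the target group swapped,
# flipping the statement from hateful to neutral (ratings here are made up).
hate_item = AnnotatedStatement(
    text="Women can do nothing and are too stupid",
    condition="hate",
    hatefulness_rating=5,
)
neutral_item = AnnotatedStatement(
    text="Minions can do nothing and are too stupid",
    condition="neutral",
    hatefulness_rating=1,
)
```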
In their experiment, the authors collected data from 43 German university students who read and rated a set of statements. The eye tracker captured features such as dwell time, fixation counts, and pupil size, which varied significantly with the perceived level of hate. The analysis showed that these gaze metrics correlated with the hatefulness ratings assigned by annotators, making them potential indicators of subjective perceptions of hate.
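As a rough illustration of this kind of analysis, the sketch below computes a rank correlation between per-trial gaze metrics and hatefulness ratings. The column names and dummy values are hypothetical, and the paper’s own statistics may rely on different tests (e.g. mixed-effects models).

```python
import pandas as pd
from scipy.stats import spearmanr

# In practice the rows would come from the released GAZE4HATE data;
# a tiny dummy table stands in here so the snippet runs on its own.
df = pd.DataFrame({
    "dwell_time_ms":      [420.0, 610.5, 380.2, 730.1, 295.4, 510.9],
    "fixation_count":     [3, 5, 2, 6, 2, 4],
    "pupil_size":         [3.1, 3.6, 3.0, 3.8, 2.9, 3.4],
    "hatefulness_rating": [1, 4, 1, 5, 0, 3],
})

# Spearman correlation between each gaze feature and the subjective rating.
for feature in ["dwell_time_ms", "fixation_count", "pupil_size"]:
    rho, p = spearmanr(df[feature], df["hatefulness_rating"])
    print(f"{feature}: Spearman rho = {rho:.3f} (p = {p:.4f})")
```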
The study also introduced MEANION, a novel HSD model that integrates gaze features with text-based hate speech classifiers. MEANION outperformed text-only classifiers, particularly on binary classification of hate versus non-hate, suggesting that gaze data contributes complementary information and improves predictions of annotators’ subjective judgments.
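The article does not spell out MEANION’s internals, but one common way to combine the two signal types is late fusion: concatenate a sentence embedding from a text encoder with a small vector of statement-level gaze features and classify the result. The PyTorch sketch below is only an assumed illustration of that general idea, not MEANION’s actual architecture; the dimensions and feature choices are placeholders.

```python
import torch
import torch.nn as nn

# A minimal late-fusion sketch (NOT the MEANION architecture): a text encoder's
# sentence embedding is concatenated with projected gaze features before a
# shared classification head.
class GazeTextFusionClassifier(nn.Module):
    def __init__(self, text_dim: int = 768, gaze_dim: int = 4, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.gaze_proj = nn.Sequential(nn.Linear(gaze_dim, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(text_dim + 32, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),  # binary hate vs. non-hate
        )

    def forward(self, text_emb: torch.Tensor, gaze_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_emb, self.gaze_proj(gaze_feats)], dim=-1)
        return self.head(fused)

# Usage with dummy tensors (batch of 8 statements):
model = GazeTextFusionClassifier()
text_emb = torch.randn(8, 768)        # e.g. [CLS] embeddings from a German BERT-style encoder
gaze_feats = torch.randn(8, 4)        # e.g. dwell time, fixation count, pupil size, regressions
logits = model(text_emb, gaze_feats)  # shape: (8, 2)
```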
This research has notable implications for developing HSD models that align more closely with human judgments by treating cognitive responses, such as gaze patterns, as indicators. The approach not only refines hate speech detection but also opens avenues for integrating cognitive signals into other natural language processing tasks, potentially improving model transparency and interpretability.
The authors have made the GAZE4HATE dataset available to the research community under a CC-BY-NC 4.0 license, fostering further exploration and innovation in this field.