Check out this paper by Kamalakkannan Ravi and Jiann-Shiun Yuan, which systematically reviews research on ideological orientation and extremism detection on social media platforms. Their study spans 2005 to 2023, analysing 110 research articles on how extremism surfaces across platforms such as Twitter (X), Facebook, Reddit, TikTok, Telegram, and Parler.
The review examines methodologies used to detect ideological orientation and extremism, focusing on machine learning (ML), natural language processing (NLP), graph-based techniques, and statistical methods. Ravi and Yuan explore various approaches to detecting political affiliations and ideological divides, such as those between liberal and conservative stances or Republican and Democrat perspectives. They identified NLP as the most popular technique for this analysis, followed by ML and deep learning (DL). The diverse range of methodologies highlights the complexity of analysing ideological content online.
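To make the NLP/ML framing concrete, here is a minimal sketch of the kind of text classifier such studies build on: a bag-of-words Naive Bayes model assigning a political-leaning label to a post. The training data below is entirely synthetic and the labels are illustrative only; the studies in the review use large annotated corpora and far richer models.

```python
import math
from collections import Counter, defaultdict

# Toy training data -- entirely synthetic, for illustration only.
TRAIN = [
    ("expand public healthcare and climate action", "liberal"),
    ("protect social programs and workers rights", "liberal"),
    ("lower taxes and shrink government regulation", "conservative"),
    ("defend gun rights and traditional values", "conservative"),
]

def train_nb(examples):
    """Fit a bag-of-words multinomial Naive Bayes classifier."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> document count
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Return the most probable label, with Laplace smoothing."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb(TRAIN)
print(classify("we need more climate action", *model))
```

This is the "static" end of the methodological spectrum the authors survey: a single snapshot of vocabulary mapped to a single label, with no notion of how a user's stance develops over time.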
Ravi and Yuan also conducted a structured data extraction from existing studies, exploring which platforms were most commonly examined for ideological trends. Twitter emerged as the primary dataset source due to its openness and extensive political discourse. However, the authors also highlight the importance of examining broader online spaces and note a growing interest in platforms like TikTok, Telegram, and Parler. The paper underscores the need for cross-platform studies to capture the full spectrum of ideological expression.
This review identifies significant gaps in the field. One critical gap is the reliance on static methods, such as sentiment analysis and clustering, which capture a single snapshot and may overlook how ideological perspectives shift over time. The authors advocate for more longitudinal studies to capture ideological evolution in response to events or policy changes. Additionally, there is a call for broader standardisation of datasets across platforms to enhance comparability and support reproducibility.
The review concludes with practical recommendations for researchers and policymakers. These include the need to mitigate bias in training datasets, which can skew ideological representation, and to develop standardised data practices. Emerging social media platforms are highlighted as important yet underexplored venues for future research. Finally, the authors suggest prioritising fairness and transparency in model development to accurately reflect the complexity of online ideological extremism.