How Search Engines Shape Attitudes to Refugees

Check out this new article by Franziska Pradel published in New Media & Society. The study looks at how hate speech about refugees in search engines influences people’s political attitudes and trust in information sources.

The paper asks a simple but important question: do biased search suggestions about refugees (positive, negative, or neutral) shift how people think about immigration, asylum policies, and trust in search engines? And does political ideology matter?

The researchers ran an online survey experiment in Germany with 1,200 participants recruited through an online panel. Participants were randomly assigned to one of eight groups and saw search suggestions about refugees framed negatively (e.g. “refugees are criminal”), positively (e.g. “refugees are peaceful”), neutrally (e.g. “refugees are in Germany”), or with unrelated control terms. These suggestions were attributed either to a search engine or to a politician, allowing a comparison of trust in algorithmic versus politicised sources.

After exposure, participants answered questions about their political ideology, their attitudes towards immigration and asylum policies, and their trust in both the source and the content. At the end, they were also asked which search suggestion they would be most likely to click on, giving insights into engagement behaviour.

The findings are clear. Hate speech in search suggestions mainly affected people at the extreme-right end of the political spectrum, who became more hostile towards refugees and asylum policies after exposure. Moderate and left-leaning participants showed little change. Interestingly, even neutral content such as “refugees are in Germany” pushed extreme-right individuals towards more restrictive attitudes.

Trust also mattered. Overall, people trusted search engines more than politicians. But when search engines presented biased (positive or negative) content, trust in them dropped to the same low levels as trust in politicians. Importantly, right-wing participants trusted hate speech content far more than left-wing participants, and they were almost three times as likely to say they would click on hate-speech suggestions.

The study highlights the power of search engines as gatekeepers of political information. While most participants were not strongly influenced, those with extreme-right views were more likely to trust and engage with hateful suggestions. This raises questions about how algorithmic curation may amplify polarisation.

For policymakers, the findings reinforce the need for stronger moderation of hate speech in search functions, given the high trust people place in search engines. For researchers, they point to a need for further work on the long-term effects of repeated exposure, and on how algorithmic design can either mitigate or reinforce hostility towards vulnerable groups.