The ESRC-funded HateLab is analysing data to see how social media spreads and amplifies hate speech in the aftermath of terror attacks, and which factors help or hinder its spread.
By Professor Matthew Williams, Professor Pete Burnap and Sefa Ozalp
In 2016 the Brexit referendum was linked by the Home Office to the largest increase in police-recorded hate crime since records began. This was overtaken by a spike in hate crimes around the 2017 terror attacks in Manchester and London. Political votes and terror attacks have become 'trigger events' for hate crimes on the streets, but also for hate speech on social media.
Exclusive analysis by the ESRC-funded HateLab, part of the Social Data Science Lab, has shown that following trigger events it is often social media users who are first to publish a reaction. Discernible spikes in online hate speech were evident in 2017 that coincided with the UK terror attacks in Westminster, Manchester, London Bridge and Finsbury Park.
HateLab analysis has shown that social media acts as an amplifier of hate in the aftermath of terror attacks. Hate speech is most likely to be produced within the first 24-48 hours following an incident, and then dies out rapidly, much like physical hate crime following terror attacks. Where hate speech is retweeted, the evidence shows this activity emanates from a core group of like-minded individuals who seek out each other's messages through hashtags. These Twitter users act like an echo chamber: grossly offensive hateful messages reverberate around members, but rarely spread widely beyond them.
In the minutes to hours following an attack those associating themselves with far-right ideologies on Twitter capitalise on the event to spread messages of hate and division. These tweeters are also known to have spread messages posted by Russian-linked fake accounts, attempting to ignite and ride the wave of anti-Muslim sentiment and public fear.
For instance, in the wake of the Westminster attack, fake social media accounts retweeted fake news about a woman in a headscarf apparently walking past and ignoring a victim. This was retweeted thousands of times by far-right Twitter accounts with the hashtag '#BanIslam'.
The additional challenge created by these fake accounts is that they are unlikely to be susceptible to counter-speech (eg, challenging stereotypes, requesting evidence for false claims) and traditional policing responses. It therefore falls upon social media companies to detect and remove such accounts as early as possible to stem the production and spread of divisive and hateful content.
The first few hours following a terror attack represent a critical period within which police and government have an opportunity to prevent hate speech, through dispelling rumour and speculation, appealing for witnesses and providing factual case updates. HateLab analysis shows that tweets from media and police accounts are widely shared in the aftermath of terrorist incidents. As authorities are more likely to gain traction in the so-called ‘golden hour’ after an attack, they have an opportunity to engage in counter-speech messaging to stem the spread of hate online. In particular, the dominance of traditional media outlets on Twitter, such as broadsheet and TV news, shows that these channels still represent a valuable pipeline for calls to reason and calm following criminal events of national interest. However, where newspaper headlines include divisive content, HateLab analysis suggests such headlines can increase online hate speech.
HateLab continues to examine the factors that enable and inhibit the spread of online hate around events like terror attacks and key moments in the Brexit process. It has officially partnered with the National Police Chiefs’ Council (NPCC) National Online Hate Crime Hub to develop an Online Hate Speech Dashboard that monitors aggregate trends in real time using cutting-edge artificial intelligence. The Dashboard will be evaluated in operations throughout 2019.
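The article does not describe how the Dashboard works internally. As a purely hypothetical illustration of one small part of monitoring aggregate trends in real time, the sketch below buckets (already-classified) hateful tweets into hourly counts and flags hours whose volume jumps well above the recent baseline, echoing the finding that hate speech spikes in the first 24-48 hours after a trigger event. The function names, the 3x threshold and the 24-hour baseline window are all illustrative assumptions, not HateLab's method.

```python
from collections import Counter
from datetime import datetime, timedelta
from statistics import mean


def hourly_counts(timestamps):
    """Bucket tweet timestamps (datetime objects) into per-hour counts,
    returned in chronological order."""
    buckets = Counter(
        ts.replace(minute=0, second=0, microsecond=0) for ts in timestamps
    )
    return dict(sorted(buckets.items()))


def flag_spikes(counts, factor=3.0, baseline_window=24):
    """Flag hours whose count exceeds `factor` times the mean of the
    preceding `baseline_window` hours (both parameters are illustrative)."""
    hours = list(counts)
    spikes = []
    for i, hour in enumerate(hours):
        window = [counts[h] for h in hours[max(0, i - baseline_window):i]]
        if window and counts[hour] > factor * mean(window):
            spikes.append(hour)
    return spikes


# Synthetic demo: a steady 5 tweets/hour, then a burst of 50 in one hour.
base = datetime(2017, 3, 22)
quiet = [base + timedelta(hours=h, minutes=m) for h in range(10) for m in range(5)]
burst = [base + timedelta(hours=10, minutes=m) for m in range(50)]
counts = hourly_counts(quiet + burst)
print(flag_spikes(counts))  # the burst hour stands out against the baseline
```

A real deployment would feed a streaming classifier's output into buckets like these; the point of the sketch is only that aggregate spike detection over hourly counts is a simple, interpretable trend signal.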