User self-governance could play a key role in limiting the spread of harmful content on social media, reports the Digital Wildfire study.

The two-year project was led by the University of Oxford in collaboration with the Universities of Cardiff and Warwick and De Montfort University. It finds that the rapid growth of social media platforms such as Twitter has been accompanied by 'digital wildfires': the viral spread of harmful content such as rumour, hate speech or malicious campaigns.

"A wide range of agencies including the police, councils, news agencies, anti-harassment organisations, anti-bullying groups and schools are now attempting to manage the harms caused by rapidly spreading content that is in some way inflammatory, antagonistic or provocative," says researcher Dr Helena Webb. "Young people are particularly vulnerable to this content, both through what they see and through making posts which they might later regret."

Existing governance mechanisms to deal with social media content are often limited. "Legal sanctions tend to be slow and the police lack capacity to deal with the high volume of problem content online," says Dr Webb. "Social media platforms mostly rely on user reports after content has already spread and caused harm."

In view of these limitations, the Digital Wildfire team concluded that user self-governance could play an important role in limiting the spread of harmful content in real time. This involves actions by users to protect themselves and others. Education, particularly among young people, can foster more personal responsibility, digital maturity and digital resilience.

The team’s analysis of social media content also found that counter speech could be effective in limiting the spread of hate speech. "Examination of racist, sexist and homophobic posts on Twitter suggests that the greater the number of individuals prepared to challenge such posts through counter speech, the more likely that content is to be quelled. Posting counter speech may encourage others to reflect before sharing or forwarding content, while also upholding rather than undermining freedom of speech," says Dr Webb.

Social media platforms could, the researchers believe, employ specific features to support communities of users in coming together to halt the spread of harmful content in real time, as well as using their terms and conditions to encourage more civil behaviour online.

Recognition of shared responsibility by all those involved is needed. "The law is too limited to deal with this problem alone," says Dr Webb. "But other opportunities exist to promote responsible behaviour online while upholding freedom of speech."