This ESRC-funded project aims to identify the drivers behind the current climate of conspiracy theories and racist 'infodemic' miscommunication concerning Covid-19.
There is a clear gap in our understanding of how conspiracy theories and miscommunication on social media sites are being used to create a Covid-19 'infodemic'.
This is particularly relevant for Muslim communities, as members of the far right are able to use irrational beliefs and fake news to peddle hate, and such narratives can quickly penetrate the mainstream and become normalised.
For example, one video shared on the messaging platform Telegram by Tommy Robinson, the former leader of the English Defence League, purports to show a group of Muslim men leaving a 'secret mosque' in Birmingham to pray. West Midlands Police debunked the video as fake: the mosque had already been closed down. A number of similar examples show the fear and tension that fake news creates, and the implications of such information, which risks alienating communities and can have a significant offline effect as people become more insular. Understanding the drivers of such communication is critical to ensuring more effective and trustworthy media sources, where complex information can be used to aid policy-makers and the wider public.
This study will address this gap through rich empirical data that can be used to inform how law enforcement should confront online conspiracy theories and offline attacks. Such information can spread quickly; our project will address the drivers of this spread and the perpetrators involved, which will be significant for social media companies, the police, policy-makers and other key stakeholders. The current climate of conspiracy theories and racist 'infodemic' miscommunication on Covid-19 can have significant consequences when social distancing measures are lifted.
Due to the nature of social media, and the range of social media comments and behaviour gathered, this project will be able to focus on national issues as we identify trigger events. The detail provided by the social media comments, which in some cases includes location (whether stated explicitly in the comment, in the user profile, or in the comment metadata), will allow a focus on certain regions within the UK, or on countries as a whole. This may facilitate the tracking of, and response to, localised issues linked to Covid-19, extreme content and miscommunication.
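The fallback order described above (an explicit mention in the comment, then the user profile, then the comment metadata) could be sketched roughly as follows. This is an illustrative assumption only: the field names and the small place list are hypothetical, not the project's actual data schema or gazetteer.

```python
# Hypothetical sketch of a location fallback chain; field names and the
# place list are illustrative assumptions, not the project's actual schema.
UK_PLACES = {"birmingham", "london", "manchester"}  # assumed example gazetteer

def location_from_text(text):
    """Return a known place name mentioned in the comment text, if any."""
    lowered = text.lower()
    for place in UK_PLACES:
        if place in lowered:
            return place
    return None

def resolve_location(post):
    """Prefer an explicit mention in the comment, then profile, then metadata."""
    return (
        location_from_text(post.get("text", ""))
        or post.get("profile_location")
        or post.get("meta_location")
    )

post = {"text": "Seen in Birmingham today", "profile_location": "London"}
print(resolve_location(post))  # → birmingham
```

The explicit mention wins here even though a profile location is present, matching the priority order described in the text.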
Download the full report:
Covid-19 and Islamophobia on Twitter – User Typology
Covid-19 and Islamophobia on Twitter – A Thematic Analysis
Study 1 Key Findings
Our initial investigation gathered tweets and YouTube comments relating to Covid-19, using Covid-19-related and extremist search terms. A sample of approximately 100,000 tweets and comments was collected. We established the impact that differing levels of user anonymity, posting frequency, and membership length have on Covid-19-related extremism and miscommunication. Read our key findings from Study 1.
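The collection step described above can be illustrated with a minimal sketch: filter posts against a set of search terms and tally the matches by anonymity level. The search terms, field names and sample data below are assumptions for illustration; the project's actual terms and pipeline are not published in this summary.

```python
# Illustrative sketch only: search terms and data structure are assumptions,
# not the project's actual search-term list or collection pipeline.
from collections import Counter

COVID_TERMS = {"covid", "coronavirus", "lockdown"}  # assumed example terms

def matches_search_terms(text, terms=COVID_TERMS):
    """Return True if any search term appears in the lower-cased text."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

def tally_by_anonymity(posts):
    """Count matching posts per anonymity level (e.g. anonymous/identifiable)."""
    counts = Counter()
    for post in posts:
        if matches_search_terms(post["text"]):
            counts[post["anonymity"]] += 1
    return counts

sample = [
    {"text": "Covid lockdown news", "anonymity": "anonymous"},
    {"text": "Unrelated comment", "anonymity": "anonymous"},
    {"text": "coronavirus update", "anonymity": "identifiable"},
]
print(tally_by_anonymity(sample))
# → Counter({'anonymous': 1, 'identifiable': 1})
```

Breaking the matched sample down by anonymity level in this way is what allows anonymity to be compared against rates of extremist or miscommunicated content.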
Study 2 Key Findings
Our second study applied a form of content analysis to the data (or a sub-set of it) from Study 1. This analysis recorded instances of pro-social and anti-social behaviour from users (threats, advice, compliments, etc.), again broken down by level of anonymity, posting frequency and membership length. This allowed us to investigate the impact of these factors on actions and behaviours relating to Covid-19 miscommunication and extremism, whereas Study 1 focused on language and sentiment. Read our key findings from Study 2.
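The cross-tabulation step this describes, counting coded behaviours per anonymity level, can be sketched as follows. The behaviour codes and field names here are hypothetical placeholders, not the study's actual coding scheme.

```python
# Hypothetical sketch of cross-tabulating coded behaviours by anonymity level;
# codes and field names are illustrative assumptions, not the study's scheme.
from collections import defaultdict

def cross_tab(coded_posts):
    """Tally coded behaviours (e.g. 'threat', 'advice') per anonymity level."""
    table = defaultdict(lambda: defaultdict(int))
    for post in coded_posts:
        table[post["anonymity"]][post["code"]] += 1
    return {level: dict(codes) for level, codes in table.items()}

coded = [
    {"anonymity": "anonymous", "code": "threat"},
    {"anonymity": "anonymous", "code": "advice"},
    {"anonymity": "identifiable", "code": "threat"},
]
print(cross_tab(coded))
# → {'anonymous': {'threat': 1, 'advice': 1}, 'identifiable': {'threat': 1}}
```

The resulting table is what lets anti-social behaviour rates be compared across anonymity, posting frequency or membership-length groups.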
Study 3 Key Findings
A smaller sub-set of the data was used to conduct a qualitative thematic analysis, allowing themes and patterns in the data to be explored in deeper and richer detail: for example, how these narratives are used and how Covid-19-related information (or misinformation) is constructed, discussed and justified. The study also examined how such content is reacted to, including reactions to the debunking of misinformation and miscommunication. Read our key findings from Study 3.
Study 4 Key Findings
In our final study we used a series of case studies, based on the data and analysis arising from Studies 1 to 3, to explore the link between online miscommunication and offline actions and consequences; in particular, evidence of how online language and Covid-19 misinformation have impacted real-world events. Read our key findings from Study 4.
Executive Summary and Recommendations
The executive summary highlights the key findings and recommendations from our research. Several of these relate to the language and sentiment used against Muslims during Covid-19 and the conspiracy theories being promoted. It also highlights our aim of getting social media companies to address the conspiracy theories circulating in online echo chambers where Muslims are targeted. Read our Executive Summary.
- Covid-19 and Islamophobia on Twitter – Case Studies
- Covid-19 and Islamophobia on Twitter – Linguistic Analysis
- Covid-19 and Islamophobia on Twitter – Sentiment Analysis
Displays of racism on social media are, unfortunately, an all-too-common occurrence, but our research shows that the severity of anti-Muslim hate crime is influenced by ‘trigger’ events of local, national and international significance.
Covid-19, for instance, led to a number of conspiracy theories that specifically blamed Muslims for the outbreak of the pandemic. But what drives these social media conspiracies?
Analysis of over 100,000 tweets and YouTube comments revealed that even users with identifiable accounts were engaged in spreading misinformation and Islamophobia.
Several videos received comments about Muslims supposedly spreading Covid-19 through religious observances such as Ramadan. Comments also described Muslims as 'super-spreaders' of the virus, despite the lack of real-world evidence to support this.
We propose that social media companies implement a reporting button that helps users flag misinformation and halt the spread of conspiracy theories.
A new online digital charter should be adopted, which empowers companies to prohibit the use of dehumanising language on their platforms.
Informing users and providing them with tools to detect and filter harmful webpages could help stop the spread of misinformation.