Looting, arson, violent attacks and unlikely allegiances: over the past week, far-right agitators have been rioting in the streets of England and the North of Ireland. With no centralised organisational structure, how did so many separate organised events come together? Unsurprisingly, the answer is social media.
Despite its promise of social cohesion, social media has long been criticised for failing to limit or prevent hateful and harmful content. Whether it's Facebook, X (formerly Twitter), Instagram or TikTok, these platforms have become the primary vehicle across demographics for inciting hate, spreading disinformation and organising disruption, even violence.
The content we consume is served by an algorithm whose sole purpose is to keep people online for as long as possible. More clicks mean more data: the currency through which social media companies profit most.
Posts that cause offence or elicit an emotional response receive more engagement. The algorithm then promotes these posts regardless of whether the content comes from a trustworthy source or from one that curates disinformation on behalf of nefarious actors.
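To make that incentive concrete, here is a minimal, purely illustrative sketch of an engagement-optimised ranker. The field names, weights and numbers are invented for illustration; no platform's actual, proprietary system is described here. The point is simply that a score built only from engagement signals has no input for accuracy:

# Purely illustrative sketch; real recommendation systems are
# proprietary and vastly more complex.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    dwell_seconds: float
    from_trusted_source: bool  # note: never consulted by the ranker

def engagement_score(post: Post) -> float:
    # Rewards whatever keeps people on the platform: clicks, shares
    # and time spent. Nothing here checks whether a post is true,
    # so outrage-bait ranks as well as, or better than, real news.
    return post.clicks + 2.0 * post.shares + 0.1 * post.dwell_seconds

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured local report", 120, 5, 900.0, True),
    Post("Inflammatory fake claim", 4000, 900, 15000.0, False),
])
print([p.text for p in feed])  # the fake claim ranks first

Under these assumed weights, the inflammatory post scores 7,300 against the trustworthy report's 220, so it is promoted first; the trust flag sits unused in the data.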
The upsurge in violent disorder can be traced to several social media accounts that weaponised the anger and shock the public felt following a horrific attack in Southport in which three young girls were murdered and several more injured. In the hours following the attack, social media was flooded with fake posts claiming the attacker was an asylum seeker.
These posts, and claims of a "cover-up", were amplified by accounts with large followings in the United Kingdom and the United States. The engineered anger online spilled over into the real world when rioters targeted and dispersed a Southport vigil being held for the children killed, going on to attack a nearby mosque and set local businesses on fire.
Police were unprepared for the degree of violence and criminal opportunism that ensued. Bolstered by views and clicks on social media, and by the property they destroyed and stole, growing numbers of rioters have continued to organise online to target more people and their livelihoods.
In Belfast, far-right instigators from south of the border joined forces with paramilitary-linked loyalists to shout racist slurs at communities that have long called Ireland their home. For anyone with an ounce of knowledge about the history of this island, their partnership was an affront.
The violent disturbances have cost business owners their livelihoods, the Islamic centre is under constant police guard, and immigration lawyers, housing agents and rights groups have all been put on notice that they are under threat.
Social media has become one of the most powerful promoters of domestic extremism, an engineered hate factory that weakens public trust in news sources, policing and political institutions — all central pillars of a democratic society.
Social media-driven societal division and unrest are a global threat. We saw it in Germany during the 2018 Chemnitz riots, and in the United States during the January 6 insurrection.
In Ireland, research demonstrates that most posts promoting anti-immigration riots and disruption with hashtags like “IrelandisFull” or “IrelandIsForTheIrish” originate in the US. Racists and proponents of far-right policies use the global network of social media platforms to artificially inflate the scale of domestic far-right movements, and the multinational technology conglomerates continue to provide fertile ground for them to cultivate hate.
For millions of people, social media has become their number one source of "news", but there are few safeguards in place to prevent fake or misleading content from being presented as news. Each platform sets its own rules on hate speech and misinformation; all are poorly enforced, whilst X has no rules on misinformation at all.
After taking over the social media giant, Elon Musk dismantled Twitter's content moderation teams, created a paid verification model and restored the accounts of far-right instigators such as Tommy Robinson, who had previously been banned from the platform.
Robinson's name was chanted at the riots in Belfast. He has been an active promoter of the riots, spreading misinformation from his holiday in Cyprus; when journalists exposed this, he posted a video on X threatening their families and children.
Musk has come under increasing scrutiny for allowing X to become a haven for the far right. The platform's paid-for blue tick service enables malicious accounts to spread disinformation under the veil of so-called 'verification', and blue tick accounts receive more visibility and promotion under X's algorithm. According to NewsGuard, 74% of the accounts spreading fake posts about Israel's war on Gaza bear a blue tick.
Those waiting on Musk to act in good faith will be left wanting; a cursory look at his own page reveals a contrarian who thrives on division, who uses language such as "civil war" when discussing the riots in the UK, who openly supports Donald Trump, and who shares AI-generated fake content to stoke further political polarisation, in direct violation of his own platform's policy on such content.
Musk holds a slew of controversial, far-right viewpoints and consistently airs them when wading into geopolitical affairs.
Tackling the sheer volume and spread of online disinformation will require a multi-pronged approach. For governments, legislative action is possible: the EU's Digital Services Act has already proven effective just six months into its operation, with TikTok dismantling its Lite Rewards Program across the EU following a European Commission investigation into breaches of the Act.
Meta and X are both under investigation for breaches of the Act and could face fines of up to 6% of global revenue. Musk responded to the investigation by saying “The DSA is misinformation”.
Domestically, governments can do more to enforce regulations on hateful content, incitement to violence and misinformation. They can go further still by sanctioning individual owners like Musk with fines or even travel bans.
These are private, for-profit companies that hold significant power to disrupt society and influence political discourse — there has to be a more robust and accountable approach.
Taking action against social media companies alone, however, will not resolve the problem; an undercurrent of socioeconomic deprivation sustains an environment in which extremist views flourish. The UK and Ireland need to implement effective and ambitious community cohesion strategies, increase digital literacy education for all ages and tackle deprivation with meaningful reform.
A powerful showing of anti-racist counter-protests on Wednesday night in England, which dwarfed the far-right demonstrations, may have helped dissipate this wave of unrest, but without urgent intervention, further violent disorder could be just an X post away.