The recent public unrest in Britain proved a litmus test of how willing the British government was to confront the culture of misinformation and social agitation which led to the worst riots seen in that country for over a decade.
After days of severe rioting, police forces around Britain made one thing clear — justice would be swift, and those who had misguidedly got themselves involved in the mayhem would come to rue their decision.
So it came to pass, with brutal sentences handed down mere days after the related crimes took place. A lack of physical presence at the flashpoints was no excuse, with one woman, 53-year-old Julie Sweeney, given a 15-month sentence for using inflammatory language about a mosque on Facebook.
Sentencing her at Chester Crown Court, the presiding judge told Sweeney that “so-called keyboard warriors like her have to learn to take responsibility for their language”.
Comparing the British unrest with Ireland’s far-right riots last November is instructive, not least because a narrative has emerged that what happened in Britain in recent weeks followed the exact playbook seen in Dublin last year, albeit on a much greater scale. That is, a vicious attack was carried out against children, followed by disinformation spreading online within minutes of its occurrence that the perpetrator was an asylum seeker.
Further, in the Irish context, those spreading the misinformation and causing the unrest saw that Ireland’s political parties were not quite sure what to do with the new status quo, with some even showing a willingness to bend their ideology towards a greater intolerance for immigration than they had hitherto shown an appetite for.
The circumstances in the two countries were different, of course. For one, the stabbing of several children at a dance studio in Southport, near Liverpool, last month came after a general election rather than in the build-up to one, giving the new government a mandate to come down hard on those participating in the mayhem that followed, without fear of electoral blowback.
Secondly, British policing culture has far more experience of the type of public unrest which caught An Garda Síochána so unawares late last year. The approach across the water was unequivocally heavy-handed, as were the sentences which followed the rioting.
One constant between the two jurisdictions is the use of social media to foster trouble, often via blatant disinformation — most obviously being that, in both the Irish and British cases, the perpetrator of the stabbings was a citizen, not a recent immigrant as was asserted online.
However, are social media users being held to account to the same extent for their postings in both jurisdictions? And what of those who choose to spread hate and incite mayhem anonymously?
Mark Malone, researcher with far-right monitoring group the Hope and Courage Collective, believes “there is a difference of approach in terms of online incitement” between Britain and Ireland.
“If you look at the example of Conor McGregor, who at the time of the riots was calling for people to vaporise buildings, there was never a sense that what he was saying was being perceived as a clear call to action, either by the social media platforms or law enforcement,” said Mr Malone.
He notes that such rhetoric is not confined to Mr McGregor, with several Irish local representatives having happily posted online in eyebrow-raising terms in recent times about immigration.
“There seems to be a difference between the UK and here, a difference in the type of state reaction being seen,” said Mr Malone.
“There’s a caveat though — these swift and long sentences shouldn’t be celebrated, the way the harmful stuff is amplified is also a significant problem.”
Here, he is talking about the algorithms on social media platforms which can serve to recommend hateful content.
“There’s nothing particularly organic about that.
“We’d be wary of thinking that it is solely criminal justice that will fix it, there needs to be accountability for those entities aiding in that incitement.
“But there is an open question as to whether there is really a political desire to take on the tech companies, which play a particular role in incitement without any accountability for the consequences.”
Of those platforms, X, formerly Twitter, is perhaps the most egregious. Its owner, Elon Musk, reintroduced many agitators who had been banned for posting hateful content prior to his takeover in November 2022, and Mr Musk is more than capable of posting grim material himself; his recent taunting of British prime minister Keir Starmer as ‘Two Tier Keir’ over the government’s response to the rioting is a case in point.
X, Facebook and Instagram owner Meta, and video-sharing platform TikTok were all contacted for comment regarding their policies on abusive content and how they interact with law enforcement, An Garda Síochána in an Irish context, when it comes to unmasking those posting hateful content anonymously.
Meta said that it has invested “more than $20bn” to enhance its trust and safety division since 2016, and said that it has moved to combat misinformation “by investing more than $150m” in its third-party fact-checking programme.
A spokesperson added that the company is “constantly working” to identify and disrupt “co-ordinated inauthentic behaviour” — disinformation, in other words. They added the platforms “work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety”.
TikTok did not comment at length, but pointed towards its community guidelines, saying that any hateful content “would be removed”.
A spokesperson said that its trust and safety team — the service Mr Musk notoriously did away with at Twitter upon buying the platform — now stands at 40,000-strong worldwide.
X, as is its wont, did not reply.
Off the record, it is understood that gardaí seeking information about anonymous individuals from the platforms simply request it in the same way they might seek CCTV footage from a retail business to aid an investigation, and that the platforms and internet service providers are generally responsive to such requests.
It is unclear to what extent An Garda Síochána makes use of those powers — the force did not respond to questions on the subject.
How often it does so is, in any case, a matter for the force itself.
Of course, there are other aspects to problematic content posted online.
One is instances where a private individual is defamed by an anonymous person and wants to take a civil court action.
It is an expensive, and slow, process.
“It is the case that it’s costly to identify someone,” says privacy solicitor Simon McGarr.
“Allowing for anonymous speech has historically been very important. There’s a balance to be struck there.”
How much does it cost to obtain such an order, known as a Norwich Pharmacal order after the first company to obtain one? About €70,000 between both sides, with the cost typically borne by the loser (Ireland’s new defamation bill seeks to reduce those costs by up to half by moving the application to the Circuit Court).
For people more concerned about how Ireland’s pending general election will be affected by disinformation, there is encouraging news in the form of the Electoral Reform Act 2022, a piece of legislation passed two years ago but never commenced by the Government, and which the State is now seeking to update.
Under the new law, should it pass (the plan is to pass it before the next election), members of the public will be able to notify the Electoral Commission about harmful but not illegal content online and, should the regulatory body agree, have the content removed.
The key is in the phrase ‘harmful but not illegal’ — the latter is a job for the gardaí.
However, it indicates that a new definition is in the offing which will label a new class of discourse — that which is not unlawful but is harmful.
Whether it will cover disinformation will not be known until the commission comes up with a code of conduct and thus a definition of what qualifies as harmful content.
“We can’t say yet if it’s a bad idea or a good idea,” says Mr McGarr.
“But it’s definitely significant that it’s happening. It will amount to a very significant and far-reaching social change.”