Social media firms ‘not ready to tackle misinformation’ during global elections

Experts warn that platforms without strong election action plans could put the integrity of elections and the safety of citizens at risk

Social media companies are not ready to tackle misinformation during elections due to take place around the world in 2024 because of language barriers, experts warn.

The Global Coalition for Tech Justice, a movement of civil society leaders and survivors of tech harms, is calling on leading big tech companies, including Google, TikTok, Meta and X (formerly Twitter), to ensure their platforms are equipped to protect democracy and safety during votes next year.

In 2024, two billion people are due to vote in more than 50 elections, including in the US, India and the EU.

In July, the coalition asked Google, Meta, X, and TikTok to establish fully resourced election action plans, at the global and country levels, for the protection of freedoms, rights and user safety during the 2024 elections. The companies did not respond to the request to share their plans.

The coalition had sought details of the number of employees and contractors for each language and dialect to ensure there is expertise on national and regional context for content moderation.

Experts warn that if platforms do not have strong election action plans, it could risk the integrity of elections and the safety of citizens.

Katarzyna Szymielewicz, co-founder and president of Panoptykon Foundation — a Polish non-governmental organisation that monitors surveillance technology — said big tech companies could do more to ensure content is moderated across languages.

“It is a big challenge for large platforms to moderate effectively in different cultural contexts, and the further it gets from the English language, the more complicated it becomes.

“Moderation of sponsored content, or organic content, to eliminate political violence, misogynist content and various types of abuse is not so obvious when you just look at the language … The platforms should try harder to address this and invest a lot more in humans who can moderate more effectively.”

Szymielewicz says big tech platforms must de-amplify disinformation and hate by making their systems safe by design and by default, not just during election periods.

Such measures include suppressing the algorithmic reach and visibility of content, groups and accounts that spread disinformation and hate.

South Africa has seen xenophobic violence fuelled by social media content targeting migrants, refugees and asylum seekers.

In India, which has seen a new wave of anti-Muslim violence, the coalition claimed Meta had flouted its own rules on hate speech propagated by the ruling Hindu nationalist Bharatiya Janata Party (BJP).

Ratik Asokan, a member of the diaspora organisation India Civil Watch International, a coalition member, said: “Meta hasn’t come even close to policing its platforms in India because of a lack of content moderators and because of its ties to the BJP.

“The BJP has no incentive to take down hate speech because it is part of their politics; it contributes to their electoral success. The only people we could appeal to are turning a blind eye to this. Meta speaks the language of human rights but it is acting as an ally for the Hindu far right.”

Meta, X, TikTok and Google were contacted for comment.

• Guardian
