Children fed toxic content by social media algorithms, study finds

Children are being directed to dangerous, toxic online content by social media company algorithms within minutes of use, sparking calls for urgent action.

After as little as two minutes of use, fake accounts registered as teenage boys were being fed harmful content including toxic masculinity, misogyny, far-right ideology and anti-transgender rhetoric on YouTube Shorts and TikTok, a new Dublin City University study has found.

TikTok’s and YouTube Shorts’ algorithms promote toxic content to boys and young men, the study found.

The study, titled ‘Recommending Toxicity: The role of algorithmic recommender functions on YouTube Shorts and TikTok in promoting male supremacist influencers’, was carried out by Dr Catherine Baker, Prof Debbie Ging and Dr Maja Brandt Andreasen at the DCU Anti-Bullying Centre.

After as little as two hours and 32 minutes of viewing, the vast majority of the content being fed to the fake boys’ accounts was toxic, the study found.

TikTok was recommending 76% toxic content after its videos were watched for an average of just two hours and 32 minutes.

YouTube Shorts was recommending 78% toxic content after its videos were watched for three hours and 20 minutes.

And once an account showed any interest by watching toxic content, the amount of such content it was fed rapidly increased.

“There is a clear link between the growing levels of online abuse and toxicity experienced by women and girls, and the recent rise in male supremacism online,” the study said.

Most social media companies do not disclose how their algorithms work, which presents challenges to researchers investigating this phenomenon, the study noted.

“The findings of this report point to urgent and concerning issues for parents, teachers, policy makers, and society as a whole,” the study said.

“In particular, our findings highlight the ineffectiveness of social media platforms in protecting children and young people.

“Ultimately, girls and women are the most severely impacted by these beliefs, but they are also damaging to the boys and men who consume them, in particular in relation to mental wellbeing. We hope our findings will compel the social media companies, government, and policy makers to take urgent action.” 

One of the report’s key recommendations was that social media companies introduce stricter and more sophisticated content moderation.

It also called for social media companies to work closely with Ireland’s new media regulator, Coimisiún na Meán, and trusted flaggers to highlight illegal, harmful and borderline content.

And recommender algorithms should be turned off by default, it said.

The Children’s Rights Alliance is now calling on Coimisiún na Meán and the Government to regulate social media platforms' use of algorithms to urgently address the real risk of harm to children. 

The study put into perspective the extreme nature of the harmful material online that is directed at, and consumed by, children and young people, the Children’s Rights Alliance said.

Noeline Blackwell, Online Safety Co-ordinator at the Children’s Rights Alliance, said the DCU study's findings are "incredibly worrying."

"Social media platforms create the digital pathways by which our children and young people receive content. It is therefore up to the platforms to build those systems as safely as they possibly can," she said. 

"This means building systems that do not persuade or allow our children to be lured into harm. It also means ensuring age-appropriate safeguards are in place and working effectively.

"If the platforms will not do this, the State must."

But the new Online Safety Code that Coimisiún na Meán is preparing does not currently regulate how the digital companies push or suggest content to children, Ms Blackwell said.

A statement from the Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media said the first online safety code is currently being developed. 

"It will apply to video-sharing platform services, including TikTok and YouTube, and will prohibit the sharing of content which promotes eating or feeding disorders and self-harm or suicide on video-sharing platforms, amongst a range of other harmful and illegal content. The code will be binding and particularly focussed on ensuring the safety of children online," it said. 

A YouTube spokesperson said that hate speech, harassment and cyberbullying are not allowed on YouTube. 

TikTok was also contacted for comment.
