The research, conducted with the US-based Algorithmic Transparency Institute and AI Forensics, found that children and young people who watched mental health-related content on the ‘For You’ page were drawn into “rabbit holes” of potentially harmful content.
The ‘For You’ feed’s algorithm serves users new content and is highly personalised based on what they have already viewed.
Amnesty said that focus group discussions, interviews, and simulations of children’s TikTok accounts in Kenya, the Philippines, and the US found the platform was “addictive by design”.
Amnesty said it is designed to keep users hooked on the platform to collect “ever more data about them”.
“TikTok then uses this data to create profiles of users and draw inferences about them, which allows it to cluster users in groups to target them with highly personalized content to keep them engaged,” it said.
“These groups and categories are also made available to advertisers so that they can target users with personalised ads.”
In response to Amnesty’s findings, TikTok pointed to its community guidelines, which set out content banned on its platforms, including anything showing or promoting suicide or self-harm. It also said it is developing a “company-wide human rights due diligence process”.
In September, Tánaiste Micheál Martin said the primary school curriculum must be overhauled to include education on the dangers of TikTok and similar platforms.
He said the findings of an investigation by Ireland’s data watchdog into TikTok’s approach to privacy settings for children were “very serious” and “concerning”.
The Government has taken steps to ban TikTok on all Government devices, following the example of other countries worldwide.