Artificial intelligence (AI) technology has numerous benefits and significant dangers, especially for children.
These risks are becoming increasingly obvious as children interact more with AI-driven applications, platforms, and devices.
Privacy is a concern. AI systems rely on data collection, which means they store users’ personal information, browsing habits, and preferences.
There is a real risk that AI-powered devices with cameras and microphones could be misused, turning them into surveillance tools that capture sensitive data without the consent of children or their guardians.
The potential for misuse is significant, and it’s crucial that parents take action to protect their children’s data.
Despite children’s data-protection measures, such as COPPA (the Children’s Online Privacy Protection Act) in the US, and regulators such as Coimisiún na Meán in Ireland, the rules are not adequately enforced or updated to cover new AI technologies.
These policies lack prescriptive direction to ensure that technology companies do not exploit users’ data and there is insufficient sanction if data is mined unethically.
Updated regulations to protect our children’s data from AI systems are required, and parents play a crucial role in advocating for these changes.
The next issue is the potential for AI to inadvertently expose our children to inappropriate content. While AI systems are impressive, they are not infallible.
AI systems designed to filter content may not always work perfectly, leading to algorithmic failures or oversights that could expose children to harmful or inappropriate content.
A major problem in the next few years will be AI’s ability to create deepfakes and spread misinformation. A deepfake is a video of a person whose face or body has been digitally altered to appear to be someone else.
Deepfake videos are convincing manipulations of images and sounds that can be used to spread misinformation, which children might not be able to distinguish from reality.
This technology can lead to confusion and the dissemination of harmful ideas or images. Deepfakes are also likely to become a new vehicle for cyberbullying, changing how it is carried out.
AI programmes may also increase parents’ worries about their children’s screen time.
Existing AI-driven apps and games can be highly addictive, leading to an overreliance on technology and affecting children’s physical health, sleep patterns, and social interactions.
Gamified AI applications often employ psychological tricks to keep users engaged, which can condition children to seek instant gratification and reduce their attention spans. Advancements in AI will only further complicate this problem.
AI algorithms on social-media platforms can expose children to idealised images and lifestyles, causing unhealthy social comparisons that can negatively impact their body image and self-esteem.
Another concern is the risk of increased social isolation. Already, many parents worry that their teenager spends little time in face-to-face contact with their peers outside of school.
As children spend more time on AI-powered devices and these platforms become more advanced, they risk experiencing increased social isolation, which will impact their ability to form meaningful relationships.
AI systems are imperfect and can inherit biases in the data they are trained on, leading to discriminatory outcomes that may affect children’s perceptions of race, gender, or social class.
AI can reinforce stereotypes through content recommendations or interactions, which can negatively influence young minds and add to the growing sense of polarisation in society.
Promoters of misogynistic content or racist/anti-migrant far-right ideologies are already using online platforms to spread harmful messaging.
It is also worrying that AI-driven ‘bot farms’ are being used to spread misinformation.
Bot farms are networks of bots or automated programmes created to perform repetitive tasks at a scale far beyond human capability. They can spread misinformation, artificially increase website traffic, or launch cyber attacks.
US social philosopher Daniel Schmachtenberger describes how AI systems can analyse children’s behaviours and preferences to deliver targeted ads or content that manipulates their choices and opinions.
Schmachtenberger warns that AI algorithms might inadvertently expose children to specific political or ideological content, potentially shaping their views without proper context or understanding.
In a world where we need our children to be critical thinkers and scrutinise the information they receive for its validity, pervasive use of AI may have the opposite effect and result in a loss of creativity and critical thinking.
We know that when it comes to cognitive fitness, there is a ‘use it or lose it’ effect, so an over-reliance on AI to solve problems or answer questions could hinder children’s problem-solving and critical-thinking skills.
This can be seen when children use voice assistants or search engines to do their homework. Furthermore, AI tools that create art, music, or writing might discourage children from exploring their creativity, resulting in passive consumption rather than active creation.
AI-powered devices and applications can be vulnerable to hacking, potentially exposing children to harmful content or interactions. Children might not recognise phishing attempts or malware disguised as games or educational apps, leading to security breaches.
The role of AI in academic assessment is a massive challenge to educators. Some believe we need to embrace AI, and others believe we need to return to pen-and-paper exams to mitigate the risk of AI-produced content.
For example, new software can take a 200-page PDF document and provide a bullet-point summary of its content, so when a student submits a review of the document, it is impossible to know whether they have read it or simply used the AI tool.
As convenient as that may appear from a student’s perspective, certain aspects of academic work require students to engage in lengthy intellectual endeavours, so the challenge is how to navigate this issue over the next few years. A solution has yet to be found.
Access to AI educational tools is varied, which could widen the gap between children from different socio-economic backgrounds, creating disparities in learning opportunities.
The evolution of AI will pose even further complications for parents trying to raise well-adjusted children in this ever-changing world. We cannot rely on institutional structures, because the current legal frameworks are not sufficient to address the unique challenges posed by AI.
In short, the responsibility of protecting our children falls to parents. The difficulty is that none of us have navigated this space before, so we are attempting to ‘learn as we go’ while parenting our children during the technological evolution.
Here are my recommendations for helping your child navigate AI:
- Actively monitor and guide your children’s use of AI technologies, teaching them about privacy and critical thinking;
- Upskill your knowledge of AI technologies and be aware of the risks;
- Help your child become a critical friend of technology, aware of how their personal data can be used or manipulated, and able to scrutinise content for signs of manipulation;
- Encourage open communication between your children, other parents, and educators about the risks and benefits of AI, fostering a safe and informed environment for technology use.
But it should not all come down to parents. AI developers must prioritise creating child-friendly AI systems that are inclusive, unbiased, and designed with children’s best interests in mind.
Governments and organisations also need to be alert to the dangers of AI, enforce strict regulations to protect children’s data, and ensure that AI systems are transparent and accountable.
While fast-evolving AI may disturb parents, it is essential to take a balanced approach to integrating AI into our children’s lives.
Ideally, we should ensure that the benefits of these technologies are maximised, while the risks are minimised. Understanding and addressing these potential dangers is the only way to create a safer digital environment for the next generation.
- Dr Colman Noctor is a child psychotherapist