UN Warns of “Dizzying” AI Threats Targeting Children Worldwide

The UN warns of a sharp rise in AI-driven threats to children, from deepfakes to predatory grooming. Discover the global response and the urgent push for AI literacy.

The United Nations has issued an urgent call for a comprehensive suite of measures to protect children from a “staggering” amount of harmful AI-generated content online. This warning comes as experts identify a rapidly escalating landscape of abuse, exploitation, and mental trauma facilitated by artificial intelligence.

According to Cosmas Zavazava, Director at the International Telecommunication Union (ITU), the digital world now presents a “dizzying array” of dangers. These threats are no longer limited to simple interactions but have evolved into sophisticated methods of targeting vulnerable young people.

The Evolution of Digital Predation

The rise of AI has provided predators with powerful tools to refine their tactics. Predatory grooming has become significantly more dangerous as AI allows offenders to analyse a child’s emotional state, online behaviour, and personal interests to tailor their approach.

Furthermore, AI is being used to create explicit deepfake images of real children, which is driving a disturbing new trend of sexual extortion. The scale of this problem is reflected in recent data from the Childlight Global Child Safety Institute, which reported that technology-facilitated child abuse cases in the US surged from 4,700 in 2023 to more than 67,000 in 2024.

Global Regulators Strike Back

As the severity of these digital risks becomes clear, several nations are taking unprecedented steps to safeguard their youngest citizens.

• Australia’s Landmark Ban: At the end of 2025, Australia became the first country to ban social media accounts for children under 16. The government cited research showing that over half of children aged 10–15 had experienced cyberbullying, and two-thirds had viewed violent or distressing content.

• A Growing Coalition: Countries including the UK, France, Canada, and Malaysia are now preparing their own regulations and laws to implement similar restrictions or bans.

Consequently, the global consensus is shifting toward the belief that the risks of unrestricted social media access for children far outweigh the potential benefits.

The AI Literacy Crisis

A significant hurdle in protecting children is what experts call “AI-illiteracy”. On 19 January 2026, a wide range of UN bodies released a Joint Statement on Artificial Intelligence and the Rights of the Child, highlighting a systemic failure to keep pace with technological change.

The UN warns that there is a critical lack of AI literacy among children, students, parents, primary caregivers, teachers and educators.

Additionally, the statement points to a “dearth of technical training” for policymakers. Governments currently struggle to implement effective data protection or conduct child rights impact assessments because they lack the technical frameworks needed to understand AI’s full reach.

Profit vs. Protection: The Innovation Argument

A major point of friction between global regulators and tech giants has been the fear that strict safety standards might stifle innovation. However, the UN’s message to the private sector is clear: responsible AI deployment is not a barrier to financial success.

“Initially, we got the feeling that they were concerned about stifling innovation,” Zavazava remarked. “But our message is very clear: with responsible deployment of AI, you can still make a profit, you can still do business, you can still get market share.”

Furthermore, the UN views the private sector as a necessary partner in this transition. While global bodies hold regular meetings with industry leaders to discuss their responsibilities, the ITU maintains that it must “raise a red flag” whenever technological advancements lead to unwanted or harmful outcomes for children.

A Fundamental Issue of Children’s Rights

The push for safer AI is grounded in international law. In 2021, new language was formally attached to the Convention on the Rights of the Child—the most widely ratified human rights treaty in history—to specifically address the dangers of the digital age.

Consequently, the UN is now calling on all parts of society to take responsibility for how these technologies are used. Because children are accessing the internet at increasingly younger ages, the ITU has established a comprehensive four-part protection framework:

1. Parents: Guidance on monitoring and supporting children’s digital journeys.

2. Teachers: Resources for fostering AI literacy in the classroom.

3. Regulators: Frameworks for effective national legislation.

4. Industry: Standards for designing products that respect child rights from the ground up.

Key Recommendations for a Safer AI Future

The UN bodies have produced a roadmap for states and companies to ensure that AI serves the best interests of the next generation. The core recommendations include:

  1. Transparency and Accountability: Governments and companies must ensure AI systems are understandable and that there is clear responsibility for their outputs.
  2. Violence and Exploitation Prevention: States must proactively address AI-amplified violence or exploitation.
  3. Child-Centred Data Protection: Stronger privacy measures are needed to prevent the misuse of children’s data within AI models.
  4. Inclusive and Bias-Free Design: AI must be designed to ensure all children benefit equally, regardless of background.
  5. Sustainability: AI development should minimise ecological harm to ensure a healthy environment for future generations.

In addition, the UN stresses that children’s own views and experiences should meaningfully inform how these systems are designed and regulated.

Q&A: Protecting the Digital Generation

Q: How is AI specifically used in grooming? A: Predators use AI algorithms to monitor a child’s interests and emotional vulnerabilities. This allows them to create highly personalised and convincing personas to gain a child’s trust more effectively than traditional methods.

Q: Why did Australia ban social media for those under 16? A: The decision followed a report showing that most children in the 10–15 age bracket were being exposed to hateful or violent content and cyberbullying on these platforms.

Q: What was the primary finding of the 2025 Childlight report? A: It found a massive spike in technology-facilitated abuse, with cases in the US increasing by more than 1,300% in a single year, from 4,700 to over 67,000.

FAQ: Frequently Asked Questions

What are AI deepfakes in the context of child safety? Deepfakes are AI-generated images or videos that look real. In this context, they are often explicit fakes of real children used for bullying or sexual extortion.

When was the UN Joint Statement on AI and Child Rights published? The statement was published on 19 January 2026.

What is “AI-illiteracy”? It refers to a lack of understanding of how AI works, its risks, and how to use it safely. The UN identifies this as a major gap for parents, teachers, and policymakers.

Are other countries banning social media for kids? Yes, countries like the UK, France, and Canada are currently preparing similar laws following Australia’s lead.

Did the COVID-19 pandemic affect online abuse? Yes, the ITU noted that during the pandemic, online abuse—particularly against girls and young women—increased and frequently translated into physical harm.
