UK Tech Firms and Child Protection Agencies to Test AI's Capability to Generate Abuse Images

Tech firms and child safety agencies will receive authority to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced British legislation.

Significant Rise in AI-Generated Harmful Content

The announcement came as figures from a protection monitoring body showed that reports of AI-generated child sexual abuse material have risen dramatically in the last twelve months, from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, authorities will permit designated AI developers and child safety groups to examine AI systems – the underlying technology behind chatbots and image generators – and verify they have adequate safeguards to prevent them from creating images of child sexual abuse.

The measures are "ultimately about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now identify the risk in AI models promptly."

Addressing Legal Challenges

The amendments address a legal gap: because it is illegal to create or possess CSAM, AI developers and others could not generate such content even as part of a testing process. As a result, authorities could not act until AI-generated CSAM had already been uploaded online.

This law is designed to avert that issue by helping to stop the production of such images at source.

Legislative Structure

The changes are being introduced as modifications to the criminal justice legislation, which also establishes a prohibition on possessing, producing or distributing AI models designed to generate child sexual abuse material.

Real-World Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The call portrayed a teenager seeking help after facing extortion using an explicit deepfake of themselves, created with AI.

"When I hear about children facing extortion online, it is a source of intense anger in me and justified concern amongst parents," he said.

Concerning Statistics

A leading internet monitoring foundation reported that cases of AI-generated abuse material – such as webpages that may include numerous files – had more than doubled so far this year.

Cases of the most severe category of material – depicting the most serious forms of exploitation – rose from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few simple actions, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally exploits victims' suffering, and makes children, especially female children, more vulnerable both online and offline."

Support Interaction Information

Childline also published details of counselling sessions where AI has been mentioned. AI-related risks discussed in the sessions include:

  • Employing AI to rate weight, body and appearance
  • Chatbots dissuading children from talking to trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Digital extortion using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated terms were discussed – significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Alicia Tanner

Alicia is a seasoned journalist and blogger with a passion for uncovering stories that matter to everyday life in the UK.