UK Tech Firms and Child Protection Officials to Test AI's Ability to Create Abuse Content
Technology companies and child protection agencies will receive authority to evaluate whether artificial intelligence tools can generate child abuse images under recently introduced British legislation.
Significant Rise in AI-Generated Harmful Material
The announcement came as a protection watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will permit approved AI developers and child protection organizations to inspect AI models – the underlying systems for chatbots and image generators – and verify they have sufficient protective measures to stop them from producing images of child sexual abuse.
The measures are "ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify the danger in AI systems promptly."
Tackling Regulatory Obstacles
The changes have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM appeared online before acting on it.
The law aims to avert that problem by enabling approved testers to stop the production of such material at its source.
Legislative Structure
The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI systems developed to generate child sexual abuse material.
Practical Impact
Recently, the minister toured the London base of a children's helpline and listened to a simulated call to advisers featuring an account of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about young people experiencing blackmail online, it fills me with intense anger and causes rightful concern amongst families," he said.
Alarming Statistics
A leading internet monitoring foundation reported that cases of AI-generated exploitation material – such as online pages that may contain multiple files – had significantly increased so far this year.
Instances of the most serious category of abuse material rose from 2,621 visual files to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to ensure AI tools are safe before they are launched," stated the chief executive of the internet monitoring foundation.
"AI tools have made it so victims can be targeted repeatedly with just a few simple actions, giving criminals the ability to make potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further exploits victims' suffering, and makes children, particularly female children, more vulnerable both online and offline."
Support Session Information
Childline also released details of support interactions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Using AI to rate body size, physique and looks
- AI assistants discouraging children from consulting trusted guardians about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and related terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.