British Tech Companies and Child Safety Officials to Test AI's Ability to Create Exploitation Content
Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence tools can produce child exploitation images under new British laws.
Substantial Rise in AI-Generated Illegal Material
The announcement came alongside revelations from a child protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will permit approved AI companies and child safety organisations to examine AI models – the foundational systems behind conversational and image-generation tools – and verify that they have sufficient safeguards to prevent them from creating images of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," stated the minister for AI and online safety, noting: "Experts, under rigorous conditions, can now identify the danger in AI systems promptly."
Addressing Regulatory Obstacles
The changes address a legal obstacle: because it is against the law to produce or possess CSAM, AI developers and others could not generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
The new law is designed to avert that problem by helping to halt the creation of such images at the source.
Legal Structure
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
The minister recently visited the London base of a children's helpline, where he listened to a mock-up call to advisers featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I hear about young people facing blackmail online, it is a source of intense anger in me and justified concern amongst families," he said.
Alarming Data
A prominent online safety foundation reported that cases of AI-generated exploitation material – reports that can each cover webpages containing numerous files – had more than doubled so far this year.
Instances of the gravest category of content rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants aged up to two increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to guarantee AI tools are secure before they are launched," commented the head of the online safety foundation.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, providing criminals the capability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Material which further exploits survivors' trauma, and makes young people, particularly girls, more vulnerable both online and offline."
Support Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate body size and appearance
- AI assistants discouraging young people from consulting trusted guardians about harm
- Facing harassment online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and related terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.