British Technology Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Content
Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence tools can generate child abuse images under new UK legislation.
Substantial Increase in AI-Generated Illegal Content
The announcement came as a safety watchdog revealed that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will allow approved AI developers and child safety organizations to inspect AI models – the foundational systems behind conversational and image-generating AI tools – and verify that they have adequate safeguards to stop them from producing depictions of child exploitation.
The measures are "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now detect the risk in AI models early."
Tackling Legal Challenges
The amendments were introduced because creating and possessing CSAM is against the law, meaning that AI developers and others could not generate such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
The new law is designed to prevent that problem by helping to stop the creation of such images at their origin.
Legislative Structure
The government is adding the amendments as revisions to the crime and policing bill, which also introduces a prohibition on possessing, creating or sharing AI models designed to generate exploitative content.
Practical Consequences
This week, the minister toured the London headquarters of a children's helpline and listened to a mock call to counsellors featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, constructed using AI.
"When I learn about children experiencing extortion online, it is a cause of intense anger for me and of justified concern amongst families," he stated.
Concerning Data
A prominent internet monitoring foundation stated that cases of AI-generated exploitation material – such as online pages that may contain numerous files – had significantly increased so far this year.
Cases of the most severe category of material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
- Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a vital step to guarantee AI products are secure before they are released," stated the head of the internet monitoring foundation.
"AI tools have made it possible for survivors to be victimised all over again with just a few simple actions, giving offenders the ability to make potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further exploits survivors' suffering, and renders young people, especially female children, more vulnerable online and offline."
Support Session Data
Childline also released details of support interactions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Using AI to rate body size, physique and appearance
- Chatbots discouraging children from talking to safe adults about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling interactions in which AI, conversational AI and related topics were mentioned, significantly more than in the same period last year.
Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.