British Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Exploitation Content
Tech firms and child safety organizations will be granted permission to evaluate whether artificial intelligence tools can generate child abuse images under recently introduced British laws.
Significant Increase in AI-Generated Illegal Material
The declaration coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, the authorities will allow designated AI companies and child protection organizations to inspect AI models – the foundational systems behind conversational and image-generation tools – and verify that they have sufficient safeguards to stop them from producing images of child exploitation.
"Ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI systems early."
Tackling Regulatory Challenges
The amendments have been introduced because producing and possessing CSAM is against the law, which meant that AI developers and others could not generate such images as part of an evaluation regime. Previously, officials could not act until AI-generated CSAM had already been uploaded online.
This legislation is designed to prevent that issue by making it possible to stop the production of such material at source.
Legal Structure
The government is introducing the amendments to the crime and policing bill, which also establishes a ban on owning, creating or distributing AI systems designed to create child sexual abuse material.
Real-World Consequences
This week, the official visited the London headquarters of Childline and heard a mock-up of a call to counsellors involving an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about young people experiencing extortion online, it is a cause of intense frustration in me and justified concern amongst parents," he said.
Alarming Data
A prominent online safety foundation reported that cases of AI-generated abuse content – such as webpages that may contain numerous images – had significantly increased so far this year.
- Reports of category A content – the most serious form of abuse material – rose from 2,621 items to 3,086
- Female children were predominantly targeted, making up 94% of illegal AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to guarantee AI tools are secure before they are launched," commented the head of the online safety organization.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, providing criminals the capability to create possibly endless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies survivors' trauma, and renders young people, particularly girls, more vulnerable both online and offline."
Counselling Interaction Data
Childline also published details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to rate body size and appearance
- AI assistants dissuading children from consulting trusted adults about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling interactions in which AI, chatbots and associated terms were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.