Technology companies and child safety organizations will receive permission to assess whether artificial intelligence tools can produce child abuse images under new British laws.
The announcement came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, approved AI developers and child protection groups will be permitted to inspect AI systems – the models underlying conversational AI and visual AI tools – and verify that they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
The measure is "fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the risk in AI systems early."
The amendments were needed because producing and possessing CSAM is illegal, meaning that AI developers and others could not generate such images even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM appeared online before addressing it.
The law aims to prevent that problem by enabling experts to halt the creation of such material at its source.
The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a ban on owning, creating or distributing AI models developed to generate child sexual abuse material.
This week, the official toured the London base of Childline and heard a mock-up of a call to counsellors featuring an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using a sexualised deepfake of themselves, constructed using AI.
"When I hear about young people facing blackmail online, it is a source of extreme anger in me and justified concern amongst families," he said.
A leading internet monitoring organization reported that cases of AI-generated exploitation content – such as webpages that may include numerous images – had more than doubled so far this year.
Instances of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
The law change could "represent a vital step to ensure AI products are safe before they are launched," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a few simple actions, giving offenders the ability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both on- and offline."
The children's helpline also published details of support sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and associated terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.
A tech enthusiast and journalist with over a decade of experience covering emerging technologies and digital transformations.
Michael Hunter