Technology companies and child safety agencies will receive permission to evaluate whether AI systems can generate child abuse images under new UK laws.
The announcement coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will permit designated AI developers and child protection organizations to examine AI models – the foundational systems for chatbots and image generators – and ensure they have adequate protective measures to prevent them from producing depictions of child exploitation.
"Ultimately about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the danger in AI systems promptly."
The changes have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties could not create such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before addressing it.
The law is designed to avert that problem by enabling authorities to halt the creation of such material at its source.
The amendments are being added to the crime and policing bill, which also introduces a prohibition on possessing, creating or sharing AI models developed to generate exploitative content.
Recently, the minister visited the London headquarters of a children's helpline, where he heard a mock-up call to counsellors involving a report of AI-based exploitation. The call portrayed an adolescent seeking help after facing extortion over an explicit deepfake of themselves, created using AI.
"When I hear about children experiencing blackmail online, it is a source of intense frustration in me and justified anger amongst parents," he stated.
A prominent internet monitoring foundation reported that instances of AI-generated exploitation material – where each instance can refer to a webpage containing multiple files – have risen significantly so far this year.
Instances of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.
The legislative amendment could "represent a crucial step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety organization.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a few clicks, giving criminals the ability to make possibly endless quantities of advanced, lifelike child sexual abuse material," she continued. "Content which further exploits victims' trauma, and makes children, especially female children, less safe both online and offline."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.