Tech companies and child safety bodies to test AI tools' ability to produce abuse images

Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce images of child abuse under a new UK law.

The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material [CSAM] more than doubled in the past year, from 199 in 2024 to 426 in 2025.

Under the change, the government will allow designated AI companies and child safety organizations to examine AI models – the technology behind chatbots such as ChatGPT and video generators such as Google's Veo 3 – and ensure they have safeguards to prevent them from creating images of child sexual abuse.

Kanishka Narayan, the Minister for AI and Online Safety, said the move was designed “to end abuse before it happens”.

The changes were introduced because it is currently illegal to create and possess CSAM, meaning AI developers and others cannot generate such images as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it. The new law aims to get around that problem by helping to prevent such images from being created in the first place.

The changes were introduced by the government as amendments to the bill, legislation that also bans AI models designed to generate child sexual abuse material.

This week Narayan visited the London base of Childline, the children's helpline, and listened to a mock-up of a call reporting AI-related abuse. The call depicted a teenager seeking help after being blackmailed with a sexualized deepfake of themselves, created using AI.

“When I hear about children experiencing blackmail online, it is a source of great anger for me, and of rightful anger for parents,” he said.

The Internet Watch Foundation, which monitors CSAM online, said reports of AI-generated abuse material – a single report can refer to a webpage containing multiple images – have more than doubled so far this year. Instances of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025, while depictions of newborns also rose between 2024 and 2025.

Kerry Smith, the Chief Executive of the Internet Watch Foundation, said the law change could be “an important step to ensure AI products are safe before they are released”.

“AI tools have made it so that survivors can be victimized all over again in just a few clicks, giving criminals the ability to create a potentially limitless amount of sophisticated, photorealistic sexualized material,” she said. “Material that prolongs the suffering of victims, and makes children, especially girls, less safe online.”

Childline also released details of counseling sessions where AI was discussed. AI harms mentioned in the conversations include: using AI to rate weight, body and appearance; chatbots discouraging children from talking to safe adults about abuse; being mocked online over AI-generated content; and online blackmail using AI-faked images.

Between April and September this year, Childline delivered 367 counseling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year. Half of the mentions of AI in 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.
