The use of artificial intelligence (AI) is a rapidly expanding and developing field, and the potential risks that it poses to children are an area of increasing concern.
In October 2023, the Internet Watch Foundation (IWF) published a report on the use of AI to generate child sexual abuse material and the wide-ranging harms associated with it; the report was reviewed and updated in mid-2024 to track the rapid advancement of the technology and its growing use.
The reports (IWF, 2023; 2024) demonstrate the horrifying ways in which AI is being used to create images – and now deepfake videos – of child sexual abuse, and recommend ways in which government and tech companies can respond to this issue.
However, there is currently little in the way of guidance for schools to support understanding of the issue from a safeguarding perspective, to provide strategies or ideas for the prevention of harm to children, or indeed ways to approach the issue in the curriculum.
The scale of the problem
It is illegal to possess, take or distribute ‘indecent pseudo-photographs’ of children and illegal to possess a ‘prohibited image of a child’, both of which cover AI images of child sexual abuse.
However, the IWF has found growing evidence that AI is being used to generate thousands of images of abuse, and that perpetrators are sharing them on both dark web and clear web forums.
The nature of the images is also becoming more severe and extreme over time. The IWF (2023) report detailed findings from monitoring one dark web forum in which abuse images are shared. In one month (October 2023), 20,254 AI-generated images were posted to the forum, of which 2,978 were assessed as criminal. Many of these images contained depictions of 'Category A' abuse – the most severe category – of children, including very young children. Moreover, it is currently not illegal to create and distribute guides on how to generate abuse material using AI.
The risks to children
It is important that schools and those working in safeguarding understand the emerging threats and risks to children as a result of advancements in AI. Using AI to generate child sexual abuse material has wide-ranging, devastating impacts.
Understanding how technology is being used to harm children is crucial in considering how to protect children and equip them with the knowledge that they need to stay safe.
How can schools respond to this issue?
Currently, the guidance for schools on the use of AI focuses mainly on the use of AI in managing workload and generating resources. There is scant mention of online safety and little in the way of concrete advice.
The non-statutory guidance document Sharing nudes and semi-nudes: Advice for education settings working with children and young people was updated in March 2024 to include greater reference to this issue and to ‘deepfakes’, with some overarching advice about responding in cases where these kinds of images have been shared (Department for Education [DfE], 2024).
Despite this, there are things that schools can do to begin to respond to this issue and protect children. Here are 10 ideas.

Final thoughts
As the risks become clearer and the use of AI in all forms becomes more mainstream, it is likely that schools will require specific guidance to support them in responding to the issues that emerge. However, as detailed in this article, there are already things that you can do through existing mechanisms to begin to respond to this threat and to develop knowledge and expertise around the risks. Getting ahead of emerging issues and thinking about prevention is essential in keeping children safe from all forms of harm, including this new and concerning development.