Artificial intelligence and deepfakes: Keeping children safe in schools

02 January 2025
Volume 2 · Issue 1

Abstract

Artificial intelligence is now being used to create images and deepfake videos of child sexual abuse. Elizabeth Rose looks at the implications for safeguarding work in schools

Artificial intelligence (AI) is a rapidly expanding and developing field, and the potential risks it poses to children are an area of increasing concern.

In October 2023, the Internet Watch Foundation (IWF) published a report on the use of AI to generate child sexual abuse material and the wide-ranging harms associated with it. The report was reviewed and updated in mid-2024, tracking the rapid advancement of the technology and the scale of its use.

The reports (IWF, 2023; 2024) demonstrate the horrifying ways in which AI is being used to create images – and now deepfake videos – of child sexual abuse, as well as recommending ways that government and tech companies can respond to this issue.

However, there is currently little guidance for schools to support understanding of the issue from a safeguarding perspective, to provide strategies for preventing harm to children, or to suggest ways of approaching the issue in the curriculum.

The scale of the problem

It is illegal to possess, take or distribute ‘indecent pseudo-photographs’ of children, and illegal to possess a ‘prohibited image of a child’; both offences cover AI-generated images of child sexual abuse.

However, the IWF has found growing evidence that AI is being used to generate thousands of abuse images, and that perpetrators are sharing them on both dark web and clear web forums.

The nature of the images is also becoming more severe and extreme over time. The IWF (2023) report detailed findings from monitoring one dark web forum in which abuse images are shared: in a single month (October 2023), 20 254 images were posted, of which 2 978 were found to be criminal.

It is currently not illegal to create and distribute guides on how to generate abuse material using AI. Many of the images found depict ‘Category A’ abuse – the most severe category – and involve very young children.

The risks to children

It is important that schools and those working in safeguarding understand the emerging threats and risks to children as a result of advancements in AI. Using AI to generate child sexual abuse material has wide-ranging, devastating impacts because:

  • Children who have appeared in child sexual abuse material (known victims) are being used to create new images, revictimising them over and over again
  • Analysts are spending time identifying whether images are AI-generated or feature ‘real’ children, meaning that the rescue of victims is delayed
  • Photos from websites, including photographs of famous pre-teen children, are being used to create images and videos of abuse
  • It provides opportunities for perpetrators to groom, coerce and blackmail children using AI-generated images
  • Adult offenders may share indecent AI images of children with a child in order to coerce or elicit real images from that child

Understanding how technology is being used to harm children is crucial in considering how to protect children and equip them with the knowledge that they need to stay safe.

How can schools respond to this issue?

Currently, the guidance for schools on AI focuses mainly on using it to manage workload and generate resources. There is scant mention of online safety and little in the way of concrete advice.

The non-statutory guidance document Sharing nudes and semi-nudes: Advice for education settings working with children and young people was updated in March 2024 to include greater reference to this issue and to ‘deepfakes’, with some overarching advice about responding in cases where these kinds of images have been shared (Department for Education [DfE], 2024).

Despite this, there are things that schools can do to begin to respond to this issue and protect children. Here are 10 ideas.

  • Safeguarding teams should familiarise themselves with the issue of AI-generated abuse material by reading the IWF reports and understanding the emerging picture of risk.
  • Staff members should be trained to understand the risks of AI and deepfakes and refer any concerns to the designated safeguarding lead (DSL) as they usually would.
  • Consideration should be given to the online safety curriculum and how to teach children to stay safe when using artificial intelligence themselves.
  • The curriculum should equip children with the wider knowledge and understanding that they need to stay safe online – including key messages stressing that they should only interact with people they know, and use safe and suitable websites. Consideration should also be given to how to support them in developing digital resilience.
  • Parents should be informed regularly of risks to children online, supported to ensure that suitable safeguards are applied to home broadband networks and encouraged to check children's devices regularly. Parents should also be reminded not to share images of children publicly online.
  • Staff members should also be reminded not to share images of themselves publicly online (keep social media profiles private).
  • Filtering and monitoring systems in school should prevent children from accessing any harmful content online.
  • A robust response involving the necessary safeguarding partners should be put in place in the event that children generate or distribute deepfake images or videos of peers.
  • DSLs should be familiar with tools that support the removal of abusive or indecent images, such as the Report Remove tool (see Further information). Specialist support should be sought where children have experienced online sexual abuse, and schools should follow local procedures for referring any incidents of harm or abuse to social care and the police.
  • Children should be reminded regularly of the reporting mechanisms and support available in school for responding to any safeguarding issue, including issues online.

Final thoughts

As the risks become clearer and the use of AI in all its forms becomes more mainstream, it is likely that schools will require specific guidance to support them in responding to issues that emerge. However, as detailed in this article, there are already things that you can do using existing mechanisms to begin to respond to this threat and to develop knowledge and expertise around the risks. Getting ahead of emerging issues and thinking about prevention is essential to keeping children safe from all forms of harm, including this new and concerning development.

Further information

  • DfE. Policy paper: Generative artificial intelligence (AI) in education. 2023. Available at: www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education#opportunities-for-the-education-sector
  • Internet Watch Foundation. Report Remove. Available at: www.iwf.org.uk/our-technology/report-remove

References

Department for Education. Sharing nudes and semi-nudes: advice for education settings working with children and young people. 2024. www.gov.uk/government/publications/sharing-nudes-and-semi-nudes-advice-for-education-settings-working-with-children-and-young-people/sharing-nudes-and-semi-nudes-advice-for-education-settings-working-with-children-and-young-people (accessed 24 January 2025)

Internet Watch Foundation. How AI is being abused to create child sexual abuse imagery. 2023. www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf (accessed 24 January 2025)

Internet Watch Foundation. What has changed in the AI CSAM landscape? 2024. www.iwf.org.uk/media/drufozvi/iwf-ai-csam-report_update-public-jul24v12.pdf (accessed 24 January 2025)