Artificial intelligence and deepfakes: Keeping children safe in schools

Abstract
Artificial intelligence is now being used to create images and deepfake videos of child sexual abuse. Elizabeth Rose looks at the implications for safeguarding work in schools.
The use of artificial intelligence (AI) is a rapidly expanding and developing field, and the potential risks it poses to children are an area of increasing concern.
In October 2023, the Internet Watch Foundation (IWF) published a report on the use of AI to generate child sexual abuse material and the wide-ranging harms associated with it. The report was reviewed and updated in mid-2024, tracking the rapid advancements in the technology and its level of use.
The reports (IWF, 2023; 2024) demonstrate the horrifying ways in which AI is being used to create images – and now deepfake videos – of child sexual abuse, and recommend ways in which government and technology companies can respond to the issue.
However, there is currently little in the way of guidance for schools to support understanding of the issue from a safeguarding perspective, to provide strategies for preventing harm to children, or to suggest ways of approaching the issue in the curriculum.