The explosion of social media use over the last decade has brought with it a failure of society either to predict the threats the phenomenon would unleash into the lives of our children and young people, or to offer them any protection against its harms. Big business drove its rapid development, many waxed lyrical about its power for good, and little heed was paid to the safeguarding and wellbeing of children.
We are now scrambling to catch up in the face of social media's clear impact on children's mental health and wellbeing. The evidence is mounting; here are just two recent examples:
- In 2017, 14-year-old Molly Russell took her own life. The inquest into her death found that she had viewed thousands of images relating to suicide and self-harm on social media. Of the 16 300 posts Molly saved, shared or liked on Instagram in the 6 months before her death, 2100 were related to depression, self-harm or suicide.
- In 2023, Dame Rachel de Souza, Children's Commissioner for England, reported that 10% of children have watched pornography by the age of 9, with the average age of first exposure being 13. The most common source of this pornography? Twitter. Also in the top five were Instagram and Snapchat (Children's Commissioner, 2023).
It is argued that social media has its place and offers benefits for children, many of whom are more likely to socialise and communicate with friends online than in person. On the other hand, social media may come to be seen in years to come as one of the biggest mistakes of the 21st century, having given a voice to extremist, racist and misogynist views.
Are we learning any lessons? It is unclear, not least because the same rapid development is now happening with artificial intelligence (AI). Its supporters argue for its importance to economic rebirth, its potential to revolutionise our working lives and its power to solve the world's problems – pick your religion. Once again, talk of safeguarding, of threats and of AI being put to more nefarious purposes is dismissed as Luddite ignorance or as stoking fear among the public.

However, the Internet Watch Foundation (2023; 2024) is not scaremongering. Last year, it revealed the ways in which AI is being used to create images and deepfake videos of child sexual abuse, with evidence that thousands of AI-generated images of child abuse are being shared on dark and clear web forums. The IWF's monitoring of these sites found that, in just one month, more than 20 000 images were shared; of these, almost 3000 were assessed as criminal. Thanks to the wonders of AI, abuse imagery can now be created easily from ordinary photographs taken, for example, from websites and social media profiles, and perpetrators can reuse these images again and again.
As safeguarding professional Elizabeth Rose (see page 44) points out in this issue, analysts are now spending increasing amounts of time identifying whether images are AI-generated or feature ‘real’ children, delaying intervention and the rescue of victims. Another outcome is that perpetrators can now use deepfake images to groom, coerce and blackmail children (no doubt taking advantage of social media to do so).
As ever, where damage is caused, schools and health professionals will be left to pick up the pieces. As Ms Rose explained: ‘It is important that schools and those working in safeguarding understand the emerging threats and risks to children as a result of advancements in AI. Using AI to generate child sexual abuse material has wide-ranging, devastating impacts.’
If we cannot trust developers to make AI and social media platforms safe, or policy makers to take effective action, then it falls to us to protect and educate our children. Until they act, we have work to do.