A guide to AI-generated CSAM for investigators of online exploitation

18 November, 2024

AI-generated child sexual abuse material (AIG-CSAM) continues to threaten children’s safety. However, knowledge sharing can help investigators prepare to combat the issue and safeguard victims against the threat of online exploitation.

That’s why we’ve created this guide for investigators, covering:

  • What is AIG-CSAM?
  • Is AIG-CSAM illegal?
  • What is the scale of the AIG-CSAM issue?
  • What are the dangers of AI-generated CSAM?
  • What AI image generation techniques should investigators be aware of?
  • What are the dangers of open-source AI tools?
  • What are the challenges of AIG-CSAM for investigators?
  • What is the tech industry doing to combat AIG-CSAM?
  • What is CameraForensics doing to combat AIG-CSAM?

What is AIG-CSAM?

AIG-CSAM is child sexual abuse material that has been created or modified using artificial intelligence (AI).

It may be harmful content that is generated entirely by AI, such as through models that turn text prompts into images and videos. Or, it may be existing content that has been altered using AI models. For instance, benign images of children that are modified to depict them engaging in sexual acts.

The dangers of AI-generated CSAM are far-reaching, both for child victims and for investigators of online child exploitation. AI image generators can produce high volumes of harmful content rapidly, put even more children at risk of victimisation, and make it difficult for investigators to determine whether a real child is in need of safeguarding.

In this guide, we explain the dangers posed by AI-generated images in more detail.

Is AIG-CSAM illegal in the UK?

AI image generation is a relatively new threat to children’s online safety. However, laws do exist in the UK to protect victims against AIG-CSAM. These include:

  • The Protection of Children Act 1978 (the PoC Act), which makes it an offence to take, make, distribute or possess an indecent photograph or pseudo-photograph of a child.
  • The Coroners and Justice Act 2009, which makes it an offence to possess a “prohibited image of a child.” This includes images that are “grossly offensive, disgusting or otherwise of an obscene character.”

It’s worth bearing in mind this clarification from the Internet Watch Foundation (IWF):

“Proving whether an image is AI-generated is not an evidential requirement for prosecution under the PoC Act – it only needs to look like a photograph and be an indecent image of a child.”

The recent sentencing of Hugh Nelson shows that in the UK, perpetrators found guilty of using AI to generate CSAM will face legal action. AI-generated CSAM is still real CSAM – and it must be treated as such.

What is the scale of the AIG-CSAM issue?

AI-generated CSAM is already a large-scale problem, but it’s quickly getting worse – as a 2024 report by the Internet Watch Foundation shows. IWF analysts visited a dark web forum dedicated to CSAM over a 30-day period, and analysed more than 12,000 unique AI-generated images.

They found that:

  • 29% of them were actioned as criminal;
  • The number of images classified as Category A (the most severe category) had increased by 10 percentage points from September 2023;
  • Deepfake videos had started to appear.

The IWF also recently announced that its hotline has actioned more webpages of criminal AI content in the past six months than in the entire previous year.

Clearly, finding ways to mitigate the risks of AI-generated images and videos – and safeguard child victims from harm – is an urgent issue. It’s also an issue that we are passionate about combatting through our R&D projects with law enforcement agencies and fellow technology providers.

What are the dangers of AI-generated CSAM?

Investigators of online child exploitation have identified AI-generated child sexual abuse material as a threat for multiple reasons. These include:

Real children are victimised

AI CSAM is not a victimless crime. Offenders can directly harm real children in the following ways:

  • Offenders use AI to modify a benign image of a child to turn it into something inappropriate or explicit.
  • Offenders create their own specialised AI models based on a real child’s likeness, allowing them to create countless images of any situation or scenario containing that child’s identity.
  • Survivors of child sexual abuse are being revictimised by offenders who create specialised models based on images of their abuse, so that they can continually create new abuse material of those children.

Overwhelming investigators’ resources

AI-generated abuse material can be extremely realistic. This is making it more time-consuming for investigators to distinguish between “real” and AI-generated CSAM.

As a consequence, it’s also more difficult for them to allocate their resources to the victims that need the most urgent support.

Perpetuating harmful behaviour

As AI image generators become more sophisticated, and techniques for using them are shared freely, it’s becoming easier for offenders to generate and manipulate abuse imagery.

When you consider that 37% of those who engage with CSAM have sought direct contact with a child online, it’s clear that viewing abuse imagery can lead to further harmful behaviour.

To learn more about the ethical implications of generating realistic, non-consensual content with AI, read Understanding AI-generated imagery next.

What AI image generation techniques should investigators be aware of?

Offenders use various methods and techniques to generate AI CSAM:

Text-to-image is the generation of new images using text prompts.

Image-to-image generation takes an existing image and creates a new one based on a text prompt. The new image preserves aspects of the original, and the user controls how strongly the result resembles it. Offenders can use this method to create a CSA image by starting from an image of adult pornography.

Inpainting is the modification of an existing image using AI. Regions of the existing image are selected by the user, who describes what they want to put there instead. The AI then seamlessly integrates the new content into the existing image.

“Nudifying” apps are a specific application of inpainting, used to “remove the clothes” of a person in an image and create a nude depiction of them.

Text-to-image, image-to-image and inpainting can also be used with specialised models based on a real person’s likeness, allowing for the creation of AI images of a real person.

Deepfake videos involve altering existing videos by swapping the face and/or voice of the content’s subject with those of another person. Offenders can use this to impose the faces of child victims onto explicit scenes.

To learn more about deepfakes and how they create opportunities for online exploitation, we recommend reading our blog Deepfakes: A new threat to personal privacy and identity.

What are the dangers of open-source AI tools?

Open-source image generation models, such as Stable Diffusion and Flux, are the most open to misuse by bad actors. Because they are open source, anyone can download them for free, run them on a personal computer, and create images completely offline, without any restriction or moderation.

Additionally, anyone can fine-tune these open-source models on their own datasets, making them even more capable of generating certain types of harmful images.

To learn more about the risks of open-source AI image generation, read The dark reality of Stable Diffusion by our very own Dr Shaunagh Downing.

What are the challenges of AIG-CSAM for investigators?

“AI is constantly improving, and the fact that it is already capable of creating images that are visually indistinguishable from real photographs poses significant challenges for law enforcement and child safeguarding efforts.”

– Dr Shaunagh Downing, CameraForensics

AI-generated CSAM poses a series of challenges for investigators of online child exploitation. This includes the fact that investigators don’t immediately know if a photorealistic AI image depicts a real child. As a result, they may divert their time and resources towards children that don’t exist, and away from victims who need safeguarding.

This is also complicated by the fact that abuse images can be part real and part AI, meaning a real child, or real physical abuse, may be depicted in the image.

For investigators, another challenge is trying to keep up with the evolution of AI technology and understand its various threats. Being able to detect whether images have been generated or manipulated with AI is a great first step towards safeguarding children and identifying offenders, but it is also crucial to understand that children depicted in AI images are victims too, and may need safeguarding.

In Unveiling the challenges of AI-generated CSAM, Dr Shaunagh Downing digs deeper into the challenges facing investigators, and how these will evolve in the future.

What is the tech industry doing to combat AIG-CSAM?

Big Tech must work to limit the accessibility of exploitative AI tools. For example, search engines should delist nudifying websites, app stores should remove any apps capable of creating non-consensual material, and social media platforms need to limit what they advertise to young people.

One way that technology developers can mitigate the risks of generative AI to child safety is by following Safety by Design principles.

These principles, developed by Thorn and All Tech Is Human in collaboration with others, aim to ensure that AI tools are developed, deployed, and maintained with child safeguarding at their core.

The full guide includes pledges such as “Responsibly source your training datasets, and safeguard them from CSAM and CSEM” and “Prevent your services from scaling access to harmful tools”.

To learn more about these principles, and to find out what Big Tech still needs to do to combat AI-generated abuse material, read How ‘Safety by Design’ principles aim to change the AI industry.

What is CameraForensics doing to combat AIG-CSAM?

“Our mission is to equip investigators in the fight against illicit image sharing. Keeping up with the latest technology is essential, and Generative AI presents a significant new challenge. By understanding its capabilities and what makes it so effective, we can contribute to the global effort to mitigate its threats.”

– Alan Gould, Operations Manager at CameraForensics

We regularly partner with law enforcement agencies to understand the unique challenges they face when identifying victims and investigating offenders of child sexual exploitation. By doing so, we can build tailored image forensics solutions to solve complex problems.

We’ve also been working alongside Lux Aeterna to prevent the spread of harmful content, including AI-generated CSAM. As Lux Aeterna pushes the boundaries of generative AI, we’re stripping back the team’s processes to better understand how they could be used by bad actors.

This approach has allowed us to build effective countermeasures to illicit content sharing, so that we can help investigators to safeguard victims effectively.

To find out how we’re using R&D opportunities to support the fight against online child exploitation, read Why it’s important to push the boundaries of AI techniques now.

Combat online child exploitation with CameraForensics

As we grapple with ever-evolving threats to children’s online safety, image forensics techniques have become a vital part of law enforcement agencies’ toolkits.

If you’d like to find out how image forensics could help you to analyse clues from online content, connect the dots between sources, and safeguard children against sexual exploitation, download A beginner’s guide to image forensics today.

