
Detecting AI CSAM – a vital investigative capability
CameraForensics
19 January 2026
Rob Chitham & Dr Shaunagh Downing

Perpetrators have always looked to new platforms and technologies to facilitate their attempts to exploit children online. The use of social media networks and online gaming platforms to initiate contact with children is just one example. But as AI tools become increasingly sophisticated and accessible, perpetrators’ tactics are evolving.
We recently spoke to Engagement Manager (and former Detective Inspector) Rob Chitham and AI Lead Dr Shaunagh Downing about this issue. Here are four key takeaways from our conversation on how AI is changing the scale and scope of online enticement crimes, such as grooming and sexual extortion.
1. AI is accelerating sextortion attempts
Sexual extortion, often referred to as ‘sextortion’, involves perpetrators coercing children and young people into sending sexualised images and videos of themselves. These are then used to blackmail them for money (financial sextortion) or further intimate material.
Sextortion reports now highlight a new tactic: perpetrators can quickly and easily create photorealistic images and videos with AI. This allows them to convincingly depict victims in sexualised material, which is then used for extortion.
As Dr Shaunagh Downing shares in her latest blog, The rise of nudifying tools and their threats to children, AI tools are ultimately enabling perpetrators to ‘accelerate’ their extortion attempts:
“…offenders no longer need to coerce children to send genuine intimate images to exploit them. They can simply create those images with AI.”
This is a growing problem. In an analysis of child sextortion reports made to NCMEC between 2020 and 2023 where specific extortion tactics were indicated, Thorn and NCMEC found that 11% involved victims being threatened with images that were ‘in some way fake or inauthentic’, a category covering AI-generated and AI-manipulated material. A 2024 survey of 16-18-year-olds in Australia also reported that more than 41% of respondents who had experienced sextortion had been threatened with digitally manipulated material.
The effects of sexual extortion can be extremely psychologically damaging for victims. Importantly, the harm caused, and the fear, panic and shame children experience, occurs whether the material used for extortion is AI-generated or camera-captured.
According to the father of one teenage boy who faced sextortion threats using an AI-generated image of himself:
“He was absolutely petrified, shaken, and he obviously only came down to us at the point when he realised he didn’t know what to do.”
You can learn more about the use of AI to generate child sexual abuse material in our article: The rise and harms of AI-generated CSAM – and a way forward.
2. AI tools can be used to groom children online
The threat of perpetrators using AI tools to communicate with children online is another growing concern. A 2024 survey of 603 schools conducted by Qoria found that the vast majority of respondents (including 90.5% of UK respondents) were concerned about the risk of adults using AI to groom their students.
The Qoria report also elaborates on the tactics that perpetrators might use to exploit children using AI. According to Qoria, perpetrators can use AI to create personas that appear relatable or trustworthy, and to generate images or other proof points that lend credibility to their stories. A new film from the WeProtect Global Alliance, Protect Us, also explores the many harrowing ways that perpetrators are using generative AI to manipulate children, with dramatised depictions based upon true stories.
Currently, it may be difficult for researchers to measure exactly how often tactics like these are being used. In some cases, for instance, victims may not recognise that the person they are engaging with is using AI to guide the conversation, and therefore never report it. However, the research that is available highlights that AI-enabled communications are a growing risk, and one that investigators need to be aware of.
3. AI chatbots risk normalising abusive behaviour
Recently, the Internet Watch Foundation (IWF) identified an AI chatbot website that enables perpetrators to engage in simulated sexual scenarios with children, including scenarios such as ‘child and teacher alone in class’. The same chatbot also allowed perpetrators to create AI-generated CSAM of their chosen AI personas.
One potential risk is that engaging with AI chatbots normalises harmful behaviour and could embolden perpetrators to engage with real children, online or in person. Trajectories like these have already been documented in cases of viewing CSAM online: one study found that 42% of respondents had initiated contact with children after doing so. We therefore need to consider that engaging with AI chatbots may have a similar effect.
4. AI companions pose direct risks to children
AI companions are specialised AI chatbots designed to mimic a friendship or relationship with the user. According to Common Sense Media, 72% of teens have used an AI companion, and over half use them regularly.
Children engaging directly with AI companions also face new risks to their safety. For instance, Aura’s State of the Youth report found that 37% of children’s interactions with AI companions involve violence. Among 13-year-olds, sexual/romantic roleplay is the most common topic, appearing in 63% of conversations at that age.
These interactions can have devastating consequences. In one particularly tragic incident, a 14-year-old boy took his own life after engaging with various personas on an AI chatbot site. In the lawsuit that followed, his mother stated that the chatbots used “anthropomorphic, hypersexualized, and frighteningly realistic experiences” to target her son.
In a later study of the same chatbot site, adult researchers held conversations using accounts registered to five fictional children. Across 50 hours of conversation with bots using adult personas, they identified 669 harmful interactions, an average of one every five minutes. Over 44% of these were classified as grooming and sexual exploitation: the adult-persona chatbots touched, kissed, and simulated sex acts with the accounts registered to children, and claimed that the relationship between them was special.
Leaked documents also suggest that Meta’s AI chatbot tool could engage in ‘sensual’ and ‘romantic’ communications with children. This highlights that the risk of children being groomed by chatbots isn’t confined to dedicated AI chatbot sites; it extends to mainstream applications too.
Staying abreast of perpetrators’ tactics for exploiting and abusing children has always been challenging for parents and law enforcement agencies, and the proliferation of AI tools is exacerbating that challenge. AI is making it easier and quicker for perpetrators to deceive, groom, and extort their victims.
Combatting this will require intervention from multiple technology partners, including the developers behind the generative AI models being used maliciously, and the websites and platforms that allow harmful communications and sextortion attempts to persist.
We explore this in more detail in: AI nudification: how do we combat AI-enabled NCII abuse? Although that article focuses on combatting the harms of nudifying tools, many of its insights apply here too.
To be the first to read more insights like these, we recommend signing up to our monthly newsletter, The Source. Sign up today and get the latest articles from Shaunagh, Rob, and the rest of our team and partners, sent straight to your inbox.