Detecting AI CSAM – a vital investigative capability

23 December 2025

CameraForensics


The ability to detect AI-generated child sexual abuse material (CSAM), and to uncover intelligence from it, has rapidly emerged as a priority for investigators.

We recently spoke to Jon Rouse APM, Founding Partner of Onemi-Global Solutions, to learn more about the escalating need for tools that enable this. Here, Jon explores the potential use cases and limitations of AI image detectors, and shares his predictions for the future of this rapidly developing technology. 

AI-generated CSAM is an escalating threat to children worldwide. What does this mean for investigators? 

AI-generated CSAM has changed the landscape for investigators. The big shift is that we can’t assume every image is camera-captured anymore, and we can’t rely on hash-matching alone, because newly generated material won’t appear in any database of known images.
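To illustrate the hash-matching limitation Jon describes: exact-match systems can only flag content that has already been catalogued, so freshly generated material produces a digest that matches nothing on file. A minimal sketch of that failure mode, using a generic SHA-256 lookup rather than any operational system (real deployments use perceptual hashes, which tolerate edits but still cannot match never-before-seen images):

```python
import hashlib

# Hypothetical database of digests for previously catalogued material.
known_hashes = {hashlib.sha256(b"previously catalogued image").hexdigest()}

def is_known(image_bytes: bytes) -> bool:
    """Exact-match lookup: only content seen before can ever match."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

assert is_known(b"previously catalogued image")        # catalogued content matches
assert not is_known(b"novel AI-generated material")    # unseen content always misses
```

The second lookup misses not because the content is benign, but simply because it has never been seen before, which is exactly the gap AI-generated material exploits.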

Offenders can, and do, produce large volumes of incredibly realistic material, sometimes by face-swapping real children. This makes the first question in any case: ‘Is there a real child at risk right now?’ This material can also pollute law enforcement databases like VIC, CAID, and ICSE, making it harder for victim ID teams to follow trusted workflows.

AI CSAM adds noise and complexity and shifts where the evidence lives. It doesn’t make investigations impossible, but it does mean we need better triage processes, tools, and a mindset that follows the whole ecosystem – not just the image. 

How could AI image detection technology help law enforcement overcome these challenges and move their investigations forward?



AI image detection can flag what looks camera-captured, what might involve a real child, and what’s synthetic or face-swapped. This means law enforcement agencies (LEAs) can prioritise victim ID and safeguarding instead of spending hours just trying to work out where to start. 

It can also help detect patterns that would never be picked up manually. Similar locations, the same child appearing in different sets, or even a consistent style that hints at a particular offender or tool. On the tech side, AI could recognise provenance signals, watermarks, or manipulations, which all help us understand how the material was made and who might be behind it. 

AI can get the clutter out of the way. It can push the high-risk material to the top of the pile, provide early leads, and let teams focus their time where it protects kids. 
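The triage idea Jon outlines, surfacing the highest-risk material first, can be pictured as a simple priority queue. This is a hypothetical sketch; the risk scores themselves would come from upstream detection models, and the series names are invented for illustration:

```python
import heapq

# Hypothetical risk scores from upstream detectors (higher = more urgent).
items = [("series_a", 0.20), ("series_b", 0.92), ("series_c", 0.55)]

# heapq is a min-heap, so negate each score to pop the highest risk first.
queue = [(-risk, name) for name, risk in items]
heapq.heapify(queue)

# Drain the queue: analysts see the riskiest material at the top of the pile.
review_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
# review_order → ["series_b", "series_c", "series_a"]
```

The ordering, not the data structure, is the point: investigator time goes to the cases most likely to involve a child at immediate risk.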

What are some of the potential challenges or limitations of AI image detection technology for investigators?  

AI detection tools are valuable, but they do come with limitations that investigators need to keep in mind. For one, the technology isn’t static. AI models must change and evolve as offenders change their methods, update their generative tools, or deliberately try to evade detection. This means performance can vary, and systems need continual tuning. 

There’s also the issue of accuracy. AI image detectors can misclassify content, so you still need a human to validate what the system flags and to interpret findings in the right operational and legal context. 

AI is a powerful aid, but not a standalone solution. It’s most effective when it supports – rather than replaces – investigator expertise. 

Will technology ever completely automate the process of determining whether images and videos have been generated or manipulated with AI?  

Wouldn’t it be nice? Realistically, though, it’s unlikely that technology will ever fully automate that determination. Detection tools will keep improving, and they’ll get better at spotting the tell-tale signs of synthetic or manipulated media. But offenders adapt quickly, and new models appear faster than any detector can be perfectly trained against. 

AI also often lacks the wider context we rely on in investigations. It can highlight a suspicious image, but it can’t tell you about the relationship between the offender and the victim, or whether a child is in immediate danger. That judgement still belongs to trained analysts and investigators.  

AI detection technology is still novel – but how might it need to evolve in the future, based on offenders’ evolving tactics or other factors?  

I think we’re still in the early innings with AI detection. Right now, a lot of the focus is on still images, but offenders are not stopping there. As the tools get better and easier to use, we’re going to see more AI-generated and heavily manipulated video, and that’s a different level of complexity altogether.

So, the tech will need to evolve in a few ways. First, it must move from just ‘is this synthetic?’ to ‘what exactly has been done here?’. Was the whole clip generated? Was a real child face-swapped in? Is the background fake, but the person real? That kind of nuance really matters for victim triage and charging decisions. 

Next, I think we’ll see more emphasis on multi-signal analysis rather than just pixels – combining AI detectors with provenance data, platform logs, behavioural patterns, and cross-case similarity. Especially with video, you might get more value from stitching together lots of weak signals than from one ‘magic’ classifier. 
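One simple way to picture “stitching together lots of weak signals” is a noisy-OR combination, where several individually inconclusive detector scores still produce a strong combined score. This is a hedged sketch under an independence assumption; a real multi-signal system would weight and calibrate its inputs rather than treat them as equal and independent:

```python
def noisy_or(probabilities):
    """Probability that at least one of several independent signals is genuine."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)  # chance that this signal is a false alarm
    return 1.0 - p_none

# Three weak, independent signals: e.g. pixel artefacts, a provenance gap,
# and a cross-case similarity hit. None is conclusive on its own.
combined = noisy_or([0.30, 0.40, 0.25])  # ≈ 0.69, well above any single input
```

No single classifier here would justify escalation, but together they push the item up the review queue, which is the value of multi-signal analysis over one ‘magic’ detector.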

How can law enforcement and technology companies continue developing the AI detection capabilities that investigators need, now and in the future?  

I think the only way we get the AI detection tools investigators really need is by treating this as a joint problem, not a vendor problem or a police problem. 

On the law enforcement side, agencies must be really clear about what hurts in real cases – where they’re losing time, where they’re unsure if a child is real, which types of files keep slipping through. That means providing structured feedback to the tech companies. 

On the tech side, companies need to move beyond just shipping a model and walking away. They should be co-designing features with investigators, updating detectors continuously, and investing in training and documentation. 

We keep making progress when law enforcement brings the hard, real-world problems to the table, and technology companies respond with tools that are not just clever, but operational, explainable, and built with investigators in the loop.    

Dedicated to developing vital investigative capabilities 

We’re empowering investigators to not only identify AI-generated CSAM, but to uncover the forensic context they need to assess risk and malicious intent. We look forward to sharing more about our technological developments when we can.  

In the meantime, sign up to our monthly newsletter, The Source, to be the first to receive articles like these from the CameraForensics team and our partners.

