AI nudification: how do we combat AI-enabled NCII abuse?

16 December, 2025

CameraForensics


AI nudification tools are being used to generate non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) of real people across the world. What can be done to combat this particularly harmful, and prevalent, form of abuse? We explore this important question here.

What is AI nudification? 

AI nudification generally refers to online services, such as apps and websites, that turn photos of real people into realistic explicit images.   

AI nudification often relies on a technique called ‘AI inpainting’, in which AI modifies selected regions of an existing image. Perpetrators use it, for example, to ‘remove’ clothing from images of children and replace it with nude, sexualised depictions.

To learn more about techniques like inpainting, read A guide to AI-generated CSAM for investigators of online exploitation

What are the harms of AI nudification? 

Nude images generated by AI tools are sometimes referred to as ‘AI nudes’ or ‘deepfake nudes’. Yet indecent images that contain AI-generated, or ‘fake’, elements can still cause very real harm to children. These harms include psychological effects, such as feelings of lost autonomy and fears around the anonymity of the perpetrator.

To learn more about the many risks of AI nudification abuse, read Dr. Shaunagh Downing’s insights in The rise of nudifying tools and their threats to children

What are the challenges of combatting AI nudification for law enforcement? 

AI-generated images are becoming increasingly realistic, which poses many challenges to law enforcement. Investigators must determine whether abuse images are entirely or partially synthetic, before they can take the most appropriate next steps to protect the victim. This adds to investigators’ already high workloads and can leave vulnerable children waiting longer for support.   

AI nudification also poses an additional, unique challenge to law enforcement: children can be both victims and perpetrators of this abuse. In fact, research from Thorn suggests that 2% of young people have admitted to creating nude images of someone else using AI (sometimes called ‘deepfake nudes’).  

You can learn more about the complexities of navigating AI-enabled image abuse for law enforcement in Unveiling the challenges of AI-generated CSAM

How can we prevent nudification abuse? 

Mitigating the harms of AI-enabled NCII and CSAM requires a proactive approach. It also relies on interventions from the entire technology ecosystem responsible for enabling and amplifying AI nudification tools.  

To prevent nudification abuse, technology companies must: 

1. Embed ‘Safety by Design’ principles into AI tools 

The problem: 

AI nudification services are often built on powerful, open-source diffusion models, such as Stable Diffusion. These models can be downloaded, fine-tuned on any images the user chooses, and run offline with no moderation.

What can be done? 

AI developers should embed guardrails into their tools that anticipate and prevent downstream harms, particularly to children.  

This means applying Safety by Design principles throughout development, deployment, and maintenance: for instance, making sure that any datasets used to train generative models have been sourced responsibly and do not contain child sexual abuse material.

Safety by Design principles may not address the harms that have already been caused by generative AI, but they could help to mitigate them in the future. 

You might find interesting: How ‘Safety by Design’ principles aim to change the AI industry 

2. Remove nudifying technology across the web 

The problem: 

Many AI nudification tools are easily accessible across the web. They are found in search engine results, advertised across social media platforms, and downloaded via app stores.  

What can be done? 

Technology companies have a vital role to play in preventing perpetrators from finding and using AI nudifying services. To do this, they must remove abusive AI applications – and any related advertisements – from their sites and platforms. 

A recent example of Big Tech working to combat NCII abuse comes from Meta, which alerts other technology companies to harmful apps and websites via the Tech Coalition’s Lantern programme. However, to have the greatest positive impact, platforms must commit to tackling AI-assisted abuse proactively. This means blocking nudification technology from being hosted online altogether, rather than just responding to requests to remove it. 

How can we combat nudification abuse? 

Preventing the creation and distribution of AI abuse material is the most effective way to protect victims from harm. Nevertheless, efforts should also focus on limiting the spread of NCII and CSAM once it has been created, and on reducing the risk of revictimisation.

To combat nudification abuse, schools, policymakers, and law enforcement can: 

1. Provide adequate support to victims 

The problem: 

Victims of AI nudification abuse don’t always have access to appropriate support. For instance, only 37% of students believe that their school effectively supports students depicted in synthetic non-consensual intimate imagery.

What can be done? 

Decision-makers must listen to the voices of children who have been victimised by AI nudification. Listening to the experiences of people like Francesca Mani can help schools, governments, technology companies, and other groups to understand the full scope of the harm caused by AI, identify gaps in current protections, and shape solutions that make an impact.

2. Use hashing to remove non-consensual material 

The problem:  

Once an image or video is online, it can be spread across platforms, forums and users. Even if the original image is removed, there is a risk that others will have made and distributed copies elsewhere.  

What can be done? 

Hashing can help to find and remove harmful images, including those generated with nudifying apps.  

It works by computing a unique ‘hash’ of an image and then using this digital fingerprint to search for duplicates. Technology companies can also use hashes to prevent new copies of the image from being uploaded to their sites and platforms.
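As a rough illustration of the idea (not of PhotoDNA or any other specific industry system), the sketch below uses the open-source Python libraries Pillow and imagehash to compute a perceptual hash of an image and compare it against a small set of known hashes. The filenames, hash values, and distance threshold are hypothetical.

```python
# A minimal sketch of perceptual hash matching using the open-source
# 'imagehash' and Pillow libraries. This illustrates the general idea only;
# industry systems such as PhotoDNA use different, proprietary algorithms.
from PIL import Image
import imagehash

# Perceptual hashes of previously reported images (hypothetical values).
known_hashes = [
    imagehash.hex_to_hash("f0e4c2d7b1a89635"),
]

def matches_known_image(path: str, max_distance: int = 8) -> bool:
    """Return True if the image's perceptual hash is close to a known hash.

    A small Hamming distance means the images are near-duplicates, so minor
    edits such as resizing or recompression can still be detected.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in known_hashes)

if __name__ == "__main__":
    print(matches_known_image("uploaded_image.jpg"))  # hypothetical filename
```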

Some of the organisations with hashing and matching initiatives include: 

  • StopNCII.org  – a free tool for people to create hashes of NCII. 
  • NCMEC (Take It Down) – a free tool for people to generate a hash of an image or video that was created of them while they were under 18. 
  • Internet Watch Foundation (IWF)’s Hash List – a list of over 2.8 million hashes generated by IWF analysts to help technology organisations block CSAM from appearing on their sites.  

Law enforcement can also use hashing to further their investigations into child sexual abuse. For instance, we’ve embedded IWF’s Hash List into CameraForensics technology to help investigators quickly filter through known abuse material. Investigators can also search for direct or derivative copies of abuse material using the CameraForensics platform’s PhotoDNA feature.  
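To give a sense of how hash-list filtering works in practice, here is a minimal sketch that computes SHA-256 hashes for files in a folder of seized media and flags any that appear in a plain-text list of known hashes. This is not the CameraForensics or IWF implementation, and the paths and list format are hypothetical.

```python
# A minimal sketch of filtering files against a known-hash list using exact
# (cryptographic) hashes. Real hash lists and matching pipelines differ;
# the file paths and list format here are hypothetical.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_known_material(media_dir: str, hash_list_file: str) -> list[Path]:
    """Return files whose hashes appear in the known-hash list."""
    known = set(Path(hash_list_file).read_text().split())
    return [p for p in Path(media_dir).rglob("*")
            if p.is_file() and sha256_of_file(p) in known]

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    for match in flag_known_material("./seized_media", "./known_hashes.txt"):
        print(f"Known material: {match}")
```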

3. Innovate to support investigators  

The problem: 

Nudifying apps are creating increasingly realistic depictions of children. This puts additional strain on investigators, who must determine whether child sexual abuse material has been generated by AI nudification tools.

What can be done? 

At CameraForensics, we are currently developing AI image detection technology that empowers investigators to: 

  • Quickly determine whether material has been generated or modified by AI. 
  • Identify which parts of the material have been altered. 
  • Take the most appropriate next steps to protect victims. 

We look forward to sharing more about this development when we can.  

You might find interesting: Detecting AI CSAM – a vital investigative capability

Learn more with The Source 

If you would like to keep learning about emerging threats to children online, the challenges they pose to law enforcement, and the technology being developed to counter them, why not sign up to our newsletter, The Source? Each month, we’ll deliver the latest insights from our team directly to your inbox.

