Urgent action is needed to protect victims of online abuse
Matt Burns & Dave Ranner
25 November, 2020
The nature of online crime requires us to constantly develop our strategies. As technology grows and changes, so do the techniques of criminals, making it more difficult for investigators and law enforcement to keep up with their workloads and catch offenders.
That’s why collaborative R&D makes up a prominent area of our work, allowing us to create new, innovative capabilities that can empower investigators to stay one step ahead.
Device-level blocking is something that could help safeguard victims, slow down the sharing of illegal images, and ease the pressure on investigators. While the technology exists in theory, it would need the support of global tech organisations to become an impactful reality.
Child sexual exploitation (CSE) material is a prolific issue. It exists all over the internet, in the cloud, in people's networks and on their phones. The ease with which it can be shared or duplicated makes it extremely difficult to find the original perpetrator and to stop it from spreading.
Device-level blocking could be an extremely effective solution. In theory, this approach could prevent an individual from receiving or opening CSE material on their device.
This would significantly reduce the amount of CSE present in the digital world and, as a result, make it much easier for investigators to find and safeguard victims, while preventing further revictimisation through image sharing.
It could also reduce the number of individuals who perpetrate these kinds of offences. Not only would it reduce the normalisation of this kind of behaviour, but the act of being blocked could be enough to stop a first-time offender from getting more involved.
This capability could be built using cryptographic hashes. Whereas similarity hashes or AI identification technology can assess whether an image matches or resembles known abuse material within a given level of accuracy, a cryptographic hash match establishes, to all practical purposes, that an image is an exact, bit-for-bit replica of a known illegal file.
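The core mechanism is simple to sketch. The following is a minimal illustration, not the implementation of any real system: it assumes a locally stored set of SHA-256 digests of known illegal files (in practice such a database would be curated by recognised child-protection bodies), and checks whether a file's bytes hash to an entry in that set.

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of known illegal files.
# In a real deployment this database would be supplied and maintained
# by an authoritative child-protection organisation, not hard-coded.
BLOCKED_HASHES = {
    # This is the SHA-256 of the placeholder bytes b"test", used purely for demonstration.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(data: bytes) -> bool:
    """Return True if the file's cryptographic hash matches a known entry.

    A match means the bytes are an exact replica of a catalogued file;
    changing even a single byte produces a completely different digest,
    which is why cryptographic hashing only catches exact copies, while
    similarity hashing is needed for altered or re-encoded images.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in BLOCKED_HASHES

# A device could run this check before rendering, saving, or forwarding an image:
print(is_blocked(b"test"))   # True  - exact replica of a catalogued file
print(is_blocked(b"test!"))  # False - one extra byte, entirely different digest
```

Note the trade-off this illustrates: the check is cheap and produces no false positives, but it is defeated by any modification to the file, which is why the article contrasts it with similarity hashing and AI identification.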
In fact, this technology isn't very complicated to develop and could be an important tactic for combatting crime. So, if device-level blocking of this kind were created, how would it be implemented to stop the sharing of child sexual abuse material (CSAM)?
As the name suggests, the base capability would be to block illegal images.
It could exist on devices and contain an established database of the most common or critical CSAM images. It would then automatically identify when one of these illegal images appeared on your phone and stop it from loading, opening, or downloading.
There is also the more controversial possibility of responding to an individual who attempts to open a blocked image.
There are two approaches to this. Firstly, action could be taken against the individual trying to open an image by either automatically notifying law enforcement of the attempt or conducting counter-analysis to gain more intelligence.
Alternatively, help could be provided to the individual in the form of self-help links or educational material on the impact of CSE to victims.
This response step is so controversial that, in our opinion, it should not be pursued until the simple act of blocking has been implemented. It raises many valid questions, but answering them would only delay the rollout of a blocking process that would achieve most of the goals on its own.
The involvement of large technology or SaaS providers and developers could accelerate the impact and reach of this approach.
One option is to offer the capability as a separate app which people voluntarily download. But of course, the primary targets of device-level blocking would be extremely unlikely to download it, so the app would have very little impact.
The most effective method would be to produce this capability as a built-in feature of various digital platforms. It would exist as standard, much like the anti-virus features built into phones, automatically recognising and blocking illegal images that match its database.
This is where the big tech players come in. For this to work, they would need to approve and implement this feature into their technology during development. This would make the capability much more widespread and effective than being produced as an independent app.
Here are some primary platforms where device-level blocking could be implemented with the help of big tech players:
Applications (e.g. Photoshop)
Operating systems (e.g. Android)
Low-level libraries (e.g. libjpeg)
It’s important to note that there are a wealth of considerations surrounding this idea. For example, one large concern would be the privacy of users on the platforms that contained device-level blocking.
If this, or something similar, is ever to work in the real-world we need to start the discussion now.
The benefits of device-level blocking
The biggest benefit of device-level blocking is that it could block individuals at the very beginning of the image sharing cycle. If the image can’t even be opened, the attempt is completely prevented, and no further action can be taken.
This helps safeguard victims from additional crimes like revictimization and gives them more security knowing people can’t see their image. It also could reduce the workload for investigators – diminishing the number of images in circulation and potentially the number of individuals taking part in abuse.
For first-time offenders in particular, the act of being blocked or receiving help could be enough to prevent them from ever entering into the world of crime.
The biggest challenge related to this idea is user privacy.
On one hand, the minimal version of implementation would only block images that are illegal to possess anywhere. Blocking such an image doesn't infringe an individual's rights; it simply prevents them from committing a crime.
However, the amount of control a technology has over your actions on a personal device – whether legal or illegal – is relatively subjective and would need to be carefully considered to ensure people’s privacy and rights remain intact.
Another factor to consider is financial incentive. For larger organisations, there is no tangible financial return, which could dissuade these players from getting on board.
That being said, it’s the right thing to do, and many tech organisations have already taken similar measures to combat CSE on their platforms, such as the Project Protect coalition which includes Facebook, Microsoft, Google and more. Similarly, it’s the first item in the Voluntary Principles which these organisations have endorsed.
Some organisations are already taking steps in a similar direction. For example, Thorn is working to eliminate child sexual abuse material from the internet and is developing a product designed to detect material that falls into this category.
But device-level blocking to combat CSAM online isn’t something that can or should be achieved alone – it requires the involvement and support of different organisations, from investigative technology developers like us to law enforcement to large software providers.
And this isn't just because their involvement is needed to create and implement the technology, but because we all have a social responsibility to do what we can, where we can.
Together, as a collaborative network, we can do so much more. We could address the challenges of device-level blocking to ensure greater safety and privacy, plus build a stronger capability which can truly make a dent in the proliferation of online CSE and CSAM, for the benefit of both investigators and victims.
To find out more about our R&D work or how we can work together, please get in touch.