How Can Law Enforcement Leverage AI for Criminal Investigations?

15 January 2019

Ben Gancz, CEO and Founder of Qumodo


Unless you’ve been living under a rock, you’ve probably heard a lot about AI technology recently. Like most people, you’ve probably reacted with equal measures of interest and scepticism. You’re grateful when AI directs you around traffic, but you wonder whether we’re heading towards a dystopian future.

This new technology certainly presents many ethical challenges, but it also opens up many opportunities. I’m going to take a look at how Law Enforcement are currently using AI and how it might be employed in future to combat crime.

The Technology Arms Race

For the last 100 years or so, cops and criminals have been locked in a technology arms race. I remember in my early days in the police we carried the old Motorola MTS2000 radios. These things were the size of a brick, and the attached microphone inevitably ended up dragging along the floor behind you whenever you tried to chase anyone. Criminals were all communicating on highly reliable mobile phones and able to send text messages, while our VHF radios would flake out at the first opportunity.

Criminals have always been quick to adopt new technology, unencumbered by the administration, ethical considerations, law and regulation that slow down police adoption. So how is the advent of powerful AI technology likely to play out in this arms race?

Criminal Use of AI

For cyber-crime, it is hard to predict exactly what the impact of AI might be, but I can imagine chatbots being used to scam people at mass scale, software exploits that learn to overcome defences, and DeepFakes being used to blackmail people.

In the real world, criminals might try to take control of automated systems (e.g. causing an autonomous car to crash). For now, from an outsider’s perspective, these crimes seem to be mainly a future concern.

Augmenting Humans with AI in the Police

How about on the Police’s side? Well, there is a lot more activity in this area, and maybe Law Enforcement will have the rare advantage this time. Before we get into that, I’ll address the elephant in the room: I don’t think AI will replace any humans in Law Enforcement. Instead, I think it will make the resources we have more efficient and effective. Given recent public reactions to the use of facial recognition technology, I don’t think we’ll ever see a fully autonomous justice system (and I hope we never do).

Building Human-AI Teams

A few years ago, I was one of the early adopters of AI technology within the Police and Security sectors. Teams were starting to bring intelligent systems in and incorporate them into their existing forensic workflows.

We’re used to computers being tools: instruments that follow our instructions. Now, though, we’re working with computers that make their own decisions, and that sometimes get things wrong. This is a whole new paradigm and requires a totally new way of working, sometimes called human-machine teaming, humanising AI or human-AI interaction. As in human teams, everyone involved needs to understand each other’s intentions, morals, ethics, performance and abilities. We usually work these things out through complex social interaction, so how will we do this with AI?

Challenging Automation Bias

These questions led me to set up Qumodo. Our mission is to build human-centred AI and create human-machine teams that make the world a safer place. We work mainly in the public safety and community safety spaces, helping to protect children and victims, and to prevent crimes such as terrorism, by expanding law enforcement’s AI expertise. We’re investing heavily in psychology and AI research to bridge these gaps, and we have already made a number of important discoveries that inform our system designs.

People can’t help but humanise intelligent machines. If a machine appears to function really well, people begin to feel it is infallible and always correct, which can lead to blind or misplaced trust. If the machine has a flaw, that misplaced trust compounds its errors over time.

On the other hand, a single mistake by the machine can have a disproportionate impact on trust, leaving people inherently sceptical of all intelligent machines.

We need to build human-AI teams that enable the calibration of appropriate trust, so human users know when they should and shouldn’t listen to the machine. The result will be AI systems that relieve the burden on busy police forces and bring them one step closer to winning the technology arms race.
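To make “calibrated trust” a little more concrete, here is a minimal sketch of one way a system might support it: every machine decision carries a confidence score, and that score decides how much human oversight the decision receives. This is purely illustrative; the Finding type, the triage function and the threshold values are assumptions made for this example, not a description of Qumodo’s actual products.

    from dataclasses import dataclass

    # Illustrative thresholds; in a real deployment these would be
    # calibrated against the model's measured performance on casework.
    AUTO_ACCEPT = 0.95   # above this, queue the finding with high priority
    HUMAN_REVIEW = 0.60  # between the two, an analyst must confirm

    @dataclass
    class Finding:
        item_id: str
        label: str         # what the model thinks it has found
        confidence: float  # the model's own confidence score, 0.0-1.0

    def triage(finding: Finding) -> str:
        """Route a model finding so the analyst knows how far to trust it."""
        if finding.confidence >= AUTO_ACCEPT:
            return "priority-queue"    # strong signal, still human-verified
        if finding.confidence >= HUMAN_REVIEW:
            return "analyst-review"    # machine is unsure: show the evidence
        return "low-confidence-log"    # keep a record, but do not act on it

    # The interface always shows the confidence alongside the label,
    # so trust is calibrated per decision rather than assumed globally.
    print(triage(Finding("img_0412", "firearm", 0.97)))  # -> priority-queue
    print(triage(Finding("img_0413", "firearm", 0.71)))  # -> analyst-review

The key design choice in a sketch like this is that the machine never renders a verdict on its own: even its most confident findings are prioritised for a human rather than acted on automatically, and its uncertainty is always visible rather than hidden.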
