
Can Artificial Intelligence Improve Law Enforcement?
by Shelly Kramer | January 17, 2017

There are myriad questions swirling around machine learning and artificial intelligence (AI), from how to establish ethics and pursue innovation safely to the monumental task of programming empathy into devices like chatbots. Underneath the collective uncertainty, though, lies a simple truth: AI is a powerful technology that helps humans become more efficient, and many forward-thinking industries are already on board. One such industry is law enforcement. And no, despite what sci-fi movies want you to think, RoboCop isn’t roaming the streets. Law enforcement agencies across the country are using AI in sophisticated ways to stop crime, find suspects, protect officers, and more. Let’s explore what this practice looks like today and what it could lead to tomorrow.

Five Applications for AI in Law Enforcement Today

A recent Stanford University study, Artificial Intelligence and Life in 2030, identified eight domains most apt for AI growth by 2030: service robots, healthcare, transportation, low-resource communities, education, employment and workplace, entertainment, and public safety and security. It’s the last in that list we’re focusing on today. As noted in the report, law enforcement agencies around the country are becoming increasingly interested in implementing AI solutions, and government agencies like the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA) are pursuing research and development in areas ranging from national security to local policing.

In short—this is happening.

Below are five of the most viable AI applications for law enforcement today:

  • Bringing in robots to detect and deactivate bombs. Using robotics isn’t necessarily a new trick for law enforcement. In fact, CNN Money reported that $55.2 million worth of military robots has been sold to law enforcement since 2010 through a government program. Thanks to AI, however, their functions are becoming increasingly sophisticated. Consider this: often, forces will send in robots to investigate explosive devices to determine whether a threat is valid. Soon, however, experts expect police robots will be able to deactivate bombs in addition to aiding in their detection. This capability seems more than plausible when you consider that Dallas police recently made history by using a robot to kill an active shooter.
  • Using drones for surveillance. Drones are used by the military and many law enforcement agencies for a variety of tasks. The drone in Figure 1, for example, is used by authorities in India to control crowds by spraying pepper spray and shooting paintballs. There’s more: it also has speakers so law enforcement can communicate with crowds or individual suspects in real time. And, of course, it has surveillance capabilities, including multiple cameras and a microphone. As fascinating as that is, it’s worth noting that experts expect AI-powered surveillance drones to help predict crimes before they occur, with tools such as facial recognition software (to identify those with criminal records) and machine learning software (to determine when to report suspicious activity). A simplified sketch of how the facial recognition piece might work appears after Figure 1.

Figure 1. Source: Wired
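
To make the facial recognition piece concrete, here is a minimal, purely hypothetical sketch of watchlist matching: a face captured by the drone is reduced to an embedding vector by some upstream recognition model (not shown), then compared to known embeddings by cosine similarity. The function names, vectors, and threshold are all invented for illustration; this is not any vendor’s or agency’s actual system.

```python
# Hypothetical illustration only: matching one face embedding against a
# watchlist of known embeddings. Real systems would get these vectors from
# a face-recognition model; here random vectors stand in for them.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(query: np.ndarray, watchlist: dict, threshold: float = 0.85):
    """Return (name, score) of the best match above threshold, else None."""
    best = None
    for name, ref in watchlist.items():
        score = cosine_similarity(query, ref)
        if score >= threshold and (best is None or score > best[1]):
            best = (name, score)
    return best

# Toy usage: a noisy re-capture of suspect_a should match; a stranger should not.
rng = np.random.default_rng(seed=0)
watchlist = {"suspect_a": rng.normal(size=128), "suspect_b": rng.normal(size=128)}
noisy_query = watchlist["suspect_a"] + rng.normal(scale=0.1, size=128)
print(match_watchlist(noisy_query, watchlist))        # high-similarity match
print(match_watchlist(rng.normal(size=128), watchlist))  # None: below threshold
```

In practice, a match like this would at most flag a frame for human review; the similarity threshold trades false alarms against misses and would need careful calibration.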

  • Scanning social media for illicit activity. Smartphone usage has soared in recent years, and that, coupled with social media, has revolutionized how both consumers and companies communicate. You can apply a social selling strategy to just about anything, even drugs, as it turns out. Sites like Instagram and even Tinder have been widely used for peddling paraphernalia, but, as The Daily Dot recently reported, police officers like those in New York State are cracking down using AI. In fact, the New York Attorney General’s Office co-authored research on a new pre-investigative method: a set of machine learning classifiers that flags hashtags and recent account activity indicative of drug dealing. Once a target is identified, the technology passes that information along to the physical enforcement team to investigate. Figure 2 shows the framework of the approach as proposed in the full report; a simplified sketch of this style of text pre-filtering follows the figure.

Figure 2. Source: Tracking Illicit Drug Dealing and Abuse on Instagram using Multimodal Analysis
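
The researchers’ actual pipeline is multimodal, combining image and text analysis. Purely to give a flavor of the text side of such a pre-filter, here is a hypothetical sketch using an off-the-shelf scikit-learn classifier; the captions, labels, hashtags, and threshold are all invented and far simpler than anything described in the paper.

```python
# Hypothetical, text-only pre-filter loosely inspired by the hashtag-based
# approach above. The captions, labels, and 0.5 threshold are invented; the
# real research pipeline is multimodal (images + text) and far more involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled captions: 1 = flag for human review, 0 = ignore.
posts = [
    "fresh kicks just dropped #sneakers #style",
    "hmu for party favors, dm me #420 #plug",
    "sunset run by the lake #fitness #nofilter",
    "got that good stuff, cash only #forsale #plug",
]
labels = [0, 1, 0, 1]

# TF-IDF features over the caption text, then a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new caption; anything above the threshold is queued for review.
new_post = "dm me for the plug #forsale"
score = model.predict_proba([new_post])[0][1]
print(f"score={score:.2f}, flag_for_review={score > 0.5}")
```

As with any pre-investigative tool, a score like this would only queue an account for human review, not trigger enforcement on its own.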

  • Scanning social media for individuals who might be radicalized. There’s another application for social media and AI that involves selling something dangerous; only this time, it’s something one believes, not ingests. The aforementioned Stanford report points out that some law enforcement agencies are using AI to monitor and analyze conversations on social platforms “to prevent those at risk from being radicalized by ISIS or other violent groups.” One such monitoring tool has been dubbed iAWACS, or internet AWACS. According to the Washington Post, that’s a nod to the acronym the military uses for its airborne intelligence and command stations. The purpose of iAWACS is to prevent violence before it happens by flagging online activity indicative of active shooter situations or scenarios at high risk for extremist involvement.
  • Creating interview ‘bots’ to detect lies from suspects. Researchers in the Netherlands have created an AI-powered chatbot interviewer named Brad, designed to detect deception via physiological cues and machine learning algorithms. While Brad has a long way to go before he becomes mainstream, the technology has the potential, as an initial screening tool, to make law enforcement interviews more efficient and even improve security at high-risk venues like airports or sporting events.

How Far is Too Far?

There’s certainly promise in AI applications for law enforcement, especially given the hypercharged climate today’s officers face. With that encouraging promise, though, comes a host of risks and responsibilities. Will law enforcement’s use of AI make us safer, or is it crossing a line?

Surveillance and crime prevention initiatives, both online and off, are areas ripe for AI’s intervention, but the potential for invading the privacy of private citizens, wrongfully targeting individuals for “suspicious” behavior, or otherwise abusing the power of AI, even unintentionally, is a real concern. To address these risks, progress must be made in a manner consistent with the underlying goal of these AI applications: to protect the safety, rights, and lives of both the public and the officers sworn to protect them.

Additional Resources on this Topic

The Government Explores Artificial Intelligence
How Artificial Intelligence could help warn us of another Dallas
Why Deep Learning (and AI) Will Change Everything
11 Police Robots Patrolling Around the World

Photo Credit: huppertzpowers Flickr via Compfight cc

About the Author

Shelly Kramer is a Principal Analyst and Founding Partner at Futurum Research. A serial entrepreneur with a technology-centric focus, she has worked alongside some of the world’s largest brands to embrace disruption and spur innovation, understand and address the realities of the connected customer, and help navigate the process of digital transformation. She brings 20 years’ experience as a brand strategist to her work at Futurum, and has deep experience helping global companies with marketing challenges, GTM strategies, messaging development, and driving strategy and digital transformation for B2B brands across multiple verticals. Shelly’s coverage areas include Collaboration/CX/SaaS, platforms, ESG, and Cybersecurity, as well as topics and trends related to the Future of Work, the transformation of the workplace, and how people and technology are driving that transformation. A transplanted New Yorker, she has learned to love life in the Midwest, and has firsthand experience that some of the most innovative minds and most successful companies in the world also happen to live in “flyover country.”