
Reality check: your paranoia about technology companies partnering with government agencies is (mostly) unfounded.
by Olivier Blanchard | September 7, 2018

People sometimes don’t know how to feel about tech companies working with law enforcement agencies and the military to improve facial recognition, surveillance, AI, and other legitimate operational capabilities. The default, for many, is distrust. For some, it is immediate outrage. Many of us have been conditioned to look for invasions of privacy, abuses of power, and nefarious plots around every corner, and not without cause: Companies like Google, Facebook, Apple, and Amazon do seem to be collecting more data on users than we feel comfortable with, and often without our knowledge. CCTV cameras and “the surveillance state” aren’t abstractions. We do live in a world in which privacy has become a very relative term, and increasingly a rare privilege rather than a right. In a culture in which trust is in short supply, it isn’t surprising that people have become suspicious about every little bit of news involving “the authorities,” surveillance technology, and the end of privacy and anonymity. There is a difference between being vigilant and being paranoid, however, so I thought I should address that today.

I came across this piece in The Intercept about IBM and the NYPD’s collaboration on facial recognition software that better identifies different gradations of skin tone. The piece is an interesting read and makes a number of valid points, but it also leans toward paranoia rather than objective analysis. “What if this technology could be used improperly?” it asks, over and over again. In how many ways could it be used to target minorities, reinforce racial profiling, invade people’s privacy, and so on? While these questions are valid and should be asked, here is the reality of this type of technology: It isn’t designed to be an instrument of discrimination. It is designed to improve the speed and accuracy with which law enforcement agencies can locate a dangerous suspect in a crowd. It’s a tool. And yet, people’s reaction to that piece is, predictably, somewhere between panic and outrage. “Why is IBM doing this?” “Why is the NYPD being so secretive about it?” “Why wasn’t there more transparency about this project?”

IBM is doing this because it can, and because it should. Technology that performs better is good. Technology that performs poorly is bad. IBM is interested in this because it’s an engineering problem-solving challenge, because it will help save lives, and because it has other applications besides the one pursued by law enforcement agencies. Teaching smart cameras in digital home security systems to better identify people will improve their ability to recognize authorized and unauthorized visitors. Teaching smart cameras on our smart devices to better identify faces will help with security and collaboration. There’s more, but you get the idea.

The NYPD is doing this because it wants to be able to identify suspects in a crowd faster, and with fewer mistakes. If you are working counter-terrorism, for instance, and need to quickly identify a dangerous suspect among crowds of thousands, whether on subway platforms, busy streets, and crowded squares or in museums, theaters, and stadiums, you need software that can narrow your search from thousands of possible faces to under a dozen. That’s what this type of software aims to do.
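To make that narrowing step concrete, here is a minimal, hypothetical Python sketch of attribute-based candidate filtering. Everything in it (the Detection record, the narrow_candidates function, the attribute names, the tolerances) is invented for illustration; it is not IBM’s or the NYPD’s actual pipeline, just the general shape of a tool that trims thousands of faces down to a short list for human review.

```python
# Hypothetical sketch: filter a crowd of detections down to the few that
# match a suspect description. All field names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    skin_tone_band: int    # coarse estimated tone band, e.g. 1-6
    est_height_cm: float   # height estimated from camera geometry
    has_facial_hair: bool

def narrow_candidates(detections, description, height_tolerance_cm=8.0):
    """Return only the detections consistent with the suspect description,
    so officers review a short list instead of an entire crowd."""
    matches = []
    for d in detections:
        if d.skin_tone_band != description["skin_tone_band"]:
            continue
        if abs(d.est_height_cm - description["est_height_cm"]) > height_tolerance_cm:
            continue
        if d.has_facial_hair != description["has_facial_hair"]:
            continue
        matches.append(d)
    return matches

# Three detections stand in for thousands of frames' worth.
crowd = [
    Detection("subway-12", 4, 182.0, True),
    Detection("subway-12", 2, 168.0, False),
    Detection("plaza-03", 4, 180.5, True),
]
suspect = {"skin_tone_band": 4, "est_height_cm": 181.0, "has_facial_hair": True}
print(narrow_candidates(crowd, suspect))  # -> the two plausible matches
```

The point of the sketch is the funnel: the software does the coarse filtering on observable characteristics, and humans make the actual identification from the short list.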

Why wasn’t there more transparency about this project? Because the stakes are high, and the last thing we want is to let criminals and terrorists know what types of technologies and capabilities are being developed and deployed to identify, track, stop, and apprehend them. You don’t tell the enemy what your secret weapons are. It isn’t rocket science.

More to the point: Skin color is an identifiable characteristic, like height, weight, gender, eye color, tattoos, scars, gait, and so on. Allowing police to properly identify suspects in a crowd by skin color in addition to other identifying characteristics isn’t “overstepping.” It isn’t racist. Merely observing and identifying characteristics isn’t racist.

IBM’s role in developing filters that do a better job of identifying features in individuals isn’t problematic. Law enforcement agents can already see skin tone. Allowing cameras and facial recognition software to be able to do the same thing isn’t scary or creepy or dangerous. It doesn’t change laws. It doesn’t rob anyone of their rights. It is no different from designing cameras with better resolution and clearer low-light performance.

What we can and should be vigilant about is how law enforcement officers and agencies *might* target certain groups based on race, religion, economic status, and/or gender, but that isn’t a technology discussion. It is a behavior and policy discussion. In other words, our attention should be focused on policy, oversight, and accountability, NOT on whether software and smart cameras can gauge skin tone. If you really want to prevent racism and discrimination by the authorities, pay closer attention to local, state, and federal elections than to how good cameras and software are getting. Hold public officials accountable if they misuse this tool or any other tool. Absolutely. But don’t shake your finger at a new technology just because someone might decide to use it for the opposite of what it is meant for. There is nothing nefarious going on here, especially with regard to IBM’s project with the NYPD. There is no evil secret conspiracy at play.

One last thing: Bear in mind that when the suspect happens to be white, as most mass shooters (and, at least in the US, domestic terrorists) tend to be, this system will help filter out darker-skinned individuals in a crowd and allow law enforcement agencies to more quickly find the suspect(s) they are hunting for. This technology, like almost all technologies, is bias-agnostic. It works both ways.

So next time you read about a technology company like Amazon, Google, IBM or whomever else working with the police or the military or any government agency on some kind of facial recognition, voice recognition, or pattern recognition software, don’t panic. Don’t assume the worst. Don’t jump to conclusions. Maybe there is something to be concerned about, but most of the time, there probably isn’t. Take a deep breath, think about why the technology in question might be helpful, and take your time looking into it. And if the specters of discrimination and tyranny worry you, as well they should, vote. Making sure that good people are in positions of authority is even more important than making sure that the technology we put in their hands is the best it can be. The lesson here is that the Who ultimately informs the How. The What is just a medium.

In short, vigilance and awareness are good. Paranoia and panic aren’t.

Cheers,

Olivier

About the Author

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.