In November 2016, Amazon released a product called Rekognition. At its most basic level, Rekognition is image and video analysis software that lets users tailor the program’s capabilities to various business needs. Amazon described the program as able to “identify objects, people, text, scenes, and activities, as well as detect any inappropriate content,” a pretty impressive claim. Since its release, however, Rekognition has attracted a great deal of controversy around who its end users are and how they plan to use it.
To better understand why Rekognition has received so much attention and scrutiny, we must first look at its potential use cases. The program is designed as a tool for developers to do anything from facial recognition and facial comparison to finding and extracting text, to tracking people and objects through video. With capabilities like these, it is hard to think of an industry that wouldn’t have a use case for the program.
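To make the developer-facing side concrete, here is a minimal sketch of calling Rekognition’s label-detection endpoint through the AWS boto3 SDK. It assumes boto3 is installed and AWS credentials are configured; the image path and confidence values are illustrative, and the summarizing helper is a hypothetical convenience, not part of the API.

```python
def detect_labels(image_path, max_labels=10, min_confidence=75.0):
    """Send a local image to Rekognition and return the raw API response.

    Requires configured AWS credentials; image_path is illustrative.
    """
    import boto3  # imported lazily so the helper below works without the SDK

    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        return client.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=max_labels,
            MinConfidence=min_confidence,
        )


def summarize_labels(response, min_confidence=90.0):
    """Hypothetical helper: reduce a detect_labels response to
    (name, confidence) pairs at or above a cutoff, highest first."""
    labels = [
        (label["Name"], label["Confidence"])
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]
    return sorted(labels, key=lambda pair: pair[1], reverse=True)
```

A caller would typically feed `detect_labels(...)` straight into `summarize_labels(...)` and act only on high-confidence labels, since the raw response includes every label above the (much looser) request threshold.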
Currently, the biggest use case is major marketing firms using Rekognition to sift through millions of images in order to classify and identify client products. Dating apps and services use the tool to find fake profiles and to identify potentially offensive material before their users are subjected to either. There are countless ways Rekognition has been used in the digital world, and adoption by new companies has been prolific. This trend shows no signs of slowing down, as there is no shortage of new companies and industries that can put the software to their own specific purposes.
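The dating-app scenario maps onto Rekognition’s content-moderation endpoint. A sketch, again assuming boto3 and configured AWS credentials; the blocking policy and its threshold are hypothetical choices a service might make, not part of the API.

```python
def moderate_image(image_bytes, min_confidence=60.0):
    """Ask Rekognition for content-moderation labels on raw image bytes.

    Requires configured AWS credentials.
    """
    import boto3  # imported lazily so the helper below runs without the SDK

    client = boto3.client("rekognition")
    return client.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )


def should_block(response, threshold=80.0):
    """Hypothetical policy: block an image if any moderation label meets
    the threshold. Returns (blocked?, offending label names)."""
    names = [
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= threshold
    ]
    return (len(names) > 0, names)
```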
So why is there so much controversy surrounding this product if it can be so helpful to so many different industries?
At the end of the day, Rekognition is merely a tool; whether this powerful product is deployed for good or ill depends on the user. The debate lies primarily with the use and potential misuse of Rekognition’s facial recognition. Amazon has been quietly licensing the technology to government agencies, and a few local police departments have been running the service against their own image databases to more readily identify people with prior criminal records. With body cameras becoming more prevalent, some companies and law enforcement agencies have proposed building Rekognition into body cams to flag citizens as potential threats based on prior bookings. Critics have pointed to how little oversight these small departments operate under. While there is certainly a case to be made for the technology helping police and law enforcement, there is another case to be made about its potential for misuse. The most controversial example is Rekognition being pitched to ICE for immigration enforcement. Without proper oversight and proper implementation, the program can falsely make a positive ID on the wrong person, a failure that has already been documented in some cases.
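Much of that false-positive risk comes down to how the similarity threshold on Rekognition’s face-comparison endpoint is set. A sketch, assuming boto3 and AWS credentials; the decision helper and its stricter cutoff are illustrative assumptions, the point being that a loose threshold returns more matches and therefore more false ones.

```python
def compare_faces(source_bytes, target_bytes, similarity_threshold=80.0):
    """Compare the largest face in the source image against faces found in
    the target image. Requires configured AWS credentials.
    """
    import boto3  # imported lazily so the helper below runs without the SDK

    client = boto3.client("rekognition")
    return client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=similarity_threshold,
    )


def is_confident_match(response, required_similarity=99.0):
    """Hypothetical decision rule: accept the comparison as a match only if
    some returned face clears a far stricter bar than the request threshold."""
    return any(
        match["Similarity"] >= required_similarity
        for match in response.get("FaceMatches", [])
    )
```

The gap between the request threshold and the decision rule is the whole argument: an agency that treats every returned `FaceMatch` at 80% similarity as an identification will misidentify people that a stricter rule would have rejected.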
With Rekognition being so powerful yet so inexpensive, many Amazon shareholders and employees have felt the need to raise concerns. Some have gone as far as calling for Amazon to stop selling the product entirely until proper oversight can be ensured. Civil rights organizations have also spoken out, warning that this type of technology opens the door to a police state and round-the-clock surveillance.
While Amazon has acknowledged its employees’ and shareholders’ concerns, it has also announced that it will continue to sell the Rekognition service with no changes to who its end users may be.
Whether the technology will bring about a dystopian future is far from straightforward. What is certain, however, is that these kinds of technologies need far more oversight, and perhaps even regulation. Regulation can certainly stifle innovation, but allowing these powerful AI programs to go unchecked may be far worse.