September 7, 2022
Dr. Hare describes the controversy over a national ID card containing biometric data in the UK. Boris Johnson, the UK Prime Minister, denounced the national ID card for several reasons, including preventing the UK from becoming a police state and mitigating the risk of profiling. However, biometric technologies such as facial recognition are becoming prevalent, and facial recognition may cause more harm than good. Dame Cressida Dick, London’s Metropolitan Police commissioner, echoes the sentiment that facial recognition software may create a police state in which UK citizens are surveilled incessantly. The Met, a UK police force, has eagerly adopted facial-recognition technology, ignoring warnings against it from the House of Commons Science and Technology Committee, the UK Equality and Human Rights Commission, and a High Court ruling. Despite all these warnings, the Met presses ahead, putting UK citizens’ privacy and autonomy at risk.
Furthermore, the author elucidates how facial-recognition technology intersects with the six branches of philosophy: metaphysics, epistemology, political philosophy, logic, aesthetics, and ethics.
Dr. Hare explains that facial recognition is a biometric technology that transfers our physical characteristics to the virtual world; biometric markers include DNA, voices, fingerprints, eyes, gut flora, and many other features. There have been instances where law enforcement successfully cracked down on crime with biometric data: in 2012, for example, the German police solved nearly 100 burglaries based on the burglars’ earprints. However, our biometric data is unique and permanent, so it cannot be changed; if someone hacks a person’s biometric data, they can ruin that person’s life. Additionally, there have been instances in which people’s biometric data was gathered without consent, such as the Home Office incident, in which Home Office employees coerced 449 people into giving DNA samples in order to remain in the UK.
Facial-recognition technology has been so widely accepted and adopted that it may be too late to reverse its spread. However, there are ways in which it can be implemented ethically.
Dr. Hare provides the history of facial recognition, starting in 1827 with Joseph Nicéphore Niépce, who took the earliest surviving photograph, a view of the rooftops of his family estate. Then, in 1839, Alphonse Giroux built the first commercially available photographic camera, and Robert Cornelius, an American, took the first selfie. At first, facial images were used simply to record someone’s existence, letting the world know that they existed. In 1881, however, Alphonse Bertillon, a policeman, sought ways to identify a suspect by their features, creating the Bertillonage system. The Parisian police used the system to generate driving licenses, military IDs, and permits. The Bertillon system did miss something, though: fingerprinting, which the UK introduced into its identification system. Today, both the Bertillon system and fingerprinting are used in passports for accurate identification.
Unfortunately, facial recognition technology has also been posited to predict who will commit a crime, with the supposed criminal determined by their physiological profile. One company in particular, Clearview AI, took images of people from the internet without their consent and used them to help police find criminals; it has since been fined by regulators. Clearview AI is not the only company whose technology can surveil and control the population.
Facial recognition technology has its good and bad sides, but we must actively find ways to mitigate the harms.
The author shows a table of the types of facial recognition, how they are used, and their risks of harm.
1:1 face matching compares a face against a single stored facial image; this facial image is stored on a phone or in a database.
To unlock our smartphone
A smartphone captures the raw facial biometric, maps the face with points, and creates a mathematical representation of the face.
Risk of Harm: Low
To access government services
For example, India has the ‘world’s largest biometric project,’ a multimodal biometric system called Aadhaar, covering more than 1 billion Indian citizens. Aadhaar solves many real-world problems because many Indians previously had no official identification.
Risk of Harm: High
To pay for things
Facial recognition technology can be used to pay for things, like food from Daddy’s Chicken Shack in southern California. In China, however, face scans are an everyday transaction, used for hotel check-in, boarding flights and trains, banking, and hospital visits.
Risk of Harm: Medium-High (for children)
To monitor workers
Uber Eats couriers told WIRED magazine that they were wrongfully suspended or fired because they failed the company’s Real-Time ID Check. Real-Time ID Check uses 1:1 facial recognition technology to ensure that drivers are not allowing people who have not passed a background check to operate an Uber vehicle.
Risk of Harm: High
To enter a building
The Nelson Management Group, owner of the Atlantic Plaza Towers in Brooklyn, proposed using facial recognition technology to replace key fobs; the proposal raised many concerns, however, such as the profiling of residents.
Risk of Harm: High
To access humanitarian assistance
The United Nations Refugee Agency uses face and fingerprint biometrics of refugees.
Risk of Harm: High
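The 1:1 matching described in the entries above can be sketched as a distance check between face embeddings. This is a minimal illustration, not any vendor’s actual algorithm: the 3-dimensional embeddings and the 0.6 threshold are invented for the example (real systems use embeddings of 128 dimensions or more and carefully tuned thresholds).

```python
import math

def same_person(enrolled, probe, threshold=0.6):
    """1:1 match: compare a probe embedding against the single enrolled
    embedding stored on the device; accept if they are close enough."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(enrolled, probe)))
    return dist < threshold

enrolled = [0.12, 0.80, 0.33]  # toy 3-dim embedding of the enrolled face
attempt = [0.11, 0.79, 0.35]   # embedding of the face presented to unlock
print(same_person(enrolled, attempt))  # True: the faces are close enough
```

Matching happens against the stored numeric representation rather than the raw photo, which echoes the book’s point: unlike a password, the underlying biometric cannot be changed if it is compromised.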
1:many matching compares images of our faces against those stored in a database or the cloud.
To tag someone on social media
Social media platforms, like Facebook, have a feature that tags a person in a photo. People now have more control over who can tag them and in which pictures.
Risk of Harm: Medium
To identify a person in a crowd
Facial recognition is used to find a person in a group. Russian authorities have been using facial recognition technology to identify and detain people who protested for the release of Alexey Navalny.
Risk of Harm: High
As a tool of foreign policy
75 of 176 countries are using AI technologies for surveillance, including smart cities, facial recognition systems, and smart policing. China uses its Belt and Road Initiative, which involves infrastructure projects and loans, as a foreign policy tool.
Risk of Harm: High
As a tool with which to wage war
The military can use facial recognition technology to wage war. NATO has used facial recognition technology in Iraq and Afghanistan since the September 11th attacks.
Risk of Harm: High
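The 1:many matching described above can be sketched as a nearest-neighbour search over a database of embeddings. Again, this is an illustrative sketch with invented names, embeddings, and threshold, not a production system, which would index millions of faces.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """1:many match: scan every enrolled embedding and return the
    closest identity if it is within the threshold, else None."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = euclidean(probe, embedding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

database = {"alice": [0.1, 0.8, 0.3], "bob": [0.9, 0.2, 0.5]}
print(identify([0.12, 0.79, 0.31], database))  # alice
```

The difference in risk between the two modes falls out of the code: a 1:1 check answers “is this the enrolled person?”, while a 1:many search asks “who, out of everyone enrolled, is this?”, which is what makes identifying a face in a crowd possible.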
Dual-use 1:1 and 1:many systems use both 1:1 and 1:many technologies.
To control borders and immigration
After the September 11th attacks, the US Congress permitted the gathering of biometrics only from non-citizens. US Customs has lifetime storage [75 years] of facial images from non-US citizens.
Risk of Harm: Low
Facial analysis: physical uses images of faces to determine a person’s physical health and characteristics.
To apply online for a passport
In October 2020, the BBC investigated the Home Office’s system and found that people with darker skin tones were more likely to have their photo submissions rejected.
Risk of Harm: High
To analyze our physical health
Facial analysis can help monitor blood pressure, cardiovascular disease, dementia, and type 2 diabetes.
Risk of Harm: Medium
Facial analysis: classification uses images of faces to determine gender, sexual orientation, and other intangible characteristics.
By ethnicity and race
The Chinese government has the technology to identify someone as an Uyghur, an ethnic group that the Chinese government has persecuted for years.
Risk of Harm: High
By sexual orientation or political affiliation
In 2017, Stanford University researchers received permission to find out whether they could use AI to identify people’s sexual orientation based on their faces.
Risk of Harm: High
By emotional state
Emotion recognition technology, also known as affective technology, is very controversial. Researchers have called for it to be banned because it rests on questionable assumptions.
Risk of Harm: High
The author focuses on the power dynamics of technology in two countries, the United States and the United Kingdom. In the United States, facial recognition technology is biased. AI researcher Joy Buolamwini found that facial recognition technologies from IBM, Amazon, and Microsoft failed to recognize darker-skinned people, particularly women, with an error rate of up to 35%, as opposed to an error rate of about 1% for lighter-skinned people. Although these matters have been brought to court, passing laws to prevent racially biased algorithms and technologies is very difficult. The UK has had similar issues with facial recognition technology, and some police forces, like the Met, have disregarded rulings against their use of it.
Dr. Hare details the many instances of facial recognition technology in our daily lives: in the UK, stores, law enforcement, museums, conference centers, and casinos are implementing facial recognition technology, biases and all.
Dr. Stephanie Hare poses questions on whether facial recognition is ethical.
1. For whom is it good or bad?
2. Under what conditions is it good or bad?
3. What unintended consequences might there be in using it?
Dr. Hare states there is hope for implementing ethics in facial recognition technology. First, we would need to pay attention to the European Union, which has set standards for regulating facial recognition technology; account for the growing use of facial recognition technology in the US; and understand China’s use of facial recognition technology on the Uyghur Muslims so as not to replicate it.
Dr. Hare details SARS-CoV-2, especially its mechanism of transmission. SARS-CoV-2 is a respiratory virus that replicates and mutates rapidly as it spreads; as of December 15, 2021, the WHO had recorded 5,318,216 deaths worldwide, 146,627 of them in the UK.
There were disease-control methods such as hand-washing, wearing face masks, social distancing, and contact tracing. However, there was also a technology for certifying a person’s immunity status: the immunity passport. After considerable consultation, the United Kingdom wanted to introduce immunity passports but decided to abandon the project in December 2021.
Immunity passports were a way in which governments planned to reopen their countries: those who recovered from the virus would receive one. However, a few concerns stopped their adoption, such as perverse incentives: some people might have infected themselves intentionally to regain their freedom to travel. Additionally, employers might have mandated that employees hold immunity passports, and the police might have stopped civilians in the street to check them. Although the UK dismissed immunity passports, they ushered in other technologies, such as QR code scanning, exposure notification apps, and vaccine passports.
The author begins with a quotation Professor James Larus gave to the New York Times: “it turns out that it is very challenging to get people to use a Covid app. We went into it thinking that of course, people would want to use this, and we were very surprised.” He was one of the computer scientists who worked with Apple and Google on an Exposure Notification Application Programming Interface (API). This technology uses Bluetooth to record the digital IDs of nearby phones: when two smartphones are in proximity, each records an identifier indicating that its user may have been in contact with the other, so that if one user later tests positive, the other can be notified.
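That exchange can be sketched in a few lines. This is a simplification of the real API, which rotates identifiers every few minutes and matches them cryptographically on the device; the variable names below are invented for illustration.

```python
import secrets

def new_rolling_id():
    # Each phone periodically broadcasts a short-lived random identifier
    # over Bluetooth -- never a name, phone number, or location.
    return secrets.token_hex(8)

# Phone B broadcasts an ID; phone A, nearby, records what it hears.
b_id = new_rolling_id()
heard_by_a = {b_id}

# If B later tests positive, B's recent IDs are published; A checks
# locally whether any published ID matches one it heard.
published_positive_ids = {b_id}
exposed = bool(heard_by_a & published_positive_ids)
print(exposed)  # True: A was near someone who later tested positive
```

The design keeps the match on the user’s own phone, which is how the system can send notifications without centrally tracking who met whom.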
Dr. Hare discusses the advent of QR technology, its utilizations, and its apparent flaws.
The NHS [National Health Service] created QR code check-in in 2020 to alert those exposed to the COVID-19 virus. Masahiro Hara, the chief engineer at Denso Wave, invented QR code technology in 1994; he got the idea from playing the board game Go, which originated in China.
QR codes are more effective than standard barcodes because they embed ‘200 times more information’ and can be read 20 times faster. For COVID-19, a cellphone user would scan the QR code at a venue within the exposure application. If someone tested positive, those who had scanned in at the same venue would receive a notification that they had been in the vicinity of someone infected.
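The venue-alert logic described above can be sketched as a lookup over timestamped check-ins. The venue ID, user names, and two-hour window below are invented for illustration; the NHS app’s actual matching rules were more involved and ran on the device.

```python
from datetime import datetime, timedelta

# Scanning a venue's QR poster logs a (venue, time) check-in on the phone.
check_ins = {
    "alice": [("cafe-42", datetime(2021, 3, 1, 12, 0))],
    "bob": [("cafe-42", datetime(2021, 3, 1, 12, 30))],
}

def users_to_alert(venue, case_time, window_hours=2):
    """Find everyone checked in at the venue around the reported case."""
    window = timedelta(hours=window_hours)
    return {
        user
        for user, visits in check_ins.items()
        for v, t in visits
        if v == venue and abs(t - case_time) <= window
    }

print(sorted(users_to_alert("cafe-42", datetime(2021, 3, 1, 12, 15))))
# ['alice', 'bob']
```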
The NHS barely used QR code scanning, despite requiring businesses to display QR codes for customers: only 284 alerts were sent, covering 276 venues, despite 100 million total venue check-ins through the app. Additionally, public health teams were instructed to contact individuals directly rather than through human contact tracers, which was a breach of data protection law. By late 2021, QR codes had become obsolete because the government no longer required businesses to use check-in.
Dr. Hare explains the use of vaccine passports, and she highlights the example of how the NHS uses them. The UK would pilot several vaccine passport programs, such as one implementing facial recognition technology and another implementing fingerprint vein technology.
The vaccine passport would be added to the NHS application, where people could also book appointments and see their medical records. However, the vaccine passport was ultimately rejected because it would create an unequal balance of power. People opposed vaccine passports because some might infect themselves on purpose, hackers might break into the application, and the passports might discriminate against less tech-savvy people.
Many British citizens rejected the vaccine passport because UK citizens do not carry a national ID; the vaccine passport would be a subtle way of giving them one.
Dr. Hare explains that digital health tools, although they had their flaws, were useful in mitigating COVID-19 infections.
Dr. Stephanie Hare closes the book with universities that are implementing technology ethics in their curricula, and she urges the audience to push past diagnosing the problem of technology ethics toward real, sustainable solutions. Additionally, she enumerates ways in which ethics can be implemented, from taxing big tech companies to creating legal protections and financial incentives for whistleblowing.
I appreciate the scholarship of her work; Dr. Hare has deepened my knowledge of technology ethics. As a people, we have a right and a responsibility to change our lives for the better, and we must continue to be vocal about our concerns and ideas if institutions are to make beneficial changes in how they implement technology.
Devin is a one-of-a-kind person. He is passionate about his dream to save the world, one algorithm at a time. He is currently pursuing a Master’s in Machine Learning and A.I. at CSU-Global, with aspirations of becoming a machine learning engineer. Ethical A.I. is his passion, and he does whatever it takes to ensure that algorithms and technology are representative of their constituents.