August 22, 2022
Dr. Stephanie Hare recounts the 2021 United States Capitol attack incited by former President Donald J. Trump. After the attack, several social media platforms, including Twitter, YouTube, Facebook, Instagram, Twitch, and Snapchat, removed Trump from their services. His de-platforming, however, has called into question the autonomy of social media platforms and their relationship to the United States Constitution’s First Amendment [Freedom of Speech]. Additionally, Congress has been introducing bills and policies to curb social media platforms’ power and prevent such an attack from recurring. However, it is going to be a hard-fought battle.
Dr. Hare invokes Karen Spärck Jones, a computer scientist and professor at Cambridge University, who was perceptive about the effects of poor oversight of technology and believed that technology should be democratized. Although not everyone is a technologist, everyone uses technology, and it is essential that all people be attentive to the ramifications of the decisions that technologists make on behalf of the typical, everyday citizen.
Physically, we consist of atoms. Biologically, we are created by the union of an egg and a sperm cell. Socially, we are a product of choices and environments. Technologically, we are the users of technology. We are in the data epoch, and people must keep in mind that our unique information may be used against us if we do not become proactive participants in lawmaking, especially in A.I. regulation.
Dr. Stephanie Hare writes about her own journey of becoming a participant in A.I. ethics, and she highlights the various roles that A.I. ethicists may embody, such as researchers, data security officers, and algorithmic reporters. Each of these positions is integral to scrutinizing how A.I. is used.
The book addresses what ethics is and how it can be used to maximize benefits and minimize harms in technology.
Dr. Stephanie Hare poses the question of whether technology is neutral, and she elucidates its positive and negative aspects, such as its liberating ability [positive] and its surveillance power [negative] over our lives. Although technologists may create with good intentions, those intentions do not excuse the harmful consequences that can follow when technology goes awry.
Here, Dr. Stephanie Hare composes scenarios that explore the debate on the neutrality of technology.
Team ‘Technology is Neutral’
Prof. Daniela Rus (Director of MIT’s Computer Science and Artificial Intelligence Laboratory), Paul Daugherty (CTO at Accenture), Garry Kasparov (chess grandmaster and author), Werner Vogels (CTO of Amazon), Demis Hassabis (co-founder of DeepMind), Adam Mosseri (Head of Instagram), and Marc Andreessen (billionaire venture capitalist) believe that technology is neutral. As a result, they view technology as a tool that must be wielded with caution and consideration to produce positive outcomes.
Team ‘Technology is Not Neutral’
Sir Tim Berners-Lee (creator of the World Wide Web), Prof. Kate Crawford, Caroline Criado Perez (author of Invisible Women: Exposing Data Bias in a World Designed for Men), Prof. Sheila Jasanoff (Professor of Science and Technology Studies at Harvard University), and Erin Young (Research Fellow at the Alan Turing Institute) believe that technology is not neutral and that bias is embedded within the system and design of the technology.
Dr. Stephanie Hare juxtaposes created tools and found tools; a bone is a found tool, while a bomb is a created tool.
The Bone [a found tool]
Dr. Stephanie Hare remarks on the film 2001: A Space Odyssey. She illustrates the scene in which a hominin finds an animal skeleton and shares the discovery with the other hominins, who then use the bones to defend themselves from invaders. The bone signifies the found tool: it is not created, it is not modified, and it is neutral, since it can be used to hurt or to heal.
The Bomb [a created tool]
The author expounds upon the creation of the Manhattan Project’s atomic bomb, which was built for the specific purpose of destruction; the tool is neutral neither in conception nor in design. Although the atoms that undergo fission are themselves neutral, they are subject to human will, which may harbor ill intentions, such as building an atomic bomb. Some scientists merely took part in discovering the properties of atoms that make such explosions possible, so Dr. Stephanie Hare asks, “are the scientists who discovered fission just as responsible as those who intentionally created the bomb?”
Tools
The author describes different types of tools: material tools [a hammer], immaterial tools [software], and conceptual tools [a map]; these tools can shape our thoughts and, thus, the environment in which we live. The Naturgemälde, Alexander von Humboldt’s depiction of the Ecuadorian volcano Chimborazo, is an example of a conceptual tool; the map gave birth to innovative ways of studying the topography of volcanoes and other landforms.
Technology
The author introduces Professors Mark Coeckelbergh and Ursula M. Franklin, who describe technology as both a tool and a system used to create and to control. For example, food, transportation, and production each have their respective systemic technologies that affect our lives.
The Grey Space
In a 1999 interview with Jeremy Paxman, David Bowie spoke about the advent of the internet; for Bowie, the grey space was the blank canvas on which the internet would take on its character, whether good or bad. In Dr. Hare’s framing, the grey space describes whether a tool or technology has limited or far-reaching implications, and these implications shape how we use tools and technologies. For example, the fork, as Dr. Hare explains, affects a limited aspect of our lives: eating. The wheel, however, sits deeper in the grey space because of its far-reaching uses and implications; we use the wheel on bikes, cars, planes, and many other technologies.
The grey space helps us think about technology and its ethics. One way to gauge how far a tool reaches into the grey space is to imagine life without it: the more of our existence that would change, the greater its ethical weight.
Below, Dr. Hare gives examples of what would happen if these tools were removed.
Remove the fork and we would miss a useful tool when preparing and eating food, but most other aspects of our existence would be unaffected.
Remove the wheel and many more aspects of our lives would be affected, both directly and indirectly. Modern life as we know it would cease to exist. We would live hyperlocal lives; many processes would take longer; we would have a different economy because of the impact on trade; and other tools and technologies that depend on the wheel would not exist, or at least not in their current form.
Dr. Stephanie Hare also discusses tools used by other species, such as the hermit crab, which carries the discarded shells of other sea creatures. Since it is programmed to do so, it is not necessarily responsible for its actions. However, other animals, like chimpanzees, make tools because of their intelligence, and they are more plausibly responsible for their actions. Still, the definition of intelligence continues to evolve, as Dr. Hare shows by consulting experts in the field such as Max Tegmark [a professor of physics at MIT], Stuart Russell [a professor of computer science at UC Berkeley], and others.
Plants
Dr. Hare explains that plants have sentience [the ability to feel and sense their environment], and they display feats of intelligence by surviving storms, droughts, and fires. However, is it the plant’s intelligence, or is it the plant’s instinct [DNA-coded behavior]?
Non-Human Animals
The author details instances of animals displaying intelligence, such as their ability to create tools. However, are they responsible for their decisions? Dr. Hare poses the question of whether animals are conscious [aware of their own being] with the aid of the philosopher René Descartes.
Machines
Dr. Hare illustrates scenarios in which machines may be considered sentient and conscious. In the U.K., an A.I. system cannot be considered an inventor because A.I. is not a “natural person”; however, Australia and South Africa have recognized an A.I. system as an inventor on a patent. A.I. is also sub-categorized into strong A.I. and weak A.I.: strong A.I. would resemble the Terminator, while weak A.I. resembles Alexa, and only weak A.I. exists at the moment. Though deep learning [neural networks] may aid in creating strong A.I., determining who is responsible for A.I. decision-making is more challenging still.
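To make the weak/strong distinction concrete, here is a minimal, hypothetical Python sketch of what “weak” [narrow] A.I. means in practice: a keyword-based intent matcher of the sort that sits behind a simple voice assistant. The intents and phrases are invented for illustration, not drawn from the book; the point is that such a system handles only the single task it was built for, whereas strong A.I. would generalize far beyond any one hand-built task.

# A toy illustration of "weak" (narrow) A.I.: a single-purpose intent matcher.
# It handles only the task it was built for; everything else is opaque to it.
# The intents and keywords below are invented for illustration.
INTENTS = {
    "weather": ["weather", "rain", "forecast"],
    "timer": ["timer", "alarm", "remind"],
}

def classify(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return "unknown"  # Anything outside its narrow domain is unintelligible to it.

print(classify("Will it rain tomorrow?"))     # -> weather
print(classify("Set a timer for the pasta"))  # -> timer
print(classify("Who is responsible here?"))   # -> unknown

However narrow, even this sketch raises Dr. Hare’s question: if it misfires, responsibility lies with whoever chose the keywords, not with the program itself.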
Dr. Hare closes the chapter with a recap of all the aforementioned points, and she leaves us with the question, “who is truly responsible for A.I.?”
Dr. Stephanie Hare elucidates the possibility of brain implants and their implications for access to our data; however, Eric Schmidt, former CEO of Google, remarks that companies already have a way into one’s data: one’s smartphone. Companies can track your searches, time online, preferences, and other things about you without needing a brain implant. Some neuroscientists believe that such pervasive access to people’s data is intrusive. Others have even demanded that “neuro-rights” [rights to one’s mind/brain] be added to the Universal Declaration of Human Rights, because these new technologies can affect our very humanness.
Gen. Michael Hayden, former director of the CIA, has stated that if a line is drawn, it had better be a box, because he would play to the edges of that box; this shows that if no line is drawn on the use of technology, government agencies will exploit the ambiguity. Additionally, the author receives input from the designer Maria Giudice, who echoes the notion that there should be a hard line that must not be crossed.
After WW2, protocols were established to enforce medical ethics in response to the infamous experiments that Nazi doctors performed on Jewish prisoners. However, the U.S. government instituted Operation Paperclip, a strategy to recruit more than a thousand German scientists, engineers, and technicians [many of them former Nazis] to help build military technology to rival the Soviet Union’s. The question the author and other historians pose is, “do we want technology at any price, even at the expense of our moral principles?” Among the dissenters from the USA’s decision were Albert Einstein and Hans Bethe, who did not believe that the U.S. was living up to its moral code.
Dr. Hare elucidates how we can draw the line in ethical technology. We have the scientific method, which grew out of philosophy, and philosophy contains many interrelated subdomains, such as metaphysics, epistemology, political philosophy, logic, aesthetics, and ethics. Each domain has a place in our technology and poses critical questions when applied to it.
Metaphysics asks, “What is reality?” and “What is the problem we are trying to solve?”
Epistemology asks, “What does it mean to know?” and “How will we know when we have solved the problem?”
Political philosophy asks, “What is the nature of power and legitimacy?” and “How does not solving the problem affect power dynamics?”
Logic asks, “How do we know what we know?” and “How can we test whether our tools and technologies work as intended and match reality?”
Aesthetics asks, “What is experience?” and “How can we create a tool or technology that scales yet also protects the vulnerable and ensures accessibility and inclusivity?”
Lastly, ethics asks, “How should we live?” and “How can we ensure that our tools and technologies maximize benefit and minimize harm?”
These questions are left open for the reader to contemplate in reaching their own conclusions about where to draw the line for ethical tech.
Dr. Stephanie Hare details who draws the line, and she turns to China because of its influence in building technology. China plays an outsized role in shaping technology ethics, since it has enormous influence over software and hardware creation. Dr. Hare states that China is investing money and political capital, exporting its technology and its technology ethics through the Belt and Road Initiative, and, sadly, misusing A.I. to persecute the Uyghur Muslims in the Xinjiang Uyghur Autonomous Region. It is therefore important to note that those who are well represented and influential in technology are the ones who shape its ethics.
The chapter ends with the author inviting the reader to think about who draws the line in technology and how we, as an audience, should consider others in technology. All people must be included in the conversation to shape an ethics that is representative of all cultures and peoples.
Devin is a one-of-a-kind person. He is passionate about his dream to save the world, one algorithm at a time. He is currently pursuing a Master’s in Machine Learning and A.I. at CSU-Global, with the aspiration of becoming a machine learning engineer. Ethical A.I. is his passion, and he does whatever it takes to ensure that algorithms and technology are representative of their constituents.