April 5, 2019
Artificial intelligence has long been heralded as a potential remedy for the discrimination faced by groups that have been historically marginalized in society. In fact, Andrew McAfee, a principal research scientist at MIT, is known for saying, “If you want bias out, get the algorithms in.” Unfortunately, ending bias has not been so simple.
In recent years, AI systems that were intended to increase fairness have been increasingly criticized and pulled from the market for racism, sexism, and general vulgarity. There was Amazon’s hiring algorithm, which discriminated against resumes containing the word “women’s”; the face-analyzing algorithm that estimates a person’s likelihood of being charged with future crimes and routinely rated that risk higher for black and brown people; and many more examples of biased AI. These algorithmic injustices seem to contradict the initial assumption that AI could help solve racism and sexism. But how? Shouldn’t computers be objective?
The key to understanding where these algorithms go wrong is twofold. Perhaps most crucially, our algorithms learn from the data we provide. If the data we feed into an algorithm contains patterns of bias, the algorithm will produce biased recommendations. For example, if we build a salary predictor and train it on data in which women routinely earn less than men, it will recommend offering women lower salaries than men. Similarly, when MIT conducted a survey to determine whose safety self-driving cars should prioritize in a collision, respondents ranked convicts below cats, a preference that, if implemented, would violate human rights. Machines do not develop their decision-making processes on their own; our algorithms reflect the people who create them and the information those people supply. In a society entrenched in biased and unjust systems, the data we produce will inevitably contain bias and injustice unless we actively combat those trends. To avoid biased algorithms, we must audit both the inputs and the outputs of an algorithm for bias, and stay vigilant even when the initial results appear fair.
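To make the salary-predictor example concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the dataset is synthetic, the pay gap is an assumed number, and the model is an ordinary linear regression rather than any real hiring system. The point is simply that a model trained on biased pay history learns and reproduces that bias, and that a basic output audit can surface the gap.

# Illustrative sketch only: the data, numbers, and column meanings are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic historical pay data: identical experience distributions,
# but women were paid roughly $8,000 less on average (the bias we inherit).
experience = rng.uniform(0, 20, n)            # years of experience
is_woman = rng.integers(0, 2, n)              # 1 = woman, 0 = man
salary = 50_000 + 2_000 * experience - 8_000 * is_woman + rng.normal(0, 3_000, n)

# Train a salary predictor on the biased history.
X = np.column_stack([experience, is_woman])
model = LinearRegression().fit(X, salary)

# Output audit: compare predicted offers for otherwise identical candidates.
candidates = np.array([[10, 0], [10, 1]])     # same experience, different gender
offers = model.predict(candidates)
print(f"Predicted offer (man):   ${offers[0]:,.0f}")
print(f"Predicted offer (woman): ${offers[1]:,.0f}")
print(f"Gap learned from the data: ${offers[0] - offers[1]:,.0f}")

Run on this synthetic data, the model recommends offering the woman roughly $8,000 less for the same experience, not because it was told to discriminate, but because that is the pattern the history contains.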
The second shortcoming that often leads to biased AI is that the teams building these products may not be representative of their potential user base. For example, early cameras were not properly calibrated for darker skin tones; many speech recognition algorithms do not pick up women’s voices as well as men’s because those voices are often higher pitched than the voices of the men who built and tested the systems; and many automated bathroom sensors do not respond to darker skin tones.
This is not to say that all AI algorithms are doomed, but rather that we should view them with caution and skepticism. In a world where AI is becoming increasingly embedded in our lives, it is crucial that decision makers, who might lack a strong technical background, consult outside experts before deploying an automated decision-making system that could adversely affect someone’s life. In the words of John Giannandrea, Senior Vice President of Machine Learning and AI at Apple, “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.” It is vital that the technology industry takes the many cries for diversity seriously and that all decision makers understand that while modern AI can be highly capable, it can only be as unbiased as its creators.
Bias is an issue that affects everyone. Algorithmic failures range in severity from not receiving a marketing email to being denied a loan to receiving a life sentence. As technology becomes even further woven into society, we must ask more of tech workers, our leaders, and ourselves. We must run bias audits of every algorithmically made decision in order to ensure fairness and avoid violations of the law.
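As one illustration of what the simplest such audit might look like, here is a short Python sketch on made-up loan decisions: it compares approval rates across groups and flags a large gap for human review. The group labels, decisions, and the 0.2 threshold are all hypothetical; a real audit would also examine error rates, the input data, and the context of each decision.

# Illustrative only: the decisions and group labels below are invented.
import pandas as pd

# Hypothetical record of algorithmic loan decisions.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

# Simplest output audit: approval rate per group (a demographic parity check).
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Flag large gaps for human review; the 0.2 threshold is an arbitrary example.
if rates.max() - rates.min() > 0.2:
    print("Warning: approval rates differ substantially across groups -- review required.")

A check this simple will not catch every kind of unfairness, but even a few lines of scrutiny like these are more than many deployed systems receive today.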