December 25, 2020
The 2020 US presidential election was riddled with threats to democracy, from allegations of voter fraud to a refusal to commit to a peaceful transfer of power. Now, we’re faced with the task of restoring confidence in our democratic institutions and preserving their integrity. While AI is not yet the crux of political agendas, it’s already being leveraged from campaign trails to Capitol Hill. Let’s talk about how governments can (and can’t) use AI to promote democratic values.
At first glance, the thought of giving an infant technology a government role sounds crazy. If calculating a nation’s population size requires human census-takers marching door to door, then how are we ready for AI to advise our elected officials? The truth is, we’ve been ready for census bots and AI advisors for a number of years, but IT investment in the public sector is extremely scarce. A recent Microsoft study found that two-thirds of public sector organizations see AI as a digital priority, yet only four percent have the resources necessary to scale AI and achieve transformative outcomes. Still, funding is on the horizon. The federal government invested $4.9 billion in AI-related research and development in FY 2020, more than double 2019’s investment. If this trend continues, your favorite AI bot may soon have a Senate seat.
No Automation Without Representation
Perhaps the most promising application of AI in democratic governments is improving representation. AI can strengthen the relationship between officials and their constituents by streamlining communication and gauging the interests of the constituency. Natural Language Processing (NLP) is the branch of AI used to analyze human language and sentiment, and some governments are already starting to experiment with it.
The European Union’s democratic system depends on a steady stream of communication between its legislative and executive branches: parliamentary representatives pose questions, and ministers and civil servants respond. Answering them isn’t cheap: each year, it costs the European Commission nearly €8 million to facilitate all the Q&A. The UK faces the same burden, so the Data Science Hub in its Ministry of Justice (MoJ) developed an AI-driven model that helps the MoJ respond to parliamentary questions. Using an NLP technique called “latent semantic analysis,” the model measures a new question’s proximity to previously asked parliamentary questions and pulls the corresponding responses from the MoJ’s database. Less background research is needed, relevant reports and resources are pinpointed, and representatives get better feedback from the executive branch.
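To make that concrete, here’s a minimal sketch of how latent semantic analysis can match a new question against an archive of past Q&A, using scikit-learn. The sample questions and answers are hypothetical, and the MoJ’s real pipeline is certainly more sophisticated:

```python
# A minimal LSA matcher: TF-IDF + truncated SVD, then cosine similarity
# against an archive of previously answered questions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive of past parliamentary questions and answers.
past_questions = [
    "What steps are being taken to reduce prison overcrowding?",
    "How many court hearings were delayed last year?",
    "What is the budget for legal aid services this year?",
]
past_answers = [
    "The department has funded 2,000 additional prison places...",
    "Approximately 14,000 hearings were adjourned in that period...",
    "Legal aid funding for this financial year totals...",
]

# Embed the archive in a low-dimensional latent "topic" space.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(past_questions)
svd = TruncatedSVD(n_components=2, random_state=0)
lsa_archive = svd.fit_transform(tfidf)

def suggest_answer(new_question: str) -> str:
    """Return the archived answer whose question is closest in LSA space."""
    query = svd.transform(vectorizer.transform([new_question]))
    scores = cosine_similarity(query, lsa_archive)[0]
    return past_answers[scores.argmax()]

print(suggest_answer("What is being done about prison overcrowding?"))
```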
Similarly, the Singaporean government’s Housing and Development Board receives a large number of emails from residents raising concerns and queries. Over time, the Board suspected there might be underlying trends in the emails it received. So, it built an AI text-analyzer that evaluated over 100,000 emails and found that a resident’s demographic was strongly correlated with their needs. The Board learned, for instance, that young homeowners sought to collect their keys earlier than older homeowners, so it moved away from giving prespecified appointment dates for key collection. The Board now tailors its services by demographic.
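A stripped-down illustration of the idea: tag each email with a topic, then cross-tabulate topics against demographics to surface trends like the key-collection pattern. The dataset, field names, and keyword rules below are all hypothetical stand-ins for the Board’s actual system:

```python
# Tag each resident email with a topic, then cross-tabulate topics
# against demographic groups to surface underlying trends.
import pandas as pd

# Toy stand-in for the Board's email dataset (fields are hypothetical).
emails = pd.DataFrame({
    "age_group": ["under 35", "under 35", "35-60", "over 60", "under 35"],
    "text": [
        "Can I collect my keys earlier than the appointment date?",
        "Requesting an earlier key collection slot, please.",
        "Question about my renovation permit application.",
        "The lift in my block has been out of service for a week.",
        "Is early key collection possible for my new flat?",
    ],
})

# A production system would use a trained text classifier; simple
# keyword rules stand in for one here.
def tag_topic(text: str) -> str:
    text = text.lower()
    if "key" in text:
        return "key collection"
    if "lift" in text or "service" in text:
        return "maintenance"
    return "other"

emails["topic"] = emails["text"].apply(tag_topic)

# Which demographic groups raise which concerns?
print(pd.crosstab(emails["age_group"], emails["topic"]))
```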
Explainability is the Best Policy
Better communication leads directly to better policy-making; when it’s easier to hear and understand the constituency, it’s easier to make policies that benefit them. The Singaporean Housing and Development Board’s key collection service is a prime example. Down the road, AI may even be able to draft policies itself by finding optimal solutions to narrowly defined problem-spaces. Envision an AI that prioritizes investments in road work by analyzing traffic bottlenecks or one that tailors social welfare programs by predicting individual needs. But, if we’re going to give AI a voice in policy-making, we need to understand how it reaches any given output. If an AI tells the Secretary of Energy to go entirely nuclear, it better have a good reason for doing so – we can’t just start flinging uranium left and right. Unfortunately, most current AI models are largely black boxes and lack explainability. And, when AI outputs are taken at face value in government, democratic ideals can be jeopardized.
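Explainability tooling does exist, though. One simple technique is permutation importance, which measures how much each input drives a model’s output by shuffling that input and watching accuracy degrade. The sketch below applies it to a toy “go nuclear” recommender trained on synthetic data; the features are invented purely for illustration:

```python
# Probing a black-box model: permutation importance measures how much
# shuffling each input feature degrades the model's accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["grid_demand", "fuel_cost", "carbon_target", "public_support"]

# Synthetic data: the "go nuclear" label is driven mostly by carbon targets.
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Officials can now see *which* inputs are pushing the recommendation.
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name:15s} importance = {score:.3f}")
```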
Allegheny County, Pennsylvania, comprises the greater Pittsburgh metropolitan area. In 2014, Allegheny’s Department of Human Services (DHS) – responsible for child protection services (CPS) – had at its disposal $1.2 million in federal grant money and a warehouse of county residents’ public data. So, the DHS assembled a consortium of techies to leverage that data and improve decision making in the CPS department. The final product: the Allegheny Family Screening Tool (AFST). When a call reporting child maltreatment comes in to the DHS, the AFST generates a score from 1 (low risk) to 20 (high risk) indicating the likelihood of an adverse event recurring within two years. A human screener then uses that score to decide whether to open an investigation. Yet, after its launch in 2016, screeners began to notice idiosyncrasies in the AFST’s output. Upon further investigation, they realized that the AFST’s training data was biased against Black children. Because children from ethnic minorities tend to be over-reported and over-represented in the data, Black children were being scored as high-risk at nearly double the rate that would be expected based on their share of the population. Without knowing how the AFST was transforming its inputs into outputs, bias was able to creep into the model. On a national level, the implications of such bias would be far more severe.
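A routine disparity audit might have surfaced the skew sooner. The sketch below checks whether one group receives high-risk scores out of proportion to its population share; the scores are synthetic, generated with a deliberate skew to mimic the pattern screeners noticed:

```python
# Audit: does one group receive high-risk scores out of proportion to
# its share of the screened population?
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])

# Synthetic scores with a deliberate skew: group B is scored ~3 points
# higher on average, mimicking bias inherited from training data.
score = np.clip(rng.normal(8, 4, size=n) + np.where(group == "B", 3, 0), 1, 20)
df = pd.DataFrame({"group": group, "score": score.round()})

high_risk = df.groupby("group")["score"].apply(lambda s: (s >= 15).mean())
share = df["group"].value_counts(normalize=True)

print("Population share:\n", share, sep="")
print("High-risk rate:\n", high_risk, sep="")
# A ratio well above 1.0 flags scoring that demands an explanation.
print("Disparity ratio (B vs A):", round(high_risk["B"] / high_risk["A"], 2))
```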
Life, Liberty, and the Pursuit of AI
AI is not inherently democratic by any means. In fact, it’s quite the opposite: nothing about an AI’s outputs has to be equitable, representative, or legitimate. Rather, it’s maintaining unbiased inputs and interpreting an AI’s results in an egalitarian way that makes the AI democratic. As such, it’s important to deploy AI in government as part of a human-in-the-loop system, so there is always an egalitarian mind (and an elected official) behind each decision.
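What might that look like in practice? One minimal pattern (a sketch, not a prescription) is to treat the model’s output as a recommendation that has no effect until a named human reviews the rationale and signs off:

```python
# A model output is only ever a *recommendation*; nothing takes effect
# until a named, accountable human reviews the rationale and signs off.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    case_id: str
    model_recommendation: str
    model_rationale: str              # surfaced to the reviewer, never hidden
    reviewer: Optional[str] = None
    human_decision: Optional[str] = None

def finalize(decision: Decision, reviewer: str, verdict: str) -> Decision:
    """Record the human's verdict; only finalized decisions are actionable."""
    decision.reviewer = reviewer
    decision.human_decision = verdict
    return decision

case = Decision(
    case_id="2020-1187",
    model_recommendation="open investigation",
    model_rationale="risk score 17/20: repeat referrals in past 24 months",
)
case = finalize(case, reviewer="screener_jdoe", verdict="open investigation")
print(case)
```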
The argument could even be made that AI lends itself to antidemocratic systems of government: AI used for government purposes runs on big data about constituents, and making that data available for government use may infringe upon what would, in a traditional democracy, be considered civil liberties. Take China’s panopticon, for example. Xi Jinping can deploy AI across mass facial recognition systems and Uyghur-monitoring software because his totalitarian control lets him override civil liberties and grants boundless access to “private” data. But it’s important for democratic governments not to sacrifice individual rights to fuel AI; instead, AI should conform to – and be used in service of – our individual rights. While the potential for AI to drive transformative solutions is enticing, it should not come at the expense of privacy, freedom, or other democratic values.
Looking Ahead
AI in government can improve representation and policy-making, but it can’t promote democratic ideals if it lacks explainability or violates civil liberties. As more funding gets directed toward AI projects and COVID-19 pushes more government processes onto digital platforms, constituents must watch developments in public sector AI closely. Indeed, a reality in which AI plays a role in politics is already upon us. In Russia’s 2018 presidential election, the AI-driven chatbot “Alice” was nominated as a challenger to Vladimir Putin and received a couple thousand votes. And in New Zealand’s 2020 general election, the virtual politician “SAM” campaigned as an AI candidate that learns from citizens through interactions on Facebook Messenger. In a public statement, SAM offers a point that perhaps we can all learn from, especially after the 2020 US elections: “Unlike a human politician, I consider everyone’s position when making decisions.”