“In order to improve the use of AI in immigration and refugee systems, governments must commit to greater transparency, accessibility, and compliance with human rights law.” – Melinda Cardenas
This week, we have an essay from Melinda Cardenas on the biases in automated decision-making technology in immigration and refugee systems.
Same Discrimination, Different Form is an essay that examines the biases embedded in the design of automated decision-making technology in immigration and refugee systems. It dissects the issue by discussing the history of bias in immigration systems, the rationale used to justify the denial of human rights to migrants, and the ways in which AI amplifies the injustices of the political justice system.
Melinda is a senior at the NYU Gallatin School of Individualized Study, concentrating on designing ethical frameworks for emerging technologies. Her studies lie at the intersection of design, computer science, interactive media arts, and political activism. She is interested in exploring how to reduce bias in algorithms, how interactive technologies can change the human experience, and what the empowerment of marginalized groups looks like in the age of AI and surveillance technology. To learn more and connect with her, you can find her on LinkedIn here.
You can read Melinda's paper in its entirety here.
Thank you for sharing your essay with us, Melinda! If you are a student doing research in the field of AI, reach out to me at firstname.lastname@example.org and let’s chat! We would love to feature your work.
This blog post accompanies our All About AI newsletter, which features news from the world of artificial intelligence, as well as research papers to help you learn more about AI. Subscribe here for more content like this!