May 9, 2019
You may have seen the news that Facebook is being sued by the Department of Housing and Urban Development (HUD) for discriminatory advertising practices. HUD accuses Facebook of enabling advertisers to target ads on the basis of sensitive demographic categories such as race, gender, disability status, and financial standing.
Accusations of discriminatory advertising practices are not new to Facebook, but up until this point it has largely avoided scrutiny from regulators. Facebook claims that because its policy prohibits advertisers from using its targeting tools for the purpose of discrimination, it sufficiently guards protected groups’ legal rights. However, at Facebook’s scale it would be impossible to catch every violation of policy, especially when discrimination is as easy as selecting a racial affinity group from a dropdown menu. Even after Facebook claimed to have stepped up its review process, ProPublica found that it was able to get housing ads that excluded protected groups approved on Facebook within minutes.
Last month, Facebook announced that it had removed the ability to target specific races, genders, or age groups from its advertising platform. It also eliminated targeting by zip code, which can serve as a proxy for ethnicity due to the history of housing segregation in the United States.
Despite Facebook’s recent changes, HUD asserts that Facebook has not gone far enough to ensure that advertisers cannot target users based on protected characteristics. Furthermore, HUD claims that because Facebook actively tailors ad delivery to what users engage with most, it goes beyond the targeting criteria advertisers explicitly specify. Because Facebook limits an ad’s reach among demographic groups that are not engaging with it, housing ads can end up being delivered in a discriminatory way even when the advertiser never explicitly excluded any particular group.
At first glance, this may not seem actively predatory. Why should Facebook show users ads they find unappealing? The problem with this logic is best illustrated with an example. Imagine a company creates a job listing and includes a photo of an older white man in a suit. It’s possible that young black women will not identify with this ad and ignore it. The original ad poster did not explicitly target older white men. However, because of the way Facebook’s delivery algorithms work, when black women ignore the ad, a reinforcing effect kicks in: even fewer black women will see the job listing, resulting in discrimination against multiple protected groups. This kind of advertising practice likely violates US law. Additionally, one can imagine that such a system is easy to game by someone with bad intentions. In many cases, the use of an older white man in a suit may be an example of unconscious bias, but it also leaves room for employers to intentionally craft ads that won’t appeal to certain groups, effectively preventing those groups from ever seeing the ad without explicitly excluding them.
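To see how a small engagement gap can compound, consider the following minimal simulation. It is a hypothetical sketch, not Facebook’s actual delivery system: the group names, click rates, and update rule are all assumptions chosen to illustrate how optimizing delivery for engagement can steadily exclude a group that nobody explicitly targeted.

```python
import random

# Hypothetical simulation of an engagement-optimized ad delivery loop.
# Group names, click rates, and the update rule are illustrative assumptions,
# not a description of Facebook's actual algorithm.

GROUPS = ["group_a", "group_b"]
CLICK_RATE = {"group_a": 0.05, "group_b": 0.04}  # small initial engagement gap

def simulate(rounds=20, impressions_per_round=10_000, learning_rate=0.5):
    # Start by delivering the ad to both groups equally.
    delivery_share = {g: 0.5 for g in GROUPS}
    for _ in range(rounds):
        clicks = {}
        for g in GROUPS:
            shown = int(impressions_per_round * delivery_share[g])
            # Each impression is clicked independently with the group's rate.
            clicks[g] = sum(random.random() < CLICK_RATE[g] for _ in range(shown))
        # Shift future delivery toward whichever group engaged more: this is
        # the "optimize for engagement" step that creates the feedback loop.
        total_clicks = sum(clicks.values()) or 1
        for g in GROUPS:
            observed_share = clicks[g] / total_clicks
            delivery_share[g] += learning_rate * (observed_share - delivery_share[g])
        # Renormalize so the shares always sum to 1.
        total = sum(delivery_share.values())
        for g in GROUPS:
            delivery_share[g] /= total
    return delivery_share

if __name__ == "__main__":
    final = simulate()
    print({g: round(share, 3) for g, share in final.items()})
    # A roughly one percentage point gap in click rate compounds into a large
    # gap in who ever sees the ad, even though no group was explicitly excluded.
```

Run a few times and the lower-engagement group’s share of impressions collapses toward zero; the point is that the exclusion emerges from the optimization loop itself, not from any targeting choice the advertiser made.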
It’s important to note that the risk of AI unintentionally discriminating against protected groups is not limited to Facebook. Any company that uses user data to target advertisements, including companies like Google and Amazon, must be cautious and conduct continuous ethics audits of its targeting algorithms.
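What might one piece of such an audit look like in practice? Below is a simple, hypothetical sketch: it compares how often an ad is actually delivered to each demographic group, relative to the eligible audience, and flags large disparities. The data shapes, threshold, and metric are assumptions for illustration (loosely inspired by the four-fifths rule used in employment discrimination analysis), not any company’s real audit pipeline.

```python
# Illustrative sketch of one possible automated audit check for ad delivery
# disparities across demographic groups. All names and numbers are hypothetical.

def delivery_disparity(impressions, group_sizes):
    """Return per-group delivery rates and the ratio of lowest to highest.

    impressions: dict mapping group -> times the ad was shown to that group
    group_sizes: dict mapping group -> eligible audience size for that group
    """
    rates = {g: impressions.get(g, 0) / group_sizes[g] for g in group_sizes}
    lo, hi = min(rates.values()), max(rates.values())
    return rates, (lo / hi if hi else 1.0)

def audit(ads, threshold=0.8):
    """Flag ads whose least-served group is delivered at less than `threshold`
    times the rate of the best-served group."""
    flagged = []
    for ad_id, (impressions, group_sizes) in ads.items():
        rates, ratio = delivery_disparity(impressions, group_sizes)
        if ratio < threshold:
            flagged.append((ad_id, ratio, rates))
    return flagged

if __name__ == "__main__":
    # Hypothetical delivery counts for one housing ad.
    ads = {
        "housing_ad_17": (
            {"group_a": 9_000, "group_b": 2_500},    # impressions delivered
            {"group_a": 50_000, "group_b": 50_000},  # eligible audience sizes
        )
    }
    for ad_id, ratio, rates in audit(ads):
        print(f"{ad_id}: disparity ratio {ratio:.2f}, delivery rates {rates}")
```

A real audit would need far more care (statistical significance, intersectional groups, legitimate explanatory factors), but even a crude check like this, run continuously, would surface the kind of skewed delivery described above.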