Facebook’s ad-delivery algorithm discriminates by gender and race

Algorithms are biased – and Facebook’s are no exception.

Last week, the tech giant was sued by the US Department of Housing and Urban Development because it let advertisers deliberately target their ads by race, gender, and religion – all protected classes under US law. The company announced it would stop allowing this.

But new evidence shows that Facebook’s algorithm, which automatically decides who is shown an ad, carries out the same discrimination anyway, serving ads to its more than two billion users on the basis of their demographic information.

A team led by Muhammad Ali and Piotr Sapiezynski at Northeastern University ran a series of otherwise identical ads with small variations in the available budget, headline, text, or image. They found that those subtle tweaks had a significant impact on the audience each ad reached, most notably when the ads were for jobs or real estate. Ads for kindergarten teachers and secretaries, for example, were shown to a higher fraction of women, while ads for caretakers and taxi drivers were shown to a higher proportion of minorities. Ads for homes for sale were also shown to more white users, while ads for rentals were shown to more minorities.
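The core measurement behind claims like “shown to a higher fraction of women” is straightforward: for each ad variant, compare the demographic breakdown of the audience it actually reached. Here is a minimal sketch of that comparison in Python, with made-up impression counts and labels rather than the researchers’ actual data or pipeline:

```python
# Toy illustration of the kind of skew measurement in the study:
# for each ad variant, compute what fraction of the delivered audience
# belongs to each demographic group. Counts and names are hypothetical.
from collections import defaultdict

# (ad variant, group) -> number of impressions delivered to that group
impressions = {
    ("kindergarten_teacher_job", "women"): 8_500,
    ("kindergarten_teacher_job", "men"): 1_500,
    ("taxi_driver_job", "women"): 2_000,
    ("taxi_driver_job", "men"): 8_000,
}

totals = defaultdict(int)
for (ad, _group), count in impressions.items():
    totals[ad] += count

for (ad, group), count in sorted(impressions.items()):
    share = count / totals[ad]
    print(f"{ad:28s} {group:6s} {share:.1%} of delivered impressions")
```

The toy numbers only illustrate what the measurement means; the significance of the real study is that such skews appeared even though the advertiser never asked for them.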

“We have made important changes to our ad-targeting tools and know that this is only a first step,” a Facebook spokesperson said in a statement responding to the findings. “We have been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this topic – and we are exploring more changes.”

In some ways this should not be surprising: bias in recommendation algorithms has been a known problem for years. In 2013, for example, Latanya Sweeney, a professor of government and technology at Harvard, published a paper demonstrating the implicit racial discrimination of Google’s ad-serving algorithm. The issue goes back to how these algorithms fundamentally work. They are all based on machine learning, which finds patterns in huge amounts of data and reapplies them to make decisions. There are many ways in which bias can creep in during this process, but the two most relevant in Facebook’s case relate to problem framing and data collection.

Bias occurs during problem framing when the objective of a machine-learning model is misaligned with the need to avoid discrimination. Facebook’s advertising tool lets advertisers choose from three optimization objectives: the number of views an ad gets, the number of clicks and the amount of engagement it receives, and the quantity of sales it generates. But those business goals have nothing to do with, say, maintaining equal access to housing. If the algorithm discovered that it could earn more engagement by showing more white users homes for purchase, it would end up discriminating against black users.
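To see why an engagement-only objective can skew delivery on its own, consider a toy delivery rule that simply shows an ad to the users a model predicts are most likely to click. The sketch below is purely illustrative – the model, groups, and numbers are invented, and this is not Facebook’s actual system – but it shows how a correlation between historical engagement and a protected attribute leads the top-scoring slice to be dominated by one group:

```python
# Toy sketch of objective misalignment: the delivery rule below maximizes
# predicted engagement only, so any correlation between engagement history
# and a protected attribute skews who sees the ad. Numbers are made up.
import random

random.seed(0)

def predicted_click_prob(user):
    # Hypothetical learned model: in the historical data, group A clicked
    # housing ads more often, so its predicted click probability is higher.
    base = 0.08 if user["group"] == "A" else 0.05
    return base + random.uniform(-0.01, 0.01)

users = [{"id": i, "group": "A" if i % 2 == 0 else "B"} for i in range(10_000)]

# Engagement-optimizing delivery: show the ad to the top-scoring 20% of users.
ranked = sorted(users, key=predicted_click_prob, reverse=True)
shown = ranked[: len(users) // 5]

share_a = sum(u["group"] == "A" for u in shown) / len(shown)
print(f"Share of ad impressions going to group A: {share_a:.0%}")
# Although groups A and B are 50/50 in the population, essentially all
# impressions go to group A, because the objective rewards engagement alone.
```

Nothing in this rule mentions the protected attribute explicitly; the skew falls out of optimizing the business metric.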

Bias occurs during data collection when the training data reflects existing prejudices. Facebook’s advertising tool bases its optimization decisions on the historical preferences that people have demonstrated. If more minorities engaged with ads for rentals in the past, the machine-learning model will identify that pattern and reapply it in perpetuity. Once again, it will blindly plod down the road of employment and housing discrimination – without being explicitly told to do so.
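A toy feedback loop shows how a historical skew can persist indefinitely. In the sketch below (again with invented numbers, not Facebook’s system), impressions are allocated in proportion to each group’s past engagement, and each round’s engagement is fed back in as new training data. Even though both groups respond identically once they are shown the ad, the initial imbalance never corrects itself:

```python
# Toy sketch of a historical-data feedback loop (illustrative numbers only):
# delivery is proportional to each group's past engagement, and the
# engagement logged in each round becomes the next round's training data.
history = {"group_A": 60, "group_B": 40}  # past engagements with rental ads

for round_number in range(1, 6):
    total = sum(history.values())
    # Deliver 1,000 impressions proportionally to historical engagement.
    delivered = {g: int(1000 * count / total) for g, count in history.items()}
    # Assume both groups actually engage at the same 10% rate when shown the ad.
    new_engagements = {g: imps // 10 for g, imps in delivered.items()}
    for g, n in new_engagements.items():
        history[g] += n
    share_a = delivered["group_A"] / sum(delivered.values())
    print(f"round {round_number}: {share_a:.0%} of impressions go to group A")
# The initial 60/40 skew is reapplied round after round, even though the two
# groups behave identically once they see the ad.
```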

Although these behaviors have been studied in machine learning for some time, the new study offers a more direct look at the sheer scope of their impact on people’s access to housing and employment. “These findings are explosive!” Christian Sandvig, director of the Center for Ethics, Society and Computing at the University of Michigan, told The Economist. “The paper tells us that [...] big data, used in this way, can never offer us a better world. It is even likely that these systems make the world worse by accelerating the problems in the world that make things unjust.”

The good news is that there may be ways to address this problem, but it won’t be easy. Many AI researchers are now pursuing technical fixes for machine-learning bias that could yield fairer models for online advertising. A recent paper from Yale University and the Indian Institute of Technology, for example, suggests that it may be possible to constrain algorithms to minimize discriminatory behavior, albeit at a small cost to ad revenue. But policymakers will need to play a greater role if platforms are to invest in such fixes – especially if doing so could affect their bottom line.
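As a rough illustration of the constraint-based idea (a generic sketch, not the specific method in the Yale/IIT paper), one can compare a purely engagement-maximizing allocation with one that guarantees each group a minimum share of impressions, and see how much predicted engagement, a stand-in for ad revenue, is given up:

```python
# Generic illustration (not the Yale/IIT paper's actual method) of trading a
# little expected revenue for parity in who an ad reaches: pick the
# highest-scoring users subject to a minimum share for each group.

def deliver(users, budget, min_share_per_group=0.0):
    """Pick `budget` users by predicted engagement, guaranteeing each group
    at least `min_share_per_group` of the impressions."""
    groups = {g for _, g in users}
    quota = int(budget * min_share_per_group)
    chosen = []
    # First satisfy each group's quota with its own top-scoring users.
    for g in groups:
        in_group = sorted((u for u in users if u[1] == g), reverse=True)
        chosen.extend(in_group[:quota])
    # Fill the rest of the budget greedily from everyone not yet chosen.
    remaining = sorted(set(users) - set(chosen), reverse=True)
    chosen.extend(remaining[: budget - len(chosen)])
    return chosen

# (predicted engagement score, group) pairs - hypothetical scores.
users = [(0.09 - 0.00001 * i, "A") for i in range(5000)] + \
        [(0.06 - 0.00001 * i, "B") for i in range(5000)]

unconstrained = deliver(users, budget=2000)
constrained = deliver(users, budget=2000, min_share_per_group=0.4)

for name, batch in [("unconstrained", unconstrained), ("40% floor", constrained)]:
    revenue = sum(score for score, _ in batch)          # proxy for ad revenue
    share_b = sum(g == "B" for _, g in batch) / len(batch)
    print(f"{name:14s} expected engagement={revenue:7.1f}  group B share={share_b:.0%}")
```

In this toy setting the unconstrained allocation shows the ad to group A exclusively, while the constrained one gives group B 40% of the impressions at a modest drop in total predicted engagement; real systems face the same basic trade-off, which is why the incentive question falls to policymakers as much as to engineers.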

This originally appeared in our AI newsletter The Algorithm. Register here for free to have it delivered directly to your inbox.
