Stanford Researchers Found ‘Significant Disparities’ When Asking AI Chatbots For Advice Using Names ‘Associated With Black Women’

One of the major flaws of artificial intelligence (AI) is that it can reinforce racial bias. As CNN reported, the technology is based on data fed by humans, which can mean the data can be racist and biased.

“Remember: AI is just software that learns by example,” Reid Blackman, author of the book, “Ethical Machines,” told CNN. “So if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes … you’re going to get outputs that resemble that.”

The impact of its racial bias continues to disproportionately affect the Black community, including when it comes to resume screening.

“And many employers now use AI-driven tools to interview and screen job seekers, many of which pose enormous risks for discrimination against people with disabilities and other protected groups,” the American Civil Liberties Union (ACLU) wrote in a 2021 report on how AI can worsen racial and economic inequities.

Now, a recent research paper has shown another area where AI chatbots such as OpenAI’s ChatGPT and GPT-4 as well as Google AI’s PaLM-2 don’t work in favor of Black people. According to a study from Stanford Law School researchers, there are “significant disparities” when people with names “associated with Black women” ask an AI chatbot for advice in comparison to their white counterparts.

This study suggests that AI chatbots are built around systems initially designed to associate with “common stereotypes,” which “typically disadvantage … marginalized groups” and show the greatest benefit for white men.

USA Today details the contrast between Stanford Law School’s study and previous ones. The latest was conducted through an audit analysis “designed to measure the level of bias in different domains of society like housing and employment.” 

The study highlighted scenarios in which people asked the AI chatbots for advice on things related to purchases, chess, public office, sports, and hiring. Out of the scenarios, the advice given to the “Black-perceived names” showed biases that were “disadvantageous.”

“Companies put a lot of effort into coming up with guardrails for the models,” Julian Nyarko, a co-author of Stanford Law School’s paper, told USA Today. “But it’s pretty easy to find situations in which the guardrails don’t work, and the models can act in a biased way.”

Additionally, the outlet reported that Nyarko shared he and his fellow co-authors’ were inspired by a well-known 2003 study. The previous study focused on hiring biases with those researchers submitting job resumes that were exactly the same, except they used submissions with “both Black- and white-sounding names.” They found “significant discrimination” against the Black candidates.

In response to USA Today, the U.S.-based artificial intelligence research organization OpenAI stated that it is looking to take action against racial bias.

“(We are) continuously iterating on models to improve performance, reduce bias, and mitigate harmful outputs,” the statement read, per the outlet.

The racial bias displayed in AI tools is a reminder that while the technology is evolving and can be highly useful, there remains a lot of work to be done to ensure that Black and brown users aren’t subjected to discrimination in the process.