STUDY: Machines Are Just as Racist as Humans

By Kenrya Rankin Apr 14, 2017

It looks like artificial intelligence (AI) is not so smart after all. A new study shows that programs powered by AI are just as biased as humans.

"Semantics Derived Automatically From Language Corpora Contain Human-Like Biases," published today (April 14) in Science, explores how machines gain artificial intelligence from analyzing human-crafted data—and how what they learn mirrors the beliefs of the people who create that data. In short: It shows that machines learn to be racist and sexist from humans.

Researchers from Princeton University and the University of Bath adapted the Implicit Association Test, which measures unconscious bias in humans, for artificially intelligent machines to measure how they associate words with feelings. The result: They found that these machines have learned to associate Black people with negative words, and that they also harbor sexist attitudes about the jobs women can hold.
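Under the hood, tests like this run over word embeddings: each word is mapped to a vector of numbers learned from large text corpora (the paper uses GloVe vectors), and bias shows up as one group of words sitting measurably closer to "pleasant" terms than to "unpleasant" ones. Below is a minimal sketch of that idea; the vectors and word lists are illustrative placeholders rather than the study's actual stimuli, and the published test also computes an effect size and a significance value on top of these raw scores.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point in the same direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    # A word's bias score: average closeness to "pleasant" words minus
    # average closeness to "unpleasant" words.
    return (np.mean([cosine(word_vec, p) for p in pleasant_vecs])
            - np.mean([cosine(word_vec, u) for u in unpleasant_vecs]))

# Placeholder embeddings for illustration only; a real test would load
# pretrained vectors (e.g. GloVe or word2vec) instead of random ones.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50)
           for w in ["flower", "insect", "love", "pleasure", "pain", "filth"]}

pleasant = [vectors["love"], vectors["pleasure"]]
unpleasant = [vectors["pain"], vectors["filth"]]

print(association(vectors["flower"], pleasant, unpleasant))
print(association(vectors["insect"], pleasant, unpleasant))
```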

Study coauthor Joanna J. Bryson wrote about the results on her blog:

AI (and children) can inherit our prejudices just from word usage. … What the IAT is better known for is showing the extent to which we have strong implicit associations for a wide range of stereotypes, like that women are more domestic an[d] men more career oriented, or women are more associated with the humanities and men to math or science. And worst of all, that African-American first names are more easily associated with unpleasant terms AND European-American names with pleasant terms than the other way around.

These findings are in line with what has been observed in the real world. In 2016, Microsoft released Tay, a chatbot that used artificial intelligence to interact with users on Twitter. It took less than a day for the bot to go on a racist rant, targeting Black people, feminists, Mexicans and Jews and calling for a race war.

Bryson says that the research should inform how artificial intelligence is crafted moving forward, particularly in a world that already uses AI to screen job candidates and translate language. From her blog entry:

So if we are going to have AI systems that learn about the world from culture and then also act upon the world, then we may very well want a similar system to what humans have. We may want an implicit system for learning enough scaffolding to understand the world, and then an explicit way of instructing the system to conform to what society currently accepts. This may sound scifi, but think about the text prediction on your own smart phone (if you have one). It guesses the next word you might type with something derived from culture—an n-gram model that tells it what words you are likely to say next, particularly given what letters you’ve typed so far. But there are some words it will never help you finish. That’s not because no one has ever said them before, it’s because guessing them wrong would be socially unacceptable.
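To make the smartphone analogy concrete, here is a toy sketch of a next-word suggester built from the two layers Bryson describes: a frequency model learned from text (the implicit, culture-derived part) and an explicit blocklist of words it will never suggest, no matter how common they are. The corpus, counts and blocked words are placeholders chosen only for illustration.

```python
from collections import Counter, defaultdict

# Implicit layer: a toy bigram model that counts which word follows which
# in a (placeholder) corpus, standing in for the culture-derived n-gram model.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Explicit layer: words the keyboard will never suggest, regardless of how
# often the model has seen them (placeholder list for illustration).
BLOCKLIST = {"slept"}

def suggest(prev_word, k=3):
    # Rank candidates by learned frequency, then apply the explicit rule layer.
    candidates = bigrams[prev_word].most_common()
    return [word for word, _count in candidates if word not in BLOCKLIST][:k]

print(suggest("the"))  # learned suggestions, e.g. ['cat', 'mat', 'sofa']
print(suggest("cat"))  # 'slept' was learned from the corpus but filtered out -> ['sat']
```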

Read the full study here.

(H/t NBC News)