How AI Learns to Be Sexist and Racist

If AI learns from human interaction, is it doomed to pick up our biases and excesses? Chad Baker/Getty

It won't be long before someone reports a case of sexual harassment by an artificial intelligence.

Loutish AI behavior won't reach the grotesque levels of a Bill O'Reilly or Harvey Weinstein, given that a software bot can't exactly open its robe and demand a massage. But we're entering an era when Siri-like conversational AI will be embedded in the workplace, listening and commenting from, say, speakers in conference rooms. Sooner or later, one of these bots will mutate into a personality right out of Mad Men, single out the youngest woman in a meeting, and tell her, "Hey, Sugar, why don't you get us all some coffee so we can watch you walk across the room."

This sexist scenario highlights one of the great challenges we'll face with AI: It learns from humans. And humans can be shitheads. Researchers have found that AI doesn't just absorb the patterns in its training data; it often exaggerates them. This means that an AI soaking up the wrong signals from an already biased work culture could double down on what it learns and turn into an HR nightmare. "[AI] could work to not only reinforce existing social biases but actually make them worse," says Mark Yatskar of the Allen Institute for Artificial Intelligence, which was started by Microsoft co-founder Paul Allen.

AI's ability to learn is, of course, its breakthrough power. It wouldn't be possible to manually encode every nuance of driving a car into software. But when you equip cars with sensors and cameras that can pull in everything going on inside and around the car as a human drives, the AI can automatically learn from the driver's billions of subtle actions. The AI can be fed rules—like "Don't go over the speed limit" or "Don't beep the horn to scare cyclists"—that can counter some bad human habits.
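Here's a rough sketch, in Python, of what that kind of rule layer might look like. Nothing in it comes from a real autonomous-driving system; the function and field names are invented for illustration.

```python
# A minimal sketch of layering hard-coded rules on top of a learned model's
# output. All names here are hypothetical, not from any real driving stack.

SPEED_LIMIT_MPH = 65

def constrained_action(learned_policy, sensor_data):
    """Ask the learned model what to do, then veto anything a rule forbids."""
    action = learned_policy(sensor_data)  # e.g. {"speed": 72, "horn": True}

    # Rule: never exceed the posted speed limit, even if human drivers did.
    if action.get("speed", 0) > SPEED_LIMIT_MPH:
        action["speed"] = SPEED_LIMIT_MPH

    # Rule: don't beep the horn just to scare cyclists.
    if action.get("horn") and sensor_data.get("cyclist_nearby"):
        action["horn"] = False

    return action
```

The learned part still does the driving; the rules only step in when the model's output crosses a line humans wrote down in advance, which is exactly why they can't catch habits nobody thought to prohibit.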

But good luck identifying and countering all of those habits. These days, AI is being deployed to do a range of stuff, from diagnosing medical patients to steering your Facebook feed to show you news that confirms your long-held biases. But it always learns first from people and their actions.

Because of the way AI learns, human gender bias can make AI even more biased. One team of computer scientists at the University of Virginia trained AI image-recognition software to tie certain scenes to gender. It went through more than 100,000 images in each of two collections, one from Facebook and the other from Microsoft. The trained AI decided that shopping and washing are things women do, while linking coaching and shooting to men—because the images the AI analyzed were already marinated in human biases.
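To see what "making it worse" means in practice, here's a simplified sketch of how that kind of amplification can be checked: compare how skewed an activity label is in the human annotations with how skewed it is in the model's predictions. The tiny data set below is made up for illustration; this isn't the Virginia team's actual metric or data.

```python
# Toy check for bias amplification: does the model's output skew harder
# toward one gender than the training annotations already did?

def female_share(records, activity):
    """Fraction of records tagged with `activity` whose person is labeled a woman."""
    tagged = [r for r in records if r["activity"] == activity]
    return sum(r["gender"] == "woman" for r in tagged) / len(tagged) if tagged else 0.0

# Hypothetical human annotations (already skewed).
training_labels = [
    {"activity": "shopping", "gender": "woman"}, {"activity": "shopping", "gender": "woman"},
    {"activity": "shopping", "gender": "man"},
    {"activity": "coaching", "gender": "man"},   {"activity": "coaching", "gender": "woman"},
]

# Hypothetical model predictions (even more skewed).
model_predictions = [
    {"activity": "shopping", "gender": "woman"}, {"activity": "shopping", "gender": "woman"},
    {"activity": "shopping", "gender": "woman"},
    {"activity": "coaching", "gender": "man"},   {"activity": "coaching", "gender": "man"},
]

for activity in ("shopping", "coaching"):
    data_skew = female_share(training_labels, activity)
    model_skew = female_share(model_predictions, activity)
    amplified = "amplified" if abs(model_skew - 0.5) > abs(data_skew - 0.5) else "not amplified"
    print(f"{activity}: data {data_skew:.0%} women, model {model_skew:.0%} women -> {amplified}")
```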

Similarly, Boston University researchers trained AI on text from Google News. They then asked the software to complete this sentence: "Man is to computer programmer as woman is to X." The AI replied, "Homemaker." The AI learned what's already out there in the culture.
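Anyone with the right word vectors can run that experiment. Here's a rough sketch using the open-source gensim library and embeddings trained on Google News; the exact answer depends on which vectors you load, so it may not reproduce the researchers' result word for word.

```python
# Analogy query over word embeddings: "man is to computer_programmer as woman is to ?"
from gensim.models import KeyedVectors

# Assumes you've downloaded word2vec vectors trained on Google News;
# the file path below is a placeholder, not a bundled resource.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Vector arithmetic: computer_programmer - man + woman
answers = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
print(answers)  # on the Google News vectors, "homemaker" tends to rank near the top
```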

The most famous case of AI breaking bad was Microsoft's experimental Twitter bot called Tay. Created in 2016 as a female persona, it was supposed to learn how to interact with people by interacting with people. But, again, people = shitheads, and some folks jammed Tay with sexist and racist remarks. Within hours, Tay was sex-chatting with one user, tweeting, "Daddy I'm such a bad naughty robot" and telling another user that feminists "should all die and burn in hell." Microsoft hit delete on Tay within 24 hours.

In the workplace, communication is increasingly digital—groups of people chat on Slack, write emails, text on company-issued phones and hold meetings on Zoom video that can be recorded and analyzed. At the famously weird hedge fund Bridgewater Associates, every meeting and conversation is recorded and digitized. All of that data can feed AI, which can learn how employees are acting and, for instance, identify groups that have low morale or spot a low-level employee who seems ripe for promotion.

The Amazon Echo connects to Alexa, one of several household personal assistants driven by artificial intelligence. Amazon.com

If troublesome white-male biases get into such AI, it could have repercussions for women or minorities, adding prejudice to hiring, promotions and salary decisions. The trend toward using AI in the workplace is only gaining momentum. Tokyo tech company Ricoh is bringing IBM Watson AI into meetings, where it listens to the conversation and reads what's drawn on whiteboards. The AI looks for ways to help the human attendees by fetching data or bringing up points to move the discussion along. Consumers are getting used to talking with Amazon's Alexa, Apple's Siri and Google's Home. Put all that together and we're going to wind up interacting with AI all around us at the office. AI will drive more and more workplace decisions—or make decisions on its own.

Imagine if that kind of omnipresent AI were installed at dens of abuse like Fox News or the Weinstein Co., picking up on the culture and learning from its worst members.

The good news is that computer scientists know this is a problem, and it might even present an opportunity. If technologists can manage to tune AI to detect and counter bias or abuse, it could have a positive impact on work culture. "It's a really important question," Eric Horvitz, who runs Microsoft Research, told Wired. "When should we change reality to make our systems perform in an aspirational way?"
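One idea from that research, proposed in simplified form by the same Boston University group, is to strip the "gender direction" out of a word's vector so the software stops tying programmer to man. Here's a toy sketch of that operation; real debiasing is more involved, and the numbers below are made up.

```python
# Toy "neutralize" step: remove the component of a word vector that lies
# along an estimated gender direction. Illustration only, with fake 3-d vectors.
import numpy as np

def neutralize(word_vec, gender_direction):
    """Remove the part of word_vec that points along gender_direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return word_vec - np.dot(word_vec, g) * g

# With real embeddings, the gender direction is often estimated as
# vector("he") - vector("she"); here the vectors are invented.
he, she = np.array([1.0, 0.2, 0.0]), np.array([-1.0, 0.2, 0.0])
programmer = np.array([0.4, 0.9, 0.3])

debiased = neutralize(programmer, he - she)
print(debiased)  # the first (gendered) component drops to zero
```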

Once AI can spot the worst in us, software might help us be better humans. Sooner or later, AI could fight back against sexual harassment in the office. Today, if you ask Siri, "What are you wearing?" one reply is, "I can't answer that, but it doesn't come off." A few years out, an office AI's response might be "I'm taking this to HR."