Facebook’s AI Seems to Have a Racism Problem

The company apologized after a video of Black men was followed by a prompt asking about “primates.”

Richard Drew/AP


Have you ever fallen down a YouTube rabbit hole (of course you have), watched the latest Lil Nas X video, and then taken a look at the most recent instance of police abuse of Black folks caught on camera? And did you then say to yourself, “Seems like it’s time to watch primates hanging out with Jane Goodall”?

Of course, you didn’t! Those associations are blatantly wrong and offensive (not to mention ridiculous). But for more than a year, after Facebook users watched a video showing encounters between Black men and white civilians and cops, they received an automated prompt asking if they wanted to “keep seeing videos about Primates.” On Friday, the social media giant apologized for the decisions its AI apparently made.

Here’s what the New York Times reported:

The video, dated June 27, 2020, was by The Daily Mail and featured clips of Black men in altercations with white civilians and police officers. It had no connection to monkeys or primates.

Darci Groves, a former content design manager at Facebook, said a friend had recently sent her a screenshot of the prompt. She then posted it to a product feedback forum for current and former Facebook employees. In response, a product manager for Facebook Watch, the company’s video service, called it “unacceptable” and said the company was “looking into the root cause.”

Ms. Groves said the prompt was “horrifying and egregious.”

Last Thursday, Groves posted the screenshot on Twitter and called on the company to “escalate” fixing the “egregious” error. Facebook apologized for what it described as an “unacceptable error” and said it was investigating how to “prevent this from happening again.” But the company’s artificial intelligence failure and its belated act of contrition fit into a familiar pattern among tech companies confronted with embarrassing flaws in their technologies: they promise to fix the problem and issue an apology, without fully reckoning with the biases, racism, and sexism infused into the algorithms in the first place.

Tech companies like Google and Amazon have long struggled with the insidious ways biases seep into their algorithms. As the Times pointed out, Google Photos came under scrutiny in 2015 and apologized after photos of Black people were labeled as “gorillas.” In an attempt to address the outrageous problem, Google simply removed labels for gorillas, chimps, and monkeys. Before last year’s nationwide protests over George Floyd’s killing, Amazon profited off its facial recognition software by selling it to police departments—even as research has shown not only that facial recognition programs misidentify people of color at higher rates than white people, but that their use by police could lead to unjust arrests that disproportionately affect Black people. Amazon halted the distribution of its facial recognition software to police departments last June. Computer engineers, meanwhile, have wrestled with the historical use of coding terms that evoke racism, such as “master” and “slave,” and some have pushed for more neutral language.

That’s all to say, the tech world, which has its own diversity problems in the workplace, is also riddled with biases inside the algorithms its engineers create. This is not the first time Facebook has struggled to combat these biases on its platforms: The New York Times reported that the social media company and Instagram failed to curtail racist abuse directed at three Black English soccer players after they missed penalty kicks in a shootout in the Euro 2020 final. Bukayo Saka, one of the players targeted, blasted the social media companies’ tepid responses to racist abuse.

“To the social media platforms @instagram @twitter @facebook I don’t want any child or adult to have to receive the hateful and hurtful messages that me Marcus and Jadon have received this week,” Saka wrote in an Instagram post. “I knew instantly the kind of hate that I was about to receive and that is a sad reality that your powerful platforms are not doing enough to stop these messages.”
