Racist, Sexist AI Could Be A Bigger Problem Than Lost Jobs

By TheWAY - March 01, 2018


Photo via TED
Joy Buolamwini presenting her research at a TED conference.
Joy Buolamwini was conducting research at MIT on how computers recognize people’s faces when she noticed something strange.
Whenever she sat before a system's front-facing camera, it wouldn't recognize her face, even though it had worked for her lighter-skinned friends. But when she put on a simple white mask, the face-tracking animation suddenly lit up the screen.
Suspecting a more widespread problem, she carried out a study on the AI-powered facial recognition systems of Microsoft, IBM and Face++, a Chinese startup that has raised more than $500 million from investors.
Buolamwini showed the systems 1,000 faces, and told them to identify each as male or female.
All three companies' systems performed spectacularly well at identifying white faces, and men in particular.


But when it came to dark-skinned females, the results were dismal: there were 34% more errors with dark-skinned females than light-skinned males, according to the findings Buolamwini presented on Saturday, Feb. 24th, at the Conference on Fairness, Accountability and Transparency in New York.
As skin shades on women got darker, the chances of the algorithms predicting their gender accurately “came close to a coin toss.” For the women with the darkest skin, the face-detection systems got their gender wrong close to half the time.
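The kind of disparity Buolamwini reported comes from breaking a benchmark's error rate down by intersectional group instead of reporting a single overall number. A minimal sketch of that bookkeeping in Python, using purely illustrative records rather than her dataset or any vendor's API, might look like this:

    # Minimal sketch of an intersectional error-rate audit, in the spirit of
    # Buolamwini's benchmark. The records below are illustrative, not real data.
    from collections import defaultdict

    # Each record: (skin_tone, gender, true_label, predicted_label)
    predictions = [
        ("darker", "female", "female", "male"),
        ("darker", "female", "female", "female"),
        ("lighter", "male", "male", "male"),
        ("lighter", "female", "female", "female"),
        # ... one row per benchmark image
    ]

    errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
    for skin, gender, truth, pred in predictions:
        group = f"{skin} {gender}"
        errors[group][1] += 1
        if pred != truth:
            errors[group][0] += 1

    for group, (wrong, total) in sorted(errors.items()):
        print(f"{group}: error rate {wrong / total:.1%} ({wrong}/{total})")

Run over a full benchmark, a breakdown like this makes gaps such as the 34% figure visible at a glance.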
Buolamwini’s project, which became the basis of her MIT thesis, shows that concerns about bias are adding a new dimension to the general anxiety around artificial intelligence.
While much has been written about ways that machine learning will replace human jobs, the public has paid less attention to the consequences of biased datasets.
What happens, for instance, when software engineers train their facial-recognition algorithms primarily with images of white males? Buolamwini's research showed the algorithm itself becomes prejudiced.
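One place that prejudice can be caught early is in the training data itself: if one demographic dominates the images a model learns from, the model ends up tuned to that group. A rough sketch of a pre-training composition audit, with assumed metadata fields and an arbitrary 15% threshold, could be as simple as:

    # Sketch: audit the demographic make-up of a face-image training set
    # before training. Field names and the 15% threshold are assumptions,
    # not taken from any particular dataset.
    from collections import Counter

    training_metadata = [
        {"skin_tone": "lighter", "gender": "male"},
        {"skin_tone": "lighter", "gender": "male"},
        {"skin_tone": "lighter", "gender": "female"},
        {"skin_tone": "darker", "gender": "female"},
        # ... one entry per training image
    ]

    counts = Counter((m["skin_tone"], m["gender"]) for m in training_metadata)
    total = sum(counts.values())
    for (skin, gender), n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented" if share < 0.15 else ""
        print(f"{skin} {gender}: {n} images ({share:.0%}){flag}")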
Another example came to light in 2016, when Microsoft released its AI chatbot Tay onto Twitter. Engineers programmed the bot to learn human behavior by interacting with other Twitter users. After just 16 hours, Tay was shut down because its tweets had become a stream of sexist, pro-Hitler messages.
Experts later said Microsoft had done a fine job of teaching Tay to mimic human behavior, but hadn't taught it enough about which behavior was appropriate.
Suranga Chandratillake, a leading venture capitalist at Balderton Capital in London, UK, says bias in AI is as concerning an issue as job destruction.
“I’m not negative about the job impact,” he says. The bigger issue is building AI-powered systems that take historical data, then use it to make judgements.
“Historical data could be full of things like bias,” Chandratillake says from his office in Kings Cross, which is just up the road from the headquarters of Google’s leading artificial intelligence business, DeepMind.
“On average people approve mortgages to men or people who are white, or from a certain town.” When the power to make that judgement is given to a machine, the machine “encodes that bias.”
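To see what "encoding" a bias means in practice, imagine a toy lender whose model does nothing more than replay historical approval rates, with a seemingly neutral feature such as the applicant's town standing in for a group of people. This is an illustrative sketch, not a description of any real lending system:

    # Sketch of how a model trained on historical decisions reproduces them.
    # Toy data; "town" acts as a proxy feature correlated with a group.
    from collections import defaultdict

    # Historical decisions: (town, approved)
    history = [
        ("northtown", True), ("northtown", True), ("northtown", True),
        ("northtown", False),
        ("southtown", True), ("southtown", False), ("southtown", False),
        ("southtown", False),
    ]

    # "Training": learn the historical approval rate per town.
    stats = defaultdict(lambda: [0, 0])  # town -> [approved, total]
    for town, approved in history:
        stats[town][1] += 1
        if approved:
            stats[town][0] += 1

    def model_approves(town):
        # Approve if the town's historical approval rate is at least 50%,
        # i.e. simply replay the past.
        approved, total = stats[town]
        return approved / total >= 0.5

    # Otherwise identical applicants from different towns get different outcomes.
    for town in ("northtown", "southtown"):
        print(town, "->", "approve" if model_approves(town) else "deny")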
So far the examples of bias caused by algorithms have seemed mundane, but in aggregate they can have an impact, especially with so many companies racing to incorporate AI into their apps and services. (Mentions of "AI" in earnings calls have skyrocketed over the past year, according to CB Insights, even from unlikely companies like Procter & Gamble or Bed Bath & Beyond.)
In recent months several researchers have pointed out that even Google Translate shows signs of sexism, automatically suggesting words like “he” for male-dominated jobs and “she” for female-dominated ones when translating from a gender-neutral language like Turkish.
Camelia Boban, a software developer in Italy, also noticed on Feb. 4th that Google Translate didn’t recognize the female term for “programmer” in Italian, which is programmatrice. (She said in a recent email to Forbes that the issue has since been corrected.)
Such examples might sound surprising when you expect software to be logical and objective. “People believe in machines being rational," Chandratillake says. "You end up not realizing that actually, what should be meritocratic, isn’t at all. It’s just an encoding of something that wasn’t in the first place.”
When humans make important decisions about hiring, or granting a bank loan, they’re more likely to be questioned about their judgement. There's less reason to question AI because of “this veneer of innovative new tech," he says. "But it’s destined to repeat the errors of the past.”
Today's engineers are also overly focused on building algorithms to solve complex problems, rather than building a second algorithm that monitors and reports on how the first one is performing -- a kind of algorithmic watchdog.
“Today the way a lot of AI is configured, is basically as a black box,” he adds. “Neural networks are not good at explaining why they made a decision.”
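One hedged illustration of such a watchdog is a thin wrapper that sits in front of the black box, logs every decision alongside a group label, and reports acceptance rates per group. The class and field names below are hypothetical, not part of any existing library:

    # Sketch of an "algorithmic watchdog": a wrapper that monitors a primary
    # model's decisions and reports per-group acceptance rates. The primary
    # model is any callable returning True/False; here it is a toy scoring rule.
    from collections import defaultdict

    class Watchdog:
        def __init__(self, primary_model):
            self.model = primary_model
            self.log = defaultdict(lambda: [0, 0])  # group -> [positives, total]

        def decide(self, features, group):
            outcome = self.model(features)
            self.log[group][1] += 1
            if outcome:
                self.log[group][0] += 1
            return outcome

        def report(self):
            for group, (pos, total) in sorted(self.log.items()):
                print(f"{group}: {pos}/{total} positive decisions ({pos / total:.0%})")

    # Example: wrap a toy decision rule and review its behavior by group.
    watchdog = Watchdog(lambda features: features["score"] > 0.6)
    watchdog.decide({"score": 0.9}, group="group A")
    watchdog.decide({"score": 0.4}, group="group B")
    watchdog.report()

Such a monitor doesn't explain why the underlying network made a decision, but it can surface disparities that would otherwise stay hidden inside the black box.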
MIT’s Buolamwini points to the lack of diversity in images and data used to train algorithms.
Fortunately, this is an issue that can be worked on.
After MIT's Buolamwini sent the results of her study to Microsoft, IBM and Face++, IBM responded by replicating her research internally, and releasing a new API, according to a conference goer who attended her presentation on Saturday.
The updated system now classifies darker-skinned females with a success rate of 96.5%.

SOURCE: https://www.forbes.com/sites/parmyolson/2018/02/26/artificial-intelligence-ai-bias-google/#acd182c1a015
