Why diversity in artificial intelligence development matters

By TheWAY - May 07, 2019

At the HUE Tech Summit, technologists of color discussed the dangers of non-inclusive AI — and what we can do about it.

The AI panel at HUE Tech Summit. (Photo by Holly Quinn)
There is a flawed but common notion when it comes to artificial intelligence: Machines are neutral — they have no bias.
In reality, machines are a lot more like people: if they’re taught implicit bias by being fed non-inclusive data, they will behave with bias.
This was the topic of the panel “Artificial Intelligence: Calculating Our Culture” at the HUE Tech Summit on day one of Philly Tech Week presented by Comcast.
“In Silicon Valley, Black people make up only about 2% [of technologists],” said Asia Rawls, director of software and education for the Chicago-based intelligent software company Reveal. “When Google [Photos] labeled Black people as apes, that’s not the algorithm. It’s you. The root is people. Tech is ‘neutral,’ but we define it.”
“Machine learning learns not by itself, but by our data,” said moderator Annalisa Nash Fernandez, intercultural strategist for Because Culture. “We feed it data. We’re feeding it flawed data.”
Often, the flaw is that the data isn’t inclusive. For example, when developers assume that the tech will react to dark skin the same as it does to light skin, they’re assuming a neutrality that doesn’t actually exist; the result is an automated soap dispenser that won’t sense dark skin.
“Implicit bias in computer vision technology means that cameras that don’t see dark skin are in Teslas, telling them whether to stop or not,” said Ayodele Odubela, founder of fullyConnected, an education platform for underrepresented people.
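To make that mechanism concrete, here’s a minimal, hypothetical sketch in Python. Everything in it (the sensor model, the reflectance numbers, the threshold rule) is invented for illustration; it simply shows how a detector calibrated only on light-skin data can pass its own tests while failing on dark skin:

```python
import numpy as np

# Hypothetical illustration of non-inclusive calibration data.
# All numbers are invented; real sensors are far more complex.
rng = np.random.default_rng(0)

# Simulated reflectance readings (arbitrary units).
light_skin = rng.normal(loc=0.80, scale=0.05, size=500)  # the only "calibration" data
dark_skin = rng.normal(loc=0.45, scale=0.05, size=500)   # never collected

# "Calibration": choose a detection threshold from the collected data alone.
threshold = light_skin.mean() - 3 * light_skin.std()

print(f"threshold: {threshold:.2f}")
print(f"detects light skin: {(light_skin > threshold).mean():.0%}")  # ~100%
print(f"detects dark skin:  {(dark_skin > threshold).mean():.0%}")   # ~0%
```

The sketch passes every test it was given, which is the trap: nothing in the calibration process flags the missing group. Collecting readings across skin tones before setting the threshold would surface the failure immediately, which is exactly the data-inclusivity point the panelists were making.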
If there’s a positive note, panelists said, it’s that companies are learning to expand their data sets when a lack of diversity in their product development becomes apparent.
AI can expose bias, too. Odubela works with Astral AR, a Texas-based company that’s part of FEMA’s Emerging Technology Initiative. The company builds drones that can intervene when someone, including a police officer, pulls a gun on an unarmed person, and can actually stop the bullet they fire.
“It can identify a weapon versus a non-weapon and will deescalate a situation regardless of who is escalating,” Odubela said.
What can be done now to make AI and machine learning less biased? More people from underrepresented groups are needed in tech, but even if you’re not working in AI (or in tech at all), there’s one ridiculously simple thing you can do to help make the datasets more representative: take those surveys when they pop up on your screen asking for feedback about a company or digital product.
“Take a survey, hit the chatbot,” said Amanda McIntyre Chavis, chief experience officer of Women of Wearables. “They need the data analytics.”
“People don’t respond to those surveys, then they complain,” said Rawls. “I always respond, and I’ll go off in the comments.”
Ultimately, if our machines are going to be truly unbiased anytime soon, there needs to be an understanding that humans are biased, even when they don’t mean to be.
“We need to get to a place where we can talk about racism,” said Rawls.
If we don’t, eventually the machines will probably be the ones to bring it up.
