AI Bias Adds Complexity To AI Systems

By TheWAY - October 16, 2019

One of the biggest issues with Artificial Intelligence and Data Science is the integrity of our data. Even if we do all the right things in our models and our testing, and even if the data conforms to some technical standard of “cleanliness,” there may still be biases in our data, as well as “common sense” issues. With Big Data, it is difficult to validate data at a fine granularity without proper real-world testing. By real-world testing, we mean that when data is being used to make decisions, we, as consumers, testers, programmers, and data scientists, look at groups of scenarios to see whether the decisions made conform to a kind of “common sense” standard. This is when we discover the most important biases in our data. It is also when we discover the real impact of the decisions made by our AI systems.
The impact of AI systems
With the speedy proliferation of AI technology, it is not hard to see where the impact of AI systems may lie. Consider China, which is aggressively implementing a Social Credit System based on Artificial Intelligence. The system’s decisions will drive a citizen’s ability to travel, receive government services, take out loans, and receive an education. They will also drive a corporation’s ability to conduct business, obtain capital, and make a profit. Needless to say, in such a system, the impact is immense. In the United States, AI systems are implemented by corporations to gain cost savings and efficiency. These types of AI systems tend to work alongside humans: portfolio management systems that execute automatic trading strategies, AI-assisted surgery, AI-assisted medical diagnosis, and so on. When AI systems start to make independent judgments about a person’s quality of life without checks and balances, as with systems that monitor students’ emotions in classrooms to gauge engagement, or systems that decide whether someone should be incarcerated, issues such as bias in AI data and privacy should prompt searching questions from our legal and political systems and our media. The reason for such scrutiny is precisely that these AI systems can easily impact someone’s life and infringe on someone’s liberties as defined by our Constitution.
Why is bias so important for AI systems?
When we talk about AI and machine learning, we are mostly talking about the widely used Deep Learning algorithms that use Neural Networks to learn from data generated by humans. That data is often collected from real life, for example through social media. In the process of collecting it, many kinds of bias can enter: data-collection bias, cognitive bias, social bias, algorithmic bias, and so on. AI algorithms replicate our decision-making process by learning from data, and this data is not inherently objective. Past data can contain biases, human emotions, interests, and perspectives. Therefore, AI’s decisions are subjective. An objective decision is one that is not influenced by one’s personal feelings, perspectives, interests, and biases.
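To see how faithfully a model absorbs the biases in its training data, consider a minimal sketch in Python. The data here is synthetic and the scenario (historical hiring decisions) is invented purely for illustration, not drawn from any real system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Historical labels: hiring was driven by skill, but group 1 was
# systematically penalized by past human decision makers.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned weight on `group` comes out strongly negative: the model
# has faithfully learned the historical bias, not an objective truth.
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])
```

The model has not discovered anything objective about the penalized group; it has simply learned to reproduce the bias that past human decisions encoded in the labels.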
Different types of data that affect AI’s decision process
The types of data that an AI system learns from drive the subjectivity of the decisions made by the system. For instance, an AI system that learns from MRI images of the human body to look for a specific tumor is likely more objective than an AI system that learns from social media tweets to identify trolls. With “common sense,” we can see that image data captured by an MRI machine is more objective than tweets from people responding to events. It is the source of the data that the AI system learns from that introduces the bias.

One of the biggest biases that may be introduced by subjective information is the social context of the data. In an AI system used to analyze tweets, each tweet carries not only the author’s opinions but also the context in which it was written. For instance, the tweets that the author read before authoring the tweet in question can alter the meaning of that tweet. Another example is the concept of “dark humor.” “Dark humor” can be perceived as negative commentary on social media. Identifying humor is very subjective and depends on the context of the text. Taken out of context, “dark humor” using words with negative connotations can be perceived as harassment.
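One hedged sketch of how a system might guard against this: classify a tweet both alone and with its surrounding thread, and defer to a human when the two judgments disagree. The `classify` function below is a hypothetical placeholder for whatever model is in use, not a real API:

```python
from typing import Callable, List

def context_check(tweet: str,
                  thread_context: List[str],
                  classify: Callable[[str], str]) -> str:
    """Classify a tweet alone and within its thread; if the two labels
    disagree, defer to human review rather than trusting the
    out-of-context judgment."""
    label_alone = classify(tweet)
    label_in_context = classify(" ".join(thread_context + [tweet]))
    if label_alone == label_in_context:
        return label_alone
    return "needs_human_review"  # context changed the meaning
```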
The ability to forget is central to solving AI’s bias
In the land of biased data, we humans have the unique ability to forget. This ability allows us to discount past events that are anomalies in favor of newly established norms. It allows us to let go of our biases when new values are learned and internalized. This ability to forget allows us to become “better” humans. Artificial Intelligence is not so lucky: it does not possess this ability to forget. AI systems are created to “learn,” and they will learn as much as they can. This means that the inherent biases introduced by the data will stay within the system. Even though newly acquired information can cause the system to place less importance on old data, that data is still, nevertheless, there. It can still affect the outcomes of decisions made under certain conditions. When decisions place a higher emphasis on biased information, and there are no adequate checks and balances, those decisions will not be reliable.
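One partial remedy, sketched below under the assumption of a scikit-learn-style model, is to down-weight training examples as they age so that older biases lose influence. Note that this only de-emphasizes old data; as the paragraph above points out, the data is still there:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_with_forgetting(X, y, age_in_days, half_life_days=365.0):
    """Fit a classifier whose training examples lose influence as they
    age: an example one half-life old counts half as much. The old
    examples are de-emphasized, never actually removed."""
    weights = 0.5 ** (np.asarray(age_in_days) / half_life_days)
    return LogisticRegression().fit(X, y, sample_weight=weights)
```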
The ability to make “fair” judgments is central to solving AI’s bias
If an AI system is used to decide on the mental health of an individual without checks and balances, the diagnosis can be made with biases. Mental health diagnoses often require confirmation by multiple professionals, and they are usually made with data that is not only subjective but can also carry a myriad of social contexts. A mental health diagnosis can potentially impact a person’s employment and quality of life, so it needs to be made with caution. The question becomes: how do we establish “fairness” in AI systems’ treatment of data? The question of “fairness” is often about a line drawn according to established norms. By questioning our established norms, we can question the perceived “fairness” we use to judge the effectiveness of AI systems’ decisions.
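One common way researchers make a “fairness” line concrete is demographic parity: the gap between groups in the rate of favorable decisions. Here is a minimal sketch; the metric itself is standard, while the acceptable threshold is exactly the normative question raised above:

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """decisions: 0/1 array of favorable model decisions; group: 0/1
    array of group membership. Returns the absolute gap
    |P(favorable | group 1) - P(favorable | group 0)|."""
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    return abs(decisions[group == 1].mean() - decisions[group == 0].mean())
```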
Successful AI systems have one of two notable features
Because of the biases inherent in today’s AI systems, the systems that are highly effective in the marketplace have one of two notable features: 1) the system uses observed data that is inherently highly objective, or 2) the system’s decisions do not have a critical impact on people’s lives. Companies that are utilizing AI systems based on highly subjective data are now trying to establish processes and procedures to check and balance the decisions made by those systems. Researchers, on the other hand, are trying to develop more sophisticated methods for AI to “unlearn” data, to detect “context,” and to internalize norms. These combined efforts will allow us to understand individual cases of bias related to particular uses of AI systems.
How can “common sense” help?
In AI systems that use highly “subjective” data, testing the data with real-world scenarios means injecting “common sense” into decision making. Humans have the unique ability to process data using our cognitive and emotional mechanisms to gain unparalleled understanding. Through the process of understanding, we discard unwanted information, focus on important information, put information into social context, and inject the needed ethical boundaries to simplify information and make “common sense” decisions. AI attempts to replicate the human process of decision making, but it is only able to replicate some of this process. This is where “common sense” testing of real-world scenarios is helpful. When groups of possible biases related to outcomes are identified for review based on real-world scenarios, it is much easier to see where AI systems need improvement before they can function without human checks and balances.
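In practice, “common sense” testing can be written down as ordinary unit tests: hand-crafted real-world scenarios paired with the outcome a sensible human would expect. A minimal sketch, where `model_decide` and the scenarios are invented placeholders for whatever system is under test:

```python
# Hand-crafted scenarios paired with the outcome a sensible human
# would expect; both the scenarios and `model_decide` are invented
# placeholders, not drawn from any real system.
SCENARIOS = [
    ({"text": "dark joke shared among friends"}, "benign"),
    ({"text": "repeated targeted insults"}, "harassment"),
]

def run_common_sense_tests(model_decide):
    failures = []
    for inputs, expected in SCENARIOS:
        actual = model_decide(inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures  # each failure is a candidate bias to investigate
```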
For instance, consider an AI system used to flag possible criminals for scrutiny; “common sense” testing might involve “observed” data from daily life. Suppose a minority teenager who just turned 18 is identified by the system as a possible criminal. The system looked at the facts that the teenager lives in a housing project rampant with crime, that his mother has substance abuse issues, that he is of African American descent from a low-income family, that he attends a school with a high crime rate and a low graduation rate, and that his brother has a long criminal record. The only positive factor the system saw is that the teenager maintains all A’s at school. He also spent most of his life at his grandmother’s house, where his grandmother taught him to play the piano and enriched his life beyond his current circumstances. But since the negative factors in this teenager’s life outweigh the positive factors, he was identified as a possible criminal.

“Common sense” might suggest an additional evaluation of this teenager’s life by an objective bystander. If a bystander simply went up to the teenager and spent an afternoon talking to him, the bystander would see that the teenager is well-mannered, has aspirations, is focused on his studies, and is actively working toward a better life. In this case, the “common sense” judgment can add positive factors to be considered in the teenager’s case, and those factors will help the system make a more informed decision. From this testing, we can see that the AI system lacks other critical information that should be considered, such as behavior inside and outside the school, the reality of his living arrangements, and his social circles. Even though our “common sense” testing added another layer of complexity to the data, it also allowed a better decision to be made. While it may not be trivial for an AI system to obtain this information, it may be trivial for a person to obtain it. If the person obtaining the information is “objective enough,” then the additional layer of “common sense” checks makes the AI’s ultimate decision in this case far more well-rounded and less biased. Not every case or every criterion should be evaluated by AI in an AI-assisted system. Certain criteria evaluated by humans do not taint the AI’s decisions; rather, they help give the AI system a more well-rounded picture.
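To make the arithmetic of this example concrete, here is a deliberately toy illustration with invented factor weights; no real risk model is this simple, but it shows how the added “common sense” factors can flip a naive weighted-sum decision:

```python
# Invented weights for the factors described above; a toy model only.
NEGATIVE = {"high_crime_housing": -2, "family_substance_abuse": -2,
            "low_income": -1, "high_crime_school": -1,
            "sibling_criminal_record": -2}
POSITIVE = {"straight_A_student": +3}
# Factors an objective bystander could add after one afternoon.
COMMON_SENSE = {"stable_grandmother_home": +2, "plays_piano": +1,
                "well_mannered_and_aspirational": +3}

def risk_score(factors):
    return sum(factors.values())

flagged_before = risk_score({**NEGATIVE, **POSITIVE}) < 0                 # score -5: flagged
flagged_after = risk_score({**NEGATIVE, **POSITIVE, **COMMON_SENSE}) < 0  # score +1: not flagged
print(flagged_before, flagged_after)  # True, False
```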
Responsibility of Lawmakers
When AI system implementations can contain many biases, and injecting “common sense” testing is not trivial, the resources needed on AI projects can quickly multiply. The lawmakers’ job is not to put limits on the proliferation of AI systems; lawmakers instead become directors. They can direct the trend of AI proliferation to safeguard an individual’s liberties as defined by our Constitution. By placing specific “regulations” rather than outright “bans” to delay certain aspects of AI proliferation in industries that use highly “subjective” data to make decisions with a big impact on people’s lives, lawmakers give researchers more time to catch up on developing AI technologies, and they give corporations time to put real-life scenarios into place to evaluate AI systems thoroughly with “common sense.” In this case, being responsible and safeguarding our liberties in the age of AI means setting industry standards for testing with real-life scenarios that inject “common sense” into the decision process.
Conclusion
AI bias is difficult to overcome, but overcoming it is a joint effort of corporations, researchers, lawmakers, and the media. When many eyes are on the issues, we may have a lot more data, opinions about the data, and judgments passed; but with “common sense,” we come closer to equality, to human kindness, and to the protection of our constitutional liberties. That, to me, is an opportunity to exercise our democratic process.



