Can Artificial Intelligence Be Biased?
By TheWAY - January 25, 2019
Introduction
In pursuit of automation-driven efficiencies, rapidly evolving artificial intelligence (AI) tools and techniques (such as neural networks, machine learning, predictive analytics, speech recognition, natural-language processing and more) are now routinely used across nations: their governments, industries, organizations and academia (NGIOA), for navigation, translation, behavior modeling, robotic control, risk management, security, decision-making and many other applications.
As AI becomes democratized, these evolving intelligent algorithms are rapidly becoming prevalent in most, if not all, aspects of human and machine decision-making. While decision utilities such as intelligent algorithms have been in use for many years, there are rising concerns about the general lack of algorithmic understanding, poor usage practices, the bias rapidly penetrating automated decisions, and the lack of transparency and accountability. As a result, ensuring integrity, transparency and trust in algorithmic decision-making is becoming a complex challenge for the creators of algorithms, with huge implications for the future of society.
Human versus Machine Decision-Making Processes
Irrespective of cyberspace, geospace or space (CGS), since technology revolutions are driven not just by accidental discovery but also by societal needs, the question we all need to evaluate, individually and collectively, is whether there really is a need for decision-making algorithms, and if so, where and why. Furthermore, what gaps in the human decision-making process can decision-making algorithms fill?
Artificial intelligence tools and techniques are increasingly expanding and enriching decision support: coordinating the timely and efficient delivery of diverse data sources, analyzing evolving data sources and trends, producing defined forecasts, enforcing data consistency, quantifying the uncertainty of data variables, anticipating the human or machine user's data needs, presenting information to the user in the most appropriate forms, and suggesting courses of action based on the intelligence gathered. Understandably, this is being welcomed: in a fast-changing digital environment, it is becoming difficult for human decision-makers to keep up, analyze the mountains of growing data in front of them, and make informed and intelligent decisions.
However, even an algorithmic decision-making process faces complex challenges in reaching an informed decision. For example, it is difficult to know whether decision-making algorithms will be able to make effective decisions with current computing and data-analytics infrastructure and processing capability. While it seems very likely that artificial intelligence will become universal in most, if not all, aspects of decision-making in the near future, it will be interesting to see how the emerging competition between human and AI decision-making plays out.
Algorithmic Engineering Process and Penetration of Bias
While there are growing concerns about machine-learning decision-making models, AI is being woven into the very fabric of human society and everything individuals and entities do across nations: their governments, industries, organizations and academia (NGIOA), in cyberspace, geospace and space (CGS).
Moreover, it needs to be understood that a rapidly evolving machine-learning model is not a static piece of code: we are constantly feeding it data from diverse sources, and constantly training, re-training and fine-tuning how it makes predictions. At each of these steps in the data journey, humans currently play a significant and influential role. As a result, while machine-learning models become almost like living, breathing things, with a growing, dynamic CGS data ecosystem around them, the very involvement of humans brings with it the same complex human bias. Since we are trying to re-define and re-design systems that bring us more trust and transparency, there is a clear need to promote equality, transparency and accountability in algorithm design and development for decision-making, and to ensure that data transparency, training, review and remediation are considered throughout the entire algorithmic engineering process.
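To make the point concrete, here is a minimal sketch of where human choices enter a typical supervised-learning pipeline. The tiny inline dataset, column names and model choice are all hypothetical illustrations, not a reconstruction of any real system:

```python
# A minimal sketch of the human decision points in an ML pipeline.
# Dataset, column names, and model family are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Human choice #1: which historical records get collected at all.
df = pd.DataFrame({
    "age":          [25, 40, 33, 51, 29, 62],
    "income":       [32_000, 80_000, 45_000, 72_000, 38_000, 90_000],
    "prior_events": [2, 0, 1, 0, 3, 0],
    "outcome":      [0, 1, 0, 1, 0, 1],   # human choice #2: how the label is defined
})

features = ["age", "income", "prior_events"]  # human choice #3: which features to keep

model = LogisticRegression()                  # human choice #4: model family and settings
model.fit(df[features], df["outcome"])        # the model inherits whatever bias the
                                              # labeled history already contains
print(model.predict(df[features]))
```

Each commented line is a point where a person's judgment, and therefore potentially a person's bias, is baked into the resulting model.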
According to ProPublica, an investigative journalism organization, a computer program used by US courts across the nation has been reported to be biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), mistakenly flagged black defendants as likely to reoffend at almost twice the rate of white defendants (45% versus 24%). The program likely factored the higher arrest rates for black people into its predictions, but it could not escape the same racial biases that contributed to those higher arrest rates in the first place. Bias has also been reported in granting credit to home buyers, even going as far as to potentially violate the Fair Housing Act. Default rates may be higher in some neighborhoods, but an algorithm using that information to make black-and-white calls runs the risk of heading into "red-lining" territory. Examples abound, with plenty of cases showing AI and technology to be both sexist and racist. Let's not forget Google's search algorithm including black people in the results of a search on "gorilla."
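The disparity ProPublica described is a difference in false positive rates: among people who did not go on to reoffend, what fraction was flagged high-risk in each group? A small worked sketch, using toy arrays rather than the actual COMPAS data, shows how the measurement works:

```python
# Illustrative computation of per-group false positive rates.
# Toy data only; not the COMPAS dataset.
import numpy as np

flagged    = np.array([1, 1, 0, 1, 0, 1, 0, 0])  # 1 = flagged likely to reoffend
reoffended = np.array([0, 1, 0, 0, 0, 1, 0, 0])  # observed outcome afterwards
group      = np.array(["b", "b", "b", "b", "w", "w", "w", "w"])

for g in ("b", "w"):
    mask = (group == g) & (reoffended == 0)      # people who did NOT reoffend
    fpr = flagged[mask].mean()                   # share of them wrongly flagged
    print(g, "false positive rate:", round(fpr, 2))
```

In this toy example the two groups end up with very different false positive rates even though the overall flagging rate looks similar, which is exactly the kind of gap the ProPublica analysis surfaced.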
While decision-making algorithms are not inherently biased, and algorithmic decision-making depends on a number of variables, including how the software is designed, developed and deployed and the quality, integrity and representativeness of the underlying data sources, there is a need for a new approach to defining and designing decision-making algorithms. We perhaps need adaptive computing that integrates intelligence gathering into its very fabric and does not rely on humans training the algorithms in how to make decisions.
Since it is important to evaluate the implications of bias penetrating decision-making algorithms, the next question is whether safeguards can be built into the algorithms from the earliest stages of development to prevent bias from penetrating them. Addressing this matters, because the very foundations of the systems being re-defined and re-designed depend on it.
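One plausible form such an early-stage safeguard could take is a fairness gate in the development pipeline: an automated check that fails the build when outcomes diverge too far between groups. The threshold, function names and single metric below are assumptions for illustration; real fairness auditing involves many more metrics and trade-offs:

```python
# Sketch of a "fairness gate" check run before deployment.
# Threshold and metric choice are hypothetical.
import numpy as np

MAX_FPR_GAP = 0.05  # tolerance a team might choose; not a standard

def false_positive_rate(flagged, actual, group, g):
    """Share of group g's actual negatives that the model flagged."""
    mask = (group == g) & (actual == 0)
    return flagged[mask].mean()

def fairness_gate(flagged, actual, group):
    """Raise (failing the build) if false positive rates diverge too far."""
    rates = {g: false_positive_rate(flagged, actual, group, g)
             for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    if gap > MAX_FPR_GAP:
        raise ValueError(f"FPR gap {gap:.2f} exceeds tolerance: {rates}")
    return rates

# Toy arrays where both groups are treated alike, so the gate passes.
flagged = np.array([1, 0, 1, 0])
actual  = np.array([0, 0, 0, 0])
group   = np.array(["a", "a", "b", "b"])
print(fairness_gate(flagged, actual, group))
```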
Now, since it is not possible to interrogate algorithms, and there are no effective rules or regulations around decision-making algorithms that focus on algorithmic accountability, how to remove bias remains a complex challenge facing society. Acknowledging this emerging reality, Risk Group initiated a much-needed discussion on algorithmic decision-making with Prof. (Dr.) Steve Omohundro, President at Possibility Research.
Disclosure: Risk Group LLC is my company.
Risk Group discusses Algorithmic Decision-Making with Prof. (Dr.) Steve Omohundro, President at Possibility Research, based in CA, United States.
Perhaps the key to making decision-making algorithms work for everyone on a fair and balanced playing field is to build in accountability, responsibility and neutrality from the very beginning, right in the code. If not, without question, all the effort being put into re-defining and re-designing the systems in cyberspace, geospace and space will bring no real value to society overall.
Data: Nature, Sources, Efficiency
The growing availability, volume and accumulation of diverse data sources can overwhelm any human decision-maker, whether the decisions in question are strategic or tactical. It is therefore important to evaluate what role this dynamic, growing data plays in how algorithms are structured to take the growing input into account. Further, irrespective of the nature or source of the data, do decision-making algorithms give consistent decisions across different scenarios, models, tools and techniques? And how is this tested?
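One rough way to test the consistency question is to score the same cases with several model configurations and measure how often their decisions agree. A minimal sketch, using placeholder synthetic data and two off-the-shelf scikit-learn models as stand-ins:

```python
# Sketch of a cross-model decision-consistency check.
# Data and model choices are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                              # placeholder features
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

models = [LogisticRegression(),
          RandomForestClassifier(n_estimators=50, random_state=0)]
predictions = [m.fit(X, y).predict(X) for m in models]

# How often do two differently built models reach the same decision?
agreement = (predictions[0] == predictions[1]).mean()
print(f"decision agreement across models: {agreement:.0%}")
```

Low agreement on the same inputs would be a warning sign that the "decision" owes as much to modeling choices as to the data itself.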
So, while efficiency seems to be at the core of many emerging automation applications, the transparency and integrity of the data on which algorithmic decisions are made will be critical to ensuring accountability. That brings us to another question: is there a way algorithms can detect data sources, credibility, authenticity and transparency, and rate the integrity of the algorithmic decision itself? An algorithmic disclaimer, perhaps?
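Such an "algorithmic disclaimer" could take the form of machine-readable provenance attached to every decision, so it can be audited later. The fields below are purely illustrative; no standard schema for this exists:

```python
# Sketch of a decision shipped with a provenance "disclaimer".
# Field names and schema are hypothetical.
import json
from datetime import datetime, timezone

def decide_with_disclaimer(model_name, model_version, data_sources, score):
    decision = {
        "decision": "approve" if score >= 0.5 else "deny",
        "score": score,
        "disclaimer": {
            "model": model_name,
            "version": model_version,
            "data_sources": data_sources,   # where the inputs came from
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(decision, indent=2)

print(decide_with_disclaimer("credit_scorer", "1.3.0",
                             ["bureau_feed", "application_form"], 0.62))
```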
The democratization of computing infrastructure allows anyone to build any algorithm any way they want. However, when it comes to decision-making applications for systems at all levels, global, national or local (be it government agencies, banks, credit agencies, courts, prisons, educational institutions and so on), there is a need for a global standard of best practices to define and determine whose algorithm can be used for equality, fairness and objectivity. That brings us to another question: who will test the different versions of these algorithms and rate them for public use?
The question now is not just whether humans or machines will make decisions; rather, it is whether intelligent algorithms replacing humans in decision-making will bring with them the same biases of race, religion, class, gender and ideology that are harmful to society. So, what do we do about it? And will we ever be able to build truly objective and transparent machines?
What Next?
The question today, for each of us to evaluate individually and collectively, is whether intelligent algorithms are and will remain an aid to the human decision-making process, or whether they will become the ultimate decision-makers. And if artificial intelligence becomes the decision-maker, what will be the implications of relinquishing decision-making control to intelligent machines? The very use of automated, AI-based decision-making techniques raises challenges for humanity as a whole.
Source: https://www.forbes.com/sites/cognitiveworld/2019/01/20/can-artificial-intelligence-be-biased/#6f59f1557e7c