Don't Trust Artificial Intelligence? Time To Open The AI 'Black Box'
Despite its promise, the growing field of Artificial Intelligence (AI) is experiencing a variety of growing pains. In addition to the problem of bias I discussed in a previous article, there is also the ‘black box’ problem: if people don’t know how AI comes up with its decisions, they won’t trust it.
In fact, this lack of trust was at the heart of many failures of one of the best-known AI efforts: IBM Watson – in particular, Watson for Oncology.
Experts were quick to single out the problem. “IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster,” says Vyacheslav Polonski, PhD, UX researcher for Google and founder of Avantgarde Analytics. “The problem with Watson for Oncology was that doctors simply didn’t trust it.”
When Watson’s recommendations agreed with the physicians’ own, it merely confirmed what they already knew and didn’t help them reach a diagnosis. When Watson disagreed, physicians simply concluded it was wrong.
If the doctors knew how it came to its conclusions, the results might have been different. “AI’s decision-making process is usually too difficult for most people to understand,” Polonski continues. “And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.”
DARPA’s illustration of Explainable Artificial Intelligence (XAI). Credit: DARPA
The Need for Explainability
If oncologists had understood how Watson had come up with its answers – what the industry refers to as ‘explainability’ – their trust level might have been higher.
Explainability has now become a critical requirement for AI in many contexts beyond healthcare. Take Twitter, for example. “We need to do a much better job at explaining how our algorithms work. Ideally opening them up so that people can actually see how they work. This is not easy for anyone to do,” says Jack Dorsey, CEO of Twitter. “In fact, there’s a whole field of research in AI called ‘explainability’ that is trying to understand how to make algorithms explain how they make decisions.”
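To make “explaining how they make decisions” concrete: one of the simplest forms of explanation is a per-feature breakdown of a single prediction. The sketch below is purely illustrative and assumes a Python/scikit-learn workflow; the feature names and data are invented for this example. For a linear model the breakdown is exact, while for deep models researchers approximate something similar locally with surrogate methods such as LIME or SHAP.

```python
# Minimal sketch of one common explainability technique: showing the
# per-feature contributions behind a single prediction. The dataset and
# feature names below are synthetic, invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["tumor_size", "patient_age", "marker_level"]

# Synthetic training data: 200 cases, 3 numeric features.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one prediction: with a linear model, each feature's contribution
# to the decision score is simply coefficient * feature value.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value, contrib in zip(feature_names, patient, contributions):
    print(f"{name:>12}: value={value:+.2f}, contribution={contrib:+.2f}")
print("intercept:", model.intercept_[0])
print("predicted class:", model.predict(patient.reshape(1, -1))[0])
```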
In other cases, people are calling upon AI to make life-or-death decisions. In such situations, explainability is absolutely critical.
It’s no surprise, therefore, that the US Department of Defense (DoD) is investing in Explainable AI (XAI). “Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners,” explains David Gunning, program manager at the Defense Advanced Research Projects Agency (DARPA), an agency of the DoD. “New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”
Healthcare, however, remains one of AI’s most promising areas. “Explainable AI is going to be extremely important for us in healthcare in actually bridging this gap from understanding what might be possible and what might be going on with your health, and actually giving clinicians tools so that they can really be comfortable and understand how to use,” says Sanji Fernando, vice president and head of the OptumLabs Center for Applied Data Science at UnitedHealth Group. “That’s why we think there’s some amazing work happening in academia, in academic institutions, in large companies, and within the federal government, to safely approve this kind of decision making.”
The Explainability Tradeoff
While organizations like DARPA are actively investing in XAI, there is an open question as to whether such efforts detract from the central priority of making AI algorithms better.
Even more concerning: will we need to ‘dumb down’ AI algorithms to make them explainable? “The more accurate the algorithm, the harder it is to interpret, especially with deep learning,” points out Sameer Singh, assistant professor of computer science at the University of California, Irvine. “Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are.”
Some AI research efforts may thus be at cross-purposes with others. “The more complex a system is, the less explainable it will be,” says John Zerilli, postdoctoral fellow at the University of Otago and researcher into explainable AI. “If you want your system to be explainable, you’re going to have to make do with a simpler system that isn’t as powerful or accurate.”
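A toy comparison illustrates the tradeoff Singh and Zerilli describe. The sketch below again assumes a Python/scikit-learn setup with synthetic data, so any accuracy gap it prints is illustrative only: a shallow decision tree’s entire rule set can be printed and read, while a larger random forest will typically score higher on the same task but offers no comparably compact account of any individual decision.

```python
# Sketch of the accuracy-vs-explainability tradeoff on a synthetic task.
# The data and any resulting accuracy gap are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree: less powerful, but its whole decision process
# can be printed as human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("shallow tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, max_depth=2))

# A 200-tree forest: typically more accurate, but there is no short,
# faithful summary of why it made any particular decision.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("random forest accuracy:", forest.score(X_test, y_test))
```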
The $2 billion that DARPA is investing in what it calls ‘third-wave AI systems,’ however, may very well be sufficient to resolve this tradeoff. “XAI is one of a handful of current DARPA programs expected to enable ‘third-wave AI systems,’ where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” DARPA’s Gunning adds.
Skynet, Explain Thyself
Resolving today’s trust issues is important, but the future holds trust issues of its own. Place the growing field of XAI into the dystopian future that some AI skeptics lay out, and we may have a path to preventing such nightmares.
Whether it be HAL 9000, Skynet, or any other Hollywood AI villain, there is always a point in the development of the evil program where it breaks free from the control of its human creators and runs amuck.
If those creators built explainability into such systems, however, then running amuck would be much less likely – or at least, much easier to see coming.
Thanks to XAI, therefore, we can all breathe a sigh of relief.
Source: https://www.forbes.com/sites/jasonbloomberg/2018/09/16/dont-trust-artificial-intelligence-time-to-open-the-ai-black-box/#504c7fea3b4a