
Learning to Trust Artificial Intelligence

September 25, 2019


A modern-day philosopher, who tweets under the pseudonym The Stoic Emperor, states, “Trust is built slowly. Trust is destroyed quickly. Trust can make complex things possible. The absence of trust can make simple things impossible. Trust powers relationships, businesses, nations. Trust is as precious as it is fragile.” Without trust, moving forward in almost any venture is difficult, if not impossible. That’s why business leaders, academics, and politicians are worried about the future of artificial intelligence (AI). Will Knight (@willknight) puts it this way, “No one really knows how the most advanced algorithms do what they do. That could be a problem. … This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith.”[1] Guru Banavar, Chief Science Officer of Cognitive Computing and Vice President of IBM Research, bluntly states, “If we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we will first need to trust it.”[2] He adds, “Trust of AI systems will be earned over time, just as in any personal relationship. Put simply, we trust things that behave as we expect them to. But that does not mean that time alone will solve the problem of trust in AI. AI systems must be built from the get-go to operate in trust-based partnerships with people.”


Many analysts believe that waiting to see if AI systems behave as expected isn’t sufficient to engender trust. They insist we need to understand what’s going on inside AI’s “black box” operations. For example, PwC analysts Anand Rao and Euan Cameron write, “If it is to drive business success, AI cannot hide in a black box.”[3] Jason Bloomberg (@TheEbizWizard), an IT industry analyst, agrees. He writes, “Despite its promise, the growing field of Artificial Intelligence is experiencing a variety of growing pains. In addition to the problem of bias, there is also the ‘black box’ problem: if people don’t know how AI comes up with its decisions, they won’t trust it.”[4]


Artificial intelligence’s black box


Most articles about AI’s black box problem focus on one type of AI: deep learning. Knight explains, “AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won’t happen — or shouldn’t happen — unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur — and it’s inevitable they will.” Mark van Rijmenam (@VanRijmenam), founder of Datafloq, writes, “Algorithms are black boxes and often, we don’t know why an algorithm comes to a certain decision. They can make great predictions, on a wide range of topics, but how much are these predictions worth, if we don’t understand the reasoning behind it? Therefore, it is important to have explanatory capabilities within the algorithm, to understand why a certain decision was made.”[5]
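
To make the “black box” complaint concrete, consider the minimal sketch below. It assumes scikit-learn and a synthetic dataset, both my own illustrative choices rather than anything cited above: a small neural network can score well on held-out data, yet the only “reasoning” it can surrender is a stack of weight matrices that mean nothing to a business user.

```python
# Minimal sketch of the "black box" problem (assumes scikit-learn and
# synthetic data as stand-ins for a real business dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical tabular data with 20 unnamed features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: accurate, but opaque.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))

# The only "explanation" the model itself offers is its raw weights:
# thousands of floating-point values with no business meaning.
for i, weights in enumerate(model.coefs_):
    print(f"Layer {i} weight matrix shape: {weights.shape}")
```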

Van Rijmenam asserts explainable AI can help generate the trust needed for organizations to take action based on AI’s analytic insights. He writes, “Explainable AI should be an important aspect of any algorithm. When the algorithm can explain why certain decisions have been / will be made and what the strengths and weaknesses of that decision are, the algorithm becomes accountable for its actions, just like humans are. It then can be altered and improved if it becomes (too) biased or if it becomes too literal, resulting in better AI for everyone.”
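
By contrast, the simplest form of explainability is a model whose decision path can be read directly. The sketch below (again scikit-learn on synthetic data, with hypothetical feature names) prints a shallow decision tree as plain threshold rules, which is the kind of rationale van Rijmenam argues an accountable algorithm should be able to give.

```python
# Minimal sketch of a directly readable model (scikit-learn, synthetic data,
# hypothetical feature names).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

# A shallow tree trades raw accuracy for a decision path a person can follow.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction can be traced to a short list of threshold rules.
print(export_text(tree, feature_names=feature_names))
```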


Opening the black box


Opening AI’s black box isn’t easy. Kanti S explains, “AI models are inbuilt so complex that it becomes very hard to describe what is being done; why, when and where. When AI is deployed making decisions more complicated and more accurate thus harder to explain the rationality behind them in real-terms.”[6] He once again emphasizes the hardest black box to open is the one used by deep learning. “Of the two types of AI,” he writes, “supervised is often mathematically driven, and explicitly programmed to be predictable and the second one, unsupervised learning is focused on using deep-learning; on an attempt to mimic the human brain. Unsupervised learning is fed on data and is expected to learn on its own which makes it nonlinear and chaotic, making it impossible to predict outputs ahead of time. However, the good news is that experts are working on ways and tools to assist with the generalized explanations and make AI, in both cases, more understandable.” Van Rijmenam notes the efforts to which Kanti S refers involve Explainable AI or XAI. He writes, “The objective of XAI is to ensure that an algorithm can explain its rationale behind certain decisions and explain the strengths or weaknesses of that decision. Explainable AI, therefore, can help to uncover what the algorithm does not know, although it is not able to know this itself. Consequently, XAI can help to understand which data sources are missing in the mathematical model, which can be used to improve the AI.”
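
One common family of XAI techniques probes a black-box model from the outside rather than trying to read its internals. The hedged sketch below uses scikit-learn’s permutation importance, which is my choice of illustration rather than a method named by Kanti S or van Rijmenam, to estimate which inputs a random forest actually relies on; the features a model leans on heavily are natural places to start when asking which data sources matter or are missing.

```python
# Minimal sketch of a model-agnostic XAI probe: shuffle each feature and
# measure how much the score drops (scikit-learn, synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the random forest as the "black box" being examined.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance estimates each feature's contribution from the outside.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```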


Concerning XAI efforts, Bloomberg asks, “Will we need to ‘dumb down’ AI algorithms to make them explainable?” Rao and Cameron believe the answer to that question may be yes. They explain, “It is important to note that there can be a trade-off between performance and interpretability. For example, a simpler model may be easier to understand, but it won’t be able to process complex data or relationships. Getting this trade-off right is primarily the domain of developers and analysts. But business leaders should have a basic understanding of what determines whether a model is interpretable, as this is a key factor in determining an AI system’s legitimacy in the eyes of the business’s employees and customers.”
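
The trade-off Rao and Cameron describe can be seen in miniature by fitting an interpretable model and a more complex one to the same task. The sketch below (scikit-learn on synthetic data; the numbers are illustrative, not a benchmark) compares a logistic regression, whose coefficients can be read directly, with a gradient-boosted ensemble, whose hundreds of trees cannot.

```python
# Minimal sketch of the interpretability/performance trade-off
# (scikit-learn, synthetic data; results are illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, n_informative=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

print("Logistic regression (interpretable):", simple.score(X_test, y_test))
print("Gradient boosting (harder to explain):", complex_model.score(X_test, y_test))

# The simple model's coefficients are directly readable; the ensemble's
# hundreds of trees are not.
print("Logistic regression coefficients:", simple.coef_.round(2))
```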


Enterra® addresses the black box problem using the Massive Dynamics™ Representational Learning Machine™ (RLM). The RLM helps determine what type of analysis is best suited for the data involved in a high-dimensional environment, and it accomplishes this in a “glass box” rather than a black box fashion. Glass box operations go a long way toward making AI a trusted business partner. Rao and Cameron conclude, “Opening the black box in which some complex AI models have previously functioned will require companies to ensure that for any AI system, the machine-learning model performs to the standards the business requires, and that company leaders can justify the outcomes. Those that do will help reduce risks and establish the trust required for AI to become a truly accepted means of spurring innovation and achieving business goals — many of which have not yet even been imagined.”


Concluding thoughts


Sameer Singh, an assistant professor of computer science at the University of California, Irvine, states, “Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are.”[7] Kanti S adds, “The concept of explainable AI is both possible and desirable. Reaching to clearer explanation frameworks for algorithms will provide the users and customers with better information, and could improve trust in these disruptive technologies over time.” Although I appreciate his optimism, I believe the deep learning black box is going to be very challenging to open.


Footnotes
[1] Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, 11 April 2017.
[2] Guru Banavar, “What It Will Take for Us to Trust AI,” Longitudes, 9 February 2017.
[3] Anand Rao and Euan Cameron, “The Future of Artificial Intelligence Depends on Trust,” strategy + business, 31 July 2018.
[4] Jason Bloomberg, “Don’t Trust Artificial Intelligence? Time To Open The AI ‘Black Box’,” Forbes, 16 September 2018.
[5] Mark van Rijmenam, “Algorithms are Black Boxes, That is Why We Need Explainable AI,” Datafloq, 31 January 2017.
[6] Kanti S, “Explainable Artificial Intelligence — the Magic Inside the Black Box,” Analytics Insight, 20 December 2018.
[7] Bloomberg, op. cit.
