
Alright, AI. Explain Yourself

December 5, 2023


We’ve all seen films about adolescent boys striking out and doing things their parents didn’t like. When caught in the middle of their misadventures, they were often confronted by their parents, who said, “Alright young man, you better explain yourself.” Today’s headlines are filled with concerns about artificial intelligence (AI) systems, especially when they produce unexpected, and sometimes incorrect, outputs. When that happens, business leaders are as concerned for their businesses as parents are for their children. And they have every right to say, “Alright, AI. Explain yourself.”


Referring to ChatGPT, a large language model (LLM), tech journalist Benj Edwards tweeted, “It’s possible that OpenAI invented history’s most convincing, knowledgeable, and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history.” Edwards is one of the few experts willing to call generative AI a liar when it produces falsehoods. Developers who believe such criticism is too harsh refer to these “lies” as “hallucinations.” Tech writer Oluwademilade Afolabi explains, “Artificial intelligence hallucination occurs when an AI model generates outputs different from what is expected. … A ChatGPT hallucination would result in the bot giving you an incorrect fact with some assertion, such that you would naturally take such facts as truth. In simple terms, it is made-up statements by the artificially intelligent chatbot.”[1] Hallucinations occur because the network has no true understanding of its answers; the natural result is that a portion of its responses turn out to be inaccurate or nonsensical.


When generative AI is used within a business, inaccuracy is one of its biggest obstacles. The only safe way to ensure accuracy is for humans to judge the veracity of generated results. Generative AI, however, is not the only AI system used in business. And even though hallucinations aren’t generally a problem for other types of AI models, their outputs can still be opaque — even puzzling. The more clearly an AI system can explain how it determines its outputs, the more trust users place in that system. In the past, the inability of AI systems to explain themselves has caused much concern and produced a general lack of trust. A survey conducted by the AI Policy Institute (AIPI) concluded, “American voters are worried about risks from AI technology.”[2] Despite political and cultural views that divide many American voters, they are surprisingly united in their concerns about AI. The AIPI survey found, “62% are concerned about artificial intelligence, while just 21% are excited.” Their concerns are fear-based: “83% believe AI could accidentally cause a catastrophic event.”


I know that many business leaders also harbor concerns about how AI could adversely affect their organizations should something go wrong. As long as AI systems operate as black boxes, those fears will remain. McKinsey analysts observe, “Businesses increasingly rely on artificial intelligence systems to make decisions that can significantly affect individual rights, human safety, and critical business operations. But how do these models derive their conclusions? What data do they use? And can we trust the results? Addressing these questions is the essence of ‘explainability,’ and getting it right is becoming essential.”[3] Mathematician Neil Raden puts it this way, “Too often, AI vendors tell us — ‘Machine learning handles that.’ So, what exactly does that mean? Vendors are making what I call Deus ex Machina claims about AI.”[4] Deus ex Machina literally means “god from the machine.” It was an ancient Greek theatrical plot device used when a seemingly unsolvable problem in a story was suddenly resolved by an unexpected and unlikely occurrence. To make AI/ML useful, this Deus ex Machina problem needs to be solved.


Before discussing AI’s interpretability or explainability problem, let me define a couple of important terms: artificial intelligence and machine learning (ML). Eric Siegel, a former computer science professor at Columbia University, writes, “A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it’s a hyped-up buzzword that confuses and deceives.”[5] He goes on to explain, “The much better, precise term would instead usually be machine learning — which is genuinely powerful and everyone oughta be excited about it.” Machine learning is mostly about recognizing patterns. Here’s the problem: while black-box ML is great at recognizing patterns, it cannot explain the drivers of its outputs, which inherently creates an interpretability problem (i.e., you know the output you received; you just don’t know how the system weighed the inputs to produce it). Fortunately, the industry is evolving to provide more explainability through glass-box machine learning technologies, which expose the key drivers of model outputs and help build trust in, and adoption of, AI across organizations. Today, the most advanced AI technologies maximize interpretability by combining the strongest aspects of human-like reasoning and generative AI with real-world optimization and glass-box machine learning. The result is practical, scalable, enterprise-grade solutions that generate highly accurate and trusted outputs in natural language and at the speed of the market.
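
To make the glass-box idea concrete, here is a minimal sketch of what “exposing the key drivers of model outputs” can look like in practice. It is not Enterra’s technology; it simply fits an interpretable model with the open-source scikit-learn library on synthetic data, using invented feature names, and reads the drivers straight from the model’s coefficients.

```python
# Minimal glass-box sketch: fit an interpretable model and report how each
# input drives the output. Data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
feature_names = ["price", "promotion_depth", "seasonality_index"]

# Synthetic demand data: mostly driven by price and promotion depth.
X = rng.normal(size=(500, 3))
y = -2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)

# A glass-box model exposes its drivers directly: each coefficient states how
# much the prediction moves per unit change in that input.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name:>18}: {coef:+.2f}")
```

A black-box model trained on the same data might predict just as well, but it would offer no comparably direct account of why.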


Let’s dig a little deeper into each of those areas. First, let’s look at a sub-area of AI called cognitive computing. A cognitive computing system falls short of artificial general intelligence (AGI) — that is, a sentient machine that thinks for itself — but it far surpasses basic ML techniques. A cognitive computing system utilizes human-like reasoning. Cognitive computing was developed to manage large and varied data sets and to answer the multitude of questions organizations encounter in a variety of circumstances. Cognitive computing can deal with ambiguities, whereas machine learning cannot. At Enterra Solutions®, we define artificial intelligence as having a machine reason in a human-like fashion about data in order to make decisions. That is why Enterra® is focusing on advancing Autonomous Decision Science™ (ADS®), which combines mathematical computation with semantic reasoning and symbolic logic. The Enterra ADS™ system analyzes data, automatically generates insights, makes decisions with the same subtlety of judgment as a human expert, and executes those decisions at machine speed with machine reliability.
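
The ADS system itself is proprietary, so the sketch below is only a generic, hypothetical illustration of the broader pattern described here: pairing a numerical estimate with explicit symbolic rules so that every decision comes with a human-readable justification. The rule wording, thresholds, and the decide_reorder function are all invented for illustration.

```python
# Hypothetical sketch: wrap a numerical forecast in explicit symbolic rules so
# every decision can be traced to a plainly stated reason. Rules and thresholds
# are invented for illustration; this is not Enterra's ADS system.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    reason: str  # the rule that fired, stated in plain language

def decide_reorder(forecast_units: float, on_hand_units: float, shelf_life_days: int) -> Decision:
    if shelf_life_days < 7:
        return Decision("hold", "Rule 1: remaining shelf life under 7 days; avoid overstock")
    if on_hand_units < 0.5 * forecast_units:
        return Decision("reorder", "Rule 2: on-hand inventory below half of forecast demand")
    return Decision("hold", "Rule 3: inventory adequate for forecast demand")

print(decide_reorder(forecast_units=1200, on_hand_units=400, shelf_life_days=30))
# -> Decision(action='reorder', reason='Rule 2: on-hand inventory below half of forecast demand')
```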


To deal with the challenge of machine learning explainability (i.e., glass-box versus black-box ML), Enterra utilizes the Representational Learning Machine™ (RLM) developed by Massive Dynamics™, a firm headquartered in Princeton, NJ. The RLM helps determine what type of analysis is best suited for the data involved in a high-dimensional environment (i.e., it selects the most appropriate algorithms), and it accomplishes this in a glass-box rather than a black-box fashion (i.e., it makes decisions explainable). The RLM can decompose which individual variables and variable groups contribute most significantly to the output of a predictive model. The RLM also provides mathematical guarantees around the optimality of its model and results.
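
The RLM is proprietary, so the sketch below does not show how it works; it only illustrates the general kind of variable-level attribution described above, using permutation importance from the open-source scikit-learn library on synthetic data with invented feature names.

```python
# Generic illustration of attributing a predictive model's output to its input
# variables via permutation importance (not the RLM; data and names are invented).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
names = ["lead_time", "order_size", "carrier_score", "region_code"]

X = rng.normal(size=(400, 4))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.2, size=400)  # last two inputs are noise

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each variable in turn and measure how much predictive accuracy drops;
# a large drop means the model leans heavily on that variable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>14}: {score:.3f}")
```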


Trust grows with understanding. The English philosopher John Locke once stated, “The great art of learning is to understand but little at a time.” The more we learn about how to give AI the ability to explain itself — to interpret its outputs — the faster our understanding will grow and the greater our trust in AI will become. McKinsey analysts conclude, “Explainability is crucial to building trust. Customers, regulators, and the public at large all need to feel confident that the AI models rendering consequential decisions are doing so in an accurate and fair way. Likewise, even the most cutting-edge AI systems will gather dust if intended users don’t understand the basis for the recommendations being supplied.” As with the mischievous boy caught red-handed by his parents, it’s time for AI to explain itself.


Footnotes
[1] Oluwademilade Afolabi, “What Is AI Hallucination, and How Do You Spot It?” MakeUseOf, 24 March 2023.
[2] Staff, “Translating Public Concern into Policy,” AI Policy Institute, 2023.
[3] Liz Grennan, Andreas Kremer, Alex Singla, and Peter Zipparo, “Why businesses need explainable AI—and how to deliver it,” McKinsey QuantumBlack, 29 September 2022.
[4] Neil Raden, “AI doesn’t explain itself – machine learning has a ‘Deus ex Machina’ problem,” Diginomica, 11 October 2021.
[5] Eric Siegel, “Why A.I. is a big fat lie,” Big Think, 23 January 2019.
