
Should Artificial Intelligence have to Explain Itself?

July 18, 2018


The term Artificial Intelligence (AI) evokes different, and sometimes strong, emotions in people. While some individuals are excited about AI’s potential, others are terrified AI could spell the end of the human race. Journalists at The Economist assert, “For artificial intelligence to thrive, it must explain itself. If it cannot, who will trust it?”[1] Ilan Moscovitz (@IlanMoscovitz) adds, “After reading story after story about AI’s rapid advances, it’s hard not to imagine that this technology will transform our world. But AI’s abilities are so hyped, its promised benefits so vast, its dangers so dystopian, its direction so hard to predict, and its mechanisms so technical, that we lack a clear view of what’s going on. Our understanding lags our awe.”[2] The chorus of people demanding that AI’s “black box” be opened appears to be growing; but, Dave Gershgorn (@davegershgorn) reports, “Not everyone is sold on opening the ‘black box’ of artificial intelligence.”[3] Who’s right?


Arguments for opening AI’s black box


The Economist notes, “Real AI is nowhere near as advanced as its usual portrayal in fiction. It certainly lacks the apparently conscious motivation of the sci-fi stuff. But it does turn both hope and fear into matters for the present day, rather than an indeterminate future. And many worry that even today’s ‘AI-lite’ has the capacity to morph into a monster. The fear is not so much of devices that stop obeying instructions and instead follow their own agenda, but rather of something that does what it is told (or, at least, attempts to do so), but does it in a way that is incomprehensible.” In other words, people fear AI for the same reasons they fear the dark or fear change. Humans are uncomfortable when they can’t see where they are going or don’t understand what’s going on. Moscovitz indicates there may be another reason people fear AI — losing our human identity. He explains, “Human intelligence is wrapped up with our species’ success and is interwoven with our understanding of who we are. And so, artificial intelligence cuts right to the heart of what it is to be human. AI has the capacity to augment our most distinctive qualities — rationality, adaptability, ingenuity, mastery over our environment — and, in the minds of many, to supplant us as the most sapient residents of this world. This is what makes AI unlike any other revolutionary technology: In principle, it can do everything we do — and possibly do it better.” He believes understanding what’s going on in the black box may help us maintain our human identity and allow us to live more congenially with AI.


David Gunning, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA), also believes it’s important that AI systems explain themselves. He told Richard Waters (@RichardWaters), “Right now, I think this AI technology is eating the world, and [people] are going to need this.”[4] By “this,” Gunning means a way for AI systems to explain how they think. Waters explains, “Researchers at Parc, a laboratory with links to some of Silicon Valley’s biggest breakthroughs, have just taken on a particularly thorny challenge: teaching intelligent machines to explain, in human terms, how their minds work. The project, one of several sponsored by the US Defense Advanced Research Projects Agency, is part of the search for an answer to one of the hardest problems in artificial intelligence.” Mark Stefik, the researcher heading the Parc project, told Waters, “You’re in effect talking to an alien. It’s a different kind of a mind.” The Parc project is looking at only one kind of AI, deep learning; but if deep learning systems can be taught to explain themselves, other AI techniques probably can be as well.
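
Neither DARPA nor Parc has published the method Stefik’s team is developing, but a minimal sketch of one widely used post-hoc explanation technique, permutation feature importance, illustrates the general idea of probing a black-box model from the outside: shuffle one input at a time and see how much the model’s performance suffers. The dataset and model below are convenient stand-ins, assuming scikit-learn is available; this is not the Parc approach itself.

```python
# A minimal sketch of permutation feature importance: a model-agnostic way
# to ask a "black box" which inputs its decisions actually depend on.
# Dataset and model are illustrative stand-ins, not the Parc/DARPA method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An explanation like this is far weaker than the human-terms dialogue Parc is after, but it shows that even opaque models can be interrogated empirically.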


Should AI systems explain themselves?


Kriti Sharma (@sharma_kriti), Vice President of AI at Sage Group, asserts, “AI should not be above the law.”[5] She adds, “Those of us in the global tech community developing AI need to jointly address its auditability and transparency. We should be committed to investing in the research and development of emerging AI technologies that many people don’t understand at an algorithmic level.” Sharma, however, doesn’t advocate for opening AI’s black box so it can explain itself; rather, she asserts “the key to expanding AI transparency, knowledge and understanding” is industry self-governance. She explains, “People building AI for business and enterprise applications need to responsibly create, source and test diverse data. We need to introduce bias detection testing that identifies if the AI conforms to a standard and agreed testing protocol. Specifically, engineers need to simulate how data sets interact with users across a wide variety of contexts before AI leaves the test lab. … Fundamentally, the tech community needs to define what AI transparency means and work together to apply transparency to AI innovation. We need to stop treating AI as a black box and address the auditability and traceability issues that will lead us down the right path.”
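
Sharma doesn’t specify a testing protocol, but one simple form a bias-detection test could take is a demographic parity check: compare the model’s positive-outcome rate across groups and flag any gap larger than an agreed tolerance. The predictions, group labels, and threshold below are hypothetical stand-ins, sketched with NumPy.

```python
# A minimal sketch of the kind of pre-release bias test Sharma describes.
# It measures demographic parity: how much the model's positive-outcome
# rate differs between groups. All inputs here are hypothetical stand-ins.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Simulated model decisions and a protected attribute for a test-lab run.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=1_000)            # model's yes/no decisions
groups = rng.choice(["A", "B", "C"], size=1_000)  # group membership

TOLERANCE = 0.05  # the "standard and agreed testing protocol" threshold
gap = demographic_parity_gap(preds, groups)
verdict = "PASS" if gap <= TOLERANCE else "FAIL: investigate before release"
print(f"parity gap = {gap:.3f} -> {verdict}")
```

A real protocol would test many metrics across many simulated user contexts, as Sharma suggests, but the principle is the same: agree on a standard, then check the system against it before it leaves the lab.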


David Weinberger (@dweinberger), Editor of Harvard’s Berkman Klein Center Collection, agrees something needs to be done to increase trust in AI; but he believes asking AI to explain itself isn’t the answer. His preferred approach is “optimization over explanation.”[6] He explains:

“Keeping AI simple enough to be explicable can forestall garnering the full value possible from unhobbled AI. Still, one way or another, we’re going to have to make policy decisions governing the use of AI — particularly machine learning — when it affects us in ways that matter. One approach is to force AI to be artificially stupid enough that we can understand how it comes up with its conclusion. But here’s another: Accept that we’re not always going to be able to understand our machine’s ‘thinking.’ Instead, use our existing policy-making processes — regulators, legislators, judicial systems, irate citizens, squabbling politicians — to decide what we want these systems optimized for. Measure the results. Fix the systems when they don’t hit their marks. Celebrate and improve them when they do.”

Gershgorn believes Weinberger’s approach has merit, but may not be fully adequate. “There will always be AI failures that people want to understand explicitly,” Gershgorn writes. “Consider NASA trying to figure out why a satellite was lost, or a scientist’s algorithm being able to predict the composition of a new material without knowing why it could exist. [Weinberger’s] proposed system works a lot better when thinking of it in terms of a public-facing product or service.”
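
As a rough illustration of Weinberger’s “measure the results, fix the systems” stance (with Gershgorn’s caveat that some failures will still demand explicit explanations), oversight under this approach looks less like interrogating the model and more like auditing it against agreed targets. The metric names and thresholds below are hypothetical policy choices, not an established framework.

```python
# A minimal sketch of "optimization over explanation": society agrees on
# targets, monitoring measures outcomes, and the audit flags the system
# when it misses its marks. Metrics and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class PolicyTarget:
    metric: str     # what the policy process decided to optimize for
    minimum: float  # the mark the system must hit

def audit(measured: dict[str, float], targets: list[PolicyTarget]) -> list[str]:
    """Return the names of the targets the system failed to hit."""
    return [t.metric for t in targets
            if measured.get(t.metric, 0.0) < t.minimum]

targets = [PolicyTarget("accuracy", 0.90), PolicyTarget("fairness_score", 0.95)]
measured = {"accuracy": 0.93, "fairness_score": 0.91}  # from live monitoring

failures = audit(measured, targets)
if failures:
    print("Fix the system; missed targets:", failures)
else:
    print("Celebrate and improve:", measured)
```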


Summary


The issue at hand is trust. What will it take for people to trust AI? At this point, no one is sure. Moscovitz concludes, “AI is magical. And it’s not sorcery. AI relies on extremely clever instructions whose basic principles anyone can grasp. It’s only when you see the outcome of this elegance that its grandeur comes into view. And so, no matter how much or how little you know about computer science concepts, you, too, can understand what lies behind AI and see a glimpse of our future.” People may be able to grasp AI’s basic principles, but understanding how a given system reaches a given conclusion baffles even AI experts. Were it not so, we wouldn’t be having this discussion.


Footnotes
[1] Staff, “For artificial intelligence to thrive, it must explain itself,” The Economist, 15 February 2018.
[2] Ilan Moscovitz, “Opening Artificial Intelligence’s Black Box,” The Motley Fool, 31 December 2017.
[3] Dave Gershgorn, “The case against understanding why AI makes decisions,” Quartz, 31 January 2018.
[4] Richard Waters, “Intelligent machines are asked to explain how their minds work,” Financial Times, 9 July 2017.
[5] Kriti Sharma, “How to unmask AI,” TechCrunch, 21 December 2017.
[6] David Weinberger, “Optimization over Explanation,” Medium, 28 January 2018.
