
Artificial Intelligence is Different than Our Intelligence

May 5, 2016


“Sometimes it is perceived as a figment of the far future,” states The Economist. “But artificial intelligence (AI) is today’s great obsession in Silicon Valley.”[1] While science fiction writers and some well-known personalities are warning about the dangers of artificial general intelligence (that is, self-aware machines), most companies are concentrating on much narrower forms of artificial intelligence. Josh Worth (@misterjworth) writes, “The colloquial definition of ‘artificial intelligence’ refers to the general idea of a computerized system that exhibits abilities similar to those of the human mind, and usually involves someone pulling back their skin to reveal a hideous metallic endoskeleton. It’s no surprise that the phrase is surrounded by so many misconceptions since ‘artificial’ and ‘intelligence’ are two words that are notoriously difficult to define. … Deciding whether a computer is intelligent has been a very troublesome project, mostly because the standard for what constitutes intelligence keeps changing. Computers have been performing operations similar to those of the human brain since they were invented, yet no one is quite willing to call them intelligent.”[2] Worth would like the world to stop using the term “artificial intelligence,” but that is unlikely to happen. Frankly, it doesn’t really matter. If a sentient machine is ever developed, it will likely have a unique “intelligence” (or thinking process) unlike the human mind, and a new term for that kind of machine intelligence could be coined.


Even if current versions of AI fall short of anything approaching real intelligence, they do things that appear pretty darn smart — like winning television game shows, beating board game champions, or optimizing business processes. One of the subsets of artificial intelligence receiving a lot of attention is cognitive computing. People who don’t like the term “intelligence” are not very fond of the term “cognitive” either. For example, Tom Austin, a Vice President and fellow at Gartner, believes, “‘Cognitive’ is marketing malarkey. It implies machines think. Nonsense.”[3] I believe people are capable of understanding the difference between a cognitive system and a sentient system (i.e., a system that demonstrates self-awareness). Cognition is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Of course, that definition has to be modified slightly when applied to a “cognitive” machine. A cognitive system is a system that discovers insights and relationships through analysis, machine learning, and sensing of data. Additionally, there are a number of ways that cognitive computing systems tackle those tasks. The most famous cognitive computing system is IBM’s Watson. I often have to explain to clients the difference between the Enterra Solutions® approach using the Enterra Enterprise Cognitive System™ (ECS) and IBM’s Watson. Watson is designed to respond to queries where the answer is found within a large corpus of documents. It analyzes massive amounts of data and provides a “best guess” answer (IBM calls it a “confidence-weighted response”) based on what it finds. In contrast, Enterra’s ECS is designed for queries where both semantic reasoning (i.e., semantic intelligence) and advanced calculations (i.e., computational intelligence) are required to derive the answer. Enterra uses the world’s largest commonsense ontology as part of the semantic reasoning process. Susan Feldman (@susanfeldman), founder of Synthexis, points out that one of the most significant contributions cognitive computing systems can make is helping organizations discover what they don’t know.[4] She explains:

“The reasons why we make decisions and change directions are poorly known and can’t be modeled for the process to happen again. We’re losing information that’s falling off the table. … Cognitive computing is going to bring us another step closer to solving some of these problems. What is cognitive computing? Last year I brought together a team of 14 or 15 people to try to define it before marketplace hype completely screwed up any idea of what it was. I don’t know if we’re succeeding or not. What are the problems that cognitive computing attacks? They’re the ones that we have left on the table because we can’t put them into neat rows and columns. They’re ambiguous. They’re unpredictable. They’re very human. There’s a lot of conflicting data. There’s no right and wrong, just best, better, and not such a good idea but maybe. This data requires exploration not searching. You just have to keep poking at it and shifting things around.”
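To make the contrast with Watson described above more concrete, here is a minimal, purely hypothetical sketch. The toy corpus, the word-overlap confidence score, and the tiny fact base are invented for illustration and represent neither Watson’s nor Enterra’s actual implementation; the sketch only shows the difference between returning a confidence-weighted best guess from documents and answering a question that needs semantic knowledge plus a calculation.

```python
# Hypothetical toy sketch; NOT how Watson or Enterra's ECS is actually built.
# It only illustrates the two answering styles contrasted above:
#   1) a confidence-weighted "best guess" retrieved from a document corpus, and
#   2) an answer that requires semantic knowledge plus a calculation.

CORPUS = [
    "Rembrandt van Rijn was a Dutch painter of the 17th century.",
    "Cognitive systems discover insights and relationships in data.",
    "Machine learning improves with exposure to large data sets.",
]

def confidence_weighted_answer(question: str):
    """Score each passage by naive word overlap and return the best guess
    along with a rough confidence value."""
    q_words = set(question.lower().replace("?", " ").split())
    scored = []
    for passage in CORPUS:
        overlap = len(q_words & set(passage.lower().split()))
        scored.append((overlap / max(len(q_words), 1), passage))
    confidence, best = max(scored)
    return best, confidence

# A toy fact base standing in for a commonsense ontology.
ONTOLOGY = {"pallet": {"is_a": "shipping unit", "cases_per_unit": 40}}

def semantic_plus_computational_answer(units: int, thing: str) -> int:
    """Combine a semantic lookup (what a pallet *is*) with arithmetic
    (how many cases that implies), reasoning a pure document search cannot do."""
    fact = ONTOLOGY[thing]
    return units * fact["cases_per_unit"]

if __name__ == "__main__":
    print(confidence_weighted_answer("Who was Rembrandt?"))
    print(semantic_plus_computational_answer(3, "pallet"))  # 3 pallets -> 120 cases
```

Real systems replace the word-overlap score with far more sophisticated statistical models and the toy fact base with ontologies containing millions of assertions, but the division of labor is the same.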

The untidiness and ambiguity of the data needing analysis are why we believe semantic intelligence must play a large role in the future of corporate decision making. Machine learning, of course, is another capability essential for making better decisions. The Economist notes, “Machine-learning, in which computers become smarter by processing large data-sets, currently has many profitable consumer-facing applications, including image recognition in photographs, spam filtering and those that help to better target advertisements to web surfers. Many of tech firms’ most ambitious projects, including building self-driving cars and designing virtual personal assistants that can understand and execute complex tasks, also rely on artificial intelligence, especially machine-learning and robotics.” For better or worse, most analysts agree that artificial intelligence is going to change the future. Noting the rate at which AI systems are developing, Daniel Susskind (@danielsusskind) believes, “It points to a future that is different from the one that most experts are predicting.”[5] He continues:

“It is often said that because machines cannot ‘think’ like human beings, they can never be creative; that because they cannot ‘reason’ like human beings, they can never exercise judgment; or that because they cannot ‘feel’ like human beings they can never be empathetic. For these reasons, it is claimed, there are many tasks that will always require human beings to perform them. But this is to fail to grasp that tomorrow’s systems will handle many tasks that today require creativity, judgment or empathy, not by copying us but by working in entirely different unhuman ways. The set of tasks reserved exclusively for human beings is likely to be much smaller than many expect.”
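The Economist’s list above singles out spam filtering as one consumer-facing application of machine learning. As a minimal, hypothetical sketch (the tiny message set and its labels are invented, and scikit-learn is just one of many possible toolkits), this is roughly what “becoming smarter by processing large data-sets” looks like in code:

```python
# Minimal, hypothetical spam-filter sketch using scikit-learn.
# The tiny training set is invented for illustration; real filters learn
# from millions of labeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",               # spam
    "Limited offer, claim your reward",   # spam
    "Meeting moved to 3 pm tomorrow",     # not spam
    "Here are the quarterly figures",     # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Turn words into counts, then fit a naive Bayes classifier on those counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The "learning" is just statistics over the examples it has processed;
# more (and better) data generally means better predictions.
print(model.predict(["Claim your free reward today"]))        # likely 'spam'
print(model.predict(["Tomorrow's meeting agenda attached"]))  # likely 'ham'
```

No rules about spam are written into the model; everything it “knows” comes from the statistics of the examples it was fitted on, which is precisely why such systems appear smarter as they process more data.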

To prove Susskind’s point, Erin Blakemore (@heroinebook) reports that a deep learning system just painted a new “Rembrandt” masterpiece.[6] Worth adds, “So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as ‘artificial intelligence.’ We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.” The term “artificial intelligence” is unlikely to be replaced anytime soon, but I do think you will hear a lot more about cognitive computing in the years ahead.


Footnotes
[1] “Why firms are piling into artificial intelligence,” The Economist, 31 March 2016.
[2] Josh Worth, “Stop Calling it Artificial Intelligence,” Josh Worth Art & Design, 10 February 2016.
[3] Katherine Noyes, “5 things you need to know about AI buzzwords: cognitive, neural, and deep, oh my!” PCWorld, 3 March 2016.
[4] Susan Feldman, “Cognitive Computing and Knowledge Management: Sparking Innovation,” KMWorld, 5 February 2016.
[5] Daniel Susskind, “AlphaGo marks stark difference between AI and human intelligence,” Financial Times, 21 March 2016.
[6] Erin Blakemore, “‘New’ Rembrandt Created, 347 Years After the Dutch Master’s Death,” Smithsonian Magazine, 5 April 2016.
