
Artificial Intelligence is an Aspirational Term

April 22, 2020


Artificial Intelligence (AI) is a subject of discussion from the White House to Hollywood and everywhere in between. But what does the term really mean? Eric Siegel (@predictanalytic), a former computer science professor at Columbia University, writes, “A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it’s a hyped-up buzzword that confuses and deceives.”[1] Arvind Narayanan (@random_walker), an associate professor at Princeton, asserts, “Most of the products or applications being sold today as artificial intelligence (AI) are little more than ‘snake oil’.”[2] The problem is not with the products being sold under the AI rubric but with the fact that AI is neither artificial nor intelligent. Siegel explains, “The much better, precise term would instead usually be machine learning — which is genuinely powerful and everyone oughta be excited about it.” When Siegel insists AI should “usually be” referred to as machine learning, he’s admitting there are other cognitive technologies that do more than machine learning but fall short of artificial intelligence.

 

Types of AI

 

As I survey the field, I see three types of “artificial intelligence” being developed. They are:

 

• Weak AI: Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” In other words, weak AI was developed to handle/manage a small and specific data set in order to answer a single question. Its perspective is singular, resulting in tunnel vision. Machine learning generally falls into this category (a minimal sketch of such a single-task system follows this list). Most people believe the first successful demonstration of weak AI came in 1997, when IBM’s Deep Blue program beat then-world chess champion Garry Kasparov. However, Herbert Bruderer, a retired lecturer in didactics of computer science at ETH Zürich, points out, “If one takes chess as a yardstick for artificial intelligence, this branch of research begins much earlier, at the latest in 1912 with the chess automaton of the Spaniard Leonardo Torres Quevedo. … Torres Quevedo showed his electromechanical chess machine (El ajedrecista, chess player), developed from 1912, in the machine laboratory of the Sorbonne University in Paris in 1914. The endgame machine was able to checkmate the king of a human opponent with a rook and king.”[3]

 

• Strong AI: Strong AI originally referred to Artificial General Intelligence, or AGI (i.e., a machine with consciousness, sentience, and mind), “with the ability to apply intelligence to any problem, rather than just one specific problem.” Today, however, there are cognitive systems that fall short of AGI but far surpass weak AI. These systems were developed to handle/manage large and varied data sets to answer a multitude of questions in a variety of categories. This is the category into which cognitive computing falls. Cognitive computing can deal with ambiguities whereas machine learning cannot.

 

• General AI: The AGI Society notes the ultimate goal of AGI is to develop “thinking machines” (i.e., “general-purpose systems with intelligence comparable to that of the human mind”). Siegel writes, “The term artificial intelligence has no place in science or engineering. ‘AI’ is valid only for philosophy and science fiction — and, by the way, I totally love the exploration of AI in those areas.”
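
To make the “narrow task” idea in the weak AI bullet concrete, here is a minimal sketch in Python. It assumes scikit-learn is available, and the tiny spam-filtering data set and labels are purely hypothetical, chosen only to show what a single-question system looks like; it is an illustration of narrow machine learning, not anyone’s production system.

```python
# A minimal sketch of "weak AI": a model trained on a small, task-specific
# data set that can answer exactly one question ("is this email spam?").
# The data set is hypothetical and exists only for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, single-purpose training set.
emails = [
    "win a free prize now",        # spam
    "limited offer click here",    # spam
    "meeting agenda for Monday",   # not spam
    "quarterly report attached",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# The model learns one narrow mapping: email text -> spam/not spam.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
classifier = MultinomialNB().fit(features, labels)

# It answers the single question it was built for...
print(classifier.predict(vectorizer.transform(["free prize offer"])))  # [1]
# ...but it has no notion of chess, language, or anything outside this task.
```

The point of the sketch is the tunnel vision described above: the model’s entire “perspective” is the handful of word counts it was trained on, and any question outside that frame is simply invisible to it.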

 

I prefer the term cognitive technologies to artificial intelligence, but some experts, including Siegel, have problems with the term “cognitive,” believing it conjures up images of consciousness. Siegel explains, “The term ‘cognitive computing’ … is another poorly-defined term coined to allege a relationship between technology and human cognition.” Cognition is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Of course, that definition needs to be modified when applied to a “cognitive” machine. A cognitive system is one that discovers knowledge, gains insights, and establishes relationships through analysis, machine learning, and the sensing of data. I’m sympathetic to those who don’t like the term, but no one has come up with a better one. Machine learning is too confining.

 

People who imply cognitive computing works the same way as the human mind are going too far. Cognitive scientists readily admit we are only beginning to understand the human mind. The best we can do today is develop algorithms that approach problems in ways similar to humans. The term “cognitive computing” is being used to cover a broad range of distinctly different techniques and capabilities, including natural language processing (NLP), advanced search, image recognition, machine learning, pattern recognition, neural networks, knowledge graphs, and many more. My preferred definition of cognitive computing is the combination of semantic intelligence (NLP, machine learning, and ontologies) and computational intelligence (advanced mathematics). It is this combination of capabilities that permits cognitive computers to approach problems much as humans do.
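
To make that definition a bit more tangible, here is a deliberately toy sketch in Python. A hypothetical mini-ontology and crude word matching stand in for the semantic side (NLP, machine learning, and ontologies), and a simple numerical ranking stands in for the computational side (advanced mathematics). All names and data are illustrative assumptions, not a description of any real cognitive-computing product.

```python
# Toy illustration of "semantic intelligence + computational intelligence".
from collections import Counter

# Semantic side: a tiny, hypothetical ontology mapping surface terms to concepts.
ONTOLOGY = {
    "truck": "transportation", "rail": "transportation", "ship": "transportation",
    "warehouse": "storage", "depot": "storage",
    "tariff": "cost", "fuel": "cost",
}

def extract_concepts(text: str) -> Counter:
    """Map the words of a text onto ontology concepts (a crude stand-in for NLP)."""
    words = text.lower().replace("?", "").split()
    return Counter(ONTOLOGY[w] for w in words if w in ONTOLOGY)

def rank_answers(question: str, candidates: dict) -> list:
    """Computational side: score candidate analyses by concept overlap with the question."""
    q_concepts = extract_concepts(question)
    scores = {
        name: sum((extract_concepts(desc) & q_concepts).values())
        for name, desc in candidates.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Usage: the question never mentions "route_analysis", but the concept layer
# (transportation, cost) plus the numeric ranking still surfaces it first.
candidates = {
    "route_analysis": "compare truck and rail fuel costs",
    "inventory_report": "warehouse and depot stock levels",
}
print(rank_answers("Why did truck fuel tariffs rise?", candidates))
# ['route_analysis', 'inventory_report']
```

The design point is simply that neither layer suffices alone: the ontology supplies meaning the raw words lack, and the arithmetic supplies a way to weigh competing interpretations, which is the combination the definition above describes.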

 

AGI remains aspirational

 

Hod Fleishman (@Hod_Fleishman), Partner, Vice President, and Global Head of IoT at BCG Digital Ventures, writes, “As of today and the near future, machines can perform specific tasks very well; in some cases, they can even learn and improve their performance of a particular job, but is this intelligence?”[4] Most scientists agree that what computers can accomplish today cannot be considered intelligence. Back in 1950, the mathematical genius Alan Turing proposed the Imitation Game (aka the Turing Test), suggesting that when a computer could fool experts into thinking it was human, it would mark a breakthrough moment. Many people believe that moment arrived in 2014, when a chatbot posing as a 13-year-old boy named Eugene Goostman fooled a third of the expert panelists assembled to test five machines involved in text-based discussions. Up to that time, no computer had ever met the threshold of fooling 30% of human interrogators into believing it was human. Remarkable as that achievement was, no one was ready to declare Eugene Goostman intelligent. The late Paul Allen, co-founder of Microsoft, doubted an AGI system would be created anytime soon. He argued, “[Developing AGI] will take unforeseeable and fundamentally unpredictable breakthroughs.”[5]

 

Fleishman agrees with Allen that AGI remains aspirational. “If we continue and pursue AI with an intent to mimic human thinking,” he writes, “we stand to fail, because humans and machines think in very different ways. To increase machine computation capabilities to the point in which the end outcome is similar to a decision a person makes will be achievable when we understand the best way for machines to think like machines.” Siegel adds, “Intelligence isn’t a Platonic ideal that exists separately from humans, waiting to be discovered. It’s not going to spontaneously emerge along a spectrum of better and better technology. Why would it? That’s a ghost story.” He continues, “No advancements in machine learning to date have provided any hint or inkling of what kind of secret sauce could get computers to gain ‘general common sense reasoning.’ Dreaming that such abilities could emerge is just wishful thinking and rogue imagination, no different now, after the last several decades of innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word ‘intelligence’ might apply to computers.”

 

Concluding thoughts

 

While doubts about the future of artificial general intelligence remain, futurists and visionaries, like Kevin Kelly (@kevin2kelly), founding Executive Editor of Wired magazine, are excited about cognitive computing. They believe this technology can confront challenges that have historically proven difficult to address. Kelly tweeted, “In the very near future you will cognify everything in your life that is already electrified.” That’s a bold statement considering he had to coin a verb (i.e., cognify) to describe how the business world is going to change. Jennifer Zaino (@Jenz514) agrees with Kelly. She writes, “Cognitive Computing increasingly will be put to work in practical, real-world applications. The industries that are adopting it are not all operating at the same maturity levels; there remain some challenges to conquer. The wheels are very much in motion to make cognitive-driven Artificial Intelligence (AI) applications a key piece of enterprise toolsets.”[6] Because cognitive computing systems are adaptable, Accenture analysts believe cognitive computing is “the ultimate long-term solution” for many of the most nagging challenges businesses face.[7] In the end, business leaders need to look at capabilities rather than labels when determining which technology is best for their enterprise.

 

Footnotes
[1] Eric Siegel, “Why A.I. is a big fat lie,” Big Think, 23 January 2019.
[2] Dev Kundaliya, “Much of what’s being sold as ‘AI’ today is snake oil, says Princeton professor,” Computing, 20 November 2019.
[3] Herbert Bruderer, “AI Began in 1912,” Communications of the ACM, 3 January 2020.
[4] Hod Fleishman, “Is Artificial Intelligence A Myth?” Forbes, 19 February 2020.
[5] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[6] Jennifer Zaino, “Cognitive Computing, Artificial Intelligence Apps Have Big Future in the Enterprise,” Dataversity, 17 September 2015.
[7] “From Digitally Disrupted to Digital Disrupter,” Accenture, 2014.
