People use oxymorons all the time. An oxymoron is a figure of speech in which apparently contradictory terms appear in conjunction. A few examples are: small crowd, old news, jumbo shrimp, open secret, living dead, deafening silence, only choice, pretty ugly, and awfully good. To that list you might add “real artificial intelligence.” Is there such a thing? Eric Siegel (@predictanalytic), a former computer science professor at Columbia University, doesn’t think so. He writes, “A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it’s a hyped-up buzzword that confuses and deceives.”[1] Business journalist Lin Grensing-Pophal (@LinWriter) agrees with that assessment. She writes, “Not everything labeled AI is truly artificial intelligence. The technology, in reality, has not advanced nearly far enough to actually be ‘intelligent’.”[2] And Arvind Narayanan (@random_walker), an associate professor at Princeton, asserts, “Most of the products or applications being sold today as artificial intelligence are little more than ‘snake oil’.”[3] The problem is not with the products being sold under the AI rubric but with the fact that “AI” is neither artificial nor intelligent. By that I mean that technologies like machine learning are real; they just don’t approach anything close to conscious thought.
The term “artificial intelligence” is aspirational. It’s an umbrella term under which a number of different techniques reside. As Grensing-Pophal observes, “Much of the confusion that exists over what AI is, or isn’t, is driven by the overly broad use of the term, fueled to a large degree by popular entertainment, the media and misinformation.” That doesn’t mean technologies residing under the AI umbrella aren’t useful. Siegel explains, “The much better, precise term would instead usually be machine learning — which is genuinely powerful and everyone oughta be excited about it.” All this confusion about AI motivated Azamat Abdoullaev, an ontologist and theoretical physicist, to ask, “So what’s the difference between real AI and fake AI?”[4]
Real vs. Fake AI
Everyone can agree that the world has yet to see a system that can be classified as an artificial general intelligence platform (i.e., a general-purpose system with intelligence comparable to that of the human mind). Abdoullaev asserts, “An Intelligence is any entity which is modeling and simulating the world to effectively and sustainably interact with any environment, physical, natural, mental, social, digital or virtual. This is a common definition covering any intelligent system of any complexity, human, machine, or alien intelligences.” The field of artificial intelligence has sparked endless debates about what constitutes “intelligence” or “cognition.” The bottom line is that no “real” artificial intelligence has yet been developed. The late Paul G. Allen, former chairman of Vulcan and cofounder of Microsoft, and Mark Greaves, a computer scientist at Vulcan, insisted “it will take unforeseeable and fundamentally unpredictable breakthroughs” to develop a true AGI platform.[5] And they weren’t optimistic. As I survey the field, I see three types of “artificial intelligence” being developed. They are:
Weak AI: Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” In other words, weak AI is developed to handle a small, specific data set in order to answer a single question. Its perspective is singular, resulting in tunnel vision. Machine learning generally falls into this category.
Strong AI: Strong AI originally referred to Artificial General Intelligence (i.e., a machine with consciousness, sentience and mind), “with the ability to apply intelligence to any problem, rather than just one specific problem.” Today, however, there are cognitive systems that fall short of AGI but far surpass weak AI. These systems are developed to handle large and varied data sets in order to answer a multitude of questions across a variety of categories. This is the category into which cognitive computing falls. Cognitive computing can deal with ambiguities whereas machine learning cannot. At Enterra Solutions®, we define artificial intelligence as having a machine reason in a human-like fashion about data in order to make decisions. Enterra® is advancing Autonomous Decision Science™ (ADS™), which combines mathematical computation with semantic reasoning and symbolic logic. The Enterra ADS® system analyzes data, automatically generates insights, makes decisions with the subtlety of judgment of an expert, and executes those decisions at machine speed with machine reliability.
General AI: AGI would be “real AI.” Siegel writes, however, “The term artificial intelligence has no place in science or engineering. ‘AI’ is valid only for philosophy and science fiction — and, by the way, I totally love the exploration of AI in those areas.” On the other hand, “Scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization.”[6] Reward maximization is the principle at the heart of reinforcement learning. So the debate rages on.
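That principle is easier to grasp with a concrete sketch. The toy program below uses tabular Q-learning, one of the simplest reinforcement learning algorithms, to show how purposeful behavior can emerge from nothing more than maximizing a reward signal; the corridor environment, reward values, and hyperparameters are my own illustrative assumptions, not anything drawn from the DeepMind paper.

```python
# Minimal sketch of reward maximization (reinforcement learning).
# The environment and all numbers here are illustrative toy choices.
import random

N_STATES = 6            # states 0..5 along a one-dimensional corridor
GOAL = N_STATES - 1     # reaching state 5 yields the only reward
ACTIONS = (-1, +1)      # step left or step right

# Q-table: the agent's estimate of future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate

def step(state, action):
    """Move along the corridor; the reward signal is 1.0 only at the goal."""
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action, occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy: every non-goal state now points toward the reward.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
print(policy)   # states 0 through 4 learn to step right (+1) toward the goal
```

The agent is never told where the goal is or how to reach it; it discovers, through trial and error, which actions maximize its long-run reward. Whether that simple principle scales all the way to general intelligence is exactly what remains in dispute.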
I prefer the term cognitive technologies to artificial intelligence, but some experts, including Siegel, still have problems with the term “cognitive,” believing it continues to conjure up images of consciousness. Siegel explains, “The term ‘cognitive computing’ … is another poorly-defined term coined to allege a relationship between technology and human cognition.” Marc Rameaux, a project manager, statistician and data scientist, believes a proper definition of AI could help resolve the debate about real and fake AI. He explains, “The debate about AI and the benefit or impact it could have for society, has grown to a scale which dwarfs the most basic question — what actually is it? It has exploded to the extent that much of the thinking about AI is more like fantasy or dystopia than real analysis. Going back to a definition of what AI is would allow us to have a balanced idea of it, to not refuse its considerable advantages, and to prepare for its risks, which are not always ones we expect.”[7] Rameaux offers this definition for AI:
“The ability of a machine to achieve performance equal to or greater than that of certain human cognitive processes, on problems reaching either NP-complexity or whose solution cannot be written entirely in an explicit specification, by extracting the relevant representation of the input data on its own without having to have it done for it.”
He believes his definition “shows that AI is still very far from true human intelligence, although it now merits the name under the definition, because this time it is a different issue from all the data processing techniques that preceded it.” For those unfamiliar with NP-complexity, Wikipedia notes, “In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is ‘yes’, have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.”
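That asymmetry — hard to find an answer, easy to check one — is the heart of NP, and it is easiest to see with a concrete case. The sketch below uses subset sum, a classic NP-complete decision problem; the choice of problem and the sample numbers are mine, offered only to illustrate the definition, not drawn from Rameaux’s article.

```python
# Illustrative sketch of "verifiable in polynomial time," using subset sum.
from itertools import combinations

def verify_certificate(numbers, target, certificate):
    """Check a proposed 'yes' proof in polynomial time: every certificate
    element must come from the input list, and the elements must sum to
    the target."""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve_by_search(numbers, target):
    """Find a certificate by brute force -- worst case exponential in len(numbers)."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers, target = [3, 34, 4, 12, 5, 2], 9
certificate = solve_by_search(numbers, target)           # the hard direction: search
print(certificate)                                       # [4, 5]
print(verify_certificate(numbers, target, certificate))  # the easy direction: True
```

Verifying the certificate takes a quick pass over the input, while finding it by explicit search can, in the worst case, require examining exponentially many subsets. That gap is presumably why Rameaux’s definition uses NP-complexity as a threshold: for problems of that character, good answers cannot practically be produced by writing out every case in an explicit specification.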
Fake Intelligence, Real Results
Frankly, business leaders don’t really care whether a system touted as an AI platform is real or fake — they care about results. McKinsey & Company analysts assert, “Staying ahead in the accelerating artificial-intelligence race requires executives to make nimble, informed decisions about where and how to employ AI in their business.”[8] They add, “A convergence of algorithmic advances, data proliferation, and tremendous increases in computing power and storage has propelled AI from hype to reality.” Business writer Beth Stackpole (@bethstack) notes, “As artificial intelligence continues to move into the mainstream, companies are combining AI and big data to build and design better products, react faster to changing market conditions, and protect consumers from fraud.”[9] She adds, “Big data plus AI creates a foundation for more intelligent products and services — ones that initiate maintenance procedures before something breaks, perform more precise operations, or automatically recalibrate resources to meet changing demand and usage patterns. While AI and big data pave the way for such evolutionary use cases, the pair do not constitute a business strategy on their own accord. ‘The question is how do you use AI right or use it wisely,’ [says] Ed McLaughlin, president of operations and technology for Mastercard.” Answers to that question are unique to every business.
Many companies are now successfully leveraging AI. Bob Gourley (@bobgourley), co-founder and Chief Technology Officer of OODA LLC, insists that the fact you don’t notice AI working behind the scenes is a sign of its success. He explains, “AI technologies are making continuous advances in domains like industrial robotics, logistics, speech recognition and translation, banking, medicine and advanced scientific research. But in almost every case, the cutting-edge AI that drives the advances drops from attention, becoming almost invisible when it becomes part of the overall system. The fact that most AI use today is invisible can lead to the erroneous assumption that it is not delivering on expected value.”[10]
Discussions about whether AI is real or fake are great academic exercises. For business leaders, how so-called AI systems can contribute to their bottom line is the only true measure of value. And that value is real.
Footnotes
[1] Eric Siegel, “Why A.I. is a big fat lie,” Big Think, 23 January 2019.
[2] Lin Grensing-Pophal, “Not All AI Is Really AI: What You Need to Know,” SHRM, 23 July 2021.
[3] Dev Kundaliya, “Much of what’s being sold as ‘AI’ today is snake oil, says Princeton professor,” Computing, 20 November 2019.
[4] Azamat Abdoullaev, “Real Artificial Intelligence vs. Fake Artificial Intelligence,” BBN Times, 2 July 2021.
[5] Paul G. Allen and Mark Greaves, “The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[6] Ben Dickson, “DeepMind says reinforcement learning is ‘enough’ to reach general AI,” VentureBeat, 9 June 2021.
[7] Marc Rameaux, “What AI is – and what it is not,” European Scientist, 22 May 2018.
[8] Staff, “An executive’s guide to AI,” McKinsey & Company, 2020.
[9] Beth Stackpole, “How big firms leverage artificial intelligence for competitive advantage,” MIT Management, 26 May 2021.
[10] Bob Gourley, “Using Artificial Intelligence For Competitive Advantage in Business,” OODA Loop, 10 July 2021.