Most people now realize artificial intelligence (AI) is playing a large role in their lives. Journalists at The Economist assert, “Artificial intelligence is spreading beyond the technology sector, with big consequences for companies, workers and consumers.”[1] The term “artificial intelligence” covers a range of technologies (both real and imagined), from narrow AI that can master the game of chess to sentient machines like the fictional HAL9000. AI terms seem to be multiplying as fast as rabbits in the wild. Terence Mills (@terence_mills), CEO of AI.io and Moonshot, explains, “With the rise of artificial intelligence, our definitions of certain technological processes are increasingly important. Much like in the way virtual reality, augmented reality and mixed reality are often confused, so what most people call artificial intelligence can become muddled with virtual intelligence.”[2] Mills insists almost everything labeled AI today falls into the virtual intelligence category. He explains, “While the term virtual intelligence may sound new to you, it’s actually been all around us for a few years now. It’s present when you open navigation apps like Google Maps or Waze, when you track your health and fitness improvements via Fitbit or Garmin, or when you listen to music on a smart speaker like Amazon’s Echo. All these things seem intelligent when we interact with them on a daily basis. I mean, they can give us directions, recommend dietary and workout habits and even respond to spoken commands. But in actuality, these devices are just taking advantage of VI technology. The difference between these two is vital.” Mills’ taxonomy is simple and straightforward; but I would like to add augmented intelligence to the mix. Why? Because I believe there are nuanced differences. My preferred taxonomy for explaining differences between machine intelligence levels is weak AI, strong AI, and general AI.[3]
Virtual Intelligence
In Mills’ construct, virtual intelligence resides at the bottom of the machine intelligence spectrum. He explains, “The inner workings of these devices are … a lot less smart than they might seem — though they are quite scientific. The way in which virtual intelligence works to generate results is scientific because it uses a controlled environment, receives predetermined factors and outputs calculated results. A common pattern used behind the scenes to show this is IF x, THEN y. That is, if the given input (factors) fits the predetermined criteria (controlled environment), then it gives a response (calculated response). By using this method, VI simulates decision making. We say it simulates decision making because it cannot adjust its output as changes are developing. The factors given must be fully developed before it can recalculate the results.” He points to the example of VI used in GPS systems providing turn-by-turn instructions. Such systems don’t tell you when you’re about to make a wrong turn; they let you make it. Once the turn has been made and you’re on the wrong road, the system recalculates what you need to do. He concludes, “Virtual intelligence thus falls short of error correction and instead focuses on damage control after an error has been made. This is the vital difference in AI and VI.” Another way of thinking about virtual intelligence is that it doesn’t deal well with ambiguity. That’s why I believe augmented intelligence needs to be inserted into the conversation.
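The “IF x, THEN y” pattern Mills describes can be sketched in a few lines of code. This is a minimal illustration, not a real navigation system; the road names and rules are hypothetical, chosen only to mirror his GPS example.

```python
# A sketch of the rule-based "IF x, THEN y" pattern behind virtual intelligence.
# All roads, destinations, and rules here are hypothetical, for illustration only.

def vi_navigate(current_road: str, destination: str) -> str:
    """Match fully developed input (the "factors") against predetermined
    rules (the "controlled environment") and return a calculated response."""
    rules = {
        ("Main St", "Airport"): "Turn left onto Airport Rd",
        ("Oak Ave", "Airport"): "Recalculating: make a U-turn, then turn left onto Main St",
    }
    # IF the input fits the predetermined criteria, THEN give the calculated response.
    return rules.get((current_road, destination), "Recalculating...")

# The system cannot warn the driver before the wrong turn; it only
# recalculates once the new position is fully developed.
print(vi_navigate("Main St", "Airport"))  # the intended route
print(vi_navigate("Oak Ave", "Airport"))  # damage control after the wrong turn
```

Note that the lookup table can only react to inputs it already anticipated; anything outside its predetermined factors falls through to a generic response, which is exactly the “damage control after an error” behavior Mills describes.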
Augmented Intelligence
Another name for augmented intelligence is cognitive computing. The term was first adopted by IBM as an alternative to the term artificial intelligence (AI). When Bloomberg Businessweek Editor Megan Murphy (@meganmurp) asked IBM’s Ginni Rometty (@GinniRometty) why IBM chose cognitive computing over the more familiar artificial intelligence term, Rometty replied, “It was really a very thoughtful decision. The world calls it AI. There’s so much fearmongering about AI. When we started over a decade ago, the idea was to help you and I make better decisions amid cognitive overload. That’s what has always led us to cognitive.”[4] The Cognitive Computing Consortium defines cognitive computing this way: “Cognitive computing addresses complex situations that are characterized by ambiguity and uncertainty; in other words it handles human kinds of problems. Cognitive computing systems often need to weigh conflicting evidence and suggest an answer that can be considered as ‘best’ rather than ‘right’.” My company’s entry in this field is called the Enterra Enterprise Cognitive System™ (Aila™) — a system that can Sense, Think, Act, and Learn®.
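The Consortium’s idea of weighing conflicting evidence to suggest a “best” rather than “right” answer can be sketched as a simple scoring exercise. The candidate answers and evidence weights below are hypothetical, and real cognitive systems are far more sophisticated; the point is only that the output is a ranked suggestion, not a provably correct answer.

```python
# A sketch of weighing conflicting evidence to suggest a "best" answer.
# Candidate answers and weights are hypothetical, for illustration only.

def best_answer(evidence: dict) -> tuple:
    """Score each candidate answer by summing its (possibly conflicting)
    evidence weights, then return the highest-scoring candidate."""
    scores = {answer: sum(weights) for answer, weights in evidence.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Two explanations for a missed shipment; each has supporting and
# contradicting signals, so neither can be proved "right".
evidence = {
    "supplier_delay": [0.6, 0.3, -0.2],
    "demand_spike":   [0.4, 0.4],
}
answer, score = best_answer(evidence)
print(answer)  # the "best" answer, given the evidence at hand
```

Because the system reports a ranked suggestion with a score rather than a definitive verdict, a human decision-maker can see how close the alternatives were — which is the sense in which augmented intelligence augments, rather than replaces, human judgment.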
Artificial Intelligence
Artificial intelligence sits at the top of Mills’ taxonomy. He writes, “Merriam-Webster defines artificial intelligence as: ‘The capability of a machine to imitate intelligent human behavior.’ The key word in this definition is intelligent. As we see above, virtual intelligence mimics human decision making by using math and predetermined factors. Artificial intelligence, on the other hand, should be intelligent enough to make decisions as changes and events are occurring.” Using this definition, you can understand why artificial intelligence is different from either virtual or augmented intelligence. Mills adds, “In the future, we hope to see things like transportation, manufacturing and other areas of everyday life become automated by artificial intelligence. This can be achieved once AI can start making human-like, intelligent decisions that adapt as problems and changes arise.”
Summary
Using the nuanced version of Mills’ taxonomy, you could classify machine intelligence by its decision-making capabilities. Virtual intelligence is useful when rules-based decision-making is in play. Augmented intelligence is useful in situations characterized by ambiguity and uncertainty. It augments decision-making by providing “best” rather than “right” answers. One of the big benefits of augmented intelligence (or cognitive computing) is its ability to learn as it works. Artificial intelligence, in Mills’ framework, is characterized by its ability to make decisions on the fly that adapt in rapidly changing situations. The Economist concludes, “Instead of relying on gut instinct and rough estimates, cleverer and speedier AI-powered predictions promise to make businesses much more efficient.”
This nuanced discussion of decision-making fits neatly into what Benn R. Konsynski (@Konsynski), the George S. Craft Distinguished University Professor of Information Systems and Operations Management at Emory University’s Goizueta Business School, and Dr. John Sviokla (@jjsviokla), a principal with PwC, call “cognitive reapportionment.” They explain, “We suggest the idea that the organization can be conceived of as bundles of decisions and that these bundles can be allocated across humans, systems, or combinations of humans and systems. Also, we suggest that the allocation of these cognitive responsibilities can be dynamic — taking into account the user, the system status, and the decision environment.”[5] That means, in the future, organizations will leverage virtual, augmented, and artificial intelligence as the situation dictates. Mills concludes, “Knowing what’s best to use for your project entirely depends on your use case.”
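Konsynski and Sviokla’s notion of dynamically allocating bundles of decisions across humans and systems can be sketched with a simple routing rule. The decision attributes, thresholds, and labels below are my own hypothetical illustration, not anything from their paper; real allocations would also weigh the user and system status they mention.

```python
# A sketch of "cognitive reapportionment": routing each decision in a
# bundle to a human, a system, or a combination, based on the decision
# environment. Attributes and thresholds are hypothetical.

def allocate(decision: dict) -> str:
    """Assign a decision based on its ambiguity and stakes."""
    if decision["ambiguity"] == "low":
        return "system"           # rules-based: virtual intelligence suffices
    if decision["stakes"] == "high":
        return "human+system"     # augmented intelligence supports the human
    return "adaptive_system"      # adaptive AI handles the rest on the fly

# A hypothetical bundle of organizational decisions.
bundle = [
    {"name": "reorder_stock", "ambiguity": "low",  "stakes": "low"},
    {"name": "enter_market",  "ambiguity": "high", "stakes": "high"},
    {"name": "reroute_truck", "ambiguity": "high", "stakes": "low"},
]
for decision in bundle:
    print(decision["name"], "->", allocate(decision))
```

The allocation function itself could be re-evaluated as conditions change, which is the “dynamic” part of their proposal: the same decision might go to a system on a quiet day and to a human-plus-system pairing in a crisis.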
Footnotes
[1] Staff, “Non-tech businesses are beginning to use artificial intelligence at scale,” The Economist, 31 March 2018.
[2] Terence Mills, “Virtual Intelligence Vs. Artificial Intelligence: What’s The Difference?” Forbes, 27 March 2018.
[3] Stephen DeAngelis, “Artificial Intelligence: Time for Some Clarity,” Enterra Insights, 12 March 2018.
[4] Megan Murphy, “Ginni Rometty on the End of Programming,” Bloomberg Businessweek, 20 September 2017.
[5] Benn R. Konsynski and John J. Sviokla, “Cognitive Reapportionment: Rethinking the Location of Judgment in Managerial Decision Making,” in The Post-Bureaucratic Organization edited by Charles Heckscher and Anne Donnellon, Sage Publications (1994).