
Artificial Intelligence: Beyond the Buzz

September 21, 2016


“[Artificial Intelligence] is hot,” writes Om Malik (@om), founder of GigaOm, “and every company worth its stock price is talking about how this magical potion will change everything.”[1] Hot though it may be, Malik asserts that a lot of the buzz about artificial intelligence (AI) is unfounded. “Much like ‘the cloud,’ ‘big data,’ and ‘machine learning’ before it,” he explains, “the term ‘artificial intelligence’ has been hijacked by marketers and advertising copywriters. A lot of what people are calling ‘artificial intelligence’ is really data analytics — in other words, business as usual. If the hype leaves you asking ‘What is A.I., really?,’ don’t worry, you’re not alone.” He goes on to note that even experts don’t agree on a single definition of AI. “The only thing they all seem to agree on,” he reports, “is that artificial intelligence is a set of technologies that try to imitate or augment human intelligence. To me, the emphasis is on augmentation, in which intelligent software helps us interact and deal with the increasingly digital world we live in. … Augmented intelligence offers the possibility of winnowing an increasing number of inputs and options in a way that humans can’t manage without a helping hand.”

Malik is not alone in emphasizing the term “augmented intelligence.” IBM researchers also prefer it. They explain, “We are guided by the term ‘augmented intelligence’ rather than ‘artificial intelligence.’ It is the critical difference between systems that enhance and scale human expertise rather than those that attempt to replicate all of human intelligence. … We call our particular approach to augmented intelligence ‘cognitive computing.’ Cognitive computing is a comprehensive set of capabilities based on technologies such as machine learning, reasoning and decision technologies; language, speech and vision technologies; human interface technologies; distributed and high-performance computing; and new computing architectures and devices.”[2]


Although I doubt the term augmented intelligence will replace artificial intelligence in public discussions, I believe it much more accurately describes what cognitive computing is all about. I define cognitive computing as the combination of semantic intelligence (artificial intelligence + natural language processing) and computational intelligence (advanced mathematics). I include machine learning under the broader heading of artificial intelligence. My company’s entry in this field is called the Enterra Enterprise Cognitive System™ (ECS) — a system that can Sense, Think, Act, and Learn® — and it can be described as an augmented intelligence system. One of the benefits of the term augmented intelligence is that it evokes less fear than the term artificial intelligence. Science fiction literature and the motion picture industry have combined to paint a dark picture in which AI systems are destined to take over the world and either subjugate or destroy the human race. Augmented intelligence implies that humans remain in charge. Change is always scary, and, as Mark van Rijmenam (@VanRijmenam), founder of Datafloq, observes, “We have entered a world where accelerated change is the only constant. The speed at which technologies are currently developing is unlike any other since the existence of mankind.”[3]


Van Rijmenam points out that Charles Darwin theorized, “It is not the strongest of the species that survive, nor the most intelligent that survives. It is the one that is most adaptable to change.” Businesses are no different: the most adaptable are the ones that will survive and thrive in the years ahead, and cognitive computing systems are going to help them adapt. Van Rijmenam puts it this way, “Smart algorithms are taking over the world and they are taking over your business, so it is important to be aware of its potential and what it can do for your organization or if you don’t pay attention, how it can harm your business.” M. Hatipoglu agrees. He writes, “More companies are beginning to realize that the chances of their business surviving without adapting the use of artificial intelligence over the next few years are slim to none.”[4] Kevin Kelly (@kevin2kelly), founding Executive Editor of Wired magazine, bluntly tweeted, “In the very near future you will cognify everything in your life that is already electrified.” By that, Kelly means that cognitive computing systems will drive the emerging Internet of Things (IoT) — a machine-to-machine network that will connect billions of devices. The IoT, cognitive computing, and the cloud are three technologies van Rijmenam insists are changing the world.


According to van Rijmenam, “A swarm of sensors will engulf the earth and create a truly smart world. … In order to understand the massive amounts of data that will be generated by those trillions of sensors, we require smart algorithms. Fortunately, in the past years the development of algorithms has been brought to the next level thanks to advances in Artificial Intelligence, Machine-Learning and Deep Learning. These concepts are revolutionizing Big Data analytics and as such revolutionize your business.” The primary way cognitive computing will revolutionize business is by augmenting decision making. Bain analysts Michael C. Mankins and Lori Sherer note that decision making is one of the most important aspects of any business. “The best way to understand any company’s operations,” they write, “is to view them as a series of decisions.”[5] They elaborate:

“People in organizations make thousands of decisions every day. The decisions range from big, one-off strategic choices (such as where to locate the next multibillion-dollar plant) to everyday frontline decisions that add up to a lot of value over time (such as whether to suggest another purchase to a customer). In between those extremes are all the decisions that marketers, finance people, operations specialists and so on must make as they carry out their jobs week in and week out. We know from extensive research that decisions matter — a lot. Companies that make better decisions, make them faster and execute them more effectively than rivals nearly always turn in better financial performance. Not surprisingly, companies that employ advanced analytics to improve decision making and execution have the results to show for it.”

Better decision making results in better adaptability and, hence, better survivability in the years ahead. Malik writes, “An expert in a field where artificial intelligence and human-computer interaction intersect, [Michelle] Zhou breaks down A.I. into three stages. The first is recognition intelligence, in which algorithms running on ever more powerful computers can recognize patterns and glean topics from blocks of text, or perhaps even derive the meaning of a whole document from a few sentences. The second stage is cognitive intelligence, in which machines can go beyond pattern recognition and start making inferences from data. The third stage will be reached only when we can create virtual human beings, who can think, act, and behave as humans do. … Using Zhou’s three stages as a yardstick, we are only in the ‘recognition intelligence’ phase — today’s computers use deep learning to discover patterns faster and better. It’s true, however, that some companies are working on technologies that can be used for inferring meanings, which would be the next step.”

Enterra Solutions® is one of those companies. One of the techniques we use is rule chaining that leverages the world’s largest common sense ontology. Using the knowledge within the ontology, rule chaining draws inferences in two ways: the system either starts with an objective and works backwards to justify it with facts (backward chaining), or starts with facts and works forward to achieve a goal (forward chaining). The Enterra® Rules-Based Inference System has many more advanced capabilities, including the ability to deal with conflicting information. For example, a general rule like “birds fly” alongside a conflicting rule such as “chickens don’t fly” poses no problem for the Enterra system. Other key features include the ability to explain, in plain English (natural language), why the Inference Engine reached its conclusions. The lack of common sense rules proved to be a major limitation for early rules-based inference systems (also referred to as expert systems). All the common assumptions humans take for granted, such as “when I move my body, all my limbs move with me,” are obvious to a human but not to a computer; rule chaining over the ontology allows our system to reason more like a human being. Malik concludes we are still at the beginning of the augmented intelligence era and, as a result, “We’re going to have to deal with the hyperbole surrounding A.I.” Nevertheless, every day the reality grows closer to the hype.
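To make the chaining described above concrete, below is a minimal Python sketch of forward and backward chaining over a toy rule base, including a default rule (“birds fly”) overridden by an exception (“chickens don’t fly”). The rule format, the “~” marker for premises that must not be provable, and the handful of facts are simplifying assumptions introduced purely for illustration; they are not Enterra’s actual rule language or common sense ontology.

# A minimal, hypothetical sketch of forward and backward chaining over a
# toy propositional rule base. Each rule pairs a conclusion with the
# premises that justify it; a premise prefixed with "~" must NOT be
# derivable (a crude stand-in for exception handling).

RULES = [
    ("flies", ["bird", "~flightless"]),   # default rule: birds fly
    ("flightless", ["chicken"]),          # exception: chickens don't fly
    ("bird", ["chicken"]),                # chickens are birds
]

def forward_chain(facts):
    """Start from known facts and fire rules until nothing new is derived.
    Rules without "~" premises fire to a fixpoint first, so exceptions are
    derived before any default rule tests for their absence."""
    known = set(facts)
    for allow_negation in (False, True):
        changed = True
        while changed:
            changed = False
            for head, body in RULES:
                has_negation = any(p.startswith("~") for p in body)
                if has_negation != allow_negation or head in known:
                    continue
                positives = [p for p in body if not p.startswith("~")]
                negatives = [p[1:] for p in body if p.startswith("~")]
                if all(p in known for p in positives) and not any(
                        n in known for n in negatives):
                    known.add(head)
                    changed = True
    return known

def backward_chain(goal, facts):
    """Work backwards from a goal, recursively justifying it with facts.
    A "~" premise holds when its target cannot be proven (negation as
    failure). This naive version does not guard against rule cycles."""
    if goal in facts:
        return True
    for head, body in RULES:
        if head != goal:
            continue
        if all(not backward_chain(p[1:], facts) if p.startswith("~")
               else backward_chain(p, facts)
               for p in body):
            return True
    return False

print(forward_chain({"chicken"}))            # no 'flies': the exception wins
print(backward_chain("flies", {"bird"}))     # True: the default applies
print(backward_chain("flies", {"chicken"}))  # False: the exception blocks it

In a full rules-based inference system, the rules and facts would be drawn from a knowledge base such as the ontology mentioned above rather than hand-written lists, and the engine would typically record which rules fired so that its conclusions can be explained in natural language.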


Footnotes
[1] Om Malik, “The Hype—and Hope—of Artificial Intelligence,” The New Yorker, 26 August 2016.
[2] Staff, “Preparing for the Future of Artificial Intelligence,” IBM Research.
[3] Mark van Rijmenam, “How Unlimited Computing Power, Swarms of Sensors and Algorithms Will Rock our World,” Datafloq, 20 June 2016.
[4] M. Hatipoglu, “Artificial Intelligence Forms Become More Integrated Within Businesses,” TrendinTech, 4 August 2016.
[5] Michael C. Mankins and Lori Sherer, “Creating value through advanced analytics,” Bain Brief, 11 February 2015.
