
Making Sense of Cognitive Computing

February 2, 2015


“Artificial intelligence is suddenly everywhere,” writes Kurt Andersen (@KBAndersen). “It’s still what the experts call ‘soft A.I.,’ but it is proliferating like mad.”[1] Commenting on Andersen’s article, Irving Wladawsky-Berger writes, “Artificial intelligence is indeed everywhere, but these days, the term is used in so many different ways that it’s almost like saying that computers are now everywhere.”[2] Another term associated with artificial intelligence that is getting more notice is cognitive computing. “Cognitive computing is a term that probably goes over the head of most of the general public,” states James Kobielus, Senior Program Director of Product Marketing and Big Data Analytics Solutions at IBM. “IBM defines it as the ability of automated systems to learn and interact naturally with people to extend what either man or machine could do on their own, thereby helping human experts drill through big data rapidly to make better decisions.”[3] Cognitive computing is more than a repackaging of artificial intelligence. Accenture describes cognitive computing as the “ultimate long-term solution” for many of the challenges facing businesses today [“From Digitally Disrupted to Digital Disrupter”]. According to Accenture:

“Cognitive computing technology builds on [machine learning] by incorporating components of artificial intelligence to convey insights in seamless, natural ways to help humans or machines accomplish what they could not on their own. At its most advanced, cognitive computing will be the truly intelligent data supply chain — one that masks complexity by harnessing the power of data to help business users ask and answer strategic questions in a data-driven way.”

The “data supply chain” to which Accenture refers is a network over which integrated data is gathered, analyzed, and redistributed to whoever requires it. As with traditional supply chains, each company’s data supply chain is unique. Wladawsky-Berger notes that most artificial intelligence systems being discussed today are of the soft variety. Such AI systems are referred to in several ways besides “soft,” including “weak” and “narrow.” These systems are not going to develop into computer overlords. Anyone who has spent the last 50 years waiting for the creation of a sentient (i.e., self-aware) machine probably has the right to be skeptical. Although Ray Kurzweil insists we will see such a machine within the next 15 years, artificial general intelligence (AGI) has proven elusive.[4]


Associating terms like “cognitive” and “intelligent” with machines can be confusing, because some people assume they are being applied just as they are to humans, or that cognitive systems are sentient. As Wladawsky-Berger points out, “Soft, weak or narrow AI is inspired by, but doesn’t aim to mimic, the human brain. These are generally statistically oriented, computational intelligence methods for addressing complex problems based on the analysis of vast amounts of information using powerful computers and sophisticated algorithms, whose results exhibit qualities we tend to associate with human intelligence.” You should keep that in mind whenever you read about artificial intelligence, machine intelligence, smart machines, or cognitive computing. Even if computers do become genuinely “intelligent,” they are likely to possess a different kind of intelligence than humans do. Given the limitations of our current lexicon, I believe cognitive computing is a fairly good descriptive term for what systems like the Enterra Solutions® Cognitive Reasoning Platform™ (CRP) can do. I say this because “cognition” is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Substitute “analysis” for “thought” and you have a pretty good definition of a cognitive computing system: a system that discovers insights and relationships through analysis, machine learning, and sensing.


As the Accenture study notes, the purpose of cognitive computing systems is to mask the complexity of many business processes by automating many of the analytical and decision-making activities that need to take place in a timely (i.e., real-time or near-real-time) manner. Databases have become so large that some form of automation is now an imperative for most large businesses. Narrow (i.e., limited) artificial intelligence, rather than artificial general intelligence, is what businesses need. Cognitive computing systems that embrace a broad range of capabilities (e.g., analytics, complex event processing, machine learning, natural language processing, business rules, and image recognition) are rare. Enterra’s CRP embraces all of those capabilities except image recognition, which it does not currently support. Although each of the capabilities mentioned above might be called a cognitive task, I agree with pundits who insist that a system that can perform only one of those tasks probably shouldn’t be called a cognitive computing system.


A system, like Enterra’s CRP, that can perform most of those tasks, and does so using logical approaches similar to those a human expert would use to tackle a problem, does deserve to be called a cognitive computing system; but even I wouldn’t claim that such a system is universally useful. Nevertheless, cognitive computing is changing how we think about computer systems. Dario Gil (@dariogila), Director of Symbiotic Cognitive Systems at IBM Research, believes that cognitive computing is “an innovation so sweeping that it’s ushering in a new age of computing, along with a new partnership between humans and computers, one where we bring together skills and collaborate to produce better results.”[5] Gil explains:

“Computers, originally designed to help us through speedier calculation, automation, and pattern finding, are programmed in advance to perform every task they undertake. But in the era of Big Data, we need smarter, nimbler machines that can learn from experience, make sense of massive volumes of Big Data, discover new insights, and improve their own performance over time, with or without direct programming. These machines will help us think. Cognitive systems will learn, adapt, hypothesize, and recommend in real time. … Cognitive systems will help us overcome some of the limitations we’re encountering as we collect more information — whether it’s difficulties in processing large amounts of data quickly, drawing a big picture out of disparate data, or dealing with the sensory overload. … Cognitive systems will help us understand ourselves, our biases, and our reasoning. They won’t make decisions for us. But they will help us make better decisions in an ever more complex world.”

Although I agree with most of what Gil states, the best cognitive systems will, in fact, make some decisions for us. There’s nothing wrong or menacing about that. Some decisions are so routine that machines can make them without human intervention (and make them with fewer errors than humans). Other decisions will require human intervention after reviewing recommendations and insights proffered by the cognitive computing system. The trick is deciding which decisions need human intervention and which don’t. I believe that successful businesses in the future will be those that best figure out how to promote collaboration between humans and smart machines. True cognitive computing systems are highly adaptive and, with the help of smart humans, can be tailored to a variety of cognitively oriented situations.
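The split described above, automating routine decisions while escalating the rest for human review, can be sketched in a few lines of code. This is a minimal, hypothetical illustration; the names, the confidence threshold, and the use of a single confidence score are all assumptions for the sake of the example, not a description of how any particular product works.

```python
# Hypothetical human-in-the-loop decision routing: high-confidence, routine
# recommendations are executed automatically; everything else is escalated
# to a human reviewer. All names and thresholds here are illustrative.
from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str        # e.g., "reorder stock" or "hold shipment"
    confidence: float  # system's confidence in the action, in [0, 1]


def route_decision(rec: Recommendation, threshold: float = 0.9) -> str:
    """Automate decisions above the confidence threshold; escalate the rest."""
    if rec.confidence >= threshold:
        return f"auto-executed: {rec.action}"
    return f"escalated for human review: {rec.action}"
```

In practice the interesting work is in choosing the threshold (and deciding which classes of decision are eligible for automation at all), which is exactly the human judgment the paragraph above calls out.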


Footnotes
[1] Kurt Andersen, “Enthusiasts and Skeptics Debate Artificial Intelligence,” Vanity Fair, 26 November 2014.
[2] Irving Wladawsky-Berger, “‘Soft’ Artificial Intelligence Is Suddenly Everywhere,” The Wall Street Journal, 16 January 2015.
[3] Niaz Uddin, “James Kobielus: Big Data, Cognitive Computing and Future of Product,” eTalks, 12 December 2013.
[4] Kevin Loria, “2029: The Year AI Becomes Human,” The Fiscal Times, 29 December 2014.
[5] Dario Gil, “Exploring the Impact of Cognitive Computers,” Wired, 6 November 2013.
