
Cognitive Computing: The Next Big Thing is Already Here

August 11, 2015


“One developing bit of technology that promises to be a massive game-changer for the tech industry and for society as a whole,” writes Jamshed Avari (@ja140), “is cognitive computing. Already one of today’s darling buzzwords, cognitive computing is the vehicle through which companies are promising to usher in a whole new era of personal technology.”[1] Analysts at Allied Market Research add, “[The] cognitive computing technology era is ready to dominate major sectors.”[2] Cognitive computing burst onto the scene when IBM’s Watson defeated human champions on the television game show Jeopardy!. The public quickly realized that something new had arrived; but, the viewing audience wasn’t sure what to call it. To differentiate Watson’s technology from other computing platforms, IBM called it cognitive computing. That term is now widely accepted to define a range of intelligent computing systems. Rick Delgado (@ricknotdelgado) defines cognitive computing this way:[3]

“In the simplest sense possible, cognitive computing is the creation of self-learning systems that use data mining, pattern recognition and natural language processing (NLP) to mirror the way the human brain works. The purpose of cognitive computing is to create computing systems that can solve complicated problems without constant human oversight.”

James Kobielus (@jameskobielus), Senior Program Director of Product Marketing and Big Data Analytics at IBM, defines cognitive computing in a slightly different manner.[4] He writes:

“Cognitive computing is the ability of automated systems to handle the conscious, critical, logical, attentive, reasoning modes of thought. Cognitive computing platforms, such as IBM Watson, are at the very heart of the big-data analytics revolution.”

Watson basically uses a brute force approach to cognitive analytics. It analyzes massive amounts of data and provides a “best guess” answer (IBM calls it a “confidence-weighted response”) based on what it finds. That’s how Watson won Jeopardy! This brute force approach is often called deep learning.
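To make the idea of a “confidence-weighted response” concrete, here is a minimal Python sketch (purely illustrative, and not IBM’s actual Watson pipeline) in which several evidence passes score candidate answers and the highest-confidence candidate is returned:

```python
# Purely illustrative sketch of a "confidence-weighted response" (not IBM's
# actual Watson pipeline). Several evidence passes each score candidate
# answers; the candidate with the highest average confidence wins.
from collections import defaultdict

def confidence_weighted_answer(candidate_scores):
    """candidate_scores: list of (candidate, confidence) pairs produced by
    independent evidence passes. Returns the best candidate and its
    average confidence."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for candidate, confidence in candidate_scores:
        totals[candidate] += confidence
        counts[candidate] += 1
    best = max(totals, key=lambda c: totals[c] / counts[c])
    return best, totals[best] / counts[best]

# Example: evidence passes weighing in on a Jeopardy!-style clue.
scores = [("Toronto", 0.14), ("Chicago", 0.62), ("Chicago", 0.71), ("Toronto", 0.20)]
print(confidence_weighted_answer(scores))  # best candidate: 'Chicago'
```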

 

At Enterra Solutions®, we take a different approach to cognitive computing. We promote the Enterra Enterprise Cognitive System™ (ECS) as a system that can Sense, Think, Act, and Learn®. The ECS also uses various techniques to overcome the challenges associated with most deep learning systems. Like deep learning systems, the ECS gets smarter over time and self-tunes by automatically detecting correct and incorrect decision patterns; but, the ECS also bridges the gap between a pure mathematical technique and semantic understanding. The ECS has the ability to do math, but also understands and reasons about what was discovered. Marrying advanced mathematics with a semantic understanding is critical — we call this “Cognitive Reasoning.” The Enterra ECS cognitive computing approach — one that utilizes the best (and multiple) solvers based on the challenge to be solved and has the ability to interpret those results semantically — is a superior approach for many of the challenges to which deep learning is now being applied. The attached infographic, prepared by Wipro, stresses the fact that cognitive computing systems sense, learn, infer, and interact.
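As a rough illustration of that “right solver plus semantic interpretation” pattern, the sketch below selects a solver based on the type of problem and attaches a plain-language rationale to the result. The solver names and interpretation rule are invented for the example and are not Enterra’s actual implementation:

```python
# Hypothetical sketch of the "pick the right solver, then reason about the
# result" pattern described above. The solver names, problem types, and the
# plain-language interpretation are invented for illustration; this is not
# Enterra's actual ECS implementation.
def optimize(data):
    # Placeholder numeric solver: pretend the "best" answer is the minimum.
    return min(data)

def classify(data):
    # Placeholder pattern-recognition solver: return the most frequent label.
    return max(set(data), key=data.count)

SOLVERS = {"optimization": optimize, "classification": classify}

def solve_and_interpret(problem_type, data):
    result = SOLVERS[problem_type](data)  # mathematical step
    meaning = (f"A {problem_type} solver was selected for this problem; "
               f"it returned {result!r}, which is reported with its "
               "rationale so a human can review the reasoning.")  # semantic step
    return result, meaning

print(solve_and_interpret("classification", ["late", "on-time", "late"]))
```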

 

Delgado believes that interaction is one of the most important components of cognitive computing. “Cognitive computing will bring a high level of fluidity to analytics,” he writes. “Thanks to improvements in NLP, it’s becoming easier and easier to communicate with our machines. Staff who aren’t as familiar with data language or data processing, which are normally essential for proper analytical functions, could still interact with programs and platforms the way humans interact with each other. Meaning, by providing simple commands and using normal language, platforms built with AI technology could translate regular speech and requests into data queries, and then provide responses in the same manner they were received. With this kind of functionality, it would be much easier for anyone to work in the data field.” Interaction is certainly an essential component of cognitive computing; but, cognitive computing systems can also act autonomously to make routine business decisions so that human decision-makers are freed to concentrate on more pressing activities. David Siegel, co-chair of Two Sigma, believes that cognitive computing systems should be doing much more of the decision-making in the world.[5] “In a world awash with digital information,” he writes, “algorithms are better than people at analyzing complex interactions. What they lack in creativity, they more than make up for in consistency and speed.” He continues:
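A toy version of the natural-language interaction Delgado describes might look like the sketch below, in which a plain-English request is turned into a query over sample order data and the answer is returned as a sentence. The keyword matching and field names are assumptions made for illustration; a production system would rely on a genuine NLP pipeline:

```python
# Toy sketch of the natural-language interaction described above: a plain
# request is mapped to a structured query over sample data and the answer
# comes back as a sentence. The keyword matching and field names are
# assumptions for illustration; a real system would use an NLP pipeline.
ORDERS = [
    {"region": "east", "late": True},
    {"region": "west", "late": False},
    {"region": "east", "late": False},
]

def ask(question):
    region = "east" if "east" in question.lower() else "west"
    late_orders = [o for o in ORDERS if o["region"] == region and o["late"]]
    return f"{len(late_orders)} order(s) in the {region} region shipped late."

print(ask("How many orders in the East were late?"))
# -> "1 order(s) in the east region shipped late."
```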

“In fields as wide ranging as medical diagnosis, meteorology and finance, dozens of studies have found that algorithms at least match — and usually surpass — subjective human analysis. Researchers have devised algorithms that estimate the likelihood of events such as a particular convict lapsing back into crime after being released from custody, or a particular business start-up going bust. When they pitted the predictive powers of these programs against human observers, they found that the humans did worse. And that, presumably, was on a good day. Aside from their systematic failings, people get sick, tired, distracted and bored. We get emotional. We can retain and recall a limited amount of information under the very best of circumstances. Most of these quirks we cherish, but in a growing number of domains we no longer need to tolerate the limitations they entail. Nor do we have much to gain from doing so. Yet we seem determined to persevere, tending to forgive ‘human error’ while demanding infallibility from algorithms. … The sooner we learn to place our faith in algorithms to perform the tasks at which they demonstrably excel, the better off we humans will be.”

In a letter to the editor of the Financial Times, Devin Hosea (@devin91), Founder and CEO of PredictiCare, agreed with Siegel about the benefits of algorithmic decision-making; but, like Delgado, he stresses that man/machine interaction is necessary to make cognitive computing a trusted colleague.[6] “Algorithmic software must be able to interact intelligently with the human professional before it will be respected as a trustworthy adviser instead of a ‘black box’,” he writes. “These systems must work in intellectual partnership with their users, not as a substitute. Algorithmic ‘infallibility’ is not sufficient to engender expert practitioner adoption — after all, they’re only human.” One of the benefits of Enterra’s approach is that it is not a “black box” system. It explains to users in plain language why it provided its analytic insights. This approach is exactly what Hosea recommends. Avari concludes, “The possibilities are endless — AI that’s more Star Trek than Siri is only the beginning. Significant progress has already been made, and now the industry seems ready to take on a massive leap. … Cognitive computing is real and is about to become very mainstream. As with all technology, it’s what we make of it that will define our future.”

 

Footnotes
[1] Jamshed Avari, “Cognitive Computing Isn’t Just a Buzzword, It’s the Next Big Thing in Technology,” NDTV Gadgets, 12 May 2015.
[2] “Cognitive Computing Technology Era Is Ready To Dominate Major Sectors,” Allied Market Research, 16 June 2015.
[3] Rick Delgado, “Cognitive Computing: Solving the Big Data Problem?” KDnuggets, 17 June 2015.
[4] James Kobielus, “Cognitive computing? Dump the word ‘artificial’ from ‘artificial intelligence’ in discussing truly intelligent systems (part 1),” LinkedIn, 2 June 2015.
[5] David Siegel, “Human error is unforgivable when we shun infallible algorithms,” Financial Times, 4 June 2015.
[6] Devin Hosea, “Interacting with humans on human terms,” Financial Times, 8 June 2015.
