
Let’s Discuss Cognitive Technologies

December 3, 2020


One of the most frequently used computer-related terms being tossed about in business circles is “artificial intelligence” (AI). Arvind Narayanan (@random_walker), an associate professor at Princeton, asserts, “Most of the products or applications being sold today as artificial intelligence (AI) are little more than ‘snake oil’.”[1] That may sound harsh; however, Eric Siegel (@predictanalytic), a former computer science professor at Columbia University, explains, “The much better, precise term would instead usually be machine learning — which is genuinely powerful and everyone oughta be excited about it.”[2] Some analysts prefer the term “cognitive technologies” because it better conveys that a number of technologies and methods huddle under the AI umbrella. Several years ago, IBM coined the term “cognitive computing,” which the company believes better describes how cutting-edge techniques can be used to augment human decision-making.

 

Some experts, including Siegel, have problems with the modifier “cognitive,” believing it conjures up images of consciousness. He explains, “The term ‘cognitive computing’ … is another poorly-defined term coined to allege a relationship between technology and human cognition.”[2] Cognition is defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Of course, that definition needs to be modified when applied to a “cognitive” machine. A cognitive system is one that discovers knowledge, gains insights, and establishes relationships through analysis, machine learning, and sensing of data. I’m sympathetic to those who dislike the term, but no one has come up with a better one. I think the term “machine learning” is too confining.

 

What is cognitive computing?

 

Historically, AI has been classified as either weak or strong. Weak AI can be considered the “youth” version of AI, and strong AI (sometimes called artificial general intelligence) the “adult” version. Using that taxonomy, cognitive computing can be considered the “adolescent” version of AI.

 

  • Weak AI (Youth): Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” Weak AI operates on limited data sets to answer a limited range of questions. It produces quality answers but with no “understanding” of those answers. Machine learning falls into this category.
  • Strong AI (Adulthood): Strong AI historically refers to Artificial General Intelligence (AGI) — that is, a machine with consciousness, sentience and mind, “with the ability to apply intelligence to any problem, rather than just one specific problem.” The AGI Society notes the ultimate goal of AGI is to develop “thinking machines” (i.e., “general-purpose systems with intelligence comparable to that of the human mind”).
  • Cognitive Computing (Adolescence): Cognitive Computing lies between weak and strong AI. It operates on large and varied data sets to perform complex tasks or to answer a multitude of questions in a variety of categories. Like human adolescents, cognitive computing often finds it must look for answers in ambiguous situations.

 

The Cognitive Computing Consortium explains, “Cognitive computing makes a new class of problems computable. It addresses complex situations that are characterized by ambiguity and uncertainty; in other words it handles human kinds of problems.”[3] The Consortium adds:

 

In these dynamic, information-rich, and shifting situations, data tends to change frequently, and it is often conflicting. The goals of users evolve as they learn more and redefine their objectives. To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. To do this, systems often need to weigh conflicting evidence and suggest an answer that is ‘best’ rather than ‘right’. Cognitive computing systems make context computable. They identify and extract context features such as hour, location, task, history or profile to present an information set that is appropriate for an individual or for a dependent application engaged in a specific process at a specific time and place. They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the needs of the moment.
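To make the Consortium’s description a bit more concrete, here is a minimal sketch of what ranked, confidence-scored answers shaped by context might look like in code. It is purely illustrative: the candidate answers, the weights, and the hand-written rules are all invented for this example and are not drawn from the Consortium or from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A few of the context features the Consortium mentions (hour, location, task)."""
    hour: int       # hour of day, 0-23
    location: str   # e.g., "warehouse" or "office" (hypothetical values)
    task: str       # e.g., "replenishment" or "reporting" (hypothetical values)

# Hypothetical candidate answers with base confidence scores.
CANDIDATES = {
    "expedite shipment": 0.55,
    "reallocate inventory": 0.50,
    "wait for tomorrow's demand signal": 0.40,
}

def rank_answers(ctx: Context) -> list[tuple[str, float]]:
    """Adjust base confidences with simple, made-up context rules and return
    the candidates ranked 'best first' rather than a single 'right' answer."""
    scores = dict(CANDIDATES)
    if ctx.task == "replenishment":
        scores["reallocate inventory"] += 0.15
    if ctx.hour >= 18:                      # late in the day, speed matters more
        scores["expedite shipment"] += 0.10
    if ctx.location == "warehouse":
        scores["wait for tomorrow's demand signal"] -= 0.05
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    ranked = rank_answers(Context(hour=19, location="warehouse", task="replenishment"))
    for answer, score in ranked:
        print(f"{score:.2f}  {answer}")
```

In a real cognitive computing system the scores would come from learned models weighing conflicting evidence, not hand-written rules; the sketch only shows the shape of the output, namely several options with confidence levels rather than one definitive answer.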

 

Enterra’s cognitive computing system — the Enterra Cognitive Core™, a system that can Sense, Think, Act, and Learn® — is an actualization of the Consortium’s explanation. To help people better understand cognitive computing, Enterra defines it as the inter-combination of semantics and computational intelligence (i.e., machine learning). Semantics, in this case, refers to having a symbolic representation of the knowledge domain’s concepts, interrelationships, and rules, which we model within a technology called a Rule-based Ontology. Our ontology allows cognitive computing systems to learn generalizations, encode learnings as rules, and contextualize numerical values (e.g., 100 is not just a number, but the Celsius temperature at which water boils). Like all cognitive computing systems, our system helps decision-makers who must make decisions based on ambiguous data.
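As a rough illustration of what “contextualizing numerical values” with symbolic rules can look like, consider the minimal sketch below. It is not Enterra’s Cognitive Core or its Rule-based Ontology technology; the Concept class, the rules, and the water-boiling example are assumptions made purely to show how rules can attach meaning to an otherwise bare number.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Concept:
    """A toy stand-in for an ontology concept: a name plus interpretation rules."""
    name: str
    # Each rule maps a raw value to a symbolic reading, or None if it does not apply.
    rules: List[Callable[[float], Optional[str]]] = field(default_factory=list)

    def interpret(self, value: float) -> List[str]:
        """Return every symbolic reading the rules give this value."""
        readings = []
        for rule in self.rules:
            reading = rule(value)
            if reading is not None:
                readings.append(reading)
        return readings

# A hypothetical "temperature in Celsius" concept with two hand-written rules.
temperature_c = Concept(
    name="TemperatureCelsius",
    rules=[
        lambda v: "boiling point of water at sea level" if v == 100 else None,
        lambda v: "below freezing" if v < 0 else None,
    ],
)

print(temperature_c.interpret(100.0))  # ['boiling point of water at sea level']
print(temperature_c.interpret(-5.0))   # ['below freezing']
```

A production ontology would represent concepts, interrelationships, and rules declaratively rather than as Python functions; the sketch only shows how a rule lets a system read “100” as the boiling point of water in the right context.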

 

When should you use cognitive computing?

 

Sue Feldman (@susanfeldman), from the Cognitive Computing Consortium, notes, “Cognitive Computing extends computing to a new set of complex, human, ambiguous problems, but it’s not applicable in every context. The value received must be justified in terms of cost and productivity, or should provide a competitive edge. It shouldn’t necessarily replace those already in use.”[4] She offers the following guidelines for when to implement cognitive computing solutions and when not to.

 

When to Use:

 

  • When problems are complex (i.e., information and situations are shifting and the outcome depends on context).
  • When there are diverse, changing data sources (e.g., when using structured data with unstructured data, like text or images).
  • When there is no clear right answer (i.e., when evidence is complex, conflicting or ambiguous).
  • When multiple ranked, confidence-scored options are needed.
  • When unpredictability makes processing intensive and difficult to automate.
  • When context-dependent information is desired, based on time, user, location, or point in task.
  • When exploration or work across silos is a priority.

 

When Not to Use:

 

  • When predictable, repeatable results are required (e.g., sales reports, inventory tracking).
  • When all data is structured, numeric and predictable.
  • When human-machine natural language interaction is not necessary.
  • When a probabilistic approach is not desirable.
  • When shifting views and answers are not appropriate or are indefensible due to industry regulations.
  • When existing transactional systems are adequate.

 

Concluding thoughts

 

SAS analysts Alison Bolen, Hui Li, and Wayne Thompson observe, “Cognitive computing brings with it a promise of genuine, human-to-machine interaction. When machines become cognitive, they can understand requests, connect data points and draw conclusions. They can reason, observe and plan.”[5] As with any technology, a business case must be made for its use; however, finding a business case for cognitive computing isn’t difficult, because every business needs to make decisions. In fact, Bain analysts Michael C. Mankins and Lori Sherer (@lorisherer) assert, “The best way to understand any company’s operations is to view them as a series of decisions.”[6] They add, “We know from extensive research that decisions matter — a lot. Companies that make better decisions, make them faster and execute them more effectively than rivals nearly always turn in better financial performance. Not surprisingly, companies that employ advanced analytics to improve decision making and execution have the results to show for it.”

 

Footnotes
[1] Dev Kundaliya, “Much of what’s being sold as ‘AI’ today is snake oil, says Princeton professor,” Computing, 20 November 2019.
[2] Eric Siegel, “Why A.I. is a big fat lie,” Big Think, 23 January 2019.
[3] Staff, “Cognitive Computing Definition,” Cognitive Computing Consortium.
[4] Amber Lee Dennis, “Cognitive Computing Demystified: The What, Why, and How,” Dataversity, 15 February 2017.
[5] Alison Bolen, Hui Li, and Wayne Thompson, “Becoming Cognitive: Understanding cognitive computing versus artificial intelligence, deep learning and machine learning,” Longitudes, 3 April 2017.
[6] Michael C. Mankins and Lori Sherer, “Creating value through advanced analytics,” Bain Brief, 11 February 2015.
