
Machine Learning: A Primer for the Technically Challenged, Part 1

May 21, 2015


“In the age of data science,” Shelly Palmer, Managing Director of the Digital Media Group at Landmark Ventures, writes, “machine learning and pattern matching are the building blocks of competitive advantage.”[1] Palmer’s headline, which questions whether machines can really learn, is certainly at odds with his assertion that machine learning is a building block of competitive advantage. He might not have selected that headline himself; regardless, the better question for his headline would have been, “Can machines really think?” As Palmer points out, “For our purpose, ‘to learn’ is not cognitive; it is operational.” In today’s world, even the term “cognitive” means different things to different people. Cognition is formally defined as “the action or process of acquiring knowledge and understanding through thought, experience, and the senses.” Of course, that definition has to be modified slightly when applied to a machine. At Enterra Solutions®, we believe a cognitive system is a system that discovers insights and relationships through analysis, machine learning, and sensing. In a very broad sense, machines can think; but they are a long way from being sentient.

 

Derrick Harris (@derrickharris), a Senior Research Analyst at Mesosphere, explains that researchers are making great strides in machine learning.[2] He notes that in today’s business world, machine learning is becoming ubiquitous. “Machine learning became the new black as it became baked into untold software packages and services — machine learning for marketing, machine learning for security, machine learning for operations, and on and on and on.” All of that raises the question: What is machine learning? Palmer points out that there are four “main categories of machine learning tasks.” They are:

  • Supervised learning — where you teach it
  • Unsupervised learning — where you let it learn by itself
  • Reinforcement learning — where it learns by trial and error
  • Deep learning — where it uses hierarchical or contextual techniques to learn
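
To make the first two categories concrete, here is a minimal, purely illustrative sketch (it uses scikit-learn and a made-up toy dataset, neither of which Palmer mentions) of supervised versus unsupervised learning:

```python
# Illustrative sketch only: supervised vs. unsupervised learning with
# scikit-learn on a small, hypothetical dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])  # feature vectors
y = np.array([0, 0, 1, 1])                                       # known labels

# Supervised learning: we "teach it" with labeled examples.
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[1.2, 1.9]]))   # label predicted for a new point

# Unsupervised learning: it "learns by itself," finding clusters without labels.
clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print(clusters.labels_)                   # group assignments discovered from X alone
```

Reinforcement learning and deep learning follow the same fit-a-model-to-data pattern, but the former learns from trial-and-error reward signals and the latter from layered, hierarchical representations.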

To that list, I would add a fifth, hybrid approach that involves semantic reasoning to bridge the gap between a pure mathematical technique and semantic understanding. Palmer adds, “The goal of each of these tasks is to ‘teach’ a computer program to apply generalized rules to data sets and yield useful results.” Tomasz Malisiewicz believes there remains some confusion about the terms “pattern recognition,” “machine learning,” and “deep learning.” He writes, “‘Pattern recognition,’ ‘machine learning,’ and ‘deep learning’ represent three different schools of thought. Pattern recognition is the oldest (and as a term is quite outdated). Machine Learning is the most fundamental (one of the hottest areas for startups and research labs as of today, early 2015). And Deep Learning is the new, the big, the bleeding-edge — we’re not even close to thinking about the post-deep-learning era.”[3] Malisiewicz might be wrong on that last point. I think there are techniques that are looking beyond deep learning (see below). In his article, Malisiewicz goes on to explain briefly how “pattern recognition,” “machine learning,” and “deep learning” evolved. He concludes:

“Machine Learning is here to stay. Don’t think about it as Pattern Recognition vs Machine Learning vs Deep Learning, just realize that each term emphasizes something a little bit different. But the search continues. Go ahead and explore. Break something. We will continue building smarter software and our algorithms will continue to learn, but we’ve only begun to explore the kinds of architectures that can truly rule-them-all.”

Serdar Yegulalp (@syegulalp) reminds us that, in the midst of all the hype about machine learning, people should remember that not all machine learning is created equal. “Barely a week goes by,” he writes, “when you don’t see a product or service advertised as powered by ‘machine learning.’ When a label is so broadly employed, it risks being devalued — or used to describe something that barely qualifies as machine learning.”[4] He continues:

“Little doubt exists machine learning is a very real and powerful technology, and everyone from Microsoft to IBM is making hay with it. What complicates the picture is how a professed technology employs machine learning — via the more complex techniques we commonly associate with the label or a low-end concept that has more in common with basic statistics. … ‘Machine learning’ is a generic description that encompasses a lot of different strategies, with products using techniques at the low end fitting the label in only the most rudimentary way. That makes it easier to promote a product that employs any of those strategies as one powered by ‘machine learning.’ … But the term remains associated with highly sophisticated techniques.”

In his article, Yegulalp quotes from a paper written by Alex Pinto, chief data scientist of the MLSec Project. In that paper, Pinto wrote, “Indeed, math is powerful, and large-scale machine learning is an important cornerstone of much of the systems that we use today. However, not all algorithms and techniques are born equal.” Obviously, at Enterra, we believe our Cognitive Reasoning Platform™ (CRP), which we advertise as a system that can Sense, Think, Act, and Learn®, is one of the better approaches in the market today. The CRP has the ability to do the math, but it also understands and reasons about what was discovered. Marrying advanced mathematics with a semantic understanding is critical — we call this “Cognitive Reasoning.” The Enterra CRP cognitive computing approach — one that utilizes the best (and multiple) solvers based on the challenge to be solved and has the ability to interpret those results semantically — is a superior approach for many of the challenges to which deep learning is now being applied. In many ways, this approach looks beyond deep learning.
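
Enterra has not published the internals of the CRP, so the following is only a generic, hypothetical sketch of the pattern described above: a mathematical solver flags something statistically interesting, and a small semantic layer (here just a toy dictionary standing in for an ontology) interprets what the flag means in business terms. All names and values are invented for illustration.

```python
# Hypothetical sketch (not Enterra's CRP): pair a statistical solver with a
# toy semantic layer so numeric findings can be interpreted, not just reported.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented transaction features: [order size, days late]
X = np.array([[10, 0], [12, 1], [11, 0], [200, 14]])

# Step 1 (the math): flag statistical outliers (-1 means outlier).
flags = IsolationForest(random_state=0).fit_predict(X)

# Step 2 (the semantics): map records to business concepts so the system can
# reason about why a flag matters.
ontology = {3: {"is_a": "Order", "supplier": "AcmeCo", "supplier_status": "probation"}}

for i, flag in enumerate(flags):
    facts = ontology.get(i, {})
    if flag == -1 and facts.get("supplier_status") == "probation":
        print(f"Record {i}: statistical outlier and supplier on probation -> escalate")
```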

 

Gary Marcus, a professor of cognitive science at N.Y.U., writes, “There is good reason to be excited about deep learning, a sophisticated ‘machine learning’ algorithm that far exceeds many of its predecessors in its abilities to recognize syllables and images. But there’s also good reason to be skeptical.”[5] He explains:

“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson, the machine that beat humans in ‘Jeopardy,’ use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.”

As I explained above, many of the challenges noted by Marcus can be addressed with the addition of semantic reasoning. By adding an ontology into the mix, a cognitive computing system can understand abstract ideas like “sibling” and how a sibling is related to other family members. The bottom line is that cognitive computing is taking machine learning to a new level, and the insights and decision assistance it provides will indeed give companies a competitive advantage.
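
As a purely hypothetical illustration of that point, the abstract relation “sibling” can be written as a simple rule over an ontology of parent-child facts, something a statistical pattern-matcher has no direct way to represent:

```python
# Hypothetical sketch: deriving the abstract relation "sibling" from an
# ontology of parent-child facts by applying a simple rule.
parent_of = {
    "alice": {"carol", "dave"},   # alice is a parent of carol and dave
    "bob":   {"carol", "dave"},
}

def siblings(person):
    """Two distinct people are siblings if they share at least one parent."""
    result = set()
    for children in parent_of.values():
        if person in children:
            result |= children - {person}
    return result

print(siblings("carol"))   # -> {'dave'}
```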

 

Footnotes
[1] Shelly Palmer, “Can Machines Really Learn?” Huffington Post The Blog, 15 March 2015.
[2] Derrick Harris, “Remember when machine learning was hard? That’s about to change,” Gigaom, 21 February 2015.
[3] Tomasz Malisiewicz, “Deep Learning vs Machine Learning vs Pattern Recognition,” Tombone’s Computer Vision Blog, 20 March 2015.
[4] Serdar Yegulalp, “Not all machine learning is created equal,” InfoWorld, 20 March 2015.
[5] Gary Marcus, “Is ‘Deep Learning’ a Revolution in Artificial Intelligence?” The New Yorker, 25 November 2012.
