
Machine Learning and Artificial Intelligence

April 3, 2017


“The first thing you need to know about AI (and machine learning),” writes Eric Knorr (@EricKnorr), “is that it’s full of confusing, overlapping terminology, not to mention algorithms with functions that are opaque to all but a select few.”[1] Too often, he explains, people use terms like artificial intelligence (AI), cognitive computing, and machine learning (ML) interchangeably. Anyone who does, he writes, should get “busted by the artificial thought police.” Knorr goes on to explain, “Artificial intelligence is the umbrella phrase under which all other terminology in this area falls. … Machine learning … refers to a subset of AI in which programs feed on data and, by recognizing patterns in that data and learning from them, draw inferences or make predictions without being explicitly programmed to do so.” Even though AI is the umbrella term, Matt Escobar states, “The core foundation of artificial intelligence is rooted in machine learning, which is an elegant and widely accessible tool. … Machine learning refers to teaching computers how to analyse data for solving particular tasks through algorithms.”[2]

 

Data and Algorithms

 

Knorr writes, “Most of the recent advances we hear about fall under the rubric of machine learning. Why is it so hot today? You often hear that Moore’s Law and cheap, abundant memory have given new life to old machine learning algorithms, which have led to a wave of practical applications. That’s true, but even more important has been the hyperabundance of data to enable machine learning systems to learn.” Artificial intelligence, generally, and machine learning, specifically, have a symbiotic relationship with data. “Machine learning,” Escobar writes, “comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour. Data have two key elements — samples and features. The former represents individual elements in a group; the latter amounts to characteristics shared by them.” One reason people like to use the term “machine learning” is that, unlike the term “artificial intelligence”, it doesn’t imply that machines are “thinking” in the same way humans do. “The beauty of machine learning,” writes John Mannes (@JohnMannes), “is that instead of pretending computers are human and simply feeding them with knowledge, we help computers to reason and then let them generalize what they’ve learned to new information.”[3]
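To make “samples and features” concrete, here is a minimal sketch in Python using scikit-learn. The customer-style data and labels are purely hypothetical; the point is simply that each row is a sample, each column is a feature, and the model learns a pattern from them rather than being explicitly programmed with rules.

```python
from sklearn.linear_model import LogisticRegression

# Four samples (rows), two features each (columns); values are hypothetical.
X = [[25, 40.0],    # e.g., customer age, monthly spend
     [32, 55.0],
     [47, 310.0],
     [51, 290.0]]
y = [0, 0, 1, 1]    # label to predict (e.g., 0 = standard plan, 1 = premium plan)

model = LogisticRegression()
model.fit(X, y)                        # learn a pattern from the samples
print(model.predict([[45, 300.0]]))    # infer a label for a new, unseen sample
```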

 

The discussion so far makes it clear that data is the sine qua non of machine learning; but, the algorithms and functions that turn data into insights are just as important. In fact, a number of things converged to make machine learning the hot topic it is today. “Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful,” explains Michael Copeland. “It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) — images, text, transactions, mapping data, you name it.”[4] As Knorr noted, the convergence of these technologies gave “new life to old machine learning algorithms.” Timo Elliott (@timoelliott), SAP’s Global Innovation Evangelist, notes one of the benefits associated with the rise of AI was “much easier … access to powerful algorithm-based software in the form of open-source products or embedded as a service in enterprise platforms.”[5]

 

Types of Machine Learning

 

Sunil Ray (@Sunil2Ray), a Business Analytics and Intelligence professional, notes there are basically three types of machine learning.[6] They are as follows (a short code sketch after the list illustrates the first two):

 

1. Supervised Learning. “This [type of] algorithm consists of a target/outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). Using these set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves a desired level of accuracy on the training data. Examples of Supervised Learning: Regression, Decision Tree, Random Forest, KNN, Logistic Regression, etc.”

 

2. Unsupervised Learning. “In this [type of] algorithm, we do not have any target or outcome variable to predict/estimate. It is used for clustering population in different groups, which is widely used for segmenting customers in different groups for specific intervention. Examples of Unsupervised Learning: Apriori algorithm, K-means.”

 

3. Reinforcement Learning. “Using this [type of] algorithm, the machine is trained to make specific decisions. … The machine is exposed to an environment where it trains itself continually using trial and error. This machine learns from past experience and tries to capture the best possible knowledge to make accurate business decisions. Example of Reinforcement Learning: Markov Decision Process.”
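As a rough illustration of the first two types, here is a minimal scikit-learn sketch with toy data. The algorithms (a decision tree and k-means) come from Ray’s examples above; the data and parameters are illustrative only, not a recommended setup.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised learning: predictors plus a known target/outcome variable.
X_train = [[0, 0], [1, 1], [0, 1], [1, 0]]
y_train = [0, 1, 1, 0]                       # the outcome we want to predict
clf = DecisionTreeClassifier().fit(X_train, y_train)
print(clf.predict([[1, 1]]))                 # predict the target for new data

# Unsupervised learning: no target variable; group the samples into clusters.
X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [7.9, 8.1]]
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                            # cluster assignment for each sample
```

Reinforcement learning follows a different pattern: rather than fitting a model to a fixed data set, an agent interacts with an environment and learns from the rewards its decisions produce, so it is not shown in the sketch above.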

 

Other terms you often hear during machine learning discussions are:

 

Neural Networks. “Neural networks,” Knorr explains, “are a form of machine learning dating back to early AI research. They very loosely emulate the way neurons in the brain work — the objective generally being pattern recognition. As neural networks are trained with data, connections between neurons are strengthened, the outputs from which form patterns and drive machine decision-making.”
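A very rough sketch of the idea Knorr describes, using only NumPy: inputs flow through weighted connections to a layer of “neurons,” and training (not shown here) would adjust those weights. All numbers below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])               # input features
W = np.array([[ 0.2, -0.4],            # connection weights: 3 neurons x 2 inputs
              [ 0.7,  0.1],
              [-0.3,  0.9]])
b = np.array([0.0, 0.1, -0.1])         # biases

activations = sigmoid(W @ x + b)       # each "neuron" combines its weighted inputs
print(activations)                     # training would adjust W and b so these
                                       # activations form useful patterns
```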

 

Deep Learning. “The term Deep Learning (DL),” writes Paramita Ghosh, “is a specialized subfield of Machine Learning that can enable software systems to self-train for the performance of particular tasks. Thus AI is the parent technology with ML as a child, and DL probably as a grandchild. Machine Learning is essentially an intermediary between Artificial Intelligence and Deep Learning.”[7] “In most cases,” Knorr adds, “deep learning refers to many layers of neural networks working together.”
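To illustrate “many layers of neural networks working together,” here is a minimal sketch using scikit-learn’s MLPClassifier with several stacked hidden layers. The layer sizes and toy data are illustrative only; real deep learning systems are far larger and typically built with dedicated frameworks.

```python
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                       # XOR-style toy problem

# Three stacked hidden layers of 16 neurons each -- "deep" only in spirit.
deep_net = MLPClassifier(hidden_layer_sizes=(16, 16, 16),
                         max_iter=2000, random_state=0)
deep_net.fit(X, y)
print(deep_net.predict([[1, 0]]))
```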

 

Machine Learning Use Cases

 

According to Elliott, “Organizations today [are] more comfortable with manipulating business data.” Thanks to an array of new advanced analytic tools, he asserts, “Enterprises can take their traditional analytics to the next level.” I agree that machine learning has a number of uses in the business world. Sudhir Gupta points out, however, that machine learning “techniques can provide significant benefits in many other areas” as well.[8] For example:

 

– Electricity load prediction based on weather conditions (see the sketch after this list).
– Algorithm based credit approval for bank loans.
– Fraud detection and other applications in financial services.
– Migration trends based on geo-tagged Twitter feeds.
– Diagnostics and patient management in health care.
– Criminal justice.
– Media and knowledge management.
– Transportation and traffic management.
– Weather monitoring.
– Sustainability and wildlife conservation.
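As a hedged illustration of the first item, here is a minimal sketch of predicting electricity load from temperature with a simple linear regression. The figures are hypothetical, and a real model would use richer weather features and a more flexible algorithm.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical, heating-dominated demand: colder weather, higher load.
temps = [[-5], [0], [5], [15], [25]]   # outdoor temperature (deg C)
load = [980, 900, 840, 700, 620]       # electricity load (MW)

model = LinearRegression().fit(temps, load)
print(model.predict([[10]]))           # estimated load at 10 deg C
```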

 

Summary

 

Because artificial intelligence is an umbrella term covering a number of approaches aimed at making machines smart, there will likely remain some ambiguity of terms and imprecision of language when the topic is discussed. Machine learning falls under the AI moniker; but, as the above discussion has shown, it holds a particular niche in the world of AI. Machine learning has risen in prominence thanks to advances in computing, data generation, data storage, and algorithms that help make sense of it all. In the years ahead, hype about machine learning will quickly turn into reality.

 

Footnotes
[1] Eric Knorr, “Making sense of machine learning,” InfoWorld, 6 March 2017.
[2] Matt Escobar, “Artificial Intelligence: Here’s What You Need to Know to Understand How Machines Learn,” Dataconomy, 1 March 2017.
[3] John Mannes, “WTF is machine learning?” TechCrunch, 23 October 2016.
[4] Michael Copeland, “What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?” NVIDIA Blog, 29 July 2016.
[5] Timo Elliott, “A Shortcut Guide To Machine Learning And AI In The Enterprise,” D!gitalist, 17 October 2016.
[6] Sunil Ray, “Essentials of Machine Learning Algorithms (with Python and R Codes),” Analytics Vidhya, 10 August 2015.
[7] Paramita Ghosh, “Is Machine Learning Ready to Take on Artificial Intelligence?” Dataversity, 9 February 2017.
[8] Sudhir Gupta, “Machine Learning and the Future of Artificial Intelligence,” BW Disrupt, 18 July 2016.
