Machine learning is a subset of artificial intelligence (AI), but the two terms are often used interchangeably. Many people use the term “artificial intelligence” because it sounds more exotic and futuristic. AI sells. By itself, however, the term AI is meaningless, and many business executives are starting to recognize the difference between the hype and the reality. They honestly don’t care as much about the technology (or what it’s called) as they do about the results it can produce. Randy Bean (@RandyBeanNVP), CEO of NewVantage Partners, explains, “Business executives want to understand whether a technology or algorithmic approach is going to improve business, provide for better customer experience, and generate operational efficiencies such as speed, cost savings, and greater precision.”
Big data and machine learning
Machine learning has become a topic of interest because of the rise of big data. Tim Keary (@TechWriterT) explains, “Ever since the rise of big data, enterprises of all sizes have been in a state of uncertainty. Today [they] have more data available than ever before, but few have been able to implement the procedures to turn this data into insights. To the human eye, there is just too much data to process.” Enter machine learning. Machine learning requires data — and lots of it — in order to learn properly. For most economic sectors, generating data is not the challenge. As Keary notes, gaining insights and taking appropriate actions are where the rubber meets the road. With the advent of the Internet of Things (IoT), the volume of data predicted to be generated is staggering, but so are the potential benefits of analyzing and acting on all that data. Any industry that uses resources intensively, involves complex processes, manages complicated machinery, or operates fleets of vehicles is a prime candidate for AI-related solutions. In the future, access to machine learning will likely come from cognitive computing platforms. Although there are various technologies associated with cognitive computing, my definition involves a combination of semantic intelligence (machine learning, natural language processing, and ontologies) and computational intelligence (advanced mathematics). This combination provides the greatest adaptability and flexibility for an organization.
The reason cognitive technologies are so valuable is that they were designed primarily to augment decision-making. Bain analysts Michael C. Mankins and Lori Sherer (@lorisherer) assert that if you can improve a company’s decision making, you can dramatically improve its bottom line. They explain:
“The best way to understand any company’s operations is to view them as a series of decisions. People in organizations make thousands of decisions every day. The decisions range from big, one-off strategic choices (such as where to locate the next multibillion-dollar plant) to everyday frontline decisions that add up to a lot of value over time (such as whether to suggest another purchase to a customer). In between those extremes are all the decisions that marketers, finance people, operations specialists and so on must make as they carry out their jobs week in and week out. We know from extensive research that decisions matter — a lot. Companies that make better decisions, make them faster and execute them more effectively than rivals nearly always turn in better financial performance. Not surprisingly, companies that employ advanced analytics to improve decision making and execution have the results to show for it.”
One of the most obvious ways machine learning can help make better decisions is through equipment monitoring (i.e., anomaly detection). Once baseline operating conditions have been established, a machine learning solution can alert decision makers when an anomaly has occurred so preventive maintenance can be conducted. This type of machine learning can be applied to any business process where anomalies mean trouble. Keary explains, “Anomaly detection algorithms are leading the charge to take organizations away from the limitations of manually monitoring datasets. In its place is a wave of solutions that can not only make use of large data stores but also become more intelligent over time. Anomaly detection solutions build up experience each time they run. With each further use the responses of the platform become more accurate.”
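The baseline-then-alert pattern described above can be sketched in a few lines. This is a minimal illustrative example, not a production monitoring system: the sensor readings are invented, and a simple z-score test stands in for the learned models commercial platforms actually use.

```python
import statistics

def build_baseline(readings):
    """Establish baseline operating conditions from historical sensor data."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(value, baseline_mean, baseline_std, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from
    the baseline (a simple z-score test)."""
    if baseline_std == 0:
        return value != baseline_mean
    return abs(value - baseline_mean) / baseline_std > threshold

# Historical vibration readings from a pump (illustrative values)
history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
mean, std = build_baseline(history)

# Stream of new readings; only the out-of-range one triggers an alert
for reading in [10.0, 10.2, 14.5]:
    if is_anomaly(reading, mean, std):
        print(f"ALERT: reading {reading} deviates from baseline -- schedule maintenance")
```

The “becomes more intelligent over time” behavior Keary describes would correspond to periodically rebuilding the baseline as validated readings accumulate, which this static example omits.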
Is your company ready for machine learning?
Like many business executives, Oksana Sokolovsky (@oksana_rokitt), CEO of Io-Tahoe, is often confronted by salespeople insisting her company needs the latest technology in order to survive. She notes, “Given that artificial intelligence and machine learning … are among the hottest topics these days, it should come as no surprise that a significant percentage of marketing outreach involves these technologies.” Like other successful executives, Sokolovsky bases her decisions on what is best for her business. When it comes to the question of whether a company needs (or can use) machine learning technology, she recommends decision makers look at their data. “Do [you] have enough data?” she asks. “Machine learning works best when you have significant amounts of data, with no signs of slowing down its accumulation.” She also recommends looking at the condition of your data. “If your data is coming from a variety of sources, or if it hasn’t been cleaned and standardized into a consistent set, you won’t get value from any analytical exercise. Or, if you want to use another acronym, GIGO — garbage in, garbage out.” If you have enough of the right data in the right condition, your company is probably a good candidate for a machine learning solution.
Once you have determined machine learning may work for your company, what comes next? Jennifer Prendki (@jlprendki), Vice President of machine learning at Figure Eight, insists implementing machine learning solutions can be “a crucial yet error-prone process.” To make the process as painless as possible, she recommends nine things organizations can do. They are:
- Prioritize Flexibility. “Failure to remain flexible and to embrace the dynamism of the machine learning solutions they create is one of the top reasons why data scientists don’t see their models live up to their own expectations once in production.”
- Continuously Monitor Inputs. “People wrongly assume that when a model’s performance starts dropping, the model is to blame. … When a feature powered by machine learning suddenly starts going rogue, it is reasonable to question either the system or the data rather than the model itself.”
- Optimize Data Usage. “Because data scientists are taught to optimize for model accuracy, they tend to use as much data as possible when training their models. Yet for most models, accuracy isn’t proportional to the amount of data used, and usually asymptotically tends toward a given value. To address this, data scientists can build a learning curve in order to identify the optimal amount of data required to train their model without breaking the bank or hoarding their company’s servers for more time than is necessary.”
- Don’t Automate Too Early. “We now know it is often dangerous to push a model to production with the assumption that it will perform as expected. Automating a model that isn’t well understood and tested in real-life conditions is a common mistake that is easily avoided.”
- Keep It Simple. “Most readers will know Occam’s razor: the simplest answer is usually correct. … Starting with a simplistic model — a minimum viable product — before gradually making a solution more complex is the way to go.”
- Use Problems as Opportunities to Learn. “If a model is underperforming, it is often tempting to go back to the drawing board and start over. The problem with this approach is that it throws the baby out with the bathwater. … Whenever we decide to create something new, we lose an opportunity to understand both the business problem at hand and the system in depth, and we are likely to repeat some of the same mistakes over time.”
- Test Models Thoroughly Before Shipping. “Data scientists must ensure they account for corner cases and exceptions as much as possible given the quality of the data that is available for the task at hand. As a general rule, thorough testing should happen sooner rather than later, and definitely should happen prior to the engineering QA process.”
- Build Things That Are Explainable. “Finding and fixing problems might turn out to be almost impossible to do in a reasonable amount of time if the model isn’t explainable. Explainability isn’t only a way to provide some transparency into the decisions that an algorithm makes, it is also the best way to ensure that the human who built the model, as well as others, can trace problems back to their origin.”
- Use Human Input. “Full automation remains a myth for now. However, the Pareto principle suggests that automating 80 percent of a system is a fairly easy thing to do, while reaching a completely autonomous system would take a tremendous amount of effort.”
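Prendki’s learning-curve advice (see “Optimize Data Usage” above) can be made concrete with a toy experiment: train the same model on progressively larger samples and watch accuracy plateau. Everything here is an illustrative assumption — the synthetic two-class data and the nearest-mean classifier are stand-ins chosen so the example runs with the standard library alone, not her actual workflow.

```python
import random

random.seed(0)

def make_point(label):
    # Synthetic 1-D data: two classes centered at 0 and 2 with Gaussian noise
    return random.gauss(2.0 * label, 1.0), label

def train(points):
    """'Train' a nearest-mean classifier: just store each class's mean."""
    means = {}
    for label in (0, 1):
        vals = [x for x, y in points if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def accuracy(means, points):
    """Fraction of points assigned to the class with the nearer mean."""
    correct = sum(
        1 for x, y in points
        if min(means, key=lambda c: abs(x - means[c])) == y
    )
    return correct / len(points)

test_set = [make_point(i % 2) for i in range(1000)]

# Learning curve: accuracy versus training-set size. Gains shrink as the
# curve approaches its asymptote, which is the point of diminishing returns.
for n in (10, 100, 1000, 10000):
    train_set = [make_point(i % 2) for i in range(n)]
    print(f"{n:>6} training points -> accuracy {accuracy(train(train_set), test_set):.3f}")
```

In practice the same idea is applied with real models and held-out data (scikit-learn, for instance, provides a learning-curve utility); once the curve flattens, collecting or processing more training data “breaks the bank” for little gain, which is exactly Prendki’s point.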
Bean concludes, “Today, organizations can infuse machine learning into core business processes that are connected with the firm’s data streams with the objective of improving their decision-making processes through real-time learning.” As more executives identify challenges that can be addressed using data analysis, more of them are likely to turn to machine learning solutions for answers. As a result, machine learning will continue to transform numerous economic sectors.
 Randy Bean, “The State of Machine Learning in Business Today,” Forbes, 17 September 2018.
 Tim Keary, “Anomaly detection: Machine learning platforms for real-time decision making,” Information Age, 23 October 2018.
 Michael C. Mankins and Lori Sherer, “Creating value through advanced analytics,” Bain Brief, 11 February 2015.
 Oksana Sokolovsky, “Is Your Company Ready To Implement Machine Learning?” Forbes, 11 October 2018.
 Jennifer Prendki, “9 best practices for taking machine learning from theory to production,” Information Management, 26 October 2018.