Calling the era in which we live the Information Age is a bit presumptuous. If information can be equated to knowledge, every age is an information age. Cultures that have accumulated the most knowledge have generally prospered. Only when knowledge was lost or forgotten has a pall been thrown over the world. John Brandon (@jmbrandonbb) writes, “We can agree on one thing. We know that information is everywhere. That’s a given. Now, prepare for another shift.” Brandon brazenly declares the information age is ending and we are entering the machine learning age. My view is that the world will always be in an age of information. I would be more comfortable declaring the age of traditional computing is ending. Machine learning (ML) simply offers humankind a new way to analyze data and accumulate knowledge.
The Rise of Machine Learning
Machine learning is not exactly a new phenomenon. Brandon notes, “Pinpointing the start of the machine learning age is difficult.” People and businesses have been using machine learning for years, sometimes without knowing what was behind the applications in use. Bernard Marr (@BernardMarr), a strategic performance consultant, writes, “At its most simple, machine learning is about teaching computers to learn in the same way we do, by interpreting data from the world around us, classifying it and learning from its successes and failures. In fact, machine learning is a subset, or better, the leading edge of artificial intelligence.” Since ML has been around for a while, Dr. Gero Presser (@gero_presser), a co-founder and managing partner of Quinscape, asks, “How did this technical subject all of a sudden become a topic of discussion in company boardrooms?” His short answer is that ML has garnered attention because it can tackle problems traditional computing methods can’t. He explains, “Although the principle of machine learning is not new, it is currently enjoying a surge in popularity. There are three main reasons for this: firstly, the availability of large quantities of data necessary for the applications and training (‘big data’). Secondly, we now have the huge computing power required, especially in the cloud. And thirdly, a range of open source projects have led to algorithms being accessible to more or less everyone.”
The strength of ML, he writes, lies in algorithms that allow computers to use examples to learn by themselves (i.e., “We train the computer on the basis of samples”). Why is this important? Presser gives the simple ML example of a computer learning to recognize pictures of cats. “Many practice-related problems,” he writes, “fall more into the category of ‘recognizing a cat’ than ‘adding numbers’, and cannot, therefore, be solved adequately with algorithms written by humans. It is frequently a question of identifying a pattern in some data, for example recognizing objects in images, text from language or attempted fraud in transaction data.” To underscore what he means, Presser uses the example of predictive maintenance. “Imagine that lots of sensors send streams of data and occasionally a machine breaks down,” he writes. “The challenge then is to learn the patterns in the streams of data that ultimately lead to the malfunction. Once this pattern has been learned, it can be identified during normal operation, so that a potential breakdown can be anticipated and prevented.”
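Presser’s point that “we train the computer on the basis of samples” can be illustrated with a minimal sketch: a one-nearest-neighbor classifier. Rather than hand-writing rules, the program memorizes labeled examples and classifies new data by similarity to what it has already seen. The sensor readings and labels below are hypothetical, invented purely to echo his predictive-maintenance example; this is a toy illustration, not a production technique.

```python
# A minimal sketch of learning from labeled samples: 1-nearest-neighbor.
# Instead of rules written by a human, the "model" is just the training
# examples, and prediction means finding the most similar known example.
import math

def train(samples):
    """'Training' here is simply memorizing the labeled examples."""
    return list(samples)

def predict(model, point):
    """Label a new point with the label of its closest training example."""
    nearest = min(model, key=lambda sample: math.dist(sample[0], point))
    return nearest[1]

# Hypothetical sensor readings: (temperature, vibration) -> machine state.
model = train([
    ((20.0, 0.1), "normal"),
    ((21.5, 0.2), "normal"),
    ((35.0, 0.9), "failing"),
    ((38.0, 1.1), "failing"),
])

print(predict(model, (22.0, 0.15)))  # near the "normal" examples -> normal
print(predict(model, (36.5, 1.0)))   # near the "failing" examples -> failing
```

The point of the sketch is the division of labor Presser describes: the human supplies examples of “normal” and “failing” readings, and the pattern that separates them is recovered from the data rather than coded by hand.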
Why Machine Learning is Important for the Future
According to Nidhi Chappell (@_NidhiC), head of ML at Intel, machine learning is the fastest growing part of AI. She explains, “AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter. So the enabler for AI is machine learning.”
According to Chappell, “AI is going to bring major shifts in society through developments in self-driving cars, medical image analysis, better medical diagnosis, and personalized medicine. And it will also be the backbone of many of the most innovative apps and services of tomorrow.” One implication of Chappell’s prediction is that a lot of jobs are going to be affected by ML. Gideon Lewis-Kraus reports, “Medical diagnosis is one field most immediately, and perhaps unpredictably, threatened by machine learning. … Researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports.” He continues:
“Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One … engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents.”
Job displacement (or, at the very least, job redefinition) is something society will have to address in the years ahead. Nevertheless, most analysts believe the benefits society will derive from ML far outweigh any negative consequences.
Presser concludes, “Machine learning does not replace classical programming, it complements it: it provides tools that allow us to additionally address a major category of problems that had until now been too difficult or even impossible to master. Collectively these present us with new opportunities while existing systems are also increasingly being adapted to incorporate machine learning functionalities.” Martin Welker adds, “We need to educate machines on what it’s like to think like a human. Companies like Alphabet are already working on it, with projects like DeepMind at the forefront of AI and machine learning technologies. Others, like Elon Musk’s OpenAI, are working to make sure that humanity’s fears of a malevolent AI will never be realized. As we learn to trust these systems, adoption will quickly follow. And since they’re so universal, they will surely touch all industries.”
John Brandon, “The information age is over, welcome to the machine learning age,” VentureBeat, 25 June 2017.
 Bernard Marr, “What Is Machine Learning – A Complete Beginner’s Guide In 2017,” Forbes, 4 May 2017.
 Gero Presser, “What Everyone Should Know about Machine Learning,” Datafloq, 24 August 2017.
 Lee Bell, “Machine learning versus AI: what’s the difference?” Wired, 1 December 2016.
 Gideon Lewis-Kraus, “The Great A.I. Awakening,” The New York Times, 14 December 2016.
 Martin Welker, “The Future of Productivity: AI and Machine Learning,” Entrepreneur, 5 July 2017.