These days, most arguments about artificial intelligence (AI) and whether it will ever be achieved are really about artificial general intelligence (AGI) — the ability of machines to think like humans and eventually achieve sentience. There are, however, more limited goals worth pursuing in the field of AI. Dr. Ben Goertzel defines intelligence as “the ability to achieve complex goals in complex environments using limited computational resources.” As I’ve repeatedly noted in past posts on artificial intelligence, most business use cases don’t require AGI, so the debate is a bit of a red herring if you are interested in solving limited, but important, problems using AI. That’s why Adam Hill can now write, “Once the stuff of fantasy, artificial intelligence is now a reality that could change the world we live in.” [“Deep learning: a step toward artificial intelligence,” Performance, 18 August 2013]
Hill notes that there have been a lot of “gimmicks” associated with the field of artificial intelligence in the past (like chess-playing computers), but he is more interested in AI’s ability to address business applications.
“Last year, Microsoft Research boss Rick Rashid demonstrated some advanced English to Cantonese voice recognition and translation software with an error rate low enough to suggest that it had moved things on. Much of the most interesting work in the field at present comes from research into neural networks – building computers that can sift through vast amounts of data and recognize patterns – and these are proving successful in disciplines such as voice and picture recognition and natural language processing (NLP).”
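For readers unfamiliar with the idea, the pattern recognition Hill describes can be boiled down to a toy example: a single artificial neuron (a perceptron) learning to separate two classes of inputs. This is a sketch only; the data and learning rate are invented, and real deep learning systems stack many layers of units like this one.

```python
# A toy illustration of neural-network pattern recognition: one
# artificial neuron (a perceptron) learning to separate two classes.
# The samples and learning rate are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias that separate the labeled samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred      # 0 when the prediction is correct
            w[0] += lr * err * x1   # nudge the weights toward the
            w[1] += lr * err * x2   # correct answer
            b += lr * err
    return w, b

# Two linearly separable "patterns" (here, the logical AND of the inputs)
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
```

After training, the neuron classifies all four inputs correctly; the interesting part is that the rule was learned from examples rather than programmed in.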
In other words, AGI is not the sine qua non of success in the business environment. Hill reports that search engine giant Google “certainly seems to see the potential” in applying limited AI to business problems. He continues:
“Over the past year, it has snapped up a couple of the best-known names in the field to work with: Professor Geoffrey Hinton from the University of Toronto and AI expert Ray Kurzweil. Hinton is now working part-time with the media giant, while Kurzweil was appointed director of engineering in January.”
Hill is particularly interested in Hinton’s work, which “is to help machines perfect deep learning, which is using low-level data to construct complex meaning and interpretation.” Hill reports that Hinton believes that Google’s scientists and engineers “have a real shot at making spectacular progress in machine learning.” You might recall that last year Google made news when it reported that its computer system used millions of online images to learn to recognize cats on its own. Hill admits, however, that “much more – in terms of both computing power and software development – may yet be required to shift the deep learning paradigm beyond voice and image recognition.” Nevertheless, for many business applications, artificial intelligence systems, like Enterra’s Cognitive Reasoning Platform™, are good enough to provide a significant competitive advantage for companies that utilize the technology. For example, Hill reports:
“When students from Sweden’s Chalmers University of Technology looked at AI’s ability to select from which supplier to buy a particular part – taking into account factors such as price, lead time, delivery accuracy and quality – they found it could do so without making too many errors.”
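To make the idea concrete, here is a minimal sketch of the kind of weighted multi-criteria scoring such a system might perform. The weights and supplier figures below are invented for illustration; the Chalmers work is not described at this level of detail.

```python
# A minimal sketch of weighted multi-criteria supplier selection.
# All weights and supplier figures are invented for illustration.

def score_supplier(price, lead_time, delivery_accuracy, quality,
                   weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the criteria into one score. All inputs are normalized
    to 0-1; price and lead time are costs, so lower values score higher."""
    w_price, w_lead, w_acc, w_qual = weights
    return (w_price * (1 - price) + w_lead * (1 - lead_time)
            + w_acc * delivery_accuracy + w_qual * quality)

suppliers = {
    "A": dict(price=0.8, lead_time=0.5, delivery_accuracy=0.95, quality=0.9),
    "B": dict(price=0.5, lead_time=0.7, delivery_accuracy=0.85, quality=0.8),
}
# Pick the highest-scoring supplier
best = max(suppliers, key=lambda name: score_supplier(**suppliers[name]))
```

With these made-up numbers, supplier B’s price advantage outweighs A’s better lead time, accuracy, and quality. The point is not the formula but that the decision is mechanical once the weights and data are in place, which is exactly what makes it automatable.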
The fact that the system used by the Swedish students wasn’t intelligent in the AGI sense of that word simply didn’t matter for the task at hand. Dan Matthews, chief technology officer at IFS, told Hill, “The problem is not with the AI itself – the algorithms developed work well – but with the scenario and real-life data quality. For this to work well, and be worthwhile, you need a high volume of decisions where there are multiple choices and up-to-date values for all variables that may affect the decision. … Taking the choice of supplier scenario as an example: lack of up-to-date price or lead time information for all alternative suppliers would lead to decisions made on wrong assumptions.” In today’s “always on” world, the collection of Big Data is much less of a problem than it was in the past. In fact, many companies lament that they have too much data and are collecting more each day. That is exactly why AI systems are required to make sense of it all.
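Matthews’s caveat about out-of-date variables can be captured in a simple guard that refuses to score a supplier whose quote is incomplete or stale. This is an illustrative sketch only; the field names and the 30-day freshness window are assumptions, not anything IFS or Enterra actually uses.

```python
from datetime import datetime, timedelta

# Illustrative guard for Matthews's caveat: a quote with missing or
# stale variables would lead to decisions made on wrong assumptions,
# so refuse to use it. Field names and the 30-day window are assumed.

MAX_AGE = timedelta(days=30)

def is_usable(quote, now=None):
    """A quote is usable only if every required variable is present
    and the record has been updated recently."""
    now = now or datetime.now()
    required = ("price", "lead_time", "updated_at")
    if any(key not in quote for key in required):
        return False  # a variable affecting the decision is missing
    return now - quote["updated_at"] <= MAX_AGE
```

In practice such a check would sit in front of the scoring step, so the system only automates decisions it has current data for.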
At Enterra Solutions, we call this a Sense, Think/Learn, Act™ paradigm. In such a paradigm, data matters a lot. As Hill states, “Ultimately, even AI can only be as good as the data it is given – to start with, at least.” Not everyone is as sanguine about the future of deep learning as the researchers cited above. For example, Ed Crego, George Muñoz, and Frank Islam write, “Big Data and Deep Learning are two major trends that will impact and influence the future direction and potential of innovation in the United States. … In our opinion, both of these trends have substantial promise. But, they also have limitations that must be overcome to deliver on that promise.” [“Big Data and Deep Learning: Big Deals or Big Delusions?” Huffington Post The Blog, 26 June 2013] It appears that what Crego, Muñoz, and Islam really object to is the hyperbole associated with Big Data and deep learning rather than the actual gains that have been made in those areas. They write:
“Big Data is everywhere and the folks who are making a living warehousing and mining it abound. Big Data can be used to analyze web browsing patterns, tweets and transit movements, to predict behavior and to customize messages and product offerings. Kenneth Cukier, Data Editor of The Economist, and Viktor Mayer-Schoenberger, Professor of Internet Governance and Regulations at the Oxford Internet Institute, exalt the emerging use and impact of Big Data in an essay for the May/June issue of Foreign Affairs. The essay is adapted from their new book, Big Data: A Revolution That Will Transform How We Live, Work and Think. … They assert that because of the ability to collect and use great volumes of information there will need to be three ‘profound changes’ in how data is approached. (1) We will no longer have to rely solely on small amounts or samples and statistical methods for analysis. (2) We will have to tolerate some ‘messiness’ and depend on the quantity of data as opposed to its quality. (3) In many instances, ‘we will need to give up our quest to discover the cause of things in return for accepting correlations.’ It seems to us that it is precisely because of these three considerations that there will need to be more rigor and objectivity in the data gathering and analysis process. Scientific methods will become more important rather than less. An informed intellect and an inquiring mind will become more essential in order to perceive ‘truth’ and bring some order out of chaos.”
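Their third point, accepting correlations in place of causes, is easy to demonstrate with fabricated data: two variables can correlate strongly because both track a hidden third factor, not because one causes the other. The numbers below are invented purely to make that textbook point.

```python
import random

# Fabricated data illustrating correlation without causation:
# ice cream sales and sunburn cases both track temperature (a hidden
# common factor), so they correlate strongly even though neither
# causes the other. All figures here are invented.
random.seed(0)
temperature = [random.uniform(15, 35) for _ in range(1000)]
ice_cream = [2.0 * t + random.gauss(0, 3) for t in temperature]
sunburns = [0.5 * t + random.gauss(0, 1) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(ice_cream, sunburns)  # strong positive, but not causal
```

A machine mining this data would flag the correlation; it takes “an informed intellect and an inquiring mind” to notice the confounding variable.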
They make a good point. A lot of people fear “the rise of machines” and believe that AI systems will put analysts out of work. AI systems may reduce the number of analysts that a company needs, but I agree with Crego, Muñoz, and Islam that for the foreseeable future “an informed intellect and an inquiring mind” will remain essential to ensure that the results of machine deep learning make sense. The authors disagree with Cukier and Mayer-Schoenberger on another point as well. Cukier and Mayer-Schoenberger label Big Data as “a resource and a tool”; but Crego, Muñoz, and Islam insist that it is only a “resource,” not a “tool.” On this point, I agree entirely with them. They explain:
“The tool is the research design that is employed to organize, aggregate, and analyze data in order to see patterns, extract meaning and make judgments. The person who creates and uses that data is the toolmaker. Today, we have an oversupply of Big Data and an undersupply of Big Data toolmakers. … The message to us from this is straightforward. Even with mounds and mounds of Big Data, human insights and innovation must come into play to matter and make a difference. Big Data. Small Minds. No Progress! Big Data. Big Brains. Breakthrough! Deep Learning stands in contrast to Big Data. Deep Learning is the application of artificial intelligence and software programming through ‘neural networks’ to develop machines that can do a wide variety of things including driving cars, working in factories, conversing with humans, translating speeches, recognizing and analyzing images and data patterns, and diagnosing complex operational or procedural problems.”
In the end, I’m not sure how real the differences are between Crego, Muñoz, and Islam, and Cukier and Mayer-Schoenberger. Both groups agree that the application of deep learning is what is going to make a difference in the future. But, as Hill emphasized, the results of that deep learning depend significantly on the quality of the data being analyzed. Crego, Muñoz, and Islam conclude:
“Smart machines are here and they will continue to get smarter. … The real innovation challenge to us then it seems will not be to apply deep learning to replace humans but to use it to create new ideas, products and industries that will generate new jobs and opportunities for skilled workers. … Getting the most out of Deep Learning will require deep thinking. That’s where authentic human intelligence still trumps artificial machine intelligence.”
Obviously, as President & CEO of a company that offers AI-based business solutions, I believe that the potential of cognitive reasoning platforms to enhance business processes is significant. Companies that embrace such systems are more likely to survive the journey across tomorrow’s business landscape than those that do not.