
Narrow Artificial Intelligence and Your Business’ Future

December 29, 2014

While some pundits endlessly debate whether artificial general intelligence (AGI) leading to a sentient machine will ever be achieved, companies focused on narrow artificial intelligence (AI) applications for businesses are making rapid progress. “We are entering an exciting period for artificial intelligence,” writes Rick Collins (@NextITCollins), President for enterprise business at Next IT. “We’re seeing more consumer impacting developments and breakthroughs in AI technology than ever before. And as Nova Spivack recently argued, it’s reasonable to expect that major players like Apple, IBM, Google and Microsoft, among others, will lead a fierce consolidation effort for the AI market over the next five years.” [“The Future Of AI Will Be Stacked,” TechCrunch, 18 October 2014] Some of the big players mentioned by Collins may pursue AGI, but most companies will continue to look for ways to use narrow AI to help businesses become more profitable.


David Senior (@djsenior13), CEO of Lowdownapp Ltd, asserts that the adoption of artificial intelligence may seem like only “a remote possibility for most people and most companies,” but he notes that people and businesses rely on AI every day. In fact, he writes, “The chances are you will be using it a lot in the near future, and Narrow AI will be the format that predominates.” [“Artificial Intelligence and the benefits of narrow AI for businesses,” Techradar, 15 September 2014] He explains:

“Narrow AI is not a sophisticated technology, but it does offer a wide range of benefits for individuals and companies. For example, to a very great degree of accuracy it can scan and collate specific required information from the entire contents of the web in a fraction of a second. Not only that, narrow AI can be programmed to send selected information to specific third parties, and automatically update any changes to information. Narrow AI uses a logic driven process that replicates human actions. Typically it sifts through massive amounts of information and accurately extracts only what is needed. However, the real benefits occur when used to contextually layer searches and reporting to build accurate scenarios. It becomes the perfect example of the three Cs — context, context, context.”
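To make the scan-collate-notify pattern Senior describes concrete, here is a minimal sketch of such a pipeline. The source URLs, keywords, and the notify() stub are illustrative assumptions, not any particular product’s implementation.

```python
# A toy scan-collate-notify pipeline: fetch pages, keep only passages that match the
# required keywords, and pass content along only when it is new or has changed.
import hashlib
import urllib.request

SOURCES = ["https://example.com/industry-news"]      # hypothetical pages to watch
KEYWORDS = ("acquisition", "merger")                  # the "specific required information"
last_seen = {}                                        # url -> hash of last relevant content


def notify(recipient, url, passages):
    """Stand-in for sending selected information to a specific third party."""
    print(f"To {recipient}: {len(passages)} relevant passage(s) found at {url}")


for url in SOURCES:
    text = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    passages = [line for line in text.splitlines()
                if any(keyword in line.lower() for keyword in KEYWORDS)]
    digest = hashlib.sha256("\n".join(passages).encode()).hexdigest()
    if passages and last_seen.get(url) != digest:     # only report new or changed content
        last_seen[url] = digest
        notify("analyst@example.com", url, passages)
```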

Senior concludes, “The possibilities for narrow AI are not infinite, but for better organisation and information gathering it is in a league of its own.” How useful one considers narrow AI programs depends, in part, on how narrowly one defines it. For my purposes, any artificial intelligence system not being developed in pursuit of artificial general intelligence falls under the narrow AI rubric. Using that definition, a large spectrum of systems is included in narrow AI. At the high end of that spectrum are cognitive computing systems, like the Enterra Solutions® Cognitive Reasoning Platform™ (CRP). Senior would place Apple’s Siri at the low end of that spectrum. Somewhere in the upper regions of that spectrum falls predictive analytics that uses machine learning. “Today’s predictive analytics incorporates machine learning on big data,” Apigee Corporation notes, “but the traditional version of predictive analytics has been around for quite some time, with limited adoption.” [“The new predictive analytics: The democratisation of insight,” Bdaily, 31 July 2014] That raises the question: Why is machine learning-based predictive analytics better than traditional predictive analytics? Apigee explains:

“Traditional predictive analytics, represented by statistical tools and rules-based systems (previously known as expert systems) is stuck in the 1990s. It’s based on data in relational data warehouses, which handle only structured data collected in batches. Signals from real-time data, such as the location of a mobile phone or signals available in social data such as tweets or customer service text data, are not considered. Further, many of the tools out there are severely limited by scale and only handle data that fits in memory, which forces analysts to work with samples rather than the full data. Sampling captures the strongest signals in data but misses out on the long tail of weaker signals, thus giving up a fair amount of precision. Traditional predictive analytics also requires feature design: an analyst manually designs the features that drive predictions through a hypothesize-and-test process. For example, for predicting retail purchases, the analyst might hypothesize three features: total amount spent by the customer in the past, total number of times the customer has purchased in the last year, and the last date they made a purchase. The analyst then tests which of these features carry predictive power and experiments with various predictive algorithms. Finally, traditional predictive analytics fails to adapt when customer behaviour changes. Predictive models are typically implemented as code inside applications, which makes it impossible to even monitor their performance, much less adapt to change. The world is moving too fast, and consumers change behaviour too quickly, for traditional predictive analytics to keep up. But a major advance has pushed predictive analytics past these hurdles: machine learning.”
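The “hypothesize-and-test” workflow Apigee describes can be pictured in a few lines of code. The sketch below builds the three hand-designed features from the retail example (total spend, purchases in the last year, and recency of the last purchase) and then tests their predictive power with cross-validation. The file names, column names, and the use of pandas and scikit-learn are illustrative assumptions.

```python
# A minimal sketch of manual feature design followed by a hypothesize-and-test loop.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: raw transactions plus a label per customer (did they buy again?).
transactions = pd.read_csv("transactions.csv", parse_dates=["purchase_date"])
labels = pd.read_csv("labels.csv")                 # columns: customer_id, repurchased

now = transactions["purchase_date"].max()
one_year_ago = now - pd.Timedelta(days=365)

# The three features the analyst hypothesizes.
features = transactions.groupby("customer_id").agg(
    total_spent=("amount", "sum"),
    purchases_last_year=("purchase_date", lambda d: (d >= one_year_ago).sum()),
    days_since_last=("purchase_date", lambda d: (now - d.max()).days),
).reset_index()

data = features.merge(labels, on="customer_id")
X = data[["total_spent", "purchases_last_year", "days_since_last"]]
y = data["repurchased"]

# Test which features carry predictive power by comparing cross-validated accuracy.
for cols in (["total_spent"], ["purchases_last_year"], ["days_since_last"], list(X.columns)):
    score = cross_val_score(LogisticRegression(max_iter=1000), X[cols], y, cv=5).mean()
    print(cols, round(score, 3))
```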

Apigee indicates that three things have changed to make today’s predictive analytics different and better: machine learning, distributed data processing, and plummeting hardware costs. Of these three factors, Apigee stresses that “the most important advance is machine learning, an artificial intelligence technology that permits computers to adaptively learn from big data without requiring further programming.”
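What “adaptively learn from big data without requiring further programming” means in practice is easiest to see with an online learner that is simply refit as new behaviour arrives. The sketch below uses scikit-learn’s SGDClassifier on a simulated, drifting data stream; the data itself is made up purely for illustration.

```python
# A minimal sketch of adaptive learning: the model updates from each new batch of data,
# whereas a hand-coded rule would have to be reprogrammed as behaviour drifts.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                 # linear model trained incrementally
classes = np.array([0, 1])


def on_new_batch(X_batch, y_batch):
    """Called whenever a fresh batch of customer behaviour data arrives."""
    model.partial_fit(X_batch, y_batch, classes=classes)


rng = np.random.default_rng(0)
for week in range(10):                  # simulated stream with drifting behaviour
    X = rng.normal(loc=week * 0.1, size=(500, 3))
    y = (X.sum(axis=1) > week * 0.3).astype(int)
    on_new_batch(X, y)
    print(f"week {week}: accuracy on this week's data = {model.score(X, y):.2f}")
```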


Just as there is a broad spectrum of narrow AI systems, there is also, I believe, a broader range of uses for narrow AI than Senior implies. Collins, for example, claims, “Deeply integrated AI has the potential to transform businesses and industries, from the interface level of the customer experience to the breadth of institutional knowledge that employees are able to access and build upon. The key to any effective AI deployment for businesses, however, requires a sophistication and expertise that is specific to the industry and the company. We call that domain knowledge.” In my experience dealing with customers, I agree with Collins: to be effective, AI systems need to be able to incorporate domain knowledge. To that end, Enterra’s CRP complements mathematical calculations with semantic reasoning that incorporates domain knowledge.
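As a toy illustration only (not Enterra’s actual CRP), pairing a statistical forecast with explicit domain rules might look something like the sketch below; the rule set and the score_demand() stub are hypothetical.

```python
# A toy example of complementing a numeric prediction with explicit domain knowledge:
# forecasts that violate known business constraints are flagged for review.
def score_demand(sku, week):
    """Stand-in for a statistical or machine-learned demand forecast."""
    return 1250.0  # units


DOMAIN_RULES = [
    ("seasonal ceiling", lambda sku, week, qty: qty <= 2000 or week in range(48, 53)),
    ("regulatory limit", lambda sku, week, qty: not sku.startswith("RX-") or qty <= 500),
]


def forecast_with_domain_knowledge(sku, week):
    qty = score_demand(sku, week)
    violations = [name for name, rule in DOMAIN_RULES if not rule(sku, week, qty)]
    return {"sku": sku, "week": week, "forecast": qty, "flags": violations}


print(forecast_with_domain_knowledge("RX-1001", 12))
```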


There are a number of reasons that AI is going to remain narrow, at least in the short term, and why companies need to be wise in their selection of AI system providers when they want to analyze their big data. Michael Jordan, one of UC Berkeley’s most distinguished professors, states, “I think data analysis can deliver inferences at certain levels of quality. But we have to be clear about what levels of quality. We have to have error bars around all our predictions. That is something that’s missing in much of the current machine learning literature.” [“Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts,” by Lee Gomes (@leegomes), IEEE Spectrum, 20 October 2014] It’s easy to overpromise when trying to make a sale, but overpromising can lead to unsatisfactory, if not disastrous, results. Jordan explains:

“I like to use the analogy of building bridges. If I have no principles, and I build thousands of bridges without any actual science, lots of them will fall down, and great disasters will occur. Similarly here, if people use data and inferences they can make with the data without any concern about error bars, about heterogeneity, about noisy data, about the sampling pattern, about all the kinds of things that you have to be serious about if you’re an engineer and a statistician — then you will make lots of predictions, and there’s a good chance that you will occasionally solve some real interesting problems. But you will occasionally have some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best. And so that’s where we are currently. A lot of people are building things hoping that they work, and sometimes they will. And in some sense, there’s nothing wrong with that; it’s exploratory. But society as a whole can’t tolerate that; we can’t just hope that these things work. Eventually, we have to give real guarantees. Civil engineers eventually learned to build bridges that were guaranteed to stand up. So with big data, it will take decades, I suspect, to get a real engineering approach, so that you can say with some assurance that you are giving out reasonable answers and are quantifying the likelihood of errors.”
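Jordan’s point about error bars is straightforward to demonstrate. One simple, general-purpose way to attach them to a prediction is the bootstrap: refit the model on resampled data many times and report the spread of the resulting predictions. The data and model below are made up purely for illustration.

```python
# A minimal sketch of putting error bars around a prediction via bootstrap resampling.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(scale=4.0, size=200)   # noisy linear relationship

x_new = np.array([[7.5]])                               # point we want a prediction for
predictions = []
for _ in range(1000):                                   # resample rows with replacement
    idx = rng.integers(0, len(X), size=len(X))
    predictions.append(LinearRegression().fit(X[idx], y[idx]).predict(x_new)[0])

low, high = np.percentile(predictions, [2.5, 97.5])
print(f"prediction ~ {np.mean(predictions):.1f} with 95% interval ({low:.1f}, {high:.1f})")
```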

When it comes to AI systems that can be brought to bear on any challenge, companies are like little children sitting in the back seat of a car on a long drive. They keep asking, “Are we there yet?” As Jordan notes, we are not there yet; but we are getting closer. Should companies wait to pursue AI-based solutions to their challenges? That depends on the challenge. There are a number of AI-based solutions that are mature enough to address some of those challenges, and companies that hesitate to adopt them could find themselves falling behind their competition. I can say with certainty that most businesses’ futures will rely on AI in some form or another, and jumping on the bandwagon sooner rather than later is probably in their best interest.
