
Machine Learning and the Digital Enterprise

June 24, 2019


One of the most useful forms of artificial intelligence (AI) currently being used by enterprises is machine learning (ML). Machine learning’s rise in the corporate setting is the result of numerous factors, including data availability, improvements in computer technology, and advancements in algorithm development. Machine learning requires data — lots and lots of data. For most organizations, obtaining data is no longer an obstacle. In fact, many organizations are drowning in data. Concerning improvements in computer technology, Jake Bennett (@jakebenn), Solution Principal at Slalom, writes, “Advances in computing technology (GPU chips and cloud computing, in particular) are enabling engineers to solve problems in ways that weren’t possible before. … The low cost of computation and the ease of accessing cloud-managed clusters have democratized AI in a way we’ve never seen before.”[1] He also notes, “Advances in AI algorithms in the mid-1980s broke the spell of the AI winter of the 1970s. … Machine learning algorithms work by improving their ability to perform specific tasks using data.”
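Bennett's point that machine learning algorithms "improve their ability to perform specific tasks using data" can be made concrete with a toy sketch. The example below is purely illustrative (the data and the choice of scikit-learn are assumptions, not anything from the article): a model is fit to example input/output pairs and then generalizes to an input it never saw.

```python
# A minimal, illustrative sketch of "learning from data": fit a model to
# example pairs, then ask it about an unseen input. The data (y = 2x) and
# the use of scikit-learn are assumptions for illustration only.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]   # training inputs
y = [2, 4, 6, 8]           # desired outputs the model should learn

model = LinearRegression().fit(X, y)

# The model has learned the underlying relationship, not just the examples.
print(round(model.predict([[5]])[0]))  # 10
```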


Machine learning and the digital enterprise


Many analysts have suggested businesses need to transform into digital enterprises if they want to compete successfully in the digital age. A digital enterprise is one that leverages data to gain a competitive advantage. Machine learning plays a large role in achieving that goal. Bennett explains, “Problems that used to be the exclusive domain of humans — computer vision, speech recognition, autonomous movement — are being solved today by machine learning algorithms. In fact, machine learning has become such a huge area of focus and, for all practical purposes, the term machine learning has become synonymous with AI. Ultimately this is a good thing. The more consumers and companies start associating the term AI with real-world applications of machine learning like self-driving cars, the more they realize that AI is a real thing. It’s here to stay, and it holds the promise of reshaping the technology landscape over the next several years.” Just as important, machine learning has the potential to reshape the business landscape.


Macy Bayern (@macybayern) notes that AI and machine learning, though still considered cutting-edge technologies, are not new. She reports that a study by TIBCO highlights the fact that they have been around long enough to demonstrate their usefulness in a number of business use cases.[2] According to Bayern, the top six use cases for AI and machine learning in today’s organizations are:


  • Data security (28%): These use cases include risk identification, early detection, operational improvement, and corrective action.
  • Real-time analytics (24%): AI and machine learning can use real-time analytics to detect fraudulent transactions and to support product offers, dynamic pricing, and more.
  • Personalized data visualizations and dashboards (24%): The tech can be used to identify irregularities in data, support predictive analytics, and suggest improvements for performance.
  • Data integration, preparation, and management (23%): AI and machine learning are vital for understanding the details of an organization’s data.
  • Sales/revenue forecasting (23%): These use cases include accurate sales forecasting, enhanced business control, and assistance in year-over-year growth.
  • Personal security (20%): The tech can be used for home surveillance, access control for events, and military defense.


Bennett predicts AI and machine learning will become so important companies will inevitably come to “the realization that if [they] don’t develop strong in-house machine learning capabilities now, [they’ll] end up on the wrong side of the future of technology.”


Machine learning and cognitive computing


Kalev Leetaru (@kalevleetaru), a Senior Fellow at the George Washington University Center for Cyber & Homeland Security, observes, “Machine learning tools [have been developed which are] capable of identifying the patterns in massive noisy datasets with accuracy that often exceeds that of human domain experts.”[3] Often these machine learning capabilities are embedded in cognitive computing systems — which I define as a combination of semantic intelligence (i.e., machine learning, natural language processing, and ontologies) and computational intelligence (i.e., advanced mathematics). Leetaru argues that machine learning by itself is often not enough to provide desired insights. He explains, “A critical distinction between machines and humans is the way in which we reason about the world: humans through high order semantic abstractions and machines through blind adherence to statistics.” He continues:

“A human, shown a collection of photographs of dogs running around dog parks and house cats running around people’s homes, is able to use their life experiences to understand that in this particular context, the background of the photograph is not relevant to categorizing whether the photo depicts a dog or a cat. A machine learning algorithm, on the other hand, might recognize that the strongest signal differentiating a dog from a cat is whether the photo is a bright outdoor photo or a dim indoor photo. If the machine’s goal is to maximize its accuracy at distinguishing dogs and all of the dog photos are outdoors, then this is indeed the ‘best’ signal for the machine to latch onto. However, in doing so, the machine will necessarily fail when shown images of large outdoor cats like lions.”
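Leetaru’s dog/cat scenario can be sketched in a few lines of code. The example below is hypothetical (synthetic one-feature data and scikit-learn are assumptions of the sketch, not anything from the article): a classifier trained on data where every dog photo is bright and every cat photo is dim latches onto brightness, and therefore calls a bright outdoor lion a dog.

```python
# Hypothetical sketch of the dog/cat example: the only feature is photo
# brightness, and brightness happens to correlate perfectly with the label
# in training, so the model learns the spurious signal.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

dogs = [[random.uniform(0.7, 1.0)] for _ in range(50)]  # dogs: bright outdoor photos
cats = [[random.uniform(0.0, 0.3)] for _ in range(50)]  # cats: dim indoor photos
X = dogs + cats
y = [1] * 50 + [0] * 50  # 1 = dog, 0 = cat

clf = LogisticRegression().fit(X, y)

# A lion: a cat, but photographed outdoors in bright light.
print(clf.predict([[0.9]]))  # classified as "dog" (1) on brightness alone
```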

Context is important. At Enterra®, we add semantic reasoning into the mix. This approach helps prevent machines from following false trails as they pursue new knowledge about a particular subject. Without this added dimension, learning machines can return some pretty funny (and erroneous) conclusions. Eric Blattberg (@EricBlattberg) reports, for example, “When deep learning startup AlchemyAPI exposed its natural language processing system to the Internet, it determined that dogs are people because of the way folks talked about their pets. That might ring true to some dog owners, but it’s not accurate in a broader context. That hilarious determination reflects the challenges — and opportunities — inherent to machine learning.”[4] Using semantic reasoning and a common-sense ontology, a machine knows dogs aren’t human. Leetaru concludes, “[Only] once [machine] learning systems can reason about the world in terms of semantics, rather than statistics, and incorporate external world knowledge and context into their decision-making processes, will we finally have machines that can begin to shed some of the brittleness and failures of the current generation of AI.”
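The common-sense-ontology idea can be sketched with a toy is-a hierarchy. This is a hypothetical illustration, not Enterra’s or AlchemyAPI’s actual system, and real ontologies are vastly larger: the point is only that a reasoner consulting even a tiny ontology can veto a statistically learned conclusion like “dogs are people.”

```python
# A toy common-sense ontology expressed as is-a relations (hypothetical;
# production ontologies are far larger and richer than this).
IS_A = {
    "dog": "mammal",
    "cat": "mammal",
    "human": "mammal",
    "mammal": "animal",
}

def is_a(entity, category):
    """Walk the is-a chain upward to test whether entity falls under category."""
    while True:
        if entity == category:
            return True
        if entity not in IS_A:
            return False
        entity = IS_A[entity]

print(is_a("dog", "animal"))  # True
print(is_a("dog", "human"))   # False: the ontology vetoes "dogs are people"
```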


Concluding thoughts


Bennett notes, “The increasing demand for AI-driven technology, combined with the dearth of machine learning talent in the labor pool, will force the democratization of data science.” In fact, what we are seeing are cognitive computing systems that leverage natural language processing capabilities to ensure non-technical users can easily ask questions and receive answers from the system. As a result, digital enterprises are finding numerous ways to use machine learning capabilities to advance their business goals.


Footnotes
[1] Jake Bennett, “How a new wave of machine learning will impact today’s enterprise,” VentureBeat, 15 July 2017.
[2] Macy Bayern, “AI and machine learning: Top 6 business use cases,” TechRepublic, 16 May 2019.
[3] Kalev Leetaru, “Why Machine Learning Needs Semantics Not Just Statistics,” Forbes, 15 January 2019.
[4] Eric Blattberg, “Cognitive computing is smashing our conception of ‘ground truth’,” VentureBeat, 20 March 2014.
