Cognitive Computing and Supply Chain Risk Management

Stephen DeAngelis

October 2, 2014

Earlier this year, Robert J. Bowman, Managing Editor of SupplyChainBrain, wrote about a discussion he had with Yves Leclerc (@wmpyleclerc), managing director with business consultancy West Monroe Partners. Leclerc told Bowman, “Despite a raft of natural disasters and quality failures over the years, many companies have yet to step up to the requirements of an effective risk-management effort.” [“Disaster Looms: Why Today’s Global Supply Chains Are At Risk,” Forbes, 1 April 2014] At the time Bowman wrote his article, the crisis dominating the headlines involved faulty General Motors ignition switches. Bowman continued:

“You might think that 13 deaths and the recall of 6.1 million cars since February would have top manufacturing executives scurrying to adopt controls that would prevent such nightmares from occurring in their own organizations. And maybe they are. But neither the 2013 floods in Thailand nor the 2011 earthquake, tsunami and nuclear disaster in Japan has resulted in sweeping risk-management measures, Leclerc said. The business world, it would seem, has a short memory. … Many companies remain fixated on boosting shareholder value in the short term. It can be tough to sell top executives on the value of expensive programs that could shield them from disruptions caused by disasters, natural or otherwise. What is the value of a non-event? Leclerc was disheartened to hear one of his clients brush off the necessity of a plan for coping with lost or delayed containers, even during the critical peak-shipping season. ‘His reaction was, “If I’m in trouble, all my competitors will be, too. It’s no big deal.”’ Even the most innovative companies are vulnerable.”

There have been so many “events” over the past decade that one would think persuading executives of the value of risk management would be easy. Obviously, that isn’t true. Making a case for risk management remains a challenge. The case might be more easily made using analytics. In an interview earlier this year, Vivek Katyal, principal, Deloitte & Touche LLP, was asked a series of questions about how analytics could be used in the risk management process. Katyal noted, “There is no exact science for measuring risk. But with analytics, you can build measurement parameters that can help you establish and examine likely risk scenarios. From there, it’s easier to understand the potential impact of a risk – and start planning around it. Along the way, analytics can help establish a baseline of data for measuring risk across the organization by pulling together many strands of risk into one unified system.” [“Five questions about applying analytics to risk management,” Deloitte Risk Angles, 11 February 2014]
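Katyal’s idea of building “measurement parameters” for likely risk scenarios can be made concrete with a toy example. The sketch below is my own illustration, not Deloitte’s method; every risk name and number in it is invented. It scores hypothetical scenarios by expected loss (probability times impact) and rolls the separate “strands” of risk up into a single baseline figure:

```python
# Illustrative sketch (not Deloitte's actual method): simple
# "measurement parameters" for risk scenarios, rolled up into one
# unified baseline. All risk names and numbers are invented.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    probability: float   # estimated annual likelihood (0..1)
    impact: float        # estimated loss if it occurs, in USD

    def expected_loss(self) -> float:
        return self.probability * self.impact

# Hypothetical scenarios pulled from different "strands" of risk
scenarios = [
    RiskScenario("supplier plant flood", 0.05, 4_000_000),
    RiskScenario("port congestion",      0.30,   500_000),
    RiskScenario("component recall",     0.02, 9_000_000),
]

# A unified baseline: total expected annual loss across all strands
baseline = sum(s.expected_loss() for s in scenarios)

for s in sorted(scenarios, key=RiskScenario.expected_loss, reverse=True):
    print(f"{s.name:22s} expected loss ${s.expected_loss():,.0f}")
print(f"{'baseline':22s} expected loss ${baseline:,.0f}")
```

Even a crude baseline like this gives executives a number to track over time, which is the “value of a non-event” that Leclerc found so hard to sell.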

 

Katyal’s interviewer noted that analytics has been used in the business world for a long time and that executives are questioning why there is a need to change what is analyzed and how it is done. Katyal responded that analytics has improved dramatically in recent years. He stated, “When it comes to the level of sophistication, there’s a world of difference. Historically, analytics has been synonymous with business intelligence – knowing the facts and reporting past and current performance. But today risk analytics is more focused on data exploration, segmentation, statistical clustering, predictive modeling, and event simulation and scenario analysis.”

Justin Lyon, CEO of Simudyne, certainly agrees with Katyal. In an interview with Lloyd’s, he explained how his company is using cognitive computing technology to improve risk management. [“Artificial Intelligence for Risk Managers,” Lloyd’s, 4 April 2014] He told the interviewer, “By testing thousands of scenarios in a synthetic environment, companies can identify strategies that are highly resilient. For example, they may model and identify a set of tactics that work in 80% of plausible future scenarios, and can then track against the 20% of outlying scenarios so they can have an early warning and can shift their strategy sooner. This allows companies to make optimal decisions to save money and time, and mitigate risk.” The interviewer then asked Lyon, “What are the differences between ‘big data’ and ‘cognition’ platforms?” Lyon responded:

“When looking only at the past or using gut feel to make decisions we can often get them badly wrong when fundamentals are changing in the real world, correlations break down and trends no longer hold. We saw this in the financial crisis. Big data platforms use pattern recognition to turn data into information about what happened, where and why. A cognition platform is more predictive, turning that historical information into knowledge of what might happen in the future. It gives you the ability to test your decision in the knowledge that if it doesn’t work it doesn’t matter because you just hit ‘reset’. We allow risk managers or companies to understand the physics of how things actually work so that we can turn big data into big wisdom.”
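Lyon’s approach of testing thousands of scenarios in a synthetic environment can be sketched as a toy Monte Carlo simulation that scores each candidate tactic by the share of simulated futures it survives. Everything below (the demand distribution, the disruption odds, the buffer sizes) is an invented assumption for illustration, not Simudyne’s actual model:

```python
# Toy Monte Carlo sketch of scenario testing (all parameters invented):
# run thousands of synthetic demand/disruption futures and score each
# candidate tactic by the share of scenarios it survives.

import random

random.seed(42)
N_SCENARIOS = 10_000

def scenario():
    """One synthetic future: weekly demand plus a possible supply cut."""
    demand = random.gauss(100, 20)          # units demanded
    disrupted = random.random() < 0.10      # assumed 10% disruption chance
    supply_cut = random.uniform(20, 60) if disrupted else 0
    return demand, supply_cut

def survives(safety_stock, demand, supply_cut):
    """A tactic 'works' if base supply plus buffer still covers demand."""
    base_supply = 100 - supply_cut
    return base_supply + safety_stock >= demand

tactics = {"no buffer": 0, "small buffer": 25, "large buffer": 60}
futures = [scenario() for _ in range(N_SCENARIOS)]

for name, stock in tactics.items():
    wins = sum(survives(stock, d, cut) for d, cut in futures)
    print(f"{name:13s} resilient in {wins / N_SCENARIOS:.0%} of scenarios")
```

Because every tactic is scored against the same set of synthetic futures, a risk manager can see not only which tactic is most resilient but also which outlying scenarios still defeat it, which is exactly the early-warning tracking Lyon describes.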

One of the other benefits of cognitive computing systems is that they have the ability to learn: the more data they ingest or the more scenarios they run, the more they learn. Lyon noted, “A lot of companies we work with have made significant investments in big data platforms – they are drowning in big data, but the data doesn’t actually connect to any material decision-making processes. A cognition platform allows you for the first time to track the data that is really relevant, saving a whole lot of effort and allowing you to make decisions that will affect your balance sheet in the future.” The objective of any good analytical effort is to generate actionable insights. In today’s fast-paced and complex business world, there isn’t enough time to make sense of all the available data manually, and humans simply can’t handle the complexity of analyzing every variable that can potentially affect a supply chain. When asked about the future of cognitive computing, Lyon responded:
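The point about learning from ingested data can be illustrated with one of the simplest possible mechanisms, a Beta-Bernoulli update: an estimate of a shipping lane’s disruption probability that sharpens with every new observation. This is a generic statistical sketch with invented data, not a description of any vendor’s system:

```python
# Minimal sketch of "learning from data" (generic statistics, not any
# vendor's method): a Beta-Bernoulli update of a lane's disruption
# probability that sharpens as each shipment outcome is ingested.

# Start with a weak prior: Beta(1, 1), i.e. "no idea" (mean 0.5)
alpha, beta = 1.0, 1.0

# Invented observation stream: True means the shipment was disrupted
observations = [False, False, True, False, False,
                False, True, False, False, False]

for disrupted in observations:
    if disrupted:
        alpha += 1      # one more observed disruption
    else:
        beta += 1       # one more clean shipment
    estimate = alpha / (alpha + beta)
    print(f"after {int(alpha + beta - 2):2d} shipments: "
          f"P(disruption) = {estimate:.2f}")
```

With each shipment the estimate moves from an uninformative 0.50 toward the observed disruption rate, which is the sense in which more data makes the system’s predictions better.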

“History is full of ‘unintended consequences’, and recent events such as the financial crisis have driven a real appetite among executives for what we are doing. Today it may be seen as best practice or thought leadership, but some day in the future it should be fairly normal to use cognition platforms. For the first time in our evolution we have the capability to use cognition platforms to explore deep problems and find solutions to them in a way we’ve not been able to do before. We believe all important decisions can and should be simulated, and we anticipate and look forward to a day when humans can have a spoken dialogue with our computers and improve our lives on earth. It sounds like science fiction, but we are doing it today for real companies in the real world. It is not fiction any more.”

Lyon makes a very important point when he states, “We anticipate and look forward to a day when humans can have a spoken dialogue with our computers.” Analysts and decision makers need to be able to communicate with and receive insights from cognitive computing systems using common (or natural) language. The more user-friendly such systems are, the more useful they become. Bowman asserted, “It all comes down to a lack of visibility, coupled with inadequate response plans when the inevitable problems occur.” Leclerc told Bowman, “Good risk management is both a technology and business-process effort. … Companies have spent untold amounts of money on enterprise resource planning (ERP) systems to manage financials and other basic functions, but they’re less advanced in acquiring systems that enable end-to-end visibility and collaboration among all supply-chain partners. At the same time, they need to tear down the functional ‘silos’ that keep various disciplines from communicating key information on raw materials, goods in production and inventory throughout the chain.” Cognitive computing systems can help address all of those challenges.