In an interview with SupplyChainBrain, John J. Brown, director of supply chain risk management at Coca-Cola, claimed that “psychology plays a much larger role in risk assessment and risk management than most people realize.” After researching this topic for two years, Brown has come to understand that risk assessment depends on what we perceive as important, and that a whole host of experience factors play into that, including what we know, what we don’t know, and what we choose to remember. [“Psychological Factors in Risk Management,” 21 December 2012] Fortunately, Caroline MacDonald reports that business executives recognize at least part of the problem. Reporting on a study entitled Executive Perspectives on Top Risks for 2013, she writes, “The top two risks identified by executives send the message that they are more concerned with what they don’t know, regarding economic conditions and regulations, than with what they do know, even about significant operational risks.” [“Top Risks Reflect Unsure Business Environment,” 15 March 2013]
Dylan Evans, founder of Projection Point, agrees with Brown that psychological issues can compromise risk management programs. “A key aspect of risk intelligence is recognizing the limits to your knowledge,” he writes. “People’s judgment of risks is deeply compromised by psychological biases.” [“Your judgment of risk is compromised,” The Knowledge Exchange, 12 March 2013] Clearly, permitting personal perceptions to “deeply compromise” your organization’s risk management process is not a good thing. The SupplyChainBrain interview with Brown next turns to the subject of perception.
“Our perception of risk is highly influenced by choice – whether we are exposed to a risk by our own volition or because it is forced on us – and the degree of control we have over the risk. ‘If we are in control of a situation, we feel much better about it, which is why a lot of people choose to drive rather than fly, even though statistics show they are much more likely to die in a car accident than from traveling via air,’ says Brown.”
Evans agrees that perception (or rather misperception) is a serious problem. Perceptions feed our biases and, according to Evans, “one of the most pervasive of these is a phenomenon called the favorite-longshot bias, first observed by the American psychologist Richard Griffith in 1949.” He explains:
“Numerous studies have found evidence of the bias at racetracks and other sports betting markets all around the world. Indeed, it is probably the most discussed empirical regularity in sports gambling markets, and the literature documenting it now runs to well over a hundred scientific papers. Daniel Kahneman and Amos Tversky provide perhaps the best theoretical framework in which to understand the phenomenon. In the famous 1979 paper in which they presented their Prospect Theory, they noted that people’s ability to perceive differences between extreme probabilities was far greater than their ability to notice differences between intermediate ones. The difference between 0 percent and 1 percent, for example, is much more salient than that between 10 percent and 11 percent, and the difference between 99 percent and 100 percent looms much larger than that between 89 percent and 90 percent. As a result, we tend to overreact to small changes in extreme probabilities and underreact to changes in intermediate probabilities. We will pay far more for a medical operation that increases our chance of surviving from 0 percent to 1 percent than one that increases it from 10 percent to 11 percent. We will also pay more for a lottery ticket that increases our chance of winning from 99 percent to 100 percent than one that increases it from 89 percent to 90 percent.”
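Kahneman and Tversky’s observation can be made concrete with the one-parameter probability-weighting function from their later (1992) cumulative prospect theory paper, which is the standard formalization of the curvature described above. The functional form and the gamma value are taken from that literature, but applying them here is my own illustration, not something Evans supplies:

```python
def w(p, gamma=0.61):
    """Probability-weighting function from Tversky & Kahneman (1992).
    gamma = 0.61 is their median estimate for gains; treat the exact
    value as an assumption for illustration."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The perceived jump from 0% to 1% dwarfs the jump from 10% to 11% ...
jump_low  = w(0.01) - w(0.00)   # roughly 0.055
jump_mid  = w(0.11) - w(0.10)   # roughly 0.009
# ... and the jump from 99% to 100% dwarfs the one from 89% to 90%.
jump_high = w(1.00) - w(0.99)   # roughly 0.088
jump_mid2 = w(0.90) - w(0.89)   # roughly 0.013
```

With these parameters, the perceived change at each extreme is several times larger than an identical one-point change in the middle of the scale, which is exactly the overreaction-to-extremes pattern the quote describes.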
Personally, I’d like to know where I can buy that lottery ticket that “increases our chance of winning from 99 percent to 100 percent”! Even if such a ticket doesn’t exist (and, trust me on this, it doesn’t exist), Evans’ point is well made. He continues:
“This oversensitivity to small changes in likelihood at both ends of the probability spectrum gives rise to an interesting phenomenon in gambling on horse racing; punters tend to value longshots more than they should, given how rarely they win, while valuing favorites too little, given how often they win. The result is that punters make bigger losses over the long run when they bet on longshots than they do when betting on favorites (they still make losses when betting on favorites, because the racetrack takes a percentage of each bet, but the losses are smaller). This is why bookies rejoice when a longshot wins; that’s when they make their biggest profits. According to data published by Erik Snowberg and Justin Wolfers in their article ‘Explaining the Favorite-Longshot Bias: Is it Risk-Love or Misperceptions?’, the bias shows up only at the extremes of the probability spectrum: for very short and very long odds. Betting on horses with odds between 4/1 and 9/1 has an approximately constant rate of return (at minus 18 percent), which implies that bettors are quite good at distinguishing between probabilities over this range. It is only when punters bet on favorites (with odds shorter than 4/1) or longshots (with odds longer than 9/1) that they get into difficulties.”
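The arithmetic behind those rates of return is simple enough to sketch. The helper functions below are my own illustration; the “true” win probabilities fed into them are assumptions chosen to reproduce numbers of the kind quoted above, not Snowberg and Wolfers’ actual data:

```python
def expected_return(fractional_odds, true_win_prob):
    """Expected return on a 1-unit bet at fractional odds
    (4 means 4/1: a win pays the stake back plus 4 units)."""
    return true_win_prob * (fractional_odds + 1) - 1

def breakeven_prob(fractional_odds):
    """Win probability at which such a bet exactly breaks even."""
    return 1 / (fractional_odds + 1)

# A 4/1 horse whose true chance sits 18% below break-even loses 18 cents
# per unit staked (illustrative assumption):
mid_range = expected_return(4, 0.82 * breakeven_prob(4))      # about -0.18
# An overbet 100/1 longshot, whose true chance is far below its implied
# break-even probability, loses much more per unit staked:
longshot  = expected_return(100, 0.40 * breakeven_prob(100))  # about -0.60
```

The point of the sketch is the asymmetry: in the 4/1-to-9/1 range bettors price horses close to their true chances, so losses stay near the track’s take, while at long odds the gap between perceived and true probability makes the losses balloon.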
So what does betting at a racetrack have to do with risk management in a business? Evans believes that the favorite-longshot bias is alive and well in the business world. To compensate for individual biases, an organization needs to use objective, unbiased analysis. Or, as Evans puts it, you need “to turn to mathematics.” He explains:
“Remember: A key aspect of risk intelligence is recognizing the limits to your knowledge, and this includes recognizing the limits of risk intelligence itself. The financial crisis of 2007–2008 was partly due to an overreliance on mathematical models and a corresponding failure to exercise judgment. The opposite mistake — ignoring mathematical models and relying entirely on subjective estimates — can be equally dangerous when fine distinctions matter. Such distinctions are particularly important when it comes to extremely low probabilities of less than 1 percent. It is impossible to feel the difference between, say, a probability of 0.01 percent (one in 10,000) and 0.001 percent (one in 100,000), and yet the first is 10 times greater than the second. In this territory, you must therefore abandon all recourse to epistemic feelings and rely completely on your calculator.”
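Evans’ closing point is easy to demonstrate with a calculator. The consequence figure below is hypothetical, but the arithmetic shows why two probabilities that “feel” identical can imply very different expected losses:

```python
# Hypothetical $100M-consequence event; both probabilities feel like
# "basically never", yet the expected annual losses differ tenfold.
consequence = 100_000_000

loss_at_1_in_10k  = consequence / 10_000    # p = 0.01%  -> 10,000 per year
loss_at_1_in_100k = consequence / 100_000   # p = 0.001% -> 1,000 per year
```

A risk manager who treats those two events as interchangeable is misallocating an order of magnitude of expected loss, which is precisely why this territory belongs to arithmetic rather than intuition.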
In the age of big data, it is becoming easier to trust modeling to overcome biases because the sheer volume of data helps minimize skewing caused by outliers (provided, of course, that the models are set up correctly). That’s important because risk management processes, and the solutions they invoke, cost money. Organizations don’t want to overpay to become resilient (which can happen if solutions are based on perceptions rather than realities). Brown agrees that organizations need to focus on the right risks and not merely perceived risks.
“‘Another important thing to realize is that risks may exist, even though we don’t know about them or fail to see them,’ says Brown. This is the ‘black swan’ aspect of risk perception, which is based on a book by that name by Nassim Nicholas Taleb. The name refers to a time in Europe when no one believed that it was possible for swans to be anything but white, until explorers came across black swans in Australia. ‘The term has come to mean a risk event that no one believed existed or was possible,’ Brown says. The impact is that companies may be focusing on the wrong risks, he says. ‘We end up managing the risks that we perceive to be of a higher likelihood or higher consequence, but those may not be the most important risks to our company. That is why it is important to understand the role of psychological factors on risk identification and risk assessment.'”
Brown reports that Coca-Cola holds “training sessions to help people calibrate their understanding of risk.” “Because risk perception is memory based and experience based,” he told the SupplyChainBrain staff, “we need to help people understand the way their minds work when it comes to perceiving risk.” Brown also reports that Coca-Cola conducts “structured interviews to help bring depth and analysis to … each risk.” Bob Ferrari indicates that there are other things a business can do as well. He writes:
“The concepts of scenario based planning, advanced business intelligence, predictive analytics and supply chain control towers are gaining increased supply chain functional attention. They are an important extension of business continuity strategy and should come under the stewardship of a cross-functional, cross-business steering team tasked with the same.” [“Validation of Increased Supply Chain Risk Should Equate to Investment in Resiliency,” Supply Chain Matters, 2 June 2012]
The concept of scenario-based planning has been around for a long time. A research paper written back in 1977 by John H. Vanston, W. Parker Frisbie, Sally Cook Lopreato, and Dudley L. Poston provided a succinct explanation of why this method is useful. They wrote:
“In order to minimize the risk inherent in planning against a single, unforeseeable future and to be in a position to profit from different possible trends and events, many governmental agencies and private companies are finding it desirable to plan against, not one, but rather a range of possible futures. Obviously, for the technique to be used effectively a set of alternate scenarios which are relevant, reasonable, and logically interrelated needs to be developed.” [“Alternative Scenario Planning,” Technological Forecasting and Social Change, 1977]
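A minimal sketch of what “planning against a range of possible futures” can look like in practice follows. Every scenario probability and payoff below is an invented illustration, but the structure shows why evaluating plans across scenarios surfaces trade-offs that a single forecast hides:

```python
# Two hypothetical sourcing plans scored against three alternative futures.
scenarios = {"recession": 0.2, "baseline": 0.6, "boom": 0.2}  # scenario: prob
payoffs = {  # plan -> payoff under each scenario (illustrative units)
    "single-source": {"recession": -40, "baseline": 100, "boom": 140},
    "dual-source":   {"recession": -10, "baseline":  80, "boom": 110},
}

def expected_payoff(plan):
    """Probability-weighted payoff across all scenarios."""
    return sum(prob * payoffs[plan][s] for s, prob in scenarios.items())

def worst_case(plan):
    """Payoff under the least favorable scenario."""
    return min(payoffs[plan].values())
```

Note that neither plan dominates: single-source has the higher expected payoff, while dual-source is far more robust if the recession scenario materializes. Surfacing that tension is exactly what planning against one “most likely” future fails to do.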
For further discussion about what Vanston and his colleagues wrote, read “Alternative Scenario Planning.” [Supply Chain Risk Management, 2 July 2012] Fernando Hernandez, a consultant with Palisade Corporation, indicates that simulation modeling is also on the rise. He writes:
“More and more organizations around the world are turning their eyes away from decision-making processes based on single-point estimates and viewing their risks and opportunities with more sophisticated techniques. One such technique is Monte Carlo simulation (MCS). In a nutshell, MCS allows the examination of all the possible outcomes of decisions and assesses the impact of risk, allowing for better decision making under uncertainty. It is a computerized mathematical technique that allows the business modeler to account for risk in quantitative analysis and decision making. MCS furnishes the decision-maker with a range of possible outcomes and the probabilities they will occur for any choice of action.”
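The mechanics Hernandez describes can be sketched in a few lines using Python’s standard library rather than any commercial tooling. The cost model, the distributions, and their parameters below are all illustrative assumptions; the point is that the output is a distribution of outcomes, not a single-point estimate:

```python
import random
import statistics

def simulate_landed_cost(n_trials=50_000, seed=42):
    """Monte Carlo sketch of a hypothetical landed-cost model: draw
    uncertain inputs many times and collect the resulting costs."""
    rng = random.Random(seed)  # seeded for reproducibility
    outcomes = []
    for _ in range(n_trials):
        units     = rng.triangular(8_000, 15_000, 10_000)  # low, high, mode
        unit_cost = rng.gauss(4.50, 0.40)                  # mean, std dev
        overhead  = 12_000                                 # fixed cost
        outcomes.append(units * unit_cost + overhead)
    return sorted(outcomes)

costs = simulate_landed_cost()
p5, p50, p95 = (costs[int(len(costs) * q)] for q in (0.05, 0.50, 0.95))
mean = statistics.fmean(costs)
```

Instead of one budget number, the decision-maker now sees a 5th-to-95th percentile range and can ask questions like “what is the chance costs exceed our budget?”, which is the essence of the technique.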
My only quibble with Hernandez is that I don’t believe that any simulation can provide “all the possible outcomes” of a scenario. The world is just too complex to model completely. Nevertheless, simulations are a good technique for overcoming biases and perceptions. Just being aware that you have biases that color your perceptions of the world is a good start to finding ways to overcome them.