The world is often divided into two camps: optimists and pessimists. Pessimists look into the future and see a dystopia in which most human workers are made redundant by artificial intelligence (AI) systems. Some pessimists go beyond the dystopian scenario to predict the total annihilation of humankind by AI overlords. Optimists, on the other hand, see a bright future in which cognitive systems augment human intelligence to help solve some of humanity’s greatest challenges and usher in an era of peace and prosperity for all. Realistically, neither extreme is likely to unfold. Nevertheless, I lean towards the optimistic point of view. Futurist Gerd Leonhard (@derFuturist) is also optimistic. He writes, “I am a hopeful optimist, a humanist, and someone who still believes that technology can be harnessed for the greater good of mankind.”[1] Like me, Leonhard understands that the future is the result of deliberate decisions (or the lack of them) and unexpected cosmic twists that humans can’t control. He adds, “My job as a futurist is to consider what it means to be human in a machine-led world, and the many social, physical, economical and political ramifications of such a place.”
The human/machine future
Clearly, Leonhard believes we are entering an age of artificial intelligence (i.e., “a machine-led world”) and, like him, I ponder humankind’s place in a highly technical world. He predicts, “Humanity will change more in the next 20 years than the previous 300 years.” What does that mean? According to Leonhard, “The effect of the changes we’re witnessing surpass pivotal historical moments such as the industrial revolution, or the invention of the printing press. Technology will no longer remain just outside of us, but it is relocating inside of us in the form of wearables, brain/computer interfaces, nanotechnology, and human genome editing. What we are experiencing is a shift in the very definition of what it means to be human.” Leonhard’s vision of the future can take your breath away, whether from surprise, fear, or both. One thing history has taught us is that the future unfolds in unexpected ways. Rarely does extrapolation work when it comes to prognostication. Leonhard understands that. He writes, “Technology is growing exponentially powerful, and whilst much of it is likely to have very positive effects on humanity — such as the possibility of ending diseases and solving energy scarcity — some of it could change what it means to be human. We need to embrace technology and harness its positive powers, but we should not become technology ourselves in the process.” In other words, we need to ensure technology remains a servant, rather than becoming the master, as humankind moves forward.
One of the things I like most about the cognitive computing movement is that it was specifically developed to augment, rather than replace, human intelligence. Ginni Rometty (@GinniRometty), IBM’s CEO, explains that the difference between the terms “AI” and “cognitive computing” begins with intent: AI seeks to match human intelligence, whereas cognitive computing seeks to augment it. She recalls, “[When IBM coined the term cognitive computing] the idea was to help you and I make better decisions amid cognitive overload. That’s what has always led us to cognitive. If I considered the initials AI, I would have preferred augmented intelligence. It’s the idea that each of us are going to need help on all important decisions.”[2] As President and CEO of a cognitive computing company, I’m well aware of the technology’s strengths and weaknesses. Cognitive computing platforms, like Enterra’s Artificial Intelligence Learning Agent™ (AILA®), are decision aids that let users ask questions and receive answers in language they understand. Although that sounds a little Wizard of Oz-ish, it’s not. As decision aids, cognitive platforms provide insights that help decision-makers select informed courses of action. Analytics expert Kamalika Some (@KamalikaS) notes, “A cognitive computing system is used in complex situations for ambiguous and uncertain outcomes.”[3] Most often, humans make the final decisions.
Augmenting human decision making
Decision making in business, as in life, makes all the difference. Bain analysts Michael C. Mankins and Lori Sherer (@lorisherer) assert that if you can improve a company’s decision making, you can dramatically improve its bottom line. They explain, “The best way to understand any company’s operations is to view them as a series of decisions.”[4] They add, “We know from extensive research that decisions matter — a lot. Companies that make better decisions, make them faster and execute them more effectively than rivals nearly always turn in better financial performance. Not surprisingly, companies that employ advanced analytics to improve decision making and execution have the results to show for it.” Chuck Densinger, co-founder and chief operating officer at Elicit, writes, “Businesses of all sizes make a head-spinning number of decisions — using up stores of valuable and finite energy in the process. It’s not surprising then that we try to preserve this precious commodity by simplifying or automating as much as we can.”[5]
He goes on to describe a “Decision Automation Continuum” across which, he believes, all companies must journey on their way to augmented decision making. The continuum involves three groups of people: people who deal with technology (i.e., Geeks), people who deal with data (i.e., Nerds), and people who deal with business processes (i.e., Suits). At the beginning of the continuum are decisions made strictly by humans (i.e., “unassisted decision making”). According to Densinger, “The vast majority of decisions a business makes fall in this category.” As decisions get more complicated (i.e., as they involve more variables), Geeks and Nerds must collaborate to “build new models, design metrics, and provide reports or dashboards that inform and assist human decision makers.” At this stage, decision makers primarily rely on spreadsheets. As more data is collected and analyzed (i.e., as the Nerds join the decision-making process), the trip along the continuum continues.
Densinger writes, “After enough learning has taken place, we can then build models that will enable the machines to recommend specific decisions or actions. The models and algorithms developed in the first stage may still be used, or they may have been updated by learnings from previous stages. In this stage, a human still takes the final action, often after adjusting the machine recommendation.” Even in companies using advanced analytics platforms, essential decisions will remain human decisions augmented by machine insights. “In the final stage,” Densinger writes, “we have refined the machine’s decision making to the point where we entrust it to act without intervention.” Even at this stage, however, companies use a management-by-exception approach, letting machines make routine decisions but alerting human decision makers when an anomaly is detected. Densinger concludes, “These solutions may proceed slowly or rapidly to full automation, depending on the nature of the problem, and the organization’s ability to effectively integrate and deploy the technology (Geek), data (Nerd), and business processes (Suit) required to create a new decisioning capability. … Strategists (Suits) are required in every stage to ensure that we are continuing to consider business priorities and ask the right questions along the way.”
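To make that final stage concrete, below is a minimal Python sketch of the management-by-exception pattern: the machine acts on routine cases on its own and escalates anything it flags as an exception to a human decision maker. The function names, the confidence threshold, and the toy scoring rule are hypothetical illustrations of the pattern, not part of Densinger’s framework or any particular vendor’s platform.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str          # what the system (or a person) decided to do
    decided_by: str      # "machine" or "human"

def manage_by_exception(
    case: dict,
    model_score: Callable[[dict], float],
    human_review: Callable[[dict], str],
    confidence_threshold: float = 0.9,
) -> Decision:
    """Machine handles routine cases; exceptions escalate to a human.

    `model_score` returns the model's confidence that its recommended
    action is correct; below the threshold, the case is an exception.
    """
    confidence = model_score(case)
    if confidence >= confidence_threshold:
        # Routine case: the machine acts without intervention.
        return Decision(action="approve", decided_by="machine")
    # Exception: alert a human decision maker and record their call.
    return Decision(action=human_review(case), decided_by="human")

# Hypothetical usage: a toy scoring rule and a stubbed human reviewer.
decision = manage_by_exception(
    case={"order_value": 125_000},
    model_score=lambda c: 0.4 if c["order_value"] > 100_000 else 0.95,
    human_review=lambda c: "hold for manual approval",
)
print(decision)  # Decision(action='hold for manual approval', decided_by='human')
```

The design choice worth noticing is that the human is invoked only on exceptions, which is what lets routine volume scale while strategists keep asking the right questions about the cases that fall outside the model’s confidence.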
Concluding thoughts
Keeping the human touch in decision making is important. Leonhard observes, “Our blind trust in technology, and our inclination towards viewing machines as superior, per se, is truly frightening.” Using cognitive technologies to augment human decision-making rather than replace it will help us understand humankind’s place in a machine-led world. No one can predict the future, but we can use technology to help us make better decisions, which can lead to a brighter future.
Footnotes
[1] Gerd Leonhard, “As Technology Becomes Cognitive, All Paths Must Lead To Collective Human Flourishing,” Forbes, 25 February 2019.
[2] Megan Murphy, “Ginni Rometty on the End of Programming,” Bloomberg BusinessWeek, 20 September 2017.
[3] Kamalika Some, “Leveraging Cognitive Computing for Business Gains,” Analytics Insight, 19 September 2018.
[4] Michael C. Mankins and Lori Sherer, “Creating value through advanced analytics,” Bain Brief, 11 February 2015.
[5] Chuck Densinger, “Why humans still hold the advantage in decision automation,” Information Management, 11 April 2019.