Artificial intelligence (AI) is the futuristic boogeyman of the cinema. In the movies, the sole aim of artificial intelligence systems seems to be the elimination of Homo sapiens. The real world of AI is a lot less dramatic and a lot more beneficial. Analysts from eMarketer note, “Artificial intelligence is already becoming entrenched in many facets of everyday life, and is being tapped for a growing array of core business applications, including predicting market and customer behavior, automating repetitive tasks and providing alerts when things go awry.”[1] Former IBM executive Irving Wladawsky-Berger believes we may have reached a tipping point for artificial intelligence applications. “After many years of promise and hype,” he writes, “AI seems to be finally reaching a tipping point of market acceptance. AI is now being applied to activities that not long ago were viewed as the exclusive domain of humans.”[2] The fact that AI is now being applied to activities once performed by humans is both a curse and a blessing. For people put out of work by AI, it’s a curse. For people relieved of tedious tasks (but still employed), AI is a blessing.
Have We Reached the Artificial Intelligence Tipping Point?
What does Wladawsky-Berger mean when he asserts we have reached a tipping point of market acceptance? I suspect he means a majority of business executives are now seriously going to consider how artificial intelligence systems can benefit their enterprises. Mike Gualtieri (@mgualtieri), a Vice President and Principal Analyst at Forrester Research, reports, “Forrester surveyed business and technology professionals and found that 58% of them are researching AI, but only 12% are using AI systems.”[3] I’m not sure those figures confirm we have reached a tipping point, but they could. Gualtieri explains, “This gap reflects growing interest in AI, but little actual use at this time. We expect enterprise interest in, and use of, AI to increase as software vendors roll out AI platforms and build AI capabilities into applications. Enterprises that plan to invest in AI expect to improve customer experiences, improve products and services, and disrupt their industry with new business models.” If widespread interest in AI constitutes a tipping point, then I agree with Wladawsky-Berger that a tipping point has been reached.
What Will the Future Look Like with More AI in Our Lives?
Rowena Lindsay (@Rowena__Lindsay) predicts, “Whether they are assisting your doctor in surgery, driving your car, analyzing crime patterns, or cleaning and providing the security system for your home, artificial intelligence will play a big role in urban living by 2030.”[4] That may be true, but Dave Gershgorn (@davegershgorn) observes that there are so many varied forms of AI in the works that trying to predict how those applications might affect our daily lives is extremely difficult.[5] Commenting on a report released by Stanford University’s One Hundred Year Study project, Gershgorn writes, “The report illustrates eight areas that AI has already impacted, and will continue to influence in some way: transportation, home robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment.” He continues:
“The researchers don’t see high-skilled or low-skilled jobs being affected — but certain jobs already affected by the internet such as travel agents will wane further. High and low-skilled jobs will have tasks automated, but still require humans to work machinery or make informed decisions. Any jobs that AI might create are beyond the authors’ imagination. ‘The new jobs that will emerge are harder to imagine in advance than the existing jobs that will likely be lost,’ the study says.”
Wladawsky-Berger notes, “AI suffers from what’s become known as the AI effect: AI is whatever hasn’t been done yet, and as soon as an AI problem is successfully solved, the problem is no longer considered part of AI.” Analysts from eMarketer add, “Many people … don’t realize that AI powers some of today’s most buzzed-about technologies.” In other words, the public might never truly understand how ubiquitous AI may become in their lives and, as a result, may continue to harbor suspicions about its value.
Befriending Artificial Intelligence
Peter Stone, a computer scientist at the University of Texas at Austin, told Lindsay, “AI tends to be very polarizing: some people tend to be very excited about it, others are very fearful, and sometimes the same people have both of those different attitudes.” Change is always difficult, and we have witnessed growing pains associated with technological advancements in the past. Nevertheless, humankind has inevitably pressed forward. Bryan Johnson writes, “The evolution of human tools, from rocks to AI, can be seen as a trajectory of increasingly powerful effort arbitrage, where we exploit our comparative advantage relative to our tools to do things better, and do more new things. Along this trajectory, tools that embody significant levels of intelligence are our most powerful yet. … In this pursuit of effort arbitrage, the smallest of intelligence advancements has the power to yield enormous gains for humans, individual and collective. … With each advance, we happily relinquished a small part of our agency for known pre-programmed outcomes. Our tools could begin doing bigger and bigger things on our behalf, freeing us up for other, more desired tasks.”[6] He continues:
“We’re at an interesting transition point where we are moving from using our tools as passive extensions of ourselves, to working with them as active partners. … Our tools are now actors unto themselves, and their future is in our hands. … Ideally, such technologically evolved decision-making abilities can flourish alongside evolving [human intelligence (HI)], to rethink assumptions, reframe possibilities and explore new territories. … In short, we are poised for an explosive, generative epoch of massively increased human capability through a Cambrian explosion of possibilities represented by the simple equation: HI+AI. When HI combines with AI, we will have the most significant advancement to our capabilities of thought, creativity and intelligence that we will have ever had in history.”
I agree with Johnson’s optimism and believe the future should be characterized by human/machine collaboration. Whether that future actually emerges remains an open question. As Gershgorn noted, the term “AI” covers a lot of ground, and some AI efforts are more concerning than others; the most potentially dangerous efforts are those trying to develop artificial general intelligence. Gualtieri calls this “pure AI.” He explains, “Humanity is still far away from pure AI … The highest benchmark for AI is the humanlike abilities to perceive (sense), learn, think (formulate ideas), interact, and take actions. This is pure AI.” Although cognitive computing systems can sense, think, act, and learn in a basic way, they do not yet measure up to the “pure AI” about which Gualtieri is writing. Today’s cognitive computing systems fall into the narrow AI camp. I do know that befriending most use cases of artificial intelligence makes a lot more sense than fearing them.
Footnotes
[1] Staff, “Understanding Artificial Intelligence,” eMarketer, 21 October 2016.
[2] Irving Wladawsky-Berger, “Has AI (Finally) Reached a Tipping Point?” The Wall Street Journal, 28 October 2016.
[3] Mike Gualtieri, “Artificial Intelligence: Fact, Fiction. How Enterprises Can Crush It,” Information Management.
[4] Rowena Lindsay, “What will artificial intelligence look like in 15 years?” The Christian Science Monitor, 6 September 2016.
[5] Dave Gershgorn, “Not even the brightest minds in artificial intelligence can tell you how it’s going to change our lives,” Quartz, 1 September 2016.
[6] Bryan Johnson, “The combination of human and artificial intelligence will define humanity’s future,” TechCrunch, 12 October 2016.