Media is abuzz with stories about artificial intelligence (AI). The scariest stories are about artificial general intelligence (AGI), the type of AI most often depicted in films and science fiction. Those stories routinely depict an AGI system taking over the world, intent on destroying humanity. Science fiction is a good place for such stories, since no one really knows if AGI will ever be developed. The late Paul Allen, co-founder of Microsoft, doubted an AGI system would be created. He argued developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.” Nevertheless, conducting “what if” exercises about AGI is important. Jayshree Pandya (@jayshreepandya), Founder of Risk Group, notes, “As humanity stands on the brink of a technology triggered information revolution, the scale, scope and complexity of the impact of intelligence evolution in machines is unlike anything humankind has experienced before. As a result, the speed at which the ideas, innovations and inventions are emerging on the back of artificial intelligence has no historical precedent and is fundamentally disrupting everything in the human ecosystem.” She believes the trajectory towards AGI is troubling. She explains, “The technology triggered intelligence evolution in machines and the linkages between ideas, innovations and trends have in fact brought us on the doorsteps of singularity.” She could just as easily have written that AI research and the pursuit of AGI have brought us to the doorstep of the unknown. Or as Rod Serling would say, “Welcome to the Twilight Zone.”
If you are unfamiliar with the term “singularity,” you’re probably not alone. The AGI singularity is defined as the moment when machines become both sentient and smarter than the humans who created them. The word “singularity” has its roots in both mathematics and the physical sciences (specifically, cosmology). Both uses are interesting to examine. A mathematical singularity is a point at which a function no longer behaves in a predictable way — for example, the function f(x) = 1/x, which blows up as x approaches zero. In cosmology, it refers to a point, such as the big bang or the center of a black hole, where the known laws of physics break down and from which no useful information can be recovered. The common thread in these three definitions of singularity is that it is impossible to predict anything useful about them or their consequences. A singularity changes everything. Pandya adds, “Irrespective of whether we believe that the singularity will happen or not, the very thought raises many concerns and critical security risk uncertainties for the future of humanity.” What could change? Pandya provides a short list including: “The fundamental transformation of entire interconnected and interdependent systems of basic and applied science: research and development, concept to commercialization, politics to governance, socialization to capitalism, education to training, production to markets, survival to security and more.” In other words, everything could change.
Types of artificial intelligence
Before we tar all forms of AI with a scary brush, we need to distinguish between three types of AI. They are:
- Weak AI: Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” In other words, weak AI was developed to handle/manage a small and specific data set to answer a single question. Its perspective is singular, resulting in tunnel vision.
- Strong AI: Strong AI originally referred to Artificial General Intelligence (i.e., a machine with consciousness, sentience and mind), “with the ability to apply intelligence to any problem, rather than just one specific problem.” Today, however, there are cognitive systems that fall short of AGI but far surpass weak AI. These systems were developed to handle/manage large and varied data sets to answer a multitude of questions in a variety of categories. This is the category into which cognitive computing falls. Cognitive AI can deal with ambiguities whereas weak AI cannot.
- General AI: The AGI Society notes the ultimate goal of AGI is to develop “thinking machines” (i.e., “general-purpose systems with intelligence comparable to that of the human mind”).
The important thing to remember is that neither weak nor strong AI systems are going to take over the world — but that doesn’t mean they can’t do harm. Critics of AI often point to the dangers of autonomous weapons capable of making lethal choices independent of human intervention. Even though such systems will never rule the world, they could nevertheless do grave damage to humans. Pandya concludes, “While there is no way to calculate just how and when this intelligence evolution will unfold in machines, one thing is clear: it changes the very fundamentals of security, and the response to it must be integrated and comprehensive.”
The future of AGI
There are a number of articles available discussing the pros and cons of artificial intelligence. One such article, discussing concerns about AI’s future, was written by Kelsey Piper (@KelseyTuoc). In it she takes up questions like: What is AI? Is it even possible to make a computer as smart as a person? How exactly could it wipe us out? When did scientists first start worrying about AI risk? Why couldn’t we just shut off a computer if it got too powerful? What are we doing right now to avoid an AI apocalypse? Is this really likelier to kill us all than, say, climate change? Is there a possibility that AI can be benevolent? How worried should we be? Like Pandya, Piper believes that, regardless of whether we really know the answers to those questions, we need to have the conversation. “AI looks increasingly like a technology that will change the world when it arrives,” she writes. “Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit ‘go.’ So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.”
Dan Robitzski (@DanRobitzski) notes, “We have no idea how to build AGI.” He cites a number of opinions from AI researchers, none of whom could provide a serious prediction about when AGI might be achieved. Several years ago, Oren Etzioni (@etzioni), CEO of the Allen Institute for Artificial Intelligence, gave a TED talk about artificial intelligence. Citing all the negative depictions of AI in the movies and popular press, he joked, “It hurts.” In his talk, he provides a balanced view of the benefits of, and concerns about, AI. The 13 minutes it takes to watch are well worth your time. In his talk, he cites Eric Horvitz (@erichorvitz), Director of Microsoft Research Labs, who asserts, “It’s the absence of AI technologies that is already killing people.”
“For decades,” writes Harvard cognitive scientist Steven Pinker (@sapinker), “we have been terrified by dreadful visions of civilization-ending overpopulation, resource shortages, pollution and nuclear war. But recently, the list of existential menaces has ballooned. We now have been told to worry about nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials and teenagers who will brew a genocidal virus or take down the internet from their bedrooms.” Pinker doesn’t believe doomsday thinking is helpful. He asks, “How should we think about the existential threats that lurk behind the vast incremental progress the world has enjoyed in longevity, health, wealth and education?” He continues, “No one can prophesy that a cataclysm will never happen. But, as with our own mortality, there are wise and foolish ways of dealing with the threats to our existence.” The greatest danger of doomsday thinking, he writes, is, “Reasonable people will think, as a 2016 New York Times article put it, ‘These grim facts should lead any reasonable person to conclude that humanity is screwed.’ If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels or exhort governments to rethink their nuclear weapons policies? Eat, drink and be merry, for tomorrow we die!” Certainly, that’s not helpful for the generations yet to be born. He goes on to assert most past civilizations could have survived had they been blessed with better technological solutions. He quotes physicist David Deutsch, “Before our ancestors learned how to make fire artificially (and many times since then, too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge.”
Pinker believes AI can help the world address global challenges and that the upside to AI is larger than the downside. I agree. He concludes, “The prospect of meeting these challenges is by no means utopian. … Implementing the measures that will drive these numbers all the way down to zero will require enormous amounts of persuasion, pressure, and will. But we know that there is one measure that will not make the world safer: moaning that we’re doomed.” Caution is warranted; but optimism should still rule the day. We simply don’t know if we will ever achieve the singularity — and know even less about what will happen if we do.
Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
Jayshree Pandya, “The Troubling Trajectory Of Technological Singularity,” Forbes, 10 February 2019.
Kelsey Piper, “The case for taking AI seriously as a threat to humanity,” Vox, 23 December 2018.
Dan Robitzski, “When Will We Have Artificial Intelligence As Smart as a Human? Here’s What Experts Think,” Futurism, 21 December 2018.
Steven Pinker, “The dangers of worrying about doomsday,” The Globe and Mail, 24 February 2019.