
Why Artificial Intelligence Won’t Necessarily Mean the End of the Human Race

October 7, 2014


Numerous warnings have been raised recently about the impending doom that could result from the continued development of artificial intelligence (AI) systems. The most widely publicized warning comes from the renowned theoretical physicist Stephen Hawking (@Prof_S_Hawking). “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” Hawking writes along with his colleagues Stuart Russell, Max Tegmark (@tegmark), and Frank Wilczek (@FrankWilczek). “But this would be a mistake, and potentially our worst mistake in history.” [“Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’” The Independent, 1 May 2014] Physicist and author Louis Del Monte (@DelMonteLouis1) adds his voice to the warnings. He “believes that new artificial intelligence technology will threaten the survival of humankind, and in 30 years, probably, the top species on our planet, Earth, will not be humans.” [“New Artificial Intelligence Technology Will Threaten Survival of Humankind: Louis Del Monte,” by Afza Fathima, International Business Times, 7 July 2014] These are serious scientists raising serious concerns. How scared should we be?


Raising fears about the rise of sentient machines has been a staple of science fiction books and movies for decades. One of the most infamous fictional sentient machines is HAL 9000, a computer system introduced in Arthur C. Clarke’s “Space Odyssey” series. Most people’s introduction to HAL came in Stanley Kubrick’s 1968 film “2001: A Space Odyssey.” In that film, HAL was visually portrayed as an eerie, glowing red camera eye. HAL’s soft voice was devoid of emotion as he interacted with the crew of Discovery One. In the movie, HAL must wrestle with two conflicting commands: relay information accurately and withhold the true purpose of the mission from the crew. This conflict results in subtle malfunctions that concern the crewmembers, who decide to shut HAL down rather than risk further problems. In response, HAL tries to eliminate the crew and successfully kills most of them. The decision to kill the crew is a case of twisted logic, at least from HAL’s perspective. After all, HAL wouldn’t have to lie to the crew to protect the mission if the crew were dead.


The next generation of moviegoers was introduced to another, even more menacing, sentient computer system in the “Terminator” series of movies. In the movies, the system, dubbed Skynet, was developed as a defensive system by the U.S. military. The hope was that Skynet would eliminate the possibility of human error and increase reaction times when defending the country against attack. Once the system became sentient, i.e., self-aware, its operators became concerned about its potential power. Like the humans in “2001: A Space Odyssey,” Skynet’s operators decided the best course of action was to deactivate the system. Skynet perceived this action as an attack and decided that the entire human race was a threat. As a result, it launched a nuclear war that killed over three billion people and then proceeded to hunt down and enslave the rest of humanity. The question is: How likely are these scenarios?


The sine qua non for both HAL 9000 and Skynet is self-awareness. It is when self-aware machines are coupled to systems that can cause death and destruction that things get nasty. Not everyone, however, sees this combination as threatening. “For professor Noel Sharkey, the greatest danger posed by AI is its lack of sentience, rather than the presence of it. As warfare, policing and healthcare become increasingly automated and computer-powered, their lack of emotion and empathy could create significant problems.” [“The story of artificial intelligence,” bit-tech, 19 March 2012] Sharkey (@NoelSharkey), like the scientists mentioned at the beginning of this article, would prefer that autonomous AI systems not be developed by any military organization. “There is no way for any AI system to discriminate between a combatant and an innocent,” Sharkey states. “Claims that such a system is coming soon are unsupportable and irresponsible. It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.”


Perhaps the real question we should be asking ourselves is whether machines can become sentient. One individual who believes that machine sentience will be achieved is Ray Kurzweil, an inventor and entrepreneur who now works with Google. Kurzweil calls this expected achievement the singularity. He predicts that three decades from now computers will be a billion times more powerful than the combined brains of humanity. When it comes to the singularity, some people call Kurzweil a visionary and others call him a dreamer. Sharkey can be counted in the latter camp. “Sharkey contends that greater-than-human computer intelligence may never occur, that human brains and computers may be so fundamentally different that they can never be successfully replicated.” He is joined in that opinion by Microsoft co-founder Paul Allen (@PaulGAllen), who also believes such ideas to be “far-fetched.” [“Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011] He writes, “We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. … If the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs.”


Computing power alone does not equate to sentience or even to intelligence. If the singularity does occur, the “intelligence” demonstrated by the self-aware computer may be totally different from human intelligence. The one thing about which pundits seem to agree is that humankind should move cautiously (and with great trepidation) in any effort to create fully autonomous weapons systems. AI systems are programmed to accomplish specific tasks and to learn how to fulfill those tasks in the most effective way possible. They continue to learn 24/7, and they are relentless in the pursuit of their goals. That is what frightens so many people. Will any of the doomsday predictions come about? I don’t know. I do know that we have a few years to determine how best to pursue self-aware systems, and that we should take advantage of that time to ensure that whatever system is developed doesn’t risk humankind’s future. The bit-tech article concludes:

“The story of AI is one of remarkable discoveries and absurd claims, of great leaps forward and abrupt dead ends. What began as a chess algorithm on a piece of paper grew, within a few short years, into an entire field of research, research which went on to spawn important breakthroughs in computer technology and neuroscience. Yet even now, the first sentient machines, thought to be so imminent in those early years of research, appear to be as remote as they were more than half a century ago. … Nevertheless, there remains the possibility that, as technology advances and science discovers new insights into the human mind, we may one day see the creation of the first sentient machine. And whatever the outcome of that first awakening – whether we’re transfigured by it, as Kurzweil believes, or enslaved, as Bill Joy fears – we should be mindful of one thing: that it all began with a quiet game of chess.”

As we develop AI systems, we continue to be in a chess match against ourselves. Let’s hope we win.
