
Will Artificial General Intelligence Lead to Humanity’s Doom?

April 9, 2021


In the classic science fiction movie 2001: A Space Odyssey, an emotionless artificial general intelligence (AGI) computer named HAL 9000 murders crewmember Frank Poole after learning Frank was planning to disconnect it. Following the murder, HAL senses that Dave Bowman, the sole remaining crewmember, is emotionally distraught and says, “Look, Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.” On screen, HAL was followed by another emotionless AGI system, Skynet, in the Terminator series. When the self-aware Skynet learns humans are trying to deactivate it, it retaliates by launching a nuclear attack against humankind. These depictions of dispassionate efforts by AGI systems to eliminate humans remain a source of both fascination and dread. Back in 2014, the late physicist Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” A few years later, in 2017, entrepreneur Elon Musk stated, “AI is a fundamental risk to the existence of human civilization.”


Are sentient machines a risk?


Both HAL and Skynet are depicted as sentient (i.e., self-aware) systems with intelligence superior to that of human beings. The argument is that, once such systems are created, humans will matter as little to super-intelligent machines as cockroaches do to humans (i.e., snuffing them out is no big deal). Concerning Hawking’s and Musk’s warnings, Elaine Garcia (@ela1negarc1a), a senior program leader at London School of Business and Finance, writes, “These are concerning warnings for the future development of AI that we should not ignore. For the majority of us, using AI tools such as our virtual assistants or allowing our email to be filtered automatically for spam does not seem to pose a risk that will result in the destruction of the human race. In reality, however, we are currently seeing only very early forms of AI and in fact what is better defined as ‘machine learning.’ … Where the fears of the future of AI are more prevalent is where more advanced AI is being developed.”[1] She adds, “What we need to consider in order to better understand the context in which Hawking and Musk have given their warnings relates to a more advanced form of AI. For example, neuroevolution is a form of AI in which evolutionary algorithms are used to create artificial neural networks. In this type of AI, systems are able to evolve, mutate and develop themselves in much the same way as the human brain. Here AI could therefore easily evolve into something that has greater cognitive abilities than the human brain. This therefore is perhaps more concerning as the end point of this development is unknown.” Where many experts would disagree with Garcia is with her assertion that AI systems could “easily evolve” into something threatening. There is nothing easy about developing AGI.
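Garcia’s mention of neuroevolution can be made a little more concrete with a toy example. The sketch below is purely illustrative and not drawn from any of the cited articles: it evolves the weights of a tiny neural network by mutation and selection rather than gradient-based training, and the XOR task, the 2-4-1 network shape, and all hyperparameters are assumptions chosen for brevity.

# Minimal neuroevolution sketch (illustrative only; task, network shape, and
# hyperparameters are assumptions, not taken from the article).
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # weights and biases of a 2-4-1 net

def forward(params, x):
    # Run a 2-4-1 network whose weights are packed into one flat vector.
    w1 = params[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = params[2 * N_HIDDEN:3 * N_HIDDEN]
    w2 = params[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = params[-1]
    h = np.tanh(x @ w1 + b1)                    # hidden layer
    return 1 / (1 + np.exp(-(h @ w2 + b2)))     # sigmoid output

def fitness(params):
    # Higher is better: negative mean squared error on the XOR task.
    preds = forward(params, X)
    return -np.mean((preds - y) ** 2)

# Evolutionary loop: keep the best networks, refill the population with
# mutated copies, and repeat. No gradients are computed anywhere.
POP, ELITE, GENS, SIGMA = 50, 5, 300, 0.3
population = rng.normal(0, 1, size=(POP, N_PARAMS))

for gen in range(GENS):
    scores = np.array([fitness(p) for p in population])
    elite = population[np.argsort(scores)[-ELITE:]]              # best networks survive
    children = elite[rng.integers(0, ELITE, POP - ELITE)]        # clone random elites
    children = children + rng.normal(0, SIGMA, children.shape)   # mutate the clones
    population = np.vstack([elite, children])

best = population[np.argmax([fitness(p) for p in population])]
print("XOR predictions:", np.round(forward(best, X), 2))

The point of the toy loop is only to show the mechanism Garcia describes: candidate networks mutate, the better performers survive, and the process repeats without anyone specifying how the solution should look.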


The point at which an AGI system develops into something having “greater cognitive abilities than the human brain” has been called the “singularity.” The late Paul Allen, co-founder of Microsoft, and computer scientist Mark Greaves, currently Technical Director for Analytics at Pacific Northwest National Laboratory, explained nearly a decade ago, “The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity.”[2] At least for the time being, the question should not be “Are sentient machines a risk?” but “Are sentient machines possible?”


Creating a sentient machine


Daniel Shapiro (@Lemay_ai), Chief Technology Officer and co-founder at Lemay.ai, writes, “Sci-fi and science can’t seem to agree on the way we should think about artificial intelligence. Sci-fi wants to portray artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car.”[3] He adds, “In many cases, it is not so clear why artificial intelligence works so well. The engineering got a bit ahead of the science, and we are playing with tools we don’t fully understand. We know they work, and we can test them, but we don’t have a good system for proving why things work.” If there is a cautionary tale about AGI, it’s in Shapiro’s assertion that “we are playing with tools we don’t fully understand.”


Dr. Anurag Yadav, a Consultant Radiologist in India, insists there is much more to creating a genuine AGI than simply passing a Turing Test. She writes, “[AGI is] not about fooling humans but more about the machines generating human cognitive capacity. Human intelligence is considered the highest form of intelligence and we perceive only that as the true measure of an intelligent machine, even when there are different kinds of intelligence seen in Nature. … Turing envisages the machines of the future to not only be logical, but also intuitive — be kind, resourceful, beautiful, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as a man, do something really new!”[4]


There have been a few movies involving AGI systems and romance; however, falling in love is not what makes people fear sentient machines. Bennie Mols (@BennieMols), a science and technology writer, asked neuroscientist Christof Koch, chief scientist and president of the Allen Institute for Brain Science, “Can AI become conscious?” In spite of Paul Allen’s skepticism, Koch insists, “On a philosophical level, the [integrated information] theory says that consciousness is not unique to humans, but that any system with non-zero integrated information will feel like something. … No doubt sooner or later we will get machines that are at least as intelligent as humans are. However, we have to distinguish intelligence from consciousness. Although intelligence and consciousness often go hand in hand in biological creatures, they are two conceptually very different things. Intelligence is about behavior. For example: what do you do in a new environment in order to survive? Consciousness is not about behavior; consciousness is about being. … If machines at some point become conscious, then there will be ethical, legal, and political consequences. So, it matters a great deal whether or not a machine is conscious.”[5]


The way ahead


Like Paul Allen, I doubt conscious machines will be created in the near future, if ever. Does that mean there is no cause for concern? No. We should always be cautious when dealing with systems we don’t fully understand. Journalist Jamilla Kone notes, “General artificial intelligence seems to stump even the greatest minds with some scientist believing that the development of general artificial intelligence will never happen due to the sheer complexity of it, this is somewhat due to our weak understanding in the general intelligence of ourselves. We have only scratched the surface of a block of steel when it comes to understanding human consciousness and intelligence, so the potential key to having a chance at developing general artificial intelligence is the complete and in depth understanding of how our own intelligence works.”[6] That’s why it’s so difficult to write intelligently about the future of AGI. Like Koch, I believe machines will eventually be created that are at least as intelligent as humans; however, no one can really say whether such machines will be conscious.


Sofia Gallarate (@GallarateSofia), a journalist and creative strategist, writes, “AGI is where the big fear lies because it is based on the hypothesis that its cognitive computing abilities and intellectual capacities would reach human ones and eventually surpass them. … So what can be done in order for us to stop fearing AI and its unavoidable increasing presence within our social, technological and economical systems? The answer is simple: we need to learn more about AI — its functions and capabilities, and how it is set to develop and increasingly infiltrate our surroundings. Fear comes from a lack of knowledge and a state of ignorance. The best remedy for fear is to gain knowledge.”[7] Scientists will continue to pursue AGI or, at least, try to gain a better understanding of how to truly make machines think. What we need to fear is not the machines themselves but how those machines can (or will) be used.


Footnotes
[1] Elaine Garcia, “Will artificial Intelligence really spell the end of the human race?,” Information Management, 15 November 2019 (out of print).
[2] Paul Allen and Mark Greaves, “Paul Allen: The Singularity Isn’t Near,” MIT Technology Review, 12 October 2011.
[3] Daniel Shapiro, “Can Artificial Intelligence ‘Think’?,” Forbes, 23 October 2019.
[4] Anurag Yadav, “Are A Conscious Artificial Intelligence & Smart Robots Possible?,” BusinessWorld, 7 December 2020.
[5] Bennie Mols, “Can AI Become Conscious?,” Communications of the ACM, 12 May 2020.
[6] Jamilla Kone, “General Artificial Intelligence and Its Future,” AI Daily, 9 May 2020.
[7] Sofia Gallarate, “What is artificial intelligence? And more importantly, should we fear it?,” Screen Shot, 19 June 2020.
