
Artificial General Intelligence: Destroyer of Worlds?

August 1, 2023


The recent movie “Oppenheimer,” a film about J. Robert Oppenheimer’s role as scientific director of the Manhattan Project, which developed the world’s first nuclear weapons, depicts him quoting Hindu scripture, the Bhagavad Gita, shortly after the first atomic bomb was detonated in a 1945 test. It’s not clear he actually quoted that scripture in 1945. What is certain is that he cited it some twenty years later. Journalist Hillary Busis reports, “Oppenheimer appeared in a 1965 NBC News documentary called The Decision to Drop the Bomb. ‘We knew the world would not be the same,’ he said onscreen. ‘A few people laughed; a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita; Vishnu is trying to persuade the prince that he should do his duty, and to impress him, takes on his multiarmed form and says, “Now I am become Death, the destroyer of worlds.” I suppose we all thought that, one way or another.'”[1] As computer scientists continue their efforts to develop more capable artificial intelligence (AI) systems, I’m not sure they believe they are creating a system that will become Death, the destroyer of worlds; however, many critics believe just that.


The Dangers of Artificial General Intelligence


Author Evgeny Morozov writes, “In May [2023], more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. ‘Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ the signatories warned. This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.”[2] The advanced AI systems causing such concern are known primarily as Artificial General Intelligence (AGI) systems. As Morozov notes, “The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.”


Morozov admits AGI systems don’t yet exist and may never exist. Does that mean we needn’t worry about such systems? That would be unwise. Journalist Steve Rose interviewed several experts who described various ways AGI could adversely affect the human race.[3] One of those experts was Max Tegmark, an AI researcher at the Massachusetts Institute of Technology. He told Rose that, not surprisingly, the worst-case scenario is human annihilation. He stated, “If we become the less intelligent species, we should expect to be wiped out.” Calling AGI “a species” implies it is a living, sentient creature. At the moment, that’s a bit of a stretch. The late Paul Allen, co-founder of Microsoft, doubted that an artificial general intelligence system would be created anytime soon. He argued that developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.”[4] The moment computers become sentient and smarter than humans has often been referred to as the singularity. Allen concluded, “Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.”


Tegmark agrees that there is much uncertainty about the future of AGI and its consequences. He told Rose, “Any scenario has to come with the caveat that, most likely, all the scenarios we can imagine are going to be wrong.” Nevertheless, he went on to explain, “In many cases, we have wiped out species just because we wanted resources. We chopped down rainforests because we wanted palm oil; our goals didn’t align with the other species, but because we were smarter they couldn’t stop us. That could easily happen to us. If you have machines that control the planet, and they are interested in doing a lot of computation and they want to scale up their computing infrastructure, it’s natural that they would want to use our land for that. If we protest too much, then we become a pest and a nuisance to them. They might want to rearrange the biosphere to do something else with those atoms — and if that is not compatible with human life, well, tough luck for us.”


Another expert interviewed by Rose, Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute, agrees that direct annihilation by AGI is a possibility; however, he believes side effects could be just as deadly. “[AGI] could want us dead,” he said, “but it will probably also want to do things that kill us as a side-effect.” He explained, “It’s probably going to want to do things that kill us as a side-effect, such as building so many power plants that run off nuclear fusion — because there is plenty of hydrogen in the oceans — that the oceans boil. How would AI get physical agency? In the very early stages, by using humans as its hands. … We are rushing way, way ahead of ourselves with something lethally dangerous. We are building more and more powerful systems that we understand less well as time goes on. We are in the position of needing the first rocket launch to go very well, while having only built jet planes previously. And the entire human species is loaded into the rocket.”


A third expert, Ajeya Cotra, a senior research analyst at Open Philanthropy, also believes humans will unwittingly contribute to their own demise. She explained, “The trend will probably be towards these models taking on increasingly open-ended tasks on behalf of humans, acting as our agents in the world. The culmination of this is what I have referred to as the ‘obsolescence regime’: for any task you might want done, you would rather ask an AI system than ask a human, because they are cheaper, they run faster and they might be smarter overall. In that endgame, humans that don’t rely on AI are uncompetitive. … In that world, it becomes easier to imagine that, if AI systems wanted to cooperate with one another in order to push humans out of the picture, they would have lots of levers to pull: they are running the police force, the military, the biggest companies; they are inventing the technology and developing policy.”


Concluding Thoughts


A common thread in many worst-case AGI scenarios is that humans wittingly or unwittingly aid in humankind’s eventual annihilation. That being the case, controlling the future of AGI requires regulating and monitoring the humans creating the technology. Recognizing this truth, some of the biggest names in AI research have, perhaps surprisingly, called for more regulation. Journalists Michael Calore and Lauren Goode observe, “The idea that machine intelligence will one day take over the world has long been a staple of science fiction. But given the rapid advances in consumer-level artificial intelligence tools, the fear has felt closer to reality these past few months than it ever has before. The generative AI craze has stirred up excitement and apprehension in equal measure, leaving many people uneasy about where the future of this clearly powerful yet still nascent tech is going. … For example, the nonprofit group Center for AI Safety released a short statement warning that society should be taking AI as seriously as an existential threat as we do nuclear war and pandemics.”[5]


Caution is a good thing; however, we should not let concerns about AGI blind us to the benefits of AI systems that can improve business operations, use resources more wisely, and make life better. Not all AI systems lead to Terminator futures or become Death, the destroyer of worlds.


Footnotes
[1] Hillary Busis, “‘Now I Am Become Death’: The Story Behind Oppenheimer’s Indelible Quote,” Vanity Fair, 21 July 2023.
[2] Evgeny Morozov, “The True Threat of Artificial Intelligence,” The New York Times, 30 June 2023.
[3] Steve Rose, “Five ways AI might destroy the world: ‘Everyone on Earth could fall over dead in the same second’,” The Guardian, 7 July 2023.
[4] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[5] Michael Calore and Lauren Goode, “AI Won’t Wipe Out Humanity (Yet),” Wired, 1 June 2023.
