How Much Danger Does AGI Pose?

November 9, 2022

Science fiction authors love to write about artificial general intelligence (AGI) systems that go rogue, conclude that humans are a danger, and decide Earth’s entire population must be wiped out. In recent years, these concerns have moved from science fiction into science. Luminaries such as Stephen Hawking and Elon Musk have expressed concerns for the future. The late Professor Hawking went so far as to tell the BBC, “The development of full artificial intelligence could spell the end of the human race.”[1] More recently, science journalist David Nield (@davidniel) reports, “Researchers Say It’ll Be Impossible to Control a Super-Intelligent AI. … The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.”[2]

Most science fiction scenarios depict AGI systems deliberately setting out to destroy humankind. Émile P. Torres (@xriskology), a historian of global catastrophic risk, thinks it’s just as plausible that AGI could wipe out humankind accidentally. He writes, “We haven’t just been wrong about things we thought would come to pass; humanity also has a long history of incorrectly assuring ourselves that certain now-inescapable realities wouldn’t. … The conventional expectation is that ever-growing computing power will be a boon for humanity. But what if we’re wrong again? Could artificial superintelligence instead cause us great harm? Our extinction? As history teaches, never say never.”[3] He adds, “It seems only a matter of time before computers become smarter than people. This is one prediction we can be fairly confident about — because we’re seeing it already.”

What, Me Worry?

People old enough to remember Mad Magazine also remember the publication’s fictitious cover boy Alfred E. Neuman, whose motto was “What, me worry?” Neuman’s nonchalant attitude towards risk probably reflects how many of us feel about the prospect of AGI. Will it or won’t it pose a risk to humanity? The late Paul Allen, co-founder of Microsoft, doubted an artificial general intelligence system would ever be created. He argued that developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.”[4] And James Kobielus (@jameskobielus), a Big Data Evangelist at IBM, writes, “There is such a thing as watching too much science fiction. If you spend a significant amount of time immersed in dystopic fantasies of the future, it’s easy to lose your grip on the here and now. Or, even when some science fiction is grounded in plausible alternative futures, it’s far too easy to engage in armchair extrapolation in any troublesome direction your mind leads you. One of the most overused science-fiction tropes is that of the super-intelligent ‘robot overlord’ that, through human negligence or malice, has enslaved us all.”[5]

Laurence B. Siegel (@LaurenceBSiegel), the Gary P. Brinson Director of Research at the CFA Institute Research Foundation, thinks along the same lines as Allen. He writes, “Will artificial general intelligence transform the experience of being human, opening up possibilities of knowledge, achievement, and prosperity that we can now barely conceive? Or is AGI an existential threat to humanity, something to be feared and restrictively confined? Erik J. Larson, in a fascinating book entitled The Myth of Artificial Intelligence, says ‘neither.’ I agree. AGI, if it is ever achieved, will be an illusion created by very fast computers, very big data, and very clever programmers. The promise or threat of AGI is hype. Lesser kinds of AI are real and need to be reckoned with.”[6] Journalist Akshay Kumar believes AGI doesn’t present an imminent threat. He notes, “The fear of AI seems to be widespread, as people are unsure of what AI is capable of and the repercussions of implementing it. The reality is that AI is already implemented in multiple parts of our daily life. … The threat of AI might not be the climactic, action-packed threat that we see in films. Instead, it could be a dependence on AI that threatens to lull us into complacency.”[7] I admit complacency is a concern and I agree with Siegel that lesser forms of AI need to be reckoned with.

Despite reassurances that AGI isn’t going to exterminate the human race, tech writer Dave McQuilling (@DaveyMaccy) reports that a recent survey found “43.55% [of respondents] claim to be scared by the prospect of AI becoming sentient.”[8] Sentience technically refers to the capacity to experience feelings; often, however, the term is used to mean self-awareness. It is the latter definition of sentience that has people worried. The staff at Mind Matters reports, “On February 9, [2022] Ilya Sutskever, co-founder of fake text generator OpenAI, made a claim that was frothy even for Twitter: ‘it may be that today’s largest neural networks are slightly conscious.'” Their response to this claim was, “Well, ‘slightly conscious’ is like being ‘slightly pregnant’ or ‘slightly dead.'”[9] More recently, Google fired an employee named Blake Lemoine for insisting its artificial intelligence tool LaMDA (Language Model for Dialogue Applications) was sentient.[10]

Concluding Thoughts

I’m sorry to disappoint; a short article like this one cannot deliver a conclusive answer as to whether AGI poses a threat to humanity. At the same time, concluding that we’ll simply have to wait and see is both unsatisfying and dangerous. As Torres notes, computers are on a rapid trajectory to becoming smarter than human beings. The word of the day needs to be “vigilance.” Torres concludes, “It’s unclear humanity will ever be prepared for superintelligence, but we’re certainly not ready now. With all our global instability and still-nascent grasp on tech, adding in artificial superintelligence (ASI) would be lighting a match next to a fireworks factory. Research on artificial intelligence must slow down, or even pause. And if researchers won’t make this decision, governments should make it for them. Some of these researchers have explicitly dismissed worries that advanced artificial intelligence could be dangerous. And they might be right. It might turn out that any caution is just ‘talking moonshine,’ and that ASI is totally benign — or even entirely impossible. After all, I can’t predict the future. The problem is: Neither can they.”

Footnotes
[1] Rory Cellan-Jones, “Stephen Hawking warns artificial intelligence could end mankind,” BBC News, 2 December 2014.
[2] David Nield, “Researchers Say It’ll Be Impossible to Control a Super-Intelligent AI,” ScienceAlert, 18 September 2022.
[3] Émile P. Torres, “How AI could accidentally extinguish humankind,” The Washington Post, 31 August 2022.
[4] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[5] James Kobielus, “The Bogus Bogeyman of the Brainiac Robot Overlord,” Dataversity, 7 September 2015.
[6] Laurence B. Siegel, “The Sobering Limitations of Artificial Intelligence,” Advisor Perspectives, 25 July 2022.
[7] Akshay Kumar, “Artificial intelligence is unlikely to seriously harm society,” Pipe Dream, 13 September 2021.
[8] Dave McQuilling, “43% Of People Polled Are Scared Of The Potential Of Sentient AI,” Slash Gear, 28 July 2022.
[9] Staff, “Can AI Really Be ‘Slightly Conscious’? Can Anyone?” Mind Matters News, 15 February 2022.
[10] For more background, see Stephen DeAngelis, “I’m sorry, Dave. I’m afraid I can’t do that.” Enterra Insights, 14 June 2022.
