
Is Artificial General Intelligence a Bogeyman?

February 7, 2018


Science fiction writers and respected scientists alike have raised concerns that artificial general intelligence (AGI) systems (i.e., sentient machines) could be developed that surpass humankind’s intellect and eventually rule over or destroy humanity. When Jeopardy! champion Ken Jennings was beaten on a special edition of that show by IBM’s Watson computer (named in honor of Thomas J. Watson, one of IBM’s most beloved CEOs), Jennings’ “Final Jeopardy” response included the phrase, “I, for one, welcome our new computer overlords.” You can watch that moment in the following video.

[Embedded video: Jennings’ “Final Jeopardy” concession to Watson]

The debate begins with the question of whether a sentient machine can be built. Microsoft co-founder Paul Allen (@PaulGAllen) believes there are missing pieces to the puzzle and doubts an artificial general intelligence system will ever be created. He argues that developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.”[1] James Kobielus (@jameskobielus), Big Data Evangelist at IBM, writes, “There is such a thing as watching too much science fiction. If you spend a significant amount of time immersed in dystopic fantasies of the future, it’s easy to lose your grip on the here and now. Or, even when some science fiction is grounded in plausible alternative futures, it’s far too easy to engage in armchair extrapolation in any troublesome direction your mind leads you. One of the most overused science-fiction tropes is that of the super-intelligent ‘robot overlord’ that, through human negligence or malice, has enslaved us all.”[2]

Can Artificial General Intelligence Be Controlled?

Any good risk manager will tell you the worst-case scenario must be explored. In the case of artificial general intelligence, the worst case could be its very existence. John Thornhill (@johnthornhillft) writes, “Scientists reckon there have been at least five mass extinction events in the history of our planet, when a catastrophically high number of species were wiped out in a relatively short period of time. We are possibly now living through a sixth — caused by human activity. But could humans themselves be next? This is the sort of question that preoccupies the staff at the Future of Humanity Institute in Oxford. … So what tops the institute’s list of existential threats? A man-made one: that rapidly advancing research into artificial intelligence might lead to a runaway ‘superintelligence’ which could threaten our survival.”[3]

Luminaries such as Stephen Hawking and Elon Musk have expressed concerns about the future. Professor Hawking went so far as to tell the BBC, “The development of full artificial intelligence could spell the end of the human race.” I’m all for being cautious and can appreciate the concerns raised by legitimate scientists and technologists. One valid concern is the development of autonomous weapons of mass destruction. Developing autonomous weapons, however, is not the same as developing AGI. That’s why I’m a bit more optimistic about the future of artificial intelligence in general.

Will Artificial General Intelligence Become a Reality?

Thomas Hornigold (@physicspod), a physics student at the University of Oxford, writes, “People are still predicting [AGI] will happen within the next 20 years, perhaps most famously Ray Kurzweil. There are so many different surveys of experts and analyses that you almost wonder if AI researchers aren’t tempted to come up with an auto reply: ‘I’ve already predicted what your question will be, and no, I can’t really predict that.’ The issue with trying to predict the exact date of human-level AI is that we don’t know how far is left to go.”[4] What remains, in other words, is achieving Allen’s “unforeseeable and fundamentally unpredictable breakthroughs.”

Back in 2016, Janey Tracey reported, “Several scientists at ECCC’s Evolution of Technology and Sci-Fi panel [asserted] self-aware AI may be very, very far away, or may never happen at all. … According to the panelists, the rumors of sentient AI have been greatly exaggerated.”[5] Kobielus agrees. He explains, “This issue will be with us forever, much the way that UFO conspiracy theorists have kept their article of faith alive in the popular mind since the early Cold War era. In the Hollywood-stoked popular mindset that surrounds this issue, the supposed algorithmic overlords represent the evil puppets dangled among us by ‘Big Brother,’ diabolical ‘technocrats,’ and other villains for whom there’s no Superman who might come to our rescue. Hysteria is not too extreme a word for this popular perspective.” He notes we have more to fear from the “usual human suspects — the world’s militaries and, possibly, terrorist organizations — rather than some 21st century kindred of HAL 9000.”

Summary

Kriti Sharma (@sharma_kriti), the vice president of bots and AI at Sage Group, asserts, “The real AI issues that need to be addressed today are more nuanced, technical and ethical.”[6] She adds, “AI creators need to be mindful of ways to avoid and protect against vulnerabilities that open AI technology to attacks from people, governments and, perhaps most alarming, other AI-driven networks. In practice, this means employing trustworthy code at all times and subjecting AI to rigorous testing that replicates the impact of real-world attacks. AI creators should also consider developing security guidelines for consumers and businesses that interact with AI. Businesses and agencies that deploy AI should focus on proactively informing users of potential threats to their safety and the data’s integrity.” In light of the hysteria surrounding AGI, she insists, “It’s more important than ever for AI creators to be vocal about the technology’s value and jump at chances to address common misgivings about its role in the world.” Kobielus hedges his bets. He concludes, “I’m not discounting the possibility that robot overlords have already conquered other planets in our galaxy. I like to keep an open mind on such matters.”
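
Sharma’s call for testing that replicates real-world attacks can be made concrete. What follows is a minimal, hypothetical sketch in Python of one such test: a toy linear classifier is probed with FGSM-style adversarial perturbations to measure how often small, deliberately hostile changes to its inputs flip its predictions. The model, the data, and the epsilon budgets are all invented for illustration; a real audit would run a full attack suite against the actual deployed model.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deployed model: a fixed linear (logistic-style)
# classifier. The weights and the test data below are invented for
# illustration only.
weights = rng.normal(size=4)
bias = 0.1

def predict(x: np.ndarray) -> int:
    """Return the class (0 or 1) the toy model assigns to input x."""
    return int(weights @ x + bias > 0)

def fgsm_perturb(x: np.ndarray, epsilon: float) -> np.ndarray:
    """Craft a worst-case input inside an epsilon-sized box (FGSM-style).

    For a linear model, the sign of the loss gradient is just
    sign(weights), pointed toward whichever class the model did
    *not* predict."""
    direction = np.sign(weights) if predict(x) == 0 else -np.sign(weights)
    return x + epsilon * direction

def robustness_rate(inputs: np.ndarray, epsilon: float) -> float:
    """Fraction of inputs whose prediction survives the attack."""
    survived = sum(predict(x) == predict(fgsm_perturb(x, epsilon))
                   for x in inputs)
    return survived / len(inputs)

if __name__ == "__main__":
    test_inputs = rng.normal(size=(1000, 4))
    for eps in (0.01, 0.1, 0.5):
        print(f"epsilon={eps}: {robustness_rate(test_inputs, eps):.1%} "
              f"of predictions survive a worst-case perturbation")

Even a toy harness like this illustrates the point: robustness can be quantified, and the survival rate falls as the attacker’s budget (epsilon) grows.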


Footnotes
[1] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[2] James Kobielus, “The Bogus Bogeyman of the Brainiac Robot Overlord,” Dataversity, 7 September 2015.
[3] John Thornhill, “Artificial intelligence: can we control it?” Financial Times, 14 July 2016 (subscription required).
[4] Thomas Hornigold, “When Will We Finally Achieve True Artificial Intelligence?” SingularityHub, 1 January 2018.
[5] Janey Tracey, “Scientists Say Sentient Artificial Intelligence Will Likely Never Happen,” Outer Places, 9 April 2016.
[6] Kriti Sharma, “Everybody calm down about artificial intelligence,” Mashable, 19 September 2017.
