
The Road to Singularity: A Dead End?

April 12, 2013


As those familiar with artificial intelligence (AI) or artificial general intelligence (AGI) know, there continues to be a lot of debate about whether the singularity will be achieved. The AI singularity is defined as the moment when machines become sentient and (presumably) smarter than the humans who created them. The word has its roots in both mathematics and the physical sciences (specifically, cosmology), and both uses are worth examining. A mathematical singularity is a point at which a function is undefined or no longer behaves in a predictable way. In cosmology, the term refers to a phenomenon, such as the big bang or the interior of a black hole, so extreme that no useful information can be recovered from it. The common thread in all three definitions is that it is impossible to predict anything useful about a singularity or its consequences. A singularity changes everything.

One of the most vocal proponents of the AI singularity is Ray Kurzweil, a very smart and very innovative man. As smart as he is, however, not everyone agrees with him when it comes to his predictions about the singularity (which he believes will take place somewhere around mid-century).
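To make the mathematical sense concrete, consider the simple function f(x) = 1/x (an illustration of ours, not an example used in the sources cited here):

```latex
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty, \qquad \lim_{x \to 0^{-}} f(x) = -\infty
```

Everywhere except x = 0 the function is smooth and predictable; at the singularity itself it has no value at all, and its behavior on either side of the point diverges in opposite directions, so nothing useful can be said about the point from nearby observations.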


Yann LeCun, a professor of computer science and neural science at New York University, claims, “In terms of computational ability, even the most-powerful computers in the world are just approaching that of an insect.” [“A Rat is Smarter Than Google,” by Sean Captain, TechNewsDaily, 4 June 2012] He went on to say, “I would be happy in my lifetime to build a machine as intelligent as a rat.” Paul G. Allen and Mark Greaves accept the possibility that an artificial human brain could be built in the future, but they insist that such a development is not inevitable. “An adult brain is a finite thing,” they write, “so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.” [“The Singularity Isn’t Near,” Technology Review, 12 October 2011]


Tom Hartsfield is another naysayer when it comes to the singularity. He notes that, for decades, AI researchers have been predicting that AGI (or something close to the singularity) will arrive in the near future. Unfortunately, the “near future” seems to be a moving target. He writes:

“Many notable AI researchers, as well as psychologists, writers and others who studied the progress of computer intelligence throughout the 1960s and 1970s believed that AI was only 15-35 years away. As these predictions began to fail, one after another, the research field of AI largely withered. It seems that AI is always predicted, by most experts, to be something like 15-25 years away from the present. Current experts often peg 2030 as a rough date for achieving artificial intelligence: just less than 20 years. Will we have AI anytime soon? Based on the history of the field, and of predictions about it, the short answer is: I’m afraid we can’t do that.” [“I’m Sorry Dave, I’m Afraid I Can’t Do That,” Real Clear Science, 25 March 2013]

Stuart Armstrong, a research fellow at the Future of Humanity Institute at the University of Oxford, agrees that “AI predictions are very hard to get right.” [“Predicting the future of artificial intelligence has always been a fool’s game,” by Mark Piesing, Wired, 30 March 2013] Armstrong should know. Piesing reports, “Armstrong has recently analyzed the Future of Humanity Institute’s library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the ‘Turing test’ by 2000.” His conclusion: “Timeline predictions … are particularly worthless.” Hartsfield asserts, “The closer you come to reality, the harder the next step is. While AI has made dramatic progress, it will be a very, very long time before we have conscious computer companions (or, perhaps, living in fear of our machine overlords).”


Piesing reports that Robin Hanson, an associate professor at George Mason University and chief scientist at Consensus Point, notes that some experts in the AI field believe that “without any acceleration it might take between 200 and 400 years to achieve the goal.” Some would even argue that progress towards achieving it is actually “decelerating.” Armstrong is a bit more optimistic than that. He predicts “that [AI is] likely to happen sometime in the next five to 80 years.” That’s a pretty broad spread of years. Clearly, however, he is leaning towards the longer end of that prediction. He states, “I would give a 90 percent chance [it will happen] in the next two centuries, although there is always the chance that someone could come up with an AI algorithm tomorrow.” In another post, Hartsfield appears to agree with Armstrong. After explaining some mathematical concepts, like the logistic curve, a curve that initially looks “like an exponential curve, but [things] level off when realistic constraints on growth begin to take effect,” he writes:

“Given how far we are from understanding even a simple worm’s brain, much less the human brain, this leveling off will almost certainly occur before our computing power swells to the necessary point to create an artificial human mind. Maybe some day we will succeed in creating AI, but don’t hold your breath or freeze your brain. Technological singularity will not bring it any time soon.” [“There Will Be No Technological Singularity,” Real Clear Science, 26 March 2013]
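The logistic-versus-exponential contrast Hartsfield describes is easy to see numerically. Here is a minimal Python sketch; the growth rate, starting value, and carrying capacity are illustrative assumptions of ours, not figures from the article:

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Unconstrained exponential growth: x(t) = x0 * exp(r * t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: tracks the exponential early on, then levels off at the carrying capacity K."""
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

Early on the two curves are nearly indistinguishable (about 7.4 versus 7.0 at t = 4 with these parameters), but by t = 20 the exponential has passed 22,000 while the logistic sits just under its ceiling of 100. That is precisely why, as Hartsfield warns, an apparently exponential trend is a weak basis for predicting a singularity.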

Hartsfield raises the example of the “simple worm’s brain” in that post because, in an earlier post, the worm was the focus of his attention. [“Attention Ray Kurzweil: We Can’t Even Build an Artificial Worm Brain,” Real Clear Science, 5 March 2013] He wrote:

“In the human brain, 100 billion neurons are connected by 100 trillion synapses. And, really, this staggeringly complex structure is only the beginning. (Consider that there may be roughly one hundred thousand trillion electrical signals traversing the brain in one second.) Can we build a working model of our brain, or is it still too formidable for complete scientific study? The key insight to answer this question comes from looking at something far, far simpler. Transparent and only one millimeter long, C. elegans worms are used in thousands of biology experiments as a ubiquitous invertebrate ‘lab rat.’ Each worm has exactly 302 neurons (connected by roughly 5,000-7,000 synapses). We know this because many scientists have counted the number of cells; each worm always contains 959 cells (hermaphrodite) or 1031 cells (male, which also contains 81 extra neurons in its tail). C. elegans was the first animal to have its genome sequenced. We can freeze it in liquid nitrogen and revive it. We can track it in 3-D. You can browse a library of its complete genome, its proteome (like the genome, but proteins) and even its whole nervous system on the internet. Science has studied this organism more thoroughly than any other — with the possible exception of the fruit fly and the laboratory mouse — in the entire animal kingdom. If we can get inside any mind in nature, this would be the one.”

His point, of course, is that even after a decade of research we “are not even close” to being able to predict the behavior of the worm. “In fact, it takes a computer with a billion transistors to make a weak, incorrect guess at what a worm with 302 brain cells will do.” Hartsfield’s bottom line is this:
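The scale gap Hartsfield is pointing at can be quantified directly from the figures he cites. A back-of-the-envelope calculation (ours, using the numbers quoted above, with the generous upper end of the worm's synapse range):

```python
# Figures quoted by Hartsfield above.
worm_neurons = 302
worm_synapses = 7_000                  # upper end of the 5,000-7,000 range
human_neurons = 100_000_000_000        # 100 billion
human_synapses = 100_000_000_000_000   # 100 trillion

print(f"neuron ratio:  {human_neurons / worm_neurons:,.0f}x")    # ~331,125,828x
print(f"synapse ratio: {human_synapses / worm_synapses:,.0f}x")  # ~14,285,714,286x
```

The human brain has hundreds of millions of times more neurons, and roughly ten billion times more synapses, than the one nervous system we have mapped completely and still cannot simulate.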

“Science always has to start at the very simplest level and work its way up. We build on our previous knowledge and eventually arrive at enormous achievements. Further research on this topic is absolutely important; some day we may very well be able to model the brain of a worm and even a human. However, at the present, there is simply no way that a comprehensive human brain simulation will be feasible in the near future.”

Even the naysayers don’t want AGI researchers to stop their work, nor do they want their skepticism to dampen the enthusiasm that proponents of the singularity have for achieving that goal. In fact, they believe that bringing a little realism to the table can spare both the scientific community and the general public the disappointment that comes from unfulfilled expectations. At the moment, we really don’t know whether the road to the singularity is a dead end. We’ll have to travel down it to see.
