Can Machine Intelligence Emerge from a Single Algorithm?

June 5, 2013

Daniela Hernandez entices us into her Wired article about Stanford computer science professor Andrew Ng with this statement, “There’s a theory that human intelligence stems from a single algorithm.” [“The man behind the Google Brain: Andrew Ng and the new AI,” 8 May 2013] If the theory is true, then scientists could be closer than we think to creating a deep-learning machine that eventually develops sentience. Hernandez writes:

“The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks. About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. ‘For the first time in my life,’ Ng says, ‘it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime’.”

What Hernandez and Ng are referring to is often described as artificial general intelligence (AGI) to differentiate it from AI systems that can learn and think, but only in limited ways and for specific functions. AGI is the Holy Grail for many scientists working in the field. Hernandez’s article received a lot of attention in the blogosphere, in part because people get excited when new, easy-to-understand possibilities are presented to them. Ng explained to Hernandez that traditional approaches to AGI have generally assumed that the greatest challenge to achieving it would be dealing with the complexity of the mind. Hernandez writes:

“In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT’s Marvin Minsky called ‘The Society of Mind.’ To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.”

If the theory is true, and a single algorithm that promotes deep learning can produce the complexity required to achieve what Ray Kurzweil calls “the singularity,” the feat doesn’t appear quite so insurmountable. That is what is getting people excited. Researchers have known for some time that simple algorithmic instructions can lead to complex computer behaviors in limited experiments. More recent research demonstrates that these kinds of simple instructions can be used to deal with a much broader spectrum of challenges than previously thought.

For example, Alexander Wissner-Gross, a physicist at Harvard University and a fellow at the Massachusetts Institute of Technology Media Lab, and his colleague Cameron Freer, a post-doctoral fellow at MIT’s Computer Science and Artificial Intelligence Laboratory, have developed a software program called Entropica, which they label “Sapient Software.” George Dvorsky believes, “Wissner-Gross’s work has serious implications for AI. And in fact, he says it turns conventional notions of a world-dominating artificial intelligence on its head.” [“How Skynet Might Emerge From Simple Physics,” Sentient Developments, 11 May 2013] Dvorsky’s “Skynet” reference is to the Terminator movie series, in which an AGI system (i.e., Skynet) eventually gets so smart that it takes control of the world and starts killing humans. Wissner-Gross’s work implies that the desire for control actually leads to intelligence, not the other way around. As Dvorsky writes, “From the rather simple thermodynamic process of trying to seize control of as many potential future histories as possible, intelligent behavior may fall out immediately.” To learn more about this work, read my post entitled Intelligence from Chaos. The bottom line is that complexity has been shown to emerge from simplicity.
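Entropica’s code has not been published, so any concrete rendering of the idea is necessarily speculative. Still, the mechanism Dvorsky describes is simple enough to sketch. In the toy Python below (entirely my own illustration), a hypothetical agent scores each possible move by how many distinct future positions it would leave reachable, a crude stand-in for the causal path entropy that Wissner-Gross and Freer formalize:

```python
# Toy sketch only: Entropica's implementation is not public, and this
# reachable-state count is merely a crude proxy for causal path entropy.
# The agent lives on a bounded line and prefers whichever move keeps
# the largest number of future histories open.

ACTIONS = (-1, 0, 1)   # step left, stay put, step right
LOWER, UPPER = 0, 10   # walls of the one-dimensional world

def clamp(pos):
    """Keep a position inside the walls."""
    return min(max(pos, LOWER), UPPER)

def reachable_states(pos, horizon):
    """All positions reachable from `pos` within `horizon` steps."""
    states = {pos}
    for _ in range(horizon):
        states = {clamp(s + a) for s in states for a in ACTIONS}
    return states

def entropic_move(pos, horizon=4):
    """Choose the action that keeps the most future positions reachable."""
    return max(ACTIONS, key=lambda a: len(reachable_states(clamp(pos + a), horizon)))

# The agent starts pinned near a wall and is given no goal whatsoever.
pos = 1
for _ in range(6):
    pos = clamp(pos + entropic_move(pos))
print(pos)  # 4 -- off the wall, near the middle of the world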

“Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain. This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence. ‘I’ve seen a surprisingly large gulf between the engineers and the scientists,’ Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build. What’s more, scientists often felt they ‘owned’ the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons. But, now, thanks to Ng and others, this is starting to change. ‘There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,’ says Thomas Insel, the director of the National Institute of Mental Health.”
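To see why the single-algorithm idea excites engineers, it helps to contrast it concretely with Minsky’s thousands of specialized modules. The following is a minimal sketch of my own (a toy, not anything resembling the actual Google Brain system) in which one generic update rule, applied identically at every layer of a small network, learns XOR, a function no single layer can represent:

```python
import numpy as np

# Toy illustration of the deep-learning recipe: a stack of identical
# layers, all trained by the same simple rule (gradient descent),
# with no hand-built modules for separate skills.

rng = np.random.default_rng(0)

# XOR: a function no single-layer network can represent,
# but one that a two-layer stack learns easily.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights and biases. Nothing distinguishes one layer
# from the other except its position in the stack.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

lr = 0.2
for _ in range(5000):
    # Forward pass: every layer applies the same kind of transformation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: every layer applies the same learning rule,
    # plain gradient descent on the logistic loss.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approximately [0. 1. 1. 0.]
```

Nothing in that code “knows about” XOR. Feed the same stack different data and the same rule learns a different task, which is precisely the appeal of a general-purpose learning algorithm.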

Frankly, the debate between engineers and scientists may be a red herring. If deep learning does lead to AGI, whatever machine intelligence does emerge will be something unique and different. Hernandez notes that, wherever this journey may lead, deep learning is the first step. The so-called “Google Brain” project received a lift (at least in publicity) when Ray Kurzweil joined the Google team. As noted above, Kurzweil is a long-time true believer in the singularity. Not everyone, however, is a believer. “Miguel Nicolelis, a top neuroscientist at Duke University, says computers will never replicate the human brain and that the technological Singularity is ‘a bunch of hot air’.” [“The Brain Is Not Computable,” by Antonio Regalado, MIT Tech Review, 18 February 2013] “The brain is not computable,” he told Regalado, “and no engineering can reproduce it.” Nicolelis calls Kurzweil’s vision of recreating the human brain “sheer bunk.”

True or not, there is still a lot to be gained from deep-learning machines. Advances in both mathematics and computing power have “normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction.” [“Ten Breakthrough Technologies 2013,” by Robert D. Hof, MIT Tech Review, 23 April 2013] Hof continues:

“Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search. Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, ‘deep learning has reignited some of the grand challenges in artificial intelligence’.”

The most important characteristic of deep learning, Ng told Hernandez, is that the software learns on its own. “With Deep Learning, Ng says, you just give the system a lot of data ‘so it can discover by itself what some of the concepts in the world are’.” We have all experienced the thrill of self-discovery. Although I’m not sure that machines will ever “feel” that thrill, they are certainly becoming more capable of learning on their own. The “single algorithm” theory may not lead to AGI, but it should help get us closer to that goal.
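As a closing illustration of what “discover by itself” can mean in code, consider an autoencoder, the kind of unsupervised building block that deep-learning researchers (including the Google Brain team) stack at far larger scale. It is trained only to reproduce its unlabeled input, and the bottleneck forces it to find the data’s hidden structure on its own. The sketch below is my own toy example, not Ng’s code:

```python
import numpy as np

# Toy unsupervised learning: a one-unit linear autoencoder. The network
# is never shown labels; its only training signal is how well it can
# reconstruct its own input after squeezing it through a bottleneck.

rng = np.random.default_rng(1)

# Unlabeled data: 200 four-dimensional points that secretly vary along
# a single hidden direction, plus a little noise.
t = rng.uniform(-1, 1, (200, 1))
X = t @ np.array([[1.0, -2.0, 0.5, 3.0]]) + 0.05 * rng.normal(size=(200, 4))

# Encoder and decoder weights squeeze the data through one hidden unit.
W_enc = rng.normal(0, 0.1, (4, 1))
W_dec = rng.normal(0, 0.1, (1, 4))

lr = 0.01
for _ in range(2000):
    code = X @ W_enc        # compress: four numbers become one "concept"
    recon = code @ W_dec    # reconstruct the input from that concept
    err = recon - X         # reconstruction error is the only feedback
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

# The lone hidden unit ends up tracking the hidden factor almost perfectly,
# even though the network was never told that such a factor exists.
print(abs(np.corrcoef(t.ravel(), (X @ W_enc).ravel())[0, 1]))  # roughly 1.0
```

The network is never told that a single factor generates the data; being forced to compress and reconstruct is enough to make it recover that “concept” unaided.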
