
Artificial Brains: We’re Not There Yet

July 6, 2012


Yann LeCun, a professor of computer and neural science at New York University, claims that, “in terms of computational ability, even the most-powerful computers in the world are just approaching that of an insect.” [“A Rat is Smarter Than Google,” by Sean Captain, TechNewsDaily, 4 June 2012] He went on to say, “I would be happy in my lifetime to build a machine as intelligent as a rat.” LeCun and his colleague Josh Tenenbaum, a professor of computational cognitive science at MIT, aren’t dissing the “amazing things that Google can do, like giving us driving or walking directions nearly instantaneously”; they are simply pointing out that what the human brain can do is even more amazing. Tenenbaum claims that the “basic kind of intelligence” used in Google applications amounts to simple planning. “That’s very easy,” he told Captain. “It’s not even called AI anymore. It’s just called Google.” Captain continues:

“Real intelligence, they said, is not just memorizing but using what you’ve learned to figure out situations you’ve never experienced. … The two professors are nowhere near that. LeCun, for example, is experimenting with a driving robot that tries to identify the objects around it. He showed a video of what the robot sees — how it labels objects like people, trees and roads. It generally gets them right, but often calls trees people, a patch of dirt water, a lamppost a building. To show what AI researchers are up against, LeCun described the immensity of the human brain based on the latest, albeit very rough, estimates: 100 billion neurons make from 1,000 to 10,000 connections with other neurons and use those connections up to 100 to 1,000 times a second (a pretty high estimate). That’s perhaps a quintillion — 1,000,000,000,000,000,000 — operations happening every second in everyone’s head. ‘But the power of supercomputers increases exponentially,’ said LeCun, estimating that they will reach that ability in ‘somewhere between 30 and a 100 years. Then we wait another 10 or 20 years, and it fits in your smartphone. So then your smartphone is smarter than you.'”
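LeCun’s quintillion figure is easy to reproduce. The short sketch below simply multiplies the upper ends of the ranges quoted above; the inputs are his rough estimates, not measurements:

```python
# Back-of-the-envelope reproduction of LeCun's brain "operations per second" estimate,
# using the upper ends of the ranges he cites (rough estimates, not measurements).
neurons = 100e9                    # ~100 billion neurons
connections_per_neuron = 10_000    # 1,000 to 10,000 connections each (upper bound)
uses_per_second = 1_000            # each connection used 100 to 1,000 times per second (upper bound)

operations_per_second = neurons * connections_per_neuron * uses_per_second
print(f"{operations_per_second:.0e} operations per second")  # 1e+18, i.e., about a quintillion
```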

Ray Kurzweil, a well-known innovator and futurist, is much more optimistic about the future of artificial intelligence. He claims that “artificial intelligence has progressed to the point where computers’ reasoning powers should be indistinguishable from human brains by 2029.” [“Computers’ Reasoning Skills to Equal Humans by 2029,” Wall Street Journal, 26 June 2012] Paul G. Allen and Mark Greaves accept the possibility that an artificial human brain could be built in the future, but insist that such a development is not inevitable. “An adult brain is a finite thing,” they write, “so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.” [“The Singularity Isn’t Near,” Technology Review, 12 October 2011] Allen and Greaves are countering arguments proffered by technologists like Kurzweil and artificial intelligence expert Jürgen Schmidhuber. Kurzweil has labeled the moment when a computer acquires self-awareness and becomes smarter than a human the “singularity.” Schmidhuber calls that moment “omega.” Both Kurzweil and Schmidhuber base their predictions of when that might occur on the Law of Accelerating Returns. Schmidhuber gave a very interesting (and entertaining) talk at TEDxLausanne on Jan. 20, 2012; you can view that lecture below.


In his talk, Schmidhuber insists that “a few things can be predicted confidently, such as: soon there will be computers faster than human brains, because computing power will continue to grow by a factor of 100–1000 per decade.” [“When creative machines overtake man,” Kurzweil Blog, 31 March 2012] He realizes that naysayers will insist that even though “computers will be faster than brains” they will still “lack the general problem-solving software of humans, who apparently can learn to solve all kinds of problems!” His response:

“That’s too pessimistic. At the Swiss AI Lab IDSIA in the new millennium we already developed mathematically optimal, learning, universal problem solvers living in unknown environments. That is, at least from a theoretical point of view, blueprints of universal AIs already exist. They are not yet practical for various reasons; but on the other hand we already do have not quite as universal, but very practical brain-inspired artificial neural networks that are learning complex tasks that seemed unfeasible only 10 years ago.”
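To make concrete what “learning” means for the practical systems Schmidhuber mentions, here is a minimal, purely illustrative sketch: a toy perceptron is fit to labeled training examples and then judged on examples it has never seen. The data and the model are invented for this example and are far simpler than the networks Schmidhuber describes:

```python
import random

# Minimal illustration of learning from labeled examples and generalizing to
# unseen data: a single perceptron separating two toy 2-D point clouds.
# (The data and the model are invented for illustration only.)
random.seed(0)

def make_points(n, center, label):
    return [((random.gauss(center, 0.3), random.gauss(center, 0.3)), label) for _ in range(n)]

train = make_points(50, 1.0, 1) + make_points(50, -1.0, -1)
test = make_points(20, 1.0, 1) + make_points(20, -1.0, -1)

w1, w2, b = 0.0, 0.0, 0.0
for _ in range(10):                                    # a few passes over the training set
    for (x1, x2), label in train:
        prediction = 1 if w1 * x1 + w2 * x2 + b > 0 else -1
        if prediction != label:                        # classic perceptron update on mistakes
            w1 += label * x1
            w2 += label * x2
            b += label

correct = sum(1 for (x1, x2), label in test
              if (1 if w1 * x1 + w2 * x2 + b > 0 else -1) == label)
print(f"accuracy on unseen test points: {correct / len(test):.0%}")
```

The point is simply the workflow Schmidhuber describes: the system is never explicitly programmed with the rule; it extracts the regularity from examples and is then evaluated on data it was not trained on.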

Schmidhuber admits that the “learning systems” that have progressed the furthest are those used to identify patterns. He stresses that you don’t have to program such systems because “they learn from millions of training examples, extracting the regularities, and generalizing on unseen test data.” At this point, Schmidhuber knows that critics will say, “Maybe computers will be faster and better pattern recognizers, but they will never be creative!” His response:

“That’s too pessimistic. In my group at the Swiss AI Lab IDSIA, we developed a Formal Theory of Fun and Creativity that formally explains science & art & music & humor, to the extent that we can begin to build artificial scientists and artists. Let me explain it in a nutshell. As you are interacting with your environment, you record and encode (e.g., through a neural net) the growing history of sensory data that you create and shape through your actions. Any discovery (say, through a standard neural net learning algorithm) of a new regularity in the data will make the code more efficient (e.g., less bits or synapses needed, or less time). This efficiency progress can be measured — it’s the wow-effect or fun! A real number. This number is a reward signal for the separate action-selecting module, which uses a reinforcement learning method to maximize the future expected sum of such rewards or wow-effects. Just like a physicist gets intrinsic reward for creating an experiment leading to observations obeying a previously unpublished physical law that allows for better compressing the data. Or a composer creating a new but non-random, non-arbitrary melody with novel, unexpected but regular harmonies that also permit wow-effects through progress of the learning data encoder. Or a comedian inventing a novel joke with an unexpected punch line, related to the beginning of the story in an initially unexpected but quickly learnable way that also allows for better compression of the perceived data.”
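As a rough illustration of the compression-progress idea, consider a drastically simplified sketch: an agent observes a very regular bit stream, a simple predictor gradually learns it, and the intrinsic reward (the “wow-effect”) is modeled as the drop in the number of bits needed to encode each new observation. This is a toy caricature for intuition, not Schmidhuber’s formal theory or his group’s actual implementation:

```python
import math

# Toy caricature of "compression progress as intrinsic reward": the reward is
# the reduction in coding cost as a simple predictor learns a regular stream.
observations = [1] * 10          # a highly regular (hence compressible) bit stream
p = 0.5                          # predictor's current estimate of P(next bit = 1)
learning_rate = 0.3

previous_cost = None
for t, bit in enumerate(observations):
    cost = -math.log2(p if bit == 1 else 1.0 - p)    # bits needed to encode this observation
    if previous_cost is not None:
        reward = previous_cost - cost                # intrinsic reward = compression progress
        print(f"t={t}: coding cost {cost:.2f} bits, intrinsic reward {reward:+.2f}")
    previous_cost = cost
    p += learning_rate * (bit - p)                   # the predictor learns the regularity
```

Notice that the reward shrinks toward zero as the pattern becomes fully learned; in Schmidhuber’s framework, data that is already well compressed yields no further progress, so the action-selecting module is pushed toward new, still-learnable regularities.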

I believe that most neuroscientists agree that we don’t fully understand how the brain functions, so it’s a bit premature to predict that science will be able to duplicate it artificially. Gerhard Adam argues that intelligence is biological; therefore, by definition, a non-biological system can never achieve true intelligence (see my post entitled Intelligence: Artificial or Not). But as LeCun and Tenenbaum imply, if the singularity (or omega) is ever going to be achieved, the place to start is probably by trying to duplicate the 100 billion neurons found in the human brain and the synapses through which their signals pass. “Researchers in Japan have shown that it’s possible to mimic synaptic function with nanotechnology, a breakthrough that could result in not just artificial neural networks, but fixes for the human brain as well.” [“Synthetic synapse could take us one step closer to an artificial brain,” by George Dvorsky, io9, 11 June 2012] Dvorsky continues:

“Synapses are essential to brain function. It’s what allows a neuron to pass an electric or chemical signal to another cell. Its structure is incredibly complex, with hundreds of proteins and other chemicals interacting in a complicated way. It’s because of this that cognitive scientists and artificial intelligence researchers have had great difficulty trying to simulate this exact function. But a new study published in Advanced Functional Materials has shown that it may be possible to reproduce synaptic function by using a single nanoscale electrochemical atomic switch. Japanese researchers developed a tiny device that has a gap bridged by a copper filament under a voltage pulse stimulation. This results in a change in conductance which is time-dependent — a change in strength that’s nearly identical to the one found in biological synaptic systems. The inorganic synapses could thus be controlled by changes in interval, amplitude, and width of an input voltage pulse stimulation.”
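The key behavior Dvorsky describes, a conductance change whose persistence depends on the timing of the input pulses, can be caricatured with a very simple model: each pulse strengthens the artificial synapse a little, and the strength decays between pulses. The sketch below is a qualitative toy, not the device physics reported in the Advanced Functional Materials paper; the gain and decay constants are arbitrary illustrative values:

```python
import math

# Qualitative toy model of interval-dependent plasticity: each voltage pulse
# strengthens the "synapse"; the strength decays between pulses. Closely spaced
# pulses leave a lasting change (long-term-like); widely spaced ones fade
# (short-term-like). Constants are arbitrary and purely illustrative.
def conductance_after_pulses(pulse_interval_s, n_pulses=10, gain=0.2, decay_time_s=2.0):
    g = 0.0
    for _ in range(n_pulses):
        g += gain * (1.0 - g)                            # each pulse increases conductance
        g *= math.exp(-pulse_interval_s / decay_time_s)  # relaxation until the next pulse
    return g

print(f"rapid pulses  (0.1 s apart): conductance {conductance_after_pulses(0.1):.2f}")
print(f"sparse pulses (5.0 s apart): conductance {conductance_after_pulses(5.0):.2f}")
```

Under those made-up constants, the rapidly stimulated synapse settles at a much higher conductance than the sparsely stimulated one, which is the qualitative short-term versus long-term distinction discussed below.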

Dvorsky writes that this breakthrough is exciting because the device “is essentially mimicking the major features of human cognition, what the researchers refer to as the ‘emulation of synaptic plasticity’, including what goes on in short-term and long-term memory.” He concludes:

“Not only that, it responds to the presence of air and temperature changes, which indicates that it has the potential to perceive the environment much like the human brain. The researchers are hoping that their newfound insight could help in the development of artificial neural networks, but it’s clear that their system, which operates at a microscopic level, could also be used to treat the human brain. The day may be coming when failing synaptic systems could be patched with a device similar to this one, in which biological function is offloaded to a synthetic one.”

I agree with Allen and Greaves that some “unforeseeable and fundamentally unpredictable breakthroughs” are going to be needed if scientists are going to duplicate the human brain and achieve a singularity. New breakthroughs, like the synthetic synapse, are encouraging, as is the work being done at the Swiss AI Lab IDSIA. I also like Schmidhuber’s restrained optimism when it comes to the future of artificial intelligence. There is no reason to be too pessimistic.
