Last December Dario Borghino reported, “Researchers at the University of Waterloo have built what they claim is the most accurate simulation of a functioning brain to date.” [“Scientists build the most accurate computer simulation of the brain yet,” Gizmag, 6 December 2012] The computer system used by Waterloo researchers has what Borghino describes as a “seemingly unimpressive count of only 2.5 million neurons, (the human brain is estimated to have somewhere nearing 100 billion neurons).” Far larger neural networks have been simulated. In an earlier article, Borghino noted that “IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain.” [“IBM supercomputer used to simulate a typical human brain,” Gizmag, 19 November 2012] Given that gap in scale, the Waterloo claim to have built the most accurate simulation of a functioning brain is all the more impressive. Borghino continues:
“Spaun (Semantic Pointer Architecture Unified Network) is able to process visual inputs, compute answers and write them down using a robotic arm, performing feats of intelligence that up to this point had only been attributed to humans.”
Peter Murray is also impressed with what Spaun has accomplished. “Instead of the tour de force processing of Deep Blue or Watson’s four terabytes of facts of questionable utility,” he writes, “Spaun attempts to play by the same rules as the human brain to figure things out.” [“Scientists Create Artificial Brain with 2.3 million Simulated Neurons,” Singularity Hub, 10 December 2012] Murray continues:
“Instead of the logical elegance of a CPU, Spaun’s computations are performed by 2.3 million simulated neurons configured in networks that resemble some of the brain’s own networks. It was given a series of tasks and performed pretty well, taking a significant step toward the creation of a simulated brain. … It was given 6 different tasks that tested its ability to recognize digits, recall from memory, add numbers and complete patterns. Its cognitive network simulated the prefrontal cortex to handle working memory and the basal ganglia and thalamus to control movements. Like a human, Spaun can view an image and then give a motor response; that is, it is presented images that it sees through a camera and then gives a response by drawing with a robotic arm. And its performance was similar to that of a human brain. For example, the simplest task, image recognition, Spaun was shown various numbers and asked to draw what it sees. It got 94 percent of the numbers correct. In a working memory task, however, it didn’t do as well. It was shown a series of random numbers and then asked to draw them in order. Like us with human brains, Spaun found the pattern recognition task easy, the working memory task not quite as easy.”
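The anatomical labels in that passage map onto a straightforward processing pipeline: camera image, visual system, working memory, action selection, motor output. The Python sketch below is purely a structural cartoon of that flow for the serial recall task; every stage is a trivial placeholder and the memory capacity is invented for illustration, whereas Spaun implements each stage with populations of spiking neurons.

```python
# Structural cartoon of the flow described above:
# image -> visual system -> working memory (prefrontal cortex)
# -> action selection (basal ganglia/thalamus) -> motor output (robotic arm).

def visual_system(image):
    """Stand-in for the visual hierarchy: map an input image to a digit."""
    return image["digit"]                      # assume perfect recognition

def working_memory(digit, memory, capacity=5):
    """Stand-in for prefrontal working memory with a limited capacity."""
    if len(memory) < capacity:
        memory.append(digit)                   # items beyond capacity are lost
    return memory

def action_selection(memory):
    """Stand-in for basal ganglia/thalamus: choose to recall the stored list."""
    return list(memory)

def motor_system(digits):
    """Stand-in for the arm controller: 'draw' the recalled digits."""
    print("arm draws:", " ".join(str(d) for d in digits))

# Serial working-memory task: show a list of digits, then recall them in order.
shown = [{"digit": d} for d in (7, 2, 9, 4, 1, 8)]
memory = []
for image in shown:
    memory = working_memory(visual_system(image), memory)
motor_system(action_selection(memory))         # the sixth digit is lost to the capacity limit
```

Because the toy memory holds only five items, the sixth digit is dropped, which loosely mirrors why the serial recall task is harder than simple recognition for Spaun and for people alike.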
Researchers admit that other AI computer systems can perform some of the tasks better than Spaun can — but that’s not the point. Murray explains:
“What’s important is that, in Spaun’s case, the task computations were carried out solely by the 2.3 million artificial neurons spiking in the way real neurons spike to carry information from one neuron to another. The visual image, for example, was processed hierarchically, with multiple levels of neurons successively extracting more complex information, just as the brain’s visual system does. Similarly, the motor response mimicked the brain’s strategy of combining many simple movements to produce an optimal, single movement while drawing.”
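To make “spiking in the way real neurons spike” a little more concrete, here is a minimal sketch of one standard simplified spiking model, the leaky integrate-and-fire neuron, which is the general kind of unit used in large-scale spiking models like Spaun. The parameters below are generic illustrative defaults, not values taken from the Spaun paper.

```python
import numpy as np

def simulate_lif(input_current, dt=0.001, tau_m=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: injected current, one value per time step of length dt.
    Returns the membrane voltage trace and the time-step indices of spikes.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane voltage leaks back toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau_m)
        if v >= v_thresh:          # threshold crossed: the neuron fires
            spikes.append(t)
            v = v_reset            # and its voltage resets
        voltages.append(v)
    return np.array(voltages), spikes

# A constant input strong enough to make the neuron fire periodically.
current = np.full(1000, 1.5)       # one second of input at 1 ms resolution
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in one second of simulated time")
```

The point is simply that information travels as discrete voltage spikes rather than as the numeric values a conventional CPU passes around; Spaun wires millions of such units into the hierarchical visual and motor networks Murray describes.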
In an earlier post (Artificial Brains: The Debate Continues), I cited an article by George Dvorsky in which he writes, “It’s important to distinguish between emulation and simulation. Emulation refers to a 1-to-1 model where all relevant properties of a system exist. This doesn’t mean re-creating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the re-creation of all its properties in an alternative substrate, namely a computer system. Moreover, emulation is not simulation. Neuroscientists are not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it’s a complete 1:1 emulation that they’re after.” It appears to me that Waterloo researchers are trying to emulate, not simply simulate, brain function. Borghino reminds us that the quest for artificial general intelligence is difficult. He writes:
“Save for a select few areas, our decades-old efforts in creating a true artificial intelligence have mostly come up short: while we’re slowly moving toward more accurate speech recognition, better computerized gaming opponents and ‘smart’ personal assistants on our phones, we’re still a very long way from developing a general-purpose artificial intelligence that displays the plasticity and problem-solving capabilities of an actual brain. The ‘reverse engineering’ approach of attempting to understand the biology of the human brain and then build a computer that models it isn’t new; but now, thanks to the promising results of research efforts led by Prof. Chris Eliasmith, the technique could gain even more traction. Using a supercomputer, the researchers modeled the mammalian brain in close detail, capturing its properties, overall structure and connectivity down to the very fine details of each neuron – including which neurotransmitters are used, how voltages are generated in the cell, and how they communicate – into a very large and resource-intensive computer simulation. Then, they hardwired into the system the instructions to perform eight different tasks that involved different forms of high-level cognitive functions, such as abstraction. Tasks included handwriting recognition, answering questions, addition by counting, and even the kind of completion of symbolic patterns that often appears in intelligence tests.”
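Eliasmith’s approach rests on his Neural Engineering Framework, in which a quantity is represented by the collective firing of a whole population of neurons and read back out through a set of linear decoding weights. The numpy sketch below shows that population-coding idea for a single scalar value; the tuning curves, neuron count, and rectified-linear rate model are invented for illustration and are far simpler than anything in the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Random tuning for each neuron: preferred direction (+1/-1), gain, and bias.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Firing rates of the whole population for a scalar input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Solve for linear decoders by least squares over sample inputs, so that a
# weighted sum of the population's firing rates reconstructs the input.
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])          # (samples, neurons) activity matrix
decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_test = 0.3
estimate = rates(x_test) @ decoders
print(f"input {x_test:.2f} decoded from the population as {estimate:.3f}")
```

Roughly speaking, connection weights between such populations are then chosen so that what one population represents drives the next population to compute some desired function of it, which is how task logic ends up “hardwired” into the network rather than learned.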
Spaun takes a baby step down the road to AGI. Borghino reports that “the model is still affected by some severe limitations. For one, it cannot learn new tasks, and all of its knowledge has to be hardwired beforehand. Also, Spaun’s performance isn’t exactly breathtaking: it takes the system approximately two and a half hours to produce an output that you and I could carry out in a single second.” In other words, the simulation runs roughly 9,000 times slower than real time (two and a half hours is about 9,000 seconds). The following video gives a brief overview of how Spaun works.
Borghino noted that “the team’s findings appear in a paper … published in the journal Science. An open-access version of the paper is available here (PDF).” University of Waterloo researchers aren’t the only scientists involved in the hunt for algorithms that can help computers think like humans. “Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.” [“Training computers to understand the human brain,” Medical Xpress, 8 October 2012] The article explains, “Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can ‘think’ and ‘see’ in the same way as humans.” The article notes that, even if the experiments don’t lead to artificial general intelligence, “future application of experiments such as this could be the development of real-time brain-computer-interfaces. Such devices could allow patients with communication impairments to speak through a computer simply by thinking about what they want to say.” Even if AGI is a long way off (or is never achieved), research efforts attempting to reach that goal will result in significant benefits in areas from healthcare to marketing.
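As a closing technical aside on the Tokyo Tech study: predicting the semantic category of a viewed image from fMRI activity is, at its core, a pattern-classification problem. You train a classifier on voxel activation patterns labeled with the category the subject was viewing and test it on held-out scans. The sketch below uses synthetic data and a generic linear classifier purely to make that shape concrete; it is not the authors’ actual pipeline, and the data here are random placeholders rather than real recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an fMRI dataset: one row of voxel activations per
# image presentation, one label per semantic category.
rng = np.random.default_rng(42)
n_trials, n_voxels, n_categories = 200, 500, 4
labels = rng.integers(0, n_categories, size=n_trials)

# Give each category a faint characteristic activation pattern plus noise,
# so there is a signal for the classifier to find.
category_patterns = rng.normal(0, 1, size=(n_categories, n_voxels))
voxels = 0.3 * category_patterns[labels] + rng.normal(0, 1, size=(n_trials, n_voxels))

# Train a linear classifier to predict the viewed category from brain activity,
# evaluated with cross-validation as decoding studies typically are.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, voxels, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance is {1 / n_categories:.2f})")
```

When decoding accuracy on held-out data is reliably above chance, the recorded brain activity carries recoverable information about what the person was looking at, which is the same logic behind the brain-computer-interface applications the article mentions.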