
Artificial Brains: The Debate Continues

February 7, 2013


“There’s an ongoing debate among neuroscientists, cognitive scientists, and even philosophers,” writes George Dvorsky, “as to whether or not we could ever construct or reverse engineer the human brain. Some suggest it’s not possible, others argue about the best way to do it, and still others have already begun working on it.” [“How will we build an artificial human brain?” io9, 2 May 2012] Dvorsky is fairly optimistic that we will achieve some kind of machine intelligence. He writes:

“It’s fair to say that ongoing breakthroughs in brain science are steadily paving the way to the day when an artificial brain can be constructed from scratch. And if we assume that cognitive functionalism holds true as a theory — the idea that our brains are a kind of computer — there are two very promising approaches worth pursuing.”

Before looking at those two promising approaches, I should point out that assuming cognitive functionalism is a true theory is a BIG assumption. Among the skeptics is Mary Cummings, an associate professor at the Massachusetts Institute of Technology who studies the intersection of humans and automation. “I’m a big fan of the mutually supportive look at humans and technology, but this is a huge leap,” Cummings told Julie Manoharan. “We can manipulate basic electrical impulses, but for the scientific community to say we can completely replicate cognition, that to me is where the Singularity starts to fall apart.” [“The Singularity: Should We Worry?” LiveScience, 24 January 2013] The singularity has been defined by proponents of artificial general intelligence as the point when machines achieve sentience and become smarter than humans.

 

Returning to Dvorsky’s article, he notes that the two promising approaches to which he alluded come “from two relatively different disciplines: cognitive science and neuroscience. One side wants to build a brain with code, while the other wants to recreate all the brain’s important functions by emulating it on a computer.” He first discusses the cognitive science (or rules-based) approach. He writes:

“One very promising strategy for building brains is the rules-based approach. The basic idea is that scientists don’t need to mimic the human brain in its entirety. Instead, they just have to figure out how the ‘software’ parts of the brain work; they need to figure out the algorithms of intelligence and the ways that they’re intricately intertwined. Consequently, it’s this approach that excites the cognitive scientists.”
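To make that idea a little more concrete, here is a minimal sketch of what it looks like to hand a program nothing but a few basic rules (states, actions, and a reward signal) and let it learn a behavior on its own. It uses tabular Q-learning on a toy problem of my own invention; it is not Goertzel’s system or any researcher’s actual code.

```python
# Minimal illustrative sketch: give a program only "basic rules" (how the
# world moves and when reward arrives) and let it learn behavior on its own.
# Tabular Q-learning on a toy 5-cell corridor; purely an assumption-laden
# illustration of the rules-based idea, not anyone's actual AGI system.
import random
from collections import defaultdict

N_STATES = 5          # cells 0..4; reaching cell 4 yields reward
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

q = defaultdict(float)  # Q[(state, action)] -> estimated value

def step(state, action):
    """The only 'rules' the agent is given: how it moves and when it is rewarded."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(300):
    state = 0
    for _ in range(1000):                                   # cap episode length
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                 # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt
        if done:
            break

# The learned policy should prefer moving right (+1) in every non-terminal cell.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```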

Researchers have achieved some very interesting results by providing computers with some basic rules and then letting them loose to learn on their own. Those promising results are one reason that “some computer theorists insist that the rules-based approach will get us to the brain-making finish line first.” Dvorsky continues:

“Ben Goertzel is one such theorist. His basic argument is that other approaches over-complicate and muddle the issue. He likens the approach to building airplanes: we didn’t have to reverse engineer the bird to learn how to fly. Essentially, cognitive scientists like Goertzel are confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach. It’ll simply be a matter of identifying and developing the requisite algorithms sufficient for the emergence of the traits they’re looking for in an AGI. They define intelligence in this context as the ability to detect patterns in the world, including in itself.”

Frankly, based on current technology, it’s a leap to go from promising rules-based results aimed at limited objectives to hard-coding AGI. Paul Allen notes that as things get more complicated (like going from limited to general artificial intelligence), scientists run into a “complexity brake.” [“The Singularity Isn’t Near,” MIT Technology Review, 12 October 2011] Although Allen believes “the complexity brake slows our rate of progress, and pushes the singularity considerably into the future,” he doesn’t completely discount the singularity, nor does he argue that scientists should cease their efforts to achieve it. Dvorsky continues:

“To that end, Goertzel and other AI theorists have highlighted the importance of developing effective learning algorithms. A new mind comes into the world as a blank slate, they argue, and it spends years learning, developing, and evolving. Intelligence is subject to both genetic and epigenetic factors, and just as importantly, environmental factors. It is unreasonable, say the cognitive scientists, to presume that a brain could suddenly emerge and be full of intelligence and wisdom without any actual experience. This is why Goertzel is working to create a ‘baby-like’ artificial intelligence first, and then raise and train this AI baby in a simulated or virtual world such as Second Life to produce a more powerful intelligence. A fundamental assumption is that knowledge can be represented in a network whose nodes and links carry ‘probabilistic truth values’ as well as ‘attention values,’ with the attention values resembling the weights in a neural network. There are a number of algorithms that need to be developed in order to make the whole neural system work, argues Goertzel, the central one being a probabilistic inference engine and a custom version of evolutionary programming. Once these algorithms and associations are established, it’s just a matter of teaching the AI what it needs to know.”
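For readers who like to see such things spelled out, here is a minimal sketch of the kind of data structure Goertzel describes: a network whose nodes and links carry probabilistic truth values and attention values. The names and the toy inference step are my own illustrative assumptions, not OpenCog’s actual API.

```python
# Minimal sketch (hypothetical names, NOT OpenCog's actual API) of a knowledge
# network whose nodes and links carry probabilistic truth values and
# attention values, as described in the quote above.
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float = 0.5     # probability-like estimate that the item holds
    confidence: float = 0.0   # how much evidence backs that estimate

@dataclass
class Atom:                   # a node or a link in the network
    name: str
    targets: tuple = ()       # empty for nodes, (source, destination) for links
    tv: TruthValue = field(default_factory=TruthValue)
    attention: float = 0.0    # resembles a weight: how salient this atom is

# Tiny example network: cats are mammals, mammals are animals.
cat, mammal, animal = Atom("cat"), Atom("mammal"), Atom("animal")
cat_is_mammal = Atom("inherits", targets=(cat, mammal),
                     tv=TruthValue(0.95, 0.7), attention=0.8)
mammal_is_animal = Atom("inherits", targets=(mammal, animal),
                        tv=TruthValue(0.98, 0.8), attention=0.5)

def deduce(ab: Atom, bc: Atom) -> TruthValue:
    """Toy deduction step: chain A->B and B->C into a value for A->C.
    Real probabilistic inference engines are far more involved; this just
    multiplies strengths and keeps the weaker confidence."""
    return TruthValue(ab.tv.strength * bc.tv.strength,
                      min(ab.tv.confidence, bc.tv.confidence))

cat_is_animal = Atom("inherits", targets=(cat, animal),
                     tv=deduce(cat_is_mammal, mammal_is_animal))
print(cat_is_animal.tv)   # strength ~0.93, confidence 0.7
```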

Sounds easy, doesn’t it? Dvorsky points out that “neuroscientists aren’t entirely convinced by the rules-based approach.” He writes:

“They feel that something is being left out of the equation, literally. Instead, they argue that researchers should be inspired by an actual working model: our brains. Indeed, whole brain emulation (WBE), the idea of reverse engineering the human brain, makes both intuitive and practical sense. Unlike the rules-based approach, WBE works off a tried-and-true working model; neuroscientists are not having to re-invent the wheel. Natural selection, through excruciatingly tedious trial-and-error, created the human brain — and all without a preconceived design. They say there’s no reason to believe that we can’t model this structure ourselves. If the brain could come about through autonomous processes, argue neuroscientists, then it can most certainly come about through the diligent work of intelligent researchers.”

Dvorsky points out that “it’s important to distinguish between emulation and simulation. Emulation refers to a 1-to-1 model where all relevant properties of a system exist.” He explains:

“This doesn’t mean re-creating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the re-creation of all its properties in an alternative substrate, namely a computer system. Moreover, emulation is not simulation. Neuroscientists are not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it’s a complete 1:1 emulation that they’re after.”

Dvorsky notes that the emulation approach has as many critics as the rules-based approach. The “critics point out that we’ll never completely emulate the human brain on account of the chaos and complexity inherent in such a system.” Proponents of the emulation approach argue that complexity isn’t a problem. Luke Muehlhauser, who works at the Singularity Institute in Berkeley, CA, told Julie Manoharan that “a total understanding of the human brain is not necessary to replicate the functionality of humans in machines.” Dvorsky indicates that researchers from Oxford University claim the same thing. He explains:

“What’s required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won’t involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has revealed, the brain contains masterful arrays of redundancy; it’s not as complicated as we currently think. In order to gain this ‘low-level functional understanding’ of the human brain, neuroscientists will need to employ a series of interdisciplinary approaches (most of which are currently underway).”
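To give a sense of what a “local update rule” can look like, here is a standard textbook example, a leaky integrate-and-fire neuron advanced one time step at a time. It is only an illustration of moment-to-moment state updates; it is not the level of description Dvorsky or the Oxford researchers commit to.

```python
# One standard textbook example of a "local update rule": a leaky
# integrate-and-fire neuron, advanced one small time step at a time.
def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Advance membrane potential v (volts) by one time step dt (seconds)."""
    dv = (-(v - v_rest) + resistance * input_current) / tau
    v = v + dv * dt
    if v >= v_thresh:           # threshold crossed: emit a spike, then reset
        return v_reset, True
    return v, False

# Drive one neuron with a constant 2 nA current for 100 ms and count spikes.
v, spikes = -0.065, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=2e-9)
    spikes += fired
print(f"spikes in 100 ms: {spikes}")
```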

Dvorsky writes that among those advances that are needed and underway are the following:

“Computer science: The hardware component has to be vastly improved. Scientists are going to need machines with the processing power required to host a human brain. They’re also going to need to improve the software component so that they can create algorithmic correlates to specific brain function.”
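To get a rough sense of the scale Dvorsky is talking about, here is a back-of-envelope estimate using the commonly cited figures of roughly 86 billion neurons and 100 trillion synapses, plus per-element storage costs that are purely my own assumptions.

```python
# Rough back-of-envelope estimate of the memory needed just to hold the state
# of a human-brain-scale model. The per-synapse and per-neuron byte counts
# are illustrative assumptions, not measured requirements.
NEURONS = 86e9           # commonly cited estimate: ~86 billion neurons
SYNAPSES = 100e12        # commonly cited estimate: ~100 trillion synapses
BYTES_PER_SYNAPSE = 16   # assumed: weight, delay, target index, bookkeeping
BYTES_PER_NEURON = 64    # assumed: membrane state and parameters

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE + NEURONS * BYTES_PER_NEURON
print(f"~{total_bytes / 1e12:.0f} TB of state under these assumptions")
# -> roughly 1,600 TB, i.e. on the order of petabytes, before any computation
#    is done at all; hence the supercomputers discussed next.
```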

How far have we advanced? We’ve come quite a ways and Dario Borghino reports that even more advances could be at hand. “Using the world’s fastest supercomputer and a new, scalable, ultra-low power computer architecture,” he writes, “IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain – in an important step toward creating a true artificial brain.” [“IBM supercomputer used to simulate a typical human brain,” Gizmag, 19 November 2012] The next technologies discussed by Dvorsky are brain mapping and neurosciences. He writes:

“Microscopy and scanning technologies: Scientists need to better study and map the brain at the physical level. Brain slicing techniques will allow them to visibly study cognitive action down to the molecular scale. Specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.

Neurosciences: Researchers need more impactful advances in the neurosciences so that they can better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (what is currently a very grey area).”

How far have we advanced in this area? One thing that gives hope to scientists trying to mimic the human brain is imagery that reveals “a deceptively simple pattern of organization in the wiring of this complex organ.” [“Spectacular brain images reveal surprisingly simple structure,” by Stephanie Pappas, MSNBC, 29 March 2012] Pappas’ article includes stunning images of the brain’s structure. The final area discussed by Dvorsky is genetics. He writes:

“Genetics: Scientists need to get better at reading our DNA for clues about how the brain is constructed. It’s generally agreed that our DNA will not tell us how to build a fully functional brain, but it will tell us how to start the process of brain-building from scratch.”

Dvorsky concludes, “Essentially, WBE requires three main capabilities: (1) the ability to physically scan brains in order to acquire the necessary information, (2) the ability to interpret the scanned data to build a software model, and (3) the ability to simulate this very large model.” Like other approaches, Dvorsky admits this one isn’t going to be easy. And that’s the primary point made by the skeptics.
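Those three capabilities can be written down as a pipeline, and doing so makes the skeptics’ point rather vividly: today, every stage is a placeholder. The function names below are hypothetical and purely for illustration.

```python
# A purely illustrative skeleton of the three WBE capabilities Dvorsky lists.
# Every function name here is hypothetical; no such tooling exists today.
def scan_brain(specimen):
    """Capability 1: acquire structural and functional data at sufficient resolution."""
    raise NotImplementedError("scanning at the needed scale does not yet exist")

def build_model(scan_data):
    """Capability 2: interpret scan data into a software model (neurons, synapses, rules)."""
    raise NotImplementedError("interpretation and reconstruction remain open research problems")

def simulate(model, duration_s):
    """Capability 3: run the very large model forward in time on sufficient hardware."""
    raise NotImplementedError("compute for whole-brain emulation is not yet available")

def whole_brain_emulation(specimen, duration_s=1.0):
    return simulate(build_model(scan_brain(specimen)), duration_s)
```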

 

When it comes to achieving AGI, rhetoric and reality have not yet met. I agree with Dvorsky on one point: “This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t yet exist.” Clearly, the debate about creating an artificial human brain or achieving the singularity is going to continue well into the future.
