
Artificial Intelligence and Human Brains

June 20, 2012


In a previous three-part series entitled “Artificial Intelligence: The Quest for Machines that Think Like Humans,” I noted that many scientists believe that it will be a long time before anyone creates a computer that thinks like a human. One reason is that scientists are still trying to figure out how the brain works. In Part 1 of that series, I discussed work being done at IBM, supported by funding from DARPA, related to the development of cognitive computers. In Part 2, I discussed work being done elsewhere. In Part 3, I discussed work being conducted at Carnegie Mellon University on the Never-Ending Language Learning system, or NELL.


In an article entitled “Why Your Brain Isn’t A Computer,” Alex Knapp quotes Emerson M. Pugh, who wrote, “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” [Forbes, 4 May 2012] Such sentiments, however, don’t stop people from trying to build computers that think like humans. For example, Dean Wilson reports, “Intel is researching computer technology that mimics the human brain so that it learns about the user over time, marking a major milestone in artificial intelligence and next-generation computing.” [“Intel researching computers that mimic human brain,” VR-Zone, 24 May 2012] In his article, Knapp directs readers to another article written by George Dvorsky entitled “How will we build an artificial human brain?” [io9, 2 May 2012] Dvorsky writes:

“There’s an ongoing debate among neuroscientists, cognitive scientists, and even philosophers as to whether or not we could ever construct or reverse engineer the human brain. Some suggest it’s not possible, others argue about the best way to do it, and still others have already begun working on it. Regardless, it’s fair to say that ongoing breakthroughs in brain science are steadily paving the way to the day when an artificial brain can be constructed from scratch. And if we assume that cognitive functionalism holds true as a theory — the idea that our brains are a kind of computer — there are two very promising approaches worth pursuing.”

Dvorsky notes that the two approaches come from two different disciplines: cognitive science and neuroscience. “One side wants to build a brain with code,” he writes, “while the other wants to recreate all the brain’s important functions by emulating it on a computer. It’s anyone’s guess at this point in time as to who will succeed and get there first, if either of them.” Knapp believes that Dvorsky’s article “unwittingly” demonstrates “some of the fundamental flaws underlying artificial intelligence research based on the computational theory of mind.” He continues:

“The computational theory of mind, in essence, says that your brain works like a computer. That is, it takes input from the outside world, then performs algorithms to produce output in the form of mental state or action. In other words, it claims that the brain is an information processor where your mind is ‘software’ that runs on the ‘hardware’ of the brain. Dvorsky explicitly invokes the computational theory of mind by stating ‘if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine, namely a computer.’ He then sets up a false dichotomy by stating that ‘if you believe that there’s something mystical or vital about human cognition you’re probably not going to put too much credence’ into the methods of developing artificial brains that he describes.”
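For readers unfamiliar with the term, a Turing machine is simply a formal model of computation: a finite table of rules operating on a tape of symbols. The sketch below (my own illustration in Python, not code from Knapp or Dvorsky) shows how little machinery the model requires; the machine defined here merely inverts a binary string. The computational theory of mind claims that, in principle, brain activity could be expressed in this same rule-table form.

```python
# A minimal Turing machine simulator, purely to illustrate what "computable on
# a Turing machine" means. The example rule table inverts a binary string.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Apply rules of the form (state, symbol) -> (new_state, write, move),
    where move is -1 (left) or +1 (right), until the machine halts."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: flip each bit, halt on the first blank cell.
invert_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}

print(run_turing_machine("10110", invert_rules))  # prints 01001
```

Whether anything as rich as cognition can be captured by rule tables of this kind is, of course, exactly the point in dispute.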

Knapp is skeptical that the human mind will be duplicated, although he doesn’t claim that creating “an artificial human brain is impossible.” He writes, “It’s just that programming such a thing would be much more akin to embedded systems programming rather than computer programming.” He continues:

“Moreover, it means that the hardware matters a lot – because the hardware would have to essentially mirror the hardware of the brain. This enormously complicates the task of trying to build an artificial brain, given that we don’t even know how the 300 neuron roundworm brain works, much less the 300 billion neuron human brain. But looking at the workings of the brain in more detail reveals some more fundamental flaws with computational theory. For one thing, the brain itself isn’t structured like a Turing machine. It’s a parallel processing network of neural nodes – but not just any network. It’s a plastic neural network that can in some ways be actively changed through influences by will or environment. For example, so long as some crucial portions of the brain aren’t injured, it’s possible for the brain to compensate for injury by actively rewiring its own network. Or, as you might notice in your own life, it’s possible to improve your own cognition just by getting enough sleep and exercise.”
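To make the contrast with fixed logic gates concrete, here is a deliberately toy sketch, assuming nothing more than a basic Hebbian learning rule (my own choice for illustration, not a claim about how the brain actually rewires itself). The point is only that the “circuit” itself changes as activity flows through it:

```python
import numpy as np

# Toy "plastic" network: unlike a fixed logic gate, the connection weights
# change as a function of the activity that passes through them.
rng = np.random.default_rng(seed=0)
weights = rng.normal(scale=0.1, size=(4, 4))   # 4 neurons, all-to-all connections
learning_rate, decay = 0.05, 0.01

for step in range(100):
    activity = rng.random(4)                   # stand-in for sensory input
    # Hebbian rule: connections between co-active neurons strengthen;
    # the decay term keeps the weights from growing without bound.
    weights += learning_rate * np.outer(activity, activity) - decay * weights

print(np.round(weights, 2))   # the wiring now reflects the history of activity
```

Real neural plasticity is vastly more complicated than a weight update, which is precisely Knapp’s point.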

One thing that gives hope to scientists trying to mimic the human brain is imagery that reveals “a deceptively simple pattern of organization in the wiring of this complex organ.” [“Spectacular brain images reveal surprisingly simple structure,” by Stephanie Pappas, MSNBC, 29 March 2012] The first image displayed by Pappas comes from the MGH-UCLA Human Connectome Project. Pappas writes:

“Instead of nerve fibers travelling willy-nilly through the brain like spaghetti, as some imaging has suggested, the new portraits reveal two-dimensional sheets of parallel fibers crisscrossing other sheets at right angles in a gridlike structure that folds and contorts with the convolutions of the brain.”

The image shows the “grid structure of major pathways of the human left cerebral hemisphere. Seen here are a major bundle of front-to-back paths (the ‘superior longitudinal fasciculus,’ or SLF) rendered in purples. These cross nearly orthogonally to paths projecting from the cerebral cortex radially inward (belonging to the ‘internal capsule’), shown in orange and yellow. These data were obtained in the new MGH-UCLA 3T Connectome Scanner as part of the NIH Blueprint Human Connectome Project.” The second image that accompanies Pappas’ article is from Van Wedeen / Martinos Center for Biomedical Imaging, Mass. General Hospital. It shows “detail of a diffusion spectrum MR image of rhesus monkey brain showing the sheet-like, three-dimensional structure of neural pathways that cross each other at right angles.” Regardless of how one feels about artificial intelligence or man’s ability to recreate the human brain, you have to admit that the images make it look as though the brain is “wired.” Pappas continues:

“The finding of clear up-down, front-back and side-to-side organization in the brain makes sense, [Van Wedeen, a neuroscientist at Harvard Medical School and Massachusetts General Hospital] said, given that the brain has had to rewire both evolutionarily (to form the specialized brains humans boast today) and during its lifetime (as it grows and learns, for example). If the organization of communication were chaotic, that wouldn’t work.”

Knapp doesn’t believe that the wiring of the brain can explain how the mind works internally. He writes:

“Just consider the prevalence of cognitive dissonance and confirmation bias. Cognitive dissonance is the ability of the mind to believe what it wants even in the face of opposing evidence. Confirmation bias is the ability of the mind to seek out evidence that conforms to its own theories and simply gloss over or completely ignore contradictory evidence. Neither of these aspects of the brain is easily explained through computation – it might not even be possible to express these states mathematically. What’s more, the brain simply can’t be divided into functional pieces. Neuronal ‘circuitry’ is fuzzy and, from a hardware perspective, it’s ‘leaky.’ Unlike the logic gates of a computer, the different working parts of the brain impact each other in ways that we’re only just beginning to understand. And those circuits can also be adapted to new needs. As Mark Changizi points out in his excellent book Harnessed, humans don’t have portions of the brain devoted to speech, writing, or music. Rather, they’re emergent – they’re formed from parts of the brain that were adapted to simpler visual and hearing tasks. If the parts of the brain we think of as being fundamentally human – not just intelligence, but self-awareness – are emergent properties of the brain, rather than functional ones, as seems likely, the computational theory of mind gets even weaker.”

Knapp isn’t arguing against the pursuit of artificial intelligence per se; he is simply arguing that those who believe the human brain/mind can be duplicated are far from proving their case. We all know that artificial intelligence or machine learning can be very useful. The Intel project, for example, involves a plan “to develop small, wearable computers that improve certain aspects of everyday life. An example given was a user leaving his or her car keys at home. The first week the device will remember where he or she left the keys, while in the second week it will remind him or her not to forget the keys. This ability to adapt to a user’s experience could revolutionize the industry and produce huge demand for new home equipment.” Mooly Eden, president of Intel Israel, told Dean Wilson, “Within five years all of the human senses will be in computers and in 10 years we will have more transistors in one chip than neurons in the human brain.” Wilson continues:

“The question, of course, is whether or not this will lead to technology outsmarting us and the scenarios put forward by science-fiction writers of robots revolting against our rule. It might all sound far-fetched, but so did the idea of computers that learn a few years ago.”
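Setting the science-fiction scenarios aside, the key-tracking behavior described for the Intel device is, at its core, a simple adaptive pattern: observe for a while, then act on what was observed. Here is a hypothetical sketch of that pattern (the class and method names are my own invention, not Intel’s):

```python
from collections import Counter

class KeyTracker:
    """Hypothetical illustration of 'learn first, remind later' behavior."""

    def __init__(self, min_observations=7):
        self.locations = Counter()
        self.min_observations = min_observations   # e.g., roughly a week of use

    def observe(self, location):
        """Record where the keys were left today."""
        self.locations[location] += 1

    def reminder(self):
        """Stay quiet until enough has been observed, then suggest the usual spot."""
        if sum(self.locations.values()) < self.min_observations:
            return None
        spot, _ = self.locations.most_common(1)[0]
        return f"Don't forget your keys; they are usually on the {spot}."

tracker = KeyTracker()
for spot in ["hall table"] * 5 + ["kitchen counter"] * 2:
    tracker.observe(spot)
print(tracker.reminder())   # after a week of observations, the reminder kicks in
```

The sketch only shows how modest the underlying learning can be; whether wearable devices end up doing anything like this is Intel’s problem to solve.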

Knapp is not concerned about such scenarios. He concludes:

“The mind is best understood, not as software, but rather as an emergent property of the physical brain. So building an artificial intelligence with the same level of complexity as that of a human intelligence isn’t a matter of just finding the right algorithms and putting it together. The brain is much more complicated than that, and is very likely simply not amenable to that kind of mathematical reductionism, any more than economic systems are. Getting back to the question of artificial intelligence, then, you can see why it becomes a much taller order to produce a human-level intelligence. It’s possible to build computers that can learn and solve complex problems. But it’s much less clear that there’s an easy road to a computer that’s geared towards the type of emergent properties that distinguish the human brain. Even if such properties did emerge, I’m willing to bet that the end result of a non-human, sapient intelligence would be very alien to our understanding, possibly to the point of non-comprehension. Electric circuits simply function differently than electrochemical ones, and so it’s likely that any sapient properties would emerge quite differently.”

Regardless of whether the human brain will ever be duplicated, the fact that “computers … can learn and solve complex problems” should be reason enough to continue research into artificial intelligence and its applications.
