
Artificial Intelligence: The Quest for Machines that Think Like Humans, Part 1

January 30, 2012


Paul G. Allen, chairman of Vulcan and cofounder of Microsoft, and Mark Greaves, a computer scientist at Vulcan, believe that machines won’t begin out-thinking humans in the foreseeable future. They acknowledge that their opinion differs from that of futurists like Vernor Vinge and Ray Kurzweil, who “have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities.” [“The Singularity Isn’t Near,” Technology Review, 12 October 2011] They are not at fundamental odds with Vinge and Kurzweil; they too believe that day will eventually come. They simply argue that Vinge and Kurzweil are relying on “black box” thinking: much of the magic that leads to the tipping point Kurzweil calls “the singularity” happens inside a “black box,” and no one can predict when the contents of that “black box” will be invented. They write:

“While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. … By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045. This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.”

Although technologists and futurists hold differing views on when computers will gain cognition, most seem to agree that the day will eventually arrive. In this three-part series, I’ll discuss ongoing research aimed at developing computers that can think like humans. This post focuses on some work being done by IBM.


Last August, “IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition.” [“IBM Unveils Cognitive Computing Chips,” IBM Press Release, 18 August 2011] The release states:

“The technology could yield many orders of magnitude less power consumption and space than used in today’s computers. In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.”
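A quick illustration may help readers unfamiliar with the term “spiking neurons.” IBM’s release does not publish the equations behind its circuits, so the short Python sketch below is only a generic “leaky integrate-and-fire” model with arbitrary parameter values; it shows the basic idea that a neuron accumulates weighted input through a synapse and emits a discrete spike when a threshold is crossed, rather than describing IBM’s actual design.

# Minimal leaky integrate-and-fire neuron: a generic abstraction only,
# not IBM's circuit design. All parameter values are arbitrary.
def lif_neuron(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    """Return the output spike train produced by a stream of input spikes."""
    potential = 0.0
    output = []
    for spike in input_spikes:        # 1 = an input spike arrived, 0 = silence
        potential *= leak             # membrane potential decays ("leaks") each step
        potential += weight * spike   # the synapse adds charge when a spike arrives
        if potential >= threshold:    # threshold crossed: fire and reset
            output.append(1)
            potential = 0.0
        else:
            output.append(0)
    return output

# A burst of closely spaced input spikes eventually drives the neuron to fire.
print(lif_neuron([1, 1, 1, 0, 0, 1, 1, 1]))   # -> [0, 0, 1, 0, 0, 0, 0, 1]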

One of the characteristics that the IBM approach has in common with many (if not all) of the other approaches being pursued to achieve cognitive computing is the use of artificial neurons, synapses, and axons. These biological marvels are what allow the brain to make millions of connections between facts, figures, experiences, etc. The press release continues:

“Called cognitive computers, systems built with these chips won’t be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brain’s structural and synaptic plasticity. To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. The goal of SyNAPSE is to create a system that not only analyzes complex information from multiple sensory modalities at once, but also dynamically rewires itself as it interacts with its environment – all while rivaling the brain’s compact size and low power usage. The IBM team has already successfully completed Phases 0 and 1.”

DARPA is known for supporting projects with tremendous breakthrough potential. Since it has committed a significant amount of funding for this project, it obviously believes that IBM and its partners are on to something big. Dharmendra Modha, project leader for IBM Research, states, “This is a major initiative to move beyond the von Neumann paradigm that has been ruling computer architecture for more than half a century. Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture. These chips are another significant step in the evolution of computers from calculators to learning systems, signaling the beginning of a new generation of computers and their applications in business, science and government.”


The “von Neumann model” (or “von Neumann paradigm”) is a model used in traditional sequential computers sometimes referred to as instruction-stream-based computing. It derives both its approach and name from a computer architecture proposed in the mid-1940s by John von Neumann, a mathematician and early computer scientist. If machines are going to begin thinking like humans, they can’t be trapped in this paradigm. The press release explains:

“IBM’s overarching cognitive computing architecture is an on-chip network of light-weight cores, creating a single integrated system of hardware and software. This architecture represents a critical shift away from traditional von Neumann computing to a potentially more power-efficient architecture that has no set programming, integrates memory with processor, and mimics the brain’s event-driven, distributed and parallel processing.”
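The contrast the release is drawing can be made concrete. In the von Neumann style, a processor marches through its instruction stream and touches every input on every cycle, whether or not anything changed; in an event-driven style, work is performed only when a spike (an event) actually occurs, so effort scales with activity rather than with the number of inputs. The toy Python comparison below is my own illustration of that distinction, not a description of IBM’s chips.

# Toy contrast between instruction-stream and event-driven processing;
# an illustration only, not a description of IBM's architecture.
sensors = [0.0] * 1_000_000          # a large field of mostly silent inputs
sensors[42] = 1.0                    # a single "spike"

# Instruction-stream style: scan every input on every cycle.
total = 0.0
for reading in sensors:
    if reading != 0.0:
        total += reading

# Event-driven style: only the inputs that actually spiked are delivered as
# (index, value) events, so the work done scales with activity.
events = [(42, 1.0)]
total_from_events = sum(value for _, value in events)

print(total, total_from_events)      # both yield 1.0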

The heart of the IBM approach is the Neurosynaptic Chip. The press release explains:

“While they contain no biological elements, IBM’s first cognitive computing prototype chips use digital silicon circuits inspired by neurobiology to make up what is referred to as a ‘neurosynaptic core’ with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons). IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification. … IBM’s long-term goal is to build a chip system with ten billion neurons and hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.”
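The synapse counts quoted above are consistent with a crossbar layout in which every axon (input line) on a core can potentially connect to every one of its 256 neurons: 256 × 256 = 65,536, and 1,024 × 256 = 262,144. The press release does not spell out the cores’ internal organization, so the Python sketch below is only my guess at the general shape of such a core, with arbitrary parameters: a binary connectivity matrix routes incoming spikes to leaky integrate-and-fire neurons.

import numpy as np

# Hypothetical sketch of a crossbar-style neurosynaptic core, NOT IBM's actual design.
# A 256 x 256 binary matrix records which axon (input line) connects to which neuron;
# on each tick, spiking axons deposit charge into the neurons they connect to.
AXONS, NEURONS = 256, 256                 # 256 * 256 = 65,536 possible synapses
THRESHOLD, LEAK = 4.0, 0.8                # arbitrary illustrative values

rng = np.random.default_rng(0)
synapses = (rng.random((AXONS, NEURONS)) < 0.05).astype(float)   # sparse connectivity
potential = np.zeros(NEURONS)

def tick(axon_spikes):
    """Advance the core one timestep and return which neurons fired."""
    global potential
    potential *= LEAK                     # membrane potentials leak toward zero
    potential += axon_spikes @ synapses   # deliver incoming spikes across the crossbar
    fired = potential >= THRESHOLD
    potential[fired] = 0.0                # neurons that fired reset their potential
    return fired

# Drive the core with random input spikes for a few timesteps.
for t in range(5):
    spikes = (rng.random(AXONS) < 0.2).astype(float)
    print(f"t={t}: {int(tick(spikes).sum())} neurons fired")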

That long-term goal is ambitious. Obviously, a chip system of that size and density needs to keep power consumption low so that it does not generate excessive heat, which could damage the system. Although the answer may seem obvious, the IBM press release asks the rhetorical question, “Why Cognitive Computing?” It answers that question this way:

“Future chips will be able to ingest information from complex, real-world environments through multiple sensory modes and act through multiple motor modes in a coordinated, context-dependent manner. For example, a cognitive computing system monitoring the world’s water supply could contain a network of sensors and actuators that constantly record and report metrics such as temperature, pressure, wave height, acoustics and ocean tide, and issue tsunami warnings based on its decision making. Similarly, a grocer stocking shelves could use an instrumented glove that monitors sights, smells, texture and temperature to flag bad or contaminated produce. Making sense of real-time input flowing at an ever-dizzying rate would be a Herculean task for today’s computers, but would be natural for a brain-inspired system.”

The kind of data that IBM is talking about analyzing is exponentially larger than what we now call Big Data — it’s humongous data. The new CBS television show “Person of Interest” involves a machine that supposedly crunches this kind of data to predict events before they happen. Dr. Modha says, “Imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens or imagine cognitive co-processors that turn servers, laptops, tablets, and phones into machines that can interact better with their environments.” That is the future into which we are heading. Alex Knapp writes that it is hard to tell how much truth there is behind the hype. [“Is IBM Building a Computer That Thinks Like a Human?” Forbes, 23 August 2011] He reports:

“I emailed Scott Aaronson, an Associate Professor at MIT who specializes in quantum computing and computational complexity theory to get his take on IBM’s research. He, like me, was frustrated at the lack of detail. ‘[T]hey go into detail about the number of neurons and synapses that they’re able to simulate, as well as the speed and memory requirements of the simulation. But they say much less about the purpose or performance of this huge neural network: i.e., what sorts of learning or recognition tasks (if any) is the network designed to solve? And is its performance on those tasks demonstrably better than the performance of smaller neural networks?’ He was quick to point out that this didn’t mean that the project didn’t have any application – merely that it’s tough to figure out from the information provided what the cognitive chips are for. ‘Whether your motivation comes more from engineering or from neuroscience, at the end of the day it of course matters less how many neurons you can simulate than what you can DO with those neurons!’ I have to agree with Dr. Aaronson’s assessment on this score. It’s not clear from IBM’s materials as to what sorts of applications are expected from the Synapse project. I get that they want to build a system with 10⁶ “neurons” and then 10⁸ “neurons” — but to what end?”

Knapp, reflecting thinking along the lines of Allen and Greaves, was also skeptical about the idea that the cognitive computing being developed by the SyNAPSE project is “emulating the human brain.” He writes:

“That didn’t seem right to me – there’s far more to how the brain works than the simple action potential of a neuron. So I sent a Twitter message to neuroscientist Bradley Voytek to see if he could confirm my suspicion. ‘It totally doesn’t work like a brain!’ he tweeted back. We continued our conversation over email. ‘Do you mind elaborating exactly how the hardware doesn’t work like a brain?’ I asked. ‘Our current knowledge of how neurons give rise to processes such as learning and memory are severely lacking,’ he replied. ‘What Modha and colleagues have done is designed a computing architecture based on neurons communicating via action potentials. More and more the evidence suggests that the brain isn’t just a collection of neurons wired together in the correct way, however. It’s a dynamic system of neurons and glial cells communicating via action potentials, graded voltage gap junctions, long-range oscillatory activity, and so on.’ I followed up with another quick question, ‘Do you think that just simulating the action potentials is enough to simulate learning?’ ‘Given that most computational neuroscientists treat action potentials as binary signals, then yes, learning can be instantiated by a series of action potentials. The question is: is that implementation mimicking the way the brain instantiates learning? And my guess is that the answer is: no, it’s not that simple.’ So that’s where we are. I don’t mean to downplay the work of Dr. Modha and his team – I definitely think that this is a fascinating project, and I’m very interested to see their published research. More importantly, like Dr. Aaronson, I’m interested to see what kinds of applications this technology is used for. And as Dr. Voytek mentioned to me in our email conversation, even if the Synapse program doesn’t really think like a brain, that doesn’t mean it can’t teach us about the brain. ‘This might prove to be a very important step in the history of computing, and watching how their architecture fails to capture the intricacies of the brain will inform us about how our models are deficient.'”
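Voytek’s point about binary action potentials can be illustrated with a toy spike-driven learning rule. The Python sketch below is my own assumption-laden example of Hebbian-style learning, in which a synapse strengthens when the neurons on either side of it spike together; it is not the SyNAPSE project’s learning rule, and, as Voytek argues, it is almost certainly not how the brain actually implements learning.

# Toy Hebbian-style learning driven purely by binary spikes; an illustration only,
# not the SyNAPSE learning rule and not a claim about how the brain learns.
def update_weight(weight, pre_spike, post_spike,
                  potentiation=0.05, depression=0.02):
    """Strengthen the synapse when pre- and postsynaptic spikes coincide;
    weaken it slightly when the presynaptic neuron fires alone."""
    if pre_spike and post_spike:
        weight += potentiation
    elif pre_spike and not post_spike:
        weight -= depression
    return max(0.0, min(1.0, weight))     # keep the weight in [0, 1]

# A synapse whose input and output spike trains often coincide grows stronger.
w = 0.5
pre_spikes  = [1, 1, 0, 1, 1, 0, 1, 1]
post_spikes = [1, 0, 0, 1, 1, 0, 1, 1]
for pre, post in zip(pre_spikes, post_spikes):
    w = update_weight(w, pre, post)
print(round(w, 2))                        # 0.73: the connection strengthened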

In the end, Knapp brings us back to the arguments presented at the beginning by Allen and Greaves. His bottom line: “This is definitely an exciting project. But we don’t have to worry about the rise of the machines anytime soon.”
