
Artificial Intelligence: Trying to Think Like a Human

Stephen DeAngelis

April 10, 2014


According to Luke Dormehl, “The secret of human brains is pattern recognition.” [“The Algorithm That Thinks Like A Human,” Fast Company, 11 November 2013] I’m pretty sure that the human brain holds a few more secrets than that; but, pattern recognition is certainly one human trait that helps us cope with the world around us. The algorithm to which Dormehl refers in his headline is one created by a startup company called Vicarious to outsmart “that string of distorted characters that forces you to prove you’re human” known as CAPTCHA. Dormehl reports that it was a ten-year journey to the algorithm that grabbed headlines in the fall of 2013. Dr. Dileep George, who with D. Scott Phoenix heads the team of AI researchers at Vicarious, told Dormehl, “We started off by using machine learning tools to create a model of what comprises individual letters — thereby training our system to recognize them. That’s not hard. But the next step was to make the system good at learning, even when there wasn’t much data available to suggest a pattern — something much, much harder for a computer to do.” It was hard, but the computer did it. As a result, the founders of Vicarious have been rewarded. “A group of tech elites and venture capital firms have awarded $40 million to Vicarious. Venture capital firm Formation 8 led the round … joined by Tesla and SpaceX CEO Elon Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher.” [“Artificial intelligence startup Vicarious collects $40 million from tech elites,” by Signe Brewster, Gigaom, 21 March 2014] Mastering one task, however, doesn’t mean that Vicarious’ algorithm has the computer thinking like a human.
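George’s two steps — first teaching a system what individual letters look like, then getting it to generalize from very little data — can be illustrated in a greatly simplified way. The sketch below is only a toy, not Vicarious’ algorithm: it trains an ordinary nearest-neighbor classifier on a handful of labeled examples per character, using scikit-learn’s bundled digits dataset as a stand-in for CAPTCHA characters.

```python
# Toy illustration: recognizing characters from very few training examples.
# This is NOT Vicarious' algorithm -- just a simple nearest-neighbor baseline
# trained on a handful of labeled images per class (scikit-learn's digits
# dataset stands in for CAPTCHA characters).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                      # 8x8 grayscale images of digits 0-9
X, y = digits.data, digits.target

# Keep only five examples per class to mimic "not much data."
rng = np.random.default_rng(0)
few_shot_idx = np.concatenate(
    [rng.choice(np.where(y == c)[0], size=5, replace=False) for c in range(10)]
)
X_train, y_train = X[few_shot_idx], y[few_shot_idx]

# Evaluate on everything that was held out.
mask = np.ones(len(y), dtype=bool)
mask[few_shot_idx] = False
X_test, y_test = X[mask], y[mask]

clf = KNeighborsClassifier(n_neighbors=1)   # match each test image to its nearest template
clf.fit(X_train, y_train)
print(f"Accuracy with 5 examples per digit: {clf.score(X_test, y_test):.2f}")
```

A nearest-neighbor lookup does surprisingly well on clean digits, but it degrades quickly under the kind of distortion CAPTCHAs use, which is exactly the gap Vicarious spent a decade trying to close.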


“Computers are incredibly inefficient at lots of tasks that are easy for even the simplest brains,” writes Tom Simonite, “such as recognizing images and navigating in unfamiliar spaces.” [“Thinking in Silicon,” MIT Technology Review, January/February 2014] That, Simonite notes, is because computer chips and brains are very different. He explains:

“Picture a person reading these words on a laptop in a coffee shop. The machine made of metal, plastic, and silicon consumes about 50 watts of power as it translates bits of information — a long string of 1s and 0s — into a pattern of dots on a screen. Meanwhile, inside that person’s skull, a gooey clump of proteins, salt, and water uses a fraction of that power not only to recognize those patterns as letters, words, and sentences but to recognize the song playing on the radio.”

Simonite goes on to explain that computers being used to learn some of the skills carried out by human brains “are huge and energy-hungry, and they need specialized programming.” Clearly, Vicarious and other companies specializing in artificial intelligence are making strides; but, we’re still a long way from creating a computer that thinks like a brain. He reports, however, “A new breed of computer chips that operate more like the brain may be about to narrow the gulf between artificial and natural computation — between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world.” He continues:

“Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These ‘neuromorphic’ chips may be the missing piece of many promising but unfinished projects in artificial intelligence, such as cars that drive themselves reliably in all conditions, and smartphones that act as competent conversational assistants. … The prototypes have already shown early sparks of intelligence, processing images very efficiently and gaining new skills in a way that resembles biological learning. IBM has created tools to let software engineers program these brain-inspired chips; the other prototype, at HRL Laboratories in Malibu, California, will soon be installed inside a tiny robotic aircraft, from which it will learn to recognize its surroundings.”

Included in Simonite’s article is an infographic that shows how computer chips have progressed over the last half-century.

[Infographic from MIT Technology Review: the progression of computer chips over the past half-century, culminating in neuromorphic designs built from artificial neurons and synapses.]

In that infographic you will notice the mention of artificial neurons and synapses, patterned after the neurons and synapses found in the human brain. “More than two decades ago,” reports Joab Jackson, “neural networks were widely seen as the next generation of computing, one that would finally allow computers to think for themselves.” [“Biologically inspired: How neural networks are finally maturing,” PCWorld, 17 December 2013] Twenty years on, he writes, “The ideas around the technology, loosely based on the biological knowledge of how the mammalian brain learns, are finally starting to seep into mainstream computing, thanks to improvements in hardware and refinements in software models.” He continues:

“Computers still can’t think for themselves, of course, but the latest innovations in neural networks allow computers to sift through vast realms of data and draw basic conclusions without the help of human operators. ‘Neural networks allow you to solve problems you don’t know how to solve,’ said Leon Reznik, a professor of computer science at the Rochester Institute of Technology. Slowly, neural networks are seeping into industry as well. Micron and IBM are building hardware that can be used to create more advanced neural networks. On the software side, neural networks are slowly moving into production settings as well. Google has applied various neural network algorithms to improve its voice recognition application, Google Voice. For mobile devices, Google Voice translates human voice input to text, allowing users to dictate short messages, voice search queries and user commands even in the kind of noisy ambient conditions that would flummox traditional voice recognition software.”
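Reznik’s point, that neural networks let you solve problems you cannot write explicit rules for, is easy to see in miniature. The sketch below is my own illustration, not Google’s voice system: it trains a small multi-layer network on clean digit images and then scores it on noise-corrupted copies, the kind of degraded input that tends to trip up hand-written rule sets.

```python
# Minimal sketch: a small neural network coping with noisy input.
# Illustrative only -- a tiny MLP on 8x8 digit images, not Google's
# voice-recognition pipeline.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)              # learns from examples, not hand-coded rules

rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=2.0, size=X_test.shape)  # simulate noisy conditions

print(f"Clean test accuracy: {net.score(X_test, y_test):.2f}")
print(f"Noisy test accuracy: {net.score(X_noisy, y_test):.2f}")
```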

“The great promise — and great fear — of Artificial Intelligence,” asserts Joshua Rivera, “has always been that someday, computers would be able to mimic the way our brains work.” [“What Does Artificial Intelligence Really Mean, Anyway?” Fast Company, 17 December 2013] He insists, however, that there is nothing to fear at the moment. “After years of progress,” he writes, “AI isn’t just a long way from HAL 9000, it has gone in an entirely different direction. Some of the biggest tech companies in the world are beginning to implement AI in some form, and it looks nothing like we thought it would.” To be honest, not everybody has given up on creating a machine with artificial general intelligence (i.e., a sentient computer); but, Rivera is correct in reporting that companies are finding all sorts of uses for narrow artificial intelligence applications.


Rivera goes on to note that Tom Chatfield believes that most “artificial intelligence” systems being created by companies today are not even trying to imitate the function of a human brain. “Chatfield argues that we’ve created something entirely different. Instead of machines that think like humans, we now have machines that think in an entirely different, perhaps even alien, way. Continuing to shoehorn them into replicating our natural thought processes could be limiting.” In other words, it doesn’t really matter how we teach computers to think as long as the processes they use achieve the desired result. I agree with that philosophy and suspect that it will be followed for most business use cases. Nevertheless, the dream of achieving human-like thought with a computer is unlikely to die. To underscore that point, Vicarious’ Phoenix told Dormehl, “In the long run we’re trying to create systems that can think and learn like the human brain. Anything that a brain can do, our system should be able to do as well.” However, Simonite reports, “Some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities.”


If that goal is ever achieved, the solution will probably involve the development of artificial neural networks (ANN) that mimic the human brain’s neural networks. “The brain processes sensory and other information using billions of interconnected neurons,” explains Jackson. “Over time, the connections among the neurons change, by growing stronger or weaker in a feedback loop, as the person learns more about his or her environment.” He continues:

“An artificial neural network (ANN) also uses this approach of modifying the strength of connections among different layers of neurons, or nodes in the parlance of the ANN. ANNs, however, usually deploy a training algorithm of some form, which adjusts the nodes to extract the desired features from the source data. Much like humans do, a neural network can generalize, slowly building up the ability to recognize, for instance, different types of dogs, using a single image of a dog.”
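Jackson’s description, layered nodes whose connection strengths are adjusted by a training algorithm until they extract the desired features, maps onto the weight-update loop at the heart of every ANN. The sketch below is a bare-bones illustration under simplifying assumptions: a single artificial neuron learns the logical AND function by repeatedly nudging its connection weights in the direction that reduces its error.

```python
# Bare-bones illustration of Jackson's description: connection strengths
# ("weights") adjusted in a feedback loop by a training algorithm.
# A single artificial neuron learns the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # desired outputs (AND)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # connection strengths, initially random
bias = 0.0
lr = 0.5                       # learning rate: how big each adjustment is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    out = sigmoid(X @ weights + bias)      # neuron's current guesses
    error = out - y                        # feedback signal
    grad = out * (1 - out) * error         # gradient of squared error w.r.t. pre-activation
    weights -= lr * (X.T @ grad) / len(y)  # strengthen or weaken each connection
    bias -= lr * grad.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 0, 0, 1]
```

Real networks stack many such units into layers and use more sophisticated training algorithms, but the principle is the same one Jackson describes: connections grow stronger or weaker in a feedback loop as the system sees more examples.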

Jackson concludes, “It is doubtful that neural networks would ever replace standard CPUs.” He does believe, however, that “they may very well end up tackling certain types of jobs difficult for CPUs alone to handle.” For most of us, the debate about whether narrow artificial intelligence really qualifies a system to be labeled “intelligent” is of little significance. What really matters is whether such systems are useful — and on that point there is little disagreement. They are!
