
Artificial Intelligence Still Can’t Beat the Real Thing

December 30, 2009


Earlier this month, John Markoff, who writes about science and information technologies for the New York Times, penned a column about a team of remarkable men who worked together at the dawn of the information age in the Stanford Artificial Intelligence Laboratory (SAIL) [“Optimism as Artificial Intelligence Pioneers Reunite,” 8 December 2009]. He begins his article by noting that the “personal computer and the technologies that led to the Internet were largely invented in the 1960s and ’70s at three computer research laboratories next to the Stanford University campus.” SAIL was one of them. The other two were the Augmentation Research Center, “which became known for the mouse,” and Xerox’s Palo Alto Research Center (PARC), “which developed the first modern personal computer” as well as many other computer innovations we now take for granted. Markoff, however, focuses on SAIL, which was “run by the computer scientist John McCarthy.” He notes that SAIL was less well known than the other two research centers.


“That may be because SAIL tackled a much harder problem: building a working artificial intelligence system. By the mid-1980s, many scientists both inside and outside of the artificial intelligence community had come to see the effort as a failure. The outlook was more promising in 1963 when Dr. McCarthy began his effort. His initial proposal, to the Advanced Research Projects Agency of the Pentagon, envisioned that building a thinking machine would take about a decade. Four and a half decades later, much of the original optimism is back, driven by rapid progress in artificial intelligence technologies, and that sense was tangible last month when more than 200 of the original SAIL scientists assembled at the William Gates Computer Science Building here for a two-day reunion.”


This is not the first time that Markoff has written optimistically about artificial intelligence. Over three years ago, he wrote an article in which he insisted that the field of artificial intelligence was “finally catching up to the science-fiction hype.” [“Brainy Robots Start Stepping Into Daily Life,” by John Markoff, New York Times, 8 July 2006]. That article points to activities (such as robot cars driving themselves across the desert, electronic eyes performing lifeguard duty in swimming pools, and virtual enemies with humanlike behavior battling video game players) to demonstrate the advances that have been made. Although strides continue to be made in artificial intelligence, truly cognitive machines still elude researchers. Returning to his latest article, Markoff explains why SAIL was important.


“During their first 10 years, SAIL researchers embarked on an extraordinarily rich set of technical and scientific challenges that are still on the frontiers of computer science, including machine vision and robotic manipulation, as well as language and navigation. … The scientists and engineers who worked at the laboratory constitute an extraordinary Who’s Who in the computing world. Dr. McCarthy coined the term artificial intelligence in the 1950s. Before coming to SAIL he developed the LISP programming language and invented the time-sharing approach to computers. [Les Earnest, the laboratory’s deputy director] designed the first spell-checker and is rightly described as the father of social networking and blogging for his contribution of the finger command that made it possible to tell where the laboratory’s computer users were and what they were doing. Among others, Raj Reddy and Hans Moravec went on to pioneer speech recognition and robotics at Carnegie Mellon University. Alan Kay brought his Dynabook portable computer concept first to Xerox PARC and later to Apple. Larry Tesler developed the philosophy of simplicity in computer interfaces that would come to define the look and functioning of the screens of modern Apple computers — what is called the graphical user interface, or G.U.I. Don Knuth wrote the definitive texts on computer programming. Joel Pitts, a Stanford undergraduate, took a version of the Space War computer game and turned it into the first coin-operated video game — which was installed in the university’s student coffee house — months before Nolan Bushnell did the same with Atari. The Nobel Prize-winning geneticist Joshua Lederberg worked with Edward Feigenbaum, a computer scientist, on an early effort to apply artificial intelligence techniques to create software to act as a kind of medical expert. John Chowning, a musicologist, … was invited to use the mainframe computer at the laboratory late at night when the demand was light, and his group went on to pioneer FM synthesis, a technique for creating sounds that transforms the quality, or timbre, of a simple waveform into a more complex sound. (The technique was discovered by Dr. Chowning at Stanford in 1973 and later licensed to Yamaha.)”
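
For readers curious about the FM synthesis technique mentioned above, the basic idea can be sketched in a few lines of Python: a carrier sine wave whose phase is modulated by a second sine wave yields a far richer timbre than either wave alone. This is only a minimal illustration; the frequencies, modulation index, and decay envelope below are assumptions chosen for the example, not Chowning's actual parameters.

```python
# A minimal sketch of two-operator FM synthesis (illustrative values only).
import wave

import numpy as np

sample_rate = 44100       # samples per second
duration = 2.0            # seconds
f_carrier = 440.0         # carrier frequency in Hz (the perceived pitch)
f_modulator = 220.0       # modulator frequency in Hz
index = 3.0               # modulation index; larger values add more sidebands

t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)

# The modulator sine is added to the carrier's phase, enriching its spectrum.
signal = np.sin(2 * np.pi * f_carrier * t
                + index * np.sin(2 * np.pi * f_modulator * t))

# A simple exponential decay gives the tone a bell-like character.
signal *= np.exp(-2.0 * t)

# Scale to 16-bit samples and write a mono WAV file with the standard library.
with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sample_rate)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Sweeping the modulation index over time, rather than holding it fixed as it is here, is what gives FM tones their characteristic evolving timbres.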


Those are all impressive achievements, but where does research on artificial intelligence stand? Markoff concludes that “it is still an open question. In 1978, Dr. McCarthy wrote, ‘human-level A.I. might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and 3 Manhattan Projects.’” If AI is a subject that interests you, you might also want to read my post entitled Superintelligent Computers. In some ways, the fact that we haven’t been able to duplicate human-level intelligence artificially is comforting. The fact remains, however, that most computer scientists believe that day will come. Although creating a cognitive machine will be a significant achievement, I tend to agree with Stanislas Dehaene that man’s greatest invention was learning to read [“Humanity’s greatest invention,” a book review by Susan Okie, Washington Post, 29 November 2009]. Dehaene, a renowned cognitive neuroscientist, wrote a book entitled Reading in the Brain that examines how the brain acquires reading skills. About the book, Okie writes:


“About 5,000 years ago, societies in ancient Sumeria, China and South America invented writing, and in the millennia since, the ability to read has propelled human intellectual and cultural development, vastly expanding our capacity to learn, create, explore and record what we think, feel and know. Reading supplies our brains with an external hard drive and gives us access to our species’s past: In the words of Francisco de Quevedo, it enables us ‘to listen to the dead with our eyes.’ But how, in such a short time, did the human species evolve this unique skill, one that requires the brain to decode written words visually and process their sounds and sense rapidly? In this fascinating and scholarly book, French neuroscientist Stanislas Dehaene explains what scientists now know about how the human brain performs the feat of reading, and what made this astonishing cultural invention biologically possible.”


In a post entitled The Power of Words, I discussed how the spoken word can stir men’s souls. But spoken words that go unrecorded cannot stir later generations, because they leave no lasting record. That is why reading and writing together constitute humanity’s greatest invention. Okie continues:


“Presented with a word’s image on the retina, average readers of English can, within a few 10ths of a second, match it with one of 50,000 or more words stored in their mental dictionaries, comprehend its meaning in context, and proceed seamlessly to the next word. Amazingly, most children become proficient readers during elementary school (although learning to read Italian is easier, and learning to read Chinese harder, than learning to read English). … ‘Only a stroke of good fortune allowed us to read,’ Dehaene writes near the end of his tour of the reading brain. It was Homo sapiens’s luck that in our primate ancestors, a region of the brain’s paired temporal lobes evolved over a period of 10 million years to specialize in the visual identification of objects. Experiments in monkeys show that, within this area, individual nerve cells are dedicated to respond to a specific visual stimulus: a face, a chair, a vertical line. Research suggests that, in humans, a corresponding area evolved to become what Dehaene calls the ‘letterbox,’ responsible for processing incoming written words. Located in the brain’s left hemisphere near the junction of the temporal and occipital lobes, the letterbox performs identical tasks in readers of all languages and scripts.”


Okie distills Dehaene’s explanation of how children (or adults, I presume) learn to read:


“Children learn reading in a stepwise process: first, awareness that words are made up of phonemes or speech sounds (ba, da); then the discovery that there’s a correspondence between these speech sounds and pairs or groups of letters. Later the child begins to recognize entire words, and after a few years, reading speed becomes independent of word length. Dehaene deplores the whole-language approach to teaching reading in which beginning readers are presented with entire words or phrases in the hope of fostering earlier comprehension of text. He cites research showing that children who first learn which sounds are represented by which letters, and how pairs or groups of letters correspond to speech sounds, make steadier progress and achieve better reading scores than those taught using the whole-language method. He also notes the success of teaching methods that incorporate multiple senses and motor gestures, such as those used in Montessori schools. For example, in preparation for learning to read, young Montessori students are often asked to trace with their fingers the shapes of large letters cut out of sandpaper. The exercise makes use of vision, touch and spatial orientation, as well as mimicking the gestures used to print each letter.”


No discussion of how the brain learns to read would be complete without some attention to why some people have difficulty acquiring that skill. Okie provides a brief discussion of dyslexia and notes that “there is no prospect of a cure.” She concludes:


“Reading, Dehaene writes, is ‘by far the finest gem’ in humanity’s cultural storehouse, and judging by the ubiquity of electronic messages and Web surfing, it’s a skill no less essential in the digital age than it was during the age of print.”


Clearly, there would have been no digital age had there been no age of print. The pioneering researchers at SAIL owe more to the inventors of alphabets than they do to the discoverers of electricity. With electronic reading devices like Amazon’s Kindle and Sony’s Reader among the hottest gifts this Christmas season, it’s clear that humanity’s greatest invention is thriving in the digital age. Reading and writing will remain critical to humanity’s progress, and information-age organizations are doing what they can to preserve man’s written past (see my posts entitled Books Forever and The United Nations’ “Alexandrian Library”). A Japanese proverb says, “One written word is worth a thousand pieces of gold.” That assertion remains true even at a time when an ounce of gold is selling at ridiculously high prices.
