
Artificial Intelligence: The Past and Future, Part 1

June 5, 2012


Earlier this year, the staff at bit-tech published a short, but excellent, history of artificial intelligence (AI). [“The story of artificial intelligence,” 19 March 2012] The first installment, entitled “The genesis of artificial intelligence,” begins “in the pale light of a laboratory” where “a white humanoid quietly contemplates a series of objects.” That “humanoid” was Asimo (an acronym which stands for Advanced Step in Innovative MObility), a robot created by Japanese scientists. The robot was neither big nor menacing (“standing some four feet tall”), but it was “capable of recognising objects, faces, hand gestures and speech.” The article continues:

“While robots and benevolent computers have existed in literature and philosophy for centuries, it was only in the years following World War II that artificial intelligence began to move from the realms of science fiction to reality – and it all began in England, with a quiet game of chess. English mathematician Alan Turing, whose code breaking work at Bletchley Park played an essential part in WWII, was a key figure in the early days of computer science. In his lectures and books, Turing put forward his theory of what could be defined as an ‘intelligent computer’ – now commonly known as the ‘Turing test’ – which posited that, if a machine could communicate convincingly enough to fool a casual observer (over a teletype machine in the mid-20th century, and via, say, Skype today), then it could be considered artificially intelligent.”

For those unfamiliar with the Turing Test, it comes from a 1950 paper by Alan Turing entitled “Computing Machinery and Intelligence.” It is a proposed test of a computer’s ability to demonstrate intelligence. As described in Wikipedia, a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to test the machine’s intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen (or, as the above article noted, the teletype machine Turing originally suggested). Interestingly, Turing felt the question of whether machines could think was itself “too meaningless” to deserve discussion. Unfortunately, Turing didn’t live to see the emergence of the information age. He died in 1954 at the age of 41.
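
To make the protocol concrete, here is a minimal, toy sketch of the imitation game in Python. It simply assumes a judge exchanging typed questions with two hidden respondents over a text-only channel; the respondent functions and the judge callback are hypothetical placeholders rather than a real chatbot or a faithful reconstruction of Turing’s setup.

```python
# A toy, text-only version of the "imitation game": a judge exchanges typed
# questions with two hidden respondents (one human, one program) and then
# guesses which label belongs to the machine. The respondent functions and
# the judge callback are hypothetical placeholders, not a real chatbot.
import random


def machine_respondent(question: str) -> str:
    # Placeholder: a real contestant program would try to produce a human-like reply.
    return "That is an interesting question; I would need to think about it."


def human_respondent(question: str) -> str:
    # Placeholder: in a real test this reply would be typed by a person at a terminal.
    return input(f"(human respondent) {question}\n> ")


def imitation_game(questions, judge):
    # Randomly assign the hidden labels so the judge cannot rely on position.
    labels = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        labels = {"A": human_respondent, "B": machine_respondent}

    # The conversation is limited to a text-only channel, as Turing proposed.
    transcript = [(label, q, respond(q))
                  for q in questions
                  for label, respond in labels.items()]

    guess = judge(transcript)  # the judge names the label they believe is the machine
    truth = "A" if labels["A"] is machine_respondent else "B"
    return guess != truth      # True means the machine fooled the judge
```

Returning to bit-tech’s history, the article continues: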

“With computer science still in its infancy, Turing recognized that the possibility of a machine capable of conversation was still decades away. Instead, he concentrated on creating a program that could play chess. Lacking a sufficiently powerful computer, Turing wrote the program by hand. In 1952, he tested the routine against a human opponent, Alick Glennie, with Turing acting as the CPU, with each move taking him half an hour to process. The program lost within 29 moves. Similarly, in 1952, American scientist and engineer Arthur L Samuel created a checkers program for the IBM 701, the first commercial computer. This apparently trivial pursuit actually had a useful purpose: by ‘teaching’ a computer to play a simple strategy game like checkers, he hoped to create a program which could be used to solve other, more complex problems. Turing and Samuel’s work was a stunning demonstration of the possibilities of computer programming. Suddenly, computers were capable of more than just solving complicated sums.”

In the 1950s, science fiction was really coming into its own, and machines capable of thinking certainly played a role in many a plot. Sometimes these machines were friendly towards their makers, but often they became smarter than those who created them and soon turned on the human race. As a result, many people developed a love/hate relationship with thinking machines. The second segment of bit-tech’s history of AI is entitled “Artificial intelligence gets its name, plays lots of chess.” It continues:

“In 1965, scientist Herbert Simon optimistically declared that ‘machines will be capable, within twenty years, of doing any work a man can do,’ while in a 1970 article for Life magazine Marvin Minsky claimed that ‘in from three to eight years we will have a machine with the general intelligence of an average human being.’ Unfortunately, that initial burst of optimism felt between the late 50s and early 60s would soon dissipate. In 1970, British mathematician Sir James Lighthill wrote a highly critical report on AI research, stating that ‘in no part of the field have discoveries made so far produced the major impact that was then promised.’ The subsequent withdrawal of vital funds in both the US and UK dealt a serious blow to AI research, leading to the first ‘AI winter’ which would last until the early 80s.”

One must remember that during the so-called “AI winter” computers were still massive and expensive; they filled rooms. The article notes that what changed winter to spring was “new advances in integrated circuit technology,” thanks to which “research into what was termed ‘expert systems’ became a financially viable solution for business and industry.” Those advances also ushered in the age of personal computers. No longer did a computer need to fill a room; it could fit on your desk. Unfortunately, this period turned out to be a false spring, and a second AI winter followed. The article continues:

“Sadly, this fresh period of research would also prove to be short lived. Expert systems, initially seen as a hugely beneficial corporate tool, ultimately proved to be expensive, cumbersome and difficult to update, and the associated companies that sprang up like daisies in the early part of the decade had all collapsed by 1987. These apparent ‘boom and bust’ periods seen throughout the history of AI research earned it something of a bad reputation; in 2005, New York Times writer John Markoff said ‘at its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.’”

Rather than trying to make machines smart in a general sense, researchers concentrated on making machines do smart things. By focusing on doing one thing well (in this case, playing chess), they kept the field of AI alive and moving forward. The article continues:

“The attempt to create a chess program that could play at a grandmaster level, an endeavor begun over 50 years before by Alan Turing, was still proving elusive in the late 90s. Philosopher Hubert Dreyfus predicted that a computer could never play chess at the same level as a human because it couldn’t distinguish between strategically useful and dangerous areas of the board. For many years, Dreyfus’s predictions appeared to hold true. IBM’s machine Deep Thought lost against world chess champion Garry Kasparov in 1989. But in February 1996, the first signs of progress appeared when Deep Thought’s successor, Deep Blue, played reigning world chess champion Garry Kasparov. While Kasparov went on to win the match overall, Deep Blue succeeded in securing victory in one game, therefore becoming the first computer to beat a world chess champion. Deep Blue’s dedicated hardware used 32 parallel processors, and was capable of calculating 200 million possible moves per second. ‘It makes up for strategic blindness with brute-force enumeration of tactical possibilities,’ wrote New Scientist’s Donald Michie, shortly after Kasparov’s defeat in 1997. As professor Noel Sharkey put it, ‘A computer like Deep Blue wins by brute force, searching quickly through the outcomes of millions of moves. It is like arm-wrestling with a mechanical digger.’”
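
To make the phrase “brute-force enumeration of tactical possibilities” a little more concrete, here is a minimal sketch of the kind of exhaustive game-tree search being described: plain minimax, which scores every line of play to a fixed depth and keeps the move with the best guaranteed outcome. Deep Blue’s actual search was vastly more sophisticated and ran on dedicated hardware; the GameState interface assumed below (legal_moves(), apply(), evaluate(), is_terminal()) is a hypothetical stand-in for a real engine’s move generator and position evaluator.

```python
# A toy illustration of brute-force game-tree search (plain minimax).
# GameState and its methods are hypothetical stand-ins for a real chess
# engine's move generator and position evaluator.

def minimax(state, depth, maximizing):
    """Return the best evaluation the side to move can force within `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # static score of the position
    scores = [
        minimax(state.apply(move), depth - 1, not maximizing)
        for move in state.legal_moves()
    ]
    return max(scores) if maximizing else min(scores)

def best_move(state, depth):
    """Enumerate every legal move and keep the one with the best minimax score."""
    return max(
        state.legal_moves(),
        key=lambda move: minimax(state.apply(move), depth - 1, maximizing=False),
    )
```

Even this naive version makes the article’s point: the apparent skill comes from sheer enumeration, and the number of positions examined explodes with search depth, which is why Deep Blue needed dedicated hardware capable of evaluating some 200 million moves per second.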

That kind of brute-force computing still provides the edge that makes AI a possibility. With affordable parallel processing replacing huge, expensive supercomputers for most research, a new age of AI has begun. Bit-tech’s next segment is entitled “Artificial intelligence, perception, and achievements.” The article notes that humans seem to be fascinated with human-looking smart machines, but such machines probably don’t represent the future of AI. “Doctor Marvin Minsky, an important voice of optimism in the early days of AI, was highly critical of such projects. Speaking to Wired magazine in 2003, he was particularly scornful of the field of robotics. ‘The worst fad has been these stupid little robots,’ said Minsky. ‘Graduate students are wasting three years of their lives soldering and repairing robots, instead of making them smart. It’s really shocking.’” The Japanese seem to have a particular fascination with robots — probably for a good reason. With Japan’s population aging, its scientists understand that some retiring workers must be replaced with robots if the Japanese economy is going to grow. For more on that topic, read my post entitled Demographics and Robots.


Minsky’s criticism of robots (whether they be smart or dumb) is mostly academic. Manufacturers are clamoring for smart robots that can increase production output as well as quality, detect when their parts are nearing failure and alert maintenance personnel, and change what they are producing with a few strokes on a computer keyboard. Most of those robots, however, don’t look remotely human. Minsky would prefer that the smartest research minds continue to pursue the Holy Grail of AI — self-awareness. How to achieve that goal has sparked both philosophical and scientific debate. That’s where we’ll begin the final part of this discussion.
