
Artificial Intelligence Pioneer Wins 2011 A.M. Turing Award

March 20, 2012


Most informed people have heard of slain Wall Street Journal reporter Daniel Pearl, who was kidnapped and beheaded in Pakistan a decade ago. Less famous (in most circles) is Pearl’s father, Judea Pearl, a professor of Computer Science and Statistics and director of the Cognitive Systems Laboratory at UCLA. Professor Pearl recently won the A.M. Turing Award, conferred by the Association for Computing Machinery. The Turing Award is “often called the computing equivalent to the Nobel Prize.” [“Judea Pearl wins Turing Award for work on AI reasoning, now looking at ‘moral’ computers,” by Adi Robertson, The Verge, 16 March 2012] Robertson reports that Pearl won this prestigious recognition for his “work on probability, which laid the foundations for much of modern artificial intelligence by creating a way for computers to process uncertainty or cause-and-effect relationships.” She explains that “the Turing Award, which has honored major contributions to the field of computing since 1966, should not be confused with the Loebner Prize, which is offered to conversational computers that can pass its version of the Turing Test.” As I wrote in a previous post:

“For those unfamiliar with the Turing Test, it comes from a 1950 paper by Alan Turing entitled ‘Computing Machinery and Intelligence.’ It is a proposed test of a computer’s ability to demonstrate intelligence. As described in Wikipedia, a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to test the machine’s intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen (Turing originally suggested a teletype machine, one of the few text-only communication systems available in 1950). Interestingly, Turing felt the question of whether machines could think was itself ‘too meaningless’ to deserve discussion. Unfortunately, Turing didn’t live to see the emergence of the information age. He died in 1954 at the age of 41.”

Pearl is a fitting recipient of an award named in honor of another pioneer in computing. In another article about Pearl’s recent honor, Jason Koebler reports, “Pearl developed two branches of calculus that opened the door for modern artificial intelligence, such as the kind found in voice recognition software and self-driving cars.” [“Artificial Intelligence Pioneer: We Can Build Robots With Morals,” U.S. News & World Report, 16 March 2012] Koebler continues:

“Vint Cerf, considered one of the ‘fathers of the Internet,’ said in a statement that Pearl’s development of probabilistic and causal reasoning changed the world. ‘His accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence and led to extraordinary achievements in machine learning,’ he said. ‘They have redefined the term “thinking machine.”’ The calculus Pearl invented propels probabilistic reasoning, which allows computers to establish the best courses of action given uncertainty, such as a bank’s perceived risk in loaning money when given an applicant’s credit score.”
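Koebler’s bank-loan example can be made concrete with Bayes’ rule, the kind of probability update at the heart of the reasoning Cerf describes. The numbers below are entirely invented for illustration; they are not from the article.

```python
# Hypothetical illustration: Bayes' rule turns a "maybe" into a number.
# All probabilities here are made up for the example.

def posterior_default(p_default, p_lowscore_given_default, p_lowscore_given_ok):
    """P(default | low credit score) via Bayes' rule."""
    p_ok = 1.0 - p_default
    evidence = (p_lowscore_given_default * p_default
                + p_lowscore_given_ok * p_ok)
    return p_lowscore_given_default * p_default / evidence

# Assume 5% of applicants default, and low scores are far more common
# among defaulters (80%) than among reliable borrowers (20%).
risk = posterior_default(0.05, 0.80, 0.20)
print(f"P(default | low score) = {risk:.2f}")  # roughly 0.17
```

A low score raises the estimated risk of default from 5% to about 17%: not a yes or a no, but a quantified “maybe” a lender can act on.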

Even school children understand the value of mathematics when straightforward processes lead to conclusive answers: how many pieces of candy will I have left if I start with 5 pieces and give 2 to my friend? We all know, however, that life is not straightforward and that uncertainty must be confronted on a daily basis. Pearl’s contributions help us deal with that uncertainty. Speaking of Pearl’s work, Alfred Spector, vice president of research and special initiatives at Google, said, “Before Pearl, most AI systems reasoned with Boolean logic—they understood true or false, but had a hard time with ‘maybe’.” What is Boolean logic? Wikipedia explains it this way:

“Boolean algebra, as developed in 1854 by George Boole in his book An Investigation of the Laws of Thought, is a variant of ordinary elementary algebra differing in its values, operations, and laws. Instead of the usual algebra of numbers, Boolean algebra is the algebra of truth values 0 and 1, or equivalently of subsets of a given set. The operations are usually taken to be conjunction ∧, disjunction ∨, and negation ¬, with constants 0 and 1. And the laws are definable as those equations that hold for all values of their variables, for example x∨(y∧x) = x. Applications include mathematical logic, digital logic, computer programming, set theory, and statistics. … Boole’s algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.”
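Because Boolean variables take only the values 0 and 1, a law like the absorption law quoted above can be checked exhaustively. A quick sketch in Python, with `|` standing in for ∨ and `&` for ∧:

```python
# Check the absorption law x ∨ (y ∧ x) = x for every truth assignment.
# With only two values per variable, four cases cover everything.
for x in (0, 1):
    for y in (0, 1):
        assert (x | (y & x)) == x

print("absorption law holds for all 0/1 assignments")
```

This exhaustive style of verification is exactly what makes Boolean laws so tractable compared with laws over ordinary numbers, where infinitely many values would have to be checked.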

Since Boolean algebra uses the constants 0 and 1, it’s easy to understand why it has been extensively used in computer programming. The Wikipedia article continues:

“In the 1930s, while studying switching circuits, Claude Shannon observed that one could … apply the rules of Boole’s algebra … to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. … Efficient implementation of Boolean functions is a fundamental problem in the design of combinatorial logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.”
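Shannon’s observation, that the algebra of 0 and 1 describes circuits of logic gates, can be illustrated with a half-adder. This minimal example is my own sketch, not from the article:

```python
# A half-adder built from Boolean gates, checked against ordinary
# arithmetic: XOR produces the sum bit, AND produces the carry bit.

def half_adder(a: int, b: int) -> tuple:
    """Return (sum_bit, carry_bit) for one-bit inputs a and b."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b  # the gate circuit agrees with addition
```

Chaining such gate-level Boolean functions is, in essence, how the arithmetic units of every modern processor are designed and verified.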

The article goes on to state that “Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first order logic.” The shortcomings of Boolean logic limited progress in artificial intelligence until Pearl came along. Koebler notes that “the other calculus he invented allows computers to determine cause-and-effect relationships.” That is a capability that is critical for many business processes. Koebler reports that, despite his age (75), “Pearl … is currently working on a branch of calculus that he says will allow computers to consider the moral implications of their decisions.” I admit that sounds a bit like the basis for a good science fiction story. The remainder of Koebler’s article consists of a question-and-answer session with Pearl. The first question Koebler posed was: “Artificial intelligence has improved by leaps and bounds over the past few years—what’s the greatest hurdle for scientists working on making machines more human like?” Pearl’s response:

“There are many hurdles. There’s the complexity of being able to generalize, an array of technical problems. But we have an embodiment of intelligence inside these tissues inside our skull. It’s proof that intelligence is possible, computer scientists just have to emulate the brain out of silicon. The principles should be the same because we have proof intelligent behavior is possible. I’m not futuristic, and I won’t guess how many years it’ll take, but this goal is a driving force that’s inspiring for young people. Other disciplines can be pessimistic, but we don’t have that in the field of artificial intelligence. Step by step we overcome one problem after the other. We have this vision that miraculous things are feasible and can be emulated in a system that is more understandable than our brain.”

My impression is that Pearl is a realist like Paul Allen, who thinks that computers may surpass human intelligence but that such a day remains far off. On the other hand, Pearl expresses optimism like Ray Kurzweil, who believes that once computers surpass human intelligence, amazing discoveries will be made. To learn more about the differences between Allen’s and Kurzweil’s points of view, read my post entitled Artificial Intelligence and the Era of Big Data. The next question posed by Koebler was, “What do you think is the most impressive use of artificial intelligence that most people are familiar with?” Pearl’s response:

“I think the voice recognition systems that we constantly use, as much as we hate them, are miraculous. They’re not flawless, but what we have shows it’s feasible and could one day be flawless. There’s the chess-playing machine we take for granted. A computer can beat any human chess player. Every success of AI becomes mundane and is removed from AI research. It becomes routine in your job, like a calculator that performs arithmetic, winning in chess—it’s no longer intelligence.”

In the most recent attempt to match computer capabilities against human intelligence, Matthew Ginsberg created a computer program called Dr. Fill that competed at the American Crossword Puzzle Tournament. It finished in 141st place among a field of “600 of the nation’s best human solvers.” [“In Crosswords, It’s Man Over Machine, for Now,” by Steve Lohr, New York Times, 18 March 2012] Koebler’s next question was: “So what’s next? What are people working on that’ll be world changing?” Pearl’s response:

“I think there will be computers that acquire free will, that can understand and create jokes. There will be a day when we’re able to do it. There will be computers that can send jokes to the New York Times that will be publishable. I try to avoid watching futuristic movies about super robots, about the limitations of computers that show when the machines will try to take over. They don’t interest me.”

I’ll admit that I was surprised that Pearl indicated that the next thing computers will do to change the world is tell jokes. Obviously, his point was that a computer that can create a joke will have the kind of nuanced understanding of human behavior that could make a difference in many of the areas where we ask computers to help us solve problems. Koebler, however, zeroed in on Pearl’s comments about Terminator-like movies. He asked, “Do you think those movies scare people off? Are they detrimental to the field?” Pearl’s response:

“I think they tickle the creativity and interest of young people in AI research. It’s good for public interest, they serve a purpose. For me, I don’t have time. I have so many equations to work on.”

Apparently the good doctor doesn’t have much down time. That really isn’t too surprising; driven people often sacrifice personal time to achieve their goals. Read, for example, what Sarah E. Needleman writes about entrepreneurs [“Personal Time Gets Short Shrift,” Wall Street Journal, 10 March 2012]. Clearly, Pearl is driven by his work. That work is the focus of Koebler’s next question: “What are you working on now?” Pearl’s response:

“I’m working on a calculus for counterfactuals—sentences that are conditioned on something that didn’t happen. If Oswald didn’t kill Kennedy, then who did? Sentences like that are the building blocks of scientific and moral behavior. We have a calculus that if you present knowledge about the world, the computer can answer questions of the sort. Had John McCain won the presidency, what would have happened?”

Again, that may sound more like science fiction than mathematical fact, but the implications of that kind of work are profound. On numerous occasions in the past, I’ve trumpeted the benefits of “what if” exercises for helping companies manage future risk. A calculus for counterfactuals could have a significant impact on how risk management is approached in the future. Rather than view Pearl’s work in terms of “what if” scenarios, Koebler sees such work as creating alternative realities. He asks: “Sort of like an alternative reality?” Pearl’s response takes a different tack than I think Koebler was expecting. He states:

“It’s kind of like an alternative reality—you have to give the computer the knowledge. The ability to process that knowledge moves the computer closer to autonomy. It allows them to communicate by themselves, to take a responsibility for one’s actions, a kind of moral sense of behavior. These are issues that are interesting—we could build a society of robots that are able to communicate with the notion of morals. But we don’t have to wait until we build robots. The theory of econometric prediction is changing because we have counterfactual calculus. Should we raise taxes? Should we lower interest rates? If the government raises taxes, will that pacify the unions? It’s been a stumbling block for the past 150 years. We can assume something about reality before we take an action.”
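Counterfactual questions of the kind Pearl describes are typically answered in his causal framework by a three-step recipe: abduction (recover what was not observed), action (intervene on the model), and prediction (re-run the mechanism). The toy structural model and numbers below are invented for illustration and are not Pearl’s:

```python
# A toy structural causal model for a counterfactual query, following the
# standard three-step recipe (abduction, action, prediction).
# Invented model: spending = 100 - 2 * tax_rate + u, where u is
# unobserved noise specific to this economy.

def spending(tax_rate, u):
    return 100 - 2 * tax_rate + u

# Step 1 (abduction): from what we observed, recover the noise term u.
actual_tax, actual_spending = 20.0, 65.0
u = actual_spending - (100 - 2 * actual_tax)  # u = 5.0

# Step 2 (action): intervene by setting the tax rate to a counterfactual value.
counterfactual_tax = 25.0

# Step 3 (prediction): re-run the same mechanism with the recovered u.
would_have_spent = spending(counterfactual_tax, u)
print(f"Had taxes been {counterfactual_tax}%, spending would have been "
      f"{would_have_spent}")  # 55.0
```

The crucial step is abduction: the model answers “what would have happened?” for *this particular* economy, with its recovered noise u, rather than for an average one. That is what separates a counterfactual from an ordinary prediction.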

I find this kind of “what if” scenario building very exciting. It could help businesses, governments, and even individuals make better decisions. The thing that is likely to scare some people is that Pearl isn’t talking about computers that help people make better decisions, but computers that look at these “what if” scenarios and make autonomous decisions, based on calculated outcomes, that could affect you and me. At the very least, you have to admit that Professor Pearl’s work is thought-provoking.
