
Artificial Intelligence and the Future

May 9, 2011


Back in February, an IBM supercomputer named Watson beat two very bright human beings on the television game show “Jeopardy!” The three-day event made headlines around the world, and IBM has since capitalized on Watson’s success through a series of advertisements. Remarking on Watson’s triumph, innovator and entrepreneur Ray Kurzweil wrote, “The point has been made: Watson can compete at the championship level—and is making it more difficult for anyone to argue that there are human tasks that computers will never achieve.” [“When Computers Beat Humans on Jeopardy,” The Wall Street Journal, 17 February 2011] Kurzweil is famous for his notion of the “singularity” — the “revolutionary transition [point] when humans and/or machines start evolving into immortal beings with ever-improving software.” [“The Future Is Now? Pretty Soon, at Least,” by John Tierney, New York Times, 3 June 2008] In scientific terms, a singularity is an event horizon after which things change so much that no credible predictions can be made about the future. Kurzweil believes that Watson brings us one step closer to the singularity he predicted. He continues:

“‘Jeopardy!’ involves understanding complexities of humor, puns, metaphors, analogies, ironies and other subtleties. Elsewhere, computers are advancing on many other fronts, from driverless cars (Google’s cars have driven 140,000 miles through California cities and towns without human intervention) to the diagnosis of disease. … With computers demonstrating a basic ability to understand human language, it’s only a matter of time before they pass the famous ‘Turing test,’ in which ‘chatbot’ programs compete to fool human judges into believing that they are human.”

For those unfamiliar with the Turing Test, it comes from a 1950 paper by Alan Turing entitled “Computing Machinery and Intelligence.” It is a proposed test of a computer’s ability to demonstrate intelligence. As Wikipedia describes it: “a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to test the machine’s intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen (Turing originally suggested a teletype machine, one of the few text-only communication systems available in 1950).” Interestingly, Turing felt the question of whether machines could think was itself “too meaningless” to deserve discussion. Unfortunately, Turing didn’t live to see the emergence of the information age. He died in 1954 at the age of 41. Kurzweil continues:
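Turing’s setup is easy to state as a protocol, and a toy simulation makes the “blind” part of the test concrete. The Python sketch below is only an illustration of the structure: the judge, the two reply functions, and the pass criterion are simplified placeholders, not Turing’s formal statement.

```python
import random

def turing_trial(judge, human_reply, machine_reply, questions):
    """One blind trial of the text-only imitation game: the judge sees two
    unlabeled transcripts and must guess which party is the machine."""
    # Randomly assign the hidden labels A and B.
    labels = ["A", "B"]
    random.shuffle(labels)
    parties = {labels[0]: human_reply, labels[1]: machine_reply}
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in parties.items()}
    guess = judge(transcripts)  # the judge names the label it believes is the machine
    return guess == labels[1]

# Toy stand-ins: real confederates and chatbots are, of course, far richer.
human = lambda q: "Hmm, let me think about that..."
machine = lambda q: "Hmm, let me think about that..."
judge = lambda transcripts: random.choice(list(transcripts))

# If judges guess correctly only about half the time over many trials,
# the machine is said to pass.
trials = [turing_trial(judge, human, machine, ["What is irony?"]) for _ in range(1000)]
print(sum(trials) / len(trials))  # ≈ 0.5 when the judge cannot tell
```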

“If Watson’s underlying technology were applied to the Turing test, it would likely do pretty well. Consider the annual Loebner Prize competition, one version of the Turing test. Last year, the best chatbot contestant fooled the human judges 25% of the time. Perhaps counterintuitively, Watson would have to dumb itself down in order to pass a Turing test. After all, if you were talking to someone over instant messaging and they seemed to know every detail of everything, you’d realize it was an artificial intelligence (AI). A computer passing a properly designed Turing test would be operating at human levels. I, for one, would then regard it as human.”

Kurzweil writes that he expects a computer to become “humanized” within the next twenty years. He concludes:

“By the time the controversy dies down and it becomes clear that nonbiological machine intelligence has become equal to biological human intelligence, the AIs will already be thousands of times smarter than us. But keep in mind that this is not an alien invasion from Mars. We’re creating these technologies to extend our reach. The fact that millions of farmers in China can access most of human knowledge with devices they carry in their pockets is a testament to the fact that we are doing this already. Ultimately, we will vastly extend and expand our own intelligence by merging with these tools of our own creation.”

In a review of Brian Christian’s book The Most Human Human, Julian Baggini notes that Christian, like Kurzweil, doesn’t see the rise of machines as a threat to humanity. [“More Than Machine,” The Wall Street Journal, 8 March 2011] He writes:

“Mr. Christian thinks that seeing a computer pass the Turing Test, ‘and the reality check to follow, might do us a world of good.’ The recent victory of IBM’s Watson supercomputer over a pair of ‘Jeopardy’ champions suggests that he may be right. Watson showed an unprecedented ability to understand certain complexities of ambiguity and context. Following Mr. Christian’s advice, we should not see this victory as a threat but as a chance to learn even more about who we are. Every technology that seems to dehumanize us is an opportunity to rehumanize ourselves.”

Baggini argues, “Any old computer can keep score; only humans can rate the quality of the game.” Kurzweil would probably counter that computers are approaching the day when they, too, can rate the quality of the game. Baggini continues:

“The exchanging of roles between human and machine is at the heart of Mr. Christian’s project, which weaves its narrative around the Loebner Prize, awarded annually to the world’s ‘most human’ computer. … Mr. Christian entered the 2009 Loebner contest in Brighton, England, not with a computer that he had programmed but as a ‘confederate’: one of the people that the judges have to try to distinguish from the machines. The contest’s computer winner comes away with the title ‘Most Human Computer’ (though no computer, as yet, has passed the Turing Test). But the confederates also compete to be named ‘most human human.’ By exploring what responses seem ‘most human,’ Mr. Christian cleverly suggests that the Turing Test not only tells us how smart computers are but also teaches us about ourselves. … Remaining alive to what is mechanical or original in our own behavior can preserve a sense of human difference.”

Whereas Kurzweil claims that Watson would have to dumb itself down in order to pass a Turing test, Christian argues that one becomes the “most human human” by adding context to answers. Baggini explains:

“Mr. Christian covers all the major advances in the history of artificial intelligence in a similar way, drawing lessons for how human intelligence works. One recurring theme is how computers must simplify, by converting information to digital bits and then compressing data. Our brains do this too, of course, converting experience into neural patterns and retaining only what is salient. But for all the power of the analogy between the mind and computer software, machines struggle with the incredibly subtle connections we make through richness of context, allusion and ambiguity. Computers prefer information that depends minimally on the precise context in which it is communicated. Thus one way that Mr. Christian found to seem ‘more human’ during the competition was to provide answers with an excess of context.”
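Christian’s point about compression discarding context is easy to demonstrate. In the toy sketch below, reducing sentences to a “bag of words” (one common simplification, chosen here purely for illustration) erases exactly the contextual distinctions humans find effortless:

```python
from collections import Counter

def bag_of_words(sentence):
    """Reduce a sentence to word counts, discarding order and context."""
    return Counter(sentence.lower().replace(",", "").split())

# Two sentences with opposite meanings...
a = "the dog bit the man"
b = "the man bit the dog"

# ...collapse to the identical representation once context is stripped.
print(bag_of_words(a) == bag_of_words(b))  # True
```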

Spencer E. Ante notes that Watson’s victory over Ken Jennings and Brad Rutter “wasn’t the first time that humans have taken it on the chin from technology. In 1997, an IBM supercomputer named Deep Blue defeated Garry Kasparov, then considered the greatest living chess player.” [“Computer Thumps ‘Jeopardy’ Minds,” The Wall Street Journal, 17 February 2011]. He continues:

“But Watson confirms the opening of a new era in the age-old contest between man and machine. Deep Blue won at chess by crunching millions of mathematical possibilities to determine the best possible move. Watson, named after IBM founder Thomas J. Watson, was designed instead to understand the more complex domain of words, language and human knowledge. … After the match ended, David Ferrucci, the IBM scientist who led the development of Watson, said the machine would help people reach a greater understanding of humanity but wouldn’t be a substitute. ‘Human intelligence is a whole other leap,’ he said. ‘A computer doesn’t know what it means to be human.’ Mr. Jennings, famous for winning 74 games in a row on the TV quiz show, saw the moment’s dark humor. Acknowledging defeat in his final answer, Mr. Jennings wrote on the computer screen, ‘I, for one, welcome our new computer overlords.'”

A few days before the Jeopardy show aired, author Richard Powers rhetorically asked, “What is Watson?” [“What Is Artificial Intelligence?,” The New York Times, 5 February 2011] His answer is that Watson became much more than a normal computer “with the extravagant addition of many multiple ‘expert’ analyzers — more than 100 different techniques running concurrently to analyze natural language, appraise sources, propose hypotheses, merge the results and rank the top guesses.” He continues:

“This raises the question of whether Watson is really answering questions at all or is just noticing statistical correlations in vast amounts of data. But the mere act of building the machine has been a powerful exploration of just what we mean when we talk about knowing.”
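Powers’s description suggests a “generate, score, merge, rank” pipeline. The sketch below is a minimal illustration of that idea only: the two analyzers, the tiny source passages, and the weights are all invented here, and Watson’s actual evidence-merging model was vastly more elaborate.

```python
# Invented mini-corpus of "retrieved passages" about each candidate answer.
SOURCES = {
    "Thomas J. Watson": "IBM founder Thomas J. Watson gave the company its name",
    "Deep Blue": "Deep Blue was the IBM chess computer that beat Garry Kasparov",
}

def source_support(clue, cand):
    """Analyzer 1: count clue words found in a passage about the candidate."""
    passage = set(SOURCES.get(cand, "").lower().split())
    return len(set(clue.lower().split()) & passage)

def type_match(clue, cand):
    """Analyzer 2: crude check that a clue about a person yields a person-like answer."""
    return 1.0 if "founder" in clue.lower() and cand[0].isupper() else 0.0

def merge_and_rank(clue, candidates, scorers, weights):
    """Combine every analyzer's evidence into one ranked list of guesses."""
    scored = [(sum(w * s(clue, c) for s, w in zip(scorers, weights)), c)
              for c in candidates]
    return sorted(scored, reverse=True)

clue = "This IBM founder gave his name to a question-answering computer"
print(merge_and_rank(clue, list(SOURCES), [source_support, type_match], [1.0, 0.5]))
# [(4.5, 'Thomas J. Watson'), (2.5, 'Deep Blue')]
```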

Before the contest was held, Powers wasn’t sure whether man or machine would prove the victor in the short term. He concluded, however, that it didn’t really matter. He explains:

“The real showdown is between us and our own future. Information is growing many times faster than anyone’s ability to manage it, and Watson may prove crucial in helping to turn all that noise into knowledge. … Like so many of its precursors, Watson will make us better at some things, worse at others. … History is the long process of outsourcing human ability in order to leverage more of it. We will concede this trivia game (after a very long run as champions), and find another in which, aided by our compounding prosthetics, we can excel in more powerful and ever more terrifying ways.”

All of the above authors share the view that by conceding some advantages to computers and learning how to bridle those capabilities, humankind can concentrate on the things that make us better beings. We become partners in the future rather than maintaining a master/servant relationship. Clive Thompson writes, “We live in an age of increasingly smart machines. In recent years, engineers have pushed into areas, from voice recognition to robotics to search engines, that once seemed to be the preserve of humans.” [“What Is I.B.M.’s Watson?,” The New York Times, 16 June 2010] Thompson reports that IBM’s David Ferrucci took on the challenge of creating Watson because he was more of a fan of Star Trek than Jeopardy. “The computer on ‘Star Trek’ is a question-answering machine,” Ferrucci told Thompson. “It understands what you’re asking and provides just the right chunk of response that you needed. When is the computer going to get to a point where the computer knows how to talk to you? That’s my question.” Thompson continues:

“The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. … In theory, this sort of statistical computation has been possible for decades, but it was impractical. Computers weren’t fast enough, memory wasn’t expansive enough and in any case there was no easy way to put millions of documents into a computer. All that changed in the early ’00s. Computer power became drastically cheaper, and the amount of online text exploded as millions of people wrote blogs and wikis about anything and everything; news organizations and academic journals also began putting all their works in digital format. What’s more, question-answering experts spent the previous couple of decades creating several linguistic tools that helped computers puzzle through language — like rhyming dictionaries, bulky synonym finders and ‘classifiers’ that recognized the parts of speech.”
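The statistical association Thompson describes can be illustrated in a few lines of Python. The sketch below scores words by pointwise mutual information over a tiny invented corpus; it is a toy stand-in for the idea, not the method IBM used.

```python
import math
from collections import Counter

def topic_associations(docs, topic_word):
    """Rank words by how much more often they co-occur with topic_word
    than independence would predict (pointwise mutual information)."""
    doc_freq, cooc = Counter(), Counter()
    for doc in docs:
        words = set(doc.lower().split())
        doc_freq.update(words)
        if topic_word in words:
            cooc.update(words - {topic_word})
    n, topic_n = len(docs), doc_freq[topic_word]
    pmi = {w: math.log(c * n / (topic_n * doc_freq[w])) for w, c in cooc.items()}
    return sorted(pmi.items(), key=lambda kv: -kv[1])

docs = [
    "jeopardy contestants answer trivia clues",
    "watson answered jeopardy clues with statistics",
    "chess engines search millions of positions",
    "chess contestants study opening moves",
]
# "clues" scores high (it always co-occurs with "jeopardy"); "contestants",
# which also appears in a chess document, scores no better than chance.
print(topic_associations(docs, "jeopardy"))
```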

Nobody is claiming that Watson is final proof that artificial intelligence has been conquered. Scientists continue to struggle with “issues ranging from the more theoretical such as algorithms capable of solving combinatorial problems to robots that can reason about emotions, systems that use vision to monitor activities, and automated players that learn how to win in a given situation.” [“Artificial Intelligence for Improving Data Processing,” Science Daily, 11 April 2011] The article reports that “more and more emphasis is being placed on developing systems capable of learning and demonstrating intelligent behavior without being tied to replicating a human model.” The article concludes that “the future AI will tackle more daring concepts such as the incarnation of intelligence in robots, as well as emotions, and above all consciousness.” When Kurzweil talks about computers “operating at human levels,” he is undoubtedly equating that capability with consciousness. When that milestone is reached, we might indeed find ourselves at a singularity.
