
Thinking Machines and the Future

November 14, 2012


Very few, if any, futurists doubt that machines are going to get smarter in the decades ahead. “Smartphones that talk. Self-driving cars. Robots that think.” Those are a few of the devices that Dan Gordon identifies as “fruits of artificial intelligence research that are here now or just around the corner.” [“Mind and Machine: Making Sense of Artificial Intelligence,” UCLA Magazine, 1 October 2012] Gordon reports that “UCLA scientists and engineers are in the vanguard of the study of how to build ‘thinking’ machines—and their present and possible future impact on human society.” In preparation for writing his article, he asked Siri, the AI assistant on his iPhone, if she would write an article for him about artificial intelligence — she wouldn’t. But Gordon does report that “publications as prestigious as Forbes … use … articles written not by people, but by computers, unbeknownst to the average reader.” Gordon goes on to describe a few ways that most of our lives are impacted by artificial intelligence (AI) — things like Google searches.


In addition to Google searches, he reports that “navigation systems now nearly ubiquitous in cars and on smartphones resulted from an early AI application. Credit-card authorization systems employ AI to search for unusual patterns of activity that might indicate fraud. … Then there are the more direct encounters, like the customer service ‘representatives’ who do all they can to answer your questions without putting you through to a live person.” Gordon writes, “AI is based on the notion that knowledge and thought can be represented and manipulated through computer algorithms so as to build a thinking machine.”
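
Gordon’s credit-card example hints at what searching for “unusual patterns of activity” can mean in practice. The sketch below is a minimal illustration rather than how any real authorization system works: it assumes a toy model that flags a transaction whose amount strays several standard deviations from the cardholder’s history, and every number in it is invented. Production systems weigh far more signals than amount alone.

```python
def flag_unusual(history, amount, threshold=3.0):
    """Flag `amount` if it lies more than `threshold` standard
    deviations from the mean of the cardholder's `history`."""
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5
    if std == 0:
        return amount != mean
    return abs(amount - mean) / std > threshold

past = [42.50, 18.99, 55.00, 23.75, 61.20, 35.10]  # invented history
print(flag_unusual(past, 38.00))    # False: typical spending
print(flag_unusual(past, 2400.00))  # True: sharp deviation, worth review
```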


Gordon goes on to discuss the work of Judea Pearl, a well-known UCLA professor of computer science. For more information about Professor Pearl, read my post entitled Artificial Intelligence Pioneer Wins 2011 A.M. Turing Award. Pearl’s big breakthrough in the field of artificial intelligence was giving computers the ability to deal with “maybe.” Gordon explains:

“‘Before Pearl, most AI systems understood true or false, but had a hard time with “maybe,”’ Alfred Spector, vice president of research and special initiatives at Google, noted in an ACM press release. ‘That meant that early AI systems tended to have more success in domains where things are black and white, like chess.’ But Pearl developed a method for delivering ‘maybe,’ or in scientific terms, probabilistic and causal reasoning, using what he coined a ‘Bayesian network.’ … Pearl’s work laid the foundation for computers that reason about actions and observations while assessing cause-effect relationships. The concept has found its way into a remarkable range of applications – medical diagnosis and gene mapping, credit-card fraud detection, homeland security, speech recognition systems and Google searches, to name a few. Pearl also used Bayesian networks to advance a new way of understanding and measuring causality in wide-ranging scientific disciplines such as psychology, economics, epidemiology and social sciences.”
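
To make the “maybe” concrete, here is a minimal sketch of reasoning in the Bayesian spirit: a toy two-node network in which a disease causes a symptom, with invented probabilities, updated by Bayes’ rule when the symptom is observed. A full Bayesian network generalizes this to many interconnected variables.

```python
# Toy two-node network: Disease -> Symptom. All probabilities invented.
p_disease = 0.01             # prior: P(disease)
p_sym_given_disease = 0.90   # P(symptom | disease)
p_sym_given_healthy = 0.05   # P(symptom | no disease)

# Total probability of observing the symptom at all.
p_symptom = (p_sym_given_disease * p_disease
             + p_sym_given_healthy * (1 - p_disease))

# Bayes' rule: belief in the disease after seeing the symptom.
p_disease_given_sym = p_sym_given_disease * p_disease / p_symptom
print(round(p_disease_given_sym, 3))  # 0.154
```

The answer is neither true nor false but a degree of belief, exactly the “maybe” that, as Spector notes, earlier systems could not represent.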

Gordon reports that Pearl’s work also addressed another classic AI challenge: “heuristics, or combinatorial optimization – finding efficient algorithms for problems so large that an exhaustive search isn’t possible.” In the era of big data, research of this kind is “a central concern in AI.” A minimal sketch of the idea appears below.
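
To show what a heuristic buys, here is a sketch on an invented example: greedy nearest-neighbor routing for a tiny traveling-salesman problem. Exhaustive search over n cities must compare on the order of n! routes; the greedy rule finds a reasonable tour in a handful of steps, with no guarantee of optimality, which is the essential trade of heuristic search.

```python
import math

# Toy city coordinates, invented for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_tour(start="A"):
    """Build a tour by always hopping to the nearest unvisited city."""
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda c: dist(cities[here], cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

print(greedy_tour())  # ['A', 'B', 'C', 'E', 'D'] -- fast, not provably optimal
```

Gordon continues: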

“Natural language processing presents another major AI problem, as anyone who has conversed with Siri can attest. ‘How to get a machine to really understand the meaning of a word, a sentence, a joke or an editorial the way people do is a huge challenge,’ says Michael Dyer, UCLA professor of computer science and a leader in the language-processing field. A person hearing a statement like ‘John picked up a bat and hit Bill; there was blood everywhere’ understands context and relationships enough to automatically conclude that John hit Bill with a baseball bat and that the blood belonged to Bill, although none of that is explicitly stated. ‘The more intelligent we are, the less we have to say to each other,’ Dyer explains.”
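
Dyer’s example can be made concrete. The sketch below is not a working language system; the “knowledge” entries are hand-written stand-ins for the commonsense rules a machine would have to acquire and apply on its own, which is precisely the hard part.

```python
story = "John picked up a bat and hit Bill; there was blood everywhere."

# What the sentence literally states.
stated = [
    "John picked up a bat",
    "John hit Bill",
    "there was blood everywhere",
]

# What a reader infers, and the commonsense rule each inference needs.
inferences = {
    "John hit Bill WITH the bat":
        "an object just picked up is the likely instrument of the action",
    "the bat is a club, not the animal":
        "the sense of 'bat' you pick up and swing is the object, not the creature",
    "the blood is Bill's":
        "the person who is struck is the one likely to bleed",
}

print("Stated:", *stated, sep="\n  ")
print("Inferred:")
for conclusion, rule in inferences.items():
    print(f"  {conclusion}  <- because {rule}")
```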

Gordon reports that “a third key AI problem involves vision – getting machines to recognize and understand images.” The news that a team from Google created an “artificial neural network [that] had successfully taught itself on its own to identify [cats]” was widely publicized. [“Google team: Self-teaching computers recognize cats,” by Nancy Owano, Phys.Org, 26 June 2012] The point is, research in the field of AI is making progress on all fronts. Vision-assisted AI programs are essential for many autonomous machine activities. Gordon explains:

“Among the most anticipated applications of computer vision are self-driving cars. The first vision-guided vehicles were successfully demonstrated in Munich, Germany, more than 20 years ago. [UCLA Professor Stefano] Soatto’s group built a self-driving car for the U.S. Department of Defense’s DARPA Grand Challenge in 2005, and one of the group’s company partners has since built systems for vision-based driver assistance and autonomous driving. Self-driving vehicles have been available for sale in Japan since 2006, and aren’t far from hitting the market in the United States, where Google’s driverless fleet has tallied more than 200,000 miles, including navigation on public roads.”
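
As a vastly simplified sketch of the unsupervised idea behind the Google cat result (grouping unlabeled data by similarity, with no human labels), here is one-dimensional k-means clustering on invented feature values. The real system learned visual features with a deep neural network over millions of unlabeled YouTube images; the point here is only that structure can emerge without anyone labeling the data.

```python
data = [0.9, 1.1, 1.0, 5.2, 4.8, 5.0, 0.95, 5.1]  # two natural groups
centers = [0.0, 6.0]                               # rough initial guesses

for _ in range(10):
    clusters = ([], [])
    for x in data:
        # Assign each point to its nearest cluster center.
        i = min((0, 1), key=lambda j: abs(x - centers[j]))
        clusters[i].append(x)
    # Move each center to the mean of the points assigned to it.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print([round(c, 2) for c in centers])  # two centers near 1 and 5, no labels
```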

Gordon goes on to talk a little bit about Artificial General Intelligence (AGI), whose goal is to create sentient computers (i.e., to achieve the Singularity predicted by Ray Kurzweil). Dario Nardi told Gordon that creating a sentient computer won’t be easy, even if chip speeds increase dramatically. “Just because the chip speed is going to be fast enough to duplicate a human brain in terms of performance,” Nardi said, “doesn’t mean it will have anything smart in there.” To highlight this point, Gordon embedded an amusing video in his post.


For most people in the field of AI, creating a “general” artificial intelligence is no longer the most important goal. The more immediate goal is creating programs that help humans become more efficient and effective at what they do while relieving them of tedious, error-prone jobs. This is sometimes referred to as “weak AI.” Gordon writes, “Computers are already better than humans at many things, whether it’s storing huge databases (you try memorizing thousands of phone numbers) or playing chess. On the other hand, notes Pearl, it’s hard to imagine machines outstripping comedians when it comes to something like writing jokes. ‘Humor takes a deep knowledge of oneself, the listener, the society and the context in which we are living,’ he says.”


Despite the pervasive pragmatism found in the field of AI, there remains a desire to create machines that are more human. A good example is Baxter, a robot recently introduced by Rethink Robotics Inc. that was designed to work side-by-side with humans. For more information about Baxter, read my post entitled Meet Baxter — Your New Co-worker. Gordon continues:

“AI is moving toward a ‘humanoid’ robot – one with vision and the ability to plan, reason and understand emotion; one with common sense and the ability to learn; one able to balance itself and move about. Already, whether it’s autonomous vehicles or the robots being built to provide home care for the elderly, artificially intelligent machines are increasingly acting on their own. ‘For robots to be really useful they have to learn and evolve, and these are likely to escape human control,’ observes [Charles Taylor, a professor of ecology and evolutionary biology at UCLA].”

Another example of a humanoid project comes from Russia. Billionaire Dmitry Itskov, a believer in the Singularity, recently introduced an “android surrogate, built and programmed by Moscow-based Neurobotics.” [“Russia builds its first realistic female android,” by Jason Falconer, Gizmag, 23 October 2012] Dan Rowinski agrees that autonomous “machine-based life” holds great potential. He’s just not sure whether that potential is for good or evil. He writes:

“The ability for humans to create machine-based life that thinks on its own and acts on its own has the potential to make our lives dramatically better – or worse, depending on what kind of science fiction you read. But getting there won’t be easy.” [“Futurist’s Cheat Sheet: Artificial Intelligence,” readwrite, 16 October 2012]

Indiana University Professor Colin Allen, author of Moral Machines: Teaching Robots Right from Wrong, agrees with Rowinski. He told Gordon, “The programmer can’t predict how these systems will end up behaving after certain sequences of inputs; it’s just too complex. And furthermore, the machine is reconfiguring itself as it goes, so you can get not just unexpected and unpredictable consequences, but configurations that weren’t explicitly intended by the programmer.” Gordon reports that Allen worries that “the emergence of AI systems that can adapt and learn raises the question of whether they can be designed to do the right thing in circumstances that call for ethical judgments.” For a further discussion about philosophy and AI, read my post entitled Philosophy and Artificial General Intelligence. Gordon writes, “Allen is optimistic about a future in which artificial intelligence plays a pivotal role.” Allen told Gordon, “All technologies have unintended consequences. Nobody thought to predict the amount of pollution that would result from automobiles or the percentage of our urban environment that would be built around them, but they also drive all kinds of positive economic and political outcomes. There have always been those who said various technologies would represent the end of everything good, but they’ve been proven false over and over again.”


Gordon wonders if all the hype about AI is just another prediction that sounds great but lacks substance. He writes:

“A society in which robots run things sounds like the stuff of science fiction. But is it? ‘It’s plausible that we’ll never have machines that are as intelligent as we are, but it’s also plausible that we’ll one day have machines that are much more intelligent than we are,’ says [Richard Korf, UCLA professor of computer science]. ‘If we lived in such a world, what sort of use would we be? Would the machines keep us around as pets? This is pretty far afield, but not completely inconceivable.'”

Rowinski concludes:

“Our future will no doubt be filled with very smart machines. Machines that may even seem like they are truly intelligent. But can we create machines that match or exceed human intelligence? If it’s a matter of time, it’s likely a fairly long time. Perhaps 20 years. 50 years? 200 years? Basically, we have no idea if that dream can be realized. The goal for now is to create smarter machines that increasingly take on more aspects of intelligence as we evolve them. Anybody that tells you they know the answer about when humankind will achieve true artificial intelligence is probably trying to sell you something.”

Two things can be stated with some confidence. First, AI systems will continue to play an increasingly important role in the world. Second, regardless of how long it takes, scientists will continue their quest to build “machine-based life.” I understand Rowinski’s skepticism about whether that goal will ever be achieved; but even if it isn’t, what scientists discover along the way will be beneficial.
