
Artificial Intelligence: The Past and Future, Part 2

June 6, 2012

In Part 1 of this two-part series on artificial intelligence, I discussed the history of artificial intelligence (AI) as presented by bit.tech in an excellent series of short articles. [“The story of artificial intelligence,” 19 March 2012] That post ended by noting that the search for machine cognition has not advanced as far or as fast as some scientists had hoped. The previous post began with bit.tech’s story of Honda’s impressive robot named Asimo. The article notes that Asimo’s “photogenic appearance and ability to walk with relative ease immediately captured media attention.” It goes on to comment, “What is most significant about Asimo is its ability to recognize objects, gestures and sounds; Asimo will give a handshake to an outstretched hand, respond to its name when called, and can, most remarkably, work out the identity of an unfamiliar object by comparing it to similar items in its memory banks.” Those are impressive feats, to be sure. But, as the article notes, “Despite its impressive abilities, Asimo is but a small shuffle forward on the road of AI research; a cutting-edge synthesis of already extant technologies and programs.” The article continues:

“The fundamental difference between human and programmed intelligence is still science’s greatest challenge. While neuroscientists continue to map the human brain, a complete and comprehensive theory of how it actually works has proved tantalizingly out of reach. Connectionism, a philosophical theory of the mind’s workings that originated in the nineteenth century, became popular with scientists in the 80s. According to connectionist thinking, the brain works like an enormously complex computer, with its neurons behaving as individual units interconnected like computers on a network. The attempt to artificially recreate a neural network has been the subject of ongoing research – and constant debate – for decades.”
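As a purely illustrative aside (my addition, not something from the bit.tech article), the connectionist picture described above, in which many simple units pass signals over weighted connections, is the idea behind artificial neural networks. The short Python sketch below builds a toy two-layer network of such units; every name and number in it is invented for illustration, and it is emphatically not a model of how the brain actually works.

```python
import numpy as np

# A tiny feedforward "neural network": layers of simple units, each of which
# sums its weighted inputs and applies a nonlinearity. This is the
# connectionist picture in miniature, vastly simplified for illustration.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialized weights: 3 inputs -> 4 hidden units -> 1 output unit.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(inputs):
    """Propagate an input vector through the two layers of units."""
    hidden = sigmoid(inputs @ W1)   # each hidden unit fires on a weighted sum
    output = sigmoid(hidden @ W2)   # the output unit does the same
    return output

print(forward(np.array([0.2, -1.0, 0.5])))  # prints a single value between 0 and 1
```

Research networks differ from this toy mainly in scale and in the fact that their weights are learned from data rather than set at random.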

Although we still don’t know exactly how the human brain functions, researchers continue to make breakthroughs. And even as the research continues, so does the debate. The article explains:

“Rival schools of thought, for example computationalism, argue that connectionism doesn’t accurately describe how the mind works. In an era that is still struggling to simulate the comparatively simplistic cerebral workings of insects, American neuroscientist Jim Olds has argued that what science needs is an authoritative model for how our minds work. ‘We need an Einstein of neuroscience,’ Olds said, ‘to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity.'”

Although Olds’ comments may sound pessimistic and discouraging, the article insists that much has been accomplished over the years and that mankind is better off for it. It explains:

“Glancing over the history of AI research, it’s easy to assume that little has been achieved. The all-knowing machine intelligences promised by writers and distinguished scientists failed to materialize, and the robot servants predicted to appear in every modern home have remained firmly in the realms of science fiction. Yet AI has yielded a wealth of applications that we now take for granted. Search engine algorithms like Google, which are in constant use by millions of people every day, exist as a result of AI research. Meanwhile, banking systems can recognise unusual patterns in customer spending to detect credit card fraud. Facial recognition systems are now used as a part of airport security. Speech recognition continues to be refined and has been integrated into Windows system software since Vista (and there’s Apple’s Siri, too). And, of course, AI plays an important role in videogames.”
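The fraud-detection example in that list is worth making concrete. Below is a deliberately simple sketch (my own illustration, not a description of any real banking system) of one way software might flag “unusual patterns in customer spending”: compare a new transaction against the customer’s history using a z-score threshold. Production systems use far richer features and learned models; this only shows the underlying idea of flagging statistical outliers.

```python
import statistics

def flag_unusual(transactions, new_amount, threshold=3.0):
    """Flag a transaction whose amount is far outside a customer's usual range.

    A toy stand-in for the 'unusual spending pattern' detection the article
    mentions: here, simply a z-score test against the customer's history.
    """
    mean = statistics.mean(transactions)
    stdev = statistics.stdev(transactions)
    if stdev == 0:
        return new_amount != mean
    z = abs(new_amount - mean) / stdev
    return z > threshold

history = [42.50, 18.99, 63.10, 25.00, 51.75, 30.20]
print(flag_unusual(history, 29.99))    # False: an ordinary purchase
print(flag_unusual(history, 2400.00))  # True: far outside the usual range
```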

To learn more about Siri and artificial intelligence, read my post entitled Artificial Intelligence and the Era of Big Data. The final segment of the bit.tech history is entitled “Artificial intelligence, emotion, singularity, and awakening.” It begins:

“For professor Noel Sharkey, the greatest danger posed by AI is its lack of sentience, rather than the presence of it. As warfare, policing and healthcare become increasingly automated and computer-powered, their lack of emotion and empathy could create significant problems. ‘Eldercare robotics is being developed quite rapidly in Japan,’ Sharkey said. ‘Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers.'”

The article notes that, in the arena of conflict, autonomous, heartless machines present an even bigger challenge (one reminiscent of the conflicts that are the focus of the Terminator movies). Sharkey insists that “there is no way for any AI system to discriminate between a combatant and an innocent” and that “claims that such a system is coming soon are unsupportable and irresponsible.” The article continues:

“If the idea of software-powered killing machines isn’t nightmarish enough, then some of science’s darker predictions for the future of AI certainly are. As far back as the 1960s, when AI research was still in its earliest stages, scientist Irving Good posited the idea that, if a sufficiently advanced form of artificial intelligence were created, it could continue to improve itself in what he termed an ‘intelligence explosion’. While Good’s supposition that an ‘ultraintelligent’ machine would be invented in the 20th century was wide of the mark, his theory exposed an exciting and potentially worrying possibility: that a superior artificial intellect could render human intelligence obsolete.”

One of the greatest fears that science fiction writers have raised is the possibility that machines will someday become the masters. They posit a time when computers will no longer be tools to assist human progress, but will be a new life form that considers humans to be in the same class as all other less intelligent life forms on earth. One such writer, the article notes, is Vernor Vinge. The article continues:

“In 1993, mathematics professor, computer scientist and SF writer Vernor Vinge wrote … ‘Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.’ Vinge called this rise of superhuman intelligence the ‘singularity’, an ever-accelerating feedback loop of technological improvement with potentially unexpected side effects.”

The article insists that we shouldn’t panic since “literature and history are littered with bleak predictions and dire warnings of scientific hubris.” It continues:

“Fears of a ‘negative singularity’, where the human race is rendered obsolete or enslaved by a superior being of its own creation, follow a similar alarmist tradition – that’s the opinion of scientific figures such as Ray Kurzweil, who foresees a positive outcome for the coming singularity. According to Kurzweil, the coming singularity won’t come as a tidal wave, but as a gradual integration. Just as mobile phones and the internet have revolutionized communication and the spread of information, so will our society absorb future technology. ‘This will not be an alien invasion of intelligent machines,’ Kurzweil wrote in the foreword to James Gardner’s 2006 book, The Intelligent Universe. ‘It will be an expression of our own civilization, as we have always used our technology to extend our physical and mental reach.'”

Not everyone is as optimistic about the singularity as Kurzweil. The article explains:

“Professor Sharkey contends that greater-than-human computer intelligence may never occur, that human brains and computers may be so fundamentally different that one can never be successfully replicated by the other. ‘It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth,’ Sharkey said in a 2009 interview with New Scientist. He’s particularly suspicious of the predictions of scientific figures such as Moravec and Kurzweil. Their theories are, he argues, ‘fairy tales’. ‘Roboticist Hans Moravec says that computer processing speed will eventually overtake that of the human brain and make them our superiors,’ Sharkey said. ‘The inventor Ray Kurzweil says humans will merge with machines and live forever by 2045. To me these are just fairy tales. I don’t see any sign of it happening. These ideas are based on the assumption that intelligence is computational. It might be, and equally it might not be. My work is on immediate problems in AI, and there is no evidence that machines will ever overtake us or gain sentience.'”

Sharkey is not alone. Microsoft co-founder Paul Allen also believes such ideas to be “far-fetched.” [“Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011] He writes, “We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated.” The bit.tech article concludes:

“The story of AI is one of remarkable discoveries and absurd claims, of great leaps forward and abrupt dead ends. What began as a chess algorithm on a piece of paper grew, within a few short years, into an entire field of research, research which went on to spawn important breakthroughs in computer technology and neuroscience. Yet even now, the first sentient machines, thought to be so imminent in those early years of research, appear to be as remote as they were more than half a century ago. … Nevertheless, there remains the possibility that, as technology advances and science discovers new insights into the human mind, we may one day see the creation of the first sentient machine. And whatever the outcome of that first awakening – whether we’re transfigured by it, as Kurzweil believes, or enslaved, as Bill Joy fears – we should be mindful of one thing: that it all began with a quiet game of chess.”

A recent article in The Observer reports, “No computer can yet pass the ‘Turing test’ and be taken as human. But the hunt for artificial intelligence is moving in a different, exciting direction that involves creativity, language – and even jazz.” [“AI robot: how machine intelligence is evolving,” by Marcus du Sautoy, 31 March 2012] Du Sautoy, the Simonyi Professor for the Public Understanding of Science and a professor of mathematics at the University of Oxford, states, “The AI community is beginning to question whether we should be so obsessed with recreating human intelligence.” He explains:

“That intelligence is a product of millions of years of evolution and it is possible that it is something that will be very difficult to reverse engineer without going through a similar process. The emphasis is now shifting towards creating intelligence that is unique to the machine, intelligence that ultimately can be harnessed to amplify our very own unique intelligence.”

Rather than being discouraged by the possibility that machines may never achieve human-level intelligence, du Sautoy reflects on the exciting things that can be done with AI technologies now and in the future. I believe his position is the one most widely held by people working in the AI field.
