
Philosophy and Artificial General Intelligence

November 1, 2012


Proponents of artificial intelligence (AI) seem to fall into two camps. The first camp includes those who are trying to create a true artificial intelligence (that is, a machine that becomes self-aware or sentient). The second camp includes those who want to create “machines that act intelligently, without taking a position on whether or not the machines actually are intelligent.” [“The Philosophical Foundations of Artificial Intelligence,” by Selmer Bringsjord and Konstantine Arkoudas, Rensselaer Polytechnic Institute (RPI), 25 October 2007] Bringsjord and Arkoudas claim that “AI officially started in 1956, launched by a small but now-famous summer conference at Dartmouth College, in Hanover, New Hampshire.” Others trace its roots to a 1950 paper by Alan Turing entitled “Computing Machinery and Intelligence,” which put forth the famous Turing Test. Bringsjord and Arkoudas, however, indicate that the philosophical roots of AI go back hundreds of years. To make their point, they quote a passage written by René Descartes and published in a 1911 volume entitled The Philosophical Works of Descartes, Volume 1, translated by Elizabeth S. Haldane and G.R.T. Ross. It reads:

“If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men. The first is, that they could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if it is touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only for the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.”

David Deutsch, a physicist at Oxford University, jumped into the discussion of philosophy and artificial intelligence in an op-ed piece in The Guardian. [“Philosophy will be the key that unlocks artificial intelligence,” 3 October 2012] He writes:

“To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial. The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances. But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of ‘artificial general intelligence’ or AGI – has made no progress whatever during the entire six decades of its existence.”

Al Fin agrees with Deutsch. “Artificial intelligence has turned into something of a laggard and a laughingstock in the cognitive science community,” he writes. “Human level AI always seems to be ‘10 to 20 years away,’ and has been for most of the past 60+ years.” [“Artificial Intelligence Needs a New Philosophical Foundation,” Al Fin, 4 October 2012] Despite the harsh criticism, Deutsch claims, “AGI must be possible.” He explains:

“That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.”
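
Deutsch’s claim is abstract, but it can be made concrete in a few lines of code. The sketch below (my own illustration, not anything from Deutsch’s article) shows the kernel of what “universality” means: one short, fixed Python program can emulate any Turing machine description it is handed, and the Turing machine is the canonical model of a general-purpose computer. The bit-flipping machine is an invented toy.

```python
# A minimal Turing machine emulator. The point is universality: this
# one fixed program can run *any* machine description it is given.

def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))          # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An invented toy machine: scan right, flipping every bit, then halt.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001_
```

The emulator never changes; only the table of rules does. That separation, one fixed machine able to run any program, is what “universality of computation” refers to; Deutsch’s further claim is that physical law itself is emulable in this sense.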

Deutsch obviously subscribes to what Bringsjord and Arkoudas call a “rationalist programme.” They note that Hubert Dreyfus criticized the AI community for trying to push this philosophy. They write:

“Philosophically, Dreyfus argued that AI is an ill-conceived attempt to implement a rationalist programme that goes back at least to Leibniz and Hobbes, a project that rests on the misguided ‘Cartesian’ tenet which holds that human understanding consists in forming and manipulating symbolic representations. In contradistinction, he maintained that our ability to understand the world and other people is a non-declarative type of know-how skill that is not amenable to propositional codification. It is inarticulate, preconceptual, and has an indispensable phenomenological dimension which cannot be captured by any rule-based system.”

Deutsch dismisses such criticism. He writes:

“So why has the field not progressed? In my view it is because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know that just ain’t so.’ I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.”

Over-confidence, Deutsch argues, may be a fault, but it is not a reason to dismiss the eventual creation of artificial general intelligence. He continues:

“The field used to be called ‘AI’ – artificial intelligence. But AI was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.”

Deutsch’s argument seems to be that because there was (and is) money to be made pursuing uses for limited (or weak) AI systems in the business world, the motivation for creating AGI has waned. The best minds, he implies, are following the money. Another obstacle that has slowed the progress of AGI, Deutsch argues, is the whole idea of machine self-awareness, which he labels a “popular irrationality of cultural relativism.” He writes:

“That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense – for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself – if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.”
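
Deutsch offers no implementation, but his point that the behavioural sense is trivial is easy to substantiate with a toy. Everything in the sketch below (the Agent class, the “mark test” setup) is invented for illustration; it merely shows an agent using a mirror-like view of the world to infer a fact about itself that it cannot observe directly.

```python
# An illustrative toy of "behavioural" self-awareness: the agent uses
# a mirror (a view of the world that includes its own body) to infer
# a fact about itself that it cannot see directly. This setup is
# invented for illustration; Deutsch describes no implementation.

class Agent:
    def __init__(self, name):
        self.name = name
        self.believes_marked = False   # it cannot see its own "forehead"

    def look_in_mirror(self, reflection):
        # The "self-aware" step is simply matching one entry in the
        # reflection to oneself and updating one's beliefs from it.
        self.believes_marked = reflection[self.name]["has_mark"]

agent = Agent("robot-1")
world_view = {
    "robot-1": {"has_mark": True},
    "robot-2": {"has_mark": False},
}
agent.look_in_mirror(world_view)
print(agent.believes_marked)           # True: a fact inferred about itself
```

Which is Deutsch’s point exactly: passing a behavioural test like this is a few lines of bookkeeping, and tells us nothing about the kind of self-awareness that matters for AGI.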

Google “has already successfully used neural networking data for computers to recognize cats in YouTube videos. The computer itself was able to decide which features of the videos — patterns, colors, etc. — to give importance to and then identify what it thought was a feline.” [“Google’s Neural Networks Advance Artificial Intelligence [VIDEO],” by Neha Prakash, Mashable, 9 October 2012] Presumably computers could use this technique to learn to identify themselves as well. I doubt, however, that such an achievement would be widely regarded as self-awareness. Regardless, Deutsch believes that “self-awareness has its undeserved reputation for being connected with AGI.” He continues:
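
For readers curious what “the computer itself was able to decide which features matter” means mechanically: the system Mashable reported on (a very large deep network trained on YouTube stills) is far beyond a blog snippet, but the core idea, unsupervised feature learning, fits in a short sketch. The numpy toy below is my own drastically simplified illustration; the data, dimensions, and learning rate are all made up.

```python
import numpy as np

# A drastically simplified sketch of unsupervised feature learning:
# an autoencoder is trained only to reconstruct its input, with no
# labels, and in doing so must "decide" which input features carry
# structure worth encoding. All data and sizes here are invented.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                 # 500 fake 8x8 "frames"
X[:, :8] += 3.0 * rng.normal(size=(500, 1))    # planted correlated feature

W = rng.normal(scale=0.1, size=(64, 16))       # encoder weights, 16 features
lr = 1e-3
for _ in range(200):
    H = np.tanh(X @ W)                         # hidden features
    X_hat = H @ W.T                            # decoder with tied weights
    err = X_hat - X
    # Gradient of the reconstruction loss ||X_hat - X||^2 w.r.t. W
    grad = X.T @ ((err @ W) * (1 - H**2)) + err.T @ H
    W -= lr * grad / len(X)

# No one told the model the first 8 inputs were special; it found the
# planted structure on its own, which is what "deciding which features
# to give importance to" amounts to.
print(np.abs(W[:8]).mean(), np.abs(W[8:]).mean())
```

Scaled up by many orders of magnitude, with deeper networks and real video frames, the same label-free principle is what let the Google system converge on a “cat” feature.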

“Self-reference of any kind has acquired a reputation for woo-woo mystery. And so has consciousness. And for consciousness we have the problem of ambiguous terminology again: the term has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI; but at the other end, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that. AGIs will indeed be capable of self-awareness – but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves.”

Deutsch goes on to counter criticisms of AGI that either require belief in the supernatural or rest on false premises. Some of those false premises arise, he argues, from the use of poor metaphors. He explains:

“The philosopher John Searle … pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. He argues that the hope that AGI is possible rests on a similarly insubstantial metaphor, namely that the mind is ‘essentially’ a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics. Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain’s unique functionality is beyond the scope of this article.”

If you want to learn more about Searle’s arguments, read the article by Bringsjord and Arkoudas. Deutsch goes on to assert that the pursuit of AGI systems that closely mimic the human mind (that is, that turn AGI systems into “people”) is faulty because it favors “organic brains over silicon brains.” He writes, “This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities, has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.” If AGI systems do emerge and are deemed to have self-awareness (i.e., become “machine life”), are they going to be treated as “people”? That would raise a myriad of philosophical and legal problems. He explains:

“For example, the mere fact that it is not the computer but the running program that is a person raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist – because once an AGI program is running in a computer, depriving it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (i.e., before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote? Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word ‘programming’. Treating AGIs like any other computer programs would constitute brainwashing, slavery and tyranny. And cruelty to children too, because ‘programming’ an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. Ignoring the rights and personhood of AGIs would not only be the epitome of evil, but a recipe for disaster too: creative beings cannot be enslaved forever.”

Deutsch goes on to talk about the fear that has been raised concerning the rise of good and evil AGI systems. He argues that good and evil exist outside the field of AGI and that such arguments shouldn’t prevent us from developing AGI systems. He writes, “‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.” Since we remain a long way from developing an AGI system to which such philosophical and legal conundrums might apply, one may wonder if it isn’t too soon to be worrying about such things. Deutsch doesn’t think it is. He explains:

“I am not highlighting all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that will be essential to their future integration is also a prerequisite for developing them in the first place. The lack of progress in AGI is due to a severe log jam of misconceptions. … Clearing this log jam will not, by itself, provide the answer.”

In the end, Deutsch reveals himself as an optimist when it comes to developing AGI. He writes that he believes “the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.” I think most people would agree that a breakthrough leading to AGI would be one of the best ideas ever; but given all the philosophical questions surrounding AGI, I’m guessing there are a number of people who question whether that idea is worth pursuing.
