Neat and Scruffy Artificial Intelligence

April 1, 2013

According to Wikipedia, “Neat and scruffy are labels for two different types of artificial intelligence research. Neats consider that solutions should be elegant, clear, and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems such neat requirements usually mandate.” The article continues:

“The distinction was originally made by Roger Schank in the mid-1970s to characterize the difference between his work on natural language processing (which represented commonsense knowledge in the form of large amorphous semantic networks) from the work of John McCarthy, Allen Newell, Herbert A. Simon, Robert Kowalski, and others whose work was based on logic and formal extensions of logic. The distinction was also partly geographical and cultural: ‘scruffy’ was associated with AI research at MIT under Marvin Minsky in the 1960s. The laboratory was famously ‘freewheeling’ and researchers often developed AI programs by spending long hours tweaking programs until they showed the required behavior.”

Marvin Minsky accepts these labels (i.e., neat versus scruffy), but he also describes these different approaches as “logical versus analogical” and “symbolic versus connectionist.” [“Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy,” AI Magazine, Volume 12, Number 2, 1991] Although he recognizes that people love labels, Minsky isn’t sure they serve a very useful purpose. He also believes that pursuing a particular path to AI in isolation from other research efforts isn’t very useful either. He writes:

“Why can’t we simply explain what we want, and then let our machines do experiments or read some books or go to school — the sorts of things that people do. Our machines today do no such things: Connectionist networks learn a bit but show few signs of becoming smart; symbolic systems are shrewd from the start but don’t yet show any common sense. How strange that our most advanced systems can compete with human specialists yet are unable to do many things that seem easy to children. I suggest that this stems from the nature of what we call specialties — because the very act of naming a specialty amounts to celebrating the discovery of some model of some aspect of reality, which is useful despite being isolated from most of our other concerns.”

Although that was written back in 1991, it could have been written yesterday. Rather than selecting one approach, Professor Minsky believes there is value in pursuing and combining multiple approaches. “There is no one best way to represent knowledge or to solve problems,” he writes, “and the limitations of current machine intelligence largely stem from seeking unified theories or trying to repair the deficiencies of theoretically neat but conceptually impoverished ideological positions.” Using just one approach is like trying to judge depth with a single eye; the task is far easier with both. A recent blog post in The Daily Omnivore raised the neats-versus-scruffies issue anew. [“Neats vs. Scruffies,” 23 January 2013] The post states:

“New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as Bayesian nets and mathematical optimization. This general trend towards more formal methods in AI is described as ‘the victory of the neats’ by Peter Norvig and Stuart Russell. Pamela McCorduck, in 2004: ‘As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms.’ Neat solutions have been highly successful in the 21st century and are now used throughout the technology industry. These solutions, however, have mostly been applied to specific problems with specific solutions, and the problem of general intelligence remains unsolved. The terms ‘neat’ and ‘scruffy’ are rarely used by AI researchers in the 21st century, although the issue remains unresolved. ‘Neat’ solutions to problems such as machine learning and computer vision, have become indispensable throughout the technology industry, but ad-hoc and detailed solutions still dominate research into robotics and commonsense knowledge.”
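As an aside, and purely as my own illustration (not from the quote), the “Bayesian nets” mentioned above are a good example of what a “neat” formalism looks like in practice: a query has a provably correct answer that can be computed mechanically. Here is a minimal sketch using the textbook rain/sprinkler/wet-grass network, with the standard illustrative probabilities (the numbers are assumptions for the example, not from the article):

```python
from itertools import product

# Exact inference by enumeration in the classic rain/sprinkler/wet-grass
# Bayesian network. All probabilities are textbook illustrative values.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {  # P(Sprinkler | Rain)
    True: {True: 0.01, False: 0.99},
    False: {True: 0.4, False: 0.6},
}
P_wet = {  # P(GrassWet | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, via the chain rule."""
    p = P_rain[rain] * P_sprinkler[rain][sprinkler]
    p_wet = P_wet[(sprinkler, rain)]
    return p * (p_wet if wet else 1 - p_wet)

# Query: P(Rain = true | GrassWet = true), summing out Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)  # ~0.358
```

The appeal to the “neats” is exactly this analyzability: the answer follows provably from the model, rather than from hours of tweaking.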

It should come as no surprise that “neat” AI solutions have gained traction in the 21st century: because they can be applied to specific, well-defined challenges, a business case can be made for them. Stated another way, there is money in neat solutions. The same cannot be said for the pursuit of artificial general intelligence (see my post Business and Artificial Intelligence). The Daily Omnivore article continues:

“As might be guessed from the terms, neats use formal methods – such as logic or pure applied statistics – exclusively. Scruffies are hackers, who will cobble together a system built of anything – even logic. Neats care whether their reasoning is both provably sound and complete and that their machine learning systems can be shown to converge in a known length of time. Scruffies would like their learning to converge too, but they are happier if empirical experience shows their systems working than to have mere equations and proofs showing that they ought to. To a neat, scruffy methods appear promiscuous, successful only by accident and unlikely to produce insights about how intelligence actually works. To a scruffy, neat methods appear to be hung up on formalism and to be too slow, fragile or boring to be applied to real systems.”
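To make that contrast concrete, here is a small illustrative sketch of my own (the problem and the tolerances are assumptions, not from the article). Both functions find a root of the same equation, but the “neat” bisection method carries a proof that it converges to within a stated tolerance in a known number of iterations, while the “scruffy” random search is justified only by the empirical observation that it tends to work:

```python
import math
import random

def neat_root(f, lo, hi, eps=1e-9):
    """'Neat' style: bisection. Provably reaches accuracy eps in exactly
    ceil(log2((hi - lo) / eps)) iterations, assuming f is continuous
    and f(lo), f(hi) have opposite signs."""
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    iterations = math.ceil(math.log2((hi - lo) / eps))
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in [lo, mid]
        else:
            lo = mid  # root lies in [mid, hi]
    return (lo + hi) / 2

def scruffy_root(f, guess, trials=100_000, step=0.5):
    """'Scruffy' style: random hill climbing on |f(x)|. No convergence
    proof and no running-time bound -- it is judged by whether it
    works in practice on the problems we actually care about."""
    best, best_err = guess, abs(f(guess))
    for _ in range(trials):
        candidate = best + random.uniform(-step, step)
        err = abs(f(candidate))
        if err < best_err:
            best, best_err = candidate, err
    return best

f = lambda x: x**3 - 2           # true root: 2 ** (1/3)
print(neat_root(f, 0.0, 2.0))    # guaranteed accurate to ~1e-9
print(scruffy_root(f, 1.0))      # usually close, but no guarantee
```

A neat would point out that the first routine comes with a worst-case bound; a scruffy would point out that the second needs no bracketing assumptions and, empirically, gets there anyway.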

It is not clear whether these schools of thought are trying to emulate human intelligence or will be satisfied just simulating its functions. For more on that debate, read my post entitled Artificial Brains: The Debate Continues. The Daily Omnivore article states, however, that the conflict between neats and scruffies “goes much deeper than programming practices.” It explains:

“For philosophical or possibly scientific reasons, some people believe that intelligence is fundamentally rational, and can best be represented by logical systems incorporating truth maintenance. Others believe that intelligence is best implemented as a mass of learned or evolved hacks, not necessarily having internal consistency or any unifying organizational framework. Ironically, the apparently scruffy philosophy may also turn out to be provably (under typical assumptions) optimal for many applications. Intelligence is often seen as a form of search, and as such not believed to be perfectly solvable in a reasonable amount of time. It is an open question whether human intelligence is inherently scruffy or neat. Some claim that the question itself is unimportant: the famous neat John McCarthy has said publicly he has no interest in how human intelligence works, while famous scruffy Rodney Brooks is openly obsessed with creating humanoid intelligence.”

Professor Minsky concludes his 1991 article this way:

“There are countless wonders yet to be discovered in these exciting new fields of research. We can still learn a great many things from experiments on even the simplest nets. We’ll learn even more from trying to make theories about what we observe. And surely, soon we’ll start to prepare for that future art-of-mind design by experimenting with societies of nets that embody more structured strategies — and, consequently, make more progress on the networks that make up our own human minds. And in doing all these experiments, we’ll discover how to make symbolic representations that are more adaptable and connectionist representations that are more expressive. It is amusing how persistently people express the view that machines based on symbolic representations (as opposed, presumably, to connectionist representations) could never achieve much or ever be conscious and self-aware. I maintain it is precisely because our brains are still mostly connectionist, that we humans have so little consciousness! It’s also why we’re capable of so little parallelism of thought — and why we have such limited insight into the nature of our own machinery.”

I agree with the good professor that pursuing and combining multiple AI approaches is more likely to advance our understanding than going down a single trail. I also agree, even two decades later, that “there are countless wonders yet to be discovered.”
