Artificial Intelligence: Is Your Brain more than an Algorithm?

September 9, 2016

“The colloquial definition of ‘artificial intelligence’ refers to the general idea of a computerized system that exhibits abilities similar to those of the human mind,” writes Josh Worth (@misterjworth), “and usually involves someone pulling back their skin to reveal a hideous metallic endoskeleton. It’s no surprise that the phrase is surrounded by so many misconceptions since ‘artificial’ and ‘intelligence’ are two words that are notoriously difficult to define.”[1] Worth adds, “Deciding whether a computer is intelligent has been a very troublesome project, mostly because the standard for what constitutes intelligence keeps changing. Computers have been performing operations similar to those of the human brain since they were invented, yet no one is quite willing to call them intelligent.” Actually, the debate about artificial intelligence (AI) goes even deeper than that. In a fascinating article, Rafi Letzter (@RafiLetzter) explains why the debate isn’t about “physics or computer science at all. It’s about the brain — or more precisely, about consciousness — and it’s been going on for decades. Its central question: Is the brain fundamentally like a computer?”[2]


Letzter lays out the battleground by citing ideas from prominent thinkers on both sides of the debate. On one side is Scott Aaronson, a theoretical computer scientist at MIT. Letzter continues:

“His view, which is more widely accepted, is that because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer. And because it can be simulated in a computer, its structure and functions, including your consciousness, must be entirely logical and computational. In other words, all evidence suggests that your mind is a computer. (There is, of course, a great deal more nuance to his ideas than this, but that is the crux of his view.)”

On the other side of the artificial intelligence debate is mathematical physicist Roger Penrose. His point of view, Letzter explains, is this: “Your consciousness emerges from mysterious, exotic physics acting inside your neurons. Penrose (who, at 84, is responsible for a substantial chunk of our understanding of the shape of the universe) has argued since the 1980s that conventional computer science and physics cannot explain the human mind.” After all, our minds are quite capable of fooling us, even generating false memories. Letzter reports that Aaronson and Penrose recently debated each other at a conference in Minnesota. Letzter sums up the debate this way, “Either the brain is basically a computer, or there’s a whole new world of neuroscience and physics out there that we have not yet even begun to discover.” According to Letzter, Aaronson actually appreciates Penrose’s attempt to explain his position concerning artificial intelligence. In a blog article about the debate, Aaronson explains:

“If anyone thinks [a brain is nothing like a computer], the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer. It’s their job! … One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.”[3]

As someone who works in the field of artificial intelligence (specifically, cognitive computing), I find Aaronson’s arguments more compelling than Penrose’s. You don’t have to address the metaphysical mystery of sentience in order to accept the proposition that machines can act intelligently to perform specified tasks or discover new knowledge on their own. In fact, I believe that work done by Cycorp®, an Enterra Solutions® partner, actually negates Penrose’s assertions. As the company’s website notes, “Cycorp is a leading provider of semantic technologies that bring a new level of intelligence and common sense reasoning to a wide variety of software applications. The Cyc software combines an unparalleled common sense ontology and knowledge base with a powerful reasoning engine and natural language interfaces to enable the development of novel knowledge-intensive applications.”
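To make the pairing of a knowledge base with a reasoning engine concrete, here is a deliberately tiny sketch in Python. It is not Cyc, CycL, or any Cycorp API; the facts, the rule, and the function names are all invented for illustration. It simply forward-chains over a few hand-written triples, applying an if-then rule until no new facts can be derived, which is the basic pattern a common-sense reasoner elaborates at vastly greater scale.

```python
# Hypothetical illustration only -- not Cyc or CycL. Facts are
# (subject, relation, object) triples, and a rule is a function that
# derives new triples from the ones already known.

facts = {
    ("Fido", "is_a", "dog"),
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
}

def transitivity_rule(kb):
    """If X is_a Y and Y is_a Z, then X is_a Z."""
    derived = set()
    for (x, r1, y) in kb:
        for (y2, r2, z) in kb:
            if r1 == "is_a" and r2 == "is_a" and y == y2:
                derived.add((x, "is_a", z))
    return derived

def forward_chain(facts, rules):
    """Apply every rule repeatedly until the knowledge base stops growing."""
    kb = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(kb) - kb
        if not new:
            return kb
        kb |= new

kb = forward_chain(facts, [transitivity_rule])
print(("Fido", "is_a", "animal") in kb)  # True -- never stated, only derived
```

Cyc’s actual knowledge base holds millions of assertions and far richer rule forms, but the flavor is the same: stated facts go in, unstated conclusions come out.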


Cycorp’s CEO Doug Lenat has been rightfully labeled a visionary. His vision, which he conceived over three decades ago, was “creating a ‘knowledge base’ called Cyc that can endow computers with something approaching common sense.”[4] Lamont Wood explains, “Cyc’s creation involved inputting and organizing the millions of facts that, while seemingly obvious to humans, must be explicitly taught to computers in the logic they can understand. After reaching a certain level of sophistication, Cyc began to help direct its own education by asking questions based on what it already knew. … The result: a computer that doesn’t have to be told that parents are older than their children and that people stop subscribing to magazines after they die.”[5] Lenat has been single-minded in developing Cyc. To that end, explains Wood, “Cycorp doesn’t even want to be distracted by the rigors of the retail software business; instead, it licenses Cyc for use in third-party software packages.” In fact, Enterra® has established a licensing arrangement with Cycorp. Dylan Love (@dylanlove) asserts, “It’s only a slight stretch to say Cycorp is building a brain out of software, and they’re doing it from scratch. … This means Cyc can see ‘the white space rather than the black space in what everyone reads and writes to each other.’ An author might explicitly choose certain words and sentences as he’s writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences.”[6]
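The “white space” idea can be sketched the same way. The hypothetical Python below (again, not Cycorp’s actual machinery; every name is invented) encodes the two bits of common sense Wood mentions, that parents are older than their children and that people stop subscribing to magazines after they die, and applies them to facts a text might state explicitly, producing conclusions the text never spells out.

```python
# Hypothetical illustration of inferring what a text leaves unsaid.
# Explicit facts, as they might be extracted from a few sentences:
explicit = [
    ("Fred", "parent_of", "Mary"),
    ("Mary", "subscribes_to", "Astronomy Monthly"),
    ("Mary", "status", "deceased"),
]

def common_sense(facts):
    """Derive implicit facts a human reader would take for granted."""
    implied = set()
    for (a, rel, b) in facts:
        if rel == "parent_of":
            # Parents are older than their children -- assumed, never stated.
            implied.add((a, "older_than", b))
        if rel == "status" and b == "deceased":
            # People stop subscribing to magazines after they die.
            for (x, r, magazine) in facts:
                if x == a and r == "subscribes_to":
                    implied.add((a, "subscription_lapsed", magazine))
    return implied

print(common_sense(explicit))
# -> Fred is older than Mary; Mary's subscription has lapsed.
```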


Obviously, we are not going to resolve the debate about artificial intelligence and sentience any time soon — certainly not in this article. Whether an artificial intelligence system will ever achieve sentience is not as important as figuring out what artificial intelligence can do for mankind in both the short and long term. We shouldn’t fear AI; rather, we should figure out how humans can collaborate with it to overcome real challenges in our lives.


Footnotes
[1] Josh Worth, “Stop Calling it Artificial Intelligence,” Josh Worth Art & Design, 10 February 2016.
[2] Rafi Letzter, “If you think your brain is more than a computer, you must accept this fringe idea in physics,” Business Insider, 10 June 2016.
[3] Scott Aaronson, “‘Can computers become conscious?’: My reply to Roger Penrose,” Shtetl-Optimized, 2 June 2016.
[4] Lamont Wood, “Cycorp: The Cost of Common Sense,” MIT Technology Review, 1 March 2005.
[5] Ibid.
[6] Dylan Love, “The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near Secrecy For 30 Years,” Business Insider, 2 July 2014.
