
The Pre-history of Artificial Intelligence

July 26, 2017


One of the initial speakers at the First Annual Enterra Solutions® Cognitive Computing Summit was George Dyson, a historian and author of Turing’s Cathedral: The Origins of the Digital Universe. Dyson presented a fascinating tour d’horizon of the historical giants who laid the groundwork for modern thinking about artificial intelligence (a term normally traced to a 1956 meeting held at Dartmouth). I owe a debt of gratitude to Nancy Kleinrock from TTI/Vanguard, who captured the highlights of Dyson’s presentation. Dyson began with a story about Roger Bacon (1219–1292), who attempted to construct an oracle — a brass head that would be able to correctly answer any question put to it. Medieval scholars like Bacon had developed a reputation as wizards who could construct such wonders. According to legend, Bacon was only able to get the brazen head to speak after tricking the devil. A sixteenth-century account of the story by Robert Greene (1558–1592) asserts that Bacon fell asleep while waiting for the head to speak. While he slept, the head famously said, “Time is.” Bacon’s alert assistant deemed this statement too inconsequential to rouse his boss. Subsequently, the brass head uttered, “Time was.” Once again the assistant felt the statement too insignificant to awaken his master. Finally, the head, in its profundity, said, “Time is past,” and self-destructed. Poor Friar Bacon slept through the entire episode. Dyson’s point was that people were thinking about machine intelligence as far back as medieval times.


Dyson then discussed Greene’s near-contemporary, Thomas Hobbes (1588–1679), who opined in Leviathan: Or the Matter, Forme and Power of a Common Wealth Ecclesiasticall and Civil that all thought could be reduced to addition and subtraction — hence, computation. Dyson noted that this is the fundamental principle of AI and, taken further — as Hobbes took it — of artificial life. Jumping forward a few years, Dyson next discussed the contributions of Blaise Pascal (1623–1662), who built a machine with gears and wheels to perform arithmetic. One of Pascal’s contemporaries, Gottfried Wilhelm Leibniz (1646–1716), proposed a non-wheeled machine to perform binary arithmetic: open holes specified 1, filled holes specified 0, and gates switched between the two states; marbles progressing through the contraption performed the computation. Dyson stated, “Leibniz truly invented the modern digital computer. Today, we simply use electrons and voltage gradients instead of marbles and gravity.”
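
Leibniz’s marble-and-gate scheme is recognizably modern binary logic, and it is easy to mimic in software. The following Python sketch is purely illustrative (my own construction, not anything Leibniz designed or Dyson presented): a marble present in a hole is a 1, an empty hole is a 0, and gate functions route “marbles” to add two binary numbers.

    # Toy sketch of Leibniz-style binary arithmetic. A marble (True) is a 1,
    # an empty hole (False) is a 0; gates combine marbles into a binary sum.

    def half_adder(a: bool, b: bool):
        """Two gates: XOR yields the sum bit, AND yields the carry bit."""
        return a ^ b, a and b

    def full_adder(a: bool, b: bool, carry: bool):
        """Chain two half adders to fold in the incoming carry bit."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry)
        return s2, c1 or c2

    def add_binary(x_bits, y_bits):
        """Add two equal-length little-endian bit lists (marble present = 1)."""
        carry, result = False, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        result.append(carry)
        return result

    # 6 ([0, 1, 1] little-endian) + 3 ([1, 1, 0]) = 9 ([1, 0, 0, 1])
    print(add_binary([False, True, True], [True, True, False]))

In Dyson’s framing, gravity carrying marbles through open and filled holes plays exactly the role that electrons and voltage gradients play today.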


The next intellectual giant discussed by Dyson was Charles Babbage (1791–1871). Babbage designed (though never completed) the Analytical Engine, which was arbitrarily programmable with software encoded on a set of punched cards that the wheeled machine could read. It would be a century before punched cards once again came into the picture. Other nineteenth-century scholars making contributions to the pre-history of AI were Augusta Ada, Countess of Lovelace (1816–1852), who further expressed the power of software; André-Marie Ampère (1775–1836), a French physicist and mathematician, who coined the term cybernetics; Alfred Smee (1818–1877), who studied the “electro-biology” of the human brain and its workings; and Samuel Butler (1835–1902), who cogitated on Darwin’s work and projected the possibility of mechanical consciousness as an evolution of the then-existing state of human-created artifacts. Dyson noted that Butler’s concept of evolved machines was introduced shortly before John Ambrose Fleming (1849–1945), a British electrical engineer and physicist, coined the term electronics and invented the vacuum tube, based on the flow of current through a lightbulb filament. The twentieth century brought with it new scholars and new discoveries. In 1907, Lee De Forest (1873–1961), an American inventor, built on Fleming’s work by adding a third electrode, the grid, to create the triode. Lewis Fry Richardson (1881–1953), an English mathematician and physicist, devised a non-deterministic electrical circuit in 1930 that Dyson claims is “all you need to know about AI,” in which P and Q were detectors of quantum noise, yielding an AI “capable of only two ideas.”


The mid-twentieth century saw the rise of geniuses like Alan Turing (1912–1954). Turing, famous for his codebreaking efforts during the Second World War and his Turing Test, proved — “in a very circular way” — that it would be possible for a single, general-purpose computing machine, were it invented, to compute any computable sequence. Turing’s great contribution, Dyson noted, was to take seriously that a machine could think. Another intellectual giant was John von Neumann (1903–1957), who, according to Robbert Dijkgraaf, Director of the Institute for Advanced Study in Princeton, was “perhaps a greater genius than Einstein.”[1] Dyson observed that von Neumann had the combined benefits of not only being a mathematical prodigy but also, as the son of a banker, being comfortable asking for money. “He didn’t build a computer, but he organized a group of oddballs who did.” Dyson noted that von Neumann’s wife, Klára Dán von Neumann (1911–1963), was also a pioneer of computer programming: she not only implemented the Monte Carlo method on the ENIAC but also drew a line between the operating system and applications by introducing the concept of order coding, with background coding (the OS) distinct from problem coding (applications).
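
Klára Dán von Neumann’s ENIAC programs simulated neutron chain reactions for Los Alamos and are not reproduced here, but the essence of the Monte Carlo method she implemented (answering an analytically hard question by repeated random sampling) fits in a few lines of Python. The classic textbook illustration, strictly my example rather than her code, estimates π from random points in a unit square:

    import random

    def estimate_pi(samples: int) -> float:
        """Monte Carlo estimate of pi: the fraction of random points in the
        unit square that land inside the quarter circle approaches pi/4."""
        inside = sum(
            1
            for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4 * inside / samples

    print(estimate_pi(1_000_000))  # ~3.14; accuracy improves with sample count

The error of such an estimate shrinks only as roughly one over the square root of the sample count, which is why the method had to wait for fast machines like the ENIAC to become practical.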


In the hardware arena, things also progressed through the first part of the twentieth century. Vladimir Kosma Zworykin (1888–1982), a Russian-born inventor and engineer working at RCA Labs, invented the selectron, a vacuum tube with 4096 discrete bits of electrostatic memory. Zworykin and von Neumann wrote a proposal to have hundreds of researchers around the globe simultaneously use selectron-based machines — WWII analog computers converted into digital devices — to model the weather in real time. Dyson noted that AND and OR gates, as well as switch registers, were invented as part of this effort. According to Dyson, von Neumann was also interested in biology and enlisted Nils Barricelli (1912–1993), a Norwegian-Italian mathematician, to conduct computer-assisted experiments in symbiogenesis and evolution. Those experiments are considered pioneering efforts in artificial life research. In 1997, Julian Bigelow (1913–2003), a pioneering American computer engineer, credited Barricelli as being the only mid-century researcher to understand computational evolution as “the true path to real AI.”


Other early contributors to the field of artificial intelligence mentioned by Dyson were: I.J. (Jack) Good (1916–2009), a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing; Claude Shannon (1916–2001), whose greatest contribution was to take seriously that machines could communicate; and Norbert Wiener (1894–1964), who fostered the idea that machines could take control. Dyson concluded his presentation by noting that Ross Ashby (1903–1972), an English psychiatrist and a pioneer in cybernetics, laid out three fundamentals of artificial intelligence in his book Design for a Brain: The Origin of Adaptive Behaviour. They are:


  • Ashby’s Law of Requisite Variety: “Any effective control system must be as complex as the system it controls.”
  • Von Neumann’s Law of Sufficient Complexity: “A complex system constitutes its own simplest behavioral description.”
  • The “Third Law”: “Any system complicated enough to behave intelligently will be too complicated to understand” — but there is no prohibition against building something you don’t understand, as with deep learning.


In the grand scheme of things, artificial intelligence remains in its infancy. Yet AI’s roots can be traced back centuries. Bernard of Chartres is credited with saying, “We are like dwarves perched on the shoulders of giants, and thus we are able to see more and farther than the latter. And this is not at all because of the acuteness of our sight or the stature of our body, but because we are carried aloft and elevated by the magnitude of the giants.” Sir Isaac Newton echoed those sentiments. We owe a great debt to the giants of the past and we need historians like Dyson to remind us of that enduring truth.


Footnotes
[1] Abraham Flexner and Robbert Dijkgraaf, The Usefulness of Useless Knowledge, Princeton University Press, 2017.
