Alexander Wissner-Gross, a physicist at Harvard University and a fellow at the Massachusetts Institute of Technology Media Lab, and his colleague Cameron Freer, a post-doctoral fellow at MIT’s Computer Science and Artificial Intelligence Laboratory, have been conducting experiments on how intelligence can develop out of chaos. What motivated them to pursue this research was the scientific notion that the cosmological systems with the greatest entropy also have the greatest probability of producing intelligent life. Entropy is most often thought of in negative terms. One of the definitions of entropy found in the online version of the Merriam-Webster dictionary is: “the degradation of the matter and energy in the universe to an ultimate state of inert uniformity.” In commenting on the work of Wissner-Gross and Freer, Anthony Wing Kosner begins his article with a quotation from William Butler Yeats’ poem “The Second Coming.” Yeats wrote, “Things fall apart; the centre cannot hold; / Mere anarchy is loosed upon the world.” Those lines remind us that things age, rust, corrode, and disintegrate; or, as the common funereal lament states, all things go from “ashes to ashes, dust to dust.”
So how does one go from disorder to order? That is the question Wissner-Gross and Freer are attempting to answer. In a paper entitled “Causal Entropic Forces,” Wissner-Gross and Freer “claim to have begun to formalize a mathematical framework capturing a fundamental relationship between intelligence and entropy maximization.” [“Maximum Causal Entropy – Adaptive Systems and Intelligence,” Compute 2020, 27 April 2013] In the introduction to their research paper, Wissner-Gross and Freer write:
“Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization. In cosmology, the causal entropic principle for anthropic selection has used the maximization of entropy production in causally connected space-time regions as a thermodynamic proxy for intelligent observer concentrations in the prediction of cosmological parameters. In geoscience, entropy production maximization has been proposed as a unifying principle for nonequilibrium processes underlying planetary development and the emergence of life. In computer science, maximum entropy methods have been used for inference in situations with dynamically revealed information, and strategy algorithms have even started to beat human opponents for the first time at historically challenging high look-ahead depth and branching factor games like Go by maximizing accessible future game states. However, despite these insights, no formal physical relationship between intelligence and entropy maximization has yet been established.”
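For readers who want the formal statement, the core of the paper (rendered here in roughly its own notation) is a “causal path entropy” $S_c$ defined over a present macrostate $X$ and a time horizon $\tau$, together with a “causal entropic force” $F$ that drives a system toward macrostates from which the greatest diversity of future paths remains accessible:

$$ S_c(X, \tau) = -k_B \int_{x(t)} \Pr\!\big(x(t)\,\big|\,x(0)\big)\,\ln \Pr\!\big(x(t)\,\big|\,x(0)\big)\,\mathcal{D}x(t) $$

$$ F(X_0, \tau) = T_c \,\nabla_X S_c(X, \tau)\,\big|_{X = X_0} $$

Here the path integral runs over all microscopic histories $x(t)$ of duration $\tau$ beginning in the present state, $k_B$ is Boltzmann’s constant, and $T_c$ is a “causal path temperature” that sets the strength of the force. In words: compute the entropy of everything that could still happen from a given state, and push in the direction that increases it.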
Kosner helps us decipher the scientific lingo. He writes, “[Wissner-Gross’] idea is really very simple, but he has the mathematics … and visualizations (see video below) to back it up. In short, everything in nature (our minds included) seeks to keep its options open. Instead of seeing entropy as a form of destruction (things falling apart) Wissner-Gross shows it to be a state of active play.” [“From Atoms To Bits, Physics Shows Entropy As The Root Of Intelligence,” Forbes, 21 April 2013] The video to which Kosner refers is very interesting and takes about three minutes to watch.
Chris Gorski, an editor for Inside Science News Service, summarizes the results of Wissner-Gross’ and Freer’s work this way, “The researchers suggest that intelligent behavior stems from the impulse to seize control of future events in the environment. This is the exact opposite of the classic science-fiction scenario in which computers or robots become intelligent, then set their sights on taking over the world.” [“Physicist Proposes New Way To Think About Intelligence,” Physics Central, 26 April 2013] That interpretation sounds awfully sinister. Don Monroe offers a less ominous description of the research results. He writes, “The researchers interpreted … behaviors as indications of a rudimentary adaptive intelligence, in that the systems moved toward configurations that maximized their ability to respond to further changes. Wissner-Gross acknowledges that ‘there’s no widely agreed-upon definition of what intelligence actually is,’ but he says that social scientists have speculated that certain skills prospered during evolution because they allowed humans to exploit ecological opportunities.”
The Compute 2020 post notes, “The paper suggests that complex adaptive behaviors can generally emerge as an agent or system attempts to maximize its accessibility to diverse future histories.” It calls this “a startling generalization” that, if true, “may have impacts across a wide spectrum of fields of inquiry including the emergence of life, origins of gravity, roots of intelligent behaviors and driving forces of evolution.” In the video Wissner-Gross claims it could also impact robotics, assistive technologies, manufacturing, agriculture, economic planning, gaming, social media, healthcare, energy, intelligence, defense, logistics, transportation, finance, and insurance.
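To make “maximizing accessibility to diverse future histories” concrete, here is a minimal toy sketch in Python. This is an illustration of the general idea only, not the authors’ Entropica software; all function names and parameters below are invented for the example. An agent on a one-dimensional segment estimates, via random rollouts, the Shannon entropy of where it could end up from each neighboring position, then greedily moves to the position that keeps the most futures open. Started near a wall, it drifts toward the open middle of the segment, where the most future paths remain reachable.

```python
import random
from collections import Counter
from math import log

def rollout_endpoint(pos, size, horizon, rng):
    """Follow a random walk for `horizon` steps, clamped to [0, size-1]."""
    for _ in range(horizon):
        pos = min(size - 1, max(0, pos + rng.choice((-1, 1))))
    return pos

def endpoint_entropy(pos, size, horizon, samples, rng):
    """Shannon entropy (in nats) of the sampled endpoint distribution
    of random futures starting from `pos` -- a Monte Carlo stand-in
    for the paper's causal path entropy."""
    counts = Counter(rollout_endpoint(pos, size, horizon, rng)
                     for _ in range(samples))
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

def entropic_step(pos, size, horizon=12, samples=400, rng=random):
    """Move to whichever neighboring position has the most diverse futures."""
    candidates = [p for p in (pos - 1, pos, pos + 1) if 0 <= p < size]
    return max(candidates,
               key=lambda p: endpoint_entropy(p, size, horizon, samples, rng))

rng = random.Random(0)
pos, size = 1, 21          # start near the left wall of the segment
for _ in range(30):
    pos = entropic_step(pos, size, rng=rng)
print(pos)                 # the agent drifts away from the wall toward
                           # the middle, where more futures stay open
```

Note that the agent has no explicit goal of “reaching the center”; centering falls out of the entropy-maximizing rule alone, which is the behavior the paper’s simulations exhibit in richer settings.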
As the video indicates, the software developed by Wissner-Gross and Freer is called Entropica, which they label “Sapient Software.” You can access more information about the software by clicking this link. Entropica isn’t the Holy Grail of artificial intelligence. Monroe concludes that “the new formulation is not meant to be a literal model of the development of intelligence.” He does report, however, that Wissner-Gross believes that the model points toward a “general thermodynamic picture of what intelligent behavior is.” Max Tegmark of MIT told Monroe that “the paper provides an ‘intriguing new insight into the physics of intelligence. … It’s impressive to see such sophisticated behavior spontaneously emerge from such a simple physical process.'”
Simon DeDeo, a research fellow at the Santa Fe Institute, told Gorski, “It’s a provocative paper. It’s not science as usual.” Jeff Clune, a computer scientist at the University of Wyoming, however, “expressed some reservations about the new research” to Gorski, although he admitted that he “would be very interested to learn more and better understand the mechanism by which they’re achieving some impressive results, because it could potentially help our quest for artificial intelligence.” Clune suggested to Gorski that his reservations “could be due to a difference in jargon used in different fields.” Although Wissner-Gross admits “there is room for improvement,” he told Gorski, “We basically view this as a grand unified theory of intelligence. And I know that sounds perhaps impossibly ambitious, but it really does unify so many threads across a variety of fields, ranging from cosmology to computer science, animal behavior, and ties them all together in a beautiful thermodynamic picture.”
Even if Wissner-Gross’ and Freer’s research doesn’t result in “a grand unified theory of intelligence,” it will nevertheless provide an important building block for the future. As Kosner concludes, “I think this approach is tremendously important for people who design interaction systems. Using the laws of physics instead of explicit goals is remarkably efficient and flexible.” Kosner ended his article with this thought:
“To return to Yeats’ poem, it begins with the lines, ‘Turning and turning in the widening gyre / The falcon cannot hear the falconer.’ … If both the falcon and the falconer are governed by the laws of thermodynamics, they need not hear each other to be bound by the ‘macrostate’ of the widening gyre. Think of this gyre as a cone of possible futures emanating from the point of present moment, and the falcon as the entropic explorer of these ‘future histories.'”
Whatever “future history” we end up pursuing, work like that being done by Wissner-Gross and Freer will make it all the more interesting.