Ever since the first scientist sliced up a flatworm and discovered that the individual pieces could transform themselves back into whole creatures, scientists have been both curious and frustrated. Recently, however, computer scientists from Tufts University decided to let artificial intelligence (AI) have a go at solving the 120-year-old mystery. To their surprise, it did. Katie Collins (@katieecollins) reports, “For the first time ever a computer has managed to develop a new scientific theory using only its artificial intelligence, and with no help from human beings.”[1] Michael Levin, one of the computer scientists who, along with Daniel Lobo, programmed the computer system, told Collins, “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
Fear mongering about artificial intelligence has grabbed the headlines in recent months; so, a bit of cheerful news showing that AI can be beneficial is refreshing. I agree with Levin that AI has a place in scientific research; but, I also believe he is thinking too narrowly. Artificial intelligence can play a positive role in any sector of human activity where data is available and questions need to be answered or insights gained. Glenn McDonald (@glennmcdonald1) asserts, “Over the next 20 years, we’ll likely see technological breakthroughs in hardware, software and wetware that will change everything in an instant, sending research on a totally new trajectory.”[2] Of course, predicting how the future will unfold (and how AI will help shape that future) remains speculative; however, a few experts have been brave enough to take a stab at it. Christopher Martinez, a professor of electrical and computer engineering at the University of New Haven, told McDonald, “Just as the Industrial Revolution changed every aspect of society, the A.I. Revolution will have the same impact. The Industrial Revolution changed manufacturing and the A.I. Revolution will change the intellectual landscape.”
One of the technologies that Martinez believes will have a major impact on the future is cognitive computing. IBM’s Watson is the best-known cognitive computing system. Martinez told McDonald, “Computers similar to Watson will change health care, marketing, education, service industries — almost every occupation will be using A.I.” As President and CEO of a cognitive computing firm, I obviously agree with that assessment. The Enterra® Enterprise Cognitive System™ (ECS) is adaptable to almost any situation and adds semantic reasoning to advanced mathematical calculations to provide unique insights. Professor James Hendler, an artificial intelligence researcher at Rensselaer Polytechnic Institute, provides his views about how AI will enhance the future in the following TEDxBaltimore talk.
If you didn’t watch the video, Hendler’s main points are that AI systems are good at ingesting, storing, and analyzing vast amounts of data — something that is very difficult for humans — and that computers have the ability to bring people together in ways never before possible. Christopher Mims (@Mims) asserts, “The age of intelligent machines has arrived — only they don’t look at all like we expected. Forget what you’ve seen in movies; this is no HAL from ‘2001: A Space Odyssey,’ and it’s certainly not Scarlett Johansson’s disembodied voice in ‘Her.’ It’s more akin to what insects, or even fungi, do when they ‘think.’ (What, you didn’t know that slime molds can solve mazes?) Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives.”[3] Mims goes on to note that narrow AI is proving to be the most useful kind. These systems, Mims notes, “work precisely because their makers have decided to tackle problems that are as narrowly defined as possible.” Narrow AI will touch our lives in many ways in the years ahead; but, because it will be working behind the scenes, most of us will be unaware of its impact.
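Mims’ point about narrow problem definitions (and the slime-mold aside) can be made concrete with a toy example. The sketch below is purely illustrative — a made-up maze, with no connection to any of the systems mentioned above — but it shows how, once a task is narrowed to “find a path on this grid,” a classic algorithm (breadth-first search) dispatches it in a few dozen lines:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Shortest path through a grid maze via breadth-first search.

    grid: list of equal-length strings; '#' is a wall, anything else is open.
    start, goal: (row, col) tuples.
    Returns the path as a list of cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # also serves as the visited set

    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route exists

maze = [
    "#######",
    "#S....#",
    "#.###.#",
    "#...#G#",
    "#######",
]
print(solve_maze(maze, start=(1, 1), goal=(3, 5)))
```

The point is not the algorithm itself but the framing: reduce the problem far enough and it yields completely, which is exactly the narrowness Mims describes.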
Artificial General Intelligence (AGI) is the type of AI that most people worry about getting out of hand and eventually threatening human existence. One scientist working in the field of AGI, Murray Shanahan (@mpshanahan), a professor of cognitive robotics at Imperial College London, believes we’ll see a machine that surpasses human intelligence before mid-century. He told Hugh Langley (@HughLangley), “I would say the chances of it happening in my lifetime are better than fifty-fifty. So within the next 30 years I would say better than fifty-fifty that we’ll achieve human-level AI.”[4] Shanahan admits, however, that all the pieces of the AGI puzzle haven’t yet been discovered. He told Langley, “The amazing thing about humans, and other animals as well, is that we’re so adaptive. So that kind of general intelligence — we’re a way off understanding how to achieve that artificial general intelligence, and I don’t think anybody really knows what the missing pieces are. I think there’s a trick that nature has discovered, that evolution has discovered, that we’re not making the most of yet.” Microsoft co-founder, Paul Allen (@PaulGAllen), agrees with Shanahan that there are missing pieces to the puzzle; but, he is much less sanguine that an AGI system will ever be created.[5] He argues that developing AGI “will take unforeseeable and fundamentally unpredictable breakthroughs.” He concludes:
“While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle — their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance.”
The good news is that we don’t need to develop artificial general intelligence for AI to have a significant and positive impact on our lives and the world in which we live. Most of the articles about AI or cognitive computing discuss the new insights and discoveries that such systems can make. One aspect of cognitive computing that deserves more attention is its ability to make routine decisions and to alert human decision makers when an anomaly occurs (a pattern sketched in code following the excerpt below). Bain analysts Michael C. Mankins and Lori Sherer (@lorisherer) note that decision making is one of the most important aspects of any business. “The best way to understand any company’s operations,” they write, “is to view them as a series of decisions.”[6] They explain:
“People in organizations make thousands of decisions every day. The decisions range from big, one-off strategic choices (such as where to locate the next multibillion-dollar plant) to everyday frontline decisions that add up to a lot of value over time (such as whether to suggest another purchase to a customer). In between those extremes are all the decisions that marketers, finance people, operations specialists and so on must make as they carry out their jobs week in and week out. We know from extensive research that decisions matter — a lot. Companies that make better decisions, make them faster and execute them more effectively than rivals nearly always turn in better financial performance. Not surprisingly, companies that employ advanced analytics to improve decision making and execution have the results to show for it.”
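That framing suggests one concrete way cognitive systems can help: automating the routine decisions in the middle of that spectrum while reserving human judgment for the exceptions. The following is a minimal, hypothetical sketch of the pattern — the replenishment rule, the three-sigma anomaly test, and the `decide_reorder` name are all assumptions made for this example, not a description of Watson, the ECS, or Bain’s analytics:

```python
from statistics import mean, stdev

def decide_reorder(history, on_hand, reorder_point, escalate):
    """Make a routine replenishment decision; hand anomalies to a human.

    history: recent daily demand figures, oldest first (illustrative data).
    on_hand: current inventory level.
    reorder_point: level below which the system normally reorders.
    escalate: callback that routes the case to a human planner.
    Returns an order quantity, or None when a person must decide.
    """
    baseline, latest = history[:-1], history[-1]
    mu, sigma = mean(baseline), stdev(baseline)

    # A demand reading more than three standard deviations from the
    # baseline is something the routine rule was never designed for,
    # so the system alerts a human rather than deciding on its own.
    if sigma > 0 and abs(latest - mu) > 3 * sigma:
        escalate(f"Demand of {latest} vs. baseline mean {mu:.1f}: needs review")
        return None

    # Otherwise this is a routine, fully automatable decision.
    if on_hand < reorder_point:
        return max(0, round(mu * 7) - on_hand)  # cover roughly a week
    return 0

# An ordinary day is decided automatically ...
print(decide_reorder([42, 38, 45, 41, 40], on_hand=120,
                     reorder_point=200, escalate=print))

# ... while an outlier is flagged for a human decision maker.
decide_reorder([42, 38, 45, 41, 400], on_hand=120,
               reorder_point=200, escalate=print)
```

The design choice worth noting is the escalation callback: the system does not stretch its rule to cover the outlier; it recognizes that the case falls outside its narrow remit and hands it off.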
Allowing AI to make routine decisions frees human decision makers to concentrate on more important decisions. Like Hendler, I believe that AI systems will complement human activity and help the world’s best minds work together to solve the globe’s greatest challenges. I think we will be amazed at where AI eventually takes us.
Footnotes
[1] Katie Collins, “Computer Independently Solves 120-Year-Old Biological Mystery,” Wired UK, 5 June 2015.
[2] Glenn McDonald, “2035: Future A.I. Will Revolutionize Society, Economy,” Discovery, 5 June 2015.
[3] Christopher Mims, “It’s Time to Take Artificial Intelligence Seriously,” The Wall Street Journal, 24 August 2014.
[4] Hugh Langley, “AI will likely match human intelligence in the next 30 years, says robot expert,” TechRadar, 9 July 2014.
[5] Paul G. Allen, “Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011.
[6] Michael C. Mankins and Lori Sherer, “Creating value through advanced analytics,” Bain Brief, 11 February 2015.