“It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” writes the renowned theoretical physicist Stephen Hawking, along with his colleagues Stuart Russell, Max Tegmark, and Frank Wilczek. “But this would be a mistake, and potentially our worst mistake in history.” [“Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’” The Independent, 1 May 2014] Anything “sensational” that Hawking writes concerning science and technology garners headlines, and most news sources have trumpeted his and his colleagues’ warnings about the dangers of developing artificial intelligence (AI) while ignoring what they had to say about its benefits. Concerning the benefits of AI, they wrote:
“The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.”
The next line, however, is the one that garnered all of the attention: “Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Hawking and his colleagues go on to discuss autonomous weapons, workforce dislocations, and economic transformation. They continue:
“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. … One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. … Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”
Although I agree that artificial general intelligence (AGI) needs to be developed carefully, it takes only one “evil genius” to undo the best-laid plans that more ethical scientists put into place. That doesn’t mean we should throw up our hands in surrender to an inevitable future in which machines rule the earth. It means doing exactly what Hawking, Russell, Tegmark, and Wilczek suggest — doing what we can to improve our chances of reaping the benefits and avoiding the risks. What concerns people most about artificial intelligence is that machines can learn. The question is, what will they learn? The simple answer is that they will learn to do better the task they have been given. Dylan Love writes, “Machine learning is a computer’s way of learning from examples, and it is one of the most useful tools we have for the construction of artificially intelligent systems. It starts with the effort of incredibly talented mathematicians and scientists who design algorithms (a fancy word for mathematical recipes) that take in data and improve themselves to better interact with that data. The algorithms effectively ‘learn’ how to be better at their jobs.” [“What The Heck Is Machine Learning?” Business Insider, 3 May 2014] The secret to a better future is ensuring that smart machines are given the right jobs to do. Most of our lives are already being influenced by learning machines. Martin Hack notes, “When Amazon recommends a book you would like, Google predicts that you should leave now to get to your meeting on time, and Pandora magically creates your ideal playlist, these are examples of machine learning over a Big Data stream.” [“Use Data to Tell the Future: Understanding Machine Learning,” Wired, 17 March 2014] Hack continues:
“Machine learning is the modern science of finding patterns and making predictions from data based on work in multivariate statistics, data mining, pattern recognition, and advanced/predictive analytics. Machine learning methods are particularly effective in situations where deep and predictive insights need to be uncovered from data sets that are large, diverse and fast changing — Big Data. Across these types of data, machine learning easily outperforms traditional methods on accuracy, scale, and speed. For example, when detecting fraud in the millisecond it takes to swipe a credit card, machine learning rules not only on information associated with the transaction, such as value and location, but also by leveraging historical and social network data for accurate evaluation of potential fraud.”
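To make that concrete, here is a deliberately simplified sketch of the kind of “learning from examples” Love describes, applied to Hack’s fraud scenario. It is purely illustrative, not Hack’s actual system: it assumes the scikit-learn library is available, fabricates a few transaction features (amount, distance from home, and recent transaction count), and trains a model to score a new card swipe.

```python
# A toy illustration (not Hack's actual system) of scoring a card swipe
# with a model trained on labeled examples of past transactions.
# Requires scikit-learn; all data below is fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Features per transaction: [amount_usd, miles_from_home, txns_last_hour]
X_train = [
    [25.0,    2.0, 1],   # routine purchases...
    [60.0,    5.0, 2],
    [12.0,    1.0, 1],
    [900.0, 400.0, 6],   # ...and known-fraudulent patterns
    [700.0, 350.0, 5],
    [1500.0, 800.0, 8],
]
y_train = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraud

# The model "learns from examples": it fits weights that separate
# the legitimate transactions from the fraudulent ones
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Once trained, scoring a new swipe is nearly instantaneous
swipe = [[850.0, 500.0, 7]]
print(f"fraud probability: {model.predict_proba(swipe)[0][1]:.2f}")
```

A real fraud system would, as Hack notes, also fold in historical and social network data, but the pattern is the same: the algorithm improves its judgment by fitting itself to the examples it is given.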
Alex Heber asserts, “Machine learning has advanced so much in the last decade it’s difficult to determine what’s going to be possible in the future.” [“CeBIT 2014: Three Ways Machine Learning Is Changing The World,” Business Insider, 5 May 2014] To get an idea of what could lie ahead, Heber interviewed Jeremy Howard, a data strategist at the University of San Francisco. “Howard outlined three concrete ways machine learning will change the world we live in by understanding humans.” The first area discussed by Howard was the medical field.
“Cancer diagnostics — Scientists are already training computers to read breast cancer pathology reports which take about four expert pathologists to read, understand and reach a decision about which areas to treat. ‘A machine learning algorithm was trained to automatically identify these areas and when they benchmarked them they found out it was more accurate than the best of the human pathologists,’ Howard said. ‘The idea of looking at images and understanding something particularly at a level beyond experts is something traditionally computers could never do and yet now, at least in that area of research, we’re at a point where computers can work better than the best people.'”
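A deliberately simplified sketch can illustrate the benchmarking workflow Howard describes: train a classifier on labeled, image-derived features, then measure its accuracy on cases it has never seen. The two features and all of the data below are fabricated for illustration; real pathology systems extract far richer features from the images themselves.

```python
# A toy stand-in for the benchmarking Howard describes: train a model on
# labeled image-derived features, then check accuracy on held-out cases.
# Requires scikit-learn; features and labels are fabricated.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import random

random.seed(0)
# Fabricated feature vectors: [cell_density, avg_nucleus_size]
benign    = [[random.gauss(0.3, 0.05), random.gauss(0.4, 0.05)] for _ in range(100)]
malignant = [[random.gauss(0.7, 0.05), random.gauss(0.8, 0.05)] for _ in range(100)]
X, y = benign + malignant, [0] * 100 + [1] * 100

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Benchmarking": accuracy on tissue regions the model has never seen
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```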
Cognitive computing systems (i.e., machines that can learn) should be able to assist in drug discovery and in tracking the spread of communicable diseases, as well as assist with diagnoses and recommend courses of treatment. When combined with their ability to detect health insurance fraud, these systems could finally help us get a handle on healthcare costs. The next benefits discussed by Howard concern natural language processing.
“Computers learning to read — Computers haven’t been able to read and understand the subtleties of human language. But this is changing, with a study coming out of Stanford that has created an algorithm that is teaching computers to read language. ‘A Stanford researcher has been able to come up with a machine learning algorithm that was able to uncover the patterns [of human language],’ Howard said. ‘The accuracy of this algorithm is only about 5 per cent less than the agreement of humans with each other. This is now approaching human levels of ability to understand the sentiment of natural language.’”
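The Stanford work Howard cites relied on sophisticated models trained over parsed sentences. The toy version below, which assumes scikit-learn is available and uses fabricated training sentences, shows only the basic shape of the task: learning to map text to a sentiment label from labeled examples.

```python
# A deliberately simple stand-in for the task Howard describes: learning
# to label the sentiment of sentences from examples. (The Stanford work
# used far richer models; this toy version relies on word counts.)
# Requires scikit-learn; the training sentences are fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_sentences = [
    "an absolutely wonderful, moving film",
    "sharp writing and a brilliant cast",
    "a tedious, predictable mess",
    "dull characters and a boring plot",
]
labels = ["positive", "positive", "negative", "negative"]

# Convert sentences to word-count vectors, then fit a simple classifier
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_sentences, labels)

print(classifier.predict(["a brilliant and moving story"]))   # ['positive']
print(classifier.predict(["a predictable and boring film"]))  # ['negative']
```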
Natural language processing is becoming an increasingly important task for computers because so much of the data currently being created is unstructured — meaning it isn’t found neatly organized in rows and columns on a spreadsheet. Identifying single words isn’t the challenge; the challenge is putting each word in context. The word “tank,” for example, could refer to a military tank, an automobile gas tank, or a fish tank. At Enterra Solutions®, we complement artificial intelligence algorithms with the world’s largest ontology, which provides the natural language context behind many of our analytical solutions.
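In the spirit of the classic Lesk algorithm, a few lines of code can illustrate the disambiguation problem: pick the sense of “tank” whose signature words overlap most with the words around it. This is only a sketch; the hand-written sense signatures below stand in for what an ontology or a large corpus would supply in practice.

```python
# A minimal word-sense disambiguation sketch: choose the sense whose
# signature words best overlap the surrounding sentence. The signatures
# are hand-written stand-ins for an ontology's knowledge.

SENSES = {
    "military vehicle": {"army", "armor", "battle", "soldiers", "war"},
    "fuel container":   {"gas", "fuel", "gallons", "car", "fill"},
    "fish aquarium":    {"fish", "water", "aquarium", "glass", "pet"},
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().split())
    # Pick the sense sharing the most words with the context
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(disambiguate("The soldiers drove the tank into battle"))  # military vehicle
print(disambiguate("I need to fill the gas tank of my car"))    # fuel container
print(disambiguate("The fish swam around its glass tank"))      # fish aquarium
```

The final area discussed by Howard involves object recognition.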
“Computer object recognition — In a recent study thousands of images and thousands of sentences were put into an algorithm and a computer was able to match the picture with each sentence describing it. ‘This algorithm is close to human level of performance with coming up with appropriate captions,’ Howard said. ‘We can now not only understand images and read text, we can actually bring the two together.’”
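One common way to frame that image-sentence matching, sketched below with fabricated numbers, is to map both images and sentences into a shared vector space and pair each image with the nearest sentence. Real systems learn these embeddings from large datasets; the three-dimensional vectors here are invented purely to show the matching step.

```python
# A toy illustration of image-caption matching: images and sentences live
# in a shared vector space, and each image is paired with the sentence
# whose vector is most similar. The embeddings are fabricated; real
# systems learn them from thousands of image-sentence pairs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

image_embeddings = {
    "photo_dog.jpg":   [0.9, 0.1, 0.0],
    "photo_beach.jpg": [0.1, 0.8, 0.2],
}
sentence_embeddings = {
    "a dog catching a frisbee":   [0.8, 0.2, 0.1],
    "waves rolling onto a beach": [0.2, 0.9, 0.1],
}

# Pair each image with its nearest sentence in the shared space
for image, img_vec in image_embeddings.items():
    best = max(sentence_embeddings, key=lambda s: cosine(img_vec, sentence_embeddings[s]))
    print(f"{image} -> '{best}'")
```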
I’m a bit surprised that Howard didn’t address other areas, like agriculture, smart cities, and climate change, in which cognitive computing systems are likely to prove extremely beneficial in the years ahead. On balance, I believe that artificial intelligence will prove to be more of a benefit than a bane to mankind. That doesn’t mean, however, that we shouldn’t proceed with caution.