
Artificial Intelligence and the Era of Big Data

November 8, 2011


Despite initial reports that Apple had disappointed everyone by introducing the iPhone 4S instead of the much-rumored iPhone 5, the iPhone 4S has achieved excellent sales. One of the reasons is the new feature included with the phone called Siri, a “smart voice recognition software, capable of intuitively answering a wide array of questions from iPhone users.” [“Apple’s Siri and the Future of Artificial Intelligence,” by E.D. Kain, Forbes, 15 October 2011] Alexis Madrigal, a senior editor at The Atlantic, reports that Siri is “a voice-driven artificial intelligence system created with DARPA funds and, … if the hype holds up, the software will be the biggest deployment of human-like AI the world has seen.” [“Siri: The Perfect Robot for Our Time,” 12 October 2011] Before continuing with a discussion of Madrigal’s thoughts, I offer a quick primer on artificial intelligence. According to an Oracle site, “Artificial Intelligence (AI) is the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent.” Such a definition raises the question, “What is intelligence?” Professor John McCarthy from Stanford University writes this about intelligence:

“Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines. … Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered ‘somewhat intelligent’.” [“What is Artificial Intelligence?” 12 November 2007]

Armed with that brief explanation, let’s continue with Madrigal’s thoughts about Siri. He writes:

“The difference between Siri and what came before is massive amounts of data. Data allowed the construction of algorithms that decipher voice. Data on the Internet allows Siri to have a lot more situational awareness than it would have had in the past. Data about your location massively increases the usefulness of anything an assistant could offer.”

In other words, Siri is another indication that we are indeed entering the Era of Big Data. For more discussion on the Era of Big Data, read my post entitled The “Big Data” Dialogues, Part 6. Concluding his article on Siri, Madrigal writes:

“The genius of Siri is to combine the new type of information bot with the old type of human-helper bot. Instead of patterning Siri on a humanoid body, Apple used a human archetype — the secretary or assistant. To do so, Apple gave Siri a voice and a set of skills that seem designed to make everyone feel like Don Draper. Siri listens to you and does what you say. ‘Take this down, Siri… Remind me to buy Helena flowers!’ And if early reviews are any indication, the disembodied robot could be the next big thing in how we interact with our computers.”

If you didn’t get Madrigal’s “Don Draper” reference, Draper is the leading character in AMC’s highly acclaimed television series Mad Men. Kain writes that Siri’s capabilities are “reminiscent of [IBM’s] Watson’s ability to quickly parse through enormous amounts of data on the quiz show.” He asks, “How long before Watson is in our pocket, and Siri is just a thing of the past?” Both Kain and Madrigal agree that what makes both Siri and Watson possible is the availability of big data. John Stokes isn’t particularly impressed with Siri’s artificial intelligence (AI) credentials — he calls the software “a chatterbot” — but, he writes, “as Siri’s repertoire of canned responses grows, Apple could end up with a bona fide artificial intelligence, at least in the ‘weak AI’ sense. Siri may be yet another chatterbot, but it’s a chatterbot with a cloud back-end, and that cloudy combination of real-time analytics and continuous deployment makes all the difference.” [“With Siri, Apple Could Eventually Build A Real AI,” Wired, 16 October 2011] Stokes writes that “Big Data” can result in “big smarts.” He concludes:

“[Some critics may complain that what Siri does is] not really ‘AI’ because all Siri is doing is shuffling symbols around according to a fixed set of rules without ‘understanding’ any of the symbols themselves. But for the rest of us who don’t care about the question of whether Siri has ‘intentions’ or an ‘inner life,’ the service will be a fully functional AI that can [respond] flawlessly and appropriately to a larger range of input than any one individual is likely to produce over the course of a typical interaction with it. At that point, a combination of massive amounts of data and a continuous deployment model will have achieved what clever [natural language processing] algorithms alone could not: a chatterbot that looks enough like a ‘real AI’ that we can actually call it an AI in the ‘weak AI’ sense of the term.”
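Stokes’ description boils down to a simple architectural pattern: match incoming requests against a store of canned responses, log everything that misses, and let a back-end process continuously deploy new responses mined from those misses. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the names and rules are invented for illustration and say nothing about how Siri is actually built.

    import re

    class Chatterbot:
        """A toy 'weak AI' assistant: canned pattern-to-response rules plus a
        log of unmatched queries that a back-end process could mine in order
        to grow the repertoire over time."""

        def __init__(self):
            # The "canned responses" Stokes describes, keyed by regex pattern.
            self.rules = {
                r"\bremind me to (?P<task>.+)": "OK, I'll remind you to {task}.",
                r"\bweather\b": "Here is the forecast for your current location.",
            }
            self.unmatched = []  # queries the bot could not handle

        def respond(self, text):
            for pattern, template in self.rules.items():
                match = re.search(pattern, text, re.IGNORECASE)
                if match:
                    return template.format(**match.groupdict())
            # A miss: log it so the "cloud back-end" can learn a new rule later.
            self.unmatched.append(text)
            return "Sorry, I can't help with that yet."

        def deploy_rule(self, pattern, template):
            """Continuous deployment: push a new canned response to the bot."""
            self.rules[pattern] = template

    bot = Chatterbot()
    print(bot.respond("Remind me to buy Helena flowers"))  # handled by a rule
    print(bot.respond("Book me a table for two"))          # logged as a miss
    bot.deploy_rule(r"\bbook me a table\b", "Searching for nearby restaurants.")
    print(bot.respond("Book me a table for two"))          # now handled

The crucial point, which the sketch only gestures at, is that the “learning” happens server-side: every user’s misses feed the analytics that decide which new responses to deploy, which is why big data, rather than clever algorithms alone, does the heavy lifting.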

In another article, Kain agrees with Stokes that “the technology undergirding the software and iPhone hardware will continue to improve.” [“Neuromancing the Cloud: How Siri Could Lead to Real Artificial Intelligence,” Forbes, 17 October 2011] He also agrees that Siri “may not be the AI we had in mind, but it also probably won’t be the final word in Artificial Intelligence either. Other companies, such as IBM, are working to develop … ‘cognitive computers’ as well.” As for the future, Kain writes:

“While the Singularity may indeed be far, far away, it’s still exciting to see how some forms of A.I. may emerge at least in part through cloud-sourcing.”

For readers unfamiliar with the term “singularity,” it is a concept borrowed from physics that describes an event horizon beyond which things change so much that no credible predictions can be made about the future. Inventor Ray Kurzweil believes that one such event horizon will arrive the day computers become smarter than humans. He believes that this event horizon is just around the corner and wrote a book entitled The Singularity Is Near: When Humans Transcend Biology. To learn a little more about Kurzweil, read my post entitled Looking towards the Future with Ray Kurzweil. For a fuller explanation of the singularity, you can watch the attached video and hear it in Kurzweil’s own words.


From Kain’s comment above that “the Singularity may indeed be far, far away,” it’s clear that Kurzweil has his skeptics. Two such skeptics are Paul Allen, co-founder of Microsoft, and Mark Greaves, a computer scientist at Vulcan. They claim, “the singularity itself is a long way off.” [“Paul Allen: The Singularity Isn’t Near,” Technology Review, 12 October 2011] Allen and Greaves explain:

“While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. … By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045. This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.”

In other words, they are not at fundamental odds with Kurzweil; they just believe that he is using “black box” thinking. Much of the magic that leads to the singularity (i.e., the “fundamentally unpredictable breakthroughs”) takes place in the “black box,” and Allen and Greaves don’t believe you can predict when that “black box” is going to be invented. They continue:

“Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games.”
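The brittleness Allen and Greaves describe is easy to make concrete. The toy example below is entirely hypothetical and not drawn from any real system: it looks competent as long as every question fits its single internal assumption, and produces nonsense the moment one does not.

    # A toy "expert" whose competence ends exactly where its rule set ends.
    CAPITALS = {"france": "Paris", "japan": "Tokyo", "brazil": "Brasilia"}

    def capital_expert(question):
        # Internal assumption: every question ends with a country name,
        # as in "What is the capital of Japan?"
        country = question.lower().rstrip("?").split()[-1]
        # Outside its focus area it does not fail gracefully -- it answers anyway.
        return CAPITALS.get(country, "Paris")

    print(capital_expert("What is the capital of Japan?"))  # Tokyo
    print(capital_expert("Who won the chess match?"))       # "Paris" -- nonsense

Nothing inside the function “knows” that the second question is about chess; its performance boundary is set entirely by its defining assumption, which is exactly the point Allen and Greaves are making.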

In my previous discussions about big data, I have mentioned this boundary issue, although not in exactly those terms. One reason we use an ontology at Enterra Solutions® is that it can make some of the relationship connections that systems like Watson can’t; a rough sketch of that idea follows the next quote. As Allen and Greaves explain below, even that approach has its limits. They write:

“Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.”
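Returning to the ontology point raised before the quote: the value of an ontology is that chained relationships can yield answers no single stored fact contains. The sketch below is a generic illustration of that idea using a tiny triple store and a transitive “is_a” relation; it is not Enterra’s actual technology.

    # A minimal ontology sketch: subject-predicate-object triples plus
    # transitive reasoning over the "is_a" relation. Generic illustration only.
    TRIPLES = {
        ("espresso", "is_a", "coffee"),
        ("coffee", "is_a", "beverage"),
        ("beverage", "sold_in", "grocery store"),
    }

    def is_a(entity, category):
        """True if 'entity is_a category' holds directly or through a chain."""
        frontier, seen = {entity}, set()
        while frontier:
            current = frontier.pop()
            if current == category:
                return True
            seen.add(current)
            frontier |= {o for (s, p, o) in TRIPLES
                         if s == current and p == "is_a" and o not in seen}
        return False

    print(is_a("espresso", "beverage"))       # True, inferred across two facts
    print(is_a("espresso", "grocery store"))  # False; a different relation

A production ontology would encode many more relation types and a far richer inference engine, but even this toy shows the kind of connection (espresso is a beverage) that never appears as a single stored fact.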

Regardless of who is correct, for most of the activities for which humans want to use artificial intelligence, being “somewhat intelligent,” as Professor McCarthy put it, is probably good enough. And “good enough” gets better as more and more data becomes available and ways to access it improve.
