
Cognitive Computing and Human/Computer Interactions

February 3, 2014


“Computers have entered the age when they are able to learn from their own mistakes,” writes John Markoff, “a development that is about to turn the digital world on its head.” [“Brainlike Computers, Learning From Experience,” New York Times, 28 December 2013] Computer systems that run programs capable of learning (either from their own mistakes or from relationships established by analyzing vast amounts of data) are part of the growing field called cognitive computing. Although some people are worried that intelligent computers will develop into autonomous networks, like the infamous Skynet in the “Terminator” movies, and take over the world, Anup Varier believes that the future belongs to teams of humans and computers working together. [“Next up: Humans, systems team in cognitive computing,” PCWorld, 23 December 2013] Varier is not alone in this view. In a post entitled “Feigenbaum Predicts that Artificial Intelligence Will Provide ‘Wow’,” I noted that Professor Edward Feigenbaum, one of the leading lights of the artificial intelligence field, indicated that one of the real “wow” moments in AI came when my good friend and business associate Dr. Douglas B. Lenat twice won a competition using a program called Eurisko. Their success forced the competition’s organizers to ban the Lenat/Eurisko team from further participation. Feigenbaum explained that the importance of Eurisko was that it demonstrated the power of human-AI teams.


After IBM’s Watson computer beat out a couple of human challengers on the game show Jeopardy!, Zachary Lemnios, vice president of strategy for IBM Research, stated that the victory “opened up a new chapter in information technology called cognitive computing — based on the idea of a natural interaction between systems and people.” The evolving relationship between humans and machines was also the key theme of Gartner’s “Hype Cycle for Emerging Technologies, 2013.” Gartner reports it chose “to feature the relationship between humans and machines due to the increased hype around smart machines, cognitive computing, and the Internet of Things.” For more information on the latter topic, read my post entitled “The Rise of the Internet of Things Age.” The important point to be made here is that even though machines are getting better at learning, it is still the human/computer team that holds the most promise for the future. Varier continues:

“It’s no secret that organizations today, across industry, are overwhelmed with so much data that they are unable to make time-critical decisions. Not only is the data growing by leaps and bounds, it is also coming in multiple shapes and forms. ‘These increasingly challenging times need organizations to make tighter decisions in tighter timelines with the consequence of each decision going up,’ Lemnios said. Such a scenario probably calls for a closer look at the interaction between humans and computers. ‘We encourage enterprises to look beyond the narrow perspective that only sees a future in which machines and computers replace humans. In fact, by observing how emerging technologies are being used by early adopters, there are actually three main trends at work,’ said Jackie Fenn, vice president and Gartner fellow. These trends, in Fenn’s view, are ‘augmenting humans with technology — for example, an employee with a wearable computing device; machines replacing humans — for example, a cognitive virtual assistant acting as an automated customer representative; and humans and machines working alongside each other — for example, a mobile robot working with a warehouse employee to move many boxes.’ Of these major trends shaping the emerging technologies domain, IBM is focused on cognitive computing, which is broadly classified under natural-language question-and-answering systems.”

Readers of this blog know that my company, Enterra Solutions®, is also in this field. As I wrote in the Feigenbaum post mentioned above: “Standing on the shoulders of intellectual giants such as Professor Feigenbaum, the work at IBM Watson Research Labs, other leaders in AI, and partners like Doug Lenat, is the business mission of Enterra®. We seek to bridge the gap between mathematical optimization and reasoning through our Cognitive Reasoning Platform™, which makes advanced analytics and insights democratically accessible to business managers seeking to solve some of the most complex problems in the commercial and government sectors.” I agree with Varier and other analysts who insist that framing the future as machines versus humans is not a smart approach. Varier explains:

“Gartner’s studies explain that ‘humans versus machines’ is not a binary decision; sometimes machines working alongside humans is a better choice. IBM’s Watson does background research for doctors, just like a research assistant, to ensure they account for all the latest clinical, research and other information when making diagnoses or suggesting treatments. At a time when humans are clearly reaching the limits of what we can absorb and understand, Gartner suggests the main benefit of having machines working alongside humans is the ability to access the best of both worlds (that is, productivity and speed from machines; emotional intelligence and the ability to handle the unknown from humans). As machines get smarter and start automating more human tasks, humans will need to trust the machines and feel safe.”
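The Watson-for-doctors example follows a simple division of labor: the machine narrows an overwhelming body of information down to a short, ranked list, and the human makes the judgment call. The sketch below is one minimal way to picture that pattern in plain Python; the data, scoring logic, and function names are my own illustrative assumptions, not anything from IBM's system.

```python
# Hypothetical sketch of the "machine working alongside human" pattern:
# the system narrows a large option space; a person makes the final decision.
# All names, data, and scoring logic are illustrative, not from IBM Watson.

def rank_candidates(case_keywords, reference_articles, top_n=3):
    """Score reference articles by keyword overlap and return the best matches."""
    scored = []
    for article in reference_articles:
        overlap = len(case_keywords & article["keywords"])
        scored.append((overlap, article["title"]))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

def human_review(candidates):
    """The machine proposes; the clinician disposes."""
    print("Top supporting literature for review:")
    for i, title in enumerate(candidates, 1):
        print(f"  {i}. {title}")
    # In a real workflow the human would choose here; we simply return the list.
    return candidates

if __name__ == "__main__":
    articles = [
        {"title": "Trial A: beta blockers", "keywords": {"hypertension", "cardiac"}},
        {"title": "Trial B: statin outcomes", "keywords": {"cholesterol"}},
        {"title": "Review C: combined therapy", "keywords": {"hypertension", "cholesterol"}},
    ]
    human_review(rank_candidates({"hypertension", "cholesterol"}, articles))
```

The point of the sketch is the shape of the workflow, not the scoring: the machine handles scale and recall, while the person supplies judgment, context, and accountability.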

Like it or not, cognitive computing is a trend that is growing and computers are getting smarter. Markoff reports, “The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term ‘computer crash’ obsolete.” He continues:

“The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals. In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming. Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.”
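What Markoff describes — absorbing new information while carrying out a task and adjusting to changing signals — is, at its core, what researchers call online learning. The sketch below is ordinary Python with illustrative numbers of my choosing; it demonstrates the learning principle, not the neuromorphic chips themselves.

```python
# Minimal sketch of "adjusting while carrying out a task": an online learner
# that updates a single linear unit after every new observation.
# The data stream and learning rate are illustrative assumptions.

def predict(weights, bias, inputs):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def online_update(weights, bias, inputs, target, learning_rate=0.05):
    """Nudge the weights toward the target after seeing one example."""
    error = target - predict(weights, bias, inputs)
    new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + learning_rate * error
    return new_weights, new_bias

if __name__ == "__main__":
    weights, bias = [0.0, 0.0], 0.0
    # A stream of (inputs, target) pairs arriving while the system is "in use";
    # the underlying rule here is target = x1 + 2 * x2.
    stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([3.0, 3.0], 9.0)] * 200
    for inputs, target in stream:
        weights, bias = online_update(weights, bias, inputs, target)
    print("learned weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
```

Nothing is reprogrammed between observations; the unit simply keeps correcting itself as the signals arrive, which is the behavior Markoff attributes to the new chips, realized here in software at toy scale.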

Varier concludes that cognitive systems will help humans operate in a rapidly changing environment without disruptions. That’s one reason why the most popular class currently being taught at Stanford University is about machine learning. [“Why Is Machine Learning (CS 229) the Most Popular Course at Stanford?” by Anthony Wing Kosner, Forbes, 29 December 2013] Kosner writes:

“Machine learning is the part of artificial intelligence that actually works. You can use it to train computers to do things that are impossible to program in advance. … It turns out that artificial intelligence, and the robotics that is tied to it, consists of two primary systems, control and perception. There has been much progress with control, … but perception has been more difficult. … For robots to act autonomously and intelligently and for other forms of technology to function unobtrusively in the world, this kind of machine learning is essential. It is no wonder that Stanford students can’t get enough of it.”
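Kosner’s point that you can “train computers to do things that are impossible to program in advance” can be illustrated with one of the oldest learning algorithms, the perceptron. The toy example below is plain Python with made-up data points of my own; it learns a decision rule from labeled examples rather than from hand-written if/then logic.

```python
# A tiny perceptron: instead of hand-coding a rule, the program learns one
# from labeled examples. The data points and settings are illustrative.

def train_perceptron(examples, epochs=20, learning_rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Points above the line x + y = 1 are labeled 1; points below are labeled 0.
    labeled_points = [([0.2, 0.3], 0), ([0.9, 0.8], 1), ([0.1, 0.6], 0),
                      ([0.7, 0.9], 1), ([0.4, 0.2], 0), ([0.8, 0.5], 1)]
    weights, bias = train_perceptron(labeled_points)
    test = [0.6, 0.7]  # a new point above the x + y = 1 line
    activation = sum(w * x for w, x in zip(weights, test)) + bias
    print("prediction for", test, "->", 1 if activation > 0 else 0)
```

The programmer never writes the rule “label is 1 when x + y exceeds 1”; the learner infers a boundary from the examples, which is the essential difference between training and conventional programming.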

The professor who teaches the course at Stanford is Andrew Ng. To get a taste of what he teaches, watch the video shown below.

[Embedded video: Andrew Ng on machine learning]

As the field of cognitive computing advances, a lot more of us may join Jeopardy! champion Ken Jennings (borrowing a line from The Simpsons) in declaring, “I, for one, welcome our new computer overlords.” Fortunately, computers are more likely to be our co-workers than our overlords.
