Artificial Intelligence: Ascendant but not Transcendent

May 12, 2014

The release of the movie “Transcendence,” starring Johnny Depp, shone a spotlight on the subject of artificial intelligence. In the movie, Dr. Will Caster (played by Depp) wants to create a sentient machine (i.e., a computer that achieves self-awareness through artificial general intelligence). The movie’s title comes from Caster’s belief that the combination of human and machine consciousness can transcend the limits of the world as we know it. Let’s just say that not everyone in the movie wants to see him succeed. In the real world, there are also concerns that groups of Dr. Casters might be out there plotting humankind’s demise. There have been a number of successes in the field of artificial intelligence (AI), but not necessarily advances toward achieving transcendence or world domination by machines. A year ago (March 2013), at the AAAI spring symposia held at Stanford University, a “potpourri of innovative projects in process around the world [were discussed] by academic researchers in the artificial intelligence field.” [“What’s new in AI? Trust, Creativity, and Shikake,” by LaBlogga, Broader Perspectives, 31 March 2013] According to the article, the projects could be “grouped into two overall categories: those that focus on computer-self interaction or computer-computer interaction, and those that focus on human-computer interaction or human sociological phenomena.” The groupings are shown below:

“Computer self-interaction or computer-computer interaction

  • Designing Intelligent Robots: Reintegrating AI II
  • Lifelong Machine Learning
  • Trust and Autonomous Systems
  • Weakly Supervised Learning from Multimedia

Human-computer interaction or human sociological phenomena

  • Analyzing Microtext
  • Creativity and (Early) Cognitive Development
  • Data Driven Wellness: From Self-Tracking to Behavior Change
  • Shikakeology: Designing Triggers for Behavior Change

This last topic, Shikakeology, is an interesting new category that is completely on-trend with the growing smart matter, Internet-of-things, Quantified Self, Habit Design, and Continuous Monitoring movements. Shikake is a Japanese concept, where physical objects are embedded with sensors to trigger a physical or psychological behavior change. An example would be a trash can playing an appreciative sound to encourage litter to be deposited.”
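To make the Shikake trigger pattern concrete, here is a minimal sketch of the trash-can example. Every name in it (TrashCanShikake, on_lid_sensor, the sound file) is a hypothetical stand-in for whatever sensor and audio APIs a real deployment would use:

```python
# Toy illustration of a Shikake-style trigger: a sensor event on a
# physical object fires a small behavioral nudge (an appreciative sound).
# All names here are hypothetical stand-ins for real sensor/audio APIs.

class TrashCanShikake:
    def __init__(self, play_sound):
        # play_sound: any callable that produces the appreciative sound
        self.play_sound = play_sound

    def on_lid_sensor(self, event):
        # Fire the nudge only when the sensor reports a deposit.
        if event == "item_deposited":
            self.play_sound("thank_you.wav")

# Usage with a stand-in sound player
shikake = TrashCanShikake(play_sound=lambda f: print(f"playing {f}"))
shikake.on_lid_sensor("item_deposited")  # playing thank_you.wav
```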

The last paragraph of that quote addresses the two sides of artificial intelligence (namely, thinking and acting). AI is not particularly useful if only thinking is taking place. Some sort of behavior (or informed action) is necessary to make a thinking machine useful. That “behavior” may emerge as an insight into a challenging subject or as a physical manifestation, such as the autonomous movement of a robot. To be truly useful, a thinking machine (or cognitive computer) needs to have some meaningful interaction with the real world.

To achieve that end, computers use agents. An agent is “anything that perceives its environment through sensors and acts on that environment through actuators.” [“CS 331: Artificial Intelligence/Intelligent Agents,” Oregon State University, Spring 2012] Agents can have simple functions, like obtaining the price of a certain stock. Once such information is obtained by the agent, it is fed into the algorithm that does the thinking. For example, an algorithm may be programmed to issue a “sell” order if a stock reaches a target price. If the trigger point is reached, the computer once again interacts with the environment and sells the stock. The agent used in this example is a simple reflex agent (a minimal sketch appears below). Agents, however, can get much more complex. Some agents, called model-based reflex agents, remember information they have gleaned in the past. This is called a “percept sequence” (i.e., “a complete history of everything the agent has ever perceived. Think of this as the state of the world from the agent’s perspective”). There is also something called an “agent function (or Policy)” that “maps the percept sequence to action (i.e., determines agent behavior).” Finally, there is an “agent program” that “implements the agent function.”

The OSU course material emphasizes that you want an agent to act rationally. It defines “rationality” as the ability to “do the action that causes the agent to be most successful.” It then asks the question, “How do you define success?” That’s an important question because the computer can’t learn if it isn’t rewarded for being correct. The course material indicates that “rationality depends on 4 things.” Those four things are:

1. Performance measure of success.
2. Agent’s prior knowledge of environment.
3. Actions agent can perform.
4. Agent’s percept sequence to date.
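Expressed as code, the selection rule implied by those four ingredients might look like the following minimal sketch; the expected_performance scoring function is a hypothetical stand-in for whatever performance measure the designer supplies:

```python
def rational_action(actions, percept_sequence, prior_knowledge,
                    expected_performance):
    """Select the action expected to maximize the performance measure,
    given the full percept sequence and the agent's built-in knowledge.
    expected_performance is a hypothetical, designer-supplied scoring
    function embodying the 'performance measure of success'."""
    return max(
        actions,
        key=lambda a: expected_performance(a, percept_sequence, prior_knowledge),
    )
```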

The material goes on to state that “for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.” At Enterra Solutions®, we call this a Sense, Think, Act, and Learn® process. Most cognitive computing systems, like Enterra’s Cognitive Reasoning Platform™ (CRP), employ autonomous agents so that the learning process can proceed unattended. As the OSU material explains, “Autonomous agents learn from experience to compensate for partial or incorrect prior knowledge.” The material describes four types of agents. In addition to the two already mentioned (namely, simple reflex and model-based reflex agents), it discusses goal-based and utility-based agents. Below is how each of these agent types is described:

Simple reflex agents — select actions using only the current percept and work on condition-action rules (i.e., if condition then action).
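The stock example above reduces to a single condition-action rule. Here is a minimal sketch, with hypothetical get_quote and place_sell_order stand-ins where a real agent would call a brokerage API:

```python
def simple_reflex_stock_agent(symbol, target_price, get_quote, place_sell_order):
    # Simple reflex agent: acts on the current percept only.
    price = get_quote(symbol)       # sense: perceive the environment
    if price >= target_price:       # think: condition-action rule
        place_sell_order(symbol)    # act: affect the environment

# Usage with stand-in functions
simple_reflex_stock_agent(
    "ACME", 100.0,
    get_quote=lambda s: 101.5,
    place_sell_order=lambda s: print(f"sell {s}"),
)
```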

Model-based reflex agents — maintain an internal state that keeps track of the parts of the world they can’t currently see; to do that, they need a model (i.e., one that encodes knowledge about how the world works).
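A minimal sketch of the idea follows; the update_state model and the rules table are hypothetical stand-ins for whatever world model a real agent would carry:

```python
class ModelBasedReflexAgent:
    """Minimal sketch of a model-based reflex agent. update_state is a
    hypothetical model encoding how the world works; rules maps an
    inferred world state to an action."""

    def __init__(self, update_state, rules, initial_state=None):
        self.state = initial_state      # internal picture of the world
        self.percept_sequence = []      # everything perceived so far
        self.update_state = update_state
        self.rules = rules

    def step(self, percept):
        self.percept_sequence.append(percept)
        # Fold the new percept into the internal state, covering the
        # parts of the world the agent can't observe right now.
        self.state = self.update_state(self.state, percept)
        # Condition-action rules fire on the inferred state,
        # not just the raw percept.
        return self.rules.get(self.state, "no_op")
```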

Goal-based agents — have goal information that guides their actions (i.e., information that looks to the future). Sometimes achieving the goal is simple (e.g., requires a single action) and, at other times, the goal requires reasoning about long sequences of actions. Goal-based agents are flexible because they can be reprogrammed by simply changing the goal.
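As a sketch of how a goal-based agent might reason about a sequence of actions, the following uses a simple breadth-first search over a hypothetical transition model; note that changing the goal argument “reprograms” the agent without touching the code:

```python
from collections import deque

def goal_based_plan(start, goal, actions, result):
    """Minimal goal-based agent core: breadth-first search for a
    sequence of actions leading from start to goal. result is a
    hypothetical transition model giving the state produced by taking
    an action in a state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                  # sequence of actions to execute
        for action in actions:
            nxt = result(state, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                          # goal unreachable

# Usage on a toy one-dimensional world: move along a number line
plan = goal_based_plan(
    0, 3, ["left", "right"],
    result=lambda s, a: s + 1 if a == "right" else s - 1,
)
print(plan)  # ['right', 'right', 'right']
```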

Utility-based agents — are used when there are many paths to the goal. A utility function measures which states are preferable to others.
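Here is a minimal sketch of that preference-driven choice, with a hypothetical utility function that trades off cost against travel time:

```python
def utility_based_choice(paths, utility):
    """Minimal utility-based selection: when several paths all reach
    the goal, prefer the one whose outcome scores highest under the
    (designer-supplied, here hypothetical) utility function."""
    return max(paths, key=utility)

# Usage: three routes all reach the goal; utility ranks them
routes = [
    {"name": "highway",    "cost": 5.0, "minutes": 20},
    {"name": "back_roads", "cost": 1.0, "minutes": 35},
    {"name": "toll_road",  "cost": 8.0, "minutes": 15},
]
best = utility_based_choice(routes, utility=lambda r: -r["cost"] - 0.2 * r["minutes"])
print(best["name"])  # back_roads
```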

From those descriptions, you can tell that agents can be useful, but they won’t achieve sentience. That’s why most efforts to use AI in business are referred to as narrow or weak AI. Such efforts may not lead to transcendence, but they are in the ascendant. Lee Bell writes, “Artificial Intelligence is becoming more commonplace by the day.” [“AI will play a vital role in our future, just don’t expect robot butlers,” The Inquirer, 14 March 2014] Bell asked Dr. Kevin Curran, a technical expert at the Institute of Electrical and Electronics Engineers (IEEE), whether artificial intelligence “will become a fundamental facet of human life or will merely continue to churn out less innovative robot gadgets that aren’t really good for anything.” Curran told him:

“The scope of Artificial Intelligence (AI) is huge. We tend to associate AI in its grandest form as a Humanoid Robot communicating with us as portrayed in movies such as Blade Runner. The truth is more mundane but shows that AI software is running underneath all sorts of modern technological tasks from autopilot to the magnificent gyroscope ability of Segways. Anywhere that ‘fast fuzzy type’ decisions needs to be made – there is some Artificial Intelligence involved. … A simple search on Google is basically putting AI to work. Language, speech, translation, and visual processing all rely on Machine Learning or AI. … Humans doing tedious automated tasks have already become replaced. I see no reason why this trend will not continue.”

Dominic Basulto agrees with Curran. He writes, “Artificial intelligence shows signs of becoming the next big trend for tech start-ups in Silicon Valley.” [“Artificial intelligence is the next big tech trend. Here’s why,” Washington Post, 25 March 2014] Basulto goes on to state that the narrow AI efforts being pursued by companies “to solve smaller, real-world problems” are going to be “good enough” to provide companies with a competitive edge in the years ahead.
