
Getting to Know Artificial Intelligence Terms

May 3, 2021


In today’s business environment, the term “artificial intelligence” (AI) is tossed easily into many conversations. People seem to understand, in a general way, what is being discussed; however, according to Michael I. Jordan, a computer science professor at the University of California, Berkeley, many people are still confused. He explains, “People are getting confused about the meaning of AI in discussions of technology trends — that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans. We don’t have that, but people are talking as if we do.”[1] Although I suffer from no delusions that people will stop using the terms artificial intelligence or AI when discussing various cognitive technologies, getting to know terms associated with AI can help business leaders better understand those conversations. In Oscar Hammerstein’s and Richard Rodgers’ brilliant musical “The King and I,” there is a song entitled “Getting to Know You.” Part of the lyrics state:

 

Getting to know you
Getting to feel free and easy
When I am with you
Getting to know what to say
Haven’t you noticed
Suddenly I’m bright and breezy?
Because of all the beautiful and new
Things I’m learning about you
Day by day

 

Some business leaders may never “feel free and easy” when discussing AI; however, they can feel bright and breezy as they learn beautiful and new things about the subject — including the terms currently being used in AI discussions. The staff at the Enterprisers Project explain, “With interest growing in artificial intelligence, it’s becoming increasingly important for IT executives and business leaders to cut through the hype and occasional confusion around AI concepts and technologies. It’s easy to get confused when discussing AI. It’s not uncommon to hear AI and machine learning used interchangeably, despite clear distinctions between the two.”[2] The problem is that there are no universally accepted definitions for many AI-related terms. Below is a list of some of the terms with which business leaders should be acquainted.

 

Glossary of Terms

 

Advanced Analytics: Although not AI per se, most cognitive technologies have advanced analytics embedded in them. There are primarily four types of analytics: 1. Descriptive Analytics (what happened in the past); 2. Diagnostic Analytics (why something happened); 3. Predictive Analytics (what can happen next); and, 4. Prescriptive Analytics (what you should do to achieve a particular outcome).
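To make the distinction concrete, here is a minimal Python sketch (with invented monthly sales figures) contrasting descriptive analytics, which summarizes what happened, with a very simple form of predictive analytics, which projects what could happen next:

```python
import numpy as np

# Invented historical data: units sold over six months.
monthly_sales = np.array([120, 135, 150, 160, 175, 190])

# Descriptive analytics: summarize the past.
print("Average monthly sales:", monthly_sales.mean())
print("Growth over the period:", monthly_sales[-1] - monthly_sales[0])

# Predictive analytics (a very simple version): fit a trend line and
# project the next month. Real systems use far richer models.
months = np.arange(len(monthly_sales))
slope, intercept = np.polyfit(months, monthly_sales, 1)
print("Next-month forecast:", round(slope * len(monthly_sales) + intercept, 1))
```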

 

Algorithm: Technology writer Cynthia Harvey writes, “The dictionary definition for an algorithm is ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.’ In layman’s terms, when we are talking about algorithms, we are talking about processes, usually processes related to math. When you were in third or fourth grade, you learned the algorithm for long division. You learned a process that involved dividing, multiplying, subtracting and bringing down the next digit. When we talk about algorithms for AI and machine learning, we’re talking about the same kinds of processes — just a lot more complex.”[3]
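Harvey’s long-division analogy translates directly into code. The sketch below (a hypothetical helper of our own, not anything from the cited article) walks through the same schoolbook steps of dividing, multiplying, subtracting, and bringing down the next digit:

```python
def long_division(dividend: int, divisor: int) -> str:
    """The schoolbook long-division algorithm, one digit at a time."""
    quotient, remainder = "", 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)        # bring down the next digit
        quotient += str(remainder // divisor)          # divide
        remainder -= (remainder // divisor) * divisor  # multiply and subtract
    return f"{int(quotient)} remainder {remainder}"

print(long_division(1234, 7))  # -> "176 remainder 2"
```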

 

Artificial Intelligence (AI): AI is an umbrella term under which multiple disciplines or technologies are sheltered. Traditionally, two types of AI are discussed: weak and strong. Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” In other words, weak AI was developed to handle/manage a small and specific data set to answer a single question. Its perspective is singular, resulting in tunnel vision. On the other hand, strong AI — sometimes referred to as artificial general intelligence (AGI) — if it’s ever developed, will operate without restrictions. The AGI Society notes the ultimate goal of artificial general intelligence is to develop “thinking machines” (i.e., “general-purpose systems with intelligence comparable to that of the human mind”). Cognitive computing is a third type of AI that lies between weak and strong AI. (See also “Cognitive Computing” below.)

 

Artificial neural network (ANN): Journalist Jackie Snow explains, “[An artificial neural network involves an] algorithm that attempts to mimic the human brain, with layers of connected ‘neurons’ sending information to each other.”[4] (See also “Deep Learning” below.)
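A bare-bones sketch of that idea, using NumPy and random (untrained) weights, shows what “layers of connected neurons sending information to each other” looks like in code:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # Each "neuron" computes a weighted sum of its inputs, then a nonlinearity.
    return np.tanh(inputs @ weights + bias)

x = rng.normal(size=(1, 4))                      # one example with 4 features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # hidden layer of 8 neurons
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer of 1 neuron

hidden = layer(x, w1, b1)        # information flows from one layer...
output = layer(hidden, w2, b2)   # ...to the next
print(output)  # an untrained guess; training adjusts w1 and w2 from data
```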

 

Augmented Reality: Analysts from the Brookings Institution explain, “Augmented reality puts people in realistic situations that are augmented by computer-generated video, audio, or sensory information. This kind of system allows people to interact with actual and artificial features, be monitored for their reactions, or be trained on the best ways to deal with various stimuli.”[5]

 

Big Data: You can’t discuss AI without understanding that it requires data on which to train. Brookings analysts explain, “[Big Data refers to] extremely large data sets that are statistically analyzed to gain detailed insights. The data can involve billions of records and require substantial computer-processing power. Data sets are sometimes linked together to see how patterns in one domain affect other areas. Data can be structured into fixed fields or unstructured as free-flowing information. The analysis of big data sets can reveal patterns, trends, or underlying relationships that were not previously apparent to researchers.”

 

Black box algorithms: Snow explains, “When an algorithm’s decision-making process or output can’t be easily explained by the computer or the researcher behind it, [it’s considered a black box algorithm].” At Enterra®, we often utilize Massive Dynamics’ Representational Learning Machine™ (RLM) for machine learning. The RLM acts as a “glass box,” providing the user with a functional understanding of the structure and dependencies within the data. (See also “Explainable AI” below.)

 

Chatbot: “Also called a bot or an interactive agent,” explains Harvey, “a chatbot is an artificial intelligence system that uses natural language processing capabilities to carry on a conversation. Today, the most recognizable examples of chatbots are Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa.”
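Siri, Cortana, and Alexa rely on sophisticated natural language processing, but the basic conversational loop can be illustrated with a toy, keyword-matching bot (an intentionally simplified sketch, not how commercial assistants work):

```python
RESPONSES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "bye":   "Goodbye!",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():  # map the input to a known "intent"
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that."

print(reply("Hello, are you there?"))
print(reply("What are your hours?"))
```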

 

Cloud Computing: Although not an AI technology, most of the data used by cognitive technologies are stored in the cloud. Brookings analysts explain, “Data storage and processing used to take place on personal computers or local servers controlled by individual users. In recent years, however, storage and processing have migrated to digital servers hosted at data centers operated by internet platforms, and people can store information and process data without being in close proximity to the data center. Cloud computing offers convenience, reliability, and the ability to scale applications quickly.”

 

Cognitive Computing: Cognitive Computing lies between weak and strong AI. It operates on large and varied data sets to perform complex tasks or to answer a multitude of questions in a variety of categories. Cognitive computing systems can deal with ambiguities whereas weak AI cannot. Doug Black (@DougBlack1), Editor-in-Chief of insideHPC, explains, “Cognitive computing applies knowledge from cognitive science to build an architecture of multiple AI subsystems — including machine learning, natural language processing (NLP), vision, and human-computer interaction — to simulate human thought processes with the aim of making high level decisions in complex situations.”[6]

 

Computer Vision/Image Recognition: According to Wikipedia, “Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.”

 

Data Mining: Harvey writes, “Data mining is all about looking for patterns in a set of data. It identifies correlations and trends that might otherwise go unnoticed. For example, if a data mining application were given Walmart’s sales data, it might discover that people in the South prefer certain brands of chips or that during the month of October people will buy anything with ‘pumpkin spice’ in the product name. Data mining tools don’t necessarily have to include machine learning or deep learning capabilities, but today’s most advanced data mining software generally does have these features built in.”
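Echoing Harvey’s pumpkin-spice example, a small pandas sketch (with an invented sales table) shows what “looking for patterns in a set of data” can mean in practice:

```python
import pandas as pd

sales = pd.DataFrame({
    "region":  ["South", "South", "North", "North", "South", "North"],
    "product": ["BBQ chips", "pumpkin spice latte", "pretzels",
                "pumpkin spice latte", "BBQ chips", "pretzels"],
    "month":   ["Oct", "Oct", "Oct", "Oct", "Sep", "Sep"],
    "units":   [500, 700, 300, 650, 480, 320],
})

# Which products sell best in each region?
print(sales.groupby(["region", "product"])["units"].sum().sort_values(ascending=False))

# Do 'pumpkin spice' products spike in October?
pumpkin = sales[sales["product"].str.contains("pumpkin spice")]
print(pumpkin.groupby("month")["units"].sum())
```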

 

Deep Learning: The staff at the Enterprisers Project explain, “Deep learning is commonly referred to as a subset — aka ‘deeper’ — form of machine learning. The HBR Analytic Services report offers this definition, adapted in part from Skymind.ai: ‘A form of machine learning but with more layers. Deep artificial neural networks are a set of algorithms that have set new records in accuracy for many important problems, such as image recognition, sound recognition, recommender systems, etc.’ As Skymind explains: ‘Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns.’ To describe in a slightly different way, deep learning can be thought of as a way to use a set of more complex algorithms to enable a machine to mimic the human brain as much as possible.” Black adds, “Deep learning algorithms have proved successful in, for example, detecting cancerous cells or forecasting disease — but with one huge caveat: there’s no way to identify which factors the deep learning program uses to reach its conclusion. This can lead to problems involving AI bias, data ethics and ‘algorithm accountability.’ This problem has led some regulated industries and public sector organizations, which must comply with anti-bias regulations, to abandon DL for the most transparent ML.”
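The “more layers” idea is easy to see in code. The sketch below uses PyTorch (our choice of framework, not the article’s) to stack several layers into one deep network sized for a digit-recognition input:

```python
import torch
import torch.nn as nn

deep_net = nn.Sequential(              # each Linear + ReLU pair is one "layer"
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),                 # e.g., 10 output classes for digits 0-9
)

fake_image = torch.randn(1, 28 * 28)   # stand-in for a flattened 28x28 image
print(deep_net(fake_image).shape)      # torch.Size([1, 10]): one score per class
```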

 

Embodied AI: According to Snow, “[Embodied AI is] a fancy way of saying ‘robots with AI capabilities.'”

 

Explainable AI (XAI): Snow writes, “[XAI is] AI that can tell or show its human operators how it came to its conclusions.”

 

Few-shot learning: Although this is not a widely used term, Snow explains, “Most of the time, computer vision systems need to see hundreds or thousands (or even millions) of examples to figure out how to do something. One-shot and few-shot learning try to create a system that can be taught to do something with far less training. It’s similar to how toddlers might learn a new concept or task.”
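One simple way to picture few-shot classification is a nearest-“prototype” classifier: average the handful of labeled examples (“shots”) for each class, then assign new items to the closest average. The toy feature vectors below are invented for illustration:

```python
import numpy as np

support = {  # only three labeled examples per class
    "cat": np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.15]]),
    "dog": np.array([[0.20, 0.90], [0.10, 0.80], [0.15, 0.85]]),
}
prototypes = {label: shots.mean(axis=0) for label, shots in support.items()}

def classify(query: np.ndarray) -> str:
    # Assign the query to the class whose prototype (mean example) is closest.
    return min(prototypes, key=lambda label: np.linalg.norm(query - prototypes[label]))

print(classify(np.array([0.82, 0.18])))  # -> "cat"
```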

 

Generative adversarial networks: “Also called GANs,” Snow explains, “these are two neural networks that are trained on the same data set of photos, videos or sounds. Then, one creates similar content while the other tries to determine whether the new example is part of the original data set, forcing the first to improve its efforts. This approach can create realistic media, including artworks.”
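Snow’s description maps onto a short training loop: one network generates candidates while the other judges them, and each improves against the other. The PyTorch sketch below (a toy one-dimensional example of our own, not production GAN code) shows the pattern:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 5        # "real" data clustered around 5
    fake = generator(torch.randn(64, 8))     # generated candidates

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(generator(torch.randn(5, 8)).detach().mean())  # drifts toward the "real" mean of ~5
```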

 

Machine Learning (ML): “Rooted in data science, computational statistical models, algorithms and mathematical optimization,” Black explains, “machine learning is the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions, relying on patterns and inference instead. Machine learning is the process of automatically spotting patterns in large amounts of data that can then be used to make predictions. Using sample training data, ML algorithms build a model for identifying likely outcomes.” Harvey explains there are four types of machine learning.[7] They are: Supervised Learning (“Supervised learning requires a programmer or teacher who offers examples of which inputs line up with which outputs.”); Unsupervised Learning (“Unsupervised learning requires the system to develop its own conclusions from a given data set.”); Semi-supervised Learning (“Semi-supervised learning, as you probably guessed, is a combination of supervised and unsupervised learning.”); and Reinforcement Learning (“Reinforcement learning involves a system receiving feedback analogous to punishments and rewards.”). Snow adds a fifth form of learning: Transfer Learning (“This method tries to take training data used for one thing and reuse it for a new set of tasks, without having to retrain the system from scratch.”).
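Supervised learning, the first of Harvey’s four types, can be demonstrated in a few lines of scikit-learn (the library and the iris dataset are our choices for illustration): the model sees inputs paired with known outputs, then makes predictions on data it has not seen.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # flower measurements paired with species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train)   # learn from labeled examples
print("Accuracy on unseen data:", model.score(X_test, y_test))
```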

 

Natural Language Processing (NLP): The staff at the Enterprisers Project explain, “NLP (like pretty much every other term in this glossary) is a hot topic because it pertains directly to the growing number of human-to-machine voice interactions in both professional and personal contexts. The folks at SAS have a good, clear definition: ‘Natural language processing is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.’”
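One of the most basic NLP steps is turning free-form text into numbers a computer can manipulate. A small scikit-learn sketch (our illustration, using a “bag of words” representation) makes the idea concrete:

```python
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "Ship the order to the warehouse",
    "The warehouse confirmed the order",
]
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(sentences)   # text -> a matrix of word counts

print(vectorizer.get_feature_names_out())      # the vocabulary the model works with
print(counts.toarray())                        # one row of counts per sentence
```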

 

Robotic Process Automation (RPA): Black explains, “[RPA is] software configured to automatically capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems, often used to handle repetitive office tasks, such as forms processing. The key difference … from enterprise automation tools like business process management (BPM) is that RPA uses software or cognitive robots to perform and optimize process operations rather than human operators.” The staff at the Enterprisers Project add, “People don’t always agree about whether RPA should be considered AI, but this depends at least in part on how you define the latter umbrella category. For example, the HBR Analytic Services report notes that RPA should qualify, according to its definition of AI.”

 

Virtual Reality: Brookings analysts write, “Virtual reality uses headsets equipped with projection visors to put people in realistic-seeming situations that are completely generated by computers. People can see, hear, and experience many types of environments and interact with them. By simulating actual settings, VR can train people how to deal with various situations, vary the features that are observed, and monitor how people respond to differing stimuli.”

 

Concluding Thoughts

 

Although this list is not exhaustive, it covers the terms most often used in business discussions surrounding artificial intelligence. As Professor Jordan notes, “For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations. We will need well-thought-out interactions of humans and computers to solve our most pressing problems.”

 

Footnotes
[1] Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says,” IEEE Spectrum, 31 March 2021.
[2] Staff, “10 artificial intelligence terms you need to know,” The Enterprisers Project, 4 September 2019.
[3] Cynthia Harvey, “12 Artificial Intelligence Terms You Need to Know,” InformationWeek, 28 September 2017.
[4] Jackie Snow, “Learning the AI lingo,” SFGATE, 18 October 2018.
[5] John Allen and Darrell West, “The Brookings glossary of AI and emerging technologies,” The Brookings Institution, 13 July 2020.
[6] Doug Black, “AI Definitions: Machine Learning vs. Deep Learning vs. Cognitive Computing vs. Robotics vs. Strong AI….,” Enterprise AI, 30 May 2019.
[7] Cynthia Harvey, “What is Machine Learning?” Datamation, 3 January 2018.
