
Artificial Intelligence Terms

July 20, 2020

Artificial intelligence (AI) is a widely used buzzword that often creates confusion among business leaders. The staff at the Enterprisers Project explain, “With interest growing in artificial intelligence, it’s becoming increasingly important for IT executives and business leaders to cut through the hype and occasional confusion around AI concepts and technologies. It’s easy to get confused when discussing AI. It’s not uncommon to hear AI and machine learning used interchangeably, despite clear distinctions between the two.”[1] The problem is that there are no universally accepted definitions for many AI-related terms. The staff at SciTech Europa explain, “The computer scientist John McCarthy coined the term Artificial Intelligence in 1956 and defined the field of artificial intelligence as ‘the science and engineering of making intelligent machines.’ As well as the term for the scientific discipline, artificial intelligence refers to the intelligence of a machine, program, or system, in contrast to that of human intelligence. Alessandro Annoni, the head of the European Commission’s Joint Research Centre, spoke at the Science Meets Parliaments conference at the European Parliament, Brussels, in February 2019. He said: ‘Artificial intelligence should not be considered a simple technology … it is a collection of technologies. It is a new paradigm that is aiming to give more power to the machine. It’s a technology that will replace humans in some cases.’”[2]


Doug Black (@DougBlack1), Editor-in-Chief of insideHPC, explains, “AI definitions aren’t meant to be the final word on AI terminology; the industry is growing and changing so fast that terms will change and new ones will be added.”[3] As Black notes, the following terms are an effort to “frame the language we currently use.” The terms come from numerous sources, including the Enterprisers Project, SciTech Europa, and Enterprise AI.


Artificial Intelligence (AI): “AI is a broad umbrella that encompasses multiple disciplines or technologies. A straightforward definition … by Bill Brock, VP of engineering at Very, [is]: ‘AI, simply stated, is the concept of machines being able to perform tasks that seemingly require human intelligence.’ [Another] definition, this one from [a] Harvard Business Review Analytic Services report, [is]: ‘Artificial intelligence is the science and engineering of making intelligent machines. This includes intelligence that is programmed and rules-based, as well as more advanced techniques such as machine learning, deep learning, and neural networks.’”[4] Most AI systems currently in use are considered weak AI. Wikipedia states: “Weak artificial intelligence (weak AI), also known as narrow AI, is artificial intelligence that is focused on one narrow task.” In other words, weak AI is developed to handle a small, specific data set in order to answer a single type of question.


Artificial General Intelligence (AGI) or Strong AI: The ultimate goal of Strong (aka General) AI is to develop “thinking machines” (or, as The AGI Society notes, to develop “general-purpose systems with intelligence comparable to that of the human mind”).


Cognitive Computing: Cognitive Computing lies between weak and strong AI. It operates on large and varied data sets to perform complex tasks or to answer a multitude of questions in a variety of categories. Cognitive computing systems can deal with ambiguities whereas weak AI cannot. Black adds, “Cognitive computing applies knowledge from cognitive science to build an architecture of multiple AI subsystems — including machine learning, natural language processing (NLP), vision, and human-computer interaction — to simulate human thought processes with the aim of making high-level decisions in complex situations.”


Computer Vision/Image Recognition: “Computer vision could be thought of as how machines ‘see’ — not just humans, but potentially any image or even video stream. Wikipedia has a succinct definition: ‘Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.’ Computer vision is likely to continue drawing attention not just for its productive applications, but also for potential risks such as AI bias and other concerns.”[5]
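
To make that definition concrete, here is a minimal sketch of what an image-recognition script might look like in Python. It is an illustration under assumptions, not a production system: it assumes the PyTorch, torchvision, and Pillow packages are installed, and the file name photo.jpg is a hypothetical placeholder for any input image.

```python
# A minimal image-recognition sketch, not a production system.
# Assumes torchvision and Pillow are installed; "photo.jpg" is a
# hypothetical input image.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing: resize, crop, and normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)  # network pretrained on ImageNet
model.eval()                              # inference mode

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)    # add a batch dimension

with torch.no_grad():
    scores = model(batch)
print(int(scores.argmax()))               # index of the predicted ImageNet class
```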


Deep Learning: “Deep learning is commonly referred to as a subset — aka ‘deeper’ — form of machine learning. The HBR Analytic Services report offers this definition, adapted in part from Skymind.ai: ‘A form of machine learning but with more layers. Deep artificial neural networks are a set of algorithms that have set new records in accuracy for many important problems, such as image recognition, sound recognition, recommender systems, etc.’ As Skymind explains: ‘Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns.’ To describe it in a slightly different way, deep learning can be thought of as a way to use a set of more complex algorithms to enable a machine to mimic the human brain as much as possible.”[6] Black adds, “Deep learning algorithms have proved successful in, for example, detecting cancerous cells or forecasting disease — but with one huge caveat: there’s no way to identify which factors the deep learning program uses to reach its conclusion. This can lead to problems involving AI bias, data ethics and ‘algorithm accountability.’ This problem has led some regulated industries and public sector organizations, which must comply with anti-bias regulations, to abandon DL for more transparent ML.”
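
To illustrate what “more layers” means in practice, here is a toy sketch of a small deep network trained with PyTorch (an assumed dependency); the layer sizes and data are invented for illustration.

```python
# A toy deep network: the stacked hidden layers are what make it
# "deep" relative to classic machine learning. Synthetic data only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),  # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),  # hidden layer 2
    nn.Linear(64, 2),              # output layer: two classes
)

x = torch.randn(32, 10)            # 32 synthetic samples, 10 features each
y = torch.randint(0, 2, (32,))     # synthetic class labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):            # a few steps of gradient descent
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                # backpropagate the error
    optimizer.step()               # update the weights
```

The caveat Black raises applies even to this toy: nothing in the trained weights explains which input features drove a given prediction.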


Machine Learning (ML): “Rooted in data science, computational statistical models, algorithms and mathematical optimization,” Black explains, “machine learning is the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions, relying on patterns and inference instead. Machine learning is the process of automatically spotting patterns in large amounts of data that can then be used to make predictions. Using sample training data, ML algorithms build a model for identifying likely outcomes.”[7] Cynthia Harvey explains there are actually four, not three, types of machine learning.[8] They are listed below; a short code sketch contrasting the first two follows the list:


  • Supervised Learning: “Supervised learning requires a programmer or teacher who offers examples of which inputs line up with which outputs.”
  • Unsupervised Learning: “Unsupervised learning requires the system to develop its own conclusions from a given data set.”
  • Semi-supervised Learning: “Semi-supervised learning, as you probably guessed, is a combination of supervised and unsupervised learning.”
  • Reinforcement Learning: “Reinforcement learning involves a system receiving feedback analogous to punishments and rewards.”
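
Here is the promised sketch contrasting supervised and unsupervised learning, using scikit-learn (an assumed dependency) on synthetic data:

```python
# Contrasting supervised and unsupervised learning on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the labels y play the role of the "teacher."
classifier = LogisticRegression().fit(X, y)
print(classifier.predict(X[:5]))

# Unsupervised: the system draws its own conclusions from X alone.
clusters = KMeans(n_clusters=2, random_state=0).fit(X)
print(clusters.labels_[:5])
```

On this toy data the classifier learns from the provided labels, while the clustering step recovers the two groups without ever seeing them.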


Natural Language Processing (NLP): “NLP (like pretty much every other term in this glossary) is a hot topic because it pertains directly to the growing number of human-to-machine voice interactions in both professional and personal contexts. The folks at SAS have a good, clear definition: ‘Natural language processing is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.'”[9]
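
As a toy illustration of that gap-filling, the sketch below trains a bag-of-words classifier to label short snippets of text as positive or negative. It assumes scikit-learn is installed, and the example sentences are invented; real NLP systems train on vastly more data.

```python
# A toy NLP sketch: bag-of-words text classification with invented data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, works well",
         "terrible, broke immediately",
         "very happy with this",
         "awful experience, do not buy"]
labels = ["positive", "negative", "positive", "negative"]

# Count word occurrences, then fit a naive Bayes classifier on the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["works great, happy with it"]))  # -> ['positive']
```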


Robotic Process Automation (RPA): Black explains, “[RPA is] software configured to automatically capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems, often used to handle repetitive office tasks, such as forms processing. The key difference … from enterprise automation tools like business process management (BPM) is that RPA uses software or cognitive robots to perform and optimize process operations rather than human operators.” The staff at the Enterprisers Project add, “People don’t always agree about whether RPA should be considered AI, but this depends at least in part on how you define the latter umbrella category. For example, the HBR Analytic Services report notes that RPA should qualify, according to its definition of AI.”
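
Commercial RPA platforms are configured rather than coded from scratch, but the underlying pattern can be sketched in a few lines of Python: software reads a repetitive input and triggers a response for each record, with no human operator in the loop. The file submissions.csv and its field names below are hypothetical.

```python
# A toy sketch of the RPA pattern: process every row of a repetitive
# input without human intervention. File and field names are hypothetical.
import csv

def process_submission(row):
    # A real RPA robot would drive another application's UI or API here;
    # this stand-in just formats a confirmation message.
    return f"Processed order {row['order_id']} for {row['customer']}"

with open("submissions.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(process_submission(row))
```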


Those are the primary terms with which business leaders need to be familiar. The staff at the Enterprisers Project also discuss the term AIOps. They write, “According to Gartner’s definition, ‘AIOps combines big data and machine learning to automate IT operations processes, including event correlation, anomaly detection, and causality determination.’ Put another way, it’s about using AI and data to automatically identify or handle issues that would have once depended on a person to take care of manually. You can consider AIOps a more specific AI-driven category of the broader automation trend in IT, but remember that not all forms of automation would necessarily be considered AIOps.” AI technologies hold great potential for improving business processes. The key to unlocking this potential is knowing which technology to apply to specific business challenges.
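
As a closing illustration, the anomaly-detection piece of Gartner’s AIOps definition can be sketched in a few lines using scikit-learn’s IsolationForest (an assumed dependency). The latency numbers below are invented, and a real AIOps platform would correlate many such signals across an IT estate.

```python
# A sketch of AIOps-style anomaly detection on synthetic latency data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
latencies = rng.normal(loc=120, scale=10, size=(500, 1))  # normal traffic, in ms
latencies[::100] = 400                                    # inject five slowdowns

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(latencies)                   # -1 marks an anomaly

print(np.where(flags == -1)[0])  # indices a person once had to find manually
```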


Footnotes
[1] Staff, “10 artificial intelligence terms you need to know,” The Enterprisers Project, 4 September 2019.
[2] Staff, “What is artificial intelligence? AI definitions, applications, and the ethical questions,” SciTech Europa, 3 May 2019.
[3] Doug Black, “AI Definitions: Machine Learning vs. Deep Learning vs. Cognitive Computing vs. Robotics vs. Strong AI….,” Enterprise AI, 30 May 2019.
[4] The Enterprisers Project, op. cit.
[5] Ibid.
[6] The Enterprisers Project, op. cit.
[7] Black, op. cit.
[8] Cynthia Harvey, “What is Machine Learning?” Datamation, 3 January 2018.
[9] The Enterprisers Project, op. cit.
