Abductive reasoning
A form of logical inference which starts with an observation and then seeks the simplest and most likely explanation.
Abstraction
A mapping between formalisms that reduces the computational complexity of the task at hand.
Algorithm
A set of rules or instructions given to an AI program, neural network, or other machine to help it learn on its own.
Artificial intelligence
A field of computer science dedicated to the study of computer software that can make intelligent decisions, reason, and solve problems.
Artificial general intelligence
The representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AI system could find a solution. An AGI system could perform any task that a human is capable of.
Artificial neural network
A system patterned after the operation of neurons in the human brain. Artificial neural networks are the foundation of deep learning technology.
Backpropagation
A method used in artificial neural networks to calculate the gradient that is needed to update the weights used in the network.
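As a minimal sketch of the idea, the gradient for a single sigmoid neuron can be computed by applying the chain rule backwards through the loss; the input, target, and learning rate below are illustrative assumptions, not part of any particular network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example for a single sigmoid neuron: y_hat = sigmoid(w*x + b)
x, y = 2.0, 1.0   # input and target (illustrative)
w, b = 0.5, 0.0   # initial weight and bias
lr = 0.1          # learning rate

for _ in range(100):
    # Forward pass
    z = w * x + b
    y_hat = sigmoid(z)
    # Backward pass: chain rule through the squared-error loss L = (y_hat - y)^2
    dL_dyhat = 2.0 * (y_hat - y)
    dyhat_dz = y_hat * (1.0 - y_hat)   # derivative of the sigmoid
    dL_dw = dL_dyhat * dyhat_dz * x
    dL_db = dL_dyhat * dyhat_dz
    # Gradient-descent update using the backpropagated gradients
    w -= lr * dL_dw
    b -= lr * dL_db
```

Real networks repeat this layer by layer, but the mechanism is the same: propagate the error gradient backwards and adjust each weight a little in the direction that reduces the loss.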
Big data
A field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex to be dealt with by traditional data-processing application software.
Black box algorithm
When an algorithm’s decision-making process or output can’t be easily explained by the computer or the researcher behind it.
Chatbot
Programming that simulates the conversation or “chatter” of a human being through text or voice interactions.
Clustering
The task of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups.
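One classic clustering method is k-means: alternate between assigning each point to its nearest group center and moving each center to the mean of its group. The 1-D points and initial centers below are illustrative assumptions; real workloads usually run on higher-dimensional data with a library such as scikit-learn.

```python
# Minimal k-means on 1-D points, k = 2
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [0.0, 10.0]  # initial guesses for the two cluster centers

for _ in range(10):
    # Assignment step: each point joins the cluster with the nearest centroid
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: each centroid moves to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # two group centers, roughly 1.0 and 8.07
```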
Computer vision
A field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects.
Cognitive search
A new generation of enterprise search solutions that employ AI technologies such as natural language processing and machine learning to ingest, understand, organize, and query digital content from multiple data sources.
Convolutional neural network (CNN)
A type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data.
Data (structured, unstructured)
Structured data: any data that resides in a fixed field within a record or file. This includes data contained in relational databases and spreadsheets.
Unstructured data: information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well.
Data mining
The process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
Data science
A multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data.
Dataset
A collection of related sets of information that is composed of separate elements but can be manipulated as a unit by a computer.
Data warehouse
A system used for reporting and data analysis that is considered a core component of business intelligence. Data warehouses are central repositories of integrated data from one or more disparate sources.
Decision support system
A computer program application that analyzes business data and presents it so that users can make business decisions more easily.
Decision tree
A simple representation for classifying examples. Decision tree learning is one of the most successful techniques for supervised classification learning.
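A trained decision tree is just a nested set of if–then tests on feature values. The hand-written tree below is an illustrative assumption (a toy “go out or stay in” decision), not a learned model; libraries such as scikit-learn induce the splits automatically from labeled data.

```python
# A hand-written decision tree: each "if" is an internal node, each
# "return" is a leaf holding the predicted class.
def classify(outlook: str, humidity: float) -> str:
    if outlook == "sunny":
        # Internal node: split on a humidity threshold
        if humidity > 70:
            return "stay in"
        return "play"
    elif outlook == "rainy":
        return "stay in"
    return "play"  # overcast

print(classify("sunny", 80))     # stay in
print(classify("overcast", 50))  # play
```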
Deep learning
An artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled. Also known as deep neural learning or deep neural network.
Ensemble learning
A machine learning technique that combines several base models in order to produce one optimal predictive model.
Expert system
A computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code.
Feature learning
A set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data.
Fuzzy logic
A form of many-valued logic in which the truth values of variables may be any real number between 0 and 1 inclusive. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false.
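One standard choice of fuzzy connectives (others exist) maps AND to the minimum, OR to the maximum, and NOT to the complement; the “warm” and “sunny” truth degrees below are illustrative assumptions.

```python
# Common fuzzy-logic operators over truth values in [0, 1]:
def f_and(a, b): return min(a, b)   # fuzzy AND
def f_or(a, b):  return max(a, b)   # fuzzy OR
def f_not(a):    return 1.0 - a     # fuzzy NOT

warm = 0.7   # degree to which "it is warm" is true
sunny = 0.4  # degree to which "it is sunny" is true

print(f_and(warm, sunny))  # 0.4 — "warm AND sunny" is only partially true
print(f_or(warm, sunny))   # 0.7
print(f_not(warm))         # ≈ 0.3
```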
F₁ score
In the statistical analysis of binary classification, the F₁ score is a measure of a test’s accuracy, calculated as the harmonic mean of the test’s precision and recall.
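The F₁ score can be computed directly from the counts of a classifier’s confusion matrix; the counts below are illustrative numbers.

```python
# F1 as the harmonic mean of precision and recall
tp, fp, fn = 8, 2, 4   # true positives, false positives, false negatives

precision = tp / (tp + fp)   # 0.8
recall = tp / (tp + fn)      # ≈ 0.667
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 3))  # 0.727
```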
Generative adversarial network
A machine learning (ML) technique made up of two neural networks that compete with one another in a zero-sum game framework. GANs typically run unsupervised, teaching themselves to mimic any given distribution of data.
Graph database
A collection of nodes and edges. Each node represents an entity (such as a person or business) and each edge represents a connection or relationship between two nodes. Every node in a graph database is defined by a unique identifier, a set of outgoing edges and/or incoming edges, and a set of properties expressed as key/value pairs. Each edge is defined by a unique identifier, a starting-place and/or ending-place node, and a set of properties.
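The structure described above can be sketched with plain dictionaries; the identifiers and properties below are illustrative assumptions, not any specific graph-database product’s API.

```python
# Nodes and edges with unique identifiers and key/value properties
nodes = {
    "n1": {"type": "person", "name": "Ada"},
    "n2": {"type": "business", "name": "Acme"},
}
edges = {
    "e1": {"from": "n1", "to": "n2", "type": "works_at", "since": 2021},
}

# A simple traversal: follow "works_at" edges out of node n1
for edge in edges.values():
    if edge["from"] == "n1" and edge["type"] == "works_at":
        print(nodes[edge["to"]]["name"])  # Acme
```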
Graphics processing unit
A computer chip that performs rapid mathematical calculations, primarily for the purpose of rendering images.
Human-in-the-loop
This concept leverages both human and machine intelligence to create machine learning models. In this approach, humans are directly involved in training, tuning, and testing data for a particular ML algorithm.
Inductive reasoning
A logical process in which multiple premises, all believed true or found true most of the time, are combined to obtain a specific conclusion. Inductive reasoning is often used in applications that involve prediction, forecasting, or behavior analysis.
Inference engine
In the field of artificial intelligence, an inference engine is a component of the system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems.
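A minimal sketch of one common strategy, forward chaining: repeatedly apply if–then rules to a set of known facts until no new facts can be deduced. The rules and fact names below are illustrative assumptions.

```python
# Each rule is (set of premises, conclusion)
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]
facts = {"socrates_is_human", "mortals_die"}  # the knowledge base

# Forward chaining: fire any rule whose premises are all known,
# and keep going until a full pass adds nothing new.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Note how the second rule can only fire after the first has added its conclusion; that chaining of deductions is what the engine automates.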
Intelligent process automation
An emerging set of technologies that combines fundamental process redesign with robotic process automation and machine learning. It is a suite of business-process improvements and next-generation tools that assists the knowledge worker by removing repetitive, replicable, and routine tasks.
Knowledge representation and reasoning
The field of artificial intelligence dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks.
Machine learning
The scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead.
Machine translation
A sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another.
Named entity recognition
A subtask of information extraction that seeks to locate and classify named-entity mentions in unstructured text into pre-defined categories such as person names, organizations, and locations.
Natural language generation
The use of artificial intelligence (AI) programming to produce written or spoken narrative from a dataset.
Natural language processing
A subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human languages, in particular how to program computers to process and analyze large amounts of natural language data.
Natural language understanding
A branch of artificial intelligence (AI) that uses computer software to understand input made in the form of sentences in text or speech format.
Ontology
A set of concepts and categories in a subject area or domain that shows their properties and the relations between them.
Optical character recognition (OCR)
The use of technology to distinguish printed or handwritten text characters inside digital images of physical documents, such as a scanned paper document. The basic process of OCR involves examining the text of a document and translating the characters into code that can be used for data processing.
Overfitting
The production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably.
Pattern recognition
The automated recognition of patterns and regularities in data. Pattern recognition is closely related to artificial intelligence and machine learning, together with applications such as data mining and knowledge discovery in databases.
Precision
In the field of information retrieval, precision is the fraction of retrieved documents that are relevant to the query.
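On a toy retrieval result (the document IDs below are illustrative), precision is simply the overlap between the retrieved set and the relevant set, divided by the size of the retrieved set.

```python
# What the system returned vs. what was actually relevant
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d3", "d7"}

precision = len(retrieved & relevant) / len(retrieved)
print(precision)  # 0.5 — half of what was retrieved is relevant
```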
Predictive analytics
The use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future.
Provenance
In the context of data, provenance documents the inputs, entities, systems, and processes that influence data of interest, in effect providing a historical record of the data and its origins.
Quantum computing
The area of study focused on developing computer technology based on the principles of quantum theory, which explains the nature and behavior of energy and matter on the quantum (atomic and subatomic) level.
Random forest
An ensemble machine learning method that builds a large number of randomized decision trees, each analyzing a subset of the variables, and combines their predictions. This approach improves accuracy when analyzing complex data.
Recall
In the field of information retrieval, recall is the fraction of the relevant documents that are successfully retrieved.
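On the same kind of toy retrieval result (document IDs are illustrative), recall divides the overlap by the size of the relevant set rather than the retrieved set.

```python
# What the system returned vs. what was actually relevant
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d3", "d7"}

recall = len(retrieved & relevant) / len(relevant)
print(round(recall, 3))  # 0.667 — two of the three relevant documents were found
```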
Recurrent neural network (RNN)
A type of artificial neural network commonly used in speech recognition and natural language processing (NLP). RNNs are designed to recognize the sequential characteristics of data and use the detected patterns to predict the next likely scenario. They are used in deep learning and in the development of models that simulate the activity of neurons in the human brain.
Reinforcement learning
A machine learning technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.
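A minimal sketch of trial-and-error learning is a two-armed bandit with an epsilon-greedy agent; the hidden payout probabilities and exploration rate below are illustrative assumptions. The agent never sees `true_payout` directly and must estimate each arm’s value from the rewards it receives.

```python
import random

random.seed(0)
true_payout = [0.3, 0.8]   # hidden reward probability of each arm
estimates = [0.0, 0.0]     # the agent's learned value of each arm
counts = [0, 0]
epsilon = 0.1              # fraction of the time spent exploring

for _ in range(2000):
    # Explore occasionally; otherwise exploit the best-looking arm
    if random.random() < epsilon:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    # Incremental average update from the feedback received
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(max(range(2), key=lambda a: estimates[a]))  # the arm the agent now prefers
```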
Robotic process automation
The use of software with artificial intelligence (AI) and machine learning capabilities to handle high-volume, repeatable tasks that previously required humans to perform. These tasks can include queries, calculations, and maintenance of records and transactions.
Semi-supervised learning
A class of machine learning tasks and techniques that also make use of unlabeled data for training – typically a small amount of labeled data with a large amount of unlabeled data.
Supervised learning
The machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.
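One of the simplest supervised learners is 1-nearest-neighbour: the “inferred function” just copies the label of the closest training example. The numbers and labels below are illustrative assumptions.

```python
# Labeled training data: (input, output) pairs
training = [
    (1.0, "small"), (1.5, "small"),
    (8.0, "large"), (9.0, "large"),
]

def predict(x: float) -> str:
    # Find the training example whose input is closest to x
    nearest = min(training, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.2))  # small
print(predict(7.5))  # large
```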
Symbolic artificial intelligence
The term for the collection of all methods in artificial intelligence research that are based on high-level “symbolic” (human-readable) representations of problems, logic, and search. The approach is based on the assumption that many aspects of intelligence can be achieved by the manipulation of symbols.
Synthetic data
Information that is artificially manufactured rather than generated by real-world events. Synthetic data is created algorithmically, and it is used as a stand-in for test datasets of production or operational data, to validate mathematical models and, increasingly, to train machine learning models.
Topic modeling
A type of statistical model for discovering the abstract “topics” that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body.
Training data
An initial set of data used to help a program understand how to apply technologies like neural networks to learn and produce sophisticated results. Training data is also known as a training set, training dataset, or learning set.
Transfer learning
A machine learning method where a model developed for a task is reused as the starting point for a model on a second task. Transfer learning differs from traditional machine learning in that it uses pre-trained models from another task to jump-start the development process on a new task or problem.
Turing test
A test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It was proposed by Alan Turing in 1950.
Underfitting
Underfitting happens when a machine learning model isn’t complex enough to accurately capture relationships between a dataset’s features and a target variable. An underfitted model produces problematic or erroneous outcomes on new data, or data that it wasn’t trained on, and often performs poorly even on training data.
Unsupervised learning
The training of an artificial intelligence (AI) algorithm using information that is neither classified nor labeled, allowing the algorithm to act on that information without guidance.