
AI - Artificial Intelligence

Here’s a glossary of AI (Artificial Intelligence) terms and abbreviations, covering machine learning, deep learning, natural language processing, computer vision, robotics, and AI ethics.


🤖 ARTIFICIAL INTELLIGENCE (AI) GLOSSARY


A

AI / Artificial Intelligence – Machine systems performing tasks that normally require human intelligence.
Algorithm – Step-by-step procedure or formula used for problem-solving in AI.
ANN / Artificial Neural Network – Computing system inspired by the human brain, used for pattern recognition.
Activation Function – Function determining a neuron's output in a neural network (e.g. ReLU, Sigmoid, Tanh); see the sketch after this section.
Agent / Intelligent Agent – Entity that perceives environment, makes decisions, and acts to achieve goals.
Adversarial Attack – Input designed to deceive AI models into making wrong predictions.
AutoML / Automated Machine Learning – Automated process of building machine learning models.
Attention Mechanism – Deep learning technique to focus on important features, widely used in NLP.
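
To make the activation functions above concrete, here is a minimal sketch in NumPy; the input values are arbitrary examples:

```python
import numpy as np

def relu(x):
    # ReLU: keeps positive values, zeroes out negatives
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes inputs into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Tanh: squashes inputs into the range (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))
print(sigmoid(x))
print(tanh(x))
```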


B

Backpropagation – Algorithm for training neural networks by adjusting weights based on error.
Bias / Model Bias – Systematic error in predictions caused by assumptions in training data.
Batch / Mini-Batch – Subset of training data processed at one time in model training.
Bayesian Network – Probabilistic model representing conditional dependencies among variables.
Big Data – Extremely large datasets used to train AI systems.


C

CNN / Convolutional Neural Network – Neural network specialized for image and video processing.
Classification / Labeling – Assigning categories to input data.
Clustering / Unsupervised Learning – Grouping similar data points without labeled outputs.
Cross-Validation – Technique for assessing model performance by repeatedly splitting data into training and validation folds (see the sketch after this section).
Computer Vision / CV – AI field enabling machines to interpret visual data.
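
As a rough sketch of cross-validation (see the entry above), the code below performs a manual k-fold split; the `evaluate` callable is a placeholder assumption standing in for any train-and-score routine:

```python
import numpy as np

def cross_validate(X, y, evaluate, k=5, seed=0):
    # X and y are NumPy arrays; shuffle indices and split into k roughly equal folds
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # evaluate(X_train, y_train, X_test, y_test) -> score (placeholder assumption)
        scores.append(evaluate(X[train_idx], y[train_idx], X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```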


D

Deep Learning / DL – AI technique using multi-layered neural networks for complex tasks.
Dataset / Training Data – Collection of labeled or unlabeled data used for AI model training.
Decision Tree – Model that splits data based on feature values to make predictions.
Dimensionality Reduction – Reducing number of features while preserving important information.
Dropout – Regularization technique that randomly deactivates neurons during training to prevent overfitting (see the sketch after this section).
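
Dropout, referenced above, can be sketched as randomly zeroing activations during training; this minimal "inverted dropout" example assumes only NumPy:

```python
import numpy as np

def dropout(activations, p=0.5, training=True, seed=None):
    # Inverted dropout: zero each unit with probability p during training
    # and rescale the survivors so the expected activation is unchanged.
    if not training or p == 0.0:
        return activations
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones((2, 4))
print(dropout(h, p=0.5, seed=42))  # roughly half the entries become 0, the rest 2.0
```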


E

Ethical AI / Responsible AI – AI systems designed to avoid bias, discrimination, and harmful impact.
Embedding / Feature Representation – Transforming data into vector representations for models.
Evolutionary Algorithm – Optimization inspired by natural selection.
Explainable AI / XAI – Techniques that make AI decisions interpretable by humans.
Epoch – One complete pass of the training dataset through the model.


F

Feature / Attribute – Individual measurable property of data used for model training.
Federated Learning – Training models across multiple devices without sharing raw data.
Fine-Tuning – Adjusting pre-trained models for a specific task.
Fuzzy Logic – AI method handling reasoning that is approximate rather than fixed or exact.
Function Approximation – Using AI models to approximate complex mathematical functions.


G

GAN / Generative Adversarial Network – AI system with generator and discriminator networks for creating realistic data.
Gradient Descent – Optimization algorithm that iteratively adjusts parameters to minimize loss during model training (see the sketch after this section).
Graph Neural Network / GNN – Neural network that operates on graph-structured data.
General AI / AGI – Hypothetical AI capable of human-level intelligence across all tasks.
GPU / Graphics Processing Unit – Hardware accelerator widely used for AI training and inference.
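
Gradient descent, defined above, reduces to a short update loop; this sketch minimizes an illustrative one-parameter quadratic loss, with the learning rate chosen arbitrarily:

```python
# Minimize the toy loss f(w) = (w - 3)^2 with plain gradient descent.
def grad(w):
    # Derivative of (w - 3)^2 with respect to w
    return 2.0 * (w - 3.0)

w = 0.0               # initial parameter value
learning_rate = 0.1   # hyperparameter chosen before training
for step in range(100):
    w -= learning_rate * grad(w)

print(w)  # converges toward the minimum at w = 3
```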


H

Hyperparameter – Configuration value set before training rather than learned from data (e.g. learning rate, batch size).
Heuristic – Rule-of-thumb method for problem-solving in AI.
Hidden Layer – Intermediate layers in a neural network between input and output.
Human-in-the-Loop / HITL – AI system involving human supervision or feedback.
Hybrid AI – Combining symbolic AI and machine learning techniques.


I

Inference / Prediction – Using a trained AI model to make decisions on new data.
Instance / Data Point – Single observation or example in a dataset.
Intelligence Augmentation / IA – Using AI to enhance human decision-making.
Iterative Learning / Online Learning – Updating models continuously as new data arrives.
Image Recognition / Object Detection – AI identifying objects or features in images.


J

JSON / JavaScript Object Notation – Lightweight data format often used to store AI inputs and outputs.
Joint Probability – Probability distribution over multiple random variables.
Jupyter Notebook / AI Environment – Interactive environment for coding, analysis, and visualization.


K

Knowledge Graph / Semantic Network – AI structure linking entities and relationships.
K-Means / Clustering Algorithm – Partitioning data into K clusters based on similarity.
Kernel / SVM Kernel Function – Transforms data to higher-dimensional space for classification.
KNN / K-Nearest Neighbors – Algorithm classifying data based on the closest training examples (see the sketch after this section).
Kalman Filter / State Estimation – Predicting and updating system state with noisy measurements.
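
K-Nearest Neighbors, listed above, is simple enough to sketch directly; the tiny dataset and k value below are purely illustrative:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Distance from the query point to every training example
    distances = np.linalg.norm(X_train - x, axis=1)
    # Majority vote among the labels of the k nearest neighbors
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.2, 0.1])))  # predicts class 0
```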


L

Label / Target – Desired output in supervised learning.
Language Model / NLP – AI model trained to understand and generate text.
Latent Variable / Hidden Feature – Underlying variable inferred from observed data.
Loss Function / Cost Function – Measures the error between predicted and actual outputs (see the sketch after this section).
LSTM / Long Short-Term Memory – Recurrent neural network type handling sequence data.
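
A loss function, as defined above, can be as simple as mean squared error; a minimal sketch with made-up predictions:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average of the squared differences
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

print(mse([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # (0.25 + 0 + 4) / 3 ≈ 1.417
```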


M

Machine Learning / ML – AI approach where systems learn patterns from data.
Model / Predictive Model – Mathematical representation trained to perform tasks.
Multi-Agent System / MAS – AI system with multiple interacting intelligent agents.
Metadata / Data About Data – Provides context and information for AI datasets.
Momentum / Optimization Technique – Improves gradient descent speed and stability.


N

Natural Language Processing / NLP – AI field for understanding and generating human language.
Neural Network / NN – Layered structure of interconnected nodes inspired by the brain.
Normalization / Data Scaling – Adjusting feature values to a common scale for effective model training (see the sketch after this section).
Noise / Data Perturbation – Random variations in data affecting AI performance.
Novelty Detection / Anomaly Detection – Identifying unusual or unseen patterns.
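
Normalization, mentioned above, typically means rescaling each feature column; the sketch below shows min-max scaling and z-score standardization (the latter also appears under Z), using an arbitrary example matrix:

```python
import numpy as np

def min_max_scale(X):
    # Rescale each feature column into the range [0, 1]
    X = np.asarray(X, dtype=float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def z_score(X):
    # Standardize each feature column to zero mean and unit variance
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = [[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]]
print(min_max_scale(X))
print(z_score(X))
```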


O

Optimizer / Training Algorithm – Algorithm updating model parameters to reduce loss.
Overfitting / Model Overtraining – Model learns training data too closely, reducing generalization.
Online Learning / Streaming Data – Model trained incrementally on continuous data input.
Object Detection / Instance Segmentation – Locating and labeling objects in images or video.
Ontology / Semantic Framework – Formal representation of knowledge and relationships.


P

Perceptron / Basic Neural Unit – Simple AI unit for binary classification.
Preprocessing / Data Cleaning – Preparing data for AI models.
Predictive Analytics / Forecasting – Using AI to forecast trends or behavior.
Precision / Performance Metric – Correct positive predictions divided by all positive predictions made (see the sketch after this section).
Pruning / Network Optimization – Removing unnecessary neurons or connections to simplify models.
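
Precision, defined above, is just a ratio of counts; the sketch below also computes recall for contrast, assuming binary 0/1 labels:

```python
def precision_recall(y_true, y_pred):
    # Count true positives, false positives, and false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))  # roughly (0.667, 0.667)
```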


Q

Q-Learning / Reinforcement Learning – Algorithm that learns the value of actions through rewards (see the sketch after this section).
Query / Search Input – Data used to request predictions or responses from AI.
Quantization / Model Compression – Reducing precision to make AI models smaller and faster.
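
The Q-learning update named above has a compact form; this minimal tabular sketch uses made-up states, actions, and rewards purely for illustration:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Nudge Q(s, a) toward reward + gamma * max over a' of Q(s', a')
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy Q-table with two states and two actions (all values are assumptions)
Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
q_update(Q, "s0", "right", reward=1.0, next_state="s1")
print(Q["s0"]["right"])  # 0.1 after one update
```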


R

Reinforcement Learning / RL – AI learning via trial-and-error and reward feedback.
Regression / Predictive Modeling – Predicting continuous values from input features.
Random Forest / Ensemble Learning – Combination of decision trees for robust predictions.
RNN / Recurrent Neural Network – Neural network for sequential data like text or time series.
Robustness / Model Stability – AI performance under noise, attacks, or unseen data.


S

Supervised Learning – AI learns from labeled input-output pairs.
Self-Supervised Learning – AI generates labels from data itself for training.
Semantic Segmentation / Image Understanding – Classifying each pixel in an image.
Swarm Intelligence / Collective AI – Multiple AI agents coordinating to solve problems.
Stochastic Gradient Descent / SGD – Iterative optimization for neural networks.


T

Transformer / Attention Model – Deep learning model architecture for NLP and vision tasks.
Tensor / Multi-Dimensional Array – Fundamental data structure in AI frameworks like TensorFlow.
Transfer Learning / Knowledge Reuse – Applying pre-trained models to new tasks.
Training Set / Learning Dataset – Data used to train AI models.
Turing Test / AI Benchmark – Test evaluating a machine’s ability to exhibit human-like intelligence.


U

Unsupervised Learning – Learning patterns from data without labels.
Underfitting / High Bias – Model too simple to capture the underlying patterns in the data.
Utility Function / Reward Function – Defines goals or incentives for reinforcement learning agents.
Uncertainty Quantification / Confidence Estimation – Estimating AI prediction reliability.
U-Net / Segmentation Network – CNN architecture for precise image segmentation.


V

Validation Set / Model Evaluation – Dataset for tuning model parameters and avoiding overfitting.
Variational Autoencoder / VAE – Generative neural network that learns a latent probability distribution over data.
Vector Embedding / Feature Representation – Mapping objects or words into numerical vectors (see the sketch after this section).
Vanishing Gradient / Training Challenge – Issue where gradients become too small to train deep networks.
Vision Transformer / ViT – Transformer-based model for image analysis.
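
Vector embeddings, listed above, are usually compared with cosine similarity; the embedding values below are made up for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional embeddings, purely illustrative
king = [0.8, 0.6, 0.1, 0.0]
queen = [0.7, 0.7, 0.2, 0.0]
apple = [0.0, 0.1, 0.9, 0.8]
print(cosine_similarity(king, queen))  # close to 1
print(cosine_similarity(king, apple))  # much smaller
```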


W

Word Embedding / NLP Representation – Vector representation of words (Word2Vec, GloVe).
Weight / Neural Connection Strength – Parameter determining influence of inputs on neuron output.
Weak AI / Narrow AI – AI specialized for specific tasks.
Whitening / Data Normalization – Removing correlations between features for model efficiency.
Windowing / Sequence Data Processing – Dividing sequences into fixed-size windows for time-series or text analysis (see the sketch after this section).
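
Windowing, the last entry above, simply slices a sequence into overlapping chunks; the window size and step below are arbitrary:

```python
def sliding_windows(sequence, window_size=3, step=1):
    # Yield consecutive, possibly overlapping slices of the sequence
    for start in range(0, len(sequence) - window_size + 1, step):
        yield sequence[start:start + window_size]

print(list(sliding_windows([1, 2, 3, 4, 5])))
# [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```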


X

XAI / Explainable AI – Techniques to make AI models interpretable to humans.
XGBoost / Gradient Boosting Algorithm – Powerful ensemble learning method for classification/regression.
XML / Extensible Markup Language – Structured data format for storing AI inputs and outputs.


Y

YAML / AI Configuration Format – Human-readable data serialization for AI pipelines.
Yield / Model Output Accuracy – Fraction of correct outputs compared to total predictions.
YOLO / You Only Look Once – Real-time object detection algorithm.


Z

Zero-Shot Learning / ZSL – AI predicts classes not seen during training.
Z-Score / Standardization Metric – Normalizing features for AI processing.
Zone of Competence / Task Scope – AI’s area of reliable performance.
ZeRO / Zero Redundancy Optimizer – Distributed training technique that reduces memory use for very large models.

 

Published 17 Feb. 2026
