Machine Learning Algorithms – Vocabulary Review

Description and Tags

These 72 vocabulary flashcards cover essential terms and definitions from the UT Dallas lecture on machine-learning algorithms, focusing on neural networks, activation functions, loss metrics, optimization, and convolutional architectures.

72 Terms

1

Artificial Intelligence

Any technique that enables computers to mimic human behavior or perform tasks that normally require human intelligence.

2

Machine Learning

The ability of a computer system to learn from data without being explicitly programmed for each rule.

3

Deep Learning

A subset of machine learning that extracts patterns directly from raw data using multi-layer neural networks.

4

Big Data

Extremely large data sets whose size and complexity enable modern learning algorithms to perform well.

5

TensorFlow

Google’s open-source software library for numerical computation and large-scale machine learning.

6

PyTorch

Facebook’s open-source deep-learning framework that offers dynamic computation graphs and GPU acceleration.

7

Perceptron

The simplest feed-forward neural unit that combines inputs with weights, adds a bias, then applies a non-linear activation.

8

Forward Propagation

The process of computing outputs by passing inputs through the network’s layers from input to output.
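
A minimal NumPy sketch (an illustration, not code from the lecture) of a single perceptron's forward pass from cards 7-8: a weighted sum of the inputs plus a bias, followed by a sigmoid activation. The input, weight, and bias values are made up.

import numpy as np

def perceptron_forward(x, w, b):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 3-feature input, weights, and bias.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.6])
print(perceptron_forward(x, w, b=0.2))   # a single activation between 0 and 1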

9

Bias (Neural Networks)

An additional learnable parameter that shifts the activation function, improving model flexibility.

10

Weight (Neural Networks)

A learnable coefficient that scales input values; weights are the core parameters adjusted during training.

11

Activation Function

A non-linear function (e.g., ReLU, sigmoid) applied to neurons to enable complex pattern learning.

12

Sigmoid Function

An S-shaped activation g(z)=1/(1+e^{-z}) that outputs values between 0 and 1, useful for binary classification.

13

Hyperbolic Tangent (tanh)

Activation g(z)=tanh(z) producing outputs between −1 and 1; zero-centered alternative to sigmoid.

14

Rectified Linear Unit (ReLU)

Activation g(z)=max(0,z) that keeps positive inputs and sets negatives to zero, speeding convergence.

15

Maxout

An activation that outputs the maximum of a set of linear functions, generalizing ReLU and alleviating saturation.
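
A minimal NumPy sketch of the activation formulas in cards 12-15; the maxout weights, bias, and example inputs are hypothetical placeholders.

import numpy as np

def sigmoid(z):      # card 12: 1 / (1 + e^(-z)), outputs in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):         # card 13: zero-centered, outputs in (-1, 1)
    return np.tanh(z)

def relu(z):         # card 14: max(0, z)
    return np.maximum(0.0, z)

def maxout(x, W, b): # card 15: maximum over k linear functions W[k] @ x + b[k]
    return np.max(W @ x + b)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
print(maxout(np.array([2.0, 1.0]), np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)))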

16

Multilayer Perceptron (MLP)

A feed-forward neural network with one or more hidden layers between input and output.

17

Input Layer

The first layer of a network that receives raw feature vectors from the data set.

18

Hidden Layer

Intermediate layer(s) of neurons that learn internal feature representations.

19

Output Layer

The final layer that produces predictions such as probabilities or continuous values.

20

Loss Function

A metric that quantifies prediction error; minimized during training to improve performance.

21

Mean Squared Error (MSE)

Loss defined as the average of squared differences between targets and predictions: (1/n)Σ(y−ŷ)².

22

Mean Absolute Error (MAE)

Loss defined as the average of absolute differences between targets and predictions: (1/n)Σ|y−ŷ|.

23

Binary Cross-Entropy (Log Loss)

Loss for binary classification: −[y log ŷ + (1−y) log(1−ŷ)].
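
A minimal NumPy sketch of the loss formulas in cards 21-23 (MSE, MAE, and binary cross-entropy); the target and prediction vectors are made up for illustration.

import numpy as np

def mse(y, y_hat):   # (1/n) * sum((y - y_hat)^2)
    return np.mean((y - y_hat) ** 2)

def mae(y, y_hat):   # (1/n) * sum(|y - y_hat|)
    return np.mean(np.abs(y - y_hat))

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # -[y*log(y_hat) + (1-y)*log(1-y_hat)], averaged; eps guards against log(0)
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y     = np.array([1.0, 0.0, 1.0])
y_hat = np.array([0.9, 0.2, 0.6])
print(mse(y, y_hat), mae(y, y_hat), binary_cross_entropy(y, y_hat))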

24

Categorical Cross-Entropy

Loss that measures prediction error over multiple mutually exclusive classes.

25

Sparse Categorical Cross-Entropy

Cross-entropy formulation that expects integer class labels instead of one-hot vectors.

26

Gradient Descent

An optimization algorithm that updates parameters in the direction of the negative gradient of the loss.

27

Learning Rate

A hyperparameter controlling the step size during gradient descent updates.

28

Weight Update Rule

Parameter adjustment formula w ← w − η ∂L/∂w, where η is the learning rate.
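
A minimal sketch of gradient descent applying the update rule from card 28 to fit a single weight on a toy one-dimensional regression; the data, starting weight, and learning rate are hypothetical.

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # toy inputs
y = 2.0 * x                     # targets generated with a true weight of 2.0
w, lr = 0.0, 0.1                # initial weight and learning rate (eta)

for step in range(50):
    y_hat = w * x
    grad = np.mean(2 * (y_hat - y) * x)   # dL/dw for the MSE loss
    w = w - lr * grad                     # w <- w - eta * dL/dw
print(w)   # converges toward 2.0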

29

Derivative

The instantaneous rate of change of a function; foundation for optimization in neural networks.

30

Partial Derivative

The derivative of a multivariable function with respect to one variable while keeping others constant.

31

Cost Function

Overall measure of model error, often the average loss across the entire training set.

32

Overfitting

When a model learns noise and specific details of training data, harming generalization to new data.

33

Convolutional Neural Network (CNN)

A neural architecture that employs convolutional layers to automatically learn spatial hierarchies of features from images.

34

Convolution Operation

Sliding a filter over input data, performing element-wise multiplication and summing to extract local patterns.
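
A minimal NumPy sketch of a 2-D convolution (in the cross-correlation form most frameworks use) with stride 1 and no padding; the 5x5 input and 3x3 vertical-edge filter are made up.

import numpy as np

def conv2d(image, kernel, stride=1):
    # Slide the kernel over the image; each output value is the sum of the
    # element-wise product between the kernel and the patch it covers.
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image  = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
print(conv2d(image, kernel))   # 3x3 feature map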

35

Filter (Kernel)

A small matrix of weights applied during convolution to detect specific features such as edges or textures.

36

Feature Map

The output matrix produced after applying a filter over the input through convolution.

37

Parameter Sharing

Reusing the same filter weights across different spatial locations, greatly reducing model parameters.

38

Local Connectivity

Each neuron connects only to a local region of the previous layer, capturing spatially local patterns.

39

Pooling Layer

A layer that down-samples feature maps, reducing dimensionality and computation.

40

Max Pooling

Pooling method that keeps the maximum value within each sub-region of the feature map.
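
A minimal NumPy sketch of 2x2 max pooling with stride 2 over a made-up 4x4 feature map; each output value is the maximum of one non-overlapping sub-region.

import numpy as np

def max_pool2d(fmap, size=2, stride=2):
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 8, 3],
                 [1, 4, 9, 6]], dtype=float)
print(max_pool2d(fmap))   # [[6. 4.] [7. 9.]]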

41

Spatial Invariance

Model property enabling recognition of objects regardless of their position or small deformations in the image.

42

Representation Learning

The automatic discovery of useful feature hierarchies directly from raw data.

43

Fully Connected Layer

A dense layer where every neuron is connected to all activations from the previous layer.

44

Softmax Function

Activation that converts logits into a probability distribution over multiple classes.
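
A minimal NumPy sketch of softmax over a vector of logits, with the usual max-subtraction trick for numerical stability; the logit values are made up.

import numpy as np

def softmax(logits):
    # Shift by the max (does not change the result), exponentiate, normalize to sum to 1.
    z = logits - np.max(logits)
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))   # a probability distribution; the largest logit gets the largest probability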

45

Flatten Layer

Operation that reshapes multidimensional feature maps into a one-dimensional vector for dense layers.

46

Image as Matrix

Concept that digital images are arrays of integers (0-255) representing pixel intensities.

47

Low-Level Features

Basic patterns like edges or corners learned in early convolutional layers.

48

Mid-Level Features

Intermediate patterns such as eyes, noses, wheels, or windows learned by deeper layers.

49

High-Level Features

Complex, task-specific concepts like full faces or objects formed in the deepest layers.

50

Downsampling

Process of reducing spatial resolution, typically via pooling, to decrease computation and encourage invariance.

51

Upsampling (Transposed Convolution)

Operation that increases spatial resolution, used to reconstruct high-resolution outputs such as segmentation maps.

52

Fully Convolutional Network (FCN)

A network composed only of convolutional and upsampling layers, used for tasks like semantic segmentation.

53

Object Detection

Task of identifying and localizing multiple objects within an image by drawing bounding boxes and classifying them.

54

Region Proposal

A candidate bounding box likely to contain an object, used as input for detection pipelines.

55

R-CNN

Region-based CNN that classifies region proposals but suffers from slow inference because it runs a separate CNN forward pass for each proposal.

56

Faster R-CNN

Improved detection model that learns region proposals with an internal network and runs a single CNN pass per image.

57

Region Proposal Network (RPN)

Sub-network in Faster R-CNN that generates object region candidates directly from convolutional feature maps.

58

Semantic Segmentation

Pixel-wise classification task assigning every image pixel to a semantic class label.

59

Receptive Field

The region of the input image that influences the activation of a particular neuron.

60

Stride

The number of pixels a filter moves at each step during convolution or pooling operations.

61

Slope

The change in y divided by the change in x; generalized to curves as the derivative at a point, which gradient-based optimization relies on.

62

Gradient

Vector of partial derivatives indicating the direction and magnitude of the steepest loss increase.

63

Sparse Connections

Network property where each neuron connects to only a subset of previous layer activations, reducing parameters.

64

Pooling Benefits

Dimensionality reduction, decreased overfitting, and tolerance to small spatial distortions.

65

Channel (Feature Depth)

The third dimension in image tensors representing color channels or multiple feature maps.

66

Classification

Predictive task where the output variable represents discrete class labels.

67

Regression

Predictive task where the output variable is continuous, such as a real-valued number.

68

Probability Output

Model output between 0 and 1 indicating confidence in a prediction, often produced by sigmoid or softmax.

69

Non-Linear Activation

Function that introduces non-linearity, enabling networks to learn complex, non-linear mappings.

70

Hyperparameter

A configuration variable (e.g., learning rate, filter size) set before training and not learned from data.

71

Hardware Acceleration (GPU)

Use of specialized hardware to perform parallel computations, drastically speeding up deep-learning training.

72

Backpropagation

Algorithm that computes the gradient of the loss with respect to each parameter by applying the chain rule while traversing the network from the output layer back to the input.
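
A minimal NumPy sketch of backpropagation through a single sigmoid neuron with a squared-error loss, applying the chain rule by hand; the inputs, weights, bias, and target are hypothetical.

import numpy as np

x = np.array([0.5, -1.0])        # inputs
w = np.array([0.3, 0.8])         # weights
b, y = 0.1, 1.0                  # bias and target

# Forward pass
z = np.dot(w, x) + b
y_hat = 1.0 / (1.0 + np.exp(-z)) # sigmoid activation
loss = (y_hat - y) ** 2          # squared-error loss

# Backward pass: chain rule from the loss back to each parameter
dL_dyhat = 2 * (y_hat - y)
dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
dL_dz = dL_dyhat * dyhat_dz
dL_dw = dL_dz * x                # gradient with respect to each weight
dL_db = dL_dz                    # gradient with respect to the bias
print(dL_dw, dL_db)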