Foundations of Machine Learning


Description and Tags

Flashcards covering key terms and definitions from the foundations of machine learning concepts and techniques.

Last updated 6:40 PM on 4/22/26

84 Terms

1

Machine Learning

Using algorithms to learn patterns from data and make predictions.

2

Supervised Learning

Learning using labeled data (input + known output).

3

Unsupervised Learning

Learning patterns from unlabeled data.

4

Classification

Predicting categories, such as spam vs not spam.

5

Regression

Predicting continuous values like height or price.

6

Feature Matrix (X)

Input variables used to make predictions.

7

Target Vector (y)

Output variable being predicted.

8

Linear Regression

Models relationship between variables using a straight line.

9

Least Squares Method

Minimizes squared error between predicted and actual values.
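The least squares fit can be sketched with NumPy on toy data (the numbers below are invented for illustration; the true line is roughly y = 2x + 1):

```python
import numpy as np

# Toy data: y is roughly 2*x + 1 with a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.9])

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# Least squares: the coefficients minimizing the sum of squared residuals.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta
print(round(intercept, 2), round(slope, 2))  # → 1.08 1.98
```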

10

Error (Residual)

Difference between actual and predicted value.

11

Mean Squared Error (MSE)

Average of squared errors.

12

Root Mean Squared Error (RMSE)

Square root of MSE; expresses the average prediction error in the original units of the target.

13

R² (R-Squared)

Proportion of the variance in the target explained by the model.
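On a small set of hypothetical predictions, the three error metrics above can be computed directly:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.5, 6.5, 9.5])

residuals = y_true - y_pred             # errors
mse = np.mean(residuals ** 2)           # Mean Squared Error
rmse = np.sqrt(mse)                     # Root Mean Squared Error
ss_res = np.sum(residuals ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                # fraction of variance explained
print(mse, rmse, r2)                    # → 0.25 0.5 0.95
```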

14

Training Set

Data used to build the model.

15

Validation Set

Data used to tune and evaluate during training.

16

Test Set

Final dataset to evaluate model performance.
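A minimal sketch of a shuffled train/validation/test split with NumPy; the 60/20/20 proportions and the seed are arbitrary choices for illustration, not a standard:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
indices = rng.permutation(n)      # shuffle before splitting

# 60% train, 20% validation, 20% test.
train_idx = indices[:6]
val_idx = indices[6:8]
test_idx = indices[8:]
print(len(train_idx), len(val_idx), len(test_idx))  # → 6 2 2
```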

17

Overfitting

Model performs well on training data but poorly on new data.

18

Cross-Validation (CV)

Repeatedly splitting data into folds to evaluate model performance.

19

K-Fold Cross Validation

Splitting data into k groups and rotating validation sets.
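The fold rotation can be sketched with NumPy's array_split; here k = 3 on 12 toy indices, so each round trains on 8 points and validates on 4:

```python
import numpy as np

n, k = 12, 3
indices = np.arange(n)
folds = np.array_split(indices, k)     # k roughly equal groups

for i in range(k):
    val_idx = folds[i]                 # one fold held out for validation
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # train on train_idx and evaluate on val_idx here
    print(i, len(train_idx), len(val_idx))
```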

20

Missing Data

Data points with no value (NaN) that must be handled.

21

Imputation

Filling missing values using mean, median, or most frequent value.

22

Mean Imputation

Replace missing values with average.

23

Median Imputation

Replace missing values with median (better for skewed data).
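Both imputation strategies on a toy array with one missing value; note how the outlier (100) pulls the mean fill but not the median fill:

```python
import numpy as np

x = np.array([1.0, 2.0, np.nan, 4.0, 100.0])

# Replace each NaN with the mean / median of the observed values.
mean_filled = np.where(np.isnan(x), np.nanmean(x), x)
median_filled = np.where(np.isnan(x), np.nanmedian(x), x)
print(mean_filled[2], median_filled[2])  # → 26.75 3.0
```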

24

Standardization (Z-score)

Scale data to mean = 0 and SD = 1.

25

Normalization (Min-Max Scaling)

Scale values between 0 and 1.

26

Feature Scaling

Ensuring all features are on a similar scale to improve models.
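Both scaling methods on a toy feature:

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])

z = (x - x.mean()) / x.std()                   # standardization: mean 0, SD 1
minmax = (x - x.min()) / (x.max() - x.min())   # normalization: range [0, 1]
print(z.mean(), minmax.min(), minmax.max())
```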

27

K-Nearest Neighbors (KNN)

Classifies data based on nearest neighbors.

28

K Value

Number of neighbors used for classification.
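A minimal from-scratch KNN classifier on toy 2-D points: majority vote among the k training points nearest by Euclidean distance.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    # Euclidean distance from x_new to every training point.
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]        # indices of the k closest points
    votes = y_train[nearest]
    # Majority vote among the k nearest neighbors.
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: one cluster near the origin (class 0), one near (5, 5) (class 1).
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.5, 5.0])))  # → 1
```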

29

Distance (Norm)

Measure of similarity between data points.

30

Curse of Dimensionality

As the number of features grows, data become sparse and distance-based methods lose performance.

31

Naive Bayes

Probabilistic classifier using Bayes’ theorem.

32

Bayes’ Theorem

Calculates probability of a class given features.

33

Prior Probability

Initial probability of a class.

34

Likelihood

Probability of features given class.

35

Posterior

Final probability after considering evidence.

36

Naive Assumption

Features are independent given class.

37

Joint Probability

Probability of multiple events occurring together.

38

Conditional Probability

Probability of one event given another.
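The prior, likelihood, and posterior fit together as in this toy spam-word calculation; all of the probabilities below are invented for illustration:

```python
# Bayes' theorem on a toy spam filter:
# P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam = 0.2                 # prior: fraction of mail that is spam
p_word_given_spam = 0.6      # likelihood of the word in spam
p_word_given_ham = 0.1       # likelihood of the word in non-spam

# Evidence: total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
posterior = p_word_given_spam * p_spam / p_word
print(round(posterior, 3))   # → 0.6
```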

39

Support Vector Machines (SVM)

Finds optimal boundary (hyperplane) separating classes.

40

Hyperplane

The decision boundary that separates classes in feature space.

41

Margin

Distance between boundary and closest data points.

42

Support Vectors

Points closest to boundary.

43

Hard Margin

No misclassification allowed.

44

Soft Margin

Allows some errors.

45

Kernel

Function defining shape of decision boundary.

46

RBF Kernel

Radial basis function kernel; produces nonlinear (curved) decision boundaries.

47

Regularization Parameter (C)

Controls the tradeoff between a wide margin and misclassifying training points.

48

Decision Tree

Model that splits data based on features.

49

Entropy

Measure of randomness/impurity.

50

Information Gain

Reduction in entropy after a split.

51

Gini Index

Measure of impurity used in trees.
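Entropy and the Gini index can be computed directly from label counts; a pure node scores 0 under both measures, and information gain is the drop in entropy from a parent node to its children:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))    # 1.0 for a 50/50 binary split

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)         # 0.5 for a 50/50 binary split

mixed = np.array([0, 0, 1, 1])
pure = np.array([1, 1, 1, 1])
print(entropy(mixed), gini(mixed), entropy(pure), gini(pure))
```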

52

Bagging (Bootstrap Aggregation)

Combine multiple models trained on random samples.

53

Random Forest

Ensemble of decision trees.

54

Boosting

Sequentially improving weak models.

55

AdaBoost

Adjusts weights to focus on errors.

56

Gradient Boosting

Builds models based on previous errors.

57

Learning Rate

Scales the contribution of each new model to the ensemble.
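A minimal sketch of gradient boosting for squared loss: each stage fits a weak learner (here a hand-rolled depth-1 regression stump) to the current residuals, and the learning rate scales its contribution. The target function and settings are arbitrary toy choices:

```python
import numpy as np

def fit_stump(x, residual):
    # Depth-1 regression tree: best single threshold, mean prediction per side.
    best = None
    for t in np.unique(x)[:-1]:        # skip the max (right side would be empty)
        left = residual[x <= t].mean()
        right = residual[x > t].mean()
        pred = np.where(x <= t, left, right)
        err = np.sum((residual - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda q: np.where(q <= t, left, right)

x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x)              # toy target to approximate
pred = np.zeros_like(y)                # ensemble prediction starts at zero
learning_rate = 0.5                    # scales each stage's contribution

for _ in range(100):
    stump = fit_stump(x, y - pred)     # weak learner fit to the residuals
    pred += learning_rate * stump(x)

mse = np.mean((y - pred) ** 2)
print(round(mse, 4))
```

After 100 stages the training error is small even though each individual stump is a very weak model; that is the core idea of boosting.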

58

Hyperparameter

Parameter set before training that controls model behavior.

59

Grid Search

Testing multiple hyperparameter values to find the best one.
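A hand-rolled grid search over one hyperparameter (polynomial degree), scored on a held-out validation split; the grid, data, and split are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.size)

x_train, y_train = x[::2], y[::2]      # even-indexed points: train
x_val, y_val = x[1::2], y[1::2]        # odd-indexed points: validate

best_degree, best_err = None, np.inf
for degree in [1, 2, 3, 5, 7, 9]:      # the hyperparameter grid
    coeffs = np.polyfit(x_train, y_train, degree)
    val_err = np.mean((y_val - np.polyval(coeffs, x_val)) ** 2)
    if val_err < best_err:
        best_degree, best_err = degree, val_err
print(best_degree)
```

Low degrees underfit the sine wave badly, so the search settles on a higher-degree model with the lowest validation error.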

60

Regularization

Prevent overfitting by penalizing large coefficients.

61

Ridge Regression (L2)

Penalizes squared coefficients, shrinking all of them toward zero.

62

Lasso Regression (L1)

Penalizes absolute coefficients; can shrink some exactly to zero, effectively selecting features.
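Ridge has the closed form (XᵀX + αI)⁻¹Xᵀy, which makes the shrinkage easy to see on synthetic data; lasso's L1 penalty has no closed form and needs an iterative solver, which is why it is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=50)

def ridge(X, y, alpha):
    # Closed form: (X^T X + alpha * I)^(-1) X^T y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

ols = ridge(X, y, 0.0)        # alpha = 0 recovers ordinary least squares
shrunk = ridge(X, y, 100.0)   # a large penalty shrinks coefficients toward zero
print(np.abs(shrunk).sum() < np.abs(ols).sum())  # → True
```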

63

Time Series Data

Data ordered over time.

64

Trend

Long-term direction.

65

Seasonality

Repeating pattern.

66

Noise

Random variation.

67

Lag Feature

Previous value used for prediction.

68

Moving Average

Average of past values to smooth data.

69

Autocorrelation

Correlation of a variable with its past values.

70

Stationarity

Statistical properties remain constant over time.

71

Differencing

Subtracting previous values to remove trend.
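Lag features, moving averages, differencing, and autocorrelation on a toy trended series; differencing the linear trend leaves a constant (stationary) series:

```python
import numpy as np

series = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])   # toy upward trend

lag1 = series[:-1]                   # lag-1 feature: the previous value
diffed = np.diff(series)             # differencing removes the linear trend
moving_avg = np.convolve(series, np.ones(3) / 3, mode="valid")  # window-3 smoother
autocorr = np.corrcoef(series[:-1], series[1:])[0, 1]  # lag-1 autocorrelation

print(diffed, moving_avg, round(autocorr, 3))  # diffs are all 2.0
```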

72

Autoregressive Model (AR)

Uses past values to predict future.

73

Moving Average Model (MA)

Uses past errors to predict.

74

ARIMA Model

Combines autoregressive (AR), integrated (I, differencing), and moving average (MA) components.

75

Ontologies

Structured representation of knowledge.

76

Ontology Evaluation

Assessing quality of ontology.

77

Accuracy (Ontology)

Correctness of representation.

78

Consistency

No contradictions in ontology.

79

Completeness

Covers domain fully.

80

Clarity

Easy to understand.

81

Adaptability

Can be extended.

82

Semantic Web

Web of linked structured data.

83

Web Ontology Language (OWL)

A formal language for representing ontologies.

84

SKOS

Simple Knowledge Organization System; a vocabulary for representing concept schemes such as thesauri and taxonomies.