Machine Learning Foundations Week 5 Glossary

Flashcards for Machine Learning Foundations Week 5 Glossary


27 Terms

1. Accuracy
A performance metric for classification models; the proportion of correct predictions out of the total number of predictions.
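A minimal sketch in Python, using made-up labels and predictions for illustration:

```python
# Accuracy: correct predictions divided by total predictions.
# y_true / y_pred are hypothetical toy data.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 4 correct out of 5 -> 0.8
```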

2. Area under the receiver operating characteristic curve (AUC)
A commonly used metric for measuring a binary classifier’s performance.
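AUC can be read as the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. A sketch of that pairwise interpretation, with toy scores:

```python
# AUC via pairwise comparison: fraction of (positive, negative) pairs
# where the positive example gets the higher score (ties count 0.5).
def auc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 3 of 4 pairs -> 0.75
```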

3. Base rate
For a binary classifier, the percentage of cases in your evaluation data where Y equals 1.

4. Classification
A supervised learning method in which the label is a categorical value.

5. Conditional expected value
The likely average future value of Y in cases where X is true.
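A small sketch: averaging Y only over the cases where the condition X holds. The data here are hypothetical:

```python
# Conditional expected value E[Y | X]: average Y over cases where X is true.
# Hypothetical data: Y is spend, X marks loyalty-program members.
data = [
    {"member": True,  "spend": 120},
    {"member": True,  "spend": 80},
    {"member": False, "spend": 40},
    {"member": False, "spend": 60},
]

member_spend = [row["spend"] for row in data if row["member"]]
cond_ev = sum(member_spend) / len(member_spend)
print(cond_ev)  # average spend among members -> 100.0
```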

6. Empirical risk minimization
Choosing the model that minimizes loss on the training set.
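A toy sketch of the idea: among a (hypothetical) set of candidate models, pick the one with the lowest loss on the training data. Here the "models" are constant predictors under squared loss:

```python
# ERM sketch: pick the candidate model that minimizes training loss.
# Candidates are constant predictors; loss is mean squared error.
train_y = [1.0, 2.0, 3.0]
candidates = [0.0, 1.0, 2.0, 3.0]

def loss(c):
    return sum((y - c) ** 2 for y in train_y) / len(train_y)

best = min(candidates, key=loss)
print(best)  # the mean of train_y (2.0) minimizes squared loss
```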

7. Expected value
The likely average future value of Y.

8. Expected value estimation
Estimating the average (expected) value of an outcome given known information about an example.

9. Feature selection
The process of empirically testing different combinations of features to choose an appropriate set.

10. Generalization
A model’s ability to adapt to new, previously unseen data.

11. Heuristic selection
A feature selection method that filters out features using heuristic rules prior to modeling.

12. Hyperparameters
The “knobs” that you tweak during successive runs of training a model. Often trade off complexity vs. simplicity of models.

13. Implicit feature selection
Reducing feature count as a byproduct of the model training procedure.

14. K-fold cross-validation
A resampling method that splits the data into K partitions (folds), training the model on K − 1 folds and validating on the remaining fold, rotating through all K folds.
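A minimal sketch of the fold construction (indices only, no shuffling), in pure Python:

```python
# K-fold split sketch: each fold serves once as the validation set,
# with the remaining indices used for training.
def kfold_indices(n, k):
    folds = []
    for i in range(k):
        val = list(range(i * n // k, (i + 1) * n // k))
        train = [j for j in range(n) if j not in val]
        folds.append((train, val))
    return folds

for train, val in kfold_indices(6, 3):
    print(train, val)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```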

15. Model deployment
The process of using a machine learning model in a production environment where it can be used for its intended purpose.

16. Out-of-sample validation
Computing evaluation metrics on examples that were not part of model training. Helps approximate the expected loss.

17. Precision
Percentage of positive predictions that were actually positive.

18. Ranking
Sorting examples and choosing the top K to fulfill some optimization objective.

19. Recall
Percentage of actual positives that were correctly classified as positive.
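Precision and recall both come from the confusion counts; a sketch with toy labels:

```python
# Precision = TP / (TP + FP): of predicted positives, how many were right.
# Recall    = TP / (TP + FN): of actual positives, how many were found.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)  # 2 / 3
recall = tp / (tp + fn)     # 2 / 3
```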

20. Receiver operating characteristic (ROC) curve
A curve that represents the performance of your binary classification model at various classification thresholds.

21. Regression
A supervised learning method in which the label is any real-valued number.

22. Regularization
The penalty on a model’s complexity; helps prevent overfitting.
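A sketch of an L2 (ridge-style) penalty added to a training loss; `lam` is a hyperparameter and the numbers are illustrative:

```python
# L2-regularized loss sketch: training loss plus a penalty on weight size.
# lam (lambda) trades off fit against model complexity.
def ridge_loss(residuals, weights, lam):
    mse = sum(r * r for r in residuals) / len(residuals)
    penalty = lam * sum(w * w for w in weights)
    return mse + penalty

print(ridge_loss([1.0, -1.0], [3.0, 4.0], 0.1))  # 1.0 + 0.1 * 25 = 3.5
```

Larger `lam` pushes the model toward smaller weights (simpler models), which helps prevent overfitting.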

23. Stepwise selection
Feature selection method to iteratively add/reduce features based on empirical model performance.
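A sketch of the forward variant: start with no features and greedily add whichever feature most improves a (hypothetical) validation score, stopping when nothing helps. The feature names and score function are made up:

```python
# Forward stepwise selection sketch: greedily add the best feature per round.
def forward_select(features, score):
    selected, remaining = [], list(features)
    current = score(selected)
    while remaining:
        best_f, best_s = None, current
        for f in remaining:
            s = score(selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        if best_f is None:
            break  # no remaining feature improves the score
        selected.append(best_f)
        remaining.remove(best_f)
        current = best_s
    return selected

# Toy score: pretend "age" and "income" help, "zip" hurts.
useful = {"age": 0.2, "income": 0.1}
score = lambda feats: sum(useful.get(f, -0.05) for f in feats)
print(forward_select(["age", "income", "zip"], score))  # ['age', 'income']
```

Backward stepwise selection works the same way in reverse: start with all features and greedily drop the one whose removal helps (or hurts least).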

24. Supervised learning
A class of machine learning problems in which labeled data are available; an algorithm learns to associate data values with their labels, enabling predictive models for classification or regression on unseen data.

25. Test set
The subset of the data set that you use as a final test of your model’s performance.

26. Training set
The subset of the data set used to train a machine learning model to make predictions.

27. Validation set
The subset of the data set used to evaluate model performance during model selection.
