TOPIC 2: Bias-Variance Tradeoff, Overfitting & Underfitting


Last updated 9:42 PM on 4/18/26

14 Terms

1. Bias

Error from overly simplistic assumptions; the model is systematically wrong.

2. Variance

Error from sensitivity to fluctuations in the training data; the model is too complex.

3. Overfitting

The model learns training noise; it generalizes poorly to new data.

4. Underfitting

The model is too simple to capture the true patterns; poor performance on both training and test data.

5. Bias-Variance Tradeoff

The tension between model simplicity (high bias) and flexibility (high variance).
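Not part of the deck, but the tradeoff can be made concrete with a small numpy sketch: a low-degree polynomial (high bias) underfits a noisy curve, while a high-degree one (high variance) can chase the noise. The function, noise level, and degrees below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

def train_mse(degree):
    """Training error of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# A more flexible model always fits the training data at least as well,
# because the simpler polynomial is a special case of the flexible one --
# the danger is that the extra flexibility fits noise, not signal.
assert train_mse(10) <= train_mse(1)
```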

6. Regularization

A technique that penalizes model complexity to reduce overfitting (e.g., L1/L2).
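As a sketch of the idea (not from the deck; `lam` and the toy data are illustrative), a regularized loss is just the data-fit term plus a penalty on the weights:

```python
import numpy as np

def penalized_loss(w, X, y, lam, penalty="l2"):
    """Mean squared error plus a complexity penalty on the weights.

    lam and the choice of penalty ("l1" or "l2") are hyperparameters.
    """
    mse = np.mean((X @ w - y) ** 2)
    if penalty == "l1":
        return mse + lam * np.sum(np.abs(w))   # Lasso-style penalty
    return mse + lam * np.sum(w ** 2)          # Ridge-style penalty

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 2.0])
w = np.array([1.0, 2.0])

# With lam = 0 the penalty vanishes; larger lam charges more for big weights.
assert penalized_loss(w, X, y, 0.0) == 0.0
assert penalized_loss(w, X, y, 1.0) > penalized_loss(w, X, y, 0.1)
```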

7. Cross-Validation

A technique to estimate model generalization by splitting data into multiple train/test folds.
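A minimal k-fold split can be written by hand (this sketch shuffles indices and yields train/validation index pairs; the fold count and seed are arbitrary):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Each sample appears in exactly one validation fold across the k splits.
all_val = np.concatenate([val for _, val in kfold_indices(10, 3)])
assert sorted(all_val.tolist()) == list(range(10))
```

In practice a library routine (e.g., scikit-learn's `KFold`) does the same bookkeeping.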

8. Irreducible Error

Noise inherent in the data that no model can remove.

9. Model Complexity

The degree of flexibility in a model (e.g., tree depth, number of parameters).

10. Learning Curve

A plot of training/validation error vs. training set size; used to diagnose bias/variance.
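The points of such a curve are simple to compute: fit the model on growing subsets of the training data and record both errors at each size. A sketch with a linear fit on synthetic data (the target function, sizes, and split are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = 3 * x + 1 + rng.normal(scale=0.1, size=x.size)
x_val, y_val = x[150:], y[150:]   # held-out validation split

sizes = [10, 25, 50, 100, 150]
train_err, val_err = [], []
for n in sizes:
    coeffs = np.polyfit(x[:n], y[:n], 1)   # fit a line on the first n samples
    train_err.append(np.mean((np.polyval(coeffs, x[:n]) - y[:n]) ** 2))
    val_err.append(np.mean((np.polyval(coeffs, x_val) - y_val) ** 2))

# One (train, validation) error pair per training-set size; plotting
# these two lists against `sizes` gives the learning curve.
assert len(train_err) == len(val_err) == len(sizes)
```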

11. Validation Set

Data held out from training to evaluate hyperparameter choices.

12. L1 Regularization (Lasso)

Adds the absolute values of the weights to the loss; promotes sparsity.
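The sparsity claim can be illustrated with the soft-thresholding operator, which is how coordinate-descent Lasso solvers update weights (the weight vector and threshold here are made up for the demo):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: shrinks every weight by lam
    and sets weights smaller than lam in magnitude exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.05, -0.3, 1.2, -0.01])
w_sparse = soft_threshold(w, 0.1)

# The two small weights are zeroed out exactly -- sparsity in action.
assert np.count_nonzero(w_sparse) == 2
```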

13. L2 Regularization (Ridge)

Adds the squared weights to the loss; shrinks weights toward zero.
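Ridge regression has a closed-form solution, which makes the shrinkage easy to see numerically (the synthetic data and penalty values below are illustrative):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=50)

# Increasing the penalty shrinks the weight vector toward zero,
# but (unlike L1) does not zero individual weights exactly.
norms = [np.linalg.norm(ridge_fit(X, y, lam)) for lam in (0.0, 1.0, 10.0)]
assert norms[0] >= norms[1] >= norms[2]
```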

14. Dropout

Randomly zeroing neurons during neural-network training to reduce overfitting.
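A sketch of "inverted" dropout, the variant most frameworks use: each activation is dropped with probability p at training time, and survivors are scaled by 1/(1-p) so the expected activation is unchanged (the drop rate and input are illustrative):

```python
import numpy as np

def dropout(a, p, rng):
    """Inverted dropout: zero each activation with probability p and
    scale survivors by 1/(1-p); applied only during training."""
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = np.ones(1000)
out = dropout(a, 0.5, rng)

# Every activation is either dropped (0) or scaled up to 1/(1-p) = 2.
assert set(np.unique(out)) <= {0.0, 2.0}
```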