Data Science Underfitting & Overfitting


Last updated 6:08 PM on 4/10/26
12 Terms

1

What does underfitting mean?

The model is too simple to learn the real pattern in the data, so it performs poorly on both the training and test sets

2

Signs of underfitting:

  • training accuracy is low

  • test accuracy is low

  • training & test accuracy are usually close together

3

Causes of underfitting:

  • poor data quality

  • the network is not deep enough (too few hidden layers/neurons)

  • not enough training epochs

4

How to fix underfitting:

  • add more hidden layers/neurons

  • add more epochs

  • dataset cleaning, scaling, feature selection

5

What does overfitting mean?

The model is too complex and learns the training data too well, including noise and unimportant details, so it performs well on the training set but poorly on the test set

6

Signs of overfitting:

  • training accuracy is very high

  • test accuracy is much lower

  • there is a large gap between training and test accuracy
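The signs on these cards can be turned into a quick diagnostic. A minimal sketch, where the function name and the threshold values are illustrative assumptions rather than fixed rules:

```python
# Classify a fit from train/test accuracy, following the signs on
# these cards. The thresholds are illustrative, not standard values.

def diagnose_fit(train_acc, test_acc, gap_threshold=0.10, low_threshold=0.70):
    """Return 'overfitting', 'underfitting', or 'ok'."""
    gap = train_acc - test_acc
    if gap > gap_threshold:
        # high training accuracy, much lower test accuracy -> overfitting
        return "overfitting"
    if train_acc < low_threshold and test_acc < low_threshold:
        # both accuracies low and close together -> underfitting
        return "underfitting"
    return "ok"

print(diagnose_fit(0.99, 0.72))  # large train/test gap
print(diagnose_fit(0.55, 0.53))  # both low, close together
print(diagnose_fit(0.90, 0.88))
```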

7

Causes of overfitting:

  • the dataset wasn’t randomized (shuffled) before splitting

  • the dataset was too small

  • too many epochs

  • too many neurons and too many hidden layers

8

How to fix overfitting:

  • reduce the number of neurons and hidden layers

  • generate more data

  • use dropout

9

What does early stopping do?

After each epoch, the model checks the validation loss; if the loss stops improving for several epochs, training stops and the model reverts to its best weights
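The check-and-revert loop on this card can be sketched in a few lines. This assumes a hypothetical `train_one_epoch(model)` callable that trains for one epoch and returns the validation loss; the name and the `patience` parameter are illustrative:

```python
# Minimal sketch of early stopping. train_one_epoch is a hypothetical
# callable that runs one epoch and returns the validation loss.
import copy

def fit_with_early_stopping(model, train_one_epoch, max_epochs=100, patience=5):
    best_loss = float("inf")
    best_model = copy.deepcopy(model)
    epochs_without_improvement = 0
    for _ in range(max_epochs):
        val_loss = train_one_epoch(model)
        if val_loss < best_loss:
            best_loss = val_loss
            best_model = copy.deepcopy(model)  # remember the best weights
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break  # stop training and revert to the best model
    return best_model, best_loss
```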

10

What does dropout do?

During training, some neurons are randomly turned off, forcing the network not to depend too much on any single neuron
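A minimal sketch of the random turning-off on a list of activations. The scaling by `1 / (1 - rate)` ("inverted dropout") is how common frameworks keep the expected activation unchanged; the function name and `rate` parameter are illustrative:

```python
# Minimal sketch of (inverted) dropout. rate is the probability of
# turning a neuron off; survivors are scaled so the expected value
# of each activation stays the same.
import random

def dropout(activations, rate=0.5, training=True):
    if not training:
        return list(activations)  # dropout only applies during training
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```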

11

What does L1 regularization do?

It penalizes the absolute value of the weights, pushing some weights to exactly zero, which can make the model sparser

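The penalty term itself is one line. A minimal sketch, where `lam` (the regularization strength) is an illustrative hyperparameter name:

```python
# Minimal sketch of an L1 penalty term added to the loss.
# lam is the regularization strength (illustrative name).

def l1_penalty(weights, lam=0.01):
    # sum of absolute values: every weight is pushed toward zero at the
    # same rate, so small weights can reach exactly zero (sparsity)
    return lam * sum(abs(w) for w in weights)

# usage: total_loss = data_loss + l1_penalty(weights)
```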
12

What does L2 regularization do?

It penalizes the square of the weights, so very large weights are punished more strongly (weights shrink toward zero but rarely reach exactly zero)

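For contrast with L1, a minimal sketch of the L2 term under the same illustrative `lam` naming:

```python
# Minimal sketch of an L2 penalty term added to the loss.
# lam is the regularization strength (illustrative name).

def l2_penalty(weights, lam=0.01):
    # sum of squares: a weight of 4 contributes 16x as much as a weight
    # of 1, so very large weights are punished much more strongly
    return lam * sum(w * w for w in weights)

# usage: total_loss = data_loss + l2_penalty(weights)
```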