2 Ridge Regression, Lasso Regression & ElasticNet Regression


Description and Tags

Regularization techniques


15 Terms

1

How do we reduce the overfitting of linear regression?

We use Ridge regression.
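
As a minimal sketch of this idea (assuming scikit-learn is available; the synthetic data and alpha value are illustrative, not from the cards), Ridge can be compared against plain linear regression on a small, noisy dataset:

# Sketch: Ridge vs. plain linear regression on few samples with many features.
# The dataset and alpha value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                   # few samples, many features -> prone to overfitting
y = 3.0 * X[:, 0] + 0.5 * rng.normal(size=60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lin = LinearRegression().fit(X_train, y_train)
ridge = Ridge(alpha=1.0).fit(X_train, y_train)  # alpha plays the role of lambda

print("Linear test R^2:", round(lin.score(X_test, y_test), 3))
print("Ridge  test R^2:", round(ridge.score(X_test, y_test), 3))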

2

Ridge Regression formula

$J(\theta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} \theta_j^2$
3

When we have overfitting, the cost function on the training data will be

zero (or very close to it).

4

Formula of Ridge Regression

$J(\theta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} \theta_j^2$
5

When the cost function is zero, that means

the model is overfitting the data.

6

If we increase the lambda, then ____ is shifted

The gradient descent (cost) curve is shifted, so the global minimum moves as the lambda value increases.
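
A tiny numeric sketch of this shift (NumPy assumed; the three data points and lambda values are made up for illustration): for a one-parameter ridge cost, the slope that minimizes the cost moves toward zero as lambda grows.

# Sketch: the minimizing slope of a 1-D ridge cost shifts as lambda increases.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                         # true slope is 2 with no noise
theta = np.linspace(0.0, 3.0, 601)  # candidate slopes

for lam in (0.0, 1.0, 10.0):
    cost = ((y[:, None] - theta * x[:, None]) ** 2).sum(axis=0) + lam * theta ** 2
    print(f"lambda={lam:>4}: best slope ~ {theta[cost.argmin()]:.2f}")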
7

If the global minimum should not move, then the lambda value should be

zero.

8

The best-fit line is moved above or below depending on

the lambda value.

9

Lasso Regression

Features that are not important are removed automatically (their coefficients are driven to zero).
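
A minimal sketch of this behaviour (scikit-learn assumed; the data and alpha value are illustrative): fit Lasso on data where only the first two features matter and inspect the coefficients.

# Sketch: Lasso sets the coefficients of unimportant features exactly to zero.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + 0.3 * rng.normal(size=200)  # only features 0 and 1 matter

lasso = Lasso(alpha=0.1).fit(X, y)   # alpha plays the role of lambda
print(np.round(lasso.coef_, 2))      # most entries come out exactly 0.0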

10

Formula of Lasso Regression

$J(\theta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} |\theta_j|$
11

ElasticNet Regression

It helps reduce overfitting and performs feature selection by combining the L2 (Ridge) and L1 (Lasso) penalties.
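
A minimal sketch (scikit-learn assumed; alpha and l1_ratio are illustrative): ElasticNet applies both penalties at once, so it shrinks coefficients like Ridge while still zeroing some like Lasso.

# Sketch: ElasticNet combines the L1 (Lasso) and L2 (Ridge) penalties.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] + 0.3 * rng.normal(size=200)

enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # l1_ratio balances the two penalties
print(np.round(enet.coef_, 2))                        # shrunk coefficients, several exactly zero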

12

Formula of ElasticNet

$J(\theta) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda_1 \sum_{j=1}^{p} |\theta_j| + \lambda_2 \sum_{j=1}^{p} \theta_j^2$
13

What is regularization and why is it used in machine learning?

Regularization is a technique used in machine learning to prevent overfitting by adding a penalty to the model's complexity. It helps improve the model's generalization ability, ensuring it performs well on unseen data.
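
In symbols (a general sketch; $L$ and $\Omega$ are generic placeholders rather than terms from the cards), regularization adds a penalty term to the loss being minimized:

$\min_{\theta} \; L(\theta) + \lambda\, \Omega(\theta), \qquad \lambda \ge 0$

where $L(\theta)$ is the data loss (e.g. the sum of squared errors), $\Omega(\theta)$ is the complexity penalty, and a larger $\lambda$ penalizes complexity more strongly. Choosing $\Omega(\theta) = \sum_j |\theta_j|$ gives Lasso (L1) and $\Omega(\theta) = \sum_j \theta_j^2$ gives Ridge (L2).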

14

Regularization Types

L1 regularization (Lasso) and L2 regularization (Ridge).

15

How does regularization prevent overfitting, and why do we use techniques like L1 and L2? 

  • L1 Regularization (Lasso):

    • Penalty: Adds the absolute value of the coefficients to the loss function.

    • Effect: Encourages sparsity, meaning it can drive some coefficients to zero, effectively performing feature selection.

    • Use Case: Useful when you suspect that only a few features are important.

  • L2 Regularization (Ridge):

    • Penalty: Adds the squared value of the coefficients to the loss function.

    • Effect: Shrinks the coefficients but does not set them to zero, leading to a more evenly distributed set of weights.

    • Use Case: Useful when you want to keep all features but reduce their impact.
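
To see the contrast directly, here is a small comparison sketch (scikit-learn assumed; the data and penalty strengths are illustrative): on the same data, Lasso zeroes out coefficients while Ridge only shrinks them.

# Sketch: L1 (Lasso) produces sparse coefficients, L2 (Ridge) only shrinks them.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
y = 5.0 * X[:, 0] + 0.3 * rng.normal(size=200)   # only the first feature is informative

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)

print("Lasso zero coefficients:", int(np.sum(lasso.coef_ == 0)), "of", X.shape[1])
print("Ridge zero coefficients:", int(np.sum(ridge.coef_ == 0)), "of", X.shape[1])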