L4 - Regularization and Dimension Reduction

Last updated 8:07 PM on 4/14/26

54 Terms

1

Why do we need alternatives to least squares?

To improve prediction accuracy and interpretability when many predictors exist.

2

When does least squares perform well?

When the number of observations is much larger than the number of predictors.

3

What happens when the number of predictors is close to the number of observations?

The model has high variance and can overfit.

4

What happens when the number of predictors exceeds the number of observations?

There is no unique solution and variance becomes infinite.
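
The p > n failure is easy to see numerically. A minimal numpy sketch on simulated data (the dimensions are arbitrary): with more columns than rows, the matrix X^T X in the normal equations cannot have full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 10                      # more predictors than observations
X = rng.normal(size=(n, p))

# X^T X is p x p, but its rank is at most n < p, so it is singular
# and the normal equations (X^T X) beta = X^T y have no unique solution.
rank = np.linalg.matrix_rank(X.T @ X)
print(rank, "<", p)
```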

5

What problem arises with many predictors?

Irrelevant variables reduce interpretability.

6

Why is interpretability important?

It helps identify which variables truly affect the response.

7

Why can’t least squares perform variable selection well?

It rarely sets coefficients exactly to zero.

8

What are the three main approaches to improve least squares?

Subset selection, shrinkage methods, and dimension reduction.

9

What is subset selection?

Choosing a subset of predictors to build the model.

10

What is best subset selection?

Fitting a separate least squares model for every possible combination of the p predictors (2^p models in total) and choosing the best one.

11

What is forward stepwise selection?

Adding predictors one at a time starting from none.

12

What is backward stepwise selection?

Removing predictors one at a time starting from all.
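
The forward variant is simple enough to sketch in plain numpy. This is an illustrative implementation on simulated data (the helper name `forward_stepwise` and the raw-RSS criterion are choices for this sketch; in practice the final model size is chosen by cross-validation or an information criterion):

```python
import numpy as np

def forward_stepwise(X, y, k):
    # Greedy forward selection: start with no predictors and at each
    # step add the one whose inclusion most reduces the RSS.
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 3.0 * X[:, 2] + 1.5 * X[:, 4] + rng.normal(scale=0.5, size=200)
chosen = forward_stepwise(X, y, 2)
print(chosen)   # the two truly relevant columns, strongest first
```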

13

What is a drawback of subset selection?

It has high variance and can be computationally expensive.

14

What is multicollinearity?

When predictors are highly correlated.

15

What happens under multicollinearity?

Coefficients become unstable and vary greatly.

16

Why does multicollinearity increase variance?

Small data changes cause large coefficient changes.
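
A quick numpy illustration of this instability, using two nearly duplicate simulated predictors (the scale constants are arbitrary): the same true model fitted on two noise draws gives very different individual coefficients, while the well-determined part (their sum) barely moves.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.001 * rng.normal(size=n)      # x2 is almost a copy of x1
X = np.column_stack([x1, x2])

# Two responses drawn from the same model, y = 1*x1 + 0*x2 + noise:
beta_a = np.linalg.lstsq(X, x1 + 0.1 * rng.normal(size=n), rcond=None)[0]
beta_b = np.linalg.lstsq(X, x1 + 0.1 * rng.normal(size=n), rcond=None)[0]

# The near-singular design makes the individual coefficients unstable
# from sample to sample, yet their sum stays close to the true total, 1.
print(np.linalg.cond(X.T @ X))
print(beta_a, beta_a.sum())
print(beta_b, beta_b.sum())
```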

17

What is shrinkage?

A method that reduces coefficient magnitudes toward zero.

18

Why use shrinkage?

To reduce variance and improve prediction accuracy.

19

What is the tradeoff in shrinkage?

Increased bias but reduced variance.

20

What is regularization?

Another term for shrinkage or penalization.

21

Why must predictors be standardized before shrinkage?

To ensure penalties apply equally across variables.

22

What is feature scaling?

Transforming variables to comparable scales.
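
Standardization is the usual form of scaling before shrinkage. A minimal numpy sketch (the example values are made up):

```python
import numpy as np

# Two predictors on very different scales (e.g., years vs. dollars).
X = np.array([[1.0,  100.0],
              [2.0,  300.0],
              [3.0,  500.0],
              [4.0, 1100.0]])

# Standardize: subtract each column's mean and divide by its standard
# deviation, so every predictor has mean 0 and variance 1 and a single
# shrinkage penalty treats all coefficients comparably.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.mean(axis=0))   # ~ [0, 0]
print(X_std.std(axis=0))    # [1, 1]
```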

23

What is ridge regression?

A shrinkage method that reduces coefficient size.

24

What happens to coefficients in ridge regression?

They shrink toward zero but never become exactly zero.

25

What does the tuning parameter control in ridge?

The strength of shrinkage.

26

What happens when the tuning parameter is zero?

Ridge becomes ordinary least squares.

27

What happens when the tuning parameter is very large?

Coefficients approach zero.
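
The last few cards can be seen together in one numpy sketch using the closed-form ridge solution (simulated data; predictors assumed already on comparable scales): at lam = 0 ridge coincides with least squares, and as lam grows every coefficient shrinks toward, but never exactly to, zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    # Closed-form ridge estimate: (X^T X + lam * I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [0.0, 1.0, 100.0, 1e6]:
    b = ridge(X, y, lam)
    print(lam, np.round(b, 4))
# lam = 0 reproduces least squares; as lam grows, every coefficient
# shrinks toward zero but none is ever set exactly to zero.
```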

28

Why does ridge reduce overfitting?

It lowers variance by shrinking coefficients.

29

What is a limitation of ridge regression?

It keeps all predictors in the model.

30

What is the lasso?

A shrinkage method that can set coefficients exactly to zero.

31

Why is lasso useful?

It performs variable selection.

32

What type of models does lasso produce?

Sparse models with fewer predictors.

33

What is the key difference between ridge and lasso?

Lasso can eliminate variables while ridge cannot.

34

How does lasso improve interpretability?

By removing irrelevant predictors.

35

What happens to coefficients as the tuning parameter increases in lasso?

More coefficients shrink to zero.
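
The soft-thresholding update inside coordinate descent is what produces exact zeros. A self-contained numpy sketch of the lasso (the helper names and simulated data are illustrative; a library implementation would be used in practice):

```python
import numpy as np

def soft_threshold(z, g):
    # Shrink z toward zero by g; snap to exactly zero inside [-g, g].
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1,
    # assuming the columns of X are standardized and y is centered.
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]       # partial residual
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)
y = y - y.mean()

for lam in [0.01, 0.2, 5.0]:
    print(lam, np.round(lasso_cd(X, y, lam), 3))
# As lam grows, more coefficients are set exactly to zero.
```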

36

What is a limitation of lasso with correlated predictors?

It may arbitrarily select one variable and drop others.

37

What is elastic net?

A method combining ridge and lasso penalties.

38

Why use elastic net?

To balance variable selection and handling correlated predictors.

39

What does the elastic net mixing parameter do?

It determines the relative weight of the ridge and lasso penalties.
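
Assuming scikit-learn is available, this mixing parameter corresponds to `l1_ratio` in sklearn's `ElasticNet` (the data below is simulated purely for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)

# l1_ratio is the mixing parameter: 1.0 gives a pure lasso penalty,
# 0.0 a pure ridge penalty; 0.5 blends the two equally.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(enet.coef_)   # coef on X[:, 0] is shrunk somewhat below 2
```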

40

When is ridge preferred over lasso?

When many predictors have small effects.

41

When is lasso preferred over ridge?

When the true model is sparse.

42

What is cross-validation used for?

Selecting the optimal tuning parameter.
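
A minimal numpy sketch of k-fold cross-validation over a grid of ridge tuning parameters (the helper names, the candidate grid, and the simulated data are choices for this illustration): the value with the smallest average held-out error is selected.

```python
import numpy as np

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(X, y, lam, k=5):
    # Average held-out MSE over k folds for one value of lam.
    n = X.shape[0]
    folds = np.array_split(np.arange(n), k)
    errs = []
    for val in folds:
        train = np.setdiff1d(np.arange(n), val)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[val] - X[val] @ b) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 30))              # p close to n: high variance
y = X[:, :3].sum(axis=1) + rng.normal(size=60)

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
errors = [cv_error(X, y, lam) for lam in lams]
best = lams[int(np.argmin(errors))]
print(best, [round(e, 2) for e in errors])
```

With p close to n, the near-unregularized end of the grid overfits and the heavily regularized end is badly biased, so an intermediate value typically wins.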

43

Why is tuning parameter selection important?

It determines model performance.

44

What is the bias-variance tradeoff?

Reducing variance increases bias and vice versa.

45

How does ridge affect bias and variance?

Increases bias but decreases variance.

46

Why can ridge outperform least squares?

It reduces test error when variance is high.

47

What is dimension reduction?

Transforming predictors into a smaller set of variables.

48

Why use dimension reduction?

To handle multicollinearity and high dimensionality.

49

What is the idea behind dimension reduction?

Combine information from predictors into new variables.

50

How is dimension reduction different from lasso?

Rather than dropping variables, it transforms all of them into a smaller set of new features.

51

What is principal components regression?

A method using principal components as predictors.

52

What is partial least squares?

A method that considers both predictors and response in reduction.

53

Why is dimension reduction useful with correlated predictors?

It preserves shared information instead of discarding variables.

54

What is the key benefit of regularization methods overall?

Improved prediction accuracy in high-dimensional settings.