Decision Trees - Predictive Analytics Final Exam

43 Terms

1

What is a node?

A decision/test on a predictor (e.g., is it raining?)

2

What is a branch?

The outcome of the test (left branch = condition true, right branch = condition false), e.g., the yes/no outcomes of "is it windy?"

3

What is a leaf node?

Final outcome (e.g., bring a raincoat)

4

Each leaf node corresponds to a ______

decision rule

5

Each decision is a ___ split on the sample space

binary

6

What does splitting on a numerical variable look like?

X > 5 and X ≤ 5

7

What does splitting on a categorical variable look like?

X ∈ {A} and X ∈ {B, C}, or

X ∈ {A, B} and X ∈ {C}
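
To make the two split types concrete, here is a minimal Python sketch (helper names are mine, not from the deck) that enumerates the candidate binary splits for a numeric and a categorical predictor:

```python
from itertools import combinations

def numeric_split_candidates(values):
    """Candidate thresholds t for splits X <= t vs. X > t (midpoints of sorted values)."""
    vs = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(vs, vs[1:])]

def categorical_split_candidates(levels):
    """Each way to split the levels into two non-empty groups; the first
    level is pinned to the left group so each partition appears only once."""
    first, rest = levels[0], levels[1:]
    splits = []
    for r in range(len(rest)):  # r = how many extra levels join the left group
        for extra in combinations(rest, r):
            left = {first, *extra}
            right = set(rest) - set(extra)
            splits.append((left, right))
    return splits

print(numeric_split_candidates([1, 5, 7]))            # [3.0, 6.0]
print(categorical_split_candidates(["A", "B", "C"]))  # {A}|{B,C}, {A,B}|{C}, {A,C}|{B}
```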

8

What is the intuition behind the Gini index?

It measures how mixed or impure a set of observations is.

9

What is the formula for the Gini index?

Gini = 1 − ∑_{k=1}^{m} p_k²

10

What does m represent in the Gini formula?

The number of classes

11

What does p_k represent in the Gini formula?

The proportion of observations in class k
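
A minimal Python sketch of this formula (an illustration in plain Python; the helper name is mine, not the course's):

```python
from collections import Counter

def gini(labels):
    """Gini = 1 - sum_k p_k^2, where p_k is the proportion of observations in class k."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())
```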

12

What type of measure is the Gini index?

An impurity measure

13

What is the best possible Gini value?

0 (all observations belong to the same class)

14

What is the worst possible Gini value?

1 − 1/m (all classes equally represented).

15

What does a Gini index of 0 mean?

The node is perfectly pure

16

What does a higher Gini index indicate?

More class mixing (more impurity).
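
Plugging values into the gini() sketch above reproduces the best and worst cases from these cards:

```python
print(gini(["yes"] * 4))                 # 0.0    -> perfectly pure node (best value)
print(gini(["yes", "yes", "no", "no"]))  # 0.5    -> worst value for m = 2: 1 - 1/2
print(gini(["a", "b", "c"]))             # ~0.667 -> worst value for m = 3: 1 - 1/3
```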

17

How do you evaluate a potential split in a decision tree?

Compute the Gini index for the left and right regions.

18

How are the two region Gini values combined?

By taking their weighted average.

19

What weights are used in the weighted Gini average?

The proportion of observations in each region.

20

What do you compare the weighted Gini against?

The Gini index before the split.

21

When should you make a split?

When there is a large decrease in Gini impurity.

22

What is the goal of splitting in a classification tree?

To reduce impurity as much as possible.
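
Putting the last few cards together, a minimal sketch of evaluating one candidate split (reusing the gini() helper above; the function name is my own):

```python
def gini_decrease(left, right):
    """Parent Gini minus the weighted average of the two regions' Ginis.
    The weights are each region's share of the observations."""
    parent = list(left) + list(right)
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted

# A split that separates the classes perfectly gives a large decrease: make it.
print(gini_decrease(["yes", "yes", "yes"], ["no", "no"]))  # 0.48
```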

23

What impurity measure is used in regression trees?

Mean Squared Error (MSE)

24

What replaces the Gini index in regression trees?

Sum of squared differences from the mean

25

What value is stored in a regression tree leaf node?

The average of all observations in that region

26

How does this differ from classification trees?

Classification uses the majority class instead of an average
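
The regression analogue, in the same sketch style (helper names are mine): the leaf predicts the region mean, and impurity is the sum of squared differences from that mean.

```python
def sse(y):
    """Sum of squared differences from the region mean (the regression impurity)."""
    if not y:
        return 0.0
    mean = sum(y) / len(y)  # this mean is also the leaf node's prediction
    return sum((v - mean) ** 2 for v in y)

def sse_decrease(left, right):
    """How much a candidate split reduces the squared error."""
    return sse(left + right) - (sse(left) + sse(right))

print(sse_decrease([1.0, 1.2, 0.9], [5.0, 5.3]))  # large decrease: the region means differ a lot
```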

27

What is the natural stopping point of a decision tree?

100% purity in each leaf node

28

Why is reaching 100% purity a problem?

It likely causes overfitting.

29

What does overfitting lead to?

Low predictive accuracy on new data

30

Why are fully grown trees problematic?

They are too complex and overfit the data

31

When do we typically stop growing a decision tree?

When the tree becomes large: for example, when leaves contain few observations or further splits barely reduce impurity.

32

Why stop when leaf nodes have few observations?

Small leaf sizes increase overfitting risk.

33

What does a small decrease in impurity indicate?

The split is not very useful.

34

What is the goal of stopping early?

To balance model complexity and accuracy.
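
These stopping rules map onto growth-limiting hyperparameters in common implementations; for example, in scikit-learn (an assumption on my part; the deck does not name a library):

```python
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(
    min_samples_leaf=5,          # stop: leaves with few observations risk overfitting
    min_impurity_decrease=0.01,  # stop: skip splits with only a small impurity decrease
    max_leaf_nodes=20,           # cap tree size outright
)
```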

35

What is Option 1 for building a smaller tree?

Only split if impurity reduction exceeds a high threshold (but this is short-sighted and may block good later splits)

36

What is Option 2 for building a smaller tree?

Grow a large tree then prune it back (allows flexibility and better final performance)

37

Why might a full decision tree perform poorly?

It is too complex and overfits the data, so we prune it back to reduce the number of splits.

38

How does pruning affect training and testing accuracy?

It usually lowers the fit on training data but may improve generalization to test data.

39

What is a benefit of a smaller tree?

Better interpretability
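
A hedged sketch of the grow-then-prune workflow (Option 2) using scikit-learn's cost-complexity pruning, again an assumption about the library; larger ccp_alpha prunes more aggressively:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X_train, y_train = make_classification(n_samples=200, random_state=0)  # toy stand-in data

# Grow the full tree, then extract the sequence of pruned subtrees it implies.
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)
pruned_trees = [DecisionTreeClassifier(ccp_alpha=a, random_state=0).fit(X_train, y_train)
                for a in path.ccp_alphas]
```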

40

What does cross-validation help select?

The optimal number of leaf (terminal) nodes

41

Why can’t we rely only on training data?

It leads to overfitting.

42

What does cv_tree() do automatically?

Performs the training–validation split.

43

What is the goal of cross-validation in trees?

Balance bias and variance.
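
Continuing the pruning sketch above, the tree size can be chosen by cross-validation. The deck's cv_tree() presumably automates this step; a scikit-learn equivalent would be (my sketch, not the deck's function):

```python
from sklearn.model_selection import cross_val_score

# Cross-validated accuracy at each pruning level; pick the alpha (and hence
# the number of leaf nodes) that balances bias and variance best.
cv_scores = [cross_val_score(DecisionTreeClassifier(ccp_alpha=a, random_state=0),
                             X_train, y_train, cv=5).mean()
             for a in path.ccp_alphas]
best_alpha = path.ccp_alphas[cv_scores.index(max(cv_scores))]
```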