L12 - Deep Learning for Time Series Forecasting

Last updated 7:36 PM on 4/14/26
75 Terms

1
New cards

What is time-series data?

Data indexed in temporal order where dependence exists across observations.

2
New cards

What assumption distinguishes time-series from iid data?

Observations are not independent; temporal dependence exists.

3
New cards

What is the forecasting constraint in time series?

Only past and present information can be used to predict future values.

4
New cards

What is an AR(L) model?

A linear model where the current value is a linear combination of the previous L lagged values.
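As a minimal sketch of this definition, here is a one-step AR(2) forecast. The coefficients `c`, `phi1`, and `phi2` are illustrative made-up values, not parameters estimated from data:

```python
# One-step AR(2) forecast: x_t = c + phi1*x_{t-1} + phi2*x_{t-2}
# Coefficients are illustrative, not estimated.
def ar2_forecast(series, c=0.1, phi1=0.6, phi2=0.3):
    """Predict the next value as a linear combination of the last two lags."""
    return c + phi1 * series[-1] + phi2 * series[-2]

history = [1.0, 1.2, 1.1, 1.3]
print(ar2_forecast(history))  # uses only past observations 1.1 and 1.3
```

Note the forecasting constraint from the earlier card: the function sees only past and present values, never future ones.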

5
New cards

What is the role of lag length L?

Defines the memory window of the model.

6
New cards

What is the error term in AR models?

A stochastic disturbance capturing unexplained variation.

7
New cards

What type of relationships do AR models capture?

Only linear dependencies.

8
New cards

Why do AR models have high bias?

They impose a strict linear structure on the data.

9
New cards

Why does polynomial expansion in AR increase complexity?

It introduces interaction and nonlinear terms, so the number of features grows combinatorially with lag length and polynomial degree.

10
New cards

What is the dimensionality issue with large L?

Number of predictors grows rapidly with lag length and interactions.
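To make the growth concrete, the number of polynomial terms (excluding the constant) built from L lags up to a given degree is the stars-and-bars count C(L + d, d) − 1, which can be tabulated directly:

```python
from math import comb

def n_poly_features(L, degree):
    """Number of polynomial terms (excluding the constant) from L lagged inputs."""
    return comb(L + degree, degree) - 1

for L in (5, 10, 20):
    print(L, n_poly_features(L, 2), n_poly_features(L, 3))
```

Even at degree 3, twenty lags already produce 1,770 predictors, which is the dimensionality issue the card describes.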

11
New cards

What is a feed-forward neural network in time series?

A mapping from fixed lag inputs to output using nonlinear transformations.

12
New cards

What is the structural limitation of feed-forward models for sequences?

They do not preserve temporal order beyond fixed inputs.

13
New cards

Why do feed-forward networks require fixed lag length?

Input dimension must be predefined.

14
New cards

Why do feed-forward models scale poorly with large L?

Parameter count increases with number of lagged inputs.

15
New cards

Why can feed-forward networks overfit in time series?

High parameterization increases estimation variance.

16
New cards

What is a Recurrent Neural Network (RNN)?

A neural network that processes sequential data using a recursive hidden state.

17
New cards

What is the recurrence relation in RNNs?

Current hidden state depends on current input and previous hidden state.
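A minimal sketch of this recurrence, using a one-dimensional hidden state and illustrative (untrained) weights:

```python
import math

# h_t = tanh(w_x * x_t + w_h * h_{t-1} + b); weights are illustrative constants.
def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(xs, h0=0.0):
    h = h0
    for x in xs:
        h = rnn_step(x, h)  # same weights reused at every time step
    return h  # final hidden state summarizes the whole sequence

print(run_rnn([1.0, 0.5, -0.2]))
```

The same `rnn_step` weights are applied at every position, which is the weight sharing and fixed-size-summary property covered in the following cards.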

18
New cards

What is the hidden state?

A latent vector summarizing past sequence information.

19
New cards

What is the key advantage of weight sharing in RNNs?

Reduces parameter count and improves generalization.

20
New cards

What is the dimensionality benefit of RNNs?

Compresses long sequences into fixed-size hidden states.

21
New cards

How do RNNs differ from AR models?

They are nonlinear and sequential rather than fixed linear combinations.

22
New cards

How do RNNs differ from feed-forward networks?

They maintain memory across time steps.

23
New cards

What is the role of activation functions in RNNs?

Introduce nonlinearity and control numerical stability.

24
New cards

Why is tanh commonly used?

It bounds outputs between −1 and 1.

25
New cards

What is the effect of tanh on gradients?

Its derivative is less than or equal to 1, contributing to gradient shrinkage.
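The derivative of tanh can be checked directly: d/dz tanh(z) = 1 − tanh(z)², which equals 1 only at z = 0 and shrinks rapidly away from it:

```python
import math

# d/dz tanh(z) = 1 - tanh(z)^2, at most 1 (equality only at z = 0)
def tanh_grad(z):
    return 1.0 - math.tanh(z) ** 2

print([round(tanh_grad(z), 3) for z in (0.0, 1.0, 2.0, 3.0)])
```

Multiplying many such factors (each ≤ 1) during backpropagation is the gradient-shrinkage mechanism discussed in the vanishing-gradient cards below.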

26
New cards

What is Backpropagation Through Time (BPTT)?

Gradient computation method that unfolds the RNN across time and applies chain rule.

27
New cards

Why must gradients be propagated through time?

Each hidden state depends recursively on previous states.

28
New cards

What is the computational cost of BPTT?

Scales with sequence length L.

29
New cards

What are vanishing gradients?

Gradients decay exponentially toward zero as they propagate backward.

30
New cards

What are exploding gradients?

Gradients grow exponentially and destabilize training.

31
New cards

What mathematical cause leads to vanishing gradients?

Repeated multiplication by values less than 1 (e.g., derivatives of tanh).

32
New cards

What mathematical cause leads to exploding gradients?

Repeated multiplication by values greater than 1.
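Both failure modes can be demonstrated with one toy calculation: a backpropagated gradient over T steps behaves roughly like r**T, where r stands in for the combined recurrent weight and activation derivative (an illustrative simplification):

```python
# Gradient magnitude after T backward steps, each multiplying by factor r.
def gradient_after(r, T):
    g = 1.0
    for _ in range(T):
        g *= r
    return g

print(gradient_after(0.9, 50))  # shrinks toward zero: vanishing
print(gradient_after(1.1, 50))  # grows without bound: exploding
```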

33
New cards

What is the effect of vanishing gradients on learning?

Early time steps receive negligible updates.

34
New cards

What is the effect of exploding gradients on optimization?

Parameter updates become unstable and diverge.

35
New cards

What is the long-term dependency problem?

Inability to capture relationships between distant time steps.

36
New cards

Why do standard RNNs fail on long sequences?

Gradient signal deteriorates over time.

37
New cards

What is a Long Short-Term Memory (LSTM) network?

An RNN variant designed to preserve long-term dependencies using gated memory.

38
New cards

What is the cell state in LSTM?

A persistent memory vector that carries information across time.

39
New cards

Why is the cell state effective?

It enables near-linear gradient flow.

40
New cards

What are gates in LSTM?

Sigmoid-based mechanisms controlling information flow.

41
New cards

What is the range of gate outputs?

Values between 0 and 1.

42
New cards

What is the forget gate?

Controls how much past information is retained.

43
New cards

What is the input gate?

Controls how much new information is written.

44
New cards

What is the candidate state?

Proposed new content for the cell state using tanh transformation.

45
New cards

What is the update rule for cell state?

Combination of retained past and new candidate information.

46
New cards

Why does additive updating help gradients?

Avoids repeated multiplication, preventing decay.

47
New cards

What is the output gate?

Controls how much of the cell state is exposed as hidden state.

48
New cards

What is the relationship between hidden state and cell state?

Hidden state is a filtered version of the cell state.
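The gate mechanics in the preceding cards can be sketched as a single scalar LSTM step. Real LSTMs use per-gate weight matrices over the concatenated input and hidden state; the scalars `w_f`, `w_i`, `w_c`, `w_o` here are illustrative stand-ins:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w_f=0.4, w_i=0.6, w_c=0.5, w_o=0.7):
    z = x_t + h_prev                 # shared pre-activation input, for brevity
    f = sigmoid(w_f * z)             # forget gate: how much past to retain
    i = sigmoid(w_i * z)             # input gate: how much new info to write
    c_tilde = math.tanh(w_c * z)     # candidate state
    o = sigmoid(w_o * z)             # output gate: how much state to expose
    c = f * c_prev + i * c_tilde     # additive update -> gradient-friendly
    h = o * math.tanh(c)             # hidden state = filtered cell state
    return h, c

h, c = lstm_step(1.0, 0.0, 0.0)
```

The last two lines encode the two key cards: the cell state is updated additively rather than multiplicatively, and the hidden state is a gated, filtered view of it.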

49
New cards

What is a GRU (Gated Recurrent Unit)?

A simplified gated RNN that merges cell and hidden states.

50
New cards

What is the update gate in GRU?

Controls interpolation between previous state and candidate state.

51
New cards

What is the reset gate in GRU?

Controls how much past information contributes to candidate computation.

52
New cards

Why does GRU have fewer parameters than LSTM?

It combines multiple gates and removes separate cell state.
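For comparison with the LSTM, a scalar GRU step (again with illustrative weights): the update gate interpolates between the previous state and the candidate, the reset gate scales how much of the past feeds the candidate, and there is no separate cell state:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x_t, h_prev, w_z=0.5, w_r=0.5, w_h=0.5):
    z = sigmoid(w_z * (x_t + h_prev))              # update gate
    r = sigmoid(w_r * (x_t + h_prev))              # reset gate
    h_tilde = math.tanh(w_h * (x_t + r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # interpolate old vs. new

print(gru_step(1.0, 0.2))
```

Two gates and one state, versus the LSTM's three gates plus a cell state, is why the GRU has fewer parameters.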

53
New cards

How do RNNs handle multiple predictors?

Inputs at each time step are vectors of features.

54
New cards

What happens to parameter count when sequence length increases in RNNs?

It remains constant due to weight sharing.

55
New cards

Why do feed-forward models have higher variance than RNNs for long sequences?

They require separate parameters for each lag.

56
New cards

What is autocorrelation?

Correlation between a variable and its lagged values.

57
New cards

Why is autocorrelation useful?

It indicates predictive structure in time series.
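A sample lag-k autocorrelation can be computed in a few lines; values near ±1 indicate strong predictive structure at that lag:

```python
# Sample lag-k autocorrelation of a series.
def autocorr(xs, k):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t - k] - mean) for t in range(k, n))
    return cov / var

series = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2]
print(round(autocorr(series, 1), 3))  # strong positive lag-1 dependence
```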

58
New cards

What is a lag window?

A fixed-length subsequence used as model input.

59
New cards

How are training samples constructed for RNNs?

By sliding a window over the time series to form input-output pairs.

60
New cards

Why can a single time series produce many training samples?

Each overlapping subsequence is treated as a separate observation.
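The sliding-window construction from the last two cards can be sketched directly — each overlapping window of length L becomes one input, paired with the value that follows it:

```python
# Turn one series into (window, target) training pairs via a sliding window.
def make_samples(series, L):
    return [(series[t - L:t], series[t]) for t in range(L, len(series))]

pairs = make_samples([10, 11, 12, 13, 14], L=2)
print(pairs)  # [([10, 11], 12), ([11, 12], 13), ([12, 13], 14)]
```

A series of length n with window length L yields n − L overlapping samples, which is why a single series can supply many training observations.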

61
New cards

What is sequence-to-one mapping?

A sequence input produces a single output.

62
New cards

What is sequence-to-sequence mapping?

A sequence input produces a sequence output.

63
New cards

How is text modeled in RNNs?

As sequences of word embeddings.

64
New cards

Why is padding required in text models?

To ensure uniform input length.

65
New cards

What is masking in sequence models?

Ignoring padded elements during forward and backward passes.
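Padding and masking go together: shorter sequences are extended to a common length with a padding token (`PAD = 0` here is a hypothetical token id), and a parallel mask marks which positions are real so the model can ignore the rest:

```python
PAD = 0  # hypothetical padding token id

def pad_and_mask(seqs, max_len):
    """Pad each sequence to max_len and mark real positions with 1, padding with 0."""
    padded = [s + [PAD] * (max_len - len(s)) for s in seqs]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in seqs]
    return padded, mask

padded, mask = pad_and_mask([[5, 3], [7, 2, 9]], max_len=3)
print(padded)  # [[5, 3, 0], [7, 2, 9]]
print(mask)    # [[1, 1, 0], [1, 1, 1]]
```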

66
New cards

Bias-variance tradeoff in AR models?

High bias, low variance due to simplicity.

67
New cards

Bias-variance tradeoff in neural networks?

Low bias, high variance due to flexibility.

68
New cards

How do RNNs improve bias?

They model nonlinear temporal dependencies.

69
New cards

How do RNNs control variance?

Through parameter sharing across time steps.

70
New cards

Why can RNNs still overfit?

Large hidden state size increases model complexity.

71
New cards

What determines RNN model capacity?

Number of hidden units K and depth of sequence processing.

72
New cards

What is the tradeoff in choosing hidden size K?

Larger K reduces bias but increases variance.

73
New cards

What is the role of loss function in RNN training?

Measures prediction error across sequences.

74
New cards

Why is squared error commonly used?

It penalizes large deviations and is differentiable.

75
New cards

What is the training objective in RNNs?

Minimize loss over all sequences using gradient descent.
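The last three cards — squared-error loss, its differentiability, and gradient-descent minimization — fit in one toy setup. Here the "model" is a single scalar AR(1) coefficient `phi`, fitted by repeated gradient steps (learning rate and data are illustrative):

```python
# Mean squared error of one-step forecasts x_t ~ phi * x_{t-1}.
def mse(phi, series):
    errs = [(series[t] - phi * series[t - 1]) ** 2 for t in range(1, len(series))]
    return sum(errs) / len(errs)

# One gradient-descent step on phi (derivative of the mean squared error).
def grad_step(phi, series, lr=0.1):
    g = sum(-2 * series[t - 1] * (series[t] - phi * series[t - 1])
            for t in range(1, len(series))) / (len(series) - 1)
    return phi - lr * g

series = [1.0, 0.9, 0.8, 0.75, 0.7]
phi = 0.0
for _ in range(100):
    phi = grad_step(phi, series)
print(round(phi, 3))  # converges toward the least-squares value of phi
```

Because squared error is differentiable in `phi`, each step has a closed-form gradient, and the iterates settle at the loss-minimizing coefficient.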