ENGRI 1410 – Spring 2026 Formula List for the Prelim

A collection of important equations and concepts for the ENGRI 1410 exam in Spring 2026.

Last updated 6:36 AM on 4/7/26

24 Terms

1. Preactivation for neurons

The equation is z = w · x + b.

2. Decision boundary for a single neuron

The equation is w · x + b = 0.

3. Step activation

The output is y = {0, z ≤ 0; 1, z > 0}.

4. Sigmoid activation

The equation is σ(z) = 1 / (1 + e^{-z}).

5. Sigmoid derivative

The derivative is σ′(z) = σ(z)·(1 - σ(z)).
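The sigmoid formula and its derivative can be sanity-checked with a minimal Python sketch (standard library only; the function names are my own):

```python
import math

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)
```

At z = 0 this gives σ(0) = 0.5 and σ′(0) = 0.25, the derivative's maximum value.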

6. ReLU activation

The equation is ReLU(z) = max(0, z).

7. ReLU derivative

The derivative is piecewise: ReLU′(z) = {0, z < 0; 1, z > 0}.
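A matching sketch for ReLU and its derivative. The derivative is undefined at z = 0; returning 0 there is a common convention, assumed here:

```python
def relu(z):
    # ReLU(z) = max(0, z)
    return max(0.0, z)

def relu_prime(z):
    # 1 for z > 0, 0 for z < 0; the z == 0 case is a convention
    return 1.0 if z > 0 else 0.0
```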

8. Output of a linear neuron

The equation is y = Σ_j w_j h_j + a.

9. Mean squared error

The cost is C = 1/n Σ_{t=1}^n (y_t - ŷ_t)².
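The MSE formula maps directly to a short Python function (here y_true holds the targets y_t and y_pred the predictions ŷ_t):

```python
def mse(y_true, y_pred):
    # C = (1/n) * sum over t of (y_t - yhat_t)^2
    n = len(y_true)
    return sum((y - yhat) ** 2 for y, yhat in zip(y_true, y_pred)) / n
```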

10. Binary cross-entropy

The cost is C = - (1/n) Σ_{x} {y ln a + (1 - y) ln(1 - a)}.
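A sketch of binary cross-entropy over a batch; it assumes every activation a lies strictly between 0 and 1 so both logarithms are defined:

```python
import math

def binary_cross_entropy(ys, activations):
    # C = -(1/n) * sum over examples of [y ln a + (1 - y) ln(1 - a)]
    n = len(ys)
    return -sum(y * math.log(a) + (1 - y) * math.log(1 - a)
                for y, a in zip(ys, activations)) / n
```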

11. Softmax for class i

The formula is p_i = e^{z_i} / Σ_{j=1}^K e^{z_j}.

12. Softmax probabilities sum to one

The condition is Σ_{i=1}^K p_i = 1.
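The softmax formula and the sum-to-one condition can be checked together. Subtracting max(z) before exponentiating is a standard numerical-stability trick that leaves the result mathematically unchanged:

```python
import math

def softmax(z):
    # p_i = e^{z_i} / sum_j e^{z_j}; shift by max(z) for stability
    m = max(z)
    exps = [math.exp(zi - m) for zi in z]
    total = sum(exps)
    return [e / total for e in exps]
```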

13. Categorical Cross-Entropy (CCE)

The cost is C = -(1/n) Σ_x Σ_{k=1}^K y_k ln a_k.

14. One-hot targets CCE simplification

For a single example, the cost simplifies to C = -ln(p_c), where c is the index of the correct class.

15. Gradient descent update (scalar form)

The update rule is w_{k+1} = w_k - η (dC/dw (w_k)).
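The scalar update rule, iterated on a toy cost C(w) = (w - 3)² whose derivative is 2(w - 3) (my example, not from the course):

```python
def gradient_descent(dC_dw, w0, eta=0.1, steps=100):
    # w_{k+1} = w_k - eta * dC/dw(w_k)
    w = w0
    for _ in range(steps):
        w = w - eta * dC_dw(w)
    return w

# For C(w) = (w - 3)^2, the iterates converge to the minimum at w = 3.
```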

16. Backpropagation update rule

The update is θ ← θ - η (∂C/∂θ).

17. Standardization formula

The formula is x_{std} = (x - µ) / σ.
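Applied to a list of feature values, the standardization formula looks like this (population standard deviation assumed, dividing by n rather than n - 1):

```python
def standardize(xs):
    # x_std = (x - mu) / sigma, using the population mean and std
    n = len(xs)
    mu = sum(xs) / n
    sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return [(x - mu) / sigma for x in xs]
```

The result has mean 0 and standard deviation 1.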

18. L2-regularized cost

The generic form is C_{reg} = C_0 + (λ/2n) Σ_{w} w².

19. L1-regularized cost

The generic form is C_{reg} = C_0 + (λ/n) Σ_{w} |w|.

20. Max pooling

The output is a_{out} = max_{(i,j)∈W} a_{ij}.

21. Average pooling

The output is a_{out} = (1/|W|) Σ_{(i,j)∈W} a_{ij}.
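Both pooling formulas, with the window W represented as a list of rows of activations (a sketch of a single window, not a full pooling layer):

```python
def max_pool(window):
    # a_out = max of a_ij over (i, j) in W
    return max(a for row in window for a in row)

def avg_pool(window):
    # a_out = (1/|W|) * sum of a_ij over (i, j) in W
    flat = [a for row in window for a in row]
    return sum(flat) / len(flat)
```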

22. Basic RNN recurrence

The equation is h_t = f(w_x x_t + w_h h_{t-1} + b_x).

23. Basic RNN output equation

The equation is ŷ = f(w_0 h_t + b_0).
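The recurrence and output equations as one time step of a scalar RNN. The formulas leave f generic, so tanh is assumed for the hidden state and an identity output is used here (a common choice for regression):

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b_x):
    # h_t = f(w_x * x_t + w_h * h_{t-1} + b_x), with f = tanh assumed
    return math.tanh(w_x * x_t + w_h * h_prev + b_x)

def rnn_output(h_t, w_o, b_o):
    # yhat = f(w_o * h_t + b_o), with identity f assumed
    return w_o * h_t + b_o
```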

24. BPTT for many-to-many

The derivative is ∂L/∂w_x = Σ_{t=1}^T ( Σ_{k=t}^T ∂L_k/∂h_t ) f′(a_t) x_t.