A collection of important equations and concepts for the ENGRI 1410 exam in Spring 2026.
Preactivation for neurons
The equation is z = w · x + b.
Decision boundary for a single neuron
The equation is w · x + b = 0.
Step activation
The output is y = {0, z ≤ 0; 1, z > 0}.
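The three cards above can be sketched together as a single step-activated neuron in NumPy (the weights, bias, and input below are illustrative, not from the course):

```python
import numpy as np

def step(z):
    # Step activation: 0 for z <= 0, 1 for z > 0
    return np.where(z > 0, 1, 0)

# Illustrative parameters and input
w = np.array([2.0, -1.0])
b = -0.5
x = np.array([1.0, 0.5])

z = np.dot(w, x) + b  # preactivation z = w . x + b
y = step(z)           # here z = 1.0 > 0, so the neuron fires
```

Points x satisfying w · x + b = 0 lie exactly on the decision boundary; this input sits on the positive side, so the output is 1.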
Sigmoid activation
The equation is σ(z) = 1 / (1 + e^{-z}).
Sigmoid derivative
The derivative is σ′(z) = σ(z)·(1 - σ(z)).
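Both the sigmoid and its derivative can be checked numerically with a short NumPy sketch (function names are mine):

```python
import numpy as np

def sigmoid(z):
    # sigma(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)
```

At z = 0 the sigmoid is 0.5, so the derivative attains its maximum value of 0.25 there.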
ReLU activation
The equation is ReLU(z) = max(0, z).
ReLU derivative
The derivative is piecewise: ReLU′(z) = {0, z < 0; 1, z > 0}; it is undefined at z = 0, where implementations conventionally use either value.
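A minimal NumPy sketch of ReLU and its piecewise derivative (this sketch returns 0 at z = 0, one common convention):

```python
import numpy as np

def relu(z):
    # ReLU(z) = max(0, z)
    return np.maximum(0.0, z)

def relu_prime(z):
    # 0 for z < 0, 1 for z > 0; 0 chosen at z = 0
    return np.where(z > 0.0, 1.0, 0.0)
```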
Output of a linear neuron
The equation is y = Σ_j w_j h_j + a.
Mean squared error
The cost is C = 1/n Σ_{t=1}^n (y_t - ŷ_t)².
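The MSE formula translates directly into a one-line NumPy function (the function name is mine):

```python
import numpy as np

def mse(y, y_hat):
    # C = (1/n) * sum_t (y_t - y_hat_t)^2
    return np.mean((y - y_hat) ** 2)
```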
Binary cross-entropy
The cost is C = - (1/n) Σ_{x} {y ln a + (1 - y) ln(1 - a)}.
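A vectorized NumPy sketch of binary cross-entropy, averaging the per-example terms over n examples (assumes activations a strictly between 0 and 1):

```python
import numpy as np

def bce(y, a):
    # C = -(1/n) * sum over examples of [y ln a + (1 - y) ln(1 - a)]
    return -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))
```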
Softmax for class i
The formula is p_i = e^{z_i} / Σ_{j=1}^K e^{z_j}.
Softmax probabilities sum to one
The condition is Σ_{i=1}^K p_i = 1.
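Both softmax cards can be verified with a short NumPy sketch; subtracting max(z) before exponentiating is a standard numerical-stability trick that leaves the probabilities unchanged:

```python
import numpy as np

def softmax(z):
    # p_i = e^{z_i} / sum_j e^{z_j}
    e = np.exp(z - np.max(z))  # shift by max(z) to avoid overflow
    return e / e.sum()
```

The outputs always sum to one, since each term is divided by the common normalizer.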
Categorical Cross-Entropy (CCE)
The cost is C = - (1/n) Σ_x Σ_{k=1}^K y_k ln a_k.
One-hot targets CCE simplification
The cost simplifies to C = -ln(p_c).
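For a single example, the inner sum of CCE can be sketched in NumPy; with a one-hot target only the correct class c survives, giving -ln(p_c):

```python
import numpy as np

def cce(y_onehot, p):
    # C = -sum_k y_k ln p_k; collapses to -ln(p_c) for one-hot y
    return -np.sum(y_onehot * np.log(p))
```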
Gradient descent update (scalar form)
The update rule is w_{k+1} = w_k - η (dC/dw (w_k)).
Backpropagation update rule
The update is θ ← θ - η (∂C/∂θ).
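Both update rules have the same shape: subtract the learning rate times the gradient. A minimal sketch in scalar form, applied to the illustrative cost C(w) = (w - 3)², whose gradient is 2(w - 3):

```python
def gradient_descent(dC, w0, eta, steps):
    # w_{k+1} = w_k - eta * dC/dw(w_k)
    w = w0
    for _ in range(steps):
        w = w - eta * dC(w)
    return w

# Illustrative cost C(w) = (w - 3)^2, minimized at w = 3
w_min = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0, eta=0.1, steps=100)
```

Each step shrinks the distance to the minimum by a factor of 0.8 here, so 100 steps land essentially on w = 3.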
Standardization formula
The formula is x_{std} = (x - µ) / σ.
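A one-line NumPy sketch of standardization; after the transform the data has mean 0 and standard deviation 1:

```python
import numpy as np

def standardize(x):
    # x_std = (x - mu) / sigma
    return (x - x.mean()) / x.std()
```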
L2-regularized Cost
The generic form is C_{reg} = C_0 + (λ/2n) Σ_{w} w².
L1-regularized Cost
The generic form is C_{reg} = C_0 + (λ/n) Σ_{w} |w|.
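The two penalty terms added to C_0 can be sketched in NumPy for a single weight array (function names are mine; biases are conventionally excluded from the sum):

```python
import numpy as np

def l2_penalty(weights, lam, n):
    # (lambda / 2n) * sum of squared weights
    return (lam / (2.0 * n)) * np.sum(weights ** 2)

def l1_penalty(weights, lam, n):
    # (lambda / n) * sum of absolute weights
    return (lam / n) * np.sum(np.abs(weights))
```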
Max pooling
The output is a_{out} = max_{(i,j)∈W} a_{ij}.
Average pooling
The output is a_{out} = (1/|W|) Σ_{(i,j)∈W} a_{ij}.
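Both pooling operations can be sketched in NumPy for the common case of non-overlapping 2×2 windows with stride 2 (assumes even height and width; the function name is mine):

```python
import numpy as np

def pool_2x2(a, mode="max"):
    # Split a (h, w) map into 2x2 windows, then reduce each window
    h, w = a.shape
    windows = a.reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return windows.max(axis=(1, 3))   # max pooling
    return windows.mean(axis=(1, 3))      # average pooling
```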
Basic RNN recurrence
The equation is h_t = f(w_x x_t + w_h h_{t-1} + b_x).
Basic RNN output equation
The equation is ŷ = f(w_0 h_t + b_0).
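A minimal scalar sketch of one recurrence step and the readout, taking f as tanh for the hidden state and the identity for the output (both choices are assumptions, not from the cards):

```python
import numpy as np

def rnn_step(x_t, h_prev, w_x, w_h, b_x):
    # h_t = f(w_x x_t + w_h h_{t-1} + b_x), with f = tanh here
    return np.tanh(w_x * x_t + w_h * h_prev + b_x)

def rnn_output(h_t, w_o, b_o):
    # y_hat = f(w_o h_t + b_o), with f = identity in this sketch
    return w_o * h_t + b_o
```

Running `rnn_step` over a sequence, feeding each h_t back in as h_prev, unrolls the recurrence through time.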
BPTT for many-to-many
The derivative is ∂L/∂w_x = Σ_{t=1}^T ( Σ_{k=t}^T ∂L_k/∂h_t ) f′(a_t) x_t.