accuracy
total correct / total predictions
if 95% of passengers died, predicting "died" every time gets 95% accuracy but catches zero survivors
(TP + TN) / (TP + TN + FP + FN)
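a minimal sketch of the accuracy trap above, assuming hypothetical counts (95 died, 5 survived) with "survived" as the positive class:

```python
# all-"died" predictor on an imbalanced dataset (hypothetical counts)
tp, tn, fp, fn = 0, 95, 0, 5  # it never flags a survivor, so TP = 0

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.95 -- looks great, yet recall for survivors is 0/5
```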
precision
of everything you predicted/flagged as positive, how many actually were positive. use when false positives are costly, like flagging a legitimate trade as fraudulent
TP / (TP + FP)
recall (sensitivity)
of all actual positives, how many did you catch. high recall = fewer misses.
use when false negatives are costly, like missing a client or a cancer diagnosis
TP / (TP + FN)
F1 Score
harmonic mean of precision and recall, use when you need to balance both.
2 × (precision × recall) / (precision + recall)
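the three metrics above can be computed straight from confusion counts; a sketch with hypothetical values:

```python
# precision, recall, and F1 from raw confusion counts (hypothetical values)
tp, fp, fn = 30, 10, 20

precision = tp / (tp + fp)  # 30/40 = 0.75
recall = tp / (tp + fn)     # 30/50 = 0.6
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(precision, recall, round(f1, 3))  # 0.75 0.6 0.667
```

note that F1 sits below the arithmetic mean (0.675): the harmonic mean punishes the weaker of the two.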
AUC-ROC
measures how well the model separates classes across all possible thresholds. 0.5 = random guessing, and 1 = perfect
confusion matrix
the 2×2 table of TP, FP, FN, TN behind everything
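that 2×2 table is just pair-counting; a sketch with toy labels:

```python
# build the 2x2 confusion matrix by counting (actual, predicted) pairs
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

matrix = [[tn, fp],   # actual 0 row: [TN, FP]
          [fn, tp]]   # actual 1 row: [FN, TP]
print(matrix)  # [[3, 1], [1, 3]]
```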
RMSE (root mean squared error)
average prediction error in the same units as the target
penalizes large errors heavily because of the squaring
big misses are much worse than small ones
MAE (mean absolute error)
treats all errors equally, more robust to outliers than RMSE
if RMSE is much higher than MAE, then you have large outliers in predictions
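a sketch of that RMSE-vs-MAE gap, using toy hypothetical errors with one big outlier:

```python
import math

# one large outlier inflates RMSE far more than MAE (toy errors)
errors = [1, 1, 1, 1, 10]

mae = sum(abs(e) for e in errors) / len(errors)              # 14/5 = 2.8
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))  # sqrt(104/5)

print(mae, round(rmse, 2))  # 2.8 4.56 -- RMSE >> MAE flags the outlier
```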
R²
proportion of variance explained by the x variables.
1 = perfect, 0 = no better than predicting the mean
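R² falls out of two sums of squares; a sketch with toy hypothetical values:

```python
# R^2 = 1 - SS_res / SS_tot (toy values)
y      = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.5, 5.5, 7.0, 9.0]

mean_y = sum(y) / len(y)                                    # 6.0
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))   # 0.25 + 0.25 = 0.5
ss_tot = sum((yi - mean_y) ** 2 for yi in y)                # 9 + 1 + 1 + 9 = 20

r2 = 1 - ss_res / ss_tot
print(r2)  # 0.975
```

predicting mean_y for everything gives ss_res == ss_tot, hence R² = 0.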
roc curve
AUC = area under the curve, 0.5 means the model is guessing randomly and 1 means perfect
Interpretation of AUC:
1.0: Perfect classifier.
0.9–0.99: Excellent discrimination.
0.8–0.89: Good discrimination.
0.7–0.79: Fair discrimination.
< 0.7: Poor/Weak discrimination.
0.5: No better than random chance.
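AUC also equals the probability that a random positive scores above a random negative (the pairwise / Mann-Whitney view); a sketch with toy hypothetical scores:

```python
# AUC as the fraction of (positive, negative) pairs the model ranks correctly
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]

pos = [s for s, l in zip(scores, labels) if l == 1]
neg = [s for s, l in zip(scores, labels) if l == 0]

# count a win when the positive outscores the negative, half a win on ties
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(auc)  # 8/9 ~= 0.889 -- one positive (0.4) ranks below one negative (0.7)
```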
Bias-variance tradeoff
simple model = high bias (underfits), complex model = high variance (overfits)
Regularization
L1 (Lasso) drives coefficients to exactly zero (feature selection), L2 (Ridge) shrinks them toward zero without eliminating them (prevents overfitting)
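why L1 zeroes coefficients while L2 only shrinks them can be seen in their one-step update rules (an illustrative sketch with hypothetical values, not a full solver):

```python
# one shrinkage step on a small weight under each penalty (toy values)
lam = 1.0   # penalty strength
w = 0.25    # a small coefficient

# L1 (lasso): soft-thresholding -- weights smaller than lam snap to exactly 0
w_l1 = max(abs(w) - lam, 0.0) * (1.0 if w >= 0 else -1.0)

# L2 (ridge): multiplicative shrinkage -- weights get smaller, never exactly 0
w_l2 = w / (1.0 + lam)

print(w_l1, w_l2)  # 0.0 0.125 -- lasso kills the feature, ridge keeps it
```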