Confusion Matrix
A tool used to measure the performance of a classification model by comparing actual and predicted class labels.
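A confusion matrix can be built by simply counting (actual, predicted) label pairs. A minimal plain-Python sketch (the labels and data here are illustrative, not from the card set):

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    # Count each (actual, predicted) pair, then lay the counts
    # out as a nested dict: rows = actual class, cols = predicted class.
    counts = Counter(zip(actual, predicted))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}

actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]
cm = confusion_matrix(actual, predicted, ["spam", "ham"])
# cm["spam"]["spam"] == 2  (true positives, taking "spam" as the positive class)
# cm["spam"]["ham"]  == 1  (false negatives)
```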
Classification Accuracy
The ratio of correctly predicted instances to the total instances in a classification task.
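That ratio is a one-liner; a small sketch with made-up labels:

```python
def accuracy(actual, predicted):
    # Fraction of instances whose predicted label matches the actual label.
    correct = sum(a == p for a, p in zip(actual, predicted))
    return correct / len(actual)

# 3 of 4 predictions match the actual labels:
acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # → 0.75
```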
Prediction Error
The difference between the predicted value and the actual value, often used in numerical predictions.
Absolute Error
The absolute difference between the predicted value and the actual value.
Mean Absolute Error (MAE)
The average of the absolute errors between predicted values and actual values.
Mean Square Error (MSE)
The average of the squares of the errors between predicted values and actual values.
Relative Absolute Error
The total absolute error divided by the total absolute error of a naive predictor that always predicts the mean of the actual values, often expressed as a percentage.
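The three error averages above differ only in how each per-instance error is aggregated. A plain-Python sketch of MAE, MSE, and relative absolute error on illustrative numbers:

```python
def mae(actual, predicted):
    # Mean of |actual - predicted| over all instances.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    # Mean of squared errors; squaring penalizes large errors more heavily.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rae(actual, predicted):
    # Total absolute error relative to a naive model that always
    # predicts the mean of the actual values.
    mean_a = sum(actual) / len(actual)
    total_err = sum(abs(a - p) for a, p in zip(actual, predicted))
    naive_err = sum(abs(a - mean_a) for a in actual)
    return total_err / naive_err

actual, predicted = [3, 5, 2, 7], [2.5, 5, 3, 8]
# errors: 0.5, 0, 1, 1 → MAE = 0.625, MSE = 0.5625
```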
Receiver Operating Characteristic (ROC) Curve
A graphical representation of a classification model's true positive rate versus its false positive rate.
T-test
A statistical test used to determine if there is a significant difference between the means of two groups.
K-fold Cross-validation
A technique for assessing the performance of a model by splitting the dataset into K subsets, then training on K−1 of them and testing on the remaining one, repeated K times so that each subset serves as the test set exactly once.
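The splitting itself can be sketched in a few lines of plain Python. This version makes contiguous folds over instance indices (real implementations usually shuffle first; that step is omitted here for clarity):

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k near-equal contiguous folds and
    # yield (train_indices, test_indices) for each of the k rounds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(k_fold_indices(10, 3))
# 3 rounds; each index appears in exactly one test set
```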
True Positive Rate
The ratio of true positives to the actual number of positives in a dataset.
False Positive Rate
The ratio of false positives to the total number of actual negatives.
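Both rates read directly off the confusion matrix counts; a tiny sketch with illustrative counts:

```python
def rates(tp, fn, fp, tn):
    # TPR = true positives over all actual positives (also called recall);
    # FPR = false positives over all actual negatives.
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return tpr, fpr

tpr, fpr = rates(tp=40, fn=10, fp=5, tn=45)  # → (0.8, 0.1)
```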
Statistical Significance
A measure of whether an observed effect is likely to be genuine or if it could have occurred by chance.
Hypothesis Testing
A method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis.
Critical Value
A threshold that determines when to reject the null hypothesis in a statistical test.
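The last three cards fit together: hypothesis testing computes a test statistic (such as t) and rejects the null hypothesis when it exceeds the critical value. A sketch of a pooled-variance two-sample t statistic in plain Python (the data and the equal-variance assumption are illustrative):

```python
import math

def two_sample_t(x, y):
    # Pooled-variance two-sample t statistic (assumes equal variances):
    # t = (mean_x - mean_y) / sqrt(sp^2 * (1/nx + 1/ny))
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variance of x
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)   # sample variance of y
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

t = two_sample_t([5.1, 4.9, 5.3, 5.0, 5.2], [4.6, 4.8, 4.5, 4.7, 4.9])
# df = 8 here; for a two-tailed test at alpha = 0.05 the critical value
# is about 2.306, so abs(t) > 2.306 means the difference in means is
# statistically significant at that level.
```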