These flashcards cover key concepts in statistical inference and prediction, along with the methodologies discussed in the lecture.
Inference vs Prediction
Inference focuses on understanding relationships and estimating parameters, while prediction focuses on forecasting unseen outcomes with accuracy.
Parameter
A fixed but unknown quantity that describes the population, which we aim to estimate through inference.
Likelihood Function
A function that measures how plausible different values of a parameter are given the observed data.
Maximum Likelihood Estimation (MLE)
A statistical method that chooses the parameter value that makes the observed data most plausible.
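As a concrete sketch of MLE (with made-up coin-flip data), the code below finds the heads probability that maximises the log-likelihood by grid search and checks it against the known closed-form Bernoulli MLE, the sample mean:

```python
# Hypothetical example: MLE for a coin's heads probability p.
# For Bernoulli data the MLE is the sample mean; we verify this
# against a brute-force grid search over the log-likelihood.
import math

flips = [1, 0, 1, 1, 0, 1, 1, 1]  # illustrative data (1 = heads)

def log_likelihood(p, data):
    # Log-probability of the observed flips under parameter p
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

grid = [i / 1000 for i in range(1, 1000)]
mle = max(grid, key=lambda p: log_likelihood(p, flips))

closed_form = sum(flips) / len(flips)  # known Bernoulli MLE
```

Maximising the log-likelihood rather than the likelihood itself is standard practice: it gives the same maximiser while avoiding numerical underflow from multiplying many small probabilities.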
Bayesian Framework
A statistical framework that treats parameters as random variables and updates beliefs about them after observing data.
Credible Interval
A range of values within which a parameter is believed to lie with a certain probability, analogous to confidence intervals in frequentist statistics.
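The two Bayesian cards above can be illustrated with a conjugate Beta-Binomial update: starting from a uniform Beta(1, 1) prior on a coin's heads probability, the posterior after h heads and t tails is Beta(1 + h, 1 + t). This sketch (with fabricated counts) approximates a 95% credible interval by sampling from the posterior:

```python
# Sketch of a Bayesian update and 95% credible interval for a coin's
# heads probability. Prior: Beta(1, 1); posterior after h heads and
# t tails: Beta(1 + h, 1 + t). Data are illustrative.
import random

random.seed(0)
heads, tails = 6, 2  # made-up observed flips

# Approximate posterior quantiles by Monte Carlo sampling
draws = sorted(random.betavariate(1 + heads, 1 + tails)
               for _ in range(100_000))

lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
# (lo, hi) is an approximate 95% credible interval for p
```

Unlike a frequentist confidence interval, this interval supports the direct statement "p lies in (lo, hi) with probability 0.95", given the prior and the data.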
Bootstrap
A resampling technique that estimates the sampling distribution of a statistic, and hence its uncertainty, when theoretical formulas are difficult to obtain.
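A minimal bootstrap sketch, using fabricated data: resample the dataset with replacement many times, recompute the statistic on each resample, and take the standard deviation of those recomputed values as the standard error.

```python
# Bootstrap estimate of the standard error of the sample mean.
# The data values here are made up for illustration.
import random
import statistics

random.seed(1)
data = [2.1, 3.4, 1.8, 5.0, 4.2, 3.9, 2.7, 4.8]

boot_means = []
for _ in range(5000):
    # Resample the dataset with replacement, same size as original
    resample = [random.choice(data) for _ in data]
    boot_means.append(statistics.mean(resample))

se = statistics.stdev(boot_means)  # bootstrap standard error of the mean
```

For the mean there is a textbook formula (s / sqrt(n)) to compare against, but the same loop works unchanged for statistics like the median or a trimmed mean, where no simple formula exists.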
Bias
Systematic error in estimation; an unbiased estimator has an expected value equal to the true parameter value.
Variance
A measure of how much an estimator fluctuates across different samples drawn from the same population.
Mean Squared Error (MSE)
A single number that combines bias and variance, calculated as the sum of the variance and the square of the bias.
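The decomposition MSE = variance + bias² can be checked by simulation. This sketch uses a deliberately biased estimator (the sample mean shrunk by a factor of 0.8; all numbers are illustrative) and verifies that the identity holds for the empirical quantities:

```python
# Illustrates MSE = variance + bias^2 by simulation, using a
# deliberately biased estimator: 0.8 times the sample mean.
import random
import statistics

random.seed(2)
true_mu = 5.0
estimates = []
for _ in range(20_000):
    sample = [random.gauss(true_mu, 2.0) for _ in range(10)]
    estimates.append(0.8 * statistics.mean(sample))  # biased estimator

bias = statistics.mean(estimates) - true_mu
variance = statistics.pvariance(estimates)
mse = statistics.mean((e - true_mu) ** 2 for e in estimates)
# mse equals variance + bias**2 (up to floating-point error)
```

The identity is exact for the empirical quantities, which makes it a useful sanity check when comparing estimators that trade bias for variance.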
Overfitting
A modeling error that occurs when a model captures noise in the training data, leading to poor performance on new data.
James-Stein Estimator
An estimator showing that, when three or more means are estimated simultaneously, shrinking the individual estimates towards a common value can improve overall accuracy.
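A simulation sketch of this effect, under the standard setting of independent normal observations with known unit variance (the dimension, true means, and trial count below are illustrative choices). The positive-part James-Stein estimator shrinks the raw observations toward zero and achieves lower total squared error than using the observations directly:

```python
# Positive-part James-Stein estimator for p normal means with unit
# variance, shrinking toward zero. Parameters are illustrative.
import random

random.seed(3)
p = 10
theta = [1.0] * p  # true means

def james_stein(x):
    s = sum(v * v for v in x)
    # Shrinkage factor 1 - (p - 2) / ||x||^2, clipped at zero
    shrink = max(0.0, 1.0 - (len(x) - 2) / s)
    return [shrink * v for v in x]

mle_err = js_err = 0.0
trials = 5000
for _ in range(trials):
    x = [random.gauss(t, 1.0) for t in theta]  # raw observations (MLE)
    js = james_stein(x)
    mle_err += sum((a - b) ** 2 for a, b in zip(x, theta))
    js_err += sum((a - b) ** 2 for a, b in zip(js, theta))
# Average squared error of James-Stein is below that of the raw MLE
```

The improvement is in *total* error across all coordinates; any individual mean can be estimated worse, which is why the result was considered paradoxical.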
ROC Curves
Graphs that display the performance of a binary classification model by illustrating the trade-off between true positive rate and false positive rate.
Area Under the Curve (AUC)
A metric that measures the overall ability of a model to distinguish between two classes, where higher values indicate better performance.
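AUC has an equivalent probabilistic reading: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The sketch below computes it directly from that definition, on made-up labels and scores:

```python
# AUC as the probability that a random positive outranks a random
# negative; ties count as half. Labels and scores are fabricated.
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
# auc(labels, scores) equals 8/9: one positive-negative pair is misordered
```

This pairwise form makes clear why AUC depends only on the ranking of the scores, not on their absolute values or on any classification threshold.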
Training Error
The error measured on the data used to fit the model; it often underestimates true prediction error due to overfitting.
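A small demonstration of this gap, using a model that memorises its training data: a 1-nearest-neighbour predictor on noisy data (the data-generating process below is an illustrative choice). Its training error is exactly zero, yet its error on fresh data from the same process is not:

```python
# Training error vs test error: a 1-nearest-neighbour "model"
# memorises noisy training data, so its training error is zero
# while its error on fresh data is not. Data are simulated.
import random

random.seed(5)

def make_data(n):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [x + random.gauss(0, 0.3) for x in xs]  # noisy linear truth
    return xs, ys

train_x, train_y = make_data(50)
test_x, test_y = make_data(50)

def predict(x):
    # 1-NN: return the label of the closest training point
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

train_mse = sum((predict(x) - y) ** 2
                for x, y in zip(train_x, train_y)) / len(train_x)
test_mse = sum((predict(x) - y) ** 2
               for x, y in zip(test_x, test_y)) / len(test_x)
# train_mse is 0.0; test_mse is strictly positive
```

This is the extreme case of the overfitting card above: perfect memorisation of noise drives training error to zero while leaving genuine prediction error untouched.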
A/B Testing
An experimental method in which two versions are compared to determine which performs better, typically using random assignment of subjects to control and treatment groups.
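One common way to analyse such an experiment is a permutation test: if the two versions were equivalent, group labels would be exchangeable, so we can shuffle them many times and see how often a difference as large as the observed one arises by chance. All counts below are fabricated for illustration:

```python
# Permutation test for an A/B experiment on conversion rates.
# Conversion counts are fabricated for illustration.
import random

random.seed(4)
a = [1] * 30 + [0] * 70  # control: 30/100 converted
b = [1] * 45 + [0] * 55  # treatment: 45/100 converted

observed = sum(b) / len(b) - sum(a) / len(a)  # 0.15

pooled = a + b
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)  # re-deal labels under the null hypothesis
    perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
    diff = sum(perm_b) / len(perm_b) - sum(perm_a) / len(perm_a)
    if diff >= observed:
        count += 1
p_value = count / trials  # small value: evidence that B outperforms A
```

Random assignment is what justifies the shuffling step: under the null hypothesis of no treatment effect, every relabelling of subjects is equally likely.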