These flashcards cover key vocabulary related to validation and hyperparameter tuning in machine learning, facilitating review of the associated concepts.
Hyperparameter
A parameter that is set before the learning process begins rather than learned during training, acting like a knob that adjusts the model's behavior.
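For instance, a minimal scikit-learn sketch (the library and model are illustrative assumptions, not part of the card):

```python
from sklearn.neighbors import KNeighborsClassifier

# n_neighbors is a hyperparameter: we choose its value before fitting;
# it is never learned from the training data itself.
model = KNeighborsClassifier(n_neighbors=5)
```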
Validation Set
An additional set of data held out from training and used to evaluate the model while tuning hyperparameters and selecting between models, helping to detect and prevent overfitting.
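A sketch of carving a validation set out of the data, assuming scikit-learn (the dataset is an illustrative choice):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Hold out 20% of the data as a validation set for tuning and model selection.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)
```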
Overfitting
A modeling error that occurs when a model learns the details and noise in the training data to the extent that it negatively impacts the performance of the model on new data.
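A minimal sketch of the symptom, assuming scikit-learn (the dataset and model are illustrative): an unconstrained decision tree memorizes its training data, so its training score far exceeds its validation score.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# With no depth limit, the tree can fit noise in the training data.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(tree.score(X_train, y_train))  # typically near 1.0
print(tree.score(X_val, y_val))      # noticeably lower: the overfitting gap
```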
k-fold Cross-Validation
A method where the training data is split into k equally-sized subsets, with each subset used as a validation set once while the others are used for training.
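A sketch with scikit-learn's `cross_val_score` (the dataset and model are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# cv=5 splits the data into 5 folds; each fold is the validation set once.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores, scores.mean())
```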
Leave-One-Out Cross-Validation (LOOCV)
A special case of cross-validation where each training example is used as a single validation set while the rest serve as the training set.
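Equivalently, LOOCV is k-fold cross-validation with k equal to the number of examples. A sketch using scikit-learn's `LeaveOneOut` splitter (the dataset and model are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
# One fit per example: 150 fits for the 150-sample iris dataset.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
print(scores.mean())
```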
Grid Search
A systematic method for selecting hyperparameters by evaluating every combination of values in a specified parameter grid.
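A sketch with scikit-learn's `GridSearchCV`, which combines grid search with cross-validation (the estimator and grid values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}  # 3 x 3 = 9 combinations
# Every combination is scored by 5-fold cross-validation.
search = GridSearchCV(SVC(), grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```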
Random Sampling
A method of selecting hyperparameter combinations at random rather than exhaustively (often called random search), useful when there is little intuition about good parameter settings.
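A sketch with scikit-learn's `RandomizedSearchCV` (assuming a recent SciPy for `loguniform`; the distributions are illustrative choices):

```python
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Sample 20 configurations at random from continuous distributions
# instead of enumerating a fixed grid.
dists = {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)}
search = RandomizedSearchCV(SVC(), dists, n_iter=20, cv=5, random_state=0).fit(X, y)
print(search.best_params_)
```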
Bayesian Optimization
A method that treats hyperparameter tuning itself as a machine learning problem, using the results of previously evaluated configurations to choose which configurations to try next.
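One possible sketch uses the third-party scikit-optimize package and its `BayesSearchCV` (an assumption: the package must be installed separately, and its API may vary by version):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from skopt import BayesSearchCV  # pip install scikit-optimize

X, y = load_iris(return_X_y=True)
search = BayesSearchCV(
    SVC(),
    {"C": (1e-2, 1e2, "log-uniform"), "gamma": (1e-3, 1e1, "log-uniform")},
    n_iter=20,  # each new configuration is chosen using the scores of earlier ones
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```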
Training Set
The portion of the dataset used to train the model, allowing it to learn patterns and relationships.
Testing Set
The data used to evaluate the model's performance after it has been trained, measuring how well it generalizes to unseen data.
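A sketch of the final evaluation, assuming scikit-learn (the dataset and model are illustrative): the test set is split off first and scored only once, after all training and tuning are done.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Set the test set aside first; it is never used for training or tuning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))  # estimate of generalization to unseen data
```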