training set
Example: 800 houses (80% of dataset) with features + prices. It's the data used to teach the model.
testing set
Example: 200 houses (20% of dataset). It's the data used to evaluate how well the model generalizes.
features
Example: house size, bedrooms, age, neighborhood score. They're the input variables to the model.
target
Example: house price. It's the output the model predicts.
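The four cards above in code, as a minimal sketch: the 80/20 split, the feature matrix X, and the target y. Assumes scikit-learn and NumPy are available; the random values are a stand-in for a real housing dataset.

```python
# Sketch: splitting 1,000 houses into an 80/20 train/test split.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((1000, 4))       # features: size, bedrooms, age, neighborhood score
y = rng.random(1000) * 500_000  # target: house price

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # (800, 4) (200, 4)
```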
principal components
Example: combining "size, bedrooms, lot size" into one "overall size" measure. They're new features, computed algorithmically (e.g., by PCA), that fold correlated variables into fewer uncorrelated ones.
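A sketch of that combination using scikit-learn's PCA; the correlated columns here are synthetic stand-ins for size, bedrooms, and lot size.

```python
# Sketch: collapsing three correlated size-related features into one
# principal component, roughly the "overall size" measure described above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
size = rng.random(800)
# bedrooms and lot size correlate with size (plus noise)
corr = np.column_stack([
    size,
    size * 3 + rng.normal(0, 0.1, 800),
    size * 2 + rng.normal(0, 0.1, 800),
])

pca = PCA(n_components=1)
overall_size = pca.fit_transform(corr)  # shape (800, 1)
print(pca.explained_variance_ratio_)    # most variance kept by one component
```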
normalization
Example: dividing square footage by 1000 so values are ~1-5. It's scaling features to make them comparable.
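A sketch of the divide-by-1000 trick from the card, alongside two standard alternatives (min-max and z-score scaling); the square-footage values are illustrative.

```python
# Sketch: three common ways to put square footage on a comparable scale.
import numpy as np

sqft = np.array([1200.0, 3400.0, 5000.0, 2100.0])

simple = sqft / 1000                                      # values land around 1-5
minmax = (sqft - sqft.min()) / (sqft.max() - sqft.min())  # rescaled to [0, 1]
zscore = (sqft - sqft.mean()) / sqft.std()                # mean 0, std 1
```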
neuron/unit
Example: 0.5*size + 0.3*bedrooms + 0.2*age. It's a function that computes a weighted sum of inputs plus a bias, then applies an activation.
weights/parameters
Example: weights [0.5, 0.3, 0.2] show that size matters most for house price. They're the learned numbers that determine how much each input contributes to the output.
layer
Example: an output layer with a single neuron predicting house price. It's a group of neurons at the same stage of the network.
activation function
Example: for regression, identity (linear); for classification, sigmoid. It's a function applied to a neuron's weighted sum; non-linear choices like sigmoid or ReLU are what let a network model non-linear patterns.
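The neuron, weights, layer, and activation cards in one sketch; the input values, weights, and bias are illustrative numbers, not a trained model.

```python
# Sketch: one neuron computing the weighted sum from the neuron card,
# then applying either the identity or sigmoid activation.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

inputs = np.array([2.0, 3.0, 10.0])   # size (1000s sqft), bedrooms, age (years)
weights = np.array([0.5, 0.3, 0.2])   # one weight per input feature
bias = 0.1

z = weights @ inputs + bias           # weighted sum plus bias
regression_out = z                    # identity activation (regression)
classification_out = sigmoid(z)       # sigmoid activation (classification)
```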
loss function
Example: Mean Squared Error, the average of (predicted - actual)^2 over all examples. It's a measure of prediction error.
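A sketch of MSE over a small batch; the prices are made up.

```python
# Sketch: Mean Squared Error across three predictions.
import numpy as np

predicted = np.array([310_000.0, 205_000.0, 498_000.0])
actual    = np.array([300_000.0, 210_000.0, 500_000.0])

mse = np.mean((predicted - actual) ** 2)
```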
gradient
Example: a negative dLoss/dWeight for size means increasing that weight slightly will reduce error. It's the slope of the loss with respect to each weight, showing which direction to move them to reduce loss.
gradient descent
Example: weight_new = weight_old - learning_rate * gradient. It's an optimization method that updates weights iteratively.
learning rate
Example: 0.01 = small stable adjustments. It's the step size for weight updates.
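The gradient, gradient descent, and learning rate cards in one sketch: fitting a single weight by repeatedly applying the update rule above. The data and hyperparameters are illustrative.

```python
# Sketch: gradient descent on a single weight, fitting price = w * size.
import numpy as np

size  = np.array([1.2, 3.4, 5.0, 2.1])   # sizes in 1000s of sqft
price = np.array([1.8, 5.1, 7.5, 3.2])   # prices in 100k, roughly 1.5x size

w = 0.0
learning_rate = 0.01
for step in range(500):
    predicted = w * size
    # gradient of MSE with respect to w: dLoss/dw = mean(2 * (pred - actual) * size)
    gradient = np.mean(2 * (predicted - price) * size)
    w = w - learning_rate * gradient      # weight_new = weight_old - lr * gradient
print(w)  # converges near 1.5
```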
epoch
Example: processing all 800 houses once. It's one full pass through training data.
training loop
Example: forward pass → compute loss → backprop → update weights → repeat. It's the cycle the model goes through to learn.
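The epoch and training-loop cards made explicit, reusing the same single-weight setup as the gradient-descent sketch; the synthetic data and 20 epochs are illustrative.

```python
# Sketch: the training loop structure, with epochs made explicit.
import numpy as np

X = np.random.default_rng(1).random((800, 1))   # 800 training houses, 1 feature
y = 1.5 * X[:, 0] + 0.2                         # synthetic "true" prices

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(20):                         # one epoch = one pass over all 800 houses
    pred = w * X[:, 0] + b                      # forward pass
    loss = np.mean((pred - y) ** 2)             # compute loss
    grad_w = np.mean(2 * (pred - y) * X[:, 0])  # backprop (here, a direct derivative)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w                            # update weights
    b -= lr * grad_b
    print(f"epoch {epoch}: loss={loss:.4f}")
```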
autoencoder
Example: encoder compresses 6 house features into 2 codings; decoder rebuilds them. It's a network that compresses features into fewer variables, then reconstructs them.
encoder
Example: compresses [size, bedrooms, age, location] into 2 variables. It's part of an autoencoder that compresses input features.
decoder
Example: rebuilds 6 house features from 2 compressed codings. It's part of an autoencoder that reconstructs features from codings.
codings
Example: 2 variables that capture most house info. They're compressed representations of features.
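The autoencoder, encoder, decoder, and codings cards as one Keras sketch (assuming TensorFlow is installed); the layer sizes follow the cards, everything else is an illustrative choice.

```python
# Sketch: a 6 -> 2 -> 6 autoencoder for the house features described above.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(6,))                           # 6 house features
codings = tf.keras.layers.Dense(2, activation="relu")(inputs) # encoder -> 2 codings
outputs = tf.keras.layers.Dense(6)(codings)                   # decoder -> 6 features
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")             # reconstruction error

X = np.random.default_rng(2).random((800, 6))  # stand-in normalized features
autoencoder.fit(X, X, epochs=10, verbose=0)    # inputs double as targets

encoder = tf.keras.Model(inputs, codings)
compressed = encoder.predict(X)                # the codings: shape (800, 2)
```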