Machine Learning in Housing Price Prediction: Key Concepts and Autoencoders


20 Terms

1. training set

Example: 800 houses (80% of dataset) with features + prices. It's the data used to teach the model.

2. testing set

Example: 200 houses (20% of dataset). It's the data used to evaluate how well the model generalizes.
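The 80/20 split from the two cards above can be sketched in a few lines. The dataset here is synthetic and the 1000-row size is illustrative; only the slicing logic matters.

```python
import numpy as np

# Hypothetical dataset: 1000 houses, 4 features each, plus a price target.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))          # features
y = rng.random(1000) * 500_000     # prices

# 80/20 split: the first 800 rows teach the model,
# the held-out 200 rows evaluate how well it generalizes.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

print(len(X_train), len(X_test))   # 800 200
```

In practice you would shuffle before splitting (or use a library helper) so the test rows are not systematically different from the training rows.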

3. features

Example: house size, bedrooms, age, neighborhood score. They're the input variables to the model.

4. target

Example: house price. It's the output the model predicts.

5. principal components

Example: combining "size, bedrooms, lot size" into one "overall size" measure. They're algorithmically engineered features that combine correlated variables.
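A minimal PCA sketch of the "overall size" idea, using synthetic correlated features and NumPy's SVD; the data and feature names are assumptions for illustration.

```python
import numpy as np

# Three correlated "size-like" features collapsed into one principal component.
rng = np.random.default_rng(0)
size = rng.random(100)
# bedrooms and lot size track size, plus a little noise.
X = np.column_stack([size,
                     size * 2 + rng.normal(0, 0.05, 100),
                     size * 3 + rng.normal(0, 0.05, 100)])

Xc = X - X.mean(axis=0)               # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
overall_size = Xc @ Vt[0]             # project onto the first principal component

print(overall_size.shape)             # one "overall size" value per house
```

Because the three inputs are strongly correlated, the first component captures almost all of their shared variation.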

6. normalization

Example: dividing square footage by 1000 so values are ~1-5. It's scaling features to make them comparable.
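The scaling described above is a one-liner; the square footages here are made up.

```python
import numpy as np

# Dividing square footage by 1000 brings values into a comparable ~1-5 range.
sqft = np.array([1200.0, 2500.0, 4800.0])
scaled = sqft / 1000.0
print(scaled)  # [1.2 2.5 4.8]
```

Other common schemes (min-max scaling, standardization to zero mean and unit variance) serve the same goal: keeping one feature from dominating the others simply because of its units.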

7. neuron/unit

Example: 0.5*size + 0.3*bedrooms + 0.2*age. It's a function that computes a weighted sum of inputs plus a bias, then applies an activation.
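A single neuron with the card's weights can be computed directly; the input values and bias are illustrative.

```python
import numpy as np

# One neuron: weighted sum of inputs plus bias, then an activation.
weights = np.array([0.5, 0.3, 0.2])   # size, bedrooms, age
bias = 0.1
x = np.array([2.4, 3.0, 10.0])        # scaled size, bedrooms, age (made up)

z = np.dot(weights, x) + bias         # weighted sum + bias
output = z                            # identity activation (regression output)
print(output)                         # 1.2 + 0.9 + 2.0 + 0.1 = 4.2
```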

8. weights/parameters

Example: weights [0.5, 0.3, 0.2] indicate that size matters most for house price. They're the learned numbers that represent feature importance.

9. layer

Example: one output neuron predicting house price. It's a collection of neurons.

10. activation function

Example: for a regression output, identity (linear); for classification, sigmoid. It's a function applied to a neuron's weighted sum; non-linear choices (e.g. ReLU, sigmoid) are what let a network model non-linear patterns.

11. loss function

Example: Mean Squared Error, the average of (predicted - actual)^2 over all examples. It's a measure of prediction error.
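MSE as defined above, on made-up predicted and actual prices:

```python
import numpy as np

# Mean Squared Error: average of squared prediction errors.
predicted = np.array([310_000.0, 250_000.0, 410_000.0])
actual    = np.array([300_000.0, 260_000.0, 400_000.0])

mse = np.mean((predicted - actual) ** 2)
print(mse)  # each error is 10,000; squared = 1e8; mean = 100000000.0
```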

12. gradient

Example: dLoss/dWeight shows how the loss changes if the size weight increases slightly; stepping against it reduces error. It's the slope showing how weights should change to reduce loss.

13. gradient descent

Example: weight_new = weight_old - learning_rate * gradient. It's an optimization method that updates weights iteratively.
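The update rule above can be run end-to-end on a tiny made-up dataset with a single weight; the learning rate and data are illustrative.

```python
# Gradient descent on a one-parameter model: price = w * size.
sizes  = [1.0, 2.0, 3.0]
prices = [2.0, 4.0, 6.0]   # true relationship: price = 2 * size

w = 0.0
learning_rate = 0.1
for _ in range(100):
    # dLoss/dw for MSE loss: mean of 2 * (w*size - price) * size
    grad = sum(2 * (w * s - p) * s for s, p in zip(sizes, prices)) / len(sizes)
    w = w - learning_rate * grad   # weight_new = weight_old - lr * gradient

print(round(w, 4))  # converges toward the true weight, 2.0
```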

14. learning rate

Example: 0.01 = small stable adjustments. It's the step size for weight updates.

15. epoch

Example: processing all 800 houses once. It's one full pass through training data.

16. training loop

Example: forward pass → compute loss → backprop → update weights → repeat. It's the cycle the model goes through to learn.
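The forward → loss → backprop → update cycle, sketched for a linear model on synthetic data. The 800-house size matches the earlier cards; everything else (weights, learning rate, epoch count) is an assumption for illustration.

```python
import numpy as np

# Training loop for a linear model pred = X @ w + b.
rng = np.random.default_rng(0)
X = rng.random((800, 3))                     # 800 houses, 3 features
true_w = np.array([0.5, 0.3, 0.2])
y = X @ true_w + 0.1                         # synthetic prices

w = np.zeros(3)
b = 0.0
lr = 0.5

for epoch in range(500):                     # each epoch = one full pass over the data
    pred = X @ w + b                         # forward pass
    loss = np.mean((pred - y) ** 2)          # compute loss (MSE)
    grad_w = 2 * X.T @ (pred - y) / len(X)   # "backprop": analytic gradients
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w                         # update weights
    b -= lr * grad_b                         # ...and repeat

print(np.round(w, 2), round(b, 2))           # approaches [0.5 0.3 0.2] and 0.1
```

With real networks the gradients come from automatic differentiation rather than hand-derived formulas, but the loop structure is the same.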

17. autoencoder

Example: encoder compresses 6 house features into 2 codings; decoder rebuilds them. It's a network that compresses features into fewer variables, then reconstructs them.
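A structural sketch of the 6 → 2 → 6 pipeline from the card. The weights here are random (untrained), so the reconstruction is not meaningful; the point is the shape of the encoder/decoder pair.

```python
import numpy as np

# Autoencoder skeleton: 6 features -> 2 codings -> 6 reconstructed features.
rng = np.random.default_rng(0)

W_enc = rng.standard_normal((6, 2))   # encoder weights (would be learned)
W_dec = rng.standard_normal((2, 6))   # decoder weights (would be learned)

house = rng.random(6)                 # 6 house features

codings = house @ W_enc               # encoder: compress 6 -> 2
reconstruction = codings @ W_dec      # decoder: rebuild 2 -> 6

print(codings.shape, reconstruction.shape)   # (2,) (6,)
```

Training would minimize the reconstruction error between `house` and `reconstruction`, forcing the 2 codings to capture as much of the original information as possible.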

18. encoder

Example: compresses house features such as size, bedrooms, age, and location into 2 variables. It's the part of an autoencoder that compresses input features.

19. decoder

Example: rebuilds 6 house features from 2 compressed codings. It's part of an autoencoder that reconstructs features from codings.

20. codings

Example: 2 variables that capture most house info. They're compressed representations of features.