Exam 2 - Module 7 (Autoencoders, Dimension Reduction, Clustering)



Last updated 3:52 AM on 4/24/26

10 Terms

1

Generative vs Discriminative

Generative models learn the joint distribution P(x, y) and can generate new samples; discriminative models learn the conditional P(y | x) (the decision boundary) directly.
2

Autoencoders

unsupervised neural network that learns to compress data (encoder) into a latent representation and then reconstruct it (decoder) back to its original form
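The encode–compress–decode loop can be sketched with a minimal linear autoencoder in NumPy. This is an illustrative toy (one encoder matrix `W_e`, one decoder matrix `W_d`, synthetic data, plain gradient descent on the MSE reconstruction loss), not a specific architecture from the cards:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 8, 2                       # input dimension n, latent dimension d (d << n)
X = rng.normal(size=(200, n))     # synthetic data

W_e = rng.normal(scale=0.1, size=(n, d))   # encoder weights
W_d = rng.normal(scale=0.1, size=(d, n))   # decoder weights
lr = 0.01

def reconstruction_loss(X, W_e, W_d):
    Z = X @ W_e                   # encode: compress to latent representation
    X_hat = Z @ W_d               # decode: reconstruct the input
    return np.mean((X - X_hat) ** 2)

initial = reconstruction_loss(X, W_e, W_d)
for _ in range(500):
    Z = X @ W_e
    X_hat = Z @ W_d
    err = X_hat - X                          # gradient of MSE w.r.t. X_hat
    grad_Wd = Z.T @ err / len(X)
    grad_We = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

final = reconstruction_loss(X, W_e, W_d)     # loss drops as the AE learns to reconstruct
```

Training drives the reconstruction loss down, which is the whole objective of a standard autoencoder.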

3

Latent space

lower-dimensional representation of the input data

4

A good latent space and AE Tradeoff

A good latent space is compressed (d ≪ n) and smooth. Standard AEs can have "gaps" in the latent space that make it difficult to sample new data points from.

The latent dimension size controls the information-detail tradeoff

5

Variational Autoencoders (VAE)

Instead of a point, we get a region of latent space, which makes it smooth and continuous

6

VAE Loss Function

balances reconstruction with regularization (pushing the latent distribution toward a simple prior like N(0,I))
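The two terms of the loss can be written out directly. A minimal NumPy sketch, assuming an MSE reconstruction term and the closed-form KL between a diagonal Gaussian N(μ, σ²) and the standard normal prior N(0, I) (function names are illustrative):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims:
    # -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    recon = np.sum((x - x_hat) ** 2)   # reconstruction term (MSE)
    # beta weights the regularization term (beta > 1 gives a beta-VAE)
    return recon + beta * kl_to_standard_normal(mu, log_var)
```

When μ = 0 and σ = 1 the KL term vanishes, since the latent distribution already matches the prior.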

7

KL Divergence

quantifies the "information loss" and is a measure of how one probability distribution differs from a reference probability distribution
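For discrete distributions the definition is a single sum, D_KL(p ∥ q) = Σᵢ pᵢ log(pᵢ/qᵢ). A small NumPy sketch (illustrative helper name):

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i)
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                    # terms with p_i = 0 contribute 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))
```

Note that KL divergence is not symmetric: D_KL(p ∥ q) generally differs from D_KL(q ∥ p), so it is not a true distance metric.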

8

Reparameterization Trick

rewrites sampling z ~ N(μ, σ²) as z = μ + σ ⊙ ε with ε ~ N(0, I), so the randomness is moved outside the network and gradients can flow through μ and σ during backpropagation
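
In code the trick is one line: sample external noise ε and form z = μ + σ·ε deterministically. A minimal NumPy sketch (the function name `reparameterize` and the log-variance parameterization are illustrative conventions):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps, with eps ~ N(0, I).
    # Sampling becomes a deterministic function of (mu, sigma) plus
    # external noise, so gradients can reach mu and log_var.
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps
```

With a very small variance, z collapses to μ; with log_var = 0 the samples are spread around μ with unit standard deviation.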

9

VAE Loss Tradeoff

weighting the KL (regularization) term more heavily creates highly disentangled, interpretable latent features but results in blurrier reconstructions

10

Generative Adversarial Networks (GANs)