Clustering Lecture Flashcards


Flashcards based on Stony Brook University lecture slides by Dr. Steven Skiena on Clustering.


19 Terms

1. Unsupervised Learning

Methods that find structure in data by assigning labels (clusters) or values (rankings) without a trusted gold standard to learn from.

2. Clustering

The problem of grouping points by similarity, often revealing underlying sources or explanations.

3. Similarity in Clustering

Defined by some underlying distance function/metric.

4. Natural Clusters

Clusters that are compact and roughly circular in shape.

5. Clustering Gene Expression Data

Groups genes active in the same phases of the cell cycle.

6. Biological Clustering

Associated with dendrograms or phylogenetic trees.

7. Why use Clustering?

To determine how many distinct populations are in your data, build separate predictive models for each cluster, replace each cluster by its centroid, and detect outliers by distance from cluster centers.

8. K-Means Clustering

Pick k points as initial centers, assign each example to its nearest center, recalculate each center as the centroid (mean) of its assigned examples, and repeat until the assignments stabilize.
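The loop above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's code; the names `kmeans`, `dist2`, and `centroid` are chosen here:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    """Mean of a non-empty list of points, coordinate-wise."""
    n = len(cluster)
    return tuple(sum(coord) / n for coord in zip(*cluster))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: assign points to the nearest center,
    then recompute each center as its cluster's centroid."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k input points as centers
    for _ in range(iters):
        # assignment step: group each point with its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        # update step: move each center to its cluster's centroid
        new_centers = [centroid(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:           # assignments stable: stop
            break
        centers = new_centers
    return centers, clusters
```

On two well-separated blobs this converges in a handful of iterations; in general the result depends on which initial centers are sampled.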

9. Local Optima

Suboptimal solutions that k-means can get stuck in; the final clustering depends on the initial choice of centers.

10. Centermost Input Example

Using the input example closest to the cluster's center as the center, so the center is always an actual data point (a medoid) rather than a computed mean.

11. How to determine the "right" number of clusters

The sum of squared errors (SSE) of points from their centers should decrease only slowly once the number of clusters exceeds the right value; look for this "elbow" in the SSE-versus-k curve.

12. Limitations of K-means

It handles nested clusters and long, thin clusters poorly, since it favors compact, roughly circular groups.

13. Agglomerative Clustering

These bottom-up methods repeatedly merge the two nearest clusters.

14. Single-link clustering

Equivalent to building a minimum spanning tree over the points: the distance between two clusters is the minimum distance between any pair of points, one from each.

15. Hierarchical Agglomerative Clustering

We start with every data point in a separate cluster and keep merging the most similar pairs of data points/clusters until we have one big cluster left.
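The bottom-up procedure above can be sketched directly. This is a simple O(n³) illustration using single-link distance; the names are chosen here, and real implementations use faster data structures:

```python
def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_link_hac(points):
    """Start with every point in its own cluster and repeatedly merge
    the two clusters with the smallest single-link (minimum pairwise)
    distance, recording each merge until one cluster remains."""
    clusters = [[p] for p in points]
    merges = []                          # (cluster_a, cluster_b, distance)
    while len(clusters) > 1:
        best = None                      # (distance, i, j) of closest pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        clusters[i] = clusters[i] + clusters[j]   # merge j into i
        del clusters[j]
    return merges
```

The recorded merge list is exactly the information a dendrogram draws: which clusters joined, and at what distance (height).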

16. Output of Hierarchical Clustering

A binary tree or dendrogram.

17. Dendrogram Height

The height at which two items or clusters merge indicates how dissimilar they are; pairs joined lower in the dendrogram are more similar.

18. Linkage Criteria

Nearest neighbor (single link, MST), average link, nearest centroid, and furthest link (complete link).
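The criteria above differ only in how they turn pairwise point distances into one cluster-to-cluster distance. A sketch with function names chosen here:

```python
def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(cluster):
    """Mean of a non-empty list of points, coordinate-wise."""
    n = len(cluster)
    return tuple(sum(coord) / n for coord in zip(*cluster))

def single_link(A, B):      # nearest neighbor: closest cross pair
    return min(dist(a, b) for a in A for b in B)

def complete_link(A, B):    # furthest link: farthest cross pair
    return max(dist(a, b) for a in A for b in B)

def average_link(A, B):     # mean over all cross pairs
    return sum(dist(a, b) for a in A for b in B) / (len(A) * len(B))

def centroid_link(A, B):    # nearest centroid: distance of the means
    return dist(centroid(A), centroid(B))
```

For two vertical pairs A = [(0, 0), (0, 2)] and B = [(3, 0), (3, 2)], single and centroid linkage both give 3, while furthest link gives √13 ≈ 3.61. Single link tends to chain clusters together; furthest link favors compact ones.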

19. Advantages of Cluster Hierarchies

Organization of clusters and sub-clusters, visualization, natural measure of distance, and efficient classification of new items.