Classification and Clustering Techniques in Data Analysis


74 Terms

1

Classification

Task of assigning objects to one of several predefined categories.

2

Input

Collection of records (instances/examples), each represented by (X, y), where: X = attribute set, y = class label.

3

Goal

Learn a target function f that maps attribute sets to class labels.

4

Descriptive Modeling

Explain differences between classes.

5

Predictive Modeling

Predict unknown class labels using the model.

6

General Approach

Use a training set (records with known labels) to build the model and apply the model to a test set (records with unknown labels) to evaluate.

7

Confusion Matrix

Summarizes correct/incorrect predictions.

8

Accuracy

Accuracy = (Correct Predictions) / (Total Predictions)

9

Error Rate

Error Rate = 1 - Accuracy
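
Both measures come straight from the confusion matrix. A minimal Python sketch (the matrix values are invented for illustration):

```python
# Illustrative 2x2 confusion matrix for a binary classifier:
# rows = actual class, columns = predicted class (values are invented).
confusion = [
    [50, 10],  # actual positive: 50 predicted positive, 10 predicted negative
    [5, 35],   # actual negative: 5 predicted positive, 35 predicted negative
]

correct = sum(confusion[i][i] for i in range(len(confusion)))  # diagonal cells
total = sum(sum(row) for row in confusion)

accuracy = correct / total  # (50 + 35) / 100 = 0.85
error_rate = 1 - accuracy   # 0.15
```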

10

Decision Trees

A model structure with root nodes, internal nodes, and leaf nodes used for classification.

11

Root Node

No incoming edges.

12

Internal Nodes

One incoming edge, two or more outgoing edges.

13

Leaf Nodes

No outgoing edges; assigned a class label.

14

Class Label Prediction

Start from root, follow the decision rules based on attributes until reaching a leaf node.
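The traversal can be pictured as nested conditionals; a hypothetical tree over the animal attributes used elsewhere in this set:

```python
# A hand-coded decision tree as nested tests (attributes are illustrative):
# start at the root test and follow edges until a leaf assigns a label.
def predict(record):
    if record["give_birth"] == "yes":   # root node test
        return "Mammals"                # leaf node
    if record["can_fly"] == "yes":      # internal node test
        return "Birds"                  # leaf node
    return "Reptiles"                   # leaf node

predict({"give_birth": "no", "can_fly": "yes"})  # -> "Birds"
```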

15

Hunt's Algorithm

Procedure to build a decision tree.

16

Greedy Strategy

At each step, choose the attribute test that best separates the classes.

17

Gini Index

Gini(t) = 1 - ∑ [p(i|t)]²

18

Entropy

Entropy(t) = -∑ p(i|t) log2 p(i|t)

19

Misclassification Error

Error(t) = 1 - max p(i|t)

20

Information Gain

Measures entropy reduction after a split.
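The three impurity measures and information gain can all be computed from the class counts at a node; a small sketch (the counts are invented):

```python
import math

def gini(counts):
    """Gini(t) = 1 - sum of p(i|t)^2 over classes i."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """Entropy(t) = -sum of p(i|t) * log2 p(i|t); 0*log(0) treated as 0."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def error(counts):
    """Error(t) = 1 - max p(i|t)."""
    return 1 - max(counts) / sum(counts)

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropy of the child nodes."""
    n = sum(parent)
    after = sum(sum(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - after

# A node with 5 objects of each of two classes, split perfectly in two:
gini([5, 5])                                # 0.5
entropy([5, 5])                             # 1.0
information_gain([5, 5], [[5, 0], [0, 5]])  # 1.0 (maximum possible)
```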

21

Rule-Based Classifier

Uses a collection of 'if...then...' rules to perform classification.

22

Structure of a Rule

Condition (LHS / antecedent): A conjunction of attribute tests; Conclusion (RHS / consequent): A class label.

23

Example Rule

(Give Birth = no) ∧ (Can Fly = yes) → Birds

24

Applications of Rule-Based Classifier

Examples include classifying animals by biological traits and predicting tax fraud from financial attributes.

25

Mutually Exclusive Rules

Each record matches at most one rule.

26

Exhaustive Rules

Every record is matched by at least one rule.

27

Coverage

The fraction of total records that satisfy the rule's condition.

28

Accuracy (of a Rule)

The fraction of records satisfying the rule's condition that also have the correct class label.
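Coverage and accuracy for the example rule above, sketched on a handful of invented records:

```python
# Invented records to score the rule
# (Give Birth = no) AND (Can Fly = yes) -> Birds
records = [
    {"give_birth": "no",  "can_fly": "yes", "label": "Birds"},
    {"give_birth": "no",  "can_fly": "yes", "label": "Birds"},
    {"give_birth": "no",  "can_fly": "no",  "label": "Reptiles"},
    {"give_birth": "yes", "can_fly": "no",  "label": "Mammals"},
    {"give_birth": "no",  "can_fly": "yes", "label": "Insects"},
]

def condition(r):  # the rule's antecedent (LHS)
    return r["give_birth"] == "no" and r["can_fly"] == "yes"

covered = [r for r in records if condition(r)]
coverage = len(covered) / len(records)  # 3 of 5 records match: 0.6
accuracy = sum(r["label"] == "Birds" for r in covered) / len(covered)  # 2/3
```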

29

High Coverage and High Accuracy

Both are desirable.

30

Converting Trees to Rules

Each path from root to leaf becomes a classification rule.

31

Ordered Rule Set (Decision List)

Rules are applied in priority order; first matching rule is used for classification.
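A decision list is easy to sketch: try the rules in priority order and return the first match (rules, labels, and the default class are illustrative):

```python
# An ordered rule set: the first rule whose condition matches wins.
rules = [
    (lambda r: r["can_fly"] == "yes", "Birds"),
    (lambda r: r["give_birth"] == "yes", "Mammals"),
]
DEFAULT = "Reptiles"  # fallback class, which makes the list exhaustive

def classify(record):
    for condition, label in rules:
        if condition(record):
            return label
    return DEFAULT

classify({"can_fly": "no", "give_birth": "yes"})  # -> "Mammals"
```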

32

Unordered Rule Set

Voting schemes (majority rule) may be used if multiple rules match.

33

Rule-based Ordering

Rank rules by individual quality.

34

Class-based Ordering

Group and order rules based on the predicted class.

35

Direct Method

Extract rules directly from data (e.g., RIPPER, CN2, Holte's 1R).

36

Indirect Method

Extract rules from other models like decision trees (e.g., C4.5rules).

37

Sequential Covering (Direct Method)

Start with an empty rule; grow it by adding attribute tests until it covers as many instances of one class as possible; then remove the covered instances and repeat until all instances are covered.

38

Cluster Analysis

Finding groups of objects such that objects within a group are similar to each other and dissimilar to objects in other groups.

39

Maximize inter-cluster distance

Goal of cluster analysis.

40

Minimize intra-cluster distance

Goal of cluster analysis.

41

Partitional Clustering

Divides data into non-overlapping clusters; each point in exactly one cluster.

42

Hierarchical Clustering

Nested clusters organized as a tree (dendrogram).

43

Exclusive Clustering

Each point belongs to exactly one cluster.

44

Overlapping Clustering

Points may belong to multiple clusters (e.g., 'border' points).

45

Fuzzy Clustering

Points belong to all clusters with varying degrees (weights between 0 and 1, must sum to 1).

46

Complete Clustering

All data points are clustered.

47

Partial Clustering

Only a subset of data is clustered.

48

Well-Separated Clusters

Each point is closer to every point within its cluster than to any point outside.

49

Prototype-Based Clusters

Points are closer to the cluster's 'center' (centroid or medoid) than to other centers.

50

Graph-Based Clusters

Based on nearest neighbor chains; points are closer to some other point in the cluster than to any point outside.

51

Density-Based Clusters

Dense regions separated by sparser regions; useful for irregular shapes and handling noise.

52

Shared-Property (Conceptual Clusters)

Clusters defined by sharing a common property or concept.

53

Clustering Algorithms

K-means, Hierarchical, Density-Based.

54

K-means

Partitional clustering that associates each cluster with a centroid and assigns points to the closest centroid, requiring specifying K (number of clusters).

55

Hierarchical Clustering

Produces nested clusters visualized with a dendrogram.

56

Agglomerative Clustering

A bottom-up merging approach in hierarchical clustering.

57

Divisive Clustering

A top-down splitting approach in hierarchical clustering.
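The agglomerative (bottom-up) approach can be sketched in a few lines, here using single-link distance on invented 2-D points:

```python
def dist(a, b):
    """Euclidean distance between two points given as tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_link(c1, c2):
    """Single-link distance: the closest pair across the two clusters."""
    return min(dist(p, q) for p in c1 for q in c2)

def agglomerative(points, target_k):
    """Bottom-up merging: start with singleton clusters and repeatedly
    merge the closest pair until target_k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
agglomerative(pts, target_k=2)  # merges the two obvious pairs of points
```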

58

Density-Based Clustering

Identifies clusters as dense regions separated by low-density areas (e.g., DBSCAN).

59

Sum of Squared Error (SSE)

The sum of squared distances from each point to its cluster centroid; lower SSE indicates a tighter clustering.
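
A plain-Python K-means sketch that also reports SSE (the points, K, and iteration count are invented; a real analysis would use a library implementation):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Centroid (coordinate-wise mean) of a non-empty list of points."""
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    """Assign each point to its nearest centroid, then recompute centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def sse(centroids, clusters):
    """Sum of squared distances from points to their cluster centroid."""
    return sum(dist2(p, c) for c, cl in zip(centroids, clusters) for p in cl)

# Two well-separated groups of invented 2-D points:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, k=2)
total_sse = sse(centroids, clusters)  # small, since points hug their centroids
```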

60

Limitations of K-means

Performs poorly when clusters differ in size or density or have non-globular shapes; also sensitive to outliers and to initial centroid placement.

61

Overcoming Limitations of K-means

Use many small clusters and combine them later; choose initial centroids carefully (e.g., with multiple runs); switch to an alternative clustering method when clusters are non-globular or vary in density.

62

DBSCAN Core Idea

Density = Number of points within a radius (Eps); core points: ≥ MinPts within Eps; border points: fewer than MinPts but in neighborhood of a core point; noise points: neither core nor border.
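These three definitions translate directly into code. A sketch that only labels points (full DBSCAN would then link core points into clusters; the data and parameters are invented):

```python
def dist(a, b):
    """Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def label_points(points, eps, min_pts):
    """Tag each (unique) point as core, border, or noise.
    The Eps-neighborhood here includes the point itself."""
    neigh = {p: [q for q in points if dist(p, q) <= eps] for p in points}
    core = {p for p in points if len(neigh[p]) >= min_pts}
    labels = {}
    for p in points:
        if p in core:
            labels[p] = "core"
        elif any(q in core for q in neigh[p]):
            labels[p] = "border"
        else:
            labels[p] = "noise"
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 1), (5, 5)]
labels = label_points(pts, eps=1.5, min_pts=4)
# (5, 5) is isolated -> noise; (2, 1) touches the dense square -> border
```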

63

Strengths of DBSCAN

Handles noise well and finds clusters of varying shapes and sizes.

64

Weaknesses of DBSCAN

Struggles with varying densities and is less effective with high-dimensional data.

65

Cluster Evaluation Purpose

To avoid finding patterns in random noise, and to compare clustering algorithms or alternative clusterings.

66

Types of Measures in Cluster Evaluation

External Index, Internal Index, Relative Index.

67

Anomaly Detection Definition

An object is considered an anomaly if it is distant from most points in the dataset.

68

Proximity-based Approaches

Use the distance to the k-nearest neighbor to assess if an object is isolated.

69

Outlier Score

Lowest score: 0; Highest score: maximum possible distance (can be infinity).

70

Density-Based Relation

Defines density as the reciprocal of the average distance to the k-nearest neighbors.
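The proximity-based score above and this density definition, sketched together on invented points (assumes points are unique):

```python
def dist(a, b):
    """Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_distances(p, points, k):
    """Sorted distances from p to its k nearest neighbors (excluding p)."""
    return sorted(dist(p, q) for q in points if q != p)[:k]

def outlier_score(p, points, k):
    """Proximity-based score: distance to the k-th nearest neighbor."""
    return knn_distances(p, points, k)[-1]

def density(p, points, k):
    """Reciprocal of the average distance to the k nearest neighbors."""
    d = knn_distances(p, points, k)
    return 1 / (sum(d) / len(d))

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (8, 8)]
# The isolated point (8, 8) gets a large outlier score and a low density.
```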

71

Clustering-based Approaches

First, cluster the data into groups of different densities; points in small clusters are chosen as candidate outliers.

72

Detection Strategy in Anomaly Detection

Discard small clusters that are far from larger clusters and define thresholds for minimum cluster size and minimum distance between clusters.

73

Cluster-based Outlier

An object is a cluster-based outlier if it does not strongly belong to any cluster.

74

Example Using K-Means

The outlier score can be computed in two ways: the distance from the point to its closest cluster centroid, or the relative distance (that distance divided by the median distance of the cluster's points to the centroid), which adjusts for clusters of differing density.