Data Mining CA2


107 Terms

1
New cards

main idea of CURE

  • Stops the creation of a cluster hierarchy if a level consists of k clusters

  • Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrary shaped clusters and avoids single-link effect

2
New cards

Drawbacks of square-error based clustering method

  • Consider only one point as representative of a cluster

  • Good only for convex shaped, similar size and density, and if k can be reasonably estimated

3
New cards

CURE: Shrinking Representative Points

  • Shrink the multiple representative points towards the gravity centre by a fraction α

  • Multiple representatives capture the shape of the cluster

4
New cards

ROCK: Robust Clustering using linKs

  • Use links to measure similarity/proximity

  • Not distance based

  • High computational complexity

5
New cards

ROCK Algorithm

  • Draw random sample (get a random sample of the data)

  • Cluster the data using the link agglomerative approach

  • Label data in disk

6
New cards

Density-Based Clustering methods

Clustering based on density (local cluster criterion), such as density-connected points

7
New cards

Major features of Density-Based Clustering methods

  • Discover clusters of arbitrary shape

  • Handle noise

  • One scan

  • Need density parameters as termination condition

8
New cards

density-reachable

A point p is density-reachable from a point q with regard to Eps and MinPts if there is a chain of points p1, …, pn with p1 = q and pn = p such that pi+1 is directly density-reachable from pi

9
New cards

density-connected

A point p is density-connected to a point q with regard to Eps and MinPts if there is a point o such that both p and q are density-reachable from o with regard to Eps and MinPts

10
New cards

DBSCAN

  • Density Based Spatial Clustering of Applications with Noise

  • Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points

  • Discovers clusters of arbitrary shape in spatial datasets with noise

11
New cards

DBSCAN algorithm

1. Arbitrarily select a point p.

2. Retrieve all points density-reachable from p with regard to Eps and MinPts.

3. If p is a core point, a cluster is formed.

4. If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the dataset.

5. Continue the process until all of the points have been processed.
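
The loop above can be sketched in Python. This is a minimal toy implementation for intuition only (the helper name region_query and the label encoding are my own, not from the notes); a real implementation would use a spatial index for the neighbourhood queries.

```python
# Minimal DBSCAN sketch: labels are cluster ids (0, 1, ...) or -1 for noise.
import math

def region_query(points, p_idx, eps):
    """Indices of all points within Eps of points[p_idx] (brute force)."""
    px, py = points[p_idx]
    return [i for i, (x, y) in enumerate(points)
            if math.hypot(x - px, y - py) <= eps]

def dbscan(points, eps, min_pts):
    labels = [None] * len(points)          # None = not yet visited
    cluster_id = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region_query(points, i, eps)
        if len(neighbours) < min_pts:      # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster_id += 1                    # i is a core point: start a new cluster
        labels[i] = cluster_id
        seeds = list(neighbours)
        while seeds:                       # expand via density-reachability
            j = seeds.pop()
            if labels[j] == -1:            # border point previously marked noise
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_neigh = region_query(points, j, eps)
            if len(j_neigh) >= min_pts:    # j is also a core point: keep expanding
                seeds.extend(j_neigh)
    return labels

# Two dense blobs plus one far-away noise point.
pts = [(0, 0), (0, 0.5), (0.5, 0), (10, 10), (10, 10.5), (10.5, 10), (50, 50)]
print(dbscan(pts, eps=1.0, min_pts=3))     # [0, 0, 0, 1, 1, 1, -1]
```

Note how the noise point gets label -1: it is neither a core point nor density-reachable from one.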

12
New cards

spatial data

  • Generalise detailed geographic points into clustered regions, such as business, residential, industrial, or agricultural areas, according to land usage

  • Require the merge of a set of geographic areas by spatial operation

13
New cards

image data

  • Extracted by aggregation and/or approximation

  • Size, colour, shape, texture, orientation, and relative positions and structures of the contained objects or regions in the image

14
New cards

music data

  • Summarise its melody: based on the approximate patterns that repeatedly occur in the segment

  • Summarise its style: based on its tone, tempo, or the major musical instruments played

15
New cards

object identifier

Generalise to the lowest level of class in the class/subclass hierarchies

16
New cards

class composition hierarchies

  • generalise nested structured data

  • generalise only objects closely related in semantics to the current one

17
New cards

Construction and mining of object cubes

  • Extend the attribute-oriented induction method: Apply a sequence of class-based generalisation operators on different attributes; Continue until a small number of generalised objects remains that can be summarised in concise, high-level terms

  • For efficient implementation: Examine each attribute, generalise it to simple-valued data; Construct a multidimensional data cube (object cube)

18
New cards

multidimensional analysis strategy

  • Generalise the plan-base in different directions

  • Look for sequential patterns in the generalised plans

  • Derive high-level plans

19
New cards

dimension tables

Generalise plan-base in a multidimensional way using dimension table

20
New cards

cardinality

Use the number of distinct values at each level to determine the right level of generalisation (level-“planning”)

21
New cards

operators

Use operators merge “+”, option “[ ]” to further generalise patterns

22
New cards

spatial data warehouse

  • Integrated, subject-oriented, time-variant, and non-volatile spatial data repository for data analysis and decision making

23
New cards

Why is spatial data integration a big issue

  • Structure-specific formats (raster- vs. vector-based, OO vs. relational models, different storage and indexing, etc.)

  • Vendor-specific formats (ESRI, MapInfo, Intergraph, etc.)

24
New cards

spatial data cube

  • multidimensional spatial database

  • Both dimensions and measures may contain spatial components

25
New cards

On-line integration

  • Collect and store pointers to spatial objects in a spatial data cube

  • Expensive and slow, need efficient aggregation technique

26
New cards

Pre-processing

  • Pre-compute and store all the possible combinations: huge space overhead

  • Pre-compute and store rough approximations in a spatial data cube: accuracy trade-off

27
New cards

selective computation

Only materialise those which will be accessed frequently: a reasonable choice

28
New cards

Two-step mining of spatial association

  • Step 1: Rough spatial computation (as a filter) - Use MBR (minimum bounding rectangle) for rough estimation

  • Step 2: Detailed spatial algorithm (as refinement) - Apply only to those objects which have passed the rough spatial association test (no less than min_support)

29
New cards

spatial classification

  • Analyse spatial objects to derive classification schemes, such as decision trees in relevance to certain spatial properties (district, highway, river, etc.)

  • Example: Classify regions in a province into rich vs. poor according to the average family income

30
New cards

Spatial trend analysis

  • Detect changes and trends along a spatial dimension

  • Study the trend of non-spatial or spatial data changing with space

  • Example: Observe the trend of changes of the climate or vegetation with the increasing distance from an ocean

31
New cards

time-series database

  • Consists of sequences of values or events changing with time

  • Data is recorded at regular intervals

  • Characteristic time-series components: Trend, cycle, seasonal, irregular

32
New cards

time-series database applications

  • Financial: stock price, inflation

  • Biomedical: blood pressure

  • Meteorological: precipitation

33
New cards

time series data

can be illustrated as a time-series graph which describes a point moving over time

34
New cards

categories of time-series movement

  • Long-term or trend movements (trend curve)

  • Cyclic movements or cycle variations, e.g., business cycles

  • Seasonal movements or seasonal variations: almost identical patterns that a time series appears to follow during corresponding months of successive years

  • Irregular or random movements

35
New cards

the freehand method

  • Fit the curve by looking at the graph

  • Costly and barely reliable for large-scale data mining

36
New cards

the least-square method

Find the curve minimising the sum of the squares of the deviation of points on the curve from the corresponding data point
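
For the simplest case, a straight trend line, the least-squares criterion has a closed-form solution; a small illustrative sketch (the example data are invented):

```python
# Least-squares trend line: find a, b minimising sum((y - (a + b*x))**2).

def least_squares_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x            # line passes through the point of means
    return a, b

# Perfectly linear data recovers the exact line y = 2 + 3x.
a, b = least_squares_line([0, 1, 2, 3], [2, 5, 8, 11])
print(a, b)   # 2.0 3.0
```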

37
New cards

the moving-average method

  • Eliminate cyclic, seasonal and irregular patterns

  • Loss of end data

  • Sensitive to outliers
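
A small sketch of the method (example series invented); the "loss of end data" drawback is visible directly: a window of width w over n points yields only n - w + 1 smoothed values.

```python
# Simple moving average to smooth a time series.

def moving_average(series, w):
    return [sum(series[i:i + w]) / w for i in range(len(series) - w + 1)]

series = [3, 7, 5, 9, 11, 8]
print(moving_average(series, 3))   # four smoothed values for six inputs (w = 3)
```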

38
New cards

estimations of cyclic variations

If (approximate) periodicity of cycles occurs, cyclic indexes can be constructed in the same manner as seasonal indexes

39
New cards

estimation of irregular variations 

By adjusting the data for trend, seasonal and cyclic variation

40
New cards

prediction

  • With systematic analysis of the trend, cyclic, seasonal, and irregular components, it is possible to make long- or short-term predictions with reasonable quality

41
New cards

similarity search

finds data sequences that differ only slightly from the given query sequence

42
New cards

whole matching similarity query

find a set of sequences that are similar to each other (as a whole)

43
New cards

subsequence matching similarity query

find sequences that contain sub-sequences that are similar to a given query sequence

44
New cards

typical applications for similarity search

  • Financial market & Market basket data analysis

  • Scientific databases

  • Medical diagnosis

45
New cards

subsequence matching

  • Breaking sequences: into a set of pieces of window with length w

  • Extracting the features: For each subsequence inside the window

  • Mapping: each sequence to a “trail” in the feature space

  • Dividing the trail: of each sequence into “sub-trails” and represent each of them with minimum bounding rectangle

  • Multi-piece assembly algorithm: to search for longer sequence matches

46
New cards

full periodicity

Every point in time contributes (precisely or approximately) to the periodicity

47
New cards

partial periodicity

Only some segments contribute to the periodicity

48
New cards
49
New cards
50
New cards
51
New cards
52
New cards
53
New cards
54
New cards
55
New cards

text datasets (document datasets)

  • Large collections of documents from various sources: news articles, research papers, books, digital libraries, e-mail messages, Web pages, library databases, etc.

  • Data stored is usually semi-structured

  • Traditional information retrieval techniques become inadequate for the increasingly vast amounts of text data

56
New cards

information retrieval vs. database systems

  • Some DB problems are not present in IR, e.g., update, transaction management, complex objects, concurrency control, recovery

  • Some IR problems are not addressed well in DBMS, e.g., unstructured documents, approximate search using keywords and the notion of relevance

57
New cards

information retrieval

  • A field developed in parallel with database systems

  • Information is organised into (a large number of) documents

58
New cards

information retrieval problem

locating relevant documents based on user input, such as keywords or example documents

59
New cards

Typical IR systems

  • Online library catalogue systems

  • Online document management systems

60
New cards

precision

The percentage of retrieved documents that are in fact relevant to the query (i.e., “correct” responses)

61
New cards

recall

The percentage of documents that are relevant to the query and were, in fact, retrieved
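
The two measures can be computed directly from the retrieved and relevant document sets; a small sketch (document ids invented for illustration):

```python
# Precision and recall over sets of document ids.

def precision_recall(retrieved, relevant):
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

retrieved = {1, 2, 3, 4}        # what the system returned
relevant  = {2, 3, 5}           # what the user actually wanted
p, r = precision_recall(retrieved, relevant)
print(p, r)   # precision = 2/4, recall = 2/3
```

Note the trade-off: returning every document gives perfect recall but poor precision, and vice versa.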

62
New cards

Major difficulties of keyword-based retrieval

  • Synonymy: A keyword T does not appear anywhere in the document, even though the document is closely related to T, e.g., data mining

  • Polysemy: The same keyword may mean different things in different contexts, e.g., mining

63
New cards

Similarity-Based Retrieval in Text Dataset

  • The problem: Find similar documents based on a set of common keywords

  • Answer: Should be based on the degree of relevance, considering the nearness of the keywords, relative frequency of the keywords, etc.

64
New cards

word stem

Several words are small syntactic variants of each other since they share a common word stem

65
New cards

similarity metrics

  • measure the closeness of a document to a query (a set of keywords)

  • Relative term occurrences
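
One widely used metric of this kind is cosine similarity between term-frequency vectors; a small sketch (the specific metric and the toy vectors are my illustration, not named on the card):

```python
# Cosine similarity of a document and a query in term-frequency space.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

doc   = [3, 1, 0, 2]   # frequencies for terms ("data", "mining", "music", "web")
query = [1, 1, 0, 0]   # query mentions "data" and "mining"
print(round(cosine(doc, query), 3))   # 0.756
```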

66
New cards

latent semantic indexing

  • Similar documents have similar word frequencies

  • Difficulty: the size of the term-frequency matrix is very large

  • Use singular value decomposition (SVD) techniques to reduce the size of the frequency table

  • Retain the K most significant rows of the frequency table

67
New cards

method of latent semantic indexing

  • Create a term frequency matrix, freq_matrix

  • SVD construction: Compute the singular value decomposition of freq_matrix by splitting it into 3 matrices, U, S, V

  • Vector identification: For each document d, replace its original document vector by a new vector excluding the eliminated terms

  • Index creation: Store the set of all vectors, indexed by one of a number of techniques
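
The SVD step can be sketched with NumPy (the toy term-document matrix and the choice k = 2 are invented for illustration):

```python
# Latent semantic indexing sketch: truncated SVD of a term-document matrix.
import numpy as np

# Rows = terms, columns = documents.
freq_matrix = np.array([
    [2, 0, 1, 0],   # "mining"
    [1, 0, 1, 0],   # "data"
    [0, 3, 0, 2],   # "music"
    [0, 2, 0, 3],   # "tempo"
], dtype=float)

U, S, Vt = np.linalg.svd(freq_matrix, full_matrices=False)

k = 2                                   # keep the k most significant singular values
approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Each document is now a k-dimensional vector instead of a full
# term-frequency column: the columns of diag(S_k) @ Vt_k.
doc_vectors = np.diag(S[:k]) @ Vt[:k, :]
print(doc_vectors.shape)                # (2, 4)
print(np.linalg.norm(freq_matrix - approx))   # small reconstruction error
```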

68
New cards

inverted index

  • Maintains two hash- or B+-tree indexed tables:

Document table: a set of document records <doc_id, posting_list>

Term table: a set of term records <term, posting_list>

  • Answer query: Find all docs associated with one or a set of terms

  • Advantage: easy to implement

  • Disadvantage: does not handle synonymy and polysemy well, and posting lists could be too long (storage could be very large)
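
A minimal in-memory sketch of the idea, using sets as posting lists (the toy corpus is invented; a real system would use the hash/B+-tree tables described above):

```python
# Inverted index: term -> posting list of doc_ids containing the term.
from collections import defaultdict

docs = {
    1: "data mining finds patterns",
    2: "text mining of web data",
    3: "music tempo and melody",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def query(*terms):
    """Docs containing ALL given terms (intersection of posting lists)."""
    result = index[terms[0]].copy()
    for t in terms[1:]:
        result &= index[t]
    return sorted(result)

print(query("mining"))          # [1, 2]
print(query("data", "mining"))  # [1, 2]
```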

69
New cards

signature file

  • A signature is a representation of an ordered list of terms that describe the document

  • Order is obtained by frequency analysis, stemming and stop lists

  • Associate a signature with each document

70
New cards

similarity detection

  • Cluster documents by a common author

  • Cluster documents containing information from a common source

71
New cards

link analysis

unusual correlation between entities

72
New cards

sequence analysis

predicting a recurring event

73
New cards

anomaly detection

find information that violates usual patterns

74
New cards

hypertext analysis

patterns in anchors/links: anchor text correlations with linked objects

75
New cards

Keyword-based association analysis

Collect sets of keywords or terms that occur frequently together and then find the association or correlation relationships among them

76
New cards

pre-process

Process the text data by parsing, stemming, removing stop words, etc

77
New cards

association mining algorithms

  • Consider each document as a transaction

  • View a set of keywords in the document as a set of items in the transaction
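
Treating documents as transactions, a one-pass count of frequent keyword pairs can be sketched as follows (toy documents and min_support invented; a full association miner would iterate to larger itemsets):

```python
# Count keyword pairs co-occurring in at least min_support documents.
from collections import Counter
from itertools import combinations

docs = [
    {"data", "mining", "cluster"},
    {"data", "mining", "web"},
    {"music", "tempo"},
    {"data", "web"},
]

min_support = 2
pair_counts = Counter()
for keywords in docs:                        # each document = one transaction
    for pair in combinations(sorted(keywords), 2):
        pair_counts[pair] += 1

frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)   # {('data', 'mining'): 2, ('data', 'web'): 2}
```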

78
New cards

term level association mining

  • No need for human effort in tagging documents

  • The number of meaningless results and the execution time is greatly reduced

79
New cards

Automatic document classification

  • Training set: Human experts generate a training data set

  • Classification: The computer system discovers the classification rules

  • Application: The discovered rules can be applied to classify new/unknown documents

80
New cards

Association-Based Document Classification

  • Extract keywords and terms by information retrieval and simple association analysis techniques

  • Obtain concept hierarchies of keywords and terms using: available term classes such as WordNet, expert knowledge, some keyword classification systems

  • Classify documents in the training set into class hierarchies

  • Apply term association mining method to discover sets of associated terms

  • Use terms to maximally distinguish one class of documents from others

  • Derive a set of association rules associated with each document class

  • Order the classification rules based on their occurrence frequency and discriminative power

  • Use the rules to classify new documents

81
New cards

Document Clustering

  • Goal: Automatically group related documents based on their content

  • Requires no training sets or predetermined taxonomies; generates a taxonomy at runtime

  • Major steps: Pre-processing - remove stop words, stem, feature extraction, lexical analysis; Hierarchical clustering - compute similarities, apply clustering algorithms

82
New cards

challenges of mining the WWW

  • Too large for effective data warehousing and data mining

  • Too complex and heterogeneous: no standards and structure

83
New cards

Web search engines are index-based

  • Search the Web, index Web pages, and build and store huge keyword-based indices

  • Help to locate sets of Web pages containing certain keywords

84
New cards

Deficiencies of Web Search Engines

  • A topic of any breadth may easily contain hundreds of thousands of documents

  • Many documents that are highly relevant to a topic may not contain keywords defining them (polysemy)

85
New cards

What you search for in web mining

  • Web access patterns

  • Web structures

  • Regularity and dynamics of Web content

86
New cards

Web mining: A more challenging task

  • The “abundance” problem

  • Limited coverage of the Web: hidden Web sources, majority of data in DBMS

  • Limited query interface based on keyword-oriented search

  • Limited customisation to individual users

87
New cards

finding authoritative web pages

Retrieving pages that are not only relevant, but also of high quality, or authoritative on the topic

88
New cards

hyperlinks can infer the notion of authority

  • The Web consists not only of pages, but also of hyperlinks pointing from one page to another

  • These hyperlinks contain an enormous amount of latent human annotation

  • A hyperlink pointing to another Web page can be considered as the author's endorsement of that page

89
New cards

problems with the web linkage structure

  • Not every hyperlink represents an endorsement: Other purposes are for navigation or for paid advertisements; If the majority of hyperlinks are for endorsement, the collective opinion will still dominate

  • One authority will rarely have its Web page point to its rival authorities in the same field

  • Authoritative pages are not often particularly descriptive

90
New cards

Hub

Set of Web pages that provides collections of links to authorities

91
New cards

HITS (Hyperlink-Induced Topic Search)

  • Explore interactions between hubs and authoritative pages

  • Use an index-based search engine to form the root set: Many of these pages are presumably relevant to the search topic; Some of them should contain links to the majority of the prominent authorities

  • Expand the root set into a base set: Include all of the pages that the root-set pages link to, and all of the pages that link to a page in the root set, up to a designated size cut-off

  • Apply weight-propagation

  • An iterative process that determines numerical estimates of hub and authority weight
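
The weight-propagation step can be sketched as the classic mutual-reinforcement iteration (toy link graph invented; a real system runs this over the base set described above):

```python
# HITS-style hub/authority iteration over an adjacency list.

def hits(links, iterations=50):
    pages = sorted(set(links) | {q for ps in links.values() for q in ps})
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority weight = sum of hub weights of pages linking to it
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, ()))
                for p in pages}
        # hub weight = sum of authority weights of the pages it links to
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        # normalise so the weights stay bounded across iterations
        na = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return hub, auth

# h1 and h2 are hubs; a1 receives links from both, a2 from h1 only.
hub, auth = hits({"h1": ["a1", "a2"], "h2": ["a1"]})
print(max(auth, key=auth.get))   # a1
```

As expected, the page with more incoming hub endorsements (a1) gets the highest authority weight, and the page linking to more authorities (h1) gets the highest hub weight.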

92
New cards

The problem with systems based on HITS

Output a short list of the pages with large hub weights, and the pages with large authority weights for the given search topic

93
New cards

Systems based on the HITS algorithm

Clever, Google: achieve better quality search results than those generated by term-index engines such as AltaVista and those created by human ontologists such as Yahoo

94
New cards

drifting

when hubs contain multiple topics

95
New cards

topic hijacking

when many pages from a single Web site point to the same single popular site

96
New cards

class label

  • Assign a class label to each document from a set of predefined topic categories

  • Based on a set of examples of pre-classified documents

97
New cards

example of automatic classification of web documents

  • Use Yahoo!'s taxonomy and its associated documents as training and test sets

  • Derive a Web document classification scheme

  • Use the scheme to classify new Web documents by assigning categories from the same taxonomy

98
New cards

multilayered web information base

  • Layer0: the Web itself

  • Layer1: the Web page descriptor layer

Contains descriptive information for pages on the Web

An abstraction of Layer0: substantially smaller but still rich enough to preserve most of the interesting, general information

Organised into dozens of semi-structured classes: document, person, organisation, ads, directory, sales, software, game, stocks, library_catalog, geographic_data, scientific_data, etc.

  • Layer2 and up: various Web directory services constructed on top of Layer1, providing multidimensional, application-specific services

99
New cards

Potential problem of XML

can help solve heterogeneity for vertical applications, but the freedom to define tags can make horizontal applications on the Web more heterogeneous

100
New cards

benefits of multi-layered meta-web

  • Multi-dimensional Web info summary analysis

  • Approximate and intelligent query answering

  • Web high-level query answering (WebSQL, WebML)

  • Web content and structure mining

  • Observing the dynamics/evolution of the Web