main idea of CURE
Stops the creation of a cluster hierarchy if a level consists of k clusters
Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrary shaped clusters and avoids single-link effect
Drawbacks of square-error based clustering method
Consider only one point as representative of a cluster
Good only for clusters of convex shape, similar size and density, and only if k can be reasonably estimated
CURE: shrinking representative points
Shrink the multiple representative points towards the gravity centre by a fraction α
Multiple representatives capture the shape of the cluster
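As an illustration only (not CURE's actual implementation), a minimal sketch of shrinking representative points by a fraction α; the function name is made up, and using the representatives' own mean as the "gravity centre" is an assumption for the example.

```python
import numpy as np

def shrink_representatives(reps, alpha=0.3):
    """Move each representative point a fraction alpha towards the centre,
    which dampens the effect of outliers on the cluster's shape."""
    centre = reps.mean(axis=0)            # approximation of the gravity centre
    return reps + alpha * (centre - reps)

# Example: three representative points of one cluster
reps = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
print(shrink_representatives(reps, alpha=0.3))
```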
ROCK: RObust Clustering using linKs
Use links to measure similarity/proximity
Not distance based
Computational complexity
ROCK Algorithm
Draw a random sample of the data
Cluster the data using the link agglomerative approach
Label data on disk
Density-Based Clustering methods
Clustering based on density (local cluster criterion), such as density-connected points
Major features of Density-Based Clustering methods
Discover clusters of arbitrary shape
Handle noise
One scan
Need density parameters as termination condition
density-reachable
A point p is density-reachable from a point q with regard to Eps, MinPts if there is a chain of points p1, …, pn with p1 = q and pn = p such that pi+1 is directly density-reachable from pi
density-connected
A point p is density-connected to a point q with regard to Eps, MinPts if there is a point o such that both, p and q are density-reachable from o with regard to Eps and MinPts
DBSCAN
Density Based Spatial Clustering of Applications with Noise
Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial datasets with noise
DBSCAN algorithm
1. Arbitrarily select a point p.
2. Retrieve all points density-reachable from p with regard to Eps and MinPts.
3. If p is a core point, a cluster is formed.
4. If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the dataset.
5. Continue the process until all of the points have been processed.
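A minimal, illustrative DBSCAN sketch following these steps (brute-force neighbourhood search, no spatial index); function and variable names are chosen for the example.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1)            # -1 = noise / not yet assigned
    visited = np.zeros(n, dtype=bool)
    cluster_id = 0

    def region_query(i):
        # All points within Eps of point i (its Eps-neighbourhood, including i itself)
        dists = np.linalg.norm(points - points[i], axis=1)
        return list(np.where(dists <= eps)[0])

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbours = region_query(i)
        if len(neighbours) < min_pts:
            continue                   # border point or noise for now
        labels[i] = cluster_id         # i is a core point: grow a new cluster
        seeds = list(neighbours)
        while seeds:
            j = seeds.pop()
            if not visited[j]:
                visited[j] = True
                j_neighbours = region_query(j)
                if len(j_neighbours) >= min_pts:   # j is also a core point
                    seeds.extend(j_neighbours)
            if labels[j] == -1:
                labels[j] = cluster_id             # density-reachable from i
        cluster_id += 1
    return labels

# Tiny example: two dense groups plus one far-away noise point
data = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [50, 50]]
print(dbscan(data, eps=2.0, min_pts=2))
```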
spatial data
Generalise detailed geographic points into clustered regions, such as business, residential, industrial, or agricultural areas, according to land usage
Require the merge of a set of geographic areas by spatial operation
image data
Extracted by aggregation and/or approximation
Size, colour, shape, texture, orientation, and relative positions and structures of the contained objects or regions in the image
music data
Summarise its melody: based on the approximate patterns that repeatedly occur in the segment
Summarise its style: based on its tone, tempo, or the major musical instruments played
object identifier
Generalise to the lowest level of class in the class/subclass hierarchies
class composition hierarchies
generalise nested structured data
generalise only objects closely related in semantics to the current one
Construction and mining of object cubes
Extend the attribute-oriented induction method: apply a sequence of class-based generalisation operators on different attributes; continue until a small number of generalised objects remains that can be summarised concisely in high-level terms
For efficient implementation: Examine each attribute, generalise it to simple-valued data; Construct a multidimensional data cube (object cube)
multidimensional analysis strategy
Generalise the plan-base in different directions
Look for sequential patterns in the generalised plans
Derive high-level plans
dimension tables
Generalise plan-base in a multidimensional way using dimension table
cardinality
Use the number of distinct values at each level to determine the right level of generalisation (level-“planning”)
operators
Use operators merge “+”, option “[ ]” to further generalise patterns
spatial data warehouse
Integrated, subject-oriented, time-variant, and non-volatile spatial data repository for data analysis and decision making
Why is spatial data integration a big issue?
Structure-specific formats (raster- vs. vector-based, OO vs. relational models, different storage and indexing, etc.)
Vendor-specific formats (ESRI, MapInfo, Intergraph, etc.)
spatial data cube
multidimensional spatial database
Both dimensions and measures may contain spatial components
On-line integration
Collect and store pointers to spatial objects in a spatial data cube
Expensive and slow; needs efficient aggregation techniques
Pre-processing
Pre-compute and store all the possible combinations: huge space overhead
Pre-compute and store rough approximations in a spatial data cube: accuracy trade-off
selective computation
Only materialise those which will be accessed frequently: a reasonable choice
Two-step mining of spatial association
Step 1: Rough spatial computation (as a filter) - Use MBR (minimum bounding rectangle) for rough estimation
Step 2: Detailed spatial algorithm (as refinement) - apply only to those objects which have passed the rough spatial association test (i.e., whose support is no less than min_support)
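A small sketch of the step-1 rough filter, assuming each MBR is stored as an (xmin, ymin, xmax, ymax) tuple; only pairs passing this cheap test would be handed to the exact step-2 computation.

```python
def mbr_overlaps(a, b):
    """Rough spatial filter: do two minimum bounding rectangles intersect?"""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

# Hypothetical MBRs of two spatial objects
print(mbr_overlaps((0, 0, 5, 5), (4, 4, 9, 9)))   # True  -> candidate pair for step 2
print(mbr_overlaps((0, 0, 5, 5), (6, 6, 9, 9)))   # False -> pruned by the rough filter
```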
spatial classification
Analyse spatial objects to derive classification schemes, such as decision trees in relevance to certain spatial properties (district, highway, river, etc.)
Example: Classify regions in a province into rich vs. poor according to the average family income
Spatial trend analysis
Detect changes and trends along a spatial dimension
Study the trend of non-spatial or spatial data changing with space
Example: Observe the trend of changes of the climate or vegetation with the increasing distance from an ocean
time-series database
Consists of sequences of values or events changing with time
Data is recorded at regular intervals
Characteristic time-series components: trend, cycle, seasonal, irregular
time-series database applications
Financial: stock price, inflation
Biomedical: blood pressure
Meteorological: precipitation
time series data
can be illustrated as a time-series graph which describes a point moving over time
categories of time-series movement
Long-term or trend movements (trend curve)
Cyclic movements or cycle variations, e.g., business cycles
Seasonal movements or seasonal variations: almost identical patterns that a time series appears to follow during corresponding months of successive years
Irregular or random movements
the freehand method
Fit the curve by looking at the graph
Costly and barely reliable for large-scale data mining
the least-squares method
Find the curve minimising the sum of the squares of the deviation of points on the curve from the corresponding data point
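A brief sketch of fitting a linear trend by least squares, using NumPy's polyfit on a made-up monthly series.

```python
import numpy as np

# Hypothetical monthly series; fit a straight trend line y = a*t + b by
# minimising the sum of squared deviations (exactly what numpy.polyfit does).
t = np.arange(12)
y = np.array([10, 12, 11, 14, 15, 17, 16, 19, 21, 20, 23, 24], dtype=float)

a, b = np.polyfit(t, y, deg=1)    # slope and intercept of the trend curve
trend = a * t + b
print(f"trend: y = {a:.2f} * t + {b:.2f}")
```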
the moving-average method
Eliminate cyclic, seasonal and irregular patterns
Loss of end data
Sensitive to outliers
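A sketch of a simple moving average on a made-up series; note how the smoothed output is shorter than the input (the "loss of end data" mentioned above).

```python
import numpy as np

def moving_average(y, window=3):
    """Smooth a series with a simple moving average.
    The result has window - 1 fewer points than the input."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

y = np.array([10, 12, 11, 14, 15, 17, 16, 19, 21], dtype=float)
print(moving_average(y, window=3))
```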
estimations of cyclic variations
If (approximate) periodicity of cycles occurs, a cyclic index can be constructed in the same manner as seasonal indexes
estimation of irregular variations
By adjusting the data for trend, seasonal and cyclic variation
prediction
With systematic analysis of the trend, cyclic, seasonal, and irregular components, it is possible to make long- or short-term predictions with reasonable quality
similarity search
finds data sequences that differ only slightly from the given query sequence
whole matching similarity query
find a set of sequences that are similar to each other (as a whole)
subsequence matching similarity query
find sequences that contain sub-sequences that are similar to a given query sequence
typical applications for similarity search
Financial market & Market basket data analysis
Scientific databases
Medical diagnosis
subsequence matching
Breaking sequences: break each sequence into a set of windows of length w
Extracting the features: For each subsequence inside the window
Mapping: each sequence to a “trail” in the feature space
Dividing the trail: of each sequence into “sub-trails” and represent each of them with minimum bounding rectangle
Multi-piece assembly algorithm: to search for longer sequence matches
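A toy sketch of the first two steps (windowing and feature extraction). Real systems typically map each window to DFT or similar features, so the mean/std features used here are only stand-ins for illustration.

```python
import numpy as np

def window_features(seq, w):
    """Break a sequence into sliding windows of length w and compute a tiny
    feature vector (mean, std) per window; the resulting rows form the
    sequence's 'trail' in feature space."""
    seq = np.asarray(seq, dtype=float)
    feats = []
    for start in range(len(seq) - w + 1):
        window = seq[start:start + w]
        feats.append((window.mean(), window.std()))
    return np.array(feats)

print(window_features([1, 2, 4, 3, 5, 7, 6], w=3))
```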
full periodicity
Every point in time contributes (precisely or approximately) to the periodicity
partial periodicity
Only some segments contribute to the periodicity
text datasets (document datasets)
Large collections of documents from various sources: news articles, research papers, books, digital libraries, e-mail messages, Web pages, library databases, etc.
Data stored is usually semi-structured
Traditional information retrieval techniques become inadequate for the increasingly vast amounts of text data
information retrieval vs. database systems
Some DB problems are not present in IR, e.g., update, transaction management, complex objects, concurrency control, recovery
Some IR problems are not addressed well in DBMS, e.g., unstructured documents, approximate search using keywords and the notion of relevance
information retrieval
A field developed in parallel with database systems
Information is organised into (a large number of) documents
information retrieval problem
locating relevant documents based on user input, such as keywords or example documents
Typical IR systems
Online library catalogue systems
Online document management systems
precision
The percentage of retrieved documents that are in fact relevant to the query (i.e., "correct" responses)
recall
The percentage of documents that are relevant to the query and were, in fact, retrieved
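In the usual set notation, with {Relevant} the set of documents relevant to the query and {Retrieved} the set returned by the system, these two definitions correspond to:

```latex
\text{precision} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Retrieved}\}|}
\qquad
\text{recall} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Relevant}\}|}
```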
Major difficulties of keyword-based retrieval
Synonymy: A keyword T does not appear anywhere in the document, even though the document is closely related to T, e.g., data mining
Polysemy: The same keyword may mean different things in different contexts, e.g., mining
Similarity-Based Retrieval in Text Dataset
The problem: find similar documents based on a set of common keywords
Answer: should be based on the degree of relevance, measured by the nearness of the keywords, the relative frequency of the keywords, etc.
word stem
Several words are small syntactic variants of each other since they share a common word stem
similarity metrics
measure the closeness of a document to a query (a set of keywords)
Relative term occurrences
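One commonly used metric of this kind is cosine similarity over term-frequency vectors; a minimal sketch (the vocabulary and counts are made up for the example):

```python
import numpy as np

def cosine_similarity(doc_vec, query_vec):
    """Closeness of a document to a query in term-frequency space:
    cos = (d . q) / (|d| * |q|); 1.0 = same direction, 0.0 = no shared terms."""
    d = np.asarray(doc_vec, dtype=float)
    q = np.asarray(query_vec, dtype=float)
    denom = np.linalg.norm(d) * np.linalg.norm(q)
    return float(d @ q / denom) if denom else 0.0

# Term-frequency vectors over the vocabulary ["data", "mining", "music"]
print(cosine_similarity([3, 2, 0], [1, 1, 0]))   # high similarity
print(cosine_similarity([3, 2, 0], [0, 0, 5]))   # no overlap -> 0.0
```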
latent semantic indexing
Similar documents have similar word frequencies
Difficulty: the size of the term frequency matrix is very large
Use singular value decomposition (SVD) to reduce the size of the frequency table
Retain the K most significant rows of the frequency table
method of latent semantic indexing
Create a term frequency matrix, freq_matrix
SVD construction: compute the singular value decomposition of freq_matrix by splitting it into three matrices, U, S, V
Vector identification: for each document d, replace its original document vector by a new vector that excludes the eliminated terms
Index creation: store the set of all vectors, indexed by one of a number of techniques
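A small NumPy sketch of these steps on a made-up term-by-document frequency matrix, keeping K = 2 of the most significant dimensions:

```python
import numpy as np

# Hypothetical 4-term x 3-document frequency matrix (rows = terms, columns = documents)
freq_matrix = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 3, 1],
    [0, 1, 2],
], dtype=float)

# SVD construction: freq_matrix = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(freq_matrix, full_matrices=False)

# Vector identification: keep only the K most significant dimensions,
# giving one reduced vector per document
K = 2
doc_vectors = (np.diag(S[:K]) @ Vt[:K, :]).T
print(doc_vectors)
```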
inverted index
Maintains two hash- or B+-tree indexed tables:
Document table: a set of document records <doc_id, post_list>
Term table: a set of term records, <term, posting_list>
Answer query: Find all docs associated with one or a set of terms
Advantage: easy to implement
Disadvantage: does not handle synonymy and polysemy well, and posting lists could be too long (storage could be very large)
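A toy in-memory version of the idea (real systems keep the document table and term table on disk, indexed with hash or B+-tree structures); the sample documents are made up.

```python
from collections import defaultdict

docs = {
    1: "data mining finds patterns in data",
    2: "text mining works on document collections",
}

# Inverted index: term -> posting list (set of doc_ids containing the term)
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# Answer a query: documents containing both "mining" and "data"
print(index["mining"] & index["data"])   # {1}
```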
signature file
A signature is a representation of an ordered list of terms that describe the
document
Order is obtained by frequency analysis, stemming and stop lists
Associate a signature with each document
similarity detection
Cluster documents by a common author
Cluster documents containing information from a common source
link analysis
unusual correlation between entities
sequence analysis
predicting a recurring event
anomaly detection
find information that violates usual patterns
hypertext analysis
Patterns in anchors/links: anchor text correlations with linked objects
Keyword-based association analysis
Collect sets of keywords or terms that occur frequently together and then find the
association or correlation relationships among them
pre-process
Process the text data by parsing, stemming, removing stop words, etc
association mining algorithms
Consider each document as a transaction
View a set of keywords in the document as a set of items in the transaction
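A toy sketch of the document-as-transaction view, counting keyword pairs that co-occur in at least min_support documents (a crude, Apriori-style first pass; the documents and threshold are made up):

```python
from itertools import combinations
from collections import Counter

# Each transaction is the set of keywords extracted from one document
transactions = [
    {"data", "mining", "clustering"},
    {"data", "mining", "classification"},
    {"text", "mining", "retrieval"},
]

pair_counts = Counter()
for items in transactions:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

min_support = 2
print([pair for pair, c in pair_counts.items() if c >= min_support])
# e.g. [('data', 'mining')] - keywords that frequently occur together
```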
term level association mining
No need for human effort in tagging documents
The number of meaningless results and the execution time is greatly reduced
Automatic document classification
Training set: Human experts generate a training data set
Classification: The computer system discovers the classification rules
Application: The discovered rules can be applied to classify new/unknown
documents
Association-Based Document Classification
Extract keywords and terms by information retrieval and simple association analysis
techniques
Obtain concept hierarchies of keywords and terms using: Available term classes, such as WordNet, Expert knowledge, Some keyword classification systems
Classify documents in the training set into class hierarchies
Apply term association mining method to discover sets of associated terms
Use terms to maximally distinguish one class of documents from others
Derive a set of association rules associated with each document class
Order the classification rules based on their occurrence frequency and
discriminative power
Use the rules to classify new documents
Document Clustering
Goal: automatically group related documents based on their content
Require no training sets or predetermined taxonomies; generate a taxonomy at runtime
Major steps: pre-processing (remove stop words, stemming, feature extraction, lexical analysis), then hierarchical clustering (compute similarities and apply a clustering algorithm)
challenges of mining the WWW
Too large for effective data warehousing and data mining
Too complex and heterogeneous: no standards and structure
Web search engines are index-based
Search the Web, index Web pages, and build and store huge keyword-based indices
Help to locate sets of Web pages containing certain keywords
Deficiencies of Web Search Engines
A topic of any breadth may easily contain hundreds of thousands of documents
Many documents that are highly relevant to a topic may not contain the keywords defining them (the synonymy problem)
What you search for in web mining
Web access patterns
Web structures
Regularity and dynamics of Web content
Web mining: A more challenging task
The “abundance” problem
Limited coverage of the Web: hidden Web sources, majority of data in DBMS
Limited query interface based on keyword-oriented search
Limited customisation to individual users
finding authoritative web pages
Retrieving pages that are not only relevant, but also of high quality, or
authoritative on the topic
hyperlinks can infer the notion of authority
The Web consists not only of pages, but also of hyperlinks pointing from one
page to another
These hyperlinks contain an enormous amount of latent human annotation
A hyperlink pointing to another Web page can be considered as the author's endorsement of that page
problems with the web linkage structure
Not every hyperlink represents an endorsement: Other purposes are for navigation or for paid advertisements; If the majority of hyperlinks are for endorsement, the collective opinion will still dominate
One authority will rarely have its Web page point to its rival authorities in the same field
Authoritative pages are not often particularly descriptive
Hub
Set of Web pages that provides collections of links to authorities
HITS (Hyperlink-Induced Topic Search)
Explore interactions between hubs and authoritative pages
Use an index-based search engine to form the root set: Many of these pages are presumably relevant to the search topic; Some of them should contain links to the majority of the prominent authorities
Expand the root set into a base set: Include all of the pages that the root-set pages link to, and all of the pages that link to a page in the root set, up to a designated size cut-off
Apply weight-propagation
An iterative process that determines numerical estimates of hub and authority weight
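A minimal sketch of the weight-propagation iteration on a made-up link graph (adjacency[i, j] = 1 means page i links to page j); the fixed iteration count and graph are assumptions for the example.

```python
import numpy as np

adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
], dtype=float)

hubs = np.ones(4)
authorities = np.ones(4)
for _ in range(20):                        # iterate until (roughly) stable
    authorities = adjacency.T @ hubs       # good authorities are linked from good hubs
    hubs = adjacency @ authorities         # good hubs link to good authorities
    authorities /= np.linalg.norm(authorities)
    hubs /= np.linalg.norm(hubs)

print("authority weights:", np.round(authorities, 3))
print("hub weights:      ", np.round(hubs, 3))
```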
Output of systems based on HITS
A short list of the pages with large hub weights, and the pages with large authority weights, for the given search topic
Systems based on the HITS algorithm
Clever, Google: achieve better quality search results than those generated by
term-index engines such as AltaVista and those created by human ontologists
such as Yahoo
drifting
when hubs contain multiple topics
topic hijacking
when many pages from a single Web site point to the same single popular site
class label
Assign a class label to each document from a set of predefined topic categories
Based on a set of examples of pre-classified documents
example of automatic classification of web documents
Use Yahoo!'s taxonomy and its associated documents as training and test sets
Derive a Web document classification scheme
Use the scheme to classify new Web documents by assigning categories from the same taxonomy
multilayered web information base
Layer0: the Web itself
Layer1: the Web page descriptor layer
Contains descriptive information for pages on the Web
An abstraction of Layer0: substantially smaller but still rich enough to preserve most of the interesting, general information
Organised into dozens of semi-structured classes: document, person, organisation, ads, directory, sales, software, game, stocks, library_catalog, geographic_data, scientific_data, etc.
Layer2 and up: various Web directory services constructed on top of Layer1 provide multidimensional, application-specific services
Potential problem of XML
can help solve heterogeneity for vertical applications, but the freedom to define
tags can make horizontal applications on the Web more heterogeneous
benefits of multi-layered meta-web
Multi-dimensional Web info summary analysis
Approximate and intelligent query answering
Web high-level query answering (WebSQL, WebML)
Web content and structure mining
Observing the dynamics/evolution of the Web