Data Mining: Concepts and Techniques
1
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
3
Clustering for Data Understanding and
Applications
Biology: taxonomy of living things: kingdom, phylum, class, order,
family, genus and species
Information retrieval: document clustering
Land use: Identification of areas of similar land use in an earth
observation database
Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
City-planning: Identifying groups of houses according to their house
type, value, and geographical location
Earth-quake studies: Observed earth quake epicenters should be
clustered along continent faults
Climate: understanding Earth's climate by finding patterns in atmospheric
and ocean data
Economic Science: market research
4
Clustering as a Preprocessing Tool (Utility)
Summarization:
Preprocessing for regression, PCA, classification, and
association analysis
Compression:
Image processing: vector quantization
Finding K-nearest Neighbors
Localizing search to one or a small number of clusters
Outlier detection
Outliers are often viewed as those “far away” from any
cluster
5
Quality: What Is Good Clustering?
6
Measure the Quality of Clustering
Dissimilarity/Similarity metric
Similarity is expressed in terms of a distance function,
typically metric: d(i, j)
The definitions of distance functions are usually rather
different for interval-scaled, boolean, categorical, ordinal,
ratio, and vector variables
Weights should be associated with different variables
based on applications and data semantics
Quality of clustering:
There is usually a separate “quality” function that
measures the “goodness” of a cluster.
It is hard to define “similar enough” or “good enough”
The answer is typically highly subjective
7
Considerations for Cluster Analysis
Partitioning criteria
Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
Separation of clusters
Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
Similarity measure
Distance-based (e.g., Euclidean, road network, vector) vs.
connectivity-based (e.g., density or contiguity)
Clustering space
Full space (often when low dimensional) vs. subspaces (often in
high-dimensional clustering)
8
Requirements and Challenges
Scalability
Clustering all the data instead of only samples
Constraint-based clustering
User may give inputs on constraints
Use domain knowledge to determine input parameters
Interpretability and usability
Others
Discovery of clusters with arbitrary shape
High dimensionality
9
Major Clustering Approaches (I)
Partitioning approach:
Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
Hierarchical approach:
Create a hierarchical decomposition of the set of data (or objects) using some criterion
Density-based approach:
Based on connectivity and density functions
Grid-based approach:
Based on a multiple-level granularity structure
10
Major Clustering Approaches (II)
Model-based:
A model is hypothesized for each of the clusters, and the aim is to find the best fit of the data to the given model
Frequent pattern-based:
Based on the analysis of frequent patterns
User-guided or constraint-based:
Clustering by considering user-specified or application-specific
constraints
Typical methods: COD (obstacles), constrained clustering
Link-based clustering:
Objects are often linked together in various ways
11
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
Partitioning criterion: minimize the sum of squared errors, where $c_i$ is the center of cluster $C_i$:
$E = \sum_{i=1}^{k} \sum_{p \in C_i} (p - c_i)^2$
Given k, find a partition of k clusters that optimizes the chosen partitioning
criterion
Global optimal: exhaustively enumerate all partitions
Heuristic methods: k-means and k-medoids algorithms
k-means (MacQueen’67, Lloyd’57/’82): Each cluster is represented by
the center of the cluster
k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the objects in
the cluster
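To make the partitioning idea concrete, here is a minimal NumPy sketch of the Lloyd-style k-means iteration described above (alternate nearest-center assignment and centroid update); the data, function name, and parameters are illustrative, not from the text.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd-style k-means: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # pick k initial centers
    for _ in range(n_iter):
        # assign each point to the nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [5.0, 7.0], [3.5, 5.0], [4.5, 5.0]])
labels, centers = kmeans(X, k=2)
print(labels, centers)
```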
13
The K-Means Clustering Method
14
An Example of K-Means Clustering
K=2
16
Variations of the K-Means Method
17
What Is the Problem of the K-Means Method?
The k-means algorithm is sensitive to outliers: an object with an extremely large value may substantially distort the distribution of the data, which motivates the k-medoids idea of using the most centrally located object in a cluster instead of the mean
[Figure: two 10 × 10 scatter plots contrasting the resulting clusterings on the same data.]
18
PAM: A Typical K-Medoids Algorithm
Total Cost = 20
[Figure: three 10 × 10 scatter plots illustrating the PAM steps below.]
Arbitrarily choose k objects as the initial medoids
Assign each remaining object to the nearest medoid
Do loop, until no change:
Randomly select a non-medoid object
Compute the total cost of swapping a medoid with the selected object
If the quality is improved, perform the swap
19
The K-Medoid Clustering Method
20
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
Cluster Analysis: Basic Concepts
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Evaluation of Clustering
Summary
21
Hierarchical Clustering
Use distance matrix as clustering criteria. This method
does not require the number of clusters k as an input, but
needs a termination condition
[Figure: agglomerative clustering (AGNES) runs from step 0 to step 4, merging {a, b} into ab, {d, e} into de, {c, de} into cde, and finally {ab, cde} into abcde; divisive clustering (DIANA) runs the same hierarchy in reverse, from step 4 back to step 0.]
22
AGNES (Agglomerative Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical packages, e.g., Splus
Use the single-link method and the dissimilarity matrix
Merge nodes that have the least dissimilarity
Go on in a non-descending fashion
Eventually all nodes belong to the same cluster
[Figure: three 10 × 10 scatter plots showing successive single-link merge steps on the example data.]
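A hedged sketch of AGNES-style single-link agglomeration using SciPy's hierarchical-clustering routines (assuming SciPy is available); the toy points and the cut into three clusters are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy 2-D points; AGNES-style single-link agglomeration on the dissimilarity matrix.
X = np.array([[1, 1], [1.2, 1.1], [5, 5], [5.1, 4.9], [9, 1]])

Z = linkage(X, method='single', metric='euclidean')  # merge least-dissimilar clusters first
labels = fcluster(Z, t=3, criterion='maxclust')      # cut the dendrogram into 3 clusters
print(labels)
```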
23
Dendrogram: Shows How Clusters are Merged
24
DIANA (Divisive Analysis)
Inverse order of AGNES: starts with all objects in one cluster and splits until eventually each object forms its own cluster
[Figure: three 10 × 10 scatter plots showing the data being successively split into smaller clusters.]
25
Distance between Clusters
Centroid: distance between the centroids of two clusters, i.e., dist(Ki, Kj)
= dist(Ci, Cj)
Medoid: distance between the medoids of two clusters, i.e., dist(Ki, Kj) =
dist(Mi, Mj)
Medoid: a chosen, centrally located object in the cluster
26
Centroid, Radius and Diameter of a Cluster
(for numerical data sets)
Centroid: the “middle” of a cluster: $C_m = \frac{\sum_{i=1}^{N} t_{ip}}{N}$
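A small NumPy sketch computing these quantities for a toy cluster. Only the centroid formula survives above; the radius (root-mean-square distance of members to the centroid) and diameter (root-mean-square pairwise distance) used below follow the standard definitions and are stated here as an assumption.

```python
import numpy as np

pts = np.array([[3, 4], [2, 6], [4, 5], [4, 7], [3, 8]], dtype=float)
N = len(pts)

centroid = pts.mean(axis=0)                                   # C_m = sum(t_ip) / N
radius = np.sqrt(((pts - centroid) ** 2).sum() / N)           # rms distance of members to centroid
pair_sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # all pairwise squared distances
diameter = np.sqrt(pair_sq.sum() / (N * (N - 1)))             # rms pairwise distance within the cluster

print(centroid, radius, diameter)
```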
27
Extensions to Hierarchical Clustering
Major weakness of agglomerative clustering methods
Can never undo what was done previously
Do not scale well: time complexity of at least O(n²), where n
is the number of total objects
Integration of hierarchical & distance-based clustering
BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
CHAMELEON (1999): hierarchical clustering using
dynamic modeling
28
BIRCH (Balanced Iterative Reducing and
Clustering Using Hierarchies)
Zhang, Ramakrishnan & Livny, SIGMOD’96
Incrementally construct a CF (Clustering Feature) tree, a hierarchical
data structure for multiphase clustering
Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent clustering
structure of the data)
Phase 2: use an arbitrary clustering algorithm to cluster the leaf
nodes of the CF-tree
Scales linearly: finds a good clustering with a single scan and improves
the quality with a few additional scans
Weakness: handles only numeric data, and is sensitive to the order of the
data records
29
Clustering Feature Vector in BIRCH
Clustering feature: CF = (N, LS, SS)
N: number of data points
LS: linear sum of the N points: $\sum_{i=1}^{N} X_i$
SS: square sum of the N points: $\sum_{i=1}^{N} X_i^2$
Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8), CF = (5, (16, 30), (54, 190))
[Figure: the five points plotted on a 10 × 10 grid.]
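A quick NumPy check of the CF entries for the five points above (here SS is kept per dimension, matching the (54, 190) on the slide):

```python
import numpy as np

pts = np.array([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)])

N = len(pts)                       # number of points
LS = pts.sum(axis=0)               # linear sum of the points
SS = (pts ** 2).sum(axis=0)        # square sum of the points, per dimension

print(N, LS.tolist(), SS.tolist())  # -> 5 [16, 30] [54, 190]
```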
30
CF-Tree in BIRCH
Clustering feature:
Summary of the statistics for a given subcluster: the 0th, 1st, and 2nd statistical moments of the subcluster
A CF tree is a height-balanced tree that stores the clustering features; non-leaf nodes store the sums of the CFs of their children
31
The CF Tree Structure
[Figure: a CF tree with a root, non-leaf nodes holding entries CF1, CF2, CF3, …, CF5 with child pointers child1, child2, child3, …, child5, and leaf nodes below.]
32
The Birch Algorithm
Cluster diameter: $D = \sqrt{\frac{1}{n(n-1)} \sum_{i \neq j} (x_i - x_j)^2}$
If entry diameter > max_diameter, then split leaf, and possibly parents
Algorithm is O(n)
Concerns
Sensitive to insertion order of data points
Since the size of leaf nodes is fixed, the resulting clusters may not be natural
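A short sketch evaluating the diameter formula above directly on the five example points from the CF slide, and, as an extra assumption not stated here, the algebraically equivalent value obtained from the CF entries (N, LS, SS) alone:

```python
import numpy as np

pts = np.array([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)], dtype=float)
n = len(pts)

# Direct evaluation of D = sqrt( sum_{i != j} (x_i - x_j)^2 / (n(n-1)) )
pair_sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
D_direct = np.sqrt(pair_sq.sum() / (n * (n - 1)))

# Same value from the CF entries alone (assumed identity, not stated on the slide)
LS = pts.sum(axis=0)
SS = (pts ** 2).sum()
D_cf = np.sqrt((2 * n * SS - 2 * LS @ LS) / (n * (n - 1)))

print(D_direct, D_cf)   # both ~2.53
```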
33
CHAMELEON: Hierarchical Clustering Using
Dynamic Modeling (1999)
CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999
Measures the similarity based on a dynamic model
Two clusters are merged only if the interconnectivity and
closeness (proximity) between two clusters are high
relative to the internal interconnectivity of the clusters
and closeness of items within the clusters
Graph-based, and a two-phase algorithm
1. Use a graph-partitioning algorithm: cluster objects into a
large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm:
find the genuine clusters by repeatedly combining these
sub-clusters
34
Overall Framework of CHAMELEON
Construct a sparse k-NN graph from the data set: p and q are connected if q is among the top-k closest neighbors of p
Partition the graph into a large number of small sub-clusters
Merge the partitions into the final clusters, guided by:
Relative interconnectivity: connectivity of c1 and c2 over their internal connectivity
Relative closeness: closeness of c1 and c2 over their internal closeness
35
CHAMELEON (Clustering Complex Objects)
36
Probabilistic Hierarchical Clustering
Algorithmic hierarchical clustering
Nontrivial to choose a good distance measure
Hard to handle missing attribute values
Optimization goal not clear: heuristic, local search
Probabilistic hierarchical clustering
Use probabilistic models to measure distances between clusters
Generative model: Regard the set of data objects to be clustered as
a sample of the underlying data generation mechanism to be
analyzed
Easy to understand, same efficiency as algorithmic agglomerative
clustering method, can handle partially observed data
In practice, the generative models are assumed to adopt common distribution
functions, e.g., the Gaussian or Bernoulli distribution, governed by parameters
37
Generative Model
Given a set of 1-D points X = {x1, …, xn} for clustering
analysis, and assuming they are generated by a
Gaussian distribution:
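The formula itself did not survive extraction; the standard form being referred to is the Gaussian density of a single point and the resulting likelihood of the whole sample, whose maximization gives the parameter estimates (stated here as an assumption consistent with the slide's setup):

```latex
P(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},
\qquad
L\big(\mathcal{N}(\mu,\sigma^2) : X\big) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}
```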
38
A Probabilistic Hierarchical Clustering Algorithm
For a set of objects partitioned into m clusters C1, . . . ,Cm, the quality can
be measured by,
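The measure referred to here was lost in extraction; a standard formulation, stated as an assumption consistent with the generative-model setup above, is the product of the likelihoods of the individual clusters, with the corresponding inter-cluster distance used to decide merges:

```latex
Q(\{C_1, \ldots, C_m\}) = \prod_{i=1}^{m} P(C_i),
\qquad
dist(C_i, C_j) = -\log \frac{P(C_i \cup C_j)}{P(C_i)\, P(C_j)}
```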
40
Density-Based Clustering Methods
Clustering based on density (local cluster criterion), such as
density-connected points
Major features:
Discover clusters of arbitrary shape
Handle noise
One scan
Need density parameters as termination condition
Several interesting studies:
DBSCAN: Ester, et al. (KDD’96)
41
Density-Based Clustering: Basic Concepts
Two parameters:
Eps: Maximum radius of the neighbourhood
MinPts: Minimum number of points in an Eps-
neighbourhood of that point
NEps(p): {q belongs to D | dist(p,q) ≤ Eps}
Directly density-reachable: A point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
p belongs to NEps(q), and
q satisfies the core point condition: |NEps(q)| ≥ MinPts
[Figure: p inside the Eps-neighborhood of q, with MinPts = 5 and Eps = 1 cm.]
42
Density-Reachable and Density-Connected
Density-reachable:
A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn with p1 = q and pn = p such that each pi+1 is directly density-reachable from pi
Density-connected:
A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
[Figure: p reached from q through a chain of points; p and q both density-reachable from a common point o.]
43
DBSCAN: Density-Based Spatial Clustering of
Applications with Noise
Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial databases
with noise
[Figure: core, border, and outlier points in DBSCAN, with Eps = 1 cm and MinPts = 5.]
44
DBSCAN: The Algorithm
Arbitrarily select a point p
Retrieve all points density-reachable from p w.r.t. Eps and
MinPts
If p is a core point, a cluster is formed
If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the database
Continue the process until all of the points have been
processed
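A hedged sketch of the same procedure using scikit-learn's DBSCAN implementation (assumed available); eps and min_samples play the roles of Eps and MinPts, and noise points receive label -1.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one isolated point that should come out as noise (label -1).
X = np.array([[1.0, 1.0], [1.1, 1.2], [0.9, 1.0],
              [8.0, 8.0], [8.1, 8.2], [7.9, 8.1],
              [4.0, 15.0]])

db = DBSCAN(eps=0.5, min_samples=2).fit(X)   # eps ~ Eps, min_samples ~ MinPts
print(db.labels_)                            # e.g., [0 0 0 1 1 1 -1]
```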
45
DBSCAN: Sensitive to Parameters
46
OPTICS: A Cluster-Ordering Method (1999)
47
OPTICS: Some Extension from DBSCAN
Index-based:
k = number of dimensions, N = 20, p = 75%, M = N(1 − p) = 5
Complexity: O(N log N)
Core-distance of an object o: the minimum Eps such that o is a core point
Reachability-distance(p, o) = max(core-distance(o), d(o, p))
Example (MinPts = 5, Eps = 3 cm): r(p1, o) = 2.8 cm, r(p2, o) = 4 cm
48
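A hedged sketch using scikit-learn's OPTICS implementation (assumed available) to produce the cluster ordering and reachability distances that the reachability plot on the next slide visualizes; the synthetic two-blob data is illustrative.

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
               rng.normal(loc=(5, 5), scale=0.3, size=(50, 2))])

opt = OPTICS(min_samples=5, max_eps=3.0).fit(X)

# The reachability plot: reachability distances in the computed cluster order.
order = opt.ordering_
reach = opt.reachability_[order]
print(reach[:10])   # valleys in this sequence correspond to clusters
```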
Reachability-distance plot
[Figure: reachability distance (some values undefined) plotted against the cluster order of the objects; clusters appear as valleys in the plot.]
49
Density-Based Clustering: OPTICS & Its Applications
50
DENCLUE: Using Statistical Density Functions
Influence of y on x (influence function):
$f_{Gaussian}(x, y) = e^{-\frac{d(x,y)^2}{2\sigma^2}}$
Total influence (density) of the data set D = {x_1, …, x_N} on x:
$f_{Gaussian}^{D}(x) = \sum_{i=1}^{N} e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$
Gradient of the density at x in the direction of x_i:
$\nabla f_{Gaussian}^{D}(x, x_i) = \sum_{i=1}^{N} (x_i - x)\, e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$
Major features
Solid mathematical foundation
Good for data sets with large amounts of noise
Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
Significantly faster than existing algorithms (e.g., DBSCAN)
But needs a large number of parameters
51
Denclue: Technical Essence
Uses grid cells but only keeps information about grid cells that do
actually contain data points and manages these cells in a tree-based
access structure
Influence function: describes the impact of a data point within its
neighborhood
Overall density of the data space can be calculated as the sum of the
influence function of all data points
Clusters can be determined mathematically by identifying density
attractors
Density attractors are local maxima of the overall density function
Center defined clusters: assign to each density attractor the points
density attracted to it
Arbitrary shaped cluster: merge density attractors that are connected
through paths of high density (> threshold)
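A minimal NumPy sketch of the Gaussian density function above and a normalized-gradient hill-climbing step toward a density attractor; the step size, stopping rule, and data are illustrative assumptions, not DENCLUE's exact procedure.

```python
import numpy as np

def density(x, data, sigma=1.0):
    """Overall Gaussian density at x: sum of the influences of all data points."""
    d2 = ((data - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

def hill_climb(x, data, sigma=1.0, step=0.2, n_steps=50):
    """Follow the density gradient from x toward a density attractor (illustrative)."""
    for _ in range(n_steps):
        d2 = ((data - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        grad = ((data - x) * w[:, None]).sum(axis=0)   # sum_i (x_i - x) * influence
        if np.linalg.norm(grad) < 1e-6:
            break
        x = x + step * grad / np.linalg.norm(grad)     # normalized gradient step
    return x

data = np.array([[0, 0], [0.2, 0.1], [-0.1, 0.2], [5, 5], [5.1, 4.9]])
attractor = hill_climb(np.array([1.0, 1.0]), data)
print(attractor, density(attractor, data))
```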
52
Density Attractor
53
Center-Defined and Arbitrary
54
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
Cluster Analysis: Basic Concepts
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Evaluation of Clustering
Summary
55
Grid-Based Clustering Method
56
STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB’97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different
levels of resolution
[Figure: the hierarchical grid, from the 1st (top) layer down through the (i−1)-st layer to the i-th layer, at increasing resolution.]
57
The STING Clustering Method
Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
Parameters of higher level cells can be easily calculated
from parameters of lower level cell
count, mean, s (standard deviation), min, max
Advantages: query-independent, easy to parallelize, incremental update; query processing takes O(K), where K is the number of grid cells at the lowest level
Disadvantages: all the cluster boundaries are either horizontal or vertical; no diagonal boundary is detected
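A small sketch of the bottom-up parameter computation described above: a parent cell's count, mean, min, and max derived from its children's pre-stored statistics (the dictionary layout is illustrative).

```python
def merge_cells(children):
    """Combine the pre-stored statistics of child cells into the parent cell."""
    count = sum(c["count"] for c in children)
    mean = sum(c["count"] * c["mean"] for c in children) / count   # count-weighted mean
    return {
        "count": count,
        "mean": mean,
        "min": min(c["min"] for c in children),
        "max": max(c["max"] for c in children),
    }

children = [
    {"count": 10, "mean": 2.0, "min": 0.5, "max": 4.0},
    {"count": 30, "mean": 5.0, "min": 1.0, "max": 9.0},
]
print(merge_cells(children))   # parent cell: count=40, mean=4.25, min=0.5, max=9.0
```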
59
CLIQUE (Clustering In QUEst)
61
[Figure: CLIQUE example on (age, salary, vacation) data: dense units are found in the 2-D subspaces (age, salary) and (age, vacation), with salary in units of $10,000, vacation in weeks, age ranging over 20–60, and density threshold = 3; intersecting the dense regions of the two subspaces yields a candidate dense region in the 3-D (age, salary, vacation) space.]
62
Strength and Weakness of CLIQUE
Strength
automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces

Determining the Number of Clusters
Elbow method
Use the turning point in the curve of the sum of within-cluster variance w.r.t. the number of clusters (see the sketch below)
Cross-validation method
E.g., for each point in the test set, find the closest centroid, and
use the sum of squared distances between all points in the test
set and their closest centroids to measure how well the model fits
the test set
For any k > 0, repeat it m times, compare the overall quality measure
w.r.t. different k's, and find the number of clusters that fits the data best
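A hedged sketch of the elbow heuristic using scikit-learn's KMeans (assumed available); inertia_ is the sum of within-cluster squared distances, and the "elbow" is the k after which this curve flattens.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.4, size=(40, 2)) for c in [(0, 0), (4, 4), (8, 0)]])

sse = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse.append(km.inertia_)              # sum of within-cluster squared distances

# Look for the "elbow": the k after which the SSE curve flattens out.
for k, e in zip(range(1, 9), sse):
    print(k, round(e, 1))
```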
66
Measuring Clustering Quality
68
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
73
References (3)
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to
Clustering. John Wiley & Sons, 1988.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining.
VLDB'94.
L. Parsons, E. Haque and H. Liu, Subspace Clustering for High Dimensional Data: A
Review, SIGKDD Explorations, 6(1), June 2004
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large
data sets. Proc. 1996 Int. Conf. on Pattern Recognition
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution
clustering approach for very large spatial databases. VLDB’98.
A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering
in Large Databases, ICDT'01.
A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles,
ICDE'01
H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in large data
sets, SIGMOD’02
W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to Spatial
Data Mining. VLDB'97
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : An efficient data clustering method
for very large databases. SIGMOD'96
X. Yin, J. Han, and P. S. Yu, “LinkClus: Efficient Clustering via Heterogeneous Semantic
Links”, VLDB'06
74
Slides unused in class
75
A Typical K-Medoids Algorithm (PAM)
(Same illustration as the earlier PAM slide: total cost = 20; arbitrarily choose k initial medoids, assign each remaining object to the nearest medoid, then repeatedly compute the total cost of swapping a medoid with a non-medoid object and perform the swap if it improves the quality, until no change.)
76
PAM (Partitioning Around Medoids) (1987)
78
What Is the Problem with PAM?
79
CLARA (Clustering Large Applications) (1990)
81
ROCK: Clustering Categorical Data
Major ideas
Use links to measure similarity/proximity
Not distance-based
Experiments
Congressional voting, mushroom data
82
Similarity Measure in ROCK
Traditional measures for categorical data may not work well, e.g.,
Jaccard coefficient
Example: Two groups (clusters) of transactions
C1. <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e},
{a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2. <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
Jaccard coefficient may lead to a wrong clustering result
Within C1: similarity ranges from 0.2 ({a, b, c}, {b, d, e}) to 0.5 ({a, b, c}, {a, b, d})
Across C1 and C2: it could be as high as 0.5 ({a, b, c}, {a, b, f})
Jaccard-coefficient-based similarity function: $Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$
Ex. Let T1 = {a, b, c}, T2 = {c, d, e}: $Sim(T_1, T_2) = \frac{|\{c\}|}{|\{a, b, c, d, e\}|} = \frac{1}{5} = 0.2$
83
Link Measure in ROCK
Clusters
C1:<a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b,
c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2: <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
Neighbors
Two transactions are neighbors if sim(T1, T2) > threshold
Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}
T1 connected to: {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {b,c,d}, {b,c,e}, {a,b,f},
{a,b,g}
T2 connected to: {a,c,d}, {a,c,e}, {a,d,e}, {b,c,e}, {b,d,e}, {b,c,d}
T3 connected to: {a,b,c}, {a,b,d}, {a,b,e}, {a,b,g}, {a,f,g}, {b,f,g}
Link Similarity
Link similarity between two transactions is the # of common neighbors
link(T1, T2) = 4, since they have 4 common neighbors
{a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}
link(T1, T3) = 3, since they have 3 common neighbors
{a, b, d}, {a, b, e}, {a, b, g}
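A short sketch reproducing the numbers on this slide: Jaccard similarity decides neighbors (the threshold is not given, so 0.3 is assumed here, which yields exactly the neighbor sets listed above), and link(T1, T2) counts common neighbors.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

C1 = [set(t) for t in ["abc", "abd", "abe", "acd", "ace",
                       "ade", "bcd", "bce", "bde", "cde"]]
C2 = [set(t) for t in ["abf", "abg", "afg", "bfg"]]
transactions = C1 + C2

THRESH = 0.3   # assumed threshold; chosen so the neighbor sets match the example above

def neighbors(t):
    return [frozenset(u) for u in transactions if u != t and jaccard(t, u) > THRESH]

def link(t1, t2):
    return len(set(neighbors(t1)) & set(neighbors(t2)))

T1, T2, T3 = set("abc"), set("cde"), set("abf")
print(jaccard(T1, T2))   # 0.2
print(link(T1, T2))      # 4 common neighbors
print(link(T1, T3))      # 3 common neighbors
```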
84
Aggregation-Based Similarity Computation
[Figure: two SimTrees ST1 and ST2; in ST2, nodes 4 and 5 are connected with similarity 0.2; leaf nodes 10, 11, 12 of ST1 (descendants of a) link to node 4 with similarities 0.9, 1.0, 0.8, and leaf nodes 13, 14 (descendants of b) link to node 5 with similarities 0.9, 1.0.]
For each node nk ∈ {n10, n11, n12} and nl ∈ {n13, n14}, their path-based similarity is simp(nk, nl) = s(nk, n4) · s(n4, n5) · s(n5, nl).
$sim(n_a, n_b) = \frac{\sum_{k=10}^{12} s(n_k, n_4)}{3} \cdot s(n_4, n_5) \cdot \frac{\sum_{l=13}^{14} s(n_5, n_l)}{2} = 0.9 \times 0.2 \times 0.95 = 0.171$
86
Computing Similarity with Aggregation
Average similarity and total weight are stored per linkage: a: (0.9, 3), b: (0.95, 2), with s(n4, n5) = 0.2, so sim(a, b) = 0.9 × 0.2 × 0.95 = 0.171 can be computed from the aggregates alone
87
Chapter 10. Cluster Analysis: Basic Concepts and
Methods
SimRank (Jeh & Widom, KDD'2002): two objects are similar if they are linked with the same or similar objects
[Figure: a linkage graph, e.g., author Mary linked to conference papers aaai04 and aaai05 of venue aaai.]
Issue: expensive to compute. For a dataset of N objects and M links, it takes O(N²) space and O(M²) time to compute all similarities.
89
Observation 1: Hierarchical Structures
[Figure: hierarchical structures over two linked types, e.g., a product hierarchy (All → grocery, electronics, apparel; electronics → TV, DVD, camera) and a corresponding hierarchy over Articles/Words.]
90
Observation 2: Distribution of Similarity
[Figure: histogram of pairwise similarity values among DBLP authors (x-axis: similarity value, 0 to about 0.24; y-axis: fraction of pairs, up to about 0.4); the vast majority of pairs have similarity close to 0.]
[Figure: part of a hierarchy over products: Canon A40 digital camera and Sony V3 digital camera under Digital Cameras, which sits under Consumer Electronics alongside TVs, next to Apparel.]
92
Similarity Defined by SimTree
Similarity between two sibling nodes n1 and n2 is attached to the edge between them (e.g., s(n1, n2) = 0.2 in the figure, with descendant nodes n7, n8, n9)
Path-based node similarity: simp(n7, n8) = s(n7, n4) × s(n4, n5) × s(n5, n8)
Similarity between two nodes is the average similarity between objects linked with them in other SimTrees
Adjustment ratio for x = (average similarity between x and all other nodes) / (average similarity between x's parent and all other nodes)
93
LinkClus: Efficient Clustering via Heterogeneous
Semantic Links
Method
Initialize a SimTree for objects of each type
Repeatedly update node similarities and adjust each tree's structure, assigning each node to the parent node it is most similar to
For details: X. Yin, J. Han, and P. S. Yu, “LinkClus: Efficient
Clustering via Heterogeneous Semantic Links”, VLDB'06
94
Initialization of SimTrees
Initializing a SimTree
Repeatedly find groups of tightly related nodes, which become the nodes of the next higher level
95
Finding Tight Groups by Freq. Pattern Mining
Finding tight groups is reduced to frequent pattern mining
The tightness of a group of nodes is the support of a frequent pattern
Example transactions linking to nodes n1, …, n4:
1: {n1};  2: {n1, n2};  3: {n2};  4: {n1, n2};  5: {n1, n2};  6: {n2, n3, n4};  7: {n4};  8: {n3, n4};  9: {n3, n4}
Tight groups found: g1 = {n1, n2}, g2 = {n3, n4}
Procedure of initializing a tree
Start from the leaf nodes (level 0)
At each level, group nodes that are similar to each other into parent nodes, under the constraint that each parent node can have at most c children
[Figure: a three-level SimTree with leaf nodes n7, n8, n9, mid-level nodes n4, n5, n6, and top-level nodes n1, n2, n3 (example edge similarities 0.9 and 0.8).]
97
Complexity
Updating similarities: O(M (log N)²) time, O(M + N) space
Adjusting tree structures: O(N) time, O(N) space
(N: number of objects, M: number of links)
98
Experiment: Email Dataset
F. Nielsen. Email dataset. www.imm.dtu.dk/~rem/data/Email-1431.zip
370 emails on conferences, 272 on jobs, and 789 spam emails
Accuracy: measured against manually labeled data, as the % of pairs of objects in the same cluster that share a common label

Approach     Accuracy   Time (s)
LinkClus     0.8026     1579.6
SimRank      0.7965     39160
ReCom        0.5711     74.6
F-SimRank    0.3688     479.7
CLARANS      0.4768     8.55
Approaches compared:
SimRank (Jeh & Widom, KDD 2002): Computing pair-wise similarities
SimRank with FingerPrints (F-SimRank): Fogaras & Rácz, WWW 2005
pre-computes a large sample of random paths from each object and uses
samples of two objects to estimate SimRank similarity
ReCom (Wang et al. SIGIR 2003)
Iteratively clustering objects using cluster labels of linked objects
99
WaveCluster: Clustering by Wavelet Analysis (1998)
100
The WaveCluster Algorithm
How to apply the wavelet transform to find clusters:
Summarize the data by imposing a multi-dimensional grid structure onto the data space
101
Quantization
& Transformation
Quantize data into m-D grid structure,
then wavelet transform
a) scale 1: high resolution
b) scale 2: medium resolution
c) scale 3: low resolution
102