Unit-4
Cluster Analysis
Clustering
A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters. Clustering is a form of unsupervised classification: no predefined classes are given.
General Applications of Clustering
Pattern Recognition
Spatial Data Analysis
create thematic maps in GIS by clustering feature spaces
detect spatial clusters and explain them in spatial data mining
Examples of Clustering Applications
Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Land use: Identification of areas of similar land use in an earth observation database
Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
City-planning: Identifying groups of houses according to their house type, value, and geographical location
Earthquake studies: Observed earthquake epicenters should be clustered along continental faults
What Is Good Clustering?
A good clustering method produces high-quality clusters with high intra-class similarity and low inter-class similarity. The quality of a clustering result depends on both the similarity measure used by the method and its implementation.
Similarity and Dissimilarity Between Objects
Distances are normally used to measure the similarity or dissimilarity between two data objects. A popular choice is the Minkowski distance:
$d(i,j) = \left( |x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q \right)^{1/q}$
where $i = (x_{i1}, \ldots, x_{ip})$ and $j = (x_{j1}, \ldots, x_{jp})$ are two p-dimensional data objects and q is a positive integer. If q = 1, d is the Manhattan distance.
Similarity and Dissimilarity Between Objects (Cont.)
If q = 2, d is the Euclidean distance:
$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$
Properties
$d(i,j) \ge 0$
$d(i,i) = 0$
$d(i,j) = d(j,i)$
$d(i,j) \le d(i,k) + d(k,j)$
One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures.
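To make the formulas concrete, here is a minimal Python sketch; the function name minkowski and the sample points are ours, not from the slides:

```python
def minkowski(x, y, q=2):
    """Minkowski distance between two p-dimensional points x and y.
    q = 1 gives the Manhattan distance, q = 2 the Euclidean distance."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

i = (1.0, 2.0, 3.0)
j = (4.0, 6.0, 3.0)
print(minkowski(i, j))        # 5.0  (Euclidean)
print(minkowski(i, j, q=1))   # 7.0  (Manhattan)
```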
Major Clustering Approaches
Partitioning algorithms: construct various partitions and evaluate them by some criterion
Hierarchical algorithms: create a hierarchical decomposition of the set of objects using some criterion
Density-based: based on connectivity and density functions
Grid-based: based on a multiple-level granularity structure
Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to the given model
Partitioning Algorithms: Basic Concept
Partitioning method: construct a partition of a database D of n objects into a set of k clusters
Types:
k-means (MacQueen’67): each cluster is represented by the center of the cluster
k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw’87): each cluster is represented by one of the objects in the cluster
The K-Means Approach
The k-means approach partitions the n objects into k sets $S_1, \ldots, S_k$ so as to minimize the total within-cluster cost
$c(S) = \sum_{i=1}^{k} c(S_i), \qquad c(S_i) = \sum_{r=1}^{|S_i|} \sum_{s=1}^{|S_i|} \big( d(x_r^i, x_s^i) \big)^2$
where $x_r^i$ denotes the r-th object of cluster $S_i$.
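A direct transcription of this cost into Python might look as follows (a sketch; it reuses the minkowski helper sketched above, and clusters is simply a list of point lists):

```python
def cluster_cost(cluster, d):
    """c(S_i): sum of squared pairwise distances within one cluster."""
    return sum(d(x_r, x_s) ** 2 for x_r in cluster for x_s in cluster)

def total_cost(clusters, d):
    """c(S): total within-cluster cost over all k clusters."""
    return sum(cluster_cost(s_i, d) for s_i in clusters)

# Two clusters of 2-d points: a tight pair and a tight triple
clusters = [[(0, 0), (0, 1)], [(5, 5), (6, 5), (6, 6)]]
print(total_cost(clusters, minkowski))
```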
An Example of a K-Means Iteration
The K-Means Clustering Method
Given k, the k-means algorithm proceeds in four steps:
1. Partition the objects into k nonempty subsets
2. Compute the seed points as the centroids (mean points) of the current clusters
3. Assign each object to the cluster with the nearest seed point
4. Go back to step 2; stop when no more reassignments occur
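A minimal, self-contained sketch of these four steps (random seeding and the stopping rule are implementation choices; minkowski is the helper sketched earlier):

```python
import random

def kmeans(points, k, max_iter=100):
    """Lloyd-style k-means sketch over a list of equal-length tuples."""
    centroids = random.sample(points, k)            # step 1: initial seeds
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]           # step 3: nearest centroid
        for p in points:
            nearest = min(range(k), key=lambda c: minkowski(p, centroids[c]))
            clusters[nearest].append(p)
        new_centroids = [                           # step 2: recompute means
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:              # step 4: no reassignment
            return clusters, centroids
        centroids = new_centroids
    return clusters, centroids
```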
Examples
[A sequence of example figure slides omitted]
The K-Means Clustering Method: Example
[Figure: four scatter plots on 0-10 axes showing successive k-means iterations]
Comments on the K-Means Method
Strength
With a large number of variables, k-means may be computationally faster than hierarchical clustering (if k is small)
The K-Medoids Clustering Method
Find representative objects, called medoids, in clusters
PAM (Partitioning Around Medoids, 1987)
starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
PAM works effectively for small data sets, but does not scale well to large data sets
CLARA (Kaufmann & Rousseeuw, 1990): applies PAM to multiple samples of the data set
CLARANS (Ng & Han, 1994): randomized sampling
Focusing + spatial data structure (Ester et al., 1995)
PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987): select k representative objects arbitrarily; for each pair of a selected object i and a non-selected object h, calculate the total swapping cost; if the best swap improves the clustering, perform it; repeat until there is no change. A sketch follows below.
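A greedy sketch of PAM's swap step (initialization and the exact swapping-cost bookkeeping of the original algorithm are simplified here; total_distance and the distance argument d are our own names):

```python
def total_distance(points, medoids, d):
    """Clustering cost: each object is charged to its nearest medoid."""
    return sum(min(d(p, m) for m in medoids) for p in points)

def pam(points, k, d):
    """Swap a medoid for a non-medoid whenever it lowers the total cost."""
    medoids = list(points[:k])                  # arbitrary initial medoids
    best = total_distance(points, medoids, d)
    improved = True
    while improved:
        improved = False
        for m in list(medoids):
            for h in points:
                if h in medoids:
                    continue
                candidate = [h if x == m else x for x in medoids]
                cost = total_distance(points, candidate, d)
                if cost < best:                 # keep only improving swaps
                    best, medoids, improved = cost, candidate, True
    return medoids
```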
CLARANS (“Randomized” CLARA) (1994)
CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han’94)
CLARANS draws a sample of neighbors dynamically
The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids
If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum
It is more efficient and scalable than both PAM and CLARA
Focusing techniques and spatial access structures may further improve its performance (Ester et al.’95)
Agglomerative Clustering
Eventually all nodes belong to the same cluster.
[Figure: three scatter plots on 0-10 axes showing successive agglomerative merges]
A Dendrogram Shows How the Clusters Are Merged Hierarchically
A dendrogram decomposes data objects into several levels of nested partitionings (a tree of clusters). A clustering of the data objects is obtained by cutting the dendrogram at the desired level: each connected component then forms a cluster.
DIANA (Divisive Analysis)
The inverse order of agglomerative clustering: eventually each node forms a cluster on its own.
[Figure: three scatter plots on 0-10 axes showing successive divisive splits]
Hierarchical Clustering
Centroid distance between clusters: $d(C_i, C_j) = |m_i - m_j|$, where $m_i$ is the mean (centroid) of cluster $C_i$.
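A sketch of centroid-linkage agglomeration using this distance (the names centroid and agglomerate are ours; a real implementation would cache distances rather than recompute them):

```python
def centroid(cluster):
    """Mean point m_i of a cluster of equal-length tuples."""
    return tuple(sum(dim) / len(cluster) for dim in zip(*cluster))

def agglomerate(points, k, d):
    """Repeatedly merge the two clusters with the closest centroids."""
    clusters = [[p] for p in points]        # start: each object on its own
    while len(clusters) > k:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: d(centroid(clusters[ij[0]]),
                                           centroid(clusters[ij[1]])))
        clusters[i] += clusters.pop(j)      # merge the closest pair
    return clusters

# Example: clusters = agglomerate(pts, 2, minkowski)
```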
More on Hierarchical Clustering Methods
Major weakness of agglomerative clustering methods:
do not scale well: time complexity of at least O(n²), where n is the number of objects
can never undo what was done previously
Integration of hierarchical and distance-based clustering addresses this, e.g., BIRCH (1996, CF-trees) and CHAMELEON (1999, hierarchical clustering using dynamic modeling)
BIRCH (1996)
BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, and Livny (SIGMOD’96)
Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering (see the sketch below)
Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure)
Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
Weakness: handles only numeric data, and is sensitive to the order of the data records
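The CF tree rests on one small invariant: a clustering feature CF = (N, LS, SS) (count, linear sum, squared sum) is additive, so subclusters can be merged without revisiting the raw data. A sketch of that entry (the class name and the scalar form of SS are our simplifications):

```python
class CF:
    """BIRCH clustering feature (N, LS, SS) for one subcluster."""
    def __init__(self, point):
        self.n = 1
        self.ls = list(point)                    # linear sum, per dimension
        self.ss = sum(x * x for x in point)      # squared sum (scalar form)

    def absorb(self, other):
        """Additivity: merging subclusters just adds their features."""
        self.n += other.n
        self.ls = [a + b for a, b in zip(self.ls, other.ls)]
        self.ss += other.ss

    def centroid(self):
        return [x / self.n for x in self.ls]
```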
Drawbacks of Distance-Based Methods
[Figure: panels showing the original data set, merge/partition steps, and the final clusters]
Density-Connected
A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts. (A point is density-reachable from o if it can be reached through a chain of points, each lying within the Eps-neighborhood of a core point, i.e., a point with at least MinPts neighbors within Eps.)
[Figure: points p and q connected through a common point o]
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
Relies on a density-based notion of cluster: a cluster is a maximal set of density-connected points.
[Figure: core and border points; Eps = 1 cm, MinPts = 5]
DBSCAN: The Algorithm
Arbitrarily select a point p
Retrieve all points density-reachable from p wrt. Eps and MinPts
If p is a core point, a cluster is formed
If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
Continue the process until all of the points have been processed
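A compact sketch of this procedure (quadratic-time neighborhood queries for clarity; the label conventions None = unvisited and -1 = noise are ours):

```python
def dbscan(points, eps, min_pts, d):
    """Returns one label per point: cluster id (1, 2, ...) or -1 for noise."""
    UNVISITED, NOISE = None, -1
    labels = [UNVISITED] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        seeds = [j for j in range(len(points))
                 if d(points[i], points[j]) <= eps]
        if len(seeds) < min_pts:
            labels[i] = NOISE                # border point or true noise
            continue
        cid += 1                             # i is a core point: new cluster
        labels[i] = cid
        while seeds:                         # grow the density-reachable set
            j = seeds.pop()
            if labels[j] == NOISE:           # reached noise: border point
                labels[j] = cid
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cid
            jn = [m for m in range(len(points))
                  if d(points[j], points[m]) <= eps]
            if len(jn) >= min_pts:           # only core points expand further
                seeds.extend(jn)
    return labels
```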
STING: A Statistical Information Grid Approach
Wang, Yang, and Muntz (VLDB’97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different levels of resolution
STING: A Statistical Information Grid Approach (2)
Each cell at a high level is partitioned into a number of smaller cells at the next lower level
Statistical information for each cell is calculated and stored beforehand and is used to answer queries
Parameters of higher-level cells can easily be calculated from the parameters of lower-level cells:
count, mean, standard deviation, min, max
type of distribution (normal, uniform, etc.)
Use a top-down approach to answer spatial data queries
Start from a pre-selected layer, typically one with a small number of cells
For each cell in the current level, compute the confidence interval
STING: A Statistical Information Grid Approach (3)
Remove the irrelevant cells from further consideration
When finished examining the current layer, proceed to the next lower level
Repeat this process until the bottom layer is reached (see the sketch after this slide)
Advantages:
Query-independent, easy to parallelize, incremental update
O(K), where K is the number of grid cells at the lowest level
Disadvantages:
All the cluster boundaries are either horizontal or vertical; no diagonal boundary is detected
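A sketch of the top-down descent (the Cell class and the caller-supplied is_relevant predicate, which stands in for the confidence-interval test, are our own simplifications):

```python
class Cell:
    """One STING cell: precomputed statistics plus its child cells."""
    def __init__(self, count, mean, children=()):
        self.count = count
        self.mean = mean
        self.children = list(children)

def relevant_cells(cell, is_relevant):
    """Descend layer by layer, pruning cells judged irrelevant."""
    if not is_relevant(cell):
        return []                        # whole subtree dropped
    if not cell.children:
        return [cell]                    # bottom layer reached
    found = []
    for child in cell.children:
        found.extend(relevant_cells(child, is_relevant))
    return found

# Example: keep only bottom cells whose subtree holds at least 10 objects
# leaves = relevant_cells(root, lambda c: c.count >= 10)
```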
WaveCluster (1998)
Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
A multi-resolution clustering approach which applies a wavelet transform to the feature space
A wavelet transform is a signal-processing technique that decomposes a signal into different frequency sub-bands
Both grid-based and density-based
Input parameters:
# of grid cells for each dimension
the wavelet, and the # of applications of the wavelet transform
WaveCluster (1998)
How to apply a wavelet transform to find clusters:
Summarize the data by imposing a multidimensional grid structure onto the data space (sketched below)
These multidimensional spatial data objects are then represented in an n-dimensional feature space
Apply the wavelet transform on the feature space to find the dense regions
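A sketch of the grid-summarization (quantization) step (the function name and the uniform-grid assumption are ours):

```python
def quantize(points, cells_per_dim, lo, hi):
    """Map each point to a grid cell index and count points per cell."""
    counts = {}
    for p in points:
        cell = tuple(
            min(int((x - l) / (h - l) * cells_per_dim), cells_per_dim - 1)
            for x, l, h in zip(p, lo, hi)
        )
        counts[cell] = counts.get(cell, 0) + 1
    return counts

# 2-d points quantized onto a 4 x 4 grid over [0, 10) x [0, 10)
print(quantize([(1, 1), (1.5, 0.5), (9, 9)], 4, (0, 0), (10, 10)))
# {(0, 0): 2, (3, 3): 1}
```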
Quantization
Transformation
WaveCluster (1998)
Why is the wavelet transformation useful for clustering?
Unsupervised clustering
Multi-resolution
Cost efficiency
Major features:
Complexity O(N)
Detects arbitrarily shaped clusters at different scales
Not sensitive to noise, not sensitive to input order
CLIQUE: Example
[Figure: dense units in the Salary (10,000) vs. age and Vacation (week) vs. age planes, with density threshold = 3; intersecting them identifies a candidate cluster in the (age, salary, vacation) subspace]
Strength and Weakness of CLIQUE
Strength
It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
Weakness
The accuracy of the clustering result may be degraded at the expense of the simplicity of the method
Conceptual Clustering
A form of clustering in machine learning
Produces a classification scheme for a set of unlabeled objects
Finds a characteristic description for each concept (class)
COBWEB (Fisher’87)
A popular and simple method of incremental conceptual learning
Creates a hierarchical clustering in the form of a classification tree
Each node refers to a concept and contains a probabilistic description of that concept
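The probabilistic descriptions are typically scored with the category utility measure; a standard formulation (not spelled out on the slide, so treat the notation as ours) is:

```latex
CU(C_1, \ldots, C_n) = \frac{1}{n} \sum_{k=1}^{n} P(C_k)
  \left[ \sum_{i} \sum_{j} P(A_i = v_{ij} \mid C_k)^2
       - \sum_{i} \sum_{j} P(A_i = v_{ij})^2 \right]
```

where the $C_k$ form a partition into n concepts, the $A_i$ are attributes, and the $v_{ij}$ their values; COBWEB places each new object so as to maximize this score.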
COBWEB Clustering Method
[Figure: a classification tree]
More on Statistical-Based Clustering
Limitations of COBWEB
The assumption that the attributes are independent of each other is often too strong, because correlations may exist
Not suitable for clustering large databases: skewed tree, and expensive probability distributions
CLASSIT
an extension of COBWEB for incremental clustering of continuous data
suffers from problems similar to COBWEB’s
AutoClass (Cheeseman and Stutz, 1996)
Uses Bayesian statistical analysis to estimate the number of clusters
Popular in industry
Other Model-Based Clustering Methods
Neural network approaches
Represent each cluster as an exemplar, acting as a “prototype” of the cluster; new objects are assigned to the cluster whose exemplar is the most similar
Competitive learning involves a hierarchical architecture of several units (neurons)
Neurons compete in a “winner-takes-all” fashion for the object currently being presented
Self-Organizing Feature Maps (SOMs)
SOMs map objects onto a low-dimensional grid of competing neurons, so that similar objects end up at nearby map units.
Outlier Discovery
What are outliers? Objects that are considerably dissimilar from the remainder of the data (e.g., exceptional athletes such as Wayne Gretzky, ...)
Problem
Find the top n outlier points
Applications:
Credit card fraud detection
Customer segmentation
Medical analysis
Outlier Discovery: Statistical Approaches
Assume a model for the data set (e.g., a normal distribution) and flag objects that are discordant with it.
Drawbacks
most tests are for a single attribute
in many cases, the data distribution may not be known
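A minimal discordancy-style sketch under a normality assumption (the 2-sigma cutoff and the sample data are illustrative choices of ours, not the slide's specific test):

```python
from statistics import mean, stdev

def discordant(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]
print(discordant(data))   # [25.0]
```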
Outlier Discovery: Distance-Based Approach
Introduced to counter the main limitations of statistical methods: a DB(p, D)-outlier is an object O in a data set T such that at least a fraction p of the objects in T lie at a distance greater than D from O. A sketch follows below.
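A direct, brute-force sketch of this definition (quadratic time; the parameter names p_frac and dist_d stand for the p and D of the definition, and minkowski is the helper from earlier):

```python
def db_outliers(points, p_frac, dist_d, d):
    """DB(p, D)-outliers: objects with at least a fraction p_frac of the
    data set lying farther than dist_d away."""
    n = len(points)
    return [o for o in points
            if sum(1 for q in points if d(o, q) > dist_d) / n >= p_frac]

# Example: one isolated point among a tight group
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (9, 9)]
print(db_outliers(pts, p_frac=0.8, dist_d=3.0, d=minkowski))  # [(9, 9)]
```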
Summary
Cluster analysis groups objects based on their similarity and has a wide range of applications. The major approaches covered in this unit are partitioning methods (k-means, k-medoids, CLARA, CLARANS), hierarchical methods (agglomerative and divisive clustering, BIRCH), density-based methods (DBSCAN), grid-based methods (STING, WaveCluster, CLIQUE), and model-based methods (COBWEB, CLASSIT, AutoClass, neural approaches), together with outlier discovery.