Module 3
Basic Concepts
Evaluation Methods
Summary
What Is Frequent Pattern Analysis?
Frequent pattern: a pattern (a set of items, subsequences, substructures,
etc.) that occurs frequently in a data set
First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context
of frequent itemsets and association rule mining
Motivation: Finding inherent regularities in data
What products were often purchased together?— Beer and diapers?!
What are the subsequent purchases after buying a PC?
What kinds of DNA are sensitive to this new drug?
Can we automatically classify web documents?
Applications
Basket data analysis, cross-marketing, catalog design, sale campaign
analysis, Web log (click stream) analysis, and DNA sequence analysis.
Why Is Freq. Pattern Mining Important?
Freq. pattern: An intrinsic and important property of
datasets
Foundation for many essential data mining tasks
Association, correlation, and causality analysis
Broad applications
Basic Concepts: Frequent Patterns
Basic Concepts: Association Rules
Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk
(Figure: Venn diagram of customers who buy beer, buy diapers, and buy both)

Find all the rules X ⇒ Y with minimum support and confidence
    support, s: probability that a transaction contains X ∪ Y
    confidence, c: conditional probability that a transaction having X also contains Y
Let minsup = 50%, minconf = 50%
Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
Association rules (many more!):
    Beer ⇒ Diaper (60%, 100%)
    Diaper ⇒ Beer (60%, 75%)
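To make the definitions concrete, here is a minimal Python sketch (an illustration, not part of the original material) that computes support and confidence over the toy database above:

from itertools import combinations

# Toy transaction database from the slide
db = {
    10: {"Beer", "Nuts", "Diaper"},
    20: {"Beer", "Coffee", "Diaper"},
    30: {"Beer", "Diaper", "Eggs"},
    40: {"Nuts", "Eggs", "Milk"},
    50: {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
}

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in db.values()) / len(db)

def confidence(X, Y):
    """Conditional probability that a transaction with X also has Y."""
    return support(set(X) | set(Y)) / support(X)

print(support({"Beer", "Diaper"}))        # 0.6  -> 60% support
print(confidence({"Beer"}, {"Diaper"}))   # 1.0  -> 100% confidence
print(confidence({"Diaper"}, {"Beer"}))   # 0.75 -> 75% confidence

The printed values reproduce the two rules above: Beer ⇒ Diaper (60%, 100%) and Diaper ⇒ Beer (60%, 75%).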
Closed Patterns and Max-Patterns
A long pattern contains a combinatorial number of sub-patterns, e.g., {a1, …, a100} contains
    (100 choose 1) + (100 choose 2) + … + (100 choose 100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!
Solution: Mine closed patterns and max-patterns instead
An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99)
An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)
Closed patterns are a lossless compression of frequent patterns
    Reducing the # of patterns and rules
Closed Patterns and Max-Patterns
Exercise. DB = {<a1, …, a100>, <a1, …, a50>}, Min_sup = 1
What is the set of closed itemsets?
    <a1, …, a100>: 1
    <a1, …, a50>: 2
What is the set of max-patterns?
    <a1, …, a100>: 1
What is the set of all patterns?
    All 2^100 − 1 nonempty subsets of {a1, …, a100}: far too many to enumerate!
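Enumerating 2^100 patterns is infeasible, but a scaled-down version of the exercise (a1…a10 and a1…a5, an assumption chosen purely to keep brute-force enumeration tractable) can be checked directly in Python:

from itertools import combinations

transactions = [frozenset(range(1, 11)), frozenset(range(1, 6))]  # <a1..a10>, <a1..a5>
min_sup = 1

def sup(s):
    return sum(s <= t for t in transactions)

frequent = {}
for n in range(1, 11):
    for c in combinations(range(1, 11), n):
        s = frozenset(c)
        if sup(s) >= min_sup:
            frequent[s] = sup(s)

# Closed: frequent with no proper superset of identical support
closed = [s for s in frequent
          if not any(s < t and frequent[t] == frequent[s] for t in frequent)]
# Max: frequent with no frequent proper superset at all
maximal = [s for s in frequent if not any(s < t for t in frequent)]

print(len(frequent))                 # 1023 = 2^10 - 1 patterns
print([sorted(s) for s in closed])   # [1..5] and [1..10]: the two closed itemsets
print([sorted(s) for s in maximal])  # [1..10]: the single max-pattern

The counts mirror the full-size answer: two closed itemsets, one max-pattern, and an exponential number of patterns overall.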
Computational Complexity of Frequent Itemset Mining
Chapter 5: Mining Frequent Patterns, Association and
Correlations: Basic Concepts and Methods
Basic Concepts
Evaluation Methods
Summary
Scalable Frequent Itemset Mining Methods
Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
The Downward Closure Property and Scalable
Mining Methods
The downward closure property of frequent patterns
    Any subset of a frequent itemset must be frequent
    If {beer, diaper, nuts} is frequent, so is {beer, diaper}
    i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
Scalable mining methods: three major approaches
    Apriori (Agrawal & Srikant @VLDB'94)
    Frequent pattern growth (FPgrowth: Han, Pei & Yin @SIGMOD'00)
    Vertical data format approach (Charm: Zaki & Hsiao @SDM'02)
Apriori: A Candidate Generation & Test Approach
The Apriori Algorithm—An Example
Supmin = 2

Database TDB          C1 (1st scan)     L1
Tid   Items           {A}: 2            {A}: 2
10    A, C, D         {B}: 3            {B}: 3
20    B, C, E         {C}: 3            {C}: 3
30    A, B, C, E      {D}: 1            {E}: 3
40    B, E            {E}: 3

C2 (from L1)    C2 (2nd scan)    L2
{A, B}          {A, B}: 1        {A, C}: 2
{A, C}          {A, C}: 2        {B, C}: 2
{A, E}          {A, E}: 1        {B, E}: 3
{B, C}          {B, C}: 2        {C, E}: 2
{B, E}          {B, E}: 3
{C, E}          {C, E}: 2

C3: {B, C, E}    3rd scan: L3 = {B, C, E}: 2
The Apriori Algorithm (Pseudo-Code)
Ck: Candidate itemset of size k
Lk: frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk;
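A direct Python rendering of this pseudocode (a sketch using plain sets rather than an optimized counting structure) reproduces the worked example above:

from itertools import combinations

def apriori(db, min_sup):
    """db: iterable of item-sets; min_sup: absolute support count."""
    db = [frozenset(t) for t in db]
    items = {i for t in db for i in t}
    L = {frozenset([i]) for i in items
         if sum(i in t for t in db) >= min_sup}          # L1
    all_frequent = set(L)
    k = 1
    while L:
        # Generate Ck+1 from Lk: join, then prune by downward closure
        C = {a | b for a in L for b in L if len(a | b) == k + 1}
        C = {c for c in C
             if all(frozenset(s) in L for s in combinations(c, k))}
        # Scan the database once, counting candidates contained in each t
        L = {c for c in C if sum(c <= t for t in db) >= min_sup}
        all_frequent |= L
        k += 1
    return all_frequent

tdb = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
print(sorted(map(sorted, apriori(tdb, 2)), key=lambda s: (len(s), s)))
# singletons A, B, C, E; pairs AC, BC, BE, CE; triple BCE -- matching L1, L2, L3 above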
Implementation of Apriori
Counting Supports of Candidates Using Hash Tree
(Figure: hash tree for candidate 3-itemsets, with hash branches 1,4,7 / 2,5,8 / 3,6,9. The subset function recursively decomposes transaction 1 2 3 5 6 as 1+2356, 12+356, 13+56, …, descending to the leaves that hold candidates such as 124, 125, 136, 145, 159, 234, 345, 356, 357, 367, 368, 457, 458, 567, and 689.)
Candidate Generation: An SQL Implementation
SQL implementation of candidate generation
Suppose the items in Lk-1 are listed in an order
Step 1: self-joining Lk-1

insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1 and … and p.itemk-2 = q.itemk-2 and p.itemk-1 < q.itemk-1

Step 2: pruning

forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck

Use object-relational extensions such as UDFs, BLOBs, and table functions for efficient implementation [See: S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98]
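The same join-and-prune step, transcribed into Python with itemsets as sorted tuples (a sketch mirroring the SQL above, not the paper's UDF-based implementation):

def gen_candidates(L_prev):
    """Self-join L(k-1) with itself, then prune by the Apriori property."""
    L_set = set(L_prev)
    Ck = []
    for p in sorted(L_prev):
        for q in sorted(L_prev):
            # Join condition: equal on the first k-2 items, p's last item < q's last item
            if p[:-1] == q[:-1] and p[-1] < q[-1]:
                c = p + (q[-1],)
                # Prune: every (k-1)-subset of c must itself be in L(k-1)
                if all(c[:i] + c[i + 1:] in L_set for i in range(len(c))):
                    Ck.append(c)
    return Ck

print(gen_candidates([("A", "C"), ("B", "C"), ("B", "E"), ("C", "E")]))
# [('B', 'C', 'E')] -- ('A', 'C') finds no join partner sharing its prefix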
Scalable Frequent Itemset Mining Methods
Further Improvement of the Apriori Method
Partition: Scan Database Only Twice
Any itemset that is potentially frequent in DB must be
frequent in at least one of the partitions of DB
Scan 1: partition database and find local frequent
patterns
Scan 2: consolidate global frequent patterns
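A sketch of the two-scan idea in Python; `mine` stands for any in-memory frequent-itemset miner (for instance the apriori() sketch shown earlier), and the round-robin split is an arbitrary illustrative choice:

def partition_mine(db, min_sup_frac, n_parts, mine):
    db = [frozenset(t) for t in db]
    parts = [db[i::n_parts] for i in range(n_parts)]   # round-robin partitioning
    # Scan 1: union of locally frequent itemsets = global candidate set
    candidates = set()
    for part in parts:
        local_min = max(1, int(min_sup_frac * len(part)))
        candidates |= mine(part, local_min)
    # Scan 2: count every candidate against the whole database
    return {c for c in candidates
            if sum(c <= t for t in db) >= min_sup_frac * len(db)}

Correctness rests on the property stated above: a globally frequent itemset must be locally frequent in at least one partition, so scan 1 cannot miss an answer.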
DIC: Reduce Number of Scans
(Figure: itemset lattice from {} through A, B, C, D; AB, AC, BC, AD, BD, CD; ABC, ABD, ACD, BCD; up to ABCD)
Once both A and D are determined frequent, the counting of AD begins
Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins
Unlike Apriori, which finishes counting all 1-itemsets before starting any 2-itemsets, DIC begins counting 2-itemsets and 3-itemsets as soon as their subsets are known to be frequent, while still scanning the transactions
S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97
Scalable Frequent Itemset Mining Methods
Pattern-Growth Approach: Mining Frequent Patterns
Without Candidate Generation
Bottlenecks of the Apriori approach
Breadth-first (i.e., level-wise) search
Candidate generation and test
Often generates a huge number of candidates
The FPGrowth Approach (J. Han, J. Pei, and Y. Yin, SIGMOD’ 00)
Depth-first search
Avoid explicit candidate generation
Major philosophy: Grow long patterns from short ones using local
frequent items only
If “abc” is a frequent pattern
Get all transactions having “abc”, i.e., project DB on abc: DB|abc
If “d” is a local frequent item in DB|abc, then abcd is a frequent pattern
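The projection idea can be sketched in a few lines of Python. This recursion works on explicit projected databases; real FP-growth keeps each projection compressed as an FP-tree, so treat this only as an illustration of the divide-and-conquer philosophy:

from collections import Counter

def pattern_growth(db, min_sup, prefix=frozenset()):
    """Recursive pattern growth over projected databases."""
    patterns = {}
    counts = Counter(i for t in db for i in t)
    for item in sorted(c for c in counts if counts[c] >= min_sup):
        pat = prefix | {item}
        patterns[frozenset(pat)] = counts[item]
        # DB|item: transactions containing `item`, restricted to items
        # that come after `item` in a fixed order (avoids duplicates)
        proj = [frozenset(x for x in t if x > item) for t in db if item in t]
        patterns.update(pattern_growth([t for t in proj if t], min_sup, pat))
    return patterns

tdb = [frozenset(t) for t in
       [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]]
for pat, cnt in sorted(pattern_growth(tdb, 2).items(),
                       key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(pat), cnt)   # same answer set as the Apriori example above

Each itemset is enumerated exactly once because a projection on item i keeps only the items that come after i in the fixed order.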
Construct FP-tree from a Transaction Database
(Figure: an FP-tree built from a transaction database, with a header table of node-links; mining partitions the search into patterns containing p, patterns containing m but not p, …, down to pattern f)
Find Patterns Having P From P-conditional Database
(Figure: FP-tree with paths f:4 → c:3 → a:3 → m:2 → p:2, f:4 → c:3 → a:3 → b:1 → m:1, f:4 → b:1, and c:1 → b:1 → p:1, linked from the header table)

Header table (item: frequency): f: 4, c: 4, a: 3, b: 3, m: 3, p: 3

Conditional pattern bases:
item   cond. pattern base
c      f:3
a      fc:3
b      fca:1, f:1, c:1
m      fca:2, fcab:1
p      fcam:2, cb:1
From Conditional Pattern-bases to Conditional FP-trees
m-conditional pattern base: fca:2, fcab:1
m-conditional FP-tree: {} → f:3 → c:3 → a:3
Cond. pattern base of “am”: (fc:3);  am-conditional FP-tree: {} → f:3 → c:3
Cond. pattern base of “cm”: (f:3);   cm-conditional FP-tree: {} → f:3
Cond. pattern base of “cam”: (f:3);  cam-conditional FP-tree: {} → f:3
A Special Case: Single Prefix Path in FP-tree
(Figure: an FP-tree whose top part is a single prefix path, e.g., a1:n1 → a2:n2 → a3:n3, above a multi-branch part with nodes such as C2:k2 and C3:k3; the single prefix can be mined separately and its patterns concatenated with those of the branching part)
Benefits of the FP-tree Structure
Completeness
Preserve complete information for frequent pattern
mining
Never break a long pattern of any transaction
Compactness
Reduce irrelevant info—infrequent items are gone
Items in frequency descending order: the more
frequently occurring, the more likely to be shared
Never larger than the original database (not counting node-links and the count fields)
The Frequent Pattern Growth Mining Method
Idea: Frequent pattern growth
Recursively grow frequent patterns by pattern and
database partition
Method
    For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
    Repeat the process on each newly created conditional FP-tree
    Until the resulting FP-tree is empty, or it contains only one path (a single path generates all the combinations of its sub-paths, each of which is a frequent pattern)
Scaling FP-growth by Database Projection
What about if FP-tree cannot fit in memory?
DB projection
First partition a database into a set of projected DBs
Then construct and mine FP-tree for each projected DB
Parallel projection vs. partition projection techniques
Parallel projection
Project the DB in parallel for each frequent item
Parallel projection is space costly
All the partitions can be processed in parallel
Partition projection
Partition the DB based on the ordered frequent items
Passing the unprocessed parts to the subsequent partitions
Partition-Based Projection
(Figure: the transaction DB is split into projected databases, e.g., am-proj DB = {fc, fc, fc} and cm-proj DB = {f, f, f}, each constructed and mined recursively)
Performance of FPGrowth in Large Datasets
(Figure: run time in seconds vs. support threshold (%). Left, data set T25I20D10K: D1 FP-growth runtime stays nearly flat while D1 Apriori runtime climbs steeply as the threshold drops toward 0. Right, data set T25I20D100K: D2 FP-growth similarly outperforms D2 TreeProjection.)
Advantages of the Pattern Growth Approach
Divide-and-conquer:
Decompose both the mining task and DB according to the
frequent patterns obtained so far
Lead to focused search of smaller databases
Other factors
No candidate generation, no candidate test
Compressed database: FP-tree structure
No repeated scan of entire database
Basic ops: counting local freq items and building sub FP-tree, no
pattern search and matching
A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods
Extension of Pattern Growth Mining Methodology
Pattern-growth-based Clustering
MaPle (Pei, et al., ICDM’03)
Pattern-Growth-Based Classification
Mining frequent and discriminative patterns (Cheng, et al., ICDE'07)
Scalable Frequent Itemset Mining Methods
ECLAT: Mining by Exploring Vertical Data Format
Vertical format: t(AB) = {T11, T25, …}
tid-list: list of trans.-ids containing an itemset
Deriving frequent patterns based on vertical intersections
t(X) = t(Y): X and Y always happen together
t(X) ⊆ t(Y): a transaction having X always has Y
Using diffset to accelerate mining
Only keep track of differences of tids
t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
Diffset (XY, X) = {T2}
Eclat (Zaki et al. @KDD’97)
Mining Closed patterns using vertical format: CHARM (Zaki &
Hsiao@SDM’02)
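In Python, tid-lists and diffsets are just set operations; the sketch below uses the vertical view of the small TDB from the Apriori example (an illustrative mapping, not from the source):

# Vertical (tid-list) representation of items A, B, C, E
tidlists = {
    "A": {10, 30}, "B": {20, 30, 40},
    "C": {10, 20, 30}, "E": {20, 30, 40},
}

def t(itemset):
    """tid-list of an itemset = intersection of its items' tid-lists."""
    return set.intersection(*[tidlists[i] for i in itemset])

print(t("BE"))             # {20, 30, 40}: support(BE) = 3
# diffset(XY, X) = t(X) - t(XY); usually far smaller than the tid-lists themselves
print(t("C") - t("CE"))    # {10}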
Scalable Frequent Itemset Mining Methods
Mining Frequent Closed Patterns: CLOSET
Flist: list of all frequent items in support-ascending order
    Flist: d-a-f-e-c, Min_sup = 2

TID   Items
10    a, c, d, e, f
20    a, b, e
30    c, e, f
40    a, c, d, f
50    c, e, f

Divide search space
    Patterns having d
    Patterns having d but no a, etc.
Find frequent closed patterns recursively
    Every transaction having d also has cfa, so cfad is a frequent closed pattern
J. Pei, J. Han & R. Mao. “CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets”, DMKD'00
CLOSET+: Mining Closed Itemsets by Pattern-Growth
Visualization of Association Rules: Rule Graph
Visualization of Association Rules
(SGI/MineSet 3.0)
Chapter 5: Mining Frequent Patterns, Association and
Correlations: Basic Concepts and Methods
Basic Concepts
Evaluation Methods
Summary
Interestingness Measure: Correlations (Lift)
play basketball ⇒ eat cereal [40%, 66.7%] is misleading
    The overall % of students eating cereal is 75%, higher than 66.7%
play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
Measure of dependent/correlated events: lift
    lift(A, B) = P(A ∪ B) / (P(A) P(B)); lift > 1 means positive correlation, lift < 1 negative
    Here lift(basketball, cereal) = 0.667 / 0.75 ≈ 0.89 < 1, confirming the negative correlation
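Numerically (a small sketch; P(basketball) = 60% follows from the given support and confidence):

def lift(s_union, p_x, p_y):
    # lift(X => Y) = P(X u Y) / (P(X) * P(Y)) = confidence(X => Y) / P(Y)
    return s_union / (p_x * p_y)

# basketball => cereal: s(X u Y) = 0.4, P(basketball) = 0.6, P(cereal) = 0.75
print(lift(0.4, 0.6, 0.75))   # ~0.89 < 1: negatively correlated
# basketball => not cereal: s = 0.2, P(not cereal) = 0.25
print(lift(0.2, 0.6, 0.25))   # ~1.33 > 1: positively correlated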
Are Lift and χ² Good Measures of Correlation?
Null-Invariant Measures
Comparison of Interestingness Measures
Null-(transaction) invariance is crucial for correlation analysis
Lift and χ² are not null-invariant
5 null-invariant measures: all_confidence, max_confidence, Kulczynski (1927), cosine, and coherence
Null-transactions w.r.t. two itemsets are the transactions containing neither of them; a null-invariant measure is unaffected by how many such transactions exist
Subtle: the null-invariant measures can still disagree with one another
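Two of these quantities in Python for reference (Kulczynski, plus the imbalance ratio IR that is commonly reported alongside it); inputs are the supports s(A), s(B), s(A ∪ B):

def kulczynski(s_ab, s_a, s_b):
    """Kulc(A, B) = (P(A|B) + P(B|A)) / 2 -- null-invariant."""
    return 0.5 * (s_ab / s_a + s_ab / s_b)

def imbalance_ratio(s_ab, s_a, s_b):
    """IR(A, B) = |s(A) - s(B)| / (s(A) + s(B) - s(A u B))."""
    return abs(s_a - s_b) / (s_a + s_b - s_ab)

# basketball/cereal numbers from the lift slide:
print(kulczynski(0.4, 0.6, 0.75))       # 0.6: Kulc reads this as near-neutral,
                                        # while lift called it negative -- they disagree
print(imbalance_ratio(0.4, 0.6, 0.75))  # ~0.16: a modest imbalance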
Analysis of DBLP Coauthor Relationships
Recent DB conferences, removing balanced associations, low sup, etc.
Basic Concepts
Evaluation Methods
Summary
Research on Pattern Mining: A Road Map
Chapter 7 : Advanced Frequent Pattern Mining
Pattern Mining: A Road Map
Pattern Mining in Multi-Level, Multi-Dimensional Space
Mining Multi-Level Association
Mining Multi-Dimensional Association
Mining Quantitative Association Rules
Mining Rare Patterns and Negative Patterns
Constraint-Based Frequent Pattern Mining
Mining High-Dimensional Data and Colossal Patterns
Mining Compressed or Approximate Patterns
Pattern Exploration and Application
Summary
Mining Multiple-Level Association Rules
Items often form hierarchies
Flexible support settings
Items at the lower level are expected to have lower
support
Exploration of shared multi-level mining (Agrawal & Srikant @VLDB'95, Han & Fu @VLDB'95)
Multi-level Association: Flexible Support and
Redundancy filtering
Flexible min-support thresholds: Some items are more valuable but
less frequent
Use non-uniform, group-based min-support
E.g., {diamond, watch, camera}: 0.05%; {bread, milk}: 5%; …
Redundancy Filtering: Some rules may be redundant due to
“ancestor” relationships between items
milk ⇒ wheat bread [support = 8%, confidence = 70%]
2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
The first rule is an ancestor of the second rule
A rule is redundant if its support is close to the “expected” value,
based on the rule’s ancestor
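A sketch of this redundancy check in Python; `item_share` (the fraction of milk sales that are 2% milk) and the tolerance `tol` are illustrative parameters, not values from the source:

def is_redundant(rule_sup, ancestor_sup, item_share, tol=0.1):
    """A rule is redundant if its support is close to the 'expected'
    support derived from its ancestor: ancestor_sup * item_share."""
    expected = ancestor_sup * item_share
    return abs(rule_sup - expected) <= tol * expected

# If roughly a quarter of milk sold were 2% milk (an assumed figure), the
# second rule's 2% support matches the expected 8% * 0.25 = 2%.
print(is_redundant(0.02, 0.08, 0.25))   # True -> filter the descendant rule out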
Chapter 7 : Advanced Frequent Pattern Mining
Pattern Mining: A Road Map
Pattern Mining in Multi-Level, Multi-Dimensional Space
Mining Multi-Level Association
Mining Multi-Dimensional Association
Mining Quantitative Association Rules
Mining Rare Patterns and Negative Patterns
Constraint-Based Frequent Pattern Mining
Mining High-Dimensional Data and Colossal Patterns
Mining Compressed or Approximate Patterns
Pattern Exploration and Application
Summary
Mining Multi-Dimensional Association
Single-dimensional rules:
    buys(X, “milk”) ⇒ buys(X, “bread”)
Multi-dimensional rules: ≥ 2 dimensions or predicates
    Inter-dimension assoc. rules (no repeated predicates)
        age(X, “19-25”) ∧ occupation(X, “student”) ⇒ buys(X, “coke”)
    Hybrid-dimension assoc. rules (repeated predicates)
        age(X, “19-25”) ∧ buys(X, “popcorn”) ⇒ buys(X, “coke”)
Categorical Attributes: finite number of possible values, no
ordering among values—data cube approach
Quantitative Attributes: Numeric, implicit ordering among
values—discretization, clustering, and gradient approaches
Chapter 7 : Advanced Frequent Pattern Mining
Pattern Mining: A Road Map
Pattern Mining in Multi-Level, Multi-Dimensional Space
Mining Multi-Level Association
Mining Multi-Dimensional Association
Mining Quantitative Association Rules
Mining Rare Patterns and Negative Patterns
Constraint-Based Frequent Pattern Mining
Mining High-Dimensional Data and Colossal Patterns
Mining Compressed or Approximate Patterns
Pattern Exploration and Application
Summary
Mining Quantitative Associations
Static Discretization of Quantitative Attributes
Defining Negative Correlated Patterns (II)
Definition 2 (negative itemset-based)
    X is a negative itemset if (1) X = Ā ∪ B, where B is a set of positive items and Ā is a set of negative items, |Ā| ≥ 1, and (2) s(X) ≥ μ
    Itemset X = {x1, …, xk} is negatively correlated if s(X) < s(x1) × s(x2) × … × s(xk), i.e., its support falls below what the items would have if they occurred independently
Constraint-based (Query-Directed) Mining
Constraints in Data Mining
Knowledge type constraint:
    classification, association, etc.
Data constraint (using SQL-like queries):
    find product pairs sold together in stores in Chicago this year
Dimension/level constraint:
    in relevance to region, price, brand, customer category
Rule (or pattern) constraint:
    small sales (price < $10) triggers big sales (sum > $200)
Interestingness constraint:
    strong rules: min_support ≥ 3%, min_confidence ≥ 60%
Meta-Rule Guided Mining
Meta-rule can be in the rule form with partially instantiated predicates
and constants
P1(X, Y) ∧ P2(X, W) ⇒ buys(X, “iPad”)
The resulting rule derived can be
age(X, “15-25”) ∧ profession(X, “student”) ⇒ buys(X, “iPad”)
In general, it can be in the form of
P1 ∧ P2 ∧ … ∧ Pl ⇒ Q1 ∧ Q2 ∧ … ∧ Qr
Method to find meta-rules
Find frequent (l+r) predicates (based on min-support threshold)
Push constants deeply when possible into the mining process (see
the remaining discussions on constraint-push techniques)
Use confidence, correlation, and other measures when possible
Constraint-Based Frequent Pattern Mining
Pattern space pruning constraints
Anti-monotonic: If constraint c is violated, its further mining can
be terminated
Monotonic: If c is satisfied, no need to check c again
Succinct: c must be satisfied, so one can start with the data sets
satisfying c
Convertible: c is neither monotone nor anti-monotone, but it can be converted into one of them if items in the transaction can be properly ordered
Data space pruning constraint
Data succinct: Data space can be pruned at the initial pattern
mining process
Data anti-monotonic: If a transaction t does not satisfy c, t can be
pruned from its further mining
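These properties are just predicates over itemsets. A small Python sketch, using the profit table that appears on the following slides, makes the anti-monotone case concrete:

profit = {"a": 40, "b": 0, "c": -20, "d": 10,
          "e": -30, "f": 30, "g": 20, "h": -10}

def range_le(S, v=15):
    """C: range(S.profit) <= v. Anti-monotone: adding items can only
    widen the range, so once S violates C every superset does too."""
    p = [profit[i] for i in S]
    return max(p) - min(p) <= v

print(range_le({"a", "b"}))        # False: range = 40 - 0 = 40 > 15
print(range_le({"a", "b", "c"}))   # False again -- pruning ab was safe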
Pattern Space Pruning with Anti-Monotonicity Constraints
A constraint C is anti-monotone if whenever a pattern satisfies C, all of its sub-patterns do so too
In other words, anti-monotonicity: if an itemset S violates the constraint, so does any of its supersets

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10

Ex. 1. sum(S.price) ≤ v is anti-monotone
Ex. 2. range(S.profit) ≤ 15 is anti-monotone
    Itemset ab violates C
    So does every superset of ab
Ex. 3. sum(S.price) ≥ v is not anti-monotone
Ex. 4. support count is anti-monotone: the core property used in Apriori
Pattern Space Pruning with Monotonicity Constraints
A constraint C is monotone if whenever a pattern satisfies C, we do not need to check C in subsequent mining
Alternatively, monotonicity: if an itemset S satisfies the constraint, so does any of its supersets

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10

Ex. 1. sum(S.price) ≥ v is monotone
Ex. 2. min(S.price) ≤ v is monotone
Ex. 3. C: range(S.profit) ≥ 15
    Itemset ab satisfies C
    So does every superset of ab
Data Space Pruning with Data Anti-monotonicity
A constraint c is data anti-monotone if, whenever a pattern p cannot satisfy a transaction t under c, no superset of p can satisfy t under c either
The key to data anti-monotonicity is recursive data reduction

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f, h
20    b, c, d, f, g, h
30    b, c, d, f, g
40    c, e, f, g

Item  Profit
a     40
b     0
c     -20
d     -15
e     -30
f     -10
g     20
h     -5

Ex. 1. sum(S.price) ≥ v is data anti-monotone
Ex. 2. min(S.price) ≤ v is data anti-monotone
Ex. 3. C: range(S.profit) ≥ 25 is data anti-monotone
    Itemset {b, c}'s projected DB:
        T10': {d, f, h}, T20': {d, f, g, h}, T30': {d, f, g}
    Since C cannot be satisfied within T10', T10' can be pruned
Pattern Space Pruning with Succinctness
Succinctness:
    Given A1, the set of items satisfying a succinctness constraint C, any set S satisfying C is based on A1, i.e., S contains a subset belonging to A1
    Idea: whether an itemset S satisfies constraint C can be determined based on the selection of items alone, without looking at the transaction database
    min(S.price) ≤ v is succinct
    sum(S.price) ≥ v is not succinct
Optimization: if C is succinct, C is pre-counting pushable
Naïve Algorithm: Apriori + Constraint
Database D           C1 (1st scan)     L1
TID   Items          {1}: 2            {1}: 2
100   1 3 4          {2}: 3            {2}: 3
200   2 3 5          {3}: 3            {3}: 3
300   1 2 3 5        {4}: 1            {5}: 3
400   2 5            {5}: 3

C2 (from L1)    C2 (2nd scan)    L2
{1 2}           {1 2}: 1         {1 3}: 2
{1 3}           {1 3}: 2         {2 3}: 2
{1 5}           {1 5}: 1         {2 5}: 3
{2 3}           {2 3}: 2         {3 5}: 2
{2 5}           {2 5}: 3
{3 5}           {3 5}: 2

C3: {2 3 5}    3rd scan: L3 = {2 3 5}: 2

Constraint: Sum{S.price} < 5
Constrained Apriori : Push a Succinct Constraint Deep
Database D           C1 (1st scan)     L1
TID   Items          {1}: 2            {1}: 2
100   1 3 4          {2}: 3            {2}: 3
200   2 3 5          {3}: 3            {3}: 3
300   1 2 3 5        {4}: 1            {5}: 3
400   2 5            {5}: 3

C2 (from L1)    C2 (2nd scan)    L2
{1 3}           {1 3}: 2         {1 3}: 2
{1 5}           {1 5}: 1         {2 3}: 2
{2 3}           {2 3}: 2         {2 5}: 3
{2 5}           {2 5}: 3         {3 5}: 2
{3 5}           {3 5}: 2

({1 2} is not immediately to be used)

C3: {2 3 5}    3rd scan: L3 = {2 3 5}: 2

Constraint: min{S.price} <= 1
Constrained FP-Growth: Push a Succinct Constraint Deep

1-projected DB:
TID   Items
100   3 4
300   2 3 5

No need to project on 2, 3, or 5

Constraint: min{S.price} <= 1
Constrained FP-Growth: Push a Data Anti-monotonic Constraint Deep

Remove from the data whatever can no longer satisfy the constraint:

TID   Items          TID   Items
100   1 3 4          100   1 3
200   2 3 5     →    300   1 3     →  FP-Tree
300   1 2 3 5
400   2 5

Constraint: min{S.price} <= 1
Constrained FP-Growth: Push a Data Anti-monotonic Constraint Deep

TID   Transaction
10    a, b, c, d, f, h
20    b, c, d, f, g, h
30    b, c, d, f, g
40    a, c, e, f, g      →  FP-Tree

Item  Profit
a     40
b     0
c     -20
d     -15
e     -30
f     -10
g     20
h     -5

b-projected DB (with recursive data pruning):
TID   Transaction
10    a, c, d, f, h
20    c, d, f, g, h      →  b's FP-Tree
30    c, d, f, g

Single branch: bcdfg: 2

Constraint: range{S.price} > 25, min_sup >= 2
Convertible Constraints: Ordering Data in Transactions

Convert tough constraints into anti-monotone or monotone ones by properly ordering items
Examine C: avg(S.profit) ≥ 25
Order items in value-descending order: <a, f, g, d, b, h, c, e>
If an itemset afb violates C, so does afbh, afb* (any extension of afb)
It becomes anti-monotone!

TDB (min_sup = 2)
TID   Transaction
10    a, b, c, d, f
20    b, c, d, f, g, h
30    a, c, d, e, f
40    c, e, f, g

Item  Profit
a     40
b     0
c     -20
d     10
e     -30
f     30
g     20
h     -10
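A small sketch of why the descending order works: extending a pattern only by items later in R can never raise the average, so a violated prefix prunes all its extensions (values from the table above):

profit = {"a": 40, "f": 30, "g": 20, "d": 10,
          "b": 0, "h": -10, "c": -20, "e": -30}

def satisfies(pattern, threshold=25):
    """C: avg(S.profit) >= threshold."""
    return sum(profit[i] for i in pattern) / len(pattern) >= threshold

# Grow prefixes along R = <a, f, g, d, b, h, c, e>: appending an item that
# comes later in R can only lower the running average.
print(satisfies(["a", "f"]))            # True : avg = 35
print(satisfies(["a", "f", "b"]))       # False: avg ~ 23.3 -> prune afb and beyond
print(satisfies(["a", "f", "b", "h"]))  # False: avg = 15, as the pruning predicted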
Strongly Convertible Constraints
Pattern Space Pruning w. Convertible Constraints
C: avg(X) ≥ 25, min_sup = 2
List items in every transaction in value-descending order R: <a, f, g, d, b, h, c, e>
    C is convertible anti-monotone w.r.t. R
Scan TDB once
    Remove infrequent items
        Item h is dropped
    Itemsets a and f are good, …
Projection-based mining
    Imposing an appropriate order on item projection
    Many tough constraints can be converted into (anti-)monotone ones

Item  Value
a     40
f     30
g     20
d     10
b     0
h     -10
c     -20
e     -30

TDB (min_sup = 2), items listed in order R:
TID   Transaction
10    a, f, d, b, c
20    f, g, d, b, c
30    a, f, d, c, e
40    f, g, h, c, e
Handling Multiple Constraints
Constraint-Based Mining — A General Picture
Constraint                         Anti-monotone    Monotone    Succinct
sum(S) ≤ v (∀a ∈ S, a ≥ 0)         yes              no          no
sum(S) ≥ v (∀a ∈ S, a ≥ 0)         no               yes         no
range(S) ≤ v                       yes              no          no
range(S) ≥ v                       no               yes         no
support(S) ≤ ξ                     no               yes         no
Chapter 7 : Advanced Frequent Pattern Mining
Summary
Mining Colossal Frequent Patterns
F. Zhu, X. Yan, J. Han, P. S. Yu, and H. Cheng, “Mining Colossal
Frequent Patterns by Core Pattern Fusion”, ICDE'07.
We have many algorithms, but can we mine large (i.e., colossal) patterns, say of size around 50 to 100? Unfortunately, not!
Why not? The curse of “downward closure” of frequent patterns
The “downward closure” property
    Any sub-pattern of a frequent pattern is frequent
    Example. If (a1, a2, …, a100) is frequent, then a1, a2, …, a100, (a1, a2), (a1, a3), …, (a1, a100), (a1, a2, a3), … are all frequent! There are about 2^100 such frequent itemsets!
No matter whether we use breadth-first search (e.g., Apriori) or depth-first search (FPgrowth), we have to examine that many patterns
Thus the downward closure property leads to an explosion!
Colossal Patterns: A Motivating Example
Let's make a set of 40 transactions:
    T1  = 1 2 3 4 … 39 40
    T2  = 1 2 3 4 … 39 40
    …
    T40 = 1 2 3 4 … 39 40
Let the minimum support threshold be σ = 20. There are (40 choose 20) frequent patterns of size 20.
Closed/maximal patterns may partially alleviate the problem but do not really solve it: we often need to mine scattered large patterns!
Now delete the items on the diagonal:
    T1  = 2 3 4 … 39 40
    T2  = 1 3 4 … 39 40
    …
    T40 = 1 2 3 4 … 39
Each size-20 pattern is now closed and maximal, and the number of such patterns is
    # patterns = (n choose n/2) ≈ 2^n / √(πn/2)   for n = 40
The size of the answer set is exponential in n
Colossal Pattern Set: Small but Interesting
Mining Colossal Patterns: Motivation and Philosophy

Motivation: Many real-world tasks need mining colossal patterns
    Micro-array analysis in bioinformatics (when support is low)
(Figure: a colossal pattern α in transaction database D, together with its core patterns α1, α2, …, αk and their support sub-databases Dα1, Dα2, …, Dαk)
Robustness of Colossal Patterns
Core Patterns
Intuitively, for a frequent pattern α, a sub-pattern β is a τ-core pattern of α if β shares a similar support set with α, i.e.,
    |Dα| / |Dβ| ≥ τ,   0 < τ ≤ 1
where Dα and Dβ are the sets of transactions containing α and β
Example: Core Patterns
A colossal pattern has far more core patterns than a small-sized pattern
A colossal pattern has far more core descendants of a smaller size c
A random draw from a complete set of patterns of size c is more likely to pick a core descendant of a colossal pattern
A colossal pattern can be generated by merging a set of core patterns
Example: pattern (abcef) with support 100 has core patterns (ab), (ac), (af), (ae), (bc), (bf), (be), (ce), (fe), (e), (abc), (abf), (abe), (ace), (acf), (afe), (bcf), (bce), (bfe), (cfe), (abcf), (abce), (bcfe), (acfe), (abfe), and (abcef) itself
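A brute-force check of the definition on the (abcef) example; the 105-transaction toy database is an assumption chosen so that (ab) has slightly larger support than (abcef):

from itertools import combinations

db = [frozenset("abcef")] * 100 + [frozenset("ab")] * 5

def d(pattern):
    """Support set D_pattern: transactions containing the pattern."""
    return [t for t in db if set(pattern) <= t]

def core_patterns(alpha, tau):
    """All tau-core sub-patterns beta of alpha: |D_alpha| / |D_beta| >= tau."""
    d_alpha = len(d(alpha))
    return [beta for n in range(1, len(alpha) + 1)
            for beta in combinations(sorted(alpha), n)
            if d_alpha / len(d(beta)) >= tau]

print(len(core_patterns("abcef", 0.9)))
# 31: every nonempty sub-pattern qualifies, since even (ab) gives 100/105 >= 0.9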
Colossal Patterns Correspond to Dense Balls
Idea of Pattern-Fusion Algorithm
Pattern-Fusion: The Algorithm
Initialization (Initial pool): Use an existing algorithm to
mine all frequent patterns up to a small size, e.g., 3
Iteration (Iterative Pattern Fusion):
    At each iteration, k seed patterns are randomly picked from the current pattern pool
    For each seed pattern, find the patterns within a ball bounded by the τ-core-pattern relation, and fuse them to generate a set of colossal super-pattern candidates
Pattern-Fusion Leads to Good Approximation
Experimental Setting
Experiment Results on Diagn
LCM's run time increases exponentially with pattern size n
Pattern-Fusion finishes efficiently
The approximation error of Pattern-Fusion (with min-sup 20), measured against the complete pattern set, is rather close to that of uniform sampling (which randomly picks K patterns from the complete answer set)
Experimental Results on ALL
ALL: A popular gene expression data set with 38
transactions, each with 866 columns
There are 1736 items in total
Experimental Results on REPLACE
REPLACE
A program trace data set, recording 4395 calls
and transitions
The data set contains 4395 transactions with
57 items in total
With support threshold of 0.03, the largest
Experimental Results on REPLACE
Approximation error when compared with the complete mining result
    Example: out of the total 98 patterns of size ≥ 42, when K = 100, Pattern-Fusion returns 80 of them
    This is a good approximation to the colossal patterns in the sense that any pattern in the complete set is, on average, at most 0.17 items away from one of these 80 returned patterns
Chapter 7 : Advanced Frequent Pattern Mining
Summary
Mining Compressed Patterns: δ-clustering
Why compressed patterns?
    Too many patterns, but less meaningful ones
Pattern distance measure

ID   Item-Sets                Support
P1   {38,16,18,12}            205227
P2   {38,16,18,12,17}         205211
P3   {39,38,16,18,12,17}      101758
P4   {39,16,18,12,17}         161563
P5   {39,16,18,12}            161576

Closed frequent patterns would report P1, P2, P3, P4, P5: this emphasizes support too much and gives no compression
δ-clustering: for each pattern P, find all patterns that can be expressed by P and whose distance to P is within δ (δ-cover)
All patterns in the cluster can be represented by P
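The distance used here is one minus the Jaccard similarity of the patterns' support sets. A sketch; since P2's supporting transactions form a subset of P1's, the distance follows from the two support counts alone:

def pattern_distance(t1, t2):
    """Dist(P1, P2) = 1 - |T(P1) & T(P2)| / |T(P1) | T(P2)|,
    where T(P) is the set of transactions supporting P."""
    return 1 - len(t1 & t2) / len(t1 | t2)

# With T(P2) a subset of T(P1): Dist(P1, P2) = 1 - 205211/205227 ~ 7.8e-05,
# far below any reasonable delta, so P2 can represent (delta-cover) P1.
print(1 - 205211 / 205227)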
Redundancy-Aware Top-k Patterns
Why redundancy-aware top-k patterns?
    Desired patterns: high significance & low redundancy
Propose the MMS (Maximal Marginal Significance) measure for the combined significance of a pattern set
D. Xin, et al., Extracting Redundancy-Aware Top-K Patterns, KDD'06
Chapter 7 : Advanced Frequent Pattern Mining
Summary
How to Understand and Interpret Patterns?
Semantic Information
    Non-semantic info.
    Definitions indicating semantics
    Examples of usage
    Synonyms
    Related words
Semantic Analysis with Context Models
(Figure: semantic annotations for the pattern {x_yan, j_han}: context units such as <{p_yu, j_han}, {d_xin}, …, “graph pattern”, “substructure similarity”, …>, representative transactions such as “gSpan: graph-based …”, and semantically similar patterns (SSPs) such as {j_wang}, {j_han, p_yu}, …)
Summary
Roadmap: Many aspects & extensions on pattern mining
Mining patterns in multi-level, multi-dimensional space
Mining rare and negative patterns
Constraint-based pattern mining
Specialized methods for mining high-dimensional data
and colossal patterns
Mining compressed or approximate patterns
Pattern exploration and understanding: Semantic
annotation of frequent patterns
Ref: Mining Multi-Level and Quantitative Rules
Y. Aumann and Y. Lindell. A Statistical Theory for Quantitative Association
Rules, KDD'99
T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using
two-dimensional optimized association rules: Scheme, algorithms, and
visualization. SIGMOD'96.
J. Han and Y. Fu. Discovery of multiple-level association rules from large
databases. VLDB'95.
R.J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97.
R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95.
R. Srikant and R. Agrawal. Mining quantitative association rules in large
relational tables. SIGMOD'96.
K. Wang, Y. He, and J. Han. Mining frequent itemsets using support
constraints. VLDB'00
K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing
optimized rectilinear regions for association rules. KDD'97.
Ref: Mining Other Kinds of Rules
F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new
paradigm for fast, quantifiable data mining. VLDB'98
Y. Huhtala, J. Kärkkäinen, P. Porkka, H. Toivonen. Efficient Discovery of
Functional and Approximate Dependencies Using Partitions. ICDE’98.
H. V. Jagadish, J. Madar, and R. Ng. Semantic Compression and Pattern
Extraction with Fascicles. VLDB'99
B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97.
R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining
association rules. VLDB'96.
A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative
associations in a large database of customer transactions. ICDE'98.
D. Tsur, J. D. Ullman, S. Abiteboul, C. Clifton, R. Motwani, and S. Nestorov.
Query flocks: A generalization of association-rule mining. SIGMOD'98.
Ref: Constraint-Based Pattern Mining
R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item
constraints. KDD'97
R. Ng, L.V.S. Lakshmanan, J. Han & A. Pang. Exploratory mining and pruning
optimizations of constrained association rules. SIGMOD’98
G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained
correlated sets. ICDE'00
J. Pei, J. Han, and L. V. S. Lakshmanan. Mining Frequent Itemsets with
Convertible Constraints. ICDE'01
J. Pei, J. Han, and W. Wang, Mining Sequential Patterns with Constraints in
Large Databases, CIKM'02
F. Bonchi, F. Giannotti, A. Mazzanti, and D. Pedreschi. ExAnte: Anticipated
Data Reduction in Constrained Pattern Mining, PKDD'03
F. Zhu, X. Yan, J. Han, and P. S. Yu, “gPrune: A Constraint Pushing Framework
for Graph Pattern Mining”, PAKDD'07
Ref: Mining Sequential Patterns
X. Ji, J. Bailey, and G. Dong. Mining minimal distinguishing subsequence patterns with
gap constraints. ICDM'05
H. Mannila, H Toivonen, and A. I. Verkamo. Discovery of frequent episodes in event
sequences. DAMI:97.
J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu. PrefixSpan: Mining Sequential
Patterns Efficiently by Prefix-Projected Pattern Growth. ICDE'01.
R. Srikant and R. Agrawal. Mining sequential patterns: Generalizations and
performance improvements. EDBT’96.
X. Yan, J. Han, and R. Afshar. CloSpan: Mining Closed Sequential Patterns in Large
Datasets. SDM'03.
M. Zaki. SPADE: An Efficient Algorithm for Mining Frequent Sequences. Machine
Learning:01.
Ref: Mining Graph and Structured Patterns
A. Inokuchi, T. Washio, and H. Motoda. An apriori-based algorithm for
mining frequent substructures from graph data. PKDD'00
M. Kuramochi and G. Karypis. Frequent Subgraph Discovery. ICDM'01.
X. Yan and J. Han. gSpan: Graph-based substructure pattern mining.
ICDM'02
X. Yan and J. Han. CloseGraph: Mining Closed Frequent Graph Patterns.
KDD'03
X. Yan, P. S. Yu, and J. Han. Graph indexing based on discriminative frequent
structure analysis. ACM TODS, 30:960–993, 2005
X. Yan, F. Zhu, P. S. Yu, and J. Han. Feature-based substructure similarity
search. ACM Trans. Database Systems, 31:1418–1453, 2006
Ref: Mining Spatial, Spatiotemporal, Multimedia Data
Ref: Mining Frequent Patterns in Time-Series Data
Ref: FP for Classification and Clustering
G. Dong and J. Li. Efficient mining of emerging patterns: Discovering
trends and differences. KDD'99.
B. Liu, W. Hsu, Y. Ma. Integrating Classification and Association Rule
Mining. KDD’98.
W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based
on Multiple Class-Association Rules. ICDM'01.
H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in
large data sets. SIGMOD’ 02.
J. Yang and W. Wang. CLUSEQ: efficient and effective sequence clustering.
ICDE’03.
X. Yin and J. Han. CPAR: Classification based on Predictive Association
Rules. SDM'03.
H. Cheng, X. Yan, J. Han, and C.-W. Hsu. Discriminative Frequent Pattern
Analysis for Effective Classification. ICDE'07.
Ref: Privacy-Preserving FP Mining
Ref: Mining Compressed Patterns
D. Xin, H. Cheng, X. Yan, and J. Han. Extracting redundancy-
aware top-k patterns. KDD'06
D. Xin, J. Han, X. Yan, and H. Cheng. Mining compressed
frequent-pattern sets. VLDB'05
X. Yan, H. Cheng, J. Han, and D. Xin. Summarizing itemset
patterns: A profile-based approach. KDD'05
Ref: Mining Colossal Patterns
F. Zhu, X. Yan, J. Han, P. S. Yu, and H. Cheng. Mining colossal
frequent patterns by core pattern fusion. ICDE'07
F. Zhu, Q. Qu, D. Lo, X. Yan, J. Han, and P. S. Yu. Mining Top-K Large
Structural Patterns in a Massive Network. VLDB'11.
Ref: FP Mining from Data Streams
Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-Dimensional
Regression Analysis of Time-Series Data Streams. VLDB'02.
R. M. Karp, C. H. Papadimitriou, and S. Shenker. A simple algorithm for
finding frequent elements in streams and bags. TODS 2003.
G. Manku and R. Motwani. Approximate Frequency Counts over Data
Streams. VLDB’02.
A. Metwally, D. Agrawal, and A. El Abbadi. Efficient computation of frequent
and top-k elements in data streams. ICDT'05
Ref: Freq. Pattern Mining Applications