
Classification with Decision Trees I

Instructor: Qiang Yang


Hong Kong University of Science and Technology
Qyang@cs.ust.hk

Thanks: Eibe Frank and Jiawei Han

1
INTRODUCTION

• Given a set of pre-classified examples, build a model or classifier to classify new cases.
• Supervised learning, in that the classes are known for the examples used to build the classifier.
• A classifier can be a set of rules, a decision tree, a neural network, etc.
• Typical applications: credit approval, target marketing, fraud detection, medical diagnosis, treatment effectiveness analysis, ...
Constructing a Classifier

• The goal is to maximize accuracy on new cases that have a similar class distribution.
• Since new cases are not available at construction time, the given examples are divided into a training set and a testing set. The classifier is built using the training set and is evaluated using the testing set.
• The goal is to be accurate on the testing set. It is essential to capture the "structure" shared by both sets.
• Overfitting rules that work well on the training set but poorly on the testing set must be pruned.

3
Example

Training Data → Classification Algorithms → Classifier (Model)

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Learned classifier (model):
IF rank = 'professor' OR years > 6
THEN tenured = 'yes'
Example (Cont'd)

The classifier is then applied to the Testing Data, and to Unseen Data such as (Jeff, Professor, 4) → Tenured?

Testing Data:
NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes
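
As a minimal illustration (not part of the original slides), the learned rule can be written directly as a function; the function name predict_tenured and the lowercase rank strings are assumptions made for this sketch.

    def predict_tenured(rank, years):
        # Learned rule from the training data:
        #   IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
        return "yes" if rank == "professor" or years > 6 else "no"

    # Unseen case from the slide: (Jeff, Professor, 4)
    print(predict_tenured("professor", 4))  # -> 'yes'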
Evaluation Criteria

• Accuracy on test set: the rate of correct classification on the testing set. E.g., if 90 out of 100 testing cases are classified correctly, accuracy is 90%.
• Error rate on test set: the percentage of wrong predictions on the test set.
• Confusion matrix: for binary class values "yes" and "no", a matrix showing the true positive, true negative, false positive and false negative counts:

                          Predicted class
                          Yes               No
    Actual class   Yes    True positive     False negative
                   No     False positive    True negative

• Speed and scalability: the time to build the classifier and to classify new cases, and the scalability with respect to the data size.
• Robustness: handling noise and missing values.
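
A minimal sketch (added here, not from the slides) of how these measures could be computed for yes/no predictions; the function name evaluate and the list-based inputs are assumptions.

    def evaluate(y_true, y_pred):
        """Accuracy, error rate, and a 2x2 confusion matrix for 'yes'/'no' labels."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == "yes" and p == "yes")
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == "yes" and p == "no")
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == "no" and p == "yes")
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == "no" and p == "no")
        accuracy = (tp + tn) / len(y_true)
        return accuracy, 1 - accuracy, {"TP": tp, "FN": fn, "FP": fp, "TN": tn}

    acc, err, cm = evaluate(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"])
    print(acc, err, cm)  # 0.75 0.25 {'TP': 1, 'FN': 1, 'FP': 0, 'TN': 2}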
Evaluation Techniques

• Holdout: split the data once into a training set and a testing set.
  ◦ Good for a large set of data.
• k-fold cross-validation:
  ◦ Divide the data set into k sub-samples.
  ◦ In each run, use one distinct sub-sample as the testing set and the remaining k-1 sub-samples as the training set.
  ◦ Evaluate the method using the average of the k runs.
  ◦ This method reduces the randomness of the training set/testing set split.
Cross Validation: Holdout Method

• Break the data up into groups of the same size.
• Hold aside one group for testing and use the rest to build the model.
• Repeat, holding out a different group in each iteration.

[Figure: in each iteration a different group serves as the Test set]

8
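
A minimal sketch (not from the slides) of this splitting scheme; the generator name k_fold_splits and the round-robin way of forming the folds are choices made here for illustration.

    def k_fold_splits(examples, k):
        """Yield (training_set, testing_set) pairs for k-fold cross-validation:
        each fold is used exactly once as the testing set."""
        folds = [examples[i::k] for i in range(k)]  # k roughly equal-sized groups
        for i in range(k):
            test = folds[i]
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            yield train, test

    for train, test in k_fold_splits(list(range(10)), k=5):
        print(len(train), len(test))  # 8 2 in every iteration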
Continuous Classes

• Sometimes classes are continuous, in that they come from a continuous domain, e.g., temperature or stock price.
• Regression is well suited to this case:
  ◦ Linear and multiple regression
  ◦ Non-linear regression
• We shall focus on categorical classes, e.g., colors or Yes/No binary decisions.
• We will deal with continuous class values later, in CART.

9
DECISION TREE [Quinlan93]

• An internal node represents a test on an attribute.
• A branch represents an outcome of the test, e.g., Color = red.
• A leaf node represents a class label or a class label distribution.
• At each node, one attribute is chosen so that it splits the training examples into classes that are as distinct as possible.
• A new case is classified by following a matching path down to a leaf node.

10
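
To make the path-following idea concrete, here is a small sketch (not from the slides): a tree is represented as nested dicts, internal nodes map an attribute to its branches, and leaves are class labels. The representation and the classify function are assumptions made for illustration; the tree shown mirrors the weather-data tree on a later slide.

    # A decision tree as nested dicts: an internal node maps one attribute name
    # to {attribute value: subtree}; a leaf is simply a class label string.
    tree = {"Outlook": {
        "sunny":    {"Humidity": {"high": "N", "normal": "P"}},
        "overcast": "P",
        "rain":     {"Windy": {"true": "N", "false": "P"}},
    }}

    def classify(tree, case):
        """Follow the matching path from the root down to a leaf."""
        while isinstance(tree, dict):
            attribute, branches = next(iter(tree.items()))
            tree = branches[case[attribute]]
        return tree

    print(classify(tree, {"Outlook": "sunny", "Humidity": "normal"}))  # -> 'P'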
Training Set
Outlook Temperature Humidity Windy Class
sunny hot high false N
sunny hot high true N
overcast hot high false P
rain mild high false P
rain cool normal false P
rain cool normal true N
overcast cool normal true P
sunny mild high false N
sunny cool normal false P
rain mild normal false P
sunny mild normal true P
overcast mild high true P
overcast hot normal false P
rain mild high true N
Example

Outlook
├─ sunny:    Humidity
│              ├─ high:   N
│              └─ normal: P
├─ overcast: P
└─ rain:     Windy
               ├─ true:  N
               └─ false: P
Building Decision Tree [Q93]

• Top-down tree construction
  ◦ At start, all training examples are at the root.
  ◦ Partition the examples recursively by choosing one attribute each time.
• Bottom-up tree pruning
  ◦ Remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases.

13
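
A rough sketch of the top-down construction step (pruning omitted), under the assumption that examples are (attribute-dict, class) pairs and that choose_attribute is some goodness function such as the ones introduced on the next slides; none of these names come from the slides themselves.

    def build_tree(examples, attributes, choose_attribute):
        """Top-down construction: make a leaf when the node is pure or no
        attribute is left; otherwise split on the chosen attribute and recurse."""
        classes = [c for _, c in examples]
        if len(set(classes)) == 1:                      # pure node -> leaf
            return classes[0]
        if not attributes:                              # nothing left to split on -> majority class
            return max(set(classes), key=classes.count)
        best = choose_attribute(examples, attributes)   # e.g. highest information gain
        branches = {}
        for value in {x[best] for x, _ in examples}:
            subset = [(x, c) for x, c in examples if x[best] == value]
            remaining = [a for a in attributes if a != best]
            branches[value] = build_tree(subset, remaining, choose_attribute)
        return {best: branches}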
Choosing the Splitting Attribute

• At each node, the available attributes are evaluated on the basis of how well they separate the classes of the training examples. A goodness function is used for this purpose.
• Typical goodness functions:
  ◦ information gain (ID3/C4.5)
  ◦ information gain ratio
  ◦ gini index

14
Which attribute to select?

15
A criterion for attribute selection

• Which is the best attribute?
  ◦ The one which will result in the smallest tree.
  ◦ Heuristic: choose the attribute that produces the "purest" nodes.
• Popular impurity criterion: information gain
  ◦ Information gain increases with the average purity of the subsets that an attribute produces.
• Strategy: choose the attribute that results in the greatest information gain.

16
Computing information

• Information is measured in bits.
  ◦ Given a probability distribution, the info required to predict an event is the distribution's entropy.
  ◦ Entropy gives the information required in bits (this can involve fractions of bits!).
• Formula for computing the entropy:

  entropy(p1, p2, ..., pn) = -p1 log p1 - p2 log p2 - ... - pn log pn

17
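
A direct transcription of this formula into code, added here for illustration (base-2 logarithm, with 0·log 0 taken to be 0):

    from math import log2

    def entropy(probabilities):
        """entropy(p1, ..., pn) = -sum(pi * log2(pi)), with 0*log(0) taken as 0."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    print(round(entropy([2/5, 3/5]), 3))    # 0.971 bits
    print(round(entropy([9/14, 5/14]), 3))  # 0.940 bits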
Example: attribute "Outlook"

• "Outlook" = "Sunny":
  info([2,3]) = entropy(2/5, 3/5) = -2/5 log(2/5) - 3/5 log(3/5) = 0.971 bits
• "Outlook" = "Overcast":
  info([4,0]) = entropy(1, 0) = -1 log(1) - 0 log(0) = 0 bits
  (Note: 0 log(0) is normally not defined; it is taken to be 0 here.)
• "Outlook" = "Rainy":
  info([3,2]) = entropy(3/5, 2/5) = -3/5 log(3/5) - 2/5 log(2/5) = 0.971 bits
• Expected information for the attribute:
  info([2,3], [4,0], [3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits

18
Computing the information gain

• Information gain = information before splitting - information after splitting

  gain("Outlook") = info([9,5]) - info([2,3], [4,0], [3,2]) = 0.940 - 0.693 = 0.247 bits

• Information gain for the attributes from the weather data:
  gain("Outlook")     = 0.247 bits
  gain("Temperature") = 0.029 bits
  gain("Humidity")    = 0.152 bits
  gain("Windy")       = 0.048 bits

19
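
The gains above can be reproduced with a short script over the 14 examples from the Training Set slide. This is a sketch added for illustration; the data layout and the names class_entropy and info_gain are choices made here, and entropy() is repeated from the previous sketch so the block is self-contained.

    from math import log2

    def entropy(probabilities):
        return -sum(p * log2(p) for p in probabilities if p > 0)

    # The 14 examples from the Training Set slide: (Outlook, Temperature, Humidity, Windy, Class)
    rows = [
        ("sunny", "hot", "high", "false", "N"),      ("sunny", "hot", "high", "true", "N"),
        ("overcast", "hot", "high", "false", "P"),   ("rain", "mild", "high", "false", "P"),
        ("rain", "cool", "normal", "false", "P"),    ("rain", "cool", "normal", "true", "N"),
        ("overcast", "cool", "normal", "true", "P"), ("sunny", "mild", "high", "false", "N"),
        ("sunny", "cool", "normal", "false", "P"),   ("rain", "mild", "normal", "false", "P"),
        ("sunny", "mild", "normal", "true", "P"),    ("overcast", "mild", "high", "true", "P"),
        ("overcast", "hot", "normal", "false", "P"), ("rain", "mild", "high", "true", "N"),
    ]
    attributes = ["Outlook", "Temperature", "Humidity", "Windy"]
    examples = [(dict(zip(attributes, r[:4])), r[4]) for r in rows]

    def class_entropy(examples):
        classes = [c for _, c in examples]
        return entropy([classes.count(v) / len(classes) for v in set(classes)])

    def info_gain(examples, attribute):
        """Gain = info before splitting - expected info after splitting on the attribute."""
        after = 0.0
        for value in {x[attribute] for x, _ in examples}:
            subset = [(x, c) for x, c in examples if x[attribute] == value]
            after += len(subset) / len(examples) * class_entropy(subset)
        return class_entropy(examples) - after

    for a in attributes:
        print(a, round(info_gain(examples, a), 3))
    # Outlook 0.247, Temperature 0.029, Humidity 0.152, Windy 0.048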
Continuing to split

Splitting continues within the Outlook = sunny branch, where the gains (recomputed on those five examples) are:

  gain("Temperature") = 0.571 bits
  gain("Humidity")    = 0.971 bits
  gain("Windy")       = 0.020 bits

20
The final decision tree

• Note: not all leaves need to be pure; sometimes identical instances have different classes.
• Splitting stops when the data can't be split any further.

21
Highly-branching attributes

• Problematic: attributes with a large number of values (extreme case: ID code).
• Subsets are more likely to be pure if there is a large number of values.
  ◦ Information gain is biased towards choosing attributes with a large number of values.
  ◦ This may result in overfitting (selection of an attribute that is non-optimal for prediction).
• Another problem: fragmentation.

22
The gain ratio

• Gain ratio: a modification of the information gain that reduces its bias towards attributes with many branches.
• Gain ratio takes the number and size of branches into account when choosing an attribute:
  ◦ it corrects the information gain by taking the intrinsic information of a split into account.
• Intrinsic information (also called split information): the entropy of the distribution of instances into branches
  ◦ (i.e., how much info we need to tell which branch an instance belongs to).

23
Gain Ratio

• The intrinsic information is:
  ◦ large when the data is spread evenly over the branches,
  ◦ small when all the data belong to one branch.
• Gain ratio (Quinlan'86) normalizes the info gain by this quantity:

  IntrinsicInfo(S, A) = - sum_i (|Si| / |S|) log2(|Si| / |S|)

  GainRatio(S, A) = Gain(S, A) / IntrinsicInfo(S, A)
Computing the gain ratio

• Example: intrinsic information for ID code
  info([1,1,...,1]) = 14 × (-(1/14) × log(1/14)) = 3.807 bits
• The importance of an attribute decreases as its intrinsic information gets larger.
• Definition of gain ratio:
  gain_ratio("Attribute") = gain("Attribute") / intrinsic_info("Attribute")
• Example:
  gain_ratio("ID_code") = 0.940 bits / 3.807 bits = 0.246

25
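
Continuing the information-gain sketch above (it reuses entropy(), info_gain() and examples from that block; the helper names are again choices made here, not from the slides), the gain ratio can be computed as:

    from collections import Counter

    def intrinsic_info(examples, attribute):
        """Split information: entropy of how the examples are distributed over branches."""
        counts = Counter(x[attribute] for x, _ in examples)
        total = len(examples)
        return entropy([n / total for n in counts.values()])

    def gain_ratio(examples, attribute):
        """GainRatio = Gain / IntrinsicInfo (guarding against a zero split info)."""
        split_info = intrinsic_info(examples, attribute)
        return info_gain(examples, attribute) / split_info if split_info > 0 else 0.0

    # With the weather data from the information-gain sketch:
    print(round(gain_ratio(examples, "Outlook"), 3))   # 0.247 / 1.577 -> 0.156
    print(round(gain_ratio(examples, "Humidity"), 3))  # 0.152 / 1.000 -> 0.152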
Gain ratios for weather data

Attribute     Info    Gain                    Split info              Gain ratio
Outlook       0.693   0.940 - 0.693 = 0.247   info([5,4,5]) = 1.577   0.247/1.577 = 0.156
Temperature   0.911   0.940 - 0.911 = 0.029   info([4,6,4]) = 1.557   0.029/1.557 = 0.019
Humidity      0.788   0.940 - 0.788 = 0.152   info([7,7])   = 1.000   0.152/1.000 = 0.152
Windy         0.892   0.940 - 0.892 = 0.048   info([8,6])   = 0.985   0.048/0.985 = 0.049

26
More on the gain ratio

• "Outlook" still comes out top.
• However, "ID code" has an even greater gain ratio.
  ◦ Standard fix: an ad hoc test to prevent splitting on that type of attribute.
• Problem with the gain ratio: it may overcompensate.
  ◦ It may choose an attribute just because its intrinsic information is very low.
  ◦ Standard fix:
    - First, only consider attributes with greater than average information gain.
    - Then, compare them on gain ratio.

27
Gini Index

• If a data set T contains examples from n classes, the gini index gini(T) is defined as

  gini(T) = 1 - sum_{j=1..n} pj^2

  where pj is the relative frequency of class j in T. gini(T) is smaller the more skewed the class distribution in T is, and is minimized (0) when all examples belong to one class.
• After splitting T into two subsets T1 and T2 with sizes N1 and N2, the gini index of the split data is defined as

  gini_split(T) = (N1/N) × gini(T1) + (N2/N) × gini(T2),   where N = N1 + N2

• The attribute providing the smallest gini_split(T) is chosen to split the node.
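
A small sketch of these two formulas, added for illustration. It reuses the (attribute-dict, class) example format from the earlier sketches, and the binary split on a single attribute value is just one simple way of forming T1 and T2.

    def gini(examples):
        """gini(T) = 1 - sum over classes of pj^2, where pj is the class frequency in T."""
        if not examples:
            return 0.0
        classes = [c for _, c in examples]
        return 1.0 - sum((classes.count(v) / len(classes)) ** 2 for v in set(classes))

    def gini_split(examples, attribute, value):
        """gini_split(T) = (N1/N) * gini(T1) + (N2/N) * gini(T2) for the binary split
        T1 = {attribute == value}, T2 = the rest."""
        t1 = [(x, c) for x, c in examples if x[attribute] == value]
        t2 = [(x, c) for x, c in examples if x[attribute] != value]
        n = len(examples)
        return len(t1) / n * gini(t1) + len(t2) / n * gini(t2)

    # With the weather data: gini(examples) = 1 - (9/14)**2 - (5/14)**2 ≈ 0.459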
Discussion

• The algorithm for top-down induction of decision trees ("ID3") was developed by Ross Quinlan.
  ◦ The gain ratio is just one modification of this basic algorithm.
  ◦ It led to the development of C4.5, which can deal with numeric attributes, missing values, and noisy data.
• Similar approach: CART (linear regression tree, WF book, Chapter 6.5).
• There are many other attribute selection criteria! (But they make almost no difference to the accuracy of the result.)

29
