7-Decision Trees Learning


Decision Trees Learning

Outline

• Decision tree representation
• ID3 learning algorithm
• Entropy, information gain
• Overfitting

2
Decision Tree for PlayTennis

• Attributes and their values:
  – Outlook: Sunny, Overcast, Rain
  – Humidity: High, Normal
  – Wind: Strong, Weak
  – Temperature: Hot, Mild, Cool
• Target concept – PlayTennis: Yes, No

3
Decision Tree for PlayTennis

Outlook
├─ Sunny    → Humidity
│             ├─ High   → No
│             └─ Normal → Yes
├─ Overcast → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

4
Decision Tree for PlayTennis

(Same tree as on the previous slide.)

• Each internal node tests an attribute
• Each branch corresponds to an attribute value
• Each leaf node assigns a classification

5
Decision Tree for PlayTennis

Classify the new instance:
  Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Weak → PlayTennis = ?

Tracing the tree (Outlook=Sunny, then Humidity=High) gives PlayTennis = No.

Outlook
├─ Sunny    → Humidity
│             ├─ High   → No
│             └─ Normal → Yes
├─ Overcast → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

6
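The lookup this tree performs can be sketched in a few lines of Python (an illustrative example of my own, not part of the slides), with the tree written as nested dicts:

# The PlayTennis tree as nested dicts: an inner dict tests one attribute,
# a string is a leaf classification.
TREE = {"Outlook": {
    "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}

def classify(node, instance):
    while isinstance(node, dict):
        attribute = next(iter(node))                  # attribute tested at this node
        node = node[attribute][instance[attribute]]   # follow the branch for its value
    return node

print(classify(TREE, {"Outlook": "Sunny", "Temperature": "Hot",
                      "Humidity": "High", "Wind": "Weak"}))   # -> No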
Decision Tree for Conjunction
Outlook=Sunny ∧ Wind=Weak

Outlook
├─ Sunny    → Wind
│             ├─ Strong → No
│             └─ Weak   → Yes
├─ Overcast → No
└─ Rain     → No

7
Decision Tree for Disjunction
Outlook=Sunny ∨ Wind=Weak

Outlook
├─ Sunny    → Yes
├─ Overcast → Wind
│             ├─ Strong → No
│             └─ Weak   → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

8
Decision Tree for XOR
Outlook=Sunny XOR Wind=Weak

Outlook
├─ Sunny    → Wind
│             ├─ Strong → Yes
│             └─ Weak   → No
├─ Overcast → Wind
│             ├─ Strong → No
│             └─ Weak   → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

9
Decision Tree
• Decision trees represent disjunctions of conjunctions:

Outlook
├─ Sunny    → Humidity
│             ├─ High   → No
│             └─ Normal → Yes
├─ Overcast → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

   (Outlook=Sunny ∧ Humidity=Normal)
 ∨ (Outlook=Overcast)
 ∨ (Outlook=Rain ∧ Wind=Weak)

10
When to consider Decision Trees

• Instances describable by attribute-value pairs
  – e.g., Humidity: High, Normal
• Target function is discrete valued
  – e.g., PlayTennis: Yes, No
• Disjunctive hypothesis may be required
  – e.g., Outlook=Sunny ∨ Wind=Weak
• Possibly noisy training data
• Missing attribute values
• Examples:
  – Medical diagnosis
  – Credit risk analysis
  – Object classification for robot manipulator (Tan 1993)

11
Top-Down Induction of Decision Trees (ID3)

1. A ← the "best" decision attribute for the next node
2. Assign A as the decision attribute for node
3. For each value of A, create a new descendant of node
4. Sort the training examples to the leaf nodes according to
   the attribute value of the branch
5. If all training examples are perfectly classified
   (same value of the target attribute) stop, else
   iterate over the new leaf nodes.

12
Which Attribute is "best"?

A1: [29+,35-] → True: [21+,5-],  False: [8+,30-]
A2: [29+,35-] → True: [18+,33-], False: [11+,2-]

13
Entropy

• S is a sample of training examples
• p+ is the proportion of positive examples in S
• p- is the proportion of negative examples in S
• Entropy measures the impurity of S:

  Entropy(S) = -p+ log2 p+ - p- log2 p-

14
Entropy

• Entropy(S) = the expected number of bits needed to
  encode the class (+ or -) of a randomly drawn member of
  S (under the optimal, shortest-length code)

• Information theory: an optimal-length code assigns
  -log2 p bits to a message having probability p.
• So the expected number of bits to encode the class
  (+ or -) of a random member of S is:

  -p+ log2 p+ - p- log2 p-

  (Note that 0 log2 0 = 0)

15
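A small Python helper (my own sketch, not from the slides) that computes this two-class entropy; it reproduces the values used on the following slides, e.g. Entropy([29+,35-]) ≈ 0.99 and Entropy([9+,5-]) ≈ 0.94:

import math

def entropy(pos, neg):
    """Entropy of a sample containing `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:                 # convention: 0 * log2(0) = 0
            result -= p * math.log2(p)
    return result

print(round(entropy(29, 35), 2))  # 0.99
print(round(entropy(9, 5), 3))    # 0.94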
Information Gain

• Gain(S,A): the expected reduction in entropy due to sorting S
  on attribute A

  Gain(S,A) = Entropy(S) - Σv∈Values(A) |Sv|/|S| · Entropy(Sv)

  Entropy([29+,35-]) = -29/64 log2 29/64 - 35/64 log2 35/64
                     = 0.99

  A1: [29+,35-] → True: [21+,5-],  False: [8+,30-]
  A2: [29+,35-] → True: [18+,33-], False: [11+,2-]

16
Information Gain

Entropy([21+,5-]) = 0.71     Entropy([18+,33-]) = 0.94
Entropy([8+,30-]) = 0.74     Entropy([11+,2-])  = 0.62

Gain(S,A1) = Entropy(S)                  Gain(S,A2) = Entropy(S)
             - 26/64·Entropy([21+,5-])                - 51/64·Entropy([18+,33-])
             - 38/64·Entropy([8+,30-])                - 13/64·Entropy([11+,2-])
           = 0.27                                    = 0.12

A1: [29+,35-] → True: [21+,5-],  False: [8+,30-]
A2: [29+,35-] → True: [18+,33-], False: [11+,2-]

17
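The same arithmetic as a Python sketch (illustrative code of my own), where each attribute is described by the (positive, negative) counts of its branches:

import math

def entropy(pos, neg):
    total = pos + neg
    return -sum(c / total * math.log2(c / total) for c in (pos, neg) if c > 0)

def gain(parent, branches):
    """parent = (pos, neg) of S; branches = list of (pos, neg), one per attribute value."""
    n = sum(parent)
    return entropy(*parent) - sum((p + q) / n * entropy(p, q) for p, q in branches)

print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))   # A1: 0.27
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))  # A2: 0.12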
Training Examples

Day  Outlook   Temp.  Humidity  Wind    PlayTennis
D1   Sunny     Hot    High      Weak    No
D2   Sunny     Hot    High      Strong  No
D3   Overcast  Hot    High      Weak    Yes
D4   Rain      Mild   High      Weak    Yes
D5   Rain      Cool   Normal    Weak    Yes
D6   Rain      Cool   Normal    Strong  No
D7   Overcast  Cool   Normal    Weak    Yes
D8   Sunny     Mild   High      Weak    No
D9   Sunny     Cool   Normal    Weak    Yes
D10  Rain      Mild   Normal    Strong  Yes
D11  Sunny     Mild   Normal    Strong  Yes
D12  Overcast  Mild   High      Strong  Yes
D13  Overcast  Hot    Normal    Weak    Yes
D14  Rain      Mild   High      Strong  No

18
Selecting the Next Attribute

S=[9+,5-], E=0.940                    S=[9+,5-], E=0.940
Humidity:                             Wind:
  High:   [3+,4-]  E=0.985              Weak:   [6+,2-]  E=0.811
  Normal: [6+,1-]  E=0.592              Strong: [3+,3-]  E=1.0

Gain(S,Humidity)                      Gain(S,Wind)
  = 0.940 - (7/14)·0.985                = 0.940 - (8/14)·0.811
    - (7/14)·0.592                        - (6/14)·1.0
  = 0.151                               = 0.048

Humidity provides greater information gain than Wind w.r.t. the target classification.

19
Selecting the Next Attribute

S=[9+,5-], E=0.940
Outlook:
  Sunny:    [2+,3-]  E=0.971
  Overcast: [4+,0-]  E=0.0
  Rain:     [3+,2-]  E=0.971

Gain(S,Outlook)
  = 0.940 - (5/14)·0.971 - (4/14)·0.0 - (5/14)·0.971
  = 0.247

20
Selecting the Next Attribute

What about Temperature?

S=[9+,5-], E=0.940
Temperature:
  Hot:  [2+,2-]  E=1.0
  Mild: [4+,2-]  E=0.918
  Cool: [3+,1-]  E=0.811

Gain(S,Temperature)
  = 0.940 - (4/14)·1.0 - (6/14)·0.918 - (4/14)·0.811
  = 0.029

21
Selecting the Next Attribute

The information gain values for the four attributes are:
• Gain(S,Outlook)     = 0.247
• Gain(S,Humidity)    = 0.151
• Gain(S,Wind)        = 0.048
• Gain(S,Temperature) = 0.029

where S denotes the collection of training examples.

Note: 0·log2 0 = 0

22
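These four values can be checked with the same count-based helpers (a sketch of my own; the counts are the ones shown on slides 19-21):

import math

def entropy(pos, neg):
    n = pos + neg
    return -sum(c / n * math.log2(c / n) for c in (pos, neg) if c > 0)

def gain(parent, branches):
    n = sum(parent)
    return entropy(*parent) - sum((p + q) / n * entropy(p, q) for p, q in branches)

S = (9, 5)                                       # 9 Yes, 5 No
print(gain(S, [(2, 3), (4, 0), (3, 2)]))         # Outlook      ~0.247
print(gain(S, [(3, 4), (6, 1)]))                 # Humidity     ~0.152 (0.151 above, rounded)
print(gain(S, [(6, 2), (3, 3)]))                 # Wind         ~0.048
print(gain(S, [(2, 2), (4, 2), (3, 1)]))         # Temperature  ~0.029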
ID3 Algorithm

Outlook (the attribute with the highest gain) becomes the root.

[D1,D2,…,D14]  [9+,5-]
Outlook
├─ Sunny    → Ssunny = [D1,D2,D8,D9,D11]  [2+,3-]  → test another attribute here
├─ Overcast → [D3,D7,D12,D13]  [4+,0-]  → Yes
└─ Rain     → [D4,D5,D6,D10,D14]  [3+,2-]  → test another attribute here

Which attribute should be tested at the Sunny node?
Gain(Ssunny, Humidity) = 0.970 - (3/5)·0.0 - (2/5)·0.0 = 0.970
Gain(Ssunny, Temp.)    = 0.970 - (2/5)·0.0 - (2/5)·1.0 - (1/5)·0.0 = 0.570
Gain(Ssunny, Wind)     = 0.970 - (2/5)·1.0 - (3/5)·0.918 = 0.019

23
ID3 Algorithm

Outlook
├─ Sunny    → Humidity
│             ├─ High   → No   [D1,D2,D8]
│             └─ Normal → Yes  [D9,D11]
├─ Overcast → Yes  [D3,D7,D12,D13]
└─ Rain     → Wind
              ├─ Strong → No   [D6,D14]
              └─ Weak   → Yes  [D4,D5,D10]

24
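For completeness, a self-contained recursive ID3 sketch in Python (my own illustrative code, not the exact implementation behind the slides); run on the table from slide 18 it produces exactly the tree above:

import math
from collections import Counter

# (Outlook, Temperature, Humidity, Wind, PlayTennis) rows from slide 18
ROWS = [
    ("Sunny","Hot","High","Weak","No"),        ("Sunny","Hot","High","Strong","No"),
    ("Overcast","Hot","High","Weak","Yes"),    ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"),     ("Rain","Cool","Normal","Strong","No"),
    ("Overcast","Cool","Normal","Weak","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"),    ("Rain","Mild","Normal","Strong","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"),  ("Overcast","Mild","High","Strong","Yes"),
    ("Overcast","Hot","Normal","Weak","Yes"),  ("Rain","Mild","High","Strong","No"),
]
ATTRS = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(rows):
    n = len(rows)
    return -sum(c / n * math.log2(c / n) for c in Counter(r[-1] for r in rows).values())

def gain(rows, attr):
    i = ATTRS[attr]
    parts = [[r for r in rows if r[i] == v] for v in {r[i] for r in rows}]
    return entropy(rows) - sum(len(p) / len(rows) * entropy(p) for p in parts)

def id3(rows, attrs):
    labels = [r[-1] for r in rows]
    if len(set(labels)) == 1:                        # all examples agree: leaf
        return labels[0]
    if not attrs:                                    # no attributes left: majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(rows, a))   # greedy choice by information gain
    i = ATTRS[best]
    return {best: {v: id3([r for r in rows if r[i] == v],
                          [a for a in attrs if a != best])
                   for v in {r[i] for r in rows}}}

print(id3(ROWS, list(ATTRS)))
# {'Outlook': {'Sunny': {'Humidity': {'High': 'No', 'Normal': 'Yes'}},
#              'Overcast': 'Yes',
#              'Rain': {'Wind': {'Strong': 'No', 'Weak': 'Yes'}}}}   (branch order may vary)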
Hypothesis Space Search ID3

(Figure: ID3 searches the space of decision trees from simple to complex,
starting from the empty tree and greedily adding attribute tests such as
A1, A2, A3, A4.)

25
Hypothesis Space Search ID3

• Hypothesis space is complete!
  – The target function is surely in there…
• Outputs a single hypothesis
• No backtracking on selected attributes (greedy search)
  – Local minima (suboptimal splits)
• Statistically-based search choices
  – Robust to noisy data
• Inductive bias (search bias)
  – Prefer shorter trees over longer ones
  – Place high-information-gain attributes close to the root

26
Converting a Tree to Rules

Outlook
├─ Sunny    → Humidity
│             ├─ High   → No
│             └─ Normal → Yes
├─ Overcast → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

R1: If (Outlook=Sunny) ∧ (Humidity=High)   Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast)                  Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong)      Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak)        Then PlayTennis=Yes

27
Continuous Valued Attributes

Create a discrete attribute to test a continuous one:
• Temperature = 24.5°C
• (Temperature > 20.0°C) ∈ {true, false}

Where to set the threshold?

Temperature   15°C  18°C  19°C  22°C  24°C  27°C
PlayTennis    No    No    Yes   Yes   Yes   No

(see the paper by [Fayyad, Irani 1993])

28
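One common way to pick the threshold, in the spirit of [Fayyad, Irani 1993], is to sort the values and evaluate candidate cut points midway between adjacent examples whose labels differ, keeping the cut with the highest information gain. A rough Python sketch of my own, on the Temperature example above:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    pairs = sorted(zip(values, labels))
    best_t, best_gain = None, -1.0
    for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]):
        if y1 == y2:
            continue                               # only cut where the label changes
        t = (x1 + x2) / 2                          # midpoint candidate threshold
        left  = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        g = entropy(labels) - (len(left) / len(pairs) * entropy(left)
                               + len(right) / len(pairs) * entropy(right))
        if g > best_gain:
            best_t, best_gain = t, g
    return best_t, best_gain

temps  = [15, 18, 19, 22, 24, 27]
labels = ["No", "No", "Yes", "Yes", "Yes", "No"]
print(best_threshold(temps, labels))   # (18.5, ~0.46): cut between 18°C and 19°C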
Attributes with many Values

• Problem: if an attribute has many values, maximizing
  InformationGain will select it.
• E.g.: imagine using Date=12.7.1996 as an attribute:
  it perfectly splits the data into subsets of size 1.

• Use GainRatio instead of information gain as the criterion:

  GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
  SplitInformation(S,A) = -Σi=1..c |Si|/|S| log2(|Si|/|S|)

  where Si is the subset of S for which attribute A has the value vi

29
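In code, GainRatio just divides the count-based gain by the entropy of the split itself (an illustrative sketch of my own):

import math

def entropy_of_counts(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain(parent, branches):           # parent = (pos, neg); branches = [(pos, neg), ...]
    n = sum(parent)
    return entropy_of_counts(parent) - sum(
        (p + q) / n * entropy_of_counts((p, q)) for p, q in branches)

def gain_ratio(parent, branches):
    split_info = entropy_of_counts([p + q for p, q in branches])   # SplitInformation(S,A)
    return gain(parent, branches) / split_info if split_info > 0 else 0.0

# Humidity vs. a Date-like attribute that splits the 14 examples into singletons:
print(gain_ratio((9, 5), [(3, 4), (6, 1)]))              # Humidity: ~0.15
print(gain_ratio((9, 5), [(1, 0)] * 9 + [(0, 1)] * 5))   # Date-like: gain 0.94 shrinks to ~0.25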
Attributes with Cost

Consider:
• Medical diagnosis: a blood test costs 1000 SEK (Swedish krona)
• Robotics: width_from_one_feet has a cost of 23 secs.

How to learn a consistent tree with low expected cost?
Replace Gain by:
  Gain²(S,A) / Cost(A)                              [Tan, Schlimmer 1990]
  (2^Gain(S,A) - 1) / (Cost(A) + 1)^w,  w ∈ [0,1]   [Nunez 1988]

30
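As a small sketch (my own code; the weight w is a user-chosen trade-off parameter), the two criteria differ only in how strongly the attribute's cost discounts its gain:

def tan_schlimmer(gain, cost):
    """Gain²(S,A) / Cost(A)   [Tan, Schlimmer 1990]"""
    return gain ** 2 / cost

def nunez(gain, cost, w=0.5):
    """(2^Gain(S,A) - 1) / (Cost(A) + 1)^w,  w in [0,1]   [Nunez 1988]"""
    return (2 ** gain - 1) / (cost + 1) ** w

print(tan_schlimmer(0.4, 2.0))   # 0.08
print(nunez(0.4, 2.0))           # ~0.185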
Unknown Attribute Values

What if some examples are missing values of attribute A?
Use the training example anyway and sort it through the tree:
• If node n tests A, assign the most common value of A among the other
  examples sorted to node n
• Or assign the most common value of A among the other examples with
  the same target value
• Or assign a probability pi to each possible value vi of A
  – Assign fraction pi of the example to each descendant in the tree

Classify new examples in the same fashion.

31
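A minimal sketch (my own code) of the first strategy, filling in a missing value with the most common observed value of that attribute among the examples reaching the node; the other two strategies refine this by also conditioning on the target value or by splitting the example fractionally:

from collections import Counter

def fill_missing(rows, attr_idx, missing=None):
    """Replace missing values of one attribute by its most common observed value."""
    observed = [r[attr_idx] for r in rows if r[attr_idx] is not missing]
    most_common = Counter(observed).most_common(1)[0][0]
    return [r[:attr_idx] + (most_common,) + r[attr_idx + 1:]
            if r[attr_idx] is missing else r
            for r in rows]

rows = [("Sunny", "High", "No"), (None, "High", "No"),
        ("Sunny", "Normal", "Yes"), ("Rain", "Normal", "Yes")]
print(fill_missing(rows, 0))   # the None Outlook becomes "Sunny"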
Occam’s Razor

Prefer shorter hypotheses.

Why prefer short hypotheses?

Argument in favor:
– There are fewer short hypotheses than long hypotheses
– A short hypothesis that fits the data is unlikely to be a coincidence
– A long hypothesis that fits the data might be a coincidence

Argument opposed:
– There are many ways to define small sets of hypotheses
– What is so special about small sets based on the size of a hypothesis?

32
Overfitting

Consider the error of hypothesis h over
• the training data: errortrain(h)
• the entire distribution D of the data: errorD(h)

Hypothesis h∈H overfits the training data if there is an
alternative hypothesis h’∈H such that
  errortrain(h) < errortrain(h’)
and
  errorD(h) > errorD(h’)

33
Overfitting in Decision Tree Learning

34
Avoid Overfitting

How can we avoid overfitting?
• Stop growing when a data split is not statistically significant
• Grow the full tree, then post-prune

35
Reduced-Error Pruning

Split the data into a training and a validation set.

Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each
   possible node (plus those below it)
2. Greedily remove the one that most improves the
   validation set accuracy

Produces the smallest version of the most accurate subtree.

36
Reduced-Error Pruning

Split the data into training and validation sets.

Pruning a decision node d consists of:
1. removing the subtree rooted at d,
2. making d a leaf node,
3. assigning d the most common classification of the training
   instances associated with d.

Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each
   possible node (plus those below it).
2. Greedily remove the one that most improves validation set accuracy.

Outlook
├─ sunny    → Humidity
│             ├─ high   → no
│             └─ normal → yes
├─ overcast → yes
└─ rainy    → Windy
              ├─ false → yes
              └─ true  → no

37
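A sketch of my own (not the slides' exact procedure) showing the idea on the nested-dict tree representation used earlier; it prunes bottom-up, replacing a subtree with a majority-class leaf whenever the leaf classifies the validation examples reaching that node at least as well:

from collections import Counter

def classify(node, x):
    while isinstance(node, dict):
        attr = next(iter(node))
        node = node[attr][x[attr]]
    return node

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def reduced_error_prune(node, validation):
    """`validation` holds the (instance, label) pairs that reach this node."""
    if not isinstance(node, dict) or not validation:
        return node
    attr = next(iter(node))
    for value in node[attr]:                          # prune the children first
        reaching = [(x, y) for x, y in validation if x[attr] == value]
        node[attr][value] = reduced_error_prune(node[attr][value], reaching)
    labels = [y for _, y in validation]
    leaf = majority(labels)                           # candidate replacement leaf
    subtree_correct = sum(classify(node, x) == y for x, y in validation)
    leaf_correct = sum(y == leaf for y in labels)
    return leaf if leaf_correct >= subtree_correct else node

TREE = {"Outlook": {
    "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}
VAL = [({"Outlook": "Sunny", "Humidity": "High",   "Wind": "Weak"},   "No"),
       ({"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Weak"},   "Yes"),
       ({"Outlook": "Rain",  "Humidity": "High",   "Wind": "Strong"}, "Yes")]
print(reduced_error_prune(TREE, VAL))
# On this (made-up) validation set the Rain/Wind subtree is pruned to a "Yes" leaf.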
Effect of Reduced Error Pruning

38
Rule Post-Pruning

• Infer the decision tree from the training set, allowing overfitting
• Convert the tree into an equivalent set of rules
• Prune each rule by removing any preconditions whose removal
  improves its estimated accuracy
• Sort the pruned rules by estimated accuracy and consider
  them in that order when classifying

39
Outlook
├─ Sunny    → Humidity
│             ├─ High   → No
│             └─ Normal → Yes
├─ Overcast → Yes
└─ Rain     → Wind
              ├─ Strong → No
              └─ Weak   → Yes

If (Outlook = Sunny) ∧ (Humidity = High) Then (PlayTennis = No)

40
Why convert the decision tree to rules before pruning?

• Allows distinguishing among the different contexts
  in which a decision node is used
• Removes the distinction between attribute tests
  near the root and those that occur near the leaves
• Enhances readability

41
Evaluation

• Training accuracy
  – How many training instances can be correctly classified based on
    the available data?
  – It is high when the tree is deep/large, or when there is little
    conflict among the training instances.
  – However, higher training accuracy does not mean better
    generalization.
• Testing accuracy
  – Given a number of new instances, how many of them can we
    correctly classify?
  – Estimated via cross-validation

42
Strengths

• Can generate understandable rules
• Perform classification without much computation
• Can handle continuous and categorical variables
• Provide a clear indication of which fields are most important
  for prediction or classification

43
Weakness

• Not well suited to predicting continuous attributes.
• Perform poorly with many classes and little data.
• Computationally expensive to train:
  – At each node, each candidate splitting field must be sorted before
    its best split can be found.
  – In some algorithms, combinations of fields are used and a search
    must be made for optimal combining weights.
  – Pruning algorithms can also be expensive, since many candidate
    sub-trees must be formed and compared.
• Do not handle non-rectangular regions well.

44
Cross-Validation

• Estimate the accuracy of a hypothesis induced by
  a supervised learning algorithm
• Predict the accuracy of a hypothesis over future
  unseen instances
• Select the optimal hypothesis from a given set of
  alternative hypotheses
  – Pruning decision trees
  – Model selection
  – Feature selection
• Combining multiple classifiers (boosting)

45
Holdout Method

• Partition the data set D = {(v1,y1),…,(vn,yn)} into a training set Dt and
  a validation (holdout) set Dh = D\Dt

    Training Dt | Validation Dh = D\Dt

  acc_h = (1/h) Σ(vi,yi)∈Dh δ(I(Dt,vi), yi),  where h = |Dh|

  I(Dt,vi): the output of the hypothesis induced by learner I
  trained on data Dt, for instance vi
  δ(i,j) = 1 if i=j and 0 otherwise

• Problems:
  – makes insufficient use of the data
  – training and validation set are correlated

46
Cross-Validation

• k-fold cross-validation splits the data set D into k mutually
  exclusive subsets (folds) D1, D2, …, Dk

    | D1 | D2 | D3 | D4 |

• Train and test the learning algorithm k times; each time it is
  trained on D\Di and tested on Di

  acc_cv = (1/n) Σ(vi,yi)∈D δ(I(D\Di, vi), yi)

  where Di is the fold containing (vi, yi)

47
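For reference, both the holdout estimate and k-fold cross-validation are a few lines with scikit-learn's decision tree (a hedged illustration using the Iris data; any classifier and data set could be substituted):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)

# Holdout: train on Dt, measure accuracy on the held-out validation set Dh
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
print(clf.fit(X_tr, y_tr).score(X_val, y_val))

# k-fold cross-validation (here k = 4 folds, as in the D1..D4 picture above)
scores = cross_val_score(clf, X, y, cv=4)
print(scores, scores.mean())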
Cross-Validation

• Uses all the data for training and testing
• Complete k-fold cross-validation splits the
  dataset of size m in all (m choose m/k) possible
  ways (choosing m/k instances out of m)
• Leave-n-out cross-validation sets n instances
  aside for testing and uses the remaining ones for
  training (leave-one-out is equivalent to m-fold
  cross-validation)
  – Leave-one-out is widely used
• In stratified cross-validation, the folds are
  stratified so that they contain approximately the
  same proportion of labels as the original data set

48
Basics of Information Theory
• Intuition: information takes the receiver from not knowing (uncertain)
  to knowing (certain).
• Information: a representation of the uncertainty about something.
• Amount of information: a measure of the degree of that uncertainty.
• Information content of a message: how much uncertainty the message removes.
• Computing the amount of information:
  – Source = {X1, X2, …, Xn}, a set of n symbols; each symbol Xi has probability p(Xi)
  – A message output by the source = a sequence of symbols from the set (Xi Xj … Xk)
  – Information content of the message (Xi Xj … Xk) = I(Xi Xj … Xk)
• Properties the information measure should have:
  – If p(X) = 1, then I(X) = 0
  – If p(X) = 0, then I(X) = ∞
  – If p(Xi) > p(Xj), then I(Xi) < I(Xj)
  – If Xi, Xj, Xk are independent, then I(XiXjXk) = I(Xi) + I(Xj) + I(Xk)

49
Basics of Information Theory
• Shannon's formula for the amount of information:
    I(X) = log2[1 / p(X)] = -log2 p(X)   (bits)
• [Example] A message X = (X1X2X3), where the symbols occur with probabilities
  p(X1), p(X2), p(X3). Find I(X).
  – Solution: I(X) = I(X1X2X3)
  –   = -log[p(X1X2X3)]
  –   = -log[p(X1) p(X2) p(X3)]   (the Xi are independent)
  –   = -log p(X1) - log p(X2) - log p(X3)
  –   = I(X1) + I(X2) + I(X3)
• [Example] Two messages of the same form (X1X2X3X4) come from two different sources.
  – Source 1: p(X1) = 1/2, p(X2) = 1/4, p(X3) = 1/8, p(X4) = 1/8
  – Source 2: p(X1) = 1/4, p(X2) = 1/4, p(X3) = 1/4, p(X4) = 1/4
  – Solution: Source 1: I(X1X2X3X4) = I(X1) + I(X2) + I(X3) + I(X4)
  –   = -log(1/2) - log(1/4) - log(1/8) - log(1/8)
  –   = 1 + 2 + 3 + 3 = 9 (bits)
  – Source 2: I(X1X2X3X4) = I(X1) + I(X2) + I(X3) + I(X4)
  –   = -log(1/4) - log(1/4) - log(1/4) - log(1/4)
  –   = 2 + 2 + 2 + 2 = 8 (bits)   (Imin)

50
Basics of Information Theory
• Entropy: the average information content per source symbol (its expected
  value), written H(X):

    H(X) = Σi=1..n p(Xi) I(Xi) = -Σi=1..n p(Xi) log p(Xi)

• [Example] The source alphabet is {X1, X2, X3, X4}, with symbol probabilities
  p(X1) = 1/2, p(X2) = 1/4, p(X3) = 1/8, p(X4) = 1/8.
  Compute the entropy of this source.
  (Exercise: if all the probabilities are equal, what is the entropy? Hmax)
  – Solution: H(X) = -Σi p(Xi) log p(Xi)
  –   = [-(1/2)log(1/2)] + [-(1/4)log(1/4)] + [-(1/8)log(1/8)] + [-(1/8)log(1/8)]
  –   = 1/2 + (1/4)×2 + (1/8)×3 + (1/8)×3
  –   = 1/2 + 1/2 + 3/8 + 3/8
  –   = 1.75 (bits)

51
