Unit 3
DECISION TREE REPRESENTATION
Figure: A decision tree for the concept PlayTennis. An example is classified by sorting it through the tree to the appropriate leaf node, then returning the classification associated with this leaf.
• Decision trees classify instances by sorting them down the tree from the root to
some leaf node, which provides the classification of the instance.
• Each node in the tree specifies a test of some attribute of the instance, and each
branch descending from that node corresponds to one of the possible values for
this attribute.
• An instance is classified by starting at the root node of the tree, testing the
attribute specified by this node, then moving down the tree branch corresponding
to the value of the attribute in the given example. This process is then repeated
for the subtree rooted at the new node.
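For illustration, a minimal Python sketch of this sorting procedure, assuming the tree is stored as nested dicts keyed by attribute name (the tree below encodes the PlayTennis tree from the figure; names such as classify are illustrative, not taken from the slides):

# A decision tree stored as nested dicts: internal nodes map an attribute name
# to {value: subtree}; leaves are class labels.
play_tennis_tree = {
    "Outlook": {
        "Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}},
    }
}

def classify(tree, instance):
    """Sort an instance down the tree from the root to a leaf and return the label."""
    while isinstance(tree, dict):
        attribute = next(iter(tree))       # attribute tested at this node
        value = instance[attribute]        # the instance's value for that attribute
        tree = tree[attribute][value]      # follow the matching branch
    return tree                            # a leaf: the classification

print(classify(play_tennis_tree,
               {"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Weak"}))   # Yes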
• Decision trees represent a disjunction of conjunctions of constraints on the
attribute values of instances.
• Each path from the tree root to a leaf corresponds to a conjunction of attribute
tests, and the tree itself to a disjunction of these conjunctions.
For example,
The decision tree shown in above figure corresponds to the expression
(Outlook = Sunny ∧ Humidity = Normal)
∨ (Outlook = Overcast)
∨ (Outlook = Rain ∧ Wind = Weak)
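The same expression can be read as a boolean predicate; the hypothetical plays_tennis function below (an illustrative sketch, not from the slides) returns True exactly for the Yes leaves of the tree:

def plays_tennis(outlook, humidity, wind):
    # Disjunction of the three root-to-"Yes"-leaf conjunctions from the figure.
    return ((outlook == "Sunny" and humidity == "Normal")
            or outlook == "Overcast"
            or (outlook == "Rain" and wind == "Weak"))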
APPROPRIATE PROBLEMS FOR
DECISION TREE LEARNING
Decision tree learning is generally best suited to problems with the following
characteristics:
1. Instances are represented by attribute-value pairs – Instances are described by a
fixed set of attributes (e.g., Temperature) and their values (e.g., Hot).
2. The target function has discrete output values – The decision tree assigns a
boolean classification (e.g., yes or no) to each example; extensions handle more
than two output values.
3. Disjunctive descriptions may be required – Decision trees naturally represent
disjunctive expressions.
4. The training data may contain errors – Decision tree learning methods are
robust to errors, both errors in classifications of the training examples and errors
in the attribute values that describe these examples.
5. The training data may contain missing attribute values – Decision tree
methods can be used even when some training examples have unknown values
• Decision tree learning has been applied to problems such as learning to classify
medical patients by their disease, equipment malfunctions by their cause, and
loan applicants by their likelihood of defaulting on payments.
• Such problems, in which the task is to classify examples into one of a discrete set
of possible categories, are often referred to as classification problems.
THE BASIC DECISION TREE LEARNING
ALGORITHM
• Most algorithms that have been developed for learning decision trees are
variations on a core algorithm that employs a top-down, greedy search through the
space of possible decision trees. This approach is exemplified by the ID3
algorithm and its successor C4.5
What is the ID3 algorithm?
• ID3 stands for Iterative Dichotomiser 3
• ID3 is a precursor to the C4.5 Algorithm.
• The ID3 algorithm was invented by Ross Quinlan in 1975
• Used to generate a decision tree from a given data set by employing a top-down,
greedy search that tests each attribute at every node of the tree.
• The resulting tree is used to classify future samples.
ID3 algorithm
ID3(Examples, Target_attribute, Attributes)
Examples are the training examples. Target_attribute is the attribute whose value is to be predicted
by the tree. Attributes is a list of other attributes that may be tested by the learned decision tree.
Returns a decision tree that correctly classifies the given Examples.
Create a Root node for the tree
If all Examples are positive, Return the single-node tree Root, with label = +
If all Examples are negative, Return the single-node tree Root, with label = -
If Attributes is empty, Return the single-node tree Root, with label = most common value of
Target_attribute in Examples
Otherwise Begin
    A ← the attribute from Attributes that best* classifies Examples
    The decision attribute for Root ← A
    For each possible value, vi, of A,
        Add a new tree branch below Root, corresponding to the test A = vi
        Let Examples_vi be the subset of Examples that have value vi for A
        If Examples_vi is empty
            Then below this new branch add a leaf node with label = most common value of
            Target_attribute in Examples
            Else below this new branch add the subtree
            ID3(Examples_vi, Target_attribute, Attributes – {A})
End
Return Root

* best: the attribute with the highest information gain, as defined in the next section
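For illustration, a compact Python rendering of this pseudocode; it is a sketch under stated assumptions, not the canonical implementation. Examples are dicts, the learned tree is a nested dict, and the attribute-selection heuristic is passed in as choose_attribute because information gain is only introduced in the next section:

from collections import Counter

def id3(examples, target_attribute, attributes, choose_attribute):
    """Recursive sketch of the pseudocode above.

    examples         : list of dicts mapping attribute names (including the target) to values.
    choose_attribute : heuristic that picks the attribute that best classifies the examples
                       (ID3 uses information gain, introduced in the next section).
    Returns a nested-dict tree {attribute: {value: subtree_or_label}}, or a bare label.
    For simplicity this sketch branches only on attribute values observed in `examples`,
    so the empty-subset case of the pseudocode never arises.
    """
    labels = [e[target_attribute] for e in examples]
    if len(set(labels)) == 1:                          # all examples share one classification
        return labels[0]
    if not attributes:                                 # no attributes left to test
        return Counter(labels).most_common(1)[0][0]    # most common value of the target
    a = choose_attribute(examples, target_attribute, attributes)
    root = {a: {}}
    for v in sorted({e[a] for e in examples}):         # one branch per value of A
        examples_v = [e for e in examples if e[a] == v]
        root[a][v] = id3(examples_v, target_attribute,
                         [x for x in attributes if x != a], choose_attribute)
    return root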
Which Attribute Is the Best Classifier?
• The central choice in the ID3 algorithm is selecting which attribute to test at each
node in the tree.
• ID3 uses a statistical property, called information gain, that measures how well a
given attribute separates the training examples according to their target classification.
• ID3 uses this information gain measure to select among the candidate attributes at
each step while growing the tree.
ENTROPY MEASURES HOMOGENEITY OF EXAMPLES
Given a collection S containing positive and negative examples of some target concept,
the entropy of S relative to this boolean classification is

Entropy(S) = - p+ log2 p+ - p- log2 p-

Where,
p+ is the proportion of positive examples in S
p- is the proportion of negative examples in S.
Example: Entropy
• Suppose S is a collection of 14 examples of some boolean concept, including 9
positive and 5 negative examples (notation: [9+, 5-]). Then the entropy of S relative
to this boolean classification is

Entropy([9+, 5-]) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
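A small Python check of this calculation (the entropy helper below, written over positive/negative counts, is illustrative):

import math

def entropy(pos, neg):
    """Entropy of a boolean-labelled collection with pos positive and neg negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:                       # treat 0 * log2(0) as 0
            p = count / total
            result -= p * math.log2(p)
    return result

print(round(entropy(9, 5), 3))          # 0.94  -> the Entropy([9+, 5-]) value above
print(entropy(7, 7))                    # 1.0   -> equal numbers of positive and negative examples
print(entropy(14, 0))                   # 0.0   -> all members belong to the same class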
• The entropy is 0 if all members of S belong to the same class
• The entropy is 1 when the collection contains an equal number of positive and
negative examples
• If the collection contains unequal numbers of positive and negative examples, the
entropy is between 0 and 1
INFORMATION GAIN MEASURES THE EXPECTED REDUCTION IN ENTROPY
The information gain, Gain(S, A), of an attribute A relative to a collection of examples S
is defined as

Gain(S, A) = Entropy(S) - Σ (|Sv| / |S|) Entropy(Sv),  summed over v ∈ Values(A)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S
for which attribute A has value v.
Example: Information gain
For the PlayTennis training examples shown in the table below, consider the attribute Wind,
with values Weak and Strong:

S = [9+, 5-]
S_Weak = [6+, 2-]
S_Strong = [3+, 3-]

Gain(S, Wind) = Entropy(S) - (8/14) Entropy(S_Weak) - (6/14) Entropy(S_Strong)
              = 0.940 - (8/14)(0.811) - (6/14)(1.00)
              = 0.048
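A quick Python check of this figure, using illustrative helpers over (positive, negative) counts:

import math

def entropy(pos, neg):
    """Entropy of a collection with pos positive and neg negative examples."""
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

def gain(parent, subsets):
    """Information gain, given (pos, neg) counts for the parent set and for each value's subset."""
    total = sum(parent)
    expected = sum(((p + n) / total) * entropy(p, n) for p, n in subsets)
    return entropy(*parent) - expected

# Wind splits S = [9+, 5-] into Weak = [6+, 2-] and Strong = [3+, 3-]
print(round(gain((9, 5), [(6, 2), (3, 3)]), 3))    # 0.048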
An Illustrative Example
• To illustrate the operation of ID3, consider the learning task represented by the
training examples in the table below.
• Here the target attribute is PlayTennis, which can have the values Yes or No for
different days.
• Consider the first step through the algorithm, in which the topmost node of the
decision tree is created.
Day Outlook Temperature Humidity Wind PlayTennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
ID3 determines the information gain for each candidate attribute (i.e., Outlook,
Temperature, Humidity, and Wind), then selects the one with highest information
gain
The information gain values for all four attributes are

Gain(S, Outlook) = 0.246
Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048
Gain(S, Temperature) = 0.029
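These values can be reproduced directly from the table above. The sketch below is self-contained; the dict encoding and helper names are assumptions, and the printed gains agree with the figures above up to rounding in the third decimal place:

import math
from collections import Counter

# The 14 PlayTennis training examples from the table above (D1 ... D14, in order).
COLUMNS = ["Outlook", "Temperature", "Humidity", "Wind", "PlayTennis"]
ROWS = [
    ("Sunny", "Hot", "High", "Weak", "No"),          ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),      ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),       ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),      ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "High", "Strong", "No"),
]
EXAMPLES = [dict(zip(COLUMNS, row)) for row in ROWS]

def entropy(examples, target="PlayTennis"):
    """Entropy of a set of examples with respect to the target attribute."""
    counts = Counter(e[target] for e in examples)
    n = len(examples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(examples, attribute, target="PlayTennis"):
    """Expected reduction in entropy from partitioning the examples on `attribute`."""
    n = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += (len(subset) / n) * entropy(subset, target)
    return entropy(examples, target) - remainder

for attr in ("Outlook", "Humidity", "Wind", "Temperature"):
    print(f"Gain(S, {attr}) = {information_gain(EXAMPLES, attr):.3f}")
# Outlook has the highest gain, so ID3 selects it as the decision attribute for the root node.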
• According to the information gain measure, the Outlook attribute provides the
best prediction of the target attribute, PlayTennis, over the training examples.
Therefore, Outlook is selected as the decision attribute for the root node, and
branches are created below the root for each of its possible values, i.e., Sunny,
Overcast, and Rain.
The process then repeats for each non-leaf descendant node, using only the training
examples sorted to that node. For example, the examples sorted down the Rain branch are
S_Rain = {D4, D5, D6, D10, D14}
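Continuing the sketch with the same assumed dict encoding, the gains recomputed over S_Rain show that Wind (which separates these five examples perfectly) is chosen for the node below the Rain branch:

import math
from collections import Counter

# The five examples sorted down the Rain branch (D4, D5, D6, D10, D14 from the table above).
S_RAIN = [
    {"Temperature": "Mild", "Humidity": "High",   "Wind": "Weak",   "PlayTennis": "Yes"},  # D4
    {"Temperature": "Cool", "Humidity": "Normal", "Wind": "Weak",   "PlayTennis": "Yes"},  # D5
    {"Temperature": "Cool", "Humidity": "Normal", "Wind": "Strong", "PlayTennis": "No"},   # D6
    {"Temperature": "Mild", "Humidity": "Normal", "Wind": "Weak",   "PlayTennis": "Yes"},  # D10
    {"Temperature": "Mild", "Humidity": "High",   "Wind": "Strong", "PlayTennis": "No"},   # D14
]

def entropy(examples):
    counts = Counter(e["PlayTennis"] for e in examples)
    n = len(examples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(examples, attribute):
    n = len(examples)
    values = {e[attribute] for e in examples}
    remainder = sum(
        (len(subset) / n) * entropy(subset)
        for subset in ([e for e in examples if e[attribute] == v] for v in values)
    )
    return entropy(examples) - remainder

for attr in ("Temperature", "Humidity", "Wind"):
    print(f"Gain(S_Rain, {attr}) = {information_gain(S_RAIN, attr):.3f}")
# Gain(S_Rain, Wind) = 0.971, versus 0.020 for Temperature and Humidity,
# so ID3 tests Wind below the Rain branch (Weak -> Yes, Strong -> No).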
HYPOTHESIS SPACE SEARCH IN DECISION TREE
LEARNING
• ID3 can be characterized as searching a space of hypotheses for one that fits the
training examples.
• The hypothesis space searched by ID3 is the set of possible decision trees.
• ID3 performs a simple-to-complex, hill-climbing search through this hypothesis
space, beginning with the empty tree, then considering progressively more
elaborate hypotheses in search of a decision tree that correctly classifies the
training data.
Figure: Hypothesis space search by ID3. ID3 searches through the space of possible decision trees from simplest to increasingly complex, guided by the information gain heuristic.
By viewing ID3 in terms of its search space and search strategy, we can get some
insight into its capabilities and limitations
1. ID3's hypothesis space of all decision trees is a complete space of finite discrete-
valued functions, relative to the available attributes, because every finite discrete-
valued function can be represented by some decision tree.
• ID3 therefore avoids one of the major risks of methods that search incomplete hypothesis
spaces: that the hypothesis space might not contain the target function.
2. ID3 maintains only a single current hypothesis as it searches through the space
of decision trees.
This contrasts with the earlier version space CANDIDATE-ELIMINATION method, which
maintains the set of all hypotheses consistent with the available training
examples.
By determining only a single hypothesis, ID3 loses the capabilities that follow from
explicitly representing all consistent hypotheses.
For example, it does not have the ability to determine how many alternative
decision trees are consistent with the available training data, or to pose new
instance queries that optimally resolve among these competing hypotheses
3. ID3 in its pure form performs no backtracking in its search. Once it selects an
attribute to test at a particular level in the tree, it never backtracks to reconsider this
choice. It therefore runs the usual risk of hill-climbing search without backtracking:
converging to a locally optimal solution that is not globally optimal.
• In the case of ID3, a locally optimal solution corresponds to the decision tree it
selects along the single search path it explores. However, this locally optimal
solution may be less desirable than trees that would have been encountered along a
different branch of the search.
4. ID3 uses all training examples at each step in the search to make statistically
based decisions regarding how to refine its current hypothesis.
• One advantage of using statistical properties of all the examples is that the
resulting search is much less sensitive to errors in individual training examples.
• ID3 can be easily extended to handle noisy training data by modifying its
termination criterion to accept hypotheses that imperfectly fit the training data.
INDUCTIVE BIAS IN DECISION TREE LEARNING
Inductive bias is the set of assumptions that, together with the training data,
deductively justify the classifications assigned by the learner to future instances
Given a collection of training examples, there are typically many decision trees
consistent with these examples. Which of these decision trees does ID3 choose?
Approximate inductive bias of ID3: Shorter trees are preferred over larger trees
• Consider an algorithm that begins with the empty tree and searches breadth first
through progressively more complex trees.
• First considering all trees of depth 1, then all trees of depth 2, etc.
• Once it finds a decision tree consistent with the training data, it returns the
smallest consistent tree at that search depth (e.g., the tree with the fewest nodes).
• Let us call this breadth-first search algorithm BFS-ID3.
• BFS-ID3 finds a shortest decision tree and thus exhibits the bias "shorter trees are
preferred over longer trees".
A closer approximation to the inductive bias of ID3: Shorter trees are preferred
over longer trees. Trees that place high information gain attributes close to the root
are preferred over those that do not.
Restriction Biases and Preference Biases
Difference between the types of inductive bias exhibited by ID3 and by the CANDIDATE-
ELIMINATION Algorithm.
ID3
• ID3 searches a complete hypothesis space
• It searches incompletely through this space, from simple to complex hypotheses, until its
termination condition is met
• Its inductive bias is solely a consequence of the ordering of hypotheses by its search strategy. Its
hypothesis space introduces no additional bias
CANDIDATE-ELIMINATION Algorithm
• The version space CANDIDATE-ELIMINATION Algorithm searches an incomplete hypothesis
space
• It searches this space completely, finding every hypothesis consistent with the training data.
• Its inductive bias is solely a consequence of the expressive power of its hypothesis
representation. Its search strategy introduces no additional bias
Restriction Biases and Preference Biases
• The inductive bias of ID3 is a preference for certain hypotheses over others (e.g., a
preference for shorter hypotheses over larger ones), with no hard restriction on the
hypotheses that can eventually be enumerated. This form of bias is called a
preference bias or a search bias.
• In contrast, the bias of the CANDIDATE-ELIMINATION algorithm is a categorical
restriction on the set of hypotheses considered. This form of bias is called a
restriction bias or a language bias.
Which type of inductive bias is preferred in order to generalize beyond the training
data, a preference bias or restriction bias?
• A preference bias is more desirable than a restriction bias, because it allows the
learner to work within a complete hypothesis space that is assured to contain the
unknown target function.
• In contrast, a restriction bias that strictly limits the set of potential hypotheses is
generally less desirable, because it introduces the possibility of excluding the
unknown target function altogether.
Occam's razor
Occam's razor is the problem-solving principle that the simplest solution tends to be
the right one: when presented with competing hypotheses that solve a problem, one
should select the one with the fewest assumptions.
Occam's razor: “Prefer the simplest hypothesis that fits the data”.
Why Prefer Short Hypotheses ?
Argument in favour:
There are fewer short hypotheses than long ones:
• A short hypothesis that fits the training data is unlikely to do so by coincidence.
• A long hypothesis that fits the training data may well do so by coincidence, because
there are many complex hypotheses that fit the current training data but fail to
generalize correctly to subsequent data.
Argument opposed:
• There are few small trees, and our a priori chance of finding one consistent with an
arbitrary set of data is therefore small. The difficulty is that there are very many
small sets of hypotheses that one could define; why should the set of short decision
trees be any more relevant than these other small sets?
• The size of a hypothesis is determined by the representation used internally by the
learner. Two learners using different internal representations can therefore arrive at
different hypotheses from the same training examples, each justifying its
contradictory conclusion by Occam's razor. On this basis we might be tempted to
reject Occam's razor altogether.
ISSUES IN DECISION TREE LEARNING
1. Avoiding Overfitting the Data
• The ID3 algorithm grows each branch of the tree just deeply enough to perfectly
classify the training examples. This can lead to difficulties when there is noise in
the data, or when the number of training examples is too small to produce a
representative sample of the true target function; in either case ID3 can produce
trees that overfit the training examples.
• Definition: given a hypothesis space H, a hypothesis h ∈ H is said to overfit the
training data if there exists some alternative hypothesis h' ∈ H such that h has
smaller error than h' over the training examples, but h' has smaller error than h
over the entire distribution of instances.
• The figure below illustrates the impact of overfitting in a typical application of decision tree
learning.
• The horizontal axis of the plot indicates the total number of nodes in the decision tree, as the tree is being
constructed. The vertical axis indicates the accuracy of predictions made by the tree.
• The solid line shows the accuracy of the decision tree over the training examples. The broken line shows
accuracy measured over an independent set of test examples.
• The accuracy of the tree over the training examples increases monotonically as the tree is grown, whereas the
accuracy measured over the independent test examples first increases, then decreases.
How can it be possible for tree h to fit the training examples better than h', yet perform
more poorly over subsequent examples?
1. Overfitting can occur when the training examples contain random errors or noise.
2. Overfitting is also possible when small numbers of examples are associated with leaf nodes,
so that coincidental regularities in those few examples shape the tree.
Approaches to avoiding overfitting in decision tree learning
• Pre-pruning (avoidance): Stop growing the tree earlier, before it reaches the point where
it perfectly classifies the training data
• Post-pruning (recovery): Allow the tree to overfit the data, and then post-prune the tree
Reduced-Error Pruning
• In reduced-error pruning, each of the decision nodes in the tree is considered a
candidate for pruning. The available data are split into a training set used to build
the tree and a separate validation set used to evaluate the effect of pruning.
• Pruning a decision node consists of removing the subtree rooted at that node,
making it a leaf node, and assigning it the most common classification of the
training examples affiliated with that node.
• Nodes are removed only if the resulting pruned tree performs no worse than the
original over the validation set.
• Reduced error pruning has the effect that any leaf node added due to coincidental
regularities in the training set is likely to be pruned because these same
coincidences are unlikely to occur in the validation set
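For illustration, a rough Python sketch of reduced-error pruning over the nested-dict tree representation used in the earlier sketches. The helper names and the greedy loop are assumptions, not a prescribed implementation; the sketch repeatedly replaces the decision node whose conversion to a majority-class leaf leaves validation accuracy no worse:

import copy
from collections import Counter

def classify(tree, instance):
    """Sort an instance down a nested-dict tree (as in the earlier sketches) to a leaf label."""
    while isinstance(tree, dict):
        attribute = next(iter(tree))
        tree = tree[attribute][instance[attribute]]   # assumes a branch exists for the value
    return tree

def accuracy(tree, examples, target):
    """Fraction of the examples that the tree classifies correctly."""
    return sum(classify(tree, e) == e[target] for e in examples) / len(examples)

def internal_nodes(tree, path=()):
    """Yield the (attribute, value) path from the root to every internal (decision) node."""
    if isinstance(tree, dict):
        yield path
        attribute = next(iter(tree))
        for value, subtree in tree[attribute].items():
            yield from internal_nodes(subtree, path + ((attribute, value),))

def pruned_copy(tree, path, train, target):
    """Copy of the tree with the node at `path` replaced by a leaf labelled with the most
    common classification of the training examples affiliated with that node."""
    new_tree = copy.deepcopy(tree)
    node, branch, examples = new_tree, None, train
    for attribute, value in path:
        examples = [e for e in examples if e[attribute] == value]
        branch = (node[attribute], value)
        node = node[attribute][value]
    leaf = Counter(e[target] for e in examples).most_common(1)[0][0]
    if branch is None:                       # pruning the root collapses the tree to a single leaf
        return leaf
    branch[0][branch[1]] = leaf
    return new_tree

def reduced_error_prune(tree, train, validation, target="PlayTennis"):
    """Greedily prune decision nodes while accuracy on the validation set does not get worse."""
    while isinstance(tree, dict):
        candidates = [pruned_copy(tree, p, train, target) for p in internal_nodes(tree)]
        best = max(candidates, key=lambda t: accuracy(t, validation, target))
        if accuracy(best, validation, target) < accuracy(tree, validation, target):
            return tree                      # every possible prune hurts validation accuracy
        tree = best
    return tree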