
Rashtreeya Sikshana Samithi Trust

RV Institute of Technology and Management®


(Affiliated to VTU, Belagavi)
JP Nagar, Bengaluru – 560076

Department of Information Science and Engineering

Course Name: Machine Learning

Course Code: BCS602

VI Semester 2022 Scheme

Prepared By:
Prof. Samatha R Swamy,
Assistant Professor,
Department of Information Science and Engineering,
RV Institute of Technology and Management(RVITM),
Bengaluru - 560076
Email: samathars.rvitm@rvei.edu.in


MODULE - 3

Chapter 6 - Decision Tree Learning


Decision tree learning is a method for approximating discrete-valued target functions, in
which the learned function is represented by a decision tree.

DECISION TREE REPRESENTATION

●​ Decision trees classify instances by sorting them down the tree from the root to some
leaf node, which provides the classification of the instance.
●​ Each node in the tree specifies a test of some attribute of the instance, and each branch
descending from that node corresponds to one of the possible values for this attribute.
●​ An instance is classified by starting at the root node of the tree, testing the attribute
specified by this node, then moving down the tree branch corresponding to the value
of the attribute in the given example. This process is then repeated for the subtree
rooted at the new node.


FIGURE: A decision tree for the concept of PlayTennis. An example is classified by sorting it
through the tree to the appropriate leaf node, then returning the classification associated with
this leaf.

●	Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances.
●	Each path from the tree root to a leaf corresponds to a conjunction of attribute tests, and the tree itself to a disjunction of these conjunctions.

For example, the decision tree shown in the above figure corresponds to the expression

(Outlook = Sunny ∧ Humidity = Normal)
∨ (Outlook = Overcast)
∨ (Outlook = Rain ∧ Wind = Weak)

APPROPRIATE PROBLEMS FOR DECISION TREE LEARNING

Decision tree learning is generally best suited to problems with the following characteristics:

1.	Instances are represented by attribute-value pairs – Instances are described by a fixed set of attributes and their values.

2.​ The target function has discrete output values – The decision tree assigns a Boolean
classification (e.g., yes or no) to each example. Decision tree methods easily extend
to learning functions with more than two possible output values.

3.​ Disjunctive descriptions may be required

4.​ The training data may contain errors – Decision tree learning methods are robust to
errors, both errors in classifications of the training examples and errors in the attribute
values that describe these examples.

5.​ The training data may contain missing attribute values – Decision tree methods
can be used even when some training examples have unknown values


THE BASIC DECISION TREE LEARNING ALGORITHM

The basic algorithm is ID3 which learns decision trees by constructing them top-down

ID3(Examples, Target_attribute, Attributes)

Examples are the training examples. Target_attribute is the attribute whose value is to
be predicted by the tree. Attributes is a list of other attributes that may be tested by the
learned decision tree. Returns a decision tree that correctly classifies the given
Examples.

●	Create a Root node for the tree
●	If all Examples are positive, Return the single-node tree Root, with label = +
●	If all Examples are negative, Return the single-node tree Root, with label = −
●	If Attributes is empty, Return the single-node tree Root, with label = most common value of Target_attribute in Examples

●​ Otherwise, Begin
●​ A ← the attribute from Attributes that best* classifies Examples
●​ The decision attribute for Root ← A
●​ For each possible value, vi, of A,
●​ Add a new tree branch below Root, corresponding to the test A = vi

●	Let Examples_vi be the subset of Examples that have value vi for A
●	If Examples_vi is empty
●	Then below this new branch add a leaf node with label = most common value of Target_attribute in Examples
●	Else below this new branch add the subtree
ID3(Examples_vi, Target_attribute, Attributes – {A})
●​ End
●​ Return Root

*​The best attribute is the one with the highest information gain

TABLE: Summary of the ID3 algorithm specialized to learning Boolean-valued functions.


ID3 is a greedy algorithm that grows the tree top-down, at each node selecting the attribute
that best classifies the local training examples. This process continues until the tree perfectly
classifies the training examples, or until all attributes have been used
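
As a concrete illustration of the procedure summarized above, the following is a minimal Python sketch of ID3 for nominal attributes. The representation (each example as a dictionary, the tree as a nested dictionary) and the helper names are illustrative choices, not part of the original algorithm; entropy and information gain are defined in the next sections, and minimal versions are included here only so the sketch is self-contained.

import math
from collections import Counter

def entropy(examples, target):
    # Impurity of the target labels in a list of example dictionaries
    counts = Counter(ex[target] for ex in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(examples, attribute, target):
    # Expected reduction in entropy from partitioning on `attribute`
    total = len(examples)
    remainder = 0.0
    for value in set(ex[attribute] for ex in examples):
        subset = [ex for ex in examples if ex[attribute] == value]
        remainder += (len(subset) / total) * entropy(subset, target)
    return entropy(examples, target) - remainder

def id3(examples, target, attributes):
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:          # all Examples positive or all negative
        return labels[0]
    if not attributes:                 # no attributes left to test
        return Counter(labels).most_common(1)[0][0]
    # A <- the attribute that best classifies Examples (highest information gain)
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    tree = {best: {}}
    for value in set(ex[best] for ex in examples):
        subset = [ex for ex in examples if ex[best] == value]
        # Recurse on the remaining attributes; values of `best` that do not
        # occur in `examples` are simply omitted here (ID3 would add a majority leaf).
        tree[best][value] = id3(subset, target, [a for a in attributes if a != best])
    return tree

Called on the PlayTennis table in the illustrative example below, id3(examples, 'PlayTennis', ['Outlook', 'Temperature', 'Humidity', 'Wind']) reproduces the tree of the earlier figure.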


Which Attribute Is the Best Classifier?

●​ The central choice in the ID3 algorithm is selecting which attribute to test at each
node in the tree.
●	A statistical property called information gain measures how well a given attribute separates the training examples according to their target classification.
●​ ID3 uses an information gain measure to select among the candidate attributes at
each step while growing the tree.

ENTROPY MEASURES HOMOGENEITY OF EXAMPLES

To define information gain, we begin by defining a measure called entropy, which measures the impurity of a collection of examples.

Given a collection S, containing positive and negative examples of some target concept, the entropy of S relative to this Boolean classification is

Entropy(S) = − p+ log2(p+) − p− log2(p−)

where

p+ is the proportion of positive examples in S


p- is the proportion of negative examples in S.

Example:
Suppose S is a collection of 14 examples of some Boolean concept, including 9 positive and 5 negative examples ([9+, 5−]). Then the entropy of S relative to this Boolean classification is

Entropy([9+, 5−]) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940

●​ The entropy is 0 if all members of S belong to the same class


●​ The entropy is 1 when the collection contains an equal number of positive and
negative examples
●​ If the collection contains unequal numbers of positive and negative examples,
the entropy is between 0 and 1
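
These properties can be checked numerically; a minimal sketch (the function name is an illustrative choice):

import math

def entropy_binary(p_pos, p_neg):
    # Entropy of a Boolean collection from its class proportions;
    # 0 * log2(0) is taken to be 0.
    return sum(-p * math.log2(p) for p in (p_pos, p_neg) if p > 0)

print(entropy_binary(9/14, 5/14))    # ~0.940  (the [9+, 5-] collection above)
print(entropy_binary(7/14, 7/14))    # 1.0     (equal numbers of + and -)
print(entropy_binary(14/14, 0/14))   # 0.0     (all members in the same class)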


INFORMATION GAIN MEASURES THE EXPECTED REDUCTION IN ENTROPY

●	Information gain is the expected reduction in entropy caused by partitioning the examples according to a given attribute.
●	The information gain, Gain(S, A), of an attribute A relative to a collection of examples S, is defined as

Gain(S, A) = Entropy(S) − Σ_(v ∈ Values(A)) (|Sv| / |S|) Entropy(Sv)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A has value v.
Example: Information gain

Let Values(Wind) = {Weak, Strong}
S = [9+, 5−]
S_Weak = [6+, 2−]
S_Strong = [3+, 3−]

Information gain of attribute Wind:

Gain(S, Wind) = Entropy(S) − (8/14) Entropy(S_Weak) − (6/14) Entropy(S_Strong)
              = 0.940 − (8/14)(0.811) − (6/14)(1.00)
              = 0.048
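
The Wind calculation can be reproduced directly from the class counts; a minimal sketch (names are illustrative):

import math

def entropy(pos, neg):
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c > 0)

S, S_weak, S_strong = (9, 5), (6, 2), (3, 3)   # [9+,5-], [6+,2-], [3+,3-]

gain = (entropy(*S)
        - (sum(S_weak) / sum(S)) * entropy(*S_weak)
        - (sum(S_strong) / sum(S)) * entropy(*S_strong))
print(round(gain, 3))   # 0.048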

An Illustrative Example

●​ To illustrate the operation of ID3, consider the learning task represented by the
training examples of the below table.
●​ Here the target attribute is PlayTennis, which can have values yes or no for different days.
●​ Consider the first step through the algorithm, in which the topmost node of the
decision tree is created.

Day Outlook Temperature Humidity Wind PlayTennis


D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No

●​ ID3 determines the information gain for each candidate attribute (i.e., Outlook,
Temperature, Humidity, and Wind), then selects the one with highest information
gain.


●	The information gain values for all four attributes are

Gain(S, Outlook) = 0.246
Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048
Gain(S, Temperature) = 0.029

●​ According to the information gain measure, the Outlook attribute provides the best
prediction of the target attribute, PlayTennis, over the training examples. Therefore,
Outlook is selected as the decision attribute for the root node, and branches are created
below the root for each of its possible values i.e., Sunny, Overcast, and Rain.
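
As a numeric check (an illustrative sketch), Gain(S, Outlook) can be recomputed from the class counts in the table: Sunny contains [2+, 3−], Overcast [4+, 0−], and Rain [3+, 2−].

import math

def entropy(pos, neg):
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c > 0)

S = (9, 5)
outlook_branches = {'Sunny': (2, 3), 'Overcast': (4, 0), 'Rain': (3, 2)}

gain = entropy(*S) - sum(
    (sum(counts) / sum(S)) * entropy(*counts)
    for counts in outlook_branches.values()
)
print(round(gain, 3))   # 0.247, i.e. the 0.246 reported above up to rounding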


S_Rain = {D4, D5, D6, D10, D14}

Gain(S_Rain, Humidity) = 0.970 − (2/5)(1.0) − (3/5)(0.918) = 0.019
Gain(S_Rain, Temperature) = 0.970 − (0/5)(0.0) − (3/5)(0.918) − (2/5)(1.0) = 0.019
Gain(S_Rain, Wind) = 0.970 − (3/5)(0.0) − (2/5)(0.0) = 0.970


HYPOTHESIS SPACE SEARCH IN DECISION TREE LEARNING

●​ ID3 can be characterized as searching a space of hypotheses for one that fits the
training examples.
●​ The hypothesis space searched by ID3 is the set of possible decision trees.
●​ ID3 performs a simple-to-complex, hill-climbing search through this hypothesis space,
beginning with the empty tree, and then considering progressively more elaborate
hypotheses in search of a decision tree that correctly classifies the training data

Figure: Hypothesis space search by ID3. ID3 searches through the space of possible decision
trees from simplest to increasingly complex, guided by the information gain heuristic.

By viewing ID3 in terms of its search space and search strategy, there is some insight into its
capabilities and limitations

1.	ID3's hypothesis space of all decision trees is a complete space of finite discrete-valued functions, relative to the available attributes, because every finite discrete-valued function can be represented by some decision tree.
ID3 therefore avoids one of the major risks of methods that search incomplete hypothesis spaces: that the hypothesis space might not contain the target function.

2.​ ID3 maintains only a single current hypothesis as it searches through the space of
decision trees.
This contrasts with the earlier version space CANDIDATE-ELIMINATION method, which maintains the set of all hypotheses consistent with the available training examples.

By determining only a single hypothesis, ID3 loses the capabilities that follow from
explicitly representing all consistent hypotheses.
For example, it does not have the ability to determine how many alternative decision
trees are consistent with the available training data or to pose new instance queries that
optimally resolve among these competing hypotheses

3.​ ID3 in its pure form performs no backtracking in its search. Once it selects an
attribute to test at a particular level in the tree, it never backtracks to reconsider this
choice.
In the case of ID3, a locally optimal solution corresponds to the decision tree it selects
along the single search path it explores. However, this locally optimal solution may be
less desirable than trees that would have been encountered along a different branch of
the search.

4.​ ID3 uses all training examples at each step in the search to make statistically
based decisions regarding how to refine its current hypothesis.
One advantage of using statistical properties of all the examples is that the resulting
search is much less sensitive to errors in individual training examples.
ID3 can be easily extended to handle noisy training data by modifying its termination
criterion to accept hypotheses that imperfectly fit the training data.

INDUCTIVE BIAS IN DECISION TREE LEARNING

Inductive bias is the set of assumptions that, together with the training data, deductively justify
the classifications assigned by the learner to future instances

Given a collection of training examples, there are typically many decision trees consistent
with these examples. Which of these decision trees does ID3 choose?

ID3's search strategy

●	Selects in favor of shorter trees over longer ones
●	Selects trees that place the attributes with the highest information gain closest to the root.

Approximate inductive bias of ID3: Shorter trees are preferred over larger trees

●​ Consider an algorithm that begins with the empty tree and searches breadth first
through progressively more complex trees.
●​ First consider all trees of depth 1, then all trees of depth 2, etc.
●	Once it finds a decision tree consistent with the training data, it returns the smallest consistent tree at that search depth (e.g., the tree with the fewest nodes).
●	Let us call this breadth-first search algorithm BFS-ID3.
●	BFS-ID3 finds the shortest decision tree and thus exhibits the bias "shorter trees are preferred over longer trees."

A closer approximation to the inductive bias of ID3: Shorter trees are preferred over longer
trees. Trees that place high information gain attributes close to the root are preferred over
those that do not.

●​ ID3 can be viewed as an efficient approximation to BFS-ID3, using a greedy heuristic


search to attempt to find the shortest tree without conducting the entire breadth-first
search through the hypothesis space.
●​ Because ID3 uses the information gain heuristic and a hill-climbing strategy, it
exhibits a more complex bias than BFS-ID3.
●​ In particular, it does not always find the shortest consistent tree, and it is biased to
favor trees that place attributes with high information gain closest to the root.

Restriction Biases and Preference Biases

Difference between the types of inductive bias exhibited by ID3 and by the CANDIDATE-
ELIMINATION Algorithm.
ID3:
●​ ID3 searches a complete hypothesis space
●​ It searches incompletely through this space, from simple to complex hypotheses,
until its termination condition is met
●​ Its inductive bias is solely a consequence of the ordering of hypotheses by its
search strategy. Its hypothesis space introduces no additional bias

CANDIDATE-ELIMINATION Algorithm:
●​ The version space CANDIDATE-ELIMINATION Algorithm searches an incomplete
hypothesis space
●​ It searches this space completely, finding every hypothesis consistent with the training
data.
●​ Its inductive bias is solely a consequence of the expressive power of its
hypothesis representation. Its search strategy introduces no additional bias.

Preference bias – The inductive bias of ID3 is a preference for certain hypotheses over others
(e.g., preference for shorter hypotheses over larger hypotheses), with no hard restriction on
the hypotheses that can be eventually enumerated. This form of bias is called a preference
bias or a search bias.

Restriction bias – The bias of the CANDIDATE-ELIMINATION algorithm is in the form of a categorical restriction on the set of hypotheses considered. This form of bias is typically called a restriction bias or a language bias.

Which type of inductive bias is preferred in order to generalize beyond the training data, a preference bias or a restriction bias?

●​ A preference bias is more desirable than a restriction bias, because it allows the learner
to work within a complete hypothesis space that is assured to contain the unknown
target function.
●​ In contrast, a restriction bias that strictly limits the set of potential hypotheses is
generally less desirable, because it introduces the possibility of excluding the unknown
target function altogether.

Why Prefer Short Hypotheses? Occam's razor

●​ Occam's razor: is the problem-solving principle that the simplest solution tends to be
the right one. When presented with competing hypotheses to solve a problem, one
should select the solution with the fewest assumptions.

●​ Occam's razor: “Prefer the simplest hypothesis that fits the data”.

The argument in favor of Occam's razor:

●	There are fewer short hypotheses than long ones.
●	A short hypothesis that fits the training data is therefore unlikely to do so by coincidence.
●	A long hypothesis that fits the training data may well fit it coincidentally.
●	Many complex hypotheses fit the current training data but fail to generalize correctly to subsequent data.

Argument opposed:
●	There are few small trees, and our a priori chance of finding one consistent with an arbitrary set of data is therefore small. The difficulty is that there are very many small sets of hypotheses one could define; it is unclear why the set of short decision trees should be any more relevant than the others.
●	The size of a hypothesis is determined by the representation used internally by the learner. Two learners using different internal representations can therefore arrive at different hypotheses from the same training examples, both justifying their contradictory conclusions by Occam's razor. On this basis we might be tempted to reject Occam's razor altogether.


ISSUES IN DECISION TREE LEARNING

Issues in learning decision trees include


1.​ Avoiding Overfitting the Data
Reduced error pruning
Rule post-pruning
2.​ Incorporating Continuous-Valued Attributes
3.​ Alternative Measures for Selecting Attributes
4.​ Handling Training Examples with Missing Attribute Values
5.​ Handling Attributes with Differing Costs

1.​Avoiding Overfitting the Data

●	The ID3 algorithm grows each branch of the tree just deeply enough to perfectly classify the training examples, but this can lead to difficulties when there is noise in the data, or when the number of training examples is too small to produce a representative sample of the true target function. In such cases the algorithm can produce trees that overfit the training examples.

●​ Definition - Overfit: Given a hypothesis space H, a hypothesis h ∈ H is said to overfit


the training data if there exists some alternative hypothesis h' ∈ H, such that h has
smaller error than h' over the training examples, but h' has a smaller error than h over
the entire distribution of instances.
The below figure illustrates the impact of overfitting in a typical application of decision tree
learning.


●​ The horizontal axis of this plot indicates the total number of nodes
in the decision tree, as the tree is being constructed. The vertical axis indicates the
accuracy of predictions made by the tree.
●	The solid line shows the accuracy of the decision tree over the training examples. The broken line shows accuracy measured over an independent set of test examples.
●​ The accuracy of the tree over the training examples increases monotonically as the tree
is grown. The accuracy measured over the independent test examples first increases,
then decreases.

How can it be possible for tree h to fit the training examples better than h', but for it to
perform more poorly over subsequent examples?
1.​ Overfitting can occur when the training examples contain random errors or noise
2.​ When small numbers of examples are associated with leaf nodes.

Noisy Training Example

●​ Example 15: <Sunny, Hot, Normal, Strong, ->


●​ The example is noisy because the correct label is +
●​ Previously constructed trees misclassified it

Approaches to avoiding overfitting in decision tree learning


●​ Pre-pruning (avoidance): Stop growing the tree earlier, before it reaches the point
where it perfectly classifies the training data
●​ Post-pruning (recovery): Allow the tree to overfit the data, and then post-prune the tree

Criteria used to determine the correct final tree size:
●​ Use a separate set of examples, distinct from the training
examples, to evaluate the utility of post-pruning nodes from the tree
●​ Use all the available data for training, but apply a statistical test to estimate whether
expanding (or pruning) a particular node is likely to produce an improvement beyond
the training set
●​ Use a measure of the complexity for encoding the training examples and the decision
tree, halting growth of the tree when this encoding size is minimized. This approach is
called the Minimum Description Length

MDL – Minimize: size(tree) + size(misclassifications(tree))


Reduced-Error Pruning

●​ Reduced-error pruning is to consider each of the decision nodes in the tree to be


candidates for pruning
●​ Pruning a decision node consists of removing the subtree rooted at that node, making
it a leaf node, and assigning it the most common classification of the training
examples affiliated with that node
●​ Nodes are removed only if the resulting pruned tree performs no worse than the
original over the validation set.
●​ Reduced error pruning has the effect that any leaf node added due to coincidental
regularities in the training set is likely to be pruned because these same coincidences
are unlikely to occur in the validation set

The impact of reduced error pruning on the accuracy of the decision tree is illustrated in the
below figure

●	The additional line in the figure shows accuracy over the test examples as the tree is pruned. When pruning begins, the tree is at its maximum size and lowest accuracy over the test set. As pruning proceeds, the number of nodes is reduced, and accuracy over the test set increases.
●​ The available data has been split into three subsets: the training examples, the
validation examples used for pruning the tree, and a set of test examples used to
provide an unbiased estimate of accuracy over future unseen examples. The plot shows
accuracy over the training and test sets.

Pros and Cons

Pro: Produces the smallest version of the most accurate subtree of T.
Con: Uses less data to construct T. When data is plentiful, a separate validation set D_validation can be held out; if data is too limited, withholding it may make the error worse (insufficient D_train).

Rule Post-Pruning

Rule post-pruning is a successful method for finding high-accuracy hypotheses


●​ Rule post-pruning involves the following steps:
●​ Infer the decision tree from the training set, growing the tree until the training data is
fit as well as possible and allowing overfitting to occur.
●​ Convert the learned tree into an equivalent set of rules by creating one rule for each
path from the root node to a leaf node.
●​ Prune (generalize) each rule by removing any preconditions that result in improving
its estimated accuracy.
●​ Sort the pruned rules by their estimated accuracy, and consider them in this sequence
when classifying subsequent instances.

Converting a Decision Tree into Rules


For example, consider the PlayTennis decision tree shown in the earlier figure. Its leftmost path is translated into the rule
IF (Outlook = Sunny) ∧ (Humidity = High) THEN PlayTennis = No

Given the above rule, rule post-pruning would consider removing the
preconditions (Outlook = Sunny) and (Humidity = High)

●​ It would select whichever of these pruning steps produced the greatest improvement in
estimated rule accuracy, then consider pruning the second precondition as a further
pruning step.
●​ No pruning step is performed if it reduces the estimated rule accuracy.
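
A minimal sketch of the tree-to-rules conversion (step 2 of rule post-pruning), assuming the nested-dictionary tree representation used in the earlier ID3 sketch; the function and variable names are illustrative:

def tree_to_rules(tree, conditions=()):
    # A leaf: emit one rule whose preconditions are the attribute tests
    # along the path taken to reach it.
    if not isinstance(tree, dict):
        return [(list(conditions), tree)]
    rules = []
    attribute = next(iter(tree))
    for value, subtree in tree[attribute].items():
        rules += tree_to_rules(subtree, conditions + ((attribute, value),))
    return rules

# The PlayTennis tree from the earlier figure
play_tennis_tree = {'Outlook': {
    'Sunny':    {'Humidity': {'High': 'No', 'Normal': 'Yes'}},
    'Overcast': 'Yes',
    'Rain':     {'Wind': {'Strong': 'No', 'Weak': 'Yes'}},
}}

for preconditions, label in tree_to_rules(play_tennis_tree):
    body = ' AND '.join(f'({a} = {v})' for a, v in preconditions)
    print(f'IF {body} THEN PlayTennis = {label}')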

There are three main advantages to converting the decision tree to rules before pruning

1.​ Converting to rules allows distinguishing among the different contexts in which a
decision node is used. Because each distinct path through the decision tree node
produces a distinct rule, the pruning decision regarding that attribute test can be made
differently for each path.
2.​ Converting to rules removes the distinction between attribute tests that occur near the
root of the tree and those that occur near the leaves. Thus, it avoids messy
bookkeeping issues such as how to reorganize the tree if the root node is pruned while
retaining part of the subtree below this test.
3.​ Converting to rules improves readability. Rules are often easier to understand.

2. Incorporating Continuous-Valued Attributes

Continuous-valued decision attributes can be incorporated into the learned tree.

There are two methods for Handling Continuous Attributes


1.​ Define new discrete-valued attributes that partition the continuous attribute value
into a discrete set of intervals.
E.g., {high ≡ Temp > 35º C, med ≡ 10º C < Temp ≤ 35º C, low ≡ Temp ≤ 10º C}

2.	Using thresholds for splitting nodes
e.g., a threshold a defines a Boolean test A ≤ a that partitions the examples into those with A ≤ a and those with A > a


What threshold-based Boolean attribute should be defined based on Temperature?

●​ Pick a threshold, c, that produces the greatest information gain


●​ In the current example, there are two candidate thresholds, corresponding to the values
of Temperature at which the value of PlayTennis changes: (48 + 60)/2, and (80 +
90)/2.
●	The information gain can then be computed for each of the candidate attributes, Temperature > 54 and Temperature > 85, and the best can be selected (Temperature > 54).
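
A minimal sketch of threshold selection for a continuous attribute; the data below are illustrative values consistent with the candidate thresholds 54 and 85 named above, and all names are illustrative:

import math

def entropy(labels):
    total = len(labels)
    return -sum((labels.count(l) / total) * math.log2(labels.count(l) / total)
                for l in set(labels))

def best_threshold(values, labels):
    # Sort by the continuous attribute, take a candidate threshold midway
    # between adjacent values where the label changes, and keep the
    # candidate with the highest information gain.
    pairs = sorted(zip(values, labels))
    base = entropy([l for _, l in pairs])
    best = None
    for (v1, l1), (v2, l2) in zip(pairs, pairs[1:]):
        if l1 != l2:
            c = (v1 + v2) / 2
            below = [l for v, l in pairs if v <= c]
            above = [l for v, l in pairs if v > c]
            gain = (base
                    - (len(below) / len(pairs)) * entropy(below)
                    - (len(above) / len(pairs)) * entropy(above))
            if best is None or gain > best[1]:
                best = (c, gain)
    return best

temperature = [40, 48, 60, 72, 80, 90]
play_tennis = ['No', 'No', 'Yes', 'Yes', 'Yes', 'No']
print(best_threshold(temperature, play_tennis))   # (54.0, ...) -> Temperature > 54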

3.	Alternative Measures for Selecting Attributes

The problem: if an attribute has many distinct values, Gain will tend to select it.

​ Example: consider the attribute Date, which has a very large number of possible
values. (e.g., March 4, 1979).
​ If this attribute is added to the PlayTennis data, it would have the highest information
gain of any of the attributes. This is because Date alone perfectly predicts the target
attribute over the training data. Thus, it would be selected as the decision attribute for
the root node of the tree and lead to a tree of depth one, which perfectly classifies the
training data.
​ This decision tree with root node Date is not a useful predictor because it perfectly
separates the training data, but poorly predicts on subsequent examples.

One Approach: Use GainRatio instead of Gain

The gain ratio measure penalizes such attributes by incorporating a term called split information, which is sensitive to how broadly and uniformly the attribute splits the data:

SplitInformation(S, A) = − Σ_(i=1..c) (|Si| / |S|) log2(|Si| / |S|)

GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)

where Si is the subset of S for which attribute A has value vi.
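
A minimal sketch of the gain ratio computation over example dictionaries (the representation and names are illustrative):

import math

def entropy_of(fractions):
    return -sum(p * math.log2(p) for p in fractions if p > 0)

def class_fractions(rows, target):
    labels = [ex[target] for ex in rows]
    return [labels.count(l) / len(labels) for l in set(labels)]

def gain_ratio(examples, attribute, target):
    total = len(examples)
    gain = entropy_of(class_fractions(examples, target))
    split_info = 0.0
    for value in set(ex[attribute] for ex in examples):
        subset = [ex for ex in examples if ex[attribute] == value]
        weight = len(subset) / total
        gain -= weight * entropy_of(class_fractions(subset, target))
        split_info -= weight * math.log2(weight)
    # SplitInformation is 0 when the attribute takes a single value;
    # guard against division by zero.
    return gain / split_info if split_info > 0 else 0.0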

4.​Handling Training Examples with Missing Attribute Values

The data that is available may contain missing values for some attributes
Example: Medical diagnosis
​ <Fever = true, Blood-Pressure = normal, …, Blood-Test = ?, …>

​ Sometimes values are truly unknown, sometimes low priority (or cost too high)
Strategies for dealing with the missing attribute value
​ If node n tests A, assign the most common value of A among other training examples
sorted to node n
​ Assign the most common value of A among other training examples with the same target
value
​ Assign a probability pi to each of the possible values vi of A rather than simply
assigning the most common value to A(x)
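
A minimal sketch of the third strategy, splitting an example with a missing value into fractional copies weighted by the observed value frequencies at the node (names and representation are illustrative):

from collections import Counter

def distribute_missing(example, attribute, examples_at_node):
    # Example with a known value: pass it down a single branch with weight 1.
    if example.get(attribute) is not None:
        return [(example, 1.0)]
    # Otherwise create one weighted fractional copy per observed value,
    # with weight p_i = relative frequency of value v_i at this node.
    counts = Counter(ex[attribute] for ex in examples_at_node
                     if ex.get(attribute) is not None)
    total = sum(counts.values())
    fractional_copies = []
    for value, count in counts.items():
        copy = dict(example, **{attribute: value})
        fractional_copies.append((copy, count / total))
    return fractional_copies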

5.​Handling Attributes with Differing Costs


​ In some learning tasks the instance attributes may have associated costs.

​ For example: In learning to classify medical diseases, the patients are described in
terms of attributes such as Temperature, biopsy results, Pulse, blood test results, etc.
​ These attributes vary significantly in their costs, both in terms of monetary cost and
cost to patient comfort
​ Decision trees use low-cost attributes where possible and depend only on
high-cost attributes when needed to produce reliable classifications

How to Learn A Consistent Tree with Low Expected Cost?


One approach is to replace Gain with a Cost-Normalized-Gain measure. Examples of such normalization functions include

Gain^2(S, A) / Cost(A)                      (Tan and Schlimmer)

(2^Gain(S, A) − 1) / (Cost(A) + 1)^w        (Nunez), where w ∈ [0, 1] determines the relative importance of cost
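
A minimal sketch of the second normalization above; the gain and cost values are assumed to be supplied by the caller (e.g., from an information-gain function and a user-defined cost table):

def cost_normalized_gain(gain, cost, w=0.5):
    # (2**Gain - 1) / (Cost + 1)**w, with w in [0, 1] controlling how
    # strongly attribute cost is penalized.
    return (2 ** gain - 1) / ((cost + 1) ** w)

# Attribute selection would then maximize the normalized measure, e.g.:
# best = max(attributes,
#            key=lambda a: cost_normalized_gain(information_gain(S, a, target), cost[a]))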

