
UNIT-2

DECISION TREE LEARNING - Decision tree learning algorithm-Inductive bias- Issues in Decision tree learning;
ARTIFICIAL NEURAL NETWORKS – Perceptrons, Gradient descent and the Delta rule, Adaline, Multilayer networks,
Derivation of backpropagation rule, Backpropagation algorithm, Convergence, Generalization;

❖ DECISION TREE REPRESENTATION:


• Decision trees classify instances by sorting them down the tree from the root to some leaf node, which provides
the classification of the instance.
• Each node in the tree specifies a test of some attribute of the instance, and each branch descending from that
node corresponds to one of the possible values for this attribute.
• An instance is classified by starting at the root node of the tree, testing the attribute specified by this node, then
moving down the tree branch corresponding to the value of the attribute in the given example. This process is
then repeated for the subtree rooted at the new node.

FIGURE: A decision tree for the concept PlayTennis. An example is classified by sorting it through the tree to the
appropriate leaf node, then returning the classification associated with this leaf

• Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances.
• Each path from the tree root to a leaf corresponds to a conjunction of attribute tests, and the tree itself
corresponds to a disjunction of these conjunctions.

For example, the decision tree shown in above figure corresponds to the expression

(Outlook = Sunny ∧ Humidity = Normal)


∨ (Outlook = Overcast)
∨ (Outlook = Rain ∧ Wind = Weak)

❖ THE BASIC DECISION TREE LEARNING ALGORITHM:


The basic algorithm is ID3, which learns decision trees by constructing them top-down.
ID3(Examples, Target_attribute, Attributes)

Examples are the training examples. Target_attribute is the attribute whose value is to be predicted by the tree.
Attributes is a list of other attributes that may be tested by the learned decision tree. Returns a decision tree that
correctly classifies the given Examples.

• Create a Root node for the tree


• If all Examples are positive, Return the single-node tree Root, with label = +
• If all Examples are negative, Return the single-node tree Root, with label = -
• If Attributes is empty, Return the single-node tree Root, with label = most common value of Target_attribute in
Examples

• Otherwise Begin
• A ← the attribute from Attributes that best classifies Examples (the attribute with the highest information gain)
• The decision attribute for Root ← A
• For each possible value, vi, of A,
• Add a new tree branch below Root, corresponding to the test A = vi
• Let Examples_vi be the subset of Examples that have value vi for A
• If Examples_vi is empty
• Then below this new branch add a leaf node with label = most common value of Target_attribute
in Examples
• Else below this new branch add the subtree
ID3(Examples_vi, Target_attribute, Attributes – {A})
• End
• Return Root
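As a concrete illustration, here is a minimal Python sketch of this recursion. The dict-based example representation and the choose callback (which returns the attribute that best classifies the examples, as measured by the information gain defined in the next section) are assumptions for illustration, not part of the original algorithm statement; for brevity it branches only on values actually observed, so the empty-subset case of the full algorithm never arises.

from collections import Counter

def id3(examples, target, attributes, choose):
    """Minimal ID3 sketch. Each example is a dict of attribute -> value;
    choose(examples, target, attributes) returns the best attribute."""
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:              # all examples positive or all negative
        return labels[0]
    if not attributes:                     # no attributes left to test
        return Counter(labels).most_common(1)[0][0]
    a = choose(examples, target, attributes)
    tree = {a: {}}
    remaining = [attr for attr in attributes if attr != a]
    for v in sorted({ex[a] for ex in examples}):   # one branch per observed value
        subset = [ex for ex in examples if ex[a] == v]
        tree[a][v] = id3(subset, target, remaining, choose)
    return tree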

❖ ENTROPY MEASURES HOMOGENEITY OF EXAMPLES:


To define information gain, we begin by defining a measure called entropy. Entropy measures the impurity of a
collection of examples.
Given a collection S, containing positive and negative examples of some target concept, the entropy of S relative to
this Boolean classification is

Entropy(S) = − p+ log2 p+ − p− log2 p−

Where,
p+ is the proportion of positive examples in S
p− is the proportion of negative examples in S.

Example: Suppose S is a collection of 14 examples of some boolean concept, including 9 positive and 5 negative
examples (written [9+, 5−]). Then the entropy of S relative to this boolean classification is

Entropy([9+, 5−]) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940

• The entropy is 0 if all members of S belong to the same class


• The entropy is 1 when the collection contains an equal number of positive and negative examples
• If the collection contains unequal numbers of positive and negative examples, the entropy is between 0 and 1
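A small Python sketch of this entropy measure, checked against the properties just listed (the function name and count-based signature are my own choices):

import math

def entropy(pos, neg):
    """Entropy of a collection with pos positive and neg negative examples."""
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:                      # treat 0 * log2(0) as 0
            p = count / total
            e -= p * math.log2(p)
    return e

print(entropy(9, 5))    # ~0.940, the [9+, 5-] collection above
print(entropy(7, 7))    # 1.0, an evenly split collection
print(entropy(14, 0))   # 0.0, a pure collection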
❖ INFORMATION GAIN MEASURES THE EXPECTED REDUCTION IN ENTROPY:
• Information gain, is the expected reduction in entropy caused by partitioning the examples according to this
attribute.
• The information gain, Gain(S, A), of an attribute A relative to a collection of examples S, is defined as

Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|Sv| / |S|) Entropy(Sv)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for which attribute A
has value v.
Example: Information gain

Let Values(Wind) = {Weak, Strong}


S = [9+, 5−]
SWeak = [6+, 2−]
SStrong = [3+, 3−]
Information gain of attribute Wind:
Gain(S, Wind) = Entropy(S) − (8/14) Entropy(SWeak) − (6/14) Entropy(SStrong)
= 0.940 − (8/14)(0.811) − (6/14)(1.00)
= 0.048
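Reusing the entropy helper from the sketch above, the Wind calculation can be reproduced in a few lines (the (pos, neg) partition representation is an assumption for illustration):

def gain(entropy_s, partitions, total):
    """Gain(S, A): collection entropy minus the weighted entropy of each
    subset Sv; partitions is a list of (pos, neg) counts, one per value of A."""
    return entropy_s - sum((p + n) / total * entropy(p, n)
                           for p, n in partitions)

# Wind example: S = [9+, 5-], SWeak = [6+, 2-], SStrong = [3+, 3-]
print(gain(entropy(9, 5), [(6, 2), (3, 3)], 14))   # ~0.048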
An Illustrative Example
• To illustrate the operation of ID3, consider the learning task represented by the training examples of below table.
• Here the target attribute is PlayTennis, which can have values yes or no for different days.
• Consider the first step through the algorithm, in which the topmost node of the decision tree is created.

• ID3 determines the information gain for each candidate attribute (i.e., Outlook, Temperature, Humidity, and
Wind), then selects the one with highest information gain.
• The information gain values for all four attributes are

Gain(S, Outlook) = 0.246


Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048
Gain(S, Temperature) = 0.029

• According to the information gain measure, the Outlook attribute provides the best prediction of the target
attribute, PlayTennis, over the training examples. Therefore, Outlook is selected as the decision attribute for the
root node, and branches are created below the root for each of its possible values i.e., Sunny, Overcast, and Rain.
❖ INDUCTIVE BIAS IN DECISION TREE –
The inductive bias of ID3 is the basis by which it chooses one consistent decision tree over all the other possible
decision trees. Inductive bias is the set of assumptions that, together with the training data,
deductively justify the classifications assigned by the learner to future instances.
Given a collection of training examples, there are typically many decision trees consistent with these examples.
Which of these decision trees does ID3 choose?

ID3 search strategy


• Selects in favour of shorter trees over longer ones.
• Selects the attribute with the highest information gain as the root attribute over lower-gain ones.

Approximate inductive bias of ID3: Shorter trees are preferred over longer trees
• Consider an algorithm that begins with the empty tree and searches breadth first through progressively more
complex trees.
• First considering all trees of depth 1, then all trees of depth 2, etc.
• Once it finds a decision tree consistent with the training data, it returns the smallest consistent tree at that
search depth (e.g., the tree with the fewest nodes).
• Let us call this breadth-first search algorithm BFS-ID3.
• BFS-ID3 finds a shortest decision tree and thus exhibits the bias "shorter trees are preferred over longer trees."

A closer approximation to the inductive bias of ID3: Shorter trees are preferred over longer trees. Trees that
place high information gain attributes close to the root are preferred over those that do not.
• ID3 can be viewed as an efficient approximation to BFS-ID3, using a greedy heuristic search to attempt to find
the shortest tree without conducting the entire breadth-first search through the hypothesis space.
• Because ID3 uses the information gain heuristic and a hill climbing strategy, it exhibits a more complex bias than
BFS-ID3.
• In particular, it does not always find the shortest consistent tree, and it is biased to favour trees that place
attributes with high information gain closest to the root.

Types of Inductive Bias—

Preference bias – The inductive bias of ID3 is a preference for certain hypotheses over others (e.g., a preference
for shorter hypotheses over larger ones), with no hard restriction on the hypotheses that can eventually be
enumerated. This form of bias is called a preference bias or a search bias. The bias of decision tree learning is
therefore a preference bias.

Restriction bias – The bias of the CANDIDATE ELIMINATION algorithm is in the form of a categorical restriction
on the set of hypotheses considered. This form of bias is typically called a restriction bias or a language bias.

Why Prefer Short Hypotheses?

Occam's razor

• Occam's razor: is the problem-solving principle that the simplest solution tends to be the right one. When
presented with competing hypotheses to solve a problem, one should select the solution with the fewest
assumptions.
• Occam's razor: “Prefer the simplest hypothesis that fits the data”

❖ ISSUES IN DECISION TREE LEARNING—


1. Avoiding Overfitting the Data
• Reduced error pruning
• Rule post-pruning
2. Incorporating Continuous-Valued Attributes
3. Alternative Measures for Selecting Attributes
4. Handling Training Examples with Missing Attribute Values
5. Handling Attributes with Differing Costs

1. Avoiding Overfitting the Data


• The ID3 algorithm grows each branch of the tree just deeply enough to perfectly classify the training examples.
This can lead to difficulties when there is noise in the data, or when the number of training examples is too small
to produce a representative sample of the true target function. In either case, the algorithm can produce trees
that overfit the training examples.

• Definition - Overfit: Given a hypothesis space H, a hypothesis h ∈ H is said to overfit the training data if there
exists some alternative hypothesis h' ∈ H, such that h has smaller error than h' over the training examples, but h'
has a smaller error than h over the entire distribution of instances.

The figure below illustrates the impact of overfitting in a typical application of decision tree learning.

• The horizontal axis of this plot indicates the total number of nodes in the decision tree, as the tree is being
constructed. The vertical axis indicates the accuracy of predictions made by the tree.
• The solid line shows the accuracy of the decision tree over the training examples. The broken line shows accuracy
measured over an independent set of test examples.
• The accuracy of the tree over the training examples increases monotonically as the tree is grown. The accuracy
measured over the independent test examples first increases, then decreases.

How can it be possible for tree h to fit the training examples better than h', but for it to perform more poorly over
subsequent examples?
1. Overfitting can occur when the training examples contain random errors or noise
2. When small numbers of examples are associated with leaf nodes.

Reduced-Error Pruning
• In reduced-error pruning, each of the decision nodes in the tree is considered a candidate for pruning
• Pruning a decision node consists of removing the subtree rooted at that node, making it a leaf node, and
assigning it the most common classification of the training examples affiliated with that node
• Nodes are removed only if the resulting pruned tree performs no worse than the original over the validation set.
• Reduced error pruning has the effect that any leaf node added due to coincidental regularities in the training set
is likely to be pruned because these same coincidences are unlikely to occur in the validation set

The impact of reduced-error pruning on the accuracy of the decision tree is illustrated in the figure below.

• The additional line in figure shows accuracy over the test examples as the tree is pruned. When pruning begins,
the tree is at its maximum size and lowest accuracy over the test set. As pruning proceeds, the number of nodes is
reduced and accuracy over the test set increases.
• The available data has been split into three subsets: the training examples, the validation examples used for
pruning the tree, and a set of test examples used to provide an unbiased estimate of accuracy over future unseen
examples. The plot shows accuracy over the training and test sets.

Rule Post-Pruning:
• Infer the decision tree from the training set, growing the tree until the training data is fit as well as possible and
allowing overfitting to occur.
• Convert the learned tree into an equivalent set of rules by creating one rule for each path from the root node to a
leaf node.
• Prune (generalize) each rule by removing any preconditions that result in improving its estimated accuracy.
For example, consider the decision tree again. The leftmost path of the tree in the figure below is translated into the rule:
IF (Outlook = Sunny) ^ (Humidity = High)
THEN PlayTennis = No

Given the above rule, rule post-pruning would consider removing the preconditions
(Outlook = Sunny) and (Humidity = High)

• It would select whichever of these pruning steps produced the greatest improvement in estimated rule accuracy,
then consider pruning the second precondition as a further pruning step.
• No pruning step is performed if it reduces the estimated rule accuracy.
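A hedged Python sketch of this greedy precondition-dropping loop; the estimated_accuracy callback (e.g., accuracy of the rule over a validation set) and the list-of-preconditions representation are assumptions for illustration:

def post_prune_rule(preconditions, estimated_accuracy):
    """Greedily remove preconditions as long as removal improves the rule's
    estimated accuracy; stop when no single removal helps."""
    rule = list(preconditions)
    best = estimated_accuracy(rule)
    improved = True
    while improved and rule:
        improved = False
        # Score every rule obtained by dropping exactly one precondition.
        candidates = [rule[:i] + rule[i + 1:] for i in range(len(rule))]
        challenger = max(candidates, key=estimated_accuracy)
        if estimated_accuracy(challenger) > best:
            rule, best, improved = challenger, estimated_accuracy(challenger), True
    return rule

# e.g., the rule from the text, as (attribute, value) preconditions:
# post_prune_rule([('Outlook', 'Sunny'), ('Humidity', 'High')], accuracy_fn)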

2. Incorporating Continuous-Valued Attributes


Continuous-valued decision attributes can be incorporated into the learned tree.

There are two methods for Handling Continuous Attributes


1. Define new discrete valued attributes that partition the continuous attribute value into a discrete set of intervals.
E.g., {high ≡ Temp > 35º C, med ≡ 10º C < Temp ≤ 35º C, low ≡ Temp ≤ 10º C}

2. Using thresholds for splitting nodes


e.g., the test A ≤ a partitions the instances into those with A ≤ a and those with A > a

What threshold-based Boolean attribute should be defined based on Temperature?

• Pick a threshold, c, that produces the greatest information gain


• In the current example, there are two candidate thresholds, corresponding to the values of Temperature at which
the value of PlayTennis changes: (48 + 60)/2, and (80 + 90)/2.

• The information gain can then be computed for each of the candidate attributes, Temperature > 54 and
Temperature > 85, and the best can be selected (Temperature > 54).
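The candidate thresholds can be found mechanically by sorting the examples by Temperature and taking midpoints where the PlayTennis label changes. A small sketch (the six example values here are assumed to match the classic table behind this example):

def candidate_thresholds(values, labels):
    """Candidate thresholds: midpoints where the sorted label sequence changes."""
    pairs = sorted(zip(values, labels))
    return [(pairs[i][0] + pairs[i + 1][0]) / 2
            for i in range(len(pairs) - 1)
            if pairs[i][1] != pairs[i + 1][1]]

temps  = [40, 48, 60, 72, 80, 90]
labels = ['No', 'No', 'Yes', 'Yes', 'Yes', 'No']
print(candidate_thresholds(temps, labels))   # [54.0, 85.0]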
3. Alternative Measures for Selecting Attributes
• There is a natural bias in the information gain measure: it favours attributes with many values over those with few values.
• Example: consider the attribute Date, which has a very large number of possible values. (e.g., March 4, 1979).
• If this attribute is added to the PlayTennis data, it would have the highest information gain of any of the
attributes. This is because Date alone perfectly predicts the target attribute over the training data. Thus, it would
be selected as the decision attribute for the root node of the tree and lead to a tree of depth one, which perfectly
classifies the training data.
• This decision tree with root node Date is not a useful predictor: it perfectly separates the training data,
but predicts poorly on subsequent examples.

One Approach: Use GainRatio instead of Gain

The gain ratio measure penalizes such attributes by incorporating a term called split information, which is sensitive
to how broadly and uniformly the attribute splits the data:

SplitInformation(S, A) = − Σ_{i=1}^{c} (|Si| / |S|) log2 (|Si| / |S|)

GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)

where Si is the subset of S for which attribute A has value vi.
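A sketch of the two formulas in the same style as the earlier helpers (the subset-size list representation is an assumption for illustration):

import math

def split_information(sizes, total):
    """SplitInformation(S, A) over the subset sizes |Si| induced by A."""
    return -sum(s / total * math.log2(s / total) for s in sizes if s)

def gain_ratio(gain_value, sizes, total):
    """GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)."""
    return gain_value / split_information(sizes, total)

# Wind splits S (14 examples) into subsets of size 8 (Weak) and 6 (Strong)
print(gain_ratio(0.048, [8, 6], 14))   # ~0.049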

4. Handling Training Examples with Missing Attribute Values

The data which is available may contain missing values for some attributes
Example: Medical diagnosis

Strategies for dealing with the missing attribute value


• If node n tests attribute A, assign the most common value of A among the other training examples sorted to node n
• Assign most common value of A among other training examples with same target value
• Assign a probability pi to each of the possible values vi of A rather than simply assigning the most common value
to A(x)

5. Handling Attributes with Differing Costs


• In some learning tasks the instance attributes may have associated costs.
• For example: In learning to classify medical diseases, the patients described in terms of attributes such as
Temperature, BiopsyResult, Pulse, BloodTestResults, etc.
• These attributes vary significantly in their costs, both in terms of monetary cost and cost to patient comfort
• We prefer decision trees that use low-cost attributes where possible, relying on high-cost attributes only when
needed to produce reliable classifications.

How to Learn A Consistent Tree with Low Expected Cost?


One approach is to replace Gain with a cost-normalized gain measure.

Examples of normalization functions are Gain²(S, A) / Cost(A) (Tan and Schlimmer) and
(2^Gain(S, A) − 1) / (Cost(A) + 1)^w (Nunez), where w ∈ [0, 1] determines the relative importance of cost.

❖ ARTIFICIAL NEURAL NETWORKS –


Artificial neural networks (ANNs) provide a general, practical method for learning real-valued, discrete-valued,
and vector-valued target functions.

Biological Motivation
• The study of artificial neural networks (ANNs) has been inspired by the observation that biological learning
systems are built of very complex webs of interconnected Neurons
• The human information processing system consists of the brain, whose basic building block, the neuron, is a cell
that communicates information to and from various parts of the body

Properties of Neural Networks


• Many neuron-like threshold switching units
• Many weighted interconnections among units
• Highly parallel, distributed process
• Emphasis on tuning weights automatically
• Input is a high-dimensional discrete or real-valued signal (e.g., sensor input)

NEURAL NETWORK REPRESENTATIONS


• A prototypical example of ANN learning is provided by Pomerleau's system ALVINN, which uses a learned ANN
to steer an autonomous vehicle driving at normal speeds on public highways
• The input to the neural network is a 30x32 grid of pixel intensities obtained from a forward-pointing camera
mounted on the vehicle.
• The network output is the direction in which the vehicle is steered
• Figure illustrates the neural network representation.
• The network is shown on the left side of the figure, with the input camera image depicted below it.
• Each node (i.e., circle) in the network diagram corresponds to the output of a single network unit, and the lines
entering the node from below are its inputs.
• There are four units that receive inputs directly from all of the 30 x 32 pixels in the image. These are called
"hidden" units because their output is available only within the network and is not available as part of the global
network output. Each of these four hidden units computes a single real-valued output based on a weighted
combination of its 960 inputs
• These hidden unit outputs are then used as inputs to a second layer of 30 "output" units.
• Each output unit corresponds to a particular steering direction, and the output values of these units determine
which steering direction is recommended most strongly.
• The diagrams on the right side of the figure depict the learned weight values associated with one of the four
hidden units in this ANN.
• The large matrix of black and white boxes on the lower right depicts the weights from the 30 x 32-pixel inputs
into the hidden unit. Here, a white box indicates a positive weight, a black box a negative weight, and the size of
the box indicates the weight magnitude.
• The smaller rectangular diagram directly above the large matrix shows the weights from this hidden unit to each
of the 30 output units.

APPROPRIATE PROBLEMS FOR NEURAL NETWORK LEARNING

ANN learning is well-suited to problems in which the training data corresponds to noisy, complex sensor data, such
as inputs from cameras and microphones.

ANN is appropriate for problems with the following characteristics:


1. Instances are represented by many attribute-value pairs.

2. The target function output may be discrete-valued, real-valued, or a vector of several real- or discrete-valued
attributes.
3. The training examples may contain errors.
4. Long training times are acceptable.
5. Fast evaluation of the learned target function may be required
6. The ability of humans to understand the learned target function is not important
❖ PERCEPTRONS–
• One type of ANN system is based on a unit called a perceptron. Perceptron is a single layer neural network.

• A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs a
1 if the result is greater than some threshold and -1 otherwise.
• Given inputs x1 through xn, the output o(x1, . . . , xn) computed by the perceptron is

o(x1, . . . , xn) = 1 if w0 + w1x1 + w2x2 + . . . + wnxn > 0, and −1 otherwise

• Where, each wi is a real-valued constant, or weight, that determines the contribution of input xi to the
perceptron output.
• −w0 is a threshold that the weighted combination of inputs w1x1 + . . . + wnxn must surpass in order for the
perceptron to output a 1.

Sometimes the perceptron function is written as

o(x⃗) = sgn(w⃗ · x⃗),   where sgn(y) = 1 if y > 0 and −1 otherwise

(treating x0 = 1 so that w0 is absorbed into the weight vector).

Representational Power of Perceptrons


• The perceptron can be viewed as representing a hyperplane decision surface in the n-dimensional space of
instances (i.e., points).
• The perceptron outputs a 1 for instances lying on one side of the hyperplane and outputs a −1 for instances lying
on the other side, as illustrated in the figure below.
• Perceptrons can represent all of the primitive Boolean functions AND, OR, NAND (¬AND), and NOR (¬OR).
• Some Boolean functions cannot be represented by a single perceptron, such as the XOR function, whose value is 1
if and only if x1 ≠ x2.

Example: Representation of the AND function

If A=0 & B=0 → 0*0.6 + 0*0.6 = 0.


This is not greater than the threshold of 1, so the output = 0.
If A=0 & B=1 → 0*0.6 + 1*0.6 = 0.6.
This is not greater than the threshold, so the output = 0.
If A=1 & B=0 → 1*0.6 + 0*0.6 = 0.6.
This is not greater than the threshold, so the output = 0.
If A=1 & B=1 → 1*0.6 + 1*0.6 = 1.2.
This exceeds the threshold, so the output = 1
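The four cases above can be checked with a few lines of Python; the 0/1 output convention and the weights w = (0.6, 0.6) with threshold 1 follow the worked example (the general perceptron definition earlier uses outputs of +1/−1):

def and_perceptron(a, b, w=0.6, threshold=1.0):
    """AND as a perceptron: fire only when the weighted sum exceeds the threshold."""
    return int(a * w + b * w > threshold)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', and_perceptron(a, b))
# prints 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1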

Drawback of perceptron
• The perceptron rule finds a successful weight vector when the training examples are linearly separable, but it can
fail to converge if the examples are not linearly separable.

The Perceptron Training Rule


The learning problem is to determine a weight vector that causes the perceptron to produce the correct +1 or −1
output for each of the given training examples.

To learn an acceptable weight vector


• Begin with random weights, then iteratively apply the perceptron to each training example, modifying the
perceptron weights whenever it misclassifies an example.

• This process is repeated, iterating through the training examples as many times as needed until the perceptron
classifies all training examples correctly.
• Weights are modified at each step according to the perceptron training rule, which revises the weight wi
associated with input xi according to the rule

wi ← wi + Δwi,   where Δwi = η(t − o)xi

Here t is the target output for the current training example, o is the output generated by the perceptron, and η is
a positive constant called the learning rate.

• The role of the learning rate is to moderate the degree to which weights are changed at each step. It is usually
set to some small value (e.g., 0.1) and is sometimes made to decay as the number of weight-tuning iterations
increases
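A minimal sketch of this training loop, assuming ±1 targets and an extra bias weight w0 with fixed input x0 = 1 (the OR data at the end is just a linearly separable test case):

def train_perceptron(data, eta=0.1, epochs=100):
    """Perceptron training rule: wi <- wi + eta * (t - o) * xi, with targets
    in {+1, -1} and a bias weight w[0] on a fixed input x0 = 1."""
    n = len(data[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        misclassified = 0
        for x, t in data:
            net = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            o = 1 if net > 0 else -1
            if o != t:
                misclassified += 1
                w[0] += eta * (t - o)            # bias input x0 = 1
                for i, xi in enumerate(x, start=1):
                    w[i] += eta * (t - o) * xi
        if misclassified == 0:                   # converged: all examples correct
            break
    return w

# OR is linearly separable, so the rule converges:
print(train_perceptron([((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]))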
Drawback:
The perceptron rule finds a successful weight vector when the training examples are linearly separable, but it can
fail to converge if the examples are not linearly separable.

❖ GRADIENT DESCENT AND THE DELTA RULE –


• If the training examples are not linearly separable, the delta rule converges toward a best-fit approximation to
the target concept.
• The key idea behind the delta rule is to use gradient descent to search the hypothesis space of possible weight
vectors to find the weights that best fit the training examples.

To understand the delta training rule, consider the task of training an unthresholded perceptron, that is, a linear
unit for which the output o is given by

o(x⃗) = w⃗ · x⃗ = w0 + w1x1 + . . . + wnxn

To derive a weight learning rule for linear units, begin by specifying a measure for the training error of a hypothesis
(weight vector), relative to the training examples:

E(w⃗) = (1/2) Σ_{d ∈ D} (td − od)²

Where,
• D is the set of training examples,
• td is the target output for training example d,
• od is the output of the linear unit for training example d
• E(w⃗) is simply half the squared difference between the target output td and the linear unit output od, summed
over all training examples.

Derivation of the Gradient Descent Rule


How to calculate the direction of steepest descent along the error surface?

The direction of steepest descent can be found by computing the derivative of E with respect to each component of
the vector w⃗. This vector derivative is called the gradient of E with respect to w⃗, written as

∇E(w⃗) = [∂E/∂w0, ∂E/∂w1, . . . , ∂E/∂wn]

Since the gradient specifies the direction of steepest increase of E, the training rule for gradient descent is

w⃗ ← w⃗ + Δw⃗,   where Δw⃗ = −η ∇E(w⃗)

• Here η is a positive constant called the learning rate, which determines the step size in the gradient descent
search.
• The negative sign is present because we want to move the weight vector in the direction that decreases E.

This training rule can also be written in its component form:

wi ← wi + Δwi,   where Δwi = −η ∂E/∂wi

The vector of ∂E/∂wi derivatives that forms the gradient can be obtained by differentiating E from the error
equation above:

∂E/∂wi = ∂/∂wi (1/2) Σ_{d ∈ D} (td − od)² = Σ_{d ∈ D} (td − od)(−xid)

which gives the weight update rule for gradient descent:

Δwi = η Σ_{d ∈ D} (td − od) xid

where xid denotes the single input component xi for training example d.

GRADIENT DESCENT algorithm for training a linear unit:
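The algorithm box from the textbook is not reproduced here; the following Python sketch implements the batch version just derived: initialize small random weights, accumulate Δwi = η Σd (td − od) xid over all examples, then update. Names and defaults are my own choices:

import random

def gradient_descent(examples, eta=0.05, epochs=1000):
    """Batch gradient descent for a linear unit o = w . x, with a bias
    weight w[0] on a fixed input x0 = 1; minimizes E = 1/2 sum_d (t_d - o_d)^2."""
    n = len(examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]
    for _ in range(epochs):
        delta = [0.0] * (n + 1)
        for x, t in examples:
            xs = [1.0] + list(x)                 # prepend the bias input
            o = sum(wi * xi for wi, xi in zip(w, xs))
            for i, xi in enumerate(xs):
                delta[i] += eta * (t - o) * xi   # accumulate the gradient step
        w = [wi + d for wi, d in zip(w, delta)]  # update only after the full pass
    return w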

Issues in Gradient Descent Algorithm


Gradient descent is an important general paradigm for learning. It is a strategy for searching through a large or
infinite hypothesis space that can be applied whenever
1. The hypothesis space contains continuously parameterized hypotheses
2. The error can be differentiated with respect to these hypothesis parameters

The key practical difficulties in applying gradient descent are


1. Converging to a local minimum can sometimes be quite slow
2. If there are multiple local minima in the error surface, then there is no guarantee that the procedure will find
the global minimum

❖ ADALINE–
• ADALINE (Adaptive Linear Neuron) is a network with a single linear unit. The Adaline network is trained using
the delta rule.
• It receives input from several units and bias unit.
• An Adaline model consists of trainable weights. The inputs are bipolar (+1 or −1), and the weights have
signs (positive or negative).
• Initially random weights are assigned. The net input calculated is applied to a quantizer transfer function
(activation function) that restores the output to +1 or -1.
• The Adaline model compares the actual output with the target output and with the bias units and then adjusts
all the weights.

Adaline network training algorithm is as follows:


Step 0: Set the weights and bias to some random values, but not zero.
Set the learning rate parameter α.
Step 1: Perform steps 2-6 while the stopping condition is false.
Step 2: Perform steps 3-5 for each bipolar training pair.
Step 3: Set activations for input units i = 1 to n.
Step 4: Calculate the net input to the output unit.
Step 5: Update the weights and bias for i = 1 to n.
Step 6: If the largest weight change that occurred during training is smaller than a specified tolerance, then
stop the training process; else continue. This is the test for the stopping condition of a network.

Adaline networks testing algorithm is as follows:


When the training has been completed, the Adaline can be used to classify input patterns. A step function is
used to test the performance of the network.
Step 0: Initialize the weights. (The weights are obtained from the training algorithm.)
Step 1: Perform steps 2-4 for each bipolar input vector x.
Step 2: Set the activations of the input units to x.
Step 3: Calculate the net input to the output unit.
Step 4: Apply the activation function over the net input calculated.
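A compact Python sketch of both procedures, assuming bipolar (±1) patterns, a trainable bias, and a largest-weight-change stopping test as in step 6 (all parameter defaults are illustrative):

def train_adaline(data, alpha=0.1, tol=1e-3, max_epochs=100):
    """Adaline trained with the delta rule on bipolar (+1/-1) training pairs.
    The weight update uses the raw net input, not the quantized output."""
    n = len(data[0][0])
    w, b = [0.1] * n, 0.1                # Step 0: small nonzero initial values
    for _ in range(max_epochs):          # Step 1
        largest_change = 0.0
        for x, t in data:                # Steps 2-3
            net = b + sum(wi * xi for wi, xi in zip(w, x))   # Step 4
            for i, xi in enumerate(x):   # Step 5
                change = alpha * (t - net) * xi
                w[i] += change
                largest_change = max(largest_change, abs(change))
            b += alpha * (t - net)
        if largest_change < tol:         # Step 6: stopping condition
            break
    return w, b

def adaline_output(x, w, b):
    """Testing: quantize the net input with a step function."""
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1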

❖ MULTILAYER NETWORKS–
Multilayer networks learned by the BACKPROPAGATION algorithm are capable of expressing a rich variety of
nonlinear decision surfaces.

Consider the example:


• Here the speech recognition task involves distinguishing among 10 possible vowels, all spoken in the context of
"h_d" (i.e., "hid," "had," "head," "hood," etc.).
• The network input consists of two parameters, F1 and F2, obtained from a spectral analysis of the sound. The 10
network outputs correspond to the 10 possible vowel sounds. The network prediction is the output whose value
is highest.
• The plot on the right illustrates the highly nonlinear decision surface represented by the learned network. Points
shown on the plot are test examples distinct from the examples used to train the network.
❖ DERIVATION OF BACKPROPAGATION RULE, BACKPROPAGATION ALGORITHM, CONVERGENCE–
• Deriving the stochastic gradient descent rule: stochastic gradient descent involves iterating through the training
examples one at a time, for each training example d descending the gradient of the error Ed with respect to this
single example.
• For each training example d, every weight wji is updated by adding to it Δwji, where

Δwji = −η ∂Ed/∂wji

and Ed = (1/2) Σ_{k ∈ outputs} (tk − ok)² is the error on training example d, summed over all output units k.

Case 1: Training Rule for Output Unit Weights.
• For an output unit j (assuming sigmoid units), carrying out this differentiation gives the error term
δj = (tj − oj) oj (1 − oj) and the weight update Δwji = η δj xji, where xji is the ith input to unit j.
Case 2: Training Rule for Hidden Unit Weights.
• In the case where j is an internal, or hidden unit in the network, the derivation of the training rule for w ji must
take into account the indirect ways in which wji can influence the network outputs and hence Ed.
• For this reason, we will find it useful to refer to the set of all units immediately downstream of unit j in the
network and denoted this set of units by Downstream( j).
• netj can influence the network outputs only through the units in Downstream(j). Therefore, carrying out the
derivation, the error term for a hidden unit j can be written

δj = oj (1 − oj) Σ_{k ∈ Downstream(j)} δk wkj

and the weight update rule takes the same form as before: Δwji = η δj xji.
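A single stochastic-gradient step for a two-layer sigmoid network, combining the output-unit and hidden-unit error terms derived above (the list-of-lists weight layout, with index 0 as the bias weight, is an assumption for illustration):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, t, w_hidden, w_out, eta=0.5):
    """One stochastic gradient step; w_hidden[j] and w_out[k] are weight
    lists whose index 0 is the bias weight (bias input fixed at 1)."""
    xs = [1.0] + list(x)
    h = [sigmoid(sum(w * xi for w, xi in zip(wj, xs))) for wj in w_hidden]
    hs = [1.0] + h
    o = [sigmoid(sum(w * hi for w, hi in zip(wk, hs))) for wk in w_out]
    # Output-unit error terms: delta_k = o_k (1 - o_k) (t_k - o_k)
    d_out = [ok * (1 - ok) * (tk - ok) for ok, tk in zip(o, t)]
    # Hidden-unit error terms: delta_j = h_j (1 - h_j) sum_k delta_k w_kj
    d_hid = [hj * (1 - hj) *
             sum(dk * w_out[k][j + 1] for k, dk in enumerate(d_out))
             for j, hj in enumerate(h)]
    # Weight updates: w_ji <- w_ji + eta * delta_j * x_ji
    for k, wk in enumerate(w_out):
        for i in range(len(wk)):
            wk[i] += eta * d_out[k] * hs[i]
    for j, wj in enumerate(w_hidden):
        for i in range(len(wj)):
            wj[i] += eta * d_hid[j] * xs[i]
    return o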
❖ GENERALIZATION–
What is an appropriate condition for terminating the weight update loop? One choice is to continue training
until the error E on the training examples falls below some predetermined threshold. To see the dangers of
minimizing the error over the training data, consider how the error E varies with the number of weight
iterations

• Consider first the top plot in this figure. The lower of the two lines shows the monotonically decreasing error E
over the training set as the number of gradient descent iterations grows. The upper line shows the error E
measured over a different validation set of examples, distinct from the training examples. This line measures the
generalization accuracy of the network, the accuracy with which it fits examples beyond the training data.
• The error measured over the validation examples first decreases, then increases, even as the
error over the training examples continues to decrease. How can this occur? This occurs because the weights
are being tuned to fit idiosyncrasies of the training examples that are not representative of the general
distribution of examples. The large number of weight parameters in ANNs provides many degrees of freedom
for fitting such idiosyncrasies.
• Why does overfitting tend to occur during later iterations, but not during earlier iterations? Given enough
weight-tuning iterations, BACKPROPAGATION will often be able to create overly complex decision surfaces that
fit noise in the training data or unrepresentative characteristics of the particular training sample.
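The standard remedy suggested by these plots is to keep a separate validation set and retain the weights that minimize validation error rather than training error. A sketch of that early-stopping loop; step and val_error are assumed callbacks, and the patience heuristic is one common stopping choice:

def train_with_early_stopping(step, val_error, weights,
                              max_iters=10000, patience=200):
    """Run weight updates but return the weights with the lowest validation
    error seen; stop once validation error has not improved for `patience`
    consecutive iterations. `step` returns the updated weights."""
    best_val, best_weights, since_best = float('inf'), weights, 0
    for _ in range(max_iters):
        weights = step(weights)
        v = val_error(weights)
        if v < best_val:
            best_val, best_weights, since_best = v, weights, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_weights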
