8 ANN Classifier Part 2
Learning in NNs

Training data:

Animal | Big Teeth | Stripy | Long Tail | Tiger?
   1   |    Yes    |  Yes   |    Yes    |  Yes
   2   |    No     |  Yes   |    Yes    |  No
   3   |    Yes    |  Yes   |    No     |  Yes
   4   |    Yes    |  No    |    Yes    |  No

[Figure: example 1 presented to the perceptron - inputs (1, 1, 1), weights (0.5, 0.5, 0.5), weighted sum 1.5, actual output 1, desired output 1. OK, don't change the weights.]
Learning tigers

Tiger 2 - OK, don't change.

[Figure: example 2 presented to the perceptron - inputs (0, 1, 1), weights (0.5, 0.5, 0.5), weighted sum 1.0, actual output 0, desired output 0.]
Learning tigers

Tiger 3 - Not quite right.
[Figure: example 3 presented to the perceptron - inputs (1, 1, 0), weights (0.5, 0.5, 0.5), weighted sum 1.0, actual output 0, desired output 1.]

So increase the weights on the "active" connections (those whose input is 1):
[Figure: after the increase - inputs (1, 1, 0), weights (0.6, 0.6, 0.5), weighted sum 1.2, actual output 1, desired output 1.]
Learning tigers

Example 4 is still not quite right: the network outputs 1 but the desired output is 0. So decrease the weights on the active connections. We end up with:
[Figure: example 4 after the decrease - inputs (1, 0, 1), weights (0.5, 0.6, 0.4), weighted sum 0.9, actual output 0, desired output 0.]
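A quick arithmetic check of that final figure (the firing threshold of 1 is inferred from the earlier figures, where a weighted sum of 1.5 produced output 1 and sums of 1.0 produced output 0):

$1 \times 0.5 + 0 \times 0.6 + 1 \times 0.4 = 0.9 \le 1,$

so the unit stays off and the output 0 matches the desired "not a tiger".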
Learning tigers
After finishing with the fourth example, we say that
we have performed one epoch of training.
Learning tigers

One epoch of training means that the perceptron (or any learning system in general) has been trained (had its weights adjusted) once on every sample in the training data.
Learning tigers

Shall we stop the training after one epoch? The answer is NO. We keep training, epoch after epoch, until the network gives the right result for every example. At that stage we say the perceptron is done with the training (has finished the learning) and is ready to be deployed (used to tell whether or not the input features correspond to a tiger).
Perceptron Learning

Repeat:
    For each example:
        If the actual output is 1 and the target is 0, decrease the weights on active connections by a small amount.
        If the actual output is 0 and the target is 1, increase the weights on active connections by a small amount.
Until the network gives the right results for all examples.

(Active connections are those for which the input is 1. Each pass of the outer loop over all the examples is one epoch.)
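To make the loop concrete, here is a minimal Python sketch of this rule applied to the tiger data above. The firing threshold of 1.0 and the step size of 0.1 are assumptions read off the worked figures, not values stated by the algorithm itself.

```python
# Perceptron learning on the tiger data (a sketch, not the slides' own code).
# Each example: ([big_teeth, stripy, long_tail], tiger?)
examples = [
    ([1, 1, 1], 1),
    ([0, 1, 1], 0),
    ([1, 1, 0], 1),
    ([1, 0, 1], 0),
]

weights = [0.5, 0.5, 0.5]
THRESHOLD = 1.0  # assumed: the unit fires when the weighted sum exceeds 1
STEP = 0.1       # assumed: the "small amount" used in the worked figures

def output(inputs, weights):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > THRESHOLD else 0

epochs = 0
while True:
    epochs += 1
    all_correct = True
    for inputs, target in examples:
        actual = output(inputs, weights)
        if actual == target:
            continue                  # OK, don't change
        all_correct = False
        delta = STEP if target == 1 else -STEP
        for i, x in enumerate(inputs):
            if x == 1:                # adjust only the active connections
                weights[i] += delta
    if all_correct:                   # right results for all examples
        break

print(f"converged after {epochs} epochs, weights = {weights}")
```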
Learning tigers

Class activity (10 min): starting with the weights we obtained in the first epoch (0.5, 0.6, 0.4), run a second epoch over the four examples and check whether any weights still need to change.
[Figure: a single perceptron - inputs x1, x2, ..., xn with weights w1, w2, ..., wn feeding one unit whose output is Y.]

$Y = f\left(\sum_{i=1}^{n} x_i w_i - \theta\right)$, where $\theta$ is the threshold.

f: activation function
• Step function: $f(X) = 1$ if $X > 0$, and $0$ otherwise
• Sigmoid function: $f(X) = \dfrac{1}{1 + e^{-X}}$
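In Python, the two activation functions and the resulting neuron output can be sketched as follows (the example inputs and the threshold value are taken from the tiger figures; the function names are mine):

```python
import math

def step(x):
    # Step activation: 1 if the net input is positive, 0 otherwise
    return 1 if x > 0 else 0

def sigmoid(x):
    # Sigmoid activation: a smooth value in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, theta, f):
    # Y = f(sum_i x_i * w_i - theta)
    net = sum(x * w for x, w in zip(inputs, weights)) - theta
    return f(net)

print(neuron_output([1, 1, 0], [0.5, 0.5, 0.5], 1.0, step))     # -> 0
print(neuron_output([1, 1, 0], [0.5, 0.5, 0.5], 1.0, sigmoid))  # -> 0.5
```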
Perceptron Learning: General case

Adjust the weights iteratively until the amplitude of the error $e(p) = Y_d(p) - Y(p)$ is minimized.

1- Initialization
Set the weights to initial values and set $p = 1$.

2- Activation
Apply the inputs $x_1(p), \ldots, x_n(p)$ and calculate the actual output
$Y(p) = f\left(\sum_{i=1}^{n} x_i(p)\, w_i(p) - \theta\right)$

3- Weight adjustment
$w_i(p+1) = w_i(p) + \Delta w_i(p)$, with $\Delta w_i(p) = \alpha\, x_i(p)\, e(p)$,
where $\alpha$ is a positive constant less than 1 (the learning rate) and $e(p) = Y_d(p) - Y(p)$.

4- Iteration
Set $p = p + 1$, go back to step 2, and repeat until convergence.
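A minimal Python sketch of steps 1-4, assuming the step activation function (the function name and the stopping test are my own choices for illustration):

```python
def train_perceptron(data, weights, theta, alpha):
    """Perceptron training, steps 1-4 above.

    data    : list of (inputs, desired_output) pairs
    weights : initial weights (step 1), updated in place
    theta   : threshold of the step activation
    alpha   : learning rate, a positive constant less than 1
    """
    converged = False
    while not converged:                               # step 4: iterate
        converged = True
        for inputs, y_d in data:                       # pattern index p
            net = sum(x * w for x, w in zip(inputs, weights)) - theta
            y = 1 if net > 0 else 0                    # step 2: activation
            e = y_d - y                                # e(p) = Yd(p) - Y(p)
            if e != 0:
                converged = False
                for i, x in enumerate(inputs):         # step 3: delta rule
                    weights[i] += alpha * x * e        # w_i += alpha*x_i*e
    return weights
```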
Perceptron Learning: Example: simulating the AND gate

Truth table of the AND gate:

x1 | x2 | Y
 0 |  0 | 0
 0 |  1 | 0
 1 |  0 | 0
 1 |  1 | 1
Apply the algorithm with threshold θ = 0.2 and learning rate α = 0.1. Initialize w1 and w2, for example w1 = 0.3 and w2 = -0.1.
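With the train_perceptron sketch above, this setup can be run as follows; its first epoch reproduces the rows of the trace table below.

```python
# AND-gate training data: ([x1, x2], Yd)
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

learned = train_perceptron(and_data, weights=[0.3, -0.1], theta=0.2, alpha=0.1)
print(learned)  # weights that realize the AND gate
```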
The training trace (θ = 0.2, α = 0.1):
Epoch | Inputs  | Desired   | Initial weights | Actual   | Error | Final weights
      | x1  x2  | output Yd | w1     w2       | output Y | e     | w1     w2
------+---------+-----------+-----------------+----------+-------+---------------
  1   | 0   0   | 0         | 0.3   -0.1      | 0        |  0    | 0.3   -0.1
      | 0   1   | 0         | 0.3   -0.1      | 0        |  0    | 0.3   -0.1
      | 1   0   | 0         | 0.3   -0.1      | 1        | -1    | 0.2   -0.1
      | 1   1   | 1         | 0.2   -0.1      | 0        |  1    | 0.3    0.0
  2   | 0   0   | 0         | 0.3    0.0      | 0        |  0    | 0.3    0.0
      | 0   1   | 0         | 0.3    0.0      | 0        |  0    | 0.3    0.0
      | ...     | ...       | ...             | ...      | ...   | ...

(The trace continues until an epoch passes with e = 0 on every row.)
Can a single perceptron learn any function this way? The answer is ... NO.

A single perceptron can only learn linearly separable problems.
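As a quick illustration (the numbers here are mine, not the slides'): taking $w_1 = 0.2$, $w_2 = 0.1$ and $\theta = 0.2$ makes $x_1 w_1 + x_2 w_2 - \theta$ positive only when $x_1 = x_2 = 1$, so a single perceptron realizes the AND gate; no choice of $w_1, w_2, \theta$ can do the same for XOR, which is not linearly separable.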
An Example of Three-Layer Feed-forward Networks

[Figure: a feed-forward network - the inputs x1, x2, ..., xn form the input layer, which feeds one or more hidden layers, which feed the output layer.]
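A Python sketch of a forward pass through such a network, assuming one hidden layer of sigmoid units (all layer sizes and weight values below are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, thetas):
    # Each unit j computes f(sum_i x_i * w[j][i] - theta_j)
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) - t)
            for row, t in zip(weights, thetas)]

# Illustrative sizes: 3 inputs -> 2 hidden units -> 1 output unit
x = [1.0, 0.0, 1.0]
w_hidden = [[0.5, 0.4, -0.3], [0.1, -0.2, 0.6]]   # 2 x 3 weight matrix
t_hidden = [0.2, -0.1]
w_out = [[0.7, -0.5]]                             # 1 x 2 weight matrix
t_out = [0.3]

h = layer_forward(x, w_hidden, t_hidden)   # hidden-layer outputs
y = layer_forward(h, w_out, t_out)         # network output
print(h, y)
```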
Learning Multi-layer ANN

Back-propagation is a two-phase algorithm (a code sketch of the whole procedure follows the step-by-step description below):

1. Forward phase: a training input pattern is presented to the input layer, and the network propagates it from layer to layer until the output pattern is generated.

2. Backward phase: if the output differs from the desired output, an error is calculated and then propagated backwards through the network, from the output layer to the input layer. The weights are updated as the error is propagated.
Learning Multi-layer ANN

1- Initialization
Set all the weights and threshold levels of the network to random numbers uniformly distributed within a small range. Set p = 1.

2- Activation
Apply the inputs x_1(p), ..., x_n(p) and the desired outputs y_{d,1}(p), ..., y_{d,n}(p).
Calculate the actual outputs of the neurons in the hidden layers, y_j(p), then calculate the actual outputs of the neurons in the output layer.
Learning Multi-layer ANN

3- Weight training
Calculate the error gradient for the neurons in the output layer, then calculate the error gradient for the neurons in the hidden layers, updating the weights as each gradient becomes available.

4- Iteration
Increase the iteration p by one, go back to step 2, and repeat the process until the selected error criterion is satisfied.
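Putting steps 1-4 together, here is a compact Python sketch for a network with one hidden layer and sigmoid activations. The layer sizes, learning rate, stopping criterion, and sum-of-squared-errors measure are illustrative assumptions, not values fixed by the slides.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(data, n_in, n_hid, n_out,
                   alpha=0.5, max_epochs=10000, target_sse=0.01):
    # 1- Initialization: small uniform random weights and thresholds
    rnd = lambda: random.uniform(-0.5, 0.5)
    w_h = [[rnd() for _ in range(n_in)] for _ in range(n_hid)]
    t_h = [rnd() for _ in range(n_hid)]
    w_o = [[rnd() for _ in range(n_hid)] for _ in range(n_out)]
    t_o = [rnd() for _ in range(n_out)]

    for _ in range(max_epochs):
        sse = 0.0
        for x, y_d in data:
            # 2- Activation: forward pass, hidden layer then output layer
            h = [sigmoid(sum(xi * wi for xi, wi in zip(x, row)) - t)
                 for row, t in zip(w_h, t_h)]
            y = [sigmoid(sum(hj * wj for hj, wj in zip(h, row)) - t)
                 for row, t in zip(w_o, t_o)]

            # 3- Weight training: error gradients, output layer first
            e = [yd - yk for yd, yk in zip(y_d, y)]
            sse += sum(ek * ek for ek in e)
            g_o = [yk * (1 - yk) * ek for yk, ek in zip(y, e)]
            g_h = [h[j] * (1 - h[j]) *
                   sum(g_o[k] * w_o[k][j] for k in range(n_out))
                   for j in range(n_hid)]
            for k in range(n_out):
                for j in range(n_hid):
                    w_o[k][j] += alpha * h[j] * g_o[k]
                t_o[k] += alpha * (-1) * g_o[k]   # threshold input is -1
            for j in range(n_hid):
                for i in range(n_in):
                    w_h[j][i] += alpha * x[i] * g_h[j]
                t_h[j] += alpha * (-1) * g_h[j]

        # 4- Iteration: stop when the error criterion is satisfied
        if sse < target_sse:
            break
    return w_h, t_h, w_o, t_o

# Illustrative usage: XOR, a task a single perceptron cannot learn
xor_data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = train_backprop(xor_data, n_in=2, n_hid=2, n_out=1)
```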