8 ANN Classifier Part 2

Artificial Intelligence

Artificial Neural Network (Cont.)

Artificial Intelligence Slide 1


Artificial Neural Networks (Reminder)
• Biologically inspired approaches.

Animal  Big Teeth  Stripy  Long Tail  Tiger?
1       Yes        Yes     Yes        Yes
2       No         Yes     Yes        No
3       Yes        Yes     No         Yes
4       Yes        No      Yes        No

Learning in NNs (Reminder)
• The basic learning rule is:
  • See what the output is for an example.
  • If it is wrong, adjust the weights a bit.
  • Keep going with all the examples, and repeat until the weights converge and the right results are obtained.
• E.g., tiger 1: inputs (1, 1, 1), initial weights (0.5, 0.5, 0.5); output = 1.
OK, don't change the weights.
Learning in NNs (cont.)
• Tiger 2: inputs (0, 1, 1), weights (0.5, 0.5, 0.5); output = 0.
OK, don't change the weights.

Artificial Intelligence – Solving Problems by Searching Slide 4


Learning tigers
• Tiger 3: inputs (1, 1, 0), weights (0.5, 0.5, 0.5); output = 0, but the desired output is 1. Not quite right.
• So increase the weights on the "active" connections: weights become (0.6, 0.6, 0.5), and the output is now 1.
Learning tigers
• Example 4 (inputs 1, 0, 1) is still not quite right.
• Decrease the weights on the active connections.
• End up with weights (0.5, 0.6, 0.4); for example 4 the output is now 0.
Learning tigers
After finishing with the fourth example, we say that we have performed one epoch of training.

One epoch of training means that the perceptron (or any learning system in general) has been trained (had its weights adjusted) once with all the samples in the training data.

This is our training data:

Animal  Big Teeth  Stripy  Long Tail  Tiger?
1       Yes        Yes     Yes        Yes
2       No         Yes     Yes        No
3       Yes        Yes     No         Yes
4       Yes        No      Yes        No
Learning tigers
Shall we stop the training after one epoch?

The answer is NO.

We need to repeat with a second, a third, a fourth epoch, and so on, until the perceptron produces the correct output for all the training samples in the dataset.

At this stage, we say the perceptron is done with the training (has finished learning) and is now ready to be deployed (used to tell whether or not the input features correspond to a tiger).
Perceptron Learning
• Repeat:
  • For each example:
    • If the actual output is 1 and the target is 0, decrease the weights on active connections by a small amount.
    • If the actual output is 0 and the target is 1, increase the weights on active connections by a small amount.
• Until the network gives the right results for all examples.
(Active connections are those for which the input is 1. Each pass over all the examples is one epoch.)
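The rule above can be sketched in Python. This is a minimal sketch, not code from the slides: the slides never state the threshold, so θ = 1.0 (strictly exceeded) is assumed here because it reproduces every output shown; weights are stored in tenths as integers to keep the arithmetic exact.

```python
# Fixed-increment perceptron learning on the tiger data.
# Assumption: the slides do not give the threshold; theta = 1.0 (with a
# strict "sum > theta" step) reproduces every output shown. Weights are
# stored in tenths as integers (5 means 0.5) to avoid float round-off.

THETA = 10  # assumed threshold 1.0, in tenths
STEP = 1    # adjustment amount 0.1, in tenths

# Training data: (Big Teeth, Stripy, Long Tail) -> Tiger?
data = [
    ((1, 1, 1), 1),  # animal 1: tiger
    ((0, 1, 1), 0),  # animal 2: not a tiger
    ((1, 1, 0), 1),  # animal 3: tiger
    ((1, 0, 1), 0),  # animal 4: not a tiger
]

def output(weights, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > THETA else 0

def train(weights, data):
    """Repeat epochs until every example is classified correctly."""
    weights = list(weights)
    while True:
        errors = 0
        for inputs, target in data:
            y = output(weights, inputs)
            if y == target:
                continue                  # OK, don't change
            errors += 1
            delta = STEP if target == 1 else -STEP
            for i, x in enumerate(inputs):
                if x == 1:                # adjust only "active" connections
                    weights[i] += delta
        if errors == 0:                   # a full error-free epoch
            return weights

print(train([5, 5, 5], data))  # [5, 6, 4], i.e. weights (0.5, 0.6, 0.4)
```

Starting from weights (0.5, 0.5, 0.5), this reproduces the trace in the slides: one mistake on tiger 3 (increase), one on tiger 4 (decrease), then an error-free epoch ending with weights (0.5, 0.6, 0.4).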
Learning tigers
Class activity (10 min):
Starting with the weights we obtained in the first epoch, (0.5, 0.6, 0.4), perform another two epochs of training, then check whether the perceptron is ready for deployment.
The Perceptron (Reminder)

A perceptron with inputs x_1, …, x_n and weights w_1, …, w_n computes the output

Y = f( Σ_{i=1..n} x_i w_i − θ )

• Inputs x_i can be binary or real numbers (but are usually normalized).
• f is the activation function:
  • Step function: f(X) = 1 if X ≥ 0, 0 otherwise
  • Sigmoid function: f(X) = 1 / (1 + e^(−X))
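The output formula and both activation functions translate directly into Python. A small sketch; the example weights (0.1, 0.1) and threshold 0.2 below are illustrative values, not part of this slide.

```python
import math

def step(X):
    # Step activation: 1 if X >= 0, else 0
    return 1 if X >= 0 else 0

def sigmoid(X):
    # Sigmoid activation: squashes X into (0, 1)
    return 1.0 / (1.0 + math.exp(-X))

def perceptron(inputs, weights, theta, f):
    # Y = f( sum_i x_i * w_i - theta )
    X = sum(x * w for x, w in zip(inputs, weights)) - theta
    return f(X)

print(perceptron([1, 1], [0.1, 0.1], 0.2, step))  # 1: 0.1 + 0.1 - 0.2 >= 0
print(perceptron([1, 0], [0.1, 0.1], 0.2, step))  # 0: 0.1 - 0.2 < 0
```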
Perceptron Learning: General case
• Adjust the weights iteratively until the amplitude of the error e is minimized:

e(p) = Y_d(p) − Y(p)

where p is the iteration number, Y_d(p) is the desired output, and Y(p) is the actual output.
Perceptron Learning: General case
1. Initialisation:
Set the weights and the threshold θ to random numbers in the range [−0.5, 0.5]; set p = 1.

2. Activation:
Apply the inputs x_1(p), …, x_n(p) and calculate the actual output

Y(p) = f( Σ_{i=1..n} x_i(p) w_i(p) − θ )

3. Weight adjustment:
w_i(p + 1) = w_i(p) + Δw_i(p)
Δw_i(p) = α · x_i(p) · e(p), where α is a positive constant less than 1 (the learning rate)
e(p) = Y_d(p) − Y(p)

4. Iteration:
p = p + 1; go back to step 2 and repeat until convergence.


AND Gate: Design a classifier

X1  X2  Y
0   0   0
0   1   0
1   0   0
1   1   1

Apply the algorithm with θ = 0.2 and α = 0.1.
Initialize w1 and w2, for example w1 = 0.3 and w2 = −0.1.
Perceptron Learning: Example: simulate the AND Gate (θ = 0.2, α = 0.1)

Epoch  Inputs   Desired  Initial      Actual  Error  Final
       x1  x2   Yd       w1    w2     Y       e      w1    w2
1      0   0    0        0.3  −0.1    0       0      0.3  −0.1
       0   1    0        0.3  −0.1    0       0      0.3  −0.1
       1   0    0        0.3  −0.1    1      −1      0.2  −0.1
       1   1    1        0.2  −0.1    0       1      0.3   0.0
2      0   0    0        0.3   0.0    0       0      0.3   0.0
       0   1    0        0.3   0.0    0       0      0.3   0.0
…
5      0   0    0        0.1   0.1    0       0      0.1   0.1
       0   1    0        0.1   0.1    0       0      0.1   0.1
       1   0    0        0.1   0.1    0       0      0.1   0.1
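The general-case algorithm can be turned into a short program that reproduces this trace. A sketch under two assumptions: the step function fires at X ≥ 0 (needed to match the table), and all quantities are stored in tenths as integers so the trace is exact rather than subject to floating-point round-off.

```python
# General perceptron learning rule, applied to the AND gate with the
# slide's settings: theta = 0.2, alpha = 0.1, initial weights (0.3, -0.1).
# Everything is stored in tenths as integers (2 means 0.2) for exactness.

THETA, ALPHA = 2, 1  # theta = 0.2 and alpha = 0.1, in tenths

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def actual_output(weights, inputs):
    # Step activation: fire when the weighted sum reaches the threshold
    X = sum(x * w for x, w in zip(inputs, weights)) - THETA
    return 1 if X >= 0 else 0

def train(weights, data, max_epochs=100):
    weights = list(weights)
    for epoch in range(1, max_epochs + 1):
        converged = True
        for inputs, y_d in data:
            e = y_d - actual_output(weights, inputs)  # e(p) = Yd(p) - Y(p)
            if e != 0:
                converged = False
            for i, x in enumerate(inputs):
                weights[i] += ALPHA * x * e           # dw_i = alpha * x_i * e(p)
        if converged:
            return weights, epoch
    raise RuntimeError("did not converge")

weights, epochs = train([3, -1], AND_DATA)  # w1 = 0.3, w2 = -0.1 in tenths
print(weights, epochs)  # [1, 1] 5: converges to w1 = w2 = 0.1 in epoch 5
```

As in the table, the network makes its last corrections in epoch 4 and passes through epoch 5 with no errors, ending at w1 = w2 = 0.1.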
Perceptron Learning
Can we train the perceptron for any function?

The answer is … NO.

• A single perceptron can only be trained for linearly separable functions/classes (Minsky and Papert, 1969).
• This result delayed research on ANNs for many years.


Perceptron Learning
• Exercise:
Using the truth tables of OR, AND, and XOR, plot the outputs of these functions in the 2D space defined by their two inputs. Then use the obtained plots to tell which functions can be simulated with a perceptron.

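The exercise can also be checked empirically: train a single perceptron on each truth table and see which ones ever reach an error-free epoch. The settings below are assumptions carried over from the AND example (θ = 0.2, α = 0.1, initial weights (0.3, −0.1), stored in tenths as integers), not part of the exercise itself.

```python
# Try to train a single perceptron on OR, AND and XOR.
# Assumed settings: theta = 0.2, alpha = 0.1, initial weights (0.3, -0.1),
# all stored in tenths as integers; step activation f(X) = 1 if X >= 0.

THETA, ALPHA = 2, 1  # 0.2 and 0.1, in tenths

def predict(weights, inputs):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) - THETA >= 0 else 0

def try_train(truth_table, max_epochs=100):
    """Return final weights if training converges, else None."""
    weights = [3, -1]  # 0.3 and -0.1, in tenths
    for _ in range(max_epochs):
        errors = 0
        for inputs, y_d in truth_table:
            e = y_d - predict(weights, inputs)
            errors += abs(e)
            for i, x in enumerate(inputs):
                weights[i] += ALPHA * x * e
        if errors == 0:
            return weights
    return None  # never produced a full error-free epoch

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
OR  = list(zip(inputs, [0, 1, 1, 1]))
AND = list(zip(inputs, [0, 0, 0, 1]))
XOR = list(zip(inputs, [0, 1, 1, 0]))

for name, table in [("OR", OR), ("AND", AND), ("XOR", XOR)]:
    result = try_train(table)
    print(name, "converged" if result else "did not converge")
```

OR and AND converge because their outputs can be separated by a straight line in the (x1, x2) plane; XOR cannot, so its training loop never finds an error-free epoch.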


Multi-layer ANN
• A multi-layer NN is a feed-forward NN with one or more hidden layers.
Typically:
• One input layer
• At least one hidden layer
• One output layer
• The input signals are propagated in a forward direction on a layer-by-layer basis.
• Multi-layer NNs can solve non-linearly separable problems.
An Example of a Three-Layer Feed-forward Network

[Figure: inputs x1, x2, …, xn feed forward through the hidden layers to the output layer.]
Learning Multi-layer ANN
• Back-propagation is a two-phase algorithm:
• A training input pattern is presented to the input layer.
• The network propagates the input pattern from layer to layer until the output pattern is generated.
• If it is different from the desired output, an error is calculated and then propagated backwards through the network, from the output layer to the input layer.
• The weights are updated as the error is propagated.
Learning Multi-layer ANN
1. Initialization:
Set all the weights and threshold levels of the network to random numbers uniformly distributed within a small range; set p = 1.
2. Activation:
Apply the inputs x_1(p), …, x_n(p) and the desired outputs y_{d,1}(p), …, y_{d,n}(p).
Calculate the actual outputs y_j(p) of the neurons in the hidden layers.
Calculate the actual outputs of the neurons in the output layer.
Learning Multi-layer ANN
3. Weight training:
Calculate the error gradient for the neurons in the output layer.
Calculate the error gradient for the neurons in the hidden layer.
4. Iteration:
Increase the iteration p by one, go back to step 2, and repeat the process until the selected error criterion is satisfied.
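The four steps above can be sketched for the XOR problem, which a single perceptron cannot learn. This is a minimal illustrative sketch: the network shape (2 inputs, 2 hidden sigmoid neurons, 1 output), the learning rate, and the epoch count are assumptions, not values from the slides.

```python
# Minimal back-propagation sketch on XOR, following the four steps above.
# Shape (2-2-1), learning rate and epoch count are illustrative assumptions.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Step 1: initialization - small random weights; last entry of each row is a bias
w_h = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(3)]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
alpha = 0.5

def forward(x):
    # Step 2: activation - propagate the input layer by layer
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def epoch_error():
    return sum((y_d - forward(x)[1]) ** 2 for x, y_d in data)

initial_error = epoch_error()
for _ in range(5000):
    for x, y_d in data:
        h, y = forward(x)
        # Step 3: weight training - error gradient in the output layer first...
        delta_o = (y_d - y) * y * (1 - y)
        # ...then in the hidden layer, using the output-layer gradient
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] += alpha * h[j] * delta_o
        w_o[2] += alpha * delta_o
        for j in range(2):
            for i in range(2):
                w_h[j][i] += alpha * x[i] * delta_h[j]
            w_h[j][2] += alpha * delta_h[j]
# Step 4: iterate until the error criterion is satisfied
print(initial_error, "->", epoch_error())  # total squared error, before vs after
```

The error is propagated backwards (output-layer gradient first, hidden-layer gradient second) and the weights are updated as it is propagated, exactly as the two-phase description says.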
