Unit III Convolutional Networks and Sequence Modelling
Motivation: Sparse Interactions
Learning of Traditional vs Convolutional Networks
• Traditional neural network layers use matrix multiplication by a matrix of
parameters, with a separate parameter describing the interaction between each
input unit and each output unit:
s = g(Wᵀx)
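As a minimal sketch, this dense layer can be written in NumPy (the sizes and the choice of g as ReLU are illustrative assumptions, not part of the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for illustration.
n_in, n_out = 5, 3
x = rng.normal(size=n_in)           # input units
W = rng.normal(size=(n_in, n_out))  # one parameter per input/output pair

def g(z):
    """ReLU, one common choice for the nonlinearity g."""
    return np.maximum(z, 0.0)

s = g(W.T @ x)  # s = g(W^T x): every output interacts with every input
print(s.shape)  # (3,)
```

Note that W holds n_in × n_out parameters, since each input/output pair gets its own weight.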
2. Fully connected model: a single black arrow indicates use of the central element of the
weight matrix. The model has no parameter sharing, so that parameter is used only once.
Sparse connectivity and parameter sharing can dramatically improve the efficiency of
image edge detection.
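A small 1-D sketch of this efficiency gain (signal and kernel values are made up for illustration): a convolutional edge detector reuses the same two shared parameters at every position, whereas a fully connected layer producing the same outputs would need one parameter per input/output pair.

```python
import numpy as np

# A 1-D "edge detector": the kernel responds only to changes between neighbours.
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
kernel = np.array([1.0, -1.0])  # just 2 parameters, shared across positions

edges = np.convolve(signal, kernel, mode="valid")
print(edges)  # nonzero only where the signal changes

# A fully connected layer producing the same number of outputs would need
# one parameter per (input, output) pair instead of 2 shared ones.
dense_params = len(edges) * len(signal)
print(dense_params)
```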
Efficiency of Parameter Sharing
• Pooling layer: this layer is periodically inserted in ConvNets. Its main
function is to reduce the spatial size of the volume, which speeds up computation,
reduces memory usage, and also helps prevent overfitting.
• Two common types of pooling layers are max pooling and average
pooling. For example, applying a max pool with 2 × 2 filters and stride 2 to a
32 × 32 × 12 input produces a volume of dimension 16 × 16 × 12.
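This pooling step can be sketched in NumPy without any framework (the input volume here is random data, just to show the shapes): a 2 × 2 max pool with stride 2 takes the maximum over each non-overlapping 2 × 2 spatial block, halving height and width while leaving the channel count unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 12))  # input volume from the example above

# 2x2 max pooling with stride 2: split each spatial axis into blocks of 2
# and take the maximum over each block.
h, w, c = x.shape
pooled = x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))
print(pooled.shape)  # (16, 16, 12)
```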
CNN Architecture
• Flattening: after the convolution and pooling layers, the resulting feature
maps are flattened into a one-dimensional vector so they can be passed
into a fully connected layer for classification or
regression.
• Fully Connected Layers: these take the input from the previous layer and
compute the final classification or regression output.
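The flatten-then-fully-connected step can be sketched as follows (the 10-class output size and the feature-map dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.normal(size=(16, 16, 12))  # output of the conv/pool layers

# Flatten into a 1-D vector, then apply a fully connected layer
# mapping to a hypothetical 10 class scores.
flat = feature_maps.reshape(-1)            # 16*16*12 = 3072 values
W = rng.normal(size=(flat.size, 10)) * 0.01
b = np.zeros(10)
scores = flat @ W + b
print(scores.shape)  # (10,)
```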
• Output Layer: for classification tasks, the output from the fully
connected layers is fed into a logistic
function such as sigmoid or
softmax, which converts the score for each class
into a probability.
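Both output functions are a few lines of NumPy (the example scores are made up): softmax turns a vector of class scores into probabilities that sum to 1, while sigmoid squashes a single score into (0, 1) for binary classification.

```python
import numpy as np

def softmax(z):
    """Convert class scores to probabilities that sum to 1 (multi-class)."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def sigmoid(z):
    """Squash a single score into (0, 1) (binary classification)."""
    return 1.0 / (1.0 + np.exp(-z))

scores = np.array([2.0, 1.0, 0.1])  # hypothetical class scores
probs = softmax(scores)
print(probs, probs.sum())
```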