E9 205 - Machine Learning For Signal Processing
Homework # 5
Due date: April 18, 2022 [12 noon]
April 6, 2022
1. Implementing Backpropagation
leap.ee.iisc.ac.in/sriram/teaching/MLSP_22/assignments/HW_4/Data.tar.gz
The dataset above provides training and test subject faces with happy/sad emotions. Each image is a 100 × 100 matrix. Perform PCA to reduce the dimension from 10000 to K = 12. Implement a deep neural network with one hidden layer (containing 10 neurons) to classify the happy/sad classes. Use the cross-entropy error function with softmax output activations and ReLU hidden-layer activations. Perform 20 iterations of backpropagation and plot the error on the training data as a function of the iteration. What is the test accuracy for this case, and how does it change if the number of hidden neurons is increased to 15? [Implement backpropagation by hand without using any tool.] A minimal NumPy sketch of this pipeline is given after the problem. (Points 30)
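The sketch below is one possible realization, not a reference solution: the random arrays stand in for the images from Data.tar.gz (which you would load and flatten yourself), and the learning rate, initialization scale, and random seed are assumptions. The losses list collected inside the loop is what you would plot against the iteration index.

# Minimal sketch: PCA to K=12, then a one-hidden-layer network trained
# with hand-coded backpropagation (softmax output, cross-entropy loss).
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data so the sketch runs stand-alone; replace with the
# flattened 100x100 images from Data.tar.gz and their 0/1 labels.
X_train = rng.normal(size=(40, 10000)); y_train = rng.integers(0, 2, 40)
X_test  = rng.normal(size=(20, 10000)); y_test  = rng.integers(0, 2, 20)

# PCA: project onto the top K right-singular vectors of the centered data.
K = 12
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P_train = (X_train - mu) @ Vt[:K].T          # (N, 12)
P_test  = (X_test  - mu) @ Vt[:K].T

# Network: 12 -> H (ReLU) -> 2 (softmax). H=10 per the problem; rerun with H=15.
H, C, eta = 10, 2, 0.01                      # eta is an assumed learning rate
W1 = rng.normal(scale=0.1, size=(K, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, C)); b2 = np.zeros(C)
Y = np.eye(C)[y_train]                       # one-hot targets

losses = []
for it in range(20):
    # Forward pass.
    z1 = P_train @ W1 + b1
    h  = np.maximum(z1, 0)                   # ReLU
    z2 = h @ W2 + b2
    z2 -= z2.max(axis=1, keepdims=True)      # numerical stability
    v  = np.exp(z2) / np.exp(z2).sum(axis=1, keepdims=True)  # softmax
    losses.append(-np.mean(np.sum(Y * np.log(v + 1e-12), axis=1)))

    # Backward pass: gradients of the mean cross-entropy.
    dz2 = (v - Y) / len(Y)                   # softmax + CE simplification
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh  = dz2 @ W2.T
    dz1 = dh * (z1 > 0)                      # ReLU derivative
    dW1, db1 = P_train.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent update.
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2

# Test accuracy.
h_t  = np.maximum(P_test @ W1 + b1, 0)
pred = np.argmax(h_t @ W2 + b2, axis=1)
print("training losses per iteration:", np.round(losses, 4))
print("test accuracy:", np.mean(pred == y_test))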
2. Implementing DNNs and CNNs
(a) Implementing DNNs - Use the PyTorch package to implement a DNN model with 2 and 3 hidden layers. Each hidden layer contains 512 neurons. What is the performance on the test dataset for this classifier?
(b) Implementing CNNs - Use the same package to implement a CNN model with one layer of convolutions (kernel size of 3 × 3 with a 2-D convolutional layer and 128 filters) followed by two dense layers of 256 neurons each. Compare the performance of the CNN with the DNN.
(c) Provide your answers with analysis and plots for various choices of hidden-layer dimensions and filter sizes in the CNN. A PyTorch sketch of both model families is given after this problem.
(Points 40)
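As a sketch of parts (a) and (b), the PyTorch models below follow the stated layer sizes; everything else is an assumption, not part of the problem statement: the DNN is assumed to take the 10000-dimensional flattened image, the CNN the raw 1 × 100 × 100 image, and the pooling layer exists only to keep the first dense layer a manageable size. Training details (optimizer, epochs, batch size) are left to you.

# Sketch of the two model families in PyTorch.
import torch
import torch.nn as nn

class DNN(nn.Module):
    """Fully connected net with n_hidden hidden layers of 512 ReLU neurons."""
    def __init__(self, in_dim=10000, n_hidden=2, width=512, n_classes=2):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(n_hidden):            # n_hidden = 2 or 3 per part (a)
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):                    # x: (N, 10000) flattened images
        return self.net(x)

class CNN(nn.Module):
    """One 3x3 conv layer with 128 filters, then two 256-unit dense layers."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=3), nn.ReLU(),  # (N,128,98,98)
            nn.MaxPool2d(4),                 # assumed pooling -> (N,128,24,24)
        )
        self.dense = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 24 * 24, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):                    # x: (N, 1, 100, 100) images
        return self.dense(self.conv(x))

if __name__ == "__main__":
    dnn, cnn = DNN(n_hidden=3), CNN()        # smoke test with random input
    print(dnn(torch.randn(4, 10000)).shape)         # torch.Size([4, 2])
    print(cnn(torch.randn(4, 1, 100, 100)).shape)   # torch.Size([4, 2])

Both models output raw logits, so nn.CrossEntropyLoss (which applies softmax internally) is the matching training criterion.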
3. Neural Networks - Cost Function - Let us define a NN with a softmax output layer, and let $\{o_i\}_{i=1}^{M}$ and $\{y_i\}_{i=1}^{M}$ denote the inputs and targets of the NN. The task is classification with hard targets $y_i \in \mathbb{B}^{C \times 1}$, where $\mathbb{B}$ denotes a Boolean variable (0 or 1) and $C$ is the number of classes. Note that every data point $o_i$ is associated with only one class label $c_i$, where $c_i \in \{1, 2, \ldots, C\}$. The output of the network is denoted as $v_i$, where $v_i = [v_{i1}, v_{i2}, \ldots, v_{iC}]^{\top} \in \mathbb{R}^{C \times 1}$ and $0 < v_{ic} < 1$ for every component. The NN cost function can be defined using the mean square error (MSE):
$$J_{MSE} = \sum_{i=1}^{M} \| v_i - y_i \|^2$$
(Points 15)
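For later use in the gradient-descent step, one worked step may help: differentiating $J_{MSE}$ term by term with respect to a single network output gives

$$\frac{\partial J_{MSE}}{\partial v_i} = 2\,(v_i - y_i).$$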
Consider the gradient-descent weight update
$$w^{t+1} = w^{t} - \eta \, \frac{\partial J}{\partial w}$$
where $t$ is the iteration index. If the weights $w$ were initialized to $w^0 = 0$, then show that after $Q$ steps of gradient descent, the weights are given by
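$$w^{Q} = -\eta \sum_{t=0}^{Q-1} \left. \frac{\partial J}{\partial w} \right|_{w = w^{t}}.$$
(This expression is reconstructed as a hint by unrolling the update rule above from $w^0 = 0$; it is stated as an assumption, since the original right-hand side is not shown here, and the derivation is left as the exercise.)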