
E9 205 – Machine Learning for Signal Processing

Homework # 5
Due date: April 18, 2022 [12 noon]

Analytical solutions should be written up and submitted on Teams.


Source code also needs to be included.
The file should be named "Assignment5_FullName.pdf" and submitted to the Teams channel.
The assignment should be solved individually, without consultation.

April 6, 2022

1. Implementing Backpropagation
leap.ee.iisc.ac.in/sriram/teaching/MLSP_22/assignments/HW_4/Data.tar.gz
The above dataset provides training/test subject faces with happy/sad emotions. Each image is a 100 × 100 matrix. Perform PCA to reduce the dimension from 10000 to K = 12. Implement a deep neural network with 1 hidden layer (containing 10 neurons) to classify the happy/sad classes. Use the cross-entropy error function with softmax output activations and ReLU hidden-layer activations. Perform 20 iterations of backpropagation and plot the error on the training data as a function of the iteration. What is the test accuracy for this case, and how does it change if the number of hidden neurons is increased to 15? [Implement backpropagation by hand without using any tool.] (Points 30)
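
The following is a minimal NumPy sketch of the full pipeline (PCA to K = 12, one ReLU hidden layer, softmax outputs with cross-entropy, hand-coded backpropagation). The random placeholder arrays, the learning rate eta = 0.1, and full-batch updates are assumptions for illustration; substitute the faces from Data.tar.gz and your own hyperparameters.

import numpy as np

# Sketch of Problem 1 with hand-coded backpropagation. Random placeholders
# stand in for the real images; replace them with the Data.tar.gz faces
# flattened to 10000-dimensional rows, with 0/1 labels for sad/happy.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(40, 10000)), rng.integers(0, 2, 40)
X_test, y_test = rng.normal(size=(20, 10000)), rng.integers(0, 2, 20)

K, H, C, eta = 12, 10, 2, 0.1   # set H = 15 for the second configuration

# PCA: project onto the top-K principal directions of the training data.
mu = X_train.mean(axis=0)
U, _, _ = np.linalg.svd((X_train - mu).T, full_matrices=False)
P_train, P_test = (X_train - mu) @ U[:, :K], (X_test - mu) @ U[:, :K]

Y = np.eye(C)[y_train]          # one-hot targets for the softmax outputs

# One hidden layer of H ReLU neurons, softmax output layer.
W1, b1 = rng.normal(scale=0.1, size=(K, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(H, C)), np.zeros(C)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

errors = []
for it in range(20):                      # 20 iterations of backpropagation
    A = np.maximum(P_train @ W1 + b1, 0)  # ReLU hidden activations
    V = softmax(A @ W2 + b2)              # softmax outputs
    errors.append(-np.mean(np.log(V[np.arange(len(Y)), y_train])))
    dZ2 = (V - Y) / len(Y)                # grad of cross-entropy wrt logits
    dW2, db2 = A.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (A > 0)          # backprop through the ReLU
    dW1, db1 = P_train.T @ dZ1, dZ1.sum(axis=0)
    W1, b1 = W1 - eta * dW1, b1 - eta * db1
    W2, b2 = W2 - eta * dW2, b2 - eta * db2

# Plot `errors` against the iteration index for the report.
pred = softmax(np.maximum(P_test @ W1 + b1, 0) @ W2 + b2).argmax(axis=1)
print("training error per iteration:", np.round(errors, 3))
print("test accuracy:", (pred == y_test).mean())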

2. MNIST data - Download the dataset of hand-written digits


http://yann.lecun.com/exdb/mnist/
containing 10 classes. [Reduce the number of training samples by random subsampling if your computing power is limited.] The PyTorch package is needed for the rest of the question: https://pytorch.org/

(a) Implementing DNNs - Use the PyTorch package to implement a DNN model with 2 and 3 hidden layers. Each hidden layer contains 512 neurons. What is the performance on the test dataset for this classifier?
(b) Implementing CNNs - Use the same package to implement a CNN model with one layer of convolutions (kernel size of 3 × 3 with a 2-D convolutional layer and 128 filters) followed by two dense layers of 256 neurons. Compare the performance of the CNN with the DNN (a minimal sketch of both models is given after this problem).
(c) Provide your answers with analysis and plots for various choices of hidden layer dimensions and filter sizes in the CNN.

(Points 40)
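
The following is a minimal PyTorch sketch of parts (a) and (b). It assumes torchvision's MNIST loader as a convenience for the raw files linked above; the Adam optimizer, learning rate of 1e-3, batch sizes, and 3 training epochs are illustrative choices, not prescribed by the assignment.

import torch
import torch.nn as nn
from torchvision import datasets, transforms

# torchvision download of MNIST (the assignment links the raw files at
# yann.lecun.com; subsample the training set if compute is limited).
train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
test = datasets.MNIST(".", train=False, download=True,
                      transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test, batch_size=256)

# (a) DNN with 2 hidden layers of 512 neurons each; append one more
# Linear(512, 512) + ReLU pair for the 3-hidden-layer variant.
dnn = nn.Sequential(nn.Flatten(),
                    nn.Linear(784, 512), nn.ReLU(),
                    nn.Linear(512, 512), nn.ReLU(),
                    nn.Linear(512, 10))

# (b) CNN: one 2-D convolutional layer (3 x 3 kernel, 128 filters) followed
# by two dense layers of 256 neurons; 28 x 28 inputs give 26 x 26 maps.
cnn = nn.Sequential(nn.Conv2d(1, 128, kernel_size=3), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(128 * 26 * 26, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 10))

def fit_and_score(model, epochs=3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    with torch.no_grad():
        correct = sum((model(x).argmax(dim=1) == y).sum().item()
                      for x, y in test_loader)
    return correct / len(test)

for name, model in [("DNN", dnn), ("CNN", cnn)]:
    print(name, "test accuracy:", fit_and_score(model))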
3. Neural Networks - Cost Function - Let us define a NN with a softmax output layer, and let $\{o_i\}_{i=1}^{M}$ and $\{y_i\}_{i=1}^{M}$ denote the inputs and targets of the NN. The task is classification with hard targets $y_i \in \mathbb{B}^{C \times 1}$, where $\mathbb{B}$ denotes a Boolean variable (0 or 1) and $C$ is the number of classes. Note that every data point $o_i$ is associated with only one class label $c_i$, where $c_i \in \{1, 2, \ldots, C\}$. The output of the network is denoted as $v_i$, where $v_i = (v_{i1}, v_{i2}, \ldots, v_{iC}) \in \mathbb{R}^{C \times 1}$ and $0 < v_{ic} < 1$. The NN cost function can be defined using the mean square error (MSE):

$$J_{\mathrm{MSE}} = \sum_{i=1}^{M} \|v_i - y_i\|^2$$

Show that the MSE is bounded in the following manner:

$$\sum_{i=1}^{M} \left(1 - v_{i c_i}\right)^2 \;\le\; J_{\mathrm{MSE}} \;\le\; 2 \sum_{i=1}^{M} \left(1 - v_{i c_i}\right)^2$$

(Points 15)
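
A possible opening step for the proof (a hint, not part of the original statement): expand the squared norm for a single data point, using the fact that the hard target has a single 1 in position $c_i$,

$$\|v_i - y_i\|^2 = \left(1 - v_{i c_i}\right)^2 + \sum_{c \neq c_i} v_{ic}^2 ,$$

and note that the softmax outputs sum to one, so $\sum_{c \neq c_i} v_{ic} = 1 - v_{i c_i}$. Since every $v_{ic} > 0$, the off-target term satisfies $0 \le \sum_{c \neq c_i} v_{ic}^2 \le \big(\sum_{c \neq c_i} v_{ic}\big)^2 = \left(1 - v_{i c_i}\right)^2$; summing over $i = 1, \ldots, M$ gives both sides of the bound.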

4. Learning of NN weights - Consider a quadratic error of the form


$$J = J_0 + \frac{1}{2}\,(w - w^*)^T H\,(w - w^*)$$
where $w^*$ represents the minimum of the function and $H$ represents the Hessian matrix, which is positive definite. Let the eigenvalues and eigenvectors of $H$ be denoted by $\lambda_j$ and $u_j$ respectively, for $j = 1, \ldots, n$. The gradient-descent update of $w$ is given by

$$w^{(t+1)} = w^{(t)} - \eta\,\frac{\partial J}{\partial w}$$
where $t$ is the iteration index. If the weights $w$ were initialized to $w^{(0)} = 0$, then show that after $Q$ steps of gradient descent, the weights are given by

$$w_j^{(Q)} = \left(1 - (1 - \eta\lambda_j)^Q\right) w_j^*$$

where $w_j = w^T u_j$. Show that as $Q \to \infty$, $w^{(Q)} \to w^*$ provided $|1 - \eta\lambda_j| < 1$ for all $j$.


(Points 15)
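
As a numerical sanity check (an illustration, not required by the assignment), the closed form can be verified on a small random quadratic; the dimension, step size, scaling of $H$, and iteration count below are arbitrary choices.

import numpy as np

# Verify the Problem 4 closed form: run gradient descent on
# J = J0 + 0.5 (w - w*)^T H (w - w*) starting from w0 = 0, then compare the
# eigenbasis projections u_j^T w against (1 - (1 - eta*lambda_j)^Q) w*_j.
rng = np.random.default_rng(1)
n, eta, Q = 4, 0.1, 50                   # arbitrary small test case
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)                  # positive definite Hessian
H *= 5.0 / np.linalg.eigvalsh(H).max()   # rescale so |1 - eta*lambda_j| < 1
w_star = rng.normal(size=n)

w = np.zeros(n)                          # w0 = 0
for _ in range(Q):
    w = w - eta * H @ (w - w_star)       # dJ/dw = H (w - w*)

lam, U = np.linalg.eigh(H)               # eigen-decomposition of H
print("gradient descent:", U.T @ w)      # projections w_j = u_j^T w
print("closed form:     ", (1 - (1 - eta * lam) ** Q) * (U.T @ w_star))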
