Lesson Plan - ML24ECSC306
FMTH0301/Rev.5.3
Course Plan
Prerequisites: Basics of Python programming, Exploratory Data Analysis, Linear Algebra, Calculus,
and Statistical models.
Course Overview:
The course is designed to enhance critical thinking, creativity, and innovation, enabling students to not
only keep up with current technological trends but also contribute to solving future domain-specific
challenges. The course covers key concepts like supervised and unsupervised learning, ensemble
learning, and seq2seq methodologies, providing a comprehensive understanding of modern AI
techniques. Through course projects, students develop a responsible approach to AI, ensuring their
work positively impacts society while preparing them for practical engineering applications in this rapidly
evolving field, often leading to valuable research publications.
School of Computer Science and Engineering
Course Articulation Matrix: Mapping of Course Outcomes (COs) with Program Outcomes
(POs)
Course Title: Machine Learning and Deep Learning Semester: 5
Course Code: 24ECSC306 Year: 2024-25
Course Content
Course Code: 24ECSC306 Course Title: Machine Learning and Deep Learning
L-T-P : 2-0-2 Credits: 4 Contact Hrs.: 6 Hrs.
ISA Marks: 50 ESA Marks: 50 Total Marks: 100
Teaching Hrs: 30 Lab Hrs: 56 ESA Theory Exam Duration: 2 hrs
Content Hrs
Unit - 1
Chapter 1: Introduction and Regression: Fundamentals of ML, linear, ridge, lasso, elastic-net regression, evaluation. (4 hrs.)
Chapter 2: Classification: Linear discriminant analysis, logistic regression, support vector machines, decision tree, extra trees, Bayesian networks, evaluation. (5 hrs.)
Chapter 3: Ensemble learning: Bagging, boosting, stacking, random forest, resampling methods. (6 hrs.)
Unit - 2
Chapter 4: Neural Networks: Perceptron, gradient descent, optimization algorithms, backpropagation, hyperparameters, regularization. (5 hrs.)
Chapter 5: Deep Neural Networks: Convolutional neural networks, various CNN architectures, model selection and evaluation, bias-variance. (6 hrs.)
Chapter 6: Seq2Seq models: Recurrent neural networks, long short-term memory, autoencoders. (4 hrs.)
Text Books
1. Tom Mitchell, Machine Learning, McGraw-Hill, 1997.
2. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, MIT Press, 2016.
References
1. Aurélien Géron, Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O'Reilly Media, 2017.
2. Luca Pietro Giovanni Antiga, Thomas Viehmann, Eli Stevens, Deep Learning with PyTorch
Manning Publications, 2020.
i. ISA-I: Chapters 1, 2, 3; conducted for 30 marks, reduced to 25 marks
ii. ISA-II: Chapters 4, 5, 6; conducted for 30 marks
iii. Lab Evaluation ISA: Chapters 1, 2, 3, 4, 5, 6; conducted for 20 marks, reduced to 10 marks
iv. Course Project: Chapters 1, 2, 3, 4, 5, 6; conducted for 30 marks, reduced to 15 marks
Note
1. Each question carries 15 marks and may consist of sub-questions.
2. Mixing of sub-questions from different chapters within a unit (only for Unit I and Unit II) is allowed in Minor I, Minor II, and the ESA.
3. In the ESA, answer 4 full questions of 15 marks each (two full questions each from Unit I and Unit II) out of 6 questions.
Assessment Methods
Course outcomes (COs), their weightage in course assessment, and the components in which each is assessed (ISA1, ISA2, lab activity and course project, theory):
1. Explain supervised and unsupervised machine learning algorithms. (40%; ISA1, ISA2, lab activity and course project, theory)
2. Apply various ensemble learning methods to improve the performance of machine learning models. (10%; ISA1, ISA2, lab activity and course project, theory)
3. Implement Seq2Seq models for sequence-related multimedia tasks. (10%; ISA1, ISA2, lab activity and course project, theory)
4. Employ appropriate machine learning algorithms for a given real-world application. (20%; lab activity and course project, theory)
5. Create a comprehensive report of the course project and publish a paper at a technical conference. (20%; lab activity and course project, theory)
Component weightage: 25%, 50%, 25%
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 1. Introduction and Regression Planned Hours: 4 hrs
Learning Outcomes:-
At the end of the topic, the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour
1. Introduction to Machine Learning and its applications
2. Introduction to Linear Regression
3. Gradient Descent for Linear Regression
4. Regularization: Ridge, Lasso, and Elastic Net
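The regularization topics above lend themselves to a short numerical sketch. Assuming a single-feature model y ≈ θx with no intercept, the ordinary least-squares and ridge (L2-penalized) closed-form solutions differ only by the penalty λ added to the denominator; the data below are made up for illustration.

```python
# Closed-form fit of y ~ theta * x, without and with an L2 (ridge) penalty.
def fit_ols(xs, ys):
    # theta minimising sum((theta*x - y)^2)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_ridge(xs, ys, lam):
    # theta minimising sum((theta*x - y)^2) + lam * theta^2
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.2, 5.9, 8.1]
print(fit_ols(xs, ys))         # close to the underlying slope of 2
print(fit_ridge(xs, ys, 5.0))  # shrunk toward 0 by the penalty
```

Lasso (L1) has no such closed form in general and is typically solved by coordinate descent; Elastic Net combines both penalties.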
Review Questions
Sl.No. - Questions TLOs BL PI Code
1. What is machine learning? What is a hypothesis? What are the 1 L2 1.3.1
three main components of the machine learning process?
2. Define features, observations, and hypotheses. What are the 3 L2 1.3.1
various data formats of a dataset? How does data format affect
machine learning tasks? Explain with a suitable example.
3. Design a system that predicts the salary of a person based on 3 L3 1.3.1
his experience using the machine learning approach. What will
be experience E to learn task T to perform P for the system?
4. Refer to the data given in the table below. If in the linear 5 L3 2.3.1
regression model θ0 = 0, find the cost using the cost function for θ1 = 0,
0.5, and 1. Which is the best value for θ1?

x    y
2    5
3    10
4    14
5. Explain the importance of regularization in machine learning, 4 L2 2.1.3
describe how it works, and discuss common regularization
techniques.
6. Explain the role of the following factors in reaching global minima 3 L3 1.3.1
with a gradient descent algorithm for linear regression.
a. Epochs
b. Learning rate
c. Parameters
d. Bias and Variance
7. Explain overfitting in linear regression with examples. How do 4 L2 2.1.3
you overcome overfitting?
8. Describe the bias-variance tradeoff in machine learning. Why is 4 L2 2.1.3
it important, and how does it affect model performance?
9. Compare L1 and L2 regularization techniques in terms of their 4 L3 2.1.3
effects on model complexity and feature selection. How do they
differ in their impact on the magnitude of weight values?
10. Explain the difference between supervised, unsupervised and 2 L2 1.3.1
reinforcement learning techniques with suitable examples.
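Question 4 above can be checked numerically. This sketch assumes the table lists (x, y) training pairs and uses the usual cost J(θ1) = (1/2m) Σ(θ1·x - y)².

```python
# Worked check for Question 4: squared-error cost with theta0 = 0.
data = [(2, 5), (3, 10), (4, 14)]  # (x, y) pairs from the table

def cost(theta1):
    m = len(data)
    return sum((theta1 * x - y) ** 2 for x, y in data) / (2 * m)

for theta1 in (0, 0.5, 1):
    print(theta1, cost(theta1))
# Of the three candidates, theta1 = 1 gives the lowest cost.
```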
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 2. Classification Planned Hours: 5 hrs
Learning Outcomes:-
At the end of the topic, the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour
1. Linear Discriminant Analysis
2. Introduction to Logistic Regression and Gradient Descent
3. Support Vector Machines
4. Decision trees
5. Extra trees and Bayesian networks
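To complement the logistic regression and gradient descent classes above, here is a minimal self-contained sketch: batch gradient descent on the cross-entropy loss for a 1-D logistic model, using made-up, linearly separable data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gd_step(w, b, xs, ys, lr=0.1):
    # One batch gradient-descent update for the cross-entropy loss.
    m = len(xs)
    dw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / m
    db = sum(sigmoid(w * x + b) - y for x, y in zip(xs, ys)) / m
    return w - lr * dw, b - lr * db

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]
w, b = 0.0, 0.0
for _ in range(500):
    w, b = gd_step(w, b, xs, ys)

print(sigmoid(w * 2.0 + b) > 0.5)   # positive example classified as 1
print(sigmoid(w * -2.0 + b) < 0.5)  # negative example classified as 0
```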
Review Questions
Sl.No. - Questions TLOs BL PI
Code
1. Explain Linear Discriminant Analysis and how it performs dimensionality 1 L2 2.1.3
reduction.
2. Why is it necessary to estimate the accuracy of a hypothesis? Explain with an 4 L3 2.3.1
example for logistic regression using regularization.
3. The logistic regression model does not calculate the cost using the sum of 3 L3 2.3.1
squared errors. Do you agree with the statement? If yes, explain why, and how
is the cost calculated in logistic regression? If no, justify.
4. Apply the SVM algorithm to the data points in Table-1 and find the dimension 5 L3 2.3.1
of the hyperplane that classifies them.

X     Y    Label
4     2    -1
4    -2    -1
6     1    -1
6    -1    -1
8     0     1
10   -1     1
12    2     1
12   -2     1
Table-1
5. Use an appropriate decision tree algorithm on the dataset given below and 5 L3 2.3.1
determine whether to play football or not.

Outlook Temperature Humidity Wind Played football (yes/no)
Sunny Hot High Weak No
Sunny Hot High Strong No
Overcast Hot High Weak Yes
Rain Mild High Weak Yes
Rain Cool Normal Weak Yes
Rain Cool Normal Strong No
Overcast Cool Normal Strong Yes
Sunny Mild High Weak No
Sunny Cool Normal Weak Yes
Rain Mild Normal Weak Yes
Sunny Mild Normal Strong Yes
Overcast Mild High Strong Yes
Overcast Hot Normal Weak Yes
Rain Mild High Strong No
6. What are Bayesian networks, and how does the kernel function work? 5 L2 2.3.1
7. Consider a dataset with features (X) and a label (Y): 5 L3 2.3.1
X = [[5, 3], [4, 7], [6, 8], [3, 9], [8, 2]]
Y = [0, 1, 1, 0, 1]
Classify a new data point [7, 5] and make a final prediction based on majority
voting by building decision trees with a maximum depth of 2.
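For Question 5, the first ID3 split can be checked numerically. The sketch keeps only the Outlook column and the label from the table above; the resulting dataset entropy (≈0.940) and information gain for Outlook (≈0.247) are the well-known values for this classic dataset.

```python
import math

rows = [
    ("Sunny", "No"), ("Sunny", "No"), ("Overcast", "Yes"), ("Rain", "Yes"),
    ("Rain", "Yes"), ("Rain", "No"), ("Overcast", "Yes"), ("Sunny", "No"),
    ("Sunny", "Yes"), ("Rain", "Yes"), ("Sunny", "Yes"), ("Overcast", "Yes"),
    ("Overcast", "Yes"), ("Rain", "No"),
]

def entropy(labels):
    n = len(labels)
    return -sum(
        (labels.count(v) / n) * math.log2(labels.count(v) / n)
        for v in set(labels)
    )

labels = [y for _, y in rows]
total = entropy(labels)
# Information gain of splitting on Outlook = entropy minus the
# weighted entropy of the Sunny / Overcast / Rain subsets.
gain = total - sum(
    (len(sub) / len(rows)) * entropy(sub)
    for value in ("Sunny", "Overcast", "Rain")
    for sub in [[y for o, y in rows if o == value]]
)
print(round(total, 3), round(gain, 3))  # 0.94 0.247
```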
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 3: Ensemble Learning Planned Hours: 6 hrs.
Learning Outcomes: -
At the end of the topic the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour / per Class
1. Introduction to Ensemble learning
2. Bagging technique
3. Boosting and stacking technique
4. Random Forest technique
5. AdaBoost technique
6. Resampling method
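A minimal sketch of the bagging idea covered in this chapter: train weak threshold classifiers ("stumps") on bootstrap samples of the training set and combine their predictions by majority vote. The 1-D dataset and the stump learner are illustrative assumptions, not taken from the syllabus.

```python
import random

random.seed(0)
# Class 0 at x = 1..5, class 1 at x = 6..10.
data = [(x, 0) for x in range(1, 6)] + [(x, 1) for x in range(6, 11)]

def fit_stump(sample):
    # Pick the threshold t (predict 1 when x > t) with the fewest errors.
    best_t, best_err = None, len(sample) + 1
    for t in range(1, 11):
        err = sum((x > t) != (y == 1) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

stumps = []
for _ in range(25):
    sample = [random.choice(data) for _ in data]  # bootstrap resample
    stumps.append(fit_stump(sample))

def predict(x):
    votes = sum(x > t for t in stumps)  # majority vote of the stumps
    return 1 if votes > len(stumps) / 2 else 0

print(predict(1), predict(10))  # 0 1
```

Random forest applies the same idea to decision trees, additionally sampling a random subset of features at each split; boosting, by contrast, trains the learners sequentially and re-weights the data.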
Review Questions
Sl.No. - Questions TLOs BL PI Code
1. Explain how ensemble learning helps in incremental learning. 4 L2 2.1.3
2. Discuss, with suitable examples and reasons, when we should and should not 4 L3 2.1.3
use ensemble learning to improve the performance of machine learning models.
3. Explain how the bagging and boosting techniques help to improve the 5 L2 2.3.1
performance of the model.
4. Consider a regression problem with 'n' predictions on test data by 'n' 1 L3 2.1.3
different models (M1, M2, ..., Mn) respectively. Explain the methods that
can be used to combine the predictions of these models.
5. Can we ensemble multiple models of the same machine learning algorithm? 3 L3 2.3.1
Justify your answer with a suitable reason and example.
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 4: Neural Network Planned Hours: 5 hrs.
Learning Outcomes: -
At the end of the topic the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour / per Class
1. Introduction to perceptron learning, Model representation
2. Gradient checking
3. Back propagation algorithm
4. Hyperparameter Tuning, Multiclass Classification
5. Applications of Neural Network, Regularization
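The perceptron learning rule from the first class above can be demonstrated on the linearly separable AND function; the learning rate and epoch count below are illustrative choices.

```python
# Perceptron learning rule on the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Step activation: fire when the weighted sum is positive.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):              # epochs
    for x, y in data:
        err = y - predict(x)     # -1, 0, or +1; update only on mistakes
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only represent linearly separable functions (it fails on XOR), which motivates the multi-layer networks and backpropagation covered later in this chapter.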
Review Questions:
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 5: Deep Neural Network Planned Hours: 6 hrs.
Learning Outcomes:
At the end of the topic the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour / per Class
1. Introduction to Deep Neural Network (DNN) and implementation
2. Implementation of DNN
3. Study of various DNN architectures
4. Model Selection and Evaluation
5. Model Selection and Evaluation
6. Bias-Variance tradeoff
Review Questions:
6. Consider a CNN architecture trained on 10,000 images with a batch size of 1 L3 2.1.3
64, with each input of size 128 * 128, which undergoes a convolution operation
with a kernel size of 3 * 3 and a stride of 2. Later, a max pooling layer is
applied with a pool size of 2 * 2 and a stride of 2. Answer the following:
• How many iterations (or batches) will be required to complete
one training epoch?
• How many output feature maps (channels) will you get, and
what will be the dimensions of each output feature map for both
the Convolution and max pooling layer?
7. In a binary classification problem, your model predicts 30 true positives, 1 L3 2.1.3
10 true negatives, 5 false positives, and 15 false negatives. Calculate
the precision, recall, F1-score, and accuracy of the model.
8. Describe the challenges and potential solutions for handling class 3 L3 2.3.1
imbalance in a multi-class image classification problem when using
CNNs.
9. (i) What are the problems of the VGGNet architecture? Explain how 4 L3 2.4.1
ResNet overcomes these problems.
(ii) What are fully convolutional layers? In what situations are fully
convolutional layers better than fully connected layers and vice-versa?
10. Describe the following optimization techniques and list the advantages 3 L3 2.3.1
and disadvantages.
(i) SGD (ii) Gradient Descent with Momentum (iii) RMSProp (iv) ADAM
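Review questions 6 and 7 can be verified with a short calculation. For question 6 this sketch assumes a "valid" (no-padding) convolution, so each spatial size follows out = (in - kernel) // stride + 1; note that the number of output feature maps equals the number of kernels, which the question does not specify.

```python
import math

# Question 6: batches per epoch and spatial output sizes.
iterations_per_epoch = math.ceil(10_000 / 64)  # 157 batches
conv_out = (128 - 3) // 2 + 1                  # 3x3 kernel, stride 2 -> 63
pool_out = (conv_out - 2) // 2 + 1             # 2x2 pool, stride 2 -> 31

# Question 7: metrics from the stated confusion counts.
tp, tn, fp, fn = 30, 10, 5, 15
precision = tp / (tp + fp)                     # 30/35, about 0.857
recall = tp / (tp + fn)                        # 30/45, about 0.667
f1 = 2 * precision * recall / (precision + recall)  # 0.75
accuracy = (tp + tn) / (tp + tn + fp + fn)     # 40/60, about 0.667

print(iterations_per_epoch, conv_out, pool_out)
print(round(precision, 3), round(recall, 3), round(f1, 3), round(accuracy, 3))
```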
Course Code and Title: 24ECSC306 / Machine Learning and Deep Learning
Chapter Number and Title: 6. Seq2Seq Models Planned Hours: 4 hrs
Learning Outcomes:
At the end of the topic the student should be able to:
Lesson Schedule
Class No. - Portion covered per hour / per Class
1. Introduction to Seq2seq models, RNN
2. LSTM
3. Applications of Seq2seq models
4. Autoencoders
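The recurrence at the heart of the RNN and LSTM classes above can be sketched in a few lines: the same weights are applied at every time step, and the hidden state carries information forward. The weights here are illustrative constants, not trained values.

```python
import math

w_x, w_h, b = 0.5, 0.8, 0.0  # input, recurrent, and bias weights

def rnn(sequence):
    h = 0.0  # initial hidden state
    for x in sequence:
        # h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
        h = math.tanh(w_x * x + w_h * h + b)
    return h

# The final state depends on the whole sequence, including its order.
print(rnn([1.0, 0.0, 0.0]))
print(rnn([0.0, 0.0, 1.0]))
```

Over long sequences, repeated multiplication by the recurrent weight drives gradients toward zero or infinity (vanishing/exploding gradients), which is the failure mode LSTM's gated cell state is designed to mitigate.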
Review Questions
Sl.No. - Questions TLOs BL PI Code
1. What is a recurrent neural network? Why use recurrent neural networks? 2 L2 2.1.3
How does an RNN work?
2. What are the various applications of recurrent neural networks? Discuss 3 L3 2.1.3
how image captioning works.
3. Explain how recurrent neural networks differ from convolutional neural 1 L2 2.1.3
networks.
5. What are autoencoders? How is data represented using autoencoders? 4 L2 2.1.3
What are the various applications of autoencoders?
6. Consider the linear autoencoder over the real numbers. Show that all the 2 L3 2.1.3
information about the data is contained in the mean and covariance matrix
of the data. Show that the standard least square error function is a quadratic
function (parabola) in each individual weight, if all the other weights are
assumed to be constant.
7. Discuss when RNNs fail. How does long short-term memory (LSTM) work 2 L2 2.1.3
to overcome these failures?
8. Explain the concept of denoising autoencoders. How do they work, and 4 L3 2.1.3
what are the advantages of using denoising autoencoders in
applications like image or speech denoising?
Q.No Questions Marks CO BL PO PI Code
1.a (i) You are given the function f(x, y) = x² + xy + y² and are 10 1 L2 2 2.1.3
trying to find a local minimum using gradient descent. You randomly
start with x = 1.3 and y = 5.4. Perform the first step of gradient
descent with learning rate α = 0.01. Show the resulting values for
x and y as well as all of your calculations.
(ii) If we train a neural network for 1,000 epochs (one training
example at a time), does it make a difference whether we present all
training examples in turn 1,000 times, or whether we first present
the first training example 1,000 times, then the second training
example 1,000 times, and so on? Why?
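The first step of 1.a(i) can be checked as follows, using the partial derivatives df/dx = 2x + y and df/dy = x + 2y.

```python
# One gradient-descent step on f(x, y) = x^2 + x*y + y^2.
x, y, lr = 1.3, 5.4, 0.01

gx = 2 * x + y   # df/dx at (1.3, 5.4) = 8.0
gy = x + 2 * y   # df/dy at (1.3, 5.4) = 12.1
x, y = x - lr * gx, y - lr * gy

print(round(x, 3), round(y, 3))  # 1.22 5.279
```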