
#!/usr/bin/env python
# coding: utf-8

# In[173]:

# Objective: To implement and evaluate a neural network for digit
# classification using the MNIST dataset, enabling the network to recognize
# handwritten digits from 0 to 9.

# In[174]:

'''
Theory:

1. Introduction to MNIST Dataset:
The MNIST (Modified National Institute of Standards and Technology) dataset
is a widely used benchmark for image classification tasks. It consists of
60,000 training images and 10,000 test images, each of size 28x28 pixels,
representing grayscale handwritten digits (0-9).

2. Neural Network Overview:
Neural networks are computational models inspired by the human brain. They
consist of layers of interconnected nodes (neurons) that process input data
to perform tasks like classification.

A basic neural network for image classification consists of the following
layers:
Input Layer: Accepts the flattened input data (e.g., 28x28 images become
784 features).
Hidden Layers: Perform computations using activation functions to learn
non-linear patterns.
Output Layer: Produces probabilities for each class (digit 0-9).
A small NumPy sketch of the computation performed by one such layer follows
in the next cell.'''
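
# In[ ]:

# A minimal NumPy sketch of what a single Dense layer computes, namely
# activation(x @ W + b). The shapes match the network described above, but
# the random values are purely illustrative, not part of the experiment:
import numpy as np

x = np.random.rand(784)            # one flattened 28x28 image
W = np.random.rand(784, 16)        # weights of a 16-unit hidden layer
b = np.zeros(16)                   # biases
hidden = np.maximum(0, x @ W + b)  # ReLU activation
print(hidden.shape)                # (16,)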

# In[175]:

'''
Experiment Details
Software Requirements:
Python 3.7 or higher
TensorFlow 2.x
NumPy
Scikit-learn
Matplotlib
Seaborn
'''

# In[176]:

# Procedure
# Step 1: Import necessary libraries
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model  # for visualizing the model
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
from tensorflow.keras import Input

# In[177]:

#Step 2: Load the MNIST Dataset

(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
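
# In[ ]:

# A quick sanity check on the loaded arrays (a small sketch; the MNIST split
# described above gives 60,000 training and 10,000 test images of 28x28 pixels):
print(X_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(X_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)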

# In[178]:

image_no = 10
plt.matshow(X_train[image_no])
print('label:', y_train[image_no])

# In[179]:

# Flatten each two-dimensional 28x28 image into a one-dimensional vector of 784 features

X_train_flattened = X_train.reshape(len(X_train), 28*28)
X_test_flattened = X_test.reshape(len(X_test), 28*28)

# In[180]:

# Standardize the dataset

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_flattened)
X_test_scaled = scaler.transform(X_test_flattened)
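
# In[ ]:

# StandardScaler rescales each of the 784 pixel positions to zero mean and
# unit variance. A common, simpler alternative for MNIST (shown here only for
# comparison, not used in this experiment) is to map raw intensities to [0, 1]:
# X_train_alt = X_train_flattened / 255.0
# X_test_alt = X_test_flattened / 255.0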

# In[181]:

#Step3: Define the Model

# Build a shallow neural network
model = Sequential([
    Input(shape=(X_train_scaled.shape[1],)),  # input layer: 784 features per image
    Dense(16, activation='relu'),             # hidden layer
    Dense(10, activation='softmax')           # output layer: one probability per digit (0-9)
])
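
# In[ ]:

# model.summary() reports the layer shapes and parameter counts. For this
# architecture: 784*16 + 16 = 12,560 parameters in the hidden layer and
# 16*10 + 10 = 170 in the output layer:
model.summary()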


# In[182]:

#Step4: Compile the Model

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
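
# In[ ]:

# sparse_categorical_crossentropy accepts the integer labels (0-9) directly.
# If the labels were one-hot encoded instead, the equivalent setup (shown only
# for comparison, not used in this experiment) would be:
# y_train_onehot = keras.utils.to_categorical(y_train, num_classes=10)
# model.compile(optimizer='adam', loss='categorical_crossentropy',
#               metrics=['accuracy'])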

# In[183]:

# Train the model

history = model.fit(X_train_scaled, y_train, epochs=5)
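
# In[ ]:

# The History object returned by fit() records per-epoch metrics. A minimal
# sketch for plotting them (the key names match the loss and metric compiled above):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.show()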

# In[184]:

#Step 5: Evaluate the model


loss, accuracy = model.evaluate(X_test_scaled, y_test)
print(f"Test Accuracy: {accuracy:.2f}")

# In[185]:

# Predictions
y_pred = model.predict(X_test_scaled)

# In[186]:

image_no = 15

plt.matshow(X_test[image_no])
print('label:', y_test[image_no])
print('predicted:', np.argmax(y_pred[image_no]))


# In[188]:

# Convert each probability vector into its predicted class label
y_predicted_labels = [np.argmax(i) for i in y_pred]

print(y_predicted_labels[:5])
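
# In[ ]:

# accuracy_score (imported above) expects class labels rather than probability
# vectors, so it is applied to the argmax-ed predictions:
print("Accuracy Score:", accuracy_score(y_test, y_predicted_labels))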

# In[189]:

# Analyze the Results

cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels)
print(cm)

# In[190]:

import seaborn as sn
plt.figure(figsize=(10, 7))
sn.heatmap(cm, annot=True, fmt='d')
plt.xlabel('Predicted')
plt.ylabel('Truth')
plt.show()
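
# In[ ]:

# Each diagonal entry of the confusion matrix counts correct predictions for one
# digit; dividing by the row sums gives per-class recall (a small sketch built
# on the cm tensor computed above):
cm_np = cm.numpy()
print(cm_np.diagonal() / cm_np.sum(axis=1))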

# In[191]:

'''
Conclusions:
By completing this experiment, students will gain hands-on experience in
building, training, and evaluating a neural network for digit classification
using the MNIST dataset. They will understand the impact of various components
such as activation functions, loss functions, and optimizers on the model's
performance.'''

# In[192]:

# Some Important Questions


'''Questions:

Pre-Lab Questions:
What are the key features of the MNIST dataset?
Why is it important to normalize pixel values in image data?
Explain the purpose of the activation function in neural networks.

In-Lab Questions:
What is the significance of the softmax activation function in the output layer?
How does the choice of optimizer affect the model's training process?
What do accuracy and loss values represent during training?

Post-Lab Questions:
How does the model perform on test data compared to training data? Explain why there may be differences.
What are some potential improvements to the neural network architecture for better performance?
How would you modify the network to handle RGB images instead of grayscale?'''
