
ITM (SLS) Baroda University
B.Tech – Semester VII
Deep Learning (C2720C2) Lab Manual


Practical-1

Classification with a Multilayer Perceptron using scikit-learn (MNIST dataset).


The MNIST dataset consists of handwritten digits, and the task is to classify these digits into
their respective classes (0-9).

Import Libraries:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

Load and Prepare Data:

# Load the MNIST dataset (70,000 flattened 28x28 grayscale digit images);
# as_frame=False returns NumPy arrays rather than a pandas DataFrame
mnist = fetch_openml('mnist_784', version=1, as_frame=False)

# Split data into features and labels
X = mnist.data
y = mnist.target.astype(int)

# Normalize the pixel values to be in the range [0, 1]
X = X / 255.0

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Create and Train MLP Classifier:

# Create an MLP classifier with two hidden layers (128 and 64 units).
# max_iter=100 caps training at 100 epochs; scikit-learn emits a
# ConvergenceWarning if the loss has not converged by then.
mlp_classifier = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=100, random_state=42)

# Train the classifier
mlp_classifier.fit(X_train, y_train)

Make Predictions:

# Predict on the test data
y_pred = mlp_classifier.predict(X_test)

Evaluate the Model:

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Print per-class precision, recall, and F1 scores
report = classification_report(y_test, y_pred)
print("Classification Report:\n", report)

In this example, we created a Multilayer Perceptron classifier with two hidden layers (128
and 64 units) and trained it on the MNIST dataset. We then made predictions on the test data and
evaluated the model's performance using accuracy and a classification report.

Practical-2

Hyperparameter Tuning in a Multilayer Perceptron


Hyperparameter tuning is an important step to optimize the performance of a Multilayer Perceptron
(MLP) model. Scikit-learn provides tools that can help you perform hyperparameter tuning
effectively.

Import Libraries:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import accuracy_score, classification_report

Load and Prepare Data:

# Load and prepare the MNIST dataset, as in Practical-1
mnist = fetch_openml('mnist_784', version=1, as_frame=False)

# Split data into features and labels
X = mnist.data
y = mnist.target.astype(int)

# Normalize the pixel values to be in the range [0, 1]
X = X / 255.0

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Hyperparameter Tuning:

We can use techniques like GridSearchCV or RandomizedSearchCV to search for the best
combination of hyperparameters.

Grid Search:

# Define the parameter grid to search
param_grid = {
    'hidden_layer_sizes': [(128,), (256,), (128, 64), (256, 128)],
    'activation': ['relu', 'tanh'],
    'alpha': [0.0001, 0.001, 0.01],
    'max_iter': [100, 200, 300]
}

# Create an MLP Classifier
mlp_classifier = MLPClassifier(random_state=42)

# Perform grid search (cv=3 folds; n_jobs=-1 uses all CPU cores).
# Note: this trains an MLP for every parameter combination and fold,
# so it can take a long time on the full MNIST training set.
grid_search = GridSearchCV(mlp_classifier, param_grid, cv=3, n_jobs=-1)
grid_search.fit(X_train, y_train)

# Get the best parameters
best_params = grid_search.best_params_
print("Best Parameters:", best_params)

# GridSearchCV refits the best estimator on the full training set by default
# (refit=True), so best_estimator_ is ready to use without calling fit again
best_model = grid_search.best_estimator_

Make Predictions and Evaluate:

# Make predictions
y_pred = best_model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

report = classification_report(y_test, y_pred)
print("Classification Report:\n", report)

We used GridSearchCV to search for the best hyperparameters for the MLP model. We can also
use RandomizedSearchCV for a randomized search over the hyperparameter space, which can be
more efficient when the search space is large; a sketch follows.
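
As a minimal sketch (not part of the original manual), RandomizedSearchCV can reuse the same param_grid; when plain lists are given, each candidate value is sampled uniformly at random. The n_iter value below is an illustrative choice:

from sklearn.model_selection import RandomizedSearchCV

# Sample a fixed number of random parameter combinations instead of trying all of them
random_search = RandomizedSearchCV(
    MLPClassifier(random_state=42),
    param_distributions=param_grid,  # lists are sampled uniformly at random
    n_iter=5,                        # illustrative: evaluate 5 sampled combinations
    cv=3,
    n_jobs=-1,
    random_state=42,
)
random_search.fit(X_train, y_train)
print("Best Parameters:", random_search.best_params_)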

Practical-3

Deep Learning Package Basics: TensorFlow, Keras, Theano, and PyTorch.


TensorFlow:

• Developed by Google Brain.
• Open-source deep learning framework.
• Provides a flexible and comprehensive ecosystem for building and deploying various machine learning models, including neural networks.
• Key features include automatic differentiation, GPU/CPU acceleration, and support for distributed computing.
• TensorFlow 2.0 made "eager execution" the default mode for more intuitive model development (see the sketch after this list).
• Supports both high-level APIs for quick model development and low-level APIs for advanced customization.
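
As a minimal sketch of eager execution and automatic differentiation in TensorFlow 2.x (the function and values are illustrative, not from the manual):

import tensorflow as tf

# Eager execution (the default in TF 2.x) evaluates operations immediately
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2 * x  # y = x^2 + 2x

# Automatic differentiation: dy/dx = 2x + 2 = 8 at x = 3
print(tape.gradient(y, x).numpy())  # 8.0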

Keras:

• Originally developed as an independent deep learning library, but later integrated into TensorFlow as its official high-level API.
• Designed to be user-friendly and easy to use, particularly for rapid prototyping.
• Provides a simple and intuitive interface to define and train neural networks.
• Allows for both sequential and functional model building (see the sketch after this list).
• Aims to minimize boilerplate code and streamline the process of creating and training deep learning models.
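
As a minimal sketch of the sequential style, here is an MLP mirroring the architecture from Practical-1 (the layer sizes and optimizer are illustrative choices):

import tensorflow as tf
from tensorflow import keras

# A small MLP for 784-feature inputs (e.g. flattened MNIST digits)
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),  # one output per digit class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # prints the layer-by-layer parameter counts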

Theano:

• Developed by the Montreal Institute for Learning Algorithms (MILA) at the University of Montreal.
• Focused on optimizing mathematical computations, making it efficient for deep learning tasks.
• Allowed users to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
• Was one of the early deep learning frameworks but has since been largely superseded by TensorFlow, PyTorch, and others.

PyTorch:

• Developed by Facebook's AI Research lab (FAIR).
• Open-source deep learning framework that gained popularity for its dynamic computational graph and intuitive design.
• Allows users to define and modify computation graphs on-the-fly, making it suitable for dynamic and complex architectures.
• PyTorch's "autograd" feature enables automatic differentiation (see the sketch after this list).
• Provides a "Tensor" class similar to NumPy arrays and supports GPU acceleration.
• Known for its strong community support, comprehensive documentation, and use in cutting-edge research.
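
A minimal sketch of autograd, using the same illustrative function as the TensorFlow example above:

import torch

# Tensors created with requires_grad=True record operations for automatic differentiation
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x  # y = x^2 + 2x

# backward() populates x.grad with dy/dx = 2x + 2 = 8 at x = 3
y.backward()
print(x.grad)  # tensor(8.)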
