
ADVANCED AI BOOTCAMP

−Computer Vision−

Week 04: Computer Vision

Lab 02: TRANSFER LEARNING

Date: Monday, 10 October 2023

Time: 1200 - 1300 and 1400 - 1700

Course Instructor: Dr. Junaid Younas

Lab Engineer: Haider Ali


Lab 2: TRANSFER LEARNING (GRADED)

In this lab, we'll delve into the core concepts of transfer learning and its practical applications in
the realm of deep learning. Transfer learning, a powerful technique in machine learning, allows
us to leverage pre-trained models and adapt them to new tasks, saving time and resources while
achieving excellent results. Throughout this lab, you will engage in a series of exercises that
explore the principles and methodologies of transfer learning, covering topics such as:

● Leveraging Pre-trained Models for Custom Tasks
● Fine-tuning Model Architectures for Specific Domains
● Optimizing Transfer Learning for Enhanced Performance
● Real-world Applications of Transfer Learning in Computer Vision

Problem Statement: Develop a deep learning model capable of accurately classifying images
into one of ten food categories. This task requires a pre-trained VGG architecture together with
the application of transfer learning and fine-tuning techniques.

Objectives:

1. Foundational Knowledge: Develop a deep understanding of transfer learning principles
   and the significance of pre-trained models like VGG16 in deep learning.
2. Practical Application: Apply transfer learning techniques to fine-tune a pre-trained
   VGG16 architecture for a specific classification task.
3. Performance Assessment: Evaluate the performance of the fine-tuned model and
   explore the advantages and limitations of transfer learning in solving real-world tasks.

Tools/Software Requirements:

Google Colab: Ensure you have access to Google Colab for this exercise. Make sure to change
the runtime type to GPU from the menu to leverage GPU acceleration for faster training and
execution.

Problem: Fine-tuning a Pre-trained VGG16 Model for Multiclass Food Classification

Introduction:

Transfer learning is a fundamental technique in deep learning, utilizing pre-trained models to
enhance performance on new tasks. In this exercise, we focus on fine-tuning the VGG16
architecture for a specific classification task, capitalizing on its learned features for improved
model performance. This exercise provides both theoretical understanding and practical skills in
transfer learning and fine-tuning, valuable for real-world applications.
lab-2-1-1

October 10, 2023

[3]: import matplotlib.pyplot as plt
     import numpy as np
     import os
     import tensorflow as tf
     from keras.preprocessing.image import ImageDataGenerator
     from keras.applications.vgg16 import VGG16, preprocess_input
     from keras.optimizers import Adam
     from keras.models import Model
     from keras.callbacks import EarlyStopping

0.0.1 Data download


[4]: import os
     import requests
     import zipfile

     def download_and_unzip_data(url, destination_folder):
         # Create the destination folder if it doesn't exist
         os.makedirs(destination_folder, exist_ok=True)

         # Extract the filename from the URL
         file_name = os.path.basename(url)

         # Define the path to save the downloaded zip file
         save_path = os.path.join(destination_folder, file_name)

         try:
             # Send an HTTP GET request to download the zip file
             response = requests.get(url, stream=True)
             response.raise_for_status()  # Raise an exception if the request was not successful

             # Save the downloaded zip file
             with open(save_path, 'wb') as file:
                 for chunk in response.iter_content(chunk_size=8192):
                     file.write(chunk)

             # Unzip the downloaded file
             with zipfile.ZipFile(save_path, 'r') as zip_ref:
                 zip_ref.extractall(destination_folder)

             print(f"File downloaded and unzipped to {destination_folder}")

         except requests.exceptions.RequestException as e:
             print(f"Failed to download the file: {str(e)}")

[5]: url = "https://storage.googleapis.com/ztm_tf_course/food_vision/


↪101_food_classes_10_percent.zip"

destination_folder = "" # TODO: Change this to your desired destination folder


# TODO: Call download_and_unzip_data and download the data
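A minimal completion sketch for the cell above; the folder name "food_data" is just an example, any writable path works:

[ ]: # Sketch: the destination folder name is an example choice
     destination_folder = "food_data"
     download_and_unzip_data(url, destination_folder)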

0.0.2 IMAGE PREPROCESSING and Data Augmentation

[6]: train_dir = ''  # TODO: Change the directory to the train folder
     test_dir = ''   # TODO: Change the directory to the test folder

     # Make sure to give the path of a folder, not a file
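One possible way to set these paths, assuming the data was unzipped into the destination folder chosen above (the zip extracts into a 101_food_classes_10_percent directory with train and test subfolders):

[ ]: # Sketch: paths assume the example destination_folder used earlier
     train_dir = os.path.join(destination_folder, '101_food_classes_10_percent/train')
     test_dir = os.path.join(destination_folder, '101_food_classes_10_percent/test')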

[7]: import os

     # Function to count files in a directory
     def count_files_in_directory(directory):
         count = 0
         for root, dirs, files in os.walk(directory):
             count += len(files)
         return count

     # Count the number of images in the train and test sets
     num_train_images = count_files_in_directory(train_dir)
     num_test_images = count_files_in_directory(test_dir)

     # Print the counts
     print(f"Number of images in the train set: {num_train_images}")
     print(f"Number of images in the test set: {num_test_images}")

Number of images in the train set: 0
Number of images in the test set: 0

[8]: train_datagen = ImageDataGenerator(
         # TODO: Rotation range -> 40,
         # TODO: Width shift range -> 0.2,
         # TODO: Height shift range -> 0.2,
         shear_range=0.2,
         zoom_range=0.2,
         fill_mode='nearest',
         validation_split=0.2)  # set validation split
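One possible way to fill in the augmentation TODOs, using the values hinted in the comments (40, 0.2, 0.2) and leaving the remaining arguments unchanged:

[ ]: # Sketch: augmentation values follow the TODO hints in the previous cell
     train_datagen = ImageDataGenerator(
         rotation_range=40,
         width_shift_range=0.2,
         height_shift_range=0.2,
         shear_range=0.2,
         zoom_range=0.2,
         fill_mode='nearest',
         validation_split=0.2)  # reserve 20% of the training images for validation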

[12]: class_subset = sorted(os.listdir(os.path.join(destination_folder, '101_food_classes_10_percent/train')))[:10]

      BATCH_SIZE =   # TODO: Define batch size of your choice
      IMG_SHAPE = (224, 224, 3)

      traingen = train_datagen.flow_from_directory(train_dir,
                                                   # target_size=  # TODO: Define image size,
                                                   class_mode='categorical',
                                                   classes=class_subset,
                                                   subset='training',
                                                   # TODO: BATCH SIZE,
                                                   # TODO: Shuffle dataset,
                                                   # TODO: Set the seed
                                                   )
      # TODO: Define validgen for the validation set. Hint: traingen

      testgen = ImageDataGenerator().flow_from_directory(test_dir,
                                                         # target_size=  # TODO: Define image size,
                                                         class_mode=None,
                                                         classes=class_subset,
                                                         batch_size=1,
                                                         shuffle=False,
                                                         seed=42)
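A sketch of how the generator TODOs could be completed; the batch size of 32, the 224x224 target size (matching IMG_SHAPE), and the seed of 42 are our own example choices:

[ ]: # Sketch: BATCH_SIZE, target_size and seed here are example values, not requirements
     BATCH_SIZE = 32

     traingen = train_datagen.flow_from_directory(train_dir,
                                                  target_size=(224, 224),
                                                  class_mode='categorical',
                                                  classes=class_subset,
                                                  subset='training',
                                                  batch_size=BATCH_SIZE,
                                                  shuffle=True,
                                                  seed=42)

     validgen = train_datagen.flow_from_directory(train_dir,
                                                  target_size=(224, 224),
                                                  class_mode='categorical',
                                                  classes=class_subset,
                                                  subset='validation',
                                                  batch_size=BATCH_SIZE,
                                                  shuffle=True,
                                                  seed=42)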

0.0.3 Preprocess VGG16

[ ]: # preprocess_input = # TODO: Initialize the VGG16 preprocessor. Hint: check the imported libraries

[ ]: # TODO: Download weights of VGG16 model.
     # base_model = VGG16(...)

     # TODO: Freeze the trainable parameters
     # base_model.trainable = # TODO: Freeze the weights

     image_batch, label_batch = next(iter(traingen))
     feature_batch = base_model(image_batch)
     print(feature_batch.shape)
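A sketch of how the VGG16 base could be set up; preprocess_input is already imported from keras.applications.vgg16 at the top of the notebook, so only the base model needs to be created and frozen here:

[ ]: # Sketch: load ImageNet weights without the classifier head, then freeze the convolutional base
     base_model = VGG16(weights='imagenet', include_top=False, input_shape=IMG_SHAPE)
     base_model.trainable = False

With 224x224 inputs, the feature_batch printed above should have shape (BATCH_SIZE, 7, 7, 512).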

[ ]: # Let's take a look at the base model architecture
     # TODO: Print the architecture of the base model and get the intuition
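The TODO above can be resolved with a single call, for example:

[ ]: base_model.summary()  # prints the VGG16 convolutional stack layer by layer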
0.0.4 Architecture
[ ]: inputs = tf.keras.Input(shape=IMG_SHAPE)
     # x = # TODO: Preprocess the input by VGG16 Preprocessor
     # x = # TODO: Initialize the base_model and pass the preprocessed data
     # x = tf.keras.layers. # TODO: Flatten the data
     # x = # TODO: Apply Dense layer with 4096 neurons and relu activation
     # x = # TODO: Apply Dense layer with 1072 neurons and relu activation
     # x = # TODO: Apply Dropout layer with 0.2 for dropout value
     # outputs = # TODO: Apply softmax for prediction. Hint: what should be the number of neurons in it?

     # NOTE: Sequential, .add(...) will not work. We need a fully connected head built with the
     # functional API. Hint: layer()(input)

     model = tf.keras.Model(inputs, outputs)
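A sketch of one way to assemble the head with the functional API; layer sizes follow the TODO hints above, and training=False keeps the frozen base in inference mode:

[ ]: # Sketch: functional-API head following the hints above (4096 -> 1072 -> dropout 0.2 -> softmax)
     inputs = tf.keras.Input(shape=IMG_SHAPE)
     x = preprocess_input(inputs)            # VGG16 preprocessing
     x = base_model(x, training=False)       # frozen convolutional base
     x = tf.keras.layers.Flatten()(x)
     x = tf.keras.layers.Dense(4096, activation='relu')(x)
     x = tf.keras.layers.Dense(1072, activation='relu')(x)
     x = tf.keras.layers.Dropout(0.2)(x)
     outputs = tf.keras.layers.Dense(len(class_subset), activation='softmax')(x)  # one neuron per class

     model = tf.keras.Model(inputs, outputs)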

[ ]: model.summary()

[ ]: # Create an instance of the Adam optimizer with a custom learning rate
     optimizer = Adam(learning_rate=0.001)

     # Compile your model with the custom optimizer, categorical cross entropy loss and accuracy metric
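The compile step described in the comment above might look like this:

[ ]: # Sketch: compile with the custom Adam optimizer, categorical cross-entropy and accuracy
     model.compile(optimizer=optimizer,
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])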

[ ]: # EarlyStopping
early_stop = EarlyStopping(monitor='val_loss',
patience=10,
restore_best_weights=True,
mode='min')

[ ]: # n_epochs = # TODO: DEFINE NO OF EPOCHS

vgg_history = model.fit(traingen,
batch_size=BATCH_SIZE,
epochs=n_epochs,
validation_data=validgen,
callbacks=[early_stop],
verbose=1)
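Note that n_epochs must be defined before running the cell above; the value below is only an example, since EarlyStopping will stop training earlier if the validation loss stops improving:

[ ]: # Sketch: epoch count is an example value, not a requirement
     n_epochs = 30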

[ ]: import matplotlib.pyplot as plt

     def plot_training_history(history):
         """
         Plots the training and validation accuracy and loss curves from a training history object.

         Parameters:
         - history: A training history object containing 'accuracy', 'loss', 'val_accuracy', and 'val_loss' values.
         """
         # Plot training & validation accuracy values
         plt.figure(figsize=(12, 6))
         plt.subplot(1, 2, 1)
         plt.plot(history.history['accuracy'])
         plt.plot(history.history['val_accuracy'])
         plt.title('Model Accuracy')
         plt.xlabel('Epoch')
         plt.ylabel('Accuracy')
         plt.legend(['Train', 'Validation'], loc='upper left')

         # Plot training & validation loss values
         plt.subplot(1, 2, 2)
         plt.plot(history.history['loss'])
         plt.plot(history.history['val_loss'])
         plt.title('Model Loss')
         plt.xlabel('Epoch')
         plt.ylabel('Loss')
         plt.legend(['Train', 'Validation'], loc='upper left')

         plt.tight_layout()
         plt.show()

[ ]: plot_training_history(vgg_history)

[ ]: # Generate predictions
     # vgg_model.load_weights('tl_model_v1.weights.best.hdf5') # initialize the best trained weights

     true_classes = testgen.classes
     class_indices = traingen.class_indices
     class_indices = dict((v, k) for k, v in class_indices.items())

     # vgg_preds = # TODO: Make prediction on test dataset
     vgg_pred_classes = np.argmax(vgg_preds, axis=1)
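A sketch of the prediction TODO; the head built earlier was stored in the variable model, so that is what predict is called on here:

[ ]: # Sketch: predict class probabilities on the test generator, then take the argmax per image
     vgg_preds = model.predict(testgen)
     vgg_pred_classes = np.argmax(vgg_preds, axis=1)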

[ ]: from sklearn.metrics import accuracy_score

     vgg_acc = accuracy_score(true_classes, vgg_pred_classes)
     print("VGG16 Model Accuracy without Fine-Tuning: {:.2f}%".format(vgg_acc * 100))

0.0.5 Fine Tuning

[ ]: # Reset our image data generators
     traingen.reset()
     validgen.reset()
     testgen.reset()

     # TODO: Make the model weights trainable

     # Let's take a look to see how many layers are in the base model
     print("Number of layers in the base model: ", len(base_model.layers))

[ ]: # Fine-tune from this layer onwards
     # fine_tune_at = # TODO: Define after how many layers we need to make weights trainable

     # Freeze all the layers before the `fine_tune_at` layer
     for layer in base_model.layers[:fine_tune_at]:
         layer.trainable = False
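One possible value for fine_tune_at; with include_top=False, VGG16 exposes 19 layers, so a cut-off of 15 leaves only the final (block5) convolutions trainable. After changing layer trainability, recompile the model before fitting again, as shown in the next sketch.

[ ]: # Sketch: 15 is an example cut-off; the loop in the previous cell then freezes layers 0-14
     fine_tune_at = 15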

[ ]: # Retrain model with fine-tuning
     # TODO: Fit the fine-tuned model on the train and validation sets, and apply early stopping
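A sketch of the fine-tuning run; the lower learning rate is our own choice to avoid overwriting the pretrained weights, vgg_ft_history matches the plotting cell below, and the vgg_model_ft alias matches the prediction cell further down:

[ ]: # Sketch: recompile with a smaller learning rate, then continue training with early stopping
     model.compile(optimizer=Adam(learning_rate=1e-4),
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])

     vgg_ft_history = model.fit(traingen,
                                batch_size=BATCH_SIZE,
                                epochs=n_epochs,
                                validation_data=validgen,
                                callbacks=[early_stop],
                                verbose=1)

     vgg_model_ft = model  # convenience alias used by the prediction cell below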

[ ]: plot_training_history(vgg_ft_history)

[ ]: # Generate predictions
     # vgg_model_ft.load_weights('tl_model_v1_ft.weights.best.hdf5') # initialize the best trained weights

     vgg_preds_ft = vgg_model_ft.predict(testgen)
     # vgg_pred_classes_ft = # TODO: Make prediction on fine-tuned model
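The remaining TODO converts the fine-tuned probabilities into class indices:

[ ]: # Sketch: argmax over the class axis gives the predicted class index per test image
     vgg_pred_classes_ft = np.argmax(vgg_preds_ft, axis=1)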

[ ]: vgg_acc_ft = accuracy_score(true_classes, vgg_pred_classes_ft)
     print("VGG16 Model Accuracy with Fine-Tuning: {:.2f}%".format(vgg_acc_ft * 100))

[ ]: import seaborn as sns
     from sklearn.metrics import confusion_matrix

     # Get the names of the ten classes
     class_names = testgen.class_indices.keys()

     def plot_heatmap(y_true, y_pred, class_names, ax, title):
         cm = confusion_matrix(y_true, y_pred)
         sns.heatmap(
             cm,
             annot=True,
             square=True,
             xticklabels=class_names,
             yticklabels=class_names,
             fmt='d',
             cmap=plt.cm.Blues,
             cbar=False,
             ax=ax
         )
         ax.set_title(title, fontsize=16)
         ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha="right")
         ax.set_ylabel('True Label', fontsize=12)
         ax.set_xlabel('Predicted Label', fontsize=12)

     fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))

     plot_heatmap(true_classes, vgg_pred_classes, class_names, ax1,
                  title="Transfer Learning (VGG16) No Fine-Tuning")
     plot_heatmap(true_classes, vgg_pred_classes_ft, class_names, ax2,
                  title="Transfer Learning (VGG16) with Fine-Tuning")

     fig.tight_layout()
     fig.subplots_adjust(top=1.25)
     plt.show()

[ ]: # Plot the predictions with labels
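A sketch of one way to plot a few test images with their true and predicted labels; the figure layout and the choice of eight images are our own, and it relies on testgen having batch_size=1 and shuffle=False as set above:

[ ]: # Sketch: show the first eight test images with true vs. predicted (fine-tuned) labels
     plt.figure(figsize=(16, 8))
     for i in range(8):
         img = testgen[i][0]                     # batch_size=1, so index 0 is the image
         plt.subplot(2, 4, i + 1)
         plt.imshow(img.astype('uint8'))
         true_label = class_indices[true_classes[i]]
         pred_label = class_indices[vgg_pred_classes_ft[i]]
         plt.title(f"True: {true_label}\nPred: {pred_label}", fontsize=10)
         plt.axis('off')
     plt.tight_layout()
     plt.show()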

Lab 2: TRANSFER LEARNING (UNGRADED)

Optional: This part is ungraded; you can attempt it in the lab, attempt it at home, or leave it
unattempted. We highly encourage you to give it a try.

Problem: Use transfer learning for large image classification, going through these steps:

- Create a training set containing at least 100 images per class. For example, you could
classify your own pictures by where they were taken, or use an existing dataset from
TensorFlow Datasets.
- Split it into a training set, a validation set, and a test set.
- Build the input pipeline, apply the appropriate preprocessing operations, and optionally
add data augmentation.
- Fine-tune a pretrained model on this dataset.

---- GOOD LUCK ----
