LAB 2 Transfer Learning
−Computer Vision−
Time:
1200 - 1300
and
1400 - 1700
In this lab, we'll delve into the core concepts of transfer learning and its practical applications in
the realm of deep learning. Transfer learning, a powerful technique in machine learning, allows
us to leverage pre-trained models and adapt them to new tasks, saving time and resources while
achieving excellent results. Throughout this lab, you will engage in a series of exercises that
explore the principles and methodologies of transfer learning, covering topics such as:
Problem Statement: Develop a deep learning model capable of accurately classifying images
into one of ten food categories. This task requires using a pre-trained VGG architecture and
applying transfer learning and fine-tuning techniques.
Objectives:
Tools/Software Requirements:
Google Colab: Ensure you have access to Google Colab for this exercise. Make sure to change
the runtime type to GPU from the menu to leverage GPU acceleration for faster training and
execution.
Introduction:
# url, save_path, and destination_folder are assumed to be defined in earlier cells
import requests
import zipfile

try:
    # Send an HTTP GET request to download the zip file
    response = requests.get(url, stream=True)
    response.raise_for_status()  # Raise an exception if the request was not successful

    # Save the downloaded content to disk before unzipping
    with open(save_path, 'wb') as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)

    # Unzip the downloaded file
    with zipfile.ZipFile(save_path, 'r') as zip_ref:
        zip_ref.extractall(destination_folder)
except requests.exceptions.RequestException as e:
    print(f"Failed to download the file: {str(e)}")
[7]: import os
fill_mode='nearest',
validation_split=0.2)  # set validation split

class_mode='categorical',
classes=class_subset,
subset='training',
# TODO: Set the batch size,
# TODO: Shuffle the dataset,
# TODO: Set the seed
)

# TODO: Define validgen for the validation set. Hint: traingen
testgen = ImageDataGenerator().flow_from_directory(test_dir,
                                                   # target_size=  # TODO: Define image size,
                                                   class_mode=None,
                                                   classes=class_subset,
                                                   batch_size=1,
                                                   shuffle=False,
                                                   seed=42)
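A minimal sketch of how the generator TODOs above might be completed. Here train_datagen stands for the ImageDataGenerator object created in this cell and train_dir for the training directory; the image size and batch size are assumptions, not prescribed values:

IMG_SIZE = (224, 224)   # assumed VGG16 input size
BATCH_SIZE = 32         # assumed batch size

traingen = train_datagen.flow_from_directory(train_dir,
                                             target_size=IMG_SIZE,
                                             class_mode='categorical',
                                             classes=class_subset,
                                             subset='training',
                                             batch_size=BATCH_SIZE,
                                             shuffle=True,
                                             seed=42)

validgen = train_datagen.flow_from_directory(train_dir,
                                             target_size=IMG_SIZE,
                                             class_mode='categorical',
                                             classes=class_subset,
                                             subset='validation',  # validation split of the same directory
                                             batch_size=BATCH_SIZE,
                                             shuffle=True,
                                             seed=42)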
# base_model = VGG16(...)
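The commented-out line above hints at the base model. One way it could be instantiated for feature extraction (ImageNet weights, no classification head, frozen convolutional base); the exact arguments are a suggestion, not the lab's required configuration:

from tensorflow.keras.applications import VGG16

base_model = VGG16(weights='imagenet',      # start from ImageNet-pretrained weights
                   include_top=False,       # drop the original classifier head
                   input_shape=IMG_SHAPE)   # IMG_SHAPE as defined in this notebook
base_model.trainable = False                # freeze the convolutional base for transfer learning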
0.0.4 Architecture
[ ]: inputs = tf.keras.Input(shape=IMG_SHAPE)
# x = # TODO: Preprocess the input with the VGG16 preprocessor
# x = # TODO: Initialize the base_model and pass the preprocessed data through it
# x = tf.keras.layers. # TODO: Flatten the data
# x = # TODO: Apply a Dense layer with 4096 neurons and relu activation
# x = # TODO: Apply a Dense layer with 1072 neurons and relu activation
# x = # TODO: Apply a Dropout layer with a dropout rate of 0.2
# x = # TODO: Apply softmax for prediction. Hint: what should the number of neurons be?
# NOTE: Sequential with .add(...) will not work here; build the model with the functional API. Hint: layer(...)(inputs)
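A possible completion of the architecture TODOs above, written with the Keras functional API as the hint suggests. The layer sizes follow the TODO comments; len(class_subset) is taken from the generator cells above and should be adjusted if your classes are defined differently. Treat this as a sketch rather than the reference solution:

from tensorflow.keras.applications.vgg16 import preprocess_input

x = preprocess_input(inputs)                              # VGG16 preprocessing
x = base_model(x, training=False)                         # frozen convolutional base
x = tf.keras.layers.Flatten()(x)                          # flatten the feature maps
x = tf.keras.layers.Dense(4096, activation='relu')(x)
x = tf.keras.layers.Dense(1072, activation='relu')(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(len(class_subset), activation='softmax')(x)  # one neuron per class

model = tf.keras.Model(inputs, outputs)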
[ ]: model.summary()
# Compile your model with the custom optimizer, categorical cross-entropy loss, and accuracy metric
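For example, a sketch of the compile step. The lab refers to a "custom optimizer" without specifying it here, so the Adam optimizer and learning rate below are assumptions:

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)  # assumed optimizer and learning rate
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])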
[ ]: # EarlyStopping
early_stop = EarlyStopping(monitor='val_loss',
patience=10,
restore_best_weights=True,
mode='min')
vgg_history = model.fit(traingen,
batch_size=BATCH_SIZE,
epochs=n_epochs,
validation_data=validgen,
callbacks=[early_stop],
verbose=1)
def plot_training_history(history):
    """
    Plots the training and validation accuracy and loss curves from a training history object.

    Parameters:
    - history: A training history object containing 'accuracy', 'loss', 'val_accuracy',
      and 'val_loss' values.
    """
    plt.figure(figsize=(12, 6))

    # Plot training & validation accuracy values
    plt.subplot(1, 2, 1)
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend(['Train', 'Validation'], loc='upper left')

    # Plot training & validation loss values (second panel of the 1x2 grid)
    plt.subplot(1, 2, 2)
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['Train', 'Validation'], loc='upper left')

    plt.tight_layout()
    plt.show()
[ ]: plot_training_history(vgg_history)
[ ]: # Generate predictions
# vgg_model.load_weights('tl_model_v1.weights.best.hdf5')  # initialize the best trained weights
true_classes = testgen.classes
class_indices = traingen.class_indices
class_indices = dict((v,k) for k,v in class_indices.items())
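A minimal sketch of how the predictions themselves could be generated from the test generator. It uses the model variable from the fit cell above; the names vgg_preds and vgg_pred_classes mirror the fine-tuned counterparts used later in this notebook and are otherwise assumptions:

import numpy as np

vgg_preds = model.predict(testgen)                 # class probabilities, one row per test image
vgg_pred_classes = np.argmax(vgg_preds, axis=1)    # predicted class index per image
vgg_pred_labels = [class_indices[i] for i in vgg_pred_classes]  # map indices back to class names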
0.0.5 Fine Tuning
# Let's take a look to see how many layers are in the base model
print("Number of layers in the model: ", len(model.layers))
[ ]: plot_training_history(vgg_ft_history)
[ ]: # Generate predictions
# vgg_model_ft.load_weights('tl_model_v1_ft.weights.best.hdf5')  # initialize the best trained weights
vgg_preds_ft = vgg_model_ft.predict(testgen)
# vgg_pred_classes_ft = # TODO Make prediction on fine tuned model
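The TODO above can be completed the same way as for the base model, e.g. (a sketch, assuming numpy is imported as np as in the earlier prediction cell):

vgg_pred_classes_ft = np.argmax(vgg_preds_ft, axis=1)  # predicted class index per test image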
# cm (the confusion matrix), class_names, and title are assumed to be defined in earlier
# (elided) cells, e.g. cm = confusion_matrix(true_classes, vgg_pred_classes)
import seaborn as sns

fig, ax = plt.subplots(figsize=(8, 8))
sns.heatmap(cm,
            annot=True,          # assumed: fmt='d' below implies annotated cells
            square=True,
            xticklabels=class_names,
            yticklabels=class_names,
            fmt='d',
            cmap=plt.cm.Blues,
            cbar=False,
            ax=ax
            )
ax.set_title(title, fontsize=16)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha="right")
ax.set_ylabel('True Label', fontsize=12)
ax.set_xlabel('Predicted Label', fontsize=12)
fig.tight_layout()
fig.subplots_adjust(top=1.25)
plt.show()
Lab 2: TRANSFER LEARNING (UNGRADED)
Optional: This exercise is ungraded; you can attempt it now, attempt it from home, or leave it
unattempted. We highly encourage you to give it a try.
Problem: Use transfer learning for large image classification, going through these steps
(a minimal end-to-end sketch is given after this list):
- Create a training set containing at least 100 images per class. For example, you could
classify your own pictures by location, or you can use an existing dataset from TensorFlow
Datasets.
- Split it into a training set, a validation set, and a test set.
- Build the input pipeline, apply the appropriate preprocessing operations, and optionally
add data augmentation.
- Fine-tune a pretrained model on this dataset.
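A minimal end-to-end sketch of these steps, assuming the tf_flowers dataset from TensorFlow Datasets and a VGG16 base; every concrete choice here (dataset, image size, split percentages, optimizer, learning rates, epochs) is an assumption you are free to change:

import tensorflow as tf
import tensorflow_datasets as tfds

# 1. Load a dataset and split it into train / validation / test
(train_ds, val_ds, test_ds), info = tfds.load(
    'tf_flowers',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    as_supervised=True,
    with_info=True)
num_classes = info.features['label'].num_classes

# 2. Input pipeline: resize, preprocess, batch (data augmentation could be added here)
IMG_SIZE = (224, 224)

def prepare(image, label):
    image = tf.image.resize(image, IMG_SIZE)
    image = tf.keras.applications.vgg16.preprocess_input(image)
    return image, label

train_ds = train_ds.map(prepare).shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(prepare).batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.map(prepare).batch(32).prefetch(tf.data.AUTOTUNE)

# 3. Pretrained base + new classification head
base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation='softmax')])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# 4. Fine-tune: unfreeze the base and retrain with a much lower learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=5)
model.evaluate(test_ds)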
----- GOOD LUCK -----