Slide 1: Recognize Flowers With TensorFlow Lite On Android
• Recently, Convolutional Neural Networks (CNNs) have been shown to achieve state-of-the-art performance on various classification tasks. In this paper, we present for the first time a place recognition technique based on CNN models, combining the powerful features learnt by CNNs with a spatial and sequential filter. Applying the system to a 70 km benchmark place recognition dataset, we achieve a 75% increase in recall at 100% precision, significantly outperforming all previous state-of-the-art techniques. We also conduct a comprehensive performance comparison of the utility of features from all 21 layers for place recognition, both for the benchmark dataset and for a second dataset with more significant viewpoint changes.
• TensorFlow offers multiple levels of abstraction so you can choose the right one for your
needs. Build and train models by using the high-level Keras API, which makes getting
started with TensorFlow and machine learning easy.
• If you need more flexibility, eager execution allows for immediate iteration and intuitive
debugging. For large ML training tasks, use the Distribution Strategy API for distributed
training on different hardware configurations without changing the model definition.
• Google is quite aggressive in AI research. Over many years, Google developed an AI framework called TensorFlow and a development tool called Colaboratory. Today TensorFlow is open-sourced, and since 2017 Google has made Colaboratory free for public use. Colaboratory is now known as Google Colab or simply Colab.
• Another attractive feature that Google offers to developers is the use of GPUs. Colab supports GPU acceleration, and it is totally free. One reason for making it free to the public could be to make its software a standard in academia for teaching machine learning and data science. Google may also have the long-term goal of building a customer base for Google Cloud APIs, which are sold on a per-use basis.
• Irrespective of the reasons, the introduction of Colab has eased the learning and development of
machine learning applications.
• The name “convolutional neural network” indicates that the network employs a
mathematical operation called convolution. Convolution is a specialized kind of
linear operation. Convolutional networks are simply neural networks that use
convolution in place of general matrix multiplication in at least one of their layers.
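To make the definition above concrete, here is a minimal NumPy sketch of a "valid" 2-D convolution (strictly, cross-correlation, which is what CNN layers compute): each output value is a dot product between the kernel and one image patch, i.e. a linear operation standing in for a full matrix multiplication. The `conv2d` helper and the averaging kernel are illustrative choices, not part of any library API.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN layers) of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the dot product of the kernel with one
            # image patch: a specialized linear operation.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0      # simple 3x3 averaging filter
print(conv2d(image, kernel))        # 2x2 output of local averages
```

Because the same small kernel slides over every position, a convolutional layer needs far fewer parameters than the dense matrix multiplication it replaces.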
Install TensorFlow
Clone the git repository
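These two steps might look like the following, assuming a pip-based Python environment and the `tensorflow/examples` companion repository (the exact repository for your codelab may differ):

```shell
# Install TensorFlow (a recent 2.x release)
pip install tensorflow

# Clone the codelab's companion repository (assumed: tensorflow/examples)
git clone https://github.com/tensorflow/examples
```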
1. Import the required packages.
• Create the train generator and specify the training dataset directory, image size, and batch size.
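A minimal sketch of this step, using `tf.keras.utils.image_dataset_from_directory` (a newer alternative to the `ImageDataGenerator` API often used in this codelab). The tiny synthetic dataset written to a temp directory is only there to make the sketch self-contained; in practice you would point it at the flower photos directory, with one subfolder per class.

```python
import os
import tempfile
import tensorflow as tf

IMAGE_SIZE = 224
BATCH_SIZE = 8

# Build a tiny synthetic dataset directory (one subfolder per class) so this
# sketch runs standalone; replace `root` with your real dataset directory.
root = tempfile.mkdtemp()
for cls in ('roses', 'tulips'):
    os.makedirs(os.path.join(root, cls))
    for i in range(4):
        img = tf.cast(tf.random.uniform((IMAGE_SIZE, IMAGE_SIZE, 3)) * 255, tf.uint8)
        tf.io.write_file(os.path.join(root, cls, f'{i}.jpg'), tf.io.encode_jpeg(img))

# Create the training dataset, specifying the directory, image size, and batch size.
train_ds = tf.keras.utils.image_dataset_from_directory(
    root,
    image_size=(IMAGE_SIZE, IMAGE_SIZE),
    batch_size=BATCH_SIZE)

images, labels = next(iter(train_ds))
print(images.shape)  # (8, 224, 224, 3)
```

Class labels are inferred from the subfolder names, so no separate label file is needed.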
• First, pick which intermediate layer of MobileNet V2 will be used for feature extraction.
A common practice is to use the output of the very last layer before the flatten
operation, the so-called "bottleneck layer". The reasoning here is that the following fully-
connected layers will be too specialized to the task the network was trained on, and
thus the features learned by these layers won't be very useful for a new task. The
bottleneck features, however, retain much generality.
• In our feature extraction experiment, you were only training a few layers on top of a MobileNet V2 base model. The weights of the pre-trained network were not updated during training.
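The feature extraction setup described above can be sketched as follows: load MobileNet V2 without its classification head (so its output is the bottleneck feature map), freeze the pre-trained weights, and train only a small new classifier on top. The 5-class head is an assumption matching the five flower categories of the codelab dataset.

```python
import tensorflow as tf

IMAGE_SIZE = 224

# Load MobileNet V2 without its classification head; its output is the
# "bottleneck" feature map just before the original fully-connected layers.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
    include_top=False,
    weights='imagenet')

# Freeze the pre-trained weights so they are not updated during training.
base_model.trainable = False

# Train only a small classifier on top of the bottleneck features.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # assumed 5 flower classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Only the pooling and dense layers contribute trainable parameters here, which is why feature extraction trains quickly even on modest hardware.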
• Let's take a look at the learning curves of the training and validation accuracy/loss, when fine
tuning the last few layers of the MobileNet V2 base model and training the classifier on top of it.
The validation loss is much higher than the training loss, so you may get some overfitting.
• Fine-tuning a pre-trained model: To further improve performance, one might want to repurpose the top-level layers of the pre-trained models to the new dataset via fine-tuning. In this case, you tuned your weights such that your model learned high-level features specific to the dataset. This technique is usually recommended when the training dataset is large and very similar to the original dataset that the pre-trained model was trained on.
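Fine-tuning the last few layers might be sketched like this: unfreeze the base model, re-freeze everything below a chosen cut-off so only the top layers are updated, and recompile with a much lower learning rate so the pre-trained weights are only nudged. The cut-off index `fine_tune_at = 100` and the 5-class head are illustrative assumptions, not values fixed by the codelab.

```python
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')

# Unfreeze the base model, then re-freeze all layers below the cut-off so
# only the last few layers are fine-tuned.
base_model.trainable = True
fine_tune_at = 100  # assumed cut-off index; tune for your dataset
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # assumed 5 flower classes
])

# A much lower learning rate than in feature extraction keeps the
# fine-tuned weights close to their pre-trained values.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

If the validation loss climbs well above the training loss during this phase, as the slide notes, reducing the number of unfrozen layers or adding regularization can curb the overfitting.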
• The app can run on either a real Android device or in the Android
Studio Emulator.
Set up an Android device
• You can't load the app from Android Studio onto your phone unless you activate "developer mode" and "USB Debugging". This is a one-time setup process.
Follow these instructions.
THANK YOU!