VIRTUAL INTERNSHIP
An Internship Report Submitted at the end of seventh semester
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted By
BODDEPALLI MEENESH
(21981A0520)
This is to certify that this project entitled “GOOGLE AI-ML”, done by “BODDEPALLI
MEENESH (21981A0520)”, a student of B.Tech in the Department of Computer Science and Engineering,
Raghu Engineering College, during the period 2021-2025, in partial fulfillment for the award of the Degree of
Bachelor of Technology in Computer Science and Engineering of the Jawaharlal Nehru Technological
University, Gurajada Vizianagaram, is a record of bonafide work carried out under my guidance and
supervision.
The results embodied in this internship report have not been submitted to any other University or
Institute for the award of any Degree.
External Examiner
DISSERTATION APPROVAL SHEET
This is to certify that the dissertation titled
Google AI-ML Virtual Internship
BY
BODDEPALLI MEENESH (21981A0520)
Dr. P. Appala Naidu
INTERNSHIP GUIDE
(Professor)
Internal Examiner
External Examiner
Dr. R. Sivaranjani
HOD
(Professor)
Date:
DECLARATION
This is to certify that this internship titled “GOOGLE AI-ML” is bonafide work done by me,
in partial fulfillment of the requirements for the award of the degree of B.Tech, and submitted to the
Department of Computer Science and Engineering, Raghu Engineering College, Dakamarri.
I also declare that this internship report is the result of my own effort, that it has not been copied
from anyone, and that I have taken citations only from the sources mentioned in the references.
This work was not submitted earlier at any other University or Institute for the award of any
degree.
Date:
Place:
BODDEPALLI MEENESH
(21981A0520)
ACKNOWLEDGEMENT
I express sincere gratitude to my esteemed institute, “Raghu Engineering College”, which has
provided me an opportunity to fulfill my most cherished desire of reaching my goal.
I take this opportunity with great pleasure to put on record my ineffable personal indebtedness to
Mr. Raghu Kalidindi, Chairman of Raghu Engineering College, for providing the necessary
departmental facilities.
I would like to thank the Principal, Dr. CH. Srinivasu, of “Raghu Engineering College”, for
providing the requisite facilities to carry out projects on campus. His expertise in the subject matter
and dedication towards our project have been a source of inspiration for all of us.
I sincerely express my deep sense of gratitude to Dr. R. Sivaranjani, Professor and Head of the
Department of Computer Science and Engineering, Raghu Engineering College, for her perspicacity,
wisdom and sagacity coupled with compassion and patience. It is my great pleasure to submit this
work under her wing. I thank her for guiding me to the successful completion of this internship
work.
I would like to thank Karthik Padmanabhan of the EduSkills Foundation for providing the
technical guidance to carry out the module assigned. His expertise in the subject matter and
dedication towards our project have been a source of inspiration for all of us.
I extend my deep hearted thanks to all faculty members of the Computer Science department for
their value based imparting of theory and practical subjects, which were used in the project.
I thank the non-teaching staff of the Department of Computer Science and Engineering, Raghu
Engineering College, for their inexpressible support.
Regards
BODDEPALLI MEENESH
(21981A0520)
ABSTRACT
The Google AI/ML virtual internship is a rigorous program designed to help
participants learn, in depth, about artificial intelligence and machine learning, including but
not limited to the use of TensorFlow. Interns work across different areas, including theory
and practice courses meant to promote practical application of the knowledge gained.
Progress through the course is demonstrated by successively unlocking badges on the Google
Developer Profile, each of which marks proficiency in an essential skill such as object
detection, image classification, or product image search.
By working hands-on with TensorFlow over the course of the internship, participants
gain the ability to design and deploy machine learning models with confidence. They also
use Google Colab, an online platform that provides a Jupyter-notebook environment for
scientific computing and for writing scientific documents, which makes experimentation and
model development more flexible. The program structure includes mentorship meetings,
group projects, and code reviews together with other interns, aiming at a balanced learning
experience and skill building.
These skills are put to good use in the practical application of AI/ML in the real world.
During code reviews and presentations, communication and problem-solving skills are further
exercised, preparing students for real-world workplace scenarios. The technology employed
in the program accurately reflects industry standards, and the emphasis on team learning
equips participants with skills and knowledge that are highly valued in the modern AI/ML
field. By the end of the internship, trainees, with a strong knowledge of the basic approaches
to Artificial Intelligence and Machine Learning, obtain a practical toolkit that helps them
solve such tasks. It is through the internship program that trainees are well equipped to
handle future opportunities in the upcoming digital space.
Table of Contents
S.NO CONTENT PAGE NUMBER
1. Introduction 1
8. Conclusion 19
1. INTRODUCTION
Today's world demands the understanding and handling of visual information in almost every field, for
instance self-driving technology, e-health, and e-commerce. Computer vision, a branch of artificial
intelligence, concerns the capacity of computers to process visual information and has a number of
applications that improve user experiences and business processes.
In this project we focus on the application of TensorFlow to solving important computer vision
problems, namely object detection, image classification, and product search. TensorFlow is a
comprehensive open-source library created by Google that allows easy integration of advanced
solutions into existing architectures.
Self-driving cars and pattern-recognition technologies require object detection, whereas image
classification is useful in enhancing medical imaging as well as content security. Besides that, visual
product search has also gained importance, especially in online shopping, where images are used to
search for products instead of written text.
Within the scope of this project, we will consider the methods, algorithms, datasets, and other
relevant components related to the aforementioned tasks. We will demonstrate the application of
TensorFlow in building effective and precise vision systems that are implemented in practice.
This internship program is designed to equip participants with foundational and advanced
knowledge in AI and ML, focusing on real-world applications using Google’s industry-leading tools and
platforms. Interns engage in hands-on projects covering key topics such as supervised and unsupervised
learning, neural networks, natural language processing, and cloud-based AI solutions. Through a
comprehensive curriculum, participants not only gain technical expertise but also develop
problem-solving and analytical skills necessary to build and deploy AI models effectively.
The Google AI/ML Virtual Internship aims to nurture a new generation of AI/ML professionals by
providing in-depth insights into the industry and fostering a deep understanding of AI’s transformative
potential.
2. Program neural networks with TensorFlow
Fig 3.1: Interface of the object detection mobile app. Fig 3.2: Capturing a photo in the starter app.
Fig 3.3: Captured image.
3.2 Add on-device object detection
In this step, you will add the functionality to the starter app to detect objects in images. As you
saw in the previous step, the starter app contains boilerplate code to take photos with the camera app on
the device. There are also 3 preset images in the app that you can try object detection on if you are
running the codelab on an Android emulator.
When you have selected an image, either from the preset images or taking a photo with the
camera app, the boilerplate code decodes that image into a Bitmap instance, shows it on the screen and
calls the runObjectDetection method with the image.
3.3 Set up and run on-device object detection on an image
There are only 3 simple steps with 3 APIs to set up ML Kit ODT:
● prepare an image: InputImage
● create a detector object: ObjectDetection.getClient(options)
● connect the 2 objects above: process(image)
Step 1: Create an InputImage
Step 2: Create a detector instance
ML Kit follows the Builder design pattern. You pass the configuration to the builder, then acquire a
detector from it. There are 3 options to configure (the options in bold are used in this codelab):
● detector mode (single image or stream)
The TFLite Task Library makes it easy to integrate mobile-optimized machine learning models
into a mobile app. It supports many popular machine learning use cases, including object detection,
image classification, and text classification. You can load the TFLite model and run it with just a few
lines of code.
The starter app is a minimal Android application that:
- Uses either the device camera or available preset images.
- Contains methods for taking pictures and presenting object detection output.
You will add the object detection functionality to the application by filling out the method
`runObjectDetection()`.
The function is defined as follows:
`runObjectDetection(bitmap: Bitmap)`: a function that conducts object detection on an input image
using the object detection algorithm.
Add a Pre-trained Object Detection Model
● Download the model. The pre-trained TFLite model is EfficientDet-Lite. This model is designed
to be mobile-efficient, and it is trained on the COCO 2017 dataset.
● Add dependencies.
● Configure and perform object detection.
● Render the detection results.
Train a Custom Object Detection Model
You will train a custom model to detect meal ingredients using TFLite Model Maker and Google
Colab. The dataset is composed of labeled images of ingredients like cheese and baked
products.
You have developed an Android application that can detect objects in images, first with a pre-trained
TFLite model, and then by training and deploying a custom object detection model. You have utilized
TFLite Model Maker for model training and the TFLite Task Library for integrating the model into
the application.
5. Get started with product image search
5.1 Detect objects in images to build a visual product search with ML Kit: Android
Have you seen the Google Lens demo, where you can point your phone camera at an object and
find where you can buy it online? If you want to learn how you can add the same feature to your app,
then this codelab is for you. It is part of a learning pathway that teaches you how to build a product
image search feature into a mobile app.
In this codelab, you will learn the first step to build a product image search feature: how to detect
objects in images and let the user choose the objects they want to search for. You will use ML Kit Object
Detection and Tracking to build this feature.
Fig 6.3: Interface of the product image search app after connecting the two APIs.
7. Go further with image classification
In the previous codelab you created an app for Android and iOS that used a basic image labeling
model that recognizes several hundred classes of images. It recognized a picture of a flower very
generically, seeing petals, flower, plant, and sky.
To update the app to recognize specific flowers, daisies or roses for example, you'll need a
custom model that's trained on lots of examples of each type of flower you want to recognize.
This codelab will not go into the specifics of how a model is built. Instead, you'll learn about the
APIs from TensorFlow Lite Model Maker that make it easy.
7.1 Install and import dependencies
Install TensorFlow Lite Model Maker. You can do this with a pip install. The `&> /dev/null` at the
end just suppresses the output. Model Maker prints a lot of output that isn't immediately relevant, so
it's suppressed here to let you focus on the task at hand.
7.2 Download and Prepare your Data
If your images are organized into folders, and those folders are zipped up, then when you download
the zip and decompress it, your images are automatically labeled based on the folder they're in.
This directory will be referenced as data_path.
This data path can then be loaded into a neural network model for training with TensorFlow Lite
Model Maker's ImageClassifierDataLoader class. Just point it at the folder and you're good to go.
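The folder-based labeling that a loader like ImageClassifierDataLoader performs can be sketched in plain Python. This is a conceptual illustration only; the function and variable names below are hypothetical, not Model Maker internals:

```python
import os

def label_images_by_folder(data_path):
    """Pair each image file with the name of the folder it sits in,
    mirroring how Model Maker derives class labels from the folder layout."""
    samples = []
    for class_name in sorted(os.listdir(data_path)):
        class_dir = os.path.join(data_path, class_name)
        if not os.path.isdir(class_dir):
            continue  # skip stray files at the top level
        for fname in sorted(os.listdir(class_dir)):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                samples.append((os.path.join(class_dir, fname), class_name))
    return samples
```

For example, a layout like `flowers/daisy/img1.jpg` would yield the pair `(".../daisy/img1.jpg", "daisy")`, so the folder name becomes the label with no separate annotation file needed.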
One important element in training models with machine learning is to not use all of your data for
training. Hold back a little to test the model with data it hasn't previously seen. This is easy to do with
the split method of the dataset that comes back from ImageClassifierDataLoader.
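The idea of holding data back can be sketched with the standard library alone. Here `split_dataset` is a hypothetical helper illustrating the concept, not the Model Maker API:

```python
import random

def split_dataset(samples, train_fraction=0.9, seed=42):
    """Shuffle the samples and hold back a fraction for testing,
    analogous in spirit to splitting the loaded dataset."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```

With 100 samples and the default `train_fraction=0.9`, this returns 90 training samples and 10 held-back test samples that the model never sees during training.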
7.3 Create the Image Classifier Model
Model Maker abstracts a lot of the specifics of designing the neural network so you don't have to
deal with network design, and things like convolutions, dense, relu, flatten, loss functions and optimizers.
The model went through 5 epochs, where an epoch is a full cycle of training in which the neural
network tries to match the images to their labels. After 5 epochs, in around 1 minute, it was
93.85% accurate on the training data. Given that there are 5 classes, a random guess would
be 20% accurate, so that's progress!
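What "an epoch" means can be shown with a toy, self-contained example: a single weight fitted to y = 2x by gradient descent, where each epoch is one full pass over the training data. This is an illustration of the concept only, not how Model Maker trains its network:

```python
def train(epochs=5, lr=0.01):
    """Fit y = 2x with a single weight w; each epoch is one full cycle over the data."""
    data = [(x, 2.0 * x) for x in range(1, 6)]  # toy training set
    w = 0.0
    epoch_losses = []
    for _ in range(epochs):
        total_loss = 0.0
        for x, y in data:          # one full pass over the data = one epoch
            err = w * x - y        # prediction error
            w -= lr * err * x      # gradient step on the squared error
            total_loss += err ** 2
        epoch_losses.append(total_loss)
    return w, epoch_losses
```

Each epoch's total loss shrinks as w approaches 2, just as the reported accuracy improves over the 5 epochs of real training.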
7.4 Export the Model
Now that the model is trained, the next step is to export it in the .tflite format that a mobile
application can use. Model Maker provides an easy export method: simply specify the directory to
output to.
For the rest of this lab, I'll be running the app in the iPhone simulator, which should support the
build targets from the codelab. If you want to use your own device, you might need to change the
build target in your project settings to match your iOS version.
Run it and you'll see something like this:
Note the very generic classifications: petal, flower, sky. The model you created in the previous
codelab was trained to detect 5 varieties of flower, including this one, a daisy. For the rest of this
codelab, you'll look at what it will take to upgrade your app with the custom model.
1. Open your ViewController.swift file. You may see an error on the `import MLKitImageLabeling`
line at the top of the file. This is because you removed the generic image labeling libraries when you
updated your Podfile.
import MLKitVision
import MLKit
import MLKitImageLabelingCommon
import MLKitImageLabelingCustom
It might be easy to speed-read these and think that they're repeating the same code, but note the
"Common" and "Custom" suffixes at the end!
2. Next you'll load the custom model that you added in the previous step. Find the getLabels() func.
Beneath the line that reads visionImage.orientation = image.imageOrientation, add these lines:
3. Find the code for specifying the options for the generic ImageLabeler. It's probably giving you
an error since those libraries were removed:
let options = ImageLabelerOptions()
Replace that with this code, to use a CustomImageLabelerOptions that specifies the local model:
let options = CustomImageLabelerOptions(localModel: localModel)
...and that's it! Try running your app now! When you try to classify the image it should be more accurate
– and tell you that you're looking at a daisy with high probability!
8. CONCLUSION
Starting with the AI Foundations course, I gained essential knowledge about the fundamental
concepts of AI and ML. This course covered critical topics such as supervised and unsupervised
learning, neural networks, and deep learning algorithms. Understanding these key concepts has been
instrumental in shaping my perspective on the growing impact of AI and its applications in various
industries.
Building on this foundation, I progressed to the Applied Machine Learning course, which offered
deeper insights into deploying machine learning models in real-world scenarios. This course provided
practical exposure to training models, fine-tuning hyperparameters, and evaluating model performance
using Google’s AI tools and frameworks. The hands-on experience from this course equipped me with
the ability to build and optimize models to solve real-world problems effectively.
Finally, I completed a capstone project focused on using Google Cloud AI and ML tools, where I
implemented a machine learning solution to a real business problem. This project allowed me to apply
everything I learned, from data preprocessing and model building to deployment. This practical
experience has prepared me for real-world challenges where I can apply AI and ML to drive meaningful
results.
Overall, this virtual internship has not only expanded my technical knowledge but also solidified
my passion for pursuing a career in artificial intelligence and machine learning. The combination of
theoretical learning and practical application through these courses has significantly enriched my
understanding of AI and ML. I am excited to leverage this knowledge as I continue to explore the field
and contribute to innovative solutions in the AI/ML domain.