Visvesvaraya Technological University, "Jnana Sangama", Belagavi-590 018
Submitted by
Manoj R (4AI18CS404) Mohammed Ibrahim Safiulla (4AI18CS405)
Sharath C R (4AI17CS084) Nitish Kumar N H (4AI18CS406)
CERTIFICATE
This is to certify that the Project Phase-I (17CSP78) work entitled “IOT Based Smart
Cold Storage System for Efficient Stock Management” is a bonafide work carried out by
Manoj R (4AI18CS404), Mohammed Ibrahim Safiulla (4AI18CS405), Sharath C R
(4AI17CS084), Nitish Kumar N H (4AI18CS406) students of 7th semester B.E, in partial fulfillment
for the award of Degree of Bachelor of Engineering in Computer Science and Engineering of the
Visvesvaraya Technological University, Belagavi during the academic year 2020-21. It is certified that all
corrections and suggestions indicated for Internal Assessment have been incorporated in the report
deposited in the department library. The project report has been approved, as it satisfies the academic
requirements in respect of Project Work prescribed for the said Degree.
ACKNOWLEDGEMENTS
We express our humble Pranamas to his holiness Parama Poojya Jagadguru
Padmabushana Sri Sri Sri Dr. Balagangadharanatha Mahaswamiji and Parama Poojya
Jagadguru Sri Sri Sri Dr. Nirmalanandanatha Mahaswamiji and also to Sri Sri
Gunanatha Swamiji Sringeri Branch Chikkamagaluru who have showered their blessings on us
for framing our career successfully.
We are deeply indebted to our honorable Director Dr. C K Subbaraya for creating the
right kind of care and ambience.
We are thankful to our beloved Principal Dr. C T Jayadeva for inspiring us to achieve
greater endeavors in all aspects of learning.
We express our deepest gratitude to Dr. Pushpa Ravikumar, Professor & Head,
Department of Computer Science & Engineering for her valuable guidance, suggestions and
constant encouragement without which success of our Project Phase-I work would have been
difficult.
We are grateful to our coordinators Mrs. Sunitha M R and Mrs. Arpitha C N for their
excellent guidance, constant encouragement, support and constructive suggestions.
We are thankful to our guide Mr. Chethan P, Asst. Professor, Dept. of Computer
Science & Engineering, AIT, Chikkamagaluru, for his inspiration and lively correspondence in
carrying out our Project Phase-I work.
We would like to thank our beloved parents for their support, encouragement and
blessings.
Last but not least, we express our heartfelt thanks to all the teaching
and non-teaching staff of the CS&E Department and to our friends who have rendered their help,
motivation and support.
Manoj R (4AI18CS404)
Sharath C R (4AI17CS084)
Nitish Kumar N H (4AI18CS406)
Mohammed Ibrahim Safiulla (4AI18CS405)
Table of Contents
CHAPTER TITLE PAGE No.
Abstract i
Acknowledgements ii
Table of Contents iii
List of Figures vi
Chapter 1 Introduction 1
1.1 Introduction 1
1.2 Motivation 5
1.3 Problem Statement 6
1.4 Scope of the Project 6
1.5 Objectives 7
1.6 Advantages 7
1.7 Review of Literature 7
1.8 Organization of the Report 11
1.9 Summary 11
3.2.2 Stock Recognition 16
3.2.3 Weight Identification
3.2.4 Temperature Identification 17
3.2.5 Notification 17
3.3 Use Case Diagram 18
3.4 Module Specification 19
3.4.1 Image Pre-Processing module 19
3.5.2 Segmentation 27
3.5.3 Feature extraction 28
3.5.4 Classification 30
3.5.5 Stock recognition 30
3.6 State Chart Diagram 31
3.7 Summary 32
4.6.5 Feature Extraction
4.6.6 Classification
4.6.7 Stock Recognition
4.7 Summary
REFERENCES
List of Figures
FIGURE NAME PAGE NO
Architectural Diagram of the Proposed System 13
The Process of Stock notifications 16
Use Case Diagram to monitor Smart Cold Storage 17
Use Case Diagram of Stock Detection and Recognition 20
Use Case Diagram of Raspberry Pi Module 21
Use Case Diagram of Weight Identification Module 22
Use Case Diagram of Temperature Identification Module 23
Use Case Diagram of Notification Module 24
Data Flow Diagram of Proposed System 26
Data Flow Diagram of Image Pre-processing 27
Conversion of RGB Image to Gray Scale Image 28
Conversion of Gray Scale Image to Binary Image 28
Conversion of RGB Image to RGB Matrix 29
Conversion of Gray Scale Image to Gray Scale Matrix 29
Data Flow Diagram of Weight Identification 30
Data Flow Diagram of Temperature Identification 31
Data Flow Diagram of Storage Box Analysis 31
Structural Diagram of Proposed System 33
Flowchart of Proposed System 34
Flowchart of Stock Detection and Recognition 36
Flowchart of Weight Identification 37
Flowchart of Temperature Identification 38
Flowchart of Notification Module 39
CHAPTER 1
INTRODUCTION
1.1 Introduction
Smart cold storage has the ability to connect to the internet through Wi-Fi to provide
special features. It is a smart way of dealing with the stock or items present in the storage. It
includes internal cameras, more flexible user-controlled cooling options, and the ability to
interact with its features using a smart phone or tablet. This prototype has the ability to know the
quantity of items present. It provides notifications to the user and the dealer or shopkeeper when
items are out of stock; notifications include the temperature as well as information about the goods.
This prototype uses a Pi-cam to detect the presence of objects. An Arduino is used as the weight
scale controller and a Raspberry Pi 3 as the home server. The controller collects all the information
and sends it to the Raspberry Pi 3. Using Python programming, the weight of an object placed in the
storage box is identified.
The Raspberry Pi has built-in Wi-Fi along with a LAN connection. An SD card with the
Raspbian OS installed is inserted into the Raspberry Pi. It also has a camera module to which the
Pi-cam is connected. The Arduino is connected to the GPIO of the Raspberry Pi. Weight scales built
with load cells are connected to the HX711 IC, which acts as a driver that digitizes the analog
weight signal. The analog electrical output is given to the HX711 IC and the output of the HX711 IC
is given to the Arduino. The temperature sensor DHT11 is also connected to the Arduino and detects
the temperature inside the storage box. The Arduino acts as an interface between the analog output
from the weight scales and the temperature sensor and the digital input of the Raspberry Pi.
The Raspberry Pi is programmed to capture 50 samples of each product using the Pi-cam. The samples
of the product are stored in the database. This process is known as training. Weight scales are
placed inside the storage box.
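A minimal Python sketch of this training step is given below. It assumes the Pi-cam can be opened
through OpenCV as camera index 0 and that the "database" is simply a folder of saved images per
product; all file and folder names here are illustrative assumptions, not the actual implementation.

import os
import cv2

SAMPLES_PER_PRODUCT = 50          # the report trains on 50 samples per product
DATABASE_DIR = "stock_database"   # hypothetical database location

def capture_training_samples(product_name):
    """Capture 50 images of one product and store them in the database folder."""
    out_dir = os.path.join(DATABASE_DIR, product_name)
    os.makedirs(out_dir, exist_ok=True)
    camera = cv2.VideoCapture(0)   # Pi-cam
    try:
        count = 0
        while count < SAMPLES_PER_PRODUCT:
            ok, frame = camera.read()
            if not ok:
                continue
            cv2.imwrite(os.path.join(out_dir, f"{product_name}_{count:02d}.jpg"), frame)
            count += 1
    finally:
        camera.release()

if __name__ == "__main__":
    capture_training_samples("tomato")   # example product name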
Whenever a product is kept in the storage box, its image is captured using the Pi-cam. The
captured image is detected using Python programming. The weight of the product on each scale is
converted into an electrical output using a Wheatstone bridge circuit. This circuit consists of fixed
resistances and an unknown resistance (a strain gauge), so that the change in resistance can be
measured. This electrical output is given to the Arduino, which therefore holds the data regarding
the temperature and the weight of the products inside the storage box. The Arduino is programmed so
that this data is passed to the Raspberry Pi. Using Python programming, the Raspberry Pi analyses
the data. An app is created to notify the user and the shopkeeper. The app contains a database of
images of the products present in the storage box, the weight present on each weight scale and the
temperature inside the storage box. The Raspberry Pi is programmed with a threshold value for each
weight scale. Whenever the weight of a product reduces beyond the threshold value, the user and
shopkeeper are notified through the mobile application. The app displays the weight of each object
along with its image, so that the user knows which products are present and their weights. If any
product is absent or reduced beyond the given threshold value, the shopkeeper is notified. The app
is updated automatically with details of the temperature and products to make it easy for the user.
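The following is a minimal sketch of the per-scale threshold check described above. The weight
readings, threshold table and notify() helper are hypothetical placeholders; in the actual system
the alerts are pushed through the mobile application.

THRESHOLDS_G = {"scale_1": 200.0, "scale_2": 500.0}   # grams, illustrative values

def notify(recipient, message):
    # Placeholder: in the real system this would push an alert to the app.
    print(f"[{recipient}] {message}")

def check_stock_levels(weights_g):
    """Compare each scale reading against its threshold and raise alerts."""
    for scale, weight in weights_g.items():
        threshold = THRESHOLDS_G.get(scale)
        if threshold is None:
            continue
        if weight <= 0:
            notify("shopkeeper", f"{scale}: product absent, please restock.")
        elif weight < threshold:
            notify("user", f"{scale}: only {weight:.0f} g left, below {threshold:.0f} g.")
            notify("shopkeeper", f"{scale}: stock running low, please refill.")

check_stock_levels({"scale_1": 150.0, "scale_2": 800.0})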
When the storage box door is opened, a notification is sent to the mobile application, and
when the storage box door is closed, a notification is also sent to the mobile application. An IR
sensor is used to monitor the cold storage box door. The IR sensor is connected to the Arduino,
which helps to send the notifications. This helps to keep the stock in a cold space, because if the
storage box door is left open the coldness inside the storage box is not maintained, which leads to
spoilage of items. The DHT11 temperature sensor is used to monitor the temperature inside the
storage box; if the storage box door is open, the temperature inside the storage box increases.
Fans are used to adjust the temperature inside the storage box and sprinklers are used to maintain
the humidity. When stock is kept inside the storage box the stock-keeping time is displayed, and
when stock is removed from the storage box the stock-removal time is also displayed.
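A minimal sketch of the Raspberry Pi side of this exchange is shown below, using the pyserial
library. It assumes the Arduino prints one comma-separated line per reading over USB serial in the
hypothetical format "weight_g,temperature_c,door_state" (for example "812.5,6.2,OPEN"); the message
format, port name and thresholds are assumptions for illustration only.

import serial  # pyserial

def read_sensor_lines(port="/dev/ttyACM0", baud=9600):
    with serial.Serial(port, baud, timeout=2) as link:
        while True:
            raw = link.readline().decode("ascii", errors="ignore").strip()
            if not raw:
                continue
            try:
                weight_s, temp_s, door = raw.split(",")
                yield float(weight_s), float(temp_s), door
            except ValueError:
                continue  # ignore malformed lines

for weight_g, temperature_c, door_state in read_sensor_lines():
    if door_state == "OPEN":
        print("Door open: sending notification to the mobile application")
    if temperature_c > 8.0:          # illustrative temperature limit
        print("Temperature high: turning the fans on")
    print(f"weight={weight_g} g, temperature={temperature_c} deg C")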
The aim of this project, “Smart Cold Storage”, is to leverage the latest supply chain
technology and the IOT to serve as a hub that improves efficiency and speeds up the process
throughout the entire supply chain. The cold storage facilities available are mostly for a single
commodity like potatoes, oranges, apples, flowers etc., which results in poor capacity utilization.
The existing cold storage system lacks storage capacity for fruits and vegetables, which is partially
responsible for the non-availability of seasonal fruits and vegetables. A cold storage unit
incorporates a refrigeration system but cannot store produce for a longer time. The proposed system
includes internal cameras, more flexible user-controlled cooling options, and the ability to interact
with its features using a smart phone or tablet. This prototype has the ability to know the quantity
of items present. It provides notifications to the user and the dealer or shopkeeper when items are
out of stock; notifications include the temperature as well as information about the goods. This
project can also be implemented in the medical field to store and maintain medical products.
The smart refrigerator is constantly evolving. Most of the functions of intelligent
refrigerators are still limited to those of the original traditional refrigerator, but with the
development of information technology, if the smart refrigerator and information technology can be
combined, then the smart refrigerator will undoubtedly make people's lives more convenient.
The existing cold storage system lacks storage capacity for fruits and vegetables, which is
partially responsible for the non-availability of seasonal fruits and vegetables.
This prototype has the ability to know the quantity of items present. It provides
notifications to the user and the dealer or shopkeeper when items are out of stock; notifications
include the temperature as well as information about the goods.
Our smart cold storage system is more accurate and is capable of notifying the user and the
product supplier of the contents of the storage and their weights when the contents are running out
of stock. The temperature inside the storage system is also indicated in the specified app on the
user's smart phone.
Input:
Stock image detection by the Pi-camera.
Temperature detection inside the Storage Box by the DHT11 sensor.
Weight of the stock calculated by a load cell with the HX711 IC driver.
Processing:
The process consists of detecting any stock inside the Storage Box using the Pi-camera,
capturing the image of the stock inside the Storage Box and identifying which type of stock it
is. If stock is not available, a message such as "please refill the box" is sent to the user
through the Android application.
Another process is identifying the internal temperature of the Storage Box using the
DHT11 sensor.
The weight of the stock is calculated by a load cell with the HX711 IC driver.
Output:
Display the image of the stock inside the Storage Box on the mobile application. If stock
is taken from the Storage Box, a message such as "Stock is exhausted" is displayed. If
stock is not available, a notification is sent to the user to refill the Storage Box.
Display the date and time of stock insertion and stock removal (a small sketch follows this list).
Display the temperature inside the Storage Box and adjust the temperature when the
temperature inside the box is high.
Display the net weight of the stock inside the Storage Box.
Send a notification to the user through the mobile application when the storage box
door is opened or closed.
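As referenced in the date-and-time item above, the following is a minimal sketch of how insertion
and removal times could be stamped. It assumes successive weight readings are compared and that a
change larger than a small tolerance marks stock being kept or removed; the tolerance and values are
illustrative assumptions.

from datetime import datetime
from typing import Optional

TOLERANCE_G = 20.0   # ignore small fluctuations of the load cell

def log_stock_event(previous_g, current_g):
    """Return a human-readable event line when stock is inserted or removed."""
    now = datetime.now().strftime("%d-%m-%Y %H:%M:%S")
    if current_g - previous_g > TOLERANCE_G:
        return f"Stock kept at {now} (+{current_g - previous_g:.0f} g)"
    if previous_g - current_g > TOLERANCE_G:
        return f"Stock removed at {now} (-{previous_g - current_g:.0f} g)"
    return None

print(log_stock_event(500.0, 750.0))   # e.g. "Stock kept at ..."
print(log_stock_event(750.0, 400.0))   # e.g. "Stock removed at ..."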
Chapter 1 - Introduction: This chapter presents a brief description of the IOT Based Smart
Cold Storage System for Efficient Stock Management.
Chapter 2 - System Requirements: This chapter describes the hardware, software and
functional requirements of the proposed system.
Chapter 3 - High Level Design: This chapter presents the system architecture, use case
diagrams, module specifications and data flow diagrams.
Chapter 4 - Detailed Design: This chapter explains the detailed functionality and
description of each module.
Summary
The first chapter describes the Cold Storage System for efficient Stock Management using
the Internet of Things. The motivation for the project is discussed in section 1.2. The problem
statement of the project is explained in section 1.3. The scope of the project is described in
section 1.4 and the objectives are presented in section 1.5. Section 1.7 gives details of the
literature survey, reviewing the important papers referred to.
CHAPTER 2
Arduino IDE
OpenCV
OpenCV supports a wide variety of programming languages such as C++, Python, Java,
etc., and is available on different platforms including Windows, Linux, OS X, Android, and
iOS. Interfaces for high speed GPU operations based on CUDA and OpenCL are also under
active development. OpenCV-Python is the Python API for OpenCV, combining the best
qualities of the OpenCV C++ API and the Python language.
OpenCV-Python is a library of Python bindings designed to solve computer vision
problems. Python is a general purpose programming language started by Guido van Rossum
that became very popular very quickly, mainly because of its simplicity and code readability.
It enables the programmer to express ideas in fewer lines of code without reducing readability.
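A short OpenCV-Python sketch of the basic API used in this project is given below: an image is read,
converted to grayscale and written back out. The file names are assumptions for illustration only.

import cv2

image = cv2.imread("stock_sample.jpg")             # hypothetical input file (BGR NumPy array)
if image is None:
    raise SystemExit("image not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # single-channel grayscale image
print("colour shape:", image.shape, "grayscale shape:", gray.shape)
cv2.imwrite("stock_gray.jpg", gray)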
The stock present inside the Storage Box is captured using a camera.
The captured stock images are detected and recognized using the TensorFlow
algorithm.
The temperature inside the Storage Box is detected by the DHT11 temperature sensor,
and fans adjust the temperature inside the box when the temperature is
high.
The weight of the stock is calculated by a load cell with the HX711 IC driver.
Notifications are received on the mobile application when the Storage Box door is
opened or closed.
Summary
Chapter 2 covers all the system requirements needed to develop the proposed system. The
specific requirements for this project are explained in section 2.1. The hardware requirements are
explained in section 2.2. The backend software is explained in section 2.3. The functional
requirements are explained in section 2.4.
CHAPTER 3
High-level design (HLD) explains the architecture that would be used for developing a
software product. The architecture diagram provides an overview of an entire system,
identifying the main components that would be developed for the product and their interfaces.
The HLD uses possibly non-technical to mildly technical terms that should be understandable
to the administrators of the system. In contrast, low-level design further exposes the logical,
detailed design of each of these elements for programmers.
High-level design is used to design the software-related requirements. In this chapter the
complete system design is generated, showing how the modules and sub-modules are integrated and how
data flows between them. It is a simple phase that outlines the implementation process; errors made
here are corrected in the later phases.
The design consideration briefly describes how the system behaves for cold storage and
classifies the given stock image. The main functions are Stock Detection, Stock Recognition,
Weight Identification, Temperature Identification and Notifications.
Stock Detection: The goal of object detection is to detect all the instances from the
known samples. It mainly focuses on stock detection and tracking based on colour, i.e., the
input to the prototype is an image.
Stock Recognition: This is the process of recognizing the product. During training, the
Pi-cam is programmed to capture 50 samples of each product and store them in the database. The
detected image is then compared with the samples stored in the database. Here pre-processing,
segmentation, feature extraction and classification take place using the TensorFlow algorithm
(a minimal sketch is given after this list).
Weight Identification: Weight identification involves identifying the weight of the
object recognized in the stock recognition process. To identify the weight of the product
placed in the storage box, an HX711 IC with a load cell is used. Using Python programming,
the weight of the stock placed in the storage box is identified.
Temperature Identification: The DHT11 sensor measures the temperature inside the storage
box, and fans are turned on to bring it down when it rises.
Notifications: The information regarding the presence of the product, its image, its
weight and the temperature inside the storage box is updated automatically in the mobile
application. If the product is not available inside the box, a notification asking the user to
refill the box is also sent. Notifications are also sent to the user when the door of the box is
opened or closed and when the user keeps or removes stock.
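As referenced in the Stock Recognition item above, the following is a minimal sketch of how a
TensorFlow-based recognition step could look. It assumes a small Keras image-classification model
has already been trained on the 50 samples per product and saved to disk; the model file name, input
size and class names are illustrative assumptions, not the report's actual implementation.

import numpy as np
import tensorflow as tf

CLASS_NAMES = ["apple", "orange", "tomato"]            # illustrative product labels

model = tf.keras.models.load_model("stock_classifier.h5")   # hypothetical trained model

def recognize_stock(image_path):
    """Classify one captured image and return the predicted product name."""
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(128, 128))
    arr = tf.keras.preprocessing.image.img_to_array(img) / 255.0   # scale to [0, 1]
    probabilities = model.predict(arr[np.newaxis, ...])[0]         # add batch dimension
    return CLASS_NAMES[int(np.argmax(probabilities))]

print(recognize_stock("captured_stock.jpg"))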
The figure 3.1 shows the system architecture for the proposed method. Here we can
see that the stock image is captured by a pi-camera and the image input is fed into the stock
recognition system, in which it is processed and compared efficiently. The virtual stock is
updated accordingly.
The system performs the tasks of stock recognition and classification. The input image
(stock image) is matched with the images in the dataset using the TensorFlow algorithm, and the
matched image with the minimum distance value is recognized and classified to give the respective
output.
The input image is pre-processed and converted to a grey scale image to find the threshold
value based on the input image. Based on the threshold value, the binary and segmented images are
then obtained.
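A short sketch of this pre-processing chain is given below, assuming OpenCV: the image is converted
to grayscale, an automatic (Otsu) threshold is computed from the grayscale image, and binary and
segmented images are produced from it. File names are assumptions for illustration.

import cv2

gray = cv2.cvtColor(cv2.imread("stock_sample.jpg"), cv2.COLOR_BGR2GRAY)
threshold_value, binary = cv2.threshold(gray, 0, 255,
                                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("threshold found from the input image:", threshold_value)
segmented = cv2.bitwise_and(gray, gray, mask=binary)   # keep only foreground pixels
cv2.imwrite("stock_segmented.jpg", segmented)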
The proposed system has the following steps for Smart Cold Storage:
1. Stock Detection.
2. Stock Recognition
2.1 Pre-processing
2.2 Segmentation
2.3 Feature extraction
2.4 Classification
3. Weight Identification.
4. Temperature Identification.
5. Notifications.
The goal of object detection is to detect all the instances from the known samples. As shown in
Fig. 3.2, the process involves stock detection, recognition, weight identification, temperature
identification and notifications. It mainly focuses on stock detection and tracking based on colour,
i.e., the input to the prototype is an image, which is captured by a Pi-cam. The Pi-cam detects the
stock and tracks it continuously. Each sample that is specified in the Python script is detected
when placed in the storage area using the Pi-cam. The visual data captured by the Pi-cam is
processed by the Raspberry Pi and the stock is detected based on its colour or shape. Each detection
is reported with information like the name or ID of the stock.
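A minimal colour-based detection sketch (one possible way to realise "detected based on the colour
or shape") is shown below, assuming OpenCV. The HSV range and minimum contour area are illustrative
assumptions for a red product, not tuned values from the report.

import cv2
import numpy as np

frame = cv2.imread("storage_box_view.jpg")          # hypothetical frame from the Pi-cam
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower = np.array([0, 120, 70])       # illustrative lower HSV bound (red-ish)
upper = np.array([10, 255, 255])     # illustrative upper HSV bound
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) > 500:               # ignore small specks
        x, y, w, h = cv2.boundingRect(contour)
        print(f"stock detected at x={x}, y={y}, size={w}x{h}")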
The number of the sample matched with the detected image is identified. The numbering of each
sample is linked to the specifications of the product, which include the name of the product. Using
these parameters and the detection of the stock, the product placed is recognized along with its
identity, i.e., its name.
3.2.5 Notifications:
After identification of the data regarding the weight of the products and the temperature inside
the storage box, the user is notified through an app. An app named All Things Maker is created and
linked to the program using its ID. Using this mobile application, the image of the product
identified and its weight are updated in the app. Whenever the weight of any product reduces beyond
its threshold value, the user and shopkeeper are notified through an alert notification. The
information regarding the presence of the product, its weight and the temperature inside the storage
box is updated automatically in the mobile application.
Figure 3.3: Use Case Diagram to Monitor Smart Cold Storage.
The use case diagram for the Smart Cold Storage system is depicted in the above figure 3.3.
Initially, the set of captured images is stored on the SD card of the Raspberry Pi. The obtained RGB
image is converted into a gray scale image to reduce complexity. Then the pre-processing techniques
are applied to the obtained gray scale image. The grey scale image is then converted into a binary
image, the segmentation process takes place, and the significant features are extracted using the
TensorFlow algorithm. Further, we classify which type of stock it is by matching the closest image
in the database and recognizing the required stock. This is the complete process of stock detection.
The role of the Raspberry Pi is to capture the image using the Pi-camera; the stock detection
process identifies which type of stock it is, and the Raspberry Pi then helps to display the captured
image in the Blynk application.
The role of the HX711 IC with the load cell is to calculate the weight of the stock which is
already present in the Storage Box. The Pi-camera captures the image of the stock, then the load
cell is activated and weighs the stock. That weight is displayed on the Blynk application.
The role of the DHT11 temperature sensor is to report the temperature inside the storage box.
The DHT11 temperature sensor measures the temperature inside the storage box and the temperature in
degrees Celsius is then displayed on the mobile application.
Actors: Stock.
Use Cases: Image capture, RGB Image Pre-processing, Binary Image Segmentation,
Segmented Image feature Extraction, Image Classification.
Functionality: The main functionality of this module is to convert the RGB image to binary
format for faster processing.
Description: Figure 3.4 shows the use case diagram of stock detection and stock recognition.
In this use case diagram, there are five use cases and one actor. In the first use case, the image
is captured and used as input for this module. In the second use case, the captured image is
converted to an RGB matrix, the RGB image is converted into a gray scale image to reduce complexity,
and the grey image is then converted to a binary image. In the third use case, binary image
segmentation takes place and the segmented matrix is formed. In the fourth use case, features are
extracted from the segmented matrix. Finally, the image is classified to display which type of
stock it is.
Actors: Raspberry Pi.
Functionality: The main functionality of this module is to capture the stock image present
inside the storage box and display the captured image on the Blynk application.
Description: Figure 3.5 shows the use case diagram of the Raspberry Pi module. In this use
case diagram, there are two use cases and one actor. In the first use case, the Raspberry Pi
captures the image of the stock present inside the storage box as input. In the second use case,
the captured image goes through the stock detection and stock recognition process to identify which
type of stock it is; the stock name as well as the captured stock image is then displayed on the
Blynk application.
3.4.3 Weight Identification Module
Use Cases: Weight Identification.
Functionality: The main functionality of this module is to identify the weight of the
particular stock available inside the storage box.
Description: Figure 3.6 shows the use case diagram of the Weight Identification module. In
this use case diagram, there are two use cases and one actor. In the first use case, the Pi-camera
captures the stock image, the type of stock is determined by the stock detection and stock
recognition module, and the weight of the particular stock is calculated using the HX711 IC with the
load cell. In the second use case, the weight of the stock is displayed in the Blynk application.
3.4.4 Temperature Identification Module
Actors: DHT11 Temperature Sensor.
Use Cases: Temperature Identification.
Description: Figure 3.7 shows the use case diagram of the Temperature Identification module.
In this use case diagram, there are two use cases and one actor. In the first use case, the DHT11
temperature sensor measures the temperature inside the storage box in degrees Celsius. In the second
use case, the measured temperature is displayed on the Blynk application. If the temperature rises
in the storage box, the fans are turned on to reduce the temperature inside the storage box.
Use Cases: Stock display, Weight display, Temperature display, Door open/close, Date and
Time display.
Functionality: The main functionality of this module is to display the messages on the Blynk
application.
Image Pre-processing
As the name suggests, this is the process of preparing the captured image, which is
explained below.
The input image is captured through the camera and can be stored as part of the dataset for
training or used as the input image to be recognized. The image is captured and stored in any
supported format specified by the device.
As shown in the figure below, initially the set of captured images is stored on the SD card
of the Raspberry Pi. The storage is linked to the dataset from which the data is accessed. The
obtained RGB image is converted into a gray scale image to reduce complexity.
Figure 3.9: Data Flow Diagram of RGB to Grey Conversion
Segmentation
Segmentation is done to divide the image into two regions: the background and the foreground
containing the region of interest, i.e., the hand region. The segmented image has the hand region
with the pixel value '1' and the background as '0'. The image is then used as a mask to get the hand
region from the RGB image by multiplying the black and white image, i.e., the binary image, with the
original RGB image. The image is resized to reduce the size of the matrix used for the recognition
process. The images are then converted into column matrices for feature extraction.
As shown in the figure above, the binary image is masked using the threshold value to obtain
the region of interest.
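A small sketch of this masking step, assuming NumPy/OpenCV arrays, is shown below: the binary image
(0/1) is multiplied with the original RGB image, the result is resized, and it is finally flattened
into a column vector for feature extraction. File names and the target size are illustrative
assumptions.

import cv2
import numpy as np

rgb = cv2.imread("input_image.jpg")                          # original RGB (BGR) image
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

masked = rgb * binary[:, :, np.newaxis]                      # keep the region of interest only
masked_small = cv2.resize(masked, (64, 64))                  # shrink the matrix for recognition
column_vector = masked_small.reshape(-1, 1)                  # column matrix for feature extraction
print("column vector shape:", column_vector.shape)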
Feature Extraction
Eigen vectors are set of basic functions which describes variability of data. Also,
Eigen vectors are also a kind of coordinate system for which the covariance matrix
becomes diagonal for which the new coordinate system is uncorrelated. Eigen values
measures the variance of data of new coordinate system. Eigen values and Eigen vectors
are a part of linear transformations. Eigen vectors are the directions along which the
linear transformation acts by stretching, compressing or flipping and Eigen values gives
the factor by which the compression or stretching occurs. The more the Eigen vectors the
better the information obtained from the linear transformation.
μ = (1/M) Σ (n = 1 to M) Tn,  where M is the number of column vectors.
The mean is subtracted from each column vector Ti of the database as: Temp = A − μ,
where A is the column matrix and μ is the mean.
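A minimal NumPy sketch of this PCA step is given below: the column vectors of all database images
form a matrix, the mean is subtracted, and the eigenvalues and eigenvectors of a covariance matrix
are computed. The data here is random and purely for illustration.

import numpy as np

M = 50                                   # number of training images (column vectors)
pixels = 64 * 64
A = np.random.rand(pixels, M)            # each column Tn is one flattened image

mu = A.mean(axis=1, keepdims=True)       # mean image, mu = (1/M) * sum of Tn
Temp = A - mu                            # subtract the mean from every column

covariance = (Temp.T @ Temp) / M         # small M x M surrogate covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(covariance)
print("largest eigenvalues:", np.round(eigenvalues[-5:], 3))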
Classification
Sign Recognition
The obtained feature vectors are matched with the values in the dataset. The minimum
distance value between the input image and the dataset images is obtained after matching; thus, the
sign is recognized.
State Chart Diagram
A state chart diagram is also called a state diagram. It is a popular UML diagram and is
used to model the dynamic nature of the system. A state chart defines the various states of an
object during its lifetime. The state chart diagram is a composition of a finite number of states,
and the functionalities describe the functioning of each module in the system. It is a graph where
each state is represented by a node and the directed edges represent transitions between states.
[Figure 3.11: State chart diagram of the proposed system, showing the states Input Image,
Pre-Processing (RGB to grey scale, grey to binary), Segmentation (removal of ROI by masking),
Feature Extraction (Principal Component Analysis, Eigen values), Classification (Euclidean distance,
feature vector) and Sign Recognition (minimum-distance matching with the dataset, recognised sign).]
The figure 3.11 shows the state chart diagram of the hand gesture recognition and
classification. The process starts with the solid circle; the first state is reading the hand
gesture image as input and the second state is pre-processing to convert the RGB image to gray
scale and then to a binary image. In the third state, segmentation is done on the binary image to
extract the ROI by the masking process. In the fourth state, we extract the significant features
using PCA, and in the fifth state we use the Euclidean distance to find the minimum distance value
and finally recognize the sign which has the closest match with the images in the database.
Summary
In the third chapter, the high-level design of the proposed method is discussed. Section
3.1 presents the design considerations for the project. Section 3.2 discusses the system
architecture of the proposed system. Section 3.3 describes the use case diagrams. Section 3.4
describes the module specification for all the modules. The data flow diagram for each module is
explained in section 3.5. The state chart diagram for the system is explained in section 3.6.
CHAPTER 4
DETAILED DESIGN
Detailed design is the design of each individual module and is completed in the stage prior
to implementation. It is the second phase of design: the first phase is the high-level design and
the second phase is the individual design of each module. It saves time and, as another plus point,
makes implementation easier.
The figure 4.1 shows the structural chart of the hand gesture recognition system. The chart
is composed of five modules, namely image pre-processing, segmentation, feature extraction,
classification and sign recognition. The hand gesture is the input to the system and the output is
the recognized sign in text and speech format.
The RGB color model is an additive colour model in which red, green, and blue light are
added together in various ways to reproduce a broad array of colors. The name of the model comes
from the initials of the three additive primary colours: red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation, and display of
images in electronic systems, such as televisions and computers, though it has also been
used in conventional photography.
Before the electronic age, the RGB color model already had a solid theory behind it,
based in human perception of colors. RGB is a device-dependent color model: different
devices detect or reproduce a given RGB value differently, since the color elements (such
as phosphors or dyes) and their response to the individual R, G, and B levels vary from
manufacturer to manufacturer, or even in the same device over time. Thus an RGB value
does not define the same color across devices without some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and
digital cameras. Typical RGB output devices are TV sets of various technologies (CRT,
LCD, plasma), computer and mobile phone displays, video projectors, multicolour LED
displays, large screens such as Jumbo Tron. Color printers, on the other hand, are not
RGB devices, but subtractive color devices (typically CMYK color model).
Grayscale
Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the
context of computer imaging are images with only two colors, black and white (also called bi-level
or binary images). Grayscale images have many shades of grey in between. Grayscale images are also
called monochromatic, denoting the presence of only one (mono) color (chrome).
Grayscale images are often the result of measuring the intensity of light at each
pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light,
ultraviolet), and in such cases they are monochromatic proper when only a given frequency
is captured. But also, they can be synthesized from a full color image.
The image obtained by a digital camera is in RGB format, but for post-processing this image
needs to be converted into grayscale format. This makes processing much simpler: converting to
grayscale reduces the processing time of the algorithms, since full RGB information is not required
for processing.
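For reference, one common weighting used to synthesize the gray value from the three colour channels
(the ITU-R BT.601 luma weights, which OpenCV's default RGB-to-grayscale conversion also uses) is:

Gray = 0.299 × R + 0.587 × G + 0.114 × B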
A grayscale or grayscale digital image is an image in which the value of each pixel
is a single sample, that is, it carries only intensity information. Images of this sort, also
known as black-and-white, are composed exclusively of shades of gray, varying from black
at the weakest intensity to white at the strongest.
The intensity of a pixel is expressed within a given range between a minimum and a maximum,
inclusive. This range is represented in an abstract way as a range from 0 (total absence, black) to
1 (total presence, white), with any fractional values in between. This notation is used in academic
papers, but it does not define what "black" or "white" is in terms of colorimetry.
Technical uses (e.g. in medical imaging or remote sensing applications) often require more
levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to guard
against round-off errors in computations. Sixteen bits per sample (65,536 levels) is a convenient
choice for such uses, as computers manage 16-bit words efficiently. The TIFF and PNG (among other)
image file formats support 16-bit grayscale natively, although browsers and many imaging programs
tend to ignore the low-order 8 bits of each pixel. No matter what pixel depth is used, the binary
representations assume that 0 is black and the maximum value (255 at 8 bpp, 65,535 at 16 bpp, etc.)
is white, if not otherwise noted.
Segmentation
In this stage, the gray scale image obtained earlier is further converted into a binary
image (consisting of 0s and 1s). Here, the binary image is obtained from the threshold value
calculated using the grey scale matrix: if the pixel value in the grey scale matrix is greater than
the threshold value, we plot 1 in the binary matrix, else we plot 0. Once the binary matrix is
obtained, we perform the masking operation (multiplying the grey scale matrix with the binary
matrix) to obtain the segmented matrix.
The segmented matrix is thus obtained to get our region of interest (ROI). The region of
interest is ideally the part of the matrix where the binary value is 1; the part with value 0 is
ignored, as we are not concerned with it.
Feature Extraction
Feature extraction is the most significant step in the recognition stage, as the
dimensionality of the data is reduced in this stage. It can also be used for signal processing and
pattern matching. In Principal Component Analysis, only the unique and significant features are
extracted, from which we obtain the Eigen values and Eigen vectors. To calculate the Eigen values
and Eigen vectors, the column matrices of all the images are formed and concatenated into a single
matrix; the mean of this matrix is calculated and subtracted for normalization.
The mean of the matrix is calculated as:
μ = (1/M) Σ (n = 1 to M) Tn,  where M is the number of column vectors.
The mean is subtracted from each column vector Ti of the database as:
Temp = A − μ
The Euclidean distance between the input feature vector and each dataset vector is then:
Ed = √( Σ (i = 1 to n) (qi,j − pi,j)² )
where j = 1, 2, …, m; n = total number of pixels in a single image; m = total number of images in
the dataset.
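A small worked example of this distance formula is given below, assuming the feature vectors are
NumPy arrays: the Euclidean distance Ed between the input vector q and every dataset vector is
computed, and the closest match (minimum distance) gives the recognized class. The data is random
and purely illustrative.

import numpy as np

dataset = np.random.rand(10, 256)              # m = 10 stored feature vectors (illustrative)
q = dataset[3] + 0.01 * np.random.rand(256)    # a query vector close to entry 3

distances = np.sqrt(((dataset - q) ** 2).sum(axis=1))   # Ed for every dataset image
best = int(np.argmin(distances))
print("closest match:", best, "with distance", round(float(distances[best]), 4))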
Sign Recognition
Summary
The fourth chapter gives the detailed design for the hand gesture recognition system. In
section 4.1, the structural chart for the proposed system is shown. Section 4.2 shows the flowchart
for image processing. Section 4.3 gives the flowchart for segmentation. Section 4.4 gives the
flowchart for feature extraction. Section 4.5 gives the flowchart for classification. Section 4.6
gives the detailed description of each module.
REFERENCES
[1] Mrs. Neela Harish, Dr. S. Poonguzhali, “Design and development of hand gesture recognition
system for speech impaired people”, International Conference on Industrial Instrumentation and
Control (ICIC), College of Engineering Pune, India, May 28-30, 2015.
[3] Meenakshi Panwar and Pawan Singh Mehra, “Hand Gesture Recognition for Human Computer
Interaction”, International Conference on Computing, Communication and Applications (ICCCA),
Dindigul, Tamilnadu, India, February 2012.
[4] M. Ebrahim Al-Ahdal & Nooritawati Md Tahir, “Review in Sign Language Recognition Systems”,
Symposium on Computer & Informatics (ISCI), pp. 52-57, IEEE, 2012; Sruthi Upendran,
Thamizharasi A., “American Sign Language Interpreter System for Deaf and Dumb Individuals”,
International Conference on Control, Instrumentation, Communication and Computational
Technologies (ICCICCT), 2014.
[5] Andrew Wilson and Aaron Bobick, “Learning visual behavior for gesture analysis”, in
Proceedings of the IEEE Symposium on Computer Vision, Coral Gables, Florida, pp. 19-21,
November 2015.
[6] Nitesh S. Soni, Prof. Dr. M. S. Nagmode, Mr. R. D. Komati, “Hand Gesture Recognition &
Classification for Deaf & Dumb”, International Conference on Inventive Computation Technology
(ICICT), Coimbatore, India, 2016.