
GOVERNMENT POLYTECHNIC, PUNE

(AN AUTONOMOUS INSTITUTE OF GOVERNMENT OF MAHARASHTRA)

A
Seminar Report
On

“Surveillance Robot controller using an Android app”

SUBMITTED BY:

(2023004) – Chaitanya Uday Anarase.


(2023005) – Chaitanya Haridas Andhale.
(2023010) – Rohan Vinayak Daundkar.
(2023048) – Shivsagar Dattatray Pokale.

Under Guidance of
Mr. N. D. Toradmal

Date of Presentation : 17/12/2022

Department of Electronics & Telecommunication


Government Polytechnic, Pune
TABLE OF CONTENTS

CHAPTER   TITLE                                  PAGE NO.

1         Introduction                           2
2         Literature survey                      4
3         Block diagram & specifications         8
          3.1 Block Diagram
          3.2 Hardware
          3.3 Specifications
          3.4 Pin Diagram
4         Software System Design                 19
5         Conclusion and Result                  20
6         References                             22
CHAPTER - 1
INTRODUCTION

The advent of new high-speed technology and growing computing capacity have provided
realistic opportunities for new robot controls and for realizing new methods from control
theory. This technical improvement, together with the need for high-performance robots, has
produced faster, more accurate and more intelligent robots using new control devices, new
drives and advanced control algorithms.

This project describes an economical solution for robot control. In general, robots are
controlled over a wired network. Programming the robot takes time, and if anything in the
project changes, the robot must be reprogrammed. Such robots are therefore not user-friendly
and do not adapt to user preferences. To make a robot user-friendly and to bring a multimedia
feel to its control, it should be designed to carry out user-commanded work.

Modern technology has to be adopted to achieve this, and to be useful it must already be
familiar to its users. To meet these needs we use an Android mobile phone as a multimedia,
user-friendly device to control the robot. This idea is the motivation for this project and its
main theme. In today's environment almost everybody uses a smartphone as part of day-to-day
life. Daily activities such as reading the newspaper, receiving updates and social networking,
as well as home automation control, vehicle security, human body anatomy and health
maintenance, have all been packaged as applications that can easily be installed on a
hand-held smartphone. This project approaches robotic movement control through the
smartphone in the same way.

Hence a dedicated application is created to control the embedded robotic hardware. The
application controls the movement of the robot. The embedded hardware is built around an
Arduino and is controlled by a smartphone running the Android platform. The Arduino
receives command data from the smartphone over the serial link and drives the robot's motors
through the motor driver. The robot can move forward, reverse, left and right. The smartphone
is interfaced to the device over Bluetooth: an HC-05 Bluetooth module is connected to the
Arduino to receive commands from the phone.
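The mapping from a received command character to the motor-driver direction inputs can be sketched as plain C++ logic. The single-character command set ('F', 'B', 'L', 'R', anything else meaning stop) is an assumption matching the convention of common Android Bluetooth RC apps, not something fixed by the hardware:

```cpp
// Hypothetical pin states for the motor driver's direction inputs.
// in1/in2 drive the left motor, in3/in4 the right motor.
struct MotorPins {
    bool in1, in2, in3, in4;
};

// Map a single-character command received over Bluetooth to motor states.
MotorPins decodeCommand(char cmd) {
    switch (cmd) {
        case 'F': return {true,  false, true,  false};  // both motors forward
        case 'B': return {false, true,  false, true};   // both motors reverse
        case 'L': return {false, true,  true,  false};  // left back, right fwd: turn left
        case 'R': return {true,  false, false, true};   // left fwd, right back: turn right
        default:  return {false, false, false, false};  // 'S' or unknown: stop
    }
}
```

On the real board each struct field would be written to the corresponding driver input pin with `digitalWrite`.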

A wireless camera is mounted on the robot body for spying purposes, working even in
complete darkness by using infrared lighting. Nowadays security is a major concern in every
organization, and to address this issue organizations use surveillance cameras. Their
limitation is that an operator must watch the stream from the cameras and take the
corresponding decisions.
The use of camera-based surveillance has extended from security to tracking, environment
and threat analysis, and more. Using the power of modern computing and hardware it is
possible to automate this process. The emergence of machine learning, deep learning and
computer vision tools has made it efficient and feasible for general-purpose use. So instead of
relying on human support for monitoring and insight generation, we can let the processor and
a machine learning system do the task in a more efficient and error-free way. Below we
mention a few approaches that helped us in addressing this problem.
CHAPTER - 2
LITERATURE SURVEY

Traditionally, surveillance monitoring has been done from a large control room and with a
large amount of manpower. Nowadays, however, a surveillance system can be monitored
over an online network. This type of monitoring is less time-consuming and reduces the
manpower required. Moreover, it gives users the flexibility to monitor their properties
from wherever they want, as long as they have network access. The security of lives and
properties is an aspect of life which cannot be toyed with. Governments and individuals
want to know the condition of their highly valued properties at every moment,
even though these properties may be located in different places across the globe.
Surveillance is the monitoring of behaviour, activities, or other changing
information, usually of people, for the purpose of influencing, managing, directing, or
protecting them. Alternatively, surveillance can be the observation of individuals or
groups by government organizations, but it can also refer to disease surveillance, which
monitors the progress of a disease in a community without directly observing
individuals. This work is concerned with the configuring, interfacing and networking of
IP-based cameras for the design of real-time security surveillance systems. The word
surveillance may be applied to observation from a distance by means of electronic
equipment such as CCTV and IP cameras, or to the interception of electronically
transmitted information such as Internet traffic and/or phone calls.[1]

Internet-enabled cameras pervade daily life, generating a huge amount of data, but most
of the video they generate is transmitted over wires and analysed offline with a human
in the loop. The ubiquity of cameras limits the amount of video that can be sent to the
cloud, especially on wireless networks where capacity is at a premium. In this paper, we
present Vigil, a real-time distributed wireless surveillance system that leverages edge
computing to support real-time tracking and surveillance in enterprise campuses, retail
stores, and across smart cities. Vigil intelligently partitions video processing between
edge computing nodes co-located with cameras and the cloud to save wireless capacity,
which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame
prioritization and traffic scheduling algorithms further optimize Vigil’s bandwidth
utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi
networks. Depending on the level of activity in the scene, experimental results show
that Vigil allows a video surveillance system to support a geographical area of coverage
between five and 200 times greater than an approach that simply streams video over the
wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the
default equal throughput allocation strategy of Wi-Fi by delivering up to 25% more
objects relevant to a user’s query.[2]
Initially, security systems such as RFID, OTP-based systems, and biometrics
were used to prevent unauthorized access. But people soon realized that merely
preventing unauthorized access was not enough, and that systems were needed that
would aid in catching the intruder while also unfolding their motives.
Similar PIR-based systems triggered camera and sensor networks on motion detection,
then uploaded captures to a cloud server and sent alerts using GSM technology. All of
the above solutions were heavily hardware-dependent and therefore subject to a high
risk of failure. These solutions also usually require a PC along with a Wi-Fi DSP kit
that must remain on 24 hours a day, which leads to higher power consumption and
also incurs additional charges for every alert sent.[3]

An automated video surveillance system is most important in the field of security. The task
of surveillance is to detect and track moving objects in a video sequence. Nowadays,
a video surveillance system is an important security asset for commercial, law
enforcement and military applications. In video surveillance, detection of moving
objects from a video is necessary for object classification, target tracking, activity
recognition as well as behaviour understanding. Automated Video Surveillance (AVS)
technologies provide the capability of automatically detecting security incidents or
potentially threatening events taking place within the view of cameras. Simple systems
detect motion in a camera’s field of view. More advanced systems offer features such
as object identification, tracking and analysis, perimeter intrusion detection, and
abandoned/removed object recognition.[4]

A visual surveillance system is basically used for the analysis and explanation of object
behaviours. It consists of static and moving object detection and video tracking to
understand the events that occur in a scene. The most important objective of this survey
paper is to determine the various methods for static and moving object detection as well
as for tracking moving objects. Any video scene contains objects that can be found
by an object detection technique. There are various classes of detected objects, such as
trees, clouds, people and other moving objects. Detecting moving objects is very
challenging for any video surveillance system. Object tracking is used in higher-level
applications to find the area where objects are located and the shape of the objects in
each frame.[5]

Digital image mosaicking has been studied for several decades, starting from the
mosaicking of aerial and satellite pictures, and now expanding into the consumer
market for panoramic picture generation. Its success depends on two key components:
image registration and image blending. The first aims at finding the geometric
relationship between the to-be-mosaicked images, while the latter is concerned with
creating a seamless composition. Many researchers have developed techniques for the
special case of document image mosaicking. The basic idea is to
create a full view of a document page, often too large to capture in a single scan or
a single frame, by stitching together many small patches. If the small images are
obtained through flatbed scanners, image registration is somewhat easier
because the overlapping part of
two images differs only by a 2D Euclidean transformation. However, if the images are
captured by cameras, the overlapping images differ by a projective transformation.
Virtually all reported work that we are aware of on document mosaicking using
cameras imposes some restrictions on the camera position to avoid perspective
distortion.[6]

This work describes a machine learning approach for visual object detection which is capable
of processing images extremely rapidly while achieving high detection rates. It is distinguished
by three key contributions. The first is the introduction of a new image representation
called the “Integral Image” which allows the features used by our detector to be
computed very quickly. The second is a learning algorithm, based on AdaBoost, which
selects a small number of critical visual features from a larger set and yields extremely
efficient classifiers. The third contribution is a method for combining increasingly more
complex classifiers in a “cascade” which allows background regions of the image to be
quickly discarded while spending more computation on promising object-like regions.
The cascade can be viewed as an object specific focus-of-attention mechanism which
unlike previous approaches provides statistical guarantees that discarded regions are
unlikely to contain the object of interest. In the domain of face detection, the system
yields detection rates comparable to the best previous systems. Used in real-time
applications, the detector runs at 15 frames per second without resorting to image
differencing or skin colour detection.[7]

Image Recognition and detection is a classic machine learning problem. It is a very


challenging task to detect an object or to recognize an image from a digital image or a
video. Image Recognition has application in the various field of computer vision, some
of which include facial recognition, biometric systems, self-driving cars, emotion
detection, image restoration, robotics and many more. Deep Learning algorithms have
achieved great progress in the field of computer vision. Deep Learning is an
implementation of the artificial neural networks with multiple hidden layers to mimic
the functions of the human cerebral cortex. The layers of deep neural network extract
multiple features and hence provide multiple levels of abstraction; shallow networks, by
comparison, cannot extract or work on multiple features. A convolutional
neural network is a powerful deep learning algorithm capable of dealing with millions
of parameters while saving computational cost: it takes a 2D image as input,
convolves it with filters/kernels, and produces output volumes.[8]

This work applies a Convolutional Neural Network (CNN) to document image classification.
In particular, document image classes are defined by structural similarity. Previous approaches
rely on hand-crafted features for capturing structural information. In contrast, the authors
propose to learn features from raw image pixels using a CNN, motivated by the
hierarchical nature of document layout. Equipped with rectified
linear units and trained with dropout, the CNN performs well even when document
layouts present large inter-class variations. Experiments on challenging public datasets
demonstrate the effectiveness of the proposed approach.[9]
A breakthrough in building models for image classification came with the discovery that
a convolutional neural network (CNN) could be used to progressively extract higher- and
higher-level representations of the image content. Instead of pre-processing the data to derive
features like textures and shapes, a CNN takes just the image's raw pixel data as input and
"learns" how to extract these features, and ultimately infer what object they constitute. To
start, the CNN receives an input feature map: a three-dimensional matrix where the size of
the first two dimensions corresponds to the length and width of the images in pixels. The size
of the third dimension is 3 (corresponding to the 3 channels of a colour image: red, green, and
blue). The CNN comprises a stack of modules, each of which performs three operations.[10]
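The "stack of modules" described above typically alternates convolutional and pooling layers, and each convolution's output size follows from the input size, kernel size, stride and padding by a standard formula. A small helper makes the arithmetic concrete (the parameter names here are ours, chosen for illustration):

```cpp
// Output spatial size of one convolution (or pooling) operation:
// out = (in + 2*padding - kernel) / stride + 1   (integer division)
int convOutputSize(int in, int kernel, int stride, int padding) {
    return (in + 2 * padding - kernel) / stride + 1;
}
```

For example, a 28-pixel-wide input convolved with a 5x5 kernel at stride 1 and no padding yields a 24-pixel-wide feature map, while "same" padding (e.g. a 3x3 kernel with padding 1, stride 1) preserves the input size.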
CHAPTER - 3

BLOCK DIAGRAM & SPECIFICATIONS

3.1 Block Diagram:

Fig 3.1 Block Diagram


3.2 Hardware:

Fig 3.2 Circuit Diagram



3.3 Specifications:
1. ESP32-CAM

Fig 3.3 ESP32-CAM

The ESP32-CAM is a small, low-power camera module based on the ESP32. It
comes with an OV2640 camera and provides an onboard TF card slot.

The ESP32-CAM can be widely used in intelligent IoT applications such as wireless video
monitoring, Wi-Fi image upload, QR identification, and so on.

Specifications:

 WIFI module: ESP-32S


 Processor: ESP32-D0WD
 Built-in Flash: 32Mbit
 RAM: Internal 512KB + External 4M PSRAM
 Antenna: Onboard PCB antenna
 WIFI protocol: IEEE 802.11 b/g/n/e/i
 Bluetooth: Bluetooth 4.2 BR/EDR and BLE
 WIFI mode: Station / SoftAP / SoftAP+Station
 Security: WPA/WPA2/WPA2-Enterprise/WPS
 Output image format: JPEG (OV2640 support only), BMP, GRAYSCALE
 Supported TF card: up to 4G
 Peripheral interface: UART/SPI/I2C/PWM
 IO port: 9
 UART baud rate: default 115200bps
 Power supply: 5V
 Transmitting power:
o 802.11b: 17 ±2dBm(@11Mbps)
o 802.11g: 14 ±2dBm(@54Mbps)
o 802.11n: 13 ±2dBm(@HT20,MCS7)
 Receiving sensitivity:
o CCK,1Mbps: -90 dBm
o CCK,11Mbps: -85 dBm
o 6Mbps(1/2 BPSK): -88 dBm
o 54Mbps(3/4 64-QAM): -70 dBm
o HT20,MCS7(65Mbps, 72.2Mbps): -67 dBm
 Power consumption:
o Flash off: 180mA@5V
o Flash on and brightness max: 310mA@5V
o Deep-Sleep: as low as 6mA@5V
o Modem-Sleep: as low as 20mA@5V
o Light-Sleep: as low as 6.7mA@5V
 Operating temperature: -20 ℃ ~ 85 ℃
 Storage environment: -40 ℃ ~ 90 ℃, <90%RH
 Dimensions: 40.5mm x 27mm x 4.5mm

Features
 Onboard ESP32-S module, supports WiFi + Bluetooth
 OV2640 camera with flash
 Onboard TF card slot, supports up to 4G TF card for data storage
 Supports WiFi video monitoring and WiFi image upload
 Supports multi sleep modes, deep sleep current as low as 6mA
 Control interface is accessible via pin header, easy to integrate and embed into
user products
2. L298N Dual H Bridge Motor Driver

Fig 3.4 L298N Motor Driver

L298N Dual H Bridge Motor Driver is a motor controller breakout board which is typically
used for controlling speed and direction of motors. It can also be used to control the
brightness of certain lighting projects such as high-powered LED arrays.
An H-bridge is a circuit that can drive current through a load in either direction and can be
controlled by pulse-width modulation.
Pin descriptions of L298N Dual H Bridge Motor Driver:

 Out 1: Motor A lead out 1


 Out 2: Motor A lead out 2
 Out 3: Motor B lead out 1
 Out 4: Motor B lead out 2
 GND: Ground
 5V: 5V Logic Input
 EnA: Enables PWM signal for Motor A
 In1: Input for Motor A lead out 1
 In2: Input for Motor A lead out 2
 In3: Input for Motor B lead out 1
 In4: Input for Motor B lead out 2
 EnB: Enables PWM signal for Motor B
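The In1/In2 (and In3/In4) pairs form the usual H-bridge truth table: driving one input high and the other low selects the direction of current through the motor, both inputs low stops it, and the PWM duty cycle on EnA/EnB sets the speed. A minimal sketch of that logic in plain C++ (the 0-255 duty-cycle range mirrors Arduino's `analogWrite` convention and is our assumption, not an L298N property):

```cpp
enum class Direction { Forward, Reverse, Stop };

// Pin states for one H-bridge channel of the L298N.
struct BridgeState {
    bool in1, in2;   // direction inputs (In1/In2 or In3/In4)
    int  enablePwm;  // duty cycle on EnA/EnB, 0-255 as with Arduino analogWrite
};

// Compute the pin states that drive one motor channel at the given speed.
BridgeState driveMotor(Direction dir, int speed) {
    if (speed < 0)   speed = 0;    // clamp to the valid duty-cycle range
    if (speed > 255) speed = 255;
    switch (dir) {
        case Direction::Forward: return {true,  false, speed};
        case Direction::Reverse: return {false, true,  speed};
        default:                 return {false, false, 0};  // both low: motor stops
    }
}
```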

L298N Dual H Bridge Motor Driver - General Specifications

 Dual H Bridge Motor Driver


 L298N motor driver IC
 Drives up to 2 bidirectional DC motors
 Integrated 5V power regulator
 5V – 35V drive voltage
 2A max drive current

L298N Dual H Bridge Motor Driver - Technical Specifications

 Double H Bridge Drive Chip: L298N


 Logical Voltage: 5V
 Drive Voltage: 5V-35V
 Logical Current: 0-36mA
 Drive current: 2A (MAX single bridge)
 Max Power: 25W
 Dimensions: 43 x 43 x 26mm
 Weight: 26g

Features:

The L298N is an integrated circuit in a 15-lead Multiwatt package, capable of high-voltage
operation. It is a high-current dual full-bridge driver designed to accept standard TTL logic
levels. It can drive inductive loads, e.g. relays, solenoids, and DC and stepping motors.
Its basic features are:
 Maximum supply voltage 46V
 Maximum output DC current 4A
 Low saturation voltage
 Over-temperature protection
 Logical “0” Input Voltage up to 1.5 V
3. HC-05 Bluetooth Module

Fig 3.5 HC-05 Bluetooth Module


It is a class-2 Bluetooth module with the Serial Port Profile, which can be configured as either
master or slave: a drop-in replacement for wired serial connections with transparent usage.
You can use it simply as a serial-port replacement to establish a connection between an MCU
or PC and your embedded project.

HC-05 Specification:

 Bluetooth protocol: Bluetooth Specification v2.0+EDR


 Frequency: 2.4GHz ISM band
 Modulation: GFSK (Gaussian Frequency Shift Keying)
 Emission power: ≤4dBm, Class 2
 Sensitivity: ≤-84dBm at 0.1% BER
 Speed: Asynchronous: 2.1Mbps (Max) / 160 kbps, Synchronous: 1Mbps/1Mbps
 Security: Authentication and encryption
 Profiles: Bluetooth serial port
 Power supply: +3.3VDC 50mA
 Working temperature: -20 °C to +75 °C
 Dimension: 26.9mm x 13mm x 2.2 mm
Features:

 The HC-05 follows the Bluetooth V2.0+EDR protocol (EDR stands for Enhanced
Data Rate).
 Its operating frequency is the 2.4 GHz ISM band.
 The HC-05 uses the CSR BlueCore 04-External single-chip Bluetooth system with
CMOS technology.
 This module follows the IEEE (Institute of Electrical and
Electronics Engineers) 802.15.1 standard protocol.
 Dimensions of the HC-05 breakout board are 12.7 mm x 27 mm.

 Its operating voltage is 5 V.

 It sends and receives data over UART, which is also used for setting the baud rate.

 It has -80 dBm sensitivity.

 The module uses Frequency-Hopping Spread Spectrum (FHSS), a technique by
which a radio signal is transmitted across different frequency channels.

 The module can work in master or slave mode.

 The module can easily be connected to a laptop or mobile phone via Bluetooth.
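Because the HC-05 link is transparent, bytes written by the phone appear verbatim on the MCU's UART. Many Android terminal apps append a carriage return or newline to each command, so a robust reader skips control characters and keeps only the latest printable byte. A plain C++ sketch of that filtering (the single-character protocol is our assumption, not part of the HC-05):

```cpp
#include <string>
#include <cctype>

// Scan a chunk of bytes read from the UART and return the most recent
// printable, non-whitespace command character; if none arrived, keep
// the current command unchanged.
char latestCommand(const std::string& uartBytes, char current) {
    for (char c : uartBytes) {
        unsigned char u = static_cast<unsigned char>(c);
        if (std::isprint(u) && !std::isspace(u)) {
            current = c;  // newest printable byte wins
        }
    }
    return current;
}
```

On the Arduino side this would be fed from `Serial.read()` in the main loop; here it is a standalone function so the filtering logic can be inspected on its own.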

4. The Arduino Uno

Fig 3.6 Arduino Uno


Arduino Uno is a microcontroller board based on the ATmega328P (datasheet). It has 14
digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16
MHz quartz crystal, a USB connection, a power jack, an ICSP header and a reset button. It
contains everything needed to support the microcontroller; simply connect it to a computer
with a USB cable or power it with an AC-to-DC adapter or battery to get started. You can
tinker with your UNO without worrying too much about doing something wrong. The worst-
case scenario is that you would have to replace the chip and start again :)

Arduino Uno Specification

 Microcontroller: ATmega328P
 Operating Voltage: 5V
 Input Voltage (recommended): 7-12V
 Input Voltage (limit): 6-20V
 Digital I/O Pins: 14 (of which 6 provide PWM output)
 PWM Digital I/O Pins: 6
 Analog Input Pins: 6
 DC Current per I/O Pin: 20 mA
 DC current for 3.3V Pin: 50 mA
 Flash Memory: 32 KB (ATmega328P) of which 0.5 KB used by bootloader
 SRAM: 2 KB (ATmega328P)
 EEPROM: 1 KB (ATmega328P)
 Clock Speed: 16 MHz
 LED_BUILTIN: 13
 Length: 68.6 mm
 Width: 58.4 mm
 Weight: 25 g

Features:

 Microcontroller: ATmega328
 Operating Voltage: 5V
 Input Voltage (recommended): 7-12V
 Input Voltage (limits): 6-20V
 Digital I/O Pins: 14 (of which 6 provide PWM output)
 Analog Input Pins: 6
 DC Current per I/O Pin: 40 mA
 DC Current for 3.3V Pin: 50 mA
 Flash Memory: 32 KB of which 0.5 KB used by bootloader
 SRAM: 2 KB (ATmega328)
 EEPROM: 1 KB (ATmega328)
 Clock Speed: 16 MHz
3.4 PIN DIAGRAM

Fig 3.7 Pin Diagram

Pin Category: Name(s): Description

 Power pins: 5V, GND, 3.3V: Regulated 5 V or 3.3 V power can be supplied to these
pins to power the board; the GND pins are ground.
 Power output pin: VCC: Used as an output power pin; can output either 3.3 V or 5 V.
 Serial pins: GPIO 1, GPIO 3: Used to communicate with other boards and to upload
code.
 Micro SD pins: GPIO 14, GPIO 15, GPIO 2, GPIO 4, GPIO 12, GPIO 13: Used to
connect a micro-SD card for storing images and videos.
 Flashlight: GPIO 4, GPIO 33: GPIO 4 is used for the flash LED and GPIO 33 for the
red indicator light.
CHAPTER - 4

SOFTWARE USED

Fig 4.1 Software Used


Arduino is an open-source software and hardware company that designs and manufactures
single-board microcontrollers and kits for building digital devices. The boards they develop,
termed Arduino boards, can read inputs such as depth information from a radar, light on a
sensor, or the press of a button, and convert them into outputs such as activating an LED,
motor, or buzzer. Different variants of the Arduino board are available on the market, e.g.
Nano, Uno, Mega; each varies in shape and size. Arduino also provides a software IDE for
its boards, which helps the developer program the board in a C/C++-based language; the IDE
also gives the user the liberty to send input instructions directly from the computer.
CHAPTER - 5
CONCLUSIONS AND RESULT
Conclusion: We have successfully implemented a working wireless video
surveillance robot controlled using an Android mobile device. The robot is successfully
controlled by the Android application through wireless Bluetooth technology, and the
real-time video feed is successfully delivered over Wi-Fi to our designed
Android application.

Fig 5.1 Project’s Image

Fig 5.2 Project’s output


Future Scope:
 Surveillance is needed in almost every field.
 It could be a great solution to various problems or situations where wireless
surveillance is needed; our project has tremendous scope as it uses the latest
technology on the market.
 Our application uses the Android OS, which is currently the most widely used OS
and also has great future scope.
 The surveillance robot can be controlled remotely using the Android application;
this gives it huge scope for future applications.
CHAPTER 6

REFERENCES

[1] Michael F. Adaramola, Michael A. K. Adelabu, "Implementation of Closed-circuit
Television (CCTV) Using Wireless Internet Protocol (IP) Camera," School of Engineering,
Lagos State Polytechnic, Ikorodu, P.M.B. 21,606, Ikeja, Lagos, Nigeria.

[2] Tan Zhang, Aakanksha Chowdhery, Paramvir Bahl, Kyle Jamieson, Suman Banerjee
“The Design and Implementation of a Wireless Video Surveillance System” University of
Wisconsin-Madison, Microsoft Research Redmond, University College London

[3] C. M. Srilakshmi, Dr. M. C. Padma, "IoT Based Smart Surveillance System,"
International Research Journal of Engineering and Technology (IRJET), Volume 04,
Issue 05, May 2017.

[4] Mrs. Prajakta Jadhav, Mrs. Shweta Suryawanshi, Mr. Devendra Jadhav, "Automated
Video Surveillance," International Research Journal of Engineering and Technology (IRJET),
Volume 04, Issue 05, May 2017, e-ISSN: 2395-0056, p-ISSN: 2395-0072.

[5] Pawan Kumar Mishra, Nalina P., Muthukannan K., "A Study on Video Surveillance
System for Object Detection and Tracking."

[6] Jian Liang “Camera-Based Document Image Mosaicing” 18th International Conference
on Pattern Recognition (ICPR'06)

[7] Paul Viola, Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple
Features," IEEE, 2001.

[8] Rahul Chauhan, Kamal Kumar Ghanshala, R. C. Joshi, "Convolutional Neural Network
(CNN) for Image Detection and Recognition," IEEE, 2018.

[9] Le Kang, Jayant Kumar, Peng Ye, Yi Li, David Doermann, "Convolutional Neural
Networks for Document Image Classification," IEEE, 2014.
