Smart Stick For Visually Impaired
Abstract:- Eyes are one of the most precious blessings of nature. Blind people face a lot of difficulty in carrying out normal life activities. A lot of work has been done to assist blind people so that they can complete their tasks by themselves and do not have to depend on other people. Keeping this motivation in mind, in this project we have proposed and developed an intelligent blind stick.

The smart walking stick helps visually challenged people to identify obstacles and provides assistance in reaching their destination. There are multiple walking sticks and systems that help the user move around indoor and outdoor locations, but none of them provide run-time autonomous navigation along with object detection and identification alerts as well as face and voice recognition. The stick works on the technologies of IoT, echolocation, image processing, artificial intelligence and navigation to detect near and distant obstacles for the user. If the blind person falls down or any other problem occurs, the device will alert an authorized person. The system uses voice recognition to identify near and dear ones.
Keywords:- Image Processing, Sensors, YOLO, Convolutional Neural Network, Recurrent Neural Network, Optical Character Recognition, GNSS.

I. INTRODUCTION
Blindness is defined as the state of being sightless, in which both eyes suffer from complete loss of vision. The disability is mostly caused by diabetes, macular degeneration, traumatic injuries, infection and glaucoma. The World Health Organization has estimated that about 285 million people worldwide are visually impaired, of which 39 million are blind while another 246 million have low vision. The number of people suffering from loss of sight is increasing dramatically. The Royal National Institute of Blind People (RNIB) has predicted that by 2020 the number of visually impaired people in the UK will be over 2 million.

There are many guidance systems existing for blind people. They usually have a special stick or cane which they use for guidance. With the advances of modern technology, many different types of devices are available to support the mobility of visually challenged people. These mobility aids are generally known as Electronic Travel Aids (ETAs). ETAs are used to detect obstacles and roads, deliver this information to the user, and thereby reduce, to some extent, the difficulties they face.

Blind people face various difficulties in their life. The system that will be developed will help them in the following ways:

Provide features such as object recognition.
Provide the distance between an object and the user.
Send a message to relatives in an emergency.
Provide features ensuring safety and comfort.
Help them to overcome all obstacles in their way and live confidently.

II. LITERATURE SURVEY

'Smart cane with range notification', as the name suggests, only calculates the distance between an object and the user and communicates it to the user through earphones. A cane is also specified which is said to have a vibrator for detection. The distance is calculated using an ultrasonic sensor [1].

From this paper we come to know of a cane named "Assistor" which uses an image sensor, an ultrasonic sensor and a mobile-based navigation system. However, the use of a mobile phone here increases the user's dependency for navigation purposes [2].

Level detection is done using an infrared sensor, which operates by measuring the distance to the target from the reflection of an infrared beam. Ultrasonic sensors are used for object detection and GPS is used for SMS services. Although the system is quite complete, vibrators are used for feedback, which can be confusing in chaotic places, and the use of GSM with Arduino is a bit slow and slows the system down [3].
The method used here is the "Pulse Reflection Method", in which the distance to an object is measured using the transmitted pulse and the received echo pulse. The relationship between the distance up to the object L and the round-trip time T can be expressed by the distance calculation

L = (C × T) / 2

where C is the propagation speed of the pulse (the speed of sound for an ultrasonic sensor).
TTS is a factor that can be used for text-to-speech applications. Various APIs are available to fulfil the need for this conversion [13].

A. Image Captioning:
Image captioning is the process of converting an input image into a textual description with an artificial system. It is an intermediate step in converting a visual input to an audible output.
Basically, it works with two internal networks: a CNN (the encoder) and an RNN (the decoder). A technique called Tiny YOLO is used in the system.

Fig 4:- Working of CNN
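Since the paper only names the CNN-encoder / RNN-decoder split, the following is a minimal sketch of such a captioning model in Keras; the image feature size (2048), the embedding/LSTM widths (256), the vocabulary size and the caption length are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a merge-style captioning model: a CNN acts as the encoder
# (here, pre-extracted 2048-d image features) and an LSTM-based RNN acts as the
# decoder that emits the caption word by word. All sizes are assumptions.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size = 5000     # assumed size of the caption vocabulary
max_caption_len = 20  # assumed maximum caption length, in tokens

# Encoder branch: image feature vector from a pre-trained CNN
image_input = Input(shape=(2048,))
image_dense = Dense(256, activation="relu")(Dropout(0.5)(image_input))

# Decoder branch: the partial caption fed as a token sequence
caption_input = Input(shape=(max_caption_len,))
caption_embed = Embedding(vocab_size, 256, mask_zero=True)(caption_input)
caption_lstm = LSTM(256)(Dropout(0.5)(caption_embed))

# Merge both branches and predict the next word of the caption
merged = add([image_dense, caption_lstm])
output = Dense(vocab_size, activation="softmax")(Dense(256, activation="relu")(merged))

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.summary()
```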
YOLO (You Only Look Once):
YOLO (You Only Look Once) is a deep learning algorithm based on multiple convolutional layers that is used in the system for object detection. It is a very fast and accurate framework. Unlike previous CNN-based models, YOLO works in quite a different manner: those CNNs would not take the entire image into consideration but only the regions that have a chance of containing objects, whereas YOLO considers the entire image at once, creates bounding boxes around candidate objects and predicts a class probability for each bounding box. Boxes are drawn bold if their objects have a high probability, objects whose probability is above a defined threshold are displayed, and the remaining boxes are discarded.
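To make the thresholding behaviour concrete, here is a hedged sketch of Tiny-YOLO inference with OpenCV's DNN module; the file names (yolov3-tiny.cfg, yolov3-tiny.weights, coco.names, frame.jpg) and the 0.5 threshold are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical sketch of threshold-based YOLO detection with OpenCV's DNN module.
import cv2
import numpy as np

CONF_THRESHOLD = 0.5   # assumed probability threshold: boxes below this are discarded
NMS_THRESHOLD = 0.4

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
classes = open("coco.names").read().splitlines()

image = cv2.imread("frame.jpg")
h, w = image.shape[:2]

# YOLO looks at the entire image once: it is fed as a single 416x416 blob.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > CONF_THRESHOLD:
            # Detections are centre x, centre y, width, height relative to the image size.
            cx, cy, bw, bh = detection[0:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression keeps the best box per object; the rest are discarded.
keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    print(classes[class_ids[i]], confidences[i], (x, y, bw, bh))
```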
OCR:
OCR (Optical Character Recognition) is a widely used technology for recognizing the characters in an image or scanning the text of printed papers. OCR technology is used to convert virtually any kind of image containing text into machine-readable text data.
Once a scanned paper has gone through OCR processing, the text of the document can be edited with word processors such as MS Word or Google Docs.
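As an illustration of this step, the sketch below uses the Tesseract engine via pytesseract; this is an assumed library choice, since the paper does not name its OCR implementation.

```python
# Hypothetical OCR sketch using the Tesseract engine via pytesseract.
from PIL import Image
import pytesseract

def read_printed_text(image_path: str) -> str:
    """Return the machine-readable text recognized in a scanned page or photo."""
    return pytesseract.image_to_string(Image.open(image_path))

text = read_printed_text("scanned_page.jpg")
print(text)  # editable text that can be passed on to the TTS stage
```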
RNN (Recurrent Neural Network):
An RNN is a network generally used for text operations. LSTM (Long Short-Term Memory) is the model used within the RNN for our language-based model [12].
E.g. for the caption "A Man and a Horse.", our label and target sequences would be:

Label: [<start>, A, Man, and, a, Horse, .]
Target: [A, Man, and, a, Horse, ., <end>]

This is done so that our model understands the start and end of the labelled sequence.
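A minimal sketch of how such label/target pairs can be built (illustrative only; the tokenisation and helper name are assumptions):

```python
# Hypothetical helper showing how <start>/<end> tokens shift a caption into the
# label (decoder input) and target (decoder output) sequences described above.
def make_label_and_target(caption_tokens):
    label = ["<start>"] + caption_tokens   # what the RNN/LSTM sees at each step
    target = caption_tokens + ["<end>"]    # what it should predict at each step
    return label, target

label, target = make_label_and_target(["A", "Man", "and", "a", "Horse", "."])
print(label)   # ['<start>', 'A', 'Man', 'and', 'a', 'Horse', '.']
print(target)  # ['A', 'Man', 'and', 'a', 'Horse', '.', '<end>']
```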
TTS:
TTS (Text-To-Speech) can be used for converting the edited text to speech. Speech synthesis is the artificial synthesis of a human voice; it is the conversion of coded text into speech. The input given to the module is in text format and the output is delivered as a human voice. The EMIC-2 text-to-speech module is used in the system; it offers 9 different voices in 2 languages. The working is given below:
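As an illustrative sketch of that working (not the authors' code), the snippet below drives an EMIC-2 module over a serial port, assuming its 9600-baud interface and its documented 'Nx' (voice select) and 'S<text>' (speak) commands; the port name is a placeholder.

```python
# Hypothetical sketch of driving an EMIC-2 text-to-speech module over a serial port.
# Assumptions: 9600-baud link, 'Nx' selects one of the 9 voices, 'S<text>' speaks text,
# and the module answers ':' when it is ready for the next command.
import serial

def speak(text: str, voice: int = 0, port: str = "/dev/ttyUSB0") -> None:
    """Send edited/recognized text to the EMIC-2 so it is read out loud."""
    with serial.Serial(port, 9600, timeout=5) as emic:
        emic.write(b"\n")                    # wake the module
        emic.read_until(b":")
        emic.write(f"N{voice}\n".encode())   # choose one of the 9 voices (0-8)
        emic.read_until(b":")
        emic.write(f"S{text}\n".encode())    # speak the text
        emic.read_until(b":")                # wait until speech has finished

speak("Obstacle ahead, two metres.")
```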
Fig 3:- Working of Water Detector.

C. GNSS (Global Navigation Satellite System):
GNSS is a term for satellite navigation systems that provide autonomous geo-spatial positioning with global coverage. It gives latitude, longitude, altitude, etc. using time signals transmitted by radio from satellites. GPS and GNSS are not the same thing: GPS is one particular GNSS constellation.

The use of GNSS provides the user with high accuracy and availability at any time; if any satellite fails or does not work properly, GNSS takes the signal from other satellites.
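Since the paper does not specify how the receiver is read, here is a hedged sketch that extracts latitude, longitude and altitude from a standard NMEA GGA sentence, the format most GNSS receivers emit; the sample sentence is illustrative.

```python
# Hypothetical sketch of reading latitude/longitude/altitude from a GNSS receiver's
# NMEA "GGA" sentence (the paper does not specify the receiver or its protocol).
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA ddmm.mmmm / dddmm.mmmm to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gga(sentence: str):
    """Return (latitude, longitude, altitude_m) from a $GxGGA sentence."""
    fields = sentence.split(",")
    lat = nmea_to_decimal(fields[2], fields[3])
    lon = nmea_to_decimal(fields[4], fields[5])
    altitude = float(fields[9])
    return lat, lon, altitude

# Example GGA sentence (illustrative values only):
print(parse_gga("$GNGGA,110617.00,4124.8963,N,08151.6838,W,1,05,2.1,280.2,M,-34.0,M,,*75"))
```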
D. GSM (Global System for Mobile Communication):
GSM (Global System for Mobile communication) is a specialized module used for communication purposes. It holds a SIM (Subscriber Identity Module), and the GSM module works on a subscription with a network operator. The GSM module is connected to the microcontroller for communication and for sending messages. For these operations to be performed, the GSM module needs "Extended AT Command set" support for sending and receiving messages. The AT commands are sent by the microcontroller to the module, and the module sends back an Information Response, i.e. the information requested by the action initiated by the AT command, followed by a result code.
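For illustration, the sketch below sends an emergency SMS through a GSM modem using the standard Extended AT commands (AT+CMGF for text mode, AT+CMGS to send); the serial port, timing delays, message text and phone number are placeholder assumptions, not values from the paper.

```python
# Hypothetical sketch of sending an emergency SMS through a GSM modem with the
# Extended AT command set (AT+CMGF selects text mode, AT+CMGS sends the message).
import time
import serial

def send_emergency_sms(message: str, number: str, port: str = "/dev/ttyUSB1") -> None:
    """Ask the GSM module to deliver `message` to `number` as a text-mode SMS."""
    with serial.Serial(port, 9600, timeout=5) as gsm:
        gsm.write(b"AT\r")                     # attention command; module should answer OK
        time.sleep(0.5)
        gsm.write(b"AT+CMGF=1\r")              # switch the module to SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)                        # module replies with a '>' prompt
        gsm.write(message.encode() + b"\x1a")  # Ctrl+Z terminates and sends the message
        print(gsm.read(200).decode(errors="ignore"))  # information response / result code

send_emergency_sms("Emergency: the user may have fallen and needs assistance.", "+10000000000")
```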