
Volume 4, Issue 5, May – 2019, International Journal of Innovative Science and Research Technology
ISSN No: 2456-2165

Smart Stick for Visually Impaired


Mukesh Bhati¹, Shubham Jagtap², Omkar Bhudhar³, Bajarang Jadhav⁴
¹,²,³,⁴Department of Computer Engineering,
Dr. D. Y. Patil School of Engineering and Technology,
Pune-412105, Maharashtra, India

Abstract:- Eyes are among the most precious gifts of nature. Blind people face great difficulty in carrying out everyday activities. Much work has been done to ease their lives so that they can complete tasks by themselves without having to depend on other people. With this motivation in mind, in this project we have proposed and developed an intelligent blind stick.

The smart walking stick helps visually challenged people identify obstacles and assists them in reaching their destination. There are several walking sticks and systems that help the user move around indoor and outdoor locations, but none of them provides run-time autonomous navigation together with object detection and identification alerts as well as face and voice recognition. The stick works on IoT, echolocation, image processing, artificial intelligence and a navigation system to detect near and distant obstacles for the user. If the blind person falls down or any other problem occurs, the device alerts an authorized person. The system uses voice recognition to identify near and dear ones.

Keywords:- Image Processing, Sensors, YOLO, Convolutional Neural Network, Recurrent Neural Network, Optical Character Recognition, GNSS.

I. INTRODUCTION

Blindness is defined as the state of being sightless, in which both eyes suffer a complete loss of vision. The disability is mostly caused by diabetes, macular degeneration, traumatic injuries, infection and glaucoma. The World Health Organization has estimated that about 285 million people worldwide are visually impaired, of whom 39 million are blind while another 246 million have low vision. The number of people suffering from loss of sight is increasing dramatically. The Royal National Institute of Blind People (RNIB) has predicted that by 2020 the number of visually impaired people in the UK will exceed 2 million.

There are many guidance systems in existence for blind people, who usually rely on a special stick or cane for guidance. With the advance of modern technology, many different kinds of devices are available to support the mobility of visually challenged people. These mobility aids are generally known as Electronic Travel Aids (ETAs). ETAs detect obstacles and roads, deliver that information to the user, and thereby reduce, to some extent, the difficulties they face.

Blind people face various difficulties in their lives. The system developed here will help them in the following ways:

• Provide features such as object recognition.
• Provide the distance between an object and the user.
• Send a message to relatives in an emergency.
• Provide features ensuring safety and comfort.
• Help them overcome obstacles in their way and live confidently.

II. LITERATURE SURVEY

'Smart cane with range notification', as the name suggests, only calculates the distance between the object and the user and communicates it through earphones. A cane fitted with a vibrator for detection is also specified. The distance is calculated using an ultrasonic sensor [1].

From this paper we learn of a cane named "Assistor", which uses an image sensor, an ultrasonic sensor and a mobile-based navigation system. However, the use of a mobile phone here increases the user's dependency for navigation purposes [2].

Level detection is done using an infrared sensor, which operates by measuring the distance to the target through the reflection of an infrared beam. Ultrasonic sensors are used for object detection and GPS is used for SMS services. Although the system works well, the vibrators it uses can be confusing in chaotic places, and using GSM with an Arduino is slow and makes the whole system sluggish [3].

The task of the image sensor is to determine precisely what the obstacles are. The image sensor captures an image at regular intervals and recognizes the objects. Bluetooth is used with a mobile application, which again increases the user's dependency [4].

BlinDar is an ETA (Electronic Travel Aid) developed for blind people. An Arduino Mega is used for interfacing the sensors. Cloud storage is also used, which is a fine idea, but it requires good internet connectivity, which is not strong in some areas [5].

Most ETAs use sensors for their functioning. Similarly, a water sensor is used here to detect water in the path and prevent slipping, sounding a buzzer, and an RF sensor is used to locate the stick if it is lost [6].

A Bluetooth module is used to send signals to the user through a mobile phone connected via the module, and a text-to-speech converter notifies the user when the measured distance crosses a set threshold. Here too a mobile phone is used for the text-to-speech conversion, which shows dependency [7].

An AI-based stick is used only for making smart decisions, showing users the correct path and keeping them away from obstacles [8].

Using TOF (Time of Flight), the position and angle of the sensors on the stick, threshold limits on distances from the ultrasonic sensor, and suitable algorithms, the distance of the object is calculated [9].

Speech or voice recognition technology can be used to implement man-machine communication based on commands or text. It is a key technology in most man-machine interfaces; pattern-matching technology can also be used, and AI plays an important role in the decision-making aspects [10].

A CNN is a concept from artificial intelligence that can be used for operations on images, such as detecting an object in an image and recognizing it; it can also serve as part of image captioning and many other applications, which is an important factor for this project [11].

An RNN is a concept from artificial intelligence that can be used for operations on text, such as converting text to speech; it too can serve as part of image captioning and many other applications, which is an important factor for this project [12].

TTS is a facility that can be used for text-to-speech applications. Various APIs are available to fulfil the need for this conversion [13].

III. SYSTEM AND FUNCTIONAL DESCRIPTION

Fig 1:- Block Diagram of Stick.

A. Ultrasonic Sensor:
Ultrasound waves are waves that are not audible to the human ear; they have a frequency greater than 20 kHz. Ultrasonic sensors detect objects or obstacles using these waves, similarly to SONAR or RADAR.

Following Laplace, the speed of sound in air is taken as 343 m/s. The function of the transducers is to convert electrical energy into the mechanical energy of the ultrasonic vibrations.

Fig 2:- Working of UltraSonic Sensor

The method used here is the "Pulse Reflection Method", in which we measure the distance to an object using the transmitted pulse and the received pulse. The relationship between the distance L to the object and the time T is given by the following distance calculation:

L = 0.5 × T × C

where L = distance, T = time between emission and reception, and C = speed of sound (343 m/s).
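
To make the pulse-reflection calculation concrete, here is a minimal Python sketch for a Raspberry Pi with an HC-SR04-style ultrasonic sensor; the GPIO pin numbers are illustrative assumptions, not values given in the paper.

```python
# Minimal sketch of the pulse-reflection distance calculation L = 0.5 * T * C
# on a Raspberry Pi with an HC-SR04-style ultrasonic sensor.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # hypothetical BCM pin assignments
SPEED_OF_SOUND = 343.0       # m/s, as used in the paper

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_m():
    # Emit a short trigger pulse so the sensor transmits an ultrasonic burst.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)

    # Time the echo pulse: T = time between emission and reception.
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()

    T = stop - start
    return 0.5 * T * SPEED_OF_SOUND   # L = 0.5 * T * C

try:
    while True:
        print("Distance: %.2f m" % measure_distance_m())
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```
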
For the infrared intensity detector, the output potential is

VO = (1 + R3/R2) × VIN

When the intensity of the emitter LED is high, more energy falls on the detector LED and the detector's resistance is low, so the potential VIN is high. Similarly, when the intensity is low, the detector's resistance is high and the potential is low. This potential is compared with a reference potential; according to this comparison, the output is 1 or 0, i.e. 'ON' or 'OFF'.
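
As a quick numeric illustration of this comparator logic (not code from the paper), the following sketch computes VO for assumed resistor values and thresholds it against an assumed reference potential:

```python
# Tiny numeric sketch of the comparator logic: VO = (1 + R3/R2) * VIN,
# compared against a reference potential to yield a binary ON/OFF output.
R3, R2 = 10_000.0, 10_000.0   # hypothetical resistor values (ohms)
V_REF = 2.5                   # hypothetical reference potential (volts)

def detector_output(v_in: float) -> int:
    v_o = (1 + R3 / R2) * v_in      # non-inverting amplifier gain
    return 1 if v_o > V_REF else 0  # 1 = 'ON', 0 = 'OFF'

print(detector_output(2.0))  # high intensity -> 1
print(detector_output(0.5))  # low intensity  -> 0
```
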
B. Water Detector:
The water detector detects the presence of water and alerts the user. The common device relies on the electrical conductivity of water to decrease the resistance across two contacts: when the contacts are bridged by water, it sounds an audible alarm.

Fig 3:- Working of Water Detector.
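
A minimal sketch of how such a detector could be polled on the Raspberry Pi, assuming the probe is wired to a digital input and a buzzer to an output; the pins and active level are hypothetical:

```python
# Hedged sketch of the water-detector alert: when the probe contacts are
# bridged by water the digital input goes high and a buzzer is driven.
import time
import RPi.GPIO as GPIO

WATER_PIN, BUZZER_PIN = 17, 27   # hypothetical BCM pins

GPIO.setmode(GPIO.BCM)
GPIO.setup(WATER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(BUZZER_PIN, GPIO.OUT)

try:
    while True:
        wet = GPIO.input(WATER_PIN) == GPIO.HIGH  # contacts bridged by water
        GPIO.output(BUZZER_PIN, wet)              # audible alarm while wet
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```
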
C. GNSS (Global Navigation Satellite System):
GNSS is the umbrella term for satellite navigation systems that provide autonomous geo-spatial positioning with global coverage. It yields latitude, longitude, altitude, etc. from time signals transmitted by radio from satellites. GPS and GNSS are not the same: GPS is one particular satellite navigation system.

GNSS provides the user with high accuracy and availability at all times. If any satellite fails or does not work properly, GNSS takes the signal from other satellites.
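
As an illustration (the paper does not name a specific receiver or library), a GNSS module that streams NMEA sentences over a serial port could be read as follows, using the third-party pyserial and pynmea2 packages; the port name is an assumption:

```python
# Illustrative sketch of reading latitude/longitude/altitude from a GNSS
# receiver that streams NMEA sentences over a serial port.
import serial
import pynmea2

with serial.Serial("/dev/ttyAMA0", 9600, timeout=1) as port:  # hypothetical port
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if line.startswith("$GPGGA") or line.startswith("$GNGGA"):
            fix = pynmea2.parse(line)  # GGA sentence: fix data
            print("lat=%.6f lon=%.6f alt=%s" %
                  (fix.latitude, fix.longitude, fix.altitude))
```
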

D. GSM (Global System for Mobile Communication):
GSM is a specialized module for communication purposes. It holds a SIM (Subscriber Identity Module) and works on a subscription to a network operator. The GSM module is connected to the microcontroller for communication and for sending messages. To perform these operations, the module needs "Extended AT Command set" support for sending and receiving messages. The AT commands are sent by the microcontroller to the module. The module sends back an Information Response, i.e. the information requested by the action initiated by the AT command, followed by a Result Code that reports whether the command executed successfully. Text messages may be sent through the module by interfacing only three signals of the module's serial interface with the microcontroller: TXD, RXD and GND. In this scheme, the RTS and CTS signals of the GSM modem's serial port interface are connected to each other.

The AT+CMGF command configures the GSM module in text mode. The system works in both manual and automatic mode: if an object is too near and the stick keeps raising alerts, the module sends messages to the stored mobile number.
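
A hedged sketch of this AT-command flow with pyserial; the serial device, baud rate and phone number are placeholders:

```python
# Sketch of sending an emergency SMS through a GSM modem using the AT
# command set described above. A real modem replies with result codes
# such as "OK" after each command.
import time
import serial

def send_sms(port: str, number: str, text: str) -> None:
    with serial.Serial(port, 9600, timeout=2) as modem:
        def at(cmd: str) -> bytes:
            modem.write((cmd + "\r").encode())
            time.sleep(0.5)
            return modem.read(modem.in_waiting or 64)  # response + result code

        at("AT")              # basic handshake
        at("AT+CMGF=1")       # configure text mode (as opposed to PDU mode)
        at('AT+CMGS="%s"' % number)
        modem.write(text.encode() + b"\x1a")  # Ctrl+Z terminates the message
        time.sleep(3)
        print(modem.read(modem.in_waiting or 64))

send_sms("/dev/ttyS0", "+911234567890", "Alert: obstacle too close to the user.")
```
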
E. Raspberry Pi 3 as a Microcontroller:
The microcontroller used is the Raspberry Pi. The reasons for choosing it are:

• Practical, portable and inexpensive.
• Ships with an operating system, which makes it easy to get started.
• Built-in Wi-Fi and Bluetooth 4.1 modules.
• A 2.5 A power supply can power more complex USB devices.
• 1 GB memory, 400 MHz GPU, 4 USB ports.
• HDMI video output with a 3.5 mm jack; 17 GPIO pins.
• 5 V power source via micro USB or the GPIO header.
• Weight: 45 g; dimensions: 85.60 mm × 56.5 mm.

IV. SOFTWARE REQUIREMENTS

A. Image Captioning:
Image captioning is the process of converting an input image into a textual description with an artificial system. It is an intermediate step in converting a visual input into an audible output.

Basically it works on two internal networks: a CNN (encoder) and an RNN (decoder).

A technique called Tiny YOLO is used in the system.
• YOLO (You Only Look Once):
YOLO is a deep learning algorithm based on multiple convolutional layers that is used in the system for object detection. It is a fast and accurate framework. Unlike previous CNN models, which consider only the grid cells likely to contain objects, YOLO considers the entire image, creates bounding boxes and predicts a class probability for each box. Boxes whose objects have a probability above a defined threshold are displayed; the rest are discarded.

Tiny YOLO is used in the system because it is faster than the original: it uses only nine convolutional layers, whereas the original version uses 24. The original is therefore slower than Tiny YOLO but more accurate.
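
As one possible realization (the paper names the technique, not a framework), Tiny YOLO can be run through OpenCV's DNN module; the cfg/weights/class-name file paths below are assumptions:

```python
# Minimal sketch of Tiny-YOLO object detection with OpenCV's DNN module.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
classes = open("coco.names").read().splitlines()

img = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:                  # det = [cx, cy, w, h, objectness, scores...]
        scores = det[5:]
        cls = int(np.argmax(scores))
        conf = float(scores[cls])
        if conf > 0.5:               # keep boxes above the defined threshold
            print("detected %s with probability %.2f" % (classes[cls], conf))
```
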

• CNN (Convolutional Neural Network):
A CNN is a network used for image classification. A pretrained CNN is used to extract the features from the input image, producing a feature vector. The CNN is used for the image-based model [11][12].

Fig 4:- Working of CNN

• RNN (Recurrent Neural Network):
An RNN is a network generally used for text operations. An LSTM (Long Short Term Memory) is the model used within the RNN for our language-based model [12].

For training the LSTM model, we predefine our label and target text.

E.g. for "A Man and a Horse.", the label and target would be:

Label: [<start>, A, Man, and, a, Horse, .]
Target: [A, Man, and, a, Horse, ., <end>]

This is done so that our model understands the start and end of a labelled sequence.
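
A small sketch of how such label/target pairs could be produced programmatically; the tokenization is deliberately simplistic and illustrative:

```python
# Build label/target pairs for training the LSTM language model: the label is
# the caption prefixed with <start>, the target is the caption suffixed with <end>.
def make_label_and_target(caption: str):
    tokens = caption.replace(".", " .").split()
    label = ["<start>"] + tokens          # input sequence fed to the LSTM
    target = tokens + ["<end>"]           # sequence the LSTM learns to predict
    return label, target

label, target = make_label_and_target("A Man and a Horse.")
print(label)   # ['<start>', 'A', 'Man', 'and', 'a', 'Horse', '.']
print(target)  # ['A', 'Man', 'and', 'a', 'Horse', '.', '<end>']
```
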

Fig 5:- Working of CNN and RNN Combined

B. Text-to-Speech Conversion:
Speech synthesis is the artificial synthesis of the human voice: the conversion of coded text into speech. The input given to the module is in text format and the output is in human voice form [13]. gTTS is used in the system; the working is given below:

Fig 6:- Working of Text to Speech Module.
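
A minimal gTTS usage sketch (gTTS requires an internet connection; the message text and output filename are illustrative):

```python
# Convert a text string to spoken audio with gTTS and save it as an MP3 file.
from gtts import gTTS

speech = gTTS(text="Obstacle ahead, two meters.", lang="en")
speech.save("alert.mp3")   # play back with any audio player, e.g. mpg123 alert.mp3
```
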

C. Speaking the Newspaper:
What if the person wants to read the newspaper? How can that be done here? Researching this, we found that it is possible using OCR and TTS.

• OCR:
OCR (Optical Character Recognition) is a widely used technology for recognizing the characters in an image or for scanning the text of printed papers. OCR is used to convert virtually any kind of image containing text into machine-readable text data. Once a scanned page has gone through OCR processing, the text of the document can be edited with word processors such as MS Word or Google Docs.

Fig 7:- Working of OCR

• TTS:
TTS (Text-To-Speech) is used to convert the edited text to speech. Speech synthesis is the artificial synthesis of the human voice: the conversion of coded text into speech. The input given to the module is in text format and the output is in human voice form. The EMIC-2 text-to-speech module is used in the system; it offers 9 different voices in 2 languages. The working is given below:

Fig 8:- Combined working steps of OCR and TTS
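
A sketch of this combined OCR-and-TTS pipeline, using pytesseract and gTTS as illustrative library choices; the paper names OCR and TTS generically (with EMIC-2 as a hardware option for the speech step), and the image filename here is hypothetical:

```python
# Combined OCR + TTS pipeline of Fig 8: recognize the text in a scanned
# newspaper image with Tesseract, then convert the recognized text to speech.
from PIL import Image
import pytesseract
from gtts import gTTS

text = pytesseract.image_to_string(Image.open("newspaper_page.jpg"))  # OCR step
print(text[:200])                                 # machine-readable, editable text

gTTS(text=text, lang="en").save("newspaper.mp3")  # TTS step
```
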
V. MATHEMATICAL MODEL

Ultrasonic sensors use the following calculation for processing distance:

L = 0.5 × T × C

where L is the distance between the object and the user, T is the time between emission and reception, and C is the speed of sound (343 m/s).

Let S be the system:
S = Smart stick for the visually impaired.

• For sensors:
Identify Is as the input to the system from the sensors, and P as the process applied to a sensor:
P_ultrasonic = {T, C, 0.5}
P_PIR = {I}
where T = time between emission and reception, C = speed of sound (343 m/s), 0.5 = constant, and I = intensity with which light is received.
Identify L as the output of the ultrasonic sensor: L_ultrasonic = {T, C, 0.5}.
Identify M as the output of the PIR sensor: M_PIR = {V}.
Consider A as the case of success: A = 1, where 1 = voice command to the user.
Consider B as the case of failure: B = 0, where 0 = give a failure message.

• For image to speech:
Identify O as the input: O = Q, where Q = create patterns for objects.
Identify Z as the process: Z = {d, t, o, s}
where d = compare the pattern of objects with the database, o = detect the object from the database using the pattern, t = if the pattern is new, train the system and save it in the database, and s = speech synthesis.
Identify X as the output: X = V, where V = voice command to the user.
S = {O, Z, X}
Consider A as the case of success: Ao = 1, where 1 = voice command to the user about the object.
Consider B as the case of failure: Bo = 0, where 0 = give a failure message.

• Entire system:
S = output of the system:
S = {P_ultrasonic, P_PIR, O, Z, X}
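
To make the success/failure cases concrete, here is a tiny illustrative sketch of the decision flow; the helper inputs are hypothetical placeholders, not part of the paper's model:

```python
# Decision flow of the model above: a recognized object yields a voice
# command (success, A = 1); otherwise a failure message (B = 0).
from typing import Optional

def system_output(distance_m: float, detected_object: Optional[str]) -> str:
    if detected_object is not None:      # case A: success -> 1
        return "Voice command: %s at %.1f meters." % (detected_object, distance_m)
    return "Failure message: object could not be identified."   # case B: 0

print(system_output(1.2, "chair"))
print(system_output(0.8, None))
```
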
VI. RESULTS AND CONCLUSIONS

The advantage of the device is that it responds to commands quickly, and patients who have lost their visual skills can make use of it. The main purpose of this project is to design an integrated system that lets visually disabled people move about voluntarily and gives them control over their movement. In this system, appropriate values for distance calculation, level change and object recognition are taken into consideration with the help of algorithms, classification techniques and training of the system.

Adequate involvement in training the system provides accurate results. To conclude, the system delivers the appropriate results as per the requirements, with advanced capabilities.

FUTURE SCOPE

The device shall be further upgraded by precisely removing noise and duplicates in signal processing, focusing on further improvement of the detection of specific objects so that robotic and home appliances can be controlled accurately and without contradiction.

REFERENCES

[1]. M. F. Saaid, A. M. Mohammad, M. S. A. Megat Ali, "Smart Cane with Range Notification for Blind People", 2016 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), 22 October 2016.
[2]. Akhilesh Krishnan, Deepakraj G, Nishanth N, Dr. K. M. Anandkumar, "Autonomous Walking Stick for the Blind Using Echolocation and Image Processing", 2016 2nd IEEE International Conference on Contemporary Computing and Informatics (IC3I), 2016.
[3]. Kunja Bihari Swain, Rakesh Kumar Patnaik, Suchandra Pal, Raja Rajeswari, Aparna Mishra and Charusmita Dash, "Arduino Based Automated STICK GUIDE for a Visually Impaired Person", 2017 IEEE International Conference on Smart Technologies and Management for Computing, Communications, Control, Energy and Materials, 2-4 August 2017.
[4]. K. Swathi, E. Raja Ismitha, Dr. R. Subhashini, "Smart Walking Stick Using IOT", International Journal of Innovations & Advancement in Computer Science (IJIACS), ISSN 2347-8616, Volume 6, Issue 11, November 2017.
[5]. Zeeshan Saquib, Vishakha Murari, Suhas N Bhargav, "BlinDar: An Invisible Eye for the Blind People", 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology, 19-20 May 2017.

[6]. Rashidah Funke Olanrewaju, Muhammad Luqman Azzaki Mohd Radzi, Mariam Rehab, "iWalk: Intelligent Walking Stick for Visually Impaired Subjects", Proc. of the 4th IEEE International Conference on Smart Instrumentation, Measurement and Applications (ICSIMA), 28-30 November 2017.
[7]. Do Ngoc Hung, Vo Minh-Thanh, Nguyen Minh-Triet, Quoc Luong Huy, and Viet Trinh Cuong, "Design and Implementation of Smart Cane for Visually Impaired People", in T. Vo Van et al. (eds.), 6th International Conference on the Development of Biomedical Engineering in Vietnam (BME6), Springer Nature Singapore Pte Ltd., 2018.
[8]. Uruba Ali, Hoorain Javed, Rekham Khan, Fouzia Jabeen and Noreen Akbar, "Intelligent Stick for Blind Friends", International Robotics & Automation Journal, 2018.
[9]. Shashank Chaurasia and K. V. N. Kavitha, "An Electronic Walking Stick for Blinds", 2014 IEEE International Conference on Information Communication & Embedded Systems (ICICES 2014).
[10]. Jianliang Meng, Junwei Zhang, Haoquan Zhao, "Overview of the Speech Recognition Technology", 2012 Fourth International Conference on Computational and Information Sciences.
[11]. Wang Zhiqiang, Liu Jun, "A Review of Object Detection based on Convolutional Neural Network", 36th Chinese Control Conference.
[12]. Kaisheng Xu, Hanli Wang, Pengjie Tang, "Image Captioning with Deep LSTM Based on Sequential Residual", IEEE International Conference on Multimedia and Expo (ICME), 2017.
[13]. Michael H. O'Malley, "Text to Speech Synthesis System in Indian English", 2016 IEEE Region Ten Conference (TENCON), 2016.

