HGR System by Syed
ABSTRACT: An in-depth literature review was carried out to assess the present state of research on hand gesture recognition systems, with an explicit focus on vision-based techniques for sign language detection. The problem at present is that most people cannot comprehend hand gestures or convert them to spoken language quickly enough for the listener to understand; in addition, communicating in sign language is itself not an easy task. This problem demands a better solution that can help the speech-impaired population converse without difficulty. The authors propose an extended, non-vision-based idea that will help remove, or at least reduce, this gap between speech-impaired and able-bodied people. Speech-impaired people communicate with others only through hand motions and expressions. Every gesture is associated with a specific meaning, and these meanings are stored in a database. By frequently updating the database, a speech-impaired person can communicate like any other person using the artificial mouth.
KEYWORDS: artificial mouth, non-vision-based technique, sign language detection, speech-impaired people,
vision-based technique.
I. INTRODUCTION
Vision-based and non-vision-based techniques are the two approaches used today for sign language recognition. We have chosen the glove-based, i.e., non-vision, technique, as it is particularly useful for gesture recognition: it employs a specially designed sensor glove that generates a signal corresponding to the hand sign. Because the performance of the smart glove is not affected by light, electric or magnetic fields, or other environmental factors, the data it produces is very precise. The proposed system uses American Sign Language (ASL) for communication. In this paper, the authors develop an artificial mouth to ease the communication process of speech-impaired people. A glove fitted with flex sensors is worn on the hand; the flex sensors sense the hand movement and send the corresponding signal to the microcontroller, while the reference data is kept in a database. The microcontroller matches the motion of the hand against the database and produces a speech signal and text, i.e., the output is delivered through a speaker and an OLED display. Here a Raspberry Pi 3B+ is used as a single-board computer and an Arduino Nano is used as an A/D converter.
Proposed system: The proposed system aims to reduce the communication barrier between speech-impaired and hearing people, assisting them in communicating effectively and easing their difficulties. Our prototype consists of a Raspberry Pi, an Arduino Nano used as an A/D converter and interfaced with flex sensors and an accelerometer for reading the hand gestures, a speaker module, an OLED display, a hand glove, and a 5 V power supply. When a specific sign is made, the flex sensors bend and unique values are generated. These values are stored in a database; when the sign is made again, the microcontroller matches the measured values against the stored values, and the output is made available as text on the OLED and as audio from the speaker, both connected to the Raspberry Pi 3B.
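As an illustration of how this database lookup could be organised on the Raspberry Pi, the following Python sketch stores one template vector per phrase (five flex readings plus three acceleration readings) and matches an incoming reading to the nearest stored template. The phrase labels, sensor counts, and numeric values are illustrative assumptions, not the actual calibration data of the prototype.

import math

# Illustrative template database: each gesture maps to one stored reading of
# five flex-sensor values and three accelerometer axes. The numbers below are
# placeholders, not measured calibration data from the prototype.
TEMPLATES = {
    "I am sorry":     (310, 512, 498, 305, 640, 330, 512, 505),
    "I am Deaf":      (620, 615, 300, 298, 310, 500, 340, 515),
    "I Want to play": (305, 610, 605, 300, 295, 512, 512, 330),
}

def match_gesture(reading, max_distance=80.0):
    """Return the label of the closest template, or None if no template
    lies within max_distance (Euclidean distance)."""
    best_label, best_dist = None, float("inf")
    for label, template in TEMPLATES.items():
        dist = math.dist(reading, template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None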
Hardware components
Accelerometer: Accelerometers are devices that measure acceleration, i.e., the rate of change of an object's velocity. Here it measures the acceleration of a gesture along three axes and gives the output to the Arduino Nano.
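Assuming an analog three-axis accelerometer such as the ADXL335 (the exact part is not specified in the paper), the acceleration on each axis can be recovered from the output voltage as a = (V_out − V_0g) / S, where V_0g is the zero-g voltage (about 1.65 V at a 3.3 V supply) and S is the sensitivity (about 0.33 V/g); an output of 1.98 V would therefore correspond to roughly +1 g.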
Flex sensors: A flex sensor is essentially a variable resistor whose terminal resistance increases when the sensor is bent. These sensors are mounted on the fingers; whenever a sign is made the fingers bend, and the resulting value is sent, along with the accelerometer values, to the Arduino Nano.
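To read a flex sensor with an analog input it is typically placed in a voltage divider with a fixed resistor R_fixed (a wiring detail assumed here, since the paper does not specify it), giving V_out = V_cc × R_fixed / (R_fixed + R_flex). With V_cc = 5 V, R_fixed = 10 kΩ and a flat-sensor resistance of 10 kΩ, V_out is about 2.5 V, and it drops towards 1.7 V as bending raises R_flex to around 20 kΩ.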
Arduino Nano: This is a microcontroller board developed by Arduino.cc and based on the ATmega328P/ATmega168. In our project it is programmed to work as an analog-to-digital (A/D) converter: the outputs of the accelerometer and flex sensors are voltage variations, i.e., analog signals, and they must be converted to digital values for the Raspberry Pi to process.
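The Nano's 10-bit ADC maps an input voltage to an integer count of approximately value ≈ 1023 × V_in / V_ref; with the default 5 V reference (assumed here for illustration), the 2.5 V from a flex-sensor divider becomes a count of about 511.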
Raspberry Pi 3B: The Raspberry Pi is an extremely sleek and tiny computer board, about the size of a credit card, that runs a Linux-based operating system called Raspbian. It is the heart of our project: it processes the input data and delivers the output as text and speech through the OLED and the speaker module, respectively.
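A minimal sketch of how the Pi might receive the digitised readings from the Nano, assuming the Nano streams one comma-separated line of five flex values and three acceleration values per sample over USB serial at 9600 baud on /dev/ttyUSB0 (port name, baud rate, and line format are assumptions, not taken from the paper):

import serial  # pyserial

# Persistent connection to the Arduino Nano over USB serial (assumed port).
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

def read_reading():
    """Read one comma-separated sample from the Nano and return it as a
    tuple of integers, e.g. (312, 510, 498, 300, 645, 330, 512, 505)."""
    line = ser.readline().decode("ascii", errors="ignore").strip()
    return tuple(int(v) for v in line.split(",")) if line else ()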
OLED: An organic light-emitting diode (OLED) is a display device based on a self-light-emitting technology. It is connected to the Raspberry Pi 3B and displays the detected sign in the form of text.
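The paper does not name the OLED controller; assuming a common I2C SSD1306 module at address 0x3C and the luma.oled Python library, displaying the recognised phrase might look like this:

from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

# SSD1306 OLED on the Pi's I2C bus 1 at address 0x3C (assumed wiring).
oled = ssd1306(i2c(port=1, address=0x3C))

def show_on_oled(text):
    """Clear the display and draw the recognised phrase as text."""
    with canvas(oled) as draw:
        draw.text((0, 20), text, fill="white")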
Speaker: The speaker module used here has a 3.5 mm jack, which is connected to the Raspberry Pi, and a USB cable for its power supply. A text-to-speech (TTS) block is used to generate the audio output.
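The paper does not name the TTS engine used; a minimal sketch assuming the eSpeak engine, which is available on Raspbian and plays through the Pi's 3.5 mm audio output, is:

import subprocess

def speak(text):
    """Speak the recognised phrase through the Pi's 3.5 mm audio output
    using the eSpeak command-line TTS engine."""
    subprocess.run(["espeak", text], check=False)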
Block Diagram
The final prototype was developed from the above block diagram and consists of flex sensors integrated on the glove. The unique values generated whenever a sign is made are combined with the per-axis acceleration values from the accelerometer and fed to the A/D converter, since these values are analog in nature and need to be converted to digital form for the Raspberry Pi 3B to process. The Pi then gives the output as text on the OLED module and as speech from the audio output of the speaker, which is connected to the Raspberry Pi 3B through its 3.5 mm jack. A text-to-speech converter is used here, and the basic block diagram representation of a TTS is given below.
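Putting the pieces together, a minimal sketch of the main loop on the Pi, using the illustrative helpers read_reading, match_gesture, show_on_oled, and speak introduced above (all of them assumptions for illustration, not the authors' actual code), could be:

def main():
    last = None
    while True:
        reading = read_reading()        # one digitised flex + accel sample
        if not reading:
            continue
        label = match_gesture(reading)  # nearest stored template, or None
        if label and label != last:     # announce each newly detected sign once
            show_on_oled(label)
            speak(label)
            last = label

if __name__ == "__main__":
    main()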
Every hand gesture made has specific values that convey a message, and that message is kept in a database. At run time, the template database is stored on the Raspberry Pi and the motion sensors are fixed on the user's hand. For every action, the motion sensors are accelerated and send their signal to the Pi. The Pi matches the received signal against the database and produces the speech signal; the output of the system is delivered through the speaker. Here, different signs are given and the output is noted. Whenever one of the signs given above is made, the Raspberry Pi matches it with the message stored in the database and gives the meaning of the sign as output in the form of text and speech. The following are some snapshots of the different signs made and their results.
Table 1: Tabulated results of six hand gestures with text output.
Sign no.    Text output on OLED
2           I am watching you / stare
3           I am sorry
4           I am Deaf
5           Wow! Perfect
6           I Want to play
The table above lists the hand gesture results displayed on the OLED module; many more signs have also been implemented. The output appears both on the OLED and through the speaker module.
IV. CONCLUSION
We were able to develop an efficient gesture recognition system that does not use any markers or camera, making it more user- and cost-friendly. The flex sensors, in combination with the accelerometer, A/D converter, and Raspberry Pi, are able to translate ASL to text and speech accurately. By mounting these sensors on a glove, a very convenient wearable is obtained that is not only efficient but also comfortable to use in daily life. It provides an effective way of alleviating the problems of the speech-impaired community, empowering such people with the power of speech and allowing them to express themselves better. It can be concluded that the existing methods assisted in the development of our prototype, and the inclusion of new technology will surely have a major impact in this field.
Future Scope: In the near future, more sensors can be embedded to recognise full sign language with greater precision and accuracy, and most of the units can be integrated on a single board, resulting in a more compact model. The system can also be designed to translate words from one language to another. In the future, the accuracy can be improved further and more gestures can be added to implement more functions.
ACKNOWLEDGEMENT
The authors appreciate the efforts of Mr. Hakeem Aejaz Aslam (Asst. Professor, ECED, MJCET) for his kind support in conducting the experimental work.