TRANSLATOR
BACHELOR OF TECHNOLOGY
IN
Information Technology
Submitted by
SUPERVISOR
Dr. Ganesh B Regulwar
Associate Professor
April, 2023
Department of Information Technology
CERTIFICATE
We avail this opportunity to express our deep sense of gratitude and
heartfelt thanks to Dr. Teegala Vijender Reddy, Chairman, and Sri Teegala
Upender Reddy, Secretary of VCE, for providing a congenial atmosphere to
complete this project successfully.
Abstract
Table of Contents
3.4.2 Software requirements
3.5 Advantages and Disadvantages
3.5.1 Advantages
3.5.2 Disadvantages
CHAPTER 4 SYSTEM DESIGN DETAILS
4.1 DFD with Detailed Explanation
4.1.1 UML diagrams
4.1.2 Flow Diagram
4.2 Proposed Algorithm
4.2.1 Algorithm
CHAPTER 5 IMPLEMENTATION
5.1 Overview
5.1.1 Forms of Input
5.1.2 Speech Recognition
5.1.3 Pre-processing of text
5.1.4 Porter Stemming Algorithm
5.1.5 Text to Sign Language
5.2 Technologies used
5.2.1 HTML (Hyper Text Markup Language)
5.2.2 CSS (Cascading Style Sheets)
5.2.3 Python
5.2.4 Django framework
5.3 NLTK Library
5.3.1 Word tokenizer
5.3.2 Elimination of Stop Words
5.3.3 Lemmatization and Synonym replacement
5.3.4 WordNet
5.3.5 Punkt
CHAPTER 6 TESTING AND RESULT
6.1 Testing
6.1.1 Testing Objectives
6.1.2 Testing principles
6.2 System Testing Plan
6.3 Screenshots
CHAPTER 7 CONCLUSION AND FUTURE SCOPE
7.1 Conclusion
7.2 Future Scope
REFERENCES
List of Figures
Abbreviations
INTRODUCTION
1.1 Introduction
Sign language is the main means of communication for people with hearing
and speech impairments, and deaf persons are often said to communicate
exclusively in sign language. It combines hand motions, arm and body
movements, and facial expressions. Using it, deaf individuals can take part
in all the activities that hearing people enjoy, from everyday communication
to information access. Sign language (SL) is a natural visual-spatial
language that combines facial expressions, hand shapes, arm orientation,
and movement of the upper body and its parts to produce utterances in three
dimensions rather than one. In India, the language was developed by people
who are deaf or hard of hearing. Because there are numerous deaf communities
all over the world, their sign languages differ, just as many spoken
languages, including Urdu, French, and English, are widely used today. In a
similar manner, deaf people around the world communicate in a variety of
sign languages: American Sign Language (ASL) in the United States, Indian
Sign Language (ISL) in India, and British Sign Language (BSL) in Britain
are used to express ideas and interact with one another. Interactive
systems are already available for several sign languages, including ASL and
BSL. There are 5.07 million people with hearing disabilities in India; more
than 30% of them are under 20 years old, while over 50% are between the
ages of 20 and 60. These people use sign language to communicate with
others because they seldom speak clearly. Since the signs used within small
communities often lack a widely shared structure or syntax, communication
outside those communities can be challenging for these persons with
disabilities. Other efforts, such as those using Indian Sign Language, have
been made to demonstrate the same for
various sign languages. ISL research, which started in 1978, has shown that
ISL has its own grammar and syntax, making it a complete natural language.
At public places like banks, hospitals, railway stations, and bus stops, it
can be extremely difficult for hearing-impaired people to communicate,
since a hearing person cannot understand the sign language a deaf person
employs and may not be conversant with sign language at all. Communication
between the deaf and hearing communities must therefore be facilitated by
translation. This programme accepts speech as input, transforms the speech
to text, and shows the corresponding graphics in Indian Sign Language. The
paper's major objective is to employ the right technology to convert spoken
English into Indian Sign Language. The interface operates in two phases:
first, a speech-to-text API transforms the audio into text; second, the
input text is tokenized into words, which are mapped to the corresponding
gestures to obtain the sign language sequence for the required text.
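The second phase just described (tokenize the recognized text, then map each word to a stored gesture) can be sketched in a few lines of Python. The SIGN_CLIPS dictionary and its file names are illustrative assumptions, not the project's actual assets:

```python
# Minimal sketch of the tokenize-and-map phase. The clip file names
# below are hypothetical placeholders for the real gesture assets.
SIGN_CLIPS = {
    "hello": "hello.mp4",
    "train": "train.mp4",
    "ticket": "ticket.mp4",
}

def text_to_sign_sequence(text):
    """Tokenize the recognized text and map each word to a sign clip."""
    tokens = text.lower().split()  # tokenization into words
    return [SIGN_CLIPS[tok] for tok in tokens if tok in SIGN_CLIPS]

print(text_to_sign_sequence("Hello train ticket"))
# → ['hello.mp4', 'train.mp4', 'ticket.mp4']
```

Words with no stored clip are simply skipped here; the full system handles them differently, as later sections describe.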
1.3 Motivation
For individuals with hearing and speech impairments, sign language is a
natural and preferred mode of communication. Although there have been many
attempts to recognise sign language motions and translate them into text,
creating text-to-sign-language conversion systems has proven difficult.
This is largely because there is no sizeable corpus of sign language that
can be used as the basis for creating such systems.
Despite these challenges, the creation of a system for translating text into
Indian sign language has enormous promise for improving information access
and services for the deaf community. By utilizing modern technologies such as
machine learning and computer vision, such a system can accurately recognize
and interpret written text and convert it into sign language.
However, several challenges must be addressed to develop an accurate and
effective text-to-sign-language conversion tool. These include
the need for a comprehensive Indian sign language corpus, the development
of advanced computer vision algorithms, and the need for high-quality data
to train the machine learning models.
The creation of a text-to-sign language conversion system can greatly
improve the deaf community’s access to information and services in Indian
sign language. While several challenges exist in developing such a system,
continued research and development in this area hold tremendous promise in
creating a more inclusive and equitable society.
1.4.1 Aim
This project intends to:
Develop a communication system for deaf people using ISL grammar
representation.
Overall, the goal is to create a system that allows deaf people to communicate
using ISL grammar, by converting English sentences into an appropriate ISL
representation and enabling communication through sign language animations,
videos, and speech recognition and synthesis technologies.
1.4.2 Objectives
The improvement of communication between people with hearing and speech
impairments and those who do not know sign language is one of the main
goals of this initiative. The deaf community can use this web programme,
which is open source and free to use, to translate text into sign language as
a solution. The chances of success in the areas of education, employment,
interpersonal connections, and public access venues could all be enhanced by
this technology.
The precision and efficiency of the system must be continually improved
in order for this project to be successful. The ISL dictionary needs to be
continuously updated with new words and phrases to increase the system’s
breadth and accuracy. Additionally, the technology used for speech recognition
and animation needs to be continuously refined to improve the system’s
performance and reliability.
This project has the potential to transform the lives of people with hearing
and speech impairments by improving their ability to communicate effectively.
1.5 Scheme
A hearing person can input speech using a computer's microphone. Speech-to-
text conversion takes place in a speech recognition module with the aid of
a trained voice database. By comparing the converted text against the
database, meanings and symbols are found, and the sign symbols are then
shown alongside the text for hard-of-hearing people. A gesture is a
movement made with a body part, particularly the hands, arms, face, or
head, to convey important information or feelings. Gesture recognition is
useful in applications that involve human-machine interaction. The system
provides speech-to-sign translation: speech recognition software is
employed to convert spoken words into word sequences.
LITERATURE SURVEY
The INGIT system translates Hindi strings into Indian Sign Language and was
developed specifically for the railway investigation sector. Hindi grammar
was implemented using FCG. The generation module creates a thin semantic
structure from the user input, and extra words are eliminated by passing
this input through ellipsis resolution. Depending on the type of phrase,
the ISL generator module then constructs an appropriate ISL-tag structure,
and a HamNoSys converter is used to create a graphical simulation.
Ali and colleagues created a domain-specific system that required English
text as input. The text was converted into ISL text, and those words were
then converted into ISL symbols. The system's architecture comprised the
following components: 1) A text translation input module. 2) A tokenizer to
break up the phrase's words. 3) A list of ISL symbols specific to railway
concerns; if a phrase had no matching sign in the repository, the sign for
a synonym was used. 4) A specially crafted translator that mapped each word
to its associated symbol and screened the words to be translated, removing
any derogatory or abusive terms as well as any words without a stored sign.
5) A word adder that added the words in the order they were entered.
Vij et al. created a two-phase process for generating sign language. The
first phase concentrated on preprocessing Hindi sentences and converting
them into ISL grammar.
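As a rough illustration of this kind of grammar transfer, the toy sketch below (our own, not taken from Vij et al.) reorders a simple subject-verb-object sentence into the subject-object-verb order typically described for ISL glosses; the three-word assumption is a deliberate simplification, and a real system would use a parser:

```python
# Illustrative sketch of SVO → SOV reordering for ISL-style glosses.
# Assumes a trivial three-word sentence; real grammar transfer is
# considerably more involved.
def svo_to_sov(words):
    if len(words) == 3:           # e.g. ["I", "eat", "apple"]
        subj, verb, obj = words
        return [subj, obj, verb]  # → ["I", "apple", "eat"]
    return words                  # leave anything else untouched

print(svo_to_sov(["I", "eat", "apple"]))
# → ['I', 'apple', 'eat']
```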
SYSTEM ANALYSIS
3.1 Overview
The objective of this research is to develop a system that can translate audio
messages into Indian Sign Language (ISL). The system receives audio input,
converts it to text, and then displays appropriate ISL clips or preset GIFs.
This approach makes it easier for normal and deaf people to communicate
with one another.
As technology develops, the demand for better communication tools that
deliver precise outcomes is rising. This study has a significant impact on
the field of speech-to-sign-language translation, potentially bridging the
communication gap between the hearing and hearing-impaired communities. The
tool is not simply for those who have hearing loss: it also has larger
implications in public spaces like hospitals, banks, train stations, and
bus stops, where vocal people might not comprehend sign language.
The system's intended audience is everyone who wants to learn sign language
translation and facilitate better communication. By using this system,
individuals can acquire basic knowledge of sign language, which can improve
their ability to communicate with the hearing-impaired population.
In conclusion, the development of this system can significantly enhance
the quality of life for the hearing-impaired population in India and promote
inclusivity and accessibility in various sectors. With ongoing advancements and
improvements in the technology used for speech recognition and animation,
this system has significant potential for adoption and can become an essential
tool for creating a more inclusive and equitable society.
3.2 Existing System
There are currently no reliable models for translating text into Indian
Sign Language (ISL), despite the fact that sign language is the language
used by people with hearing and speech disabilities to overcome
communication hurdles. A sufficient audio-visual accompaniment to oral
communication is also absent.
There has been little progress in the computerization of ISL, despite great
advancements in recognising the sign languages used by different countries.
Earlier research in this area has mainly focused on British Sign Language
(BSL) and American Sign Language (ASL). Only a small number of ISL systems
have been created so far, and this lack of progress emphasises the need for
further funding and innovation in this field.
Our proposed system aims to address this gap by providing an efficient
means of converting input audio into animation, which enables speech to be
translated into ISL. With continuous improvements and advancements, this
system can become an essential tool for creating a more inclusive and accessible
society for individuals with hearing and speech impairments.
It is imperative to recognize that the development of such a system
requires continuous effort to update and expand the ISL dictionary with new
words and phrases. The system's accuracy is also influenced by the quality
of the audio input, and background noise and speaker accents
can negatively impact the system’s performance. Hence, ongoing improvements
in speech recognition and animation technology are essential to enhance the
system’s accuracy and effectiveness.
In conclusion, the lack of proper computerization of ISL underscores the
need for further innovation and investment in this area. Our approach is
suggested to close the communication gap between people with and without
hearing and speech difficulties by providing an efficient means of
converting input audio into animation. With continuous effort towards
improvement, this system can become an essential tool for creating a more
inclusive and equitable society.[9]
Every word in the phrase is compared against a collection of movies and
GIFs that depict the words. If a word cannot be found, the system breaks it
down into its component letters and displays the matching system-predefined
films or clips. The main steps are: input of audio or text, tokenization of
the input, and word- or letter-level search.
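The word-then-letter fallback described above can be sketched as follows; the WORD_CLIPS repository and the per-letter file names are illustrative assumptions, not the project's real asset catalogue:

```python
# Sketch of the word-lookup-with-fingerspelling-fallback step.
# A hypothetical repository of word-level clips plus one clip per letter.
WORD_CLIPS = {"bank": "bank.gif"}
LETTER_CLIPS = {c: f"{c}.gif" for c in "abcdefghijklmnopqrstuvwxyz"}

def lookup(word):
    """Return the word's clip if stored, else spell it out letter by letter."""
    word = word.lower()
    if word in WORD_CLIPS:
        return [WORD_CLIPS[word]]
    return [LETTER_CLIPS[c] for c in word if c in LETTER_CLIPS]

print(lookup("bank"))  # → ['bank.gif']
print(lookup("atm"))   # → ['a.gif', 't.gif', 'm.gif']
```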
2. The computer must have an Intel Core i3 or Intel Atom CPU to run
the system.
3. Internet browsers, such as Chrome, are necessary for accessing the
system's web interface.
4. Access to the Internet is required for using the system’s online features.
3.5.1 Advantages
The suggested technique has a number of benefits for improving
communication between people who have speech and hearing problems and
those who do not. A key benefit is its potential to be used in higher-level
applications: with audio input, the system can extract audio features that
can then be used in other applications.[11]
Another benefit is that the system performs sign language translation more
quickly. Instead of analysing each sign independently in the dataset, it
uses clustering over extracted audio features, which is faster. Its speed
makes interpersonal communication more effective.
The system also offers more accurate results because it uses speech features
instead of metadata for sign language comparison. As a result, we can achieve
higher precision in our results.
Finally, the proposed system offers an easy-to-use user interface, making
it accessible to all individuals. The interface has been designed to be simple
and user-friendly, allowing normal users to retrieve their features without any
hindrance.
In conclusion, the proposed system offers several benefits, including
faster sign language translation, higher accuracy, and an easy-to-use
interface. With these advantages, the system can significantly improve
communication between individuals with hearing and speech impairments, and
promote inclusivity and accessibility.
3.5.2 Disadvantages
1. A small training dataset is currently used by the project, stored on a
personal computer or in a folder. Although it might be expanded, the
project has several storage-related limitations.
2. File size and format limitations: while feature extraction is easier
with files of this type, the project can only be used with .mp4 files.
Longer video clips that exceed the limit are also difficult to analyse,
since they require more storage and processing power.
Figure 4.1: Level 0 DFD
Each Level 1 process will be further divided into subfunctions at Level 2,
and so on.[12]
4.2.1 Algorithm
1. Begin by launching the web application.
3. Input the desired text into the system or speak into the microphone for
voice input.
[13]
IMPLEMENTATION
5.1 Overview
Individuals with hearing impairments face challenges in communicating with
the majority of the population who use spoken language as their primary means
of communication. This communication gap can lead to social isolation, limited
employment opportunities, and reduced access to education and healthcare.
There is now a chance to close this communication gap and enhance the
lives of people with hearing loss thanks to the advancement of multimedia,
animation, and other computer technologies.
For those who have hearing loss, sign language—a visual and gesture
language—is their main form of communication. The bulk of the hearing
population could not, however, be conversant in sign language, which might
widen the communication gap even further.
We offer a solution to this issue that records audio as input using WebKit
Speech Recognition and uses the Google Speech API to transform the audio
into text. The text is subsequently divided into smaller, more digestible
parts using Natural Language Processing (NLP), and a dependency parser is
then used to examine the sentence's grammatical structure and create word
links. The algorithm then transforms the text into sign language clips.
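The NLP step above can be approximated in pure Python; the stop-word set and the crude suffix rules below are illustrative stand-ins for the project's actual pre-processing (which uses NLTK and Porter stemming, described in Section 5.1.4):

```python
# Deliberately simplified sketch of pre-processing: stop-word removal
# plus crude suffix stripping in the spirit of Porter stemming.
# The stop-word list and suffix rules are illustrative assumptions.
STOP_WORDS = {"is", "the", "a", "an", "to", "are"}

def crude_stem(word):
    """Strip a common suffix when enough of a stem would remain."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    """Lowercase, drop stop words, and stem what remains."""
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return [crude_stem(t) for t in tokens]

print(preprocess("The trains are departing"))
# → ['train', 'depart']
```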
By leveraging computer technology to close the communication gap between
those with hearing loss and the general public, our proposed system has
significant potential to promote inclusivity, accessibility, and equity
across various sectors. Improved access to education, healthcare, and
employment can lead to a more equitable and inclusive society.[14]
5.1.1 Forms of Input
Our suggested method is made to take inputs in a variety of formats.
Live speech or text input are both acceptable forms of input. Users have the
freedom to select the input format that best meets their needs and preferences
thanks to this functionality.
The system’s ability to accept input in different formats makes it versatile
and applicable in various settings. For instance, users who prefer to communi-
cate through written texts can conveniently input text into the system, while
those who prefer to communicate through speech can use the live speech input
feature.
Additionally, our system’s compatibility with multiple input formats en-
hances its usability in different settings, such as education and healthcare. For
example, teachers can use the text input feature to communicate with hearing-
impaired students during class sessions, while healthcare providers can use the
live speech input feature to communicate with hearing-impaired patients during
consultations.
Moreover, our proposed system's flexibility in input formats aligns with
the goal of promoting inclusivity and accessibility in society. It removes
the barriers that limit communication for the hearing-impaired population
and enables them to interact and communicate seamlessly with the rest of
the world.
In conclusion, a key component that improves the usability and
accessibility of our suggested system is its ability to accept a variety of
input types. Our method can help close the communication gap between
hearing-impaired and hearing people, thus fostering a more equal and
inclusive society.
5.2.3 Python
Python is a well-liked high-level programming language that prioritises the
code’s readability and usability. It was developed with the goal of assisting
programmers in writing logical and clear code for both small and large projects.
Python’s object-oriented approach and language structures support a variety of
programming paradigms, including structured, object-oriented, and functional
programming. The language’s dynamic typing and garbage collection make it
easier to work with and free developers from onerous memory management
responsibilities. Python's vast standard library is often described as a
"batteries included" feature: it gives developers access to a variety of
modules and tools that can speed up and improve the efficiency of their
project completion. Python was created by Guido van Rossum in the late
1980s and first released in 1991.
5.3.4 WordNet
WordNet is a lexical database that organizes words into synsets, or groups
of cognitive synonyms, with lexical and semantic connections between them.
It has inspired wordnets in many other languages and can be accessed
through a freely available online browser. When using the WordNet browser,
you may see word definitions, synsets, and the semantic relationships
between words.
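A tiny pure-Python sketch can illustrate how synset-style groupings support the synonym-replacement step without requiring the NLTK corpus download; the SYNSETS map and the vocabulary below are illustrative assumptions, not WordNet data:

```python
# WordNet groups words into synsets of cognitive synonyms; this
# hand-built map mimics that idea so synonym replacement can be shown.
# The entries are illustrative assumptions.
SYNSETS = {
    "big": "large",
    "buy": "purchase",
    "help": "assist",
}

def replace_synonyms(tokens, sign_vocabulary):
    """Swap a word for its synonym when only the synonym has a stored sign."""
    out = []
    for tok in tokens:
        if tok not in sign_vocabulary and SYNSETS.get(tok) in sign_vocabulary:
            out.append(SYNSETS[tok])
        else:
            out.append(tok)
    return out

print(replace_synonyms(["big", "train"], {"large", "train"}))
# → ['large', 'train']
```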
5.3.5 Punkt
Punkt is designed to automatically learn parameters, such as a collection
of abbreviations, from a corpus associated with the target domain.
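A rough approximation of what Punkt does can be sketched as follows; the hard-coded abbreviation set stands in for the parameters Punkt would learn from a corpus, and is an illustrative assumption:

```python
# Naive sentence splitter approximating Punkt's behaviour: a period
# ends a sentence unless the token is a known abbreviation.
# The abbreviation set is an illustrative stand-in for learned parameters.
ABBREVIATIONS = {"dr.", "mr.", "e.g."}

def split_sentences(text):
    parts, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith(".") and token.lower() not in ABBREVIATIONS:
            parts.append(" ".join(current))
            current = []
    if current:
        parts.append(" ".join(current))
    return parts

print(split_sentences("Dr. Smith arrived. He spoke."))
# → ['Dr. Smith arrived.', 'He spoke.']
```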
6.1 Testing
The primary objective of testing, an essential part of software quality
assurance, is to execute a programme and discover bugs. It plays a crucial
part in ensuring software quality by offering the last word on the definition,
design, and code of the programme. System testing, which evaluates the
system as a whole and looks for any inconsistencies and deviations from the
original design, is an essential step in the testing process.
The proposed system must pass a variety of tests to ensure that it is
functional and efficient before it is ready for user acceptability testing. A
successful test will discover an error that hasn’t yet been discovered, and a
good test case will have a fair chance of doing so. Testing poses an intriguing
challenge for software engineers since it is still possible to overlook certain
errors or problems, even with thorough testing. To increase the precision
and efficiency of the testing process, it is essential to continually improve the
testing processes and instruments.
Testing is a crucial part of software quality assurance since it helps to
confirm that the software system works as planned. System testing evaluates
the programme as a whole, and the testing process helps identify flaws and
issues. Because the testing procedure is not error-proof, procedures and tools
must always be enhanced to produce the finest outcomes.
3. A test is successful if it identifies a mistake that has gone undetected.
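In the spirit of the objectives and principles above, a minimal sketch of test cases for the translator's word-lookup step might look like this; the translate() function and its one-word vocabulary are illustrative assumptions, not the project's actual test suite:

```python
# Minimal sketch of system-level checks for the word-lookup step.
# CLIPS and translate() are hypothetical stand-ins for the real system.
CLIPS = {"hello": "hello.mp4"}

def translate(word):
    return CLIPS.get(word.lower())

def run_tests():
    assert translate("Hello") == "hello.mp4"  # a known word is found
    assert translate("xyz") is None           # an unknown word exposes a gap
    return "all tests passed"

print(run_tests())
```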
6.3 Screenshots
The above screenshot represents the home page of the website, which has
Home, Register, Login, and Converter options.
The screenshot below represents the tool that converts audio or text to
sign language. Some of the outputs are displayed in the screenshots below.
7.1 Conclusion
Several people in the nation who are deaf or have speech or hearing issues
communicate primarily in Indian Sign Language (ISL). Given the difficulties
associated with reading and understanding written messages, sign language
presents a more favoured and practical means of communicating. In sign
language, words, emotions, and noises are expressed using hand gestures, lip
movements, and facial expressions.
Little attention has been paid to the Python programming language's
potential as a communication tool for people with speech and hearing
problems. Our suggested solution seeks to close this gap by providing
effective means of communication for those with hearing and speech
impairments. The technology helps Indians who have hearing loss access
information more easily and also works as a teaching aid for ISL. With the
help of our approach, people with disabilities can communicate with the
rest of society and express themselves clearly.
Our method efficiently translates speech into sign language by transforming
input audio into an animation, which can help remove barriers to
communication between hearing and hearing-impaired people. This method also
has a strong chance of being adopted in fields like employment, healthcare,
and education, where the need for more inclusive communication tools is
greatest.
However, continuous updates to the ISL dictionary are necessary to ensure
the system’s accuracy and effectiveness. Additionally, the quality of the input
audio, including background noise and speaker accents, can affect the system’s
performance. Thus, ongoing advancements in speech recognition and animation
technology are necessary.
In conclusion, our proposed system can significantly enhance the quality
of life for the hearing-impaired population in India and promote inclusivity
and accessibility across various sectors. With continuous development and
improvement, our system can become an essential tool for creating a more
inclusive and equitable society.
[1] Nasser H Dardas and Nicolas D Georganas. “Real-time hand gesture de-
tection and recognition using bag-of-features and support vector machine
techniques”. In: IEEE Transactions on Instrumentation and measurement
60.11 (2011), pp. 3592–3607.
[2] Anand Ballabh and Umesh Chandra Jaiswal. “A study of machine trans-
lation methods and their challenges”. In: Int. J. Adv. Res. Sci. Eng 4.1
(2015), pp. 423–429.
[3] Purva C Badhe and Vaishali Kulkarni. “Indian sign language translator
using gesture recognition algorithm”. In: 2015 IEEE international con-
ference on computer graphics, vision and information security (CGVIS).
IEEE. 2015, pp. 195–200.
[4] Taner Arsan and Oğuz Ülgen. “Sign language converter”. In: International
Journal of Computer Science & Engineering Survey (IJCSES) 6.4 (2015),
pp. 39–51.
[5] Mohammed Elmahgiubi, Mohamed Ennajar, Nabil Drawil, and Mohamed
Samir Elbuni. “Sign language translator and gesture recognition”. In: 2015
Global Summit on Computer & Information Technology (GSCIT). IEEE.
2015, pp. 1–6.
[6] M Mahesh, Arvind Jayaprakash, and M Geetha. “Sign language translator
for mobile platforms”. In: 2017 International Conference on Advances in
Computing, Communications and Informatics (ICACCI). IEEE. 2017,
pp. 1176–1181.
[7] Neha Poddar, Shrushti Rao, Shruti Sawant, Vrushali Somavanshi, and
Sumita Chandak. “Study of sign language translation using gesture recog-
nition”. In: International Journal of Advanced Research in Computer and
Communication Engineering 4.2 (2015).
[8] Anbarasi Rajamohan, R Hemavathy, and M Dhanalakshmi. “Deaf-mute
communication interpreter”. In: International Journal of Scientific Engi-
neering and Technology 2.5 (2013), pp. 336–341.
[9] Yogeshwar I Rokade and Prashant M Jadav. “Indian sign language recog-
nition system”. In: International Journal of engineering and Technology
9.3 (2017), pp. 189–196.
[10] Shreyashi Narayan Sawant and MS Kumbhar. “Real time sign lan-
guage recognition using pca”. In: 2014 IEEE International Conference on
Advanced Communications, Control and Computing Technologies. IEEE.
2014, pp. 1412–1415.
[11] Madhuri Sharma, Ranjna Pal, and Ashok Kumar Sahoo. “Indian sign
language recognition using neural networks and KNN classifiers”. In:
ARPN Journal of Engineering and Applied Sciences 9.8 (2014), pp. 1255–
1259.
[12] Amitkumar Shinde and Ramesh Kagalkar. “Sign language to text and
vice versa recognition using computer vision in Marathi”. In: International
Journal of Computer Applications 975 (2015), p. 8887.
[13] Stephanie Stoll, Necati Cihan Camgoz, Simon Hadfield, and Richard
Bowden. “Text2Sign: towards sign language production using neural ma-
chine translation and generative adversarial networks”. In: International
Journal of Computer Vision 128.4 (2020), pp. 891–908.
[14] Neha V Tavari, AV Deorankar, and P Chatur. “A review of literature
on hand gesture recognition for Indian Sign Language”. In: International
Journal 1.7 (2013).