
Sign Language Detection

A Major Project Report submitted


in partial fulfilment of the requirements
for the award of the degree of

BACHELOR OF TECHNOLOGY

In

COMPUTER SCIENCE & ENGINEERING

By

1. KOVVURI SUSHMA (18B01A0574)


2. GADIDESI CHANDRIKA (18B01A0590)
3. PAPPULA KUSUMA LATHA (18B01A05A9)
4. SANKA SRI NAGA VARDHINI VYSHNAVI (18B01A05B7)
5. KASANI RUPA SRI (19B01A0508)

Under the esteemed guidance of


Dr. P. KIRAN SREE
Professor and Head of the Department

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


SHRI VISHNU ENGINEERING COLLEGE FOR WOMEN(A)
(Approved by AICTE, accredited by NBA & NAAC, Affiliated to JNTU Kakinada)
BHIMAVARAM – 534 202
2021 – 2022
SHRI VISHNU ENGINEERING COLLEGE FOR WOMEN(A)
(Approved by AICTE, Accredited by NBA & NAAC, Affiliated to JNTU Kakinada)
BHIMAVARAM – 534 202

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the Major Project entitled “SIGN LANGUAGE DETECTION” is being
submitted by K. SUSHMA, G. CHANDRIKA, P. KUSUMA LATHA, S. VYSHNAVI and K. RUPA SRI,
bearing Regd. Nos. 18B01A0574, 18B01A0590, 18B01A05A9, 18B01A05B7 and 19B01A0508,
in partial fulfilment of the requirements for the award of the degree of
“Bachelor of Technology in Computer Science & Engineering”. It is a record of bonafide
work carried out by them under my guidance and supervision during the academic year
2021–2022, and it has been found worthy of acceptance according to the requirements of
the university.

Internal Guide Head of the Department

External Examiner
ACKNOWLEDGEMENTS

It is natural and inevitable that the thoughts and ideas of other people drift into one's
own thinking, and one feels bound to acknowledge the help and guidance derived from
others. We acknowledge each of those who have contributed to the fulfilment of this
project report.

We wish to place on record our deep sense of gratitude to Sri K. V. Vishnu Raju, Chairman
of SVES, for providing us with all the facilities necessary to carry out this project successfully.

We express our heartfelt thanks to our Principal, Dr. G. Srinivasa Rao, for his constant
support at every stage of our work.

We wish to express our sincere thanks to our Vice Principal, Dr. P. Srinivasa Raju, for being
a source of inspiration and constant encouragement.

We are privileged to express our sincere gratitude to the honourable Head of the
Department Dr. P. Kiran Sree for giving his support and guidance in our endeavours.

We express our sincere thanks to the members of the Project Review Committee Ms. P.
Sudha Rani, Associate Professor, Dr. V. Purushothama Raju, Professor in CSE & Dean
(Academics)-SVECW.

We express our deep sense of gratitude and sincere appreciation to our guide Dr. P. Kiran
Sree, Head of the Department in CSE - SVECW, for his indispensable suggestions, unflinching
and esteemed guidance throughout the project.

It has been a great pleasure doing project work at Shri Vishnu Engineering College for Women
as a part of our curriculum.

PROJECT ASSOCIATES
KOVVURI SUSHMA (18B01A0574)
GADIDESI CHANDRIKA (18B01A0590)
PAPPULA KUSUMA LATHA (18B01A05A9)
SANKA SRI NAGA VARDHINI VYSHNAVI (18B01A05B7)
KASANI RUPA SRI (19B01A0508)
ABSTRACT

There have been several advancements in technology, and a great deal of research has
been done to help people who are deaf and mute. The purpose of this project is to
design a convenient system that detects the visual-gestural language used by deaf and
hard-of-hearing people for communication. One must learn sign language to interact
with them, yet most existing tools for sign language learning use external sensors,
which are costly. Because of this, learning sign language is a difficult task.

Our project aims to take a step forward in this field using deep learning and Python. In
this approach, a dataset is collected and the useful information extracted from it is fed
into supervised learning techniques. Computer recognition of sign language starts with
sign gesture acquisition and continues through text generation, followed by conversion
of that text into speech. First, images are collected for learning using a webcam and
OpenCV. Then the images are labeled for sign language detection using labelImg. Next,
we build a CNN model and train it with the training dataset, and finally the language is
detected using OpenCV and the webcam.

Ideally, the result is a real-time object detection system that can detect different sign
language poses based on the American Sign Language system.

Keywords: deep learning, sign language recognition, hand gesture


recognition, gesture analysis, CNN, OpenCV.
Table of Contents
1. INTRODUCTION
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
2.2 PROPOSED SYSTEM
2.3 FEASIBILITY STUDY
3. SYSTEM REQUIREMENTS SPECIFICATION
3.1 SOFTWARE REQUIREMENTS
3.2 HARDWARE REQUIREMENTS
4. SYSTEM DESIGN
4.1 INTRODUCTION
4.2 UML DIAGRAMS
4.2.1 USE CASE DIAGRAM
4.2.2 CLASS DIAGRAM
5. SYSTEM IMPLEMENTATION
5.1 INTRODUCTION
5.2 PROJECT MODULES
5.3 ALGORITHMS USED
5.4 SCREENS
6. SYSTEM TESTING
6.1 INTRODUCTION
6.2 TESTING METHODS
6.3 TESTING TYPES
6.4 TEST CASES
7. CONCLUSION
8. BIBLIOGRAPHY
9. APPENDIX
9.1 INTRODUCTION TO MACHINE LEARNING
9.2 PYTHON
9.3 PYTHON PACKAGES
1. INTRODUCTION

Sign language is the means of communication among the deaf and mute community. It
emerges and evolves naturally within the hearing-impaired community. Sign language
communication involves manual and non-manual signals: manual signs involve the
fingers, hands and arms, while non-manual signs involve the face, head, eyes and body.
Sign language is a well-structured language with its own phonology, morphology, syntax
and grammar. It is a complete natural language that uses different modes of expression
for communication in everyday life. Sign language recognition systems extend
communication from human-to-human to human-to-computer interaction. The aim of a
sign language recognition system is to provide an efficient and accurate mechanism to
transcribe signs into text or speech, so that "dialog communication" between deaf and
hearing people is smooth.

There are two approaches to this problem:

1) Sensor-Based Approach: This approach collects gesture data using different sensors.
The data is then analyzed and conclusions are drawn in accordance with the recognition
model. For hand gesture recognition, different types of sensors are placed on the hand;
when the hand performs a gesture, the data is recorded and then analyzed further.
However, this approach hampers the natural motion of the hand because of the
external hardware. Its major disadvantage is that complex gestures cannot be
recognized with this method.

2) Vision-Based Approach: This approach takes images from a camera as gesture data.
The vision-based method concentrates on capturing images of gestures, extracting their
main features, and recognizing them. This approach is more convenient to use, as it
does not require the user to wear any gadgets.

The goal of this project was to build a neural network able to classify which sign
language gesture is being signed, given an image of a signing hand. This project is a first
step towards building a sign language translator, which can take communications in sign
language and translate them into written language. Such a translator would greatly
lower the barrier for many deaf and mute individuals to communicate better with
others in day-to-day interactions.

Minimizing the communication gap between D&M and non-D&M people has become a
necessity to ensure effective communication for all. Sign language translation is one of
the fastest-growing lines of research, and it enables the most natural manner of
communication for those with hearing impairments. A hand gesture recognition system
offers deaf people an opportunity to talk with hearing people without the need of an
interpreter. The system is built for the automated conversion of ASL into text and
speech.

This goal is further motivated by the isolation felt within the deaf community.
Loneliness and depression exist at higher rates among the deaf population, especially
when they are immersed in a hearing world. Large barriers that profoundly affect
quality of life stem from the communication disconnect between the deaf and the
hearing. Some examples are information deprivation, limitation of social connections,
and difficulty in integrating into society.

In our project we primarily focus on producing a model which can recognize
fingerspelling-based hand gestures in order to form a complete word by combining
each gesture. The gestures we aim to train are as given in the image below.
Therefore, to enable dynamic communication, we present a sign language
recognition system that uses Convolutional Neural Networks (CNN) in real time to
translate an image of a user's signs into text.

Objectives:

● Collection of images for the dataset with a webcam using OpenCV and Python.

● Labelling of the collected images using the LabelImg package.

● Building a CNN model and training it with the training dataset.

● Integrating the model with the GUI.

● Real-time translation of sign language to text from a live feed.

● Finally, converting the predicted text into speech.
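The first objective above can be sketched as a small capture script. The gesture labels, folder layout, image count, and delays below are illustrative assumptions, not the project's actual settings; `cv2` is imported inside the capture function so the path helper can be used without OpenCV installed.

```python
import os
import time

def image_path(base_dir, label, index):
    """Build a destination path like base_dir/hello/hello_0.jpg for a labeled capture."""
    return os.path.join(base_dir, label, f"{label}_{index}.jpg")

def collect_images(labels, images_per_label=15, base_dir="collected_images"):
    """Capture frames from the default webcam for each gesture label."""
    import cv2  # imported here so the helper above works without OpenCV
    cap = cv2.VideoCapture(0)
    for label in labels:
        os.makedirs(os.path.join(base_dir, label), exist_ok=True)
        print(f"Show the sign for '{label}'...")
        time.sleep(3)  # give the signer time to pose
        for i in range(images_per_label):
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(image_path(base_dir, label, i), frame)
            time.sleep(1)  # pause between captures for pose variation
    cap.release()

# collect_images(["hello", "thanks", "yes", "no"]) would start a capture session
```

The per-label subfolders keep the dataset ready for labelling with LabelImg in the next step.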


2. SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

With the growth of machine learning, computer vision techniques can be divided into
traditional methods and machine learning methods. This section describes related work
on converting sign language to text and how machine learning methods improve on
traditional methods. The existing system for this task is the sensor-based method.

The sensor-based approach collects gesture data using different sensors. The data is
then analyzed and conclusions are drawn in accordance with the recognition model. For
hand gesture recognition, different types of sensors are placed on the hand; when the
hand performs a gesture, the data is recorded and then analyzed further. However, this
approach hampers the natural motion of the hand because of the external hardware.
Its major disadvantage is that complex gestures cannot be recognized with this method.

Disadvantages:
⦁ Sensors must be worn on the hand every time.
⦁ High cost.
⦁ Low accuracy.
⦁ Complex gestures cannot be recognized.
⦁ No speech conversion is present.

2.2 PROPOSED SYSTEM

In our project we primarily focus on producing a CNN model which can recognize sign
language gestures and convert them into text. The predicted text is then converted to
speech, so that visually challenged people are also able to understand the gesture. Both
the text and the audio are presented in our application.

Block Diagram :

Advantages:
⦁ Accuracy is good.
⦁ Low complexity.
⦁ Highly efficient.
⦁ Complex gestures can also be detected.
⦁ Speech conversion is present.
2.3 Feasibility Study

Generally, a feasibility study is used to determine the resource cost and benefits, and
whether the proposed system is feasible with respect to the organization. The
feasibility of the proposed system is assessed as follows. Three types of feasibility, all
equally important, are considered:
1. Technical feasibility
2. Economic feasibility
3. Behavioural feasibility

Technical Feasibility

Technical feasibility deals with the existing technology, software and hardware
requirements for the proposed system. The proposed system "Sign Language
Detection" is planned to run on Python, using Jupyter Notebook and OpenCV. Thus, the
project is considered technically feasible for development. The work for the project can
be done with current equipment, existing software technology and available personnel.
Hence the proposed system is technically feasible.

Economic Feasibility

This method is most frequently used for evaluating the effectiveness of a project; it is
also called cost-benefit analysis. The project "Sign Language Detection" is developed on
current equipment with existing software technology. Since the required hardware and
software for developing the system are already available in the organization, developing
the proposed system does not cost much.

Behavioural Feasibility

This project has been implemented in Python, and it satisfies all the conditions and
norms of the organization and the users. The proposed "Sign Language Detection"
system has good behavioural feasibility because users are provided with a better
facility.
3 SYSTEM REQUIREMENTS SPECIFICATION

3.1 Software Requirements

• PYTHON AND JUPYTER NOTEBOOK

Python is a high-level, interpreted, interactive and object-oriented scripting language.
Python is designed to be highly readable: it uses English keywords frequently where
other languages use punctuation, and it has fewer syntactic constructions than other
languages. Python runs on an interpreter system, meaning that code can be executed as
soon as it is written, which makes prototyping very quick. Python can be used in a
procedural, object-oriented or functional way.

The Jupyter Notebook is an open-source web application that allows you to create and
share documents that contain live code, equations, visualizations and narrative text.
Uses include data cleaning and transformation, numerical simulation, statistical
modeling, data visualization, machine learning, and much more.

• OPENCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and
machine learning software library. OpenCV was built to provide a common
infrastructure for computer vision applications and to accelerate the use of machine
perception in commercial products. Being a BSD-licensed product, OpenCV makes it
easy for businesses to utilize and modify the code.

The library has more than 2500 optimized algorithms, which includes a
comprehensive set of both classic and state-of-the-art computer vision
and machine learning algorithms. These algorithms can be used to detect
and recognize faces, identify objects, classify human actions in videos,
track camera movements, track moving objects, extract 3D models of
objects, produce 3D point clouds from stereo cameras, stitch images
together to produce a high resolution image of an entire scene, find similar
images from an image database, remove red eyes from images taken
using flash, follow eye movements, recognize scenery and establish
markers to overlay it with augmented reality.

• CNN

A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can
take in an input image, assign importance (learnable weights and biases) to various
aspects/objects in the image, and differentiate one from the other. The pre-processing
required in a ConvNet is much lower compared to other classification algorithms. While
in primitive methods filters are hand-engineered, with enough training, ConvNets are
able to learn these filters/characteristics.

3.2 Hardware Requirements

RAM – 1 GB
Processor – Intel Core i5
Hard Disk – 512 GB
4 SYSTEM DESIGN

4.1 Introduction

System design is the process of designing the elements of a system such as the
architecture, modules and components, the different interfaces of those components,
and the data that goes through that system.

System analysis is the process that decomposes a system into its component pieces in
order to define how well those components interact to accomplish the set
requirements. The purpose of the system design process is to provide sufficiently
detailed data and information about the system and its elements to enable an
implementation consistent with the architectural entities defined in the models and
views of the system architecture.

The purpose of the design phase is to plan a solution to the problem specified by the
requirements document. This phase is the first step in moving from the problem
domain to the solution domain. The design of a system is perhaps the most critical
factor affecting the quality of the software, and it has a major impact on the later
phases, particularly testing and maintenance. The output of this phase is the design
document, which is similar to a blueprint or plan for the solution and is used later
during implementation, testing and maintenance.

The design activity is often divided into two separate phases: system design and
detailed design. System design, sometimes also called top-level design, aims to identify
the modules that should be in the system, the specifications of these modules, and how
they interact with each other to produce the desired results. At the end of system
design, all the major data structures, file formats, output formats, and the major
modules in the system and their specifications are decided.

A design methodology is a systematic approach to creating a design by applying a set of
techniques and guidelines. Most methodologies focus on system design. The two basic
principles used in any design methodology are problem partitioning and abstraction. A
large system cannot be handled as a whole, so for design it is partitioned into smaller
systems. Abstraction is a concept related to problem partitioning. When partitioning is
used during design, the design activity focuses on one part of the system at a time.
Since the part being designed interacts with other parts of the system, a clear
understanding of the interaction is essential for properly designing the part.

4.2 UML Diagrams

UML diagrams form a rich visual model for representing the system architecture and
design. These diagrams help us understand the flow of the system. Some of them are:

• Use case diagram

• Sequence diagram

• Collaboration diagram

• State chart diagram

USE CASE DIAGRAMS

A use case diagram in the Unified Modelling Language (UML) is a type of behavioural
diagram defined by and created from a use-case analysis. Its purpose is to present a
graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases.

The main purpose of a use case diagram is to show which system functions are
performed for which actor. The roles of the actors in the system can be depicted.
Interaction among actors is not shown on the use case diagram. If this interaction is
essential to a coherent description of the desired behaviour, perhaps the system or use
case boundaries should be re-examined. Alternatively, interaction among actors can be
part of the assumptions used in the use case.

Use cases:
A use case describes a sequence of actions that provide something of measurable value
to an actor and is drawn as a horizontal ellipse.

Actors:
An actor is a person, organization, or external system that plays a role in one or more
interactions with the system.

System boundary boxes:

A rectangle is drawn around the use cases, called the system boundary box, to indicate
the scope of the system. Anything within the box represents functionality that is in
scope, and anything outside the box is not.

Four relationships among use cases are used often in practice.

Include:
In one form of interaction, a given use case may include another. "Include" is a directed
relationship between two use cases, implying that the behaviour of the included use
case is inserted into the behaviour of the including use case. The first use case often
depends on the outcome of the included use case. This is useful for extracting truly
common behaviours from multiple use cases into a single description. The notation is a
dashed arrow from the including to the included use case, with the label «include».
There are no parameters or return values. To specify the location in a flow of events at
which the base use case includes the behaviour of another, you simply write "include"
followed by the name of the use case you want to include.

Extend:

In another form of interaction, a given use case (the extension) may extend another.
This relationship indicates that the behaviour of the extension use case may be inserted
into the extended use case under some conditions. The notation is a dashed arrow from
the extension to the extended use case, with the label «extend». Modellers use the
«extend» relationship to indicate use cases that are "optional" to the base use case.

Generalization:

In the third form of relationship among use cases, a generalization/specialization
relationship exists. A given use case may have common behaviours, requirements,
constraints, and assumptions with a more general use case. In this case, describe them
once, and deal with them in the same way, describing any differences in the specialized
cases. The notation is a solid line ending in a hollow triangle drawn from the specialized
to the more general use case (following the standard generalization notation).

Associations:

Associations between actors and use cases are indicated in use case diagrams by solid
lines. An association exists whenever an actor is involved in an interaction described by
a use case. Associations are modelled as lines connecting use cases and actors to one
another, with an optional arrowhead on one end of the line. The arrowhead is often
used to indicate the direction of the initial invocation of the relationship, or to indicate
the primary actor within the use case.

USE CASE DIAGRAM FOR SIGN LANGUAGE DETECTION:


CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language is a type of
static structure diagram that describes the structure of a system by showing the
system's classes, their attributes, operations and the relationships among objects.
Purpose of Class Diagrams

• Shows the static structure of classifiers in a system

• Provides a basic notation for other structure diagrams prescribed by UML

• Helpful for developers and other team members too

• Business analysts can use class diagrams to model systems from a business perspective

A UML class diagram is made up of:

• A set of classes, and

• A set of relationships between classes

CLASS NAMES: User, System

CLASS DIAGRAM FOR SIGN LANGUAGE DETECTION:


5. SYSTEM IMPLEMENTATION

5.1 Introduction

American Sign Language is a predominant sign language. Since the only disability deaf
and mute (hereby referred to as D&M) people have is communication-related, and
since they cannot use spoken languages, the only way for them to communicate is
through sign language. Communication is the process of exchanging thoughts and
messages in various ways such as speech, signals, behaviour and visuals. D&M people
make use of their hands to express different gestures and share their ideas with other
people. Gestures are non-verbally exchanged messages, and these gestures are
understood with vision. This non-verbal communication of deaf and mute people is
called sign language. A sign language is a language which uses gestures instead of sound
to convey meaning, combining hand shapes, orientation and movement of the hands,
arms or body, facial expressions and lip patterns. Contrary to popular belief, sign
language is not international; it varies from region to region.

5.2 Project Modules

• Building and training the model

This module consists of preparing the dataset and pre-processing it. The dataset is then
divided into a training dataset and a testing dataset, and a CNN model is built. After
building the model, it is trained with the training dataset.
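The dataset-splitting step in this module can be sketched with a plain NumPy shuffle. The 80/20 ratio, the fixed seed, and the toy array shapes are assumptions for illustration, not the project's actual configuration.

```python
import numpy as np

def train_test_split(images, labels, test_fraction=0.2, seed=42):
    """Shuffle the dataset and split it into training and testing portions."""
    rng = np.random.default_rng(seed)
    n = len(images)
    order = rng.permutation(n)           # random order of sample indices
    n_test = int(n * test_fraction)      # size of the held-out test set
    test_idx, train_idx = order[:n_test], order[n_test:]
    return (images[train_idx], labels[train_idx],
            images[test_idx], labels[test_idx])

# Example with a toy dataset of ten dummy 2x2 "images"
X = np.arange(10 * 4).reshape(10, 2, 2)
y = np.arange(10)                        # one label per image
X_train, y_train, X_test, y_test = train_test_split(X, y)
print(X_train.shape, X_test.shape)       # (8, 2, 2) (2, 2, 2)
```

Shuffling before splitting keeps all gesture classes represented in both portions when the raw dataset is stored class by class.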

• Live detection module

In this module, the user can directly turn on their camera and start showing gestures;
these gestures are then given to the model, and the model translates each gesture into
text. The user can see the translated sign as text on the screen.

• Text-to-speech conversion module

The user is also able to hear the translated text as speech. Both the text and the audio
are presented to the user, i.e., whatever text appears on the screen, the user is able to
hear it.
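The text-to-speech module can be sketched as below. Collapsing runs of identical per-frame predictions into single letters is shown as a pure helper; the `pyttsx3` engine is one common offline choice (an assumption, since the report does not name its TTS library), so it is kept behind a lazy import.

```python
def assemble_text(frame_predictions, blank="nothing"):
    """Collapse runs of identical per-frame predictions into single letters,
    dropping the blank class. (Under this scheme a double letter needs a
    blank frame between its two occurrences.)"""
    out = []
    prev = None
    for p in frame_predictions:
        if p != prev and p != blank:
            out.append(p)
        prev = p
    return "".join(out)

def speak(text):
    """Read the assembled text aloud (assumes pyttsx3 is installed)."""
    import pyttsx3  # lazy import so assemble_text works without the library
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

word = assemble_text(["H", "H", "I", "I", "nothing"])
print(word)  # HI
# speak(word) would read the word aloud on a machine with audio output
```

Keeping the assembly step separate from the audio call makes the text path easy to test without a speaker.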

5.3 ALGORITHMS USED:

CONVOLUTIONAL NEURAL NETWORKS:

A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can
take in an input image, assign importance (learnable weights and biases) to various
aspects/objects in the image, and differentiate one from the other. The pre-processing
required in a ConvNet is much lower compared to other classification algorithms. While
in primitive methods filters are hand-engineered, with enough training, ConvNets are
able to learn these filters/characteristics.

The architecture of a ConvNet is analogous to the connectivity pattern of neurons in the
human brain and was inspired by the organization of the visual cortex. Individual
neurons respond to stimuli only in a restricted region of the visual field known as the
receptive field. A collection of such fields overlaps to cover the entire visual area.

Generally, a Convolutional Neural Network is built from the following layers:

● Input: If the image has a width of 32 and a height of 32 and encompasses three
channels (R, G, B), then this layer holds the raw pixel values of the image ([32x32x3]).
● Convolution: This computes the output of neurons connected to local regions of the
input; each neuron calculates a dot product between its weights and the small region of
the input volume it is linked to. For example, if we choose to apply 12 filters, the result
is a volume of [32x32x12].
● ReLU Layer: This applies an activation function elementwise, such as max(0, x)
thresholding at zero. The size of the volume is unchanged ([32x32x12]).
● Pooling: This layer performs a downsampling operation along the spatial dimensions
(width, height), resulting in a [16x16x12] volume.
● Locally Connected: This is a regular neural network layer that receives input from the
preceding layer, computes the class scores, and produces a 1-dimensional array whose
size equals the number of classes.
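The shape arithmetic in the list above ([32x32x3] input, 12 filters giving [32x32x12], 2x2 pooling giving [16x16x12]) can be checked with a tiny helper. "Same" padding is assumed here, since the example keeps the spatial size through the convolution.

```python
def conv_output_shape(h, w, c, n_filters, padding="same"):
    """Shape after a same-padded convolution: spatial size kept, depth = filters."""
    return (h, w, n_filters) if padding == "same" else None

def pool_output_shape(h, w, c, pool=2):
    """Shape after pool x pool pooling: spatial size divided, depth kept."""
    return (h // pool, w // pool, c)

shape = (32, 32, 3)                    # input image
shape = conv_output_shape(*shape, 12)  # convolution with 12 filters -> (32, 32, 12)
shape = pool_output_shape(*shape)      # 2x2 pooling -> (16, 16, 12)
flat = shape[0] * shape[1] * shape[2]  # flattened vector fed to the final layers
print(shape, flat)                     # (16, 16, 12) 3072
```

Tracing shapes this way is a quick sanity check before wiring the layers together in a framework.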

We start with an input image, to which we apply multiple feature detectors, also called
filters, to create the feature maps that make up the convolution layer. On top of that
layer, we apply the ReLU (Rectified Linear Unit) to increase the non-linearity in our
images.

Next, we apply a pooling layer to the convolutional layer, so that from every feature
map we create a pooled feature map; the main purpose of the pooling layer is to ensure
spatial invariance in our images. It also helps to reduce the size of the images and to
avoid overfitting our data. After that, we flatten all the pooled feature maps into one
long vector of values and feed these values into the artificial neural network. Lastly, we
feed them into the locally connected layer to obtain the final output.
1. Convolution Layer:

In the convolution layer we take a small window [typically of size 5x5] that extends
through the depth of the input matrix. The layer consists of learnable filters of that
window size. During every iteration we slide the window by the stride size [typically 1]
and compute the dot product of the filter entries and the input values at the given
position.

As we continue this process we create a 2-dimensional activation map that gives the
response of the filter at every spatial position. That is, the network learns filters that
activate when they see some type of visual feature, such as an edge of some orientation
or a blotch of some colour.
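The sliding-window dot product described above can be written directly in NumPy. This is a single-channel, stride-1, no-padding sketch for clarity, not the project's actual implementation; the edge-detector kernel is an illustrative example.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, taking a dot product at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height (no padding, stride 1)
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel responds where pixel intensity changes left to right.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1, 1],
                 [-1, 1]], dtype=float)
print(conv2d(img, edge))  # large values only at the 0 -> 1 boundary
```

The activation map is largest exactly where the feature the kernel encodes appears, which is the behaviour the paragraph above describes.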

2. Pooling Layer:

We use a pooling layer to decrease the size of the activation matrix and ultimately
reduce the number of learnable parameters. There are two types of pooling:

a. Max Pooling: In max pooling we take a window [for example, of size 2x2] and keep
only the maximum of its 4 values. We slide this window and continue this process,
finally obtaining an activation matrix half its original size.
b. Average Pooling: In average pooling, we take the average of all values in the window.
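Both pooling variants can be sketched in NumPy. A 4x4 activation matrix pooled with a non-overlapping 2x2 window halves each spatial dimension, as stated above; the input values are arbitrary illustrations.

```python
import numpy as np

def pool2d(a, size=2, mode="max"):
    """Non-overlapping size x size pooling over a 2-D activation matrix."""
    h, w = a.shape[0] // size, a.shape[1] // size
    # Group the matrix into size x size blocks, then reduce each block.
    blocks = a[:h*size, :w*size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

a = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 2],
              [1, 1, 2, 2]], dtype=float)
print(pool2d(a))             # max of each 2x2 block -> [[4, 8], [9, 2]]
print(pool2d(a, mode="avg")) # mean of each 2x2 block
```

Max pooling keeps the strongest response in each block, which is why it preserves detected features while shrinking the matrix.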

3. Fully Connected Layer:

In a convolution layer, neurons are connected only to a local region, while in a fully
connected layer, we connect all the inputs to every neuron.

4. Final Output Layer:

After getting values from the fully connected layer, we connect them to the final layer
of neurons [with a count equal to the total number of classes], which predicts the
probability of each image belonging to the different classes.
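The class probabilities in the final layer are typically obtained by applying softmax to the dense-layer scores; a minimal NumPy version follows (the four example scores are illustrative, one per hypothetical gesture class).

```python
import numpy as np

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0])  # one raw score per gesture class
probs = softmax(scores)
print(probs.argmax())  # index of the predicted class -> 0
```

The predicted gesture is simply the class with the highest probability, which is what the live detection module displays as text.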

5.4 SCREENS:
Images of Dataset for class P

CNN-Model
Home Screen Image
Live Translation of Gesture to Text
6. SYSTEM TESTING

6.1. Introduction:

Software testing is an important element of software quality assurance and represents
the ultimate review of specification, design and coding. The increasing visibility of
software as a system element, and the costs associated with software failure, are
motivating forces for well-planned, thorough testing.

TESTING OBJECTIVES

There are several rules that can serve as testing objectives:

• Testing is a process of executing a program with the intent of finding an error.

• A good test case is one that has a high probability of finding an undiscovered
error.

Test Levels

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product, and
to ensure the software system meets its requirements and user expectations and does
not fail in an unacceptable manner. There are various types of test; each test type
addresses a specific testing requirement.

6.2. Testing Methods

Unit Testing

Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly, and that program inputs
produce valid outputs. All decision branches and internal code flow
should be validated. It is the testing of individual software units of the
application.

Integration Testing

Integration tests are designed to test integrated software components to determine
whether they run as one program. Testing is event-driven and is more concerned with
the basic outcome of screens or fields.

Functional Testing

Functional tests provide systematic demonstrations that the functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals. Organization and preparation of functional tests is
focused on requirements, key functions, or special test cases.

System Testing

System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system
testing is the configuration-oriented system integration test.

White Box Test

White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used to
test areas that cannot be reached from a black-box level.

Black Box Test

Black Box Testing is testing the software without any knowledge of
the inner workings, structure or language of the module being tested.
Black box tests, like most other kinds of tests, must be written from a
definitive source document, such as a specification or requirements
document.

Unit Testing

Unit testing is usually conducted as part of a combined code and
unit test phase of the software lifecycle, although it is not uncommon for
coding and unit testing to be conducted as two distinct phases.

Integration Testing

Software integration testing is the incremental integration testing
of two or more integrated software components on a single platform to
expose failures caused by interface defects.
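A minimal sketch of such an incremental integration check, wiring together two invented stand-in components (neither name comes from the project code), might look like:

```python
def capture_frame():
    # Stand-in for a camera capture step; returns a dummy 2x2 grayscale frame.
    return [[0, 128], [255, 64]]

def classify_frame(frame):
    # Stand-in classifier: labels a frame by its mean brightness.
    flat = [p for row in frame for p in row]
    return "bright" if sum(flat) / len(flat) > 127 else "dark"

def test_capture_to_classify_interface():
    # Integration test: the output of capture_frame must be accepted
    # directly by classify_frame and yield one of the known labels.
    label = classify_frame(capture_frame())
    assert label in ("bright", "dark")

test_capture_to_classify_interface()
```

The point of the test is the interface: a defect in the frame format returned by the first component would surface here, even if each component passes its own unit tests.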

Acceptance Testing

User Acceptance Testing is a critical phase of any project and
requires significant participation by the end user.
7. CONCLUSION

In this report, a functional real-time vision-based American Sign Language
recognition system for deaf and mute people has been developed for ASL alphabets.
We achieved a final accuracy of 90.0% on our data set. We improved our
prediction after implementing two layers of algorithms, in which we
verify and predict symbols that are more similar to each other.
This gives us the ability to detect almost all the symbols, provided that they
are shown properly, there is no noise in the background, and lighting is
adequate.
8. FUTURE SCOPE

• It can be integrated with various search engines and messaging applications
such as Google and WhatsApp, so that even illiterate people could chat with
other persons, or query something from the web, just with the help of gestures.
• This project currently works on images; further development can lead to
detecting motion in video sequences and converting it to a meaningful
sentence with TTS assistance.
• Presently our system is applicable only to ASL, but it can be extended to
other sign language recognition systems, and various gestures can also be
added.
9. BIBLIOGRAPHY

[1] T. Yang and Y. Xu, "Hidden Markov Model for Gesture Recognition", CMU-RI-TR-94-10,
Robotics Institute, Carnegie Mellon Univ., Pittsburgh, PA, May 1994.
[2] Pujan Ziaie, Thomas Müller, Mary Ellen Foster, and Alois Knoll, "A Naïve Bayes …",
Munich, Dept. of Informatics VI, Robotics and Embedded Systems, Boltzmannstr. 3,
DE-85748 Garching, Germany.
[3]https://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilater
al_filter/gausian_median_blur_bilateral_filter.html
[4] Mohammed Waleed Kalous, Machine recognition of Auslan signs using
PowerGloves: Towards large-lexicon recognition of sign language.
[5]aeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-
Neural Networks-Part-2/
[6] http://www-i6.informatik.rwth-aachen.de/~dreuw/database.php
[7] Pigou L., Dieleman S., Kindermans PJ., Schrauwen B. (2015) Sign Language
Recognition Using Convolutional Neural Networks. In: Agapito L., Bronstein M.,
Rother C. (eds) Computer Vision - ECCV 2014 Workshops. ECCV 2014. Lecture Notes
in Computer Science, vol 8925. Springer, Cham
[8] Zaki, M.M., Shaheen, S.I.: Sign language recognition using a combination of new
vision-based features. Pattern Recognition Letters 32(4), 572–577 (2011).

[9] N. Mukai, N. Harada and Y. Chang, "Japanese Fingerspelling Recognition
Based on Classification Tree and Machine Learning," 2017 Nicograph International
(NicoInt), Kyoto, Japan, 2017, pp. 19-24. doi:10.1109/NICOInt.2017.9
[10] Byeongkeun Kang, Subarna Tripathi, Truong Q. Nguyen, "Real-time sign
language fingerspelling recognition using convolutional neural networks from depth
map," 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR).
[11] Number System Recognition (https://github.com/chasinginfinity/number-sign-
recognition)
[12] https://opencv.org/
[13] https://en.wikipedia.org/wiki/TensorFlow
[14] https://en.wikipedia.org/wiki/Convolutional_neural_nework
[15] http://hunspell.github.io/
10. APPENDIX:

INTRODUCTION TO PYTHON

Python
What is Python? Chances are you are asking yourself this. You may have found this
report because you want to learn to program but don't know anything about programming
languages. Or you may have heard of programming languages like C, C++, C#, or
Java and want to know what Python is and how it compares to those "big name" languages.
Hopefully I can explain it to you.
Python concepts
If you're not interested in the hows and whys of Python, feel free to skip to the next
section. Here I will try to explain why I think Python is one
of the best languages available and why it's a great one to start programming with.
• Open-source, general-purpose language
• Object-oriented, procedural, functional
• Easy to interface with C/ObjC/Java/Fortran
• Easy to interface with C++ (via SWIG)
• Great interactive environment
Python is a high-level, interpreted, interactive and object-oriented scripting
language. Python is designed to be highly readable. It uses English keywords
frequently whereas other languages use punctuation, and it has fewer syntactic
constructions than other languages.
• Python is Interpreted − Python is processed at runtime by the interpreter. You
do not need to compile your program before executing it. This is similar to Perl and
PHP.
• Python is Interactive − You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
• Python is Object-Oriented − Python supports an object-oriented style or
technique of programming that encapsulates code within objects.
• Python is a Beginner's Language − Python is a great language for
beginner-level programmers and supports the development of a wide range of
applications, from simple text processing to WWW browsers to games.
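As a tiny illustration of this readability and the absence of a compile step (the script below is an invented example, not project code), the following lines run directly with the interpreter:

```python
# No compilation step: this script runs directly with `python script.py`.
# English-like keywords and a built-in join method keep the code readable.
words = ["sign", "language", "detection"]
title = " ".join(w.capitalize() for w in words)
print(title)  # -> Sign Language Detection
```
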

NumPy
NumPy's main object is the homogeneous multidimensional array. It is a table of
elements (usually numbers), all of the same type, indexed by a tuple of positive
integers. In NumPy, dimensions are called axes. The number of axes is the rank.
• Offers MATLAB-ish capabilities within Python
• Fast array operations
• 2D arrays, multi-D arrays, linear algebra etc.
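The notions of axes and rank described above can be seen in a short example (a generic sketch, not taken from the project code):

```python
import numpy as np

# A 2-D array (rank 2): two axes, indexed by a tuple of integers.
a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(a.ndim)   # number of axes (rank) -> 2
print(a.shape)  # length along each axis -> (2, 3)
print(a * 2)    # fast elementwise operation, no explicit Python loop
```
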

Matplotlib
• High-quality plotting library.
Python class and objects
These are the building blocks of OOP. Class creates a new object. This object can be
anything, whether an abstract data concept or a model of a physical object, e.g. a
chair. Each class has individual characteristics unique to that class, including
variables and methods. Classes are very powerful and currently "the big thing" in
most programming languages.
The class is the most basic component of object-oriented programming. Previously,
you learned how to use functions to make your program do something.
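The chair example mentioned above can be sketched in a few lines (the class and its members are illustrative, not part of the project code):

```python
class Chair:
    """A model of a physical object: state plus behaviour bundled together."""

    def __init__(self, legs=4):
        # Instance variable: a characteristic unique to each object.
        self.legs = legs

    def describe(self):
        # Method: behaviour attached to the class.
        return f"A chair with {self.legs} legs"

office_chair = Chair(legs=5)
print(office_chair.describe())  # -> A chair with 5 legs
```

Each instance carries its own data (`legs`), while the method `describe` is shared by all instances of the class.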
