Final Group 1
A PROJECT REPORT
Submitted by
KIRAN KUMAR B (408CS19010)
BHARATH KUMAR V (408CS19003)
HARSHITHA S (408CS19006)
KARTHIK SHETTY K (408CS19008)
KOUSHIK P (408CS19012)
in partial fulfilment for the award of the diploma
of
DIPLOMA IN COMPUTER SCIENCE & ENGINEERING
PROGRAMME
IN
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
BY
KIRAN KUMAR B (408CS19010)
BHARATH KUMAR V (408CS19003)
HARSHITHA S (408CS19006)
KARTHIK SHETTY K (408CS19008)
KOUSHIK P (408CS19012)
Date:
Place: Bengaluru
HINDUSTAN ELECTRONICS ACADEMY
POLYTECHNIC
Diploma in Computer Science and Engineering
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
DEPARTMENT OF TECHNICAL EDUCATION
HINDUSTAN ELECTRONICS ACADEMY
POLYTECHNIC
Chinnapanahalli, Marathahalli Post, Bengaluru-560037
Department of Computer Science and Engineering
CERTIFICATE
We certify that this project report entitled “AI VIRTUAL MOUSE”, which is being
submitted by KIRAN KUMAR B, BHARATH KUMAR V, HARSHITHA S,
KARTHIK SHETTY K and KOUSHIK P, Reg. Nos. 408CS19010, 408CS19003,
408CS19006, 408CS19008 and 408CS19012, bonafide students of Hindustan
Electronics Academy Polytechnic, in partial fulfilment for the award of the Diploma in
Computer Science and Engineering during the year 2021-2022, is a record of the
students' own work carried out under my/our guidance. It is certified that all
corrections/suggestions indicated for internal assessment have been incorporated in the
report and one copy of it has been deposited in the polytechnic library.
The project report has been approved as it satisfies the academic requirements in respect
of the project work prescribed for the said diploma.
It is further understood that by this certificate the undersigned do not endorse or approve
any statement made, opinion expressed or conclusion drawn therein, but approve the
project only for the purpose for which it is submitted.
1. 2.
ACKNOWLEDGEMENT
We are extremely grateful to our beloved Principal, Mr. S. Ramakrishna Reddy, HEA
Polytechnic, for his kind co-operation.
We thank our project guide, Mrs. Anitta Mathew, Lecturer in the Computer Science and
Engineering Department, HEA Polytechnic, for her valuable guidance during the
course of the project and her continuous support in completing the project successfully.
Last but not least, we thank our parents, family members and friends for their great
support and encouragement throughout this project work.
LIST OF FIGURES
ABSTRACT
TABLE OF CONTENTS
CANDIDATE DECLARATION ................................................................................. i
PROJECT GUIDE CERTIFICATE ......................................................................... ii
CERTIFICATE ........................................................................................................... iii
ACKNOWLEDGEMENT .......................................................................................... iv
LIST OF FIGURES .................................................................................................... v
ABSTRACT ................................................................................................................. vi
CHAPTER 1
INTRODUCTION
A virtual mouse is software that allows users to give mouse inputs to a system without using
a physical mouse. In a broader sense it also involves hardware, since it relies on an ordinary
web camera. A virtual mouse can usually be operated alongside multiple input devices, which
may include an actual mouse or a computer keyboard. A virtual mouse that uses a web camera
works with the help of different image-processing techniques.
In this system, the hand movements of a user are mapped to mouse inputs. A web camera is set
to capture images continuously. Most laptops today are equipped with webcams, which have
recently been used in security applications utilizing face recognition. In order to harness the full
potential of a webcam, it can be used for vision-based cursor control (CC), which would
effectively eliminate the need for a computer mouse or mouse pad. The usefulness of a webcam
can also be greatly extended to other HCI applications such as a sign language database or
motion controller. Over the past decades there have been significant advancements in HCI
technologies for gaming purposes, such as the Microsoft Kinect and Nintendo Wii. These gaming
technologies provide a more natural and interactive means of playing video games. Motion
control is the future of gaming and has tremendously boosted the sales of video games; the
Nintendo Wii, for example, sold over 50 million consoles within a year of its release. HCI using
hand gestures is very intuitive and effective for one-to-one interaction with computers, and it
provides a Natural User Interface (NUI). There has been extensive research towards novel
devices and techniques for cursor control using hand gestures. Besides HCI, hand gesture
recognition is also used in sign language recognition, which makes it even more significant.
3. Modeling:
• Data Design: Here we collect all the required data, such as how many finger inputs are
required and how many fingers are needed for motion tracking. We also decide how many
components we want to display on the UI.
• System Models: The system should have web-camera support to trace hand motion. It
should also have the required Python libraries preinstalled so that they are easy to access.
• UI Design: To implement the user interface, we first decide the environment required for
developing the application. After that, we implement the design according to the
architecture.
4. Specification: • Use cases: After developing this application, users should be able to access
their system through the motion-tracking application.
• Users are able to use this application while maintaining eye distance between themselves
and their device. Also, during streaming they can access their system without any movement
of their body.
5. Review: We are developing an application that is a combination of AI and the web. After
completing this project, users can access their system with the help of their fingers by using
the system's camera.
CHAPTER 2
OBJECTIVES
CHAPTER 3
SYSTEM REQUIREMENTS
This chapter provides the hardware requirements, the software functional and non-functional
requirements, and the deployment environment.
2 GB RAM (Minimum)
Python: To access the camera and track all hand motion, Python is easy and
accurate to use. Python comes with many built-in libraries, which keep the code
short and easily understandable. The Python version required for building this
application is 3.7.
OpenCV Library: OpenCV is also used in the making of this program.
OpenCV (Open Source Computer Vision) is a library of programming functions
for real-time computer vision. OpenCV has utilities that can read image
pixel values, and it also makes it possible to build real-time eye tracking and blink
detection.
Tkinter: The tkinter package is the standard Python interface to the Tk GUI
toolkit.
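As a hedged illustration rather than part of the project code, a short script can confirm that the
libraries above and the webcam are available; MediaPipe is included here because the
hand-tracking code later in this report depends on it:

# Quick environment check (illustrative only): confirms the libraries and webcam work.
import cv2
import mediapipe as mp
import tkinter

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()             # try to grab a single frame
cap.release()

print("OpenCV version:", cv2.__version__)
print("MediaPipe imported:", mp is not None)
print("Tkinter Tcl version:", tkinter.TkVersion)
print("Webcam frame captured:", ok)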
CHAPTER 4
SYSTEM ANALYSIS
The existing system consists of a mouse that can be either wireless or wired to control
the cursor; now we can use hand gestures to control the system instead. The existing virtual
mouse control system performs simple mouse operations using coloured finger tips for
detection, which are captured by the web-cam; the coloured fingers act as objects whose
colour (red, green or blue) the web-cam senses in order to control the system. The proposed
system, in contrast, can perform basic mouse operations such as minimize, drag, scroll up,
scroll down, left-click and right-click using hand gestures without any coloured finger,
because a skin-colour recognition system is more flexible than the existing system. The
existing system also uses static hand recognition such as fingertip identification, hand shape
and the number of fingers to define each action explicitly, which makes the system more
complex to understand and difficult to use.
The system works by identifying the colour of the hand and deciding the position of the
cursor accordingly, but there are different conditions and scenarios which make it
difficult for the algorithm to run in a real environment, for the following reasons:
● Noise in the environment.
● Lighting conditions in the environment.
● Different textures of skin.
● Background objects of the same colour as skin.
Fig. 1 Input Processing
It therefore becomes very important that the colour-determining algorithm works
accurately. The proposed system can work for a skin tone of any colour and can also work
accurately in any lighting condition. For the purpose of clicking, the user needs to create a
15-degree angle between two fingers. The proposed system can easily replace both the
traditional mouse and the algorithms that require coloured tapes for controlling the mouse.
This work can be a pioneer in its field and can be a source of further research in the
corresponding field. The project can be developed at zero cost and can easily integrate with
the existing system.
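For illustration, the 15-degree click condition mentioned above could be checked with a small
helper like the one below; the function name, the example landmark points and the tolerance
used in the final line are assumptions for the sketch, not the project's actual code.

import math

def angle_between(p0, p1, p2):
    """Angle (in degrees) at point p0 formed by the segments p0->p1 and p0->p2.
    Points are (x, y) tuples, e.g. fingertip positions in pixel coordinates."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    dot = v1[0]*v2[0] + v1[1]*v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Example: a click could be triggered when the angle between the two finger
# directions drops to roughly 15 degrees, as described above.
wrist, index_tip, middle_tip = (0, 0), (2, 10), (5, 9)
print(angle_between(wrist, index_tip, middle_tip) < 20)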
The virtual mouse using hand-gesture recognition allows users to control the mouse with
the help of hand gestures.
The system's webcam is used for tracking hand gestures.
Computer vision techniques are used for gesture recognition.
OpenCV provides a video-capture interface that is used to capture data
from a live video.
The main thing we need to identify is the application the model is going to support,
namely the development of mouse movement without using the physical system mouse.
CHAPTER 5
SYSTEM DESIGN
5.1 Introduction
System design is a modelling process. It can be defined as a transition from the user's view to
the programmer's (developer's) view. It concentrates on transferring the requirement
specification into a design specification. The design phase acts as a bridge between the
requirements specification and the implementation phase. In this stage, the complete
description of our project was understood and all possible combinations to be implemented
were considered.
The AI virtual mouse system makes use of a transformational algorithm that converts the
coordinates of the fingertip from the webcam screen to the full computer window screen for
controlling the mouse.
5.2 Methodology
The various functions and conditions used in the system are explained in the flowchart of the
real-time AI virtual mouse system.
Fig.5.2.1
The proposed AI virtual mouse system is based on the frames that are captured by the
webcam of a laptop or PC. Using the Python computer vision library OpenCV, the video-capture
object is created and the web camera starts capturing video. The web camera captures the
frames and passes them to the AI virtual mouse system.
Fig.5.3.1
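A minimal sketch of how the video-capture object described above can be created with OpenCV
(the window name and exit key are illustrative, not the project's exact code):

import cv2

# Create the video-capture object on the default webcam (index 0).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Webcam could not be opened")

# Read frames until the user presses Esc; in the full system each frame is
# handed to the gesture-recognition stage.
while True:
    success, frame = cap.read()
    if not success:
        continue
    cv2.imshow("AI Virtual Mouse - camera feed", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc key
        break

cap.release()
cv2.destroyAllWindows()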
The AI virtual mouse system uses the webcam where each frame is captured till the
termination of the program.
The AI virtual mouse system makes use of the transformational algorithm, which converts the
coordinates of the fingertip from the webcam screen to the full computer window screen for
controlling the mouse. When the hands are detected and we find which finger is up for
performing the specific mouse function, a rectangular box is drawn with respect to the
computer window in the webcam region, within which we move the mouse cursor throughout
the window.
Fig.5.3.3
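One common way to realise the transformation described above is a linear interpolation from
the rectangular webcam region to the screen resolution. The sketch below is an assumption of
how this could be written (it uses NumPy's interp); the frame size and margin values are
illustrative, not taken from the project code.

import numpy as np
import pyautogui

FRAME_W, FRAME_H = 640, 480     # webcam frame size (assumed)
MARGIN = 100                    # rectangular region drawn inside the frame
screen_w, screen_h = pyautogui.size()

def to_screen(finger_x, finger_y):
    """Map a fingertip position inside the webcam rectangle to full-screen
    coordinates (a sketch of the transformational algorithm above)."""
    x = np.interp(finger_x, (MARGIN, FRAME_W - MARGIN), (0, screen_w))
    y = np.interp(finger_y, (MARGIN, FRAME_H - MARGIN), (0, screen_h))
    return int(x), int(y)

# Example: a fingertip at (320, 240) in the webcam frame maps to the screen centre.
print(to_screen(320, 240))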
In this stage, we detect which finger is up using the tip Id of the respective finger found using
MediaPipe and the respective coordinates of the fingers that are up; according to that, the
particular mouse function is performed.
Fig.5.3.4
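A hedged sketch of the tip-Id test described above, assuming MediaPipe's 21-landmark hand
model in which the fingertip landmarks are 4, 8, 12, 16 and 20, and following the report's
numbering of fingers 0-4 (thumb to little finger); the helper name is illustrative.

# Sketch: decide which fingers are up from MediaPipe hand landmarks.
tipIds = [4, 8, 12, 16, 20]      # MediaPipe landmark indices of the fingertips

def fingers_up(landmarks):
    """Return five 0/1 flags, one per finger (1 = finger raised).
    `landmarks` is results.multi_hand_landmarks[0].landmark from MediaPipe."""
    fingers = []
    # Thumb: compare x of the tip and the joint below it (works for a right hand
    # facing the camera; a full implementation also checks handedness).
    fingers.append(1 if landmarks[tipIds[0]].x < landmarks[tipIds[0] - 1].x else 0)
    # Other four fingers: tip above the middle joint means the finger is up.
    for tid in tipIds[1:]:
        fingers.append(1 if landmarks[tid].y < landmarks[tid - 2].y else 0)
    return fingers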
5.3.5. Mouse Functions Depending on the Hand Gestures and Hand Tip
Detection Using Computer Vision
5.3.6. For the Mouse Cursor Moving around the Computer Window
If the index finger is up with tip Id = 1 or both the index finger with tip Id = 1 and the middle
finger with tip Id = 2 are up, the mouse cursor is made to move around the window of the
computer using the AutoPy package of Python.
Fig.5.3.6
If both the index finger with tip Id = 1 and the thumb with tip Id = 0 are up and the
distance between the two fingers is less than 30 px, the computer is made to perform the left
mouse button click using the pynput Python package.
If both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are up and the
distance between the two fingers is less than 40 px, the computer is made to perform the
right mouse button click using the pynput Python package.
If both the index finger with tip Id = 1 and the thumb with tip Id = 0 are up, the
distance between the two fingers is less than 10 px, and the two fingers are moved up the
page, the computer is made to perform the scroll-up mouse function using the PyAutoGUI
Python package.
If both the index finger with tip Id = 1 and the thumb with tip Id = 0 are up, the
distance between the two fingers is less than 10 px, and the two fingers are moved down
the page, the computer is made to perform the scroll-down mouse function using the
PyAutoGUI Python package.
If all the fingers are up, with tip Ids 0, 1, 2, 3, and 4, the computer does not perform any
mouse event on the screen.
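Read together, these rules can be sketched as a single dispatch function. The sketch below is
illustrative only: the helper names, the use of pyautogui in place of AutoPy, and the way the
fingertip distance and "moved up" flag are supplied are assumptions, while the 30 px, 40 px and
10 px thresholds follow the text above (the scroll check is placed before the click check because
a distance under 10 px also satisfies the 30 px condition).

import math
import pyautogui
from pynput.mouse import Button, Controller as MouseController

mouse = MouseController()

def dispatch(fingers, index_xy, other_xy, screen_xy, moved_up):
    """Sketch of the gesture rules above.
    fingers   : five 0/1 flags for thumb, index, middle, ring, little finger
    index_xy  : index fingertip position in webcam pixels
    other_xy  : fingertip position of the second finger in the gesture
    screen_xy : cursor target already mapped to screen coordinates
    moved_up  : True if the pinched fingers moved up the page
    """
    dist = math.hypot(index_xy[0] - other_xy[0], index_xy[1] - other_xy[1])

    if fingers == [1, 1, 1, 1, 1]:
        return                                          # all fingers up: no mouse event
    if fingers[1] and not fingers[0] and not any(fingers[2:]):
        pyautogui.moveTo(*screen_xy)                    # index finger up: move the cursor
    elif fingers[0] and fingers[1] and dist < 10:
        pyautogui.scroll(120 if moved_up else -120)     # tight thumb+index pinch: scroll
    elif fingers[0] and fingers[1] and dist < 30:
        mouse.click(Button.left, 1)                     # thumb+index close: left click
    elif fingers[1] and fingers[2] and dist < 40:
        mouse.click(Button.right, 1)                    # index+middle close: right click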
The various functions and conditions used in the system are explained in the flowchart
of the real-time AI virtual mouse system in the figure.
Camera Used in the AI Virtual Mouse System. The proposed AI virtual mouse system is based
on the frames that have been captured by the webcam in a laptop or PC. By using the Python
computer vision library OpenCV, the video capture object is created and the web camera will
start capturing video, as shown in Figure. The web camera captures and passes the frames to
the AI virtual system.
Capturing the Video and Processing. The AI virtual mouse system uses the webcam where
each frame is captured till the termination of the program. The video frames are processed
from BGR to RGB colour space to find the hands in the video frame by frame as shown in the
following code:
Fig.5.3.11
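The code referred to above is not reproduced in the report; a minimal sketch of the per-frame
colour-space conversion, assuming MediaPipe's Hands solution, might look like this:

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def process_frame(frame_bgr):
    """Convert one captured frame from BGR to RGB and run hand detection."""
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # OpenCV gives BGR, MediaPipe wants RGB
    results = hands.process(frame_rgb)
    return results.multi_hand_landmarks   # None when no hand is found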
Rectangular Region for Moving through the Window. The AI virtual mouse system makes use
of the transformational algorithm, and it converts the coordinates of fingertip from the webcam
screen to the computer window full screen for controlling the mouse. When the hands are
detected and when we find which finger is up for performing the specific mouse function, a
rectangular box is drawn with respect to the computer window in the webcam region where
we move throughout the window using the mouse cursor.
Detecting Which Finger Is Up and Performing the Particular Mouse Function. In this stage,
we detect which finger is up using the tip Id of the respective finger found using MediaPipe
and the respective coordinates of the fingers that are up, and according to that, the particular
mouse function is performed.
Mouse Functions Depending on the Hand Gestures and Hand Tip Detection Using Computer
Vision For the Mouse Cursor Moving around the Computer Window. If the index finger is up
with tip Id = 1 or both the index finger with tip Id = 1 and the middle finger with tip Id = 2 are
up, the mouse cursor is made to move around the window of the computer using the AutoPy
package of Python.
For the Mouse to Perform Left Button Click. If both the index finger with tip Id = 1 and the
thumb with tip Id = 0 are up and the distance between the two fingers is less than 30 px,
the computer is made to perform the left mouse button click using the pynput Python package.
5.5.1 GOALS
A use case diagram in the Unified Modelling Language (UML) is a type of behavioural
diagram defined by and created from a Use-case analysis. Its purpose is to present a
graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases. The
main purpose of a use case diagram is to show what system functions are performed for
which actor. Roles of the actors in the system can be depicted.
CHAPTER 6
PROGRAM CODE
Python.py
# Imports
import cv2
import mediapipe as mp
import pyautogui
import math
from enum import IntEnum                                  # needed for Gest / HLabel below
from ctypes import cast, POINTER                          # needed for volume control (pycaw)
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume
from google.protobuf.json_format import MessageToDict     # needed in classify_hands()
import screen_brightness_control as sbcontrol             # needed for brightness control

pyautogui.FAILSAFE = False
mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
# Gesture Encodings
class Gest(IntEnum):
    # Binary Encoded
    FIST = 0
    PINKY = 1
    RING = 2
    MID = 4
    LAST3 = 7
    INDEX = 8
    FIRST2 = 12
    LAST4 = 15
    THUMB = 16
    PALM = 31
    # Extra Mappings
    V_GEST = 33
    TWO_FINGER_CLOSED = 34
    PINCH_MAJOR = 35
    PINCH_MINOR = 36

# Multi-handedness Labels
class HLabel(IntEnum):
    MINOR = 0
    MAJOR = 1
class HandRecog:
    def __init__(self, hand_label):
        # initial state (constructor signature reconstructed; the report omits it)
        self.finger = 0
        self.ori_gesture = Gest.PALM
        self.prev_gesture = Gest.PALM
        self.frame_count = 0
        self.hand_result = None
        self.hand_label = hand_label

    def update_hand_result(self, hand_result):
        self.hand_result = hand_result

    def get_signed_dist(self, point):
        # signed 2-D distance between two landmarks; the sign encodes whether
        # the first landmark lies above the second (condition reconstructed)
        sign = -1
        if self.hand_result.landmark[point[0]].y < self.hand_result.landmark[point[1]].y:
            sign = 1
        dist = (self.hand_result.landmark[point[0]].x -
                self.hand_result.landmark[point[1]].x)**2
        dist += (self.hand_result.landmark[point[0]].y -
                 self.hand_result.landmark[point[1]].y)**2
        dist = math.sqrt(dist)
        return dist*sign

    def get_dist(self, point):
        # unsigned 2-D distance between two landmarks
        dist = (self.hand_result.landmark[point[0]].x -
                self.hand_result.landmark[point[1]].x)**2
        dist += (self.hand_result.landmark[point[0]].y -
                 self.hand_result.landmark[point[1]].y)**2
        dist = math.sqrt(dist)
        return dist
    def get_dz(self, point):
        # depth (z-axis) distance between two landmarks
        return abs(self.hand_result.landmark[point[0]].z -
                   self.hand_result.landmark[point[1]].z)

    def set_finger_state(self):
        # encode which fingers are open as bits of self.finger
        if self.hand_result == None:
            return
        points = [[8,5,0],[12,9,0],[16,13,0],[20,17,0]]
        self.finger = 0
        for point in points:                    # loop reconstructed; the report omits it
            dist = self.get_signed_dist(point[:2])
            dist2 = self.get_signed_dist(point[1:])
            try:
                ratio = round(dist/dist2, 1)
            except ZeroDivisionError:
                ratio = round(dist/0.01, 1)
            self.finger = self.finger << 1      # shift, then set the bit when the finger is open
            if ratio > 0.5:                     # threshold reconstructed
                self.finger = self.finger | 1
    def get_gesture(self):
        # map the finger-state bits to a named gesture; the branching conditions
        # are reconstructed, since the report shows only the branch bodies
        if self.hand_result == None:
            return Gest.PALM

        current_gesture = Gest.PALM
        if self.finger in [Gest.LAST3, Gest.LAST4] and self.get_dist([8,4]) < 0.05:
            # thumb and index tip pinched together
            if self.hand_label == HLabel.MINOR:
                current_gesture = Gest.PINCH_MINOR
            else:
                current_gesture = Gest.PINCH_MAJOR
        elif self.finger == Gest.FIRST2:
            # index and middle finger raised: V-gesture or two fingers closed
            point = [[8,12],[5,9]]
            dist1 = self.get_dist(point[0])
            dist2 = self.get_dist(point[1])
            ratio = dist1/dist2
            if ratio > 1.7:
                current_gesture = Gest.V_GEST
            else:
                if self.get_dz([8,12]) < 0.1:
                    current_gesture = Gest.TWO_FINGER_CLOSED
                else:
                    current_gesture = Gest.MID
        else:
            current_gesture = self.finger

        # require the same gesture for several consecutive frames to avoid jitter
        if current_gesture == self.prev_gesture:
            self.frame_count += 1
        else:
            self.frame_count = 0
        self.prev_gesture = current_gesture
        if self.frame_count > 4:
            self.ori_gesture = current_gesture
        return self.ori_gesture
class Controller:
    # class-level state; methods are called through the class itself,
    # e.g. Controller.get_position(...), as in the original listing
    tx_old = 0
    ty_old = 0
    trial = True
    flag = False
    grabflag = False
    pinchmajorflag = False
    pinchminorflag = False
    pinchstartxcoord = None
    pinchstartycoord = None
    pinchdirectionflag = None
    prevpinchlv = 0
    pinchlv = 0
    framecount = 0
    prev_hand = None
    pinch_threshold = 0.3

    def getpinchylv(hand_result):
        # vertical pinch displacement from the start position (body reconstructed)
        dist = round((Controller.pinchstartycoord - hand_result.landmark[8].y)*10, 1)
        return dist

    def getpinchxlv(hand_result):
        # horizontal pinch displacement from the start position (body reconstructed)
        dist = round((hand_result.landmark[8].x - Controller.pinchstartxcoord)*10, 1)
        return dist
    def changesystembrightness():
        # map the pinch level to a brightness change, clamped to [0, 1]
        currentBrightnessLv = sbcontrol.get_brightness()/100.0
        currentBrightnessLv += Controller.pinchlv/50.0
        if currentBrightnessLv > 1.0:          # clamping conditions reconstructed
            currentBrightnessLv = 1.0
        elif currentBrightnessLv < 0.0:
            currentBrightnessLv = 0.0
        sbcontrol.fade_brightness(int(100*currentBrightnessLv),
                                  start=sbcontrol.get_brightness())

    def changesystemvolume():
        # map the pinch level to a volume change, clamped to [0, 1]
        devices = AudioUtilities.GetSpeakers()
        interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
        volume = cast(interface, POINTER(IAudioEndpointVolume))   # pycaw setup reconstructed
        currentVolumeLv = volume.GetMasterVolumeLevelScalar()
        currentVolumeLv += Controller.pinchlv/50.0
        if currentVolumeLv > 1.0:              # clamping conditions reconstructed
            currentVolumeLv = 1.0
        elif currentVolumeLv < 0.0:
            currentVolumeLv = 0.0
        volume.SetMasterVolumeLevelScalar(currentVolumeLv, None)
    def scrollVertical():
        # scroll up or down depending on the pinch direction (body reconstructed)
        pyautogui.scroll(120 if Controller.pinchlv > 0.0 else -120)

    def scrollHorizontal():
        # shift+ctrl+scroll performs a horizontal scroll in most applications
        pyautogui.keyDown('shift')
        pyautogui.keyDown('ctrl')
        pyautogui.scroll(-120 if Controller.pinchlv > 0.0 else 120)
        pyautogui.keyUp('ctrl')
        pyautogui.keyUp('shift')
    def get_position(hand_result):
        # convert the palm landmark (id 9) to a damped screen position
        point = 9
        position = [hand_result.landmark[point].x, hand_result.landmark[point].y]
        sx, sy = pyautogui.size()
        x_old, y_old = pyautogui.position()
        x = int(position[0]*sx)
        y = int(position[1]*sy)
        if Controller.prev_hand is None:
            Controller.prev_hand = x, y
        delta_x = x - Controller.prev_hand[0]
        delta_y = y - Controller.prev_hand[1]
        distsq = delta_x**2 + delta_y**2
        ratio = 1
        Controller.prev_hand = [x, y]
        # damping ratio depends on how far the hand moved (thresholds reconstructed)
        if distsq <= 25:
            ratio = 0
        elif distsq <= 900:
            ratio = 0.07 * (distsq ** 0.5)
        else:
            ratio = 2.1
        x, y = x_old + delta_x*ratio, y_old + delta_y*ratio
        return (x, y)
    def pinch_control_init(hand_result):
        # remember where the pinch started so displacement can be measured
        Controller.pinchstartxcoord = hand_result.landmark[8].x
        Controller.pinchstartycoord = hand_result.landmark[8].y
        Controller.pinchlv = 0
        Controller.prevpinchlv = 0
        Controller.framecount = 0

    def pinch_control(hand_result, controlHorizontal, controlVertical):
        # signature and branching reconstructed; the report shows only fragments
        if Controller.framecount == 5:
            Controller.framecount = 0
            Controller.pinchlv = Controller.prevpinchlv
            if Controller.pinchdirectionflag == True:
                controlHorizontal()   # x
            elif Controller.pinchdirectionflag == False:
                controlVertical()     # y
        lvx = Controller.getpinchxlv(hand_result)
        lvy = Controller.getpinchylv(hand_result)
        if abs(lvy) > abs(lvx) and abs(lvy) > Controller.pinch_threshold:
            Controller.pinchdirectionflag = False
            if abs(Controller.prevpinchlv - lvy) < Controller.pinch_threshold:
                Controller.framecount += 1
            else:
                Controller.prevpinchlv = lvy
                Controller.framecount = 0
        elif abs(lvx) > Controller.pinch_threshold:
            Controller.pinchdirectionflag = True
            if abs(Controller.prevpinchlv - lvx) < Controller.pinch_threshold:
                Controller.framecount += 1
            else:
                Controller.prevpinchlv = lvx
                Controller.framecount = 0
    def handle_controls(gesture, hand_result):
        # map the recognised gesture to a mouse/system action
        # (function signature and several gesture conditions reconstructed)
        x, y = None, None
        if gesture != Gest.PALM:
            x, y = Controller.get_position(hand_result)

        # flag reset
        if gesture != Gest.FIST and Controller.grabflag:
            Controller.grabflag = False
            pyautogui.mouseUp(button="left")
        if gesture != Gest.PINCH_MAJOR and Controller.pinchmajorflag:
            Controller.pinchmajorflag = False
        if gesture != Gest.PINCH_MINOR and Controller.pinchminorflag:
            Controller.pinchminorflag = False

        # implementation
        if gesture == Gest.V_GEST:
            Controller.flag = True
            pyautogui.moveTo(x, y, duration=0.1)        # move the cursor
        elif gesture == Gest.FIST:
            if not Controller.grabflag:
                Controller.grabflag = True
                pyautogui.mouseDown(button="left")      # start dragging
            pyautogui.moveTo(x, y, duration=0.1)
        elif gesture == Gest.MID and Controller.flag:
            pyautogui.click()                           # left click
            Controller.flag = False
        elif gesture == Gest.INDEX and Controller.flag:
            pyautogui.click(button='right')             # right click
            Controller.flag = False
        elif gesture == Gest.TWO_FINGER_CLOSED and Controller.flag:
            pyautogui.doubleClick()                     # double click
            Controller.flag = False
        elif gesture == Gest.PINCH_MINOR:
            if Controller.pinchminorflag == False:
                Controller.pinch_control_init(hand_result)
                Controller.pinchminorflag = True
            Controller.pinch_control(hand_result, Controller.scrollHorizontal,
                                     Controller.scrollVertical)
        elif gesture == Gest.PINCH_MAJOR:
            if Controller.pinchmajorflag == False:
                Controller.pinch_control_init(hand_result)
                Controller.pinchmajorflag = True
            Controller.pinch_control(hand_result, Controller.changesystembrightness,
                                     Controller.changesystemvolume)
'''
Main Class
'''
class GestureController:
    gc_mode = 0
    cap = None
    CAM_HEIGHT = None
    CAM_WIDTH = None
    hr_major = None      # landmarks of the dominant hand (attribute reconstructed)
    hr_minor = None      # landmarks of the non-dominant hand (attribute reconstructed)
    dom_hand = True

    def __init__(self):
        # constructor signature reconstructed; the report omits it
        GestureController.gc_mode = 1
        GestureController.cap = cv2.VideoCapture(0)
        GestureController.CAM_HEIGHT = GestureController.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
        GestureController.CAM_WIDTH = GestureController.cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    def classify_hands(results):
        # decide which detected hand is the dominant (major) one
        left, right = None, None        # initialisation reconstructed
        try:
            handedness_dict = MessageToDict(results.multi_handedness[0])
            if handedness_dict['classification'][0]['label'] == 'Right':
                right = results.multi_hand_landmarks[0]
            else:
                left = results.multi_hand_landmarks[0]
        except:
            pass
        try:
            handedness_dict = MessageToDict(results.multi_handedness[1])
            if handedness_dict['classification'][0]['label'] == 'Right':
                right = results.multi_hand_landmarks[1]
            else:
                left = results.multi_hand_landmarks[1]
        except:
            pass
        if GestureController.dom_hand == True:
            GestureController.hr_major = right
            GestureController.hr_minor = left
        else:
            GestureController.hr_major = left
            GestureController.hr_minor = right
    def start(self):
        # main loop: capture frames, detect hands, classify the gesture and act on it
        # (the MediaPipe Hands context manager and capture loop are reconstructed)
        handmajor = HandRecog(HLabel.MAJOR)
        handminor = HandRecog(HLabel.MINOR)

        with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5,
                            min_tracking_confidence=0.5) as hands:
            while GestureController.cap.isOpened() and GestureController.gc_mode:
                success, image = GestureController.cap.read()
                if not success:
                    continue
                image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
                image.flags.writeable = False
                results = hands.process(image)
                image.flags.writeable = True
                image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

                if results.multi_hand_landmarks:
                    GestureController.classify_hands(results)
                    handmajor.update_hand_result(GestureController.hr_major)
                    handminor.update_hand_result(GestureController.hr_minor)
                    handmajor.set_finger_state()
                    handminor.set_finger_state()
                    gest_name = handminor.get_gesture()
                    if gest_name == Gest.PINCH_MINOR:
                        Controller.handle_controls(gest_name, handminor.hand_result)
                    else:
                        gest_name = handmajor.get_gesture()
                        Controller.handle_controls(gest_name, handmajor.hand_result)
                    for hand_landmarks in results.multi_hand_landmarks:
                        mp_drawing.draw_landmarks(image, hand_landmarks,
                                                  mp_hands.HAND_CONNECTIONS)
                else:
                    Controller.prev_hand = None

                cv2.imshow('AI Virtual Mouse', image)
                if cv2.waitKey(5) & 0xFF == 13:   # Enter key terminates the program
                    break
        GestureController.cap.release()
        cv2.destroyAllWindows()
gc1 = GestureController()
gc1.start()
CHAPTER 7
TESTING
7.1 Overview
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It
is the process of exercising software with the intent of ensuring that the software system
meets its requirements and user expectations and does not fail in an unacceptable
manner. There are various types of tests; each test type addresses a specific testing
requirement.
7.2 A/B Testing
The case for A/B testing is very strong: when any changes are being made to a design,
performing tests along the way means you back up design decisions with data. For example:
Members of a design team may disagree on what is the best path to pursue.
The client and the designer may disagree as to which variation of an interface will work
better.
The business or marketing team and the design team may disagree on which will work
better.
Apart from being a platform for everyone to raise sometimes heated personal opinions
and biases, discussions like these usually lead nowhere other than hour-long, heavily
loaded meetings. Data is by far the best way to settle these debates. A client wouldn't
argue that a blue button is better than a red one if he knew the red variation would
increase his revenue by 0.5% (or, say, $500/day).
A design team wouldn't argue over which imagery to use if they knew that a certain
variation increases retention. A/B testing helps teams deliver better work, more
efficiently.
Going further, it also allows you to improve key business metrics. Testing, especially
if it is conducted continually, enables you to optimize your interface and make sure
your website is delivering the best results possible. Picture for a moment an e-commerce
store.
The goal is to increase the number of checkouts. A/B testing only the listing page would
have a really small effect on the total number of checkouts; it wouldn't move the needle
significantly, and neither would optimizing just the homepage header, for example.
However, running tests to optimize all areas, from the menus all the way to the
checkout confirmation, will result in a compound effect that makes more of an
impact.
7.3 Beta Testing
A type of user acceptance testing, beta testing, also known as "field testing", is done
in the customer's environment. Beta testing is commonly used for brand-new features
and products. The purpose of beta testing is to provide access to users who then provide
feedback, which helps improve the application. Beta testing often involves a limited
number of users.
7.4 White Box Testing
White box testing is a method of testing software in which the internal workings (code,
architecture, design, etc.) are known to the tester. White box testing validates the internal
structure and therefore often focuses primarily on improving security and making the
flow of inputs/outputs more efficient and optimized. In white box testing, the tester is
often testing for internal security holes and broken or poorly structured coding paths.
The term "white box" is used because in this type of testing you have visibility into the
internal workings. Because of this, white box testing usually requires a more technical
person. Types of white box testing include unit testing and integration testing.
Black box testing is a method of testing software in which the internal workings (code,
architecture, design, etc.) are NOT known to the tester. Black box testing focuses on
the behaviour of the software and involves testing from an external or end-user
perspective. With black box testing, the tester is testing the functionality of the software
without looking at the code or having any knowledge of the application’s internal flows.
Inputs and outputs are tested and compared to the expected output and if the actual
output doesn’t match the expected output, a bug has been found.
The term “black box” is used because in this type of testing, you don’t look inside of
the application. For this reason, non-technical people often conduct black box testing.
Types of black box testing include functional testing, system testing, usability testing,
and regression testing.
Positive testing is the type of testing that is performed on the system by providing
valid data as input. It checks whether the application behaves as expected with
positive inputs. This test is done to check that the application does what it is supposed
to do.
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual
software units of the application. It is done after the completion of an individual unit and
before integration. This is structural testing that relies on knowledge of the unit's
construction and is invasive. Unit tests perform basic tests at component level and test
a specific business process, application, and/or system configuration. Unit tests ensure
that each unique path of a business process performs accurately to the documented
specification and contains clearly defined inputs and expected results. Unit testing is
usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
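As an illustration of the unit-testing approach described above, a small pytest module for a
hypothetical fingertip-distance helper (both the helper and the values are invented for the
example, not taken from the project code) could look like this:

import math
import pytest

def fingertip_distance(p1, p2):
    """Hypothetical helper: Euclidean distance between two fingertip points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def test_distance_zero_for_same_point():
    assert fingertip_distance((0.4, 0.5), (0.4, 0.5)) == 0.0

def test_distance_is_symmetric():
    a, b = (0.1, 0.2), (0.4, 0.6)
    assert fingertip_distance(a, b) == pytest.approx(fingertip_distance(b, a))

def test_known_3_4_5_triangle():
    assert fingertip_distance((0, 0), (3, 4)) == pytest.approx(5.0)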
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
Features to be tested
Is it fast?
Functional tests provide systematic demonstrations that the functions tested are available
as specified by the business and technical requirements, system documentation, and
user manuals.
Functional testing is centred on the following items:
Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
The task of the integration test is to check that components or software (for example,
components in a software system or, one step up, software applications at the company
level) interact without error.
7.11 System Testing
System testing ensures that the entire integrated software system meets requirements.
It tests a configuration to ensure known and predictable results. An example of system
testing is based on process description and flows, emphasizing pre-driven process links
and integration points.
7.12 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results
All the test cases mentioned above passed successfully. No defects were encountered.
Test Cases:
CHAPTER 8
INPUT AND OUTPUT SCREENS
8.1 Snapshot of the Project and Description
CHAPTER 9
LIMITATION OF THE PROJECT
The proposed AI virtual mouse has some limitations, such as a small decrease in the
accuracy of the right-click mouse function.
The model also has some difficulty in executing clicking and dragging to select
text.
CHAPTER 10
FUTURE APPLICATION
There are several features and improvements needed in order for the program to be more
user-friendly, accurate, and flexible in various environments.
The following describes the required improvements and features:
Smart Movement: Because the current recognition process is limited to a radius of about
25 cm, an adaptive zoom-in/out function is required to improve the covered distance,
so that the system can automatically adjust the focus rate based on the distance between
the user and the webcam.
Better Accuracy & Performance: The response time relies heavily on the hardware of the
machine, including the processing speed of the processor, the size of the available RAM,
and the capabilities of the webcam. Therefore, the program may perform better when it
is running on a decent machine with a webcam that performs well in different types of
lighting.
Mobile Application: In the future this application could also be used on Android
devices, where the touchscreen concept is replaced by hand gestures.
BIBLIOGRAPHY