
International Journal of Research Publication and Reviews, Vol 4, no 2, pp 979-984, February 2023

International Journal of Research Publication and Reviews


Journal homepage: www.ijrpr.com ISSN 2582-7421

Virtual Mouse and Keyboard using OpenCV

Abhishek Pandey1, Nikhil Digole2, Shyam Mishra3

1,2,3 Student, Department of Information Technology, Vasantdada Patil Pratishthan's College of Engineering and Visual Arts, Mumbai, India

ABSTRACT

The advancement of technology has integrated computing into mobile devices such as smartphones and palmtops, yet the traditional QWERTY keyboard remains unchanged as the primary input device. This paper proposes a virtual keyboard application that uses image processing to create a visual representation of a keyboard; a camera captures hand gestures over this representation and interprets them as typing input. The same concept extends to a virtual mouse, in which finger recognition serves as the input: the camera captures hand movements and translates them into cursor control. Both devices are realized purely in software, with the camera capturing the scene and the typing or pointing gestures recognized from the video stream.

Keywords: Human-Computer Interaction, Colour Detection, Web camera, Gesture Recognition, Image Processing, Green Colour Object.

1. Introduction

In the present-day scenario, most mobile phones use touch-screen technology to interact with the user, but this technology is still too expensive to be used in desktops and laptops. Our objective was to create a virtual mouse system that uses a web camera to interact with the computer in a more user-friendly manner, as an alternative approach to the touch screen. The computer's webcam captures video of the person sitting in front of the computer, and a small green box is drawn in the middle of the frame. Any object shown inside this green box is processed by the code and matched against known patterns; if a match is found, a red border is drawn, indicating that the computer has identified the object, and the mouse cursor can then be moved by moving the object. This not only helps with the security of the computer but also provides a virtual computational experience. In place of different physical objects, hand gestures are used: one gesture moves the cursor, another performs a right click, and another a left click; similarly, simple gestures can carry out the keyboard functions that would otherwise be performed on a physical keyboard. If the gesture does not match, only the green box is shown; when a known gesture is observed, a red border appears at the location of the fingertips. The main difficulty was that the right-click and left-click functions were hard to perform reliably with this process. OpenCV is an open-source computer vision and machine learning software library. NumPy is a Python library that provides simple yet powerful data structures. Keras is a neural-network library, and TensorFlow is an open-source library for a wide range of machine learning tasks.

The basic algorithms used in this project are HOG features with an SVM, and a convolutional neural network: the histogram of oriented gradients (HOG) is used for feature extraction in the human-detection process, while a linear support vector machine (SVM) is used for classification.
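To make the HOG idea concrete, the following is a minimal sketch of the per-cell orientation histogram that underlies HOG features. The cell size and bin count here are illustrative assumptions; a real pipeline would use OpenCV's `HOGDescriptor` and feed the resulting vector to a linear SVM.

```python
import numpy as np

def hog_cell_histogram(cell, bins=9):
    """Histogram of gradient orientations for one cell, weighted by
    gradient magnitude (the core of HOG feature extraction)."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in the original HOG.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180
    hist, _ = np.histogram(orientation, bins=bins, range=(0, 180),
                           weights=magnitude)
    return hist

# A cell with a purely horizontal intensity ramp: all gradient energy
# falls into the orientation bin containing 0 degrees.
cell = np.tile(np.arange(8, dtype=float), (8, 1))
hist = hog_cell_histogram(cell)
print(int(np.argmax(hist)))  # 0
```

In practice the per-cell histograms are block-normalized and concatenated into one long feature vector before classification.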

2. Literature Survey

A computer-vision-based mouse controls mouse tasks using a camera and computer-vision technologies such as image segmentation and gesture recognition; Erden et al. [1] investigated such a system and its challenges. To overcome the limitations of that work [3], this project takes inspiration from Hojoon Park, whose computer-vision-based technique controls mouse movements through a web camera. He used fingertips to control the mouse cursor, and the angle between the thumb and index finger was used to perform clicking actions. Chu-Feng Lien used an intuitive method to detect hand motion through Motion History Images (MHI); in this approach, only the fingertip was used to control both the cursor and the mouse click, and the user must hold the cursor on the desired spot for a specific period of time to perform a click [2]. Kamran Niyazi et al. used a web camera to detect colored tapes for cursor movement, with clicking actions performed by calculating the distance between two colored tapes on the fingers. K N Shah et al. [9] surveyed innovative finger-tracking methods for interacting with a computer system using computer vision, dividing the approaches used in Human-Computer Interaction (HCI) into two categories:

• HCI without using an interface.

Moreover, they have mentioned some useful applications that use finger tracking through computer vision.

One study described the motivation and design considerations of an economical head-operated computer mouse. It focuses on a head-operated mouse that employs two tilt sensors placed in a headset to determine head position. One tilt sensor detects lateral head motion to drive the left/right displacement of the mouse; the other detects vertical head motion to drive the up/down displacement. A touch-switch device was designed to rest gently against the operator's cheek: the operator can puff the cheek to trigger single-click, double-click, and drag commands. The system was invented to help people with disabilities live an independent professional life.

3. Problem Statement

A computer-vision-based mouse can easily be applied to web services, smart-home systems, robot manipulation, and games, which is why tracking non-rigid motion in video sequences has been of great interest to the computer vision community. We grew up interacting with the physical objects around us, and just as we manipulate those objects every day, we use gestures not only to interact with objects but also with each other; gesture recognition therefore brings us a step closer to a natural human-object relationship. In this research an ordinary still webcam is used to recognize the gestures: there is no need for 3D or stereo cameras, and the approach has been tested on a low-cost 1.3-megapixel laptop webcam.

This work is dedicated to a computer-vision-based mouse that acts as an interface between the user and various computing devices in a dynamic environment. The paper presents a technique for performing numerous mouse operations, thus obviating the need for dedicated hardware for interaction between the user and the computing device. The same approach can be applied to countless tasks such as browsing images, playing games, changing TV channels, etc. There is a threshold distance (in meters) between the user and the camera, which can be varied according to the camera's resolution: a subject who wants their hand gestures recognized must come within this fixed distance of the camera. This research was conducted with a 1.3-megapixel webcam and a threshold of 2 m.

4. Architecture

Keyboard:

Figure 1 – Block Diagram for Keyboard

I. The keyboard will be displayed on the desktop screen.

II. A camera captures a live feed of the user's fingers over the displayed keyboard.

III. By processing the image in real time, the keys typed on the keyboard are detected.

IV. The typed characters are displayed on the desktop.



Mouse:

Figure 2 – Block Diagram for Mouse

I. The mouse is represented by the user's finger, which is recognized to track cursor movements.

II. A camera captures a live feed of the finger moving in front of the screen.

III. Through image processing, the finger's movement is detected in real time.

IV. The detected coordinates are taken as the input of the mouse.
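The coordinate hand-off in step IV can be sketched as follows. The camera and screen resolutions here are illustrative assumptions; in a real system the scaled point would be passed to a mouse-control library such as PyAutoGUI.

```python
# Assumed example resolutions for the camera frame and the display.
CAM_W, CAM_H = 640, 480
SCREEN_W, SCREEN_H = 1920, 1080

def cam_to_screen(x, y):
    """Scale a fingertip position from camera space to screen space."""
    return x * SCREEN_W // CAM_W, y * SCREEN_H // CAM_H

print(cam_to_screen(320, 240))  # (960, 540)
# pyautogui.moveTo(*cam_to_screen(x, y)) would then move the cursor.
```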

4.1 Flow Diagram

Figure 3- Flowchart Diagram of Mouse



Figure 4- Flow Diagram of Keyboard

The flow chart diagrams describe the flow of functions in the system: how a command is issued, how it is handled, and how the processing and the required output functions are performed.

6. Methodology

OpenCV

OpenCV is a computer vision library that contains image-processing algorithms for object detection. It provides Python bindings, so real-time computer vision applications can be developed in Python. The library is used for image and video processing and analysis, such as face detection and object detection.

DETECTING BACKGROUND –

Given the feed from the camera, the first step is to model the background. We compute a running average over a sequence of frames; this average image serves as the background.
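A minimal sketch of the running average, assuming a per-frame weight `alpha` (the same idea as OpenCV's `accumulateWeighted`); the values here are synthetic stand-ins for camera frames.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running average: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

# Feeding a static scene repeatedly converges the model to that scene.
bg = np.zeros((4, 4))
for _ in range(200):
    frame = np.full((4, 4), 100.0)  # stand-in for a camera frame
    bg = update_background(bg, frame)
print(round(float(bg[0, 0]), 1))  # ~100.0
```

A smaller `alpha` adapts more slowly but is more robust to brief foreground motion.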

BACKGROUND SUBTRACTION-

Background subtraction involves computing a reference image, subtracting each new frame from it, and thresholding the result. This yields a binary segmentation of the image that highlights the regions containing non-stationary objects.
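The subtract-and-threshold step can be sketched with NumPy alone (the threshold value is an illustrative assumption; OpenCV's `absdiff` and `threshold` would do the same on real frames):

```python
import numpy as np

def segment_foreground(frame, background, thresh=25):
    """Absolute difference against the reference image, then a binary
    threshold: 255 where the scene changed, 0 elsewhere."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8) * 255

bg = np.full((3, 3), 50, dtype=np.uint8)
frame = bg.copy()
frame[1, 1] = 200  # one pixel where a moving object appeared
mask = segment_foreground(frame, bg)
print(mask[1, 1], mask[0, 0])  # 255 0
```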

CONTOUR EXTRACTION-

Contour extraction is performed using OpenCV's built-in edge-extraction function, which uses a Canny filter. Its parameters can be tuned to obtain better edge detection.

CONVEX HULL AND DEFECTS –

Convex hull points are most likely to lie on the fingers, since the fingers are the extremities of the hand, so this fact can be used to count the number of fingers. We also find the deepest points of deviation (convexity defects) on the contour, which lie in the valleys between the fingers.

FINGERTIP DETECTION (COLOR TAPE) –

We estimate the locations of the user's fingertips (in image space) from geometrical features of the contours and regions obtained, and then detect whether any fingertip is in contact with the tabletop.

TOUCH DETECTION –

We are given as input the estimated positions of the user's fingertips and must output which of those tips are estimated to be in contact with the keyboard mat. We use a technique called shadow analysis.

KEYBOARD: MAPPING TOUCH POINTS TO KEYSTROKES –

Each touch point is mapped to a keystroke, and the corresponding character is recognized.
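A hypothetical touch-to-keystroke mapping, assuming the on-screen keyboard is a uniform grid of key cells (the `KEY_ROWS` layout and cell sizes are illustrative, not the paper's actual layout):

```python
# Illustrative three-row layout; each key occupies a key_w x key_h cell.
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def touch_to_key(x, y, key_w=40, key_h=40):
    """Map a touch point (in keyboard-image pixels) to a character,
    or None if the touch falls outside the layout."""
    row, col = y // key_h, x // key_w
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return None

print(touch_to_key(5, 5))    # q
print(touch_to_key(45, 45))  # s
```

The resulting character would then be handed to the keystroke-sending step below.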

MOUSE: TRACKING AND FINGER DETECTION –

We track the hand and count the number of extended fingers.

GESTURE RECOGNITION –

Click gestures: single click and double click.

KEYSTROKE –

The recognized keystroke is sent to the operating system.

6.1 Hardware and Software Details

Hardware used in the project:

• Laptop with operating system: macOS, Linux (Ubuntu 16.04 to 17.10), or Windows 7 to 10, with 2 GB RAM (4 GB preferable)

Webcam –

A webcam is used to capture the scene, recognize objects in view, and track the user's hand gestures using computer vision techniques; it sends this data to the system as input.

The camera acts as a digital eye, seeing what the user sees, and also tracks the movement of the hand.

Software used in the project:

• IDLE (Any)

Libraries used in the project:

cv2:

• Capturing video using OpenCV.

NumPy:

• NumPy is a Python library used for working with arrays.

Imutils:

• A series of convenience functions that make basic image-processing operations, such as translation, rotation, resizing, skeletonization, and displaying Matplotlib images, easier with OpenCV on both Python 2.7 and Python 3.

Json:

• Python has a built-in package called json, which can be used to work with JSON data. Among its many methods, loads() and load() help us read JSON files.

Pyautogui:

• PyAutoGUI is a cross-platform GUI automation Python module that allows you to control the mouse and keyboard to do various things.

Wx:

• Python provides the wxPython module, which allows us to create highly functional GUIs.

7. Future Scope

• The project's upcoming work will focus on making the Fingertip Detection module insensitive to variations in lighting, and on determining the 3D pose of the panel for the purpose of augmenting 3D objects in reality.

• In the future, we will make use of other graphic features, like the character shape and icon feature, in the human-computer interface to identify
touch events on the projected screen.

• Our future plan includes adding more functionalities such as expanding and reducing windows, closing windows, and so on, by utilizing the
palm and multiple fingers.

• We aim to integrate voice recognition into the keyboard as well.

8. Conclusion

• In this study, a virtual mouse application based on object tracking was developed and implemented using a webcam and the Python
programming environment with OpenCV libraries. This technology has numerous applications in areas such as augmented reality, computer
graphics, gaming, prosthetics, and biomedical engineering.

• We created a system in which colored-fingertip movements, captured in real time through a camera, serve as the input to control the mouse cursor.

• All standard mouse functions such as left and right clicks, double clicks, and scrolling were integrated into the system.

• The results indicate that if the vision algorithms perform well across a range of environments, the system will function more effectively, potentially improving presentation experiences and reducing the required workspace.

• Our aim was to create this technology as affordably as possible, while also ensuring compatibility with a standardized operating system, with
the potential to aid patients who lack mobility in their limbs.

• The overarching goal was to develop a virtual mouse and keyboard using hand gesture recognition and image processing to control the
movement of the mouse pointer according to hand gestures.

9. References

A. Banerjee, A. Ghosh, K. Bharadwaj, H. Saikia, Mouse Control using a Web Camera based on Colour Detection, Int. J. Comput. Trends Technol. 9
(2014) 15–20. doi:10.14445/22312803/ijctt-v9p104.

Y. Chen, E.R. Hoffmann, R.S. Goonetilleke, Structure of Hand/Mouse Movements, IEEE Trans. Human-Machine Syst. 45 (2015) 790–798.
doi:10.1109/THMS.2015.2430872.

Erdem, A., Yardimci, Y., Atalay, V., Cetin, A.E., 2002. Computer vision based mouse. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

Eckert, M., Lopez, M., Lazaro, C., Meneses, J., Martinez Ortega, J.F., 2015. Mokey - A motion based keyboard interpreter. Tech. Univ. of Madrid, Madrid, Spain.

Chu-Feng Lien, Portable Vision-Based HCI - A Real-time Hand Mouse System on Handheld Devices.

Jun Hu, Guolin Li, Xiang Xie, Zhong Lv, and Zhihua Wang, Senior Member, IEEE:Bare-fingers Touch Detection by the Button’s Distortion in a
Projector–Camera System.

Dikovita, C.R.; Abeysooriya, D.P., "V-Touch: Markerless laser-based interactive surface," 2013 International Conference on Advances in ICT for
Emerging Regions (ICTer), Dec. 2013: 248-252.

AlKassim, Z., "Virtual laser keyboards: A giant leap towards human-computer interaction," 2012 International Conference on Computer Systems and
Industrial Informatics (ICCSII), Dec. 2012:1-5.

Du H, Oggier T, Lustenberger F, et al. "A virtual keyboard based on true-3D optical ranging," British Machine Vision Conference, 2005, pp. 220-229.

Hagara M, Pucik J. “Accuracy of 3D camera based virtual keyboard”. 2014 24th International Conference on Radioelektronika
(RADIOELEKTRONIKA), IEEE, 2014: 1-4.
