
Minor Project

MID SEM REPORT


ON

SIGHT SYNC PROSTHETICS

Submitted By:

Name Roll No Branch


Mukul Sunda A25305221032 B. TECH CSE
Vridhie Chawla A25305221044 B. TECH CSE
Aryaman Sharma A25305221039 B. TECH CSE

Under the guidance of


Dr. Himanshu Verma
Assistant Professor
Amity School of Engineering and Technology

School of Computer Science


Amity School of Engineering and Technology
Amity University, Mohali
2024

Approved By

(Dr. Himanshu Verma) (Dr. Harvinder Singh)


Project Mentor Project Coordinator

Abstract

This paper explores the utilization of human gesture and hand tracking
technologies for replicating natural movements in prosthetic hands. The
advancement in sensor technologies and machine learning algorithms has
enabled precise tracking and interpretation of human gestures, providing an
opportunity to replicate these movements in prosthetic devices.

By leveraging depth sensors, cameras, and sophisticated algorithms, these tracking systems accurately capture intricate hand gestures and motions in real time. Techniques such as pose estimation and machine learning algorithms facilitate the translation of these gestures into commands for prosthetic hands.

Challenges lie in ensuring seamless integration between the tracking system and the prosthetic device, adapting to various hand movements, and providing a natural user experience.

The application of gesture and hand tracking in prosthetics offers significant potential. It enables prosthetic hands to mirror the natural movements of the user, enhancing dexterity and functionality. Additionally, it holds promise in improving the user's overall quality of life by providing intuitive control over the prosthetic device.

Future developments aim to refine tracking accuracy, enhance gesture recognition capabilities, and foster greater adaptability to individual user preferences and needs. Ongoing research strives to create more intuitive interfaces between the tracking system and prosthetic devices, ultimately enabling seamless and effortless control. A further objective is to make the system affordable and simple to understand.

Keywords – HCI, Quadrant control, convex hull

TABLE OF CONTENTS

1. Introduction
2. Objective
3. Computer Vision over Other HCI
4. Challenges Faced in Tracking
5. Setup
6. Software Used
7. Working of Computer Vision
   I. Filtering
   II. Isolation
   III. Control
   IV. Communication
8. Flowcharts
9. The Prosthetic Hand
10. Conclusion
11. References
Introduction

As a method of Human-Computer Interaction (HCI), hand gesture recognition can serve as a substitute for the standard remote controls or keystrokes currently prescribed. It can do away with the need to learn complex control systems, as it provides a natural, intuitive interface.

Presently, there are two main types of HCIs that can interpret human hand
gestures.

1. The first is the Data Glove method, in which the user wears a glove of some description that requires accelerometers, gyroscopes, and either a power source (for a wireless glove) or a network of data and power lines (for a wired glove).

2. The second method uses a form of Computer Vision to isolate the hand and track its movements; in some experiments a colored glove is used to aid the tracking process. Both methods of hand gesture recognition have their arguments for and against.

OBJECTIVE

The main objective is to make a gesture recognition system that uses computer vision for real-time movement of the prosthetic hand; this will enhance the overall user experience of the prosthesis.

1. Fusing computer vision with Prosthetics


Both hardware and software are involved: the aim is a system that can interpret complex human gestures and translate them into real-world prosthetic movement.

2. Exploring the techniques and algorithms

Learning new algorithms used in computer vision and implementing them in real time for optimal results.

3. Evaluating system performance


Testing with various sensors helps evaluate the system for the fastest and most accurate gesture predictions.

4. Multi-sector application
This project can be used in diverse fields such as medicine, engineering, and remote sensing.

Why Use Computer Vision over the Glove Method
With the Data Glove method, experiments have found that recognition accuracy tends to be higher than with some computer vision methods, upwards of 95% gesture recognition, owing to a higher sampling rate of 100 samples per second. However, there are a few drawbacks.

First is the high cost of data gloves.

Second is accelerometer drift, which results in a continuous lowering of accuracy over extended periods of activity. (Accelerometer drift: accelerometers are analog sensors with offset and gain errors that vary, or drift, over time and temperature, causing incorrect readings.)

Finally, data gloves tend to be cumbersome and uncomfortable when used for long periods at a time, and they are not robust enough for outdoor use.

Methods involving Computer Vision (CV) can range in complexity and accuracy
based upon the camera arrangement that is used. The advantages of using CV as
an HCI are:

• Public devices can be made touchless
• A CV system can monitor multiple hands
• Cameras are portable and are found on almost every device
• Cameras do not impede hand movements, allowing for more natural expression of hand gestures
• More real-time evaluation can be done

Challenges faced by Computer Vision Tracking

When trying to detect hand gestures, camera sensors face challenges such as background interference, a low image-capture rate (slow frames per second) for real-time processing, and the 2-dimensional representation of a 3-dimensional hand, which makes depth estimation difficult. The main problems can be overcome as follows:

A. Background Interference
The CV method can overcome this issue by using image processing software such as OpenCV. Background illumination and noise can be removed from the picture frame using background subtraction, which removes stationary, static objects from the image and focuses on moving objects.
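As a minimal sketch of this step, the snippet below uses OpenCV's MOG2 background subtractor. The report names OpenCV and background subtraction but no specific function, so the choice of MOG2 and its parameters are assumptions.

```python
import cv2

# Assumption: MOG2 stands in for the report's generic "background
# subtraction" step; parameter values are illustrative.
cap = cv2.VideoCapture(0)  # default webcam
backsub = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels matching the learned static background become 0;
    # moving objects such as a hand remain in the foreground mask.
    fg_mask = backsub.apply(frame)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```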

B. Sampling Rate
Sampling rate can be improved by using a camera with higher specifications. The camera used in this paper has a frame rate of only 30 Frames Per Second (FPS). Further increases in frame rate can be achieved with more expensive cameras, but this would compromise the low cost of the system.

C. 3-Dimensional representation
Knowing where unobserved parts of the hand are located is paramount to building a control system. Many CV applications use a Hidden Markov Model to estimate the current hand gesture, based on previous hand gestures and established Markov chains. A Hidden Markov Model can estimate the locations of unseen hand parts based upon their previously observed locations.

D. Depth Estimation
Evaluating depth from a 2D image is a difficult task and needs advanced algorithms to determine where the object is placed. Sensors such as lasers or infrared lights work well in these scenarios, but they significantly increase the cost of the project.

THE SETUP
A. Cameras
Two cameras were tested in this experiment, each chosen for accessibility and cost. The first camera tested was the built-in camera on a Lenovo Legion 5 Pro laptop: an HD Realtek webcam with 28 FPS and a maximum resolution of 1280 x 720 pixels. This camera was chosen primarily to see whether standard built-in cameras are equipped to accurately capture and evaluate gestures for use in control systems. The second camera tested was a QHMPL camera, a USB plug-and-play device capable of up to 17 FPS at lower resolutions. These cameras were chosen because of their low cost and similar frame rates, which allowed the built-in camera to be tested against an external camera to evaluate its performance more accurately.

B. Arduino
The Arduino is the processing unit for the incoming signal. It decides which servos should receive signals, in the form of 0s and 1s, and it continually receives data from the Python script over serial communication. All data sent to the prosthetic hand is processed by this microcontroller.

C. Robotic Hand
The robotic arm was built from wooden popsicle sticks and contains several servo motors: six TowerPro servos were used to actuate the fingers. Its basic operation mimics real human hand movement: strings representing tendons are pulled by the servos, making finger movement possible.

SOFTWARE PACKAGES USED
A. OpenCV
OpenCV is an open-source, cross-platform Computer Vision library that provides real-time image processing. It can be configured to run an application on a computer's graphics processing unit (GPU) or central processing unit (CPU). Within this project, all image processing is done on the CPU. Even though GPUs are specialized for image processing, we ran the image processing on the CPU rather than the GPU to understand the minimum processing power needed for real-time gesture recognition in control systems; state-of-the-art GPUs would also increase the cost of the system.

B. Arduino
The Arduino Software (IDE) is an integrated development environment used to write, compile,
and upload code to Arduino boards. It provides a user-friendly interface for coding and
interacting with hardware.

How CV Perceives Information

The design and implementation of the system is set to permit real-time control of the robotic arm. To achieve real-time results on a low-cost system, the computational effort needs to be kept to a minimum. This is accomplished through filtering, isolation, and control.

A. Filtering
This is a simple method of reducing background noise and unwanted objects in the camera stream.
The following steps are performed to achieve hand isolation:
1. A Gaussian blur is applied to the camera stream
2. Spectrum conversion from Blue, Green, and Red (BGR) to Hue, Saturation, and Value (HSV)
3. Averaging to set threshold limits
4. Threshold detection
5. Morphological opening and closing

1.A Gaussian Blur

Blurring the image helps to reduce white noise and smooths the edges of the image. This also removes large variances in the captured image's colour spectrum, which helps with threshold detection.

2.A Spectrum Conversion
Converting from BGR to HSV is common practice in CV projects. Utilizing HSV enables the separation of colour and intensity, which helps reduce the effects of lighting issues.

3.A Averaging
Hand isolation is accomplished by sampling the pixel data. The mean and standard deviation of those samples are used to set the upper and lower limits for threshold detection.

4.A Threshold Detection

OpenCV's "inRange" function takes the upper and lower limits and filters the image. This creates a black-and-white image with clearly defined edges that can then be used for the isolation process.

5.A Morphological Opening and Closing

This process further cleans the image by eroding and dilating the harsh edges left over from threshold detection, making the contouring stage faster.

B. Isolation
Identifying key features of the hand is paramount for gesture recognition, but finding these features is difficult. The following steps are taken to accomplish real-time tracking and hand gesture recognition:
• Edge detection and evaluation
• Finger detection
• Centre-of-palm evaluation

1.B Edge Detection and Evaluation

Applying edge detection becomes a simple, manageable process using OpenCV's "findContours" function. This adds all the continuous points along a boundary to a vector which can be drawn and evaluated. If the image has more than one boundary, sorting is required to find the contour that represents the hand; this should be the contour with the largest area. Sometimes smaller boundaries are detected, and this additional step ensures the hand is located quickly and accurately.
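A minimal sketch of this step, assuming OpenCV 4 (where findContours returns two values):

```python
import cv2

def largest_contour(mask):
    """Find all boundaries in the binary mask and keep the one with
    the largest area, which is assumed to be the hand."""
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```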

2.B Finger Detection

Creating a convex hull around the contour of the hand supports finding the fingertips and interdigital folds. The process looks for intersecting points between the hull and the contour; these points are classed as fingertips. The points along the contour furthest from the hull are classed as interdigital folds.

C. Control
Evaluating the positions of the fingertips, defects, and centre of the palm is what supports the construction of a robust yet simple control system. This is broken down into two main parts: quadrant control and hand pose detection. The current control system uses a quadrant-based approach to enable or disable arm movements, creating a safe environment for the robot arm to operate in.

1.C Quadrant Control

Quadrant control is evaluated using a derivation of the cross-product formula, which tests on which side of a dividing line the hand lies. For a line through points (x1, y1) and (x2, y2) and a test point (xp, yp), a standard form of this test is

D = (x2 - x1)(yp - y1) - (y2 - y1)(xp - x1)

If the value of D is positive, the point resides on one side of the line. If the value of D is negative, the point is on the opposite side of the line. If the value is equal to zero, the point is on the line; for this control system, a point evaluated as being on the line is treated as being across the line. The point that is evaluated is the centre of the hand. There are 16 different regions in which the centre of the hand may reside, but only nine different commands are issued. To minimize computational effort while tracking the hand position, four tests are performed. The outcome of each test is either a "1" or a "0"; using Boolean logic, the four test outputs are combined into a 4-bit number.
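The following Python sketch illustrates this scheme. The placement of the four dividing lines is not given in the report, so the example dividers are an assumption; the side test itself is the standard cross-product form shown above.

```python
def side_of_line(p, a, b):
    """Cross-product side test. Returns 1 if D >= 0 (on or across the
    line, per the report's convention), else 0."""
    d = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return 1 if d >= 0 else 0

def quadrant_code(palm_centre, lines):
    """Combine four side tests into a 4-bit region code (0-15).
    `lines` holds four ((x1, y1), (x2, y2)) pairs; their on-screen
    placement is an assumption, not specified in the report."""
    code = 0
    for a, b in lines:
        code = (code << 1) | side_of_line(palm_centre, a, b)
    return code

# Example: two vertical and two horizontal dividers on a 1280x720 frame
dividers = [((426, 0), (426, 720)), ((853, 0), (853, 720)),
            ((0, 240), (1280, 240)), ((0, 480), (1280, 480))]
print(quadrant_code((640, 360), dividers))  # prints the region code
```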

10
2.C Hand Pose Detection
When detecting a hand pose, the application compares the average position of the fingertips to the average position of the defects, relative to the centre of the palm. A cross-cultural study of hand dimensions shows that the distance from the centre of the palm to the fingertips is approximately 1.86 times the distance from the centre of the palm to the defects. Applying this ratio allows for more accurate detection of hand poses at variable distances from the camera.
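A minimal sketch of this comparison is shown below. The report gives only the 1.86 ratio; the tolerance factor and the open/closed interpretation are illustrative assumptions.

```python
import numpy as np

def is_open_hand(tips, folds, palm_centre, ratio=1.86):
    """Compare mean fingertip distance with mean interdigital-fold
    distance from the palm centre, using the 1.86 ratio cited above."""
    if not tips or not folds:
        return False
    c = np.asarray(palm_centre, dtype=float)
    tip_d = np.mean([np.linalg.norm(np.asarray(t) - c) for t in tips])
    fold_d = np.mean([np.linalg.norm(np.asarray(f) - c) for f in folds])
    # Extended fingers put the fingertips roughly `ratio` times farther
    # from the palm centre than the folds, regardless of camera distance.
    return tip_d >= 0.9 * ratio * fold_d  # 0.9 tolerance is assumed
```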

D. Communication
Communication is done by live serial output from the PC itself. The Python code gathers information from the camera and then calculates the hand indexes. The calculated result is sent as 0s and 1s to the Arduino board over serial (0 = off, 1 = on). The output is sent in the form '$ 0 0 0 0 0', where the variable set changes according to the finger movement.
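A minimal Python sketch of this serial link, using the pyserial package, is shown below. The report specifies only the '$ 0 0 0 0 0' message format; the port name, baud rate, and message terminator are assumptions.

```python
import serial  # pyserial package

# Assumed port name and baud rate; adjust for your Arduino connection.
ser = serial.Serial('COM3', 9600, timeout=1)

def send_finger_states(states):
    """Send five 0/1 flags to the Arduino (0 = servo off, 1 = servo on)."""
    message = '$ ' + ' '.join(str(int(s)) for s in states) + '\n'
    ser.write(message.encode('ascii'))

send_finger_states([1, 0, 0, 0, 1])  # e.g. thumb and little finger on
```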

E. System Flow Charts

The Robotic Hand

The complete construction of the hand is made of wood (popsicle sticks, balsa) and other recycled material. This cut the cost by up to 80% compared with a 3D-printed hand. The hand works much like a real hand: the strings act as tendons and the servos act as the pulling muscles.


CONCLUSION

Complex computer vision-based gesture recognition systems are being developed that are capable of interpreting sign language in real time. Those systems will interface with machine learning frameworks such as Google's TensorFlow, and their computer processing requirements will be much higher because the applications will be configured to operate on specialized GPUs. As a low-cost gesture recognition application used as a control method for robotics, this project demonstrates that a simple control system can be built using computer vision.

References

Oudah, M.; Al-Naji, A.; Chahl, J. Hand Gesture Recognition Based on Computer
Vision: A Review of Techniques. J. Imaging 2020, 6, 73.

See, A. R.; Tolentino, J. G. M.; Duangsasidhorn, B.; Chang, W. J.; Ou, Y.-K.; Chen, M.-C. Boosting Resnet18 for a Smart Glasses Input Module to Control a 3D Printed Prosthetic Arm. 2022 IEEE International Conference on Consumer Electronics (ICCE), 2022, pp. 1-3.

