
International Journal of Integrated Engineering – Special Issue 2018: Seminar on Postgraduate Study, Vol. 10 No. 3 (2018) p. 32-41


DOI: https://doi.org/10.30880/ijie.2018.10.03.006

A Novel Algorithm for Human Fall Detection using Height, Velocity and Position of the Subject from Depth Maps
Yoosuf Nizam1, M. Mahadi Abdul Jamil1,*, Mohd Norzali Haji Mohd2, M. Youseffi3, M. C. T. Denyer4

1 Biomedical Engineering Modeling and Simulation (BIOMEMS) Research Group, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia.
2 Embedded Computing Systems (EmbCos), Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia.
3 Faculty of Engineering and Informatics, School of Engineering, University of Bradford, Bradford, West Yorkshire BD7 1DP, UK.
4 School of Medical Sciences, University of Bradford, Bradford, West Yorkshire BD7 1DP, UK.

Received 9 January 2018; accepted 16 May 2018, available online 2 July 2018

Abstract: Human fall detection systems play an important role in daily life, because falls are the main obstacle to elderly people living independently and are also a major health concern in an aging population. Different approaches are used to develop human fall detection systems for the elderly and people with special needs. The three basic approaches include wearable devices, ambient devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which are very often rejected by users due to high false alarm rates and the difficulty of carrying them during daily life activities. This paper proposes a fall detection system based on the height, velocity and position of the subject, using depth information from a Microsoft Kinect sensor. Classification of human falls from other activities of daily life is accomplished using the height and velocity of the subject extracted from the depth information. Finally, the position of the subject is identified for fall confirmation. From the experimental results, the proposed system was able to achieve an average accuracy of 94.81%, with a sensitivity of 100% and a specificity of 93.33%.
Keywords: Fall detection, Kinect sensor, Depth images, Non-invasive, Depth sensor

1. Introduction

Classification of the daily activities of human life is a key topic in the field of assistive technology, due to the emergence of numerous applications requiring human posture and movement recognition. Among such applications is the human fall detection system, which also requires distinguishing human activities. Human fall detection systems are very often required for many people in today's aging population, including the elderly and people with special needs such as the disabled and obese. Falls are also a major health concern for many communities, because human falls are the main cause of injury-related deaths for seniors aged 79 and above [1, 2]. Injuries from falls are also a major public health concern [3], and falls are the main obstacle to elderly people living independently.
Fall detection systems use various approaches to distinguish human falls from other activities of daily life, such as wearable sensors, non-wearable sensors and vision-based sensors. Thus, the three basic approaches used in developing human fall detection systems are wearable devices, camera (vision) based devices and ambience-based devices. Wearable devices such as fall detection belts, and ambient devices using vibration and pressure sensors, tend to generate high false alarm rates. As a result, such devices are often rejected by users. On the other hand, vision-based fall detection systems are very accurate in classifying human falls. Therefore, vision-based sensors [4-6] for human detection and identification are important among researchers, especially those who base their fall detection on non-wearable sensors [7].
Vision-based approaches using live cameras are accurate in detecting human falls and do not generate high false alarm rates. However, normal cameras require adequate lighting and do not preserve the privacy of users. As a result, users are not willing to accept having such systems installed in their homes. Moreover, with a normal color camera it is not possible to achieve the accuracy of a depth sensor in extracting the position of the subject. Thus, a depth sensor is a preferable option over a color camera, since it can identify a human even in the dark, preserves the privacy of users, and has shown good performance in previous works [8-12]. One sensor that generates depth images and tracks the human skeleton is the Microsoft Kinect for Windows. This paper proposes a vision-based fall detection system which uses depth information from the Kinect sensor to classify human falls from other activities
*Corresponding author: mahadi@uthm.edu.my
2018 UTHM Publisher. All rights reserved.
penerbit.uthm.edu.my/ojs/index.php/ijie

of daily life. The algorithm first classifies human activities using the height and velocity of the subject to identify any potential fall movement. If such a movement is detected, the system extracts the position of the subject to confirm the fall.

2. Related works

Since this paper focuses on the use of depth information to categorize human activities for fall detection, this section reviews only a selected set of papers that based their fall detection on depth sensors.
A combination of a wearable wireless accelerometer and depth sensor-based fall detection was conducted, in which the distance between the person's centre of gravity and the floor was used to confirm a fall. The authentication of a fall, after a potential fall indication from the accelerometer, is accomplished from the Kinect sensor depth images [13]. Another study proposed fall detection using a ceiling-mounted depth camera [14].
A statistical method for human fall detection, based on how the human moves during the last few frames, was proposed by Zhang et al. They used statistical decision making, as opposed to the hard-coded decision making used in related works. The duration of the fall in frames, the total head drop (change of head height), the maximum speed (largest drop in head height), the smallest head height and the fraction of frames with a head drop are considered for fall detection [15].
Spatio-temporal context (STC) tracking of depth images from a Kinect sensor was proposed by Yang et al., where the parameters of a Single Gauss Model (SGM) are estimated and the coefficients of the floor plane are extracted in pre-processing. Then the foreground coefficient of ellipses is used to determine the head position, and the STC algorithm is used to track it. The distance from the head to the floor plane is calculated in every following frame, and a fall is indicated if an adaptive threshold is reached [16].
Bian et al. presented a method for fall detection based on two features: the distance between the human skeleton joints and the floor, and the joint velocity. A fall is detected if the distance between the joints and the floor is small. The velocity of the joint hitting the floor is then used to distinguish a fall accident from sitting or lying down on the floor [7].
A fall detection and reporting system using the Microsoft Kinect sensor, presented by Kawatsu et al., uses two algorithms. The first uses only a single frame to determine a fall, and the second uses time-series data to distinguish between a fall and slowly lying down on the floor. For these algorithms they use the joint positions and joint velocities. Reports can be sent as emails or text messages and can include pictures during and after the fall [17].
In another study, a mobile robot system is introduced which follows a person and detects when the person has fallen using a Kinect sensor. They used the distance between the body joints and the floor plane to detect a fall [18].
Rougier et al. presented a method for fall detection that uses the human centroid height relative to the ground and the body velocity. They also dealt with occlusions, which was a weakness of previous works, and claimed good fall detection results with an overall success rate of 98.7% [19].
A privacy-preserving fall detection method for indoor environments using the Microsoft Kinect depth sensor was proposed by Gasparrini et al. They used an ad-hoc segmentation algorithm in a ceiling-mounted configuration. The raw data directly from the sensor are analyzed, and the system extracts the elements needed to classify all the blobs in the scene. Recognition of human subjects from the blobs is followed by a tracking algorithm between frames. A fall is detected if the depth blob associated with a person is near the floor [20].
They proposed another work using a wearable device along with the depth sensor, based on three algorithms. The first algorithm uses a wearable accelerometer at the wrist together with the Kinect sensor. In the second algorithm, the wearable accelerometer is placed at the waist. The third algorithm avoids using the orientation of the accelerometer and uses the distance of the spine joint to the floor instead of the angle (orientation) of the accelerometer [21].
The related works use either the height of the subject and the velocity of the joints, or the position and velocity of the joints, to classify human falls [22]. They used different algorithms to extract the subject from the depth images. The proposed algorithm uses a combination of the velocity of the body and the height of the subject to classify human falls from other activities of daily life.

3. Methodology

The method proposed in this paper employs the height of the subject and the velocity of the body to identify any potential fall movements. Once such a movement is observed, the position of the joints is measured with respect to the floor plane to confirm whether the subject is lying on the floor. The algorithm starts with the extraction of depth information from the Kinect v1 sensor for the extraction of the 3D human joint positions and the floor plane. These data are then used to compute the velocity of the body, the distance between the head and the floor plane (height) and the position of the other joints, in order to classify an unintentional human fall from other activities of daily life.

Overview of the system: Human skeleton coordinates generated from the Microsoft Kinect sensor are used in the proposed algorithm for fall detection. The skeleton joint data are stored as (x, y, z) coordinates expressed in meters. In this coordinate system, the positive y-axis extends vertically upwards from the depth sensor, the positive x-axis extends to the left (with the Kinect sensor placed on a level surface) and the positive z-axis extends in the direction in which the Kinect is pointed. The z value gives the distance of the object from the sensor (objects close to the sensor have a small depth value and objects far away have a larger depth value). Using these joint coordinate data, the movement of any joint, velocity


and position of the subject can be computed, along with the direction with respect to the previous joint position. Fig. 1 displays the skeleton generated from the sensor, labeled with the joints considered in the algorithm. Table 1 shows the combinations of joints considered for the different parameters used in the proposed algorithm for the classification of human falls from other activities of daily life.

Fig. 1 Illustration of joints considered in the proposed algorithm.

Combinations of the labeled joints in Fig. 1 are used for the calculation of the subject's height, velocity, speed and position (posture), as described in Table 1. The proposed fall detection algorithm using these parameters is discussed in the next section, and the detailed derivation of the parameters in the section after that. In Table 1, a tick mark indicates that the joint is used for the calculation of the specific parameter.

Table 1 Joint coordinates used for the different parameters.

Joints     Head       Shoulder center   Hip center   Left and right knee
Height     ✓          Not used          Not used     ✓
Velocity   ✓          ✓                 ✓            Not used
Speed      Not used   ✓                 Not used     Not used
Position   ✓          ✓                 ✓            ✓

Fall detection algorithm: The proposed algorithm for fall detection goes through four stages, starting once a person is detected in the acquired depth image from the sensor. If no person is detected, the loop goes back and acquires the next image. If a person is detected, stage 1 is executed, where the algorithm extracts the skeleton coordinate data of all joints, takes a delay of two frames to extract another set of skeleton data, and then calculates the initial velocity and height of the subject. The process skips 10 frames before calculating the next velocity, in order to identify any abnormal velocity with respect to the previously calculated velocity. If it senses an abnormal increase in velocity, the next stage calculates the height of the subject to identify the direction of the movement or change with respect to the previously calculated height. This is because an abnormal increase in velocity can be caused by many activities where the pattern and the changes in the direction of height differ, such as an increase in walking speed (increased step frequency or step length), sitting on a chair or the floor, lying down on the floor or a bed, and running. For example, for walking and running the height changes will be a straight fluctuation in any direction, while for the other activities mentioned above the direction will be straight down or, most probably, diagonally down the y-axis of the image. Therefore, the system can differentiate the activities by considering how far the height drops and the position of the joints (posture) after the change (abnormal velocity) stabilizes. This process of classifying human falls from other activities of daily life is illustrated in Fig. 2. As seen from Fig. 2, the algorithm executes stage 3 if it does not sense any abnormal change in velocity (from stage 1) or in height (from stage 2); there the system checks for any activity or movement. If movements are detected, the system displays the joint coordinates and starts the process all over. If no activity is detected in stage 3, or an unusual height change is observed in stage 2, the process activates the microphone array of the sensor to listen for any voice from a falling person, and the system then executes stage 4 after skipping 15 frames (half a second). In stage 4, the system computes the joint positions to predict the posture, and the algorithm checks whether the joints are below the knee joints (left and right) previously extracted in stage 1. This is because, for activities like lying on a bed, the changes in velocity and height are similar to those during a fall, except that at the end the subject is lying on the bed. During a fall, all the joint heights should be below or equal to the knee joint height recorded before the detection of the abnormal change in velocity or height. For lying on a bed, the joint heights will probably be above the previous knee joint height, because the subject is lying on the bed. If the joints are below the previous knee joint position, the system confirms a fall. This check is needed because a lying posture of the joints can be observed in activities other than a fall, such as lying on a bed, or even a fall onto a bed, where the consequences will not be severe.
Some of the daily activities that are closely related to human falls are shown in Table 2, along with the characteristics considered in the classification of the events.

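The four-stage flow described above can be sketched as a simple decision sequence. The following is an illustrative outline, not the authors' implementation; the event keys and the helper structure are hypothetical placeholders for the per-stage sensor checks.

```python
# Illustrative sketch (not the authors' code) of the four-stage flow of Fig. 2.
# Each boolean stands in for one of the sensor checks described in the text.

def classify(events):
    """Return the label the four-stage flow would assign to one episode."""
    if not events["person_detected"]:
        return "acquire next frame"
    # Stage 1: velocity is re-computed every 10 frames; is the increase abnormal?
    if not events["abnormal_velocity"]:
        return "monitor activity"        # stage 3: show joints, start over
    # Stage 2: did the height drop (straight or diagonally down the y-axis)?
    if not events["unusual_height_change"]:
        return "monitor activity"        # e.g. faster walking or running
    # Stage 4 (after skipping 15 frames, half a second at 30 fps):
    # are all joints at or below the knee height recorded in stage 1?
    if events["joints_below_previous_knee"]:
        return "fall confirmed"
    return "lying on bed"                # similar motion, but joints stay higher


fall = {"person_detected": True, "abnormal_velocity": True,
        "unusual_height_change": True, "joints_below_previous_knee": True}
print(classify(fall))  # fall confirmed
```

In the real system each check is computed from skeleton data over several frames; here they are collapsed into booleans purely to show the control flow.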

Table 2 Characteristics of some movements.

Activity           Characteristics
Sitting on chair   Hip center and head vertical; angle formed from hip center to head and hip center to knee < 90°
Sitting on floor   Hip center and head vertical; hip center, left and right knee on floor
Lying on floor     Hip center and head horizontal; all joints below or equal to previous knee height
Lying on bed       Hip center and head horizontal; all joints above or equal to previous knee height
Fall               Velocity and height change high; all joints below or equal to previous knee height
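The last two rows of Table 2 reduce to a single predicate over joint heights. A minimal sketch (hypothetical names and values, not the authors' code):

```python
def lying_posture(joint_heights, previous_knee_height):
    """Distinguish the 'Lying on floor'/'Fall' rows of Table 2 from
    'Lying on bed': fall-like only if every tracked joint has dropped
    to or below the knee height recorded before the abnormal movement."""
    if all(h <= previous_knee_height for h in joint_heights.values()):
        return "fall-like"
    return "bed-like"

# Heights above the floor plane in meters (hypothetical values).
print(lying_posture({"head": 0.20, "shoulder_center": 0.25, "hip_center": 0.15},
                    previous_knee_height=0.45))  # fall-like
```

With the same joints raised to bed height (say, all above 0.45 m) the predicate returns "bed-like", which is exactly the stage-4 distinction described above.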

Fig. 2 Process flow of the proposed algorithm.

Floor plane and calculation of other parameters: This section discusses the calculation of the parameters used in the proposed algorithm: the height, velocity, speed and position of the joints. Height typically refers to the distance from the floor to the head (the height of the subject), but in one instance, in stage 4, the distance from the knee joints (left and right) to the floor (the knee joint height), computed in stage 1, is also considered for confirming a fall. Speed is the rate of change of the position of the subject, calculated from the change of position of the shoulder center over time. Velocity is the change of position per unit time in a particular direction. In other words, speed is the magnitude component of velocity, and velocity contains speed (the magnitude component) plus a direction component. In this paper, speed is used to calculate the speed of walking and running. Velocity is used to classify different activities in terms of the rate of change of the head, shoulder center and hip center, with direction. The position of the joints is computed by considering the height and the angles between the joints, as per the characteristics in Table 2 for the joints in Table 1.
To calculate the height of the subject and the distance from the knee joints to the floor, the algorithm uses the floor plane provided by the Kinect and the generated skeleton coordinates as the basis for the distance calculation in Equation (2). To find the floor plane equation at start-up, three points on the 3D floor plane are chosen and the system automatically solves the floor plane equation. The skeleton frame generated from the depth image also contains the floor-clipping-plane vector, which holds the coefficients of an estimated floor-plane equation, as shown in Equation (1):

Ax + By + Cz + D = 0   (1)

Here: A = vFloorClipPlane.x
B = vFloorClipPlane.y
C = vFloorClipPlane.z
D = vFloorClipPlane.w

The equation is normalized such that D is the height of the camera above the floor in meters. Using this equation, the floor plane, or even a stair plane, can be detected. To calculate the distance between any joint and the floor, the joint coordinates and the floor plane equation are applied in Equation (2):

D (distance, joint to floor) = (Ax + By + Cz + D) / √(A² + B² + C²)   (2)

where x, y, z are the coordinates of the joint.

For the speed and velocity calculation, joint coordinates are extracted after every consecutive frame. Speed (of walking or running) is calculated by taking the difference of the shoulder center position between three frames (skipping one frame) along the respective axis, after considering the direction of the movement (the direction component of velocity). The direction is obtained using the coordinate system of the Kinect. As per the coordinate system shown in Fig. 4, any movement to the right, upward, or away from the sensor (along any axis) gives a positive value for the distance difference from the current position to the previous position. Similarly, any movement to the left, downward, or toward the sensor gives a negative value for the distance difference. Using this convention, the direction is determined for all the movements shown in Fig. 4. If the direction is lateral (right or left) the x-axis is considered; if it is vertical (up or down) the y-axis is used; and the z-axis is used if the movement is toward or away from the sensor, as shown in Fig. 3a to 3c respectively.
Once the difference of the shoulder center position is determined, after considering the direction of the movement, it is divided by the time taken for the movement to calculate the speed, as shown in Equation (3). The time taken for the movement is 1/15 seconds, because the sensor generates 30 frames per second and the joint position is taken after skipping one frame (the time for two frames). For the magnitude component of the velocities, the same concept as for speed (Equation (3)) is used, except that the speed of the shoulder center and hip center is also calculated, together with the direction.

Speed = (Dc − Dp) / (tc − tp)  meters/second   (3)

where Dc is the current distance (current joint coordinate), Dp is the previous distance (previous joint coordinate), tc is the current time in seconds and tp is the previous time in seconds.

If the direction is diagonal (irregular) to any side (any axis), the distance travelled cannot simply be calculated by subtracting the positions between two frames along any one axis, because the change is not along that axis; if the change were taken as along the axis, the distance would be less than the actual distance travelled. This distance can be calculated by treating the distance travelled as the hypotenuse of a right-angled triangle whose opposite and adjacent sides are formed by joining the current and previous positions of the particular joint to the x-axis and y-axis. Here the actual distance travelled is the hypotenuse, and the other two sides (opposite and adjacent) of the triangle are formed by taking the straight movements onto the two axes (x and y), depending on the direction of the movement. For example, if the direction is diagonally down or up with respect to the y-axis, the opposite and adjacent sides of the triangle are formed from the movements straight along the x-axis and y-axis, as shown in Fig. 3d. The formula used to calculate this distance (the hypotenuse of the triangle formed) is shown in Equation (4). Once the distance is calculated, the magnitude part of the velocity is calculated using Equation (3).

D (distance change for irregular movements) = √((xc − xp)² + (yc − yp)²)   (4)

Fig. 3 Coordinate system and velocity calculation description.
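Equations (2) to (4) can be sketched in a few lines. This is an illustrative reimplementation under the definitions above, not the authors' code; the function names are hypothetical.

```python
import math

def joint_to_floor_distance(joint, plane):
    """Equation (2): perpendicular distance from a joint (x, y, z) to the
    floor plane Ax + By + Cz + D = 0, given as coefficients (A, B, C, D)."""
    x, y, z = joint
    A, B, C, D = plane
    return (A * x + B * y + C * z + D) / math.sqrt(A * A + B * B + C * C)

def speed(d_current, d_previous, dt=2 / 30):
    """Equation (3): signed displacement over elapsed time; dt defaults to
    two frames at 30 fps (1/15 s), as used for the shoulder center."""
    return (d_current - d_previous) / dt

def diagonal_distance(current, previous):
    """Equation (4): for diagonal movement, the travelled distance is the
    hypotenuse of the x- and y-axis displacements between two frames."""
    return math.hypot(current[0] - previous[0], current[1] - previous[1])

# For a level floor plane (0, 1, 0, 0), a head at y = 0.3 m is 0.3 m above it.
print(joint_to_floor_distance((0.1, 0.3, 2.0), (0, 1, 0, 0)))  # 0.3
```

Note that `speed` keeps the sign convention described above: a joint moving down or toward the sensor yields a negative value, which is how the direction component is carried into the velocity.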


Fig. 4 Description of direction of movements.

The positions of the head, shoulder center, hip center, left and right knee are derived by considering the height of the particular joint and the joint coordinates (x, y and z values). The height of any joint is calculated by applying the joint coordinates to Equation (2). The height of the next joint and the angle formed by the joints are also considered in determining the current posture. An example of the angle calculation for the joint angle formed between the shoulder center, hip center and left knee while sitting on the floor is illustrated in Fig. 5, with the corresponding equations.
In this example, the angle formed by the lines joining the shoulder center to the hip center and the left knee to the hip center is calculated by dividing it into two parts. The first part, labelled 'a1', is calculated from the right-angled triangle formed by extending the x-axis coordinate of the hip center and the y-axis coordinate of the shoulder center. The angle 'a1' is calculated by taking the inverse tangent of the x-axis difference between the hip center and shoulder center (opposite) over the y-axis difference of the same two joints (adjacent). Similarly, angle 'a2' is calculated by taking the inverse tangent of the x-axis difference between the hip center and left knee (opposite) over the y-axis difference of the same two joints (adjacent). The final angle formed by the three joints is the sum of the two angles (a1 + a2). The same concept is applied to calculate the angle between any three joints.

Fig. 5 Description of angles while sitting on floor.

a1 = tan⁻¹(|x_hip − x_shoulder| / |y_hip − y_shoulder|) = 34.89°   (5)

a2 = tan⁻¹(|x_hip − x_knee| / |y_hip − y_knee|) = 82.40°   (6)

4. Results and discussion

The proposed system has been tested with different activities of daily life, such as walking, running, sitting on a chair, sitting on the floor, lying on the floor and falling from standing. The experimental results showed that the proposed system can classify human falls from other activities of daily life using the height changes and velocity of the subject, together with the position of the subject after the fall. The extraction of the joint positions after a potential fall activity showed good performance in differentiating human activities and confirming human falls.
The order of the velocity and height of the subject used in the proposed algorithm greatly helped in eliminating human activities that are closely analogous to falls before going to the final stage for fall confirmation. Thus, the algorithm proved to reduce the error rate, since those activities that are most often misinterpreted as falls (such as lying on the floor) are classified out in stage 3 (of Fig. 2) before the posture computation for confirming falls. Fig. 6 shows screenshots of some activities performed in the experimental testing, and the corresponding changes in the height of the subject are shown in Fig. 7. As seen from Fig. 7, the height change pattern can distinguish some activities without considering velocity, but activities that possess a similar height change pattern do require velocity.
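The two-part inverse-tangent angle computation described in the methodology can be sketched as follows. This is illustrative only; the (x, y) joint coordinates in the example are made up.

```python
import math

def joint_angle(shoulder, hip, knee):
    """Angle at the hip center formed with the shoulder center and a knee,
    as the sum of two inverse tangents (Equations (5) and (6)): each part
    takes the x-difference as the opposite side and the y-difference as
    the adjacent side of a right-angled triangle."""
    a1 = math.degrees(math.atan2(abs(hip[0] - shoulder[0]),
                                 abs(hip[1] - shoulder[1])))
    a2 = math.degrees(math.atan2(abs(hip[0] - knee[0]),
                                 abs(hip[1] - knee[1])))
    return a1 + a2

# Hypothetical (x, y) positions in meters: shoulder, hip, knee.
print(joint_angle((0.0, 0.9), (0.1, 0.5), (0.5, 0.4)))  # ≈ 90.0
```

The same helper, applied to any three joints, supports the posture checks of Table 2 (for instance the "angle < 90°" rule for sitting on a chair).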

Fig. 6 Screenshots of activities performed with corresponding depth images. (a) Standing; (b) bending and standing up; (c) walking across the sensor; (d) walking slowly across the sensor; (e) running; (f) walking around the sensor; (g) sitting down on the floor; (h) falling from standing; (i) sitting on a chair; (j) falling from a chair; (k) standing up.


Fig. 7 Changes of height for the activities in Fig. 6.

The effectiveness of the algorithm in determining the direction of movement is extremely important, because the variables used to calculate the parameters used in fall detection depend on the direction of the movement from the previous location to the new location. An incorrect direction can therefore lead to the calculation of a wrong parameter and thus bias the fall prediction. Two deliberate instances of lying on the floor, on two different sides relative to the viewing angle of the sensor and with almost the same duration, are shown in Fig. 8. The pattern labelled (a) in Fig. 8 is lying on the floor in front of (facing) the sensor, and (b) shows the changes in height while lying on the floor viewed from the side. Label (c) shows the difference in the drop in height, with a mean of just 4.185 centimeters between the two lying patterns. It can be concluded from Fig. 8 that the system is accurate in determining the direction of movements, because the two instances of lying on the floor in Fig. 8 require the fall detection parameters to use different variables (the two are at different views to the sensor, so the direction of the height change differs). The similarity of the height change for the two shows that the system considered the direction of the movement correctly.

Fig. 8 Changes in height during lying on floor from different sides.

On the other hand, Fig. 9 illustrates the changes in height for walking across (lateral to) the sensor, around the view of the sensor, straight along the direction the sensor is pointed, and diagonally to the two sides of that direction. From Fig. 9 and the statistics in Table 3, the system was able to calculate height most accurately for walking straight along the direction the sensor is pointed. It can also be concluded that the accuracy dropped for movements across the sensor and for movements diagonal to the two sides the sensor is pointed.

Table 3 Mean and standard deviation of calculated height (cm) for walking.

Walking direction          Mean       STD
Across the sensor view     176.4152   8.239622
Around the sensor view     176.9429   7.523967
Diagonally to sensor       185.7137   8.095673
Forth and back to sensor   175.3316   4.645903


Fig. 9 Changes of height for walking in different directions relative to the sensor's view.

The results presented so far are enough to prove that the proposed system is able to identify the direction of movements accurately. The direction identification is vital, because the parameters used in the system are calculated based on the direction of the movement with respect to the previous location. In order to verify that the developed system can precisely predict any fall movement, it is important to prove that the parameters used in the fall detection process are calculated accurately. Fig. 10 shows the calculated height changes during a normal slow lying down on the floor from standing, together with the head, shoulder center and hip position changes along the y-axis of the image. It is clear from Fig. 10 that the calculated height was very accurate compared with the actual position data.

Fig. 10 Changes in height and position for lying on floor.

From the experimental results, it was found that the one daily activity very similar to a fall is lying on the floor. Fig. 11 illustrates the changes in height during a normal slow lying down on the floor and a fall onto the floor from standing, together with their respective (downward) changes in velocity. The changes in height are similar in nature except for the rate of change over time. Falling onto the floor took about 2.1 seconds, while lying down on the floor took nearly 3.5 seconds to come completely to rest on the floor.
standing together with their respective changes in


Fig. 11 Changes in height during fall and lying on floor with their velocity.
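The accuracy, sensitivity and specificity reported in this paper follow from the standard confusion-matrix formulas given below as Equations (7) to (9). As a quick check, using the trial counts from Table 4 (TP = 17, FN = 0, FP = 4, TN = 56):

```python
# Quick check of the reported figures from the confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity (Equations (7)-(9)), in percent."""
    accuracy = 100 * (tp + tn) / (tp + tn + fp + fn)
    sensitivity = 100 * tp / (tp + fn)
    specificity = 100 * tn / (tn + fp)
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=17, tn=56, fp=4, fn=0)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # 94.81 100.0 93.33
```

These match the 94.81% accuracy, 100% sensitivity and 93.33% specificity reported in the abstract and in this section.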

Table 4 shows the confusion matrix of falls and non-falls from the experimental activities performed. It shows the overall number of trials conducted, together with the predicted value for each activity. Accuracy, sensitivity and specificity are calculated using equations (7), (8) and (9) respectively:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (7)

Sensitivity = TP / (TP + FN)   (8)

Specificity = TN / (TN + FP)   (9)

Table 4 The confusion matrix of fall and non-fall activities.

                               Predicted
  Actual        Falls                      Non-falls                  Total
  Falls         True Positive (TP) = 17    False Negative (FN) = 0    17
  Non-falls     False Positive (FP) = 4    True Negative (TN) = 56    60

Using the above three equations on the fall and non-fall activities in Table 4, the system was able to gain an average accuracy of 94.81%, with a specificity of 93.33% and a sensitivity of 100%. The reason for the 100% sensitivity is that the developed system was able to classify all fall movements conducted in the trial. The reduced specificity is due to the inability of the system to distinguish some lying-down-on-floor movements from falls.

5. Conclusion
This paper proposed a human fall detection system based on the height, velocity and position of the body, computed from depth images generated by the Microsoft Kinect sensor. The experimental results showed that the algorithm used in the system can accurately distinguish fall movements from other daily activities, with an average accuracy of 94.81%. The system was also able to gain a sensitivity of 100% with a specificity of 93.33%. The proposed system was able to accurately distinguish all fall movements from other activities of daily life, even though it failed to distinguish some lying-down-on-floor movements from falls. The velocity of the joints greatly helps to classify movements whose distance-change patterns show similar variation. The proposed system could be further improved by considering additional joints, especially in the final stage for fall confirmation. Furthermore, experiments can be done to see whether the velocity estimate can be improved by increasing the frame gap used for joint coordinate extraction.

Acknowledgements
This paper was partly sponsored by the Center for Graduate Studies. This work is funded under the project titled "Biomechanics computational modeling using depth maps for improvement on gait analysis". The author Yoosuf Nizam would also like to thank Universiti Tun Hussein Onn Malaysia for providing lab components and the GPPS (Project Vot No. U462) sponsorship of his studies.
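As a worked illustration of the evaluation metrics in equations (7)–(9) and of the frame-gap velocity estimate discussed in the conclusion, the sketch below computes both quantities. This is a minimal example, not the authors' implementation: the function names, the 30 fps Kinect frame rate, the metre-scaled joint coordinates and the choice of frame gap are illustrative assumptions.

```python
import math

# Equations (7)-(9) applied to a 2x2 confusion matrix.
def metrics(tp: int, fn: int, fp: int, tn: int):
    """Return (accuracy, sensitivity, specificity) as percentages."""
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100  # Eq. (7)
    sensitivity = tp / (tp + fn) * 100                # Eq. (8)
    specificity = tn / (tn + fp) * 100                # Eq. (9)
    return accuracy, sensitivity, specificity

# Frame-gap velocity estimate: displacement between two 3-D joint
# positions sampled `frame_gap` frames apart, divided by elapsed time.
def joint_velocity(p_old, p_new, frame_gap, fps=30.0):
    """Speed in m/s of a joint, given two positions in metres."""
    dt = frame_gap / fps
    return math.dist(p_old, p_new) / dt

# Counts taken from Table 4: 17 fall trials and 60 non-fall trials.
acc, sens, spec = metrics(tp=17, fn=0, fp=4, tn=56)
print(f"Accuracy: {acc:.2f}%, Sensitivity: {sens:.2f}%, "
      f"Specificity: {spec:.2f}%")
# Accuracy: 94.81%, Sensitivity: 100.00%, Specificity: 93.33%
```

Estimating velocity over a larger frame gap averages out per-frame jitter in the skeleton coordinates, at the cost of responsiveness, which is why the gap size is worth tuning experimentally.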