Fall Detection Using Height, Velocity
Received 9 January 2018; accepted 16 May 2018, available online 2 July 2018
Abstract: Human fall detection systems play an important role in daily life, because falls are the main obstacle to independent living for elderly people and a major health concern for an aging population. Different approaches are used to develop human fall detection systems for the elderly and for people with special needs. The three basic approaches are wearable devices, ambient-based devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which are often rejected by users due to high false-alarm rates and the difficulty of carrying them during daily activities. This paper proposes a fall detection system based on the height, velocity and position of the subject, using depth information from a Microsoft Kinect sensor. Human falls are classified from other activities of daily life using the height and velocity of the subject extracted from the depth information. Finally, the position of the subject is identified for fall confirmation. In the experimental results, the proposed system achieved an average accuracy of 94.81% with a sensitivity of 100% and a specificity of 93.33%.
Keywords: Fall detection, Kinect sensor, Depth images, Non-invasive, Depth sensor
of daily life. The algorithm will first classify human activities using the height and velocity of the subject to identify any potential fall movement. If any such movement is detected, the system will extract the position of the subject to confirm the fall.

2. Related works

Since this paper focuses on the use of depth information to categorize human activities for fall detection, this section reviews only a selection of papers that based their fall detection on depth sensors.

A fall detection approach combining a wearable wireless accelerometer and a depth sensor was conducted in [13], which used the distance between the person's centre of gravity and the floor to confirm the fall. A potential fall indicated by the accelerometer is authenticated using depth images from a Kinect sensor [13]. Another study proposed fall detection using a ceiling-mounted depth camera [14].

A statistical method for human fall detection, based on how the human moves during the last few frames, was proposed by Zhang et al. They used statistical decision making as opposed to the hard-coded decision making of related works. The duration of the fall in frames, the total head drop (change of head height), the maximum speed (largest drop in head height), the smallest head height and the fraction of frames with a head drop are considered for fall detection [15].

A spatio-temporal context (STC) tracking of depth images from a Kinect sensor was proposed by Yang et al., where the parameters of the Single Gauss Model (SGM) are estimated and the coefficients of the floor plane are extracted in pre-processing. A foreground coefficient of ellipses is then used to determine the head position, and the STC algorithm is used to track it. The distance from the head to the floor plane is calculated in every following frame, and a fall is indicated if an adaptive threshold is reached [16].

Bian et al. presented a method for fall detection based on two features: the distance between the human skeleton joints and the floor, and the joint velocity. A fall is detected if the distance between the joints and the floor is small. The velocity of the joint hitting the floor is then used to distinguish a fall accident from sitting or lying down on the floor [7].

A fall detection and reporting system using the Microsoft Kinect sensor presented by Kawatsu et al. uses two algorithms: the first uses only a single frame to determine a fall, and the second uses time-series data to distinguish between a fall and slowly lying down on the floor. Both algorithms use the joint positions and joint velocities. Reports can be sent as emails or text messages and can include pictures from during and after the fall [17].

In another study, a mobile robot system is introduced which follows a person and detects, using a Kinect sensor, when the person has fallen. The distance between the body joints and the floor plane is used to detect the fall [18].

Rougier et al. presented a method for fall detection that uses the human centroid height relative to the ground and the body velocity. They also dealt with occlusions, a weakness of previous works, and reported good fall detection results with an overall success rate of 98.7% [19].

A privacy-preserving fall detection method for indoor environments using the Microsoft Kinect depth sensor was proposed by Gasparrini et al. They used an ad-hoc segmentation algorithm in a ceiling-mounted configuration. The raw data from the sensor are analyzed and the system extracts the elements needed to classify all the blobs in the scene. Recognition of human subjects from the blobs is followed by a tracking algorithm between frames. A fall is detected if the depth blob associated with a person is near the floor [20].

They proposed another work using a wearable device along with the depth sensor, comparing three algorithms. The first uses a wearable accelerometer at the wrist together with the Kinect sensor. In the second, the wearable accelerometer is placed at the waist. The third avoids using the orientation of the accelerometer and instead uses the distance from the spine joint to the floor rather than the accelerometer angle (the orientation of the sensor) [21].

The related works use either the height of the subject and the velocity of the joints, or the position and velocity of the joints, to classify human falls [22]. They used different algorithms to extract the subject from the depth images. The proposed algorithm uses a combination of the body velocity and the subject height to classify human falls from other activities of daily life.

3. Methodology

The method proposed in this paper employs the height of the subject and the velocity of the body to identify any potential fall movements. Once such a movement is observed, the position of the human joints is measured with respect to the floor plane to confirm whether the subject is lying on the floor. The algorithm starts with the extraction of depth information from a Kinect v1 sensor, from which the 3D human joint positions and the floor plane are obtained. These data are then used to compute the velocity of the body, the distance between the head and the floor plane (height) and the position of the other joints, to classify an unintentional human fall from other activities of daily life.

Nizam et al., Int. J. of Integrated Engineering - Special Issue 2018: Seminar on Postgraduate Study, Vol. 10 No. 3 (2018) p. 32-41

Overview of the system: Human skeleton coordinates generated from the Microsoft Kinect sensor are used in the proposed algorithm for fall detection. The skeleton joint data are stored as (x, y, z) coordinates expressed in meters. In this coordinate system, with the Kinect placed on a level surface, the positive y-axis extends vertically upwards from the depth sensor, the positive x-axis extends to the sensor's left, and the positive z-axis extends in the direction in which the Kinect is pointed. The z value gives the distance of the object from the sensor (objects close to the sensor have a small depth value and objects far away have a larger depth value). Using these joint coordinate data, the movement of any joint, velocity
and the position of the subject can be computed, along with the direction with respect to the previous joint position. Fig. 1 displays the skeleton generated from the sensor, labeled with the joints considered in the algorithm. Table 1 shows the combinations of joints considered for the different parameters used in the proposed algorithm for the classification of human falls from other activities of daily life.

Fig. 1 Illustration of joints considered in the proposed algorithm.

Combinations of the labeled joints in Fig. 1 are used for the calculation of the subject's height, velocity, speed and position (posture), as described in Table 1. The proposed fall detection algorithm using these parameters is discussed in the next section, and the detailed derivation of the parameters in the section after that. In Table 1, a tick mark indicates that the joint is used for the calculation of the specific parameter.

Table 1 Joint coordinates used for the different parameters.

Parameter | Head     | Shoulder center | Hip center | Left and right knee
Height    | ✓        | Not used        | Not used   | ✓
Velocity  | ✓        | ✓               | ✓          | Not used
Speed     | Not used | ✓               | Not used   | Not used
Position  | ✓        | ✓               | ✓          | ✓

Fall detection algorithm: The proposed algorithm for fall detection goes through four stages, starting after a person is detected in the depth image acquired from the sensor. If a person is not detected, the loop goes back and acquires the next image. If a person is detected, stage 1 is executed: the algorithm extracts the skeleton coordinate data of all joints and takes a delay of two frames to extract another set of skeleton data, then calculates the initial velocity and height of the subject. The process skips 10 frames before calculating the next velocity, to identify any abnormal velocity with respect to the previously calculated velocity. If it senses an abnormal increase in velocity, the next stage calculates the height of the subject to identify the direction of the movement, or the change with respect to the previously calculated height. This is because an abnormal increase in velocity can be caused by many activities in which the pattern and the direction of the height changes differ, such as an increase in walking speed (increased step frequency or step length), sitting on a chair or on the floor, lying down on the floor or on a bed, and running. For example, for walking and running the height only fluctuates slightly in either direction, while for the other activities mentioned above the direction is straight down or, most probably, diagonally down along the y-axis of the image. Therefore, the system can differentiate the activities by considering how far the height drops and the position of the joints (posture) after the change (the abnormal velocity) stabilizes. This process of classifying a human fall from other activities of daily life is illustrated in Fig. 2. As seen from Fig. 2, the algorithm executes stage 3 if it does not sense any abnormal change in velocity (from stage 1) or in height (from stage 2); there the system checks for any activity or movement. If movement is detected, the system displays the joint coordinates and starts the process all over. If no activity is detected in stage 3, or an unusual height change is observed in stage 2, the process activates the microphone array of the sensor to listen for any voice from a falling person, and the system then executes stage 4 after skipping 15 frames (half a second). In stage 4, the system computes the joint positions to predict the posture, and the algorithm checks whether the joints are below the knee joints (left knee and right knee) previously extracted in stage 1. This is because for activities like lying on a bed, the changes in velocity and height are similar to those during a fall, except that at the end the subject is lying on the bed. During a fall, all the joint heights should be below or equal to the knee joint height recorded before the detection of the abnormal change in velocity or height. For lying on a bed, the joint heights will probably be above the previous knee joint height because the subject is resting on the bed. If the joints are below the previous knee joint position, the system confirms a fall. This check is needed because a lying posture of the joints can also be observed in activities other than a fall, such as lying on a bed, or even a fall onto a bed, where the consequences will not be severe.

Some of the daily activities that are closely related to a human fall are shown in Table 2, along with the characteristics considered in the classification of the events.
Table 2 Daily activities closely related to a human fall and the characteristics considered in their classification.

Activity | Characteristics
Sitting on chair | Hip center and head vertical; angle formed from hip center to head and hip center to knee < 90°
Sitting on floor | Hip center and head vertical; hip center, left and right knee on floor
Lying on floor | Hip center and head horizontal; all joints below or equal to previous knee height
Lying on bed | Hip center and head horizontal; all joints above or equal to previous knee height
Fall | Velocity and height change are high; all joints below or equal to previous knee height
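The staged decision flow described above, including the posture check against the previously recorded knee height, can be sketched as follows. This is a simplified illustration only: the function name, thresholds and return labels are assumptions, not values from the paper.

```python
def classify_frame(velocity, prev_velocity, height, prev_height,
                   joint_heights, prev_knee_height,
                   velocity_jump=2.0, height_drop=0.5):
    """Simplified sketch of the staged classification:
    stage 1: look for an abnormal jump in body velocity;
    stage 2: check that the height dropped sharply (downward direction);
    stage 4: confirm a fall only if every joint has come to rest at or
             below the knee height recorded before the event."""
    if abs(velocity - prev_velocity) < velocity_jump:
        return "no potential fall"              # stage 3: keep monitoring
    if prev_height - height < height_drop:
        return "abnormal velocity, no height drop"
    if all(h <= prev_knee_height for h in joint_heights):
        return "fall"                           # lying at floor level
    return "lying on bed or similar"            # joints above knee height

# A sharp velocity jump, a large height drop, all joints below knee level:
print(classify_frame(3.0, 0.2, 0.4, 1.7, [0.3, 0.2, 0.25], 0.45))  # fall
```

The knee-height comparison mirrors the Table 2 distinction between lying on the floor (all joints below the previous knee height) and lying on a bed (joints above it).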
Ax + By + Cz + D = 0 (1)
Here: A = vFloorClipPlane.x
B = vFloorClipPlane.y
C = vFloorClipPlane.z
D = vFloorClipPlane.w
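The paper's follow-up equation for the subject's height (the distance from a joint to this floor plane) is not reproduced in this extract. A sketch using the standard point-to-plane distance, consistent with the plane of equation (1), is shown below; the function name and the sample joint values are illustrative assumptions.

```python
import math

def height_above_floor(joint, plane):
    """Perpendicular distance from a joint (x, y, z) to the floor plane
    Ax + By + Cz + D = 0, using the standard point-to-plane formula."""
    a, b, c, d = plane  # vFloorClipPlane coefficients from the sensor
    x, y, z = joint
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Example: a head joint 1.7 m above a horizontal floor plane y = 0,
# i.e. (A, B, C, D) = (0, 1, 0, 0):
print(height_above_floor((0.2, 1.7, 2.5), (0.0, 1.0, 0.0, 0.0)))  # 1.7
```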
system shown in Fig. 4, any movement to the right, upwards, or away from the sensor (on any axis) gives a positive value for the distance difference between the current and previous position. Similarly, any movement to the left, downwards, or towards the sensor gives a negative value for the distance difference. Using this concept, the direction is determined for all the movements shown in Fig. 4. If the movement is sideways (right or left) the x-axis is considered; if it is vertical (up or down) the y-axis is used; and the z-axis is used if the movement is towards or away from the sensor, as shown in Fig. 3a to 3c respectively.

Once the difference of the shoulder center position is determined, taking the direction of the movement into account, it is divided by the time taken for the movement to calculate the speed, as shown in equation (3). The time taken for the movement is 1/15 of a second, because the sensor generates 30 frames per second and the joint position is taken after skipping one frame (the time for two frames). For the magnitude component of the velocities, the same concept as for the speed (equation (3)) is used, except that the speed for the shoulder center and the hip center is also calculated together with the direction.

Speed = (Dc − Dp) / (tc − tp)  meter/second (3)

where Dc is the current distance (current joint coordinate), Dp is the previous distance (previous joint coordinate), tc is the current time in seconds and tp is the previous time in seconds.

If the direction is diagonal (irregular) to any side (any axis), the distance travelled cannot simply be calculated by subtracting the positions between two frames along one axis, because the change is not along that axis; if the change were treated as being along the axis, the computed distance would be less than the actual distance travelled. This distance can instead be calculated by treating the distance travelled as the hypotenuse of a right-angled triangle whose opposite and adjacent sides are formed by projecting the movement between the current and previous position of the particular joint onto the x-axis and y-axis. Here the actual distance travelled is the hypotenuse, and the other two sides of the triangle (opposite and adjacent) are formed by the straight movements along the two axes (x and y), depending on the direction of the movement. For example, if the direction is diagonally down or up with respect to the y-axis, the opposite and adjacent sides of the triangle are formed from the straight movements along the x-axis and y-axis, as shown in Fig. 3d. The formula used to calculate this distance (the hypotenuse of the triangle formed) is shown in equation (4); once the distance is calculated, the magnitude part of the velocity is obtained using equation (3).

D (distance change for irregular movements) = √((xc − xp)² + (yc − yp)²) (4)
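Equations (3) and (4) can be sketched together in a few lines. The function names and sample coordinates are illustrative; the 1/15 s interval follows from sampling every second frame at 30 fps, as described above.

```python
import math

FRAME_INTERVAL = 2 / 30  # every second frame at 30 fps = 1/15 s

def speed(d_current, d_previous, t_current, t_previous):
    """Equation (3): signed speed in m/s from the change in position
    along one axis between two sampled frames."""
    return (d_current - d_previous) / (t_current - t_previous)

def irregular_distance(current, previous):
    """Equation (4): distance for diagonal (irregular) movement, taken as
    the hypotenuse of the right triangle whose sides are the x- and
    y-axis components of the movement."""
    dx = current[0] - previous[0]
    dy = current[1] - previous[1]
    return math.hypot(dx, dy)

# A joint dropping 0.3 m on the y-axis over one sampling interval:
print(speed(0.9, 1.2, FRAME_INTERVAL, 0.0))        # ≈ -4.5 m/s (downwards)
# The same drop combined with a 0.4 m sideways shift:
print(irregular_distance((0.4, 0.9), (0.0, 1.2)))  # ≈ 0.5 m
```

Note how the sign of equation (3) carries the direction information discussed above, while equation (4) yields only a magnitude.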
Fig. 6 Screenshot of activities performed with corresponding depth images. (a) Standing; (b) bending and standing up; (c) walking across the sensor; (d) walking slowly across the sensor; (e) running; (f) walking around the sensor; (g) sitting down on the floor; (h) falling from standing; (i) sitting on a chair; (j) falling from a chair; (k) standing up.
The effectiveness of the algorithm in determining the direction of the movement is extremely important, because the variables used to calculate the parameters (used in fall detection) depend on the direction of the movement from the previous location to the new location. An incorrect direction can therefore lead to the calculation of a wrong parameter and thus bias the fall prediction. Two deliberate instances of lying down on the floor, from two different sides of the sensor's viewing angle and with almost the same duration, are shown in Fig. 8. The pattern described by legend (a) in Fig. 8 is lying on the floor while facing the sensor, and legend (b) shows the changes in height while lying on the floor viewed from the side. Legend (c) shows the difference in the drop of height, with a mean of just 4.185 centimeters between the two lying patterns. It can be concluded from Fig. 8 that the system is accurate in determining the direction of movements: the two instances of lying on the floor in Fig. 8 are at different views to the sensor, so the fall detection parameters must use different variables (the direction of the height change differs). The similarity of the height change for the two instances shows that the system considered the direction of the movement correctly.
On the other hand, Fig. 9 illustrates the changes of height for walking across the sensor view, around the sensor view, straight towards and away from the direction the sensor is pointed, and diagonally to the two sides of the sensor's view. From Fig. 9 and the statistics in Table 3, the system was able to calculate the height most accurately for walking straight towards and away from the direction the sensor is pointed. It can also be concluded that the accuracy dropped for movements across the sensor and movements diagonal to the two sides the sensor is pointed.

Table 3 Mean and standard deviation of the calculated height for walking in different directions.

Direction | Mean | STD
Across the sensor view | 176.4152 | 8.239622
Around the sensor view | 176.9429 | 7.523967
Diagonally to sensor | 185.7137 | 8.095673
Forth and back to sensor | 175.3316 | 4.645903
Fig. 9 Changes of height for walking in different directions relative to the sensor's view.
The results presented so far are enough to show that the proposed system is able to identify the direction of movements accurately. The direction identification is vital, because the parameters used in the system are calculated based on the direction of the movement with respect to the previous location. In order to verify that the developed system can precisely predict any fall movement, it is important to show that the parameters used in the fall detection process are calculated accurately. Fig. 10 shows the calculated height changes during a normal slow lying down on the floor from standing, together with the head, shoulder center and hip position changes on the y-axis of the image. It is clear from Fig. 10 that the calculated height was very accurate when compared with the actual position data.

From the experimental results, it was found that the one daily activity which is most similar to a fall is lying down on the floor. Fig. 11 illustrates the changes of height during a normal slow lying down on the floor and a fall onto the floor from standing, together with their respective changes in (downward) velocity. The changes in height are similar in nature except for the rate of change over time: falling to the floor took about 2.1 seconds, while lying down on the floor took nearly 3.5 seconds to come completely to rest on the floor.
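Given the reported durations (about 2.1 s for a fall versus nearly 3.5 s for deliberate lying down), the average rate of height change alone already separates the two events. A hypothetical sketch, where the 1.5 m drop is an assumed value and not a measurement from the paper:

```python
def descent_rate(start_height, end_height, duration):
    """Average rate of height loss in m/s over the whole event."""
    return (start_height - end_height) / duration

# Durations reported in the paper, with an assumed 1.5 m height drop:
fall_rate = descent_rate(1.7, 0.2, 2.1)    # ≈ 0.71 m/s
lying_rate = descent_rate(1.7, 0.2, 3.5)   # ≈ 0.43 m/s
print(fall_rate > lying_rate)  # True
```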
Fig. 11 Changes in height during fall and lying on floor with their velocity.
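The accuracy, sensitivity and specificity reported for the experiments in this section follow the standard confusion-matrix definitions. A minimal helper is sketched below; the example counts are illustrative, not the paper's trial numbers.

```python
def metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics:
    accuracy    = (TP + TN) / (TP + TN + FP + FN)
    sensitivity = TP / (TP + FN)   (true positive rate)
    specificity = TN / (TN + FP)   (true negative rate)"""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Example with no missed falls (sensitivity 1.0) and a few false alarms:
print(metrics(tp=10, tn=28, fp=2, fn=0))  # (0.95, 1.0, 0.9333...)
```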
Table 4 shows the confusion matrix of falls and non-falls from the experimental activities performed. It shows the overall number of trials conducted, with the predicted value for each of the activities. Accuracy, sensitivity and specificity are calculated using equations (7), (8) and (9) respectively.

Accuracy = (TP + TN) / (TP + TN + FP + FN) (7)

Sensitivity = TP / (TP + FN) (8)

Specificity = TN / (TN + FP) (9)

Table 4 The matrix for the average of fall and non-fall values.

The proposed system was able to classify fall movements from other daily activities with an average accuracy of 94.81%. The system was also able to gain a sensitivity of 100% with a specificity of 93.33%. The proposed system was able to accurately distinguish all fall movements from other activities of daily life, even though it failed to distinguish some instances of lying down on the floor from a fall. The velocity of the joints greatly helps to classify certain movements where the distance change pattern shows similar variation. The proposed system could be further improved by considering additional joints, especially in the final stage for fall confirmation. Furthermore, experiments can be done to see whether the velocity estimation can be improved by increasing the gap between joint coordinate extractions.

Acknowledgements