
Collaborative Fall Detection using a Wearable Device and a Companion Robot

Fei Liang, Ricardo Hernandez, Jiaxing Lu, Brandon Ong, Matthew Jackson Moore, Weihua Sheng, Senlin Zhang

Abstract— Older adults who age in place face many health problems and need care. Falls are a serious problem among elderly people. In this paper, we present the design and implementation of collaborative fall detection using a wearable device and a companion robot. First, we developed a wearable device by integrating a camera, an accelerometer and a microphone. Second, a companion robot communicates with the wearable device to conduct collaborative fall detection. The robot is also able to contact caregivers in case of emergency. The collaborative fall detection method consists of preliminary detection based on motion data on the wearable device and video-based final detection on the companion robot. Both a convolutional neural network (CNN) and long short-term memory (LSTM) are used for video-based fall detection. The experimental results show that the overall accuracy of the video-based algorithm is 84%. We also investigated the relation between the accuracy and the number of image frames. Our method improves the accuracy of fall detection while maximizing the battery life of the wearable device. In addition, our method significantly increases the sensing range of the companion robot.

Fig. 1: The concept of collaborative activity monitoring using a wearable device and a companion robot.

I. MOTIVATION

In recent years, we have witnessed a steady growth of the older adult population above the age of 65 [1]. This can be attributed to the fact that the baby boomer generation has reached this age group while people's life expectancy has increased in recent decades. Older adults face many health problems and have to find assistance from the younger generation or professional caregivers. In particular, falls have become a serious problem among older adults [2], usually resulting in injuries and sometimes even death. For financial reasons, it is not realistic to have a personal health practitioner monitor the state of everyone facing such health problems. Although many research efforts have been devoted to fall detection, accurately detecting a fall and providing medical care in real time is still very challenging.

In recent years, companion robots have been coming into our homes, which may provide a great opportunity to address the fall detection problem [3]. Companion robots can not only provide basic functions such as reading news, playing music and chatting, but also offer medical consultation and ask caregivers for help when older adults have an emergency. However, using cameras on the robot or in the home environment to monitor older adults' activities is intrusive and may cause significant privacy concerns. Therefore, we need to develop a new way for the robot to collect data to understand older adults' behaviors and their surroundings. We propose that a wearable multi-sensor device could alleviate users' privacy concerns, since the captured images are not of the user but of the surroundings [4]. In an independent living environment, this method can maximize privacy protection. In addition, when the wearable camera is integrated with other sensors such as a microphone and an accelerometer, the wearable device can collect more data that is useful for the robot, expanding its sensing capability in understanding the behavior of older adults.

Therefore, the objective of this paper is to propose, develop and test a collaborative activity monitoring system (CAMS) that combines a wearable device with a companion robot for elderly care. The current focus of activity monitoring is on fall detection. Fig. 1 illustrates the concept of collaborative activity monitoring, in which the robot works closely with the wearable device to understand the behavior of the older adult. In emergency situations, such as a fall, the robot will notify the healthcare providers or family caregivers through a cloud-based management system.

This project is supported by the National Science Foundation (NSF) Grants CISE/IIS 1910993 and EHR/DUE 1928711, the Joint Fund of the Chinese Ministry of Education for Pre-research of Equipment under Grant 6141A02022371, and the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2021B14).
Fei Liang, Ricardo Hernandez, Jiaxing Lu, Brandon T. Ong and Weihua Sheng are with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA (e-mail: weihua.sheng@okstate.edu). Matthew Jackson Moore is with the School of Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, OK 74078, USA. Senlin Zhang is with the State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China (e-mail: slzhang@zju.edu.cn).
This paper has three major contributions. First, it proposed and developed a collaborative strategy between a wearable device and a companion robot to understand human activities, which minimizes privacy intrusion while allowing the robot to maximize its sensing range. Second, a compact wearable activity monitoring unit (WAMU) is designed and implemented, which collects multi-modal data regarding the wearer and the environment and conducts preliminary fall detection based on motion data in a power-efficient way. Third, we developed, implemented and tested a vision-based fall detection algorithm by leveraging deep neural networks on the companion robot.

The rest of this paper is organized as follows: Section II introduces the related work. Section III presents the overall design of the CAMS. Section IV describes the design of the WAMU, the companion robot and the software. Section V presents the experimental setup, design and results. Section VI concludes our work and discusses future work.

II. RELATED WORK

A significant amount of work has been done recently in the field of fall detection using wearable devices. Several projects are based on a single sensor. Bourke et al. [5] used an accelerometer to collect data and developed a simple rule that classifies an event as a fall by comparing the magnitude of the three signals of a tri-axial accelerometer against a threshold. Torti et al. [6] focused on the implementation of recurrent neural network (RNN) architectures on microcontroller units (MCUs) for fall detection with tri-axial accelerometers. Shojaei et al. [7] demonstrated a vision-based detection method using LSTM to verify a fall with 3D joint skeleton features. However, a single sensor provides limited information and usually results in many false positives.

Multiple wearable sensors have also been used for fall detection, which usually offers higher accuracy. Hussain et al. [8] presented a system which includes a gyroscope and an accelerometer to minimize false positives in fall detection. Mao et al. [9] proposed a fall monitoring system in which a portable sensor collects data with a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer, then sends it to a mobile phone for further fall detection decisions. Ozcan et al. [4] developed a fall detection system using a wearable device equipped with cameras and accelerometers. They detected falls by analyzing the video and accelerometer data captured by a smartphone. As the smartphone needs to be attached to the waist with a belt, it is not human-friendly for older adults. In addition, the camera and accelerometer are independent and cannot communicate with each other, which may lead to unnecessary waste of energy due to the heavy computation on video data. Shahiduzzaman et al. [11] designed a smart helmet integrated with wearable cameras, accelerometers and gyroscope sensors, which is not convenient for older adults living at home. They provided a fall detection algorithm that processes both accelerometer and video data on the smart helmet.

Other researchers explored the use of both wearable sensors and environmental sensors for fall detection. For example, Martinez et al. [12] proposed a multimodal fall detection system which consists of wearable sensors, ambient sensors and vision sensors. They combined LSTM with CNN to detect a fall from raw data (sensor data and video data) in real time. However, fall detection based on environmental vision sensors causes significant privacy concerns.

Companion robots have been used to support elderly care in homes and care institutions [13]. Many companion robots have been developed for elderly care, including the Aibo [14] and Paro [15] robots. Equipped with various functions such as speech recognition, object recognition, and dialogue management, companion robots could act as avatars representing caregivers or as social companions [3], [16] that improve the quality of life of older adults through companionship and social interaction. A companion robot can offer proactive assistance when an emergency occurs, providing safety assistance such as confirming a fall and sending notifications. Do et al. [3] developed a fall detection system based on sound events on a companion robot. It could recognize various falling sounds, and after a fall is detected, the robot could connect to a remote caregiver for assistance. However, the shortcoming of this method is that the robot may not be able to hear the sound when the older adult is far from the robot.

III. OVERALL DESIGN OF COLLABORATIVE ACTIVITY MONITORING SYSTEM (CAMS)

As shown in Fig. 2, the CAMS consists of a WAMU, a companion robot, and a healthcare management system. The WAMU measures the acceleration of the user's motion to determine if a fall is occurring. A positive result triggers the capture of a sequence of images that are then sent via WiFi to the companion robot. The companion robot processes the images to further confirm or reject the fall. If confirmed, the robot begins its emergency protocol, engaging the user to determine if the caregivers should be notified. Audio is recognized through the Google Speech API, while the Rasa [17] module provides basic skills for older adults to use, such as weather, news, quotes, jokes, music, photo-taking, and wiki information. The WAMU and the companion robot are also in constant communication so that the user can still take advantage of the capabilities of the robot; a minimal sketch of this two-step flow is given after the design requirements below.

The design requirements of the WAMU are as follows:
1) The WAMU should collect the motion of the wearer and the video and audio data of the wearer's surrounding environment.
2) The wearable device should be lightweight and ergonomic for older adults and rechargeable with a reasonable battery life for daily use.
3) The WAMU should be able to communicate with the companion robot within the range of a typical home, which allows the wearable device and the robot to collaboratively understand the user's daily activities.
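To make the collaboration concrete, the following is a minimal sketch of the two-step decision flow described above; all names, thresholds and message texts are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of the two-step collaborative flow; all names,
# thresholds and message texts are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Optional

FALL_THRESHOLD_G = 3.0  # preliminary accelerometer threshold (Section IV-C)

@dataclass
class FallAlert:
    frames: List[bytes]  # JPEG frames captured after a suspected fall

def wamu_step(accel_magnitude_g: float,
              capture: Callable[[int], List[bytes]]) -> Optional[FallAlert]:
    """Runs on the wearable: cheap motion check first, camera only on demand."""
    if accel_magnitude_g <= FALL_THRESHOLD_G:
        return None  # normal motion: stay in low-power mode
    return FallAlert(frames=capture(10))  # 10-frame burst, as in Section V

def robot_step(alert: FallAlert,
               classify: Callable[[List[bytes]], str],
               notify: Callable[[str], None]) -> None:
    """Runs on the robot: confirm with the video model, then escalate."""
    if classify(alert.frames) == "fall":
        notify("Possible fall detected; please check on the user.")
```

The split keeps the expensive vision model off the battery-powered device: the wearable only pays for the camera and radio when its cheap motion check fires.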
Fig. 2: The overall design of the CAMS.

IV. HARDWARE AND SOFTWARE DESIGN

A. Wearable Activity Monitoring Unit

Fig. 3: The design of the Wearable Activity Monitoring Unit (WAMU).

Fig. 3 shows the design of the WAMU, which has three parts: the circuit board, the battery and the housing.

1) Circuit Board: As shown in Fig. 3, the board consists of an ESP32 development board kit, a 3-axis accelerometer, and a digital MEMS microphone. The ESP32-CAM is a small embedded computing module that incorporates an ESP32-S chip, an OV2640 CMOS image sensor, and an external 4 MB PSRAM. This module was chosen for its compactness, its number of available pins, and its many other functionalities. Since the WAMU is worn by older adults for sustained periods of time, a lightweight design with a small form factor is desired. The general-purpose IO pins allow multiple peripheral devices to be connected to the ESP32-CAM. The chip also integrates 802.11b/g/n WiFi, allowing the board to connect to a server running on the companion robot. Finally, with the extended memory capacity, large blocks of data such as images and audio can be acquired and stored on the board for preprocessing and transmission. The ADXL345 digital accelerometer is adopted as the motion sensor; it is a small, ultra-low-power, 3-axis accelerometer with a high resolution of 13 bits and a measurement range of up to 16 g. Its sensitivity and resolution are sufficient for detecting falls. The I2C communication protocol is used for the ADXL345 to talk to the ESP32-CAM at a rate of 800 Hz. Finally, the audio input device is an INMP441 omnidirectional MEMS microphone. This chip provides a high-performance, low-power digital output using the industry-standard I2S protocol.

2) Battery: As shown in Fig. 3, the battery fitted to the circuit board is a lithium-ion polymer battery, which provides a thin, lightweight design with a capacity of 2500 mAh. The output of the battery ranges from 4.7 V at full charge down to 3.7 V. This voltage is not compatible with the ESP32-CAM, which requires a supply voltage of around 5 V; to accommodate this, a voltage booster is used to raise the voltage to 5 V. Along with the booster, the board is equipped with a charging circuit that takes a USB-C connection and charges the battery at a maximum rate of 1000 mA.

It is necessary to have an estimate of the battery life. The component with considerable power draw is the ESP32-CAM, which consumes a maximum of 240 mA when transmitting data. The other components have very small power consumption, so only the ESP32-CAM is considered in estimating the battery life. Assuming 15 false positives occur every hour, the average current draw would be around 82.5 mA, which results in a battery life of about 30 hours (2500 mAh / 82.5 mA ≈ 30 h) and is sufficient for supporting daily use.
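As a sanity check, this estimate can be reproduced with a simple duty-cycle calculation. The idle draw and per-event active time below are assumptions chosen to land near the paper's stated 82.5 mA average, which the text does not break down further.

```python
# Duty-cycle battery estimate for the WAMU. The idle draw and per-event
# active time are assumptions, not figures from the paper.
BATTERY_MAH = 2500.0       # LiPo capacity stated above
PEAK_MA = 240.0            # ESP32-CAM draw while transmitting
IDLE_MA = 50.0             # assumed draw between events
EVENTS_PER_HOUR = 15       # false-positive rate assumed in the paper
ACTIVE_S_PER_EVENT = 42.0  # assumed capture + WiFi time per event

duty = EVENTS_PER_HOUR * ACTIVE_S_PER_EVENT / 3600.0  # fraction of time active
avg_ma = duty * PEAK_MA + (1.0 - duty) * IDLE_MA      # ~83 mA with these numbers
print(f"average draw ~{avg_ma:.1f} mA")
print(f"battery life ~{BATTERY_MAH / avg_ma:.0f} h")  # 2500 / 82.5 ~ 30 h
```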
Fig. 4: The overview of the WAMU housing.

Fig. 5: A person wearing the wearable activity monitoring unit (WAMU).

Fig. 6: The prototype of the ASCC companion robot.


3) WAMU Housing: Many methods of housing the wearable device hardware were investigated. Some of the designs include a solid-frame necklace, a headset mount, and various corded necklace designs. After hands-on experimentation and rapid prototyping, the corded necklace proved to be the most adaptable and user-friendly. The chosen design houses all electrical components in one case, as shown in Fig. 4. The case has both front and back parts, with cutouts for the camera lens (front) and the charging port (bottom). During the prototyping phase, the housing can be opened and closed easily. A flexible cord is used to suspend the device around the user's neck; compared to a solid frame, the cord allows the device to fit any user. Fig. 5 shows a person wearing the WAMU along with the inside, front and back views of the WAMU.

B. Companion Robot

Fig. 6 shows the ASCC Companion Robot [16] developed in our lab. It consists of three main parts: the head, the body and the power base. The robot head comprises a vision system with an Intel RealSense RGB-D camera, an auditory system with four microphones for speech recognition and sound localization, and a touch screen connected to an ARM-based board running Android OS, used for user interfacing. An Intel NUC with a Core i5 processor is hosted in the robot body, which facilitates speech recognition, video-based fall detection and other skills, as well as the capability to communicate with caregivers or family members when the older adult is in an emergency situation.

C. Software

Fig. 7: The software flowchart of the proposed system.

1) Software Flowchart: The software flowchart is shown in Fig. 7. It can be divided into two parts: the wearable device part and the robot part. In the wearable device part, acceleration data are collected by the 3-axis accelerometer and processed locally on the device. If the magnitude of the 3-axis acceleration vector is greater than the threshold, a potential fall event is declared and the camera function is triggered. As a result, a sequence of images during the fall is captured and sent to the robot. On the robot, after the fall is verified, an alarm is sent to the caregivers or family members. In addition, the robot communicates with the older adult through natural language: it asks the older adult "Are you OK?" and waits for the response to make further decisions.
2) Data Communication: The WAMU is connected to the robot using TCP/IP sockets. In order to save energy, the WiFi turns on only when a possible fall event is detected; then the image data are sent to the robot. After receiving the response from the robot, if the user needs to respond to the robot by audio, the microphone data are collected and sent through the same connection. After that, the connection is closed and the WiFi turns off.
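The exchange described above might look like the following sketch on the WAMU side; the robot's address and the length-prefixed framing are illustrative assumptions (the real firmware runs on the ESP32, but the logic is the same).

```python
# A sketch of the WAMU-to-robot exchange. The address and framing are
# illustrative assumptions, not the authors' protocol.
import socket
import struct
from typing import List

ROBOT_ADDR = ("192.168.1.50", 9000)  # assumed address of the robot's server

def send_fall_frames(frames: List[bytes]) -> bool:
    """Open the connection only after a suspected fall, then close it so
    the WAMU's WiFi can be powered down again."""
    with socket.create_connection(ROBOT_ADDR, timeout=10) as sock:
        for jpeg in frames:
            sock.sendall(struct.pack("!I", len(jpeg)))  # 4-byte length prefix
            sock.sendall(jpeg)
        sock.sendall(struct.pack("!I", 0))              # zero length marks the end
        verdict = sock.recv(1)                          # robot's confirm / reject
    return verdict == b"\x01"
```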
3) Fall Detection Algorithms: Fall detection is implemented in two steps. The first step is a preliminary classifier that runs on the WAMU using the accelerometer data. The second step runs on the robot, which receives a sequence of images from the WAMU and classifies them using a recurrent neural network (RNN). The RNN-based classifier provides more accurate classification and reduces possible false positives.

For the accelerometer-based preliminary fall detection, a simple power-efficient algorithm is used, in which we compare the magnitude of the 3-axis acceleration vector against a predefined threshold. Following [5], we set the threshold to 3 g, which is sufficient to detect falls.
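A minimal sketch of this preliminary check, assuming ADXL345 readings already scaled to units of g:

```python
# The preliminary check on the WAMU, assuming readings already in g.
import math

FALL_THRESHOLD_G = 3.0  # threshold chosen following [5]

def is_potential_fall(ax: float, ay: float, az: float) -> bool:
    """Compare the magnitude of the 3-axis acceleration vector against the
    predefined threshold; True triggers the camera."""
    return math.sqrt(ax * ax + ay * ay + az * az) > FALL_THRESHOLD_G
```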
For the video-based algorithm, an RNN is a type of artificial neural network whose node connections form a directed graph along a temporal sequence. Compared with a convolutional neural network (CNN), the RNN model is influenced by previous features, which gives it a memory of previous actions and the ability to classify actions that span consecutive image frames. We built an architecture for video classification based on both a CNN and long short-term memory (LSTM). We use ResNet152 as the CNN to extract features for each frame, compressing each frame from an image to a vector. The last fully-connected classification layer is removed, so the result is a compressed vector that contains the features of the original image and can be analyzed by the following RNN.

Fig. 8: The modules of LSTM and RNN.

The LSTM is an artificial RNN architecture [18]. As shown in Fig. 8, the LSTM architecture is more complicated than a traditional RNN. The RNN concatenates the past state and the current state and controls the output through the tanh function, while the LSTM contains additional input and output gates; thus, instead of only having memory of the most recent state like the RNN, the LSTM has long-term memory. After obtaining the processed feature vector from ResNet, an LSTM with three hidden layers analyzes the vector sequence and derives the classification result.
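A condensed PyTorch sketch of this classifier is shown below; the hidden size and the use of the last time step for classification are illustrative assumptions, and in practice the ResNet152 backbone would be pretrained rather than randomly initialized.

```python
# Sketch: ResNet152 with its final fully-connected layer removed compresses
# each frame to a 2048-d vector; a 3-layer LSTM classifies the sequence.
import torch
import torch.nn as nn
from torchvision import models

class FallClassifier(nn.Module):
    def __init__(self, num_classes: int = 6, hidden: int = 512):
        super().__init__()
        resnet = models.resnet152(weights=None)  # load pretrained weights in practice
        # Drop the classification layer; keep the 2048-d pooled features.
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])
        self.lstm = nn.LSTM(2048, hidden, num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t = clips.shape[:2]                            # (batch, frames, 3, H, W)
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*t, 2048)
        out, _ = self.lstm(feats.view(b, t, -1))          # (b, t, hidden)
        return self.head(out[:, -1])                      # classify the last step

logits = FallClassifier()(torch.randn(2, 10, 3, 224, 224))  # two 10-frame clips
```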
4) Alarm Management: As falls may result in serious injuries among the elderly [19], timely intervention by caregivers can play a vital role. We chose the open-source mobile app Telegram [20] as the communication channel between the companion robot and the caregivers. With a mobile device, caregivers or family members can receive Telegram messages when older adults are in trouble (e.g., a fall or a request for help). The message type can be text, image or audio, which provides abundant information for further decisions.
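As a sketch, an alarm of this kind can be pushed through the Telegram Bot HTTP API; the bot token and chat id below are placeholders, and the exact integration used on the robot is not specified in the paper.

```python
# A sketch of the alarm path via the Telegram Bot HTTP API.
import requests
from typing import Optional

BOT_TOKEN = "<bot-token>"        # placeholder, provisioned for the robot
CAREGIVER_CHAT_ID = "<chat-id>"  # placeholder, the caregiver's chat

def send_fall_alarm(text: str, photo_path: Optional[str] = None) -> None:
    """Push a text alarm, optionally with an image frame from the WAMU."""
    base = f"https://api.telegram.org/bot{BOT_TOKEN}"
    requests.post(f"{base}/sendMessage",
                  data={"chat_id": CAREGIVER_CHAT_ID, "text": text}, timeout=10)
    if photo_path:
        with open(photo_path, "rb") as f:
            requests.post(f"{base}/sendPhoto",
                          data={"chat_id": CAREGIVER_CHAT_ID},
                          files={"photo": f}, timeout=10)
```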
V. EXPERIMENTAL EVALUATION

A. Test setup

We implemented the fall detection system using the proposed WAMU and the existing companion robot. The ESP32-CAM board has a dual-core 160 MHz CPU and 520 KB SRAM plus 4 MB PSRAM. The resolution of the captured images is 640 by 640 and the frame rate is between 15 and 60 fps. Meanwhile, the accelerometer-based fall detection algorithm runs on the board, and the accelerometer data has a sampling frequency of 400 Hz. The robot runs on a 4-core Intel i5 CPU with 8 GB memory and the OS is Ubuntu 16.04. As shown in Fig. 5 and Fig. 9, in order to collect data and validate the algorithms, we set up a mattress in our smart home testbed. Human subjects participating in the experiment fell onto the mattress. Before the test, IRB approval was obtained from our university.

Fig. 9: The smart home testbed and the testing scenario.

B. Experiment and Results

The wearable camera captures images and sends them to the robot when it detects abnormal acceleration data. The fall detection program on the robot first compresses every image into a vector. The RNN then receives a sequence of input vectors representing the frames. A long short-term memory (LSTM) network is used to train the RNN model. In order to test the effectiveness of the RNN-based fall detection system,
as Fig. 5 shows, we collected data from four male volunteers aged between 24 and 31. They were asked to complete each action in Table I. Each video represents one action and consists of 10 frames, as shown in Fig. 10.

Fig. 10: A sample sequence of 10 consecutive images collected during a fall.

The wearable device captures images of the environment instead of the user, which helps reduce privacy concerns. Six actions are listed for classification, as shown in Table I. Hence, a total of 600 sets of experimental data were collected, with 100 sets for each action. We divided the dataset, which has a total of 6000 images, into three parts: a training dataset, a validation dataset, and a test dataset, with a ratio of 3:1:1 (a sketch of this split follows).
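```python
# A sketch of the 3:1:1 split over the 600 clips, using stand-in tensors of
# per-frame features (as in the classifier sketch above) in place of the
# real recordings.
import torch
from torch.utils.data import TensorDataset, random_split

feats = torch.randn(600, 10, 2048)  # 600 clips x 10 frames x 2048-d features
labels = torch.arange(600) % 6      # six actions, 100 clips each (Table I)
full = TensorDataset(feats, labels)

train, val, test = random_split(
    full, [360, 120, 120],                       # the paper's 3:1:1 ratio
    generator=torch.Generator().manual_seed(0))  # reproducible split
```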
In order to determine a suitable number of frames for fall detection, we compared the accuracy of fall detection using different numbers of frames.

TABLE I: The dataset.

Action         Samples   Number of Frames
Walk           100       1000
Sit down       100       1000
Fall forward   100       1000
Fall backward  100       1000
Fall left      100       1000
Fall right     100       1000
Total          600       6000

TABLE II: Confusion matrix of detection using ResNet + RNN (rows: predicted classes; columns: real classes).

Predicted \ Real   Backward  Forward  Left    Right   Sit down  Walk
Backward           0.9434    0.0138   0.0446  0.0235  0.0290    0.0032
Forward            0.0081    0.8489   0.0250  0.0451  0.0159    0.0021
Left               0.0232    0.0277   0.7902  0.1294  0.0654    0.0284
Right              0.0182    0.0723   0.1027  0.7373  0.0813    0.0221
Sit down           0.0020    0.0255   0.0232  0.0147  0.7935    0.0000
Walk               0.0051    0.0117   0.0143  0.0500  0.0150    0.9442

The confusion matrix of our proposed approach on the test dataset is shown in Table II. The overall accuracy is 84%, as Fig. 11 shows, and it is obtained at the 53rd epoch. As shown in the table, the accuracy of some classes, including falling backward and walking, reaches 94%. However, the accuracy for falling left, falling right, and sitting down is not as good as that of the other classes.

Fig. 11: Accuracy of ResNet+RNN with respect to the number of image frames used.

From Fig. 11, we find that with 10 frames the accuracy of fall detection is 84%. When the number of frames is less than 5, the accuracy is too low to verify a fall. The accuracy with 7 frames is very close to the accuracy with 10 frames, but using 7 frames saves 3 frames per event, which leads to lower power consumption in communication and computation.

VI. CONCLUSION AND FUTURE WORK

In this paper, we presented the design and implementation of a collaborative activity monitoring system (CAMS) based on a wearable device and a companion robot. We used fall detection as a case study to evaluate the CAMS. The collaborative fall detection method consists of two steps: accelerometer-based fall detection on the WAMU and video-based fall confirmation or rejection on the companion robot. The accelerometer-based detection algorithm is energy-efficient, which extends the battery life of the wearable device. The video-based algorithm combines a CNN and LSTM. The experimental results show that the overall accuracy of fall detection using the video-based algorithm is 84%, while for some classes of activities, including falling backward and walking, the accuracy reaches 94%. We also analyzed the accuracy of fall detection using different numbers of image frames, which achieves a tradeoff between accuracy and power consumption. The results reported here are preliminary but promising. We need to further improve the whole system, particularly in the following aspects: 1) Energy consumption: we will conduct a more thorough investigation of the power consumption of the WAMU and optimize both hardware and software to further reduce it. 2) Fall detection: we will collect more data from more realistic falls and study the impact of various factors on fall detection accuracy, including the speed and location of the fall. 3) WAMU design: we will further improve the ergonomics of the WAMU to make it more human-friendly.
REFERENCES
[1] J. Zhao and X. Li, “The status quo of and development strategies for
healthcare towns against the background of aging population,” Journal
of Landscape Research, vol. 10, no. 4, pp. 41–44, 2018.
[2] T. Nguyen Gia, V. K. Sarker, I. Tcarenko, A. M. Rahmani, T. Westerlund, P. Liljeberg, and H. Tenhunen, "Energy efficient wearable sensor node for IoT-based fall detection systems," Microprocessors and Microsystems, vol. 56, pp. 34–46, 2018. [Online]. Available: https://doi.org/10.1016/j.micpro.2017.10.014
[3] H. M. Do, M. Pham, W. Sheng, D. Yang, and M. Liu, “RiSH: A robot-
integrated smart home for elderly care,” Robotics and Autonomous
Systems, vol. 101, pp. 74–92, 2018.
[4] K. Ozcan and S. Velipasalar, “Wearable Camera- and Accelerometer-
Based Fall Detection on Portable Devices,” IEEE Embedded Systems
Letters, vol. 8, no. 1, pp. 6–9, 2016.
[5] A. K. Bourke, J. V. O’Brien, and G. M. Lyons, “Evaluation of a
threshold-based tri-axial accelerometer fall detection algorithm,” Gait
and Posture, vol. 26, no. 2, pp. 194–199, 2007.
[6] E. Torti, A. Fontanella, M. Musci, N. Blago, D. Pau, F. Leporati,
and M. Piastra, “Embedded real-time fall detection with deep learning
on wearable devices,” Proceedings - 21st Euromicro Conference on
Digital System Design, DSD 2018, pp. 405–412, 2018.
[7] A. Shojaei-Hashemi, P. Nasiopoulos, J. J. Little, and M. T. Pourazad, "Video-based human fall detection in smart homes using deep learning," Proceedings - IEEE International Symposium on Circuits and Systems, vol. 2018-May, pp. 0–4, 2018.
[8] F. Hussain, F. Hussain, M. Ehatisham-Ul-Haq, and M. A. Azam,
“Activity-Aware Fall Detection and Recognition Based on Wearable
Sensors,” IEEE Sensors Journal, vol. 19, no. 12, pp. 4528–4536, 2019.
[9] A. Mao, X. Ma, Y. He, and J. Luo, “Highly portable, sensor-based
system for human fall monitoring,” Sensors (Switzerland), vol. 17,
no. 9, 2017.
[10] A. Pinto, “Wireless Embedded Smart Cameras: Performance Analysis
and their Application to Fall Detection for Eldercare,” p. 122, 2011.
[11] K. M. Shahiduzzaman, X. Hei, C. Guo, and W. Cheng, “Enhancing
Fall Detection for Elderly with Smart Helmet in a Cloud-Network-
Edge Architecture,” 2019 IEEE International Conference on Consumer
Electronics - Taiwan, ICCE-TW 2019, pp. 2019–2020, 2019.
[12] L. Martinez-Villaseñor, H. Ponce, and K. Perez-Daniel, "Deep learning for multimodal fall detection," Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, vol. 2019-October, pp. 3422–3429, 2019.
[13] I. Anghel, T. Cioara, D. Moldovan, M. Antal, C. D. Pop, I. Salomie,
C. B. Pop, and V. R. Chifu, “Smart environments and social robots
for age-friendly integrated care services,” International Journal of
Environmental Research and Public Health, vol. 17, no. 11, 2020.
[14] Aibo. [Online]. Available: http://www.sony-aibo.com/
[15] Paro. [Online]. Available: http://www.parorobots.com/
[16] F. Erivaldo Fernandes, H. M. Do, K. Muniraju, W. Sheng, and A. J.
Bishop, “Cognitive orientation assessment for older adults using social
robots,” in 2017 IEEE International Conference on Robotics and
Biomimetics (ROBIO), 2017, pp. 196–201.
[17] Rasa. [Online]. Available: https://rasa.com/
[18] Understanding LSTM Networks. [Online]. Available: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
[19] M. Alwan, P. J. Rajendran, S. Kell, D. Mack, S. Dalal, M. Wolfe, and R. Felder, "A smart and passive floor-vibration based fall detector for elderly," in 2nd International Conference on Information & Communication Technologies, 2006, pp. 1003–1007.
[20] Telegram. [Online]. Available: https://telegram.org/
