Toward AI-Assisted UAV for Human Detection in Search and Rescue Missions
Abstract—Search and rescue missions during and after disasters demand substantial effort and high financial expenses. Rapid locating of wounded and lost individuals helps direct the rescuers and medical teams; this may increase the probability of saving human lives and plays a significant role in reducing expenses. Currently, the use of unmanned aerial vehicles (UAVs), or drones, for remote surveillance and reconnaissance is becoming increasingly popular. At the same time, emergent artificial intelligence (AI) algorithms based on convolutional neural networks (CNNs) have demonstrated the ability to detect objects in real time. Combining the high-performance detection and classification capabilities provided by emergent AI techniques with the exploratory abilities of UAVs allows a UAV to process the captured sequence of images and report results back in real time. The evolution of AI-assisted UAVs enables the detection of wounded and trapped persons while flying and allows the proper and fast transfer of information to ground stations, leading the rescuers and medical teams to victims' locations. In this paper, we explore augmenting UAVs with processing units that execute emergent AI-based detectors. The proposed system can detect humans in real time and send the corresponding coordinates to the ground station.

Index Terms—UAV, human detection, search and rescue missions, artificial intelligence, YOLO

This work was supported by a project funded by the Lebanese University.

I. INTRODUCTION

Nowadays, government authorities and non-governmental organizations seek to increase their capabilities to rescue wounded individuals during crises. One of the most dangerous threats to an injured person is a delayed response by the medical staff. The time since injury matters because it influences medical management: the longer the delay between injury and medical action, the greater the risk of complications [1]. In general, the delay in providing proper medical treatment stems from the lack of information about the position of the injured individuals and their current medical situation. During rescue and search missions, significant efforts, financial expenses, and time are invested to save human lives. Despite the fact that the UN-affiliated International Search and Rescue Advisory Group (INSARAG) has set various standards for rescue and saving operations [2], records show that hundreds of first responders are involved at random to save a single human life without any direction or guidance. This situation reduces the efficiency and efficacy of the operations and may adversely affect the wounded, trapped or lost individuals.

One promising method to ensure successful and efficient search and rescue missions is the use of geospatial imaging. In this context, unmanned aerial vehicles (UAVs) have been proposed to provide geospatial imaging. A UAV is an aircraft that is controlled remotely without the need for an on-board human pilot, and current UAVs can navigate in fully or partially autonomous modes [3]. The main use of UAVs has traditionally been in military applications, but nowadays UAVs are being used in commercial, scientific, recreational, security, agricultural, power-line inspection and other civilian applications [4] [5] [6]. This rapid expansion into civilian fields beyond surveillance and aerial photography is due to the ability of UAVs to carry a multitude of equipment and to navigate for long periods. Employing UAVs equipped with electro-optical sensors, real-time processing modules and advanced communication systems offers a novel, low-cost solution to enhance the current capabilities of government authorities and rescue agencies in detecting and locating wounded and lost individuals during and after disasters. The use of UAVs in search and rescue missions has a great impact in decreasing the efforts, financial expenses and time invested to save human lives.

In computer vision, several techniques based on feature extraction have been widely used for object detection; histogram of oriented gradients (HOG) [7] and scale-invariant feature transform (SIFT) [8] are notable examples. Currently, artificial intelligence (AI) is the key trend for object detection. Neural networks (NNs) have been exploited to implement computer vision algorithms using the machine learning concept, and several NN-based algorithms have been proposed for object detection, where objects are classified and then localized within the image by drawing a bounding box around the object of interest. Recently, convolutional neural networks (CNNs) have been deployed to classify images. Image classification using a CNN assigns a label to an input image from a fixed set of predefined classes.

Mainly, an image is labeled according to the most dominating object in the image. However, images could include multiple objects; hence, one label is not sufficient. In addition, detection imposes localizing the objects within the image: a spatial region (coordinates and size) is required for each object. Several techniques have been proposed to make use of CNNs in object detection. These techniques detect multiple objects in images while generating the information of their bounding boxes. Object detection methods are classified into two categories: (1) one-stage models and (2) two-stage models. In two-stage models, region proposals are determined in the first stage, and image classification is performed in the second stage.
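To make the distinction above concrete, the following minimal Python sketch (our illustration, not taken from the paper; all names and values are hypothetical) contrasts a single whole-image classification label with a list of detection records, each carrying its own spatial region:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str    # class from the fixed set of predefined classes
        score: float  # classification confidence
        x: float      # top-left corner of the bounding box (pixels)
        y: float
        w: float      # bounding-box width (pixels)
        h: float      # bounding-box height (pixels)

    # Classification assigns one label to the whole image...
    classification_output = "person"

    # ...whereas detection returns one record per object, each with
    # its own coordinates and size.
    detection_output: List[Detection] = [
        Detection("person", 0.91, x=120.0, y=64.0, w=38.0, h=95.0),
        Detection("person", 0.78, x=310.0, y=80.0, w=35.0, h=90.0),
    ]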
In [9], a sliding-window approach has been utilized in the region-proposal stage; classifiers are then applied to windows at different locations and scales. In [10], the region-based convolutional neural network (R-CNN) has been introduced to detect objects in images. The R-CNN method generates about 2000 region proposals per image using the selective search method [11], which reduces the number of regions compared with sliding-window methods. However,
after running classification in all regions, post-processing is
needed to refine the bounding boxes, eliminate duplicates
and adjust the detection scores. These operations increase the
complexity of the method. R-CNN also suffers from high memory and time costs during training, as well as from slow detection. Enhanced versions of R-CNN have been introduced, such as Fast R-CNN and Faster R-CNN [12], which form a leap towards real-time detection.
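The duplicate-elimination step mentioned above is typically implemented with non-maximum suppression (NMS) driven by the intersection-over-union (IoU) of candidate boxes. A minimal sketch of that idea follows (our illustration, not code from the paper), with each box given as (x1, y1, x2, y2, score):

    from typing import List, Tuple

    Box = Tuple[float, float, float, float, float]  # x1, y1, x2, y2, score

    def iou(a: Box, b: Box) -> float:
        """Intersection-over-union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def nms(boxes: List[Box], iou_threshold: float = 0.5) -> List[Box]:
        """Keep the highest-scoring boxes and drop overlapping duplicates."""
        kept: List[Box] = []
        for box in sorted(boxes, key=lambda b: b[4], reverse=True):
            if all(iou(box, k) < iou_threshold for k in kept):
                kept.append(box)
        return kept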
In [13], You Only Look Once (YOLO) has been introduced. Instead of exploiting available classifiers to detect objects at various locations in an image, the YOLO technique treats object detection as a single regression problem, going straight from image pixels to bounding-box coordinates and class probabilities [13]. It does not iterate the classification over different regions of an image. Indeed, YOLO forms a unified model of all phases in a single neural network. The input image, including the object(s) to be detected, passes forward through a single neural network of multiple layers. The method directly produces predictive vectors corresponding to each object occurring in the input image, including the corresponding
coordinates of its bounding box and class probabilities. Since it was released [13], several versions of YOLO have been introduced. In [14], YOLOv2 has been proposed as an improved model of YOLO, replacing the backbone network with the simpler Darknet-19 and eliminating the fully connected layers at the end. In [15], the authors have introduced YOLOv3, another improved detector based on YOLO. The feature extractor in YOLOv3 adopts a hybrid architecture based on Darknet-53 and residual networks (ResNet). Also, YOLOv3 makes detections at three different scales to enable the detection of small objects. YOLOv3 is a fully convolutional neural network made up of 1×1 and 3×3 convolution filters. It maintains a fast processing speed while providing remarkable accuracy for objects of varying sizes. Fig. 1 shows the network architecture of YOLOv3 [16]. YOLOv3 has been verified in different works targeting various applications, and the obtained results show that YOLOv3 ensures real-time detection while achieving remarkable accuracy and precision.

Fig. 1. The network architecture of YOLOv3 [16]

In our work, we experimented with YOLOv3 to detect humans in aerial images. The obtained results show its relevance in detecting human bodies in real time while providing acceptable accuracy. Accordingly, we propose an airborne system that alerts the rescue teams: once a human body is detected, the system sends a notification message to the ground station. The message includes the coordinates of the location and the number of detected humans. Fig. 2 depicts the overview of the proposed system.

Fig. 2. Overview of the proposed system

The remainder of this paper is organized as follows. Section II states the motivations for using UAVs in search and rescue missions. Section III provides a brief survey of related research works. Section IV presents the proposed method, the conducted experiments and the obtained results. Finally, the conclusion is presented in Section V.

II. MOTIVATIONS

Search and rescue missions that depend on human rescuers may not be sufficient in many cases, for several reasons. First, the wounded persons are mainly not precisely located. Also, the number of these persons is unknown to the rescue team, as are their current medical situation and the medical tools required for treatment. On the other hand, covering a huge affected area requires additional time for human rescuers, especially in regions with poor weather and uneven surfaces. Also, the environment of the affected area may form a serious threat to the rescuers, such as the presence of toxic gases, fires, radiation, high temperature, biohazards, etc. This endangers the ground rescuers' lives and causes mental and physical fatigue [17].

Compared to other vehicles, UAVs are characterized by their cost-effectiveness, ease of use and maintenance, affordability and
availability in the markets, in addition to their easy and fast accessibility to the disaster regions [18]. The cost of acquiring manned aerial vehicles is prohibitively high; their operation necessitates highly trained personnel, and their use requires costly preventive and corrective maintenance to ensure the safety of the aerial vehicle and the pilot. Furthermore, very severe restrictions apply to take-off and landing areas, which are frequently located far from the search and rescue area. Ground vehicles have also been proposed for use in search and rescue missions [19] [20] [21]. Powering ground vehicles adds an overhead in terms of the size, weight and cost of batteries. Their usability depends on the characteristics of the region, as the inclination and land type affect the movement of ground robots; they suffer from limited mobility and face difficulties in overcoming obstacles and barriers, so ground robots are not guaranteed to be able to move outside urban areas. Also, a ground robot requires hard real-time control by the operator, as a small delay in the transmission of control commands causes the robot to drift, which may damage the robot or its surroundings. In addition, the human operator should be provided with real-time video of the robot's surrounding environment, which forms a bottleneck in terms of transmission power and bandwidth.

In addition, the use of UAVs has been proved efficient for medical missions. Many research works have addressed using UAVs to deliver medication to rural regions and to transport medical products across long distances, including blood derivatives and pharmaceuticals, human blood samples [18] [22], microbiological specimens and biological samples [23] [24], and defibrillators to out-of-hospital cardiac arrest victims [25]. The use of UAVs equipped with electro-optical sensors and processing modules could help to overcome the current vulnerabilities of traditional man-dependent methods and those of manned aerial vehicles or ground robots. In fact, it could play an advanced role in providing real-time, on-demand and reliable information, whose pace of collection and processing is critical in rescue and search missions.

On the other hand, human detection using AI techniques has been addressed in several works. In [26], the authors have proposed a new human detection approach based on Fourier and histogram of oriented gradients (HOG) descriptors using CUDA. A support vector machine (SVM) classifier has been used to integrate both types of feature descriptors to achieve the best performance. In [27], the authors have proposed the use of a CNN-based algorithm to detect and localize human positions in surveillance video footage. The experimental results show an accuracy level of 76.4%, with a processing time of 0.145 seconds per image; at 15 fps, this amounts to at least 2 seconds of processing time per second of video footage. The authors in [28] have proposed a combination of intensity histograms, local binary patterns (LBP), and HOG to detect humans and animals. The method suggested in [28] develops a rapid deep-learning classification system using a deep convolutional neural network (DCNN).

III. RELATED WORK

Many research works have been conducted to prove the feasibility of using UAVs equipped with sensing techniques in rescue and search operations. In [29], a self-adaptive image-matching system has been presented for processing UAV videos in real time for fast natural disaster response. In [30], the authors have proposed a method for gathering information about the damaged area after a natural disaster. The method comprises capturing images in real time, labeling their position and altitude, and sending the images with the flight parameters to the ground control station (GCS) to construct a three-dimensional hazard map. In [31], a brief analysis of the wilderness search and rescue problem, with emphasis on practical aspects of vision-based aerial search, has been presented; as part of this analysis, the authors have presented and analyzed a generalized contour search algorithm. In [32], the authors have presented a sensor suite that integrates video and still visible-spectrum cameras deployed on a helicopter UAV. Human body detection and geo-localization for UAV search and rescue missions, using positioning algorithms over color and thermal (visible and infrared) imagery, have been presented in [33]. In addition, a vision-based human detection algorithm for UAV imagery has been explored in [34]. In [35], the task of automatically finding people lying on the ground in images taken from the on-board camera of a UAV has been addressed; the authors have evaluated various state-of-the-art visual people-detection methods in the context of vision-based victim detection from a UAV. The well-known cascaded Haar classifiers have been explored in [36] for real-time human detection on UAV imagery, where detection has been proven using a combination of thermal and color images. In [37], a quadcopter equipped with a light-weight microwave detector has been introduced to detect living humans trapped under rubble. In [38], multiple airborne vehicles and ground platforms have been used to complete the EU-ICARUS project, which has been hailed as one of the most reliable geospatial data delivery systems for rescue operations. In [39], the authors have proposed a camera-based position-detection system for SAR missions: a fully autonomous fixed-wing UAV has been built using an all-in-one camera-based target recognition and positioning system. In [40], the authors have demonstrated the use of UAVs equipped with vision cameras to assist in avalanche search and rescue operations; machine learning techniques have been used in the procedure, where the photos of the avalanche debris collected by the UAV are first processed with a pre-trained CNN to extract discriminative features. In [41], a human body detection algorithm based on color and depth data captured from on-board sensors has been presented, and the authors have proposed a computational model for tracking numerous persons with invariance to the scale and rotation of the point of view with respect to the target. In [42], a fully autonomous rescue UAV with on-board real-time human detection has been presented.
[Figure: block diagram of the on-board hardware — camera, GPS, AI computing device, controller, and long-range transceiver with antenna]
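As a rough illustration of how the modules above could cooperate, the following hedged Python sketch shows one plausible airborne main loop: grab a frame from the camera, run the on-board human detector, and, when people are found, report the GPS coordinates and person count to the ground station over the long-range transceiver. Every interface and message field below is a hypothetical stand-in; the paper does not define this API.

    import json
    import time

    # Hedged sketch: camera.read(), gps.position(), radio.send() and the
    # detector callable are hypothetical stand-ins, not real driver APIs.

    def mission_loop(camera, gps, radio, detector, period_s=1.0):
        """Detect people in the live feed and alert the ground station."""
        while True:
            frame = camera.read()           # image from the on-board camera
            people = detector(frame)        # e.g., YOLOv3 person detections
            if people:
                lat, lon = gps.position()   # location of the sighting
                radio.send(json.dumps({     # via the long-range transceiver
                    "lat": lat,
                    "lon": lon,
                    "count": len(people),   # number of detected humans
                    "timestamp": time.time(),
                }).encode())
            time.sleep(period_s)            # processing cadence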
Fig. 6. Samples of the obtained detection results in live video sequences

V. CONCLUSION

The major goal of the work proposed in this paper is to develop a revolutionary solution that will aid government officials in better detecting and locating wounded, lost, and trapped people during and after a crisis. Nowadays, drone-based systems are the key trend for remote surveillance and reconnaissance, and AI has a growing role in the domain of object detection. In this paper, we propose a system that employs drones equipped with advanced technological devices and embedded AI techniques to facilitate search and rescue missions. The system makes use of an emergent real-time, high-accuracy AI-based object detection method to detect and locate individuals. The architecture of the proposed system is demonstrated by showing the functionality of all constituting modules. The obtained detection results show that the proposed system meets the real-time requirement and provides acceptable accuracy in detecting individuals in diverse environments. Future work will focus on improving the accuracy and conducting studies concerning power consumption.

ACKNOWLEDGMENT

The authors would like to thank the University of Bahrain for organizing the DASA'21 conference and supporting the publication of this paper.

REFERENCES

[1] J. Hayward-Karlsson, S. Jeffery, A. Kerr, and H. Schmidt, Hospitals for War-Wounded: A Practical Guide for Setting Up and Running a Surgical Hospital in an Area of Armed Conflict. Geneva: International Committee of the Red Cross, 1998.
[2] "INSARAG web site," http://www.insarag.org/, accessed: 2021-10-20.
[3] M. Rizk, A. Mroue, M. Farran, and J. Charara, "Real-time SLAM based on image stitching for autonomous navigation of UAVs in GNSS-denied regions," in Proc. IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2020, pp. 301–304.
[4] J. Everaerts, "The use of unmanned aerial vehicles (UAVs) for remote sensing and mapping," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, 2008.
[5] C. Martinez, C. Sampedro, A. Chauhan, and P. Campoy, "Towards autonomous detection and tracking of electric towers for aerial power line inspection," in Proc. IEEE International Conference on Unmanned Aircraft Systems (ICUAS), 2014, pp. 284–295.
[6] M. Rizk, T. N. Al-Deen, H. Diab, A. M. Ahmad, and Z. El-Bazzal, "Proposition of online UAV-based pollutant detector and tracker for narrow-basin rivers: A case study on Litani River," in Proc. IEEE International Conference on Computer and Applications (ICCA), 2018, pp. 189–195.
[7] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2005, pp. 886–893.
[8] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91–110, 2004.
[9] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587.
[11] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, "Selective search for object recognition," International Journal of Computer Vision, vol. 104, pp. 154–171, 2013.
[12] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, Eds., vol. 28. Curran Associates, Inc., 2015.
[13] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, real-time object detection," 2016.
[14] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[15] ——, "YOLOv3: An incremental improvement," 2018.
[16] S. Kim and H. Kim, "Zero-centered fixed-point quantization with iterative retraining for deep convolutional neural network-based object detectors," IEEE Access, vol. 9, pp. 20828–20839, 2021.
[17] R. Murphy, S. Tadokoro, and A. Kleiner, Disaster Robotics. Springer, 2016, pp. 1577–1604.
[18] C. A. Thiels, J. M. Aho, S. P. Zietlow, and D. H. Jenkins, "Use of unmanned aerial vehicles for medical product transport," Air Medical Journal, vol. 34, no. 2, pp. 104–108, 2015.
[19] B. Doroodgar, Y. Liu, and G. Nejat, "A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims," IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2719–2732, 2014.
[20] L. Pineda, T. Takahashi, H. Jung, S. Zilberstein, and R. Grupen, "Continual planning for search and rescue robots," in Proc. IEEE-RAS International Conference on Humanoid Robots, 2015, pp. 243–248.
[21] Y. Liu and G. Nejat, "Multirobot cooperative learning for semi-autonomous control in urban search and rescue applications," Journal of Field Robotics, vol. 33, no. 4, pp. 512–536, June 2016.
[22] T. Amukele, P. Ness, A. Tobian, J. Boyd, and J. Street, "Drone transportation of blood products," Transfusion, vol. 57, 2016.
[23] T. K. Amukele, J. Street, K. Carroll, H. Miller, and S. X. Zhang, "Drone transport of microbes in blood and sputum laboratory specimens," Journal of Clinical Microbiology, vol. 54, no. 10, pp. 2622–2625, 2016.
[24] T. K. Amukele, J. Hernandez, C. L. H. Snozek, R. G. Wyatt, M. Douglas, R. Amini, and J. Street, "Drone transport of chemistry and hematology samples over long distances," American Journal of Clinical Pathology, vol. 148, no. 5, pp. 427–435, November 2017.
[25] A. Claesson, A. Bäckman, M. Ringh, L. Svensson, P. Nordberg, T. Djärv, and J. Hollenberg, "Time to delivery of an automated external defibrillator using a drone for simulated out-of-hospital cardiac arrests vs emergency medical services," JAMA, vol. 317, p. 2332, 2017.
[26] H. Bahri, M. Chouchene, R. Khemiri, F. Sayadi, and M. Atri, "Fast moving human detection using Fourier and HOG descriptors based CUDA," in International Multi-Conference on Systems, Signals and Devices (SSD), 2018, pp. 202–207.
[27] D. M. Dinama, Q. A'yun, A. D. Syahroni, I. Adji Sulistijono, and A. Risnumawan, "Human detection and tracking on surveillance video footage using convolutional neural networks," in Proc. International Electronics Symposium (IES), 2019, pp. 534–538.
[28] H. Yousif, J. Yuan, R. Kays, and Z. He, "Fast human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification," in Proc. IEEE International Symposium on Circuits and Systems (ISCAS), 2017, pp. 1–4.
[29] J. Wu and G. Zhou, "Real-time UAV video processing for quick-response to natural disaster," in Proc. IEEE International Symposium on Geoscience and Remote Sensing, 2006, pp. 976–979.
[30] T. Suzuki, J.-i. Meguro, Y. Amano, T. Hashizume, R. Hirokawa, K. Tatsumi, K. Sato, and J.-i. Takiguchi, "Information collecting system based on aerial images obtained by a small UAV for disaster prevention," in Mechatronics, MEMS, and Smart Materials (ICMIT), M. Sasaki, G. C. Sang, Z. Li, R. Ikeura, H. Kim, and F. Xue, Eds., vol. 6794, International Society for Optics and Photonics. SPIE, 2008, pp. 538–543.
[31] M. Goodrich, B. Morse, D. Gerhardt, J. Cooper, M. Quigley, J. Adams, and C. Humphrey, "Supporting wilderness search and rescue using a camera-equipped mini UAV," Journal of Field Robotics, vol. 25, pp. 89–110, 2008.
[32] A. Ahmed, M. Nagai, C. Tianen, and R. Shibasaki, "UAV based monitoring system and object detection technique development for a disaster area," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, pp. 373–377, 2008.
[33] P. Rudol and P. Doherty, "Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery," in Proc. IEEE Aerospace Conference, 2008, pp. 1–8.
[34] M. Andriluka, S. Roth, and B. Schiele, "Pictorial structures revisited: People detection and articulated pose estimation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1014–1021.
[35] M. Andriluka, P. Schnitzspan, J. Meyer, S. Kohlbrecher, K. Petersen, O. von Stryk, S. Roth, and B. Schiele, "Vision based victim detection from unmanned aerial vehicles," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 1740–1747.
[36] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Intelligent Robots and Computer Vision: Algorithms and Techniques, vol. 7878, International Society for Optics and Photonics. SPIE, 2011, pp. 71–83.
[37] M. Donelli, "A rescue radar system for the detection of victims trapped under rubble based on the independent component analysis algorithm," Progress In Electromagnetics Research M, vol. 19, 2011.
[38] G. De Cubber, D. Doroftei, D. Serrano, K. Chintamani, R. Sabino, and S. Ourevitch, "The EU-ICARUS project: Developing assistive robotic tools for search and rescue operations," in Proc. IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2013, pp. 1–4.
[39] J. Sun, B. Li, Y. Jiang, and C.-y. Wen, "A camera-based target detection and positioning UAV system for search and rescue (SAR) purposes," Sensors, vol. 16, p. 1778, 2016.
[40] M. B. Bejiga, A. Zeggada, A. Nouffidj, and F. Melgani, "A convolutional neural network approach for assisting avalanche search and rescue operations with UAV imagery," Remote Sensing, vol. 9, no. 2, 2017.
[41] A. Al-Kaff, M. Gómez-Silva, F. M. Moreno, A. de la Escalera, and J. Armingol, "An appearance-based tracking algorithm for aerial search and rescue purposes," Sensors, vol. 19, p. 652, 2019.
[42] E. Lygouras, N. Santavas, A. Taitzoglou, K. Tarchanidis, A. Mitropoulos, and A. Gasteratos, "Unsupervised human detection with an embedded vision system on a fully autonomous UAV for search and rescue operations," Sensors, vol. 19, no. 16, 2019.
[43] M. Le and M. Le, "Human detection and tracking for autonomous human-following quadcopter," in Proc. IEEE International Conference on System Science and Engineering (ICSSE), 2019, pp. 6–11.
[44] "COCO - common objects in context web site," https://cocodataset.org/, accessed: 2020-06-20.
[45] M. Rizk, A. Mroue, K. A. Ali, and J. Charara, "Development and implementation of an embedded low-cost GNSS receiver emulator for UAV testing," in Proc. IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2019, pp. 101–102.