Chapter

Aerial Drones for Fire Disaster Response
Ramasenderan Narendran, Thiruchelvam Vinesh,
Soon Hou Cheong and Han Xiang Yee

Abstract

The significance of fire in human society encompasses essential functions like illumination, warmth, and cooking but also poses immense risk when uncontrolled,
leading to catastrophic damage and loss of life. Traditional firefighting responses
are often hindered by geographical and logistical challenges, resulting in delays
that exacerbate the severity of fires. This research introduces an innovative solution
through the use of an autonomous firefighting drone, designed for round-the-clock
surveillance and rapid response to fire scenes. Utilizing image processing and neu-
ral networks, the drone can efficiently detect fire and smoke, serving as the first
responder, and is equipped with fire extinguishing balls to initiate suppression. The
work extends to explore the application of AI edge aerial drones in disaster response,
not only to fires but also floods and landslides, particularly in Malaysia and Southeast
Asia. By focusing on various urban, peri-urban, and rural contexts, the research
delineates potential implementation strategies aimed at enhancing situational
awareness for first responders and reducing response time to reach victims, thereby
facilitating more effective disaster response operations. The study’s findings point to a considerable advancement in firefighting technology, filling a critical gap in the disaster response playbook: faster response times, decreased fire damage, and, ultimately, lives saved.

Keywords: fire, fire extinguishing ball, fire fighting, artificial intelligence, unmanned
aerial vehicle

1. Introduction

The ongoing evolution of cities into the scientific, economic, administrative, and
cultural epicenters of their respective nations has been a remarkable phenomenon of
the twentieth century. This evolution, despite delivering substantial improvements in
living conditions, also precipitates a host of challenges. These include increased traffic
congestion, which stresses infrastructure while amplifying air and noise pollution,
a housing crunch prompting cities to expand both horizontally and vertically, and
issues such as water scarcity and waste management.

Drones – Various Applications

Increasingly complex urban infrastructure has amplified concerns regarding safety and security. In response, communities have implemented various technical approaches to assemble multidisciplinary emergency rescue teams comprising first responders, paramedics, and firefighting personnel who are ready to be called upon, supported by dedicated energy and communication systems.
The past century has witnessed the evolution of new building materials, construc-
tion styles, and methods of resource utilization in urban settings. However, despite
significant progress in fire prevention, fire incidents in urban areas continue to grow
more frequent, complex, and hazardous, posing major challenges to first responders.
The Department of Statistics Malaysia (DOSM) reported [1] 50,720 fire incidents in 2019, a 24.1% increase from 2015. Amid the Covid-19 restrictions, 15,393 fire incidents were reported from March to August, nearly half of which were due to preventable open burning. The smoke from these fires not only harms the environment but also adversely impacts human respiratory systems. Structural fires, mainly ignited by electrical faults (49.1% of cases during the MCO), gas leaks (17%), sparks (14%), and flammable items such as lighters, candles, and matches (7.2%), contribute significantly to fire incidents [2].
While geography shields Malaysia from natural disasters like earthquakes and
tsunamis, the most common threats stem from man-made incidents, including traffic
accidents, fires, and floods. Fire, one of the essential elements from ancient times,
represents power and intensity. However, when uncontrolled, fire can be deadly. The
damage caused by fires in various settings, including residential buildings, vehicles,
and even airplane engines, inspires fear and devastation from afar. Yet, the reality
experienced by those at the centre of such incidents is vastly more distressing and
often marked by terror, suffering, and tragically, fatal outcomes.
Fire spreads quickly, causing extensive damage and leaving lasting physical and
psychological scars on its victims. Firefighting involves combating this relentless ele-
ment with the forceful application of water—a task carried out by brave fire brigades.
However, their courage is accompanied by substantial risks. Firefighters often face
unknown variables, such as the overall structural stability, fire origin, potential for
collapse, building temperature, smoke density, and more. They plunge into hazard-
ous environments, sometimes blind or with minimal briefing, where unpredictable
circumstances can lead to fatalities. The world [3] sees more than 50 firefighter deaths
annually, excluding the tragic loss of 340 firefighters in the World Trade Centre
disaster.
Efforts to reduce the risks faced by firefighters have been made worldwide, with
technology taking the lead. Unmanned firefighting machines equipped with capabili-
ties for monitoring, inspecting, and extinguishing fires have been developed as a safer
alternative. However, a vital yet often overlooked factor is the pre-evaluation of the
fire environment. Information about fire intensity, smoke concentration, the number
of trapped individuals, and the presence of explosive materials remains unknown
until firefighters arrive at the scene. Hence, thorough pre-evaluation can enhance
safety and efficiency in firefighting operations.
Unmanned Aerial Vehicles (UAVs) are now employed for surveillance in vari-
ous sectors, including wildlife management, fire behavior monitoring, and package
delivery, in many [4] countries. However, in Malaysia, the adoption of UAVs in
firefighting is minimal, primarily due to local humidity and high temperatures.
Currently, drones require human operators for guidance and face constraints such
as distance limitation and battery life. They also need precise navigation and rapid
response to avoid obstacles. 3D area mapping could be a solution, allowing drones to
Aerial Drones for Fire Disaster Response
DOI: http://dx.doi.org/10.5772/intechopen.1002525

follow a predetermined path and provide valuable information about the fire’s exter-
nal environment. However, high-resolution 3D imagery necessitates sophisticated
and expensive vision cameras, which can inflate costs when implemented in a drone
swarm. Furthermore, the risk of damage to drones remains high due to various factors
that can cause failure.
Despite advancements in technology, fire incidents and the associated fatalities
continue to rise. The National Fire Protection Association (NFPA) suggests an ideal response time of 9 minutes and 20 seconds, comprising 14.3% turnout time and 85.7% travel time (roughly 80 and 480 seconds, respectively) [5]. Kamarulzam Malik Abdullah, the director of the Sabah Fire and Rescue Department, cites factors such as a shortage of fire stations and geographical constraints for such delays.
The deployment of autonomous firefighting drones and fire extinguisher balls
can reduce response time and help firefighters gain easier access to the incident
scene.
The potential use case for UAVs remains high, as they enable both accessibility and data collection for a multitude of analytics [6]. This data can be harnessed into timely, critical information that equips firefighters with the situational awareness to combat the disaster successfully while minimizing human casualties and property damage.

2. Fire detection

An innovative approach [7] to fire detection systems uses triboelectric nanogenerators (TENGs). These devices, first proposed by Wang’s group in 2012, can harvest energy from the wind, offering a potential solution for fire detection during the power failures common in fire accidents.
TENGs offer certain distinct advantages over conventional methods. For one,
they have a high sensitivity to the frequency of mechanical vibrations. They are also
smaller in size compared to traditional turbines, allowing for greater space utilization.
In the event of a fire, power sources are often cut off to prevent further complica-
tions such as short circuits or power leakage. However, this also means that remote
rescue electronic devices cease functioning. This is where TENGs come into play.
They can serve as an intelligent fire alarm system that continues to operate during a
fire accident.
TENGs can also harness wind energy even in low-speed regions and convert it to
useful power. They offer a novel power source alternative to conventional ones like
power plants and batteries, also reducing the length of wiring needed from these
traditional sources. Additionally, the use of TENGs optimizes space utilization.
However, the previous iterations of TENGs relied on elastomers and wind, operat-
ing through fluid-induced vibration (FIV). This operation subjected the elastomer
to complicated stress and continual fatigue fractures, thereby affecting the device’s
efficiency. Since wind near the Earth’s surface usually flows at low speeds and from varying directions, the device required a low starting wind velocity and the ability to capture multidirectional wind energy for efficient data collection in fire detection.
To overcome these issues, Zhang et al. proposed a TENG based on fluid-induced
vibration, referred to as F-TENG. The F-TENG consists of six identical TENG units,
spinning switches, and a low-cost lever mechanism. This design enables the device
to persistently collect and analyze wind energy from any given direction. The incor-
poration of six identical TENG units and spinning switches facilitates gathering
wind energy from multiple angles, lowers the starting speed, and reduces the risk
of deformation to the TENG. This approach offers a more efficient solution for fire detection compared to conventional elastomer-based TENGs.
Furthermore, the researchers have developed a methodology to detect fires based
on the concentration of smoke. This involves using a capacitor and a rectifier bridge
driven by the F-TENG. This not only facilitates the detection of fire but also allows for
the calculation of the direction and speed of fire spread.
More impressively, the system can anticipate potential fire hazards by utilizing
the data gathered by the F-TENG, such as humidity, temperature, wind speed, and
wind direction. This predictive feature enhances the system’s potential as a robust fire
prevention and safety mechanism.
In recent research, blockchain technology was applied to develop [8] a novel fire detection system that incorporates image processing. The innovative approach
involves the use of a “blockchain golden fox,” which consists of various intercon-
nected units. The fundamental unit within this system, known as the meta-network,
is constructed from a blend of activation functions and connection methods.
In this unique setup, neurons are connected with protrusions, employing both
linear and non-linear mapping as activation functions. These functions subsequently
confine the neuron’s output to a specific range.
To activate the alarm system, a weight value derived from the blockchain is
utilized. This is achieved using the term frequency-inverse document frequency
(TF-IDF) algorithm, as the weight value resides between 0 and 1.
During image processing, the image is first converted into grayscale. Red light, which is associated with fire, has a longer wavelength than green and blue light; the grayscale conversion makes the red regions more prominent, enabling faster and easier fire detection.
Following the grayscale conversion, the image then undergoes binarization to
enhance the contrast of the video or image being inputted into the system. Turning
the colored or grayscale image into binary is also beneficial in reducing pixel
interference.
The image processing concludes with the morphological process, which serves to
eliminate or reduce noise in the image. This includes the erosion and dilation process,
further enhancing the efficiency and accuracy of the fire detection system.
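The pipeline described above (grayscale conversion, binarization, then morphological opening) can be sketched in plain NumPy as follows; the brightness threshold and the 3 × 3 structuring element are illustrative assumptions, not parameters from the cited work.

```python
import numpy as np

def to_grayscale(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def binarize(gray, threshold=200.0):
    # Bright regions (candidate flame/light pixels) become 1, the rest 0.
    return (gray >= threshold).astype(np.uint8)

def _shift_combine(binary, combine, init):
    # Apply a 3x3 neighborhood operation by shifting the padded image.
    padded = np.pad(binary, 1)
    out = np.full_like(binary, init)
    h, w = binary.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = combine(out, padded[dy:dy + h, dx:dx + w])
    return out

def erode(binary):
    # A pixel survives only if its whole 3x3 neighborhood is set.
    return _shift_combine(binary, np.minimum, 1)

def dilate(binary):
    # A pixel is set if any pixel in its 3x3 neighborhood is set.
    return _shift_combine(binary, np.maximum, 0)

def detect_bright_regions(rgb, threshold=200.0):
    # Opening (erosion then dilation) removes isolated noise pixels
    # while preserving larger bright regions.
    return dilate(erode(binarize(to_grayscale(rgb), threshold)))
```

Applied to an image with a bright patch and a single bright noise pixel, the opening keeps the patch and discards the isolated pixel.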
In the research [8] conducted, a comprehensive image processing workflow was used to detect fire based on blockchain technology. The process commences with
image pre-processing, which reduces the quantity of image data and facilitates valu-
able data discovery. Subsequently, image segmentation takes place, partitioning the
image into distinct segments, each with their unique properties, thereby highlighting
areas of interest. The creation of a feature map, which assigns values and extracts vital
data from the image, ensues. Image matching, the final step, entails comparing these
extracted features with a mapping table for object or fire recognition.
Throughout the experimentation process, it was noted that high temperatures
triggered a fire alarm response within 13.3 seconds, while high smoke density levels
elicited a response in 18.1 seconds.
In a separate study [9], image processing involving color filtering and histograms
was used for fire detection. An examination of RGB (red, green, and blue) and HSV
(hue, saturation, and value) color spaces showed that filtering in RGB color spaces
was more suitable for detecting fire in static light conditions, such as interiors, com-
pared to open areas. However, for image processing, the threshold technique in HSV
color spaces was more advantageous, as demonstrated by their previous work [10].
This study also involved color filtering, beginning with an image conversion from RGB to HSV. The HSV image was subsequently transformed into grayscale and binary
formats for better visibility. Their experimental setup involved placing four different
colors (yellow, blue, green, and red) next to the fire, which were not detected as the
HSV value was set to detect fire color. The optimum parameters for detecting fire
were established as hue ranging from 0–16 to 0–20, saturation from 74–164 to 74–168,
and value from 200–228 to 200–232.
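The color filtering above can be sketched as a per-pixel HSV window test. The snippet below is illustrative and assumes the study’s ranges are expressed on the common OpenCV-style scales (hue 0–179, saturation and value 0–255); it uses the lower ends of the reported optimum ranges.

```python
import colorsys

# Fire-color window, assumed on OpenCV-style scales:
# hue 0-179, saturation 0-255, value 0-255.
HUE_RANGE = (0, 20)
SAT_RANGE = (74, 168)
VAL_RANGE = (200, 232)

def is_fire_pixel(r, g, b):
    """Return True if an 8-bit RGB pixel falls inside the fire-color HSV window."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_cv, s_cv, v_cv = h * 179.0, s * 255.0, v * 255.0
    return (HUE_RANGE[0] <= h_cv <= HUE_RANGE[1]
            and SAT_RANGE[0] <= s_cv <= SAT_RANGE[1]
            and VAL_RANGE[0] <= v_cv <= VAL_RANGE[1])
```

An orange, moderately saturated pixel passes the test, while saturated blue or green pixels (like the colored cards placed next to the fire in the experiment) do not.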
In the application of drones for fire detection, it was discovered that the onboard
camera could only detect fire if the drone was flying at an altitude not exceeding 8
meters above the fire.
One technique [7] used flame emission spectroscopy in conjunction with image processing for fire detection, noting the technique’s superior response time compared to gas sensors and thermal sensors. This method also allowed for a more nuanced
understanding of the combustion process by tracking various oxidants. In terms
of image processing, the YCbCr color space was used, with a temporal smoothing
algorithm improving image quality and enhancing fire detection sensitivity.
Furthermore, the integration of the Internet of Things (IoT) with image processing has been recommended [11] for fire detection. While the authors acknowledged the widespread use of the RGB color model, owing to its simplicity and applicability in various applications, they noted its limitations in color recognition and object description.
Comparatively, the HSI color space was easier for humans to understand and better at handling non-uniform illumination. However, this color space was unstable due to its
angular nature. The YCbCr color space excelled in reducing image size and improving
image quality but was dependent on the original RGB signal. A satellite network and unmanned aerial vehicles (UAVs) are utilized [11] to collect images for fire detection, supplemented by sensor nodes placed in smart cities to gather environmental data such as humidity, temperature, light intensity, and smoke.

2.1 Fire size detection

In a fire sizing study, a novel approach towards active fire detection (AFD)
based on the interpretation of Landsat-8 Imagery via Deep Multiple Kernel Learning
was explored. The research utilized a recent dataset developed by De Almeida et al.,
containing image patches of 256 × 256 pixels that illustrate wildfires occurring in
various locations across all continents. The dataset thereby provides an opportunity to
adapt the MultiScale-Net to a diverse set of geographical, climatic, atmospheric, and
illumination conditions [12].
The method of Deep Multiple Kernel Learning is highlighted as being particu-
larly effective for spectral-spatial feature extraction from remote sensing images.
To address the limited availability of training samples, the researchers employed a
straightforward data augmentation technique, which led to the generation of 27 dif-
ferent configurations in this study [12].
The research group also introduced a novel indicator, the Active Fire Index (AFI),
devised to enhance the accuracy of thermal and spectral band analysis. AFI draws its
efficacy from the high sensitivity of the Landsat-8 sensor to fire radiation, and it is
derived from the SWIR2 and Blue bands [12].
Active Fire Index (AFI):

AFI = ρ7 / ρ2 (1)

where ρ7 and ρ2 represent the SWIR2 and Blue band values in Landsat-8 images, respectively. AFI’s utility lies in its ability to distinguish fires from their background, owing to fire’s high reflectance in the SWIR2 spectrum and relatively low reflectance in the Blue spectrum [12]. It further proves effective in eliminating smoke and possible clouds present in the image scenes, which are frequent hurdles in active fire detection.
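Eq. (1) is a simple band ratio, as the following NumPy sketch shows; the decision threshold used to flag candidate fire pixels here is illustrative, not a value from the study [12].

```python
import numpy as np

def active_fire_index(swir2, blue, eps=1e-6):
    # AFI = rho7 / rho2: the SWIR2 band divided by the Blue band.
    # eps guards against division by zero in dark pixels.
    return np.asarray(swir2, dtype=float) / (np.asarray(blue, dtype=float) + eps)

def fire_mask(swir2, blue, threshold=2.0):
    # Pixels whose AFI exceeds a chosen threshold are flagged as
    # candidate fire; the threshold is an illustrative assumption.
    return active_fire_index(swir2, blue) > threshold
```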
However, the AFI method is not without limitations; for instance, it can underperform when bright non-fire objects have a high reflectance in the SWIR2 and a low reflectance in the Blue spectra. Furthermore, the accuracy of fire
detection can be compromised due to certain inherent characteristics of fire such as
the limited spatial resolution of satellite images and the significant influence of the
Earth’s dense atmosphere [12].
The independence of the proposed method from the thermal bands paves the way
for future studies to explore the potential of Sentinel-2 data for AFD, which could
offer higher spatial and temporal resolution. Moreover, the research team suggests
that the processing time of the proposed method could be evaluated using cloud
platforms like Google Earth Engine (GEE) [12].
Another novel approach to understanding and modeling wildfires isolates individual fires from the commonly used moderate-resolution burned-area data [13].
In their study, they acknowledge the need to separate individual fires from the large clusters of burned area produced by extensive burn patches. This is a challenge for current global fire systems, as the available satellite data products identify only active fire pixels. Hence, they developed the Global Fire Atlas, which delineates individual fires using a fresh methodology that identifies the direction of fire spread and the location and timing of individual fire regions, and calculates fire size, duration, daily expansion, fire line length, and speed [14].
This Global Fire Atlas algorithm was applied to the MCD64A1 Col. 6 burned-area dataset [14], and it was found that the minimum detected fire size is one MODIS pixel, equivalent to approximately 500 m × 500 m. This result provides a precise and promising avenue for future research.
In another research endeavor, a comprehensive study evaluated the influence of the loss function, architecture, and image type on deep learning-based wildfire segmentation [15]. Two image types were processed, visible and Fire-GAN fused, and the 36 resulting combinations of architectures and loss functions were evaluated against three metrics: MCC, F1 score, and HAF [15].
Their analysis demonstrated that the Akhloufi + Dice + Visible combination yields
the best results for all metrics. Moreover, the Akhloufi architecture and the Focal
Tversky loss function were found to be prevalent in the top five for all metrics. Thus,
considering the performance evaluation and correlation analysis, the combination of
Akhloufi + Focal Tversky + visible was recognized as the most robust performer due
to its consistent results with minimal variance [15].

3. Firefighting strategies

The firefighting drone functions as a first responder at the fire scene, capable of
warning people in the area and notifying the fire department. Moreover, it comes with a feature that enables it to extinguish parts of the fire, creating pathways for people to escape and providing clear paths for firefighters to enter [16].
A system [16] was developed that can deploy a fire extinguishing ball to assist in wildfire fighting.
The mechanism for payload deployment was constructed with components such as a
motor, power supply, and receiver, allowing the drone to receive signals from the user
to open the valve through the motor and release the fire extinguishing ball.
The researchers found that the 0.5 kg fire extinguishing ball was insufficient to extinguish fires effectively. The 0.5 kg weight was chosen for the experiment due to budget constraints; the ball took approximately 3–4 seconds to activate, covering an area of about a meter in diameter. However, the researchers believe that a larger fire extinguishing ball could extinguish fires over a larger area. According to the official webpage, the founder states that the 1.3 kg fire extinguishing ball can cover a volume of up to 3 cubic meters [16].
The YOLOv4 algorithm [17] was improved by reducing the neural network’s weight, using the Kernelized Correlation Filter (KCF) algorithm for tracking with an adaptive-learning-rate template update when tracking fails. The adaptive tracking strategy was tested on a multi-rotor copter. KCF used a non-linear kernel function to handle non-linearity and non-stationarity, and the threshold on the average peak-to-correlation energy (APCE) metric was adapted to provide a more accurate evaluation of the tracking algorithm’s performance [17].
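For illustration, APCE can be computed as the squared peak-to-minimum gap of the response map divided by the mean squared deviation of the map from its minimum; the snippet below is a generic sketch of that formula, not the exact formulation used in [17].

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a tracker response map.

    A high APCE indicates a sharp, unambiguous peak; a sudden drop
    suggests tracking failure, so the template should not be updated.
    """
    r = np.asarray(response, dtype=float)
    f_max, f_min = r.max(), r.min()
    return (f_max - f_min) ** 2 / np.mean((r - f_min) ** 2)
```

A clean single-peak map scores higher than a map with the same peak but background clutter, which is what makes APCE a useful tracking-confidence signal.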
Another study used YOLOv5 for object detection, combining it with the DeepSORT algorithm for tracking and positioning objects from UAVs. DeepSORT is a real-time object tracking algorithm that pairs a deep neural network-based object detector with the simple online and realtime tracking algorithm SORT to track objects effectively even in challenging situations [18].
The nearest neighbor (NN) method [19] was used to measure and track selected objects, with the nearly constant velocity (NCV) motion model as the discrete-time kinematic model. The NCV model assumes that the target’s velocity is constant over a short period and that the target’s position at each time step is determined by its position and velocity at the previous time step; a Kalman filter is used to predict and estimate the target’s position and velocity at each step [19]. In the NCV model, the target’s state at time step k is represented by the state vector x_k = [px_k, py_k, vx_k, vy_k]^T, where px_k and py_k are the target’s position coordinates and vx_k and vy_k are its velocity components. The state transition function for the NCV model is given by:
Nearly constant velocity (NCV):

x_k = F x_{k−1} + w_{k−1} (2)

where F is the state transition matrix and w_{k−1} is the process noise. F is typically the identity matrix augmented with off-diagonal entries equal to the time step between frames, which couple each position component to the corresponding velocity. The Kalman filter algorithm then estimates the target’s position and velocity at each time step using the state transition function together with the measurement function, which relates the target’s state vector to its measured position in the current frame. The filter uses
the measurement and prediction to correct the estimate of the state vector. The NCV
motion model is simple to implement and computationally efficient, making it well-
suited for real-time tracking applications. However, it does not consider the accelera-
tion of the target, which can lead to errors in the estimated position and velocity when
the target’s motion is not truly constant.
A method was proposed [20] for recognizing the self-location of a drone flying in an indoor environment using the DWM1000 ultra-wideband communication module.
The self-localization algorithm uses trilateration and the gradient descent method to
determine the drone’s position with an error of within 10–20 cm. Real-time 3D posi-
tion information of the drone can be obtained and used for autonomous flight control
through a deep learning-based control scheme. This scheme improves upon a conven-
tional CNN algorithm by incorporating deeper layers and appropriate dropouts in the
CNN structure, which uses input data from a single camera. When a drone experi-
ences drift, it may not move in a straight line but instead veer slightly to one side. In
this situation, the real-time first-person view (FPV) of the drone is used as input data
in the proposed CNN structure to predict the pulse-width modulation (PWM) value
for the roll direction, which helps to correct the drift and keep the drone moving in
the desired direction. The goal of this work was to create a drone that could safely
navigate in environments where GPS signals are not available, such as tunnels and
underground parking garages. To achieve this, a control board based on a digital
signal processor and ultra-wideband modules were developed, along with control
algorithms for stable flight. A 3D position estimation algorithm using a proposed 2D
projection method was implemented to determine the drone’s position, which was
then used for hovering motion control and way-point navigation.
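The trilateration-plus-gradient-descent idea can be sketched as follows: minimize the squared difference between measured anchor ranges and the distances from a candidate position. The learning rate, iteration count, and centroid initialization below are assumptions, not the parameters used in [20].

```python
import numpy as np

def trilaterate_gd(anchors, ranges, lr=0.1, iters=500):
    """Estimate a 2D/3D position from anchor positions and measured ranges
    by gradient descent on the squared range-residual loss.

    anchors: (N, D) anchor coordinates; ranges: (N,) measured distances.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    pos = anchors.mean(axis=0)          # start from the anchor centroid
    for _ in range(iters):
        diff = pos - anchors            # (N, D) vectors to each anchor
        dists = np.linalg.norm(diff, axis=1) + 1e-9
        residual = dists - ranges       # range errors
        # Gradient of 0.5 * sum(residual^2) with respect to pos.
        grad = ((residual / dists)[:, None] * diff).sum(axis=0)
        pos -= lr * grad
    return pos
```

With four anchors and noise-free ranges, the estimate recovers the true position; with real UWB noise the residual error would be on the order of the 10–20 cm reported in the study.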

4. 3D area mapping

There are ways to perform 3D mapping using a 2D LiDAR instead of a 3D LiDAR sensor. The LiDAR Lite v2 sensor can detect objects up to 40 m away; it is low cost, performs well, and its light weight makes it suitable for UAVs. A servo motor serves as the actuator module that turns the LiDAR sensor, and an Arduino UNO board acts as the microcontroller driving the servo. Because the sensor is distance-based, each detection is plotted as a point in a point cloud. To turn the planar scan into a 3D scan, different servo step sizes are used, with smaller steps yielding higher-resolution images. A chair was used as the test object in the research, which concludes that the smaller the step size, the longer the processing time, since more scanned points must be plotted.
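Each servo step plus range reading can be converted into a Cartesian point for the accumulated cloud. The sketch below assumes a simple pan/tilt spherical-coordinate model, which may differ from the exact mounting geometry used in the research.

```python
import math

def scan_point_to_xyz(distance_m, pan_deg, tilt_deg):
    """Convert one 2D-LiDAR range reading plus the servo pan/tilt angles
    into a Cartesian point for the accumulated 3D point cloud."""
    pan = math.radians(pan_deg)    # servo rotation about the vertical axis
    tilt = math.radians(tilt_deg)  # elevation of the scan plane
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z
```

Sweeping the pan angle in small steps and the tilt angle between sweeps yields the dense point cloud; halving the step size roughly doubles both the point count and the scan time, matching the resolution/time trade-off observed in the study.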
Many applications can be built on 3D mapping. In this case, it was used for urban analysis of the spatial distribution of both population and residential buildings. These census data are collected over urban area units of a given dimension. Unfortunately, the geometry of the census tracts changes between two censuses, through either spatial aggregation or spatial disaggregation, as new structures are built or urban densification occurs. Dasymetric mapping with different geometric shapes and different types of information, such as lost spaces, public spaces, proximity, and urban density, therefore becomes very useful. Dasymetric mapping is a spatial interpolation technique that re-allocates areal data from source to target geometries. A UAV offers a low-cost, practical technology for carrying out this task.
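The core re-allocation step of dasymetric mapping can be sketched as an area-weighted transfer. The snippet below is a simplified illustration in which the only ancillary information is the intersection area between source and target geometries; real dasymetric mapping would weight by land-use classes as well.

```python
import numpy as np

def dasymetric_reallocate(source_pop, overlap_area):
    """Re-allocate population counts from source zones to target zones
    in proportion to the area each target shares with each source.

    source_pop: (S,) population per source zone.
    overlap_area: (S, T) intersection area of source s with target t
                  (every source is assumed to overlap at least one target).
    """
    overlap = np.asarray(overlap_area, dtype=float)
    weights = overlap / overlap.sum(axis=1, keepdims=True)  # per-source shares
    return np.asarray(source_pop, dtype=float) @ weights
```

The transfer is mass-preserving: the target totals sum to the original source population.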

UAVs offer great potential for operating airborne sensor systems and computer vision for monitoring 3D buildings and for spatial planning. The Swinglet CAM is a fixed-wing system used to acquire 3D point clouds over a region, aided by sensors such as an RGB camera for georeferenced image pairs, a GNSS receiver for capturing real-time positions, and an inertial measurement unit (IMU). Parameters such as the position of the laser sensor can be obtained from one or more GNSS base stations, while the IMU provides the sensor’s altitude and heading angles (roll, pitch, and yaw). Moreover, LiDAR or satellite imagery can serve as tools for extracting 3D point clouds. Information about a mapped building, such as its height, area, and volume, can then be extracted using 3DEBP (3D extraction of building parameters).
A UAV point cloud can be obtained by multi-stereo image matching over eight ground control points, allowing a set of 3D points to be estimated from the stereo pair pixels. Because millions of points are captured, the points are filtered with the clustering large applications (CLARA) algorithm, which suits large datasets such as point clouds thanks to its ability to partition the data into sub-groups.
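The CLARA idea is to run a medoid-based clustering on random subsamples and keep the medoid set that best explains the full dataset. The sketch below uses a simple Voronoi-iteration k-medoids as the inner clustering, rather than the full PAM procedure CLARA is normally built on.

```python
import numpy as np

def kmedoids(points, k, iters=10):
    # Simple Voronoi-iteration k-medoids, run on a small sample.
    medoids = points[:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - medoids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            cluster = points[labels == j]
            if len(cluster) == 0:
                continue
            # New medoid: the member minimizing total intra-cluster distance.
            within = np.linalg.norm(cluster[:, None] - cluster[None], axis=2)
            medoids[j] = cluster[within.sum(axis=1).argmin()]
    return medoids

def clara(points, k, sample_size=100, n_samples=5, seed=0):
    # Cluster random subsamples; keep the medoid set with the lowest
    # total assignment distance over the full point cloud.
    rng = np.random.default_rng(seed)
    best_cost, best = np.inf, None
    for _ in range(n_samples):
        idx = rng.choice(len(points), min(sample_size, len(points)), replace=False)
        medoids = kmedoids(points[idx], k)
        cost = np.linalg.norm(points[:, None] - medoids[None], axis=2).min(axis=1).sum()
        if cost < best_cost:
            best_cost, best = cost, medoids
    return best
```

Because only a sample is clustered in each round, the cost stays manageable even for point clouds with millions of points, which is exactly why CLARA suits this setting.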
Even so, there was a mean error in the estimated building block volume when compared against the original reference dataset. In the early days, fire events were monitored using satellite systems, valued for their efficiency, accuracy, and automatic operation [21]. However, this method is unreliable in remote areas, where satellite signals are poor. Among the alternatives for fire monitoring, remote sensing has the advantage of large-area coverage and efficient, high-spatial-resolution imagery. InSAR techniques, which provide area surveying and mapping, can monitor day and night under different weather conditions, but on the downside they are limited by their revisit time, whereas fire monitoring requires continuous observation. With satellite imaging, the probability of false alarms is high, and dense fog, cloud, or rain can degrade the imagery.
One alternative is to use UAVs in place of satellite imagery, since UAVs provide real-time monitoring at low altitude and fast data acquisition, and they are reliable and simple to operate. Attaching a digital camera is ideal, as it can capture a basic surface model and regular images for analysis; mounting a multi-lens camera adds proper texture information for the mapped area. This illustrates the strength of UAVs compared with traditional satellite imaging. By implementing image-processing algorithms on the UAV, drone mapping can be performed with the help of the ThingSpeak cloud analytics platform, which provides access to data both online and offline. With the DroneDeploy software, which offers web- and app-based platforms, drones can map a target area with high-resolution images and video.
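As a hedged sketch of how drone telemetry might be pushed to ThingSpeak, the snippet below only constructs the channel-update URL; the field assignments and API key are placeholders, and a real deployment would issue the HTTP request and handle errors and rate limits.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def thingspeak_update_url(api_key, **fields):
    """Build the GET URL for a ThingSpeak channel update, e.g.
    field1=latitude, field2=longitude, field3=temperature.
    The caller issues the HTTP request itself."""
    query = {"api_key": api_key, **fields}
    return THINGSPEAK_UPDATE + "?" + urlencode(query)
```

For example, `thingspeak_update_url("MY_KEY", field1=3.139, field2=101.687)` would report a hypothetical drone position near Kuala Lumpur to fields 1 and 2 of a channel.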
With 3D mapping based on SLAM, even rovers on Mars and the Moon can estimate their current position and construct a map of their surroundings. Sensors such as RGB-D cameras and LiDAR provide dense 3D point clouds, but limited communication links, data storage, and power supply can be a problem; alternatively, a stereo camera or monocular vision can work because of its light weight and lower power consumption. Visual SLAM mapping can be implemented by using pairs of stereo images to train a deep learning model that estimates a disparity map, which is then used to create the 3D point cloud. A good SLAM implementation for terrain perception and mapping is determined by the choice of sensors and sensor fusion. According to the
Drones – Various Applications
research paper [22], several methods have been used. The first is a monocular SLAM framework with an extended Kalman filter (EKF) that can track the unconstrained motion of a rover, but it requires well-distributed distinctive features; its biggest problems are scale ambiguity and measurement drift, since a single camera provides no inertial or range measurements. Alternatively, RGB-D SLAM acquires both per-pixel depth and visual texture information, but lighting conditions can interfere, producing noisy and homogeneous point clouds. Some researchers have proposed LiDAR SLAM to solve these robustness problems; the paper builds the global terrain map with a sparse method and a batch alignment algorithm [22]. LiDAR-camera fusion with SLAM has great advantages since it exploits the strengths of both sensors. Stereo SLAM is another proposed method: it creates a terrain map through sensor fusion and, using a Gaussian-based odometry error model, improves accuracy by predicting the non-systematic error from wheel-terrain interaction.
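To make the drift problem concrete, here is a deliberately minimal one-dimensional Kalman filter, a hypothetical sketch rather than the rover framework from [22]: during odometry-only prediction the state variance grows without bound, and it only collapses when an absolute measurement (such as range) arrives, which a single camera alone cannot supply.

```python
class Ekf1D:
    """Minimal 1-D filter: state is a position x with variance p."""

    def __init__(self, x0=0.0, p0=1.0, q=0.1, r=0.5):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise

    def predict(self, u):
        """Odometry-only prediction: uncertainty grows every step."""
        self.x += u
        self.p += self.q

    def update(self, z):
        """Correction from an absolute (e.g. range) measurement -
        the step a monocular camera cannot provide on its own."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
```

Running ten predictions doubles the variance in this toy configuration; a single range update then shrinks it by a factor of five, which is exactly the benefit the inertial- and range-free monocular setup forgoes.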
To prove the concept, a training dataset gathered on Earth was not practical, since the paper focuses on Mars and the Moon. A self-supervised deep learning model was therefore used: 2D and 3D convolution layers in a geometry-based CNN were trained on image subsets collected by the stereo camera pairs. The 2D convolution layers extract features from each image and construct a cost volume; the 3D convolution layers then aggregate the volume to infer, for each pixel, a probability distribution over disparity values. Finally, the disparity map is regressed from that probability distribution.
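The final regression step can be illustrated with a small sketch. Assuming a cost volume where lower cost means a better match, a softmax over the disparity axis yields the per-pixel probability distribution, and the disparity map is its expected value; the exact network in the paper may differ in details.

```python
import numpy as np

def soft_argmax_disparity(cost_volume):
    """cost_volume: (D, H, W) stereo matching costs, lower = better.
    A softmax over the disparity axis turns costs into a per-pixel
    probability distribution; the disparity map is its expectation."""
    prob = np.exp(-cost_volume)
    prob /= prob.sum(axis=0, keepdims=True)
    disparities = np.arange(cost_volume.shape[0])[:, None, None]
    return (prob * disparities).sum(axis=0)
```

Because the expectation is differentiable, unlike a hard argmin over the cost volume, gradients can flow back through this step during training.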
According to the research, SLAM mapping that combines a single-scan Terrestrial Laser Scanning (TLS) point cloud with a Mobile Laser Scanning (MLS) point cloud can address the consistency problem and maintain mapping accuracy without a GNSS-IMU system. LiDAR odometry and global optimization are the keys to solving the problem: the LiDAR odometry estimates the motion of each detected MLS frame relative to the single-scan TLS data. Real and virtual features are extracted for tree-stem mapping using the NDT algorithm; virtual features are samples that reconstruct the tree-stem centrelines, while real features evenly sample the point cloud. A layer-clustering method extracts the virtual features, whereas a Difference of Gaussians (DoG) method extracts the real features. Global optimization mitigates the accumulated error and transforms all point clouds into a common coordinate system, using a global-map-based method that adjusts all the data without relying on loop closure.
Another study presents the optimization of SLAM using the Gmapping algorithm. Several SLAM algorithms have been invented, such as Gmapping, Hector SLAM, and Karto SLAM. Using a standard dataset and a ground-truth map, the parameters in the C++ source code were optimized in two different ways: optimizing the parameters separately and observing the results, or optimizing them collectively and observing the results [23].
The first method optimizes the parameters separately according to their functions. The chosen parameters are editable, meaning the user can supply new values and run the program again for evaluation against the dataset. Two main parameters were changed: the first uniformly, in an equitable manner (uniform optimization), and the second being the time between successive recalculations of the map
Aerial Drones for Fire Disaster Response
DOI: http://dx.doi.org/10.5772/intechopen.1002525
which is the map update interval. The Gmapping algorithm was used in this method. After optimization, the map results and the significant metrics varied with each parameter tested.
Optimizing the parameters separately led to only small changes in the map relative to the dataset. Although the results look similar to one another, this does not mean the improvement is uniform, because only a single parameter was changed at a time [23].
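The separate-optimization workflow can be sketched as a simple sweep: rebuild the map for each candidate value of one parameter and score it against the ground-truth grid. The `build_map` callback and the cell-disagreement score below are illustrative stand-ins for the actual Gmapping pipeline.

```python
import numpy as np

def map_error(candidate, ground_truth):
    """Score a generated occupancy grid: the fraction of cells
    that disagree with the ground-truth map."""
    return float(np.mean(candidate != ground_truth))

def sweep_parameter(build_map, values, ground_truth):
    """Optimize ONE parameter separately: rebuild the map for each
    candidate value, score it, and return the best value plus all scores."""
    errors = {v: map_error(build_map(v), ground_truth) for v in values}
    best = min(errors, key=errors.get)
    return best, errors
```

Running one such sweep per parameter reproduces the paper's "separate" strategy; the "collective" strategy would instead search over combinations of values.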
According to the journal article titled "Mapping and 3D modeling using quadrotor drone and GIS software", the main obstacle in 3D mapping is currently the cost of acquiring high-resolution satellite imagery, especially for weekly or daily mapping. The researchers proposed UAVs as a low-cost alternative that offers high-resolution images and can acquire data at any time, subject only to limited restrictions [24]. The paper approaches the problem by developing 3D models from photogrammetric data captured by a quadcopter drone. To acquire high-resolution images, it uses an industrial drone, the DJI Inspire 2, fitted with a high-quality gimbal-stabilized camera so that the images are stable and suitable for mapping activities. The camera is professional grade: it was the first aerial camera capable of recording lossless 4K RAW video at frame rates up to 30 fps and an average bitrate of 1.7 Gbps. With its Micro Four Thirds (MFT) sensor, the camera achieves high image quality in a light, compact body, making it suitable for recording anywhere, anytime; it also captures detailed 20 MP stills, stabilized by an integrated 3-axis gimbal. Of course, with such a high-specification camera, the PC used for processing must be equally capable. The software used was Agisoft Metashape, which performs photogrammetric processing of digital images and generates 3D spatial data for use in GIS applications.
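For planning mapping flights with such a camera, a standard photogrammetric quantity is the ground sampling distance (GSD), the ground footprint of a single pixel. The formula below is standard; the sensor and focal-length figures used in the example are illustrative, not taken from the DJI specifications.

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """Ground sampling distance in cm/pixel: how much ground one
    image pixel covers at a given flight altitude. Lower is sharper."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)
```

For example, a hypothetical MFT-sized sensor about 17.3 mm wide behind a 15 mm lens, flown at 100 m with 5280-pixel-wide images, gives roughly a 2.2 cm/pixel GSD, which the flight planner can trade against altitude and coverage.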
SLAM can be used to map unknown terrain by estimating the positions of obstacles within the created map. Visual Simultaneous Localization and Mapping (VSLAM) can be achieved with a monocular camera. For comparison and better results, the two main VSLAM approaches can be used: the direct method and the feature-based method, both of which extract information from the images taken by the camera. According to the journal, Large-Scale Direct SLAM (LSD-SLAM) is preferred for the direct method; it takes whole images as input and builds the point cloud from the image pixels. For the feature-based method, the preferred choice is ORB-SLAM, where ORB features are extracted and their pixel representations recorded [25]. In terms of accuracy, the direct method is more accurate, as more points are captured and recorded than with the feature-based method.
As discussed in this section, 3D area mapping with SLAM is a good approach, since it reconstructs terrain and maps unknown environments. Among the many SLAM variants, VSLAM is one of the best known because it acquires points directly from images taken by a monocular camera. The direct and the indirect (feature-based) methods are the two approaches to implementing VSLAM. The direct method, using LSD-SLAM, can achieve better results because it uses whole images as input and directly acquires pixels at varying densities for map plotting; it is often the better choice when the surroundings have poor lighting and poorly textured surfaces. In comparison, the indirect method uses ORB-SLAM, extracting a subset of features for observing the 3D map.
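The difference between the two approaches can be caricatured in code: the direct method scores a dense photometric residual over every pixel, while the feature-based method scores a sparse geometric residual over matched keypoints only. This is a conceptual sketch, not an implementation of LSD-SLAM or ORB-SLAM.

```python
import numpy as np

def direct_cost(ref_img, cur_img):
    """Direct method (LSD-SLAM style): every pixel contributes a
    photometric residual, so the whole image is the input."""
    diff = ref_img.astype(float) - cur_img.astype(float)
    return float(np.sum(diff ** 2))

def feature_cost(ref_keypoints, cur_keypoints):
    """Feature-based method (ORB-SLAM style): only a sparse set of
    matched keypoint locations contributes a geometric residual."""
    return float(np.sum((ref_keypoints - cur_keypoints) ** 2))
```

The density gap explains the trade-off in the text: the direct cost uses orders of magnitude more measurements per frame, while the feature cost depends on finding and matching distinctive points, which fails on poorly textured surfaces.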
5. Conclusion

The exploration of drone systems as detailed in this chapter presents a multifaceted analysis of contemporary technology with broad applications. While the capabilities of these drone systems have been found to deliver numerous benefits in terms of safety and effectiveness, it has become evident that there are intrinsic limitations that present challenges to their optimal use.

1. Battery power: the first significant limitation is a shortage of battery power, resulting in limited flight time. This not only restricts the drone's operational capabilities but also affects the overall efficiency. The constant need for battery replacement or recharging poses logistical challenges.

2. Payload capacity and motor power: linked to the issue of battery power is the
drone’s limited payload capacity and insufficient motor power. These constraints
have been found to restrict the ability to lift heavier objects or carry additional
equipment, such as larger batteries. The consequent low lift capacity hampers
the drone’s maneuverability, especially under adverse weather conditions or
heavy payload.

3. System high temperature: the incorporation of 4G (LTE) cellular connectivity via Raspberry Pi, while enhancing communication, has resulted in high system temperatures due to the absence of a cooling mechanism. This heat issue threatens the system's stability and longevity, risking failures and reduced lifespan.

4. Camera performance: the last major limitation noted is the suboptimal per-
formance of the drone’s camera, delivering lower-resolution photos and video
streams. This affects the drone’s efficiency in tasks requiring high-definition
imagery, such as aerial surveillance or inspection.
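The battery limitation in point 1 can be quantified with a back-of-the-envelope endurance estimate; the figures in the example are hypothetical, not measurements from the drone described here.

```python
def hover_time_min(capacity_mah, voltage_v, avg_draw_w, usable_fraction=0.8):
    """Rough hover endurance in minutes: usable battery energy (Wh)
    divided by the average electrical power draw (W). The usable
    fraction reserves charge to protect the pack and allow landing."""
    energy_wh = (capacity_mah / 1000.0) * voltage_v * usable_fraction
    return energy_wh / avg_draw_w * 60.0
```

For instance, a hypothetical 5000 mAh, 22.2 V pack at a 200 W average draw yields roughly 27 minutes of hover, which makes clear why heavier payloads (higher draw) and larger batteries (more weight, hence more draw) trade off so sharply.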

However, these limitations are not insurmountable, and the chapter has also
provided a pathway towards mitigating these challenges:

1. Improving battery configuration: investigating alternative power sources, including high-capacity batteries or solar charging, could substantially extend flight duration.

2. Enhancing motor power: a comprehensive analysis to identify suitable motor settings could significantly improve lift capacity and maneuverability.

3. Addressing high-temperature issues: including efficient cooling mechanisms, such as heat sinks or cooling fans, would ensure system stability and longevity.

4. Upgrading camera module: investing in higher-quality cameras or gimbal-stabilized camera systems could lead to better-quality imagery.

The integration of these recommendations requires a systematic approach, including in-depth research, extensive testing, and due consideration of compatibility and integration challenges. By addressing these limitations, the drone system could realize its full potential, becoming more versatile and efficient in various applications, such
as surveillance or inspection. This chapter's insights and recommendations contribute to the broader discourse on drone technology, setting a constructive path for future innovations and implementations. It lays the groundwork for researchers, practitioners, and policymakers to leverage drone technology, harnessing its strengths while thoughtfully addressing its weaknesses.

Author details

Ramasenderan Narendran*, Thiruchelvam Vinesh, Soon Hou Cheong and Han Xiang Yee

Asia Pacific University, Kuala Lumpur, Malaysia

*Address all correspondence to: narendran@apu.edu.my

© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References

[1] Nasir NA, Rijal NS. Fire safety management systems at commercial building: Pasir Puteh supermarket. e-Proceeding. 50 p

[2] Timbuong J. Firefighters dealt with more fires in first MCO. The Star Newspaper. 2020

[3] Fahy RF, LeBlanc PR, Mollis JL. Firefighter Fatalities in the United States-2010. Quincy, MA: National Fire Protection Association; 2011

[4] Tan CF, Dhar MS. Fire Fighting Mobile Robot: State of the Art and Recent Development. Australian Journal of Basic and Applied Sciences. 2013;7:220-230

[5] Averill J, Moore-Merrell L, Ranellone R Jr, Weinschenk C, Taylor N, Goldstein R, et al. Report on High-Rise Fireground Field Experiments. Technical Note (NIST TN). Gaithersburg, MD: National Institute of Standards and Technology; 2013. DOI: 10.6028/NIST.TN.1797

[6] Ramasenderan N, Rajasekaran T, Sivanesan S. Analysis and optimisation of lubrication viscosity on the bearing operations. In: Journal of Engineering Science and Technology, Special Issue on SIET2022, May, 51-57. Kuala Lumpur, Malaysia: Asia Pacific University; 2022

[7] Qiu X, Xi T, Sun D, Zhang E, Li C, Peng Y, et al. Fire detection algorithm combined with image processing and flame emission spectroscopy. Fire Technology. 2018;54(5):1249-1263. DOI: 10.1007/s10694-018-0727-x

[8] Zhao F. Application research of image processing technology for fire detection and fire alarm based on blockchain. Mobile Information Systems. 2022;2022:1-11. DOI: 10.1155/2022/9304991

[9] Ahmad A, Prakash O, Khare K, Bhambhu L. Detection of flames in video by color matching and energy separation. Journal of Real-Time Image Processing. 2019;16(5):1625-1642

[10] Ahmad A, Khare K, Bhambhu L. Detection of flames in video by color matching. IEEE Sensors Journal. 2017;17(6):1795-1802

[11] Amit K, Rajat C, Tripti S, Bhaskar T. A review of fire detection systems with new results on image processing for early stage forest fire detection. Wireless Networks. 2020;26:4577-4595. DOI: 10.1007/s11276-019-02125-7

[12] Rostami A, Shah-Hosseini R, Asgari S, Zarei A, Aghdami-Nia M, Homayouni S. Active fire detection from Landsat-8 imagery using deep multiple kernel learning. Remote Sensing [Internet]. 2022;14(4):992. DOI: 10.3390/rs14040992

[13] Andela N, Morton DC, Giglio L, Paugam R, Chen Y, Hantson S, et al. The Global Fire Atlas of individual fire size, duration, speed and direction. Earth System Science Data. 2019;11:529-552. DOI: 10.5194/essd-11-529-2019

[14] Louis G, Luigi B, David PR, Michael LH, et al. The collection 6 MODIS burned area mapping algorithm and product. Remote Sensing of Environment. 2018;217:72-85. DOI: 10.1016/j.rse.2018.08.005. Available from: https://www.sciencedirect.com/science/article/pii/S0034425718303705. ISSN 0034-4257

[15] Ciprián-Sánchez JF, Ochoa-Ruiz G, Rossi L, Morandini F. Assessing the impact of the loss function, architecture and image type for deep learning-based wildfire segmentation. Applied Sciences [Internet]. 30 Jul 2021;11(15):7046. DOI: 10.3390/app11157046

[16] Aydin B, Selvi E, Tao J, Starek MJ. Use of fire-extinguishing balls for a conceptual system of drone-assisted wildfire fighting. Drones. 12 Feb 2019;3(1):17

[17] Tian J, Shen L, Zheng Y. Genetic algorithm based approach for multi-UAV cooperative reconnaissance mission planning problem. In: Proceedings of the International Symposium on Methodologies for Intelligent Systems; 2006 Sep 27-29; Bari, Italy. pp. 101-110

[18] Wu W, Liu H, Li L, Long Y, Wang X, Wang Z, et al. Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image. PLoS ONE. 2021;16(10):e0259283. DOI: 10.1371/journal.pone.0259283

[19] Yeom SK, Seegerer P, Lapuschkin S, Binder A, Wiedemann S, Müller KR, et al. Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognition. 2021;115:107899. DOI: 10.1016/j.patcog.2021.107899

[20] Son SJ, Kim HS, Kim DH. Waypoint and autonomous flying control of an indoor drone for GPS-denied environments. IAES International Journal of Robotics and Automation (IJRA). 2022;11(3):233-249. DOI: 10.11591/ijra.v11i3.pp233-249

[21] Amit S, Pradeep KS, Yugal K. An integrated fire detection system using IoT and image processing technique for smart cities. Sustainable Cities and Society. 2020;61. DOI: 10.1016/j.scs.2020.102332

[22] Sungchul H, Antyanta B, Jae-Min P, Minseong C, Hyu-Soung S. Visual SLAM-based robotic mapping method for planetary construction. Sensors (Basel). 2021;21(22):7715. DOI: 10.3390/s21227715

[23] Gunaza Teame W, Zhongmin W, Yu Y. Optimization of SLAM Gmapping based on Simulation. International Journal of Engineering Research & Technology (IJERT). 2020;9(04). DOI: 10.17577/IJERTV9IS040107

[24] Budiharto W, Edy I, Jarot SS, Andry C, Heri N, Alexandar Agung Santoso G. Mapping and 3D modelling using quadrotor drone and GIS software. Journal of Big Data. 2021;8(48). DOI: 10.1186/s40537-021-00436-8

[25] Krul S, Pantos C, Frangulea M, Valente J. Visual SLAM for indoor livestock and farming using a small drone with a monocular camera: A feasibility study. Drones. 2021;5:41. DOI: 10.3390/drones5020041
