Abstract
Keywords: fire, fire extinguishing ball, fire fighting, artificial intelligence, unmanned
aerial vehicle
1. Introduction
The ongoing evolution of cities into the scientific, economic, administrative, and
cultural epicenters of their respective nations has been a remarkable phenomenon of
the twentieth century. This evolution, despite delivering substantial improvements in
living conditions, also precipitates a host of challenges. These include increased traffic
congestion, which stresses infrastructure while amplifying air and noise pollution,
a housing crunch prompting cities to expand both horizontally and vertically, and
issues such as water scarcity and waste management.
Drone swarms can follow a predetermined path and provide valuable information about the fire's external environment. However, high-resolution 3D imagery necessitates sophisticated and expensive vision cameras, which inflate costs when deployed across a swarm. Furthermore, the risk of damage to the drones remains high due to the many factors that can cause failure.
Despite advancements in technology, fire incidents and the associated fatalities continue to rise. The National Fire Protection Association (NFPA) suggests an ideal response time of 9 minutes and 20 seconds, comprising 14.3% turnout time and 85.7% travel time [5]. Kamarulzam Malik Abdullah, the director of the Sabah Fire and Rescue Department, attributes delays to factors such as a shortage of fire stations and geographical constraints.
The deployment of autonomous firefighting drones and fire extinguisher balls
can reduce response time and help firefighters gain easier access to the incident
scene.
The potential of UAVs remains high, as they enable both accessibility and data collection for a multitude of analytics [6]. This can be harnessed for timely, critical information that equips firefighters with the situational awareness to combat a disaster successfully while minimizing human casualties and property damage.
2. Fire detection
of deformation to the TENG. This approach offers a more efficient solution for fire
detection compared to the conventional F-TENG that uses an elastomer.
Furthermore, the researchers have developed a methodology to detect fires based
on the concentration of smoke. This involves using a capacitor and a rectifier bridge
driven by the F-TENG. This not only facilitates the detection of fire but also allows for
the calculation of the direction and speed of fire spread.
More impressively, the system can anticipate potential fire hazards by utilizing
the data gathered by the F-TENG, such as humidity, temperature, wind speed, and
wind direction. This predictive feature enhances the system’s potential as a robust fire
prevention and safety mechanism.
In recent research [8], blockchain technology was applied to develop a novel fire detection system that incorporates image processing. The innovative approach involves the use of a "blockchain golden fox," which consists of various interconnected units. The fundamental unit within this system, known as the meta-network, is constructed from a blend of activation functions and connection methods.
In this unique setup, neurons are connected with protrusions, employing both
linear and non-linear mapping as activation functions. These functions subsequently
confine the neuron’s output to a specific range.
To activate the alarm system, a weight value derived from the blockchain is utilized. This is computed with the term frequency-inverse document frequency (TF-IDF) algorithm, since TF-IDF weight values lie between 0 and 1.
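The cited work does not give its weighting code, but the TF-IDF step itself is standard. Below is a minimal, self-contained Python sketch of TF-IDF weights normalized into the [0, 1] range; the token lists standing in for blockchain-derived data are hypothetical.

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Compute TF-IDF weights per term per document, normalized to (0, 1]."""
    n_docs = len(documents)
    # Document frequency: number of documents containing each term
    df = Counter()
    for doc in documents:
        df.update(set(doc))

    weights = []
    for doc in documents:
        tf = Counter(doc)
        w = {}
        for term, count in tf.items():
            term_freq = count / len(doc)                     # TF in [0, 1]
            inv_doc_freq = math.log(n_docs / df[term]) + 1   # shifted IDF, kept positive
            w[term] = term_freq * inv_doc_freq
        # Normalize by the largest weight so every value lies in (0, 1],
        # matching the alarm-threshold range described in the text.
        top = max(w.values())
        weights.append({t: v / top for t, v in w.items()})
    return weights

# Hypothetical token lists standing in for blockchain-derived records
docs = [["smoke", "alarm", "smoke"], ["temperature", "alarm"]]
print(tfidf_weights(docs))
```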
During image processing, the image is first converted into grayscale. Red, the color associated with fire, has a longer wavelength than green and blue, and in the grayscale image the red light stands out as high intensity, enabling faster and easier fire detection.
Following the grayscale conversion, the image then undergoes binarization to
enhance the contrast of the video or image being inputted into the system. Turning
the colored or grayscale image into binary is also beneficial in reducing pixel
interference.
The image processing concludes with the morphological process, which serves to
eliminate or reduce noise in the image. This includes the erosion and dilation process,
further enhancing the efficiency and accuracy of the fire detection system.
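As an illustration of this grayscale-binarization-morphology pipeline, here is a hedged OpenCV sketch; the input file name, threshold value, kernel size, and minimum region area are assumed tuning parameters, not values from the cited study.

```python
import cv2
import numpy as np

# Hypothetical file name; any BGR frame from a camera works the same way.
frame = cv2.imread("scene.jpg")

# 1. Grayscale conversion: collapses the three channels so bright
#    (fire-like) regions stand out as high intensity values.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 2. Binarization: raises contrast and suppresses low-intensity pixel
#    interference. The threshold of 200 is an assumed value to tune.
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# 3. Morphological opening (erosion followed by dilation) removes
#    small noise blobs while preserving larger candidate fire regions.
kernel = np.ones((3, 3), np.uint8)
eroded = cv2.erode(binary, kernel, iterations=1)
cleaned = cv2.dilate(eroded, kernel, iterations=1)

# Any sufficiently large remaining white region is a fire candidate.
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 100]
print(f"{len(candidates)} candidate fire region(s) found")
```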
In the research conducted in [8], a comprehensive image processing workflow was used to detect fire based on blockchain technology. The process commences with image pre-processing, which reduces the quantity of image data and facilitates valuable data discovery. Subsequently, image segmentation partitions the image into distinct segments, each with its own properties, thereby highlighting areas of interest. A feature map is then created, which assigns values and extracts vital data from the image. Image matching, the final step, compares these extracted features with a mapping table for object or fire recognition.
Throughout the experimentation process, it was noted that high temperatures
triggered a fire alarm response within 13.3 seconds, while high smoke density levels
elicited a response in 18.1 seconds.
In a separate study [9], image processing involving color filtering and histograms was used for fire detection. An examination of RGB (red, green, and blue) and HSV (hue, saturation, and value) color spaces showed that filtering in RGB color spaces was more suitable for detecting fire in static light conditions, such as interiors, than in open areas. However, for image processing, the threshold technique in HSV color spaces was more advantageous, as demonstrated by their previous work [10].
This study also involved color filtering, beginning with an image conversion from
RGB to HSV. The HSV image was subsequently transformed into grayscale and binary
formats for better visibility. Their experimental setup involved placing four different
colors (yellow, blue, green, and red) next to the fire, which were not detected as the
HSV value was set to detect fire color. The optimum parameters for detecting fire
were established as hue ranging from 0–16 to 0–20, saturation from 74–164 to 74–168,
and value from 200–228 to 200–232.
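A hedged OpenCV sketch of this HSV filtering step is shown below. It applies the widest of the reported optimum ranges; conveniently, OpenCV scales hue to 0-179 and saturation/value to 0-255, so the reported numbers fit directly. The input file name is a placeholder.

```python
import cv2
import numpy as np

frame = cv2.imread("flame.jpg")          # hypothetical input image

# RGB (OpenCV stores BGR) -> HSV, as in the cited color-filtering study [9]
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Widest reported optimum ranges: hue 0-20, saturation 74-168, value 200-232
lower = np.array([0, 74, 200])
upper = np.array([20, 168, 232])
mask = cv2.inRange(hsv, lower, upper)

# Pixels inside the range are treated as fire-colored; nearby non-fire
# colors (yellow, blue, green, red objects) fall outside the mask.
fire_pixels = cv2.countNonZero(mask)
print("fire-colored pixels:", fire_pixels)
```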
In the application of drones for fire detection, it was discovered that the onboard
camera could only detect fire if the drone was flying at an altitude not exceeding 8
meters above the fire.
A technique [7] used flame emission spectroscopy in conjunction with image
processing for fire detection, noting the technique’s superior response time compared
to gas sensors and thermal sensors. This method also allowed for a more nuanced
understanding of the combustion process by tracking various oxidants. In terms
of image processing, the YCbCr color space was used, with a temporal smoothing
algorithm improving image quality and enhancing fire detection sensitivity.
Furthermore, the integration of the Internet of Things (IoT) with image processing has been recommended for fire detection [11]. While the authors acknowledged the widespread use of the RGB color model due to its simplicity and applicability in various applications, they noted its limitations in color recognition and object description. By comparison, the HSI color space is easier for humans to understand and handles non-uniform illumination better, though it is unstable owing to its angular nature. The YCbCr color space excels at reducing image size and improving image quality but depends on the original RGB signal. A satellite network and unmanned aerial vehicles (UAVs) are utilized to collect images for fire detection [11], supplemented by sensor nodes placed in smart cities to gather environmental data such as humidity, temperature, light intensity, and smoke.
In a fire-sizing study, a novel approach to active fire detection (AFD) based on interpreting Landsat-8 imagery via Deep Multiple Kernel Learning was explored. The research utilized a recent dataset developed by De Almeida et al.,
containing image patches of 256 × 256 pixels that illustrate wildfires occurring in
various locations across all continents. The dataset thereby provides an opportunity to
adapt the MultiScale-Net to a diverse set of geographical, climatic, atmospheric, and
illumination conditions [12].
The method of Deep Multiple Kernel Learning is highlighted as being particu-
larly effective for spectral-spatial feature extraction from remote sensing images.
To address the limited availability of training samples, the researchers employed a
straightforward data augmentation technique, which led to the generation of 27 dif-
ferent configurations in this study [12].
The research group also introduced a novel indicator, the Active Fire Index (AFI),
devised to enhance the accuracy of thermal and spectral band analysis. AFI draws its
efficacy from the high sensitivity of the Landsat-8 sensor to fire radiation, and it is
derived from the SWIR2 and Blue bands [12].
Active Fire Index (AFI):

AFI = ρ7 / ρ2 (1)

where ρ7 and ρ2 represent the SWIR2 and Blue band values in Landsat-8 images, respectively.
Eq. (1) can thus be used to calculate the AFI. Its utility lies in its ability to distinguish fires from their background, owing to fire's high reflectance in the SWIR2 spectrum and relatively low reflectance in the Blue spectrum [12]. It further proves effective in eliminating smoke and clouds present in the image scenes, which are frequent hurdles in active fire detection.
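As a concrete illustration, a minimal NumPy sketch of Eq. (1) follows; the reflectance values and the decision threshold are assumed, illustrative numbers, not values from [12].

```python
import numpy as np

def active_fire_index(swir2: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Compute AFI = rho7 / rho2 per pixel (Eq. (1)).

    swir2 and blue are reflectance arrays from Landsat-8 band 7 (SWIR2)
    and band 2 (Blue); a small epsilon guards against division by zero.
    """
    return swir2 / (blue + 1e-6)

# Hypothetical reflectance patches; real values come from Landsat-8 scenes.
swir2 = np.array([[0.60, 0.05], [0.55, 0.04]])
blue  = np.array([[0.04, 0.08], [0.05, 0.07]])

afi = active_fire_index(swir2, blue)
# Fires reflect strongly in SWIR2 and weakly in Blue, so a high AFI
# flags candidate fire pixels; the threshold here is an assumed value.
fire_mask = afi > 5.0
print(afi, fire_mask, sep="\n")
```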
However, the AFI method is not without limitations: it can underperform when bright non-fire objects exhibit high reflectance in the SWIR2 band and low reflectance in the Blue band. Furthermore, fire detection accuracy can be compromised by the limited spatial resolution of satellite images and the significant influence of the Earth's dense atmosphere [12].
The independence of the proposed method from the thermal bands paves the way
for future studies to explore the potential of Sentinel-2 data for AFD, which could
offer higher spatial and temporal resolution. Moreover, the research team suggests
that the processing time of the proposed method could be evaluated using cloud
platforms like Google Earth Engine (GEE) [12].
Another novel approach to understanding and modeling wildfires uses a method to isolate individual fires from commonly used moderate-resolution burned-area data [13].
That study acknowledges the necessity of separating individual fires from large clusters of burned area, since extensive burn patches merge many fires together. This is a challenge for current global fire systems, as the available satellite data products only identify active fire pixels. Hence, the authors developed the Global Fire Atlas, which outlines individual fires with a fresh methodology that identifies the direction of fire spread and the position and timing of individual fire regions, and calculates fire size, duration, daily expansion, fire line length, and speed [14].
This Global Fire Atlas algorithm was applied to the MCD64A1 Col.6 burned-area
dataset [14], and it was found that the minimum detected fire size is one MODIS
pixel, equivalent to approximately 250 m × 250 m. This result provides a very precise
and promising avenue for future research.
In another research endeavor, a comprehensive study evaluated the influence of the loss function, architecture, and image type on deep learning-based wildfire segmentation. Two image types were processed, visible and Fire-GAN fused, and the 36 resulting combinations of architectures and loss functions were evaluated against three metrics: MCC, F1 score, and HAF [15].
Their analysis demonstrated that the Akhloufi + Dice + Visible combination yields
the best results for all metrics. Moreover, the Akhloufi architecture and the Focal
Tversky loss function were found to be prevalent in the top five for all metrics. Thus,
considering the performance evaluation and correlation analysis, the combination of
Akhloufi + Focal Tversky + visible was recognized as the most robust performer due
to its consistent results with minimal variance [15].
The firefighting drone functions as a first responder at the fire scene, capable of
warning people in the area and notifying the fire department. Moreover, it comes
with a feature that enables it to extinguish parts of the fire, creating pathways for
people to escape and providing clear paths for firefighters to enter [16].
A system [16] was developed that can deploy a fire extinguishing ball to assist in wildfire fighting. The payload deployment mechanism was constructed from components such as a motor, power supply, and receiver, allowing the drone to receive a signal from the user to open the valve via the motor and release the fire extinguishing ball.
The researchers found that a 0.5 kg fire extinguishing ball was insufficient to extinguish fires effectively. The 0.5 kg weight was chosen for the experiment due to budget constraints; the ball took approximately 3–4 seconds to activate and covered an area of about a meter in diameter. However, the researchers believe that a larger fire extinguishing ball could extinguish fires over a larger area; according to the official webpage, the founder mentioned that the 1.3 kg fire extinguishing ball can cover up to 3 cubic meters [16].
The YOLOv4 algorithm was improved [17] by reducing the neural network's weight, using the Kernelized Correlation Filter (KCF) algorithm for tracking, and updating the template with an adaptive learning rate when tracking fails. The results were obtained by testing the adaptive tracking strategy on a multi-rotor copter. KCF used a non-linear kernel function to handle non-linearity and non-stationarity, and the threshold value of the average peak-to-correlation energy (APCE) metric was adapted to provide a more accurate evaluation of the tracking algorithm's performance [17].
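For orientation, a minimal OpenCV sketch of plain KCF tracking follows. It requires the opencv-contrib-python build; the video path and initial bounding box are placeholders, and the cited adaptive template update and APCE thresholding are not reproduced here.

```python
import cv2

# KCF tracker as shipped with opencv-contrib-python; the cited work [17]
# extends KCF with an adaptive template update, not shown in this sketch.
video = cv2.VideoCapture("uav_footage.mp4")   # hypothetical clip
ok, frame = video.read()

# The initial bounding box would normally come from the YOLO detector.
bbox = (300, 200, 80, 60)                     # x, y, width, height (assumed)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = video.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)
    if not ok:
        # This is where the cited strategy re-detects with YOLO and
        # re-initializes the template after a tracking failure.
        print("tracking lost; hand back to the detector")
        break
    x, y, w, h = map(int, bbox)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```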
Another study used YOLOv5 for object detection and combined it with the DeepSORT algorithm for tracking and positioning objects using UAVs [18]. DeepSORT is a real-time object tracking algorithm that combines a deep neural network-based appearance model with SORT (Simple Online and Realtime Tracking) to track objects effectively even in challenging situations [18].
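A hedged sketch of such a pairing is shown below, assuming the publicly available ultralytics YOLOv5 hub model and the third-party deep-sort-realtime package; the cited work does not specify its exact implementation, and the video path is a placeholder.

```python
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort  # assumed package

# Pre-trained YOLOv5 from the ultralytics hub; a fire/smoke-tuned
# checkpoint could be substituted for the generic weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
tracker = DeepSort(max_age=30)

video = cv2.VideoCapture("uav_footage.mp4")   # hypothetical clip
while True:
    ok, frame = video.read()
    if not ok:
        break
    results = model(frame[..., ::-1])         # BGR -> RGB for the hub model
    # DeepSort expects ([left, top, width, height], confidence, class) tuples.
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = xyxy
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))
    tracks = tracker.update_tracks(detections, frame=frame)
    for t in tracks:
        if t.is_confirmed():
            print(t.track_id, t.to_ltrb())     # persistent ID + box
```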
The nearest neighbor (NN) method was used [19] to measure and track selected objects, with the nearly constant velocity (NCV) motion model for discrete-time kinematic modeling of the target. The NCV model, a common choice in object tracking, assumes that the target's velocity is constant over a short period and that the target's position at each time step is determined by its position and velocity at the previous time step; a Kalman filter is then used to predict and estimate the target's position and velocity at each time step [19]. In the NCV model, the target's state at time step k is represented by the state vector x_k = [x_k, y_k, vx_k, vy_k]^T, where x_k and y_k are the target's position coordinates and vx_k and vy_k are its velocity components. The state transition function for the NCV model is given by:
Nearly constant velocity (NCV):

x_k = F x_{k−1} + w_{k−1} (2)
where F is the state transition matrix and w_{k−1} is the process noise. F is typically an identity matrix with off-diagonal entries equal to the time step between frames, coupling each position component to its velocity component. The Kalman filter algorithm then estimates the target's position and velocity at each time step using the state transition function and the measurement function; the measurement function relates the target's state vector to its measured position in the current frame, and the filter uses the measurement and the prediction to correct the estimate of the state vector. The NCV motion model is simple to implement and computationally efficient, making it well suited to real-time tracking applications. However, it does not consider the acceleration of the target, which can lead to errors in the estimated position and velocity when the target's motion is not truly constant.
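A self-contained NumPy sketch of this predict-correct cycle with the NCV transition matrix of Eq. (2) follows; the frame rate and the noise covariances Q and R are assumed values.

```python
import numpy as np

dt = 1.0 / 30.0          # time step between frames (assumed 30 fps)

# NCV state transition matrix F: identity with dt coupling position to
# velocity, so x <- x + vx*dt and y <- y + vy*dt, as in Eq. (2).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # measurement function: observe position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3            # process noise covariance (assumed)
R = np.eye(2) * 1e-1            # measurement noise covariance (assumed)

x = np.zeros(4)                 # state [x, y, vx, vy]
P = np.eye(4)                   # state covariance

def kalman_step(x, P, z):
    # Predict with the NCV model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct the prediction with the measured position z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([1.0, 1.0]), np.array([1.1, 1.05]), np.array([1.2, 1.1])]:
    x, P = kalman_step(x, P, z)
print("estimated position and velocity:", x)
```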
A method was proposed for recognizing the self-location of a drone flying in an indoor environment using the DWM1000 ultra-wideband communication module [20]. The self-localization algorithm uses trilateration and the gradient descent method to determine the drone's position with an error within 10–20 cm. Real-time 3D position information of the drone can be obtained and used for autonomous flight control
through a deep learning-based control scheme. This scheme improves upon a conven-
tional CNN algorithm by incorporating deeper layers and appropriate dropouts in the
CNN structure, which uses input data from a single camera. When a drone experi-
ences drift, it may not move in a straight line but instead veer slightly to one side. In
this situation, the real-time first-person view (FPV) of the drone is used as input data
in the proposed CNN structure to predict the pulse-width modulation (PWM) value
for the roll direction, which helps to correct the drift and keep the drone moving in
the desired direction. The goal of this work was to create a drone that could safely
navigate in environments where GPS signals are not available, such as tunnels and
underground parking garages. To achieve this, a control board based on a digital
signal processor and ultra-wideband modules were developed, along with control
algorithms for stable flight. A 3D position estimation algorithm using a proposed 2D
projection method was implemented to determine the drone’s position, which was
then used for hovering motion control and way-point navigation.
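The paper does not list its solver code; below is a minimal NumPy sketch of trilateration by gradient descent on squared range residuals. The anchor layout, measured ranges, learning rate, and iteration count are all assumed, illustrative values standing in for DWM1000 readings.

```python
import numpy as np

# Hypothetical fixed UWB anchor positions (m) and measured ranges (m)
anchors = np.array([[0.0, 0.0, 0.0],
                    [5.0, 0.0, 0.0],
                    [0.0, 5.0, 0.0],
                    [5.0, 5.0, 3.0]])
ranges = np.array([3.2, 3.9, 3.6, 4.1])

def localize(anchors, ranges, lr=0.1, iters=500):
    """Estimate position by gradient descent on sum((||p - a_i|| - r_i)^2)."""
    p = anchors.mean(axis=0)                     # initial guess: centroid
    for _ in range(iters):
        diff = p - anchors                       # (N, 3)
        dists = np.linalg.norm(diff, axis=1)     # predicted ranges
        residuals = dists - ranges
        # Gradient: sum over anchors of 2 * residual_i * (p - a_i) / ||p - a_i||
        grad = 2 * (residuals / dists) @ diff
        p -= lr * grad
    return p

print("estimated drone position:", localize(anchors, ranges))
```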
4. 3D area mapping
UAVs offer great potential for operating airborne sensor systems and computer vision for monitoring 3D buildings and for spatial planning. The Swinglet CAM was a fixed-wing system used for acquiring a 3D point cloud over a region, with the help of a few sensors: an RGB sensor for georeferenced image pairs, a GNSS system for capturing real-time points, and an inertial measurement unit (IMU). Parameters such as the position of the laser sensors can be obtained from one or more GNSS base stations, while the IMU provides the sensor's altitude and heading angles (roll, pitch, and yaw). Moreover, LiDAR or satellite imagery can serve as tools for extracting 3D point clouds. Information about the mapped building, such as height, area, and volume, can be extracted using 3D extraction of building parameters (3DEBP).
The UAV point cloud can be obtained by multi-stereo image matching over eight ground control points, allowing a set of 3D points to be estimated from the stereo pair pixels. Millions of points are captured, so the points are filtered with the Clustering LARge Applications (CLARA) algorithm. CLARA suits large datasets such as point clouds because of its ability to partition the data into sub-groups, as sketched below.
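A compact, hedged Python sketch of the CLARA idea follows: k-medoids is run on small random sub-samples, and the medoid set that best explains the whole cloud is kept, avoiding the full pairwise-distance matrix a raw point cloud would require. Cluster count, sample sizes, and the toy point cloud are assumed values.

```python
import numpy as np

def k_medoids(points, k, iters=20, rng=None):
    """Plain k-medoids (PAM-style alternation) on a small point set."""
    rng = rng or np.random.default_rng()
    medoids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - medoids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                # Medoid = member minimizing total distance to its cluster
                within = np.linalg.norm(members[:, None] - members[None], axis=2)
                medoids[j] = members[within.sum(axis=1).argmin()]
    return medoids

def clara(cloud, k=5, n_samples=5, sample_size=200, rng=None):
    """CLARA: k-medoids on random sub-samples, keep the best medoid set."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_medoids = np.inf, None
    for _ in range(n_samples):
        sample = cloud[rng.choice(len(cloud), sample_size, replace=False)]
        medoids = k_medoids(sample, k, rng=rng)
        # Cost is evaluated on the *whole* cloud, not just the sample
        cost = np.linalg.norm(cloud[:, None] - medoids[None],
                              axis=2).min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_medoids = cost, medoids
    return best_medoids

cloud = np.random.default_rng(0).normal(size=(10_000, 3))  # toy 3D points
print(clara(cloud, k=5))
```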
There was, however, a mean error in the estimated building block volume relative to the original dataset.
In the early days, fire events were monitored using satellite systems, prized for efficiency, accuracy, and automatic operation [21]. However, this method was unreliable in remote areas, where satellite signals are poor. Among the alternatives for fire monitoring is remote sensing, which has the advantage of large-area monitoring and efficient high-spatial-resolution imagery. InSAR techniques, which provide area surveying and mapping, have the advantage of monitoring day and night in different weather conditions, but they are limited by their revisit times, whereas fire monitoring requires continuous observation. With satellite imaging, the probability of false alarms is high, and dense fog, cloud, and rain may affect monitoring.
One method uses UAVs as a replacement for satellite imaging, since UAVs provide real-time monitoring at low altitude and fast data acquisition, and are reliable and simple to operate. Attaching a digital camera is ideal, as it can capture a basic surface model and regular images for analysis; another option is mounting a multi-lens camera to obtain proper texture information for the mapped area. This shows the strength of UAVs compared with the traditional method of satellite imaging. By implementing image processing algorithms on the UAVs, drone mapping can be done with the help of the ThingSpeak cloud data-analytics platform, which provides access to the data both online and offline. The DroneDeploy software, which offers web- and app-based platforms, enables drone mapping and can provide high-resolution images and videos of the target area.
With the help of 3D mapping with SLAM, even rovers on Mars and the Moon can estimate their current positions and construct a map of their surroundings. Sensors such as RGB-D cameras and LiDAR can provide dense 3D point clouds, but limited communication links, data storage, and power supply can be a problem. Alternatively, a stereo camera or monocular vision can work, thanks to light weight and lower power consumption. Visual SLAM mapping can be performed using pairs of stereo images to train a deep learning model that estimates a disparity map, which is then used to create the 3D point cloud.
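The deep disparity model itself is paper-specific, but the disparity-to-point-cloud step can be illustrated with a classical stand-in. The OpenCV sketch below estimates disparity with semi-global block matching and reprojects it to 3D; the image paths and camera parameters (focal length, principal point, baseline) are assumed values.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair; a deep model would replace SGBM here.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classical semi-global block matching as a stand-in disparity estimator
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point

# Assumed camera parameters for the reprojection matrix Q
f, cx, cy, baseline = 700.0, 320.0, 240.0, 0.12
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, -1 / baseline, 0]])

# Disparity map -> dense 3D point cloud, as described in the text
points = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0
cloud = points[valid]
print("3D points:", cloud.shape)
```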
A good implementation of a SLAM technique for terrain perception, and the quality of the mapping results, are determined by the choice of sensors and sensor fusion. According to the research paper [22], different methods have been used. First, a monocular SLAM framework with an extended Kalman filter (EKF) can track the unconstrained motion of a rover, but for this to work, distinct features must be well distributed. The biggest problems with this method are scale ambiguity and measurement drift, since a single camera provides no inertial or range sensing. Alternatively, RGB-D SLAM acquires both per-pixel depth and visual texture information; however, lighting conditions can interfere, causing noisy and homogeneous point clouds. Some researchers proposed LiDAR SLAM, as it can solve these robustness problems; the paper mentions a global terrain map built with a sparse method and a batch alignment algorithm [22]. LiDAR-camera fusion with SLAM has great advantages, since it can make full use of both sensors. Stereo SLAM was also proposed, because it can create a terrain map through sensor fusion and, using a Gaussian-based odometry error model, improve accuracy by predicting the non-systematic error from wheel interaction.
To prove the concept, training the dataset on Earth was not practical, since the focus of this paper was on Mars and the Moon. A self-supervised deep learning model was used: 2D and 3D convolution layers in a geometry-based CNN were trained on image subsets collected by pairs of stereo cameras. The 2D convolution layers extract features from each image and construct a cost volume; the 3D convolution layers then aggregate the volume to infer, for each pixel, a probability distribution over disparity values. Lastly, the disparity map is regressed from that probability distribution.
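To make the pipeline concrete, here is a minimal PyTorch sketch of that 2D-features / cost-volume / 3D-aggregation / soft-argmin regression pattern. It is a toy network under assumed sizes (disparity range, channel counts), not the architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyStereoNet(nn.Module):
    """Minimal cost-volume stereo network in the spirit described above."""

    def __init__(self, max_disp=32, feat=16):
        super().__init__()
        self.max_disp = max_disp
        self.feature = nn.Sequential(               # shared 2D feature extractor
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.aggregate = nn.Sequential(             # 3D convs over the volume
            nn.Conv3d(2 * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 1, 3, padding=1))

    def forward(self, left, right):
        fl, fr = self.feature(left), self.feature(right)
        b, c, h, w = fl.shape
        # Cost volume: concatenate left features with right features
        # shifted by every candidate disparity d
        volume = fl.new_zeros(b, 2 * c, self.max_disp, h, w)
        for d in range(self.max_disp):
            volume[:, :c, d, :, :] = fl
            if d > 0:
                volume[:, c:, d, :, d:] = fr[:, :, :, :-d]
            else:
                volume[:, c:, d, :, :] = fr
        cost = self.aggregate(volume).squeeze(1)    # (B, D, H, W)
        # Soft-argmin: per-pixel probability distribution over disparities,
        # then the disparity map is regressed as its expectation
        prob = F.softmax(-cost, dim=1)
        disps = torch.arange(self.max_disp, device=cost.device, dtype=prob.dtype)
        return (prob * disps.view(1, -1, 1, 1)).sum(dim=1)

left = torch.rand(1, 1, 64, 64)                     # toy stereo pair
right = torch.rand(1, 1, 64, 64)
print(TinyStereoNet()(left, right).shape)           # torch.Size([1, 64, 64])
```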
According to the research, SLAM mapping using a combination of a single-scan Terrestrial Laser Scanning (TLS) point cloud and a Mobile Laser Scanning (MLS) point cloud can address the consistency problem and maintain mapping accuracy without a GNSS-IMU system. LiDAR odometry and global optimization are the keys to the solution. The LiDAR odometry estimates the motion of each detected MLS point cloud frame relative to the single-scan TLS data. Real and virtual features are extracted for tree stem mapping using the NDT algorithm: virtual features are samples that reconstruct the tree stem centrelines, while real features evenly sample the point cloud. A layer clustering method extracts the virtual features, whereas a Difference of Gaussians (DoG) method extracts the real features. Global optimization mitigates the accumulated error and transforms all point clouds into a common coordinate system, adopting a global-map-based method that does not rely on loop closure and adjusts all data to address the global optimization of SLAM.
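As a small aside on the feature extractor, the Difference of Gaussians is simply a band-pass built from two blurs. The SciPy sketch below applies it to a 2D raster, assuming the point cloud has already been rasterized into an intensity image; the sigmas and threshold are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed input: a 2D occupancy/intensity image rasterized from the
# point cloud; the DoG response highlights blob-like feature candidates.
image = np.random.default_rng(0).random((128, 128))

def difference_of_gaussians(img, sigma_small=1.0, sigma_large=2.0):
    """DoG = band-pass filter: fine Gaussian blur minus coarse Gaussian blur."""
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

dog = difference_of_gaussians(image)
# Keep strong responses as feature candidates (threshold is an assumed value)
features = np.argwhere(dog > dog.mean() + 2 * dog.std())
print(f"{len(features)} DoG feature candidates")
```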
This research presented the optimization of SLAM using the Gmapping algorithm. Several SLAM algorithms have been invented, such as Gmapping, Hector SLAM, and Karto SLAM. Standard datasets and a ground-truth map were used, and the parameters from the source code, a C++ program, were optimized in two different ways: optimizing parameters separately and observing the results, or optimizing parameters collectively and observing the results [23].
The first method optimizes the parameters separately, according to their functions. The chosen parameters are editable, meaning the user can provide new values and run the program again to evaluate it against the dataset. The main parameters were changed uniformly, in an equitable manner (uniform optimization); a second reference parameter was the time between successive recalculations of the map, the map update interval. The Gmapping algorithm was used in this method. After optimization, the map results and the significant metrics varied across the different parameter tests.
Optimizing parameters separately can lead to small changes in the map relative to the dataset. Although the results look similar to each other when the parameters are optimized separately, this does not imply uniform improvement, since only a single parameter is changed at a time [23].
In the journal article titled "Mapping and 3D modeling using quadrotor drone and GIS software," the main obstacle in 3D mapping is identified as the cost of data acquisition for high-resolution satellite imagery, especially for mapping on a weekly or daily basis. The researchers proposed UAVs as an alternative: low cost, high-resolution images, and acquisition at almost any time with few restrictions to comply with [24]. The paper approaches the problem by developing 3D models from photogrammetric data taken by quadcopter drones. To acquire high-resolution images, an industrial drone, the DJI Inspire 2, was used, supported by a high-quality camera on a gimbal so that the images taken were stable and suitable for mapping activities. The camera was notable as the first aerial camera capable of recording lossless 4K video in RAW at frame rates up to 30 fps with an average bitrate of 1.7 Gbps. With a powerful Micro Four Thirds (MFT) sensor, the camera achieves high image quality while staying light and compact, making it great for recording anywhere, anytime. It also captures detailed 20 MP stills, stabilized by an integrated 3-axis gimbal. Of course, with such a high-spec camera, the specification of the processing PC must be top-notch as well. The software used was Agisoft Metashape, which performs photogrammetric processing of digital images and generates 3D spatial data for use in GIS applications.
SLAM can be used to create a map of unknown terrain by estimating the positions of obstacles from the created map. The Visual Simultaneous Localization and Mapping (VSLAM) method can be realized with a monocular camera. For comparison and better results, the two main VSLAM approaches can be used: the direct method and the feature-based method, both of which extract information from the images taken by the camera. In the journal, Large-Scale Direct SLAM (LSD-SLAM) was preferred for the direct method: it uses whole images as input, and the point cloud is built up from the image pixels. For the feature-based method, the preferred choice was ORB-SLAM, where ORB features are extracted and the representation of pixels is recorded [25]. In terms of accuracy, the direct method is more accurate, as more points are captured and recorded than with the feature-based method.
As discussed in this section, 3D area mapping with SLAM is a very good approach, as it reconstructs and maps unknown terrain. There are many types of SLAM, but VSLAM is among the best known, as it can directly acquire points from images taken by a monocular camera. The direct and indirect methods are the two approaches to implementing this SLAM. The direct method, using LSD-SLAM, can achieve better results, as it uses whole images as input and directly acquires pixels of varying density for map plotting; it is often the better choice when the surroundings have poor lighting and poorly textured surfaces. In comparison, the indirect method uses ORB-SLAM, extracting a subset of features as observations for the 3D map.
5. Conclusion
2. Payload capacity and motor power: linked to the issue of battery power is the
drone’s limited payload capacity and insufficient motor power. These constraints
have been found to restrict the ability to lift heavier objects or carry additional
equipment, such as larger batteries. The consequent low lift capacity hampers
the drone’s maneuverability, especially under adverse weather conditions or
heavy payload.
4. Camera performance: the last major limitation noted is the suboptimal per-
formance of the drone’s camera, delivering lower-resolution photos and video
streams. This affects the drone’s efficiency in tasks requiring high-definition
imagery, such as aerial surveillance or inspection.
However, these limitations are not insurmountable, and the chapter has also
provided a pathway towards mitigating these challenges:
Author details
© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References

[1] Nasir NA, Rijal NS. Fire safety management systems at commercial building: Pasir Puteh supermarket. e-Proceeding. 50 p

[2] Timbuong J. Firefighters dealt with more fires in first MCO. The Star Newspaper. 2020

[3] Fahy RF, LeBlanc PR, Mollis JL. Firefighter Fatalities in the United States-2010. Quincy, MA: National Fire Protection Association; 2011

[4] Tan CF, Dhar MS. Fire fighting mobile robot: State of the art and recent development. Australian Journal of Basic and Applied Sciences. 2013;7:220-230

[5] Averill J, Moore-Merrell L, Ranellone R Jr, Weinschenk C, Taylor N, Goldstein R, et al. Report on High-Rise Fireground Field Experiments. Technical Note (NIST TN). Gaithersburg, MD: National Institute of Standards and Technology; 2013. DOI: 10.6028/NIST.TN.1797

[8] … Information Systems. 2022;2022:1-11. DOI: 10.1155/2022/9304991

[9] Ahmad A, Prakash O, Khare K, Bhambhu L. Detection of flames in video by color matching and energy separation. Journal of Real-Time Image Processing. 2019;16(5):1625-1642

[10] Ahmad A, Khare K, Bhambhu L. Detection of flames in video by color matching. IEEE Sensors Journal. 2017;17(6):1795-1802

[11] Amit K, Rajat C, Tripti S, Bhaskar T. A review of fire detection systems with new results on image processing for early stage forest fire detection. Wireless Networks. 2020;26:4577-4595. DOI: 10.1007/s11276-019-02125-7

[12] Rostami A, Shah-Hosseini R, Asgari S, Zarei A, Aghdami-Nia M, Homayouni S. Active fire detection from Landsat-8 imagery using deep multiple kernel learning. Remote Sensing. 2022;14(4):992. DOI: 10.3390/rs14040992