
Construction and Building Materials 252 (2020) 119096


Concrete crack detection and quantification using deep learning and structured light

Song Ee Park a, Seung-Hyun Eem b, Haemin Jeon a,*
a Dept. of Civil and Environmental Engineering, Hanbat National University, Daejeon 34158, Republic of Korea
b School of Convergence and Fusion System Engineering, Kyungpook National University, Sangju 37224, Republic of Korea

Highlights

• Deep learning and structured light have been used to detect and quantify cracks.

Article history:
Received 26 October 2019
Received in revised form 13 March 2020
Accepted 5 April 2020

Keywords:
Structural health monitoring
Crack
Detection
Quantification
Deep learning
Structured light

Abstract

Considering the deterioration of civil infrastructures, the evaluation of structural safety by detecting cracks is becoming increasingly essential. In this paper, the advanced technologies of deep learning and structured light, composed of vision and two laser sensors, have been applied to detect and quantify cracks on the surfaces of concrete structures. The YOLO (You Only Look Once) algorithm has been used for real-time detection, and the sizes of the detected cracks have been calculated based on the positions of the projected laser beams on the structural surface. Since laser beams may not be projected in parallel due to installation or manufacturing errors, a laser alignment correction algorithm with a specially designed jig module and a distance sensor is applied to increase the accuracy of the size measurement. The performance of the algorithm has been verified through simulations and experimental tests, and the results show that cracks on structural surfaces can be detected and quantified with high accuracy in real time.

© 2020 Elsevier Ltd. All rights reserved.

1. Introduction

The assessment of structural damage has attracted increasing attention in the past several decades due to increased consideration of aging structures and awareness of civil infrastructure safety. The presence of external structural damage such as cracks is an important parameter that directly indicates structural safety [1,2]. To date, the conditions of structures have been diagnosed by monitoring cracks using manpower. This conventional method is limited in terms of objectivity, since the monitoring performance highly depends on the proficiency of the experts and on the level of accessibility to massive civil structures. The global structural health monitoring market is expected to grow at a compound annual growth rate of 17.9%, from 1.5 billion USD in 2018 to 3.4 billion USD in 2023 [3]. For these reasons, there is an increasing social demand for automated crack detection and quantification systems.

In addition, one of the most conventional methods, impact echo, has the disadvantages that 1) the detection is only available for highly viscous materials and 2) the received signal is very dependent on the structural shape and size [4,5]. Another conventional method, ultrasonic pulse velocity, 1) requires a lot of time for inspection and 2) is difficult to apply to structures with rough surfaces [6,7]. The limitations of the impulse response method are that 1) data reliability is highly dependent on the inspection point and 2) small defects are difficult to detect with the method [8,9]. Both the acoustic emission and infrared thermography methods are affected by background environments such as noise and atmospheric temperature [10,11]. Each of the methods mentioned above has its own advantages and disadvantages, and studies are being conducted to combine two or more methods to overcome these limitations [5,12]. On the other hand, deep learning can be applied to various fields regardless of these limitations once the data necessary for training are secured. Therefore, non-destructive studies of structures using deep learning have been actively conducted in recent years [13–18].

* Corresponding author. E-mail address: hjeon@hanbat.ac.kr (H. Jeon).

https://doi.org/10.1016/j.conbuildmat.2020.119096
0950-0618/© 2020 Elsevier Ltd. All rights reserved.

Cha et al. proposed a crack detection algorithm using deep learning with a Convolutional Neural Network (CNN), as well as a multiple-damage detection algorithm covering steel delamination, steel corrosion, bolt corrosion, and concrete cracks using a Faster Region-based CNN (Faster R-CNN) [13,14]. For real-time crack detection, Choi et al. proposed a deep learning network, SDDNet, optimized for crack detection in images with a wide range of background features [15]. Zhang et al. proposed CrackNet, an efficient architecture based on CNN but without pooling layers, that guarantees high precision [16]. Kim et al. proposed a crack identification method using R-CNN for crack detection and a square-shaped marker for measuring the size of the detected cracks [17]. A volumetric damage quantification method using a depth camera has been proposed by Beckman et al. [18]. However, most of the aforementioned studies have one of the following limitations: 1) the size of the detected damage is not provided [13–15], 2) quantification is performed only in pixel units [16], or 3) installation of external markers is needed for quantification [17]. In comparison with the damage detection and quantification results obtained with a depth camera in [18], the proposed method shows a smaller quantification error with a combination of a structured light and a distance sensor.

In this paper, a crack detection and quantification method using structured light and machine learning is proposed, without the need for any external attachment and with high accuracy. In this method, cracks on the surface of concrete structures are detected and localized with the deep learning algorithm, and they are quantified using a structured light composed of vision and laser sensors. Since it is not possible to project laser beams in parallel and the accuracy of the size measurement is highly dependent on the positions of the projected laser beams, a laser pose calibration algorithm with a specially designed jig module and a distance sensor, which can calibrate the positions of the beams over a varying range, has been developed. After considering the laser pose, the extent of the crack can be measured with improved accuracy. The proposed system is validated through simulations and experiments, and the results indicate that the proposed system can successfully detect cracks and estimate their size without prior knowledge or any installation.

The rest of this paper is organized as follows. In Section 2, an overview of the proposed crack detection using a structured light and deep learning method is described. In Section 3, crack quantification using a structured light and a distance sensor is explained. The results of simulations and experiments are discussed in Section 4. Finally, a discussion of the findings is presented in Section 5.

2. Crack detection using deep learning

2.1. Detection of crack and laser beams using YOLO

As shown in Fig. 1, the crack detection and quantification method using a structured light composed of vision and laser sensors, as well as a distance sensor, is proposed. The deep learning algorithm with the vision sensor is used to detect the structural cracks, and the detected cracks are quantified using the structured light and a distance sensor. Due to alignment or installation errors associated with the lasers, a laser pose calibration with a specially designed jig module is performed in advance. For the crack detection, deep learning, which has recently emerged as a powerful method for a wide range of problems including object detection by automatically learning feature representations from given data, has been used. Object detection determines the existence of objects within predefined categories and returns the positions and probabilities of the objects if they exist.

Fig. 1. Schematic diagram of crack detection and quantification using deep learning and structured light.

The Convolutional Neural Network (CNN) is composed of convolutional layers, activation layers, pooling layers, a fully connected layer, and a softmax layer [19,20]. The convolution layer includes a set of filters with learnable weights, and the input data are convolved with each filter in consideration of the sizes of the padding, stride, and filters. The activation layer of the Rectified Linear Unit (ReLU) is applied, which mitigates the vanishing gradient problem. The feature (activation) map, which is the output of the convolution step, is down-sampled by the max pooling layer. This step slides a filter over the feature map and outputs a representative value, such as the maximum, average, or L2-norm value, from each sub-region. Through this step, the feature detection becomes sufficiently robust to prevent overfitting, and the computational cost is reduced as well. The fully connected layer connects to all of the neurons, and the softmax layer is the multiclass classifier which calculates the probability of each individual class and returns the class with the highest probability. Since the CNN method has strong advantages of shift, scale, and distortion invariance, it has been applied to various fields, including structural crack detection.
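For readers unfamiliar with these layers, a minimal TensorFlow/Keras sketch of the building blocks named above (convolution, ReLU activation, max pooling, a fully connected layer, and softmax) is shown below. It is not the network used in this work (the detector is YOLOv3-tiny), and the input size and three-class output are assumptions made only for illustration.

```python
# Minimal CNN sketch illustrating the layer types described above; NOT the authors' network.
import tensorflow as tf

def build_toy_cnn(input_shape=(128, 128, 3), num_classes=3):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        # Convolution with learnable filters; padding and stride control the output size.
        tf.keras.layers.Conv2D(16, kernel_size=3, padding="same", strides=1, activation="relu"),
        # Max pooling down-samples the feature map and adds some shift robustness.
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        # Fully connected layer followed by a softmax multiclass classifier
        # (e.g., crack / laser / pseudo-crack).
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_toy_cnn().summary()
```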
In order to reduce the computation time of CNN-based detection, R-CNN (Region-based CNN), which includes a region proposal algorithm, was proposed. Rather than selecting a large number of regions with sliding-window techniques, about 2000 candidate regions are extracted (cropped) and warped into a square, and the warped images are used as input images for the CNN. Since R-CNN still has a drawback in terms of computation time, as the CNN is run on each warped image, Fast R-CNN was proposed: instead of cropping the input image, the CNN is applied to the entire image, and candidate regions are extracted from the feature map. The Faster R-CNN method further speeds up object detection by using a region proposal network; in Faster R-CNN, the region proposal network shares convolutional features with the detection network, and this reduces the computation time [21]. Rather than using only a bounding box for object localization, Mask R-CNN proposed a pixel-level object instance segmentation method by adding a branch to Faster R-CNN that outputs a binary mask for each region of interest [22]. Mask R-CNN combines Faster R-CNN with a fully convolutional network on the region of interest: Faster R-CNN plays the role of object detection, and the added fully convolutional network performs mask segmentation on each region of interest, so that object detection and semantic segmentation are carried out at the same time.
In order to speed up object detection, the Single Shot Detector (SSD) algorithm was introduced [23]. Since SSD completely eliminates the region proposal generation as well as the subsequent pixel or feature resampling steps, and all computations are performed in a single network, its computational speed and accuracy are higher than those of the previously proposed algorithms. A real-time detection algorithm with even faster speed, You Only Look Once (YOLO), has also been introduced [24–27]. YOLO detects in real time, without sacrificing detection accuracy, by using a single convolutional network that simultaneously predicts bounding boxes and image classes. YOLO divides the input image into multiple grid cells with the same height and width, and an object is assigned to a grid cell if the center of the object falls within that cell. Each grid cell predicts bounding boxes composed of the following elements: the coordinates of the center of the box relative to the boundary of the grid cell, the width and height of the predicted object as ratios of the entire image, and the confidence score, since it is desired that the predicted and actual bounding box values coincide. The confidence score indicates the probability that an object is present in the predicted bounding box and how well the predicted box matches the object; if there is no object in the grid cell, the confidence score drops to zero. The latest version of YOLO, YOLOv3, proposed by Joseph Redmon, uses the Darknet-53 network, which has 53 convolutional layers, and the objectness score of each bounding box is calculated using logistic regression. In order to detect cracks in real time, a light version of YOLOv3 (YOLOv3-tiny), which achieves a detection speed of 220 frames per second, has been used in this paper.
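Following the original YOLO formulation [24], the confidence score described above can be written as Pr(object) multiplied by the intersection over union (IoU) between the predicted and ground-truth boxes. The short sketch below only illustrates that definition; the box coordinates and objectness value are illustrative, not taken from the paper.

```python
# Sketch of the YOLO-style confidence score: confidence = Pr(object) * IoU(pred, truth).
# Boxes are (x_min, y_min, x_max, y_max) in pixels; the values below are illustrative.
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def confidence(p_object, pred_box, true_box):
    """Confidence of a predicted box; zero when the grid cell contains no object."""
    return p_object * iou(pred_box, true_box)

print(confidence(0.9, (10, 10, 60, 60), (12, 8, 58, 62)))
```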

The performance of object detection using deep learning is highly dependent on the size of the training dataset. Since it is hard to collect sufficient amounts of data, transfer learning employing a network pre-trained on a very large dataset has been used [28]. In this paper, images of cracks, lasers, and pseudo-cracks are collected to build a database. The collected images are separated into three classes, cracks, lasers, and pseudo-cracks, and they are manually labelled with bounding boxes as shown in Fig. 2. Following the labelling, the image dataset is trained with the YOLOv3-tiny architecture and the COCO dataset with approximately 330,000 training images [27,29], which is coded in a Python environment with the TensorFlow open-source machine learning library.

Fig. 2. Labelling of (a) cracks, (b) lasers, (c) pseudo-cracks, and (d) their combination.

2.2. Integration of multiple detected cracks using image processing technology

Since a single crack is usually detected within multiple bounding boxes by the deep learning algorithm, an algorithm that recognizes and integrates multiple region proposals belonging to a single crack has been applied, as shown in Table 1. As shown in the table, the bounding boxes detected with deep learning are sorted by their coordinates, and the diagonal length of each box is calculated. The calculated diagonal length is compared with the distance from the center of the previous bounding box to the nearest corner of the current bounding box. If one of the corners of the current bounding box is located within the previously detected one, the boxes are integrated and regenerated into a single region. After merging the bounding boxes, image processing techniques such as histogram equalization, image binarization, and Harris corner detection are used to detect the centers of the projected laser beams and the contour of the crack.
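The exact image processing chain is not specified beyond the operations named above, so the OpenCV sketch below shows only one plausible realization: histogram equalization, a bright-spot threshold with blob centroids for the laser beams, and Otsu binarization with contour extraction for the crack. The thresholds, the centroid-based laser localization, and the synthetic test image are assumptions.

```python
# Hedged sketch of the post-processing inside a detected bounding box.
import cv2
import numpy as np

def analyze_crop(crop_bgr):
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                       # histogram equalization

    # Laser dots appear as small saturated bright blobs: threshold near the top of the
    # range and take blob centroids as the beam centers (an assumed, simple localization).
    _, bright = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    blobs, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    laser_centers = []
    for b in blobs:
        m = cv2.moments(b)
        if m["m00"] > 0:
            laser_centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # Cracks are darker than the surrounding concrete: inverted Otsu binarization
    # followed by contour extraction gives the crack outline in pixel coordinates.
    _, crack_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    crack_contours, _ = cv2.findContours(crack_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return laser_centers, crack_contours

if __name__ == "__main__":
    # Synthetic stand-in for a cropped detection: gray background, dark crack, bright spot.
    crop = np.full((200, 300, 3), 120, np.uint8)
    cv2.line(crop, (20, 100), (280, 110), (30, 30, 30), 3)      # dark "crack"
    cv2.circle(crop, (60, 50), 4, (255, 255, 255), -1)          # bright "laser" spot
    centers, contours = analyze_crop(crop)
    print(len(centers), "laser candidate(s),", len(contours), "crack contour(s)")
```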

Table 1
Integration algorithm of multiple detected bounding boxes in a single crack.

For bounding boxes n = 1, 2, 3, ...
1: Initialization
   Save the region proposals sorted by their X-coordinates, x_n, and calculate the diagonal length of each region, l_n.
2: Identification
   Calculate the distance, d_{n-1,n}, from the center of the (n-1)th bounding box to the corner of the nth bounding box.
   If 0.5 l_n > d_{n-1,n}, go to step 3.
3: Integration
   Integrate the regions by defining the corners as the joint top-left and bottom-right ones.
4: Update
   Update the total number of cracks to n-1 and the region proposals to x_{n-1}.
5: Repeat steps 2–4.
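A Python sketch of the steps in Table 1 is given below, under the assumption that boxes are given as (x_min, y_min, x_max, y_max) in pixel coordinates. The greedy left-to-right pass and the helper names are illustrative rather than the authors' code, and the 0.5 l_n criterion is used exactly as written in the table.

```python
# Sketch of the Table 1 integration algorithm for bounding boxes of a single crack.
import math

def diagonal(box):
    x1, y1, x2, y2 = box
    return math.hypot(x2 - x1, y2 - y1)

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def merge_boxes(boxes):
    """Greedily merge bounding boxes that belong to the same crack (Table 1, steps 1-5)."""
    if not boxes:
        return []
    boxes = sorted(boxes, key=lambda b: b[0])           # step 1: sort by X-coordinate
    merged = [boxes[0]]
    for box in boxes[1:]:
        prev = merged[-1]
        cx, cy = center(prev)
        # step 2: distance from the previous box center to the nearest corner of this box
        corners = [(box[0], box[1]), (box[0], box[3]), (box[2], box[1]), (box[2], box[3])]
        d = min(math.hypot(cx - px, cy - py) for px, py in corners)
        if 0.5 * diagonal(box) > d:                     # overlap criterion from Table 1
            # step 3: integrate by taking the joint top-left and bottom-right corners
            merged[-1] = (min(prev[0], box[0]), min(prev[1], box[1]),
                          max(prev[2], box[2]), max(prev[3], box[3]))
        else:
            merged.append(box)                          # a new, separate crack region
    return merged

print(merge_boxes([(10, 10, 60, 40), (55, 20, 120, 45), (300, 80, 350, 120)]))
```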

3. Crack quantification using a structured light

In order to quantify the detected cracks, a structured light composed of a vision sensor and two lasers, as well as a distance sensor, has been used in this paper. The overview of the crack quantification using structured light is shown in Fig. 1. As shown in the figure, two lasers with known installation positions are mounted on the system and project their beams onto the structure. The vision sensor captures the structural surface including the cracks and the projected laser beams. With the known installation positions of the lasers, the ratio between pixel and mm dimensions can be estimated. By using this ratio, the detected crack can be quantified in the mm dimension.
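As a minimal sketch of this scaling step: given the separation of the two projected beams on the target surface (obtained from the calibration described later in this section; the 50 mm value below is illustrative, not taken from the paper) and their pixel positions in the image, the mm-per-pixel factor follows directly. The variable names are assumptions.

```python
# Pixel-to-mm scaling from the two projected laser beams (illustrative values).
import math

def mm_per_pixel(p1_px, p2_px, laser_spacing_mm):
    """Scale factor from the pixel distance between the two projected laser beams."""
    pixel_dist = math.dist(p1_px, p2_px)
    return laser_spacing_mm / pixel_dist

scale = mm_per_pixel((412.0, 388.0), (913.0, 392.0), laser_spacing_mm=50.0)
crack_width_mm = 14.0 * scale          # e.g., a crack measured as 14 px wide in the image
print(round(scale, 4), "mm/px ->", round(crack_width_mm, 2), "mm")
```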
Since most point lasers are not projected in parallel due to installation or manufacturing errors, a laser pose calibration method has been proposed in this paper. To calibrate the initial pose of the lasers, a specially designed jig device, as shown in Fig. 3, is applied. The entire process of the laser pose calibration with the jig and a distance sensor is shown in Fig. 4. As shown in the figures, the positions of the projected laser beams on the screen, with the range varying from 10 to 100 cm, are calculated from the captured images. From the captured images at varying ranges, the positions of the projected laser beams within the known size of the screen are calculated by using image processing techniques such as Harris corner detection, homography calculation, and image warping. The relationship between the distance of the projected laser beams, L, and the distance from the lasers to the target structure, D, is calculated by using a linear regression method.
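The paper does not give implementation details for this rectification step; the OpenCV sketch below shows one plausible realization in which the four detected screen corners are mapped to a metric rectangle by a homography, so that the beam positions can be read in millimetres. The corner coordinates, the 400 x 300 mm screen size, and the resolution of the rectified view are assumptions.

```python
# Hedged sketch of screen rectification via homography and warping.
import cv2
import numpy as np

screen_w_mm, screen_h_mm = 400.0, 300.0          # assumed known screen size
px_per_mm = 2.0                                  # resolution of the rectified view

# Corners of the screen in the captured image (e.g., from Harris corner detection),
# ordered top-left, top-right, bottom-right, bottom-left; values are illustrative.
img_corners = np.float32([[102, 84], [928, 97], [941, 713], [95, 701]])
dst_corners = np.float32([[0, 0],
                          [screen_w_mm * px_per_mm, 0],
                          [screen_w_mm * px_per_mm, screen_h_mm * px_per_mm],
                          [0, screen_h_mm * px_per_mm]])

H, _ = cv2.findHomography(img_corners, dst_corners)

image = np.zeros((768, 1024, 3), np.uint8)       # stand-in for a captured calibration frame
warped = cv2.warpPerspective(image, H, (int(screen_w_mm * px_per_mm),
                                        int(screen_h_mm * px_per_mm)))

# In the rectified view, a laser spot at pixel (u, v) lies at (u/px_per_mm, v/px_per_mm) mm,
# so the separation L of the two beams can be measured directly in millimetres.
print(warped.shape)
```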
Since the accuracy of the quantification highly depends on the performance of the range sensor (D), a calibration of the ranging values has been performed. The layout of the test to calculate a systematic bias, such as clock accuracy or communication latency, is as follows: 1) the lasers are placed at a fixed distance from a target structure; 2) 1,000 range measurements are obtained for reference distances from 100 to 500 cm at intervals of 10 cm [30]. The experimental setup and ranging characteristics of the distance sensor in a line-of-sight environment are shown in Fig. 5. By using linear regression, a simple ranging model d_L can be constructed as follows:

d_L(D) = 0.9945 D + 0.3203    (1)

where d_L is the expected range measurement and D is the reference distance between the lasers and the target. The obtained range value from the distance sensor, d_L, is calibrated in order to calculate D, and it is used to find the distance between the positions of the projected laser beams, L. Using the calibrated laser positions, the detected crack can be quantified with increased accuracy.
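A short sketch of this calibration is given below: a first-order polynomial is fitted to averaged range readings at the known reference distances and then inverted to recover D from a raw measurement. The sample data are synthetic and illustrative; with the authors' measurements the fit is the one reported in Eq. (1), a = 0.9945 and b = 0.3203.

```python
# Sketch of the ranging-model calibration in Eq. (1): fit d_L(D) = a*D + b, then invert it.
import numpy as np

# Reference distances (cm) and mean measured ranges (cm), e.g., 1,000 samples per point.
D_ref = np.arange(100, 501, 10, dtype=float)
d_meas = 0.9945 * D_ref + 0.3203 + np.random.normal(0, 0.5, D_ref.size)  # synthetic readings

a, b = np.polyfit(D_ref, d_meas, deg=1)      # least-squares fit of d_L = a*D + b

def calibrate(raw_range_cm):
    """Invert the fitted model to recover the true distance D from a raw sensor reading."""
    return (raw_range_cm - b) / a

print(f"fit: d_L = {a:.4f} D + {b:.4f}; raw 250.0 cm -> {calibrate(250.0):.2f} cm")
```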


Fig. 3. Laser pose calibration with a specially designed jig. (a) Side and (b) front views of the installation.

Fig. 4. Procedure of laser pose calibration with the specially designed jig module.

Fig. 5. Ranging characteristics of a distance sensor in a line-of-sight environment. (a) Experimental setup, (b) reference and measured distances with linear fit, (c) ranging
error caused by systematic bias.

4. Simulation and experimental tests

In order to verify the performance of the proposed detection and quantification algorithm, simulations and experimental tests have been performed. In total, 1,800 images of cracks, lasers, and pseudo-cracks have been collected and, to increase the detection accuracy, augmented by rotating or translating the images. The augmented images are labelled and trained with YOLOv3-tiny together with the COCO dataset; YOLOv3-tiny is a state-of-the-art real-time object detection architecture that offers both high accuracy and low computation time. The batch size and the learning rate are set to 4 and 0.001, respectively.
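A small sketch of this rotation/translation augmentation, using OpenCV affine warps, is given below. The specific angles, shifts, output count, and border handling are assumptions rather than the authors' settings; the paper only states that the 1,800 collected images are rotated and translated.

```python
# Hedged sketch of rotation/translation image augmentation with OpenCV.
import cv2
import numpy as np

def augment(image, angle_deg, shift_xy):
    h, w = image.shape[:2]
    # Rotation about the image center followed by a translation, as one affine transform.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    M[0, 2] += shift_xy[0]
    M[1, 2] += shift_xy[1]
    return cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REFLECT)

if __name__ == "__main__":
    img = np.random.randint(0, 255, (240, 320, 3), np.uint8)   # stand-in for a crack photo
    variants = [augment(img, a, s) for a in (0, 30, 60, 90) for s in ((0, 0), (10, -5))]
    print(len(variants), "augmented images")
```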
The performance of the detection is evaluated by calculating accuracy and precision. The accuracy is the number of correct predictions divided by the total number of predictions, and the precision is the number of true positives divided by the total number of positive predictions, i.e., the sum of true positives and false positives. The performance of the trained YOLOv3-tiny network with the augmented dataset is evaluated with 150 images: 50 of cracks, 50 of lasers, and another 50 of pseudo-cracks on concrete structures. As shown in the confusion matrix (see Table 2), the accuracy and precision of the proposed method are 94% and 98%, respectively.

Table 2
Confusion matrix of crack and laser detection.

Actual            Prediction
                  Crack    Laser    Pseudo-crack
Crack             44       –        6
Laser             –        50       –
Pseudo-crack      2        –        48
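The definitions above can be applied directly to the Table 2 counts, as sketched below. The exact grouping behind the reported 94% accuracy and 98% precision is not stated in the paper, so the crack-class precision computed here is only one plausible reading of the precision figure.

```python
# Accuracy and precision from the Table 2 confusion matrix
# (rows: actual crack / laser / pseudo-crack; columns: predicted).
confusion = {
    #               predicted: crack, laser, pseudo-crack
    "crack":        (44, 0, 6),
    "laser":        (0, 50, 0),
    "pseudo-crack": (2, 0, 48),
}
labels = list(confusion)

total = sum(sum(row) for row in confusion.values())
correct = sum(confusion[c][i] for i, c in enumerate(labels))
accuracy = correct / total                               # (44 + 50 + 48) / 150

tp_crack = confusion["crack"][0]
fp_crack = sum(confusion[c][0] for c in labels if c != "crack")
precision_crack = tp_crack / (tp_crack + fp_crack)       # 44 / (44 + 2)

print(f"accuracy = {accuracy:.3f}, crack precision = {precision_crack:.3f}")
```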
4.1. Simulations

In the simulations, four different shapes of cracks caused by shrinkage, settlement, fatigue, etc. are generated as shown in Fig. 6, and they are augmented by rotating them by 30°, 60°, 90°, and 120°. On the images including the cracks and the projected laser beams (see Fig. 7(a)), the possible regions of cracks and lasers are detected using YOLOv3-tiny (see Fig. 7(b)), and the positions of the laser beams and the contours of the cracks are calculated on the cropped images (see Fig. 7(c)). The thickness and length of the detected cracks are estimated in the mm dimension using the calibrated distance between the two projected laser beams (see Fig. 7(d)). The simulation results are shown in Table 3. As shown in the table, the average errors in the thickness and length of the cracks in the simulations are 0.12 (0.09) mm and 0.15 (0.39) mm, respectively, where the numbers in parentheses are the standard deviations. The measurement errors, defined as the difference between the estimated and the actual values, have been greatly reduced by applying the laser pose calibration algorithm. Using a high-performance computer composed of a CPU (i7-8700K, Intel), 16.0 GB of RAM, and a GPU (GeForce GTX 1080, NVIDIA), the computation time is found to be 0.034 s: 0.02 s for crack and laser detection and 0.014 s for quantification.

Fig. 6. Actual and simulated crack images of concrete structures.

Fig. 7. Process and results of simulation. (a) Original image, (b) crack and laser detection by YOLOv3-tiny, (c) calculation of center of laser beams and contours of cracks, and
(d) quantification of the crack.

Table 3
The measurement results (Meas.) and their errors for the simulations (unit: mm).

                              Thickness (T)                               Length (L)
                              w/o calibration     w/ calibration          w/o calibration     w/ calibration
                              Meas.    Error      Meas.    Error          Meas.    Error      Meas.    Error
Case S1) T: 6.60, L: 118.60
                              17.32    10.72      6.62     0.02           149.20   30.60      118.39   0.21
                              17.34    10.74      6.71     0.11           138.61   20.01      118.70   0.10
                              16.48    9.88       6.68     0.08           141.53   22.93      119.19   0.59
                              17.94    11.34      6.87     0.27           140.08   21.48      118.58   0.02
                              16.80    10.20      6.70     0.10           143.32   24.72      119.47   0.87
Case S2) T: 3.40, L: 133.40
                              11.10    7.70       3.10     0.30           166.05   32.65      132.36   1.04
                              11.26    7.86       3.27     0.13           163.28   29.88      133.37   0.03
                              11.29    7.89       3.17     0.23           156.37   22.97      134.56   1.16
                              11.61    8.21       3.09     0.31           159.22   25.82      133.41   0.01
                              12.43    9.03       3.19     0.21           165.41   32.01      134.25   1.15
Case S3) T: 1.00, L: 199.20
                              3.42     2.42       0.96     0.04           240.08   40.88      198.69   0.51
                              3.67     2.67       1.05     0.05           236.85   37.65      198.99   0.21
                              3.73     2.73       1.08     0.08           237.52   38.32      199.28   0.08
                              3.41     2.41       1.01     0.01           242.40   43.20      198.66   0.54
                              3.34     2.34       1.12     0.12           238.89   39.69      198.90   0.30
Case S4) T: 7.00, L: 237.20
                              13.62    6.62       6.83     0.17           275.29   38.09      236.32   0.88
                              13.53    6.53       7.01     0.01           273.23   36.03      236.16   1.04
                              12.76    5.76       7.08     0.08           275.38   38.18      236.49   0.71
                              13.61    6.61       6.97     0.03           271.97   34.77      236.91   0.29
                              12.66    5.66       6.95     0.05           273.38   36.18      236.62   0.58

4.2. Experimental tests

In order to verify the feasibility of the proposed system, experimental tests were performed. As shown in Fig. 8, the proposed system was composed of two lasers, a vision sensor, and a distance sensor. In the experiments, a commercially available laser from Laserlab Ltd., a 5 MP OV5647-M12 from ArduCam Co., and a LIDAR-Lite v3 from Garmin Ltd. were used as the dot lasers, the vision sensor, and the distance sensor, respectively. The images and distances were captured with a Raspberry Pi 3 from the Raspberry Pi Foundation and transferred to a main computer with the same specifications as in the simulations. The measured distance is calibrated using Eq. (1), and the distance between the laser beams is calculated. From the captured images, the cracks and lasers are detected using the deep learning algorithm, and the positions of the lasers and the contour of the crack are calculated in the detected bounding boxes. After checking whether a single crack is split across multiple bounding boxes, the size of the crack is measured by considering the calibrated laser pose. The detection and quantification images for four different cases are shown in Fig. 9; the integration algorithm for multiple bounding boxes in a single crack is applied as shown in the figure.

Fig. 8. Experimental setup.

The measurement results were evaluated against the ground truth, which was measured by using crack scales SC-10 and SC-20 from Sincon Co., which measure the thickness and length of the cracks. The errors of the four cases are illustrated in Table 4. It can be seen that the errors of the proposed system are smaller than 0.3 mm and 1.6 mm in thickness and length, respectively. The computing time of detection and quantification is about 0.04 s, which guarantees a sampling rate of more than 25 Hz.
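The paper does not spell out how the thickness and length are extracted from the detected crack contour, so the scheme below is purely an assumed illustration: the length is taken as the longer side of the minimum-area rectangle around the contour, and the thickness as twice the maximum of the distance transform inside the crack mask; both are then scaled by the mm-per-pixel factor obtained from the calibrated laser spacing (Section 3).

```python
# Assumed sketch of converting a crack mask into thickness and length in mm.
import cv2
import numpy as np

def crack_size_mm(crack_mask, mm_per_px):
    contours, _ = cv2.findContours(crack_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)

    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    length_px = max(w, h)                                   # assumed proxy for crack length

    dist = cv2.distanceTransform(crack_mask, cv2.DIST_L2, 5)
    thickness_px = 2.0 * dist.max()                         # assumed proxy for crack width

    return thickness_px * mm_per_px, length_px * mm_per_px

if __name__ == "__main__":
    mask = np.zeros((200, 400), np.uint8)                   # synthetic crack mask
    cv2.line(mask, (20, 100), (380, 120), 255, 5)
    t_mm, l_mm = crack_size_mm(mask, mm_per_px=0.1)
    print(f"thickness ~ {t_mm:.2f} mm, length ~ {l_mm:.2f} mm")
```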

Fig. 9. Procedures of crack and laser detection and quantification with four cases (Cases E1 to E4).

Table 4
The measurement results (Meas.) and their errors for the experimental tests (unit: mm).

                              Thickness (T)                               Length (L)
                              w/o calibration     w/ calibration          w/o calibration     w/ calibration
                              Meas.    Error      Meas.    Error          Meas.    Error      Meas.    Error
Case E1) T: 2.50, L: 204.70   10.58    8.08       2.22     0.28           256.54   51.84      203.32   1.38
Case E2) T: 9.00, L: 263.10   23.68    14.68      8.93     0.07           316.44   53.34      262.92   0.18
Case E3) T: 1.20, L: 193.00   8.20     7.00       1.12     0.08           237.32   44.32      191.58   1.42
Case E4) T: 1.00, L: 187.30   5.06     4.06       0.91     0.09           190.02   2.72       185.79   1.51

5. Conclusion

This paper proposed a structural crack detection and quantification method using deep learning and structured light. For real-time detection, the YOLOv3-tiny algorithm has been used, and bounding boxes of cracks and laser beams are detected at 25 Hz. The positions of the projected laser beams are calculated by using image processing techniques, and they are used to quantify the detected cracks. To increase the accuracy, the laser alignment over the varying range between the target structure and the lasers is calibrated by using a specially designed jig module; in this process, the calibrated distance of the range sensor is used. For multiple bounding boxes detected on a single crack, an integration algorithm that considers the positions and sizes of the detected bounding boxes is introduced. Simulations and experimental tests were performed in order to validate the performance of the proposed algorithm. The results show that cracks and lasers are detected in real time with an accuracy and a precision of 94% and 98%, respectively, and that the size of the cracks is calculated with an error of less than 1.5 mm. In the future, experiments with an unmanned aerial vehicle will be performed to verify the applicability of the proposed system to civil structures.

CRediT authorship contribution statement

Song Ee Park: Writing - original draft, Methodology, Validation. Seung-Hyun Eem: Writing - review & editing. Haemin Jeon: Writing - review & editing, Conceptualization, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This research was supported by the framework of the young researchers' program managed by the National Research Foundation of Korea (No. NRF-2017R1C1B5075299) of the Korean government.

References

[1] D. Balageas, C.-P. Fritzen, A. Güemes, Structural Health Monitoring, John Wiley & Sons, 2010.
[2] Q. Yu, J. Guo, S. Wang, Q. Zhu, B. Tao, Study on a new bridge crack detection robot based on machine vision, in: International Conference on Intelligent Robotics and Applications, Springer, 2012, pp. 174–184.
[3] Markets and Markets, Structural Health Monitoring Market by Technology (Wired and Wireless), Offering (Hardware (Sensors, Data Acquisition Systems) and Software & Services), Vertical (Civil Infrastructure, Energy), Application, and Geography – Global Forecast to 2023, Markets and Markets, 2019.
[4] H. Azari, S. Nazarian, D. Yuan, Assessing sensitivity of impact echo and ultrasonic surface waves methods for nondestructive evaluation of concrete structures, Constr. Build. Mater. 71 (2014) 384–391.
[5] S.K.U. Rehman, Z. Ibrahim, S.A. Memon, M. Jameel, Nondestructive test methods for concrete bridges: a review, Constr. Build. Mater. 107 (2016) 58–86.
[6] J.A. Bogas, M.G. Gomes, A. Gomes, Compressive strength evaluation of structural lightweight concrete by non-destructive ultrasonic pulse velocity method, Ultrasonics 53 (5) (2013) 962–972.
[7] T.R. Naik, V.M. Malhotra, J.S. Popovics, The ultrasonic pulse velocity method, in: Handbook on Nondestructive Testing of Concrete, Second ed., CRC Press, 2003, pp. 8-1–8-19.
[8] A.G. Davis, The nondestructive impulse response test in North America: 1985–2001, NDT E Int. 36 (4) (2003) 185–193.
[9] A.G. Davis, M.K. Lim, C.G. Petersen, Rapid and economical evaluation of concrete tunnel linings with impulse response and impulse radar non-destructive methods, NDT E Int. 38 (3) (2005) 181–186.
[10] A. Nair, C. Cai, Acoustic emission monitoring of bridges: review and case studies, Eng. Struct. 32 (6) (2010) 1704–1714.
[11] A. Sokačić, Application of infrared thermography to the non-destructive testing of concrete structures, Gradevinski fakultet, Sveučilište u Zagrebu, 2010.
[12] Z.-M. Sbartaï, D. Breysse, M. Larget, J.-P. Balayssac, Combining NDT techniques for improved evaluation of concrete properties, Cem. Concr. Compos. 34 (6) (2012) 725–733.
[13] Y.J. Cha, W. Choi, O. Büyüköztürk, Deep learning-based crack damage detection using convolutional neural networks, Comput.-Aided Civ. Infrastruct. Eng. 32 (5) (2017) 361–378.
[14] Y.J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, O. Büyüköztürk, Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types, Comput.-Aided Civ. Infrastruct. Eng. 33 (9) (2018) 731–747.
[15] W. Choi, Y.-J. Cha, SDDNet: real-time crack segmentation, IEEE Trans. Ind. Electron. (2019).
[16] A. Zhang, K.C. Wang, B. Li, E. Yang, X. Dai, Y. Peng, Y. Fei, Y. Liu, J.Q. Li, C. Chen, Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning network, Comput.-Aided Civ. Infrastruct. Eng. 32 (10) (2017) 805–819.
[17] I.-H. Kim, H. Jeon, S.-C. Baek, W.-H. Hong, H.-J. Jung, Application of crack identification techniques for an aging concrete bridge inspection using an unmanned aerial vehicle, Sensors 18 (6) (2018) 1881.
[18] G.H. Beckman, D. Polyzois, Y.-J. Cha, Deep learning-based automatic volumetric damage quantification using depth camera, Autom. Constr. 99 (2019) 114–124.
[19] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86 (11) (1998) 2278–2324.
[20] J. Han, D. Zhang, G. Cheng, N. Liu, D. Xu, Advanced deep-learning techniques for salient and category-specific object detection: a survey, IEEE Signal Process. Mag. 35 (1) (2018) 84–100.
[21] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks, Adv. Neural Inf. Process. Syst. (2015) 91–99.
[22] K. He, G. Gkioxari, P. Dollár, R. Girshick, Mask R-CNN, Proc. IEEE Int. Conf. Comput. Vision (2017) 2961–2969.
[23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A.C. Berg, SSD: single shot multibox detector, in: European Conference on Computer Vision, Springer, 2016, pp. 21–37.
[24] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
[25] R. Huang, J. Pedoeem, C. Chen, YOLO-LITE: a real-time object detection algorithm optimized for non-GPU computers, in: 2018 IEEE International Conference on Big Data (Big Data), IEEE, 2018, pp. 2503–2510.
[26] Y.J. Wai, Z. Bin Mohd Yussof, S.I. Bin Salim, L.K. Chuan, Fixed point implementation of Tiny-YOLO-v2 using OpenCL on FPGA, Int. J. Adv. Comput. Sci. Appl. 9 (10) (2018) 506–512.
[27] J. Redmon, A. Farhadi, YOLOv3: an incremental improvement, arXiv preprint arXiv:1804.02767 (2018).
[28] J. Gao, H. Ling, W. Hu, J. Xing, Transfer learning based visual tracking with Gaussian processes regression, in: European Conference on Computer Vision, Springer, 2014, pp. 188–203.
[29] COCO Dataset, http://cocodataset.org/, 2019 (accessed 3 June 2019).
[30] J. Jung, H. Myung, Indoor localization using particle filter and map-based NLOS ranging model, in: 2011 IEEE International Conference on Robotics and Automation, IEEE, 2011, pp. 5185–5190.
