CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes
Yuhong Li^{1,2}, Xiaofan Zhang^1, Deming Chen^1
^1 University of Illinois at Urbana-Champaign
^2 Beijing University of Posts and Telecommunications
{leeyh,xiaofan3,dchen}@illinois.edu
Abstract
We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven, deep learning method that can understand highly congested scenes and perform accurate count estimation as well as produce high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger receptive fields and to replace pooling operations. CSRNet is an easy-to-train model because of its pure convolutional structure. We demonstrate CSRNet on four datasets (the ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldExpo'10 dataset, and the UCSD dataset) and deliver state-of-the-art performance. On the ShanghaiTech Part B dataset, CSRNet achieves 47.3% lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We also extend the targeted applications to counting other objects, such as vehicles in the TRANCOS dataset. Results show that CSRNet significantly improves the output quality, with 15.4% lower MAE than the previous state-of-the-art approach.

Figure 1. Three images from the ShanghaiTech Part B dataset [18], all containing 95 people but with totally different spatial distributions (first row), and their corresponding density maps (second row).

1. Introduction

A growing number of network models have been developed [1, 2, 3, 4, 5] to deliver promising solutions for crowd flow monitoring, assembly control, and other security services. Current methods for congested scene analysis have evolved from simple crowd counting (which outputs the number of people in the targeted image) to density map presentation (which displays the characteristics of the crowd distribution) [6]. This development follows the demand of real-life applications, since the same number of people can form completely different crowd distributions (as shown in Fig. 1), so that counting alone is not enough. The distribution map provides more accurate and comprehensive information, which can be critical for making correct decisions in high-risk environments such as stampedes and riots. However, it is challenging to generate accurate distribution patterns. One major difficulty comes from the prediction manner: since the generated density values follow pixel-by-pixel prediction, the output density maps must exhibit spatial coherence so that they present smooth transitions between neighboring pixels. Also, the diversified scenes, e.g., irregular crowd clusters and different camera perspectives, make the task difficult, especially for traditional methods without deep neural networks (DNNs). The recent development of congested scene analysis relies on DNN-based methods because of the high accuracy they have achieved in semantic segmentation tasks [7, 8, 9, 10, 11] and the significant progress they have made in visual saliency [12]. An additional bonus of using DNNs comes from the enthusiastic hardware community, where DNNs are rapidly investigated and implemented on GPUs [13], FPGAs [14, 15, 16], and ASICs [17]. Among them, the low-power, small-size schemes are especially suitable for deploying congested scene analysis in surveillance devices.

Previous works for congested scene analysis are mostly based on multi-scale architectures [4, 5, 18, 19, 20]. They have achieved high performance in this field, but the designs they used also introduce two significant disadvantages when networks go deeper: a large amount of training time and an ineffective branch structure (e.g., the multi-column CNN (MCNN) in [18]). We design an experiment to demonstrate that the MCNN does not perform better compared to a
deeper, regular network (see Table 1). The main reason for using the MCNN in [18] is the flexible receptive fields provided by convolutional filters of different sizes across the columns. Intuitively, each column of the MCNN is dedicated to a certain level of congestion. However, the effectiveness of the MCNN may not be prominent. We present Fig. 2 to illustrate the features learned by the three separate columns (representing large, medium, and small receptive fields) of the MCNN, evaluated on the ShanghaiTech Part A dataset [18]. The three curves in this figure share very similar patterns (estimated error) for 50 test cases with different congestion densities, meaning that each column in such a branch structure learns nearly identical features. This goes against the original intention of the MCNN design, namely learning different features in each column.

Figure 2. The estimated error of 50 samples from the testing set of ShanghaiTech Part A [18], generated by the three pre-trained columns of the MCNN. Small, Medium, and Large respectively stand for the columns with small, medium, or large kernels in the MCNN.

In this paper, we design a deeper network called CSRNet for counting crowds and generating high-quality density maps. Unlike the latest works such as [4, 5], which use a deep CNN only for ancillary tasks, we focus on designing a CNN-based density map generator. Our model uses pure convolutional layers as the backbone to support input images of flexible resolutions. To limit the network complexity, we use small convolution filters (3 × 3) in all layers. We deploy the first 10 layers of VGG-16 [21] as the front-end and dilated convolution layers as the back-end to enlarge receptive fields and extract deeper features without losing resolution (since pooling layers are not used). By taking advantage of this structure, we outperform the state-of-the-art crowd counting solutions (an MCNN-based solution called CP-CNN [5]) with 7%, 47.3%, 10.0%, and 2.9% lower Mean Absolute Error (MAE) on the ShanghaiTech [18] Part A, Part B, UCF_CC_50 [22], and WorldExpo'10 [3] datasets respectively. We also achieve high performance on the UCSD dataset [23] with 1.16 MAE. After extending this work to vehicle counting on the TRANCOS dataset [44], we achieve 15.4% lower MAE than the current best approach, called FCN-HA [24].

Method            Parameters   MAE     MSE
Col. 1 of MCNN    57.75k       141.2   206.8
Col. 2 of MCNN    45.99k       160.5   239.0
Col. 3 of MCNN    25.14k       153.7   230.2
MCNN Total        127.68k      110.2   185.9
A deeper CNN      83.84k       93.0    142.2

Table 1. To demonstrate that the MCNN [18] may not be the best choice, we design a deeper, single-column network with fewer parameters than the MCNN. The architecture of the proposed small network is CR(32,3)-M-CR(64,3)-M-CR(64,3)-M-CR(32,3)-CR(32,3)-CR(1,1), where CR(m,n) denotes a convolutional layer with m filters of size n×n followed by a ReLU layer, and M denotes a max-pooling layer. Results show that the single-column version achieves better performance on the ShanghaiTech Part A dataset [18], with the lowest MAE and Mean Squared Error (MSE).
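For concreteness, the Table 1 baseline can be written down directly from the CR notation. The following is a minimal sketch in PyTorch, our re-implementation for illustration only (the paper's own implementation is in Caffe [13]); the input size in the usage line is an arbitrary example.

```python
import torch
import torch.nn as nn

# Our PyTorch rendering of the Table 1 baseline:
# CR(32,3)-M-CR(64,3)-M-CR(64,3)-M-CR(32,3)-CR(32,3)-CR(1,1).
def cr(in_ch, m, n):
    # CR(m, n): a convolution with m filters of size n x n, followed by ReLU.
    return [nn.Conv2d(in_ch, m, n, padding=n // 2), nn.ReLU(inplace=True)]

deeper_cnn = nn.Sequential(
    *cr(3, 32, 3), nn.MaxPool2d(2),    # CR(32,3) - M
    *cr(32, 64, 3), nn.MaxPool2d(2),   # CR(64,3) - M
    *cr(64, 64, 3), nn.MaxPool2d(2),   # CR(64,3) - M
    *cr(64, 32, 3),                    # CR(32,3)
    *cr(32, 32, 3),                    # CR(32,3)
    *cr(32, 1, 1),                     # CR(1,1): 1x1 output layer
)

# Example: a 384x512 input (arbitrary size) yields a 48x64 density map,
# i.e. 1/8 of the input resolution after the three pooling layers.
out = deeper_cnn(torch.randn(1, 3, 384, 512))
```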
The rest of the paper is structured as follows. Sec. 2 presents previous works on crowd counting and density map generation. Sec. 3 introduces the architecture and configuration of our model, while Sec. 4 presents the experimental results on several datasets. In Sec. 5, we conclude the paper.

2. Related work

Following the taxonomy proposed by Loy et al. [25], the potential solutions for crowd scene analysis can be classified into three categories: detection-based methods, regression-based methods, and density estimation-based methods. By incorporating deep learning, CNN-based solutions show even stronger ability in this task and outperform the traditional methods.

2.1. Detection-based approaches

Most of the early research focuses on detection-based approaches, using a moving-window-like detector to detect people and count their number [26]. These methods require well-trained classifiers to extract low-level features from the whole human body (like Haar wavelets [27] and HOG (histogram of oriented gradients) [28]). However, they perform poorly on highly congested scenes since most of the targeted objects are obscured. To tackle this problem, researchers instead detect particular body parts to complete crowd scene analysis [29].

2.2. Regression-based approaches

Since detection-based approaches cannot be adapted to highly congested scenes, researchers try to deploy regression-based approaches, which learn the relations among features extracted from cropped image patches and then calculate the number of particular objects. More features, such as foreground and texture features, have been used for generating low-level information [30]. Following similar approaches, Idrees et al. [22] propose a model to extract features by employing Fourier analysis and SIFT (scale-invariant feature transform) [31] interest-point based counting.
2.3. Density estimation-based approaches

When executing a regression-based solution, one critical feature, called saliency, is overlooked, which causes inaccurate results in local regions. Lempitsky et al. [32] propose a method to solve this problem by learning a linear mapping between features in a local region and its object density map, integrating the information of saliency during the learning process. Since the ideal linear mapping is hard to obtain, Pham et al. [33] use random forest regression to learn a non-linear mapping instead of the linear one.

2.4. CNN-based approaches

The literature also focuses on CNN-based approaches to predict the density map because of their success in classification and recognition [34, 21, 35]. In the work presented by Walach and Wolf [36], a method is demonstrated with selective sampling and layered boosting. Instead of using patch-based training, Shang et al. [37] try an end-to-end regression method using CNNs, which takes the entire image as input and directly outputs the final crowd count. Boominathan et al. [19] present the first work purely using convolutional networks and a dual-column architecture for generating density maps. Marsden et al. [38] explore single-column fully convolutional networks, while Sindagi et al. [39] propose a CNN that uses high-level prior information to boost the density prediction performance. An improved structure is proposed by Zhang et al. [18], who introduce a multi-column based architecture (MCNN) for crowd counting. A similar idea is shown in Onoro and Sastre [20], where a scale-aware, multi-column counting model called Hydra CNN is presented for object density estimation. It is clear that the CNN-based solutions outperform the previous works mentioned in Sec. 2.1 to 2.3.
2.5. Limitations of the state-of-the-art approaches

Most recently, Sam et al. [4] propose the Switch-CNN, using a density level classifier to choose different regressors for particular input patches. Sindagi et al. [5] present a Contextual Pyramid CNN, which uses CNN networks to estimate context at various levels for achieving lower count error and better-quality density maps. These two solutions achieve state-of-the-art performance, and both of them use a multi-column based architecture (MCNN) and a density level classifier. However, we observe several disadvantages in these approaches. (1) Multi-column CNNs are hard to train according to the training method described in [18]; such a bloated network structure requires more time to train. (2) Multi-column CNNs introduce redundant structure, as we mentioned in Sec. 1: different columns seem to perform similarly, without obvious differences. (3) Both solutions require a density level classifier before sending pictures into the MCNN. However, the granularity of density levels is hard to define in real-time congested scene analysis, since the number of objects keeps changing over a large range. Also, using a fine-grained classifier means more columns need to be implemented, which makes the design more complicated and causes more redundancy. (4) These works spend a large portion of their parameters on density level classification to label the input regions, instead of allocating parameters to the final density map generation. Since the branch structure in the MCNN is not efficient, the lack of parameters for generating the density map lowers the final accuracy. Taking all the above disadvantages into consideration, we propose a novel approach that concentrates on encoding deeper features in congested scenes and generating high-quality density maps.

3. Proposed Solution

The fundamental idea of the proposed design is to deploy a deeper CNN for capturing high-level features with larger receptive fields and generating high-quality density maps without brutally expanding the network complexity. In this section, we first introduce the architecture we propose, and then we present the corresponding training method.

3.1. CSRNet architecture

Following a similar idea to [19, 4, 5], we choose VGG-16 [21] as the front-end of CSRNet because of its strong transfer learning ability and its flexible architecture for easily concatenating the back-end for density map generation. In CrowdNet [19], the authors directly carve the first 13 layers from VGG-16 and add a 1 × 1 convolutional layer as the output layer; the absence of modifications results in very weak performance. Other architectures, such as [4], use VGG-16 as the density level classifier for labeling input images before sending them to the most suitable column of the MCNN, while the CP-CNN [5] incorporates the result of classification with the features from the density map generator. In these cases, the VGG-16 performs as an ancillary without significantly boosting the final accuracy. In this paper, we first remove the classification part of VGG-16 (the fully-connected layers) and build the proposed CSRNet with the convolutional layers of VGG-16. The output size of this front-end network is 1/8 of the original input size. If we continued to stack more convolutional and pooling layers (the basic components of VGG-16), the output size would shrink further, making it hard to generate high-quality density maps. Inspired by the works [10, 11, 40], we instead deploy dilated convolutional layers as the back-end for extracting deeper information of saliency as well as maintaining the output resolution.
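A minimal PyTorch sketch of this design follows; it is our re-implementation for illustration, not the authors' Caffe code [13]. The back-end widths (512, 512, 512, 256, 128, 64) follow the configurations the paper compares in its Table 3 (not reproduced in this excerpt), and `dilation=2` corresponds to configuration CSRNet B, which is selected in Sec. 4.2.

```python
import torch
import torch.nn as nn
from torchvision import models

# A sketch of CSRNet: the first 10 convolutional layers of VGG-16 as the
# front-end (output stride 8 after three pooling layers), followed by a
# dilated back-end. A 3x3 kernel with dilation rate r spans 3 + 2(r - 1)
# pixels per side (5x5 for r = 2), enlarging the receptive field without
# pooling, so the 1/8 output resolution is preserved.
class CSRNet(nn.Module):
    def __init__(self, dilation=2):
        super().__init__()
        vgg = models.vgg16(pretrained=True)
        # Features up to (and including) the ReLU after conv4_3, i.e. the
        # first 10 conv layers and 3 max-pooling layers of VGG-16.
        self.frontend = nn.Sequential(*list(vgg.features.children())[:23])
        layers, in_ch = [], 512
        for out_ch in (512, 512, 512, 256, 128, 64):
            layers += [nn.Conv2d(in_ch, out_ch, 3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        layers.append(nn.Conv2d(64, 1, 1))  # 1x1 conv emits the density map
        self.backend = nn.Sequential(*layers)

    def forward(self, x):
        return self.backend(self.frontend(x))
```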
3.2.1 Ground truth generation

F(x) = \sum_{i=1}^{N} \delta(x - x_i) * G_{\sigma_i}(x), with \sigma_i = \beta d_i    (2)

For each targeted object x_i in the ground truth \delta, we use d_i to indicate the average distance to its k nearest neighbors. To generate the density map, we convolve \delta(x - x_i) with a Gaussian kernel with parameter \sigma_i (the standard deviation), where x is the position of a pixel in the image. In our experiments, we follow the configuration in [18] with \beta = 0.3 and k = 3. For inputs with sparse crowds, we instead adapt the Gaussian kernel to the average head size to blur all the annotations. The setups for the different datasets are shown in Table 2.
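A sketch of Eq. (2) with geometry-adaptive kernels is given below. This is our illustration under stated assumptions: points come as (row, col) head annotations, the fallback sigma of 15.0 for a single annotation is our arbitrary choice, and for sparse crowds one would substitute a sigma tied to the average head size as described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import KDTree

def density_map(points, shape, beta=0.3, k=3):
    """points: (N, 2) array of (row, col) annotations; shape: (H, W)."""
    density = np.zeros(shape, dtype=np.float32)
    if len(points) == 0:
        return density
    tree = KDTree(points)
    # Query k + 1 neighbours because the nearest neighbour is the point itself.
    dists, _ = tree.query(points, k=min(k + 1, len(points)))
    for (r, c), d in zip(points, np.atleast_2d(dists)):
        delta = np.zeros(shape, dtype=np.float32)
        delta[int(min(r, shape[0] - 1)), int(min(c, shape[1] - 1))] = 1.0
        # sigma_i = beta * (average distance to the k nearest neighbours);
        # 15.0 is an assumed fallback when there is only one annotation.
        sigma = beta * d[1:].mean() if len(points) > 1 else 15.0
        density += gaussian_filter(delta, sigma)  # delta convolved with G_sigma
    return density
```

Filtering one delta image per head is O(N·HW) and is meant for offline ground-truth preparation, not for the training loop itself.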
3.2.2 Data augmentation

We crop 9 patches from each image at different locations, each with 1/4 the size of the original image. The first four patches contain the four quarters of the image without overlapping, while the other five patches are randomly cropped from the input image. After that, we mirror the patches so that we double the training set.
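This cropping scheme can be sketched as follows; the sketch assumes PIL images, and the function name is ours. The same crop boxes would of course have to be applied to the corresponding ground-truth density maps.

```python
import random
from PIL import ImageOps

# Sec. 3.2.2 augmentation: nine 1/4-size patches per image (the four
# non-overlapping quarters plus five random crops), then horizontal
# mirroring doubles the set to eighteen patches.
def nine_patches(img):
    w, h = img.size
    pw, ph = w // 2, h // 2                       # half of each side = 1/4 area
    boxes = [(0, 0), (pw, 0), (0, ph), (pw, ph)]                # four quarters
    boxes += [(random.randint(0, w - pw), random.randint(0, h - ph))
              for _ in range(5)]                                # random crops
    patches = [img.crop((x, y, x + pw, y + ph)) for x, y in boxes]
    return patches + [ImageOps.mirror(p) for p in patches]     # mirrored copies
```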
3.2.3 Training details

We use a straightforward way to train CSRNet as an end-to-end structure. The first 10 convolutional layers are fine-tuned from a well-trained VGG-16 [21]. For the other layers, the initial values come from a Gaussian initialization with a standard deviation of 0.01. Stochastic gradient descent (SGD) is applied with a fixed learning rate of 1e-6 during training. Also, we choose the Euclidean distance to measure the difference between the ground truth and the estimated density map, similar to other works [19, 18, 4]. The loss function is given as follows:

L(\Theta) = \frac{1}{2N} \sum_{i=1}^{N} \| Z(X_i; \Theta) - Z_i^{GT} \|_2^2    (3)

where N is the size of the training batch and Z(X_i; \Theta) is the output generated by CSRNet with parameters \Theta; X_i represents the input image, while Z_i^{GT} is the ground truth density map of image X_i.
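A minimal training-loop sketch matching this setup is shown below, as our PyTorch analogue of the paper's Caffe training. `CSRNet` is the sketch from Sec. 3.1; `dataset` is an assumed dataset object yielding (image, ground-truth density map) pairs, with the ground truth already resampled to the model's 1/8-resolution output grid.

```python
import torch
from torch.utils.data import DataLoader

model = CSRNet().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)  # fixed LR per Sec. 3.2.3
criterion = torch.nn.MSELoss(reduction="sum")             # summed squared error

for image, gt_density in DataLoader(dataset, batch_size=1, shuffle=True):
    estimate = model(image.cuda())
    # Eq. (3): L(Theta) = 1/(2N) * sum_i ||Z(X_i; Theta) - Z_i^GT||_2^2
    loss = criterion(estimate, gt_density.cuda()) / (2 * image.size(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```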
4. Experiments

We demonstrate our approach on five different public datasets [18, 3, 22, 23, 44]. Compared to the previous state-of-the-art methods [4, 5], our model is smaller, more accurate, and easier to train and deploy. In this section, the evaluation metrics are introduced first, and then an ablation study on the ShanghaiTech Part A dataset is conducted to analyze the configuration of our model (shown in Table 3). Along with the ablation study, we evaluate and compare our proposed method to the previous state-of-the-art methods on all five datasets. The implementation of our model is based on the Caffe framework [13].

4.1. Evaluation metrics

The MAE and the MSE are used for evaluation, defined as:

MAE = \frac{1}{N} \sum_{i=1}^{N} |C_i - C_i^{GT}|    (4)

MSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} |C_i - C_i^{GT}|^2}    (5)

where N is the number of images in one test sequence and C_i^{GT} is the ground truth count. C_i represents the estimated count, which is defined as follows:

C_i = \sum_{l=1}^{L} \sum_{w=1}^{W} z_{l,w}    (6)

L and W denote the length and width of the density map respectively, while z_{l,w} is the pixel at position (l, w) of the generated density map. C_i is thus the estimated count for image X_i.

We also use the PSNR and SSIM to evaluate the quality of the output density maps on the ShanghaiTech Part A dataset. To calculate the PSNR and SSIM, we follow the preprocessing given by [5], which includes resizing the density map (to the same size as the original input) with interpolation and normalization for both the ground truth and the predicted density map.
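In code, Eqs. (4)-(6) amount to summing each predicted density map into a count and aggregating the count errors over a test sequence; the sketch below uses our own variable names. Note that the quantity the paper (like prior work) calls MSE is a root-mean-squared error.

```python
import numpy as np

def mae_mse(pred_maps, gt_counts):
    """pred_maps: list of predicted density maps; gt_counts: ground truth counts."""
    counts = np.array([m.sum() for m in pred_maps])        # Eq. (6): C_i
    err = counts - np.asarray(gt_counts, dtype=np.float64)
    mae = np.abs(err).mean()                               # Eq. (4)
    mse = np.sqrt((err ** 2).mean())                       # Eq. (5): RMSE despite the name
    return mae, mse
```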
4.2. Ablations on ShanghaiTech Part A

In this subsection, we perform an ablation study to analyze the four configurations of CSRNet on the ShanghaiTech Part A dataset [18], a new large-scale crowd counting dataset including 482 images of congested scenes with 241,667 annotated persons. It is challenging to count from these images because of the extremely congested scenes, the varied perspectives, and the unfixed resolutions. The four configurations are shown in Table 3. CSRNet A is the network with all dilation rates set to 1. CSRNet B and D maintain dilation rates of 2 and 4 in their back-ends respectively, while CSRNet C combines dilation rates of 2 and 4. The number of parameters of these four models is the same, 16.26M, so we can compare the results of using different dilation rates directly. After training on the ShanghaiTech Part A dataset using the method described in Sec. 3.2, we evaluate with the metrics defined in Sec. 4.1. We tried dropout [45] to prevent potential overfitting, but there was no significant improvement, so we do not include dropout in our model. The detailed evaluation results are shown in Table 4, where CSRNet B achieves the lowest error (the highest accuracy). Therefore, we use CSRNet B as the proposed CSRNet in the following experiments.

Architecture   MAE     MSE
CSRNet A       69.7    116.0
CSRNet B       68.2    115.0
CSRNet C       71.91   120.58
CSRNet D       75.81   120.82

Table 4. Comparison of architectures on the ShanghaiTech Part A dataset.

4.3. Evaluation and comparison

4.3.1 ShanghaiTech dataset

The ShanghaiTech crowd counting dataset contains 1198 annotated images with a total of 330,165 persons [3]. This dataset consists of two parts: Part A contains 482 images with highly congested scenes randomly downloaded from the Internet, while Part B includes 716 images with relatively sparse crowd scenes taken from streets in Shanghai. Our method is evaluated and compared to six other recent works, and the results are shown in Table 5. They indicate that our method achieves the lowest MAE (the highest accuracy) in Part A compared to the other methods, with 7% lower MAE than the state-of-the-art solution called CP-CNN. CSRNet also delivers 47.3% lower MAE in Part B compared to the CP-CNN. To evaluate the quality of the generated density maps, we compare our method to the MCNN and the CP-CNN on the Part A dataset, following the evaluation metrics in Sec. 3.2. Samples of the test cases can be found in Fig. 5. Results are shown in Table 6, which indicates that CSRNet achieves the highest SSIM and PSNR. We also report the quality results for the ShanghaiTech dataset in Table 11.

4.3.2 UCF CC 50 dataset

The UCF_CC_50 dataset includes 50 images with different perspectives and resolutions [22]. The number of annotated persons per image ranges from 94 to 4543, with an average of 1280. 5-fold cross-validation is performed following the standard setting in [22]. MAE and MSE comparisons are listed below:

Method                        MAE     MSE
Idrees et al. [22]            419.5   541.6
Zhang et al. [3]              467.0   498.5
MCNN [18]                     377.6   509.1
Onoro et al. [20] Hydra-2s    333.7   425.2
Onoro et al. [20] Hydra-3s    465.7   371.8
Walach et al. [36]            364.4   341.4
Marsden et al. [38]           338.6   424.5
Cascaded-MTL [39]             322.8   397.9
Switching-CNN [4]             318.1   439.2
CP-CNN [5]                    295.8   320.9
CSRNet (ours)                 266.1   397.5
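The 5-fold protocol on the 50 images reduces to the following sketch, where `train` and `evaluate` are hypothetical stand-ins for the routines sketched in Sec. 3.2 and Sec. 4.1; the grouping into five folds of 10 images follows the standard setting of [22].

```python
import numpy as np

indices = np.arange(50)          # one index per UCF_CC_50 image
folds = np.split(indices, 5)     # five groups of 10 images
scores = []
for i, test_idx in enumerate(folds):
    # Train on the other 40 images, test on the held-out 10.
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    model = train(train_idx)                   # hypothetical training routine
    scores.append(evaluate(model, test_idx))   # hypothetical; returns (MAE, MSE)
print(np.mean(scores, axis=0))   # reported MAE/MSE average the five folds
```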
Figure 7. Samples generated by CSRNet from the ShanghaiTech Part A [18] dataset. The left column shows the original images; the middle column displays the ground truth density maps; the right column shows our generated density maps.
Figure 8. Samples generated by CSRNet from the ShanghaiTech Part B [18] dataset. The left column shows the original images; the middle column displays the ground truth density maps; the right column shows our generated density maps.
Figure 9. Samples generated by CSRNet from the UCF_CC_50 [22] dataset. The left column shows the original images; the middle column displays the ground truth density maps; the right column shows our generated density maps.
Figure 10. Samples generated by CSRNet from the WorldExpo'10 [3] dataset. The left column shows the images masked by the ROI (region of interest); the middle column displays the ground truth density maps; the right column shows our generated density maps.
Figure 11. Samples generated by CSRNet from the UCSD [23] dataset. The left column shows the images masked by the ROI; the middle column displays the ground truth density maps; the right column shows our generated density maps.
Figure 12. Samples generated by CSRNet from the TRANCOS [44] dataset. The left column shows the images masked by the ROI; the middle column displays the ground truth density maps; the right column shows the generated density maps.