Deep Learning on Image Denoising: An Overview
Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo,
Chia-Wen Lin
PII: S0893-6080(20)30266-5
DOI: https://doi.org/10.1016/j.neunet.2020.07.025
a. Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen, 518055, Guangdong, China
b. Shenzhen Key Laboratory of Visual Object Detection and Recognition, Shenzhen, 518055, Guangdong, China
c. School of Computers, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
d. Tsinghua Shenzhen International Graduate School, Shenzhen, 518055, Guangdong, China
e. Peng Cheng Laboratory, Shenzhen, 518055, Guangdong, China
f. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
g. Department of Electrical Engineering and the Institute of Communications Engineering, National Tsing Hua University, Hsinchu, Taiwan
Abstract
Deep learning techniques have received much attention in the area of image denoising. However, there are substantial differences among the various types of deep learning methods dealing with image denoising. Specifically, discriminative learning based on deep learning can ably address the issue of Gaussian noise. Optimization models based on deep learning are effective in estimating the real noise. However, there has thus far been little related research to summarize the different deep learning techniques for image denoising. In this paper, we offer a comparative study of deep learning techniques in image denoising. We first classify the deep convolutional neural networks (CNNs) for additive white noisy images; the deep CNNs for real noisy images; the deep CNNs for blind denoising; and the deep CNNs for hybrid noisy images, which represent the combination of noisy, blurred and low-resolution images. Then, we analyze the motivations and principles of the different types of deep learning methods. Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analysis. Finally, we point out some potential challenges and directions of deep learning techniques for image denoising.
1. Introduction
Digital image devices have been widely applied in many fields, including recognition of individuals [117, 222, 223] and remote sensing [50]. A captured image is a degraded version of the latent observation, where the degradation process is affected by factors such as lighting and noise corruption [265, 254]. Specifically, noise is generated in the processes of transmission and compression from the unknown latent observation [231]. It is therefore essential to use image denoising techniques to remove the noise and recover the latent observation from the given degraded image.
∗ Corresponding author
Email address: yongxu@ymail.com (Yong Xu)
Image denoising techniques have attracted much attention over the last 50 years [19, 233]. At the outset, nonlinear and non-adaptive filters were used for image applications [87]. Nonlinear filters can preserve edge information while suppressing noise, unlike linear filters [170]. Adaptive nonlinear filters depend on local signal-to-noise ratios to derive an appropriate weighting factor for removing noise from an image corrupted by a combination of signal-dependent noise, impulse noise and additive random noise [19]. Non-adaptive filters can simultaneously use edge information and signal-to-noise ratio information to estimate the noise [82]. Over time, machine learning methods, such as sparsity-based methods, were successfully applied in image denoising [45]. A non-locally centralized sparse representation (NCSR) method used nonlocal self-similarity to optimize the sparse method, and obtained high performance for image denoising [49]. To reduce computational costs, a dictionary learning method was used to quickly filter the noise [54]. To recover the detailed information of the latent clean image, prior knowledge (i.e., total variation regularization) can smooth the noisy image in order to deal with the corrupted image [163, 178]. More competitive methods for image denoising can be found in [150, 275, 258], including the Markov random field (MRF) [184], weighted nuclear norm minimization (WNNM) [69], learned simultaneous sparse coding (LSSC) [150], the cascade of shrinkage fields (CSF) [184], trainable nonlinear reaction diffusion (TNRD) [35] and gradient histogram estimation and preservation (GHEP) [275].
Although most of the above methods have achieved reasonably good performance in image denoising, they suffer from several drawbacks [146], including the need for optimization methods in the test phase, manually set parameters, and a specific model for each single denoising task. Recently, as architectures became more flexible, deep learning techniques gained the ability to overcome these drawbacks [146].
The original deep learning techniques were first used in image processing in the 1980s [60] and were first used in image denoising by Zhou et al. [38, 273]. That is, the proposed denoising work first used a neural network with both a known shift-invariant blur function and additive noise to recover the latent clean image. After that, the neural network used weighting factors to remove complex noise [38]. To reduce the high computational costs, a feedforward network was proposed to make a tradeoff between denoising efficiency and performance [198]. The feedforward network smoothed the given corrupted image with Kuwahara filters, which are similar to convolutions. In addition, this research showed that the mean squared error (MSE), acting as a loss function, was not unique to neural networks [48, 68]. Subsequently, more optimization algorithms were used to accelerate the convergence of the trained network and to promote denoising performance [16, 47, 61]. The combination of maximum entropy and primal-dual Lagrangian multipliers to enhance the expressive ability of neural networks proved to be a good tool for image denoising [15]. To further balance fast execution and denoising performance, greedy algorithms and asynchronous algorithms were applied in neural networks [164]. Alternatively, designing a novel network architecture proved to be very competitive in eliminating noise, through either increasing the depth or changing the activation function [190]. Cellular neural networks (CeNNs) mainly used nodes with templates to obtain an averaging function and effectively suppress noise [190, 161]. Although this method can obtain good denoising results, it requires the parameters of the templates to be set manually. To resolve this problem, gradient-descent-based learning was developed [252, 114]. To a certain degree, these deep techniques can improve denoising performance. However, these networks did not easily allow the addition of new plug-in units, which limited their applications in the real world [59].
Based on the reasons above, convolutional neural networks (CNNs) were proposed [141, 180]. An early CNN, LeNet, found real-world application in handwritten digit recognition [113]. However, due to the following drawbacks, CNNs were not widely applied at the time [109]. First, deep CNNs can suffer from vanishing gradients. Second, activation functions such as sigmoid [154] and tanh [92] resulted in high computational cost. Third, the hardware platforms did not support complex networks. However, that changed in 2012 with AlexNet in that year's ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [109]. After that, deep network architectures (e.g., VGG [189] and GoogLeNet [196]) were widely applied in the fields of image processing [225, 217, 119], video [138, 248], natural language processing [52] and speech processing [267], especially low-level computer vision [169, 204].
Deep networks were first applied in image denoising in 2015 [129, 234]. The proposed network did not require manually setting parameters for removing the noise. After that, deep networks were widely applied in speech [268], video [247] and image restoration [209, 176]. Mao et al. [152] used multiple convolutions and deconvolutions to suppress the noise and recover the high-resolution image. To address multiple low-level tasks with one model, a denoising CNN (DnCNN) [258] consisting of convolutions, batch normalization (BN) [89], rectified linear units (ReLU) [159] and residual learning (RL) [75] was proposed to deal with image denoising, super-resolution and JPEG image deblocking. Taking into account the tradeoff between denoising performance and speed, a color non-local network (CNLNet) [116] combined non-local self-similarity (NLSS) and a CNN to efficiently remove color-image noise.
In terms of blind denoising, a fast and flexible denoising CNN (FFDNet) [260] took different noise levels and the noisy image patch as the input of a denoising network to improve denoising speed and handle blind denoising. For handling unpaired noisy images, a generative adversarial network (GAN) CNN blind denoiser (GCBD) [31] resolved this problem by first generating the ground truth, then inputting the obtained ground truth into the GAN to train the denoiser. Alternatively, a convolutional blind denoising network (CBDNet) [71] removed the noise from a given real noisy image with two sub-networks, one in charge of estimating the noise of the real noisy image, and the other for obtaining the latent clean image. For more complex corrupted images, a deep plug-and-play super-resolution (DPSR) method [262] was developed to estimate the blur kernel and noise, and recover a high-resolution image. Although other important research has been conducted in the field of image denoising in recent years, there have been only a few reviews summarizing deep learning techniques for image denoising [205]. Although Ref. [205] referred to a good deal of work, it lacked more detailed classification information about deep learning for image denoising. For example, related work pertaining to unpaired real noisy images was not covered. To this end, we aim to provide an overview of deep learning for image denoising, in terms of both applications and analysis. Finally, we discuss the state-of-the-art methods for image denoising, including how they can be further expanded to respond to the challenges of the future, as well as potential research directions. An outline of this survey is shown in Fig. 1.
This overview covers more than 200 papers about deep learning for image denoising in recent
years. The main contributions in this paper can be summarized as follows.
1. The overview illustrates the effects of deep learning methods on the field of image denoising.
Figure 1: Outline of the survey. It consists of four parts, including basic frameworks, categories, performance comparison, and challenges and potential directions. Specifically, categories comprise additive white noisy images, real noisy images, blind denoising and hybrid noisy images; additive white noisy image denoising is further divided into weak edge-information, non-linear, high-dimensional and non-salient noisy-image denoising, as well as denoising under high computational cost (denoising speed and denoising performance).
2. The overview summarizes the solutions of deep learning techniques for different types of noise (i.e., additive white noise, blind noise, real noise and hybrid noise) and analyzes the motivations and principles of these methods in image denoising, where blind noise denotes noise of unknown types. Finally, we evaluate the denoising performance of these methods in terms of quantitative and qualitative analysis.
3. The overview points out some potential challenges and directions for deep learning in image denoising.
The rest of this overview is organized as follows.
Section 2 discusses the popular deep learning frameworks for image applications. Section
3 presents the main categories of deep learning in image denoising, as well as a comparison and
analysis of these methods. Section 4 offers a performance comparison of these denoising methods.
Section 5 discusses the remaining challenges and potential research directions. Section 6 offers
the authors’ conclusions.
2.1. Machine learning methods for image denoising
Machine learning methods consist of supervised, semi-supervised and unsupervised learning methods. Supervised learning methods [133, 228, 120] use given labels to push the learned features closer to the target, thereby learning parameters and training the denoising model. For example, take a given denoising model $y = x + \mu$, where $x$, $y$ and $\mu$ represent the clean image, the noisy image and additive white Gaussian noise (AWGN) of standard deviation $\sigma$, respectively. From the equation above and Bayesian knowledge, it can be seen that learning the parameters of the denoising model relies on the pairs $\{x_k, y_k\}_{k=1}^{N}$, where $x_k$ and $y_k$ denote the $k$th clean image and noisy image, respectively, and $N$ is the number of noisy images. This process can be expressed as $x_k = f(y_k, \theta, m)$, where $\theta$ denotes the parameters and $m$ denotes the given noise level.
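As a concrete illustration of this supervised setup, the following PyTorch sketch synthesizes a pair $(x_k, y_k)$ by adding AWGN to a clean patch and updates a small CNN denoiser with an MSE loss. The three-layer network and all hyper-parameters are placeholders for illustration, not any particular published model.

```python
import torch
import torch.nn as nn

# A toy denoiser mapping a noisy grayscale image y to an estimate of the clean image x.
denoiser = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
mse = nn.MSELoss()

def training_step(clean_batch, sigma=25.0):
    # Synthesize the noisy observation y = x + mu, with mu ~ N(0, sigma^2).
    noisy_batch = clean_batch + (sigma / 255.0) * torch.randn_like(clean_batch)
    optimizer.zero_grad()
    restored = denoiser(noisy_batch)
    loss = mse(restored, clean_batch)   # match the estimate to the clean label x
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one step on a random batch of 16 grayscale 40x40 patches.
loss = training_step(torch.rand(16, 1, 40, 40))
```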
Unsupervised learning methods [115] use given training samples to find patterns rather than matching labels, and accomplish specific tasks, such as handling unpaired real low-resolution images [250]. The recently proposed Cycle-in-Cycle GAN (CinCGAN) recovered a high-resolution image by first estimating the high-resolution image as a label, then exploiting the obtained label and a loss function to train the super-resolution model.
Semi-supervised learning methods [40] apply a model learned from a given data distribution to build a learner for labeling unlabeled samples. This mechanism is favored for small-sample tasks, such as medical diagnosis. A semi-supervised learned sinogram restoration network (SLSR-Net) can learn the feature distribution from paired sinograms via a supervised network and then convert the obtained feature distribution to high-fidelity sinograms from unlabeled low-dose sinograms via an unsupervised network [157].
where n is the final layer of the neural network. To help readers understand the principle of the
neural network, a visual example is provided in Fig. 2.
Figure 2: Two-layer neural network (inputs x1, x2, x3; hidden neurons h1, h2, h3; output o1; weights w1-w12; biases b1-b4).
The two-layer fully connected neural network includes two layers: a hidden layer and an output layer (the input layer is not generally regarded as a layer of a neural network). There are parameters to be defined: $x_1$, $x_2$, $x_3$ and $o_1$ represent the inputs and the output of this neural network, respectively; $w_1, w_2, \ldots, w_{12}$ and $b_1, b_2, b_3, b_4$ are the weights and biases, respectively. For example, the output of one neuron, $h_1$, is obtained as $h_1 = g(w_1 x_1 + w_2 x_2 + w_3 x_3 + b_1)$, where $g$ denotes the activation function. The network then uses back-propagation (BP) [81] and a loss function to learn its parameters. That is, when the loss value is within a specified limit, the trained model is considered well-trained. It should be noted that if the number of layers of a neural network is more than three, it is also referred to as a deep neural network.
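A NumPy sketch of the two-layer fully connected network of Fig. 2 (three inputs, three hidden neurons, one output) is given below; the sigmoid activation and the random weights are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs x1..x3, hidden neurons h1..h3, single output o1, as in Fig. 2.
x = np.array([0.5, 0.1, -0.2])          # x1, x2, x3
W1 = np.random.randn(3, 3)              # weights w1..w9  (input  -> hidden)
b1 = np.random.randn(3)                 # biases  b1..b3
W2 = np.random.randn(1, 3)              # weights w10..w12 (hidden -> output)
b2 = np.random.randn(1)                 # bias    b4

h = sigmoid(W1 @ x + b1)                # e.g. h1 = g(w1*x1 + w2*x2 + w3*x3 + b1)
o1 = sigmoid(W2 @ h + b2)               # network output
print(o1)
```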
Stacked auto-encoders (SAEs) [80] and deep belief networks (DBNs) [17, 79] are typical deep neural networks. They use stacked layers in an unsupervised manner to train the models and obtain good performance. However, these networks are not simple to implement and require a good deal of manual tuning to achieve an optimal model. Because of this, end-to-end connected networks, especially CNNs, were proposed [241]. CNNs have wide applications in the field of image processing, especially image denoising.
However, due to the Sigmoid activation function, LeNet had a slow convergence speed, which was
a shortcoming in real-world applications.
After LeNet, the proposed AlexNet [109] was a milestone for deep learning. Its success was due to several reasons. First, the graphics processing unit (GPU) [154] provided strong computational ability. Second, randomly dropping units (i.e., dropout) mitigated the overfitting problem. Third, ReLU [159] accelerated stochastic gradient descent (SGD) compared with sigmoid [22]. Fourth, data augmentation further addressed the overfitting problem. Although AlexNet achieved good performance, it required substantial memory due to its large convolutional kernels, which limited its real-world applications, such as in smart cameras. After that, during the period from 2014 to 2016, deeper network architectures with small filters were preferred to improve performance and reduce computational costs. Specifically, VGG [189] stacked more convolutions with small kernel sizes and won the ImageNet Large-Scale Visual Recognition Challenge in 2014. Fig. 3 depicts the network architecture.
Figure 3: Network architecture of VGG.
With the success of deeper networks, the research turned to increasing their width. GoogLeNet
[196] increased the width to improve the performance for image applications. Moreover, GoogLeNet
transformed a large convolutional kernel into two smaller convolution kernels in order to reduce
the number of parameters and the computational cost. GoogLeNet also used the inception module, which applies several convolutions of different kernel sizes (including 1×1 convolutions) to the previous layer in parallel and concatenates their outputs. However, widening a network increases its computational burden and, when the network is very wide, it may be subject to the phenomenon of overfitting. To overcome these
problems, ResNet [75] was proposed in 2016. Each block in ResNet adds a residual learning operation (an identity skip connection) to improve the performance of image recognition, which led to ResNet winning the ILSVRC in 2015. Fig. 5 depicts the concept of residual learning.
Figure 5: Network architecture of ResNet. Each residual block passes the input x through weight layers to obtain f(x) and adds the identity mapping, producing f(x) + x.
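A minimal PyTorch sketch of the residual block in Fig. 5 is shown below: the block output is the learned mapping f(x) plus the identity x. The two-convolution body and the channel width are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block: output = ReLU(f(x) + x), with f(x) given by two stacked conv layers."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        fx = self.conv2(self.relu(self.conv1(x)))  # f(x)
        return self.relu(fx + x)                   # add the identity mapping back

y = ResidualBlock()(torch.rand(1, 64, 32, 32))
```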
Since 2014, deep networks have been widely used in real-world image applications, such as facial recognition [86] and medical diagnosis [124]. However, in many applications, captured images, such as real noisy images, are not sufficient, and deep CNNs tend to perform poorly. For this reason, GANs [173] were developed. A GAN consists of two networks: a generative network and a discriminative network. The generative network (also referred to as the generator) is used to generate samples according to the input samples. The discriminative network (also called the discriminator) is used to judge the truth of both the input samples and the generated samples. The two networks are adversarial. Note that when the discriminator can no longer reliably distinguish real samples from the samples produced by the generator, the model is regarded as trained. The network architecture of the GAN can be seen in Fig. 6. Due to its ability to construct supplemental training samples, the GAN is very effective for small-sample tasks, such as facial recognition [211] and complex noisy image denoising [31]. The CNNs mentioned above are basic networks for image denoising.
Figure 6: Network architecture of GAN.
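The adversarial training described above can be sketched as follows; the fully connected generator and discriminator and all hyper-parameters are toy placeholders, not the architecture of any denoising GAN cited here.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for flattened 32x32 grayscale patches.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    batch = real_batch.size(0)
    fake_batch = G(torch.randn(batch, 64))

    # Discriminator update: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real_batch), torch.ones(batch, 1)) + \
             bce(D(fake_batch.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake_batch), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

gan_step(torch.rand(16, 32 * 32) * 2 - 1)   # one adversarial update on random "real" data
```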
Deep learning software can provide interfaces to call the GPU. Popular software packages include:
(1) Caffe [96], based on C++, provides C++, Python and Matlab interfaces and can run on both the CPU and GPU. It is widely used for object detection tasks. However, it requires developers to master C++.
(2) Theano [18] is a compiler of mathematical expressions for dealing with large-scale neural networks. Theano provides a Python interface and is used in image super-resolution, denoising and classification.
(3) MatConvNet [214] offers a Matlab interface. It is utilized in image classification, denoising, super-resolution and video tracking. However, it requires Matlab mastery.
(4) TensorFlow [1] is a relatively high-level machine learning library. It compiles faster than Theano. TensorFlow offers C++ and Python interfaces and is used in object detection, image classification, denoising and super-resolution.
(5) Keras [41], based on TensorFlow and Theano, is implemented in Python and offers a Python interface. It can be used in image classification, object detection, image super-resolution, image denoising and action recognition.
(6) PyTorch [168] is implemented in Python and offers a Python interface. It is employed in image classification, object detection, image segmentation, action recognition, image super-resolution, image denoising and video tracking.
Due to the insufficiency of real noisy images, additive white noisy images (AWNIs) are widely used to train denoising models [102]. AWNIs include Gaussian, Poisson, salt, pepper and multiplicative noisy images [56]. There are several deep learning techniques for AWNI denoising, including CNN/NN; the combination of CNN/NN and common feature extraction methods; and the combination of optimization methods and CNN/NN.
Table 1: CNNs for AWNI denoising.
References Methods Applications Key words (remarks)
Tai et al. (2017) [197] CNN Gaussian image denoising, super-resolution and JPEG deblocking CNN with recursive unit and gate unit for image restoration
Anwar et al. (2017) [12] CNN Gaussian image denoising CNN with fully connected layer, RL and dilated convolutions for image denoising
McCann et al. (2017) [102] CNN Inverse problems (i.e., denoising, deconvolution, super-resolution) CNN for inverse problems
Ye et al. (2018) [242] CNN Inverse problems (i.e., Gaussian image denoising, super-resolution) Signal processing ideas guide CNN for inverse problems
Yuan et al. (2018) [249] CNN Hyper-spectral image denoising CNN with multiscale and multilevel feature techniques for hyper-spectral image denoising
Jiang et al. (2018) [98] CNN Gaussian image denoising Multi-channel CNN for image denoising
Chang et al. (2018) [28] CNN Hyper-spectral image (HSI) denoising, HSI restoration CNN combining spectral and spatial information for hyper-spectral image denoising
Jeon et al. (2018) [93] CNN Speckle noise reduction from digital holographic images Multi-scale CNN for speckle noise reduction in digital holographic images
Gholizadeh-Ansari et al. (2018) [63] CNN Low-dose CT image denoising, X-ray image denoising CNN with dilated convolutions for low-dose CT image denoising
Uchida et al. (2018) [213] CNN Non-blind image denoising CNN with residual learning for non-blind image denoising
Xiao et al. (2018) [227] CNN Stripe noise reduction of infrared cloud images CNN with skip connection for infrared-cloud-image denoising
Chen et al. (2018) [34] CNN Gaussian image denoising, blind denoising CNN based on RL and perceptual loss for edge enhancement
Yu et al. (2018) [246] CNN Seismic, random, linear and multiple noise reduction of images A survey on deep learning for three applications
Yu et al. (2018) [245] CNN Optical coherence tomography (OCT) image denoising GAN with dense skip connection for OCT image denoising
Li et al. (2018) [118] CNN Ground-roll noise reduction An overview of deep learning techniques on ground-roll noise
Abbasi et al. (2018) [2] CNN OCT image denoising Fully convolutional CNN with multiple inputs and RL for OCT image denoising
Zarshenas et al. (2018) [253] CNN Gaussian noisy image denoising Deep CNN with internal and external residual learning for image denoising
Chen et al. (2018) [29] CNN Gaussian noisy image denoising CNN with recursive operations for image denoising
Panda et al. (2018) [165] CNN Gaussian noisy image denoising CNN with exponential linear units and dilated convolutions for image denoising
Sheremet et al. (2018) [186] CNN Image denoising for info-communication systems CNN for image denoising in info-communication systems
Chen et al. (2018) [30] CNN Aerial-image denoising CNN with multi-scale technique and RL for aerial-image denoising
Pardasani et al. (2018) [166] CNN Gaussian, Poisson or any additive white noise reduction CNN with BN for image denoising
Couturier et al. (2018) [42] NN Gaussian and multiplicative speckle noise reduction Encoder-decoder network with multiple skip connections for image denoising
Park et al. (2018) [167] CNN Gaussian noisy image denoising CNN with dilated convolutions for image denoising
Priyanka et al. (2019) [172] CNN Gaussian noisy image denoising CNN with symmetric network architecture for image denoising
Lian et al. (2019) [194] CNN Poisson noisy image denoising CNN with multi-scale and multiple skip connections for Poisson image denoising
Tripathi et al. (2018) [212] CNN Gaussian noisy image denoising GAN for image denoising
Zheng et al. (2019) [271] CNN Gaussian noisy image denoising CNN for image denoising
Tian et al. (2019) [204] CNN Gaussian noisy image denoising CNN for image denoising
Remez et al. (2018) [175] CNN Gaussian and Poisson image denoising CNN for image denoising
Tian et al. (2020) [207] CNN Gaussian image denoising and real noisy image denoising CNN with BRN, RL and dilated convolutions for image denoising
Tian et al. (2020) [206] CNN Gaussian image denoising, blind denoising and real noisy image denoising CNN with attention mechanism and sparse method for image denoising
Tian et al. (2020) [208] CNN Gaussian image denoising, blind denoising and real noisy image denoising Two CNNs with sparse method for image denoising
A DnCNN [258] was proposed to address multiple low-level vision tasks, i.e., image denoising, super-resolution and deblocking, through a CNN, batch normalization [89] and residual learning techniques [75]. Wang et al. [219], Bae et al. [13] and Jifara et al. [101] also introduced residual learning into deeper CNNs for image denoising. However, deeper CNNs relied on deep layers rather than shallow layers, which resulted in a long-term dependency problem. Several signal-based methods were proposed to resolve this problem. Tai et al. [197] exploited recursive and gate units to adaptively mine more accurate features and recover clean images. Inspired by the low-rank Hankel matrix in low-level vision, Ye et al. [242] provided convolution frames to explain the connection between signal processing and deep learning by convolving local and nonlocal bases. To cope with the insufficiency of certain noisy images (e.g., hyperspectral and medical images), several recent works attempted to extract more useful information through improved CNNs [28, 77, 245, 139]. For example, Yuan et al. [249] combined a deep CNN, residual learning and multiscale knowledge to remove the noise from hyperspectral noisy images. However, these CNNs tended to increase computational costs and memory consumption, which was not conducive to real-world applications. To address this issue, Gholizadeh et al. [63] utilized dilated convolutions [62] to enlarge the receptive field and reduce the depth of the network without incurring extra costs for CT image denoising. Lian et al. [194] proposed a residual network with multi-scale cross-path concatenation to suppress the noise. Most of the above methods relied on improved CNNs to deal with the noise. Therefore, designing network architectures is important for image denoising [167, 118].
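The following PyTorch sketch illustrates the DnCNN-style design discussed above: a stack of Conv+BN+ReLU layers with residual learning, i.e., the network predicts the noise and subtracts it from the input. The depth and channel width are illustrative assumptions and do not reproduce any exact published configuration.

```python
import torch
import torch.nn as nn

class DnCNNStyle(nn.Module):
    """DnCNN-style denoiser: Conv+BN+ReLU stack that predicts the noise map,
    so the clean estimate is the noisy input minus the predicted noise (residual learning)."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        predicted_noise = self.body(noisy)
        return noisy - predicted_noise     # residual learning: subtract the predicted noise

clean_estimate = DnCNNStyle()(torch.rand(1, 1, 64, 64))
```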
Changing network architectures involves the following methods [246, 149, 177]: fusing features from multiple inputs of a CNN; changing the loss function; increasing the depth or width of the CNN; adding auxiliary plug-ins into CNNs; and introducing skip connections or cascade operations into CNNs. Specifically, the first method includes three types: different parts of one sample as multiple inputs to different networks [2]; different perspectives of one sample as input, such as multiple scales [93, 30]; and different channels of a CNN as input [98]. The second method involves designing different loss functions according to the characteristics of natural images to extract more robust features [10]. For example, Chen et al. [34] combined Euclidean and perceptual loss functions to mine more edge information for image denoising. The third method enlarged the receptive field to improve denoising performance by increasing the depth or width of the network [213, 253, 186]. The fourth method applied plug-ins, such as activation functions, dilated convolutions, fully connected layers and pooling operations, to enhance the expressive ability of the CNN [165, 172, 166]. The fifth method utilized skip connections [227, 29, 42, 12] or cascade operations [194, 36] to provide complementary information for the deep layers of a CNN. Table 1 provides an overview of CNNs for AWNI denoising.
3.1.2. CNN/NN and common feature extraction methods for AWNI denoising
Feature extraction is used to represent the entire image in image processing, and it is important for machine learning [130, 144, 238]. However, because deep learning techniques are black-box techniques, they do not allow an explicit choice of features and therefore cannot guarantee that the obtained features are the most robust [187, 221]. Motivated by this problem, researchers embedded common feature extraction methods into CNNs for image denoising. They did this for five reasons: weak edge-information noisy images, non-linear noisy images, high-dimensional noisy images, non-salient noisy images, and high computational costs.
For weak edge-information noisy images, CNNs with transform-domain methods were proposed by Guan et al. [70], Li et al. [122], Liu et al. [137], Latif et al. [111] and Yang et al. [237]. However, they were not effective in removing the noise. Specifically, in [137], the proposed solution used the wavelet method and a U-Net to eliminate the gridding effect of dilated convolutions when enlarging the receptive field for image restoration.
For non-linear noisy images, CNNs with kernel methods proved useful [14, 235]. These methods mostly consisted of three steps [158]. The first step used a CNN to extract features. The second step utilized the kernel method to convert the obtained non-linear features into linear ones. The third step exploited residual learning to construct the latent clean image.
For high-dimensional noisy images, the combination of a CNN and a dimensionality reduction method was proposed [229, 72]. For example, Khaw et al. [106] used a CNN with principal component analysis (PCA) for image denoising. This consisted of three steps: the first step used convolution operations to extract features; the second step utilized PCA to reduce the dimension of the obtained features; and the third step employed convolutions to process the reduced features from the PCA and reconstruct a clean image.
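A rough sketch of this three-step CNN+PCA idea is given below. It is only an illustration under assumptions (a single convolutional layer for feature extraction, scikit-learn's PCA for the reduction step, and an arbitrary 32-to-8 channel reduction), not the exact method of [106].

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Step 1: convolution layers extract features from the noisy image.
extract = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
# Step 3: a convolution layer maps the reduced features back to a clean estimate.
reconstruct = nn.Conv2d(8, 1, 3, padding=1)

def denoise_with_pca(noisy):                      # noisy: (1, 1, H, W)
    feats = extract(noisy).squeeze(0)             # (32, H, W)
    c, h, w = feats.shape
    # Step 2: PCA reduces the 32 feature channels to 8 per pixel.
    flat = feats.permute(1, 2, 0).reshape(-1, c).detach().numpy()    # (H*W, 32)
    reduced = PCA(n_components=8).fit_transform(flat)                 # (H*W, 8)
    reduced = torch.from_numpy(reduced).float().reshape(h, w, 8).permute(2, 0, 1)
    return reconstruct(reduced.unsqueeze(0))      # clean-image estimate

out = denoise_with_pca(torch.rand(1, 1, 64, 64))
```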
For non-salient noisy images, signal processing can guide the CNN in extracting salient fea-
tures [94, 103, 174, 2]. Specifically, skip connection is a typical operation of signal processing
[103].
Table 2: CNN/NN and common feature extraction methods for AWNI denoising.
References Methods Applications Key words (remarks)
Bako et al. (2017) [194] CNN Monte Carlo-rendered image denoising CNN with kernel method for estimating noisy pixels
Ahn et al. (2017) [7] CNN Gaussian image denoising CNN with NSS for image denoising
Khaw et al. (2017) [106] CNN Impulse noise reduction CNN with PCA for image denoising
Vogel et al. (2017) [215] CNN Gaussian image denoising U-Net with multi-scale technique for image denoising
Mildenhall et al. (2018) [158] NN Low-light synthetic noisy image denoising, real noise Encoder-decoder with multi-scale and kernel methods for image denoising
Liu et al. (2018) [137] CNN Gaussian image denoising, super-resolution and JPEG deblocking U-Net with wavelet for image restoration
Yang et al. (2018) [237] CNN Gaussian image denoising CNN with BM3D for image denoising
Guo et al. (2018) [72] CNN Image deblurring and denoising CNN with RL and sparse method for image denoising
Jia et al. (2018) [94] CNN Gaussian image denoising CNN with multi-scale and dense RL operations for image denoising
Ran et al. (2018) [174] CNN OCT image denoising, OCT image super-resolution CNN with multiple views for image restoration
Li et al. (2018) [140] CNN Medical image denoising, stomach pathological image denoising CNN combined with wavelet for medical image denoising
Ahn et al. (2018) [8] CNN Gaussian image denoising CNN with NSS for image denoising
Xie et al. (2018) [229] CNN Hyper-spectral image denoising CNN with RL and PCA for hyper-spectral image denoising
Kadimesetty et al. (2019) [103] CNN Low-dose computed tomography (CT) image denoising CNN with RL and batch normalization (BN) for medical image denoising
Guan et al. (2019) [70] CNN Stripe noise reduction CNN with wavelet for image denoising
Abbasi et al. (2019) [2] NN 3D magnetic resonance image denoising, medical image denoising GAN based on encoder-decoder and RL for medical image denoising
Xu et al. (2019) [235] CNN Synthetic and real noisy image and video denoising CNN based on deformable kernels for image and video denoising
For tasks involving high computational costs, a CNN exploiting the relations between pixels of an image was very effective in decreasing complexity [2, 8, 7]. For example, Ahn et al. [7] used a CNN with non-local self-similarity (NSS) to filter the noise, where similar characteristics of the given noisy image can accelerate feature extraction and reduce computational costs. More detailed information on the methods mentioned can be found in Table 2.
Table 3: The combination of the optimization method and CNN/NN for AWNI denoising.
References Methods Applications Key words (remarks)
Hong et al. (2018) [83] CNN Gaussian image denoising Auto-encoder with BN and ReLU for image denoising
Cho et al. (2018) [39] CNN Gaussian image denoising CNN with separable convolution and gradient prior for image denoising
Fu et al. (2018) [58] CNN Salt-and-pepper noise removal CNN with non-local switching filter for salt-and-pepper noise
Yeh et al. (2018) [243] CNN Image denoising, super-resolution and inpainting GAN with MAP for image restoration
Liu et al. (2018) [136] CNN Medical image denoising, computed tomography perfusion image denoising CNN with genetic algorithm for medical image denoising
Tassano et al. (2019) [202] CNN Gaussian image denoising CNN with noise level, upscaling and downscaling operations for image denoising
Heckel et al. (2018) [76] CNN Image denoising CNN with deep prior for image denoising
Jiao et al. (2017) [100] CNN Gaussian image denoising, image inpainting CNN with inference and residual operations for image restoration
Wang et al. (2017) [218] CNN Image denoising CNN with total variation for image denoising
Li et al. (2019) [128] CNN Image inpainting CNN with split Bregman iteration algorithm for image inpainting
Sun et al. (2018) [195] CNN Gaussian image denoising GAN with skip connections and ResNet blocks for image denoising
Zhi et al. (2018) [272] CNN Gaussian image denoising GAN with multiscale for image denoising
Du et al. (2018) [51] CNN Gaussian image denoising CNN with wavelet for medical image restoration
Liu et al. (2019) [140] CNN Gaussian image denoising, real noisy image denoising, rain removal Dual CNN with residual operations for image restoration
Khan et al. (2019) [105] CNN Symbol denoising CNN with quadrature amplitude modulation for symbol denoising
Zhang et al. (2019) [266] CNN Poisson image denoising CNN with variance-stabilizing transformation for Poisson denoising
Cruz et al. (2018) [43] CNN Gaussian image denoising CNN with nonlocal filter for image denoising
Jia et al. (2019) [95] CNN Gaussian image denoising CNN based on a fractional-order differential equation for image denoising
To improve denoising performance, CNNs combined with optimization methods were used to smooth noisy images [76, 65, 100]. A CNN with total variation denoising reduced the effect of noisy pixels [218]. Combining the split Bregman iteration algorithm and a CNN [128] can enhance pixels through image depth to obtain a latent clean image. A dual-stage CNN with feature matching can better recover the detailed information of the clean image, especially for noisy images [195]. A GAN with the nearest neighbor algorithm was effective in separating noisy images from clean images [272]. A combined CNN used wavefront coding to enhance the pixels of latent clean images via the transform domain [51]. Other effective denoising methods are shown in [140, 105, 66]. Table 3 shows detailed information about the combination of optimization methods and CNN/NN in AWNI denoising.
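As an illustration of coupling a CNN with a total variation prior in the spirit of [218], the sketch below adds an anisotropic TV penalty on the network output to the usual data-fidelity loss; the weighting factor is an arbitrary assumption.

```python
import torch

def total_variation(img):
    """Anisotropic total variation of a batch of images (N, C, H, W)."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()   # vertical differences
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()   # horizontal differences
    return dh + dw

def denoising_loss(restored, clean, tv_weight=1e-4):
    # Data-fidelity term plus a smoothness prior on the restored image.
    fidelity = torch.nn.functional.mse_loss(restored, clean)
    return fidelity + tv_weight * total_variation(restored)

loss = denoising_loss(torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32))
```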
Multiscale knowledge is effective for removing noise from a given real corrupted image. For example, a CNN consisting of convolutions, ReLU and RL employed features from different phases to enhance the expressive ability of a low-light image denoising model [201]. To overcome blurry and false image artifacts, a dual U-Net with skip connections was proposed for computed tomography (CT) image reconstruction [73]. To address the problem of resource constraints, Tian et al. [207] used a dual CNN with batch renormalization [88], RL and dilated convolutions to deal with real noisy images. Based on the nature of light images, two CNNs utilized anisotropic parallax analysis to generate structural parallax information for real noisy images [32]. Using a CNN to handle remote sensing [97] and medical images [107] under low-light conditions also proved effective [99]. To extract more detailed information, recurrent connections were used to enhance the representative ability for corrupted images in the real world [64, 269]. To deal with unknown real noisy images, a residual structure was utilized to facilitate low-frequency features, and then an attention mechanism [206] could be applied to extract more potential features from channels [11]. To produce noisy images, one technique imitated camera pipelines to construct the degradation model in order to filter real noisy images [91]. To tackle the problem of unpaired noisy images, an unsupervised learning method embedded into the CNN proved effective in image
denoising [44]. The self-consistent GAN [236] first used a CNN to estimate the noise of a given noisy image as a label, and then applied another CNN and the obtained label to remove the noise from other noisy images. This concept has also been extended to general CNNs. The Noise2Inverse method used a CNN to predict the value of a noisy pixel according to its surrounding noisy pixels [78]. An attention mechanism merged into a 3D self-supervised network can improve the efficiency of removing the noise from noisy medical images [123]. More detailed information about the above research is shown in Table 4.
Table 4: CNNs for real noisy image denoising.
References Methods Applications Key words (remarks)
Tao et al. (2019) [201] CNN Real noisy image denoising, low-light image enhancement CNN with ReLU and RL for real noisy image denoising
Chen et al. (2018) [31] CNN Real noisy image denoising, blind denoising GAN for real noisy image denoising
Han et al. (2018) [73] CNN CT image reconstruction U-Net with skip connection for CT image reconstruction
Chen et al. (2018) [32] CNN Real noisy image denoising CNNs with anisotropic parallax analysis for real noisy image denoising
Jian et al. (2018) [97] CNN Low-light remote sensing image denoising CNN for image denoising
Khoroushadi et al. (2019) [107] CNN Medical image denoising, CT image denoising CNN for image denoising
Jiang et al. (2018) [99] CNN Low-light image enhancement CNN with symmetric pathways for low-light image enhancement
Godard et al. (2018) [64] CNN Real noisy image denoising CNN with recurrent connections for real noisy image denoising
Zhao et al. (2019) [269] CNN Real noisy image denoising CNN with recurrent connections for real noisy image denoising
Anwar et al. (2019) [11] CNN Real noisy image denoising CNN with RL and attention mechanism for real noisy image denoising
Jaroensri et al. (2019) [220] CNN Real noisy image denoising CNN for real noisy image denoising
Green et al. (2018) [67] CNN CT image denoising, real noisy image denoising CNN for real noisy image denoising
Brooks et al. (2019) [24] CNN Real noisy image denoising CNN with image processing pipeline for real noisy image denoising
Tian et al. (2020) [207] CNN Gaussian image denoising and real noisy image denoising CNN with BRN, RL and dilated convolutions for image denoising
Tian et al. (2020) [206] CNN Gaussian image denoising, blind denoising and real noisy image denoising CNN with attention mechanism and sparse method for image denoising
Tian et al. (2020) [208] CNN Gaussian image denoising, blind denoising and real noisy image denoising Two CNNs with sparse method for image denoising
Cui et al. (2019) [44] CNN Positron emission tomography image denoising, real noisy image denoising Unsupervised CNN for unpaired real noisy image denoising
Yan et al. (2019) [236] CNN Real noisy image denoising Self-supervised GAN for unpaired real noisy image denoising
Broaddus et al. (2020) [23] CNN Blind denoising and real noisy image denoising Self-supervised CNN for unpaired fluorescence microscopy image denoising
Li et al. (2020) [123] CNN CT noisy image denoising Self-supervised CNN with attention mechanism for unpaired CT image denoising
Hendriksen et al. (2020) [78] CNN CT noisy image denoising Self-supervised CNN for unpaired CT image denoising
Wu et al. (2020) [224] CNN CT noisy image denoising Self-supervised CNN for unpaired dynamic CT image denoising
Methods combining a CNN and prior knowledge can better handle both speed and complex noise in real noisy images. Zhang et al. [259] proposed using half quadratic splitting (HQS) and a CNN to estimate the noise from a given real noisy image. Guo et al. [71] proposed a three-phase denoising method. The first phase used Gaussian noise and an in-camera processing pipeline to synthesize noisy images; the synthetic and real noisy images were merged to better represent real noisy images. The second phase used a sub-network with asymmetric and total variation losses to estimate the noise of the real noisy image. The third phase exploited the original noisy image and the estimated noise to recover the latent clean image. To address the problem of unpaired noisy images, the combination of a CNN and prior knowledge in a semi-supervised way was developed [157]. A hierarchical deep GAN (HD-GAN) first used a clustering algorithm to classify each patient's CT images into multiple categories, then built a dataset by collecting the images of the same categories from different patients; finally, the GAN was used to deal with the obtained dataset for image denoising and classification [40]. A similar method performed well in 3D mapping [185]. A CNN with channel prior knowledge was effective for low-light image enhancement [200]. Table 5 shows detailed information about the above research.
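The two-subnetwork idea described for CBDNet [71] can be sketched as follows: one subnetwork estimates a noise map from the real noisy image, and a second subnetwork takes the noisy image concatenated with that estimate and recovers the clean image. The architectures and channel sizes are placeholders, and the asymmetric and total variation losses of the original method are omitted.

```python
import torch
import torch.nn as nn

# Subnetwork 1: estimates a per-pixel noise map from the real noisy image.
noise_estimator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
# Subnetwork 2: denoises using the noisy image concatenated with the estimated noise map.
denoiser = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

def two_stage_denoise(noisy):
    noise_map = noise_estimator(noisy)                     # phase: noise estimation
    return denoiser(torch.cat([noisy, noise_map], dim=1))  # phase: clean-image recovery

clean_estimate = two_stage_denoise(torch.rand(1, 3, 64, 64))
```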
Table 5: The combination of CNNs and prior knowledge for real noisy image denoising.
References Methods Applications Key words (remarks)
Ma et al. (2018) [148] CNN Tomography image denoising GAN with edge prior for CT image denoising
Yue et al. (2018) [251] CNN Real noisy image denoising, blind denoising CNN with variational inference for blind denoising and real noisy image denoising
Song et al. (2019) [192] CNN Real noisy image denoising CNN with dynamic residual dense block for real noisy image denoising
Lin et al. (2019) [131] CNN Real noisy image denoising GAN with attention mechanism and noise domain for real noisy image denoising
Meng et al. (2020) [157] CNN Real noisy image denoising CNN with semi-supervised learning for medical noisy image denoising
Shantia et al. (2015) [185] CNN Real noisy image denoising CNN with semi-supervised learning for 3D mapping
Choi et al. (2019) [40] CNN Real noisy image denoising CNN with semi-supervised learning for medical noisy image denoising
3.3. Deep learning techniques for blind denoising
In the real world, images are easily corrupted, and the noise is complex. Therefore, blind denoising techniques are important [142]. FFDNet [260] used the noise level and the noisy image as the input of a CNN to train a denoiser for unknown noisy images. Subsequently, several methods were proposed to solve the problem of blind denoising. An imaging-device mechanism proposed by Kenzo et al. [90] utilized soft shrinkage to adjust the noise level for blind denoising. For unpaired noisy images, using CNNs to estimate the noise proved effective [191]. Yang et al. [239] used known noise levels to train a denoiser, then utilized this denoiser to estimate the level of noise. To resolve the problem of random noise attenuation, a CNN with RL was used to filter complex noise [256, 188]. Changing the network architecture can also improve blind denoising performance. Majumdar et al. [151] proposed the use of an auto-encoder to tackle unknown noise. For mixed noise, cascaded CNNs were effective in removing additive white Gaussian noise (AWGN) and impulse noise [4]. Table 6 displays more information about these denoising methods.
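In the spirit of FFDNet [260], the sketch below conditions a denoiser on a tunable noise-level map concatenated with the noisy input; it omits FFDNet's sub-image downsampling, and the network itself is a placeholder.

```python
import torch
import torch.nn as nn

blind_denoiser = nn.Sequential(
    nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),   # grayscale image + noise-level map
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

def denoise_with_level(noisy, sigma):
    # A uniform noise-level map lets one trained model handle different noise levels.
    level_map = torch.full_like(noisy, sigma / 255.0)
    return blind_denoiser(torch.cat([noisy, level_map], dim=1))

out = denoise_with_level(torch.rand(1, 1, 64, 64), sigma=25.0)
```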
Table 6: Deep learning techniques for blind denoising.
References Methods Applications Key words (remarks)
Zhang et al. (2018) [260] CNN Blind denoising CNN with varying noise level for blind denoising
Kenzo et al. (2018) [90] CNN Blind denoising CNN with soft shrinkage for blind denoising
Soltanayev et al. (2018) [191] CNN Blind denoising CNN for unpaired noisy images
Yang et al. (2017) [239] CNN Blind denoising CNNs with RL for blind denoising
Zhang et al. (2018) [256] CNN Blind denoising, random noise CNN with RL for blind denoising
Si et al. (2018) [188] CNN Blind denoising, random noise CNN for image denoising
Majumdar et al. (2018) [151] NN Blind denoising Auto-encoder for blind denoising
Abiko et al. (2019) [4] CNN Blind denoising, complex noisy image denoising Cascaded CNNs for blind denoising
Cha et al. (2019) [239] CNN Blind denoising GAN for blind image denoising
Tian et al. (2020) [206] CNN Gaussian image denoising, blind denoising and real noisy image denoising CNN with attention mechanism and sparse method for image denoising
To handle arbitrary blur kernels, Zhang et al. [262] proposed the use of cascaded deblurring and single-image super-resolution (SISR) networks to recover plug-and-play super-resolution images. These hybrid noisy image denoising methods are presented in Table 7.
Table 7: Deep learning techniques for hybrid noisy image denoising.
References Methods Applications Key words (remarks)
Li et al. (2018) [127] CNN Noise, blur kernel, JPEG compression The combination of CNN and warped guidance for multiple degradations
Zhang et al. (2018) [261] CNN Noise, blur kernel, low-resolution image CNN for multiple degradations
Kokkinos et al. (2019) [108] CNN Image demosaicking and denoising Residual CNN with iterative algorithm for image demosaicking and denoising
It is noted that a single image carries finite information, which is not beneficial in real-world applications. To address this problem, burst techniques were developed [226]. However, burst images suffer from the effects of noise and camera shake, which increases the difficulty of the actual task. Recently, there has been much interest in deep learning technologies for burst image denoising, where the noise is removed frame by frame [9]. Recurrent fully convolutional deep neural networks can filter the noise for all frames in a sequence of arbitrary length [64]. The combination of a CNN and the kernel method can boost the denoising performance for burst noisy images [153, 158]. For noisy images with complex backgrounds, an attention mechanism combined with the kernel method and a CNN can enhance the effect of key features for burst image denoising and accelerate the training speed [255]. For low-light conditions, using a CNN to map a given burst of noisy images to sRGB outputs can obtain a multi-frame denoised image sequence [269]. To reduce network complexity, a CNN with residual learning directly trained a denoising model rather than using an explicit aligning procedure [199]. These burst denoising methods are listed in Table 8.
Table 8: Deep learning techniques for burst denoising.
References Methods Applications Key words (remarks)
Xia et al. (2019) [226] CNN Burst denoising CNN for burst denoising
Aittala et al. (2018) [9] CNN Burst denoising CNN for burst denoising
Godard et al. (2018) [64] CNN Burst denoising CNN for burst denoising
Marin et al. (2019) [153] CNN Burst denoising CNN with kernel idea for burst denoising
Mildenhall et al. (2018) [158] CNN Burst denoising CNN with kernel idea for burst denoising
Zhang et al. (2020) [255] CNN Burst denoising CNN with kernel idea and attention idea for burst denoising
Zhao et al. (2019) [269] CNN Burst denoising CNN for burst denoising
Tan et al. (2019) [199] CNN Burst denoising CNN without explicit aligning procedure for burst denoising
Similar to burst images, a video can be decomposed into individual frames. Therefore, deep learning techniques for additive white noisy image denoising, real noisy image denoising, blind denoising and hybrid noisy image denoising are also suitable for video denoising [182, 216]. A recurrent neural network [33] utilized an end-to-end CNN to remove the noise from corrupted video. To improve video denoising, reducing video redundancy is an effective approach. A CNN fused with a non-local patch idea can efficiently suppress the noise for video and image denoising [46]. A CNN combined with temporal information can make a tradeoff between performance and training efficiency in video denoising [203]. For blind video denoising, a two-stage CNN proved to be a good choice [53]. The first stage trained a video denoising model by fine-tuning a pre-trained AWGN denoising network [53]. The second stage obtained the latent clean video via the obtained video denoising model. These video denoising methods are described in Table 9.
Table 9: Deep learning techniques for video denoising.
References Methods Applications Key words (remarks)
Davy et al. (2018) [46] CNN Additive white Gaussian noisy video denoising CNN with non-local idea for video denoising
Tassano et al. (2019) [203] CNN Additive white Gaussian noisy video denoising CNN with temporal information for video denoising
Ehret et al. (2019) [53] CNN Blind video denoising CNN with pre-trained technology for blind video denoising
4. Experimental results
4.1. Datasets
4.1.1. Training datasets
The training datasets are divided into two categories: gray-noisy and color-noisy images.
Gray noisy image datasets can be used to train Gaussian denoisers and blind denoisers. They included the BSD400 dataset [21] and the Waterloo Exploration Database [147]. The BSD400 dataset was composed of 400 images in .png format, which were cropped to a size of 180 × 180 for training a denoising model. The Waterloo Exploration Database consisted of 4,744 natural images in .png format. Color noisy image datasets included BSD432 [258], the Waterloo Exploration Database and the PolyU-Real-World-Noisy-Images dataset [230]. Specifically, the PolyU-Real-World-Noisy-Images dataset consisted of 100 real noisy images with sizes of 2,784 × 1,856 obtained by five cameras: a Nikon D800, Canon 5D Mark II, Sony A7 II, Canon 80D and Canon 600D.
4.1.2. Test datasets
The test datasets included gray noisy and color noisy image datasets. The gray noisy image datasets were composed of Set12 and BSD68 [258]. Set12 contained 12 scenes, and BSD68 contained 68 natural images. They were used to test Gaussian denoisers and blind denoisers. The color noisy image datasets included CBSD68, Kodak24 [57], McMaster [263], CC [160], DND [171], NC12 [112], SIDD [3] and Nam [160]. Kodak24 and McMaster contained 24 and 18 color noisy images, respectively. CC contained 15 real noisy images captured at different ISO values, i.e., 1,600, 3,200 and 6,400. DND contained 50 real noisy images, whose corresponding clean images were captured with low ISO values. NC12 contained 12 noisy images and did not have ground-truth clean images. SIDD contained real noisy images from smartphones and consisted of 320 pairs of noisy and ground-truth images. Nam included 11 scenes, which were saved in JPEG format.
Table 10: PSNR (dB) of different methods on the BSD68 for different noise levels (i.e., 15, 25 and 50).
Methods 15 25 50
BM3D [45] 31.07 28.57 25.62
WNNM [69] 31.37 28.83 25.87
EPLL [274] 31.21 28.68 25.67
MLP [25] - 28.96 26.03
CSF [184] 31.24 28.74 -
TNRD [35] 31.42 28.92 25.97
ECNDNet [204] 31.71 29.22 26.23
RED [152] - - 26.35
DnCNN [258] 31.72 29.23 26.23
DDRN [219] 31.68 29.18 26.21
PHGMS [13] 31.86 - 26.36
MemNet [197] - - 26.35
EEDN [34] 31.58 28.97 26.03
NBCNN [213] 31.57 29.11 26.16
NNC [253] 31.49 28.88 25.25
ELDRN [165] 32.11 29.68 26.76
PSN-K [10] 31.70 29.27 26.32
PSN-U [10] 31.60 29.17 26.30
DDFN [42] 31.66 29.16 26.19
CIMM [12] 31.81 29.34 26.40
DWDN [122] 31.78 29.36 -
MWCNN [137] 31.86 29.41 26.53
BM3D-Net [237] 31.42 28.83 25.73
MPFE-CNN [103] 31.79 29.31 26.34
IRCNN [259] 31.63 29.15 26.19
FFDNet [260] 31.62 29.19 26.30
BRDNet [207] 31.79 29.29 26.36
ETN [218] 31.82 29.34 26.32
ADNet [206] 31.74 29.25 26.29
NN3D [43] - - 26.42
FOCNet [95] 31.83 29.38 26.50
DudeNet [208] 31.78 29.29 26.31
Table 11: FSIM of different methods on the BSD68 for different noise levels (i.e., 15, 25 and 50).
Methods 15 25 50
Comparisons of denoising methods should take into consideration additive white noise, includ-
ing Gaussian, Poisson, low-light noise, and salt and pepper noise, all of which have significantly
different noise levels. Furthermore, many of the methods use different tools, which can have a sig-
nificant influence on denoising results. For these reasons, we chose typical Gaussian noise to test
the denoising performance of the various methods. In addition, most of the denoising methods use
PSNR as a quantitative index. Therefore, we used the BSD68, Set12, CBSD68, Kodak24 and Mc-
Master datasets to test the denoising performance of deep learning techniques for additive white
noisy-image denoising. Table 10 shows the PSNR values of different networks with different noise
levels for gray additive white noisy image denoising. To understand the denoising performance
of different methods, we used the feature similarity index (FSIM) [264] as a visual quality metric to conduct experiments on BSD68 for different noise levels (i.e., 15, 25 and 50), as shown in Table 11. To test the ability of different networks to deal with single gray additive white noisy images, Set12 was used to conduct experiments, as shown in Table 12. Table 13 displays the denoising performance of different methods for color additive white noisy image denoising. Table 14 presents the efficiency of different methods for image denoising. For qualitative analysis, we magnified one area of the latent clean image obtained by each method; as shown in Figs. 7-10, the clearer the observed area, the better the denoising performance of the corresponding method.
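For reference, PSNR, the quantitative index used throughout this comparison, can be computed as follows for 8-bit images; this is a standard definition rather than code from any of the compared methods.

```python
import numpy as np

def psnr(clean, restored, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between a clean image and its restored estimate."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

# Example on a random 8-bit image; higher PSNR means the estimate is closer to the clean image.
a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(psnr(a, a))          # identical images -> infinite PSNR
```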
4.2.2. Deep learning techniques for real-noisy image denoising
For testing the denoising performance of deep learning techniques for real-noisy images, the
public datasets, such as DND, SIDD, Nam and CC, were chosen to design the experiments. We
chose not to use the NC12 dataset because the ground-truth clean images from NC12 were un-
available. Also, to help readers better understand these methods, we added several traditional
denoising methods, such as BM3D, as comparative methods. From Tables 15 and 16, we can see
that the DRDN obtained the best results on the DND and SIDD in real-noisy image denoising,
respectively. For compressed noisy images, the AGAN obtained excellent performance, as shown
in Table 17. For real noisy images of different ISO values, the SDNet and BRDNet achieved the
best and second-best denoising performance, respectively, as described in Table 18.
5. Discussion
Deep learning techniques are seeing increasing use in image denoising. This paper offers a
survey of these techniques in order to help readers understand these methods. In this section, we
present the potential areas of further research for image denoising and point out several as yet
unsolved problems.
Table 12: PSNR (dB) of different methods on the Set12 for different noise levels (i.e., 15, 25 and 50).
Images C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Average
Noise Level σ = 15
BM3D [45] 31.91 34.93 32.69 31.14 31.85 31.07 31.37 34.26 33.10 32.13 31.92 32.10 32.37
WNNM [69] 32.17 35.13 32.99 31.82 32.71 31.39 31.62 34.27 33.60 32.27 32.11 32.17 32.70
EPLL [274] 31.85 34.17 32.64 31.13 32.10 31.19 31.42 33.92 31.38 31.93 32.00 31.93 32.14
CSF [184] 31.95 34.39 32.85 31.55 32.33 31.33 31.37 34.06 31.92 32.01 32.08 31.98 32.32
TNRD [35] 32.19 34.53 33.04 31.75 32.56 31.46 31.63 34.24 32.13 32.14 32.23 32.11 32.50
ECNDNet [204] 32.56 34.97 33.25 32.17 33.11 31.70 31.82 34.52 32.41 32.37 32.39 32.39 32.81
DnCNN [258] 32.61 34.97 33.30 32.20 33.09 31.70 31.83 34.62 32.64 32.42 32.46 32.47 32.86
PSN-K [10] 32.58 35.04 33.23 32.17 33.11 31.75 31.89 34.62 32.64 32.52 32.39 32.43 32.86
PSN-U [10] 32.04 35.03 33.21 31.94 32.93 31.61 31.62 34.56 32.49 32.41 32.37 32.43 32.72
CIMM [12] 32.61 35.21 33.21 32.35 33.33 31.77 32.01 34.69 32.74 32.44 32.50 32.52 32.95
IRCNN [259] 32.55 34.89 33.31 32.02 32.82 31.70 31.84 34.53 32.43 32.34 32.40 32.40 32.77
FFDNet [260] 32.43 35.07 33.25 31.99 32.66 31.57 31.81 34.62 32.54 32.38 32.41 32.46 32.77
BRDNet [207] 32.80 35.27 33.47 32.24 33.35 31.85 32.00 34.75 32.93 32.55 32.50 32.62 33.03
ADNet [206] 32.81 35.22 33.49 32.17 33.17 31.86 31.96 34.71 32.80 32.57 32.47 32.58 32.98
DudeNet [208] 32.71 35.13 33.38 32.29 33.28 31.78 31.93 34.66 32.73 32.46 32.46 32.49 32.94
Noise Level σ = 25
BM3D [45] 29.45 32.85 30.16 28.56 29.25 28.42 28.93 32.07 30.71 29.90 29.61 29.71 29.97
WNNM [69] 29.64 33.22 30.42 29.03 29.84 28.69 29.15 32.24 31.24 30.03 29.76 29.82 30.26
EPLL [274] 29.26 32.17 30.17 28.51 29.39 28.61 28.95 31.73 28.61 29.74 29.66 29.53 29.69
MLP [25] 29.61 32.56 30.30 28.82 29.61 28.82 29.25 32.25 29.54 29.97 29.88 29.73 30.03
CSF [184] 29.48 32.39 30.32 28.80 29.62 28.72 28.90 31.79 29.03 29.76 29.71 29.53 29.84
TNRD [35] 29.72 32.53 30.57 29.02 29.85 28.88 29.18 32.00 29.41 29.91 29.87 29.71 30.06
ECNDNet [204] 30.11 33.08 30.85 29.43 30.30 29.07 29.38 32.38 29.84 30.14 30.03 30.03 30.39
DnCNN [258] 30.18 33.06 30.87 29.41 30.28 29.13 29.43 32.44 30.00 30.21 30.10 30.12 30.43
PSN-K [10] 30.28 33.26 31.01 29.57 30.30 29.28 29.38 32.57 30.17 30.31 30.10 30.18 30.53
PSN-U [10] 29.79 33.23 30.90 29.30 30.17 29.06 29.25 32.45 29.94 30.25 30.05 30.12 30.38
CIMM [12] 30.26 33.44 30.87 29.77 30.62 29.23 29.61 32.66 30.29 30.30 30.18 30.24 30.62
IRCNN [259] 30.08 33.06 30.88 29.27 30.09 29.12 29.47 32.43 29.92 30.17 30.04 30.08 30.38
FFDNet [260] 30.10 33.28 30.93 29.32 30.08 29.04 29.44 32.57 30.01 30.25 30.11 30.20 30.44
BRDNet [207] 31.39 33.41 31.04 29.46 30.50 29.20 29.55 32.65 30.34 30.33 30.14 30.28 30.61
ADNet [206] 30.34 33.41 31.14 29.41 30.39 29.17 29.49 32.61 30.25 30.37 30.08 30.24 30.58
DudeNet [208] 30.23 33.24 30.98 29.53 30.44 29.14 29.48 32.52 30.15 30.24 30.08 30.15 30.52
Noise Level σ = 50
BM3D [45] 26.13 29.69 26.68 25.04 25.82 25.10 25.90 29.05 27.22 26.78 26.81 26.46 26.72
WNNM [69] 26.45 30.33 26.95 25.44 26.32 25.42 26.14 29.25 27.79 26.97 26.94 26.64 27.05
EPLL [274] 26.10 29.12 26.80 25.12 25.94 25.31 25.95 28.68 24.83 26.74 26.79 26.30 26.47
MLP [25] 26.37 29.64 26.68 25.43 26.26 25.56 26.12 29.32 25.24 27.03 27.06 26.67 26.78
TNRD [35] 26.62 29.48 27.10 25.42 26.31 25.59 26.16 28.93 25.70 26.94 26.98 26.50 26.81
ECNDNet [204] 27.07 30.12 27.30 25.72 26.82 25.79 26.32 29.29 26.26 27.16 27.11 26.84 27.15
DnCNN [258] 27.03 30.00 27.32 25.70 26.78 25.87 26.48 29.39 26.22 27.20 27.24 26.90 27.18
PSN-K [10] 27.10 30.34 27.40 25.84 26.92 25.90 26.56 29.54 26.45 27.20 27.21 27.09 27.30
PSN-U [10] 27.21 30.21 27.53 25.63 26.93 25.89 26.62 29.54 26.56 27.27 27.23 27.04 27.31
CIMM [12] 27.25 30.70 27.54 26.05 27.21 26.06 26.53 29.65 26.62 27.36 27.26 27.24 27.46
IRCNN [259] 26.88 29.96 27.33 25.57 26.61 25.89 26.55 29.40 26.24 27.17 27.17 26.88 27.14
FFDNet [260] 27.05 30.37 27.54 25.75 26.81 25.89 26.57 29.66 26.45 27.33 27.29 27.08 27.32
BRDNet [207] 27.44 30.53 27.67 25.77 26.97 25.93 26.66 29.73 26.85 27.38 27.27 27.17 27.45
ADNet [206] 27.31 30.59 27.69 25.70 26.90 25.88 26.56 29.59 26.64 27.35 27.17 27.07 27.37
DudeNet [208] 27.22 30.27 27.51 25.88 26.93 25.88 26.50 29.45 26.49 27.26 27.19 26.97 27.30
Image denoising methods based on deep learning techniques are mainly aimed at improving denoising performance and efficiency, and at handling complex denoising tasks. Solutions for improving denoising performance include the following:
1) Enlarging the receptive field can capture more context information. Enlarging the receptive
Table 13: PSNR (dB) of different methods on the CBSD68, Kodak24 and McMaster for different noise levels (i.e.,
15, 25, 35, 50 and 75).
Datasets Methods σ = 15 σ = 25 σ = 35 σ = 50 σ = 75
CBM3D [45] 33.52 30.71 28.89 27.38 25.74
DnCNN [258] 33.98 31.31 29.65 28.01 -
CBSD68 DDRN [219] 33.93 31.24 - 27.86 -
EEDN [34] 33.65 31.03 - 27.85 -
DDFN [42] 34.17 31.52 29.88 28.26 -
CIMM [12] 31.81 29.34 - 26.40 -
BM3D-Net [237] 33.79 30.79 - 27.48 -
IRCNN [259] 33.86 31.16 29.50 27.86 -
FFDNet [260] 33.80 31.18 29.57 27.96 26.24
BRDNet [207] 34.10 31.43 29.77 28.16 26.43
GPADCNN [39] 33.83 31.12 29.46 - -
FFDNet [202] 33.76 31.18 29.58 - 26.57
ETN [218] 34.10 31.41 - 28.01 -
ADNet [206] 33.99 31.31 29.66 28.04 26.33
DudeNet [208] 34.01 31.34 29.71 28.09 26.40
CBM3D [45] 34.28 31.68 29.90 28.46 26.82
DnCNN [258] 34.73 32.23 30.64 29.02 -
Kodak24 IRCNN [259] 34.56 32.03 30.43 28.81 -
FFDNet [260] 34.55 32.11 30.56 28.99 27.25
BRDNet [207] 34.88 32.41 30.80 29.22 27.49
FFDNet [202] 34.53 32.12 30.59 - 27.61
ADNet [206] 34.76 32.26 30.68 29.10 27.40
DudeNet [208] 34.81 32.26 30.69 29.10 27.39
CBM3D [45] 34.06 31.66 29.92 28.51 26.79
DnCNN [258] 34.80 32.47 30.91 29.21 -
McMaster IRCNN [259] 34.58 32.18 30.59 28.91 -
FFDNet [260] 34.47 32.25 30.76 29.14 27.29
BRDNet [207] 35.08 32.75 31.15 29.52 27.72
ADNet [206] 34.93 32.56 31.00 29.36 27.53
Table 14: Running time of 13 popular denoising methods for the noisy images of sizes 256 × 256, 512 × 512 and
1024 × 1024.
Figure 7: Denoising results of different methods on one image from the BSD68 with σ=15: (a) original image,
(b) noisy image/24.62dB, (c) BM3D/35.29dB, (d) EPLL/34.98dB, (e) DnCNN/36.20dB, (f) FFDNet/36.75dB, (g)
IRCNN/35.94dB, (h) ECNDNet/36.03dB, and (i) BRDNet/36.59dB.
field can be accomplished by increasing the depth and width of the networks. However, this results
in higher computational costs and more memory consumption. One technique for resolving this
Figure 8: Denoising results of different methods on one image from the Set12 with σ=25: (a) original image, (b)
noisy image/20.22dB, (c) BM3D/29.26dB, (d) EPLL/29.44dB, (e) DnCNN/30.28dB, (f) FFDNet/30.08dB, (g) IR-
CNN/30.09dB, (h) ECNDNet/30.30dB, and (i) BRDNet/30.50dB.
Figure 9: Denoising results of different methods on one image from the McMaster with σ=35: (a) original image, (b)
noisy image/18.46dB, (c) DnCNN/33.05dB, (d) FFDNet/33.03dB, (e) IRCNN/32.74dB, and (f) BRDNet/33.26dB.
Figure 10: Denoising results of different methods on one image from the Kodak24 with σ=50: (a) original image, (b)
noisy image/14.58dB, (c) DnCNN/25.80dB, (d) FFDNet/26.13dB, (e) IRCNN/26.10dB, and (f) BRDNet/26.33dB.
Table 15: PSNR (dB) of different methods on the DND for real-noisy image denoising.
Methods DND
EPLL [274] 33.51
TNRD [35] 33.65
NCSR [49] 34.05
MLP [25] 34.23
BM3D [45] 34.51
problem is dilated convolution, which not only contributes to higher performance and efficiency, but is also very effective for mining more edge information.
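To make the receptive-field argument concrete, the following PyTorch-style sketch (our illustration, not code from any surveyed network) stacks three 3x3 convolutions with dilation rates 1, 2 and 4, which enlarges the receptive field to 15x15 without adding parameters beyond those of three plain 3x3 layers:

    import torch
    import torch.nn as nn

    class DilatedBlock(nn.Module):
        # Dilations 1, 2 and 4 grow the receptive field to 15x15 while the
        # parameter count equals that of three ordinary 3x3 convolutions.
        def __init__(self, channels=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=4, dilation=4),
            )

        def forward(self, x):
            return self.body(x)

    x = torch.randn(1, 1, 64, 64)                # a toy gray noisy patch
    print(DilatedBlock()(x).shape)               # torch.Size([1, 1, 64, 64])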
2) The simultaneous use of extra information (also called prior knowledge) and a CNN is an effective way to obtain more accurate features. This is typically implemented through the design of the loss function.
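One simple way to inject such prior knowledge through the loss, sketched below under our own assumptions, is to add a total-variation smoothness prior to the usual mean-squared-error fidelity term; the weight lam is a hypothetical hyper-parameter:

    import torch

    def tv_prior(img):
        # Total-variation prior: penalizes large differences between neighboring pixels.
        dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
        dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
        return dh + dw

    def denoising_loss(predicted, clean, lam=0.01):
        # Fidelity term (MSE) plus a weighted prior term on the network output.
        fidelity = torch.nn.functional.mse_loss(predicted, clean)
        return fidelity + lam * tv_prior(predicted)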
Table 16: PSNR (dB) of different methods on the SIDD for real-noisy image denoising.
Methods SIDD
CBM3D [45] 25.65
WNNM [69] 25.78
MLP [25] 24.71
DnCNN-B [258] 23.66
CBDNet [71] 33.28
VDN [251] 39.23
DRDN [192] 39.60
Table 17: PSNR (dB) of different methods on the Nam for real-noisy image denoising.
Methods Nam
NI [5] 31.52
TWSC [232] 37.52
BM3D [45] 39.84
NC [112] 40.41
WNNM [69] 41.04
CDnCNN-B [258] 37.49
MCWNNM [137] 37.91
CBDNet [71] 41.02
CBDNet(JPEG) [71] 41.31
DRDN [192] 38.45
AGAN [131] 41.38
Table 18: PSNR (dB) of different methods on the cc for real-noisy image denoising.
Camera Settings CBM3D [45] MLP [25] TNRD [35] DnCNN [258] NI [5] NC [112] WNNM [69] BRDNet [207] SDNet [270] ADNet [206] DudeNet [208]
39.76 39.00 39.51 37.26 35.68 38.76 37.51 37.63 39.83 35.96 36.66
Canon 5D ISO=3200 36.40 36.34 36.47 34.13 34.03 35.69 33.86 37.28 37.25 36.11 36.70
36.37 36.33 36.45 34.09 32.63 35.54 31.43 37.75 36.79 34.49 35.03
34.18 34.70 34.79 33.62 31.78 35.57 33.46 34.55 35.50 33.94 33.72
Nikon D600 ISO=3200 35.07 36.20 36.37 34.48 35.16 36.70 36.09 35.99 37.24 34.33 34.70
37.13 39.33 39.49 35.41 39.98 39.28 39.86 38.62 41.18 38.87 37.98
36.81 37.95 38.11 35.79 34.84 38.01 36.35 39.22 38.77 37.61 38.10
Nikon D800 ISO=1600 37.76 40.23 40.52 36.08 38.42 39.05 39.99 39.67 40.87 38.24 39.15
37.51 37.94 38.17 35.48 35.79 38.20 37.15 39.04 38.86 36.89 36.14
35.05 37.55 37.69 34.08 38.36 38.07 38.60 38.28 39.94 37.20 36.93
Nikon D800 ISO=3200 34.07 35.91 35.90 33.70 35.53 35.72 36.04 37.18 36.78 35.67 35.80
34.42 38.15 38.21 33.31 40.05 36.76 39.73 38.85 39.78 38.09 37.49
31.13 32.69 32.81 29.83 34.08 33.49 33.29 32.75 33.34 32.24 31.94
Nikon D800 ISO=6400 31.22 32.33 32.33 30.55 32.13 32.79 31.16 33.24 33.29 32.59 32.51
30.97 32.29 32.29 30.09 31.52 32.86 31.98 32.89 33.22 33.14 32.91
Average 35.19 36.46 36.61 33.86 35.33 36.43 35.77 36.73 37.51 35.69 35.72
Table 19: PSNR (dB) of different methods on the BSD68 for different noise levels (i.e., 15, 25 and 50).
Methods 15 25 50
DnCNN-B [258] 31.61 29.16 26.23
FFDNet [260] 31.62 29.19 26.30
SCNN [90] 31.48 29.03 26.08
ADNet-B [206] 31.56 29.14 26.24
DnCNN-SURE-T [191] - 29.00 25.95
DnCNN-MSE-GT [191] - 29.20 26.22
G2G1(LM,BSD) [27] 31.55 28.93 25.73
Table 20: Average PSNR (dB) results of different methods on Set12 with noise levels of 25 and 50.
Images C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Average
Noise Level σ = 25
DnCNN-B [258] 29.94 33.05 30.84 29.34 30.25 29.09 29.35 32.42 29.69 30.20 30.09 30.10 30.36
FFDNet [260] 30.10 33.28 30.93 29.32 30.08 29.04 29.44 32.57 30.01 30.25 30.11 30.20 30.44
ADNet-B [206] 29.94 33.38 30.99 29.22 30.38 29.16 29.41 32.59 30.05 30.28 30.01 30.15 30.46
DudeNet-B [208] 30.01 33.15 30.87 29.39 30.31 29.07 29.40 32.42 29.76 30.18 30.03 30.06 30.39
DNCNN-SURE-T [191] 29.86 32.73 30.57 29.11 30.13 28.93 29.26 32.08 29.44 29.86 29.91 29.78 30.14
DNCNN-MSE-GT [191] 30.14 33.16 30.84 29.40 30.45 29.11 29.36 32.44 29.91 30.11 30.08 30.06 30.42
Noise Level σ = 50
DnCNN-B [258] 27.03 30.02 27.39 25.72 26.83 25.89 26.48 29.38 26.38 27.23 27.23 26.91 27.21
FFDNet [260] 27.05 30.37 27.54 25.75 26.81 25.89 26.57 29.66 26.45 27.33 27.29 27.08 27.32
ADNet-B [206] 27.22 30.43 27.70 25.63 26.92 26.03 26.56 29.53 26.51 27.22 27.19 27.05 27.33
DudeNet-B [208] 27.19 30.11 27.50 25.69 26.82 25.85 26.46 29.35 26.38 27.20 27.13 26.90 27.22
DNCNN-SURE-T [191] 26.47 29.20 26.78 25.39 26.53 25.65 26.21 28.81 25.23 26.79 26.97 26.48 26.71
DNCNN-MSE-GT [191] 27.03 29.92 27.27 25.65 26.95 25.93 26.43 29.31 26.17 27.12 27.22 26.94 27.16
Table 21: Different methods on the VggFace2 and WebFace datasets for image denoising.
3) Combining local and global information can enhance the memory ability of shallow layers for deep layers, which helps to better filter the noise. Two methods for achieving this are the residual operation and the recursive operation.
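A minimal residual connection, shown below as our own sketch rather than the design of any particular network, lets shallow-layer features reach deep layers directly, which is the memory mechanism referred to above:

    import torch
    import torch.nn as nn

    class ResidualUnit(nn.Module):
        # y = x + F(x): the identity path carries shallow (local) features forward,
        # so deep layers see both the transformed and the original signal.
        def __init__(self, channels=64):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return x + self.conv2(self.relu(self.conv1(x)))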
4) Signal processing methods can be used to suppress the noise. A signal processing technique fused into a deep CNN can achieve excellent performance. For example, the wavelet technique is integrated into the U-Net to deal with image restoration [137].
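A one-level Haar wavelet transform used as the downsampling step, as in the wavelet-based U-Net design cited above, can be sketched as follows (a simplified illustration of our own; the actual implementation in [137] differs in detail):

    import numpy as np

    def haar_dwt2(x):
        # One-level 2-D Haar transform: splits an image into a low-frequency (LL)
        # sub-band and three high-frequency (LH, HL, HH) sub-bands at half resolution.
        a, b = x[0::2, 0::2], x[0::2, 1::2]
        c, d = x[1::2, 0::2], x[1::2, 1::2]
        ll = (a + b + c + d) / 2.0
        lh = (a + b - c - d) / 2.0
        hl = (a - b + c - d) / 2.0
        hh = (a - b - c + d) / 2.0
        return ll, lh, hl, hh

    img = np.random.rand(64, 64)
    print([band.shape for band in haar_dwt2(img)])   # four 32x32 sub-bands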
5) Data augmentation, such as horizontal flip, vertical flip and color jittering, can help the
denoising methods learn more types of noise, which can enhance the expressive ability of the
denoising models. Additionally, using the GAN to construct virtual noisy images is also useful for
image denoising.
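The flipping-based augmentation mentioned above can be reproduced with a few array operations; the eight geometric variants below (flips combined with 90-degree rotations) are a common choice in denoising codebases and are written here as our own sketch:

    import numpy as np

    def augment(patch, mode):
        # mode in 0..7: identity, three rotations, and the same four after a vertical flip.
        if mode >= 4:
            patch = np.flipud(patch)
        return np.rot90(patch, k=mode % 4)

    patch = np.arange(16).reshape(4, 4)
    variants = [augment(patch, m) for m in range(8)]   # eight training variants per patch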
6) Transfer learning, graph-based methods and neural architecture search can also obtain good denoising results.
7) Improving the hardware or camera mechanism can reduce the effect of noise on the captured
image.
Compressing deep neural networks has achieved great success in improving the efficiency of
denoising. Reducing the depth or the width of deep neural networks can reduce the complexity
of these networks for image denoising. Also, the use of small convolutional kernels and group convolutions can reduce the number of parameters, thereby accelerating training. Fusing dimension-reduction methods, such as principal component analysis (PCA), with a CNN can also lead to improvements in denoising efficiency.
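The parameter saving from group convolution is easy to verify; the comparison below (our own sketch) counts the weights of a plain 3x3 convolution against the same layer split into four groups:

    import torch.nn as nn

    def n_params(module):
        return sum(p.numel() for p in module.parameters())

    plain = nn.Conv2d(64, 64, kernel_size=3, padding=1)
    grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4)
    print(n_params(plain), n_params(grouped))   # 36928 vs. 9280: roughly a 4x reduction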
For resolving complex noisy images, step-by-step processing is a very popular method. For
example, using a two-step mechanism is a way of dealing with a noisy, low-resolution image.
The first step involves the recovery of a high-resolution image by a CNN. The second step uses a
novel CNN to filter the noise of the high-resolution image. In the example above, the two CNNs
are implemented via a cascade operation. This two-step mechanism is ideal for unsupervised noise
tasks, such as real noisy images and blind denoising. That is, the first step relies on a CNN with optimization algorithms, e.g., maximum a posteriori, to estimate the noise as ground truth (referred to as a label). The second step utilizes another CNN and the obtained ground truth to train a denoising model for real-noisy image denoising or blind denoising. Self-supervised learning fused into
the CNN is a good choice for real-noisy image denoising or blind denoising.
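The cascade described above can be expressed as two networks applied in sequence. In the sketch below (our own, with hypothetical module names), the first CNN recovers the resolution of the input and the second removes the remaining noise:

    import torch
    import torch.nn as nn

    class TwoStepRestorer(nn.Module):
        # Step 1: recover a high-resolution image; Step 2: denoise the recovered image.
        def __init__(self, scale=2, channels=32):
            super().__init__()
            self.super_resolver = nn.Sequential(
                nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=1),
            )
            self.denoiser = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=1),
            )

        def forward(self, low_res_noisy):
            high_res = self.super_resolver(low_res_noisy)
            return self.denoiser(high_res)

    out = TwoStepRestorer()(torch.randn(1, 1, 32, 32))
    print(out.shape)                             # torch.Size([1, 1, 64, 64])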
Although deep learning techniques have attained great success in these three scenarios, there
are still challenges in the field of image denoising. These include:
1) Deeper denoising networks require more memory resources.
2) Training deeper denoising networks is not a stable solution for real noisy images, unpaired noisy images and multi-degradation tasks.
3) Real noisy images are not easily captured, which results in inadequate training samples.
4) It is difficult for deep CNNs to solve unsupervised denoising tasks.
5) More accurate metrics need to be found for image denoising. PSNR and SSIM are popular metrics for the task of image restoration. However, PSNR tends to favor excessively smoothed results and can hardly distinguish images that are perceptually different. SSIM evaluates brightness, contrast and structure, and therefore cannot fully reflect image perceptual quality.
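For reference, the two criticized metrics can be computed as follows; the sketch assumes scikit-image is available and uses its standard implementations:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    clean = np.random.rand(128, 128)
    denoised = np.clip(clean + 0.05 * np.random.randn(128, 128), 0.0, 1.0)

    # PSNR only measures mean-squared error, so over-smoothed results can still score well.
    print(peak_signal_noise_ratio(clean, denoised, data_range=1.0))
    # SSIM compares local luminance, contrast and structure, but it still correlates
    # imperfectly with perceived quality.
    print(structural_similarity(clean, denoised, data_range=1.0))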
6. Conclusion
In this paper, we compare, study and summarize the deep networks used for image denoising. First, we show the basic frameworks of deep learning for image denoising. Then, we
present the deep learning techniques for noisy tasks, including additive white noisy images, blind
denoising, real noisy images and hybrid noisy images. Next, for each category of noisy tasks,
we analyze the motivation and theory of denoising networks. After that, we compare the denoising
results, efficiency and visual effects of different networks on benchmark datasets, and then per-
form a cross-comparison of the different types of image denoising methods with different types of
noise. Finally, some potential areas for further research are suggested, and the challenges of deep
learning in image denoising are discussed.
Over the past few years, Gaussian noisy image denoising techniques have achieved great suc-
cess, particularly in scenarios where the Gaussian noise is regular. However, in the real world the
noise is complex and irregular. Improving the hardware devices in order to better suppress the
noise for capturing a high-quality image is very important. Moreover, the obtained image may
be blurry, low-resolution and corrupted. Therefore, it is critical to determine how to effectively
recover the latent clean image from the superposed noisy image. Furthermore, while the use of
deep learning techniques to learn features requires the ground truth, the obtained real noisy im-
ages do not have the ground truth. These are urgent challenges that researchers and scholars need
to address.
Acknowledgments
This paper is partially supported by the National Natural Science Foundation of China under
Grant No. 61876051, in part by the Shenzhen Municipal Science and Technology Innovation Council under Grant No. JSGG20190220153602271, and in part by the Natural Science Foundation of Guangdong Province under Grant No. 2019A1515011811.
References
[1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard,
M., et al., 2016. Tensorflow: A system for large-scale machine learning. In: 12th Symposium on Operating
Systems Design and Implementation. pp. 265–283.
[2] Abbasi, A., Monadjemi, A., Fang, L., Rabbani, H., Zhang, Y., 2019. Three-dimensional optical coherence
tomography image denoising through multi-input fully-convolutional networks. Computers in Biology and
Medicine 108, 1–8.
[3] Abdelhamed, A., Lin, S., Brown, M. S., 2018. A high-quality denoising dataset for smartphone cameras. In:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1692–1700.
[4] Abiko, R., Ikehara, M., 2019. Blind denoising of mixed gaussian-impulse noise by single cnn. In: ICASSP
2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pp.
1717–1721.
[5] ABSoft, N., 2017. Neat image.
[6] Aharon, M., Elad, M., Bruckstein, A., 2006. K-svd: An algorithm for designing overcomplete dictionaries for
sparse representation. IEEE Transactions on Signal Processing 54 (11), 4311–4322.
[7] Ahn, B., Cho, N. I., 2017. Block-matching convolutional neural network for image denoising. arXiv preprint
arXiv:1704.00524.
[8] Ahn, B., Kim, Y., Park, G., Cho, N. I., 2018. Block-matching convolutional neural network (bmcnn): Improv-
ing cnn-based denoising by block-matched inputs. In: 2018 Asia-Pacific Signal and Information Processing
Association Annual Summit and Conference (APSIPA ASC). IEEE, pp. 516–525.
[9] Aittala, M., Durand, F., 2018. Burst image deblurring using permutation invariant convolutional neural net-
works. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 731–747.
[10] Aljadaany, R., Pal, D. K., Savvides, M., 2019. Proximal splitting networks for image restoration. arXiv preprint
arXiv:1903.07154.
[11] Anwar, S., Barnes, N., 2019. Real image denoising with feature attention. arXiv preprint arXiv:1904.07396.
[12] Anwar, S., Huynh, C. P., Porikli, F., 2017. Chaining identity mapping modules for image denoising. arXiv
preprint arXiv:1712.02933.
[13] Bae, W., Yoo, J., Chul Ye, J., 2017. Beyond deep residual learning for image restoration: Persistent homology-
guided manifold simplification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition Workshops. pp. 145–153.
[14] Bako, S., Vogels, T., McWilliams, B., Meyer, M., Novák, J., Harvill, A., Sen, P., Derose, T., Rousselle, F.,
2017. Kernel-predicting convolutional networks for denoising monte carlo renderings. ACM Transactions on
Graphics (TOG) 36 (4), 97.
[15] Bedini, L., Tonazzini, A., 1990. Neural network use in maximum entropy image restoration. Image and Vision Computing.
[20] Bigdeli, S. A., Zwicker, M., 2017. Image restoration using autoencoding priors. arXiv preprint
arXiv:1703.09964.
[21] Bigdeli, S. A., Zwicker, M., Favaro, P., Jin, M., 2017. Deep mean-shift priors for image restoration. In: Ad-
vances in Neural Information Processing Systems. pp. 763–772.
[22] Bottou, L., 2010. Large-scale machine learning with stochastic gradient descent. In: Proceedings of COMP-
STAT’2010. Springer, pp. 177–186.
[23] Broaddus, C., Krull, A., Weigert, M., Schmidt, U., Myers, G., 2020. Removing structured noise with self-
supervised blind-spot networks. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI).
IEEE, pp. 159–163.
[24] Brooks, T., Mildenhall, B., Xue, T., Chen, J., Sharlet, D., Barron, J. T., 2019. Unprocessing images for learned
raw denoising. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp.
11036–11045.
[25] Burger, H. C., Schuler, C. J., Harmeling, S., 2012. Image denoising: Can plain neural networks compete with
bm3d? In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp. 2392–2399.
[26] Cao, Q., Shen, L., Xie, W., Parkhi, O. M., Zisserman, A., 2018. Vggface2: A dataset for recognising faces
across pose and age. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition
(FG 2018). IEEE, pp. 67–74.
[27] Cha, S., Park, T., Moon, T., 2019. Gan2gan: Generative noise learning for blind image denoising with single
noisy images. arXiv preprint arXiv:1905.10488.
[28] Chang, Y., Yan, L., Fang, H., Zhong, S., Liao, W., 2018. Hsi-denet: Hyperspectral image restoration via
convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing 57 (2), 667–682.
[29] Chen, C., Xiong, Z., Tian, X., Wu, F., 2018. Deep boosting for image denoising. In: Proceedings of the
European Conference on Computer Vision (ECCV). pp. 3–18.
[30] Chen, C., Xu, Z., 2018. Aerial-image denoising based on convolutional neural network with multi-scale resid-
ual learning approach. Information 9 (7), 169.
[31] Chen, J., Chen, J., Chao, H., Yang, M., 2018. Image blind denoising with generative adversarial network based
noise modeling. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp.
3155–3164.
[32] Chen, J., Hou, J., Chau, L.-P., 2018. Light field denoising via anisotropic parallax analysis in a cnn framework.
IEEE Signal Processing Letters 25 (9), 1403–1407.
[33] Chen, X., Song, L., Yang, X., 2016. Deep rnns for video denoising. In: Applications of Digital Image Process-
ing XXXIX. Vol. 9971. International Society for Optics and Photonics, p. 99711T.
[34] Chen, X., Zhan, S., Ji, D., Xu, L., Wu, C., Li, X., 2018. Image denoising via deep network based on edge
enhancement. Journal of Ambient Intelligence and Humanized Computing, 1–11.
[35] Chen, Y., Pock, T., 2016. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective
image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), 1256–1272.
[36] Chen, Y., Yu, M., Jiang, G., Peng, Z., Chen, F., 2019. End-to-end single image enhancement based on a dual
network cascade model. Journal of Visual Communication and Image Representation 61, 284–295.
[37] Chetlur, S., Woolley, C., Vandermersch, P., Cohen, J., Tran, J., Catanzaro, B., Shelhamer, E., 2014. cudnn:
Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759.
[38] Chiang, Y.-W., Sullivan, B., 1989. Multi-frame image restoration using a neural network. In: Proceedings of
the 32nd Midwest Symposium on Circuits and Systems,. IEEE, pp. 744–747.
[39] Cho, S. I., Kang, S.-J., 2018. Gradient prior-aided cnn denoiser with separable convolution-based optimization
of feature dimension. IEEE Transactions on Multimedia 21 (2), 484–493.
[40] Choi, K., Vania, M., Kim, S., 2019. Semi-supervised learning for low-dose ct image restoration with hierarchi-
cal deep generative adversarial network (hd-gan). In: 2019 41st Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 2683–2686.
[41] Chollet, F., et al., 2015. Keras.
[42] Couturier, R., Perrot, G., Salomon, M., 2018. Image denoising using a deep encoder-decoder network with
skip connections. In: International Conference on Neural Information Processing. Springer, pp. 554–565.
[43] Cruz, C., Foi, A., Katkovnik, V., Egiazarian, K., 2018. Nonlocality-reinforced convolutional neural networks for image denoising. IEEE Signal Processing Letters 25 (8), 1216–1220.
[45] Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K., 2007. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing 16 (8), 2080–2095.
[46] Davy, A., Ehret, T., Morel, J.-M., Arias, P., Facciolo, G., 2018. Non-local video denoising by cnn. arXiv
preprint arXiv:1811.12758.
[47] de Figueiredo, M. T., Leitao, J. M., 1992. Image restoration using neural networks. In: [Proceedings] ICASSP-
92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 2. IEEE, pp. 409–
412.
[48] de Ridder, D., Duin, R. P., Verbeek, P. W., Van Vliet, L., 1999. The applicability of neural networks to non-
linear image processing. Pattern Analysis & Applications 2 (2), 111–128.
[49] Dong, W., Zhang, L., Shi, G., Li, X., 2012. Nonlocally centralized sparse representation for image restoration.
IEEE Transactions on Image Processing 22 (4), 1620–1630.
[50] Du, B., Wei, Q., Liu, R., 2019. An improved quantum-behaved particle swarm optimization for endmember
extraction. IEEE Transactions on Geoscience and Remote Sensing.
[51] Du, H., Dong, L., Liu, M., Zhao, Y., Jia, W., Liu, X., Hui, M., Kong, L., Hao, Q., 2018. Image restoration
based on deep convolutional network in wavefront coding imaging system. In: 2018 Digital Image Computing:
Techniques and Applications (DICTA). IEEE, pp. 1–8.
[52] Duan, C., Cui, L., Chen, X., Wei, F., Zhu, C., Zhao, T., 2018. Attention-fused deep matching network for
natural language inference. In: IJCAI. pp. 4033–4040.
[53] Ehret, T., Davy, A., Morel, J.-M., Facciolo, G., Arias, P., 2019. Model-blind video denoising via frame-to-
frame training. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp.
11369–11378.
[54] Elad, M., Aharon, M., 2006. Image denoising via sparse and redundant representations over learned dictionar-
ies. IEEE Transactions on Image Processing 15 (12), 3736–3745.
[55] Fan, E., 2000. Extended tanh-function method and its applications to nonlinear equations. Physics Letters A
277 (4-5), 212–218.
[56] Farooque, M. A., Rohankar, J. S., 2013. Survey on various noises and techniques for denoising the color image.
International Journal of Application or Innovation in Engineering & Management (IJAIEM) 2 (11), 217–221.
[57] Franzen, R., 1999. Kodak lossless true color image suite. source: http://r0k. us/graphics/kodak 4.
[58] Fu, B., Zhao, X., Li, Y., Wang, X., Ren, Y., 2019. A convolutional neural networks denoising approach for salt
and pepper noise. Multimedia Tools and Applications 78 (21), 30707–30721.
[59] Fukushima, K., 1980. Neocognitron: A self-organizing neural network model for a mechanism of pattern
recognition unaffected by shift in position. Biological Cybernetics 36 (4), 193–202.
[60] Fukushima, K., Miyake, S., 1982. Neocognitron: A self-organizing neural network model for a mechanism of
visual pattern recognition. In: Competition and Cooperation in Neural Nets. Springer, pp. 267–285.
[61] Gardner, E., Wallace, D., Stroud, N., 1989. Training with noise and the storage of correlated patterns in a neural
network model. Journal of Physics A: Mathematical and General 22 (12), 2019.
[62] Gashi, D., Pereira, M., Vterkovska, V., 2017. Multi-scale context aggregation by dilated convolutions machine
learning-project.
[63] Gholizadeh-Ansari, M., Alirezaie, J., Babyn, P., 2018. Low-dose ct denoising with dilated residual network.
In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society
(EMBC). IEEE, pp. 5117–5120.
[64] Godard, C., Matzen, K., Uyttendaele, M., 2018. Deep burst denoising. In: Proceedings of the European Con-
ference on Computer Vision (ECCV). pp. 538–554.
[65] Gondara, L., Wang, K., 2017. Recovering loss to followup information using denoising autoencoders. In: 2017
IEEE International Conference on Big Data (Big Data). IEEE, pp. 1936–1945.
[66] Gong, D., Zhang, Z., Shi, Q., Hengel, A. v. d., Shen, C., Zhang, Y., 2018. Learning an optimizer for image
deconvolution. arXiv preprint arXiv:1804.03368.
[67] Green, M., Marom, E. M., Konen, E., Kiryati, N., Mayer, A., 2018. Learning real noise for ultra-low dose lung
ct denoising. In: International Workshop on Patch-based Techniques in Medical Imaging. Springer, pp. 3–11.
[68] Greenhill, D., Davies, E., 1994. Relative effectiveness of neural networks for image noise suppression. In:
Machine Intelligence and Pattern Recognition. Vol. 16. Elsevier, pp. 367–378.
[69] Gu, S., Zhang, L., Zuo, W., Feng, X., 2014. Weighted nuclear norm minimization with application to image
denoising. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2862–
2869.
[70] Guan, J., Lai, R., Xiong, A., 2019. Wavelet deep neural network for stripe noise removal. IEEE Access 7,
44544–44554.
[71] Guo, S., Yan, Z., Zhang, K., Zuo, W., Zhang, L., 2019. Toward convolutional blind denoising of real pho-
tographs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1712–
1722.
[72] Guo, Z., Sun, Y., Jian, M., Zhang, X., 2018. Deep residual network with sparse feedback for image restoration.
Applied Sciences 8 (12), 2417.
[73] Han, Y., Ye, J. C., 2018. Framing u-net via deep convolutional framelets: Application to sparse-view ct. IEEE
Transactions on Medical Imaging 37 (6), 1418–1429.
[74] He, J., Dong, C., Qiao, Y., 2019. Multi-dimension modulation for image restoration with dynamic controllable
residual learning. arXiv preprint arXiv:1912.05293.
[75] He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778.
[76] Heckel, R., Huang, W., Hand, P., Voroninski, V., 2018. Rate-optimal denoising with deep neural networks.
arXiv preprint arXiv:1805.08855.
[77] Heinrich, M. P., Stille, M., Buzug, T. M., 2018. Residual u-net convolutional neural network architecture for
low-dose ct denoising. Current Directions in Biomedical Engineering 4 (1), 297–300.
[78] Hendriksen, A. A., Pelt, D. M., Batenburg, K. J., 2020. Noise2inverse: Self-supervised deep convolutional
denoising for linear inverse problems in imaging. arXiv preprint arXiv:2001.11801.
[79] Hinton, G. E., Osindero, S., Teh, Y. W., 2006. A fast learning algorithm for deep belief nets. Neural Computation 18 (7), 1527–1554.
[80] Hinton, G. E., Salakhutdinov, R. R., 2006. Reducing the dimensionality of data with neural networks. Science
313 (5786), 504–507.
[81] Hirose, Y., Yamashita, K., Hijiya, S., 1991. Back-propagation algorithm which varies the number of hidden
units. Neural Networks 4 (1), 61–66.
[82] Hong, S.-W., Bao, P., 2000. An edge-preserving subband coding model based on non-adaptive and adaptive
regularization. Image and Vision Computing 18 (8), 573–582.
[83] Hongqiang, M., Shiping, M., Yuelei, X., Mingming, Z., 2018. An adaptive image denoising method based on
deep rectified denoising auto-encoder. In: Journal of Physics: Conference Series. Vol. 1060. IOP Publishing,
p. 012048.
[84] Hore, A., Ziou, D., 2010. Image quality metrics: Psnr vs. ssim. In: 2010 20th International Conference on
Pattern Recognition. IEEE, pp. 2366–2369.
[85] Hsu, C.-C., Lin, C.-W., 2017. Cnn-based joint clustering and representation learning with feature drift com-
pensation for large-scale image data. IEEE Transactions on Multimedia 20 (2), 421–429.
[86] Hu, G., Yang, Y., Yi, D., Kittler, J., Christmas, W., Li, S. Z., Hospedales, T., 2015. When face recognition
meets with deep learning: an evaluation of convolutional neural networks for face recognition. In: Proceedings
of the IEEE international Conference on Computer Vision Workshops. pp. 142–150.
[87] Huang, T., 1971. Stability of two-dimensional recursive filters (mathematical model for stability problem in two-dimensional recursive filtering).
[88] Ioffe, S., 2017. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models.
In: Advances In Neural Information Processing Systems. pp. 1945–1953.
[89] Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. arXiv preprint arXiv:1502.03167.
[90] Isogawa, K., Ida, T., Shiodera, T., Takeguchi, T., 2017. Deep shrinkage convolutional neural network for adaptive noise reduction. IEEE Signal Processing Letters 25 (2), 224–228.
[93] Jeon, W., Jeong, W., Son, K., Yang, H., 2018. Speckle noise reduction for digital holographic images using
multi-scale convolutional neural networks. Optics Letters 43 (17), 4240–4243.
[94] Jia, X., Chai, H., Guo, Y., Huang, Y., Zhao, B., 2018. Multiscale parallel feature extraction convolution neural
network for image denoising. Journal of Electronic Imaging 27 (6), 063031.
[95] Jia, X., Liu, S., Feng, X., Zhang, L., 2019. Focnet: A fractional optimal control network for image denoising.
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6054–6063.
[96] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T., 2014.
Caffe: Convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International
Conference on Multimedia. ACM, pp. 675–678.
[97] Jian, W., Zhao, H., Bai, Z., Fan, X., 2018. Low-light remote sensing images enhancement algorithm based on
fully convolutional neural network. In: China High Resolution Earth Observation Conference. Springer, pp.
56–65.
[98] Jiang, D., Dou, W., Vosters, L., Xu, X., Sun, Y., Tan, T., 2018. Denoising of 3d magnetic resonance images with multi-channel residual learning of convolutional neural network. Japanese Journal of Radiology 36 (9), 566–574.
[99] Jiang, L., Jing, Y., Hu, S., Ge, B., Xiao, W., 2018. Deep refinement network for natural low-light image
enhancement in symmetric pathways. Symmetry 10 (10), 491.
[100] Jiao, J., Tu, W.-C., He, S., Lau, R. W., 2017. Formresnet: Formatted residual learning for image restoration. In:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 38–46.
[101] Jifara, W., Jiang, F., Rho, S., Cheng, M., Liu, S., 2019. Medical image denoising using convolutional neural
network: a residual learning approach. The Journal of Supercomputing 75 (2), 704–718.
[102] Jin, K. H., McCann, M. T., Froustey, E., Unser, M., 2017. Deep convolutional neural network for inverse
problems in imaging. IEEE Transactions on Image Processing 26 (9), 4509–4522.
[103] Kadimesetty, V. S., Gutta, S., Ganapathy, S., Yalavarthy, P. K., 2018. Convolutional neural network-based
robust denoising of low-dose computed tomography perfusion maps. IEEE Transactions on Radiation and
Plasma Medical Sciences 3 (2), 137–152.
[104] Karlik, B., Olgac, A. V., 2011. Performance analysis of various activation functions in generalized mlp archi-
tectures of neural networks. International Journal of Artificial Intelligence and Expert Systems 1 (4), 111–122.
[105] Khan, S., Khan, K. S., Shin, S. Y., 2019. Symbol denoising in high order m-qam using residual learning of
deep cnn. In: 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC). IEEE,
pp. 1–6.
[106] Khaw, H. Y., Soon, F. C., Chuah, J. H., Chow, C.-O., 2017. Image noise types recognition using convolutional
neural network with principal components analysis. IET Image Processing 11 (12), 1238–1245.
[107] Khoroushadi, M., Sadegh, M., 2018. Enhancement in low-dose computed tomography through image denoising
techniques: Wavelets and deep learning. Ph.D. thesis, ProQuest Dissertations Publishing.
[108] Kokkinos, F., Lefkimmiatis, S., 2019. Iterative joint image demosaicking and denoising using a residual de-
noising network. IEEE Transactions on Image Processing.
[109] Krizhevsky, A., Sutskever, I., Hinton, G. E., 2012. Imagenet classification with deep convolutional neural
networks. In: Advances in Neural Information Processing Systems. pp. 1097–1105.
[110] Kutzner, C., Páll, S., Fechner, M., Esztermann, A., de Groot, B. L., Grubmüller, H., 2019. More bang for your
buck: improved use of gpu nodes for gromacs 2018. arXiv preprint arXiv:1903.05918.
[111] Latif, G., Iskandar, D. A., Alghazo, J., Butt, M., Khan, A. H., 2018. Deep cnn based mr image denoising for
tumor segmentation using watershed transform. International Journal of Engineering & Technology 7 (2.3),
37–42.
[112] Lebrun, M., Colom, M., Morel, J.-M., 2015. The noise clinic: a blind image denoising algorithm. Image
Processing On Line 5, 1–54.
[113] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al., 1998. Gradient-based learning applied to document
recognition. Proceedings of the IEEE 86 (11), 2278–2324.
[114] Lee, C.-C., de Gyvez, J. P., 1996. Color image processing in a cellular neural-network environment. IEEE
Transactions on Neural Networks 7 (5), 1086–1098.
[115] Lee, D., Yun, S., Choi, S., Yoo, H., Yang, M.-H., Oh, S., 2018. Unsupervised holistic image generation from
key local patches. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 19–35.
[116] Lefkimmiatis, S., 2017. Non-local color image denoising with convolutional neural networks. In: Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3587–3596.
[117] Lei, Y., Yuan, W., Wang, H., Wenhu, Y., Bo, W., 2016. A skin segmentation algorithm based on stacked
autoencoders. IEEE Transactions on Multimedia 19 (4), 740–749.
[118] Li, H., Yang, W., Yong, X., 2018. Deep learning for ground-roll noise attenuation. In: SEG Technical Program
Expanded Abstracts 2018. Society of Exploration Geophysicists, pp. 1981–1985.
[119] Li, J., Li, M., Lu, G., Zhang, B., Yin, H., Zhang, D., 2020. Similarity and diversity induced paired projection
for cross-modal retrieval. Information Sciences.
[120] Li, J., Lu, G., Zhang, B., You, J., Zhang, D., 2019. Shared linear encoder-based multikernel gaussian process
latent variable model for visual classification. IEEE transactions on cybernetics.
[121] Li, J., Zhang, B., Zhang, D., 2017. Shared autoencoder gaussian process latent variable model for visual
classification. IEEE transactions on neural networks and learning systems 29 (9), 4272–4286.
[122] Li, L., Wu, J., Jin, X., 2018. Cnn denoising for medical image based on wavelet domain. In: 2018 9th Interna-
tional Conference on Information Technology in Medicine and Education (ITME). IEEE, pp. 105–109.
[123] Li, M., Hsu, W., Xie, X., Cong, J., Gao, W., 2020. Sacnn: Self-attention convolutional neural network for
low-dose ct denoising with self-supervised perceptual loss network. IEEE Transactions on Medical Imaging.
[124] Li, Q., Cai, W., Wang, X., Zhou, Y., Feng, D. D., Chen, M., 2014. Medical image classification with convo-
lutional neural network. In: 2014 13th International Conference on Control Automation Robotics & Vision
(ICARCV). IEEE, pp. 844–848.
[125] Li, S., He, F., Du, B., Zhang, L., Xu, Y., Tao, D., 2019. Fast spatio-temporal residual network for video
super-resolution. arXiv preprint arXiv:1904.02870.
[126] Li, X., Du, B., Xu, C., Zhang, Y., Zhang, L., Tao, D., 2020. Robust learning with imperfect privileged infor-
mation. Artificial Intelligence 282, 103246.
[127] Li, X., Liu, M., Ye, Y., Zuo, W., Lin, L., Yang, R., 2018. Learning warped guidance for blind face restoration.
In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 272–289.
[128] Li, Z., Wu, J., 2019. Learning deep cnn denoiser priors for depth image inpainting. Applied Sciences 9 (6),
1103.
[129] Liang, J., Liu, R., 2015. Stacked denoising autoencoder and dropout together to prevent overfitting in deep
neural network. In: 2015 8th International Congress on Image and Signal Processing (CISP). IEEE, pp. 697–
701.
[130] Liang, X., Zhang, D., Lu, G., Guo, Z., Luo, N., 2019. A novel multicamera system for high-speed touchless
palm recognition. IEEE Transactions on Systems, Man, and Cybernetics: Systems.
[131] Lin, K., Li, T. H., Liu, S., Li, G., 2019. Real photographs denoising with noise domain adaptation and attentive
generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops.
[139] Liu, W., Lee, J., 2019. A 3-d atrous convolution neural network for hyperspectral image denoising. IEEE
Transactions on Geoscience and Remote Sensing.
[140] Liu, X., Suganuma, M., Sun, Z., Okatani, T., 2019. Dual residual networks leveraging the potential of paired
operations for image restoration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition. pp. 7007–7016.
[141] Lo, S.-C., Lou, S.-L., Lin, J.-S., Freedman, M. T., Chien, M. V., Mun, S. K., 1995. Artificial convolution neural
network techniques and applications for lung nodule detection. IEEE Transactions on Medical Imaging 14 (4),
711–718.
[142] LOO TIANG KUAN, L., 2017. Survey of deep neural networks in blind denoising using different architectures
and different labels. Ph.D. thesis.
[143] Lu, Y., Lai, Z., Li, X., Wong, W. K., Yuan, C., Zhang, D., 2018. Low-rank 2-d neighborhood preserving
projection for enhanced robust image representation. IEEE Transactions on Cybernetics 49 (5), 1859–1872.
[144] Lu, Y., Wong, W., Lai, Z., Li, X., 2019. Robust flexible preserving embedding. IEEE Transactions on Cyber-
netics.
[145] Lu, Z., Yu, Z., Ya-Li, P., Shi-Gang, L., Xiaojun, W., Gang, L., Yuan, R., 2018. Fast single image super-
resolution via dilated residual networks. IEEE Access.
[146] Lucas, A., Iliadis, M., Molina, R., Katsaggelos, A. K., 2018. Using deep neural networks for inverse problems
in imaging: beyond analytical methods. IEEE Signal Processing Magazine 35 (1), 20–36.
[147] Ma, K., Duanmu, Z., Wu, Q., Wang, Z., Yong, H., Li, H., Zhang, L., 2016. Waterloo exploration database: New
challenges for image quality assessment models. IEEE Transactions on Image Processing 26 (2), 1004–1016.
[148] Ma, Y., Chen, X., Zhu, W., Cheng, X., Xiang, D., Shi, F., 2018. Speckle noise reduction in optical coherence
tomography images based on edge-sensitive cgan. Biomedical Optics Express 9 (11), 5129–5146.
[149] Mafi, M., Martin, H., Cabrerizo, M., Andrian, J., Barreto, A., Adjouadi, M., 2018. A comprehensive survey on
impulse and gaussian denoising filters for digital images. Signal Processing.
[150] Mairal, J., Bach, F. R., Ponce, J., Sapiro, G., Zisserman, A., 2009. Non-local sparse models for image restora-
tion. In: ICCV. Vol. 29. Citeseer, pp. 54–62.
[151] Majumdar, A., 2018. Blind denoising autoencoder. IEEE Transactions on Neural Networks and Learning Systems.
[155] McCann, M. T., Jin, K. H., Unser, M., 2017. Convolutional neural networks for inverse problems in imaging:
A review. IEEE Signal Processing Magazine 34 (6), 85–95.
[156] Meinhardt, T., Moller, M., Hazirbas, C., Cremers, D., 2017. Learning proximal operators: Using denoising
networks for regularizing inverse imaging problems. In: Proceedings of the IEEE International Conference on
Computer Vision. pp. 1781–1790.
[157] Meng, M., Li, S., Yao, L., Li, D., Zhu, M., Gao, Q., Xie, Q., Zhao, Q., Bian, Z., Huang, J., et al., 2020. Semi-
supervised learned sinogram restoration network for low-dose ct image reconstruction. In: Medical Imaging
2020: Physics of Medical Imaging. Vol. 11312. International Society for Optics and Photonics, p. 113120B.
[158] Mildenhall, B., Barron, J. T., Chen, J., Sharlet, D., Ng, R., Carroll, R., 2018. Burst denoising with kernel
prediction networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
pp. 2502–2510.
[159] Nair, V., Hinton, G. E., 2010. Rectified linear units improve restricted boltzmann machines. In: Proceedings of
the 27th international conference on machine learning (ICML-10). pp. 807–814.
[160] Nam, S., Hwang, Y., Matsushita, Y., Joo Kim, S., 2016. A holistic approach to cross-channel image noise
modeling and its application to image denoising. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition. pp. 1683–1691.
[161] Nossek, J., Roska, T., 1993. Special issue on cellular neural networks-introduction.
[162] Nvidia, C., 2011. Nvidia cuda c programming guide. Nvidia Corporation 120 (18), 8.
[163] Osher, S., Burger, M., Goldfarb, D., Xu, J., Yin, W., 2005. An iterative regularization method for total variation-
based image restoration. Multiscale Modeling & Simulation 4 (2), 460–489.
[164] Paik, J. K., Katsaggelos, A. K., 1992. Image restoration using a modified hopfield network. IEEE Transactions
on Image Processing 1 (1), 49–63.
[165] Panda, A., Naskar, R., Pal, S., 2018. Exponential linear unit dilated residual network for digital image denois-
ing. Journal of Electronic Imaging 27 (5), 053024.
[166] Pardasani, R., Shreemali, U., 2018. Image denoising and super-resolution using residual learning of deep
convolutional network. arXiv preprint arXiv:1809.08229.
[167] Park, J. H., Kim, J. H., Cho, S. I., 2018. The analysis of cnn structure for image denoising. In: 2018 Interna-
tional SoC Design Conference (ISOCC). IEEE, pp. 220–221.
[168] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer,
A., 2017. Automatic differentiation in pytorch.
[169] Peng, Y., Zhang, L., Liu, S., Wu, X., Zhang, Y., Wang, X., 2019. Dilated residual networks with symmetric
skip connection for image denoising. Neurocomputing 345, 67–76.
[170] Pitas, I., Venetsanopoulos, A., 1986. Nonlinear mean filters in image processing. IEEE Transactions on Acous-
tics, Speech, and Signal Processing 34 (3), 573–584.
[171] Plotz, T., Roth, S., 2017. Benchmarking denoising algorithms with real photographs. In: Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition. pp. 1586–1595.
[172] Priyanka, S. A., Wang, Y.-K., 2019. Fully symmetric convolutional network for effective image denoising.
Applied Sciences 9 (4), 778.
[173] Radford, A., Metz, L., Chintala, S., 2015. Unsupervised representation learning with deep convolutional gen-
erative adversarial networks. arXiv preprint arXiv:1511.06434.
[174] Ran, M., Hu, J., Chen, Y., Chen, H., Sun, H., Zhou, J., Zhang, Y., 2019. Denoising of 3d magnetic resonance
images using a residual encoder–decoder wasserstein generative adversarial network. Medical Image Analysis
55, 165–180.
[175] Remez, T., Litany, O., Giryes, R., Bronstein, A. M., 2018. Class-aware fully convolutional gaussian and poisson
denoising. IEEE Transactions on Image Processing 27 (11), 5707–5722.
[176] Ren, D., Shang, W., Zhu, P., Hu, Q., Meng, D., Zuo, W., 2020. Single image deraining using bilateral recurrent
network. IEEE Transactions on Image Processing.
[177] Ren, D., Zuo, W., Hu, Q., Zhu, P., Meng, D., 2019. Progressive image deraining networks: a better and simpler
baseline. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3937–
3946.
[178] Ren, D., Zuo, W., Zhang, D., Zhang, L., Yang, M.-H., 2019. Simultaneous fidelity and regularization learning
for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[179] Ren, W., Liu, S., Ma, L., Xu, Q., Xu, X., Cao, X., Du, J., Yang, M.-H., 2019. Low-light image enhancement
via a deep hybrid network. IEEE Transactions on Image Processing 28 (9), 4364–4375.
[180] Ren, W., Pan, J., Zhang, H., Cao, X., Yang, M.-H., 2020. Single image dehazing via multi-scale convolutional
neural networks with holistic edges. International Journal of Computer Vision 128 (1), 240–259.
[181] Roth, S., Black, M. J., 2005. Fields of experts: A framework for learning image priors. In: 2005 IEEE Computer
Society Conference on Computer Vision and Pattern Recognition (CVPR’05). Vol. 2. Citeseer, pp. 860–867.
[182] Sadda, P., Qarni, T., 2018. Real-time medical video denoising with deep learning: application to angiography.
International journal of applied information systems 12 (13), 22.
[183] Schmidhuber, J., 2015. Deep learning in neural networks: An overview. Neural Networks 61, 85–117.
[184] Schmidt, U., Roth, S., 2014. Shrinkage fields for effective image restoration. In: Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition. pp. 2774–2781.
noising in infocommunication systems. In: 2018 International Scientific-Practical Conference Problems of
Infocommunications. Science and Technology (PIC S&T). IEEE, pp. 429–432.
[187] Shwartz-Ziv, R., Tishby, N., 2017. Opening the black box of deep neural networks via information. arXiv
preprint arXiv:1703.00810.
[188] Si, X., Yuan, Y., 2018. Random noise attenuation based on residual learning of deep convolutional neural
network. In: SEG Technical Program Expanded Abstracts 2018. Society of Exploration Geophysicists, pp.
1986–1990.
[189] Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv
preprint arXiv:1409.1556.
[190] Sivakumar, K., Desai, U. B., 1993. Image restoration using a multilayer perceptron with a multilevel sigmoidal
function. IEEE Transactions on Signal Processing 41 (5), 2018–2022.
[191] Soltanayev, S., Chun, S. Y., 2018. Training deep learning based denoisers without ground truth data. In: Ad-
vances in Neural Information Processing Systems. pp. 3257–3267.
[192] Song, Y., Zhu, Y., Du, X., 2019. Dynamic residual dense network for image denoising. Sensors 19 (17), 3809.
[193] Stone, J. E., Gohara, D., Shi, G., 2010. Opencl: A parallel programming standard for heterogeneous computing
systems. Computing in science & engineering 12 (3), 66.
[194] Su, Y., Lian, Q., Zhang, X., Shi, B., Fan, X., 2019. Multi-scale cross-path concatenation residual network for
poisson denoising. IET Image Processing.
[195] Sun, X., Kottayil, N. K., Mukherjee, S., Cheng, I., 2018. Adversarial training for dual-stage image denoising
enhanced with feature matching. In: International Conference on Smart Multimedia. Springer, pp. 357–366.
[196] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich,
A., 2015. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition. pp. 1–9.
[197] Tai, Y., Yang, J., Liu, X., Xu, C., 2017. Memnet: A persistent memory network for image restoration. In:
Proceedings of the IEEE international Conference on Computer Vision. pp. 4539–4547.
[198] Tamura, S., 1989. An analysis of a noise reduction neural network. In: International Conference on Acoustics,
Speech, and Signal Processing,. IEEE, pp. 2001–2004.
[199] Tan, H., Xiao, H., Lai, S., Liu, Y., Zhang, M., 2019. Deep residual learning for burst denoising. In: 2019 IEEE
4th International Conference on Image, Vision and Computing (ICIVC). IEEE, pp. 156–161.
[200] Tao, L., Zhu, C., Song, J., Lu, T., Jia, H., Xie, X., 2017. Low-light image enhancement using cnn and bright
channel prior. In: 2017 IEEE International Conference on Image Processing (ICIP). IEEE, pp. 3215–3219.
[201] Tao, L., Zhu, C., Xiang, G., Li, Y., Jia, H., Xie, X., 2017. Llcnn: A convolutional neural network for low-light
image enhancement. In: 2017 IEEE Visual Communications and Image Processing (VCIP). IEEE, pp. 1–4.
[202] Tassano, M., Delon, J., Veit, T., 2019. An analysis and implementation of the ffdnet image denoising method.
Image Processing On Line 9, 1–25.
[203] Tassano, M., Delon, J., Veit, T., 2019. Dvdnet: A fast network for deep video denoising. In: 2019 IEEE
International Conference on Image Processing (ICIP). IEEE, pp. 1805–1809.
[204] Tian, C., Xu, Y., Fei, L., Wang, J., Wen, J., Luo, N., 2019. Enhanced cnn for image denoising. CAAI Transac-
tions on Intelligence Technology 4 (1), 17–23.
[205] Tian, C., Xu, Y., Fei, L., Yan, K., 2018. Deep learning for image denoising: a survey. In: International Confer-
ence on Genetic and Evolutionary Computing. Springer, pp. 563–572.
[206] Tian, C., Xu, Y., Li, Z., Zuo, W., Fei, L., Liu, H., 2020. Attention-guided cnn for image denoising. Neural
Networks.
[207] Tian, C., Xu, Y., Zuo, W., 2020. Image denoising using deep cnn with batch renormalization. Neural Networks
121, 461–473.
[208] Tian, C., Xu, Y., Zuo, W., Du, B., Lin, C.-W., Zhang, D., 2020. Designing and training of a dual cnn for image
denoising.
[211] Tran, L., Yin, X., Liu, X., 2017. Disentangled representation learning gan for pose-invariant face recognition.
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1415–1424.
[212] Tripathi, S., Lipton, Z. C., Nguyen, T. Q., 2018. Correction by projection: Denoising images with generative
adversarial networks. arXiv preprint arXiv:1803.04477.
[213] Uchida, K., Tanaka, M., Okutomi, M., 2018. Non-blind image restoration based on convolutional neural net-
work. In: 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE). IEEE, pp. 40–44.
[214] Vedaldi, A., Lenc, K., 2015. Matconvnet: Convolutional neural networks for matlab. In: Proceedings of the
23rd ACM International Conference on Multimedia. ACM, pp. 689–692.
[215] Vogel, C., Pock, T., 2017. A primal dual network for low-level vision problems. In: German Conference on
Pattern Recognition. Springer, pp. 189–202.
[216] Wang, C., Zhou, S. K., Cheng, Z., 2020. First image then video: A two-stage network for spatiotemporal video
denoising. arXiv preprint arXiv:2001.00346.
[217] Wang, H., Wang, Q., Gao, M., Li, P., Zuo, W., 2018. Multi-scale location-aware kernel representation for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1248–1257.
[218] Wang, T., Qin, Z., Zhu, M., 2017. An elu network with total variation for image denoising. In: International
Conference on Neural Information Processing. Springer, pp. 227–237.
[219] Wang, T., Sun, M., Hu, K., 2017. Dilated deep residual network for image denoising. In: 2017 IEEE 29th
International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, pp. 1272–1279.
[220] Wang, X., Dai, F., Ma, Y., Guo, J., Zhao, Q., Zhang, Y., 2019. Near-infrared image guided neural networks
for color image denoising. In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP). IEEE, pp. 3807–3811.
[221] Wei, J., Xia, Y., Zhang, Y., 2019. M3net: A multi-model, multi-size, and multi-view deep neural network for
brain magnetic resonance image segmentation. Pattern Recognition 91, 366–378.
[222] Wen, J., Xu, Y., Liu, H., 2020. Incomplete multiview spectral clustering with adaptive graph learning. IEEE
Transactions on Cybernetics 50 (4), 1418–1429.
[223] Wen, J., Zhang, Z., Zhang, Z., Fei, L., Wang, M., 2020. Generalized incomplete multiview clustering with
In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 20–36.
[233] Xu, J., Zhang, L., Zuo, W., Zhang, D., Feng, X., 2015. Patch group based nonlocal self-similarity prior learning
for image denoising. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 244–252.
[234] Xu, Q., Zhang, C., Zhang, L., 2015. Denoising convolutional neural network. In: 2015 IEEE International
Conference on Information and Automation. IEEE, pp. 1184–1187.
[235] Xu, X., Li, M., Sun, W., 2019. Learning deformable kernels for image and video denoising. arXiv preprint
arXiv:1904.06903.
[236] Yan, H., Tan, V., Yang, W., Feng, J., 2019. Unsupervised image noise modeling with self-consistent gan. arXiv
preprint arXiv:1906.05762.
[237] Yang, D., Sun, J., 2017. Bm3d-net: A convolutional neural network for transform-domain collaborative filter-
ing. IEEE Signal Processing Letters 25 (1), 55–59.
[238] Yang, J., Chu, D., Zhang, L., Xu, Y., Yang, J., 2013. Sparse representation classifier steered discriminative
projection with applications to face recognition. IEEE Transactions on Neural Networks and Learning Systems
24 (7), 1023–1035.
[239] Yang, J., Liu, X., Song, X., Li, K., 2017. Estimation of signal-dependent noise level function using multi-
column convolutional neural network. In: 2017 IEEE International Conference on Image Processing (ICIP).
IEEE, pp. 2418–2422.
[240] Yang, J., Zhang, L., Xu, Y., Yang, J.-y., 2012. Beyond sparsity: The role of l1-optimizer in pattern classification.
Pattern Recognition 45 (3), 1104–1118.
[241] Yao, Y., Wu, X., Zhang, L., Shan, S., Zuo, W., 2018. Joint representation and truncated inference learning for
correlation filter based tracking. In: Proceedings of the European Conference on Computer Vision (ECCV).
pp. 552–567.
[242] Ye, J. C., Han, Y., Cha, E., 2018. Deep convolutional framelets: A general deep learning framework for inverse
problems. SIAM Journal on Imaging Sciences 11 (2), 991–1048.
[243] Yeh, R. A., Lim, T. Y., Chen, C., Schwing, A. G., Hasegawa-Johnson, M., Do, M., 2018. Image restoration with
deep generative models. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP). IEEE, pp. 6772–6776.
[244] Yi, D., Lei, Z., Liao, S., Li, S. Z., 2014. Learning face representation from scratch. arXiv preprint
arXiv:1411.7923.
[245] Yu, A., Liu, X., Wei, X., Fu, T., Liu, D., 2018. Generative adversarial networks with dense connection for
optical coherence tomography images denoising. In: 2018 11th International Congress on Image and Signal
Processing, BioMedical Engineering and Informatics (CISP-BMEI). IEEE, pp. 1–5.
[246] Yu, S., Ma, J., Wang, W., 2019. Deep learning for denoising. Geophysics 84 (6), V333–V350.
[247] Yuan, D., Fan, N., He, Z., 2020. Learning target-focusing convolutional regression model for visual object
tracking. Knowledge-Based Systems, 105526.
[248] Yuan, D., Li, X., He, Z., Liu, Q., Lu, S., 2020. Visual object tracking with adaptive structural convolutional
network. Knowledge-Based Systems, 105554.
[249] Yuan, Q., Zhang, Q., Li, J., Shen, H., Zhang, L., 2018. Hyperspectral image denoising employing a spatial–
spectral deep residual convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing
57 (2), 1205–1218.
[250] Yuan, Y., Liu, S., Zhang, J., Zhang, Y., Dong, C., Lin, L., 2018. Unsupervised image super-resolution using
cycle-in-cycle generative adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition Workshops. pp. 701–710.
[251] Yue, Z., Yong, H., Zhao, Q., Meng, D., Zhang, L., 2019. Variational denoising network: Toward blind noise
modeling and removal. In: Advances in Neural Information Processing Systems. pp. 1688–1699.
[252] Zamparelli, M., 1997. Genetically trained cellular neural networks. Neural Networks 10 (6), 1143–1151.
[253] Zarshenas, A., Suzuki, K., 2018. Deep neural network convolution for natural image denoising. In: 2018 IEEE
International Conference on Systems, Man, and Cybernetics (SMC). IEEE, pp. 2534–2539.
[254] Zha, Z., Yuan, X., Yue, T., Zhou, J., 2018. From rank estimation to rank approximation: Rank residual con-
straint for image denoising. arXiv preprint arXiv:1807.02504.
[255] Zhang, B., Jin, S., Xia, Y., Huang, Y., Xiong, Z., 2020. Attention mechanism enhanced kernel prediction
networks for denoising of burst images. In: ICASSP 2020-2020 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). IEEE, pp. 2083–2087.
[256] Zhang, F., Liu, D., Wang, X., Chen, W., Wang, W., 2018. Random noise attenuation method for seismic data
based on deep residual networks. In: International Geophysical Conference, Beijing, China, 24-27 April 2018.
Society of Exploration Geophysicists and Chinese Petroleum Society, pp. 1774–1777.
[257] Zhang, J., Ghanem, B., 2018. Ista-net: Interpretable optimization-inspired deep network for image compressive
sensing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1828–1837.
[258] Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L., 2017. Beyond a gaussian denoiser: Residual learning of
deep cnn for image denoising. IEEE Transactions on Image Processing 26 (7), 3142–3155.
[259] Zhang, K., Zuo, W., Gu, S., Zhang, L., 2017. Learning deep cnn denoiser prior for image restoration. In:
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3929–3938.
[260] Zhang, K., Zuo, W., Zhang, L., 2018. Ffdnet: Toward a fast and flexible solution for cnn-based image denoising.
IEEE Transactions on Image Processing 27 (9), 4608–4622.
[261] Zhang, K., Zuo, W., Zhang, L., 2018. Learning a single convolutional super-resolution network for multiple
degradations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3262–
3271.
[262] Zhang, K., Zuo, W., Zhang, L., 2019. Deep plug-and-play super-resolution for arbitrary blur kernels. In: Pro-
ceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1671–1681.
[263] Zhang, L., Wu, X., Buades, A., Li, X., 2011. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. Journal of Electronic Imaging 20 (2), 023016.
[264] Zhang, L., Zhang, L., Mou, X., Zhang, D., 2011. Fsim: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing 20 (8), 2378–2386.
[265] Zhang, L., Zuo, W., 2017. Image restoration: From sparse and low-rank priors to deep priors [lecture notes].
IEEE Signal Processing Magazine 34 (5), 172–179.
[266] Zhang, M., Zhang, F., Liu, Q., Wang, S., 2019. Vst-net: Variance-stabilizing transformation inspired network
for poisson denoising. Journal of Visual Communication and Image Representation 62, 12–22.
[267] Zhang, Z., Geiger, J., Pohjalainen, J., Mousa, A. E.-D., Jin, W., Schuller, B., 2018. Deep learning for envi-
ronmentally robust speech recognition: An overview of recent developments. ACM Transactions on Intelligent
Systems and Technology (TIST) 9 (5), 49.
[268] Zhang, Z., Wang, L., Kai, A., Yamada, T., Li, W., Iwahashi, M., 2015. Deep neural network-based bottleneck
feature and denoising autoencoder-based dereverberation for distant-talking speaker identification. EURASIP
Journal on Audio, Speech, and Music Processing 2015 (1), 12.
[269] Zhao, D., Ma, L., Li, S., Yu, D., 2019. End-to-end denoising of dark burst images using recurrent fully convo-
lutional networks. arXiv preprint arXiv:1904.07483.
[270] Zhao, H., Shao, W., Bao, B., Li, H., 2019. A simple and robust deep convolutional approach to blind image
denoising. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. pp. 0–0.
[271] Zheng, Y., Duan, H., Tang, X., Wang, C., Zhou, J., 2019. Denoising in the dark: Privacy-preserving deep neural
network based image denoising. IEEE Transactions on Dependable and Secure Computing.
[272] ZhiPing, Q., YuanQi, Z., Yi, S., XiangBo, L., 2018. A new generative adversarial network for texture pre-
serving image denoising. In: 2018 Eighth International Conference on Image Processing Theory, Tools and
Dear Editors:
I would like to submit the manuscript entitled “Deep Learning on Image Denoising: An Overview” for consideration for publication in Neural Networks.
No conflict of interest exists in the submission of this manuscript, and the manuscript has been approved by all authors for publication. I would like to declare on behalf of my co-authors that the work described is original research that has not been published previously and is not under consideration for publication elsewhere, in whole or in part. All the authors listed have approved the enclosed manuscript.
I deeply appreciate your consideration of my manuscript and look forward to receiving comments from the reviewers. If you have any queries, please do not hesitate to contact me at the address below.
Thank you and best regards.
Yours sincerely,
Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo and Chia-Wen
Lin
Corresponding author:
Name: Yong Xu
E-mail: yongxu@ymail.com