
Journal of Marine Science and Engineering

Article
Underwater Image Restoration via DCP and Yin–Yang
Pair Optimization
Kun Yu, Yufeng Cheng, Longfei Li, Kaihua Zhang *, Yanlei Liu and Yufang Liu

Henan Key Laboratory of Infrared Materials & Spectrum Measures and Applications, School of Physics,
Henan Normal University, Xinxiang 453007, China; yukun@htu.edu.cn (K.Y.); chengyvfeng@163.com (Y.C.);
lilongfei@htu.edu.cn (L.L.); liuyanlei@htu.edu.cn (Y.L.); yf-liu@htu.edu.cn (Y.L.)
* Correspondence: zhangkaihua@htu.edu.cn

Abstract: Underwater image restoration is a challenging problem because light is attenuated by absorption and scattering in water, which can degrade the underwater image. To restore the underwater
image and improve its contrast and color saturation, a novel algorithm based on the underwater dark
channel prior is proposed in this paper. First of all, in order to reconstruct the transmission maps of
the underwater image, the transmission maps of the blue and green channels are optimized by the
proposed first-order and second-order total variational regularization. Then, an adaptive model is
proposed to improve the first-order and second-order total variation. Finally, to solve the problem of
the excessive attenuation of the red channel, the transmission map of the red channel is compensated
by Yin–Yang pair optimization. The simulation results show that the proposed restoration algorithm
outperforms other approaches in terms of visual effect, average gradient, spatial frequency,
percentage of saturated pixels and the underwater color image quality evaluation metric.

Keywords: underwater image restoration; dark channel prior; Yin–Yang pair optimization; visual
effect; red channel compensation


Citation: Yu, K.; Cheng, Y.; Li, L.; Zhang, K.; Liu, Y.; Liu, Y. Underwater Image Restoration via DCP and Yin–Yang Pair Optimization. J. Mar. Sci. Eng. 2022, 10, 360. https://doi.org/10.3390/jmse10030360

Academic Editor: Alessandro Ridolfi
Received: 12 January 2022; Accepted: 28 February 2022; Published: 3 March 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Underwater images play a crucial role in marine biology research, resource detection, underwater target detection and recognition, etc. [1]. However, harsh underwater environments with dissolved organic compounds, concentrations of inorganic compounds, bubbles in the water, water particles, etc., seriously affect the quality of underwater imaging [2]. Underwater images with low contrast, color deviation or distortion have hindered the development of marine-science research. Thus, developing an effective image-processing method to improve underwater image quality is essential.

To improve the accuracy of underwater target recognition, scholars have proposed various solutions. Generally, these solutions can be divided into three categories: additional information, image enhancement and physics models. In previous research, additional information such as multiple images or polarization filters was used to improve underwater image quality [3–5]. Although these methods are simple and effective, they cannot be used in complex scenes or turbid underwater environments. Among the image enhancement algorithms, Lu et al. [6] employed weighted guided trigonometric filtering and artificial light correction for underwater image enhancement, but the time complexity of trilateral filtering is high. Ulutas et al. [7] proposed a methodology that included pixel-center regionalization and global and local histogram equalization with multi-scale fusion to correct color and enhance image contrast. However, due to its use of histogram equalization, this algorithm increased image noise while improving image contrast. Ancuti et al. [8] achieved better results by combining the Laplacian contrast, contrast, saliency and exposure features of white-balanced, color-corrected images with an exposure fusion algorithm. However, several results were color shifted because of the exposure process, and selecting the exposed images is difficult. In [9], a weakly


supervised color transformation technique inspired by cycle-consistent adversarial networks (CCANs) was introduced for color correction. However, it has complex network
structures with long training times and relies on large amounts of training data. The dark
channel prior (DCP) based on the physical model is now widely applied in underwater
image restoration [10–12]. For example, Galdran used the DCP red channel compensation
method to achieve color correction and visibility improvement [13]. Li et al. proposed
underwater image restoration based on blue–green channels dehazing and red channel
correction [14]. Gao et al. came up with the bright channel prior, which was inspired by
the DCP [15]. Although the problems of low contrast and color deviation in underwater
images are improved with the above methods, they cannot balance texture enhancement,
noise suppression and visual enhancement. To solve these problems, several improved
DCP algorithms have been proposed [16–19]. Combining the DCP with the variation
method may be one of the optimal methods because it has the advantages of convenient
numerical calculation expressions, good stability, etc. [20–22]. Hou et al. used non-local
variation, total variation and curvature-total variation to optimize underwater images,
respectively [19,23,24]. In the variation method, the regularization parameter plays a key
role in local smoothing constraints or texture preservation. However, the regularization
parameters are usually set as constants for computational convenience. This will cause a
decrease in the restored image quality. Hence, some researchers attempted to adjust the
the regularization parameters in variational models adaptively. Liao et al. used generalized cross-
validation to select the regularization parameters [25]. Langer et al. proposed an automated
parameter selection algorithm which can select scalar or locally varying regularization
parameters for total variation models [26]. The discrepancy rule-based method is used for
parameter selections [27,28]. Ma et al. established an energy function for regularization
parameters [29].
In this paper, a novel algorithm based on the underwater dark channel prior is proposed.
The algorithm comprises the following steps in turn: (1) An optimization model
combining high-order total variation and first-order total variation is proposed. The first-
order variational model is used to preserve the texture in the edge, and the high-order
total variation is used to suppress the background noise. (2) A method for selecting the
regularization parameters is proposed, which can update the parameters while optimizing
the images. (3) The alternating direction method of multipliers (ADMM) is used to improve
the calculation speed. (4) The transmission map and the background light of the red
channel are compensated by Yin–Yang pair optimization. The experiments
demonstrate that the quality of the restored images can be significantly improved by the
proposed algorithm.

2. Background
2.1. Underwater Imaging Model
The propagation of light underwater is a complicated process. According to the
Jaffe–McGlamery model [30], the underwater-imaging model can be divided into three
components: the direct illumination ED , the back-scattering EB and the forward-scattering
of light floating EF [31,32]. The underwater-imaging model ET can be expressed as:

E_T = E_D + E_B + E_F    (1)

Due to the scattering and absorption of the particles in the water, only part of the light
reaches the camera. Thus, at each image coordinate x, ED is expressed as:

E_D(x) = J(x) t(x)    (2)

where J is the ideal image and t is the transmission map. When the optical medium is
assumed to be homogeneous, the transmission map t is often estimated by Equation (3):

t(x) = exp(−βd(x))    (3)



where β is the attenuation coefficient and d is the image depth. Thus, Equation (2) can be
transformed as:
E_D(x) = J(x) exp(−βd(x))    (4)
The backscattered component results from the interaction between the illumination
source and the floating particles dispersed in the water. Therefore, EB can be expressed as:

E_B(x) = B(1 − t(x)) = B(1 − exp(−βd(x)))    (5)

where B is a color vector of the background light. Schechner and Karpel have shown that
backscattering is the main reason for the deterioration of underwater visibility [6]. Thus,
forward scattering EF can be neglected. The simplified underwater-image-information
model can be rewritten as:

I(x) = J(x)t(x) + B(1 − t(x))    (6)

where I is the underwater image which is captured by underwater optical imaging equipment.
To obtain a better original image, it is essential to estimate B, which can be described
as:

B = max_{x∈I} min_{y∈Ω(x)} min_c I^c(y),  c ∈ {R, G, B}    (7)

where Ω(x) is a square local patch centered at x, and c represents the color channel. It is
well known that the DCP is widely applied in estimating the transmission map t(x). He
et al. found that one of the R, G, B channels for a local area in the non-sky images is almost
zero [10]. Based on prior experience, the dark channel is defined as follows:
J^{dark}(x) = min_{y∈Ω(x)} min_c J^c(y) = 0,  c ∈ {R, G, B}    (8)

Due to the severe attenuation of the red channel, underwater images are dominated by blue or green wavelengths. Wen proposed the underwater dark channel prior (UDCP) by only considering the blue and green channels [27]. The concept of the UDCP is redefined as:

J^{udark}(x) = min_{y∈Ω(x)} min_c J^c(y) = 0,  c ∈ {G, B}    (9)

Combining Equations (6) and (9), the result is represented as follows:

min_{y∈Ω(x)} min_c ( I^c(y) / B^c ) = J^{udark} + 1 − t_DCP(x),  c ∈ {G, B}    (10)

The transmission map t_DCP(x) can be written as follows:

t_DCP(x) = 1 − min_{y∈Ω(x)} min_c ( I^c(y) / B^c )    (11)
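The estimates in Equations (7) and (11) are straightforward to prototype. The following NumPy sketch is illustrative rather than the authors' code; the patch size, the assumed R, G, B channel order and the lower clip bound on t are our own choices:

```python
import numpy as np

def background_light(img, patch=15):
    # Eq. (7): take B at the pixel that maximizes the patch-wise,
    # channel-wise minimum (the dark channel). Loops are slow but clear.
    h, w, _ = img.shape
    chan_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    dark = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    y0, x0 = np.unravel_index(dark.argmax(), dark.shape)
    return img[y0, x0, :]  # color vector B

def udcp_transmission(img, B, patch=15):
    # Eq. (11): initial transmission from the green/blue channels only.
    # Channel order R, G, B is assumed here.
    gb = img[:, :, 1:3] / np.maximum(B[1:3], 1e-6)
    chan_min = gb.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    h, w = chan_min.shape
    t = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            t[y, x] = 1.0 - padded[y:y + patch, x:x + patch].min()
    return np.clip(t, 0.05, 1.0)  # lower bound avoids blow-up when inverting Eq. (6)
```

In practice, the coarse map t_DCP produced this way is block-structured, which is exactly what the variational refinement in Section 3 is designed to smooth and sharpen.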

2.2. Yin–Yang Pair Optimization

Yin–Yang pair optimization (YYPO) is a novel metaheuristic optimization algorithm [33]. The algorithm uses the traditional Chinese concept of Yin and Yang balance to effectively balance exploration and exploitation. The balance of Yin and Yang in Chinese culture is shown in Figure 1; white and black represent Yang and Yin, respectively, which can be regarded as exploration and exploitation in YYPO. Exploitation plays a vital role in the exploration phase because it may help algorithms jump out of locally optimal solutions. Exploration makes the algorithms converge quickly and reduces the running time in the exploitation phase. This method has a strong ability to balance exploration and exploitation and can effectively estimate the optimal solution. The algorithm has been widely used in the engineering field [34–36], but it is still relatively rare in underwater image restoration.

Figure 1. Yin and Yang.

In the initialization phase of YYPO, two random points are generated in the domain of [0, 1]^n and their fitness is evaluated, where n is the dimension of the variable space. The fitter one is named P1, which is mainly used for exploitation, and the other is named P2, which is mainly used to explore the variable space. In YYPO, the minimum and the maximum number of archive updates (Imin and Imax), the expansion/contraction factor (α) and the search radii of P1 and P2 (r1 and r2) are set. There are two stages in YYPO.

In the splitting stage, although both points experience the splitting stage, only one point along with its search radius (r) undergoes the splitting stage at a time. This is implemented by one of the following two methods, decided with equal probability:

S_j = S_j + δr  and  S_{n+j} = S_j − δr,  j = 1, 2, 3, ..., n    (12)

Or:

S_{j1}^{j2} = S_{j1}^{j2} + δ1 (r/√2),  if δ2 > 0.5
S_{j1}^{j2} = S_{j1}^{j2} − δ1 (r/√2),  if δ2 ≤ 0.5    (13)

In Equation (12), the subscript represents the number of the point, while δ is a random number between 0 and 1. In Equation (13), the subscript j1 is the point number, the superscript j2 represents the number of the decision variable being modified and δ1 and δ2 denote random numbers between 0 and 1, respectively.

The archive stage starts after the required number of archive updates is reached. It should be noted that the archive contains 2I points at this stage, and two points (P1 or P2) are added during each update before the recovery stage. The search radii will be updated by Equation (14):

r1 = r1 − r1/α
r2 = r2 − r2/α    (14)

At the end of the archiving stage, the algorithm determines whether it has reached the maximum number of iterations T. If yes, it will output the result; if not, it will reset the archive matrix, and then the number of archive updates I is randomly generated between Imin and Imax. The flowchart of YYPO is illustrated in Figure 2.

Figure 2. The flowchart of YYPO.
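The two splitting methods of Equations (12) and (13) and the radius update of Equation (14) can be condensed into a short sketch. This is a simplified, hypothetical rendering: the archive is reduced to tracking the best point, radii are not swapped with the points, and acceptance is greedy, so it illustrates the structure of YYPO rather than reproducing the published algorithm:

```python
import numpy as np

def yypo_sketch(f, n, T=200, alpha=2.0, I_min=2, I_max=5, seed=0):
    # Simplified YYPO loop: P1 exploits, P2 explores; minimize f over [0, 1]^n.
    rng = np.random.default_rng(seed)
    P1, P2 = rng.random(n), rng.random(n)
    if f(P2) < f(P1):
        P1, P2 = P2, P1
    r1 = r2 = 0.5
    best, best_val = P1.copy(), f(P1)
    I = rng.integers(I_min, I_max + 1)  # archive-update interval
    since_update = 0
    for _ in range(T):
        for P, r in ((P1, r1), (P2, r2)):
            if rng.random() < 0.5:  # one-way splitting, Eq. (12)
                steps = np.eye(n) * rng.random(n) * r
                cands = np.vstack([P + steps, P - steps])
            else:                   # hypercube splitting, Eq. (13)
                signs = np.where(rng.random((2 * n, n)) > 0.5, 1.0, -1.0)
                cands = P + signs * rng.random((2 * n, n)) * r / np.sqrt(2)
            cands = np.clip(cands, 0.0, 1.0)
            vals = [f(c) for c in cands]
            j = int(np.argmin(vals))
            if vals[j] < f(P):      # greedy acceptance (a simplification)
                P[:] = cands[j]
        if f(P2) < f(P1):           # keep P1 the fitter point
            P1, P2 = P2, P1
        if f(P1) < best_val:
            best, best_val = P1.copy(), f(P1)
        since_update += 1
        if since_update >= I:       # archive stage: shrink radii, Eq. (14)
            r1, r2 = r1 - r1 / alpha, r2 - r2 / alpha
            I = rng.integers(I_min, I_max + 1)
            since_update = 0
    return best, best_val
```

For example, `yypo_sketch(lambda x: float(np.sum((x - 0.5) ** 2)), n=2)` drives the pair toward the minimizer at (0.5, 0.5).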

3. Proposed Novel Model

3.1. Novel Transmission Map Optimization Model

In underwater images, first-order variation can preserve sharp features and texture, but staircase artifacts and false edges will appear; the second-order variational smoothing ability is strong, but it will lose feeble texture. The following transmission optimization model is proposed:

E1(t, J) = ‖λ1 ∘ ε(|Dt|)‖ + ‖λ2 ∘ D²J‖ + (1/2)‖I − Jt − B(1 − t)‖²₂ + (1/2)‖t − t_DCP‖²₂    (15)

where λ1, λ2 are the regularization parameters, t_DCP is the transmission map of the underwater DCP estimation, D and D² are the first-order and the second-order difference operators and ε(·) is the smoothness metric function, which can be shown as follows:

ε(|∇t|) = α² ln(1 + |∇t|/α²)    (16)

where α is a constant and |∇t| is the first-order variation of t.


In Equation (15), the first term is the first-order variation of the transmittance map,
the second term is the second-order variation of the restored image, the third term is the
data-fidelity term of the transmissivity map and the fourth term is the data-fidelity term
of the restored image. The first-order variation of the transmission map can preserve
edges and textures, and through second-order variation, the possible staircase artifacts
and noise are smoothed. The problem of Equation (15) is mathematically ill-posed. To
improve the computational efficiency of the model, the ADMM algorithm was designed
from Equation (15) in this paper, and it introduced three auxiliary variables o, p and q,
which are shown in Equation (17):

o = ∇t,  p = ∇J,  q = ΔJ = ∇p    (17)

Therefore, Equation (15) can be transformed into Equation (18):

J = argmin_{t,J,o,p,q} { ‖λ1 ∘ ε(|o|)‖ + (1/2)‖t − t_DCP‖²₂ + (θ1/2)‖o − Dt‖²₂ + ‖σ1 ∘ (o − Dt)‖ + ‖λ2 ∘ |p|‖ + (1/2)‖I − Jt − (1 − t)B‖²₂ + (θ2/2)‖p − DJ‖²₂ + ‖σ2 ∘ (p − DJ)‖ + (θ3/2)‖q − D²J‖² + ‖σ3 ∘ (q − D²J)‖ }    (18)

where θ 1 , θ 2 and θ 3 are the penalty parameters and σ1 , σ2 and σ3 are the Lagrangian multi-
pliers. Thus, Equation (18) can be decomposed into five simpler minimization subproblems.
Let k be the current number of iterations; the subproblems are shown below:

t^{k+1} = argmin_t E(t, J^k, o^k, p^k, q^k)
o^{k+1} = argmin_o E(t^{k+1}, J^k, o, p^k, q^k)
J^{k+1} = argmin_J E(t^{k+1}, J, o^{k+1}, p^k, q^k)
p^{k+1} = argmin_p E(t^{k+1}, J^{k+1}, o^{k+1}, p, q^k)
q^{k+1} = argmin_q E(t^{k+1}, J^{k+1}, o^{k+1}, p^{k+1}, q)    (19)

Firstly, solve t^{k+1} by fixing J^k, o^k, p^k and q^k; the Euler–Lagrange equation for t^{k+1} can be expressed as Equation (20):

(t − t_DCP) + Dσ1^k − θ1 D(Dt − o^k) + t(B − J^k)² + (I − B)(B − J^k) = 0,  in Ω
(−σ1^k + θ1 (Dt − o^k)) · n = 0,  on ∂Ω    (20)

Fix t^{k+1}, J^k, p^k and q^k to calculate o^{k+1}. The generalized soft-threshold formulation is given by:

o^{k+1} = max( |Dt^{k+1} − σ1^k/θ1| − (λ1/θ1) · 2α²/(α² + |o^k|), 0 ) · (Dt^{k+1} − σ1^k/θ1) / |Dt^{k+1} − σ1^k/θ1|,
with the convention max(·, 0) · 0/|0| = 0    (21)

The J^{k+1}, p^{k+1} and q^{k+1} can be calculated with the generalized soft-threshold formula and the Euler–Lagrange equation by the above method:

Jt² − It + t(1 − t)B − θ2 D(DJ − p^k) + Dσ2^k = 0,  in Ω
(θ2 (p^k − DJ) + σ2^k) · n = 0,  on ∂Ω    (22)

q^{k+1} = max( |Dp^k − σ2^k/θ2| − λ2/θ2, 0 ) · (Dp^k − σ2^k/θ2) / |Dp^k − σ2^k/θ2|,
with the convention max(·, 0) · 0/|0| = 0    (23)

θ2 (p − DJ^{k+1}) + σ2^k + Dσ3^k + θ3 D(q^{k+1} − Dp) = 0,  in Ω
(θ3 (q^{k+1} − Dp) + σ3^k) · n = 0,  on ∂Ω    (24)

Then, the Lagrangian multipliers will be updated as follows:

σ1^{k+1} = σ1^k + θ1 (o^{k+1} − Dt^{k+1})
σ2^{k+1} = σ2^k + θ2 (p^{k+1} − DJ^{k+1})
σ3^{k+1} = σ3^k + θ3 (q^{k+1} − D²J^{k+1})    (25)

Finally, Equations (22)–(24) can be solved by the Gauss–Seidel iterative method and
fast Fourier transform.
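Two pieces of the scheme above have simple closed forms that can be written down directly: the generalized soft-threshold (shrinkage) operator behind Equations (21) and (23), and the multiplier ascent step of Equation (25). A minimal NumPy sketch (illustrative; the full solver applies these to difference fields rather than plain arrays):

```python
import numpy as np

def soft_threshold(v, tau):
    # Shrink the magnitude of v by tau while keeping its sign/direction,
    # as in the o- and q-subproblems (Eqs. (21) and (23)).
    # The convention max(., 0) * 0/|0| = 0 maps zero input to zero output.
    mag = np.abs(v)
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * v

def multiplier_update(sigma, theta, primal, target):
    # Lagrangian multiplier ascent step, Eq. (25):
    # sigma^{k+1} = sigma^k + theta * (primal - target).
    return sigma + theta * (primal - target)
```

In Equation (21) the threshold tau is itself spatially varying (it carries the factor 2α²/(α² + |o^k|)), which is why the texture-preserving behavior differs from plain total variation shrinkage.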

3.2. Transmission Map Optimization Model of Regularization Parameters


A parameter selection model is proposed in Equation (26) to adjust the regularization
parameters adaptively. This model can optimize the underwater image while selecting the
parameters:
E2(t, J, λ1, λ2) = E1(t, J, λ1, λ2) + F1(λ1, λ2)    (26)
where E1 is from Equation (15) and the function F1 (λ1 , λ2 ) is the data-fidelity term for the
regularization parameter; therefore, energy function E2 can be written as:

E2(t, J, λ1, λ2) = ‖λ1 ∘ ε(|Dt|)‖_F + ‖λ2 ∘ D²J‖_F + (1/2)‖I − Jt − B(1 − t)‖²_E + (1/2)‖t − t_DCP‖²_E + (a1/2)‖λ1 − b‖²_E + (a2/2)‖λ2 − b‖²_E    (27)

E2 can be decomposed into two subproblems. The first subproblem is shown in


Equation (15), which was employed to estimate t and J, and the second subproblem,
which solves λ1 and λ2 , can be expressed as:
(λ1, λ2) = argmin_{λ1,λ2} ‖λ1 ∘ ε(|Dt|)‖₁ + ‖λ2 ∘ D²J‖₁ + (a1/2)‖λ1 − b‖²₂ + (a2/2)‖λ2 − b‖²₂    (28)

where a1 , a2 and b are positive numbers. To calculate λ1 and λ2 , the ADMM is used again.
The subproblems of the energy function E2 are:

λ1 = argmin_{λ1} ‖λ1 ∘ ε(|Dt|)‖_F + (a1/2)‖λ1 − b‖²_E
λ2 = argmin_{λ2} ‖λ2 ∘ D²J‖_F + (a2/2)‖λ2 − b‖²_E    (29)

By calculating Equation (29), λ1 and λ2 can be obtained:

λ1 = (a1 b − ε(|Dt|)) / a1
λ2 = (a2 b − |D²J|) / a2    (30)

It is vital to research the numerical behavior of λ1 and λ2 corresponding to the regions of t and J at different scales, such as texture or background. Figure 3a is an underwater image. Figure 3b is the second-order differential of (a). Figure 3c,d are the numerical behaviors of λ1 and λ2, respectively.

Figure 3. Numerical behavior of λ1 and λ2.
In Figure 3, we show the numerical behavior of λ1 and λ2. In the background regions, λ1 and λ2 are large. In the texture regions, λ1 and λ2 are small. In the underwater image, the sharp edge of the water scattering forms a large number of slope regions. In the ramp region, the gradient of t is inversely proportional to λ1 and λ2. In the variational model, the values of the regularization parameter control the relative weights of the fidelity and regularization terms. More precisely, when the values of the regularization parameter are small, the regularization is also small. When the values are large, over-regularization may occur. In conclusion, in the background region, the regularization parameter is large, which can smooth the staircase artifacts and false edges; in the texture region, the small regularization parameter can protect the texture; in the ramp region, this behavior of the regularization parameter can enhance texture and reduce the width of the ramp.
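This adaptive behavior follows directly from Equations (16) and (30). The sketch below is illustrative; the non-negativity clamp on the output is an added safeguard that the paper does not state:

```python
import numpy as np

def smoothness_metric(grad_t, alpha=0.1):
    # Smoothness metric eps(|grad t|) from Eq. (16).
    return alpha**2 * np.log(1.0 + np.abs(grad_t) / alpha**2)

def adaptive_lambdas(grad_t, lap_J, a1=1.0, a2=1.0, b=0.05, alpha=0.1):
    # Closed-form regularization-parameter update, Eq. (30):
    # large lambda in flat regions (small gradients), small lambda on texture.
    lam1 = (a1 * b - smoothness_metric(grad_t, alpha)) / a1
    lam2 = (a2 * b - np.abs(lap_J)) / a2
    # keep the weights non-negative; a safeguard not stated in the paper
    return np.maximum(lam1, 0.0), np.maximum(lam2, 0.0)
```

Flat regions (zero gradient) receive the full weight b = 0.05, while strongly textured regions are driven toward zero regularization, matching the behavior observed in Figure 3.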

3.3. Red Compensation Based on Yin–Yang Pair Optimization


Under water, the attenuation of red light is much greater than that of green light, so the
background light and transmission map of the red channel estimated by the DCP cannot
be used directly. In this section, an estimator of the transmission and the background light
of the red channel based on YYPO is proposed.
Due to the high correlation between the red, green and blue channels, compensating
for the red channel alone may make the restored image redder. Therefore, this section
proposes a red-channel-transmission-map estimator based on YYPO. Zhao et al. [37,38]
discovered the relationship between the transmission map of the red channel and the
blue–green channel, as shown by Equations (31) and (32):

t_b(x) = t_r(x)^{β_b/β_r} = t_r(x)^{B_r(mλ_b + i) / (B_b(mλ_r + i))}    (31)

t_g(x) = t_r(x)^{β_g/β_r} = t_r(x)^{B_r(mλ_g + i) / (B_g(mλ_r + i))}    (32)

where B_r, B_g and B_b are the background lights of the red, green and blue channels, respectively.
λ_r, λ_g and λ_b are the wavelengths of red, green and blue light, m = −0.00113, i = 1.62517.
According to Equations (31) and (32), the objective function of YYPO can be defined as:

f_red(t_r) = ‖t_r − t_b^{B_b(mλ_r + i) / (B_r(mλ_b + i))}‖²₂ + ‖t_r − t_g^{B_g(mλ_r + i) / (B_r(mλ_g + i))}‖²₂    (33)

t_r = 1 − min_{y∈Ω}( I_r / B_r )    (34)
where I_r is the red channel of the original underwater image. Search t_r continuously through YYPO to calculate the value of Equation (33). When the minimum value of Equation (33) is obtained, the optimal B_r can be estimated by Equation (34). The t_r is estimated by t_g and t_b, which are optimized by Sections 3.1 and 3.2. A framework of the proposed method is presented in Figure 4. Inside the red dotted line is the acceleration part using the ADMM algorithm.

Figure 4. Framework of the proposed approach.
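The YYPO objective of Equation (33) is cheap to evaluate for a candidate red transmission map. In the sketch below, the channel wavelengths λr, λg and λb are assumed representative values, since the excerpt does not specify them:

```python
import numpy as np

# The paper's constants for Eqs. (31)-(33).
m, i = -0.00113, 1.62517
# Assumed representative channel wavelengths in nm (not given in this excerpt).
lam_r, lam_g, lam_b = 620.0, 540.0, 450.0

def f_red(t_r, t_g, t_b, B):
    # Eq. (33): consistency of a candidate red transmission t_r with the
    # optimized green/blue maps under the wavelength attenuation model.
    Br, Bg, Bb = B
    e_b = (Bb * (m * lam_r + i)) / (Br * (m * lam_b + i))
    e_g = (Bg * (m * lam_r + i)) / (Br * (m * lam_g + i))
    return (np.linalg.norm(t_r - t_b**e_b) ** 2
            + np.linalg.norm(t_r - t_g**e_g) ** 2)
```

Minimizing f_red over t_r (for instance, with the YYPO-style search of Section 2.2) ties the red channel estimate to the already-optimized green and blue maps instead of compensating it in isolation.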

4. Experiments and Discussions

4.1. Evaluation of Objectives and Approaches

In this section, the effectiveness of the proposed models is assessed. To ensure the fairness and objectivity of all algorithms, they were implemented on a Windows 10 PC with an Intel(R) Core(TM) i7-8700U CPU @ 3.20 GHz and 16.00 GB of RAM, running Python 3.7.

In the experiment, the effectiveness of the proposed algorithm was evaluated from two aspects:

(1) To examine the superiority of the proposed algorithm against other restoration algorithms;
(2) To assess the superiority of the proposed algorithm with synthesized underwater images.

In Experiment (1), several different types of algorithms were used to validate the proposed algorithm, including the algorithm for wavelength compensation and image dehazing (WCID) [39], blue–green channels dehazing and red channel correction (ARUIR) [14], guided image filtering (GIF) [18] and underwater light attenuation prior (ULAP) [40]. Experiment (1) includes two parts: quantitative analysis and subjective analysis. Quantitative analysis is performed using the average gradient (AG), spatial frequency (SF), percentage of saturated pixels (PS), underwater color image quality evaluation (UCIQE) [41] and the blind referenceless image spatial quality evaluator (BRISQUE) [42]. The IE, AG and SF reflect the number of edges and textures in the image. The number of edges and textures is proportional to the AG and SF. However, the underwater image restoration algorithm may amplify noise, and the noise will increase the AG and SF. Therefore, when using the AG and SF to evaluate the restored image, it is necessary to also examine the restored image itself. The PS judges the restored image by calculating the ratio between the number of saturated pixels and the total pixels. When the PS is small, the effect of restoring the image is better. The UCIQE takes chroma, saturation and contrast as the measurement components and linearly combines the measurement components to estimate the quality of the image. BRISQUE is a no-reference image estimation method. When the score of BRISQUE is lower, the image quality is better. In Experiment (2), to evaluate the restoration ability of the proposed algorithm, we used the method proposed by Gao et al. [15] for underwater image synthesis and adopted the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) to assess the quality of the picture. The PSNR represents the restoration accuracy between pixels, and the SSIM represents the similarity between image textures.
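For reference, two of the metrics used here have short standard forms. The sketch below uses common textbook definitions; note that AG is defined slightly differently across papers, so this is one usual variant rather than the exact formula used in these experiments:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    # Peak signal-to-noise ratio between a reference and a restored image.
    mse = np.mean((ref - img) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def average_gradient(gray):
    # Average gradient (AG): mean magnitude of the local intensity gradient,
    # a common proxy for the amount of edge and texture detail.
    gx = np.diff(gray, axis=1)[:-1, :]
    gy = np.diff(gray, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))
```

A flat image has AG of zero, and a higher AG signals more edges and texture; as noted above, amplified noise also raises AG, so the score must be read alongside the image itself.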

4.2. No-Reference Image Restoration Effect Evaluation

In this section, the superiority of the proposed algorithm is verified on real images. It is difficult to evaluate the quality of underwater images because there is no reference standard, since no ground truth or uniform measurement standard is available. So, we chose pictures from four different-colored scenes (Figures 5a–8a) that gradually change from green to blue and compared the algorithms in Section 4.1 with ours. In the transmission-map-optimization model based on the first-order and high-order variational models, the maximum number of iterations is 30, a1 = a2 = 1, b = 0.05. The selection of these parameters is the same as that proposed by Punnathanam [33]. Figure 5b–f are the restored images via WCID, ARUIR, GIF, ULAP and ours. Tables 1–4 are the assessment results of the restored images, including the SF, PS and UCIQE. Additionally, we calculated the AG and BRISQUE for these four kinds of images, as shown in Figures 9 and 10.

Figure 5. Underwater image enhancement result. (a) The origin image. (b) The result of WCID; (c) the result of ARUIR; (d) the result of GIF; (e) the result of ULAP; (f) the result of ours.

Table 1. Quantitative analysis of Figure 5. (SF, PS, UCIQE)
From Figure 6b, the contrast is low, and there are color deviations in the restored
image. The phenomenon of red channel overcompensation appears in the restored image.
The recovery effect of Figure 6c–e are poor because the algorithm does not have red cor-
rection. Figure 6f is still green because the assumption that the red channel is the weakest
is invalid. However, compared with the other algorithms, the contrast and visual effect of
J. Mar. Sci. Eng. 2022, 10, 360 11 of 17
the restored image is better. The same conclusion can be drawn from Table 2, Figures 9b
and 10b; the AG, SF, UCIQE and BRISQUE are the best in Figure 6f.


Figure 6. Underwater image enhancement result. (a) The origin image. (b) The result of WCID; (c)
the result of ARUIR; (d) the result of GIF; (e) the result of ULAP; (f) the result of ours.

The difference between Figures 5a–7a is the level of blue. In Figure 7b–d, they all have the problem of low contrast, and a lot of the texture and edges are missing. The restored image in Figure 7f is the best because the contrast and visual effect are excellent. Table 3 and Figures 9c and 10c show that the restored image in Figure 7f, obtained by the proposed algorithm, has an advantage in everything except the PS.



Figure 7. Underwater image enhancement result. (a) The origin image. (b) The result of WCID; (c) the result of ARUIR; (d) the result of GIF; (e) the result of ULAP; (f) the result of ours.

Figure 8a is also blue. From the perspective of the restoration effect, Figure 8f has a higher restoration degree, but judging from the PS, the quality of Figure 8f is low because there is a lot of visible noise. Table 4 and Figures 9d and 10d also support this conclusion; the UCIQE of Figure 8f is the best while its BRISQUE is the worst. Because of the noise, the AG is also the largest.
It can be seen from Figures 5f–8f that the proposed algorithm can choose different regularization parameters for the target and background regions. The edge and texture details of the target are effectively preserved in the target area. In the background area, the noise is effectively suppressed while the image quality is improved, so the regularization parameter selection process is of great significance.
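The region-dependent choice of regularization parameters described above can be mimicked with a simple local-variance split. The sketch below is only an illustration of the idea: the patch size, threshold and the two weights are hypothetical placeholders, not the adaptive model actually proposed in the paper.

```python
import numpy as np

def region_weights(gray, patch=3, lam_target=0.02, lam_background=0.2):
    """Pick a per-pixel regularization weight from local variance:
    high-variance (textured/target) pixels get a small weight so edges
    survive, while flat background pixels get a large weight so noise is
    smoothed. A generic illustration, not the paper's adaptive model."""
    pad = patch // 2
    padded = np.pad(gray, pad, mode="reflect")
    h, w = gray.shape
    var = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            var[i, j] = padded[i:i + patch, j:j + patch].var()
    mask = var > var.mean()  # crude target/background split
    return np.where(mask, lam_target, lam_background)
```

In the paper's framework this weight would multiply the total-variation terms pixel-wise; any segmentation of target versus background regions could be substituted for the variance threshold used here.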

Figure 8. Underwater image enhancement result. (a) The origin image. (b) The result of WCID; (c) the result of ARUIR; (d) the result of GIF; (e) the result of ULAP; (f) the result of ours.
J. Mar. Sci. Eng. 2022, 10, 360 12 of 17

Table 1. Quantitative analysis of Figure 5.

SF PS UCIQE
WCID 0.065 0.075 0.416
ARUIR 0.052 0.006 0.533
GIF 0.058 0.265 0.423
ULAP 0.049 0.062 0.046
Ours 0.073 0.018 0.572

Table 2. Quantitative analysis of Figure 6.

SF PS UCIQE
WCID 0.063 0.15 0.405
ARUIR 0.057 0.005 0.489
GIF 0.051 0.074 0.361
ULAP 0.054 0.005 0.438
Ours 0.068 0.064 0.595

Table 3. Quantitative analysis of Figure 7.

SF PS UCIQE
WCID 0.06 0.284 0.627
ARUIR 0.047 0.003 0.533
GIF 0.041 0.198 0.553
ULAP 0.053 0.048 0.583
Ours 0.067 0.017 0.626

Table 4. Quantitative analysis of Figure 8.

SF PS UCIQE
WCID 0.061 0.142 0.583
ARUIR 0.06 0.049 0.535
GIF 0.077 0.237 0.552
ULAP 0.125 0.234 0.562
Ours 0.173 0.084 0.65

Figure 5a is the original underwater image. Figure 5b,c have a serious problem of color deviation. Figure 5e has low contrast, and the restored image is green. The contrast is high in Figure 5f, and its recovery result is more realistic than the others. However, the corners of Figure 5f are black. Table 1, Figures 9a and 10a show that Figure 5f is the best in terms of the AG and UCIQE. The SF values of the restored images are almost identical. Because the corners of Figure 5f are black, the PS and the BRISQUE are not the best.
From Figure 6b, the contrast is low, and there are color deviations in the restored image. The phenomenon of red channel overcompensation appears in the restored image. The recovery effects of Figure 6c–e are poor because these algorithms do not have red correction. Figure 6f is still green because the assumption that the red channel is the weakest is invalid. However, compared with the other algorithms, the contrast and visual effect of the restored image are better. The same conclusion can be drawn from Table 2 and Figures 9b and 10b; the AG, SF, UCIQE and BRISQUE are the best in Figure 6f.
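The UCIQE scores in Tables 1–4 follow the metric of Yang and Sowmya [41]: a weighted sum of the chroma standard deviation, luminance contrast and average saturation in CIELab with weights c1 = 0.4680, c2 = 0.2745 and c3 = 0.2576. The sketch below is a rough numpy rendering; the scaling of the three terms and the exact saturation definition are assumptions of this sketch and may not reproduce the table values.

```python
import numpy as np

def rgb_to_lab(rgb):
    """sRGB (floats in [0, 1]) -> CIELab, D65 white point."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma, then map linear RGB to XYZ.
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])  # normalize by white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return L, a, b

def uciqe(rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    """Weighted sum of chroma std, luminance contrast and mean saturation.
    The [0, 1]-ish rescaling of each term is an assumption of this sketch."""
    L, a, b = rgb_to_lab(rgb)
    chroma = np.sqrt(a ** 2 + b ** 2) / 100.0
    sigma_c = chroma.std()
    con_l = (np.percentile(L, 99) - np.percentile(L, 1)) / 100.0
    mu_s = np.mean(chroma / (L / 100.0 + 1e-6))  # saturation ~ chroma / luminance
    return float(c1 * sigma_c + c2 * con_l + c3 * mu_s)
```

A flat gray image scores near zero on all three terms, while a colorful, high-contrast restoration scores higher, which matches how the metric is used in the tables.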
Figure 9. The AG values of the different algorithms for (a) Figure 5a, (b) Figure 6a, (c) Figure 7a and (d) Figure 8a.

Figure 10. The BRISQUE values of the different algorithms for (a) Figure 5a, (b) Figure 6a, (c) Figure 7a and (d) Figure 8a.


4.3. Full Reference Image Restoration Effect Evaluation

In this section, since there is no ground-truth database available for underwater images, to evaluate the performance of the proposed approach more objectively, we quantitatively analyzed the resilience of the algorithm by using synthetic images. Figure 11 is the original image, which was obtained in sunny weather.

Figure 11. Original image.

According to [15] and the actual underwater situation, four different background lights are set: B1 = [0.35, 0.51, 0.28], B2 = [0.17, 0.43, 0.33], B3 = [0.29, 0.21, 0.47] and B4 = [0.12, 0.26, 0.38]. These four different background lights make the histograms of the synthesis images different. The synthesis images and background lights are shown in Figure 12. Figure 12a–d are the synthesis images in which the background lights are B1, B2, B3 and B4, respectively. Each group of images in Figure 12, from left to right, shows the color of the background light and the synthesis image. Moreover, the PSNR and SSIM are shown in Table 5.
Figure 12. (a) Synthesis image by B1, (b) synthesis image by B2, (c) synthesis image by B3 and (d) synthesis image by B4.

Table 5. Full reference evaluation.

 PSNR SSIM
WCID 19.34 0.814
ARUIR 21.91 0.836
GIF 21.16 0.827
ULAP 20.47 0.833
Ours 22.74 0.849

From Table 5, the best PSNR is that of the restored images of the proposed algorithm, which shows that the algorithm can effectively restore the intensity of the image. As can also be seen from Table 5, the proposed restoration algorithm achieves the best performance among the compared methods in terms of the SSIM.
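For completeness, the PSNR follows directly from the MSE, and a single-window (global) form of the SSIM statistic can be written in a few lines. Note that the standard SSIM averages this statistic over local Gaussian windows, so the sketch below is a simplification of the metric reported in Table 5.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, test, data_range=1.0):
    """Single-window SSIM; the standard metric averages an 11x11
    Gaussian-windowed version of this statistic over the image."""
    x, y = np.asarray(ref, float), np.asarray(test, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Both metrics compare a restored image against the known clean image, which is exactly what the synthetic images of Figure 12 make possible.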
5. Conclusions

A novel underwater image restoration algorithm is proposed in this paper based on the DCP and Yin–Yang pair optimization. The algorithm is composed of four important parts: the transmission-map-optimization model combining the first-order and high-order variational models, the adaptive parameter selection method, the solution of the transmission-map-optimization model via the ADMM, and the red channel transmittance map and background light estimator. The algorithm was executed on a set of representative real and synthesized underwater images, which demonstrates that it can enhance the detailed texture features of the image while suppressing background noise. Moreover, a large number of qualitative and quantitative experimental comparison results further ensure that the recovered underwater images have better quality than those of other works. In addition, completely discarding the red channel when calculating the transmittance maps of the blue and green channels may cause it to be overcompensated. We need to find a way to solve this problem by considering the red channel in the future.

Author Contributions: All authors contributed substantially to this study. Individual contributions were: conceptualization, K.Y. and K.Z.; methodology, Y.C., Y.L. (Yufang Liu) and K.Y.; software, Y.C., K.Z. and Y.L. (Yanlei Liu); validation, Y.C., K.Y. and L.L.; formal analysis, Y.C.; investigation, L.L. and K.Y.; resources, K.Y., Y.L. (Yanlei Liu) and K.Z.; data curation, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, Y.C. and K.Y.; visualization, Y.C.; supervision, K.Z. and Y.L. (Yufang Liu); project administration, K.Y.; funding acquisition, K.Y. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Natural Science Foundation of China (62075058), the Outstanding Youth Foundation of Henan Normal University (20200171), the Key Scientific Research Project of Colleges and Universities in Henan Province (22A140021), the 2021 Scientific Research Project for Postgraduates of Henan Normal University (YL202101) and the Natural Science Foundation of Henan Province (Grant Nos. 222300420011, 222300420209).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The publicly archived datasets of the underwater image data used in this paper are derived from the websites: https://li-chongyi.github.io/proj_benchmark.html (accessed on 15 August 2021) and https://github.com/dlut-dimt/RealworldUnderwater-Image-Enhancement-RUIE-Benchmark (accessed on 15 August 2021).

Conflicts of Interest: The authors declare no conflict of interest.

References
1. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color Balance and Fusion for Underwater Image Enhancement. IEEE
Trans. Image Process. 2018, 27, 379–393. [CrossRef] [PubMed]
2. Drews, P., Jr.; do Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In
Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013.
3. Amer, K.O.; Elbouz, M.; Alfalou, A.; Brosseau, C.; Hajjami, J. Enhancing underwater optical imaging by using a low-pass
polarization filter. Opt. Express 2019, 27, 621–643. [CrossRef] [PubMed]
4. Boffety, M.; Galland, F.; Allais, A.G. Influence of Polarization Filtering on Image Registration Precision in Underwater Conditions.
Opt. Lett. 2012, 37, 3273–3275. [CrossRef] [PubMed]
5. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25,
713–724. [CrossRef]
6. Lu, H.; Li, Y.; Xu, X.; Li, J.; Liu, Z.; Li, X.; Yang, J.; Serikawa, S. Underwater image enhancement method using weighted guided
trigonometric filtering and artificial light correction. J. Vis. Commun. Image Represent. 2016, 38, 504–516. [CrossRef]
7. Ulutas, G.; Ustubioglu, B. Underwater image enhancement using contrast limited adaptive histogram equalization and layered
difference representation. Multimed. Tools Appl. 2021, 80, 15067–15091. [CrossRef]
8. Ancuti, C.; Ancuti, C.O.; Haber, T. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE
Conference on Computer Vision & Pattern Recognition, Providence, RI, USA, 16–21 June 2012.
9. Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE
Signal Proc. Let. 2018, 25, 323–327. [CrossRef]
10. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33,
2341–2353.
11. Peng, Y.T.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image
Process. 2017, 26, 1579–1594. [CrossRef]
12. Yu, H.; Li, X.; Lou, Q.; Lei, C.; Liu, Z. Underwater image enhancement based on DCP and depth transmission map. Multimed.
Tools Appl. 2020, 79, 27–28. [CrossRef]
13. Galdran, A.; Pardo, D.; Picon, A.; Alvarez-Gila, A. Automatic Red-Channel underwater image restoration. J. Vis. Commun. Image
Represent. 2015, 26, 132–145. [CrossRef]
14. Li, C.; Quo, J.; Pang, Y.; Chen, S.; Jian, W. Single underwater image restoration by blue-green channels dehazing and red channel
correction. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
Shanghai, China, 20–25 March 2016.
15. Gao, Y.; Li, H.; Wen, S. Restoration and Enhancement of Underwater Images Based on Bright Channel Prior. Math. Probl. Eng.
2016, 2016, 3141478. [CrossRef]
16. Yang, H.Y.; Chen, P.Y.; Huang, C.C.; Zhuang, Y.Z.; Shiau, Y.H. Low Complexity Underwater Image Enhancement Based on
Dark Channel Prior. In Proceedings of the 2011 Second International Conference on Innovations in Bio-Inspired Computing and
Applications, Shenzhen, China, 16–18 December 2011.
17. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process.
2018, 27, 2856–2868. [CrossRef]
18. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [CrossRef]
19. Hou, G.; Pan, Z.; Wang, G.; Yang, H.; Duan, J. An efficient nonlocal variational method with application to underwater image
restoration. Neurocomputing 2019, 369, 106–121. [CrossRef]
20. Song, M.; Qu, H.; Zhang, G.; Tao, S.; Jin, G. A Variational Model for Sea Image Enhancement. Remote Sens. 2018, 10, 1313.
[CrossRef]
21. Tan, L.; Liu, W.; Pan, Z. Color image restoration and inpainting via multi-channel total curvature. Appl. Math. Model. 2018, 61,
280–299. [CrossRef]
22. Liu, J.; Ma, R.; Zeng, X.; Liu, W.; Wang, M.; Chen, H. An efficient non-convex total variation approach for image deblurring and
denoising. Appl. Math. Comput. 2021, 397, 259–268. [CrossRef]
23. Hou, G.; Li, J.; Wang, G.; Pan, Z.; Zhao, X. Applications, Underwater image dehazing and denoising via curvature variation
regularization. Multimed. Tools Appl. 2020, 79, 20199–20219. [CrossRef]
24. Hou, G.; Li, J.; Wang, G.; Yang, H.; Huang, B.; Pan, Z. A novel dark channel prior guided variational framework for underwater
image restoration. J. Vis. Commun. Image Represent. 2020, 66, 102732. [CrossRef]
25. Liao, H.; Li, F.; Ng, M.K. Selection of regularization parameter in total variation image restoration. J. Opt. Soc. Am. A 2009, 26, 2311–2320. [CrossRef] [PubMed]

26. Langer, A. Automated Parameter Selection for Total Variation Minimization in Image Restoration. J. Math. Imaging Vis. 2016, 57,
239–268. [CrossRef]
27. Wen, Y.W.; Chan, R.H. Parameter selection for total-variation-based image restoration using discrepancy principle. IEEE Trans.
Image Process. 2012, 21, 1770–1781. [CrossRef]
28. Chen, A.Z.; Huo, X.M.; Wen, Y.W. Adaptive regularization for color image restoration using discrepancy principle. In Proceedings
of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Kunming, China,
5–8 August 2013.
29. Ma, T.H.; Huang, T.Z.; Zhao, X.L. New Regularization Models for Image Denoising with a Spatially Dependent Regularization
Parameter. Abstr. Appl. Anal. 2013, 2013, 729151. [CrossRef]
30. Wen, H.; Tian, Y.; Huang, T.; Guo, W. Single underwater image enhancement with a new optical model. In Proceedings of the
2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013.
31. Barros, W.; Nascimento, E.R.; Barbosa, W.V.; Campos, M.F.M. Single-shot underwater image restoration: A visual quality-aware
method based on light propagation model. J. Vis. Commun. Image Represent. 2018, 55, 363–373. [CrossRef]
32. Yang, M.; Sowmya, A.; Wei, Z.; Zheng, B. Offshore Underwater Image Restoration Using Reflection Decomposition Based
Transmission Map Estimation. IEEE J. Ocean. Eng. 2020, 45, 521–533. [CrossRef]
33. Punnathanam, V.; Kotecha, P. Yin-Yang-pair Optimization: A novel lightweight optimization algorithm. Eng. Appl. Artif. Intell.
2016, 54, 62–79. [CrossRef]
34. Punnathanam, V.; Kotecha, P. Multi-objective optimization of Stirling engine systems using Front-based Yin-Yang-Pair Optimiza-
tion. Energy Convers. Manag. 2017, 133, 332–348. [CrossRef]
35. Yang, B.; Yu, T.; Shu, H.; Zhu, D.; Zeng, F.; Sang, Y.; Jiang, L. Perturbation observer based fractional-order PID control of
photovoltaics inverters for solar energy harvesting via Yin-Yang-Pair optimization. Energy Convers. Manag. 2018, 171, 170–187.
[CrossRef]
36. Song, D.; Liu, J.; Yang, J.; Su, M.; Wang, Y.; Yang, X.; Huang, L.; Joo, Y.H. Optimal design of wind turbines on high-altitude sites
based on improved Yin-Yang pair optimization. Energy 2020, 193, 497–510. [CrossRef]
37. Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean
Eng. 2015, 94, 163–172. [CrossRef]
38. Jiao, Q.; Liu, M.; Li, P.; Dong, L.; Hui, M.; Kong, L.; Zhao, Y. Underwater Image Restoration via Non-Convex Non-Smooth
Variation and Thermal Exchange Optimization. J. Mar. Sci. Eng. 2021, 9, 570. [CrossRef]
39. Chiang, J.Y.; Chen, Y.C. Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans. Image Process.
2012, 21, 1756–1769. [CrossRef] [PubMed]
40. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A Rapid Scene Depth Estimation Model Based on Underwater Light Attenuation Prior for Underwater Image Restoration. In Proceedings of the Advances in Multimedia Information Processing—PCM 2018, Hefei, China, 21–22 September 2018.
41. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 62–71.
[CrossRef] [PubMed]
42. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process.
2012, 21, 4695–4708. [CrossRef]
