Underwater Image Dehazing Using A Novel Color Channel Based Dual Transmission Map Estimation
https://doi.org/10.1007/s11042-023-15708-z
Xiaohong Yan1,2 · Guangyuan Wang1 · Peng Lin1 · Junbo Zhang1 · Yafei Wang1 ·
Xianping Fu1,3
Abstract
Underwater images typically suffer from color distortion and low contrast owing to light absorption and scattering. To improve the visual appearance of such degraded images, numerous underwater image dehazing algorithms have been presented. However, most existing dehazing methods still have room for improvement in preserving detail in the restored results. In this paper, we propose a simple yet effective dehazing method whose core idea is to produce high-quality underwater images with rich detail and vivid color. First, an innovative color correction is designed to compensate for the attenuation of each color component. This pre-processing step fully accounts for selective absorption. Then a dual transmission map-based haze removal method is introduced. Different from previous methods, a novel color channel with two terms is constructed to accurately estimate the transmission maps: a sharpening term that reveals more image detail and edges, and a difference of channel intensity prior term that removes the influence of light scattering. With this strategy, our method generates restored images with a natural appearance, more detail, and higher color contrast. Experiments on representative images show that our method outperforms the second-best method by 8.6% and 2.7% on the average scores of the patch-based contrast quality index (PCQI) and Entropy metrics, respectively.
Corresponding author: Xianping Fu
fxp@dlmu.edu.cn
Xiaohong Yan
dl_yanxiaohong@163.com
1 Information Science and Technology School, Dalian Maritime University, Dalian 116026, China
2 School of Software, Dalian Jiaotong University, Dalian 116028, China
3 Pengcheng Laboratory, Shenzhen, Guangdong 518055, China
1 Introduction
Images acquired in harsh underwater scenes are usually characterized by insufficient visibility due to light attenuation (i.e., absorption and scattering). Absorption leads to color distortion, such as greenish or bluish casts, depending on wavelength and scene depth. Scattering changes the direction of light propagation, resulting in low-contrast and blurry images. Such degraded images perform poorly in fundamental computer vision tasks, e.g., detection [21, 43], compression [58], segmentation [25, 51], and adaptive tracking control [29, 30]. There is therefore an urgent need for dehazing techniques that improve the visual quality of underwater images, which is challenging owing to the medium attenuation properties and complex illumination.
To deal with these issues, a series of methods based on the image formation model (IFM) [10, 36] have been introduced. Mathematically, the simplified IFM is defined as:

I_c(x) = J_c(x) t_c(x) + B_c (1 − t_c(x)), c ∈ {r, g, b}   (1)

where I_c(x) is the observed intensity in color channel c at pixel x, J_c(x) is the scene radiance, B_c is the background light (BL), and t_c(x) is the transmission map (TM). In (1), the TM represents the portion of the scene radiance that is neither absorbed nor scattered. Broadly speaking, the TM implicitly reflects the degradation degree of an underwater image.
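For concreteness, the following minimal numpy sketch renders Eq. (1); the function name and array conventions are ours, not the paper's:

```python
import numpy as np

def apparent_image(J, t, B):
    """Simplified IFM of Eq. (1): I_c(x) = J_c(x) t_c(x) + B_c (1 - t_c(x)).

    J : (H, W, 3) scene radiance in [0, 1]
    t : (H, W, 3) per-channel transmission map in (0, 1]
    B : (3,)      background light per channel
    """
    return J * t + B * (1.0 - t)  # broadcasting applies B_c channel-wise
```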
To estimate the derived parameters (BL and TM) in (1), He et al. [18] first proposed the dark channel prior (DCP). This work has motivated various image dehazing methods [5] that modify the DCP [18] for the underwater scenario. Instead of the DCP, other single image dehazing techniques rely on different priors. For example, Carlevaris-Bianco et al. [4] designed a maximum intensity prior (MIP) to estimate scene depth; the MIP exploits the large difference in attenuation among the R-G-B color channels of an underwater image. Zhu et al. [62] proposed a color attenuation prior to restore the scene radiance of a hazy image. Despite these valuable achievements, such methods still exhibit several disadvantages:
(1) Traditional underwater image dehazing methods rarely take into account the wavelength- and scene-depth-dependent absorption, i.e., selective absorption. However, absorption in natural water generally plays a crucial role [1, 12].
(2) Most methods [57, 60] apply post-processing steps to improve image detail. This strategy can achieve comparable performance, but it may disrupt the structure of the target input and produce unnatural results.
To solve the above-mentioned problems, we propose a novel haze removal method. First, an innovative color correction technique is presented as a pre-processing procedure to compensate for the attenuation of the R-G-B color components. Second, we calculate the global background light from a local region of the color corrected image. Third, a dual transmission map-based haze removal technique is introduced. Finally, the restored image is calculated by inverting the simplified IFM. The proposed method can effectively remove color distortion and improve detail, as shown in Fig. 1: the dehazed image contains far more visible edges than the original image. Put differently, the effectiveness of the proposed method is further demonstrated by the application of visible edge detection [17]. Overall, the specific contributions can be summarized as follows:
• We propose a new restoration method to overcome the problem of underwater image degradation. Color correction is first utilized to establish the foundation for the subsequent dehazing step, in which a novel technique removes haze based on dual transmission map estimation. Experimental results demonstrate that the proposed method can effectively remove color distortion and enhance visual detail (see Fig. 1).

Fig. 1 Example of a dehazed underwater image yielded by the proposed method. (a) Original image, (b) the edge map of (a), (c) dehazed image, (d) the edge map of (c). The values (2264 for the original and 39088 for the dehazed image) represent the number of visible edges (VE) [17]
• We propose a novel color correction method to eliminate the color distortion of real-world underwater images. Our method considers the selective light absorption that most existing techniques have not taken seriously. Gain components are introduced to compensate for the light attenuation by calculating the differences between the reference color channel and the degraded channels.
• To remove haze from underwater images, a novel color channel based dual transmission map estimation method is proposed, in which two terms model this channel. For the first time, a detail-sharpening term is introduced to effectively counteract the deterioration of image edges. The other term is a difference of channel intensity prior term that mitigates the influence of light scattering. With the power of this channel, more detail is unveiled in the restored images.
The remainder of this paper is organized as follows: Section 2 introduces the work related
to underwater dehazing. The proposed method is described in Section 3. Section 4 evaluates
and compares experimental results. Section 5 concludes the paper.
2 Related work
Image dehazing methods fall into two groups: learning-based methods [28, 32] and traditional methods [44, 53]. Generally, the traditional methods can be further categorized into underwater image enhancement methods [15] and underwater image restoration methods [34].
With the popularity of the Graphics Processing Unit (GPU), deep learning approaches [41, 48, 49] have become the most advanced solution and perform strongly across computer vision. For example, Li et al. [24] synthesized underwater images from in-air color images by an unsupervised pipeline; these images served as inputs to a two-stage strategy for removing color distortion. Fabbri et al. [9] designed an underwater generative adversarial network (UGAN) to improve the visual quality of underwater scenes; this method used the cycle generative adversarial network (CycleGAN) [61] as a pre-processing step to obtain synthetic underwater images. Islam et al. [35] introduced a fully-convolutional conditional GAN-based model for real-time underwater image enhancement, called FUnIEGAN, whose training data was generated via the strategy proposed by Fabbri et al. [9]. Recently, Guo et al. [16] presented a multiscale dense generative adversarial network (DenseGAN) that applies dense connections, residual learning, and a multi-scale network to improve underwater image enhancement. Li et al. [28] proposed an underwater image enhancement convolutional neural network based on an underwater image prior, named UWCNN. In 2021, Li et al. [27] proposed a deep learning-based model guided by the medium transmission map; it fuses key features of the raw image in different color spaces to improve the quality of underwater images.
In recent years, various image enhancement methods have been proposed to obtain haze-free underwater images. For example, Fu et al. [13] introduced a retinex-based method to enhance underwater images. Zhang et al. [47] proposed an extended multi-scale retinex-based model in which bilateral and trilateral filters were applied in the CIEL∗a∗b∗ color space. Ancuti et al. [2] compensated for the attenuation by blending multiple images derived from a color corrected image in a fusion strategy. Zhuang et al. [63] integrated gradient domain guided image filtering priors into a retinex-based technique. Recently, Zhang et al. [59] applied a bi-interval histogram and an s-shaped function to degraded underwater images. Ding et al. [6] proposed a scene-depth-regularized method to enhance underwater images. Ulutas et al. [45] first used global contrast enhancement and a local technique to generate dehazed images, then applied color compensation to obtain the final results.
Most underwater image restoration methods are based on the IFM [10, 36] and calculate the derived parameters: background light (BL) and transmission map (TM). Without additional information, BL and TM are difficult to estimate; thus, the dark channel prior (DCP) [18] was introduced to remove haze. For a target input I_c, the DCP is defined as:

I_rgb^dark(x) = min_{y∈Ω(x)} min_c I_c(y)   (2)

where Ω(x) denotes a local patch centered at x. To calculate the BL, the top 0.1% brightest pixels are selected from I_rgb^dark; these pixels are mapped back to I_c, and the highest-intensity pixel is taken as the BL.
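A hedged Python sketch of Eq. (2) and this BL selection rule follows; the patch size default and the tie-breaking by channel sum are our assumptions:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Eq. (2): minimum over channels, then over a local patch Omega(x)."""
    return minimum_filter(I.min(axis=2), size=patch)

def estimate_bl_dcp(I, patch=15):
    """Map the top 0.1% brightest dark-channel pixels back to I and take the
    highest-intensity one as the BL (intensity measured here as the channel sum)."""
    dark = dark_channel(I, patch)
    n = max(1, int(0.001 * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]        # top 0.1% positions
    candidates = I.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]  # brightest candidate as B_c
```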
To estimate the TM, divide both sides of (1) by B_c and apply the minimum operation:

min_{y∈Ω(x)} min_c I_c(y)/B_c = t_c(x) min_{y∈Ω(x)} min_c J_c(y)/B_c + 1 − t_c(x)   (3)

For a haze-free image, J_rgb^dark is generally close to 0, that is:

J_rgb^dark(x) = min_{y∈Ω(x)} min_c J_c(y) = 0   (4)

Substituting (4) into (3) and solving for the transmission gives:

t_c(x) = 1 − min_{y∈Ω(x)} min_c I_c(y)/B_c   (5)
To preserve the naturalness of the image, a small amount of haze is retained in the restored underwater image J_c by introducing a constant ω. Therefore, (5) is rewritten as:

t_c(x) = 1 − ω min_{y∈Ω(x)} min_c I_c(y)/B_c   (6)

Finally, with the estimated B_c and t_c(x), the scene radiance is recovered as:

J_c(x) = (I_c(x) − B_c) / max(t_c(x), t_0) + B_c   (7)

where t_0 is a lower bound on the transmission that controls the exposure of J_c.
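The two steps (6)-(7) translate directly into code. The sketch below assumes a single transmission map shared by the three channels and uses ω = 0.95, the typical choice in [18]:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_transmission(I, B, omega=0.95, patch=15):
    """Eq. (6): t(x) = 1 - omega * min_{y in Omega(x)} min_c I_c(y) / B_c."""
    return 1.0 - omega * minimum_filter((I / B).min(axis=2), size=patch)

def recover_radiance(I, t, B, t0=0.1):
    """Eq. (7): J_c(x) = (I_c(x) - B_c) / max(t(x), t0) + B_c."""
    return (I - B) / np.maximum(t, t0)[..., None] + B
```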
Many methods [20] build on the DCP. For example, Drews-Jr et al. [7] applied the DCP to the green and blue channels to estimate the transmission for underwater scenes. Galdran et al. [14] developed a red channel prior (RCP) to restore underwater images, taking into account the colors associated with long wavelengths, i.e., the red channel. Li et al. [23] introduced an underwater image dehazing algorithm based on the minimum information loss principle and a histogram distribution prior. Wang et al. [50] proposed an adaptive attenuation-curve prior for underwater image dehazing. Peng et al. [38] presented a generalized dark channel prior (GDCP) for restoring degraded underwater images. Song et al. [42] proposed a scene depth estimation model based on an underwater light attenuation prior (ULAP), trained with supervised linear regression. Recently, Yu et al. [57] presented an effective pipeline that enhances underwater images using homomorphic filtering, a double transmission map, and dual-image wavelet fusion.
These methods have achieved remarkable success in the image dehazing field. However, defects remain in the estimation of the TM, which cause their results to look unnatural. To address these issues, we propose an effective underwater image dehazing method, described in detail in Section 3.
3 Proposed method
In this section, we describe the proposed method, whose goal is to generate a haze-free underwater image with good visibility. The framework of the proposed method is shown in Fig. 2.
Fig. 2 The framework of the proposed method: the R, G, B channels are color corrected, the background light is estimated, dual transmission maps are computed with 3 × 3 and 15 × 15 patches together with the corresponding saturation maps, and the results are refined into the final map t_f
3.1 Color correction

Due to complex light sources and medium attenuation properties, underwater images are usually characterized by color distortion. For example, in Fig. 3(a), the red channel, having the longest wavelength, is attenuated faster than the blue and green ones, while the blue channel is well preserved (see Fig. 3(b)). In contrast, for Fig. 3(c), the blue channel, having the shortest wavelength, may also be attenuated owing to the absorption of organic matter in sea water; the green channel is relatively well preserved compared with the red and blue ones (see Fig. 3(d)). To alleviate some of the color distortion, several traditional color correction techniques based on linear stretching have been proposed. However, most of these methods do not work well (see Figs. 4 and 5) because they do not fully consider the unbalanced attenuation caused by selective absorption. To solve these issues, we propose an effective color correction method.
For an underwater image I_c, c ∈ {r, g, b}, we first estimate a reference map I_θ(x) with well-preserved information, where I_θ(x) = max_c(I_c(x)). Then, to adaptively compensate for the loss of the degraded channels, a gain component g_c(x) is proposed:

g_c(x) = (1 − Ī_c) ((I_θ(x) − I_c(x)) / I_c(x))^β   (8)

where Ī_c is the mean value of color component I_c and β is set to 0.7. Finally, a linear operation combining I_c and g_c is applied to each color channel to obtain the color corrected image I_c^CC.
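The sketch below implements the gain of Eq. (8); since the exact linear operation is not reproduced in this text, the final recombination I_c(1 + g_c) is only an assumed placeholder:

```python
import numpy as np

def color_correct(I, beta=0.7, eps=1e-6):
    """Gain-based color compensation sketch for an RGB image in [0, 1]."""
    I_theta = I.max(axis=2, keepdims=True)                   # reference map I_theta(x)
    I_bar = I.mean(axis=(0, 1), keepdims=True)               # per-channel mean I_bar_c
    g = (1.0 - I_bar) * ((I_theta - I) / (I + eps)) ** beta  # Eq. (8)
    return np.clip(I * (1.0 + g), 0.0, 1.0)                  # assumed linear recombination
```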
Fig. 3 Examples of underwater images. (a) and (c) Original underwater images, (b) and (d) the histogram distributions (frequency versus grey level) of (a) and (c), respectively
Fig. 4 Comparisons of several color correction methods (Grey Edge [52], Grey World [3], Max RGB [22], Shades of Grey [11], Grey Pixel [55], and Color Constancy [54])
Fig. 5 Comparisons of several color correction methods (Grey Edge [52], Grey World [3], Max RGB [22], Shades of Grey [11], Grey Pixel [55], and Color Constancy [54]). The color checker is used as a reference for calculating CIEDE2000 [40]. The range of CIEDE2000 is [0, 100]; a smaller value indicates a smaller color difference, i.e., better correction. The quantitative results are provided in Table 1
3.2 Dual transmission map-based haze removal

The color correction procedure described in Section 3.1 can eliminate color distortion to some extent. However, the color corrected image still shows low visibility caused by light scattering (see Fig. 7). In this section, we propose an effective restoration method to generate haze-free underwater images from the color corrected versions.
By referring to (1), the color corrected underwater image I_c^CC can be represented as:

I_c^CC(x) = J_c(x) t_c(x) + B_c (1 − t_c(x))   (10)
Fig. 6 The effect of the parameter β. (a) Original image, (b)-(d) the color corrected images with three different values of β. Setting β = 0.7 produces the most satisfactory images in practice
3.2.1 Background light estimation

It is evident from Section 2 (especially (2)-(5)) that existing DCP-based techniques use a global strategy to estimate the BL, B_c. For underwater images, these techniques show limitations due to the presence of various light sources. Thus, we estimate the BL from a local region. Similar to [39], the color corrected image I_c^CC is divided into N blocks of size 32 × 32. To find a smooth background block, the score of each block is calculated as the average pixel value minus the standard deviation of the pixel values within the block, and the candidate block is the one with the highest score. The top 5% brightest pixels in the dark channel of the candidate block are then selected to alleviate the effect of floating particles, and these pixels are mapped back to the color corrected image. Finally, the mean value of the selected pixels is taken as the background light B_c. Figure 8 shows two examples of the proposed BL estimation method; our method effectively calculates the BL from the color corrected images.
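A sketch of this block-based BL estimation could read as follows; the function name is ours, and image borders not divisible by the block size are simply ignored:

```python
import numpy as np

def estimate_bl_local(I_cc, block=32, top=0.05):
    """Score = block mean - block std; average the top 5% brightest dark-channel
    pixels of the best block, mapped back to the color corrected image."""
    H, W, _ = I_cc.shape
    best, best_score = None, -np.inf
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            blk = I_cc[r:r + block, c:c + block]
            score = blk.mean() - blk.std()        # smooth, bright blocks score high
            if score > best_score:
                best, best_score = blk, score
    dark = best.min(axis=2).ravel()               # dark channel of the candidate block
    n = max(1, int(top * dark.size))
    idx = np.argpartition(dark, -n)[-n:]          # top 5% brightest dark-channel pixels
    return best.reshape(-1, 3)[idx].mean(axis=0)  # mean as B_c
```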
3.2.2 Transmission map estimation

After calculating B_c, the visual quality of the dehazed underwater image depends mainly on the estimation of the TM. In view of the similarities between outdoor terrestrial images and real-world underwater images, we use the DCP [18] to estimate the TM. By referring to (6), the TM t_c(x) is rewritten as:

t_c(x) = 1 − ω min_{y∈Ω(x)} min_c I_c^CC(y)/B_c   (11)

where c ∈ {r, g, b} and ω, empirically set to 0.85, maintains a small amount of haze in the restored image. However, the TM estimated via (11) is not always accurate.
Fig. 8 Examples of BL estimation. (a) Color corrected images, (b) estimated background light
Inspired by [39], we design a novel color channel φ(x) to estimate the TM, defined as:

φ(x) = L(x) + (L(x) − G(x) ∗ L(x)) − δ   (12)

where L(x) = 0.299 I_r^CC(x) + 0.587 I_g^CC(x) + 0.114 I_b^CC(x), δ = max_c I_c^CC(x) − min_c I_c^CC(x), G(x) is a two-dimensional Gaussian kernel with σ = 0.5 and a size of 8 × 8 pixels, and ∗ denotes the convolution operator.
The motivation behind (12) is as follows. Removing haze from a single image comes at the cost of losing detail, so L(x) + (L(x) − G(x) ∗ L(x)) is used as a sharpening term to compensate for this loss. Moreover, the background region of an image is inhomogeneous owing to differences in light scattering angles, so δ is used as a difference of channel intensity prior term to eliminate the influence of light scattering.
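A sketch of Eq. (12) follows; RGB channel order is assumed, and scipy's isotropic `gaussian_filter` stands in for the paper's 8 × 8 Gaussian kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_channel_phi(I_cc):
    """Eq. (12): luminance plus an unsharp-mask detail term, minus the
    per-pixel difference of channel intensities delta."""
    L = 0.299 * I_cc[..., 0] + 0.587 * I_cc[..., 1] + 0.114 * I_cc[..., 2]
    blurred = gaussian_filter(L, sigma=0.5)      # approximates G(x) * L(x)
    delta = I_cc.max(axis=2) - I_cc.min(axis=2)  # channel-intensity difference
    return L + (L - blurred) - delta
```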
Thus, referring to (11), the TM is derived as:

t_φ(x) = 1 − ω min_{y∈Ω(x)} min_c φ(y)/B_c   (13)

where Ω(x) denotes a local patch centered at x. To generate natural haze-free underwater images, we apply 3 × 3 and 15 × 15 as the smaller and larger patch sizes, respectively; in other words, dual TMs t_φ^{3×3}(x) and t_φ^{15×15}(x) are estimated from (13). Furthermore, we present a saturation map S(x), defined as:

S(x) = (max_{y∈Ω} I_c^CC(y) − min_{y∈Ω} I_c^CC(y)) / max_{y∈Ω} I_c^CC(y)   (14)
where S(x) is used to mitigate the effect of bright elements, e.g., artificial light (AL). In the HSV color model, the dark regions of an underwater image often appear fully saturated; conversely, a lack of saturation in an area indicates strong white light. To express this observation, a lower-bound TM t_S(x) is readily available from (15) as:

t_S(x) = 1 − S(x)   (15)

where t_S(x) stands for t_S^{3×3}(x) and t_S^{15×15}(x). After this, the dual rough TMs t_κ^{3×3}(x) and t_κ^{15×15}(x) are obtained by:

t_κ^{3×3}(x) = max(t_φ^{3×3}(x), t_S^{3×3}(x))   (16)

t_κ^{15×15}(x) = max(t_φ^{15×15}(x), t_S^{15×15}(x))   (17)
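Equations (13)-(17) can be sketched for one patch size as below; evaluating it at 3 × 3 and 15 × 15 yields the dual rough TMs (the small epsilon guarding the division is our addition):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def rough_tm(phi, I_cc, B, patch, omega=0.85):
    """t_kappa = max(t_phi, t_S) for a single patch size (Eqs. (13)-(17))."""
    ratio = phi[..., None] / B                            # phi(y) / B_c per channel
    t_phi = 1.0 - omega * minimum_filter(ratio, size=(patch, patch, 1)).min(axis=2)
    mx = maximum_filter(I_cc.max(axis=2), size=patch)     # max over c and Omega
    mn = minimum_filter(I_cc.min(axis=2), size=patch)     # min over c and Omega
    t_S = 1.0 - (mx - mn) / np.maximum(mx, 1e-6)          # Eqs. (14)-(15)
    return np.maximum(t_phi, t_S)                         # Eqs. (16)-(17)

# Dual rough TMs:
# t_k3, t_k15 = rough_tm(phi, I_cc, B, 3), rough_tm(phi, I_cc, B, 15)
```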
3.2.3 Radiance

Finally, the restored image J_c(x) is calculated using the background light B_c and the refined transmission map t_f(x). By referring to (7), J_c(x) is defined as:

J_c(x) = (I_c^CC(x) − B_c) / max{t_f(x), t_0} + B_c   (20)

where t_0 is set to 0.1 to prevent the denominator from approaching zero. For clarity, the detailed steps of the proposed method are outlined in Algorithm 1.
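Since Algorithm 1 itself is not reproduced here, the driver below only sketches how the pieces from the earlier sketches compose; in particular, the refinement of the dual rough TMs into t_f is replaced by an assumed average-and-smooth step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dehaze(I, beta=0.7, omega=0.85, t0=0.1):
    """End-to-end sketch assembling the helpers defined earlier."""
    I_cc = color_correct(I, beta)                       # Section 3.1
    B = estimate_bl_local(I_cc)                         # Section 3.2.1
    phi = color_channel_phi(I_cc)                       # Eq. (12)
    t_k3 = rough_tm(phi, I_cc, B, patch=3, omega=omega)
    t_k15 = rough_tm(phi, I_cc, B, patch=15, omega=omega)
    t_f = gaussian_filter(0.5 * (t_k3 + t_k15), sigma=2.0)  # assumed stand-in for the refinement
    return np.clip((I_cc - B) / np.maximum(t_f, t0)[..., None] + B, 0.0, 1.0)  # Eq. (20)
```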
4 Experimental results

In this section, experiments are executed to examine the effectiveness of the proposed method. Test images are taken from the underwater image enhancement benchmark dataset (UIEBD)
Fig. 9 Dehazed results yielded by the proposed method. From left to right, the original images from the five subsets A-E are ranked according to a non-reference underwater image quality evaluation metric (UCIQE) [56], and the corresponding image quality successively decreases [31]
[26] and the underwater image quality set (UIQS) [31]. The UIEBD contains 890 real-world underwater images and 60 challenging images; its raw images were collected from the Internet and cover multiple color ranges and levels of contrast reduction. The underwater images in UIQS were captured by the authors at the First Underwater Robot Picking Contest, Zhangzidao, Dalian, China. We conduct comparative experiments with several underwater image enhancement/restoration methods, namely DCP [18], GDCP [38], MIP [4], RCP [7], FUnIEGAN [35], UGAN [8], ULAP [42], and UWCNN [28].
The proposed method is implemented in MATLAB R2020a on a PC with an Intel Core(TM) i7-4710 CPU @ 2.50 GHz and 16 GB RAM.
To evaluate the proposed method's improvement of image visibility, we employ several underwater images captured in shallow water at different quality levels. The corresponding dehazed results are shown in Fig. 9. It can be seen that the proposed
Fig. 10 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
Fig. 11 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
method is able to deal with images in various shallow water conditions. Moreover, thanks to the novel color channel based dual transmission map estimation, the proposed method clearly unveils hidden details.
In the visual comparison, we select seven underwater images captured in different scenes (turbid water and clear open water). Turbid water contains more suspended particles and organic matter, which strongly absorb the shortest wavelengths, resulting in a yellowish or greenish appearance, as shown in Figs. 10, 11, 12, and 13. For these four images, the scene transmissions deduced via DCP [18] and MIP [4] are wrong, and the restored images are similar to the raw inputs. The GDCP [38] method fails to estimate the transmission and background light, leading to unsatisfactory results. Although the RCP [7] method restores the images to a certain extent, it tends to produce unnatural artifacts. The results of ULAP [42] may exhibit color distortion. The FUnIE-GAN [35], UGAN [8], and UWCNN [28] methods fail to improve image contrast and perform poorly on color correction.
In clear open water, visible light with the longest wavelength is absorbed at the highest rate, and scenes appear green-bluish to the eye, as shown in Figs. 14, 15, and 16. Under clear open water, the images restored by the DCP [18] method usually look darker and show a slight color deviation. The results of MIP [4] may look unpleasant owing to inaccurate transmission map estimation. The images dehazed by GDCP [38] look brighter. The ULAP [42] method produces unsatisfying results with a reddish color shift and artifacts. FUnIE-GAN [35], UGAN [8], and UWCNN [28] cannot handle this scene properly.
In summary, the proposed method successfully eliminates the effects of light attenuation and produces high-quality underwater images with higher contrast and more detail.
Additionally, to quantitatively show the superiority of the proposed method, the quality of the restored underwater images is measured using Entropy, the patch-based contrast quality index (PCQI) [46], the underwater image quality measure (UIQM) [37], and the underwater color image quality evaluation (UCIQE) [56]. Entropy estimates the abundance of image information: higher entropy values indicate that more information is included in the image. PCQI models the human perception of contrast variations: the higher the PCQI score, the better the image contrast. PCQI is defined as:

PCQI = (1/M) Σ_{i=1}^{M} q_i(x_i, y_i) q_c(x_i, y_i) q_s(x_i, y_i)   (21)
Fig. 12 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
Fig. 13 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
where M is the total number of patches in the image, and q_i, q_c, and q_s are three comparison functions.
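The Entropy metric is standard Shannon entropy over the grey-level histogram; a minimal sketch for an 8-bit grayscale image could read:

```python
import numpy as np

def image_entropy(gray_u8):
    """Shannon entropy (bits) of an 8-bit grayscale image; higher values
    indicate richer detail information."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```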
The UIQM metric is designed specifically to quantify the colorfulness measure (UICM), sharpness measure (UISM), and contrast measure (UIConM) that characterize underwater images. UIQM is a linear combination:

UIQM = c_1 × UICM + c_2 × UISM + c_3 × UIConM   (22)

where c_1, c_2, and c_3 are scale factors. Following the original work, we set c_1 = 0.0282, c_2 = 0.0953, and c_3 = 3.5753.
Fig. 14 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
Fig. 15 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
The UCIQE takes into account three underwater image quality criteria: the standard deviation of chroma (σ_c), the contrast of luminance (con_l), and the average saturation (μ_s). The UCIQE is defined as:

UCIQE = m_1 × σ_c + m_2 × con_l + m_3 × μ_s   (23)

where m_1, m_2, and m_3 are weighting coefficients, set to the recommended values m_1 = 0.4680, m_2 = 0.2745, and m_3 = 0.2576. A greater value of UIQM or UCIQE means better image quality.
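Both scores are plain weighted sums once their component measures are available; the sketch below only encodes Eqs. (22)-(23) and assumes the components are computed as in [37] and [56]:

```python
def uiqm(uicm, uism, uiconm, c=(0.0282, 0.0953, 3.5753)):
    """Eq. (22): weighted sum of colorfulness, sharpness, and contrast measures."""
    return c[0] * uicm + c[1] * uism + c[2] * uiconm

def uciqe(sigma_c, con_l, mu_s, m=(0.4680, 0.2745, 0.2576)):
    """Eq. (23): weighted sum of chroma std, luminance contrast, and mean saturation."""
    return m[0] * sigma_c + m[1] * con_l + m[2] * mu_s
```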
Fig. 16 Qualitative comparisons of different methods. (a) Original image, (b) DCP, (c) GDCP, (d) MIP, (e)
RCP, (f) FUnIEGAN, (g) UGAN, (h) ULAP, (i) UWCNN, (j) Ours
Tables 2, 3, 4, and 5 give the quantitative evaluations of the nine techniques on the seven real-world underwater images selected above. The outputs of the proposed method are typically better than the others in terms of Entropy, PCQI, and UIQM. The best Entropy and PCQI values indicate that the proposed method remarkably increases the meaningful detail information and improves image contrast. The highest UIQM score means that the proposed method effectively balances the colorfulness, sharpness, and contrast of the output images and produces the most satisfactory results. It is well known that UCIQE does not consider color casts and artifacts; even so, in the UCIQE analysis of Table 5, the proposed method scores slightly higher than the related methods. The quantitative comparison further shows the robustness and effectiveness of the proposed method.
To fairly assess the effect of the proposed method, we also compare it with different methods using the number of visible edges [17], as shown in Figs. 17 and 18. It can be
Fig. 17 Experimental results on detail preservation among the images yielded by different methods. (a) Original image and its visible edge map, (b)-(j) the corresponding results of DCP, GDCP, MIP, RCP, FUnIEGAN, UGAN, ULAP, UWCNN, and Ours, respectively. The values represent the number of visible edges (VE) [17]
Fig. 18 Experimental results on detail preservation among the images yielded by different methods. (a) Original image and its visible edge map, (b)-(j) the corresponding results of DCP, GDCP, MIP, RCP, FUnIEGAN, UGAN, ULAP, UWCNN, and Ours, respectively
observed that the edges of the original underwater images are only sparsely detected, owing to light scattering. For these underwater images, the previous techniques retain haze in the restored results, which is reflected in the visible edge maps. Compared with the
Fig. 19 Local feature point matching. (a) and (c) original underwater images, (b) and (d) our results
Fig. 20 Other degraded images. (a) Original images, (b) restored results yielded by the proposed method
original images, the number of visible edges detected by the compared methods does not increase much. In contrast, the proposed method introduces far more visible edges and effectively restores image details.
We perform local feature point matching to further examine the effectiveness of the proposed method. Here, the scale invariant feature transform (SIFT) operator [33] is applied for keypoint calculation. Figure 19 shows two examples of this application test; each example includes a pair of underwater images and the corresponding restored images generated by our method. For the top pair, the SIFT operator determines 3 valid matches on the raw underwater images, and 19 good matches on the restored underwater images obtained by our method. For the bottom pair, SIFT extracts 2 good matches between the raw images; in contrast, it finds 34 valid matches on the images restored by our method. This application test illustrates that our method performs well when used in computer vision applications.
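Match counts of this kind can be reproduced with OpenCV's SIFT plus Lowe's ratio test [33]; the ratio threshold below is an assumption of this sketch, not a value from the paper:

```python
import cv2

def count_good_sift_matches(img1_bgr, img2_bgr, ratio=0.75):
    """Count SIFT matches between two BGR images that pass the ratio test."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1_bgr, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2_bgr, cv2.COLOR_BGR2GRAY), None)
    if d1 is None or d2 is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
```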
Although the proposed method is designed for underwater scenes, it generalizes to other types of degraded images. As Fig. 20 shows, the proposed method can effectively improve the visibility of sandy and hazy images.
5 Conclusion
In this paper, we propose an efficient method for underwater image dehazing. First, to mitigate the color distortion of the image, we design a novel color correction method in which selective absorption is taken into account. We then estimate the global background light from a local region of the color corrected image. Additionally, a dual transmission map strategy is used: a novel color channel with a sharpening term and a difference of channel intensity prior term is proposed to accurately deduce the transmission map. Finally, the restored image is generated with the estimated background light and transmission map. Experimental results demonstrate that the proposed method achieves better natural visual quality with more valuable information and higher contrast.
Despite its strong performance, the proposed method still has some limitations: it shows weak robustness to underwater images captured with certain FujiFilm camera settings. In future work, we will address this drawback and develop more flexible and effective dehazing methods for complex underwater images, especially those captured in deep water.
Appendix A
See Table 6.
Acknowledgements The authors sincerely thank the editors and anonymous reviewers for the very helpful
and kind comments to assist in improving the presentation of our paper. This work was supported in part by the
National Natural Science Foundation of China under Grant 62176037, Grant 62002043, and Grant 61802043,
by the Liaoning Revitalization Talents Program under Grant XLYC1908007, by the Foundation of Liaoning
Key Research and Development Program under Grant 201801728, by the Dalian Science and Technology
Innovation Fund under Grant 2018J12GX037, Grant 2019J11CY001, and Grant 2021JJ12GX028.
Declarations
Conflict of interest The authors declare that they have no potential conflict of interest.
References
1. Akkaynak D, Treibitz T (2018) A revised underwater image formation model. In 2018 IEEE Conf.
Comput. Vis. Pattern Recognit. (CVPR), pp 6723–6732
2. Ancuti CO, Ancuti C, Vleeschouwer CD, Bekaert P (2018) Color balance and fusion for underwater
image enhancement. IEEE Trans. image process 27(1):379–393
3. Buchsbaum G (1980) A spatial processor model for object color perception. Journal of the Franklin
Institute 310(1):337–350
4. Carlevaris-Bianco N, Mohan A, Eustice RM (2010) Initial results in underwater single image dehazing.
In Oceans, IEEE
5. Chiang JY, Chen YC (2012) Underwater image enhancement by wavelength compensation and dehazing.
IEEE Trans. Image Process 21(4):1756–1769
6. Ding X, Liang Z, Wang Y, Fu X (2021) Depth-aware total variation regularization for underwater image dehazing. Signal Process. Image Commun 98
7. Drews-Jr P, do Nascimento ER, Moraes F, Botelho SC, Campos FM (2013) Transmission estimation in
underwater single images. In IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), pp 825–830
8. Fabbri C, Islam MJ, Sattar J (2018) Enhancing underwater imagery using generative adversarial networks.
2018 IEEE Int. Conf. Robot. Autom. (ICRA), pp 7159–7165
9. Fabbri C, Islam MJ, Sattar J (2018) Enhancing underwater imagery using generative adversarial networks.
In 2018 IEEE Inter. Conf. Robot. Autom. (ICRA), pp 7159–7165
10. Fattal R (2008) Single image dehazing. ACM Trans. Graph 27(3):72
11. Finlayson GD, Trezzi E (2004) Shades of gray and colour constancy. In Color Imaging Conferenc, pp
37–41
12. Fu X, Liang Z, Ding X, Yu X, Wang Y (2020) Image descattering and absorption compensation in
underwater polarimetric imaging. Optics and Lasers in Engineering 132
13. Fu X, Zhuang P, Huang Y, Liao Y, Zhang XP, Ding X (2014) A retinex-based enhancing approach for single underwater image. In 2014 IEEE Int. Conf. Image Process. (ICIP), pp 4572–4576
14. Galdran A, Pardo D, Picon A, Alvarez Gila A (2015) Automatic red channel underwater image restoration.
J. Vis. Commun. Image Represent 26:132–145
15. Ghani A, Isa N (2015) Enhancement of low quality underwater image through integrated global and local
contrast correction. Appl. Soft Comput 37:332–344
16. Guo Y, Li H, Zhuang P (2020) Underwater image enhancement using a multiscale dense generative
adversarial network. IEEE J. Oceanic Engineer 45(3):862–870
17. Hautiere N, Tarel JP, Aubert D, Dumont E (2011) Blind contrast enhancement assessment by gradient
ratioing at visible edges. Image Analysis. Stereology 27(2):87–95
18. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans. Pattern
Anal. Mach. Intell 33(12):2341–2353
19. He K, Sun J, Tang X (2013) Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell 35(6):1397–
1409
20. Huang SC, Chen BH, Wang WJ (2014) Visibility restoration of single hazy images captured in real-world
weather conditions. IEEE Trans. Circuits Syst. Video Technol 24(10):1814–1824
21. Jian M, Qi Q, Dong J, Yin Y, Lam K (2018) Integrating qdwd with pattern distinctness and local contrast
for underwater saliency detection. J. Vis. Commun. Image Represent 53:31–41
22. Land EH (1978) The retinex theory of color vision. Scientific American 237(6):108–128
23. Li C, Guo J, Cong R, Pang Y, Wang B (2016) Underwater image enhancement by dehazing with minimum
information loss and histogram distribution prior. IEEE Trans. Image Process 25(12):5664–5677
24. Li J, Skinner KA, Eustice RM, Johnson-Roberson M (2018) Watergan: Unsupervised generative net-
work to enable real-time color correction of monocular underwater images. IEEE Robotics Autom. Lett
3(1):387–394
25. Li Y, Xu H, Li Y, Lu H, Serikawa S (2019) Underwater image segmentation based on fast level set method.
Int. J. Comput. Sci. Eng 19(4):562–569
26. Li C, Guo C, Ren W, Cong R, Hou J, Kwong S, Tao D (2020) An underwater image enhancement
benchmark dataset and beyond. IEEE Trans. Image Process 29:4376–4389
27. Li C, Anwar S, Hou J, Cong R, Guo C, Ren W (2021) Underwater image enhancement via medium
transmission-guided multi-color space embedding. IEEE Trans. Image Process 30:4985–5000
28. Li C, Anwar S, Porikli F (2020) Underwater scene prior inspired deep underwater image and video
enhancement. Pattern Recognit 98
29. Liu P, Yu H, Cang S (2018) Optimized adaptive tracking control for an underactuated vibro-driven capsule
system. Nonlinear Dynamics 94(3):1803–1817
30. Liu P, Yu H, Cang S (2019) Adaptive neural network tracking control for underactuated systems with
matched and mismatched disturbances. Nonlinear Dynamics 98(2):1447–1464
31. Liu R, Fan X, Zhu M, Hou M, Luo Z (2020) Real-world underwater enhancement: Challenges, bench-
marks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol 30(12):4861–4875
32. Liu X, Gao Z, Chen BM (2021) Ipmgan: Integrating physical model and generative adversarial network
for underwater image enhancement. Neurocomputing 453:538–551
33. Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis 60(2):91–
110
34. Ma X, Chen Z, Feng Z (2019) Underwater image restoration through a combination of improved dark
channel prior and gray world algorithms. J. Electronic Imaging 28(5):053033
35. Islam MJ, Xia Y, Sattar J (2020) Fast underwater image enhancement for improved visual perception.
IEEE Robotics Autom. Lett 5(2):3227–3234
36. Narasimhan SG, Nayar SK (2000) Chromatic framework for vision in bad weather. In IEEE Conf. Comput.
Vis. Pattern Recognit. (CVPR), pp 1598–1605
37. Panetta K, Gao C, Agaian S (2016) Human-visual-system-inspired underwater image quality measures.
IEEE J. Oceanic Engineer 41(3):541–551
38. Peng Y-T, Cao K, Cosman PC (2018) Generalization of the dark channel prior for single image restoration.
IEEE Trans. Image Process 27(6):2856–2868
39. Sahu G, Seal A, Krejcar O, Yazidi A (2021) Single image dehazing using a new color channel. J. Vis. Commun. Image Represent 74
40. Sharma G, Wu W, Dalal EN (2005) The ciede2000 color-difference formula: Implementation notes,
supplementary test data, and mathematical observations. Color Research Application 30(1):21–30
41. Shen J, Robertson N (2021) Bbas: Towards large scale effective ensemble adversarial attacks against deep
neural network learning. Inf. Sci 569:469–478
42. Song W, Wang Y, Huang DM, Tjondronegoro D (2018) A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Advances in Multimedia Information Processing – PCM 2018, pp 678–688
43. Sun L, Zhao C, Yan Z, Liu P, Duckett T, Stolkin R (2018) A novel weakly-supervised approach for
rgb-d-based nuclear waste object detection and categorization. IEEE Sensors Journal 19(9):3487–3500
44. Tang K, Yang J, Wang J (2014) Investigating haze-relevant features in a learning framework for image
dehazing. In Computer Vision Pattern Recognition, pp 2995–3002
45. Ulutas G, Ustubioglu B (2021) Underwater image enhancement using contrast limited adaptive histogram
equalization and layered difference representation. Multim. Tools Appl 80(10):15067–15091
46. Wang S, Ma K, Yeganeh H, Wang Z, Lin W (2015) A patch-structure representation method for quality
assessment of contrast changed images. IEEE Signal Process. Lett 22(12):2387–2390
47. Wang Y, Wang H, Yin C, Dai M (2016) Biologically inspired image enhancement based on retinex.
Neurocomputing 177:373–384
48. Wang L, Qian X, Zhang Y, Shen J, Cao X (2020) Enhancing sketch-based image retrieval by cnn semantic
re-ranking. IEEE Trans. Cybern 50(7):3330–3342
49. Wang Z, She Q, Ward TE (2021) Generative adversarial networks in computer vision: A survey and
taxonomy. ACM Comput. Surv 54(2):1–38
50. Wang Y, Liu H, Chau LP (2018) Single underwater image restoration using adaptive attenuation-curve
prior. IEEE Trans. Circuits Syst. I Regul. Pap 65-I(3):992–1002
51. Wei X, Lu W, Xing W (2017) A rapid multi-source shortest path algorithm for interactive image segmen-
tation. Multim. Tools Appl 76(20):21547–21563
52. Weijer JVD, Gevers T, Gijsenij A (2007) Edge-based color constancy. IEEE Trans. Image Process
16(9):2207–2214
53. Yan X, Wang G, Jiang G, Wang Y, Mi Z, Fu X (2022) A natural-based fusion strategy for underwater image enhancement. Multim. Tools Appl. https://doi.org/10.1007/s11042-022-12267-7
54. Yan X, Wang G, Wang G, Wang Y, Fu X (2022) A novel biologically-inspired method for underwater
image enhancement. Signal Process. Image Commun 104:116670. https://doi.org/10.1016/j.image.2022.
116670
55. Yang KF, Gao SB, Li YJ (2015) Efficient illuminant estimation for color constancy using grey pixels. In
2015 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp 2254–2263
56. Yang M, Sowmya A (2015) An underwater color image quality evaluation metric. IEEE Trans. Image
Process 24(12):6062–6071
57. Yu H, Li X, Lou Q, Lei C, Liu Z (2020) Underwater image enhancement based on dcp and depth
transmission map. Multim. Tools Appl 79(27–28):20373–20390
58. Yuan F, Zhan L, Pan P, Cheng E (2021) Low bit-rate compression of underwater image based on human visual system. Signal Process. Image Commun 91
59. Zhang W, Dong L, Zhang T, Xu W (2021) Enhancing underwater image via color correction and bi-interval contrast enhancement. Signal Process. Image Commun 90
60. Zhang W, Liu W, Li L, Li J, Zhang M, Li Y (2021) An adaptive color correction method for underwater
single image haze removal. Signal Image Video Process
61. Zhu JY, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent
adversarial networks. IEEE International Conference on Computer Vision (ICCV), pp 2242–2251
62. Zhu Q, Mai J, Shao L (2015) A fast single image haze removal algorithm using color attenuation prior.
IEEE Trans. Image Process 24(11):3522–3533
63. Zhuang P, Ding X (2020) Underwater image enhancement using an edge-preserving filtering retinex
algorithm. Multim. Tools Appl 79(25–26):17257–17277
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and applicable
law.