Multiresolution Visual Enhancement of Hazy Underwater Scene


Multimedia Tools and Applications (2022) 81:32907–32936

https://doi.org/10.1007/s11042-022-12692-8

Deepak Kumar Rout1 · Badri Narayan Subudhi2 · T. Veerakumar1 · Santanu Chaudhury3 · John Soraghan4

Received: 26 October 2020 / Revised: 12 February 2021 / Accepted: 21 February 2022 / Published online: 15 April 2022
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022

Abstract
Haze is a common phenomenon in underwater scenes. Scene visibility reduces to a great extent due to haze, which makes underwater visual surveillance quite a challenging task. In this article, we exploit the multi-resolution ability of the discrete wavelet transform and apply a dark channel prior based transmission map estimation scheme to dehaze highly degraded underwater images and restore their color. A three-fold scheme for dehazing of underwater sequences is proposed. In the first stage, image details are extracted using the discrete wavelet transform followed by an image negative operation. In the second stage, the negatives of the detail images are enhanced with the help of the dark channel prior. The third stage is used for reconstruction, where the enhanced image details are used along with the single-level approximation of the input image to obtain the dehazed underwater image using the inverse discrete wavelet transform. The proposed scheme is tested with numerous standard underwater images, as well as the excavation images of the Dwaraka (Dvārakā) underwater ruins. The effectiveness of the proposed scheme is justified by comparing it with different state-of-the-art image dehazing techniques. The quantitative evaluation has been carried out using five well established general purpose non-reference image quality indices, namely BIQI (blind image quality index), BLIINDS (BLind Image Integrity Notator using DCT Statistics), DIIVINE (Distortion Identification-based Image Verity and INtegrity Evaluation), BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator), and SSEQ (Spatial-Spectral Entropy-based Quality). Encouraging scores of 35.01, 30.105, 27.22, 30.10, and 27.8 are achieved for BIQI, BRISQUE, SSEQ, DIIVINE, and BLIINDS, respectively. Four evaluation measures designed exclusively for underwater scenarios (underwater image quality, contrast, sharpness, and colorfulness measures) are also used to test the performance of the proposed scheme.

Keywords Image dehazing · Dark channel prior · Discrete wavelet transform · Underwater image enhancement

Badri Narayan Subudhi
subudhi.badri@iitjammu.ac.in

Extended author information available on the last page of the article.



1 Introduction

Underwater image processing and analysis has been gaining popularity over the last decade. There are various applications where the visual information can be of huge importance, such as preservation of sea animals [10, 22], underwater surveillance [33, 51], etc. The effectiveness of all these applications largely depends on the quality of the image sequences acquired using an underwater camera. In most cases, the acquired images are of low quality and affected by severe degradation processes [24, 56]. Various degradation processes are responsible for a low-quality underwater image; a few of them are decolorization, poor and non-uniform illumination, underwater dynamism, salinity, and underwater haze [48, 62, 63]. Among the many causes of degradation, underwater haze is one of the most important. Underwater videos are highly degraded because of the presence of dense haze: the obtained images are highly hazy, decolorized, very low in contrast, and blurred. Many researchers have proposed various schemes and algorithms to improve the quality of underwater images [1, 9].
A wavelength compensation and image dehazing (WCID) technique was proposed by Chiang et al. [15] to handle the color restoration and removal of underwater haze. The underwater model considers the propagation distance, depth of the object, and normalized residual energy ratio for dehazing. The algorithm proved to be efficient under uniform lighting conditions, but its efficacy in the case of uneven lighting is not discussed. A dark channel prior based dehazing algorithm has been proposed by He et al. [26], which is very effective and is one of the established dehazing schemes. To minimize halo artifacts they proposed a Laplacian soft-matting scheme, which yields quite good results for lightly hazy images. However, it is observed that the said scheme suffers from a reduction in overall intensity, and it fails for images having larger patches of uniform intensity. Ancuti et al. [6] proposed a fusion based approach for underwater image restoration. The degraded image is first white balanced, followed by noise suppression and fusion of weight maps derived from a single image. It fails in the case of poor lighting conditions as well as in a very deep underwater environment.
Serikawa et al. [54] have developed an underwater dehazing scheme using joint tri-lateral filtering. A refinement of the dark channel prior model to estimate background light and transmission in the underwater environment was also proposed by Cheng et al. [14]. The color restoration is achieved by normalizing the transmission coefficient in each color plane. A constant scattering coefficient is assumed, which may not always hold in practice. An image de-scattering and color correction algorithm has also been proposed by Lu et al. [36]. A prior is developed based on the difference in attenuation among the different color channels, and a weighted guided normalized spatial domain filtering is used to compensate for transmission losses. It preserves edges, removes noise, and takes less computational time for processing. The efficiency of the algorithm is not studied under the variation of intrinsic factors like refractive index, scattering coefficient, propagation distance, depth of the scene, etc. Thus it may fail to enhance degraded underwater images under such critical conditions. Ghani et al. [20] have proposed a color and contrast enhancement scheme based on recursive adaptive histogram modification. A wavelet based perspective on variational enhancement has been developed by Vsamsetti et al. [64], where the focus was on enhancing the image quality and preserving the structural details of objects in the scene. Ahn et al. [4] proposed an enhancement scheme for deep-sea floor images to detect crabs, using the retinex method along with contrast processing and hue adjustment techniques. An underwater image enhancement scheme was proposed by Alex et al. [5], which

uses contrast limited adaptive histogram equalization in a reconfigurable platform to achieve


the enhancement. DehazeNet, an algorithm proposed by Cai et al. [12], takes a hazy image as input and yields its transmission map, which is further used to recover the haze-free image with the help of the atmospheric scattering model. It adopts a convolutional neural network based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing. Qiao et al. [50] have presented a scheme for the enhancement of underwater sea cucumber images using improved histogram equalization and wavelet transform. In order to dehaze underwater images and video, Emberton et al. [17] have proposed a method which separates the regions containing only water from the regions occupied by scene objects; the color information of the pure haze patch is then used to dehaze the images. Ancuti et al. [7] have proposed a technique for underwater image enhancement where the hazy image is white balanced and then sharpened and gamma corrected separately. The sharpened and gamma corrected images are then fused in a multiscale framework to yield the enhanced image. In order to handle underwater scattering and degradation, a de-scattering and enhancement scheme has been developed by Pan et al. [46], which employs a Convolutional Neural Network (CNN) for the estimation of the transmission map, which is further refined with the help of an adaptive bilateral filter. The color degradation is handled by a white balance mechanism, and the dehazed image and the color corrected image are then fused by a Laplacian pyramid fusion technique. Further edge enhancement and denoising are done with the help of hybrid wavelets and directional filter banks (HWD). The discrete wavelet transform on the HSV color space is used to enhance the underwater image by slide stretching [67]. An underwater acoustic image contrast enhancement method using wavelet decomposition is proposed by Priyadharshini et al. [49]. Song et al. [57] have proposed a statistical background light model to enhance the underwater image by optimizing the transmission map.
Bayesian filtering is widely used for image restoration. Ju et al. [32] have refined the image formation model to develop a reliable atmospheric scattering model (RASM), and proposed a Bayesian dehazing scheme with the help of prior knowledge of the degradation mechanism. The efficient estimation of the transmission map leads to a better image dehazing process, and to this end Wang et al. [65] have exploited the particle filter framework for transmission estimation. Nishino et al. [43] have formulated the dehazing process as a maximum a posteriori estimation problem and used a factorial Markov random field to estimate the scene depth and albedo jointly. Non-linear filtering methods are also employed for the image dehazing task. Li et al. [34] have proposed an enhancement scheme that deals with the color correction of monocolour underwater images. Generative adversarial networks are used by Fabbri et al. [18] to generate underwater-like images, which are further used to train the network to produce enhanced images. Yu et al. [68] have proposed a threefold scheme that includes homomorphic filtering for color restoration, estimation of the depth map of the image using the light and dark channels, and a dual-wavelet fusion scheme to combine the results obtained in the previous two stages. An edge-preserving scheme that combines the gradient domain guided image filtering priors of the illumination map and the reflection map of the hazy underwater image is developed by Zhuang and Ding [72]. Hassan et al. [25] have proposed a retinex based underwater dehazing scheme that uses contrast limited adaptive histogram equalization to limit the noise and enhance the contrast of dark components, and subsequently uses bilateral filtering to enhance the degraded images. A sub-interval linear transformation based color restoration scheme is proposed by Zhang et al. [70], which uses a bi-interval contrast enhancement strategy to yield enhanced underwater images. Sahu et al. [53] have developed a new color channel comprising the panchromatic channel,

bright channel, and the dark channel to estimate the transmission map, which is used for the
image dehazing.
From the above discussion, it can be concluded that most state-of-the-art techniques rely on spatial domain analysis. The dark channel prior was used in many cases to reduce the haze [16]. Many works have used image histogram based processing schemes to increase the image clarity, whereas a few works have also used a wavelength compensation technique to enhance the image. Some algorithms were also developed in the frequency domain, where the main focus is on image de-noising, thereby increasing the quality. To the best of the authors' knowledge, wavelets have not yet been properly exploited for underwater image dehazing. The discrete wavelet transform (DWT) is an efficient tool to extract the image details by decomposing the image into an approximation component and horizontal, vertical, and diagonal detail components. However, the computational complexity of DWT makes it a hard choice for real-time applications. Wavelets are also good at handling illumination variation [21, 42]. Goh et al. [21] have developed a face recognition system that addresses the illumination variation challenge; their technique alters the illumination component by forcing the approximation sub-band to zero. Another illumination-invariant method that uses a multi-resolution framework to compute the local binary pattern is proposed by Nikan and Ahmadi [42]. It uses the ratio of gradient amplitude to image intensity to represent an image, and further uses a fusion technique for face recognition. Ortega et al. [45] have explained in detail the memory and complexity issues in the implementation of the 2-D DWT. However, with the advancement of system architectures and the computational power of modern computers and media developer kits, these issues have been addressed in various ways [27, 28, 55, 69]. We have checked the time required for the execution of a single-level DWT for images of different resolutions, which is reported in Fig. 11. Lifting-based wavelet transforms have been developed to reduce the complexity of the DWT [61]. A comparative study of recent improvements in wavelet-based image coding schemes is presented by Boujelbene et al. [11].
In this article, dehazing of underwater images is carried out by the use of the dark channel prior in a multi-resolution framework. The image is first decomposed using the discrete wavelet transform, resulting in an approximation component and horizontal, vertical, and diagonal detail components. In the second step, a dark channel prior based image dehazing scheme is applied on all the above components except the approximation component. The enhanced horizontal, vertical, and diagonal components, along with the approximation component, are fed to the synthesis filter-bank to compute the inverse discrete wavelet transform in the third step. The proposed scheme is tested on a large number of underwater hazy images. In order to test the efficiency of the scheme under dense haze and highly degraded conditions, we have used the Dwaraka underwater images obtained from the excavation video of the Dwaraka underwater ruins. In order to evaluate the efficiency of the proposed dehazing technique, five general purpose quantitative evaluation measures and four specially devised underwater image quality evaluation measures are used. A comparison of the proposed scheme is carried out with seven different state-of-the-art image dehazing schemes. Thus the key contributions of the proposed scheme are:
– Finer details of an image get largely affected by haze. The wavelet transform is used to extract the finer details from the hazy image, which are enhanced with the help of a DCP based dehazing strategy.
– The multiresolution framework is exploited to extract the degraded finer details of the hazy underwater image by single-level wavelet decomposition. This ensures the availability of the image information that is supposed to be enhanced.

– The dark channel prior (DCP) is redesigned to operate in the wavelet domain, so as to restore the degraded finer details (horizontal, vertical, and diagonal details) of the underwater image. This is achieved by mapping the wavelet coefficients between '0' and '255', followed by subtracting from 255 to make the data suitable for DCP implementation.
– Intensive testing is carried out on various hazy underwater images to evaluate the efficiency of the proposed scheme.
– Special attention is given to the underwater images of the submerged ruins of the ancient Dwaraka city.
The rest of the paper is organized as follows. The underwater image formation model is described in Section 2. Section 3 discusses the proposed dehazing scheme. Results are presented and analyzed in Section 4, where exclusive experiments and discussions are carried out on the standard underwater images and the Dwaraka underwater images. Section 5 concludes the proposed work.

2 Underwater image formation model and need for image prior

The process of image acquisition in an underwater environment follows basic optics, although the degradation process comprises many phenomena. Underwater images are affected by many degradation phenomena such as scattering, absorption, decolorization, underwater haze, poor lighting, etc. Scattering is defined as the deviation of optical energy from its actual path, resulting in a loss of the optical energy reaching the camera. A part of the light energy gets absorbed by the medium, which is called absorption. The ability of different electromagnetic radiations to penetrate the water varies, resulting in fading of the color components over depth: red fades very fast, followed by green and then blue. Thus, in the deep sea everything looks bluish, resulting in a loss of the actual color information of the objects; this phenomenon is termed decolorization. Underwater haze is defined as a slight obscuration of the water medium caused by tiny suspended particles like plankton and very fine air bubbles, which restrict the light reaching the imaging device. The main source of illumination in an underwater environment is refracted sunlight, and it keeps fading with depth, resulting in poor lighting. An alternate source of light could be a light source mounted over the camera, which also has limited capability. Thus the image formed at the camera is a result of the optical energy sensed by the imaging device [31]. The underwater image model can be given as [54]:
I_c(x, y) = J_c(x, y)\, t(x, y) + \left( 1 - t(x, y) \right) b_c,  (1)
where I_c(x, y) is the intensity value of the degraded c = {Red, Green, Blue} color plane at location (x, y), J_c(x, y) is the true intensity value at the considered pixel location, t(x, y) is the transmission coefficient at (x, y), and b_c is the background light or the surrounding light of the water medium for the c color plane. The transmission coefficient lies between '0' and '1' and is assumed to decay with respect to the distance d(x, y) (the distance between the camera and the scene point represented by the pixel (x, y)). Hence, the transmission coefficient can be given by [54]:
t(x, y) = e^{-\beta d(x, y)},  (2)
where \beta is the degradation coefficient, which is a combination of the scattering coefficient \beta_s and the absorption coefficient \beta_a. Substituting (2) in (1), the image model given by Serikawa

et al. [54] can be rewritten as;

I_c(x, y) = J_c(x, y)\, e^{-(\beta_s + \beta_a) d(x, y)} + \left( 1 - e^{-(\beta_s + \beta_a) d(x, y)} \right) b_c.  (3)

In most real-world scenarios, estimating the scattering and absorption coefficients from a single image is not possible. In order to convert this ill-posed estimation problem into a well-posed problem, many researchers have come up with assumptions, called image priors. These image priors are formulated based on observations. One of the widely used image priors is the Dark Channel Prior (DCP) [26]: for a haze-free image, within a local image patch, the minimum color value approaches zero. Thus, by knowing the dark channel image, the transmission function can be estimated, which can further be used to dehaze the degraded hazy image.
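As an illustration only, a minimal sketch of the formation model in (1)-(2) is given below; the function name, the constant distance map, and the parameter values are our own assumptions and not part of the proposed method.

```python
import numpy as np

def hazy_image(J, d, b, beta=0.8):
    """Simulate the underwater degradation of (1)-(2).

    J    : (H, W, 3) haze-free image with values in [0, 1]
    d    : (H, W) camera-to-scene distance map
    b    : length-3 background light, one value per color plane
    beta : combined scattering + absorption coefficient
    """
    t = np.exp(-beta * d)[..., None]           # transmission map, (2), broadcast over planes
    return J * t + (1.0 - t) * np.asarray(b)   # degraded image, (1)

# Toy usage with hypothetical values
J = np.random.rand(240, 320, 3)                # stand-in haze-free scene
d = np.full((240, 320), 2.0)                   # constant 2 m distance map
I = hazy_image(J, d, b=[0.1, 0.5, 0.7])
```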

3 Proposed dehazing scheme

The proposed dehazing scheme, demonstrated in Fig. 1, consists of three stages. In the first stage, the discrete wavelet transform of the degraded image is obtained. The dark channel prior based enhancement scheme (illustrated in Fig. 2) is carried out on the horizontal, the vertical, and the diagonal details in the second stage. No processing is carried out on the 'A' (approximation) component in order to preserve the coarser details of the image; moreover, the coarser details are least affected by the underwater haze. The third stage consists of the inverse discrete wavelet transform of the enhanced horizontal, vertical, and diagonal details along with the approximation component. The dark channel prior based dehazing scheme is applied on the horizontal, vertical, and diagonal details and not on the approximation component, because of the observation that the finer details are mostly affected by the haze and not the larger details. The proposed scheme of underwater image dehazing is motivated by the fact that the information can be analyzed simultaneously in the spatial and frequency domains using wavelets [8]. Thus, a better enhancement can be achieved by exploiting the spatial information in the approximation component, whereas the frequency analysis can be carried out by exploiting the detail components.

3.1 Multiresolution analysis of the degraded image

Multiresolution analysis is one of the most established approaches used in image processing: compression [3], de-noising [29], disease detection [58], etc. The wavelet transform has the ability to decompose an image into images of multiple resolutions or multiple scales [44]. In the proposed work, the wavelet transform is exploited to achieve the enhancement of the underwater image through the dehazing process.
[Figure 1 depicts the processing pipeline: each of the R, G, and B planes of the degraded input image is decomposed by the DWT into an approximation 'A' and 'H', 'V', 'D' details; the negatives of the 'H', 'V', and 'D' details are dehazed using the dark channel prior, converted back to positives, and combined with the unprocessed 'A' components through the IDWT to produce the enhanced image.]

Fig. 1 Block diagram of the proposed dehazing scheme



[Figure 2 depicts the pipeline: Degraded Image → Computation of Dark Channel Image → Estimation of Atmospheric Light → Estimation of Transmission Map → Enhanced Image.]

Fig. 2 Block diagram of the basic Dark Channel Prior based dehazing scheme

The image I can be decomposed into a coarser approximation (W_\phi) and finer details (W_\psi^i) for i = {H, V, D}, using the scaling and wavelet functions of the Haar orthogonal basis [23]. Here H, V, and D represent the horizontal, the vertical, and the diagonal details [2].
W_\phi(j_0, k_1, k_2) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} I(x, y)\, \phi_{j_0, k_1, k_2}(x, y),  (4)

W_\psi^i(j_0, k_1, k_2) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} I(x, y)\, \psi_{j_0, k_1, k_2}^i(x, y),  (5)
where M and N are the numbers of rows and columns, and x and y are the spatial coordinates of the image I. The symbol j is the scaling parameter, and k_1 and k_2 are the translation parameters. The two-dimensional scaling function is denoted by \phi(x, y), whereas \psi^i(x, y) represents the two-dimensional wavelet functions. The 2D scaling and wavelet functions are obtained by performing the 1D row operation followed by the 1D column operation. Thus, in general, they can be written as [2]:
\phi(x, y) = \phi(x)\,\phi(y),  (6)
\psi^H(x, y) = \psi(x)\,\phi(y),  (7)
\psi^V(x, y) = \phi(x)\,\psi(y),  (8)
\psi^D(x, y) = \psi(x)\,\psi(y),  (9)
where \phi(x) and \psi(x) are the one-dimensional scaling and wavelet functions, respectively. The single-level wavelet decomposition of the degraded input image is computed in this stage. The horizontal, vertical, and diagonal details are then enhanced using the dark channel prior based enhancement scheme, which is explained in detail in the following subsection.
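As a concrete illustration of this stage, the sketch below performs a single-level 2-D Haar decomposition of one color plane with the PyWavelets package; the function and variable names are ours, and the package choice is an assumption, since the paper's experiments were run in MATLAB.

```python
import numpy as np
import pywt

def decompose_plane(plane):
    """Single-level 2-D Haar DWT of one color plane.

    Returns the approximation A and the detail images (H, V, D),
    corresponding to W_phi and W_psi^{H, V, D} in (4)-(5).
    """
    A, (H, V, D) = pywt.dwt2(plane.astype(np.float64), 'haar')
    return A, H, V, D

# Example: decompose the red plane of an RGB image `img`
img = np.random.rand(240, 320, 3)              # stand-in degraded image
A_r, H_r, V_r, D_r = decompose_plane(img[..., 0])
```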

3.2 Dark channel prior based enhancement

The discrete wavelet transform stage results in W_\phi, W_\psi^H, W_\psi^V, and W_\psi^D. The first term represents the approximation image, whereas the last three terms represent the horizontal, vertical, and diagonal details, respectively. The wavelet transform is linear, thus the linearity of the image formation model described in (1) remains preserved even after the transform. Thus it can be written as
 
W_{\psi,c}^i(k_1, k_2) = H_c^i(k_1, k_2)\, T^i(k_1, k_2) + \left( 1 - T^i(k_1, k_2) \right) B_c^i,  (10)

where W_{\psi,c}^i(k_1, k_2) is the wavelet transform of the c = {Red, Green, Blue} color channel. H_c^i(k_1, k_2), T^i(k_1, k_2), and B_c^i are the wavelet transforms of the haze-free finer detail J_c^i(x, y), the transmission function t^i(x, y), and the background light b_c^i, respectively.
It is expected that most of the image details, such as the horizontal, the vertical, and the diagonal details, are not visible because of haze. Thus the overall idea is to enhance the finer details so that the scene contents which are not visible can be made visible. Hence the dark channel prior based enhancement scheme is applied to the detail images only and not to the down-scaled approximation image.
The detail images are very low-intensity images, thus an image negative is carried out in order to make them suitable [41] as input to the dehazing process.

3.2.1 Negative of the wavelet details

In this stage, the image negatives of the detail wavelet coefficient maps are obtained first, and then the DCP based enhancement scheme is applied on the negative images. The negative is obtained as
G^i(k_1, k_2) = \left| (L - 1) - W_\psi^i(k_1, k_2) \right|,  (11)

where G^i(k_1, k_2) is the negative of W_\psi^i(k_1, k_2), i = {H, V, D} is the wavelet detail considered, and L = 2^n, where n is the gray level resolution of W_\psi^i. The negative of W_{\psi,c}^i(k_1, k_2) is denoted as G_c^i(k_1, k_2), which is given by

G_c^i(k_1, k_2) = (L - 1) - \left[ H_c^i(k_1, k_2)\, T^i(k_1, k_2) + \left( 1 - T^i(k_1, k_2) \right) B_c^i \right].  (12)

The negative images are then enhanced using the dark channel prior based dehazing scheme.
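A minimal sketch of this step is shown below. It assumes, as stated in the contributions, that the detail coefficients are first rescaled to the 8-bit range before the negative in (11) is taken; the helper name and the rescaling details are ours.

```python
import numpy as np

def detail_negative(W, L=256):
    """Rescale a detail coefficient map to [0, L-1] and take its negative, as in (11)."""
    lo, hi = W.min(), W.max()
    scaled = (W - lo) / (hi - lo + 1e-12) * (L - 1)   # map coefficients to 0..L-1
    return np.abs((L - 1) - scaled)                   # negative image G^i
```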

3.2.2 Estimation of the dark channel image

The dehazing mechanism largely depends on the correct determination of the transmission function and the background light, as observed in (10). In order to determine the transmission function, the dark channel image is required. The dark channel is defined as the channel with the minimum value of R, G, B in an image patch of size m × n [26]. A pictorial representation of the process of obtaining the dark channel image is illustrated in Fig. 3. Let H_{dark}^i(k_1, k_2) be the dark channel corresponding to the image patch P(k_1, k_2) centered at pixel location (k_1, k_2). H_c^i is one of the color channels of the true image H, where c = {R, G, B}, with R, G, and B corresponding to the red, green, and blue color channels. Thus H_{dark}^i(k_1, k_2) can be computed by
  
H_{dark}^i(k_1, k_2) = \min_{(l_1, l_2) \in P(k_1, k_2)} \left( \min_{c \in \{R, G, B\}} H_c^i(l_1, l_2) \right).  (13)

The steps for constructing the dark channel image are enumerated below (a code sketch follows the list):
– Consider an image patch of dimension m × n centered at pixel location (k_1, k_2).
– Determine the lowest intensity values of the R, G, and B planes separately. Let the minimum intensity values be r_{min}, g_{min}, and b_{min} for the red, green, and blue planes of the image patch, respectively.
– Among the lowest intensity values of the three planes, select the minimum value:

H_{dark}^i(k_1, k_2) = \min\{r_{min}, g_{min}, b_{min}\}.  (14)
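A minimal sketch of the dark channel computation in (13)-(14), implemented with a sliding minimum filter, is given below; the default patch size and the function name are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=5):
    """Dark channel of an RGB image following (13)-(14).

    img   : (H, W, 3) array
    patch : side length m = n of the local window
    """
    per_pixel_min = img.min(axis=2)                   # minimum over the R, G, B channels
    return minimum_filter(per_pixel_min, size=patch)  # minimum within each m x n patch
```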

3.2.3 Estimation of the background light

The light that gets scattered to the surroundings results in an increment of the overall background light. The background light B_c^i is a predominant factor which decides the visibility of a scene [54, 57, 66]. It can be seen from (10) that the estimation of the haze-free image

[Figure 3 shows a numerical example on 5 × 5 red, green, and blue planes of an input image: the pixel-wise minimum among the R, G, and B channels is taken first, then the minimum within each 3 × 3 patch is assigned to the centre pixel of that patch, yielding the dark channel image.]


Fig. 3 Illustration of the dark channel image computation process

depends on the accurate determination of the background light. The background light could be viewed as the average value of the global image intensity; it is computed as the arithmetic mean of the gray level values of an image [66]. However, the background light in the context of DCP is the maximum light (which indirectly represents the haze-opaque pixels) that is sensed by the image acquisition system. It is estimated by the following process (a code sketch is given after the list):
– Determine the locations of the top 0.1% brightest pixels in the dark channel image.
– Determine the highest intensity value in the input image among the pixel locations identified in the previous step.
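The sketch below follows the two steps listed above, estimating a background light value per color plane from the brightest 0.1% of the dark channel pixels; returning one value per plane (rather than a single scalar) is our assumption.

```python
import numpy as np

def background_light(img, dark):
    """Estimate the background light B_c from the top 0.1% brightest dark channel pixels."""
    n = max(1, int(0.001 * dark.size))            # number of brightest dark channel pixels
    idx = np.argsort(dark.ravel())[-n:]           # their flat locations
    candidates = img.reshape(-1, img.shape[2])[idx]
    return candidates.max(axis=0)                 # highest intensity per color plane
```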

3.2.4 Estimating the transmission function

The transmission function has a vital role in deciding the energy received by the sensing device. Hence, in order to dehaze the image, it is important to estimate the transmission T^i(k_1, k_2). It is assumed that the transmission is constant over the image patch P(k_1, k_2). Now, applying the minimum operator on both sides of (12), we get
 
\min_{(l_1, l_2) \in P(k_1, k_2)} G_c^i(l_1, l_2) = (L - 1) - \left( \min_{(l_1, l_2) \in P(k_1, k_2)} H_c^i(l_1, l_2) \right) T^i(k_1, k_2) - \left( 1 - T^i(k_1, k_2) \right) B_c^i.  (15)

The minimum operator is applied on the three color channels separately, resulting in the minimum pixel value in the image patch P(k_1, k_2). Then the minimum value among the three color channels for the specific image patch is determined as

\min_{c} \min_{(l_1, l_2) \in P(k_1, k_2)} G_c^i(l_1, l_2) = (L - 1) - \left( \min_{c} \min_{(l_1, l_2) \in P(k_1, k_2)} H_c^i(l_1, l_2) \right) T^i(k_1, k_2) - \left( 1 - T^i(k_1, k_2) \right) B_c^i.  (16)

As per the basic assumption of the Dark Channel Prior, the H_{dark}^i of H_c^i(k_1, k_2) should tend to zero for a haze-free image [26],

H_{dark}^i(k_1, k_2) = \min_{c} \left( \min_{(l_1, l_2) \in P(k_1, k_2)} H_c^i(l_1, l_2) \right) \rightarrow 0.  (17)

Thus, (16) can be rewritten as;


    
\min_{c} \min_{(l_1, l_2) \in P(k_1, k_2)} G_c^i(l_1, l_2) = (L - 1) - \left( 1 - \tilde{T}^i(k_1, k_2) \right) B_c^i,  (18)

\min_{c} \min_{(l_1, l_2) \in P(k_1, k_2)} \frac{G_c^i(l_1, l_2)}{B_c^i} = \frac{L - 1}{B_c^i} - \left( 1 - \tilde{T}^i(k_1, k_2) \right),  (19)

where \tilde{T}^i and T^i are the estimated and actual transmission functions, respectively. Now, solving (19) for \tilde{T}^i(k_1, k_2), we get

\tilde{T}^i(k_1, k_2) = \min_{c} \min_{(l_1, l_2) \in P(k_1, k_2)} \left( \frac{G_c^i(l_1, l_2)}{B_c^i} \right) - \frac{L - 1}{B_c^i} + 1.  (20)
The transmission can be estimated from (20). The estimated transmission is further refined with the help of the soft matting technique proposed in [26].
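A minimal sketch of the patch-wise transmission estimate in (20) for one detail band is given below; the soft-matting refinement of [26] is omitted, and the helper name and defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(G, B, patch=5, L=256):
    """Estimate T~^i of (20) from the negative detail images of one band.

    G : (H, W, 3) negative detail images G^i_c
    B : length-3 background light B^i_c
    """
    B = np.asarray(B, dtype=np.float64)
    term = G / B - (L - 1) / B                              # G^i_c / B^i_c - (L-1)/B^i_c per channel
    per_pixel_min = term.min(axis=2)                        # minimum over the color channels
    patch_min = minimum_filter(per_pixel_min, size=patch)   # minimum over the patch
    return patch_min + 1.0                                  # estimated transmission, (20)
```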

3.2.5 Recovering the image details

The image details are recovered with the help of the refined transmission \tilde{T}^i, the negative of the image details G_c^i(l_1, l_2), the estimated background light B_c^i, and a predefined lower bound on the transmission T_0, by

H^i(k_1, k_2) = \frac{(L - B^i - 1) - G^i(k_1, k_2)}{\max\left( \tilde{T}^i(k_1, k_2), T_0 \right)} + B^i.  (21)

The predefined value T_0 is used in order to avoid cases when \tilde{T}^i is zero or very close to zero. Thus, any issue of hyper-saturation in the reconstructed image is avoided.

The recovered image details R_\psi^i(k_1, k_2) are obtained using the negative of H^i(k_1, k_2) obtained from (21); thus

R_\psi^i(k_1, k_2) = (L - 1) - H^i(k_1, k_2).  (22)

The recovered image details R_\psi^H(k_1, k_2), R_\psi^V(k_1, k_2), and R_\psi^D(k_1, k_2) obtained from (22) are used along with W_{\phi,c}(k_1, k_2) to get back the dehazed underwater image.
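A sketch of the recovery step in (21)-(22) for a single detail plane, with the same illustrative naming as the previous snippets:

```python
import numpy as np

def recover_detail(G, T, B, L=256, T0=0.1):
    """Recover one detail plane via (21), then undo the negative as in (22).

    G  : (H, W) negative detail image for one color plane
    T  : (H, W) estimated transmission map of this band
    B  : scalar background light of this plane
    T0 : predefined lower bound on the transmission
    """
    H = ((L - B - 1) - G) / np.maximum(T, T0) + B   # equation (21)
    return (L - 1) - H                              # recovered detail R^i_psi, (22)
```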

3.3 Inverse discrete wavelet transform

The image details recovered are used along with the image approximation to obtain the
dehazed image D(x, y) as,

D(x, y) = \frac{1}{\sqrt{MN}} \sum_{k_1=0}^{M-1} \sum_{k_2=0}^{N-1} W_\phi(j_0, k_1, k_2)\, \phi_{j_0, k_1, k_2}(x, y) + \frac{1}{\sqrt{MN}} \sum_{i=H,V,D} \sum_{j=0}^{j_0 - 1} \sum_{k_1=0}^{M-1} \sum_{k_2=0}^{N-1} R_\psi^i(j, k_1, k_2)\, \psi_{j, k_1, k_2}^i(x, y).  (23)
The negative of the inverse discrete wavelet transform image is the required dehazed image.
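Putting the stages together, an end-to-end sketch for an RGB image could look as follows, reusing the illustrative helpers defined above; this is our reading of the pipeline in Fig. 1, not the authors' reference implementation, and the mapping of the recovered details back to the original coefficient range as well as the final image negative are omitted for brevity.

```python
import numpy as np
import pywt

def dehaze_underwater(img):
    """Sketch of the proposed pipeline for an RGB image `img` (values in 0..255)."""
    coeffs = [pywt.dwt2(img[..., c].astype(np.float64), 'haar') for c in range(3)]
    A = [c[0] for c in coeffs]                       # approximations, left untouched
    recovered = {c: [] for c in range(3)}
    for band in range(3):                            # H, V, D detail bands
        # negatives of this band's details for the R, G, B planes
        G = np.stack([detail_negative(coeffs[c][1][band]) for c in range(3)], axis=2)
        dark = dark_channel(G)
        B = background_light(G, dark)
        T = estimate_transmission(G, B)
        for c in range(3):
            recovered[c].append(recover_detail(G[..., c], T, B[c]))
    planes = [pywt.idwt2((A[c], tuple(recovered[c])), 'haar') for c in range(3)]
    return np.stack(planes, axis=2)
```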

4 Results and discussions

The proposed algorithm is implemented on a Core i3 computing system having 4 GB RAM and a 3 MB L2 cache. The algorithm is implemented in MATLAB 2014a under the Windows 10 operating system. In order to justify the efficiency of the proposed scheme, we have carried out subjective as well as quantitative evaluation of the enhanced images and compared it with seven other state-of-the-art image dehazing schemes: Zhu et al. [71], He et al. [26], Gao et al. [19], Cai et al. [12], Chen et al. [13], Pan et al. [46], and Sahu et al. [53]. A qualitative comparison is also performed with two non-linear color restoration schemes: Li et al. [34] and Fabbri et al. [18].
We have evaluated the obtained results by nine different evaluation measures: Blind
Image Quality Index (BIQI) [39], BLind Image Integrity Notator using DCT Statistics
(BLIINDS) [52], Distortion Identification-based Image Verity and INtegrity Evaluation
(DIIVINE) [40], Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [38],
Spatial-Spectral Entropy-based Quality (SSEQ) [35], Underwater Image Colorfulness Mea-
sure (UICM) [47], Underwater Image Sharpness Measure (UISM) [47], Underwater Image
Contrast Measure (UIConM) [47], and Underwater Image Quality Measure (UIQM) [47].
Brief descriptions of these evaluation measures are provided in the Appendix. The overall result
shows that the proposed scheme yields better enhancement as compared to the existing
state-of-the-art techniques. We have presented 27 standard underwater images along with
17 Dwaraka underwater images to showcase the performance of our scheme.

4.1 Application to conventional underwater images

The effectiveness of the proposed scheme is tested on a large number of conventional underwater images; however, due to space constraints, we have presented the visual results of sixteen such images in Fig. 4.

Fig. 4 Dehazing results of various schemes for different underwater images. Column1: Degraded images,
Column2: He et al. [26], Column3: Zhu et al. [71], Column4: Gao et al. [19], Column5: Chen et al. [13],
Column6: Cai et al. [12], Column7: Pan et al. [46], and Column8: Proposed method

4.1.1 Subjective and qualitative analysis

The evaluation of proposed scheme and the other seven state-of-the-art dehazing schemes
used for comparison were carried out using a large number of underwater hazy images. The
Fig. 4 presents the results of few conventional underwater images. It can be seen in Fig. 4,
that all the images are highly corrupted by the dense haze and decolorization. The first
column of Fig. 4 presents a set of degraded images. The results of He et al. [26] scheme on
these set of images are shown in the second column of Fig. 4, where it can be observed that

this scheme results in images with reduced haze, but fails to restore the color information. The third column of Fig. 4 presents the results of Zhu et al. [71], which show that the method does not work for dense haze. The results of Gao et al. [19] are shown in the fourth column of Fig. 4; they are darker and fail to restore the color. The fifth column of Fig. 4 presents the results obtained by the Chen et al. [13] scheme. It can be inferred from these results that this scheme also fails to remove the haze and restore the color effectively. The sixth column of the figure presents the results of Cai et al. [12], where it can be observed that this scheme is capable of dehazing the images to some extent but fails completely to restore the color. In some cases red color patches can be seen, resulting in un-natural scene coloring. The dehazed results of Pan et al. [46] are shown in the seventh column of Fig. 4, where the color is restored to some extent, but in many instances the red channel is over-enhanced, resulting in un-natural scene color. The last column of Fig. 4 presents the results of the proposed scheme, which is not only able to remove the haze to a great extent but is also able to restore the color information.

4.1.2 Quantitative analysis

The quantitative evaluation of the proposed scheme along with the other considered techniques is carried out using the non-reference image quality assessment (NR-IQA) indices BRISQUE, DIIVINE, BLIINDS, SSEQ, and BIQI. In these cases, there is no need for any ground truth or reference image to evaluate the amount of enhancement. A lower value of these indices implies better quality and a higher value represents a poorer quality image. The bold entries in Tables 1, 2, 3, 4, and 5 represent the best score for each evaluation metric.

The overall image quality assessment measures for all the conventional underwater images considered in our experiment are given in Table 1. The corresponding graph showing a comparison between the methods considered in this article is presented in Fig. 5. It can be seen from Table 1 and Fig. 5 that the IQA values are the lowest for the proposed scheme in comparison to the other considered state-of-the-art dehazing techniques.

4.2 Application to Dwaraka underwater images

The proposed scheme is tested on the Dwaraka underwater images and the respective dehazing results are shown in Fig. 6. The underwater images extracted from the considered Dwaraka excavation video are highly degraded: the frames are hazy, decolorized, and blurred. The extracted frames considered in this experiment are of size 320 × 240.

Table 1 Overall image quality indices for all the considered images

Method             BIQI     BRISQUE   SSEQ     DIIVINE   BLIINDS

Chen et al. [13]   39.34    35.42     36.89    38.24     34.26
Gao et al. [19]    41.83    43.98     38.24    40.37     37.95
He et al. [26]     37.26    31.63     31.15    32.86     30.52
Zhu et al. [71]    38.14    36.18     34.38    37.54     34.18
Cai et al. [12]    38.31    38.61     37.67    39.51     37.69
Pan et al. [46]    36.58    35.18     31.49    35.84     32.76
Sahu et al. [53]   34.22    31.07     30.93    32.72     28.97
Proposed           33.74    30.53     26.34    29.61     28.27

Fig. 5 Graphical representation of the overall image quality indices

4.2.1 Qualitative and quantitative analysis

Figure 6 presents the dehazing results obtained by the proposed scheme along with the other schemes considered for comparison. Seventeen of the original degraded images extracted from the Dwaraka underwater excavation video are shown in the first column of Fig. 6. The images are highly hazy and blurred. The second column shows the dehazed results of the He et al. [26] approach. It can be seen that a few regions become highly saturated, resulting in the loss of some finer details in the images. The results obtained by Zhu et al. [71] are shown in the third column of Fig. 6. Although the visibility is enhanced to some extent, a few regions are highly saturated and the color is not restored. The fourth column of the figure shows the outcomes of Gao et al. [19]; it can be observed that the contrast of the images increases to some extent but the blue channel gets enhanced more than the other channels, resulting in bluish images. The results of Chen et al. [13] are presented in the fifth column. It can be seen that the images are enhanced to some extent and the colors are restored slightly. The dehazing results of Cai et al. [12] are shown in the sixth column, where it can be observed that the contrast of the scene increases but for some images red color patches appear, resulting in untrue scene color. The results of Pan et al. [46] are shown in the seventh column of Fig. 6, where it can be observed that in most of the cases the red channel is enhanced and the green channel is suppressed, resulting in un-natural color images. The eighth column of Fig. 6 shows the outcomes of the proposed scheme. It can be observed that the colors are restored to a great extent and the visibility is also enhanced. Table 2 presents the BIQI values of the images for the various schemes considered. The BIQI values of the original degraded images are also given alongside in the table. A lower BIQI index corresponds to a better quality image. It can be seen that the average BIQI for the proposed scheme is the lowest among the methods considered. Table 3 presents the average evaluation measures for the various schemes and the proposed scheme. The average evaluation measure is the arithmetic mean of the indices of all the 200 images.

Fig. 6 Dehazing results of various schemes for the Dwaraka underwater video. Column1: Degraded frames,
Column2: He et al. [26], Column3: Zhu et al. [71], Column4: Gao et al. [19], Column5: Chen et al. [13],
Column6: Cai et al. [12], Column7: Pan et al. [46], and Column8: Proposed method

4.3 Discussions

The use of the dark channel prior for single image dehazing was proposed by He et al. [26], where the authors estimated the transmission map of the medium using the dark channel prior. The application of the dark channel prior to underwater image enhancement emerged about a decade ago, and it has proved to be one of the strongest priors for modeling the transmission function. The wavelet transform is also used for image enhancement in the literature. The discrete wavelet transform decomposes a signal into different bands, and

Table 2 The BIQI index for a few Dwaraka underwater images

Image Chen et al. (2014) Gao et al. (2014) He et al. (2011) Zhu et al. (2015) Cai et al. (2016) Pan et al. (2018) Proposed Scheme Original Image

DwarakaUW01 50.8211 47.5774 38.7619 43.9836 44.3417 44.6041 35.7619 62.0106


DwarakaUW02 48.303 51.8451 39.8549 48.7707 43.9418 44.4747 34.2161 64.1945
DwarakaUW03 47.37 41.8852 35.7037 37.7834 41.0985 40.067 30.9613 55.0165
DwarakaUW04 43.3834 42.7588 31.2299 38.2628 38.2498 38.1685 38.0696 54.5138
DwarakaUW05 41.9598 50.3983 40.9051 39.7014 47.1999 35.9906 36.8173 51.4091
DwarakaUW06 48.5996 43.9288 35.408 45.6737 38.352 42.1543 38.4125 57.8432
DwarakaUW07 40.3899 42.0025 31.3251 36.4306 34.1053 41.2004 36.5075 54.5462
DwarakaUW08 44.227 44.8556 35.0035 37.8036 37.2457 44.5567 35.7184 50.5541
DwarakaUW09 39.7142 41.6295 29.1037 38.533 34.4115 35.8852 36.7408 55.9238
DwarakaUW10 46.7844 57.0264 51.0775 42.3528 56.5192 35.1232 34.3212 52.0812
DwarakaUW11 37.7105 38.8732 30.2296 35.1061 34.8804 31.8499 32.4896 48.4425
DwarakaUW12 38.9072 47.6797 38.6696 37.3745 43.7007 33.8175 36.8971 49.8049
DwarakaUW13 39.578 43.4844 34.7458 40.7696 39.0619 42.826 35.9833 52.4691
DwarakaUW14 58.6635 56.3377 40.3279 49.4848 45.1111 48.1273 38.2714 53.6912
DwarakaUW15 53.0841 38.9046 38.8274 37.193 37.9627 38.9304 36.225 50.4781
DwarakaUW16 39.8442 40.9001 39.0398 36.8274 49.5164 34.2274 34.2622 50.5857
DwarakaUW17 42.3045 46.7489 30.6313 42.5098 39.7393 40.6145 38.6432 57.2276
DwarakaUW18 40.3914 49.3835 41.6641 37.0847 43.0635 35.397 33.2461 48.9201
DwarakaUW19 48.6414 48.1211 31.6846 46.6982 45.5223 49.1636 38.1142 61.7877


DwarakaUW20 39.625 42.0191 38.4456 35.4231 45.6731 34.8984 34.8873 49.1716
DwarakaUW21 37.5232 35.9559 28.6144 32.7576 37.0457 34.9068 30.8106 52.6485
DwarakaUW22 36.7442 43.4342 31.5061 39.2712 38.2161 33.8071 36.2571 53.085

DwarakaUW23 35.6895 32.7209 27.4347 30.3147 34.5799 33.5884 29.7622 47.1615


DwarakaUW24 42.6936 49.7099 45.4678 39.7226 50.9666 36.8491 35.6317 52.3041
Dwaraka02 29.5207 27.5999 38.0574 44.8684 42.8612 36.3056 39.8682 62.6371
Dwaraka03 32.0097 23.3755 24.2859 24.7712 33.0293 21.021 19.4718 40.7825
Dwaraka04 29.2846 48.6757 68.337 58.5366 36.6388 37.0429 55.6341 73.5633
Average BIQI 41.9913 43.6234 36.9015 39.9262 41.2235 37.9225 35.7030 54.1797

Table 3 Average evaluation measures for Dwaraka underwater images

Method BRISQUE DIIVINE BIQI BLIINDS SSEQ

Zhu et al. [71] 36.3982 38.1112 40.1763 33.9964 34.6127


He et al. [26] 31.2148 33.2788 38.0831 31.2648 29.6636
Gao et al. [19] 40.1537 42.1367 41.8629 37.2287 37.5699
Chen et al. [13] 38.1258 39.2124 42.8557 35.3564 37.5248
Cai et al. [12] 48.3263 46.3581 43.5431 40.1187 40.2754
Pan et al. [46] 36.2614 38.6287 39.8359 32.7329 31.5547
Sahu et al. [53] 30.1298 30.7213 37.9358 32.2768 28.9472
Proposed 29.6798 30.5879 36.2869 27.3349 28.1005

thus it performs well for the reconstruction of irregular shapes. The loss of information is negligible in the wavelet domain [37]. The compactness of energy is higher in the case of the wavelet transform, which makes it energy efficient. Thus any kind of filtering in the wavelet domain results in negligible information loss. This makes our algorithm suitable for handling degraded underwater images. In this article, the strength of the multi-resolution capability of the wavelet transform is exploited with the help of the dark channel prior. Dark Channel Prior (DCP) based approaches work directly on the visual data or image: they determine the darkest channel corresponding to a pixel by considering a spatial neighborhood. However, in our proposal, the DCP is implemented on the image details (horizontal, vertical, and diagonal components) obtained after single-level decomposition of the degraded image. The dark channel prior has been newly formulated so as to be applicable to the wavelet transformed images.
The overall image quality indices for 200 Dwaraka underwater images are given in Fig. 7. It can be observed from Tables 1, 2, and 3 that the dehazed images obtained by the proposed scheme are of better quality than those of the other considered methods. It is also found that He et al. [26], Sahu et al. [53], and Pan et al. [46] performed well in some instances, but on average the proposed method outperforms all of them.

Fig. 7 Graphical representation of the overall image quality indices for the Dwaraka images

Fig. 8 Visual comparison with non-linear color correction schemes. Column1: Degraded frames, Column2:
Li et al. [34], Column3: Fabbri et al. [18], and Column4: Proposed method

The dark channel prior signifies the strength of the haze at different parts of the image under consideration. In the case of a haze-free image, the dark channel is assumed to be zero, whereas for hazy images the dark channel takes a higher value. In the above experiments, it is observed that the proposed scheme not only dehazes the hazy images but also restores the color to a great extent. The parameters which affect the performance of the proposed scheme are the value of L, the patch size, and the value of the predefined transmission T_0. All the images have 8-bit intensity resolution, thus the value of L is taken as 256. In all the cases the patch size (m × n) is assumed to be (5 × 5); however, the performance depends on the window size considered [26]. A larger patch size leads to a higher chance of capturing the dark channel, whereas a smaller patch leads to a less accurate dark channel. On the other hand, a larger patch size results in more prominent halo artifacts. Thus a calculated trade-off has to be made in order to fix the patch size. Haze affects the finer details of a scene, which results in a hazy or blurry image. The motivation behind using the DWT is to extract the finer details from the hazy images. Thus a single-level decomposition

Fig. 9 Probability distribution of the gray level values of degraded and enhanced images using the proposed
technique. Column1: Degraded images, Column2: color histogram of degraded image, Column3: enhanced
image, and Column4: color histogram of enhanced image

Fig. 10 Visual comparison of performance of the proposed scheme with LWT and DWT. Column1: Degraded
frames, Column2: Proposed scheme with LWT, Column3: Proposed scheme with DWT

is enough to extract them. However, with an increase in the level of decomposition, the extent of retrieval of the finer details decreases, which lowers the enhancement quality [59]. Thus single-level wavelet decomposition was performed for all the images.
The proposed scheme is able to restore the color of underwater decolorized sequences to some extent. A comparison with Li et al. [34] and Fabbri et al. [18], which deal with the color restoration task, is presented in Fig. 8. It can be observed that for decolorized images without haze or with light haze, our approach yields results comparable to those of [34] and [18].
The value of the predefined transmission T_0 is chosen based on the image haziness and scene content. In the case of images with larger regions having similar intensity values, the chance of getting saturated regions in the reconstructed images is higher. Thus for such images, T_0 is assigned a higher value, whereas for images with larger intensity

Table 4 BRISQUE score for some standard underwater images

Image Proposed with LWT Proposed with DWT

standardUW 1 26.4750 25.2977


standardUW 2 24.7406 24.0519
standardUW 3 47.2551 45.8032
standardUW 4 19.3159 18.3820
standardUW 5 34.9073 33.1127

variations, T_0 is taken to be smaller. In our experiments, we have considered images which are captured from a smaller distance in order to satisfy the requirements of the dark channel prior assumptions. This is because, for images captured at a larger distance, the transmission coefficient may be different for different color channels [26]. This method uses the dark channel prior in a multi-resolution framework, and is thus able to restore the color as well as remove the effects of haze in the images to a satisfactory level.
The numerical analysis of the proposed technique is carried out with the help of the probability distributions of the gray level values of the degraded images and their corresponding enhanced images obtained using our approach. Figure 9 shows the probability distributions of the degraded underwater images and their corresponding enhanced images. It can be observed that the histograms of the enhanced images attain a better dynamic range than those of the degraded underwater images. This implies that the contrast of the images is increased satisfactorily by the proposed mechanism [60].
In Table 4, the quality indices of the enhanced images shown in Fig. 10 are reported using the BRISQUE score. It can be observed from Table 4 that with DWT the results are better enhanced than with LWT. A comparison of the processing times is carried out in Table 5 and its corresponding graphical representation is shown in Fig. 11. It can be observed that the lifting-based wavelet transform (LWT) takes less time than DWT; however, it misses some of the image details during the decomposition process [30]. LWT can be used instead of DWT in case a faster dehazing is required at the cost of the quality of the dehazed image. This is because the LWT [61] extracts comparably fewer image details than DWT [11, 30]. Results of the proposed scheme with DWT based image enhancement and with LWT based image enhancement are presented in Fig. 10.
Let O(·) denote the order of complexity, and let M × N be the resolution of the image to be enhanced. Gao et al. [19], Chen et al. [13], and Zhu et al. [71] have the same level of complexity, O(MN). Pan et al. [46] and Cai et al. [12] have a complexity of O\!\left( MN \cdot w \cdot O(n_l^2)/\epsilon^2 \cdot \frac{2MN + P}{S} \right). Here, w is the width of the network, n_l is the number of layers, \epsilon is the error parameter, P is the total number of parameters, and S is the sample size. The complexity of He et al. [26] and our proposed scheme is O(MmNn), where m × n is the size of the image patch used for the estimation of the transmission map. Thus the complexity is not affected by including the DWT, since its complexity is of order O(MN).
The improvement of image quality with the help of the proposed scheme is highlighted in Table 6. Here, the UICM (Underwater Image Colorfulness Measure), UIConM (Underwater Image Contrast Measure), UIQM Norm (Normalized Underwater Image Quality Measure), and UISM (Underwater Image Sharpness Measure) metrics [47] are reported for 14 underwater images in Enhanced / Degraded format. Degraded represents the respective measures
Table 5 Average processing time of various schemes for underwater images of different resolutions

Resolution DWT LWT Cai et al. Pan et al. He et al. Gao et al. Zhu et al. Chen et al. Proposed

320x240 0.249706 0.129697 1.671707 3.895077 0.668541 1.297387 1.456376 0.738335 0.684218
400x300 0.268389 0.146285 1.846434 2.993705 0.907225 1.93923 1.957451 1.097764 0.926487
480x320 0.287889 0.153862 2.045086 4.05225 1.169016 2.480965 2.634056 1.396333 1.203278

600x450 0.346142 0.217765 2.835606 4.888489 2.075374 4.377703 4.434058 2.452058 1.978562
1280x720 1.154958 0.913225 10.930513 11.528195 16.285436 33.622311 36.049249 19.260691 16.032869
1920x1080 1.193406 0.953933 11.646571 12.525638 18.090883 37.232902 41.159153 22.505942 17.982491
1920x1200 1.322987 0.976804 12.597281 13.276408 18.329954 37.486487 40.774065 21.809408 18.336924

Table 6 Performance evaluation of the proposed scheme for several underwater sequences using underwater image quality measures

Image UICM Enhanced / Degraded UIConM Enhanced / Degraded UIQM Norm Enhanced / Degraded UISM Enhanced / Degraded

UWS 001 9.5592 / 2.7537 0.8466 / 0.4413 1.0436 / 0.4743 2.6145 / 0.6563
UWS 002 8.8003 / 4.2343 1.1138 / 1.0774 1.6589 / 1.2062 7.5757 / 2.4761
UWS 003 4.1882 / 2.1158 1.1921 / 0.9611 1.4491 / 1.0849 4.2992 / 2.4852
UWS 004 13.2709 / 3.9988 1.0653 / 0.8642 1.7830 / 1.1390 9.3754 / 4.1932
UWS 005 6.3726 / 1.1182 0.7918 / 0.4278 1.1607 / 0.4823 5.1293 / 1.0812
UWS 006 10.3188 / 2.4063 1.0589 / 0.7573 1.4743 / 0.8440 5.6591 / 1.7447
UWS 007 4.6777 / 1.2766 0.6312 / 0.3218 0.6857 / 0.3338 0.9649 / 0.3891
UWS 008 4.3984 / 0.6607 0.8949 / 0.3837 1.1664 / 0.3940 4.1447 / 0.4924
UWS 009 14.8354 / 4.8407 0.9756 / 0.7052 1.3915 / 0.8837 5.1429 / 2.6669
UWS 010 12.8581 / 2.3086 1.1391 / 1.1116 1.6589 / 1.1651 6.8828 / 1.5829
UWS 011 7.6727 / 1.5192 0.6939 / 0.3547 0.8112 / 0.3920 1.5754 / 0.7361
UWS 012 9.8668 / 2.7606 1.1495 / 0.8589 1.5418 / 0.9693 5.4965 / 2.1349
UWS 013 7.4854 / 1.6331 1.9176 / 0.8204 1.4813 / 1.2317 6.5194 / 1.4392
UWS 014 8.9273 / 1.5574 1.3320 / 0.5441 1.5806 / 0.6430 4.8873 / 0.7082

Fig. 11 Graphical comparison of time complexity of various schemes for different underwater image
resolutions

of the original degraded image, whereas Enhanced represents the measures obtained using our proposed technique. A higher index implies better image quality.
The proposed scheme performs well under most conditions; however, in the case of images with dense haze and complete decolorization, the performance is not satisfactory. In dense haze situations, where the finer details of the image along with the coarser details are completely distorted, the multiresolution framework is not capable of extracting them efficiently, which results in poor performance of the proposed scheme.

5 Conclusions and future works

Dehazing of highly degraded hazy underwater images has been addressed in this article. The amalgamation of the dark channel prior in a multi-resolution framework is capable of decreasing the effects of haze in underwater images. The application of the dark channel prior on the wavelet details of the color channels restores the color to some extent. The proposed dehazing scheme proves to be better in terms of various image quality indices.
The proposed scheme is tested with numerous standard hazy underwater images and a large number of Dwaraka underwater images, which are highly degraded because of dense haze, decolorization, blurring, and poor contrast. The performance of the proposed scheme is validated by five image quality indices by comparing with seven state-of-the-art dehazing methods. The overall BIQI, BRISQUE, SSEQ, DIIVINE, and BLIINDS scores achieved by the proposed enhancement scheme are 35.01, 30.105, 27.22, 30.10, and 27.8, respectively. Four more non-reference image quality indices, devised exclusively for underwater images, are also used to check the performance of the proposed scheme. Subjective as well as quantitative analyses are carried out in order to justify our findings.
Although the visibility is enhanced by the proposed scheme, in some instances a few patches in the image become highly saturated. Thus, in the future, we plan to exploit the various attributes of the multi-resolution framework along with a better color estimation algorithm combined with a probabilistic haze model to tackle the issue
32932 Multimedia Tools and Applications (2022) 81:32907–32936

of dense underwater haze and highly degraded color images. Work also can be done to fix
the patch size adaptively, so as to get the optimal restoration, because the performance is
largely dependent on the patch size considered.

Appendix: Evaluation Measures

Quantitative analysis of the images is carried out using the non-reference image quality
assessment measures. The reason for choosing these measures is that, in our experiments,
the reference or ground truth images are not available.

BRISQUE [38]: It is a natural-scene-statistics based, distortion-generic, blind/no-reference image quality assessment model which operates in the spatial domain. It uses scene statistics of locally normalized luminance coefficients to quantify possible losses of ‘naturalness’ in the images due to the presence of distortions, thereby leading to a holistic measure of quality. A support vector regression (SVR) model trained on differential mean opinion score (DMOS) values maps these statistics to a quality score. The measure is based on the mean subtracted contrast normalized (MSCN) coefficient, which is given by:

$$I_{MSCN}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + c}, \tag{24}$$

where

$$\mu(i,j) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\, I(i+k, j+l), \tag{25}$$

$$\sigma(i,j) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\,\bigl[I(i+k, j+l) - \mu(i,j)\bigr]^{2}}. \tag{26}$$
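
For illustration, a minimal NumPy/SciPy sketch of the MSCN computation in (24)-(26) is given below; the Gaussian weighting window width and the stabilizing constant c are illustrative choices and are not prescribed by this paper.

```python
# Minimal sketch of the MSCN transform of Eqs. (24)-(26), assuming a grayscale
# image as a 2-D float array; the Gaussian window width and the constant c
# are illustrative choices, not values taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image: np.ndarray, sigma: float = 7.0 / 6.0, c: float = 1.0) -> np.ndarray:
    """Mean-subtracted, contrast-normalized coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                       # local mean, Eq. (25)
    var = gaussian_filter(image * image, sigma) - mu * mu    # local variance
    sigma_map = np.sqrt(np.abs(var))                         # local std, Eq. (26)
    return (image - mu) / (sigma_map + c)                    # Eq. (24)
```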

DIIVINE [40]: It is a distortion-agnostic approach to blind image quality assessment that utilizes concepts from natural scene statistics not only to quantify the distortion but also to identify the distortion type afflicting the image. First, it computes the wavelet coefficients and normalizes them. Next, it identifies the distortion afflicting the image and then performs distortion-specific quality assessment.
BLIINDS [52]: It is an efficient, general-purpose, non-distortion specific, blind/no-
reference image quality assessment algorithm that uses discrete cosine transform
coefficients to perform the distortion-agnostic quality assessment.
BIQI [39]: It is a two-step framework for no-reference image quality assessment based on natural scene statistics. Once trained, the framework does not require any knowledge of the distorting process, and it is modular, so it can be extended to any number of distortions.
SSEQ [35]: It is an efficient general-purpose no-reference image quality assessment model that utilizes local spatial and spectral entropy features of distorted images. It uses a two-stage framework of distortion classification followed by quality assessment, and employs a support vector machine to train an image distortion and quality prediction engine. It is capable of assessing the quality of a distorted image across multiple distortion categories.

UICM [47]: This parameter is devised to measure the colorfulness of the image. It is given as:

$$UICM = -0.0268\,\sqrt{\mu_{\alpha,RG}^{2} + \mu_{\alpha,YB}^{2}} + 0.1586\,\sqrt{\sigma_{\alpha,RG}^{2} + \sigma_{\alpha,YB}^{2}}, \tag{27}$$

where $\mu_{\alpha}$ and $\sigma_{\alpha}^{2}$ denote the mean and variance of the chrominance intensity, computed on the RG and YB channels.
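
A rough NumPy sketch of Eq. (27) is given below, assuming an 8-bit RGB image; the asymmetric α-trimming of the RG and YB channels used in [47] is omitted for brevity, so plain means and variances stand in for the trimmed statistics.

```python
# Illustrative UICM of Eq. (27); the alpha-trimming of the chrominance
# channels used in the original measure is omitted, so plain means and
# variances are used instead.
import numpy as np

def uicm(rgb: np.ndarray) -> float:
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    rg = r - g                       # red-green chrominance
    yb = 0.5 * (r + g) - b           # yellow-blue chrominance
    return (-0.0268 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
            + 0.1586 * np.sqrt(rg.var() + yb.var()))
```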
UISM [47]: The underwater image sharpness measure computes the sharpness of an image. It is given by:

$$UISM = \sum_{c=1}^{3} \lambda_{c}\, EME(\mathrm{GrayscaleEdge}_{c}), \tag{28}$$

$$EME = \frac{2}{k_{1} k_{2}} \sum_{l=1}^{k_{2}} \sum_{k=1}^{k_{1}} \log\!\left(\frac{I_{max,k,l}}{I_{min,k,l}}\right), \tag{29}$$

where $k_{1} \times k_{2}$ is the number of blocks into which the image is divided.
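
The block-wise EME of Eq. (29) can be sketched as follows; the 8 × 8 block grid and the small ε that guards flat blocks are illustrative assumptions. The full UISM of Eq. (28) would apply this function to the grayscale edge map of each color channel and weight the results by λc.

```python
# Block-wise EME of Eq. (29); the block grid (k1 x k2) and the epsilon that
# protects flat or zero-valued blocks are illustrative assumptions.
import numpy as np

def eme(channel: np.ndarray, k1: int = 8, k2: int = 8, eps: float = 1e-6) -> float:
    h, w = channel.shape
    bh, bw = h // k2, w // k1        # block height and width
    total = 0.0
    for l in range(k2):
        for k in range(k1):
            block = channel[l * bh:(l + 1) * bh, k * bw:(k + 1) * bw]
            total += np.log((block.max() + eps) / (block.min() + eps))
    return 2.0 / (k1 * k2) * total
```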
UIConM [47]: It is the underwater image contrast measure, which is defined mathematically as:

$$UIConM = \mathrm{logAMEE}(\mathrm{Intensity}), \tag{30}$$

where AMEE is the Agaian measure of enhancement by entropy, i.e., the average Michelson contrast over local regions of the intensity image.
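
As a loose illustration of Eq. (30), the sketch below averages an entropy-weighted Michelson contrast over local blocks; the PLIP arithmetic of the actual AMEE operator in [47] is not reproduced, so this should be read as an approximation rather than the exact measure.

```python
# Approximate logAMEE-style contrast for Eq. (30): entropy-weighted Michelson
# contrast averaged over local blocks; the PLIP operators of the original
# AMEE definition are deliberately not reproduced here.
import numpy as np

def log_amee_approx(intensity: np.ndarray, k1: int = 8, k2: int = 8, eps: float = 1e-6) -> float:
    h, w = intensity.shape
    bh, bw = h // k2, w // k1
    total = 0.0
    for l in range(k2):
        for k in range(k1):
            block = intensity[l * bh:(l + 1) * bh, k * bw:(k + 1) * bw]
            i_max, i_min = float(block.max()), float(block.min())
            contrast = (i_max - i_min) / (i_max + i_min + eps)      # Michelson contrast
            total += contrast * np.log(contrast + eps)              # entropy weighting
    return total / (k1 * k2)
```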
UIQM [47]: It is the normalized underwater image quality measure. It is given by:

$$UIQM = c_{1} \times UICM + c_{2} \times UISM + c_{3} \times UIConM, \tag{31}$$

where $c_{1}$, $c_{2}$, and $c_{3}$ are the contributing factors, which are set to 0.0282, 0.2953, and 3.5753, respectively.
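
Combining the three components as in Eq. (31) is then straightforward; the function below simply assumes the component scores have already been computed (for example, with the sketches above).

```python
# Linear combination of Eq. (31) with the contributing factors quoted above.
def uiqm(uicm_score: float, uism_score: float, uiconm_score: float,
         c1: float = 0.0282, c2: float = 0.2953, c3: float = 3.5753) -> float:
    """Normalized underwater image quality measure."""
    return c1 * uicm_score + c2 * uism_score + c3 * uiconm_score
```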

Acknowledgments The authors would like to thank the Marine Archaeological Unit (MAU) of The Archae-
ological Survey of India (ASI), Dr. S. R. Rao, Mr. H. S. Adwani, Mr. M. A. Raveendra, Dr. Nalini Rao, Mr.
Graham Hancock, and Mr. Rafal Reyzer for providing the Dwaraka sequence and images used in this article.

References

1. Abas PE, De Silva LC et al (2019) Review of underwater image restoration algorithms. IET Image
Process 13(10):1587–1596
2. Addison PS (2017) The illustrated wavelet transform handbook: Introductory theory and applications in
science, engineering, medicine and finance. CRC Press, Boca Raton, Florida, USA
3. Agarwal S, Regentova EE, Kachroo P, Verma H (2017) Multidimensional compression of its data using
wavelet-based compression techniques. IEEE Trans Intell Transp Syst 18(7):1907–1917
4. Ahn J, Yasukawa S, Sonoda T, Ura T, Ishii K (2017) Enhancement of deep-sea floor images obtained by
an underwater vehicle and its evaluation by crab recognition. J Mar Sci Technol 22(4):758–770
5. Alex RS, Deepa S, Supriya M (2016) Underwater image enhancement using clahe in a reconfigurable
platform. In: MTS/IEEE Monterey OCEANS, pp 1–5
6. Ancuti CO, Ancuti C (2013) Single image dehazing by multi-scale fusion. IEEE Trans Image Process
22(8):3271–3282
7. Ancuti CO, Ancuti C, De Vleeschouwer C, Bekaert P (2018) Color balance and fusion for underwater
image enhancement. IEEE Trans Image Process 27(1):379–393
8. Antonini M, Barlaud M, Mathieu P, Daubechies I (1992) Image coding using wavelet transform. IEEE
Trans Image Process 1(2):205–220
9. Anwar S, Li C (2020) Diving deeper into underwater image enhancement: a survey. Signal Process
Image Commun 89:115978

10. Bickford D, Lohman DJ, Sodhi NS, Ng PK, Meier R, Winker K, Ingram KK, Das I (2007) Cryptic
species as a window on diversity and conservation. Trends Ecol Evol 22(3):148–155
11. Boujelbene R, Jemaa YB, Zribi M (2019) A comparative study of recent improvements in wavelet-based
image coding schemes. Multimed Tools Appl 78(2):1649–1683
12. Cai B, Xu X, Jia K, Qing C, Tao D (2016) DehazeNet: An end-to-end system for single image haze
removal. IEEE Trans Image Process 25(11):5187–5198
13. Chen Z, Wang H, Shen J, Li X, Xu L (2014) Region-specialized underwater image restoration in
inhomogeneous optical environments. Opt Int J Light Electron Opt 125(9):2090–2098
14. Cheng CY, Sung CC, Chang HH (2015) Underwater image restoration by red-dark channel prior and
point spread function deconvolution. In: IEEE International conference on signal and image processing
applications, pp 110–115
15. Chiang JY, Chen YC (2012) Underwater image enhancement by wavelength compensation and
dehazing. IEEE Trans Image Process 21(4):1756–1769
16. Colores SS, Moya-Sánchez EU, Ramos-Arreguín JM, Cabal-Yépez E (2019) Statistical multidirectional
line dark channel for single-image dehazing. IET Image Process 13(14):2877–2887
17. Emberton S, Chittka L, Cavallaro A (2017) Underwater image and video dehazing with pure haze region
segmentation. Comput Vis Image Underst 168:145–156
18. Fabbri C, Islam MJ, Sattar J (2018) Enhancing underwater imagery using generative adversarial
networks. In: IEEE International conference on robotics and automation, pp 7159–7165
19. Gao Y, Hu HM, Wang S, Li B (2014) A fast image dehazing algorithm based on negative correction.
Signal Process 103:380–398
20. Ghani ASA, Isa NAM (2017) Automatic system for improving underwater image contrast and color
through recursive adaptive histogram modification. Comput Electron Agric 141:181–195
21. Goh Y, Teoh AB, Goh MK (2008) Wavelet based illumination invariant preprocessing in face
recognition. In: IEEE Congress on image and signal processing, vol 3, pp 421–425
22. González-Rivero M, Bongaerts P, Beijbom O, Pizarro O, Friedman A, Rodriguez-Ramirez A, Upcroft
B, Laffoley D, Kline D, Bailhache C et al (2014) The catlin seaview survey–kilometre-scale
seascape assessment, and monitoring of coral reef ecosystems. Aquat Conserv: Mar Freshwat Ecosyst
24(S2):184–198
23. Haar A (1910) Zur theorie der orthogonalen funktionensysteme. Math Ann 69(3):331–371
24. Harris S, Ballard R (1986) Argo: Capabilities for deep ocean exploration. In: IEEE OCEANS, pp 6–8
25. Hassan N, Ullah S, Bhatti N, Mahmood H, Zia M (2021) The retinex based improved underwater image
enhancement. Multimed Tools Appl 80(2):1839–1857
26. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern
Anal Mach Intell 33(12):2341–2353
27. Hsia CH, Chiang JS, Guo JM (2012) Memory-efficient hardware architecture of 2-D dual-mode lifting-
based discrete wavelet transform. IEEE Trans Circuits Syst Video Technol 23(4):671–683
28. Hsia CH, Guo JM, Chiang JS (2009) Improved low-complexity algorithm for 2-D integer lifting-
based discrete wavelet transform using symmetric mask-based scheme. IEEE Trans Circuits Syst Video
Technol 19(8):1202–1208
29. Hussein R, Shaban KB, El-Hag AH (2015) Wavelet transform with histogram-based threshold estimation
for online partial discharge signal denoising. IEEE Trans Instrum Meas 64(12):3601–3614
30. J M, K P (2012) Performance comparison of PCA, DWT-PCA and LWT-PCA for face image retrieval.
Comput Sci Eng 2(6):41–50
31. Jaffe JS (2015) Underwater optical imaging: The past, the present, and the prospects. IEEE J Ocean Eng
40(3):683–700
32. Ju M, Ding C, Zhang D, Guo YJ (2018) BDPK: Bayesian Dehazing using prior knowledge. IEEE Trans
Circuits Syst Video Technol 29(8):2349–2362
33. Lee D, Kim G, Kim D, Myung H, Choi HT (2012) Vision-based object detection and tracking for
autonomous navigation of underwater robots. Ocean Eng 48:59–68
34. Li J, Skinner KA, Eustice RM, Johnson-Roberson M (2018) WaterGan: Unsupervised generative net-
work to enable real-time color correction of monocular underwater images. IEEE Robot Autom Lett
3(1):387–394
35. Liu L, Liu B, Huang H, Bovik AC (2014) No-reference image quality assessment based on spatial and
spectral entropies. Signal Process Image Commun 29(8):856–863
36. Lu H, Li Y, Serikawa S (2015) Single underwater image descattering and color correction. In: IEEE
International conference on acoustics, speech and signal processing, pp 1623–1627
37. Mallat SG (1989) A theory for multiresolution signal decomposition: The wavelet representation. IEEE
Trans Pattern Anal Mach Intell 11(7):674–693

38. Mittal A, Moorthy AK, Bovik AC (2012) No-reference image quality assessment in the spatial domain.
IEEE Trans Image Process 21(12):4695–4708
39. Moorthy AK, Bovik AC (2010) A two-step framework for constructing blind image quality indices.
IEEE Signal Process Lett 17(5):513–516
40. Moorthy AK, Bovik AC (2011) Blind image quality assessment: From natural scene statistics to
perceptual quality. IEEE Trans Image Process 20(12):3350–3364
41. Naik DK, Rout DK (2014) Outdoor image enhancement: Increasing visibility under extreme haze and
lighting condition. In: IEEE International advance computing conference, pp 1081–1086
42. Nikan S, Ahmadi M (2014) Local gradient-based illumination invariant face recognition using local
phase quantisation and multi-resolution local binary pattern fusion. IET Image Process 9(1):12–21
43. Nishino K, Kratz L, Lombardi S (2012) Bayesian defogging. Int J Comput Vis 98(3):263–278
44. Olszewska JI (2013) Multi-scale, multi-feature vector flow active contours for automatic multiple-
face detection. In: Proceedings of the 6th international conference on bio inspired systems and signal
processing, pp 429–435
45. Ortega A, Jiang W, Fernandez P, Chrysafis CG (1999) Implementations of the discrete wavelet trans-
form: Complexity, memory, and parallelization issues. In: Wavelet applications in signal and image
processing VII, vol 3813, pp 386–400
46. Pan PW, Yuan F, Cheng E (2018) Underwater image de-scattering and enhancing using dehazeNet and
HWD. J Mar Sci Technol 26(4):531–540
47. Panetta K, Gao C, Agaian S (2015) Human-visual-system-inspired underwater image quality measures.
IEEE J Ocean Eng 41(3):541–551
48. Pereira DA, Fraza O et al (2004) Fiber bragg grating sensing system for simultaneous measurement of
salinity and temperature. Opt Eng 43(2):299–304
49. Priyadharsini R, Sharmila TS, Rajendran V (2018) A wavelet transform based contrast enhancement
method for underwater acoustic images. Multidim Syst Sign Process 29(4):1845–1859
50. Qiao X, Bao J, Zhang H, Zeng L, Li D (2017) Underwater image quality enhancement of sea cucumbers
based on improved histogram equalization and wavelet transform. Inf Process Agric 4(3):206–213
51. Rout DK, Subudhi BN, Veerakumar T, Chaudhury S (2018) Spatio-contextual gaussian mixture model
for local change detection in underwater video. Expert Syst Appl 97:117–136
52. Saad MA, Bovik AC, Charrier C (2010) A DCT statistics-based blind image quality index. IEEE Signal
Process Lett 17(6):583–586
53. Sahu G, Seal A, Krejcar O, Yazidi A (2021) Single image dehazing using a new color channel. J Vis
Commun Image Represent 74:103008
54. Serikawa S, Lu H (2014) Underwater image dehazing using joint trilateral filter. Comput Electr Eng
40(1):41–50
55. Shahbahrami A (2012) Algorithms and architectures for 2D discrete wavelet transform. J Supercomput
62(2):1045–1064
56. Singh H, Adams J, Mindell D, Foley B (2000) Imaging underwater for archaeology. J Field Archeol
27(3):319–328
57. Song W, Wang Y, Huang D, Liotta A, Perra C (2020) Enhancement of underwater images with statistical
model of background light and optimization of transmission map. IEEE Trans Broadcast 66(1):153–169
58. Souaidi M, El Ansari M (2019) Multi-scale analysis of ulcer disease detection from wce images. IET
Image Process 13(12):2233–2244
59. Srivastava V, Purwar RK (2017) A five-level wavelet decomposition and dimensional reduction approach
for feature extraction and classification of mr and ct scan images, vol 2017
60. Stark JA (2000) Adaptive image contrast enhancement using generalizations of histogram equalization.
IEEE Trans Image Process 9(5):889–896
61. Sweldens W (1998) The lifting scheme: A construction of second generation wavelets. SIAM J Math
Anal 29(2):511–546
62. Temple SE (2007) Effect of salinity on the refractive index of water: considerations for archer fish aerial
vision. J Fish Biol 70(5):1626–1629
63. Twardowski MS, Boss E, Macdonald JB, Pegau WS, Barnard AH, Zaneveld JRV (2001) A model for esti-
mating bulk refractive index from the optical backscattering ratio and the implications for understanding
particle composition in case I and case II waters. J Geophys Res 106(C7):14129–14142
64. Vasamsetti S, Mittal N, Neelapu BC, Sardana HK (2017) Wavelet based perspective on variational
enhancement technique for underwater imagery. Ocean Eng 141:88–100
65. Wang B, Zheng F, Li X, Zhang S (2019) Single image dehazing using manifold particle filter. In: 12Th
international congress on image and signal processing, biomedical engineering and informatics, pp 1–6

66. Wood R, Olszewska JI (2012) Lighting-variable adaboost based-on system for robust face detection.
In: Proceedings of the 5th international conference on bio-inspired systems and signal processing,
pp 494–497
67. Yassin AA, Ghadban RM, Saleh SF, Neima HZ (2013) Using discrete wavelet transformation to enhance
underwater image. Int J Comput Sci Issues 10(5):220–228
68. Yu H, Li X, Lou Q, Lei C, Liu Z (2020) Underwater image enhancement based on dcp and depth
transmission map. Multimed Tools Appl 79(27):20373–20390
69. Zervas ND, Anagnostopoulos GP, Spiliotopoulos V, Andreopoulos Y, Goutis CE (2001) Evaluation
of design alternatives for the 2-d-discrete wavelet transform. IEEE Trans Circuits Syst Video Technol
11(12):1246–1262
70. Zhang W, Dong L, Zhang T, Xu W (2021) Enhancing underwater image via color correction and bi-
interval contrast enhancement. Signal Process Image Commun 90:116030
71. Zhu Q, Mai J, Shao L (2015) A fast single image haze removal algorithm using color attenuation prior.
IEEE Trans Image Process 24(11):3522–3533
72. Zhuang P, Ding X (2020) Underwater image enhancement using an edge-preserving filtering retinex
algorithm. Multimed Tools Appl 79(25):1–21

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.

Affiliations

Deepak Kumar Rout1 · Badri Narayan Subudhi2 · T. Veerakumar1 ·


Santanu Chaudhury3 · John Soraghan4

Deepak Kumar Rout


deepak.y.rout@gmail.com
T. Veerakumar
tveerakumar@nitgoa.ac.in
Santanu Chaudhury
santanuc@ee.iitd.ac.in
John Soraghan
j.soraghan@strath.ac.uk
1 National Institute of Technology Goa, Goa, India
2 Indian Institute of Technology Jammu, Nagrota, Jammu, J&K, PIN-181 221, India
3 Indian Institute of Technology Jodhpur, Rajasthan, India
4 University of Strathclyde, Glasgow, UK
