lαβ Based Color Correction
Underwater 3D Recording and Modeling, 16-17 April 2015, Piano di Sorrento, Italy
a DIMEG, University of Calabria, 87036 Rende (CS), Italy - (gianfranco.bianco, muzzupappa, f.bruno)@unical.it
b Computer Vision and Robotics Group, University of Girona, 17001, Spain - rafael.garcia@udg.edu
c ICREA, Barcelona, Spain - lneumann@silver.udg.edu
Commission V
KEY WORDS: Underwater Imaging, Color Correction, Ruderman space, Computational Color Constancy
ABSTRACT:
Recovering correct, or at least realistic, colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. Correcting the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the physical parameters of the medium; some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and a histogram cutoff and stretching of the luminance component is performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption, supposing uniform illumination of the scene. Moreover, due to its low computational cost, the method is suitable for real-time implementation.
1. INTRODUCTION
variations. As an example of a possible application, we have processed an image set of an underwater find with our method in order to obtain a 3D reconstruction using a structure-from-motion technique. The proposed method requires just a single image and does not need any particular filters or prior knowledge of the scene. Moreover, due to its low computational cost it is suitable for real-time implementation.
The remainder of the paper is organized as follows: in section 2 the main image enhancement methods for underwater imaging are summarized. The color alteration problem in underwater environments is then described in section 3, and some background details are presented in section 4. The color correction method is proposed in section 5 and the experimental results are reported in section 6. Finally, conclusions and future work are discussed in section 7.
2. RELATED WORK
process underwater images. It involves several successive independent processing steps which correct non-uniform illumination, suppress noise, enhance contrast and adjust colors. In particular, color correction is achieved by equalizing the mean of each color channel: a linear translation of the histogram (for each RGB channel) is performed by adding to each pixel the difference between the desired mean value and the mean of the channel. Images taken under different conditions are shown and processed.
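This per-channel histogram translation can be sketched in a few lines of NumPy; the function name and the desired mean value below are illustrative choices, not taken from the cited work:

```python
import numpy as np

def translate_channel_means(img, desired_mean=0.5):
    """Shift each RGB channel so its mean matches a desired gray level.

    img: float RGB array in [0, 1], shape (H, W, 3).
    """
    out = img.copy()
    for c in range(3):
        # Add the difference between the target mean and the channel mean.
        out[..., c] += desired_mean - img[..., c].mean()
    return np.clip(out, 0.0, 1.0)
```

After the translation each channel has the same mean, which removes a global color cast under the assumption that the scene should average to gray.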
A different type of equalization is proposed in (Chambah et al., 2003), using Automatic Color Equalization (ACE) for automatic live fish recognition. This method merges two computational color constancy principles: gray world and retinex white patch. Lightness constancy makes the perception of the scene stable regardless of changes in mean luminance intensity, and color constancy makes it stable regardless of changes in the color of the illuminant. ACE is able to adapt to widely varying lighting conditions.
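For reference, the plain gray-world correction that such methods build upon can be sketched as follows; this is a generic multiplicative (von Kries-style) variant, not ACE itself, and the function name is our own:

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world white balance: estimate the illuminant as the per-channel
    mean and divide it out (von Kries-style per-channel scaling).

    img: float RGB array in [0, 1], shape (H, W, 3).
    """
    g = img.reshape(-1, 3).mean(axis=0)   # illuminant estimate, one value per channel
    g = g / g.mean()                      # normalise so overall brightness is kept
    return np.clip(img / g, 0.0, 1.0)
```

Dividing by the normalized channel means brings the three channel averages to a common value, i.e. the scene average becomes gray.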
An approach based on the fusion principle has been presented by Ancuti et al. (2012) for the enhancement of images and videos acquired in different lighting conditions. The two inputs are color-corrected and contrast-enhanced versions of the original underwater image, blended through four weight maps that aim to increase visibility. Color correction is performed by a white balancing process able to remove the color casts and also to recover the white and gray shades of the image, based on the gray-world assumption. In particular, the authors propose to increase the estimated average value by a percentage in the range [0, 0.5].
A very different approach has been used by Petit et al. (2009), who propose a method based on light attenuation inversion after processing a color space contraction using quaternions. Applied to white, the attenuation gives a hue vector characterizing the water color. Using this reference axis, geometrical transformations in the color space are computed with quaternions. Pixels of the water areas of processed images are moved to gray or to colors with a low saturation, whereas the objects remain fully colored. Thus, the contrast of the observed scene is significantly improved and the difference between the background and the rest of the image is increased. Moreover, a Principal Component Analysis (PCA) calculation makes it possible to extract a dominant color of the image; hence, in most cases, using the water color in the method described above also provides good results in terms of color enhancement. The results are shown on images with a large presence of water areas.
In (Gouinaud et al., 2011) the authors introduce a Color Logarithmic Image Processing (CoLIP) framework, which extends logarithmic image processing theory to color images in the context of human color visual perception. The color correction performs a CoLIP subtraction of the color in the image that should appear white to the eye under daylight illumination, that is, of a pixel of the image that would be recognized as white under standard illumination. From the LMS space expressed following the LIP approach, a three-component function is defined (the color tone function). Its first component is the achromatic tone function, and the last two components are the chromatic antagonist tone functions. This method has been successfully evaluated on both underwater images (blue illuminant) and indoor reflected-light images (yellow illuminant).
Color recovery is also analysed by Torres-Mendez and Dudek (2005), but from a different perspective: it is formulated as an energy minimization problem using learned constraints. An image is modeled as a sample function of a Markov Random Field, and color correction is treated as the task of assigning to each pixel of the input image the color value that best describes its surrounding structure, using training image patches. Training images are small patches of regions of interest that capture the maximum of the intensity variations of the image to be restored. The im-
The sunlight that penetrates from the air-water interface into the water is altered by several causes. Firstly, the visible spectrum is modified with depth because of the absorption of radiation at the different wavelengths; in particular, the radiation with lower frequency is absorbed more strongly. Thus, the red components already disappear within about 5 m (shallow waters), the orange at 7.5 m, the yellow in the range of 10-14 m, and the green at about 21 m. A schematic representation of this color loss is illustrated in Figure 1. Evidently, the more the depth increases, the more the lower-temperature (warmer) colors disappear, producing greenish-blue scenes. The scattering (backward and forward) and absorption effects due to the particles dissolved and suspended in the medium, like salt, clay and micro-organisms, must also be taken into account. The greater the dissolved and suspended matter, the greener (or browner) the water becomes. These particles not only alter the colors but also further attenuate and diffuse the light, with a consequent decrease of contrast and increase of blur and noise in the captured images. Note that when we capture an underwater image horizontally, the light path comprises both the depth at which we operate and the horizontal distance from the target (working distance); hence, the radiation suffers the absorption twice. As depicted in Figure 1, the light intensity that penetrates into the water already decays by 50% at 10 m. Moreover, under water the color of the light depends on other factors such as the coastal proximity, the season, the water surface conditions, the time of day, the cloudiness of the sky, the presence of marine vegetation and the soil type. For the reasons listed above, it is clear that
4. BACKGROUND
Before describing the proposed color correction method, we provide some details about color constancy, white balancing, the image formation model and the gray-world assumption. The lαβ color space is also introduced.
4.1
that the object surfaces are Lambertian, which means that the surface reflects light equally in all directions. These assumptions can be considered too restrictive, but they allow the use of a simple image formation model and are nonetheless acceptable in many real cases of close-range acquisition in the downward direction, like seafloor mapping, or of capturing objects in the foreground or less-complicated scenes where there are no strong light variations. Thus, we can formulate the problem in a simplified manner following the computational color constancy approach. In the image formation model (Gonzalez and Woods, 1992), as represented in Figure 2, an underwater image f_i(m, n) can be expressed by:
f_i(m, n) = g_i(m, n) · r_i(m, n)    (1)

where

g_i(m, n) = illuminant
r_i(m, n) = reflectance
(m, n) = image coordinates

and with i ∈ {1, 2, 3} we indicate the three color components of the image, corresponding in RGB space to the {R, G, B} channels. Correcting the color casts of an image means computing the illuminant g_i and removing its chromatic components from f_i. In order to compute the color of the illuminant, further assumptions are needed, and for this purpose several computational color constancy methods have been proposed in the literature (Gijsenij et al., 2011). Under the gray-world assumption used in our method, the illuminant is spatially uniform and, since the average reflectance of the scene is assumed to be achromatic, it can be estimated (up to a scale factor) from the average of each channel:

f̄_i = (1 / MN) Σ_{m=1..M} Σ_{n=1..N} f_i(m, n)    (2)

lαβ color space

Here, we describe the lαβ space following the transformation matrices suggested in (Reinhard et al., 2001), which we report in the Appendix for the reader's convenience. The steps to convert RGB coordinates into the lαβ space are summarized in the flow chart of Figure 3. First of all, the RGB image has to be corrected for the display non-linearity (gamma correction) in order to work with linear RGB coordinates. This step is necessary for the next one, the conversion into XYZ tristimulus values, which is achieved by multiplying the linear RGB coordinates f_i(m, n) by the T_xyz,ij matrix, where i and j are the matrix indices according to the Einstein notation:

XYZ_i(m, n) = T_xyz,ij f_j(m, n)    (3)

The XYZ coordinates are then converted into the LMS cone space, the LMS values are compressed logarithmically, and the result is decorrelated into the lαβ components:

LMS_i(m, n) = T_lms,ij XYZ_j(m, n)    (4)

LMS_log,i(m, n) = log10 LMS_i(m, n)    (5)

lαβ_i(m, n) = T_pca,ij LMS_log,j(m, n)    (6)
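The RGB → XYZ → LMS → log → lαβ chain described above can be sketched as follows, using the conversion matrices reported by Reinhard et al. (2001); the gamma value and the clipping thresholds are our own illustrative choices:

```python
import numpy as np

# Conversion matrices from Reinhard et al. (2001).
T_XYZ = np.array([[0.5141, 0.3239, 0.1604],
                  [0.2651, 0.6702, 0.0641],
                  [0.0241, 0.1228, 0.8444]])
T_LMS = np.array([[ 0.3897, 0.6890, -0.0787],
                  [-0.2298, 1.1834,  0.0464],
                  [ 0.0,    0.0,     1.0   ]])
T_PCA = np.array([[1 / np.sqrt(3), 0, 0],
                  [0, 1 / np.sqrt(6), 0],
                  [0, 0, 1 / np.sqrt(2)]]) @ np.array([[1,  1,  1],
                                                       [1,  1, -2],
                                                       [1, -1,  0]])

def rgb_to_lab_ruderman(rgb, gamma=2.2):
    """Convert gamma-encoded RGB (float array, shape (..., 3)) to the
    Ruderman lαβ space: linearise, go to XYZ, then LMS, take log10,
    then decorrelate with the PCA matrix."""
    linear = np.clip(rgb, 1e-6, 1.0) ** gamma        # undo the display gamma
    xyz = linear @ T_XYZ.T
    lms = xyz @ T_LMS.T
    log_lms = np.log10(np.clip(lms, 1e-6, None))     # avoid log of zero
    return log_lms @ T_PCA.T
```

For a white or gray input pixel the chromatic components come out close to zero, consistent with the white point mapping to the origin of the lαβ space.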
The average of each lαβ component over the image is computed as:

l̄αβ,i = (1 / MN) Σ_{m=1..M} Σ_{n=1..N} lαβ,i(m, n)    (7)
6. EXPERIMENTAL RESULTS
where l'lms,i(m, n) are the corrected coordinates in the LMS space.

l'αβ,i(m, n) = lαβ,i(m, n) - l̄αβ,i    (10)

where l'αβ,i(m, n) are the corrected coordinates, lαβ,i(m, n) the components of the image in the lαβ space and l̄αβ,i its average. In order to correct the color casts of an image, the α and β components (i ∈ {2, 3}) must be changed, as suggested in (Reinhard et al., 2001), while the luminance component remains unchanged, at least regarding the translation of the mean; it is instead processed in a different way, to improve the image contrast, as described in the next section. After the color correction processing, we must transfer the result back to RGB for display by applying the inverse operations. It has to be noted that the white point ({1, 1, 1} in RGB space) has coordinates {0, 0, 0} in the lαβ space. This means that the method shifts the α and β components with respect to the coordinates {0, 0}, centring the distributions of these components on the white point. Correcting image casts with respect to the white point in this space seems more realistic than adjusting the distributions of the color channels around a gray point as in RGB space. We can conclude that the gray-world assumption is transformed into a white-world assumption in the lαβ space.
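The two operations described in this section, centring α and β on the white point and applying a histogram cutoff and stretch to the luminance, can be sketched on an already-converted lαβ array as follows; the cutoff fraction and the [0, 1] stretch target are illustrative choices, not values from the paper:

```python
import numpy as np

def correct_lab(lab, cutoff=0.01):
    """White-balance the chromatic channels and stretch the luminance.

    lab: array of lαβ coordinates, shape (H, W, 3), channel order (l, alpha, beta).
    """
    out = lab.copy()
    # Shift alpha and beta so their means sit on the white point (0, 0).
    for c in (1, 2):
        out[..., c] -= out[..., c].mean()
    # Percentile cutoff and linear stretch of the luminance channel;
    # here stretched to [0, 1] purely for illustration.
    lo, hi = np.quantile(out[..., 0], [cutoff, 1.0 - cutoff])
    out[..., 0] = (np.clip(out[..., 0], lo, hi) - lo) / (hi - lo)
    return out
```

In a full pipeline the corrected array would then be transferred back to RGB by applying the inverse of the lαβ conversion chain.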
From the study of the state of the art we found that underwater image datasets provided with calibrated ground truth are not available in the literature, unlike for the in-air applications of computational color constancy algorithms (Gijsenij et al., 2011). Hence we limit the performance evaluation of our method to a qualitative evaluation by visual inspection. We have compared the white balancing results of our method with those obtained in other similar color spaces, in order to investigate whether the lαβ space gives similar or better results. For this reason we have performed this analysis in the CIELab, YCrCb, CIECAM97 and CIELuv spaces, which, like the lαβ space, are composed of a luminance component and two chromatic components. Similarly to the white balancing done in the lαβ space, we have shifted the median values of the chromatic components around the respective white point. Moreover, we have processed the underwater images of Figure 4 with a
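As an illustration of the same centring operation in one of the comparison spaces, the following sketch works in YCbCr (BT.601 weights, analog form), whose chroma plane has its neutral point at (0, 0); this generic code is ours, not the exact procedure used in the experiments:

```python
import numpy as np

# BT.601 luma weights; analog Cb/Cr are zero for neutral grays,
# so the "white point" of the chroma plane is (0, 0).
def rgb_to_ycbcr(rgb):
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = (rgb[..., 2] - y) * 0.564
    cr = (rgb[..., 0] - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def white_balance_ycbcr(rgb):
    """Shift the means of Cb and Cr to the neutral point (0, 0)."""
    ycc = rgb_to_ycbcr(rgb)
    ycc[..., 1] -= ycc[..., 1].mean()
    ycc[..., 2] -= ycc[..., 2].mean()
    return np.clip(ycbcr_to_rgb(ycc), 0.0, 1.0)
```

The same shift-to-neutral pattern applies in any luminance-chrominance space; only the location of the white point and the conversion formulas change.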
3D reconstruction application
As an example of a possible application, we have processed an image set of an underwater find with our method in order to obtain a 3D reconstruction using structure-from-motion software. The dataset was captured in the archaeological site of Kaulon in Monasterace (Calabria, Italy), depicting the remainder of an ancient pillar. It is composed of 45 images with a resolution of 980 x 1479 pixels. We have pre-processed all the images with our method and then fed them into a structure-from-motion algorithm. The results of the color correction and the 3D model obtained from the corrected dataset are shown in Figure 9. As an alternative, the texture applied to the 3D model and obtained by UV mapping can be processed directly.
REFERENCES
Ancuti, C., Ancuti, C. O., Haber, T., Bekaert, P., 2012. Enhancing
underwater images and videos by fusion. In Computer Vision and
Pattern Recognition (CVPR), IEEE Conference on, pp. 81-88.
Bazeille, S., Quidu, I., Jaulin, L., Malkasse, J.P., 2006. Automatic
Underwater Image Pre-Processing. CMM06 - CARACTERISATION DU MILIEU MARIN, 16-19 Oct.
Exploration of Submerged Structures: a Case Study in the Underwater Archaeological Site of Baia (Italy). In VAST: International
Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, Eurographics Association, pp. 121-128.
Garcia, R., Nicosevici, T., Cufi, X., 2002. On the way to solve
lighting problems in underwater imaging, In OCEANS 02 MTS /
IEEE, pp.1018-1024 vol.2, 29-31 Oct.
Ghani, A. S. A., Isa, N. A. M., 2014. Underwater image quality
enhancement through composition of dual-intensity images and
Rayleigh-stretching. SpringerPlus, 3(1), 757.
"
Txyz,ij =
(11)
#
(12)
"
Tlms,ij =
(13)
#
(14)
Tpca,ij
= 0
0
0
1
6
"
0
0
1
2
1
1
1
1
1 2
1 0
(15)
#
(16)
Petit, F., Capelle-Laizé, A., Carré, P., 2009. Underwater image enhancement by attenuation inversion with quaternions. In Acoustics, Speech and Signal Processing, IEEE International Conference on, pp. 1177-1180.
Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P., 2001. Color Transfer between Images. IEEE Computer Graphics and Applications, vol. 21, no. 5, pp. 34-41.
Ruderman, D., Cronin, T., Chiao, C., 1998. Statistics of cone
responses to natural images: implications for visual coding, J.
Opt. Soc. Am. A 15, 2036-2045.
Schechner, Y., Averbuch, Y., 2007. Regularized image recovery
in scattering media. IEEE Trans Pattern Anal Mach Intell.
Schettini, R., Corchs, S., 2010. Underwater image processing:
state of the art of restoration and image enhancement methods.
EURASIP Journal on Advances in Signal Processing 2010, 14.
Shamsuddin, N. B., Fatimah, W., Ahmad, W., Baharudin, B. B.,
Kushairi, M., Rajuddin, M., Mohd, F. B., 2012. Significance
Level of Image Enhancement Techniques for Underwater Images.
In IVIC 2012.
Torres-Méndez, L. A., Dudek, G., 2005. Color correction of underwater images for aquatic robot inspection. In Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 60-73, Springer Berlin Heidelberg.
Worldadventuredivers, 2015. http://worldadventuredivers.com/,
Last accessed on 2015, March 14.
APPENDIX
We report the conversion matrices used by Reinhard et al. (2001).