Color appearance model

From Wikipedia, the free encyclopedia

A color appearance model (CAM) is a mathematical model that seeks to describe the perceptual aspects of human color vision, i.e. the way the appearance of a color depends on the viewing conditions and therefore does not correspond directly to the physical measurement of the stimulus source. (In contrast, a color model defines a coordinate space to describe colors, such as the RGB and CMYK color models.)

A uniform color space (UCS) is a color model that seeks to make the color-making attributes perceptually uniform, i.e. equal distances in the space correspond to equal amounts of perceived color difference. A CAM under a fixed viewing condition results in a UCS; a UCS with a modeling of variable viewing conditions results in a CAM. A UCS without such modeling can still be used as a rudimentary CAM.

Background

Color appearance

Color originates in the mind of the observer; “objectively”, there is only the spectral power distribution of the light that meets the eye. In this sense, any color perception is subjective. However, successful attempts have been made to map the spectral power distribution of light to human sensory response in a quantifiable way. In 1931, using psychophysical measurements, the International Commission on Illumination (CIE) created the XYZ color space[1] which successfully models human color vision on this basic sensory level.

However, the XYZ color model presupposes specific viewing conditions (such as the retinal locus of stimulation, the luminance level of the light that meets the eye, the background behind the observed object, and the luminance level of the surrounding light). Only if all these conditions stay constant will two identical stimuli, and therefore identical XYZ tristimulus values, create an identical color appearance for a human observer. If some conditions change, two identical stimuli with identical XYZ tristimulus values will create different color appearances (and vice versa: two different stimuli with different XYZ tristimulus values might create an identical color appearance).

Therefore, if viewing conditions vary, the XYZ color model is not sufficient, and a color appearance model is required to model human color perception.

Color appearance parameters

The basic challenge for any color appearance model is that human color perception does not work in terms of XYZ tristimulus values, but in terms of appearance parameters (hue, lightness, brightness, chroma, colorfulness and saturation). So any color appearance model needs to provide transformations (which factor in viewing conditions) from the XYZ tristimulus values to these appearance parameters (at least hue, lightness and chroma).

Color appearance phenomena

This section describes some of the color appearance phenomena that color appearance models try to deal with.

Chromatic adaptation

Chromatic adaptation describes the ability of human color perception to abstract from the white point (or color temperature) of the illuminating light source when observing a reflective object. For the human eye, a piece of white paper looks white no matter whether the illumination is blueish or yellowish. This is the most basic and most important of all color appearance phenomena, and therefore a chromatic adaptation transform (CAT) that tries to emulate this behavior is a central component of any color appearance model.

This allows for an easy distinction between simple tristimulus-based color models and color appearance models. A simple tristimulus-based color model ignores the white point of the illuminant when it describes the surface color of an illuminated object; if the white point of the illuminant changes, so does the color of the surface as reported by the simple tristimulus-based color model. In contrast, a color appearance model takes the white point of the illuminant into account (which is why a color appearance model requires this value for its calculations); if the white point of the illuminant changes, the color of the surface as reported by the color appearance model remains the same.

Chromatic adaptation is a prime example of two different stimuli, with different XYZ tristimulus values, creating an identical color appearance. If the color temperature of the illuminating light source changes, so do the spectral power distribution and thereby the XYZ tristimulus values of the light reflected from the white paper; the color appearance, however, stays the same (white).
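
As an illustration of how a CAT works in practice, the following Python sketch applies a von Kries-style adaptation using the Bradford cone-response matrix (the matrix ICC profiles use, as noted in the CIELAB section below): the stimulus is converted into an LMS-like space, each channel is scaled by the ratio of the destination and source white points, and the result is converted back to XYZ. This is a minimal sketch of the general technique, not the exact transform prescribed by any particular color appearance model; the white-point values in the example are the usual 2° observer values for illuminants A and D65.

```python
import numpy as np

# Bradford cone-response matrix (XYZ -> "sharpened" LMS-like space).
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt_von_kries(xyz, white_src, white_dst, m=M_BRADFORD):
    """Von Kries-style chromatic adaptation of an XYZ stimulus.

    xyz       -- tristimulus values of the stimulus under the source illuminant
    white_src -- XYZ of the source white point
    white_dst -- XYZ of the destination white point
    """
    m_inv = np.linalg.inv(m)
    lms = m @ np.asarray(xyz, dtype=float)
    lms_ws = m @ np.asarray(white_src, dtype=float)
    lms_wd = m @ np.asarray(white_dst, dtype=float)
    # Scale each cone-like channel by the ratio of the white points.
    lms_adapted = lms * (lms_wd / lms_ws)
    return m_inv @ lms_adapted

# Example: adapt a stimulus from illuminant A to D65.
white_a   = np.array([1.09850, 1.00000, 0.35585])
white_d65 = np.array([0.95047, 1.00000, 1.08883])
print(adapt_von_kries([0.5, 0.4, 0.2], white_a, white_d65))
```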

Hue appearance

Several effects change the perception of hue by a human observer:

  • Bezold–Brücke shift: Hue changes with luminance.
  • Abney effect: Hue changes with the addition of white light.

Contrast appearance


Several effects change the perception of contrast by a human observer:

  • Stevens effect: Contrast increases with luminance.
  • Bartleson–Breneman effect: Image contrast (of emissive images such as images on an LCD display) increases with the luminance of surround lighting.

Colorfulness appearance

There is an effect which changes the perception of colorfulness by a human observer:

  • Hunt effect: Colorfulness increases with luminance.

Brightness appearance

Several effects change the perception of brightness by a human observer:

  • Helmholtz–Kohlrausch effect: Brightness increases with saturation. Not modeled by CIECAM02.
  • Contrast appearance effects (see above), modeled by CIECAM02.

Spatial phenomena

Spatial phenomena only affect colors at a specific location of an image, because the human brain interprets this location in a specific contextual way (e.g. as a shadow instead of gray color). These phenomena are also known as optical illusions. Because of their contextuality, they are especially hard to model; color appearance models that try to do this are referred to as image color appearance models (iCAM).

Color appearance models

Since the color appearance parameters and color appearance phenomena are numerous and the task is complex, there is no single color appearance model that is universally applied; instead, various models are used.

This section lists some of the color appearance models in use. The chromatic adaptation transforms for some of these models are listed in LMS color space.

CIELAB

In 1976, the CIE set out to replace the many existing, incompatible color difference models with a new, universal model for color difference. They tried to achieve this goal by creating a perceptually uniform color space (UCS), i.e. a color space where equal distances correspond to equal amounts of perceived color difference. Though they succeeded only partially, they thereby created the CIELAB (“L*a*b*”) color space, which had all the necessary features to become the first color appearance model. While CIELAB is a very rudimentary color appearance model, it is one of the most widely used because it has become one of the building blocks of color management with ICC profiles. It is therefore practically omnipresent in digital imaging.

One of the limitations of CIELAB is that it does not offer a full-fledged chromatic adaptation transform: it applies a von Kries-style scaling directly in the XYZ color space (often referred to as a “wrong von Kries transform”) instead of first converting to the LMS color space, which gives more accurate results. ICC profiles circumvent this shortcoming by using the Bradford transformation matrix to the LMS color space (which had first appeared in the LLAB color appearance model) in conjunction with CIELAB.

Due to the "wrong" transform, CIELAB is known to perform poorly when a non-reference white point is used, making it a poor CAM even for its limited inputs. The wrong transform also seems responsible for its irregular blue hue, which bends towards purple as L changes, making it also a non-perfect UCS.

Nayatani et al. model

The Nayatani et al. color appearance model focuses on illumination engineering and the color rendering properties of light sources.

Hunt model

The Hunt color appearance model focuses on color image reproduction (its creator worked in the Kodak Research Laboratories). Development started in the 1980s, and by 1995 the model had become very complex (including features no other color appearance model offers, such as incorporating rod cell responses) and could predict a wide range of visual phenomena. It had a very significant impact on CIECAM02, but because of its complexity the Hunt model itself is difficult to use.

RLAB

RLAB tries to improve upon the significant limitations of CIELAB with a focus on image reproduction. It performs well for this task and is simple to use, but not comprehensive enough for other applications.

Unlike CIELAB, RLAB uses a proper von Kries step. It also allows the degree of adaptation to be tuned through a customizable D value; "discounting-the-illuminant" can still be obtained by fixing D at 1.0.[2]
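
The idea of a degree-of-adaptation value can be illustrated generically: D = 1 means the illuminant is fully discounted, D = 0 means no adaptation, and intermediate values blend the two. The sketch below shows one common way to express this as a linear blend of the von Kries scale factors towards 1.0; it is a schematic illustration, not RLAB's exact formulation.

```python
import numpy as np

def adaptation_scale(lms_white_src, lms_white_dst, degree):
    """Per-channel von Kries scale factors with a degree of adaptation D.

    degree = 1.0 -> complete adaptation (discounting the illuminant)
    degree = 0.0 -> no adaptation (all scale factors equal 1)
    Schematic illustration only; individual CAMs define D differently.
    """
    full = np.asarray(lms_white_dst) / np.asarray(lms_white_src)
    return degree * full + (1.0 - degree)

print(adaptation_scale([0.9, 1.0, 0.8], [1.0, 1.0, 1.0], degree=1.0))
print(adaptation_scale([0.9, 1.0, 0.8], [1.0, 1.0, 1.0], degree=0.5))
```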

LLAB

LLAB is similar to RLAB: it also tries to stay simple, but additionally aims to be more comprehensive. In the end it traded some simplicity for comprehensiveness while still not being fully comprehensive. Since CIECAM97s was published soon thereafter, LLAB never gained widespread usage.

CIECAM97s

After starting the evolution of color appearance models with CIELAB, the CIE followed up in 1997 with a comprehensive color appearance model. The result was CIECAM97s, which was comprehensive but also complex and partly difficult to use. It gained widespread acceptance as a standard color appearance model until CIECAM02 was published.

IPT

Ebner and Fairchild addressed the issue of non-constant lines of hue in their color space dubbed IPT.[3] The IPT color space converts D65-adapted XYZ data (X_D65, Y_D65, Z_D65) to long-medium-short cone response data (LMS) using an adapted form of the Hunt–Pointer–Estevez matrix (M_HPE(D65)).[4]

The IPT color appearance model excels at providing a formulation for hue where a constant hue value equals a constant perceived hue independent of the values of lightness and chroma (which is the general ideal for any color appearance model, but hard to achieve). It is therefore well-suited for gamut mapping implementations.
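
The published forward transform is compact enough to sketch: D65-adapted XYZ is mapped to LMS with the adapted Hunt–Pointer–Estevez matrix, a signed 0.43 power nonlinearity is applied, and a second matrix yields I (lightness-like), P (roughly red–green) and T (roughly yellow–blue). The matrix values below are quoted from the published model and should be checked against the original paper before use.

```python
import numpy as np

# Adapted Hunt–Pointer–Estevez matrix: D65-adapted XYZ -> LMS.
M_XYZ_TO_LMS = np.array([
    [ 0.4002, 0.7075, -0.0807],
    [-0.2280, 1.1500,  0.0612],
    [ 0.0000, 0.0000,  0.9184],
])

# Nonlinear LMS' -> IPT.
M_LMS_TO_IPT = np.array([
    [0.4000,  0.4000,  0.2000],
    [4.4550, -4.8510,  0.3960],
    [0.8056,  0.3572, -1.1628],
])

def xyz_d65_to_ipt(xyz):
    """Forward IPT transform for D65-adapted XYZ values (Y of white = 1)."""
    lms = M_XYZ_TO_LMS @ np.asarray(xyz, dtype=float)
    # Signed power nonlinearity with exponent 0.43.
    lms_prime = np.sign(lms) * np.abs(lms) ** 0.43
    return M_LMS_TO_IPT @ lms_prime

# Example: the D65 white point should map to roughly I = 1, P = T = 0.
print(xyz_d65_to_ipt([0.95047, 1.00000, 1.08883]))
```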

ICtCp

ITU-R BT.2100 includes a color space called ICtCp, which improves on the original IPT by targeting higher dynamic range and larger colour gamuts.[5] ICtCp can be transformed into an approximately uniform color space by scaling Ct by 0.5. This transformed color space is the basis of the ITU-R BT.2124 wide-gamut color difference metric ΔE_ITP.[6]
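
The resulting colour-difference metric is simple: I is used as-is, T is Ct scaled by 0.5, P is Cp, and ΔE_ITP is a scaled Euclidean distance between two such triples. The sketch below uses the scaling factor of 720 given in BT.2124, chosen so that a value of 1 is intended to correspond to a just-noticeable difference.

```python
import math

def delta_e_itp(ictcp_1, ictcp_2):
    """ΔE_ITP colour difference between two ICtCp triples (I, Ct, Cp).

    Per ITU-R BT.2124: T is Ct scaled by 0.5, P is Cp unchanged, and the
    Euclidean distance is multiplied by 720 so that a value of 1 is
    intended to correspond to a just-noticeable difference.
    """
    i1, ct1, cp1 = ictcp_1
    i2, ct2, cp2 = ictcp_2
    d_i = i1 - i2
    d_t = 0.5 * (ct1 - ct2)
    d_p = cp1 - cp2
    return 720.0 * math.sqrt(d_i * d_i + d_t * d_t + d_p * d_p)

print(delta_e_itp((0.50, 0.01, 0.02), (0.51, 0.01, 0.02)))
```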

CIECAM02

After the success of CIECAM97s, the CIE developed CIECAM02 as its successor and published it in 2002. It performs better and is simpler at the same time. Apart from the rudimentary CIELAB model, CIECAM02 comes closest to an internationally agreed upon “standard” for a (comprehensive) color appearance model.

Both CIECAM02 and CIECAM16 have some undesirable numerical properties when implemented to the letter of the specification.[7]

iCAM06

iCAM06 is an image color appearance model. As such, it does not treat each pixel of an image independently, but in the context of the complete image. This allows it to incorporate spatial color appearance parameters like contrast, which makes it well-suited for HDR images. It is also a first step to deal with spatial appearance phenomena.

CAM16

CAM16 is a successor of CIECAM02 with various fixes and improvements. It also comes with a color space called CAM16-UCS. It is published by a CIE workgroup, but is not a CIE standard.[8] The CIECAM16 standard, released in 2022, differs slightly from it.[9][10]

CAM16 is used in the Material Design color system in a cylindrical version called "HCT" (hue, chroma, tone). The hue and chroma values are identical to CAM16. The "tone" value is CIELAB L*.[11]

OKLab

OKLab is a 2020 UCS designed for normal-dynamic-range color. It has the same structure as CIELAB, but is fitted to improved data (CAM16 output for lightness and chroma; IPT data for hue). It is meant to be easy to implement and use (especially from sRGB), just like CIELAB and IPT were, but with improvements to uniformity.[12]
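
Because the conversion from linear sRGB is just two 3×3 matrices around a cube-root nonlinearity, it is easy to sketch. The matrix values below are those published in Ottosson's reference implementation; the code assumes linear (not gamma-encoded) sRGB input and is meant only as an illustration.

```python
import numpy as np

# Linear sRGB -> approximate cone responses (values from Ottosson's
# reference implementation).
M1 = np.array([
    [0.4122214708, 0.5363325363, 0.0514459929],
    [0.2119034982, 0.6806995451, 0.1073969566],
    [0.0883024619, 0.2817188376, 0.6299787005],
])

# Nonlinear cone responses -> Oklab (L, a, b).
M2 = np.array([
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
])

def linear_srgb_to_oklab(rgb):
    """Convert a linear sRGB triple (components in [0, 1]) to Oklab."""
    lms = M1 @ np.asarray(rgb, dtype=float)
    lms_prime = np.cbrt(lms)          # cube-root nonlinearity
    return M2 @ lms_prime

# Example: pure white should map to approximately L = 1, a = b = 0.
print(linear_srgb_to_oklab([1.0, 1.0, 1.0]))
```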

As of September 2023, it is part of the CSS color level 4 draft[13] and it is supported by recent versions of all major browsers.[14]

Other models

OSA-UCS
A 1947 UCS with generally good properties and a conversion from CIEXYZ defined in 1974. The conversion to CIEXYZ, however, has no closed-form expression, making it hard to use in practice.
SRLAB2
A 2009 modification of CIELAB in the spirit of RLAB (with discounting-the-illuminant). Uses the CIECAM02 chromatic adaptation matrix to fix the blue hue issue.[15]
JzAzBz
A 2017 UCS designed for HDR color. Has J (lightness) and two chromaticities.[16]
XYB
A family of UCS used in Guetzli and JPEG XL, aimed mainly at compression. Better uniformity than CIELAB.[15]

Notes

  1. ^ “XYZ” refers to a color model and a color space at the same time, because the XYZ color space is the only color space that uses the XYZ color model. This differs from e.g. the RGB color model, which many color spaces (such as sRGB or Adobe RGB (1998)) use.
  2. ^ "The RLAB Model". Color Appearance Models. 2013. pp. 243–255. doi:10.1002/9781118653128.ch13. ISBN 9781119967033.
  3. ^ Ebner; Fairchild (1998), Development and Testing of a Color Space with Improved Hue Uniformity, Proc. IS&T 6th Color Imaging Conference, Scottsdale, AZ, pp. 8–13.
  4. ^ Edge, Christopher. "US Patent 8,437,053, Gamut mapping using hue-preserving color space". Retrieved 9 February 2016.
  5. ^ ICtCp Introduction (PDF), 2016
  6. ^ "Recommendation ITU-R BT.2124-0 Objective metric for the assessment of the potential visibility of colour differences in television" (PDF). January 2019.
  7. ^ Schlömer, Nico (2018). Algorithmic improvements for the CIECAM02 and CAM16 color appearance models. arXiv:1802.06067.
  8. ^ Li, Changjun; Li, Zhiqiang; Wang, Zhifeng; Xu, Yang; Luo, Ming Ronnier; Cui, Guihua; Melgosa, Manuel; Brill, Michael H.; Pointer, Michael (December 2017). "Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS". Color Research & Application. 42 (6): 703–718. doi:10.1002/col.22131.
  9. ^ "The CIE 2016 Colour Appearance Model for Colour Management Systems: CIECAM16 | CIE". cie.co.at. Retrieved 2022-09-16.
  10. ^ "PR: Implement support for "CIECAM16" colour appearance model. by KelSolaar · Pull Request #1015 · colour-science/colour". GitHub. Retrieved 2022-09-16.
  11. ^ O'Leary, James. "The Science of Color & Design". Material Design. source code
  12. ^ Ottosson, Björn. "A perceptual color space for image processing".
  13. ^ "CSS Color Module Level 4". www.w3.org.
  14. ^ "oklab() (Oklab color model)". Can I use... Retrieved 27 September 2023.
  15. ^ Levien, Raph (18 January 2021). "An interactive review of Oklab".
  16. ^ Safdar, Muhammad; Cui, Guihua; Kim, Youn Jin; Luo, Ming Ronnier (26 June 2017). "Perceptually uniform color space for image signals including high dynamic range and wide gamut". Optics Express. 25 (13): 15131–15151. Bibcode:2017OExpr..2515131S. doi:10.1364/OE.25.015131. PMID 28788944.
