PixelLight Compositing
Documentation
The content of this PixelLight document is published under the Creative Commons
Attribution-NonCommercial-ShareAlike 3.0 Unported license.
Copyright 2002-2012 by The PixelLight Team
Contents
1 Introduction
  1.1 External Dependencies
2 Scene Renderer
  2.1 Deferred Scene Renderer
    2.1.1 Geometry Buffer (GBuffer) Layout
    2.1.2 Lighting
  2.2 Post Process Effects
    2.2.1 Scene Render Pipeline Post Processing
    2.2.2 Post Post Processing
3 Contact
Abbreviations
1 Introduction
Target Audience This document is meant for C++ programmers.
Motivation Remember the realtime CG days of the previous millennium, when the nose
of a character consisted of just three triangles? Those were the days when rendering
a scene was mostly about rasterizing the triangles that meshes are made of. Nowadays,
rasterizing mesh triangles is just one part of the complete rendering process. The final
image you see on your screen is the result of many compositing steps - just like movies
with tons of special effects produce the final image by compositing multiple image layers.
The PLCompositing component is the place where the compositing steps within the
PixelLight framework are implemented. While the scene graph is a representation of the
scene data, the task of the scene rendering and compositing system is to take the scene
graph and all assigned data and bring them onto the computer monitor in the best way
possible. For legacy hardware, scene rendering may just mean rendering the scene
using simple textures - on decent hardware, the scene may be rendered using dynamic
lighting and shadowing as well as many special effects like normal mapping,
Screen-Space Ambient Occlusion (SSAO), High Dynamic Range (HDR) rendering and so on.
2 Scene Renderer
The scene renderer system of PixelLight consists of several render passes. One can add
or remove scene renderer passes as desired. The scene renderer passes can be categorized
in the following way:

Fixed function: For legacy hardware without shader support and just fixed built-in
graphics features

Forward: A classic forward renderer using shaders. Each object is drawn again per
light.

Deferred: A modern deferred renderer approach performing, for example, lighting
in image space

This document only describes the deferred scene renderer of PixelLight.
2.1 Deferred Scene Renderer

Among the scene renderer passes used by the deferred scene renderer are:

PLCompositing::SRPDeferredEdgeAA
PLCompositing::SRPDeferredDOF
PLCompositing::SRPEndHDR (independent from the deferred scene renderer)
PLCompositing::SRPDeferredGBufferDebug
Terminology Often there are several names describing the same thing. Therefore, here's
a list of the names PixelLight is using (the first one in each line):

view space = camera space = eye space
clip space = projection space
Data Driven The deferred scene renderer uses a data driven approach. This means
that instead of forcing the user to define shaders by hand, the user only provides
material descriptions, and the scene renderer has to interpret them as best as possible.
This way, the user only has to create the data and materials once, and it's then possible
to use one and the same data set with totally different scene renderer techniques without
the need to rewrite all material descriptions and shaders. By using this concept, the
scene renderer is also able to optimize automatically and to scale with the available
hardware.
Über Shaders The deferred scene renderer uses so called über shaders. This simply
means that the shaders of the scene renderer are written in a high level shader language
and make heavy use of preprocessor directives like #ifdef. Therefore, most of the time
there's only one high level shader that takes everything into account that is supported.
At runtime, only the features a given material is really using are taken into account.
This means that many versions of one shader are compiled during runtime, resulting in
effective shaders. Über shaders may look confusing at first glance, but they have
the big advantage of code reuse. If there's a bug, this bug will probably influence all
or most compiled shader versions, and therefore the probability that it will be found and
fixed is high. The bug only has to be fixed in one place, not in hundreds of places,
which would just lead to a high probability of even more bugs.
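As a minimal Cg sketch of this concept (the FS_DIFFUSEMAP define and all parameter
names here are made up for illustration and not taken from the PixelLight sources),
one shader source can be specialized at runtime:

// One high level shader source; the engine prepends for instance
// "#define FS_DIFFUSEMAP" before compiling it for a material that
// actually uses a diffuse map
float4 main(float2 TexCoord : TEXCOORD0,
            uniform float4 DiffuseColor
#ifdef FS_DIFFUSEMAP
          , uniform sampler2D DiffuseMap
#endif
           ) : COLOR
{
	float4 color = DiffuseColor;
#ifdef FS_DIFFUSEMAP
	// This code only ends up in the shader versions that need it
	color *= tex2D(DiffuseMap, TexCoord);
#endif
	return color;
}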
2.1.1 Geometry Buffer (GBuffer) Layout

GBuffer layout (the four render targets RT0-RT3 plus the depth/stencil buffer):

              R                    G                    B                    A
Depth/Stencil Depth Buffer         Depth Buffer         Depth Buffer         Stencil
RT0           Albedo R             Albedo G             Albedo B             Ambient Occlusion (AO)
RT1           Normal X             Normal Y             Depth                -
RT2           Specular R           Specular G           Specular B           Specular Exponent
RT3           Self illumination R  Self illumination G  Self illumination B  Glow Factor
GBuffer RT0
The rgb-components of RT0 contain the albedo. The albedo is calculated using
DiffuseMap.rgb * DiffuseColor.rgb. The a-component of RT0 is used for AO, which is
either pre-calculated (static) or generated dynamically.
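A minimal Cg sketch of this fill (the variable names are assumed for illustration):

// GBuffer RT0 fill sketch: albedo in rgb, ambient occlusion in a
float4 RT0;
RT0.rgb = tex2D(DiffuseMap, TexCoord).rgb*DiffuseColor.rgb;
RT0.a   = AmbientOcclusion; // Pre-calculated or dynamically generated AO value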
Fresnel Reflection Due to the Fresnel effect, a surface becomes more reflective at
grazing angles. Fresnel reflection is implemented as described within
http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/Direct3D9/src/HLSL_FresnelReflection/docs/FresnelReflection.pdf.
Fresnel reflection is controlled by using the IndexOfRefraction and
FresnelReflectionPower material parameters.
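A Cg sketch of the empirical approximation described in the referenced paper, using
the two material parameters named above (the function and variable names are
illustrative, not the PixelLight ones):

float FresnelReflectance(float3 ViewDir, float3 Normal, float IndexOfRefraction,
                         float FresnelReflectionPower)
{
	// Reflectance at normal incidence for an air to material interface (n1 = 1)
	float r0 = pow((1.0 - IndexOfRefraction)/(1.0 + IndexOfRefraction), 2.0);

	// Approach full reflectance at grazing angles
	return r0 + (1.0 - r0)*pow(1.0 - saturate(dot(ViewDir, Normal)), FresnelReflectionPower);
}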
Spherical Environment Mapping If the given ReflectionMap material parameter is
a 2D map, spherical environment mapping as described within
http://www.ozone3d.net/tutorials/glsl_texturing_p04.php is performed. The spherical
map has to fulfill the following conditions:

The texture coordinate of the center of the map is (0,0), and the sphere's image
has radius 1

The projection direction is along the z-axis
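A minimal Cg sketch of such a lookup under the two conditions above (ViewDir points
from the surface towards the eye; the names are illustrative):

// View space reflection vector (reflect() expects the incident direction)
float3 r = reflect(-ViewDir, Normal);

// With the map's center at (0,0) and the sphere's image having radius 1,
// the reflection vector maps to 2D texture coordinates like this
float2 uv = r.xy/sqrt(r.x*r.x + r.y*r.y + (r.z + 1.0)*(r.z + 1.0));
float3 reflection = tex2D(ReflectionMap, uv).rgb;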
Cubic Environment Mapping If the given ReflectionMap material parameter is a
cube map, cubic environment mapping is performed. More information about cubic
environment mapping can, for example, be found at
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter07.html.
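The cube map case is simpler, since the 3D reflection vector indexes the cube map
directly (a sketch, with the same illustrative names as above):

float3 r = reflect(-ViewDir, Normal);
float3 reflection = texCUBE(ReflectionMap, r).rgb;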
GBuffer RT1
The rg-components of RT1 contain the normal vector within view space. To save
components within the GBuffer, only x and y of the normal vector are saved. The
3D normal vector is rebuilt later by using z reconstruction. To achieve this, the
encode and decode functions from
http://aras-p.info/texts/CompactNormalStorage.html#method04spheremap are used.
The first Cg sketch below shows how the xy-components of the normal vector are
stored into the GBuffer; the second shows how the normal vector is restored from
the GBuffer.
The b-component of RT1 contains the view space linear depth [0...far plane].
Storing normal vector and depth information within one render target is useful for the
SSAO render effect. It just needs the texture from this render target.
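The following sketches follow the spheremap method from the page referenced above
(the unused view direction parameter from that page is omitted here). Storing the
normal vector:

half2 encode(half3 n)
{
	// Spheremap transform: project the unit normal onto a 2D encoding
	half p = sqrt(n.z*8 + 8);
	return half2(n.xy/p + 0.5);
}

Restoring it:

half3 decode(half2 enc)
{
	// Inverse of the spheremap transform above
	half2 fenc = enc*4 - 2;
	half f = dot(fenc, fenc);
	half g = sqrt(1 - f/4);
	half3 n;
	n.xy = fenc*g;
	n.z  = 1 - f/2;
	return n;
}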
GBuffer RT2
The rgb-components of RT2 contain the specular color. The specular color is calculated
using SpecularMap.rgb * SpecularColor.rgb. The a-component of RT2 contains the
specular exponent and is calculated using SpecularMap.a * SpecularExponent. As a
result, if a SpecularMap has an alpha channel, it's used for per-texel specular power
control.
GBuffer RT3
The rgb-components of RT3 contain the composition of emissive maps and light maps.
Alpha is used for glow (outshine effect). Lighting which is not connected to a particular
realtime light is also rendered during the GBuffer fill.
2.1.2 Lighting
BRDF Model As the BRDF model, Blinn-Phong with half vector specular highlights
was chosen.
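A minimal Cg sketch of this model (all vectors normalized and in view space; the
names are illustrative):

// Blinn-Phong with half vector specular highlights
float3 h        = normalize(LightDir + ViewDir); // Half vector
float  diffuse  = max(dot(Normal, LightDir), 0.0);
float  specular = (diffuse > 0.0) ? pow(max(dot(Normal, h), 0.0), SpecularExponent) : 0.0;
float3 color    = LightColor*(Albedo*diffuse + SpecularColor*specular);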
Gamma Correction Usually, color textures like hand-painted images or photos are
stored in sRGB space; therefore, they must be converted from sRGB to linear space
during rendering. This is automatically done for the texture maps of projective light
sources. If this wasn't done, the colors of these texture maps would look bleached
out. The following Cg sketch shows the gamma correction technique used.
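A minimal version, assuming the common gamma 2.2 approximation of the sRGB
decoding curve:

// Convert a color from sRGB space to linear space
float3 ToLinearSpace(float3 srgbColor)
{
	return pow(srgbColor, 2.2);
}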
As the tone mapping operator, Reinhard tone mapping as described within
http://www.cs.ucf.edu/~reinhard/cdrom/ was chosen.
For the tone mapping, the logarithmic average luminance of the current HDR image
is required. While calculating this luminance value on the Central Processing
Unit (CPU) is trivial, a parallel approach is required for calculating this value on the
Graphics Processing Unit (GPU). Within the literature, there are many ways this
logarithmic average luminance can be calculated on the GPU. We decided to use the
technique described within
http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/Direct3D9/HDR_FP16x2.zip,
as it looks like this is one of the more popular ways to solve the problem. The technique
consists of three steps:

First downsample pass with calculation of the pixel luminance and its logarithm

Downsample the one-component texture until it has a size of 4x4 pixels

Calculate the final 1x1 value and its exponential value

Although the last step is a waste of the tremendous GPU power, it's more efficient than
downloading the result to the CPU and passing the logarithmic average luminance
on to the tone mapping fragment shader.
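A Cg sketch of the first step and of the final Reinhard mapping (Key and
AverageLuminance are assumed parameter names; the Rec. 709 luminance weights are a
common choice, and whether PixelLight uses exactly these values is an assumption):

// Step one: per pixel luminance and its logarithm (the small delta avoids log(0))
float3 color  = tex2D(HDRTexture, TexCoord).rgb;
float  lum    = dot(color, float3(0.2125, 0.7154, 0.0721));
float  logLum = log(lum + 0.0001);

// After downsampling to 1x1 and taking exp(), the logarithmic average
// luminance is available and the simple Reinhard operator can be applied
float  lumScaled = (Key/AverageLuminance)*lum;
float  lumMapped = lumScaled/(1.0 + lumScaled);
float3 ldrColor  = color*(lumMapped/lum);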
For light adaptation, the Pattanaik exponential decay function described within
http://www.coretechniques.info/PostFX.zip is used. By using this technique, the change
of the logarithmic average luminance is smoothed to simulate the gradual adaptation of
the human eye to different lighting conditions.
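A sketch of such an exponential decay step (Tau controls the adaptation speed,
TimeDifference is the time since the last frame; the names are illustrative):

// Move the adapted luminance gradually towards the measured one
float adaptedLum = LastAdaptedLuminance +
                   (CurrentLuminance - LastAdaptedLuminance)*(1.0 - exp(-TimeDifference*Tau));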
HDR bloom is also supported.
This render pass also performs gamma correction as described within
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html and
http://www.weltenbauer.com/upload/dateien/gamma_correct_v12.pdf. The following Cg
sketch shows the gamma correction technique used.
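A minimal version, again assuming the gamma 2.2 approximation (this time the
encoding direction):

// Convert a color from linear space to sRGB space for display
float3 ToSRGBSpace(float3 linearColor)
{
	return pow(linearColor, 1.0/2.2);
}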
Edge glow: Edges will glow
-> ColorEdgeDetect.mat + ColorDownFilter4.mat + ColorBloomH.mat + ColorBloomV.mat + ColorCombine4.mat

Old film: The image looks like it was filmed with a very old camera. Image errors
appear, the colors are a bit unstable and the image wobbles.
-> ColorOldFilm.mat (using the PostProcessOldFilm pass class)

Sketch: Looks like a pencil drawing
-> ColorEdgeDetect.mat + ColorInverse.mat
(+ ColorOldFilm.mat if it should look like a sketch of a cartoon :)

Cartoon: Looks like a cartoon because there are black silhouettes
-> ColorEdgeDetect.mat + ColorInverse.mat + (ColorOldFilm.mat for animated
edges) + ColorCombineMul.mat
(+ ColorOldFilm.mat if it should look like an old cartoon :)

American Standard Code for Information Interchange (ASCII): Image is visualized
using ASCII characters
-> ColorDownFilter16.mat + PostProcess/ColorASCIIUp16.mat

Pull: The image is deformed at a given position
-> ColorPull.mat

Pixel: The image has a low resolution so you can see the single pixels
-> ColorDown4.mat + ColorUp4.mat

... even more combinations are possible - you can also tweak the parameters of the
effect materials. You can use the sample application 65PostProcess to see these effects
in action or to test your own or new effects.
Post Process File Format
Here's a short post process file (pp extension) example:

<?xml version="1.0"?>
<PostProcess Version="1">
	<General TextureFormat="R8G8B8" />
	<Pass Material="PostProcess/ColorEdgeDetect.mat" />
	<Pass Material="PostProcess/ColorInverse.mat" />
	<Pass Class="PostProcessOldFilm" Material="PostProcess/ColorOldFilm.mat" />
</PostProcess>
Listing 2.6: Post process file example
And here's the DTD of this format:

<?xml version="1.0"?>
<!DOCTYPE PostProcess [
	<!ELEMENT General EMPTY>
	<!ATTLIST General TextureFormat CDATA #IMPLIED>
	<!ELEMENT Pass EMPTY>
	<!ATTLIST Pass Material CDATA #REQUIRED>
]>
Listing 2.7: Post process file format DTD
As you can see, within a post process file there is optional general information
defining, for instance, the required Render To Texture (RTT) format. The default
setting for TextureFormat is R8G8B8. TextureFormat can also be R8G8B8A8 if an alpha
channel is required, or R16G16B16A16F/R32G32B32A32F for floating point formats.
The different passes are the main part of this format. Each pass is in fact a material
which is applied to the current RTT result. By chaining different passes, one can
create different final effects. A Pass element can have a special class
controlling/animating, for instance, certain unique shader parameters. The main work
of the post processes is done within the materials and shaders.
3 Contact
contact@pixellight.org
http://www.pixellight.org
Abbreviations
SDK Software Development Kit, also known as devkit
XML eXtensible Markup Language
GPU Graphics Processing Unit
CPU Central Processing Unit
ASCII American Standard Code for Information Interchange
HDR High Dynamic Range
LDR Low Dynamic Range
IBL Image Based Lighting
SSAO Screen-Space Ambient Occlusion
HBAO Horizon Based Ambient Occlusion
HDAO High Definition Ambient Occlusion
AO Ambient Occlusion
RTT Render To Texture
GBuffer Geometry Buffer