
PixelLight Compositing

Documentation

August 23, 2012


PixelLight 1.0.0-R1

The content of this PixelLight document is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.
Copyright 2002-2012 by The PixelLight Team

Contents

1 Introduction
  1.1 External Dependencies
2 Scene Renderer
  2.1 Deferred Scene Renderer
    2.1.1 Geometry Buffer (GBuffer) Layout
    2.1.2 Lighting
  2.2 Post Process Effects
    2.2.1 Scene Render Pipeline Post Processing
    2.2.2 Post Post Processing
3 Contact

Abbreviations

1 Introduction
Target Audience This document is meant for C++ programmers.

Motivation Remember the realtime CG days in the previous millennium, where the nose of a character consisted of just three triangles? Those were the days when rendering a scene was mostly about rasterizing the triangles that meshes are made up of. Nowadays, rasterizing mesh triangles is just one part of the complete rendering process. The final image you see on your screen is the result of many compositing steps - just like movies with tons of special effects produce the final image by compositing multiple image layers. The PLCompositing component is the place where the compositing steps within the PixelLight framework are implemented. While the scene graph is a representation of the scene data, the task of the scene rendering and compositing system is to take the scene graph and all assigned data and bring them onto the computer monitor in the best way possible. For legacy hardware, this scene rendering may just mean rendering the scene using simple textures - for decent hardware, the scene may be rendered using dynamic lighting and shadowing as well as tons of special effects like normal mapping, Screen-Space Ambient Occlusion (SSAO), High Dynamic Range (HDR) rendering and so on.

1.1 External Dependencies

PLCompositing depends on the PLCore, PLMath, PLGraphics, PLRenderer, PLMesh and PLScene libraries.

2 Scene Renderer
The scene renderer system of PixelLight consists of several render passes. One can add or remove scene renderer passes as desired. The scene renderer passes can be categorized in the following way:

Fixed function: For legacy hardware without shader support and just fixed built-in graphics features

Forward: A classic forward renderer using shaders. Each object is drawn again for each light

Deferred: A modern deferred renderer approach performing for example lighting in image space

This document only describes the deferred scene renderer of PixelLight.

2.1 Deferred Scene Renderer


The deferred scene renderer of PixelLight is, like other PixelLight scene renderer implementations, a collection of render passes working together to form the render pipeline. This scene renderer uses Image Based Lighting (IBL) and requires Shader Model 3.0 as a minimum. The eXtensible Markup Language (XML) file DeferredRendering.sr describes the render passes to use and their order.

PLScene::SRPBegin (independent of the deferred scene renderer)
PLCompositing::SRPDeferredGBuffer (with gamma correction support)
PLCompositing::SRPDeferredHBAO
PLCompositing::SRPDeferredHDAO
PLCompositing::SRPDeferredAmbient
PLCompositing::SRPDeferredGlow
PLCompositing::SRPDeferredLighting (with gamma correction support)
PLCompositing::SRPDeferredGodRays
PLCompositing::SRPDeferredDepthFog
PLCompositing::SRPDeferredEdgeAA
PLCompositing::SRPDeferredDOF
PLCompositing::SRPEndHDR (independent of the deferred scene renderer)
PLCompositing::SRPDeferredGBufferDebug
Terminology Often, there are many names describing the same thing. Therefore, here's a list of the names PixelLight uses (always the first one):

view space = camera space = eye space
clip space = projection space
Data Driven The deferred scene renderer uses a data driven approach. This means that instead of forcing the user to define shaders by hand, the user just provides material descriptions and the scene renderer has to interpret them as well as possible. This way, the user only has to create the data and materials once, and then it's possible to use one and the same data set on totally different scene renderer techniques without the need to rewrite all material descriptions and shaders. By using this concept, the scene renderer is also able to optimize automatically and to scale with the available hardware.
Über Shaders The deferred scene renderer uses so-called Über Shaders. This simply means that the shaders of the scene renderer are written in a high level shader language and make heavy use of precompiler directives like #ifdef. Therefore, most of the time, there's only one high level shader that takes everything into account that is supported. During runtime, only the features a current material is really using are taken into account. This means that many versions of one shader are compiled during runtime, resulting in effective shaders. These Über Shaders may look confusing at first glance, but they have the big advantage of code reuse. If there's a bug, this bug will probably influence all or most compiled shader versions and therefore the chance that it can be found and fixed is high. The bug only has to be fixed in one place, not in hundreds of places, which would just lead to a high probability of even more bugs.
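
As a minimal sketch of the idea (not taken from the actual PixelLight sources; the FS_NORMALMAP define, the parameter names and the semantics are made up for illustration), such an Über Shader may look like this in Cg:

// Hypothetical über shader fragment: features are compiled in or out via
// precompiler directives, producing one shader variant per feature combination
float4 FSMain(float2 texUV  : TEXCOORD0,
              float3 normal : TEXCOORD1,
              uniform sampler2D DiffuseMap
#ifdef FS_NORMALMAP
            , uniform sampler2D NormalMap
#endif
             ) : COLOR
{
    float3 n = normalize(normal);
#ifdef FS_NORMALMAP
    // Only compiled into variants for materials actually using a normal map
    n = normalize(tex2D(NormalMap, texUV).xyz*2 - 1);
#endif
    return float4(tex2D(DiffuseMap, texUV).rgb*max(n.z, 0), 1);
}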

2.1.1 Geometry Buffer (GBuffer) Layout

The GBuffer layout is shown in table 2.1. By default, the GBuffer uses FP16 render targets - but FP32 render targets can be enabled as well.



RT    R                     G                     B                     A
DS    Depth Buffer          Depth Buffer          Depth Buffer          Stencil
RT0   Albedo R              Albedo G              Albedo B              Ambient Occlusion (AO)
RT1   Normal X              Normal Y              Depth                 -
RT2   Specular R            Specular G            Specular B            Specular Exponent
RT3   Self illumination R   Self illumination G   Self illumination B   Glow Factor

Table 2.1: GBuffer Layout


Stencil Buffer By default, the PLCompositing::SRPDeferredGBuffer scene renderer pass writes into the stencil buffer whether or not a pixel has valid content. A value of 1 within the stencil buffer means: The GBuffer has no information about this pixel because no geometry is covering it. Later on, for example within the PLCompositing::SRPDeferredAmbient scene renderer pass, this stencil buffer information can be used to draw only pixels with valid GBuffer content; everything else is not drawn. This is quite useful if, for example, a bitmap or sky has already been drawn and should remain in the background.
Parallax Mapping Parallax mapping is implemented as described in http://www8.cs.umu.se/kurser/5DV051/VT09/lab/parallax_mapping.pdf.
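
For illustration, here is a minimal Cg sketch of classic parallax mapping with offset limiting as described in the linked paper; the Parallax scale parameter, the height map channel and the helper name are assumptions, not necessarily what PixelLight's GBuffer shader uses:

// Classic parallax mapping with offset limiting (sketch): shift the texture
// coordinate along the tangent space view direction based on the height map
float2 parallaxTexCoord(float2 texUV, float3 tsViewDir,
                        sampler2D HeightMap, float Parallax)
{
    // Height in [0, 1], rescaled and biased around zero
    float height = tex2D(HeightMap, texUV).r*Parallax - Parallax*0.5f;
    // Offset limiting: use the xy of the normalized view direction directly
    return texUV + height*normalize(tsViewDir).xy;
}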
Two-Sided Polygons Two-sided polygons are handled when filling the GBuffer by drawing the mesh a second time, but with flipped normal vectors. This is a simple, universal and robust way to solve this problem. As a result, there's no need to implement special two-sided lighting in later render passes.
Gamma Correction Usually, color textures like hand-painted images or photos are stored in sRGB space; therefore, they must be converted from sRGB to linear space during rendering. This is done automatically for the material parameters DiffuseMap, LightMap, EmissiveMap and ReflectionMap. If this weren't done, the colors of these texture maps would look bleached out. Cg source code 2.1 shows the used gamma correction technique.

float3 linearRGBColor = pow(sRGBColor, 2.2);

Listing 2.1: GBuffer gamma correction

GBuffer RT0
The rgb-components of RT0 contain the albedo. The albedo is calculated using DiffuseMap.rgb * DiffuseColor.rgb. The a-component of RT0 is used for AO, either static pre-calculated AO (some call it baked occlusion) or dynamic AO (later on within the scene renderer pipeline, for example PLCompositing::SRPDeferredHBAO renders AO into this alpha channel).
Fresnel Reflection Due to the Fresnel effect, a surface becomes more reflective near grazing angles. Fresnel reflection is implemented as described within http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/Direct3D9/src/HLSL_FresnelReflection/docs/FresnelReflection.pdf. Fresnel reflection is controlled by using the IndexOfRefraction and FresnelReflectionPower material parameters.
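
For illustration, here is a Cg sketch of how these two material parameters could drive a Schlick style Fresnel approximation (R = R0 + (1 - R0)(1 - N·V)^power, with R0 derived from the index of refraction against air); the helper name is made up and the actual PixelLight shader code may differ:

// Approximated Fresnel reflectance (sketch)
float fresnel(float3 normal, float3 viewDir,
              float IndexOfRefraction, float FresnelReflectionPower)
{
    // Reflectance at normal incidence derived from the index of refraction
    // (air with a refraction index of 1 assumed on the outside)
    float r0 = pow((1 - IndexOfRefraction)/(1 + IndexOfRefraction), 2);
    // Reflectance grows towards grazing angles
    return r0 + (1 - r0)*pow(1 - saturate(dot(normal, viewDir)), FresnelReflectionPower);
}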
Spherical Environment Mapping If the given ReflectionMap material parameter is a 2D map, spherical environment mapping as described within http://www.ozone3d.net/tutorials/glsl_texturing_p04.php is performed. The spherical map has to fulfill the following conditions:

The texture coordinate of the center of the map is (0,0), and the sphere's image has radius 1

The projection direction is along the z-axis
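
For illustration, here is a Cg sketch of one common formulation of the sphere map lookup from the linked tutorial, computing a [0, 1] texture coordinate from the view space reflection vector r; the helper name is an assumption and the exact remapping in PixelLight's shaders may differ:

// Spherical environment mapping texture coordinate (sketch);
// r is the view space reflection vector
float2 sphereMapTexCoord(float3 r)
{
    float m = 2*sqrt(r.x*r.x + r.y*r.y + (r.z + 1)*(r.z + 1));
    return r.xy/m + 0.5f;
}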
Cubic Environment Mapping If the given ReflectionMap material parameter is a cube map, cubic environment mapping is performed. More information about cubic environment mapping can for example be found at http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter07.html.
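
A minimal Cg sketch of the lookup, assuming a view space surface normal and a view direction pointing from the surface towards the eye; the helper name is made up:

// Cubic environment mapping (sketch): look up the cube map
// along the reflection of the view direction
float3 cubeMapReflection(samplerCUBE ReflectionMap, float3 viewDir, float3 normal)
{
    float3 r = reflect(-viewDir, normal);
    return texCUBE(ReflectionMap, r).rgb;
}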
GBuffer RT1
The rg-components of RT1 contain the normal vector within view space. To save components within the GBuffer, only x and y of the normal vector are saved. The 3D normal vector is rebuilt later by using z reconstruction. To achieve this, the encode and decode functions from http://aras-p.info/texts/CompactNormalStorage.html#method04spheremap are used. Cg source code 2.2 shows how the xy-components of the normal vector are stored into the GBuffer. Cg source code 2.3 shows how the normal vector is restored from the GBuffer.
The b-component of RT1 contains the linear view space depth [0...far plane]. Storing the normal vector and the depth information within one render target is useful for the SSAO render effect: it just needs the texture from this one render target.
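
A later pass can cheaply reconstruct the view space position from this linear depth. Here is a common sketch, assuming a per pixel view ray interpolated from the camera through the far plane corners and scaled so that its z component equals the far plane distance; this helper is illustrative, not necessarily how PixelLight implements it:

// Reconstruct the view space position from linear depth (sketch);
// viewRay is scaled so that viewRay.z equals the far plane distance
float3 viewSpacePosition(float3 viewRay, float linearDepth, float farPlane)
{
    return viewRay*(linearDepth/farPlane);
}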




// Encodes a 3-component normal vector into a 2-component normal vector
float2 encodeNormalVector(float3 normal)
{
    float p = sqrt(normal.z*8 + 8);
    return float2(normal.xy/p + 0.5f);
}

Listing 2.2: Normal vector to GBuffer

// Decodes a 2-component normal vector into a 3-component normal vector
float3 decodeNormalVector(float2 normal)
{
    float2 fenc = normal*4 - 2;
    float f = dot(fenc, fenc);
    float g = sqrt(1 - f/4);
    float3 n;
    n.xy = fenc*g;
    n.z = 1 - f/2;
    return n;
}

Listing 2.3: GBuffer to normal vector

GBuffer RT2
The rgb-components of RT2 contain the specular color. The specular color is calculated using SpecularMap.rgb * SpecularColor.rgb. The a-component of RT2 contains the specular exponent and is calculated using SpecularMap.a * SpecularExponent. As a result, if a SpecularMap has an alpha channel, it's used for per-texel specular power control.
GBuffer RT3
The rgb-components of RT3 contain the composition of emissive maps and light maps. Alpha is used for glow (outshine effect). Lighting which is not connected to a particular realtime light is also rendered during the GBuffer fill.

2.1.2 Lighting
BRDF Model As the BRDF model, Blinn-Phong with half vector specular highlights was chosen.
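
A minimal Cg sketch of this BRDF, with parameter names chosen to match the GBuffer contents described above; the helper itself is illustrative, not the actual PLCompositing::SRPDeferredLighting code:

// Blinn-Phong lighting with half vector specular highlights (sketch);
// all direction vectors are normalized and given in view space
float3 blinnPhong(float3 normal, float3 lightDir, float3 viewDir,
                  float3 lightColor, float3 albedo,
                  float3 specularColor, float specularExponent)
{
    float  diffuse  = max(dot(normal, lightDir), 0);
    // Half vector between light direction and view direction
    float3 halfVec  = normalize(lightDir + viewDir);
    float  specular = pow(max(dot(normal, halfVec), 0), specularExponent);
    return lightColor*(albedo*diffuse + specularColor*specular);
}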
Gamma Correction Usually, color textures like hand-painted images or photos are stored in sRGB space; therefore, they must be converted from sRGB to linear space during rendering. This is done automatically for the texture maps of projective light sources. If this weren't done, the colors of these texture maps would look bleached out. Cg source code 2.4 shows the used gamma correction technique.

float3 linearRGBColor = pow(sRGBColor, 2.2);

Listing 2.4: Lighting gamma correction

2.2 Post Process Effects

PixelLight comes with a compact and comfortable post processing (also called image processing) system. For image processing effects like bloom, the scene is rendered into a texture instead of into the usual output buffer. Then the post processing manager takes this texture and applies different effects to it. At the end you receive the final composition, which you can draw on screen using for instance a full screen rectangle. On today's mainstream hardware, HDR rendering must also be implemented as image processing because output devices like monitors lack the ability to display such HDR data directly. Therefore this HDR data must be mapped to the usual RGB data using tone mapping.
The post processing system of PixelLight can be subdivided into two categories:

Scene Render Pipeline Post Processing integrates into the scene render process and uses for example data from the deferred rendering GBuffer. This type of post processing is fixed built-in.

Post Post Processing is completely decoupled from the scene render process and is performed after the scene rendering is finished. This type of post processing is not fixed built-in.

2.2.1 Scene Render Pipeline Post Processing


Depth Fog Classic depth fog is realized as a post processing effect. Three fog modes are implemented (a Cg sketch follows below):

LinearMode: The fog effect intensifies linearly between the start and end points (f = (end - d) / (end - start))

ExponentialMode: The fog effect intensifies exponentially (f = 1 / e^(d * density))

Exponential2Mode: The fog effect intensifies exponentially with the square of the distance (f = 1 / e^((d * density)^2))

This effect is implemented within the PLCompositing::SRPDeferredDepthFog scene renderer pass.
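
The three formulas map directly to Cg; in this sketch the branching on a mode parameter is purely illustrative (a real pass would rather compile one shader variant per mode):

// Fog factor computation for the three fog modes (sketch);
// d is the view space depth of the pixel
float fogFactor(float d, float start, float end, float density, int mode)
{
    if (mode == 0)        // LinearMode
        return saturate((end - d)/(end - start));
    else if (mode == 1)   // ExponentialMode
        return 1/exp(d*density);
    else                  // Exponential2Mode
        return 1/exp(pow(d*density, 2));
}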



EdgeAA Anti-aliasing is realized using resolution-independent edge detection as described within http://http.developer.nvidia.com/GPUGems3/gpugems3_ch19.html. This effect is implemented within the PLCompositing::SRPDeferredEdgeAA scene renderer pass.
SSAO SSAO is calculated dynamically during runtime using per fragment depth and optionally also normal data. SSAO overwrites the static AO value from GBuffer RT0. As a result, you can either have static AO maps or dynamic AO, but not both at the same time. SSAO can be implemented in several ways. Currently the following techniques are implemented:

PLCompositing::SRPDeferredHBAO: Horizon Based Ambient Occlusion (HBAO) as described within the NVIDIA Direct3D Software Development Kit (SDK) 10 Code Samples http://developer.download.nvidia.com/SDK/10.5/direct3d/Source/ScreenSpaceAO/doc/ScreenSpaceAO.pdf

PLCompositing::SRPDeferredHDAO: High Definition Ambient Occlusion (HDAO) as described within the ATI Radeon SDK http://developer.amd.com/Downloads/HDAO10.1.zip

The concrete SSAO implementations are derived from the PLCompositing::SRPDeferredSSAO class, which offers a Cross Bilateral Filter as described within the NVIDIA Direct3D SDK 10 Code Samples http://developer.download.nvidia.com/SDK/10.5/direct3d/Source/ScreenSpaceAO/doc/ScreenSpaceAO.pdf.
Volumetric Light Scattering Volumetric light scattering is implemented as described within http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html. This effect is also known as crepuscular rays, sunbeams, sunbursts, star flare, god rays, sun shafts, or light shafts. Within PixelLight, we chose the name god rays because it's short. The emissive/light map content of the GBuffer is used as the glowing parts. This effect is implemented within the PLCompositing::SRPDeferredGodRays scene renderer pass.
Glow This effect is implemented within the PLCompositing::SRPDeferredGlow scene renderer pass and is loosely based on the technique described within http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html.

Depth Of Field This effect is implemented within the PLCompositing::SRPDeferredDOF scene renderer pass and uses the technique described within http://ati.amd.com/developer/gdc/Scheuermann_DepthOfField.pdf.
HDR Tone Mapping The scene render pass SRPEndHDR finishes the HDR render pipeline by converting the HDR image into a Low Dynamic Range (LDR) image using tone mapping (the available color range is compressed). As the tone mapping operator we've chosen Reinhard tone mapping as described within http://www.cs.ucf.edu/~reinhard/cdrom/.
For the tone mapping, the logarithmic average luminance of the current HDR image is required. While calculating this luminance value on the Central Processing Unit (CPU) is trivial, a parallel approach is required for calculating this value on the Graphics Processing Unit (GPU). Within the literature, there are many ways this logarithmic average luminance can be calculated on the GPU. We decided to use the technique described within http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/Direct3D9/HDR_FP16x2.zip; it looks like this is one of the more popular ways to solve the problem. The technique consists of three steps:

First downsample pass with calculation of the pixel luminance and its logarithm

Downsample the 1-component texture until it has a size of 4x4 pixels

Calculate the final 1x1 value and its exponential value

Although the last step is a waste of the tremendous GPU power, it's more efficient than downloading the result to the CPU and passing the logarithmic average luminance on to the tone mapping fragment shader.
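
For illustration, here is a Cg sketch of the simple Reinhard operator L/(1 + L) applied per pixel, assuming the logarithmic average luminance computed by the steps above and a key (exposure) parameter; the names and the Rec. 709 luminance weights are assumptions, not necessarily the exact SRPEndHDR code:

// Reinhard tone mapping using the logarithmic average luminance (sketch)
float3 toneMapReinhard(float3 hdrColor, float logAverageLuminance, float key)
{
    // Luminance of the current pixel (Rec. 709 weights), clamped to avoid
    // a division by zero for pure black pixels
    float pixelLum = max(dot(hdrColor, float3(0.2126f, 0.7152f, 0.0722f)), 0.0001f);
    // Scale by the key value relative to the scene's average luminance
    float scaledLum = key*pixelLum/logAverageLuminance;
    // Compress into [0, 1] with L/(1 + L)
    float compressedLum = scaledLum/(1 + scaledLum);
    return hdrColor*(compressedLum/pixelLum);
}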
For light adaptation, the Pattanaik exponential decay function described within http://www.coretechniques.info/PostFX.zip is used. By using this technique, the change of the logarithmic average luminance is smoothed to simulate the gradual adaptation of the human eye to different lighting conditions.
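
A sketch of such an exponential decay step, executed once per frame on the previously adapted luminance; the tau time constant and the function name are assumptions for illustration:

// Pattanaik style exponential decay for light adaptation (sketch)
float adaptLuminance(float adaptedLum, float currentLum,
                     float timeDelta, float tau)
{
    // Smoothly move the adapted luminance towards the current one
    return adaptedLum + (currentLum - adaptedLum)*(1 - exp(-timeDelta/tau));
}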
HDR bloom is also supported.
This render pass also performs gamma correction as described within http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html and http://www.weltenbauer.com/upload/dateien/gamma_correct_v12.pdf. Cg source code 2.5 shows the used gamma correction technique.

// By default, gamma is 2.2
float3 sRGBColor = pow(linearRGBColor, 1/gamma);

Listing 2.5: Post processing gamma correction

2.2.2 Post Post Processing

The post processing that is performed after the scene rendering is finished is completely based on script-like XML files defining how a post process is created. Multiple post processes can be performed one after another.

Each post process material can have an annotation for additional information (material parameters fScaleX & fScaleY, default value for both 1.0f).



Each post process material can have different kernels which are converted from pixel space to texel space automatically:
TexelKernel[n].x = PixelKernel[n].x / width
TexelKernel[n].y = PixelKernel[n].y / height
where width and height are the dimensions of the render target.

The material parameter TargetDimension will receive the render target width and height.

Some special post process effects have special parameters - in this case you have to derive your own class from PostProcess and implement the update of these special material parameters yourself.

2D and rectangle texture targets are supported.
Here's a list of some provided post processing effects defined within PLPostProcessEffects.zip:
(-> shows the order in which the post process materials are used)
(*> shows the special post process class that is used, for instance to animate material parameters)

Inverse: Inverts the colors (negative image)
-> ColorInverse.mat

Monochrome: Grayscale image
-> ColorMonochrome.mat

Sepia: Manipulates the colors
-> ColorSepia.mat

Blur: Blurs the image
-> ColorDownFilter4.mat + ColorGBlurH.mat + ColorGBlurV.mat + ColorUpFilter4.mat

Bloom: Bright things glow
-> ColorDownFilter8.mat + ColorBrightPass.mat + ColorBloomH.mat + ColorBloomV.mat + ColorBloomH.mat + ColorBloomV.mat + ColorCombine8.mat

HDR (tone map): Nice high dynamic range rendering. A floating point render target is used instead of the standard RGB unsigned char format. As a post process, a tone mapping effect is applied.
-> ColorToneMap.mat

Edge detect: Edge detection
-> ColorEdgeDetect.mat

Edge glow: Edges will glow
-> ColorEdgeDetect.mat + ColorDownFilter4.mat + ColorBloomH.mat + ColorBloomV.mat + ColorCombine4.mat

Old film: The image looks like it was filmed with a very old camera. Image errors appear, the colors are a bit unstable and the image wobbles.
-> ColorOldFilm.mat
*> PostProcessOldFilm

Sketch: Looks like a pencil drawing
-> ColorEdgeDetect.mat + ColorInverse.mat
(+ ColorOldFilm.mat if it should look like a sketch of a cartoon :)

Cartoon: Looks like a cartoon because there are black silhouettes
-> ColorEdgeDetect.mat + ColorInverse.mat + (ColorOldFilm.mat for animated edges) + ColorCombineMul.mat
(+ ColorOldFilm.mat if it should look like an old cartoon :)

American Standard Code for Information Interchange (ASCII): The image is visualized using ASCII characters
-> ColorDownFilter16.mat + PostProcess/ColorASCIIUp16.mat

Pull: The image is deformed at a given position
-> ColorPull.mat

Pixel: The image has a low resolution so you can see the single pixels
-> ColorDown4.mat + ColorUp4.mat

... even more combinations are possible - you can also tweak the parameters of the effect materials. You can use the sample application 65PostProcess to see these effects in action or to test your own or new effects.
Post Process File Format
Here's a short post process file (pp extension) example:

<?xml version="1.0"?>
<PostProcess Version="1">
    <General TextureFormat="R8G8B8" />
    <Pass Material="PostProcess/ColorEdgeDetect.mat" />
    <Pass Material="PostProcess/ColorInverse.mat" />
    <Pass Class="PostProcessOldFilm" Material="PostProcess/ColorOldFilm.mat" />
</PostProcess>

Listing 2.6: Post process file example
And here's the DTD of this format:

<?xml version="1.0"?>
<!DOCTYPE PostProcess [
    <!ELEMENT General EMPTY>
    <!ATTLIST General TextureFormat CDATA #IMPLIED>
    <!ELEMENT Pass EMPTY>
    <!ATTLIST Pass Material CDATA #REQUIRED>
]>

Listing 2.7: Post process file format DTD
As you can see, within a post process file there is optional general information defining for instance the required Render To Texture (RTT) format. The default setting for TextureFormat is R8G8B8. TextureFormat can also be R8G8B8A8 if an alpha channel is required, or R16G16B16A16F/R32G32B32A32F for floating point formats. The different passes are the core of this format. Each pass is in fact a material which will be applied to the current RTT result. By chaining different passes one can create different final effects. A Pass element can have a special class controlling/animating for instance certain unique shader parameters. The main work of the post processes is done within the materials and shaders.


3 Contact
contact@pixellight.org
http://www.pixellight.org


Abbreviations
SDK Software Development Kit, also known as devkit
XML eXtensible Markup Language
GPU Graphics Processing Unit
CPU Central Processing Unit
ASCII American Standard Code for Information Interchange
HDR High Dynamic Range
LDR Low Dynamic Range
IBL Image Based Lighting
SSAO Screen-Space Ambient Occlusion
HBAO Horizon Based Ambient Occlusion
HDAO High Definition Ambient Occlusion
AO Ambient Occlusion
RTT Render To Texture
GBuffer Geometry Buffer
