Parallax Occlusion in Direct3D 11
Frank Luna
February 11, 2012
www.d3dcoder.net
Figure 1: Parallax occlusion mapping is a per-pixel technique that does not require a high triangle count, yet gives
results comparable to tessellation-based displacement mapping.
§1 Conceptual Overview
The basic idea is illustrated in Figure 2. If the surface we are modeling is not truly
planar, but defined by a heightmap, then the point the eye sees is not the point p
on the polygon plane, but instead the point q where the view ray intersects the
heightmap. Therefore, when lighting and texturing the polygon point we should not
use the texture coordinates and normal at p, but instead use the texture coordinates and
normal associated with the intersection point q. If we are using normal mapping, then we
just need the texture coordinates at q, from which we can sample the normal map to get
the normal vector at q. Thus, the crux of the algorithm is to find the parallax offset
vector that tells us how much to offset the texture coordinates at p to obtain the texture
coordinates at q. As we will see in later sections, this is accomplished by tracing the view
ray through the heightmap.
Figure 2: The point q the eye sees is not the point p on the polygon plane, but instead the point q where the view ray
intersects the heightmap. The parallax offset vector tells us how much to offset the texture coordinates at p
to obtain the texture coordinates at q.
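The trace described above can be sketched outside the shader as well. The following C++ sketch is hypothetical (not from the demo): it reduces the problem to the 2D cross-section, representing the heightmap as a callable over the offset axis with 1 = surface and 0 = lowest height, marches along the view ray in uniform steps, and refines the crossing with a line-segment intersection:

```cpp
#include <cmath>
#include <functional>

// 2D cross-section of the problem: x runs along the polygon surface in the
// direction of the maximum parallax offset, z is the normalized height
// (1 = polygon surface, 0 = lowest displacement).
double TraceHeightmap(const std::function<double(double)>& height,
                      double maxOffsetX, int n)
{
    double zStep = 1.0 / n;        // ray drops from z = 1 to z = 0 in n steps
    double xStep = maxOffsetX / n; // in-plane offset per step
    double prevX = 0.0, prevZ = 1.0, prevH = height(0.0);
    for(int i = 1; i <= n; ++i)
    {
        double x = i * xStep;
        double z = 1.0 - i * zStep;
        double h = height(x);
        if(h >= z)
        {
            // Crossed the height profile: refine with a ray/segment test.
            double t = (prevH - prevZ) / ((z - prevZ) - (h - prevH));
            return prevX + t * xStep;
        }
        prevX = x; prevZ = z; prevH = h;
    }
    return maxOffsetX; // no crossing found; clamp to the maximum offset
}
```

The pixel shader in §6 does the same thing, except the offsets are 2D texture coordinates and the heights come from a texture sample.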
We need an upper limit on how large the parallax offset vector could possibly be. As
with our tessellation displacement mapping demo, we use the convention that a normalized
height value z = 1 coincides with the polygon surface, and z = 0 represents the lowest height (most
inward displacement). (Recall that the heightmap values are given in a normalized range
[0, 1], which are then scaled by gHeightScale to world coordinate values.) Figure 3 shows that for a
view ray with starting position p at the polygon surface with z = 1, it must intersect the
heightmap by the time the ray reaches z = 0. Assuming the view vector v = (v_x, v_y, v_z),
which points from the surface toward the eye, is normalized, there exists a t_max such that:

    p - t_max * v = q, where q has height z = 0

In normalized height coordinates, the difference in heights is 1 - 0 = 1, but scaled in world
coordinates, the difference is gHeightScale. Therefore,

    t_max * v_z = gHeightScale
    t_max = gHeightScale / v_z

So, the view ray reaching the lowest height z = 0 yields the maximum parallax offset:

    v_max = -t_max * (v_x, v_y) = -(gHeightScale / v_z) * (v_x, v_y)
Figure 3: The maximum parallax offset vector corresponds to the point where the view ray pierces the lowest
possible height z = 0.
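As a quick numeric check of this formula, here is an application-side C++ sketch; the struct and function names are illustrative, not part of the demo:

```cpp
#include <cmath>

struct Float2 { double x, y; };

// Maximum parallax offset for a normalized tangent-space view vector
// (vx, vy, vz) pointing from the surface toward the eye (vz > 0).
// The ray must hit the lowest height after traveling t_max = heightScale/vz,
// so the in-plane offset is -t_max * (vx, vy).
Float2 MaxParallaxOffset(double vx, double vy, double vz, double heightScale)
{
    double tMax = heightScale / vz;
    return { -tMax * vx, -tMax * vy };
}
```

For a head-on view v = (0, 0, 1) the offset is zero, as expected; the shallower the view angle (the smaller v_z), the larger the offset grows.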
So far we have made the assumption that we can use v_max as a texture coordinate offset.
However, if we rotate the normalized view ray from world space to tangent space, the
view ray scale is still in the "world scale." Moreover, in §2 we computed the
intersection of the view ray with the world scaled height gHeightScale. Therefore, t_max is also in the "world
scale" and so is v_max. But to use v_max as a texture coordinate offset, we need it in the
"texture scale." So we need to convert units:

    v_max (texture units) = v_max (world units) * (1 texture unit / x world units)

That is to say, we need to know how many world space units x equal one texture
space unit. Suppose we have a 20 x 20 grid in world units, and we map a texture onto
the grid with no tiling. Then 1 texture space unit equals 20 world space units and

    v_max (texture units) = v_max (world units) * (1/20)
As a second example, suppose we have a 20 x 20 grid in world units, and we map a
texture onto the grid with 5 x 5 tiling. Then 5 texture space units equal 20 world space
units, or 1 texture space unit equals 4 world space units, and

    v_max (texture units) = v_max (world units) * (1/4)

Rather than converting units in the shader, we can fold the factor 1/x into the height scale:
compute gHeightScale/x in the application code, and set it as a constant buffer property. Then our
pixel-shader code can continue to use the original formula:

    maxParallaxOffset = -viewDirTS.xy * gHeightScale / viewDirTS.z

In practice, it might not be easy to determine x. An easier way would be to just expose a
slider to an artist and let them tweak the height scale interactively.
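The two grid examples can be checked numerically with a few lines of application-side code; the helper name below is hypothetical:

```cpp
#include <cmath>

// One texture-space unit spans (gridWorldSize / tiling) world units, so a
// world-space length divides by that factor to land in texture space.
double WorldToTexUnits(double worldLength, double gridWorldSize, double tiling)
{
    double worldUnitsPerTexUnit = gridWorldSize / tiling;
    return worldLength / worldUnitsPerTexUnit;
}
```

With a 20-unit grid and no tiling, 1 world unit is 1/20 = 0.05 texture units; with 5x tiling it is 1/4 = 0.25 texture units, matching the two examples.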
Figure 4: Ray marching with uniform step size. We cross the heightmap the first time the sampled height is greater
than or equal to the ray height (currHeight >= currRayZ).

Note that we ray march in the normalized heightmap space [0, 1]. So if we are taking n
samples, we must step a vertical distance zStep = 1/n with each step so that the ray travels
a vertical distance of 1 by the nth sample.
Once we find the interval where the ray crosses the heightmap, we approximate the
heightmap as a piecewise linear function and do a ray/line segment intersection test to find the
intersection point. Note that this is a 2D problem because we are looking at the cross
plane that is orthogonal to the polygon plane and contains the view ray. Figure 5 shows
that we can represent the two segments over the crossing interval by the equations:

    r(t) = (prevX, prevRayZ) + t * (currX - prevX, currRayZ - prevRayZ)
    s(t) = (prevX, prevHeight) + t * (currX - prevX, currHeight - prevHeight)

Observe that in this particular construction, the segments intersect at the same parameter t (in
general this is not true). Therefore, we can set them equal to each other and solve for
t. In particular, the equation of the second coordinate implies:

    prevRayZ + t * (currRayZ - prevRayZ) = prevHeight + t * (currHeight - prevHeight)

    t = (prevHeight - prevRayZ) / ((currRayZ - prevRayZ) - (currHeight - prevHeight))

The parameter t gives us the percentage through the interval where the intersection
occurs. If the interval direction and length is described by texStep, then the parallax
offset vector is given by:

    finalTexOffset = prevTexOffset + t * texStep
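Plugging concrete (made-up) numbers into the solve for t is an easy sanity check. A C++ transcription:

```cpp
#include <cmath>

// Intersection parameter of the view-ray segment (prevRayZ -> currRayZ) and
// the height-profile segment (prevHeight -> currHeight), both parametrized
// over t in [0, 1] across the same marching interval.
double IntersectT(double prevRayZ, double currRayZ,
                  double prevHeight, double currHeight)
{
    return (prevHeight - prevRayZ) /
           ((currRayZ - prevRayZ) - (currHeight - prevHeight));
}
```

For example, a ray stepping from z = 0.6 down to z = 0.5 against heights rising from 0.55 to 0.65 crosses a quarter of the way through the interval (t = 0.25).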
Figure 6 shows that glancing view rays require more samples than head-on view rays
since the distance traveled is longer. We estimate the ray sample count by interpolating
between minimum and maximum sample count constants based on the angle between the
vector from the surface to the eye and the surface normal vector:

    sampleCount = (int)lerp(gMaxSampleCount, gMinSampleCount, dot(toEye, normalW))
int gMinSampleCount = 8;
int gMaxSampleCount = 64;
For head-on angles, the dot product will be 1, and the number of samples will be
gMinSampleCount. For glancing angles, the dot product will be near 0, and the number of
samples will approach gMaxSampleCount.
Figure 6: Glancing view rays travel a further distance than head-on view rays. Therefore, we require more samples
at glancing angles to get an accurate intersection test.
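The interpolation can be written out directly; this C++ sketch (a hypothetical helper, mirroring the lerp the shader performs) confirms the endpoint behavior described above:

```cpp
// Interpolate the sample count from the cosine of the angle between the
// to-eye vector and the surface normal: head-on (dot = 1) uses minSamples,
// glancing (dot near 0) approaches maxSamples.
int SampleCount(double dotEyeNormal, int minSamples, int maxSamples)
{
    return (int)(maxSamples + dotEyeNormal * (minSamples - maxSamples));
}
```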
§6 Code
#include "LightHelper.fx"
cbuffer cbPerFrame
{
DirectionalLight gDirLights[3];
float3 gEyePosW;
float gFogStart;
float gFogRange;
float4 gFogColor;
};
cbuffer cbPerObject
{
float4x4 gWorld;
float4x4 gWorldInvTranspose;
float4x4 gWorldViewProj;
float4x4 gTexTransform;
Material gMaterial;
float gHeightScale;
int gMinSampleCount;
int gMaxSampleCount;
};
Texture2D gDiffuseMap;
Texture2D gNormalMap; // xyz = normal, a = height
TextureCube gCubeMap;
SamplerState samLinear
{
	Filter = MIN_MAG_MIP_LINEAR;
	AddressU = WRAP;
	AddressV = WRAP;
};
struct VertexIn
{
float3 PosL : POSITION;
float3 NormalL : NORMAL;
float2 Tex : TEXCOORD;
float3 TangentL : TANGENT;
};
struct VertexOut
{
float4 PosH : SV_POSITION;
float3 PosW : POSITION;
float3 NormalW : NORMAL;
float3 TangentW : TANGENT;
float2 Tex : TEXCOORD;
};
VertexOut VS(VertexIn vin)
{
	VertexOut vout;

	// Transform to world space.
	vout.PosW     = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
	vout.NormalW  = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
	vout.TangentW = mul(vin.TangentL, (float3x3)gWorld);

	// Transform to homogeneous clip space.
	vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);

	// Output vertex attributes for interpolation across triangle.
	vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;

	return vout;
}
float4 PS(VertexOut pin,
          uniform int gLightCount,
          uniform bool gUseTexure,
          uniform bool gAlphaClip,
          uniform bool gFogEnabled,
          uniform bool gReflectionEnabled) : SV_Target
{
	// Interpolation can unnormalize the normal, so renormalize it.
	pin.NormalW = normalize(pin.NormalW);

	float3 toEye = gEyePosW - pin.PosW;
	float distToEye = length(toEye);

	// Normalize.
	toEye /= distToEye;

	//
	// Parallax Occlusion calculations to find the texture coords to use.
	//

	// Build orthonormal basis and transform view vector to tangent space.
	float3 N = pin.NormalW;
	float3 T = normalize(pin.TangentW - dot(pin.TangentW, N)*N);
	float3 B = cross(N, T);
	float3 viewDirTS = mul(toEye, transpose( float3x3(T, B, N) ));

	float2 maxParallaxOffset = -viewDirTS.xy*gHeightScale/viewDirTS.z;

	// Vary number of samples based on view angle between the eye and
	// the surface normal. (Head-on angles require less samples than
	// glancing angles.)
	int sampleCount = (int)lerp(gMaxSampleCount, gMinSampleCount,
		dot(toEye, pin.NormalW));

	float zStep = 1.0f / (float)sampleCount;
	float2 texStep = maxParallaxOffset * zStep;

	// Compute texture gradients with ddx/ddy so we can sample the
	// heightmap inside the dynamic loop below.
	float2 dx = ddx(pin.Tex);
	float2 dy = ddy(pin.Tex);

	int sampleIndex = 0;
	float2 currTexOffset = 0;
	float2 prevTexOffset = 0;
	float2 finalTexOffset = 0;
	float currRayZ = 1.0f - zStep;
	float prevRayZ = 1.0f;
	float currHeight = 0.0f;
	float prevHeight = 0.0f;

	// Ray march to find where the view ray crosses the heightmap.
	while( sampleIndex < sampleCount + 1 )
	{
		// The height is stored in the alpha channel of the normal map.
		currHeight = gNormalMap.SampleGrad(samLinear,
			pin.Tex + currTexOffset, dx, dy).a;

		// Did we cross the height profile?
		if(currHeight > currRayZ)
		{
			// Do ray/line segment intersection test.
			float t = (prevHeight - prevRayZ) /
				(prevHeight - currHeight + currRayZ - prevRayZ);
			finalTexOffset = prevTexOffset + t * texStep;

			// Exit loop.
			sampleIndex = sampleCount + 1;
		}
		else
		{
			++sampleIndex;

			prevTexOffset = currTexOffset;
			prevRayZ = currRayZ;
			prevHeight = currHeight;

			currTexOffset += texStep;
			currRayZ -= zStep;
		}
	}

	// Use the final texture offset for subsequent texture sampling.
	float2 parallaxTex = pin.Tex + finalTexOffset;
//
// Texturing
//
// Default to multiplicative identity.
float4 texColor = float4(1, 1, 1, 1);
if(gUseTexure)
{
// Sample texture.
texColor = gDiffuseMap.Sample( samLinear, parallaxTex );
if(gAlphaClip)
{
// Discard pixel if texture alpha < 0.1. Note that we do this
// test as soon as possible so that we can potentially exit
// the shader early, thereby skipping the rest of the shader
// code.
clip(texColor.a - 0.1f);
}
}
	//
	// Normal mapping
	//
	float3 normalMapSample = gNormalMap.Sample(samLinear, parallaxTex).rgb;
	float3 bumpedNormalW = NormalSampleToWorldSpace(normalMapSample,
		pin.NormalW, pin.TangentW);

	//
	// Lighting.
	//
	float4 litColor = texColor;
	if( gLightCount > 0 )
	{
		// Start with a sum of zero.
		float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
		float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
		float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

		// Sum the light contribution from each light source.
		[unroll]
		for(int i = 0; i < gLightCount; ++i)
		{
			float4 A, D, S;
			ComputeDirectionalLight(gMaterial, gDirLights[i],
				bumpedNormalW, toEye, A, D, S);

			ambient += A;
			diffuse += D;
			spec    += S;
		}

		litColor = texColor*(ambient + diffuse) + spec;
if( gReflectionEnabled )
{
float3 incident = -toEye;
float3 reflectionVector = reflect(incident, bumpedNormalW);
float4 reflectionColor = gCubeMap.Sample(
samLinear, reflectionVector);
litColor += gMaterial.Reflect*reflectionColor;
}
}
//
// Fogging
//
	if( gFogEnabled )
	{
		float fogLerp = saturate( (distToEye - gFogStart) / gFogRange );

		// Blend the fog color and the lit color.
		litColor = lerp(litColor, gFogColor, fogLerp);
	}

	// Common to take alpha from diffuse material and texture.
	litColor.a = gMaterial.Diffuse.a * texColor.a;

	return litColor;
}
§7 Aliasing Issues
Parallax occlusion mapping has aliasing problems as Figure 7 shows. Due to the finite
step size, the ray sometimes misses peaks of the heightmap. Increasing the sampling rate
(i.e., decreasing the step size) helps, but is prohibitively expensive. We note that
tessellation based displacement mapping does not have this problem.
§8 References

[Tatarchuk06] has a detailed description of the algorithm, extends it to support soft
shadows and an LOD system, and gives advice for artists authoring heightmaps.
[Watt05] devotes a chapter in his book to the algorithm. [Drobot09] describes generating
a quadtree structure of the heightmap stored in a texture to improve the performance of
the ray/heightmap intersection test.
[Drobot09] Drobot, Michał, "Quadtree Displacement Mapping with Height Blending,"
GPU Pro: Advanced Rendering Techniques, ed. Wolfgang Engel, A K Peters, Ltd., 2010.
[Tatarchuk06] Tatarchuk, Natalya, "Dynamic Parallax Occlusion Mapping with
Approximate Soft Shadows," Proceedings of the ACM SIGGRAPH Symposium on
Interactive 3D Graphics and Games (I3D 2006), 2006.
[Watt05] Watt, Alan, and Fabio Policarpo, Advanced Game Development with
Programmable Graphics Hardware, AK Peters, Ltd., 2005.