
Tutorial 13: Cube Mapping

Summary
In real life, shiny objects reflect incoming light - think of a mirror, or the reflection of the setting sun on a pool of water. The lighting model introduced last tutorial can't simulate such reflections, and to do so would require expensive ray tracing operations. Instead, games use an environment map - a set of static images representing the surrounding scenery that can be sampled from to emulate real reflections. This tutorial will demonstrate how to use these environment maps to achieve two things - reflections, and sky boxes.

New Concepts
Cube maps, cube samplers, environment mapping, reflecting a direction vector, sky boxes

Environment Maps
Environment maps have long been used to add computationally inexpensive reflections to graphical scenes. It is now common in games to see water puddles and other wet surfaces reflect the sky, and for shiny materials like metal and glass to reflect light from the environment around them.

An example of environment map reections on a car windscreen in Deus Ex: Human Revolution

Cube Maps
Modern games generally use cube maps, which contain enough visual data to create reflections for any given surface angle. As the name suggests, cube maps are made up of 6 separate images, which, when formed into a cube, form a fully enclosed scene. Imagine taking a photograph of your surroundings at 90 degree intervals, including the ceiling and floor - if you stuck those photographs together, you'd create a cube containing a seamless image of everything you could see. It might look something like this:

An example cube map

Generally these cube map images are an artist-created static scene of mountains or other such far-away features; although it's possible to render a dynamic cube map, this is prohibitively computationally expensive, as the scene would have to be processed once for each cube side.

Sampling Cube Maps


Just like any other texture, a cube map must be sampled before it can be used. However, instead of a 2D texture coordinate, as with ordinary 2D textures, cube maps are sampled with a specialised texture sampler using a normalised direction vector - just like the vertex normals you've been using. Imagine the direction vector as a ray that begins in the middle of the cube formed by the cube map textures, with the point where the ray intersects the cube being the texel sampled. So, for example, a direction vector of (0, 1, 0) will sample from the middle of the positive y axis cube map texture, which in the example cube map above would be the sky. The cube map sampler automatically determines which texture will be sampled for a given direction vector, and in modern rendering APIs sampling can also occur between the cube map textures. Imagine a 45° reflection vector - the ray of this vector would intersect exactly at the corner between two cube map textures, so to sample a correct texel, bilinear interpolation between the two textures takes place.

Left: Sampling direction vectors Right: An example of a vector resulting in cross-texture interpolation

Reections
So, we now know that cube maps are sampled using direction vectors - but which direction vectors? We can't use vertex normals, as they would sample the same point on the cube map regardless of where the viewer was - imagine a mirror that reflected only along its normal; it'd show a static scene, no matter where you viewed it from. So, to sample a cube map correctly we need a reflection vector. Using a reflection vector takes into account how light bounces off a surface from the perspective of the viewer. To calculate the reflection vector for a surface, we need its normal, and the incident vector that runs from the view point to the surface - just like the calculations for specularity. Only instead of using them to calculate the Blinn-Phong half-angle, we're going to use them to calculate the angle of reflection from the angle of incidence, like so:

R = I - 2N(N · I)

Where R is the reflected vector, I is the incident vector, N is the normal, and N · I is their dot product. This will produce an angle identical to the angle of incidence, but pointing away from the surface being reflected.

An example reflection vector calculated from a normal and an incident vector, showing that the angle of incidence equals the angle of reflection

As the angle of reflection is view-dependent, using it to sample the cube map will result in accurate reflections that can be used in dynamic scenes.

Skyboxes
Another popular use for cube maps is that of sky boxes. Instead of having a solid background colour, it has long been popular to have an image in the background that moves with the viewpoint, providing a more immersive experience. Even early 3D shooters like id Software's Doom had a simple form of viewer-oriented background sky:

Simple parallax sky texture in Doom II: Hell On Earth

As cube map textures contain the visual data for an all-encompassing view, they can be used to create a realistic background to a game world. Sometimes this is created by literally enclosing a game level inside a giant cube, with the separate cube map textures applied as normal 2D textures on the cube's faces, but it's possible to use the specialised cube map sampler to project the skybox onto the scene, gaining the advantages cube mapping brings, such as the seamless cross-texture interpolation mentioned earlier. Skyboxing with cube maps is usually performed, oddly enough, using a single quad. A quad is drawn to fill the screen, similar to a post-processing stage, but often as the first thing rendered into a scene. Then, instead of applying the cube map textures as standard 2D textures to the quad, sampled via texture coordinates, the direction vectors formed by normalising the vertex positions are used instead. These vertex direction vectors are then interpolated to provide the correct direction vector for each fragment.

Left: Head-on view of the vertex positions between the camera and the full screen quad. Right: Side view showing interpolated direction vectors formed from normalised vertex positions (shown in italics)

For the skybox to rotate as the viewpoint moves, the direction vectors obtained should be rotated by the camera matrix - the inverse of the view matrix used in vertex shaders. For example, consider a normalised direction vector pointing at (0, 0, -1). This would sample directly from the centre of the negative z cube map texture - if our camera has an identity matrix for its view, this'll be correct. Now, consider if our camera looks straight up. As we're drawing a full screen quad, the direction vector would still be (0, 0, -1). If we rotate it by the camera's inverted transform, the direction vector will end up at (0, 1, 0), sampling the positive y cube map:

[ 1  0  0  0 ] [  0 ]   [ 0 ]
[ 0  0 -1  0 ] [  0 ] = [ 1 ]
[ 0  1  0  0 ] [ -1 ]   [ 0 ]
[ 0  0  0  1 ] [  0 ]   [ 0 ]

By rendering the quad with a skybox cube map shader at the start of a frame, the virtual worlds you create for your games will have the appearance of having mountainous regions or huge skyscrapers, all without having to render massive amounts of polygons.

TODO: Panoramic view of a skybox rendered using cube maps

Example Program
To demonstrate how to perform cube mapping, we're going to create an example program based off last tutorial's lit heightmap scene - only this time around, we'll have a skybox instead of a dull grey background, and the heightmap will have some simple water that correctly reflects the skybox. As we've not used the texture matrix in a while, to finish the scene we'll make the water move using texture matrix manipulation. To do all this, we'll need three shaders in total - the per-fragment lighting shader we created last tutorial, and two new ones: one each to handle the skybox and water reflections. So, in your Tutorial12 solution folder, you should create 3 new text files - skyboxVertex.glsl, skyboxFragment.glsl, and reflectFragment.glsl - as well as a new Renderer class that inherits from OGLRenderer, and a main file, tutorial12.cpp. Our main file is just the same as in the previous few tutorials, so we won't go over it again.

Renderer header file
Our example program overrides the RenderScene and UpdateScene public virtual functions, as with the other tutorials in this series. It also has 3 new protected functions and pointers to 3 Shader instances, as well as a HeightMap, a Mesh, a Light, and a Camera. Finally, we have an OpenGL texture name for our cube map, and a float member variable to control our water movement.

 1 #pragma once
 2
 3 #include "../Framework/OGLRenderer.h"
 4 #include "../Framework/Camera.h"
 5 #include "../Framework/HeightMap.h"
 6
 7 class Renderer : public OGLRenderer {
 8 public:
 9     Renderer(Window &parent);
10     virtual ~Renderer(void);
11
12     virtual void RenderScene();
13     virtual void UpdateScene(float msec);
14
15 protected:
16     void DrawHeightmap();
17     void DrawWater();
18     void DrawSkybox();
19
20     Shader*    lightShader;
21     Shader*    reflectShader;
22     Shader*    skyboxShader;
23
24     HeightMap* heightMap;
25     Mesh*      quad;
26     Light*     light;
27     Camera*    camera;
28
29     GLuint     cubeMap;
30     float      waterRotate;
31 };
renderer.h

Renderer Class le
We start our Renderer class constructor by initialising the Camera, HeightMap, Light, and Mesh class instances. We then initialise our three Shaders, and link them, returning if the linking process fails. Nothing new so far - the camera position and light constructor values are just as they were in the last tutorial.

 1 #include "Renderer.h"
 2
 3 Renderer::Renderer(Window &parent) : OGLRenderer(parent) {
 4     camera    = new Camera();
 5     heightMap = new HeightMap("../Textures/terrain.raw");
 6     quad      = Mesh::GenerateQuad();
 7
 8     camera->SetPosition(Vector3(RAW_WIDTH * HEIGHTMAP_X / 2.0f,
 9                         500.0f, RAW_WIDTH * HEIGHTMAP_X));
10
11     light = new Light(Vector3((RAW_HEIGHT * HEIGHTMAP_X / 2.0f), 500.0f,
12                       (RAW_HEIGHT * HEIGHTMAP_Z / 2.0f)),
13                       Vector4(0.9f, 0.9f, 1.0f, 1),
14                       (RAW_WIDTH * HEIGHTMAP_X) / 2.0f);
15
16     reflectShader = new Shader("../Shaders/PerPixelVertex.glsl",
17                                "reflectFragment.glsl");
18     skyboxShader  = new Shader("skyboxVertex.glsl",
19                                "skyboxFragment.glsl");
20     lightShader   = new Shader("../Shaders/PerPixelVertex.glsl",
21                                "../Shaders/PerPixelFragment.glsl");
22
23     if (!reflectShader->LinkProgram() || !lightShader->LinkProgram() ||
24         !skyboxShader->LinkProgram()) {
25         return;
26     }
renderer.cpp

Next up, we apply the water.tga texture to the quad, and the rock texture and bump map to the heightMap. On line 37 is the first real new piece of code - the initialisation of the cube map. We're going to use a handy function of the SOIL library to generate our cube map, which takes 6 filenames, corresponding to the positive and negative x, y, and z axis textures required to form the full cube - the order is important! See how even though we have 6 cube map textures, only one OpenGL texture name is required. We then check that all of the textures have been correctly generated, and if so, enable repeating on the water and rock textures.
27     quad->SetTexture(SOIL_load_OGL_texture("../Textures/water.TGA",
28         SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS));
29
30     heightMap->SetTexture(SOIL_load_OGL_texture(
31         "../Textures/Barren Reds.JPG", SOIL_LOAD_AUTO,
32         SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS));
33
34     heightMap->SetBumpMap(SOIL_load_OGL_texture(
35         "../Textures/Barren RedsDOT3.tga", SOIL_LOAD_AUTO,
36         SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS));
renderer.cpp

37     cubeMap = SOIL_load_OGL_cubemap(
38         "../Textures/rusted_west.bmp",  "../Textures/rusted_east.bmp",
39         "../Textures/rusted_up.bmp",    "../Textures/rusted_down.bmp",
40         "../Textures/rusted_south.bmp", "../Textures/rusted_north.bmp",
41         SOIL_LOAD_RGB, SOIL_CREATE_NEW_ID, 0);
42
43     if (!cubeMap || !quad->GetTexture() ||
44         !heightMap->GetTexture() || !heightMap->GetBumpMap()) {
45         return;
46     }
47
48     SetTextureRepeating(quad->GetTexture(), true);
49     SetTextureRepeating(heightMap->GetTexture(), true);
50     SetTextureRepeating(heightMap->GetBumpMap(), true);
renderer.cpp

Finally in this constructor, we initialise the waterRotate float, enable depth testing, and set up a perspective projection matrix. We also enable alpha blending, as we want the water to be slightly transparent. We also use glEnable with a new OpenGL symbolic constant, GL_TEXTURE_CUBE_MAP_SEAMLESS. This enables bilinear filtering to be applied across the edges of the 6 cube map textures, so we get a seamless skybox. If you tried to do this tutorial without bilinear filtering, you'd get pixellated skies, and if you used bilinear filtering without the seamless extension enabled, you'd probably see lines at the corners of the individual cube map textures, breaking the illusion.

53     init = true;
54     waterRotate = 0.0f;
55     projMatrix = Matrix4::Perspective(1.0f, 15000.0f,
56         (float)width / (float)height, 45.0f);
57
58     glEnable(GL_DEPTH_TEST);
59     glEnable(GL_BLEND);
60     glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
61     glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
62 }
renderer.cpp

As ever, in the destructor we delete everything we created in the constructor - and make currentShader equal 0 to prevent double deletion problems with the destructor of OGLRenderer.

63 Renderer::~Renderer(void) {
64     delete camera;
65     delete heightMap;
66     delete quad;
67     delete reflectShader;
68     delete skyboxShader;
69     delete lightShader;
70     delete light;
71     currentShader = 0;
72 }

renderer.cpp

Our UpdateScene function has something new in it this time - we want our water texture to slowly rotate, to give the impression of flowing water, so on line 76 we increase the waterRotate member variable that we're going to use to control this by a small amount, governed by the frame time. Other than that, it's just the usual - update our camera, and generate a new view matrix from it.

73 void Renderer::UpdateScene(float msec) {
74     camera->UpdateCamera(msec);
75     viewMatrix = camera->BuildViewMatrix();
76     waterRotate += msec / 1000.0f;
77 }
renderer.cpp

Our scene rendering is split up again, this time into DrawSkybox, DrawHeightmap, and DrawWater, so all RenderScene does is clear the screen, call these functions, and swap the buffers. The order is important - the water transparency effect won't work if DrawWater is called first, as the heightmap colour data won't be there to blend with, and DrawSkybox must come before the others, as otherwise our skybox would be drawn over the top of the heightmap and water!

78 void Renderer::RenderScene() {
79     glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
80
81     DrawSkybox();
82     DrawHeightmap();
83     DrawWater();
84
85     SwapBuffers();
86 }
renderer.cpp

Our first rendering sub-function is DrawSkybox. This will draw a full-screen quad similar to the one used in the post-processing tutorial, which we will use to project our cube map as a background skybox. First off, the function is bookended by calls to disable and enable writes to the depth buffer - we don't want the quad to fill the depth buffer and cause our heightmap and water to be discarded. Next, we enable our skybox shader, and update the shader matrices - we don't actually use the model matrix in the vertex shader, so there's no need to change it. After that, we simply draw our quad, and unbind the skybox shader.
87 void Renderer::DrawSkybox() {
88     glDepthMask(GL_FALSE);
89     SetCurrentShader(skyboxShader);
90
91     UpdateShaderMatrices();
92     quad->Draw();
93
94     glUseProgram(0);
95     glDepthMask(GL_TRUE);
96 }
renderer.cpp

Next up is DrawHeightmap. Everything in this function should be familiar to you from last tutorial. We enable the per-fragment lighting shader, and set the shader light and shader uniform variables, before finally drawing the height map. Easy!

 97 void Renderer::DrawHeightmap() {
 98     SetCurrentShader(lightShader);
 99     SetShaderLight(*light);
100     glUniform3fv(glGetUniformLocation(currentShader->GetProgram(),
101         "cameraPos"), 1, (float *)&camera->GetPosition());
102
103     glUniform1i(glGetUniformLocation(currentShader->GetProgram(),
104         "diffuseTex"), 0);
105
106     glUniform1i(glGetUniformLocation(currentShader->GetProgram(),
107         "bumpTex"), 1);
108
109     modelMatrix.ToIdentity();
110     textureMatrix.ToIdentity();
111     UpdateShaderMatrices();
112
113     heightMap->Draw();
114
115     glUseProgram(0);
116 }
renderer.cpp

The last rendering sub-function is DrawWater. It's pretty similar to DrawHeightmap, but note that we use the shader reflectShader, and instead of binding texture unit 1 to a uniform called bumpTex, we are binding texture unit 2 to a uniform called cubeTex. The water in our scene is simply going to be a single quad, which needs to be translated to the middle of the heightmap, scaled out across the heightmap, and rotated so that it intersects the heightmap horizontally instead of vertically. The heightX and heightZ local variables hold the x and z axis positions of the centre of the heightmap, while heightY is the water level - you can modify this to your liking. We also set the texture matrix in this function, to make the texture coordinates repeat, and be rotated about the z axis by the value waterRotate.

117 void Renderer::DrawWater() {
118     SetCurrentShader(reflectShader);
119     SetShaderLight(*light);
120     glUniform3fv(glGetUniformLocation(currentShader->GetProgram(),
121         "cameraPos"), 1, (float *)&camera->GetPosition());
122
123     glUniform1i(glGetUniformLocation(currentShader->GetProgram(),
124         "diffuseTex"), 0);
125
126     glUniform1i(glGetUniformLocation(currentShader->GetProgram(),
127         "cubeTex"), 2);
128
129     glActiveTexture(GL_TEXTURE2);
130     glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMap);
131
132     float heightX = (RAW_WIDTH * HEIGHTMAP_X / 2.0f);
133
134     float heightY = 256 * HEIGHTMAP_Y / 3.0f;
135
136     float heightZ = (RAW_HEIGHT * HEIGHTMAP_Z / 2.0f);
137
138     modelMatrix =
139         Matrix4::Translation(Vector3(heightX, heightY, heightZ)) *
140         Matrix4::Scale(Vector3(heightX, 1, heightZ)) *
141         Matrix4::Rotation(90, Vector3(1.0f, 0.0f, 0.0f));
142
143     textureMatrix = Matrix4::Scale(Vector3(10.0f, 10.0f, 10.0f)) *
144         Matrix4::Rotation(waterRotate, Vector3(0.0f, 0.0f, 1.0f));
145
146     UpdateShaderMatrices();
147
148     quad->Draw();
149
150     glUseProgram(0);
151 }
renderer.cpp

Mesh Class
In this tutorial, and the next tutorial, we're going to be using the GenerateQuad function to create geometry to perform lighting calculations on. That means our quad must have normals and tangents, just like the HeightMap, and anything else we draw. Fortunately, we can do this easily, and we don't even have to use the GenerateNormals and GenerateTangents functions! The quad's normal simply points down negative z, and its tangent points along positive x, so we can explicitly set these values, in the same way we set the colour per vertex. So, our GenerateQuad function now looks like this:

 1 Mesh *Mesh::GenerateQuad() {
 2     Mesh *m = new Mesh();
 3
 4     m->numVertices = 4;
 5     m->type = GL_TRIANGLE_STRIP;
 6
 7     m->vertices      = new Vector3[m->numVertices];
 8     m->textureCoords = new Vector2[m->numVertices];
 9     m->colours       = new Vector4[m->numVertices];
10     m->normals       = new Vector3[m->numVertices];
11     m->tangents      = new Vector3[m->numVertices];
12
13     m->vertices[0] = Vector3(-1.0f, -1.0f, 0.0f);
14     m->vertices[1] = Vector3(-1.0f,  1.0f, 0.0f);
15     m->vertices[2] = Vector3( 1.0f, -1.0f, 0.0f);
16     m->vertices[3] = Vector3( 1.0f,  1.0f, 0.0f);
17
18     m->textureCoords[0] = Vector2(0.0f, 1.0f);
19     m->textureCoords[1] = Vector2(0.0f, 0.0f);
20     m->textureCoords[2] = Vector2(1.0f, 1.0f);
21     m->textureCoords[3] = Vector2(1.0f, 0.0f);
22
23     for (int i = 0; i < 4; ++i) {
24         m->colours[i]  = Vector4(1.0f, 1.0f, 1.0f, 1.0f);
25         m->normals[i]  = Vector3(0.0f, 0.0f, -1.0f);
26         m->tangents[i] = Vector3(1.0f, 0.0f, 0.0f);
27     }
28
29     m->BufferData();
30
31     return m;
32 }
Mesh.cpp


Cubemap Reflection Shader


The first new shader we are going to write in this tutorial will use our new cube map to create reflections on the water in the scene. We can reuse the per-fragment lighting vertex shader, but we'll need a new fragment shader, so the following program will go in the reflectFragment.glsl file.

Fragment Shader
We start off in familiar fashion to the per-fragment lighting fragment shader, with two texture samplers, uniform variables for the lighting and camera, and the Vertex interface block. Note, however, that the cubeTex sampler is of a new type - samplerCube. This new sampler type allows us to sample our cube map using a direction vector, and will automatically handle which of the 6 separate textures the resulting sample will be taken from.

 1 #version 150 core
 2
 3 uniform sampler2D   diffuseTex;
 4 uniform samplerCube cubeTex;
 5
 6 uniform vec4  lightColour;
 7 uniform vec3  lightPos;
 8 uniform vec3  cameraPos;
 9 uniform float lightRadius;
10
11 in Vertex {
12     vec4 colour;
13     vec2 texCoord;
14     vec3 normal;
15     vec3 worldPos;
16 } IN;
17
18 out vec4 gl_FragColor;
reflectFragment.glsl

We then calculate the diffuse colour, incident vector, light distance, and light attenuation, just like we do in the per-fragment lighting fragment shader. On line 25, though, we have some new code. We sample the cubeTex sampler with a direction vector, which we calculate using the GLSL reflect function - which, as its name implies, calculates the angle of reflection that was introduced earlier. On line 27 we combine the attenuated light colour and the sampled cube map reflection colour. That's everything! The samplerCube variable and reflect function do most of the hard work - all we have to do is decide how to blend in the result.

19 void main(void) {
20     vec4  diffuse    = texture(diffuseTex, IN.texCoord) * IN.colour;
21     vec3  incident   = normalize(IN.worldPos - cameraPos);
22     float dist       = length(lightPos - IN.worldPos);
23     float atten      = 1.0 - clamp(dist / lightRadius, 0.2, 1.0);
24     vec4  reflection = texture(cubeTex,
25                        reflect(incident, normalize(IN.normal)));
26
27     gl_FragColor = (lightColour * diffuse * atten) * (diffuse + reflection);
28 }
reflectFragment.glsl


Cubemap Skybox Shader


To create our skybox on screen, we're going to sample from a samplerCube again. This time, however, we'll be generating the direction vector ourselves. Earlier, in the DrawSkybox function, we rendered a full-screen quad. To draw the quad we've cheated a bit - in this shader we're actually not going to use the model and view matrices to translate the quad anywhere or draw it in relation to the camera. This means it'll be drawn with its origin at the clip space origin, so we move it back 1 unit, so the quad is drawn right up against our near plane, covering the screen entirely! We will use the projection matrix, though - so changes in the field of view will be reflected in the shape of the skybox cube, by stretching the sides of the quad out beyond the sides of the screen. We are going to use the viewMatrix elsewhere, to rotate a direction vector, which will then be interpolated like any other vertex data passed to the fragment shader, so there will be a unique direction vector for each fragment. As we don't rotate the quad's vertices, the normalised direction vector generated from the vertex positions for the middle of the screen will always be (0, 0, -1), so we rotate it by the view matrix to point in the direction our camera is looking.

Vertex Shader
We want the direction vector used for cube map sampling to rotate with the camera view so that the correct cube map texel is sampled, so on line 14 we rotate the direction vector by multiplying it by the view matrix... sort of. We don't want to apply the translation component to the rotation, as this would warp our direction vector as the camera moved away from the origin, so we downcast the view matrix to a mat3 - remember, the translation component is in the fourth column of the view matrix. We also actually use the transpose of the view matrix. Why is this? As we're rotating an object rather than the camera, we need the inverse of the view matrix to rotate the world objects correctly - remember how we inverted the camera matrix by negating all of the camera class variables in the BuildViewMatrix function? We're going to use another trick to avoid the inverse operation here, too. If a matrix contains no scaling or shearing - that is, it is a pure rotation - then its inverse is equal to its transpose, a much quicker operation to perform! Try it - both inverse and transpose will return the same results.

 1 #version 150 core
 2 uniform mat4 modelMatrix;
 3 uniform mat4 viewMatrix;
 4 uniform mat4 projMatrix;
 5
 6 in vec3 position;
 7
 8 out Vertex {
 9     vec3 normal;
10 } OUT;
11
12 void main(void) {
13     vec3 tempPos = position - vec3(0, 0, 1);
14     OUT.normal   = transpose(mat3(viewMatrix)) * normalize(tempPos);
15     gl_Position  = projMatrix * vec4(tempPos, 1.0);
16 }
skyboxVertex.glsl


Fragment Shader
The fragment shader is really easy - the output colour is simply the result of sampling the cubeTex sampler using the interpolated direction vector we kept in the normal variable. We normalise it to unit length to account for minor errors that can occur with interpolation.

 1 #version 150 core
 2
 3 uniform samplerCube cubeTex;
 4 uniform vec3 cameraPos;
 5
 6 in Vertex {
 7     vec3 normal;
 8 } IN;
 9
10 out vec4 gl_FragColor;
11
12 void main(void) {
13     gl_FragColor = texture(cubeTex, normalize(IN.normal));
14 }
skyboxFragment.glsl

Tutorial Summary
If everything works properly, you should see the now-familiar heightmap, with per-fragment lighting. However, you should now have rocky mountains in the distance, courtesy of the cube map and the skybox shader. Try looking around with the mouse - you should be able to see mountains all around, and be able to look straight up and see clouds. What's more, you should see water has filled your terrain - a closer inspection should show that the cube map is also reflected correctly in the water. If you look straight down into the slowly moving water, you should see the clouds reflected back at you. Not bad for being a cube map and a quad with a shader applied to it! Another pretty simple effect, but you should now know a little bit about environment mapping using cube maps, and how to sample cube maps using direction vectors. In the next tutorial we're going to add something else to our graphical scenes - shadows!

Further Work
1) Last tutorial you were introduced to bump maps. The water texture used in this tutorial has a bump map in the Textures folder, called waterDOT3.tga. How would you modify the cube map reflection shader to make use of this bump map?

2) By using the quad mesh and modifying the skybox shader, you should be able to add a perfectly reflecting mirror in your scene. Or perhaps it's a portal to somewhere else? The quad doesn't necessarily have to use the same cube map as the skybox...

3) In the example program, the heightmap is drawn on top of the skybox. That can result in quite a lot of overdraw, where pixels are drawn multiple times in a frame. Can you think of any way of reducing this overdraw? Investigate how the stencil buffer introduced in tutorial 5 could be used to do this.

4) Take a look at http://brainwagon.org/2002/12/05/fun-with-environment-maps/


Appendix A: Generating Cube Maps the hard way


In tutorial 3, you saw how to create OpenGL textures from texture data, using the OpenGL function glTexImage2D. It turns out that generating cube maps uses the same function - remember, each of the individual textures is still 2D, even if we can sample the end result using a 3D direction vector. The main change over 2D textures is the first parameter of the glTexImage2D function. Instead of using GL_TEXTURE_2D, one of 6 symbolic constants is used - one for each axis direction. In the semi-pseudocode below, the array axis on line 7 contains the required symbolic constants. The only other change is to glBindTexture and glTexParameteri - as with glTexImage2D, instead of using the symbolic constant GL_TEXTURE_2D, GL_TEXTURE_CUBE_MAP is used. Not much to it, really!

 1 string names[6] = {
 2     "west.tga",  "east.TGA",
 3     "up.TGA",    "down.TGA",
 4     "south.TGA", "north.TGA"
 5 };
 6
 7 GLuint axis[6] = {
 8     GL_TEXTURE_CUBE_MAP_POSITIVE_X,
 9     GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
10     GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
11     GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
12     GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
13     GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
14 };
15
16 GLuint skyBoxCubeMap;
17 glGenTextures(1, &skyBoxCubeMap);
18 glBindTexture(GL_TEXTURE_CUBE_MAP, skyBoxCubeMap);
19
20 for (int i = 0; i < 6; ++i) {
21     Texture *t = <Texture data from filename names[i]>
22
23     glTexImage2D(axis[i], t->mipmaplevel, t->internalFormat,
24                  t->width, t->height, t->border,
25                  t->format, t->datatype, t->data);
26
27     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
28     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
29     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP);
30     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP);
31 }
32 glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
OpenGL Cube Map Loading

