Chapter 7 - 9 Final

This document discusses various methods for representing and viewing 3D objects in computer graphics. It begins by explaining that 3D objects can be represented in different ways, such as using polygons, surfaces defined by functions, or space partitioning. It then covers topics like 3D modeling, different representation methods including polygons and quadric surfaces, and the viewing transformation pipeline involving modeling, viewing, projection, and rasterization transformations. Key aspects of the 3D graphics pipeline and different coordinate systems used are also summarized.

School of Computing & Informatics

Department of Computer Science

Chapter 7

Representing 3D Objects &


Viewing Transformation

2/18/23 Computer Graphics MTU – CS


Introduction
 Graphics scenes contain many different kinds of objects: trees, flowers, glass, rock, water, etc.
 There is no single method we can use to describe objects that captures all the characteristics of these different materials.
 A 3D object can be represented in many ways in a graphics application:
- a surface can be generated analytically using a function of its coordinates
- an object can be represented in terms of its vertices, edges and polygons
 This scheme of object modeling using polygon elements is the most general method, and is also convenient for efficient rendering.
Contd…
 Most objects in 3D graphics are represented using a polygonal mesh, but this is not the only possibility.
 Object representation classification:
 Boundary representations (B-reps) – only the surface/boundary of the object is stored, normally using a set of smaller primitives
- e.g. polygon facets/meshes and spline patches
 Space-partitioning representations – the volume of the object is hierarchically broken down into smaller sub-regions until we reach basic 3D primitives such as cubes, spheres, etc.
- e.g. octree representation
 Polygonal meshes, in which the primitives are normally convex polygons, are the most common boundary representation.
Contd…
 3D modeling – the process of developing a mathematical representation of the 3D surface of an object via specialized software.
 The model can be displayed as a 2D image through a process called 3D rendering, or used in a computer simulation of physical phenomena.
 3D models represent a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc.
 3D models are used widely throughout 3D graphics.
Methods:
 Polygon and quadric surfaces: for simple Euclidean objects
 Spline surfaces and construction: for curved surfaces
 Procedural methods: e.g. fractals, particle systems
 Physically based modelling methods
 Octree encoding
 Isosurface displays, volume rendering, etc.
Overview of 3D Rendering
 Modeling:
 Define object in local coordinates
 Place object in world coordinates (modeling transformation)
 Viewing:
 Define camera parameters
 Find object location in camera coordinates (viewing transformation)
 Projection: project object to the viewplane
 Clipping: clip object to the view volume
 Viewport transformation
 Rasterization: rasterize object
Viewing Transformation
 The operations needed to produce views of a 3D scene.
 Generating a view of a 3D scene is similar to taking a photograph of the scene with a camera. To take a photo:
- first we place the camera at a particular position
- then we set the direction of the camera: we point it at the scene, and we can rotate the camera around the 3D object
- finally, when we press the shutter, the scene is adjusted to the size of the camera's window and light from the scene is projected onto the camera film
Contd…
 Viewing in 3D involves the following considerations:
- We can view an object from any spatial position, e.g.
 in front of the object,
 behind the object,
 in the middle of a group of objects,
 inside the object, etc.
- 3D descriptions of objects must be projected onto the flat viewing surface of the output device.
3D Graphics Pipeline
 Viewing pipeline – the sequence of transformations that every primitive has to undergo before it is displayed.
- e.g. the operation of the OpenGL viewing pipeline for 3D graphics
 Note: a different coordinate system applies after each transformation.
 Every transformation can be thought of as defining a new coordinate system.
 Steps involved in 3D viewing:
- specify the view volume
- project onto a projection plane
- map this projected scene onto the view surface
Contd…

Coordinate Transformation


Coordinate Systems
 Several coordinate systems are used in graphics systems:
- World coordinates are the units of the scene to be modelled; we select a "window" of this world for viewing.
- Viewing coordinates are where we wish to present the view on the output device; we can achieve zooming by mapping different-sized clipping windows (via glOrtho2D()) to a fixed-size viewport, and we can have multiple viewports on screen simultaneously.
Contd…
 The clipping window selects what is viewed; the viewport indicates where it appears on the output device and at what size.
 Normalized coordinates are introduced to make the viewing process independent of any output device (paper vs. mobile phone); clipping is normally, and more efficiently, done in these coordinates.
 Device coordinates are specific to the output device: printer page, screen display, etc., whereas normalized device coordinates are device-independent.


Modeling Transformation
 All primitives start off in their own private coordinate system – the modelling coordinate system.
 Modelling – the process of building our 3D scene from a number of basic primitives.
 These primitives must be transformed to get them into the correct position, scale & orientation to construct our scene.
 E.g., a wheel primitive being transformed in four different ways to form the wheels of a car.


Contd…
 Modeling transformation – used to transform primitives so that they construct the 3D scene we want.
- These transformations are normally different for each primitive.
 Once we have transformed all primitives to their correct positions, we say that our scene is in world coordinates.
- This coordinate system has an origin somewhere in the scene, and its axes have a specific orientation.
- All primitives are now defined in the world coordinate system.
Viewing Transformation
 In order to render an image of a scene, we need to define a virtual camera to take the picture.
 Just like cameras in real life, virtual cameras must have a position in space, a direction (which way is it pointing?) and an orientation (which direction is 'up'?)
- a camera position, a 'look' vector and an 'up' vector


Contd…
 Viewing transformation – represents the position, direction and orientation of the virtual camera.
 It defines a new coordinate system – viewing coordinates.
- The origin is normally at the position of the virtual camera, the camera 'looks' along the z-axis, and the y-axis is 'up'.


Projection Transformation
 Now that we know where our primitives are in relation to the virtual camera, we are in a position to 'take' our picture – this involves 'projecting' the 3D primitives onto a 2D image plane.
 Projection from 3D to 2D is defined by straight projection rays (projectors) emanating from the 'centre of projection', passing through each point of the object, and intersecting the 'projection plane' to form the projection.
 Two different types of projection transformation:
 parallel projections, and
 perspective projections
A. Parallel Projection
 For all projection transformations we can imagine the 2D image plane as being positioned somewhere in 3D space.
 Consider a line primitive in 3D viewing coordinates being projected onto the image plane:
- the projection lines of the two end-points are parallel; after they have passed through the image plane, they continue to infinity without ever meeting
- equivalently, the eye (centre of projection) is at infinity
 Parallel projections can be divided into two subtypes: orthographic (or orthogonal) projections and oblique projections.
Contd…
 Orthographic (or orthogonal) projections
 Projection lines intersect the image plane at right angles (they are orthogonal).
 Often used to obtain front, side, & top views of an object.
 Transformation equations are simple: suppose the view plane is perpendicular to the zv axis (i.e. parallel to the xv, yv plane); then any point (x, y, z) on the object projects onto the view plane as xp = x, yp = y.
 Engineering & architectural drawings commonly employ them, because lengths and angles are accurately depicted and can be measured from the drawings.
 Oblique projections
 Projection lines intersect the image plane at an angle (they are oblique).
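The orthographic equations above can be sketched in plain C (the struct and function names are ours, not part of any graphics API):

```c
#include <assert.h>

/* Orthographic projection onto the view plane perpendicular to zv:
 * xp = x, yp = y; the depth coordinate is simply dropped. */
typedef struct { double x, y; } Point2D;

Point2D ortho_project(double x, double y, double z) {
    Point2D p;
    p.x = x;   /* xp = x */
    p.y = y;   /* yp = y */
    (void)z;   /* z is used only for visibility tests, not for position */
    return p;
}
```

This is why parallel lines stay parallel and lengths along the view plane are preserved, which is what makes the projection useful for engineering drawings.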
Contd…
 Orthographic projection of an object (figure)
B. Perspective Projection
 Probably more familiar to you, even if you don't know it by name.
 All projection lines converge to a point – the centre of projection.
 The visual effect is similar to that of the human visual system.
 Properties:
- objects that are far away from the camera appear smaller
- projectors are rays (i.e., non-parallel)
- single centre of projection (i.e. the projection lines converge at a point)
- difficult to determine the exact size and shape of an object from its image alone
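The "farther means smaller" property follows from similar triangles: with the centre of projection at the origin and the image plane at z = d, a point projects to xp = d·x/z, yp = d·y/z. A minimal sketch (function names are ours):

```c
#include <assert.h>

/* Perspective projection: centre of projection at the origin,
 * image plane at distance d along the z-axis. */
double persp_x(double x, double z, double d) { return d * x / z; }
double persp_y(double y, double z, double d) { return d * y / z; }
```

Doubling an object's distance z halves its projected size, which is exactly the foreshortening the slide describes.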
Normalization and Clipping
 Before the final image can be generated, we need to decide which primitives (or parts of primitives) are 'inside' the picture and which are 'outside' – clipping.
- This is performed by defining a number of clipping planes.
 4 clipping planes: any point whose projection line intersects the image plane outside the image bounds (left, right, bottom or top) will not be visible to the camera.
 2 extra clipping planes: near and far.
 6 in total: near, far, left, right, bottom and top.
 These 6 planes together form a bounded volume, inside which points will be visible to the virtual camera.
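A point survives clipping only if it lies between all six planes. For an axis-aligned (orthographic) view volume the test is six comparisons; a sketch (names are illustrative, not any library's API):

```c
#include <assert.h>

/* Returns 1 if (x, y, z) lies inside the axis-aligned view volume
 * bounded by the six clipping planes, 0 otherwise. */
int inside_view_volume(double x, double y, double z,
                       double left, double right,
                       double bottom, double top,
                       double znear, double zfar) {
    return x >= left && x <= right &&
           y >= bottom && y <= top &&
           z >= znear && z <= zfar;
}
```

For a perspective frustum the planes are tilted, but after normalization the same axis-aligned test applies, which is one reason clipping is done in normalized coordinates.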
Contd…
 Different types of projection transformation lead to different shapes of view volume. For example:
 perspective – forms a view volume in the shape of a frustum, whereas
 parallel (orthographic) – leads to a view volume that is a cuboid (a 3D rectangle)


Contd…
 View volume of a parallel projection (figure)
 View volume of a perspective projection (figure)


Contd…
 The process of clipping is more efficient if it takes place in a normalised coordinate system.
 Normalising means scaling all coordinates so that they range between 0 and 1 (or occasionally -1 and +1).
 Therefore the normalisation & clipping transformation is responsible for both:
- removing points that cannot be seen by the virtual camera, and
- scaling the view volume so that all coordinates are normalised.
Contd…
 For example, if we scale to the range [0, 1] then:
- points on the near clipping plane will have a z-coordinate of 0, whilst points on the far clipping plane will have a z-coordinate of 1
- the same applies to the x-coordinate (left and right clipping planes) and the y-coordinate (bottom and top clipping planes)
 Once the view volume has been normalised, and clipping has taken place, our remaining points are in normalised coordinates.


Viewport Transformation
 If we wish, we can draw in the whole of the screen window, but we can also specify a sub-region of it for drawing.
- Whatever region we specify for drawing is the viewport.
- We may want to draw several different (or identical) images in the same display.
 The viewport transformation defines the mapping from our normalised coordinate system onto the viewport in the display window.
 After undergoing this transformation, our final rendered image is in device coordinates (i.e. the coordinate system of the display device).
OpenGL Viewing Transformations
 How to position the camera – to transform the view (camera):
- Method I: use transform functions to position all objects in the correct positions
- Method II:
gluLookAt(eye_x, eye_y, eye_z, center_x, center_y, center_z, up_x, up_y, up_z)
 If gluLookAt() is not called, the camera has a default position (the origin) and orientation (pointing down the negative z-axis, with an up-vector of (0, 1, 0)).
 It should be used after:
glMatrixMode(GL_MODELVIEW); // combines viewing matrix and modeling matrix into one matrix
glLoadIdentity();
OpenGL Projection Transformations
 The GL_PROJECTION matrix is used to define the viewing volume.
 OpenGL provides 2 functions:
 glOrtho – produces an orthographic (parallel) projection
- glOrtho(left, right, bottom, top, near, far)
- specifies clipping coordinates in the six directions
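The matrix glOrtho multiplies onto GL_PROJECTION is documented on its reference page; it scales and translates the box into the [-1, 1] cube. A sketch that builds it in column-major order, as OpenGL stores matrices:

```c
#include <assert.h>

/* The orthographic projection matrix per the glOrtho reference page,
 * stored column-major in m[16]. */
void ortho_matrix(double l, double r, double b, double t,
                  double n, double f, double m[16]) {
    for (int i = 0; i < 16; i++) m[i] = 0.0;
    m[0]  = 2.0 / (r - l);
    m[5]  = 2.0 / (t - b);
    m[10] = -2.0 / (f - n);
    m[12] = -(r + l) / (r - l);   /* tx */
    m[13] = -(t + b) / (t - b);   /* ty */
    m[14] = -(f + n) / (f - n);   /* tz */
    m[15] = 1.0;
}
```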


Contd…
 gluPerspective(fovy, aspect, near, far)
- The field of view is an angle (the bigger the angle, the smaller objects appear).
- The aspect ratio is usually set to width/height.
- The near clipping plane must be > 0.


Contd…
 glFrustum – requires 6 parameters to specify the 6 clipping planes:
- glFrustum(left, right, bottom, top, near, far)
- More general than gluPerspective (the frustum need not be symmetric about the view axis).


OpenGL Viewport Transformations
 glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
- Defines a pixel rectangle in the window into which the final image is mapped.
- x, y specify the lower-left corner of the viewport, and width and height are the size of the viewport rectangle.
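The mapping glViewport establishes, per its reference page, takes normalised device coordinates in [-1, 1] to window coordinates. A sketch of that arithmetic:

```c
#include <assert.h>

/* glViewport's NDC-to-window mapping:
 * xw = (xnd + 1) * width/2 + x,  yw = (ynd + 1) * height/2 + y. */
double ndc_to_window_x(double xnd, int x, int width) {
    return (xnd + 1.0) * (width / 2.0) + x;
}

double ndc_to_window_y(double ynd, int y, int height) {
    return (ynd + 1.0) * (height / 2.0) + y;
}
```

Changing only x, y, width, height re-targets the same normalised image to a different screen rectangle, which is how multiple viewports share one window.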


OpenGL Viewing Pipeline
 OpenGL defines 3 matrices in its viewing pipeline:
 modelview matrix: combines the effects of the modelling and viewing transformations
- Before this transformation, each primitive is defined in its own coordinate system.
- After it, all primitives are in the viewing coordinate system.
 projection matrix: projects 3D viewing coordinates onto the 2D image plane
- Clipping is performed automatically, based on the bounds we specify for the view volume.
 viewport matrix: defines the part of the display that will be used for drawing
 OpenGL maintains one matrix stack for each matrix mode in the viewing pipeline (modelview and projection).
Contd…
 Matrix Modes (glMatrixMode)
 ModelView matrix (GL_MODELVIEW)
- Model-related operations: glBegin, glEnd, glTranslate, glRotate, glScale, gluLookAt, …
 Projection matrix (GL_PROJECTION)
- Set up the projection matrix: glViewport, gluPerspective / glOrtho / glFrustum, …


Contd…
 OpenGL functions that modify the current modelview matrix:
glTranslate*
glRotate*
glScale*
glLoadMatrix*
glMultMatrix*
gluLookAt
 These are typically used to define the modelling part of the modelview matrix.
Contd…
 gluLookAt – used to define the viewing part of the transformation.
- It defines the position, direction and orientation of a virtual camera.
 The basic format:
gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);


OpenGL 3-D Object Functions
 The OpenGL library on its own does not provide any routines for drawing 3D objects – it provides routines for drawing basic primitives such as polygons only, and relies on the programmer to construct 3D objects in the form of polygonal meshes.
 However, the glut library does provide a number of routines for displaying a range of pre-defined 3D objects, such as cubes, cones and cylinders.
 We can divide these routines into those that draw regular polyhedra and those that draw quadric or cubic surfaces.


Regular Polyhedra
 Regular polyhedron (Platonic solid) – a polyhedron in which all of the faces are regular polygons.
 Regular polygon – one in which all sides and angles are equal.
 There are five regular polyhedra in all: cube, tetrahedron, octahedron, dodecahedron and icosahedron.
 All can be drawn in either wireframe or solid form using glut routines:

Polyhedron     Routine for Wireframe Rendering   Routine for Solid Rendering
Cube           glutWireCube(GLdouble size)       glutSolidCube(GLdouble size)
Tetrahedron    glutWireTetrahedron()             glutSolidTetrahedron()
Octahedron     glutWireOctahedron()              glutSolidOctahedron()
Dodecahedron   glutWireDodecahedron()            glutSolidDodecahedron()
Icosahedron    glutWireIcosahedron()             glutSolidIcosahedron()
Quadric/Cubic Surfaces
 Quadric surface – one that can be defined by 2nd-order equations, whilst a cubic surface can be defined by a 3rd-order equation.
 The glut library provides routines for drawing a number of quadric/cubic surfaces:

Object   Routine for Wireframe Rendering                                      Routine for Solid Rendering
Sphere   glutWireSphere(GLdouble radius, GLint nSlices, GLint nStacks)        glutSolidSphere(GLdouble radius, GLint nSlices, GLint nStacks)
Torus    glutWireTorus(GLdouble inRad, GLdouble outRad, GLint nSlices, GLint nStacks)   glutSolidTorus(GLdouble inRad, GLdouble outRad, GLint nSlices, GLint nStacks)
Cone     glutWireCone(GLdouble baseRad, GLdouble height, GLint nSlices, GLint nStacks)  glutSolidCone(GLdouble baseRad, GLdouble height, GLint nSlices, GLint nStacks)
Teapot   glutWireTeapot(GLdouble size)                                        glutSolidTeapot(GLdouble size)
Contd…
 The cone routines take 4 arguments – the radius of the cone at its base, the height of the cone, and the number of slices and stacks that make up the cone.

Wire-frame model – teapot (figure)


Lighting in OpenGL
 One way we can make our scenes look cooler is by adding light to them: call glEnable(GL_LIGHTING) in initRendering, which enables OpenGL's fixed-function (Phong-style) lighting.
- Call glDisable(GL_LIGHTING) if we ever want to turn it back off.
 Then, call glEnable(GL_LIGHT0) & glEnable(GL_LIGHT1) to enable two light sources, numbered 0 and 1.
- You can disable individual light sources by calling glDisable(GL_LIGHT0) and glDisable(GL_LIGHT1).
 More lights are available: GL_LIGHT0, GL_LIGHT1, …, GL_LIGHT7.
 Then, call glEnable(GL_NORMALIZE).
Contd…
Types of lighting implemented in OpenGL:
 Ambient light: shines the same amount on every face.
- No source point; affects all polys; independent of position, orientation and viewing angle.
- Assumed to arrive equally from all directions.
 Diffuse:
- Light scattered (reflected equally) in all directions after it hits a poly; dependent upon the incident angle.
 Specular:
- 'Shininess'; dependent upon both the incident and viewing angles.
 Emissive:
- Makes a poly appear as though it is glowing; does not actually give off light.
 This decomposition has no physical interpretation (real lights do not have "diffuse" or "specular" properties), but it is useful for modelling.
Contd…
GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f}; //Color (0.2, 0.2, 0.2)
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor); //Add ambient light
 ambientColor – an array of 4 GLfloats.
 The first 3 floats represent the RGB intensity of the light – here, white ambient light that isn't very intense.
 Note: the values don't exactly represent color; they represent an intensity of light.
 Lights can be directional (infinitely far away) or positional.
 Positional lights can be either point lights or spotlights.
Contd…
//Add positioned light
GLfloat lightColor0[] = {0.5f, 0.5f, 0.5f, 1.0f}; //Color (0.5, 0.5, 0.5)
GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 1.0f}; //Positioned at (4, 0, 8)
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0); //set color/intensity of light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);
 To make the light somewhat intense, we set the intensity to (0.5, 0.5, 0.5).
 The first 3 elements of the position array are the position, and the last element is 1 (marking a positional light).
//Add directed light
Contd…
Parameter Name              Default Value         Meaning
===========================================================
GL_AMBIENT                  (0.0, 0.0, 0.0, 1.0)  ambient RGBA intensity
GL_DIFFUSE                  (1.0, 1.0, 1.0, 1.0)  diffuse RGBA intensity
GL_SPECULAR                 (1.0, 1.0, 1.0, 1.0)  specular RGBA intensity
GL_POSITION                 (0.0, 0.0, 1.0, 0.0)  position of light
GL_SPOT_DIRECTION           (0.0, 0.0, -1.0)      direction of spotlight
GL_SPOT_EXPONENT            0.0                   spotlight exponent
GL_SPOT_CUTOFF              180.0                 spotlight cutoff angle
GL_CONSTANT_ATTENUATION     1.0                   constant attenuation factor
GL_LINEAR_ATTENUATION       0.0                   linear attenuation factor
GL_QUADRATIC_ATTENUATION    0.0                   quadratic attenuation factor

The spotlight effect is (max{V·d, 0})^GL_SPOT_EXPONENT, applied only within the GL_SPOT_CUTOFF angle, where d is the normalized GL_SPOT_DIRECTION and V is the normalized direction of the vertex from the light position.


Contd…
 Set up our second light source – make it red, with an intensity of (0.5, 0.2, 0.2).
 Instead of giving it a fixed position, we want to make it directional, so that it shines the same amount across our whole scene in a fixed direction.
 To do that, we use 0 as the last element in lightPos1. When we do that, instead of the first three elements representing the light's position, they represent the direction from which the light is shining, relative to the current transformation state.
Contd…
 A face's normal is a vector that is perpendicular to the face.
 OpenGL needs to know the normals to figure out at what angle a light shines on a face.
 If a light shines directly on a face, the face is brighter than if the light shines at an angle.
 E.g., the first face we draw is parallel to the x-y plane.
- It is perpendicular to the z-axis, so our normal is (0, 0, 1).
- We tell OpenGL by calling glNormal3f(0.0f, 0.0f, 1.0f) right before we specify the coordinates of the face.
 It is important that the normal points "outward"; otherwise OpenGL will compute the lighting as if the face were oriented the other way.
Normals
• In OpenGL the normal vector is part of the state.
• Set by glNormal*():
– glNormal3f(x, y, z);
– glNormal3fv(p);
• Usually we want the normal to have unit length so that cosine calculations are correct.
– Length can be affected by transformations.
– Note that scaling does not preserve length.
– glEnable(GL_NORMALIZE) allows for automatic normalization, at a performance penalty.
Specifying Normals
• Normals are specified through glNormal.
• Normals are associated with vertices.
• Specifying a normal sets the current normal.
– It remains unchanged until the user alters it.
– Usual sequence: glNormal, glVertex, glNormal, glVertex, glNormal, glVertex, …
• Usually, we want unit normals for shading.
– glEnable(GL_NORMALIZE)
– This is slow – either normalize them yourself or don't use glScale.
• Evaluators will generate normals for curved surfaces.
– Such as splines. GLUT does it automatically for the teapot, cylinder, …


Contd…
Normal Vector
1. A normal vector (or normal) points in a direction perpendicular to a surface (in general).
2. An object's normal vectors define its orientation relative to light sources. These vectors are used by OpenGL to determine how much light the object receives at its vertices.
3. Lighting for vertices is calculated after the MODELVIEW transformation and before the PROJECTION transformation (vertex shading):
glBegin(GL_POLYGON);
  glNormal3fv(n0);
  glVertex3fv(v0);
  glNormal3fv(n1);
  glVertex3fv(v1);
  glNormal3fv(n2);
  glVertex3fv(v2);
  glNormal3fv(n3);
  glVertex3fv(v3);
glEnd();
Contd…
 The glMaterial functions are used to specify material properties, for example:
GLfloat diffuseRGBA[] = {1.0f, 0.0f, 0.0f, 1.0f};
GLfloat specularRGBA[] = {1.0f, 1.0f, 1.0f, 1.0f};
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuseRGBA);
glMaterialfv(GL_FRONT, GL_SPECULAR, specularRGBA);
glMaterialf(GL_FRONT, GL_SHININESS, 3.0f);

// A directional light
glLightfv(GL_LIGHT0, GL_POSITION, direction);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseRGBA);
Contd…
// A spotlight
glLightfv(GL_LIGHT1, GL_POSITION, position);
glLightfv(GL_LIGHT1, GL_DIFFUSE, diffuseRGBA);
glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, spotDirection);
glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 45.0f);
glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 30.0f);

 When rendering an object, normals should be provided for each face or for each vertex so that lighting can be computed:
glNormal3f(nx, ny, nz);
glVertex3f(x, y, z);
glRotatef(_angle, 0.0f, 1.0f, 0.0f);
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);
//Front
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);
//Right
glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.5f, -1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);
//Back
glNormal3f(0.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, -1.0f, -1.5f);
//Left
glNormal3f(-1.0f, 0.0f, 0.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glEnd();
Lights
Setting the ambient light:
glLightModelfv(GLenum pname, GLfloat *param)
 Parameters:
GL_LIGHT_MODEL_AMBIENT – global RGBA ambient intensity


Materials
Defining material properties:
glMaterialfv(GLenum face, GLenum pname, GLfloat *param)
 Faces:
GL_FRONT, GL_BACK, GL_FRONT_AND_BACK
 Parameters:
GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS
Drawing Example
Adding lighting:
float lPos[] = {1.0, 1.0, 1.0, 1.0};
glLightfv(GL_LIGHT0, GL_POSITION, lPos);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glShadeModel(GL_SMOOTH);

float diffuse[] = {1.0, 0.0, 0.0, 1.0};
glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);

// Setup camera
// Draw objects
B. Shading in OpenGL
 OpenGL directly supports only Gouraud shading and flat shading. Gouraud shading is enabled by default: it computes vertex colors and interpolates the colors across triangle faces.
 Flat shading can be enabled with glShadeModel(GL_FLAT). This renders an entire face with the color of a single vertex, giving a faceted appearance.

Left: flat shading of a triangle mesh in OpenGL. Right: Gouraud shading. Note that the mesh appears smooth, although the coarseness of the geometry is visible at the silhouettes of the mesh. (figure)
Contd…
 With pixel shaders on programmable graphics hardware, it is possible to achieve Phong shading by using a small program to compute the illumination at each pixel with interpolated normals. It is even possible to use a normal map to assign arbitrary normals within faces, with a pixel shader using these normals to compute the illumination.


Texture Mapping
 We would like to give objects a more varied and realistic appearance through complex variations in reflectance that convey textures.
 There are two main sources of natural texture:
 Surface markings – variations in albedo (i.e. the total light reflected from the ambient and diffuse components of reflection), and
 Surface relief – variations in 3D shape which introduce local variability in shading.
 We will focus only on surface markings.

Examples of surface markings and surface relief (figure)


Texture Sources
 Texture procedures
- Textures may be defined procedurally. As input, a procedure requires a point on the surface of an object, and it outputs the surface albedo at that point. Examples of procedural textures include checkerboards, fractals, and noise.

A procedural checkerboard pattern applied to a teapot. The checkerboard texture comes from the OpenGL programming guide chapter on texture mapping. (figure)


Contd…
 Digital images
- To map an arbitrary digital image to a surface, we can define texture coordinates (u, v) ∈ [0, 1]².
- For each point (u0, v0) in texture space, we get a point in the corresponding image.

Texture coordinates of a digital image (figure)
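The lookup from a texture coordinate in [0, 1] to a pixel index of a w × h image can be sketched as nearest-neighbour sampling, with u = 1.0 clamped onto the last pixel (names are ours, not any library's):

```c
#include <assert.h>

/* Maps a texture coordinate in [0, 1] to a pixel index in [0, size-1]
 * (nearest-neighbour; the top edge u = 1.0 is clamped to the last pixel). */
int tex_to_pixel(double coord, int size) {
    int i = (int)(coord * size);
    if (i >= size) i = size - 1;
    if (i < 0) i = 0;
    return i;
}
```

Applying it once with the image width and once with the height converts (u0, v0) into the (column, row) of the sampled texel.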


Texturing in OpenGL
 To use texturing in OpenGL, a texturing mode must be enabled:
glEnable(GL_TEXTURE_2D); //to display 2D textures on polygons
 The dimensions of a texture in OpenGL must be powers of 2, and texture coordinates are normalized, so that (0, 0) is the lower-left corner and (1, 1) is always the upper-right corner.
 OpenGL 2.0, however, does allow textures of arbitrary size, in which case texture coordinates are based on the original pixel positions of the texture.
 Since multiple textures can be present at any time, the texture to render with must be selected: use glGenTextures to create texture handles and glBindTexture to select the texture with a given handle.
Contd…
 A texture can then be loaded from main memory with glTexImage2D. For example:
GLuint handles[2];
glGenTextures(2, handles);
glBindTexture(GL_TEXTURE_2D, handles[0]);
// Initialize texture parameters and load a texture with glTexImage2D
glBindTexture(GL_TEXTURE_2D, handles[1]);
// Initialize texture parameters and load another texture
 There are a number of texture parameters that can be set to affect the behavior of a texture, using glTexParameteri. For example, texture wrap repeating can be enabled to allow a texture to be tiled at the borders, or the minifying and magnifying functions can be set to control the quality of textures as they get very close to or far away from the viewer.
Contd…
 The texture environment can be set with glTexEnvi, which controls how a texture affects the rendering of the primitives it is attached to. An example of setting parameters and loading an image follows:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0, GL_RGB,
             GL_UNSIGNED_BYTE, imageData); // imageData: pointer to the pixel data (name illustrative)
