
Unit – IV
Light, Color, Shading and Hidden Surfaces
1. Color Models

1.1 Properties of Light

1. Wavelength and Color Perception: Light is made up of electromagnetic waves, and the
color we perceive is related to the wavelength of that light. Shorter wavelengths correspond to
colors like violet and blue, while longer wavelengths correspond to red.

2. Additive Color Model (RGB): The RGB model is based on the additive nature of light.
Red, Green, and Blue are the primary colors in this model. Colors are created by combining
light of these three colors in varying intensities: adding all three at full intensity produces
white, while the absence of all three gives black (no light). This model is used in digital
displays such as computer screens.

3. Subtractive Color Model (CMY/CMYK): In the subtractive model, colors are created by
absorbing (subtracting) certain wavelengths and reflecting others. The CMY (Cyan, Magenta,
Yellow) model is used in printing. CMYK adds black (K) to enhance color depth and detail.

4. Perceived Brightness (Luminance): Luminance refers to the perceived brightness of light.


In color models like YIQ (used in old TV systems) or YUV, luminance (Y) separates brightness
information from color information (UV or IQ), making it useful for certain broadcasting and
compression systems.

5. Saturation: Saturation refers to the intensity or purity of a color. A fully saturated color is
vivid, while a less saturated color appears washed out or grayish. In models like HSV (Hue,
Saturation, Value), saturation determines the colorfulness relative to its brightness.

6. Hue: Hue represents the dominant wavelength of light that determines the basic color (red,
blue, green, etc.). It’s one of the components in models like HSV and HSL, which provide a
more intuitive understanding of color than RGB.

7. Color Temperature: Refers to the hue characteristics of light sources, described in Kelvin
(K). For example, a lower temperature (around 2000K) gives a warmer reddish light, while
higher temperatures (above 5000K) give a cooler bluish light.

1.2 CIE chromaticity Diagram

The chromaticity diagram represents the spectral colors and their mixtures in terms of the
amounts of the primary colors (Red, Green, Blue) they contain. Chromaticity consists of two
parameters: hue and saturation. Taken together, hue and saturation are known as chrominance.

The chromaticity diagram represents the visible colors using x and y as the horizontal and
vertical axes. The saturated pure spectral colors lie along the curved perimeter of the diagram.
Point C marked in the chromaticity diagram represents a particular white light formed by
combining the primaries with wavelengths:

RED: 700 nm

GREEN: 546.1 nm

BLUE: 435.8 nm

In the chromaticity diagram the colors on the boundary are completely saturated, and the
corners of the diagram correspond to the three primary colors (Red, Green and Blue).

C.I.E. Color Space:

It uses a combination of three color values (close to red, green and blue) plotted in a 3D space.
It maps physical wavelengths to perceived colors, is used to identify the relative similarity and
difference between colors, and can represent the full range of visible colors.

Here,

X: Perceived color

Y: Perceived luminance

Z: Perceived color

The chromaticity diagram encloses all colors perceivable by the human eye.

Uses of C.I.E Chromaticity Diagram in Computer Graphics:

 It is used to evaluate a color against a gamut (range).

 It is used to represent all chromaticities visible to the average person.

 It is used to find the range of colors.

 It is used to find dominant and complementary colors.

Advantages:

 Simple geometric construction for mixing two or more colours.

 It uses both perceptual attributes: hue and saturation.

1.3 RGB

The RGB color model is one of the most widely used color representation methods in computer
graphics. It uses a color coordinate system with three primary colors:

R(red), G(green), B(blue)

Each primary color can take an intensity value ranging from 0(lowest) to 1(highest). Mixing
these three primary colors at different intensity levels produces a variety of colors. The
collection of all the colors obtained by such a linear combination of red, green and blue forms
the cube shaped RGB color space.

The corner of RGB color cube that is at the origin of the coordinate system corresponds to
black, whereas the corner of the cube that is diagonally opposite to the origin represents white.
The diagonal line connecting black and white corresponds to all the gray colors between black
and white, which is also known as gray axis.

In the RGB color model, an arbitrary color within the cubic color space can be specified by its
color coordinates: (r, g, b).

Example:

(0, 0, 0) for black, (1, 1, 1) for white,

(1, 1, 0) for yellow, (0.7, 0.7, 0.7) for gray

Color specification using the RGB model is an additive process. We begin with black and add
the appropriate primary components to yield a desired color. The RGB color model is the one
used in display monitors.

Intensities of the primary colors are added to get the resultant color. Thus the resultant color Cλ
is expressed in RGB components as Cλ = rR + gG + bB
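The following small Python sketch (not part of the original notes; the function name and the clamping behaviour are assumptions for illustration) shows how an RGB color is specified as an additive triple of intensities in the range 0 to 1, matching the examples above.

    def mix_rgb(r, g, b):
        """Clamp each primary intensity to [0, 1] and return the color triple (r, g, b)."""
        clamp = lambda v: max(0.0, min(1.0, v))
        return (clamp(r), clamp(g), clamp(b))

    black  = mix_rgb(0, 0, 0)        # (0, 0, 0): no light
    white  = mix_rgb(1, 1, 1)        # (1, 1, 1): all primaries at full intensity
    yellow = mix_rgb(1, 1, 0)        # red light + green light gives yellow
    gray   = mix_rgb(0.7, 0.7, 0.7)  # a point on the gray axis of the color cube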

1.4 HSV

The HSV color model comes closest to the way humans actually perceive colors. Humans do not
perceive colors the way RGB or CMYK build them; those models simply fuse primary colors to
create the spectrum.

The H stands for Hue, S for Saturation, and V for Value. Imagine a cone whose circumference
runs through the spectrum of hues, and in which the color purity increases from the central axis
out to the edge.
From bottom to top, the brightness (value) increases, so white appears at the center of the top
layer. A pictographic representation is also shown below.

 Hue: Hue specifies the angle around the cylindrical color disk and represents the base color.
The hue value ranges from 0 to 360 degrees.

Angle (in degree) Color


0-60 Red
60-120 Yellow
120-180 Green
180-240 Cyan
240-300 Blue
300-360 Magenta

 Saturation: The saturation value tells us how much of the respective color is present. A
saturation of 100% means the pure color is fully present, while a saturation of 0% means no
color is added, resulting in grayscale.

 Value: The value represents the brightness of the color. A value of 0 represents total
darkness (black), while the maximum value gives full brightness; the color seen at full
brightness still depends on the hue and saturation.
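As an illustration of these three components, the sketch below uses Python's standard colorsys module to convert between RGB and HSV (colorsys returns the hue as a fraction of a full turn, so it is multiplied by 360 to obtain degrees as in the table above); the chosen sample colors are only examples.

    import colorsys

    h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)   # pure red
    print(h * 360, s, v)                           # hue 0 degrees, saturation 1, value 1

    h, s, v = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)   # mid gray
    print(h * 360, s, v)                           # saturation 0, so the hue is irrelevant

    r, g, b = colorsys.hsv_to_rgb(120 / 360, 1.0, 1.0)  # hue 120 degrees gives pure green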

Advantage:

The advantage of HSV is that it approximates how humans perceive color. Hence it gives a more
intuitive description of the colors we see on a computer screen than RGB does.

Applications:

 HSV model is used in histogram equalization.

 Converting grayscale images to RGB color images.

 Visualization of images is easy: by plotting the H and S components we can vary the
V component (or vice-versa) and see the different visualizations.

1.5 CMY

In this model, cyan, magenta and yellow colors are used as primary colors. This model is used
for describing color output to hard-copy devices.

Unlike video monitors which produce a color pattern by combining light from the screen
phosphors, hard-copy devices such as plotters produce a color picture by coating a paper with
color pigments.

The subset of the cartesian co-ordinate system for the CMY model is the same as that for RGB
except that white (full light) instead of black (no light) is at the origin. Colors are specified by
what is removed or subtracted from white light, rather than by what is added to blackness.

Cyan can be formed by adding green and blue light. Therefore, when white light is reflected
from cyan colored ink, the reflected light does not have red component. That is, red light is
absorbed or subtracted, by the ink.

Magenta ink subtracts the green component from incident light, and yellow ink subtracts the blue
component. Therefore, cyan, magenta and yellow are said to be the complements of red, green
and blue respectively.

The corner of the CMY color cube that is at (0, 0, 0) corresponds to white, whereas the corner
of the cube that is at (1, 1, 1) represents black. The following formulas summarize the
conversion between the two color models:

C = 1 - R, M = 1 - G, Y = 1 - B and, conversely, R = 1 - C, G = 1 - M, B = 1 - Y
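A minimal Python sketch of this conversion (the function names are illustrative; all components are assumed to be normalized to the range 0 to 1):

    def rgb_to_cmy(r, g, b):
        # Subtractive complement of the additive primaries
        return (1 - r, 1 - g, 1 - b)

    def cmy_to_rgb(c, m, y):
        return (1 - c, 1 - m, 1 - y)

    print(rgb_to_cmy(1, 0, 0))   # (0, 1, 1): red is printed with magenta and yellow ink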

1.6 Comparison of Color Models

Aspect-by-aspect comparison:

 Model Type: RGB is additive; HSV is a non-linear transformation of RGB; CMY is subtractive.

 Primary Purpose: RGB is used in digital displays and screens; HSV is used in color selection
and image editing; CMY is used in printing processes.

 Primary Components: RGB uses Red, Green, Blue; HSV uses Hue (color), Saturation (intensity),
Value (brightness); CMY uses Cyan, Magenta, Yellow.

 Color Mixing: RGB adds light to form colors; HSV visualizes color in terms of tint, tone and
shade; CMY removes light (absorbs wavelengths) to form colors.

 Representation: RGB is based on light intensities; HSV is based on human perception of color;
CMY is based on pigments/inks absorbing light.

 Black and White: RGB has black at (0, 0, 0) and white at (255, 255, 255); HSV has black at
(any hue, 0 saturation, 0 value) and white at (any hue, 0 saturation, max value); CMY has black
at C=1, M=1, Y=1 and white at C=0, M=0, Y=0.

 Application: RGB is ideal for screens, cameras and monitors; HSV is useful in color pickers,
art and design; CMY is used in printing and color printers.

 Advantages: RGB is directly related to display hardware; HSV is more intuitive for selecting
colors based on perception; CMY allows control over ink usage in printing.

 Disadvantages: RGB is not intuitive for color manipulation; HSV does not map directly to
hardware output; CMY does not work well with displays (requires conversion).

2. Illumination Models

2.1 Color Gamut

Color gamut refers to the complete range of colors that a particular device (like a monitor,
printer, or camera) can display, capture, or reproduce. It is essentially a subset of all possible
colors that can be perceived by the human eye, and different devices can handle different ranges
or gamut of colors based on their technology and design.

Different devices (monitors, printers, cameras, projectors, etc.) handle colors differently.
Understanding the color gamut helps ensure that images appear consistently across different
platforms or devices.

For example, a monitor with a wide gamut (like Adobe RGB) can display more vivid and
saturated colors than a standard sRGB monitor. However, if the image is printed on a printer
with a narrower gamut, certain colors might not look as expected.

Common Color Gamuts:

 sRGB: A standard color gamut used in most consumer-grade monitors, TVs, and web
content. It covers a relatively small portion of the visible spectrum but is widely
supported.

 Adobe RGB: A wider gamut used in professional photography and printing, capable of
showing more vibrant colors, particularly greens and blues, compared to sRGB.

 DCI-P3: Used in digital cinema and high-end displays like 4K TVs, this gamut is wider
than sRGB but slightly narrower than Adobe RGB.

 ProPhoto RGB: An even wider gamut used in high-end image editing and professional
photography. It covers more of the visible spectrum but requires specialized equipment
for proper display.

2.2 Ambient Light



Ambient lighting is a type of lighting that is used to create a realistic environment in computer
graphics. It is usually a soft, warm light that is used to fill in the shadows and create a more
natural look. Ambient lighting can be used to simulate natural lighting, such as the sun, or
artificial lighting, such as fluorescent lights. It is also used to create the illusion of depth and
to provide a more realistic look to a scene.

Ambient lighting is a key element of 3D computer graphics, as it helps add atmosphere and
depth to a scene creating a more realistic, immersive experience. By adding ambient lighting
to a scene, the artist can create a sense of depth and atmosphere that would otherwise be
missing.

Ambient lighting is created by using a combination of several different light sources, such
as directional lights, point lights, spotlights, and area lights. These lights are used to create
a more realistic, three-dimensional environment. The intensity of the ambient light is
determined by the number of light sources used and the brightness of each light source. The
intensity of the ambient light can be adjusted in order to create the desired atmosphere in a
scene.

Example: In a 3D game, even if a character is standing in a shaded area or behind an object
where no light reaches it directly, it is still somewhat visible due to ambient light. Likewise, a
room with no direct light that is still dimly visible owes its visibility to the overall ambient
lighting of the scene.

2.3 Diffuse Reflection

When light strikes a surface, some of the light will reflect off of the surface. The angle at which
the light reflects off the surface is determined by the surface’s normal, which is a vector
perpendicular to the surface. The amount of reflected light also depends on the angle at which
the light strikes the surface, as well as the roughness of the surface. Diffuse reflection is when
light reflects off of a surface in all directions. This is opposed to specular reflection, which is
when light reflects off of a surface in a single direction. Diffuse reflection is what gives surfaces
their matte appearance.

Rough surfaces tend to cause more diffuse reflection than smooth surfaces. This is because
rough surfaces have many different angles that they can reflect light off of, whereas smooth
surfaces have fewer angles. The color of a diffusely reflecting object depends on the
wavelength of the incoming light and the wavelength-dependent absorption coefficient of the
material. For example, a red object will appear red because it absorbs all wavelengths of light
except for red (which it then reflects).

Idiffuse = kd · Il · cos θ = kd · Il (N · L)

where kd is the diffuse reflection coefficient, Il is the intensity of the light source, and θ is the
angle between the unit surface normal N and the unit vector L pointing towards the light source.
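A minimal sketch of this Lambertian term in Python (the vector representation and function names are assumptions for illustration; N and L are taken to be unit vectors):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def diffuse(k_d, i_l, n, l):
        # cos(theta) = N . L for unit vectors; clamp at 0 for surfaces facing away from the light
        return k_d * i_l * max(0.0, dot(n, l))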

There are two main ways to implement diffuse reflection in computer graphics: through a
shader or through environment mapping.

 A shader is a small program that runs on the GPU and is responsible for calculating
the color of each pixel on the screen. In order to create a diffuse reflection, the shader
must take into account the angle at which the light hits the surface, as well as the
roughness of the surface. The rougher the surface, the more scattered the light will be.

 Environment mapping is a technique where a texture map is used to simulate diffuse


reflections. This texture map contains images of surrounding objects, which are then
projected onto the object that needs to be rendered. This technique is often used for
real-time rendering because it can be computationally expensive to calculate diffuse
reflections using shaders.

2.4 Specular Reflection

When we illuminate a shiny surface such as polished metal or an apple with a bright light, we
observe highlight or bright spot on the shiny surface. This phenomenon of reflection of incident
light in a concentrated region around the specular-reflection angle is called specular-reflection.

Due to specular reflection, at the highlight, the surface appears to be not in its original color,
but white, the color of incident light.

The figure below shows the specular reflection direction at a point on the illuminated surface.
The specular reflection angle equals the angle of the incident light with the two angles
measured on opposite sides of the unit normal surface vector N.

As shown in the figure above, R is the unit vector in the direction of ideal specular-reflection.
L is the unit vector directed towards the point light source and V is the unit vector pointing to
the viewer from the surface position.

The angle ϕ between vector R and vector V is called viewing angle. For an ideal reflector
(perfect mirror), incident light is reflected only in the specular-reflection direction. In such
case, we can see reflected light only when vector V and R coincide, i.e., ϕ = 0.

2.5 Combined Diffuse and Specular reflections with Multiple Light Sources

For a single point light source, the combined ambient, diffuse and specular reflection at a point
on an illuminated surface can be written as

I = Iamb + Idiff + Ispec

= Ka Ia + Kd Il (N · L) + Ks Il (N · H)^ns

Light reflection at any surface point is calculated by summing the contributions from the
individual sources.

I = Ka Ia + Σ (i = 1 to n) Ili [ Kd (N · Li) + Ks (N · Hi)^ns ]
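The sketch below illustrates this summation over multiple light sources in Python (a Blinn-Phong style formulation using the halfway vector H; the vector helpers, parameter names and data layout are assumptions of the example, not part of the notes):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = math.sqrt(dot(v, v))
        return tuple(x / length for x in v)

    def shade(n, v, lights, ka, ia, kd, ks, ns):
        """n: unit surface normal, v: unit vector to the viewer,
        lights: list of (unit vector L_i to the light, intensity I_li)."""
        intensity = ka * ia                                # ambient contribution
        for l, i_l in lights:
            h = normalize(tuple(a + b for a, b in zip(l, v)))          # halfway vector H_i
            intensity += i_l * (kd * max(0.0, dot(n, l))               # diffuse term
                                + ks * max(0.0, dot(n, h)) ** ns)      # specular term
        return intensity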

2.6 Comparison between Diffuse and Specular Reflection

 Definition: Diffuse reflection is the reflection of light in many directions from a rough surface;
specular reflection is the reflection of light in a single direction from a smooth surface.

 Surface Type: Diffuse reflection occurs on rough, matte surfaces (e.g., paper, unpolished wood);
specular reflection occurs on smooth, shiny surfaces (e.g., mirrors, polished metal).

 Light Distribution: Diffuse reflection scatters light evenly, resulting in soft and uniform
illumination; specular reflection creates sharp, bright highlights with a distinct direction.

 Angle Dependence: Diffuse intensity depends on the angle between the light source and the
surface normal, according to Lambert's law; specular intensity depends on the angle between the
viewer and the reflection direction of the light source.

 Color Characteristics: Diffuse color appears consistent from various angles and is influenced
mainly by the surface color; specular color and intensity vary significantly with the viewer's
position.

 Visual Effect: Diffuse reflection produces a soft, non-reflective appearance; specular reflection
produces bright highlights and a glossy appearance.

 Mathematical Model: Diffuse reflection is governed by Lambert's cosine law; specular reflection
is governed by the Phong reflection model (or other models for specular highlights).

 Examples: Diffuse reflection occurs on surfaces like clay, chalk or fabrics; specular reflection
on surfaces like glass, water or metal finishes.

2.7 Comparison between Point Source and Diffuse Illumination

Point Source:

A point light source is a theoretical light source that emits light equally in all directions from
a single, infinitesimally small point. In computer graphics, it's often used to simulate lights like
light bulbs or stars, where light spreads out uniformly from a specific location in space.

Diffuse Illumination:

Diffuse illumination refers to the way light scatters evenly when it strikes a rough or matte
surface. Unlike specular reflection, which creates sharp highlights, diffuse illumination spreads
the light in many directions, resulting in soft and uniform lighting. This effect is angle-
dependent and follows Lambert’s cosine law, meaning the brightness decreases as the surface
faces away from the light source.

 Definition: A point source is a light source that emits light equally in all directions from a
single point in space; diffuse illumination is the even scattering of light across a rough surface,
producing uniform lighting.

 Light Emission: For a point source, light radiates uniformly outward from a single point; with
diffuse illumination, light is reflected in many directions after hitting a rough surface.

 Surface Interaction: A point source interacts directly with surfaces, creating sharp shadows and
highlights; diffuse illumination creates soft, uniform lighting across surfaces with no sharp
highlights.

 Light Spread: Point-source intensity decreases with distance (following the inverse square law);
diffuse light intensity depends on the angle of incidence and spreads evenly across the surface.

 Brightness Dependence: Point-source brightness depends on the distance from the light source;
diffuse brightness depends on the surface orientation relative to the light source, according to
Lambert's cosine law.

 Visual Effect: A point source creates sharp contrasts and shadows; diffuse illumination produces
soft, non-directional shading without distinct highlights.

 Examples: A light bulb, street lamp or star is a point source; lighting on matte surfaces like
paper or unpolished walls is diffuse illumination.

3. Shading Algorithms

3.1 Halftone Shading

Many displays and hardcopy devices are bilevel: they can produce only two intensity levels. In
such displays or hardcopy devices we can create an apparent increase in the number of available
intensities. This is achieved by incorporating multiple pixel positions into the display of each
intensity value.

When we view a very small area from a sufficiently large viewing distance, our eyes average the
fine details within the small area and record only the overall intensity of the area. This
phenomenon of an apparent increase in the number of available intensities, obtained by
considering the combined intensity of multiple pixels, is known as halftoning.

The halftoning is commonly used in printing black and white photographs in newspapers,
magazines and books. The pictures produced by halftoning process are called halftones.

In computer graphics, halftone reproductions are approximated using rectangular pixel regions,
say 2×2 or 3×3 pixels. These regions are called halftone patterns or pixel patterns. The figure
below shows halftone patterns used to create a number of intensity levels.
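A small sketch of the idea in Python (the particular on/off growth sequence of the patterns is one common choice, used here only for illustration): a 2×2 cell of bilevel pixels yields five apparent intensity levels.

    PATTERNS = {
        0: [[0, 0], [0, 0]],
        1: [[0, 1], [0, 0]],
        2: [[0, 1], [1, 0]],
        3: [[1, 1], [1, 0]],
        4: [[1, 1], [1, 1]],
    }

    def halftone_cell(intensity):
        """Map an intensity in [0, 1] to one of the five 2x2 bilevel pixel patterns."""
        level = max(0, min(4, round(intensity * 4)))
        return PATTERNS[level]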

3.2 Gouraud Shading

Gouraud shading is a method used in computer graphics to simulate the differing effects of light
and color across the surface of an object. It is based on intensity interpolation: the intensity of
every pixel is calculated by linearly interpolating intensity values across the polygon surface.
Thus, if we know the intensities at two points, we can find the intensity of any point between
them.

Gouraud shading removes the intensity discontinuities between adjacent polygons by matching
the intensity values along their common edges.

Each polygon surface is rendered with Gouraud shading by performing the following
calculations.

 The first step is to determine the average unit normal vector at each polygon vertex.
Consider calculating the average unit normal vector at a point p shared by four polygons. The
average unit normal vector is:

N = (N1 + N2 + N3 + N4) / |N1 + N2 + N3 + N4|

and, in general, for a vertex shared by n polygons:

N = (Σ (i = 1 to n) Ni) / |Σ (i = 1 to n) Ni|

 Apply an illumination model at each vertex to calculate the vertex intensity.

 Linearly interpolate the vertex intensities over the surface of the polygon. For each
scan line, the intensity at the intersection with a polygon edge is determined by linear
interpolation.

When the surfaces are to be rendered in color, the intensities of each color component are
calculated at the vertices. Gouraud can be connected with a hidden surface algorithm to fill in
the visible polygons along each scan line.
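A minimal sketch of the per-scan-line intensity interpolation in Python (the function names and the way edge intersections are passed in are assumptions for illustration): the illumination model is evaluated only at the vertices, and interior pixels receive linearly interpolated intensities.

    def lerp(a, b, t):
        return a + (b - a) * t

    def gouraud_scanline(x_left, i_left, x_right, i_right):
        """Yield (x, intensity) for each pixel between the two edge intersections."""
        for x in range(int(x_left), int(x_right) + 1):
            t = 0.0 if x_right == x_left else (x - x_left) / (x_right - x_left)
            yield x, lerp(i_left, i_right, t)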

3.3 Phong Shading

A more accurate method for rendering a polygon surface is to interpolate the normal vectors and
then apply the illumination model to each surface point. This method, developed by Phong Bui
Tuong, is called Phong shading or normal-vector interpolation shading. It displays more
realistic highlights on a surface and greatly reduces the Mach band effect.

A polygon surface is rendered using Phong shading by carrying out the following steps:

1. Determine the average unit normal vector at each polygon vertex.

2. Linearly interpolate the vertex normals over the surface of the polygon.

3. Apply an illumination model along each scan line to calculate projected pixel intensities
for the surface points.

Interpolation of the surface normal along a polygon edge between two vertices is shown in the
figure:

Incremental methods are used to evaluate normals between scan lines and along each scan line.
At each pixel position along a scan line, the illumination model is applied to determine the
surface intensity at that point.

Intensity calculations using an approximated normal vector at each point along the scan line
produce more accurate results than the direct interpolation of intensities, as in Gouraud
Shading. The trade-off, however, is that Phong shading requires considerably more
calculations.
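The sketch below contrasts this with Gouraud shading: the vertex normals (rather than intensities) are interpolated along the scan line and the illumination model is applied at every pixel. The helper illuminate() stands in for the lighting model of section 2.5; all names are illustrative assumptions.

    import math

    def normalize(v):
        length = math.sqrt(sum(x * x for x in v))
        return tuple(x / length for x in v)

    def phong_scanline(x_left, n_left, x_right, n_right, illuminate):
        for x in range(int(x_left), int(x_right) + 1):
            t = 0.0 if x_right == x_left else (x - x_left) / (x_right - x_left)
            n = normalize(tuple(a + (b - a) * t for a, b in zip(n_left, n_right)))
            yield x, illuminate(n)        # lighting model evaluated at every pixel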

Comparison of Gouraud and Phong shading:

 Gouraud shading was introduced by Henri Gouraud in 1970; Phong shading was introduced by
Phong Bui Tuong in 1973.

 Gouraud shading evaluates the illumination only at the vertices and interpolates; Phong shading
evaluates the illumination at every point of the polygon surface.

 Gouraud shading interpolates colors (intensities) along edges and the scan line; Phong shading
interpolates normals instead of colors.

 Gouraud shading is cheaper; Phong shading is costlier.

 Gouraud shading applies the lighting equation at each vertex; Phong shading applies it at each
pixel.

 Gouraud shading requires average processing power and less time; Phong shading requires more
processing and more time.

 Gouraud shading produces average quality; Phong shading produces good quality.

 Gouraud shading suits smooth surfaces; Phong shading suits smooth and shiny surfaces.

 Gouraud shading gives less accurate results; Phong shading gives more accurate results.

4. Hidden Surfaces

4.1 Introduction

Hidden surfaces refer to parts of objects in a 3D scene that are not visible to the viewer because
they are obscured by other objects. In computer graphics, the goal is to determine which
surfaces or parts of an object should be rendered and which should be hidden. This process is
called hidden surface determination or hidden surface removal.

Need of Hidden Surface Algorithms:

The need for hidden surface algorithms arises from the requirement to correctly render 3D
scenes on 2D displays. When rendering a 3D scene, some objects or parts of objects may be
obscured by others. To ensure that only the visible parts of objects are drawn, hidden surface
algorithms determine which surfaces (or parts of surfaces) are visible from the viewer’s
perspective and which are hidden behind other objects.

Here are the key reasons why hidden surface algorithms are needed:

1. Correct Depth Perception

 In a 3D scene, objects closer to the viewer may obscure parts of objects that are farther
away. Without hidden surface algorithms, all surfaces would be drawn, resulting in
incorrect images where distant objects may appear on top of closer objects, breaking
the illusion of depth.

Example: Imagine you are looking at a building through a tree. Without a hidden surface
algorithm, the building might appear in front of the tree, which would look unrealistic.

2. Optimization and Efficiency

 Rendering every surface of every object in a scene, including hidden ones, is inefficient.
Hidden surface algorithms allow the system to avoid drawing surfaces that won’t be
visible, reducing the computational load and increasing the efficiency of rendering.

Example: In a scene with thousands of objects, rendering hidden surfaces would waste
processing power and slow down the rendering process. Hidden surface algorithms help
minimize redundant calculations.

3. Visual Realism

 Realistic images require that only the visible parts of objects be shown. By determining
which parts of objects are obscured by others, hidden surface algorithms contribute to
producing images that resemble how we perceive objects in the real world.

Example: In video games or simulations, hidden surface algorithms ensure that as you move
through a 3D environment, objects naturally appear and disappear behind other objects,
maintaining immersion and realism.

4. Eliminating Overdraw

 Overdraw happens when multiple surfaces are rendered in the same pixel space, with
only the closest one being visible. Hidden surface algorithms reduce or eliminate
overdraw by ensuring that only the closest surfaces are drawn in each pixel location,
saving rendering resources.

Example: If a tree is blocking a building in the scene, rendering both the tree and the entire
building pixel-by-pixel would be wasteful. Hidden surface algorithms can prevent this
unnecessary overdraw.

4.2 Back Face Detection and Removal

Back Face: The side that faces away from the viewer. This face does not contribute to the
rendered image since it cannot be seen from the camera's perspective.

Back Face Removal (or Back Face Culling) is a technique used in computer graphics to
optimize rendering by eliminating polygons (typically triangles) that are not visible to the
camera. This method leverages the concept that surfaces facing away from the viewer (the back
faces) do not contribute to the visible scene and can be ignored during the rendering process.

If a polygon is visible, its front (light) surface faces towards us and its back (dark) surface faces
away from us. Which of the two cases applies can be identified by examining the sign of the dot
product

N · V, where

N: Normal vector to the polygon surface with Cartesian components (A, B, C).

V: A vector in the viewing direction from the eye (or "camera") position.

We know that the dot product of two vectors equals the product of the lengths of the two vectors
times the cosine of the angle between them. The cosine factor is important because if the vectors
point in the same general direction (0 ≤ θ < π/2), the cosine and hence the dot product are
positive, whereas if they point in opposite directions (π/2 < θ ≤ π), the cosine and the dot
product are negative.

Since V points from the eye into the scene and N points outward from the surface, a polygon that
faces the viewer has N · V < 0, while a polygon with N · V > 0 is a back face and should be
removed.

If the object description has been converted to projection coordinates and the viewing direction
is along the negative zV axis, then V = (0, 0, Vz) with Vz < 0, and

V · N = Vz C

so we only have to consider the sign of C, the z component of the normal vector N. If C is
positive, the polygon faces towards the viewer; if C is negative, it is a back face and can be
removed.
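A small illustrative check in Python (the function name and the default viewing vector are assumptions; the convention matches the text, with the viewing direction along the negative z axis):

    def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
        """normal: outward polygon normal (A, B, C); view_dir: viewing vector V."""
        n_dot_v = sum(n * v for n, v in zip(normal, view_dir))
        return n_dot_v > 0        # normal points away from the viewer, so cull

    print(is_back_face((0, 0, 1)))    # False: C > 0, front face, keep
    print(is_back_face((0, 0, -1)))   # True: C < 0, back face, remove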

4.3 Depth Buffer (z) Algorithm

The Z-buffer algorithm, also known as the depth-buffer algorithm, is used in 3D computer
graphics to determine which objects or parts of objects are visible in a scene. It helps solve the
visibility problem, ensuring that the correct surfaces of objects are displayed, even if multiple
objects overlap or are positioned at different distances from the camera.

When viewing a picture containing opaque (non-transparent) objects and surfaces, it is not
possible to see objects that lie behind other objects closer to the eye. To get a realistic screen
image, these hidden surfaces must be removed. The identification and removal of these surfaces
is called the hidden-surface problem.

The depth-buffer method is one of the commonly used methods for hidden-surface detection. It
is an image-space method: it works on the pixels to be drawn in 2D. For such methods the
running-time complexity is the number of pixels times the number of objects, and the space
complexity is twice the number of pixels, because two arrays of pixels are required: one for the
frame buffer and one for the depth buffer.

The Z-buffer method compares surface depths at each pixel position on the projection plane.
Normally z-axis is represented as the depth.

Calculation of depth:

As we know, the equation of the plane is:

ax + by + cz + d = 0, which implies

z = -(ax + by + d) / c, where c ≠ 0

Calculation of each depth could be very expensive, but the computation can be reduced to a
single add per pixel by using an increment method as shown in figure below:

Let the depth at pixel A = (x, y) be z and the depth at the horizontally adjacent pixel
B = (x + 1, y) be z′. From the plane equation ax + by + cz + d = 0:

z = -(ax + by + d) / c …(1)

z′ = -(a(x + 1) + by + d) / c …(2)

Hence from (1) and (2), we conclude:

z′ = z - a/c

so the depth at each successive pixel along a scan line is obtained from the previous one with a
single subtraction.

Hence, calculation of depth can be done by recording the plane equation of each polygon in
the (normalized) viewing coordinate system and then using the incremental method to find the
depth Z. So, to summarize, it can be said that this approach compares surface depths at each
pixel position on the projection plane. Object depth is usually measured from the view plane
along the z-axis of a viewing system.
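A compact sketch of the algorithm in Python (the polygon representation and the rasterize callback, which is assumed to yield an (x, y, z, color) tuple for every pixel a polygon covers, are illustrative assumptions; smaller z is taken to mean closer to the view plane):

    def z_buffer(width, height, polygons, rasterize, background=(0, 0, 0)):
        depth = [[float("inf")] * width for _ in range(height)]
        frame = [[background] * width for _ in range(height)]
        for poly in polygons:
            for x, y, z, color in rasterize(poly):
                if z < depth[y][x]:          # closer than the depth stored so far
                    depth[y][x] = z          # update the depth buffer
                    frame[y][x] = color      # update the frame buffer
        return frame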

Example:

Let S1, S2, S3 be the surfaces. The surface closest to projection plane is called visible surface.
The computer would start (arbitrarily) with surface 1 and put its value into the buffer. It would
do the same for the next surface. For each overlapping pixel it would then check which surface
is closer to the viewer and display the appropriate color. Since, at view-plane position (x, y),
surface S1 has the smallest depth from the view plane, it is visible at that position.

4.4 Depth Sorts (Painter) Algorithm

Painter’s algorithm was introduced by Newell, Newell and Sancha in 1972.

The algorithm uses both image-space and object-space techniques.

The name of this algorithm is Painter’s because it works the way a painter creates an oil
painting: the artist starts with an empty canvas, first paints a background layer, and then adds
further layers of objects one by one, each new layer partially or fully covering the previous ones
as the painting requires.

This algorithm paints the polygons in the view plane according to their distance from the viewer.
The polygons that are farther from the viewer are painted first; the nearer polygons are then
painted on top of the more distant ones as required.

The polygons or surfaces in the scene are first scanned and then painted into the frame buffer in
order of decreasing distance from the viewer, starting with the polygon of maximum depth.

At first, the depth sort is performed in which the polygons are listed according to
their visibility order or depth priority.

As this algorithm uses the concept of depth priority so it is also called as depth priority
algorithm or priority algorithm.

The frame buffer is first painted with the background color. The farthest polygon is then entered
into the frame buffer: wherever it covers a pixel, the background information is replaced by the
polygon’s information. This is repeated polygon by polygon, moving towards the viewer, and
ends with the nearest polygon.

Usually, comparisons are performed whenever polygons may overlap each other. The most
common method used for the comparison is the mini-max method. For this
purpose, the rectangles are drawn around the polygons such that the rectangles exactly fit the
polygons.

The rectangles are then checked to see whether they overlap each other or not. If the rectangles
do not overlap, the surfaces do not overlap either. If the rectangles do overlap, the surfaces may
overlap and a closer test is required, as shown in the following figure:

To find out whether two rectangles overlap, we need the minimum and maximum x and y values
of the rectangles being tested. If the minimum y value of one rectangle is larger than the
maximum y value of the other, the rectangles do not overlap, and therefore the surfaces do not
overlap either.

The same test is to be performed for the x-coordinates.

If the surfaces do overlap, we do not immediately know which surface should appear on top of
the other. To decide which surface lies on top, the mini-max principle is applied to the depth
values of the two overlapping surfaces.
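A sketch of the depth sort and the mini-max bounding-rectangle test in Python (the polygon representation as a list of (x, y, z) vertices, the function names, and the convention that larger z means farther from the viewer are all assumptions of the example):

    def max_depth(poly):
        return max(v[2] for v in poly)

    def rectangles_overlap(p, q):
        """Mini-max test on the screen-space bounding rectangles of two polygons."""
        px = [v[0] for v in p]
        py = [v[1] for v in p]
        qx = [v[0] for v in q]
        qy = [v[1] for v in q]
        if min(py) > max(qy) or min(qy) > max(py):   # separated along y
            return False
        if min(px) > max(qx) or min(qx) > max(px):   # separated along x
            return False
        return True

    def painters_order(polygons):
        """List the polygons farthest-first, ready to be painted into the frame buffer."""
        return sorted(polygons, key=max_depth, reverse=True)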


4.5 Area subdivision (Warnock)



John Warnock proposed an area subdivision algorithm, also known as the Warnock algorithm.
The algorithm makes extensive use of area coherence when computing which surface in the
scene is visible (i.e. closer to the viewing plane): area coherence avoids recomputing the
visibility of a region whose visible surface has already been determined in an earlier step.

The area subdivision algorithm is based on the divide-and-conquer method: the visible
(viewing) area is successively divided into smaller and smaller rectangles until each simplified
area can be resolved. Consider the window panel onto which the polygon will be projected:

When we subdivide the window panel against the polygon, one of the following cases occurs:

1. Surrounding surface: It’s the case in which the viewing polygon surface completely
surrounds the whole window panel.

2. Overlapping (Intersecting) surface: It is the case in which the window panel (viewport) and
the viewing polygon surface intersect each other.

3. Inside (contained) surface: In this case, the whole polygon surface lies inside the window
panel. This case is the opposite of the first (surrounding surface) case.

4. Outside (disjoint) surface: In this case, the whole polygon surface is completely outside
the window panel.
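A rough Python sketch of the control structure of this subdivision (only an illustration: classify() stands in for the four-case test above and shade() for the actual drawing; the window representation and the stopping rule are assumptions of the example):

    def warnock(window, surfaces, classify, shade, min_size=1):
        x0, y0, x1, y1 = window
        relevant = [s for s in surfaces if classify(s, window) != "disjoint"]
        if not relevant:
            shade(window, None)                      # nothing visible: background color
            return
        if len(relevant) == 1 or (x1 - x0) <= min_size:
            shade(window, relevant)                  # simple enough to resolve (or pixel-sized)
            return
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2        # otherwise split into four quadrants
        for quad in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)):
            warnock(quad, relevant, classify, shade, min_size)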
