COMPUTER GRAPHICS 5units
UNIT-I
Computer Graphics is the creation of pictures with the help of a computer. The
end product of computer graphics is a picture; it may be a business graph, a drawing, or
an engineering design.
Today computer graphics is entirely different from the earlier one. It is
interactive: the user can control the structure of an object through various input
devices.
Suppose a shoe manufacturing company wants to show its shoe sales over five
years. A vast amount of information would have to be stored, so a lot of time and
memory would be needed, and the result would be hard for a common person to
understand. In this situation graphics is a better alternative; typical graphics tools are
charts and graphs.
Non-Interactive Computer Graphics: Non-interactive graphics involves only one-way
communication between the computer and the user. The user can see the produced
image but cannot make any change in the rendered image. One example is the titles
shown on TV.
Interactive Computer Graphics: In interactive computer graphics, the user has some
control over the picture, i.e., the user can make changes to the produced image. The
computer receives signals from the input device, and the picture is modified
accordingly; the picture changes quickly when a command is applied. One example is
the ping-pong game.
Advantages:
1. Higher Quality
2. More precise results or products
3. Greater Productivity
4. Lower analysis and design cost
5. Significantly enhances our ability to understand data and to perceive trends.
For some training applications, particular systems are designed. For example
Flight Simulator.
Flight Simulator: It helps in giving training to the pilots of airplanes. These
pilots spend much of their training not in a real aircraft but on the ground at the controls
of a Flight Simulator.
Advantages:
1. Fuel Saving
2. Safety
3. Ability to familiarize trainees with a large number of the world's airports.
2. Use in Biology:
Molecular biologists can display pictures of molecules and gain insight into
their structure with the help of computer graphics.
3. Computer-Generated Maps:
4. Architect:
5. Presentation Graphics:
Example of presentation Graphics are bar charts, line graphs, pie charts and other
displays showing relationships between multiple parameters. Presentation Graphics is
commonly used to summarize
o Financial Reports
o Statistical Reports
o Mathematical Reports
o Scientific Reports
6. Computer Art:
Computer Graphics are also used in the field of commercial arts. It is used to
generate television and advertising commercial.
7. Entertainment:
Computer Graphics are now commonly used in making motion pictures, music
videos and television shows.
8. Visualization:
9. Educational Software:
10. Printing Technology:
1. LOGO
2. COREL DRAW
3. AUTO CAD
4. 3D STUDIO
5. CORE
6. GKS (Graphics Kernel System)
7. PHIGS
8. CGM (Computer Graphics Metafile)
9. CGI (Computer Graphics Interface)
DISPLAY DEVICES:
The most commonly used display device is a video monitor. The operation of
most video monitors based on CRT (Cathode Ray Tube). The following display
devices are used:
CRT stands for Cathode Ray Tube. CRT is a technology used in traditional
computer monitors and televisions. The image on a CRT display is created by firing
electrons from the back of the tube at a phosphor coating located towards the front of
the screen.
A CRT is an evacuated glass tube. An electron gun at the rear of the tube
produces a beam of electrons which is directed towards the front of the tube (screen).
When the electrons hit the phosphor, it lights up, and the glowing spots are projected
on the screen. The color you view on the screen is produced by a blend of red, blue and
green light.
The beam is positioned on the screen by the deflection system of the cathode-ray
tube, which consists of two pairs of parallel plates, referred to as the vertical and
horizontal deflection plates.
The intensity of the beam is controlled by the intensity signal on the control grid.
The voltage applied to vertical plates controls the vertical deflection of the
electron beam and voltage applied to the horizontal plates controls the horizontal
deflection of the electron beam.
There are two techniques used for producing images on the CRT screen: Vector
scan / random scan and Raster scan.
3. Focusing system: It is used to create a clear picture by focusing the electrons into a
narrow beam.
4. Deflection Yoke: It is used to control the direction of the electron beam. It creates an
electric or magnetic field which will bend the electron beam as it passes through the
area.
Compare this with some earlier systems in which the only way to carry out an
edit was to clear the whole screen and then redraw the whole image. Also by changing
the stored representation between refresh cycles animation is possible.
The electron beam is swept across the screen one row at a time from top to
bottom. As it moves across each row, the beam intensity is turned on and off to create
a pattern of illuminated spots.
The raster scan system can store information of each pixel position, so it is
suitable for realistic display of objects. Raster Scan provides a refresh rate of 60 to 80
frames per second.
Picture definition is stored in a memory area called the frame buffer. This
frame buffer stores the intensity values for all the screen points. Each screen point is
called a pixel (picture element).
On black and white systems, the frame buffer storing the values of the pixels is
called a bitmap. Each entry in the bitmap is 1 bit of data which determines whether the
pixel intensity is on (1) or off (0).
On color systems, the frame buffer storing the values of the pixels is called a
pixmap (Though nowadays many graphics libraries name it as bitmap too).
Each entry in the pixmap occupies a number of bits to represent the color of the
pixel. For a true color display, the number of bits for each entry is 24 (8 bits per
red/green/blue channel, each channel 2^8 =256 levels of intensity value, ie. 256
voltage settings for each of the red/green/blue electron guns).
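As a rough illustration of the storage involved, here is a minimal sketch (assuming a
hypothetical 1024 x 768 true-color display) that computes the frame-buffer size:

#include <stdio.h>

int main(void)
{
    /* Hypothetical resolution and depth: 1024 x 768 pixels, 24 bits per pixel */
    long width = 1024, height = 768;
    long bits_per_pixel = 24;                 /* 8 bits each for R, G and B */

    long bits  = width * height * bits_per_pixel;
    long bytes = bits / 8;

    printf("Frame buffer size: %ld bits = %ld bytes (about %.1f MB)\n",
           bits, bytes, bytes / (1024.0 * 1024.0));
    return 0;
}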
1. Interlaced Scanning
2. Non-Interlaced Scanning
In non-interlaced (progressive) scanning, each horizontal line of the screen is
traced in order from top to bottom in every refresh cycle. With a refresh rate of 30
frames per second this produces flicker, and fading of the displayed object may occur.
This problem can be reduced by interlaced scanning.
In interlaced scanning, first all odd-numbered lines are traced by the electron
beam; then, in the next cycle, the even-numbered lines are traced. Because the screen is
covered in two passes, an interlaced display achieves an effective refresh rate of 60
fields per second, and flicker is reduced.
Advantages:
1. Realistic image
2. Millions of different colors can be generated
3. Shadow Scenes are possible.
Disadvantages:
1. Low Resolution
2. Expensive
Random Scan System uses an electron beam which operates like a pencil to create a
line image on the CRT screen.
The system cycles back to the first line and redraws all the component lines of the
image 30 to 60 times each second. The process is shown in fig:
Advantages:
1. A CRT has the electron beam directed only to the parts of the screen where an
image is to be drawn.
2. Produce smooth line drawings.
3. High Resolution
Disadvantages:
1. Random-Scan monitors cannot display realistic shades scenes.
A color CRT monitor displays pictures by using a combination of phosphors that
emit different-colored light. The two popular approaches for producing color displays
with a CRT are:
1. Beam-Penetration Method:
The Beam-Penetration method has been used with random-scan monitors. In this
method, the CRT screen is coated with two layers of phosphor, red and green, and the
displayed color depends on how far the electron beam penetrates the phosphor layers.
This method produces four colors only, red, green, orange and yellow. A beam of
slow electrons excites the outer red layer only; hence screen shows red color only.
A beam of high-speed electrons excites the inner green layer. Thus screen shows a
green color.
Advantages:
1. Inexpensive
Disadvantages:
1. Only four colors are possible
2. Quality of pictures is not as good as with another method.
2. Shadow-Mask Method:
Shadow Mask Method is commonly used in Raster-Scan System because they
produce a much wider range of colors than the beam-penetration method.
Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.
Shadow mask grid is pierced with small round holes in a triangular pattern.
Figure shows the delta-delta shadow mask method commonly used in color CRT
system.
DVST terminals also use the random scan approach to generate the image on the
CRT screen. The term "storage tube" refers to the ability of the screen to retain the
image which has been projected against it, thus avoiding the need to rewrite the image
constantly.
Advantage:
1. No refreshing is needed.
2. High Resolution
3. Cost is very less
Disadvantage:
1. It is not possible to erase the selected part of a picture.
2. It is not suitable for dynamic graphics applications.
3. If a part of the picture is to be modified, the entire picture must be redrawn, which is
time-consuming.
The Flat-Panel display refers to a class of video devices that have reduced volume,
weightand power requirement compare to CRT.
Example: Small T.V. monitor, calculator, pocket video games, laptop computers,
an advertisement board in elevator.
1. Emissive Display: The emissive displays are devices that convert electrical energy
into light. Examples are Plasma Panel, thin film electroluminescent display and
LED (Light Emitting Diodes).
The gas glows when there is a significant voltage difference between the
horizontal and vertical wires. The voltage level is kept between 90 volts and 120 volts.
A plasma panel does not require refreshing. Erasing is done by reducing the voltage to
90 volts.
Each plasma cell has two stable states, so a cell is said to be bistable. A displayable
point in the plasma panel is made by the crossing of the horizontal and vertical grid. The
resolution of the plasma panel can be up to 512 * 512 pixels.
Advantage:
1. High Resolution
2. Large screen size is also possible.
3. Less Volume
4. Less weight
5. Flicker Free Display
Disadvantage:
1. Poor Resolution
2. The wiring requirement between the anode and the cathode is complex.
3. Its addressing is also complex.
In an LED, a matrix of diodes is organized to form the pixel positions in the display and
picture definition is stored in a refresh buffer. Data is read from the refresh buffer and
converted to voltage levels that are applied to the diodes to produce the light pattern in
the display.
Liquid Crystal Displays are the devices that produce a picture by passing polarized light
from the surroundings or from an internal light source through a liquid-crystal material
that transmits the light.
An LCD places the liquid-crystal material between two glass plates; the conductor
rows on the two plates are at right angles to each other, and the liquid fills the space
between the plates. One glass plate consists of rows of conductors arranged in the
vertical direction, and the other consists of rows of conductors arranged in the
horizontal direction.
Disadvantage:
1. LCDs are temperature-dependent (0-70°C)
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.
Look-Up Table:
Image representation is essentially the description of pixel colors. There are three
primary colors: R (red), G (green) and B (blue). Each primary color can take on a number
of intensity levels, and mixing different levels of the three primaries produces a variety
of colors. Using direct coding, we may allocate 3 bits
for each pixel, with one bit for each primary color. The 3-bit representation allows each
primary to vary independently between two intensity levels: 0 (off) or 1 (on). Hence
each pixel can take on one of the eight colors.
0 0 0 Black
0 0 1 Blue
0 1 0 Green
0 1 1 Cyan
1 0 0 Red
1 0 1 Magenta
1 1 0 Yellow
1 1 1 White
A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one
byte for each primary color. This way, we allow each primary color to have 256
different intensity levels.
Thus a pixel can take on a color from 256 x 256 x 256, or about 16.7 million,
possible choices. The 24-bit format is commonly referred to as the true color
representation.
Lookup Table approach reduces the storage requirement. In this approach pixel
values do not code colors directly. Alternatively, they are addresses or indices into a
table of color values.
The color of a particular pixel is determined by the color value in the table entry
that the value of the pixel references. Figure shows a look-up table with 256 entries. The
entries have addresses 0 through 255.
Each entry contains a 24-bit RGB color value. Pixel values are now 1 byte each. The
color of a pixel whose value is i, where 0 <= i <= 255, is determined by the color value in
the table entry whose address is i. This reduces the storage requirement of a 1000 x 1000
image to one million bytes plus 768 bytes for the color values in the look-up table.
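A minimal sketch of this idea, assuming a hypothetical 256-entry table of packed 24-bit
RGB values and 8-bit pixel values:

#include <stdio.h>

/* Hypothetical 256-entry color look-up table; each entry packs R, G, B into 24 bits. */
unsigned int lookUpTable[256];

/* Return the 24-bit RGB color for an 8-bit pixel value (the value is an index, not a color). */
unsigned int pixelColor(unsigned char pixelValue)
{
    return lookUpTable[pixelValue];
}

int main(void)
{
    lookUpTable[7] = (255 << 16) | (128 << 8) | 0;   /* entry 7: R=255, G=128, B=0 */
    unsigned char pixel = 7;                         /* frame buffer stores only the index */
    printf("Pixel color: 0x%06X\n", pixelColor(pixel));
    return 0;
}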
THREE DIMENSIONAL VIEWING:
Graphics monitors for the display of three-dimensional scenes have been
devised using a technique that reflects a CRT image from a vibrating, flexible mirror.
As the varifocal mirror vibrates, it changes focal length. These vibrations are
synchronized with the display of an object on a CRT so that each point on the object is
reflected from the mirror into a spatial position corresponding to the distance of that
point from a specified viewing position.
This allows us to walk around an object or scene and view it from different sides.
One such system uses a vibrating mirror to project three-dimensional objects into a
25 cm by 25 cm by 25 cm volume.
This system is also capable of displaying two-dimensional cross-sectional "slices" of
objects selected at different depths.
Such systems have been used in medical applications and CAT scan devices, in
geological applications to analyze topological and seismic data, in design applications
involving solid objects, and in three-dimensional simulations of systems such as
molecules and terrain.
STEREOSCOPIC AND VIRTUAL-REALITY SYSTEMS
Another technique for representing three-dimensional objects is displaying
stereoscopic views.
This method does not produce true three-dimensional images, but it does provide a
three-dimensional effect by presenting a different view to each eye of an observer, so
that scenes do appear to have depth.
Video Controller
A fixed area of the system memory is reserved for the frame buffer, and the video
controller is given direct access to the frame buffer memory.
Frame buffer locations, and the corresponding screen positions, are referenced
in Cartesian coordinates.
Scan lines are then labeled from ymax at the top of the screen to 0 at the
bottom. Along each scan line, screen pixel positions are labeled from 0 to xmax.
Random scan monitors draw a picture one line at a time and for this reason are also
referred to as vector displays (or stroke writing or calligraphic displays).
The component lines of a picture can be drawn and refreshed by a random-scan
system in any specified order.
INPUT DEVICES
The Input Devices are the hardware used to transfer input to the
computer. The data can be in the form of text, graphics, sound, and images.
Output device display data from the memory of the computer. Output can be text,
numeric data, line, polygon, and other objects.
1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image scanner
Keyboard:
The most commonly used input device is a keyboard. The data is entered by pressing
the set of keys. All keys are labeled. A keyboard with 101 keys is called a QWERTY
keyboard.
The keyboard has alphabetic as well as numeric keys. Some special keys are also
available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑→←↓
6. Function Keys: F1 F2 F3....F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for
fast entry of numeric data.
Mouse:
There are two or three depression switches on the top. The movement of the mouse
along the x-axis helps in the horizontal movement of the cursor and the movement along
the y-axis helps in the vertical movement of the cursor on the screen. The mouse cannot be
used to enter text.
Spaceball:
It is similar to trackball, but it can move in six directions where trackball can move in
two directions only.
The movement is recorded by strain gauges when pressure is applied; the ball can be
pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is
mounted in the base using rollers. One-third of the ball is inside the box; the rest is
outside.
Joystick:
Joystick is a stick having a spherical ball at both its lower and upper ends, as shown
in fig. The lower spherical ball moves in a socket.
The joystick can be moved in all four directions. The function of a joystick is similar
to that of the mouse. It is mainly used in Computer Aided Designing (CAD) and playing
computer games.
Light Pen
Light Pen (similar to the pen) is a pointing device which is used to select a displayed
menu item or draw pictures on the monitor screen.
It consists of a photocell and an optical system placed in a small tube. When its tip
is moved over the monitor screen, and pen button is pressed, its photocell sensing element
detects the screen location and sends the corresponding signals to the CPU.
Digitizers:
The digitizer is an operator input device, which contains a large, smooth board (the
appearance is similar to the mechanical drawing board) and an electronic tracking device,
which can be moved over the surface to follow existing lines.
The electronic tracking device contains a switch for the user to record the desired x and y
coordinate positions. The coordinates can be entered into the computer memory or stored
on an off-line storage medium such as magnetic tape.
Touch Panels:
Touch Panels is a type of display screen that has a touch-sensitive transparent panel
covering the screen. A touch screen registers input when a finger or other object comes in
contact with the screen.
When the wave signals are interrupted by some contact with the screen, that location is
recorded. Touch screens have long been used in military applications.
The user inputs data by speaking into a microphone. The simplest form of voice
recognition is a one-word command spoken by one person. Each command is isolated with
pauses between the words.
The voice-system input can be used to initiate graphics operations or to enter data.
These systems operate by matching an input against a predefined dictionary of words and
phrases.
Image Scanner
It is an input device. The data or text is written on paper. The paper is fed to the
scanner. The paper-written information is converted into electronic format and stored in
the computer.
The input documents can contain text, handwritten material, pictures, etc.
1. Printers
2. Plotters
Printers:
Printer is the most important output device, which is used to print data on paper.
Types of Printers: There are many types of printers which are classified on various
criteria as shown in fig:
1. Impact Printers: The printers that print the characters by striking against the
ribbon and onto the papers are known as Impact Printers.
1. Character Printers
2. Line Printers
2. Non-Impact Printers: The printers that print the characters without striking
against the ribbon and onto the papers are called Non-Impact Printers.
1. Laser Printers
2. Inkjet Printers
Dot Matrix Printers:
A dot matrix printer prints in the form of dots. The printer has a head which
contains nine pins. The nine pins are arranged one below the other. Each pin can be
activated independently.
All or only some of the needles are activated at a time. When a needle is not activated,
its tip stays in the head.
Drum Printers:
These are line printers, which print one line at a time. It consists of a drum.
The shape of the drum is cylindrical.
Each band consists of some characters. Each line on drum consists of 132
characters.
Because there are 96 lines, the total number of characters is (132 * 96) = 12,672.
Chain Printers:
These are also line printers, used to print one line at a time.
Basically, the chain consists of links.
The printer can follow any character set style, i.e., 48, 64 or 96 characters.
Non-Impact Printers:
Inkjet Printers:
These printers use a special ink called electrostatic ink. The printer head has
a special nozzle. Nozzle drops ink on paper. Head contains up to 64 nozzles. The
ink dropped is deflected by the electrostatic plate.
The plate is fixed outside the nozzle. The deflected ink settles on paper.
Laser Printers:
These are non-impact page printers. They use laser light to produce the dots
needed to form the characters to be printed on a page & hence the name laser
printers.
Step1: The bits of data sent by processing unit act as triggers to turn the laser beam on
& off.
Step2: The output device has a drum which is cleared & is given a positive electric
charge.
Step3: The laser-exposed parts of the drum attract an ink powder known as toner.
Step4: The toner on the drum is then transferred onto the paper.
Step5: The ink particles are permanently fixed to the paper by using either heat or
pressure technique.
Step6: The drum rotates back to the cleaner where a rubber blade cleans off the excess
ink & prepares the drum to print the next page.
PLOTTERS
Advantage:
1. It can produce high-quality output on large sheets.
2. It is used to provide the high precision drawing.
3. It can produce graphics of various sizes.
4. The speed of producing output is high.
Drum Plotter:
It consists of a drum. Paper on which design is made is kept on the drum. The
drum can rotate in both directions. A plotter comprises one or more pens and
penholders.
Line Drawing Algorithms
Algorithm Description:
Step 1 : Accept Input as two endpoint pixel positions
Step 2: Horizontal and vertical differences between the endpoint positions are assigned to
parameters dx and dy (Calculate dx=xb-xa and dy=yb-ya).
Step 3: The difference with the greater magnitude determines the value of
parameter steps.
Step 4 : Starting with pixel position (xa, ya), determine the offset needed at
each step to generate the next pixel position along the line path.
Step 5: loop the following process for steps number of times
a. Use a unit of increment or decrement in the x and y direction
b. if xa is less than xb the values of increment in the x and y directions are 1 and m
c. if xa is greater than xb then the decrements -1 and – m are used.
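A minimal C sketch of the DDA procedure described above (setPixel is assumed to be
provided by the underlying graphics package; calculated positions are rounded to the
nearest pixel):

#include <stdlib.h>

void setPixel(int x, int y);   /* assumed to be provided elsewhere */

void lineDDA(int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;

    /* The difference with the greater magnitude determines the number of steps */
    if (abs(dx) > abs(dy)) steps = abs(dx);
    else                   steps = abs(dy);

    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;

    setPixel((int)(x + 0.5), (int)(y + 0.5));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel((int)(x + 0.5), (int)(y + 0.5));
    }
}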
Advantages of DDA Algorithm
1. It is the simplest algorithm
2. It is a faster method for calculating pixel positions
Disadvantages of DDA Algorithm
1. Floating point arithmetic in DDA algorithm is still time-consuming
2. End point accuracy is poor.
This algorithm is used for scan converting a line. It was developed by Bresenham. It
is an efficient method because it involves only integer addition, subtractions, and
multiplication operations. These operations can be performed very rapidly so lines can be
generated quickly.
To illustrate Bresenham's approach, we first consider the scan-conversion process for lines
with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0,y0) of a given line, We step to each successive
column (x position) and plot the pixel whose scan-line y value is closest to the line path.
Assuming that the pixel at (xk, yk) has been displayed, we next need to decide which pixel
to plot in column xk+1 = xk + 1: either (xk+1, yk) or (xk+1, yk+1).
At sampling position xk+1, we label vertical pixel separations from the mathematical
line path as d1 and d2. The y coordinate on the mathematical line at pixel column position
xk+1 is calculated as
y =m(xk+1)+b (1) Then
d1 = y-yk = m(xk+1)+b-yk
d2 = (yk+1)-y = yk+1-m(xk+1)-b
To determine which of the two pixels is closer to the line path, an efficient test is based on
the difference between the two pixel separations
d1- d2 = 2m(xk+1)-2yk+2b-1 ____________ (2)
A decision parameter Pk for the kth step in the line algorithm can be obtained by
rearranging equation (2).
By substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations
of the endpoint positions, and defining the decision parameter as
pk = Δx (d1 - d2) = 2Δy.xk - 2Δx.yk + c ___________ (3)
The sign of pk is the same as the sign of d1 - d2, since Δx > 0. Parameter c is constant and
has the value 2Δy + Δx(2b - 1), which is independent of the pixel position and will be
eliminated in the recursive calculations for Pk.
If the pixel at yk is "closer" to the line path than the pixel at yk+1 (d1 < d2), then the
decision parameter Pk is negative.
In this case, plot the lower pixel, otherwise plot the upper pixel. Coordinate changes
along the line occur in unit steps in either the x or y directions.
The values of successive decision parameters are obtained using incremental integer
calculations. At step k+1, the decision parameter is evaluated from equation (3) as
Pk+1 = 2Δy xk+1-2Δx. yk+1 +c
Subtracting the equation (3) from the preceding equation
Pk+1 - Pk = 2Δy (xk+1 - xk) -2Δx(yk+1 - yk)
But xk+1 = xk + 1,
so that Pk+1 = Pk+ 2Δy-2Δx(yk+1 - yk) (4)
where the term yk+1 - yk is either 0 or 1, depending on the sign of parameter Pk. This
recursive calculation of decision parameters is performed at each integer x position,
starting at the left coordinate endpoint of the line.
The first parameter P0 is evaluated from equation at the starting pixel position (x0,y0)
and with m evaluated as Δy/Δx
P0 = 2Δy-Δx (5)
Bresenham's line drawing for a line with a positive slope less than 1 is summarized in the
following outline of the algorithm.
The constants 2Δy and 2Δy-2Δx are calculated once for each line to be scan
converted.
Bresenham’s line Drawing Algorithm for |m| < 1
1. Input the two line endpoints and store the left end point in (x0,y0)
2. load (x0,y0) into frame buffer, ie. Plot the first point.
3. Calculate the constants Δx, Δy, 2Δy and obtain the starting value for the decision
parameter as P0 = 2Δy-Δx
4. At each xk along the line, starting at k=0 perform the following test
If Pk< 0, the next point to plot is(xk+1,yk) and Pk+1 = Pk + 2Δy otherwise, the next point to plot
is (xk+1,yk+1) and Pk+1 = Pk + 2Δy - 2Δx
5. Perform step4 Δx times.
Implementation of Bresenham Line drawing Algorithm
#include <stdlib.h>             /* for abs */

void setPixel(int x, int y);    /* assumed to be provided by the graphics package */

void lineBres (int xa, int ya, int xb, int yb)
{
  int dx = abs(xa - xb), dy = abs(ya - yb);
  int p = 2 * dy - dx;
  int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
  int x, y, xEnd;
  /* Determine which point to use as start, which as end */
  if (xa > xb)
  {
    x = xb;
    y = yb;
    xEnd = xa;
  }
  else
  {
    x = xa;
    y = ya;
    xEnd = xb;
  }
  setPixel(x, y);
  while (x < xEnd)
  {
    x++;
    if (p < 0)
      p += twoDy;
    else
    {
      y++;
      p += twoDyDx;
    }
    setPixel(x, y);
  }
}
Advantages
Algorithm is Fast
Uses only integer calculations
Disadvantages
It is meant only for basic line drawing.
Circle-Generating Algorithms
General function is available in a graphics library for displaying various kinds of
curves, including circles and ellipses.
Properties of a circle: A circle is defined as the set of points that are all at a given
distance r from a center position (xc, yc).
This distance relationship is expressed by the pythagorean theorem in Cartesian
coordinates as
(x - xc)^2 + (y - yc)^2 = r^2 ----------------(1)
Use above equation to calculate the position of points on a circle circumference by
stepping along the x axis in unit steps from xc-r to xc+r and calculating the corresponding y
values at each position as
y = yc ± sqrt(r^2 - (xc - x)^2) ------------- (2)
This is not the best method for generating a circle for the following reason
o Considerable amount of computation
o Spacing between plotted pixels is not uniform
To eliminate the unequal spacing is to calculate points along the circle boundary using
polar coordinates r and θ.
Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + rcos θ
y = yc + rsin θ
When a display is generated with these equations using a fixed angular step size, a
circle is plotted with equally spaced points along the circumference.
Set the angular step size at 1/r. This plots pixel positions that are approximately one
unit apart.
The shape of the circle is similar in each quadrant.
Circle sections in adjacent octants within one quadrant are symmetric with
respect to the 45° line dividing the two octants. A point at position (x, y) on a one-eighth
circle sector is mapped into the seven circle points in the other octants of the xy plane.
To generate all pixel positions around a circle, we need to calculate only the points within
the sector from x = 0 to x = y. The slope of the curve in this octant has a magnitude less
than or equal to 1.0: at x = 0 the circle slope is 0, and at x = y the slope is -1.0.
Midpoint circle Algorithm:
As in the raster line algorithm, we sample at unit intervals and determine the closest
pixel position to the specified circle path at each step. For a given radius r and screen
center position (xc, yc), we first set up the algorithm to calculate pixel positions around a
circle path centered at the coordinate origin, and then move each calculated position to its
proper screen position by adding xc to x and yc to y.
To apply the midpoint method we define a circle function as
fcircle(x,y) = x^2 + y^2 - r^2
Any point (x,y) on the boundary of the circle with radius r satisfies the equation
fcircle (x,y)=0.
If the point is in the interior of the circle, the circle function is negative. And if the
point is outside the circle the, circle function is positive
fcircle (x,y) <0, if (x,y) is inside the circle boundary ,
fcircle(x,y)=0, if (x,y) is on the circle boundary,
fcircle(x,y)>0, if (x,y) is outside the circle boundary.
The tests in the above eq are performed for the mid-positionsbetween pixels near the
circle path at each sampling step. The circle function is the decision parameter in the
midpoint algorithm.
Midpoint between candidate pixels at sampling position xk+1 along a circular path. Fig
-1 shows the midpoint between the two candidate pixels at sampling position xk+1.
Assuming that the pixel at (xk, yk) has just been plotted, we next need to determine
whether the pixel at position (xk+1, yk) or the one at position (xk+1, yk-1) is closer to the
circle. Our decision parameter is the circle function evaluated at the midpoint between
these two pixels:
Pk = fcircle (xk+1, yk - 1/2)
If Pk<0, this midpoint is inside the circle and the pixel on scan line yk is closer to the
circle boundary.
Otherwise the mid position is outside or on the circle boundary and select the pixel on
scan line yk -1.
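A compact C sketch of this midpoint circle procedure (setPixel is again assumed to come
from the graphics package; the decision-parameter updates use the standard incremental
form):

void setPixel(int x, int y);   /* assumed to be provided by the graphics package */

/* Plot the eight symmetric points for a computed (x, y) on the circle. */
void circlePlotPoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

void circleMidpoint(int xc, int yc, int radius)
{
    int x = 0, y = radius;
    int p = 1 - radius;                 /* initial decision parameter */

    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;             /* midpoint inside: keep scan line y */
        else {
            y--;
            p += 2 * (x - y) + 1;       /* midpoint outside: move to scan line y - 1 */
        }
        circlePlotPoints(xc, yc, x, y);
    }
}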
Ellipse-Generating Algorithms:
An ellipse is defined as the set of points whose distances from two fixed positions (the
foci) sum to a constant. By squaring this distance equation, isolating the remaining
radical, and squaring again, the general ellipse equation can be written in the form
Ax^2 + By^2 + Cxy + Dx + Ey + F = 0
The coefficients A,B,C,D,E, and F are evaluated in terms of the focal coordinates and
the dimensions of the major and minor axes of the ellipse.
The major axis is the straight line segment extending from one side of the ellipse to
the other through the foci.
The minor axis spans the shorter dimension of the ellipse, perpendicularly bisecting
the major axis at the halfway position (ellipse center) between the two foci.
An interactive method for specifying an ellipse in an arbitrary orientation is to input
the two foci and a point on the ellipse boundary.
Ellipse equations are simplified if the major and minor axes are oriented to align with
the coordinate axes.
With the major and minor axes oriented parallel to the x and y axes, parameter rx labels
the semi-major axis and parameter ry labels the semi-minor axis:
((x - xc)/rx)^2 + ((y - yc)/ry)^2 = 1
Using polar coordinates r and θ, we can describe the ellipse in standard position with the
parametric equations
x = xc + rx cos θ
y = yc + ry sin θ
Angle θ called the eccentric angle of the ellipse is measured around the perimeter of a
bounding circle.
We must calculate pixel positions along the elliptical arc throughout one quadrant,
and then we obtain positions in the remaining three quadrants by symmetry
We take unit steps in the x direction where the slope of the curve has a magnitude less
than 1, and unit steps in the y direction where the slope has a magnitude greater than 1.
Region 1 and 2 can be processed in various ways
1. Start at position (0,ry) and step clockwise along the elliptical path in the first quadrant
shifting from unit steps in x to unit steps in y when the slope becomes less than -1
2. Start at (rx,0) and select points in a counter clockwise order.
2.1 Shifting from unit steps in y to unit steps in x when the slope becomes greater than
-1.0
2.2 Using parallel processors calculate pixel positions in the two regions
simultaneously
3. Start at (0,ry) and step along the ellipse path in clockwise order throughout the first
quadrant.
We define the ellipse function with (xc, yc) = (0, 0) as
fellipse(x,y) = ry^2 x^2 + rx^2 y^2 - rx^2 ry^2
which has the following properties:
fellipse(x,y) <0, if (x,y) is inside the ellipse boundary,
fellipse(x,y) =0, if(x,y) is on ellipse boundary,
fellipse(x,y) >0, if(x,y) is outside the ellipse boundary
Thus, the ellipse function fellipse(x,y) serves as the decision parameter in the
midpoint algorithm.
Starting at (0,ry), we take unit steps in the x direction until we reach the boundary
between region1 and region 2. Then switch to unit steps in the y direction over the remainder
of the curve in the first quadrant.
At each step to test the value of the slope of the curve.
The ellipse slope is calculated as dy/dx = -(2 ry^2 x)/(2 rx^2 y).
At the boundary between region 1 and region 2, dy/dx = -1.0, so 2 ry^2 x = 2 rx^2 y.
The following figure shows the midpoint between two candidate pixels at sampling position
xk+1 in the first region
To determine the next position along the ellipse path by evaluating the decision
parameter at this mid point
P1k = fellipse(xk+1,yk-1/2)
if P1k <0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to
the ellipse boundary.
Otherwise the midpoint is outside or on the ellipse boundary and select the pixel on
scan line yk-1
In region 1 the initial value of the decision parameter is obtained by evaluating the
ellipse function at the start position (x0,y0) = (0,ry)
P10 = fellipse(1,ry -½ )
Over region 2, we sample at unit steps in the negative y direction and the midpoint is now
taken between horizontal pixels at each step. For this region, the decision parameter is
evaluated as
P2k = fellipse(xk+½ ,yk- 1)
1. If P2k >0, the mid-point position is outside the ellipse boundary, and select the
pixel at xk.
2. If P2k <=0, the mid-point is inside the ellipse boundary and select pixel position
xk+1.
Introduction of Transformations:
Computer Graphics provide the facility of viewing object from different angles. The
architect can study building from different angles i.e.
1. Front elevation
2. Side elevation
3. Top plan
The purpose of using computers for drawing is to provide the facility to view an
object from different angles and to enlarge or reduce its scale or shape; this is called
Transformation.
Types of Transformations:
1. Translation
2. Scaling
3. Rotating
Other transformation
4. Reflection
5. Shearing
Translation
It is the straight line movement of an object from one position to another is called
Translation.
Translation of point:
To translate a point from coordinate position (x, y) to another position (x1, y1), we add
the translation distances Tx and Ty algebraically to the original coordinates.
In general equations,
P'=P+T,
x1=x+Tx
y1=y+Ty
The translation pair (Tx, Ty) is called the shift vector.
For translating a polygon, each vertex of the polygon is moved to a new position.
Let P be a point with coordinates (x, y). It will be translated to (x1, y1).
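A minimal sketch of translating the vertices of a polygon, assuming a simple hypothetical
point structure:

typedef struct { float x, y; } Point2D;

/* Translate each vertex of a polygon by the shift vector (tx, ty). */
void translatePolygon(Point2D *vertices, int n, float tx, float ty)
{
    int k;
    for (k = 0; k < n; k++) {
        vertices[k].x += tx;   /* x1 = x + Tx */
        vertices[k].y += ty;   /* y1 = y + Ty */
    }
}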
Scaling:
It is used to alter or change the size of objects.
The change is done using scaling factors. There are two scaling factors: Sx in the x
direction and Sy in the y direction.
x’= x.Sx
y’ = y.Sy
Scaling factor Sx scales object in x direction while Sy scales in y direction.
The transformation equation in matrix form:
[x']   [sx  0 ] [x]
[y'] = [0   sy] [y]        or P' = S . P
Rotation:
Types of Rotation:
1. Clockwise
2. Anticlockwise (counterclockwise)
A positive value of the rotation angle rotates an object in a counterclockwise
(anti-clockwise) direction about the pivot point.
A negative value of the rotation angle rotates an object in a clockwise direction.
Consider the rotation of a point from position (x,y) to position (x',y') through an angle θ
relative to the coordinate origin. The following are the transformation equations for
rotating a point position P when the pivot point is at the coordinate origin.
In the figure, r is the constant distance of the point from the origin, Ф is the original
angular position of the point from the horizontal, and θ is the rotation angle.
The transformed coordinates in terms of angle θ and Ф
x’ = rcos(θ+Ф) = rcosθcosФ – rsinθsinФ
y’ = rsin(θ+Ф) = rsinθcosФ + rcosθsinФ
The original coordinates of the point in polar coordinates x = rcosФ, y = rsinФ the
transformation equation for rotating a point at position (x,y) through an angle θ about origin
x’ = xcosθ – ysinθ
y’ = xsinθ + ycosθ
Rotation equation
P’= R . P
Rotation matrix:
R = [cosθ  -sinθ]
    [sinθ   cosθ]
P' = R . P
[x']   [cosθ  -sinθ] [x]
[y'] = [sinθ   cosθ] [y]
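A minimal sketch of rotating a point about the origin using the equations above (angle in
radians; Point2D as in the earlier translation sketch):

#include <math.h>

typedef struct { float x, y; } Point2D;

/* Rotate point p about the coordinate origin by angle theta (in radians). */
Point2D rotatePoint(Point2D p, float theta)
{
    Point2D r;
    r.x = p.x * cosf(theta) - p.y * sinf(theta);   /* x' = x cosθ - y sinθ */
    r.y = p.x * sinf(theta) + p.y * cosf(theta);   /* y' = x sinθ + y cosθ */
    return r;
}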
Other transformation:
Reflection
Shearing
Reflection:
It is a transformation which produces a mirror image of an object. The mirror image can
be either about the x-axis or the y-axis. The object is rotated by 180°.
Types of Reflection:
Reflection about x-axis: The object can be reflected about x-axis with the help of the
following matrix
In this transformation, the value of x will remain the same whereas the value of y will
become negative. The following figure shows the reflection of the object about the x-axis.
The object will lie on the other side of the x-axis.
Reflection about y-axis: The object can be reflected about y-axis with the help of
following transformation matrix.
Here the values of x will be reversed, whereas the value of y will remain the same.
The object will lie on the other side of the y-axis.
Reflection about the origin: In this, the values of both x and y are reversed. This is also
called a half revolution about the origin.
Reflection about line y=x: The object may be reflected about line y = x with the help
of following transformation matrix.
First of all, the object is rotated by 45°. The direction of rotation is clockwise. After that,
reflection is done with respect to the x-axis. The last step is the rotation of y=x back to its
original position, that is, counterclockwise by 45°.
Shearing:
Shearing is a transformation that distorts the shape of an object, slanting it along the x
direction or the y direction while coordinates in the other direction are left unchanged.
Rotation about an arbitrary point:
The rotation of a point, straight line or an entire image on the screen about a
point other than the origin is achieved by first moving the image until the point of rotation
occupies the origin, then performing the rotation, and finally moving the image back to its
original position.
Homogeneous coordinates: Each two-dimensional position (x, y) is expanded to a
three-element representation (xh, yh, h); for our convenience we take h as one. Each
two-dimensional position is then represented with homogeneous coordinates (x, y, 1).
A viewing transformation using standard rectangles for the window and viewport
If we map directly from WCS to a DCS, then changing our device requires rewriting
this mapping (among other changes).
Instead, use Normalized Device Coordinates (NDC) as an intermediate coordinate
system that gets mapped to the device layer.
Will consider using only a square portion of the device.
Windows in WCS will be mapped to viewports that are specified within a unit square
in NDC space.
Map viewports from NDC coordinates to the screen.
From (1):
XV - XVmin = [(XVmax - XVmin) / (XWmax - XWmin)] * (XW - XWmin)
XV = XVmin + [(XVmax - XVmin) / (XWmax - XWmin)] * (XW - XWmin)
From (2):
YV - YVmin = [(YVmax - YVmin) / (YWmax - YWmin)] * (YW - YWmin)
YV = YVmin + [(YVmax - YVmin) / (YWmax - YWmin)] * (YW - YWmin)
Defining the scaling factors
Sx = (XVmax - XVmin) / (XWmax - XWmin)
Sy = (YVmax - YVmin) / (YWmax - YWmin)
Then
XV = XVmin + Sx * (XW - XWmin)
YV = YVmin + Sy * (YW - YWmin)
The equations can also be derived with a set of transformations that converts the window
area into the viewport area. This conversion is performed with the following sequence of
transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that
scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
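A minimal C sketch of this window-to-viewport mapping, assuming a hypothetical struct
holding the window and viewport extents:

typedef struct { float xmin, ymin, xmax, ymax; } Rect;

/* Map a world-coordinate point (xw, yw) from window win into viewport vp. */
void windowToViewport(Rect win, Rect vp, float xw, float yw, float *xv, float *yv)
{
    float sx = (vp.xmax - vp.xmin) / (win.xmax - win.xmin);   /* scaling factors */
    float sy = (vp.ymax - vp.ymin) / (win.ymax - win.ymin);

    *xv = vp.xmin + sx * (xw - win.xmin);
    *yv = vp.ymin + sy * (yw - win.ymin);
}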
Workstation Transformation:-
From normalized coordinates, object descriptions are mapped to the various display
devices. Any number of output devices can be open in a particular application, and another
window-to-viewport transformation can be performed for each open output device. This
mapping, called the workstation transformation, is accomplished by selecting a window
area in normalized space and a viewport area in the coordinates of the display device. With
the workstation transformation, we gain some additional control over the positioning of parts
of a scene on individual output devices. As illustrated in below Fig.
3. The combinations of viewing and window-viewport mapping for various workstations
are stored in a viewing table with
setViewRepresentation(ws, viewIndex, viewMatrix, viewMappingMatrix, xclipmin,
xclipmax, yclipmin, yclipmax, clipxy)
where parameter ws designates the output device and parameter viewIndex
sets an integer identifier for this window-viewport pair.
4.The matrices viewMatrix and viewMappingMatrix can be concatenated and referenced by
viewIndex.
setViewIndex(viewIndex) selects a particular set of options from the viewing
table.
5.At the final stage we apply a workstation transformation by selecting a work station
window viewport pair.
setWorkstationWindow(ws, xwsWindmin, xwsWindmax, ywsWindmin,
ywsWindmax)
setWorkstationViewport(ws, xwsVPortmin, xwsVPortmax, ywsVPortmin,
ywsVPortmax)
where ws gives the workstation number. Window-coordinate extents are specified in the
range from 0 to 1 and viewport limits are in integer device coordinates.
Clipping operation
Any procedure that identifies those portions of a picture that are inside or outside of a
specified region of space is referred to as clipping algorithm or clipping.
The region against which an object is to be clipped is called clip window. Algorithm
for clipping primitive types:
Point clipping
Line clipping (Straight-line segment)
Area clipping (Polygon)
Curve clipping
Text clipping
Point Clipping
Clip window is a rectangle in standard position.
A point P = (x, y) is saved for display if the following inequalities are satisfied:
xwmin <= x <= xwmax and ywmin <= y <= ywmax
where the edges of the clip window (xwmin,xwmax,ywmin,ywmax) can be either the
world-coordinate window boundaries or viewport boundaries.
If any one of these four inequalities is not satisfied, the point is clipped (not saved for
display).
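A minimal sketch of this point-clipping test, assuming the clip-window extents are given
as plain floats:

#define TRUE  1
#define FALSE 0

/* Return TRUE if point (x, y) lies inside the clip window, FALSE if it is clipped. */
int clipPoint(float x, float y,
              float xwmin, float xwmax, float ywmin, float ywmax)
{
    if (x < xwmin || x > xwmax) return FALSE;
    if (y < ywmin || y > ywmax) return FALSE;
    return TRUE;
}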
Line Clipping
A line clipping procedure involves several parts. First we test a given line segment
whether it lies completely inside the clipping window. If it does not we try to determine
whether it lies completely outside the window.
Finally if we cannot identify a line as completely inside or completely outside, we
perform intersection calculations with one or more clipping boundaries.
Process lines through “inside-outside” tests by checking the line endpoints. A line
with both endpoints inside all clipping boundaries such as line from P1 to P2 is saved. A line
with both end-point outside any one of the clip boundaries line P3 P4 is outside the window.
All other lines cross one or more clipping boundaries, and may require calculation of multiple
intersection points.
Binary region codes assigned to line end points according to relative position
with respect to the clipping rectangle.
Regions are set up in reference to the boundaries. Each bit position in region code is
used to indicate one of four relative coordinate positions of points with respect to clip
window: to the left, right, top or bottom.
By numbering the bit positions in the region code as 1 through 4 from right to left,
the coordinate regions are correlated with bit positions as
bit 1: left
bit 2: right
bit 3: below
bit4: above
A value of 1 in any bit position indicates that the point is in that relative position.
Otherwise the bit position is set to 0. If a point is within the clipping rectangle the region
code is 0000. A point that is below and to the left of the rectangle has a region code of 0101.
Bit values in the region code are determined by comparing endpoint coordinate values
(x,y) to clip boundaries.
Bit1 is set to 1 if x <xwmin.
For programming language in which bit manipulation is possible region-code bit
values can be determined with following two steps.
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding value in
the region code.
bit 1 is the sign bit of x – xwmin
bit 2 is the sign bit of xwmax - x
bit 3 is the sign bit of y – ywmin
bit 4 is the sign bit of ywmax - y.
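A minimal sketch of computing a region code with the bit assignments listed above
(bit 1 = left, bit 2 = right, bit 3 = below, bit 4 = above); a line is trivially accepted when
both endpoint codes are 0000 and trivially rejected when the bitwise AND of the codes is
nonzero:

#define LEFT_EDGE   0x1   /* bit 1 */
#define RIGHT_EDGE  0x2   /* bit 2 */
#define BELOW_EDGE  0x4   /* bit 3 */
#define ABOVE_EDGE  0x8   /* bit 4 */

/* Compute the 4-bit region code of point (x, y) against the clip window. */
unsigned char encode(float x, float y,
                     float xwmin, float xwmax, float ywmin, float ywmax)
{
    unsigned char code = 0x0;
    if (x < xwmin) code |= LEFT_EDGE;
    if (x > xwmax) code |= RIGHT_EDGE;
    if (y < ywmin) code |= BELOW_EDGE;
    if (y > ywmax) code |= ABOVE_EDGE;
    return code;
}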
Once we have established region codes for all line endpoints, we can quickly determine
which lines are completely inside the clip window and which are clearly outside.
Any lines that are completely contained within the window boundaries have a region
code of 0000 for both endpoints, and we accept these lines.
Any lines that have a 1 in the same bit position in the region codes for each endpoint
are completely outside the clipping rectangle, and we reject these lines.
We would discard the line that has a region code of 1001 for one endpoint and a code
of 0101 for the other endpoint.
Both endpoints of this line are left of the clipping rectangle, as indicated by the 1 in
the first bit position of each region code.
A method that can be used to test lines for total clipping is to perform the logical and
operation with both region codes.
If the result is not 0000, the line is completely outside the clipping region. Lines that
cannot be identified as completely inside or completely outside a clip window by these tests
are checked for intersection with window boundaries.
Line extending from one coordinates region to another may pass through the clip
window, or they may intersect clipping boundaries without entering window.
Example: Cohen-Sutherland line clipping.
Starting with the bottom endpoint of the line from P1 to P2, we check P1 against the
left, right, and bottom boundaries in turn and find that this point is below the clipping
rectangle.
We then find the intersection point P1’ with the bottom boundary and discard the line
section from P1 to P1’.
The line now has been reduced to the section from P1’ to P2,Since P2, is outside the
clip window, we check this endpoint against the boundaries and find that it is to the left of the
window.
Intersection point P2’ is calculated, but this point is above the window. So the final
intersection calculation yields P2”, and the line from P1’ to P2”is saved.
This completes processing for this line, so we save this part and go on to the next line.
Point P3 in the next line is to the left of the clipping rectangle, so we determine the
intersection P3’, and eliminate the line section from P3 to P3'. By checking region codes
for the line section from P3'to P4 we find that the remainder of the line is below the clip
window and can be discarded also. Intersection points with a clipping boundary can be
calculated using the slope-intercept form of the line equation.
For a line with endpoint coordinates (x1, y1) and (x2, y2), the y coordinate of the
intersection point with a vertical boundary can be obtained with the calculation
y =y1 +m (x-x1)
where x value is set either to xwmin or to xwmax and slope of line is calculated as
m = (y2- y1) / (x2- x1)
For the intersection with a horizontal boundary, the x coordinate can be calculated as
x = x1 + (y - y1) / m, with y set either to ywmin or to ywmax.
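A minimal sketch of these boundary-intersection calculations (xb is the x value of a
vertical boundary, yb the y value of a horizontal boundary):

/* Intersection of the line (x1,y1)-(x2,y2) with a vertical boundary x = xb. */
float intersectVertical(float x1, float y1, float x2, float y2, float xb)
{
    float m = (y2 - y1) / (x2 - x1);      /* slope of the line */
    return y1 + m * (xb - x1);            /* y = y1 + m (x - x1) */
}

/* Intersection of the line (x1,y1)-(x2,y2) with a horizontal boundary y = yb. */
float intersectHorizontal(float x1, float y1, float x2, float y2, float yb)
{
    float m = (y2 - y1) / (x2 - x1);
    return x1 + (yb - y1) / m;            /* x = x1 + (y - y1) / m */
}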
Nicholl-Lee-Nicholl (NLN) Line Clipping:
In the NLN algorithm, the intersection with the appropriate window boundary is carried
out depending on which one of the four regions (L, T, R, or B) contains P2.
If both P1 and P2 are inside the clipping rectangle, we simply save the entire line. If
P1 is in the region to the left of the window, we set up the four regions L, LT, LR, and
LB shown in the following figure.
Fig: The four clipping regions used in NLN algorithm when p1 is directly left of the
clip window
These four regions determine a unique boundary for the line segment. For instance, if
P2 is in region L, we clip the line at the left boundary and save the line segment from this
intersection point to P2. But if P2 is in region LT,we save the line segment from the left
window boundary to the top boundary. If P2 is not in any of the four regions, L, LT, LR, or
LB, the entire line is clipped.
For the third case, when P1 is to the left and above the clip window, we use the
clipping regions in Fig.
Fig : The two possible sets of clipping regions used in NLN algorithm when P1 is above and
to the left of the clip window
In this case, we have the two possibilities shown, depending on the position of P1,
relative to the top left corner of the window. If P2,is in one of the regions T, L, TR, TB, LR,
or LB, this determines a unique clip window edge for the intersection calculations.
Otherwise, the entire line is rejected.
To determine the region in which P2 is located, we compare the slope of the line to the
slopes of the boundaries of the clip regions. For example, if P1 is left of the clipping
rectangle (Fig. a), then P2 is in region LT if the slope of the line P1P2 lies between the
slopes of the lines from P1 to the two upper corners of the clip window.
For polygon clipping, we require an algorithm that will generate one or more closed
areas that are then scan converted for the appropriate area fill. The output of a polygon
clipper should be a sequence of vertices that defines the clipped polygon boundaries.
Sutherland – Hodgeman polygon clipping:
A polygon can be clipped by processing the polygon boundary as a whole against
each window edge. This could be accomplished by processing all polygon vertices against
each clip rectangle boundary.
There are four possible cases when processing vertices in sequence around the
perimeter of a polygon. As each pair of adjacent polygon vertices is passed to a window
boundary clipper, we make the following tests:
1. If the first vertex is outside the window boundary and second vertex is inside, both the
intersection point of the polygon edge with window boundary and second vertex are added to
output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is added to
the output vertex list.
3. If first vertex is inside the window boundary and second vertex is outside only the edge
intersection with window boundary is added to output vertex list.
4. If both input vertices are outside the window boundary nothing is added to the output list.
Fig:Clipping a polygon against successive window boundaries.
Fig:Successive processing of pairs of polygon vertices against the left window boundary
Clipping a polygon against the left boundary of a window, starting with vertex 1.
Primed numbers are used to label the points in the output vertex list for this window
boundary.
Vertices 1 and 2 are found to be on outside of boundary. Moving along vertex 3
which is inside, calculate the intersection and save both the intersection point and vertex 3.
Vertex 4 and 5 are determined to be inside and are saved. Vertex 6 is outside so we find and
save the intersection point.
Using the five saved points we repeat the process for next window boundary.
Implementing the algorithm as described requires setting up storage for an output list
of vertices as a polygon clipped against each window boundary. We eliminate the
intermediate output vertex lists by simply by clipping individual vertices at each step and
passing the clipped vertices on to the next boundary clipper.
A point is added to the output vertex list only after it has been determined to be inside
or on a window boundary by all boundary clippers. Otherwise the point does not continue in
the pipeline.
Implementation of Sutherland-Hodgeman Polygon Clipping
/* wcPt2 (world-coordinate point) and dcPt (device-coordinate point) are assumed to be
   defined by the graphics package as structures with x and y members. */

typedef enum { Left, Right, Bottom, Top } Edge;
#define N_EDGE 4
#define TRUE 1
#define FALSE 0

/* Test whether point p is on the inside of clip boundary b. */
int inside(wcPt2 p, Edge b, dcPt wmin, dcPt wmax)
{
  switch (b)
  {
    case Left:   if (p.x < wmin.x) return (FALSE); break;
    case Right:  if (p.x > wmax.x) return (FALSE); break;
    case Bottom: if (p.y < wmin.y) return (FALSE); break;
    case Top:    if (p.y > wmax.y) return (FALSE); break;
  }
  return (TRUE);
}

/* Test whether the edge p1-p2 crosses clip boundary b. */
int cross(wcPt2 p1, wcPt2 p2, Edge b, dcPt wmin, dcPt wmax)
{
  if (inside(p1, b, wmin, wmax) == inside(p2, b, wmin, wmax))
    return (FALSE);
  else
    return (TRUE);
}

/* Compute the intersection of edge p1-p2 with clip boundary b. */
wcPt2 intersect(wcPt2 p1, wcPt2 p2, Edge b, dcPt wmin, dcPt wmax)
{
  wcPt2 iPt;
  float m;
  if (p1.x != p2.x) m = (p1.y - p2.y) / (p1.x - p2.x);
  switch (b)
  {
    case Left:
      iPt.x = wmin.x;
      iPt.y = p2.y + (wmin.x - p2.x) * m;
      break;
    case Right:
      iPt.x = wmax.x;
      iPt.y = p2.y + (wmax.x - p2.x) * m;
      break;
    case Bottom:
      iPt.y = wmin.y;
      if (p1.x != p2.x) iPt.x = p2.x + (wmin.y - p2.y) / m;
      else iPt.x = p2.x;
      break;
    case Top:
      iPt.y = wmax.y;
      if (p1.x != p2.x) iPt.x = p2.x + (wmax.y - p2.y) / m;
      else iPt.x = p2.x;
      break;
  }
  return (iPt);
}

/* Clip point p against boundary b and pass surviving points to the next clipper. */
void clippoint(wcPt2 p, Edge b, dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt,
               wcPt2 *first[], wcPt2 *s)
{
  wcPt2 iPt;
  /* If no previous point exists for this clip boundary, remember this one. */
  if (!first[b]) first[b] = &p;
  else if (cross(p, s[b], b, wmin, wmax))
  {
    iPt = intersect(p, s[b], b, wmin, wmax);
    if (b < Top) clippoint(iPt, b + 1, wmin, wmax, pout, cnt, first, s);
    else
    {
      pout[*cnt] = iPt; (*cnt)++;
    }
  }
  s[b] = p;   /* save p as the most recent point for this clip boundary */
  if (inside(p, b, wmin, wmax))
    if (b < Top) clippoint(p, b + 1, wmin, wmax, pout, cnt, first, s);
    else
    {
      pout[*cnt] = p; (*cnt)++;
    }
}

/* Close the polygon by clipping the edge from the last point back to the first. */
void closeclip(dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt, wcPt2 *first[], wcPt2 *s)
{
  wcPt2 iPt;
  Edge b;
  for (b = Left; b <= Top; b++)
  {
    if (cross(s[b], *first[b], b, wmin, wmax))
    {
      iPt = intersect(s[b], *first[b], b, wmin, wmax);
      if (b < Top)
        clippoint(iPt, b + 1, wmin, wmax, pout, cnt, first, s);
      else
      {
        pout[*cnt] = iPt;
        (*cnt)++;
      }
    }
  }
}

int clippolygon(dcPt wmin, dcPt wmax, int n, wcPt2 *pin, wcPt2 *pout)
{
  wcPt2 *first[N_EDGE] = { 0, 0, 0, 0 }, s[N_EDGE];
  int i, cnt = 0;
  for (i = 0; i < n; i++)
    clippoint(pin[i], Left, wmin, wmax, pout, &cnt, first, s);
  closeclip(wmin, wmax, pout, &cnt, first, s);
  return (cnt);
}
Curve Clipping
Curve-clipping procedures will involve nonlinear equations, and this requires more
processing than for objects with linear boundaries.
The bounding rectangle for a circle or other curved object can be used first to test for
overlap with a rectangular clip window. If the bounding rectangle for the object is completely
inside the window, we save the object.
If the rectangle is determined to be completely outside the window, we discard the
object. In either case, there is no further computation necessary. But if the bounding rectangle
test fails, we can look for other computation-saving approaches.
For a circle, we can use the coordinate extents of individual quadrants and then
octants for preliminary testing before calculating curve-window intersections.
The below figure illustrates circle clipping against a rectangular window. On the first
pass, we can clip the bounding rectangle of the object against the bounding rectangle of the
clip region. If the two regions overlap, we will need to solve the simultaneous line-curve
equations to obtain the clipping intersection points.
Clipping a filled circle
Text clipping
There are several techniques that can be used to provide text clipping in a graphics
package.
The clipping technique used will depend on the methods used to generate characters
and the requirements of a particular application.
The simplest method for processing character strings relative to a window boundary is
to use the all-or-none string-clipping strategy shown in the figure. If all of the string is inside a
clip window, we keep it. Otherwise, the string is discarded. This procedure is implemented by
considering a bounding rectangle around the text pattern.
The boundary positions of the rectangle are then compared to the window boundaries,
and the string is rejected if there is any overlap. This method produces the fastest text
clipping.
A final method for handling text clipping is to clip the components of individual
characters. We now treat characters in much the same way that we treated lines.
If an individual character overlaps a clip window boundary, we clip off the parts of
the character that are outside the window.
Text Clipping performed on the components of individual characters
Exterior clipping
The procedures discussed so far clip a picture to the interior of a region by eliminating
everything outside the clipping region; the parts of the picture inside the region are saved.
We can also clip a picture to the exterior of a specified region, so that the picture parts to
be saved are those that are outside the region. This is called exterior clipping. Exterior
clipping is also used in other applications that require overlapping pictures.
Objects within a window are clipped to interior of window when other higher priority
window overlap these objects. The objects are also clipped to the exterior of overlapping
windows.
UNIT-IV
Three Dimensional Display Methods
Parallel Projection
Perspective Projection
Depth cueing
Visible line and surface identification
Surface rendering
Exploded and cutaway views
Three dimensional stereoscopic views
1. Parallel Projection
This method generates a view of a solid object by projecting parallel lines onto the
display plane. By changing the viewing position we can get different views of a 3D object
on the 2D display screen.
In a parallel projection, parallel lines in the world-coordinate scene project into
parallel lines on the two-dimensional display plane.
2. Perspective Projection
This method displays objects smaller when they are farther away from the view plane and at
nearly full size when they are close to it.
In a perspective projection, parallel lines in a scene that are not parallel to the display
plane are projected into converging lines. Scenes displayed using perspective projections
appear more realistic, since this is the way that our eyes and a camera lens form images.
3. Depth cueing
Many times depth information is important so that, for a particular viewing direction,
we can identify which is the front and which is the back of a displayed object. A simple
method to do this is depth cueing, in which higher intensity is assigned to closer objects and
lower intensity to objects farther away.
Depth cueing is applied by choosing maximum and minimum intensity values and
a range of distances over which the intensities are to vary. Another application of depth cueing is
modelling the effect of the atmosphere.
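A minimal sketch of a linear depth-cueing function along these lines; the function name, parameters and ranges are illustrative assumptions, not from these notes:

/* Linear depth cueing (illustrative sketch): attenuate a surface intensity
   according to its depth d, where dmin/dmax bound the chosen depth range
   and fmin/fmax bound the attenuation factor (e.g. 0.3 to 1.0). */
float depthCueIntensity (float intensity, float d,
                         float dmin, float dmax,
                         float fmin, float fmax)
{
   float f;
   if (d <= dmin)      f = fmax;          /* closest objects: brightest   */
   else if (d >= dmax) f = fmin;          /* farthest objects: dimmest    */
   else                f = fmax - (fmax - fmin) * (d - dmin) / (dmax - dmin);
   return f * intensity;
}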
4. Visible line and surface Identification
In this method we first identify the visible lines or surfaces by some method, and then
display the visible lines highlighted or in a different colour. Another way is to display
hidden lines as dashed lines, or simply not to display them, although not drawing the
hidden lines loses some information.
A similar method can be applied to surfaces, by displaying them as shaded or coloured
surfaces. Some visible-surface algorithms establish visibility pixel by pixel across the view
plane; others determine the visibility of each object surface as a whole.
5. Surface Rendering
A more realistic image is produced by setting the surface intensity according to the light
reflected from the surface and the characteristics of the surface. Shiny surfaces are given
higher intensity and dull surfaces lower intensity; intensity is also higher where more light
falls and lower where less light falls.
6. Exploded and Cutaway views
Many times the internal structure of an object needs to be shown. For example, in a machine
drawing the internal assembly is important. To display such views, part of the outer body is
removed (cut away) so that the internal parts can be seen.
7. Three dimensional stereoscopic views
This method displays computer-generated scenes so that objects appear three-dimensional.
Graphics monitors that display three-dimensional scenes can be devised using a
technique that reflects a CRT image from a vibrating flexible mirror.
A good example of such a system is the GENISCO SPACE GRAPH system, which uses a
vibrating mirror to project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system
is also capable of showing 2D cross-sections at different depths.
Another way is to present stereoscopic views.
A stereoscopic display does not produce a true three-dimensional image, but it produces a 3D
effect by presenting a different view to each eye of an observer, so that the scene appears to have depth.
To obtain this we first need two views of the object, generated from viewing
directions corresponding to each eye. We can construct the two views as computer-generated
scenes with different viewing positions, or we can use a stereo camera pair to photograph an
object or scene.
When we view the two images simultaneously, the left view with the left eye and the right view with
the right eye, the two views merge into a single image that appears to have depth.
One way to produce the stereoscopic effect is to display each of the two views on a raster
system on alternate refresh cycles. The screen is viewed through glasses, with each lens
designed to act as a rapidly alternating shutter that is synchronised to block out
one of the views.
Three Dimensional Geometric Transformations
The basic three-dimensional geometric transformations are translation, scaling, and rotation.
These basic transformations are represented using matrices, and more complex transformations
are handled by combining these basic matrices in 3D.
Translation
It is the movement of an object from one position to another. Translation is done using
translation vectors (translation distances). Translation is additive.
If a point has coordinates (x, y, z), then after translation its coordinates become (x1, y1, z1),
where Tx, Ty and Tz are the translation distances in the x, y and z directions respectively:
x1=x+ Tx
y1=y+Ty
z1=z+ Tz
The point shown in the figure is (x, y, z); it becomes (x1, y1, z1) after translation by the
translation vector (Tx, Ty, Tz).
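A minimal sketch of 3D translation using a 4 x 4 homogeneous-coordinate matrix; the Matrix4 type and the function names are illustrative assumptions, not from these notes:

typedef float Matrix4[4][4];

/* Build a 4x4 homogeneous translation matrix for distances tx, ty, tz. */
void translate3D (float tx, float ty, float tz, Matrix4 m)
{
   int i, j;
   /* start from the identity matrix */
   for (i = 0; i < 4; i++)
      for (j = 0; j < 4; j++)
         m[i][j] = (i == j) ? 1.0f : 0.0f;
   m[0][3] = tx;   m[1][3] = ty;   m[2][3] = tz;
}

/* Apply the matrix to a homogeneous point p = (x, y, z, 1). */
void transformPoint (Matrix4 m, float p[4], float result[4])
{
   int i, j;
   for (i = 0; i < 4; i++) {
      result[i] = 0.0f;
      for (j = 0; j < 4; j++)
         result[i] += m[i][j] * p[j];
   }
}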
Scaling
Scaling is used to change the size of an object. The size can be increased or
decreased. Three scaling factors are required: Sx, Sy and Sz.
Scaling is multiplicative.
x1 = x . Sx
y1 = y . Sy
z1 = z . Sz
Matrix for Scaling
The following steps are performed when scaling an object about a fixed point (a, b, c); the
sequence can be represented as below:
1. Translate the fixed point (a, b, c) to the origin.
2. Scale the object with respect to the origin using Sx, Sy, Sz.
3. Translate the fixed point back to its original position.
Note: If all scaling factors are equal (Sx = Sy = Sz), the scaling is called uniform scaling. If
scaling is done with different scaling factors, it is called differential scaling.
In figure (a) the point (a, b, c) is shown, and the object to be scaled is shown in steps in
figures (b), (c) and (d).
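A minimal sketch of the resulting fixed-point scaling matrix, written out directly as the product T(a, b, c) . S(Sx, Sy, Sz) . T(-a, -b, -c); it reuses the illustrative Matrix4 type from the translation sketch above:

/* 4x4 homogeneous matrix for scaling by (sx, sy, sz) about fixed point (a, b, c):
   equivalent to translate(-a,-b,-c), then scale, then translate(a,b,c). */
void scaleAboutFixedPoint (float sx, float sy, float sz,
                           float a, float b, float c, Matrix4 m)
{
   int i, j;
   for (i = 0; i < 4; i++)
      for (j = 0; j < 4; j++)
         m[i][j] = 0.0f;
   m[0][0] = sx;   m[0][3] = (1.0f - sx) * a;   /* x' = sx*x + (1-sx)*a */
   m[1][1] = sy;   m[1][3] = (1.0f - sy) * b;   /* y' = sy*y + (1-sy)*b */
   m[2][2] = sz;   m[2][3] = (1.0f - sz) * c;   /* z' = sz*z + (1-sz)*c */
   m[3][3] = 1.0f;
}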
Rotation
Parameter θ specifies the rotation angle. In homogeneous-coordinate form, the three-
dimensional z-axis rotation equations are expressed as
x1 = x cos θ - y sin θ
y1 = x sin θ + y cos θ
z1 = z
or, in matrix form, P' = Rz(θ) . P, where Rz(θ) is the 4 x 4 homogeneous rotation matrix
about the z axis.
Composite Transformation:-
As with two-dimensional transformations, we form a composite three-dimensional
transformation by multiplying the matrix representations of the individual operations in the
transformation sequence.
This concatenation is carried out from right to left, where the rightmost matrix is the
first transformation to be applied to an object and the leftmost matrix is the last
transformation.
The following program provides an example of implementing a composite
transformation.
A sequence of basic, three-dimensional geometric transformations are combined to
produce a single composite transformation, which is then applied to the coordinate definition
of an object.
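A minimal sketch of such a composite transformation, reusing the illustrative Matrix4 type and translate3D routine from the translation sketch above: a z-axis rotation followed by a translation is concatenated into one matrix (the names and the example values are assumptions for illustration only):

#include <math.h>

/* Concatenate: result = a * b (so b is applied to a point first, then a). */
void matMul (Matrix4 a, Matrix4 b, Matrix4 result)
{
   int i, j, k;
   for (i = 0; i < 4; i++)
      for (j = 0; j < 4; j++) {
         result[i][j] = 0.0f;
         for (k = 0; k < 4; k++)
            result[i][j] += a[i][k] * b[k][j];
      }
}

/* Homogeneous rotation about the z axis by angle theta (radians). */
void rotateZ3D (float theta, Matrix4 m)
{
   int i, j;
   for (i = 0; i < 4; i++)
      for (j = 0; j < 4; j++)
         m[i][j] = (i == j) ? 1.0f : 0.0f;
   m[0][0] =  cosf(theta);  m[0][1] = -sinf(theta);
   m[1][0] =  sinf(theta);  m[1][1] =  cosf(theta);
}

/* Example: rotate about z by 90 degrees, then translate by (2, 0, 0). */
void buildComposite (Matrix4 composite)
{
   Matrix4 r, t;
   rotateZ3D(3.14159265f / 2.0f, r);
   translate3D(2.0f, 0.0f, 0.0f, t);
   matMul(t, r, composite);   /* rightmost matrix (r) is applied first */
}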
THREE-DIMENSIONAL VIEWING
Three-dimensional descriptions of objects must be projected onto the flat viewing
surface of the output device.
And the clipping boundaries now enclose a volume of space, whose shape depends
on the type of projection we select. In this chapter, we explore the general operations needed
to produce views of a three-dimensional scene, and we also discuss specific viewing
procedures provided in packages such as PHIGS and GL.
VIEWING PIPELINE
Computer generation of a view of a three-dimensional scene is somewhat analogous
to the process involved in taking a photograph. To take a snapshot, we first need to position
the camera at a particular point in space.
We snap the shutter, the scene is cropped to the size of the "window" (aperture) of the
camera, and light from the visible surfaces is projected onto the camera film.
Once the scene has been modeled, world-coordinate positions are converted to
viewing coordinates.
The viewing-coordinate system is used in graphics packages as a reference for
specifying the observer viewing position and the position of the projection plane, which
we can think of in analogy with the camera film plane.
Next, projection operations are performed to convert the viewing-coordinate
description of the scene to coordinate positions on the projection plane, which will then be
mapped to the output device.
Objects outside the specified viewing limits are clipped from further consideration, and the
remaining objects are processed through visible-surface identification and surface-
rendering procedures to produce the display within the device viewport.
Projection
It is the process of converting a 3D object into a 2D representation. It is also defined as the
mapping or transformation of the object onto the projection plane, or view plane. The view
plane is the display surface.
Perspective Projection
In perspective projection, the farther away an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth.
The two main characteristics of perspective projection are vanishing points and perspective
foreshortening.
Due to foreshortening, objects and lengths appear smaller the farther they are from the center
of projection. The more we increase the distance from the center of projection, the smaller
the object appears.
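A minimal sketch of a perspective projection of a point onto a view plane, assuming the center of projection is at the origin and the view plane lies at z = d along the viewing axis (the names and the convention are illustrative assumptions, not formulas quoted from these notes):

/* Perspective projection of point (x, y, z) onto the plane z = d,
   with the center of projection at the origin.  From similar triangles:
   xp = d * x / z,  yp = d * y / z. */
void perspectiveProject (float x, float y, float z, float d,
                         float *xp, float *yp)
{
   /* z must be non-zero (the point must not lie at the center of projection) */
   *xp = d * x / z;
   *yp = d * y / z;
}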
Vanishing Point
It is the point where all lines will appear to meet. There can be one point, two point,
and three point perspectives.
One point: There is only one vanishing point, as shown in fig (a).
Two point: There are two vanishing points, one in the x-direction and the other in the y-
direction, as shown in fig (b).
Three point: There are three vanishing points, one in the x-direction, the second in the
y-direction and the third in the z-direction.
Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the projectors
are perpendicular to the view plane, the projection is called an orthographic projection.
A parallel projection is formed by extending parallel lines from each vertex of the
object until they intersect the plane of the screen. The point of intersection is the projection
of the vertex.
Parallel projections are used by architects and engineers for creating working
drawings of an object; complete representations require two or more views of the
object, using different projection planes.
Isometric projection: The direction of projection makes equal angles with all three principal
axes; the axes are commonly drawn at 30° to the horizontal.
Dimetric projection: The direction of projection makes equal angles with two of the principal axes.
Trimetric projection: The direction of projection makes unequal angles with the three principal axes.
Cavalier projection: All lines perpendicular to the projection plane are projected with no change in
length.
Cabinet projection: All lines perpendicular to the projection plane are projected at one half of their
length. This gives a more realistic appearance of the object.
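A minimal sketch contrasting an orthographic projection with an oblique (cavalier/cabinet) projection onto the xy plane; the receding angle alpha, the length factor L and the function names are illustrative assumptions:

#include <math.h>

/* Orthographic projection onto the xy plane: simply drop the z coordinate. */
void orthographicProject (float x, float y, float z, float *xp, float *yp)
{
   *xp = x;
   *yp = y;
}

/* Oblique projection onto the xy plane: receding (z) edges are drawn at
   angle alpha, with length scaled by L (L = 1 for cavalier, L = 0.5 for cabinet). */
void obliqueProject (float x, float y, float z, float L, float alpha,
                     float *xp, float *yp)
{
   *xp = x + L * z * cosf(alpha);
   *yp = y + L * z * sinf(alpha);
}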
UNIT-V
Visible Surface (Hidden Surface) Detection Methods:
When a three-dimensional scene is projected, surfaces and parts of surfaces that lie behind
other surfaces are hidden. We must remove these hidden surfaces to get a realistic screen image. The
identification and removal of these surfaces is called the hidden-surface problem.
There are two approaches for removing hidden surfaces: the object-space
method and the image-space method.
Depth Buffer (Z-Buffer) Method
In this image-space method each surface is processed separately, one pixel position at a time across
the surface. The depth values for a pixel are compared, and the closest (smallest z) surface
determines the colour to be displayed in the frame buffer.
A depth buffer is used to store the depth value for each (x, y) position as surfaces are
processed (0 ≤ depth ≤ 1).
The frame buffer is used to store the intensity or colour value at each position (x, y).
The z-coordinates are usually normalized to the range [0, 1]. The 0 value for z-
coordinate indicates back clipping pane and 1 value for z-coordinates indicates front
clipping pane.
Algorithm
1. For all pixels on the screen, set depth[x, y] to 1.0 and intensity[x, y] to a background
value.
2. For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of the
polygon when projected onto the screen. For each of these pixels:
(a) Calculate the depth z of the polygon at (x, y).
(b) If z < depth[x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth[x, y] to z and intensity[x, y] to a value
corresponding to the polygon's shading. If instead z > depth[x, y], the polygon already
recorded at (x, y) lies closer to the observer than this new polygon, and no action
is taken.
3. After all polygons have been processed, the intensity array will contain the
solution.
4. The depth-buffer algorithm illustrates several features common to all hidden-
surface algorithms.
5. First, it requires a representation of all opaque surfaces in the scene, polygons in this case.
6. These polygons may be faces of polyhedra recorded in the model of the scene, or they may
simply represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen coordinate system. Before
step 1, all polygons in the scene are transformed into the screen coordinate system using matrix
multiplication.
Algorithm
1. Initialise: depthbuffer(x, y) = 0 and framebuffer(x, y) = background colour, for all (x, y).
2. For each polygon, calculate the depth Z at each projected pixel position (x, y).
3. If Z > depthbuffer(x, y), set depthbuffer(x, y) = Z and framebuffer(x, y) = the surface
colour at (x, y).
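A minimal sketch of the depth-buffer loop under the convention of the listing just above (a larger stored depth means closer to the viewer); the buffer sizes and the surfaceDepth/surfaceColour helpers are illustrative assumptions:

#define XMAX 640
#define YMAX 480

float depthbuffer[YMAX][XMAX];   /* one depth value per pixel            */
int   framebuffer[YMAX][XMAX];   /* one colour/intensity value per pixel */

/* Hypothetical helpers assumed to exist elsewhere: depth and colour of a
   polygon's surface at pixel (x, y). */
extern float surfaceDepth  (void *poly, int x, int y);
extern int   surfaceColour (void *poly, int x, int y);

/* Process one polygon over its projected pixel extent. */
void zBufferPolygon (void *poly, int xmin, int xmax, int ymin, int ymax)
{
   int x, y;
   for (y = ymin; y <= ymax; y++)
      for (x = xmin; x <= xmax; x++) {
         float z = surfaceDepth(poly, x, y);     /* depth of poly at (x, y)  */
         if (z > depthbuffer[y][x]) {            /* closer than stored depth */
            depthbuffer[y][x] = z;
            framebuffer[y][x] = surfaceColour(poly, x, y);
         }
      }
}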
Advantages
It is easy to implement.
It reduces the speed problem if implemented in hardware.
It processes one object at a time.
Disadvantages
It requires a large amount of memory for the depth buffer.
It is time consuming, since each surface is processed one pixel at a time.
Back-Face Detection
A fast and simple object-space method for identifying the back faces of a
polyhedron is based on the "inside-outside" tests.
We can simplify this test by considering the normal vector N to a polygon surface,
which has Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera")
position, then this polygon is a back face if
V.N > 0
If object descriptions have been converted to projection coordinates and the viewing
direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and V.N = Vz C, so we only
need to consider the sign of C, the z component of the normal vector N.
In a right-handed viewing system with viewing direction along the negative zv
axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z
component C = 0, since the viewing direction grazes that polygon. Thus, in general, we
can label any polygon as a back face if its normal vector has a z component value
C <= 0
Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C and D can be calculated from polygon vertex
coordinates specified in a clockwise direction (unlike the counterclockwise direction used in
a right-handed system).
Also, back faces have normal vectors that point away from the viewing position and
are identified by C >= 0 when the viewing direction is along the positive zv axis. By
examining parameter C for the different planes defining an object, we can immediately
identify all the back faces.
The surface normal can be computed from three successive polygon vertices V1, V2, V3 as
the cross product
N = (V2 - V1) x (V3 - V2)
If N.P >= 0 the surface is visible, and if N.P < 0 it is invisible, where P is a vector along the
viewing (projection) direction.
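A minimal sketch of this back-face test, using the cross product of two edge vectors and a dot product with the viewing direction; the Vec3 type and function names are illustrative assumptions:

typedef struct { float x, y, z; } Vec3;    /* illustrative 3D vector type */

static Vec3 sub (Vec3 a, Vec3 b)
{
   Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
   return r;
}

static Vec3 cross (Vec3 a, Vec3 b)
{
   Vec3 r = { a.y * b.z - a.z * b.y,
              a.z * b.x - a.x * b.z,
              a.x * b.y - a.y * b.x };
   return r;
}

static float dot (Vec3 a, Vec3 b)
{
   return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Returns 1 if the polygon with vertices v1, v2, v3 faces the viewer
   (N.P >= 0 in the convention above), 0 if it is a back face. */
int isFrontFace (Vec3 v1, Vec3 v2, Vec3 v3, Vec3 viewDir)
{
   Vec3 n = cross(sub(v2, v1), sub(v3, v2));   /* polygon normal */
   return dot(n, viewDir) >= 0.0f;
}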
Advantages
1. It is a simple and straightforward method.
2. It reduces the size of the database, because there is no need to store all surfaces;
only the visible surfaces are stored.
Painter's Algorithm (Depth-Sort Method)
This method comes under the category of list-priority algorithms. It is also called the depth-sort
algorithm. In this algorithm an ordering of the visibility of objects is done: if objects are
rendered in a particular order, a correct picture results.
Objects are arranged in decreasing order of distance from the view plane, and rendering is
done in that order, so that nearer objects are drawn after the farther ones and obscure them.
Pixels of nearer objects overwrite pixels of farther objects. If the z extents of two objects do
not overlap, we can determine the correct order from the z value alone, as shown in fig (a).
If objects overlap each other in z, as in fig (b), the correct order can be obtained by
splitting the objects.
Distance is measured from the view plane: polygons at greater distance are painted first.
The concept is taken from the way a painter or artist works. When a painter makes a painting, first
of all he paints the entire canvas with the background colour; then the more distant objects,
such as mountains and trees, are added; and then the nearer, foreground objects are added to the
picture. We use a similar approach: we sort surfaces according to their z values.
Painter Algorithm
Step 2: Sort all polygons by z value, keeping the polygon with the largest z value (farthest from the view plane) first.
The success of any of the overlap tests with a single overlapping polygon allows the polygon F to be painted.
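A minimal sketch of the sorting step, using qsort from the C standard library to order polygons far-to-near before they are drawn; the Polygon struct and the drawPolygon routine are illustrative assumptions:

#include <stdlib.h>

typedef struct { float zmax; /* farthest z of any vertex */ int id; } Polygon;

extern void drawPolygon (const Polygon *p);   /* hypothetical scan-conversion routine */

/* Comparator: polygon with the greatest depth comes first. */
static int byDepthDescending (const void *a, const void *b)
{
   float za = ((const Polygon *) a)->zmax;
   float zb = ((const Polygon *) b)->zmax;
   return (za < zb) - (za > zb);      /* descending order of zmax */
}

void painterRender (Polygon *polys, int n)
{
   int i;
   qsort(polys, n, sizeof(Polygon), byDepthDescending);
   for (i = 0; i < n; i++)
      drawPolygon(&polys[i]);         /* paint far-to-near */
}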
Scan-Line Method
It is an image-space algorithm. It processes one scan line at a time rather than one pixel at
a time.
It uses the concept of area coherence. The algorithm maintains an edge list and an active edge
list, so accurate bookkeeping is necessary. The edge list (edge table) contains the coordinates
of the two endpoints of each edge.
The Active Edge List (AEL) contains the edges that a given scan line intersects during its sweep.
The active edge list should be sorted in increasing order of x. The AEL is dynamic,
growing and shrinking as the scan line advances.
Since only one scan line of depth values needs to be stored at a time, we must group and process all
polygons intersecting a given scan line together before processing the next scan line.
Two important tables, edge table and polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse
slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other
surface data, and may be pointers to the edge table.
Scan line    Active edge list entries
L1           AB, BC, EH, FG
L2           AD, EH, BC, FG
L3           AD, EH, BC, FG
The scan-line method can deal with multiple surfaces. As each scan line is processed, the line
may intersect many surfaces. The intersection calculations determine which surface is visible:
a depth calculation is done for each intersected surface, and the surface nearest to the view plane
is taken as visible. When the visibility of a surface has been determined, its intensity value is
entered into the refresh buffer.
Algorithm
1. Enter values into the Active Edge List (AEL) in sorted order, using y as the sort value.
2. Scan using the background colour until a surface flag turns on.
3. When one polygon flag is on, say for surface S1, enter its colour intensity I1 into the
refresh buffer.
4. When two or more surface flags are on, sort the surfaces according to depth and use the
intensity value of the surface with the least z depth (the nearest surface).
5. Use the concept of coherence for the remaining planes.
Warnock's Algorithm (Area Subdivision)
This method was invented by John Warnock and is therefore also called the Warnock algorithm.
It is based on a divide-and-conquer strategy and uses the fundamentals of area coherence.
It is used to resolve visibility. It classifies polygons into two cases, trivial and non-trivial.
Trivial cases are handled easily. For non-trivial cases, the window is divided into four equal
subwindows, and the subwindows are further subdivided recursively until every polygon can be
classified as a trivial case.
Classification Scheme
1. Inside surface
2. Outside surface
3. Overlapping surface
4. Surrounding surface
Area-Subdivision Method
Divide the total viewing area into smaller and smaller rectangles until each small area
is the projection of part of a single visible surface or no surface at all.
Continue this process until the subdivisions are easily analyzed as belonging to a single
surface or until they are reduced to the size of a single pixel. An easy way to do this is to
successively divide the area into four equal parts at each step. There are four possible
relationships that a surface can have with a specified area boundary.
The tests for determining surface visibility within an area can be stated in terms of these
four classifications. No further subdivisions of a specified area are needed if one of the
following conditions is true:
1. All surfaces are outside surfaces with respect to the area.
2. Only one inside, overlapping or surrounding surface is in the area.
3. A surrounding surface obscures all other surfaces within the area boundaries.
A-Buffer Method
The A-buffer expands on the depth-buffer method to allow transparencies. The key
data structure in the A-buffer is the accumulation buffer, in which each position stores a
depth field and an intensity field.
If depth >= 0, the number stored at that position is the depth of a single surface
overlapping the corresponding pixel area. The intensity field then stores the RGB
components of the surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The
intensity field then stores a pointer to a linked list of surface data. Each surface entry in the
A-buffer includes:
o RGB intensity components
o Opacity parameter (percent of transparency)
o Depth
o Percent of area coverage
o Surface identifier
o Other surface-rendering parameters
o Pointer to the next surface
The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values
are used to determine the final color of a pixel.
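A minimal sketch of an accumulation-buffer entry along these lines; the struct layout and field names are illustrative assumptions, not a definitive A-buffer implementation:

/* One linked-list node of per-surface data for a pixel (illustrative). */
typedef struct SurfaceData {
   float rgb[3];                 /* RGB intensity components            */
   float opacity;                /* opacity / transparency parameter    */
   float depth;                  /* depth of this surface at the pixel  */
   float coverage;               /* percent of pixel area covered       */
   int   surfaceId;              /* surface identifier                  */
   struct SurfaceData *next;     /* pointer to the next surface         */
} SurfaceData;

/* One accumulation-buffer cell: a single surface, or a list of surfaces. */
typedef struct {
   float depth;                  /* >= 0: depth of a single surface;
                                    <  0: multiple surfaces contribute  */
   union {
      float rgb[3];              /* colour when only one surface        */
      SurfaceData *surfaces;     /* list of surfaces when depth < 0     */
   } intensity;
} AccumCell;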
Depth Sorting Method
The depth-sorting method uses both image-space and object-space operations. It performs two
basic functions:
1. Surfaces are sorted in order of decreasing depth.
2. Surfaces are scan-converted in order, starting with the surface of greatest depth.
The scan conversion of the polygon surfaces is performed in image space. This method for
solving the hidden-surface problem is often referred to as the painter's algorithm. The
following figure shows the effect of depth sorting −
The algorithm begins by sorting by depth. For example, the initial “depth” estimate of a
polygon may be taken to be the closest z value of any vertex of the polygon.
Let us take the polygon P at the end of the list. Consider all polygons Q whose z-extents
overlap P's. Before drawing P, we make the following tests; if any of them is positive, we can
assume P can be drawn before Q:
1. Do the x-extents of P and Q not overlap?
2. Do the y-extents of P and Q not overlap?
3. Is P entirely on the opposite side of Q's plane from the viewpoint?
4. Is Q entirely on the same side of P's plane as the viewpoint?
5. Do the projections of P and Q onto the (x, y) plane not overlap?
If all the tests fail, then we split either P or Q using the plane of the other. The new cut
polygons are inserted into the depth order and the process continues. Theoretically, this
partitioning could generate O(n²) individual polygons, but in practice the number of
polygons is much smaller.
BSP-Tree Method
To build a BSP tree, one should start with the polygons and label all the edges.
Dealing with only one edge at a time, extend each edge so that it splits the plane in two. Place
the first edge in the tree as the root.
Add subsequent edges based on whether they are inside or outside. Edges that span
the extension of an edge that is already in the tree are split into two and both are added to the
tree.
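A minimal sketch of a BSP node and insertion routine for this edge-based construction; the Edge type, the classification helper and the splitting helper are illustrative assumptions only:

#include <stdlib.h>

typedef struct { float x1, y1, x2, y2; } Edge;     /* illustrative edge type */

typedef struct BspNode {
   Edge edge;                    /* the edge whose extension splits the plane */
   struct BspNode *inside;       /* subtree of edges on the "inside" half     */
   struct BspNode *outside;      /* subtree of edges on the "outside" half    */
} BspNode;

/* Hypothetical helpers: classifyEdge returns +1 (inside), -1 (outside) or
   0 (spans the splitter); splitEdge cuts a spanning edge into two parts. */
extern int  classifyEdge (Edge e, Edge splitter);
extern void splitEdge (Edge e, Edge splitter, Edge *in, Edge *out);

BspNode *bspInsert (BspNode *node, Edge e)
{
   if (node == NULL) {                              /* empty subtree: new node */
      node = (BspNode *) malloc(sizeof(BspNode));
      node->edge = e;
      node->inside = node->outside = NULL;
      return node;
   }
   switch (classifyEdge(e, node->edge)) {
      case  1: node->inside  = bspInsert(node->inside,  e); break;
      case -1: node->outside = bspInsert(node->outside, e); break;
      default: {                                    /* spanning edge: split it */
         Edge in, out;
         splitEdge(e, node->edge, &in, &out);
         node->inside  = bspInsert(node->inside,  in);
         node->outside = bspInsert(node->outside, out);
      }
   }
   return node;
}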