
COMPUTER GRAPHICS

UNIT-I

INTRODUCTION TO COMPUTER GRAPHICS

Computer graphics involves the technology to access, transform and present information in a visual form. The role of computer graphics is indispensable: in today's life, computer graphics has become a common element in user interfaces, T.V. commercials and motion pictures.

Computer graphics is the creation of pictures with the help of a computer. The end product of computer graphics is a picture; it may be a business graph, a drawing or an engineering design.

In computer graphics, two- or three-dimensional pictures can be created that are used for research. Over time, many hardware devices and algorithms have been developed to improve the speed of picture generation. Computer graphics includes the creation and storage of models and images of objects. These models are used in various fields such as engineering, mathematics and so on.

Today computer graphics is entirely different from what it used to be. It is interactive: the user can control the structure of an object through various input devices.

Definition of Computer Graphics:


Computer graphics are graphics created using computers: the representation of image data by a computer, specifically with the help of specialized graphics hardware and software.

It is the use of computers to create and manipulate pictures on a display device.


It comprises software techniques to create, store, modify and represent pictures.
Why is computer graphics used?

Suppose a shoe manufacturing company wants to show its shoe sales for five years. A vast amount of information would have to be stored.

So a lot of time and memory would be needed, and this method would be tough for a common person to understand. In this situation graphics is a better alternative; graphics tools such as charts and graphs can be used.

Using graphs, data can be represented in pictorial form. A picture can be understood easily with a single look.

Interactive computer graphics works on the concept of two-way communication between the computer and the user.

The computer receives signals from the input device, and the picture is modified accordingly. The picture changes quickly when we apply a command.

Interactive and Passive Graphics

(a) Non-Interactive or Passive Computer Graphics:

In non-interactive computer graphics, the picture is produced on the monitor and the user does not have any control over the image, i.e., the user cannot make any change in the rendered image. One example is the titles shown on T.V.

Non-interactive graphics involves only one-way communication between the computer and the user: the user can see the produced image but cannot make any change in it.

(b) Interactive Computer Graphics:

In interactive computer graphics the user has some control over the picture, i.e., the user can make changes in the produced image. One example is the ping-pong game.

Interactive computer graphics requires two-way communication between the computer and the user. The user can see the image and make changes by sending commands with an input device.

Advantages:
1. Higher Quality
2. More precise results or products
3. Greater Productivity
4. Lower analysis and design cost
5. Significantly enhances our ability to understand data and to perceive trends.

APPLICATION OF COMPUTER GRAPHICS

1. Education and Training:

Computer-generated models of physical, financial and economic systems are often used as educational aids.

Models of physical systems, physiological systems, population trends or equipment can help trainees understand the operation of a system.

For some training applications, special systems are designed, for example the flight simulator.
Flight Simulator: It helps in giving training to the pilots of airplanes. These
pilots spend much of their training not in a real aircraft but on the ground at the controls
of a Flight Simulator.

Advantages:
1. Fuel Saving
2. Safety
3. Ability to familiarize the trainee with a large number of the world's airports.

2. Use in Biology:

Molecular biologists can display pictures of molecules and gain insight into their structure with the help of computer graphics.

3. Computer-Generated Maps:

Town planners and transportation engineers can use computer-generated maps


which display data useful to them in their planning work.

4. Architect:

Architects can explore alternative solutions to design problems at an interactive graphics terminal. In this way, they can test many more solutions than would be possible without the computer.

5. Presentation Graphics:

Examples of presentation graphics are bar charts, line graphs, pie charts and other displays showing relationships between multiple parameters. Presentation graphics is commonly used to summarize

o Financial Reports

o Statistical Reports

o Mathematical Reports

o Scientific Reports

o Economic Data for research reports


o Managerial Reports

o Consumer Information Bulletins

o And other types of reports

6. Computer Art:

Computer graphics is also used in the field of commercial art. It is used to generate television and advertising commercials.

7. Entertainment:

Computer Graphics are now commonly used in making motion pictures, music
videos and television shows.

8. Visualization:

It is used by scientists, engineers, medical personnel and business analysts to study large amounts of information.

9. Educational Software:

Computer Graphics is used in the development of educational software for


making computer-aided instruction.

10. Printing Technology:

Computer Graphics is used for printing technology and textile design.

Example of Computer Graphics Packages:

1. LOGO

2. COREL DRAW
3. AUTO CAD
4. 3D STUDIO
5. CORE
6. GKS (Graphics Kernel System)
7. PHIGS
8. CGM (Computer Graphics Metafile)
9. CGI (Computer Graphics Interface)

DISPLAY DEVICES:

The most commonly used display device is a video monitor. The operation of most video monitors is based on the CRT (Cathode Ray Tube). The following display devices are discussed:

1. Refresh Cathode Ray Tube


2. Random Scan and Raster Scan
3. Color CRT Monitors
4. Direct View Storage Tubes
5. Flat Panel Display
6. Lookup Table

CATHODE RAY TUBE (CRT):

CRT stands for Cathode Ray Tube. CRT is a technology used in traditional computer monitors and televisions. The image on a CRT display is created by firing electrons from the back of the tube at a phosphor coating located towards the front of the screen.

A CRT is an evacuated glass tube. An electron gun at the rear of the tube
produces a beam of electrons which is directed towards the front of the tube (screen).

Once the electrons hit the phosphor, it lights up, and the glowing spots form the picture on the screen. The color you view on the screen is produced by a blend of red, blue and green light.

The important components of a CRT are the electron gun, the focusing system, the magnetic deflection system, the phosphor-coated screen and the electron beam.
The beam is positioned on the screen by the deflection system of the cathode ray tube, which consists of two pairs of parallel plates referred to as the vertical and horizontal deflection plates.

The intensity of the beam is controlled by the intensity signal on the control grid.
The voltage applied to vertical plates controls the vertical deflection of the
electron beam and voltage applied to the horizontal plates controls the horizontal
deflection of the electron beam.

There are two techniques used for producing images on the CRT screen: Vector
scan / random scan and Raster scan.

1. Electron Gun: The electron gun consists of a series of elements, primarily a heating filament (heater) and a cathode. It creates a source of electrons which are focused into a narrow beam directed at the face of the CRT.

2. Control Electrode: It is used to turn the electron beam on and off.

3. Focusing system: It is used to create a clear picture by focusing the electrons into a
narrow beam.

4. Deflection Yoke: It is used to control the direction of the electron beam. It creates an
electric or magnetic field which will bend the electron beam as it passes through the
area.

In a conventional CRT, the yoke is linked to a sweep or scan generator. The


deflection yoke which is connected to the sweep generator creates a fluctuating electric
or magnetic potential.
5. Phosphorus-coated screen: The inside front surface of every CRT is coated with
phosphors. Phosphors glow when a high-energy electron beam hits them.
Phosphorescence is the term used to characterize the light given off by a phosphor after
it has been exposed to an electron beam.

Compare this with some earlier systems in which the only way to carry out an edit was to clear the whole screen and then redraw the whole image. Also, by changing the stored representation between refresh cycles, animation is possible.

RANDOM SCAN AND RASTER SCAN DISPLAY:

RASTER SCAN DISPLAY

The electron beam is swept across the screen one row at a time from top to
bottom. As it moves across each row, the beam intensity is turned on and off to create
a pattern of illuminated spots.

A Raster Scan Display is based on intensity control of pixels in the form of a


rectangular box called Raster on the screen.

Information of on and off pixels is stored in refresh buffer or Frame buffer.


Televisions in our house are based on Raster Scan Method.

The raster scan system can store information of each pixel position, so it is
suitable for realistic display of objects. Raster Scan provides a refresh rate of 60 to 80
frames per second.
Picture definition is stored in a memory area called the frame buffer. This
frame buffer stores the intensity values for all the screen points. Each screen point is
called a pixel (picture element).

On black and white systems, the frame buffer storing the values of the pixels is called a bitmap. Each entry in the bitmap is 1 bit of data which determines whether the intensity of the pixel is on (1) or off (0).

On color systems, the frame buffer storing the values of the pixels is called a
pixmap (Though nowadays many graphics libraries name it as bitmap too).

Each entry in the pixmap occupies a number of bits to represent the color of the
pixel. For a true color display, the number of bits for each entry is 24 (8 bits per
red/green/blue channel, each channel 2^8 =256 levels of intensity value, ie. 256
voltage settings for each of the red/green/blue electron guns).
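For example, a 1024 x 768 true-color frame buffer needs 1024 x 768 x 3 bytes, i.e. 2,359,296 bytes (2.25 MB), just to hold one frame.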

Types of Scanning or travelling of beam in Raster Scan

1. Interlaced Scanning
2. Non-Interlaced Scanning

In non-interlaced (progressive) scanning, each horizontal line of the screen is traced in sequence from top to bottom in every refresh cycle. At a refresh rate of 30 frames per second this produces noticeable flicker and fading of the displayed object.

This problem can be reduced by interlaced scanning. Here, first all the odd-numbered lines are traced by the electron beam, and then, in the next cycle, the even-numbered lines are traced. In this way 60 half-frames per second are displayed, which largely removes the flicker.

Advantages:
1. Realistic images
2. Millions of different colors can be generated
3. Shadow scenes are possible.
Disadvantages:
1. Low Resolution
2. Expensive

RANDOM SCAN DISPLAY:

Random Scan System uses an electron beam which operates like a pencil to create a
line image on the CRT screen.

The picture is constructed out of a sequence of straight-line segments. Each line segment is drawn on the screen by directing the beam to move from one point on the screen to the next, where its x and y coordinates define each point.

After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second.

Random-scan monitors are also known as vector displays or stroke-writing displays


or calligraphic displays.

Advantages:
1. A CRT has the electron beam directed only to the parts of the screen where an
image is to be drawn.
2. Produce smooth line drawings.
3. High Resolution
Disadvantages:
1. Random-Scan monitors cannot display realistic shades scenes.

Differentiate between Random and Raster Scan Display:

Random Scan vs. Raster Scan:

1. Random scan has high resolution; raster scan resolution is low.
2. Random scan is more expensive; raster scan is less expensive.
3. In random scan any modification, if needed, is easy; in raster scan modification is tough.
4. A solid pattern is tough to fill in random scan; it is easy to fill in raster scan.
5. In random scan the refresh rate depends on the number of lines in the picture; in raster scan the refresh rate does not depend on the picture.
6. Random scan draws only the screen areas containing the image; raster scan scans the whole screen.
7. Beam penetration technology comes under random scan; shadow mask technology comes under raster scan.
8. Random scan does not use the interlacing method; raster scan uses interlacing.
9. Random scan is restricted to line-drawing applications; raster scan is suitable for realistic display.
COLOR CRT MONITORS:

A color CRT monitor displays color pictures by using a combination of phosphors that emit different-colored light. There are two popular approaches for producing color displays with a CRT:

1. Beam Penetration Method


2. Shadow-Mask Method

1. Beam Penetration Method:

The Beam-Penetration method has been used with random-scan monitors. In this
method, the CRT screen is coated with two layers of phosphor, red and green and the
displayed color depends on how far the electron beam penetrates the phosphor layers.

This method produces four colors only, red, green, orange and yellow. A beam of
slow electrons excites the outer red layer only; hence screen shows red color only.

A beam of high-speed electrons excites the inner green layer, and the screen shows a green color. Intermediate beam speeds excite both layers, producing combinations of red and green light that appear as orange and yellow.


Advantages:
1. Inexpensive

Disadvantages:
1. Only four colors are possible
2. Quality of pictures is not as good as with another method.

2. Shadow-Mask Method:
Shadow Mask Method is commonly used in Raster-Scan System because they
produce a much wider range of colors than the beam-penetration method.

It is used in the majority of color TV sets and monitors.

Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.

One phosphor dot emits: red light

Another emits: green light

Third emits: blue light


This type of CRT has 3 electron guns, one for each color dot, and a shadow mask grid just behind the phosphor-coated screen.

Shadow mask grid is pierced with small round holes in a triangular pattern.

Figure shows the delta-delta shadow mask method commonly used in color CRT
system.

DIRECT VIEW STORAGE TUBES:

DVST terminals also use the random scan approach to generate the image on the
CRT screen. The term "storage tube" refers to the ability of the screen to retain the
image which has been projected against it, thus avoiding the need to rewrite the image
constantly.

Function of guns: Two guns are used in DVST

1. Primary guns: It is used to store the picture pattern.


2. Flood gun or Secondary gun: It is used to maintain picture display.

Advantage:
1. No refreshing is needed.
2. High Resolution
3. Cost is very low
Disadvantage:
1. It is not possible to erase the selected part of a picture.
2. It is not suitable for dynamic graphics applications.
3. If a part of the picture is to be modified, the whole picture must be redrawn, which consumes time.

FLAT PANEL DISPLAY:

The flat-panel display refers to a class of video devices that have reduced volume, weight and power requirements compared to a CRT.

Example: Small T.V. monitor, calculator, pocket video games, laptop computers,
an advertisement board in elevator.

1. Emissive Display: The emissive displays are devices that convert electrical energy
into light. Examples are Plasma Panel, thin film electroluminescent display and
LED (Light Emitting Diodes).

2. Non-Emissive Display: Non-emissive displays use optical effects to convert sunlight or light from some other source into graphics patterns. An example is the LCD (Liquid Crystal Display).

Plasma Panel Display:

Plasma panels are also called gas-discharge displays. A plasma panel consists of an array of small lights which are fluorescent in nature. The essential components of a plasma-panel display are:
1. Cathode: It consists of fine wires. It delivers negative voltage to the gas cells. The voltage is applied along the negative axis.
2. Anode: It also consists of fine wires. It delivers positive voltage. The voltage is supplied along the positive axis.
3. Fluorescent cells: These consist of small pockets of gas (neon); when voltage is applied, the gas emits light.
4. Glass Plates: These plates act as capacitors. When the voltage is applied, the cell will glow continuously.

The gas will glow when there is a significant voltage difference between the horizontal and vertical wires. The voltage level is kept between 90 and 120 volts. A plasma panel does not require refreshing. Erasing is done by reducing the voltage to 90 volts.

Each plasma cell has two states, so a cell is said to be stable. A displayable point in the plasma panel is made by the crossing of a horizontal and a vertical grid wire. The resolution of the plasma panel can be up to 512 * 512 pixels.

Figure shows the state of cell in plasma panel display:

Advantage:
1. High Resolution
2. Large screen size is also possible.
3. Less Volume
4. Less weight
5. Flicker Free Display

Disadvantage:
1. Poor Resolution
2. The wiring requirement for the anodes and cathodes is complex.
3. Its addressing is also complex.

LED (Light Emitting Diode):

In an LED, a matrix of diodes is organized to form the pixel positions in the display and
picture definition is stored in a refresh buffer. Data is read from the refresh buffer and
converted to voltage levels that are applied to the diodes to produce the light pattern in
the display.

LCD (Liquid Crystal Display):

Liquid Crystal Displays are the devices that produce a picture by passing polarized light
from the surroundings or from an internal light source through a liquid-crystal material
that transmits the light.

An LCD uses liquid-crystal material between two glass plates; the plates are placed at right angles to each other, and the liquid is filled between them. One glass plate consists of rows of conductors arranged in the vertical direction.

The other glass plate consists of rows of conductors arranged in the horizontal direction. A pixel position is determined by the intersection of a vertical and a horizontal conductor; this position is an active part of the screen.

A liquid crystal display is temperature dependent; it works between zero and seventy degrees Celsius. It is flat and requires very little power to operate.
Advantage:
1. Low power consumption.
2. Small Size
3. Low Cost

Disadvantage:
1. LCDs are temperature-dependent (0-70°C)
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.

Look-Up Table:

Image representation is essentially the description of pixel colors. There are three primary colors: R (red), G (green) and B (blue). Each primary color can take on different intensity levels, and combining them produces a variety of colors. Using direct coding, we may allocate 3 bits for each pixel, with one bit for each primary color. The 3-bit representation allows each primary to vary independently between two intensity levels: 0 (off) or 1 (on). Hence each pixel can take on one of eight colors.

Bit 1:r Bit 2:g Bit 3:b Color name

0 0 0 Black

0 0 1 Blue

0 1 0 Green

0 1 1 Cyan
1 0 0 Red

1 0 1 Magenta

1 1 0 Yellow

1 1 1 White

A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one byte for each primary color. This way, each primary color can have 256 different intensity levels.

Thus a pixel can take on a color from 256 x 256 x 256, or about 16.7 million, possible choices. The 24-bit format is commonly referred to as the true color representation.

The lookup table approach reduces the storage requirement. In this approach pixel values do not code colors directly; instead, they are addresses or indices into a table of color values.

The color of a particular pixel is determined by the color value in the table entry
that the value of the pixel references. Figure shows a look-up table with 256 entries. The
entries have addresses 0 through 255.

Each entry contains a 24-bit RGB color value, and pixel values are now 1 byte. The color of a pixel whose value is i, where 0 <= i <= 255, is determined by the color value in the table entry whose address is i. This reduces the storage requirement of a 1000 x 1000 image to one million bytes plus 768 bytes for the color values in the look-up table.
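A minimal C sketch of this indexing scheme (the array names frame and lut and the helper function pixel_color are illustrative, not part of these notes):

#define WIDTH  1000
#define HEIGHT 1000

/* One 24-bit RGB entry of the look-up table. */
typedef struct { unsigned char r, g, b; } RGB;

RGB lut[256];                        /* 256 entries x 3 bytes = 768 bytes            */
unsigned char frame[HEIGHT][WIDTH];  /* one byte per pixel: an index into the table  */

/* The pixel value is only an index; the actual color is read from the table entry. */
RGB pixel_color(int x, int y)
{
    return lut[frame[y][x]];
}

With this scheme the 1000 x 1000 image needs one million bytes for the pixel indices plus 768 bytes for the table, as stated above.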
THREE DIMENSIONAL VIEWING:
Graphics monitors for the display of three-dimensional scenes have been devised using a technique that reflects a CRT image from a vibrating, flexible mirror. As the varifocal mirror vibrates, it changes focal length. These vibrations are synchronized with the display of an object on a CRT so that each point on the object is reflected from the mirror into a spatial position corresponding to the distance of that point from a specified viewing position.
This allows us to walk around an object or scene and view it from different sides. One such system uses a vibrating mirror to project three-dimensional objects into a volume of roughly 25 cm by 25 cm by 25 cm.
This system is also capable of displaying two-dimensional cross-sectional "slices" of
objects selected at different depths.
Such systems have been used in medical applications and CAT scan devices, in geological applications to analyze topological and seismic data, in design applications involving solid objects, and in three-dimensional simulations of systems such as molecules and terrain.
STEREOSCOPIC AND VIRTUAL-REALITY SYSTEMS
Another technique for representing three-dimensional objects is displaying stereoscopic views.
This method does not produce true three-dimensional images, but it does provide a three-dimensional effect by presenting a different view to each eye of an observer, so that scenes appear to have depth.

To obtain a stereoscopic projection, we first need to obtain two views of a scene, generated from a viewing direction corresponding to each eye (left and right). We can construct the two views as computer-generated scenes with different viewing positions, or we can use a stereo camera pair to photograph some object or scene.
When we simultaneously look at the left view with the left eye and the right view with the right eye, the two views merge into a single image and we perceive a scene with depth. To increase viewing comfort, the areas at the left and right edges of the scene that are visible to only one eye are usually eliminated.
Stereoscopic viewing is also a component in virtual-reality systems, where users can step into a scene and interact with the environment. A headset containing an optical system to generate the stereoscopic views is commonly used in conjunction with interactive input devices to locate and manipulate objects in the scene. A sensing system in the headset keeps track of the viewer's position, so that the front and back of objects can be seen as the viewer "walks through" and interacts with the display.
An interactive virtual-reality environment can also be viewed with stereoscopic glasses and a video monitor, instead of a headset. This provides a means for obtaining a lower-cost virtual-reality system. The tracking device is placed on top of the video display and is used to monitor head movements, so that the viewing position for a scene can be changed as the head position changes.
UNIT-II
RASTER SCAN DISPLAY SYSTEM WITH VIDEO CONTROLLER

In raster scan displays a special area of memory is dedicated to Graphics


only. This memory area is called Frame Buffer.
It holds the set of intensity values for all the screen points.
The video controller retrieves the stored intensity values from frame buffer
and displays them on the screen one row (scan line) at a time , typically 50 times
per second.
Raster display system Architecture
Raster Scan Systems
• In addition to the central processing unit (CPU), a special processor, called the video
controller or display controller, is used to control the operation of the display
device.

Video Controller
A fixed area of the system memory is reserved for the frame buffer, and the video
controller is given direct access to the frame buffer memory.
Frame buffer location, and the corresponding screen positions, are referenced
in Cartesian coordinates.

Scan lines are then labeled from ymax at the top of the screen to 0 at the
bottom. Along each scan line, screen pixel positions are labeled from 0 to xmax.

Simple Organization of the Video Controller


Two registers are used to store the coordinates of the screen pixels.
In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256
levels for each color.
It is necessary to read 24-bits for each pixel from frame buffer. This is very time
consuming.
To avoid this, the video controller uses a Look-Up Table (LUT) to store many entries of pixel values in RGB format.
With this facility, it is now necessary to read only an index into the Look-Up Table from the frame buffer for each pixel.
The specified entry in the Look Up Table is then used to control the intensity or color
of the CRT.
Raster Scan Display System with Display Controller
A raster system containing a separate display processor (graphics controller,
display coprocessor)
The purpose of the DP is to free the CPU from the graphics chores.
Display processor
The display processor performs tasks such as generating various line styles (dashed, dotted or solid), displaying color areas, and performing certain transformations and manipulations on displayed objects.

Random Scan System


Graphic commands are translated by the graphics package into a display file
stored in the system memory.
This file is then accessed by the display processor unit (DPU)(graphic controller)
to refresh the screen.

Random scan monitors draw a picture one line at a time and for this reason are also
referred to as vector displays (or stroke writing or calligraphic displays).
The component lines of a picture can be drawn and refreshed by a random-scan
system in any specified order.

Refresh rate on a random-scan system depends on the number of lines to be displayed.

Picture definition is now stored as a set of line-drawing commands in an area of memory referred to as the refresh display file.
Sometimes the refresh display file is called the display list, display program, or
simply the refresh buffer.
To display a specified picture, the system cycles through the set of commands in the
display file, drawing each component line in turn.
After all line- drawing commands have been processed, the system cycles back to the
first line command in the list.
Random-scan displays are designed to draw all the component lines of a picture 30 to 60 times each second.

INPUT DEVICES
Input devices are the hardware used to transfer input to the computer. The data can be in the form of text, graphics or sound.

Output devices display data from the memory of the computer. Output can be text, numeric data, lines, polygons and other objects.

These Devices include:

1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image scanner

Keyboard:

The most commonly used input device is a keyboard. The data is entered by pressing
the set of keys. All keys are labeled. A keyboard with 101 keys is called a QWERTY
keyboard.

The keyboard has alphabetic as well as numeric keys. Some special keys are also
available.

1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑→←↓
6. Function Keys: F1 F2 F3....F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for
fast entry of numeric data.

Mouse:

A mouse is a pointing device used to position the pointer on the screen. It is a small palm-sized box.

There are two or three depression switches on the top. The movement of the mouse
along the x-axis helps in the horizontal movement of the cursor and the movement along
the y-axis helps in the vertical movement of the cursor on the screen. The mouse cannot be
used to enter text.

Therefore, it is used in conjunction with a keyboard.


Trackball

It is a pointing device, similar to a mouse. It is mainly used in notebook or laptop computers instead of a mouse. It is a ball which is half inserted in a socket, and by moving fingers over the ball, the pointer can be moved.

Spaceball:

It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two.

The movement is recorded by strain gauges, which respond to the pressure applied.

It can be pushed and pulled in various directions. The ball has a diameter of around 7.5 cm. The ball is mounted in the base using rollers. One-third of the ball is inside the box; the rest is outside.

Joystick:

A Joystick is also a pointing device which is used to change cursor position on a


monitor screen.

A joystick is a stick having a spherical ball at both its lower and upper ends, as shown in the figure. The lower spherical ball moves in a socket.

The joystick can be moved in all four directions. The function of a joystick is similar
to that of the mouse. It is mainly used in Computer Aided Designing (CAD) and playing
computer games.
Light Pen

Light Pen (similar to the pen) is a pointing device which is used to select a displayed
menu item or draw pictures on the monitor screen.

It consists of a photocell and an optical system placed in a small tube. When its tip
is moved over the monitor screen, and pen button is pressed, its photocell sensing element
detects the screen location and sends the corresponding signals to the CPU.

Digitizers:

The digitizer is an operator input device, which contains a large, smooth board (its appearance is similar to a mechanical drawing board) and an electronic tracking device, which can be moved over the surface to follow existing lines.

The electronic tracking device contains a switch for the user to record the desired x and y coordinate positions. The coordinates can be entered into the computer memory or stored on an off-line storage medium such as magnetic tape.

Touch Panels:

A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or other object comes in contact with the screen.
When the wave signals are interrupted by some contact with the screen, that location is recorded. Touch screens have long been used in military applications.

Voice Systems (Voice Recognition):

The user inputs data by speaking into a microphone. The simplest form of voice
recognition is a one-word command spoken by one person. Each command is isolated with
pauses between the words.

The voice-system input can be used to initiate graphics operations or to enter data.
These systems operate by matching an input against a predefined dictionary of words and
phrases.

Image Scanner

It is an input device. The data or text is written on paper. The paper is fed to the scanner, and the written information is converted into electronic format, which is stored in the computer.

The input documents can contain text, handwritten material, pictures, etc.

Types of image Scanner:

1. Flat Bed Scanner:


2. Hand Held Scanner

OUTPUT DEVICES (OR) HARD COPY DEVICES

An output device is an electromechanical device which accepts data from a computer and translates it into a form understandable by users.

Following are Output Devices:

1. Printers
2. Plotters
Printers:

Printer is the most important output device, which is used to print data on paper.

Types of Printers: There are many types of printers which are classified on various
criteria as shown in fig:

1. Impact Printers: The printers that print the characters by striking against the
ribbon and onto the papers are known as Impact Printers.

These Printers are of two types:

1. Character Printers
2. Line Printers

2. Non-Impact Printers: The printers that print the characters without striking
against the ribbon and onto the papers are called Non-Impact Printers.

These printers are of two types:

1. Laser Printers
2. Inkjet Printers
Dot Matrix Printers:

A dot matrix printer prints in the form of dots. The printer has a head which contains nine pins. The nine pins are arranged one below the other, and each pin can be activated independently.

All or only some of the needles are activated at a time. When a needle is not activated, its tip stays in the head.

In a nine-pin printer, characters are formed from a 5 * 7 matrix of dots.

Daisy Wheel Printers:

The head lies on a wheel, and the pins corresponding to characters are arranged like the petals of a daisy; that is why it is called a daisy wheel printer.

Drum Printers:

These are line printers, which print one line at a time. They consist of a drum whose shape is cylindrical.

Each band consists of some characters. Each line on the drum consists of 132 characters.

Because there are 96 lines, the total number of characters is 132 * 96 = 12,672.

Chain Printers:

These are also line printers, used to print one line at a time. Basically, the chain consists of links.
The printer can follow any character set style, i.e., 48, 64 or 96 characters.

Non-Impact Printers:
Inkjet Printers:

These printers use a special ink called electrostatic ink. The printer head has a special nozzle which drops ink on the paper. The head contains up to 64 nozzles. The dropped ink is deflected by an electrostatic plate.

The plate is fixed outside the nozzle. The deflected ink settles on the paper.

Laser Printers:

These are non-impact page printers. They use laser light to produce the dots needed to form the characters to be printed on a page, hence the name laser printers.

The output is generated in the following steps:

Step1: The bits of data sent by processing unit act as triggers to turn the laser beam on
& off.

Step2: The output device has a drum which is cleared & is given a positive electric
charge.

Step3: The laser exposed parts of the drum attract an ink powder known as toner.

Step4: The attracted ink powder is transferred to paper.

Step5: The ink particles are permanently fixed to the paper by using either heat or
pressure technique.
Step6: The drum rotates back to the cleaner where a rubber blade cleans off the excess
ink & prepares the drum to print the next page.

PLOTTERS

Plotters are a special type of output device, suitable for applications such as:

1. Architectural plan of the building.


2. CAD applications like the design of mechanical components of aircraft.
3. Many engineering applications.

Advantage:
1. It can produce high-quality output on large sheets.
2. It is used to provide the high precision drawing.
3. It can produce graphics of various sizes.
4. The speed of producing output is high.

Drum Plotter:

It consists of a drum. The paper on which the design is made is kept on the drum, and the drum can rotate in both directions. A plotter comprises one or more pens and penholders.
Line Drawing Algorithms

The Cartesian slope-intercept equation for a straight line is


y = m .x + b
where m is the slope of the line and b is the y intercept.
Given that the two endpoints of a line segment are specified at positions
(x1,y1) and (x2,y2) as in figure

Figure : Line Path between endpoint positions (x1,y1) and (x2,y2)


We can determine the values for the slope m and y intercept b with the following
calculations
m = (y2 - y1) / (x2 - x1)
b = y1 - m.x1
For any given x interval Δx along a line, we can compute the corresponding y
interval Δ y
Δy= m Δx
We can obtain the x interval Δx corresponding to a specified Δy as
Δ x = Δ y/m
For lines with slope magnitudes |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy.
For lines whose slopes have magnitudes |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx.
For lines with |m| = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.
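For example (the endpoints here are chosen only for illustration), a line from (1,1) to (5,4) has slope m = (4 - 1) / (5 - 1) = 0.75 and y intercept b = 1 - 0.75(1) = 0.25, so an x interval of Δx = 2 corresponds to a y interval of Δy = m Δx = 1.5.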
Digital Differential Analyzer (DDA) Algorithm

The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx.
The line is sampled at unit intervals in one coordinate, and the corresponding integer values nearest the line path are determined for the other coordinate.
Consider a line with positive slope. If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as
yk+1 = yk + m
For lines with a positive slope greater than 1 we reverse the roles of x and y, (Δy=1) and
calculate each succeeding x value as
xk+1 = xk + (1/m)
These equations are based on the assumption that lines are to be processed from the left endpoint to the right endpoint.
If this processing is reversed, so that the starting endpoint is at the right, then we have Δx = -1 and
yk+1 = yk - m
or, when the slope is greater than 1, Δy = -1 with
xk+1 = xk - (1/m)
If the absolute value of the slope is less than 1 and the start endpoint is at the left, we set Δx = 1 and calculate y values with the first equation above.
Algorithm
#include <stdlib.h>              /* for abs() */
#define ROUND(a) ((int)(a+0.5))
void setpixel (int x, int y);    /* assumed to be supplied by the graphics package */
void lineDDA (int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;
    if (abs(dx) > abs(dy)) steps = abs(dx);
    else steps = abs(dy);
    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;
    setpixel (ROUND(x), ROUND(y));
    for (k = 0; k < steps; k++)
    {
        x += xIncrement;
        y += yIncrement;
        setpixel (ROUND(x), ROUND(y));
    }
}
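A short usage sketch for the routine above, assuming it is compiled together with a setpixel implementation; the stub below, which simply prints the pixel coordinates, is illustrative only:

#include <stdio.h>

void lineDDA (int xa, int ya, int xb, int yb);   /* the routine defined above */

/* Illustrative stub: a real setpixel would write into the frame buffer. */
void setpixel (int x, int y)
{
    printf("(%d, %d)\n", x, y);
}

int main(void)
{
    lineDDA(2, 3, 10, 8);   /* plots the pixels of the line from (2,3) to (10,8) */
    return 0;
}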

Algorithm Description:
Step 1 : Accept Input as two endpoint pixel positions
Step 2: Horizontal and vertical differences between the endpoint positions are assigned to
parameters dx and dy (Calculate dx=xb-xa and dy=yb-ya).
Step 3: The difference with the greater magnitude determines the value of
parameter steps.
Step 4 : Starting with pixel position (xa, ya), determine the offset needed at
each step to generate the next pixel position along the line path.
Step 5: loop the following process for steps number of times
a. Use a unit of increment or decrement in the x and y direction
b. if xa is less than xb the values of increment in the x and y directions are 1 and m
c. if xa is greater than xb then the decrements -1 and – m are used.
Advantages of DDA Algorithm
1. It is the simplest algorithm
2. It is a faster method for calculating pixel positions than directly using the line equation y = m.x + b
Disadvantages of DDA Algorithm
1. Floating point arithmetic in DDA algorithm is still time-consuming
2. End point accuracy is poor.

BRESENHAM'S LINE ALGORITHM

This algorithm is used for scan converting a line. It was developed by Bresenham. It
is an efficient method because it involves only integer addition, subtractions, and
multiplication operations. These operations can be performed very rapidly so lines can be
generated quickly.

To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0,y0) of a given line, We step to each successive
column (x position) and plot the pixel whose scan-line y value is closest to the line path.
Assuming we have determined that the pixel at (xk,yk) is to be displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1: the choices are (xk+1,yk) and (xk+1,yk+1).
At sampling position xk+1, we label vertical pixel separations from the mathematical
line path as d1 and d2. The y coordinate on the mathematical line at pixel column position
xk+1 is calculated as
y =m(xk+1)+b (1) Then
d1 = y-yk = m(xk+1)+b-yk
d2 = (yk+1)-y = yk+1-m(xk+1)-b
To determine which of the two pixel is closest to the line path, efficient test that is based on
the difference between the two pixel separations
d1- d2 = 2m(xk+1)-2yk+2b-1 ____________ (2)
A decision parameter Pk for the kth step in the line algorithm can be obtained by
rearranging equation (2).
By substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining the decision parameter as
pk = Δx(d1 - d2) = 2Δy.xk - 2Δx.yk + c ___________ (3)
The sign of pk is the same as the sign of d1 - d2, since Δx > 0. Parameter c is constant and has the value 2Δy + Δx(2b - 1), which is independent of the pixel position and will be eliminated in the recursive calculations for Pk.
If the pixel at yk is "closer" to the line path than the pixel at yk+1 (that is, d1 < d2), then the decision parameter Pk is negative.
In this case, plot the lower pixel, otherwise plot the upper pixel. Coordinate changes
along the line occur in unit steps in either the x or y directions.
To obtain the values of successive decision parameters, we use incremental integer calculations. At step k+1, the decision parameter is evaluated from equation (3) as
Pk+1 = 2Δy xk+1-2Δx. yk+1 +c
Subtracting the equation (3) from the preceding equation
Pk+1 - Pk = 2Δy (xk+1 - xk) -2Δx(yk+1 - yk)
But xk+1= xk+1
so that Pk+1 = Pk+ 2Δy-2Δx(yk+1 - yk) (4)
Where the term yk+1-yk is either 0 or 1 depending on the sign of parameter Pk This recursive
calculation of decision parameter is performed at each integer x position, starting at the left
coordinate endpoint of the line.
The first parameter P0 is evaluated from equation at the starting pixel position (x0,y0)
and with m evaluated as Δy/Δx
P0 = 2Δy-Δx (5)
Bresenham’s line drawing for a line with a positive slope less than 1 in the following outline
of the algorithm.
The constants 2Δy and 2Δy-2Δx are calculated once for each line to be scan
converted.
Bresenham’s line Drawing Algorithm for |m| < 1

1. Input the two line endpoints and store the left end point in (x0,y0)
2. load (x0,y0) into frame buffer, ie. Plot the first point.
3. Calculate the constants Δx, Δy, 2Δy and obtain the starting value for the decision
parameter as P0 = 2Δy-Δx
4. At each xk along the line, starting at k=0 perform the following test
If Pk< 0, the next point to plot is(xk+1,yk) and Pk+1 = Pk + 2Δy otherwise, the next point to plot
is (xk+1,yk+1) and Pk+1 = Pk + 2Δy - 2Δx
5. Perform step4 Δx times.
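As a worked example (the endpoints are chosen here only for illustration), consider a line with endpoints (20,10) and (30,18). Then Δx = 10, Δy = 8, the initial decision parameter is P0 = 2Δy - Δx = 6, and the increments are 2Δy = 16 and 2Δy - 2Δx = -4. After plotting the first point at (20,10), the successive decision parameters and plotted pixels are:

k = 0: P0 = 6, plot (21,11)
k = 1: P1 = 2, plot (22,12)
k = 2: P2 = -2, plot (23,12)
k = 3: P3 = 14, plot (24,13)
k = 4: P4 = 10, plot (25,14)
k = 5: P5 = 6, plot (26,15)
k = 6: P6 = 2, plot (27,16)
k = 7: P7 = -2, plot (28,16)
k = 8: P8 = 14, plot (29,17)
k = 9: P9 = 10, plot (30,18)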
Implementation of Bresenham Line drawing Algorithm
#include <stdlib.h>              /* for abs() */
void setPixel (int x, int y);    /* assumed to be supplied by the graphics package */
void lineBres (int xa, int ya, int xb, int yb)
{
    int dx = abs(xa - xb), dy = abs(ya - yb);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyDx = 2 * (dy - dx);
    int x, y, xEnd;
    /* Determine which point to use as start, which as end */
    if (xa > xb)
    {
        x = xb;
        y = yb;
        xEnd = xa;
    }
    else
    {
        x = xa;
        y = ya;
        xEnd = xb;
    }
    setPixel(x, y);
    while (x < xEnd)
    {
        x++;
        if (p < 0)
            p += twoDy;
        else
        {
            y++;
            p += twoDyDx;
        }
        setPixel(x, y);
    }
}

Advantages
Algorithm is Fast
Uses only integer calculations
Disadvantages
It is meant only for basic line drawing.
Circle-Generating Algorithms
General function is available in a graphics library for displaying various kinds of
curves, including circles and ellipses.
Properties of a circle: A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).
This distance relationship is expressed by the pythagorean theorem in Cartesian
coordinates as
(x – xc)2 + (y – yc) 2 = r2 ----------------(1)
Use above equation to calculate the position of points on a circle circumference by
stepping along the x axis in unit steps from xc-r to xc+r and calculating the corresponding y
values at each position as
y = yc ± sqrt(r2 – (xc – x)2) ------------- (2)
This is not the best method for generating a circle for the following reason
o Considerable amount of computation
o Spacing between plotted pixels is not uniform
To eliminate the unequal spacing is to calculate points along the circle boundary using
polar coordinates r and θ.
Expressing the circle equation in parametric polar from yields the pair of equations
x = xc + rcos θ
y = yc + rsin θ
When a display is generated with these equations using a fixed angular step size, a
circle is plotted with equally spaced points along the circumference.
Set the angular step size at 1/r. This plots pixel positions that are approximately one
unit apart.
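A short C sketch of this polar-coordinate method, assuming the setpixel routine used by the other listings in these notes (the function name circlePolar is illustrative):

#include <math.h>

void setpixel (int x, int y);   /* assumed to be supplied by the graphics package */

/* Plot a circle of radius r centered at (xc, yc) from its parametric (polar) form.
   The angular step of 1/r keeps successive points roughly one unit apart. */
void circlePolar (int xc, int yc, double r)
{
    double twoPi = 6.283185307179586;
    double theta;
    for (theta = 0.0; theta < twoPi; theta += 1.0 / r)
        setpixel((int)(xc + r * cos(theta) + 0.5),
                 (int)(yc + r * sin(theta) + 0.5));
}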
The shape of the circle is similar in each quadrant.
Circle sections in adjacent octants within one quadrant are symmetric with respect to the 45° line dividing the two octants, and a point at position (x, y) on a one-eighth circle sector is mapped into the seven circle points in the other octants of the xy plane.
We can therefore generate all pixel positions around a circle by calculating only the points within the sector from x = 0 to x = y. The slope of the curve in this octant has a magnitude less than or equal to 1.0: at x = 0 the circle slope is 0, and at x = y the slope is -1.0.
Midpoint circle Algorithm:
As in the raster line algorithm, we sample at unit intervals and determine the closest pixel position to the specified circle path at each step. For a given radius r and screen center position (xc, yc), we first set up the algorithm to calculate pixel positions around a circle path centered at the coordinate origin, and then move each calculated position (x, y) to its proper screen position by adding xc to x and yc to y.
To apply the midpoint method we define a circle function as
fcircle(x,y) = x2+y2-r2
Any point (x,y) on the boundary of the circle with radius r satisfies the equation
fcircle (x,y)=0.
If the point is in the interior of the circle, the circle function is negative. And if the
point is outside the circle the, circle function is positive
fcircle (x,y) <0, if (x,y) is inside the circle boundary ,
fcircle(x,y)=0, if (x,y) is on the circle boundary,
fcircle(x,y)>0, if (x,y) is outside the circle boundary.

The tests in the above equation are performed for the mid-positions between pixels near the circle path at each sampling step. The circle function is the decision parameter in the midpoint algorithm.
Midpoint between candidate pixels at sampling position xk+1 along a circular path. Fig
-1 shows the midpoint between the two candidate pixels at sampling position xk+1.
Assuming we have just plotted the pixel at (xk,yk), we next need to determine whether the pixel at position (xk+1,yk) or the one at position (xk+1,yk-1) is closer to the circle. Our decision parameter is the circle function evaluated at the midpoint between these two pixels
Pk = fcircle (xk+1, yk - 1/2)
If Pk<0, this midpoint is inside the circle and the pixel on scan line yk is closer to the
circle boundary.
Otherwise the mid position is outside or on the circle boundary and select the pixel on
scan line yk -1.

Midpoint circle Algorithm


1. Input radius r and circle center (xc,yc) and obtain the first point on the circumference of the
circle centered on the origin as
(x0,y0) = (0,r)
2. Calculate the initial value of the decision parameter as
P0=(5/4)-r
3. At each xk position, starting at k=0, perform the following test:
If Pk<0, the next point along the circle centered on (0,0) is (xk+1, yk) and Pk+1 = Pk + 2xk+1 + 1.
Otherwise, the next point along the circle is (xk+1, yk-1) and Pk+1 = Pk + 2xk+1 + 1 - 2yk+1, where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk - 2.
4.Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x,y) onto the circular path centered at (x c,yc) and plot
the coordinate values. x=x+xc y=y+yc
6. Repeat step 3 through 5 until x>=y.
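As a worked example (the radius is chosen only for illustration), consider a circle of radius r = 10 centered on the origin. The initial decision parameter is P0 = 1 - r = -9 (the integer form of 5/4 - r). Starting from (0,10), the successive decision parameters and pixel positions along the octant from x = 0 to x = y are:

k = 0: P0 = -9, plot (1,10)
k = 1: P1 = -6, plot (2,10)
k = 2: P2 = -1, plot (3,10)
k = 3: P3 = 6, plot (4,9)
k = 4: P4 = -3, plot (5,9)
k = 5: P5 = 8, plot (6,8)
k = 6: P6 = 5, plot (7,7)

The remaining points around the circle are obtained from the symmetry of step 4 and the translation of step 5.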
Implementation of Midpoint Circle Algorithm
void setpixel (int x, int y);   /* assumed to be supplied by the graphics package */
void circlePlotPoints (int, int, int, int);
void circleMidpoint (int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int p = 1 - radius;
    /* Plot first set of points */
    circlePlotPoints (xCenter, yCenter, x, y);
    while (x < y)
    {
        x++;
        if (p < 0)
            p += 2 * x + 1;
        else
        {
            y--;
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints (xCenter, yCenter, x, y);
    }
}
void circlePlotPoints (int xCenter, int yCenter, int x, int y)
{
    setpixel (xCenter + x, yCenter + y);
    setpixel (xCenter - x, yCenter + y);
    setpixel (xCenter + x, yCenter - y);
    setpixel (xCenter - x, yCenter - y);
    setpixel (xCenter + y, yCenter + x);
    setpixel (xCenter - y, yCenter + x);
    setpixel (xCenter + y, yCenter - x);
    setpixel (xCenter - y, yCenter - x);
}
Ellipse-Generating Algorithms
An ellipse is an elongated circle. Therefore, elliptical curves can be generated by
modifying circle-drawing procedures to take into account the different dimensions of an
ellipse along the major and minor axes.
Properties of ellipses
An ellipse can be given in terms of the distances from any point on the ellipse to two
fixed positions called the foci of the ellipse. The sum of these two distances is the same
values for all points on the ellipse.
If the distances to the two focus positions from any point P=(x,y) on the ellipse are
labeled d1 and d2, then the general equation of an ellipse can be stated as
d1+d2=constant

Expressing distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2),
sqrt((x - x1)2 + (y - y1)2) + sqrt((x - x2)2 + (y - y2)2) = constant

By squaring this equation isolating the remaining radical and squaring again. The
general ellipse equation in the form
Ax2+By2+Cxy+Dx+Ey+F=0
The coefficients A,B,C,D,E, and F are evaluated in terms of the focal coordinates and
the dimensions of the major and minor axes of the ellipse.
The major axis is the straight line segment extending from one side of the ellipse to
the other through the foci.
The minor axis spans the shorter dimension of the ellipse, perpendicularly bisecting
the major axis at the halfway position (ellipse center) between the two foci.
An interactive method for specifying an ellipse in an arbitrary orientation is to input
the two foci and a point on the ellipse boundary.
Ellipse equations are simplified if the major and minor axes are oriented to align with
the coordinate axes.
The major and minor axes oriented parallel to the x and y axes parameter rx for this
example labels the semi major axis and parameter ry labels the semi minor axis
((x-xc)/rx)2 +((y-yc)/ry)2=1
Using polar coordinates r and θ, the ellipse in standard position can be described with the parametric equations
x = xc + rx cos θ
y = yc + ry sin θ
Angle θ called the eccentric angle of the ellipse is measured around the perimeter of a
bounding circle.
We must calculate pixel positions along the elliptical arc throughout one quadrant,
and then we obtain positions in the remaining three quadrants by symmetry

Midpoint ellipse Algorithm


The midpoint ellipse method is applied throughout the first quadrant in two parts.
The below figure show the division of the first quadrant according to the slope of an
ellipse with rx<ry.

We take unit steps in the x direction where the slope of the curve has a magnitude less than 1 (region 1), and unit steps in the y direction where the slope has a magnitude greater than 1 (region 2).
Region 1 and 2 can be processed in various ways
1. Start at position (0,ry) and step clockwise along the elliptical path in the first quadrant
shifting from unit steps in x to unit steps in y when the slope becomes less than -1
2. Start at (rx,0) and select points in a counter clockwise order.
2.1 Shifting from unit steps in y to unit steps in x when the slope becomes greater than
-1.0
2.2 Using parallel processors calculate pixel positions in the two regions
simultaneously
3. Start at (0, ry) and step along the ellipse path in clockwise order throughout the first quadrant.
We define the ellipse function with (xc, yc) = (0, 0) as
fellipse(x,y) = ry2 x2 + rx2 y2 - rx2 ry2
which has the following properties:
fellipse(x,y) <0, if (x,y) is inside the ellipse boundary,
fellipse(x,y) =0, if(x,y) is on ellipse boundary,
fellipse(x,y) >0, if(x,y) is outside the ellipse boundary
Thus, the ellipse function fellipse(x,y) serves as the decision parameter in the
midpoint algorithm.
Starting at (0,ry), we take unit steps in the x direction until to reach the boundary
between region1 and region 2. Then switch to unit steps in the y direction over the remainder
of the curve in the first quadrant.
At each step to test the value of the slope of the curve.
The ellipse slope is calculated as dy/dx = -(2ry2 x) / (2rx2 y).
At the boundary between region 1 and region 2, dy/dx = -1.0 and 2ry2 x = 2rx2 y.
The following figure shows the midpoint between two candidate pixels at sampling position
xk+1 in the first region

To determine the next position along the ellipse path by evaluating the decision
parameter at this mid point
P1k = fellipse(xk+1,yk-1/2)
if P1k <0, the midpoint is inside the ellipse and the pixel on scan line yk is closer to
the ellipse boundary.
Otherwise the midpoint is outside or on the ellipse boundary and select the pixel on
scan line yk-1
In region 1 the initial value of the decision parameter is obtained by evaluating the
ellipse function at the start position (x0,y0) = (0,ry)
P10 = fellipse(1,ry -½ )
Over region 2, we sample at unit steps in the negative y direction and the midpoint is now
taken between horizontal pixels at each step. For this region, the decision parameter is
evaluated as
P2k = fellipse(xk+½ ,yk- 1)
1. If P2k >0, the mid-point position is outside the ellipse boundary, and select the
pixel at xk.
2. If P2k <=0, the mid-point is inside the ellipse boundary and select pixel position
xk+1.

Midpoint Ellipse Algorithm


1. Input rx, ry and ellipse center (xc,yc) and obtain the first point on an ellipse centered on the
origin as
(x0,y0) = (0,ry)
2. Calculate the initial value of the decision parameter in region 1 as
P10 = ry2 - rx2 ry + (1/4) rx2
3. At each xk position in region 1, starting at k=0, perform the following test:
If P1k<0, the next point along the ellipse centered on (0,0) is (xk+1, yk) and P1k+1 = P1k + 2ry2xk+1 + ry2
Otherwise, the next point is (xk+1, yk-1) and P1k+1 = P1k + 2ry2xk+1 - 2rx2yk+1 + ry2
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
P20 = ry2(x0 + 1/2)2 + rx2(y0 - 1)2 - rx2ry2
5. At each yk position in region 2, starting at k=0, perform the following test:
If P2k>0, the next point along the ellipse centered on (0,0) is (xk, yk-1) and P2k+1 = P2k - 2rx2yk+1 + rx2
Otherwise, the next point is (xk+1, yk-1) and P2k+1 = P2k + 2ry2xk+1 - 2rx2yk+1 + rx2
6. Determine symmetry points in the other three quadrants.
7. Move each calculate pixel position (x,y) onto the elliptical path centered on (xc,yc) and
plot the coordinate values
x=x+xc, y=y+yc
8. Repeat the steps for region 1 until 2ry2x >= 2rx2y.
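As a worked example (the semi-axes are chosen only for illustration), consider an ellipse with rx = 8 and ry = 6, so ry2 = 36, rx2 = 64, 2ry2 = 72 and 2rx2 = 128. In region 1, P10 = ry2 - rx2 ry + (1/4) rx2 = 36 - 384 + 16 = -332, and starting from (0,6) the successive decision parameters and pixel positions are:

k = 0: P10 = -332, plot (1,6)
k = 1: P11 = -224, plot (2,6)
k = 2: P12 = -44, plot (3,6)
k = 3: P13 = 208, plot (4,5)
k = 4: P14 = -108, plot (5,5)
k = 5: P15 = 288, plot (6,4)
k = 6: P16 = 244, plot (7,3)

At (7,3) we have 2ry2x = 504 > 2rx2y = 384, so we switch to region 2 with P20 = fellipse(7.5, 2) = -23, which gives the points (8,2), (8,1) and (8,0). The positions in the other three quadrants follow by symmetry.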

Implementation of Midpoint Ellipse drawing


#define ROUND(a) ((int)(a+0.5))
void setpixel (int x, int y);   /* assumed to be supplied by the graphics package */
void ellipsePlotPoints (int, int, int, int);
void ellipseMidpoint (int xCenter, int yCenter, int Rx, int Ry)
{
    int Rx2 = Rx * Rx;
    int Ry2 = Ry * Ry;
    int twoRx2 = 2 * Rx2;
    int twoRy2 = 2 * Ry2;
    int p;
    int x = 0;
    int y = Ry;
    int px = 0;
    int py = twoRx2 * y;
    /* Plot the first set of points */
    ellipsePlotPoints (xCenter, yCenter, x, y);
    /* Region 1 */
    p = ROUND(Ry2 - (Rx2 * Ry) + (0.25 * Rx2));
    while (px < py)
    {
        x++;
        px += twoRy2;
        if (p < 0)
            p += Ry2 + px;
        else
        {
            y--;
            py -= twoRx2;
            p += Ry2 + px - py;
        }
        ellipsePlotPoints (xCenter, yCenter, x, y);
    }
    /* Region 2 */
    p = ROUND(Ry2 * (x + 0.5) * (x + 0.5) + Rx2 * (y - 1) * (y - 1) - Rx2 * Ry2);
    while (y > 0)
    {
        y--;
        py -= twoRx2;
        if (p > 0)
            p += Rx2 - py;
        else
        {
            x++;
            px += twoRy2;
            p += Rx2 - py + px;
        }
        ellipsePlotPoints (xCenter, yCenter, x, y);
    }
}
void ellipsePlotPoints (int xCenter, int yCenter, int x, int y)
{
    setpixel (xCenter + x, yCenter + y);
    setpixel (xCenter - x, yCenter + y);
    setpixel (xCenter + x, yCenter - y);
    setpixel (xCenter - x, yCenter - y);
}
UNIT-III
Basic transformation:

Introduction of Transformations:

Computer Graphics provide the facility of viewing object from different angles. The
architect can study building from different angles i.e.

1. Front elevation
2. Side elevation
3. Top plan

The purpose of using computers for drawing is to give the user the facility to view the
object from different angles and to enlarge or reduce its scale or shape. This operation is
called a Transformation.

It is possible to combine two transformations; after concatenation a single transformation

is obtained. For example, if A is a transformation for translation and B is a transformation
that performs scaling, the combination of the two is C = AB. So C is obtained by the
concatenation property.

Types of Transformations:

1. Translation
2. Scaling
3. Rotating

Other transformation

4. Reflection
5. Shearing

Translation

The straight-line movement of an object from one position to another is called
Translation.

Here the object is positioned from one coordinate location to another.

Translation of point:

To translate a point from coordinate position (x, y) to another (x1, y1), we add the
translation distances Tx and Ty algebraically to the original coordinates.

In general equations,
P’=P+T,

P’=new position, P=old position, T-translation

x1=x+Tx
y1=y+Ty
The translation pair (Tx,Ty) is called as shift vector.

Translation is a movement of objects without deformation. Every position or point is


translated by the same amount. When a straight line is translated, it is redrawn between its
translated endpoints.

For translating a polygon, each vertex of the polygon is moved to a new position.

Let P be a point with coordinates (x, y). After translation it becomes (x1, y1).

Matrix for Translation:
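The translation matrix itself appears only as a figure that is not reproduced in this copy. As a
minimal sketch (not the notes' own code), translating every vertex of a polygon by the shift
vector (Tx, Ty) can be written in C as:

#include <stdio.h>

/* Illustrative sketch, not from the notes.
   Translate every vertex of a polygon by the shift vector (tx, ty). */
void translatePolygon(float x[], float y[], int n, float tx, float ty)
{
    int i;
    for (i = 0; i < n; i++) {
        x[i] += tx;   /* x1 = x + Tx */
        y[i] += ty;   /* y1 = y + Ty */
    }
}

int main(void)
{
    float x[3] = {0.0f, 4.0f, 2.0f};
    float y[3] = {0.0f, 0.0f, 3.0f};
    int i;
    translatePolygon(x, y, 3, 5.0f, 2.0f);   /* shift vector (5, 2) */
    for (i = 0; i < 3; i++)
        printf("(%.1f, %.1f)\n", x[i], y[i]);
    return 0;
}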

Scaling:
It is used to alter or change the size of objects.

The change is done using scaling factors. There are two scaling factors, i.e. Sx in the
x-direction and Sy in the y-direction.

x’= x.Sx
y’ = y.Sy
Scaling factor Sx scales object in x direction while Sy scales in y direction.
The transformation equation in matrix form
[x']   [sx  0 ] [x]
[y'] = [0   sy] [y]        or P' = S . P

where S is the 2 x 2 scaling matrix

Fig: Turning a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1.

Fig: Reduced size and moved closer to the coordinate origin


Any positive numeric values are valid for scaling factors sx and sy. Values less than 1
reduce the size of the objects and values greater than 1 produce an enlarged object.
There are two types of Scaling. They are
o Uniform scaling
o Non Uniform Scaling
To get uniform scaling it is necessary to assign same value for sx and sy.
Unequal values for sx and sy result in a non uniform scaling.
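As a minimal sketch (the scale factors used here are illustrative values, not from the notes),
scaling a point about the origin looks like this in C:

#include <stdio.h>

/* Illustrative sketch: scale a point (x, y) about the origin. */
void scalePoint(float *x, float *y, float sx, float sy)
{
    *x *= sx;   /* x1 = x . Sx */
    *y *= sy;   /* y1 = y . Sy */
}

int main(void)
{
    float x = 3.0f, y = 2.0f;
    scalePoint(&x, &y, 2.0f, 1.0f);     /* non-uniform scaling: sx != sy */
    printf("(%.1f, %.1f)\n", x, y);     /* prints (6.0, 2.0) */
    return 0;
}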

Rotation:

It is a process of changing the angle of the object. Rotation can be clockwise or


anticlockwise. For rotation, we have to specify the angle of rotation and rotation point.
Rotation point is also called a pivot point. It is print about which object is rotated.

Types of Rotation:

1. Clockwise
2. Counterclockwise (anticlockwise)
A positive value of the rotation angle rotates an object in a counterclockwise
(anticlockwise) direction.

A negative value of the rotation angle rotates an object in a clockwise direction.

Rotation of a point from position (x,y) to position (x’,y’) through angle θ relative to
coordinate origin The transformation equations for rotation of a point position P when the
pivot point is at coordinate origin.

In the figure, r is the constant distance of the point from the origin, Ф is the original
angular position of the point from the horizontal, and θ is the rotation angle.
The transformed coordinates in terms of angle θ and Ф
x’ = rcos(θ+Ф) = rcosθcosФ – rsinθsinФ
y’ = rsin(θ+Ф) = rsinθcosФ + rcosθsinФ

The original coordinates of the point in polar coordinates x = rcosФ, y = rsinФ the
transformation equation for rotating a point at position (x,y) through an angle θ about origin
x’ = xcosθ – ysinθ
y’ = xsinθ + ycosθ

Rotation equation
P’= R . P
Rotation matrix R = [cosθ  -sinθ]
                    [sinθ   cosθ]

In matrix form,
[x']   [cosθ  -sinθ] [x]
[y'] = [sinθ   cosθ] [y]
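A minimal C sketch of these rotation equations, extended to an arbitrary pivot point (xp, yp)
by translating to the origin, rotating, and translating back (the pivot-point handling is an
assumption consistent with the discussion above, not code from the notes):

#include <stdio.h>
#include <math.h>

/* Illustrative sketch: rotate (x, y) counterclockwise by theta (radians)
   about the pivot point (xp, yp). */
void rotatePoint(float *x, float *y, float xp, float yp, float theta)
{
    float dx = *x - xp;                               /* translate pivot to origin */
    float dy = *y - yp;
    float xr = dx * cosf(theta) - dy * sinf(theta);   /* x1 = x cos - y sin */
    float yr = dx * sinf(theta) + dy * cosf(theta);   /* y1 = x sin + y cos */
    *x = xr + xp;                                     /* translate back */
    *y = yr + yp;
}

int main(void)
{
    float x = 2.0f, y = 0.0f;
    rotatePoint(&x, &y, 0.0f, 0.0f, 3.14159265f / 2.0f);  /* 90 degrees about origin */
    printf("(%.2f, %.2f)\n", x, y);                       /* approximately (0, 2) */
    return 0;
}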

Other transformation:

Reflection
Shearing

Reflection:
It is a transformation which produces a mirror image of an object. The mirror image can be
either about the x-axis or the y-axis. The object is rotated by 180°.

Types of Reflection:

1. Reflection about the x-axis


2. Reflection about the y-axis
3. Reflection about an axis perpendicular to xy plane and passing through
the origin
4. Reflection about line y=x

Reflection about x-axis: The object can be reflected about x-axis with the help of the
following matrix

In this transformation value of x will remain same whereas the value of y will become
negative. Following figures shows the reflection of the object axis. The object will lie another
side of the x-axis.

Reflection about y-axis: The object can be reflected about y-axis with the help of
following transformation matrix.

Here the values of x will be reversed, whereas the value of y will remain the same.
The object will lie another side of the y-axis.

The following figure shows the reflection about the y-axis


Reflection about an axis perpendicular to xy plane and passing through origin:
In the matrix of this transformation is given below

In this value of x and y both will be reversed. This is also called as half revolution
about the origin.

Reflection about line y=x: The object may be reflected about line y = x with the help
of following transformation matrix.
First of all, the object is rotated by 45° in the clockwise direction. Then reflection is done
about the x-axis. The last step rotates the line y = x back to its original position, i.e. a
counterclockwise rotation of 45°.

Shearing:

It is a transformation which changes the shape of an object: the layers of the object

slide over one another. The shear can be in one direction or in two directions.

Shearing in the X-direction: In this horizontal shearing sliding of layers occur.


The homogeneous matrix for shearing in the x-direction is shown below:
The x-shear equation is
x' = x + Shx . y
y' = y
Shearing in the Y-direction: Here shearing is done by sliding along vertical or y-
axis.

The y-shear equation is

y' = y + Shy . x
x' = x

Shearing in X-Y directions: Here layers will be slided in both x as well as y


direction. The sliding will be in horizontal as well as vertical direction. The shape of the
object will be distorted. The matrix of shear in both directions is given by:
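The shear matrices are given only as figures that are not reproduced here. A minimal sketch of
applying the x-direction shear equations (x1 = x + Shx * y, y1 = y) directly to a point:

#include <stdio.h>

/* Illustrative sketch: shear a point in the x direction relative to the x axis. */
void shearX(float *x, float *y, float shx)
{
    *x = *x + shx * (*y);   /* x1 = x + Shx * y */
    /* y1 = y (unchanged)  */
}

int main(void)
{
    float x = 1.0f, y = 2.0f;
    shearX(&x, &y, 0.5f);             /* shx = 0.5 is an illustrative value */
    printf("(%.1f, %.1f)\n", x, y);   /* prints (2.0, 2.0) */
    return 0;
}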

Matrix Representation of 2D Transformation


Homogeneous Coordinates

The rotation of a point, straight line or an entire image on the screen, about a
point other than origin, is achieved by first moving the image until the point of rotation
occupies the origin, then performing rotation, then finally moving the image to its
original position.

Example of representing coordinates into a homogeneous coordinate system: For two-


dimensional geometric transformation, we can choose homogeneous parameter h to any non-
zero value.

For our convenience take it as one. Each two-dimensional position is then represented
with homogeneous coordinates (x, y, 1).

 h-homogeneous dummy coordinate

Following are matrix for two-dimensional transformation in homogeneous


coordinate:
The moving of an image from one place to another in a straight line is called a
translation. A translation may be done by adding or subtracting to each point, the amount,
by which picture is required to be shifted.

Translation of a point by adding to its coordinates cannot be combined with other

transformations by simple matrix multiplication. Such a combination is essential if we
wish to rotate an image about a point other than the origin, by a translation, a rotation and
again a translation.

To combine these three transformations into a single transformation, homogeneous


coordinates are used. In homogeneous coordinate system, two-dimensional coordinate
positions (x, y) are represented by triple-coordinates.
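A minimal sketch of why homogeneous coordinates help: with 3 x 3 matrices, translation and
rotation can be concatenated into one composite matrix, so rotation about an arbitrary point
becomes translate-rotate-translate combined by matrix multiplication. The helper names and the
row-major layout below are illustrative assumptions; the rightmost factor in the product is the
first transformation applied.

#include <stdio.h>
#include <math.h>

typedef float Mat3[3][3];   /* illustrative 3x3 homogeneous matrix type */

/* c = a * b */
void matMul(Mat3 c, Mat3 a, Mat3 b)
{
    int i, j, k;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            c[i][j] = 0.0f;
            for (k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

void setTranslate(Mat3 m, float tx, float ty)
{
    Mat3 t = {{1, 0, tx}, {0, 1, ty}, {0, 0, 1}};
    int i, j;
    for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) m[i][j] = t[i][j];
}

void setRotate(Mat3 m, float th)
{
    Mat3 r = {{cosf(th), -sinf(th), 0}, {sinf(th), cosf(th), 0}, {0, 0, 1}};
    int i, j;
    for (i = 0; i < 3; i++) for (j = 0; j < 3; j++) m[i][j] = r[i][j];
}

/* Apply m to the homogeneous point (x, y, 1). */
void apply(Mat3 m, float *x, float *y)
{
    float nx = m[0][0] * (*x) + m[0][1] * (*y) + m[0][2];
    float ny = m[1][0] * (*x) + m[1][1] * (*y) + m[1][2];
    *x = nx; *y = ny;
}

int main(void)
{
    Mat3 toOrigin, rot, back, tmp, composite;
    float x = 3.0f, y = 2.0f;

    /* Rotate 90 degrees about the pivot point (2, 2). */
    setTranslate(toOrigin, -2.0f, -2.0f);
    setRotate(rot, 3.14159265f / 2.0f);
    setTranslate(back, 2.0f, 2.0f);

    /* Composite = back * rot * toOrigin (rightmost applied first). */
    matMul(tmp, rot, toOrigin);
    matMul(composite, back, tmp);

    apply(composite, &x, &y);
    printf("(%.2f, %.2f)\n", x, y);   /* approximately (2, 3) */
    return 0;
}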

Two dimensional viewing


The viewing pipeline
A world coordinate area selected for display is called a window.
An area on a display device to which a window is mapped is called a view port. The
window defines ‘what’ is to be viewed, the view port defines ‘where’ it is to be displayed.
The mapping of a part of a world coordinate scene to device coordinate is referred to
as viewing transformation.
The two dimensional viewing transformation is referred to as window to view port
transformation or windowing transformation.

A viewing transformation using standard rectangles for the window and viewport

Normalized coordinate system

 If we map directly from WCS to a DCS, then changing our device requires rewriting
this mapping (among other changes).
 Instead, use Normalized Device Coordinates (NDC) as an intermediate coordinate
system that gets mapped to the device layer.
 Will consider using only a square portion of the device.
Windows in WCS will be mapped to viewports that are specified within a unit square
in NDC space.
 Map viewports from NDC coordinates to the screen.

The two dimensional viewing transformation pipelines


The viewing transformation in several steps,
First, we construct the scene in world coordinates using the output primitives.
Next to obtain a particular orientation for the window, we can set up a two-
dimensional viewing-coordinate system in the world coordinate plane, and define a window
in the viewing-coordinate system.
The viewing- coordinate reference frame is used to provide a method for setting up
arbitrary orientations for rectangular windows.
Once the viewing reference frame is established, we can transform descriptions in
world coordinates to viewing coordinates.
We then define a viewport in normalized coordinates (in the range from 0 to 1) and
map the viewing-coordinate description of the scene to normalized coordinates.
At the final step all parts of the picture that lie outside the viewport are clipped, and
the contents of the viewport are transferred to device coordinates. By changing the position of
the viewport, we can view objects at different positions on the display area of an output
device.

Window to view port coordinate transformation:

A point at position (xw,yw) in a designated window is mapped to viewport


coordinates (xv,yv) so that relative positions in the two areas are the same.
The figure illustrates the window to view port mapping. A point at position (xw,yw)
in the window is mapped into position (xv,yv) in the associated view port.
To maintain the same relative placement in view port as in window
Solving these expressions for view port position (xv,yv)

From (1)
xv - xvmin = ((xvmax - xvmin) / (xwmax - xwmin)) * (xw - xwmin)
xv = xvmin + ((xvmax - xvmin) / (xwmax - xwmin)) * (xw - xwmin)
From (2)
yv - yvmin = ((yvmax - yvmin) / (ywmax - ywmin)) * (yw - ywmin)
yv = yvmin + ((yvmax - yvmin) / (ywmax - ywmin)) * (yw - ywmin)

Defining the scaling factors
Sx = (xvmax - xvmin) / (xwmax - xwmin)
Sy = (yvmax - yvmin) / (ywmax - ywmin)
then
xv = xvmin + Sx * (xw - xwmin)
yv = yvmin + Sy * (yw - ywmin)
The equations can also be derived with a set of transformations that converts the window
area into the viewport area. This conversion is performed with the following sequence of
transformations:
1.Perform a scaling transformation using a fixed-point position of (xwmin,
ywmin)that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
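A minimal sketch of the mapping derived above, using the scaling factors Sx and Sy; the
function and parameter names are illustrative, not taken from the notes:

#include <stdio.h>

/* Illustrative sketch: map a window point (xw, yw) to viewport coordinates (xv, yv). */
void windowToViewport(float xw, float yw,
                      float xwmin, float xwmax, float ywmin, float ywmax,
                      float xvmin, float xvmax, float yvmin, float yvmax,
                      float *xv, float *yv)
{
    float sx = (xvmax - xvmin) / (xwmax - xwmin);
    float sy = (yvmax - yvmin) / (ywmax - ywmin);
    *xv = xvmin + sx * (xw - xwmin);
    *yv = yvmin + sy * (yw - ywmin);
}

int main(void)
{
    float xv, yv;
    /* Window [0,100] x [0,100] mapped to a unit-square viewport. */
    windowToViewport(25.0f, 50.0f, 0, 100, 0, 100, 0, 1, 0, 1, &xv, &yv);
    printf("(%.2f, %.2f)\n", xv, yv);   /* prints (0.25, 0.50) */
    return 0;
}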
Workstation Transformation:-
From normalized coordinates, object descriptions are mapped to the various display
devices. Any number of output devices can be open in a particular application, and another
window-to-viewport transformation can be performed for each open output device. This
mapping, called the workstation transformation, is accomplished by selecting a window
area in normalized space and a viewport area in the coordinates of the display device. With
the workstation transformation, we gain some additional control over the positioning of parts
of a scene on individual output devices. As illustrated in below Fig.

Two Dimensional viewing functions


Viewing reference system in a PHIGS application program has following function.
1. To evaluate view world coordinate positions
evaluateViewOrientationMatrix(x0,y0,xv,yv,error, viewMatrix)
where x0,y0 are coordinate of viewing origin and parameter xv, yv are the world
coordinate positions for view up vector.
An integer error code is generated if the input parameters are in error otherwise the
view matrix for world-to-viewing transformation is calculated. Any number of viewing
transformation matrices can be defined in an application.
2.To set up elements of window to view port mapping
evaluateViewMappingMatrix(xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin,
yvmax, error, viewMappingMatrix)
Here window limits in viewing coordinates are chosen with parameters xwmin,
xwmax, ywmin, ywmax and the viewport limits are set with normalized coordinate positions
xvmin, xvmax, yvmin, yvmax.

3. The combinations of viewing and window view port mapping for various workstations in a
viewing table with
setViewRepresentation(ws,viewIndex,viewMatrix,viewMappingMatrix, xclipmin,
xclipmax, yclipmin, yclipmax, clipxy)
Where parameter ws designates the output device and parameter viewIndex
sets an integer identifier for this window-viewport pair.
4.The matrices viewMatrix and viewMappingMatrix can be concatenated and referenced by
viewIndex.
setViewIndex(viewIndex) selects a particular set of options from the viewing
table.
5.At the final stage we apply a workstation transformation by selecting a work station
window viewport pair.
setWorkstationWindow(ws, xwsWindmin, xwsWindmax, ywsWindmin,
ywsWindmax)
setWorkstationViewport(ws, xwsVPortmin, xwsVPortmax, ywsVPortmin,
ywsVPortmax)
where ws gives the workstation number. Window-coordinate extents are specified in the
range from 0 to 1 and viewport limits are in integer device coordinates.
Clipping operation
Any procedure that identifies those portions of a picture that are inside or outside of a
specified region of space is referred to as clipping algorithm or clipping.
The region against which an object is to be clipped is called clip window. Algorithm
for clipping primitive types:
 Point clipping
 Line clipping (Straight-line segment)
 Area clipping (Polygon)
 Curve clipping
 Text clipping

Point Clipping
Clip window is a rectangle in standard position.
A point P = (x, y) is saved for display if the following inequalities are satisfied:
xwmin <= x <= xwmax and ywmin <= y <= ywmax
where the edges of the clip window (xwmin,xwmax,ywmin,ywmax) can be either the
world-coordinate window boundaries or viewport boundaries.
If any one of these four inequalities is not satisfied, the point is clipped (not saved for
display).
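A minimal sketch of this test in C (the names are illustrative):

/* Illustrative sketch: returns 1 if (x, y) lies inside the clip window, 0 if it is clipped. */
int clipPoint(float x, float y,
              float xwmin, float xwmax, float ywmin, float ywmax)
{
    return (x >= xwmin && x <= xwmax &&
            y >= ywmin && y <= ywmax);
}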
Line Clipping
A line clipping procedure involves several parts. First we test a given line segment
whether it lies completely inside the clipping window. If it does not we try to determine
whether it lies completely outside the window.
Finally if we cannot identify a line as completely inside or completely outside, we
perform intersection calculations with one or more clipping boundaries.
Process lines through “inside-outside” tests by checking the line endpoints. A line
with both endpoints inside all clipping boundaries such as line from P1 to P2 is saved. A line
with both end-point outside any one of the clip boundaries line P3 P4 is outside the window.
All other lines cross one or more clipping boundaries, and may require calculation of multiple
intersection points.

Fig: Line clipping against a rectangular clip window


Cohen-Sutherland Line Clipping
This is one of the oldest and most popular line-clipping procedures. The method
speeds up the processing of line segments by performing initial tests that reduce the number
of intersections that must be calculated.
Every line endpoint in a picture is assigned a four digit binary code called a region
code that identifies the location of the point relative to the boundaries of the clipping
rectangle.

Binary region codes assigned to line end points according to relative position
with respect to the clipping rectangle.
Regions are set up in reference to the boundaries. Each bit position in region code is
used to indicate one of four relative coordinate positions of points with respect to clip
window: to the left, right, top or bottom.
By numbering the bit positions in the region code as 1 through 4 from right to left,
the coordinate regions are correlated with the bit positions as
bit 1: left
bit 2: right
bit 3: below
bit4: above
A value of 1 in any bit position indicates that the point is in that relative position.
Otherwise the bit position is set to 0. If a point is within the clipping rectangle the region
code is 0000. A point that is below and to the left of the rectangle has a region code of 0101.
Bit values in the region code are determined by comparing endpoint coordinate values
(x,y) to clip boundaries.
Bit1 is set to 1 if x <xwmin.
For programming language in which bit manipulation is possible region-code bit
values can be determined with following two steps.
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding value in
the region code.
bit 1 is the sign bit of x – xwmin
bit 2 is the sign bit of xwmax - x
bit 3 is the sign bit of y – ywmin
bit 4 is the sign bit of ywmax - y.
Once we have established region codes for all line endpoints, we can quickly determine
which lines are completely inside the clip window and which are clearly outside.
Any lines that are completely contained within the window boundaries have a region
code of 0000 for both endpoints, and we accept these lines.
Any lines that have a 1 in the same bit position in the region codes for each endpoint
are completely outside the clipping rectangle, and we reject these lines.
We would discard the line that has a region code of 1001 for one endpoint and a code
of 0101 for the other endpoint.
Both endpoints of this line are left of the clipping rectangle, as indicated by the 1 in
the first bit position of each region code.
A method that can be used to test lines for total clipping is to perform the logical and
operation with both region codes.
If the result is not 0000, the line is completely outside the clipping region. Lines that
cannot be identified as completely inside or completely outside a clip window by these tests
are checked for intersection with window boundaries.
Line extending from one coordinates region to another may pass through the clip
window, or they may intersect clipping boundaries without entering window.
As an example of Cohen-Sutherland line clipping, consider the line from P1 to P2.
Starting with the bottom endpoint P1, we check it against the left, right, and bottom
boundaries in turn and find that this point is below the clipping rectangle.
We then find the intersection point P1’ with the bottom boundary and discard the line
section from P1 to P1’.
The line now has been reduced to the section from P1’ to P2,Since P2, is outside the
clip window, we check this endpoint against the boundaries and find that it is to the left of the
window.
Intersection point P2’ is calculated, but this point is above the window. So the final
intersection calculation yields P2”, and the line from P1’ to P2”is saved.
This completes processing for this line, so we save this part and go on to the next line.
Point P3 in the next line is to the left of the clipping rectangle, so we determine the
intersection P3’, and eliminate the line section from P3 to P3'. By checking region codes
for the line section from P3'to P4 we find that the remainder of the line is below the clip
window and can be discarded also. Intersection points with a clipping boundary can be
calculated using the slope-intercept form of the line equation.
For a line with endpoint coordinates (x1,y1) and (x2,y2), the y coordinate of the
intersection point with a vertical boundary can be obtained with the calculation
y =y1 +m (x-x1)
where x value is set either to xwmin or to xwmax and slope of line is calculated as
m = (y2- y1) / (x2- x1)
the intersection with a horizontal boundary the x coordinate can be calculated as
x= x1 +( y- y1) / m with y set to either to ywmin or to ywmax.

Implementation of Cohen-sutherland Line Clipping


#define ROUND(a) ((int)(a+0.5))
#define LEFT_EDGE 0x1
#define RIGHT_EDGE 0x2
#define BOTTOM_EDGE 0x4
#define TOP_EDGE 0x8
#define TRUE 1
#define FALSE 0
#define INSIDE(a) (!a)
#define REJECT(a,b) (a&b)
#define ACCEPT(a,b) (!(a|b))
unsigned char encode(wcPt2 pt, dcPt winmin, dcPt winmax)
{
unsigned char code=0x00;
if(pt.x<winmin.x)
code=code|LEFT_EDGE;
if(pt.x>winmax.x)
code=code|RIGHT_EDGE;
if(pt.y<winmin.y)
code=code|BOTTOM_EDGE;
if(pt.y>winmax.y)
code=code|TOP_EDGE;
return(code);
}
void swappts(wcPt2 *p1, wcPt2 *p2)
{
wcPt2 tmp;
tmp=*p1;
*p1=*p2;
*p2=tmp;
}
void swapcodes(unsigned char *c1, unsigned char *c2)
{
unsigned char tmp;
tmp=*c1;
*c1=*c2;
*c2=tmp;
}
void clipline(dcPt winmin, dcPt winmax, wcPt2 p1, wcPt2 p2)
{
unsigned char code1,code2;
int done=FALSE,
draw=FALSE;
float m;
while(!done)
{
code1=encode(p1,winmin,winmax);
code2=encode(p2,winmin,winmax);
if(ACCEPT(code1,code2))
{
done=TRUE;
draw=TRUE;
}
else if(REJECT(code1,code2))
done=TRUE;
else
{
if(INSIDE(code1))
{
swappts(&p1,&p2);
swapcodes(&code1,&code2);
}
if(p2.x!=p1.x)
m=(p2.y-p1.y)/(p2.x-p1.x);
if(code1 &LEFT_EDGE)
{
p1.y+=(winmin.x-p1.x)*m;
p1.x=winmin.x;
}
else if(code1 &RIGHT_EDGE)
{
p1.y+=(winmax.x-p1.x)*m;
p1.x=winmax.x;
}
else if(code1 & BOTTOM_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmin.y-p1.y)/m; p1.y=winmin.y;
}
else if(code1 & TOP_EDGE)
{
if(p2.x!=p1.x)
p1.x+=(winmax.y-p1.y)/m;
p1.y=winmax.y;
}
}
}
if(draw)
lineDDA(ROUND(p1.x),ROUND(p1.y),ROUND(p2.x),ROUND(p2.y));
}
Nicholl-Lee-Nicholl Line clipping
By creating more regions around the clip window, the Nicholl-Lee-Nicholl (or NLN)
algorithm avoids multiple clipping of an individual line segment.
In the Cohen-Sutherland method, multiple intersections may be calculated.These extra
intersection calculations are eliminated in the NLN algorithm by carrying out more regions
testing before intersection positions are calculated.
Compared to both the Cohen-Sutherland and the Liang-Barsky algorithms, the
Nicholl-Lee-Nicholl algorithm performs fewer comparisons and divisions.
The trade-off is that the NLN algorithm can only be applied to two-dimensional
clipping, whereas both the Liang-Barsky and the Cohen-Sutherland methods are easily
extended to three-dimensional scenes.
For a line with endpoints P1 and P2 we first determine the position of point P1, for the
nine possible regions relative to the clipping rectangle. Only the three regions shown in Fig.
need to be considered.
If P1 lies in any one of the other six regions, we can move it to one of the three
regions in Fig. using a symmetry transformation.
For example, the region directly above the clip window can be transformed to the
region left of the clip window using a reflection about the line y = -x, or we could use a 90
degree counterclockwise rotation.
Three possible positions for a line endpoint p1 in the NLN algorithm

Case 1: p1 inside region


Case 2: p1 across edge
Case 3: p1 across corner
Next, we determine the position of P2 relative to P1. To do this, we create some new
regions in the plane, depending on the location of P1.
Boundaries of the new regions are half-infinite line segments that start at the position
of P1 and pass through the window corners.
If P1 is inside the clip window and P2 is outside, we set up the four regions shown in
Fig;
Fig: The four clipping regions used in NLN alg when p1 is inside and p2 outside the clip
window

The intersection with the appropriate window boundary is then carried out, depending
on which one of the four regions (L, T, R, or B) contains P2.
If both P1 and P2 are inside the clipping rectangle, we simply save the entire line. If
P1 is in the region to the left of the window, we set up the four regions L, LT, LR, and
LB shown in fig

Fig: The four clipping regions used in NLN algorithm when p1 is directly left of the
clip window
These four regions determine a unique boundary for the line segment. For instance, if
P2 is in region L, we clip the line at the left boundary and save the line segment from this
intersection point to P2. But if P2 is in region LT,we save the line segment from the left
window boundary to the top boundary. If P2 is not in any of the four regions, L, LT, LR, or
LB, the entire line is clipped.
For the third case, when P1 is to the left and above the clip window, we use the
clipping regions in Fig.
Fig : The two possible sets of clipping regions used in NLN algorithm when P1 is above and
to the left of the clip window

In this case, we have the two possibilities shown, depending on the position of P1,
relative to the top left corner of the window. If P2,is in one of the regions T, L, TR, TB, LR,
or LB, this determines a unique clip window edge for the intersection calculations.
Otherwise, the entire line is rejected.
To determine the region in which P2 is located, we compare the slope of the line to the slopes
of the boundaries of the clip regions. For example, if P1 is left of the clipping rectangle (Fig.
a), then P2, is in region LT if

And we clip the entire line if


(yT – y1)( x2 – x1) < (xL – x1 ) ( y2 – y1)
The coordinate difference and product calculations used in the slope tests are saved
and also used in the intersection calculations. From the parametric equations
x = x1 + (x2 – x1)u
y = y1 + (y2 – y1)u
an x-intersection position on the left window boundary is x = xL,, with
u= (xL – x1 )/ ( x2 – x1)

so that the y-intersection position is


y = y1 + ((y2 - y1) / (x2 - x1)) * (xL - x1)
And an intersection position on the top boundary has y = yT and
u = (yT - y1) / (y2 - y1), with
x = x1 + ((x2 - x1) / (y2 - y1)) * (yT - y1)
POLYGON CLIPPING
To clip polygons, we need to modify the line-clipping procedures. A polygon boundary
processed with a line clipper may be displayed as a series of unconnected line segments
(Fig.), depending on the orientation of the polygon to the clipping window.
Display of a polygon processed by a line clipping algorithm

For polygon clipping, we require an algorithm that will generate one or more closed
areas that are then scan converted for the appropriate area fill. The output of a polygon
clipper should be a sequence of vertices that defines the clipped polygon boundaries.
Sutherland – Hodgeman polygon clipping:
A polygon can be clipped by processing the polygon boundary as a whole against
each window edge. This could be accomplished by processing all polygon vertices against
each clip rectangle boundary.
There are four possible cases when processing vertices in sequence around the
perimeter of a polygon. As each point of adjacent polygon vertices is passed to a window
boundary clipper, make the following tests:
1. If the first vertex is outside the window boundary and second vertex is inside, both the
intersection point of the polygon edge with window boundary and second vertex are added to
output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is added to
the output vertex list.
3. If first vertex is inside the window boundary and second vertex is outside only the edge
intersection with window boundary is added to output vertex list.
4. If both input vertices are outside the window boundary nothing is added to the output list.
Fig:Clipping a polygon against successive window boundaries.

Fig:Successive processing of pairs of polygon vertices against the left window boundary
Clipping a polygon against the left boundary of a window, starting with vertex 1.
Primed numbers are used to label the points in the output vertex list for this window
boundary.
Vertices 1 and 2 are found to be on outside of boundary. Moving along vertex 3
which is inside, calculate the intersection and save both the intersection point and vertex 3.
Vertex 4 and 5 are determined to be inside and are saved. Vertex 6 is outside so we find and
save the intersection point.

Using the five saved points we repeat the process for next window boundary.
Implementing the algorithm as described requires setting up storage for an output list
of vertices as a polygon clipped against each window boundary. We eliminate the
intermediate output vertex lists by simply by clipping individual vertices at each step and
passing the clipped vertices on to the next boundary clipper.
A point is added to the output vertex list only after it has been determined to be inside
or on a window boundary by all boundary clippers. Otherwise the point does not continue in
the pipeline.
Implementation of Sutherland-Hodgeman Polygon Clipping
typedef enum { Left, Right, Bottom, Top } Edge;
#define N_EDGE 4
#define TRUE 1
#define FALSE 0
int inside(wcPt2 p, Edge b, dcPt wmin, dcPt wmax)
{
switch(b)
{
case Left: if(p.x<wmin.x) return (FALSE); break;
case Right: if(p.x>wmax.x) return (FALSE); break;
case Bottom: if(p.y<wmin.y) return (FALSE); break;
case Top: if(p.y>wmax.y) return (FALSE); break;
}
return (TRUE);
}
int cross(wcPt2 p1, wcPt2 p2, Edge b, dcPt wmin, dcPt wmax)
{
if(inside(p1,b,wmin,wmax)==inside(p2,b,wmin,wmax))
return (FALSE);
else
return (TRUE);
}
wcPt2 intersect(wcPt2 p1, wcPt2 p2, Edge b, dcPt wmin, dcPt wmax)
{
wcPt2 ipt;
float m;
if(p1.x!=p2.x) m=(p1.y-p2.y)/(p1.x-p2.x);
switch(b)
{
case Left:
ipt.x=wmin.x;
ipt.y=p2.y+(wmin.x-p2.x)*m;
break;
case Right:
ipt.x=wmax.x;
ipt.y=p2.y+(wmax.x-p2.x)*m;
break;
case Bottom:
ipt.y=wmin.y;
if(p1.x!=p2.x) ipt.x=p2.x+(wmin.y-p2.y)/m;
else ipt.x=p2.x;
break;
case Top:
ipt.y=wmax.y;
if(p1.x!=p2.x) ipt.x=p2.x+(wmax.y-p2.y)/m;
else
ipt.x=p2.x;
break;
}
return(ipt);
}
void clippoint(wcPt2 p, Edge b, dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt,
wcPt2 *first[], wcPt2 *s)
{
wcPt2 ipt;
if(!first[b]) first[b]=&p;
else if(cross(p,s[b],b,wmin,wmax))
{
ipt=intersect(p,s[b],b,wmin,wmax);
if(b<Top) clippoint(ipt,b+1,wmin,wmax,pout,cnt,first,s);
else
{
pout[*cnt]=ipt; (*cnt)++;
}

}
s[b]=p;
if(inside(p,b,wmin,wmax))
if(b<Top) clippoint(p,b+1,wmin,wmax,pout,cnt,first,s);
else
{
pout[*cnt]=p; (*cnt)++;
}
}
void closeclip(dcPt wmin, dcPt wmax, wcPt2 *pout, int *cnt, wcPt2 *first[], wcPt2 *s)
{
wcPt2 ipt;
Edge b;
for(b=Left;b<=Top;b++)
{
if(cross(s[b],*first[b],b,wmin,wmax))
{
ipt=intersect(s[b],*first[b],b,wmin,wmax);
if(b<Top)
clippoint(i,b+1,wmin,wmax,pout,cnt,first,s);
else
{
pout[*cnt]=ipt;
(*cnt)++;
}
}
}
}
int clippolygon(dcPt wmin, dcPt wmax, int n, wcPt2 *pin, wcPt2 *pout)
{
wcPt2 *first[N_EDGE]={0,0,0,0},s[N_EDGE];
int i, cnt=0;
for(i=0;i<n;i++)
clippoint(pin[i],Left,wmin,wmax,pout,&cnt,first,s);
closeclip(wmin,wmax,pout,&cnt,first,s);
return(cnt);
}
Curve Clipping
Curve-clipping procedures will involve nonlinear equations, and this requires more
processing than for objects with linear boundaries.
The bounding rectangle for a circle or other curved object can be used first to test for
overlap with a rectangular clip window. If the bounding rectangle for the object is completely
inside the window, we save the object.
If the rectangle is determined to be completely outside the window, we discard the
object. In either case, there is no further computation necessary. But if the bounding rectangle
test fails, we can look for other computation-saving approaches.
For a circle, we can use the coordinate extents of individual quadrants and then
octants for preliminary testing before calculating curve-window intersections.
The below figure illustrates circle clipping against a rectangular window. On the first
pass, we can clip the bounding rectangle of the object against the bounding rectangle of the
clip region. If the two regions overlap, we will need to solve the simultaneous line-curve
equations to obtain the clipping intersection points.
Clipping a filled circle

Text clipping
There are several techniques that can be used to provide text clipping in a graphics
package.
The clipping technique used will depend on the methods used to generate characters
and the requirements of a particular application.
The simplest method for processing character strings relative to a window boundary is
to use the all-or-none string-clipping strategy shown in Fig. . If all of the string is inside a
clip window, we keep it. Otherwise, the string is discarded. This procedure is implemented by
considering a bounding rectangle around the text pattern.
The boundary positions of the rectangle are then compared to the window boundaries,
and the string is rejected if there is any overlap. This method produces the fastest text
clipping.

Text clipping using a bounding rectangle about the entire string

An alternative to rejecting an entire character string that overlaps a window boundary


is to use the all-or-none character-clipping strategy.
Here we discard only those characters that are not completely inside the window .In
this case, the boundary limits of individual characters are compared to the window. Any
character that either overlaps or is outside a window boundary is clipped.
Text clipping using a bounding rectangle about individual characters.

A final method for handling text clipping is to clip the components of individual
characters. We now treat characters in much the same way that we treated lines.
If an individual character overlaps a clip window boundary, we clip off the parts of
the character that are outside the window.
Text Clipping performed on the components of individual characters

Exterior clipping
So far we have considered procedures for clipping a picture to the interior of a region by
eliminating everything outside the clipping region; with these procedures the inside region of
the picture is saved. It is also possible to clip a picture to the exterior of a specified region,
so that the picture parts to be saved are those that lie outside the region. This is called
exterior clipping. Exterior clipping is also used in other applications that require
overlapping pictures.
Objects within a window are clipped to interior of window when other higher priority
window overlap these objects. The objects are also clipped to the exterior of overlapping
windows.
UNIT-IV
Three Dimensional Display Methods

 Parallel Projection
 Perspective Projection
 Depth cueing
 Visible line and surface identification
 Surface rendering
 Exploded and cutaway views
 Three dimensional stereoscopic views

1. Parallel Projection
This method generates a view of a solid object by projecting it along parallel lines onto the
display plane. By changing the viewing position we can get different views of the 3D object
on the 2D display screen.
In a parallel projection, parallel lines in the world-coordinate scene project into
parallel lines on the two-dimensional display plane.

Fig. 3.1: - different views object by changing viewing plane position.


Above figure shows different views of objects.
This technique is used in Engineering & Architecture drawing to represent an object
with a set of views that maintain relative properties of the object e.g.-orthographic
projection.
2. Perspective projection
This method generates a view of a 3D object by projecting points onto the display plane
along converging paths.

This displays an object smaller when it is far away from the view plane and of nearly
its true size when it is close to the view plane. It produces a more realistic view, as this is
the way our eye forms an image.
In a perspective projection, parallel lines in a scene that are not parallel to the display
plane are projected into converging lines. Scenes displayed using perspective projections
appear more realistic, since this is the way that our eyes and a camera lens form images.

3. Depth cueing
Many times depth information is important so that we can identify for a particular
viewing direction that which are the front and which is the back of display object. Simple
method to do this is depth cueing in which assign higher intensity to closer object &
lower intensity to the far objects.

Depth cueing is applied by choosing maximum and minimum intensity values and
a range of distances over which the intensities are to vary. Another application is
modelling the effect of the atmosphere.
4. Visible line and surface Identification
In this method we first identify visible lines or surfaces by some method, and then
display the visible lines with highlighting or with a different color. Another way is to
display hidden lines as dashed lines or simply not display them at all, but not drawing
hidden lines loses some information.
A similar method can be applied to surfaces, by displaying shaded or colored
surfaces. Some visible-surface algorithms establish visibility pixel by pixel across
the view plane; others determine the visibility of an object surface as a whole.
5. Surface Rendering
A more realistic image is produced by setting the surface intensity according to the light
reflected from that surface and the characteristics of that surface. It gives more intensity
to a shiny surface and less to a dull surface, and it applies high intensity where more
light falls and low intensity where less light falls.
6. Exploded and Cutaway views
Many times the internal structure of an object needs to be shown. For example, in a
machine drawing the internal assembly is important. To display such views, the outer
portion of the body is removed (cut away) so that the internal parts become visible.
7. Three dimensional stereoscopic views
This method displays computer-generated scenes as true three-dimensional views.
Graphics monitors that display three-dimensional scenes can be devised
using a technique that reflects a CRT image from a vibrating flexible mirror.

Fig. 3.3: - 3D display system uses a vibrating mirror.


The vibrating mirror changes its focal length due to the vibration, which is synchronized with
the display of an object on the CRT. Each point on the object is reflected from the mirror into a
spatial position corresponding to the distance of that point from a viewing position.

A very good example of this system is the GENISCO SPACE GRAPH system, which uses a
vibrating mirror to project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system
is also capable of showing 2D cross sections at different depths.
Another way is stereoscopic views.
A stereoscopic view does not produce a true three-dimensional image, but it produces a 3D
effect by presenting a different view to each eye of an observer so that the scene appears to
have depth.

To obtain this we first need two views of the object, generated from viewing
directions corresponding to each eye. We can construct the two views as computer-generated
scenes with different viewing positions, or we can use a stereo camera pair to photograph an
object or scene.
When we view the left view with the left eye and the right view with the right eye
simultaneously, the two views merge and produce an image which appears to have depth.
One way to produce a stereoscopic effect is to display each of the two views with a raster
system on alternate refresh cycles. The screen is viewed through glasses, with each lens
designed to act as a rapidly alternating shutter that is synchronized to block out
one of the views.

Three Dimensional Transformations


Three Dimensional:

The three-dimensional transformations are extensions of two-dimensional


transformation. In 2D two coordinates are used, i.e., x and y whereas in 3D three co-
ordinates x, y, and z are used.

These are translations, scaling, and rotation. These are also called as basic
transformations are represented using matrix. More complex transformations are handled
using matrix in 3D.

Translation
It is the movement of an object from one position to another position. Translation
is done using translation vectors.

Translation is additive.

 These vectors are in x, y, and z directions.


 Translation in the x-direction is represented using Tx.
 The translation is y-direction is represented using Ty.
 The translation in the z- direction is represented using Tz.

If P is a point with coordinates (x, y, z) before translation, then after translation its

coordinates will be (x1, y1, z1). Tx, Ty, Tz are the translation distances in the x, y, and z
directions respectively.

x1=x+ Tx
y1=y+Ty
z1=z+ Tz

Three-dimensional transformations are performed by transforming each vertex of the


object.

Matrix for translation


Matrix representation of point translation

Point shown in fig is (x, y, z). It become (x1,y1,z1) after translation. Tx Ty Tz are
translation vector.

Scaling
Scaling is used to change the size of an object. The size can be increased or
decreased. The scaling three factors are required Sx Sy and Sz.

Scaling is a multiplication.

 Sx=Scaling factor in x- direction


 Sy=Scaling factor in y-direction
 Sz=Scaling factor in z-direction

We get the equations for scaling:

x1 = x . Sx

y1 = y . Sy

z1 = z . Sz
Matrix for Scaling

Scaling of the object relative to a fixed point

Following are steps performed when scaling of objects with fixed point (a, b, c). It can be
represented as below:

1. Translate fixed point to the origin


2. Scale the object relative to the origin
3. Translate object back to its original position.

Note: If all scaling factors are equal, i.e. Sx = Sy = Sz, the scaling is called uniform. If
scaling is done with different scaling factors, it is called differential scaling.

In figure (a) point (a, b, c) is shown, and object whose scaling is to done also shown in steps
in fig (b), fig (c) and fig (d).
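A minimal sketch of the three steps applied directly to a point rather than through matrices;
the function name is illustrative, not from the notes:

#include <stdio.h>

/* Illustrative sketch: scale (x, y, z) by (sx, sy, sz) relative to the fixed point (a, b, c). */
void scaleAboutFixedPoint(float *x, float *y, float *z,
                          float sx, float sy, float sz,
                          float a, float b, float c)
{
    *x = a + (*x - a) * sx;   /* translate to origin, scale, translate back */
    *y = b + (*y - b) * sy;
    *z = c + (*z - c) * sz;
}

int main(void)
{
    float x = 3.0f, y = 3.0f, z = 3.0f;
    scaleAboutFixedPoint(&x, &y, &z, 2.0f, 2.0f, 2.0f, 1.0f, 1.0f, 1.0f);
    printf("(%.1f, %.1f, %.1f)\n", x, y, z);   /* prints (5.0, 5.0, 5.0) */
    return 0;
}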
Rotation

It is the turning of an object through an angle. The movement can be anticlockwise or

clockwise.

3D rotation is complex as compared to the 2D rotation. For 2D we describe the angle


of rotation, but for a 3D angle of rotation and axis of rotation are required. The axis can be
either x or y or z.

Following figures shows rotation about x, y, z- axis


Following figure show rotation of the object about the Y axis

Following figure show rotation of the object about the Z axis


The two-dimensional z-axis rotation equations are easily extended to three
dimensions:
x' = x cosθ - y sinθ
y' = x sinθ + y cosθ
z' = z
Parameter θ specifies the rotation angle. In homogeneous coordinate form, the three-
dimensional z-axis rotation equations are expressed as

or

We get the equations for an x-axis rotation


Which can be written in the homogeneous coordinate form is

We get the equations for any-axis rotation

The matrix representation for y-axis rotation is
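The explicit rotation matrices appear only as figures that are not reproduced in this copy. As a
minimal sketch, assuming a right-handed coordinate system and the standard axis-rotation
equations quoted above, the three coordinate-axis rotations of a point can be coded as:

#include <stdio.h>
#include <math.h>

/* Illustrative sketches of the standard coordinate-axis rotations. */

/* z-axis rotation: x' = x cos - y sin, y' = x sin + y cos, z' = z */
void rotateZ(float *x, float *y, float *z, float th)
{
    float nx = (*x) * cosf(th) - (*y) * sinf(th);
    float ny = (*x) * sinf(th) + (*y) * cosf(th);
    *x = nx; *y = ny; (void)z;   /* z unchanged */
}

/* x-axis rotation: y' = y cos - z sin, z' = y sin + z cos, x' = x */
void rotateX(float *x, float *y, float *z, float th)
{
    float ny = (*y) * cosf(th) - (*z) * sinf(th);
    float nz = (*y) * sinf(th) + (*z) * cosf(th);
    *y = ny; *z = nz; (void)x;   /* x unchanged */
}

/* y-axis rotation: z' = z cos - x sin, x' = z sin + x cos, y' = y */
void rotateY(float *x, float *y, float *z, float th)
{
    float nz = (*z) * cosf(th) - (*x) * sinf(th);
    float nx = (*z) * sinf(th) + (*x) * cosf(th);
    *x = nx; *z = nz; (void)y;   /* y unchanged */
}

int main(void)
{
    float x = 1.0f, y = 0.0f, z = 0.0f;
    rotateZ(&x, &y, &z, 3.14159265f / 2.0f);   /* 90 degrees about z */
    printf("(%.2f, %.2f, %.2f)\n", x, y, z);   /* approximately (0, 1, 0) */
    return 0;
}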

Composite Transformation:-
As with two-dimensional transformations, we form a composite three-dimensional
transformation by multiplying the matrix representations for the individual operations
in the transformation sequence.
This concatenation is carried out from right to left, where the rightmost matrix is the
first transformation to be applied to an object and the leftmost matrix is the last
transformation.
The following program provides an example for implementing, a composite
transformation.
A sequence of basic, three-dimensional geometric transformations are combined to
produce a single composite transformation, which is then applied to the coordinate definition
of an object.
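The program referred to above is not included in this copy. As a minimal sketch of right-to-left
concatenation, a 4 x 4 matrix multiply can combine, for example, a scaling followed by a
translation into one composite matrix; the matrix layout and names here are illustrative
assumptions:

#include <stdio.h>

typedef float Mat4[4][4];   /* illustrative 4x4 homogeneous matrix type */

/* c = a * b; with column-vector points P' = M . P, the rightmost factor (b)
   is the first transformation applied. */
void mat4Mul(Mat4 c, Mat4 a, Mat4 b)
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            c[i][j] = 0.0f;
            for (k = 0; k < 4; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

/* out = m . p for a homogeneous point p = (x, y, z, 1). */
void mat4Apply(Mat4 m, float p[4], float out[4])
{
    int i, k;
    for (i = 0; i < 4; i++) {
        out[i] = 0.0f;
        for (k = 0; k < 4; k++)
            out[i] += m[i][k] * p[k];
    }
}

int main(void)
{
    /* Scale by 2, then translate by (1, 2, 3). */
    Mat4 S = {{2,0,0,0},{0,2,0,0},{0,0,2,0},{0,0,0,1}};
    Mat4 T = {{1,0,0,1},{0,1,0,2},{0,0,1,3},{0,0,0,1}};
    Mat4 C;
    float p[4] = {1, 1, 1, 1}, q[4];

    mat4Mul(C, T, S);          /* C = T . S : scaling is applied first */
    mat4Apply(C, p, q);
    printf("(%.1f, %.1f, %.1f)\n", q[0], q[1], q[2]);   /* (3.0, 4.0, 5.0) */
    return 0;
}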
THREE-DIMENSIONAL VIEWING
Three-dimensional descriptions of objects must be projected onto the flat viewing
surface of the output device.
And the clipping boundaries now enclose a volume of space, whose shape depends
on the type of projection we select. In this chapter, we explore the general operations needed
to produce views of a three-dimensional scene, and we also discuss specific viewing
procedures provided in packages such as PHIGS and GL.
VIEWING PIPELINE
Computer generation of a view of a three-dimensional scene is somewhat analogous
to the process involved in taking a photograph. To take a snapshot, we first need to position
the camera at a particular point in space.

We snap the shutter, the scene is cropped to the size of the "window" (aperture) of the
camera, and light from the visible surfaces is projected onto the camera film.

Once the scene has been modeled, world-coordinate positions are converted to
viewing coordinates.
The viewing-coordinate system is used in graphics packages as a reference for
specifying the observer viewing position and the position of the projection plane, which
we can think of in analogy with the camera film plane.
Next, projection operations are performed to convert the viewing-coordinate
description of the scene to coordinate positions on the projection plane, which will then be
mapped to the output device.
Objects outside the specified viewing limits are clipped from further consideration, and the
remaining objects are processed through visible-surface identification and surface-
rendering procedures to produce the display within the device viewport.
Projection
It is the process of converting a 3D object into a 2D representation. It is also defined as the
mapping or transformation of the object onto the projection plane or view plane. The view
plane is the display surface.

Perspective Projection
In perspective projection, the farther an object is from the viewer, the smaller it appears.
This property of the projection gives an idea of depth.
Two main characteristics of perspective are vanishing points and perspective
foreshortening.
Due to foreshortening, objects and lengths farther from the center of projection appear
smaller.
The more we increase the distance from the center of projection, the smaller the object
appears.
Vanishing Point
It is the point where all lines will appear to meet. There can be one point, two point,
and three point perspectives.

One Point: There is only one vanishing point as shown in fig (a)
Two Points: There are two vanishing points. One is the x-direction and other in the y -
direction as shown in fig (b)

Three Points: There are three vanishing points: one in the x-direction, the second in the
y-direction, and the third in the z-direction.

Important terms related to perspective:


View plane: It is an area of world coordinate system which is projected into viewing
plane.
Center of Projection: It is the location of the eye on which projected light rays
converge.
Projectors: It is also called a projection vector. These are rays start from the object scene
and are used to create an image of the object on viewing or view plane.

Anomalies in Perspective Projection


It introduces several anomalies due to these object shape and appearance gets
affected.
Perspective foreshortening: The size of the object becomes smaller as its distance from the
center of projection increases.
Vanishing Point: All parallel lines appear to meet at some point in the view plane.
Distortion of Lines: A line that extends from in front of the viewer to behind the viewer is
projected with severe distortion.
Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening the x
and z-axis results in two vanishing points in fig (b). Adding a y-axis foreshortening in fig (c)
adds vanishing point along the negative y-axis.
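A minimal sketch of single-point perspective, assuming the center of projection at the origin
and the view plane at z = d, so a point (x, y, z) projects to (x*d/z, y*d/z). This particular
setup is an illustrative assumption, not the notes' own derivation; it shows the foreshortening
effect numerically.

#include <stdio.h>

/* Illustrative sketch: project (x, y, z) onto the view plane z = d,
   with the center of projection at the origin.  Assumes z != 0. */
void perspectiveProject(float x, float y, float z, float d,
                        float *xp, float *yp)
{
    *xp = x * d / z;
    *yp = y * d / z;
}

int main(void)
{
    float xp, yp;
    perspectiveProject(2.0f, 2.0f, 10.0f, 5.0f, &xp, &yp);   /* far point  */
    printf("far : (%.2f, %.2f)\n", xp, yp);                  /* (1.00, 1.00) */
    perspectiveProject(2.0f, 2.0f,  5.0f, 5.0f, &xp, &yp);   /* near point */
    printf("near: (%.2f, %.2f)\n", xp, yp);                  /* (2.00, 2.00) */
    return 0;
}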

Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the projectors
are perpendicular to the view plane, it is called an orthographic projection.
The parallel projection is formed by extending parallel lines from each vertex on the
object until they intersect the plane of the screen. The point of intersection is the projection
of vertex.
Parallel projections are used by architects and engineers for creating working
drawing of the object, for complete representations require two or more views of an
object using different planes.
Isometric Projection: The direction of projection makes equal angles with all three principal
axes; the receding axes are generally drawn at an angle of 30°.
Dimetric: The direction of projection makes equal angles with two of the principal axes.
Trimetric: The direction of projection makes unequal angles with the principal axes.
Cavalier: All lines perpendicular to the projection plane are projected with no change in
length.
Cabinet: All lines perpendicular to the projection plane are projected to one half of their
length. This gives a more realistic appearance of the object.
UNIT-V
Visible surface (Or) Hidden line surface detection method:

When we view a picture containing non-transparent objects and surfaces, we cannot

see those objects that lie behind other objects closer to the eye.

We must remove these hidden surfaces to get a realistic screen image. The
identification and removal of these surfaces is called Hidden-surface problem.

There are two approaches for removing hidden surface problems − Object-Space
method and Image-space method.

The Object-space method is implemented in physical coordinate system.

Image-space method is implemented in screen coordinate system.

When we want to display a 3D object on a 2D screen, we need to identify those parts


of a screen that are visible from a chosen viewing position.

Algorithms used for hidden line surface detection

1. Back Face Removal Algorithm


2. Z-Buffer Algorithm
3. Painter Algorithm
4. Scan Line Algorithm
5. Subdivision Algorithm
6. Floating horizon Algorithm

Depth Buffer (Z-Buffer) Method

This method was developed by Catmull. It is an image-space approach. The basic


idea is to test the Z-depth of each surface to determine the closest (visible) surface.

In this method each surface is processed separately one pixel position at a time across
the surface. The depth values for a pixel are compared and the closest (smallest z) surface
determines the color to be displayed in the frame buffer.

It is applied very efficiently on surfaces of polygon. Surfaces can be processed in any


order. To override the closer polygons from the far ones, two buffers named frame buffer
and depth buffer, are used.

Depth buffer is used to store depth values for (x, y) position, as surfaces are
processed (0 ≤ depth ≤ 1).

The frame buffer is used to store the intensity value of color value at each position
(x, y).
The z-coordinates are usually normalized to the range [0, 1]. The value 0 for the
z-coordinate indicates the back clipping plane and the value 1 indicates the front
clipping plane.

Algorithm

For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y] to a background
value.

For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of a
polygon when projected onto the screen. For each of these pixels:

(a) Calculate the depth z of the polygon at (x, y)

(b) If z < depth [x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth [x, y] to z and intensity [x, y] to a value
corresponding to polygon's shading. If instead z > depth [x, y], the polygon already
recorded at (x, y) lies closer to the observer than does this new polygon, and no action
is taken.

3. After all, polygons have been processed; the intensity array will contain the
solution.

4. The depth buffer algorithm illustrates several features common to all hidden
surface algorithms.

5. First, it requires a representation of all opaque surfaces in the scene, polygons in this case.

6. These polygons may be faces of polyhedra recorded in the model of the scene, or may simply
represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen coordinate system. Before
step 1, all polygons in the scene are transformed into a screen coordinate system using matrix
multiplication.

Algorithm

Step-1 − Set the buffer values −

Depthbuffer (x, y) = 0

Framebuffer (x, y) = background color

Step-2 − Process each polygon (One at a time)

For each projected (x, y) pixel position of a polygon, calculate depth z.

If Z >depthbuffer (x, y)

Compute surface color,

set depthbuffer (x, y) = z,

framebuffer (x, y) = surfacecolor (x, y)
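A minimal sketch of the buffer update in Step 2, assuming the depth value z has already been
computed for each projected pixel and using this version's convention that a larger z means a
closer surface; the buffer sizes and the Color type are illustrative assumptions:

#define WIDTH  640
#define HEIGHT 480

typedef struct { unsigned char r, g, b; } Color;   /* illustrative pixel type */

static float depthBuffer[HEIGHT][WIDTH];
static Color frameBuffer[HEIGHT][WIDTH];

/* Step 1: clear both buffers. */
void clearBuffers(Color background)
{
    int x, y;
    for (y = 0; y < HEIGHT; y++)
        for (x = 0; x < WIDTH; x++) {
            depthBuffer[y][x] = 0.0f;        /* farthest possible depth */
            frameBuffer[y][x] = background;  /* background color        */
        }
}

/* Step 2: test one projected pixel of a surface against the buffers. */
void plotIfVisible(int x, int y, float z, Color surfaceColor)
{
    if (z > depthBuffer[y][x]) {    /* closer than what is stored */
        depthBuffer[y][x] = z;
        frameBuffer[y][x] = surfaceColor;
    }
}

int main(void)
{
    Color bg = {0, 0, 0}, red = {255, 0, 0};
    clearBuffers(bg);
    plotIfVisible(100, 100, 0.5f, red);   /* accepted: 0.5 > 0.0        */
    plotIfVisible(100, 100, 0.2f, bg);    /* rejected: it is farther away */
    return 0;
}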

Advantages

 It is easy to implement.
 It reduces the speed problem if implemented in hardware.
 It processes one object at a time.

Disadvantages

 It requires large memory.


 It is time consuming process.

Back Face detection

A fast and simple object-space method for identifying the back faces of a
polyhedron is based on the "inside-outside" tests.

A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

Ax + By + Cz + D < 0

When an inside point is along the line of sight to the surface, the polygon must be a back
face (we are inside that face and cannot see the front of it from our viewing position).

We can simplify this test by considering the normal vector N to a polygon surface,
which has Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera")
position, then this polygon is a back face if

V.N > 0

Furthermore, if object descriptions are converted to projection coordinates and your


viewing direction is parallel to the viewing z-axis, then −

V = (0, 0, Vz) and V.N = VZC

So that we only need to consider the sign of C the component of the normal vector N.

In a right-handed viewing system with viewing direction along the negative Zv
axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z
component C = 0, since your viewing direction is towards that polygon. Thus, in general, we
can label any polygon as a back face if its normal vector has a z component value −

C <= 0

Similar methods can be used in packages that employ a left-handed viewing system.
In these packages, plane parameters A, B, C and D can be calculated from polygon vertex
coordinates specified in a clockwise direction (unlike the counterclockwisedirection used in
a right-handed system).

Also, back faces have normal vectors that point away from the viewing position and
are identified by C >= 0 when the viewing direction is along the positive Zv axis. By
examining parameter C for the different planes defining an object, we can immediately
identify all the back faces.
N1 = (v2 - v1) x (v3 - v2)
If N1 . P >= 0, the surface is visible
If N1 . P < 0, the surface is invisible

Advantage
1. It is a simple and straight forward method.
2. It reduces the size of the database, because there is no need to store all surfaces in the
database; only the visible surfaces are stored.

Back Face Removed Algorithm

Repeat for all polygons in the scene.

1. Do numbering of all polygons in clockwise direction i.e.


v1 v2 v3.....vz
2. Calculate normal vector i.e. N1
N1=(v2-v1 )*(v3-v2)
3. Consider projector P, it is projection from any vertex
Calculate dot product
Dot=N.P
4. Test and plot whether the surface is visible or not.
If Dot ≥ 0 then
surface is visible
else
Not visible
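A minimal sketch of the test, using the cross product of two edge vectors for the normal and
the V.N > 0 back-face condition stated earlier; the Vec3 type and the example vertices are
illustrative assumptions:

#include <stdio.h>

typedef struct { float x, y, z; } Vec3;   /* illustrative vector type */

Vec3 sub(Vec3 a, Vec3 b)
{
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

/* Cross product: gives the polygon normal from two edge vectors. */
Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Returns 1 if the face v1, v2, v3 is a back face for viewing vector V. */
int isBackFace(Vec3 v1, Vec3 v2, Vec3 v3, Vec3 V)
{
    Vec3 N = cross(sub(v2, v1), sub(v3, v2));
    return dot(V, N) > 0.0f;          /* V.N > 0  means back face */
}

int main(void)
{
    Vec3 v1 = {0,0,0}, v2 = {1,0,0}, v3 = {0,1,0};   /* normal points along +z */
    Vec3 view = {0, 0, -1};                          /* looking along -z       */
    printf("back face? %d\n", isBackFace(v1, v2, v3, view));   /* 0: visible */
    return 0;
}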

DEPTH-SORT ALGORITHM OR PAINTER ALGORITHM:

It came under the category of list priority algorithm. It is also called a depth-sort
algorithm. In this algorithm an ordering of the objects by visibility is done. If objects are
rendered in a particular order, then a correct picture results.
Objects are arranged in order of their z coordinate (distance from the view plane) and
rendering is done in that order, from farthest to nearest, so nearer objects will obscure
farther ones: pixels of nearer objects overwrite the pixels of farther objects. If the z values
of two objects overlap, we can determine the correct order from the z value as shown in fig (a).

If z objects overlap each other as in fig (b) this correct order can be maintained by
splitting of objects.

The depth-sort algorithm or painter algorithm was developed by Newell and Sancha. It is

called the painter algorithm because the painting of the frame buffer is done in decreasing
order of distance.

The distance is measured from the view plane. The polygons at greater distance are painted first.

The concept is taken from the way a painter or artist works. When the painter makes a
painting, first of all he paints the entire canvas with the background color. Then more distant
objects like mountains and trees are added, and then nearer or foreground objects are added
to the picture. We use a similar approach: we sort surfaces according to their z values.

The z values are stored in the refresh buffer.

Steps performed in-depth sort


1. Sort all polygons according to z coordinate.
2. Find ambiguities of any, find whether z coordinate overlap, split polygon if necessary.
3. Scan convert each polygon in increasing order of z coordinate.

Painter Algorithm

Step1: Start Algorithm

Step2: Sort all polygons by z value keep the largest value of z first.

Step3: Scan converts polygons in this order.


Test is applied
1. Is A behind and non-overlapping B in the dimension of z, as shown in fig (a)?
2. Is A behind B in z with no overlap in x or y, as shown in fig (b)?
3. Is A behind B in z and totally outside B with respect to the view plane, as shown in fig (c)?
4. Is A behind B in z and B totally inside A with respect to the view plane, as shown in fig (d)?

The success of any one of these tests with a single overlapping polygon allows the polygon to
be painted.
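As a minimal sketch of the depth-sort step only (the overlap tests above are ignored), polygons
can be ordered by a representative depth with qsort and painted back to front; the Polygon
structure and the paint routine are illustrative placeholders, not the notes' own code:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int   id;
    float depth;   /* representative z value; larger means farther here */
} Polygon;

/* Sort so the largest depth (farthest polygon) comes first. */
int byDepthDescending(const void *a, const void *b)
{
    float da = ((const Polygon *)a)->depth;
    float db = ((const Polygon *)b)->depth;
    return (da < db) - (da > db);
}

void paintPolygon(const Polygon *p)      /* placeholder for scan conversion */
{
    printf("painting polygon %d (depth %.1f)\n", p->id, p->depth);
}

int main(void)
{
    Polygon scene[3] = { {1, 2.0f}, {2, 9.0f}, {3, 5.0f} };
    int i;

    qsort(scene, 3, sizeof(Polygon), byDepthDescending);
    for (i = 0; i < 3; i++)              /* back to front: 2, 3, 1 */
        paintPolygon(&scene[i]);
    return 0;
}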

Scan Line Algorithm

It is an image space algorithm. It processes one line at a time rather than one pixel at
a time.

It uses the concept area of coherence. This algorithm records edge list, active edge
list. So accurate bookkeeping is necessary. The edge list or edge table contains the coordinate
of two endpoints.

Active Edge List (AEL) contain edges a given scan line intersects during its sweep.
The active edge list (AEL) should be sorted in increasing order of x. The AEL is dynamic,
growing and shrinking.

In order to compute one scan line of depth values, we must group and process all
polygons intersecting a given scan line at the same time, before processing the next scan line.
Two important tables, the edge table and the polygon table, are maintained for this.

The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse
slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other
surface data, and may be pointers to the edge table.

SCAN LINE ENTRIES (for the example figure)
L1: AB, BC, EH, FG
L2: AD, EH, BC, FG
L3: AD, EH, BC, FG

The scan-line method can deal with multiple surfaces. As each scan line is processed, the
line may intersect many surfaces. A depth calculation is done for each intersected surface,
and the surface nearest to the view plane is identified as visible. Once the visibility of a
surface is determined, its intensity value is entered into the refresh buffer.

Algorithm

Step1: Start algorithm

Step2: Initialize the desired data structure


1. Create a polygon table containing color, edge pointers, and plane coefficients.
2. Establish an edge table containing the endpoints of each edge, a pointer to the owning
polygon, and the inverse slope.
3. Create the active edge list. This will be sorted in increasing order of x.
4. Create a flag F for each surface. It will have two values, either on or off.

Step3: Perform the following steps for all scan lines

1. Enter the edges for the current scan line into the active edge list (AEL), in sorted order,
using y to select them.
2. While no surface flag is on, fill the scan line with the background color.
3. When exactly one polygon flag is on, say for surface S1, enter its color intensity I1 into
the refresh buffer.
4. When two or more surface flags are on, sort the surfaces according to depth and use the
intensity value of the nearest surface, i.e. the one with the least z depth value.
5. Use the concept of coherence for the remaining portions of the scan line.

Step4: Stop Algorithm
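
The central per-pixel decision of the scan-line method, choosing the nearest of the surfaces covering a pixel on the current line, can be sketched as follows. The span representation used here is a hypothetical simplification: a full implementation maintains the edge table, polygon table, and active edge list described above and updates them incrementally from line to line.

# Scan-line visibility sketch: each span is assumed to be a tuple
# (x_start, x_end, depth_at, intensity), where depth_at(x) gives the depth of
# that surface at column x on the current scan line.

BACKGROUND = 0

def scan_one_line(spans, width):
    line = [BACKGROUND] * width
    for x in range(width):
        best_depth = float('inf')            # smaller depth = nearer to the view plane
        for x_start, x_end, depth_at, intensity in spans:
            if x_start <= x <= x_end:
                d = depth_at(x)
                if d < best_depth:            # keep the surface nearest the view plane
                    best_depth = d
                    line[x] = intensity
    return line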

Area Subdivision Algorithm

It was invented by John Warnock and is also called the Warnock algorithm. It is based
on a divide-and-conquer method and uses the principle of area coherence.

It is used to resolve visibility. It classifies each display area into one of two cases,
trivial or non-trivial.

Trivial cases are handled easily. Non-trivial cases are divided into four equal
subwindows, and the subwindows are further subdivided recursively until every area can be
classified as a trivial case.
Classification Scheme

It classifies polygons into four categories:

1. Inside surface
2. Outside surface
3. Overlapping surface
4. Surrounding surface

Area-Subdivision Method

Divide the total viewing area into smaller and smaller rectangles until each small area
is the projection of part of a single visible surface or no surface at all.

Continue this process until the subdivisions are easily analyzed as belonging to a single
surface or until they are reduced to the size of a single pixel. An easy way to do this is to
successively divide the area into four equal parts at each step. There are four possible
relationships that a surface can have with a specified area boundary.

 Surrounding surface − One that completely encloses the area.


 Overlapping surface − One that is partly inside and partly outside the area.
 Inside surface − One that is completely inside the area.
 Outside surface − One that is completely outside the area.

The tests for determining surface visibility within an area can be stated in terms of these
four classifications. No further subdivisions of a specified area are needed if one of the
following conditions is true −

 All surfaces are outside surfaces with respect to the area.


 Only one inside, overlapping or surrounding surface is in the area.
 A surrounding surface obscures all other surfaces within the area boundaries.
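
A minimal recursive sketch of this subdivision is given below. The helpers classify(surface, area) and render(surfaces, area) are assumed, not part of any standard library: classify returns one of the four categories above for a surface relative to a rectangular area (x, y, w, h), and render resolves a trivial area (for example by picking the nearest surface). The surrounding-surface check here is a simplification and omits the extra depth test required by the third condition above.

# Warnock-style area subdivision: a sketch under the stated assumptions.

def warnock(surfaces, area, classify, render, min_size=1):
    relevant = [s for s in surfaces if classify(s, area) != 'outside']
    x, y, w, h = area

    # Trivial cases: nothing visible, a single surface, a surrounding surface,
    # or the area has shrunk to pixel size.
    if not relevant:
        return
    if (len(relevant) == 1
            or any(classify(s, area) == 'surrounding' for s in relevant)
            or (w <= min_size and h <= min_size)):
        render(relevant, area)
        return

    # Non-trivial case: split into four equal subwindows and recurse.
    hw, hh = w / 2, h / 2
    for sub in ((x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)):
        warnock(relevant, sub, classify, render, min_size)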
A-Buffer Method

The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection
method developed at Lucasfilm Studios for the rendering system Renders Everything You Ever
Saw (REYES).

The A-buffer expands on the depth buffer method to allow transparencies. The key
data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields −

 Depth field − It stores a positive or negative real number


 Intensity field − It stores surface-intensity information or a pointer value

If depth >= 0, the number stored at that position is the depth of a single surface
overlapping the corresponding pixel area. The intensity field then stores the RGB
components of the surface color at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The
intensity field then stores a pointer to a linked list of surface data. The surface buffer in the
A-buffer includes −

 RGB intensity components


 Opacity Parameter
 Depth
 Percent of area coverage
 Surface identifier

The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values
are used to determine the final color of a pixel.
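
A rough sketch of an A-buffer cell with the fields listed above is shown below. The field names and the use of Python dataclasses are illustrative assumptions, not taken from any particular implementation; the sign of the depth field distinguishes the single-surface case from the multiple-surface case.

# A-buffer cell sketch: illustrative field layout only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SurfaceFragment:
    rgb: Tuple[float, float, float]     # RGB intensity components
    opacity: float                      # opacity parameter
    depth: float                        # depth of this fragment
    coverage: float                     # percent of pixel area covered
    surface_id: int                     # surface identifier

@dataclass
class ABufferCell:
    depth: float = 1.0e30               # >= 0: depth of a single surface; < 0: multiple surfaces
    rgb: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    fragments: List[SurfaceFragment] = field(default_factory=list)

    def add_fragment(self, frag: SurfaceFragment) -> None:
        if not self.fragments:
            # First surface covering this pixel: store its depth and color directly.
            self.depth = frag.depth
            self.rgb = frag.rgb
        else:
            # Multiple contributions: flag with a negative depth and keep a fragment list.
            self.depth = -1.0
        self.fragments.append(frag)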

Depth Sorting Method

Depth sorting method uses both image space and object-space operations. The depth-
sorting method performs two basic functions −

 First, the surfaces are sorted in order of decreasing depth.


 Second, the surfaces are scan-converted in order, starting with the surface of greatest
depth.

The scan conversion of the polygon surfaces is performed in image space. This method for
solving the hidden-surface problem is often referred to as the painter's algorithm. The
following figure shows the effect of depth sorting −

The algorithm begins by sorting by depth. For example, the initial “depth” estimate of a
polygon may be taken to be the closest z value of any vertex of the polygon.

Let us take the polygon P at the end of the list. Consider all polygons Q whose z-extents
overlap P’s. Before drawing P, we make the following tests. If any of the following tests is
positive, then we can assume P can be drawn before Q.

 Do the x-extents not overlap?


 Do the y-extents not overlap?
 Is P entirely on the opposite side of Q’s plane from the viewpoint?
 Is Q entirely on the same side of P’s plane as the viewpoint?
 Do the projections of the polygons not overlap?

If all the tests fail, then we split either P or Q using the plane of the other. The new cut
polygons are inserted into the depth order and the process continues. Theoretically, this
partitioning could generate O(n²) individual polygons, but in practice the number of
polygons is much smaller.
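
The extent checks in the first two tests can be sketched as follows. The polygon representation is a hypothetical simplification (a list of projected (x, y, z) vertices), and the remaining plane-side and projection-overlap tests are omitted for brevity.

# First two Newell tests (extent checks): a sketch under the stated assumptions.

def extents_disjoint(p, q, axis):
    # True if P and Q do not overlap along the given axis (0 = x, 1 = y).
    p_vals = [v[axis] for v in p]
    q_vals = [v[axis] for v in q]
    return max(p_vals) < min(q_vals) or max(q_vals) < min(p_vals)

def can_draw_p_before_q(p, q):
    if extents_disjoint(p, q, 0):       # test 1: x-extents do not overlap
        return True
    if extents_disjoint(p, q, 1):       # test 2: y-extents do not overlap
        return True
    # The plane-side tests and the projection-overlap test would follow here.
    return False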

Binary Space Partition (BSP) Trees

Binary space partitioning is used to calculate visibility.

To build the BSP trees, one should start with polygons and label all the edges.
Dealing with only one edge at a time, extend each edge so that it splits the plane in two. Place
the first edge in the tree as root.

Add subsequent edges based on whether they lie in front of or behind the edges already
in the tree. Edges that span the extension of an edge that is already in the tree are split
into two, and both pieces are added to the tree.

 From the above figure, first take A as a root.


 Make a list of all nodes in figure (a).
 Put all the nodes that are in front of root A to the left side of node A and put all those
nodes that are behind the root A to the right side as shown in figure (b).
 Process all the front nodes first and then the nodes at the back.
 As shown in figure (c), we will first process the node B. As there is nothing in front of
the node B, we have put NIL. However, we have node C at back of node B, so node C
will go to the right side of node B.
 Repeat the same process for the node D.
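
A simplified sketch of building such a tree is given below. For brevity the scene elements are treated as opaque segment objects, and the helper classify_front_back(segment, splitter) is assumed: it returns the pieces of the segment that lie in front of and behind the splitter, splitting the segment in two when it spans the splitter's extension.

# BSP tree construction: a sketch under the stated assumptions.

class BSPNode:
    def __init__(self, splitter, front=None, back=None):
        self.splitter = splitter    # the edge chosen as the root of this subtree
        self.front = front          # subtree of edges in front of the splitter
        self.back = back            # subtree of edges behind the splitter

def build_bsp(segments, classify_front_back):
    if not segments:
        return None
    splitter, rest = segments[0], segments[1:]
    front, back = [], []
    for seg in rest:
        front_pieces, back_pieces = classify_front_back(seg, splitter)
        front.extend(front_pieces)
        back.extend(back_pieces)
    return BSPNode(splitter,
                   front=build_bsp(front, classify_front_back),
                   back=build_bsp(back, classify_front_back))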
