Mod 5

3D viewing is the process of creating a 2D representation of a 3D scene by projecting it onto a screen, allowing for various camera positions and projection methods. Key concepts include modeling coordinates, world coordinates, and viewing coordinates, with projection methods like parallel and perspective projection used to define how the scene is viewed. The document also discusses OpenGL functions for setting up 3D viewing, the depth buffer method for hidden surface removal, and the importance of perspective projection for realistic depth perception.
What Is 3D Viewing?

3D viewing is the process of generating a 2D picture of a 3D scene so that it can be shown on a screen. Just like taking a photograph, we choose a viewpoint (camera position) and project the 3D scene onto a 2D plane (the screen). But in computer graphics we have much more control over the scene than in real photography: we can pick any viewing direction, zoom, projection method, and even look "through" objects.

Main Concepts in 3D Viewing:
1. Modeling Coordinates - Each object (like a cube, chair, or car) is created in its own local coordinate system.
2. World Coordinates - All objects are placed together in a global scene using transformations.
3. Viewing Coordinates - The entire world is viewed from a specific "camera" position using a viewing reference frame, defined by:
   - Eye position (camera) - where you are looking from
   - View direction - which direction the camera points
   - View-up vector - what direction is "up" in your view

Projection Methods:
After setting up the view, the 3D scene is projected onto a 2D view plane using either:
1. Parallel Projection - Lines are projected in parallel.
   - No depth effect; used in engineering and architectural designs.
   - Maintains actual size and shape.
2. Perspective Projection - Lines converge at a point.
   - Creates realistic depth (distant objects appear smaller).
   - Mimics human vision and camera images.

View Volume and Clipping:
The visible area of the scene is enclosed in a view volume, which depends on the projection type:
- Parallel projection: the view volume is a box.
- Perspective projection: the view volume is a frustum (like a pyramid with the top cut off).
Objects outside this volume are clipped (not shown) to keep the display clean and efficient.

Final Steps in the Pipeline:
After projection:
- The scene is converted to normalized coordinates (from -1 to +1).
- Then it is mapped to device (screen) coordinates.
- Depth information is used to display only visible surfaces.
- Optional: surface rendering is applied for realism using lighting and shading.

What Are OpenGL 3D Viewing Functions?

OpenGL 3D viewing functions help you set up the camera (eye position), choose the type of projection (orthographic or perspective), and define what part of the 3D world should be visible on the screen. These functions are important for simulating how we see 3D scenes in real life.

Main OpenGL 3D Viewing Functions

1. gluLookAt() - Defines the Camera View
   gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ);
   - eyeX, eyeY, eyeZ - camera position (where the eye is)
   - centerX, centerY, centerZ - the point to look at
   - upX, upY, upZ - the direction considered as "up"
   This function defines the viewing coordinate system: the direction and angle from which the scene is viewed.

2. glOrtho() - Orthographic Projection
   glOrtho(left, right, bottom, top, near, far);
   - Sets a parallel (orthographic) projection.
   - Objects appear the same size, regardless of distance.
   - Commonly used in engineering drawings and CAD.
   Good for when accurate size and shape are more important than realism.

3. gluPerspective() - Perspective Projection
   gluPerspective(fovy, aspect, near, far);
   - fovy - field of view in the y-direction (in degrees)
   - aspect - aspect ratio (width/height)
   - near, far - distances to the near and far clipping planes
   Creates a realistic 3D effect, where distant objects appear smaller, as in photography or the human eye.

4. glFrustum() - Custom Perspective Projection
   glFrustum(left, right, bottom, top, near, far);
   - Similar to gluPerspective(), but gives more control.
   - Lets you define an asymmetric or off-center view.
   - Useful for camera zooming, shifting, or simulating special lenses.

5. glClipPlane() - Additional Clipping Planes
   glClipPlane(plane, equation);
   - Defines a custom plane that cuts away part of the scene.
   - Used to create sectional views or hide objects.

How to Use These in Order (a short code sketch follows the summary table below):
1. Set the matrix mode: glMatrixMode(GL_PROJECTION);
2. Reset the matrix: glLoadIdentity();
3. Apply the projection (glOrtho, gluPerspective, or glFrustum).
4. Switch to modelview mode: glMatrixMode(GL_MODELVIEW);
5. Set the view with gluLookAt() (this matches the program at the end of these notes).

Quick Summary Table:

Function          Purpose
gluLookAt()       Defines camera position and direction
glOrtho()         Sets parallel projection
gluPerspective()  Sets realistic perspective projection
glFrustum()       Sets custom perspective projection
glClipPlane()     Adds extra clipping planes
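The following minimal sketch is not from the textbook; it simply strings the calls above together in the order listed, with illustrative values for the window aspect ratio, clipping distances, and camera position.

    glMatrixMode(GL_PROJECTION);                      // 1. select the projection matrix
    glLoadIdentity();                                 // 2. reset it
    gluPerspective(60.0, 800.0 / 600.0, 1.0, 100.0);  // 3. apply a perspective projection
    glMatrixMode(GL_MODELVIEW);                       // 4. switch to modelview
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,                          // 5. eye position
              0.0, 0.0, 0.0,                          //    point to look at
              0.0, 1.0, 0.0);                         //    "up" direction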
Here is a clear and structured explanation of the 3D Viewing Pipeline and the Transformation from World to Viewing Coordinates, based entirely on your textbook content.

3D Viewing Pipeline

The 3D viewing pipeline is the process of transforming a 3D scene into a 2D image on a screen. It involves a series of coordinate transformations, projections, clipping, and mapping steps.

Stages of the Pipeline:

Modeling Coordinates (MC)
  | Modeling Transformation
World Coordinates (WC)
  | Viewing Transformation
Viewing Coordinates (VC)
  | Projection Transformation
Projection Coordinates (PC)
  | Normalization Transformation and Clipping
Normalized Coordinates (NC)
  | Viewport Transformation
Device Coordinates (DC)

This flow is shown in your textbook as Figure 10-6 on page 304.

Transformation from World to Viewing Coordinates

To define a specific camera view, we transform the scene from world coordinates to viewing coordinates using the following steps.

1. Define the viewing coordinate system:
   - P0 = (x0, y0, z0): view reference point (VRP), the camera position.
   - n vector: view-plane normal (VPN), the viewing direction.
   - v vector: view-up vector (VUP), defines the upward direction.
   - u vector: perpendicular to both n and v; completes the right-handed system.
   These three vectors form the uvn coordinate system (a right-handed system), as shown in Figure 12.

2. Translation matrix T: moves the world-coordinate origin to the viewing position P0:

   T = | 1  0  0  -x0 |
       | 0  1  0  -y0 |
       | 0  0  1  -z0 |
       | 0  0  0    1 |

3. Rotation matrix R: aligns the uvn frame with the world axes:

   R = | ux  uy  uz  0 |
       | vx  vy  vz  0 |
       | nx  ny  nz  0 |
       |  0   0   0  1 |

4. Final viewing transformation matrix:

   M(WC,VC) = R * T

This transforms world-coordinate points into the viewing-coordinate system.
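As a sketch only (not from the textbook), the composite matrix R * T can be built in code from P0, the view-plane normal N, and the view-up vector Vup; the struct and helper names below are made up for illustration.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 normalize(Vec3 a) {
        float len = sqrtf(a.x*a.x + a.y*a.y + a.z*a.z);
        Vec3 r = { a.x/len, a.y/len, a.z/len };
        return r;
    }

    static Vec3 cross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y*b.z - a.z*b.y,
                   a.z*b.x - a.x*b.z,
                   a.x*b.y - a.y*b.x };
        return r;
    }

    /* Fills m (row-major 4x4) with the composite matrix M(WC,VC) = R * T. */
    void worldToViewing(Vec3 P0, Vec3 N, Vec3 Vup, float m[4][4]) {
        Vec3 n = normalize(N);               /* viewing z-axis (view-plane normal) */
        Vec3 u = normalize(cross(Vup, n));   /* viewing x-axis                     */
        Vec3 v = cross(n, u);                /* viewing y-axis                     */
        Vec3 axes[3] = { u, v, n };

        for (int i = 0; i < 3; ++i) {
            m[i][0] = axes[i].x;  m[i][1] = axes[i].y;  m[i][2] = axes[i].z;
            /* Applying R after the translation by -P0 folds T into this column. */
            m[i][3] = -(axes[i].x*P0.x + axes[i].y*P0.y + axes[i].z*P0.z);
        }
        m[3][0] = 0.0f;  m[3][1] = 0.0f;  m[3][2] = 0.0f;  m[3][3] = 1.0f;
    }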
What Happens After?

Once the scene is in viewing coordinates, the next steps are:
- Apply the projection (parallel or perspective).
- Normalize the coordinates.
- Clip anything outside the view volume.
- Map to the screen using the viewport transformation.

Summary:

Stage                     Purpose
World to Viewing          Set up the camera (position, direction, up vector)
Projection                Flatten the 3D scene into 2D
Normalization             Fit the view into a unit cube for clipping
Clipping                  Remove parts outside the view volume
Viewport transformation   Convert to device/screen coordinates

What Is an Orthogonal Projection?

An orthogonal projection (also called orthographic projection) is a type of parallel projection where all projection lines are perpendicular to the view plane. It is mainly used in engineering and architectural drawings because it preserves the true shape, lengths, and angles of objects.

Key Concepts in Orthogonal Projection (with 3D Viewing):
1. Projection lines: all lines are parallel and perpendicular to the view plane.
2. No depth effect: objects farther away appear the same size as closer ones.
3. Projection coordinates: for a 3D point (x, y, z), the orthogonal projection onto the view plane gives
   xp = x,  yp = y
   The z-value is discarded, but it may be stored for depth and visibility calculations.

Types of Orthogonal Views:
1. Front, side, and top views:
   - Front view: looks straight at the object.
   - Side view: shows one side of the object.
   - Top view (also called plan view): shows the object from above.
2. Axonometric projection:
   - Shows more than one face at once.
   - It is still an orthogonal projection, but viewed from a tilted angle.
   - Types:
     - Isometric: all three axes are equally foreshortened (very common).
     - Dimetric and trimetric: axes are unequally scaled.

Isometric Projection (Special Case):
- The object or view plane is tilted so that it intersects all three principal axes equally.
- All sides appear in equal proportion.
- Useful for showing 3D shapes in a 2D view without distortion (a code sketch follows the table below).

View Volume:
- Defined by the left, right, bottom, top, near, and far planes.
- Only the objects inside this box are projected and rendered.
- The glOrtho() function is used to set this in OpenGL.

Why Use Orthogonal Projection?
- Keeps actual sizes (no distortion).
- Ideal for technical drawings, blueprints, and CAD designs.
- Easy to calculate and render using matrix multiplication.

Feature            Orthogonal Projection
Projection lines   Perpendicular to the view plane
Depth effect       No (everything appears the same size)
Used for           Engineering, CAD, architecture
Special types      Isometric, axonometric
OpenGL function    glOrtho(left, right, bottom, top, near, far)
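The notes do not give code for this, but one common way to get an isometric-style view in OpenGL is an orthographic view volume combined with the classical isometric angles (about 35.26 degrees around x and 45 degrees around y). The volume bounds and cube size below are illustrative only.

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -2.0, 2.0, -10.0, 10.0);   // parallel view volume (a box)

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(35.264f, 1.0f, 0.0f, 0.0f);   // tilt so the top face becomes visible
    glRotatef(-45.0f,  0.0f, 1.0f, 0.0f);   // turn so two side faces show equally
    glutWireCube(1.0);                      // all three visible faces equally foreshortened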
What Is the Depth Buffer Method?

In 3D computer graphics, when multiple surfaces or objects overlap, we need to decide which one should be visible to the viewer. The depth buffer method (also known as the Z-buffer method) is a technique used to determine visibility, that is, which surface is in front at each pixel. It works by comparing the depth (z-value) of every object at every pixel and displaying the closest one to the viewer.

Concept:
Each pixel on the screen can only display one color: the one that comes from the nearest surface (the surface with the smallest z-value at that pixel). To keep track of which surface is closest at each pixel, the computer uses a depth buffer (Z-buffer).

Buffers Used:
1. Depth buffer (Z-buffer): stores the z-value (depth) of the closest surface for each pixel.
2. Frame buffer: stores the color of the visible surface for each pixel.

Algorithm (Step by Step):
1. Initialization:
   - Set all values in the depth buffer to the maximum depth (e.g., 1.0), which means no surface is close yet.
   - Set all values in the frame buffer to the background color.
2. Process all surfaces:
   - For each surface in the 3D scene:
     - For each pixel (x, y) that the surface covers:
       - Compute the depth (z-value) of the surface at that point.
       - Compare this depth with the current value in the depth buffer:
         - If the new z-value is less than the existing one:
           - Update the depth buffer at (x, y) with the new z-value.
           - Update the frame buffer at (x, y) with the color of that surface.
3. Final result:
   - After processing all surfaces, the frame buffer contains the final image.
   - Each pixel shows the color of the nearest visible surface.

Pseudocode:
1. Initialize depthBuffer(x, y) = 1.0 (farthest possible)
   Initialize frameBuffer(x, y) = background color
2. For each surface in the scene:
     For each pixel (x, y) the surface covers:
       Calculate depth z at (x, y)
       If z < depthBuffer(x, y):
         depthBuffer(x, y) = z
         frameBuffer(x, y) = surface color at (x, y)

Advantages:
- Handles complex 3D scenes.
- Does not require sorting surfaces.
- Easy to implement and supported by graphics hardware (GPU).
- Works well with all types of surfaces: polygons, curved surfaces, etc.

Notes:
- Depth values are usually normalized:
  - z = 0.0 means nearest to the viewer.
  - z = 1.0 means farthest from the viewer.
- This method is automatically supported in OpenGL when depth testing is enabled.

In OpenGL:
    glEnable(GL_DEPTH_TEST);   // Enables the depth buffer test
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // Clears both color and depth buffers

Summary:

Term          Description
Depth buffer  Stores the depth (z) of the closest surface
Frame buffer  Stores the color of the visible surface
Goal          Show only the surface closest to the viewer
Comparison    The new z-value is compared with the current z-buffer value
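The pseudocode above translates almost directly into C. The sketch below is illustrative only: the buffer size and the surfaceDepth/surfaceColor callbacks are made-up stand-ins for whatever scene representation is actually used.

    #define WIDTH  640
    #define HEIGHT 480

    float depthBuffer[HEIGHT][WIDTH];
    int   frameBuffer[HEIGHT][WIDTH];    /* one packed RGB color per pixel */

    void zBuffer(int numSurfaces, int background,
                 float (*surfaceDepth)(int s, int x, int y),   /* depth in [0,1]; >1 if pixel not covered */
                 int   (*surfaceColor)(int s, int x, int y))
    {
        /* Step 1: initialize both buffers. */
        for (int y = 0; y < HEIGHT; ++y)
            for (int x = 0; x < WIDTH; ++x) {
                depthBuffer[y][x] = 1.0f;        /* farthest possible depth */
                frameBuffer[y][x] = background;  /* background color        */
            }

        /* Step 2: visit every surface at every pixel it covers. */
        for (int s = 0; s < numSurfaces; ++s)
            for (int y = 0; y < HEIGHT; ++y)
                for (int x = 0; x < WIDTH; ++x) {
                    float z = surfaceDepth(s, x, y);
                    if (z < depthBuffer[y][x]) {          /* closer than what is stored? */
                        depthBuffer[y][x] = z;            /* keep the new nearest depth  */
                        frameBuffer[y][x] = surfaceColor(s, x, y);
                    }
                }

        /* Step 3: frameBuffer now holds, at each pixel, the color of the
           nearest visible surface. */
    }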
What Is Hidden Surface Removal?

In 3D computer graphics, when multiple objects or surfaces are drawn, some of them may be partially or completely hidden behind others from the viewer's perspective. The process of identifying and removing (not displaying) the surfaces that are not visible to the viewer is called hidden surface removal (also known as hidden face removal or visible surface detection).

Why Is It Needed?
Without hidden surface removal:
- All surfaces would be drawn, even the ones that are behind others.
- This would create a confusing and unrealistic image.
- Correct visibility is essential for generating a realistic 3D scene.

Goals of Hidden Surface Removal:
- Determine which surfaces are visible from the current viewing position.
- Remove or ignore hidden parts so only the visible surfaces are displayed.
- Improve rendering accuracy and performance.

Methods of Hidden Surface Removal:
There are several techniques used in computer graphics for this, including:
1. Depth buffer method (Z-buffer method):
   - Compares depth values at each pixel.
   - Keeps the closest surface.
2. Painter's algorithm:
   - Sorts surfaces from back to front.
   - Draws the farthest ones first, then the closer ones.
3. Back-face culling:
   - Removes surfaces that face away from the viewer.
4. Scan-line method:
   - Works line by line across the screen and determines visibility at each scan line.
5. Binary space partitioning (BSP trees):
   - Divides the scene into regions using a tree structure for efficient rendering.

Example:
If you have a cube placed behind a sphere:
- Without hidden surface removal, both would be fully drawn, and you would see extra unwanted lines.
- With proper hidden surface removal, the part of the cube behind the sphere is removed from the display, showing only the visible parts.

Summary:

Feature          Description
Purpose          Show only visible surfaces, hide hidden ones
Needed for       Realistic and accurate 3D rendering
Improves         Image quality and performance
Common methods   Z-buffer, painter's algorithm, back-face culling, etc.

What Is Perspective Projection?

Perspective projection is a 3D viewing technique where objects that are farther from the viewer appear smaller, and closer objects appear larger. This creates a realistic sense of depth, just like what we see with our eyes or in photographs. Unlike parallel (orthographic) projection, where projection lines are parallel, in perspective projection all projection lines converge at a single point, called the center of projection or projection reference point.

Main Characteristics:
- Depth perception is included.
- Objects are foreshortened based on distance.
- Parallel lines appear to converge at a vanishing point.
- The view volume is a frustum (truncated pyramid shape).

Perspective Projection Diagram:
You can draw a one-point perspective diagram in your notebook as:
- A cube in 3D space.
- Lines drawn from each corner of the cube toward a vanishing point (all projection lines converge there).
- The front face appears larger, and the back face appears smaller.
- The projection lines intersect the view plane between the eye position and the object.

Projection Formula (One-Point Perspective):
Assume the view plane is placed at z = zvp and the center of projection is at the origin (0, 0, 0). Then the projection of a 3D point (x, y, z) onto the 2D view plane gives

   xp = x * (zvp / z),   yp = y * (zvp / z)

This formula makes distant objects (with large z) appear smaller.

Types of Perspective Projections:

Type         Description
One-point    Vanishing point on one axis (usually z)
Two-point    Vanishing points on two axes (e.g., x and z)
Three-point  Vanishing points on the x, y, and z axes

View Volume in Perspective Projection:
- Shaped like a frustum (a cut pyramid).
- Objects outside the near and far clipping planes are not displayed.
- In OpenGL, it is created using functions like gluPerspective() or glFrustum().

Summary:
- Perspective projection mimics real-world vision, with depth and distance.
- Projection lines meet at a vanishing point.
- Commonly used in games, simulations, architecture, and animations.
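As a quick check of the formula above, the following sketch (not from the notes; the type names are illustrative) projects a single point onto a view plane at z = zvp with the center of projection at the origin.

    typedef struct { float x, y, z; } Point3;
    typedef struct { float x, y; }    Point2;

    /* One-point perspective: scale x and y by zvp / z. */
    Point2 perspectiveProject(Point3 p, float zvp)
    {
        Point2 q = { p.x * (zvp / p.z), p.y * (zvp / p.z) };
        return q;   /* a point twice as far away projects to half the offset */
    }

For example, with zvp = 2 the point (4, 4, 8) projects to (1, 1), while the closer point (4, 4, 4) projects to (2, 2), so the farther point ends up nearer the center of the view.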
Orthographic Projection

Orthographic projection is a type of parallel projection where the projection lines are perpendicular to the view plane. It is used to represent the exact shape and size of 3D objects without distortion.
- No depth or perspective effect.
- Commonly used in engineering drawings, CAD, and blueprints.
- It shows only one face (front, top, or side) of the object at a time.

In orthographic projection:
   xp = x,  yp = y   (the z value is ignored)

Axonometric Projection

Axonometric projection is also a type of parallel projection, but the object is tilted so that more than one face is visible. It gives a better understanding of the 3D shape while still keeping sizes measurable (no perspective distortion). It includes three types:
1. Isometric projection - all three axes are equally foreshortened (120 degrees apart).
2. Dimetric projection - two axes have equal angles; one differs.
3. Trimetric projection - all three axes have different angles and scales.
Used in technical illustrations, engineering design, and assembly diagrams.

Difference Between Orthographic and Axonometric Projections:

Feature                   Orthographic Projection                Axonometric Projection
Projection lines          Perpendicular to the view plane        Parallel to the view plane, but the object is tilted
Number of faces visible   Only one face at a time                Two or three faces visible simultaneously
Depth effect              No depth shown                         Depth is shown through foreshortening
Types                     Front, top, and side views             Isometric, dimetric, trimetric
Realism                   Flat and accurate, but not realistic   More realistic view without using perspective
Application               Engineering blueprints, CAD            Technical drawings, exploded views, manuals

Summary:
- Orthographic projection is simpler and used when exact measurements are needed.
- Axonometric projection gives a 3D-like appearance while still using parallel lines.
- Both are types of parallel projection and do not include perspective distortion.

Program using OpenGL and GLUT to draw a polygon (a triangle) and allow the user to press keyboard keys to experiment with perspective viewing.

C++ Program Using OpenGL and GLUT

#include <GL/glut.h>
#include <cstdlib>   // for exit()

float eyeX = 0.0, eyeY = 0.0, eyeZ = 5.0;          // Camera (eye) position
float centerX = 0.0, centerY = 0.0, centerZ = 0.0; // Look-at point

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    // Set up the viewing transformation
    gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, 0.0, 1.0, 0.0);

    // Draw a polygon (triangle)
    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.0, 0.0);    // Red
        glVertex3f(-1.0, -1.0, 0.0);
        glColor3f(0.0, 1.0, 0.0);    // Green
        glVertex3f(1.0, -1.0, 0.0);
        glColor3f(0.0, 0.0, 1.0);    // Blue
        glVertex3f(0.0, 1.0, 0.0);
    glEnd();

    glutSwapBuffers();
}

void reshape(int w, int h) {
    if (h == 0) h = 1;               // avoid division by zero
    float aspect = (float)w / h;

    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
}
forward case ‘w': eyeZ -= case 's': eyeZ += 0.5; break; // backward case 'a': eyeX -= 0.5; break; // left case 5; break; If right eyeX += © scanned with OKEN Scanner case ‘g eye += 0.5; break; // UP case 'e' eye -= 0.5; break; // down €a8e 27: exit(0); // ESC to exit } glutPostRedisplay(); // Redraw after camera move int main(int arge, char** argv) { glutinit(&arge, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH),; glutinitWindowSize(800, 600); glutCreateWindow("Perspective Viewing with Camera Movement’); glEnable(GL_DEPTH_TEST); glutDisplayFunc(display); glutReshapeFunc(reshape); glutKeyboardFunc(keyboard); glutMainLoop(); return 0; © scanned with OKEN Scanner
