CG Project
Computer graphics is the art of drawing pictures, lines, charts, etc. using computers with the help of
programming. A computer graphics image is made up of a number of pixels; a pixel is the smallest addressable
graphical unit represented on the computer screen. Computer graphics is a field of computing that enables the
creation, manipulation, and representation of visual images using computers. It encompasses a wide range of
techniques and technologies that have revolutionized the way we interact with digital information and visual
media.
The importance of computer graphics extends across various domains, including entertainment (movies,
video games, and virtual reality), design (architecture, industrial design, and graphic design), simulation
(flight simulators, medical simulations), education (interactive learning tools, digital textbooks), and
scientific visualization (data analysis, molecular modeling).
Developed by Silicon Graphics Inc. (SGI) in 1992, OpenGL was designed to provide a standardized interface
for hardware-accelerated 2D and 3D graphics. Over the years, it has become an industry-standard API and is
supported by a wide range of platforms, including Windows, macOS, Linux, and mobile platforms such as
Android and iOS.
OpenGL continues to evolve with advancements in graphics hardware and software technologies. Recent
developments focus on enhancing support for modern rendering techniques, improving compatibility with
new hardware architectures, and integrating with emerging technologies such as VR, AR, and real-time ray
tracing. OpenGL remains a fundamental tool for developers and graphics professionals.
Table of Contents
Abstract
1. Introduction
1.1 Introduction to Computer Graphics
1.2 Introduction to OpenGL
1.3 Introduction to OpenCV
1.4 Introduction to Project
2. Inbuilt Functions in Computer Graphics
2.1 OpenGL
2.2 OpenCV
3. Requirement Specification
3.1 Hardware Requirements
3.2 Software Requirements
4. Implementation
5. Snapshots
Conclusion
Future Work
References
List of Figures
1. Build an Application
2. Original image
3. Grayscale image
5. Edge image
6. Color image
7. Cartoon image
8. Final output
9. Home page
10. Final Result
Chapter 1
INTRODUCTION
• Computer Graphics is the creation and manipulation of images or pictures with the help of computers.
• Graphics can be two- or three-dimensional.
• There are two types of computer graphics: 1) Passive Computer Graphics (Non-interactive Computer
Graphics) and 2) Active Computer Graphics (Interactive Computer Graphics).
• The major product of computer graphics is a picture. With the help of CG, pictures can be
represented in 2D and 3D space. Many applications show various parts of the displayed picture
changing in size and orientation. Such transformations, i.e. making pictures grow, shrink, or
rotate, can be achieved through CG.
• The picture to be displayed is often too big to be shown in its entirety. Thus, with the help of CG, a
technique called clipping can be used to select just those parts of the picture that lie on the screen
and to discard the rest.
• Animated films use many of the same techniques that are used for visual effects, but without
necessarily aiming for images that look real. CAD/CAM stands for computer-aided design and
computer-aided manufacturing. These fields use computer technology to design parts and products
on the computer and then, using these virtual designs, to guide the manufacturing process. For
example, many mechanical parts are designed in a 3D computer modeling package and then
automatically produced on a computer-controlled milling device.
Key Concepts
1. Raster Graphics:
o Pixels: The smallest unit of a digital image, usually arranged in a grid to form an image.
o Resolution: The number of pixels in an image, typically measured in width x height.
o Color Models: Methods for representing colors in digital images, such as RGB (Red, Green,
Blue) and CMYK (Cyan, Magenta, Yellow, Black). A short code sketch after this list illustrates these raster terms.
2. Vector Graphics:
o Primitives: Basic geometric shapes like points, lines, curves, and polygons.
o Scalability: Vector graphics can be resized without losing quality because they are defined
mathematically.
3. 3D Graphics:
o Vertices and Edges: Basic elements used to define 3D objects.
o Meshes: Collections of vertices, edges, and faces that define the shape of a 3D object.
o Transformations: Operations such as translation, rotation, and scaling applied to 3D objects.
4. Rendering:
o Rasterization: The process of converting vector graphics into raster images.
o Ray Tracing: A rendering technique that simulates the way light interacts with objects to
produce highly realistic images.
o Shading Models: Methods for calculating the color of surfaces based on light sources, such
as Phong shading and Gouraud shading.
5. Animation:
o Keyframing: Creating animations by specifying the start and end points, with the system
interpolating the frames in between.
o Rigging and Skinning: Techniques for animating complex models, especially characters.
6. Graphics Hardware:
o Shaders: Programs that run on the GPU to perform custom rendering calculations
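To make the raster-graphics terms above concrete, here is a tiny illustrative sketch (it assumes only NumPy, which the later implementation chapters also use) that builds a 2x2-pixel RGB image:

import numpy as np

# A raster image is a grid of pixels: 2 rows x 2 columns x 3 color channels (R, G, B).
image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = [255, 0, 0]      # top-left pixel: pure red
image[0, 1] = [0, 255, 0]      # top-right pixel: pure green
image[1, 0] = [0, 0, 255]      # bottom-left pixel: pure blue
image[1, 1] = [255, 255, 255]  # bottom-right pixel: white
print(image.shape)             # (2, 2, 3) -> resolution 2x2 with 3 channels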
Applications
1. Entertainment:
o Video Games: Creating interactive worlds and characters.
o Movies and TV: Producing special effects and animated films.
2. Design and Manufacturing:
o CAD (Computer-Aided Design): Designing products, buildings, and machinery.
o CAM (Computer-Aided Manufacturing): Using designs to control manufacturing
processes.
3. Visualization:
o Scientific Visualization: Representing scientific data in visual formats.
o Medical Imaging: Visualizing complex medical data from scans like MRIs and CTs.
4. User Interfaces:
o Graphical User Interfaces (GUIs): Allowing users to interact with computers through visual
elements like windows, icons, and buttons.
Active development of OpenGL was dropped in favour of its successor API, Vulkan, released in 2016 and
code-named glNext during initial development. In 2017, the Khronos Group announced that OpenGL ES
would not have new versions and has since concentrated on the development of Vulkan and other
technologies. As a result, certain capabilities offered by modern GPUs, e.g. ray tracing, are not supported
by OpenGL.
Key Concepts
1. Rendering Pipeline:
o Vertex Processing: Vertices are transformed and processed to determine their position on the
screen.
o Clipping: Parts of objects outside the view are discarded.
o Rasterization: Transformed vertices are converted into fragments.
o Fragment Processing: Fragments are processed to determine their final color and depth.
o Frame Buffer Operations: Final image is composed in the frame buffer and displayed on the
screen.
2. Shaders:
o Vertex Shader: Processes each vertex's attributes (position, color, normal, etc.).
o Fragment Shader: Processes fragments generated by rasterization to determine their final
color.
o Geometry Shader: Can add or modify geometry on the fly (optional and more advanced).
3. Buffers:
o Vertex Buffer Object (VBO): Stores vertex data.
o Element Buffer Object (EBO): Stores indices for indexed drawing.
o Frame Buffer Object (FBO): Stores rendered images off-screen.
o Texture Buffer: Stores texture data.
4. Textures:
o Images applied to the surfaces of 3D models to give them detail.
5. Transformations:
o Model Transformation: Moves objects in the scene.
o View Transformation: Simulates the camera position and orientation.
o Projection Transformation: Maps 3D coordinates to 2D screen coordinates.
Basic Workflow
1. Initialize the Context:
o Create a window and an OpenGL rendering context (e.g., with GLUT).
2. Compile Shaders:
o Compile the vertex and fragment shaders and link them into a shader program.
3. Set Up Buffers:
o Create and bind VBOs and EBOs.
o Upload vertex and index data.
4. Configure Vertex Attributes:
o Define how vertex data is interpreted and passed to the shaders.
5. Render Loop:
o Clear the screen.
o Draw objects.
o Swap buffers to display the rendered frame.
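As a rough sketch of this workflow, the render-loop portion (clear the screen, draw, swap buffers) can be written with PyOpenGL and GLUT as below; for brevity it uses legacy immediate-mode drawing of a single triangle instead of the VBO/shader setup described in steps 1-4, and the window title is an arbitrary choice:

from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)   # clear the screen
    glBegin(GL_TRIANGLES)                                 # draw one colored triangle
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f(0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f(0.0, 0.5)
    glEnd()
    glutSwapBuffers()                                     # display the rendered frame

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutCreateWindow(b"Workflow sketch")
glutDisplayFunc(display)
glutMainLoop()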
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning
software library. Initially developed by Intel, it was later supported by Willow Garage and Itseez (now part
of Intel). OpenCV is widely used for various applications in computer vision, image processing, and
machine learning. The image processing module in OpenCV is used for tasks related to manipulating
images; it encompasses functions for filtering, transforming, and enhancing images. Computer vision
resembles human vision: humans learn from life experience and use it to distinguish objects, interpret the
distances between them, and estimate their relative positions.
Key Concepts
1. Image Processing:
o Basic Operations: Reading, writing, and displaying images.
o Image Transformations: Resizing, cropping, rotating, and affine transformations.
o Color Space Conversions: Converting between different color spaces (e.g., RGB to
Grayscale, HSV).
2. Filtering and Enhancement:
o Smoothing/Blurring: Applying Gaussian, median, and bilateral filters.
o Sharpening: Enhancing edges to make images clearer.
o Histogram Equalization: Improving the contrast of images.
3. Feature Detection and Matching:
o Edge Detection: Using algorithms like Canny to detect edges in images.
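As a small illustration of the basic operations, color-space conversion, smoothing, and Canny edge detection listed above (the file name input.jpg is only a placeholder):

import cv2

img = cv2.imread("input.jpg")                    # read an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # color space conversion to grayscale
small = cv2.resize(gray, (480, 320))             # basic transformation: resizing
blurred = cv2.GaussianBlur(small, (5, 5), 0)     # smoothing with a Gaussian filter
edges = cv2.Canny(blurred, 100, 200)             # Canny edge detection

cv2.imshow("edges", edges)                       # display the result
cv2.waitKey(0)
cv2.destroyAllWindows()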
Computer vision is heavily used across many domains; one engaging application, and the focus of this project, is cartoonifying an image.
Cartoonifying an image is a captivating process that transforms ordinary photographs into vibrant
and whimsical cartoon-like representations. This technique involves accentuating certain features,
simplifying details, and applying artistic interpretations to create a playful and imaginative visual
effect. Whether for personal enjoyment, creative projects, or social media engagement, cartoonifying
images adds a unique charm and storytelling element, bridging the gap between reality
and fantasy. Through a combination of digital tools, artistic flair, and a keen eye for exaggeration,
this art form brings a fresh perspective to familiar scenes and faces, making them memorable and
delightful to behold.
Objectives:
Key Concepts:
1. Image Transformation: Techniques such as Gaussian blur to smoothen the image and edge
detection to highlight edges, crucial for cartoonification.
2. Color Quantization: Reducing the number of colors in an image to create a flat, cartoonish
appearance.
3. Artistic Interpretation: Understanding how to stylize images to achieve a cartoon-like effect while
preserving essential details.
4. Parameter Tuning: Adjusting parameters of filters and transformations to achieve desired
cartoonification effects.
5. Image Filtering: Using filters like bilateral filter and median blur to enhance edges and reduce noise.
Tools and Libraries:
1. OpenCV: Essential library for computer vision and image processing tasks in Python.
2. NumPy: Used for numerical operations and array manipulation, crucial for handling image data.
3. Python Programming: Utilize Python for implementing algorithms and managing image processing
workflows.
4. Edge Detection Algorithms: Techniques like Canny edge detection to identify and highlight edges
in images.
5. Color Quantization Methods: Algorithms to reduce the number of colors in an image, enhancing
the cartoon effect.
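One common way to implement the color quantization mentioned above is k-means clustering on the pixel colors. The sketch below is only illustrative; the cluster count k = 8 and the file name are assumptions, not values taken from the project code:

import cv2
import numpy as np

def quantize_colors(img, k=8):
    # Reshape to one row per pixel (B, G, R) and cluster the colors with k-means.
    data = img.reshape((-1, 3)).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 10,
                                    cv2.KMEANS_RANDOM_CENTERS)
    centers = np.uint8(centers)                       # the k representative colors
    return centers[labels.flatten()].reshape(img.shape)

img = cv2.imread("input.jpg")                         # placeholder path
flat = quantize_colors(img, k=8)                      # flat, cartoon-like colors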
Techniques:
OpenGL
OpenGL (Open Graphics Library) is a cross-language, cross-platform API for rendering 2D and 3D vector
graphics. It provides a rich set of functions for various graphics operations:
6. Texture Mapping
o glGenTextures: Generates texture names.
o glBindTexture: Binds a texture to a target.
o glTexImage2D: Specifies a 2D texture image.
o glTexParameteri / glTexParameterf: Sets texture parameters.
7. Lighting and Materials
o glEnable: Enables server-side GL capabilities (e.g., GL_LIGHTING).
o glLightfv: Sets light source parameters.
o glMaterialfv: Sets material properties for the lighting model.
8. User Interaction
o glutKeyboardFunc: Registers a keyboard callback function.
o glutMouseFunc: Registers a mouse callback function.
o glutMotionFunc: Registers a motion callback function.
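A brief sketch of registering these user-interaction callbacks with PyOpenGL's GLUT bindings (the callback bodies are placeholders, not the project's handlers):

import sys
from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glutSwapBuffers()

def on_key(key, x, y):
    if key == b'\x1b':                       # the Escape key exits
        sys.exit(0)

def on_mouse(button, state, x, y):
    print("mouse button", button, "state", state, "at", x, y)

def on_motion(x, y):
    print("dragging at", x, y)

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutCreateWindow(b"Interaction sketch")
glutDisplayFunc(display)
glutKeyboardFunc(on_key)                     # keyboard callback
glutMouseFunc(on_mouse)                      # mouse button callback
glutMotionFunc(on_motion)                    # motion (drag) callback
glutMainLoop()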
OpenCV
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning
software library. It provides numerous functions for image processing, computer vision, and machine
learning:
1. Introduction
The Cartoonify Image application aims to transform regular images into cartoon-like representations using
OpenCV and Python. This specification outlines the functional and non-functional requirements, system
architecture, hardware and software requirements, as well as development, deployment, and maintenance
considerations for the application.
2. Overall Description
The application will take an input image, process it using various image transformation techniques, and
output a cartoon version of the image. This process involves converting the image to grayscale, applying
edge detection, and then merging color tones to achieve the cartoon effect.
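A rough end-to-end sketch of this processing flow with OpenCV is shown below. The parameter values and the placeholder file name are illustrative assumptions, not the project's final settings:

import cv2

def cartoonify(path):
    img = cv2.imread(path)

    # 1. Convert to grayscale and smooth it.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)

    # 2. Highlight the major edges with adaptive thresholding.
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 9)

    # 3. Flatten and merge the color tones with a bilateral filter.
    color = cv2.bilateralFilter(img, 9, 300, 300)

    # 4. Mask the smoothed colors with the edge image to get the cartoon effect.
    return cv2.bitwise_and(color, color, mask=edges)

cartoon = cartoonify("input.jpg")            # placeholder path
cv2.imwrite("cartoon.jpg", cartoon)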
3. Functional Requirements
• Image Processing:
o Convert input image to grayscale.
o Apply edge detection techniques to highlight major edges.
o Reduce the number of colors in the image to give it a cartoon-like appearance.
o Merge color tones to simplify the image.
• User Interface:
o Provide a simple interface to upload images.
o Display the original and cartoonified images side by side.
o Allow users to download the cartoonified image.
4. Non-functional Requirements
• Performance:
o Process an image within a reasonable time frame (e.g., less than 10 seconds for typical image
sizes).
Usability:
5. System Architecture
• Development Tools: Python IDE (e.g., PyCharm, VS Code), Git for version control
7. Maintenance
Hardware Requirements
Minimum Hardware Requirements:
Recommended Hardware:
Software Requirements
Operating System:
Development Environment:
Python Libraries:
Deployment Environment
Server:
• Database: SQLite (for lightweight local storage) or PostgreSQL, MySQL (for more
robust data storage needs)
• Development:
o Use virtual environments (e.g., venv or conda) to manage dependencies and
isolate project environments.
o Test the application locally using the development server provided by Flask
(flask run) for debugging and testing purposes.
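A minimal sketch of what such a local Flask test server might look like is given below; the route name, file layout, and the simple edge-only processing are assumptions for illustration, not the project's actual application code:

# app.py - run locally with "flask run" or "python app.py"
from flask import Flask, request, send_file
import cv2
import numpy as np

app = Flask(__name__)

@app.route("/cartoonify", methods=["POST"])
def cartoonify_endpoint():
    # Decode the uploaded image from the multipart form field "image".
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)

    # Illustrative processing only: return an edge mask of the upload.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 9)
    cv2.imwrite("result.jpg", edges)
    return send_file("result.jpg", mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(debug=True)                      # development server only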
Deployment:
Maintenance
• Updates:
o Regularly update Python packages (pip install --upgrade package-name) to
leverage new features and security patches.
• Monitoring:
o Implement logging and monitoring (e.g., using tools like Prometheus,
Grafana) to track application performance and errors.
• Backup and Recovery:
o Set up regular backups of application data and configuration to ensure quick
recovery in case of failures.
Original Image:
• easygui: Imported to open a file box. It allows us to select any file from our system.
• Numpy: Images are stored and processed as numbers. These are taken as arrays. We
use NumPy to deal with arrays.
• Imageio: Used to read the file which is chosen by file box using a path.
• Matplotlib: This library is used for visualization and plotting. Thus, it is imported to
form the plot of images.
• OS: For OS interaction. Here, to read the path and save images to that path.
Code
import sys
import os
import tkinter as tk
import cv2                        # OpenCV, used for all image processing steps below
import easygui                    # file-selection dialog
import numpy as np                # arrays that hold the image data
import imageio                    # reading the chosen image file
import matplotlib.pyplot as plt   # plotting the intermediate images
Explanation:
To smoothen an image, we simply apply a blur effect. This is done using the medianBlur()
function. Here, the center pixel is assigned the median value of all the pixels which fall under the
kernel, creating a blur effect.
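The code for this step is not reproduced above; a minimal sketch of the grayscale-conversion and smoothing it describes could look like the lines below (the names originalImage and ReSized3 are assumed, smoothGrayScale is chosen to match the thresholding code that follows, and the kernel size 5 is an assumed value):

#converting the image to grayscale and smoothening it with a median blur
grayScaleImage = cv2.cvtColor(originalImage, cv2.COLOR_BGR2GRAY)
smoothGrayScale = cv2.medianBlur(grayScaleImage, 5)
ReSized3 = cv2.resize(smoothGrayScale, (960, 540))
#plt.imshow(ReSized3, cmap='gray')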
The above code generates the following output:
Code
#retrieving the edges for cartoon effect
#by using thresholding technique
getEdge = cv2.adaptiveThreshold(smoothGrayScale, 255,
                                cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, 9, 9)
ReSized4 = cv2.resize(getEdge, (960, 540))
#plt.imshow(ReSized4, cmap='gray')
Explanation:
Cartoon effect has two specialties:
1. Highlighted Edges
2. Smooth colors
In this step, we will work on the first specialty. Here, we will try to retrieve the edges and
highlight them. This is attained by the adaptive thresholding technique: the threshold value is
the mean of the neighborhood pixel values minus the constant C, where C is subtracted from
the mean (or weighted sum) of the neighborhood pixels. THRESH_BINARY is the type of
threshold applied, and the last two arguments are the block size and the constant C.
The above code will generate output like below:
Next, we prepare the color image that we mask with edges at the end to produce the cartoon image. We use
bilateralFilter, which removes the noise; it can be taken as smoothening of an image to an extent.
The second argument is the diameter of the pixel neighborhood, i.e. the number of pixels
around a certain pixel that will determine its value. The third and fourth arguments define
sigmaColor and sigmaSpace. These parameters are used to give a sigma effect, i.e. make the
image look smooth and watercolor-like, removing the roughness in colors.
Yes, it is similar to the BEAUTIFY or AI effect in the cameras of modern mobile phones.
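The corresponding code is not reproduced above; a sketch of this bilateral-filtering step could look like the lines below (the parameter values 9, 300, 300 and the variable names colorImage and ReSized5 are illustrative assumptions):

#applying a bilateral filter to smoothen the colors while keeping edges sharp
colorImage = cv2.bilateralFilter(originalImage, 9, 300, 300)
ReSized5 = cv2.resize(colorImage, (960, 540))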
The above code generates the following output:
The implementation leveraged OpenCV's comprehensive library of functions, which provided robust
support for image manipulation and processing tasks. By combining these techniques in a systematic
workflow, we effectively translated the input images into visually appealing cartoons, showcasing the
potential of computer vision in creative applications.
Furthermore, the project highlighted the importance of parameter tuning in achieving optimal results,
particularly in balancing edge preservation and noise reduction during the bilateral filtering stage.
Exploring different parameter combinations allowed us to refine the cartoonification process and enhance
the output quality.
Overall, this project not only deepened our understanding of image processing fundamentals but also
illustrated practical applications of computer vision in transforming digital media. Future enhancements
could involve integrating machine learning models for more sophisticated feature extraction or developing
interactive applications for real-time cartoonification, thereby expanding the project's scope and usability.
By documenting our methodologies, challenges, and outcomes, this report serves as a valuable resource
for anyone interested in exploring image cartoonification techniques using OpenCV and Python.
FUTURE SCOPE
Here are some potential future scopes and enhancements that could be explored on cartoonifying an image
with OpenCV in Python:
1. Integration of Deep Learning: Explore the integration of deep learning models, such as convolutional neural
networks (CNNs), for more advanced feature extraction and style transfer techniques in cartoonification.
Models like CycleGAN could be employed to learn mappings between photographic and cartoon domains.
2. Real-time Cartoonification: Develop real-time applications using optimized algorithms and frameworks
(like OpenCV's DNN module or TensorFlow Lite) to enable live video cartoonification on devices with limited
computational resources.
3. Interactive User Interfaces: Create user-friendly interfaces (GUIs) using libraries like Tkinter or PyQt to
allow users to adjust parameters and visualize the cartoonification process in real-time.
4. Multi-style Cartoonification: Extend the project to support multiple cartoon styles (e.g., anime, comic book)
and allow users to select or customize their preferred style.
5. Evaluation Metrics: Define and implement quantitative metrics to evaluate the quality of cartoonification
outputs objectively, such as edge preservation, color fidelity, and perceptual similarity to hand-drawn cartoons.
REFERENCES
[1] Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer Vision with the
OpenCV Library. O'Reilly Media.
[2] Liu, X., & Chen, D. (2013). Image cartoonization based on edge and region extraction.
Journal of Visual Communication and Image Representation, 24(7), 1154-1166.
[3] Achanta, R., et al. (2009). Frequency-tuned salient region detection. IEEE International
Conference on Computer Vision and Pattern Recognition, 2009. IEEE.
[4] Kaur, P., & Goyal, R. (2015). An efficient edge-preserving filtering algorithm for image
cartoonization. International Journal of Computer Applications, 121(15), 35-39.
[6] Sonka, M., Hlavac, V., & Boyle, R. (2014). Image Processing, Analysis, and Machine
Vision. Cengage Learning.
[7] Gonzalez, R. C., & Woods, R. E. (2017). Digital Image Processing. Pearson Education.
[8] Burger, W., & Burge, M. J. (2016). Digital Image Processing: An Algorithmic
Introduction Using Java. Springer.
[9] Pratt, W. K. (2007). Digital Image Processing. John Wiley & Sons.