What Is Texture in Graphic Design
The general definition of texture in graphic design is the surface quality of a work of art. In
simpler terms, texture is the visual tone of a design: it influences how a graphic design looks
and feels. Texture can apply to physical surfaces as well. The difference, however, is that
texture in graphic design cannot be felt physically.
Instead, it is implied by how the design is styled. By layering rich graphics upon each other,
you can create visual textures that mimic actual, physical texture.
There are two primary forms of texture used in graphic design: actual and implied textures.
Actual textures
Actual textures are also called physical textures. They refer to the real tactile properties of the
design. Actual texture is useful for designing items such as wedding invitation cards, business
cards, and brochures, where concerns such as paper thickness and quality, surface smoothness,
and letter embossing have to be addressed.
Implied textures
Also known as image textures, implied textures are generated from a combination of geometric
or organic shapes and colors to bring the feeling of texture to a graphic design. Implied textures
can be complex or simple depending on the layers of lines, shapes, and text used. They can be
separated into three categories:
• Environmental – These are textures generated from the environment. They can include
rocks, stars in the sky, sand, among other environmental objects.
• Man-made – Anything that was designed by human hands. Artificial textures can include
clothes, illustrations, or even paintings.
• Biological – Biological textures can be anything from fur, skin, animal prints to
feathers. They are sourced from biological components, mostly animal-related.
Texture in graphic design is meant to create illusions by altering the feel and look of an image.
By using texture correctly, a graphic designer can create compelling designs that carry an extra
layer of meaning.
Before deciding on a texture for your design, you need to have a clear idea of what you need
to design, resources that you can turn into textures, and software like CorelDRAW to work on
the design.
Basically, when adding texture, you want to create something that catches the eye without
being over the top. Trust your artistic instincts as well. The goal is to achieve something that
brings out your idea while evoking an emotional response in the viewer.
Using natural elements in your graphic designs infuses the image with life, beauty, warmth,
and vividness. Actual texture imagery is inspired by anything in the world. This can be anything
from a small feather to the canopy of a rainforest.
It is important to remember that the key to creating stunning images using actual texture is
setting a contrast between the textured background and the striking foreground.
Implied texture consists of human-fabricated images and a wide array of imagery with
surrealistic patterns. Using modern graphic design software like CorelDRAW, the designer can
incorporate a wide range of implied textures. The only limitation is your imagination. Contrast
still plays a critical role in creating gorgeous textures for the background and foreground.
The Graphics Pipeline
The Graphics Pipeline is the process by which we convert a description of a 3D scene into a
2D image that can be viewed. This involves both transformations between spaces and
conversions from one type of data to another.
Spaces
What is a space? Consider if you and I are standing in a room, 6 feet apart and facing the same
direction. Exactly halfway between us is a table, and I am on the left of the table. If I were
asked “where is the table?”, I would likely say the table is 3 feet to my right. But you would
say the table is 3 feet to your left. Neither of us is wrong; we’re just describing relationships in
space using a different reference point. We might say that I am describing the position of the
table in “my coordinates” while you are describing it in “your coordinates”.
In computer graphics, we sometimes call these two different coordinate systems spaces. A lot
(seriously, a lot) of computer graphics has to do with transforming geometry between spaces.
Usually, the spaces are different in more complicated ways (consider if you turned yourself 45
degrees, and I flipped over and did a headstand - our descriptions of the table’s position would
now be more complicated and different).
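The table example above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text: `to_frame` is a hypothetical helper that re-expresses a world-space 2D point in someone else's reference frame (a translation followed by a rotation).

```python
import math

def to_frame(point, frame_origin, frame_angle):
    """Express a world-space 2D point in a frame located at frame_origin
    and rotated frame_angle radians (hypothetical helper for illustration)."""
    # Translate so the frame's origin becomes (0, 0) ...
    dx = point[0] - frame_origin[0]
    dy = point[1] - frame_origin[1]
    # ... then rotate by the inverse of the frame's rotation.
    c, s = math.cos(-frame_angle), math.sin(-frame_angle)
    return (c * dx - s * dy, s * dx + c * dy)

# The table sits at the world origin; I stand 3 ft to its left,
# you stand 3 ft to its right, both facing the same direction.
table = (0.0, 0.0)
print(to_frame(table, (-3.0, 0.0), 0.0))  # (3.0, 0.0): 3 ft to my right
print(to_frame(table, (3.0, 0.0), 0.0))   # (-3.0, 0.0): 3 ft to your left
```

The same point gets two different descriptions, neither of them wrong — exactly the situation transformations between spaces resolve.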
It’s nice to have a common frame of reference for these things. Usually, we describe the
positions of most things in our scene in terms of “world coordinates”. This “world space” has
a common reference point, or origin, from which the positions of all other objects are
described. The world coordinate system also has a sense of directionality, e.g. which way is
up, but we’ll discuss this later.
In our rasterizers, our task will be to take triangles with coordinates stored in world space, and
transform them into pixels that we can color on the screen.
The geometry can be expressed in 2D coordinates, but usually we’ll be talking about 3D
geometry.
The output of our rasterizer will be an image that can be viewed. This image will be a two-
dimensional array of color values. These color values are called pixels, or “picture elements”.
We’ll usually talk about these pixels as an x and y coordinate on the screen. These coordinates
will be integers, ranging from 0 up to (but not including) the width or height of the screen:
0 ≤ x_p ≤ w − 1,  0 ≤ y_p ≤ h − 1
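As a minimal sketch of this representation (not from the original text, sizes chosen arbitrarily), an image can be stored as a 2D array of RGB values indexed by integer pixel coordinates, with writes clamped to the valid range:

```python
# An image as a 2D array of RGB color values ("pixels"),
# indexed by integer pixel coordinates (x, y).
WIDTH, HEIGHT = 8, 6

# One RGB triple per pixel; start with every pixel black.
image = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Write a color only if 0 <= x <= w-1 and 0 <= y <= h-1."""
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        image[y][x] = color

set_pixel(3, 2, (255, 0, 0))   # in bounds: pixel becomes red
set_pixel(8, 2, (255, 0, 0))   # out of bounds: silently ignored
```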
We will need to find a way to transform from world coordinates to these pixel coordinates, but
we’ll cover that after we first introduce the main stages of the graphics pipeline.
The Graphics Pipeline
The Graphics Pipeline exists in a variety of forms depending on the type of computer graphics
and rendering you are doing.
What we’ll introduce now is the rasterization pipeline used by OpenGL, which will be more or
less what we implement for our software rasterizer.
• Vertex specification – the geometry we’ll be rendering is specified, in a form that is
convenient for the rasterization process we’re using.
• Rasterization – determining the pixels that a given primitive covers on the screen, so
that we can compute the colors of each covered pixel.
• Fragment shading – for each pixel that is covered by the primitive, compute the
expected color. For starters we’ll likely just color each pixel a specific color, e.g. red.
Later, the fragment shader may be used to perform more complicated per-pixel
computations, such as a lighting calculation to determine whether a pixel is brightly lit
or dim.
• Depth testing – at this point we perform what’s called a depth test to determine whether
a given pixel has already been computed for a closer object. For any given perspective,
some geometry may be hidden by other geometry that’s in front of it. We’ll resolve this
on a per-pixel basis, after the rasterization stage.
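The stages above can be sketched as a single loop. This is a toy illustration, not the actual OpenGL implementation: `covered_pixels` and `shade` are hypothetical stand-ins for real coverage testing and fragment shading, and the point is only the ordering — rasterize, shade, depth-test, write.

```python
WIDTH, HEIGHT = 4, 4
INF = float("inf")

color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[INF] * WIDTH for _ in range(HEIGHT)]   # smaller z = closer

def covered_pixels(primitive):
    # Stand-in for real coverage testing: here the "primitive" is
    # just a precomputed list of (x, y, depth) samples.
    yield from primitive

def shade(x, y):
    # Stand-in fragment shader: color every covered pixel red.
    return (255, 0, 0)

def draw(primitive):
    for x, y, z in covered_pixels(primitive):      # rasterization
        color = shade(x, y)                        # fragment shading
        if z < depth_buffer[y][x]:                 # depth test
            depth_buffer[y][x] = z
            color_buffer[y][x] = color             # write the pixel

draw([(1, 1, 0.5), (2, 1, 0.5)])
draw([(1, 1, 0.9)])   # farther away: fails the depth test at (1, 1)
```

After both draws, pixel (1, 1) keeps the closer surface's color and depth — hidden geometry is resolved per pixel, just as described above.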
To have a beautiful character on a screen, two important elements are needed: smooth
geometry and a texture. A texture is the image that clothes the character and gives it a
personality.
A texture is an image that is transferred to the GPU and is applied to the character during the
final stages of the rendering pipeline. Texture Parameters and Texture Coordinates ensure
that the image fits the character’s geometry correctly.
Providing the texture
Every character in a mobile game contains a texture. Applying a texture is the equivalent of
painting. Whereas you would paint a drawing section by section, a texture is applied to a
character fragment by fragment in the Fragment Shader.
Imagine Bill opening up Blender, a modeling program he uses to model 3D characters. His
task is to model a shiny robot-like character with black gloves, a silver helmet, and a white
torso. After two hours, he has completed the geometry of the robot. His next task is to do what
is known as Unwrapping.
Unwrapping means cutting the 3D character and unwrapping it to form a 2D equivalent of the
character. The result is called a UV Map. Why does he do this? Because an image is a two-
dimensional entity. By unwrapping the 3D character into a 2D entity, an image can be applied
to the character. Figure 2 shows an example of an unwrapped cube.
Once Bill has unwrapped the character, he takes the UV Map of the character and exports it
to an image editor, like Photoshop or GIMP. The UV Map will serve as a road map to paint
over. Here, Bill will be able to paint the gloves of the robot black, the helmet silver, and the
torso white. At the end, he will have a 2D image that will be applied to the 3D character. This
image is called a Texture.
It is this image texture that will be loaded in your application and transferred to the GPU.
Figure 4. Character's UV map with texture (image).
During the unwrapping process, the vertices defining the 3D geometry are mapped onto a UV
coordinate system: a two-dimensional coordinate system whose axes, U and V, range from
0 to 1.
For example, figure 2 above shows the UV Map of a cube. The vertices of the cube were
mapped onto the UV coordinate system. A vertex at location (1, 1, 1) in 3D space may have
been mapped to UV coordinates (0.5, 0.5).
During the application of the texture, the Fragment Shader uses the UV-Coordinates as guide-
points to properly attach the texture to the 3D model.
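A minimal sketch of this lookup (not from the original text): given a fragment's UV coordinates, the shader maps them into the texture image and returns the nearest texel's color. The 2×2 "texture" here is a hypothetical stand-in for a real painted image like Bill's.

```python
# Nearest-neighbor texture sampling: map (u, v) in [0, 1] to a texel.
texture = [
    [(0, 0, 0),       (192, 192, 192)],   # e.g. black glove, silver helmet
    [(255, 255, 255), (255, 0, 0)],       # e.g. white torso, red accent
]

def sample(u, v):
    """Return the color of the texel nearest to UV coordinates (u, v)."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)   # clamp so u == 1.0 stays in bounds
    y = min(int(v * h), h - 1)
    return texture[y][x]

print(sample(0.25, 0.25))  # (0, 0, 0): upper-left texel
print(sample(0.75, 0.75))  # (255, 0, 0): lower-right texel
```

Real GPUs offer filtered lookups (bilinear, mipmapped) controlled by texture parameters, but the principle is the same: UV coordinates decide which part of the image lands on which fragment.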
Texture Analysis
Entropy, range, and standard deviation filtering; create gray-level co-occurrence matrix
Texture analysis refers to the characterization of regions in an image by their texture content.
Texture analysis attempts to quantify intuitive qualities described by terms such as rough,
smooth, silky, or bumpy as a function of the spatial variation in pixel intensities. In this sense,
the roughness or bumpiness refers to variations in the intensity values, or gray levels.
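The idea can be illustrated with two of the local statistics mentioned above — range and standard deviation — computed over a 3×3 neighborhood. This is a minimal sketch with a made-up example image, not a production filter:

```python
import math

# A tiny grayscale image with a smooth region (1s) and a rough
# boundary against a brighter region (9s).
img = [
    [1, 1, 1, 9],
    [1, 1, 9, 9],
    [1, 9, 9, 9],
]

def neighborhood(img, y, x):
    """Collect the 3x3 neighborhood around (y, x), clipped at the borders."""
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

def range_filter(img, y, x):
    n = neighborhood(img, y, x)        # rough texture -> large local range
    return max(n) - min(n)

def std_filter(img, y, x):
    n = neighborhood(img, y, x)        # smooth texture -> small deviation
    mean = sum(n) / len(n)
    return math.sqrt(sum((v - mean) ** 2 for v in n) / len(n))

print(range_filter(img, 0, 0))  # 0: smooth region
print(range_filter(img, 1, 2))  # 8: boundary between textures
```

Entropy filtering and the gray-level co-occurrence matrix follow the same pattern — per-neighborhood statistics on gray levels — but summarize the intensity distribution rather than its spread.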