What Is Texture in Graphic Design

Texture is one of the most important visual attributes for image analysis and has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach to texture analysis, with promising results. This approach uses walkers (called tourists) to explore the grey-scale image contexts at several levels. Here, we present an approach for generating graphs from the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Statistical position (degree mean) and dispersion (entropy of two vertices with the same degree) measures computed from these graphs are used as texture descriptors. A comparison with traditional texture analysis methods is performed to illustrate the high performance of this approach.
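As a rough illustration of the walk rule described above, here is a minimal Python sketch of one possible partially self-avoiding deterministic walk on a grey-scale image. The 8-connected neighbourhood, the minimum-intensity-difference step rule, the memory length mu and the stopping condition are assumptions made for illustration; this is not the authors' implementation, and the graph construction and descriptor computation are omitted.

```python
import numpy as np

def tourist_walk(image, start, mu=2, max_steps=1000):
    """Walk from `start`, always moving to the 8-neighbour whose grey level is
    closest to the current pixel's, skipping pixels visited in the last `mu`
    steps. Returns the trajectory as a list of pixel coordinates."""
    h, w = image.shape
    trajectory = [start]
    memory = [start]                      # the last mu visited pixels
    y, x = start
    for _ in range(max_steps):
        best, best_diff = None, None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in memory:
                    diff = abs(int(image[ny, nx]) - int(image[y, x]))
                    if best_diff is None or diff < best_diff:
                        best, best_diff = (ny, nx), diff
        if best is None:                  # trapped: every neighbour is in memory
            break
        y, x = best
        trajectory.append(best)
        memory.append(best)
        if len(memory) > mu:              # keep only the mu most recent pixels
            memory.pop(0)
    return trajectory
```

In the approach summarised above, graphs would then be built from many such trajectories and the degree-based statistics computed from those graphs.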

What is texture in graphic design?

The general definition of texture in graphic design is the surface quality of a work of art. In simpler terms, texture is the visual tone of a design: it influences how a design looks and feels. Texture can apply to physical surfaces as well, but the difference is that texture in graphic design cannot be felt physically.

Instead, it is implied by how the design is styled. By layering rich graphics upon each other, you can create visual textures that mimic real, tactile texture.

There are two primary forms of texture used in graphic design: actual and implied textures.

Actual textures

Actual textures, also called physical textures, refer to the real tactile properties of a design. Actual texture matters when designing items such as wedding invitation cards, business cards, and brochures, where concerns such as paper thickness and quality, surface smoothness, and letter embossing have to be addressed.

Implied textures

Also known as image textures, implied textures are generated from a combination of geometric or organic shapes and colors to bring the feeling of texture to a graphic design. Implied textures can be complex or simple depending on the layers of lines, shapes, and text used. Implied textures can be separated into three types:
• Environmental – Textures generated from the environment. They can include rocks, stars in the sky, and sand, among other environmental objects.
• Man-made – Textures designed by human hands. Artificial textures can include clothes, illustrations, or even paintings.
• Biological – Textures sourced from biological components, mostly animal-related. They can be anything from fur, skin, and animal prints to feathers.

How to use texture to enhance your designs

Texture in graphic design is meant to create illusions by altering the look and feel of an image. Used correctly, texture lets a graphic designer create compelling designs by adding an extra layer of meaning to the work.

Before deciding on a texture for your design, you need a clear idea of what you are designing, the resources you can turn into textures, and software such as CorelDRAW to work on the design.

Basically, when adding texture, you want to create something that catches the eye without being over the top. Go with your artistic instincts as well. The goal is to achieve something that brings out your idea while evoking an emotional response in the viewer.

Using actual texture to bring organic life to graphic designs

Using natural elements in your graphic designs infuses the image with life, beauty, warmth, and vividness. Actual texture imagery can be inspired by anything in the world, from a small feather to the canopy of a rainforest.

It is important to remember that the key to creating stunning images using actual texture is
setting a contrast between the textured background and the striking foreground.

Using implied texture to create experimental graphic designs

Implied texture consists of human-fabricated images and a wide array of imagery with surrealistic patterns. Using modern graphic design software such as CorelDRAW, the designer can incorporate a wide range of implied textures; the only limitation is your imagination. Contrast still plays a critical role in creating gorgeous textures for the background and foreground.
The Graphics Pipeline
The Graphics Pipeline is the process by which we convert a description of a 3D scene into a
2D image that can be viewed. This involves both transformations between spaces and
conversions from one type of data to another.

Spaces

What is a space? Imagine that you and I are standing in a room, 6 feet apart and facing the same direction. Exactly halfway between us is a table, and I am to the left of the table. If asked “where is the table?”, I would likely say the table is 3 feet to my right. But you would say the table is 3 feet to your left. Neither of us is wrong; we’re just describing relationships in space using a different reference point. We might say that I am describing the position of the table in “my coordinates” while you are describing it in “your coordinates”.

In computer graphics, we sometimes call these two different coordinate systems spaces. A lot
(seriously, a lot) of computer graphics has to do with transforming geometry between spaces.
Usually, the spaces are different in more complicated ways (consider if you turned yourself 45
degrees, and I flipped over and did a headstand - our descriptions of the table’s position would
now be more complicated and different).
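To put the table example in code, here is a tiny sketch; the 2D axis convention, the use of feet as units, and NumPy are my choices for illustration.

```python
import numpy as np

# The table's position expressed in two different "spaces" (reference frames).
# In my frame the table is 3 ft to my right; in yours it is 3 ft to your left.
table_in_my_frame = np.array([3.0, 0.0])      # (right, forward), in feet

# You stand 6 ft to my right, facing the same direction, so converting a point
# from my frame to yours is just a translation along the "right" axis.
your_position_in_my_frame = np.array([6.0, 0.0])
table_in_your_frame = table_in_my_frame - your_position_in_my_frame
print(table_in_your_frame)                     # [-3.  0.]  -> 3 ft to your left
```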

It’s nice to have a common frame of reference for these things. Usually, we describe the
positions of most things in our scene in terms of “world coordinates”. This “world space” has
a common reference point, or origin, from which the positions of all other objects are
described. The world coordinate system also has a sense of directionality, e.g. which way is
up, but we’ll discuss this later.

In our rasterizers, our task will be to take triangles with coordinates stored in world space, and
transform them into pixels that we can color on the screen.
The geometry can be expressed in 2D coordinates, but usually we’ll be talking about 3D
geometry.

The output of our rasterizer will be an image that can be viewed. This image will be a two-dimensional array of color values. These color values are called pixels, or “picture elements”.

We’ll usually talk about these pixels in terms of an x and y coordinate on the screen. These coordinates will be integers, ranging from 0 to the width or height of the screen:

0 ≤ x_p ≤ w − 1,   0 ≤ y_p ≤ h − 1

We will need to find a way to transform from world coordinates to these pixel coordinates, but
we’ll cover that after we first introduce the main stages of the graphics pipeline.
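As a minimal sketch of what that output looks like in code (the image dimensions and the RGB colour layout here are just assumptions):

```python
import numpy as np

# The rasterizer's output is simply a 2D array of colour values.
w, h = 640, 480
framebuffer = np.zeros((h, w, 3), dtype=np.uint8)   # every pixel starts black

# Pixel coordinates are integers with 0 <= x <= w-1 and 0 <= y <= h-1,
# so colouring the pixel at (x, y) is a single array write.
x, y = 10, 20
framebuffer[y, x] = (255, 0, 0)                      # colour one pixel red
```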
The Graphics Pipeline

The Graphics Pipeline exists in a variety of forms depending on the type of computer graphics
and rendering you are doing.

What we’ll introduce now is the rasterization pipeline used by OpenGL, which is more or less what we will implement for our software rasterizer.

input specification: what are the vertices?

• the type of geometry we’ll be rendering is specified, in a form that is convenient for the
rasterization process we’re using.

vertex shader: move to camera’s perspective

• world/object space → screen coordinates: we transform the coordinates of our input geometry from world space to a new space that is aligned with the camera’s perspective.

rasterization: which pixels are inside triangle?

• Rasterization involves determining the pixels that a given primitive covers on the
screen, so that we can compute the colors of each covered pixel.

fragment shader: what color is each pixel?

• For each given pixel that is covered by the primitive, compute the expected color. For
starters we’ll likely just color each pixel a specific color, e.g. red. Later, the fragment
shader may be used to perform more complicated per-pixel computations, such as a
lighting calculation to determine whether a pixel is brightly lit or dim.

testing and blending: which pixels are visible?

• At this point we need to perform what’s called a depth test to determine whether a
given pixel has already been computed for a closer object. For any given perspective,
some geometry may be hidden by other geometry that’s in front of it. We’ll resolve this
on a per-pixel basis, after the rasterization stage.
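To show how these stages fit together, here is a deliberately tiny Python sketch of a software rasterizer for a single triangle. The toy vertex transform, the edge-function coverage test, the flat fragment colour, and the "smaller z is closer" depth convention are all simplifying assumptions for illustration, not the exact pipeline we will build.

```python
import numpy as np

w, h = 200, 200
framebuffer = np.zeros((h, w, 3), dtype=np.uint8)
depthbuffer = np.full((h, w), np.inf)            # closest depth seen per pixel

def vertex_shader(v):
    """Toy transform: map a view-aligned point in [-1, 1] straight to pixel
    coordinates (a real vertex shader would apply view/projection matrices).
    Here y simply grows downward, an assumed convention."""
    x, y, z = v
    return (x * w / 2 + w / 2, y * h / 2 + h / 2, z)

def edge(a, b, p):
    """Signed area test: which side of the edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def draw_triangle(tri_world, color):
    a, b, c = [vertex_shader(v) for v in tri_world]        # vertex shader
    area = edge(a, b, c)
    if area == 0:                                          # degenerate triangle
        return
    for y in range(h):                                     # rasterization
        for x in range(w):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                     (w0 <= 0 and w1 <= 0 and w2 <= 0)
            if inside:
                # interpolate depth with barycentric weights
                z = (w0 * a[2] + w1 * b[2] + w2 * c[2]) / area
                if z < depthbuffer[y, x]:                  # testing (depth test)
                    depthbuffer[y, x] = z
                    framebuffer[y, x] = color              # fragment shader

# draw one red triangle roughly in the middle of the screen
draw_triangle([(-0.5, -0.5, 0.5), (0.5, -0.5, 0.5), (0.0, 0.5, 0.5)], (255, 0, 0))
```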

To have a beautiful character on a screen, two important elements are needed: a smooth geometry and a Texture. A Texture is the image that clothes the character and gives it a personality.

A texture is an image that is transferred to the GPU and is applied to the character during the
final stages of the rendering pipeline. Texture Parameters and Texture Coordinates ensure
that the image fits the character’s geometry correctly.
Providing the texture

Every character in a mobile game contains a texture. Applying a texture is the equivalent of painting. Whereas you would paint a drawing section by section, a texture is applied to a character fragment by fragment in the Fragment Shader.

Imagine an artist, Bill, opening up Blender, a modeling application he uses to model 3D characters. His task is to model a shiny robot-like character with black gloves, a silver helmet, and a white torso. After two hours, he has completed the geometry of the robot. His next task is to do what is known as Unwrapping.

Unwrapping means cutting the 3D character and unwrapping it to form a 2D equivalent of the character. The result is called a UV Map. Why does he do this? Because an image is a two-dimensional entity. By unwrapping the 3D character into a 2D entity, an image can be applied to the character. Figure 2 shows an example of a cube being unwrapped.

Figure 2. Unwrapping a cube into its UV Map.


The unwrapping of the character is essentially a transformation from a 3D coordinate system to a 2D coordinate system called the UV Coordinate System. This process is referred to as UV Mapping.

Figure 3. Unwrapping a 3D model into its UV Map.

Let’s continue with the next steps of the workflow:

Once Bill has unwrapped the character, he takes the character’s UV Map and exports it to an image editor, such as Photoshop or GIMP. The UV Map will serve as a road map to paint over. Here, Bill will be able to paint the robot’s gloves black, the helmet silver, and the torso white. At the end, he will have a 2D image that will be applied to the 3D character. This image is called a Texture.

It is this image texture that will be loaded into your application and transferred to the GPU.
Figure 4. Character's UV map with texture (image).

A new coordinate system

During the unwrapping process, the Vertices defining the 3D geometry are mapped onto a UV Coordinate System. A UV Coordinate System is a two-dimensional coordinate system whose axes, U and V, range from 0 to 1.

For example, figure 2 above shows the UV Map of a cube. The vertices of the cube were mapped onto the UV Coordinate System. A vertex at location (1, 1, 1) in 3D space may have been mapped to UV coordinates of (0.5, 0.5).

During the application of the texture, the Fragment Shader uses the UV coordinates as guide points to properly attach the texture to the 3D model.
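As a rough sketch of that lookup step, the following samples a texture image at a given (u, v) pair with nearest-neighbour filtering. The clamping to [0, 1] and the flipped v axis are assumed conventions, since different tools handle these differently.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour texture lookup. `texture` is an (H, W, 3) image and
    (u, v) are UV coordinates in [0, 1]; v is flipped because image rows
    usually run top-to-bottom while V runs bottom-to-top (an assumption)."""
    tex_h, tex_w = texture.shape[:2]
    u = min(max(u, 0.0), 1.0)                     # clamp to [0, 1]
    v = min(max(v, 0.0), 1.0)
    x = int(round(u * (tex_w - 1)))
    y = int(round((1.0 - v) * (tex_h - 1)))
    return texture[y, x]

# e.g. the vertex from the cube example that was mapped to UV (0.5, 0.5)
# would receive the texel from the centre of the texture image:
texture = np.zeros((256, 256, 3), dtype=np.uint8)
print(sample_texture(texture, 0.5, 0.5))
```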

Texture Analysis
Entropy, range, and standard deviation filtering; create gray-level co-occurrence matrix

Texture analysis refers to the characterization of regions in an image by their texture content.
Texture analysis attempts to quantify intuitive qualities described by terms such as rough,
smooth, silky, or bumpy as a function of the spatial variation in pixel intensities. In this sense,
the roughness or bumpiness refers to variations in the intensity values, or gray levels.

Texture analysis is used in various applications, including remote sensing, automated inspection, and medical image processing. Texture analysis can be used to find texture boundaries, a process called texture segmentation. Texture analysis can be helpful when objects in an image are characterized more by their texture than by their intensity, and traditional thresholding techniques cannot be used effectively.

Functions

• entropy – Entropy of grayscale image
• entropyfilt – Local entropy of grayscale image
• rangefilt – Local range of image
• stdfilt – Local standard deviation of image
• graycomatrix – Create gray-level co-occurrence matrix from image
• graycoprops – Properties of gray-level co-occurrence matrix (GLCM)
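The functions above are from MATLAB's Image Processing Toolbox. For comparison, the sketch below does roughly the same things in Python using SciPy and recent versions of scikit-image; the 9×9 filter window, the single GLCM offset (distance 1, angle 0), and the random placeholder image are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import generic_filter
from skimage.feature import graycomatrix, graycoprops

def local_entropy(values):
    """Entropy of the grey levels inside one filter window."""
    counts = np.bincount(values.astype(np.uint8), minlength=256)
    p = counts[counts > 0] / values.size
    return -np.sum(p * np.log2(p))

image = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image
img_f = image.astype(float)

# Local entropy, range and standard deviation filtering (cf. entropyfilt,
# rangefilt, stdfilt), each computed over a 9x9 neighbourhood.
ent = generic_filter(img_f, local_entropy, size=9)
rng = generic_filter(img_f, np.ptp, size=9)
std = generic_filter(img_f, np.std, size=9)

# Grey-level co-occurrence matrix and derived properties (cf. graycomatrix,
# graycoprops), for pixel pairs one step apart horizontally.
glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, 'contrast')
homogeneity = graycoprops(glcm, 'homogeneity')
```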
