What is a digital image

A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Pixel values typically represent gray levels, colours, heights, opacities etc.

Sampling:
Sampling is measuring the value of an image at a finite number of points

Quantization:
Quantization is representing the sampled value of the image with an integer. The number of gray levels determines the number of bits per pixel.
E.g. 256 gray levels => 8 bits per pixel
64 gray levels => 6 bits per pixel
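
As a minimal sketch of the relationship between gray levels and bits per pixel (the NumPy array below is a made-up example, not from the notes), requantizing an 8-bit image to 64 levels simply drops the two least significant bits:

import numpy as np

# An 8-bit image has 2**8 = 256 gray levels.
img_8bit = np.array([[12, 200, 255],
                     [64, 128, 30]], dtype=np.uint8)

# Requantize to 64 gray levels (2**6, i.e. 6 bits per pixel):
levels = 64
step = 256 // levels                            # 4 input levels map to 1 output level
img_6bit = (img_8bit // step).astype(np.uint8)  # values now lie in 0..63

print(img_6bit)                  # [[ 3 50 63] [16 32  7]]
print(int(np.log2(levels)))      # 6 bits per pixel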

Point processing:
This is defined as a mathematical operation performed independently at every pixel (point) of an image.

Intensity Transformations:
These are operations performed on an input image to obtain an output image with the desired intensity mapping, s = T(r), where r is the input intensity and s is the output intensity.
The values are mapped so that the intensity levels of the input and output images are related by a mathematical function; the range of pixel intensities is thereby altered.

1. Log transform: s = c log(1 + r), where c is a constant.

2. Power law (gamma) transform: s = c r^γ, where c is a constant. Used for gray-level compression or expansion, depending on the value of γ.
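
A minimal sketch of these two point transforms, assuming intensities normalized to [0, 1]; the values of c and γ below are illustrative choices, not specified in the notes:

import numpy as np

def log_transform(r, c=1.0):
    # s = c log(1 + r): spreads dark values, compresses bright ones.
    return c * np.log1p(r)

def gamma_transform(r, c=1.0, gamma=0.5):
    # s = c r**gamma: gamma < 1 expands dark levels, gamma > 1 compresses them.
    return c * np.power(r, gamma)

r = np.linspace(0.0, 1.0, 5)          # normalized input intensities
print(log_transform(r))
print(gamma_transform(r, gamma=0.5))  # expansion of dark levels
print(gamma_transform(r, gamma=2.0))  # compression of dark levels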

Contrast stretching:
Contrast stretching increases the dynamic range of the pixel values so that they span the full range of intensity levels (brightening low-contrast images).
Intensity level slicing:
A selected range of pixel intensities is converted to the maximum intensity, while the rest of the pixels remain unchanged.
This highlights pixels that lie in a range of interest.
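
A minimal sketch of contrast stretching and intensity-level slicing on an 8-bit image; the helper names and the slicing range 100-150 are arbitrary choices for illustration:

import numpy as np

def contrast_stretch(img):
    # Linearly map [min, max] of the image onto the full 0..255 range.
    lo, hi = int(img.min()), int(img.max())
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0
    return out.astype(np.uint8)

def intensity_slice(img, low, high, max_val=255):
    # Pixels inside [low, high] are set to maximum intensity; the rest are unchanged.
    out = img.copy()
    out[(img >= low) & (img <= high)] = max_val
    return out

img = np.array([[ 90, 110, 130],
                [140, 160, 100]], dtype=np.uint8)
print(contrast_stretch(img))            # values now span 0..255
print(intensity_slice(img, 100, 150))   # 110, 130, 140 and 100 become 255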

Histogram Equalization:
Contrast is adjusted using the histogram of an image.
The method effectively spreads out the most frequently occurring intensity values. This increases the global contrast of the image and allows areas of lower local contrast to gain a higher contrast.
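
One common way to implement this is to build a lookup table from the normalized cumulative histogram; a minimal NumPy sketch for an 8-bit image (libraries such as OpenCV's cv2.equalizeHist do the same job):

import numpy as np

def equalize_hist(img):
    # Histogram of the 8-bit image.
    hist = np.bincount(img.ravel(), minlength=256)
    # Cumulative distribution function, scaled to 0..255.
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    lut = np.round(cdf * 255).astype(np.uint8)
    # Mapping every pixel through the lookup table spreads out
    # the most frequently occurring intensity values.
    return lut[img]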

Segmentation:
Segmentation subdivides an image into its component regions or objects.
Point detection, line detection and edge detection are carried out by convolving the image with a mask or kernel.

Image gradient:
The image gradient is used to find edge strength and direction.
It points in the direction of the greatest rate of change of intensity.
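
A sketch of computing edge strength and direction from Sobel derivatives with scipy.ndimage (any derivative kernel, e.g. Prewitt, could be substituted):

import numpy as np
from scipy import ndimage

def image_gradient(img):
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)   # derivative along x (columns)
    gy = ndimage.sobel(img, axis=0)   # derivative along y (rows)
    magnitude = np.hypot(gx, gy)      # edge strength
    direction = np.arctan2(gy, gx)    # direction of greatest rate of change (radians)
    return magnitude, direction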

MORPHOLOGY:
Morphology: a branch of biology that deals with the form and structure of animals and
plants.
Morphological image processing is used to extract image components for
representation and description of region shape, such as boundaries, skeletons.
Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image.
Morphological operations rely only on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images.
Morphological techniques probe an image with a small shape or template called a structuring element. The structuring element is a small binary image, i.e. a small matrix
of pixels, each with a value of zero or one.

Operations used in Morphology (Arithmetic Operations)

• Image addition: used in image averaging to reduce noise (an image-enhancement application).
• Image subtraction: used to get rid of background information.
• Image multiplication and division: used to correct gray-level shading that results from non-uniformities in illumination or in the sensor used.

Operations used in Morphology (Logical Operations)

• AND: a AND b = c
In terms of set theory: c = A ∩ B (c is the intersection of A and B)
• OR: a OR b = c
In terms of set theory: c = A ∪ B (c is the union of A and B)
• NOT

Structuring Element
Morphological techniques probe an image with a small shape or template called a
structuring element
The structuring element is positioned at all possible locations in the image and it is
compared with the corresponding neighborhood of pixels

Basic Morphological Operators


1. Erosion
Erosion shrinks the foreground and enlarges the background.
Erosion is the set of all points in the image where the structuring element "fits into" the foreground.
• Consider each foreground pixel in the input image: if the structuring element fits in, write a "1" at the origin of the structuring element.
• This is a simple application of pattern matching.
• Input: a binary (or gray-value) image and a structuring element containing only 1s.
With the convention white = 0, black = 1, the image resulting from erosion gets darker; dilation is the dual operation.

2. Dilation
• Dilation enlarges the foreground and shrinks the background.
Dilation is the set of all points in the image where the structuring element "touches" the foreground.
• Consider each pixel in the input image: if the structuring element touches the foreground, write a "1" at the origin of the structuring element.
• Input: a binary image and a structuring element containing only 1s.
The image gets lighter, with a more uniform intensity.
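
A minimal sketch of binary erosion and dilation with scipy.ndimage; the 7x7 test image and the 3x3 structuring element of 1s are made up for illustration:

import numpy as np
from scipy import ndimage

# Binary image: True (1) = foreground, False (0) = background.
img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True                     # a 3x3 foreground square

se = np.ones((3, 3), dtype=bool)         # structuring element containing only 1s

eroded = ndimage.binary_erosion(img, structure=se)    # foreground shrinks to 1 pixel
dilated = ndimage.binary_dilation(img, structure=se)  # foreground grows to 5x5

print(int(eroded.sum()), int(dilated.sum()))          # 1 25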

Edge Detection
1. Dilate the input image.
2. Subtract the input image from the dilated image.
3. The edges remain.
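
The same three steps in code, reusing the square test image from the sketch above:

import numpy as np
from scipy import ndimage

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True                          # 3x3 foreground square

dilated = ndimage.binary_dilation(img, structure=np.ones((3, 3), dtype=bool))
edges = dilated & ~img                        # subtract the input from the dilated image
print(int(edges.sum()))                       # 16 boundary pixels remain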

Opening
Opening is erosion followed by dilation; it is used for spot and noise removal.
• eliminates protrusions
• breaks necks
• smooths the contour

Closing
Closing is dilation followed by erosion.
• smooths the contour
• fuses narrow breaks and long thin gulfs
• eliminates small holes
• fills gaps in the contour
Closing closes small holes in the foreground.
Opening is the dual of closing, i.e. opening the foreground pixels with a particular structuring element is equivalent to closing the background pixels with the same element.
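
A sketch of opening and closing with scipy.ndimage on a made-up binary image containing a noise pixel and a one-pixel hole:

import numpy as np
from scipy import ndimage

img = np.zeros((13, 13), dtype=bool)
img[3:10, 3:10] = True        # 7x7 foreground square
img[6, 6] = False             # small hole inside it
img[0, 12] = True             # isolated noise pixel

se = np.ones((3, 3), dtype=bool)

opened = ndimage.binary_opening(img, structure=se)   # erosion then dilation
closed = ndimage.binary_closing(img, structure=se)   # dilation then erosion

print(img[0, 12], opened[0, 12])   # True False  -> the noise spot is removed
print(img[6, 6], closed[6, 6])     # False True  -> the small hole is filled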

Hit-and-miss Transform
• Used to look for particular patterns of foreground and background pixels; a very simple form of object recognition.
• All other morphological operations can be derived from it.
• Input: a binary image and a structuring element containing 0s and 1s.
Example of a hit-and-miss structuring element:
• contains 0s, 1s and don't-cares
• usually has a "1" at the origin.
If the foreground and background pixels in the structuring element exactly match the foreground and background pixels in the image, then the pixel underneath the origin of the structuring element is set to the foreground color.
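
A sketch with scipy.ndimage.binary_hit_or_miss, which takes one structuring element for the foreground pattern and one for the background pattern; the pattern below (a lone foreground pixel surrounded by background) is just an example:

import numpy as np
from scipy import ndimage

img = np.zeros((7, 7), dtype=bool)
img[1, 1] = True              # isolated pixel: should match
img[3:6, 3:6] = True          # solid 3x3 block: should not match

hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]], dtype=bool)    # foreground pattern: a "1" at the origin
miss = ~hit                                # background pattern: all 8 neighbours are 0

found = ndimage.binary_hit_or_miss(img, structure1=hit, structure2=miss)
print(np.argwhere(found))     # [[1 1]] -- only the isolated pixel matches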

Thinning
1. Used to remove selected foreground pixels from binary images.
2. After edge detection, lines are often thicker than one pixel.
3. Thinning can be used to thin those lines to one-pixel width.

Thickening
•Used to grow selected regions of foreground pixels
• E.g. in applications such as approximation of the convex hull.

Image Restoration
• The process of removing or reducing degradation in an image through linear or non-linear filtering.
• The aim is to bring the image towards what it would have been if it had been recorded without degradation.

Texture
• Texture is a feature used to partition images into regions of interest and to classify those
regions.
• Texture provides information about the spatial arrangement of colors or intensities in an image.
• Texture is characterized by the spatial distribution of intensity levels in a neighborhood.
• Texture is a repeating pattern of local variations in image intensity:
– Texture cannot be defined for a point.
– Texture consists of texture primitives or texture elements, sometimes called texels.
– Texture can be described as fine, coarse, grained, smooth, etc.
– Such features are found in the tone and structure of a texture.
– Tone is based on pixel intensity properties in the texel, whilst structure represents the spatial relationship between texels.
– If texels are small and tonal differences between texels are large, a fine texture results.
– If texels are large and consist of several pixels, a coarse texture results.
There are two primary issues in texture analysis: texture classification and texture segmentation.
Texture classification is concerned with identifying a given textured region from a given set of texture classes.
– Each of these regions has unique texture characteristics.
– Statistical methods are extensively used, e.g. GLCM-based measures such as contrast, entropy and homogeneity.
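
A sketch of such a statistical description using a gray-level co-occurrence matrix; scikit-image's graycomatrix/graycoprops are assumed here, and entropy is computed by hand since it is not one of graycoprops' built-in properties:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Tiny 4-level test patch (values 0..3), made up for illustration.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]], dtype=np.uint8)

# Co-occurrence counts for horizontally adjacent pixels (distance 1, angle 0).
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=4,
                    symmetric=True, normed=True)

contrast = graycoprops(glcm, 'contrast')[0, 0]
homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]

p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

print(contrast, homogeneity, entropy)
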
Defining Texture:

There are three approaches to defining exactly what texture is:


1. Structural: texture is a set of primitive texels in some regular or repeated relationship.
2. Statistical: texture is a quantitative measure of the arrangement of intensities in a
region. This set of measurements is called a feature vector.
3. Modelling: texture modelling techniques involve constructing models to specify
textures.

Defining Texture
Statistical methods are particularly useful when the texture primitives are small, resulting
in micro-textures.
When the size of the texture primitive is large, first determine the shape and properties of
the basic primitive and the rules which govern the placement of these primitives, forming
macrotextures.

Weak and strong textures


Weak textures
• have small spatial interactions between primitives, can be described by
frequencies of primitive types appearing in some neighborhood;
• statistical approaches are adequate for weak texture description.
Strong textures
• spatial interactions among primitives are somewhat regular;
• frequency of occurrence of primitive pairs in some spatial relationship can be used
for description;
• frequently accompanied by exact definition of texture primitives.

Definition of a constant texture:


• An image region has a constant texture if a set of its local properties in that region is constant, slowly changing, or approximately periodic.

Two basic texture description approaches:


• statistical
• syntactic
Statistical Approach
• A statistical approach sees an image texture as a quantitative measure of the
arrangement of intensities in a region. In general this approach is easier to
compute and is more widely used, since natural textures are made of patterns of
irregular subelements.

K-means clustering:
1. An unsupervised machine learning algorithm.
2. A cluster refers to a collection of data points aggregated together because of certain similarities.
3. You define a target number k, which refers to the number of centroids you need in the dataset.
4. A centroid is the imaginary or real location representing the center of a cluster.
5. The algorithm then allocates every data point to the nearest cluster, while keeping the clusters as compact as possible.
6. The distance of each point in the dataset is computed relative to the K assumed cluster centroids, and each point is assigned to the nearest cluster based on proximity.
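
A minimal NumPy sketch of the loop described above, clustering one-dimensional pixel intensities into k groups (in practice a library implementation such as scikit-learn's KMeans would normally be used):

import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Pick k initial centroids at random from the data.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # 2. Assign every point to its nearest centroid.
        distances = np.abs(points[:, None] - centroids[None, :])
        labels = distances.argmin(axis=1)
        # 3. Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean()
    return labels, centroids

# Example: cluster pixel intensities into k = 2 groups (dark vs. bright).
pixels = np.array([12.0, 15.0, 20.0, 200.0, 210.0, 190.0])
labels, centroids = kmeans(pixels, k=2)
print(labels, centroids)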
