What Is A Digital Image: Pixel values typically represent gray levels, colours, heights, opacities, etc.
Sampling:
Sampling is measuring the value of an image at a finite number of points
Quantization:
Quantization is representing the sampled value of the image with an integer. The number of gray levels determines the number of bits per pixel.
E.g. 256 gray levels => 8 bits per pixel
64 gray levels => 6 bits per pixel
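The mapping between gray levels and bits per pixel can be sketched in a few lines of NumPy; the sample image values here are purely illustrative.

```python
import numpy as np

# Hypothetical 8-bit grayscale image (256 levels, values 0..255).
img = np.array([[0, 64, 128],
                [192, 255, 32]], dtype=np.uint8)

# Requantize from 256 gray levels (8 bits) to 64 levels (6 bits)
# by dropping the two least significant bits of each pixel.
levels = 64
step = 256 // levels          # = 4
quantized = (img // step) * step
```

Each pixel is snapped down to the nearest multiple of 4, so only 64 distinct values remain.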
Point processing:
This is defined as a mathematical operation performed at every pixel (point) of an image.
Intensity Transformations:
These are operations performed on an input image to get the desired intensity-transformed output image. A transformation s = T(r) maps each input intensity r to an output intensity s.
The values are mapped so that the intensity levels in the input and output image are related by a mathematical function; the range of pixel intensities is altered.
Contrast stretching:
Contrast stretching increases the dynamic range of pixel values so that they span the full range of available intensities (brightening dim images).
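A minimal linear contrast-stretch sketch in NumPy; the low-contrast input values are assumed for illustration.

```python
import numpy as np

# Hypothetical low-contrast image occupying only [50, 180] of the 0-255 range.
img = np.array([[50, 100],
                [150, 180]], dtype=np.float64)

lo, hi = img.min(), img.max()
# Linearly map the narrow range [lo, hi] onto the full [0, 255] range.
stretched = (img - lo) / (hi - lo) * 255.0
```

After stretching, the darkest pixel becomes 0 and the brightest 255, so the image uses the full dynamic range.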
Intensity level Slicing:
A certain range of pixel intensities is mapped to the maximum intensity; the rest of the pixels remain unchanged.
This highlights pixels that lie in the range of interest.
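Intensity-level slicing reduces to a single masked assignment in NumPy; the band [100, 150] below is an assumed range of interest, not from the notes.

```python
import numpy as np

img = np.array([10, 120, 140, 200], dtype=np.uint8)

lo, hi = 100, 150                         # assumed range of interest
sliced = img.copy()
sliced[(img >= lo) & (img <= hi)] = 255   # push the band to maximum intensity
# Pixels outside [lo, hi] keep their original values.
```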
Histogram Equalization:
Contrast is adjusted using the histogram of an image.
The method involves effectively spreading out the most frequently occurring intensity
values. This increases the global contrast of the image.
Areas of lower local contrast gain higher contrast.
Segmentation:
Segmentation is to subdivide an image into its component regions or objects.
Point detection / Line detection / Edge detection
Carried out by convolution of an image using a mask or a kernel
Image gradient:
Used to find the edge strength and direction.
Points in the direction of greatest rate of change
MORPHOLOGY:
Morphology: a branch of biology that deals with the form and structure of animals and
plants.
Morphological image processing is used to extract image components for
representation and description of region shape, such as boundaries, skeletons.
Morphological image processing is a collection of non linear operations related to the
shape or morphology of features in an image
Morphological operations rely only on the relative ordering of pixel values, not on their
numerical values, and therefore are especially suited to the processing of binary images
Morphological techniques probe an image with a small shape or template called a
structuring element. The structuring element is a small binary image, i.e. a small matrix
of pixels, each with a value of zero or one.
Structuring Element
Morphological techniques probe an image with a small shape or template called a
structuring element
The structuring element is positioned at all possible locations in the image and it is
compared with the corresponding neighborhood of pixels
2. Dilation
•Dilation enlarges foreground, shrinks background
Dilation is the set of all points in the image, where the structuring element “touches” the
foreground.
•Consider each pixel in the input image–If the structuring element touches the
foreground image, write a “1” at the origin of the structuring element!
•Input:–Binary Image–Structuring Element, containing only 1s!!
Image get lighter, more uniform intensity
Edge Detection
1.Dilate input image
2.Subtract input image from dilated image
3.Edges remain!
Opening
Erosion followed by dilation
Spot and noise removal
•eliminates protrusions
•breaks necks
•smoothens contour
Closing
dilation followed by erosion
•smooth contour
•fuse narrow breaks and long thin gulfs
•eliminate small holes
•fill gaps in the contour
Closes small holes in the foreground
Opening is the dual of closing
•i.e. opening the foreground pixels with a particular structuring element
•is equivalent to closing the background pixels with the same element.
Hit-and-miss Transform
•Used to look for particular patterns of foreground and background pixels
•Very simple object recognition
•All other morphological operations can be derived from it!!
•Input: –Binary Image–Structuring Element, containing 0s and 1s!!
Example for a Hit-and-miss Structuring Element
•Contains 0s, 1s and don’t care’s
.•Usually a “1” at the origin!
If foreground and background pixels in the structuring element exactly match foreground
and background pixels in the image, then the pixel underneath the origin of the
structuring element is set to the foreground color.
Thinning
1.Used to remove selected foreground pixels from binary images
2.After edge detection, lines are often thicker than one pixel.
3.Thinning can be used to thin those line to one pixel width.
Thickening
•Used to grow selected regions of foreground pixels
•E.g. applications like approximation of convex hull
Image Restoration
•Process of removal or reduction of degradation in an image through linear or non linear
filtering
•Aim is to bring image towards what it would have been if it had been recorded without
degradation.
Texture
• Texture is a feature used to partition images into regions of interest and to classify those
regions.
• Texture provides information in the spatial arrangement of colors or intensities in an
image.
• Texture is characterized by the spatial distribution of intensity levels in a neighborhood.
• Texture is a repeating pattern of local variations in image intensity: – Texture cannot
be defined for a point.
Texture consists of texture primitives or texture elements, sometimes called texels. –
Texture can be described as fine, coarse, grained, smooth, etc.
– Such features are found in the tone and structure of a texture.
– Tone is based on pixel intensity properties in the texel, whilst structure represents the
spatial relationship between texels.
– If texels are small and tonal differences between texels are large a fine texture results.
– If texels are large and consist of several pixels, a coarse texture results.
There are two primary issues in texture analysis:
• n texture classification or texture segmentation.
Texture classification is concerned with identifying a given textured region from a given
set of texture classes.
– Each of these regions has unique texture characteristics.
– Statistical methods are extensively used. e.g. GLCM, contrast, entropy, homogeneity
Defining Texture:-
Defining Texture
Statistical methods are particularly useful when the texture primitives are small, resulting
in micro-textures.
When the size of the texture primitive is large, first determine the shape and properties of
the basic primitive and the rules which govern the placement of these primitives, forming
macrotextures.
K means clustering :
1. Unsupervised machine learning algorigthm
2. A cluster refers to a collection of data points aggregated together because of
certain similarities.
3. You’ll define a target number k, which refers to the number of centroids you need
in the dataset.
4. A centroid is the imaginary or real location representing the center of the cluster.
5. and then allocates every data point to the nearest cluster, while keeping the
centroids as small as possible.
6. The distance of each point in dataset is computed relative to the K cluster centroids
that are assumed, and based on proximity they are assigned to that cluster.