Unit 4
Amplitude features: e.g. the brightness levels can identify regions of interest in the
image.
Amplitude features may be discriminative enough when intensity alone is enough to
distinguish the wanted information from the rest of the scene.
=> Defining the best parameters of the transformation for feature extraction is the most
difficult part.
=> The amplitude feature space representation is not necessarily binary; it only requires
that unwanted parts of the scene be represented uniformly (e.g. as black) in the feature
space.
=> Sometimes adaptive thresholding / adaptive grey-scale slicing is needed.
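As a sketch of the adaptive-thresholding idea above (pure NumPy; the function name, neighbourhood size, and offset parameter are illustrative choices, not from the text):

```python
import numpy as np

def adaptive_threshold(img, block=1, offset=0.0):
    """Threshold each pixel against the mean of its (2*block+1)^2
    neighbourhood, so the decision adapts to local brightness.
    Pixels brighter than local mean + offset map to 1, else 0."""
    padded = np.pad(img.astype(float), block, mode="edge")
    h, w = img.shape
    local_mean = np.zeros((h, w))
    for dy in range(-block, block + 1):
        for dx in range(-block, block + 1):
            local_mean += padded[block + dy: block + dy + h,
                                 block + dx: block + dx + w]
    local_mean /= (2 * block + 1) ** 2
    return (img > local_mean + offset).astype(np.uint8)

# A bright patch on a dark background is picked out even though no
# single global threshold is assumed.
img = np.zeros((10, 10))
img[4:7, 4:7] = 10.0
bw = adaptive_threshold(img, block=1)
```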
[Figure: region-of-interest (ROI) measurements and ROI histogram — the tissue of
interest is well discriminated from the microscopic slide by the standard deviation
of the local histogram.]
TRANSFORM FEATURES
EDGE DETECTION
Edge detection significantly reduces the amount of data and filters out
useless information, while preserving the important structural
properties in an image.
Edges are boundaries between different textures.
An edge can also be defined as a discontinuity in image intensity from one
pixel to another.
Edges are important image characteristics, since they correspond to
high-frequency content.
Edge detection may help with image segmentation and data compression, and
also with matching tasks such as image reconstruction.
Edge detection is difficult in noisy images, since both the noise and the
edges contain high-frequency content.
Edge Detectors
Roberts
Sobel
Prewitt
Laplacian and
Canny
Sobel Edge Detector
The salient features of the Sobel edge detector are listed as follows:
It has two 3×3 convolution kernels or masks, Gx and Gy, as shown in fig 1.
Gx and Gy can be combined to find the absolute magnitude and the
orientation of the gradient.
Used to detect edges along the horizontal and vertical axes.
Based on convolving the image with small, integer-valued filters (3×3
kernels) in both the horizontal and vertical directions, so the detector
requires little computation.
The Sobel masks search for edges in the horizontal and vertical
directions and then combine this information into a single metric.
At every pixel, the image gradient is calculated, giving the direction of
the maximum possible change from white (light) to black (dark) and the
rate of change in that direction.
Advantages: simplicity; detection of edges and their orientations.
Limitations: sensitivity to noise; inaccurate.
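The description above can be sketched in Python with NumPy (function names are illustrative; a real implementation would use an optimised convolution routine):

```python
import numpy as np

# Sobel masks Gx and Gy, as described in the text (cf. fig 1).
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def convolve3x3(img, k):
    """Valid-mode 3x3 sliding-window correlation (the sign flip of true
    convolution does not affect the gradient magnitude)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * k)
    return out

def sobel(img):
    """Combine the two directional responses into the gradient
    magnitude and orientation at every interior pixel."""
    gx = convolve3x3(img.astype(float), GX)
    gy = convolve3x3(img.astype(float), GY)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# A vertical step edge: magnitude is large where the step is crossed
# and zero in the flat regions.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag, theta = sobel(img)
```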
The salient features of the Prewitt edge detector are listed as follows:
This edge detector is very similar to the Sobel operator.
Advantages: simplicity; detection of horizontal and vertical edges and their
orientations.
Limitations: sensitivity to noise; inaccurate.
The kernel used in the Prewitt detector is shown in fig 2.
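For comparison with the Sobel masks, the Prewitt masks (cf. fig 2) use uniform weights, i.e. no extra emphasis on the centre row or column (variable names here are illustrative):

```python
import numpy as np

PREWITT_GX = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)
PREWITT_GY = np.array([[-1, -1, -1],
                       [ 0,  0,  0],
                       [ 1,  1,  1]], dtype=float)

# Response of one 3x3 window straddling a vertical step edge.
step = np.zeros((3, 4))
step[:, 2:] = 1.0
gx = float(np.sum(step[:, 0:3] * PREWITT_GX))  # -> 3.0 across the step
```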
LAPLACIAN OF GAUSSIAN EDGE DETECTOR
The salient features of the Laplacian of Gaussian (LoG) edge detector are listed as follows:
Proposed by Marr and Hildreth in 1980.
It is a combination of Gaussian filtering and the Laplacian gradient
operator.
The Laplacian operator highlights regions of rapid intensity change, so it
is well suited to edge detection.
The image is first smoothed with a Gaussian filter to remove noise pixels,
and the Laplacian is then applied to the smoothed image.
The Laplacian of an image f(x, y) is given by
∇²f = ∂²f/∂x² + ∂²f/∂y²
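The Gaussian-then-Laplacian pipeline can be sketched as follows (pure NumPy; the kernel size, sigma, and the 4-neighbour Laplacian stencil are illustrative choices):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2-D Gaussian for the smoothing stage."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Discrete 4-neighbour approximation of the Laplacian operator.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def convolve(img, k):
    """Valid-mode sliding-window correlation."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

def log_edges(img, sigma=1.0):
    """Smooth first, then apply the Laplacian; edges correspond to
    zero-crossings in the response."""
    smoothed = convolve(img.astype(float), gaussian_kernel(5, sigma))
    return convolve(smoothed, LAPLACIAN)
```

A constant image produces a (near-)zero response everywhere, since the Laplacian of a constant is zero.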
The Canny edge detector can be used to identify a wide range of real edges
in images.
The detector eliminates unwanted noise pixels by smoothing the image before
edge detection, because noise pixels generate false edges.
Its signal-to-noise ratio is improved compared with other methods, which is
why this detector is so extensively used for edge detection in image
processing.
The procedure to find edges in images is explained as follows.
Initially the image is smoothed using a suitable filter, such as a mean
filter or Gaussian filter, to reduce the effect of noise.
Then the local gradient and edge direction are calculated at every point;
an edge point has maximum strength in the direction of the gradient.
These edge points give rise to ridges in the gradient-magnitude image.
The detector tracks along the top of these ridges and sets to zero all
pixels that are not actually on the top of the ridge, so a thin line is
generated in the output.
These ridge pixels are then thresholded using two threshold values: an
upper threshold (T2) and a lower threshold (T1).
Ridge pixels are classified as strong edge pixels if their values are
greater than the upper threshold (T2), and as weak edge pixels if their
values lie between the lower threshold (T1) and the upper threshold (T2).
Finally, the edges in the image are linked by including the weak pixels
that are connected to strong pixels.
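The two-threshold classification and edge-linking step at the end can be sketched as follows (pure NumPy; the function name and the iterative flood of strong labels are illustrative simplifications of the full Canny tracking):

```python
import numpy as np

def hysteresis(mag, t_low, t_high):
    """Classify ridge pixels with two thresholds, then keep weak pixels
    only if they are 8-connected to a strong pixel (directly or through
    a chain of weak pixels)."""
    strong = mag > t_high
    weak = (mag > t_low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:
        changed = False
        padded = np.pad(edges, 1)          # False border
        neigh = np.zeros_like(edges, dtype=bool)
        for dy in (-1, 0, 1):              # OR of the 8 neighbours
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neigh |= padded[1 + dy:1 + dy + edges.shape[0],
                                1 + dx:1 + dx + edges.shape[1]]
        new = edges | (weak & neigh)       # grow into adjacent weak pixels
        if not np.array_equal(new, edges):
            edges, changed = new, True
    return edges

# One weak pixel touches a strong pixel and survives; an isolated
# weak pixel is discarded.
mag = np.array([[0.0, 3.0, 5.0],
                [0.0, 0.0, 0.0],
                [0.0, 3.0, 0.0]])
edges = hysteresis(mag, t_low=2.0, t_high=4.0)
```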
What is Texture?
No one exactly knows.
In the visual arts, texture is the perceived surface quality of an artwork.
"Texture" is an ambiguous word and in the context of texture synthesis may
have one of the following meanings:
1. In common speech, "texture" is used as a synonym for "surface
structure".
Texture has been described by five different properties in the
psychology of perception: coarseness, contrast, directionality, line-
likeness and roughness [1].
2. In 3D computer graphics, a texture is a digital image applied to the surface
of a three-dimensional model by texture mapping to give the model a more
realistic appearance. Often, the image is a photograph of a "real" texture,
such as wood grain.
3. In image processing, a digital image composed of repeated
elements is called a "texture."
Stochastic textures. Images of stochastic textures look like noise: colour dots
randomly scattered over the image, characterised only by attributes such as
minimum and maximum brightness and average colour. Many textures look like
stochastic textures when viewed from a distance. An example of a stochastic texture
is roughcast.
[Figure: examples of artificial textures and natural textures.]
STRUCTURE
MORPHOLOGICAL OPERATIONS
Skeletonization
To reduce all objects in an image to lines, without changing the essential
structure of the image, use the bwmorph function. This process is known as
skeletonization.
SCENE MATCHING AND DETECTION
Image Subtraction
Subtract one image from another, or subtract a constant from an image.
Syntax: Z = imsubtract(X,Y)
Description
Z = imsubtract(X,Y) subtracts each element in array Y from the
corresponding element in array X and returns the difference in the
corresponding element of the output array Z.
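A NumPy analogue of this behaviour for uint8 images (MATLAB's imsubtract saturates negative differences at 0; the function name below mirrors it but is our own sketch):

```python
import numpy as np

def imsubtract(x, y):
    """Elementwise X - Y with negative results clipped to 0, mimicking
    MATLAB's saturating uint8 arithmetic."""
    diff = x.astype(np.int16) - np.asarray(y, dtype=np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Scene change detection: subtract a background frame from the current
# frame; unchanged pixels go to 0, changed pixels remain.
background = np.array([[10, 200], [30, 40]], dtype=np.uint8)
frame      = np.array([[12, 100], [30, 90]], dtype=np.uint8)
change = imsubtract(frame, background)   # [[2, 0], [0, 50]]
```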
IMAGE SEGMENTATION
Components of PR system
Good features:
• Objects from the same class have similar feature values.
• Objects from different classes have different values.
Face Recognition
User-friendly pattern recognition application
Weaknesses of face recognition:
Illumination problems
Pose problems (profile or frontal view)
Eigenspace-based approach:
A holistic approach
Reduces the high-dimensionality problem and the large computational
complexity.
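The dimensionality reduction behind the eigenspace (eigenfaces) approach can be sketched with a tiny PCA (pure NumPy; the data and function name are hypothetical toy values, not real face images):

```python
import numpy as np

def eigenspace(images, k=2):
    """Rows of `images` are flattened face images. Projecting onto the
    top-k eigenvectors of the covariance matrix gives a low-dimensional
    code for each face."""
    mean = images.mean(axis=0)
    centred = images - mean
    # Eigen-decomposition of the (d x d) covariance; fine for small d.
    cov = centred.T @ centred / len(images)
    vals, vecs = np.linalg.eigh(cov)      # ascending eigenvalues
    basis = vecs[:, ::-1][:, :k]          # top-k "eigenfaces"
    return mean, basis, centred @ basis   # per-image codes

# Three 4-pixel "faces"; all variation lies along one direction, so a
# single component reconstructs them exactly.
faces = np.array([[1., 2., 3., 4.],
                  [2., 3., 4., 5.],
                  [4., 5., 6., 7.]])
mean, basis, codes = eigenspace(faces, k=1)
```

Each face is now described by one number instead of four, which is the computational saving the slide refers to.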
Approaches
12 Mark Questions