
IMAGE AND VIDEO PROCESSING

Edge Detection and Image Segmentation


Unit- V & VI

Principal approaches
➢ The goal of segmentation is to subdivide an image into its component regions or objects.
➢ Segmentation algorithms generally are based on two basic properties of intensity values:
❖ Discontinuity: partition an image based on sharp changes in intensity (such as edges).
❖ Similarity: partition an image into regions that are similar according to a set of predefined criteria.
Detection of Discontinuities
➢ Detect the three basic types of gray-level discontinuities: points, lines, and edges.
➢ The common way is to run a mask through the image (see the sketch below).
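A minimal sketch of what "running a mask through the image" means: at every pixel the response R is the sum of products of the mask coefficients and the gray levels under the mask. Pure NumPy; the function name and the edge-padding choice are my own.

```python
import numpy as np

def mask_response(image, mask):
    """Return the response R of `mask` centered at every pixel of `image`."""
    m, n = mask.shape
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(image.astype(float), ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    R = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            region = padded[y:y + m, x:x + n]   # window centered at (y, x)
            R[y, x] = np.sum(region * mask)     # sum of products
    return R
```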
Point Detection
➢ A point has been detected at the location on which the mask is centered if
   |R| ≥ T
➢ where
❖ T is a non-negative threshold.
❖ R is the sum of products of the mask coefficients with the gray levels contained in the region encompassed by the mask.

Laplacian Operation
• Point detection can be implemented with a Laplacian mask; the only difference is that |R| ≥ T must hold for the detected points to be considered isolated points (a minimal sketch is given below).
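A hedged sketch of point detection as described above: compute the response R of a Laplacian-type mask at every pixel and keep the locations where |R| ≥ T. The specific 8-neighbour Laplacian mask and the helper names are assumptions, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import correlate

# 8-neighbour Laplacian-type point-detection mask (assumed here)
LAPLACIAN_8 = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

def detect_points(image, T):
    """Return a binary map of isolated points: 1 where |R| >= T, else 0."""
    R = correlate(image.astype(float), LAPLACIAN_8, mode="reflect")
    return (np.abs(R) >= T).astype(np.uint8)
```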
Example
Figure: X-ray image of a turbine blade (left), result of point detection (center), result of thresholding (right).
Line Detection
➢ E.g., the horizontal mask gives its maximum response when a line passes through the middle row of the mask against a constant background.
➢ The preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than the other possible directions.
Line Detection
➢ Apply every mask to the image.
➢ Let R1, R2, R3, R4 denote the responses of the horizontal, +45 degree, vertical and -45 degree masks, respectively.
➢ If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i (see the sketch below).
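A sketch of the directional line detection just described, using the standard 3x3 line masks with weight 2 along the preferred direction; a pixel is assigned to the direction whose mask gives the largest |Ri|. The function and dictionary names are my own.

```python
import numpy as np
from scipy.ndimage import correlate

LINE_MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def line_responses(image):
    """Return |R_i| for every mask; the argmax over masks gives the likely line direction."""
    f = image.astype(float)
    return {name: np.abs(correlate(f, mask.astype(float), mode="reflect"))
            for name, mask in LINE_MASKS.items()}
```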
Example
Figure: Binary image of a wire-bond mask; result after filtering with the -45° line detector; result of thresholding the filtered image.
Edge Detection - Ideal and Ramp Edges
➢ In practice, ideal step edges become ramp edges because of optics, sampling, and other image acquisition imperfections.
Thick edge
➢ We no longer have a thin (one pixel thick) path.
➢ Instead, an edge point now is any point contained in the ramp, and an edge would then be a set of such points that are connected.
➢ The thickness of the edge is determined by the length of the ramp.
➢ The length is determined by the slope, which is in turn determined by the degree of blurring.
➢ Blurred edges tend to be thick and sharp edges tend to be thin.
First and Second derivatives
➢ The signs of the derivatives would be reversed for an edge that transitions from light to dark.
➢ We have already seen how derivatives are used to find discontinuities:
• The 1st derivative tells us where an edge is.
• The 1st derivative can also be used to show edge direction.
Second derivatives
➢ Produce two values for every edge in an image (an undesirable feature).
➢ An imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge (the zero-crossing property).
❖ Quite useful for locating the centers of thick edges, as the sketch below illustrates.
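An illustrative sketch of the zero-crossing property, assuming a 3x3 Laplacian as the second-derivative operator (my choice): pixels where the response changes sign between horizontal or vertical neighbours are marked as edge-center candidates.

```python
import numpy as np
from scipy.ndimage import correlate

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def zero_crossings(image):
    """Mark pixels where the second-derivative response changes sign."""
    r = correlate(image.astype(float), LAPLACIAN, mode="reflect")
    zc = np.zeros(r.shape, dtype=np.uint8)
    zc[:, :-1] |= ((r[:, :-1] * r[:, 1:]) < 0).astype(np.uint8)   # sign change to the right
    zc[:-1, :] |= ((r[:-1, :] * r[1:, :]) < 0).astype(np.uint8)   # sign change downward
    return zc
```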
Figure: First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0 and 10.0, respectively; second and third columns: corresponding first and second derivatives.
Notes
➢ Even fairly little noise can have a significant impact on the two key derivatives used for edge detection in images.
❖ The second derivative is more sensitive to noise.
➢ Image smoothing should be a serious consideration prior to the use of derivatives.
Gradient Operator

➢ First derivatives are implemented using the magnitude of the gradient.

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

∇f = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)

➢ This is commonly approximated as ∇f ≈ |Gx| + |Gy| (the magnitude becomes nonlinear).

➢ The direction of the gradient vector is the angle α(x,y) given by α(x,y) = tan⁻¹(Gy/Gx).
Gradient Masks
Figure: Prewitt and Sobel gradient masks.

Figure: Prewitt and Sobel masks for detecting diagonal edges (see the sketch below).
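A hedged sketch of gradient-based edge strength using the standard 3x3 Sobel masks (the Prewitt masks are the same with the 2s replaced by 1s), applying the |Gx| + |Gy| approximation and the gradient angle from the previous slide. The function name and the use of scipy.ndimage are my own choices.

```python
import numpy as np
from scipy.ndimage import correlate

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # Gx
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)   # Gy

def sobel_gradient(image):
    """Return the approximate gradient magnitude and the gradient angle alpha(x, y)."""
    f = image.astype(float)
    gx = correlate(f, SOBEL_X, mode="reflect")
    gy = correlate(f, SOBEL_Y, mode="reflect")
    magnitude = np.abs(gx) + np.abs(gy)     # |Gx| + |Gy| approximation
    angle = np.arctan2(gy, gx)              # alpha(x, y) = tan^-1(Gy / Gx)
    return magnitude, angle
```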
Example
Figure: Original image, horizontal gradient component, vertical gradient component, and combined edge image.
Edge linking and boundary detection

➢ Ideally, edge detection techniques yield pixels lying only on the boundaries between regions.
➢ In practice, this pixel set seldom characterizes a boundary completely because of
❖ noise,
❖ breaks in the boundary due to non-uniform illumination,
❖ other effects that introduce spurious discontinuities.
➢ Thus, edge detection algorithms are usually followed by linking and other boundary detection procedures designed to assemble edge pixels into meaningful boundaries.
Edge linking: local processing
➢ Basic idea:
❖ Analyze the characteristics of pixels in a small neighborhood (3x3, 5x5, etc.) for every point (x,y) that has undergone edge detection.
❖ All points that are "similar" are linked, forming a boundary of pixels that share some common property.
➢ Two principal properties for establishing similarity:
❖ The strength of the response of the gradient operator used to produce the edge pixels, ∇f = mag(∇f) = [Gx² + Gy²]^(1/2)
❖ The direction of the gradient, α(x,y)
Edge linking: local processing (cont.)
➢ An edge pixel at (x',y') in the neighborhood centered at (x,y) is similar in magnitude to the pixel at (x,y) if
   |∇f(x,y) − ∇f(x',y')| ≤ T
❖ where T is a predetermined threshold.
➢ An edge pixel at (x',y') in the neighborhood centered at (x,y) is similar in angle to the pixel at (x,y) if
   |α(x,y) − α(x',y')| < A
❖ where A is a predetermined angle threshold.
➢ The edge pixel at (x',y') is linked with (x,y) if both the magnitude and angle criteria are satisfied (see the sketch below).
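A sketch of edge linking by local processing under the two criteria above: neighbouring edge pixels are linked when their gradient magnitudes differ by at most T and their gradient angles by at most A. The flood-fill structure, the default values and the names are my own; angle wrap-around at ±π is ignored in this sketch.

```python
import numpy as np

def link_edges(magnitude, angle, edge_mask, T=25.0, A=np.deg2rad(15), radius=1):
    """Label edge pixels; neighbours satisfying both similarity criteria share a label."""
    h, w = edge_mask.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if not edge_mask[y, x] or labels[y, x]:
                continue
            next_label += 1
            labels[y, x] = next_label
            stack = [(y, x)]
            while stack:                      # flood fill over "similar" edge pixels
                cy, cx = stack.pop()
                for ny in range(max(0, cy - radius), min(h, cy + radius + 1)):
                    for nx in range(max(0, cx - radius), min(w, cx + radius + 1)):
                        if (edge_mask[ny, nx] and not labels[ny, nx]
                                and abs(magnitude[ny, nx] - magnitude[cy, cx]) <= T
                                and abs(angle[ny, nx] - angle[cy, cx]) <= A):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
    return labels
```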
Thresholding
➢ Thresholding is the simplest segmentation method.
➢ The pixels are partitioned depending on their intensity value.
➢ Global thresholding, using an appropriate threshold T (see the sketch below):
   g(x,y) = 1 if f(x,y) > T, and g(x,y) = 0 if f(x,y) ≤ T
➢ Variable thresholding, if T can change over the image:
❖ Local or regional thresholding, if T depends on a neighborhood of (x,y).
❖ Adaptive thresholding, if T is a function of (x,y).
➢ Multiple thresholding (more than one threshold, giving more than two classes).
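A minimal sketch of global thresholding as defined above, g(x,y) = 1 where f(x,y) > T and 0 otherwise; the function name is my own.

```python
import numpy as np

def global_threshold(image, T):
    """Binary segmentation: 1 where the intensity exceeds T, else 0."""
    return (image > T).astype(np.uint8)

# Variable thresholding would instead compute T from a neighborhood of each
# pixel (e.g. a local mean), which is the local/adaptive case described above.
```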
Thresholding based on boundaries
➢ An important aspect of threshold selection is the ability to reliably identify mode peaks in a given histogram.
➢ The chances of selecting a "good" threshold are enhanced if the mode peaks are
❖ tall,
❖ narrow,
❖ symmetric,
❖ and separated by deep valleys.
➢ One approach for improving the histogram shape is to consider only those pixels that lie on or near a boundary between objects and the background.
Choosing Thresholding
Figure: Histograms of an image with a dark background and a light object, and of an image with a dark background and two light objects.

Peaks and valleys of the image histogram can help in choosing the appropriate value for the threshold(s).
Some factors affect the suitability of the histogram for guiding the choice of the threshold:
• the separation between peaks;
• the noise content in the image;
• the relative size of objects and background;
• illumination and reflection.
Noise role in Thresholding
Figure: Effect of noise on the histogram; when the object and background are well separated, global thresholding can be used easily.
The Role of Illumination

f(x,y) = i(x,y) r(x,y)

Figure:
a) computer generated reflectance function
b) histogram of the reflectance function
c) computer generated illumination function (poor)
d) product of a) and c)
e) histogram of the product image, which is difficult to segment


Reason for histogram distortion

Image formation model


❖f(x,y)=i(x,y) r(x,y)

illumination: reflectance:
Slow spatial variations vary abruptly, particularly
at the junctions of dissimilar
objects

z(x,y)= iʹ(x,y)+r̕ (x,y)


32
Global thresholding

Otsu's method
➢ Uses the gray-scale histogram of an image.
➢ Separates the two regions with maximum inter-class (between-class) variance.
➢ Select a threshold T(k) = k, 0 < k < L-1.
❖ This defines two classes, c1 and c2, where c1 contains the intensity values in the range [0, k] and c2 the pixels in [k+1, L-1].
➢ The probability P1(k) that a pixel is assigned to class c1 is given by the cumulative sum
   P1(k) = Σ_{i=0}^{k} p_i, where p_i is the normalized histogram.
➢ Similarly, P2(k) = Σ_{i=k+1}^{L-1} p_i = 1 − P1(k).
➢ The mean intensity value of the pixels in c1 is
   m1(k) = (1/P1(k)) Σ_{i=0}^{k} i·p_i
➢ The mean intensity value of the pixels assigned to class c2 is
   m2(k) = (1/P2(k)) Σ_{i=k+1}^{L-1} i·p_i
➢ The cumulative mean up to level k is
   m(k) = Σ_{i=0}^{k} i·p_i
➢ The average intensity of the entire image (i.e., the global mean) is
   mG = Σ_{i=0}^{L-1} i·p_i
➢ Hence P1(k)·m1(k) + P2(k)·m2(k) = mG, and the between-class variance can be written as
   σB²(k) = P1(m1 − mG)² + P2(m2 − mG)² = (mG·P1(k) − m(k))² / (P1(k)(1 − P1(k)))
➢ The optimum threshold k* is the value of k that maximizes σB²(k).
➢ Once k* has been obtained, the input image f(x,y) is segmented as
   g(x,y) = 1 if f(x,y) > k*, and g(x,y) = 0 if f(x,y) ≤ k* (see the sketch below).
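A hedged implementation sketch of Otsu's method using the quantities defined above: the normalized histogram p_i, the class probability P1(k), the cumulative mean m(k), the global mean mG, and the between-class variance σB²(k); k* is its argmax. L = 256 and the function name are assumptions.

```python
import numpy as np

def otsu_threshold(image, L=256):
    """Return the Otsu threshold k* that maximizes the between-class variance."""
    hist, _ = np.histogram(image.ravel(), bins=L, range=(0, L))
    p = hist.astype(float) / hist.sum()          # normalized histogram p_i
    P1 = np.cumsum(p)                            # class probability P1(k)
    m = np.cumsum(np.arange(L) * p)              # cumulative mean m(k)
    mG = m[-1]                                   # global mean
    denom = P1 * (1.0 - P1)
    sigma_b2 = np.zeros(L)
    valid = denom > 0                            # avoid division by zero at the ends
    sigma_b2[valid] = (mG * P1[valid] - m[valid]) ** 2 / denom[valid]
    return int(np.argmax(sigma_b2))

# Usage: segmented = (image > otsu_threshold(image)).astype(np.uint8)
```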
Problem
Figure: worked thresholding example.
Region-Based Segmentation

➢ Region-based segmentation is based on the connectivity of similar pixels in a region.
❖ Each region must be uniform.
❖ Connectivity of the pixels within the region is very important.
➢ Two main approaches to region-based segmentation:
❖ Region growing
❖ Region splitting and merging
Basic Formulation
➢ Let R represent the entire image region.
➢ Segmentation is a process that partitions R into subregions R1, R2, …, Rn, such that
   (a) ∪ Ri = R,
   (b) each Ri is a connected set,
   (c) Ri ∩ Rj = ∅ for all i ≠ j,
   (d) P(Ri) = TRUE for every region Ri,
   (e) P(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj,
➢ where P(Rk) is a logical predicate defined over the points in set Rk.
➢ For example: P(Rk) = TRUE if all pixels in Rk have the same gray level.
Region Growing
➢ The basic approach is to start with a set of "seed" points.
➢ Group pixels or subregions into larger regions based on predefined criteria, starting from the set of "seed" points.
➢ Grow by appending to each seed those neighbors that have similar properties, such as specific ranges of gray level.
➢ Several key factors: the selection of the "seed" points, the similarity criteria, the descriptors (based on intensity levels, such as moments or texture, and spatial properties), the stopping rule, and the adjacency definition.
➢ Note: descriptors alone can yield misleading results if connectivity or adjacency information is not used in the region-growing process.
➢ The selection of the seeds can be done manually or using automatic procedures based on appropriate criteria.
❖ A-priori knowledge can be included and is strictly application-dependent.


➢ It is difficult to segment the defects by thresholding methods.
➢ Applying region growing methods is better in this case.
➢ (a) X-ray image of a defective weld, f.
➢ (b) Histogram.
➢ (c) Initial seed image S.
➢ (d) Final seed image (after erosion).
➢ (e) Absolute value of the difference between the seed value (255) and (a), |f − S|.
➢ (f) Histogram of (e).
➢ (g) Difference image thresholded using dual thresholds.
➢ (h) |f − S| > 68.
➢ (i) Segmentation result obtained by region growing with the criterion Q: |f − S| ≤ 68.
➢ Criteria:
1. The absolute gray-level difference between any pixel and the seed has to be less than 68.
2. The pixel has to be 8-connected to at least one pixel in that region (if more than one, the regions are merged).
The segmented region
Figure: segmented region obtained with the condition: absolute difference ≤ 2.
Consider the 3-bit image below. Assume the seed value is 6. Segment using 8-connectivity and a threshold of 3 (see the sketch after the array).

2 2 7 2 1
1 7 6 6 2
7 6 6 5 7
2 4 5 4 2
1 2 5 1 1
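A sketch of region growing applied to the 5x5 example above, assuming the seeds are the pixels equal to 6 and that a neighbour joins the region when its absolute difference from the seed value is at most the threshold 3, with 8-connectivity; this reading of the criteria is my own interpretation of the exercise.

```python
import numpy as np
from collections import deque

IMG = np.array([[2, 2, 7, 2, 1],
                [1, 7, 6, 6, 2],
                [7, 6, 6, 5, 7],
                [2, 4, 5, 4, 2],
                [1, 2, 5, 1, 1]])

def region_grow(image, seed_value=6, thresh=3):
    """Grow a region from all pixels equal to seed_value using 8-connectivity."""
    h, w = image.shape
    candidate = np.abs(image.astype(int) - seed_value) <= thresh  # similarity criterion
    region = (image == seed_value)                                # seed pixels
    queue = deque(zip(*np.where(region)))
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):                                 # 8-connected neighbours
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and candidate[ny, nx] and not region[ny, nx]:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region.astype(np.uint8)

print(region_grow(IMG))
```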
Region Splitting and Merging

➢ Region splitting is the opposite of region growing.
❖ First consider a large region (possibly the entire image).
❖ Then a predicate (measurement) is used to determine if the region is uniform.
❖ If not, then the method requires that the region be split into two regions.
❖ Then each of these two regions is independently tested by the predicate (measurement).
❖ This procedure continues until all resulting regions are uniform.
Region Splitting
➢ The main problem with region splitting is determining where to split a region.
➢ One method to divide a region is to use a quadtree structure.
➢ Quadtree: a tree in which nodes have exactly four descendants.
Consider the 3-bit image below. Assume the threshold value is ≤ 4.

5 6 6 6 7 7 6 6
6 7 6 7 5 5 4 7
6 6 4 4 3 2 5 6
5 4 5 4 2 3 4 6
0 3 2 3 3 2 4 7
0 0 0 0 2 2 5 6
1 1 0 1 0 3 4 4
1 0 1 0 2 3 5 4
Region Splitting and Merging

➢ The split and merge procedure:
❖ Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE.
❖ Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE (the quadtree structure may not be preserved).
❖ Stop when no further merging or splitting is possible.
➢ Here we define P(Ri) = TRUE if at least 80% of the pixels in Ri have the property
❖ |zj – mi| ≤ 2σi, where zj is the gray level of the jth pixel in Ri and mi is the mean gray level of that region,
❖ σi is the standard deviation of the gray levels in Ri.
❖ If P(Ri) = TRUE under this condition, the values of all the pixels in Ri are set equal to mi (see the sketch below).
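A hedged sketch of the splitting half of split-and-merge using the predicate above (at least 80% of the pixels within 2σi of the region mean): quadrants failing the predicate are split recursively. The min_size stopping rule is my own addition, and the subsequent merging of adjacent uniform regions is omitted here.

```python
import numpy as np

def predicate(region):
    """P(Ri) = TRUE if at least 80% of pixels lie within two std devs of the mean."""
    m, s = region.mean(), region.std()
    return np.mean(np.abs(region - m) <= 2 * s) >= 0.8

def quadtree_split(image, min_size=2):
    """Return a list of (y, x, h, w, mean) leaf blocks after quadtree splitting."""
    leaves = []

    def split(y, x, h, w):
        region = image[y:y + h, x:x + w].astype(float)
        if predicate(region) or h <= min_size or w <= min_size:
            leaves.append((y, x, h, w, region.mean()))   # uniform (or minimal) block
            return
        h2, w2 = h // 2, w // 2                          # split into four quadrants
        split(y, x, h2, w2)
        split(y, x + w2, h2, w - w2)
        split(y + h2, x, h - h2, w2)
        split(y + h2, x + w2, h - h2, w - w2)

    split(0, 0, image.shape[0], image.shape[1])
    return leaves
```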
