DIP Lecture 4-5
Example: Adjacency
[Figure: arrangement of pixels p, q, r, and s illustrating the 8-neighborhood N8(p) and adjacency between pixels]
Distance Measures (1/2)
For pixels p, q, and z, with coordinates (x, y), (s, t), and (u, v) respectively, D is a distance function (metric) if
(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z)
The Euclidean distance between p and q is defined as
De(p, q) = [(x − s)² + (y − t)²]^(1/2)
The city-block distance (D4) between p and q is defined as
D4(p, q) = |x − s| + |y − t|
Distance Measures (2/2)
Chessboard Distance (D8) between p and q is defined as
D8(p, q) = max(|x − s|, |y − t|)
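A minimal sketch of the three distance measures for pixels p = (x, y) and q = (s, t) (Python with NumPy; the function names are just illustrative):

    import numpy as np

    def euclidean(p, q):
        # De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
        return np.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

    def city_block(p, q):
        # D4(p, q) = |x - s| + |y - t|
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def chessboard(p, q):
        # D8(p, q) = max(|x - s|, |y - t|)
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    # Example: p = (3, 4), q = (0, 0)
    print(euclidean((3, 4), (0, 0)))   # 5.0
    print(city_block((3, 4), (0, 0)))  # 7
    print(chessboard((3, 4), (0, 0)))  # 4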
Practice Question 1
Consider the two image subsets S1 and S2 shown below.

S1           S2
0 0 0 0 0    0 0 1 1 0
1 0 0 1 0    0 1 0 0 1
1 0 0 1 0    1 1 0 0 0
0 0 1 1 1    0 0 0 0 0
0 1 0 1 1    1 0 0 1 1
Solution – Practice Question 1
Practice Question 2
Consider the image segment shown.
(a) Let V = {0, 1} and compute the lengths of the shortest 4-, 8-, and m-path between p and q. If a particular path does not exist between these two points, explain why.
(b) Repeat for V = {1, 2}.
3  1  2  1 (q)
2  2  0  2
1  2  1  1
(p) 1  0  1  2
Solution – Practice Question 2
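The 4- and 8-path lengths can be checked with a breadth-first search over pixels whose values are in V (a sketch only; the grid and the positions of p and q are taken from the figure above, the function names are illustrative, and the m-path case would additionally need the m-adjacency rule, which is not implemented here):

    from collections import deque

    def shortest_path_len(img, p, q, V, neighbors):
        # BFS over pixels whose values are in V; returns the number of moves
        # in the shortest path, or None if no such path exists.
        if img[p[0]][p[1]] not in V or img[q[0]][q[1]] not in V:
            return None
        rows, cols = len(img), len(img[0])
        dist = {p: 0}
        queue = deque([p])
        while queue:
            r, c = queue.popleft()
            if (r, c) == q:
                return dist[(r, c)]
            for dr, dc in neighbors:
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in dist and img[nr][nc] in V):
                    dist[(nr, nc)] = dist[(r, c)] + 1
                    queue.append((nr, nc))
        return None

    N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    img = [[3, 1, 2, 1],
           [2, 2, 0, 2],
           [1, 2, 1, 1],
           [1, 0, 1, 2]]          # q is at (0, 3), p is at (3, 0)
    print(shortest_path_len(img, (3, 0), (0, 3), {0, 1}, N4))
    print(shortest_path_len(img, (3, 0), (0, 3), {0, 1}, N8))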
2.6 Introduction to Basic Mathematical Tools Used in Digital Image Processing
To test whether an operator H is linear, we check Eq. (2-23) in the book:
H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]
Consider the max operator (which returns the maximum intensity of an image), and suppose that a = 1 and b = −1. To test for linearity, we again start with the left side of Eq. (2-23).
The left and right sides of Eq. (2-23) are not equal in this case, so we have proved that the max operator is nonlinear.
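A quick numerical check of this conclusion (a sketch; the two 2×2 arrays are illustrative values, not necessarily those used in the book):

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 4], [5, 3]])
    a, b = 1, -1

    # Left side: apply max to the weighted sum of the two images
    lhs = np.max(a * f1 + b * f2)
    # Right side: weighted sum of the individual max values
    rhs = a * np.max(f1) + b * np.max(f2)

    print(lhs, rhs)   # 0 and -3 for these arrays: the sides differ,
                      # so the max operator is not linear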
Arithmetic Operations
Arithmetic operations between two images f(x, y) and g(x, y) are denoted as
s(x, y) = f(x, y) + g(x, y)
d(x, y) = f(x, y) − g(x, y)
p(x, y) = f(x, y) × g(x, y)
v(x, y) = f(x, y) ÷ g(x, y)
Addition can be used for noise reduction by image averaging: if an image ḡ(x, y) is formed by averaging K noisy images g_i(x, y) = f(x, y) + η_i(x, y), then
ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)
and
σ²_{ḡ(x, y)} = (1/K) σ²_{η(x, y)}
so the noise variability at each pixel decreases as K increases.
Figure 2.29(a) shows an 8-bit image of the galaxy pair NGC 3314, in which corruption was simulated by adding Gaussian noise with zero mean and a standard deviation of 64 intensity levels.
This image, which is representative of noisy astronomical images taken under low-light conditions, is useless for all practical purposes.
Figures 2.29(b) to (f) show the results of averaging 5, 10, 20, 50, and 100 such noisy images, respectively.
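A minimal sketch of noise reduction by averaging (NumPy; the flat test image is a made-up stand-in, while the noise standard deviation of 64 follows the example above):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((256, 256), 128.0)            # stand-in for a clean image

    def noisy(f, sigma=64.0):
        # Simulate corruption with zero-mean Gaussian noise
        return f + rng.normal(0.0, sigma, f.shape)

    for K in (1, 5, 10, 20, 50, 100):
        g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
        # Residual noise standard deviation shrinks roughly as sigma / sqrt(K)
        print(K, np.std(g_bar - f))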
Practice Question 3
Show that image averaging can be done recursively. That is, show that if a(k) is the average of k images, then the average of k+1 images can be obtained from the already-computed average a(k) and the new image f_{k+1}.
Solution hint: use the following expression for the average of k images to find the average of k+1 images:
a(k) = (1/k) Σ_{i=1}^{k} f_i
Solution – Practice Question 3
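A sketch of the recursive step, starting from the hint above:

    a(k+1) = (1/(k+1)) Σ_{i=1}^{k+1} f_i
           = (1/(k+1)) [ Σ_{i=1}^{k} f_i + f_{k+1} ]
           = (1/(k+1)) [ k a(k) + f_{k+1} ]
           = a(k) + (1/(k+1)) [ f_{k+1} − a(k) ]

so the average of k+1 images follows from the already-computed a(k) and the new image f_{k+1} alone.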
Example – Comparing Images using Subtraction
(Enhancing Difference between Images)
Figure 2.31(a) shows the difference between the 930 dpi and 72 dpi images in Fig. 2.23.
The differences are quite noticeable.
The next two images in Fig. 2.31 show proportionally lower overall intensities, indicating smaller differences between the 930 dpi image and the 150 dpi and 300 dpi images.
Medical Imaging – Mask Mode Radiography
An area of medical imaging called mask mode radiography is a commercially successful and highly beneficial use of image subtraction.
Consider image differences of the form
g(x, y) = f(x, y) − h(x, y)
where h(x, y), the mask, is an X-ray image of a region of a patient's body, and f(x, y) is an image of the same region taken after injection of a contrast medium.
Figure 2.32(a) shows a mask X-ray image of the top of a patient's head prior to injection of an iodine medium into the bloodstream.
Figure 2.32(b) is a sample of a live image taken after the medium was injected.
Figure 2.32(c) is the difference between (a) and (b).
The difference is clearer in Fig. 2.32(d), which was obtained by sharpening the image and enhancing its contrast.
Practice Question 4
Image subtraction is often used in industrial applications for detecting missing components in products. The approach is to store a "golden" image that corresponds to a correct assembly; this image is then subtracted from incoming images of the same product. Ideally, the differences would be zero if the new products are assembled correctly. Difference images for products with missing components would be nonzero in the areas where they differ from the golden image. What conditions do you think have to be met in practice for this method to work?
Solution – Practice Question 4 (1/5)
Let g(x, y) denote the golden image, and let f(x, y) denote any input image acquired during routine operation of the system.
Change detection via subtraction is based on computing the difference d(x, y) = g(x, y) − f(x, y).
The resulting image, d(x, y), can be used in two fundamental ways for change detection.
One way is to use pixel-by-pixel analysis. In this case we say that f(x, y) is "close enough" to the golden image if all the pixels in d(x, y) fall within a specified threshold band [Tmin, Tmax], where Tmin is negative and Tmax is positive.
Usually, the same threshold value is used for both negative and positive differences, so that we have a band [−T, T] in which all pixels of d(x, y) must fall in order for f(x, y) to be declared acceptable.
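A sketch of this pixel-by-pixel test (NumPy; the images, the threshold T, and the function name are illustrative):

    import numpy as np

    def acceptable(golden, incoming, T=10):
        # Difference image d(x, y) = g(x, y) - f(x, y)
        d = golden.astype(np.int32) - incoming.astype(np.int32)
        # f is "close enough" if every pixel of d lies in the band [-T, T]
        return np.all((d >= -T) & (d <= T))

    golden = np.full((4, 4), 200, dtype=np.uint8)
    good = golden.copy()
    bad = golden.copy()
    bad[1, 2] = 0                     # simulate a missing component
    print(acceptable(golden, good))   # True
    print(acceptable(golden, bad))    # False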
Solution – Practice Question 4 (2/5)
The second major approach is simply to sum all the pixels in |d(x, y)| and compare the sum against a threshold Q.
Note that the absolute value needs to be used to avoid errors canceling out in the sum.
This is a much cruder test, so we will concentrate on the first approach.
There are three fundamental factors that need tight control for difference-based inspection to work:
(1) proper registration,
(2) controlled illumination, and
(3) noise levels that are low enough so that difference values are not affected appreciably by variations due to noise.
Solution – Practice Question 4 (3/5)
Logical Operations
[Figure: illustration of logical operations between two binary images B1 and B2: NOT(B1), B1 AND B2, B1 OR B2, B1 AND [NOT(B2)] (the AND-NOT operation), and B1 XOR B2]
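A small sketch of these logical operations on binary (boolean) images using NumPy (the two arrays are arbitrary examples):

    import numpy as np

    B1 = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [0, 0, 0]], dtype=bool)
    B2 = np.array([[0, 1, 1],
                   [0, 1, 0],
                   [0, 0, 1]], dtype=bool)

    not_b1  = ~B1          # NOT
    b1_and  = B1 & B2      # AND
    b1_or   = B1 | B2      # OR
    b1_andn = B1 & ~B2     # AND-NOT
    b1_xor  = B1 ^ B2      # XOR

    print(b1_xor.astype(int))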
Spatial Operations
A typical neighborhood (spatial) operation is local averaging:
g(x, y) = (1/mn) Σ_{(r,c) ∈ Sxy} f(r, c)
where Sxy is the set of coordinates of a neighborhood of size m × n centered on (x, y).
[Figure: example pixel intensities (values such as 0, 45, 77, 154, 168, 247, 255) in a neighborhood and the resulting averaged output]
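A sketch of local averaging over an m × n neighborhood (plain NumPy loops for clarity, with edge padding at the borders; the small test image is made up):

    import numpy as np

    def local_average(f, m=3, n=3):
        # g(x, y) = (1/mn) * sum of f(r, c) over the m x n neighborhood S_xy
        f = f.astype(np.float64)
        g = np.zeros_like(f)
        pm, pn = m // 2, n // 2
        padded = np.pad(f, ((pm, pm), (pn, pn)), mode="edge")
        for x in range(f.shape[0]):
            for y in range(f.shape[1]):
                g[x, y] = padded[x:x + m, y:y + n].mean()
        return g

    img = np.array([[0, 0, 255, 255],
                    [0, 0, 255, 255],
                    [0, 0, 255, 255]], dtype=np.uint8)
    print(local_average(img).round(1))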
Image Registration
Image registration is used to align two or more images of the same scene; it is an important application of digital image processing.
Registration Method
We have an input image and a reference image.
The input image is geometrically transformed to produce an output image that is aligned (registered) with the reference image.
The geometric transformation needed to produce the output registered image generally is not known and must be estimated.
Examples – Image Registration
One example of image registration is aligning two or more images taken at approximately the same time, but using different imaging systems, such as an MRI (magnetic resonance imaging) scanner and a PET (positron emission tomography) scanner.
Another example is aligning images taken at different times using the same instruments, such as satellite images of a given location taken several days, months, or even years apart.
Image registration can be used to combine the images or to perform quantitative analysis and comparisons between them. This requires compensating for geometric distortions caused by differences in viewing angle, distance, orientation, sensor resolution, shifts in object location, and other factors.
Image Registration – Tie Points
During the estimation phase, (v, w) and (x, y) are the coordinates of tie points in the input and reference images, respectively.
A simple model is the bilinear approximation
x = c1 v + c2 w + c3 vw + c4
y = c5 v + c6 w + c7 vw + c8
If we have four pairs of corresponding tie points in both images, these two equations yield eight equations, which can be solved for the eight unknown coefficients c1 through c8.
Once we have the coefficients, the same two equations can be used to transform all the pixels in the input image; the result is the desired registered image.
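A sketch of estimating the eight coefficients from four tie-point pairs with a linear solve (NumPy; the tie-point coordinates are made-up examples):

    import numpy as np

    # Four tie points: (v, w) in the input image, (x, y) in the reference image
    vw = np.array([[10, 10], [10, 200], [200, 10], [200, 200]], dtype=float)
    xy = np.array([[12, 15], [14, 210], [205, 18], [215, 220]], dtype=float)

    # Each tie point contributes one row [v, w, v*w, 1] of the design matrix
    A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])

    c1234 = np.linalg.solve(A, xy[:, 0])   # x = c1*v + c2*w + c3*v*w + c4
    c5678 = np.linalg.solve(A, xy[:, 1])   # y = c5*v + c6*w + c7*v*w + c8

    def map_point(v, w):
        # Apply the estimated bilinear transformation to an input pixel location
        basis = np.array([v, w, v * w, 1.0])
        return basis @ c1234, basis @ c5678

    print(map_point(10, 10))   # recovers (12.0, 15.0)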
Example – Image Registration
In the next slide, Figure 2.42(a) shows a reference image and Fig. 2.42(b) shows the same image, but distorted geometrically by vertical and horizontal shear.
Our objective is to use the reference image to obtain tie points and then use them to register the images.
The tie points we selected (manually) are shown as small white squares near the corners of the images (we needed only four tie points because the distortion is linear shear in both directions).
Figure 2.42(c) shows the registration result obtained using these tie points in the procedure discussed in the preceding paragraphs.
Observe that registration was not perfect, as is evident from the black edges in Fig. 2.42(c).
The difference image in Fig. 2.42(d) shows more clearly the slight lack of registration between the reference and corrected images.
The reason for the discrepancies is error in the manual selection of the tie points; it is difficult to achieve perfect matches for tie points when distortion is so severe.
Example – Image Registration
Image Transforms
In some cases, image processing tasks are best formulated by
transforming the input images, carrying out the specified task in a transform domain, and
applying the inverse transform to return to the spatial domain.
A number of different transforms exist in the literature.
A particularly important class of 2-D linear transforms, denoted T(u, v), can be expressed in the general form
T(u, v) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) r(x, y, u, v)
where f(x, y) is an input image of size M × N, r(x, y, u, v) is a forward transformation kernel, and u and v are the transform variables.
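A sketch of this general form with the 2-D DFT kernel r(x, y, u, v) = exp(−j2π(ux/M + vy/N)) plugged in (a direct, slow implementation for illustration; NumPy's FFT is used only as a cross-check):

    import numpy as np

    def transform_2d(f, kernel):
        # T(u, v) = sum over x, y of f(x, y) * r(x, y, u, v)
        M, N = f.shape
        x = np.arange(M).reshape(M, 1)
        y = np.arange(N).reshape(1, N)
        T = np.zeros((M, N), dtype=complex)
        for u in range(M):
            for v in range(N):
                T[u, v] = np.sum(f * kernel(x, y, u, v, M, N))
        return T

    def dft_kernel(x, y, u, v, M, N):
        # Forward DFT kernel
        return np.exp(-2j * np.pi * (u * x / M + v * y / N))

    f = np.random.rand(8, 8)
    T = transform_2d(f, dft_kernel)
    print(np.allclose(T, np.fft.fft2(f)))   # True: matches NumPy's FFT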
Magnitude and phase of the DFT
Hint: use the inverse DFT to reconstruct the input image using magnitude-only or phase-only information.
See next slide.
Magnitude and Phase of DFT
[Figure: image reconstructed using the magnitude only (i.e., the magnitude determines the strength of each component)]
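A sketch of the magnitude-only and phase-only reconstructions via the inverse DFT (NumPy; any 2-D array can stand in for the input image):

    import numpy as np

    f = np.random.rand(64, 64)            # stand-in for an input image
    F = np.fft.fft2(f)
    magnitude, phase = np.abs(F), np.angle(F)

    # Keep the magnitude, discard the phase (set it to zero)
    mag_only = np.real(np.fft.ifft2(magnitude))
    # Keep the phase, discard the magnitude (set it to one)
    phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))

    # For natural images, phase_only preserves structure (edges, shapes)
    # much better than mag_only does
    print(mag_only.shape, phase_only.shape)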
Mean and Variance of Image Intensities
The mean (average) intensity of an image can be computed from its normalized histogram as
m = Σ_{k=0}^{L−1} r_k p(r_k)
Similarly, the variance of the intensities is
σ² = Σ_{k=0}^{L−1} (r_k − m)² p(r_k)
where r_k is the k-th intensity value, p(r_k) is its estimated probability of occurrence (from the normalized histogram), and L is the number of possible intensity levels.
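A sketch of these two statistics computed from the normalized histogram (NumPy; for an 8-bit image, L = 256; the random test image is a stand-in):

    import numpy as np

    img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
    L = 256

    counts = np.bincount(img.ravel(), minlength=L)
    p = counts / counts.sum()              # p(r_k): normalized histogram
    r = np.arange(L)                       # intensity values r_k

    m = np.sum(r * p)                      # mean:     sum of r_k * p(r_k)
    var = np.sum((r - m) ** 2 * p)         # variance: sum of (r_k - m)^2 * p(r_k)

    # Sanity check against the direct pixel-wise computation
    print(np.isclose(m, img.mean()), np.isclose(var, img.var()))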