DIP Lab Manual


Digital Image

Processing

Lab Manual
Contents

Exp. No. Description

1 Reading, Displaying and Writing Digital Images

2 Inter-converting various types of digital images

3 Image Transformations on various types of digital images

4 Histogram Processing of Grayscale and binary images

5 Spatial and Graylevel Resolution of various Grayscale images

6 Smoothing Spatial Filtering on Grayscale images

7 Sharpening Spatial Filtering on Grayscale images

8 Unsharp Masking and High Boost Filtering in Spatial Domain

9 Averaging Frequency Domain Filters for Grayscale Images

10 Sharpening Frequency Domain Filters for Grayscale Images

11 Unsharp Masking and High Boost Filtering in Frequency Domain

12 Color Image Representation and Inter-conversion

13 Spatial Filtering of Color Images

14 Dilation, Erosion, Opening and Closing of Binary Images

15 Skeletonization of Black and White Images

16 Introducing basic JPEG compression principles


Experiment No. 1

Objective: Reading, Displaying and Writing Digital Images

Apparatus: MATLAB® and some sample gray scale & colored images

Theory:

Reading an image:

Images are read into the MATLAB environment using function imread whose syntax is

imread('filename')

Here filename is a string containing the complete name of the image file (including extension).

>> I = imread('cameraman.tif');

The above command reads the image and stores it in the image array I.

The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename:

>> I = imread('D:\myimages\myfirst.jpg');

This reads the image from a folder called myimages.

Size of the Image:

>> size(I)

ans =
        1024        1024

The size function gives the row and column dimensions of an image. It is useful in programming when used in the following
form to determine the size of an image automatically:

>> [M, N] = size(I);

The whos function displays additional information about an array:

>> whos I
  Name      Size             Bytes  Class
  I         1024x1024      1048576  uint8 array

Displaying an Image:

Images are displayed on MATLAB desktop using function imshow

>> imshow (I)

>> imshow(I, [low high])

displays as black all values less than or equal to low, and as white all values greater than or equal to high. The values in between are
displayed as intermediate intensity values using the default number of levels.

>> imshow(I, [])

sets low to the minimum value of array I and high to its maximum value.

>> pixval

This function is used frequently to display the intensity values of individual pixels interactively. If another image, J, is displayed
using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and display a second one, we use
figure as follows:

>> figure, imshow(J)

The statement

>> imshow(I), figure, imshow(J)

displays both images.

Writing an Image:

Images are written to the disk using function imwrite.


imwrite(I, 'filename')

The string contained in filename must include a recognized file format extension. For example, the following command writes I to a
TIFF file named goodman.tif:

>> imwrite(I, 'goodman', 'tif')

Or alternatively

>> imwrite(I, 'goodman.tif')

The imwrite function can take other parameters, depending on the file format selected. A more general imwrite syntax, applicable only to
JPEG images, is:

imwrite(I, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression).
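Putting the three operations together, a minimal sketch of a complete read, display and write session (the output filename result.jpg is an illustrative assumption):

>> I = imread('cameraman.tif');              % read image into array I
>> [M, N] = size(I)                          % query its dimensions
>> imshow(I)                                 % display with default scaling
>> figure, imshow(I, [])                     % display scaled to full intensity range
>> imwrite(I, 'result.jpg', 'quality', 75);  % write as JPEG with moderate compression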

Observations:
Perform the steps described in the section above on the following images present in the MATLAB library:

Autumn.tif Cameraman.tif

Forest.tif MRI.tif

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 2

Objective: Resizing and Inter-converting various types of Digital Images

Apparatus: MATLAB® and some sample gray scale & colored images

Theory:

There are mainly three types of digital images which we will come across throughout this course. These are:
• Colored Images
• Grayscale Images
• Binary/ Black & White Images

For many operations in Spatial or Frequency Domain their inter-conversion is required. MATLAB commands for converting one type
to another are as under:

Resizing:

The command imresize can scale an input image up or down.

imresize(I, factor)

Here factor is the numeric amount by which the image I is scaled up or down.

Example:

>> I = imread ('rice.png');


>> J = imresize (I, .5);
>> figure, imshow (I), figure, imshow (J)

The image rice.png is scaled to one half of its original size as a result of the above commands. Putting 2 instead of 0.5 will double
the size of the original image.

Conversion:

Colored to Grayscale:

MAT2GRAY Convert matrix to intensity image

J = mat2gray(I, [MIN MAX])

converts the matrix I to the intensity image J. The returned matrix J contains values in the range 0.0 (black) to 1.0 (full intensity or
white). MIN and MAX are the values in I that correspond to 0.0 and 1.0 in J. Values less than MIN become 0.0, and values greater
than MAX become 1.0.

J = mat2gray (I)

sets the values of MIN and MAX to the minimum and maximum values in I.

Example:

>> I = imread ('canoe.tif');


>> J = mat2gray (I);
>> figure, imshow(I), figure, imshow(J)

RGB2GRAY:

It converts RGB image to grayscale. RGB2GRAY converts RGB images to grayscale by eliminating the hue and saturation
information while retaining the luminance.

I = rgb2gray(RGB)

converts the true color image RGB to the grayscale intensity image I.
Example

>> I = imread('football.jpg');
>> J = rgb2gray (I);
>> figure, imshow (I), figure, imshow (J);

Colored or Grayscale to Binary:

IM2BW Convert image to binary image by thresholding

IM2BW produces binary images from grayscale, or RGB images. To do this, it converts the input image to grayscale format (if it is
not already an intensity image), and then converts this grayscale image to binary by thresholding. The output binary image BW has
values of 1 (white) for all pixels in the input image with luminance greater than LEVEL and 0 (black) for all other pixels. (Note that
you specify LEVEL in the range [0,1], regardless of the class of the input image.)

BW = im2bw (I, level)


converts the intensity image I to black and white.

BW = im2bw (RGB, level)


converts the RGB image RGB to black and white.

Example:

>> I = imread('rice.png');

>> J = im2bw(I, 0.3);
>> imshow(I), figure, imshow(J);

Observations: Perform the steps described in the section above on the following images present in the MATLAB library.

Image        Resizing: Scaled Up Version (factor>1), Scaled Down Version (factor<1)        Inter-converting: RGB to Grayscale, Grayscale to B&W
Rice.png     N/A (RGB to Grayscale)

Saturn.tif

Canoe.tif

Football.jpg

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 3

Objective: Image Adjustments on various types of digital images

Apparatus: MATLAB® and some sample gray scale & colored images

Theory:

IMADJUST:
J = IMADJUST (I, [LOW_IN HIGH_IN],[LOW_OUT HIGH_OUT])

maps the values in intensity image I to new values in J such that values between LOW_IN and HIGH_IN map to values between
LOW_OUT and HIGH_OUT. Values below LOW_IN and above HIGH_IN are clipped; that is, values below LOW_IN map to
LOW_OUT, and those above HIGH_IN map to HIGH_OUT. You can use an empty matrix ([]) for [LOW_IN HIGH_IN] or for
[LOW_OUT HIGH_OUT] to specify the default of [0 1]. If you omit the argument, [LOW_OUT HIGH_OUT] defaults to [0 1].

J = IMADJUST (I, [LOW_IN HIGH_IN],[LOW_OUT HIGH_OUT], GAMMA)

maps the values of I to new values in J as described in the previous syntax. GAMMA specifies the shape of the curve describing the
relationship between the values in I and J. If GAMMA is less than 1, the mapping is weighted toward higher (brighter) output values.
If GAMMA is greater than 1, the mapping is weighted toward lower (darker) output values. If you omit the argument, GAMMA
defaults to 1 (linear mapping).

Note that if HIGH_OUT < LOW_OUT, the output image is reversed, as in a photographic negative.
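For instance, a minimal sketch of gamma adjustment and of producing a negative via reversed output limits, using only the syntaxes above:

>> I = imread('pout.tif');
>> J = imadjust(I, [0 1], [0 1], 0.5);   % gamma < 1: output weighted toward brighter values
>> K = imadjust(I, [0 1], [1 0]);        % reversed limits: photographic negative
>> figure, imshow(J), figure, imshow(K)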

RGB2 = IMADJUST (RGB1,...)

performs the adjustment on each image plane (red, green, and blue) of the RGB image RGB1.

Example:

>> I = imread('pout.tif');
>> J = imadjust(I,[0.3 0.7],[]);
>> imshow (I), figure, imshow(J)

>> RGB1 = imread('football.jpg');


>> RGB2 = imadjust(RGB1,[.2 .3 0; .6 .7 1],[]);
>> figure, imshow(RGB1), figure, imshow(RGB2)

IMCOMPLEMENT:
IM2 = IMCOMPLEMENT (IM)

computes the complement of the image IM. IM can be a binary, intensity, or truecolor image. IM2 has the same class and size as IM.

In the complement of a binary image, black becomes white and white becomes black. In the case of a grayscale or truecolor image,
dark areas become lighter and light areas become darker.

Example

>> I = imread('circles.png');
>> J = imcomplement (I);
>> figure, imshow(I), figure, imshow(J)

STRETCHLIM:

STRETCHLIM finds limits to contrast stretch an image.

LOW_HIGH = STRETCHLIM (I)

returns a pair of gray values that can be used by IMADJUST to increase the contrast of an image.

Example:

>> I = imread('pout.tif');
>> J = imadjust(I,stretchlim(I),[]);
>> figure, imshow(I), figure, imshow(J)

Observations: Perform the steps described in the section above on the following images present in the MATLAB library.
Image        IMADJUST with gamma: 2, 1, 0.8, 0.2        IMCOMPLEMENT        IMADJUST using STRETCHLIM
Rice.png

pout.tif

Circles.png N/A N/A

Football.jpg

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 4

Objective: Histogram Processing on Gray Scale and Binary Images

Apparatus: MATLAB® and some sample grayscale images

Theory:

IMHIST:
IMHIST (I)

displays a histogram for the intensity image I whose number of bins is determined by the image type. If I is a grayscale image,
IMHIST uses 256 bins as a default value. If I is a binary image, IMHIST uses only 2 bins.

IMHIST (I, N)

displays a histogram with N bins for the intensity image I above a grayscale colorbar of length N. If I is a binary image then N can
only be 2.

Example:

>> I = imread ('pout.tif');


>> imhist (I)

HISTEQ:

HISTEQ (I)

enhances the contrast of images by transforming the values in an intensity image, so that the histogram of the output image
approximately matches a specified histogram.

>> J = histeq (I, hgram)

transforms the intensity image I so that the histogram of the output intensity image J with length (hgram) bins approximately matches
hgram. The vector hgram should contain integer counts for equally spaced bins with intensity values in the appropriate range: [0, 1]
for images of class double, [0, 255] for images of class uint8, and [0, 65535] for images of class uint16. histeq automatically scales
hgram so that sum(hgram) = prod(size(I)). The histogram of J will better match hgram when length (hgram) is much smaller than the
number of discrete levels in I.

>> J = histeq (I, n)

transforms the intensity image I, returning in J an intensity image with n discrete gray levels. A roughly equal number of pixels is
mapped to each of the n levels in J, so that the histogram of J is approximately flat. (The histogram of J is flatter when n is much
smaller than the number of discrete levels in I.) The default value for n is 64.
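Minimal sketches of the two syntaxes above (the flat 64-bin hgram vector is an illustrative choice):

>> I = imread('pout.tif');
>> hgram = ones(1, 64);      % target histogram: equal counts in 64 bins (illustrative)
>> J = histeq(I, hgram);     % match the histogram of J to hgram
>> K = histeq(I, 32);        % equalize onto 32 discrete gray levels
>> figure, imshow(J), figure, imshow(K)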

ADAPTHISTEQ:

>> J = adapthisteq (I)

enhances the contrast of the intensity image I by transforming the values using contrast-limited adaptive histogram equalization
(CLAHE).

CLAHE operates on small regions in the image, called tiles, rather than the entire image. Each tile's contrast is enhanced, so that the
histogram of the output region approximately matches the histogram specified by the 'Distribution' parameter. The neighboring tiles
are then combined using bilinear interpolation to eliminate artificially induced boundaries. The contrast, especially in homogeneous
areas, can be limited to avoid amplifying any noise that might be present in the image.

Parameter Description
‘clipLimit’ Real scalar in the range [0 1] that specifies a contrast enhancement limit. Higher numbers result in more contrast
Default: 0.01
‘Distribution’ String specifying the desired histogram shape for the image tiles.
'uniform' -- Flat histogram

'rayleigh' -- Bell-shaped histogram

'exponential' -- Curved histogram

Default: 'uniform'

Example:

Apply Contrast-Limited Adaptive Histogram Equalization to the tire.tif image. Display the enhanced image.

>> I = imread ('tire.tif');


>> A = adapthisteq (I,'clipLimit',0.02,'Distribution','rayleigh');
>> figure, imshow(I);
>> figure, imshow(A);

Observations: Perform the steps described in the section above on the following images present in the MATLAB library.

Image        Histogram: IMHIST(I), IMHIST(I, n)        Histogram Equalization: HISTEQ(I), HISTEQ(I, hgram), HISTEQ(I, n)        Adaptive Histogram Equalization
Rice.png

pout.tif

tire.tif

Cell.tif

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 5

Objective: Exploring Spatial and Graylevel Resolution for various Grayscale images

Apparatus: MATLAB® and some sample grayscale images

Theory:

Spatial Resolution:

Sampling is the principal factor determining the spatial resolution of an image. Basically, spatial resolution is the smallest
discernible detail in an image. Suppose we construct a chart with vertical lines of width W, with the space between the
lines also having width W. A line pair consists of one such line and its adjacent space. Thus, the width of a line pair is
2W, and there are 1/(2W) line pairs per unit distance. A widely used definition of resolution is simply the smallest number
of discernible line pairs per unit distance, e.g. 100 line pairs per millimeter.

Below is a 1024x1024, 8-bit image sub-sampled down to a size of 32x32 pixels. The number of allowable graylevels was
kept at 256 throughout:

As a second step, the above images can be re-sampled up to 1024x1024 pixels by row and column duplication from
256x256, 128x128, 64x64 and 32x32 sized images.

The same will be realized using MATLAB Command IMRESIZE for input image ‘I’ described as follows:

Size of Image    MATLAB Commands for Sub-Sampling               MATLAB Commands for Re-Sampling
512x512          >> J=imresize(I,1/2); figure, imshow(J)        >> J2=imresize(J,2); figure, imshow(J2)
256x256          >> K=imresize(I,1/4); figure, imshow(K)        >> K2=imresize(K,4); figure, imshow(K2)
128x128          >> L=imresize(I,1/8); figure, imshow(L)        >> L2=imresize(L,8); figure, imshow(L2)
64x64            >> M=imresize(I,1/16); figure, imshow(M)       >> M2=imresize(M,16); figure, imshow(M2)
32x32            >> N=imresize(I,1/32); figure, imshow(N)       >> N2=imresize(N,32); figure, imshow(N2)
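The whole table can equally be produced with a loop; a minimal sketch, assuming a 1024x1024, 8-bit grayscale image I (the filename is a placeholder), with 'nearest' selecting row and column duplication on the way back up:

>> I = imread('myimage.tif');                % assumed 1024x1024, 8-bit grayscale
>> for k = 1:5                               % sub-sample by 1/2, 1/4, ..., 1/32
       J = imresize(I, 1/2^k);               % sub-sampled image
       R = imresize(J, 2^k, 'nearest');      % re-sampled by row/column duplication
       figure, imshow(J), figure, imshow(R)
   end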
Graylevel Resolution:

Graylevel resolution similarly refers to the smallest discernible change in gray level but measuring discernible changes in
gray level is a highly subjective process. We have considerable discretion regarding the number of samples used to
generate a digital image, but this is not true for the number of gray levels. Due to hardware considerations, the number of
gray levels is usually an integer power of 2.

The following figure shows a 452x274 image displayed with graylevel resolutions of 256, 128, 64, 32, 16, 8, 4, and 2.

In order to carry out graylevel resolution reduction in MATLAB, we will use the command IMADJUST studied in a previous
experiment. As graylevel images are described in 8 bits, we have 256 distinct graylevels possible. In
order to display an image with 128 graylevels, we will consider 128/2 graylevels on each side of 127 and will saturate all
remaining graylevels to either zero or 255. Similarly, for the reduced graylevel image having only 64 levels, we will
consider 64/2 graylevels on each side of 127 and the remaining levels will be saturated to either zero or 255. The operation is
summarized as under:

Graylevel Low Limit High Limit MATLAB Command

128 128-128/2=64 128+128/2=192 >> J=imadjust (I, [0.25 0.75]), figure, imshow (J)

64/256=0.25 192/256=0.75

64 128-64/2=96 128+64/2=160 >> K=imadjust (I, [0.375 0.625]), figure, imshow (K)

96/256=0.375 160/256=0.625

32 128-32/2=112 128+32/2=144 >> L=imadjust (I, [0.4375 0.5625]), figure, imshow (L)

112/256=0.4375 144/256=0.5625

16 128-16/2=120 128+16/2=136 >> M=imadjust (I, [0.46875 0.53125]), figure, imshow (M)

120/256=0.46875 136/256=0.53125

8 128-8/2=124 128+8/2=132 >> N=imadjust (I, [0.484375 0.515625]), figure, imshow (N)
124/256=0.484375 132/256=0.515625

4 128-4/2=126 128+4/2=130 >> O=imadjust (I, [0.4921875 0.5078125]), figure, imshow (O)

126/256=0.4921875 130/256=0.5078125

2 128-2/2=127 128+2/2=129 >> P=imadjust (I, [0.4960938 0.5039063]), figure, imshow (P)

127/256=0.4960938 129/256=0.5039063

Observations:
Spatial Resolution:

Images Sub Sampling Re-sampling


/2 /4 /8 /16 x2 x4 x8 x16
Tape.png

Rice.png

Saturn.png

Circles.png

Graylevel Resolution:

Images No. of Graylevels


256 128 64 32 16 8 4 2
Coins.png

Rice.png

Saturn.png
Circles.png

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 6

Objective: Smoothing Spatial Filtering of Grayscale Images

Apparatus: MATLAB® and some sample grayscale images

Theory:

Filtering:

Neighborhood processing consists of (1) selecting a center point (x,y); (2) performing an operation that involves only the pixels in a
predefined neighborhood about (x,y); (3) letting the result of that operation be the response of the process at that point; and (4)
repeating the process for every point in the image. The process of moving the center point creates new neighborhoods, one for each
pixel in the input image. The two principal terms used to identify this operation are neighborhood processing and spatial filtering, with
the second term being more prevalent.

Figure below illustrates the mechanics of linear spatial filtering using a 3x3 neighborhood. At any point (x, y) in the image, the
response, g(x, y) of the filter is the sum of products of the filter coefficients and the image pixels encompassed by the filter:

g(x, y) = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + ... + w(0,0) f(x, y) + ... + w(1,1) f(x+1, y+1)
Smoothing Filters:

Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing tasks, such as removal of small details
from an image prior to object extraction, and bridging of small gaps in lines or curves.

The output of a smoothing filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters
are sometimes called averaging filters or low pass filters. Since noise typically consists of sharp transitions in intensity levels, the
most obvious application of smoothing is noise reduction. Averaging filters, however, have the undesirable side effect that they blur
edges.

Here are two examples of Averaging Spatial Filters (both are constructed explicitly in the sketch below):

Note that, instead of being 1/9, the coefficients of the first filter are all 1s. The idea here is that it is computationally more efficient to have
coefficients valued 1. At the end of the filtering process, the entire image is divided by 9.

The second mask yields a so-called weighted average, terminology used to indicate that pixels are multiplied by different coefficients,
thus giving more importance (weight) to some pixels at the expense of others. The central pixel has the highest importance and the pixels
farthest from the center have the lowest. This works towards reducing blurring in the smoothing process.

The MATLAB command for the purpose is IMFILTER, used as under:

>> J=imfilter (I, mask);
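A minimal sketch constructing both masks described above and applying them (the 3x3 sizes match the description; larger sizes follow the same pattern):

>> I = imread('cameraman.tif');
>> mask1 = ones(3)/9;                    % standard 3x3 averaging (box) filter
>> mask2 = [1 2 1; 2 4 2; 1 2 1]/16;     % 3x3 weighted-average filter
>> J1 = imfilter(I, mask1);
>> J2 = imfilter(I, mask2);
>> figure, imshow(I), figure, imshow(J1), figure, imshow(J2)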

Observations:
Mask 1:

Filtered Image with Mask Size


3x3 5x5 9x9 12x12
Mask 2:

Filtered Image with Mask Size


3x3 5x5 9x9 12x12

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 7

Objective: Sharpening Spatial Filtering of Grayscale Images

Apparatus: MATLAB® and some sample grayscale images

Theory:

Sharpening Filters:

The principal objective of sharpening is to highlight transitions in intensity. Uses of image sharpening vary and include
applications ranging from electronic printing and medical imaging to industrial inspection and autonomous guidance in
military systems. As we saw in the last experiment, image blurring can be accomplished in the spatial domain by
pixel averaging, which is analogous to integration; it is therefore logical to conclude that sharpening can be accomplished by spatial
differentiation. Image differentiation enhances edges and other discontinuities (such as noise), and de-emphasizes areas
with slowly varying intensities.


A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference

∂f/∂x = f(x+1) - f(x)

and the second order derivative of f(x) is the difference

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

Laplacian Filters:

In this lab, we consider the implementation of 2-D, second order derivatives and their use for image sharpening. Such
filters are called Laplacian filters and are widely used for isotropic filtering, i.e. filtering whose result is rotation
invariant.

The discrete form of a Laplacian filter is described in general as follows:

∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

The above equation gives an isotropic result for rotations in increments of 90°. The diagonal directions can be incorporated
in the definition of the digital Laplacian by adding two more terms to the above equation and replacing 4f(x,y) with 8f(x,y).
This mask yields isotropic results in increments of 45°.

Because the Laplacian is a derivative operator, its use highlights intensity discontinuities in an image and deemphasizes
regions with slowly varying intensity levels. This tends to produce images which have grayish edge lines and other
discontinuities, all superimposed on a dark, featureless background. Background features can be recovered, while still
preserving the sharpening effect of the Laplacian, simply by adding the Laplacian image to the original (or subtracting it,
if the center coefficient of the mask used is negative).

The MATLAB command used for the purpose is IMFILTER(I, mask), as in the last experiment.
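A minimal sketch of Laplacian sharpening, assuming the 4-neighbor mask given above (its center coefficient is negative, so the filtered result is subtracted from the original):

>> I = im2double(imread('moon.tif'));    % work in double so negative values survive
>> mask = [0 1 0; 1 -4 1; 0 1 0];        % 4-neighbor Laplacian mask
>> L = imfilter(I, mask);                % Laplacian image
>> J = I - L;                            % subtract: center coefficient is negative
>> figure, imshow(I), figure, imshow(J)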

Observations:
Mask 1:

Filtered Image with Mask Size


3x3 5x5 9x9 12x12
Mask 2:

Filtered Image with Mask Size


3x3 5x5 9x9 12x12

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 8

Objective: Unsharp Masking and High Boost Filtering in Spatial Domain

Apparatus: MATLAB® and some sample grayscale/ colored images

Theory:

It is often desirable to emphasize high frequency components representing the image details without eliminating low frequency
components. In this case, the high-boost filter can be used to enhance high frequency component while still keeping the low frequency
components. A comparison of low pass, high pass and high boost filtering is as under:

A process that has been used for many years by the printing and publishing industry to sharpen images consists of subtracting an
unsharp (smoothed) version of an image from the original image. This process, called unsharp masking, consists of the following
steps:
1. Blur the original image
2. Subtract the blurred image from the original (the resulting difference is called the mask)
3. Add the mask to the original


Letting f_blur(x, y) denote the blurred image, unsharp masking is expressed in equation form as follows. First we obtain the
mask:

g_mask(x, y) = f(x, y) - f_blur(x, y)

Then we add a weighted portion of the mask back to the original image:

g(x, y) = f(x, y) + A·g_mask(x, y)

where we have included a weight A ≥ 0 for generality. When A = 1, we have unsharp masking as defined above. When A
> 1, the process is referred to as highboost filtering. Choosing A < 1 deemphasizes the contribution of the unsharp mask.
The MATLAB command used for the purpose is the same one we have used in the last two experiments, i.e. imfilter:

>> J=imfilter (I, mask);
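A minimal sketch of the three steps, assuming a 3x3 averaging mask for the blur (im2double avoids uint8 clipping while the mask is formed):

>> I = im2double(imread('cameraman.tif'));
>> blurred = imfilter(I, ones(3)/9);     % step 1: blur the original
>> gmask = I - blurred;                  % step 2: subtract the blurred image
>> A = 2;                                % A = 1: unsharp masking; A > 1: highboost
>> J = I + A*gmask;                      % step 3: add the weighted mask back
>> figure, imshow(I), figure, imshow(J)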

Observations:

Mask 1:

Filtered Image with Weight ‘A’


1 2 3 4

Mask 2:

Filtered Image with Weight ‘A’


1 1 1 1

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 9

Objective: Smoothing Frequency Domain Filters for Grayscale Images

Apparatus: MATLAB® and some sample grayscale and B&W images

Theory:

Frequency Domain Filtering:

Filtering in the frequency domain consists of modifying the Fourier Transform of an image and then computing the inverse transform
to obtain the processed result. Thus, given a digital image, f(x,y), of size MxN, the basic filtering equation in which we are interested
has the form:

g(x, y) = F^(-1)[H(u, v) F(u, v)]

F (u,v) is the DFT of the input image, f(x,y), H(u,v) is a filter function (also called simply the filter, or the filter transfer function), and
g(x,y) is the filtered (output) image. Functions F, H and g are arrays of size MxN, the same as the input image. The product H(u, v)
F(u, v) is formed using array multiplication. The filter function modifies the transform of the input image to yield a processed output,
g(x,y). Specification of H(u,v) is simplified considerably by using functions that are symmetric about their center, which requires that
F(u,v) be centered also.

One of the simplest filters we can construct is a filter H(u,v) that is 0 at the center of the transform and 1 elsewhere. This filter would
reject the dc term and pass all other terms of F(u,v) when we form the product H(u,v)F(u,v). The dc term is responsible for the average
intensity of an image, so setting it to zero will reduce the average intensity of the output image to zero.

Low frequencies in the transform are related to slowly varying intensity components in an image, such as the walls of a room or a
cloudless sky in an outdoor scene. On the other hand, high frequencies are caused by sharp transitions in intensity, such as edges and
noise. Therefore, we would expect that a filter H (u,v) that attenuates high frequencies while passing low frequencies would blur an
image and will be called a low pass filter. On the other hand, a filter with opposite property would enhance sharp details but cause
reduction in contrast in the image and will be called a high pass filter.

From the equation above, it is easy to conclude that the filter H(u,v) and the Fourier transform F(u,v) of the image have to be of the same size
so that their product can be calculated. This is achieved by padding zeros where necessary.

Low Pass/ Smoothing Filters:

Edges and other sharp intensity transitions (such as noise) in an image contribute significantly to the high-frequency content of its
Fourier transform. Hence, smoothing (blurring) is achieved in the frequency domain by high-frequency attenuation; that is, by low
pass filtering. Low pass filters can be one of three types i.e. Ideal, Butterworth and Gaussian.

A 2-D low pass filter that passes without attenuation all frequencies within a circle of radius D0 from the origin and cuts off all
frequencies outside this circle is called an ideal low pass filter.

For this filter, the point of transition between H(u,v)=1 and H(u,v)=0 is called the cut off frequency. In the figure above, the cutoff
frequency is D0. The sharp cut-off frequencies of an ILPF cannot be realized with electronic components, although they certainly can
be simulated in a computer.

The transfer function of a Butterworth low pass filter (BLPF) of order n, with cut off frequency at distance D0 from the origin, is
defined as

H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))

where D(u, v) is the distance from the point (u, v) to the center of the frequency rectangle.

The transfer function of a Gaussian low pass filter (GLPF) is as below:

H(u, v) = e^(-D²(u, v)/(2D0²))

where D0 is the cut off frequency.

Procedure:

In order to achieve smoothing on a given image f, one has to follow the steps below:
1. Read input image, f
2. Apply MATLAB function fft2 and pad zeros in order to make it say 256x256. The format is as under

>> fft_image=fft2 (I, 256, 256);

3. Use both spatial smoothing masks one by one (used in Exp no. 6) and calculate their 2D-DFT with padded zeros

>> mask= 1/16.*[1 2 1; 2 4 2; 1 2 1];

>> h=fft2 (mask, 256, 256);

4. Shift the frequency domain representations of both input image and of the filter to bring their zero frequency component in
the middle. The same is accomplished by the command fftshift as follows:

>>h=fftshift (h);

>>K=fftshift (fft_image);
5. Now obtain their product with array multiplication command of MATLAB:

>>L=K.*h;

6. Now undo the shift obtained in step 4 above using the MATLAB command ifftshift:

>>L=ifftshift (L);

7. Now calculate the inverse DFT using the MATLAB Command ifft2:

>>rec_image=real (ifft2(L));
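Putting the steps together, a minimal end-to-end sketch (the 256x256 padding and the weighted averaging mask are carried over from the steps above):

>> I = imread('rice.png');
>> fft_image = fft2(I, 256, 256);            % step 2: zero-padded 2-D DFT
>> mask = 1/16.*[1 2 1; 2 4 2; 1 2 1];       % step 3: spatial smoothing mask
>> h = fft2(mask, 256, 256);
>> h = fftshift(h);                          % step 4: center both spectra
>> K = fftshift(fft_image);
>> L = K.*h;                                 % step 5: array multiplication
>> L = ifftshift(L);                         % step 6: undo the shift
>> rec_image = real(ifft2(L));               % step 7: inverse DFT
>> figure, imshow(rec_image, [])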

Observations:
Ideal Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png
Cameraman.tif
Bag.png
Blobs.png

Butterworth Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png
Cameraman.tif
Bag.png
Blobs.png

Gaussian Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png
Cameraman.tif
Bag.png
Blobs.png

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 10

Objective: Sharpening Frequency Domain Filters for Grayscale Images

Apparatus: MATLAB® and some sample grayscale and B&W images

Theory:

High Pass/ Sharpening Filters:

In the previous experiment, we showed that an image can be smoothed by attenuating the high frequency components of its Fourier
transform. Because edges and other abrupt changes in intensities are associated with high frequency components, image sharpening
can be achieved in the frequency domain by high pass filtering, which attenuates the low frequency components without disturbing
high frequency information in the Fourier transform. Its transfer function can simply be:

H_HP(u, v) = 1 - H_LP(u, v)

This shows that when the low pass filter attenuates frequencies, the high pass filter passes them, and vice versa.

As it was in the case of Low Pass Filter, a High Pass Filter can also be either of three types, i.e. Ideal, Butterworth or Gaussian.
Butterworth represents a transition between the sharpness of the ideal filter and the broad smoothness of the Gaussian filter.

Transfer function of a 2-D Ideal High Pass Filter is zero if D (u,v) is smaller than D0 and ‘1’ otherwise. As intended, IHPF is the
opposite of the ILPF in the sense that it sets to zero all frequencies inside a circle of radius D0 while passing without attenuation, all
frequencies outside the circle. As in the case of the ILPF, the IHPF is not physically realizable. However, we consider it here for
completeness and as before, because its properties can be used to explain phenomena such as ringing in the spatial domain.

A 2-D Butterworth high pass filter (BHPF) of order n and cut off frequency D0 is defined as:

H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n))

As with its low pass counterpart, we can expect the Butterworth high pass filter to behave more smoothly than the IHPF.

The transfer function of the Gaussian high pass filter (GHPF) with cut off frequency locus at a distance D0 from the center of the
frequency rectangle is given by:

H(u, v) = 1 - e^(-D²(u, v)/(2D0²))

As expected, the results obtained are more gradual than with the previous two filters. Even the filtering of the smaller objects and thin
bars is cleaner with the Gaussian filter.

Procedure:
In order to achieve sharpening on a given image f, one has to follow the steps below:
1. Read input image, f
2. Apply MATLAB function fft2 and pad zeros in order to make it say 256x256. The format is as under:

>> fft_image=fft2 (I, 256, 256);

3. Use both spatial smoothing masks one by one (used in Exp no. 6) and calculate their 2D-DFT with padded zeros, e.g.

>> mask= 1/16.*[1 2 1; 2 4 2; 1 2 1];

>> h=fft2 (mask, 256, 256);

4. Shift the frequency domain representations of both input image and of the filter to bring their zero frequency component in
the middle. The same is accomplished by the command fftshift as follows:

>>h_lpf=fftshift (h);

>>K=fftshift (fft_image);

5. Obtain the transfer function of the HPF by subtracting the transfer function of the LPF from one. Then obtain their product with the array
multiplication command of MATLAB:
>>h_hpf=1-h_lpf;

>>L=K.*h_hpf;

6. Now undo the shift obtained in step 4 above using the MATLAB command ifftshift:

>>L=ifftshift (L);

7. Now calculate the inverse DFT using the MATLAB Command ifft2:

>>rec_image=real (ifft2(L));
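As in the previous experiment, the steps combine into a short script; a minimal sketch:

>> I = imread('rice.png');
>> fft_image = fft2(I, 256, 256);            % zero-padded 2-D DFT
>> mask = 1/16.*[1 2 1; 2 4 2; 1 2 1];       % smoothing mask from Exp no. 6
>> h = fft2(mask, 256, 256);
>> h_lpf = fftshift(h);                      % centered LPF transfer function
>> K = fftshift(fft_image);
>> h_hpf = 1 - h_lpf;                        % HPF = 1 - LPF
>> L = K.*h_hpf;
>> L = ifftshift(L);
>> rec_image = real(ifft2(L));               % sharpened (high pass) result
>> figure, imshow(rec_image, [])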

Observations:
Ideal Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png

Cameraman.tif

Bag.png

Blobs.png
Butterworth Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png

Cameraman.tif

Bag.png

Blobs.png

Gaussian Mask:
Input Image (Spatial Domain)    Mask (Spatial Domain)    Image of Mask (Frequency Domain, after fftshift)    Filtered Image (Spatial Domain)

Rice.png

Cameraman.tif

Bag.png

Blobs.png

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 11

Objective: Unsharp Masking and High Boost Filtering in Frequency Domain

Apparatus: MATLAB® and some sample grayscale images

Theory:

In this experiment, we discuss frequency domain formulations of the unsharp masking and high boost filtering image sharpening
techniques introduced in experiment no. 8. Using frequency domain methods, the mask is defined by:

g_mask(x, y) = f(x, y) - f_LP(x, y)

with

f_LP(x, y) = F^(-1)[H_LP(u, v) F(u, v)]

where H_LP(u,v) is a low pass filter and F(u,v) is the Fourier transform of f(x,y). Here, f_LP(x,y) is a smoothed image. The final output is
calculated as under:

g(x, y) = f(x, y) + k·g_mask(x, y)

This expression defines unsharp masking when k = 1 and highboost filtering when k > 1.

The above equation can be rewritten as follows:

g(x, y) = F^(-1){[1 + k·(1 - H_LP(u, v))] F(u, v)}

The expression contained within square brackets is called a high-frequency-emphasis filter. As noted earlier, high pass filters set the
dc term to zero, thus reducing the average intensity in the filtered image to zero. The high frequency emphasis filter does not have this
problem because of the 1 that is added to the high pass filter. The constant k gives control over the proportion of high frequencies
that influence the final result.

Procedure:
In order to perform unsharp masking or high boost filtering on a given image f, one has to follow the steps below:
1. Read input image, f
2. Apply MATLAB function fft2. The format is as under

>> fft_image=fft2 (I);

3. Use both spatial smoothing masks one by one (used in Exp no. 6) and calculate their 2D-DFT with padded zeros to make its
size equal to the size of the input image I, e.g.

>> [I_x, I_y]=size (I);

>> mask= 1/16.*[1 2 1; 2 4 2; 1 2 1];

>> h=fft2 (mask, I_x, I_y);

4. Shift the frequency domain representations of both input image and of the filter to bring their zero frequency component in
the middle. The same is accomplished by the command fftshift as follows:

>>h_lpf=fftshift (h);

>>K=fftshift (fft_image);

5. Then obtain their product with array multiplication command of MATLAB:

>>L=K.*h;

6. Now undo the shift obtained in step 4 above using the MATLAB command ifftshift:

>>L=ifftshift (L);
7. Now calculate the inverse DFT using the MATLAB Command ifft2:

>>rec_image=real (ifft2(L));

8. Calculate g_mask by subtracting the low pass filtered image from the original, as explained in the theory section above

9. Add k times g_mask to the original image to obtain the final processed output (see the sketch below)
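A minimal sketch of steps 8 and 9, continuing from the variables above (rec_image is the low pass filtered image, and the value of k is an illustrative choice):

>> gmask = double(I) - rec_image;        % step 8: the unsharp mask
>> k = 2;                                % k = 1: unsharp masking; k > 1: highboost
>> g = double(I) + k*gmask;              % step 9: final processed output
>> figure, imshow(g, [])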

Observations:
Ideal Mask:
Input Image (Spatial Domain)        Filtered Image (Spatial Domain): K=2, K=4, K=6, K=8
Rice.png
Cameraman.tif
Bag.png
Eight.tif

Butterworth Mask:

Input Image (Spatial Domain)        Filtered Image (Spatial Domain): K=2, K=4, K=6, K=8
Rice.png
Cameraman.tif
Bag.png
Eight.tif

Gaussian Mask:

Input Image (Spatial Domain)        Filtered Image (Spatial Domain): K=2, K=4, K=6, K=8
Rice.png
Cameraman.tif
Bag.png
Eight.tif

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 12

Objective: Color Image Representation and Inter-conversion

Apparatus: MATLAB® and some sample colored images

Theory:

RGB Images:

An RGB color image is an MxNx3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue
components of an RGB image at a specific spatial location. An RGB image may be viewed as a stack of three gray scale images that,
when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images
forming an RGB color image are referred to as the red, green and blue component images. The data class of the component images
determines their range of values.

Indexed Images:
An indexed image has two components: a data matrix of integers, X, and a color map matrix, map. Matrix map is an mx3 array of
class double containing floating point values in the range [0 1]. The length of the map is equal to the number of colors it defines. Each
row of map specifies the red, green and blue components of a single color. An indexed image uses direct mapping of pixel intensity
values to color map values. The color of each pixel is determined by using the corresponding value of integer matrix X as an index
into map. If X is of class double, then value 1 points to the first row in map, value 2 points to the second row, and so on. If X is of
class uint8 or uint16, then 0 points to the first row in map.

To display an indexed image we write


>> imshow (X,map)
Or alternatively
>> image (X)
>> colormap (map)
NTSC Color Space:
The NTSC system is used in analog television. One of the main advantages of this format is that gray scale information is separate
from color data, so the same signal can be used for both color and monochrome television sets.

In the NTSC format, image data consists of three components: luminance (Y), hue (I) and saturation (Q), where the choice of the
letters YIQ is conventional. The luminance component represents gray scale information, and the other two components carry the color
information of a TV signal. The YIQ components are obtained from the RGB components of an image using the linear
transformation:

[ Y ]   [ 0.299  0.587  0.114 ] [ R ]
[ I ] = [ 0.596 -0.274 -0.322 ] [ G ]
[ Q ]   [ 0.211 -0.523  0.312 ] [ B ]

MATLAB commands for inter-conversion between RGB and NTSC images are:

>> yiq_image=rgb2ntsc(rgb_image)
>> rgb_image=ntsc2rgb(yiq_image)
YCbCr Color Space:
The YCbCr color space is used extensively in digital video. In this format, luminance information is represented by a single component,
Y, and color information is stored as two color difference components, Cb and Cr. Component Cb is the difference between the blue
component and a reference value, and component Cr is the difference between the red component and a reference value.

MATLAB Commands for the purpose are:


>> ycbcr_image=rgb2ycbcr(rgb_image)

>> rgb_image=ycbcr2rgb(ycbcr_image)

CMY Color Space:


As we know, cyan, magenta and yellow are the secondary colors of light, or alternatively the primary colors of pigments; e.g. when
a surface coated with cyan is illuminated with white light, no red light is reflected from the surface. That is, cyan subtracts red light from
reflected white light, which itself is composed of equal amounts of red, green, and blue light.

Most devices that deposit colored pigments on paper, such as color printers and copiers, require CMY data input or convert RGB to
CMY internally. This conversion is performed as follows:

[ C ]   [ 1 ]   [ R ]
[ M ] = [ 1 ] - [ G ]
[ Y ]   [ 1 ]   [ B ]

The conversion is performed using the following MATLAB Command:

>> cmy_image=imcomplement (rgb_image)

>> rgb_image=imcomplement (cmy_image)

HSI Color Space:


As we have already seen, creating colors in the RGB and CMY models and changing from one model to the other is a straightforward
process. As noted earlier, these color systems are ideally suited for hardware implementations. In addition, the RGB system matches
nicely with the fact that the human eye is strongly perceptive to red, green and blue primaries. Unfortunately, the RGB, CMY and other
similar color models are not well suited for describing colors in terms that are practical for human interpretation. For example, one
does not refer to the color of an automobile by giving the percentage of each of the primaries composing its color. Furthermore, we
do not think of color images as being composed of three primary images that combine to form the single image.

When humans view a color object, we describe it by its hue, saturation, and brightness. Recall from the discussion above that hue is a
color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white
light. Brightness is a subjective descriptor that is practically impossible to measure. It embodies the achromatic notion of intensity and
is a key factor in describing color sensation.

Given an image in RGB color format, the H component of each RGB pixel is obtained using the equation:

H = θ            if B ≤ G
H = 360° - θ     if B > G

with

θ = cos^(-1){ (1/2)·[(R - G) + (R - B)] / [(R - G)² + (R - B)(G - B)]^(1/2) }

The saturation component is given by:

S = 1 - 3·min(R, G, B)/(R + G + B)

Finally, the intensity component is given by:

I = (1/3)·(R + G + B)
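The toolbox has no built-in RGB-to-HSI function, but the equations above translate directly into MATLAB; a minimal sketch, assuming the RGB values are first normalized to [0, 1] (the small eps offsets guard against division by zero and are an implementation choice):

>> rgb = im2double(imread('football.jpg'));
>> R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
>> num = 0.5*((R - G) + (R - B));
>> den = sqrt((R - G).^2 + (R - B).*(G - B));
>> theta = acosd(num./(den + eps));              % angle in degrees
>> H = theta;
>> H(B > G) = 360 - theta(B > G);                % H = 360 - theta where B > G
>> S = 1 - 3*min(min(R, G), B)./(R + G + B + eps);
>> I = (R + G + B)/3;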
Observations:

Input Image RGB to NTSC RGB to YCbCr RGB to CMY

RGB

Football.jpg

Greens.jpg

Saturn.png

Board.tif

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 13

Objective: Spatial Filtering of Color Images

Apparatus: MATLAB® and some sample colored images

Theory:

Spatial filtering of color images involves performing spatial neighborhood processing on the individual image planes. This breakdown
is analogous to the discussion on intensity transformation and spatial filtering of gray scale images covered in previous experiments. We
will take on RGB images only in this experiment for the purpose of spatial filtering. The basic concept, however, can be applied to other
color spaces too. We will explore spatial processing of color images by two examples of linear filtering: image smoothing and image
sharpening.

Color Image Smoothing:


As we have seen in previous experiments, one way to smooth a monochrome image is to define a filter mask of 1s, multiply all pixel
values by the coefficients in the spatial mask, and divide the result by the sum of the elements in the mask. The process of smoothing a
full color image using a spatial mask is given as under:

The process is formulated in the same way as for gray scale images, except that instead of single pixels we now deal with pixel
vectors of the form c(x, y) = [R(x, y), G(x, y), B(x, y)]^T. It follows from this and the properties of vector addition that:

c̄(x, y) = (1/K)·[ Σ R(s, t),  Σ G(s, t),  Σ B(s, t) ]^T

where each sum runs over the pixels (s, t) in the neighborhood S_xy centered at (x, y), and K is the number of pixels in the
neighborhood.

We recognize each component of this vector as the result that we would obtain by performing neighborhood averaging on each
individual component image, using the filter mask mentioned above. Thus we conclude that smoothing by neighborhood averaging
can be carried out on a per-image-plane basis. The results are the same as if neighborhood averaging were carried out directly in color
vector space.

Color Image Sharpening:


Sharpening an RGB color image with a linear spatial filter follows the same procedure outlined above, but using a sharpening filter
instead. From vector analysis, we know that the Laplacian of a vector is defined as a vector whose components are equal to the
Laplacians of the individual scalar components of the input vector. In the RGB color system, the Laplacian of the vector c introduced
above is calculated as under:

∇²c(x, y) = [ ∇²R(x, y),  ∇²G(x, y),  ∇²B(x, y) ]^T
This, as in the section above, tells us that we can compute the Laplacian of a full color image by computing the Laplacian of each
component image separately, and then recombining the planes as described in the procedure below.

Procedure:

1. Extract the three component images:


>>fR=fc(:,:,1);
>> fG=fc(:,:,2);
>> fB=fc(:,:,3);

2. Filter each component image individually, either with a smoothing or with a sharpening mask explained in earlier
experiments

>> fR_filtered=imfilter(fR,mask);

And repeat the same for the other two components

3. Reconstruct the filtered RGB image:

>> fc_filtered=cat(3, fR_filtered, fG_filtered, fB_filtered);
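End to end, a minimal sketch for the smoothing case (the 3x3 averaging mask is carried over from Exp no. 6):

>> fc = imread('football.jpg');
>> mask = ones(3)/9;                             % smoothing mask from Exp no. 6
>> fR_filtered = imfilter(fc(:,:,1), mask);      % filter each plane individually
>> fG_filtered = imfilter(fc(:,:,2), mask);
>> fB_filtered = imfilter(fc(:,:,3), mask);
>> fc_filtered = cat(3, fR_filtered, fG_filtered, fB_filtered);
>> figure, imshow(fc), figure, imshow(fc_filtered)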


Observations:
Smoothing Filter:

Input RGB Image    R Plane    G Plane    B Plane    Filtered R Plane    Filtered G Plane    Filtered B Plane    Filtered RGB Image
Football.jpg
Greens.jpg
Saturn.png
Board.tif

Sharpening Filter:

Input RGB Image    R Plane    G Plane    B Plane    Filtered R Plane    Filtered G Plane    Filtered B Plane    Filtered RGB Image
Football.jpg
Greens.jpg
Saturn.png
Board.tif

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 14

Objective: Dilation, Erosion, Opening and Closing of Binary Images

Apparatus: MATLAB® and some sample black and white images

Theory:

Dilation:

Dilation is an operation that grows or thickens objects in an image. The specific manner and extent of this thickening is controlled by a
shape referred to as a structuring element. Graphically, structuring elements can be represented either by a matrix of 0s and 1s or as a
set of foreground (1-valued) pixels. Regardless of the representation, the origin of the structuring element must be clearly identified.

In other words, dilation is a process that translates the origin of the structuring element throughout the domain of the image and checks
to see where the element overlaps 1-valued pixels. The output image is 1 at each location of the origin of the structuring element such
that the structuring element overlaps at least one 1-valued pixel in the input image.

The MATLAB command for the purpose is:

>> D=imdilate (A,B)

Where A is input image and B is structuring element.


Toolbox function ‘strel’ constructs elements with a variety of shapes and sizes. Its basic syntax is:

se=strel (shape, parameters)

where shape is a string specifying the desired shape, and parameters is a list of parameters that specify information about the shape,
such as its size. The following flat structuring element shapes are supported by the command (see the sketch after this list):

'diamond'
'disk'
'line'
'octagon'
'pair'
'periodicline'
'rectangle'
'square'
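A minimal sketch combining strel with imdilate (the disk radius of 5 is an illustrative choice):

>> A = imread('circles.png');
>> se = strel('disk', 5);        % flat, disk-shaped structuring element of radius 5
>> D = imdilate(A, se);          % dilation: objects grow/thicken
>> figure, imshow(A), figure, imshow(D)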

Erosion:

Erosion shrinks or thins objects in a binary image. As in dilation, the manner and extent of shrinking is controlled by a structuring
element. Erosion is a process of translating the structuring element throughout the domain of the image and checking to see where it
fits entirely within the foreground of the image. The output image has a value of 1 at each location of the origin of the structuring
element such that the element overlaps only 1-valued pixels of the input image (i.e. it doesn't overlap any of the image background).

Erosion is performed by toolbox function imerode, whose syntax is the same as the syntax of imdilate.

Opening and Closing:

The morphological opening of A by B is defined as the erosion of A by B, followed by a dilation of the result by B. Morphological
opening completely removes those regions of an object that cannot contain the structuring element, smooths object contours, breaks thin
connections, and removes thin protrusions (overhangs).

MATLAB command for opening is imopen:


>> C=imopen (A, B)

Where A is input image and B is structuring element.


The morphological closing of A by B is a dilation followed by an erosion. Geometrically, it is the complement of the union of all translations of B that do
not overlap A. Like opening, closing tends to smooth the contours of objects. Unlike opening, however, closing generally joins narrow
breaks, fills long thin gulfs, and fills holes smaller than the structuring element.

MATLAB command for closing is imclose:


>> C=imclose (A, B)

Where A is input image and B is structuring element.

Observations:
Dilation:

Input B&W Image        O/P Image using Structuring Element: Disk, Diamond, Square, Rectangle
Circles.png
Testpat1.png
Blobs.png

Erosion:

Input B&W Image        O/P Image using Structuring Element: Disk, Diamond, Square, Rectangle
Circles.png
Testpat1.png
Blobs.png

Opening:

Input B&W Image        O/P Image using Structuring Element: Disk, Diamond, Square, Rectangle
Circles.png
Testpat1.png
Blobs.png

Closing:

Input B&W Image        O/P Image using Structuring Element: Disk, Diamond, Square, Rectangle
Circles.png
Testpat1.png
Blobs.png

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 15

Objective: Skeletonization of Black and White Images

Apparatus: MATLAB® and some sample black and white images

Theory:

The operation of skeletonization simplifies the image by removing redundant pixels, that is, changing appropriate pixels from black to
white. This results, for example, in each ridge of a fingerprint being turned into a line only a single pixel wide.

The decision to remove a pixel is based on four rules. All of these rules must be satisfied for a pixel to be changed from black to white.
The first three rules are rather simple, while the fourth is quite complicated. Each pixel has eight neighbors. The four neighbors in the
horizontal and vertical directions are frequently called the close neighbors (4-neighborhood). The diagonal pixels are
correspondingly called the distant neighbors (diagonal neighborhood). The four rules are as follows:

Rule 1: The pixel under consideration must presently be black. If the pixel is already white, no action needs to be taken.

Rule 2: At least one of the pixel's close neighbors must be white. This ensures that the erosion of the thick ridges takes place from the
outside. In other words, if a pixel is black and completely surrounded by black pixels, it is to be left alone on this iteration. Why
use only the close neighbors, rather than all of the neighbors? The answer is simple: running the algorithm both ways shows that it
works better. Remember, this is very common in morphological image processing; trial and error is used to determine whether one
technique performs better than another.

Rule 3: The pixel must have more than one black neighbor. If it has only one, it must be the end of a line, and therefore shouldn't
be removed.

Rule 4: A pixel cannot be removed if removing it would leave its black neighbors disconnected. This ensures that each ridge is
changed into a continuous line, not a group of interrupted segments.

Connected means that all of the black neighbors touch each other. Likewise, unconnected means that the black neighbors form two or
more groups.

The algorithm for determining whether the neighbors are connected or unconnected is based on counting the black-to-white transitions
between adjacent neighboring pixels, taken in a clockwise direction. For example, if pixel 1 is black and pixel 2 is white, it is
considered a black-to-white transition. Likewise, if pixel 2 is black and both pixel 3 and pixel 4 are white, this is also a
black-to-white transition. In total, there are eight locations where a black-to-white transition may occur. The key to this algorithm
is that there will be exactly one black-to-white transition if the neighbors are connected; more than one such transition indicates
that the neighbors are unconnected.
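
As a concrete illustration, the transition count can be computed with a small helper function (a minimal sketch; the function name is hypothetical, the eight neighbors are assumed to be supplied in clockwise order, and the diagonal-neighbor refinement mentioned above is omitted for brevity):

function t = transitions(nb)
% TRANSITIONS  Count black-to-white transitions among the eight
% neighbors of a pixel, taken in clockwise order.
% nb is a 1x8 vector with 1 for black (ridge) and 0 for white.
c = [nb nb(1)];                      % wrap around so neighbor 8 meets neighbor 1
t = sum(c(1:8) == 1 & c(2:9) == 0);  % count black -> white steps
end

For example, transitions([1 1 0 0 0 1 1 1]) returns 1 (the black neighbors form a single connected group), while transitions([1 0 1 0 0 0 1 1]) returns 2 (two separate groups), in which case the center pixel must not be removed.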
As additional examples of binary image processing, consider the types of algorithms that might be useful after the fingerprint is
skeletonized. A disadvantage of this particular skeletonization algorithm is that it leaves a considerable amount of fuzz, short offshoots
that stick out from the sides of longer segments. There are several different approaches for eliminating these artifacts. For example, a
program might loop through the image removing the pixel at the end of every line. These pixels are identified by having only one
black neighbor. Doing this several times removes the fuzz, at the expense of making each of the correct lines
shorter. A better method would loop through the image identifying branch pixels (pixels that have more than two neighbors). Starting
with each branch pixel, count the number of pixels in each offshoot. If the number of pixels in an offshoot is less than some value
(say, 5), declare it to be fuzz, and change the pixels in the branch from black to white.
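
The simpler of these two approaches corresponds to the 'spur' operation of the bwmorph command introduced below (a minimal sketch; note that bwmorph treats 1-valued white pixels as foreground, the opposite of the black-ridge convention used in this discussion):

>> g = bwmorph(g, 'spur', 5);    % remove end-of-line pixels 5 times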

Another algorithm might change the data from a bitmap to a vector mapped format. This involves creating a list of the ridges
contained in the image and the pixels contained in each ridge. In the vector mapped form, each ridge in the fingerprint has an
individual identity, as opposed to an image composed of many unrelated pixels. This can be accomplished by looping through the
image looking for the endpoints of each line, the pixels that have only one black neighbor. Starting from the endpoint, each line is
traced from pixel to connecting pixel. After the opposite end of the line is reached, all the traced pixels are declared to be a single
object, and treated accordingly in future algorithms.

Skeletonization is implemented using the MATLAB command bwmorph, which supports a number of operations. One of these is skel, which is
used to skeletonize an image. The syntax of the command is as under:

G = bwmorph (f, operation, n)

Where f is an input binary image, operation is a string specifying the desired operation, and n is a positive integer specifying the
number of times the operation should be repeated. n may also be Inf, in which case the operation is repeated until the image no
longer changes.
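
For example (a minimal sketch, assuming a binary input image whose foreground is 1-valued):

>> f = imread('circles.png');
>> g = bwmorph(f, 'skel', Inf);   % repeat until the skeleton stops changing
>> imshow(f), figure, imshow(g)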

Observations:

Image No.     Input B&W Image     Skeletonized Image
1
2
3

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
Experiment No. 16

Objective: Introducing basic principles of JPEG compression

Apparatus: MATLAB® and some sample grayscale images

Theory:

The JPEG standard provides a powerful compression tool used worldwide for various applications. This standard has been adopted as
the leading lossy compression standard for natural images due to its excellent compression capabilities and configurability. In this lab
we will present the basic concepts used for JPEG coding and experiment with various coding parameters.

The JPEG baseline coding algorithm consists of the following steps:


1. The image is divided into 8×8 non-overlapping blocks.
2. Each block is level-shifted by subtracting 128 from it.
3. Each (level-shifted) block is transformed with Discrete Cosine Transform (DCT).
4. Each block (of DCT coefficients) is quantized using a quantization table. The quantization table is modified by the “quality” factor
that controls the quality of compression.
5. Each block (of quantized DCT coefficients) is reordered in accordance with a zigzag pattern.
6. In each block (of quantized DCT coefficients in zigzag order) all trailing zeros (starting immediately after the last non-zero
coefficient and up to the end of the block) are discarded and a special End-Of-Block (EOB) symbol is inserted instead in order to
represent them.

In the real JPEG baseline coding, after removal of the trailing zeros, each block is coded with Huffman coding, but this issue is
beyond our scope. Furthermore, in each block all zero-runs are coded, not only the final one. Each DC coefficient (the first
coefficient in the block) is encoded as the difference from the DC coefficient of the previous block.

Procedure:
1. Implement the Discrete Cosine Transform on non-overlapping 8×8 blocks of the input image using the ‘blkproc’ command of
MATLAB (see the sketch after this list).

2. Implement zigzag ordering of a pattern (matrix) of arbitrary size M × N. Take an M × N pattern and fill its entries with
sequential numbers (indices), row by row. Now extract the indices from the pattern in zigzag order: start at the upper-left
corner and traverse the anti-diagonals of the pattern in alternating directions (a possible implementation is sketched after
this list).
>> Zig = zigzag(8, 8);
3. Accumulate the zero-frequency (DC) components of each block into another two-dimensional array.
4. Display this array as an image; the compression effect is then obvious.
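
A possible implementation of the zigzag ordering of step 2 (a sketch under the row-by-row numbering convention described above; depending on the scan convention used in class, the traversal direction of each diagonal may need to be reversed):

function order = zigzag(M, N)
% ZIGZAG  Indices of an M-by-N matrix, numbered row by row,
% listed in zigzag scan order along its anti-diagonals.
A = reshape(1:M*N, N, M).';            % sequential indices, filled row by row
order = zeros(1, M*N);
k = 0;
for s = 2:(M + N)                      % entries with i + j == s form one anti-diagonal
    i = max(1, s - N):min(M, s - 1);   % valid row indices on this diagonal
    j = s - i;                         % corresponding column indices
    if mod(s, 2) == 0                  % alternate the traversal direction
        i = fliplr(i);
        j = fliplr(j);
    end
    for t = 1:numel(i)
        k = k + 1;
        order(k) = A(i(t), j(t));
    end
end
end

And a sketch of steps 1, 3 and 4, together with the quantization described in the theory section (the quantization table is the standard JPEG luminance table; the quality-factor scaling rule follows the common IJG convention and, like the choice of test image, is an assumption here):

% Blockwise DCT, quantization and display of the DC image
f = double(imread('cameraman.tif')) - 128;       % read and level-shift
T = blkproc(f, [8 8], @dct2);                    % 8x8 blockwise DCT
                                                 % (use blockproc in newer MATLAB)
Q = [16 11 10 16  24  40  51  61;                % standard JPEG luminance
     12 12 14 19  26  58  60  55;                % quantization table
     14 13 16 24  40  57  69  56;
     14 17 22 29  51  87  80  62;
     18 22 37 56  68 109 103  77;
     24 35 55 64  81 104 113  92;
     49 64 78 87 103 121 120 101;
     72 92 95 98 112 100 103  99];
q = 50;                                          % "quality" factor, 1..100
if q < 50, s = 5000/q; else, s = 200 - 2*q; end  % IJG scaling rule
Qs = max(round(Q*s/100), 1);                     % scaled table, minimum entry 1
Tq = blkproc(T, [8 8], @(b) round(b./Qs));       % quantize each block

dc = Tq(1:8:end, 1:8:end);                       % one DC coefficient per block
figure, imshow(dc, [])                           % a 1/8-size rendition of the input

Steps 5 and 6 would then apply Zig = zigzag(8, 8) to each quantized block and replace the trailing zeros with an EOB symbol.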

Observations:

Image No.     Input Image     JPEG Compressed Image
1
2
3

Comments/ Notes:
------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------------------------------------------------------------------------------------
