
DIP Record

The document outlines methods to convert RGB images to binary images in MATLAB using the 'imbinarize' function, detailing the steps involved in the conversion process. It also discusses how to label and measure connected components in binary images using 4-connectivity and 8-connectivity, providing examples of MATLAB code for practical implementation. Additionally, it covers arithmetic operations on images, contrast stretching, and histogram equalization techniques.


EX. NO : Convert RGB Image to Binary Image in MATLAB
DATE :

In MATLAB, we can use the built-in function ‘imbinarize’ to convert a given RGB image into a binary image.

The conversion of an RGB image to a binary image in MATLAB is performed as per the following steps:

Step (1) − Read the input RGB image.
Step (2) − Convert the input RGB image to a grayscale image.
Step (3) − Specify a threshold value to perform binary conversion. This value depends upon the specific image and requirements, and lies between 0 and 1.
Step (4) − Convert the grayscale image to a binary image based on the defined threshold value.
Step (5) − Display the binary image.

Now, let us understand how to implement this algorithm to convert an RGB image to a binary image in MATLAB with the help of an example program.

Example
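A minimal sketch of the steps above, assuming peppers.png as the input image and a threshold of 0.5 (both are illustrative choices, not values taken from the record):

% Step 1: Read the input RGB image (peppers.png is an assumed example image)
rgbImage = imread('peppers.png');

% Step 2: Convert the RGB image to a grayscale image
grayImage = rgb2gray(rgbImage);

% Step 3: Specify a threshold value between 0 and 1 (0.5 is an assumed value)
threshold = 0.5;

% Step 4: Convert the grayscale image to a binary image using the threshold
binaryImage = imbinarize(grayImage, threshold);

% Step 5: Display the original and binary images
figure, imshow(rgbImage), title('Original RGB Image');
figure, imshow(binaryImage), title('Binary Image');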
RESULT :
Thus the given program is executed and verified successfully

EX. NO : Implementation of Relationships between pixels
DATE :

Label and Measure Connected Components in a Binary Image


Detect Connected Components
In a binary image, a connected component, or an object, is a set of adjacent "on" pixels.
Determining which pixels are adjacent depends on how pixel connectivity is defined. For
a two-dimensional image, there are two standard connectivities:
• 4-connectivity — Pixels are connected if their edges touch. Two adjoining pixels are
part of the same object if they are both on and are connected along the horizontal or
vertical direction.
• 8-connectivity — Pixels are connected if their edges or corners touch. Two adjoining
pixels are part of the same object if they are both on and are connected along the
horizontal, vertical, or diagonal direction.
You can also define nonstandard connectivity, or connectivities for higher dimensional
images. For more information, see Pixel Connectivity.
This figure shows two identical matrices that represent a binary image. On each matrix
is an overlay highlighting the connected components using 4-connectivity and 8-
connectivity, respectively. There are three connected components using 4-connectivity,
but only two connected components using 8-connectivity.

You can calculate connected components by using the bwconncomp function. In this
sample code, BW is the binary matrix shown in the above figure. For this example,
specify a connectivity of 4 so that two adjoining pixels are part of the same object if
they are both on and are connected along the horizontal or vertical direction.
The PixelIdxList field identifies the list of pixels belonging to each connected
component.
BW = zeros(8,8);
BW(2:4,2:3) = 1;
BW(5:7,4:5) = 1;
BW(2,6:8) = 1;
BW(3,7:8) = 1;
BW
BW =

0 0 0 0 0 0 0 0
0 1 1 0 0 1 1 1
0 1 1 0 0 0 1 1
0 1 1 0 0 0 0 0
0 0 0 1 1 0 0 0
0 0 0 1 1 0 0 0
0 0 0 1 1 0 0 0
0 0 0 0 0 0 0 0
cc4 = bwconncomp(BW,4)
cc4 =

Connectivity: 4
ImageSize: [8 8]
NumObjects: 3
PixelIdxList: {[6x1 double] [6x1 double] [5x1 double]}
For comparison, calculate the connected components of the same binary image using
the default connectivity, 8.
cc8 = bwconncomp(BW)
cc8 =

Connectivity: 8
ImageSize: [8 8]
NumObjects: 2
PixelIdxList: {[12x1 double] [5x1 double]}

Label Connected Components

Labeling connected components is the process of identifying the connected components in an image and assigning each one a unique label. The resulting matrix is called a label matrix.
This figure shows the two label matrices that label the connected components using 4-
connectivity and 8-connectivity, respectively.

Create a label matrix by using the labelmatrix function. This sample code continues with the connected component structure, cc4, defined in the preceding section.
L4 = labelmatrix(cc4)
L4 =

8×8 uint8 matrix

0 0 0 0 0 0 0 0
0 1 1 0 0 3 3 3
0 1 1 0 0 0 3 3
0 1 1 0 0 0 0 0
0 0 0 2 2 0 0 0
0 0 0 2 2 0 0 0
0 0 0 2 2 0 0 0
0 0 0 0 0 0 0 0

To visualize connected components, display the label matrix as a pseudo-color image by using the label2rgb function. The label identifying each object in the label matrix maps to a different color in the associated colormap. You can specify the colormap, background color, and how objects in the label matrix map to colors in the colormap.
RGB_label = label2rgb(L4,@copper,"c","shuffle");
imshow(RGB_label)
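The experiment title also calls for measuring the connected components. A minimal sketch using regionprops on the cc4 structure defined above (Area and Centroid are standard regionprops properties; the loop and display format are illustrative):

% Measure basic properties of each connected component found with bwconncomp
stats = regionprops(cc4, 'Area', 'Centroid');

% Report the area (in pixels) and centroid of every object
for k = 1:cc4.NumObjects
    fprintf('Object %d: area = %d pixels, centroid = (%.1f, %.1f)\n', ...
        k, stats(k).Area, stats(k).Centroid(1), stats(k).Centroid(2));
end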

OUTPUT :

RESULT : Thus the given program is executed and verified successfully

EX. NO : Implementation of Arithmetic, Set operations and Transformations of an image
DATE :

imadd
Syntax
Z = imadd(X,Y)

Description
Z = imadd(X,Y) adds each element in array X with the corresponding element in array Y and
returns the sum in the corresponding element of the output array Z.

Examples
Add Two uint8 Arrays

This example shows how to add two uint8 arrays with truncation for values that exceed 255.
X = uint8([ 255 0 75; 44 225 100]);
Y = uint8([ 50 50 50; 50 50 50 ]);
Z = imadd(X,Y)
Z = 2x3 uint8 matrix

255 50 125
94 255 150

Add Two Images and Specify Output Class

Read two grayscale uint8 images into the workspace.
I = imread('rice.png');
J = imread('cameraman.tif');
Add the images. Specify the output as type uint16 to avoid truncating the result.
K = imadd(I,J,'uint16');
Display the result.
imshow(K,[])

Add a Constant to an Image

Read an image into the workspace.
I = imread('rice.png');
Add a constant to the image.
J = imadd(I,50);
Display the original image and the result.
imshow(I)

figure
imshow(J)

imsubtract
Subtract one image from another or subtract constant from image

Syntax
Z = imsubtract(X,Y)

Description
Z = imsubtract(X,Y) subtracts each element in array Y from the corresponding element in
array X and returns the difference in the corresponding element of the output array Z.

Subtract Image Background


Read a grayscale image into the workspace.
I = imread('rice.png');
Estimate the background.
background = imopen(I,strel('disk',15));
Subtract the background from the image.
J = imsubtract(I,background);
Display the original image and the processed image.
imshow(I)

figure, imshow(J)

Perform Simple 2-D Translation Transformation

This example shows how to perform a simple affine transformation called a translation. In a
translation, you shift an image in coordinate space by adding a specified value to the x- and y-
coordinates. (You can also use the imtranslate function to perform translation.)
Read the image to be transformed. This example creates a checkerboard image using
the checkerboard function.

cb = checkerboard;
imshow(cb)

Get spatial referencing information about the image. This information is useful when you want
to display the result of the transformation.

cb_ref = imref2d(size(cb))
cb_ref =
imref2d with properties:

XWorldLimits: [0.5000 80.5000]
YWorldLimits: [0.5000 80.5000]
ImageSize: [80 80]
PixelExtentInWorldX: 1
PixelExtentInWorldY: 1
ImageExtentInWorldX: 80
ImageExtentInWorldY: 80
XIntrinsicLimits: [0.5000 80.5000]
YIntrinsicLimits: [0.5000 80.5000]

Specify the amount of translation in the x and y directions.
tx = 20;
ty = 30;
Create a transltform2d geometric transformation object that represents the translation transformation. For other types of geometric transformations, you can use other types of objects.
tform = transltform2d(tx,ty);
Perform the transformation. Call the imwarp function, specifying the image you want to transform and the geometric transformation object. imwarp returns the transformed image, cb_translated. This example also returns the optional spatial referencing object, cb_translated_ref, which contains spatial referencing information about the transformed image.

[cb_translated,cb_translated_ref] = imwarp(cb,tform);

View the original and the transformed image side-by-side. When viewing the translated image,
it might appear that the transformation had no effect. The transformed image looks identical to
the original image. The reason that no change is apparent in the visualization is
because imwarp sizes the output image to be just large enough to contain the entire
transformed image but not the entire output coordinate space. Notice, however, that the
coordinate values have been changed by the transformation.

figure
subplot(1,2,1)
imshow(cb,cb_ref)
subplot(1,2,2)
imshow(cb_translated,cb_translated_ref)

To see the entirety of the transformed image in the same relation to the origin of the coordinate
space as the original image, use imwarp with the "OutputView" name-value argument,
specifying a spatial referencing object. The spatial referencing object specifies the size of the
output image and how much of the output coordinate space is included in the output image. To
do this, the example makes a copy of the spatial referencing object associated with the original
image and modifies the world coordinate limits to accommodate the full size of the transformed
image. The example sets the limits of the output image in world coordinates to include the origin
from the input.
cb_translated_ref = cb_ref;
cb_translated_ref.XWorldLimits(2) = cb_translated_ref.XWorldLimits(2)+tx;
cb_translated_ref.YWorldLimits(2) = cb_translated_ref.YWorldLimits(2)+ty;

[cb_translated,cb_translated_ref] = imwarp(cb,tform, ...
    OutputView=cb_translated_ref);

figure
subplot(1,2,1)
imshow(cb,cb_ref)
subplot(1,2,2)
imshow(cb_translated,cb_translated_ref);

Image Rotation and Scale

This example shows how to align or register two images that differ by a rotation and a
scale change. You can calculate the rotation angle and scale factor and transform the
distorted image to recover the original image.
Step 1: Read Image
Read an image into the workspace.

original = imread("cameraman.tif");
imshow(original)
text(size(original,2),size(original,1)+15, ...
"Image courtesy of Massachusetts Institute of Technology", ...
FontSize=7,HorizontalAlignment="right")

Step 2: Resize and Rotate the Image


Create a distorted version of the image by resizing and rotating the image. Note
that imrotate rotates images in a counterclockwise direction when you specify a positive angle
of rotation.

scaleFactor = 0.7;
distorted = imresize(original,scaleFactor);

theta = 30;
distorted = imrotate(distorted,theta);
imshow(distorted)
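The remaining recovery steps of this example are not reproduced in the record. One possible sketch recovers the rotation and scale with imregcorr (phase correlation) rather than a feature-based method; the call below and the OutputView choice are assumptions for illustration:

% Estimate a similarity transformation (rotation, scale, translation)
% that maps the distorted image back onto the original
tformEstimate = imregcorr(distorted, original);

% Apply the estimated transformation, sampling on the original image grid
Rfixed = imref2d(size(original));
recovered = imwarp(distorted, tformEstimate, OutputView=Rfixed);

% Compare the recovered image with the original
figure, imshowpair(original, recovered, 'montage')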

Transform Image Using 2-D Affine Transformation Object
Read an image into the workspace.
A = imread('pout.tif');
Create an affine2d object that defines an affine geometric transformation. This example
combines vertical shear and horizontal stretch.
tform = affine2d([2 0.33 0; 0 1 0; 0 0 1])
tform =
affine2d with properties:

T: [3x3 double]
Dimensionality: 2

Apply the geometric transformation to the image using imwarp.
B = imwarp(A,tform);
Display the resulting image.
figure
imshow(B);
axis on equal;

RESULT :
Thus the given program is executed and verified successfully

EX. NO : Contrast Stretching, Histogram & Histogram Equalization
DATE :

Contrast Stretching of Low contrast Image

stretchlim
Find limits to contrast stretch an image

Syntax
• LOW_HIGH = stretchlim(I)
• LOW_HIGH = stretchlim(I,TOL)
• LOW_HIGH = stretchlim(RGB,TOL)

Description

LOW_HIGH = stretchlim(I) returns LOW_HIGH, a two-element vector of pixel values that specify lower and upper limits that can be used for contrast stretching image I. By default, values in LOW_HIGH specify the bottom 1% and the top 1% of all pixel values. The gray values returned can be used by the imadjust function to increase the contrast of an image.

LOW_HIGH = stretchlim(I,TOL) where TOL is a two-element vector [LOW_FRACT HIGH_FRACT] that specifies the fraction of the image to saturate at low and high pixel values.

If TOL is a scalar, LOW_FRACT = TOL, and HIGH_FRACT = 1 - LOW_FRACT, which saturates equal fractions at low and high pixel values.

If you omit the argument, TOL defaults to [0.01 0.99], saturating 2%.

If TOL = 0, LOW_HIGH = [min(I(:)); max(I(:))].

LOW_HIGH = stretchlim(RGB,TOL) returns a 2-by-3 matrix of intensity pairs to saturate each plane of the RGB image. TOL specifies the same fractions of saturation for each plane.

Note If TOL is too big, such that no pixels would be left after saturating low
and high pixel values, stretchlim returns [0 1].

Class Support

The input image can be of class uint8, uint16, int16, double, or single. The output
limits returned, LOW_HIGH, are of class double and have values between 0 and 1.

Example

I = imread('pout.tif');
J = imadjust(I,stretchlim(I),[]);
imshow(I), figure, imshow(J)
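As a further illustration of the TOL argument described above, the same image can be stretched with an explicit saturation fraction (the value 0.05 is an assumed example, saturating 5% of pixels at each end):

J2 = imadjust(I, stretchlim(I, 0.05), []);
figure, imshow(J2), title('Contrast stretched with TOL = 0.05')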

Adjust Image Contrast Using Histogram Equalization


Histogram equalization involves transforming the intensity values so that the histogram of the
output image approximately matches a specified histogram. By default, the histogram
equalization function, histeq, tries to match a flat histogram with 64 bins such that the output
image has pixel values evenly distributed throughout the range. You can also specify a different
target histogram to match a custom contrast.

Original Image Histogram

Read a grayscale image into the workspace.
I = imread("pout.tif");
Display the image and its histogram. The original image has low contrast, with most pixel values in the middle of the intensity range.
figure
subplot(1,3,1)
imshow(I)
subplot(1,3,2:3)
imhist(I)

Adjust Contrast Using Default Equalization

Adjust the contrast using histogram equalization. Use the default behavior of the histogram equalization function, histeq. The default target histogram is a flat histogram with 64 bins.
J = histeq(I);
Display the contrast-adjusted image and its new histogram.
figure
subplot(1,3,1)
imshow(J)
subplot(1,3,2:3)
imhist(J)

Adjust Contrast, Specifying Number of Bins

Adjust the contrast, specifying a different number of bins. With a small number of bins, there are noticeably fewer gray levels in the contrast-adjusted image.
nbins = 10;
K = histeq(I,nbins);
Display the contrast-adjusted image and its new histogram.
figure
subplot(1,3,1)
imshow(K)
subplot(1,3,2:3)
imhist(K)
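The introduction above also mentions matching a custom target histogram. A minimal sketch, assuming a ramp-shaped 64-bin target (the shape of the target is an illustrative choice):

hgram = linspace(1, 64, 64);   % relative bin counts of the desired histogram
M = histeq(I, hgram);          % match the image histogram to hgram
figure
subplot(1,3,1)
imshow(M)
subplot(1,3,2:3)
imhist(M)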

RESULT :
Thus the given program is executed and verified successfully

EX. NO : Display of bit planes of an image
DATE :

% Read a grayscale image (the input image is not specified in the original
% listing; cameraman.tif is assumed here for illustration)
Img = imread('cameraman.tif');

% Extract each of the 8 bit planes of the 8-bit image
b1 = double(bitget(Img,1));   % least significant bit plane
b2 = double(bitget(Img,2));
b3 = double(bitget(Img,3));
b4 = double(bitget(Img,4));
b5 = double(bitget(Img,5));
b6 = double(bitget(Img,6));
b7 = double(bitget(Img,7));
b8 = double(bitget(Img,8));   % most significant bit plane

% Stack the bit planes along the third dimension
Img_b = cat(3,b1,b2,b3,b4,b5,b6,b7,b8);

figure,
imshow(Img), title('Original:');
figure,
subplot(2,2,1)
imshow(b1), title('Bit Plane: 1');
subplot(2,2,2)
imshow(b2), title('Bit Plane: 2');
subplot(2,2,3)
imshow(b3), title('Bit Plane: 3');
subplot(2,2,4)
imshow(b4), title('Bit Plane: 4');
figure,
subplot(2,2,1)
imshow(b5), title('Bit Plane: 5');
subplot(2,2,2)
imshow(b6), title('Bit Plane: 6');
subplot(2,2,3)
imshow(b7), title('Bit Plane: 7');
subplot(2,2,4)
imshow(b8), title('Bit Plane: 8');
y = Img_b;
output :

RESULT : Thus the given program is executed and verified successfully

EX. NO : Display of FFT of an image
DATE :

Steps:

• Read the image.
• Apply forward Fourier transformation.
• Display log and shift FT images.
Functions Used:
• imread( ) inbuilt function is used to read the image.
• fft2( ) inbuilt function is used to apply forward Fourier transform on a 2D signal.
• ifft2( ) inbuilt function is used to apply inverse Fourier transform on a 2D signal.
• fftshift( ) inbuilt function is used to shift corners to center in the FT image.
• log( ) inbuilt function is used to evaluate the logarithm of the FT complex signal.
• imtool( ) inbuilt function is used to display the image.

% MATLAB code for Forward and
% Inverse Fourier Transform

% FORWARD FOURIER TRANSFORM
k=imread("cameraman.jpg");

% Apply fourier transformation.
f=fft2(k);

% Take magnitude of FT.
f1=abs(f);

% Take log of magnitude of FT.
f2=log(abs(f));

% Shift FT from corners to central part.
f3=log(abs(fftshift(f)));

% Display all three FT images.
imtool(f1,[]);
imtool(f2,[]);
imtool(f3,[]);

% Remove some frequency from FT.
f(1:20, 20:40)=0;
imtool(log(abs(f)),[]);

Output:

Figure 1: Absolute of the Fourier transformed image
Figure 2: Log of the absolute of FT of image
Figure 3: Centered spectrum of FT image
Figure 4: Some frequencies blocked in FT image
Code Explanation :

• k=imread("cameraman.jpg"); This line reads the image.
• f=fft2(k); This line computes the Fourier transformation.
• f1=abs(f); This line takes the magnitude of FT.
• f2=log(abs(f)); This line takes the log of the magnitude of FT.
• f3=log(abs(fftshift(f))); This line shifts FT from corners to the central part.
• f(1:20, 20:40)=0; This line removes some frequencies from FT.

% MATLAB code for INVERSE FOURIER TRANSFORM

% Apply inverse FT on the Fourier transformed image.
% We get the original image in this step.
j=ifft2(f);
% Take log of the original image.
j1=log(abs(j));

% Shift corners to center.
j2=fftshift(j);
% Again shift to get the original image.
j3=fftshift(j2);
% Remove some frequency from the FT image.
f(1:20, 20:40)=0;
% Apply inverse FT.
j4=ifft2(f);
% Display all the images.
imtool(j,[]);
imtool(j1,[]);
imtool(j2,[]);
imtool(j3,[]); % j and j3 are the same.
imtool(abs(j4),[]);

Output :

Figure 1: Original input image obtained by taking the inverse FT of the FT image
Figure 2: Log of absolute of inverse Fourier transformed image
Figure 3: Corners shifted to center
Figure 4: Inverse FT after removing some frequencies from the frequency domain

RESULT : Thus the given program is executed and verified successfully

EX. NO : Image Smoothing Filters
DATE :

imgaussfilt
2-D Gaussian filtering of images

Syntax
B = imgaussfilt(A)

B = imgaussfilt(A,sigma)

B = imgaussfilt(___,Name,Value)

Description
B = imgaussfilt(A) filters image A with a 2-D Gaussian smoothing kernel with standard
deviation of 0.5, and returns the filtered image in B.
B = imgaussfilt(A,sigma) filters image A with a 2-D Gaussian smoothing kernel with
standard deviation specified by sigma.
B = imgaussfilt(___,Name,Value) uses name-value arguments to control aspects of the
filtering.
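For illustration, two commonly used name-value arguments are FilterSize and Padding; the values below are arbitrary choices, not taken from the record:

I = imread('cameraman.tif');
% 7x7 Gaussian kernel with sigma = 2 and symmetric boundary padding
Iblur = imgaussfilt(I, 2, 'FilterSize', 7, 'Padding', 'symmetric');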

Examples

Smooth Image with Gaussian Filter

Read image to be filtered.
I = imread('cameraman.tif');
Filter the image with a Gaussian filter with standard deviation of 2.
Iblur = imgaussfilt(I,2);
Display the original and filtered image in a montage.
montage({I,Iblur})
title('Original Image (Left) Vs. Gaussian Filtered Image (Right)')

medfilt2
2-D median filtering

Syntax
J = medfilt2(I)

J = medfilt2(I,[m n])

J = medfilt2(___,padopt)

Description
J = medfilt2(I) performs median filtering of the image I in two dimensions. Each output
pixel contains the median value in a 3-by-3 neighborhood around the corresponding pixel in the
input image.
J = medfilt2(I,[m n]) performs median filtering, where each output pixel contains the
median value in the m-by-n neighborhood around the corresponding pixel in the input image.
J = medfilt2(___,padopt) controls how medfilt2 pads the image boundaries.
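For illustration, the neighborhood size and padding option can be combined; the 5-by-5 neighborhood and 'symmetric' padding below are assumed values:

I = imread('eight.tif');
% 5-by-5 median filter with symmetric padding at the image boundaries
J = medfilt2(I, [5 5], 'symmetric');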
Examples

Remove Salt and Pepper Noise from Image

Read image into workspace and display it.
I = imread('eight.tif');
figure, imshow(I)

Add salt and pepper noise.
J = imnoise(I,'salt & pepper',0.02);

Use a median filter to filter out the noise.
K = medfilt2(J);
Display results, side-by-side.
imshowpair(J,K,'montage')

RESULT :
Thus the given program is executed and verified successfully

EX.NO : Sharpening filters
DATE :

Sharpen image using unsharp masking



Syntax
B = imsharpen(A)

B = imsharpen(A,Name,Value)

Description
B = imsharpen(A) sharpens the grayscale or truecolor (RGB) image A by using the unsharp masking method.
B = imsharpen(A,Name,Value) uses name-value arguments to control aspects of the unsharp masking.

Examples

Sharpen Image

Read an image into the workspace and display it.
a = imread('hestain.png');
imshow(a)
title('Original Image');

Sharpen the image using the imsharpen function and display it.
b = imsharpen(a);
figure, imshow(b)
title('Sharpened Image');

Control the Amount of Sharpening at the Edges

Read an image into the workspace and display it.
a = imread('rice.png');
imshow(a), title('Original Image');

Sharpen image, specifying the radius and amount parameters.

b = imsharpen(a,'Radius',2,'Amount',1);
figure, imshow(b)
title('Sharpened Image');

OUTPUT :

RESULT : Thus the given program is executed and verified successfully

EX . NO : Edge Detection Sobel & Canny Filters
DATE :

In an image, an edge is a curve that follows a path of rapid change in image intensity.
Edges are often associated with the boundaries of objects in a scene. Edge detection is
used to identify the edges in an image.
To find edges, you can use the edge function. This function looks for places in the
image where the intensity changes rapidly, using one of these two criteria:
• Places where the first derivative of the intensity is larger in magnitude than some
threshold
• Places where the second derivative of the intensity has a zero crossing
edge provides several derivative estimators, each of which implements one of these

definitions. For some of these estimators, you can specify whether the operation should
be sensitive to horizontal edges, vertical edges, or both. edge returns a binary image
containing 1's where edges are found and 0's elsewhere.
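As a small illustration of the direction sensitivity mentioned above (coins.png and the empty automatic threshold are assumptions for this sketch):

I = imread('coins.png');
BWh = edge(I,'sobel',[],'horizontal');   % respond only to horizontal edges
BWv = edge(I,'sobel',[],'vertical');     % respond only to vertical edges
figure, imshowpair(BWh, BWv, 'montage')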
The most powerful edge-detection method that edge provides is the Canny method. The Canny method differs from the other edge-detection methods in that it uses two different thresholds (to detect strong and weak edges), and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be affected by noise, and more likely to detect true weak edges.
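A small sketch of the two-threshold behaviour (the threshold values below are illustrative assumptions, not taken from the record):

I = imread('coins.png');
BW_auto   = edge(I,'canny');              % thresholds chosen automatically
BW_manual = edge(I,'canny',[0.04 0.15]);  % explicit [low high] thresholds
figure, imshowpair(BW_auto, BW_manual, 'montage')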

Detect Edges in Images

This example shows how to detect edges in an image using both the Canny edge detector and the Sobel edge detector.
Read the image into the workspace and display it.
I = imread('coins.png');
imshow(I)

Apply the Sobel edge detector to the unfiltered input image. Then, apply the Canny edge detector to the unfiltered input image.
BW1 = edge(I,'sobel');
BW2 = edge(I,'canny');
Display the filtered images side-by-side for comparison.
tiledlayout(1,2)

nexttile
imshow(BW1)
title('Sobel Filter')

nexttile
imshow(BW2)
title('Canny Filter')

RESULT :

Thus the given program is executed and verified successfully .

EX . NO : Implementation of image restoration techniques
DATE :

Image restoration is the operation of taking a corrupt/noisy image and estimating the clean,
original image. Corruption may come in many forms such as motion blur, noise and camera
mis-focus.

Steps :

1. Read an input image.
2. Define a blur filter.
3. Degrade the image quality by applying a filter (e.g. Gaussian blur, motion blur).
4. Add minimal random noise to the degraded image (using randn).
5. Compute the DFT of the degraded image (fft2, fftshift, log of absolute value for display).
6. Compute the DFT of the filter, with size equal to the image (increase the size of the filter, ifftshift, fft2, fftshift, log of absolute value for display).
7. Apply the requisite method for image restoration.
8. Display the restored image in the spatial domain.

EXAMPLE :

I = im2double(imread('Fig0526(a)(original_DIP).tif'));
imshow(I);
title('Original Image (courtesy of MIT)');

LEN = 90;
THETA = 45;
PSF = fspecial('motion', LEN, THETA);
blurred = imfilter(I, PSF, 'conv', 'circular');
figure, imshow(blurred)

noise_mean = 0;
noise_var = 0.01;
blurred_noisy = imnoise(blurred, 'gaussian', noise_mean, noise_var);
figure, imshow(blurred_noisy)
title('Simulate Blur and Noise')

estimated_nsr = noise_var / var(I(:));
wnr3 = deconvwnr(blurred_noisy, PSF, estimated_nsr);
figure, imshow(wnr3)
title('Restoration of Blurred, Noisy Image Using Estimated NSR');

OUTPUT :

RESULT :

Thus the given program is executed and verified successfully .

EX. NO : Object Detection and Recognition
DATE :

In MATLAB, object detection and recognition can be performed using the Computer Vision System Toolbox and Deep Learning Toolbox. Below is a step-by-step example demonstrating how to achieve this using a pre-trained deep learning model, such as AlexNet (see the MATLAB sketch after the steps).

STEPS :

1. Load Image: The script starts by loading an image from a specified path and displaying it.

2. Preprocess Image: The image is resized to the input size required by AlexNet (227x227
pixels). If the image is grayscale, it is converted to an RGB image by replicating the single
channel.

3. Load Model: The AlexNet model is loaded using the `alexnet` function. This function loads a
pre-trained AlexNet model.

4. Classify Image: The preprocessed image is classified using the `classify` function. The resulting
label is then displayed on the original image.
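A minimal MATLAB sketch of these steps (the file name 'image.jpg' is a placeholder, and the AlexNet support package for Deep Learning Toolbox is assumed to be installed):

% Step 1: Load and display the image ('image.jpg' is a placeholder path)
img = imread('image.jpg');
figure, imshow(img), title('Input Image');

% Step 2: Preprocess - replicate the channel if grayscale, then resize to 227x227
if size(img,3) == 1
    img = cat(3, img, img, img);
end
inputImg = imresize(img, [227 227]);

% Step 3: Load the pre-trained AlexNet model
net = alexnet;

% Step 4: Classify the preprocessed image and display the predicted label
label = classify(net, inputImg);
figure, imshow(img), title(char(label));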

OBJECT DETECTION

ORIGINAL IMAGE :

CODE (Python, using OpenCV) :

import cv2

from matplotlib import pyplot as plt

img = cv2.imread("image.jpg")

img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

plt.subplot(1, 1, 1)

plt.imshow(img_rgb)

plt.show()

OUTPUT :

OBJECT RECOGNITION

CODE (Python, using OpenCV) :

import cv2

from matplotlib import pyplot as plt

img = cv2.imread("image.jpg")

img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

stop_data = cv2.CascadeClassifier('stop_data.xml')

found = stop_data.detectMultiScale(img_gray, minSize=(20, 20))

amount_found = len(found)

if amount_found != 0:
    for (x, y, width, height) in found:
        # draw a green rectangle around each detected object
        cv2.rectangle(img_rgb, (x, y), (x + width, y + height), (0, 255, 0), 5)

plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()

OUTPUT :

RESULT :

Thus the given program is executed and verified successfully .
