Module 2 DIP
SYLLABUS:
WR-1 https://nptel.ac.in/courses/117/105/117105135/
WR-2 https://www.youtube.com/watch?v=DSGHkvQBMbs
Channel https://www.youtube.com/channel/UCD0mXLuJqtvPxBqRI1r66cg/videos
MODULE-2
IMAGE ENHANCEMENT IN SPATIAL DOMAIN
Spatial domain refers to the image plane itself, and approaches in this category are
based on direct manipulation of pixels in an image. Frequency domain processing
techniques are based on modifying the Fourier transform of the image.
Suppose we have a digital image that can be represented by a two-dimensional random field f(x, y). Spatial domain processes will be denoted by the expression

g(x, y) = T[f(x, y)] or s = T(r)

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f. The values of pixels before and after processing will be denoted by r and s, respectively.
Histograms give a rough sense of the density of the underlying distribution of the data, and are often used for density estimation: estimating the probability density function of the underlying variable. When a histogram is used for probability density, its total area is normalized to 1. Histograms are sometimes confused with bar charts: a histogram is used for continuous data, where the bins represent ranges of data, while a bar chart is a plot of categorical variables. Some authors recommend that bar charts have gaps between the rectangles to clarify the distinction.
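As a small illustration, the sketch below (using NumPy; the array img is a hypothetical 8-bit grayscale image, not one from the course material) computes a histogram normalized for use as a probability density:

    import numpy as np

    # Hypothetical 8-bit grayscale image with values in 0..255
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

    # density=True normalizes the counts so the total area is 1;
    # with 256 bins of width 1, the entries also sum to 1
    hist, bin_edges = np.histogram(img, bins=256, range=(0, 256), density=True)
    print(hist.sum())  # ~1.0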
2.3 What is the importance of image enhancement in image processing? List out any three Image Enhancement Techniques?
➢ Contrast Stretching: The result of the transformation shown in the figure below is to produce an image of higher contrast than the original, by darkening the levels below m and brightening the levels above m in the original image. This technique is known as contrast stretching.
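A minimal NumPy sketch of one common contrast-stretching transformation, s = 1/(1 + (m/r)^E) scaled to [0, 255] (this specific formula and the parameter names are illustrative choices, not taken from the text above); it darkens levels below m and brightens levels above m:

    import numpy as np

    def contrast_stretch(img, m=128, E=4.0):
        # s = 255 / (1 + (m/r)^E): output is ~0 for r << m, ~255 for r >> m
        r = img.astype(np.float64)
        s = 255.0 / (1.0 + (m / np.maximum(r, 1e-6)) ** E)
        return s.astype(np.uint8)

Larger values of E give a steeper curve around m, approaching simple thresholding.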
2.4 Give some commonly used Grey Level Transformations and their
applications?
Grey level transforms are image enhancement methods that are based only on the intensity of single pixels. The three basic types of functions used frequently for image enhancement are:
1. Linear Functions:
• Negative Transformation
• Identity Transformation
2. Logarithmic Functions:
• Log Transformation
• Inverse-log Transformation
3. Power-Law Functions:
• nth power transformation
• nth root transformation
➢ Identity Function
• Output intensities are identical to input intensities
• This function has no effect on an image; it is included in the graph only for completeness
• Its expression: s=r
➢ Negative Transformation
• The negative of a digital image is obtained by the transformation function s = T(r) = L − 1 − r, shown in the following figure, where L is the number of grey levels. The idea is that the intensity of the output image decreases as the intensity of the input increases. This is useful in numerous applications, such as displaying medical images.
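A one-line NumPy sketch of the negative transformation (assuming an 8-bit image, so L = 256):

    import numpy as np

    def image_negative(img, L=256):
        # s = T(r) = (L - 1) - r: dark regions become bright and vice versa
        return ((L - 1) - img.astype(np.int32)).astype(np.uint8)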
➢ Log Transformation
The general form of the log transformation is s = c log(1 + r), where c is a constant and r ≥ 0.
• Log curve maps a narrow range of low grey-level values in the input image
into a wider range of the output levels.
• Used to expand the values of dark pixels in an image while compressing the
higher-level values.
• It compresses the dynamic range of images with large variations in pixel
values.
➢ Inverse Logarithm Transformation
• Does the opposite of the log transformation
• Used to expand the values of high pixels in an image while compressing the
darker-level values.
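A short NumPy sketch of both transformations (the scaling constant c = 255/log(256) is a common choice to keep the output in [0, 255], not something mandated by the text):

    import numpy as np

    def log_transform(img):
        # s = c log(1 + r): expands dark values, compresses bright ones
        c = 255.0 / np.log(256.0)
        return (c * np.log1p(img.astype(np.float64))).astype(np.uint8)

    def inverse_log_transform(img):
        # The inverse mapping: expands bright values, compresses dark ones
        c = 255.0 / np.log(256.0)
        return np.expm1(img.astype(np.float64) / c).astype(np.uint8)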
➢ Power-law transformations
• Power-law transformations have the basic form s = c·r^γ, where c and γ are positive constants. Different transformation curves are obtained by varying γ (gamma).
A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction. For example, cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with exponents varying from approximately 1.8 to 2.5. With reference to the curve for γ = 2.5, such display systems would tend to produce images that are darker than intended. In addition to gamma correction, power-law transformations are useful for general-purpose contrast manipulation.
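A minimal sketch of the power-law transformation in NumPy (intensities are normalized to [0, 1] before applying the power law; parameter values are illustrative):

    import numpy as np

    def power_law(img, gamma, c=1.0):
        # s = c * r^gamma on intensities normalized to [0, 1]
        r = img.astype(np.float64) / 255.0
        s = c * np.power(r, gamma)
        return np.clip(255.0 * s, 0, 255).astype(np.uint8)

To pre-compensate a CRT-like display with an exponent of 2.5, one would apply power_law(img, gamma=1/2.5) before display.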
➢ Gray-level Slicing
This technique is used to highlight a specific range of grey levels in a given image. It
can be implemented in several ways, but the two basic themes are:
• One approach is to display a high value for all grey levels in the range of interest and a low value for all other gray levels. This transformation, shown on the left in the figure below, produces a binary image.
• The second approach, based on the transformation shown on the right in the figure, brightens the desired range of gray levels but preserves all other gray levels unchanged.
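Both gray-level slicing approaches as a NumPy sketch (function and parameter names are illustrative):

    import numpy as np

    def slice_binary(img, lo, hi, high=255, low=0):
        # Approach 1: high value inside [lo, hi], low value elsewhere (binary)
        return np.where((img >= lo) & (img <= hi), high, low).astype(np.uint8)

    def slice_preserve(img, lo, hi, high=255):
        # Approach 2: brighten [lo, hi], leave all other levels unchanged
        out = img.copy()
        out[(img >= lo) & (img <= hi)] = high
        return out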
➢ Bit-plane Slicing
Pixels are digital numbers, each composed of 8 bits. Plane 0 contains the least significant bit and plane 7 contains the most significant bit. Instead of highlighting a gray-level range, we can highlight the contribution made by each bit. This method is useful in image compression, as the most significant bit planes contain the majority of the visually significant data.
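Extracting a single bit plane is a shift and a mask; a NumPy sketch (scaled by 255 only so the plane is visible when displayed):

    import numpy as np

    def bit_plane(img, plane):
        # plane 0 is the least significant bit, plane 7 the most significant
        return (((img >> plane) & 1) * 255).astype(np.uint8)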
Types of processing:
• Histogram equalization
• Histogram matching (specification)
• Local enhancement
➢ Histogram equalisation (Automatic)
Consider a transformation of the form s = T(r), 0 ≤ r ≤ 1. We assume that T(r) satisfies the following conditions:
• T(r) is single valued and monotonically increasing in the range 0 ≤ r ≤ 1
• 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
Fig: A grey level transformation function that is both single valued and monotonically increasing.
• The requirement in the first condition that T(r) be single valued is needed to guarantee that the inverse transformation will exist, and the monotonicity condition preserves the increasing order from black to white in the output image.
• The second condition guarantees that the output gray levels will be in the same range as the input levels.
The inverse transformation from s back to r is denoted r = T^{-1}(s), 0 ≤ s ≤ 1.
• Let p_r(r) denote the probability density function (pdf) of the variable r
• Let p_s(s) denote the probability density function (pdf) of the variable s
• If p_r(r) and T(r) are known, then p_s(s) can be obtained from

p_s(s) = p_r(r) |dr/ds| … (a)
Consider first the (continuous) transformation

s = T(r) = ∫_0^r p_r(w) dw

i.e., the cumulative distribution function of r. Moreover, if the original image has intensities only within a certain range [r_min, r_max], then T(r_min) = 0 and T(r_max) = 1; therefore, the new intensity s always takes all values within the available range [0, 1]. Differentiating,

ds/dr = dT(r)/dr = (d/dr) ∫_0^r p_r(w) dw = p_r(r) … (b)

so from equation (a)

p_s(s) = p_r(r) |dr/ds| = p_r(r) |1/p_r(r)| = 1, 0 ≤ s ≤ 1

i.e., the output intensities are uniformly distributed. For discrete values we deal with probabilities and summations instead of probability density functions and integrals. The probability of occurrence of gray level r_k in an image is approximated by

p(r_k) = n_k / N, k = 0, 1, 2, …, L − 1

where N is the total number of pixels in the image, n_k is the number of pixels that have gray level r_k, and L is the total number of possible gray levels in the image. Since in practice we deal with digital images, the discrete form of histogram equalization is given by the relation

s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / N
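The discrete relation above translates directly into a lookup table; a minimal NumPy sketch (assuming an 8-bit input image):

    import numpy as np

    def histogram_equalize(img, L=256):
        # s_k = (L - 1) * sum_{j<=k} n_j / N, rounded to the nearest level
        hist = np.bincount(img.ravel(), minlength=L)
        cdf = np.cumsum(hist) / img.size           # running sum of p(r_j)
        lut = np.round((L - 1) * cdf).astype(np.uint8)
        return lut[img]                            # map every pixel through T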
Let p_r(r) and p_z(z) represent the PDFs of r and z, where r is the input pixel intensity level and z is the desired output pixel intensity level. Suppose that histogram equalization is first applied to the original image r:

s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / N

and that the desired image z is equalized in the same way:

v_k = G(z_k) = Σ_{i=0..k} p_z(z_i) = s_k, k = 0, 1, 2, …, L − 1

where N is the total number of pixels in the image, n_k is the number of pixels that have gray level r_k, L is the total number of possible gray levels, and z is the desired (specified) image. Finally,

z_k = G^{-1}[T(r_k)], k = 0, 1, 2, …, L − 1

or z_k = G^{-1}(s_k), k = 0, 1, 2, …, L − 1
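A compact NumPy sketch of histogram matching built from the relations above, using a nearest-index search as the approximate inverse G^{-1} (a common implementation choice, not spelled out in the text):

    import numpy as np

    def histogram_match(img, ref, L=256):
        # T and G are the cumulative histograms of the input and desired images
        T = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size
        G = np.cumsum(np.bincount(ref.ravel(), minlength=L)) / ref.size
        # z_k = G^{-1}(T(r_k)): for each s_k = T(r_k), find the closest G(z)
        lut = np.searchsorted(G, T).clip(0, L - 1).astype(np.uint8)
        return lut[img]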
If the image is simply divided into blocks and each block is histogram-equalized independently, apparent brightness discontinuities appear between blocks. To eliminate this phenomenon, a linear interpolation approach is used, as in the following figure: pixels in the blue-shaded regions are computed using bilinear interpolation between the transformation functions of the surrounding blocks; pixels in the green-shaded regions use linear interpolation; and pixels in the pink-shaded regions use the value from the normal histogram equalization of their own block. That is, for a point in region 00 of the figure (assuming the region has length and width W, and f(00), f(01), f(10), f(11) are the transformation functions of the four surrounding blocks), the transformed pixel value of the point is obtained by bilinear interpolation.
➢ Contrast Limited Adaptive Histogram Equalization (just for reference, not for exams)
Contrast Limited AHE (CLAHE) differs from adaptive histogram equalization in its
contrast limiting. In the case of CLAHE, the contrast limiting procedure is applied to
each neighbourhood from which a transformation function is derived. CLAHE was
developed to prevent the over amplification of noise that adaptive histogram
equalization can give rise to.
2.7 list out some of the Histogram Statistics used for characterization of
digital images.
The purpose of a histogram is to graphically summarize the distribution of a univariate data set. The histogram graphically shows the following:
• center (i.e., the location) of the data
• spread (i.e., the scale) of the data
• skewness of the data
• presence of outliers
• presence of multiple modes in the data
These features provide strong indications of the proper distributional model for the data. A probability plot or a goodness-of-fit test can be used to verify the distributional model.
Image Subtraction: A new image is obtained as a result of the difference between the
pixels in the same location of the two images being subtracted.
C = A − B; i.e., each output pixel is the larger of A − B and zero: C(i,j,:) = max(A(i,j,:) − B(i,j,:), 0).
Image subtraction is widely used for change detection, for example for detecting missing components in product assembly or defects in a PCB. Image subtraction is most beneficial in the area of medical image processing called mask mode radiography.
Image Multiplication: Image multiplication is used to increase the average gray level of
the image by multiplying with a constant. It is used for masking operations. C=A.*B;
Image Division: Image division can be considered as multiplication of one image by the reciprocal of the other image. C=A./B;
Logical Operations:
Logic operations similarly operate on a pixel-by-pixel basis. We need only be
concerned with the ability to implement the AND, OR, and NOT logic operators because
these three operators are functionally complete. When dealing with logic operations on
gray-scale images, pixel values are processed as strings of binary numbers. For
example, performing the NOT operation on a black, 8-bit pixel (a string of eight 0’s)
produces a white pixel (a string of eight 1’s). Intermediate values are processed the same
way, changing all 1’s to 0’s and vice versa.
Logical operations are done on a pixel-by-pixel basis. The AND and OR operations are used for selecting sub-images in an image. This masking operation is referred to as Region-Of-Interest (ROI) processing.
Logical AND: To isolate a region of interest from the rest of the image, logical AND or OR is used. Consider a mask image L for the image A. To obtain the region of interest, D = and(L, A); we can use L & A as well. The resulting image, stored in D, contains the isolated image part.
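An equivalent of the MATLAB fragment above in NumPy (a sketch; L is a binary mask with 1 inside the region of interest, 0 outside):

    import numpy as np

    def roi_and(A, L):
        # Logical AND keeps image values only where the mask is set
        return np.where(L.astype(bool), A, 0)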
In linear spatial filtering (convolution), the kernel is moved across the image; at each position, the products of the kernel coefficients and the pixel values beneath them are summed, and this sum becomes the new value for the pixel currently overlapped with the center of the kernel.
If the kernel is not symmetric, it has to be flipped both around its horizontal and
vertical axis before calculating the convolution as above.
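A direct (unoptimized) 2-D convolution in NumPy, illustrating the flip-slide-and-sum procedure described above; edge padding is one of several possible border choices:

    import numpy as np

    def convolve2d(img, kernel):
        # Flip the kernel about both axes, then slide it over the image
        k = np.flipud(np.fliplr(kernel)).astype(np.float64)
        kh, kw = k.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)), mode='edge')
        out = np.zeros(img.shape, dtype=np.float64)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # Weighted sum under the kernel becomes the new pixel value
                out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
        return out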
A linear filter operates on the pixel values in the support region in a linear manner (i.e., as a weighted summation). The support region is specified by the ‘filter matrix’, represented as H(i,j). The size of H is called the ‘filter region’; the filter matrix has its own coordinate system, where i is the column index and j is the row index. Its center is the origin and is called the ‘hot spot’.
• (i) Averaging filter: It is used to reduce the detail in an image. All coefficients are equal.
• (ii) Weighted averaging filter: In this, pixels are multiplied by different coefficients, with the center pixel multiplied by a higher value than in the average filter.
2. Order Statistics Filter:
It is based on ordering the pixels contained in the image area encompassed by the filter. It replaces the value of the center pixel with the value determined by the ranking result. Edges are better preserved with this kind of filtering.
Types of Order statistics filter:
• (i) Minimum filter: 0th percentile filter is the minimum filter. The value of the
center is replaced by the smallest value in the window.
• (ii) Maximum filter: 100th percentile filter is the maximum filter. The value of
the center is replaced by the largest value in the window.
• (iii) Median filter: Each pixel in the image is considered. First, the neighboring pixels are sorted, and then the original value of the pixel is replaced by the median of the sorted list.
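A minimal NumPy sketch of the median filter (the other order-statistics filters follow by replacing np.median with np.min or np.max):

    import numpy as np

    def median_filter(img, size=3):
        # Replace each pixel by the median of its size-by-size neighborhood
        p = size // 2
        padded = np.pad(img, p, mode='edge')
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.median(padded[i:i + size, j:j + size])
        return out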
Sharpening Spatial Filter: Also known as a derivative filter. The purpose of the sharpening spatial filter is just the opposite of the smoothing spatial filter: its main focus is on the removal of blurring and the highlighting of edges. It is based on first- and second-order derivatives.
2.11 Smoothing Spatial Filters
Smoothing filters are used for blurring and for noise reduction. Blurring is used in pre-
processing steps, such as removal of small details from an image prior to (large) object
extraction, and bridging of small gaps in lines or curves. Noise reduction can be
accomplished by blurring with a linear filter and also by nonlinear filtering
The general implementation for filtering an M×N image with a weighted averaging filter of size m×n (m and n odd) is given by the expression

g(x, y) = Σ_{s=−a..a} Σ_{t=−b..b} w(s, t) f(x + s, y + t) / Σ_{s=−a..a} Σ_{t=−b..b} w(s, t)

where a = (m − 1)/2 and b = (n − 1)/2.
➢ Non-Linear Filters
Noise removal with a smoothing (linear) filter blurs image structures, lines, and edges. Non-linear filters are used to address this problem; they operate in a non-linear manner.
Types of non-linear filters:
Minimum and Maximum Filters: The minimum or maximum value in the moving region R of the original image is the result of the minimum or maximum filter, respectively.
Median Filter: The result is calculated in the same way as for the minimum and maximum filters: the median of all values in the moving region R is the result of the median filter. This filter is typically used to remove salt-and-pepper noise from an image.
Weighted Median Filter: This is similar to the median filter; the word “weighted” means that each member of the kernel is replicated according to a weight matrix W. Filter members near the hot spot are given higher weights than the others.
Because derivatives of any order are linear operations, the Laplacian is a linear
operator. Taking into account that we now have two variables, we use the following
notation for the partial second-order derivatives in the x- and y-directions:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)
∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)

so that the Laplacian is

∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)

This equation can be implemented using the masks shown in the figure below.
Fig: Laplacian kernel, Laplacian kernel that includes the diagonal terms, and two other Laplacian kernels.
Subtracting the Laplacian from the image sharpens it:

g(x, y) = f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)]
        = 5f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)]
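The final sharpening relation above, vectorized in NumPy (edge padding is an implementation choice):

    import numpy as np

    def laplacian_sharpen(img):
        # g = 5 f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]
        f = np.pad(img.astype(np.float64), 1, mode='edge')
        g = (5 * f[1:-1, 1:-1]
             - f[2:, 1:-1] - f[:-2, 1:-1]    # f(x+1, y) and f(x-1, y)
             - f[1:-1, 2:] - f[1:-1, :-2])   # f(x, y+1) and f(x, y-1)
        return np.clip(g, 0, 255).astype(np.uint8)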
Image enhancement techniques are application oriented. There are two basic types of
methods - spatial domain methods and frequency domain methods.
Spatial domain methods: methods that directly modify pixel values, possibly using
intensity information from a neighborhood of the pixel. Examples include image
negatives, contrast stretching, dynamic range compression, histogram specification,
image subtraction, image averaging, and various spatial filters.
What is it good for? -> Smoothing, sharpening, noise removal, and edge detection.
Frequency domain methods: methods that modify the Fourier transform of the image. First, compute the Fourier transform of the image. Then alter the Fourier transform by multiplying it by a filter transfer function. Finally, use the inverse transform to get the modified image (the steps are described later in the text). The key is the filter transfer function; examples include the lowpass filter, highpass filter, and Butterworth filter.
Convolution Theorem: The Fourier Transform is used to convert images from the spatial
domain into the frequency domain and vice-versa. Convolution is one of the most
important concepts in Fourier theory. Mathematically, a convolution is defined as the integral over all space of one function at τ times another function at t − τ:

(f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ = ∫_{−∞}^{∞} g(τ) f(t − τ) dτ

Let ℱ be the operator performing the Fourier transform, so that ℱ(f) = F is the Fourier transform of f (which can be 1-D or 2-D). Then

ℱ(f ∗ g) = ℱ(f) · ℱ(g) = F · G

where · denotes element-by-element multiplication. Also, the Fourier transform of a product is the convolution of the Fourier transforms:

ℱ(f · g) = ℱ(f) ∗ ℱ(g) = F ∗ G

By using the inverse Fourier transform ℱ^{-1}, we can write

ℱ^{-1}(F · G) = f ∗ g and ℱ^{-1}(F ∗ G) = f · g
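The theorem is easy to verify numerically; a 1-D NumPy check (circular convolution, since the DFT is periodic):

    import numpy as np

    N = 64
    f = np.random.rand(N)
    g = np.random.rand(N)

    # Convolution computed through the frequency domain
    via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

    # Circular convolution computed directly from the definition
    direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                       for n in range(N)])

    print(np.allclose(via_fft, direct))  # True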
A question that often arises in the development of frequency domain techniques is the issue of computational complexity: why work in the frequency domain when the same result could be achieved in the spatial domain using small spatial masks? First, the frequency domain carries a significant degree of intuitiveness regarding how to specify filters. The second part of the answer depends on the size of the spatial masks and is usually answered with respect to comparable implementations. For example, running both approaches in software on the same machine, it turns out that the frequency domain implementation runs faster for surprisingly small values of M and N. Also, experiments have shown that some enhancement tasks that would be exceptionally difficult or impossible to formulate directly in the spatial domain become almost trivial in the frequency domain.
An image can be filtered either in the frequency or in the spatial domain. In theory, all frequency filters can be implemented as spatial filters, but in practice frequency filters can only be approximated by a filtering mask in the spatial domain. If a simple mask exists for the desired filter effect, it is computationally less expensive to perform the filtering in the spatial domain. If no straightforward mask can be found in the spatial domain, frequency filtering is more appropriate.
In step 2, the two-dimensional DFT is

F(u, v) = (1/MN) Σ_{x=0..M−1} Σ_{y=0..N−1} f(x, y) e^{−j2π(ux/M + vy/N)}

and its inverse is

f(x, y) = Σ_{u=0..M−1} Σ_{v=0..N−1} F(u, v) e^{j2π(ux/M + vy/N)}
In equation form, the Fourier transform of the filtered image in step 3 is given by G(u, v) = F(u, v)H(u, v), where F(u, v) and H(u, v) denote the Fourier transforms of the input image f(x, y) and the filter function h(x, y), respectively, and G(u, v) is the Fourier transform of the filtered image; the multiplication of the two two-dimensional functions H and F is carried out on an element-by-element basis. The important point to keep in mind is that the filtering process is based on modifying the transform of an image (frequency) in some way via a filter function, and then taking the inverse of the result to obtain the filtered image: filtered image = ℱ^{-1}[G(u, v)].
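The three steps translate into a few lines of NumPy (a sketch; H is assumed to be a centered filter array of the same shape as the image, as produced by the filter constructors given further below):

    import numpy as np

    def filter_frequency_domain(img, H):
        # Step 1: forward DFT, shifted so u = v = 0 lies at the center
        F = np.fft.fftshift(np.fft.fft2(img))
        # Step 2: element-by-element multiplication G = F . H
        G = F * H
        # Step 3: inverse DFT; take the real part as the filtered image
        return np.real(np.fft.ifft2(np.fft.ifftshift(G)))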
The lowpass filters considered here are radially symmetric about the origin. An ideal lowpass filter (ILPF) passes without attenuation all frequencies within a circle of radius r0 from the origin and cuts off all frequencies outside it:

H(u, v) = 1 if D(u, v) ≤ r0, and H(u, v) = 0 if D(u, v) > r0

Fig: (a) Perspective plot of an ideal lowpass-filter transfer function. (b) Function displayed as an image. (c) Radial cross section. Taking the cross section in Figure (c) and extending it as a function of distance from the origin along each radial line gives Figure (a), the perspective plot of an ideal LPF transfer function; Figure (b) is the filter displayed as an image.
The drawback of the ideal lowpass filter function is a ringing effect that occurs along
the edges of the filtered image. In fact, ringing behavior is a characteristic of ILPF (Ideal
Low Pass Filter). As mentioned earlier, multiplication in the Fourier domain
corresponds to a convolution in the spatial domain. Due to the multiple peaks of the
ideal filter in the spatial domain, the filtered image produces ringing along intensity
edges in the spatial domain. The cutoff frequency r0 of the ILPF determines the amount of frequency content passed by the filter. The smaller the value of r0, the more image components are eliminated by the filter. In general, r0 is chosen such that most components of interest are passed through, while most components not of interest are eliminated.
Example: ideal lowpass filtering:
As we can see, the filtered image is blurred, and the ringing becomes more severe as r0 becomes smaller. It is clear from this example that the ILPF is not very practical. The next section introduces a lowpass filter that can blur (smooth) an image with little or no ringing.
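Constructing the ILPF mask in NumPy (centered, for use with the fftshift-based pipeline sketched earlier):

    import numpy as np

    def ideal_lowpass(shape, r0):
        # H(u, v) = 1 inside a circle of radius r0 about the center, else 0
        M, N = shape
        u = np.arange(M) - M / 2
        v = np.arange(N) - N / 2
        D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
        return (D <= r0).astype(np.float64)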
The transfer function of a Butterworth lowpass filter (BLPF) of order n, with cutoff frequency at distance r0 from the origin, is

H(u, v) = 1 / (1 + [D(u, v)/r0]^{2n})

As the order of the BLPF increases, it exhibits the characteristics of the ILPF. See the example below for the difference between two images filtered with different orders but the same cutoff frequency; in fact, an order of 20 already shows ILPF characteristics.
Example: BLPF with different orders but the same cutoff frequency:
The figure shows the comparison between the spatial representations of BLPFs of various orders with a cutoff frequency of 5 pixels, along with the corresponding gray-level profiles through the center of each filter. As we can see, the BLPF of order 1 has no ringing, and order 2 has mild ringing, so this method is more appropriate for image smoothing than the ideal lowpass filter. Ringing in the BLPF becomes significant for higher orders.
C. Gaussian lowpass filter: Gaussian filters are important in many signal processing,
image processing and communication applications. These filters are characterized by
narrow bandwidths, sharp cutoffs, and low overshoots. A key feature of Gaussian filters
is that the Fourier transform of a Gaussian is also a Gaussian, so the filter has the
same response shape in both the spatial and frequency domains. The form of a Gaussian lowpass filter in two dimensions is given by

H(u, v) = e^{−D²(u,v)/2σ²}
where D(u,v) is the distance from the origin in the frequency plane as defined earlier.
The parameter σ measures the spread or dispersion of the Gaussian curve, as seen in the figure below. The larger the value of σ, the larger the cutoff frequency and the milder the filtering.
Fig: (a) Perspective plot of a GLPF transfer function. (b) Function displayed as an image. (c) Radial cross sections for various values of D0.
Letting σ = r0 leads to a more familiar form, matching the previous discussion; the above equation becomes

H(u, v) = e^{−D²(u,v)/2r0²}

When D(u, v) = r0, the filter is down to 0.607 of its maximum value of 1.
Example: Gaussian lowpass filtering:
As mentioned earlier, the Gaussian has the same shape in the spatial and Fourier domains and therefore does not incur a ringing effect in the spatial domain of the filtered image. This is an advantage over the ILPF and BLPF, especially in situations where no artifacts of any type are acceptable, such as medical imaging. In cases where tight control over the transition between low and high frequencies is needed, the Butterworth lowpass filter is a better choice than the Gaussian lowpass filter; the tradeoff, however, is a possible ringing effect.
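Both smoothing filters as centered NumPy masks, matching the formulas above:

    import numpy as np

    def distance_grid(shape):
        # D(u, v): distance from the center of the frequency rectangle
        M, N = shape
        u = np.arange(M) - M / 2
        v = np.arange(N) - N / 2
        return np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)

    def butterworth_lowpass(shape, r0, n):
        # H = 1 / (1 + [D/r0]^(2n)); the order n controls the roll-off
        return 1.0 / (1.0 + (distance_grid(shape) / r0) ** (2 * n))

    def gaussian_lowpass(shape, r0):
        # H = exp(-D^2 / (2 r0^2)); no ringing in the spatial domain
        D = distance_grid(shape)
        return np.exp(-(D ** 2) / (2.0 * r0 ** 2))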
The Butterworth filter is a commonly used discrete approximation to the Gaussian. Applying this filter in the frequency domain shows a result similar to Gaussian smoothing in the spatial domain. The difference is that the computational cost of the spatial filter increases with the standard deviation (i.e., with the size of the filter kernel), whereas the cost of a frequency-domain filter is independent of the filter function. Hence, the Butterworth filter is a better implementation for wide lowpass filters, while the spatial Gaussian filter is more appropriate for narrow lowpass filters.
The transfer function of a Butterworth highpass filter (BHPF) of order n and cutoff frequency r0 is

H(u, v) = 1 / (1 + [r0/D(u, v)]^{2n})

where D(u, v) is defined earlier. The figure below shows a perspective plot, image representation, and cross section of a BHPF. The frequency response does not have a sharp transition as in the IHPF. Comparing examples 5 and 6 with r0 = 18, we can see that the BHPF behaves more smoothly and introduces less distortion than the IHPF; therefore, the BHPF is more appropriate for image sharpening than the IHPF. Less ringing is also introduced for small values of the order n of the BHPF.
The Gaussian highpass filter (GHPF) is H(u, v) = 1 − e^{−D²(u,v)/2σ²}, where D(u, v) is defined earlier as the distance from the origin in the frequency plane. The parameter σ measures the spread or dispersion of the Gaussian curve; the larger the value of σ, the larger the cutoff frequency and the milder the filtering.
The components of the gradient vector itself are linear operators, but the magnitude of
this vector obviously is not because of the squaring and square root operation.
Although it is not strictly correct, the magnitude of the gradient vector often is referred
to as the gradient.
For practical reasons this can be simplified as ∇f ≈ |Gx| + |Gy|.
There is some debate as to how best to calculate these gradients, but we will use the Sobel masks:

∇f ≈ |(z7 + 2z8 + z9) − (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) − (z1 + 2z4 + z7)|

where z1 … z9 are the pixels of the 3×3 neighborhood, numbered left to right, top to bottom.
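The two Sobel sums, vectorized over the whole image in NumPy (rows correspond to x, columns to y):

    import numpy as np

    def sobel_gradient(img):
        # |grad f| ~ |Gx| + |Gy| with the Sobel masks given above
        f = np.pad(img.astype(np.float64), 1, mode='edge')
        gx = ((f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:])        # z7 + 2z8 + z9
              - (f[:-2, :-2] + 2 * f[:-2, 1:-1] + f[:-2, 2:]))  # z1 + 2z2 + z3
        gy = ((f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:])        # z3 + 2z6 + z9
              - (f[:-2, :-2] + 2 * f[1:-1, :-2] + f[2:, :-2]))  # z1 + 2z4 + z7
        return np.abs(gx) + np.abs(gy)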
The Laplacian is often applied to an image that has first been smoothed with something
approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise,
and hence the two variants will be described together here. The operator normally takes
a single gray level image as input and produces another gray level image as output.
The Laplacian of an image with pixel intensity values f(x, y) (the original image) is given by:

∇²f(x, y) = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y²

Since d^n f(x)/dx^n has Fourier transform (ju)^n F(u), the Laplacian corresponds in the frequency domain to the filter

H(u, v) = −[(u − M/2)² + (v − N/2)²]

(with the origin of the transform shifted to the center of the frequency rectangle). Hence, the Laplacian-filtered image in the spatial domain can be obtained by:

∇²f(x, y) = ℱ^{-1}[H(u, v) F(u, v)]
So, how do we use the Laplacian for image enhancement in the spatial domain? The basic way, where g(x, y) is the enhanced image, is

g(x, y) = f(x, y) − ∇²f(x, y)

In the frequency domain, the enhanced image g(x, y) can also be obtained by taking the inverse Fourier transform after applying a single composite mask (filter)

H(u, v) = 1 + [(u − M/2)² + (v − N/2)²]

to the transform of the original image f(x, y):

g(x, y) = ℱ^{-1}{[1 + ((u − M/2)² + (v − N/2)²)] F(u, v)}
Example: Laplacian filtering brings out more detail in the rings of Saturn.
2.21 Explain the concept of unsharp masking and high boost filtering.
A process used for many years in the publishing industry to sharpen images consists of
subtracting a blurred version of an image from the image itself. This process, called
unsharp masking, is expressed as
f_hp(x, y) = f(x, y) − f_lp(x, y)

where f_hp(x, y) denotes the sharpened image obtained by unsharp masking, and f_lp(x, y) is a blurred (lowpass-filtered) version of f(x, y). A slight further generalization of unsharp masking is called high-boost filtering. A high-boost-filtered image, f_hb, is defined at any point (x, y) as
f_hb(x, y) = A·f(x, y) − f_lp(x, y), where A ≥ 1

The above equation can be written as

f_hb(x, y) = (A − 1)·f(x, y) + f(x, y) − f_lp(x, y)
f_hb(x, y) = (A − 1)·f(x, y) + f_hp(x, y)

When A = 1, the high-boost filter reduces to the regular highpass (unsharp masking) result.
In the frequency domain,

F_hp(u, v) = F(u, v) − F_lp(u, v), with F_lp(u, v) = F(u, v) H_lp(u, v)

Therefore, unsharp masking can be implemented directly in the frequency domain by using the composite filter

H_hp(u, v) = 1 − H_lp(u, v)

Similarly, high-boost filtering can be implemented with the composite filter (A ≥ 1)

H_hb(u, v) = (A − 1) + H_hp(u, v)
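A spatial-domain sketch of high-boost filtering in NumPy, using a simple box blur as the lowpass f_lp (any blur would do; A = 1 gives plain unsharp masking, and the parameter values are illustrative):

    import numpy as np

    def high_boost(img, A=1.2, size=3):
        # f_hb = A*f - f_lp = (A - 1)*f + f_hp
        f = img.astype(np.float64)
        p = size // 2
        padded = np.pad(f, p, mode='edge')
        # Box blur via shifted sums: the lowpass component f_lp
        f_lp = sum(padded[i:i + f.shape[0], j:j + f.shape[1]]
                   for i in range(size) for j in range(size)) / size ** 2
        return np.clip(A * f - f_lp, 0, 255).astype(np.uint8)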
An image can be modeled as the product of an illumination component and a reflectance component:

f(x, y) = i(x, y) r(x, y)

Because the Fourier transform of the product of two functions is not separable,

ℱ{f(x, y)} ≠ ℱ{i(x, y)} · ℱ{r(x, y)}

we therefore take the natural logarithm of both sides, z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y), filter Z(u, v) = ℱ{z(x, y)} with H(u, v), take the inverse transform to obtain s(x, y) = i′(x, y) + r′(x, y), and finally exponentiate to recover the filtered image

g(x, y) = e^{s(x, y)} = i₀(x, y) r₀(x, y), where i₀(x, y) = e^{i′(x, y)} and r₀(x, y) = e^{r′(x, y)}
This filtering technique is most famous for removing multiplicative noise and is carried out as shown in the figure.
If the parameters γL and γH are chosen so that γL < 1 and γH > 1, the filter function shown in the figure below tends to decrease the contribution made by the low frequencies (illumination) and amplify the contribution made by the high frequencies (reflectance). A filter function of this type is

H(u, v) = (γH − γL)[1 − e^{−c·D²(u,v)/D0²}] + γL

where c is a constant that controls the sharpness of the slope of the filter function.
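The whole homomorphic pipeline (log, DFT, filter, inverse DFT, exponential) as a NumPy sketch; ln(1 + f) is used instead of ln f to avoid the logarithm of zero, and the parameter values are illustrative:

    import numpy as np

    def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
        z = np.log1p(img.astype(np.float64))        # z = ln(1 + f)
        Z = np.fft.fftshift(np.fft.fft2(z))
        M, N = img.shape
        u = np.arange(M) - M / 2
        v = np.arange(N) - N / 2
        D2 = u[:, None] ** 2 + v[None, :] ** 2      # D^2(u, v), centered
        H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
        s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
        return np.expm1(s)                          # undo the logarithm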
QUESTION BANK
14. Explain the concept of unsharp masking and high boost filtering.
FURTHER READING