
Course Outcomes

Digital Image Processing
Course by – Smita S. Darbastwar

CO 1.
Students will be able to describe the basic concepts and scope of digital image processing, and
the role of image processing in a variety of applications.

CO 2.
Students will be able to describe different techniques in color image processing, image
segmentation, and object recognition in images.

CO 3.
Students will be able to illustrate the relationships between pixels and arithmetic operations on
images.

CO 4.
Students will be able to analyze the mathematical principles of digital image enhancement.
Introduction
• Digital image processing deals with the
manipulation of digital images through a
digital computer.
• It is a subfield of signals and systems, but it focuses
particularly on images.
• DIP focuses on developing a computer system that
is able to perform processing on an image. The
input of that system is a digital image; the
system processes that image using efficient
algorithms and gives an image as an output.
• The most common example is Adobe Photoshop, one
of the most widely used applications for processing
digital images.
Introduction

• Signal processing is a discipline in electrical
engineering and mathematics that deals
with the analysis and processing of analog and
digital signals, and with storing, filtering,
and other operations on signals. These
signals include transmission signals, sound or
voice signals, image signals, and others.
Introduction
• Out of all these signals, the field that deals
with signals for which the input is
an image and the output is also an image is
image processing. As its name
suggests, it deals with the processing of
images.
• It can be further divided into analog image
processing and digital image processing.
Analog image processing

• Analog image processing is done on analog
signals. It includes processing of two-dimensional
analog signals. In this type of processing, the
images are manipulated by electrical means, by
varying the electrical signal. A common example
is the television image.
• Digital image processing has dominated
analog image processing over time,
due to its wider range of applications.
Digital image processing

• Digital image processing deals with
developing a digital system that performs
operations on a digital image.
What is an Image?

• An image is nothing more than a two-
dimensional signal. It is defined by the
mathematical function f(x, y), where x and y are
the two coordinates, horizontal and
vertical.
• The value of f(x, y) at any point gives the
pixel value at that point of the image.
128  30 123
232 123 321
123  77  89
 80 255 255

The above figure is an example of a digital image like the ones you view on your computer screen. Actually,
this image is nothing but a two-dimensional array of numbers ranging between 0 and 255.
Each number represents the value of the function f(x, y) at that point. Here the values 128, 30, 123, and so on each
represent an individual pixel value. The dimensions of the picture are the dimensions of this two-
dimensional array.
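As a rough sketch (assuming NumPy, which the slides do not mention), the same figure could be stored and inspected as a two-dimensional array; the values are copied from the figure above:

    import numpy as np

    # The sample 4 x 3 image from the figure above, stored as a 2-D array.
    # Note: a real 8-bit image is limited to 0-255, so the value 321 from
    # the figure would not fit in a uint8 array.
    image = np.array([[128,  30, 123],
                      [232, 123, 321],
                      [123,  77,  89],
                      [ 80, 255, 255]])

    print(image.shape)   # (4, 3) -> the dimensions of the picture
    print(image[0, 2])   # 123   -> value of f(x, y) at row 0, column 2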
Scope of digital image processing
• Since digital image processing has very wide
applications and almost all technical
fields are impacted by DIP, we will discuss only
some of the major applications of DIP.
• Digital image processing is not just limited to
adjusting the spatial resolution of everyday
images captured by a camera. It is not just
limited to increasing the brightness of a
photo, etc. Rather, it is far more than that.
Electromagnetic waves
• Electromagnetic waves can be thought of as a
stream of particles, where each particle is
moving with the speed of light. Each particle
contains a bundle of energy. This bundle of
energy is called a photon.
• The electromagnetic spectrum, arranged according to
the energy of the photon, is shown below.
Electromagnetic spectrum
• In this electromagnetic spectrum, we are only able to see the
visible spectrum.
• The visible spectrum mainly includes seven different colors,
commonly termed VIBGYOR.
• VIBGYOR stands for violet, indigo, blue, green, yellow,
orange and red.
• But that does not nullify the existence of the other parts of the
spectrum. The human eye can only see the visible portion, in
which we see all objects. But a camera can see other
things that the eye is unable to see,
for example X-rays, gamma rays, etc. Hence the analysis of
all of these is also done in digital image processing.
Applications of Digital Image Processing

• Some of the major fields in which digital image processing is widely
used are mentioned below:
• Image sharpening and restoration
• Medical field
• Remote sensing
• Transmission and encoding
• Machine/Robot vision
• Color processing
• Pattern recognition
• Video processing
• Microscopic imaging
• Others
How does the human eye work?

• Before, the image formation on analog and


digital cameras , consider the image formation
on human eye. Because the basic principle
that is followed by the cameras has been
taken from the way , the human eye works.
• When light falls upon the particular object , it
is reflected back after striking through the
object.
• The rays of light when passed through the lens of
eye , form a particular angle , and the image is formed
on the retina which is the back side of the wall.
• The image that is formed is inverted. This image is
then interpreted by the brain and that makes us able
to understand things.
• Due to angle formation , we are able to perceive the
height and depth of the object we are seeing. This has
been more explained in the tutorial of perspective
transformation
• As you can see in the above figure, when
sunlight falls on the object (in this case the
object is a face), it is reflected back, and the
different rays form different angles when they
pass through the lens, and an inverted
image of the object is formed on the
back wall. The last portion of the figure
denotes that the object has been interpreted
by the brain and re-inverted.
• Image formation on digital cameras
• In digital cameras, image formation is
not due to a chemical reaction;
rather, it is a bit more complex than
that.
• In a digital camera, a CCD array of sensors is
used for image formation.
Terminology
• Pixel
• A pixel is the smallest element of an image.
• Each pixel corresponds to one value.
• In an 8-bit gray scale image, the value of a pixel lies
between 0 and 255.
• The value of a pixel at any point corresponds to the
intensity of the light photons striking at that point.
• Each pixel stores a value proportional to the light
intensity at that particular location.
Terminology
• Gray level
• The value of the pixel at any point denotes the intensity of the
image at that location, and that is also known as the gray level.

• Pixel value 0
• Each pixel can have only one value, and each value
denotes the intensity of light at that point of the image.
• The value 0 means absence of light. It means that 0 denotes
dark; whenever a pixel has a value of 0,
the color black is formed at that point.
Terminology
• Look at this image matrix:

  0 0 0
  0 0 0
  0 0 0

• This image matrix is filled entirely with 0s: all the pixels
have a value of 0. If we were to calculate the total number
of pixels from this matrix, this is how we would do it:
• Total no. of pixels = total no. of rows X total no. of columns
• = 3 X 3
• = 9
Terminology
• Bpp, or bits per pixel, denotes the number of
bits per pixel.
• The number of different colors in an image
depends on the color depth, or bits per
pixel.
Some terms
• Bits in mathematics:
• It is just like playing with binary bits.
• How many numbers can be represented by one bit?
• 0
• 1
• How many two-bit combinations can be made?
• 00
• 01
• 10
• 11
• If we devise a formula for the total number of combinations that
can be made from bits, it would be:
• Number of combinations = 2^bpp
• where bpp denotes bits per pixel. Put 1 in the formula and you get 2; put 2 in the
formula and you get 4. It grows exponentially.
Some terms
• Number of different colors:
• As we said at the beginning, the number of different colors depends on
the number of bits per pixel.
• The table for some bit depths and their number of colors is given below.
• 1 bpp    2 colors
• 2 bpp    4 colors
• 3 bpp    8 colors
• 4 bpp    16 colors
• 5 bpp    32 colors
• 6 bpp    64 colors
• 7 bpp    128 colors
• 8 bpp    256 colors
• 10 bpp   1,024 colors
• 16 bpp   65,536 colors
• 24 bpp   16,777,216 colors (16.7 million colors)
• 32 bpp   4,294,967,296 colors (about 4,294 million colors)
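A quick illustrative snippet (Python is assumed here purely for illustration) that reproduces the table above directly from the 2^bpp formula:

    # Number of distinct colors representable with a given bit depth: 2 ** bpp.
    for bpp in (1, 2, 3, 4, 5, 6, 7, 8, 10, 16, 24, 32):
        print(f"{bpp:>2} bpp -> {2 ** bpp:,} colors")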
Some terms
• Shades
• You can easily notice the pattern of exponential growth. The
famous gray scale image uses 8 bpp, which means it has 256 different
shades of gray in it.

• Color images are usually in the 24 bpp format, or 16 bpp.

• We will see more about other color formats and image types in the
tutorial on image types.
• A pixel value of 0 always denotes the color black. But there is no single fixed
value that denotes white; it depends on the bit depth.
• White color:
• The value that denotes white can be calculated as:
• White = 2^bpp - 1 (for example, 255 in an 8 bpp image)
Image storage requirements

• After the discussion of bits per pixel, we now have
everything we need to calculate the size of an
image.
• Image size
• The size of an image depends on three things:
• Number of rows
• Number of columns
• Number of bits per pixel
• The formula for calculating the size (in bits) is given below.
• Size of an image = rows * cols * bpp
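A minimal sketch of the size formula in code; the 1024 x 1024, 8 bpp example values are assumptions chosen only to illustrate the calculation:

    def image_size_bits(rows, cols, bpp):
        """Storage needed for an uncompressed image: rows * cols * bpp bits."""
        return rows * cols * bpp

    # Hypothetical example: a 1024 x 1024 grayscale (8 bpp) image.
    bits = image_size_bits(1024, 1024, 8)
    print(bits)        # 8388608 bits
    print(bits // 8)   # 1048576 bytes, i.e. 1 MB uncompressed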
The binary image

• The binary image, as its name states, contains only two pixel values:
• 0 and 1.
• Here 0 refers to black and 1 refers to white. It is also
known as a monochrome image.
• Black and white image:
• The resulting image therefore consists of only black
and white, and thus can also be called a black and white
image.
• No gray levels
• One of the interesting things about a binary image is that there are
no gray levels in it. Only two colors, black and white, are
found in it.
• Format
• Binary images are commonly stored in the PBM (Portable Bit Map) format.
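As an illustrative sketch (assuming NumPy and an arbitrary threshold of 128, neither of which comes from the slides), a grayscale array can be reduced to a binary image containing only the pixel values 0 and 1:

    import numpy as np

    gray = np.array([[ 12, 200,  90],
                     [255,  40, 130],
                     [  0, 180,  65]], dtype=np.uint8)

    # Thresholding: pixels at or above the threshold become 1 (white),
    # the rest become 0 (black), so only two pixel values remain.
    threshold = 128
    binary = (gray >= threshold).astype(np.uint8)
    print(binary)
    # [[0 1 0]
    #  [1 0 1]
    #  [0 1 0]]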
• 2, 3, 4, 5, 6 bit color formats
• Images with a color format of 2, 3, 4, 5 or 6 bits are not widely used today.
They were used in earlier times for old TV or monitor displays.
• But each of these formats has more than two gray levels, and hence contains gray
shades, unlike the binary image.
• In a 2-bit format 4, in a 3-bit 8, in a 4-bit 16, in a 5-bit 32, and in a 6-bit 64 different
colors are present.
• 8 bit color format
• The 8 bit color format is one of the most famous image formats. It has 256 different
shades of colors in it. It is commonly known as the grayscale format.
• The range of colors in 8 bits varies from 0 to 255, where 0 stands for black,
255 stands for white, and 127 stands for mid gray.
• This format was used initially by early models of the UNIX operating system and
the early color Macintoshes.
• A grayscale image of Einstein is shown below:
• 16 bit color format
• It is a color image format. It has 65,536 different colors in it. It is
also known as the High color format.
• It has been used by Microsoft in systems that support
more than the 8 bit color format. This 16 bit format and the
next format we are going to discuss, the 24 bit format, are
both color formats.
• The distribution of color in a color image is not as simple as it
was in a grayscale image.
• A 16 bit format is actually divided into three further channels:
Red, Green and Blue, the famous RGB format.
• It is pictorially represented in the image below.
• 24 bit color format
• The 24 bit color format is also known as the true color
format. Like the 16 bit color format, in a 24 bit
color format the 24 bits are again distributed
among three different channels of Red, Green and
Blue.
• Since 24 is evenly divisible by 8, the bits have been
distributed equally between the three different
color channels.
• Their distribution is like this:
• 8 bits for R, 8 bits for G, 8 bits for B.
• Behind a 24 bit image
• Unlike an 8 bit gray scale image, which has one
matrix behind it, a 24 bit image has three
different matrices for R, G and B.
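A small sketch (assuming NumPy) of the three matrices behind a 24 bit image; the 2 x 2 pixel values are made up for illustration:

    import numpy as np

    # A tiny 2 x 2 "24 bit" image: rows x cols x 3, one 8-bit value
    # per channel (R, G, B) at every pixel.
    rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                    [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

    # The three matrices behind the image:
    red   = rgb[:, :, 0]
    green = rgb[:, :, 1]
    blue  = rgb[:, :, 2]
    print(red)
    # [[255   0]
    #  [  0 255]]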
• Different color codes
• All the colors here are in the 24 bit format,
which means each color has 8 bits of red, 8 bits
of green, and 8 bits of blue in it. In other words,
each color has three different portions. You
just have to change the quantities of these
three portions to make any color.
• Binary color format
• Color: Black    Decimal code: (0, 0, 0)
• Color: White    Decimal code: (255, 255, 255)
• RGB color model:
• Color: Red      Decimal code: (255, 0, 0)
• Color: Green    Decimal code: (0, 255, 0)
• Color: Blue     Decimal code: (0, 0, 255)
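An illustrative sketch (assuming NumPy; the helper name solid_color is hypothetical) showing how the decimal codes above translate into 24 bit pixels:

    import numpy as np

    def solid_color(rows, cols, code):
        """Create a rows x cols 24 bit image filled with one (R, G, B) code."""
        img = np.zeros((rows, cols, 3), dtype=np.uint8)
        img[:, :] = code          # broadcast the (R, G, B) code to every pixel
        return img

    red   = solid_color(2, 2, (255, 0, 0))
    white = solid_color(2, 2, (255, 255, 255))
    print(red[0, 0])     # [255   0   0]
    print(white[0, 0])   # [255 255 255]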
Applications of DIP
(Example images illustrating applications of DIP.)
What Is Digital Image Processing?
• The field of digital image processing refers to
processing digital images by means of a digital
computer.

What is a Digital Image?
• An image may be defined as a two-dimensional
function, f(x, y), where x and y are spatial (plane)
coordinates, and the amplitude of f at any pair of
coordinates (x, y) is called the intensity or gray level of
the image at that point.

• When x, y, and the amplitude values of f are all finite,
discrete quantities, we call the image a digital image.
Picture elements, image elements, pels, and pixels
• A digital image is composed of a finite number of
elements, each of which has a particular location and
value.
• These elements are referred to as picture elements,
image elements, pels, and pixels.

• Pixel is the term most widely used to denote the
elements of a digital image.
The Origins of Digital Image Processing
• One of the first applications of digital images was in
the newspaper industry, when pictures were first sent
by submarine cable between London and New York.

• Specialized printing equipment coded pictures for
cable transmission and then reconstructed them at
the receiving end.
• Figure was transmitted in this way and reproduced on
a telegraph printer fitted with typefaces simulating a
halftone pattern.

• The initial problems in improving the visual quality of
these early digital pictures were related to the
selection of printing procedures and the distribution
of intensity levels.
• The printing technique based on photographic
reproduction made from tapes perforated at the
telegraph receiving terminal was introduced in 1921.

• Figure shows an image obtained using this method.

• The improvements are in tonal quality and in resolution.
• The early Bartlane systems were capable of coding
images in five distinct levels of gray.
• This capability was increased to 15 levels in 1929.

• Figure is typical of the type of images that could be
obtained using the 15-tone equipment.
• Figure shows the first image of the moon taken by the
Ranger 7 spacecraft.
Examples of fields that use digital image processing
• Gamma-ray imaging
• X-ray imaging
• Imaging in the ultraviolet band
• Imaging in the visible and infrared bands
(Example images for each band.)
Fundamental Steps in
Digital Image Processing
Components of an Image Processing System
A Simple Image Formation Model
• Images are represented by two-dimensional functions of the form f(x, y).

• The value or amplitude of f at spatial coordinates (x, y)
gives the intensity (brightness) of the image at that
point.

• As light is a form of energy, f(x, y) must be nonzero and
finite.
• The function f(x, y) may be characterized by two
components:
(1) the amount of source illumination incident on the
scene being viewed;
(2) the amount of illumination reflected by the objects
in the scene.
• These are called the illumination and reflectance
components and are denoted by i(x, y) and r(x, y),
respectively.
• The two functions combine as a product to
form f(x, y):

f(x, y) = i(x, y) r(x, y)

r(x, y) = 0 --- total absorption
r(x, y) = 1 --- total reflection
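A minimal numeric sketch of this model (assuming NumPy; the illumination and reflectance values are arbitrary and chosen only for illustration):

    import numpy as np

    # Illumination i(x, y) and reflectance r(x, y) for a tiny 2 x 2 scene.
    i = np.array([[100.0, 100.0],
                  [ 90.0,  80.0]])    # incident illumination
    r = np.array([[0.0, 0.5],
                  [1.0, 0.2]])        # 0 = total absorption, 1 = total reflection

    f = i * r                          # f(x, y) = i(x, y) * r(x, y)
    print(f)
    # [[ 0. 50.]
    #  [90. 16.]]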

• The intensity of a monochrome image f at any
coordinates (x0, y0) is called the gray level (l) of the image at
that point.

That is, l = f(x0, y0), and l lies in the range
Lmin ≤ l ≤ Lmax.
GRAY SCALE
• The interval [Lmin, Lmax] is called the gray scale.

• Common practice is to shift this interval numerically to
the interval [0, L-1],

• where l = 0 is considered black and
l = L-1 is considered white on the gray scale.

All intermediate values are shades of gray varying from
black to white.
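As a sketch (assuming NumPy and L = 256; the function name to_gray_scale is hypothetical), shifting an arbitrary intensity range to the gray scale [0, L-1] could look like this:

    import numpy as np

    def to_gray_scale(f, L=256):
        """Shift an image's intensity range [Lmin, Lmax] to [0, L-1] (L up to 256)."""
        f = f.astype(np.float64)
        f_min, f_max = f.min(), f.max()
        scaled = (f - f_min) / (f_max - f_min) * (L - 1)
        return np.round(scaled).astype(np.uint8)

    f = np.array([[0.2, 0.8],
                  [0.5, 1.0]])
    print(to_gray_scale(f))
    # [[  0 191]
    #  [ 96 255]]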
Basic Relationships Between Pixels
• 1. Neighbors of a Pixel:
A pixel p at coordinates (x, y) has four horizontal and vertical neighbors
whose coordinates are given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).

• This set of pixels, called the 4-neighbors of p, is denoted
by N4(p).

• Each pixel is a unit distance from (x, y), and some of the
neighbors of p lie outside the digital image if (x, y) is on
the border of the image.
ND(p) and N8(p)
• The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).

• These points, together with the 4-neighbors, are called the 8-
neighbors of p, denoted by N8(p).

• Some of the points in ND(p) and N8(p) fall outside the image if
(x, y) is on the border of the image.
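A small sketch of these neighborhoods in Python (the helper names n4, nd and n8 are illustrative, not from the slides); handling of neighbors that fall outside the image is left to the caller, as noted above:

    def n4(p):
        """4-neighbors of a pixel p = (x, y)."""
        x, y = p
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def nd(p):
        """Diagonal neighbors of p."""
        x, y = p
        return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    def n8(p):
        """8-neighbors of p: the 4-neighbors together with the diagonal neighbors."""
        return n4(p) + nd(p)

    print(n4((2, 3)))   # [(3, 3), (1, 3), (2, 4), (2, 2)]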

Adjacency, Connectivity, Regions, and Boundaries
• To establish whether two pixels are connected, it
must be determined if they are neighbors and
• if their gray levels satisfy a specified criterion of
similarity (say, if their gray levels are equal).

• For instance, in a binary image with values 0 and 1,
two pixels may be 4-neighbors,
• but they are said to be connected only if they have
the same value.
• Let V be the set of gray-level values used to define
connectivity. In a binary image, V = {1} for the connectivity
of pixels with value 1.

• In a grayscale image, for the connectivity of pixels with a
range of intensity values (say 32 to 64), V typically contains
more elements.

• For example, in the adjacency of pixels with a range of possible gray-
level values 0 to 255,
• the set V could be any subset of these 256 values.
• We consider three types of adjacency:
(a) 4-adjacency.
Two pixels p and q with values from V are 4-adjacent if q is in the
set N4(p).

(b) 8-adjacency.
Two pixels p and q with values from V are 8-adjacent if q is in the
set N8(p).

(c) m-adjacency (mixed adjacency).
Two pixels p and q with values from V are m-adjacent if
• (i) q is in N4(p), or
• (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
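A sketch of these adjacency tests, reusing the n4/nd/n8 helpers above and assuming img is a NumPy array and V a set of gray-level values (all names are illustrative, not from the slides):

    import numpy as np

    def is_4_adjacent(p, q, img, V):
        """p and q (with values from V) are 4-adjacent if q is in N4(p)."""
        return img[p] in V and img[q] in V and q in n4(p)

    def is_8_adjacent(p, q, img, V):
        """p and q (with values from V) are 8-adjacent if q is in N8(p)."""
        return img[p] in V and img[q] in V and q in n8(p)

    def is_m_adjacent(p, q, img, V):
        """Mixed adjacency: q in N4(p), or q in ND(p) while no shared
        in-bounds 4-neighbor of p and q has a value in V."""
        if img[p] not in V or img[q] not in V:
            return False
        if q in n4(p):
            return True
        if q in nd(p):
            shared = set(n4(p)) & set(n4(q))
            return all(img[s] not in V for s in shared
                       if 0 <= s[0] < img.shape[0] and 0 <= s[1] < img.shape[1])
        return False

    img = np.array([[0, 1],
                    [1, 1]])
    V = {1}
    print(is_8_adjacent((0, 1), (1, 0), img, V))  # True: diagonal neighbors
    print(is_m_adjacent((0, 1), (1, 0), img, V))  # False: shared 4-neighbor (1, 1) is in V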


• A path from pixel p with coordinates (x, y) to
pixel q with coordinates (s, t) is a sequence of
distinct pixels with coordinates

(x0, y0), (x1, y1), ..., (xn, yn),

• where (x0, y0) = (x, y) and (xn, yn) = (s, t), and
pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
In this case, n is the length of the path.

• If (x0, y0) = (xn, yn), the path is a closed path.
• Two pixels p and q are said to be connected in S if
there exists a path between them consisting entirely
of pixels in S.

• For any pixel p in S, the set of pixels that are
connected to it in S is called a connected component
of S.
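A sketch of finding the connected component of a pixel p, assuming a NumPy binary image, 4-connectivity and V = {1}; the breadth-first traversal used here is one possible implementation, not the only one:

    from collections import deque
    import numpy as np

    def connected_component(img, p, V=frozenset({1})):
        """All pixels connected to p in img (4-connectivity), values taken from V."""
        if img[p] not in V:
            return set()
        component, frontier = {p}, deque([p])
        while frontier:
            x, y = frontier.popleft()
            for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= q[0] < img.shape[0] and 0 <= q[1] < img.shape[1]
                        and q not in component and img[q] in V):
                    component.add(q)
                    frontier.append(q)
        return component

    img = np.array([[1, 1, 0],
                    [0, 1, 0],
                    [0, 0, 1]])
    print(sorted(connected_component(img, (0, 0))))
    # [(0, 0), (0, 1), (1, 1)] -- pixel (2, 2) belongs to a separate component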

Relations, equivalence
• A binary relation R on a set A is a set of pairs of
elements from A. If the pair (a, b) is in R, the notation
used is aRb (i.e. a is related to b).

• Example: consider the set of points A = {p1, p2, p3, p4} arranged as in the
figure, where p1 is a 4-neighbor of p2 and of p3, while p4 is not a
4-neighbor of any other point.
• In this case R is the set of pairs of points from A that are 4-
connected, that is, R = {(p1, p2), (p2, p1), (p1, p3), (p3, p1)}.

• Thus p1 is related to p2 and p1 is related to p3, and vice
versa, but p4 is not related to any other point under
the relation.
Reflexive - Symmetric - Transitive
• Reflexive
if for each a in A, aRa

• Symmetric
if for each a and b in A, aRb implies bRa

• Transitive
if for a, b and c in A, aRb and bRc implies aRc

A relation satisfying all three properties is called
an equivalence relation.
Distance Measures
• For pixels p, q, and z, with coordinates (x, y), (s, t), and
(u, v) respectively, D is a distance function or metric if
(a) D(p, q) ≥ 0 (D(p, q) = 0 if and only if p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2).
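A sketch of the Euclidean distance in code; the city-block (D4) and chessboard (D8) distances are included as commonly used companion metrics and are an assumption, since the slide stops at the Euclidean distance:

    import math

    def d_euclidean(p, q):
        """De(p, q) = sqrt((x - s)^2 + (y - t)^2)."""
        (x, y), (s, t) = p, q
        return math.hypot(x - s, y - t)

    def d4(p, q):
        """City-block distance: |x - s| + |y - t| (assumed, not in the slide)."""
        (x, y), (s, t) = p, q
        return abs(x - s) + abs(y - t)

    def d8(p, q):
        """Chessboard distance: max(|x - s|, |y - t|) (assumed, not in the slide)."""
        (x, y), (s, t) = p, q
        return max(abs(x - s), abs(y - t))

    print(d_euclidean((0, 0), (3, 4)), d4((0, 0), (3, 4)), d8((0, 0), (3, 4)))
    # 5.0 7 4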
