Digital Image Processing_Unit 1_ASRao

The document provides an overview of digital image processing, detailing its definitions, types, advantages, and disadvantages. It covers the fundamental steps involved in processing images, including acquisition, enhancement, restoration, and segmentation, as well as applications in various fields like medicine and law enforcement. Additionally, it discusses different types of images, file formats, and the historical development of digital image processing techniques.

Allanki Sanyasi Rao

AMIE; M.Tech; (Ph.D); MISTE; MIETE


Associate Professor & HOD
Dept. of ECE
Pre-Requisites / Prior Knowledge

Text Books
Introduction

An image is a pictorial representation of something or someone.
Image
 2-dimensional function f(x, y)
 It is a projection of a 3D scene onto a 2D plane

 x and y are spatial coordinates

 Amplitude of f at any (x, y) coordinates represents the intensity level at that point
 An image is a two-dimensional function that represents a measure of
some characteristic such as brightness or color of a viewed scene.
Light → Lens → Image Sensor → Processor → Digital Image → Memory Card
 An image may be defined as a two-dimensional function f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point. The term gray level is often used to refer to the intensity of monochrome images.

 Color images are formed by a combination of individual 2-D images. For example, in the RGB color system, a color image consists of three individual component images (red, green and blue).
Types of Image

1. Analog Image
2. Digital Image

 Image: A two-dimensional function f(x,y), where x and y are spatial (plane) coordinates, and f at any pair of coordinates (x,y) is called the intensity or gray level of the image at that point.
 When the above mathematical representation has a continuous range of values representing position and intensity, the image is an analog image.
 E.g. the image produced on the screen of a CRT monitor
 When x, y and f are all finite, discrete quantities, the image is a digital image.
 Contains finite number of elements, each of which has a particular
location and value.
 These elements are called picture elements, image elements, pels or
pixels.
 Pixels are the smallest units of a digital image; each represents the brightness at its location.
 Pixel Values: The magnitude of the electromagnetic energy (or
intensity) captured in a digital image is represented by positive digital
numbers.
 Remember, digitization implies that a digital image is an approximation of a real scene.
Image as a Matrix

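The matrix view can be made concrete in a few lines of code. Below is a minimal NumPy sketch (the array values are illustrative assumptions, not taken from the slides) showing a grayscale image as a 2-D array of intensities and an RGB image as three stacked component planes:

```python
import numpy as np

# A tiny 3x3 grayscale image: each entry is one pixel's intensity (0-255)
gray = np.array([[  0, 128, 255],
                 [ 64, 192,  32],
                 [255,   0, 128]], dtype=np.uint8)

print(gray.shape)   # (3, 3): M rows x N columns
print(gray[0, 1])   # 128: the intensity at row 0, column 1

# An RGB color image is three 2-D component images stacked along a third axis
rgb = np.stack([gray, gray, gray], axis=-1)   # shape (3, 3, 3)
red_plane = rgb[:, :, 0]                      # the red component image
```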
Image Representation

Advantages of Digital Images

 Fast processing
 Cost effective
 Effective storage
 Efficient transmission
 Scope for versatile image manipulation

Disadvantages of Digital Images

 Good-quality images require high memory
 Hence a fast processor is required
Digital Image Processing
 To process and analyze digital images using algorithms or mathematical models by means of digital computers, in order to improve their quality.

 Ex.: Image enhancement, restoration etc.


Motivation behind Digital Image Processing

 Due to two principal application areas:

o Improvement of pictorial information for human interpretation
o Processing of image data for storage, transmission and representation for machine perception
Three Levels of Digital Image Processing

 Low-Level Processing
 Middle-Level Processing
 High-Level Processing
Types of Digital Image Processing
Low Level Processing
 Input & Output both are images
 Image preprocessing for noise reduction, contrast enhancement, image
sharpening.
Mid Level Processing
 Input: Image
 Output: Image attributes - edges, contours, recognition of objects etc.
 Ex: segmentation, edges of image, identification
High Level Processing
 Input: attributes extracted from images
 Image analysis, vision related cognitive operations.
 Ex.: Automatic Character Recognition, missile recognition, Computer
Vision , autonomous navigation
Applications of Digital Image Processing

 Biometrics
 Vehicle number plate detection
 Content-Based Image Retrieval (CBIR)
 Steganography
 Medical imaging
 Object recognition
 Image enhancement and noise removal
History of Digital Image Processing

Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer

Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
1960s: Improvements in computing technology and the onset of the space
race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of
images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space
missions including the Apollo landings

1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerized Axial Tomography (CAT) scans
1980s - Today: The use of digital image processing techniques has
exploded and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualization
– Industrial inspection
– Law enforcement
– Human computer interfaces
Examples: Image Enhancement

One of the most common uses of DIP techniques: improve quality, remove noise etc.
Examples: The Hubble Telescope

 Launched in 1990, the Hubble telescope can take images of very distant objects.
 However, a flaw in its mirror made many of Hubble's images useless.
 Image processing techniques were used to fix this.
Examples: Artistic Effects

 Artistic effects are used to make images more visually appealing, to add special effects and to make composite images
Examples: Medicine

 Take a slice from an MRI scan of a canine heart, and find the boundaries between types of tissue
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges
Examples: GIS

Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
 Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done
using this data
Examples: Industrial Inspection

 Human operators are expensive, slow and unreliable
 Make machines do the job instead
 Industrial vision systems are used
in all kinds of industries
 Can we trust them?
Examples: PCB Inspection

 Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all components are
present and that all solder joints are acceptable
– Both conventional imaging and x-ray imaging
Examples: Law Enforcement

 Image processing techniques are used extensively by law enforcers
– Number plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
Examples: HCI

 Try to make human computer interfaces more natural
– Face recognition
– Gesture recognition
 Does anyone remember the user
interface from “Minority Report”?
 These tasks can be extremely difficult
Types of Images
Binary Image
 1 – Bit Image
 0: Black 1: White

Gray-Scale Image
 8 bits/pixel data: 2^8 = 256 shades of gray
 0: Black 255: White
 Monochrome Images
RGB Image (Color Image)
 24 bits/pixel - 3 band color Image, 8 bits for each color
 8 – bit color format: Gray Scale Image
 16 – bit color format: High color format
 24 – bit color format: True color format
Multispectral Images
 7 to 10 Spectral bands
 Ex. Remote sensing images: TIFF, GIF images
Hyperspectral Images
 Hundreds or thousands of Spectral bands
 To capture wavelengths beyond human vision
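As a rough illustration of how these types relate, the sketch below converts a 24-bit RGB array to an 8-bit grayscale image and then to a 1-bit binary image. The luminance weights (ITU-R BT.601) and the threshold of 128 are common conventions assumed here, not values from the slides:

```python
import numpy as np

rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # 24-bit color

# 8 bits/pixel grayscale: weighted sum of the R, G, B planes
gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

# 1-bit binary image: 0 = black, 1 = white, via a global threshold
binary = (gray >= 128).astype(np.uint8)
```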
Types of Image Files
[ File format (Header + Image data) to represent the digital images ]

Raster Image Files
- Constructed by using a series of pixels to form an image
- Examples: JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), GIF (Graphics Interchange Format), TIFF (Tagged Image File Format)
- Resizing is possible, but with loss of detail and sharpness
- Larger in file size
- Quicker to display
- Resolution dependent

Vector Image Files
- Constructed using proportional formulas
- Examples: PSD (Photoshop Document), EPS (Encapsulated PostScript), PDF (Portable Document Format)
- Resizing is possible with no loss of detail
- Smaller in file size
- Slower to display
- Resolution independent
Image File Format
JPEG (Joint Photographic Experts Group) or JPG .jpeg / .jpg 8 - bits
 It is the most widely used lossy compression format.
 Users can specify the level of compression

PNG (Portable Network Graphics) .png 8-bits (or) 24 -bits


 Lossless image format
 Sharper web quality images

GIF (Graphics Interchange Format) .gif 8 - bits


 Lossless image format
 Supports both animated & static images
 Used for webpage ads
 Less file size
TIFF (Tagged Image File Format) .tiff
 A large raster image format
 Lossless compression (or) original image data is maintained

BMP (Bitmap Image File) .bmp


 Developed by Microsoft for Windows
 No compression loss i.e., uncompressed image

Raw file type .raw


A Simple Image Model
 An image is denoted by a two dimensional function of the form f(x,y).
 The value or amplitude f at spatial coordinates (x,y) is a positive scalar
quantity whose physical meaning is determined by the source of the
image.
 When an image is generated by a physical process, its values are
proportional to energy radiated by a physical source.
 As a consequence, f(x,y) must be nonzero and finite i.e., 0 < f(x,y) < ∞.
 The function f(x,y) may be characterized by two components:
- The amount of source illumination incident on the scene being viewed, i(x,y)
- The amount of source illumination reflected by the objects in the scene, r(x,y)
These are called the illumination and reflectance components.
 The two functions combine as a product to form f(x,y) = i(x,y) r(x,y). We call the intensity of a monochrome image at any coordinates (x,y) the gray level l of the image at that point, l = f(x,y), where

L_min ≤ l ≤ L_max

 L_min is required to be positive and L_max must be finite:

L_min = i_min r_min
L_max = i_max r_max

 The interval [L_min, L_max] is called the gray scale. Common practice is to shift this interval numerically to the interval [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale. All intermediate values are shades of gray varying from black to white.
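A minimal sketch of this model, using made-up illumination and reflectance values: f is formed as the product i(x,y)·r(x,y) and the result is shifted to the gray scale [0, L-1]:

```python
import numpy as np

L = 256                                    # number of gray levels
i = np.full((4, 4), 90.0)                  # illumination, assumed uniform here
r = np.random.uniform(0.01, 0.93, (4, 4))  # reflectance, 0 < r < 1 (e.g., ~0.01 black velvet, ~0.93 snow)

f = i * r                                  # image formation: f(x,y) = i(x,y) * r(x,y)

# Shift [Lmin, Lmax] to the display interval [0, L-1]
l = (f - f.min()) / (f.max() - f.min()) * (L - 1)
gray = l.astype(np.uint8)                  # 0 = black, 255 = white
```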
Spatial Resolution
 Vision specialists will often talk about pixel size
 Graphic designers will talk about dots per inch
Spatial Resolution:
- Measures the number of pixels in an image
- More pixels = clearer picture with more details
- Measured in pixels per inch (PPI) or dots per inch (DPI)
Gray Level Resolution:
- Measures the number of shades of gray in an image
- More shades = more detailed and nuanced picture
- Measured in bits (e.g., 8-bit, 16-bit)
Note:
- Spatial resolution affects the image's clarity and detail
- Gray level resolution affects the image's contrast and shading

Intensity Level Resolution

BPP – Bits Per Pixel
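To see intensity-level resolution in isolation, the sketch below requantizes an 8-bit image down to k bits per pixel; with small k, false contouring (visible banding) appears. The synthetic gradient image is an assumption for the demo:

```python
import numpy as np

def requantize(img8, k):
    """Reduce an 8-bit image to 2**k gray levels (k bits per pixel)."""
    levels = 2 ** k
    step = 256 // levels
    return (img8 // step) * step   # map each pixel to the base of its interval

img8 = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # horizontal gradient
img1 = requantize(img8, 1)   # 2 levels: binary-looking
img4 = requantize(img8, 4)   # 16 levels: visible banding (false contours)
```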


Fundamental Steps in Digital Image Processing
There are two categories of the steps involved in the image processing –
1. Methods whose outputs and input are images.
2. Methods whose outputs are attributes extracted from those images.
 The knowledge about a Problem Domain is coded into an image
processing system.
 The knowledge Base controls the interaction between different modules
of an image processing system.
Image Acquisition
 It is the first process.
 Generally, the image acquisition stage involves preprocessing, such as
scaling

[Figure: illumination i(x,y) and reflectance r(x,y) combine to form f(x,y) = i(x,y) · r(x,y), which is digitized into g(i,j); pixel = picture element]
Image Enhancement
 It is the process of manipulating an image, so that the result is more
suitable than the original for a specific application.
 For example, a method that is quite useful for enhancing X-ray images
may not be the best approach for enhancing satellite images taken in
the infrared band of the electromagnetic spectrum.
 When an image is processed for visual interpretation, the viewer is the
ultimate judge of how well a particular method works.
Image Restoration
 Is an area that also deals with improving the appearance of an image.
 However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on
mathematical or probabilistic models of image degradation.
Color Image Processing
 It is an area that has been gaining importance because of the use of digital images over the internet.
 Color image processing deals with basically color models and their
implementation in image processing applications.
Wavelets and Multi Resolution Processing
 These are the foundation for representing images in various degrees of resolution.
Compression
 Compression deals with the techniques for reducing the storage space
required to save an image or the bandwidth required to transmit it.
 This is particularly useful for displaying images on the internet: if the image is large, it uses more bandwidth (data) to deliver the image from the server and increases the loading time of the website.
 The JPEG image compression standard is used in most applications.
 It has two major approaches:
1. Lossless Compression
2. Lossy Compression
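The difference between the two approaches can be demonstrated with Pillow, a common Python imaging library (its use here is an illustration, not part of the slides): JPEG discards information under a user-chosen quality setting, while PNG reproduces the pixels exactly:

```python
from PIL import Image
import numpy as np

img = Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))

img.save("photo.jpg", quality=30)   # lossy: user specifies the compression level
img.save("photo.png")               # lossless: decoded pixels match the original

reloaded = np.asarray(Image.open("photo.png"))
assert (reloaded == np.asarray(img)).all()   # PNG round-trips exactly; JPEG would not
```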
Morphological Processing

 It deals with tools for extracting image components that are useful in
the representation and description of shape and boundary of objects.
 It is majorly used in automated inspection applications.
Segmentation

 It is the process of partitioning a digital image into multiple segments.


 It is generally used to locate objects and their boundaries in images.
 The more accurate the segmentation, the more likely recognition is to
succeed.
Representation and Description
 Representation deals with converting the data into a suitable form for
computer processing.
- Boundary representation: It is used when the focus is on external
shape characteristics e.g. corners
- Regional representation: It is used when the focus is on internal properties e.g. texture
 Description deals with extracting features that help in obtaining useful information.

 Feature Matching: we can extract the same features from a different image of the same cathedral taken from a different angle.
Recognition
 It is a process that assigns a label (e.g. vehicle) to an object based on its
description.
Knowledge Base
 Knowledge about a problem domain is coded into an image processing
system in the form of a knowledge database.
 The knowledge may be as simple as detailing regions of an image where
the information of interest is known to be located.
 The Knowledge base can also be complex, such as an interrelated list of
all major possible defects in a materials inspection problem.
 In addition to guiding the operation of each processing module, the
knowledge base also controls the interaction between modules.
Components of a Digital Image Processing System

Image Sensors
 Two elements are required to acquire digital image:
- Physical device that is sensitive to the energy radiated by the
object we wish to image.
- Digitizer that is used for converting the output of the physical
sensing device into digital form.

 For instance, in a digital video camera, the sensor produces an electrical output proportional to light intensity.
 The digitizer converts these outputs to digital data.
Specialized Image Processing Hardware
 It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.
 The ALU is used, for example, in averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware is sometimes called a 'front-end subsystem'.
Computer
 It is a general-purpose computer and can range from a PC to a
supercomputer depending on the application.
 In dedicated applications, sometimes a specially designed computer is used to achieve a required level of performance.
Software
 It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
 More sophisticated software packages allow the integration of these
modules.
Mass Storage
 This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
 Image processing applications fall into three principal categories of storage:
- Short term storage for use during processing
- On line storage for relatively fast retrieval
- Archival storage such as magnetic tapes and disks
Image Display

 Image displays in use today are mainly color TV monitors.


 These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.
Hardcopy Devices
 The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units and digital units such as optical and CD-ROM disks.
 Films provide the highest possible resolution, but paper is the obvious
medium of choice for written applications.
Networking
 It is almost a default function in any computer system in use today
because of the large amount of data inherent in image processing
applications.
 The key consideration in image transmission is bandwidth.
 In a dedicated network, this is not a problem, but it is necessary to use optical fiber and other broadband technologies for communication with remote sites via the Internet.
Human Visual System
 The human visual system consists of two primary components: the eye
and the brain, which are connected by the optic nerve.

Eye: Receiving sensor [Camera, Scanner]

Brain: Information Processing Unit [Computer System]

 The function of the visual system is to detect electromagnetic radiation (EMR) emitted by objects.
 Humans can detect light of wavelengths between 400 – 700 nm.
 Perceived color (hue) is related to the wavelength of the light.
 Brightness is related to the intensity of the radiation.
 The visible spectrum can be divided into three bands:
Blue (400 – 500 nm)
Green (500 – 600 nm)
Red (600 – 700 nm)
 The sensors are distributed across retina.
Outer part – Sclera
Middle part – Iris
Innermost part – Lens
The color of the eye is due to the color pigment present in the iris.
3 Layers:

External layer: Sclera & Cornea
- Posterior, white in color – Sclera
- Anterior – Cornea

Intermediate layer: red in color, consists of two parts
- Posterior – Choroid
- Anterior – Iris

 The function of the choroid is to supply nutrients to the eye.
 The internal layer is formed by the retina.
 The retina contains discrete light receptors called rods and cones.
 These rods and cones convert light into electrical energy, which is transferred to the brain through the optic nerve.
 In the posterior, the eye is filled with Vitreous humor and in the anterior,
it is filled with Aqueous humor.
 The thickness of the lens is varied by the Ciliary Muscles.
 The thickness of the lens is varied in order to form an inverted image on
the Fovea region.

 At the fovea, the density of the cones is maximum.
 There is a region called the blind spot, where there are neither cones nor rods. If the image falls on this blind spot, we cannot see the object.
 The diagram shows a detailed view of the retina. It consists of rods and cones, so termed because of their shape.
 The cones are responsible for color vision and the rods are responsible for peripheral vision.
Distribution of Rods and Cones in the Retina

 The density of the cones is maximum at the fovea. At the blind spot there are neither rods nor cones.
Image Formation in the Eye

 For clear perception, the image of the object should be focused on the fovea of the retina.
 When the eye perceives an object, the object-to-lens distance and the lens-to-image (fovea) distance are fixed.
 Focusing is therefore achieved by varying the focal length of the lens. To focus on distant objects, the ciliary muscles cause the lens to flatten, i.e., a higher focal length.
 Similarly, to focus on nearby objects, the lens is made thicker.
 For objects more than 3m away, the lens exhibits lowest refractive power
i.e. fully flattened. In this case the distance between the optical center
and retinal image is about 17mm.

 For example, with object height H = 15 mm, object distance D = 100 mm and lens-to-retina distance F = 17 mm, the retinal image height is h = 2.55 mm, which is very small.
 For objects less than 3 m away, the lens exhibits more refractive power, i.e., it becomes thicker, in order to focus the image on the fovea.
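The 2.55 mm figure follows from similar triangles relating the object and its retinal image, using the symbols of the example above:

H / D = h / F  ⇒  h = (H × F) / D = (15 × 17) / 100 = 2.55 mm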
Brightness Adaptation of Human Eye

 Brightness adaptation is a phenomenon which describes the ability of the human eye to simultaneously discriminate distinct intensity levels.
 The brightness adaptation level is the current sensitivity level of the visual system for any given set of conditions.
 Simultaneous contrast is a phenomenon which describes that the perceived brightness of a region in an image is not a simple function of its intensity; rather, it depends on the intensities of neighboring regions.
Brightness Adaption and Discrimination

 The ability of the eye to discriminate between different brightness levels is an important consideration in creating and presenting images.
 The range of light intensity levels to which the human visual system can adapt is on the order of 10^10, from the scotopic threshold (minimum low-light condition) to the glare limit (maximum light level condition).
 Even though the dynamic range of the human eye is enormous, at any given point in time the eye can observe only a small range of illuminations. This phenomenon is called Brightness Adaptation.
Subjective brightness
 Subjective brightness is brightness as perceived by the human visual
system.

 Subjective brightness refers to how bright an image appears to the human eye, taking into account psychological and contextual factors.
 Factors influencing subjective brightness:
- Surrounding environment
- Image content
- Color palette
- Contrast
- Adaptation (viewing conditions)
Example 1:

Two images with same objective brightness:


Image A: Black background with white text
Image B: White background with black text

Subjective brightness:
Image A appears brighter (high contrast)
Image B appears dimmer (low contrast)

Example 2:

Two rooms with same lighting level:


Room A: Dark-colored walls and furniture
Room B: Light-colored walls and furniture

Subjective brightness:
Room A appears darker (psychological factor)
Room B appears brighter (psychological factor)
 Subjective brightness is a logarithmic function of the light intensity
incident on the eye.
 This characteristic is illustrated
in the figure, which is a plot of
light intensity versus subjective
brightness.
 The long solid curve represents
the range of intensities to which
the visual system can adapt.
 In photopic vision alone, the range is about 10^6.
 The transition from scotopic to photopic vision is gradual over the approximate range -3 to -1 mL in the log scale.
 For a given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level.
 This figure shows the range of intensities human system can adapt.
When illumination is low i.e. scotopic vision (dim light vision) plays the
dominant role. Whereas for high intensities, Photopic vision is dominant.

 From the figure, the transition from scotopic to photopic vision is gradual, and at certain levels of illumination both of them play a role. So the dynamic range of the human eye is enormous, but the eye cannot operate over the entire range simultaneously; it can detect only a small range of illuminations at a time. This is Brightness Adaptation.
 Stare at the sun for a few seconds and the eye adapts to the upper range
of the illumination values. Now look away from the sun, you will find
that you cannot see anything for some time, because your eye takes a
finite time to adapt itself to the new range.
 Similarly you would have observed that when the power supply of your
home is cut off in the night, everything will be dark immediately and
nothing will be seen for some time. Because your eyes were adapted to
light, but gradually the eye will adjust to this dark level and you will start seeing some visible things even in the dark. This is Brightness Adaptation.
 Although the intensity within each stripe is constant, the perceived brightness varies near the boundaries between stripes; this effect is called the "Mach Band Effect".
 The visual system tends to undershoot or overshoot around the boundary of regions of different intensities.
 Brightness discrimination or Contrast Sensitivity is the ability of the eye
to discriminate between changes in light intensity at any specific
adaptation level.
 The response of the human eye to changes in the intensity of illumination
is non-linear.

 Let us perform an experiment:


- We can take one glass plate and illuminate it from the bottom with some
constant illumination.

 Now the observer is asked to look at the glass plate from the top. At the center of the plate, the intensity of the illumination is increased from I to I + Δi_c, a change in illumination.
 The observer is asked to say whether he or she can detect this increase. If the observer cannot detect the change, the intensity is further increased by another increment Δi_c until the observer detects the difference. The smallest detectable increment is known as the just noticeable difference.

 The Weber ratio, Δi_c/I (the just-noticeable increment relative to the background intensity I), is different for different persons; therefore, the response of the human eye to changes in intensity is said to be nonlinear.
 A low Weber ratio means that even a small variation Δi_c is detected by the observer. A high Weber ratio implies that large variations are required before the observer notices the change.

 Hence, a low Weber ratio means that the observer has good contrast sensitivity and a high ratio means the observer has poor contrast sensitivity. Under proper illumination, the Weber ratio of most people is approximately constant, near 0.02.
 From the graph, you can see that the Weber ratio is high at very low and very high levels of light intensity. Why? Our discrimination quality reduces when we are in a room that is not well lit, and it also reduces when there is too much light. When the intensity of the light is very high or very low, we cannot discriminate and the Weber ratio is very high. But for most persons, at ordinary illumination levels, the Weber ratio is approximately 0.02.
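As a worked example of that 0.02 figure (the numbers are illustrative): if the eye is adapted to a background intensity of I = 100 units, the just noticeable difference is about Δi_c = 0.02 × 100 = 2 units; adapted to I = 1000 units, it grows to about 20 units. Equal perceived steps therefore correspond to multiplicative rather than additive changes in intensity, consistent with the logarithmic response described earlier.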
Simultaneous Contrast
 The simultaneous contrast phenomenon: each small square is actually the
same intensity but, because of the different intensities of the surrounds,
the small squares do not appear equally bright.
 The hue of a patch is also dependent on the wavelength composition of
surrounding light.

Hue: Actual color


Optical Illusion
 An optical illusion (also called a Visual illusion) is an illusion caused by the
visual system and characterized by a visual percept that appears to differ
from reality.
 The eye fills in non-existing information or wrongly perceives geometrical
properties of objects.
Image Sensing and Acquisition
 Most of the images in which we are interested are generated by the
combination of an illumination source and the reflection or absorption
of energy from that source by the elements of the scene being imaged.

 The illumination may originate from a source of electromagnetic energy such as a radar, infrared or X-ray system, or from ultrasound or even a computer-generated illumination pattern.

 Image acquisition is the first step in digital image processing.

 Two elements are required to acquire digital image:


- Physical device that is sensitive to the energy radiated by the
object we wish to image.
- Digitizer that is used for converting the output of the physical
sensing device into digital form.
 Image acquisition using single sensor
 Image acquisition using senor strips
 Image acquisition using sensor arrays
 Incoming energy is transformed into a voltage by the combination of
input electrical power and sensor material that is responsive to the
particular type of energy being detected.
 The output voltage waveform is the response of the sensor(s), and a
digital quantity is obtained from each sensor by digitizing its response.

Image Acquisition using a Single Sensor

 The image sensor used in digital cameras is a device that converts an optical image to an electrical signal.
 An image sensor is typically a charge-coupled device (CCD) or a CMOS sensor.
 Most familiar Sensor of this type is the photo diode.
 The use of a filter in front of a sensor improves selectivity.
 According to the illumination of light, output waveform is generated.
Output is proportional to the incident light intensity.
 To obtain a two dimensional image using a single sensor, the motion
should be in both x and y directions.
 The film is mounted onto the
drum and the drum rotates
around its own axis which
provides a motion in one
direction – which causes
vertical movement.
 The single sensor is connected
through a long rod which
provides the linear motion
perpendicular to the film
motion – which causes
horizontal movement
 Therefore the output is a 2D digital image.
 A light source is inside the drum, passes throughout the film and the
sensor detects that light in different frequencies which gets converted
into the voltage which ultimately results in the 2D image using sensors.
 This method is an inexpensive (but slow) way to obtain high-resolution images.
Image Acquisition using a Sensor Strip

 In a sensor strip, a series of sensors is placed in a line in the horizontal direction, forming a row of sensors. Sensing devices with 4000 or more in-line sensors are possible.

 A sensor strip is like a copier machine that scans documents: it moves horizontally, capturing the entire page quickly and efficiently.
 Inline arrangement of sensors in one direction – will capture image in
one direction.
 Motion perpendicular to the strip provides imaging in other direction.
- Sensor strip captures horizontal lines.
- Vertical motion captures entire image.
- Image is constructed line by line.
 Sensor strips in a ring configuration
are used in medical and industrial
imaging to obtain cross-sectional
(“slice”) images of 3-D objects.
 A rotating X-ray source provides
illumination, and X-ray sensitive
sensors opposite the source collect
the energy that passes through the
object.
 This is the basis for medical and industrial computerized axial tomography (CAT) imaging.
Image Acquisition using Sensor Arrays

 Individual sensors are arranged in the form of 2-D array. This type of
arrangement is found in digital cameras.
 Light from the object (e.g., book page)
falls onto the sensor array.
 Each sensor measures the light energy it
receives.
 The sensor's response is proportional to
the total light energy it receives over
time.

 Sensors can collect light energy for a long time (minutes or hours).
 This integration process adds up the light
energy, reducing random noise.
 Result: A clearer, more accurate
measurement of the light intensity.
Advantage: Since the sensor array is 2-D, a complete image can be obtained by focusing the energy pattern onto the surface of the array.
Image Sampling & Quantization
 In digital image processing, signals captured from the physical world need to be translated into digital form by a process called "digitization".

 In order to become suitable for digital processing, an image function f(x,y) must be digitized both spatially and in amplitude.
 Image Sampling
- Digitization of spatial coordinates (x,y)
- A digital sensor can only measure a limited number of samples at
a discrete set of energy levels.

 Quantization
- Amplitude digitization

 In all types of sensors, quantization of the sensor output completes the process of generating a digital image.
 The quality of a digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization.
 Figure shows a continuous image f that we want to convert to digital
form.

 An image may be continuous with respect to the x and y coordinates, and also in amplitude.
 To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
 The one-dimensional function in the figure is a plot of amplitude (intensity level) values of the continuous image along the line segment AB.
 The random variations are due to image noise.
 The sample points are indicated by vertical tick marks at the bottom of the figure.
 The samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function.
 To form a digital function, the intensity values must be converted (quantized) into discrete quantities.
 The right side of the figure shows the intensity scale divided into eight discrete intervals, ranging from black to white.
 The continuous intensity levels are quantized by assigning one of the eight values to each sample.
 The assignment is made depending on the vertical proximity of a sample to a vertical tick mark.
 The digital samples resulting from both sampling and quantization are shown in the figure.
 The first figure shows a continuous image projected onto the plane of an array sensor.
 The second figure shows the image after sampling and quantization.
 The quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization.
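A compact sketch of the whole digitization step (the "continuous" scene is stood in for by a dense synthetic array, an assumption for the demo): spatial sampling keeps every 8th point, and quantization maps each sample to one of eight levels, mirroring the eight intervals in the figure:

```python
import numpy as np

# Stand-in for a continuous image: a dense 512x512 intensity surface in [0, 1]
x = np.linspace(0, 4 * np.pi, 512)
scene = np.outer(np.sin(x), np.cos(x)) * 0.5 + 0.5

# Sampling: digitize the spatial coordinates by keeping every 8th point
sampled = scene[::8, ::8]                 # 64 x 64 samples

# Quantization: digitize the amplitude into 8 discrete intensity levels (0..7)
levels = 8
digital = np.floor(sampled * levels).clip(0, levels - 1).astype(np.uint8)
print(digital.shape, digital.min(), digital.max())
```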
Basic Relationships between Pixels
We are going to see:

 Neighborhood

 Adjacency

 Connectivity

 Paths

 Regions and boundaries

The 8 neighbors of a pixel at (x, y):

(x-1,y-1)  (x-1,y)  (x-1,y+1)
(x,y-1)    (x, y)   (x,y+1)
(x+1,y-1)  (x+1,y)  (x+1,y+1)
Neighborhood
Neighbors of Pixels

 Any pixel p(x, y) has two vertical and two horizontal neighbors, given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).
 This set of pixels is called the 4-neighbors of p, and is denoted by N4(p). Each of them is at a unit distance from p.
 The four diagonal neighbors of p(x, y) are given by (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1).
 This set is denoted by ND(p).
 The points in ND(p) and N4(p) together are known as the 8-neighbors of the point p, denoted by N8(p).
 Some of the points in N4, ND and N8 may fall outside the image when p lies on the border of the image.

 4-neighbors of a pixel p are its vertical and horizontal neighbors, denoted by N4(p)
 8-neighbors of a pixel p are its vertical, horizontal and 4 diagonal neighbors, denoted by N8(p)

• N4 – 4-neighbors
• ND – diagonal neighbors
• N8 – 8-neighbors (N4 ∪ ND)
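These neighbor sets are straightforward to compute. The helpers below are a minimal sketch (function names are my own) that return N4, ND and N8 for a pixel and drop coordinates falling outside an M x N image, as happens for border pixels:

```python
def n4(x, y):
    """4-neighbors: the two horizontal and two vertical neighbors."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: N8 = N4 union ND."""
    return n4(x, y) | nd(x, y)

def inside(points, M, N):
    """Discard neighbors that fall outside an M x N image (border pixels)."""
    return {(x, y) for (x, y) in points if 0 <= x < M and 0 <= y < N}

print(inside(n8(0, 0), 4, 4))   # a corner pixel keeps only 3 of its 8 neighbors
```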
Adjacency, Connectivity
Adjacency
 Two pixels are connected if they are neighbors and their gray levels
satisfy some specified criterion of similarity.
 For example, in a binary image two pixels are connected if they are
4-neighbors and have same value (0/1).
 Let V be the set of intensity (gray-level) values used to define adjacency.
 e.g. V={1}
V={0, 2}
Binary image = {0, 1}
Gray Scale image = {0, 1, 2, …….., 255}
 In binary images, two pixels are adjacent if they are neighbors and have the same intensity value, either 0 or 1.
 In gray-scale images, the image contains more gray-level values, in the range 0 to 255, and the set V could be any subset of these 256 values.
We consider three types of adjacency:
 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is
in the set N4(p).
 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is
in the set N8(p).

 m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if (i) q is in the set N4(p), or (ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
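A sketch of the three adjacency tests on a small binary image with V = {1}; the m-adjacency branch follows the definition just given (the helper and function names are my own):

```python
import numpy as np

def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def adjacent(img, p, q, V, kind="4"):
    """Test 4-, 8- or m-adjacency of pixels p and q whose values must lie in V."""
    if img[p] not in V or img[q] not in V:
        return False
    in_n4, in_nd = q in n4(*p), q in nd(*p)
    if kind == "4":
        return in_n4
    if kind == "8":
        return in_n4 or in_nd
    # m-adjacency: q in N4(p), or q in ND(p) with no common 4-neighbor valued in V
    M, N = img.shape
    common = {(a, b) for (a, b) in n4(*p) & n4(*q) if 0 <= a < M and 0 <= b < N}
    return in_n4 or (in_nd and not any(img[c] in V for c in common))

img = np.array([[0, 1, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(adjacent(img, (0, 1), (0, 2), V={1}, kind="4"))   # True: horizontal neighbors
print(adjacent(img, (1, 1), (2, 2), V={1}, kind="m"))   # True: diagonal, no shared 4-neighbor in V
```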