Image Processing and Compression Techniques


Abstract:
An image defined in the "real world" is considered to be a
function of two real variables, say x and y. Before an image can
be processed, it must be converted into digital form. Digitization
includes sampling of the image and quantization of the sampled
values. Once the image has been converted into bit information,
processing is performed. The processing technique may be:

1. Image enhancement
2. Image restoration
3. Image compression

Image enhancement refers to accentuation, or sharpening, of
image features such as boundaries.
Image restoration is concerned with filtering the observed
image to minimize the effect of degradations.
Image compression is concerned with minimizing the number of
bits required to represent an image.

Text compression – CCITT Group 3 & Group 4
Still image compression – JPEG
Video compression – MPEG
Introduction:
Modern digital technology has made it possible to manipulate
multi-dimensional signals with systems that range from simple
digital circuits to advanced parallel computers. The
goal of this manipulation can be divided into three categories:
1. Image processing: image in -> image out
2. Image analysis: image in -> measurements out
3. Image understanding: image in -> high-level description out
Image processing refers to the processing of a 2D picture by a
computer.

Basic definitions:
An image defined in the "real world" is considered to be a
function of two real variables, for example, a(x, y), with a as the
amplitude (e.g. brightness) of the image at the real coordinate
position (x, y). An image may be considered to contain sub-images,
sometimes referred to as regions of interest (ROIs), or
simply regions. This concept reflects the fact that images
frequently contain collections of objects, each of which can be the
basis for a region. In a sophisticated image processing system it
should be possible to apply specific image processing
operations to selected regions. Thus one part of an image
(region) might be processed to suppress motion blur while
another part might be processed to improve
color rendition.
Sequence of image processing:

Image sampling and quantizing:
The most basic requirement for computer processing of images is
that the images be available in digitized form, that is, as arrays
of finite-length binary words. For digitization, the given image
is sampled on a discrete grid and each sample, or pixel, is
quantized using a finite number of bits. The digitized image is
then processed by a computer. To display a digital image, it is
first converted into an analog signal, which is scanned onto a
display.

Sampling theorem:
A bandlimited image that is sampled uniformly on a rectangular
grid with spacings dx and dy can be recovered without error from
the sample values f(m dx, n dy), provided the sampling rate is
greater than the Nyquist rate, that is,

1/dx > 2 fx0 and 1/dy > 2 fy0,

where fx0 and fy0 are the highest spatial frequencies of the
image in the x and y directions.
Image quantization:
The step subsequent to sampling in image digitization is
quantization. A quantizer maps a continuous variable u into a
discrete variable u', which takes values from a finite set
{r1, r2, …, rL} of numbers. This mapping is generally a
staircase function, and the quantization rule is as follows:

Define {tk, k = 1, …, L+1} as a set of increasing transition or
decision levels, with t1 and tL+1 as the minimum and maximum
values, respectively, of u. If u lies in the interval [tk, tk+1),
then it is mapped to rk, the kth reconstruction level.

The following quantizers are useful in image coding
techniques such as pulse code modulation (PCM), differential
PCM, transform coding, etc. They operate on one input sample
at a time, and the output value depends only on that input.
They are:
1. Lloyd-Max quantizer
2. Optimum mean-square uniform quantizer
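The staircase quantization rule above can be sketched as a small program. This is a minimal uniform quantizer (reconstruction levels placed at interval midpoints), not the Lloyd-Max design, which would instead place levels to minimize mean-square error for a given input distribution; the function names are illustrative.

```python
import bisect

def make_uniform_quantizer(u_min, u_max, L):
    """Build transition levels t_1..t_{L+1} and reconstruction
    levels r_1..r_L for a uniform quantizer with L levels."""
    step = (u_max - u_min) / L
    t = [u_min + k * step for k in range(L + 1)]      # transition levels
    r = [u_min + (k + 0.5) * step for k in range(L)]  # interval midpoints

    def quantize(u):
        # u in [t_k, t_{k+1}) maps to r_k; out-of-range inputs clamp
        k = min(max(bisect.bisect_right(t, u) - 1, 0), L - 1)
        return r[k]

    return quantize

q = make_uniform_quantizer(0.0, 256.0, 8)  # 8 levels, step = 32
q(100.0)  # -> 112.0, the midpoint of [96, 128)
```

Note that each output value depends only on the one input sample, as stated above for PCM-style quantizers.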
PROCESSING TECHNIQUES:
1. Image enhancement
2. Image restoration
3. Image compression etc.

1. Image enhancement:
It refers to accentuation, or sharpening, of image features
such as boundaries or contrast, to make a graphic display more
useful for display and analysis. This process does not increase
the inherent information content of the data. It includes
gray-level and contrast manipulation, noise reduction, edge
crispening and sharpening, filtering, interpolation and
magnification, pseudocoloring, and so on.

2. Image restoration:
It is concerned with filtering the observed image to
minimize the effect of degradations. The effectiveness of image
restoration depends on the extent and accuracy of the
knowledge of the degradation process as well as on the filter
design. Image restoration differs from image enhancement in that
the latter is concerned with the mere extraction or accentuation
of image features.
3. Image compression:
It is concerned with minimizing the number of bits required to
represent an image. Applications of compression are in
broadcast TV, remote sensing via satellite, military
communication via aircraft, radar, teleconferencing, facsimile
transmission of educational and business documents,
medical images that arise in computer tomography,
magnetic resonance imaging and digital radiology, motion
pictures, satellite images, weather maps, geological surveys, and
so on.
Compression Techniques:
1. Lossless compression
2. Lossy compression

Lossless compression:
In this, data is not altered in the process of compression or
decompression; decompression generates an exact replica of the
original image. Text compression is a good example.
Spreadsheets and word-processor files usually contain repeated
sequences of characters. By reducing each repeated run of
characters to a count, we can reduce the number of bits required.
Grayscale images may also contain repetitive information; such
repetitive graphics and sound allow the replacement of bit runs
by codes. In color images, however, adjacent pixels can have
different color values. Such images do not have sufficient
repetitiveness to be compressed, and in these cases this
technique is not applicable.
Lossless compression techniques have been able to
achieve reductions in size in the range of 1/10 to 1/50 of the
original uncompressed size.

Standards for lossless compression:
1. Packbit encoding (run-length encoding)
2. CCITT Group 3 1D
3. CCITT Group 3 2D
4. CCITT Group 4 2D
5. Lempel-Ziv-Welch (LZW) algorithm
Lossy compression:
It is used for compressing audio, grayscale or color
images, and video objects. In this, compression results in a loss
of information. When a video image is decompressed, the loss of
data in one frame will not be perceived by the eye. Even if
several bits are missing, the information is still perceived in
an acceptable manner, as the eye fills in gaps in the shading
gradient.

Standards for lossy compression:
1. Joint Photographic Experts Group (JPEG)
2. Moving Picture Experts Group (MPEG)
3. Intel DVI
4. P*64 video coding algorithm

Binary Image Compression Schemes:
A binary image contains only black and white pixels. A scanner
scans a document as sequential scanlines. During scanning, the
CCD array sensors of the scanner capture black and white pixels
along a scanline. This process is repeated for the next scanline,
and so on.

Packbit encoding:
In this, a consecutively repeated string of characters is replaced
by two bytes:
First byte = number of times the character is repeated.
Second byte = the character itself.
For example, 0000001111110000 is represented as the byte pairs
(0 × 6), (1 × 6), (0 × 4).
In some cases, one byte is used to represent both the value of
the character and the number of repetitions: one bit out of 8 is
used for the pixel value, and the remaining 7 bits for the run
length. Typical compression efficiency is from 1/2 to 1/5.
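The byte-pair scheme described above can be sketched as follows. The function names are illustrative, and this is the (value, count) idea from the text rather than the exact PackBits wire format.

```python
def packbits_like_encode(bits):
    """Run-length encode a string of '0'/'1' pixels as
    (value, count) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                     # extend the current run
        runs.append((bits[i], j - i))
        i = j
    return runs

def pack_run(value, count):
    """Single-byte variant: 1 bit for the pixel value,
    7 bits for the run length (so count must be < 128)."""
    assert 0 < count < 128
    return (int(value) << 7) | count

packbits_like_encode("0000001111110000")  # -> [('0', 6), ('1', 6), ('0', 4)]
```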
CCITT Group 3 1D:
Huffman coding is used.

Algorithm for Huffman coding:
1. Arrange the symbol probabilities pi in decreasing order and
consider them as leaf nodes of a tree.
2. While there is more than one node:
2.1. Merge the two nodes with the smallest probability to
form a new node whose probability is the sum of the two
merged nodes.
2.2. Arbitrarily assign 1 and 0 to each pair of branches
merging into a node.
3. Read the code sequentially from the root to the leaf node
where the symbol is located.
The probability of occurrence of a run of length n is P(n). As a
result, shorter codes are assigned to the more frequent run
lengths.
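Steps 1–3 above can be sketched with a priority queue. CCITT Group 3 does not build trees at run time (it uses fixed, precomputed code tables), so this is only an illustration of the construction.

```python
import heapq
from itertools import count

def huffman_codes(probabilities):
    """Build Huffman codes from {symbol: probability} by repeatedly
    merging the two least-probable nodes, then prefixing 0/1."""
    tiebreak = count()  # unique counter so ties never compare dicts
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # two smallest probabilities
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes({"a": 0.5, "b": 0.3, "c": 0.2})
# the most frequent symbol "a" receives the shortest code (1 bit)
```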
For example, from the code tables, the run-length code for 16
white pixels is 101010, while the run-length code for 16 black
pixels is 0000010111, since a run of 16 white pixels occurs more
often than a run of 16 black pixels.
For run lengths greater than 1792, the codes are the same for
both black and white pixels.
CCITT Group 3 1D utilizes Huffman coding to generate a set
of terminating codes and make-up codes for a given bit stream.
For example, a run length of 132 white pixels is encoded using:
Make-up code for 128 white pixels – 10010
Terminating code for 4 white pixels – 1011
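Composing the code for a run of 132 white pixels from the two code words quoted above can be sketched as follows. Only those two table entries are included here; the real Group 3 tables define terminating codes for runs of 0–63 and make-up codes for every multiple of 64.

```python
# partial tables: only the two entries quoted in the text
MAKEUP_WHITE = {128: "10010"}
TERMINATING_WHITE = {4: "1011"}

def encode_white_run(n):
    """Split a white run into the largest multiple of 64 (make-up
    code) plus a remainder of 0..63 (terminating code)."""
    makeup = (n // 64) * 64
    bits = MAKEUP_WHITE[makeup] if makeup else ""
    return bits + TERMINATING_WHITE[n - makeup]

encode_white_run(132)  # -> "100101011" (make-up 128, then terminating 4)
```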

Generating coding tree:


CCITT Group 3 2D compression:
It uses a "k-factor", where the image is divided into
several groups of k lines. The first line of every group of k
lines is encoded using CCITT Group 3 1D compression. A
2D scheme is used with the 1D scheme to encode the rest of the
lines.
Data format for the 2D scheme:
The 2D scheme uses a combination of pass codes, vertical codes,
and horizontal codes. The pass code is 0001; the horizontal
code is 001.

Difference between pixel positions      Vertical code
in reference line and coding line
 3                                      0000010
 2                                      000010
 1                                      010
 0                                      1
-1                                      011
-2                                      000011
-3                                      0000011

Steps to code the coding line:
1. Parse the coding line and look for a change in pixel value.
The change is found at the a1 location.
2. Parse the reference line and look for a change in pixel value.
The change is found at the b1 location.
3. Find the difference between locations b1 and a1:
delta = b1 - a1. If delta is between -3 and +3, apply the
vertical codes.
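A simplified mode decision following the table and steps above might look like this. The real Group 3 2D coder also examines a second reference-line transition (b2) to detect pass mode, and horizontal mode emits two run-length codes after its prefix; both are omitted in this sketch.

```python
# vertical-code table as quoted in the text (keyed by delta = b1 - a1)
VERTICAL = {3: "0000010", 2: "000010", 1: "010", 0: "1",
            -1: "011", -2: "000011", -3: "0000011"}
PASS_CODE = "0001"
HORIZONTAL_PREFIX = "001"

def choose_code(a1, b1):
    """a1: changing pixel on the coding line; b1: changing pixel
    on the reference line. Returns the mode code (simplified)."""
    delta = b1 - a1
    if -3 <= delta <= 3:
        return VERTICAL[delta]    # vertical mode
    return HORIZONTAL_PREFIX      # fall back to horizontal mode

choose_code(10, 11)  # -> "010" (delta = +1)
```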
CCITT Group 4 2D compression:
In this, the first reference line is an imaginary all-white line
above the top of the image. Each newly coded line becomes the
reference line for the next scan line.

GRAYSCALE, COLOR AND STILL IMAGE COMPRESSION:

DCT (Discrete Cosine Transform):
In the time domain, a signal requires many data points to
represent time on the x-axis and amplitude on the y-axis. Once
the signal is converted into the frequency domain, only a few
data points are required to represent the same signal.
A color image is composed of pixels; these pixels have RGB
values, each with its own x and y coordinates, arranged in 8×8
matrices. To compress a grayscale image in JPEG, each pixel is
translated to luminance. To compress a color image, the work is
three times as much.
DCT calculations:
The numbers in the first table represent the amplitudes of the
pixels in an 8×8 block. The second table shows the 8×8 matrix
after applying the DCT. In that table, row 0, column 0 holds
DCT(0, 0) = 172; it is called the DC component, the others are
called AC components, and the DC component has a larger value
than the others.
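The 8×8 DCT that produces such a table of coefficients can be computed directly from its definition. This is a didactic sketch; production JPEG codecs use fast factored transforms, and pixels are normally level-shifted by -128 before the transform.

```python
import math

def dct2_8x8(block):
    """Direct 8x8 2-D DCT-II. block: 8x8 list of pixel amplitudes.
    out[0][0] is the DC component; the rest are AC components."""
    N = 8
    def c(k):  # normalization factors
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]  # a constant 8x8 block
coeffs = dct2_8x8(flat)
# DC term coeffs[0][0] == 800; all AC terms are ~0
```

For a flat block all the signal energy lands in the DC coefficient, which is why the DC component dominates in the table above.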

Quantization:
The baseline JPEG algorithm supports 4 quantization tables and
2 Huffman tables each for the DC and AC DCT coefficients.

QuantizedCoeff(i, j) = DCT(i, j) / Quantum(i, j)

After quantization, JPEG compresses the resulting zero values by
using a run-length scheme. To find the runs of zeros, JPEG scans
the coefficients in a zigzag manner.
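The quantization formula and the zigzag scan can be sketched together. The table name `quantum` follows the formula above and is illustrative; baseline JPEG also rounds the quotient to the nearest integer.

```python
def zigzag_indices(n=8):
    """Zig-zag scan order for an n*n block: walk the anti-diagonals,
    alternating direction, so low-frequency coefficients come first
    and trailing zeros cluster for run-length coding."""
    order = []
    for d in range(2 * n - 1):
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        order.extend(cells if d % 2 else cells[::-1])
    return order

def quantize_and_zigzag(dct, quantum):
    # QuantizedCoeff(i, j) = round(DCT(i, j) / quantum(i, j)),
    # then read the block out in zig-zag order
    q = [[round(dct[i][j] / quantum[i][j]) for j in range(8)]
         for i in range(8)]
    return [q[i][j] for i, j in zigzag_indices()]

zigzag_indices()[:4]  # -> [(0, 0), (0, 1), (1, 0), (2, 0)]
```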
