Edge Preserving Image Compression for Magnetic Resonance Images
CONTENTS

1. INTRODUCTION
   1.1 OVERVIEW
   1.2 STEPS INVOLVED IN COMPRESSING AN IMAGE
   1.3 CHARACTERISTICS OF AN APPROPRIATE IMAGE COMPRESSION SCHEME
   1.4 MAGNETIC RESONANCE IMAGING (MRI)
   1.5 NEED FOR IMAGE COMPRESSION: MRI
   1.6 GOALS FOR DEVELOPING AN IMAGE COMPRESSION ARCHITECTURE: MRI
2. BASIC CONCEPTS
8. BIBLIOGRAPHY
Chapter-One
INTRODUCTION
1.1 OVERVIEW
The entropy relation indicates that the achievable compression ratio for general data is about 2:1 using lossless techniques.
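As a rough, self-contained illustration of this bound (an assumed example, not taken from this report), the following Python snippet estimates the zero-order entropy of an 8-bit image's pixel histogram and the corresponding lossless compression limit; the input array img is hypothetical.

import numpy as np

def entropy_bits_per_pixel(img):
    # Zero-order entropy of the pixel histogram, in bits per pixel.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty histogram bins
    return float(-(p * np.log2(p)).sum())

# Pure noise is essentially incompressible (about 8 bits/pixel, near 1:1);
# typical images have lower entropy, giving roughly 2:1 lossless ratios.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
h = entropy_bits_per_pixel(img)
print("entropy:", round(h, 2), "bits/pixel; lossless bound ~", round(8.0 / h, 2), ": 1")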
Our visual and auditory senses perform some "masking" of the received inputs, rendering a portion of the received input redundant and imperceptible. This enables us to devise lossy data compression techniques which remove this redundant information, thus achieving much higher compression ratios.
The JPEG still-image compression scheme provides compression ratios of 5:1 to 20:1 with reasonable distortion.
For JPEG and other transform-based compression techniques, the cost function is the energy or frequency content of the image, which is concentrated in the lower-frequency coefficients. The use of transform-based coding techniques therefore results in the blurring of edges and other high-frequency content of the image. Furthermore, at high compression ratios (controlled by the Q-factor in JPEG), the distortion increases significantly as well, resulting in "patchy" or "blocky" images. Real-time implementations of the Discrete Cosine Transform (DCT) used in JPEG are also costly, since the DCT is computationally intensive.
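To make the energy-compaction and edge-blurring point concrete, the sketch below (an illustrative example with assumed parameters, not code from this report) applies an 8 x 8 block DCT followed by coarse uniform quantization to a block containing a sharp edge; SciPy's DCT routines are assumed to be available.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):    # 2-D type-II DCT, orthonormal
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):  # inverse 2-D DCT
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# An 8x8 block containing a sharp vertical edge.
block = np.zeros((8, 8))
block[:, 4:] = 255.0

coeffs = dct2(block)                 # energy piles up in the low-frequency corner
q = 50.0                             # coarse, JPEG-like uniform quantization step (assumed)
quantized = np.round(coeffs / q) * q
recon = idct2(quantized)             # the edge comes back smeared: high frequencies are lost

print("low-frequency energy fraction:",
      np.square(coeffs[:4, :4]).sum() / np.square(coeffs).sum())
print("max reconstruction error:", np.abs(recon - block).max())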
Medical image compression, such as the compression of magnetic resonance images, requires the detection and recognition of boundaries in tissue structures for the detection of lesions and tumors. This in turn requires the preservation of edges in the image, which define the boundaries between the various anatomical structures and tissues.
Neural networks are adaptive in nature, due to their use of a training phase for
learning the relationships between the input data and required outputs, and
their generalization capabilities provide a means of coping with noisy or
incomplete data.
1.2 STEPS INVOLVED IN COMPRESSING AN IMAGE

Compressing an image typically involves the following steps (a small illustrative sketch follows this list):
1) Specifying the rate (the available bit budget) and the distortion (the tolerable error) for the target image.
2) Dividing the image data into various classes, based on their importance.
3) Dividing the available bit budget among these classes, such that the distortion is a minimum.
4) Quantizing each class separately, using the bit allocation information derived in step 3.
5) Encoding each class separately, using an entropy coder, and writing the result to the file.
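To ground steps 2-5, here is a minimal toy sketch (the block size, variance thresholds, and per-class bit depths are assumed values, not prescribed by this report) that classifies blocks by variance and quantizes each class with its own bit allocation; the entropy-coding step is left as a comment.

import numpy as np

def compress_steps(img, block=8, thresholds=(50.0, 500.0), bits_per_class=(1, 3, 5)):
    # Toy classify / allocate / quantize pipeline for the steps listed above.
    h, w = img.shape
    blocks = [img[r:r + block, c:c + block].astype(float)
              for r in range(0, h, block) for c in range(0, w, block)]

    # Step 2: classify each block by its variance (a proxy for importance).
    classes = [int(np.searchsorted(thresholds, b.var())) for b in blocks]

    # Steps 3-4: each class gets a fixed bit depth and is quantized separately.
    quantized = []
    for b, c in zip(blocks, classes):
        levels = 2 ** bits_per_class[c]
        step = 256.0 / levels
        quantized.append(np.round(b / step).astype(np.uint8))  # indices for step 5 (entropy coder)
    return quantized, classes

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
quantized, classes = compress_steps(img)
print("class histogram:", np.bincount(classes, minlength=3))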
1.3 CHARACTERISTICS OF AN APPROPRIATE IMAGE COMPRESSION SCHEME

The characteristics of an appropriate image compression scheme can be defined as follows:
1) The ability to specify which components of the image are vital for visual integrity and contain the most information, and therefore need to be preserved with very little or no loss.
2) The ability to achieve compression ratios that are as high as possible for the other portions of the image, leading to significant savings in transmission and storage requirements.
1.4 MAGNETIC RESONANCE IMAGING (MRI)
Magnetic resonance imaging (MRI) is a test that uses a magnetic field and
pulses of radio wave energy to make pictures of organs and structures inside
the body.
In many cases, MRI gives information that cannot be seen on an X-ray, ultrasound, or computed tomography (CT) scan. For an MRI test, the area of the body being studied is placed inside a special machine containing a strong magnet. The MRI scanner detects changes in the normal structure and characteristics of the tissues being examined, and it can also detect tissue damage or disease, such as infection, inflammation, or a tumor.
Information from an MRI can be saved and stored on a computer for
more study. Photographs or films of certain views can also be made.
In some cases, a dye (contrast material) may be used during the MRI to
show pictures of organs or structures more clearly.
MRI can be used to find problems such as bleeding, tumors, infection,
blockage, or injury in the brain, organs and glands, blood vessels, and joints.
In some cases, a contrast material may be used during the MRI scan to
enhance the images of certain structures. The contrast material may help
evaluate blood flow, detect some types of tumors, and locate areas of
inflammation.
While MRI is a safe and valuable test for looking at structures and organs inside the body, it is more expensive than other imaging techniques.
1.5 NEED FOR IMAGE COMPRESSION WITH RESPECT TO MAGNETIC RESONANCE IMAGING (MRI)
Chapter-Two
BASIC CONCEPTS
Because each sub-image is transformed and quantized independently, block-based transform coding suffers from transition artifacts, or a block-like appearance, at the edges of each sub-image. In addition, transform techniques involve many computationally intensive steps during compression and decompression, making software implementations cumbersome for real-time applications. However, dedicated processors have started to appear for performing the transform manipulations quickly.
Variations of the DCT technique attempt to improve its performance and reduce the blocking artifacts. One approach is to perform the DCT on the whole image as a single frame. This alleviates the blocking effect at the expense of compressor complexity and memory requirements. Acceptable compression ratios, from 4:1 for MR images to 20:1 for CR chest images, have been recorded. Adaptive DCT (ADCT) has also been investigated; it uses a variable encoding scheme for the transformed image, allocating more bits to coefficients with higher values, and improves the compression ratio by 25-30%.
Other techniques use a model of the human visual perception system to filter out portions of the image spectrum to which the eye is insensitive. These techniques have achieved very good compression ratios, with the JPEG standard test image compressed to as little as 0.23 bits/pixel.
Most of the neural network based techniques are derived from the work by Cottrell, Munro and Zipser. They developed a multi-layered perceptron neural network with back-propagation as the error learning function. Compression is achieved by means of multi-layered neural networks that have small hidden layers (in comparison to the output layer). Such networks are trained using back-propagation, with the block-sampled original image presented at both the input and the output, in order to obtain a reduced vector representation of each block in the hidden layer. The internal (hidden) representation of the block is the compressed form of the image to be transmitted, since there are fewer hidden-layer units than input or output units. By applying the internal vector to the decompressing network, the original image can be recalled.
This approach uses a form of self-organization for obtaining the
reduced dimension feature space. Neural networks are trained using a set of
sample images, and then the weights obtained through the training phase are
used to compress actual images.
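As a concrete illustration of this bottleneck idea, the following sketch (a minimal example using assumed layer sizes and hyperparameters, not taken from this report) trains a one-hidden-layer autoencoder on flattened 8 x 8 blocks with plain back-propagation; the 16-unit hidden activation serves as the compressed form of each 64-pixel block.

import numpy as np

rng = np.random.default_rng(0)
N, HIDDEN, LR, EPOCHS = 8, 16, 0.05, 500   # assumed sizes and hyperparameters

# Fake training data: rows are flattened 8x8 blocks scaled to [0, 1].
blocks = rng.random((200, N * N))

W1 = rng.normal(0, 0.1, (N * N, HIDDEN))   # encoder weights
W2 = rng.normal(0, 0.1, (HIDDEN, N * N))   # decoder weights
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(EPOCHS):                     # plain back-propagation, input == target
    h = sigmoid(blocks @ W1)                # hidden (compressed) representation
    y = sigmoid(h @ W2)                     # reconstruction
    dy = (y - blocks) * y * (1 - y)         # output-layer delta
    dh = (dy @ W2.T) * h * (1 - h)          # hidden-layer delta
    W2 -= LR * h.T @ dy / len(blocks)
    W1 -= LR * blocks.T @ dh / len(blocks)

code = sigmoid(blocks[:1] @ W1)             # 16 numbers stand in for 64 pixels
print("compressed length:", code.shape[1], "reconstruction MSE:",
      float(((sigmoid(code @ W2) - blocks[:1]) ** 2).mean()))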
Newer neural-network-based adaptive compression methods overcome the traditional drawbacks associated with back-propagation based neural networks, such as the static nature of the network architecture and the unbalanced errors of individual training patterns. One such architecture, termed Dynamic Autoassociative Neural Networks (DANN), has been shown to provide excellent control over the error rates while maintaining user-selectable compression ratios.
Chapter-Three
In medical imaging, very minimal or lossless compression is required, since image integrity is of the utmost importance.
3.3 USE OF A BANK OF NEURAL NETWORKS
This approach yields better compression ratios and image fidelity than compression using conventional DCT techniques, and it is inherently more parallelizable in hardware implementations due to its uniform structure and simple processing requirements.
Figure 3.1:
Figure 3.2:
Chapter-Four
4.2 EPIC IMPROVES UPON DANN-BASED COMPRESSION THROUGH THE FOLLOWING FEATURES:
possibly lower compression ratios. The outline of the training process is given
in Figure 4.1.
Figure 4.1
The training set for the bank of P neural networks is generated by taking a set of training images and partitioning each image into N x N blocks, overlapping by N/2 pixels. The variance of each block is calculated for use in the classification step. The blocks are classified into one of P classes, based on predefined variance thresholds determined empirically through statistical analysis of the characteristics of typical images, and converted into training vectors. A candidate vector belonging to the p-th class is then compared with all existing training vectors in that class to determine whether it exceeds a "similarity threshold" defined to eliminate redundant training vectors lying close together in Euclidean space. The user-controllable "similarity threshold" serves to reduce the size of the training set and avoids over-training on highly similar training vectors, which would reduce the generalization properties of the networks.
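A minimal sketch of this training-set construction follows (illustrative only; the block size, variance thresholds, and similarity threshold are assumed values, since they are not fixed at this point in the report).

import numpy as np

def build_training_set(images, n=8, var_thresholds=(50.0, 500.0), sim_threshold=4.0):
    # Partition images into overlapping N x N blocks, classify them by variance,
    # and keep a vector only if it is not too close to one already in its class.
    num_classes = len(var_thresholds) + 1
    training = [[] for _ in range(num_classes)]         # one list of vectors per class

    for img in images:
        h, w = img.shape
        for r in range(0, h - n + 1, n // 2):           # blocks overlap by N/2 pixels
            for c in range(0, w - n + 1, n // 2):
                block = img[r:r + n, c:c + n].astype(float)
                p = int(np.searchsorted(var_thresholds, block.var()))   # class index
                vec = block.ravel() / 255.0
                # similarity test: skip vectors lying close to an existing one
                if all(np.linalg.norm(vec - v) > sim_threshold for v in training[p]):
                    training[p].append(vec)
    return training

imgs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
sets = build_training_set(imgs)
print("training vectors per class:", [len(s) for s in sets])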
4.5 MODIFIED DANN TRAINING PROCEDURE
The neural network training algorithm used in EPIC is based on the DANN training procedure. Each neural network in the bank of P networks is subjected to DANN training, with some modifications designed to detect pathological conditions and reduce the time spent in unproductive training. Among the improvements are convergence detection primitives, which measure the slope of the error curve to determine whether the training process is stuck (wobbling) or whether the error is increasing as training proceeds. Thresholds are set for the number of iterations the network may remain in these conditions before that particular phase of training is aborted and control is returned to the calling routine. The EPIC training process for each of the P networks can be carried out in parallel, decreasing the time required for learning a given set of images compared to the original DANN training process.
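A minimal sketch of such convergence detection is given below (illustrative only; the window size and thresholds are assumed, and the error values would come from whichever training loop is in use).

from collections import deque

class ConvergenceMonitor:
    # Watch recent training errors and flag "wobbling" (flat slope) or
    # steadily increasing error, with per-condition iteration limits.

    def __init__(self, window=20, flat_slope=1e-5, max_stuck=50, max_rising=10):
        self.errors = deque(maxlen=window)        # recent error values
        self.flat_slope = flat_slope              # |slope| below this counts as stuck
        self.max_stuck, self.max_rising = max_stuck, max_rising
        self.stuck = self.rising = 0

    def update(self, err):
        self.errors.append(err)
        if len(self.errors) < self.errors.maxlen:
            return "ok"                           # not enough history yet
        slope = (self.errors[-1] - self.errors[0]) / (len(self.errors) - 1)
        self.stuck = self.stuck + 1 if abs(slope) < self.flat_slope else 0
        self.rising = self.rising + 1 if slope > 0 else 0
        if self.stuck > self.max_stuck:
            return "abort: wobbling"
        if self.rising > self.max_rising:
            return "abort: increasing error"
        return "ok"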
Chapter-Five
The chain-coded edge image, as well as the compressed patterns, can optionally be compressed further using the LZW-based UNIX compress command to remove any remaining redundancies in the data stream. However, this usually yields no more than a 10% improvement in the compression ratio, and the gain comes only from the chain codes. It is conceivable that this step could be eliminated by using a Huffman coder for the chain codes.
Decompression is relatively straightforward. The compressed non-edge patterns and the edge image are first decompressed using the UNIX uncompress command, as necessary, and the compressed patterns are extracted and fed to the decompressors corresponding to their network class for decompression into the respective image blocks. The block decompression step can be performed in parallel. The decompressed blocks are assembled to form the non-edge image. Finally, the chain-coded edge image is decoded and overlaid onto the non-edge image to give the reconstructed image.
The algorithms for compression and decompression are given in Figures 5.1 and 5.2.
For each image in the given set of images
    Perform edge detection on the image
    Store the edges and their gray values using chain coding
    Partition the image into N x N non-overlapping blocks
    For each block (parallelizable)
        Classify the block into one of the P classes using predefined variance thresholds
        Compress the block using the p-th neural network
        Quantize the output of the second hidden layer
    endfor
    Store the compressed patterns using differential linear predictive coding,
        or fixed-bit-rate quantization for a higher compression ratio
    Optionally compress the edge data using LZW encoding for storage/transmission
endfor

Fig. 5.1: EPIC Image Compression Pseudocode
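As a companion to the edge-coding step in Fig. 5.1, the sketch below (an illustrative interpretation, not this report's code) extracts edge pixels with a simple gradient threshold and records each connected run as a start point plus Freeman chain codes, keeping the pixels' gray values so the edges can be overlaid losslessly at decompression; the detector and threshold are assumptions.

import numpy as np

# 8-connected Freeman chain code directions: E, NE, N, NW, W, SW, S, SE
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def edge_mask(img, threshold=40):
    # Crude edge detection: gradient magnitude above an assumed threshold.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

def chain_code_edges(img, threshold=40):
    # Trace edge pixels into (start, [codes], [gray values]) chains.
    mask = edge_mask(img, threshold)
    visited = np.zeros_like(mask)
    chains = []
    for r, c in zip(*np.nonzero(mask)):
        if visited[r, c]:
            continue
        codes, grays = [], [int(img[r, c])]
        visited[r, c] = True
        cur, moved = (r, c), True
        while moved:                              # greedy walk along unvisited edge pixels
            moved = False
            for code, (dr, dc) in enumerate(DIRS):
                nr, nc = cur[0] + dr, cur[1] + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and mask[nr, nc] and not visited[nr, nc]):
                    visited[nr, nc] = True
                    codes.append(code)
                    grays.append(int(img[nr, nc]))
                    cur, moved = (nr, nc), True
                    break
        chains.append(((r, c), codes, grays))     # start point, directions, gray values
    return chains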
For each compressed image in the given set
    Perform LZW decoding (corresponding to the optional encoding) of the edge-compressed data
    Decode the chain-coded edges and their gray values
    Perform differential linear predictive decoding of the compressed patterns (as needed)
    For each block (parallelizable)
        Decompress the block using the p-th neural network specified in the pattern's class information
        Form the N x N non-edge image block
    endfor
    Overlay the edge pixels on the non-edge image to obtain the reconstructed image
endfor

Fig. 5.2: EPIC Image Decompression Pseudocode
Chapter-Six
1) Edge preservation: Since the edges in the image are coded losslessly, EPIC does not suffer from the drawback of conventional JPEG, which tends to lose high-frequency components (edges) during compression.
Chapter-Seven
BIBLIOGRAPHY