Reliable Communication: Basics of Error Control Coding


Reliable Communication

M R Abidi
Department of Electronics Engg.

In recent years, people all over the world have been fascinated by the pictures and
scientific data relayed from the farthest planets. The radio transmitters on the
spacecraft used have a power of only a few watts, comparable to that of a dim
electric light bulb. These low-power signals are sent through communication
channels that suffer from noise, interference and distortion due to hardware imperfections
or physical limitations. How can this information be reliably transmitted across hundreds
of millions of miles without being completely swamped by noise? Many disciplines, such
as mathematics, computing and electronics engineering, come together to recover these
signals successfully. This is made possible by the use of error control coding, an area
that grew out of the combined efforts of researchers in these fields.
Error control coding is ubiquitous in the modern information-based society. Every compact
disc, CD-ROM and DVD employs codes to protect the data embedded in it. Every hard
disk drive employs error correction coding. Every phone call made over a digital cell
phone and every packet transmitted over the Internet uses some kind of error control
coding.
Error control coding is the branch of mathematics concerned with transmitting data
reliably across noisy channels and retrieving original data from a storage medium. Coding
theory is about making messages easy to read: don't confuse it with cryptography, which
is the art of making messages hard to read!
The field of error control coding emerged from the work of C. E. Shannon (1916-2001).
Prior to Shannon, it was believed that channel noise prevented error-free
communication and that, to achieve higher reliability, it was necessary to increase transmit
power. Shannon disproved this and showed that every channel has a capacity (in bits per
second): as long as the transmission rate is less than the channel capacity, it is possible to
design a reliable communication system (in the sense of the required bit error rate) using
error control coding. After the publication of Shannon's paper, researchers tried to find
codes that would produce a very small probability of error. In the beginning progress
was slow, but in the 1960s the pace of research increased tremendously. The great advances in all
types of electronic communication and storage systems over the past few years are, in
part, due to continuing developments in coding theory. The major application areas
of coding are space and satellite communications, data transmission, data storage, digital
audio/video transmission, mobile communications and file transfer. Thus coding theory
is right at the cutting edge of technology.

Basics of Error Control Coding


The basic idea behind error control coding is to add some kind of redundancy to the
message prior to its transmission over a channel or recording on a storage medium. The
concept of redundancy can be explained by means of the following English sentence
(corrupted by noise):

MATEMATCS IS AN INTRSTNG SUBJACT.


There are a lot of errors in this sentence. Like other languages, English has
a lot of built-in redundancy. Thanks to familiarity with this redundancy, the original
text would be correctly guessed even by schoolchildren as
MATHEMATICS IS AN INTERESTING SUBJECT.
Adding redundancy thus means adding some extra symbols to the message in a certain
known manner. The encoder adds redundant bits (or symbols) to the sender's bit stream
to create a codeword. The decoder, at the receiver side, uses these redundant bits
to detect or correct as many bit errors as the particular error control code allows. By
adding redundancy intelligently, the effect of random noise can be diluted to some extent.
Intuitively, it appears that in order to increase the error-correcting capability of a coding
scheme, more redundant bits must be added to the message. However, increased
redundancy leads to a slower rate of information transmission.

Popular Coding Techniques


There are a variety of techniques present in the literature that may be used for error
control. In this part of the article a few popular coding techniques are briefly discussed.
1. Automatic Repeat Request (ARQ)
In ARQ schemes, error detecting codes are used. Error detecting codes only
detect the presence of errors (the simplest example is the parity
check). Such codes not only need fewer redundant bits than error
correcting codes but also require simpler decoding techniques. Once the presence
of errors is detected, a retransmission from the sender is requested. Erroneously
received codewords are retransmitted until they are received error-free.
These advantages therefore do not come free: one has to pay the price in terms
of lower efficiency and the requirement of a feedback channel.
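As a small sketch of the parity check mentioned above, a single even-parity bit can detect (but not correct) any single-bit error; the function names here are illustrative, not from the article:

```python
def add_parity(bits):
    # Append an even-parity bit so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # An even number of 1s means no error was detected.
    return sum(word) % 2 == 0

codeword = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert parity_ok(codeword)
codeword[2] ^= 1                      # a single bit flipped by the channel
assert not parity_ok(codeword)        # detected: ARQ would now request retransmission
```

Note that two flipped bits would cancel out and go undetected, which is why parity alone suits only low-error channels with a feedback path.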
2. Forward Error Correction (FEC)
If the message must be received correctly at the first attempt, FEC codes are the
better choice. Error correcting codes can be categorized into two classes,
namely block codes and convolutional codes.
BLOCK CODES
A block encoder takes a block of k symbols of a message and replaces it with an
n-symbol codeword (n > k). For a binary code, the total number of codewords is
2^k. Due to channel noise the received word can be any one of the 2^n n-bit words, of
which only 2^k are valid codewords. The decoder finds the codeword that
is closest to the received n-bit word (in terms of the number of places in which
they differ, e.g., 10110 and 10011 differ in 2 places). Even though it is beyond the scope of the
article to discuss the details of the decoder, it is worth giving one simple example of
an error correcting code:
Repetition Code: In a repetition code a codeword is formed by simply repeating each bit 2m+1
times, where m = 1, 2, .... For the (5, 1) repetition code the encoder maps 0 to 00000 and 1 to 11111.
The decoder takes 5 bits at a time and counts the number of 0s. If there are 3 or more, it selects 0,
otherwise 1, as the decoded bit. Suppose the channel has a bit error rate of 10^-2 (on average 1 bit out of
100 is inverted). It can be shown that using the (5, 1) repetition code the bit error rate reduces to
about 9.9 x 10^-6. Thus even the simple repetition code offers an improvement in error
performance, but at the price of a large amount of overhead, namely 400%.
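The figure for the (5, 1) code can be checked directly: a decoding error occurs only when 3 or more of the 5 bits are flipped. A short Python sketch (the function names are ours, not the article's):

```python
from math import comb

def rep_encode(bit, n=5):
    # (n, 1) repetition code: send the same bit n times.
    return [bit] * n

def rep_decode(word):
    # Majority vote: decode as 1 if the 1s outnumber the 0s.
    return 1 if sum(word) > len(word) // 2 else 0

# Majority voting corrects up to 2 flipped bits out of 5.
assert rep_decode([1, 0, 1, 1, 0]) == 1

# Probability of 3, 4 or 5 flips when each bit flips with p = 0.01:
p = 0.01
p_err = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
# p_err is about 9.85e-6, consistent with the ~9.9e-6 figure above
```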

In the last 50 years, many good codes (fewer redundant bits and better error
detecting/correcting capability) have been developed, e.g., Hamming codes,
BCH codes, Reed-Solomon codes, Reed-Muller codes, Golay codes and Cyclic
Redundancy Check codes.
CONVOLUTIONAL CODES
In convolutional encoders, the incoming bit stream is applied to a K-bit shift
register. For each shift of the register, t new bits are inserted and n code bits are
delivered. The most popular method of decoding convolutional codes is the
Viterbi algorithm.
It should be pointed out that recently the difference between block and
convolutional codes has become less and less well defined.
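A minimal sketch of such an encoder, using the common textbook rate-1/2, constraint-length-3 code with generator polynomials (7, 5) in octal (a standard example, not one specified in the article):

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    # Shift each message bit into a K-bit register; for every shift,
    # emit one parity bit per generator polynomial (two here, so rate 1/2).
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

conv_encode([1, 0, 1, 1])   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Unlike a block code, each output pair depends not just on the current bit but on the previous K-1 bits held in the register.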
INTERLEAVING
In wireless applications, errors normally occur in the form of bursts.
Interleaving is one of the most popular ways of correcting burst errors. A block
interleaver is loaded row by row with M codewords, each of length n bits, obtained
by using any block encoder. These codewords are transmitted column-wise. At
the receiver, the codewords are de-interleaved before decoding. For a burst of
length M bits or less, no codeword suffers more than one bit error, and a
decoder designed for the block code used can easily correct this single error.
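The row/column idea can be sketched as follows: a burst of up to M consecutive channel errors lands in M different rows, so each codeword sees at most one error (illustrative values, M = 3 codewords of n = 4 bits):

```python
M, n = 3, 4
codewords = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]]

# Load row by row, transmit column by column.
stream = [codewords[r][c] for c in range(n) for r in range(M)]

# A burst of 3 consecutive errors on the channel.
for i in (3, 4, 5):
    stream[i] ^= 1

# De-interleave back into rows at the receiver.
received = [[stream[c * M + r] for c in range(n)] for r in range(M)]

# The burst is dispersed: each codeword differs from the original
# in at most one position, which a single-error-correcting code fixes.
for orig, rx in zip(codewords, received):
    assert sum(a != b for a, b in zip(orig, rx)) <= 1
```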
CONCATENATION
When two codes are used in series, the combination is called a concatenated code.
There are instances when a single code is unable to correct all the types of errors
introduced during transmission; in such cases concatenated codes may be useful.

Coding in Compact Disk (CD) Player


Let us now discuss the example of the popular CD player to illustrate a successful
implementation of error control coding.
For the digitized audio recording on a CD, the audio signal is first sampled at a rate of
44.1 k samples per second. Each audio sample is uniformly quantized to one of 2^16 levels.
Channel errors come from (1) small unwanted particles, air bubbles and inaccuracies in the
manufacturing process, and (2) fingerprints, scratches or dust particles. To overcome
these kinds of errors, CD manufacturers (e.g. Sony and Philips) use multiple layers of
interleaving and a Reed-Solomon code. Using this scheme of error correction coding, a
burst of length less than 4000 bits (or 2.5 mm on the disk) can be corrected. If an error burst
exceeds the capability of the coding scheme used, the decoder hides unreliable samples
by interpolating between reliable neighbours (maximum interpolatable burst: 12000 bits).
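For a sense of the data volumes involved, the raw audio bit rate follows from the figures above (assuming two stereo channels, which the article does not state explicitly):

```python
sample_rate = 44_100        # samples per second, per channel (from above)
bits_per_sample = 16        # 2**16 quantization levels -> 16 bits
channels = 2                # stereo: an assumption, not stated in the article
raw_rate = sample_rate * bits_per_sample * channels
print(raw_rate)             # 1411200 bits per second before coding overhead
```

Against this stream, a correctable 4000-bit burst corresponds to only a few milliseconds of audio.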

Recent Advances

In the past decade, significant progress has been made in the field of error control
coding. The invention of turbo codes in 1993 revolutionized the area. These codes
provide performance close to the theoretical limit obtained by Shannon. In recent years a
few more new codes that give performance comparable to (or even better than) turbo codes
have also been discovered, e.g. low-density parity-check codes and rateless codes. The distinct
features of these codes have enabled them to be widely proposed or adopted in existing
wireless standards. Further, the discovery of space-time coding significantly increased the
capacity of wireless systems, and these codes are widely applied in broadband
communication systems.
