ITCT Lab Manual 2018-19
FOR
Information Theory and Coding Techniques
References
[1] Bernard Sklar, “Digital Communications: Fundamentals and Applications”, 2nd Ed., Pearson Education.
[2] Shu Lin and Daniel J. Costello, Jr., “Error Control Coding”, 2nd Ed., Pearson.
[3] Todd Moon, “Error Correction Coding: Mathematical Methods and Algorithms”, Wiley Publication.
[4] Khalid Sayood, “Introduction to Data Compression”, Morgan Kaufmann Publishers.
ASSIGNMENT NO. 1
THEORY:
INFORMATION:
The probability of an event denotes the likelihood or certainty of its occurrence. A less probable event is rarer, and so its occurrence carries more information. Thus, if an event of lower probability occurs, it conveys more information than the occurrence of an event of larger probability. If ‘p’ is the probability of occurrence of a message symbol and ‘I’ is the information received from the message, then

Information, I = log2(1/p)

If the base of the logarithm is 2, the unit of information is the bit; for example, a symbol with p = 1/8 carries I = log2(8) = 3 bits.
Conversion of units of information: with base 2 the unit is the bit, with base e the nat (1 nat ≈ 1.443 bits), and with base 10 the Hartley or decit (1 Hartley ≈ 3.322 bits).
Suppose there are m different messages m1, m2, …, mm having probabilities p1, p2, …, pm, and suppose a sequence of L messages is transmitted. If L is very large, then we can say that approximately p1·L of the transmitted messages are m1, p2·L are m2, and so on. The information contributed by each message type is then

I1 = p1·L·log2(1/p1)
I2 = p2·L·log2(1/p2)
...
Im = pm·L·log2(1/pm)

Thus, the total information due to the sequence of L messages is

Itotal = I1 + I2 + ⋯ + Im

Average information, or entropy, = total information / number of messages:

Entropy H(X) = Itotal / L = Σ (i=1..m) pi·log2(1/pi) bits/message
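A minimal Python sketch of this formula, using an assumed example distribution:

import math

def entropy(probs):
    # H = sum of p * log2(1/p) over all message probabilities
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Assumed example: four messages with probabilities 0.5, 0.25, 0.125, 0.125
print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits/message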
Properties of Information:
The important properties of the information conveyed by a message are as follows:
1) The information content of a message increases as its probability decreases. This means that the most unexpected event contains the maximum information.
2) If we are absolutely certain of the outcome of an event even before it occurs, i.e., the probability of the event is one, then no information is gained:
I(sk) = 0 if pk = 1
Discrete Memoryless Channel:
Consider a discrete memoryless channel.
Let X be the input to the channel and Y the output of the channel, Y being the noisy version of X. X and Y are random variables.
x1, x2, …, xm: messages belonging to the input alphabet X
y1, y2, …, yn: symbols belonging to the output alphabet Y
The channel is described by the transition probabilities P(yk | xj), with the fundamental property

Σ (k=1..n) P(yk | xj) = 1 for every j

The joint probability distribution of the random variables X and Y is given by the m×n matrix

P(X,Y) = | P(x1,y1)  P(x1,y2)  …  P(x1,yn) |
         | P(x2,y1)  P(x2,y2)  …  P(x2,yn) |
         |    ⋮          ⋮             ⋮    |
         | P(xm,y1)  P(xm,y2)  …  P(xm,yn) |

The marginal probability of yk is

P(yk) = Σ (j=1..m) P(xj, yk)

This means that the probability of yk is the probability that yk and x1 occur, OR the probability that yk and x2 occur, OR the probability that yk and x3 occur, … up to yk and xm. Since P(xj, yk) = P(yk | xj)·P(xj),

P(yk) = Σ (j=1..m) P(yk | xj)·P(xj)

Similarly, P(xj) = Σ (k=1..n) P(xj, yk).

Marginal entropy of source/input X: H(X) = −Σ (j=1..m) P(xj)·log P(xj)
Marginal entropy of sink/output Y: H(Y) = −Σ (k=1..n) P(yk)·log P(yk)
Joint entropy of input X and output Y: H(X,Y) = −Σ (j=1..m) Σ (k=1..n) P(xj,yk)·log P(xj,yk)

Entropy Interpretation
The mutual information of the channel is

I(X;Y) = Σ (x∈X) Σ (y∈Y) P(X=x, Y=y)·I(x;y)
       = Σ (x∈X) Σ (y∈Y) P(X=x, Y=y)·log [ P(x|y) / P(x) ]
EXAMPLE:
Find out all entropies: H(X), H(Y), H(X,Y), H(X/Y) and H(Y/X).
The probability matrix is

P = | 0.1  0.1  0.2 |
    | 0.1  0.1  0.1 |
    | 0.1  0.1  0.1 |

Solution:
The given matrix is the joint probability matrix P(X,Y); the sum of all its elements is 1. The row sums give P(xj) = {0.4, 0.3, 0.3} and the column sums give P(yk) = {0.3, 0.3, 0.4}.

H(X) = −Σ P(xj)·log P(xj) = 0.4·log2(1/0.4) + 0.3·log2(1/0.3) + 0.3·log2(1/0.3) ≈ 1.571 bits/message

H(X,Y) = −Σ (j=1..m) Σ (k=1..n) P(xj,yk)·log P(xj,yk)
= 0.1·log2(1/0.1) + 0.1·log2(1/0.1) + 0.2·log2(1/0.2) + 0.1·log2(1/0.1) + 0.1·log2(1/0.1)
+ 0.1·log2(1/0.1) + 0.1·log2(1/0.1) + 0.1·log2(1/0.1) + 0.1·log2(1/0.1)
= 3.12 bits/message

Similarly, H(Y) = 0.3·log2(1/0.3) + 0.3·log2(1/0.3) + 0.4·log2(1/0.4) ≈ 1.571 bits/message, and the conditional entropies follow from the chain rule: H(X/Y) = H(X,Y) − H(Y) and H(Y/X) = H(X,Y) − H(X).
ALGORITHM:
1) Read the number of rows m and the number of columns n of the joint probability matrix.
2) Read the individual matrix elements and display them.
3) Find the summation of each row, which gives P(X0), P(X1), and P(X2).
4) Find the summation of each column, which gives P(Y0), P(Y1), and P(Y2).
5) Find H(X) = −Σ (j=1..m) P(xj)·log P(xj).
6) Find H(Y) = −Σ (k=1..n) P(yk)·log P(yk).
7) Find H(X,Y) = −Σ (j=1..m) Σ (k=1..n) P(xj,yk)·log P(xj,yk).
8) Find H(X/Y) = H(X,Y) − H(Y) and H(Y/X) = H(X,Y) − H(X). A sketch implementing these steps is given below.
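A minimal Python sketch of the above algorithm, applied to the joint probability matrix of the worked example; step 8 uses the chain rule for the conditional entropies:

import math

def H(probs):
    # entropy of a list of probabilities, in bits
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# joint probability matrix P(X, Y) from the worked example
P = [[0.1, 0.1, 0.2],
     [0.1, 0.1, 0.1],
     [0.1, 0.1, 0.1]]

Px = [sum(row) for row in P]            # row sums give P(xj)
Py = [sum(col) for col in zip(*P)]      # column sums give P(yk)

Hx  = H(Px)                             # marginal entropy H(X)
Hy  = H(Py)                             # marginal entropy H(Y)
Hxy = H([p for row in P for p in row])  # joint entropy H(X, Y)

print("H(X)   =", round(Hx, 3))         # ~1.571 bits
print("H(Y)   =", round(Hy, 3))         # ~1.571 bits
print("H(X,Y) =", round(Hxy, 3))        # ~3.122 bits
print("H(X/Y) =", round(Hxy - Hy, 3))   # chain rule: H(X,Y) - H(Y)
print("H(Y/X) =", round(Hxy - Hx, 3))   # chain rule: H(X,Y) - H(X)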
CONCLUSION:
ASSIGNMENT NO. 2
OBJECTIVE:
1. To understand the concept of variable-length source coding.
2. To implement algorithms for Huffman and Shannon-Fano encoding.
3. To compute entropy, average length and coding efficiency.
THEORY:
Huffman coding:
It is an entropy encoding algorithm used for lossless data compression. The term refers to
the use of a variable-length code table for encoding a source symbol (such as a character
in a file), where the variable-length code table, also called a code book, has been derived
in a particular way based on the estimated probability of occurrence of each possible
value of the source symbol.
The probability of occurrence is based on the frequency of occurrence of a data item. The
principle is to use fewer bits to encode the data that occurs more frequently.
Codes are stored in a code book, which may be constructed for each block or a set of
blocks. In all cases the code book plus the encoded data must be transmitted to enable
decoding.
The Huffman algorithm is now briefly summarized:
The simplest construction algorithm uses a priority queue where the node with lowest
probability is given highest priority:
1. Create a leaf node for each symbol and add it to the priority queue.
2. While there is more than one node in the queue:
1. Remove the two nodes of highest priority (lowest probability) from the
queue
2. Create a new internal node with these two nodes as children and with
probability equal to the sum of the two nodes' probabilities.
3. Add the new node to the queue.
3. The remaining node is the root node and the tree is complete.
Since efficient priority queue data structures require O(log n) time per insertion, and a
tree with n leaves has 2n−1 nodes, this algorithm operates in O(n log n) time, where n is
the number of symbols.
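A sketch of this construction in Python using the standard heapq module; the source probabilities are taken from the four-symbol example later in this assignment, and the symbol names a1–a4 are illustrative:

import heapq
import itertools

def huffman(probs):
    # Build a Huffman code with a priority queue; the node with the
    # lowest probability has the highest priority.
    counter = itertools.count()          # tie-breaker so heap tuples always compare
    heap = [(p, next(counter), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)  # remove the two lowest-probability nodes
        p2, _, t2 = heapq.heappop(heap)
        # new internal node with the two nodes as children, probability = sum
        heapq.heappush(heap, (p1 + p2, next(counter), (t1, t2)))
    codes = {}
    def walk(tree, prefix):              # assign 0/1 along the two branches
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # a lone symbol still gets a code
    walk(heap[0][2], "")
    return codes

probs = {"a1": 0.4, "a2": 0.35, "a3": 0.2, "a4": 0.05}
codes = huffman(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)
print(codes)                             # code lengths 1, 2, 3, 3 (exact bits may differ)
print("average length:", avg_len)       # 1.85 bits/symbol

For this source the entropy is about 1.74 bits/symbol, so the coding efficiency is roughly 1.74/1.85 ≈ 94%, which ties back to objective 3.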
If the symbols are sorted by probability, there is a linear-time (O(n)) method to create a
Huffman tree using two queues, the first one containing the initial weights (along with
pointers to the associated leaves), and combined weights (along with pointers to the
trees) being put in the back of the second queue. This assures that the lowest weight is
always kept at the front of one of the two queues:
1. Start with as many leaves as there are symbols.
2. Enqueue all leaf nodes into the first queue (by probability in increasing order so
that the least likely item is in the head of the queue).
3. While there is more than one node in the queues:
1. Dequeue the two nodes with the lowest weight by examining the fronts of
both queues.
2. Create a new internal node, with the two just-removed nodes as children
(either node can be either child) and the sum of their weights as the new
weight.
3. Enqueue the new node into the rear of the second queue.
4. The remaining node is the root node; the tree has now been generated.
Although this algorithm may appear "faster" complexity-wise than the previous
algorithm using a priority queue, this is not actually the case because the symbols need to
be sorted by probability before-hand, a process that takes O(n log n) time in itself.
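A sketch of the two-queue method in Python, assuming the input probabilities are already sorted in increasing order:

from collections import deque

def huffman_two_queues(sorted_probs):
    # Linear-time Huffman construction for probabilities sorted in
    # increasing order, using two FIFO queues (see the steps above).
    q1 = deque((p, None) for p in sorted_probs)  # leaves, lowest weight at the front
    q2 = deque()                                 # combined (internal) nodes
    def pop_min():
        # The overall minimum is always at the front of one of the two queues.
        if not q2 or (q1 and q1[0][0] <= q2[0][0]):
            return q1.popleft()
        return q2.popleft()
    while len(q1) + len(q2) > 1:
        a, b = pop_min(), pop_min()              # the two lowest weights
        q2.append((a[0] + b[0], (a, b)))         # new internal node to the rear of q2
    return (q1 or q2)[0]                         # the root of the Huffman tree

root = huffman_two_queues([0.05, 0.2, 0.35, 0.4])
print(root[0])                                   # total probability at the root: ~1.0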
Many variations of Huffman coding exist, some of which use a Huffman-like algorithm,
and others of which find optimal prefix codes.
Example: A source generates 4 different symbols with probability 0.4, 0.35, 0.2, 0.05. A binary
tree is generated from left to right taking the two least probable symbols and putting them together
to form another equivalent symbol having a probability that equals the sum of the two symbols.
The process is repeated until there is just one symbol. The tree can then be read backwards, from
right to left, assigning different bits to different branches. The final Huffman code is:
Symbol Code
a1 0
a2 10
a3 110
a4 111
Example: A source generates 7 different symbols with probability 0.1, 0.05, 0.2, 0.15, 0.15, 0.25,
0.1. Encode using the Shannon-Fano method.
Algorithm:
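A possible sketch of the Shannon-Fano procedure in Python; the split rule shown (dividing the total probability of each group as evenly as possible) is one common variant, and the symbol names s1–s7 are illustrative:

def shannon_fano(symbols):
    # Recursive Shannon-Fano coding.
    # symbols: list of (symbol, probability) pairs, sorted here by probability.
    symbols = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    codes = {}
    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in group)
        # Find the split point that divides the total probability most evenly.
        running, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(2 * running - total)   # |upper half - lower half|
            if diff < best_diff:
                best_i, best_diff = i, diff
        split(group[:best_i], prefix + "0")   # upper group gets a 0
        split(group[best_i:], prefix + "1")   # lower group gets a 1
    split(symbols, "")
    return codes

src = [("s1", 0.25), ("s2", 0.2), ("s3", 0.15), ("s4", 0.15),
       ("s5", 0.1), ("s6", 0.1), ("s7", 0.05)]
print(shannon_fano(src))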
CONCLUSION:
QUESTIONS:
1) What is a prefix code?
2) Explain Kraft inequality?
3) What is the efficiency of any code?
4) What are the steps for the Shannon-Fano encoding mechanism?
5) What is run length encoding?
6) Comment on the efficiency of Shannon-Fano coding method.
7) Explain the steps for Lempel-Ziv algorithm.
8) Compare Lempel-Ziv & Huffman Encoding mechanism.
9) Distinguish between Lossy & Lossless data compression with examples.
ASSIGNMENT NO. 3
THEORY:
Reliable transmission of information over noisy channels requires the use of error
correcting codes which encode input in such a way that errors can be detected and
corrected at the receiving site.
The basic idea behind error correcting codes is the addition of some controlled
redundancy, in the form of extra symbols, to a message prior to its transmission
through a noisy channel. The encoded message, when transmitted, might be corrupted by
noise in the channel. At the receiver, the original message can be recovered from the
corrupted one if the errors are within the limit for which the code has been designed.
A good error correcting code is desired to have the following properties:
1. Error correcting capability in terms of the number of errors that it can rectify.
2. Fast and efficient encoding of the message.
3. Fast and efficient decoding of the received message.
4. Maximum transfer of information bits per unit time (i.e., fewer overheads in terms
of redundancy).
CODE RATE:
The code rate of an (n,k) code is defined as the ratio (k/n) and denotes the fraction of the
codeword that consists of information symbols.
MINIMUM DISTANCE:
The minimum distance of a code is the minimum Hamming distance between any two distinct code words, denoted dmin.
MINIMUM WEIGHT:
The minimum weight of a code is the smallest weight of any non-zero codeword
and is denoted by w.
Constraints on n, k and d:
1) Block length (n):
This parameter gives us the set of q^n vectors from which code words can be chosen.
2) Dimension (k):
Based on some selection logic, we choose some of the q^n available vectors as code words. The number of vectors which can be used as code words is q^k, and k is called the dimension of the code.
3) Hamming distance (d):
The Hamming distance d(u, v) between two code words u and v is the number of positions in which they differ. For example, for two code words u and v = (11001010101) differing in five positions,
d(u, v) = 5
GENERATOR MATRIX:
The generator matrix is a matrix having k rows and n columns, i.e., it is a k×n matrix with
rank k. Since the choice of the basis vectors is not unique, the generator matrix is not
unique for a given linear code. The generator matrix is of the following form:

G = [ Ik : P ] (k×n)

The minimum distance determines the error correcting capability t through

dmin ≥ 2t + 1

where dmin = minimum distance and t = error correction capability.
PARITY CHECK MATRIX:
It is possible to detect a valid code word and such a matrix is called the parity check
matrix denoted by H. For decoding purpose, we consider it’s transpose H T . H T is of the
P
following form, H = .
T
I n −k n( n −k )
There are different formats of transpose of parity check matrix depending upon generator
matrix (G).
P
if, G = I k : Pkn then, H = .
T
I n −k n( n −k )
I n−k
if, G = P : I k kn then, H T = M
P n( n −k )
I n−k
if,
G = PT : I k
k n
then, H = .
T
P k
n( n − k )
ALGORITHM:
1) Input the values of n and k.
2) Input the parity matrix P.
3) Calculate the generator matrix G = [Ik : P].
4) Input the message vector M (i.e., any one combination out of the total possible combinations).
5) Calculate the codeword using X = M · G.
6) In the same way calculate all the code words.
7) Introduce an error in the m-th bit position, changing it either from 1 to 0 or from 0 to 1.
8) Calculate the syndrome S = Y · H^T.
9) Compare the error pattern with the corresponding syndrome.
10) Evaluate the corrected code word Xc.
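A minimal Python sketch of this algorithm for an assumed (7,4) parity matrix P; any P whose parity check matrix has distinct non-zero columns gives a single-error-correcting code, and the matrix below is illustrative, not necessarily the one used in the lab:

# Illustrative (7,4) code: G = [Ik : P] with an assumed parity matrix P
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, r = 4, 3                        # message bits, parity bits; n = k + r = 7

def encode(m):
    # Codeword X = M * G = [message : parity], arithmetic mod 2.
    parity = [sum(m[i] * P[i][j] for i in range(k)) % 2 for j in range(r)]
    return m + parity

def syndrome(y):
    # S = Y * H^T, with H^T = [P ; I_r] stacked row-wise.
    Ht = P + [[int(i == j) for j in range(r)] for i in range(r)]
    return [sum(y[i] * Ht[i][j] for i in range(7)) % 2 for j in range(r)]

# Syndrome table: syndrome of each single-bit error pattern -> error position
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[tuple(syndrome(e))] = pos

x = encode([1, 0, 1, 1])
y = x[:]; y[2] ^= 1                # introduce an error in bit position 2
s = tuple(syndrome(y))
if any(s):
    y[table[s]] ^= 1               # correct the position flagged by the syndrome
print(x, y, x == y)                # the corrected word matches the original codeword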
CONCLUSION:
QUESTIONS:
1. What do you mean by Hamming distance?
2. How do you obtain the minimum Hamming distance?
3. How do you obtain the error detecting capability?
4. How do you obtain the error correcting capability?
ASSIGNMENT NO. 4
OBJECTIVE:
1. To implement the systematic encoding of Cyclic Code.
2. To implement the systematic decoding of Cyclic Code.
THEORY:
Any message can be represented as a polynomial over GF(2); for example, the message 1111 corresponds to 1 + x + x² + x³. Polynomial division gives a(x) = q(x)·b(x) + r(x), where q(x) is the quotient and r(x) the remainder. For example, dividing a(x) = x³ + x + 1 by b(x) = x² + x + 1 (remembering that over GF(2) subtraction is the same as addition, i.e., XOR of coefficients):

                 x + 1         ← q(x)
  x² + x + 1 ) x³ + x + 1      ← a(x)
                 x³ + x² + x
                 ------------
                 x² + 1
                 x² + x + 1
                 ------------
                 x             ← r(x)

Hence a(x) = (x + 1)·b(x) + x, i.e.,

a(x) = q(x)·b(x) + r(x)
Problem:
Obtain the systematic (7,4) Cyclic code for the generator polynomial g(x) = 1 + x + x³.
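A minimal Python sketch of systematic cyclic encoding by polynomial division, using g(x) = 1 + x + x³ from the problem; polynomials are represented as bit lists, lowest degree first:

def poly_mod(dividend, divisor):
    # Remainder of GF(2) polynomial division; polynomials are bit lists,
    # lowest degree first (e.g. g(x) = 1 + x + x^3 -> [1, 1, 0, 1]).
    rem = dividend[:]
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:                       # cancel the current leading term
            for j, gj in enumerate(divisor):
                rem[i - len(divisor) + 1 + j] ^= gj
    return rem[:len(divisor) - 1]        # remainder has degree < deg g

def cyclic_encode(msg, g, n):
    # Systematic encoding: c(x) = r(x) + x^(n-k) * m(x),
    # where r(x) = x^(n-k) * m(x) mod g(x).
    k = n - (len(g) - 1)
    shifted = [0] * (n - k) + msg        # x^(n-k) * m(x)
    parity = poly_mod(shifted, g)
    return parity + msg                  # low-order: parity bits, high-order: message

g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3
for msg in ([1, 0, 0, 0], [1, 0, 1, 1]):
    print(msg, "->", cyclic_encode(msg, g, 7))
# e.g. message 1011 encodes to 1001011 (parity 100 followed by the message)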
CONCLUSION:
QUESTIONS:
1. What are the important properties of cyclic codes?
2. What are the properties of the syndrome table?
3. Why are cyclic codes more suitable for burst errors?
4. Draw a circuit implementation of a cyclic code (both encoding and decoding).
ASSIGNMENT NO. 5
OBJECTIVE:
1. To implement the encoding of a Convolutional Code by
a. Code Tree.
b. Code Trellis.
THEORY:
Convolutional Codes
In block coding, the encoder accepts a k-bit message block and generates an n-bit code
word, i.e., code words are produced on a block-by-block basis. As far as serial data is
concerned, provision must be made in the encoder to buffer an entire message block
before generating the associated code word. This buffering introduces delay, which is
undesirable when data/message bits come in serially. In such situations the use of
convolutional coding may be the preferred method.
Information frame
Smaller blocks of uncoded data of length k0 are used for encoding; these are called
information frames.
Convolutional coding implies that encoders have memory, which retains the previous
m incoming information frames. The codes that are obtained in this fashion are called
tree codes. An important subclass of tree codes, used frequently in practice, is called
convolutional codes.
Tree codes and trellis codes
We assume that we have an infinitely long stream of incoming symbols. This stream of
symbols is first broken up into segments of k0 symbols; each segment is called an
information frame.
The encoder consists of two parts:
(i) memory, basically a shift register;
(ii) a logic circuit.
The wordlength of a shift register encoder is defined as

k = (m + 1)·k0

The blocklength of a shift register encoder is defined as

n = (m + 1)·n0 = (n0/k0)·k

Code rate:

R = k0/n0 = k/n
Convolutional code:
A (n0, k0) tree code that is linear, time invariant, and has finite wordlength
k = (m + 1)·k0 is called an (n, k) convolutional code.
For the encoder considered here, the code rate is

R = k/n = 1/2

The encoder operates on the incoming message sequence one bit at a time, and the
resulting code is nonsystematic.
Each path connecting the output to the input of a convolutional encoder may be
characterized in terms of its impulse response, defined as the response of that path to a
symbol 1 applied to its input, with each flip-flop in the encoder set initially in the zero
state.
Equivalently, we may characterize each path in terms of a generator polynomial, defined as
the unit-delay transform of the impulse response.
Let the generator sequence (g0(i), g1(i), g2(i), …, gM(i)) denote the impulse response of the i-th path.
Working:
The convolutional encoder of the figure above has two paths, numbered 1 and 2. The
impulse response of path 1 (i.e., the upper path) is (1, 1, 1). Hence, the corresponding
generator polynomial is given by

g1(D) = 1 + D + D²

Similarly, the impulse response of path 2 (the lower path) is (1, 0, 1), so

g2(D) = 1 + D²

For the message sequence (10011), say, we have the polynomial representation

m(D) = 1 + D³ + D⁴

As with the Fourier transform, convolution in the time domain is transformed
into multiplication in the D-domain.
Hence, the output polynomial of path 1 is given by

c(1)(D) = g1(D)·m(D) = (1 + D + D²)(1 + D³ + D⁴) = 1 + D + D² + D³ + D⁶
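A minimal Python sketch of this rate-1/2 encoder, which reproduces the path-1 output computed above; appending two flush bits to clear the registers is an assumption about the framing:

def conv_encode(msg):
    # Rate-1/2 convolutional encoder with generator sequences
    # g1 = (1,1,1) and g2 = (1,0,1), i.e. g1(D) = 1 + D + D^2, g2(D) = 1 + D^2.
    bits = list(msg) + [0, 0]          # pad with m = 2 zeros to flush the registers
    s1 = s2 = 0                        # shift register: previous two input bits
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]   # path 1: 1 + D + D^2, path 2: 1 + D^2
        s1, s2 = b, s1
    return out

# Message sequence (1 0 0 1 1), i.e. m(D) = 1 + D^3 + D^4
c = conv_encode([1, 0, 0, 1, 1])
print(c[0::2])   # path-1 bits: 1 1 1 1 0 0 1 = c(1)(D) = 1 + D + D^2 + D^3 + D^6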
TRELLIS CODE:-
The tree code can be collapsed into a new form called a TRELLIS.
Tree diagrams are messy, so the trellis is generally preferred over both the tree and the
state diagram because it represents the linear time sequencing of events. To produce the
trellis diagram, advantage is taken of the fact that the tree structure repeats itself after
‘K’ branches, that is, it is periodic with period ‘K’. The x-axis is discrete time and all
possible states are shown on the y-axis. The trellis moves horizontally with the passage
of time; each transition means a new bit has arrived.
Each state is connected to the next state by the allowable code words for that state. There
are only two choices possible at each state, determined by the arrival of either a ‘0’ bit
or a ‘1’ bit. The arrows show the input bit, and the output bits are shown in
parentheses. As with both the state and tree diagrams, the trellis can be drawn for as
many periods as desired; each period repeats the possible transitions. One time-interval
section of a fully formed encoding trellis completely defines the code, and further sections
can be drawn to view a code symbol sequence as a function of time.
Steps for Code Tree Implementation:
1. The tree becomes repetitive after the 3rd branch. Beyond the 3rd branch, the two
nodes with the same label are identical nodes.
2. The encoder has memory M = K − 1 = 2 message bits. Hence, when the third message
bit enters the encoder, the 1st message bit is shifted out of the register.
3. In the code tree, if there is a '1' in the input sequence, proceed downward (shown by
the dotted line) and note down the code written on that line.
4. If there is a '0' in the input sequence, go upward (shown by the solid line) and note
down the code written on that line.
5. Thus trace the code tree up to the level equal to the number of bits in the input
sequence to get the corresponding output sequence.
Steps for Code Trellis Implementation:
1. If there is a '0' in k0, trace upward (i.e., the solid line) and note down the code
written above the line.
2. If there is a '1' in k0, trace downward (i.e., the dotted line) and note down the code
written above the line.
Thus for k0 = 1 1 0 1 0 0 0
we get n0 = 11 01 01 00 10 11 00
VITERBI’S ALGORITHM:
Let the received signal be ‘y’. Viterbi decoding operates continuously on the input data.
Let 1 and 0 have the same transmission error probability. The metric is then the
discrepancy (Hamming distance) between the received signal ‘y’ and the decoded signal
at a particular node. This metric can be accumulated over several nodes along a
particular path.
In this method the received code is compared with the trellis diagram. All paths through
the trellis are checked against the received code and the respective metrics are written
down.
If two paths arriving at a node have the same metric, then only one of them is continued;
otherwise the path having the lowest metric is chosen. A broken (dotted) branch indicates
message bit m = 1, and a continuous (solid) branch indicates m = 0. Because only the most
likely path between nodes is retained, this method of decoding is called maximum
likelihood decoding.
In this decoding, the number of surviving paths = 2^(K−1),
where
K = constraint length.
If the number of message bits to be decoded is very large, then the storage requirement is
also large, since the decoder has to store multiple paths. To avoid this, the metric
divergence effect is used: paths are truncated after a fixed trace-back length.
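A minimal Python sketch of hard-decision Viterbi decoding; it assumes the rate-1/2 encoder with g1 = (1,1,1) and g2 = (1,0,1) used earlier in this assignment (the encoder of the problem's figure is not reproduced here, so that connection is an assumption). The received sequence is the one given in the problem below:

def viterbi_decode(received, n_msg):
    # Hard-decision Viterbi decoding for the rate-1/2 encoder with
    # g1 = (1,1,1), g2 = (1,0,1). State = the two-bit shift register contents.
    def branch(state, b):                      # expected outputs and next state
        s1, s2 = state
        return (b ^ s1 ^ s2, b ^ s2), (b, s1)
    INF = float("inf")
    metric = {(0, 0): 0}                       # accumulated path metric per state
    paths = {(0, 0): []}                       # surviving message bits per state
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):                   # branches for input bit 0 and 1
                (o1, o2), nxt = branch(state, b)
                d = m + (o1 != r[0]) + (o2 != r[1])   # Hamming discrepancy
                if d < new_metric.get(nxt, INF):      # keep only the better path
                    new_metric[nxt], new_paths[nxt] = d, paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)         # final state with the least metric
    return paths[best][:n_msg]

rx = [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]       # received sequence from the problem
print(viterbi_decode(rx, 5))                   # -> [1, 0, 0, 1, 1]; with the two
                                               # flush bits the decoded path is 1001100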
PROBLEM:
For the convolutional encoder shown, draw the trellis diagram and, using the Viterbi
algorithm, decode the sequence 1 1 1 0 1 1 1 1 0 1 0 1 1 1.
Fig. 6.1: Convolutional Encoder
GIVEN SEQUENCE: 01000100
The path with the least metric is the 1st path, with metric 4. Hence it is the surviving
path and it gives the message bit output. Hence the output is 1001100.
TRELLIS DIAGRAM
CONCLUSION:
QUESTIONS:
1. Write the polynomial description of a convolutional code.
2. For a convolutional code, how do you obtain dfree?
3. With a neat example, explain encoding schemes for convolutional codes. Draw the circuit, state
table, state diagram and trellis diagram.
4. Explain the sequential decoding scheme for convolutional codes.
5. Explain how decoding of convolutional codes can be achieved by Viterbi decoding.
6. Explain the concept of trace-back length.
ASSIGNMENT NO. 6
OBJECTIVE:
To implement the encoding and decoding of BCH Code.
THEORY:
In coding theory, the BCH codes form a class of parameterized error correcting codes. BCH
codes were invented in 1959 by Hocquenghem, and independently in 1960 by Bose and
Ray-Chaudhuri. The acronym BCH comprises the initials of these inventors' names.
The principal advantage of BCH codes is the ease with which they can be decoded, via an
elegant algebraic method known as syndrome decoding. This allows very simple
electronic hardware to perform the task, obviating the need for a computer, and meaning
that a decoding device may be made small and low-powered. As a class of codes, they
are also highly flexible, allowing control over block length and acceptable error
thresholds, meaning that a custom code can be designed to a given specification (subject
to mathematical constraints). Reed–Solomon codes, which are BCH codes, are used in
applications such as satellite communications, compact disc players, DVDs, disk drives,
and two-dimensional bar codes.
In technical terms, a BCH code is a multilevel cyclic variable-length digital error-
correcting code used to correct multiple random error patterns. BCH codes may also be
used with multilevel phase-shift keying whenever the number of levels is a prime
number or a power of a prime number. A BCH code in 11 levels has been used to
represent the 10 decimal digits plus a sign digit.
Construction:
A BCH code is a polynomial code over a finite field with a particularly chosen generator
polynomial. It is also a cyclic code.
Fix a finite field GF(q^m), where q is a prime power. Also fix positive integers n and d such that
n = q^m − 1 and 2 ≤ d ≤ n. We will construct a polynomial code over GF(q) with code length
n, whose minimum Hamming distance is at least d. What remains to be specified is the
generator polynomial of this code.
Let α be a primitive n-th root of unity in GF(q^m). For all i, let mi(x) be the minimal
polynomial of α^i with coefficients in GF(q). The generator polynomial of the BCH code is
defined as the least common multiple g(x) = lcm(m1(x), ⋯, m(d−1)(x)).
Example:
Let q = 2 and m = 4 (therefore n = 15). We will consider different values of d. There is a
primitive root α ∈ GF(16) satisfying

α⁴ + α + 1 = 0        (1)

its minimal polynomial over GF(2) is m1(x) = x⁴ + x + 1.
Note that in GF(2⁴) the equation (a + b)² = a² + 2ab + b² = a² + b² holds (since 2ab = 0), and
therefore m1(α²) = m1(α)² = 0. Thus α² is a root of m1(x), and therefore
m2(x) = m1(x) = x⁴ + x + 1.
To compute m3(x), notice that, by repeated application of (1), we have the following linear
relations:
1 = 0𝛼 3 + 0𝛼 2 + 0𝛼 + 1
𝛼 3 = 1𝛼 3 + 0𝛼 2 + 0𝛼 + 0
𝛼 6 = 1𝛼 3 + 1𝛼 2 + 0𝛼 + 0
𝛼 9 = 1𝛼 3 + 0𝛼 2 + 1𝛼 + 0
𝛼12 = 1𝛼 3 + 1𝛼 2 + 1𝛼 + 1
Five right-hand sides of length four must be linearly dependent, and indeed we find the
linear dependency α¹² + α⁹ + α⁶ + α³ + 1 = 0. Since there is no smaller-degree dependency,
the minimal polynomial of α³ is m3(x) = x⁴ + x³ + x² + x + 1. Continuing in a similar
manner, we find
𝑚4 (𝑥) = 𝑚2 (𝑥) = 𝑚1 (𝑥) = 𝑥 4 + 𝑥 + 1
𝑚5 (𝑥) = 𝑥 2 + 𝑥 + 1
𝑚6 (𝑥) = 𝑚3 (𝑥) = 𝑥 4 + 𝑥 3 + 𝑥 2 + 𝑥 + 1
𝑚7 (𝑥) = 𝑥 4 + 𝑥 3 + 1
The BCH code with d = 2, 3 has generator polynomial
𝑔(𝑥) = 𝑚1 (𝑥) = 𝑥 4 + 𝑥 + 1
It has minimal Hamming distance at least 3 and corrects up to 1 error. Since the generator
polynomial is of degree 4, this code has 11 data bits and 4 checksum bits.
The BCH code with d = 4, 5 has generator polynomial
𝑔(𝑥) = 𝑙𝑐𝑚(𝑚1 (𝑥), 𝑚3 (𝑥)) = (𝑥 4 + 𝑥 + 1)(𝑥 4 + 𝑥 3 + 𝑥 2 + 𝑥 + 1) = 𝑥 8 + 𝑥 7 + 𝑥 6 + 𝑥 4 + 1
It has minimal Hamming distance at least 5 and corrects up to 2 errors. Since the
generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits.
The BCH code with d = 6, 7 has generator polynomial
𝑔(𝑥) = 𝑙𝑐𝑚(𝑚1 (𝑥), 𝑚3 (𝑥), 𝑚5 (𝑥))
= (𝑥 4 + 𝑥 + 1)(𝑥 4 + 𝑥 3 + 𝑥 2 + 𝑥 + 1)(𝑥 2 + 𝑥 + 1)
= 𝑥10 + 𝑥 8 + 𝑥 5 + 𝑥 4 + 𝑥 2 + 𝑥 + 1
It has minimal Hamming distance at least 7 and corrects up to 3 errors. This code has 5
data bits and 10 checksum bits.
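A minimal Python sketch that builds GF(16) from relation (1), α⁴ = α + 1, and verifies the minimal polynomials and generator polynomials derived above:

# GF(16) built from the primitive polynomial x^4 + x + 1 (alpha^4 = alpha + 1).
# Elements are 4-bit integers; bit i is the coefficient of alpha^i.
exp = [1]                            # exp[k] = alpha^k as a bit pattern
for _ in range(14):
    v = exp[-1] << 1                 # multiply by alpha
    if v & 0b10000:                  # reduce using alpha^4 = alpha + 1
        v ^= 0b10011
    exp.append(v)

def poly_mul(a, b):
    # Multiply GF(2) polynomials given as bit lists, lowest degree first.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def eval_at_alpha_power(poly, k):
    # Evaluate a GF(2) polynomial at alpha^k in GF(16).
    r = 0
    for i, c in enumerate(poly):
        if c:
            r ^= exp[(i * k) % 15]
    return r

m1 = [1, 1, 0, 0, 1]                 # m1(x) = x^4 + x + 1
m3 = [1, 1, 1, 1, 1]                 # m3(x) = x^4 + x^3 + x^2 + x + 1
m5 = [1, 1, 1]                       # m5(x) = x^2 + x + 1

# alpha, alpha^3, alpha^5 are roots of their minimal polynomials:
print([eval_at_alpha_power(m, k) for m, k in ((m1, 1), (m3, 3), (m5, 5))])  # [0, 0, 0]

# Generator of the double-error-correcting (15, 7) code: lcm(m1, m3) = m1 * m3
print(poly_mul(m1, m3))              # x^8 + x^7 + x^6 + x^4 + 1
# Generator of the triple-error-correcting (15, 5) code: m1 * m3 * m5
print(poly_mul(poly_mul(m1, m3), m5))  # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1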
Decoding:
There are many algorithms for decoding BCH codes. The most common ones follow this
general outline:
1. Calculate the syndrome values for the received vector
2. Calculate the error locator polynomials
3. Calculate the roots of this polynomial to get error location positions.
4. Calculate the error values at these error locations.
Calculate the syndromes
The received vector R is the sum of the correct codeword C and an unknown error vector
E. The syndrome values are formed by considering R as a polynomial and evaluating it at
the roots α^c, α^(c+1), …, α^(c+d−2) of g(x). Thus the syndromes are

sj = R(α^(c+j−1)) = C(α^(c+j−1)) + E(α^(c+j−1))

for j = 1 to d − 1. Since the α^(c+j−1) are zeros of g(x), of which C(x) is a multiple,
C(α^(c+j−1)) = 0. Examining the syndrome values thus isolates the error vector, so we can
begin to solve for it.
If there is no error, sj = 0 for all j. If the syndromes are all zero, then the decoding is done.
Calculate the error location polynomial
If there are nonzero syndromes, then there are errors. The decoder needs to figure out
how many errors and the location of those errors.
If there is a single error, write the error polynomial as E(x) = e·x^i, where i is the location
of the error and e is its magnitude. Then the first two syndromes are

s1 = E(α^c) = e·α^(ic)
s2 = E(α^(c+1)) = e·α^(i(c+1)) = s1·α^i

so together they allow us to calculate e and provide some information about i (completely
determining it in the case of Reed-Solomon codes).
If there are two or more errors, it is not immediately obvious how to begin solving the
resulting syndromes for the unknowns ek and ik. The Peterson procedure assumes a
number of errors t and sets up, from the syndromes, a system of linear equations in the
coefficients Λ1, …, Λt of the error locator polynomial:
• If the determinant of the syndrome matrix is non-zero, then we can find the inverse of
this matrix and solve for the values of the unknown Λ values.
• If the determinant is zero, then:
  if t = 0,
    declare an empty error locator polynomial and
    stop the Peterson procedure;
  end
  otherwise set t = t − 1 and
  continue from the beginning of Peterson's decoding.
• After you have the values of Λ, you have the error locator polynomial.
• Stop the Peterson procedure.
Problem:
CONCLUSION:
QUESTIONS:
1. Obtain the elements of the field whose primitive polynomial is x³ + x + 1.
2. What do you mean by burst errors?
3. Write a decoding scheme to correct burst errors.
ASSIGNMENT NO. 7
THEORY:
Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query, is an error-control
method for data transmission that uses acknowledgements (messages sent by the receiver
indicating that it has correctly received a data frame or packet) and timeouts
(specified periods of time allowed to elapse before an acknowledgement is expected) to
achieve reliable data transmission over an unreliable service.
If the sender does not receive an acknowledgement before the timeout, it usually re-transmits the
frame/packet until it either receives an acknowledgement or exceeds a predefined number of re-
transmissions.
The types of ARQ protocols include Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat
ARQ/Selective Reject. All three protocols usually use some form of sliding window protocol to tell
the transmitter which (if any) packets need to be retransmitted. These protocols
reside in the Data Link or Transport Layers of the OSI model.
A number of patents exist for the use of ARQ in live video contribution environments. In these
high-throughput environments, negative acknowledgements are used to drive down overheads.
ALGORITHM:
Step 1. Start the program.
Step 7. If an acknowledgement is not received for a particular frame, retransmit that frame
alone again.
Step 8. Repeat steps 5 to 7 till the number of remaining frames to be sent becomes zero.
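A toy Python simulation of the stop-and-wait flavour of ARQ; the frame names, loss probability and retry limit are illustrative assumptions:

import random

def stop_and_wait(frames, loss_prob=0.3, max_retries=10, seed=1):
    # Toy stop-and-wait ARQ: each frame is re-sent until its ACK arrives.
    # A single random draw models loss of either the frame or its acknowledgement.
    rng = random.Random(seed)
    transmissions = 0
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            transmissions += 1
            if rng.random() > loss_prob:       # frame and its ACK both got through
                print("frame", seq, frame, "ACKed after", attempt + 1, "transmission(s)")
                break
            print("frame", seq, ": timeout, retransmitting")
        else:
            raise RuntimeError("frame exceeded the retry limit")
    return transmissions

total = stop_and_wait(["F0", "F1", "F2", "F3"])
print("total transmissions:", total)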
CONCLUSION:
ASSIGNMENT NO. 8
OBJECTIVE:
To study various components involved in the communication network.
THEORY:
Introduction:
A computer network is a group of two or more computers that connect with each other to
share resources.
Sharing of devices and resources is the purpose of a computer network. You can share
printers, fax machines, scanners, network connections, local drives, copiers and other
resources.
In computer network technology, there are several types of networks that range
from simple to complex.
However, in any case, in order to connect computers with each other, to an existing
network, or when planning an installation from scratch, the required devices and rules (protocols)
are mostly the same.
A computer network requires the following devices (some of them are optional):
Repeater
Hub
Switch
Bridge
Router
Gateway
Repeater: A repeater operates at the physical layer. Its job is to regenerate the signal over the
same network before the signal becomes too weak or corrupted so as to extend the length to which
the signal can be transmitted over the same network. An important point to be noted about
repeaters is that they do not amplify the signal.
When the signal becomes weak, they copy the signal bit by bit and regenerate it at the original
strength. It is a 2 port device.
Hub: A hub is basically a multiport repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects different stations.
Hubs cannot filter data, so data packets are sent to all connected devices. In other words, collision
domain of all hosts connected through Hub remains one. Also, they do not have intelligence to
find out best path for data packets which leads to inefficiencies and wastage.
Switch: A switch is a multiport bridge with a buffer and a design that can boost its efficiency
(a large number of ports implies less traffic) and performance. A switch is a data link layer device.
A switch can perform error checking before forwarding data, which makes it very efficient, as it does not
forward packets that have errors and forwards good packets selectively to the correct port only. In
other words, a switch divides the collision domain of hosts, but the broadcast domain remains the same.
Bridge: A bridge operates at the data link layer. A bridge is a repeater with the add-on functionality of
filtering content by reading the MAC addresses of source and destination. It is also used for
interconnecting two LANs working on the same protocol. It has a single input and single output
port, thus making it a 2 port device.
Router: A router is a device like a switch that routes data packets based on their IP addresses.
A router is mainly a Network Layer device. Routers normally connect LANs and WANs together
and have a dynamically updating routing table based on which they make decisions on routing the
data packets. Routers divide the broadcast domains of hosts connected through them.
Gateway: A gateway, as the name suggests, is a passage to connect two networks together that
may work upon different networking models. Gateways basically work as messenger agents that
take data from one system, interpret it, and transfer it to another system. Gateways are also called
protocol converters and can operate at any network layer. Gateways are generally more complex
than a switch or router.
A local area network (LAN) is a network of devices that connect with each other within
the scope of a home, school, laboratory, or office.
Usually, a LAN comprises computers and peripheral devices linked to a local domain
server. All network appliances can use shared printers or disk storage. A local area
network can serve many hundreds of users.
Typically, a LAN includes many wires and cables that demand a previously designed
network diagram. Such diagrams are used by IT professionals to visually document the LAN's
physical structure and arrangement.
The Network Logical Structure Diagram is designed to show the logical organization of a network.
It shows the basic network components and the network structure, and determines the interaction of all
network devices. The diagram displays basic devices and zones: Internet, DMZ, LAN, and group.
It clarifies what network equipment is connected, describes the major nodes in the network, and gives an
understanding of the logical structure of the network as well as the type of interaction within the
network.
Fig: Network Components
CONCLUSION: