
Digital Communication

Chapter 3
Information Theory and Coding
Information Source
•Information refers to any new knowledge about something.
•The role of a communication-system designer is to make sure that the information is
transmitted to the receiver correctly.
•An information source conducts random experiments producing some events.
•The outcome of an event is picked from a sample space (the set of possible
outcomes) of the experiment.
•The sample space is also called the source alphabet.
•Each element of the source alphabet is called a symbol, and each symbol has some
probability of occurrence.
Discrete Memoryless Source (DMS)
• A source is discrete if the number of symbols present in the alphabet is
discrete/finite.
• A source is memoryless if the occurrence of one event is independent of the
occurrence of previous events.
Ex 1: Binary symmetric source:
• It is described by the alphabet S = {s1, s2}. In this source, there are two symbols,
s1 and s2.

• The probability of occurrence of s1 is p and that of s2 is 1 - p.


Ex 2. QPSK Transmitter
• It transmits one of 4 symbols in each symbol duration.

• Here the source alphabet is S = {s1, s2, s3, s4}.

• Corresponding probabilities: P(s1) = P(s2) = P(s3) = P(s4) = 1/4 (equiprobable symbols).


Measure of Information
•Information is a measure of uncertainty of an outcome

•The lower the probability of occurrence of an outcome, the higher the
information contained in it.

•Let a DMS have source alphabet S = {s1, s2, ..., sK}.

•The probability of the i-th symbol si is P(si) = pi.

•Then, the information contained in si is I(si) = log2(1/pi) bits.


Properties of Information

•For pi = 1 (a certain event), I(si) = 0 (the symbol contains no information).

•For pi = 0 (nonexistence of the event), I(si) tends to infinity (it would contain infinite information).

•If pi > pj, then I(si) < I(sj) (the higher the probability, the lower the information).


Ex: A DMS has source alphabet S = {s1, s2, s3, s4} with P(s1) = 0.5, P(s2) = 0.25, and
P(s3) = P(s4) = 0.125. Find the information contained in each symbol.

Ans:
• I(s1) = log2(1/0.5) = 1 bit

• I(s2) = log2(1/0.25) = 2 bits

• I(s3) = log2(1/0.125) = 3 bits

• I(s4) = log2(1/0.125) = 3 bits
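The same calculation can be checked with a short Python sketch; the symbol probabilities below are the ones reconstructed above from the worked answer, not values quoted directly in the notes.

```python
from math import log2

# Assumed symbol probabilities, reconstructed from the worked example above.
probs = {"s1": 0.5, "s2": 0.25, "s3": 0.125, "s4": 0.125}

for symbol, p in probs.items():
    info = log2(1 / p)              # I(s_i) = log2(1/p_i) in bits
    print(f"I({symbol}) = {info:g} bits")
```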
Average Information (Entropy)
•When an information source is transmitting a long sequence of symbols,
the average information transmitted from the source is of more interest than the
information contained in individual symbols.

•The average information is called the entropy of the source, given by:

H(S) = Σ pi log2(1/pi) bits/symbol  (sum over all K symbols)
Ex: Find the entropy of the DMS of the previous problem.

Ans:

H(S) = Σ pi log2(1/pi)

= 0.5 × 1 + 0.25 × 2 + 0.125 × 3 + 0.125 × 3

= 1.75 bits/symbol

•The average information contained in one symbol transmitted by the source is 1.75 bits.

•Range of H(S): 0 ≤ H(S) ≤ log2(K)

K: number of symbols in the source alphabet

For pi = 1/K (all symbols equiprobable), H(S) = log2(K), its maximum value.
Information Rate (R)
•If the source S is emitting r symbols per second, then the information rate of the source is
given by

R = r H(S)  (Unit: bits/sec)

i.e. the source S is emitting R bits/sec.

Ex: Find the information rate of the previous problem if the source emits
1000 symbols/sec.

Ans:

Information rate: R = r H(S) = 1000 × 1.75 = 1750 bits/sec = 1.75 kbps
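A minimal sketch of the entropy and information-rate calculation, again assuming the reconstructed probabilities 0.5, 0.25, 0.125, 0.125 and the stated rate of 1000 symbols/sec:

```python
from math import log2

probs = [0.5, 0.25, 0.125, 0.125]   # assumed symbol probabilities (see above)
r = 1000                            # symbol rate in symbols/second

H = sum(p * log2(1 / p) for p in probs)   # entropy H(S) in bits/symbol
R = r * H                                 # information rate in bits/second

print(f"H(S) = {H} bits/symbol")          # 1.75
print(f"R    = {R} bits/sec")             # 1750.0, i.e. 1.75 kbps
```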


Entropy Function of a binary memoryless source:
• Let S be a binary memoryless source with two symbols, s1 and s2, where P(s1) = p and P(s2) = 1 - p.
• Its entropy is H(S) = -p log2(p) - (1 - p) log2(1 - p), often written H(p).
• H(S) attains its maximum value of 1 bit/symbol when all symbols are equiprobable, i.e. p = 1/2.
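A small sketch of the binary entropy function; the sample points are arbitrary and only meant to show that the curve peaks at p = 1/2:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """H(p) = -p*log2(p) - (1 - p)*log2(1 - p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"p = {p:<4}: H = {binary_entropy(p):.3f} bits/symbol")
# The maximum, 1 bit/symbol, occurs at p = 0.5 (equiprobable symbols).
```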
Discrete Memoryless Channel (DMC)
• The channel is a path through which the information flows from transmitter
to receiver.

• In each signaling interval, the channel accepts an input symbol from the source
alphabet (X).
• In response, it generates a symbol from the destination alphabet (Y).

• If the numbers of symbols present in X and Y are finite, it is called a
discrete channel.

• If the channel output depends only on the present input of the channel, then the
channel is memoryless.
Channel Transition Probability
• Each input-output path is represented by its
channel transition probability.
• The transition probability between the j-th source
symbol and the i-th destination symbol is the
conditional probability P(yi | xj).

• It gives the probability of occurrence of yi
when xj is definitely transmitted.
Channel Transition Matrix
The complete set of channel transition probabilities forms the channel
transition matrix, whose (i, j)-th element is P(yi | xj).
Binary Symmetric Channel (BSC)

• In a binary channel there are two source symbols (x1, x2) and two destination symbols (y1, y2); the channel is symmetric when P(y2 | x1) = P(y1 | x2).

Ex: If P(y1 | x1) = 0.9 and P(y2 | x2) = 0.8, find the channel transition matrix.

P(y1 | x1) = 0.9, i.e. when x1 is transmitted, then 90% of the time y1 is received, so P(y2 | x1) = 0.1.

P(y2 | x2) = 0.8, i.e. when x2 is transmitted, then 80% of the time y2 is received, so P(y1 | x2) = 0.2.

So the channel transition matrix is
[ P(y1|x1)  P(y1|x2) ]   [ 0.9  0.2 ]
[ P(y2|x1)  P(y2|x2) ] = [ 0.1  0.8 ]

•Ex: If the source symbol probabilities P(x1) and P(x2) are given, find the probabilities of the destination
symbols for the previous channel.

•Ans

P(y1) = P(y1 | x1) P(x1) + P(y1 | x2) P(x2) = 0.9 P(x1) + 0.2 P(x2)
P(y2) = P(y2 | x1) P(x1) + P(y2 | x2) P(x2) = 0.1 P(x1) + 0.8 P(x2)

So, P(y1) and P(y2) follow from the transition matrix and the source probabilities, with P(y1) + P(y2) = 1.
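A short sketch of this destination-probability calculation; the equal source probabilities used below are an assumption for illustration, since the original values are not given above.

```python
# Transition probabilities from the example: element [i][j] = P(y_i | x_j).
P_y_given_x = [[0.9, 0.2],
               [0.1, 0.8]]

# Source probabilities P(x1), P(x2): equiprobable inputs assumed here purely
# for illustration; substitute the actual values if they are known.
P_x = [0.5, 0.5]

P_y = [sum(P_y_given_x[i][j] * P_x[j] for j in range(2)) for i in range(2)]
print(P_y)   # [0.55, 0.45] for the assumed equiprobable inputs
```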
Conditional Entropy
•Let the symbol yj be observed at the destination.

•The symbol yj may have been caused by the transmission of any symbol from the input source alphabet.

•Conditional entropy H(X | Y = yj) is the uncertainty about the transmitted symbol after
observing the output yj:

H(X | Y = yj) = Σ_i P(xi | yj) log2(1 / P(xi | yj))

•Averaging over all received symbols, H(X | Y) = Σ_j P(yj) H(X | Y = yj) is the uncertainty about the channel input after the channel output is observed.
Mutual Information
•It is the difference between the uncertainty about the transmitted symbol before and after observing
the channel output:

I(X; Y) = H(X) - H(X | Y)

•H(X): uncertainty about the channel input before the channel output is observed.

•H(X | Y): uncertainty about the channel input after the channel output is observed.

•So I(X; Y) is the amount of uncertainty about the transmitted symbol X that is resolved by observing the
received symbol Y.
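A sketch of I(X; Y) = H(X) - H(X | Y) for the binary channel of the earlier example, again assuming equiprobable inputs (an illustrative choice, not a value from the notes):

```python
from math import log2

P_y_given_x = [[0.9, 0.2],     # element [i][j] = P(y_i | x_j), from the example
               [0.1, 0.8]]
P_x = [0.5, 0.5]               # assumed equiprobable inputs (illustration only)

# Joint probabilities P(x_j, y_i) and output marginals P(y_i).
P_xy = [[P_y_given_x[i][j] * P_x[j] for j in range(2)] for i in range(2)]
P_y = [sum(row) for row in P_xy]

H_X = sum(p * log2(1 / p) for p in P_x)
# H(X|Y) = sum over i, j of P(x_j, y_i) * log2( P(y_i) / P(x_j, y_i) )
H_X_given_Y = sum(P_xy[i][j] * log2(P_y[i] / P_xy[i][j])
                  for i in range(2) for j in range(2) if P_xy[i][j] > 0)

I_XY = H_X - H_X_given_Y
print(f"H(X) = {H_X:.3f}, H(X|Y) = {H_X_given_Y:.3f}, I(X;Y) = {I_XY:.3f} bits")
```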
Channel Capacity
•It is defined as the maximum value of the mutual information, where the maximization is taken over all
possible input probability distributions:

C = max I(X; Y) bits/symbol

•Channel capacity per second: if the channel accepts r symbols per second,

C' = r C bits/sec

•Channel-coding theorem

The maximum rate at which communication can be established over a discrete memoryless
channel with arbitrarily small probability of error is C' bits/sec.
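A sketch of the maximization that defines C, applied to the binary channel above: mutual information is evaluated over a grid of input distributions and the largest value is reported. The grid search is only an illustrative numerical shortcut.

```python
from math import log2

P_y_given_x = [[0.9, 0.2],     # binary channel from the earlier example
               [0.1, 0.8]]

def mutual_information(p1: float) -> float:
    """I(X;Y) for the input distribution P(x1) = p1, P(x2) = 1 - p1."""
    P_x = [p1, 1 - p1]
    P_xy = [[P_y_given_x[i][j] * P_x[j] for j in range(2)] for i in range(2)]
    P_y = [sum(row) for row in P_xy]
    return sum(P_xy[i][j] * log2(P_xy[i][j] / (P_y[i] * P_x[j]))
               for i in range(2) for j in range(2) if P_xy[i][j] > 0)

# Coarse search over input distributions for the capacity-achieving P(x1).
C, p1_best = max((mutual_information(p / 1000), p / 1000)
                 for p in range(1, 1000))
print(f"C ≈ {C:.4f} bits/symbol, attained near P(x1) = {p1_best:.3f}")
```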
Shannon's formula for channel capacity in an AWGN channel

C = B log2(1 + S/N) bits/sec

where B is the channel bandwidth, S is the received signal power, and N is the noise power in the band.

• Ex: Consider an AWGN channel with 4 kHz bandwidth and noise power spectral
density η/2. If the signal power at the receiver is 0.1 mW, find the maximum rate
at which information can be transmitted with arbitrarily small probability of
error.
• Ans:
Signal power: S = 0.1 mW
Channel bandwidth: B = 4 kHz
Noise power: N = η B
SNR: S/N
Channel capacity: C = B log2(1 + S/N)
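A numerical sketch of this calculation; the noise power spectral density is not given above, so the value η/2 = 10^-9 W/Hz used below is only a placeholder assumption.

```python
from math import log2

B   = 4e3       # channel bandwidth in Hz (from the example)
S   = 0.1e-3    # received signal power in watts (from the example)
eta = 2e-9      # one-sided noise PSD; eta/2 = 1e-9 W/Hz is an assumed placeholder

N   = eta * B                  # noise power in the band, N = eta * B
snr = S / N
C   = B * log2(1 + snr)        # Shannon capacity in bits/sec
print(f"N = {N:.2e} W, S/N = {snr:.1f}, C = {C / 1e3:.2f} kbps")
```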
Bandwidth-SNR Tradeoff

For fixed signal power S and noise PSD η/2, C = B log2(1 + S/(η B)), so bandwidth can be traded against SNR.

Channel capacity of a channel with infinite bandwidth:

C∞ = lim (B → ∞) B log2(1 + S/(η B)) = (S/η) log2(e) ≈ 1.44 S/η
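A quick numerical check of this limit, using the same assumed values as the sketch above (S = 0.1 mW, η = 2 × 10^-9 W/Hz):

```python
from math import e, log2

S, eta = 0.1e-3, 2e-9     # same assumed values as in the previous sketch

# C = B * log2(1 + S/(eta*B)) increases with B but saturates as B -> infinity.
for B in (1e3, 1e4, 1e6, 1e9):
    C = B * log2(1 + S / (eta * B))
    print(f"B = {B:9.0e} Hz -> C = {C / 1e3:7.2f} kbps")

C_inf = (S / eta) * log2(e)   # the infinite-bandwidth (Shannon) limit
print(f"Limit: C_inf = {C_inf / 1e3:.2f} kbps")
```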


Forward Error Correction (FEC) Code
•Some redundant data is added to the original information at the transmitting end.

•It is used at the receiver to detect and correct errors in the received information.

•The receiver does not need to ask the transmitter for any additional information.

•Codeword: a unit of bits that can be decoded independently.

•The number of bits in a codeword is the code length.


•If k data bits are transmitted as a codeword of n bits, the
number of redundant (check) bits is n - k.

•Code rate: k/n

•It is also called an (n, k) code.

•So, the data vector (k-dimensional): d = (d1, d2, ..., dk)

•And, the code vector (n-dimensional): c = (c1, c2, ..., cn)

Types of FEC codes
1. Block Codes

 A block of k bits is accumulated and then coded into n bits (n > k).

 A unique sequence of k data bits generates a unique code, which does not depend on previous values.

2. Convolutional Codes

 The coded sequence of n bits depends on the k data digits as well as on the previous N data digits.

 The coded data is not unique. Coding is done on a continuously running basis rather than by blocks of k data digits.
Linear Block Codes
•All the n-digit codewords are formed by linear combinations of the k
data digits.
•Systematic code: the leading k digits of the codeword are the
data/information digits and the remaining n - k digits are parity-check digits.
•Parity-check digits are formed by linear combinations of the data digits.
• Hence, a codeword has the form c = (d1, d2, ..., dk, p1, p2, ..., p(n-k)).
• So, c = d G
• G: generator matrix (k × n)

• G can be partitioned into a (k × k) identity matrix and a (k × (n - k)) matrix P:
G = [ Ik | P ]

• The elements of P are either 1 or 0.
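A minimal sketch of systematic encoding c = dG over GF(2). The parity sub-matrix P below is an arbitrary illustrative choice, not the matrix used in the (6, 3) example that follows.

```python
import numpy as np

k, n = 3, 6
# Illustrative parity sub-matrix P (k x (n - k)); chosen only for demonstration.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(k, dtype=int), P])   # G = [ I_k | P ]

def encode(data_bits):
    """Systematic encoding c = d G, with all arithmetic modulo 2."""
    return (np.array(data_bits) @ G) % 2

print(encode([1, 0, 1]))   # first k bits are the data, last n - k are parity
```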
Example: (6,3) block code

In calculations, use the
modulo-2 adder:

1+1=0
1+0=1
0+1=1
0+0=0
Find all codewords for all possible data words.
How many bits of error can it detect?

Data word   Code word
111         111000
110         110110
101         101011
100         100101
011         011101
010         010010
001         001110
000         000000

Minimum Hamming distance: d_min = 2

So the number of errors the code can detect is d_min - 1 = 1.

Hence, the code can detect a single-bit error.
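A short sketch that checks the minimum Hamming distance of the codeword list above and the number of detectable errors (d_min - 1):

```python
from itertools import combinations

# Codewords from the table above.
codewords = ["111000", "110110", "101011", "100101",
             "011101", "010010", "001110", "000000"]

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which the two words differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = min(hamming(a, b) for a, b in combinations(codewords, 2))
print(f"d_min = {d_min}, detectable errors = {d_min - 1}")
```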
Cyclic Codes:
•It is a subclass of linear block codes.

•It is capable of correcting more than one error in a codeword.

•Its implementation is simple, using shift registers.

•The codewords are cyclic shifts of one another.

•Ex: If c = (c_{n-1}, c_{n-2}, ..., c_1, c_0) is a codeword, then (c_{n-2}, ..., c_1, c_0, c_{n-1}),
(c_{n-3}, ..., c_0, c_{n-1}, c_{n-2}), and so on are also codewords.

•Let c^(i) denote c shifted cyclically i places to the left.


Polynomial representation of cyclic codes:
• Cyclic codes can be represented in polynomial form.

• For the codeword c = (c_{n-1}, c_{n-2}, ..., c_1, c_0), the code polynomial is:
c(x) = c_{n-1} x^(n-1) + c_{n-2} x^(n-2) + ... + c_1 x + c_0

• For the cyclically shifted codeword c^(i), the polynomial is:
c^(i)(x) = x^i c(x) mod (x^n + 1)

• Coefficients of the polynomials are either 1 or 0.


• Consider an (n, k) cyclic code.

• Length of the codeword: n

• Length of the data word: k

• Data polynomial: d(x) = d_{k-1} x^(k-1) + ... + d_1 x + d_0

• Length of the redundant (check) word: n - k

• Generator polynomial: It is a polynomial g(x) of degree n - k used to convert the data
polynomial to the code polynomial.
•Example: Find a generator polynomial for a (7,4) cyclic code. Find the codewords for the
following data words: 1010, 1111, 0001, 1000.

•Ans:

Here n = 7 and k = 4, so the generator polynomial must be of order n - k = 3.

x^7 + 1 = (x + 1)(x^3 + x^2 + 1)(x^3 + x + 1)

So, there are two possible generator polynomials:
g1(x) = x^3 + x^2 + 1 and g2(x) = x^3 + x + 1

Let the generator polynomial be g(x) = x^3 + x^2 + 1.

Data: 1010. So d(x) = x^3 + x.

Code polynomial: c(x) = d(x) g(x) = (x^3 + x)(x^3 + x^2 + 1) = x^6 + x^5 + x^4 + x

So, codeword: 1110010

Similarly, for d = 1111, c = 1001011

d = 0001, c = 0001101
d = 1000, c = 1101000
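A sketch of this encoding (code polynomial = data polynomial × generator polynomial, with modulo-2 coefficient arithmetic), which reproduces the four codewords above:

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists,
    highest-degree coefficient first (e.g. x^3 + x -> [1, 0, 1, 0])."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] ^= ai & bj      # modulo-2 addition of partial products
    return result

g = [1, 1, 0, 1]                          # g(x) = x^3 + x^2 + 1
for d in ("1010", "1111", "0001", "1000"):
    c = poly_mul_gf2([int(bit) for bit in d], g)
    print(d, "->", "".join(map(str, c)))
# 1010 -> 1110010, 1111 -> 1001011, 0001 -> 0001101, 1000 -> 1101000
```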
