Pulse Modulation System
3.1 Introduction
Pulse modulation refers to systems in which a series of recurring pulses is made to vary in amplitude, duration, shape, or timing as a function of the modulating signal.
Analog pulse modulation: a pulse train is used as the carrier wave, and some characteristic feature of each pulse (e.g., amplitude, duration, or position) is varied to represent message samples.
There are various pulse modulation schemes:
Pulse Amplitude Modulation
Pulse Code Modulation
Delta Modulation
Pulse Width Modulation
Pulse Position Modulation
Advantages of pulse modulation
Noise immunity
Inexpensive digital circuitry
Can be time-division multiplexed with other pulse-modulated signals
Transmission distance is increased through the use of regenerative repeaters
Digital pulse streams can be stored
Error detection and correction are easily implemented
Disadvantages of pulse modulation
Requires greater bandwidth to transmit and receive than its analog counterpart
High transmission rates require specialized encoding techniques
A pulse-coded stream is difficult to recover
Information recovery requires synchronization between transmitter and receiver
3.2 Sampling Theorem
Sampling theorem for strictly band-limited signals:
1. A signal band-limited to −W ≤ f ≤ W can be completely described by its samples g(n/2W).
2. The signal can be completely recovered from its samples g(n/2W).
Nyquist rate: 2W samples per second.
Nyquist interval: 1/(2W) seconds.
When the signal is not band-limited (under-sampling), aliasing occurs. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.
3.3 Analog Pulse Modulation
Pulse amplitude modulation (PAM)
is the engineering term used to describe the conversion of an analog signal to a pulse-type signal in which the amplitude of each pulse denotes the analog information.
Pulse Width Modulation
Also known as pulse duration modulation (PDM)
A form of modulation in which the width of the pulse carrier is made to vary in accordance with the modulating voltage
The leading edge of each pulse remains fixed, but the occurrence of the trailing edge varies
Disadvantages
Pulses of different durations (widths) are created, which are harder to interpret
Wider pulses expend more power
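The width-varies, leading-edge-fixed behavior above can be sketched in a few lines; the frame length and width range below are illustrative assumptions, not values from the text:

```python
import numpy as np

def pwm_encode(samples, frame_len=100, min_width=10, max_width=90):
    """Map each message sample in [0, 1] to one frame whose leading edge is
    fixed at the frame start and whose trailing edge (pulse width) varies
    in proportion to the sample value."""
    frames = []
    for s in samples:
        width = int(min_width + s * (max_width - min_width))
        frame = np.zeros(frame_len)
        frame[:width] = 1.0          # high for `width` samples, then low
        frames.append(frame)
    return np.concatenate(frames)

sig = pwm_encode(np.array([0.0, 0.5, 1.0]))  # pulse widths 10, 50, 90
```

Note how the s = 1.0 frame spends nine times as many samples high as the s = 0.0 frame, illustrating the power cost of wide pulses.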
Pulse Position Modulation
The flaw of PWM can be overcome by preserving only the pulse transitions of the PWM signal
This effectively creates pulse position modulation (PPM)
PPM differs from PWM in that the position of a pulse is made to vary in accordance with the modulating voltage
3.4 Digital Pulse Modulation
3.4.1 Pulse Code Modulation
Pulse code modulation (PCM) is a process through which an analog signal can be represented (approximated) by a digital signal
PCM is a three-step process comprising
Sampling (pulse amplitude modulation)
Quantization
Coding
Analog-to-digital conversion is thus a three-step process involving
Sampling
Conversion from a continuous-time, continuous-valued signal to a discrete-time, continuous-valued signal
Quantization
Conversion from a discrete-time, continuous-valued signal to a discrete-time, discrete-valued signal
Coding
Conversion of the quantized levels into binary codewords
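The three steps can be sketched end to end; the 4-bit word size and the test signal are illustrative assumptions, not values from the text:

```python
import numpy as np

def pcm_encode(x, n_bits=4, xmin=-1.0, xmax=1.0):
    """Quantize each sample to one of 2**n_bits uniform levels (quantization)
    and emit an n_bits-wide binary codeword per sample (coding)."""
    levels = 2 ** n_bits
    step = (xmax - xmin) / levels
    idx = np.clip(((x - xmin) / step).astype(int), 0, levels - 1)
    return [format(i, f"0{n_bits}b") for i in idx]

n = np.arange(8)
x = np.sin(2 * np.pi * n / 8)   # sampling: 8 samples per period
codes = pcm_encode(x)           # e.g. the sample 0.0 maps to '1000'
```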
3.4.2 Sampling
A continuous-time signal has some value defined at every time instant,
so it has an infinite number of sample points.
It is impossible to digitize an infinite number of points, because infinitely many points would require an infinite amount of memory and an infinite amount of processing power.
So we have to take some finite number of points.
Sampling solves this problem by taking samples at fixed time intervals.
Sampling is the process of determining the instantaneous voltage at these given intervals of time.
A technique called pulse amplitude modulation is used to produce a pulse when the signal is sampled.
If an analog signal is not appropriately sampled, aliasing will occur, whereby a discrete-time signal may be a representation (alias) of multiple continuous-time signals.
3.4.2 Shannon-Nyquist Sampling Theorem
Types of Sampling
Critical sampling
When the sampling frequency is chosen to be exactly twice the maximum frequency component (Fs = 2Fmax)
Ideally, we should be able to recover the analog signal from the digital samples
Under-sampling
When the sampling frequency is chosen to be less than twice the maximum frequency component (Fs < 2Fmax)
We would not be able to recover the analog signal from the digital samples
Over-sampling
When the sampling frequency is chosen to be greater than twice the maximum frequency component (Fs > 2Fmax)
We would be able to recover the analog signal from the digital samples
It also helps to control the effect of aliasing
Requires more memory
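Under-sampling can be demonstrated numerically: a 9 Hz tone sampled at 10 Hz (below its Nyquist rate of 18 Hz) produces exactly the same samples as a 1 Hz tone, so the two are indistinguishable after sampling. The frequencies chosen here are illustrative:

```python
import numpy as np

fs = 10.0                        # sampling rate: 10 Hz
t = np.arange(10) / fs           # one second of sample instants
f_hi, f_alias = 9.0, 1.0         # 9 Hz > fs/2, and |9 - 10| = 1 Hz is its alias
x_hi = np.cos(2 * np.pi * f_hi * t)
x_alias = np.cos(2 * np.pi * f_alias * t)
print(np.allclose(x_hi, x_alias))   # True: the sample sets coincide
```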
Example 1: For the following analog signal, find the Nyquist sampling rate, and determine the digital signal frequency and the digital signal.
Anti-aliasing filters
Anti-aliasing filters are analog filters, as they process the signal before it is sampled. In most cases they are also low-pass filters, unless band-pass sampling techniques are used.
The ideal filter has a flat pass-band and a very sharp cut-off. Since the cut-off frequency of this filter is half the sampling frequency, the replicated spectra of the sampled signal do not overlap each other, and thus no aliasing occurs.
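The same idea applies in discrete time before down-sampling: filter out everything above the new Nyquist frequency first. The sketch below builds a simple windowed-sinc low-pass in NumPy; the rates, tap count, and test tones are illustrative assumptions, not values from the text:

```python
import numpy as np

def lowpass_fir(fc, fs, num_taps=101):
    """Windowed-sinc low-pass FIR with cut-off fc (Hz) at sample rate fs (Hz)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * fc / fs) * np.sinc(2 * fc / fs * n)  # ideal low-pass impulse response
    h *= np.hamming(num_taps)                     # window to control ripple
    return h / h.sum()                            # unity gain at DC

fs, decim = 1000.0, 4
h = lowpass_fir(fc=0.5 * fs / decim, fs=fs)       # cut-off at the new Nyquist: 125 Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 300 * t)  # 300 Hz would alias
y = np.convolve(x, h, mode="same")                # anti-alias filter first
x_dec = y[::decim]                                # now safe: new fs = 250 Hz
```

After filtering, only the 50 Hz component survives, so keeping every fourth sample no longer folds the 300 Hz tone onto it.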
Quantization
Now that we have converted the continuous-time, continuous-valued signal into a discrete-time, continuous-valued signal, we still need to make it discrete-valued
This is where quantization comes into the picture:
"The process of converting analog voltage with infinite precision to finite precision"
For example, if a digital processor has a 4-bit word, the amplitudes of the signal can be segmented into 16 levels
If the discrete-time signal x[n] falls between two quantization levels, it will be either rounded or truncated
Rounding replaces x[n] by the value of the nearest quantization level
Truncation replaces x[n] by the value of the level below it
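Rounding and truncation can be compared with a small uniform quantizer; the 4-bit word and [−1, 1] range here are illustrative assumptions:

```python
import numpy as np

def quantize(x, n_bits=4, xmin=-1.0, xmax=1.0, mode="round"):
    """Uniform quantizer with 2**n_bits levels spaced evenly over [xmin, xmax].
    mode='round' snaps to the nearest level; mode='truncate' to the level below."""
    levels = 2 ** n_bits
    step = (xmax - xmin) / (levels - 1)
    k = (x - xmin) / step
    k = np.round(k) if mode == "round" else np.floor(k)
    return xmin + np.clip(k, 0, levels - 1) * step

x = np.array([0.30])                  # falls between levels 0.2 and 0.3333
print(quantize(x))                    # rounding   -> nearest level, ~0.3333
print(quantize(x, mode="truncate"))   # truncation -> level below,  ~0.2
```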
General rules for quantization
Important properties of the quantizer include
Number of quantization levels
Quantization resolution
Note the minimum and maximum amplitudes of the input signal, Ymin and Ymax
3.4.3 Discrete PAM signals(Line Codes)
UNI-Polar encoding
In uni-polar encoding, only two voltage levels are used.
It uses only one polarity of voltage level
Bit rate same as data rate
Unfortunately DC component is present
Loss of synchronization
Polar encoding
The polar encoding technique uses two voltage levels
One positive, the other negative
There are four different encoding techniques
Bipolar encoding
Bipolar encoding uses three voltage levels: positive, negative, and zero
Non-Return to Zero-Level (NRZ-L)
Two different voltages (+ and −, or + and none) represent the 0 and 1 bits
The voltage is constant during the bit interval
There is no transition, i.e., no return to zero voltage
E.g., absence of voltage for zero, constant positive voltage for one
More often, a negative voltage for one value and a positive voltage for the other
Non-Return to Zero-Invert (NRZ-I)
NRZ-I is a variation of NRZ encoding
It is considered a differential encoding technique because its level is a function of the transition at the beginning of the signal element
If the bit is a logic 0, there is no change to the opposite logic level, either at the beginning of or throughout the cell
If the bit is a logic 1, a transition occurs at the beginning of the cell and the logic level changes to the opposite value throughout the cell
Return to Zero (RZ)
RZ encoding can be classified into two categories: unipolar RZ and bipolar RZ
In unipolar RZ encoding, a binary 1 is represented by a high level for half the bit time, returning to zero for the other half
In bipolar RZ encoding, two non-zero voltages are used; logic 1s and 0s alternate in polarity, each taking half the bit time before returning to zero
Bipolar RZ is a self-clocking code
Self-Clocking Codes
Self-clocking codes are encoding techniques that ensure each bit time of the binary serial bit stream contains at least one level transition (1 to 0, or 0 to 1)
Sufficient spectral energy is generated at the clock frequency that the receiver can recover the data, even when long runs of 1s or 0s occur
Self-clocking codes include bipolar RZ, Manchester, differential Manchester, and Miller
Telephone trunk circuits use a self-clocking RZ encoding called bipolar with 8-zeros substitution (B8ZS)
B8ZS is a modified form of AMI (alternate mark inversion)
Manchester Encoding
In Manchester encoding, a logic 1 is represented by a low-to-high transition in the center of the cell, and
a logic 0 is represented by a high-to-low transition in the center of the cell
Ethernet LAN systems employ Manchester encoding
Differential Manchester
In differential Manchester, a low-to-high or high-to-low transition occurs at the center of every cell, which provides the self-clocking mechanism
A logic 1 or 0 is determined relative to the logic level of the previous cell
If the current bit is a logic 1, there is no transition at the beginning of the cell
If the current bit is a logic 0, a transition occurs at the beginning of the cell
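The two encodings can be sketched as bit-to-half-cell mappings, using the 1 = low-to-high convention stated in the text and, for the differential version, "transition at the start means 0":

```python
def manchester(bits):
    """Logic 1 -> low-to-high mid-cell transition (0, 1);
    logic 0 -> high-to-low mid-cell transition (1, 0)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def diff_manchester(bits, level=0):
    """Always a mid-cell transition (self-clocking); a transition at the
    START of the cell encodes 0, no start transition encodes 1."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1           # extra transition at the cell boundary
        out.append(level)        # first half of the cell
        level ^= 1               # mandatory mid-cell transition
        out.append(level)        # second half of the cell
    return out

manchester([1, 0, 1])            # [0, 1, 1, 0, 0, 1]
```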
Miller Encoding
A self-clocking technique in which a transition occurs at the center of the cell for logic 1s only
No transition is used for a logic 0 unless it is followed by another 0, in which case a transition is placed at the end of the cell for the first 0
Miller encoding is used in digital magnetic recording
3.4.4 Delta Modulation
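The original slides develop delta modulation graphically. As a sketch of the idea: a linear delta modulator transmits one bit per sample telling the receiver to step a staircase approximation up or down by a fixed step size; the step size and test signal below are illustrative assumptions:

```python
import numpy as np

def delta_modulate(x, step=0.1, x0=0.0):
    """Emit 1 when the input is above the running staircase approximation
    (step up), else 0 (step down)."""
    bits, approx = [], x0
    for sample in x:
        bit = 1 if sample > approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=0.1, x0=0.0):
    """Rebuild the staircase approximation from the bit stream alone."""
    out, approx = [], x0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

t = np.linspace(0, 1, 50)
x = 0.5 * np.sin(2 * np.pi * t)
xhat = np.array(delta_demodulate(delta_modulate(x)))  # staircase tracks x
```

If the step is too small the staircase cannot follow steep slopes (slope overload); if it is too large, granular noise grows.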
3.4.6 Information, Entropy and source coding
Measure of Information
A. Information Source
• An information source is an object that produces events, the outcomes of which are selected at random according to a probability distribution.
• A practical source of information in a communication system is a device that produces messages; it can be either analog or discrete.
• The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.
• Information sources can be classified as having memory or being memoryless.
• A source with memory is one for which the current symbol depends on the previous symbols.
• A memoryless source is one for which each symbol produced is independent of the previous symbols.
B. Information Content of a Discrete Memoryless Source
• The amount of information contained in an event is closely related to its uncertainty.
• Messages with a high probability of occurrence convey relatively little information.
• If a message is certain (probability of occurrence equal to 1), it conveys zero information.
• Thus, a mathematical measure of information should be a function of the probability of the outcome and should satisfy the following axioms:
– Information should be proportional to the uncertainty of an outcome.
– Information contained in independent outcomes should add.
Information Content of a Symbol
• If a symbol xi occurs with probability P(xi), its information content is
I(xi) = log2(1 / P(xi)) = −log2 P(xi) bits
Average Information or Entropy
• In a practical communication system, we usually transmit long sequences of symbols from an information source.
• Thus, we are more interested in the average information that a source produces than in the information content of a single symbol.
• The mean value of I(xi) over the alphabet of source X with m different symbols is the entropy
H(X) = Σ P(xi) I(xi) = −Σ P(xi) log2 P(xi) bits/symbol
Information Rate
• If the time rate at which source X emits symbols is r (symbols/s), the information rate R of the source is given by
R = r · H(X) bits/s
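These definitions are a few lines of code; the symbol rate used below is an illustrative assumption:

```python
import math

def entropy(probs):
    """H(X) = -sum(p * log2 p), in bits per symbol (p = 0 terms contribute 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H = entropy([0.5, 0.5])   # a fair binary source: H = 1 bit/symbol
r = 1000                  # assumed symbol rate, symbols/s
R = r * H                 # information rate: 1000 bits/s
```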
Channel Capacity
A. Channel capacity per symbol, Cs:
• The channel capacity per symbol of a discrete memoryless channel (DMC) is defined as
Cs = max I(X; Y) bits/symbol
where the maximization is taken over all input probability distributions {P(xi)}.
Source Coding Theorem (Shannon's first theorem)
The theorem can be stated as follows: given a discrete memoryless source of entropy H(S), the average codeword length L for any distortionless source coding is bounded as
L ≥ H(S)
The entropy of a source is a function of the probabilities of the source symbols that constitute the alphabet of the source.
Entropy of a Discrete Memoryless Source
Assume that the source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet.
The entropy is a measure of the average information
content per source symbol.
The source coding theorem is also known as the
"noiseless coding theorem" in the sense that it establishes
the condition for error-free encoding to be possible.
Exercise
Consider a discrete memoryless source with alphabet (x1, x2, x3, x4, x5, x6) and symbol probabilities (0.30, 0.25, 0.20, 0.12, 0.08, 0.05), respectively.
Apply the Shannon-Fano source coding algorithm to this source.
Determine the average codeword length, entropy, efficiency, and redundancy.
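A sketch of the Shannon-Fano procedure applied to this exercise; the split rule used here (minimizing the difference between the two groups' total probabilities) is one common formulation of the algorithm:

```python
import math

def shannon_fano(probs):
    """Shannon-Fano coding: sort symbols by falling probability, split into
    two groups with totals as nearly equal as possible, assign '0' to the
    first group and '1' to the second, then recurse within each group."""
    def split(items, prefix, codes):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return codes
        total = sum(p for _, p in items)
        best_k, best_diff = 1, float("inf")
        for k in range(1, len(items)):
            left = sum(p for _, p in items[:k])
            diff = abs(2 * left - total)        # |left total - right total|
            if diff < best_diff:
                best_k, best_diff = k, diff
        split(items[:best_k], prefix + "0", codes)
        split(items[best_k:], prefix + "1", codes)
        return codes
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    return split(items, "", {})

probs = {"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = shannon_fano(probs)                            # e.g. x1 -> '00'
L = sum(p * len(codes[s]) for s, p in probs.items())   # average codeword length
H = -sum(p * math.log2(p) for p in probs.values())     # entropy, bits/symbol
efficiency = H / L                                     # eta = H / L
redundancy = 1 - efficiency
```

With this split rule the codeword lengths are (2, 2, 2, 3, 4, 4), giving L = 2.38 bits against H ≈ 2.36 bits, so the efficiency is about 99%.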