Pulse Modulation System

This document discusses various pulse modulation systems. It introduces pulse modulation, in which a series of pulses varies in amplitude, duration, shape, or time according to a modulating signal, and covers schemes including pulse amplitude modulation, pulse code modulation, delta modulation, pulse width modulation, and pulse position modulation. It then discusses the advantages and disadvantages of pulse modulation systems, analog pulse modulation techniques (PAM, PPM, PDM), and digital pulse modulation schemes including PCM, which involves sampling, quantization, and coding. It also discusses the Shannon-Nyquist sampling theorem and concepts such as anti-aliasing filters, quantization, and line codes for digital PAM signals.

Chapter 3

Pulse Modulation Systems

3.1 Introduction
 Pulse modulation systems are systems in which a series of recurring pulses is made to vary in amplitude, duration, shape, or time as a function of the modulating signal.
 Analog pulse modulation: A pulse train is used as the carrier
wave. Some characteristic feature of each pulse (e.g.,
amplitude, duration, or position) is used to represent message
samples.
 There are various Pulse Modulation schemes
 Pulse Amplitude Modulation
 Pulse Code Modulation
 Delta Modulation
 Pulse Width Modulation
 Pulse Position Modulation
3.1 Cont’d
Advantages of PM
 Noise immunity
 Inexpensive digital circuitry
 Can be time-division multiplexed with other pulse-modulated signals
 Transmission distance is increased through the use of regenerative repeaters
 Digital pulse streams can be stored
 Error detection and correction are easily implemented

3.1 cont’d
Disadvantages of PM
 Requires greater bandwidth to transmit and receive compared to its analog counterpart
 High transmission rates need specialized encoding techniques
 A pulse-coded stream is difficult to recover
 Information recovery needs synchronization between transmitter & receiver.

3.2 Sampling Theorem
Sampling theorem for strictly band-limited signals:
1. A signal band-limited to −W ≤ f ≤ W can be completely described by its samples g(n/2W), n = 0, ±1, ±2, …
2. The signal can be completely recovered from its samples g(n/2W).
Nyquist rate = 2W samples/second
Nyquist interval = 1/(2W) seconds
When the signal is not band-limited (under-sampling), aliasing occurs. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.
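The aliasing effect can be sketched numerically (the frequencies below are illustrative, not from the slides): a 3 Hz cosine sampled at only 4 Hz, below its 6 Hz Nyquist rate, produces exactly the same samples as a 1 Hz cosine.

```python
import math

def sample(f_hz, fs_hz, n_samples):
    """Sample cos(2*pi*f*t) at rate fs_hz, returning n_samples points."""
    return [math.cos(2 * math.pi * f_hz * n / fs_hz) for n in range(n_samples)]

fs = 4.0                  # sampling rate: 4 Hz, so fs/2 = 2 Hz
s3 = sample(3.0, fs, 8)   # 3 Hz tone: its Nyquist rate would be 6 Hz (under-sampled)
s1 = sample(1.0, fs, 8)   # 1 Hz tone: safely below fs/2

# The under-sampled 3 Hz tone is indistinguishable from its 1 Hz alias:
print(all(abs(a - b) < 1e-9 for a, b in zip(s3, s1)))  # True
```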
3.3 Analog Pulse Modulation
Pulse amplitude modulation (PAM)
 is an engineering term used to describe the conversion of an analog signal to a pulse-type signal in which the amplitude of each pulse denotes the analog information.

[Figures: pulse-amplitude modulation waveforms]
3.3 cont’d
Pulse Width Modulation
 Also known as pulse duration modulation (PDM)
A form of modulation in which the width of the pulse carrier is made to vary in accordance with the modulating voltage.
The leading edge of the pulse remains fixed, while the occurrence of the trailing edge varies.
Disadvantages
 Pulses of different durations (widths) are created
Hard to interpret
Wider pulses expend more power
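A minimal PWM sketch (the slot count and voltage range are illustrative assumptions): the leading edge stays fixed at the start of each frame, and the sample amplitude sets how long the pulse stays high before the trailing edge.

```python
def pwm_encode(samples, slots_per_period=10, vmin=0.0, vmax=1.0):
    """Trailing-edge PWM: leading edge fixed at slot 0, pulse width
    proportional to the sample amplitude (illustrative parameters)."""
    pulses = []
    for s in samples:
        frac = (s - vmin) / (vmax - vmin)       # normalise amplitude to 0..1
        width = round(frac * slots_per_period)  # number of slots held high
        pulses.append([1] * width + [0] * (slots_per_period - width))
    return pulses

print(pwm_encode([0.0, 0.5, 1.0], slots_per_period=4))
# widths 0, 2, 4 -> [[0,0,0,0], [1,1,0,0], [1,1,1,1]]
```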

3.3 cont’d
Pulse Position Modulation
 It is possible to overcome the flaw of PWM by preserving only the pulse transitions of the PWM signal
This effectively creates Pulse Position Modulation (PPM)
PPM differs from PWM in that the position of a pulse is made to vary in accordance with the modulating voltage
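Keeping only the trailing-edge position of each PWM frame yields PPM: each frame carries a single narrow pulse whose position encodes the sample. The helper below is a hypothetical illustration, not from the slides.

```python
def ppm_from_widths(widths, slots_per_period=10):
    """PPM sketch: place a one-slot pulse where the PWM trailing edge
    would fall, so the pulse *position* carries the sample value."""
    frames = []
    for w in widths:
        pos = min(w, slots_per_period - 1)  # clamp to the last slot
        frame = [0] * slots_per_period
        frame[pos] = 1
        frames.append(frame)
    return frames

print(ppm_from_widths([1, 2, 3], slots_per_period=4))
# pulse moves one slot per frame: [[0,1,0,0], [0,0,1,0], [0,0,0,1]]
```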

[Figure: pulse modulation schemes compared]
3.4 Digital Pulse Modulation
3.4.1 Pulse Code Modulation
 Pulse Code Modulation is a process through which an analog signal can be represented (approximated) by a digital signal
PCM is a three-step process which includes
Sampling/pulse amplitude modulation
Quantization
Coding
Analog-to-digital conversion is really a three-step process involving
Sampling
Conversion from a continuous-time, continuous-valued signal to a discrete-time, continuous-valued signal
Quantization
 Conversion from a discrete-time, continuous-valued signal to a discrete-time, discrete-valued signal
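The three PCM steps can be sketched in a few lines; the 4-bit word size, signal range, and truncating quantizer below are illustrative choices, not prescribed by the slides.

```python
import math

def pcm_encode(signal, fs, duration, levels=16, vmin=-1.0, vmax=1.0):
    """Toy PCM chain: sample -> uniform quantize -> binary code.
    levels=16 gives 4-bit code words (illustrative parameters)."""
    bits = int(math.log2(levels))
    step = (vmax - vmin) / levels
    words = []
    for n in range(int(fs * duration)):
        x = signal(n / fs)                      # 1) sampling
        idx = int((x - vmin) / step)            # 2) quantization (truncating)
        idx = max(0, min(levels - 1, idx))      #    clamp to valid levels
        words.append(format(idx, f"0{bits}b"))  # 3) coding into bits
    return words

tone = lambda t: math.sin(2 * math.pi * 1.0 * t)  # 1 Hz test tone
print(pcm_encode(tone, fs=8, duration=1.0))
# first word, for x = 0, is '1000' (mid-scale)
```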
[Figure: basic elements of a PCM system]
3.4 Digital Pulse Modulation
Coding
Conversion from a discrete-time, discrete-valued signal to an efficient digital data format
Each quantized value is represented as bits.

3.4.2 Sampling
 A continuous-time signal has some value defined at every time instant.
 So it has an infinite number of sample points.

3.4.2 Sampling
 It is impossible to digitize an infinite number of points, because infinite points would require an infinite amount of memory and an infinite amount of processing power.
So we have to take some finite number of points.
Sampling solves this problem by taking samples at fixed time intervals.
Sampling is the process of determining the instantaneous voltage at these given intervals of time.
A technique called "pulse amplitude modulation" is used to produce a pulse when the signal is sampled
If an analog signal is not appropriately sampled, aliasing will occur, where a discrete-time signal may be a representation (alias) of multiple continuous-time signals.
3.4.2 Shannon-Nyquist Sampling theorem

A signal with no frequency component above a certain maximum frequency is known as a band-limited signal (in our case we want a signal band-limited to ½ Fs).
Sometimes higher-frequency components are added to the analog signal (practical signals are not band-limited).
To keep the analog signal band-limited, we need a filter, usually a low-pass filter that stops all frequencies above ½ Fs. This is called an 'anti-aliasing' filter

3.4.2 Shannon-Nyquist Sampling theorem
Types of Sampling
Critical Sampling
 When the sampling frequency is chosen to be equal to twice the maximum frequency component (Fs = 2Fmax).
Ideally, we should be able to recover the analog signal from the digital samples.
Under Sampling
When the sampling frequency is chosen to be less than twice the maximum frequency component (Fs < 2Fmax).
 We would not be able to recover the analog signal from the digital samples.

3.4.2 Shannon-Nyquist Sampling theorem
Over Sampling
When the sampling frequency is chosen to be greater than twice the maximum frequency component (Fs > 2Fmax).
We would be able to recover the analog signal from the digital samples
It also helps to control the effect of aliasing
Requires more memory

3.4.2 cont’d
Example 1: For the following analog signal, find the Nyquist sampling rate; also determine the digital signal frequency and the digital signal.

3.4.2 cont’d
Anti-aliasing filters
Anti-aliasing filters are analog filters, as they process the signal before it is sampled. In most cases, they are also low-pass filters unless band-pass sampling techniques are used.

The ideal filter has a flat pass-band and a very sharp cut-off. Since the cut-off frequency of this filter is half the sampling frequency, the resulting replicated spectra of the sampled signal do not overlap each other. Thus no aliasing occurs.

3.4.2 cont’d
Quantization
Now that we have converted the continuous-time, continuous-valued signal into a discrete-time, continuous-valued signal, we STILL need to make it discrete-valued
This is where quantization comes into the picture
"The process of converting analog voltage with infinite precision to finite precision"
For example, if a digital processor has a 4-bit word, the amplitudes of the signal can be segmented into 16 levels

3.4.2 cont’d
Quantization
If the discrete-time signal x[n] falls between two quantization levels, it will either be rounded or truncated
Rounding replaces x[n] by the value of the nearest quantization level
Truncation replaces x[n] by the value of the level below it
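The two options can be sketched as follows (the step size of 0.25 is an arbitrary illustration):

```python
def quantize(x, step=0.25, mode="round"):
    """Snap x onto a uniform grid of spacing `step`:
    'round' takes the nearest level, 'truncate' the level below."""
    if mode == "round":
        return round(x / step) * step
    return (x // step) * step  # floor division -> level below x

print(quantize(0.70, mode="round"))     # nearest level: 0.75
print(quantize(0.70, mode="truncate"))  # level below:   0.5
```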

3.4.2 cont’d
General rules for Quantization
Important properties of the quantizer include
 Number of quantization levels
Quantization resolution
Note the minimum & maximum amplitudes of the input signal, Ymin & Ymax
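For a uniform quantizer these properties are tied together by the step size; a common form (assuming a b-bit quantizer spanning [Ymin, Ymax]) is Δ = (Ymax − Ymin) / 2^b:

```python
def resolution(y_min, y_max, bits):
    """Quantization step for a uniform b-bit quantizer over [y_min, y_max]."""
    return (y_max - y_min) / 2 ** bits

print(resolution(-1.0, 1.0, 4))  # 16 levels over a 2 V range -> 0.125 V
```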

3.4.3 Discrete PAM Signals (Line Codes)
Unipolar encoding
In unipolar encoding, only two voltage levels are used.
It uses only one polarity of voltage level
Bit rate is the same as the data rate
Unfortunately a DC component is present
Loss of synchronization with long runs of identical bits

3.4.3 cont’d
Polar encoding
The polar encoding technique uses two voltage levels
One positive, the other negative
Four different encoding techniques

3.4.3 cont’d
Bi-polar encoding
Bi-polar encoding uses three voltage levels

3.4.3 cont’d
Non-return to Zero-Level (NRZ-L)
Two different voltages (+ and −, OR + and none) for 0 and 1 bits
Voltage is constant during the bit interval
No transition, i.e., no return to a zero voltage
e.g., absence of voltage for zero, constant positive voltage for one
More often, negative voltage for one value and positive for the other

3.4.3 cont’d
Non-return to Zero Invert (NRZ-I)
NRZ-I is a variation of NRZ encoding.
It is considered a differential encoding technique because its level is a function of the transition at the beginning of the signal element.
If the bit is logic 0, there is no change to the opposite logic level, either at the beginning of or throughout the cell.
If the bit is logic 1, a transition occurs at the beginning of the cell and the logic level changes to the opposite value throughout the cell.

3.4.3 cont’d
Return to Zero (RZ)
RZ encoding can be classified into two categories: Unipolar RZ and Bipolar RZ.
In Unipolar RZ encoding, a binary 1 is represented with a high level for half the bit time and returns to zero for the other half.
In Bipolar RZ encoding, two non-zero voltages are used; a logic 1 and a logic 0 alternate in polarity, each taking half the bit time before returning to zero.
Bipolar RZ is a self-clocking code

3.4.3 cont’d
Self Clocking Codes
Self-clocking codes are encoding techniques used to ensure that each bit time associated with the binary serial bit stream contains at least one level transition (1 to 0, or 0 to 1).
Sufficient spectral energy is generated at the clock frequency so that the receiver can recover the data, even when long periods of 1s or 0s occur.
Self-clocking codes include Bipolar RZ, Manchester, differential Manchester, and Miller.
Telephone trunk circuits use a self-clocking RZ encoding called bipolar with 8-zeros substitution (B8ZS)
B8ZS is a modified form of AMI (alternate mark inversion)

3.4.3 cont’d
Manchester Encoding
In Manchester encoding, a logic 1 is represented with a low-to-high transition in the center (or beginning) of the cell, and
a logic 0 is represented with a high-to-low transition in the center (or beginning) of the cell
Ethernet LAN systems employ Manchester encoding
Differential Manchester
In differential Manchester, a transition from low-to-high or high-to-low occurs at the center of each cell, which provides the self-clocking mechanism.
A logic 1 or 0 is determined by the logic level of the previous cell.
If the previous cell is a logic 1, there is no change at the beginning of the current cell.
If the previous cell is a logic 0, then a transition occurs at the beginning of the current cell.
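Two of these line codes are easy to sketch in code. The Manchester convention below follows the slides (logic 1 = low-to-high at mid-bit); the NRZ-I starting level is an arbitrary assumption.

```python
def manchester(bits):
    """Manchester encoding, two half-cells per bit:
    logic 1 -> low-to-high (0,1); logic 0 -> high-to-low (1,0)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def nrz_i(bits, start=0):
    """NRZ-I: invert the level at the start of each 1 bit; hold it for 0."""
    level, out = start, []
    for b in bits:
        if b:
            level ^= 1  # transition marks a logic 1
        out.append(level)
    return out

data = [1, 0, 1, 1, 0]
print(manchester(data))  # [0,1, 1,0, 0,1, 0,1, 1,0]
print(nrz_i(data))       # [1, 1, 0, 1, 1]
```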

3.4.3 cont’d
Miller Encoding
A self-clocking technique in which a transition occurs at the center of the cell for logic 1s only
No transition is used for a logic 0 unless it is followed by another 0, in which case a transition is placed at the end of the cell for the first 0
Miller encoding is used in digital magnetic recording.

3.4.4 Delta modulation
Let m n  m( nTs ) , n  0,1,2, 


where Ts is the sampling period and m( nTs ) is a sample of m(t ).
The error signal is
e n  m n  mq  n  1
eq  n    sgn(e n  )
mq  n   mq  n  1  eq  n
where mq  n  is the quantizer output , eq  n  is
the quantized version of e n  , and  is the step size

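The delta modulation update equations translate directly into a loop; the step size Δ = 0.1 and the input samples below are illustrative.

```python
def delta_modulate(samples, step=0.1):
    """Delta modulation: e[n] = m[n] - m_q[n-1]; e_q[n] = step*sgn(e[n]);
    m_q[n] = m_q[n-1] + e_q[n]. Emits one bit per sample."""
    mq, bits, approx = 0.0, [], []
    for m in samples:
        e = m - mq                      # error against the staircase
        eq = step if e >= 0 else -step  # one-bit quantized error
        mq += eq                        # staircase approximation update
        bits.append(1 if eq > 0 else 0)
        approx.append(mq)
    return bits, approx

bits, approx = delta_modulate([0.05, 0.15, 0.30, 0.25])
print(bits)  # [1, 1, 1, 0]: the staircase tracks the input up, then steps down
```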
3.4.6 Information, Entropy and source coding
Measure of Information
A. Information Source
• An information source is an object that produces events, the outcome of which is selected at random according to a probability distribution.
• A practical source of information in a communication system is a device that produces messages; it can be either analog or discrete.
• The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.
• Information sources can be classified as having memory or being memoryless.
• A source with memory is one for which the current symbol depends on the previous symbols.
• A memoryless source is one for which each symbol produced is independent of the previous symbols.
3.4.6 Information, Entropy and source coding
B. Information Content of a Discrete Memoryless Source
• The amount of information contained in an event is closely related to its uncertainty.
• Messages with a high probability of occurrence convey relatively little information.
• If a message is certain (probability of occurrence equal to one), it conveys zero information.
• Thus, a mathematical measure of information should be a function of the probability of the outcome and should satisfy the following axioms:
– Information should be proportional to the uncertainty of an outcome.
– Information contained in independent outcomes should add.

3.4.6 Information, Entropy and source coding
Information Content of a Symbol
• The information content of a symbol x_i with probability of occurrence P(x_i) is defined as
  I(x_i) = log2(1 / P(x_i)) = −log2 P(x_i)  (bits)
3.4.6 Information, Entropy and source coding
Average Information or Entropy
• In a practical communication system, we usually transmit long sequences of symbols from an information source.
• Thus, we are more interested in the average information that a source produces than in the information content of a single symbol.
• The mean value of I(x_i) over the alphabet of source X with m different symbols is
  H(X) = Σ_{i=1..m} P(x_i) I(x_i) = −Σ_{i=1..m} P(x_i) log2 P(x_i)  (b/symbol)
• The quantity H(X) is called the entropy of source X. It is a measure of the average information content per source symbol.
• The source entropy H(X) can be considered the average amount of uncertainty within source X that is resolved by use of the alphabet.
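The definition can be checked numerically (the probabilities below are chosen for illustration):

```python
import math

def entropy(probs):
    """H(X) = -sum p_i * log2(p_i), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # fair binary source: 1.0 bit/symbol (the maximum)
print(entropy([0.9, 0.1]))  # a skewed source conveys less: ~0.469 bit/symbol
```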
3.4.6 Information, Entropy and source coding
• The source entropy H(X) satisfies the relation 0 ≤ H(X) ≤ log2 m, with equality on the right when all m symbols are equiprobable.

Information Rate
• If the time rate at which source X emits symbols is r (symbols/s), the information rate R of the source is given by
  R = r · H(X)  (b/s)
3.4.6 Information, Entropy and source coding
Channel Capacity
A. Channel Capacity per Symbol Cs:
• The channel capacity per symbol of a DMC (discrete memoryless channel) is defined as
  Cs = max over {P(x_i)} of I(X; Y)  (b/symbol)
where the maximization is over all possible input probability distributions {P(x_i)} on X. Note that the channel capacity Cs is a function of only the channel transition probabilities that define the channel.
B. Channel Capacity per Second C:
• If r symbols are being transmitted per second, then the maximum rate of transmission of information per second is rCs. This is the channel capacity per second and is denoted by C (b/s):
  C = r · Cs  (b/s)

3.4.6 Information, Entropy and source coding
Source Coding Theorem (Shannon's first theorem)
The theorem can be stated as follows: Given a discrete memoryless source of entropy H(S), the average codeword length L for any distortionless source coding scheme is bounded as
  L ≥ H(S)
The entropy of a source is a function of the probabilities of the source symbols that constitute the alphabet of the source.
 Entropy of a Discrete Memoryless Source: Assume that the source output is modeled as a discrete random variable, S, which takes on symbols from a fixed finite alphabet

3.4.6 Information, Entropy and source coding

Define the amount of information gained after observing the event S = s_k, which occurs with probability p_k, as the logarithmic function
  I(s_k) = log2(1 / p_k)
3.4.6 Information, Entropy and source coding
The entropy is a measure of the average information
content per source symbol.
The source coding theorem is also known as the
"noiseless coding theorem" in the sense that it establishes
the condition for error-free encoding to be possible.

3.4.6 Information, Entropy and source coding
Exercise
Consider a discrete memoryless source with alphabet (x1, x2, x3, x4, x5, x6) and symbol probabilities (0.3, 0.25, 0.20, 0.12, 0.08, 0.05), respectively, for its output.
Apply the Shannon-Fano source coding algorithm to this source.
Determine the average codeword length, entropy, efficiency, and redundancy.
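One way to carry out the exercise is sketched below; the equal-split rule is the usual Shannon-Fano heuristic, and different tie-breaking choices can yield different (but comparably efficient) codes.

```python
import math

def shannon_fano(symbols):
    """Shannon-Fano coding sketch: sort symbols by falling probability,
    split where the two halves' totals are closest, prefix '0'/'1', recurse."""
    def build(items, prefix, codes):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in items)
        run, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(items)):
            run += items[i - 1][1]
            diff = abs(run - (total - run))  # imbalance of this split point
            if diff < best_diff:
                best_diff, best_i = diff, i
        build(items[:best_i], prefix + "0", codes)
        build(items[best_i:], prefix + "1", codes)

    items = sorted(symbols.items(), key=lambda kv: -kv[1])
    codes = {}
    build(items, "", codes)
    return codes

probs = {"x1": 0.30, "x2": 0.25, "x3": 0.20,
         "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = shannon_fano(probs)
L = sum(probs[s] * len(c) for s, c in codes.items())   # average codeword length
H = -sum(p * math.log2(p) for p in probs.values())     # source entropy
print(codes)
print(round(L, 2), round(H, 3))   # average length 2.38, entropy ~2.360
print(round(100 * H / L, 1))      # efficiency ~99.2 %
```

With this split the codeword lengths come out as (2, 2, 2, 3, 4, 4), giving an average length of 2.38 bits/symbol against an entropy of about 2.360 bits/symbol, so the efficiency is roughly 99.2 % and the redundancy roughly 0.8 %.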

