Unit 2 A
Definition: A vocoder is an audio processor used to transmit speech or voice signals in the form of digital data. The word vocoder is short for voice coder. Vocoders are basically used for digital coding of speech and for voice simulation. The bit rate of available narrowband vocoders ranges from 1.2 to 64 kbps.
A vocoder operates on the principle of formants. Formants are basically the meaningful components of speech generated by the human voice.
Whenever a speech signal is transmitted, it is not necessary to transmit the precise waveform. We can simply transmit the information from which that particular waveform can be reconstructed. The waveform reconstructed at the receiver need only be similar to, not identical to, the waveform actually transmitted.
A vocoder works by first capturing the characteristic elements of the signal. Other audio signals are then shaped by this characteristic information.
Vocoders are used for voice synthesis. A vocoder takes two signals and creates a third signal using the spectral information of the two input signals. It aims to impose the amplitude and frequency characteristics of the speech signal onto the synthesis signal, while maintaining the pitch of the speech signal.
A voice model is used to simulate the voice. Since speech consists of a sequence of voiced and unvoiced sounds, this is the basis for the operation of a voice model.
Before proceeding further, it is better to first understand what voiced and unvoiced sounds are.
Voiced sounds are basically the sounds generated by vibration of the vocal cords.
On the contrary, the sounds produced when pronouncing letters such as ‘s’, ‘p’ or ‘f’ are known as unvoiced sounds. Unvoiced sounds are generated by expelling air through the lips and teeth.
As we can see in the figure of the speech model used in a vocoder, voiced sounds are simulated by an impulse generator whose frequency equals the fundamental frequency of the vocal cords. The noise source present in the circuit is used to simulate unvoiced sounds.
The position of the switch determines whether the sound is voiced or unvoiced. The selected signal is then passed through a filter that simulates the effect of the mouth, throat and nasal passages of the speaker. The filter unit shapes the input so that the required letter is pronounced. Thus we obtain a synthesised, approximated speech waveform.
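A minimal sketch of this voiced/unvoiced source-filter model is shown below. The sampling rate, pitch value and filter coefficients are illustrative assumptions, not values from the text; the structure (impulse generator, noise source, switch, vocal-tract filter) follows the description above.

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                       # sampling rate (Hz), assumed for illustration
n = int(fs * 0.05)              # one 50 ms frame

# Voiced excitation: impulse train at the fundamental (pitch) frequency
f0 = 120                        # assumed fundamental frequency of the vocal cords (Hz)
voiced = np.zeros(n)
voiced[::fs // f0] = 1.0

# Unvoiced excitation: white noise (air expelled through lips/teeth)
unvoiced = np.random.randn(n)

voiced_frame = True             # position of the voiced/unvoiced "switch"
excitation = voiced if voiced_frame else unvoiced

# Vocal-tract filter: a simple all-pole resonator standing in for the
# mouth/throat/nasal passages (coefficients are illustrative only)
a = [1.0, -1.3, 0.8]
speech = lfilter([1.0], a, excitation)   # synthesised, approximated speech frame
```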
LPC, an acronym for Linear Predictive Coding, is extensively used in speech and music applications. It is basically a technique to estimate future values: by analysing previous samples, it predicts the current sample.
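To make the prediction idea concrete, here is a small sketch that fits a two-tap linear predictor to a toy signal by least squares. The toy signal, the predictor order and the least-squares formulation are assumptions for illustration; classical LPC uses the autocorrelation (Levinson-Durbin) method, but the prediction principle is the same.

```python
import numpy as np

# Toy frame to be modelled (in practice this would be a windowed speech frame)
x = np.sin(0.3 * np.arange(200)) + 0.01 * np.random.randn(200)

p = 2                                   # predictor order: use two previous samples
N = len(x)

# Least-squares fit of x[n] ~ a1*x[n-1] + ... + ap*x[n-p]
A = np.column_stack([x[p - k: N - k] for k in range(1, p + 1)])
target = x[p:]
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

prediction = A @ coeffs                 # predicted samples
residual = target - prediction          # prediction error (excitation estimate)
```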
A vocoder comprises a voice encoder and a voice decoder. Let us now discuss the operation of each in detail.
Voice Encoder
The figure given below shows the block diagram of the voice encoder.
The frequency spectrum of the speech signal (200 Hz – 3200 Hz) is divided into 15 frequency ranges using 15 bandpass filters (BPF), each with a bandwidth of 200 Hz. The output of each BPF acts as input to a rectifier unit.
Here, the signal is rectified and filtered so as to produce a DC voltage. This DC voltage is proportional to the amplitude of the AC signal present at the output of the filter.
The speech signal is also the input of a frequency discriminator. The frequency discriminator unit is followed by a low-pass filter (LPF) with a 20 Hz cutoff. This LPF generates a DC voltage proportional to the voice frequency, which represents nothing other than the pitch of the voice.
The outputs of all the LPFs are DC voltages that are sampled, multiplexed and A/D converted. So, we have a digital equivalent of the speech signal at the output of the encoder. This encoded voice signal carries the frequency components from 200 Hz to 3200 Hz, information regarding the pitch of the speech, and whether it is voiced or unvoiced.
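The analysis stage just described can be sketched as follows. The filter design (Butterworth, chosen orders) and the stand-in input are assumptions; the 15 bands of 200 Hz from 200 Hz to 3200 Hz, the rectifier and the 20 Hz low-pass filter come from the text.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 8000                                    # sampling rate (Hz), assumed
speech = np.random.randn(int(fs * 0.1))      # stand-in for a 100 ms speech frame

envelopes = []
for lo in range(200, 3200, 200):             # fifteen 200 Hz-wide bands, 200-3200 Hz
    hi = lo + 200
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = lfilter(b, a, speech)             # bandpass filter output
    rectified = np.abs(band)                 # rectifier
    bl, al = butter(2, 20 / (fs / 2))        # 20 Hz low-pass -> slowly varying DC level
    envelopes.append(lfilter(bl, al, rectified)[-1])

# 'envelopes' holds one amplitude value per channel; in the encoder these values
# are sampled, multiplexed and A/D converted together with the pitch information.
```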
Voice Decoder
The digital voice signal generated by the voice encoder is first decoded. The voice decoder then uses a speech synthesizer to produce a voice signal at its output. In general it generates an approximate voice signal.
The demultiplexer and DAC section converts the received encoded signal back to its analog form. Here, a balanced modulator (BM) and filter combination is used in correspondence with the rectifier and filter combination at the encoder. The carrier for each BM is either the output of the noise generator or that of the pulse generator, depending on the position of the switch.
The switch position is decided by the decoder: when a voiced signal is received, the switch connects the pulse generator output to the input of all the BMs, and when an unvoiced signal is received, the switch connects the noise generator output to the input of all the BMs.
If the received signal is voiced, only certain BMs provide an output, depending on the frequency components of the received signal. If the received signal is unvoiced, we get an output from all the BMs. The adder then sums all the analog signals and produces the voice or speech output.
Types of Vocoders
Classification of Vocoders:
Channel Vocoders
Formant Vocoders
Cepstrum Vocoders
Voice-Excited Vocoders
Channel Vocoders:
Formant Vocoders:
Cepstrum Vocoders:
The cepstrum vocoder separates the excitation and the vocal-tract spectrum by inverse Fourier transforming the log magnitude spectrum to produce the cepstrum of the signal.
The low-frequency coefficients in the cepstrum correspond to the vocal-tract spectral envelope.
The high-frequency excitation coefficients form a periodic pulse train at multiples of the sampling period.
Linear filtering is performed to separate the vocal-tract cepstral coefficients from the excitation coefficients.
In the receiver, the vocal-tract cepstral coefficients are Fourier transformed to produce the vocal-tract impulse response.
By convolving this impulse response with a synthetic excitation signal (random noise or a periodic pulse train), the original speech is reconstructed.
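A minimal sketch of the cepstral separation described above is given below. The frame length and the liftering cutoff are assumptions chosen only for illustration; the sequence of operations (log magnitude spectrum, inverse FFT, liftering into low-time and high-time parts, transform back at the receiver) follows the bullets.

```python
import numpy as np

frame = np.random.randn(512)                # stand-in for a windowed speech frame

spectrum = np.fft.fft(frame)
log_mag = np.log(np.abs(spectrum) + 1e-12)  # log magnitude spectrum
cepstrum = np.fft.ifft(log_mag).real        # real cepstrum

cutoff = 30                                 # assumed liftering cutoff (quefrency samples)
vocal_tract_ceps = np.zeros_like(cepstrum)
vocal_tract_ceps[:cutoff] = cepstrum[:cutoff]             # low-time: vocal-tract envelope
vocal_tract_ceps[-cutoff + 1:] = cepstrum[-cutoff + 1:]   # keep the symmetric part

excitation_ceps = cepstrum - vocal_tract_ceps             # high-time: excitation (pitch)

# At the receiver the vocal-tract cepstrum is transformed back into a spectral
# envelope / impulse response and convolved with a synthetic excitation signal.
vocal_tract_impulse = np.fft.ifft(np.exp(np.fft.fft(vocal_tract_ceps))).real
```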
Voice-Excited Vocoder:
Voice-excited vocoders eliminate the need for pitch extraction and voicing detection operations.
This system uses a hybrid combination of PCM transmission for the low-frequency band of speech, combined with channel vocoding of the higher-frequency bands.
A pitch signal is generated at the synthesizer by rectifying, bandpass filtering, and clipping the baseband signal.
Voice-excited vocoders have been designed for operation at 7200 bits/s to 9600 bits/s.
Hybrid spread spectrum systems
The use of hybrid techniques attempts to capitalize upon the advantages of a particular method while avoiding its disadvantages.
DS, on the one hand, suffers heavily from the near-far effect, which makes this technique hard to apply to systems without power control. On the other hand, its implementation is inexpensive: the PN code generators are easy to implement and the spreading operation itself can be performed simply by XOR gates.
FH effectively suppresses the near-far effect and reduces the need for power control. However, implementation of the (fast) hopping frequency synthesizer required for a reasonable spreading gain is more problematic in terms of higher silicon cost and increased power consumption.
Applying both techniques allows their advantages to be combined while reducing the disadvantages. This results in reasonable near-far resistance at an acceptable hardware cost. Many different hybrid combinations are possible, some of which are PN/FH, PN/TH, FH/TH and PN/FH/TH.
While designing a hybrid system, the designer should decide whether FFH or SFH is to be applied. FFH increases the cost of the frequency synthesizer but provides more protection against the near-far effect. SFH combines a less expensive synthesizer with poorer near-far rejection and the need for a more powerful error-correction scheme (several symbols are lost during a hit by the jammer).
Multicarrier Modulation
Multicarrier modulation (MCM) is a form of signal waveform that carries the information by sending the data over multiple carriers, normally closely spaced in a block.
As a result, multicarrier modulation techniques are widely used for data transmission, as they provide an effective signal waveform that is spectrally efficient and resilient to the real-world environment.
One form of multicarrier modulation is OFDM.
Multicarrier modulation basics
Multicarrier modulation operates by dividing the data stream to be transmitted into a number of lower-rate data streams. Each of the lower-rate streams is then used to modulate an individual carrier.
When the overall transmission is received, the receiver has to re-assemble the overall data stream from the streams received on the individual carriers.
A variety of different techniques can be used for multicarrier transmission. Each form of MCM has its own advantages and can be used in different applications.
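The split-and-reassemble idea can be sketched in a few lines. The carrier count, BPSK mapping and ideal (noiseless) channel are assumptions for illustration; the IFFT/FFT pair shows the OFDM form of MCM mentioned above.

```python
import numpy as np

n_carriers = 8                               # number of closely spaced subcarriers
bits = np.random.randint(0, 2, 64)

# Serial-to-parallel: split the high-rate stream into n_carriers low-rate streams
symbols = 2 * bits - 1                       # BPSK mapping (assumed)
parallel = symbols.reshape(-1, n_carriers)   # each row is one multicarrier symbol

# Every element of a row modulates one subcarrier; an inverse FFT places the
# symbols on orthogonal carriers (the OFDM form of MCM).
time_domain = np.fft.ifft(parallel, axis=1)

# Receiver: FFT per symbol recovers the per-carrier data, then parallel-to-serial
recovered = np.fft.fft(time_domain, axis=1)
recovered_bits = (recovered.real.reshape(-1) > 0).astype(int)
assert np.array_equal(recovered_bits, bits)  # ideal channel, so the bits come back
```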
Development of MCM
The history of multicarrier modulation can be said to have started with military users. The first MCM systems were military HF radio links in the late 1950s and early 1960s, in which several channels were used to overcome the effects of fading.
Originally the concept of MCM required the use of several channels that were separated from each other by steep-sided filters if they were closely spaced. In this way, interference between the different channels could be eliminated.
However, multicarrier modulation systems first became widely used with the introduction of broadcasting systems such as DAB digital radio and DVB (Digital Video Broadcasting), which used OFDM, orthogonal frequency division multiplexing. OFDM uses processing power within the receiver and orthogonality between the carriers to ensure that no interference is present.
The wireless communication environment is very hostile. The signal transmitted over a wireless link is susceptible to fading (severe fluctuations in signal level), co-channel interference, dispersion effects in time and frequency, path loss, etc. On top of these woes, the limited availability of bandwidth poses a significant challenge to a designer aiming for higher spectral efficiency and higher quality of link availability at low cost.
Multiple-antenna systems are the current trend in many wireless technologies, where they are essential for performance (you will even see them in future hard disk drives as Two-Dimensional Magnetic Recording (TDMR) technology). Multiple Input Multiple Output (MIMO) systems improve the spectral efficiency and offer high-quality links when compared to traditional Single Input Single Output (SISO) systems. Many theoretical studies and communication-system design experiments on MIMO systems have demonstrated a great improvement in the performance of such systems.
Techniques for improving performance
Spatial multiplexing techniques, for example BLAST, yield increased data rates in wireless communication links. Fading can be mitigated by employing receive and transmit diversity (the Alamouti scheme, Tarokh et al.), thereby improving the reliability of the transmission link. Improved coverage can be effected by employing coherent combining techniques, which give array gain and increase the signal-to-noise ratio of the system. The goals of a wireless communication system are conflicting, and a clear balance of the goals is needed to maximize the performance of the system.
The following text concentrates on two of the above-mentioned techniques: diversity and spatial multiplexing.
In MIMO jargon, communication systems are broadly categorized into four categories with respect to the number of antennas at the transmitter and the receiver, as listed below.
Apart from the antenna configurations, there are two flavors of MIMO with respect to how data is transmitted across the given channel. The existence of multiple antennas in a system means the existence of different propagation paths. Aiming at improving the reliability of the system, we may choose to send the same data across the different propagation (spatial) paths; this is called spatial diversity, or simply diversity. Aiming at improving the data rate of the system, we may choose to place different portions of the data on different propagation paths (spatial multiplexing). These two systems are listed below.
● MIMO – implemented using diversity techniques – provides diversity gain – Aimed
at improving the reliability
● MIMO – implemented using spatial-multiplexing techniques – provides degrees of
freedom or multiplexing gain – Aimed at improving the data rate of the system.
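As a small illustration of the diversity flavor, the sketch below encodes two symbols with the Alamouti scheme (named in the text) over a 2x1 flat-fading link. The QPSK symbols, the Rayleigh channel model and perfect channel knowledge at the receiver are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to be sent from two antennas over two symbol periods (Alamouti)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading gains from each transmit antenna to the single receive antenna,
# assumed constant over both periods and known at the receiver
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Period 1: antennas send (s1, s2); period 2: antennas send (-s2*, s1*)
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining recovers each symbol with diversity gain |h1|^2 + |h2|^2
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
```

Spatial multiplexing would instead place two independent symbols on the two antennas in the same period, trading this reliability gain for data rate.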
Diversity:
As indicated, the two fundamental resources available to a MIMO system are diversity and degrees of freedom. Let us see what these terms mean.
Introduction:
Spread spectrum was initially developed for military applications during World War II as a transmission technique less sensitive to intentional interference or jamming by third parties. Spread spectrum technology has since blossomed into one of the fundamental building blocks in current and next-generation wireless systems.
A narrowband signal can be wiped out by interference. To disrupt the communication, the adversary needs to do two things:
(a) detect that a transmission is taking place, and
(b) transmit a jamming signal designed to confuse the receiver.
Solution (remedy)
1. Spread the narrowband signal into a broadband signal to protect it against interference.
2. The spectrum spreading is accomplished before transmission through the use of a code that is independent of the data sequence. The same code is used in the receiver to despread the received signal so that the original data sequence may be recovered.
PSEUDO-NOISE SEQUENCE:
Generation of a PN sequence:
A feedback shift register is said to be linear when the feedback logic consists entirely of mod-2 adders (Ex-OR gates). In such a case, the all-zero state is not permitted.
The period of a PN sequence produced by a linear feedback shift register with n flip-flops cannot exceed 2^n − 1.
When the period is exactly 2^n − 1, the PN sequence is called a 'maximum-length sequence' or 'm-sequence'.
Example 1: Consider the linear feedback shift register shown in the figure above, which involves three flip-flops. The input s0 is equal to the mod-2 sum of s1 and s3. If the initial state of the shift register is 100, the succession of states will be as follows:
100, 110, 111, 011, 101, 010, 001, 100, . . .
The output sequence (taken from s3) is therefore 0011101 . . ., which repeats itself with period 2^3 − 1 = 7 (n = 3). Maximal-length codes are commonly used PN codes. For a binary shift register with m stages, the maximum-length sequence has period
N = 2^m − 1
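The register of Example 1 can be simulated directly; the following short sketch uses the feedback (s1 XOR s3), the initial state 100 and the output tap s3 exactly as stated above.

```python
def lfsr_m_sequence(n_bits=7):
    """Generate the m-sequence from the 3-stage LFSR of Example 1."""
    state = [1, 0, 0]                  # initial state (s1, s2, s3) = 100
    out = []
    for _ in range(n_bits):
        out.append(state[2])           # output is taken from s3
        feedback = state[0] ^ state[2] # mod-2 sum of s1 and s3 (Ex-OR)
        state = [feedback] + state[:2] # shift, feedback enters s1
    return out

print(lfsr_m_sequence(14))  # 0,0,1,1,1,0,1 repeating with period 2^3 - 1 = 7
```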
Properties of PN Sequence
1. Balance property
2. Run property
3. Autocorrelation property
1. Balance property
In each period of the sequence, the number of binary ones differs from the number of binary zeros by at most one digit.
Consider the output of a shift register: 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1. It contains seven zeros and eight ones, so it meets the balance condition.
2. Run property
Among the runs of ones and zeros in each period, it is desirable that about one half of the runs of each type are of length 1, one fourth are of length 2, one eighth are of length 3, and so on.
For the same shift-register output 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1, the number of runs is 8, with run lengths 3, 1, 2, 2, 1, 1, 1, 4.
3. Autocorrelation property
The periodic autocorrelation of the sequence is

$$R_c(k) = \frac{1}{N}\sum_{n=1}^{N} c_n\, c_{n-k}$$

where $N$ is the length (period) of the sequence and $k$ is the lag of the autocorrelation function. For an m-sequence it takes only two values:

$$R_c(k) = \begin{cases} 1, & k = lN \\[4pt] -\dfrac{1}{N}, & k \neq lN \end{cases}$$

where $l$ is any integer. For the length-15 sequence above, a non-zero lag gives 7 agreements and 8 disagreements, so

$$R_c(k) = \frac{1}{15}(7-8) = -\frac{1}{15}$$
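The two-valued autocorrelation can be checked numerically for the length-15 sequence quoted in the balance-property example, with the bits mapped to ±1 as usual:

```python
import numpy as np

# Length-15 m-sequence from the balance-property example, mapped 0 -> -1, 1 -> +1
bits = [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]
c = np.array([2 * b - 1 for b in bits])
N = len(c)

def autocorr(k):
    # Periodic (circular) autocorrelation R_c(k) = (1/N) * sum_n c_n * c_{n-k}
    return np.dot(c, np.roll(c, k)) / N

print(autocorr(0))   # 1.0 at zero lag
print(autocorr(3))   # -1/15 for every non-zero lag (two-valued autocorrelation)
```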
This yields the PN autocorrelation shown in the figure. The period of a maximal-length sequence grows rapidly with the shift-register length m:

m (shift-register length)    N = 2^m − 1 (sequence period)
7                            127
8                            255
9                            511
10                           1023
11                           2047
12                           4095
13                           8191
17                           131071
19                           524287
{c_k} denotes a PN sequence.
The desired modulation is achieved by applying the data signal b(t) and the PN signal c(t) to a product modulator or multiplier. If the message signal b(t) is narrowband and the PN sequence signal c(t) is wideband, the product signal m(t) is also wideband. The PN sequence performs the role of a 'spreading code'.
For baseband transmission, the product signal m(t) represents the transmitted signal. Therefore
m(t) = c(t) b(t)
The received signal r(t) consists of the transmitted signal m(t) plus additive interference (noise) n(t). Hence
r(t) = m(t) + n(t) = c(t) b(t) + n(t)
To recover the original message signal b(t), the received signal r(t) is applied to a demodulator that consists of a multiplier followed by an integrator and a decision device. The multiplier is supplied with a locally generated PN sequence that is an exact replica of the one used in the transmitter. The multiplier output is given by
z(t) = r(t) c(t)
The data signal b(t) is multiplied twice by the PN signal c(t), whereas the unwanted signal n(t) is multiplied only once. But c²(t) = 1, hence the above equation reduces to
z(t) = b(t) + c(t) n(t)
Now the data component b(t) is narrowband, whereas the spurious component c(t)n(t) is wideband. Hence, by applying the multiplier output to a baseband (low-pass) filter, most of the power in the spurious component c(t)n(t) is filtered out. Thus the effect of the interference n(t) is significantly reduced at the receiver output.
The integration is carried out over the bit interval 0 ≤ t ≤ Tb to provide the sample value V. Finally, a decision is made by the receiver:
if V > the threshold value 0, say binary symbol '1'; if V < the threshold value 0, say binary symbol '0'.
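The baseband spread/despread chain just described can be simulated in a few lines. The chip count per bit, the narrowband interference waveform and the random PN code are illustrative assumptions; the processing steps (spread, add interference, despread, integrate, decide) follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 15                                       # chips per bit (PN period)
bits = rng.integers(0, 2, 20)
b = np.repeat(2 * bits - 1, N)               # NRZ data signal b(t), one value per chip

pn = 2 * rng.integers(0, 2, N) - 1           # one period of the PN (spreading) code
c = np.tile(pn, len(bits))                   # PN signal c(t)

m = b * c                                    # transmitted (spread) signal m(t) = c(t) b(t)
n = 2.0 * np.sin(0.05 * np.arange(len(m)))   # narrowband interference n(t)
r = m + n                                    # received signal r(t)

z = r * c                                    # despreading: z(t) = b(t) + c(t) n(t)
v = z.reshape(len(bits), N).sum(axis=1)      # integrate over each bit interval
decisions = (v > 0).astype(int)              # decision device, threshold at zero
print(np.count_nonzero(decisions != bits), "bit errors")
```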
Keying:
To provide bandpass transmission, the baseband data sequence is multiplied by a carrier by means of shift keying. Normally binary phase shift keying (PSK) is used because of its advantages. The transmitter first converts the incoming binary data sequence {b_k} into an NRZ waveform b(t), which is followed by two stages of modulation.
The first stage consists of a multiplier with the data signal b(t) and the PN signal c(t) as inputs. The output of the multiplier, m(t), is a wideband signal; thus a narrowband data sequence is transformed into a noise-like wideband signal.
The second stage consists of a binary phase shift keying (PSK) modulator, which converts the baseband signal m(t) into the bandpass signal x(t). The transmitted signal x(t) is thus a direct-sequence spread binary PSK signal. The phase modulation θ(t) of x(t) takes one of the two values 0 and π (180°), depending on the polarities of the message signal b(t) and the PN signal c(t) at time t: when both polarities are the same (+,+ or −,−) the phase is 0, and when they differ the phase is π.
In the first stage of the receiver, the received signal y(t) and a locally generated carrier are applied to a coherent detector (a product modulator followed by a low-pass filter), which converts the bandpass signal into a baseband signal.
The second stage of demodulation performs spectrum despreading by multiplying the output of the low-pass filter by a locally generated replica of the PN signal c(t), followed by integration over a bit interval Tb; finally a decision device is used to obtain the binary sequence.
Fig : Direct Sequence Spread Spectrum Example
The transmitted signal can be described in signal-space terms using the orthonormal basis functions

$$\phi_k(t) = \begin{cases}\sqrt{\dfrac{2}{T_c}}\cos(2\pi f_c t), & kT_c \le t \le (k+1)T_c\\[4pt] 0, & \text{otherwise}\end{cases}$$

$$\tilde{\phi}_k(t) = \begin{cases}\sqrt{\dfrac{2}{T_c}}\sin(2\pi f_c t), & kT_c \le t \le (k+1)T_c\\[4pt] 0, & \text{otherwise}\end{cases} \qquad k = 0, 1, \ldots, N-1$$

The transmitted signal is

$$x(t) = c(t)\,s(t) = \pm\sqrt{\frac{2E_b}{T_b}}\,c(t)\cos(2\pi f_c t) = \pm\sqrt{\frac{E_b}{N}}\sum_{k=0}^{N-1} c_k\,\phi_k(t), \qquad 0 \le t \le T_b$$

where Eb is the signal energy per bit.
With the PN code sequence {c0, c1, …, cN−1}, ck = ±1, the transmitted signal x(t) is therefore N-dimensional and requires N orthonormal functions to represent it. Let j(t) represent the interfering (jammer) signal. As said, the jammer tries to place all its available energy in exactly the same N-dimensional signal space, but it has no knowledge of the signal phase. Hence it tries to place equal energy in the two phase coordinates, cosine and sine. Accordingly, the jammer can be represented as

$$j(t) = \sum_{k=0}^{N-1} j_k\,\phi_k(t) + \sum_{k=0}^{N-1} \tilde{j}_k\,\tilde{\phi}_k(t), \qquad 0 \le t \le T_b$$
where

$$j_k = \int_0^{T_b} j(t)\,\phi_k(t)\,dt, \qquad k = 0, 1, \ldots, N-1$$

$$\tilde{j}_k = \int_0^{T_b} j(t)\,\tilde{\phi}_k(t)\,dt, \qquad k = 0, 1, \ldots, N-1$$

Thus j(t) is 2N-dimensional, twice the dimension of x(t).
The average interference (jammer) power is

$$J = \frac{1}{T_b}\int_0^{T_b} j^2(t)\,dt = \frac{1}{T_b}\sum_{k=0}^{N-1} j_k^2 + \frac{1}{T_b}\sum_{k=0}^{N-1} \tilde{j}_k^2$$

Since the jammer places equal energy in the cosine and sine coordinates,

$$\sum_{k=0}^{N-1} j_k^2 = \sum_{k=0}^{N-1} \tilde{j}_k^2$$

and therefore

$$J = \frac{2}{T_b}\sum_{k=0}^{N-1} j_k^2$$
To evaluate system performance we calculate the SNR at the input and output of the DS/BPSK receiver. The coherent receiver input is u(t) = s(t) + c(t)j(t), and using this u(t) the coherent detector output is

$$V = \sqrt{\frac{2}{T_b}}\int_0^{T_b} u(t)\cos(2\pi f_c t)\,dt = V_s + V_{cj}$$

where Vs is the desired signal component, obtained from

$$s(t) = \pm\sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_c t), \qquad 0 \le t \le T_b$$

so that Vs = ±√Eb, and Vcj is the component due to the jammer:
$$V_{cj} = \sqrt{\frac{T_c}{T_b}}\sum_{k=0}^{N-1} c_k \int_0^{T_b} j(t)\,\phi_k(t)\,dt = \sqrt{\frac{T_c}{T_b}}\sum_{k=0}^{N-1} c_k\, j_k$$
With the Ck treated as independent, identically distributed random variables with both symbols equally probable,

$$P(C_k = 1) = P(C_k = -1) = \tfrac{1}{2}$$

the expected value of the random variable Vcj is zero, since for fixed k

$$E[C_k j_k \mid j_k] = j_k\,P(C_k = 1) - j_k\,P(C_k = -1) = \tfrac{1}{2}j_k - \tfrac{1}{2}j_k = 0$$
and its variance is

$$\operatorname{Var}[V_{cj} \mid j] = \frac{1}{N}\sum_{k=0}^{N-1} j_k^2 = \frac{J T_c}{2}$$

With the peak signal value ±√Eb and this variance, the output SNR is

$$(\mathrm{SNR})_O = \frac{2 E_b}{J T_c}$$
The average signal power at the receiver input is Eb/Tb, hence the input SNR is

$$(\mathrm{SNR})_I = \frac{E_b/T_b}{J}$$

so that

$$(\mathrm{SNR})_O = \frac{2 T_b}{T_c}\,(\mathrm{SNR})_I$$
1. The bit rate of the binary data entering the transmitter input is $R_b = \dfrac{1}{T_b}$.
2. The bandwidth of the PN sequence c(t) (main lobe) is $W_c = \dfrac{1}{T_c}$.

The processing gain is

$$PG = \frac{W_c}{R_b} = \frac{T_b}{T_c}$$
Probability of error
The detector output is

$$V = \pm\sqrt{E_b} + V_{cj}$$

The decision rule is: if the detector output exceeds a threshold of zero volts, the received bit is declared symbol 1; otherwise the decision is made in favour of symbol 0.
The probability of error for the DS/BPSK system can be calculated from the simple formula

$$P_e \cong \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{J T_c}}\right)$$
Antijam Characteristics
Comparing this with the standard BPSK error probability

$$P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$

shows that the interference acts like wideband noise with N0 = J Tc. The ratio J/P, where P = Eb/Tb is the average signal power, is termed the jamming margin. The jamming margin is expressed in decibels as

$$\text{jamming margin (dB)} = \text{processing gain (dB)} - 10\log_{10}\!\left(\frac{E_b}{N_0}\right)_{\min}$$

where (Eb/N0)min is the minimum bit energy-to-noise ratio needed to support a prescribed average probability of error.
Example 1
Solution
Example 2
A direct-sequence spread binary phase shift keying system uses a feedback shift register of length 19 for the generation of the PN sequence. Calculate the processing gain of the system.
Solution
The processing gain equals the period of the maximal-length sequence: PG = 2^19 − 1 = 524287, i.e. approximately 57.2 dB.
Solution
$$\text{jamming margin (dB)} = 10\log_{10}(PG) - 10\log_{10}\!\left(\frac{E_b}{N_0}\right)_{\min}$$
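As a numerical illustration of the processing-gain and jamming-margin formulas above, the sketch below evaluates Example 2 and then a jamming margin for an assumed (Eb/N0)min of 10 dB; that value is only an illustration, not one given in the text.

```python
import math

# Example 2: processing gain of a DS/BPSK system with a length-19 shift register
m = 19
PG = 2 ** m - 1                       # period of the maximal-length spreading sequence
PG_dB = 10 * math.log10(PG)
print(f"PG = {PG} = {PG_dB:.1f} dB")  # 524287, roughly 57.2 dB

# Jamming margin (dB) = processing gain (dB) - 10*log10((Eb/N0)_min)
EbN0_min_dB = 10.0                    # assumed minimum Eb/N0, not from the source
jamming_margin_dB = PG_dB - EbN0_min_dB
print(f"Jamming margin = {jamming_margin_dB:.1f} dB")
```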
Since frequency hopping does not cover the entire spread spectrum instantaneously, we are led to consider the rate at which the hops occur. Depending on this, we have two types of frequency hopping:
1. Slow frequency hopping, in which the symbol rate Rs of the MFSK signal is an integer multiple of the hop rate Rh; that is, several symbols are transmitted on each frequency hop.
2. Fast frequency hopping, in which the hop rate Rh is an integer multiple of the MFSK symbol rate Rs; that is, the carrier frequency hops several times during the transmission of one symbol.
A common modulation format for frequency hopping systems is M-ary frequency shift keying (MFSK).
Slow frequency hopping:
In the receiver the frequency hopping is first removed by mixing the received signal with the output of a local frequency synthesizer that is synchronized with the transmitter. The resulting output is then bandpass filtered and subsequently processed by a noncoherent M-ary FSK demodulator. To implement this M-ary detector, a bank of M noncoherent matched filters is used, each matched to one of the MFSK tones. By selecting the largest filtered output, the original transmitted signal is estimated.
In slow frequency hopping, multiple symbols are transmitted per hop, so each symbol of a slow FH/MFSK signal is a chip. The bit rate Rb of the incoming binary data, the symbol rate Rs of the MFSK signal, the chip rate Rc and the hop rate Rh are related by
Rc = Rs = Rb / k ≥ Rh
where k = log2 M.
The following figure illustrates the variation of the frequency of a slow FH/MFSK signal with time for one complete period of the PN sequence. The period of the PN sequence is 2^4 − 1 = 15.
The FH/MFSK signal has the following parameters:
Number of bits per MFSK symbol K = 2; number of MFSK tones M = 2^K = 4.
Length of the PN segment per hop k = 3; total number of frequency hops 2^k = 8.
The combining processes used to combine multiple diversity branches at the receiver fall into two classes: post-detection combining and pre-detection combining. In pre-detection combining the signals from the diversity branches are combined coherently before detection, whereas in post-detection combining the signals are detected individually before combining. For coherent detection the performance of the communication system is the same for both combining techniques; that is, the type of combining procedure has no effect on performance in the coherent modulation case. For non-coherent detection, however, the performance is better when pre-detection combining is used. Because post-detection combining is not complex for non-coherent detection, it is very commonly used. There is a difference in system performance between pre-detection and post-detection combining for non-coherent detection schemes such as the frequency modulation (FM) discriminator or differential detection. Moreover, the terms pre-detection and post-detection also indicate the time of combining, i.e. whether the combining is performed before or after the hard decision.
In maximal-ratio combining (MRC) each branch is weighted by the conjugate of its channel gain [1]. As a result the phase shifts in the diversity channels are compensated, and the signals coming from strong diversity branches, which have a low noise level, are weighted more than the signals from the weak branches with a high noise level. The noise-power term in the weighting can be neglected provided it has equal value for all branches d. The realization of the combiner then needs only the estimation of the complex channel gains and does not need any estimation of the noise power.
It is feasible to employ MRC in transmission (transmit diversity), but in this case the transmitter must get proper feedback information about the state of the sub-channels between the single receive antenna and the multiple transmit antennas. However, it is not feasible to weight the transmissions from multiple antennas optimally for every receiving antenna in a combined transmit-receive diversity channel. Moreover, if a communication system is interference limited, a scheme which combines the diversity branches so as to maximize the signal-to-interference-plus-noise ratio may give much better performance than MRC. The assumption of spatially white Gaussian noise is valid if the noise observed at the receiver is just thermal noise; if the same type of antenna elements is used, then the thermal noise power is uncorrelated and equal for each branch.
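A minimal sketch of MRC with equal noise power on every branch is given below. The symbol, the Rayleigh branch gains and the noise level are illustrative assumptions; the weighting by the conjugate channel gain follows the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

s = 1 + 1j                                   # transmitted symbol
D = 4                                        # number of diversity branches

h = (rng.normal(size=D) + 1j * rng.normal(size=D)) / np.sqrt(2)   # branch gains
noise_power = np.full(D, 0.1)                # equal noise power on every branch
n = np.sqrt(noise_power / 2) * (rng.normal(size=D) + 1j * rng.normal(size=D))

r = h * s + n                                # signals on the D diversity branches

# MRC weights: conjugate channel gain divided by noise power. With equal noise
# power the division could be dropped, as noted in the text.
w = np.conj(h) / noise_power
combined = np.sum(w * r)                     # phase shifts cancel; strong branches dominate

s_hat = combined / np.sum(np.abs(h) ** 2 / noise_power)   # scale back to a symbol estimate
print(s_hat)
```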
[Figure: maximal-ratio combining receiver structure with antenna (Ant), receiver (Rx) and detector (Det) blocks.]
Equal-gain combining (EGC) can be employed in diversity reception with coherent modulation. The envelope gains of the diversity channels are neglected in EGC, and the diversity branches are combined with equal weights but conjugate phase. The structure of equal-gain combining is as follows; no envelope gain estimation of the channel is required.
[Figure: equal-gain combining receiver structure with receiver (Rx) and detector (Det) blocks.]
The general form of selection combining (SC) is to monitor all the diversity branches and select the best one (the one with the highest SNR) for detection. Therefore we can say that SC is not a combining method as such but a selection procedure among the available diversity branches. However, measuring the SNR is quite difficult because the system has to make the selection in a very short time. Selecting the branch with the highest SNR is equivalent to selecting the branch with the highest received power when the average noise power is the same on each branch. Therefore, it is practical to select the branch with the largest total of signal, noise and interference. If feedback information about the channel state of the diversity branches is available, selection combining can also be used in transmission.
[Figure: selection combining receiver structure with receiver (Rx), comparator (Comp) and detector (Det) blocks.]
The performance improvement obtained by the switching method depends on the value of the selection threshold and on the time delay created by the feedback loop of monitoring, estimation, switching and decision. Moreover, phase and envelope transients of the carrier can reduce the performance improvement. In angle-modulation systems, for example GSM, the phase transient is responsible for creating errors in the detected data stream. In this case, a pre-detection bandpass filter may be used to remove envelope transients.
Picture 15: Switching combining methods with fixed threshold (a) and variable threshold (b).
[Figure: an input signal from the sender passes through the medium and arrives at the receiver as the received signal.]
The channel estimation process consists of multiple steps. First a mathematical model of the channel is created. Then a signal which is known by both sender and receiver is transmitted over the channel.
When the receiver receives the signal, it is of course distorted and contains noise from the channel, but since the receiver also knows the original signal, it can compare the original and received signals to extract the properties of the channel and of the noise added to the sent signal in the channel.
1. A mathematical model of the channel is created. This model relates the sent and received signals using a channel matrix.
2. A signal known by both sender and receiver is sent by the sender over the channel.
3. The receiver compares the received signal with the original signal and determines the values in the channel matrix.
Note: the signal that is sent and is known by both sender and receiver is usually called a reference signal or pilot signal.
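A minimal sketch of these three steps for a single-antenna link is shown below. The pilot positions, the illustrative channel response and the simple linear interpolation are assumptions made for the sketch, not details from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

n_subcarriers = 64
pilot_idx = np.arange(0, n_subcarriers, 8)         # pilot (reference-signal) positions

h_true = np.exp(-1j * 2 * np.pi * np.arange(n_subcarriers) / 32)  # illustrative channel
x_pilot = np.ones(len(pilot_idx), dtype=complex)   # pilots known to sender and receiver

noise = 0.05 * (rng.normal(size=len(pilot_idx)) + 1j * rng.normal(size=len(pilot_idx)))
y_pilot = h_true[pilot_idx] * x_pilot + noise      # y(f) = h(f) x(f) + n(f) at the pilots

h_ls = y_pilot / x_pilot                           # channel estimate at the pilot tones

# Interpolate the channel at the remaining frequencies (real and imaginary parts)
all_idx = np.arange(n_subcarriers)
h_est = (np.interp(all_idx, pilot_idx, h_ls.real)
         + 1j * np.interp(all_idx, pilot_idx, h_ls.imag))
```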
When these reference signals are received at the destination, they contain distortions and noise, and they are represented by y(f1), y(f2) and y(f3). Now, to represent the received signals y(f) in terms of x(f), the channel function for that specific frequency is required; it is denoted h(f).
Therefore, the reference signal, the received signal and the channel function are related by y(f) = h(f) · x(f).
Since only 3 frequencies have been considered and the channel characteristics are estimated only for those frequencies, the channel properties for the rest of the frequencies can be estimated via interpolation of the already known characteristics.
The channel function in these equations represents the channel distortion. Noise is also added to the distorted signals, therefore the actual equations of the received signal look like the following:
y(f1) = h(f1) · x(f1) + n(f1)
y(f2) = h(f2) · x(f2) + n(f2)
y(f3) = h(f3) · x(f3) + n(f3)
Similar to how the channel function was estimated, the noise can theoretically also be estimated by using an averaged channel estimate h̄(f):
n(f1) = y(f1) − h̄(f1) · x(f1)
n(f2) = y(f2) − h̄(f2) · x(f2)
But this provides only absolute values of the noise, and because the noise in a channel varies continuously, absolute noise values are not very useful for channel estimation. What is useful is an estimated noise function that also models the noise variations. For that, there are different algorithms and methods. One of them, which is implemented in srsLTE (an open-source LTE implementation on SDR), is to subtract the averaged channel estimate from the actual channel estimate.
[Figure: 2x2 MIMO channel with gains h11, h12, h21, h22 between the inputs x1, x2 and the outputs y1, y2.]
In a MIMO system the process of channel estimation remains the same, except that now two signals are received from a single source. This means that two paths in the medium were used, one path per signal. Therefore, to compute the final signal y(f) for each frequency, both received signals have to be considered. This results in the formation of a matrix of received signals.
$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \end{bmatrix}$$
Similar to the SISO case, the Hermitian (conjugate transpose) of the known input matrix x can be used to estimate the channel matrix h, and the noise matrix n can be calculated in a similar manner via matrix operations.
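A minimal sketch of this 2x2 estimation is given below. The channel, the orthogonal pilot matrix and the noise level are illustrative assumptions; the estimate uses the Hermitian of the pilot matrix in a least-squares form, consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(4)

H_true = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)

# Known pilot matrix: each column is one transmission of pilots from both antennas.
# Orthogonal columns keep the estimate well conditioned.
X = np.array([[1, 1],
              [1, -1]], dtype=complex)

N = 0.01 * (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
Y = H_true @ X + N                       # received pilot matrix: Y = H X + N

# Least-squares channel estimate using the Hermitian of X: H_hat = Y X^H (X X^H)^{-1}
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

N_hat = Y - H_hat @ X                    # residual gives a noise estimate
print(np.round(H_hat - H_true, 3))       # estimation error is small
```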
QUESTIONS FOR PRACTICE
Q.1. Define constraint length in convolutional codes.
Q.2. What is a pseudo-noise sequence?
Q.3. What is direct sequence spread spectrum modulation?
Q.4. What is frequency hop spread spectrum modulation?
Q.5. What is processing gain?
Q.6. What is jamming margin?
Q.7. When is a PN sequence called a maximal-length sequence?
Q.8. What is meant by the processing gain of a DS spread spectrum system?
Q.9. What is the period of the maximal-length sequence generated using a 3-bit shift register?
Q.10. Define frequency hopping.
Q.11. What are the advantages of a DS-SS system?
Q.12. What are the disadvantages of a DS-SS system?
Q.13. What are the advantages of an FH-SS system?
Q.14. What are the disadvantages of an FH-SS system?
Q.15. Define synchronization in spread spectrum systems.
Q.16. Compare DS-SS and FH-SS.
Q.17. What are the applications of direct sequence spread spectrum?
Q.18. State the balance property of a random binary sequence.
Q.19. Mention the run property.
Q.20. What is the jamming effect?
Q.21. What is anti-jamming?
Q.22. What are slow and fast frequency hopping?