
Digital Communication

Professor Surendra Prasad


Department of Electrical Engineering
Indian Institute of Technology Delhi
Lecture No 31
Performance of M-ary Digital Modulations

(Refer Slide Time: 01:03)

We will now take up the last topic in digital modulations that is of interest to us, namely getting a feel for the performance of M-ary digital modulations. We have fully taken care of binary
modulations.

(Refer Slide Time: 01:18)

The only thing left is M-ary digital modulations. We have seen the receiver structures for M-ary digital modulations, both for orthogonal as well as
(Refer Slide Time: 01:30)

other M-ary waveforms, M-ary modulations based on two dimensional signal constellations,
right? Today we would like to take up the performance.

Now, because of shortage of time, I will not be taking up a detailed performance analysis of all aspects of M-ary digital modulation performance. However, I will go through the broad approach, and by now you are reasonably familiar with the techniques that need to be used, so I will be leaving a number of things for self-study. As we go along I will tell you precisely what you have to read yourself. I will be talking about the major results over here, and it will be very easy for you to do that because the basic ideas are similar to what you have been doing so far, Ok.

So we will take up the performance of M-ary waveforms. And to start with I will take M-ary
orthogonal waveforms.
(Refer Slide Time: 02:46)

And in this class of modulations we will consider the performance of the coherent demodulators, or the coherent receivers.

(Refer Slide Time: 02:59)

The corresponding analysis for the non-coherent receivers is very similar, and it will be very easy for you to work it out or read it up yourself.

Alright, now let us quickly recapitulate the decision statistic that we have to use for making our decisions in the case of coherent receivers using orthogonal waveforms. That is, what is the structure of the coherent receiver? I do not have the picture here but it is very easy to remember that picture. We have
(Professor – student conversation starts)
Student: A bank of
Professor: You have a bank of matched filters, right. And you sample the outputs of this bank of matched filters at the time instant t equal to l T for the lth symbol, and depending on which matched filter produces the largest sample value, you decide that the corresponding signal was transmitted,

(Refer Slide Time: 03:55)

Ok, right.
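As an aside, this bank-of-matched-filters decision is easy to sketch in code. The following is a minimal illustration, not from the lecture; the waveforms, names and noise level are my own assumptions, and a sampled correlation stands in for matched filtering at t = lT.

```python
import numpy as np

# Sketch of the coherent M-ary orthogonal receiver described above:
# correlate the received samples against each candidate waveform
# (equivalent to sampling a bank of matched filters at t = lT) and
# decide in favour of the largest output.
def detect_symbol(received, waveforms):
    outputs = waveforms @ received        # one u_m(lT) per waveform
    return int(np.argmax(outputs))        # index of the largest output

# Illustrative orthogonal "waveforms": M disjoint rectangular pulses.
M, N = 4, 64
waveforms = np.zeros((M, N))
for m in range(M):
    waveforms[m, m * (N // M):(m + 1) * (N // M)] = 1.0

rng = np.random.default_rng(0)
sent = 2
received = waveforms[sent] + rng.normal(0.0, 0.5, N)   # AWGN channel
print(detect_symbol(received, waveforms))              # prints 2 with high probability
```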

That is the structure we have in mind, and the decision statistic, I did go through this maths at that time, but it is very easy to remember even intuitively. The decision statistic is the matched filter output u sub m of l T, the index m denoting the mth matched filter and l T denoting the time instant at which you are looking at the signal. This will be equal to E sub p into delta of m and m sub l, plus n sub m of l T, right
(Refer Slide Time: 04:40)

where m sub l indicates the symbol actually transmitted in the lth symbol interval, right.
m sub l will also take values from zero to
Student: Capital M minus 1
Professor: Capital M minus 1, right. So will m. The index m denotes which matched filter output we are looking at, and m sub l denotes the actual symbol that was transmitted in that interval. So obviously we expect the contribution from the pulse to come only in the m sub l-th filter output, right?
(Professor – student conversation ends)

That is why this delta function is appearing in the expression. And if we assume that the input noise is white Gaussian, then this matched filter output noise, sampled at this time instant, will be a Gaussian random variable with variance N zero by 2, right? And also the noise outputs of all the M matched filters will be mutually uncorrelated. We had seen that property earlier: the same white noise going through orthogonal matched filters gives sample values, noise variables, which are all uncorrelated, and hence independent.

So we have already seen that for white Gaussian input noise n of t, these random variables n sub m of l T are uncorrelated, and since they are Gaussian they are also independent Gaussian random variables.
(Refer Slide Time: 06:38)

The expected value of n sub m of l T squared will be equal to N sub zero by 2.
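In standard notation, the decision statistic just described can be summarized as follows (a compact restatement of the spoken formulas above, not a reproduction of the slide):

\[
u_m(lT) = E_p\,\delta_{m,\,m_l} + n_m(lT), \qquad m = 0,1,\ldots,M-1,
\qquad
\mathbb{E}\big[n_m(lT)^2\big] = \frac{N_0}{2},
\]

with the noise samples \(n_m(lT)\) mutually uncorrelated, hence independent, Gaussian random variables.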

(Refer Slide Time: 06:47)

Alright, this is the decision statistic based on which we can easily do our error analysis, at least in an approximate way. I will only consider the approximate way here.

There are two kinds of analysis possible: an approximate method and a more exact method. But the approximate method itself is reasonably good in certain situations, and it also gives some interesting insights. So I will consider the approximate method first.

Before we come to the method, what is the error event?


(Refer Slide Time: 07:33)

The error event in this case is described as follows.

Given that, let us say, a particular symbol m is the true index. I am slightly changing the notation now, because earlier m was denoting the matched filter output

(Refer Slide Time: 07:55)

corresponding to the mth filter. But I am now simplifying the notation: given that m is the true index, the transmitted index, the error event is that some other matched filter output exceeds the matched filter output corresponding to this index, right? So some other matched filter output, actually the real part of it, exceeds the output of the mth matched filter, right?
(Refer Slide Time: 08:41)

This is a reasonable statement of the error event based on which we can try to compute the
error probability. You all agree with this?

The true index is m, but the largest output is coming not from the corresponding matched filter but from some other filter, right? The approximate method uses what is called a union bound for the calculation. The approximate evaluation of P sub e

(Refer Slide Time: 09:20)

is based on what is called a union bound argument.


(Refer Slide Time: 09:29)

The union bound argument is basically this. If m is your true index, then we will go through a count of all the other possible indices, leaving out m, right?

So we will go through all the possible ways in which an error can happen. Let us say m is the zero index, just for the simplicity of our discussion; m was zero, the very first index. Then the error event is that something other than the zeroth matched filter produces the largest output. The union bound says: calculate the individual probability that the matched filter corresponding to index 1 produces an output larger than that of index zero.

Similarly we compute the probability that the matched filter corresponding to index 2 produces the larger output, and so on and so forth. And we just add up all these probabilities.

(Professor – student conversation starts)


Student: That will be pessimistic
Professor: That is obviously a pessimistic answer, an upper bound, right. That is what a union bound does. It is an approximate method, and there is an error in it; we will try to appreciate that. But have you understood the argument? The argument is that we calculate the probability of every one of the other indices producing an output larger than the mth matched filter output, and the sum of all these probabilities is taken as the overall error probability, right?
This is not a true error probability calculation; it is an approximate calculation. That is, you are taking the union of all possible error events: the actual error event is being broken up into a number of events which, if disjoint, would have given rise to the correct result. But they are not really disjoint.
Student: Sir, what is that...?
Student: Sir, we are not taking the union; we are taking the addition. Because if we had taken the union, it would have been exact.
Student: Union and this thing, disjoint...
Student: But Sir, why are they not disjoint?
Professor: Let us put it this way. It is possible that two or more matched filters produce an output which is greater than the mth filter output.

(Refer Slide Time: 12:07)

Student: You simply... (inaudible)


Professor: So what is going to happen is that such an event is going to be counted twice in this situation, right? You appreciate that? Because this filter is producing a larger output and that one is also producing it, and we are taking each probability individually, only looking at one of them at a time. We are not looking at disjoint events at all.
Student: Therefore it is optimistic
Student: No
Professor: It is a pessimistic thing, because that event is being counted twice.
Student: Yes
Professor: The probability of each of these is being calculated individually, irrespective of the other, right? So that event is being counted twice. And so on.
(Professor – student conversation ends)

Of course, theoretically it can happen for more than two also, but the probability of that is very, very small, so we don't have to worry about it. So that is the basic idea of the union bound. That is, you define the conditional probability of this event, which I am going to denote by p sub m prime given m, as the probability that the output sample of the m prime-th matched filter is greater than the corresponding output of the mth matched filter, Ok, given that

(Refer Slide Time: 13:45)

m was transmitted.

This denotes the probability that some other matched filter m prime produces a larger output than the mth filter, alright? Then what the union bound says is that the overall error probability is bounded by the sum of all these terms over the different values of m prime.

(Professor – student conversation starts)


Student: Not equal to m.
Professor: Not equal to m.
So the union bound tells us that P sub e is actually less than or equal to, because it is pessimistic, the sum of all these conditional probabilities for m prime not equal to m.

(Refer Slide Time: 14:36)

That is the union bound.
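In symbols, the statement just made reads as follows (my notation, following the lecture's definitions):

\[
p_{m'|m} = P\Big(\operatorname{Re}\,u_{m'}(lT) > \operatorname{Re}\,u_m(lT) \;\Big|\; m \text{ transmitted}\Big),
\qquad
P_e \;\le\; \sum_{m' \ne m} p_{m'|m}.
\]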


Student: Sir, this can be greater than 1 also.

(Refer Slide Time: 14:43)

Professor: How can it be greater?


(Refer Slide Time: 14:45)

Student: If we sum their probabilities, there is no restriction that the sum has to be less than 1.
Professor: Oh, it can be greater than 1?
Student: Yes sir.
Professor: Yeah, of course. A bound can be greater than 1, which is alright. Because after all, P sub e is a probability, and anything less than 1 can also be bounded by something greater than 1. In that case the bound is going to be useless, right? True, I agree with that, but let us see how useless or useful this is.
(Professor – student conversation ends)

Ok, so is the union bound argument clear to everybody? Alright. Now, this probability p sub m prime given m we already know. This is the same probability that we considered for the binary orthogonal case; we are essentially now taking a pair at a time, isn't it? So it essentially becomes the same question as if you were considering a binary orthogonal scheme, as far as one particular value of m prime is concerned, right?

Therefore p sub m prime given m is nothing but the result we got for the binary coherent orthogonal scheme, right?
(Refer Slide Time: 16:06)

Remember, it was only slightly different from the corresponding result for coherent p s k, right? The 3 d B difference was there. So if you remember the form of one of them, you can write the result for the other. So this was the result we derived for binary p s k, sorry, binary f s k, with coherent demodulation.

(Refer Slide Time: 16:26)

Therefore, for M-ary orthogonal waveforms, the union bound tells us... now, this term is going to be the same for every value of m prime, and you are adding how many terms?

(Professor – student conversation starts)


Student: M minus 1
Professor: M minus 1. So the bound is going to be M minus 1 times this term: P sub e less than or equal to M minus 1 times the Q function. You would like to express this in terms of the average bit energy. Now we know that E p by log 2 M is your
Student: E b
Professor: E sub b, right.

(Refer Slide Time: 17:07)

So E sub p can be substituted by that, and the result becomes: P sub e less than or equal to M minus 1 times Q of the square root of, in brackets, log 2 M into E sub b by N zero,

(Refer Slide Time: 17:29)

Ok.
Student: You have written E p is equal to log...
Professor: Yes, E b is E p upon log 2 M, because E p is the energy for k bits, with k equal to log of M to the base 2. So E p divided by k is the average bit energy, right? So that is the result we get for the performance of M-ary orthogonal coherent demodulation

(Refer Slide Time: 18:00)

making use of the union bound.


(Professor – student conversation ends)
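The union bound derived above is simple to evaluate numerically. Here is a small sketch, with function names of my own choosing, of the bound P_e ≤ (M − 1) Q(√(log₂M · E_b/N₀)):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_pe(M, ebn0_db):
    """Union bound on symbol error probability for coherent M-ary
    orthogonal signaling: P_e <= (M - 1) * Q(sqrt(log2(M) * Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)        # dB -> linear
    return (M - 1) * Q(math.sqrt(math.log2(M) * ebn0))

# At a fixed Eb/N0 well above the threshold discussed later, the bound
# falls as M grows, matching the behavior of the plotted curves.
for M in (2, 4, 16, 64, 256):
    print(M, union_bound_pe(M, 8.0))
```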

Now, this becomes an exact expression asymptotically. That is, as the signal to noise ratio E b by N zero is increased, it becomes a very close, and in the limit exact, relation as E b by N zero tends to infinity. Therefore the union bound is not too bad, right? Why is it so?

Because, after all, remember what kind of events we counted incorrectly: the event that two or more matched filters other than the mth simultaneously produce an output larger than that of the mth. That probability becomes smaller and smaller as the signal to noise ratio becomes larger and larger, right? Therefore the possibility of counting such an event twice or more becomes remoter and remoter, right?

Therefore this becomes a true asymptotic expression for the error probability, right? Of course, for smaller values of signal to noise ratio there is a considerable amount of approximation; or, for that matter, for smaller values of M.

(Professor – student conversation starts)


Student: (inaudible) with M?
Professor: It is not direct, but that is fine.
Student: How is p m prime given m equal to Q of root E p by N zero?
Professor: Ok, this result, what I am saying is, it has to be precisely the same as what we obtained for the binary f s k case.
Student: Does the distance between the two

(Refer Slide Time: 19:39)

signals not matter?


Professor: Yes. Every signal has the same energy E sub p, right, these are orthogonal waveforms. We are only considering a pair of orthogonal waveforms now, which is exactly what the binary modulation scheme is, right? So there is no difference; basically this becomes a pair-wise calculation. m was transmitted but m prime gave the larger output. This is the only event we are considering. And this event is precisely as if you were looking at only binary f s k, or binary orthogonal signaling. Ok, any other doubts or questions?
Student: Sir, at high S N R this is approaching... (inaudible)
Professor: That is right; as S N R tends to infinity, asymptotically. When I say high, I am actually talking about an asymptotic result here, alright.
So what do you see from here? Let us see what happens to the error probability as M is increased. From this bound, what do we learn?
Student: As in what?
Professor: It decreases or increases?
Student: Increases
Professor: As m increases?
Student: M increases
Student: It is coming inside also.
Student: Increase
Student: Sir, it will become more and more exact

(Refer Slide Time: 20:50)

Student: Sir it will decrease


Professor: Ok. We will come to this point.
Student: Because M is inside also and
Student: Outside also
Professor: So one has to see which one is more important.
Student: Actually log 2 M will be dominant, it will decrease....
Professor: Ok the behavior will be different depending on
(Refer Slide Time: 21:06)

what the value of E b by N zero is.


Student: E b by N zero
Professor: Ok? We will see that. We will come back to this question; keep it in mind.
In fact, I will take up that question right away. Here is a plot. But before I give the plot, let me talk a little bit about the exact expression, which I will not prove here. I will leave that as an exercise for self-reading.

(Refer Slide Time: 21:38)

For M-ary orthogonal systems, the precise expression for the error probability is an extension of what we did for the binary case, but the final result is this. It looks like a complicated integral, Ok
(Refer Slide Time: 22:27)

this is a more precise expression. Not readable?


Student: It is not readable over here.
Professor: Ok, let me read it out for you, or maybe I can rewrite it. I may not be able to fit it into the space I have available. Is it more readable now?
Student: Yes
Professor: This is where I get into trouble now.
Student: So this N naught is not in the...
Professor: N sub zero
Student: Sir, N sub zero is not in the...
Professor: No, it is in the square root sign; everything is under the square root, Ok. I think it is still mixed up.
Student: How does it come, Sir?
Professor: This to the power M minus 1.
(Refer Slide Time: 23:31)

This square-bracketed expression is raised to the power M minus 1. This is only the,

(Refer Slide Time: 23:40)

is it more readable now?


Student: Sir, and there is one d x... (inaudible)
(Refer Slide Time: 23:47)

Professor: Yes
Professor: And they have not even told me. Alright?
Student: Is that 2 E p by N zero under the root? (inaudible)
Professor: Well, the final expression is not possible to get in closed form; it is again in the form of a complicated Q-function integral. So you cannot make out much from this, and you cannot directly compare this non-closed-form result with the closed form result, Ok.
Student: And the approach?
Professor: The approach is very similar to what we did for the binary case; it is an extension of that to the M-ary case. So please read that. Ok.
(Refer Slide Time: 24:51)

This I am leaving for self-reading. It is something that you can easily understand, therefore I will
Student: In the photocopy you have not given any...
Professor: I will give you that. I will give you the notes.
Now another result... alright, let me finish with this first. This is a plot

(Refer Slide Time: 25:10)

of this error probability, Ok. It is plotted here against E b by N zero for different values of M. So as you can see, as you increase the value of M, the curve tends to become lower and lower, provided you are above some
Student: E B by N naught is greater than
(Refer Slide Time: 25:37)

Professor: E b by N zero is greater than some minimum value, Ok, which is about minus 1 point 6 d B. That is the value at which all these curves intersect,

(Refer Slide Time: 25:51)

Ok.
(Professor – student conversation ends)

So the answer to that question that I asked you some time ago, as to what happens to the error probability as M tends to infinity: the answer is that the error probability tends to zero provided E b by N zero is to the right of this point, and this point is actually l n 2, which is how it comes to minus 1 point 6 d B; and it tends to 1 if your E b by N zero is less than this value.

So there is a threshold value of the signal to noise ratio. If you are above the threshold value of the signal to noise ratio, increasing M always improves the error probability for a given E b by N zero, Ok. And this you can prove theoretically; it is also proved in the book, and I would like you to read it up yourself.

(Professor – student conversation starts)


Student: Sir, which book?
Professor: Same, same. Ok.
And that is a very interesting, remarkable result; I hope you appreciate it. Because unlike what you might expect for M-ary modulations in general, particularly in the context of, let us say, the 2-D M-ary modulation schemes we will see later, the error probability does not increase with increasing values of M, which you might expect to happen, right? For orthogonal modulation schemes that does not happen. Because they are orthogonal, right?
Because every time you add a waveform, you are going into a new orthogonal direction in the signal space. There is a price to be paid for it, right? Can you guess what that price might be?
Student: Mathematical complexity is more.
Professor: More in terms of bandwidth. It will basically be in terms of bandwidth. You have got to
(Refer Slide Time: 27:59)

expand the bandwidth. Because you are adding newer and newer dimensions to your signal space, your signals must be such that they typically occupy higher and higher frequencies. This is of course not obvious from this discussion, since I have not gone into bandwidth calculations at all. But it is something one can appreciate to some extent: if you are adding dimensionality to the signal space by adding more and more orthogonal waveforms, then something has to be paid as a price, and that price is in terms of bandwidth.
Student: Sir, in this definition can we correlate it with (inaudible)?
Professor: With bandwidth
Student: Sir, are you saying more bandwidth will work? If they are having...
Professor: I have not given an exact argument because I have not gone into bandwidth calculations at all. But roughly, given a bandwidth, there are only a certain number of waveforms that you can design which will be mutually orthogonal. If you want to add some more waveforms, you necessarily go to a higher bandwidth, Ok. So look at it intuitively from that point of view: M-ary orthogonal modulation schemes are asymptotically very good as M tends to large values, but at a price which you may not be able to pay in real-life practice. So that is something to keep in mind.
Now there is another related result here, regarding simplex signals, which we have discussed before. Do you remember what the motivation for introducing simplex signals was?
Student: That is energy...that is more...
Student: Energy is more....
(Refer Slide Time: 29:36)

Student: Average energy is...


Professor: The motivation was, and this is a result you can now prove, I would like you to read it yourself, that an M-ary simplex set of waveforms with the optimum receiver, the coherent receiver, will yield the same error probability, the same error rate, as an orthogonal set, with an average energy which is less, Ok.
(Professor – student conversation ends)

And the relationship is: the average pulse energy will be 1 minus 1 by M, into E sub p.

(Refer Slide Time: 30:09)


So if the pulse energy required is E sub p for the orthogonal case, it will be this smaller value for the simplex case. Of course, as M tends to infinity, as M becomes larger and larger, this difference becomes smaller and smaller.

Ok, so asymptotically with M, both have similar performance. For finite M, the simplex set holds an advantage over the orthogonal set, Ok. So this result, again, I would like you to read from the book for yourself. It is a fairly simple proof and you can easily appreciate it. Finally, before I leave orthogonal signals, let me discuss one point regarding the error rate calculation. The error rate we have calculated so far is the symbol error rate, right? It is an interesting question to ask: how is the symbol error rate in this case related to the bit error rate?

(Professor – student conversation starts)


Student: The bit error rate will be 1 by M, 1 by...
Student: k times

(Refer Slide Time: 31:26)

Professor: No, don't jump to conclusions. Just think about it.


Student: Bit error rate will be 1 by log 2 M, because if we use Gray code then there will be only one bit in error.
Professor: There is no particular significance of Gray coding in M-ary orthogonal signaling. It is very important for two dimensional M-ary signaling, right? But for orthogonal signaling, everything is orthogonal to everything else, right? There is no nearest neighbor as such. Every orthogonal waveform is as close to, or as distant from, every other waveform as any other. There are no preferential distance relationships; the correlation is precisely zero, by orthogonality, right?
So there is no significance of Gray coding for the orthogonal schemes. Don't jump to conclusions, therefore. So what can we say about bit error versus symbol error probabilities?
Student: k minus 1
Professor: Leave the world of speculation and try to

(Refer Slide Time: 32:36)

see what result we can get.


Student: 1 by 2...
(Professor – student conversation ends)

Now let us go through some logical arguments. The first point is: when a symbol error occurs, any one of the other M minus 1 symbols could be obtained, right? The erroneous symbol
(Refer Slide Time: 33:17)

could be any of the other M minus 1 symbols

(Refer Slide Time: 33:30)

other than the true one. That is the first thing to appreciate, right?

So a symbol error implies the erroneous symbol could be any of the other M minus 1 symbols, right? Also, the probability of a particular symbol error is going to be the same no matter which symbol I consider, right? Whether I consider m equal to zero as the true index, or m equal to 1 as the true index, and so on, the symbol error probability is going to be the same, because all the waveforms are symmetrically placed with respect to each other in the signal space.
(Professor – student conversation starts)
Student: Do we also assume that the a priori distribution is the same?
Professor: Yes, assuming also that the a priori probabilities are the same: all signals have equal a priori probabilities, right? Therefore the second point is that this probability is independent of the particular symbol transmitted, right? That is, it does not matter what the true value of m is.

(Refer Slide Time: 34:50)

Therefore I can take a convenient value of m for which I can do the calculation more easily, and then the result will hold for any other value of m. Because of the symmetry of the problem, it does not matter whether I do it this way or any other way, right?
A convenient value of m to consider is m equal to zero.
Student: All zeroes
Professor: All zeroes, right. The index m equal to zero corresponds to a k-bit word of all zeroes, right? So choose m equal to zero, and the binary k-tuple implied by this is the bit sequence of all zeroes, let us say.
(Refer Slide Time: 35:39)

Then the possible erroneous words are all the other possible words, right? Because m equal to zero is the true value, the erroneous values will be m equal to 1 up to M minus 1, and they correspond to all bit sequences other than the all-zero bit sequence, right?
(Professor – student conversation ends)

Therefore I can now count how many different error patterns exist, right? For example, if the erroneous index is m equal to 1, the decoded word is zero zero zero... 1, and only one bit error occurs, right? If it is something else, two bit errors may occur, depending on what value of m has actually been selected, right?

Therefore the average number of bits that go in error will depend on what the decoded word is. What I can do is count the number of ones in all the other words, right, and divide by

(Professor – student conversation starts)


Student: m minus 1
Professor: M minus 1; that is the average value, the average number of bits which go in error if there is no specific preference for one erroneous word over another, on an average. That is the basic criterion.
(Professor – student conversation ends)
So fix any symbol, which we have done; we chose m equal to zero. And let us say the expected number of errors per symbol, I should say bit errors, the expected number of bit errors per symbol

(Refer Slide Time: 37:25)

is nothing but the expected or average number of 1s in all the non-zero symbols, right?

(Refer Slide Time: 37:45)

So I have to just count the total number of 1s in all the non-zero symbols and divide by M minus 1,
(Refer Slide Time: 38:06)

right? And it is very easy to check; this is an argument you just have to sit down with, work out, and verify.

One can count the total number of ones in all the non-zero symbols as equal to half of k into 2 to the power k.

(Refer Slide Time: 38:24)

This is almost obvious, right? The number of ones in a word ranges from a minimum of zero to a maximum of k, the average value is k by 2, and there are 2 to the power k such words, so the total is k by 2 into 2 to the power k, that is, k into 2 to the power (k minus 1).

(Professor – student conversation starts)


Student: (inaudible)
Professor: This is something you can verify more precisely. Dividing it by M minus 1

(Refer Slide Time: 38:50)

and substituting M equal to 2 to the power k, one can write this as k times 2 to the power (k minus 1),

(Refer Slide Time: 39:13)

I am writing half of 2 to the power k as 2 to the power (k minus 1), Ok, divided by (2 to the power k) minus 1, right? What is the significance of this?
(Professor – student conversation ends)
This is a figure which tells me the average number of bits that go wrong when the symbol is wrongly decided, right? That is, out of the k bits present in the word, so many bits will on average be wrong. Clear?

Therefore, what is the bit error probability? Divide by k, right; this is the number of bits that go wrong for every group of k bits. So per bit, the error probability, the bit error probability, which I will denote by P sub e comma b, will be 2 to the power (k minus 1) upon (2 to the power k) minus 1, into P sub e, the symbol error probability, which is what we have calculated earlier,

(Refer Slide Time: 40:13)

Ok. So that is the result; that is the connection between bit error probability and
(Refer Slide Time: 40:23)

symbol error probability.
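The counting argument above is easy to verify by brute force. A small sketch follows; the names and the choice k = 4 are mine:

```python
# Verify the bit-error accounting for M-ary orthogonal signaling:
# m = 0 (the all-zeros k-bit word) is the true symbol, and every
# erroneous symbol 1 .. M-1 is taken to be equally likely.
k = 4
M = 2 ** k

# Total number of 1s over all non-zero k-bit words.
total_ones = sum(bin(m).count("1") for m in range(1, M))
assert total_ones == k * 2 ** (k - 1)       # half of k * 2^k, as argued

avg_bit_errors = total_ones / (M - 1)       # average bit errors per symbol error
ratio = avg_bit_errors / k                  # P_e,b / P_e
print(ratio, 2 ** (k - 1) / (2 ** k - 1))   # both print 0.5333...
```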

(Professor – student conversation starts)


Student: Sir, how do you... (inaudible)
Student: No, cannot divide
Professor: I have divided by k. Why? Because this is the average number of bits that go wrong in a group of k bits, isn't it? So per bit, the average number which goes wrong is this divided by k.
Student: It tends to half.
Professor: It tends to half provided your k is very large, right. And if M becomes very, very large, then the bit error probability is precisely half the symbol error probability, Ok, quite true.
Ok, all the rest of the results concerning M-ary orthogonal signaling, namely the exact result, the corresponding results for the simplex family, and the non-coherent receiver results: they are very similar in nature, and I would like you to study them on your own, Ok. The detailed behavior is also similar to what we have discussed earlier. So read everything else regarding M-ary orthogonal signaling, for example the non-coherent receiver error calculations, Ok,
(Refer Slide Time: 41:49)

because I would like to finish with this topic today.


Student: You would be giving the photocopies?
Professor: Yes.
(Professor – student conversation ends)

Ok, finally let me come to error rates for M-ary signal constellations. When I say this, I usually automatically imply that I am talking of two dimensional signal constellations,

(Refer Slide Time: 42:28)

right, which are very popular. The reason they are popular is that one does not expand the bandwidth as the value of M increases. The bandwidth is more or less fixed, because you are using the same pulse shape no matter what the value of M is, and therefore the bandwidth is under control. Whereas in an orthogonal signaling scheme one necessarily has to design a group of waveforms which are orthogonal.

So more than one pulse shape is involved; in fact, M pulse shapes are involved. And the combined bandwidth of all these M pulse shapes will be quite large, right? In fact, that is a price which may at times be very difficult to pay. That is the reason you very rarely find M-ary orthogonal schemes with very large values of M in practical use, Ok, although theoretically they are of great significance, Ok.

So let us return to these two dimensional signal constellations, where we use, if you remember, basically

(Refer Slide Time: 43:30)

a single matched filter, right, and then the I and Q outputs of this complex matched filter are decoded to find out which symbol was actually transmitted, depending on which decision region the output lies in, right? Because the two-dimensional signal space is divided into a number of decision regions; each decoded point will lie in one of these decision regions, and our decoding strategy, the demodulation strategy, is to choose the symbol corresponding to the decision region in which the point, the complex output, actually lies, right?
Let me recapitulate the maths for you. The matched filter output before sampling is some
waveform like this.
(Refer Slide Time: 44:27)

We are using a single matched filter, either at passband or at baseband, Ok. Then, if it is a Nyquist pulse, which is an assumption we have always been making, u of l T is going to be a sub l plus, let us say, n prime sub l.

(Refer Slide Time: 44:53)

Because the only contribution that will come is from the Nyquist pulse in the corresponding symbol interval, from (l minus 1) T to l T, alright?

So a l is your,
(Refer Slide Time: 45:14)

a point in the signal constellation that you actually transmitted, and it comes along with complex Gaussian noise

(Refer Slide Time: 45:23)

or in other words this is actually a complex variable. It has a real part and an imaginary part, both uncorrelated; each is Gaussian, with zero correlation coefficient between them, Ok. So u sub l is a complex variable, the sum of a sub l and n prime sub l, where a sub l is the point in the signal constellation that was transmitted in the lth symbol interval,
(Refer Slide Time: 45:58)

right?

And n prime sub l is a complex valued Gaussian random variable

(Refer Slide Time: 46:13)

and each component of this complex variable has a variance of sigma squared, or N zero by 2.
(Refer Slide Time: 46:23)

Therefore, what can you say about u sub l? Let me write it as u sub l.

(Refer Slide Time: 46:33)

It is a complex Gaussian random variable whose mean is a sub l, right? Therefore, if we have M possible symbols, there are M possible density functions, one defined for each possible transmitted symbol, right?

Let me illustrate this for the case of, let us say, an 8-phase p s k system, which is an example of a two dimensional signal constellation of this kind. So we will have a situation like this, for 8-ary p s k.
(Refer Slide Time: 47:13)

These are the 8 points of the signal constellation diagram, lying on a circle, right, and around each of these mean values we will have a two dimensional Gaussian density function coming up, which describes the density function of u sub l,

(Refer Slide Time: 47:38)

right? Essentially this density function is imposed by the presence of the noise n sub l. And what we have to understand now is how we will do the error calculation for this kind of situation.

Of course, I will do this exercise only for 8-ary p s k; I will just tell you the result for general two dimensional modulation schemes. I don't think we have time to complete even this today, so I will start with this, quickly finish it next time, and then go on to the next topic we will be taking up, which is a brief introduction to information theory.
