
Nonlinear Dynamical Systems

Prof. Madhu N. Belur and Prof. Harish K. Pillai


Department of Electrical Engineering
Indian Institute of Technology, Bombay

Lecture - 28
Describing Function: Optimal Gain

Welcome everyone to this next lecture on describing functions. We have started with
describing functions; for that purpose we defined the meaning of the gain of an
operator, possibly non-linear. Hence we defined a notion of quasi-linearization, where
the word quasi refers to the linearization depending on the input, and amongst the
various linearizations we decided on a notion of optimality.

(Refer Slide Time: 00:48)

So, we said that the describing function is an optimal gain, optimal with respect to a
reference input. This reference input is a sin(omega t). Because the linearization depends
on the input, this is called a quasi-linearization, and the word gain is what makes it a
linearization. So, the describing function is nothing but an optimal quasi-linearization of
a non-linear system. We will recapitulate this in a little more detail.
(Refer Slide Time: 01:42)

So, what we did last time is we let an input r come into the non-linear system N, and this
gave the actual output. We are going to compare it with the output of another, linear
system, which we called y_approx. Then we said r is a signal of finite average power.
What is finite about it, and what is average about it? We integrate |r(t)|^2 from 0 to T;
that integral gives energy, we divide it by T and now this is power, the average power,
and we take the limit as T tends to infinity: average power of r = lim as T tends to
infinity of (1/T) * integral from 0 to T of |r(t)|^2 dt. If you do not let T tend to infinity,
this quantity will always be finite; there is no way it becomes infinite for a function r that
is bounded at every time instant. So, the word finite refers to the fact that as T tends to
infinity this quantity does not become unbounded: to say that this limit exists as T tends
to infinity means that the signal r has finite average power. So, we took an r like this, and
we assumed that N is bounded-input bounded-output stable, which means that the actual
output y_actual is also of finite average power. Then we asked what H we should fit so
that the error e has the least average power. If y_actual has finite average power, you
can always take H equal to 0 and the error is also of finite average power, because then
the error e is equal to y_actual; so one can try to do better and minimize this. At the best
approximation H, e will also be of finite average power, and now we want to find H such
that the average power of e is the least.
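As a quick numerical illustration of the average-power definition, here is a minimal Python sketch (my own, not from the lecture; the helper name average_power and the step size dt are just choices made for this sketch). It estimates the average power of r(t) = a sin(omega t) over longer and longer windows, and the values should settle near a^2/2, the finite average power of the sinusoid.

```python
import numpy as np

def average_power(r, T_final, dt=1e-3):
    """Estimate (1/T) * integral_0^T |r(t)|^2 dt by a Riemann sum."""
    t = np.arange(0.0, T_final, dt)
    return np.mean(np.abs(r(t))**2)

a, omega = 2.0, 3.0
r = lambda t: a * np.sin(omega * t)

# For a sinusoid the estimate should approach a**2 / 2 as T grows.
for T in (10.0, 100.0, 1000.0):
    print(T, average_power(r, T), a**2 / 2)
```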
So, that minimization problem is what makes the describing function the best solution, and it
turns out that the optimizer H is not unique: the optimal value is unique, but the
optimizer H is not unique. Any stable linear system which, at this particular sinusoid's
frequency, evaluates to the first-harmonic Fourier coefficients also turns out to be an
optimal quasi-linearization.

(Refer Slide Time: 04:50)

So, let us see this in a little more detail. To find the optimal quasi-linearization we assume N is
BIBO stable and time invariant; time invariance is required because we want y_actual to
also be a periodic signal when r(t) = a sin(omega t). We then calculate y(t) and expand it
into its Fourier series, y(t) = a_0 + a_1r sin(omega t) + a_1i cos(omega t) + a_2r sin(2 omega t)
+ a_2i cos(2 omega t) + and so on. We are going to take a_1r and a_1i; these we will use to
define the so-called describing function.
(Refer Slide Time: 06:14)

A describing function is a function eta(a, omega), depending on both a and omega, of
that particular non-linear system, defined as eta(a, omega) = (a_1r + j a_1i)/a; this is the
definition of the describing function. Now take any stable transfer function H, that is, H
has all its poles in the left half complex plane; if H evaluated at s = j omega is equal
to eta(a, omega), then any such H is an optimal quasi-linearization for that non-linear
system.
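To make this recipe concrete, here is a small Python sketch (my own illustration, not from the lecture; the function name describing_function and the sample count n are my choices). It extracts a_1r and a_1i by numerical integration over one period and forms eta(a, omega) = (a_1r + j a_1i)/a. As a check it is applied to the signum non-linearity, whose describing function should come out close to 4/(pi a), purely real.

```python
import numpy as np

def describing_function(N, a, omega, n=20000):
    """Numerically compute eta(a, omega) = (a_1r + j*a_1i)/a for a
    memoryless non-linearity y = N(u) driven by u(t) = a*sin(omega*t)."""
    t = np.linspace(0.0, 2 * np.pi / omega, n, endpoint=False)
    u = a * np.sin(omega * t)
    y = N(u)
    # First-harmonic Fourier coefficients of the periodic output y(t).
    a1r = 2.0 / n * np.sum(y * np.sin(omega * t))
    a1i = 2.0 / n * np.sum(y * np.cos(omega * t))
    return (a1r + 1j * a1i) / a

a, omega = 2.0, 1.0
print(describing_function(np.sign, a, omega))   # approx 4/(pi*a) + 0j
print(4 / (np.pi * a))
```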

So, one can notice that there is a lot of non-uniqueness here. In other words, you are given
this particular point in the complex plane, and any stable transfer function whose Nyquist
plot passes precisely through this point qualifies; it has to pass through this point
precisely at s = j omega and not at any other frequency. Any such transfer function
qualifies as an optimal quasi-linearization. So, now we are going to see some more
examples of describing functions, or describing functions of some more non-linearities.
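One concrete way to see this non-uniqueness (again my own sketch, not from the lecture; the helper name stable_H_through and the first-order form are assumptions of this sketch) is to construct, for a desired complex value eta and frequency omega_0, a stable H(s) = (b1 s + b0)/(s + p) with H(j omega_0) = eta. The pole location p > 0 is a free choice, so infinitely many such H exist.

```python
import numpy as np

def stable_H_through(eta, omega0, p=1.0):
    """Return (num, den) of a stable first-order H(s) = (b1*s + b0)/(s + p)
    satisfying H(j*omega0) = eta.  Any p > 0 works, showing non-uniqueness."""
    x, y = eta.real, eta.imag
    b0 = x * p - y * omega0
    b1 = x + y * p / omega0
    return (b1, b0), (1.0, p)

eta, omega0 = 0.8 - 0.3j, 2.0
num, den = stable_H_through(eta, omega0, p=5.0)
s = 1j * omega0
H_at_jw = (num[0] * s + num[1]) / (s + den[1])
print(H_at_jw, eta)   # the two should agree
```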
(Refer Slide Time: 07:34)

We already evaluated from first principles, by using the Fourier coefficients, the describing
function of the signum non-linearity, whose input-output graph looks like this. We had some
debate whether at u equal to 0 the output should be 0 or +1 or -1; as I said, the Fourier
coefficients do not depend on the value of y at just one point, but on the function in an
aggregate sense, for the purpose of calling this particular non-linearity an odd
non-linearity. It is also memoryless; because it is memoryless, we could express y as a
graph of u instead of showing the dependence on time t.

In addition to being time invariant, it is memoryless, and in addition it is also an odd
function; that is what was helpful in saying that the describing function in such a case is
a real function, and because it is memoryless it is only a function of the amplitude a. So,
we already saw that the describing function graph looks like this in terms of a: for very
small amplitude of the incoming signal, the amplification is very high, while for large
amplitude the amplification is very low. So, we are going to see some more examples,
for example the saturation non-linearity now.
(Refer Slide Time: 09:12)

For example, consider the saturation non-linearity. How do we expect its graph to look? Let
us say it has slope 5 as long as the input u varies within the range plus or minus delta,
and beyond that it has saturated, so it saturates to 5 times delta. So, let us try to already
draw the graph of the describing function as a function of a. Do we expect the describing
function to be real? Yes, we expect it to be real because this is an odd function of u: if we
change u to its negative, the value of y becomes just the negative.

This is why we can call this function an odd function. In addition to being time invariant
and memoryless, it is also odd. Because it is memoryless the describing function is a
function of only a, and because it is an odd non-linearity there is only a real part; the
imaginary part is equal to 0. Since the slope is 5 for all amplitudes up to delta, if you give
a sin(omega t) as input and the amplitude is less than or equal to delta, then the output is
just scaled by 5; because of that we expect the describing function to be a constant 5
there. But beyond that, you see, there is more and more clipping going on. What exactly
is the clipping? For a equal to, say, 10, the output has got saturated; this is what we saw
briefly in the previous lecture. If the amplitude is high, the signal is clipped over a larger
and larger fraction of the period; that is why we can say that the describing function is
monotonically decreasing for amplitude larger than delta. So, we will see an exact
formula for this, in fact a formula with a little more generality. Even though the
derivation is pretty cumbersome, with lots of careful manipulation, by keeping track of at
what value of time t the output saturates, one should be able to integrate explicitly and
find this out.

(Refer Slide Time: 12:04)

So, let us just reproduce the formula from Vidyasagar for a non-linearity that looks like
this: up to delta it has slope m_1, and beyond that it has slope m_2. This is an odd
non-linearity; the two outer lines have slope m_2 and the middle segment has slope m_1.
For such an input-output map, the saturation non-linearity is a special case in which
m_2 = 0, and the dead zone is another special case, in which m_1 = 0. One can also think
of this as a hardening spring, a spring whose spring constant goes on increasing: if the
slope m_2 is larger than m_1, and m_1 has the interpretation of a spring constant, then
one can think of this as a spring that hardens as it is extended more and more. Of course,
these are approximations; in reality the hardening is gradual, whereas here, suddenly, for
amplitude larger than delta, some aspect of the signal encounters an amplification of
m_2, while for all lower values of amplitude the amplification is just m_1. So, how do
we expect the describing function to look? Up to a = delta it is equal to m_1, after which
it comes down to m_2 if m_2 is lower; it either comes down or goes up, depending on
whether m_2 is smaller or larger than m_1. What exactly is the closed-form expression?
For this we will reproduce a formula that has been calculated carefully. So, let me just
write it here. We decided to reproduce the formula for the describing function; it involves
a good amount of careful calculation, but the resulting formula is not very unexpected.

(Refer Slide Time: 14:40)

So, it is for this non-linearity that we are trying to find the describing function. For input
u outside the range minus delta to delta, the slope is m_2, and inside this range the slope
is just m_1. So, for this particular example the formula goes as follows.
(Refer Slide Time: 15:00)

So, it has been the practice to use a look-up table, where we use the ready-made formula
and apply it to our example; this formula is what takes a good amount of labour to prove.
But once it is proved it is extremely handy, and one often uses a look-up table of
describing functions of many standard non-linearities. This one I have taken from
Vidyasagar's book on nonlinear systems analysis.

So, this is the formula. Of course, one would ask: over what range of a does this formula
hold? That is not difficult to answer, because when the amplitude is less than or equal to
delta the describing function is just equal to m_1, and it is for a greater than delta that
this formula is applicable. One could check whether the two expressions give the same
value at a = delta. We do not expect the describing function of this non-linearity to
become discontinuous as a function of the amplitude a. Why? Because when a = delta,
there is zero amount of the signal that gets magnified by slope m_2, and for a slightly
more than delta there is an infinitesimally small amount that gets amplified by m_2;
hence we expect continuity at a = delta, and that is what one can indeed verify by putting
a = delta into this formula and checking whether it is equal to m_1. Of course, this
formula looks pretty complicated, so it turns out that it gets simplified if we use another
function f(x), defined in this particular way for x less than or equal to 1, and taken to be
equal to 1 for x greater than 1. So, one can use this formula, and then the entire difficult
part gets absorbed into just the function f(delta/a). It is expected that delta/a will play a
role, because the amplitude at which the new slope starts acting affects where the
formula changes. So, this is the formula if we use the expression f(x) for this intermediate
part. We will see what this evaluates to for the saturation non-linearity, which is the case
m_2 = 0, m_1 = 1.
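The combined expression itself was only shown on the slide; from the special cases quoted here (m_1 for a <= delta, saturation for m_2 = 0, dead zone for m_1 = 0) it is consistent to write eta(a) = m_2 + (m_1 - m_2) f(delta/a), but treat that exact form as my reconstruction rather than a quote from Vidyasagar. The Python sketch below evaluates it, checks continuity at a = delta, and compares it against a brute-force first-harmonic computation of the piecewise-linear element.

```python
import numpy as np

def f(x):
    """f(x) = (2/pi)*(arcsin(x) + x*sqrt(1 - x**2)) for x <= 1, else 1."""
    x = np.minimum(x, 1.0)
    return 2.0 / np.pi * (np.arcsin(x) + x * np.sqrt(1.0 - x**2))

def eta_piecewise(a, m1, m2, delta):
    # Reconstructed closed form: slope m1 inside |u| <= delta, m2 outside.
    return m2 + (m1 - m2) * f(delta / a)

def eta_bruteforce(a, m1, m2, delta, n=20000):
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    u = a * np.sin(t)
    inner = np.clip(u, -delta, delta)
    y = m1 * inner + m2 * (u - inner)        # the piecewise-linear element
    return 2.0 / n * np.sum(y * np.sin(t)) / a

m1, m2, delta = 1.0, 0.25, 1.0
print(eta_piecewise(delta, m1, m2, delta))   # continuity check: equals m1
for a in (2.0, 5.0):
    print(eta_bruteforce(a, m1, m2, delta), eta_piecewise(a, m1, m2, delta))
```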

(Refer Slide Time: 17:34)

Let us go with the standard saturation non-linearity, where the slope is equal to 1 over the
range minus delta to delta, with delta also equal to 1. With the formula that we have
written here, let us see what this evaluates to: eta(a) = f(1/a), where
f(x) = (2/pi)[sin^{-1}(x) + x(1 - x^2)^{1/2}]. So, one can apply this formula for this special
case. So, let us just draw a graph; f(x) itself looks like this.
(Refer Slide Time: 18:44)

f(x) as a function of x looks like this. So, what we have is just f(1/a), and that too for a
larger than delta; this gives us the formula for the describing function of the saturation.
One can plot this explicitly, for example in Scilab, and check that this is indeed the case;
if time permits, we will plot this in Scilab and show it in this course. Now, one can also
work out the describing function for the so-called dead zone non-linearity. What is the
dead zone non-linearity? A non-linearity whose input-output graph looks like this; it is
also memoryless, time invariant and odd. It has some slope, say m_2, or say equal to 1,
outside a certain range, but inside that range it is just dead: there is no output response
seen as long as the input stays within the range plus or minus 1. So, here we expect that
the gain is initially 0 and then it saturates to 1, the outer slope, for very large amplitude,
because the zone over which it is dead becomes a very small fraction of the input; since
it is a very small fraction, we expect that the describing function eventually tends to 1.
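A compact way to see this numerically is the following sketch (my own, not from the lecture). It uses the complementarity dead_zone(u) + sat(u) = u for unit slope and unit zone, which is an assumption I am adding here rather than something stated in the lecture; with it, the dead-zone describing function is 1 - f(1/a), which starts at 0 and tends to 1 as a grows.

```python
import numpy as np

def f(x):
    x = np.minimum(x, 1.0)
    return 2.0 / np.pi * (np.arcsin(x) + x * np.sqrt(1.0 - x**2))

# Dead zone of unit width and unit outer slope: dz(u) = u - clip(u, -1, 1),
# so its describing function is 1 - f(1/a): zero up to a = 1, tending to 1.
for a in (1.0, 2.0, 5.0, 50.0):
    print(a, 1.0 - f(1.0 / a))
```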

So, this is how one can check, by substituting into that particular formula. The next thing we
will do is take an example of a non-linearity which has some memory, namely the jump
hysteresis, and we will derive the formula for it. That is our first example where the
describing function turns out to be a complex function: it has an imaginary part as well.
(Refer Slide Time: 21:06)

So, consider the so-called jump hysteresis. What is the jump hysteresis? Suppose this is x;
this is the system whose input, which we earlier called u, is called x for this purpose, and
whose output is y. Whether x is increasing or decreasing is what decides whether we are
on this curve or that curve: this is the case when x dot is less than 0, and this is the case
when x dot is positive. We might ask what happens when x dot is equal to 0. Of course,
for the reference signal x(t) = a sin(omega t), x dot is equal to 0 only at isolated points,
and at those times the output is jumping from one curve to the other.

What about the slopes? This slope is m and that slope is also m; the jump amount is equal
to 2b. So, we will say y(t) = m x(t) + b for x dot positive, and y(t) = m x(t) - b for x dot less
than 0. Let us evaluate the describing function from first principles for this particular
hysteresis, which we will call the jump hysteresis. What is the jump about it? When x has
increased fully and starts decreasing, the output y jumps down by an amount 2b; on the
other hand, after x has decreased fully and starts increasing again, the output jumps up
by an amount 2b. So, for this particular non-linearity, we will derive the describing
function, both the real part and the imaginary part. While we do this, we will note some
properties of the describing function. What are these properties?
(Refer Slide Time: 23:44)

So, let us just recap the describing function computation procedure, which will give us some
very important properties. Describing Function Computation Procedure: evaluate the
output y(t); find its Fourier series, in particular the first-harmonic coefficients a_1r and
a_1i (we are not interested in the other harmonics); these are the two things that we
require. And then define the describing function eta, which in general can depend on
both a and omega, as eta = (a_1r + j a_1i)/a.

So, now notice that if two non-linearities are added to each other, then the outputs just get
added, y_1 + y_2. The Fourier series of the sum of two signals is just the sum of the two
Fourier series: the Fourier coefficient extraction procedure is linear in its argument. That
is what makes this particular step in the procedure linear in the non-linearity as well, and
the linearity applies not just to the addition of two signals y_1 + y_2 but also to the
scaling of a signal by a static constant: if a non-linearity just gets scaled by a constant k,
then the Fourier series coefficients get scaled by the same amount, and hence the
describing function also just gets scaled by the same amount k. So, what does this
particular property mean?
(Refer Slide Time: 26:20)

If you have non-linearity N_1 and non-linearity N_2, and you have done a lot of work to
calculate their describing functions, and now you are told that the output is the sum of
the outputs of the two non-linearities, then the describing function of this big block, let
us call it the describing function of N_1 + N_2, again a function of a and omega, turns
out to be nothing but eta_{N_1+N_2}(a, omega) = eta_{N_1}(a, omega) +
eta_{N_2}(a, omega). Why did we conclude this? Because the output here is nothing but
the sum of the two outputs. Now, if you are given the Fourier series coefficients of these
two, is it very difficult to extract the Fourier series coefficients of their sum? No. Let us
call this map F, the map that takes a signal y and gives you a_0, a_1r, a_1i, a_2r, a_2i
and so on. This map is linear in the signal y: if you multiply y by a constant k, so that at
every time instant the value y(t) is scaled to k y(t), then k y just goes to k a_0, k a_1r,
k a_1i and so on. Similarly, y_1 + y_2 goes to a_01 + a_02 and so on, where a_01 and
a_02 are the corresponding coefficients of y_1 and y_2.

So, to say that this map F, which takes a periodic signal y and gives you its Fourier
coefficients, is linear in the signal means that these Fourier coefficients just get added.
Of course, for this the space of periodic signals should be a vector space, so that the sum
makes sense: you should take two signals which are periodic with the same period T;
when you add them you again get a signal that is periodic with that period, and if you
scale one by a constant k it is again periodic with the same period. That is what allows us
to say that, if one has done a lot of work to calculate the describing functions of the
non-linearities N_1 and N_2, the describing function of the net non-linearity N_1 + N_2,
as defined in this block, is just the sum of the two describing functions. And that is
because the procedure for calculating the describing function goes through the Fourier
series coefficient calculation, and this Fourier coefficient extraction procedure happens
to be linear. Why is it linear? Because it involves an integration operation, and that
integration operation is linear in its argument, the signal y. So, how is this useful? We
will quickly see that this is a very useful property.

(Refer Slide Time: 30:01)

So, suppose we have calculated the describing function of a standard saturation
non-linearity, with output y(t) and linear range [-1, 1]. Now we say that this is not
actually what we wanted; in fact we wanted the output scaled by a constant k equal to 2,
say. The linear range [-1, 1] is not changed by that; what changes is the slope: this is
y_2, in which the slope is 2. So, notice that this particular non-linearity and the original
one are closely related: the output just has to be scaled by k equal to 2 to get the new
non-linearity. Because of this scaling property, if the describing function of the original
has been calculated, like we did before, with the value equal to 1 over the linear range,
all we have to do is scale it by 2: the linear range up to a = 1 is the same, but the
describing function now starts from 2. So, notice that this is just the earlier one
multiplied by 2. This was an example where the non-linearity is just scaled; let us take
another one where we add two non-linearities.

(Refer Slide Time: 32:05)

One non-linearity happens to be just multiplication by 10, another non-linearity happens to
be the standard saturation non-linearity; the input is a sin(omega t), this output is y_1
while this is y_2, and the two are added to give the net output y. Let us plot both
describing functions on the same graph; both non-linearities are odd, memoryless and
time invariant. One of them is the constant 10, and the other starts at 1 and comes down
towards 0; notice that this graph is not drawn to scale. So, this one is for the non-linearity
N_1, which of course is a linearity in this case, and this one is for the non-linearity N_2.
What about the net one? What is the describing function of the non-linear map from the
reference input to y? It is just the sum of both: it starts at 11 and comes down towards 10.

So, to say that we can just add these two describing functions as functions of a is what
makes the describing function linear in its argument, the non-linearity. We are going to
use this very crucially to find the describing function of the jump hysteresis. Of course,
one can do it from first principles also, but we are going to do almost that, and we will
also benefit from understanding this particular structure that describing functions have
with respect to each other.
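As a quick numerical sanity check of this additivity (again my own sketch, not part of the lecture; dfn and the chosen amplitudes are illustrative), one can compute the first-harmonic gain of u -> 10*u + sat(u) directly and compare it with 10 plus the describing function of the saturation alone.

```python
import numpy as np

def dfn(N, a, n=20000):
    """Brute-force first-harmonic (describing function) gain of y = N(u)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    u = a * np.sin(t)
    y = N(u)
    return (2.0 / n) * (np.sum(y * np.sin(t)) + 1j * np.sum(y * np.cos(t))) / a

sat = lambda u: np.clip(u, -1.0, 1.0)          # standard saturation
combined = lambda u: 10.0 * u + sat(u)         # N_1 + N_2 from the example

for a in (0.5, 2.0, 10.0):
    print(dfn(combined, a).real, 10.0 + dfn(sat, a).real)   # should match
```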

(Refer Slide Time: 34:21)

So, let us take the jump hysteresis graph again; recall that this is our jump hysteresis.
When the input is increasing, the output follows the amplification by m except that it is
shifted up by an amount b, and when the input is decreasing the shift is down by b, with
the same scaling by the constant m.

(Refer Slide Time: 34:49)


So, let us first consider the case m equal to 1. The input is a sin(omega t); the output has
two parts, one is the input itself, but shifted. After the input has gone up to its peak, there
is a shift down by an amount 2b: an amount b brings it back to the original graph and
another amount b brings it b lower. From there it shifts up again, so this is how the graph
looks. Notice that this is the superposition of two graphs, which we will draw on another
page.

This is the time axis, omega t, and this is the output: a sin(omega t) + b for x dot greater
than 0, and a sin(omega t) - b for x dot less than 0. Notice that we have taken the
amplification equal to 1; that is why we are able to see that it shifts by an amount 2b,
plus b on one side and minus b on the other. So, we will write this as the sum of two
things; it is better that we write it on the same figure. This jump component, we can say,
is equal to b here, -b there, and again b here.

So, when does this switch sign? It switches sign when the derivative of sin(omega t)
changes sign, and the derivative of sin(omega t) is nothing but cos(omega t) (times
omega). So, notice that this added component is actually the signum function applied to
cos(omega t): the conclusion I am trying to draw from this figure is that we have the
signal a sin(omega t), and what we have added to it is b times the signum function
applied to cos(omega t). How did we conclude that? We noted that this is our original
signal a sin(omega t); when a sin(omega t) is increasing, namely from here up to here, the
output is shifted up by an amount b. Why is the shift by an amount b?
(Refer Slide Time: 37:54)

Recall this figure: this was our jump hysteresis. When x dot was positive, the input was
scaled by m (m equal to 1 for now) and the output was also shifted up by an amount b.
As soon as x dot goes from positive to negative, the output jumps down by an amount 2b
and follows this curve here, which is again a scaling by m but an amount b below the
line through the origin with slope m. This branch is an amount b below, and the other
branch is an amount b above, so that the total jump is exactly 2b, and it is symmetric
about the origin. It is not symmetric about the x-axis or the y-axis, because of the
dependence on x dot; but along this axis there is an amount 2b between the branches, an
amount b on each side. In that sense it is symmetric.

So, what does this mean, coming back to this figure? After sin(omega t) has reached its
peak and starts decreasing, the shift is down by an amount 2b, so that it comes to an
amount b lower than a sin(omega t). But to say that the derivative of sin(omega t) has
changed sign is nothing but to say that the function cos(omega t) has changed its sign:
cos(omega t) changing its sign means exactly that the derivative of sin(omega t) has
changed its sign.

So, this means that we are adding b where cos(omega t) is positive, adding minus b where
cos(omega t) is negative, and again adding b where cos(omega t) is positive. This is
nothing but the signum function operated on cos(omega t). What this means is that the
describing function of the jump hysteresis can be calculated very easily by applying the
signum non-linearity to the derivative of sin(omega t), and the derivative of sin(omega t)
is nothing but cos(omega t) (up to the positive factor omega).

(Refer Slide Time: 40:04)

So, the describing function eta(a, omega) of the jump hysteresis is equal to the constant m,
the scaling, which we have taken equal to 1, plus j times the describing function of the
signum non-linearity, which we found earlier was equal to 4/(pi a). That is,
eta(a, omega) = m + j 4/(pi a). Why did we bring this in? Because we saw that the
signum non-linearity was being applied to the cosine signal, cos(omega t), and the
imaginary part comes precisely from the first-harmonic coefficient of the cos(omega t)
term: this part is nothing but a_1i, while the other part is nothing but a_1r. This
coefficient is the same one we noted as the describing function of the signum
non-linearity, but at that time the non-linearity was odd and memoryless, so it appeared in
the real part only; now it comes with the cosine term and hence we have multiplied it by
j here. So, this is how the jump hysteresis describing function looks. We are no longer
able to plot the describing function simply as a function of a; we have to plot it in the
complex plane.

So, I missed one thing: notice that this term should be scaled by the amount b, because the
jump is not between plus and minus 1 but between plus b and minus b. Hence we
multiply by b, giving eta(a, omega) = m + j 4b/(pi a). This is the describing function in
the complex plane. If m is positive, then for a close to 0 the imaginary part is very large,
and it comes down like this towards the real axis as a tends to infinity: as the amplitude
of the signal tends to infinity, the jump amount is relatively very small and the map
amounts to just an amplification by m, so the describing function comes onto the real
axis. This is the imaginary axis of the complex plane, and this is the real part. The
significance of plotting the describing function in the complex plane will become clear
very soon, when we use describing functions to find periodic orbits. Because the
describing function is complex here, it is no longer real like it was for the odd
memoryless non-linearities so far. Here the describing function depends on m, a and b:
for the jump hysteresis, 2b was the amount by which the output jumps, m was the slope
for the case that x dot is positive or negative, and a was the amplitude of the input signal
a sin(omega t).

So, when we plot this, m is some positive number, which is what I have taken here; for
a equal to 0 the describing function is some number with a very large imaginary part, and
the imaginary part decreases as a increases, finally coming down to the real axis as a
tends to infinity. What is the reason that it comes to the real axis as a tends to infinity?
We have to go back to the plot of the jump hysteresis.

(Refer Slide Time: 43:50)


So, when the amplitude is very large, one can think of the jump amount 2b as a very small
fraction of the total signal, because the output switches by the amount 2b no matter
whether the amplitude is 10 or 100 or 1000. Hence the imaginary part is going to be very
small. The imaginary part itself appears because the non-linearity is no longer memoryless
(it depends on whether x dot is positive or negative), and also because it is not an odd
memoryless non-linearity.
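A brute-force check of this expression is straightforward (my own sketch, not from the lecture; the helper name hysteresis_df and the sample count are my choices). The simulation implements y = m*x + b*sign(x_dot) over one period and compares the extracted first harmonic with m + j*4b/(pi*a); the imaginary part indeed shrinks as a grows.

```python
import numpy as np

def hysteresis_df(a, m, b, omega=1.0, n=20000):
    """First-harmonic gain of the jump hysteresis y = m*x + b*sign(x_dot)
    driven by x(t) = a*sin(omega*t)."""
    t = np.linspace(0.0, 2 * np.pi / omega, n, endpoint=False)
    x = a * np.sin(omega * t)
    xdot = a * omega * np.cos(omega * t)
    y = m * x + b * np.sign(xdot)
    a1r = 2.0 / n * np.sum(y * np.sin(omega * t))
    a1i = 2.0 / n * np.sum(y * np.cos(omega * t))
    return (a1r + 1j * a1i) / a

m, b = 2.0, 0.5
for a in (1.0, 5.0, 50.0):
    print(hysteresis_df(a, m, b), m + 1j * 4 * b / (np.pi * a))
```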

Now we are going to see how this describing function is to be used for finding periodic
orbits. That is the next important topic, and historically it has been the reason that
describing functions were investigated: finding the amplitude and frequency of limit
cycles. As we noted at the very start of this lecture and also a few lectures ago, robust
sustained oscillations can be implemented only by non-linear circuits, and we saw how
the saturation non-linearity, for example, can give us robust sustained oscillations with a
third-order linear system in the feed-forward path. So, let us come back to that example.

(Refer Slide Time: 45:17)

We have G(s), we have the saturation non-linearity, this is the output y,
G(s) = 1/((s + 1)(s + 2)(s + 3)), and we have some signal a sin(omega t) here. Let us first
take the case that this non-linearity is just a pure constant; that can also be thought of as
the case when the amplitude is smaller than the range over which the non-linearity is
linear. As long as the amplitude is within that range, the system is seen as a linear system
and one can think of the non-linearity as just a constant k. So, when would we have
periodic orbits in the closed loop? Assume that the external input is 0; then we have a
periodic orbit if some signal r here gets amplified by k, goes through the loop, comes
back, and is equal to r.

So, notice that r(t) gets multiplied by k, then G(s) acts on it, and there is also a minus sign
(the minus sign is applied before G(s) operates). That gives you back a signal, and this
signal and r are the same; ignoring the small difference between the two, this is nothing
but to say that (1 + G(s) k) r(t) = 0. We will say that a signal r(t) = a sin(omega t)
happens to be a periodic solution if it satisfies this differential equation. What is the
meaning of satisfying the differential equation? When we substitute a sin(omega t) into
this for r(t),

(Refer Slide Time: 47:36)

we get 1 + G(j omega) k = 0 at that omega. For linear systems, this is a necessary and
sufficient condition for a sin(omega t) to be a solution. Of course, one might say that
a sin(omega t) will be a solution even if this is not equal to 0, because we could just take
the amplitude a equal to 0; so we should insist that the amplitude a is not equal to 0, that
is, we are asking for a non-trivial periodic solution. So, when do we have non-trivial
periodic orbits in the system? We can have them if 1 plus the product of the gains around
the loop is equal to 0. This is an extremely important equation; what does this equation
mean?
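For the specific G(s) = 1/((s+1)(s+2)(s+3)) above, one can search numerically for a frequency and a constant gain k satisfying 1 + k G(j omega) = 0, i.e., a frequency where G(j omega) is purely real and negative. The grid search and helper below are my own illustration; the search should land near omega = sqrt(11) with k = 60.

```python
import numpy as np

den = np.poly([-1.0, -2.0, -3.0])          # coefficients of (s+1)(s+2)(s+3)
G = lambda w: 1.0 / np.polyval(den, 1j * w)

# Look for omega where G(j*omega) is real and negative; there
# 1 + k*G(j*omega) = 0 is satisfied with k = -1/G(j*omega).
w = np.linspace(0.1, 10.0, 100000)
g = G(w)
i = np.argmin(np.abs(g.imag))              # phase-crossover frequency
print(w[i], np.sqrt(11.0))                 # approx sqrt(11)
print(-1.0 / g[i].real)                    # approx 60
```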

(Refer Slide Time: 48:55)

Look at this: consider the gain from here, including the -1, and the gain from here; the net
gain around the loop should be equal to -1, or, if you start from r and include the minus
sign, the net gain you accumulate coming back to r should be equal to +1. It depends on
whether, in the definition of the loop gain, you take this minus sign into account or not.
So, r(t) goes through the loop and comes back as the same signal. This is a very
hand-waving way of understanding the argument: you take a signal at some point, it
undergoes a gain of k, it undergoes another gain, first the -1 and then G(j omega). Why is
G(j omega) the gain? Because that is the meaning of a transfer function: the transfer
function is precisely the gain when you feed an exponential signal into it, and when you
give sin(omega t) as the input, the amplification at steady state is exactly G(j omega); of
course, this requires that all the transients have died out. How this translates to the
describing function, and how for linear time invariant systems the amplitude a does not
play a role, is what we will see in the following lecture.
