
CHAPTER 2

Analysis of Modeling Equations

To be sure, mathematics can be extended to any branch of knowledge, including economics, provided the concepts are so clearly defined as to permit accurate symbolic representation. That is only another way of saying that in some branches of discourse it is desirable to know what you are talking about.
—James R. Newman

Most of the dynamical systems analyzed in this book are modeled by ordinary differential equations. A main use of a mathematical model is to predict the system's transient behavior. Unlike linear systems, where closed-form solutions can be written in terms of the system's eigenvalues and eigenvectors, finding analytical, exact solutions to nonlinear differential equations can be very difficult or impossible. However, we can approximate the solutions of differential equations with difference equations whose solutions can be obtained easily using a computer. In this chapter we discuss a number of methods for solving differential equations. The first class of methods allows us to graphically determine solutions to second-order differential equations. Then, we discuss numerical techniques for solving differential equations. After that, we present two methods of linear approximation of nonlinear systems.

2.1 State-Plane Analysis


The state plane, which is a state space of two-dimensional systems, is also called the phase plane.
Analysis in the state plane is applicable to linear and nonlinear systems modeled by second-
order ordinary differential equations. The state-plane methods are graphical procedures for
solving such equations. Using state-plane methods, one can graphically determine the transient
response of a second-order dynamical system. We now introduce relevant terms. Consider a
class of second-order systems whose dynamics are modeled by the differential equation

ẍ = f (x, ẋ). (2.1)

Define the state variables

x1 = x and x2 = ẋ.

Then, equation (2.1) can be represented as

ẋ1 = x2,
ẋ2 = f(x1, x2).    (2.2)

The plane with coordinates x1 and x2 is labeled as the state plane or phase plane. A solution
of (2.1) can be illustrated by plotting x versus t, or by plotting x2 versus x1 using t as a parameter.
To each value of the state x(t) = [x1 (t) x2 (t)]T there corresponds a point in the (x1 , x2 ) plane
called the representative point (RP). As t varies, the RP describes a curve in the state plane
called a trajectory. A family of trajectories is called a phase portrait.

2.1.1 Examples of Phase Portraits

◆ Example 2.1
Consider the spring–mass system shown in Figure 2.1, where x is the displacement
of the mass M from equilibrium, and k is the linear spring constant. The spring
is assumed to be linear; that is, it obeys Hooke’s law. Applying Newton’s law, we
obtain the equation of motion of the mass:

M ẍ + kx = 0.

Let ω2 = k/M. Then, the above equation can be represented in standard form:

ẍ + ω2 x = 0. (2.3)

Observe that

ẍ = dẋ/dt = (dẋ/dx)(dx/dt) = ẋ (dẋ/dx).

Using the above, we represent equation (2.3) as

ẋ (dẋ/dx) + ω²x = 0.

Separating the variables yields

(ẋ/ω²) dẋ + x dx = 0.

Figure 2.1 A spring–mass system of Example 2.1.






Figure 2.2 A phase portrait for the spring–mass system of Example 2.1.

Integrating the above gives


(ẋ/ω)² + x² = A²,

where A is the integration constant determined by the initial conditions. We see that
a phase portrait of equation (2.3) consists of a family of circles centered at the origin
of the ( x, ẋ/ω) plane as illustrated in Figure 2.2. Different circular trajectories, in
this plane, correspond to different values of the constant A representing initial
conditions. Note that


dx = ω (ẋ/ω) dt,

where ω > 0. Hence, for positive values of ẋ/ω the displacement x increases. This
means that for the values ẋ/ω > 0 the arrows on the trajectories indicate x moving
to the right. For negative values of ẋ/ω—that is, for the values of ẋ/ω below the
horizontal axis—the arrows on the trajectories indicate x decreasing by going to
the left. Therefore, as time increases, the RP moves in a clockwise direction along
the trajectory.
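The circular trajectories can be checked numerically. The Python sketch below (our own check, not from the text; the values of ω, A, and φ are arbitrary choices) evaluates the exact solution x(t) = A sin(ωt + φ) of (2.3) and confirms that (ẋ/ω)² + x² stays equal to A² at every instant:

```python
import math

# Exact solution of x'' + w^2 x = 0: x(t) = A sin(w t + phi).
w = 2.0             # assumed natural frequency sqrt(k/M)
A, phi = 1.5, 0.3   # assumed constants fixed by the initial conditions

for k in range(100):
    t = 0.1 * k
    x = A * math.sin(w * t + phi)
    xdot = A * w * math.cos(w * t + phi)
    r2 = (xdot / w) ** 2 + x ** 2   # should equal A^2 for every t
    assert abs(r2 - A ** 2) < 1e-9

print("invariant (xdot/w)^2 + x^2 = A^2 holds along the trajectory")
```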

◆ Example 2.2
In this example, we consider a model of a satellite shown in Figure 2.3. We assume
that the satellite is rigid and is in a frictionless environment. It can rotate about an
axis perpendicular to the page as a result of torque τ applied to the satellite by
firing the thrusters. The system input is the applied torque, and the system output is
the attitude angle θ. The satellite’s moment of inertia is I . Applying the rotational

Figure 2.3 A model of a rigid satellite of Example 2.2.

version of Newton’s law, we obtain the satellite’s equation of motion:

I d²θ/dt² = τ,

which can be represented in the following equivalent form:

d²θ/dt² = (1/I) τ.

Let

(1/I) τ = u;

then the equation describing the satellite’s motion becomes

d²θ/dt² = u.    (2.4)

We assume that when the thrusters fire, the thrust is constant; that is, u = ±U. For
simplicity, let U = 1. Using the state variables x1 = θ and x2 = θ̇, we obtain the
satellite’s state-space model:
      
[ẋ1; ẋ2] = [0 1; 0 0] [x1; x2] + [0; 1] u,    (2.5)

where u = ±1. We first consider the case when u = 1. With this control, the
system equations are

ẋ 1 = x2 ,
ẋ 2 = 1.

Solving the second equation yields

x2 = t + c1,

where c1 is an integration constant. We substitute x2 = t + c1 into ẋ1 = x2 to get ẋ1 = t + c1. Solving the above equation, we obtain

x1 = ∫ (t + c1) dt = ½(t + c1)² + c2 = ½x2² + c2,

where c2 is an integration constant. In the state plane the above equation represents
a family of parabolas open toward the positive x1 axis. The representative point
moves upward along these parabolas because ẋ 2 = 1; that is, ẋ 2 > 0. Typical system
trajectories for this case are shown in Figure 2.4.
Similarly, when u = −1, we have
ẋ 2 = −1.
Solving the above differential equation yields
x2 = −t + c̃1 ,
where c̃1 is an integration constant. Substituting x2 = −t + c̃1 into ẋ 1 = x2 and
then solving the resulting equation yields

x1 = − 12 x22 + c̃2 ,

where c̃2 is an integration constant. In the state plane, this equation represents

Figure 2.4 Trajectories of the system (2.5) for u = 1.

Figure 2.5 Trajectories of the system (2.5) for u = − 1.

a family of parabolas open toward the negative x1 axis. The representative point moves downward along these parabolas because ẋ2 = −1; that is, ẋ2 < 0. Typical system trajectories for this case are shown in Figure 2.5.
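The parabolic form of the trajectories can be verified directly. In the Python sketch below (a hypothetical check with arbitrarily chosen integration constants), we evaluate the closed-form solution for u = 1 and confirm that x1 − ½x2² remains constant along a trajectory:

```python
# For u = 1: x2 = t + c1 and x1 = (1/2)(t + c1)^2 + c2, so the quantity
# x1 - (1/2)*x2^2 must equal c2 at every instant.
c1, c2 = -1.0, 0.5   # hypothetical integration constants

for k in range(50):
    t = 0.1 * k
    x2 = t + c1
    x1 = 0.5 * (t + c1) ** 2 + c2
    assert abs((x1 - 0.5 * x2 ** 2) - c2) < 1e-12

print("trajectory stays on the parabola x1 = x2^2/2 + c2")
```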

In the above examples, the systems' trajectories were constructed analytically. Analytical methods
are useful for systems modeled by differential equations that can be easily solved. If the system
of differential equations cannot be easily solved analytically, we can use graphical or numerical
methods. We now describe a graphical method for solving second-order differential equations
or a system of two first-order differential equations.

2.1.2 The Method of Isoclines


An isocline is a curve along which the trajectories' slope is constant. In other words, an isocline is a locus (a set of points) of constant trajectory slope. To obtain equations that describe isoclines, we consider a system of two first-order differential equations of the form

ẋ1 = dx1/dt = f1(x1, x2),
ẋ2 = dx2/dt = f2(x1, x2).    (2.6)
Dividing the second of the above equations by the first one, we obtain

dx2/dx1 = f2(x1, x2)/f1(x1, x2).    (2.7)

Thus, we eliminated the independent variable t from the set of the first-order differential
equations given by (2.6). In equation (2.7), we consider x1 and x2 as the independent and

dependent variables, respectively. Let

m = m(x1, x2) = dx2/dx1.

Note that m is just the slope of the tangent to the trajectory passing through the point [x1 x2 ]T .
The locus of constant trajectory slope,

dx2/dx1 = m(x1, x2) = constant,

can be obtained from the equation

f 2 (x1 , x2 ) = m f 1 (x1 , x2 ).

The curve that satisfies the above equation is an isocline corresponding to the trajectories’ slope
m because a trajectory crossing the isocline will have its slope equal to m. The idea of the
method of isoclines is to construct several isoclines in the state plane. This then allows one
to construct a field of local tangents m. Then, the trajectory passing through any given point
in the state plane is obtained by drawing a continuous curve following the directions of the
field.
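As a quick illustration of this idea, the Python sketch below (our own check, not from the text) uses the system f1 = x2, f2 = −x1 that reappears in Example 2.3 and verifies that every point on the isocline f2 = m f1 indeed gives a trajectory slope of m:

```python
# Spring-mass system with w = 1: f1 = x2, f2 = -x1.
def f1(x1, x2): return x2
def f2(x1, x2): return -x1

m = 2.0                       # chosen trajectory slope (assumption)
for x1 in [0.5, 1.0, -1.5]:
    x2 = -x1 / m              # point on the isocline f2 = m*f1
    slope = f2(x1, x2) / f1(x1, x2)   # trajectory slope at that point
    assert abs(slope - m) < 1e-12

print("trajectory slope on the isocline equals m =", m)
```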

◆ Example 2.3
Consider the spring–mass system from Example 2.1. Defining the state variables
x1 = x and x2 = ẋ, we obtain the state equations

ẋ 1 = x2 ,
ẋ 2 = −ω2 x1 .

Let, for simplicity, ω = 1. Then,

dx2/dx1 = m = −x1/x2.

From the above we obtain the equation of isoclines:

x2 = −(1/m) x1.

This equation describes a family of straight lines through the origin. In Figure 2.6,
we show some of the isoclines and a typical trajectory for the above spring–mass
system. Note that in this example the trajectory slope is m, while the isocline slope
is −1/m. This means that trajectories of the spring–mass system are perpendicular
to the isoclines; that is, the trajectories are circles centered at the origin of the state
plane.

Figure 2.6 Isoclines and a typical trajectory for the spring–mass system of Example 2.3.

◆ Example 2.4
Consider the second-order differential equation

ẍ = −4η ẋ − 4x, (2.8)

where η is a parameter. We will find values of η for which there are isoclines along
which the trajectory slope and the isocline slope are equal. Such isoclines, if they
exist, are called the asymptotes.
Let x1 = x and x2 = ẋ. Then, the second-order differential equation given by (2.8)
can be represented as a system of first-order differential equations:

ẋ 1 = x2 ,
ẋ 2 = −4ηx2 − 4x1 .

The isocline equation is

dx2/dx1 = (−4ηx2 − 4x1)/x2 = m,

or, equivalently,

x2 = −(4/(m + 4η)) x1.    (2.9)

A trajectory slope is equal to an isocline slope if and only if

−4/(m + 4η) = m;

that is,

m2 + 4ηm + 4 = 0. (2.10)
The discriminant of (2.10) is

Δ = 4(η² − 1).

Hence, the condition for the existence of asymptotes is


|η| ≥ 1. (2.11)
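Condition (2.11) is easy to check numerically. The Python sketch below (a hypothetical helper of our own) solves equation (2.10) for the asymptote slopes and confirms that real solutions exist exactly when |η| ≥ 1:

```python
import math

# Real roots of m^2 + 4*eta*m + 4 = 0 exist iff the discriminant
# 16*(eta^2 - 1) is nonnegative, i.e. |eta| >= 1 (condition (2.11)).
def asymptote_slopes(eta):
    disc = 16.0 * (eta ** 2 - 1.0)
    if disc < 0:
        return []                              # no asymptotes
    r = math.sqrt(disc)
    return [(-4.0 * eta - r) / 2.0, (-4.0 * eta + r) / 2.0]

assert asymptote_slopes(0.5) == []             # |eta| < 1: none
assert asymptote_slopes(1.0) == [-2.0, -2.0]   # double root at m = -2
slopes = asymptote_slopes(1.25)                # |eta| > 1: two slopes
assert all(abs(m * m + 5.0 * m + 4.0) < 1e-9 for m in slopes)
```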

◆ Example 2.5
Consider the system of differential equations
ẋ 1 = x1 + x2 + 1,
ẋ 2 = −x2 + 2.
(a) Find the equation of the isocline corresponding to the trajectory slope m = 1.
(b) Does the given system have asymptotes? If yes, then write the equations of
the asymptotes.
The trajectory slope is

dx2/dx1 = m = (−x2 + 2)/(x1 + x2 + 1).

Hence, the equation of the isocline corresponding to the trajectory slope m = 1 is obtained by solving

(−x2 + 2)/(x1 + x2 + 1) = 1.

We obtain

x2 = −x1/2 + 1/2.

The trajectory slope is

m = (−x2 + 2)/(x1 + x2 + 1).    (2.12)
From (2.12), we obtain an equation of isoclines of the form

x2 = −(m/(m + 1)) x1 − (m − 2)/(m + 1).

An asymptote is the isocline for which its slope and the trajectory slope are equal.

Therefore, we can obtain the equations of asymptotes by solving


m = −m/(m + 1).
We obtain

m² + 2m = m(m + 2) = 0.

Hence, the equations of asymptotes are

x2 = 2 and x2 = −2x1 − 4.
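A quick numerical check of these asymptotes (our own sketch, not from the text): along x2 = 2 the slope field (2.12) gives slope 0, matching the line's own slope, and along x2 = −2x1 − 4 it gives slope −2:

```python
# Slope field of Example 2.5, equation (2.12).
def slope(x1, x2):
    return (-x2 + 2.0) / (x1 + x2 + 1.0)

for x1 in [0.0, 1.0, 3.0]:
    assert abs(slope(x1, 2.0)) < 1e-12              # asymptote x2 = 2
for x1 in [0.0, 1.0, 3.0]:
    x2 = -2.0 * x1 - 4.0
    assert abs(slope(x1, x2) + 2.0) < 1e-12         # asymptote x2 = -2x1 - 4

print("both asymptotes have matching field slopes")
```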

We can use a computer to draw phase portraits interactively. Below is an example of a MATLAB script that was used to sketch a phase portrait of the van der Pol equation shown in Figure 2.7.
t0=0;                  % initial time
tf=20;                 % final time
tspan=[t0 tf];         % integration interval for ode45
x0=[-4 -4]';           % initial condition
button=1;
% draw internal axes
p=4*[-1 0;1 0];
clf; plot(p(:,1),p(:,2))
hold on
plot(p(:,2),p(:,1))
axis(4*[-1 1 -1 1])
while(button==1)
   % click a new initial condition with the left mouse button;
   % press any other button to stop
   [t,x]=ode45(@my_xdot,tspan,x0);  % my_xdot is a user-supplied m-file
   plot(x(:,1),x(:,2))
   [x1,x2,button]=ginput(1);
   x0=[x1 x2]';
end
The van der Pol equation has the form
ẍ − μ(1 − x²)ẋ + x = 0,    (2.13)
where µ is a parameter. We can equivalently represent the van der Pol equation as
ẋ1 = x2,
ẋ2 = μ(1 − x1²)x2 − x1.    (2.14)

Figure 2.7 A phase-plane portrait of the van der Pol equation for μ = 1.

An inspection of the phase portrait of the van der Pol equation reveals the existence of a closed
trajectory in the state plane. This solution is an example of a limit cycle.
Definition 2.1 A limit cycle is a closed trajectory in the state plane, such that no other
closed trajectory can be found arbitrarily close to it.
Note that the differential equation modeling the spring–mass system does not have a limit cycle
because the solutions are not isolated. Limit cycles can only appear in nonlinear systems; they
do not exist in linear systems.
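The limit cycle of the van der Pol equation can also be observed without the interactive MATLAB session. The Python sketch below (a fixed-step fourth-order Runge–Kutta integrator, anticipating Section 2.2.5; the step size, horizons, and initial conditions are ad hoc choices of ours) integrates (2.14) with μ = 1 from a small and a large initial condition; both trajectories settle onto an oscillation of amplitude close to 2, the closed curve visible in Figure 2.7:

```python
# Van der Pol system (2.14) with mu = 1.
def f(x1, x2, mu=1.0):
    return x2, mu * (1.0 - x1 ** 2) * x2 - x1

def rk4_step(x1, x2, h):
    k1 = f(x1, x2)
    k2 = f(x1 + h/2 * k1[0], x2 + h/2 * k1[1])
    k3 = f(x1 + h/2 * k2[0], x2 + h/2 * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def peak_amplitude(x1, x2, h=0.01, transient=5000, steps=1000):
    for _ in range(transient):   # let the transient die out (50 time units)
        x1, x2 = rk4_step(x1, x2, h)
    amp = 0.0
    for _ in range(steps):       # record max |x1| over more than one period
        x1, x2 = rk4_step(x1, x2, h)
        amp = max(amp, abs(x1))
    return amp

# A small and a large initial condition approach the same closed trajectory.
assert abs(peak_amplitude(0.1, 0.0) - 2.0) < 0.1
assert abs(peak_amplitude(4.0, 4.0) - 2.0) < 0.1
```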

Theorem 2.1 (Bendixson) If f1 and f2 in (2.7) have continuous partial derivatives in a domain A bounded by a closed path C and if ∂f1/∂x1 + ∂f2/∂x2 does not vanish and is of constant sign in that domain, then (2.7) can have no limit cycle in that domain.
Proof We prove the theorem by contradiction. Equation (2.7) is satisfied on any of its trajectories. Therefore, if (2.7) has a limit cycle, then clearly

∮_C (f1 dx2 − f2 dx1) = 0.    (2.15)

We now apply Green's theorem, stated on page 674, to the vector field [−f2 f1]ᵀ to obtain

∮_C (f1 dx2 − f2 dx1) = ∬_A (∂f1/∂x1 + ∂f2/∂x2) dx1 dx2.    (2.16)

By (2.15), the right-hand side of (2.16) must vanish, which contradicts the hypothesis. Thus, if ∂f1/∂x1 + ∂f2/∂x2 does not vanish and is of constant sign in a domain A bounded by a closed path C, then no limit cycle can exist in A. The proof of the theorem is complete.

◆ Example 2.6
We apply Bendixson’s theorem to the van der Pol equation represented by (2.14).
We have

f1 = x2 and f2 = μ(1 − x1²)x2 − x1.

We calculate ∂f1/∂x1 + ∂f2/∂x2 to get

∂f1/∂x1 + ∂f2/∂x2 = μ(1 − x1²).
It follows from the Bendixson theorem that there are no limit cycles contained in the region |x1| < 1 because, for μ > 0, we have ∂f1/∂x1 + ∂f2/∂x2 > 0 in that region.

Bendixson’s theorem gives us a necessary condition for a closed curve to be a limit cycle. It
is useful for the purpose of establishing the nonexistence of a limit cycle. In general, it is very
difficult to establish the existence of a limit cycle. Sufficient conditions for the existence of a
limit cycle are discussed, for example, by Hochstadt [123, Chapter 7]. A method for predicting
limit cycles, called the describing function method, is presented in Section 2.5.

2.2 Numerical Techniques


In this section we discuss numerical techniques for solving differential equations. We begin our
presentation of numerical techniques that allow us to determine approximate solutions to the
given system of differential equations with a method based on a Taylor series expansion. This
section is based on the Class Notes of Prof. V. B. Haas [109].

2.2.1 The Method of Taylor Series


The basis for many numerical methods for solving differential equations is the Taylor formula.
Suppose we are given a state-space model of the form
ẋ(t) = f (t, x(t), u(t)), x(t0 ) = x 0 . (2.17)
The method we will discuss depends on the fact that if f (t, x, u) is differentiable with respect
to its n + m + 1 variables x1 , . . . , xn , u 1 , . . . , u m , t in some (n + m + 1)-dimensional region D
and if x(t) is a solution of (2.17), where (t0 , x 0 , u(t0 )) lies in D, then we can expand x(t) into
a Taylor series to obtain
x(t) = x0 + ẋ(t0)(t − t0) + ẍ(t0)(t − t0)²/2 + ···

= x0 + f(t0, x0, u(t0))(t − t0)
  + [ ∂f(t0, x0, u(t0))/∂t + Σᵢ₌₁ⁿ (∂f(t0, x0, u(t0))/∂xi) ẋi
    + Σⱼ₌₁ᵐ (∂f(t0, x0, u(t0))/∂uj) u̇j ] (t − t0)²/2 + ···.

If we stop after the qth term of the Taylor series solution, then the remainder after q terms of the Taylor series gives us an estimate of the error. We can expect the Taylor series solution to give a fairly accurate approximation of the solution of the state-space equations only in a small interval about t0. Thus, if Δt is some small time interval such that the Taylor series converges for t ∈ [t0, t0 + Δt], we can use the expansion

x(t) = x0 + (dx(t0)/dt)(t − t0) + (d²x(t0)/dt²)(t − t0)²/2 + ···

to evaluate x(t1) = x(t0 + Δt). Then, we can start over again solving ẋ(t) = f(t, x(t), u(t)) with the new initial condition x(t1). We illustrate the method with simple examples.

◆ Example 2.7
Consider the first-order differential equation
dx/dt = t² − x²,  x(1) = 1.
Expanding x = x(t) into a Taylor series about x(1) = 1 yields

dx(1) d 2 x(1) (t − 1) 2 d 3 x(1) (t − 1) 3


x(t) = x(1) + (t − 1) + 2
+
dt dt 2 dt 3 3!
4
d x(1) (t − 1) 4
+ ···.
dt 4 4!
Because dx/dt = t² − x², we obtain by successive differentiations

d²x/dt² = 2t − 2x (dx/dt),
d³x/dt³ = 2 − 2(dx/dt)² − 2x (d²x/dt²),
d⁴x/dt⁴ = −6 (dx/dt)(d²x/dt²) − 2x (d³x/dt³),
⋮
Evaluating the above at t = 1 yields

dx(1)/dt = 0,
d²x(1)/dt² = 2,
d³x(1)/dt³ = −2,
d⁴x(1)/dt⁴ = 4,
⋮

Hence,

x(t) = 1 + (t − 1)² − (t − 1)³/3 + (t − 1)⁴/6 + ···.
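These coefficients can be double-checked by evaluating the successive-differentiation formulas directly at the expansion point (a Python sketch of our own, mirroring the derivatives computed above):

```python
# Expansion point t = 1 with x(1) = 1 for dx/dt = t^2 - x^2.
t, x = 1.0, 1.0
d1 = t**2 - x**2             # dx/dt
d2 = 2*t - 2*x*d1            # d^2x/dt^2
d3 = 2 - 2*d1**2 - 2*x*d2    # d^3x/dt^3
d4 = -6*d1*d2 - 2*x*d3       # d^4x/dt^4

assert (d1, d2, d3, d4) == (0.0, 2.0, -2.0, 4.0)
# Taylor coefficients of (t-1)^2, (t-1)^3, (t-1)^4 in the series above:
assert abs(d2/2 - 1.0) < 1e-12
assert abs(d3/6 + 1/3) < 1e-12
assert abs(d4/24 - 1/6) < 1e-12
```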

◆ Example 2.8
In this example, we consider the second-order differential equation

d²x/dt² + 3 (dx/dt) + 6t x = 0,

subject to the initial conditions x(0) = 1 and dx(0)/dt = 1. Using d²x/dt² = −3 (dx/dt) − 6t x, we obtain by successive differentiation

d³x/dt³ = −3 (d²x/dt²) − 6t (dx/dt) − 6x,
d⁴x/dt⁴ = −3 (d³x/dt³) − 6t (d²x/dt²) − 12 (dx/dt),
d⁵x/dt⁵ = −3 (d⁴x/dt⁴) − 6t (d³x/dt³) − 18 (d²x/dt²),
⋮
Taking into account the initial conditions, we evaluate all coefficients at t = 0 to get

d²x(0)/dt² = −3,
d³x(0)/dt³ = 3,
d⁴x(0)/dt⁴ = −21,
d⁵x(0)/dt⁵ = 117,
⋮
Hence,

x(t) = x(0) + (dx(0)/dt) t + (d²x(0)/dt²) t²/2 + (d³x(0)/dt³) t³/3! + (d⁴x(0)/dt⁴) t⁴/4! + (d⁵x(0)/dt⁵) t⁵/5! + ···

= 1 + t − (3/2)t² + (1/2)t³ − (7/8)t⁴ + (117/120)t⁵ + ···.

We now illustrate the method of Taylor series when applied to a system of first-order differential
equations.

◆ Example 2.9
We consider the following system of two first-order equations:

dx1/dt = f1(t, x1, x2),
dx2/dt = f2(t, x1, x2).

The initial conditions at t = t0 are x1(t0) = x10, x2(t0) = x20. Having the initial conditions, we can determine the values dx1(t0)/dt and dx2(t0)/dt from the original equations. We then differentiate dx1/dt and dx2/dt to obtain

d²x1/dt² = (d/dt) f1(t, x1, x2),
d²x2/dt² = (d/dt) f2(t, x1, x2),

where the right-hand sides are functions of t, x1, x2, dx1/dt, and dx2/dt.

Now, d²x1/dt² and d²x2/dt² can be calculated by substituting known quantities on the right-hand sides of the above equations. By successive differentiation and substitution, we can determine d³x1/dt³ and d³x2/dt³ as well as higher derivatives at t = t0. The solution to the given system is then computed as

x1(t) = x10 + (dx1(t0)/dt)(t − t0) + (d²x1(t0)/dt²)(t − t0)²/2 + (d³x1(t0)/dt³)(t − t0)³/3! + ···,
x2(t) = x20 + (dx2(t0)/dt)(t − t0) + (d²x2(t0)/dt²)(t − t0)²/2 + (d³x2(t0)/dt³)(t − t0)³/3! + ···.

The Taylor series method can be programmed on a computer. However, this is not a very efficient
method from a numerical point of view. In the following, we discuss other methods for numerical
solution of the state equations that are related to the method of Taylor series. We first present
two simple numerical techniques known as the forward and backward Euler methods.

2.2.2 Euler’s Methods


Suppose that we wish to obtain the solution of the modeling equation
ẋ(t) = f (t, x(t), u(t))
in the time interval [t 0 , t f ], subject to the initial condition x(t 0 ) = x 0 and the input vector u(t).
We divide the interval [t0, tf] into N equal subintervals of width

h = (tf − t0)/N.

We call h the step length. We set

tk = t0 + kh.

To proceed further, we approximate the derivative dx(t)/dt at time tk by

dx(tk)/dt ≈ (x(tk+1) − x(tk))/h

and write

(x(tk+1) − x(tk))/h = f(tk, x(tk), u(tk)).
Hence, for k = 0, 1, . . . , N − 1, we have

x(tk+1) = x(tk) + h f(tk, x(tk), u(tk)).

The above is known as the forward Euler algorithm. We see that if h is sufficiently small, we
can approximately determine the state at time t1 = t 0 + h from the initial condition x 0 to get
x(t1 ) = x 0 + h f (t 0 , x 0 , u(t 0 )).
Once we have determined the approximate solution at time t1 , we can determine the approxi-
mate solution at time t2 = t1 + h, and so on. The forward Euler method can also be arrived at
when instead of considering the differential equation ẋ(t) = f (t, x(t), u(t)), we start with its
equivalent integral representation:

x(t) = x0 + ∫_{t0}^{t} f(τ, x(τ), u(τ)) dτ.    (2.18)

We then use the rectangular rule for numerical integration that can be stated as follows. The area under the curve z = g(t) between t = tk and t = tk + h is approximately equal to the area of the rectangle ABCD as shown in Figure 2.8; that is,

∫_{tk}^{tk+h} g(t) dt ≈ h g(tk).    (2.19)

Applying the rectangular rule of integration to (2.18), we obtain the forward Euler method.
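A minimal Python sketch of the forward Euler recursion (applied here, as an assumption, to the test equation dx/dt = t x² − x, x(0) = 1 that is used in the examples later in this section; its exact solution is x(t) = 1/(1 + t)):

```python
# Forward Euler: x_{k+1} = x_k + h*f(t_k, x_k).
def forward_euler(f, t0, tf, x0, N):
    h = (tf - t0) / N          # step length
    t, x = t0, x0
    for _ in range(N):
        x = x + h * f(t, x)
        t = t + h
    return x

f = lambda t, x: t * x**2 - x
x_coarse = forward_euler(f, 0.0, 1.0, 1.0, 10)
x_fine = forward_euler(f, 0.0, 1.0, 1.0, 1000)

# shrinking the step length moves the answer toward the true value 0.5
assert abs(x_fine - 0.5) < abs(x_coarse - 0.5)
assert abs(x_fine - 0.5) < 1e-3
```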

Figure 2.8 Illustration of the rectangular rule of integration.

The backward Euler method differs from the forward Euler method in the way we approximate the derivative:

dx(tk)/dt ≈ (x(tk) − x(tk−1))/h.
This approximation gives the backward Euler formula:

x(tk+1) = x(tk) + h f(tk+1, x(tk+1), u(tk+1)).

In the above formula, the unknown x(tk+1) appears on both sides of the equation. For this reason, we say that the backward Euler method is an implicit integration algorithm. Implicit integration algorithms require additional computation to solve for x(tk+1). For further discussion of this issue, we refer to Parker and Chua [222].
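One simple way to carry out that additional computation is a fixed-point iteration started from the forward Euler prediction. This Python sketch (our own illustration; for stiff problems a Newton iteration would normally be preferred) applies it to the same test equation dx/dt = t x² − x, x(0) = 1, whose exact solution is 1/(1 + t):

```python
# Backward Euler: solve x_next = x + h*f(t_next, x_next) at each step.
def backward_euler(f, t0, tf, x0, N, iters=20):
    h = (tf - t0) / N
    t, x = t0, x0
    for _ in range(N):
        t_next = t + h
        x_next = x + h * f(t, x)          # forward Euler as initial guess
        for _ in range(iters):            # fixed-point iteration
            x_next = x + h * f(t_next, x_next)
        t, x = t_next, x_next
    return x

f = lambda t, x: t * x**2 - x
assert abs(backward_euler(f, 0.0, 1.0, 1.0, 1000) - 0.5) < 1e-3
```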
Euler’s methods are rather elementary, and thus they are not as accurate as some of the more
sophisticated techniques that we are going to discuss next.

2.2.3 Predictor–Corrector Method


The predictor–corrector method, also known as Heun's method, is based on the trapezoidal rule for numerical integration. The trapezoidal rule states that the area under the curve z = g(t) between t = tk and t = tk + h is approximately equal to the area of the trapezoid ABCD as depicted in Figure 2.9; that is,

∫_{tk}^{tk+h} g(t) dt ≈ h (g(tk) + g(tk + h))/2.    (2.20)

Suppose we wish to find x(t) at t = t1 > t0. Applying the trapezoidal rule of integration to (2.17), we obtain

x(t1) = x0 + (h/2)( f(t0, x0, u(t0)) + f(t1, x(t1), u(t1)) ).
The above is an implicit algorithm, and it may be viewed as a merging of the forward and
backward Euler algorithms. If the function f is nonlinear, then we will, in general, not be able

Figure 2.9 The trapezoidal rule of integration.

to solve the above equation for x(t1) exactly. We attempt to obtain x(t1) iteratively. Let

h = tk+1 − tk,
m0 = f(t0, x0, u(t0)),
x1* = x0 + h f(t0, x0, u(t0)),
m1 = f(t1, x1*, u(t1)).

A first approximation to x(t1) is

x1* = x0 + h f(t0, x0, u(t0))

and is called a predictor. The next approximation is

x(t1) = x0 + (h/2)(m0 + m1)

and is called a corrector.


Suppose now that we wish to find x(t) at t = t f > t 0 . We first divide the interval [t 0 , t f ]
into N equal subintervals using evenly spaced subdivision points to obtain

t 0 < t 1 < t2 < · · · < t N = t f .

Once we have found x(t1 ), we can apply the same procedure to find x(t2 ), and so on.

◆ Example 2.10
We use the predictor–corrector method to find x(1), where the step length h equals
0.5, for the following differential equation:

dx/dt = t x² − x,  x(0) = 1.    (2.21)

The calculations are summarized in Table 2.1.

Table 2.1 Solving the Differential Equation (2.21) Using the Predictor–Corrector Method

t0      x0      t1      m0      x1*     m1      m = (m0 + m1)/2     Δx
0       1       0.5     −1.0    0.5     −0.375  −0.688              −0.344
0.5     0.656   1.0     −0.441  0.435   −0.246  −0.344              −0.172
1.0     0.484
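The two rows of Table 2.1 can be reproduced with a few lines of Python (a sketch of the predictor–corrector step of our own; the function f implements equation (2.21)):

```python
# One Heun (predictor-corrector) step.
def heun_step(f, t0, x0, h):
    m0 = f(t0, x0)
    x1_star = x0 + h * m0             # predictor
    m1 = f(t0 + h, x1_star)
    return x0 + h * (m0 + m1) / 2.0   # corrector

f = lambda t, x: t * x**2 - x         # equation (2.21)
h, t, x = 0.5, 0.0, 1.0
for _ in range(2):                    # two steps: t = 0.5, then t = 1.0
    x = heun_step(f, t, x, h)
    t += h

assert abs(x - 0.484) < 1e-3          # matches Table 2.1: x(1) ≈ 0.484
```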

We could use x(t1) obtained from the corrector formula to construct a next approximation to x(t1). In general, we can use the formula

x⁽ᵏ⁾(t1) = x0 + (h/2)[ f(t0, x0, u(t0)) + f(t1, x⁽ᵏ⁻¹⁾(t1), u(t1)) ]

to evaluate x(t1) until two successive iterations agree to the desired accuracy. This general algorithm is referred to as a second-order predictor–corrector method.
An n-dimensional system of first-order differential equations can be worked analogously. We illustrate the procedure for n = 2, where

dx/dt = f(t, x, y),  x(t0) = x0,
dy/dt = g(t, x, y),  y(t0) = y0.

Let

m0 = f(t0, x0, y0),
n0 = g(t0, x0, y0).

Then, the predictor equations are

x1* = x0 + m0 h,
y1* = y0 + n0 h.

Let

m1 = f(t1, x1*, y1*),
n1 = g(t1, x1*, y1*).

Then, the corrector equations are

x(t1) = x0 + ((m0 + m1)/2) h,
y(t1) = y0 + ((n0 + n1)/2) h.
Using the above, we can generate the corresponding formulas for tk .
We will now show that the predictor–corrector method yields a solution that agrees with the Taylor series solution through terms of degree two. We consider the scalar case, where f is assumed to be differentiable with respect to its arguments and ∂²f/∂t∂x = ∂²f/∂x∂t. First observe that m1 = f(t1, x1*) can be written as

f(t1, x1*) = f(t0 + h, x0 + m0h)
≈ f(t0, x0) + [ft fx][h; m0h] + (1/2)[h m0h][ftt ftx; fxt fxx][h; m0h],

where ft = ∂f/∂t, fx = ∂f/∂x, and so on. Substituting the above into the corrector equation

yields

x(t1) = x0 + (h/2)(m0 + m1)
= x0 + (h/2)(m0 + m0) + (h/2)( [ft fx][h; m0h] + (1/2)[h m0h][ftt ftx; fxt fxx][h; m0h] )
= x0 + m0h + (h²/2)(ft + fx m0) + (h³/4)(ftt + 2ftx m0 + fxx m0²).
The Taylor series expansion, on the other hand, gives

x(t1) = x0 + ẋh + ẍ h²/2 + x⁽³⁾ h³/6 + ···
= x0 + f(t0, x0) h + (ft + fx ẋ) h²/2 + (ftt + 2ftx ẋ + fxx ẋ² + fx ẍ) h³/6 + ···,
6
where f t , f x , ẋ, and so on, are all evaluated at t = t 0 . Comparing the right-hand sides of the last
two equations, we see that the solution obtained using the predictor–corrector method agrees
with the Taylor series expansion of the true solution through terms of degree two.

2.2.4 Runge’s Method


Runge’s method is an improvement over the predictor–corrector method in the sense that it
agrees with the Taylor series solution through terms of degree three. The method depends upon
integration by means of Simpson's rule, which says that the area under the curve x = g(t) between t0 and t0 + h is given approximately by

∫_{t0}^{t0+h} g(t) dt ≈ (h/6)(x0 + 4x1 + x2),    (2.22)

where x0 = g(t 0 ), x1 = g(t 0 +h/2), and x2 = g(t 0 +h). The above formula is derived by passing
a parabola through the three points (t 0 , x0 ), (t 0 + h/2, x1 ), and (t 0 + h, x2 ) and assuming that the
area under the given curve is approximately equal to the area under the parabola, as illustrated
in Figure 2.10. To verify Simpson’s formula, we translate the axes so that in the new coordinates
(t˜, x̃) we have t = t 0 + h/2 + t˜, x = x̃. In the new coordinates the three points become

(−h/2, x0 ), (0, x1 ), (h/2, x2 ).

The parabola through the above points is described by the equation

x̃ = a t˜2 + bt˜ + c.

Figure 2.10 Integration using Simpson's rule.

Hence

x0 = a h²/4 − b h/2 + c,
x1 = c,
x2 = a h²/4 + b h/2 + c.

Using the above three equations, we calculate

c = x1,
b = (x2 − x0)/h,
a = (2/h²)(x0 − 2x1 + x2).

Then, the area under the parabola is

∫_{−h/2}^{h/2} (a t̃² + b t̃ + c) dt̃ = ∫_{−h/2}^{h/2} [ (2/h²)(x0 − 2x1 + x2) t̃² + ((x2 − x0)/h) t̃ + x1 ] dt̃
= (h/6)(x0 + 4x1 + x2).

If we are given the differential equation

dx/dt = f(t, x),  x(t0) = x0,

we integrate it to obtain

x(t1) = x(t0) + ∫_{t0}^{t1} f(s, x(s)) ds.

Then, we approximate the integral

∫_{t0}^{t1} f(s, x(s)) ds

by

(h/6)(m0 + 4m1 + m3),

where

h = t1 − t0,
m0 = f(t0, x0),
m1 = f(t0 + h/2, x0 + m0h/2),
m2 = f(t0 + h, x0 + m0h),
m3 = f(t0 + h, x0 + m2h).

Thus, we may write

x(t1) = x0 + Δx,

where Δx = mh, and m = (1/6)(m0 + 4m1 + m3).

◆ Example 2.11
We use Runge’s method to find x(1), where the step length h equals 0.5, for the
differential equation (2.21). We found x(1) using the predictor–corrector method in
Example 2.10. The calculations for Runge’s method are summarized in Table 2.2.

Table 2.2 Solving the Differential Equation (2.21) Using Runge's Method

t0      x0      m0      m1      m2      m3      m       Δx
0       1       −1      −0.610  −0.375  −0.482  −0.654  −0.327
0.5     0.673   −0.446  −0.325  −0.248  −0.248  −0.332  −0.166
1.0     0.507
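Table 2.2 can likewise be reproduced in a few lines of Python (a sketch of our own, implementing the m0, m1, m2, m3 samples defined above for equation (2.21)):

```python
# One step of Runge's method.
def runge_step(f, t0, x0, h):
    m0 = f(t0, x0)
    m1 = f(t0 + h/2, x0 + m0*h/2)
    m2 = f(t0 + h, x0 + m0*h)
    m3 = f(t0 + h, x0 + m2*h)
    return x0 + h * (m0 + 4*m1 + m3) / 6.0

f = lambda t, x: t * x**2 - x         # equation (2.21)
h, t, x = 0.5, 0.0, 1.0
for _ in range(2):                    # two steps: t = 0.5, then t = 1.0
    x = runge_step(f, t, x, h)
    t += h

assert abs(x - 0.507) < 1e-3          # matches Table 2.2: x(1) ≈ 0.507
```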

Runge’s method can, of course, be used for systems of first-order differential equations. If, for
example,
dx/dt = f(t, x, y),  x(t0) = x0,
dy/dt = g(t, x, y),  y(t0) = y0,

then

x(t1) = x0 + Δx,
y(t1) = y0 + Δy,

where

Δx = (h/6)(m0 + 4m1 + m3),
Δy = (h/6)(n0 + 4n1 + n3).

In the above expressions,

m0 = f(t0, x0, y0),
m1 = f(t0 + h/2, x0 + m0h/2, y0 + n0h/2),
m2 = f(t0 + h, x0 + m0h, y0 + n0h),
m3 = f(t0 + h, x0 + m2h, y0 + n2h),
n0 = g(t0, x0, y0),
n1 = g(t0 + h/2, x0 + m0h/2, y0 + n0h/2),
n2 = g(t0 + h, x0 + m0h, y0 + n0h),
n3 = g(t0 + h, x0 + m2h, y0 + n2h).

2.2.5 Runge–Kutta Method


This method is an improvement over the Runge method, and it agrees with the Taylor series solution through terms of degree four; for this reason it is sometimes called the fourth-order Runge–Kutta algorithm. It is based on the use of the formula

x(t1) = x(t0) + (h/6)(m0 + 2m1 + 2m2 + m3),    (2.23)

where

m0 = f(t0, x0),
m1 = f(t0 + h/2, x0 + m0h/2),
m2 = f(t0 + h/2, x0 + m1h/2),
m3 = f(t0 + h, x0 + m2h).

Thus, we may write

x(t1) = x0 + Δx,

where Δx = mh, and m = (1/6)(m0 + 2m1 + 2m2 + m3).



◆ Example 2.12
We apply the Runge–Kutta method to find x(1), where the step length h equals 0.5, for the differential equation (2.21) that we used in Examples 2.10 and 2.11. Our calculations for the Runge–Kutta method are summarized in Table 2.3.

Table 2.3 Solving the Differential Equation (2.21) Using the Runge–Kutta Method

t0      x0      m0      m1      m2      m3      m       Δx
0       1       −1.0    −0.609  −0.668  −0.444  −0.666  −0.333
0.5     0.667   −0.444  −0.324  −0.328  −0.250  −0.333  −0.167
1.0     0.500
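The classical fourth-order Runge–Kutta step (2.23) is equally short in Python (our own sketch; two steps of h = 0.5 reproduce Table 2.3 for equation (2.21)):

```python
# One classical RK4 step, formula (2.23).
def rk4_step(f, t0, x0, h):
    m0 = f(t0, x0)
    m1 = f(t0 + h/2, x0 + m0*h/2)
    m2 = f(t0 + h/2, x0 + m1*h/2)
    m3 = f(t0 + h, x0 + m2*h)
    return x0 + h * (m0 + 2*m1 + 2*m2 + m3) / 6.0

f = lambda t, x: t * x**2 - x         # equation (2.21)
h, t, x = 0.5, 0.0, 1.0
for _ in range(2):                    # two steps: t = 0.5, then t = 1.0
    x = rk4_step(f, t, x, h)
    t += h

# matches Table 2.3 and the exact solution x(t) = 1/(1 + t) at t = 1
assert abs(x - 0.5) < 1e-3
```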

The above problem has been worked by the predictor–corrector, Runge's, and the Runge–Kutta methods. This problem can be solved exactly by means of the Taylor series. The Taylor series solution is

x(t) = 1 − t + t² − t³ + ··· .

The series converges to 1/(1 + t) for −1 < t < 1. A check on the differential equation shows that

x(t) = 1/(1 + t)

is indeed a solution for all t > −1. Thus, x(1) = 0.500.
The algorithms presented above require only one input point, x(tk ), at each step to compute
x(tk+1 ). Such algorithms are referred to as single-step algorithms. In the multistep algorithms, a
number of previous points are used, along with the corresponding values of f, to evaluate x(tk+1 );
that is, the multistep algorithms reuse past information about the trajectory when evaluating its
new point. In general, it may be difficult to decide which type of algorithm is more efficient
because the performance of a particular algorithm is problem dependent. Well-known multistep
algorithms are: Adams–Bashforth, Adams–Moulton, and Gear. More information about the
above-mentioned algorithms can be found in Parker and Chua [222, Section 4.1].

2.3 Principles of Linearization


The replacement of a nonlinear system model by its linear approximation is called linearization.
The motivation for linearization is that the dynamical behavior of many nonlinear system models
can be well approximated within some range of variables by linear system models. Then, we
can use well-developed techniques for analysis and synthesis of linear systems to analyze a
nonlinear system at hand. However, the results of analysis of nonlinear systems using their
linearized models should be carefully interpreted in order to avoid unacceptably large errors due
to the approximation in the process of linearization.
Figure 2.11 Linear approximation to a function y = h(x): the tangent line
y = y0 + (dh(x0)/dx)(x − x0) touches the curve at the operating point x0.

We begin our discussion of linear approximations of nonlinear systems by considering a
nonlinear element with a state variable x and an output variable y that are related by the equation

y = h(x),

where the function h : R → R is continuously differentiable; that is, h ∈ C¹. Let x0 be an
operating point. We expand h into a Taylor series about the operating point x0 to obtain

y = h(x) = h(x0) + (dh(x0)/dx)(x − x0) + higher-order terms.

Linearization of h(x) about the point x0 consists of replacing h by the linear approximation

y = h(x0) + (dh(x0)/dx)(x − x0) = y0 + (dh(x0)/dx)(x − x0),   (2.24)

where y0 = h(x0). Let Δy = y − y0 and Δx = x − x0. Then, we can represent (2.24) as

Δy = (dh(x0)/dx) Δx.

Over a small range of Δx, the line (2.24) is a good approximation to the curve y = h(x) in a
neighborhood of the operating point x0; see Figure 2.11 for an illustration of the above statement.
We now illustrate the process of linearization on the simple pendulum.

◆ Example 2.13
Consider the simple pendulum shown in Figure 2.12. Let g be the gravity con-
stant. The tangential force component, mg sin θ, acting on the mass m returns the
pendulum to its equilibrium position. By Newton’s second law we obtain

F = −mg sin θ.

The equilibrium point for the pendulum is θ = 0°. Note that F(0°) = 0. Therefore,

ΔF = F and Δθ = θ.

We also have

dF/dθ (0°) = −mg cos 0° = −mg.

Hence, the linearized model about the equilibrium position θ = 0° has the form

ΔF = −mg Δθ.

Thus, for small angular displacements Δθ, the force ΔF is proportional to the
displacement. This approximation is quite accurate for −π/4 ≤ θ ≤ π/4, as can be
seen in Figure 2.13.
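The accuracy claim is easy to check numerically. A small sketch, with the illustrative values m = 1 kg and g = 9.81 m/s² (assumed here, not given in the text):

```python
import math

# Relative error of the tangent-line model Delta F = -m*g*Delta theta
# against the exact characteristic F = -m*g*sin(theta) from Example 2.13
m, g = 1.0, 9.81
F = lambda th: -m*g*math.sin(th)   # exact nonlinear characteristic
F_lin = lambda th: -m*g*th         # linearization about theta = 0
rel_err = lambda th: abs(F(th) - F_lin(th))/abs(F(th))
print(f"{100*rel_err(0.1):.2f}%")          # 0.17%
print(f"{100*rel_err(math.pi/4):.2f}%")    # 11.07%
```

Even at the edge of the stated range, θ = π/4, the linear model is off by only about 11%, while near the operating point the error is a small fraction of a percent.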

Figure 2.12 Forces acting on the simple pendulum: the weight mg resolves into the
components mg cos θ and mg sin θ.

Figure 2.13 Linearizing the equation modeling the simple pendulum: the line of slope
−mg approximates F = −mg sin θ on [−π/2, π/2].


Figure 2.14 Tangent plane as a linear approximation for a function of two variables
y = h(x1, x2) at the operating point x0.

If h : Rⁿ → R—that is, y = h(x1, x2, . . . , xn), which means that the dependent variable depends
upon several variables—the same principle applies. Let

x0 = [x10 x20 ··· xn0]ᵀ

be the operating point. The Taylor series expansion of h about the operating point x0 yields

y − h(x0) = ∇h(x0)ᵀ(x − x0) + higher-order terms,

where

∇h(x0)ᵀ = [∂h/∂x1  ∂h/∂x2  ···  ∂h/∂xn] evaluated at x = x0.

Geometrically, the linearization of h about x0 can be thought of as placing a tangent plane onto
the nonlinear surface at the operating point x0, as illustrated in Figure 2.14.

2.4 Linearizing Differential Equations


We now consider a dynamical system modeled by a set of nonlinear differential equations:
dx1/dt = f1(x1, x2, . . . , xn, u1, u2, . . . , um),
dx2/dt = f2(x1, x2, . . . , xn, u1, u2, . . . , um),
...
dxn/dt = fn(x1, x2, . . . , xn, u1, u2, . . . , um).
We assume that the functions f i , i = 1, 2, . . . , n, are continuously differentiable. The above set
of differential equations can be represented in vector form as
ẋ = f (x, u). (2.25)
Let ue = [u1e u2e ··· ume]ᵀ be a constant input that forces the system (2.25) to settle into a
constant equilibrium state xe = [x1e x2e ··· xne]ᵀ; that is, ue and xe satisfy

f(xe, ue) = 0.

We now perturb the equilibrium state by allowing

x = xe + Δx,   u = ue + Δu.

Taylor's expansion yields

d(Δx)/dt = f(xe + Δx, ue + Δu)
         = f(xe, ue) + (∂f/∂x)(xe, ue) Δx + (∂f/∂u)(xe, ue) Δu + higher-order terms,

where

(∂f/∂x)(xe, ue) = [ ∂f1/∂x1 ··· ∂f1/∂xn ]
                  [    ···        ···    ]
                  [ ∂fn/∂x1 ··· ∂fn/∂xn ]  evaluated at x = xe, u = ue,

and

(∂f/∂u)(xe, ue) = [ ∂f1/∂u1 ··· ∂f1/∂um ]
                  [    ···        ···    ]
                  [ ∂fn/∂u1 ··· ∂fn/∂um ]  evaluated at x = xe, u = ue,

are the Jacobian matrices of f with respect to x and u, evaluated at the equilibrium point
[xeᵀ ueᵀ]ᵀ. Note that

dx/dt = dxe/dt + d(Δx)/dt = d(Δx)/dt

because xe is constant. Furthermore, f(xe, ue) = 0. Let

A = (∂f/∂x)(xe, ue)   and   B = (∂f/∂u)(xe, ue).

Neglecting higher-order terms, we arrive at the linear approximation

d(Δx)/dt = A Δx + B Δu.
dt
Similarly, if the outputs of the nonlinear system model are of the form
y1 = h 1 (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m ),
y2 = h 2 (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m ),
..
.
y p = h p (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m )
or in vector notation
y = h(x, u),
then Taylor’s series expansion can again be used to yield the linear approximation of the above
output equations. Indeed, if we let
y = ye + Δy,

then we obtain

Δy = C Δx + D Δu,

where

C = (∂h/∂x)(xe, ue) = [ ∂h1/∂x1 ··· ∂h1/∂xn ]
                      [    ···        ···    ]
                      [ ∂hp/∂x1 ··· ∂hp/∂xn ]  evaluated at x = xe, u = ue,  C ∈ R^(p×n),

and

D = (∂h/∂u)(xe, ue) = [ ∂h1/∂u1 ··· ∂h1/∂um ]
                      [    ···        ···    ]
                      [ ∂hp/∂u1 ··· ∂hp/∂um ]  evaluated at x = xe, u = ue,  D ∈ R^(p×m),

are the Jacobian matrices of h with respect to x and u, evaluated at the equilibrium point
[xeᵀ ueᵀ]ᵀ.

◆ Example 2.14
In this example, we linearize the model of Watt’s governor that we derived in
Section 1.7.1. Recall that the equations modeling the governor have the form
ẋ1 = x2,
ẋ2 = (1/2) N² x3² sin 2x1 − g sin x1 − (b/m) x2,
ẋ3 = (κ/I) cos x1 − τ/I.
We linearize the above model about an equilibrium state of the form

xe = [x1e 0 x3e]ᵀ.

Equating the right-hand sides of the above engine–governor equations to zero yields

x2e = 0,
N² x3e² = g / cos x1e,
cos x1e = τ/κ.
The Jacobian matrix of the engine–governor model evaluated at the equilibrium
state is

∂f/∂x |x=xe = [ 0                                 1       0                ]
              [ N²x3e² cos 2x1e − g cos x1e    −b/m    N²x3e sin 2x1e   ]
              [ −(κ/I) sin x1e                   0       0                ] .

We use the following trigonometric identities,

cos 2α = cos²α − sin²α,   sin 2α = 2 sin α cos α,

and the previously obtained relation, N²x3e² = g/cos x1e, to eliminate N²x3e² from
the Jacobian matrix. Performing the needed manipulations, we obtain

∂f/∂x |x=xe = [ 0                          1       0                 ]
              [ −g sin²x1e / cos x1e     −b/m    2g sin x1e / x3e  ]
              [ −(κ/I) sin x1e            0       0                 ] .
Hence, the linearized model of the engine–governor system can be represented as

[ Δẋ1 ]   [ 0                          1       0                 ] [ Δx1 ]
[ Δẋ2 ] = [ −g sin²x1e / cos x1e     −b/m    2g sin x1e / x3e  ] [ Δx2 ] .   (2.26)
[ Δẋ3 ]   [ −(κ/I) sin x1e            0       0                 ] [ Δx3 ]
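The closed-form entries of (2.26) can be cross-checked against a finite-difference Jacobian of the governor equations. The following sketch does so; the parameter values N, g, b, m, κ, τ, I are illustrative assumptions chosen so that an equilibrium exists, not values from the text:

```python
import math

# Cross-check of the Jacobian entries in (2.26) by forward differences.
N, g, b, m, kappa, tau, I = 1.0, 9.81, 1.0, 1.0, 1.0, 0.5, 1.0
x1e = math.acos(tau/kappa)            # from cos x1e = tau/kappa
x3e = math.sqrt(g/math.cos(x1e))/N    # from N^2 x3e^2 = g/cos x1e
xe = [x1e, 0.0, x3e]

def f(x):
    # Right-hand side of the engine-governor equations
    x1, x2, x3 = x
    return [x2,
            0.5*N**2*x3**2*math.sin(2*x1) - g*math.sin(x1) - (b/m)*x2,
            (kappa/I)*math.cos(x1) - tau/I]

eps = 1e-6
A_num = [[(f([xe[k] + eps*(k == j) for k in range(3)])[i] - f(xe)[i])/eps
          for j in range(3)] for i in range(3)]

# Closed-form entries from (2.26)
a21 = -g*math.sin(x1e)**2/math.cos(x1e)
a23 = 2*g*math.sin(x1e)/x3e
a31 = -(kappa/I)*math.sin(x1e)
print(abs(A_num[1][0] - a21) < 1e-3,
      abs(A_num[1][2] - a23) < 1e-3,
      abs(A_num[2][0] - a31) < 1e-3)   # True True True
```

The numerical Jacobian agrees with the symbolic entries, which also confirms that the trigonometric simplifications above were carried out correctly.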

2.5 Describing Function Method


The describing function method allows us to apply familiar frequency domain techniques, used in
the linear system analysis, to the analysis of a class of nonlinear dynamical systems. The method
can be used to predict limit cycles in nonlinear systems. The describing function method can
be viewed as a “harmonic linearization” of a nonlinear element. The method provides a “linear
approximation” to the nonlinear element based on the assumption that the input to the nonlinear
element is a sinusoid of known, constant amplitude. The fundamental harmonic of the element’s
output is compared with the input sinusoid to determine the steady-state amplitude and phase
relation. This relation is the describing function for the nonlinear element. The describing func-
tion method is based on the Fourier series. In our review of Fourier series, we use the notion
of a scalar product over a function space, which we discuss next. Our presentation of the scalar
product in the context of spaces of functions follows that of Lang [175]. The reader familiar with
the Fourier series may go directly to Subsection 2.5.3, where the describing function method
analysis is presented.

2.5.1 Scalar Product of Functions


Let C([a, b]) be the set of continuous functions on the interval [a, b] ⊂ R.

Definition 2.2 The scalar product of two real-valued functions u, v ∈ C([a, b]) is

⟨u, v⟩ = ∫_a^b u(t)v(t) dt.

Using simple properties of the integral, we can verify that the above scalar product satisfies the
following conditions:

1. For all u, v ∈ C([a, b]), we have ⟨u, v⟩ = ⟨v, u⟩.
2. If u, v, w are elements of C([a, b]), then ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩.
3. If x is a number, then ⟨xu, v⟩ = x⟨u, v⟩ = ⟨u, xv⟩.
4. For all v ∈ C([a, b]), we have ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ > 0 if v ≠ 0.

The functions u, v ∈ C([a, b]) are said to be orthogonal if ⟨u, v⟩ = 0. A set of functions from
C([a, b]) is said to be mutually orthogonal if each distinct pair of functions in the set is
orthogonal.

◆ Example 2.15
The functions ψn(t) = sin nωt, φn(t) = cos nωt, n = 1, 2, . . . , and φ0(t) = 1 form
a mutually orthogonal set of functions on the interval [t0, t0 + 2π/ω]. This can be
verified by direct integration. For example, for positive m and n such that m ≠ n,
we have

⟨ψm, ψn⟩ = ∫_{t0}^{t0+2π/ω} sin mωt sin nωt dt
         = (1/2) ∫_{t0}^{t0+2π/ω} (cos(m − n)ωt − cos(m + n)ωt) dt
         = (1/2ω) [sin(m − n)ωt/(m − n) − sin(m + n)ωt/(m + n)]_{t0}^{t0+2π/ω}
         = 0.   (2.27)

Note that m + n ≠ 0 because m and n are positive. Using similar computations,
we can show that for m ≠ n we obtain

⟨φm, φn⟩ = 0   (2.28)

and that for all non-negative m and positive n we have

⟨φm, ψn⟩ = 0.   (2.29)

We define the norm of v ∈ C([a, b]) to be

‖v‖ = √⟨v, v⟩.

◆ Example 2.16
Consider ψn(t) = sin nωt on the interval [t0, t0 + 2π/ω]. Then,

‖ψn‖² = ∫_{t0}^{t0+2π/ω} sin nωt sin nωt dt
      = ∫_{t0}^{t0+2π/ω} sin² nωt dt
      = (1/2) ∫_{t0}^{t0+2π/ω} (1 − cos 2nωt) dt
      = (1/2) [t − sin 2nωt/(2nω)]_{t0}^{t0+2π/ω}
      = π/ω.   (2.30)

Similarly,

‖φn‖² = π/ω,  n = 1, 2, . . . ,   (2.31)

and

‖φ0‖² = 2π/ω.   (2.32)

So far we have discussed only continuous functions. In many applications we have to work with
more general functions, specifically with piecewise continuous functions.
Definition 2.3 A function f is said to be piecewise continuous on an interval [a, b] if the
interval can be partitioned by a finite number of points, say ti , so that:
1. a = t 0 < t1 < · · · < tn = b.
2. The function f is continuous on each open subinterval (ti−1 , ti ).
3. The function f approaches a finite limit as the end points of each subinterval are approached
from within the subinterval; that is, the limits
lim_{h→0, h>0} f(ti − h)   and   lim_{h→0, h>0} f(ti + h)

both exist.
The graph of a piecewise continuous function is shown in Figure 2.15. In further discussion,
we assume that two piecewise continuous functions f and g are equal if they have the same values at the
points where they are continuous. Thus, the functions shown in Figure 2.16 are considered to
be equal, though they differ at the points of discontinuity.
Figure 2.15 A graph of a piecewise continuous function.

Figure 2.16 Two piecewise continuous functions that we consider to be equal.

Theorem 2.2 Let V be the space of functions that are piecewise continuous on the
interval [a, b]. Let f ∈ V. Then, ‖f‖ = 0 if and only if f(x) = 0 for all but a finite
number of points x in the interval [a, b].

Proof (⇐) It is clear that if f(x) = 0 except for a finite number of x ∈ [a, b], then

‖f‖² = ∫_a^b f²(x) dx = 0.

(⇒) Suppose f is piecewise continuous on [a, b], and let a = t0 < t1 < ··· < tn = b
be a partition of the interval [a, b] such that f is continuous on each subinterval [ti−1, ti]
except possibly at the end points. Suppose that ‖f‖ = 0. Then, also ‖f‖² = ⟨f, f⟩ = 0.
The above means that

∫_a^b f²(x) dx = 0,

where the integral is the sum of the integrals over the intervals [ti−1, ti]; that is,

∫_a^b f²(x) dx = Σ_{i=1}^n ∫_{ti−1}^{ti} f²(x) dx = 0.

Each integral satisfies

∫_{ti−1}^{ti} f²(x) dx ≥ 0.

Hence, each such integral must be equal to 0. The function f is continuous on (ti−1 , ti ).
Therefore,
f 2 ( x) = 0 for ti−1 < x < ti ,
which means that
f ( x) = 0 for ti−1 < x < ti .
Hence, f ( x) = 0 except at a finite number of points. The proof is complete.

The above theorem implies that the scalar product on the space V of piecewise continuous
functions is positive definite. In other words, for a piecewise continuous function f ∈ V, we
have ⟨f, f⟩ ≥ 0, and ⟨f, f⟩ = 0 if and only if f = 0 for all but a finite number of points in the
interval [a, b].

2.5.2 Fourier Series


We begin with the definition of a periodic function.

Definition 2.4 A function f : R → R is said to be periodic with the period T if the domain
of f contains t + T whenever it contains t and if

f (t + T ) = f (t)

for every value of t. The smallest positive value of T for which f (t + T ) = f (t) is called
the fundamental period of f .

It follows from the above definition that if T is a period of f , then 2T is also a period, and so is
any integral multiple of T .
A periodic function, satisfying certain assumptions spelled out later, may be represented by
the series


f(t) = A0 + Σ_{k=1}^∞ (Ak cos kωt + Bk sin kωt),   (2.33)

where ω = 2π/T , with T being the fundamental period of f . On the set of points where the
series (2.33) converges, it defines f , whose value at each point is the sum of the series for that
value of t, and we say that the series (2.33) is the Fourier series for f .
Suppose that the series (2.33) converges. We use the results of the previous subsection to find
expressions for the coefficients Ak and Bk . We first compute Ak coefficients. Multiplying (2.33)
by φm = cos mωt, where m > 0 is a fixed positive integer, and then integrating both sides with
respect to t from t 0 to t 0 + 2π/ω yields

⟨φm, f⟩ = A0⟨φm, φ0⟩ + Σ_{k=1}^∞ Ak⟨φm, φk⟩ + Σ_{k=1}^∞ Bk⟨φm, ψk⟩,   (2.34)

where φ0 = 1. It follows from (2.27), (2.28), and (2.29) that the only nonvanishing term on
the right-hand side of (2.34) is the one for which k = m in the first summation. Using (2.31),
we obtain

Am = ⟨φm, f⟩ / ⟨φm, φm⟩ = (ω/π) ∫_{t0}^{t0+2π/ω} f(t) cos mωt dt,   m = 1, 2, . . . .   (2.35)

To obtain A0, we integrate (2.33). Using (2.32), we get

A0 = ⟨φ0, f⟩ / ⟨φ0, φ0⟩ = (ω/2π) ∫_{t0}^{t0+2π/ω} f(t) dt.   (2.36)

An expression for Bm may be obtained by multiplying (2.33) by sin mωt and then integrating
term by term from t0 to t0 + 2π/ω. Using the orthogonality relations (2.27)–(2.29) yields

Bm = ⟨ψm, f⟩ / ⟨ψm, ψm⟩ = (ω/π) ∫_{t0}^{t0+2π/ω} f(t) sin mωt dt,   m = 1, 2, . . . .   (2.37)
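Formulas (2.35)–(2.37) can be exercised numerically. A sketch using a hand-rolled trapezoidal rule on an assumed test signal whose coefficients are known in advance:

```python
import math

# Numerical evaluation of the coefficient formulas (2.35)-(2.37).
# The test signal and its coefficients are assumed for illustration.
w = 1.0                                              # omega, so T = 2*pi
f = lambda t: 0.5 + 3*math.cos(t) - 2*math.sin(2*t)  # A0=0.5, A1=3, B2=-2

def integrate(g, a, b, n=4000):
    # Composite trapezoidal rule
    h = (b - a)/n
    return h*(0.5*g(a) + 0.5*g(b) + sum(g(a + i*h) for i in range(1, n)))

t0, T = 0.0, 2*math.pi/w
A0 = (w/(2*math.pi))*integrate(f, t0, t0 + T)                          # (2.36)
A1 = (w/math.pi)*integrate(lambda t: f(t)*math.cos(w*t), t0, t0 + T)   # (2.35)
B2 = (w/math.pi)*integrate(lambda t: f(t)*math.sin(2*w*t), t0, t0 + T) # (2.37)
print(round(A0, 6), round(A1, 6), round(B2, 6))  # 0.5 3.0 -2.0
```

Since the test signal is itself a finite Fourier series, the computed coefficients recover its construction exactly (to machine precision).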
Suppose now that a function f is given that is periodic with period 2π/ω and integrable on the
interval [t 0 , t 0 + 2π/ω]. Then, coefficients Ak and Bk can be computed using (2.35)–(2.37), and
a series of the form (2.33) can be formally constructed. However, we do not know whether this
series converges for each value of t and, if so, whether its sum is f (t). The answer is given by
the theorem that we state after explaining the notation used. We denote the limit of f (t) as t
approaches c from the right by f (c+); that is,
f(c+) = lim_{t→c+} f(t).
Similarly, f (c−) = limt→c− f (t) denotes the limit of f (t) as t → c from the left. The mean
value of the right- and left-hand limits at the point c is
( f (c+) + f (c−))/2.
Note that at any point c where f is continuous, we have
f (c+) = f (c−) = f (c).

Theorem 2.3 Assume that f and its derivative are piecewise continuous on the interval
[t 0 , t 0 + 2π/ω]. Furthermore, suppose that f is defined outside the interval [t 0 , t 0 +
2π/ω] so that it is periodic with period 2π/ω. Then, f has a Fourier series of the
form (2.33) whose coefficients are given by (2.35)–(2.37). The Fourier series converges
to ( f (t+) + f (t−))/2 at all points of the interval [t 0 , t 0 + 2π/ω].

With this knowledge of the Fourier series, it is now possible to discuss the describing function
method for analyzing a class of nonlinear control systems.

2.5.3 Describing Function in the Analysis of Nonlinear Systems


The describing function method can be considered as an extension of the Nyquist stability
criterion to nonlinear systems. In our further discussion, we sometimes use a double box to
represent a nonlinear element as illustrated in Figure 2.17.
Figure 2.17 A symbol for a nonlinear element (a double box), with input x(t) = X sin ωt
and output y(t) = Y1 sin(ωt + φ1) + harmonics.

Figure 2.18 A nonlinear feedback system with a nonlinear element in the forward path,
followed by a linear element G(s); the reference input is r = 0 and the output is c.

Consider a feedback system that contains a nonlinear element as shown in Figure 2.18.
Assume that the input to the nonlinear element is a sinusoidal signal x(t) = X sin ωt, where
X is the amplitude of the input sinusoid. The output of the nonlinear element is, in general,
not sinusoidal in response to a sinusoidal input. Suppose that the nonlinear element output is
periodic with the same period as its input and that it may be expanded in the Fourier series,


y(t) = A0 + Σ_{k=1}^∞ (Ak cos kωt + Bk sin kωt)
     = A0 + Σ_{k=1}^∞ Yk sin(kωt + φk).   (2.38)

Thus y(t) consists of a fundamental component

Y1 sin(ωt + φ1),

together with a mean level A0 and harmonic components at frequencies 2ω, 3ω, . . . . The describ-
ing function method relies on the assumption that only the fundamental harmonic is significant.
This assumption is called the filtering hypothesis. The filtering hypothesis can be expressed as

|G(jω)| ≫ |G(jkω)| for k = 2, 3, . . . ,

which requires that the linear element, modeled by the transfer function G(s), be a low-pass filter.
The filtering hypothesis is often valid because many, if not most, control systems are low-pass
filters. This results in higher harmonics being more attenuated compared with the fundamental
harmonic component. The nonlinear element is then approximately modeled by its describing
function,

N(X) = (Y1/X) ∠φ1 = (fundamental component of output)/(input).   (2.39)
The describing function is, in general, a complex-valued function of the amplitude X of the
input sinusoid.
The stability of the nonlinear closed-loop system depicted in Figure 2.18 can be analyzed
using Nyquist’s criterion applied to the associated linear system shown in Figure 2.19. The
Figure 2.19 Associated linear system used in the describing function analysis of the
nonlinear system shown in Figure 2.18: the nonlinear element is replaced by its describing
function N(X), in series with G(s).

Figure 2.20 Stability analysis using the describing function: the G(jω) plot and the
−1/N(X) locus in the G(s) plane for cases (i), (ii) (with intersection points 1 and 2),
and (iii).

associated linear closed-loop system characteristic equation is


1 + N (X )G(s) = 0. (2.40)
In the G( jω) complex plane, the locus traced out by −1/N as X varies is considered a gen-
eralization of the (−1, j0) point in the linear case. The stability of the nonlinear system is
determined by computing the encirclements of the −1/N locus by the G( jω) plot. We might
have the situations shown in Figure 2.20. In case (i), the −1/N locus is not encircled by the
G( jω) plot. Assuming that the poles of G(s) are in the left-half complex plane, the closed-loop
system is asymptotically stable. For a limit cycle to exist, we must have an intersection between
the polar plot of G( jω) and the −1/N locus. We have two intersections between −1/N and
G( jω) in case (ii). At intersection point 1, any increase in X will cause the operating point to be
encircled and will consequently cause X to grow. If there is a decrease in X , then the oscillations
will decay. Thus, point 1 represents an unstable limit cycle. At intersection point 2, an increase in
X would shift the operating point outside the G(jω) plot, resulting in decaying oscillations.
Point 2 represents a stable limit cycle. In case (iii), any oscillation will grow without limit.

◆ Example 2.17
Consider the nonlinear control system shown in Figure 2.21 that was analyzed in
Power and Simpson [237, pp. 339–341]. We first derive the describing function of
the ideal relay. When the input of the ideal relay is greater than zero, its output is
M. When the relay’s input is less than zero, its output is −M. Thus, the ideal relay
Figure 2.21 Nonlinear feedback control system analyzed in Example 2.17: an ideal
relay with output levels ±1 in series with the plant 16/(s(s + 2)²), in a unity negative
feedback loop with r = 0.

Figure 2.22 Input and output waveforms of an ideal relay: the sinusoidal input
x = X sin ωt produces a rectangular wave of amplitude M.

can be described as

y = M for x > 0,   y = −M for x < 0.   (2.41)

We assume that the input to the relay is a sinusoid, x(t) = X sin ωt, drawn vertically
down from the relay characteristic in Figure 2.22. The output y(t) of the relay is
a rectangular wave. It is drawn horizontally to the right of the relay characteristic
in Figure 2.22. This rectangular wave is an odd function. For an odd function, we
have Ak = 0, k = 0, 1, . . . , in its Fourier series. Thus, the Fourier series of the
relay’s output has the form



y(t) = Σ_{k=1}^∞ Bk sin kωt.

The fundamental harmonic of y(t) is

B1 sin ωt = Y1 sin ωt,



where

B1 = (ω/π) ∫_0^{2π/ω} y(t) sin ωt dt
   = (2ω/π) ∫_0^{π/ω} M sin ωt dt
   = (2Mω/π) ∫_0^{π/ω} sin ωt dt
   = (2M/π) [−cos ωt]_0^{π/ω}
   = 4M/π.   (2.42)
Hence, the describing function of the ideal relay is

Y 4M
N( X ) = 1  0◦ = .
X πX
A normalized plot of the describing function of the ideal relay nonlinearity is
depicted in Figure 2.23. In our example, M = 1 and thus the describing function
of the relay is
4
N( X ) = .
πX
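The coefficient (2.42) behind this describing function can be confirmed by numerical integration. A sketch, using the midpoint rule so that the switching instants of the relay are never sampled exactly:

```python
import math

# Midpoint-rule evaluation of B1 in (2.42) for the relay output y = relay(X sin wt);
# for the ideal relay the result is independent of the input amplitude X, so X = 1.
M, w, n = 1.0, 1.0, 100_000
T = 2*math.pi/w
relay = lambda x: M if x > 0 else -M
h = T/n
B1 = (w/math.pi)*sum(relay(math.sin(w*(i + 0.5)*h))*math.sin(w*(i + 0.5)*h)*h
                     for i in range(n))
print(abs(B1 - 4*M/math.pi) < 1e-4)  # True
```

The quadrature result agrees with the closed-form value 4M/π ≈ 1.2732.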
The sinusoidal transfer function of the linear plant model is

G(jω) = 16/(jω(jω + 2)²) = (−64ω + j16(ω² − 4)) / (ω(4 + ω²)²).

Figure 2.23 A plot of the describing function for the ideal relay (N versus X/M).

Figure 2.24 Plots of G(jω) and −1/N of Example 2.17 in the G(s) plane.

At ω = 2 rad/sec the polar plot of G(jω) intersects the plot of −1/N (see Fig-
ure 2.24) to form a stable limit cycle. The amplitude of the limit cycle is obtained
from the relation G(j2) = −1/N; that is,

−64/(4 + 4)² = −πX/4.

Hence, the amplitude of the limit cycle is

X = 4/π,

while the amplitude of the fundamental harmonic of the plant output, c, is

(4/π) |G(j2)| = 4/π.

Note that the filtering hypothesis is satisfied in this example. Indeed, the Fourier
series of the output of the relay is

(4/π)(sin ωt + (1/3) sin 3ωt + (1/5) sin 5ωt + ···).

Thus, for example, the amplitude of the third harmonic of the plant output is

(4/(3π)) |G(j6)| = (1/45)(4/π).

The higher harmonics are even more attenuated compared with the fundamental
harmonic.
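The limit-cycle prediction of this example can be reproduced programmatically: locate the frequency at which G(jω) crosses the negative real axis, then solve G(jω) = −1/N(X) for X. A sketch; the bisection bracket (1, 3) is an assumption:

```python
import math

# Reproducing the limit-cycle prediction of Example 2.17: find w with
# Im G(jw) = 0 (phase -180 deg), then X from G(jw) = -1/N(X) = -pi*X/(4M).
G = lambda w: 16/((1j*w)*(1j*w + 2)**2)

lo, hi = 1.0, 3.0                 # Im G(jw) changes sign on this bracket
for _ in range(60):               # bisection on Im G(jw) = 0
    mid = 0.5*(lo + hi)
    if G(lo).imag*G(mid).imag <= 0:
        hi = mid
    else:
        lo = mid
w_lc = 0.5*(lo + hi)

M = 1.0
X = 4*M*abs(G(w_lc))/math.pi      # amplitude of the predicted limit cycle
print(round(w_lc, 6), round(X, 6))  # 2.0 1.27324
```

The script recovers ω = 2 rad/sec and X = 4/π ≈ 1.273, the values obtained analytically above.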

Notes
The system of differential equations (2.43) in Exercise 2.6 comes from Boyce and DiPrima [32,
p. 523]. Al-Khafaji and Tooley [6] provide an introductory treatment of numerical solutions of
differential equations. For further discussion of solving differential equations using numerical
methods and analyses of their computational efficiency, the reader is referred to Salvadori and
Baron [252], Conte and de Boor [53, Chapter 8], or Parker and Chua [222]. For an in-depth treat-
ment of the describing function method, we refer to Graham and McRuer [105] and Chapters 6
and 7 of Hsu and Meyer [128]. Graham and McRuer [105] make an interesting remark about
the origins of the describing function method. They note on page 92 of their 1961 book: “While
the notion of representing a nonlinearity by an ‘equivalent’ linear element is quite old and has
been used by many writers, the first instance of a major systematic exploitation of the technique
was probably by N. M. Kryloff and N. N. Bogoliuboff, Introduction to Non-linear Mechanics
(a free translation of the Russian edition of 1937, by S. Lefschetz), Princeton University Press,
Princeton, N.J., 1943. See also N. Minorsky, Introduction to Non-linear Mechanics, J. W.
Edwards, Ann Arbor, Mich., 1947.” The so-called sinusoidal describing function that we analyzed
in the previous section was developed almost simultaneously in several different countries
during and just after World War II. The first to introduce the method in the open literature appears
to be A. Tustin of England, who published his paper in the Journal of the IEE in 1947.
The behavior of numerous dynamical systems is often modeled by a set of linear, first-
order, differential equations. Frequently, however, a linear model is a result of linearization of a
nonlinear model. Linear models seem to dominate the controls’ literature. Yet nature is nonlinear.
“All physical systems are nonlinear and have time-varying parameters in some degree” [105,
p. 1]. How can one reconcile this paradox? Graham and McRuer [105, p. 1] have the following
answer: “Where the effect of the nonlinearity is very small, or if the parameters vary only slowly
with time, linear constant-parameter methods of analysis can be applied to give an approximate
answer which is adequate for engineering purposes. The analysis and synthesis of physical
systems, predicated on the study of linear constant-parameter mathematical models, has been,
in fact, an outstandingly successful enterprise. A system represented as linear, however, is to some
extent a mathematical abstraction that can never be encountered in a real world. Either by design
or because of nature’s ways it is often true that experimental facts do not, or would not, correspond
with any prediction of linear constant-parameter theory. In this case nonlinear or time-varying-
parameter theory is essential to the description and understanding of physical phenomena.”

EXERCISES

2.1 Sketch a phase-plane portrait of the differential equation


ẍ − ẋ + 2x + 2sign(2x + ẋ) = 0.
2.2 For the system of differential equations
ẋ1 = −(7/4)x1 + (1/4)x2,
ẋ2 = (3/4)x1 − (5/4)x2,
find the phase-plane asymptotes.

2.3 For a dynamical system modeled by


ẋ 1 = x2 ,
ẋ 2 = −x1 ,
find the time T it takes the system trajectory to move from the state x(0) = [0 1]T to
the state x(T ) = [1 0]T .
2.4 Consider a trajectory in the state plane shown in Figure 2.25. Find the time it takes the
representative point to move from A to B.

Figure 2.25 A trajectory for Exercise 2.4 in the (x, ẋ) plane, from point A at
(3 cm, 6 cm/sec) to point B at (6 cm, 2 cm/sec).

2.5 Given the following initial value problem:


ẋ = f (t, x), x(t 0 ) = x0 .
(a) Derive the Euler formula for solving the above initial value problem using a Taylor
series.
(b) Make use of a Taylor series with a remainder to compute the local error formula
assuming that the data at the nth step are correct.

2.6 Use Bendixson’s theorem to investigate the existence of limit cycles for the system
 
ẋ1 = x1 + x2 − x1(x1² + x2²),
ẋ2 = −x1 + x2 − x2(x1² + x2²).   (2.43)
Sketch a phase-plane portrait for this system.
2.7 Use Bendixson’s theorem to investigate the existence of limit cycles for the system
 
ẋ1 = x2 − x1(x1² + x2² − c),
ẋ2 = −x1 − x2(x1² + x2² − c),   (2.44)
where c is a constant. Sketch phase-plane portraits for this system for c = −0.2 and
c = 0.2. The above nonlinear system was analyzed in reference 222, p. 204.
2.8 Verify that the Runge method agrees with the Taylor series solution through terms of
degree three.
2.9 Verify that the Runge–Kutta method agrees with the Taylor series solution through
terms of degree four.

2.10 Consider the nonlinear mass–spring–damper system depicted in Figure 2.26. The spring
coefficient K = 0.5x 2 N/m, the damping coefficient B = 0.1ẋ 2 N·sec/m, and M = 1 kg.

Figure 2.26 The mass–spring–damper system of Exercise 2.10: mass M driven by the
input force u, restrained by spring K and damper B.

(a) Write the second-order differential equation describing the motion of the system.
(b) Choose x1 = x and x2 = ẋ. Represent the equation of motion derived in (a) in state-
space format.
(c) For the model from (b), find the equilibrium state corresponding to u = 0. Then,
linearize the model about the obtained equilibrium point.
2.11 (Based on Toptscheev and Tsyplyakov [282, pp. 29–31]) A cyclotron, whose operation
is illustrated in Figure 2.27, accelerates charged particles into high energies so that
they can be used in atom-smashing experiments. The operation of the cyclotron can be
described as follows. Positive ions enter the cyclotron from the ion source, S, and are
available to be accelerated. An electric oscillator establishes an accelerating potential
difference across the gap of the two D-shaped objects, called dees. An emerging ion
from the ion source, S, finds the dee that is negative. It will accelerate toward this dee
and will enter it. Once inside the dee, the ion is screened from electric fields by the
metal walls of the dees. The dees are immersed in a magnetic field so that the ion’s
trajectory bends in a circular path. The radius of the path depends on the ion’s velocity.
After some time, the ion emerges from the dee on the other side of the ion source. In

Figure 2.27 Schematic of a cyclotron of Exercise 2.11: two dees with an oscillator
across the gap, the ion source S, the deflector plate, and the emerging beam.

the meantime, the accelerating potential changes its sign. Thus the ion again faces a
negative dee. The ion further accelerates and describes a semicircle of larger radius.
This process goes on until the ion reaches the outer edge of one dee, where it is pulled
out of the cyclotron by a negatively charged deflector plate. The moving ion is acted
upon by a deflecting force of magnitude
q(v + Δv)(B + ΔB),

where q is the charge of the ion, v is the ion's speed, Δv is the increase in the ion's
speed, and B is the magnetic field acting on the ion. The nominal radius of the ion's
orbit is r. The change in the magnetic field as the result of a change in the orbit of the
ion is ΔB. As the ion speed increases, the relativistic mass m increases by Δm. The
ion moving in a circular path undergoes a centrifugal force of magnitude

(m + Δm)(v + Δv)² / (r + Δr).

By Newton's second law, the equation describing the motion of the ion is

(m + Δm) d²(r + Δr)/dt² = (m + Δm)(v + Δv)²/(r + Δr) − q(v + Δv)(B + ΔB).

Linearize the above equation. The motion of the ion along a nominal path is described by

m d²r/dt² = mv²/r − qvB.

Find the transfer function relating the change in radius of motion, Δr, of the ion and
the change in the momentum of the ion:

Δp = m Δv + Δm v.

In other words, find the transfer function L(Δr)/L(Δp), where L is the Laplace trans-
form operator.
2.12 Linearize the permanent-magnet stepper motor model, analyzed in Subsection 1.7.3,
about x1eq = θeq = constant.
2.13 Let v1 , v2 , . . . , vn be nonzero, mutually orthogonal elements of the space V and let v
be an arbitrary element of V .
(a) Find the component of v along v j .
(b) Let c j denote the component of v along v j . Show that the element
v − c1v1 − ··· − cnvn
is orthogonal to v1 , v2 , . . . , vn .
(c) Let c j be the component of v ∈ V along v j . Show that

Σ_{i=1}^n ci² ≤ ‖v‖².

The above relation is called the Bessel inequality.



2.14 Consider characteristics of two nonlinear elements that are shown in Figure 2.28. Let
 
N_S(x) = (2/π)(arcsin(1/x) + (1/x) cos(arcsin(1/x))).   (2.45)

Then, the describing function of the limiter is

N_l = K for M ≤ S,   and   N_l = K N_S(M/S) for M > S.

The describing function of the dead zone is

N_d−z = 0 for M ≤ A,   and   N_d−z = K(1 − N_S(M/A)) for M > A.
Derive the describing function of the nonlinear element whose characteristic is shown
in Figure 2.29.

Figure 2.28 Characteristics of a limiter (slope K, saturating at |x| = S) and a dead zone
(zero output for |x| ≤ A, slope K outside) of Exercise 2.14.

Figure 2.29 Characteristic of a nonlinear element of Exercise 2.14: a piecewise-linear
characteristic with slopes K and K1 and break points at ±A.

2.15 The describing function of the nonlinear element shown in Figure 2.30 is


N1 = 0 for M ≤ A,   and   N1 = K(1 − N_S(M/A)) + (4B/(πM)) √(1 − (A/M)²) for M > A,
Figure 2.30 Nonlinear element with the describing function N1 of Exercise 2.15.

Figure 2.31 Characteristic of the ideal relay with dead zone: output 0 for |x| ≤ A and
±B for |x| > A.

where N_S(x) is given by (2.45). The describing function of the dead-zone nonlinearity
is given in Exercise 2.14 and its characteristic is shown in Figure 2.28. Derive the
describing function of the ideal relay with dead zone whose characteristic is shown in
Figure 2.31.
