2 Analysis of Modeling Equations
Most of the dynamical systems analyzed in this book are modeled by ordinary differential equations. A main use of a mathematical model is to predict the system's transient behavior. Unlike linear systems, where closed-form solutions can be written in terms of the system's eigenvalues and eigenvectors, finding exact analytical solutions to nonlinear differential equations can be very difficult or impossible. However, we can approximate the solutions of differential equations by difference equations whose solutions can be obtained easily using a computer. In this chapter we discuss a number of methods for solving differential equations. The first class of methods allows us to graphically determine solutions to second-order differential equations. Then we discuss numerical techniques for solving differential equations. After that, we present two methods of linear approximation of nonlinear systems.
2.1 State-Plane Analysis

Consider a second-order differential equation of the form

ẍ = f(x, ẋ).    (2.1)

Define the state variables

x1 = x and x2 = ẋ.

Then, equation (2.1) can be represented as a system of two first-order differential equations:

ẋ1 = x2,
ẋ2 = f(x1, x2).    (2.2)
The plane with coordinates x1 and x2 is called the state plane or phase plane. A solution of (2.1) can be illustrated by plotting x versus t, or by plotting x2 versus x1 using t as a parameter.
To each value of the state x(t) = [x1 (t) x2 (t)]T there corresponds a point in the (x1 , x2 ) plane
called the representative point (RP). As t varies, the RP describes a curve in the state plane
called a trajectory. A family of trajectories is called a phase portrait.
◆ Example 2.1
Consider the spring–mass system shown in Figure 2.1, where x is the displacement
of the mass M from equilibrium, and k is the linear spring constant. The spring
is assumed to be linear; that is, it obeys Hooke’s law. Applying Newton’s law, we
obtain the equation of motion of the mass:
M ẍ + kx = 0.
Let ω2 = k/M. Then, the above equation can be represented in standard form:
ẍ + ω2 x = 0. (2.3)
Observe that

ẍ = dẋ/dt = (dẋ/dx)(dx/dt) = ẋ (dẋ/dx).

Substituting the above into (2.3) gives

ẋ (dẋ/dx) + ω²x = 0,

or, equivalently,

(ẋ/ω²) dẋ + x dx = 0.

Integrating the above equation yields

x² + (ẋ/ω)² = A²,
where A is the integration constant determined by the initial conditions. We see that
a phase portrait of equation (2.3) consists of a family of circles centered at the origin
of the ( x, ẋ/ω) plane as illustrated in Figure 2.2. Different circular trajectories, in
this plane, correspond to different values of the constant A representing initial
conditions. Note that

x = ω ∫ (ẋ/ω) dt,
where ω > 0. Hence, for positive values of ẋ/ω the displacement x increases. This
means that for the values ẋ/ω > 0 the arrows on the trajectories indicate x moving
to the right. For negative values of ẋ/ω—that is, for the values of ẋ/ω below the
horizontal axis—the arrows on the trajectories indicate x decreasing by going to
the left. Therefore, as time increases, the RP moves in a clockwise direction along
the trajectory.
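The circular trajectories of Example 2.1 can be checked numerically. The short sketch below assumes the closed-form solution x = A sin ωt, ẋ/ω = A cos ωt (a hypothetical choice of initial conditions placing the RP at (0, A) at t = 0) and confirms both the invariant x² + (ẋ/ω)² = A² and the clockwise direction of motion.

```python
import math

# Sample one trajectory of the harmonic oscillator xddot + w^2 x = 0 in
# the (x, xdot/w) plane, using the assumed closed-form solution
#   x(t) = A sin(w t),  xdot(t)/w = A cos(w t).
def trajectory(A, w, n=200):
    T = 2 * math.pi / w
    return [(A * math.sin(w * k * T / n), A * math.cos(w * k * T / n))
            for k in range(n)]

pts = trajectory(A=2.0, w=3.0)
# every sampled point satisfies the circle equation with A^2 = 4
radii = [x * x + v * v for (x, v) in pts]
# at t = 0 the RP sits at (0, A); a moment later x has increased while
# xdot/w has decreased, i.e., the RP moves clockwise along the circle
moved_clockwise = pts[1][0] > pts[0][0] and pts[1][1] < pts[0][1]
```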
◆ Example 2.2
In this example, we consider a model of a satellite shown in Figure 2.3. We assume
that the satellite is rigid and is in a frictionless environment. It can rotate about an
axis perpendicular to the page as a result of torque τ applied to the satellite by
firing the thrusters. The system input is the applied torque, and the system output is
the attitude angle θ. The satellite’s moment of inertia is I . Applying the rotational
Figure 2.3 A model of a rigid satellite of Example 2.2.
version of Newton's law, we obtain

I d²θ/dt² = τ,

or, equivalently,

d²θ/dt² = (1/I)τ.

Let

u = (1/I)τ.

Then,

d²θ/dt² = u.    (2.4)
We assume that when the thrusters fire, the thrust is constant; that is, u = ±U. For
simplicity, let U = 1. Using the state variables x1 = θ and x2 = θ̇, we obtain the
satellite’s state-space model:
[ẋ1]   [0 1] [x1]   [0]
[ẋ2] = [0 0] [x2] + [1] u,    (2.5)
where u = ±1. We first consider the case when u = 1. With this control, the
system equations are
ẋ 1 = x2 ,
ẋ 2 = 1.
Solving the equation ẋ2 = 1 yields x2 = t + c1, where c1 is an integration constant. Substituting x2 = t + c1 into ẋ1 = x2 and solving the resulting equation yields

x1 = (1/2)(t + c1)² + c2 = (1/2)x2² + c2,
where c2 is an integration constant. In the state plane the above equation represents
a family of parabolas open toward the positive x1 axis. The representative point
moves upward along these parabolas because ẋ 2 = 1; that is, ẋ 2 > 0. Typical system
trajectories for this case are shown in Figure 2.4.
Similarly, when u = −1, we have
ẋ 2 = −1.
Solving the above differential equation yields
x2 = −t + c̃1 ,
where c̃1 is an integration constant. Substituting x2 = −t + c̃1 into ẋ 1 = x2 and
then solving the resulting equation yields
x1 = −(1/2)x2² + c̃2,
where c̃2 is an integration constant. In the state plane, this equation represents
Figure 2.4 Trajectories of the system (2.5) for u = 1.
Figure 2.5 Trajectories of the system (2.5) for u = −1.
a family of parabolas open toward the negative x1 axis. The representative point
moves downward along these parabolas because ẋ2 = −1; that is, ẋ2 < 0. Typical
system trajectories for this case are shown in Figure 2.5.
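The parabolic trajectories of Example 2.2 admit a quick numerical check. The sketch below uses the closed-form state of the double integrator (the constants x10, x20 are hypothetical initial conditions) and verifies that each trajectory keeps x1 − (u/2)x2² constant, which is exactly the family of parabolas described above.

```python
# Closed-form state of the double integrator (2.5) under constant thrust
# u = +1 or u = -1:  x2 = x20 + u*t,  x1 = x10 + x20*t + u*t^2/2.
def satellite_state(t, u, x10=0.0, x20=0.0):
    return x10 + x20 * t + 0.5 * u * t * t, x20 + u * t

results = []
for u in (1.0, -1.0):
    x10, x20 = 0.5, -1.0
    c0 = x10 - 0.5 * u * x20 * x20      # the parabola's constant c
    for t in (0.0, 0.5, 1.0, 2.0):
        x1, x2 = satellite_state(t, u, x10, x20)
        # along the motion, x1 - (u/2) x2^2 stays equal to c0
        results.append(abs((x1 - 0.5 * u * x2 * x2) - c0))
```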
In the above examples, the system trajectories were constructed analytically. Analytical methods
are useful for systems modeled by differential equations that can be easily solved. If the system
of differential equations cannot be easily solved analytically, we can use graphical or numerical
methods. We now describe a graphical method for solving second-order differential equations
or a system of two first-order differential equations.
ẋ1 = dx1/dt = f1(x1, x2),
ẋ2 = dx2/dt = f2(x1, x2).    (2.6)

Dividing the second of the above equations by the first one, we obtain

dx2/dx1 = f2(x1, x2)/f1(x1, x2).    (2.7)
Thus, we eliminated the independent variable t from the set of the first-order differential
equations given by (2.6). In equation (2.7), we consider x1 and x2 as the independent and dependent variables, respectively. Let

m = m(x1, x2) = dx2/dx1.
Note that m is just the slope of the tangent to the trajectory passing through the point [x1 x2 ]T .
The locus of points at which the trajectories have a given constant slope,

dx2/dx1 = m(x1, x2) = constant,

is found from the equation

f2(x1, x2) = m f1(x1, x2).
The curve that satisfies the above equation is an isocline corresponding to the trajectories’ slope
m because a trajectory crossing the isocline will have its slope equal to m. The idea of the
method of isoclines is to construct several isoclines in the state plane. This then allows one
to construct a field of local tangents m. Then, the trajectory passing through any given point
in the state plane is obtained by drawing a continuous curve following the directions of the
field.
◆ Example 2.3
Consider the spring–mass system from Example 2.1. Defining the state variables
x1 = x and x2 = ẋ, we obtain the state equations
ẋ 1 = x2 ,
ẋ 2 = −ω2 x1 .
Setting, for simplicity, ω = 1, the trajectory slope is

dx2/dx1 = m = −x1/x2.

Hence, the isoclines are described by

x2 = −(1/m) x1.
This equation describes a family of straight lines through the origin. In Figure 2.6,
we show some of the isoclines and a typical trajectory for the above spring–mass
system. Note that in this example the trajectory slope is m, while the isocline slope
is −1/m. This means that trajectories of the spring–mass system are perpendicular
to the isoclines; that is, the trajectories are circles centered at the origin of the state
plane.
Figure 2.6 Isoclines and a typical trajectory of the spring–mass system of Example 2.3.
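The perpendicularity claim of Example 2.3 can be checked directly. The sketch below (assuming ω = 1, as in the isocline equations above) evaluates the trajectory slope at points on an isocline and confirms that it equals m, while the isocline's own slope −1/m makes the product of the two slopes equal to −1.

```python
# Isoclines of the spring-mass system (omega = 1): along x2 = -(1/m) x1
# every trajectory crosses with slope dx2/dx1 = -x1/x2 = m.
def trajectory_slope(x1, x2):
    return -x1 / x2

m = 0.5
checks = []
for x1 in (1.0, 2.0, -3.0):
    x2 = -(1.0 / m) * x1             # a point on the isocline for slope m
    s = trajectory_slope(x1, x2)     # slope of the trajectory there
    # record the slope and its product with the isocline slope (-1/m)
    checks.append((s, s * (-1.0 / m)))
```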
◆ Example 2.4
Consider the second-order differential equation

ẍ + 4ηẋ + 4x = 0,    (2.8)

where η is a parameter. We will find values of η for which there are isoclines along
which the trajectory slope and the isocline slope are equal. Such isoclines, if they
exist, are called the asymptotes.
Let x1 = x and x2 = ẋ. Then, the second-order differential equation given by (2.8)
can be represented as a system of first-order differential equations:
ẋ 1 = x2 ,
ẋ 2 = −4ηx2 − 4x1 .
The trajectory slope is

m = dx2/dx1 = (−4ηx2 − 4x1)/x2,

or, equivalently,

x2 = −(4/(m + 4η)) x1.    (2.9)

An asymptote results when the isocline slope equals the trajectory slope m; that is, when

−4/(m + 4η) = m;
that is,

m² + 4ηm + 4 = 0.    (2.10)

The discriminant of (2.10) is

Δ = 4(η² − 1).

Hence, real asymptote slopes exist only when η² ≥ 1; that is, when |η| ≥ 1.
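The asymptote slopes of Example 2.4 can be computed and verified numerically; the sketch below solves m² + 4ηm + 4 = 0 via the quarter-discriminant 4(η² − 1) (the value of η is a hypothetical choice) and checks that each root satisfies the defining condition −4/(m + 4η) = m.

```python
import math

# Asymptote slopes for xddot + 4*eta*xdot + 4x = 0: roots of
# m^2 + 4*eta*m + 4 = 0, i.e., m = -2*eta +/- sqrt(4*(eta^2 - 1)).
def asymptote_slopes(eta):
    disc = 4.0 * (eta * eta - 1.0)   # quarter-discriminant
    if disc < 0:
        return []                    # no asymptotes for |eta| < 1
    r = math.sqrt(disc)
    return [-2.0 * eta + r, -2.0 * eta - r]

eta = 1.25                           # gives the slopes m = -1 and m = -4
slopes = asymptote_slopes(eta)
# each slope satisfies the defining condition -4/(m + 4*eta) = m
ok = [abs(-4.0 / (m + 4 * eta) - m) < 1e-9 for m in slopes]
```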
◆ Example 2.5
Consider the system of differential equations
ẋ 1 = x1 + x2 + 1,
ẋ 2 = −x2 + 2.
(a) Find the equation of the isocline corresponding to the trajectory slope m = 1.
(b) Does the given system have asymptotes? If yes, then write the equations of
the asymptotes.
The trajectory slope is

dx2/dx1 = m = (−x2 + 2)/(x1 + x2 + 1).

Setting m = 1 yields

(−x2 + 2)/(x1 + x2 + 1) = 1.

We obtain

x2 = −(1/2)x1 + 1/2.
The isoclines are obtained from

m = (−x2 + 2)/(x1 + x2 + 1).    (2.12)

From (2.12), we obtain an equation of the isoclines of the form

x2 = −(m/(m + 1)) x1 − (m − 2)/(m + 1).
An asymptote is an isocline along which its own slope equals the trajectory slope. Equating the isocline slope to m yields

−m/(m + 1) = m;

that is,

m² + 2m = m(m + 2) = 0.

Hence, m = 0 or m = −2, and the equations of the asymptotes are

x2 = 2 and x2 = −2x1 − 4.
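A quick numerical verification of Example 2.5: along each of the two asymptotes found above, the trajectory slope (−x2 + 2)/(x1 + x2 + 1) should be constant and equal to the corresponding m. The sketch below samples a few hypothetical points on each line.

```python
# Trajectory slope of the system of Example 2.5.
def slope(x1, x2):
    return (-x2 + 2.0) / (x1 + x2 + 1.0)

# along x2 = 2 the slope should be m = 0
on_m0 = [slope(x1, 2.0) for x1 in (0.0, 1.0, 5.0)]
# along x2 = -2*x1 - 4 the slope should be m = -2
on_m2 = [slope(x1, -2.0 * x1 - 4.0) for x1 in (0.0, 1.0, 5.0)]
```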
The van der Pol equation

ẍ − μ(1 − x²)ẋ + x = 0

can be represented as the system of first-order equations

ẋ1 = x2,
ẋ2 = μ(1 − x1²)x2 − x1.    (2.14)

Figure 2.7 A phase-plane portrait of the van der Pol equation for μ = 1.
An inspection of the phase portrait of the van der Pol equation reveals the existence of a closed
trajectory in the state plane. This solution is an example of a limit cycle.
Definition 2.1 A limit cycle is a closed trajectory in the state plane, such that no other
closed trajectory can be found arbitrarily close to it.
Note that the differential equation modeling the spring–mass system does not have a limit cycle
because the solutions are not isolated. Limit cycles can only appear in nonlinear systems; they
do not exist in linear systems.
◆ Example 2.6
We apply Bendixson’s theorem to the van der Pol equation represented by (2.14).
We have
f1 = x2 and f2 = μ(1 − x1²)x2 − x1.

We calculate ∂f1/∂x1 + ∂f2/∂x2 to get

∂f1/∂x1 + ∂f2/∂x2 = μ(1 − x1²).
It follows from Bendixson's theorem that no limit cycle can lie entirely in the region |x1| < 1 because, for μ > 0, we have ∂f1/∂x1 + ∂f2/∂x2 > 0 in that region.
Bendixson’s theorem gives us a necessary condition for a closed curve to be a limit cycle. It
is useful for the purpose of establishing the nonexistence of a limit cycle. In general, it is very
difficult to establish the existence of a limit cycle. Sufficient conditions for the existence of a
limit cycle are discussed, for example, by Hochstadt [123, Chapter 7]. A method for predicting
limit cycles, called the describing function method, is presented in Section 2.5.
2.2 Numerical Techniques

If we stop after the qth term of the Taylor series solution, then the remainder after q terms of a
Taylor series gives us an estimate of the error. We can expect the Taylor series solution to give a
fairly accurate approximation of the solution of the state-space equations only in a small interval
about t0. Thus, if Δt is a small time interval such that the Taylor series converges for t ∈ [t0, t0 + Δt], we can use the expansion

x(t) = x0 + (dx(t0)/dt)(t − t0) + (d²x(t0)/dt²)((t − t0)²/2) + ···

to evaluate x(t1) = x(t0 + Δt). Then, we can start over again solving ẋ(t) = f(t, x(t), u(t))
with the new initial condition x(t1 ). We illustrate the method with simple examples.
◆ Example 2.7
Consider the first-order differential equation
dx/dt = t² − x², x(1) = 1.

To expand x = x(t) into a Taylor series about t = 1, we obtain the higher derivatives of x by repeatedly differentiating the equation:

d²x/dt² = 2t − 2x (dx/dt),

d³x/dt³ = 2 − 2(dx/dt)² − 2x (d²x/dt²),

d⁴x/dt⁴ = −6 (dx/dt)(d²x/dt²) − 2x (d³x/dt³),

and so on.
Evaluating the above at t = 1 yields

dx(1)/dt = 0, d²x(1)/dt² = 2, d³x(1)/dt³ = −2, d⁴x(1)/dt⁴ = 4, . . . .
Hence,

x(t) = 1 + (t − 1)² − (t − 1)³/3 + (t − 1)⁴/6 + ···.
◆ Example 2.8
In this example, we consider the second-order differential equation

d²x/dt² + 3 dx/dt + 6t x = 0, x(0) = 1, dx(0)/dt = 1.

Differentiating the equation repeatedly gives

d³x/dt³ = −3 d²x/dt² − 6t dx/dt − 6x,

d⁴x/dt⁴ = −3 d³x/dt³ − 6t d²x/dt² − 12 dx/dt,

d⁵x/dt⁵ = −3 d⁴x/dt⁴ − 6t d³x/dt³ − 18 d²x/dt²,

and so on.
Taking into account the initial conditions, we evaluate all coefficients at t = 0 to get

d²x(0)/dt² = −3, d³x(0)/dt³ = 3, d⁴x(0)/dt⁴ = −21, d⁵x(0)/dt⁵ = 117, . . . .

Hence,

x(t) = 1 + t − (3/2)t² + (1/2)t³ − (7/8)t⁴ + (39/40)t⁵ + ···.
We now illustrate the method of Taylor series when applied to a system of first-order differential
equations.
◆ Example 2.9
We consider the following system of two first-order equations:
The solution components are expanded about t0 as

x1(t) = x10 + (dx1(t0)/dt)(t − t0) + (d²x1(t0)/dt²)((t − t0)²/2) + (d³x1(t0)/dt³)((t − t0)³/3!) + ···,

x2(t) = x20 + (dx2(t0)/dt)(t − t0) + (d²x2(t0)/dt²)((t − t0)²/2) + (d³x2(t0)/dt³)((t − t0)³/3!) + ···.
The Taylor series method can be programmed on a computer. However, this is not a very efficient
method from a numerical point of view. In the following, we discuss other methods for numerical
solution of the state equations that are related to the method of Taylor series. We first present
two simple numerical techniques known as the forward and backward Euler methods.
The forward Euler method approximates the derivative at t = tk by the forward difference

dx(tk)/dt ≈ (x(tk+1) − x(tk))/h,

which gives the formula

x(tk+1) = x(tk) + h f(tk, x(tk), u(tk)).

The above is known as the forward Euler algorithm. We see that if h is sufficiently small, we can approximately determine the state at time t1 = t0 + h from the initial condition x0 to get

x(t1) = x0 + h f(t0, x0, u(t0)).
Once we have determined the approximate solution at time t1 , we can determine the approxi-
mate solution at time t2 = t1 + h, and so on. The forward Euler method can also be arrived at
when instead of considering the differential equation ẋ(t) = f (t, x(t), u(t)), we start with its
equivalent integral representation:
x(t) = x0 + ∫_{t0}^{t} f(τ, x(τ), u(τ)) dτ.    (2.18)
We then use the rectangular rule for numerical integration that can be stated as follows. The
area under the curve z = g(t) between t = tk and t = tk + h is approximately equal to the area
of the rectangle ABC D as shown in Figure 2.8; that is,
∫_{tk}^{tk+h} g(t) dt ≈ h g(tk).    (2.19)
Applying the rectangular rule of integration to (2.18), we obtain the forward Euler method.
Figure 2.8 The rectangular rule of integration.
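The forward Euler method is only a few lines of code. The sketch below (a minimal version without the input u) applies it to the hypothetical test equation dx/dt = t·x² − x, x(0) = 1, whose exact solution is x(t) = 1/(1 + t), so x(1) = 0.5.

```python
# Forward Euler: x(t_{k+1}) = x(t_k) + h * f(t_k, x(t_k)).
def forward_euler(f, t0, x0, h, n):
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)
        t = t + h
    return x

f = lambda t, x: t * x * x - x
# with a small step the estimate of x(1) approaches the exact value 0.5
x1 = forward_euler(f, 0.0, 1.0, h=0.001, n=1000)
```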
The backward Euler method differs from the forward Euler method in the way we approximate
the derivative:
dx(tk)/dt ≈ (x(tk) − x(tk−1))/h.

This approximation gives the backward Euler formula:

x(tk+1) = x(tk) + h f(tk+1, x(tk+1), u(tk+1)).

In the above formula, x(tk+1) appears on both sides; it is a function of itself. For this reason, we say that the backward Euler
method is an implicit integration algorithm. Implicit integration algorithms require additional
computation to solve for x(tk+1 ). For further discussion of this issue, we refer to Parker and
Chua [222].
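The implicit solve required at each backward Euler step can be sketched as follows. Here a simple fixed-point iteration stands in for the implicit solve (adequate for small h; production integrators typically use a Newton iteration instead), applied to the same hypothetical test equation with exact solution x(t) = 1/(1 + t).

```python
# Backward Euler: each step solves x_new = x + h * f(t_new, x_new)
# for x_new, here by fixed-point iteration.
def backward_euler(f, t0, x0, h, n, iters=50):
    t, x = t0, x0
    for _ in range(n):
        t_new = t + h
        x_new = x                        # initial guess: previous state
        for _ in range(iters):
            x_new = x + h * f(t_new, x_new)
        t, x = t_new, x_new
    return x

f = lambda t, x: t * x * x - x           # exact solution x(t) = 1/(1 + t)
x1 = backward_euler(f, 0.0, 1.0, h=0.001, n=1000)
```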
Euler’s methods are rather elementary, and thus they are not as accurate as some of the more
sophisticated techniques that we are going to discuss next.
Suppose we wish to find x(t) at t = t1 > t 0 . Applying the trapezoid rule of integration to (2.17),
we obtain
x(t1) = x0 + (h/2)( f(t0, x0, u(t0)) + f(t1, x(t1), u(t1)) ).
The above is an implicit algorithm, and it may be viewed as a merging of the forward and
backward Euler algorithms. If the function f is nonlinear, then we will, in general, not be able
Figure 2.9 The trapezoidal rule of integration.
to solve the above equation for x(t1) exactly. Instead, we obtain x(t1) iteratively. Let

h = t1 − t0,
m0 = f(t0, x0, u(t0)),
x1* = x0 + h m0,
m1 = f(t1, x1*, u(t1)).

The predictor equation is

x1* = x0 + h f(t0, x0, u(t0)),

and the corrector equation is

x(t1) = x0 + (h/2)(m0 + m1).
Once we have found x(t1 ), we can apply the same procedure to find x(t2 ), and so on.
◆ Example 2.10
We use the predictor–corrector method to find x(1), where the step length h equals
0.5, for the following differential equation:
dx
= t x 2 − x, x(0) = 1. (2.21)
dt
Table 2.1 Solving the Differential Equation (2.21) Using the Predictor–Corrector Method

t0    x0        t1    m0         x1*       m1         m = (m0 + m1)/2    Δx
0.0   1.00000   0.5   −1.00000   0.50000   −0.37500   −0.68750           −0.34375
0.5   0.65625   1.0   −0.44092   0.43579   −0.24588   −0.34340           −0.17170

Hence, x(0.5) = 0.65625 and x(1) ≈ 0.48455.
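The two predictor–corrector steps of Example 2.10 can be reproduced in a few lines; the sketch below implements the predictor and corrector formulas above with h = 0.5 for dx/dt = t·x² − x, x(0) = 1.

```python
# One predictor-corrector step: forward Euler predictor, trapezoid corrector.
def pc_step(f, t, x, h):
    m0 = f(t, x)
    x_pred = x + h * m0               # predictor
    m1 = f(t + h, x_pred)
    return x + 0.5 * h * (m0 + m1)    # corrector

f = lambda t, x: t * x * x - x
x = 1.0
for k in range(2):                    # two steps: t = 0 -> 0.5 -> 1
    x = pc_step(f, 0.5 * k, x, 0.5)
# x now estimates x(1); the exact solution x(t) = 1/(1+t) gives 0.5
```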
We could use x(t1 ) obtained from the corrector formula to construct a next approximation to
x(t1 ). In general, we can use the formula
x(k)(t1) = x0 + (h/2)[ f(t0, x0, u(t0)) + f(t1, x(k−1)(t1), u(t1)) ]
to evaluate x(t1 ) until two successive iterations agree to the desired accuracy. This general
algorithm is referred to as a second-order predictor–corrector method.
An n-dimensional system of first-order differential equations can be worked analogously. We
illustrate the procedure for n = 2, where
dx/dt = f(t, x, y), x(t0) = x0,
dy/dt = g(t, x, y), y(t0) = y0.

Let

m0 = f(t0, x0, y0),
n0 = g(t0, x0, y0).

Then, the predictor equations are

x1* = x0 + m0 h,
y1* = y0 + n0 h.

Let

m1 = f(t1, x1*, y1*),
n1 = g(t1, x1*, y1*).

Then, the corrector equations are

x(t1) = x0 + ((m0 + m1)/2) h,
y(t1) = y0 + ((n0 + n1)/2) h.
Using the above, we can generate the corresponding formulas for tk .
We will now show that the predictor–corrector method yields a solution that agrees with
the Taylor series solution through terms of degree two. We consider the scalar case, where f
is assumed to be differentiable with respect to its arguments and ∂ 2 f /∂t∂ x = ∂ 2 f /∂ x∂t. First
observe that m 1 = f (t1 , x1∗ ) can be written as
f(t1, x1*) = f(t0 + h, x0 + m0 h)
≈ f(t0, x0) + ( ft + fx m0 ) h + (1/2)( ftt + 2 ftx m0 + fxx m0² ) h²,
where f t = ∂ f /∂t, f x = ∂ f /∂ x, and so on. Substituting the above into the corrector equation
yields

x(t1) = x0 + (h/2)(m0 + m1)
= x0 + m0 h + (h²/2)( ft + fx m0 ) + (h³/4)( ftt + 2 ftx m0 + fxx m0² ).
The Taylor series expansion, on the other hand, gives
x(t1) = x0 + ẋ h + ẍ (h²/2) + x(3) (h³/6) + ···
= x0 + f(t0, x0) h + (h²/2)( ft + fx ẋ ) + (h³/6)( ftt + 2 ftx ẋ + fxx ẋ² + fx ẍ ) + ···,
where f t , f x , ẋ, and so on, are all evaluated at t = t 0 . Comparing the right-hand sides of the last
two equations, we see that the solution obtained using the predictor–corrector method agrees
with the Taylor series expansion of the true solution through terms of degree two.
Simpson's formula states that

∫_{t0}^{t0+h} g(t) dt ≈ (h/6)(x0 + 4x1 + x2),

where x0 = g(t0), x1 = g(t0 + h/2), and x2 = g(t0 + h). The above formula is derived by passing
a parabola through the three points (t 0 , x0 ), (t 0 + h/2, x1 ), and (t 0 + h, x2 ) and assuming that the
area under the given curve is approximately equal to the area under the parabola, as illustrated
in Figure 2.10. To verify Simpson’s formula, we translate the axes so that in the new coordinates
(t̃, x̃) we have t = t0 + h/2 + t̃ and x = x̃. In the new coordinates the three points become (−h/2, x0), (0, x1), and (h/2, x2), and we pass through them the parabola

x̃ = a t̃² + b t̃ + c.
Figure 2.10 Integration using Simpson's rule.
Hence,

x0 = a h²/4 − b h/2 + c,
x1 = c,
x2 = a h²/4 + b h/2 + c.

Solving the above equations for a, b, and c, we obtain

c = x1,
b = (x2 − x0)/h,
a = (2/h²)(x0 − 2x1 + x2).
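A one-panel Simpson's rule is easy to state in code and to test: because the rule integrates the fitted parabola exactly, and by symmetry also cubics, integrating a hypothetical cubic test function should give the exact answer.

```python
# One-panel Simpson's rule: integral of g over [t0, t0 + h] approximated
# by (h/6) * (g(t0) + 4*g(t0 + h/2) + g(t0 + h)); exact through cubics.
def simpson(g, t0, h):
    return (h / 6.0) * (g(t0) + 4.0 * g(t0 + h / 2.0) + g(t0 + h))

# integral of t^3 over [0, 2] is exactly 4
val = simpson(lambda t: t**3, 0.0, 2.0)
```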
Given the differential equation

dx/dt = f(t, x), x(t0) = x0,

we integrate it to obtain

x(t1) = x(t0) + ∫_{t0}^{t1} f(s, x(s)) ds.
In Runge's method, the integral on the right-hand side is approximated by

(h/6)(m0 + 4m1 + m3),
where

h = t1 − t0,
m0 = f(t0, x0),
m1 = f(t0 + h/2, x0 + m0 h/2),
m2 = f(t0 + h, x0 + m0 h),
m3 = f(t0 + h, x0 + m2 h).
Thus, we may write

x(t1) = x0 + Δx,

where Δx = mh and m = (1/6)(m0 + 4m1 + m3).
◆ Example 2.11
We use Runge’s method to find x(1), where the step length h equals 0.5, for the
differential equation (2.21). We found x(1) using the predictor–corrector method in
Example 2.10. The calculations for Runge’s method are summarized in Table 2.2.
Table 2.2 Solving the Differential Equation (2.21) Using Runge's Method

t0    x0        m0         m1         m2         m3         m          Δx
0.0   1.00000   −1.00000   −0.60938   −0.37500   −0.48242   −0.65332   −0.32666
0.5   0.67334   −0.44665   −0.32507   −0.24750   −0.24754   −0.33241   −0.16620

Hence, x(0.5) ≈ 0.67334 and x(1) ≈ 0.50714.
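Runge's method of Example 2.11 is equally compact in code; the sketch below applies the slope formulas above with h = 0.5 to dx/dt = t·x² − x, x(0) = 1, whose exact solution gives x(1) = 0.5.

```python
# One step of Runge's method: x(t1) = x0 + h*(m0 + 4*m1 + m3)/6.
def runge_step(f, t, x, h):
    m0 = f(t, x)
    m1 = f(t + h / 2, x + m0 * h / 2)
    m2 = f(t + h, x + m0 * h)
    m3 = f(t + h, x + m2 * h)       # m2 only feeds the endpoint slope m3
    return x + h * (m0 + 4.0 * m1 + m3) / 6.0

f = lambda t, x: t * x * x - x
x = 1.0
for k in range(2):                  # two steps of h = 0.5 reach t = 1
    x = runge_step(f, 0.5 * k, x, 0.5)
# x estimates x(1); the exact value is 0.5
```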
Runge’s method can, of course, be used for systems of first-order differential equations. If, for
example,
dx/dt = f(t, x, y), x(t0) = x0,
dy/dt = g(t, x, y), y(t0) = y0,
then
x(t1) = x0 + Δx,
y(t1) = y0 + Δy,

where

Δx = (h/6)(m0 + 4m1 + m3),
Δy = (h/6)(n0 + 4n1 + n3).
m 0 = f (t 0 , x0 , y0 ),
m 1 = f (t 0 + h/2, x0 + m 0 h/2, y0 + n 0 h/2),
m 2 = f (t 0 + h, x0 + m 0 h, y0 + n 0 h),
m 3 = f (t 0 + h, x0 + m 2 h, y0 + n 2 h),
n 0 = g(t 0 , x0 , y0 ),
n 1 = g(t 0 + h/2, x0 + m 0 h/2, y0 + n 0 h/2),
n 2 = g(t 0 + h, x0 + m 0 h, y0 + n 0 h),
n 3 = g(t 0 + h, x0 + m 2 h, y0 + n 2 h).
The classical fourth-order Runge–Kutta formula is

x(t1) = x(t0) + (h/6)(m0 + 2m1 + 2m2 + m3),    (2.23)
where
m 0 = f (t 0 , x0 ),
m 1 = f (t 0 + h/2, x0 + m 0 h/2),
m 2 = f (t 0 + h/2, x0 + m 1 h/2),
m 3 = f (t 0 + h, x0 + h m 2 ).
As before, we may write x(t1) = x0 + Δx, where Δx = (h/6)(m0 + 2m1 + 2m2 + m3).
◆ Example 2.12
We apply the Runge–Kutta method to find x(1), where the step length h equals 0.5, for the differential equation (2.21) that we used in Examples 2.10 and 2.11. Our
calculations for the Runge–Kutta method are summarized in Table 2.3.
Table 2.3 Solving the Differential Equation (2.21) Using the Runge–Kutta Method

t0    x0        m0         m1         m2         m3         m          Δx
0.0   1.00000   −1.00000   −0.60938   −0.66803   −0.44422   −0.66650   −0.33325
0.5   0.66675   −0.44447   −0.32409   −0.32842   −0.24999   −0.33325   −0.16662

Hence, x(0.5) ≈ 0.66675 and x(1) ≈ 0.50013.
The above problem has been worked by the predictor–corrector, Runge’s, and the Runge–Kutta
methods. This problem can be solved exactly by means of the Taylor series. The Taylor series
solution is
x(t) = 1 − t + t² − t³ + ··· .

The series converges to 1/(1 + t) for −1 < t < 1. A check on the differential equation shows that

x(t) = 1/(1 + t)

is indeed a solution for all t > −1. Thus, x(1) = 0.500.
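The fourth-order Runge–Kutta formula (2.23) can be sketched and compared against this exact answer; even with the coarse step h = 0.5, two steps land very close to x(1) = 0.5.

```python
# Classical fourth-order Runge-Kutta step (2.23).
def rk4_step(f, t, x, h):
    m0 = f(t, x)
    m1 = f(t + h / 2, x + m0 * h / 2)
    m2 = f(t + h / 2, x + m1 * h / 2)
    m3 = f(t + h, x + m2 * h)
    return x + h * (m0 + 2 * m1 + 2 * m2 + m3) / 6.0

f = lambda t, x: t * x * x - x          # test equation (2.21)
x = 1.0
for k in range(2):                      # two steps of h = 0.5 reach t = 1
    x = rk4_step(f, 0.5 * k, x, 0.5)
# x estimates x(1); the exact value is 0.5
```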
The algorithms presented above require only one input point, x(tk ), at each step to compute
x(tk+1 ). Such algorithms are referred to as single-step algorithms. In the multistep algorithms, a
number of previous points are used, along with the corresponding values of f, to evaluate x(tk+1 );
that is, the multistep algorithms reuse past information about the trajectory when evaluating its
new point. In general, it may be difficult to decide which type of algorithm is more efficient
because the performance of a particular algorithm is problem dependent. Well-known multistep
algorithms are: Adams–Bashforth, Adams–Moulton, and Gear. More information about the
above-mentioned algorithms can be found in Parker and Chua [222, Section 4.1].
2.3 Principles of Linearization

Linearization replaces a nonlinear function y = h(x), in a neighborhood of an operating point (x0, y0) with y0 = h(x0), by its tangent line

y = y0 + (dh(x0)/dx)(x − x0),

as illustrated in Figure 2.11.
◆ Example 2.13
Consider the simple pendulum shown in Figure 2.12. Let g be the gravity con-
stant. The tangential force component, mg sin θ, acting on the mass m returns the
pendulum to its equilibrium position. By Newton’s second law we obtain
F = −mg sin θ.
The equilibrium point for the pendulum is θ = 0°. Note that F(0°) = 0. Therefore, ΔF = F and Δθ = θ. We also have

(dF/dθ)(0°) = −mg cos 0° = −mg.

Hence, the linearized model about the equilibrium position θ = 0° has the form

F = −mg θ.
Thus, for small angular displacements θ, the force F is proportional to the displacement. This approximation is quite accurate for −π/4 ≤ θ ≤ π/4, as can be seen in Figure 2.13.
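The accuracy claim can be quantified with a short sketch that scans the interval [−π/4, π/4] for the worst-case error of the small-angle approximation sin θ ≈ θ (the grid resolution is a hypothetical choice).

```python
import math

# Worst absolute error of sin(theta) ≈ theta over [-limit, limit].
def worst_error(limit, n=1000):
    worst = 0.0
    for k in range(n + 1):
        th = -limit + 2 * limit * k / n
        worst = max(worst, abs(math.sin(th) - th))
    return worst

# the maximum error occurs at the endpoints theta = +/- pi/4
err = worst_error(math.pi / 4)
```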
Figure 2.12 A simple pendulum; the tangential force component acting on the mass is −mg sin θ.

Figure 2.13 The force F = −mg sin θ and its linearization, a line of slope −mg, on the interval [−π/2, π/2].
Figure 2.14 Tangent plane as a linear approximation for a function of two variables.
If h : Rn → R—that is, y = h(x1 , x2 , . . . , xn ), which means that the dependent variable depends
upon several variables—the same principle applies. Let
x 0 = [x10 x20 ··· xn0 ]T
be the operating point. The Taylor series expansion of h about the operating point x 0 yields
y − h(x 0 ) = ∇h(x 0 )T (x − x 0 ) + higher-order terms,
where

∇h(x0)ᵀ = [ ∂h/∂x1 |x=x0   ∂h/∂x2 |x=x0   ···   ∂h/∂xn |x=x0 ].
Geometrically, the linearization of h about x 0 can be thought of as placing a tangent plane onto
the nonlinear surface at the operating point x 0 as illustrated in Figure 2.14.
2.4 Linearizing Differential Equations

Consider a nonlinear system model ẋ = f(x, u). Suppose that a constant input ue results in a constant equilibrium state xe = [x1e x2e ··· xne]ᵀ; that is, ue and xe satisfy

f(xe, ue) = 0.

Let x = xe + Δx and u = ue + Δu. Expanding f into a Taylor series about (xe, ue) gives

(d/dt)(xe + Δx) = f(xe, ue) + (∂f/∂x)(xe, ue) Δx + (∂f/∂u)(xe, ue) Δu + higher-order terms,

where (∂f/∂x)(xe, ue) and (∂f/∂u)(xe, ue)
are the Jacobian matrices of f with respect to x and u, evaluated at the equilibrium point,
[x eT ueT ]T . Note that
(d/dt) x = (d/dt) xe + (d/dt) Δx = (d/dt) Δx
because xe is constant. Furthermore, f(xe, ue) = 0. Let

A = (∂f/∂x)(xe, ue) and B = (∂f/∂u)(xe, ue).
Neglecting higher-order terms, we arrive at the linear approximation

(d/dt) Δx = A Δx + B Δu.
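In practice the Jacobian matrices A and B are often obtained numerically. The sketch below forms central-difference approximations of ∂f/∂x and ∂f/∂u at an equilibrium; the system used here is a hypothetical damped-pendulum-like example (not the Watt governor discussed later), chosen so the analytic Jacobians are easy to verify.

```python
import math

# Hypothetical example system: x1' = x2, x2' = -sin(x1) - 0.5*x2 + u.
def f(x, u):
    return [x[1], -math.sin(x[0]) - 0.5 * x[1] + u[0]]

# Central-difference Jacobians A = df/dx and B = df/du at (xe, ue).
def jacobians(f, xe, ue, eps=1e-6):
    n, m = len(xe), len(ue)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp, xm = list(xe), list(xe)
        xp[j] += eps; xm[j] -= eps
        fp, fm = f(xp, ue), f(xm, ue)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    for j in range(m):
        up, um = list(ue), list(ue)
        up[j] += eps; um[j] -= eps
        fp, fm = f(xe, up), f(xe, um)
        for i in range(n):
            B[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return A, B

A, B = jacobians(f, xe=[0.0, 0.0], ue=[0.0])
# analytically, A = [[0, 1], [-1, -0.5]] and B = [[0], [1]]
```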
Similarly, if the outputs of the nonlinear system model are of the form
y1 = h 1 (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m ),
y2 = h 2 (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m ),
..
.
y p = h p (x1 , x2 , . . . , xn , u 1 , u 2 , . . . , u m )
or in vector notation
y = h(x, u),
then Taylor’s series expansion can again be used to yield the linear approximation of the above
output equations. Indeed, if we let
y = ye + Δy,

then we obtain

Δy = C Δx + D Δu,
where

C = (∂h/∂x)(xe, ue) =
[ ∂h1/∂x1  ···  ∂h1/∂xn ]
[    ⋮            ⋮     ]
[ ∂hp/∂x1  ···  ∂hp/∂xn ]

evaluated at x = xe, u = ue, with C ∈ R^(p×n),
and

D = (∂h/∂u)(xe, ue) =
[ ∂h1/∂u1  ···  ∂h1/∂um ]
[    ⋮            ⋮     ]
[ ∂hp/∂u1  ···  ∂hp/∂um ]

evaluated at x = xe, u = ue, with D ∈ R^(p×m),
are the Jacobian matrices of h with respect to x and u, evaluated at the equilibrium point
[x eT ueT ]T .
◆ Example 2.14
In this example, we linearize the model of Watt’s governor that we derived in
Section 1.7.1. Recall that the equations modeling the governor have the form
ẋ1 = x2,
ẋ2 = (1/2) N² x3² sin 2x1 − g sin x1 − (b/m) x2,
ẋ3 = (κ/I) cos x1 − τ/I.
We linearize the above model about an equilibrium state of the form

xe = [x1e 0 x3e]ᵀ.
Equating the right-hand sides of the above engine–governor equations to zero yields

x2e = 0,
N² x3e² = g/cos x1e,
cos x1e = τ/κ.
The Jacobian matrix of the engine–governor model evaluated at the equilibrium state is

∂f/∂x |x=xe =
[ 0                                1       0                 ]
[ N² x3e² cos 2x1e − g cos x1e    −b/m     N² x3e sin 2x1e   ]
[ −(κ/I) sin x1e                   0       0                 ].
2.5 Describing Function Method
Definition 2.2 The scalar product of two real-valued functions u, v ∈ C([a, b]) is

⟨u, v⟩ = ∫_{a}^{b} u(t)v(t) dt.
Using simple properties of the integral, we can verify that the above scalar product is symmetric, is linear in each argument, and satisfies ⟨u, u⟩ ≥ 0. The functions u, v ∈ C([a, b]) are said to be orthogonal if ⟨u, v⟩ = 0. A set of functions from C([a, b]) is said to be mutually orthogonal if each distinct pair of functions in the set is orthogonal.
◆ Example 2.15
The functions ψn(t) = sin nωt, φn(t) = cos nωt, n = 1, 2, . . . , and φ0(t) = 1 form a mutually orthogonal set of functions on the interval [t0, t0 + 2π/ω]. This can be verified by direct integration. For example, for positive m and n such that m ≠ n, we have

⟨ψm, ψn⟩ = ∫_{t0}^{t0+2π/ω} sin mωt sin nωt dt
= (1/2) ∫_{t0}^{t0+2π/ω} (cos (m − n)ωt − cos (m + n)ωt) dt
= (1/2ω) [ sin (m − n)ωt/(m − n) − sin (m + n)ωt/(m + n) ] evaluated from t0 to t0 + 2π/ω
= 0.    (2.27)

Similarly, for m ≠ n,

⟨φm, φn⟩ = 0,    (2.28)

and, for all m and n,

⟨φm, ψn⟩ = 0.    (2.29)
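The orthogonality relations (2.27)–(2.29) are easy to confirm numerically. The sketch below approximates the scalar product by a Riemann sum over one period (very accurate for smooth periodic integrands); ω, t0, and the indices are hypothetical choices.

```python
import math

# Scalar product <u, v> over one period [t0, t0 + 2*pi/w], approximated
# by a Riemann sum with n sample points.
def inner(u, v, t0, w, n=20000):
    T = 2 * math.pi / w
    h = T / n
    return h * sum(u(t0 + k * h) * v(t0 + k * h) for k in range(n))

w, t0 = 2.0, 0.3
psi = lambda m: (lambda t: math.sin(m * w * t))
ip = inner(psi(1), psi(3), t0, w)      # <psi_1, psi_3> should vanish
norm2 = inner(psi(2), psi(2), t0, w)   # ||psi_2||^2 should equal pi/w
```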
◆ Example 2.16
Consider ψn(t) = sin nωt on the interval [t0, t0 + 2π/ω]. Then,

‖ψn‖² = ∫_{t0}^{t0+2π/ω} sin nωt sin nωt dt
= ∫_{t0}^{t0+2π/ω} sin² nωt dt
= (1/2) ∫_{t0}^{t0+2π/ω} (1 − cos 2nωt) dt
= (1/2) [ t − sin 2nωt/(2nω) ] evaluated from t0 to t0 + 2π/ω
= π/ω.    (2.30)

Similarly,

‖φn‖² = π/ω, n = 1, 2 . . . ,    (2.31)

and

‖φ0‖² = 2π/ω.    (2.32)
So far we discussed only continuous functions. In many applications we have to work with
more general functions, specifically with piecewise continuous functions.
Definition 2.3 A function f is said to be piecewise continuous on an interval [a, b] if the
interval can be partitioned by a finite number of points, say ti , so that:
1. a = t 0 < t1 < · · · < tn = b.
2. The function f is continuous on each open subinterval (ti−1 , ti ).
3. The function f approaches a finite limit as the end points of each subinterval are approached
from within the subinterval; that is, the limits
lim (h→0, h>0) f(ti − h) and lim (h→0, h>0) f(ti + h)
both exist.
The graph of a piecewise continuous function is shown in Figure 2.15. In further discussion, we assume that two piecewise continuous functions f and g are equal if they have the same values at the points where they are continuous. Thus, the functions shown in Figure 2.16 are considered to be equal, though they differ at the points of discontinuity.
Figure 2.15 A piecewise continuous function.

Figure 2.16 Two piecewise continuous functions that we consider to be equal.
Theorem 2.2 Let V be the space of functions that are piecewise continuous on the interval [a, b]. Let f ∈ V. Then, ‖f‖ = 0 if and only if f(x) = 0 for all but a finite number of points x in the interval [a, b].
Proof (⇐) It is clear that if f(x) = 0 except for a finite number of x ∈ [a, b], then

‖f‖² = ∫_{a}^{b} f²(x) dx = 0.
(⇒) Suppose f is piecewise continuous on [a, b], and let a = t0 < t1 < ··· < tn = b be a partition of the interval [a, b] such that f is continuous on each subinterval [ti−1, ti] except possibly at the end points. Suppose that ‖f‖ = 0. Then, also ‖f‖² = ⟨f, f⟩ = 0. The above means that

∫_{a}^{b} f²(x) dx = 0,
where the integral is the sum of the integrals over the intervals [ti−1, ti]; that is,

∫_{a}^{b} f²(x) dx = Σ_{i=1}^{n} ∫_{ti−1}^{ti} f²(x) dx = 0.
Hence, each such integral must be equal to 0. The function f is continuous on (ti−1 , ti ).
Therefore,
f 2 ( x) = 0 for ti−1 < x < ti ,
which means that
f ( x) = 0 for ti−1 < x < ti .
Hence, f ( x) = 0 except at a finite number of points. The proof is complete.
The above theorem implies that the scalar product on the space V of piecewise continuous functions is positive definite. In other words, for a piecewise continuous function f ∈ V, we have ⟨f, f⟩ ≥ 0, and ⟨f, f⟩ = 0 if and only if f(x) = 0 at all but a finite number of points in the interval [a, b].
Definition 2.4 A function f : R → R is said to be periodic with the period T if the domain
of f contains t + T whenever it contains t and if
f (t + T ) = f (t)
for every value of t. The smallest positive value of T for which f (t + T ) = f (t) is called
the fundamental period of f .
It follows from the above definition that if T is a period of f , then 2T is also a period, and so is
any integral multiple of T .
A periodic function, satisfying certain assumptions spelled out later, may be represented by
the series
f(t) = A0 + Σ_{k=1}^{∞} (Ak cos kωt + Bk sin kωt),    (2.33)
where ω = 2π/T , with T being the fundamental period of f . On the set of points where the
series (2.33) converges, it defines f , whose value at each point is the sum of the series for that
value of t, and we say that the series (2.33) is the Fourier series for f .
Suppose that the series (2.33) converges. We use the results of the previous subsection to find
expressions for the coefficients Ak and Bk . We first compute Ak coefficients. Multiplying (2.33)
by φm = cos mωt, where m > 0 is a fixed positive integer, and then integrating both sides with
respect to t from t0 to t0 + 2π/ω yields

⟨φm, f⟩ = A0 ⟨φm, φ0⟩ + Σ_{k=1}^{∞} Ak ⟨φm, φk⟩ + Σ_{k=1}^{∞} Bk ⟨φm, ψk⟩,    (2.34)
where φ0 = 1. It follows from (2.27), (2.28), and (2.29) that the only nonvanishing term on
the right-hand side of (2.34) is the one for which k = m in the first summation. Using (2.31),
we obtain

Am = ⟨φm, f⟩ / ⟨φm, φm⟩ = (ω/π) ∫_{t0}^{t0+2π/ω} f(t) cos mωt dt, m = 1, 2, . . . .    (2.35)
To obtain A0, we integrate (2.33). Using (2.32), we get

A0 = ⟨φ0, f⟩ / ⟨φ0, φ0⟩ = (ω/2π) ∫_{t0}^{t0+2π/ω} f(t) dt.    (2.36)
An expression for Bm may be obtained by multiplying (2.33) by sin mωt and then integrating term by term from t0 to t0 + 2π/ω. Using the orthogonality relations (2.27)–(2.29) yields

Bm = ⟨ψm, f⟩ / ⟨ψm, ψm⟩ = (ω/π) ∫_{t0}^{t0+2π/ω} f(t) sin mωt dt, m = 1, 2, . . . .    (2.37)
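The coefficient formulas (2.35)–(2.37) can be evaluated numerically for any integrable periodic signal. The sketch below approximates the integrals by Riemann sums over one period and checks the result on a hypothetical signal whose Fourier expansion is known by construction.

```python
import math

# Numerical Fourier coefficients per (2.35)-(2.37), Riemann sums over
# one period of length 2*pi/w starting at t0.
def fourier_coeffs(f, w, m_max, t0=0.0, n=20000):
    T = 2 * math.pi / w
    h = T / n
    ts = [t0 + k * h for k in range(n)]
    A0 = (w / (2 * math.pi)) * h * sum(f(t) for t in ts)
    A = [(w / math.pi) * h * sum(f(t) * math.cos(m * w * t) for t in ts)
         for m in range(1, m_max + 1)]
    B = [(w / math.pi) * h * sum(f(t) * math.sin(m * w * t) for t in ts)
         for m in range(1, m_max + 1)]
    return A0, A, B

w = 2.0
sig = lambda t: 1.0 + 2.0 * math.cos(w * t) + 3.0 * math.sin(2 * w * t)
A0, A, B = fourier_coeffs(sig, w, 2)
# by construction, A0 = 1, A = [2, 0], and B = [0, 3]
```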
Suppose now that a function f is given that is periodic with period 2π/ω and integrable on the
interval [t 0 , t 0 + 2π/ω]. Then, coefficients Ak and Bk can be computed using (2.35)–(2.37), and
a series of the form (2.33) can be formally constructed. However, we do not know whether this
series converges for each value of t and, if so, whether its sum is f (t). The answer is given by
the theorem that we state after explaining the notation used. We denote the limit of f (t) as t
approaches c from the right by f(c+); that is,

f(c+) = lim_{t→c+} f(t).

Similarly, f(c−) = lim_{t→c−} f(t) denotes the limit of f(t) as t → c from the left. The mean value of the right- and left-hand limits at the point c is

( f(c+) + f(c−))/2.

Note that at any point c where f is continuous, we have

f(c+) = f(c−) = f(c).
Theorem 2.3 Assume that f and its derivative are piecewise continuous on the interval
[t 0 , t 0 + 2π/ω]. Furthermore, suppose that f is defined outside the interval [t 0 , t 0 +
2π/ω] so that it is periodic with period 2π/ω. Then, f has a Fourier series of the
form (2.33) whose coefficients are given by (2.35)–(2.37). The Fourier series converges
to ( f (t+) + f (t−))/2 at all points of the interval [t 0 , t 0 + 2π/ω].
With this knowledge of the Fourier series, it is now possible to discuss the describing function
method for analyzing a class of nonlinear control systems.
Figure 2.18 A nonlinear feedback system with a nonlinear element in the forward path.
Consider a feedback system that contains a nonlinear element as shown in Figure 2.18.
Assume that the input to the nonlinear element is a sinusoidal signal x(t) = X sin ωt, where
X is the amplitude of the input sinusoid. The output of the nonlinear element is, in general,
not sinusoidal in response to a sinusoidal input. Suppose that the nonlinear element output is
periodic with the same period as its input and that it may be expanded in the Fourier series,
$$y(t) = A_0 + \sum_{k=1}^{\infty} \left( A_k \cos k\omega t + B_k \sin k\omega t \right) = A_0 + \sum_{k=1}^{\infty} Y_k \sin(k\omega t + \phi_k). \tag{2.38}$$
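To see how a nonlinear element generates the higher harmonics appearing in (2.38), one can push a sinusoid through a sample nonlinearity and read off the amplitudes $Y_k$ numerically. The sketch below uses a hypothetical cubic element $y = x^3$ (our choice, not an element from the text); since $\sin^3 \omega t = (3\sin\omega t - \sin 3\omega t)/4$, only the first and third harmonics survive.

```python
import math

def harmonic(y, omega, k, n=4096):
    """Fourier coefficients (A_k, B_k) of the periodic signal y over one
    period, via the rectangle rule (spectrally accurate for periodic data)."""
    T = 2 * math.pi / omega
    h = T / n
    Ak = Bk = 0.0
    for i in range(n):
        t = i * h
        Ak += y(t) * math.cos(k * omega * t)
        Bk += y(t) * math.sin(k * omega * t)
    return (omega / math.pi) * Ak * h, (omega / math.pi) * Bk * h

X, omega = 2.0, 1.0
y = lambda t: (X * math.sin(omega * t)) ** 3   # hypothetical cubic nonlinearity
for k in (1, 2, 3):
    Ak, Bk = harmonic(y, omega, k)
    Yk = math.hypot(Ak, Bk)
    print(k, round(Yk, 6))   # Y1 ~ 3*X**3/4 = 6, Y2 ~ 0, Y3 ~ X**3/4 = 2
```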
[Figure: the complex plane (Re G(jω), j Im G(jω)) showing the polar plot of G(jω) and the −1/N(X) locus, with intersection points labeled 1 and 2 and the corresponding Case (ii) and Case (iii).]
Figure 2.20 Stability analysis using describing function.
◆ Example 2.17
Consider the nonlinear control system shown in Figure 2.21 that was analyzed in
Power and Simpson [237, pp. 339–341]. We first derive the describing function of
the ideal relay. When the input of the ideal relay is greater than zero, its output is
M. When the relay’s input is less than zero, its output is −M. Thus, the ideal relay
[Figure 2.21: the closed-loop system of this example, with reference r = 0, an ideal relay in the forward path, and the plant 16/(s(s + 2)²) with output c.]
[Figure 2.22: the ideal relay characteristic with output levels ±M; the sinusoidal input x = X sin ωt and the resulting rectangular-wave output are drawn alongside the characteristic.]
can be described as
$$y = \begin{cases} M & \text{for } x > 0, \\ -M & \text{for } x < 0. \end{cases} \tag{2.41}$$
We assume that the input to the relay is a sinusoid, x(t) = X sin ωt, drawn vertically
down from the relay characteristic in Figure 2.22. The output y(t) of the relay is
a rectangular wave. It is drawn horizontally to the right of the relay characteristic
in Figure 2.22. This rectangular wave is an odd function. For an odd function, we
have Ak = 0, k = 0, 1, . . . , in its Fourier series. Thus, the Fourier series of the
relay’s output has the form
$$y(t) = \sum_{k=1}^{\infty} B_k \sin k\omega t,$$
where
$$\begin{aligned} B_1 &= \frac{\omega}{\pi} \int_0^{2\pi/\omega} y(t) \sin \omega t \, dt \\ &= \frac{2\omega}{\pi} \int_0^{\pi/\omega} M \sin \omega t \, dt \\ &= \frac{2M\omega}{\pi} \int_0^{\pi/\omega} \sin \omega t \, dt \\ &= \frac{2M}{\pi} \Bigl[ -\cos \omega t \Bigr]_0^{\pi/\omega} \\ &= \frac{4M}{\pi}. \end{aligned} \tag{2.42}$$
Hence, the describing function of the ideal relay is
$$N(X) = \frac{Y_1}{X} \angle 0^\circ = \frac{4M}{\pi X}.$$
A normalized plot of the describing function of the ideal relay nonlinearity is
depicted in Figure 2.23. In our example, M = 1 and thus the describing function
of the relay is
$$N(X) = \frac{4}{\pi X}.$$
The sinusoidal transfer function of the linear plant model is
$$G(j\omega) = \frac{16}{j\omega(j\omega + 2)^2} = \frac{-64\omega + j\,16(\omega^2 - 4)}{\omega(4 + \omega^2)^2}.$$
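The algebraic rationalization above is easy to spot-check numerically. The sketch below (function names are ours) compares both forms of G(jω) at a few frequencies and confirms that the polar plot crosses the negative real axis at ω = 2, where G(j2) = −1.

```python
def G(w: float) -> complex:
    # Plant frequency response G(jw) = 16 / (jw (jw + 2)^2)
    return 16 / (1j * w * (1j * w + 2) ** 2)

def G_rationalized(w: float) -> complex:
    # The equivalent rationalized form derived above
    return complex(-64 * w, 16 * (w ** 2 - 4)) / (w * (4 + w ** 2) ** 2)

for w in (0.5, 1.0, 2.0, 6.0):
    assert abs(G(w) - G_rationalized(w)) < 1e-12

# The imaginary part vanishes at w = 2, where G(j2) = -1.
assert abs(G(2.0) - (-1 + 0j)) < 1e-12
print("forms agree; G(j2) =", G(2.0))
```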
[Figure: normalized gain N plotted against X/M from 0 to 5, decaying hyperbolically as X/M increases.]
Figure 2.23 A plot of the describing function for the ideal relay.
[Figure: in the G(s) plane, the polar plot of G(jω) and the −1/N locus along the negative real axis, intersecting near −1 + j0.]
Figure 2.24 Plots of G(jω) and −1/N of Example 2.17.
At ω = 2 rad/sec the polar plot of G(jω) intersects the plot of −1/N (see Figure 2.24), forming a stable limit cycle. The amplitude of the limit cycle is obtained from the relation G(j2) = −1/N; that is,
$$\frac{-64}{(4 + 4)^2} = -\frac{\pi X}{4}.$$
Hence,
$$X = \frac{4}{\pi},$$
while the amplitude of the fundamental harmonic of the plant output, c, is
$$|G(j2)| \, \frac{4}{\pi} = \frac{4}{\pi}.$$
Note that the filtering hypothesis is satisfied in this example. Indeed, the Fourier
series of the output of the relay is
$$\frac{4}{\pi} \left( \sin \omega t + \frac{1}{3} \sin 3\omega t + \frac{1}{5} \sin 5\omega t + \cdots \right).$$
Thus, for example, the amplitude of the third harmonic of the plant output is
$$\frac{4}{3\pi} \, |G(j6)| = \frac{1}{45} \cdot \frac{4}{\pi}.$$
The higher harmonics are even more attenuated compared with the fundamental
harmonic.
Notes
The system of differential equations (2.43) in Exercise 2.6 comes from Boyce and DiPrima [32,
p. 523]. Al-Khafaji and Tooley [6] provide an introductory treatment of numerical solutions of
differential equations. For further discussion of solving differential equations using numerical
methods and analyses of their computational efficiency, the reader is referred to Salvadori and
Baron [252], Conte and de Boor [53, Chapter 8], or Parker and Chua [222]. For an in-depth treat-
ment of the describing function method, we refer to Graham and McRuer [105] and Chapters 6
and 7 of Hsu and Meyer [128]. Graham and McRuer [105] make an interesting remark about
the origins of the describing function method. They note on page 92 of their 1961 book: “While
the notion of representing a nonlinearity by an ‘equivalent’ linear element is quite old and has
been used by many writers, the first instance of a major systematic exploitation of the technique
was probably by N. M. Kryloff and N. N. Bogoliuboff, Introduction to Non-linear Mechanics
(a free translation of the Russian edition of 1937, by S. Lefschetz), Princeton University Press,
Princeton, N.J., 1943. See also N. Minorsky, Introduction to Non-linear Mechanics, J. W.
Edwards, Ann Arbor, Mich., 1947.” The so-called sinusoidal describing function that we ana-
lyzed in the previous section, was developed almost simultaneously in several different countries
during and just after World War II. The first to introduce the method in the open literature appears
to be A. Tustin of England, who published his paper in the Journal of the IEE in 1947.
The behavior of numerous dynamical systems is often modeled by a set of linear, first-order differential equations. Frequently, however, a linear model is the result of linearizing a nonlinear model. Linear models seem to dominate the controls literature. Yet nature is nonlinear.
“All physical systems are nonlinear and have time-varying parameters in some degree” [105,
p. 1]. How can one reconcile this paradox? Graham and McRuer [105, p. 1] have the following
answer: “Where the effect of the nonlinearity is very small, or if the parameters vary only slowly
with time, linear constant-parameter methods of analysis can be applied to give an approximate
answer which is adequate for engineering purposes. The analysis and synthesis of physical
systems, predicated on the study of linear constant-parameter mathematical models, has been,
in fact, an outstandingly successful enterprise. A system represented as linear, however, is to some
extent a mathematical abstraction that can never be encountered in a real world. Either by design
or because of nature’s ways it is often true that experimental facts do not, or would not, correspond
with any prediction of linear constant-parameter theory. In this case nonlinear or time-varying-
parameter theory is essential to the description and understanding of physical phenomena.”
EXERCISES
2.6 Use Bendixson’s theorem to investigate the existence of limit cycles for the system
$$\begin{aligned} \dot{x}_1 &= x_1 + x_2 - x_1 \left( x_1^2 + x_2^2 \right), \\ \dot{x}_2 &= -x_1 + x_2 - x_2 \left( x_1^2 + x_2^2 \right). \end{aligned} \tag{2.43}$$
Sketch a phase-plane portrait for this system.
2.7 Use Bendixson’s theorem to investigate the existence of limit cycles for the system
$$\begin{aligned} \dot{x}_1 &= x_2 - x_1 \left( x_1^2 + x_2^2 - c \right), \\ \dot{x}_2 &= -x_1 - x_2 \left( x_1^2 + x_2^2 - c \right), \end{aligned} \tag{2.44}$$
where c is a constant. Sketch phase-plane portraits for this system for c = −0.2 and
c = 0.2. The above nonlinear system was analyzed in reference 222, p. 204.
2.8 Verify that the Runge method agrees with the Taylor series solution through terms of
degree three.
2.9 Verify that the Runge–Kutta method agrees with the Taylor series solution through
terms of degree four.
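Exercise 2.9 can also be checked numerically: if the classical Runge–Kutta formula matches the Taylor series through terms of degree four, its one-step error must shrink like h⁵, that is, by a factor of about 2⁵ = 32 when h is halved. A minimal sketch, using the test problem ẋ = x with exact solution eᵗ (our choice of example):

```python
import math

def rk4_step(f, t, x, h):
    """One classical Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, x: x                     # x' = x, x(0) = 1, exact solution e^t
errs = []
for h in (0.1, 0.05, 0.025):
    errs.append(abs(rk4_step(f, 0.0, 1.0, h) - math.exp(h)))
ratios = [errs[i] / errs[i + 1] for i in range(2)]
print(ratios)   # each ratio ~2**5 = 32: local truncation error is O(h^5)
```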
2.10 Consider the nonlinear mass–spring–damper system depicted in Figure 2.26. The spring
coefficient K = 0.5x 2 N/m, the damping coefficient B = 0.1ẋ 2 N·sec/m, and M = 1 kg.
[Figure: mass M driven by the input force u, restrained by the spring K and the damper B.]
Figure 2.26 The mass–spring–damper system of Exercise 2.10.
(a) Write the second-order differential equation describing the motion of the system.
(b) Choose x1 = x and x2 = ẋ. Represent the equation of motion derived in (a) in state-
space format.
(c) For the model from (b), find the equilibrium state corresponding to u = 0. Then,
linearize the model about the obtained equilibrium point.
2.11 (Based on Toptscheev and Tsyplyakov [282, pp. 29–31]) A cyclotron, whose operation
is illustrated in Figure 2.27, accelerates charged particles into high energies so that
they can be used in atom-smashing experiments. The operation of the cyclotron can be
described as follows. Positive ions enter the cyclotron from the ion source, S, and are
available to be accelerated. An electric oscillator establishes an accelerating potential
difference across the gap of the two D-shaped objects, called dees. An emerging ion
from the ion source, S, finds the dee that is negative. It will accelerate toward this dee
and will enter it. Once inside the dee, the ion is screened from electric fields by the
metal walls of the dees. The dees are immersed in a magnetic field so that the ion’s
trajectory bends in a circular path. The radius of the path depends on the ion’s velocity.
After some time, the ion emerges from the dee on the other side of the ion source. In
[Figure: the two dees with the ion source S between them, the oscillator connected across the gap, the spiraling beam, and the deflector plate at the outer edge.]
Figure 2.27 Schematic of a cyclotron of Exercise 2.11.
the meantime, the accelerating potential changes its sign. Thus the ion again faces a
negative dee. The ion further accelerates and describes a semicircle of larger radius.
This process goes on until the ion reaches the outer edge of one dee, where it is pulled
out of the cyclotron by a negatively charged deflector plate. The moving ion is acted
upon by a deflecting force of magnitude
$$q(v + \Delta v)(B + \Delta B),$$
where q is the charge of the ion, v is the ion’s speed, Δv is the increase in the ion’s speed, and B is the magnetic field acting on the ion. The nominal radius of the ion’s orbit is r. The change in the magnetic field as the result of a change in the orbit of the ion is ΔB. As the ion speed increases, the relativistic mass m increases by Δm. The ion moving in a circular path undergoes a centrifugal force of magnitude
$$\frac{(m + \Delta m)(v + \Delta v)^2}{r + \Delta r}.$$
By Newton’s second law, the equation describing the motion of the ion is
$$(m + \Delta m) \, \frac{d^2}{dt^2}(r + \Delta r) = \frac{(m + \Delta m)(v + \Delta v)^2}{r + \Delta r} - q(v + \Delta v)(B + \Delta B).$$
Linearize the above equation. The motion of the ion along a nominal path is described by
$$m \, \frac{d^2 r}{dt^2} = \frac{m v^2}{r} - q v B.$$
Find the transfer function relating the change in radius of motion, Δr, of the ion and the change in the momentum of the ion:
$$\Delta p = m \, \Delta v + \Delta m \, v.$$
In other words, find the transfer function L(Δr)/L(Δp), where L is the Laplace transform operator.
2.12 Linearize the permanent-magnet stepper motor model, analyzed in Subsection 1.7.3,
about x1eq = θeq = constant.
2.13 Let v1 , v2 , . . . , vn be nonzero, mutually orthogonal elements of the space V and let v
be an arbitrary element of V .
(a) Find the component of v along v j .
(b) Let c j denote the component of v along v j . Show that the element
$$v - c_1 v_1 - \cdots - c_n v_n$$
is orthogonal to v1 , v2 , . . . , vn .
(c) Let $c_j$ be the component of $v \in V$ along $v_j$. Show that
$$\sum_{i=1}^{n} c_i^2 \, \|v_i\|^2 \le \|v\|^2.$$
2.14 Consider characteristics of two nonlinear elements that are shown in Figure 2.28. Let
$$N_S(x) = \frac{2}{\pi} \left[ \arcsin \frac{1}{x} + \frac{1}{x} \cos\left( \arcsin \frac{1}{x} \right) \right]. \tag{2.45}$$
Then, the describing function of the limiter is
$$N_l = \begin{cases} K, & M \le S, \\[4pt] K \, N_S\!\left( \dfrac{M}{S} \right), & M > S. \end{cases}$$
The describing function of the dead zone is
$$N_{d\text{-}z} = \begin{cases} 0, & M \le A, \\[4pt] K \left( 1 - N_S\!\left( \dfrac{M}{A} \right) \right), & M > A. \end{cases}$$
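The limiter formula can be cross-checked by computing the fundamental Fourier coefficient of the limiter output directly. In the sketch below (the helper names and the rectangle-rule discretization are ours), the limiter is taken as a saturation with linear slope K and knee at |x| = S, driven by x = M sin θ; the numeric fundamental gain B₁/M matches K·N_S(M/S).

```python
import math

def N_S(x: float) -> float:
    # Formula (2.45)
    return (2 / math.pi) * (math.asin(1 / x)
                            + (1 / x) * math.cos(math.asin(1 / x)))

def limiter_df_numeric(K: float, S: float, M: float, n: int = 20000) -> float:
    """Fundamental gain B1/M of a limiter (slope K, saturating at |x| = S)
    driven by x = M sin(theta), via the rectangle rule over one period."""
    b1 = 0.0
    for i in range(n):
        th = 2 * math.pi * i / n
        x = M * math.sin(th)
        y = K * max(-S, min(S, x))        # limiter characteristic
        b1 += y * math.sin(th)
    b1 *= (2 * math.pi / n) / math.pi     # B1 = (1/pi) * integral of y sin
    return b1 / M

K, S, M = 2.0, 1.0, 3.0
print(limiter_df_numeric(K, S, M), K * N_S(M / S))   # the two values agree
```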
Derive the describing function of the nonlinear element whose characteristic is shown
in Figure 2.29.
[Figure: left, a limiter with linear slope K between −S and S; right, a dead zone of width 2A with slope K outside it.]
Figure 2.28 Characteristics of a limiter and dead zone nonlinear elements of Exercise 2.14.
[Figure 2.29: characteristic of the nonlinear element of Exercise 2.14, with breakpoints at ±A and slopes labeled K1 and K.]
2.15 The describing function of the nonlinear element shown in Figure 2.30 is
$$N_1 = \begin{cases} 0, & M \le A, \\[4pt] K \left( 1 - N_S\!\left( \dfrac{M}{A} \right) \right) + \dfrac{4B}{\pi M} \sqrt{1 - \left( \dfrac{A}{M} \right)^2}, & M > A, \end{cases}$$
[Figure 2.30: characteristic of the nonlinear element of Exercise 2.15, with breakpoints at ±A and output levels labeled B.]
where N S (x) is given by (2.45). The describing function of the dead-zone nonlinearity
is given in Exercise 2.14 and its characteristic is shown in Figure 2.28. Derive the
describing function of the ideal relay with dead zone whose characteristic is shown in
Figure 2.31.