L Lyapunov
L Lyapunov
L Lyapunov
Lyapunov Stability
Lyapunov stability theory [1] plays a central role in systems theory and engineering.
An equilibrium point is stable if all solutions starting at nearby points stay nearby;
otherwise, it is unstable. It is asymptotically stable if all solutions starting at nearby
points not only stay nearby, but also tend to the equilibrium point as time approaches
infinity. These notions are made precise as follows.
ẋ = f (x), (A.1)
V (x) → ∞ as x → ∞ (A.4)
That is, if a solution of (A.1) belongs M at some time instant, it belongs to M for
all future and past time.
A set M is said to be a positively invariant set with respect to system (A.1) if
That is, any trajectory of (A.1) starting from M stays in M for all future time.
Theorem A.3 Let D be a compact (closed and bounded) set with the property that
every trajectory of system (A.1) starting from D remains in D for all future time.
Let V : D → R be a continuously differentiable positive definite function such that
V̇ (x) ≤ 0 in D. (A.8)
Let E be the set of all points in D where V̇ (x) = 0 and M be the largest invariant
set in E. Then every trajectory of (A.1) starting from D approaches M as t → ∞.
If V̇ (x) is negative definite, S = {0}. Then, Corollaries A.1 and A.2 coincide with
Theorems A.1 and A.2.
ẋ1 = x2 , (A.11a)
g k
ẋ2 = − sin x1 − x2 . (A.11b)
l m
We respectively apply Corollary A.1 and Theorem A.1 to discuss the stability prop-
erties of this system.
Let us first choose the energy as a Lyapunov function candidate, namely
g 1
V (x) = (1 − cos x1 ) + x22 (A.12)
l 2
which satisfies V (0) = 0 and V (x) > 0 in D − {0} with D = {x ∈ R2 | − π < x1 <
π}. Differentiating V (x) along the trajectories of the system leads to
g
V̇ (x) = ẋ1 sin x1 + x2 ẋ2
l
g g k
= x2 sin x1 + x2 − sin x1 − x2
l l m
k 2
=− x ,
m 2
which implies that V̇ (x) ≤ 0 in D. Let S be the set in D which contains all states
where V̇ (x) = 0 is maintained, i.e., S = {x ∈ D | V̇ (x) = 0}. For the pendulum
system, we infer from V̇ (x) ≡ 0 that
and
x2 (t) ≡ 0 ⇒ ẋ2 (t) ≡ 0 ⇒ sin x1 = 0. (A.14)
The only point on the segment −π < x1 < π rendering sin x1 = 0 is x1 = 0. Hence,
no trajectory of the pendulum system (A.11a), (A.11b) other than the trivial solu-
tion can forever stay in S, i.e., S = {0}. Then, x = 0 is asymptotically stable by
Corollary A.1.
200 A Lyapunov Stability
We can also show the stability property by choosing other Lyapunov function
candidates. One possibility is to replace the term 12 x22 in (A.12) by the quadratic
form 12 x T P x for some positive definite matrix P . That is, we choose
g 1
V (x) = (1 − cos x1 ) + x T P x (A.15)
l 2
as a Lyapunov function candidate, where
p11 p12
P=
p12 p22
Now we can choose p11 , p12 and p22 to ensure V̇ (x) is negative definite. First of
all, we take
k
p22 = 1, p11 = p12 (A.17)
m
to cancel the cross-product terms x2 sin x1 and x1 x2 and arrive at
g k
V̇ (x) = − p12 x1 sin x1 + p12 − p22 x22 . (A.18)
l m
k
0 < p12 < . (A.19)
m
References 201
gk k 2
V̇ (x) = − x1 sin x1 − x . (A.20)
2lm 2m 2
The term x1 sin x1 > 0 for all −π < x1 < π . Defining D = {x ∈ R2 | − π < x1 < π},
we can conclude that, with the chosen P , V (x) is positive definite and V̇ (x) is
negative definite on D. Hence, x = 0 is asymptotically stable by Theorem A.1.
References
1. Khalil HK (2002) Nonlinear Systems. Prentice Hall, New York
Appendix B
Input-to-State Stability (ISS)
ẋ = Ax + Bw (B.1)
and have
x(t) ≤ β(t)x(t0 ) + γ w ∞, (B.3)
where
∞
β(t) = eA(t−t0 ) → 0 and γ = B eA(s−t0 ) ds < ∞.
t0
of systems under different external input signals including L2 and L∞ and provides
elegant results for linear systems.
Using these comparison functions, we can restate the stability definition in Ap-
pendix A in a more precise fashion [1].
where x ∈ Rn and w ∈ Rm are the state and the external input, respectively, and f
is locally Lipschitz in x and w. One has the following definitions [4].
Definition B.4 The system (B.7) is said to be input-to-state stable (ISS) if there
exist a class KL function β and a class K function γ such that, for any x(0) and for
any input w(·) continuous and bounded on [0, ∞), the solution of (B.7) satisfies
x(t) ≤ β x(0), t + γ w ∞ , ∀t ≥ 0. (B.8)
Theorem B.1 For the system (B.7), the following properties are equivalent:
• It is ISS;
• There exists a smooth positive definite function V : Rn → R+ such that for all
x ∈ Rn and w ∈ Rm ,
α1 |x| ≤ V (x) ≤ α2 |x| , (B.10a)
∂V
|x| ≥ γ |w| ⇒ f (x, w) ≤ −α3 |x| , (B.10b)
∂x
where α1 , α2 , and γ are class K∞ functions and α3 is a class K function;
206 B Input-to-State Stability (ISS)
• There exists a smooth positive definite radially unbounded function V and class
K∞ functions α and γ such that the following dissipation inequality is satisfied
∂V
f (x, w) ≤ −α |x| + γ |w| . (B.11)
∂x
Theorem B.2 For the system (B.7), the following properties are equivalent:
• It is iISS;
• There exists a smooth positive definite radially unbounded function V : Rn →
R+ , a class K∞ function γ and a positive definite function α : R+ → R+ such
that the following inequality is satisfied:
∂V
f (x, w) ≤ −α |x| + γ |w| . (B.12)
∂x
ẋ = f (x, z),
ż = g(z, w),
shown in Fig. B.1, is ISS with input w, if the x-subsystem is ISS with z being viewed
as input and the z-subsystem is ISS with input w.
ẋ = f (x, z),
ż = g(z),
• A system is ISS if and only if it is robustly stable, where the system is perturbed
by the uncertainty as shown in Fig. B.2;
• A system is ISS if and only if it is dissipative with the supply function of
s(x, w) = γ (|w|) − α(|x|), i.e., the following dissipation inequality
t2
V x(t2 ) − V x(t1 ) ≤ s x(τ ), w(τ ) dτ
t1
Lemma B.1 (Young’s Inequality [p. 75, 1]) If the constants p > 1 and q > 1 are
such that (p − 1)(q − 1) = 1, then for all > 0 and all (x, y) ∈ R2 we have
p p 1
xy ≤ |x| + q |y|q .
p q
1 2
xy ≤ κx 2 + y .
4κ
208 B Input-to-State Stability (ISS)
Lemma B.2 ([pp. 495, 505, 1]) Let v and ρ be real-valued functions defined on
R+ , and let b and c be positive constants. If they satisfy the differential inequality
then
(a) The following integral inequality holds:
t
v(t) ≤ v(0)e−ct + b e−c(t−τ ) ρ(τ )2 dτ ; (B.14)
0
References
1. Khalil HK (2002) Nonlinear Systems. Prentice Hall, New York
2. Sontag ED (1989) Smooth stabilization implies coprime factorization. IEEE Trans Autom Con-
trol 34:435–443
3. Sontag ED, Wang Y (1996) New characterizations of input-to-state stability. IEEE Trans Autom
Control 41:1283–1294
4. Sontag ED (2008) Input to state stability: basic concepts and results. In: Cachan JM, Gronin-
gen FT, Paris BT (eds) Nonlinear and optimal control theory. Lecture notes in mathematics.
Springer, Berlin, pp 163–220
Appendix C
Backstepping
If V (x) is a CLF for (C.3), then a stabilizing control law α(x), smooth for all
x = 0, is given by Sontag’s formula [1, 4]
⎧ ∂V
⎨ ∂x f + ( ∂V
∂x f ) +( ∂x g)
2 ∂V 4
− ∂x g = 0,
, ∂V
α(x) = ∂V
∂x g (C.5)
⎩
0, ∂V
∂x f = 0,
witch results in
2 4
∂V ∂V
W (x) = f + g > 0, ∀x = 0.
∂x ∂x
We start with the first equation ẋ1 = x12 − x13 + x2 , where x2 is viewed as virtual
control, and proceed to design the feedback control x2 = α(x1 ) to stabilize the origin
x1 = 0. With
x2 = −x12 − x1 (C.7)
we cancel the nonlinear term x12 to obtain
along the solution of (C.8). This implies that V̇ is negative definite. Hence, the origin
of (C.8) is globally exponentially stable.
Indeed, x2 is one of system states and x2 = α(x1 ). To backstep, we define the
error between x2 and the desired value α(x1 ) as follows:
z2 = x2 − α(x1 ) = x2 + x1 + x12 , (C.10)
Taking
1 1
Vc (x) = x12 + z22 (C.12)
2 2
as a composite Lyapunov function, we obtain for the transformed system (C.11a),
(C.11b) that satisfies
V̇c = −x12 − x14 + z2 x1 + (1 + 2x1 ) −x1 − x13 + z2 + u . (C.13)
Choosing
u = −x1 − (1 + 2x1 ) −x1 − x13 + z2 − z2 (C.14)
yields
V̇c = −x12 − x14 − z22 , (C.15)
which is negative definite. Hence, the origin of the transformed system (C.11a),
(C.11b), and hence the original system (C.6a), (C.6b), is globally stable.
Now we consider the system having the following strict feedback form [1, 5]:
where x ∈ Rn and ξ1 , . . . , ξk , which are scalars, are the state variables. Many phys-
ical systems can be represented as a strict feedback system, such as the system
described in Chap. 4.
The whole design process is based on the following assumption:
where x ∈ Rn is the state and u ∈ R is the scalar control input. There exists a con-
tinuously differentiable feedback control law
with W : Rn → R positive definite (or positive semi-definite, in this case one needs
to apply Theorem A.3 to discuss stability).
A special case is when the x-state has dimension 1, i.e., n = 1. Then, by con-
structing a Lyapunov function
1
V (x) = x 2 (C.20)
2
for (C.16a), where ξ1 is regarded as the virtual control, the control law α(x) is
determined to satisfy (C.4), i.e.,
x f (x) + g(x)α(x) ≤ −W (x) (C.21)
More specially, one can choose W (x) = k1 x 2 with k1 > 0 for simplicity to get
− k1 x+f (x)
g(x) , x = 0,
α(x) = (C.23)
0, x = 0,
if g(x) = 0 for all x. Note that the x-subsystem is uncontrollable at the points of
g(x) = 0.
In the following, we assume that Assumption C.1 is satisfied in general.
Since ξ1 is just a state variable and not the control, we define e1 as the deviation
of ξ1 from its desired value α(x):
e1 = ξ1 − α(x) (C.24)
and infer
∂V ∂V ∂V
V̇ = ẋ = f (x) + g(x)α(x) + g(x)e1 ≤ −W (x) + g(x)e1 . (C.25)
∂x ∂x ∂x
Then, the second step begins, and the second Lyapunov function is defined as
1
V1 (x, ξ1 ) = V (x) + e12 . (C.26)
2
Let the desired value of ξ2 be α1 (x, ξ1 ), and introduce the second error
e2 = ξ2 − α1 (x, ξ1 ). (C.27)
Then we have
C.2 Backstepping Design 213
V̇1 = V̇ + e1 ė1
∂V ∂α(x)
≤ −W (x) + g(x)e1 + e1 ξ̇1 − ẋ
∂x ∂x
∂V
≤ −W (x) + e1 g(x) + f1 (x, ξ1 ) + g1 (x, ξ1 )α1 (x, ξ1 )
∂x
∂α
+ g1 (x, ξ1 )e2 − f (x) + g(x)ξ1 . (C.28)
∂x
∂V1
V̇1 ≤ −W1 (x, ξ1 ) + g1 (x, ξ1 )e2 , (C.30)
∂ξ1
1 2
l
Vj (x, ξ1 , . . . , ξj ) = V (x) + ei , j = 1, 2, . . . , k (C.31)
2
i=1
where
Xj −1
Xj = , j = 2, 3, . . . , k − 1, (C.34a)
ξj
Fj −1 (Xj −1 ) + Gj −1 (Xj −1 )ξj
Fj (Xj ) = (C.34b)
fj (Xj −1 , ξj )
0
Gj (Xj ) = , (C.34c)
gj (Xj −1 , ξj )
and
x f (x) + g(x)ξ1 0
X1 = , F1 (X1 ) = , G1 (X1 ) = .
ξ1 f1 (x, ξ1 ) g1 (x, ξ1 )
(C.35)
Under the above control law, the system is globally asymptotically stable, since
with
Wj (x, e1 , . . . , ej ) = Wj −1 + cj ej2 , j = 2, 3, . . . , k, (C.37)
which are positive definite.
Applying the backstepping technique, results for some special forms of systems
are listed as follows.
(a) Integrator Backstepping. Consider the system
and suppose that the first equation satisfies Assumption C.1 with ξ ∈ R as con-
trol. If W (x) is positive definite, then
1 2
Va (x, ξ ) = V (x) + ξ − α(x) (C.39)
2
is a CLF for the whole system, and one of controls rendering the system asymp-
totically stable is given by
∂α ∂V
u = −c ξ − α(x) + f (x) + g(x)ξ − g(x), c > 0. (C.40)
∂x ∂x
C.3 Adaptive Backstepping 215
where the linear subsystem is a minimum phase system of relative degree one
(hb = 0). If the x-subsystem satisfies Assumption C.1 with y ∈ R as control
and W (x) is positive definite, then there exists a feedback control guaranteeing
that the equilibrium x = 0, ξ = 0 is globally asymptotically stable. One of such
controls is
1 ∂α ∂V
u= −c y − α(x) − hAξ + f (x) + g(x)y − g(x) , c > 0.
hb ∂x ∂x
(C.42)
(c) Nonlinear Block Backstepping. Consider the cascade system
where the ξ -subsystem has globally defined and relative degree 1 uniformly
in x and its zero dynamics is input-to-state stable with respect to x and y as
its inputs. If the x-subsystem satisfies Assumption C.1 with y ∈ R as control
and W (x) is positive definite, then there exists a feedback control guaranteeing
that the equilibrium x = 0, ξ = 0 is globally asymptotically stable. One of such
controls is
−1
∂h
u= gξ (x, ξ )
∂ξ
∂h ∂α ∂V
× −c y − α(x) − fξ (x, ξ ) + f (x) + g(x)y − g(x) ,
∂ξ ∂x ∂x
c > 0. (C.44)
z1 = x2 − α1 (x1 , θ ) (C.49)
ẋ1 = z1 − c1 x1 , (C.50a)
∂α1 ∂α1 ∂α1
ż1 = ẋ2 − ẋ1 − θ̇ = u − (z1 − c1 x1 ), (C.50b)
∂x1 ∂θ ∂x1
where θ̇ = 0 is used (θ is assumed to be constant). By defining the Lyapunov func-
tion
1 1
V1 (x1 , z1 ) = x12 + z12 (C.51)
2 2
and differentiating it along the above system, we arrive at
Indeed, θ is unknown, we cannot implement this control law. However, we can apply
the idea of backstepping to handle this issue.
We start with x2 being a virtual control to design an adaptive control law, i.e., we
start with
ẋ1 = v + θ ϕ(x1 ). (C.56)
If θ were known, the control
with
θ̃1 = θ − θ̂1 (C.61)
being the parameter estimation error. We extend the Lyapunov function V0 as
1 1 2
V1 (x1 , θ̃1 ) = x1 + θ̃ , (C.62)
2 2γ 1
where γ > 0. Its derivative becomes
1 ˙
V̇1 = x1 ẋ1 + θ̃1 θ̃1
γ
1 ˙
= −c1 x12 + x1 θ̃1 ϕ(x1 ) + θ̃1 θ̃1
γ
1˙
= −c1 x1 + θ̃1 x1 ϕ(x1 ) + θ̃1 .
2
(C.63)
γ
If we choose
1˙
x1 ϕ(x1 ) + θ̃1 = 0, (C.64)
γ
then we have the seminegative definite property of V1 as
then
ẋ1 = z2 + α1 (x1 , θ̂1 ) + θ ϕ(x1 ) = z2 − c1 x1 + θ̃1 ϕ(x1 ) (C.70)
and
∂α1 ∂α1 ˙
ż2 = ẋ2 − ẋ1 − θ̂1
∂x1 ∂ θ̂1
∂α1 ∂α1
=u− z2 − c1 x1 + θ̃1 ϕ(x1 ) − γ x1 ϕ(x1 ). (C.71)
∂x1 ∂ θ̂1
1 1 2 1 2
V2 (x1 , θ̃1 , z2 ) = x12 + θ̃ + z (C.72)
2 2γ 1 2 2
and infer
1 ˙
V̇2 = x1 ẋ1 + θ̃1 θ̃1 + z2 ż2 = x1 z2 − c1 x12 + x1 θ̃1 ϕ(x1 ) − θ̃1 x1 ϕ(x1 )
γ
∂α1 ∂α1
+ z2 u − z2 − c1 x1 + θ̃1 ϕ(x1 ) − γ x1 ϕ(x1 )
∂x1 ∂ θ̂1
∂α1 ∂α1
= −c1 x12 + z2 u + x1 − z2 − c1 x1 + θ̃1 ϕ(x1 ) − γ x1 ϕ(x1 ) ,
∂x1 ∂ θ̂1
(θ − θ̂2 )θ̂˙2
1
= x1 z2 − c1 x12 + x1 θ̃1 ϕ(x1 ) − θ̃1 x1 ϕ(x1 ) −
γ
∂α1
+ z2 −x1 − c2 z2 − (θ − θ̂2 ) ϕ(x1 )
∂x1
1˙ ∂α1
= −c1 x12 − c2 z22 − (θ − θ̂2 ) θ̂2 + z2 ϕ(x1 ) .
γ ∂x1
By choosing the second update law as
θ̂˙2 = −γ z2
∂α1
ϕ(x1 ), (C.79)
∂x1
we arrive at
V̇3 = −c1 x12 − c2 z22 , (C.80)
which is negative semidefinite. Hence, (C.66), (C.67a), (C.76), and (C.79) construct
the final adaptive controller for the system (C.45a), (C.45b).
220 C Backstepping
References
1. Krstić M, Kanellakopoulos I, Kokotović P (1995) Nonlinear and adaptive control design. Wiley,
New York
2. Kokotovic PV (1992) The joy of feedback: nonlinear and adaptive. In: IEEE control systems
3. Kokotovic Krstic M PV, Kanellakopoulos I (1992) Backstepping to passivity: recursive design
of adaptive systems. In: Proc 31st IEEE conf decision contr. IEEE Press, New Orleans, pp 3276–
3280
4. Sontag ED (1989) A ‘universal’ construction of Artstein’s theorem on nonlinear stabilization.
Syst Control Lett 13:117–123
5. Khalil HK (2002) Nonlinear Systems. Prentice Hall, New York
Appendix D
Model Predictive Control (MPC)
controllers determine the control action based on the predicted future dynamics
of the to be controlled system starting from the current state. The model used to
complete the prediction can be linear or nonlinear, time-continuous or discrete-
time, deterministic or stochastic, etc. We emphasize here the functionality of the
model, namely it is able to predict the future dynamics of the to be controlled
system, while we do not care for the form of the model. Hence, any kind of
models, based on which the system dynamics can be computed, can be used as a
prediction model. Some of them are listed as follows:
– Convolution models, including step response and impulse response models;
– First principle models (state space model);
– Fuzzy models;
– Neural network models;
– Data-based models;
– ...
• Handling of constraints. In practice, most systems have to satisfy time-domain
constraints on inputs and states. For example, an actuator reaches saturation and
some states such as temperature and pressure are not allowed to exceed their lim-
itations for the reason of safe operation, or some variables have to be held under
certain threshold values to meet environmental regulations. Moreover, when mod-
eling chemical processes from mass, momentum and energy conservation laws,
algebraic equations may arise from phase equilibrium calculations and other phe-
nomenological and thermodynamic correlations [6]. These algebraic equations
may also be considered as constraints on the dynamics of the process. It is clear
that time-domain constraints impose limitations on the achievable control perfor-
mance, even if the system to be controlled is linear [7].
In MPC, time-domain constraints can be placed directly in the optimization
problem in their original form, without doing any transformation. Such a direct
and explicit handling of time-domain constraints leads to non-conservative or at
least less conservative solutions. Moreover, because the future response of the
system is predicted, early control action can be taken so as to avoid the violation
of time-domain constraints (e.g., actuator saturation, safety constraints, emission
regulation) while tracking, for example, a given reference trajectory with mini-
D.1 Linear MPC 223
where x ∈ Rnx is the system state, u ∈ Rnu is the control input, d ∈ Rnd is the
measurable disturbance, yc ∈ Rnc is the controlled output, and yb ∈ Rnb is the con-
strained output.
It is well-known that the difference equation (D.1a) can be exactly obtained from
the differential equation
by computing
A = eAc δ , (D.3a)
δ
Bu = eAc τ dτ · Bcu , (D.3b)
0
δ
Bd = eAc τ dτ · Bcd , (D.3c)
0
where
Assume that the state is measurable. If it is not the case, we can use an observer
to estimate the state. Then, at time k, with the measured/estimated state x(k), the
optimization problem of linear MPC is formulated as
Problem D.1
min J x(k), U (k), Nc , Np (D.5)
U (k)
Np
J x(k), U (k), Nc , Np = Γy,i yc (k + i|k) − r(k + i) 2
i=1
c −1
N
+ Γu,i u(k + i|k)2 , (D.7)
i=0
In the above, Np and Nc are prediction and control horizons, respectively, satis-
fying Nc ≤ Np , Γy and Γu are weights given as
and U (k) is the vector form of the incremental control sequences defined as
⎡ ⎤
u(k|k)
⎢ u(k + 1|k) ⎥
⎢ ⎥
U (k) = ⎢ .. ⎥ , (D.9)
⎣ . ⎦
u(k + Nc − 1|k) Nc ×1
⎡ ⎤ ⎡ ⎤
yc (k + 1|k) yb (k + 1|k)
⎢ yc (k + 2|k) ⎥ ⎢ yb (k + 2|k) ⎥
⎢ ⎥ ⎢ ⎥
Yc (k + 1|k) = ⎢ .. ⎥ , Yb (k + 1|k) = ⎢ .. ⎥
⎣ . ⎦ ⎣ . ⎦
yc (k + Np |k) Np ×1
yb (k + Np |k) Np ×1
where
⎡ ⎤ ⎡ ⎤
Cc A Cb A
⎢ Cc A2 + Cc A ⎥ ⎢ Cb A2 + Cb A ⎥
⎢ ⎥ ⎢ ⎥
Sx,c = ⎢ .. ⎥ , S x,b = ⎢ .. ⎥ ,
⎣ . ⎦ ⎣ . ⎦
Np i
Np i
i=1 Cc A Np ×1 i=1 Cb A Np ×1
⎡ ⎤ ⎡ ⎤
Inc ×nc Inb ×nb
⎢ Inc ×nc ⎥ ⎢ Inb ×nb ⎥
⎢ ⎥ ⎢ ⎥
Ic = ⎢ . ⎥ , Ib = ⎢ . ⎥ ,
⎣ . ⎦ . ⎣ . ⎦ .
Inc ×nc N ×1 Inb ×nb N ×1
p p
⎡ ⎤ ⎡ ⎤
Cc Bd Cb Bd
⎢ Cc ABd + Cc Bd ⎥ ⎢ Cb ABd + Cb Bd ⎥
⎢ ⎥ ⎢ ⎥
Sd,c = ⎢ .. ⎥ , Sd,b = ⎢ .. ⎥ ,
⎣ . ⎦ ⎣ . ⎦
Np i−1
Np i−1
i=1 Cc A Bd N
p ×1 i=1 Cb A Bd N
p ×1
⎡ Cc Bu 0 0 ... 0 ⎤
2
⎢ i=1 Cc Ai−1 Bu Cc Bu 0 ... 0 ⎥
⎢ ⎥
⎢ .. .. .. .. .. ⎥
⎢ . . . . . ⎥
⎢ ⎥
Su,c = ⎢ Nc i−1 B Nc −1 C Ai−1 B ⎥ ,
⎢ i=1 c C A u c u ... ... C c Bu ⎥
⎢ i=1 ⎥
⎢ .. .. .. .. .. ⎥
⎣ . . . . . ⎦
Np i−1 Np −1 Np −Nc +1
i=1 Cc A Bu i=1 Cc Ai−1 Bu ... ... i=1 Cc Ai−1 Bu N ×N
p c
D.1 Linear MPC 227
⎡ Cb Bu 0 0 ... 0 ⎤
2
⎢ i=1 Cb Ai−1 Bu Cb Bu 0 ... 0 ⎥
⎢ ⎥
⎢ .. .. .. .. ⎥
⎢ . . . ... . ⎥
⎢ ⎥
Su,b = ⎢ Nc Nc −1 ⎥ .
⎢ i=1 Cb Ai−1 Bu i=1 Cb Ai−1 Bu ... ... Cb Bu ⎥
⎢ ⎥
⎢ .. .. .. .. ⎥
⎣ . . . ... . ⎦
Np i−1 Np −1 Np −Nc +1
i=1 Cb A Bu i=1 Cb Ai−1 Bu ... ... i=1 Cb Ai−1 Bu N ×N
p c
According to the basic of MPC, the optimization problem (Problem D.1) will be
solved at each sampling time, updated with the new measurement. If we can find a
solution of Problem D.1, denoted as U ∗ (k),
the closed-loop control at time k is then defined as
with Yc (k +1|k) given by (D.10a). We can then obtain the solution by calculating the
gradient of the objective function over the independent variable U (k) and setting
it to zero. The result reads
T T −1 T T
U ∗ (k) = Su,c Γy Γy Su,c + ΓuT Γu Su,c Γy Γy Ep (k + 1|k), (D.14)
According to the basic of MPC, we pick up the first element of U ∗ (k) to build the
closed-loop control as follows
It is clear that
228 D Model Predictive Control (MPC)
where
H = Su,c
T
ΓyT Γy Su,c + ΓuT Γu ,
with
⎡ ⎤
Inu ×nu 0 ... 0 0
⎢ 0 Inu ×nu ... 0 0 ⎥
⎢ ⎥
⎢ ⎥
T = ⎢ ... ..
.
..
.
..
.
..
. ⎥ ,
⎢ ⎥
⎣ 0 0 ... Inu ×nu 0 ⎦
0 0 ... 0 Inu ×nu Nc ×Nc
⎡ ⎤
Inu ×nu 0 ... 0 0
⎢ Inu ×nu Inu ×nu ... 0 0 ⎥
⎢ ⎥
⎢ ⎥
L = ⎢ ... ..
.
..
.
..
.
..
. ⎥ ,
⎢ ⎥
⎣ Inu ×nu Inu ×nu . . . Inu ×nu 0 ⎦
Inu ×nu Inu ×nu . . . Inu ×nu Inu ×nu Nc ×Nc
⎡ ⎤ ⎡ ⎤
ymin (k + 1) ymax (k + 1)
⎢ ymin (k + 2) ⎥ ⎢ ymax (k + 2) ⎥
⎢ ⎥ ⎢ ⎥
Ymin (k + 1) = ⎢ .. ⎥ , Ymax (k + 1) = ⎢ .. ⎥ .
⎣ . ⎦ ⎣ . ⎦
ymin (k + Np ) Np ×1
ymax (k + Np ) Np ×1
where x(k) ∈ Rnx is the system state, u(k) ∈ Rnu is the control input, yc (k) ∈ Rnc
is the controlled output, yb (k) ∈ Rnb is the constrained output. The constraints on
input and output are represented as
It is assumed that all states are measurable. If not all states are measurable, an
observer has to be designed to estimate the state. Then at time k, based on the
measured/estimated state x(k), the optimization problem of discrete nonlinear MPC
is formulated as
230 D Model Predictive Control (MPC)
Problem D.2
min J x(k), Uk (D.22)
Uk
c −1
N
+ ū(k + i) − ur (k + i)2 + ū(k + i)2 , (D.24)
R S
i=0
and ȳc (·) and ȳb (·) as predicted controlled and constrained outputs, respectively,
can be calculated through the following dynamic equations:
x̄(i + 1) = f x̄(i), ū(i) , k ≤ i ≤ k + Np , x̄(k) = x(k), (D.25a)
ȳc (i) = gc x̄(i), ū(i) , (D.25b)
ȳb (i) = gb x̄(i), ū(i) . (D.25c)
In the above description, Np and Nc are the prediction and control horizons,
respectively, satisfying Nc ≤ Np ; (r(·), ur (·)) are the references of the controlled
output and corresponding control input; (Q, R, S) are weighting matrices, allowed
to be time varying; ū(·) is the predicted control input, defined as
where ū0 , . . . , ūNc −1 constitute the independent variables of the optimization prob-
lem, denoted as Uk
⎡ ⎤
ū0
⎢ ū1 ⎥
⎢ ⎥
Uk ⎢ . ⎥ . (D.27)
⎣ .. ⎦
ūNc −1
Note that x(k), the system state, is also the initial condition of the prediction
model (D.25a)–(D.25c), which is the key of MPC being a feedback strategy.
D.2 Nonlinear MPC (NMPC) 231
Assume that optimization problem D.2 is feasible at each sampling time and the
solution is
⎡ ∗ ⎤
ū0
⎢ ū∗1 ⎥
⎢ ⎥
Uk∗ ⎢ .. ⎥ , (D.28)
⎣ . ⎦
ū∗Nc −1
then, according to the basics of MPC, the control input is chosen as
Because Uk∗ depends on the values of x(k), (Q, S, R) and (Nc , Np ), u(k) is an
implicit function of these variables. Ignoring the dependence of (Q, S, R) and
(Nc , Np ), denote u(k) as
u(k) = κ x(k) , k ≥ 0. (D.30)
Substituting it into the controlled system (D.20a)–(D.20c), we have the closed sys-
tem
x(k + 1) = f x(k), κ x(k) , k ≥ 0. (D.31)
If not all states are measurable, only need to replace x(k) with x̂(k).
where x(t) ∈ Rnx is the state, u(t) ∈ Rnu is the control input, yc (t) ∈ Rnc is the
controlled output, yb (t) ∈ Rnb is the constrained output. The constraints on input
and output are
At the present time t, based on the measured/estimated state x(t) and ignoring
the constraint on the change rate of the control input, the optimization problem is
formulated as
232 D Model Predictive Control (MPC)
Problem D.3
min J x(t), Ut (D.34)
Ut
subject to
and ȳc (·) and ȳb (·) are the predicted controlled and constrained output, respectively,
calculated through the following dynamic equation:
˙ ) = f x̄(τ ), ū(τ ) , t ≤ τ ≤ t + Tp , x̄(t) = x(t),
x̄(τ (D.37a)
ȳc (τ ) = gc x̄(τ ), ū(τ ) , (D.37b)
ȳb (τ ) = gb x̄(τ ), ū(τ ) . (D.37c)
In the above, Tc and Tp are the prediction and control horizons, satisfying
Tc ≤ Tp ; (r(·), ur (·)) are the references of the controlled output and the correspond-
ing control input; (Q, R) are weighting matrices, allowed to be time varying; ū(·)
is the predicted control input, and for τ ∈ [t, t + Tc ], it is defined as
τ −t
ū(τ ) = ūi , i = int , (D.38)
δ
Note that x(t), the system state, is also the initial condition of the prediction
model (D.37a).
Remark D.1 By Eqs. (D.38) and (D.39), the control input is treated as constant dur-
ing the sampling period, and thus the optimization problem is transferred into a
problem with limited independent variables.
D.2 Nonlinear MPC (NMPC) 233
Remark D.2 The constraint on the change rate of control input can be taken into
consideration through the following two methods. One is to add a penalty item into
the objective function:
t+Tp
J x̂(t), Ut = ȳc (τ ) − r(τ )2 + ū(τ ) − ur (τ )2 dτ
Q R
t
c −1
N
+ ūi − ūi−1 2
S. (D.40)
i=0
This is a somehow “soft” treatment. Another one is to use ūi −δūi−1 to approximate
the change rate of the control input, and add the following constraint into (D.35a)–
(D.35c):
ūi − ūi−1
dumin ≤ ≤ dumax . (D.41)
δ
Assume that the optimization problem D.3 is feasible at each sampling time, and
the solution is denoted as
⎡ ∗ ⎤
ū0
⎢ ū∗1 ⎥
⎢ ⎥
Ut∗ ⎢ .. ⎥ , (D.42)
⎣ . ⎦
ū∗Nc −1
Because Ut∗ depends on the values of x(t), (Q, R) and (Tc , Tp ), u(t) is an implicit
function of these variables, denoted as
u(τ ) = κ x(t) , t ≤ τ ≤ t + δ, (D.44)
where the dependence of (Q, R) and (Tc , Tp ) is ignored for simplicity. By substi-
tuting it into (D.32a)–(D.32c), we have the closed-loop system
ẋ(τ ) = f x(τ ), κ x(t) , t ≤ τ ≤ t + δ, t ≥ 0. (D.45)
References
1. Allgöwer F, Badgwell TA, Qin JS, Rawlings JB, Wright SJ (1999) Nonlinear predictive control
and moving horizon estimation—an introductory overview. In: Frank PM (ed) Advances in
control, highlights of ECC’99. Springer, Berlin, pp 391–449
2. Camacho EF, Bordons C (2004) Model predictive control. Springer, London
3. Maciejowski JM (2002) Predictive control: with constraints. Prentice Hall, New York
4. Mayne DQ, Rawlings JB, Rao CV, Scokaert POM (2000) Constrained model predictive con-
trol: stability and optimality. Automatica 36(6):789–814
5. Chen H (1997) Stability and robustness considerations in nonlinear model predictive control.
Fortschr.-Ber. VDI Reihe 8, vol 674. VDI Verlag, Düsseldorf
6. Kröner A, Holl P, Marquardt W, Gilles ED (1989) DIVA—an open architecture for dynamic
simulation. In: Eckermann R (ed) Computer application in the chemical industry. VCH, Wein-
heim, pp 485–492
7. Mayne DQ (1995) Optimization in model based control. In: Proc IFAC symposium dynamics
and control of chemical reactors, distillation columns and batch processes, Helsingor, pp 229–
242
8. Chen H, Scherer CW (2006) Moving horizon H∞ control with performance adaptation for
constrained linear systems. Automatica 42(6):1033–1040
9. Chen H (2013) Model predictive control. Science Press, Beijing. In Chinese
10. Bemporad A, Morari M, Dua V, Pistikopoulos EN (2002) The explicit linear quadratic regu-
lator for constrained systems. Automatica 38(1):3–20
11. Chen H, Allgöwer F (1998) A quasi-infinite horizon nonlinear model predictive control
scheme with guaranteed stability. Automatica 34(10):1205–1217
References 235
12. Chen H, Gao X-Q, Wang H (2006) An improved moving horizon H∞ control scheme through
Lagrange duality. Int J Control 79(3):239–248
13. Chisci L, Rossiter JA, Zappa G (2001) Systems with persistent disturbances: predictive control
with restricted constraints. Automatica 37(7):1019–1028
14. Grimm G, Messina MJ, Tuna SE, Teel AR (2007) Nominally robust model predictive control
with state constraints. IEEE Trans Autom Control 52(5):1856–1870
15. Grimm G, Messina MJ, Tuna SE, Teel AR (2004) Examples when nonlinear model predictive
control is nonrobust. Automatica 40:1729–1738
16. Lazar M, Muñoz de la Peña D, Heemels W, Alamo T (2008) On input-to-state stabilizing of
min–max nonlinear model predictive control. Syst Control Lett 57(1):39–48
17. Limón D, Álamo T, Salas F, Camacho EF (2006) Input to state stability of min–max MPC
controllers for nonlinear systems with bounded uncertainties. Automatica 42(5):797–803
18. Mayne DQ, Kerrigan EC, van Wyk EJ, Falugi P (2011) Tube-based robust nonlinear model
predictive control. Int J Robust Nonlinear Control 21(11):1341–1353
19. Mayne DQ, Seron MM, Rakovic SV (2005) Robust model predictive control of constrained
linear systems with bounded disturbances. Automatica 41(2):219–224
Appendix E
Linear Matrix Inequality (LMI)
Many problems arising from control, identification and signal processing can be
transformed into a few standard convex or quasi-convex (optimization or feasibility)
problems involving linear matrix inequalities (LMIs) [1, 2] which can be solved
efficiently in a numerical sense by the use of interior-point methods [3]. In this
book, for example, we formulate the calculation of the observer gain in Chap. 2
and the solution of the feedback gain in Chap. 4 as convex optimization problems
involving LMIs.
E.1 Convexity
Geometrically, a set D is convex if the line segment between any two points in
D lies in D.
Definition E.2 (Convex hull) The convex hull of a set D, denoted as Co{D}, is the
intersection of all convex sets containing D. If D consists of a finite number of
elements, then these elements are referred to as the vertices of Co{D}.
The convex hull of a finite point set forms a polytope and any polytope is the
convex hull of a finite point set.
Geometrically, (E.2) implies that the line segment between (x1 , f (x1 )) and
(x2 , f (x2 )), i.e., the chord from x1 to x2 , lies above the curve of f .
Moreover, a function f : D → R is called affine if (E.2) holds with equality.
where
• x = (x1 , . . . , xm ) is the decision variable;
• F0 , . . . , Fm are given real symmetric matrices, and
• The inequality F (x) > 0 means that uT F (x)u > 0 for all u ∈ Rn , u = 0.
While (E.5) is a strict LMI, we may also encounter non-strict LMIs which have the
form of
F (x) ≥ 0.
The linear matrix inequality (E.5) defines a convex constraint on x. That is, the
set F := {x|F (x) > 0} is convex. Hence, optimization problems involving the min-
imization (or maximization) of a performance function f : F → R belong to the
class of convex optimization problems, if the performance function f renders (E.2)
satisfied for all x1 , x2 ∈ F and α ∈ (0, 1). The full power of convex optimization
theory can then be employed [4].
E.3 Casting Problems in an LMIs Setting 239
min f (x)
x∈D
min λ
s.t. λF (x) − G(x) > 0,
F (x) > 0,
H (x) > 0.
Some control problems that can be easily casted in an LMI setting are given as
follows:
ẋ = Ax (E.6)
A ∈ Co{A1 , A2 , . . . , Ar },
ẋ = A0 x + Bp p, (E.9a)
q = Cq x + Dqp p, (E.9b)
pi = δi (t)qi , δi (t) ≤ 1, i = 1, 2, . . . , nq , (E.9c)
Decay Rate The decay rate of a system is defined to be the largest α such that
lim eαt x(t) = 0 (E.11)
t→∞
holds for all trajectories. In order to estimate the decay rate of system (E.6), we
define V (x) = x T P x and require
dV (x)
≤ −2αV (x) (E.12)
dt
1
for all trajectories, which then leads to x(t) ≤ e−αt | λλmax (P ) 2
min (P )
| x(0) . By explor-
ing (E.12) for system (E.6), the decay rate problem can be casted in the following
LMI optimization problem
min α (E.13a)
α,P
Similarly, we can formulate the problem in an LMI setting for uncertain systems
with polytopic description as follows:
min α (E.14a)
α,P
min α (E.15a)
α,P ,Λ
The following lemmas are useful for casting control, identification and signal
processing problems in LMIs.
Lemma E.1 (Schur Complement) For Q(x), R(x), S(x) depending affinely on x
and Q(x), R(x) being symmetric, the LMI
Q(v) S(x)
>0
S(v)T R(x)
is equivalent to
or to
R(v) > 0, Q(x) − S(v)R(v)−1 S(v)T > 0.
p
F0 − λi Fi > 0.
i=1
For the case p = 1, the converse holds, provided that there is some ξ0 such that
ξ0T F1 ξ0 > 0.
References
1. Boyd S, El Ghaoui L, Feron E, Balakishnan V (1994) Linear matrix inequalities in system and
control theory. SIAM, Philadelphia
2. Scherer CW, Weiland S (2000) Linear matrix inequalities in control. In: Delft center for systems
and control. DISC lecture note, Dutch institute of systems and control
3. Nesterov Y, Nemirovsky A (1994) Interior point polynomial methods in convex programming.
SIAM, Philadelphia
4. Boyd SP, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cam-
bridge
Appendix F
Subspace Linear Predictor
For a linear time invariant (LTI) system in question, assume that it can be described
in a state-space form as defined by the equations below:
⎡ ⎤
uk+δ
⎢
⎢ uk+δ+1 ⎥
⎥
xk+M+δ = AM xk+δ + AM−1 B AM−2 B ... B ⎢ .. ⎥
⎣ . ⎦
uk+M+δ−1
⎡ ⎤
ek+δ
⎢ k+δ+1 ⎥
⎢ e ⎥
+ AM−1 K AM−2 K ... K ⎢ .. ⎥. (F.4)
⎣ . ⎦
ek+M+δ−1
Next we look at the output equation (F.2) and recursively develop an output matrix equation. From (F.2), for t = k + i with i = 1, 2, \ldots, M - 1, we have

y_{k+i} = C A^i x_k + \sum_{j=0}^{i-1} C A^{i-j-1} \left( B u_{k+j} + K e_{k+j} \right) + D u_{k+i} + e_{k+i}.

Compiling the above results for the output equations into a single matrix equation gives us

\begin{bmatrix} y_k \\ y_{k+1} \\ y_{k+2} \\ \vdots \\ y_{k+M-1} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{M-1} \end{bmatrix} x_k
+
\begin{bmatrix}
D & 0 & 0 & \cdots & 0 \\
CB & D & 0 & \cdots & 0 \\
CAB & CB & D & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}B & CA^{M-3}B & CA^{M-4}B & \cdots & D
\end{bmatrix}
\begin{bmatrix} u_k \\ u_{k+1} \\ u_{k+2} \\ \vdots \\ u_{k+M-1} \end{bmatrix}
+
\begin{bmatrix}
I & 0 & 0 & \cdots & 0 \\
CK & I & 0 & \cdots & 0 \\
CAK & CK & I & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}K & CA^{M-3}K & CA^{M-4}K & \cdots & I
\end{bmatrix}
\begin{bmatrix} e_k \\ e_{k+1} \\ e_{k+2} \\ \vdots \\ e_{k+M-1} \end{bmatrix}.   (F.7a)
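Equation (F.7a) is straightforward to verify numerically on a small example. The sketch below assumes the innovations form x_{k+1} = A x_k + B u_k + K e_k, y_k = C x_k + D u_k + e_k, with arbitrarily chosen system matrices (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])
K = np.array([[0.3], [0.1]])
M = 4
x0 = np.array([1.0, -1.0])

# Simulate the innovations form for M steps.
u = rng.standard_normal((M, 1))
e = rng.standard_normal((M, 1))
x, ys = x0, []
for k in range(M):
    ys.append(C @ x + D @ u[k] + e[k])
    x = A @ x + B @ u[k] + K @ e[k]
y_stack = np.concatenate(ys)

# Extended observability matrix and the two lower-triangular Toeplitz blocks.
Gamma = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(M)])
def toeplitz_block(G, diag):
    H = np.zeros((M, M))
    for i in range(M):
        H[i, i] = diag
        for j in range(i):
            H[i, j] = (C @ np.linalg.matrix_power(A, i - j - 1) @ G).item()
    return H
Hd = toeplitz_block(B, D.item())   # deterministic block of (F.7a)
Hs = toeplitz_block(K, 1.0)        # stochastic block (identity on the diagonal)

pred = Gamma @ x0 + Hd @ u.ravel() + Hs @ e.ravel()
assert np.allclose(y_stack.ravel(), pred)
```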
Shifting the time index of all variables in (F.7a) by an arbitrary discrete time δ results in a similar matrix equation:

\begin{bmatrix} y_{k+\delta} \\ y_{k+\delta+1} \\ y_{k+\delta+2} \\ \vdots \\ y_{k+M+\delta-1} \end{bmatrix}   (F.8)
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{M-1} \end{bmatrix} x_{k+\delta}
+
\begin{bmatrix}
D & 0 & 0 & \cdots & 0 \\
CB & D & 0 & \cdots & 0 \\
CAB & CB & D & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}B & CA^{M-3}B & CA^{M-4}B & \cdots & D
\end{bmatrix}
\begin{bmatrix} u_{k+\delta} \\ u_{k+\delta+1} \\ \vdots \\ u_{k+M+\delta-1} \end{bmatrix}
+
\begin{bmatrix}
I & 0 & 0 & \cdots & 0 \\
CK & I & 0 & \cdots & 0 \\
CAK & CK & I & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}K & CA^{M-3}K & CA^{M-4}K & \cdots & I
\end{bmatrix}
\begin{bmatrix} e_{k+\delta} \\ e_{k+\delta+1} \\ \vdots \\ e_{k+M+\delta-1} \end{bmatrix}.   (F.9)
Compiling (F.9) column by column for δ = 0, 1, \ldots, N − M + 1 into a single matrix equation yields

\begin{bmatrix}
y_k & y_{k+1} & \cdots & y_{k+N-M+1} \\
y_{k+1} & y_{k+2} & \cdots & y_{k+N-M+2} \\
y_{k+2} & y_{k+3} & \cdots & y_{k+N-M+3} \\
\vdots & \vdots & \ddots & \vdots \\
y_{k+M-1} & y_{k+M} & \cdots & y_{k+N}
\end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{M-1} \end{bmatrix}
\begin{bmatrix} x_k & x_{k+1} & \cdots & x_{k+N-M+1} \end{bmatrix}
+
\begin{bmatrix}
D & 0 & 0 & \cdots & 0 \\
CB & D & 0 & \cdots & 0 \\
CAB & CB & D & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}B & CA^{M-3}B & CA^{M-4}B & \cdots & D
\end{bmatrix}
\begin{bmatrix}
u_k & u_{k+1} & \cdots & u_{k+N-M+1} \\
u_{k+1} & u_{k+2} & \cdots & u_{k+N-M+2} \\
u_{k+2} & u_{k+3} & \cdots & u_{k+N-M+3} \\
\vdots & \vdots & \ddots & \vdots \\
u_{k+M-1} & u_{k+M} & \cdots & u_{k+N}
\end{bmatrix}
+
\begin{bmatrix}
I & 0 & 0 & \cdots & 0 \\
CK & I & 0 & \cdots & 0 \\
CAK & CK & I & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
CA^{M-2}K & CA^{M-3}K & CA^{M-4}K & \cdots & I
\end{bmatrix}
\begin{bmatrix}
e_k & e_{k+1} & \cdots & e_{k+N-M+1} \\
e_{k+1} & e_{k+2} & \cdots & e_{k+N-M+2} \\
e_{k+2} & e_{k+3} & \cdots & e_{k+N-M+3} \\
\vdots & \vdots & \ddots & \vdots \\
e_{k+M-1} & e_{k+M} & \cdots & e_{k+N}
\end{bmatrix}.   (F.10)
From the derivation of Eqs. (F.5) and (F.10), we can write the subspace I/O matrix equations used in the field of subspace system identification [1] as follows:

Y_p = \Gamma_M X_p + H_M^d U_p + H_M^s E_p,   (F.11)

Y_f = \Gamma_M X_f + H_M^d U_f + H_M^s E_f,   (F.12)

X_f = A^M X_p + \Delta_M^d U_p + \Delta_M^s E_p,   (F.13)

where the subscripts p and f denote the 'past' and 'future' matrices of the respective variables, and the superscripts d and s stand for the deterministic and stochastic parts of the system, respectively. Open-loop data u_k and y_k, k ∈ {1, 2, \ldots, N}, are available for identification. Therefore, following the definitions in (F.11)–(F.13), the past and future data matrices are constructed as follows:
Y_p = \begin{bmatrix}
y_1 & y_2 & \cdots & y_{N-2M+1} \\
y_2 & y_3 & \cdots & y_{N-2M+2} \\
\vdots & \vdots & \ddots & \vdots \\
y_M & y_{M+1} & \cdots & y_{N-M}
\end{bmatrix}, \quad
Y_f = \begin{bmatrix}
y_{M+1} & y_{M+2} & \cdots & y_{N-M+1} \\
y_{M+2} & y_{M+3} & \cdots & y_{N-M+2} \\
\vdots & \vdots & \ddots & \vdots \\
y_{2M} & y_{2M+1} & \cdots & y_N
\end{bmatrix},   (F.14)

U_p = \begin{bmatrix}
u_1 & u_2 & \cdots & u_{N-2M+1} \\
u_2 & u_3 & \cdots & u_{N-2M+2} \\
\vdots & \vdots & \ddots & \vdots \\
u_M & u_{M+1} & \cdots & u_{N-M}
\end{bmatrix}, \quad
U_f = \begin{bmatrix}
u_{M+1} & u_{M+2} & \cdots & u_{N-M+1} \\
u_{M+2} & u_{M+3} & \cdots & u_{N-M+2} \\
\vdots & \vdots & \ddots & \vdots \\
u_{2M} & u_{2M+1} & \cdots & u_N
\end{bmatrix},   (F.15)

E_p = \begin{bmatrix}
e_1 & e_2 & \cdots & e_{N-2M+1} \\
e_2 & e_3 & \cdots & e_{N-2M+2} \\
\vdots & \vdots & \ddots & \vdots \\
e_M & e_{M+1} & \cdots & e_{N-M}
\end{bmatrix}, \quad
E_f = \begin{bmatrix}
e_{M+1} & e_{M+2} & \cdots & e_{N-M+1} \\
e_{M+2} & e_{M+3} & \cdots & e_{N-M+2} \\
\vdots & \vdots & \ddots & \vdots \\
e_{2M} & e_{2M+1} & \cdots & e_N
\end{bmatrix}.   (F.16)
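Building these Hankel matrices from recorded data is mechanical; the helper below (our own naming, scalar signals for brevity) realizes the past/future split of (F.14) for a signal z_1, …, z_N:

```python
import numpy as np

def hankel_past_future(z, M, N):
    """Past and future Hankel matrices of a scalar signal z_1..z_N
    (stored 0-based, so z[0] = z_1), each M rows by N-2M+1 columns."""
    z = np.asarray(z)
    cols = N - 2 * M + 1
    Zp = np.array([z[i : i + cols] for i in range(M)])          # rows start z_1..z_M
    Zf = np.array([z[M + i : M + i + cols] for i in range(M)])  # rows start z_{M+1}..z_{2M}
    return Zp, Zf

y = np.arange(1, 11)                 # y_1 .. y_10, i.e. N = 10
Yp, Yf = hankel_past_future(y, M=2, N=10)
print(Yp[0])                         # first past row: y_1 .. y_7
print(Yf[-1])                        # last future row: y_4 .. y_10
```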
Furthermore, the past and future state matrices are defined by

X_p = \begin{bmatrix} x_1 & x_2 & \cdots & x_{N-2M+1} \end{bmatrix},   (F.22)

X_f = \begin{bmatrix} x_{M+1} & x_{M+2} & \cdots & x_{N-M+1} \end{bmatrix}.   (F.23)

Solving (F.11) for the past states gives

X_p = \Gamma_M^{\dagger} \left( Y_p - H_M^d U_p - H_M^s E_p \right),   (F.24)

where the superscript "†" denotes the Moore–Penrose pseudoinverse of a matrix [2]. Substituting Eq. (F.24) into (F.13) then gives

X_f = A^M \Gamma_M^{\dagger} \left( Y_p - H_M^d U_p - H_M^s E_p \right) + \Delta_M^d U_p + \Delta_M^s E_p
    = A^M \Gamma_M^{\dagger} Y_p + \left( \Delta_M^d - A^M \Gamma_M^{\dagger} H_M^d \right) U_p + \left( \Delta_M^s - A^M \Gamma_M^{\dagger} H_M^s \right) E_p.   (F.25)
Therefore, substituting Eq. (F.25) into (F.12) results in an equation for the future output as given below:

Y_f = \Gamma_M \left[ A^M \Gamma_M^{\dagger} Y_p + \left( \Delta_M^d - A^M \Gamma_M^{\dagger} H_M^d \right) U_p + \left( \Delta_M^s - A^M \Gamma_M^{\dagger} H_M^s \right) E_p \right] + H_M^d U_f + H_M^s E_f
    = \Gamma_M A^M \Gamma_M^{\dagger} Y_p + \Gamma_M \left( \Delta_M^d - A^M \Gamma_M^{\dagger} H_M^d \right) U_p + H_M^d U_f + \Gamma_M \left( \Delta_M^s - A^M \Gamma_M^{\dagger} H_M^s \right) E_p + H_M^s E_f.   (F.26)
Since E_f is stationary white noise, and by virtue of the stability of the Kalman filter, for a sufficiently large set of measurements Eq. (F.26) can be written to give an optimal prediction of Y_f as follows:

\hat{Y}_f = \begin{bmatrix} L_w & L_u \end{bmatrix} \begin{bmatrix} W_p \\ U_f \end{bmatrix} = L_w W_p + L_u U_f,   (F.27)

where

W_p = \begin{bmatrix} U_p \\ Y_p \end{bmatrix}.   (F.28)

Equation (F.27) is thus known as the subspace linear predictor equation, with L_w being the subspace matrix that corresponds to the past input and output data matrix W_p, and L_u the subspace matrix that corresponds to the future input data matrix U_f.
In order to calculate the subspace linear predictor coefficients L_w and L_u from the Hankel data matrices U_p, Y_p, Y_f and U_f, we solve the following least-squares problem, which gives us the prediction equation for Y_f:

\min_{L_w, L_u} \left\| Y_f - \begin{bmatrix} L_w & L_u \end{bmatrix} \begin{bmatrix} W_p \\ U_f \end{bmatrix} \right\|^2,   (F.29)

where

\begin{bmatrix} L_w & L_u \end{bmatrix} = Y_f \begin{bmatrix} W_p \\ U_f \end{bmatrix}^{\dagger}
= Y_f \begin{bmatrix} W_p^T & U_f^T \end{bmatrix} \left( \begin{bmatrix} W_p \\ U_f \end{bmatrix} \begin{bmatrix} W_p^T & U_f^T \end{bmatrix} \right)^{-1}.   (F.30)
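Equation (F.30) is an ordinary least-squares solution; when the stacked data matrix has full row rank, the explicit inverse in (F.30) coincides with the Moore–Penrose pseudoinverse, which numpy computes directly. A sketch with random data standing in for the Hankel matrices (all dimensions and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, cols = 3, 40
Wp = rng.standard_normal((2 * M, cols))    # stacked past inputs and outputs
Uf = rng.standard_normal((M, cols))        # future inputs
L_true = rng.standard_normal((M, 3 * M))   # hypothetical true [Lw Lu]
Z = np.vstack([Wp, Uf])
Yf = L_true @ Z                            # noise-free future outputs

L = Yf @ np.linalg.pinv(Z)                 # [Lw Lu] as in (F.30)
Lw, Lu = L[:, :2 * M], L[:, 2 * M:]

# Explicit form of (F.30) agrees with the pseudoinverse for full-row-rank Z.
L_explicit = Yf @ Z.T @ np.linalg.inv(Z @ Z.T)
assert np.allclose(L, L_explicit)
assert np.allclose(Lw @ Wp + Lu @ Uf, Yf)
```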
In the control implementation, only the leftmost column of each matrix is used for the prediction of future output values. Therefore, after the subspace linear predictor coefficients L_w and L_u are found from the identification data, we can streamline Eq. (F.27) by taking only the leftmost columns of the matrices \hat{Y}_f, Y_p, U_p and U_f, defining

\hat{y}_f = \begin{bmatrix} y_{t+1} \\ y_{t+2} \\ \vdots \\ y_{t+M} \end{bmatrix}, \quad
y_p = \begin{bmatrix} y_{t-M+1} \\ y_{t-M+2} \\ \vdots \\ y_t \end{bmatrix}, \quad
u_p = \begin{bmatrix} u_{t-M+1} \\ u_{t-M+2} \\ \vdots \\ u_t \end{bmatrix}, \quad
u_f = \begin{bmatrix} u_{t+1} \\ u_{t+2} \\ \vdots \\ u_{t+M} \end{bmatrix},   (F.31)

and

w_p = \begin{bmatrix} u_p \\ y_p \end{bmatrix};   (F.32)

we then arrive at a streamlined subspace-based linear predictor equation, namely

\hat{y}_f = L_w w_p + L_u u_f.   (F.33)
According to Eq. (F.33), we can predict the output of the system from the past input and output data as well as the future input data to be applied. This result will be utilized in the implementation of the model predictive control algorithm, that is, the data-driven predictive control algorithm.
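The whole construction can be exercised end to end on a simulated first-order system (parameters and sizes are our own illustrative choices; the data are noise-free, so the identified predictor reproduces the true outputs almost exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = 0.8, 1.0, 1.0, 0.0        # scalar system x_{k+1}=ax_k+bu_k, y_k=cx_k+du_k
N, M = 300, 5

# Simulate the output for a random input (0-based arrays).
u = rng.standard_normal(N)
y = np.zeros(N)
x = 0.0
for k in range(N):
    y[k] = c * x + d * u[k]
    x = a * x + b * u[k]

# Hankel data matrices as in (F.14)-(F.15): M rows, N-2M+1 columns.
cols = N - 2 * M + 1
def hankel(z, start):
    return np.array([z[start + i : start + i + cols] for i in range(M)])
Up, Uf = hankel(u, 0), hankel(u, M)
Yp, Yf = hankel(y, 0), hankel(y, M)
Wp = np.vstack([Up, Yp])

# Predictor coefficients via (F.30); rcond guards against rank-deficient data.
L = Yf @ np.linalg.pinv(np.vstack([Wp, Uf]), rcond=1e-10)
Lw, Lu = L[:, :2 * M], L[:, 2 * M:]

# Streamlined prediction (F.33) at time t from the most recent M samples.
t = N - M - 1                          # 0-based index of 'now'
wp = np.concatenate([u[t - M + 1 : t + 1], y[t - M + 1 : t + 1]])
uf = u[t + 1 : t + M + 1]
y_hat = Lw @ wp + Lu @ uf
assert np.allclose(y_hat, y[t + 1 : t + M + 1], atol=1e-4)
```

With measurement noise present, the prediction would be optimal only in the least-squares sense of (F.29) rather than exact, which is the setting assumed in the text.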
References
1. Van Overschee P, De Moor B (1996) Subspace identification for linear systems: theory, implementation, applications. Kluwer Academic, Norwell
2. Van Overschee P, De Moor B (1995) A unifying theorem for three subspace system identification algorithms. Automatica 31(12):1853–1864