LMI Control Toolbox
Pascal Gahinet
Arkadi Nemirovski
Alan J. Laub
Mahmoud Chilali
User’s Guide
Version 1
How to Contact The MathWorks:
www.mathworks.com Web
comp.soft-sys.matlab Newsgroup
508-647-7000 Phone
508-647-7001 Fax
For contact information about worldwide offices, see the MathWorks Web site.
Preface
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Introduction
1
Linear Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Uncertain Dynamical Systems
2
Model Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-12
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35
Robustness Analysis
3
Quadratic Lyapunov Functions . . . . . . . . . . . . . . . . . . . . . . . . 3-3
LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Quadratic Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
Maximizing the Quadratic Stability Region . . . . . . . . . . . . . . . . 3-8
Decay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
Quadratic H∞ Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
µ Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Structured Singular Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-17
Robust Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
Robust Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-21
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-28
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-32
State-Feedback Synthesis
4
Multi-Objective State-Feedback . . . . . . . . . . . . . . . . . . . . . . . . 4-3
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-18
Synthesis of H∞ Controllers
5
H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Riccati- and LMI-Based Approaches . . . . . . . . . . . . . . . . . . . . . . 5-7
H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
Validation of the Closed-Loop System . . . . . . . . . . . . . . . . . . . 5-13
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
Loop Shaping
6
The Loop-Shaping Methodology . . . . . . . . . . . . . . . . . . . . . . . . 6-2
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
Well-Posedness Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-40
Semi-Definite B(x) in gevp Problems . . . . . . . . . . . . . . . . . . . . 8-41
Efficiency and Complexity Issues . . . . . . . . . . . . . . . . . . . . . . . 8-41
Solving M + P^T XQ + Q^T X^T P < 0 . . . . . . . . . . . . . . . . . . . . 8-42
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-44
Command Reference
9
List of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-3
Preface
Acknowledgments
The authors wish to express their gratitude to all colleagues who directly or
indirectly contributed to the making of the LMI Control Toolbox. Special
thanks to Pierre Apkarian, Gregory Becker, Hiroyuki Kajiwara, and Anca
Ignat for their help and contribution. Many thanks also to those who tested and
helped refine the software, including Bobby Bodenheimer, Markus
Brandstetter, Eric Feron, K.C. Goh, Anders Helmersson, Ted Iwasaki, Jianbo
Lu, Roy Lurie, Jason Ly, John Morris, Ravi Prasanth, Michael Safonov,
Carsten Scherer, Andy Sparks, Mario Rotea, Matthew Lamont Tyler, Jim
Tung, and John Wen. Apologies, finally, to those we may have omitted.
The work of Pascal Gahinet was supported in part by INRIA.
1
Introduction
Linear Matrix Inequalities . . . . . . . . . . . . . 1-2
References . . . . . . . . . . . . . . . . . . . . . 1-10
See [9] for a good introduction to LMI concepts. The LMI Control Toolbox is designed as an easy and progressive gateway to the new and fast-growing field of LMIs.
Toolbox Features
The LMI Control Toolbox serves two purposes:
• Provide state-of-the-art tools for the LMI-based analysis and design of robust
control systems
• Offer a flexible and user-friendly environment to specify and solve general
LMI problems (the LMI Lab)
The control design tools can be used without a priori knowledge about LMIs or LMI solvers. These are dedicated tools covering several applications of LMI techniques.
For users interested in developing their own applications, the LMI Lab
provides a general-purpose and fully programmable environment to specify
and solve virtually any LMI problem. Note that the scope of this facility is by
no means restricted to control-oriented applications.
A(x) := A0 + x1 A1 + . . . + xN AN < 0
(1-1)
where x = (x1, . . . , xN) is a vector of scalar decision variables and A0, A1, . . . , AN are given symmetric matrices.
Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of (1-1) since
they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0, respectively.
The LMI (1-1) is a convex constraint on x since A(y) < 0 and A(z) < 0 imply that A((y + z)/2) < 0. As a result, the system of LMI constraints

A1(x) < 0,  . . . ,  AK(x) < 0

is equivalent to the single LMI

A(x) := diag(A1(x), . . . , AK(x)) < 0
LMIs and LMI Problems
In most control applications, LMIs do not naturally arise in the canonical form
(1-1), but rather in the form
L(X1, . . . , Xn) < R(X1, . . . , Xn)
where L(.) and R(.) are affine functions of some structured matrix variables X1,
. . . , Xn. A simple example is the Lyapunov inequality
A^T X + XA < 0
(1-2)

where X = X^T is the matrix variable.
LMI solvers address three generic problems:

• Feasibility: find x satisfying the LMI constraint

A(x) < 0
(1-3)

• Linear objective minimization:

Minimize c^T x subject to A(x) < 0
(1-4)

• Generalized eigenvalue minimization:

Minimize λ subject to  A(x) < λB(x),  B(x) > 0,  C(x) < 0
(1-5)

The problem (1-5) is quasi-convex and can be solved by similar techniques. It owes its name to the fact that λ is related to the largest generalized eigenvalue of the pencil (A(x), B(x)).
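As a concrete illustration, a feasibility problem such as the Lyapunov inequality (1-2) can be specified and solved with the LMI Lab. The sketch below is illustrative only: the matrix A is hypothetical, and the exact calling sequences of setlmis, lmivar, lmiterm, and feasp are detailed in the LMI Lab and Command Reference chapters.

```matlab
% Find X = X' such that A'X + XA < 0 and X > I (illustrative data)
A = [-1 2; 0 -2];              % hypothetical stable matrix
setlmis([])                    % start the description of a new LMI system
X = lmivar(1,[2 1]);           % declare X as a 2x2 symmetric matrix variable
lmiterm([1 1 1 X],A',1,'s')    % LMI #1: A'*X + X*A < 0  ('s' symmetrizes)
lmiterm([-2 1 1 X],1,1)        % LMI #2, right-hand side: X
lmiterm([2 1 1 0],1)           % LMI #2, left-hand side: identity (I < X)
lmis = getlmis;                % internal representation of the LMI system
[tmin,xfeas] = feasp(lmis);    % feasible when tmin < 0
Xval = dec2mat(lmis,xfeas,X)   % matrix value of X at the feasible point
```

The solvers feasp, mincx, and gevp address the feasibility, linear objective minimization, and generalized eigenvalue problems, respectively.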
Many control problems and design specifications have LMI formulations [9].
This is especially true for Lyapunov-based analysis and design, but also for
• Robust stability of systems with LTI uncertainty (µ-analysis) [24, 21, 27]
• Robust stability in the face of sector-bounded nonlinearities (Popov criterion)
[22, 28, 13, 16]
• Quadratic stability of differential inclusions [15, 8]
• Lyapunov stability of parameter-dependent systems [12]
• Input/state/output properties of LTI systems (invariant ellipsoids, decay
rate, etc.) [9]
• Multi-model/multi-objective state feedback design [4, 17, 3, 9, 10]
• Robust pole placement
• Optimal LQG control [9]
• Robust H∞ control [11, 14]
• Multi-objective H∞ synthesis [17, 23, 10, 18]
• Design of robust gain-scheduled controllers [5, 2]
• Control of stochastic systems [9]
• Weighted interpolation problems [9]
To hint at the principles underlying LMI design, let’s review the LMI
formulations of a few typical design objectives.
Consider for instance the linear differential inclusion (LDI)

ẋ = A(t)x

where A(t) varies in the polytope

A(t) ∈ Co{A1, . . . , An} = { a1A1 + . . . + anAn : ai ≥ 0, a1 + . . . + an = 1 }

A sufficient condition for the asymptotic stability of this LDI is the feasibility of:

Find P = P^T such that  Ai^T P + P Ai < 0 for i = 1, . . . , n,  and  P > I.
Similarly, the RMS gain of the stable LTI system

ẋ = Ax + Bu
y = Cx + Du

is the largest input/output gain over all bounded inputs u(t). This gain is the global minimum of the following linear objective minimization problem [1, 26, 25]:

Minimize γ over X = X^T and γ such that

[ A^T X + XA    XB     C^T ]
[ B^T X        −γI     D^T ]  < 0
[ C             D      −γI ]

X > 0
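For a nominal LTI system stored in SYSTEM format, this worst-case RMS gain (the H∞ norm) is computed by norminf, which appears again in Chapter 3. A minimal sketch with illustrative data:

```matlab
sys = ltisys(-1,1,1,0)    % illustrative system: dx/dt = -x + u,  y = x
gain = norminf(sys)       % RMS (H-infinity) gain of sys
```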
Finally, consider the stable LTI system

G:  ẋ = Ax + Bw
    y = Cx

Its H2 norm is defined by
||G||2^2 := lim_{T→∞} E{ (1/T) ∫_0^T y(t)^T y(t) dt }
          = (1/2π) ∫_{−∞}^{+∞} Trace( G(jω)^H G(jω) ) dω
Hence ||G||2^2 is the global minimum of the LMI problem
Minimize Trace (Q) over the symmetric matrices P,Q such that
AP + PA^T + BB^T < 0

[ Q      CP ;  PC^T   P ]  > 0
Again this is a linear objective minimization problem since the objective Trace
(Q) is linear in the decision variables (free entries of P,Q).
Further Mathematical Background

Minimize c^T x subject to x^T x ≥ 0.
(1-6)
References
[1] Anderson, B.D.O, and S. Vongpanitlerd, Network Analysis, Prentice-Hall,
Englewood Cliffs, 1973.
[2] Apkarian, P., P. Gahinet, and G. Becker, “Self-Scheduled H∞ Control of
Linear Parameter-Varying Systems,” Proc. Amer. Contr. Conf., 1994, pp.
856-860.
[3] Bambang, R., E. Shimemura, and K. Uchida, “Mixed H2/H∞ Control with Pole Placement: State-Feedback Case,” Proc. Amer. Contr. Conf., 1993, pp. 2777-2779.
[4] Barmish, B.R., “Stabilization of Uncertain Systems via Linear Control,” IEEE Trans. Aut. Contr., AC-28 (1983), pp. 848-850.
[5] Becker, G., and A. Packard, “Robust Performance of Linear-Parametrically Varying Systems Using Parametrically-Dependent Linear Feedback,” Systems and Control Letters, 23 (1994), pp. 205-215.
[6] Bendsoe, M.P., A. Ben-Tal, and J. Zowe, “Optimization Methods for Truss
Geometry and Topology Design,” to appear in Structural Optimization.
[7] Ben-Tal, A., and A. Nemirovski, “Potential Reduction Polynomial-Time
Method for Truss Topology Design,” to appear in SIAM J. Contr. Opt.
[8] Boyd, S., and Q. Yang, “Structured and Simultaneous Lyapunov Functions
for System Stability Problems,” Int. J. Contr., 49 (1989), pp. 2215-2240.
[9] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, Philadelphia, 1994.
[10] Chilali, M., and P. Gahinet, “H∞ Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr. Also in Proc. Conf.
Dec. Contr., 1994, pp. 553-558.
[11] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421-448.
[12] Gahinet, P., P. Apkarian, and M. Chilali, “Affine Parameter-Dependent
Lyapunov Functions for Real Parametric Uncertainty,” Proc. Conf. Dec. Contr.,
1994, pp. 2026-2031.
2
Uncertain Dynamical
Systems
Linear Time-Invariant Systems . . . . . . . . . . . 2-3
SYSTEM Matrix . . . . . . . . . . . . . . . . . . . 2-3
References . . . . . . . . . . . . . . . . . . . . .2-35
The LMI Control Toolbox offers a variety of tools to facilitate the description and manipulation of uncertain dynamical systems.
Linear Time-Invariant Systems
In this toolbox, LTI systems are described in state space either in continuous time by

E ẋ = Ax + Bu,   y = Cx + Du

or in discrete time by

E x_{k+1} = A x_k + B u_k,   y_k = C x_k + D u_k
Recall that the vectors x(t), u(t), y(t) denote the state, input, and output
trajectories. Similarly, xk, uk, yk denote the values of the state, input, and
output vectors at the sample time k.
The “descriptor” formulation ( E ≠ I ) proves useful when specifying
parameter-dependent systems (see “Affine Parameter-Dependent Models” on
page 2-15) and also avoids inverting E when this inversion is poorly
conditioned. Moreover, many dynamical systems are naturally written in
descriptor form. For instance, the second-order system
m ẍ + f ẋ + kx = u,    y = x

is written in descriptor form as

[1 0; 0 m] dX/dt = [0 1; −k −f] X + [0; 1] u,    y = (1, 0) X

where X(t) := [ x(t) ; ẋ(t) ].
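With illustrative values m = 2, f = 0.5, and k = 10, this descriptor system can be entered by passing E as the last input argument of ltisys (a sketch; the numerical values are hypothetical):

```matlab
m = 2; f = 0.5; k = 10;          % hypothetical parameter values
a = [0 1; -k -f];  b = [0; 1];   % state-space data of the descriptor form
c = [1 0];  d = 0;
e = [1 0; 0 m];                  % descriptor matrix E
sys = ltisys(a,b,c,d,e)          % E is passed as the last argument
```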
SYSTEM Matrix
For convenience, the state-space realization of LTI systems is stored as a single
MATLAB matrix called a SYSTEM matrix. Specifically, a continuous- or discrete-time LTI system with state-space matrices A, B, C, D, E and n states is stored as the single matrix

[ A + j(E − I)    B     n    ]
[ C               D     0    ]
[ 0               0    −Inf  ]
For instance, the system ẋ = −x + u, y = x is specified by

sys = ltisys(-1,1,1)
To retrieve the values of A, B, C, D from the SYSTEM matrix sys, type
[a,b,c,d] = ltiss(sys)
For SISO systems, the numerator and denominator of the transfer function

G(s) = D + C(sE − A)^(−1) B = n(s)/d(s)

are returned by [n,d] = ltitf(sys).
Conversely, the command

sys = ltisys('tf',n,d)

returns a state-space realization of n(s)/d(s) in SYSTEM format.
The number of states, inputs, and outputs of a linear system are retrieved from
the SYSTEM matrix with sinfo:
sinfo(sys)
System with 1 state(s), 1 input(s), and 1 output(s)
ans =
1
The function ssub selects particular inputs and outputs of a system and
returns the corresponding subsystem. For instance, if G has two inputs and
three outputs, the subsystem mapping the first input to the second and third
outputs is given by
ssub(g,1,2:3)
The function sinv computes the inverse H(s) = G(s)–1 of a system G(s) with
square invertible D matrix:
h = sinv(g)
Consider again the second-order system m ẍ + f ẋ + kx = u, y = x. Its descriptor realization is entered by supplying the matrices A, B, C, D, E to ltisys, the last input argument being the E matrix.
splot(sys,'bo')
The second argument is the string consisting of the first two letters of “bode.”
This command produces the plot of Figure 2-1. A third argument can be used
to adjust the frequency range. For instance,
splot(sys,'bo',logspace(-1,1,50))
(Figure 2-1: Bode diagram of sys: gain in dB and phase in degrees versus frequency in rad/sec)
Time and Frequency Response Plots
(figure: Bode diagram over the adjusted frequency range 10^−1 to 10^1 rad/sec)
For a system with transfer function G(s) = D + C(sE – A)–1B, the singular value
plot consists of the curves
(ω, σi(G(jω)))
where σi denotes the i-th singular value of a matrix M in the order
σ1(M) ≥ σ2(M) ≥ . . . ≥ σp(M).
Similarly, the step and impulse responses of this system are plotted by
splot(sys,'st') and splot(sys,'im'), respectively. See the “Command
Reference” chapter for a complete list of available diagrams.
The function splot is also applicable to discrete-time systems. For such
systems, the sampling period T must be specified to plot frequency responses.
For instance, the Bode plot of the system
x_{k+1} = 0.1 x_k + 0.2 u_k
y_k = −x_k
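Assuming a sampling period of one second, the corresponding commands can be sketched as follows (the sampling period as an extra splot argument is an assumption here; the exact discrete-time calling sequence is listed in the “Command Reference” chapter):

```matlab
sys = ltisys(0.1,0.2,-1,0)    % x(k+1) = 0.1 x(k) + 0.2 u(k),  y(k) = -x(k)
splot(sys,1,'bo')             % Bode plot with assumed sampling period T = 1
```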
Interconnections of Linear Systems
The command

g = smult(g1,g2)

returns the system with transfer function G2(s)G1(s). Both functions take up to ten input arguments, at most one of which can be a parameter-dependent system. Similarly, the command
sdiag(g1,g2)
appends (concatenates) the systems G1 and G2 and returns the system with
transfer function
G(s) = [ G1(s)   0 ;  0   G2(s) ]

that is,

[ y1 ; y2 ] = G(s) [ u1 ; u2 ]
Finally, the functions sloop and slft form basic feedback interconnections.
The function sloop computes the closed-loop mapping between r and y in the
loop of Figure 2-3. The result is a state-space realization of the transfer
function
(I − εG1G2)^(−1) G1
where ε = ±1.
(Figure 2-3: feedback loop with forward path G1(s) and feedback path G2(s) through the gain −ε)
This function is useful to specify simple feedback loops. For instance, the
closed-loop system clsys corresponding to the two-degree-of-freedom tracking
loop
(figure: two-degree-of-freedom tracking loop: the reference r passes through C, then K, then G to produce y, with negative feedback of y)
The function slft forms the more general feedback interconnection of Figure 2-4 and returns the closed-loop mapping from (w1, w2) to (z1, z2). To form this interconnection when u ∈ R^2 and y ∈ R^3, the command is
slft(P1,P2,2,3)
The last two arguments dimension u and y. This function is useful to compute
linear-fractional interconnections such as those arising in H∞ theory and its
extensions (see “How to Derive Such Models” on page 2-23).
(Figure 2-4: linear-fractional interconnection of P1(s) and P2(s): w1, z1 are the external input/output of P1, w2, z2 those of P2, and u, y the interconnection signals)
Model Uncertainty
The notion of uncertain dynamical system is central to robust control theory.
For control design purposes, the possibly complex behavior of dynamical
systems must be approximated by models of relatively low complexity. The gap
between such models and the true physical system is called the model
uncertainty. Another cause of uncertainty is the imperfect knowledge of some
components of the system, or the alteration of their behavior due to changes in
operating conditions, aging, etc. Finally, uncertainty also stems from physical
parameters whose value is only approximately known or varies in time. Note
that model uncertainty should be distinguished from exogenous actions such as
disturbances or measurement noise.
The LMI Control Toolbox focuses on the class of dynamical systems that can be
approximated by linear models up to some possibly nonlinear and/or
time-varying model uncertainty. When deriving the nominal model and
estimating the uncertainty, two fundamental principles must be remembered.
Polytopic Models
We call a polytopic system a linear time-varying system

E(t) ẋ = A(t) x + B(t) u
y = C(t) x + D(t) u

whose SYSTEM matrix

S(t) = [ A(t) + jE(t)   B(t) ;  C(t)   D(t) ]

varies within a fixed polytope of matrices, i.e.,

S(t) ∈ Co{S1, . . . , Sk} := { α1 S1 + . . . + αk Sk : αi ≥ 0, α1 + . . . + αk = 1 }
(2-1)

where S1, . . . , Sk are given SYSTEM matrices

S1 = [ A1 + jE1   B1 ;  C1   D1 ],  . . . ,  Sk = [ Ak + jEk   Bk ;  Ck   Dk ]

In other words, S(t) is a convex combination of the SYSTEM matrices S1, . . . , Sk. The nonnegative numbers α1, . . . , αk are called the polytopic coordinates of S.
Uncertain State-Space Models
Such models are also called polytopic linear differential inclusions in the literature [3] and arise in many practical situations.
In SYSTEM matrix terms, an affine parameter-dependent model corresponds to

S(p) = [ A(p) + jE(p)   B(p) ;  C(p)   D(p) ] = S0 + p1 S1 + . . . + pn Sn,    Si = [ Ai + jEi   Bi ;  Ci   Di ]

Such models are specified by providing the coefficient systems S0, S1, . . . , Sn in SYSTEM format together with the parameter description pv returned by pvec (see next subsection for details). The resulting description affsys is a structured matrix storing all relevant data.
Important: By default, ltisys sets the E matrix to the identity. Omitting the
arguments e0, e1, e2 altogether results in setting E(p) = I + (p1 + p2)I.
p_i ∈ [p_i^min, p_i^max].
The i-th row of the matrix range lists the lower and upper bounds on pi.
are incorporated by
rate = [-0.1 1 ; -0.001 0.001]
p = pvec('box',range,rate)
The string 'pol' indicates that the parameter range is defined as a polytope.
(figure: polytopic parameter range with vertices Π1, Π2, Π3 in the (p1, p2) plane)
S(p) = [ A(p) + jE(p)   B(p) ;  C(p)   D(p) ]
are readily converted to polytopic ones. Suppose for instance that each parameter pi ranges in some interval [p_i^min, p_i^max]. The parameter vector p = (p1, . . . , pn) then takes values in a parameter box with 2^n corners Π1, Π2, . . . . If the function S(p) is affine in p, it maps this parameter box to some polytope of SYSTEM matrices. More precisely, this polytope is the convex envelope of the images S(Π1), S(Π2), . . . of the parameter box corners Π1, Π2, . . . as illustrated by the figure below.
(figure: a parameter box with corners Π1, Π2, Π3, Π4 in the parameter plane)

This conversion is performed by aff2pol:

polsys = aff2pol(affsys)

where affsys is the affine model. The resulting polytopic model polsys consists of the instances of affsys at the vertices of the parameter range.
Example
We conclude with an example illustrating the manipulation of polytopic and parameter-dependent models in the LMI Control Toolbox. Consider an electrical circuit where the inductance L, the resistance R, and the capacitance C are uncertain parameters ranging in
L ∈ [10, 20], R ∈ [1, 2], C ∈ [100, 150].
A state-space representation of its undriven response is
E(L, R, C) x· = A(L, R, C)x
where x = (i, di/dt)^T and

A(L, R, C) = [ 0 1 ; −R −C ] = [ 0 1 ; 0 0 ] + L × 0 + R [ 0 0 ; −1 0 ] + C [ 0 0 ; 0 −1 ]

E(L, R, C) = [ 1 0 ; 0 L ] = [ 1 0 ; 0 0 ] + L [ 0 0 ; 0 1 ] + R × 0 + C × 0.
The first SYSTEM matrix s0 contains the state-space data for L = R = C = 0 while
sL, sR, sC define the coefficient matrices of L, R, C. The range of parameter
values is specified by the pvec command.
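These steps can be sketched as follows (the SYSTEM matrices s0, sL, sR, sC are assumed to have been built with ltisys, and psys denotes the constructor for parameter-dependent systems; its exact calling sequence is detailed in the Command Reference chapter):

```matlab
range = [10 20 ; 1 2 ; 100 150]   % bounds on L, R, C (one row per parameter)
pv = pvec('box',range)            % parameter vector ranging in a box
pds = psys(pv,[s0 sL sR sC])      % affine parameter-dependent system
```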
The results can be checked with psinfo and pvinfo:
psinfo(pds)
pvinfo(pv)
The matrices a and e now contain the values of A(L, R, C) and E(L, R, C) for L
= 15, R = 1.2, and C = 150.
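This evaluation can be sketched with psinfo's 'eval' mode followed by ltiss (the fifth output argument of ltiss for the E matrix is an assumption here):

```matlab
sys = psinfo(pds,'eval',[15 1.2 150])   % instance at L = 15, R = 1.2, C = 150
[a,b,c,d,e] = ltiss(sys)                % retrieve the state-space matrices
```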
Finally, the polytopic counterpart of this affine model is given by
pols = aff2pol(pds)
psinfo(pols)
Linear-Fractional Models of Uncertainty
• The LTI system P(s) gathers all known LTI components (controller, nominal
models of the system, sensors, and actuators, . . .)
• The input vector u includes all external actions on the system (disturbance,
noise, reference signal, . . .) and the vector y consists of all output signals
generated by the system
• ∆ is a structured description of the uncertainty. Specifically,
∆ = diag(∆1, . . . , ∆r)
where each uncertainty block ∆i accounts for one particular source of
uncertainty (neglected dynamics, nonlinearity, uncertain parameter, etc.).
The diagonal structure of ∆ reflects how each uncertainty component ∆i
enters the loop and affects the overall behavior of the true system.
(figure: linear-fractional uncertainty structure: the block ∆ in feedback with P(s); the signals q and w close the uncertainty loop, while u and y are the external input and output)
(figure: feedback loop with plant G(s) and uncertain gain k(t), and the same loop redrawn in terms of the nominal data G0, k0 and the uncertainties δ(.))
Then pull out all uncertain components and lump them together into a single
block as follows:
(figure: the uncertain components pulled out and lumped into the single block diag(δ, ∆))

If [p1; p2] and [q1; q2] denote the input and output vectors of the uncertainty block, the plant P(s) is simply the transfer function from [q1; q2; r] to [p1; p2; y] in the diagram.

(figure: the resulting linear-fractional model: the block diag(δ, ∆) in feedback with P(s), with external input r and output y)
(figure: the feedback loop annotated with the uncertainty input/output signals p1, q1 and p2, q2)
The properties of each individual uncertainty block ∆i are specified with the
function ublock. These individual descriptions are then appended with udiag
to form a complete description of the structured uncertainty
∆ = diag(∆1, . . . , ∆r)
Proper quantification of the uncertainty is important since this determines
achievable levels of robust stability and performance. The basics about
quantification issues are reviewed next.
Norm-Bounded Uncertainty
Norm bounds specify the amount of uncertainty in terms of RMS gain. Recall
that the RMS gain or L2 gain of a BIBO-stable system is defined as the
worst-case ratio of the input and output energies:
||∆||∞ = sup_{w ∈ L2, w ≠ 0}  ||∆w||_{L2} / ||w||_{L2}

where

||w||_{L2}^2 = ∫_0^∞ w(t)^T w(t) dt.
When ∆ is LTI (i.e., when the physical system G itself is LTI), the uncertainty
can be quantified more accurately by using frequency-dependent bounds of the
form
||W(s)^(−1) ∆(s)||∞ < 1
(2-2)
where W(s) is some SISO shaping filter. Such bounds specify different amounts of uncertainty across the frequency range, and the uncertainty level at each frequency is adjusted through the shaping filter W(s). For instance, choosing

W(s) = (2s + 1)/(s + 100)

specifies the frequency-dependent uncertainty level plotted below.

(figure: magnitude of the uncertainty bound versus frequency)
These various norm bounds are easily specified with ublock. For instance, a 2-by-3 LTI uncertainty block ∆(s) with RMS gain smaller than 10 is declared by
delta = ublock([2 3],10)
phi = ublock(1,1,'nl')
In these commands, the first argument dimensions the block, the second
argument quantifies the uncertainty by a norm bound or sector bounds, and
the optional third argument is a string specifying the nature and structure of
the uncertainty block. The default properties are full, complex-valued, and
linear time invariant (LTI).
Scalar blocks and real parameter uncertainty are denoted by the letters s and
r, respectively. For example, the command
delta = ublock(3,0.5,'sr')

declares a 3-by-3 scalar block δ × I with real uncertainty δ bounded in magnitude by 0.5 at all frequencies ω.
After specifying each block ∆i with ublock, call udiag to derive a complete
description of the uncertainty structure
∆ = diag(∆1, . . . , ∆n).
For instance, the commands
delta1 = ublock([3 1],0.1)
delta2 = ublock(2,2,'s')
delta3 = ublock([1 2],100,'nl')
delta = udiag(delta1,delta2,delta3)
Sector-Bounded Uncertainty
The behavior of uncertain dynamical components can also be quantified in
terms of sector bounds on their time-domain response [5, 4]. A (possibly
nonlinear) BIBO-stable system φ(.) is said to be in the sector {a, b} if the
mapping
φ : u ∈ L2 → y ∈ L2

satisfies a quadratic constraint of the form

∫_0^∞ (y(t) − a u(t))^T (y(t) − b u(t)) dt ≤ 0    for all u ∈ L2
Finally, φ(.) is said to be passive if ∫_0^∞ u(t)^T y(t) dt ≥ 0 for all u ∈ L2, which corresponds to the sector {0, +∞}. Note that sector bounds can always be converted to norm bounds since φ is in the sector {a, b} if and only if

|| φ − (a + b)/2 ||∞ ≤ (b − a)/2
The system φ(.) is called memoryless if y(t) only depends on the input value u(t)
at time t, i.e.,
y(t) = φ(u(t)).
For memoryless systems, the sector condition reduces to the “instantaneous”
property
(y(t) − au(t))^T (y(t) − bu(t)) ≤ 0 at all times t.
In the scalar case, this means that the response y = φ(u) lies in the sector
delimited by the two lines y = au and y = bu as illustrated below.
Consider for instance the spring characteristic

k(u) = 2u if |u| ≤ 1,    u + 1 if u > 1,    u − 1 if u < −1
The response of this spring lies in the sector {1, 2} as is seen from Figure 2-10.
This scalar memoryless nonlinearity is specified with ublock by
k = ublock(1,[1 2],'nlm')
Consider now the system with two uncertain parameters p1 and p2:

dx/dt = [ p1   1 ;  −p1   p1 + 2p2 ] x
(2-3)
(Figure 2-10: graph of the spring characteristic k(u), lying in the sector delimited by the lines y = u and y = 2u)
On output:
a =
     6     1
    -6     8
• The uncertainty structure delta consists of real scalar blocks as confirmed
by
uinfo(delta)
Note that the parameter p1 is repeated and that the norm bound corresponds
to the maximal deviation from the average value of the parameter.
(figure: linear-fractional model with the uncertainty block diag(δp1 × I, δp2) in feedback with P(s))
References
[1] Doyle, J.C. and G. Stein, “Multivariable Feedback Design: Concepts for a
Classical/Modern Synthesis,” IEEE Trans. Aut. Contr., AC-26 (1981), pp. 4-16.
[2] Doyle, J.C., “Analysis of Feedback Systems with Structured Uncertainties,”
IEE Proc., vol. 129, pt. D (1982), pp. 242–250.
[3] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, Philadelphia, 1994.
[4] Vidyasagar, M., Nonlinear System Analysis, Prentice-Hall, Englewood
Cliffs, 1992.
[5] Zames, G., “On the Input-Output Stability of Time-Varying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228-238 and 465-476.
3
Robustness Analysis
Quadratic Lyapunov Functions . . . . . . . . . . . 3-3
LMI Formulation . . . . . . . . . . . . . . . . . . . 3-4
Quadratic Stability . . . . . . . . . . . . . . . . . . 3-6
Maximizing the Quadratic Stability Region . . . . . . . . 3-8
Decay Rate . . . . . . . . . . . . . . . . . . . . . 3-9
Quadratic H∞ Performance . . . . . . . . . . . . . . .3-10
µ Analysis . . . . . . . . . . . . . . . . . . . . .3-17
Structured Singular Value . . . . . . . . . . . . . . .3-17
Robust Stability Analysis . . . . . . . . . . . . . . .3-19
Robust Performance . . . . . . . . . . . . . . . . . .3-21
Example . . . . . . . . . . . . . . . . . . . . . .3-28
References . . . . . . . . . . . . . . . . . . . . .3-32
Control systems are often designed for a simplified model of the physical plant
that does not take into account all sources of uncertainty. A-posteriori
robustness analysis is then necessary to validate the design and obtain
guarantees of stability and performance in the face of plant uncertainty. The
LMI Control Toolbox offers a variety of tools to assess robust stability and
robust performance. These tools cover most available Lyapunov-based and
frequency-domain analysis techniques.
• Quadratic stability/performance
• Tests involving parameter-dependent Lyapunov functions
• Mixed-µ analysis
• The Popov criterion
Quadratic Lyapunov Functions
Quadratic Lyapunov functions V(x) = x^T Q^(−1) x are used to establish the stability of the time-varying system E(t)ẋ = A(t)x. The system is called quadratically stable if there exists Q > 0 such that

A(t)QE(t)^T + E(t)QA(t)^T < 0
(3-1)

at all times t.
Assessing quadratic stability is not tractable in general since (3-1) places an infinite number of constraints on Q. However, (3-1) can be reduced to a finite set of LMI constraints in the following cases:
1  A(t) and E(t) are fixed affine functions of some time-varying parameters p1(t), . . . , pn(t):

A(t) = A0 + p1(t)A1 + . . . + pn(t)An
E(t) = E0 + p1(t)E1 + . . . + pn(t)En.

2  The SYSTEM matrix ranges in a fixed polytope of matrices, that is,

A(t) + jE(t) = α1(t)(A1 + jE1) + . . . + αn(t)(An + jEn)

with αi(t) ≥ 0 and α1(t) + . . . + αn(t) = 1. This is referred to as a polytopic model.
The first case corresponds to systems whose state-space equations depend affinely on time-varying physical parameters, and the second case to time-varying systems modeled by a polytope of LTI systems.
Quadratic Lyapunov functions are also useful to bound the worst-case RMS gain of time-varying systems

E(t)ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u
(3-2)

The RMS gain is smaller than γ if there exists a Lyapunov function V(x) = x^T Q^(−1) x, Q > 0, such that

dV(x)/dt + y^T y − γ² u^T u < 0
The smallest γ for which such a Lyapunov function exists is called the quadratic
H∞ performance. This is an upper bound on the worst-case RMS gain of the
system (3-2).
LMI Formulation
Sufficient LMI conditions for quadratic stability are as follows [8, 1, 2]:
Affine models: consider the parameter-dependent model
E(p)ẋ = A(p)x,
A(p) = A0 + p1A1 + . . . + pnAn,   E(p) = E0 + p1E1 + . . . + pnEn
(3-3)

where p = (p1, . . . , pn) ranges in a box, and let

V = { (ω1, . . . , ωn) : ωi ∈ {p_i^min, p_i^max} }

denote the set of corners of this parameter box. The dynamical system (3-3) is quadratically stable if there exist symmetric matrices Q and {Mi}_{i=1}^n such that

A(ω)QE(ω)^T + E(ω)QA(ω)^T + Σ_i ωi² Mi < 0   for all ω ∈ V
(3-4)

Ai Q Ei^T + Ei Q Ai^T + Mi ≥ 0   for i = 1, . . . , n
(3-5)

Mi ≥ 0
(3-6)

Q > I
(3-7)

Polytopic models: the polytopic system with vertex systems S1, . . . , Sn is quadratically stable if there exist a symmetric matrix Q and scalars tij = tji such that

Ai Q Ej^T + Ej Q Ai^T + Aj Q Ei^T + Ei Q Aj^T < 2 tij I   for i, j ∈ {1, . . . , n}
(3-8)

Q > I
(3-9)
[ t11 … t1n ;  …  ; t1n … tnn ] < 0
(3-10)
Note that these LMI conditions are in fact necessary and sufficient for
quadratic stability whenever
• In the affine case, no parameter pi enters both A(t) and E(t), that is, Ai = 0 or
Ei = 0 for all i. The conditions (3-5)–(3-6) can then be deleted and it suffices
to check (3-1) at the corners ω of the parameter box.
• In the polytopic case, either A(t) or E(t) is constant. It then suffices to solve
(3-8)–(3-9) for tij = 0.
Similar sufficient conditions are available for robust RMS performance. For
polytopic systems, for instance, LMI conditions guaranteeing the RMS
performance γ are as follows.
A symmetric matrix Q and scalars tij = tji for i,j ∈ {1, . . . , n} exist such that
[ Ai Q Ej^T + Ej Q Ai^T + Aj Q Ei^T + Ei Q Aj^T    Bi + Bj     (Ci Q Ej^T + Cj Q Ei^T)^T ;
  (Bi + Bj)^T                                      −2γI        (Di + Dj)^T               ;
  Ci Q Ej^T + Cj Q Ei^T                            Di + Dj     −2γI                      ]  <  2 tij I

Q > 0

[ t11 … t1n ;  …  ; t1n … tnn ] < 0
Quadratic Stability
The function quadstab tests the quadratic stability of polytopic or affine
parameter-dependent systems. The corresponding LMI feasibility problem is
solved with feasp.
Example 3.1. Consider the time-varying system

ẋ = A(t)x

where A(t) ∈ Co{A1, A2, A3} with

A1 = [ 0 1 ; −2 −0.2 ],   A2 = [ 0 1 ; −2.2 −0.3 ],   A3 = [ 0 1 ; −1.9 −0.1 ]

The quadratic stability test returns

tmin =
  -3.2876e-02
Collectively denoting the LMI system (3-8)–(3-10) by A(x) < 0, quadstab
assesses its feasibility by minimizing τ subject to A(x) < τI and returns the
global minimum tmin of this problem. Hence the system is quadratically stable
if and only if tmin < 0. In this case, the second output P is the Lyapunov matrix
proving stability.
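For the polytopic system of the example above, the complete test can be sketched as follows (s1, s2, s3 denote SYSTEM matrices of the three vertex systems; the single-argument polytopic form of psys is an assumption here):

```matlab
polsys = psys([s1 s2 s3])      % polytopic model with vertices s1, s2, s3
[tmin,P] = quadstab(polsys)    % quadratically stable iff tmin < 0
```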
Example 3.2. Consider now the time-varying system

ẋ = A(k, f)x

where

A(k, f) = [ 0 1 ; −k −f ],   k(t) ∈ [10, 12],   f(t) ∈ [0.5, 0.7]
The result is
marg =
1.4908e+00
Decay Rate
For the time-varying system
E(t)ẋ = A(t)x,
the quadratic decay rate α* is the smallest α such that
A(t)QE(t)^T + E(t)QA(t)^T < α E(t)QE(t)^T
holds at all times for some fixed Q > 0. The system is quadratically stable if and
only if α* < 0, in which case α* is an upper bound on the rate of return to
equilibrium. Specifically,
x^T(t) Q^(−1) x(t) < e^(α*t) ( x^T(0) Q^(−1) x(0) ).
Note that −α*/2 is also the largest β such that the shifted system

E(t)ẋ = (A(t) + βE(t))x
is quadratically stable.
For affine or polytopic models, the decay rate computation is attacked as a
generalized eigenvalue minimization problem (see gevp). This task is
performed by the function decay.
Example 3.3. For the time-varying system considered in Example 3.1, the decay
rate is computed by
[drate,P] = decay(polsys)
This command returns
drate =
  -5.6016e-02

P =
   6.0707e-01   1.9400e-02
   1.9400e-02   3.1098e-01
Quadratic H∞ Performance
The function quadperf computes the quadratic H∞ performance of an affine
parameter-dependent system
E(p)ẋ = A(p)x + B(p)u
y = C(p)x + D(p)u

or of a polytopic system

E(t)ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u

where

S(t) = [ A(t) + jE(t)   B(t) ;  C(t)   D(t) ] ∈ Co{S1, . . . , Sn}
Example 3.4. Consider the second-order system with time-varying mass and
damping
    | 1    0   | | x·  |   |  0    1   | | x  |   | 0 |
    | 0  m(t)  | | x·· | = | -2  -f(t) | | x· | + | 1 | u
qperf =
6.083e+00
This value is larger than the nominal LTI performance for m = 1 and f = 0.5 as
confirmed by
nomsys = psinfo(affsys,'eval',[1 0.5])
norminf(nomsys)
1.4311e+00
    E(p)Q(p)A(p)T + A(p)Q(p)E(p)T – E(p) (dQ/dt) E(p)T < 0          (3-11)

Given bounds on each pi and its time derivative dpi/dt, the vectors p and
dp/dt range in n-dimensional “boxes.” If V and T list the corners of these
boxes, (3-11) holds for all parameter trajectories if the following LMI
problem is feasible [6]:

Find symmetric matrices Q0, Q1, . . . , Qn, and M1, . . . , Mn such that
Parameter-Dependent Lyapunov Functions
• For all corners ω of V and τ of T,

    E(ω)Q(ω)A(ω)T + A(ω)Q(ω)E(ω)T – E(ω)(Q(τ) – Q0)E(ω)T + Σi ωi2 Mi < 0

• For ω ∈ V, τ ∈ T, and i = 1, . . . , n,

    Ai Qi E(ω)T + E(ω) Qi AiT + Ai Q(ω) EiT + Ei Q(ω) AiT +
    A(ω) Qi EiT + Ei Qi A(ω)T – Ei (Q(τ) – Q0) EiT + Mi ≥ 0
Note that these conditions reduce to quadratic stability when the rates of
variation dpi/dt are allowed to range in (–∞, +∞). Indeed, in that case
Q1, . . . , Qn must be set to zero for feasibility.
Polytopic models: A similar extension of the quadratic stability test is
available for time-invariant polytopic systems
E x· = Ax
where one of the matrices A, E is constant and the other uncertain. Assuming
that E is constant and A ranges in the polytope
A ∈ {α1A1 + . . . + αnAn : αi ≥ 0, α1 + . . . + αn = 1},
we seek a Lyapunov function of the form V(x, α) = xTQ(α)–1x where
Q(α) = α1Q1 + . . . + αnQn
Using such Lyapunov functions, sufficient conditions for stability over the
entire polytope are as follows.
There exist symmetric matrices Q1, . . . , Qn, and scalars tij = tji such that

    Ai Qj ET + E Qj AiT + Aj Qi ET + E Qi AjT < 2 tij I

    Qj > I

    | t11  …  t1n |
    |  :   ·.  :  |  <  0
    | t1n  …  tnn |
Stability Analysis
Given an affine or polytopic system, the function pdlstab seeks a parameter-
dependent Lyapunov function establishing robust stability over a given
parameter range or polytope of models. The syntax is similar to that of
quadstab. For an affine parameter-dependent system with two uncertain
parameters p1 and p2, for instance, pdlstab is invoked by
[tmin,Q0,Q1,Q2] = pdlstab(ps)
    | x·  |   |   0      1   | | x  |
    | x·· | = | -k(t)  -f(t) | | x· |
    |dk/dt| < 0.01,    |df/dt| < 1                                  (3-12)
The second argument of pvec defines the range of parameter values while the
third argument specifies the bounds on their rate of variation.
This system is not quadratically stable over the specified parameter box as
confirmed by
tmin = quadstab(ps)
tmin =
  8.0118e-04
       1     0.084914
       2     0.011826
       3     0.011826
       4     1.878414e-03
       5     1.878414e-03
       6     4.846478e-04
       7     1.528537e-04
       8    -7.832830e-04
µ Analysis
µ analysis investigates the robust stability or performance of systems with
linear time-invariant linear-fractional uncertainty. It is also applicable to
time-invariant parameter-dependent systems by first deriving an equivalent
linear-fractional representation with aff2lft (see “From Affine to
Linear-Fractional Models” on page 2-32 for details). Nonlinear and/or
time-varying uncertainty is addressed by the Popov criterion discussed in the
next section.
[Figure 3-1: interconnection subject to the uncertainty ∆, with signals w and q]
where delta specifies the perturbation structure ∆ (use ublock and udiag to
define delta, see “Norm-Bounded Uncertainty” on page 2-27 and
“Sector-Bounded Uncertainty” for details). When each block ∆i is bounded by 1,
the output mu is equal to ν∆(M). If different bounds βi are used for each ∆i, the
interpretation of mu is as follows:
The interconnection of Figure 3-1 is well posed for all ∆ ∈ ∆ satisfying

    σmax(∆i) < βi / mu
[Figure 3-2: feedback interconnection of the plant P(s) and the structured
uncertainty ∆(s), with external signals u, y and loop signals w, q]
This interconnection is stable for all structured ∆(s) satisfying ||∆||∞ < 1 if and
only if [3, 12, 10, 16]
    µm := supω µ∆(P(jω)) < 1.
The reciprocal Km = 1/µm represents the robust stability margin, that is, the
largest amount of uncertainty ∆ that can be tolerated without losing stability.
On input, delta describes the uncertainty ∆ (see ublock and udiag for details)
and freqs is an optional vector of frequencies at which to evaluate ν∆(P(jω)).
On output, margin is the computed value of K̂m and peakf is the frequency near
which ν∆(P(jω)) peaks, i.e., where the margin is smallest.
The uncertainty quantification may involve different bounds for each block ∆i.
In this case, margin represents the relative stability margin as a percentage
of the specified bounds. More precisely, margin = θ means that stability is
guaranteed as long as each block ∆i remains within θ times its specified
bounds, where a < b are the sector bounds specified with ublock. In the special
case b = +∞, this condition reads:

    ∆i is inside the sector  ( a + (1 – θ)/(1 + θ),  a + (1 + θ)/(1 – θ) )

for θ ≤ 1, and

    ∆i is outside the sector  ( a + (1 – θ)/(1 + θ),  a + (1 + θ)/(1 – θ) )

for θ > 1.

For instance, margin = 0.7 for a single-block uncertainty ∆(s) bounded by 3
means that stability is guaranteed for ||∆||∞ < 0.7 × 3 = 2.1.
The syntax
[margin,peakf,fs,ds,gs] = mustab(P,delta)
also returns the D, G scaling matrices at the tested frequencies fs. To retrieve
the values of D and G at the frequency fs(i), type
Di = getdg(ds,i), Gi = getdg(gs,i)
Note that Di and Gi are not necessarily the optimal scalings at fs(i). Indeed,
the LMI optimization is often stopped before completion to speed up
computations.
Robust Performance
In robust control, it is customary to formulate the design specifications as
abstract disturbance rejection objectives. The performance of a control system
is then measured in terms of the closed-loop RMS gain from disturbances to
outputs (see “H∞ Control” on page 5-3 for details). The presence of uncertainty
typically deteriorates performance. For an LTI system with linear-fractional
uncertainty (Figure 3-3), the robust performance γrob is defined as the
worst-case RMS gain from u to y in the face of the uncertainty ∆(s).
[Figure 3-3: uncertain system: P(s) in feedback with ∆(s), input u, output y.
The robust performance is assessed through the augmented uncertainty structure
diag(∆perf(s), ∆(s)) closed around P(s).]
• Compute the robust performance γrob for the specified uncertainty ∆(s). This
is done by
grob = muperf(P,delta)
The value grob is finite if and only if the interconnection of Figure 3-3 is
robustly stable.
• Assess the robustness of a given performance γ = g > 0 in the face of the
uncertainty ∆(s). The command
margin = muperf(P,delta,g)
computes a fraction margin of the specified uncertainty bounds for which the
RMS gain from u to y is guaranteed not to exceed γ.
Note that g should be larger than the nominal RMS gain for ∆ = 0. The
performance g is robust if and only if margin ≥ 1.

The function muperf uses the following characterization of γrob. Assuming
||∆||∞ < 1 and denoting by D and G the related scaling sets, γrob is the
smallest γ for which scalings D ∈ D and G ∈ G exist such that, at all
frequencies ω,

    P(jω)H | D  0 | P(jω) + j ( | G  0 | P(jω) – P(jω)H | G  0 | )  <  | D    0   |
           | 0  I |             | 0  0 |                | 0  0 |       | 0  γ2 I  |
[Figure 3-4: interconnection of G(s) and the memoryless nonlinearity φ(·),
with external signals u, y and loop signals w, q]
    x· = Ax + Bw
    q = Cx + Dw
    w = φ(q)

With φ partitioned into blocks φj, this reads

    x· = Ax + Σj Bj wj
    qj = Cj x + Dj wj        (qj ∈ Rrj)
    wj = φj(qj)
(this includes norm bounds as the special case βj = –αj). To establish robust
stability, the Popov criterion seeks a Lyapunov function of the form
The Popov Criterion
    V(x, t) = (1/2) xT P x + Σj νj ∫0qj φj(z)T dz
              – Σj σj ∫0t (wj – αj qj)T (wj – βj qj) dτ             (3-14)

where σj > 0 and νj are scalars, and νj = 0 unless Dj = 0 and φj(.) is a scalar
memoryless nonlinearity (i.e., wj(t) depends only on qj(t); in this case the
sector constraint (3-13) reduces to αj qj2 ≤ qj φj(qj) ≤ βj qj2).
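As a concrete scalar instance of this sector constraint (illustrative only, not tied to any toolbox model): φ(q) = q + 0.3 sin q lies in the sector [0.7, 1.3], because |sin q| ≤ |q| bounds the cross term 0.3 q sin q by 0.3 q² in magnitude. A quick numerical check:

```python
import math

# phi(q) = q + 0.3*sin(q) satisfies the scalar sector constraint
#     alpha*q^2 <= q*phi(q) <= beta*q^2   with [alpha, beta] = [0.7, 1.3],
# because |sin q| <= |q| implies |0.3*q*sin(q)| <= 0.3*q^2.
alpha, beta = 0.7, 1.3
phi = lambda q: q + 0.3 * math.sin(q)

for q in [x / 10.0 for x in range(-50, 51)]:
    lo, hi = alpha * q * q, beta * q * q
    assert lo - 1e-12 <= q * phi(q) <= hi + 1e-12
```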
Letting
Kα := diag(αj Irj), Kβ := diag(βj Irj)
S := diag(σj Irj), N := diag(νj Irj)
and assuming nominal stability, the conditions V(x, t) > 0 and dV/dt < 0 are
equivalent to the following LMI feasibility problem (with the notation
H(M) := M + MT):
Find P = PT and structured matrices S, N such that S > 0 and
    H(  | I | P (A, B)  +  | 0 | N (CA, CB)  –  | CTKα     | S | CTKβ     |T  )  <  0     (3-15)
        | 0 |              | I |                | DTKα – I |   | DTKβ – I |
The corresponding call is

    [t,P,S,N] = popov(G,phi)

where G is the SYSTEM matrix representation of G(s) and phi is the uncertainty
description specified with ublock and udiag. The function popov returns the
output t of the LMI solver feasp. Hence the interconnection is robustly stable
if t < 0, in which case P, S, and N are solutions of the LMI problem (3-15).
    ∫0qj φj(z)T Nj dz – ∫0t (wj – αj qj)T Sj (wj – βj qj) dτ =

        δj xT CT Nj C x – (δj – αj)(δj – βj) ∫0t qjT (Sj + SjT) qj dτ     (3-16)

where Sj and Nj are rj × rj matrices subject to Sj + SjT > 0, Nj = NjT, and
Nj = 0 when the parameter δj is time varying. As a result, the variables S and
N in the Popov condition (3-15) assume a block-diagonal structure where σj Irj
and νj Irj are replaced by the full blocks Sj and Nj, respectively.
When all φj(.) are real time-invariant parameters, the Lyapunov function (3-14)
becomes
    V(x, t) = (1/2) xT ( P + Σj δj CT Nj C ) x
              – Σj (δj – αj)(δj – βj) ∫0t qjT (Sj + SjT) qj dτ
    x· = Ax + Σj Bj Cj w̃j
    q̃j = x
    w̃j = δj q̃j                                                      (3-17)
[t,P,S,N] = popov(G,phi,1)
The third input 1 signals popov to perform the loop transformation (3-17)
before solving the feasibility problem (3-15).
Example
The use of these various robustness tests is illustrated on the benchmark
problem proposed in [14]. The physical system is sketched in Figure 3-6.
[Figure 3-6: two-mass-spring benchmark system: masses m1 and m2 coupled by a
spring with constant k; the control u and disturbance w1 act on m1, the
disturbance w2 acts on m2; x1 and x2 are the mass positions]
    | x·1  |   |  0   0   1   0 | | x1  |   | 0 |            | 0 |
    | x·2  |   |  0   0   0   1 | | x2  |   | 0 |            | 0 |
    | x··1 | = | -k   k   0   0 | | x·1 | + | 1 | (u + w1) + | 0 | w2     (3-18)
    | x··2 |   |  k  -k   0   0 | | x·2 |   | 0 |            | 1 |
         |  0       -0.7195   1        0      |        |  0.720 |
    AK = |  0       -2.9732   0        1      | , BK = |  2.973 |
         | -2.5133   4.8548  -1.7287  -0.9616 |        | -3.37  |
         |  1.0063  -5.4097  -0.0081   0.0304 |        |  4.419 |
The concern here is to assess, for this particular controller, the closed-loop
stability margin with respect to the uncertain real parameter k. Since the plant
equations (3-18) depend affinely on k, we can define the uncertain physical
system as an affine parameter-dependent system G with psys.
A0=[0 0 1 0;0 0 0 1;0 0 0 0;0 0 0 0];
B0=[0;0;1;0]; C0=[0 1 0 0]; D0=0;
S0=ltisys(A0,B0,C0,D0); % system for k=0
After entering the controller data as a SYSTEM matrix K, close the loop with
slft.
Cl = slft(G,K)
marg =
  4.1919e-01

The value marg = 0.419 means 41.9% of [0.5, 2] (with respect to the center
1.25). That is, quadratic stability on the interval [0.943, 1.557]. Since
quadratic stability assumes arbitrarily fast time variations of the parameter
k, we can expect this answer to be conservative when k is time invariant.
• Parameter-dependent Lyapunov functions: when k does not vary in
time, a less conservative estimate is provided by pdlstab. To test stability for
k ∈ [0.5, 2], type
t = pdlstab(Cl)
t =
  -2.1721e-01
Since t < 0, the closed loop is robustly stable for this range of values of k.
Assume now that k slowly varies with a rate of variation bounded by 0.1. To
test if the closed loop remains stable in the face of such slow variations,
redefine the parameter vector and update the description of the closed-loop
system by
pv1 = pvec('box',[0.5 2],[-0.1 0.1])
G1 = psys(pv1,[S0,S1])
Cl1 = slft(G1,K)
t =
  -2.0089e-02
Since t is again negative, this level of time variations does not destroy robust
stability.
• µ analysis: to perform µ analysis, first convert the affine
parameter-dependent model Cl to an equivalent linear-fractional
uncertainty model.
[P0,deltak] = aff2lft(Cl)
uinfo(deltak)
Here P0 is the closed loop system for the nominal value k0 = 1.25 of k and the
uncertainty on k is represented as a real scalar block deltak. Note that k
must be assumed time invariant in the µ framework.
To get the relative parameter margin, type
[pmarg,peakf] = mustab(P0,deltak)
pmarg =
  1.0670e+00
peakf =
  6.3959e-01

The value pmarg = 1.067 means stability as long as the deviation |δk| from
k0 = 1.25 does not exceed 0.75 × 1.067 ≈ 0.80. That is, as long as k remains in
the interval [0.45, 2.05]. This estimate is sharp since the closed loop becomes
unstable for δk = –0.81, that is, for k = k0 – 0.81 = 0.44:
spol(slft(P0,-0.81))

ans =
  -1.0160e+00 + 1.9208e+00i
  -1.0160e+00 - 1.9208e+00i
  -1.0512e+00 + 9.7820e-01i
  -1.0512e+00 - 9.7820e-01i
   6.2324e-03 + 6.3884e-01i
   6.2324e-03 - 6.3884e-01i
  -2.7484e-01 + 1.9756e-01i
  -2.7484e-01 - 1.9756e-01i
• Popov criterion: finally, we can apply the Popov criterion to the
linear-fractional model returned by aff2lft
[t,P,D,N] = popov(P0,deltak)
t =
  -1.3767e-02
This test is also successful since t < 0. Note that the Popov criterion also
proves stability for k = k0 + δk(.) where δk(.) is any memoryless nonlinearity
with gain less than 0.75.
References
[1] Barmish, B.R., “Stabilization of Uncertain Systems via Linear Control,”
IEEE Trans. Aut. Contr., AC–28 (1983), pp. 848–850.
[2] Boyd, S., and Q. Yang, “Structured and Simultaneous Lyapunov Functions
for System Stability Problems,” Int. J. Contr., 49 (1989), pp. 2215–2240.
[3] Doyle, J.C., “Analysis of Feedback Systems with Structured Uncertainties,”
IEE Proc., vol. 129, pt. D (1982), pp. 242–250.
[4] Fan, M.K.H., A.L. Tits, and J.C. Doyle,“Robustness in the Presence of Mixed
Parametric Uncertainty and Unmodeled Dynamics,” IEEE Trans. Aut. Contr.,
36 (1991), pp. 25–38.
[5] Feron, E., P. Apkarian, and P. Gahinet, “S-Procedure for the Analysis of
Control Systems with Parametric Uncertainties via Parameter-Dependent
Lyapunov Functions,” Third SIAM Conf. on Contr. and its Applic., St. Louis,
Missouri, 1995.
[6] Gahinet, P., P. Apkarian, and M. Chilali, “Affine Parameter-Dependent
Lyapunov Functions for Real Parametric Uncertainty,” Proc. Conf. Dec. Contr.,
1994, pp. 2026–2031.
[7] Haddad, W.M., and D.S. Bernstein, “Parameter-Dependent Lyapunov
Functions, Constant Real Parameter Uncertainty, and the Popov Criterion in
Robust Analysis and Synthesis: Parts 1 and 2,” Proc. Conf. Dec. Contr., 1991,
pp. 2274–2279 and 2632–2633.
[8] Horisberger, H.P., and P.R. Belanger, “Regulators for Linear Time-Varying
Plants with Uncertain Parameters,” IEEE Trans. Aut. Contr., AC–21 (1976),
pp. 705–708.
[9] How, J.P., and S.R. Hall, “Connection between the Popov Stability Criterion
and Bounds for Real Parameter Uncertainty,” Proc. Amer. Contr. Conf., 1993,
pp. 1084–1089.
[10] Packard, A., and J.C. Doyle, “The Complex Structured Singular Value,”
Automatica, 29 (1994), pp. 71–109.
[11] Popov, V.M., “Absolute Stability of Nonlinear Systems of Automatic
Control,” Automation and Remote Control, 22 (1962), pp. 857–875.
[12] Safonov, M.G., “L1 Optimal Sensitivity vs. Stability Margin,” Proc. Conf.
Dec. Contr., 1983.
[13] Stein, G. and J.C. Doyle, “Beyond Singular Values and Loop Shapes,” J.
Guidance, 14 (1991), pp. 5–16.
[14] Wie, B., and D.S. Bernstein, “Benchmark Problem for Robust Control
Design,” J. Guidance and Contr., 15 (1992), pp. 1057–1059.
[15] Wie, B., Q. Liu, and K.-W. Byun, “Robust H∞ Control Synthesis Method
and Its Application to Benchmark Problems,” J. Guidance and Contr., 15
(1992), pp. 1140–1148.
[16] Young, P. M., M. P. Newlin, and J. C. Doyle, “Let's Get Real,” in Robust
Control Theory, Springer Verlag, 1994, pp. 143–174.
[17] Zames, G., “On the Input-Output Stability of Time-Varying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228–238 and 465–476.
4
State-Feedback Synthesis
Multi-Objective State-Feedback . . . . . . . . . . . 4-3
References . . . . . . . . . . . . . . . . . . . . . 4-18
4 State-Feedback Synthesis
These tools apply to multi-model problems, i.e., when the objectives are to be
robustly achieved over a polytopic set of plant models.
Multi-Objective State-Feedback
The function msfsyn performs multi-model H2/H∞ state-feedback synthesis
with pole placement constraints. For simplicity, we describe the underlying
problem in the case of a single LTI model. The control structure is depicted by
Figure 4-1. The plant P(s) is a given LTI system and we assume full
measurement of its state vector x.
[Figure 4-1: state-feedback control structure: plant P(s) with inputs w and u,
outputs z∞ and z2, and measured state x driving the feedback u = Kx]
Denoting by T∞(s) and T2(s) the closed-loop transfer functions from w to z∞ and
z2, respectively, our goal is to design a state-feedback law u = Kx that
• Maintains the RMS gain (H∞ norm) of T∞ below some prescribed value γ0 > 0
• Maintains the H2 norm of T2 (LQG cost) below some prescribed value ν0 > 0
• Minimizes an H2/H∞ trade-off criterion of the form

    α ||T∞||∞2 + β ||T2||22
the mixed H2/H∞ criterion takes into account both the disturbance rejection
aspects (RMS gain from d to e) and the LQG aspects (H2 norm from n to z2 ). In
addition, the closed-loop poles can be forced into some sector of the stable
half-plane to obtain well-damped transient responses.
Pole Placement in LMI Regions
is called the characteristic function of the region D. The class of LMI regions is
fairly general since its closure is the set of convex regions symmetric with
respect to the real axis. More practically, LMI regions include relevant regions
such as sectors, disks, conics, strips, etc., as well as any intersection of the
above.
Another strength of LMI regions is the availability of a “Lyapunov theorem” for
such regions. Specifically, if {λij}1≤i,j≤m and {µij}1≤i,j≤m denote the entries
of the matrices L and M, a matrix A has all its eigenvalues in D if and only if
there exists a positive definite matrix P such that [3]

    [ λij P + µij AP + µji PAT ]1≤i,j≤m < 0
with the notation

                      | S11  …  S1m |
    [ Sij ]1≤i,j≤m := |  :   ·.  :  |
                      | Sm1  …  Smm |
Note that this condition is an LMI in P and that the classical Lyapunov
theorem corresponds to the special case
    fD(z) = z + z̄
Next we list a few examples of useful LMI regions along with their
characteristic function fD:
• Disk of radius r centered at (–q, 0):

    fD(z) = |  –r      z̄ + q |
            |  z + q   –r     |

• Conic sector centered at the origin with inner angle θ: the damping ratio of
  poles lying in this sector is at least cos(θ/2).

• Vertical strip h1 < x < h2:

    fD(z) = | 2h1 – (z + z̄)        0          |
            | 0                (z + z̄) – 2h2  |
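For the disk, the characteristic function gives a direct membership test: fD(z) < 0 if and only if –r < 0 and, by the Schur complement, r² – |z + q|² > 0, i.e. |z + q| < r. In the scalar case A = a with P = p > 0, the Lyapunov condition above reduces to p·fD(a) < 0, so the same test decides pole placement. A small check with illustrative numbers:

```python
# Membership test for the disk LMI region of center (-q, 0) and radius r:
#     fD(z) = [[-r, conj(z)+q], [z+q, -r]] < 0   <=>   |z + q| < r
# (negative definiteness of the 2x2 Hermitian fD via leading minors).
def in_disk(z, q, r):
    # fD(z) < 0 iff -r < 0 and det fD(z) = r**2 - abs(z + q)**2 > 0
    return r > 0 and r**2 - abs(z + q)**2 > 0

q, r = 2.5, 1.0                      # disk centered at -2.5 with radius 1
assert in_disk(-3.0 + 0.4j, q, r)    # |z + q| = |-0.5 + 0.4j| < 1
assert not in_disk(-0.5, q, r)       # |z + q| = 2 > 1
```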
LMI Formulation
Given a state-space realization

    x· = Ax + B1w + B2u
    z∞ = C1x + D11w + D12u
    z2 = C2x + D22u

of the plant P(s), the closed-loop system under the state feedback u = Kx is

    x· = (A + B2K)x + B1w
    z∞ = (C1 + D12K)x + D11w
    z2 = (C2 + D22K)x
Taken separately, our three design objectives have the following LMI
formulation:
• H∞ performance: the closed-loop RMS gain from w to z∞ does not exceed γ
if and only if there exists a symmetric matrix X∞ such that [5]
    | (A + B2K)X∞ + X∞(A + B2K)T   B1     X∞(C1 + D12K)T |
    | B1T                          –I     D11T           |  <  0
    | (C1 + D12K)X∞                D11    –γ2 I          |

    X∞ > 0
• H2 performance: the closed-loop H2 norm of T2 does not exceed ν if there
exist two symmetric matrices X2 and Q such that
    | (A + B2K)X2 + X2(A + B2K)T   B1 |
    | B1T                          –I |  <  0

    | Q                 (C2 + D22K)X2 |
    | X2(C2 + D22K)T    X2            |  >  0

    Trace(Q) < ν2

• Pole placement: the closed-loop poles lie in the LMI region D if and only
  if there exists a symmetric matrix Xpol such that

    [ λij Xpol + µij (A + B2K)Xpol + µji Xpol(A + B2K)T ]1≤i,j≤m < 0

    Xpol > 0
These conditions are made tractable by seeking a single Lyapunov matrix

    X := X∞ = X2 = Xpol                                              (4-1)

that enforces all three objectives. With the change of variable Y := KX, this
leads to the following suboptimal LMI formulation of our multi-objective
state-feedback synthesis problem [4, 3, 2]:
Minimize α γ2 + β Trace(Q) over Y, X, Q, and γ2 subject to

    | AX + XAT + B2Y + YTB2T   B1     XC1T + YTD12T |
    | B1T                      –I     D11T          |  <  0          (4-2)
    | C1X + D12Y               D11    –γ2 I         |

    | Q                 C2X + D22Y |
    | XC2T + YTD22T     X          |  >  0                           (4-3)

    [ λij X + µij (AX + B2Y) + µji (XAT + YTB2T) ]1≤i,j≤m < 0        (4-4)

    Trace(Q) < ν02                                                   (4-5)

    γ2 < γ02                                                         (4-6)

Denoting the optimal solution by (X*, Y*, Q*, γ*), the corresponding gain is
K* = Y*(X*)–1.
Note that K* does not yield the globally optimal trade-off in general due to the
conservatism of Assumption (4-1).
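The linearizing change of variable Y := KX is what makes the problem convex: the solver returns X and Y, and the gain is then recovered as K = YX⁻¹. A toy recovery with hypothetical 2×2 numbers (purely illustrative, not output of msfsyn):

```python
# Recovering the state-feedback gain from the linearizing change of
# variable Y := K*X, i.e. K = Y * inv(X).  Numbers are hypothetical.
def inv2(X):
    """Inverse of a 2x2 matrix."""
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

X = [[2.0, 0.0], [0.0, 4.0]]   # Lyapunov matrix returned by the solver
Y = [[1.0, 2.0]]               # Y = K X  (1x2 for a single-input plant)

Xinv = inv2(X)
K = [[sum(Y[0][k] * Xinv[k][j] for k in range(2)) for j in range(2)]]
print(K)   # [[0.5, 0.5]]  since K = Y X^{-1}
```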
        x· = A(t)x + B1(t)w + B2(t)u
    P:  z∞ = C1(t)x + D11(t)w + D12(t)u
        z2 = C2(t)x + D22(t)u
Such polytopic models are useful to represent plants with uncertain and/or
time-varying parameters (see “Polytopic Models” on page 2-14 and the design
example below). Seeking a single quadratic Lyapunov function that enforces
the design objectives for all plants in the polytope leads to the following
multi-model counterpart of the LMI conditions (4-2)–(4-6):
Minimize α γ2 + β Trace(Q) over Y, X, Q, and γ2 subject to
    | AkX + XAkT + B2kY + YTB2kT   B1k     XC1kT + YTD12kT |
    | B1kT                         –I      D11kT           |  <  0
    | C1kX + D12kY                 D11k    –γ2 I           |

    | Q                   C2kX + D22kY |
    | XC2kT + YTD22kT     X            |  >  0

    [ λij X + µij (AkX + B2kY) + µji (XAkT + YTB2kT) ]1≤i,j≤m < 0

    Trace(Q) < ν02

    γ2 < γ02
The Function msfsyn
• ||T∞||∞ < γ0
• ||T2 ||2 < ν0
• The closed-loop poles lie in D.
The syntax is
[gopt,h2opt,K,Pcl] = msfsyn(P,r,obj,region)
where
Several mixed or unmixed designs can be performed with msfsyn. The various
possibilities are summarized in the table below.
Design Example
This example is adapted from [1] and covered by the demo sateldem. The
system is a satellite consisting of two rigid bodies (main body and
instrumentation module) joined by a flexible link (the “boom”). The boom is
modeled as a spring with torque constant k and viscous damping f, and
finite-element analysis gives the following uncertainty ranges for k and f:
    0.09 ≤ k ≤ 0.4
    0.0038 ≤ f ≤ 0.04
The dynamical equations are
    J1 θ··1 + f(θ·1 – θ·2) + k(θ1 – θ2) = T + w
    J2 θ··2 + f(θ·2 – θ·1) + k(θ2 – θ1) = 0
where θ1 and θ2 are the yaw angles for the main body and the sensor module,
T is the control torque, and w is a torque disturbance on the main body.
[Figure: satellite main body (yaw angle θ1) and sensor module (yaw angle θ2)
connected by the flexible boom]
• Obtain a good trade-off between the RMS gain from w to θ2 and the H2 norm
  of the transfer function from w to (θ1, θ2, T) (LQG cost of control)
• Place the closed-loop poles in the region shown in Figure 4-3 to guarantee
some minimum decay rate and closed-loop damping
[Figure 4-3: pole placement region: half-plane x < –0.1 intersected with a
conic sector centered at the origin]
• Achieve these objectives for all possible values of the varying parameters k
and f. Since these parameters enter the plant state matrix in an affine
manner, we can model the parameter uncertainty by a polytopic system with
four vertices corresponding to the four combinations of extremal parameter
values (see “From Affine to Polytopic Models” on page 2-20).
To solve this design problem with the LMI Control Toolbox, first specify the
plant as a parameter-dependent system with affine dependence on k and f. A
state-space description is readily derived from the dynamical equations as:
    | 1  0  0   0  |    | θ1  |   |  0   0   1   0 | | θ1  |   | 0 |
    | 0  1  0   0  |  d | θ2  |   |  0   0   0   1 | | θ2  |   | 0 |
    | 0  0  J1  0  | dt | θ·1 | = | -k   k  -f   f | | θ·1 | + | 1 | (w + T)
    | 0  0  0   J2 |    | θ·2 |   |  k  -k   f  -f | | θ·2 |   | 0 |

                   | 1  0  0  0 | | θ1  |   | 0 |
    z∞ = θ2 , z2 = | 0  1  0  0 | | θ2  | + | 0 | T
                   | 0  0  0  0 | | θ·1 |   | 1 |
                                  | θ·2 |
% parameter-dependent plant
P = psys(pv,[ ltisys(a0,b,c,d,e0) , ...
ltisys(ak,0*b,0*c,0*d,0) , ...
ltisys(af,0*b,0*c,0*d,0) ])
Next, specify the LMI region for pole placement as the intersection of the
half-plane x < –0.1 and of the sector centered at the origin and with inner angle
3π/4. This is done interactively with the function lmireg:
region = lmireg
To assess the trade-off between the H∞ and H2 performances, first compute the
optimal quadratic H∞ performance subject to the pole placement constraint by
gopt = msfsyn(P,[1 1],[0 0 1 0],region)
This yields gopt ≈ 0. For a prescribed H∞ performance g > 0, the best H2
performance h2opt is computed by
[gopt,h2opt,K,Pcl] = msfsyn(P,[1 1],[g 0 0 1],region)
Here obj = [g 0 0 1] asks to optimize the H2 performance subject to
||T∞||∞ < g and the pole placement constraint. Repeating this operation for the
values g ∈ {0.01, 0.1, 0.2, 0.5} yields the Pareto-like trade-off curve shown in
Figure 4-4.
By inspection of this curve, the state-feedback gain K obtained for g = 0.1 yields
the best compromise between the H∞ and H2 objectives. For this choice of K,
Figure 4-5 superimposes the impulse responses from w to θ2 for the four
combinations of extremal values of k and f.
Finally, the closed-loop poles for these four extremal combinations are
displayed in Figure 4-6. Note that they are robustly placed in the prescribed
LMI region.
[Figure 4-4: trade-off curve: H2 performance versus H-infinity performance]
[Figure 4-5: impulse responses from w to θ2 for the robust design (Amplitude
versus Time)]

[Figure 4-6: closed-loop poles for the four extremal combinations of k and f]
References
[1] Biernacki, R.M., H. Hwang, and S.P. Battacharyya, “Robust Stability with
Structured Real Parameter Perturbations,” IEEE Trans. Aut. Contr., AC–32
(1987), pp. 495–506.
[2] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, 1994.
[3] Chilali, M., and P. Gahinet, “H∞ Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr. Also in Proc. Conf.
Dec. Contr., 1994, pp. 553–558.
[4] Khargonekar, P.P., and M.A. Rotea,“Mixed H2/H∞ Control: A Convex
Optimization Approach,” IEEE Trans. Aut. Contr., 39 (1991), pp. 824-837.
[5] Scherer, C., “H∞ Optimization without Assumptions on Finite or Infinite
Zeros,” SIAM J. Contr. Opt., 30 (1992), pp. 143–166.
5
Synthesis of H∞
Controllers
H∞ Control . . . . . . . . . . . . . . . . . . . . . 5-3
Riccati- and LMI-Based Approaches . . . . . . . . . . . 5-7
H∞ Synthesis . . . . . . . . . . . . . . . . . . . . 5-10
Validation of the Closed-Loop System . . . . . . . . . . 5-13
References . . . . . . . . . . . . . . . . . . . . . 5-22
H∞ Control
The H∞ norm of a stable transfer function G(s) is its largest input/output
RMS gain, i.e.,

    ||G||∞ = sup { ||y||L2 / ||u||L2 : u ∈ L2, u ≠ 0 }

where L2 is the space of signals with finite energy and y(t) is the output of
the system G for a given input u(t). This norm also corresponds to the peak
gain of the frequency response G(jω), that is,

    ||G||∞ = supω σmax(G(jω))
[Figure 5-1: standard H∞ interconnection: plant P(s) with controller K(s),
exogenous input w, controlled output z, control u, measurement y]
Hence the optimal H∞ control seeks to minimize ||F(P, K)||∞ over all stabilizing
LTI controllers K(s). Alternatively, we can specify some maximum value γ for
the closed-loop RMS gain and ask the following question:
Does there exist a stabilizing controller K(s) that ensures ||F(P, K)||∞ < γ ?
This is known as the suboptimal H∞ control problem, and γ is called the
prescribed H∞ performance.
A number of control problems can be recast in this standard formulation.
[Figure: tracking loop with controller K and plant G: reference r, tracking
error e = r – y, control u, output y]
d ( s ) = W l p ( s )d̃ ( s )
    S Wlp = F(P, K)   with   P(s) := | Wlp(s)   –G(s) |
                                     | Wlp(s)   –G(s) |
The closed-loop transfer function from (r1, r2) to (y1, y2) involves the
sensitivity function S(s), and the design objective reads

    || (10/(s + 10)) S(s) ||∞ < ε
[Figure: two-channel loop with controller K(s) and plant G(s): references
r1, r2 and outputs y1, y2]
    ||F(P, K)||∞ < ε   with   P(s) := | (10/(s + 10)) I2    –(10/(s + 10)) G(s) |
                                      | I2                  –G(s)               |
Example 5.3. H∞ optimization is also useful for the design of robust controllers
in the face of unstructured uncertainty. Consider the uncertain system
[Figure: uncertain system: P(s) in feedback with the unstructured uncertainty
∆(·), with external signals u, y and loop signals w, z]
x· = Ax + B 1 w + B 2 u
z = C 1 x + D 11 w + D 12 u
y = C 2 x + D 21 w + D 22 u
             | A      0       0       γ–1C1T   C2T  |
             | 0      –AT     –B1     0        0    |
    J – λE = | B1T    0       –I      γ–1D11T  D21T |  –  λ diag(I, I, 0, 0, 0)
             | 0      γ–1C1   γ–1D11  –I       0    |
             | 0      C2      D21     0        0    |
    | N12  0 |T | AR + RAT   RC1T   B1   | | N12  0 |
    |  0   I |  | C1R        –γI    D11  | |  0   I |  <  0           (5-2)
                | B1T        D11T   –γI  |

    | N21  0 |T | ATS + SA   SB1    C1T  | | N21  0 |
    |  0   I |  | B1TS       –γI    D11T | |  0   I |  <  0           (5-3)
                | C1         D11    –γI  |

    | R  I |
    | I  S |  ≥  0                                                    (5-4)
T T
where N12 and N21 denote bases of the null spaces of ( B 2 , D 12 ) and (C2, D21),
respectively.
This problem falls within the scope of the LMI solver mincx. Note that the LMI
constraints (5-2)–(5-3) amount to the inequality counterpart of the H∞ Riccati
equations, R–1 and S–1 being solutions of these inequalities. Again, explicit
formulas are available to derive H∞ controllers from any solution (R, S, γ) of
(5-2)–(5-4) [3].
H∞ Synthesis
The LMI Control Toolbox supports continuous- and discrete-time H∞ synthesis
using either Riccati- or LMI-based approaches. Transparency and numerical
reliability were primary concerns in the development of the related tools.
Original features of the Riccati-based functions include:
• The use of pencil-based Riccati solvers for highly accurate computation of
Riccati solutions (see care, ricpen, and their discrete-time counterparts)
• Direct computation of the optimal H∞ performance when D12 and D21 are
rank-deficient (without using regularization) [5]
• Automatic regularization of singular problems for the controller
computation.
In addition, reduced-order controllers are computed whenever the matrices I –
γ–2X∞Y∞ or I – RS are nearly rank-deficient. The functions for standard H∞
synthesis are listed in Table 5-1.
To illustrate the use of hinfric and hinflmi, consider the simple first-order
plant P(s) with state-space equations
x· = w + 2u
z = x
y = –x+w
Gamma Diagnosis
10.0000 : feasible
1.0000 : infeasible (Y is not positive semi-definite)
5.5000 : feasible
3.2500 : feasible
2.1250 : feasible
1.5625 : feasible
1.2812 : feasible
1.1406 : feasible
1.0703 : feasible
1.0352 : feasible
1.0176 : feasible
1.0088 : feasible
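The γ-iteration shown above is a plain bisection: keep a feasible upper bound and an infeasible lower bound, and repeatedly test the midpoint. The sketch below reproduces the sequence of the trace with a mock feasibility oracle standing in for the Riccati-based suboptimality test (the threshold 1.0 is a stand-in; the true test is what hinfric performs internally):

```python
# Bisection on the H-infinity performance gamma, as in the hinfric trace.
def feasible(g):
    return g >= 1.0        # mock oracle: stand-in for the Riccati test

lo, hi = 1.0, 10.0         # infeasible / feasible, as in the printed trace
tested = []
while hi - lo > 1e-2 * hi: # stop at 1% relative accuracy
    mid = (lo + hi) / 2
    tested.append(mid)
    if feasible(mid):
        hi = mid           # midpoint feasible: tighten the upper bound
    else:
        lo = mid           # midpoint infeasible: raise the lower bound

print([round(g, 4) for g in tested[:4]])   # [5.5, 3.25, 2.125, 1.5625]
```

The midpoints 5.5, 3.25, 2.125, 1.5625, 1.2812, ..., 1.0088 match the diagnosis table above, ending when the bracket is within the requested relative accuracy.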
Minimization of gamma:
1 2.196542
5 1.052469
6 1.008499
7 1.008499
8 1.008499
*** new lower bound: 0.656673
9 1.002517
In this case only the value γ = 10 is tested. In the LMI approach, the same
problem is solved by the command
[g,K] = hinflmi(P,[1 1],10)
You can also plot the time- and frequency-domain responses of the closed-loop
system clsys with splot.
Finally, hinfpar extracts the state-space matrices A, B1, . . . from the plant
SYSTEM matrix P:
[a,b1,b2,c1,c2,d11,d12,d21,d22] = hinfpar(P,r)
Multi-Objective H∞ Synthesis
In many real-world applications, standard H∞ synthesis cannot adequately
capture all design specifications. For instance, noise attenuation or regulation
against random disturbances are more naturally expressed in LQG terms.
Similarly, pure H∞ synthesis only enforces closed-loop stability and does not
allow for direct placement of the closed-loop poles in more specific regions of the
left-half plane. Since the pole location is related to the time response and
transient behavior of the feedback system, it is often desirable to impose
additional damping and clustering constraints on the closed-loop dynamics.
This makes multi-objective synthesis highly desirable in practice, and LMI
theory offers powerful tools to attack such problems.
Mixed H2/H∞ synthesis with regional pole placement is one example of
multi-objective design addressed by the LMI Control Toolbox. The control
problem is sketched in Figure 5-5. The output channel z∞ is associated with the
H∞ performance while the channel z2 is associated with the LQG aspects (H2
performance).
Denoting by T∞(s) and T2(s) the closed-loop transfer functions from w to z∞ and
z2, respectively, we consider the following multi-objective synthesis problem:
[Figure 5-5: multi-objective control structure: plant P(s) with controller
K(s), input w, outputs z∞ and z2]
with α ≥ 0 and β ≥ 0
• Places the closed-loop poles in some prescribed LMI region D
Recall that LMI regions are general convex subregions of the open left-half
plane (see “Pole Placement in LMI Regions” on page 4-5 for details).
LMI Formulation
Let
    x· = Ax + B1w + B2u
    z∞ = C∞x + D∞1w + D∞2u
    z2 = C2x + D21w + D22u
    y = Cyx + Dy1w

be a state-space realization of the plant P(s), and let

    ζ· = AKζ + BKy
    u = CKζ + DKy

be a realization of the controller K(s). The closed-loop system then reads

    x·cl = Acl xcl + Bcl w
    z∞ = Ccl1 xcl + Dcl1 w
    z2 = Ccl2 xcl + Dcl2 w
• H∞ performance: the closed-loop RMS gain from w to z∞ does not exceed γ if
  and only if there exists a symmetric matrix χ∞ such that

    | Acl χ∞ + χ∞ AclT   Bcl    χ∞ Ccl1T |
    | BclT               –I     Dcl1T    |  <  0
    | Ccl1 χ∞            Dcl1   –γ2 I    |

    χ∞ > 0
• H2 performance: the H2 norm of the closed-loop transfer function from w to z2 does not exceed ν if and only if Dcl2 = 0 and there exist two symmetric matrices χ2 and Q such that

   [ Aclχ2 + χ2AclT    Bcl ]
   [ BclT              –I  ]  <  0

   [ Q          Ccl2χ2 ]
   [ χ2Ccl2T    χ2     ]  >  0

   Trace(Q) < ν²
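This trace condition mirrors the classical Gramian formula ||T2||2² = Trace(Ccl2 P Ccl2T), where P solves the Lyapunov equation Acl P + P AclT + Bcl BclT = 0; the LMI in χ2 and Q simply bounds that trace. A quick numerical check of the formula (Python sketch, outside the toolbox):

```python
import numpy as np

def ctrl_gramian(A, B):
    """Solve the Lyapunov equation A P + P A^T + B B^T = 0
    by vectorizing it into a linear system (row-major vec)."""
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(A, I) + np.kron(I, A)
    p = np.linalg.solve(L, -(B @ B.T).flatten())
    return p.reshape(n, n)

# Example: G(s) = 1/(s + 1).  ||G||_2^2 = trace(C P C^T) = 1/2.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
P = ctrl_gramian(A, B)
h2sq = float(np.trace(C @ P @ C.T))
print(h2sq)   # 0.5
```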
• Pole placement: the closed-loop poles lie in the LMI region

   D = { z ∈ C : L + Mz + MTz̄ < 0 }

with L = LT = [λij]1≤i,j≤m and M = [µij]1≤i,j≤m if and only if there exists a symmetric matrix χpol satisfying

   [ λijχpol + µijAclχpol + µjiχpolAclT ]1≤i,j≤m < 0

   χpol > 0
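Membership in an LMI region is a matrix-definiteness test: z lies in D exactly when L + Mz + MTz̄ is negative definite. For instance, the disk of radius r centered at (−q, 0) corresponds to L = [−r q; q −r] and M = [0 1; 0 0]. A small Python checker of this membership test (our own helper, not part of the toolbox):

```python
import numpy as np

def in_lmi_region(z, L, M):
    """True if  L + M z + M^T conj(z)  is negative definite.
    The matrix is Hermitian, so we test its largest eigenvalue."""
    F = L + M * z + M.T * np.conj(z)
    return bool(np.max(np.linalg.eigvalsh(F)) < 0)

# Disk of radius r centered at (-q, 0):
q, r = 1000.0, 1000.0
L = np.array([[-r, q], [q, -r]])
M = np.array([[0.0, 1.0], [0.0, 0.0]])
print(in_lmi_region(-500 + 200j, L, M))   # True: inside the disk
print(in_lmi_region(-2500 + 0j, L, M))    # False: outside the disk
```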
These characterizations involve three distinct Lyapunov matrices χ∞, χ2, and χpol. For tractability, we seek a single Lyapunov matrix χ := χ∞ = χ2 = χpol that enforces all three sets of constraints. Factorizing χ as

   χ = χ1χ2⁻¹,    χ1 := [ R   I ],    χ2 := [ I   S  ]
                        [ MT  0 ]           [ 0   NT ]

and introducing the transformed controller variables (still written AK, BK, CK by a slight abuse of notation)

   BK := NBK + SB2DK
   CK := CKMT + DKCyR
   AK := NAKMT + NBKCyR + SB2CKMT + S(A + B2DKCy)R,

the inequality constraints on χ are readily turned into LMI constraints in the variables R, S, Q, AK, BK, CK, and DK [8, 1]. This leads to the following suboptimal LMI formulation of our multi-objective synthesis problem:
Minimize α γ² + β Trace(Q) over R, S, Q, AK, BK, CK, DK, and γ² subject to

   [ AR + RAT + B2CK + CKTB2T   AKT + A + B2DKCy           B1 + B2DKDy1       ★    ]
   [ ★                          ATS + SA + BKCy + CyTBKT   SB1 + BKDy1        ★    ]  <  0
   [ ★                          ★                          –I                 ★    ]
   [ C∞R + D∞2CK                C∞ + D∞2DKCy               D∞1 + D∞2DKDy1     –γ²I ]

   [ Q    C2R + D22CK    C2 + D22DKCy ]
   [ ★    R              I            ]  >  0
   [ ★    I              S            ]

   [ λij [ R  I ]  +  µij [ AR + B2CK   A + B2DKCy ]  +  µji [ AR + B2CK   A + B2DKCy ]T ]             <  0
   [     [ I  S ]         [ AK          SA + BKCy  ]         [ AK          SA + BKCy  ]  ] 1 ≤ i,j ≤ m

   Trace(Q) < ν0²

   γ² < γ0²

   D21 + D22DKDy1 = 0

(blocks marked ★ are deduced by symmetry)
Given optimal solutions γ*, Q* of this LMI problem, the closed-loop H∞ and H2 performances are bounded by

   ||T∞||∞ ≤ γ*,    ||T2||2 ≤ √Trace(Q*)
This synthesis is performed by the function hinfmix, whose calling sequence is of the form

   [gopt,h2opt,K,R,S] = hinfmix(P,r,obj,region)

where
• P is the SYSTEM matrix of the LTI plant P(s). Note that z2 or z∞ can be empty,
or even both when performing pure pole placement
• r is a three-entry vector listing the lengths of z2, y, and u
• obj = [γ0, ν0, α, β] is a four-entry vector specifying the H2/H∞ constraints and
criterion
• region specifies the LMI region for pole placement, the default being the
open left-half plane. Use lmireg to interactively generate the matrix region.
The outputs gopt and h2opt are the guaranteed H∞ and H2 performances, K is
the controller SYSTEM matrix, and R,S are optimal values of the variables R, S
(see “LMI Formulation” on page 5-16).
You can perform the following mixed and unmixed designs by setting obj
appropriately:
obj Corresponding Design
[0 0 0 0] pole placement only
[0 0 1 0] H∞-optimal design
[0 0 0 1] H2-optimal design
[g 0 0 1] minimize ||T2||2 subject to ||T∞||∞ < g
[0 h 1 0] minimize ||T∞||∞ subject to ||T2||2 < h
[0 0 a b] minimize a||T∞||∞² + b||T2||2²
[g h a b] most general problem
References
[1] Chilali, M., and P. Gahinet, “H∞ Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr., 1995.
[2] Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “State-Space
Solutions to Standard H2 and H∞ Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
[3] Gahinet, P., “Explicit Controller Formulas for LMI-based H∞ Synthesis,”
submitted to Automatica. Also in Proc. Amer. Contr. Conf., 1994, pp. 2396–
2400.
[4] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
[5] Gahinet, P., and A.J. Laub, “Reliable Computation of γopt in Singular H∞
Control,” to appear in SIAM J. Contr. Opt., 1995. Also in Proc. Conf. Dec.
Contr., Lake Buena Vista, Fl., 1994, pp. 1527–1532.
[6] Iwasaki, T., and R.E. Skelton, “All Controllers for the General H∞ Control
Problem: LMI Existence Conditions and State-Space Formulas,” Automatica,
30 (1994), pp. 1307–1317.
[7] Scherer, C., “H∞ Optimization without Assumptions on Finite or Infinite
Zeros,” SIAM J. Contr. Opt., 30 (1992), pp. 143–166.
[8] Scherer, C., “Mixed H2/H∞ Control,” to appear in Trends in Control: A
European Perspective, volume of the special contributions to the ECC 1995.
6
Loop Shaping
The Loop-Shaping Methodology . . . . . . . . . . . 6-2
References . . . . . . . . . . . . . . . . . . . . . 6-24
2 Specify the desired loop shapes graphically with the GUI magshape.
3 Specify the control structure, i.e., how the feedback loop is organized and
which input/output transfer functions are relevant to the loop-shaping
objectives.
The Loop-Shaping Methodology
   ||W(s)S(s)||∞ < 1      (6-3)

Design Example
Loop shaping design with the LMI Control Toolbox involves four steps:
1 Express the design specifications in terms of loop shapes and shaping filters.
3 Specify the control loop structure with the functions sconnect and smult, or
alternatively with Simulink.
Our goal is to control the antenna angle θa through the torque T. The control structure is the
tracking loop of Figure 6-3 where the dynamical uncertainty ∆(s) accounts for
neglected high-frequency dynamics and flexibilities.
[Figure: the antenna system — motor (inertia Jm, angle θm) driving the antenna (inertia Ja, angle θa).]
[Figure 6-4: magnitude profile (dB) versus frequency (rad/sec).]
   ||W1(s)S(s)||∞ < 1      (6-4)

where S = 1/(1 + G0K) and W1(s) is a low-pass filter whose gain lies above the
shaded region in Figure 6-5. From the small gain theorem [2], robust stability
in the face of the uncertainty ∆(s) is equivalent to
   ||W2(s)T(s)||∞ < 1      (6-5)

where T = G0K/(1 + G0K) and W2 is a high-pass filter whose gain follows the
profile of Figure 6-4.
For tractability in the H∞ framework, (6-4)–(6-5) are replaced by the single RMS gain constraint

   || [ W1S ; W2T ] ||∞  <  1      (6-6)
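Constraints (6-4) and (6-5) pull in opposite directions, since S and T satisfy S + T = 1 at every frequency; the stacked criterion (6-6) resolves the trade-off through a single RMS-gain bound. A quick Python check of the identity (the loop gain below is illustrative only, not the plant of this example):

```python
import numpy as np

# At every frequency, S = 1/(1 + L) and T = L/(1 + L) satisfy S + T = 1,
# so |S| and |T| cannot both be made small at the same frequency.
def s_and_t(loop_gain):
    S = 1.0 / (1.0 + loop_gain)
    T = loop_gain / (1.0 + loop_gain)
    return S, T

for w in np.logspace(-1, 3, 5):
    Lg = 3e4 / ((1j * w) * (1j * w + 10.0))   # illustrative loop gain
    S, T = s_and_t(Lg)
    assert abs(S + T - 1.0) < 1e-12
print("S + T = 1 at all tested frequencies")
```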
[Figure 6-5: desired shape for the loop gain (dB versus rad/s) — integral action at low frequency, the disturbance rejection level, and the target bandwidth.]
The transfer function [ W1S ; W2T ] maps r to the signals (ẽ, ỹ) in the control loop of Figure 6-6. Hence our original design problem is equivalent to the following H∞ synthesis problem:

Find a stabilizing controller K(s) that makes the closed-loop RMS gain from r to (ẽ, ỹ) less than one.
[Figure 6-6: loop-shaping control structure — the loop r → e → K → u → G0 → y with weighted outputs ẽ = W1e and ỹ = W2y.]
1 Compute two shaping filters W1(s) and W2(s) whose magnitude responses
match the profiles of Figures 6-5 and 6-4.
2 Specify the control structure of Figure 6-6 and derive the corresponding
plant.
1 Type w1,w2 after the prompt Filter names. This tells magshape to write the
filter SYSTEM matrices in MATLAB variables called w1 and w2.
3 Click on the add point button: you can now specify the desired magnitude
profile with the mouse. This profile is sketched with a few characteristic
points marking the asymptotes and the slope changes. Clicking on the
delete point or move point buttons allows you to delete or move particular
points.
4 Once the desired profile is sketched, specify the filter order and click the fit
data button to interpolate the data points by a rational filter. The gain of the
resulting filter appears in solid on the plot, and its realization is written in
the MATLAB variable of the same name. To make adjustments, you can
move some of the points or change the filter order, and then redo the fitting.
Figure 6-7 is a snapshot of the magshape window after specifying the profiles of
W1 and W2 and fitting the data points (marked by o in the plot). The solid lines
show the rational fits obtained with an order 3 for W1 and an order 1 for W2.
Specification of the Shaping Filters
The transfer functions of these filters (as given by ltitf(w1) and ltitf(w2))
are approximately
   W1(s) = (0.64s³ + 10.45s² + 149.28s + 25.87) / (s³ + 1.20s² + 0.83s + 8.31×10⁻⁴)

   W2(s) = 100 (3.41s + 217.75) / (s + 3.54×10⁴)
Note that horizontal asymptotes have been added in both filters. Such
asymptotes prevent humps in S and T near the crossover frequency, thus
reducing the overshoot in the step response.
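A quick way to sanity-check fitted filters is to evaluate their magnitude at the band edges; for the W1 above, the high-frequency horizontal asymptote is the ratio of leading coefficients, 0.64 (about −3.9 dB), while the near-integrator pole gives a very large DC gain. Evaluating the printed coefficients (Python sketch, not a magshape feature):

```python
import numpy as np

def tfmag(num, den, w):
    """|H(jw)| for a transfer function given by polynomial coefficients."""
    s = 1j * w
    return abs(np.polyval(num, s) / np.polyval(den, s))

W1_num = [0.64, 10.45, 149.28, 25.87]
W1_den = [1.0, 1.20, 0.83, 8.31e-4]
print(tfmag(W1_num, W1_den, 1e6))          # ~0.64: high-frequency asymptote
print(tfmag(W1_num, W1_den, 1e-4) > 1e3)   # True: very large DC gain
```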
The function magshape always returns stable and proper filters. If you do not
use magshape to generate your filters, make sure that they are stable with poles
sufficiently far from the imaginary axis. To avoid difficulties in the subsequent
H∞ optimization, their poles should satisfy
   Re(s) ≤ −10⁻³
For instance, the system obtained by filtering the first input and the second output of P(s) through the nonproper filter 0.01s + 1 is specified by

Pd = sderiv(P,[-1 2],[0.01 1])

In the calling list, [-1 2] lists the input and output channels to be filtered by ns + d (here –1 for “first input” and 2 for “second output”) and the vector [0.01 1] lists the values of n and d. An error is generated if the resulting system Pd is not proper.
To specify more complex nonproper filters,
3 Add the derivative action of the nonproper filters by applying sderiv to the
augmented plant.
[Figure 6-8: the augmented plant — shaping filters W1 and W2 appended to the outputs e and y of P(s), with controller K(s) closing the loop from y to u.]
The function sconnect computes the SYSTEM matrices of P(s) or Paug(s) given
G0, W1, W2, and the qualitative description of the control structure (Figure 6-6).
With this function, general control structures are described by listing the input
and output signals and by specifying the input of each dynamical system. In
our problem, the plant P(s) corresponding to the control loop
[Figure: unity-feedback loop — e = r − y, u = K(s)e, y = G0(s)u]
Specification of the Control Structure
is specified by
g0 = ltisys('tf',3e4,conv([1 0.02],[1 0.99 3.03e4]))
P = sconnect('r(1)','e=r-G0 ; G0','K:e','G0:K',g0)
• The first argument 'r(1)' lists and dimensions the input signals. Here the
only input is the scalar reference signal r.
• The second argument 'e=r-G0 ; G0' lists the two output signals separated
by a semicolon. Outputs are defined as combinations of the input signals and
the outputs of the dynamical systems. For instance, the first output e is
specified as r minus the output y of G0. For systems with several outputs, the
syntax G0([1,3:4]) would select the first, third, and fourth outputs.
• The third argument 'K:e' names the controller and specifies its inputs. A
two-degrees-of-freedom controller with inputs e and r would be specified by
'K: e;r'.
• The remaining arguments come in pairs and specify, for each known LTI
system in the loop, its input list and its SYSTEM matrix. Here 'G0:K' means
that the input of G0 is the output of K, and g0 is the SYSTEM matrix of G0(s).
You can give arbitrary names to G0 and K in the string definitions, provided
that you use the same names throughout.
Similarly, the augmented plant Paug(s) corresponding to the loop of Figure 6-6
would be specified by
Paug = sconnect('r(1)','W1;W2','K:e = r-G0','G0:K',g0,...
'W1:e',w1,'W2:G0',w2)
where w1 and w2 are the shaping filter SYSTEM matrices returned by magshape.
However, the same result is obtained more directly by appending the shaping
filters to P(s) according to Figure 6-8:
Paug = smult(P,sdiag(w1,w2,1))
Finally, note that sconnect is also useful to compute the SYSTEM matrix of
ordinary system interconnections. In such cases, the third argument should be
set to [] to signal the absence of a controller.
Gamma-Iteration:
Gamma Diagnosis
1.0000 : feasible
The Bode diagram of GK and the nominal step response are then plotted by
splot(gk,'bo')
splot(ssub(Pcl,1,2),'st')
Controller Synthesis and Validation
[Figure: nominal closed-loop step response — amplitude versus time (secs).]
Practical Considerations
When the model G0(s) of the system has poles on the imaginary axis, loop-
shaping design often leads to “singular” H∞ problems with jω-axis zeros in the
(1,2) or (2,1) blocks of Paug. While immaterial in the LMI approach, jω-axis
zeros bar the direct use of standard Riccati-based synthesis algorithms [1]. The
function hinfric automatically regularizes such problems by perturbing the A
matrix of Paug to A + εI for some positive ε. The message
** jw-axis zeros of P12(s) regularized
or
** jw-axis zeros of P21(s) regularized
is then displayed. The same regularization can be performed manually by
% pre-regularization
[a,b,c,d] = ltiss(P)
a = a + reg * eye(size(a))
Preg = ltisys(a,b,c,d)
Note that the regularization level reg should always be chosen positive.
Finally, a careful validation of the resulting controller on the true plant P is
necessary since the H∞ synthesis is performed on a perturbed system.
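The effect of this regularization is easy to quantify: replacing A by A + εI shifts every eigenvalue by exactly ε, which moves jω-axis dynamics off the axis. A one-line Python check:

```python
import numpy as np

# Replacing A by A + eps*I shifts each eigenvalue by exactly eps.
A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # eigenvalues +/- 2j, on the jw-axis
eps = 1e-3
shifted = np.linalg.eigvals(A + eps * np.eye(2))
print(np.allclose(shifted.real, eps))     # True: both real parts moved to eps
```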
Loop Shaping with Regional Pole Placement
[Figure: singular value plots (dB) of G(s) and K(s) versus frequency (rad/sec).]
[Figure: closed-loop time response — amplitude versus time (secs).]
For “Example 6.1” on page 6-5, we chose as pole placement region the disk
centered at (–1000, 0) and with radius 1000. By inspection of the shaping filters
used in the previous design, this region clashes with the pseudo-integrator in
W1(s) and the fast pole at s = –3.54 × 104 in W2(s). To alleviate this difficulty,
1 Remove the pseudo-integrator from W1(s) and move it to the main loop as
shown in Figure 6-12. This defines an equivalent loop-shaping problem
where the controller is reparametrized as K(s) = K̃(s)/s and the integrator is now assignable via output feedback.
2 Specify W2(s) as a nonproper filter with derivative action. This is done with
sderiv.
[Figure 6-12: modified control structure — the integrator 1/s precedes the redesigned controller K̃ in the loop with G0, with shaping filters W1 (output ẽ) and W2 (output ỹ).]
A satisfactory design was obtained for the following values of W1 and W2:

   W1(s) = (s² + 16s + 200) / (s² + 1.2s + 0.8),    W2(s) = 0.9 + s/200
A summary of the commands needed to specify the control structure and solve
the constrained H∞ problem is listed below. Run the demo radardem to perform
this design interactively.
% control structure, sint = integrator
sint = ltisys('tf',1,[1 0])
[P,r] = sconnect('r','Int;G','Kt:Int','G:Kt',G0,'Int:e=r-G',sint)
% add w1
w1=ltisys('tf',[1 16 200],[1 1.2 0.8])
Paug = smult(P,sdiag(w1,1,1))
% add w2
Paug = sderiv(Paug,2,[1/200 0.9])
The resulting open-loop response and the time response to a step r and an
impulse disturbance at the plant input appear in Figures 6-13 and 6-14. Note
that the new controller no longer inverts the plant and that input disturbances
are now rejected with satisfactory transient behavior.
[Figure 6-13: open-loop Bode response — gain (dB) and phase (deg) versus frequency (rad/sec).]
[Figure 6-14: response to a step r and response to an impulse disturbance on u — amplitude versus time (secs).]
References
[1] Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “State-Space
Solutions to Standard H2 and H∞ Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
[2] Zames, G., “On the Input-Output Stability of Time-Varying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228–238 and 465–476.
7
Robust Gain-Scheduled
Controllers
Gain-Scheduled Control . . . . . . . . . . . . . . . 7-3
References . . . . . . . . . . . . . . . . . . . . . .
Gain-Scheduled Control
The synthesis technique discussed below is applicable to affine
parameter-dependent plants with equations
              ẋ = A(p)x + B1(p)w + B2u
   P(·, p):   z = C1(p)x + D11(p)w + D12u
              y = C2x + D21w + D22u
where
   p(t) = (p1(t), . . ., pn(t)),    p̲i ≤ pi(t) ≤ p̄i
is a time-varying vector of physical parameters (velocity, angle of attack,
stiffness,. . .) and A(.), B1(.), C1(.), D11(.) are affine functions of p(t). This is a
simple model of systems whose dynamical equations depend on physical
coefficients that vary during operation. When these coefficients undergo large
variations, it is often impossible to achieve high performance over the entire
operating range with a single robust LTI controller. Provided that the
parameter values are measured in real time, it is then desirable to use
controllers that incorporate such measurements to adjust to the current
operating conditions [4]. Such controllers are said to be scheduled by the
parameter measurements. This control strategy typically achieves higher
performance in the face of large variations in operating conditions. Note that
p(t) may include part of the state vector x itself provided that the corresponding
states are accessible to measurement.
If the parameter vector p(t) takes values in a box of Rn with corners {Πi}, i = 1, . . ., N (N = 2n), the plant SYSTEM matrix

             [ A(p)    B1(p)    B2  ]
   S(p) :=   [ C1(p)   D11(p)   D12 ]
             [ C2      D21      D22 ]
ranges in a matrix polytope with vertices S(Πi). Specifically, given any convex
decomposition
   p(t) = α1Π1 + . . . + αNΠN,    αi ≥ 0,    α1 + . . . + αN = 1
of p over the corners of the parameter box, the SYSTEM matrix S(p) is given by
S(p) = α1S(Π1) + . . . + αN S(ΠN)
This suggests seeking parameter-dependent controllers with equations
   K(·, p):   ζ̇ = AK(p)ζ + BK(p)y
              u = CK(p)ζ + DK(p)y
Given the convex decomposition p(t) = α1Π1 + . . . + αNΠN of the current parameter value p(t), the values of AK(p), BK(p), . . . are derived from the values AK(Πi), BK(Πi), . . . at the corners of the parameter box by

   [ AK(p)  BK(p) ]      N       [ AK(Πi)  BK(Πi) ]
   [ CK(p)  DK(p) ]  =   Σ   αi  [ CK(Πi)  DK(Πi) ]
                        i=1
In other words, the controller state-space matrices at the operating point p(t)
are obtained by convex interpolation of the LTI vertex controllers
   Ki := [ AK(Πi)  BK(Πi) ]
         [ CK(Πi)  DK(Πi) ]
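The vertex coordinates αi can be computed multilinearly from the normalized position of p inside the box, which is essentially what polydec does. A Python sketch of that computation (our own helper with an assumed corner ordering, not the toolbox routine):

```python
import numpy as np
from itertools import product

def box_coords(p, lo, hi):
    """Convex coordinates of p over the 2^n corners of the box [lo, hi].
    Each corner picks hi[j] or lo[j] in every coordinate j; its weight is
    the product of the matching normalized coordinates of p."""
    t = (np.asarray(p, dtype=float) - lo) / (np.asarray(hi) - lo)
    corners, alphas = [], []
    for bits in product([0, 1], repeat=len(p)):
        corners.append([hi[j] if b else lo[j] for j, b in enumerate(bits)])
        alphas.append(np.prod([t[j] if b else 1 - t[j]
                               for j, b in enumerate(bits)]))
    return np.array(alphas), np.array(corners, dtype=float)

lo, hi = np.array([0.5, 0.0]), np.array([4.0, 106.0])   # a 2-parameter box
a, P = box_coords([2.0, 40.0], lo, hi)
print(round(a.sum(), 12), np.allclose(a @ P, [2.0, 40.0]))   # 1.0 True
```

The weights sum to one and reproduce p exactly, and they vary continuously with p, which is the property emphasized for polydec below.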
For this class of controllers, consider the following H∞-like synthesis problem
relative to the interconnection of Figure 7-1:
[Figure 7-1: gain-scheduled control loop — plant P(·, p) with input w and output z, controller K(·, p) from y to u, both adjusted by the measured parameter p(t).]
A gain-scheduled controller guaranteeing an RMS gain at most γ from w to z exists whenever two symmetric matrices R and S satisfy the following system of LMIs [1, 3]:

   [ N12  0 ]T  [ AiR + RAiT    RC1iT    B1i  ]  [ N12  0 ]
   [  0   I ]   [ C1iR          –γI      D11i ]  [  0   I ]   <  0,    i = 1, . . ., N      (7-1)
                [ B1iT          D11iT    –γI  ]
   [ N21  0 ]T  [ AiTS + SAi    SB1i    C1iT  ]  [ N21  0 ]
   [  0   I ]   [ B1iTS         –γI     D11iT ]  [  0   I ]   <  0,    i = 1, . . ., N      (7-2)
                [ C1i           D11i    –γI   ]
   [ R  I ]
   [ I  S ]  ≥  0      (7-3)
where

   [ Ai   B1i  ]      [ A(Πi)   B1(Πi)  ]
   [ C1i  D11i ]  :=  [ C1(Πi)  D11(Πi) ]

and N12 and N21 are bases of the null spaces of (B2T, D12T) and (C2, D21), respectively.
Remark: Requiring the plant matrices B2, C2, D12, D21 to be independent of
p(t) may be restrictive in practice. When B2 and D12 depend on p(t), a simple
trick to enforce these requirements consists of filtering u through a low-pass
filter with sufficiently large bandwidth. Appending this filter to the plant
rejects the parameter dependence of B2 and D12 into the A matrix of the
augmented plant. Similarly, parameter dependences in C2 and D21 can be
eliminated by filtering the output (see [1] for details).
Synthesis of Gain-Scheduled H∞ Controllers
The function hinfgs performs the gain-scheduled controller synthesis. Its calling sequence is of the form

   [gopt,pdK] = hinfgs(pdP,r)
Here pdP is an affine or polytopic model of the plant (see psys) and the two-
entry vector r specifies the dimensions of the D22 matrix (set r=[p2 m2] if
y ∈ Rp2 and u ∈ Rm2).
On output, gopt is the best achievable RMS gain and pdK is the polytopic
system consisting of the vertex controllers.
   Ki = [ AK(Πi)  BK(Πi) ]
        [ CK(Πi)  DK(Πi) ]

Given any value p of the parameter vector p(t), the corresponding controller state-space data

   K(p) = [ AK(p)  BK(p) ]
          [ CK(p)  DK(p) ]

is computed in two steps. The function polydec computes the convex decomposition p = α1Π1 + . . . + αNΠN and returns c = (α1, . . ., αN). The controller state-space data K(p) = α1K1 + . . . + αNKN at time t is then computed by psinfo. Note that polydec ensures the continuity of the polytopic coordinates αi as a function of p, thereby guaranteeing the continuity of the controller state-space matrices along parameter trajectories.
Simulation of Gain-Scheduled Control Systems
Design Example
This example is drawn from [2] and deals with the design of a gain-scheduled
autopilot for the pitch axis of a missile. Type misldem to run the corresponding
demo.
The missile dynamics are highly dependent on the angle of attack α, the speed
V, and the altitude H. These three parameters undergo large variations during
operation. A simple model of the linearized dynamics of the missile is the
parameter-dependent model
       [ α̇ ]   [ –Zα  1 ] [ α ]   [ 0 ]
       [ q̇ ] = [ –Mα  0 ] [ q ] + [ 1 ] δm
   G:                                              (7-4)
       [ azv ]   [ –1  0 ] [ α ]
       [  q  ] = [  0  1 ] [ q ]
where azv is the normalized vertical acceleration, q the pitch rate, δm the fin
deflection, and Zα, Mα are aerodynamical coefficients depending on α, V, and
H. These two coefficients are “measured” in real time.
As V, H, and α vary in
V ∈ [0.5, 4] Mach, H ∈ [0, 18000] m, α ∈ [0, 40] degrees
during operation, the coefficients Zα and Mα range in
Zα ∈ [0.5, 4], Mα ∈ [0, 106].
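Because the state matrix in (7-4) is affine in (Zα, Mα), its value anywhere in this parameter box is the convex combination of its values at the four corners, which is what justifies the polytopic vertex model used below. A quick Python check at the center of the box:

```python
import numpy as np

def A_of(Za, Ma):
    # Missile pitch-axis state matrix from (7-4)
    return np.array([[-Za, 1.0], [-Ma, 0.0]])

# The four corners of the (Z_alpha, M_alpha) box:
corners = [(0.5, 0.0), (0.5, 106.0), (4.0, 0.0), (4.0, 106.0)]
# Convex coordinates of the midpoint of the box:
alphas = [0.25, 0.25, 0.25, 0.25]
mix = sum(a * A_of(*c) for a, c in zip(alphas, corners))
print(np.allclose(mix, A_of(2.25, 53.0)))   # True: affine dependence
```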
Our goal is to control the vertical acceleration azv over this operating range.
The stringent performance specifications (settling time < 0.5 s) and the
bandwidth limitation imposed by unmodeled high frequency dynamics make
gain scheduling desirable in this context.
The control structure is sketched in Figure 7-2. To enforce the performance and
robustness requirements, we can use the loop-shaping criterion
   || [ W1S ; W2KS ] ||∞  <  1      (7-5)
Note that S = (I + GK)–1 is now a time-varying system and that the H∞ norm
must be understood in terms of input/output RMS gain. Adequate shaping
filters are derived with magshape as
   W1(s) = 2.01 / (s + 0.201)

   W2(s) = (9.678s³ + 0.029s²) / (s³ + 1.206×10⁴s² + 1.136×10⁷s + 1.066×10¹⁰)
[Figure 7-2: control structure — the autopilot K is driven by the tracking error e = r − azv and commands the missile G, whose outputs azv and q are measured.]
% affine model:
For loop-shaping purposes, we must form the augmented plant associated with
the criterion (7-5). This is done with sconnect and smult as follows:
[pdP,r] = sconnect('r','e=r-G(1);K','K:e;G(2)','G:K',pdG);
Paug = smult(pdP,sdiag(w1,w2,eye(2)))
Note that pdP and Paug are polytopic models at this stage.
We are now ready to perform the gain-scheduled controller synthesis with
hinfgs:
[gopt,pdK] = hinfgs(Paug,r)
gopt =
0.205
Our specifications are achievable since the best performance γopt = 0.205 is
smaller than 1, and the polytopic description of a controller with such
performance is returned in pdK.
To validate the design, form the closed-loop system with
pCL = slft(pdP,pdK)
You can now simulate the step response of the gain-scheduled system along
particular parameter trajectories. For the sake of illustration, consider the
spiral trajectory
Zα(t) = 2.25 + 1.70 e–4t cos(100 t)
Mα(t) = 50 + 49 e–4t sin(100 t)
shown in Figure 7-3. After defining this trajectory in a function spiralt.m:
function p = spiralt(t)
plot(t,1-y(:,1))
The function pdsimul returns the output trajectories y(t) = (e(t), u(t))T and the
response azv(t) is deduced by
azv(t) = 1 – e(t)
The second command plots azv(t) versus time as shown in Figure 7-4.
[Figure 7-3: parameter trajectory (duration: 1 s) — spiral in the (Z_alpha, M_alpha) plane.]
[Figure 7-4: step response azv(t) versus time along the spiral trajectory.]
References
[1] Apkarian, P., and P. Gahinet,“A Convex Characterization of
Gain-Scheduled H∞ Controllers,” to appear in IEEE Trans. Aut. Contr., 1995.
[2] Apkarian, P., P. Gahinet, and G. Becker, “Self-Scheduled H∞ Control of
Linear Parameter-Varying Systems,” to appear in Automatica, 1995.
[3] Becker, G., Packard, P., “Robust Performance of Linear Parametrically
Varying Systems Using Parametrically Dependent Linear Feedback,” Systems
and Control Letters, 23 (1994), pp. 205–215.
[4] Packard, A., “Gain Scheduling via Linear Fractional Transformations,”
Syst. Contr. Letters, 22 (1994), pp. 79–92.
8
The LMI Lab
References . . . . . . . . . . . . . . . . . . . . 8-44
The LMI Lab is a high-performance package for solving general LMI problems.
It blends simple tools for the specification and manipulation of LMIs with
powerful LMI solvers for three generic LMI problems. Thanks to a
structure-oriented representation of LMIs, the various LMI constraints can be
described in their natural block-matrix form. Similarly, the optimization
variables are specified directly as matrix variables with some given structure.
Once an LMI problem is specified, it can be solved numerically by calling the
appropriate LMI solver. The three solvers feasp, mincx, and gevp constitute
the computational engine of the LMI Control Toolbox. Their high performance
is achieved through C-MEX implementation and by taking advantage of the
particular structure of each LMI.
The LMI Lab offers tools to
This chapter gives a tutorial introduction to the LMI Lab as well as more
advanced tips for making the most out of its potential. The tutorial material is
also covered by the demo lmidem.
Background and Terminology
Even though this canonical expression is generic, LMIs rarely arise in this form
in control applications. Consider for instance the Lyapunov inequality
   ATX + XA < 0      (8-1)

where A = [ –1   2 ]  and the variable  X = [ x1  x2 ]  is a symmetric matrix.
          [  0  –2 ]                        [ x2  x3 ]
Here the decision variables are the free entries x1, x2, x3 of X and the canonical form of this LMI reads

   x1 [ –2  2 ]  +  x2 [  0  –3 ]  +  x3 [ 0   0 ]  <  0      (8-2)
      [  2  0 ]        [ –3   4 ]        [ 0  –4 ]
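The canonical expansion (8-2) can be verified by expanding ATX + XA for arbitrary values of the decision variables (Python check):

```python
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, -2.0]])
M1 = np.array([[-2.0, 2.0], [2.0, 0.0]])    # coefficient matrix of x1
M2 = np.array([[0.0, -3.0], [-3.0, 4.0]])   # coefficient matrix of x2
M3 = np.array([[0.0, 0.0], [0.0, -4.0]])    # coefficient matrix of x3

x1, x2, x3 = 0.7, -1.3, 2.1                 # arbitrary decision variables
X = np.array([[x1, x2], [x2, x3]])
print(np.allclose(A.T @ X + X @ A, x1*M1 + x2*M2 + x3*M3))   # True
```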
Clearly this expression is less intuitive and transparent than (8-1). Moreover,
the number of matrices involved in (8-2) grows roughly as n2 /2 if n is the size
of the A matrix. Hence, the canonical form is very inefficient from a storage
viewpoint since it requires storing o(n2 /2) matrices of size n when the single
n-by-n matrix A would be sufficient. Finally, working with the canonical form
is also detrimental to the efficiency of the LMI solvers. For these various
reasons, the LMI Lab uses a structured representation of LMIs. For instance,
the expression ATX + XA in the Lyapunov inequality (8-1) is explicitly
described as a function of the matrix variable X, and only the A matrix is
stored.
In general, LMIs assume a block matrix form where each block is an affine
combination of the matrix variables. As a fairly typical illustration, consider
the following LMI drawn from H∞ theory
   NT [ ATX + XA   XCT    B   ] N  <  0      (8-3)
      [ CX         –γI    D   ]
      [ BT         DT     –γI ]

where
• N is called the outer factor and the block matrix

   L(X, γ) = [ ATX + XA   XCT    B   ]
             [ CX         –γI    D   ]
             [ BT         DT     –γI ]

is called the inner factor. The outer factor need not be square and is often absent.
• X and γ are the matrix variables of the problem. Note that scalars are
considered as 1-by-1 matrices.
• The inner factor L(X, γ) is a symmetric block matrix, its block structure being
characterized by the sizes of its diagonal blocks. By symmetry, L(X, γ) is
entirely specified by the blocks on or above the diagonal.
• Each block of L(X, γ) is an affine expression in the matrix variables X and γ.
This expression can be broken down into a sum of elementary terms. For instance, the block (1,1) contains two elementary terms: ATX and XA.
• Terms are either constant or variable. Constant terms are fixed matrices like
B and D above. Variable terms involve one of the matrix variables, like XA,
XCT , and –γI above.
The LMI (8-3) is specified by the list of terms in each block, as is any LMI
regardless of its complexity.
As for the matrix variables X and γ, they are characterized by their dimensions
and structure. Common structures include rectangular unstructured,
symmetric, skew-symmetric, and scalar. More sophisticated structures are
sometimes encountered in control problems. For instance, the matrix variable
X could be constrained to the block-diagonal structure
   X = [ x1  0   0  ]
       [ 0   x2  x3 ]
       [ 0   x3  x4 ]

or to the symmetric Toeplitz structure

   X = [ x1  x2  x3 ]
       [ x2  x1  x2 ]
       [ x3  x2  x1 ]
Summing up, structured LMI problems are specified by declaring the matrix
variables and describing the term content of each LMI. This term-oriented
description is systematic and accurately reflects the specific structure of the
LMI constraints. There is no built-in limitation on the number of LMIs that can
be specified or on the number of blocks and terms in any given LMI. LMI
systems of arbitrary complexity can therefore be defined in the LMI Lab.
Information Retrieval
The interactive function lmiinfo answers qualitative queries about LMI
systems created with lmiedit or lmivar and lmiterm. You can also use
lmiedit to visualize the LMI system produced by a particular sequence of
lmivar/lmiterm commands.
Overview of the LMI Lab
Result Validation
The solution x* produced by the LMI solvers is easily validated with the
functions evallmi and showlmi. This allows a fast check and/or analysis of the
results. With evallmi, all variable terms in the LMI system are evaluated for
the value x* of the decision variables. The left- and right-hand sides of each
LMI then become constant matrices that can be displayed with showlmi.
This process creates the so-called internal representation of the LMI system.
This computer description of the problem is used by the LMI solvers and in all
subsequent manipulations of the LMI system. It is stored as a single vector
which we consistently denote by LMISYS in the sequel (for the sake of clarity).
There are two ways of generating the internal description of a given LMI
system: (1) by a sequence of lmivar/lmiterm commands that build it
incrementally, or (2) via the LMI Editor lmiedit where LMIs can be specified
directly as symbolic matrix expressions. Though somewhat less flexible and
powerful than the command-based description, the LMI Editor is more
straightforward to use, hence particularly well-suited for beginners. Thanks to
its coding and decoding capabilities, it also constitutes a good tutorial
introduction to lmivar and lmiterm. Accordingly, beginners may elect to skip
Specifying a System of LMIs
A Simple Example
The following tutorial example is used to illustrate the specification of LMI
systems with the LMI Lab tools. Run the demo lmidem to see a complete
treatment of this example.
Consider a stable transfer function

   G(s) = C(sI – A)⁻¹B      (8-4)
with four inputs, four outputs, and six states, and consider the set of
input/output scaling matrices D with block-diagonal structure
   D = [ d1  0   0   0  ]
       [ 0   d1  0   0  ]
       [ 0   0   d2  d3 ]
       [ 0   0   d4  d5 ]      (8-5)
The following problem arises in the robust stability analysis of systems with time-varying uncertainty [4]:
Find, if any, a scaling D of structure (8-5) such that the largest gain across
frequency of D G(s) D–1 is less than one.
This problem has a simple LMI formulation: there exists an adequate scaling
D if the following feasibility problem has solutions:
Find two symmetric matrices X ∈ R6×6 and S = DT D ∈ R4×4 such that
   [ ATX + XA + CTSC   XB ]
   [ BTX               –S ]  <  0      (8-6)

   X > 0      (8-7)

   S > I      (8-8)
The LMI system (8-6)–(8-8) can be described with the LMI Editor as outlined
below. Alternatively, its internal description can be generated with lmivar and
lmiterm commands as follows:
setlmis([])
X = lmivar(1,[6 1])
S = lmivar(1,[2 0;2 1])
% 1st LMI
lmiterm([1 1 1 X],1,A,'s')
lmiterm([1 1 1 S],C',C)
lmiterm([1 1 2 X],1,B)
lmiterm([1 2 2 S],-1,1)
% 2nd LMI
lmiterm([-2 1 1 X],1,1)
% 3rd LMI
lmiterm([-3 1 1 S],1,1)
lmiterm([3 1 1 0],1)
LMISYS = getlmis
Here the lmivar commands define the two matrix variables X and S while the
lmiterm commands describe the various terms in each LMI. Upon completion,
getlmis returns the internal representation LMISYS of this LMI system. The
next three subsections give more details on the syntax and usage of these
various commands. More information on how the internal representation is
updated by lmivar/lmiterm can also be found in “How It All Works” on
page 8-18.
This returns the internal representation LMISYS of this LMI system. This
MATLAB description of the problem can be forwarded to other LMI-Lab
functions for subsequent processing. The command getlmis must be used only
once and after declaring all matrix variables and LMI terms.
lmivar
The matrix variables are declared one at a time with lmivar and are
characterized by their structure. To facilitate the specification of this structure,
the LMI Lab offers two predefined structure types along with the means to
describe more general structures:
Type 1: Symmetric block diagonal structure. This corresponds to matrix
variables of the form
     [ D1   0  ...   0 ]
X =  [  0  D2  ...   : ]
     [  :      ...   0 ]
     [  0  ...   0  Dr ]
where each diagonal block Dj is square and is either zero, a full symmetric
matrix, or a scalar matrix
Dj = d × I, d∈R
This type encompasses ordinary symmetric matrices (single block) and scalar
variables (one block of size one).
Type 2: Rectangular structure. This corresponds to arbitrary rectangular
matrices without any particular structure.
Type 3: General structures. This third type is used to describe more
sophisticated structures and/or correlations between the matrix
variables. The principle is as follows: each entry of X is specified
independently as either 0, xn, or –xn where xn denotes the n-th
decision variable in the problem. For details on how to use Type 3,
see “Structured Matrix Variables” on page 8-33 below as well as the
lmivar entry of the “Command Reference” chapter.
In “Example 8.1” on page 8-9, the matrix variables X and S are of Type 1.
Indeed, both are symmetric and S inherits the block-diagonal structure (8-5) of
D. Specifically, S is of the form
     [ s1   0   0   0 ]
S =  [  0  s1   0   0 ]
     [  0   0  s2  s3 ]
     [  0   0  s3  s4 ]
After initializing the description with the command setlmis([]), these two
matrix variables are declared by
lmivar(1,[6 1]) % X
lmivar(1,[2 0;2 1]) % S
In both commands, the first input specifies the structure type and the second
input contains additional information about the structure of the variable:
• For a matrix variable X of Type 1, this second input is a matrix with two
columns and as many rows as diagonal blocks in X. The first column lists the
sizes of the diagonal blocks and the second column specifies their nature with
the following convention:
1 → full symmetric block
0 → scalar block
–1 → zero block
In the second command, for instance, [2 0;2 1] means that S has two
diagonal blocks, the first one being a 2-by-2 scalar block and the second one
a 2-by-2 full block.
• For matrix variables of Type 2, the second input of lmivar is a two-entry
vector listing the row and column dimensions of the variable. For instance, a
3-by-5 rectangular matrix variable would be defined by
lmivar(2,[3 5])
For convenience, lmivar also returns a “tag” that identifies the matrix variable
for subsequent reference. For instance, X and S in “Example 8.1” could be
defined by
X = lmivar(1,[6 1])
S = lmivar(1,[2 0;2 1])
lmiterm
After declaring the matrix variables with lmivar, we are left with specifying
the term content of each LMI. Recall that LMI terms fall into three categories:
• The constant terms, i.e., fixed matrices like I in the left-hand side of the LMI
S>I
• The variable terms, i.e., terms involving a matrix variable. For instance, A^T X
and C^T SC in (8-6). Variable terms are of the form PXQ where X is a variable
and P, Q are given matrices called the left and right coefficients, respectively.
• The outer factors, i.e., fixed matrices multiplying the whole LMI on the left
and right.
The following rule should be kept in mind when describing the term content of
an LMI:
Important: Specify only the terms in the blocks on or above the diagonal. The
inner factors being symmetric, this is sufficient to specify the entire LMI.
Specifying all blocks results in the duplication of off-diagonal terms, hence in
the creation of a different LMI. Alternatively, you can describe the blocks on or
below the diagonal.
LMI terms are specified one at a time with lmiterm. For instance, the LMI
[ A^T X + XA + C^T S C    XB ]
[ B^T X                   -S ]  < 0
is described by
lmiterm([1 1 1 1],1,A,'s')
lmiterm([1 1 1 2],C',C)
lmiterm([1 1 2 1],1,B)
lmiterm([1 2 2 2],-1,1)
These commands successively declare the terms A^T X + XA, C^T SC, XB, and -S.
In each command, the first argument is a four-entry vector listing the term
characteristics as follows:
• The first entry indicates to which LMI the term belongs. The value m means
“left-hand side of the m-th LMI,” and -m means “right-hand side of the m-th
LMI”
• The second and third entries identify the block to which the term belongs.
For instance, the vector [1 1 2 1] indicates that the term is attached to the
(1, 2) block
• The last entry indicates which matrix variable is involved in the term. This
entry is 0 for constant terms, k for terms involving the k-th matrix variable
Xk, and -k for terms involving Xk^T (here X and S are the first and second
variables in the order of declaration).
Finally, the second and third arguments of lmiterm contain the numerical data
(values of the constant term, outer factor, or matrix coefficients P and Q for
variable terms PXQ or PXTQ). These arguments must refer to existing
MATLAB variables and be real-valued. See “Complex-Valued LMIs” on
page 8-35 for the specification of LMIs with complex-valued coefficients.
Some shorthand is provided to simplify term specification. First, blocks are
zero by default. Second, in diagonal blocks the extra argument 's' allows you
to specify the conjugated expression AXB + B^T X^T A^T with a single lmiterm
command. For instance, the first command specifies A^T X + XA as the
“symmetrization” of XA. Finally, scalar values are allowed as shorthand for
scalar matrices, i.e., matrices of the form αI with α scalar. Thus, a constant
term of the form αI can be specified as the “scalar” α. This also applies to the
coefficients P and Q of variable terms. The dimensions of scalar matrices are
inferred from the context and set to 1 by default. For instance, the third LMI
S > I in “Example 8.1” on page 8-9 is described by
BRL = newlmi
lmiterm([BRL 1 1 X],1,A,'s')
lmiterm([BRL 1 1 S],C',C)
lmiterm([BRL 1 2 X],1,B)
lmiterm([BRL 2 2 S],-1,1)
Xpos = newlmi
lmiterm([-Xpos 1 1 X],1,1)
Slmi = newlmi
lmiterm([-Slmi 1 1 S],1,1)
lmiterm([Slmi 1 1 0],1)
LMISYS = getlmis
Here the identifiers X and S point to the variables X and S while the tags BRL,
Xpos, and Slmi point to the first, second, and third LMI, respectively. Note that
-Xpos refers to the right-hand side of the second LMI. Similarly, -X would
indicate transposition of the variable X.
lmiedit
The command lmiedit calls up a window with several editable text areas and
various pushbuttons. To
specify your LMI system,
1 Declare each matrix variable (name and structure) in the upper half of the
worksheet. The structure is characterized by its type (S for symmetric block
diagonal, R for unstructured, and G for other structures) and by an additional
“structure” matrix. This matrix contains specific information about the
structure and corresponds to the second argument of lmivar (see “lmivar” on
page 8-11 for details).
Please use one line per matrix variable in the text editing areas.
is entered by typing
[a'*x+x*a x*b; b'*x -1] < 0
if x is the name given to the matrix variable X in the upper half of the
worksheet. The left- and right-hand sides of the LMIs should be valid
MATLAB expressions.
Once the LMI system is fully specified, the following tasks can be performed by
pressing the corresponding button:
Figure 8-1 shows how to specify the feasibility problem of “Example 8.1” on
page 8-9 with lmiedit.
The LMI system description is built up in global internal variables that are
updated by lmivar and lmiterm, released by getlmis, and not visible in the
workspace. Even though this artifact is transparent from the user's
viewpoint, be sure to
• Invoke getlmis only once and after completely specifying the LMI system
• Refrain from using the command clear global before the LMI system
description is ended with getlmis
Retrieving Information
Recall that the full description of an LMI system is stored as a single vector
called the internal representation. The user should not attempt to read or
retrieve information directly from this vector. Three functions called lmiinfo,
lminbr, and matnbr are provided to extract and display all relevant
information in a user-readable format.
lmiinfo
lmiinfo is an interactive facility to retrieve qualitative information about LMI
systems. This includes the number of LMIs, the number of matrix variables
and their structure, the term content of each LMI block, etc. To invoke lmiinfo,
enter
lmiinfo(LMISYS)
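For instance, a quick inventory of the system built in “Example 8.1” might look as follows (a sketch only; the counts in the comments assume that three-LMI system):

```matlab
% Sketch, assuming LMISYS is the internal representation from Example 8.1
lmiinfo(LMISYS)         % interactive report: LMIs, variables, term content
nlmi = lminbr(LMISYS)   % number of LMIs (3 for Example 8.1)
nmat = matnbr(LMISYS)   % number of matrix variables (2 for Example 8.1)
```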
LMI Solvers
LMI solvers are provided for the following three generic optimization problems
(here x denotes the vector of decision variables, i.e., of the free entries of the
matrix variables X1, . . . , XK):
• Feasibility problem
Find x ∈ RN (or equivalently matrices X1, . . . , XK with prescribed structure)
that satisfies the LMI system
A(x) < B(x)
The corresponding solver is called feasp.
• Minimization of a linear objective under LMI constraints
Minimize cTx over x ∈ RN subject to A(x) < B(x)
The corresponding solver is called mincx.
• Generalized eigenvalue minimization problem
Minimize λ over x ∈ RN subject to
C(x) < D(x)
0 < B(x)
A(x) < λB(x).
The corresponding solver is called gevp.
Note that A(x) < B(x) above is a shorthand notation for general structured LMI
systems with decision variables x = (x1, . . . , xN).
The three LMI solvers feasp, mincx, and gevp take as input the internal
representation LMISYS of an LMI system and return a feasible or optimizing
value x* of the decision variables. The corresponding values of the matrix
variables X1, . . . , XK are derived from x* with the function dec2mat. These
solvers are C-MEX implementations of the polynomial-time Projective
Algorithm of Nesterov and Nemirovski [3, 2].
For generalized eigenvalue minimization problems, it is necessary to
distinguish between the standard LMI constraints C(x) < D(x) and the
linear-fractional LMIs
A(x) < λB(x)
An initial guess xinit for x can be supplied to mincx or gevp. Use mat2dec to
derive xinit from given values of the matrix variables X1, . . . , XK . Finally,
various options are available to control the optimization process and the solver
behavior. These options are described in detail in the “Command Reference”
chapter.
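A typical calling sequence can be sketched as follows (a sketch only, assuming LMISYS, X, and S are the internal representation and variable identifiers from “Example 8.1”):

```matlab
% Sketch: solving the feasibility problem of Example 8.1
[tmin,xfeas] = feasp(LMISYS);    % tmin < 0 indicates strict feasibility
Xf = dec2mat(LMISYS,xfeas,X);    % matrix value of the variable X
Sf = dec2mat(LMISYS,xfeas,S);    % matrix value of the variable S
```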
The following example illustrates the use of the mincx solver.
Example 8.2. Consider the optimization problem

Minimize Trace(X) subject to

A^T X + XA + XBB^T X + Q < 0          (8-9)
with data
     [ -1  -2   1 ]        [ 1 ]        [  1   -1    0 ]
A =  [  3   2   1 ] ,  B = [ 0 ] ,  Q = [ -1   -3  -12 ]
     [  1  -2  -1 ]        [ 1 ]        [  0  -12  -36 ]
It can be shown that the minimizer X* is simply the stabilizing solution of the
algebraic Riccati equation
A^T X + XA + XBB^T X + Q = 0
This solution can be computed directly with the Riccati solver care and
compared to the minimizer returned by mincx.
Minimize Trace(X) subject to

[ A^T X + XA + Q    XB ]
[ B^T X             -I ]  < 0          (8-10)
Since Trace(X) is a linear function of the entries of X, this problem falls within
the scope of the mincx solver and can be numerically solved as follows:
1 Define the LMI constraint (8-10) by the sequence of commands
setlmis([])
X = lmivar(1,[3 1])        % variable X, full symmetric
lmiterm([1 1 1 X],1,a,'s')
lmiterm([1 1 1 0],q)
lmiterm([1 2 2 0],-1)
lmiterm([1 2 1 X],b',1)
LMIs = getlmis
2 Write the objective Trace(X) as cTx where x is the vector of free entries of X.
Since c should select the diagonal entries of X, it is obtained as the decision
vector corresponding to X = I, that is,
c = mat2dec(LMIs,eye(3))
Note that the function defcx provides a more systematic way of specifying
such objectives (see “Specifying cTx Objectives for mincx” on page 8-38 for
details).
3 Call mincx to compute the minimizer xopt and the global minimum
copt = c'*xopt of the objective:
options = [1e-5,0,0,0,0]
[copt,xopt] = mincx(LMIs,c,options)
2     -8.511476
3     -13.063640
*** new lower bound:    -34.023978
4     -15.768450
*** new lower bound:    -25.005604
5     -17.123012
*** new lower bound:    -21.306781
6     -17.882558
*** new lower bound:    -19.819471
7     -18.339853
*** new lower bound:    -19.189417
8     -18.552558
*** new lower bound:    -18.919668
9     -18.646811
*** new lower bound:    -18.803708
10    -18.687324
*** new lower bound:    -18.753903
11    -18.705715
*** new lower bound:    -18.732574
12    -18.712175
The iteration number and the best value of cTx at the current iteration
appear in the left and right columns, respectively. Note that no value is
displayed at the first iteration, which means that a feasible x satisfying the
constraint (8-10) was found only at the second iteration. Lower bounds on
the global minimum of cTx are sometimes detected as the optimization
progresses. These lower bounds are reported by the message
*** new lower bound: xxx
Upon termination, mincx reports that the global minimum for the objective
Trace(X) is -18.716695 with relative accuracy of at least 9.5×10^-6. This is
the value copt returned by mincx.
4 mincx also returns the optimizing vector of decision variables xopt. The
corresponding optimal value of the matrix variable X is given by
Xopt = dec2mat(LMIs,xopt,X)
which returns
        [ -6.3542  -5.8895   2.2046 ]
Xopt =  [ -5.8895  -6.2855   2.2201 ]
        [  2.2046   2.2201  -6.0771 ]
This result can be compared with the stabilizing Riccati solution computed
by care:
Xst = care(a,b,q,-1)
norm(Xopt-Xst)
ans =
6.5390e-05
An error is issued if the number of arguments following LMISYS differs from the
number of matrix variables in the problem (see matnbr).
Conversely, given a value xdec of the vector of decision variables, the
corresponding value of the k-th matrix is given by dec2mat. For instance, the
value X2 of the second matrix variable is extracted from xdec by
X2 = dec2mat(LMISYS,xdec,2)
The last argument indicates that the second matrix variable is requested. It
could be set to the matrix variable identifier returned by lmivar.
The total numbers of matrix variables and decision variables are returned by
matnbr and decnbr, respectively. In addition, the function decinfo provides
precise information about the mapping between decision variables and matrix
variable entries (see the “Command Reference” chapter).
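The two conversions compose as follows (a sketch; LMISYS, X1val, and X2val are hypothetical placeholders for a system with two matrix variables):

```matlab
% Sketch: round trip between matrix values and the decision vector
xdec = mat2dec(LMISYS,X1val,X2val);  % pack matrix values into decision vector
X2   = dec2mat(LMISYS,xdec,2);       % recover the value of the 2nd variable
```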
Validating Results
The LMI Lab offers two functions to analyze and validate the results of an LMI
optimization. The function evallmi evaluates all variable terms in an LMI
system for a given value of the vector of decision variables, for instance, the
feasible or optimal vector returned by the LMI solvers. Once this evaluation is
performed, the left- and right-hand sides of a particular LMI are returned by
showlmi.
In the LMI problem considered in “Example 8.2” on page 8-23, you can verify
that the minimizer xopt returned by mincx satisfies the LMI constraint (8-10)
as follows:
evlmi = evallmi(LMIs,xopt)
[lhs,rhs] = showlmi(evlmi,1)
The first command evaluates the system for the value xopt of the decision
variables, and the second command returns the left- and right-hand sides of the
first (and only) LMI. The negative definiteness of this LMI is checked by
eig(lhs-rhs)
ans =
  -2.0387e-04
  -3.9333e-05
  -1.8917e-07
  -4.6680e+01
dellmi
The first possibility is to remove an entire LMI from the system with dellmi.
For instance, suppose that the LMI system of “Example 8.1” on page 8-9 is
described in LMISYS and that we want to remove the positivity constraint on X.
This is done by
NEWSYS = dellmi(LMISYS,2)
where the second argument specifies deletion of the second LMI. The resulting
system of two LMIs is returned in NEWSYS.
The LMI identifiers (initial ranking of the LMI in the LMI system) are not
altered by deletions. As a result, the last LMI
S>I
remains known as the third LMI even though it now ranks second in the
modified system. To avoid confusion, it is safer to refer to LMIs via the
identifiers returned by newlmi. If BRL, Xpos, and Slmi are the identifiers
attached to the three LMIs (8-6)–(8-8), Slmi keeps pointing to S > I even after
deleting the second LMI by
NEWSYS = dellmi(LMISYS,Xpos)
delmvar
Another way of modifying an LMI system is to delete a matrix variable, that is,
to remove all variable terms involving this matrix variable. This operation is
performed by delmvar. For instance, consider the LMI
A^T X + XA + BW + W^T B^T + I < 0
with variables X = XT ∈ R4×4 and W ∈ R2×4. This LMI is defined by
setlmis([])
X = lmivar(1,[4 1]) % X
W = lmivar(2,[2 4]) % W
lmiterm([1 1 1 X],1,A,'s')
lmiterm([1 1 1 W],B,1,'s')
lmiterm([1 1 1 0],1)
LMISYS = getlmis
setmvar
The function setmvar is used to set a matrix variable to some given value. As
a result, this variable is removed from the problem and all terms involving it
become constant terms. This is useful, for instance, to fix some
variables and optimize with respect to the remaining ones.
Consider again “Example 8.1” on page 8-9 and suppose we want to know if the
peak gain of G itself is less than one, that is, if
||G||∞ < 1
This amounts to setting the scaling matrix D (or equivalently, S = D^T D) to a
multiple of the identity matrix. Keeping in mind the constraint S > I, a
legitimate choice is S = 2I. To set S to this value, enter
NEWSYS = setmvar(LMISYS,S,2)
The second argument is the variable identifier S, and the third argument is the
value to which S should be set. Here the value 2 is shorthand for 2I. The
resulting system NEWSYS reads

[ A^T X + XA + 2C^T C    XB  ]
[ B^T X                 -2I  ]  < 0

X > 0

2I > I
Note that the last LMI is now free of variables and trivially satisfied. It could,
therefore, be deleted by
NEWSYS = dellmi(NEWSYS,3)
or
NEWSYS = dellmi(NEWSYS,Slmi)
Advanced Topics
This last section gives a few hints for making the most out of the LMI Lab. It
is directed toward users who are comfortable with the basics described above.
Structured Matrix Variables
Example 8.3. Suppose that the problem variables include a 3-by-3 symmetric
matrix X and a 3-by-3 symmetric Toeplitz matrix
     [ y1  y2  y3 ]
Y =  [ y2  y1  y2 ]
     [ y3  y2  y1 ]
The variable Y has three independent entries, hence involves three decision
variables. Since Y is independent of X, these decision variables should be
labeled n + 1, n + 2, n + 3 where n is the number of decision variables involved
in X. To retrieve this number, define the variable X (Type 1) by
setlmis([])
[X,n] = lmivar(1,[3 1])
The second output argument n gives the total number of decision variables
used so far (here n = 6). Given this number, Y can be defined by
Y = lmivar(3,n+[1 2 3;2 1 2;3 2 1])
or equivalently by
Y = lmivar(3,toeplitz(n+[1 2 3]))
where toeplitz is a standard MATLAB function. To verify the variable
distribution, evaluate decinfo for X and Y:
decinfo(lmis,X)
ans =
     1     2     4
     2     3     5
     4     5     6
decinfo(lmis,Y)
ans =
     7     8     9
     8     7     8
     9     8     7
The next example is a problem with interdependent matrix variables

X = [ x  0 ]      Y = [ z  0 ]      Z = [  0  -x ]
    [ 0  y ]          [ 0  t ]          [ -t   0 ]

Here X and Y can be declared as Type 1 with two scalar blocks, for instance
[X,n,sX] = lmivar(1,[1 0;1 0]) and [Y,n,sY] = lmivar(1,[1 0;1 0]).
The third output of lmivar gives the entry-wise dependence of X and Y on the
decision variables (x1, x2, x3, x4) := (x, y, z, t):

sX =
     1     0
     0     2

sY =
     3     0
     0     4
Using Type 3 of lmivar, you can now specify the structure of Z in terms of the
decision variables x1 = x and x4 = t:
[Z,n,sZ] = lmivar(3,[0 -sX(1,1);-sY(2,2) 0])
Since sX(1,1) refers to x1 while sY(2,2) refers to x4, this defines the variable

Z = [   0  -x1 ]  =  [  0  -x ]
    [ -x4    0 ]     [ -t   0 ]
Complex-Valued LMIs
The LMI solvers are written for real-valued matrices and cannot directly
handle LMI problems involving complex-valued matrices. However,
complex-valued LMIs can be turned into real-valued LMIs by observing that a
complex Hermitian matrix L(x) satisfies
L(x) < 0
if and only if
[  Re(L(x))   Im(L(x)) ]
[ -Im(L(x))   Re(L(x)) ]  < 0
This suggests the following systematic procedure for turning complex LMIs
into real ones:
For LMIs without outer factor, a streamlined version of this procedure consists
of replacing any occurrence of the matrix variable X = X1 + jX2 by

[  X1  X2 ]
[ -X2  X1 ]

and any fixed matrix A = A1 + jA2 by

[  A1  A2 ]
[ -A2  A1 ]

For instance, the LMI
M^H X M < X,   X = X^H > I          (8-11)
reads (given the decompositions M = M1 + jM2 and X = X1 + jX2 with Mj, Xj real):
[  M1  M2 ]^T [  X1  X2 ] [  M1  M2 ]     [  X1  X2 ]
[ -M2  M1 ]   [ -X2  X1 ] [ -M2  M1 ]  <  [ -X2  X1 ]

[  X1  X2 ]
[ -X2  X1 ]  > I
Note that X = X^H in turn requires that X1 = X1^T and X2 + X2^T = 0.
Consequently, X1 and X2 should be declared as symmetric and skew-
symmetric matrix variables, respectively.
Assuming, for instance, that M ∈ C5×5, the LMI system (8-11) would be
specified as follows:
M1=real(M), M2=imag(M)
bigM=[M1 M2;-M2 M1]
setlmis([])
lmiterm([1 1 1 0],1)
lmiterm([-1 1 1 bigX],1,1)
lmiterm([2 1 1 bigX],bigM',bigM)
lmiterm([-2 1 1 bigX],1,1)
lmis = getlmis
3 Define the structure of bigX in terms of the structures sX1 and sX2 of X1 and
X2 .
See the previous subsection for more details on such structure manipulations.
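Step 3 can be sketched as follows, assuming M (and hence X) is 5-by-5; skewdec is the LMI Lab helper that builds the structure matrix of a skew-symmetric variable:

```matlab
% Sketch: declaring bigX = [X1 X2;-X2 X1] for a 5-by-5 Hermitian X = X1 + j*X2
setlmis([])
[X1,n,sX1] = lmivar(1,[5 1])            % X1: symmetric real part
[X2,n,sX2] = lmivar(3,skewdec(5,n))     % X2: skew-symmetric imaginary part
bigX = lmivar(3,[sX1 sX2;-sX2 sX1])     % real embedding of X
```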
Specifying cTx Objectives for mincx
n = decnbr(lmisys)
c = zeros(n,1)
for j=1:n,
 [Xj,Pj] = defcx(lmisys,j,X,P)
 c(j) = trace(Xj) + x0'*Pj*x0
end
The first command returns the number of decision variables in the problem and
the second command dimensions c accordingly. Then the for loop performs the
following operations:
1 Evaluate the matrix variables X and P when all entries of the decision vector
x are set to zero except xj := 1. This operation is performed by the function
defcx. Apart from lmisys and j, the inputs of defcx are the identifiers X and
P of the variables involved in the objective, and the outputs Xj and Pj are the
corresponding values.
2 Evaluate the objective expression for X := Xj and P := Pj. This yields the
j-th entry of c by definition.
Feasibility Radius
When solving LMI problems with feasp, mincx, or gevp, it is possible to
constrain the solution x to lie in the ball
x^T x < R^2
where R > 0 is called the feasibility radius. This specifies a maximum
(Euclidean norm) magnitude for x and avoids getting solutions of very large
norm. This may also speed up computations and improve numerical stability.
Finally, the feasibility radius bound regularizes problems with redundant
variable sets. In rough terms, the set of scalar variables is redundant when an
equivalent problem could be formulated with a smaller number of variables.
The feasibility radius R is set by the third entry of the options vector of the LMI
solvers. Its default value is R = 10^9. Setting R to a negative value means “no
rigid bound,” in which case the feasibility radius is increased during the
optimization if necessary.
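As a sketch (assuming LMISYS is an existing internal representation), the radius is imposed through the options vector:

```matlab
% Sketch: bounding the decision vector by the feasibility radius R = 100
options = zeros(1,5);     % default options
options(3) = 100;         % third entry sets the feasibility radius R
[tmin,xfeas] = feasp(LMISYS,options);
```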
Well-Posedness Issues
The LMI solvers used in the LMI Lab are based on interior-point optimization
techniques. To compute feasible solutions, such techniques require that the
system of LMI constraints be strictly feasible, that is, the feasible set has a
nonempty interior. As a result, these solvers may encounter difficulty when the
LMI constraints are feasible but not strictly feasible, that is, when the LMI
L(x) ≤ 0
has solutions while
L(x) < 0
has no solution.
For feasibility problems, this difficulty is automatically circumvented by
feasp, which reformulates the problem

Find x such that L(x) < 0

as

Minimize t subject to L(x) < t × I.

The modified constraint is always strictly feasible in (x, t).
cannot be directly solved with gevp. A simple remedy consists of replacing the
constraints

A(x) < λB(x),   B(x) > 0

by

A(x) < [ Y  0 ] ,   Y < λB1(x),   B1(x) > 0
       [ 0  0 ]
A0 + x1A1 + … + xNAN < 0          (8-15)
This is no longer true, however, when the number of variable terms is nearly
equal to or greater than the number N of decision variables in the problem. If
your LMI problem has few free scalar variables but many terms in each LMI,
it is therefore preferable to rewrite it as (8-15) and to specify it in this form.
Each scalar variable xj is then declared independently and the LMI terms are
of the form xjAj.
If M denotes the total row size of the LMI system and N the total number of
scalar decision variables, the flop count per iteration for the feasp and mincx
solvers is proportional to
M + P^T XQ + Q^T X^T P < 0          (8-16)
It turns out that a particular solution Xc of (8-16) can be computed via simple
linear algebra manipulations [1]. Typically, Xc corresponds to the center of the
ellipsoid of matrices defined by (8-16).
The function basiclmi returns the “explicit” solution Xc:
Xc = basiclmi(M,P,Q)
Since this central solution sometimes has large norm, basiclmi also offers the
option of computing an approximate least-norm solution of (8-16). This is done
by
X = basiclmi(M,P,Q,'Xmin')
References
[1] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
[2] Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, pp. 840–844.
[3] Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in
Convex Programming: Theory and Applications, SIAM Books, Philadelphia,
1994.
[4] Shamma, J.S., “Robustness Analysis for Time-Varying Systems,” Proc.
Conf. Dec. Contr., 1992, pp. 3163–3168.
9
Command Reference
This chapter gives a detailed description of all LMI Control Toolbox functions.
Functions are grouped by application in tables at the beginning of this chapter.
In addition, information on each function is available through the online Help
facility.
List of Functions
Linear Algebra
Interconnection of Systems
Dynamical Uncertainty
Robustness Analysis
Continuous Time
Discrete Time
Gain Scheduling
LMI Lab: Specifying and Solving LMIs
Specification of LMIs
LMI Solvers
dec2mat Convert the output of the solvers into values of the matrix
variables
evallmi Evaluate all variable terms for given values of the decision
variables
Information Retrieval
decnbr Get the number of decision variables, that is, the number of
free scalar variables in a problem
aff2lft
Purpose
Compute the linear-fractional representation of an affine parameter-
dependent system
u P(s) y
where
• P(s) is an LTI system called the nominal plant
• The block-diagonal operator ∆ = diag(δ1Ir1, . . ., δnIrn) accounts for the
uncertainty on the real parameters pj. Each scalar δj is real, possibly
repeated (rj > 1), and bounded by
|δj| ≤ (pjmax − pjmin)/2
where pjmin and pjmax denote the lower and upper bounds on pj
The SYSTEM matrix of P and the description of ∆ are returned in sys and delta.
Closing the loop with ∆ ≡ 0 yields the LTI system
E(pav) ẋ = A(pav)x + B(pav)u
y = C(pav)x + D(pav)u
where pav = (pmin + pmax)/2 is the vector of average parameter values.
aff2pol
Purpose
Convert affine parameter-dependent models to polytopic ones
E(p) ẋ = A(p)x + B(p)u          (9-1)
y = C(p)x + D(p)u          (9-2)
for all corners pex of the parameter box or all vertices pex of the polytope of
parameter values.
basiclmi
Syntax X = basiclmi(M,P,Q,option)
care
Purpose
Solve continuous-time Riccati equations (CARE)
Syntax X = care(A,B,Q,R,S,E)
[X,L,G,RR] = care(A,B,Q,R,S,E)
Description This function computes the stabilizing solution X of the Riccati equation
A^T XE + E^T XA − (E^T XB + S)R^-1(B^T XE + S^T) + Q = 0
where E is an invertible matrix. The solution is computed by unitary deflation
of the associated Hamiltonian pencil
         [  A     0    B ]       [ E    0   0 ]
H − λL = [ -Q   -A^T  -S ] − λ   [ 0   E^T  0 ]
         [ S^T   B^T   R ]       [ 0    0   0 ]
Reference Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” IEEE
Trans. Aut. Contr., AC–24 (1979), pp. 913–921.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
dare
Purpose
Solve discrete-time Riccati equations (DARE)
Syntax X = dare(A,B,Q,R,S,E)
[X,L,G,RR] = dare(A,B,Q,R,S,E)
Description dare computes the stabilizing solution X of the algebraic Riccati equation
A^T XA − E^T XE − (A^T XB + S)(B^T XB + R)^-1(B^T XA + S^T) + Q = 0          (9-3)
Remark The function dricpen is a variant of dare where the inputs are the two matrices
H and L defined above. The syntax is
X = dricpen(H,L)
or
[X1,X2] = dricpen(H,L)
• X = X2X1^-1 is the stabilizing solution of (9-3).
Reference Pappas, T., A.J. Laub, and N.R. Sandell, “On the Numerical Solution of the
Discrete-Time Algebraic Riccati Equation,” IEEE Trans. Aut. Contr., AC–25
(1980), pp. 631–641.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
decay
Purpose
Quadratic decay rate of polytopic or affine P-systems
decinfo
Purpose
Describe how the entries of a matrix variable X relate to the decision variables
Syntax decinfo(lmisys)
decX = decinfo(lmisys,X)
Description The function decinfo expresses the entries of a matrix variable X in terms of
the decision variables x1, . . ., xN. Recall that the decision variables are the free
scalar variables of the problem, or equivalently, the free entries of all matrix
variables described in lmisys. Each entry of X is either a hard zero, some
decision variable xn, or its opposite –xn.
If X is the identifier of X supplied by lmivar, the command
decX = decinfo(lmisys,X)
returns an integer matrix decX of the same dimensions as X whose (i, j) entry is
• 0 if X(i, j) is a hard zero
• n if X(i, j) = xn (the n-th decision variable)
• –n if X(i, j) = –xn
Example 1 Consider an LMI with two matrix variables X and Y with structure:
• X = x I3 with x scalar
• Y rectangular of size 2-by-1
dX = decinfo(lmis,X)
dX =
1 0 0
0 1 0
0 0 1
dY = decinfo(lmis,Y)
dY =
2
3
This indicates a total of three decision variables x1, x2, x3 that are related to the
entries of X and Y by
     [ x1   0   0 ]        [ x2 ]
X =  [  0  x1   0 ] ,  Y = [ x3 ]
     [  0   0  x1 ]
Note that the number of decision variables corresponds to the number of free
entries in X and Y when taking structure into account.
Example 2 Suppose that the matrix variable X is symmetric block diagonal with one 2-by-2
full block and one 2-by-2 scalar block, and is declared by
setlmis([])
X = lmivar(1,[2 1;2 0])
:
lmis = getlmis
?> 1
X1 :
1 2 0 0
2 3 0 0
0 0 4 0
0 0 0 4
*********
?> 0
decnbr
Purpose
Give the total number of decision variables in a system of LMIs
Description The function decnbr returns the number ndec of decision variables (free scalar
variables) in the LMI problem described in lmisys. In other words, ndec is the
length of the vector of decision variables.
Example For an LMI system lmis with two matrix variables X and Y such that
• X is symmetric block diagonal with one 2-by-2 full block, and one 2-by-2
scalar block
• Y is 2-by-3 rectangular,
the command ndec = decnbr(lmis) returns
ndec =
    10
This is exactly the number of free entries in X and Y when taking structure into
account (see decinfo for more details).
dec2mat
Purpose
Given values of the decision variables, derive the corresponding values of the
matrix variables
Description Given a value decvars of the vector of decision variables, dec2mat computes the
corresponding value valX of the matrix variable with identifier X. This
identifier is returned by lmivar when declaring the matrix variable.
Recall that the decision variables are all free scalar variables in the LMI
problem and correspond to the free entries of the matrix variables X1, . . ., XK.
Since LMI solvers return a feasible or optimal value of the vector of decision
variables, dec2mat is useful to derive the corresponding feasible or optimal
values of the matrix variables.
defcx
Description defcx is useful to derive the c vector needed by mincx when the objective is
expressed in terms of the matrix variables.
Given the identifiers X1,...,Xk of the matrix variables involved in this
objective, defcx returns the values V1,...,Vk of these variables when the n-th
decision variable is set to one and all others to zero. See the example in
“Specifying cTx Objectives for mincx” on page 8-38 for more details on how to
use this function to derive the c vector.
dellmi
Purpose
Remove an LMI from a given system of LMIs
Description dellmi deletes the n-th LMI from the system of LMIs described in lmisys. The
updated system is returned in newsys.
The ranking n is relative to the order in which the LMIs were declared and
corresponds to the identifier returned by newlmi. Since this ranking is not
modified by deletions, it is safer to refer to the remaining LMIs by their
identifiers. Finally, matrix variables that only appeared in the deleted LMI are
removed from the problem.
A1^T X1 + X1A1 + Q1 < 0          (9-4)

A3^T X3 + X3A3 + Q3 < 0          (9-5)
and the second variable X2 has been removed from the problem since it no
longer appears in the system (9-4)–(9-5).
To further delete (9-5), type
lmis = dellmi(lmis,LMI3)
or equivalently
lmis = dellmi(lmis,3)
Note that (9-5) has retained its original ranking after the first deletion.
delmvar
Purpose
Delete one of the matrix variables of an LMI problem
Description delmvar removes the matrix variable X with identifier X from the list of
variables defined in lmisys. The identifier X should be the second argument
returned by lmivar when declaring X. All terms involving X are automatically
removed from the list of LMI terms. The description of the resulting system of
LMIs is returned in newsys.
Example Consider an LMI system involving two variables X and Y with identifiers X and Y. To delete the variable
X, type
lmisys = delmvar(lmisys,X)
The resulting system involves only one variable Y. Note that Y is still identified by the label Y.
dhinflmi
Purpose
LMI-based H∞ synthesis for discrete-time systems
Description dhinflmi is the counterpart of hinflmi for discrete-time plants. Its syntax and
usage mirror those of hinflmi.
The H∞ performance γ is achievable if and only if the following system of LMIs
has symmetric solutions R and S:
    [N12 0]^T [ A R A^T – R       A R C1^T             B1   ] [N12 0]
    [ 0  I]   [ C1 R A^T         –γI + C1 R C1^T       D11  ] [ 0  I]   < 0
              [ B1^T              D11^T               –γI   ]

    [N21 0]^T [ A^T S A – S       A^T S B1             C1^T ] [N21 0]
    [ 0  I]   [ B1^T S A         –γI + B1^T S B1       D11^T] [ 0  I]   < 0
              [ C1                D11                 –γI   ]

    [ R  I ]
    [ I  S ]  ≥ 0

where N12 and N21 denote bases of the null spaces of (B2^T, D12^T) and (C2, D21), respectively. The function dhinflmi returns solutions R = x1 and S = y1 of this LMI system for γ = gopt (the matrices x2 and y2 are set to gopt*I).
dhinfric
Purpose
Riccati-based H∞ synthesis for discrete-time systems
Description dhinfric is the counterpart of hinfric for discrete-time plants P(s) with
equations:
xk+1 = Axk + B1 wk + B2uk
zk = C1xk + D11 wk + D12uk
yk = C2xk + D21 wk + D22uk.
See dnorminf for a definition of the RMS gain (H∞ performance) of
discrete-time systems. The syntax and usage of dhinfric are identical to those
of hinfric.
dnorminf
Purpose
Compute the root-mean-squares (RMS) gain of discrete-time systems
Description dnorminf computes the peak gain of the frequency response on the unit circle. When G(z) is stable, this peak gain coincides
with the RMS gain or H∞ norm of G, that is,
    ||G||∞ =    sup     ||y||_l2 / ||u||_l2
             u ∈ l2
             u ≠ 0
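As an illustrative sketch (the first-order system below is an assumption made for the example, not part of dnorminf):

```matlab
% Discrete-time system G(z) = z/(z - 0.5), specified in transfer
% function form; dnorminf returns its peak gain on the unit circle.
sys = ltisys('tf',[1 0],[1 -0.5]);
gain = dnorminf(sys)     % RMS gain of G; the peak occurs at z = 1
```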
evallmi
Purpose
Given a particular instance of the decision variables, evaluate all variable
terms in the system of LMIs
Description evallmi evaluates all LMI constraints for a particular instance decvars of the
vector of decision variables. Recall that decvars fully determines the values of
the matrix variables X1, . . ., XK. The “evaluation” consists of replacing all terms
involving X1, . . ., XK by their matrix value. The output evalsys is an LMI
system containing only constant terms.
The function evallmi is useful for validation of the LMI solvers’ output. The
vector returned by these solvers can be fed directly to evallmi to evaluate all
variable terms. The matrix values of the left- and right-hand sides of each LMI
are then returned by showlmi.
Observation evallmi is meant to operate on the output of the LMI solvers. To evaluate all
LMIs for particular instances of the matrix variables X1, . . ., XK, first form the
corresponding decision vector x with mat2dec and then call evallmi with x as
input.
where

    A = [ 0.5  -0.2
          0.1  -0.7 ]

This LMI system is defined by:
setlmis([])
X = lmivar(1,[2 1]) % full symmetric X
The result is
tmin =
-4.7117e+00
xfeas' =
1.1029e+02 -1.1519e+01 1.1942e+02
The LMI constraints are therefore feasible since tmin < 0. The solution X
corresponding to the feasible decision vector xfeas would be given by
X = dec2mat(lmis,xfeas,X).
To check that xfeas is indeed feasible, evaluate all LMI constraints by typing
evals = evallmi(lmis,xfeas)
The left- and right-hand sides of the first and second LMIs are then given by
[lhs1,rhs1] = showlmi(evals,1)
[lhs2,rhs2] = showlmi(evals,2)
feasp
Purpose
Find a solution to a given system of LMIs
Description The function feasp computes a solution xfeas (if any) of the system of LMIs
described by lmisys. The vector xfeas is a particular value of the decision
variables for which all LMIs are satisfied.
Given the LMI system
    N^T L(x) N ≤ M^T R(x) M,                    (9-7)
Control The optional argument options gives access to certain control parameters for
Parameters the optimization algorithm. This five-entry vector is organized as follows:
• options(1) is not used
• options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default)
• options(3) resets the feasibility radius. Setting options(3) to a value R > 0 further constrains the decision vector x = (x1, . . ., xN) to lie within the ball

    x1^2 + . . . + xN^2 < R^2
In other words, the Euclidean norm of xfeas should not exceed R. The
feasibility radius is a simple means of controlling the magnitude of solutions.
Upon termination, feasp displays the f-radius saturation, that is, the norm
of the solution as a percentage of the feasibility radius R.
The default value is R = 10^9. Setting options(3) to a negative value activates the “flexible bound” mode. In this mode, the feasibility radius is initially set to 10^8, and increased if necessary during the course of optimization.
• options(4) helps speed up termination. When set to an integer value J > 0, the code terminates if t has not decreased by more than one percent in relative terms during the last J iterations. The default value is 10. This parameter trades off speed vs. accuracy. If set to a small value (< 10), the code terminates quickly but without guarantee of accuracy. Conversely, a large value results in natural convergence at the expense of a possibly large number of iterations.
• options(5) = 1 turns off the trace of execution of the optimization
procedure. Resetting options(5) to zero (default value) turns it back on.
Memory When the least-squares problem solved at each iteration becomes
Problems ill-conditioned, the feasp solver switches from Cholesky-based to
QR-based linear algebra (see “Memory Problems” on page 9-74 for details). Since the QR mode
typically requires much more memory, MATLAB may run out of memory and
display the message
??? Error using ==> feaslv
Out of memory. Type HELP MEMORY for your options.
You should then ask your system manager to increase your swap space or, if no
additional swap space is available, set options(4) = 1. This will prevent
switching to QR and feasp will terminate when Cholesky fails due to
numerical instabilities.
Example Consider the problem of finding P such that

    A1^T P + P A1 < 0                    (9-8)
    A2^T P + P A2 < 0                    (9-9)
    A3^T P + P A3 < 0                    (9-10)

with data

    A1 = [ -1    2 ]     A2 = [ -0.8   1.5 ]     A3 = [ -1.4   0.9 ]
         [  1   -3 ]          [  1.3  -2.7 ]          [  0.7  -2.0 ]
This problem arises when studying the quadratic stability of the polytope of matrices Co{A1, A2, A3} (see “Quadratic Stability” on page 3-6 for details).
To assess feasibility with feasp, first enter the LMIs (9-8)–(9-10) by:

setlmis([])
p = lmivar(1,[2 1])
lmiterm([1 1 1 p],1,A1,'s')     % LMI #1: A1'*P + P*A1
lmiterm([2 1 1 p],1,A2,'s')     % LMI #2: A2'*P + P*A2
lmiterm([3 1 1 p],1,A3,'s')     % LMI #3: A3'*P + P*A3
lmis = getlmis

then call the feasibility solver:

[tmin,xfeas] = feasp(lmis)
This returns tmin = -3.1363. Hence (9-8)–(9-10) is feasible and the dynamical system ẋ = A(t)x is quadratically stable for A(t) ∈ Co{A1, A2, A3}.
To obtain a Lyapunov matrix P proving the quadratic stability, type
P = dec2mat(lmis,xfeas,p)
This returns
P =
    270.8   126.4
    126.4   155.1
It is also possible to bound the norm of the solution, for instance by typing

[tmin,xfeas] = feasp(lmis,[0,0,10,0,0],-1)

The third entry 10 of options sets the feasibility radius to 10 while the third argument -1 sets the target value for tmin. This yields tmin = -1.1745 and a matrix P with largest eigenvalue λmax(P) = 9.6912.
Reference The feasibility solver feasp is based on Nesterov and Nemirovski’s Projective
Method described in
Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp.
840–844.
The optimization is performed by the C-MEX file feaslv.mex.
getdg
Purpose
Retrieve the D, G scaling matrices computed by mustab
Syntax D = getdg(ds,i)
G = getdg(gs,i)
Description The function mustab computes the mixed-µ upper bound over a grid of
frequencies fs and returns the corresponding D, G scalings compacted in two
matrices ds and gs. The values of D, G at the frequency fs(i) are extracted
from ds and gs by getdg.
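A sketch of the intended usage (the plant sys and uncertainty description delta are assumed given, and the output ordering of mustab shown below is an assumption consistent with the description above):

```matlab
% Compute the mu upper bound over a frequency grid, then extract the
% D, G scalings at the i-th frequency point.
[margin,peakf,fs,ds,gs] = mustab(sys,delta);  % assumed output list
i = 1;
Di = getdg(ds,i);    % D scaling at frequency fs(i)
Gi = getdg(gs,i);    % G scaling at frequency fs(i)
```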
getlmis
Purpose
Get the internal description of an LMI system
Description After completing the description of a given LMI system with lmivar and
lmiterm, its internal representation lmisys is obtained with the command
lmisys = getlmis
This MATLAB representation of the LMI system can be forwarded to the LMI
solvers or any other LMI-Lab function for subsequent processing.
gevp
Purpose
Generalized eigenvalue minimization under LMI constraints
Description gevp solves the generalized eigenvalue minimization problem:

    Minimize λ subject to

        C(x) < D(x)                    (9-11)
        0 < B(x)                       (9-12)
        A(x) < λB(x)                   (9-13)

where C(x) < D(x) and A(x) < λB(x) denote systems of LMIs. Provided that
(9-11)–(9-12) are jointly feasible, gevp returns the global minimum lopt and
the minimizing value xopt of the vector of decision variables x. The
corresponding optimal values of the matrix variables are obtained with
dec2mat.
The argument lmisys describes the system of LMIs (9-11)–(9-13) for λ = 1. The
LMIs involving λ are called the linear-fractional constraints while (9-11)–(9-12)
are regular LMI constraints. The number of linear-fractional constraints (9-13)
is specified by nlfc. All other input arguments are optional. If an initial
feasible pair (λ0, x0) is available, it can be passed to gevp by setting linit to λ0
and xinit to x0. Note that xinit should be of length decnbr(lmisys) (the
number of decision variables). The initial point is ignored when infeasible.
Finally, the last argument target sets some target value for λ. The code terminates as soon as it has found a feasible pair (λ, x) with λ ≤ target.
Control The optional argument options gives access to certain control parameters of
Parameters the optimization code. In gevp, this is a five-entry vector organized as follows:
• options(1) sets the desired relative accuracy on the optimal value lopt (default = 10^-2).
• options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default).
• options(3) sets the feasibility radius. Its purpose and usage are as for
feasp.
• options(4) helps speed up termination. If set to an integer value J > 0, the
code terminates when the progress in λ over the last J iterations falls below
the desired relative accuracy. By progress, we mean the amount by which λ
decreases. The default value is 5 iterations.
• options(5) = 1 turns off the trace of execution of the optimization
procedure. Resetting options(5) to zero (default value) turns it back on.
Example Given
    A1 = [ -1    2 ]     A2 = [ -0.8   1.5 ]     A3 = [ -1.4   0.9 ]
         [  1   -3 ]          [  1.3  -2.7 ]          [  0.7  -2.0 ]
consider the problem of finding a single Lyapunov function V(x) = xTPx that
proves stability of
    ẋ = Ai x    (i = 1, 2, 3)
and maximizes the decay rate –dV(x)/dt. This is equivalent to minimizing α subject to

    I < P                              (9-14)
    A1^T P + P A1 < αP                 (9-15)
    A2^T P + P A2 < αP                 (9-16)
    A3^T P + P A3 < αP                 (9-17)
To set up this problem for gevp, first specify the LMIs (9-14)–(9-17) with α = 1:

setlmis([]);
p = lmivar(1,[2 1])

lmiterm([1 1 1 0],1)            % LMI #1, lhs: I
lmiterm([-1 1 1 p],1,1)         % LMI #1, rhs: P
lmiterm([2 1 1 p],1,A1,'s')     % LFC #1, lhs: A1'*P + P*A1
lmiterm([-2 1 1 p],1,1)         % LFC #1, rhs: P
lmiterm([3 1 1 p],1,A2,'s')     % LFC #2, lhs: A2'*P + P*A2
lmiterm([-3 1 1 p],1,1)         % LFC #2, rhs: P
lmiterm([4 1 1 p],1,A3,'s')     % LFC #3, lhs: A3'*P + P*A3
lmiterm([-4 1 1 p],1,1)         % LFC #3, rhs: P
lmis = getlmis
Note that the linear fractional constraints are defined last as required. To
minimize α subject to (9-15)–(9-17), call gevp by
[alpha,popt]=gevp(lmis,3)
This returns alpha = -0.122 as the optimal value (the largest decay rate is therefore 0.122). This value is achieved for

    P = [  5.58   -8.35
          -8.35   18.64 ]
Reference The solver gevp is based on Nesterov and Nemirovski’s Projective Method
described in
Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
The optimization is performed by the CMEX file fpds.mex.
hinfgs
Purpose
Synthesis of gain-scheduled H∞ controllers
         ẋ = A(p)x + B1(p)w + B2u
    P:   z = C1(p)x + D11(p)w + D12u
         y = C2x + D21w + D22u
where the time-varying parameter vector p(t) ranges in a box and is measured
in real time, hinfgs seeks an affine parameter-dependent controller
    K:   ζ̇ = AK(p)ζ + BK(p)y
         u = CK(p)ζ + DK(p)y
(Block diagram: plant P with inputs w, u and outputs z, y, in feedback with the controller K from y to u.)
gives the corresponding corner Πj of the parameter box (pv is the parameter vector description).
The controller scheduling should be performed as follows. Given the measurements p(t) of the parameters at time t,
3 Express p(t) as a convex combination of the corners Πj:

    p(t) = α1(t)Π1 + . . . + αN(t)ΠN,    αj(t) ≥ 0,    α1(t) + . . . + αN(t) = 1

4 Compute the controller state-space matrices at time t as the convex combination of the corner controllers:

    AK(t) = α1(t)AK(Π1) + . . . + αN(t)AK(ΠN)

and similarly for BK(t), CK(t), DK(t).
5 Use AK(t), BK(t), CK(t), DK(t) to update the controller state-space equations.
hinflmi
Purpose
LMI-based H∞ synthesis for continuous-time plants
Description hinflmi computes an internally stabilizing controller K(s) that minimizes the
closed-loop H∞ performance in the control loop
(Block diagram: plant P(s) with inputs w, u and outputs z, y, in feedback with the controller K(s).)
The optional argument tol specifies the relative accuracy on the computed optimal performance gopt. The default value is 10^-2. Finally,
[gopt,K,x1,x2,y1,y2] = hinflmi(P,r)
Control The optional argument options resets the following control parameters:
Parameters
• options(1) is valued in [0, 1] with default value 0. Increasing its value reduces the norm of the LMI solution R. This typically yields controllers with slower dynamics in singular problems (rank-deficient D12). This also improves closed-loop damping when P12(s) has jω-axis zeros.
• options(2): same as options(1) for S and singular problems with rank-deficient D21.
• When options(3) is set to ε > 0, reduced-order synthesis is performed whenever

    ρ(R^-1 S^-1) ≥ (1 – ε) gopt^2

The default value is ε = 10^-3.
hinfmix
Purpose
Mixed H2/H∞ synthesis with pole placement constraints
(Block diagram: plant P(s) with input w, control u, outputs z∞ and z2, and measurement y fed back through K(s).)
If T∞(s) and T2(s) denote the closed-loop transfer functions from w to z∞ and z2,
respectively, hinfmix computes a suboptimal solution of the following
synthesis problem:
Design an LTI controller K(s) that minimizes the mixed H2/H∞ criterion
    α ||T∞||∞^2 + β ||T2||2^2
subject to
• ||T∞||∞ < γ0
• ||T2||2 < ν0
• The closed-loop poles lie in some prescribed LMI region D.
Recall that ||.||∞ and ||.||2 denote the H∞ norm (RMS gain) and H2 norm of transfer functions. More details on the motivations and statement of this problem can be found in “Multi-Objective H∞ Synthesis” on page 5-15.
On input, P is the SYSTEM matrix of the plant P(s) and r is a three-entry vector
listing the lengths of z2, y, and u. Note that z∞ and/or z2 can be empty. The
four-entry vector obj = [γ0, ν0, α, β] specifies the H2/H∞ constraints and
trade-off criterion, and the remaining input arguments are optional:
• region specifies the LMI region for pole placement (the default region = []
is the open left-half plane). Use lmireg to interactively build the LMI region
description region
• dkbnd is a user-specified bound on the norm of the controller feedthrough
matrix DK (see the definition in “LMI Formulation” on page 5-16). The
default value is 100. To make the controller K(s) strictly proper, set dkbnd =
0.
• tol is the required relative accuracy on the optimal value of the trade-off criterion (the default is 10^-2).
The function hinfmix returns guaranteed H∞ and H2 performances gopt and
h2opt as well as the SYSTEM matrix K of the LMI-optimal controller. You can
also access the optimal values of the LMI variables R, S via the extra output
arguments R and S.
A variety of mixed and unmixed problems can be solved with hinfmix (see list
in “The Function hinfmix” on page 5-20). In particular, you can use hinfmix to
perform pure pole placement by setting obj = [0 0 0 0]. Note that both z∞
and z2 can be empty in such case.
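For instance, a pure H∞ design with closed-loop poles constrained to an LMI region might be set up as follows (a sketch; the plant P and the dimension vector r are assumed given):

```matlab
% Minimize ||Tinf||_inf (obj = [0 0 1 0]) with the closed-loop poles
% confined to a region sketched interactively with lmireg.
region = lmireg;                  % interactive LMI region specification
obj = [0 0 1 0];                  % weight alpha = 1 on ||Tinf||_inf^2
[gopt,h2opt,K] = hinfmix(P,r,obj,region);
```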
Reference Chilali, M., and P. Gahinet, “H∞ Design with Pole Placement Constraints: An
LMI Approach,” to appear in IEEE Trans. Aut. Contr., 1995.
Scherer, C., “Mixed H2 H∞ Control,” to appear in Trends in Control: A European
Perspective, volume of the special contributions to the ECC 1995.
hinfpar
Purpose
Extract the state-space matrices A, B1, B2, . . . from the SYSTEM matrix
representation of an H∞ plant
Description In H∞ control problems, the plant P(s) is specified by its state-space equations
x· = Ax + B1w + B2u
z = C1x + D11w + D12u
y = C2x + D21w + D22u
or their discrete-time counterparts. For design purposes, this data is stored in
a single SYSTEM matrix P created by
P = ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22])
The function hinfpar retrieves the state-space matrices A, B1, B2, . . . from P.
The second argument r specifies the size of the D22 matrix. Set r = [ny,nu]
where ny is the number of measurements (length of y) and nu is the number of
controls (length of u).
The syntax
hinfpar(P,r,'b1')
can be used to retrieve just the B1 matrix. Here the third argument should be
one of the strings 'a', 'b1', 'b2',....
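A sketch of typical usage (the state-space data a, b1, b2, c1, c2, d11, d12, d21, d22 are assumed given, and the full output list below is an assumption consistent with the description above):

```matlab
% Pack the plant into a SYSTEM matrix, then recover its state-space data.
P = ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22]);
r = [size(c2,1) size(b2,2)];    % r = [ny nu], the size of D22
[A,B1,B2,C1,C2,D11,D12,D21,D22] = hinfpar(P,r);
B1 = hinfpar(P,r,'b1');         % retrieve just the B1 matrix
```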
hinfric
Purpose
Riccati-based H∞ synthesis for continuous-time plants
Description hinfric computes an H∞ controller K(s) that internally stabilizes the control
loop
(Block diagram: plant P(s) with inputs w, u and outputs z, y, in feedback with the controller K(s).)
and minimizes the RMS gain (H∞ performance) of the closed-loop transfer
function Twz. The function hinfric implements the Riccati-based approach.
The SYSTEM matrix P contains a state-space realization
x· = Ax + B1w + B2u
z = C1x + D11w + D12u
y = C2x + D21w + D22u
of the plant P(s) and is formed by the command
P = ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22])
The row vector r specifies the dimensions of D22. Set r = [p2 m2] when y ∈ Rp2
and u ∈ Rm2.
To minimize the H∞ performance, call hinfric with the syntax
gopt = hinfric(P,r)
This returns the smallest achievable RMS gain from w to z over all stabilizing
controllers K(s). An H∞-optimal controller K(s) with performance gopt is
computed by
[gopt,K] = hinfric(P,r)
The H∞ performance can also be minimized within prescribed bounds gmin and
gmax and with relative accuracy tol by typing
[gopt,K] = hinfric(P,r,gmin,gmax,tol)
Setting both gmin and gmax to γ > 0 tests whether the H∞ performance γ is
achievable and returns a controller K(s) such that ||Twz||∞ < γ when γ is
achievable. Setting gmin or gmax to zero is equivalent to specifying no lower or
upper bound.
Finally, the full syntax
[gopt,K,x1,x2,y1,y2,Preg] = hinfric(P,r)
Remark Setting gmin = gmax = Inf computes the optimal LQG controller.
Reference Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “State-Space
Solutions to Standard H2 and H∞ Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
Gahinet, P., and A.J. Laub, “Reliable Computation of γopt in Singular H∞ Control,” in Proc. Conf. Dec. Contr., Lake Buena Vista, Fl., 1994, pp. 1527–1532.
Gahinet, P., “Explicit Controller Formulas for LMI-based H∞ Synthesis,” in
Proc. Amer. Contr. Conf., 1994, pp. 2396–2400.
lmiedit
Purpose
Specify or display systems of LMIs as MATLAB expressions
Syntax lmiedit
Description lmiedit is a graphical user interface for the symbolic specification of LMI
problems. Typing lmiedit calls up a window with two editable text areas and various pushbuttons used to specify the LMI system.
Once the LMI system is fully specified, you can perform the following
operations by pressing the corresponding pushbutton:
• Visualize the sequence of lmivar/lmiterm commands needed to describe this
LMI system (view commands buttons)
• Conversely, display the symbolic expression of the LMI system produced by
a particular sequence of lmivar/lmiterm commands (click the describe...
buttons)
• Save the symbolic description of the LMI system as a MATLAB string (save
button). This description can be reloaded later on by pressing the load
button
• Read a sequence of lmivar/lmiterm commands from a file (read button). The
matrix expression of the LMI system specified by these commands is then
displayed by clicking on describe the LMIs...
• Write in a file the sequence of lmivar/lmiterm commands needed to specify
a particular LMI system (write button)
• Generate the internal representation of the LMI system by pressing create.
The result is written in a MATLAB variable with the same name as the LMI
system
Remark Editable text areas have built-in scrolling capabilities. To activate the scroll
mode, click in the text area, maintain the mouse button down, and move the
mouse up or down. The scroll mode is only active when all visible lines have
been used.
Example An example and some limitations of lmiedit can be found in “The LMI Editor
lmiedit” on page 8-16.
lmiinfo
Purpose
Interactively retrieve information about the variables and term content of
LMIs
Syntax lmiinfo(lmisys)
Description lmiinfo provides qualitative information about the system of LMIs lmisys.
This includes the type and structure of the matrix variables, the number of
diagonal blocks in the inner factors, and the term content of each block.
lmiinfo is an interactive facility where the user seeks specific pieces of
information. General LMIs are displayed as
N' * L(x) * N < M' * R(x) * M
where N,M denote the outer factors and L,R the left and right inner factors. If
the outer factors are missing, the LMI is simply written as
L(x) < R(x)
Information on the block structure and term content of L(x) and R(x) is also
available. The term content of a block is symbolically displayed as
C1 + A1*X2*B1 + B1'*X2'*A1' + a2*X1 + x3*Q1
The index j in Aj, Bj, Cj, Qj is a dummy label. Hence C1 may appear in
several blocks or several LMIs without implying any connection between the
corresponding constant terms. Exceptions to this rule are the notations
where the matrix variables are X of Type 1, Y of Type 2, and z scalar. If this
LMI is described in lmis, information about X and the LMI block structure can
be obtained as follows:
lmiinfo(lmis)
LMI ORACLE
------
?> v
-------
?> l
0 < R(x)
where the inner factor(s) has 2 diagonal block(s)
?> w
------
?> q
Note that the prompt symbol is ?> and that answers are either indices or
letters. All blocks can be displayed at once with option (w), or you can prompt
for specific blocks with option (b).
Remark lmiinfo does not provide access to the numerical value of LMI coefficients.
lminbr
Purpose
Return the number of LMIs in an LMI system
Syntax k = lminbr(lmisys)
Description lminbr returns the number k of linear matrix inequalities in the LMI problem
described in lmisys.
lmireg
Purpose
Specify LMI regions for pole placement purposes
lmiterm
Purpose
Specify the term content of LMIs
Syntax lmiterm(termID,A,B,flag)
Description lmiterm specifies the term content of an LMI one term at a time. Recall that
LMI term refers to the elementary additive terms involved in the block-matrix
expression of the LMI. Before using lmiterm, the LMI description must be
initialized with setlmis and the matrix variables must be declared with
lmivar. Each lmiterm command adds one extra term to the LMI system
currently described.
LMI terms are one of the following entities:
• outer factors
• constant terms (fixed matrices)
• variable terms AXB or AXTB where X is a matrix variable and A and B are
given matrices called the term coefficients.
When describing an LMI with several blocks, remember to specify only the
terms in the blocks on or below the diagonal (or equivalently, only the terms
in blocks on or above the diagonal). For instance, specify the blocks (1,1), (2,1),
and (2,2) in a two-block LMI.
In calling lmiterm, termID is a four-entry vector of integers specifying the term location and the matrix variable involved.

    termID(1) = ±p

where positive p is for terms on the left-hand side of the p-th LMI and negative p is for terms on the right-hand side of the p-th LMI.
Recall that, by convention, the left-hand side always refers to the smaller side of the LMI. The index p is relative to the order of declaration and corresponds to the identifier returned by newlmi.
Note that identity outer factors and zero constant terms need not be specified.
The extra argument flag is optional and concerns only conjugated expressions
of the form
    (AXB) + (AXB)^T = AXB + B^T X^T A^T
in diagonal blocks. Setting flag = 's' allows you to specify such expressions
with a single lmiterm command. For instance,
lmiterm([1 1 1 X],A,1,'s')
adds the symmetrized expression AX + XTAT to the (1,1) block of the first LMI
and summarizes the two commands
lmiterm([1 1 1 X],A,1)
lmiterm([1 1 1 X],1,A')
Aside from being convenient, this shortcut also results in a more efficient
representation of the LMI.
lmivar
Purpose
Specify the matrix variables in an LMI problem
Syntax X = lmivar(type,struct)
[X,n,sX] = lmivar(type,struct)
Description lmivar defines a new matrix variable X in the LMI system currently described.
The optional output X is an identifier that can be used for subsequent reference
to this new variable.
The first argument type selects among available types of variables and the
second argument struct gives further information on the structure of X
depending on its type. Available variable types include:
type=1: Symmetric matrices with a block-diagonal structure. Each diagonal
block is either full (arbitrary symmetric matrix), scalar (a multiple of the
identity matrix), or identically zero.
If X has R diagonal blocks, struct is an R-by-2 matrix where
• struct(r,1) is the size of the r-th block
• struct(r,2) is the type of the r-th block (1 for full, 0 for scalar, -1 for zero block).
type=2: Full m-by-n rectangular matrix. Set struct = [m,n] in this case.
type=3: Other structures. With Type 3, each entry of X is specified as zero or
±xn where xn is the n-th decision variable.
Accordingly, struct is a matrix of the same dimensions as X such that
• struct(i,j)=0 if X(i, j) is a hard zero
• struct(i,j)=n if X(i, j) = xn
• struct(i,j)=-n if X(i, j) = –xn
The optional outputs n and sX give the number of decision variables and the entry-wise dependence of X on the decision variables x1, . . ., xn. For more details, see Example 2 below and “Structured Matrix Variables” on page 8-33.
Example 1 Consider an LMI system with three matrix variables X1, X2, X3 such that
• X1 is a 3 × 3 symmetric matrix (unstructured),
• X2 is a 2 × 4 rectangular matrix (unstructured),
• X3 = [ ∆    0     0
         0    δ1    0
         0    0     δ2 I2 ]
Example 2 Combined with the extra outputs n and sX of lmivar, Type 3 allows you to
specify fairly complex matrix variable structures. For instance, consider a
matrix variable X with structure
    X = [ X1   0
          0    X2 ]
where X1 and X2 are 2-by-3 and 3-by-2 rectangular matrices, respectively. You
can specify this structure as follows:

1 Define X1 and X2 as Type 2 variables and retrieve their decision variable content via the third output of lmivar:

setlmis([])
[X1,n,sX1] = lmivar(2,[2 3])
[X2,n,sX2] = lmivar(2,[3 2])
The outputs sX1 and sX2 give the decision variable content of X1 and X2:
sX1
sX1 =
1 2 3
4 5 6
sX2
sX2 =
7 8
9 10
11 12
For instance, sX2(1,1)=7 means that the (1,1) entry of X2 is the seventh
decision variable.
2 Use Type 3 to specify the matrix variable X and define its structure in terms
of those of X1 and X2 :
[X,n,sX] = lmivar(3,[sX1,zeros(2);zeros(3),sX2])
sX =
1 2 3 0 0
4 5 6 0 0
0 0 0 7 8
0 0 0 9 10
0 0 0 11 12
See Also setlmis, lmiterm, getlmis, lmiedit, skewdec, symdec, delmvar, setmvar
ltiss
Purpose
Extract the state-space realization of a linear system from its SYSTEM matrix
description
Description Given a SYSTEM matrix sys created with ltisys, this function returns the
state-space matrices A, B, C, D, E.
Example The system G(s) = –2/(s + 1) is specified in transfer function form by

sys = ltisys('tf',-2,[1 1])

and its state-space realization is retrieved by

[a,b,c,d] = ltiss(sys)
a =
    -1
b =
    1
c =
    -2
d =
    0
Important If the output argument e is not supplied, ltiss returns the realization (E^-1 A, E^-1 B, C, D).
While this is immaterial for systems with E = I, this should be remembered
when manipulating descriptor systems.
ltisys
Purpose
Pack the state-space realization (A, B, C, D, E) into a single SYSTEM matrix
Description The function ltisys stores continuous-time systems

    E ẋ = Ax + Bu,    y = Cx + Du

or discrete-time systems

    E xk+1 = Axk + Buk,    yk = Cxk + Duk

in the single data structure

    sys = [ A + j(E – I)    B       n  ]
          [ C               D       0  ]
          [ 0               0    -Inf  ]
The upper right entry n gives the number of states (i.e., A ∈ R^(n×n)). This data
structure is called a SYSTEM matrix. When omitted, d and e are set to the default
values D = 0 and E = I.
The syntax sys = ltisys(a) specifies the autonomous system ẋ = Ax while
sys = ltisys(a,e) specifies E ẋ = Ax. Finally, you can specify SISO systems
by their transfer function
    G(s) = n(s)/d(s)
The syntax is then
sys = ltisys('tf',num,den)
where num and den are the vectors of coefficients of the polynomials n(s) and
d(s) in descending order.
Example The descriptor system

    ε ẋ = –x + 2u
    y = x

is created by

sys = ltisys(-1,2,1,0,eps)

Similarly, the SISO system with transfer function G(s) = (s + 4)/(s + 7) is specified by

sys = ltisys('tf',[1 4],[1 7])
ltitf
Purpose
Compute the transfer function of a SISO system
Example The transfer function of the system

    ẋ = –x + 2u,    y = x – u

is given by

sys = ltisys(-1,2,1,-1)
[n,d] = ltitf(sys)
n =
    -1 1
d =
    1 1

The vectors n and d represent the function

    G(s) = (–s + 1)/(s + 1)
See Also ltisys, ltiss, sinfo
magshape
Purpose
Graphical specification of the shaping filters
Syntax magshape
Description magshape is a graphical user interface to construct the shaping filters needed
in loop-shaping design. Typing magshape brings up a window where the
magnitude profile of each filter can be specified graphically with the mouse.
Once a profile is sketched, magshape computes an interpolating SISO filter and
returns its state-space realization. Several filters can be specified in the same
magshape window as follows:
1 First enter the filter names after Filter names. This tells magshape to write
the SYSTEM matrices of the interpolating filters in MATLAB variables with
these names.
2 Click on the button showing the name of the filter you are about to specify.
To sketch its profile, use the mouse to specify a few characteristic points that
define the asymptotes and the slope changes. For accurate fitting by a
rational filter, make sure that the slopes are integer multiples of ±20 dB/dec.
3 Specify the filter order after filter order and click on the fit data button
to perform the fitting. The magnitude response of the computed filter is then
drawn as a solid line and its SYSTEM matrix is written in the MATLAB
variable with the same name.
4 Adjustments can be made by moving or deleting some of the specified points.
Click on delete point or move point to activate those modes. You can also
increase the order for a better fit. Press fit data to recompute the
interpolating filter.
If some of the filters are already defined in the MATLAB environment, their
magnitude response is plotted when the button bearing their name is clicked.
This allows for easy modification of existing filters.
Remark See sderiv for the specification of nonproper shaping filters with a derivative
action.
Acknowledgment The authors are grateful to Prof. Gary Balas and Prof. Andy Packard for providing the cepstrum algorithm used for phase reconstruction.
matnbr
Purpose
Return the number of matrix variables in a system of LMIs
Syntax K = matnbr(lmisys)
Description matnbr returns the number K of matrix variables in the LMI problem described
by lmisys.
mat2dec
Purpose
Return the vector of decision variables corresponding to particular values of
the matrix variables
Description Given an LMI system lmisys with matrix variables X1, . . ., XK and given
values X1,...,Xk of X1, . . ., XK, mat2dec returns the corresponding value
decvec of the vector of decision variables. Recall that the decision variables are
the independent entries of the matrices X1, . . ., XK and constitute the free
scalar variables in the LMI problem.
This function is useful, for example, to initialize the LMI solvers mincx or gevp.
Given an initial guess for X1, . . ., XK, mat2dec forms the corresponding vector
of decision variables xinit.
An error occurs if the dimensions and structure of X1,...,Xk are inconsistent
with the description of X1, . . ., XK in lmisys.
Example Consider an LMI system with two matrix variables X and Y such that
• X is a symmetric block-diagonal matrix with one 2-by-2 full block and one 2-by-2 scalar block
• Y is a 2-by-3 rectangular matrix
If lmis describes this system, particular values of X and Y, for instance

Xval = [1 3 0 0;3 -1 0 0;0 0 5 0;0 0 0 5];
Yval = [1 2 3;4 5 6];

are turned into the corresponding decision vector by

decv = mat2dec(lmis,Xval,Yval);
decv'
ans =
    1 3 -1 5 1 2 3 4 5 6
Note that decv is of length 10 since Y has 6 free entries while X has 4
independent entries due to its structure. Use decinfo to obtain more
information about the decision variable distribution in X and Y.
mincx
Purpose
Minimize a linear objective under LMI constraints
The remaining arguments are optional. The vector xinit is an initial guess of
the minimizer xopt. It is ignored when infeasible, but may speed up computations otherwise. Note that xinit should be of the same length as c. As for
target, it sets some target for the objective value. The code terminates as soon
as this target is achieved, that is, as soon as some feasible x such that
cTx ≤ target is found. Set options to [] to use xinit and target with the
default options.
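As an illustrative sketch, suppose an LMI system lmis with a single n-by-n symmetric variable X has been declared, and the objective is Trace(X). Since mat2dec maps matrix values to decision vectors, c = mat2dec(lmis,eye(n)) yields cTx = Trace(X):

```matlab
n = 3;                            % dimension of X (assumption)
c = mat2dec(lmis,eye(n));         % objective vector such that c'*x = Trace(X)
options = [1e-5 0 0 0 0];         % tighter relative accuracy, other defaults
[copt,xopt] = mincx(lmis,c,options);
Xopt = dec2mat(lmis,xopt,X);      % recover the optimal matrix variable
```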
Control Parameters The optional argument options gives access to certain control parameters of the optimization code. In mincx, this is a five-entry vector organized as follows:
• options(1) sets the desired relative accuracy on the optimal value lopt (default = 10^-2).
• options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default).
• options(3) sets the feasibility radius. Its purpose and usage are as for
feasp.
Tip for Speed-Up In LMI optimization, the computational overhead per iteration mostly comes from solving a least-squares problem of the form
min_x ||Ax - b||
where x is the vector of decision variables. Two methods are used to solve this
problem: Cholesky factorization of ATA (default), and QR factorization of A
when the normal equation becomes ill conditioned (when close to the solution
typically). The message
* switching to QR
is then displayed when this switch takes place.
Memory Problems QR-based linear algebra (see above) is not only expensive in terms of computational overhead, but also in terms of memory requirements. As a result, the amount of memory required by
QR may exceed your swap space for large problems with numerous LMI
constraints. In such case, MATLAB issues the error
??? Error using ==> pds
Out of memory. Type HELP MEMORY for your options.
You should then ask your system manager to increase your swap space or, if no additional swap space is available, set options(4) = 1. This will prevent the switch to QR; mincx then terminates when Cholesky factorization fails due to numerical instabilities.
Reference The solver mincx implements Nesterov and Nemirovski’s Projective Method as
described in
Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp.
840–844.
The optimization is performed by the C-MEX file pds.mex.
msfsyn
Purpose Multi-model/multi-objective state-feedback synthesis
Description Given an LTI plant with state-space equations
ẋ = Ax + B1w + B2u
z∞ = C1x + D11w + D12u
z2 = C2x + D22u
msfsyn computes a state-feedback control u = Kx that:
• Maintains the RMS gain (H∞ norm) of the closed-loop transfer function T∞ from w to z∞ below some prescribed value γ0 > 0
• Maintains the H2 norm of the closed-loop transfer function T2 from w to z2 below some prescribed value ν0 > 0
• Minimizes an H2/H∞ trade-off criterion of the form
α ||T∞||∞² + β ||T2||2²
• Places the closed-loop poles inside the LMI region specified by region (see
lmireg for the specification of such regions). The default is the open left-half
plane.
Set r = size(d22) and obj = [γ0, ν0, α, β] to specify the problem dimensions
and the design parameters γ0, ν0, α, and β. You can perform pure pole
placement by setting obj = [0 0 0 0]. Note also that z∞ or z2 can be empty.
On output, gopt and h2opt are the guaranteed H∞ and H2 performances, K is the optimal state-feedback gain, Pcl the closed-loop transfer function from w to (z∞; z2), and X the corresponding Lyapunov matrix.
For a polytopic or parameter-dependent plant with equations
ẋ = A(t)x + B1(t)w + B2(t)u
z∞ = C1(t)x + D11(t)w + D12(t)u
z2 = C2(t)x + D22(t)u
In this context, msfsyn seeks a state-feedback gain that robustly enforces the
specifications over the entire polytope of plants. Note that polytopic plants
should be defined with psys and that the closed-loop system Pcl is itself
polytopic in such problems. Affine parameter-dependent plants are also
accepted and automatically converted to polytopic models.
mubnd
Purpose Compute an upper bound on the µ structured singular value
Description mubnd computes the mixed-µ upper bound mu = ν∆(M) for the matrix M ∈ Cm×n
and the norm-bounded structured perturbation ∆ = diag(∆1, . . ., ∆n) (see
“Structured Singular Value” on page 3-17). The reciprocal of mu is a guaranteed
well-posedness margin for the algebraic loop q = Mw, w = ∆q (block diagram omitted).
muperf
Purpose Estimate the robust H∞ performance of uncertain dynamical systems
Description muperf estimates the robust H∞ performance (worst-case RMS gain) of the uncertain LTI system obtained by placing the plant P(s) in feedback with the uncertainty ∆(s) (block diagram with input u and output y omitted). The syntax
grob = muperf(P,delta)
returns grob = Inf when the interconnection is not robustly stable.
Alternatively,
[margin,peakf] = muperf(P,delta,g)
mustab
Purpose Estimate the robust stability margin of uncertain LTI systems
Description mustab computes an estimate margin of the robust stability margin for the feedback interconnection of G(s) with the uncertainty ∆(s) (block diagram omitted). The command
[margin,peakf] = mustab(sys,delta,freqs)
returns the guaranteed stability margin margin and the frequency peakf
where the margin is weakest. The uncertainty ∆(s) is specified by delta (see
ublock and udiag). The optional third argument freqs is a user-supplied vector of frequencies at which to evaluate ν∆(G(jω)).
To obtain the D, G scaling matrices at all tested frequencies, use the syntax
[margin,peak,fs,ds,gs] = mustab(sys,delta)
The D, G scalings at the i-th tested frequency are then extracted by
D = getdg(ds,i)
G = getdg(gs,i)
Caution margin may overestimate the robust stability margin when the frequency grid
freqs is too coarse or when the µ upper bound is discontinuous due to real
parameter uncertainty. See “Caution” on page 3-21 for more details.
newlmi
Purpose Attach an identifying tag to LMIs
Description newlmi adds a new LMI to the LMI system currently described and returns an
identifier tag for this LMI. This identifier can be used in lmiterm, showlmi, or
dellmi commands to refer to the newly declared LMI. Tagging LMIs is optional
and only meant to facilitate code development and readability.
Identifiers can be given mnemonic names to help keep track of the various
LMIs. Their value is simply the ranking of each LMI in the system (in the order
of declaration). They prove useful when some LMIs are deleted from the LMI
system. In such cases, the identifiers are the safest means of referring to the
remaining LMIs.
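A minimal sketch of tagged declarations (the matrix A and the LMI terms are illustrative):

```matlab
setlmis([])
P = lmivar(1,[2 1]);             % 2-by-2 symmetric variable P
stab = newlmi;                   % tag the stability LMI
lmiterm([stab 1 1 P],1,A,'s');   % P*A + A'*P < 0
pos = newlmi;                    % tag the positivity LMI
lmiterm([-pos 1 1 P],1,1);       % P > 0 (right-hand-side term)
lmis = getlmis;
```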
norminf
Purpose Compute the root-mean-squares (RMS) gain of continuous-time systems
Description Given the SYSTEM matrix g of an LTI system with state-space equations
E dx/dt = Ax + Bu
y = Cx + Du
and transfer function G(s) = D + C(sE – A)–1B, norminf computes the peak gain
||G||∞ = sup_ω σmax(G(jω))
of the frequency response G(jω). When G(s) is stable, this peak gain coincides
with the RMS gain or H∞ norm of G, that is,
||G||∞ = sup_{u ∈ L2, u ≠ 0} ||y||L2 / ||u||L2
The function norminf returns the peak gain and the frequency at which it is
attained in gain and peakf, respectively. The optional input tol specifies the
relative accuracy required on the computed RMS gain (the default value is
10–2).
Example The peak gain of
G(s) = 100 / (s² + 0.01s + 100)
is given by
norminf(ltisys('tf',100,[1 0.01 100]))
ans =
1.0000e+03
norm2
Purpose Compute the H2 norm of a continuous-time LTI system
Description Given a stable continuous-time LTI system
G: ẋ = Ax + Bw, y = Cx
driven by a white noise w with unit covariance, norm2 computes the H2 norm
(LQG performance) of G defined by
||G||2² = lim_{T→∞} E{ (1/T) ∫₀ᵀ yᵀ(t) y(t) dt }
       = (1/2π) ∫₋∞^∞ Trace( G(jω)ᴴ G(jω) ) dω
This norm is computed as
||G||2² = Trace( C P Cᵀ )
where P is the solution of the Lyapunov equation
AP + PAᵀ + BBᵀ = 0
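For instance, for the scalar system ẋ = −x + w, y = x, the Lyapunov equation −2P + 1 = 0 gives P = 1/2, so the H2 norm is 1/√2. A sketch:

```matlab
g = ltisys(-1,1,1,0);   % x' = -x + w ,  y = x
norm2(g)                % should be close to 1/sqrt(2), i.e. 0.7071
```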
pdlstab
Purpose Assess the robust stability of a polytopic or parameter-dependent system
For polytopic systems, the SYSTEM matrix takes values in the polytope
( A + jE , B ; C , D ) = Σᵢ₌₁ⁿ αᵢ ( Aᵢ + jEᵢ , Bᵢ ; Cᵢ , Dᵢ ) ,  αᵢ ≥ 0 ,  Σᵢ₌₁ⁿ αᵢ = 1     (9-19)
Several options and control parameters are accessible through the optional
argument options:
E(p) ẋ = A(p) x     (9-20)
Eᵀ(p) ż = Aᵀ(p) z     (9-21)
pdsimul
Purpose Time response of a parameter-dependent system along a given parameter trajectory
Syntax pdsimul(pds,'traj',tf,'ut',xi,options)
[t,x,y] = pdsimul(pds,pv,'traj',tf,'ut',xi,options)
The affine system pds is specified with psys. The function pdsimul also accepts
the polytopic representation of such systems as returned by aff2pol(pds) or
hinfgs. The final time and initial state vector can be reset through tf and xi
(their respective default values are 5 seconds and 0). Finally, options gives
access to the parameters controlling the ODE integration (type help gear for
details).
When invoked without output arguments, pdsimul plots the output trajectories
y(t). Otherwise, it returns the vector of integration time points t as well as the
state and output trajectories x,y.
popov
Purpose Perform the Popov robust stability test
Description popov uses the Popov criterion to test the robust stability of dynamical systems
with possibly nonlinear and/or time-varying uncertainty (see “The Popov
Criterion” on page 3-24 for details). The uncertain system must be described as
the interconnection of a nominal LTI system sys and some uncertainty delta
(see “Norm-Bounded Uncertainty” on page 2-27). Use ublock and udiag to
specify delta.
The command
[t,P,S,N] = popov(sys,delta)
tests the robust stability of this interconnection; robust stability is guaranteed if t < 0.
psinfo
Purpose Inquire about polytopic or parameter-dependent systems created with psys
Syntax psinfo(ps)
[type,k,ns,ni,no] = psinfo(ps)
pv = psinfo(ps,'par')
sk = psinfo(ps,'sys',k)
sys = psinfo(ps,'eval',p)
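A usage sketch (the two vertex systems are illustrative):

```matlab
s1 = ltisys([-1 0;0 -2],[1;1],[1 0],0);
s2 = ltisys([-2 1;0 -3],[1;0],[0 1],0);
ps = psys([s1 s2]);               % polytopic model with two vertices
[type,k,ns,ni,no] = psinfo(ps);   % k = 2 vertices, ns = 2 states,
                                  % ni = 1 input, no = 1 output
s = psinfo(ps,'sys',1);           % retrieve the first vertex system
```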
psys
Purpose Specify polytopic or parameter-dependent linear systems
Description psys specifies state-space models where the state-space matrices can be
uncertain, time-varying, or parameter-dependent.
Two types of uncertain state-space models can be manipulated in the LMI
Control Toolbox:
• Polytopic systems
E(t) ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u
whose SYSTEM matrix takes values in a fixed polytope:
( A(t) + jE(t) , B(t) ; C(t) , D(t) ) ∈ Co{ S1, . . ., Sk }
where Sᵢ = ( Aᵢ + jEᵢ , Bᵢ ; Cᵢ , Dᵢ ) and Co denotes the convex hull:
Co{S1, . . ., Sk} = { Σᵢ₌₁ᵏ αᵢ Sᵢ : αᵢ ≥ 0 , Σᵢ₌₁ᵏ αᵢ = 1 }
• Affine parameter-dependent systems
E(p) ẋ = A(p)x + B(p)u
y = C(p)x + D(p)u
where A(.), B(.), . . ., E(.) are fixed affine functions of some vector p = (p1, . . ., pn) of real parameters, i.e.,
S(p) = ( A(p) + jE(p) , B(p) ; C(p) , D(p) ) = S0 + p1 S1 + . . . + pn Sn
where S0, S1, . . ., Sn are given SYSTEM matrices. The parameters pi can be
time-varying or constant but uncertain.
Both types of models are specified with the function psys. The argument
syslist lists the SYSTEM matrices Si characterizing the polytopic value set or
parameter dependence. In addition, the description pv of the parameter vector
(range of values and rate of variation) is required for affine parameter-
dependent models (see pvec for details). Thus, a polytopic model with vertex
systems S1, . . ., S4 is created by
pols = psys([s1,s2,s3,s4])
Do not forget the 0 in the second and third commands. This 0 marks the
independence of the E matrix on p1, p2 . Typing
s0 = ltisys(a0)
s1 = ltisys(a1)
s2 = ltisys(a2)
ps = psys(pv,[s0 s1 s2])
pvec
Purpose Specify the range and rate of variation of uncertain or time-varying parameters
Syntax pv = pvec('box',range,rates)
pv = pvec('pol',vertices)
For the type 'box', each parameter pj ranges in an interval given by the j-th row of range, and the j-th row of rates bounds its rate of variation:
ν̲j ≤ dpj/dt ≤ ν̄j
Set ν̲j = -Inf and ν̄j = Inf if pj(t) can vary arbitrarily fast or discontinuously.
The type 'pol' corresponds to parameter vectors p ranging in a polytope of the
parameter space Rn. This polytope is defined by a set of vertices V1, . . ., Vn
corresponding to “extremal” values of the vector p. Such parameter vectors are
declared by the command
pv = pvec('pol',[v1,v2, . . ., vn])
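As a concrete 'box' declaration (the rate bounds are illustrative):

```matlab
range = [-1 2;20 50];      % -1 <= p1 <= 2 ,  20 <= p2 <= 50
rates = [-0.1 0.1;-5 5];   % assumed bounds on dp1/dt and dp2/dt
pv = pvec('box',range,rates);
```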
(figure: the box -1 ≤ p1 ≤ 2, 20 ≤ p2 ≤ 50 in the (p1, p2) plane)
pvinfo
Purpose Describe a parameter vector specified with pvec
Description pvinfo retrieves information about a vector p = (p1, . . ., pn) of real parameters
declared with pvec and stored in pv. The command pvinfo(pv) displays the
type of parameter vector ('box' or 'pol'), the number n of scalar parameters,
and for the type 'pol', the number of vertices used to specify the parameter
range.
For the type 'box':
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)
returns the bounds on the value and rate of variations of the j-th real
parameter pj. Specifically,
pmin ≤ pj(t) ≤ pmax ,  dpmin ≤ dpj/dt ≤ dpmax
For the type 'pol':
pvinfo(pv,'eval',c)
returns the value of the parameter vector p given its barycentric coordinates c
with respect to the polytope vertices (V1, . . .,Vk). The vector c must be of length
k and have nonnegative entries. The corresponding value of p is then given by
p = ( Σᵢ₌₁ᵏ cᵢ Vᵢ ) / ( Σᵢ₌₁ᵏ cᵢ )
quadperf
Purpose Compute the quadratic H∞ performance of a polytopic or parameter-dependent system
Description The quadratic H∞ performance of the system
E(t) ẋ = A(t)x + B(t)u ,  y = C(t)x + D(t)u     (9-22)
is the smallest γ > 0 such that
||y||L2 ≤ γ ||u||L2     (9-23)
for all input u(t) with bounded energy. A sufficient condition for (9-23) is the
existence of a quadratic Lyapunov function
V(x) = xTPx, P>0
such that
∀u ∈ L2 ,  dV/dt + yᵀy − γ²uᵀu < 0
• If options(1)=1, perf is the largest portion of the parameter box where the quadratic RMS gain remains smaller than the positive value g (for affine parameter-dependent systems only). The default value is 0.
quadstab
Purpose Quadratic stability of polytopic or affine parameter-dependent systems
Example See “Example 3.1” on page 3-7 and “Example 3.2” on page 3-8.
ricpen
Purpose Solve continuous-time Riccati equations (CARE)
Syntax X = ricpen(H,L)
[X1,X2] = ricpen(H,L)
Description This function computes the stabilizing solution X of the Riccati equation
AᵀXE + EᵀXA + EᵀXFXE − (EᵀXB + S) R⁻¹ (BᵀXE + Sᵀ) + Q = 0     (9-24)
This solution is computed from the matrix pencil
H − λL = [ A    F    B ]       [ E  0   0 ]
         [ −Q  −Aᵀ  −S ]  − λ  [ 0  Eᵀ  0 ]
         [ Sᵀ   Bᵀ   R ]       [ 0  0   0 ]
The matrices H and L are specified as input arguments. For solvability, the
pencil H – λL must have no finite generalized eigenvalue on the imaginary
axis.
When called with two output arguments, ricpen returns two matrices X1, X2 such that [X1; X2] has orthonormal columns and X = X2 X1⁻¹ is the stabilizing solution of (9-24).
If X1 is invertible, the solution X is returned directly by the syntax
X = ricpen(H,L).
ricpen is adapted from care and implements the generalized Schur method for
solving CAREs.
Reference Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” IEEE
Trans. Aut. Contr., AC–24 (1979), pp. 913–921.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
sadd
Purpose Parallel interconnection of linear systems
Syntax sys = sadd(g1,g2,...)
Description sadd forms the parallel interconnection
(block diagram: the outputs of G1, G2, . . ., Gn are summed)
of the linear systems G1, . . ., Gn. The arguments g1, g2,... are either SYSTEM
matrices (dynamical systems) or constant matrices (static gains). In addition,
one (and at most one) of these systems can be polytopic or parameter-
dependent, in which case the resulting system sys is of the same nature.
In terms of transfer functions, sadd performs the sum
G(s) = G1(s) + G2(s) + . . .+ Gn(s)
sbalanc
Purpose Numerically balance the state-space realization of a linear system
Description sbalanc computes a diagonal invertible matrix T that balances the state-space
realization (A, B, C) by reducing the magnitude of the off-diagonal entries of A
and by scaling the norms of B and C. The resulting realization is of the form
(T⁻¹AT, T⁻¹B, CT)
The optional argument condnbr specifies an upper bound on the condition number of T. The default value is 10^8.
Example Consider the state-space realization
a =
-1000 -250000
1 0
b =
1
0
c =
-999 -249999
d =
1
To reduce the numerical range of this data and improve numerical stability of
subsequent computations, you can balance this realization by
[a,b,c]=sbalanc(a,b,c)
a =
-1000.00 -3906.25
64.00 0
b =
64.00
0
c =
-15.61 -61.03
sconnect
Purpose Specify general control structures or system interconnections
Description sconnect describes a control loop or system interconnection by specifying:
• The exogenous inputs, i.e., what signals enter the loop or interconnection
• The output signals, i.e., the signals generated by the loop or interconnection
• How the inputs of each dynamical system relate to the exogenous inputs and
to the outputs of other systems.
• The first argument inputs is a string listing the exogenous inputs. For
instance, inputs='r(2), d' specifies two inputs, a vector r of size 2 and a
scalar input d
• The second argument outputs is a string listing the outputs generated by the
control loop. Outputs are defined as combinations of the exogenous inputs
and the outputs of the dynamical systems. For instance, outputs='e=r-S;
S+d' specifies two outputs e = r – y and y + d where y is the output of the
system of name S.
• The third argument Kin names the controller and specifies its inputs. For
instance, Kin='K:e' inserts a controller of name K and input e. If no name is
specified as in Kin='e', the default name K is given to the controller.
9-107
sconnect
• The remaining arguments come in pairs and specify, for each known LTI
system in the loop, its input list Gkin and its SYSTEM matrix gk. The input list
is a string of the form
system name : input1 ; input2 ; ... ; input n
For instance, G1in='S: K(2);d' inserts a system called S whose inputs
consist of the second output of the controller K and of the input signal d.
Note that the names given to the various systems are immaterial provided that
they are used consistently throughout the definitions.
Example 1
(block diagram: r drives the series connection of 1/(s+1) and 1/(s+2), producing y)
Note that the controller argument is set to [] to mark the absence of a controller. The same result
S = smult(sys1,sys2)
Example 2
(block diagram: unity-feedback tracking loop in which C(s) drives the plant 1/(s+1); e = r − y, u is the controller output)
The standard plant associated with this simple tracking loop is given by
g = ltisys('tf',1,[1 1])
P = sconnect('r','y=G','C:r-y','G:C',g)
Example 3
(block diagram: feedback loop of controller K and plant G; Wd and Wn filter the disturbance d and the noise n; W1, W2, W3 weight the error e, the control signal, and the plant output, producing z1, z2, z3)
If the SYSTEM matrices of the system G and filters W1, W2, W3, Wd, Wn are stored
in the variables g, w1, w2, w3, wd, wn, respectively, the corresponding
standard plant P(s) is formed by
inputs = 'r;n;d'
outputs = 'W1;W2;W3'
Kin = 'K: e=r-y-Wn'
W3in = 'W3: y=G+Wd'
[P,r] = sconnect(inputs,outputs,Kin,'G:K',g,'W1:e',w1,...
'W2:K',w2,W3in,w3,'Wd:d',wd,'Wn:n',wn)
sderiv
Purpose Apply proportional-derivative action to some inputs/outputs of an LTI system
Description sderiv multiplies selected inputs and/or outputs of the LTI system sys by the
proportional-derivator ns + d. The coefficients n and d are specified by setting
pd = [n , d]
The second argument chan lists the input and output channels to be filtered by
ns + d. Input channels are denoted by their ranking preceded by a minus sign.
On output, dsys is the SYSTEM matrix of the corresponding interconnection. An
error is issued if the resulting system is not proper.
Example Consider a SISO loop-shaping problem where the plant P(s) has three outputs
corresponding to the transfer functions S, KS, and T . Given the shaping filters
w1(s) = 1/s ,  w2(s) = 100 ,  w3(s) = 1 + 0.01s ,
the augmented plant associated with the criterion
|| ( w1 S ; w2 KS ; w3 T ) ||∞
is formed by
w1 = ltisys('tf',1,[1 0])
w2 = 100
paug = smult(p,sdiag(w1,w2,1))
paug = sderiv(paug,3,[0.01 1])
This last command multiplies the third output of P by the filter w3.
sdiag
Purpose Append (concatenate) linear systems
Syntax g = sdiag(g1,g2,...)
Description sdiag returns the system obtained by stacking up the inputs and outputs of the
systems g1,g2,... as in the diagram.
(block diagram: G1, . . ., Gn stacked diagonally, with u = (u1, . . ., un) and y = (y1, . . ., yn))
If G1(s),G2 (s), . . . are the transfer functions of g1,g2,..., the transfer function
of g is
G(s) = diag( G1(s), G2(s), . . . )
The function sdiag takes up to 10 input arguments. One (and at most one) of
the systems g1,g2,... can be polytopic or parameter-dependent, in which case
g is of the same nature.
Example Let p be a system with two inputs u1, u2 and three outputs y1 , y2 , y3. The
augmented system
(block diagram: each output yi of P is filtered by wi, yielding ỹi; the result is the augmented plant Paug)
setlmis
Purpose Initialize the description of an LMI system
Syntax setlmis(lmi0)
Description Before starting the description of a new LMI system with lmivar and lmiterm,
type
setlmis([])
setmvar
Purpose Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
Description setmvar sets the matrix variable X with identifier X to the value Xval. All terms
involving X are evaluated, the constant terms are updated accordingly, and X
is removed from the list of matrix variables. A description of the resulting LMI
system is returned in newsys.
The integer X is the identifier returned by lmivar when X is declared.
Instantiating X with setmvar does not alter the identifiers of the remaining
matrix variables.
The function setmvar is useful to freeze certain matrix variables and optimize
with respect to the remaining ones. It saves time by avoiding partial or
complete redefinition of the set of LMI constraints.
Example Consider an LMI system with matrix variables P and Y declared by
setlmis([])
P = lmivar(1,[n 1]) % P full symmetric
Y = lmivar(2,[ncon n]) % Y rectangular
To find out whether this problem has a solution K for the particular Lyapunov
matrix P = I, set P to I by typing
news = setmvar(lmis,P,1)
The resulting LMI system news has only one variable Y = K. Its feasibility is
assessed by calling feasp:
[tmin,xfeas] = feasp(news)
Y = dec2mat(news,xfeas,Y)
showlmi
Purpose Return the left- and right-hand sides of an LMI after evaluation of all variable terms
Description For given values of the decision variables, the function evallmi evaluates all
variable terms in a system of LMIs. The left- and right-hand sides of the n-th
LMI are then constant matrices that can be displayed with showlmi. If evalsys
is the output of evallmi, the values lhs and rhs of these left- and right-hand
sides are given by
[lhs,rhs] = showlmi(evalsys,n)
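A typical usage sketch, assuming xfeas is a feasible decision vector returned by feasp for the system lmis:

```matlab
evalsys = evallmi(lmis,xfeas);    % evaluate all variable terms
[lhs,rhs] = showlmi(evalsys,1);   % sides of the first LMI
eig(lhs - rhs)                    % negative eigenvalues confirm lhs < rhs
```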
sinfo
Purpose Return the number of states, inputs, and outputs of an LTI system
Syntax sinfo(sys)
[ns,ni,no] = sinfo(sys)
sinv
Purpose Invert an LTI system, i.e., return a realization of G(s)⁻¹
Syntax h = sinv(g)
slft
Purpose Form the linear-fractional interconnection of two time-invariant systems (Redheffer's star product)
Description slft forms the linear-fractional feedback interconnection
(block diagram: sys1 with external signals w1, z1 and sys2 with external signals w2, z2, connected through the signals u and y)
of the two systems sys1 and sys2 and returns a realization sys of the
closed-loop transfer function from (w1; w2) to (z1; z2). The optional arguments
udim and ydim specify the lengths of the vectors u and y. Note that u enters sys1 while y is an output of this system. When udim and ydim are omitted, slft forms one of the two interconnections:
(block diagrams: either sys2 closes the loop on sys1, leaving w1, z1 as external signals, or sys1 closes the loop on sys2, leaving w2, z2 as external signals)
Example Consider the interconnection
(block diagram: plant P(s) with the uncertainty ∆(s) closed around its top channel and the controller K(s) closed around its bottom channel; external input w, output z)
where ∆(s), P(s), K(s) are LTI systems. A realization of the closed-loop transfer
function from w to z is obtained as
clsys = slft(delta,slft(p,k))
or equivalently as
clsys = slft(slft(delta,p),k)
sloop
Purpose Form the feedback interconnection of two systems
Description sloop forms the feedback loop
(block diagram: G2 feeds the output y of G1 back to the summing junction with sign sgn; r is the external input)
Example The negative feedback loop of 1/(s+1) with 1/s in the return path
is obtained by
sys = sloop( ltisys('tf',1,[1 1]) , ltisys('tf',1,[1 0]))
[num,den] = ltitf(sys)
num =
0 1.00 0.00
den =
1.00 1.00 1.00
smult
Purpose Series interconnection of linear systems
Syntax sys = smult(g1,g2,...)
Description smult forms the series interconnection u → G1 → G2 → · · · → Gn → y.
The arguments g1, g2,... are SYSTEM matrices containing the state-space
data for the systems G1, . . ., Gn. Constant matrices are also allowed as a
representation of static gains. In addition, one (and at most one) of the input
arguments can be a polytopic or affine parameter-dependent model, in which
case the output sys is of the same nature.
In terms of transfer functions, this interconnection corresponds to the product
G(s) = Gn(s) × . . . × G1(s)
Note the reverse order of the terms.
Example Consider the series interconnection Wi(s) → G → Wo(s), where Wi(s), Wo(s) are given LTI filters and G is the affine parameter-dependent system
ẋ = θ1 x + u
y = x − θ2 u
parametrized by θ1, θ2. The parameter-dependent system defined by this
interconnection is returned by
sys = smult(wi,g,w0)
While the SYSTEM matrices wi and w0 are created with ltisys, the
parameter-dependent system g should be specified with psys.
splot
Purpose Plot the various frequency and time responses of LTI systems
Syntax splot(sys,type,xrange)
splot(sys,T,type,xrange)
(tables listing the admissible values of type for frequency responses and for time responses not recovered)
The resulting plot appears in Figure 9-3. Due to the integrator and poor damping of the natural frequency at 1 rad/s, this plot is not very informative. In
such cases, the lin-log Nyquist plot is more appropriate. Given the gain/phase
decomposition
G(jω) = γ(ω)ejφ(ω)
of the frequency response, the lin-log Nyquist plot is the curve described by
x = ρ(ω) cos ϕ(ω), y = ρ(ω) sin ϕ(ω)
where
ρ(ω) = γ(ω) if γ(ω) ≤ 1 ,  ρ(ω) = 1 + log10 γ(ω) if γ(ω) > 1
and appears in Figure 9-4. The resulting aspect ratio is more satisfactory
thanks to the logarithmic scale for gains larger than one. A more complete
contour is drawn in Figure 9-5 by
splot(g,'li',logspace(-3,2))
spol
Purpose Return the poles of an LTI system
Description The function spol computes the poles of the LTI system of SYSTEM matrix sys.
If (A, B, C, D, E) denotes the state-space realization of this system, the poles
are the generalized eigenvalues of the pencil (A, E) in general, and the
eigenvalues of A when E = I.
sresp
Purpose Frequency response of a continuous-time system
ssub
Purpose Select particular inputs and outputs in a MIMO system
Syntax subsys = ssub(sys,inputs,outputs)
The second and third arguments are the vectors of indices of the selected
inputs/outputs. Inputs and outputs are labeled 1,2,3,... starting from the top of
the block diagram:
(block diagram: system P with inputs u1, u2, u3 and outputs y1 through y6, numbered from the top)
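With this numbering convention, extracting the subsystem from inputs u1, u3 to outputs y2 through y4 could read (assuming the SYSTEM matrix of P is stored in p):

```matlab
psel = ssub(p,[1 3],2:4)
```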
ublock
Purpose Specify the characteristics of an uncertainty block
• Norm bound: here the second argument bound is either a scalar β for uniform
bounds,
||∆||∞ < β,
or a SISO shaping filter W(s) for frequency-weighted bounds ||W –1∆||∞ < 1,
that is,
σmax(∆(jω)) < |W(jω)| for all ω
• Sector bounds: set bound = [a b] to indicate that the response of ∆(.) lies in the sector {a, b}. Valid values for a and b include -Inf and Inf.
udiag
Purpose Form block-diagonal uncertainty structures
Description The function udiag appends individual uncertainty blocks specified with
ublock and returns a complete description of the block-diagonal uncertainty
operator
∆ = diag(∆1, ∆2, . . .)
The input arguments delta1, delta2,... are the descriptions of ∆1, ∆2, . . . .
If udiag is called with k arguments, the output delta is a matrix with k
columns, the k-th column being the description of ∆k.
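A sketch combining two blocks (dimensions and bounds are illustrative):

```matlab
d1 = ublock([2 2],0.5);   % 2-by-2 LTI uncertainty block with norm bound 0.5
d2 = ublock([1 1],2);     % scalar uncertainty block with norm bound 2
delta = udiag(d1,d2);     % describes Delta = diag(D1,D2)
```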
uinfo
Purpose Display the characteristics of a block-diagonal uncertainty structure
Syntax uinfo(delta)