
PREDICTIVE CONTROL

WITH CONSTRAINTS
SOLUTIONS MANUAL

J.M.Maciejowski
Cambridge University Engineering Department

3 December 2001
Copyright © Pearson Education Limited 2002

MATLAB and Simulink are registered trademarks of The MathWorks, Inc.

World-Wide Web site for this book: http://www.booksites.net/maciejowski/

Contents

1 Introduction

2 A basic formulation of predictive control

3 Solving predictive control problems

4 Step response and transfer function formulations

5 Other formulations of predictive control

6 Stability

7 Tuning

8 Robust predictive control

9 Two case studies


Chapter 1

Introduction

1.1 Assuming a perfect model and no disturbances, y(k + 1) = 2.2864. Hence, using
ŷ(k + 2|k + 1) = 0.7y(k + 1) + 2u(k + 1), the free response is obtained by assuming
that u(k + 1) = u(k + 2) = 0.4432:

ŷf(k + 2|k + 1) = 0.7 × 2.2864 + 2 × 0.4432 = 2.4869    (1)
ŷf(k + 3|k + 1) = 0.7 × 2.4869 + 2 × 0.4432 = 2.6272    (2)

We have ε(k + 1) = s(k + 1) − y(k + 1) = 3 − 2.2864 = 0.7136, hence

r(k + 3|k + 1) = s(k + 3) − λ² ε(k + 1) = 3 − 0.7165² × 0.7136 = 2.6337    (3)

Hence

∆û(k + 1|k + 1) = (2.6337 − 2.6272)/3.4 = 0.0019    (4)

u(k + 1) = û(k + 1|k + 1) = u(k) + ∆û(k + 1|k + 1)    (5)
         = 0.4432 + 0.0019 = 0.4451    (6)

That is one step done.


Now the second step:

y(k + 2) = 0.7 × 2.2864 + 2 × 0.4451 = 2.4907    (7)

ŷf(k + 3|k + 2) = 0.7 × 2.4907 + 2 × 0.4451 = 2.6337    (8)
ŷf(k + 4|k + 2) = 0.7 × 2.6337 + 2 × 0.4451 = 2.7338    (9)
ε(k + 2) = s(k + 2) − y(k + 2) = 3 − 2.4907 = 0.5093    (10)
r(k + 4|k + 2) = 3 − 0.7165² × 0.5093 = 2.7385    (11)
∆û(k + 2|k + 2) = (2.7385 − 2.7338)/3.4 = 0.0014    (12)
u(k + 2) = 0.4451 + 0.0014 = 0.4465    (13)

Verification: y(k + 2) = 2.4907 ≠ 2.2864 = 0.7 × 2.0 + 2 × 0.4432 = ŷ(k + 2|k).


1.2 Here is the solution as a MATLAB file:

% Solution to Exercise 1.2:


setpoint = 3;
Tref = 9; % Reference trajectory time constant
Ts = 3; % sample time
lambda = exp(-Ts/Tref);
P = [1;2]; % coincidence points

% Initial conditions:
yk = 2; % output y(k), from Example 1.3 (note y(k)=y(k-1)).
uk = 0.4429; % last applied input signal u(k) (from Example 1.4)
ykplus1 = 0.7*yk + 2*uk; % output y(k+1)

% Step response vector [S(P(1));S(P(2))] as in (1.21):


S = [2.0 ; 3.4]; % from Example 1.4

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Next step:
% Reference trajectory:
error = setpoint-ykplus1;
% Form reference (or target) vector [r(k+2|k+1) ; r(k+3|k+1)]:
T = [setpoint;setpoint]-[lambda^P(1);lambda^P(2)]*error;

% Free response vector:


yf1 = 0.7*ykplus1 + 2*uk; % yf(k+2|k+1)
yf2 = 0.7*yf1 + 2*uk; % yf(k+3|k+1)
Yf = [yf1 ; yf2]; % vector of free responses - as in (1.21)

% New optimal control signal:


Deltau = S\(T-Yf); % as in (1.22)
ukplus1 = uk + Deltau(1) % use first element only of Deltau

% Resulting output:
ykplus2 = 0.7*ykplus1 + 2*ukplus1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% One more step:


% Reference trajectory:
error = setpoint-ykplus2;
% Form reference (or target) vector [r(k+3|k+2) ; r(k+4|k+2)]:
T = [setpoint;setpoint]-[lambda^P(1);lambda^P(2)]*error;

% Free response vector:


yf1 = 0.7*ykplus2 + 2*ukplus1; % yf(k+3|k+2)
yf2 = 0.7*yf1 + 2*ukplus1; % yf(k+4|k+2)


Yf = [yf1 ; yf2]; % vector of free responses - as in (1.21)

% New optimal control signal:


Deltau = S\(T-Yf); % as in (1.22)
ukplus2 = ukplus1 + Deltau(1) % use first element only of Deltau

% Resulting output:
ykplus3 = 0.7*ykplus2 + 2*ukplus2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This gives the following results:

u(k + 1) = 0.4448,   y(k + 2) = 2.4897
u(k + 2) = 0.4463,   y(k + 3) = 2.6354

In this MATLAB program the computations for each step have been set out
separately. In practice one would set them out once inside a repeating loop —
see program basicmpc.

1.3 Repeat of Example 1.3:


As in Example 1.3 we have ε(k) = 1. Hence

ε(k + 2) = (1 − 2Ts/Tref) ε(k) = 1/3    (14)

Hence r(k + 2|k) = 3 − 1/3 = 2.6667. So proceeding as in Example 1.3,

∆û(k|k) = (2.6667 − 2.0)/3.4 = 0.1961    (15)
û(k|k) = u(k − 1) + ∆û(k|k) = 0.4961    (16)

This should result in the next plant output value being y(k + 1) = 0.7 × 2 +
2 × 0.4961 = 2.3922.

Repeat of Example 1.4:


The vector of reference trajectory values (or ‘target’ vector) at the coincidence
points is now

T = [ 3 − 2/3; 3 − 1/3 ] = [ 2.3333; 2.6667 ]    (17)

while the vectors Yf and S remain unchanged. Hence

∆û(k|k) = S\(T − Yf) = 0.1885    (18)

û(k|k) = u(k − 1) + ∆û(k|k) = 0.4885    (19)

This should result in the next plant output value being y(k + 1) = 0.7 × 2 +
2 × 0.4885 = 2.3770.
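For convenience, the two calculations above can be reproduced with a few lines of
MATLAB. This is only a sketch: the straight-line reference trajectory is formed as
above, and the values u(k − 1) = 0.3, Yf = [2; 2] and S = [2.0; 3.4] are those of
Examples 1.3 and 1.4.

% Sketch: Exercise 1.3 with a straight-line reference trajectory
setpoint = 3; Ts = 3; Tref = 9;
yk = 2; ukm1 = 0.3;                  % current output and previous input
epsilon = setpoint - yk;             % current error (= 1)

% Repeat of Example 1.3 (single coincidence point P = 2):
r2 = setpoint - (1 - 2*Ts/Tref)*epsilon;   % = 2.6667
du = (r2 - 2.0)/3.4                         % = 0.1961
u  = ukm1 + du                              % = 0.4961

% Repeat of Example 1.4 (coincidence points P = [1;2]):
T  = setpoint - [1 - 1*Ts/Tref; 1 - 2*Ts/Tref]*epsilon;  % [2.3333; 2.6667]
Yf = [2; 2];  S = [2.0; 3.4];
du = S\(T - Yf)                             % = 0.1885
u  = ukm1 + du                              % = 0.4885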


1.4 There is an error in equation (1.23) in the book. Since the free response ŷf (k +
Pi |k) is defined to be the response when the input remains at its last value,
namely u(k − 1), each term in (1.23) involving an input value û(k + j|k) should
in fact involve the difference û(k + j|k) − u(k − 1). Thus the correct expression
for (1.23) is:

ŷ(k + Pi |k) = ŷf (k + Pi |k) + H(Pi )[û(k|k) − u(k − 1)] +


H(Pi − 1)[û(k + 1|k) − u(k − 1)] + · · ·
H(Pi − Hu + 2)[û(k + Hu − 2|k) − u(k − 1)] +
S(Pi − Hu + 1)[û(k + Hu − 1|k) − u(k − 1)] (20)

This can be written as

ŷ(k + Pi|k) = ŷf(k + Pi|k) + [S(Pi) − S(Pi − 1)][û(k|k) − u(k − 1)]
              + [S(Pi − 1) − S(Pi − 2)][û(k + 1|k) − u(k − 1)] + · · ·
              + [S(Pi − Hu + 2) − S(Pi − Hu + 1)][û(k + Hu − 2|k) − u(k − 1)]
              + S(Pi − Hu + 1)[û(k + Hu − 1|k) − u(k − 1)]    (21)

The terms can be regrouped as:

ŷ(k + Pi |k) = ŷf (k + Pi |k) + S(Pi )[û(k|k) − u(k − 1)] +


S(Pi − 1)[û(k + 1|k) − û(k|k)] + · · ·
S(Pi − Hu + 1)[û(k + Hu − 1|k) − û(k + Hu − 2|k)]
= ŷf (k + Pi |k) + S(Pi )∆û(k|k) +
S(Pi − 1)∆û(k + 1|k) + · · · +
S(Pi − Hu + 1)∆û(k + Hu − 1|k) (22)

which verifies (1.24).

1.5 The reference trajectory values at the two coincidence points are the same whichever
model is used, so we calculate these first. Let t denote the current time. The
initial error is ε(t) = 3 − 1 = 2, so the reference trajectory 6 sec ahead is
r(t + 6|t) = 3 − exp(−6/5) × 2 = 2.3976, and 10 sec ahead is r(t + 10|t) =
3 − exp(−10/5) × 2 = 2.7293. Thus the target vector needed in (1.22) is

T = [ 2.3976; 2.7293 ]    (23)

Since the output is constant at 1, the input must be constant. The steady-
state gain of the model is 2, so this constant value of the input must be 0.5.
The free response, with u(t + τ ) = 0.5 for τ > 0, is yf (t + τ ) = 1, since the
output would remain at its equilibrium value if the input value did not change.


(a). The only information needed from the continuous-time model is the step
response values at the coincidence points. The transfer function corre-
sponds to the differential equation

y(t) + 7ẏ(t) = 2u(t − 1.5) (24)

so the step response is the solution to

yf(t) + 7ẏf(t) = 0  (t < 1.5),      yf(t) + 7ẏf(t) = 2  (t ≥ 1.5)    (25)

with initial condition y(0) = 0, which is y(t) = 2(1 − e^{−(t−1.5)/7}) for
t ≥ 1.5 (from the elementary theory of differential equations). Hence the
vector of step responses at the coincidence points is

S = [ 0.9484; 1.4062 ]    (26)

(This can also be obtained by using the function step in MATLAB’s
Control System Toolbox.) So now we can apply (1.22) to get

∆û(k|k) = S\(T − Yf) = 1.3060    (27)

so the optimal input is

u(k) = 0.5 + 1.3060 = 1.8060 (28)

(b). An equivalent discrete-time model is obtained most easily using MAT-


LAB’s Control System Toolbox function c2d on the original transfer func-
tion without the delay:
sysc=tf(2,[7,1])
sysd=c2d(sysc,0.5)
which gives the z-transform transfer function 0.1379/(z − 0.9311). Now
the delay of 1.5 sec, namely three sample periods, can be incorporated by
multiplying by z⁻³: 0.1379/[z³(z − 0.9311)]. Now using the function step
gives the step response at the coincidence points as 0.9487 and 1.4068,
respectively. Proceeding as in (a), we get ∆û(k|k) = 1.3055 and hence
u(k) = 1.8055.

The two points made by this exercise are: (1) Continuous-time models can be
used directly. (2) Whether a continuous-time or a discrete-time model is used
makes little difference, if the sampling interval is sufficiently small.
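For reference, the calculation of part (a) can be reproduced directly in MATLAB.
This is a sketch only; the numbers assume the model 2e^{−1.5s}/(7s + 1), Tref = 5 sec,
a set-point of 3, and coincidence points 6 and 10 sec ahead, as above.

% Sketch: Exercise 1.5(a), continuous-time model used directly
Tref = 5; setpoint = 3; y0 = 1; u0 = 0.5;    % current output and input
eps0 = setpoint - y0;                         % initial error
P = [6; 10];                                  % coincidence points (seconds)
T = setpoint - exp(-P/Tref)*eps0;             % reference trajectory at coincidence points
S = 2*(1 - exp(-(P - 1.5)/7));                % step response at the coincidence points
Yf = [1; 1];                                  % free response stays at the equilibrium value
du = S\(T - Yf);                              % least-squares solution of (1.22)
u  = u0 + du                                  % approximately 1.8060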

1.6 Since the plant has a delay of 1.5 sec, the predicted output is not affected by the
input within that time. So choosing a coincidence point nearer than 1.5 sec
into the future would have no effect on the solution.


1.7 With only one coincidence point (assumed to be P steps ahead) and Hu = 1 we
have T = r(k + P|k), Yf = yf(k + P|k), and Θ = S(P), so (1.31) becomes

∆û(k|k) = [ r(k + P|k) − d(k) − yf(k + P|k) ] / S(P)    (29)

In steady state (1.33) and (1.34) become

∆û(k|k) = (rP − d∞ − yfP) / S(P)    (30)
        = (rP − yp∞ + ym∞ − yfP) / S(P)    (31)

(1.35) becomes ym∞ − yfP = 0, and (1.36) becomes

∆û(k|k) = (rP − yp∞) / S(P)    (32)

But in the steady state ∆û(k|k) = 0 and hence rP = yp∞. But rP = s∞ −
λ^P (s∞ − yp∞) = s∞ − λ^P (s∞ − rP). Hence yp∞ = rP = s∞.

1.8 To do this exercise, simply change files basicmpc.m and trackmpc.m as follows.
First define the plant by replacing the lines:

%%%%% CHANGE FROM HERE TO DEFINE NEW PLANT %%%%%


nump=1;
denp=[1,-1.4,0.45];
plant=tf(nump,denp,Ts);
%%%%% CHANGE UP TO HERE TO DEFINE NEW PLANT %%%%%

by the lines:

%%%%% CHANGE FROM HERE TO DEFINE NEW PLANT %%%%%


nump=1;
denp=[1,-1.5,0.5]; % (z-0.5)(z-1)
plant=tf(nump,denp,Ts);
%%%%% CHANGE UP TO HERE TO DEFINE NEW PLANT %%%%%

then define the model by replacing the lines:

%%%%% CHANGE FROM HERE TO DEFINE NEW MODEL %%%%%


model = plant;
%%%%% CHANGE UP TO HERE TO DEFINE NEW MODEL %%%%%

by the lines:


%%%%% CHANGE FROM HERE TO DEFINE NEW MODEL %%%%%


numm=1.1;
denm=[1,-1.5,0.5];
model = tf(numm,denm,Ts);
%%%%% CHANGE UP TO HERE TO DEFINE NEW MODEL %%%%%

1.9 With the independent model implementation, the measured plant output does not
directly influence the model output. When Tref = 0 the reference trajectory
does not depend on the measured plant output, because it is equal to the
future set-point trajectory. (Whereas with Tref ≠ 0 it does depend on the
measured output.) Thus for each coincidence point the controller is concerned
only to move the output from the present model output value ym (k) to the
required value s(k + Pi ). Neither of these values, nor the free response of the
model, is affected by the measurement noise. Hence the control signal is not
affected by the noise, either.
When offset-free tracking is provided, the disturbance estimate d(k) = yp (k) −
ŷ(k|k − 1) is affected by the noise, through the measurement yp (k), and this
directly affects the control signal.
Comment: This shows how essential it is to tie the model output to the plant
output in some way. Otherwise the model and plant could drift arbitrarily far
apart. The way it is done in Section 1.5 is a very easy way of doing it, but not
the only possible one.

1.10 This exercise can be solved just by editing the files basicmpc.m and trackmpc.m,
to define the variables plant, model, Tref, Ts, P and M appropriately.

1.11 Just edit the file unstampc.m in the obvious way — change the definition of
variable model.

1.12 The text following equation (1.31) shows what has to be done: in file unstampc.m
replace the lines:

% Compute input signal uu(k):


if k>1,
dutraj = theta\(reftraj-ymfree(P)’);
uu(k) = dutraj(1) + uu(k-1);
else
dutraj = theta\(reftraj-ymfree(P)’);
uu(k) = dutraj(1) + umpast(1);
end


by:

% Compute input signal uu(k):


d = yp(k) - ym(k);
if k>1,
dutraj = theta\(reftraj-d-ymfree(P)’);
uu(k) = dutraj(1) + uu(k-1);
else
dutraj = theta\(reftraj-d-ymfree(P)’);
uu(k) = dutraj(1) + umpast(1);
end

1.13 In the existing file unstampc.m the restriction that the plant and model should
have the same order arises in the lines

% Simulate model:
% Update past model inputs:
umpast = [uu(k);umpast(1:length(umpast)-1)];
% Simulation:
ym(k+1) = -denm(2:ndenm+1)*ympast+numm(2:nnumm+1)*umpast;
% Update past model outputs:
ympast = yppast; %%% RE-ALIGNED MODEL

In particular the product denm(2:ndenm+1)*ympast assumes that the length


of the vector denm(2:ndenm+1), which depends on the model order, is the
same as the length of the vector yppast (since ympast is set equal to yppast),
which depends on the order of the plant.
To handle different orders of plant and model it is necessary to store the
appropriate number of past values of the plant output in the vector ympast. If
the model order is smaller than the plant order this is very easy: just truncate
the vector yppast (since the most recent plant output values are stored in the
initial locations of yppast):

% Simulate model:
% Update past model inputs:
umpast = [uu(k);umpast(1:length(umpast)-1)];
% Simulation:
ym(k+1) = -denm(2:ndenm+1)*ympast+numm(2:nnumm+1)*umpast;
% Update past model outputs:
if ndenm <= ndenp, %%% MODEL ORDER <= PLANT ORDER
ympast = yppast(1:ndenm); %%% RE-ALIGNED MODEL
end

If the model order is larger than the plant order then there are not enough
past values in yppast, as currently defined. One solution is to leave yppast


unchanged, but to hold the additional past values in ympast, by ‘shuffling’


them up to the tail of the vector:

if ndenm > ndenp, %%% MODEL ORDER > PLANT ORDER


ympast(ndenp+1:ndenm) = ympast(ndenp:ndenm-1);
ympast(1:ndenp) = yppast;
end

1.14 The vector ∆U should contain only the changes of the control signal:

∆U = [ ∆û(k + M1|k); ∆û(k + M2|k); ⋮ ; ∆û(k + MHu|k) ]    (33)

The matrix Θ should contain the columns corresponding to these elements of
∆U, so that (1.27) is replaced by:

Θ = [ S(P1 − M1)   S(P1 − M2)   · · ·   S(P1 − MHu)
      S(P2 − M1)   S(P2 − M2)   · · ·   S(P2 − MHu)
      ⋮             ⋮                    ⋮
      S(Pc − M1)   S(Pc − M2)   · · ·   S(Pc − MHu) ]    (34)

(remember that S(i) = 0 if i < 0). Note that the vector Y remains the same
as in (1.26).
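A small MATLAB sketch of how such a Θ could be assembled from a stored step
response; the variable names and the example numbers (the step response of the
first-order model used in the earlier exercises) are illustrative only.

% Sketch: build Theta of (34) for a SISO model with move times M(j)
% and coincidence points P(i).
S = [2.0, 3.4, 4.38, 5.066];       % example step response values S(1),...,S(4)
P = [2, 4];                        % coincidence points
M = [0, 2];                        % input moves allowed at times k and k+2
Theta = zeros(length(P), length(M));
for i = 1:length(P)
    for j = 1:length(M)
        idx = P(i) - M(j);
        if idx >= 1
            Theta(i,j) = S(idx);   % S(i) = 0 for i < 1 (causality), so leave those entries zero
        end
    end
end
Theta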

1.15 Exercise 1.15 is done by editing the files basicmpc, trackmpc and noisympc. The
main point to be made here is that the effects of changing various parameters
are difficult to predict precisely, and can occasionally be counter-intuitive.
This provides some motivation for Chapter 7, which discusses tuning MPC
controllers.
Obvious things to check are:

(a). Increasing Tref slows down the response.


(b). Increasing the prediction horizon (the location of the last coincidence
point) improves closed-loop stability (because the controller pays more
attention to long-term consequences of its actions).
(c). Increasing Hu increases the computation time. (Use MATLAB functions
tic and toc to check this.)



Chapter 2

A basic formulation of
predictive control

2.1 The following MATLAB code does the job:

Q = [10 4 ; 4 3]
eig(Q)

This shows the eigenvalues are 11.815 and 1.185, which are both positive.

2.2 (a).

V(x) = 9x1² + 25x2² + 16x3² = [x1, x2, x3] [ 9 0 0; 0 25 0; 0 0 16 ] [x1; x2; x3]    (1)

so Q = diag[9, 25, 16].

(b). With xᵀ = [x1, x2, x3] and uᵀ = [u1, u2] we get V(x, u) = xᵀQx + uᵀRu if

Q = [ 5 0 0; 0 2 0; 0 0 1 ]   and   R = [ 100 0; 0 4 ]    (2)

(c). Yes. If xi ≠ 0 for any i then both V(x) in (a) and V(x, u) in (b) are
positive, by inspection. Similarly, if ui ≠ 0 for any i then V(x, u) in (b)
is positive. If x = 0 then V(x) = 0. If x = 0 and u = 0 then V(x, u) = 0.
So both functions V are positive-definite.
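Following the pattern of Exercise 2.1, these claims are easily checked numerically
(a minimal sketch):

Q1 = diag([9, 25, 16]);          % Q from part (a)
Q2 = diag([5, 2, 1]);            % Q from part (b)
R  = diag([100, 4]);             % R from part (b)
eig(Q1), eig(Q2), eig(R)         % all eigenvalues positive => positive definite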

2.3 Error in question: The formula for the gradient is not (2.19), but the formula for
∇V which appears at the end of Mini-Tutorial 1.


If V(x) = xᵀQx = 10x1² + 8x1x2 + 3x2², as in Example 2.1, then

∂V/∂x1 = 20x1 + 8x2   and   ∂V/∂x2 = 8x1 + 6x2    (3)

so ∇V = [20x1 + 8x2, 8x1 + 6x2] from the definition. Applying the formula we get

∇V = 2xᵀQ = 2[x1, x2] [ 10 4; 4 3 ] = 2[10x1 + 4x2, 4x1 + 3x2]    (4)

so the two agree.

2.4 Error in question: The question should refer to Example 2.3, not Example 2.4.
(But the reader could answer the question for Example 2.4 too — see below.)
Solution for Example 2.3:
In Example 2.3 the constraints on the inputs are 0 ≤ u2 ≤ 3. The control
horizon is Hu = 1, so this translates into the constraint 0 ≤ û2 (k|k) ≤ 3,
which can be written as

[ 0 −1 0; 0 1 −3 ] [ û1(k|k); û2(k|k); 1 ] ≤ [ 0; 0 ]    (5)

so that

F = [ 0 −1 0; 0 1 −3 ]    (6)

Here is the corresponding solution for Example 2.4:


In Example 2.4 the constraints on the inputs are −10 ≤ uj (k) ≤ 10 for j =
1, 2. Since these must be respected across the whole control horizon, we have
−10 ≤ ûj (k + i|k) ≤ 10 for i = 0, 1, 2, 3 (since Hu = 3) and j = 1, 2. For each
i and j this constraint can be expressed as

[ −1 −10; 1 −10 ] [ ûj(k + i|k); 1 ] ≤ [ 0; 0 ]    (7)

So all the constraints can be represented using a 16 × 9 matrix F (not 4 × 7,
as written in the book):

F [ û1(k|k); û2(k|k); û1(k+1|k); û2(k+1|k); û1(k+2|k); û2(k+2|k); û1(k+3|k); û2(k+3|k); 1 ] ≤ 0    (8)

where

F = [ −1  0  · · ·  0  −10
       1  0  · · ·  0  −10
       0 −1  · · ·  0  −10
       0  1  · · ·  0  −10
       ⋮   ⋮          ⋮   ⋮
       0  0  · · · −1  −10
       0  0  · · ·  1  −10 ]
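One way to assemble such an F in MATLAB, for limits of ±10 on ℓ inputs over N
future steps, is sketched below; the row ordering differs slightly from (8), but the
constraints represented are the same.

% Sketch: amplitude constraints |u| <= 10 on 2 inputs over 4 future steps
ell = 2;  N = 4;  ulim = 10;
nvar = ell*N;                               % number of future input variables
E = kron(eye(nvar), [1; -1]);               % rows: u_i <= ulim and -u_i <= ulim
F = [E, -ulim*ones(2*nvar, 1)];             % F*[Uhat; 1] <= 0, F is 16-by-9
size(F)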


2.5 The choice of ∆u(k) will affect x(k + 1), and hence z(k + 1). It will not affect
z(k), since the model has no direct ‘feed-through’ from u(k) to z(k). We have
measurements up to time k, so our best estimate of z(k + 1) is

ẑ(k + 1|k) = 3x̂(k + 1|k) = 3[2x̂(k|k) + u(k)]    (9)
           = 3[2x̂(k|k) + u(k − 1) + ∆u(k)]    (10)
           = 3[2 × 3 − 1 + ∆u(k)] = 15 + 3∆u(k)    (11)

But we must respect the constraint −1 ≤ ẑ(k+1|k) ≤ 2, so −1 ≤ 15 + 3∆u(k) ≤ 2,
hence

−16/3 ≤ ∆u(k) ≤ −13/3    (12)

2.6 Checking the discretization using MATLAB, assuming the continuous-time sys-
tem matrices Ac,Bc,Cy have already been defined to correspond to Ac , Bc , Cy
in Example 2.4:

D = zeros(3,2); % Direct ’feed-through’ matrix is 0


cont_sys=ss(Ac,Bc,Cy,D); % Continuous-time system
Ts = 2; % Sampling interval
disc_sys=c2d(cont_sys,Ts); % Discrete-time system
A = get(disc_sys,’a’); % Should agree with A in Example 2.4
B = get(disc_sys,’b’); % Should agree with B in Example 2.4

Checking stability:

eig(A) % Eigenvalues of A

This shows that the eigenvalues of the discrete-time system are 0.4266, 0.4266,
0.2837 and 0.0211. These all lie within the unit circle, so the discrete-time
system is stable.
To check that each of these eigenvalues corresponds to exp(λTs ), where λ is
an eigenvalue of Ac :

exp(eig(Ac)*Ts)

2.7 (a). Since the feed-tank level H1 is to be constrained, it becomes a controlled


output — even though it is not to be controlled to a set-point. So Cz
must have another row, to represent this. Assuming that H1 will be the
first element in the vector of controlled outputs, Cz becomes:

Cz = [ 1 0 0 0; 0 1 0 0; 0 0 0 1 ]    (13)


(b). Note that the dimensions given for the matrices E and F in Example 2.4
are wrong. They should be 16 × 9 (not 4 × 7, as stated). The dimensions
given for G are correct. Since E and F are concerned with constraints
on the input amplitudes and input increments, they are not affected by
adding constraints on H1 , which is an output.
There will be two constraints on H1 for each step in the prediction horizon
— one minimum value and one maximum value. So two rows need to be
added to G for each step. The prediction horizon is Hp = 10, so G will
need 20 additional rows. That is, the new dimensions of G will be 40×21.
(c). The additional difficulty is that although H1 is to be controlled within
constraints, it is not measured. So its value has to be inferred from the
measured variables somehow. If the system behaviour is sufficiently linear
then the standard way of doing this is to use a state observer (see Mini-
Tutorial 2). But if this does not give sufficiently accurate estimates of
H1 then some nonlinear observer may have to be used, or (preferably) a
sensor to measure H1 should be installed, if feasible.

2.8 Equation (2.40) states that ∆x(k + 1) = A∆x(k) + B∆u(k). This gives the first
rows of the matrices in (2.108). From (2.2) we have

y(k) = Cy x(k) (14)


= Cy [∆x(k) + x(k − 1)] (15)
= Cy ∆x(k) + y(k − 1) (16)
= [Cy , I, 0]ξ(k) (17)

which gives the second row of (2.108) and the first row of (2.109). The third
row of (2.108) and the second row of (2.109) follow similarly from (2.3).

2.9 We have B1 = e^{Ac(Ts−τ)} ∫₀^τ e^{Ac η} dη Bc and B2 = ∫₀^{Ts−τ} e^{Ac η} dη Bc.
Now the function c2d computes the discretised system with the assumption
that τ = 0, so the call [Ad,Bd]=c2d(Ac,Bc,Ts) gives Ad = e^{Ac Ts} and
Bd = ∫₀^{Ts} e^{Ac η} dη Bc. Thus the required realization can be obtained using the
following function c2dd:
following function c2dd:

function [Ad,Bd,Cd] = c2dd(Ac,Bc,Cc,Ts,tau)


%C2DD Computes discrete-time model with computational delay
%
% Usage: [Ad,Bd,Cd] = c2dd(Ac,Bc,Cc,Ts,tau)
%
% Computes discrete-time state space realisation for system with
% computation delay.
% Inputs: Ac,Bc,Cc: ‘A’,‘B’,’C’ matrices of continuous-time model
% Ts: Sampling interval


% tau: Computational delay


% Outputs: Ad,Bd,Cd: ‘A’,‘B’,‘C’ matrices of discrete-time model
%
% ’D’ matrix assumed to be zero.

% Written by J.M.Maciejowski, 25.9.98.


% Reference: ‘Predictive control with constraints’, section
% on ‘Computational delays’.

[nstates,ninputs] = size(Bc);
noutputs = size(Cc,1);

if Ts < tau,
error(’tau greater than Ts’)
end

[Ad,dummy] = c2d(Ac,Bc,Ts); % (1,1) block of Ad computed

[A1,B1] = c2d(Ac,Bc,tau);
[A2,B2] = c2d(Ac,Bc,Ts-tau); % B2 computed
B1 = A2*B1; % B1 computed

Ad = [Ad, B1; zeros(ninputs,nstates+ninputs)];


Bd = [B2; eye(ninputs)];
Cd = [Cc,zeros(noutputs,ninputs)];

2.10 Using the function c2d for the case Ts = 0.1, τ = 0 gives

A = [   0.85679    0         0.08312   0
       −0.02438    1         0.09057   0
       −0.46049    0         0.80976   0
      −12.03761   12.8200    0.03680   1 ]

B = [ −0.10272; −0.07944; −1.53242; 0.16674 ]

C = [ −128.2  128.2  0  0 ]

Using the function c2dd written for the previous problem for the case Ts =


0.1, τ = 0.02 gives

A = [   0.85679    0          0.08312   0   −0.03109
       −0.02438    1          0.09057   0   −0.02788
       −0.46049    0          0.80976   0   −0.27938
      −12.03761   12.82000    0.03680   1    0.05578
        0          0          0         0    0       ]

B = [ −0.07163; −0.05156; −1.25304; 0.11096; 1 ]

C = [    0        1       0  0  0
         0        0       0  1  0
      −128.2    128.2     0  0  0 ]

As is to be expected, the step (or impulse) responses of the model with the
computational delay lag behind the other ones by 0.05 sec.
The frequency responses are almost identical up to a frequency of about 1
rad/sec. At about 6 rad/sec (about 1 Hz) there are phase differences of about
10◦ in each channel. This means that if the closed-loop bandwidth is about
1 rad/sec, or lower, the computational delay can be neglected. If it is above
1 rad/sec then it should be taken into account. But note that the Nyquist
frequency is π/Ts = 10π ≈ 30 rad/sec, so one is unlikely to try to obtain a
bandwidth as high as 5 rad/sec, say, with this value of Ts .

2.11

[ ẑ(k+1|k); ⋮ ; ẑ(k+Hu|k); ẑ(k+Hu+1|k); ⋮ ; ẑ(k+Hp|k) ]

  = [ Cz A; ⋮ ; Cz A^Hu; Cz A^(Hu+1); ⋮ ; Cz A^Hp ] x(k)

  + [ Cz B; ⋮ ; Cz Σ_{i=0}^{Hu−1} A^i B; Cz Σ_{i=0}^{Hu} A^i B; ⋮ ; Cz Σ_{i=0}^{Hp−1} A^i B ] u(k−1)

  + [ Cz B                       · · ·   0
      Cz (AB + B)                · · ·   0
      ⋮                                  ⋮
      Cz Σ_{i=0}^{Hu−1} A^i B    · · ·   Cz B
      Cz Σ_{i=0}^{Hu} A^i B      · · ·   Cz (AB + B)
      ⋮                                  ⋮
      Cz Σ_{i=0}^{Hp−1} A^i B    · · ·   Cz Σ_{i=0}^{Hp−Hu} A^i B ] [ ∆û(k|k); ⋮ ; ∆û(k + Hu − 1|k) ]    (18)
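For a concrete model these prediction matrices can be built directly. The following
sketch (the example model and horizons are illustrative) assembles the three matrices
appearing in (18):

% Sketch: build the prediction matrices of (18) for given A, B, Cz, Hu, Hp
A = [0.9 0.1; 0 0.8];  B = [0; 1];  Cz = [1 0];   % small example model
Hu = 2;  Hp = 4;
[p, n] = size(Cz);  m = size(B, 2);
Psi = zeros(p*Hp, n);  Ups = zeros(p*Hp, m);  Theta = zeros(p*Hp, m*Hu);
Si = zeros(n, m);                        % running sum of A^i*B, i = 0..j-1
for j = 1:Hp
    Si = Si + A^(j-1)*B;                 % sum_{i=0}^{j-1} A^i B
    Psi((j-1)*p+1:j*p, :) = Cz*A^j;
    Ups((j-1)*p+1:j*p, :) = Cz*Si;
    for l = 1:min(j, Hu)                 % column block l multiplies duhat(k+l-1|k)
        Sjl = zeros(n, m);
        for i = 0:(j-l)
            Sjl = Sjl + A^i*B;
        end
        Theta((j-1)*p+1:j*p, (l-1)*m+1:l*m) = Cz*Sjl;
    end
end
% Predicted controlled outputs: Z = Psi*x + Ups*u_prev + Theta*dU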


2.12 Note: The solution to this exercise may be clearer after reading Sections 3.1 and
3.2.

From (2.19) the cost function is

V(k) = Σ_{i=Hw}^{Hp} ‖z̃ˆ(k+i|k) − Dz û(k+i|k) − r(k+i|k)‖²_{Q(i)} + Σ_{i=0}^{Hu−1} ‖∆û(k+i|k)‖²_{R(i)}    (19)

But for vectors a and b we have

‖a − b‖²_M = (a − b)ᵀ M (a − b)    (20)
           = aᵀMa − 2aᵀMb + bᵀMb    (21)
           = ‖a‖²_M + ‖b‖²_M − 2aᵀMb    (22)

Hence, taking a = z̃ˆ(k + i|k) − r(k + i|k) and b = Dz û(k + i|k), the first term
in V(k) is

Σ_{i=Hw}^{Hp} { ‖z̃ˆ(k+i|k) − r(k+i|k)‖²_{Q(i)} + ‖Dz û(k+i|k)‖²_{Q(i)}
                − 2z̃ˆ(k+i|k)ᵀ Q(i) Dz û(k+i|k) + 2r(k+i|k)ᵀ Q(i) Dz û(k+i|k) }    (23)

Since z̃(k) = z(k) − Dz u(k) = Cz x(k) we have z̃ˆ(k + i|k) = Cz x̂(k + i|k), which
is linear in ∆û if a linear observer is used, so the first and third terms in this
expression are quadratic in ∆û (since û(k + i|k) = u(k − 1) + Σ_{j=0}^{i} ∆û(k + j|k) is
also linear in ∆û). The second term is clearly quadratic in ∆û, and the fourth
term is linear in ∆û. Hence the whole expression is quadratic in ∆û.

Since z̃ˆ is linear in both ẑ and û (and hence in ∆û), and ẑ is itself linear in
∆û, linear constraints on ẑ can be expressed as linear constraints on z̃ˆ and
∆û, and hence on the ∆û variables only.

2.13 (2.66) still holds when Dz ≠ 0. But (2.67)–(2.69) have to be changed, since now

ẑ(k + i|k) = Cz x̂(k + i|k) + Dz û(k + i|k)    (24)
           = Cz x̂(k + i|k) + Dz [ u(k − 1) + Σ_{j=0}^{i} ∆û(k + j|k) ]    (25)


So (2.70) should be replaced by

[ ẑ(k+1|k); ⋮ ; ẑ(k+Hp|k) ]
  = [ Cz 0 · · · 0; 0 Cz · · · 0; ⋮ ; 0 0 · · · Cz ] [ x̂(k+1|k); ⋮ ; x̂(k+Hp|k) ]
  + [ Dz; ⋮ ; Dz ] u(k − 1)
  + [ Dz Dz 0  · · · 0
      Dz Dz Dz · · · 0
      ⋮              ⋮
      Dz Dz Dz · · · Dz ] [ ∆û(k|k); ∆û(k+1|k); ∆û(k+2|k); ⋮ ; ∆û(k+Hp|k) ]    (26)

Note, however, that if Dz ≠ 0 then one should consider including ẑ(k|k) in the
cost function, since it is influenced by ∆û(k) (unlike in the case Dz = 0).

2.14 Applying the standard observability test, the pair

( [ A 0; 0 I ] , [ Cy I ] )

is observable if and only if the matrix

[ [ Cy I ]
  [ Cy I ] [ A 0; 0 I ]
  [ Cy I ] [ A 0; 0 I ]²
  ⋮                      ]  =  [ Cy     I
                                 Cy A   I
                                 Cy A²  I
                                 ⋮      ⋮ ]    (27)

has full column rank. It is easy to see that this is the case if and only if the
matrix

[ Cy; Cy A; Cy A²; ⋮ ]

has full column rank, which is precisely the condition that the pair (A, Cy)
should be observable.
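This can be checked numerically for any particular model, for example with the
Control System Toolbox function obsv (the A and Cy below are illustrative):

A  = [0.9 0.1; 0 0.8];  Cy = [1 0];            % example observable pair
n  = size(A,1);  p = size(Cy,1);
Atilde = [A, zeros(n,p); zeros(p,n), eye(p)];  % augmented 'A' with constant disturbance
Ctilde = [Cy, eye(p)];
rank(obsv(A, Cy))                              % = n   => (A, Cy) observable
rank(obsv(Atilde, Ctilde))                     % = n+p => augmented pair observable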

2.15 Note: This exercise requires the use of the Model Predictive Control Toolbox
in parts (c)–(e). It may be better done after Exercise 3.1. Even then it will
require some reading of the Model Predictive Control Toolbox documentation.
The advice given in part (b) of the question is seriously misleading; it is easier
to do it by hand than to use the Model Predictive Control Toolbox.


(a). The model becomes


x(k + 1) = 0.9x(k) + 0.5[u(k) + d(k)] (28)
d(k + 1) = d(k) (29)
y(k) = x(k) (30)
which can be written in state-space form as

[ x(k+1); d(k+1) ] = [ 0.9 0.5; 0 1 ] [ x(k); d(k) ] + [ 0.5; 0 ] u(k)    (31)

y(k) = [ 1 0 ] [ x(k); d(k) ]    (32)
(b). Using this model in the observer equation (2.81), with the same notation
as used in (2.84), gives the observer’s state transition matrix as

A − LC = [ 0.9 0.5; 0 1 ] − [ Lx; Ld ] [ 1 0 ] = [ 0.9 − Lx   0.5; −Ld   1 ]    (33)
For a deadbeat observer both eigenvalues of this matrix should be zero,
so both its determinant and its trace should be zero. Hence we need
0.9 − Lx + 0.5Ld = 0 and 1.9 − Lx = 0 (34)
from which Lx = 1.9 and Ld = 2.
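A quick numerical check of this deadbeat gain (sketch):

A = [0.9 0.5; 0 1];  C = [1 0];
L = [1.9; 2];                 % gains from (34)
eig(A - L*C)                  % both eigenvalues should be (numerically) zero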
(c). To do this using the Model Predictive Control Toolbox is complicated by
the fact that the Model Predictive Control Toolbox uses the augmented
state vector (2.37) — without distinguishing between y and z. See Ap-
pendix C and the Model Predictive Control Toolbox documentation.
From (2.43) the ‘A’ matrix of the plant with the state vector as in (2.37)
is

Ã = [ 0.9 0.5 0; 0 1 0; 0.9 0.5 1 ]    (35)

and the corresponding ‘C’ matrix is C̃ = [ 0 0 1 ]. The observer gain
matrix ‘L’ now has 3 elements and the closed-loop state transition matrix
of the observer ‘A − LC’ is

Ã − L̃C̃ = [ 0.9 0.5 0; 0 1 0; 0.9 0.5 1 ] − [ L1; L2; L3 ] [ 0 0 1 ]    (36)
        = [ 0.9 0.5 −L1; 0 1 −L2; 0.9 0.5 1 − L3 ]    (37)
The characteristic polynomial of this is

det[ λI − (Ã − L̃C̃) ] = λ³ + (L3 − 2.9)λ² + (2.8 + 0.9L1 + 0.5L2 − 1.9L3)λ − 0.9(L1 − L3 + 1)    (38)


For a deadbeat observer all the roots of this should be at zero, so this
characteristic polynomial should be just λ3 ; hence setting the coefficients
of λ2 , λ and λ0 all to zero gives

L1 = 1.9, L2 = 2, L3 = 2.9 (39)

(Compare this with the observer gain obtained in part (b).)


One more computation is required before the simulation can proceed: the
observer gain matrix that must be applied to the Model Predictive Control
Toolbox function smpcsim is L̃′, where ÃL̃′ = L̃ — see Mini-Tutorial 2,
on state observers. Since Ã is invertible in this case, we obtain

L̃′ = Ã⁻¹L̃ = [ 1; 2; 1 ]    (40)
Now the response can be simulated using the Model Predictive Control
Toolbox functions smpccon and smpcsim as follows:
%%%%%% Define plant and internal model:
A = [0.9 .5 ; 0 1]; B = [0.5 ; 0]; % Define the plant with
C = [1 0]; D = 0; % disturbance as in part (a)
pmod = ss2mod(A,B,C,D); % Put plant model into MOD format
imod = pmod; % Internal model same as plant model

%%%%%% Compute predictive controller:
Q = 1;                 % penalty on plant output errors
R = 0;                 % penalty on input moves
Hp = 30; Hu = 2;       % Prediction and control horizons, respectively
Ks = smpccon(imod,Q,R,Hu,Hp) % Compute controller gain matrix
%%% CHECK: I get Ks = [2, -1.8, 56.6163, -2]

%%%%%% Simulate with deadbeat observer:


Lprime = [1 ; 2 ; 1]; % Observer gain L’ as calculated above
tend = 50; % End time for simulation
r = 1; % Set-point = 1 (constant)
wu=[zeros(1,20),1]’; % Unmeasured step disturbance on the
% input after 20 time steps
[y1,u1] = smpcsim(pmod,imod,Ks,tend,r,[],Lprime,[],[],[],wu);
plotall(y1,u1)

%%%%%% Simulate with default observer:


[y2,u2] = smpcsim(pmod,imod,Ks,tend,r,[],[],[],[],[],wu);
figure % Open new figure window
plotall(y2,u2)


Figure 1: Exercise 2.15(c): simulation with deadbeat observer. (Top panel: outputs; bottom panel: manipulated variables; horizontal axis: time.)

Figure 2: Exercise 2.15(c): simulation with default observer. (Top panel: outputs; bottom panel: manipulated variables; horizontal axis: time.)


Figures 1 and 2 show the results using the deadbeat and default observer,
respectively. As expected, with the deadbeat observer the controller com-
pletely compensates for the disturbance after two steps (because the plant
with disturbance model has two states), but large input changes occur
and the peak deviation of the output from its set-point is large. On the
other hand the controller with the default (DMC) observer takes longer
to correct for the disturbance, but the peak deviation of the output is
much smaller, and much less violent input changes are made.
(d). The observer makes no difference to the set-point response, as can be
seen from the two sets of plots, which are identical until the disturbance
occurs. This can be checked further by defining various trajectories for
the set-point vector r in the simulations.
(e). To use the Control System Toolbox function place it is first necessary
to put the plant model into the ‘augmented’ form used by the Model
Predictive Control Toolbox, as in part (c) above. The Model Predictive
Control Toolbox function mpcaugss does this:

%%%% Assume we are given A,B,C,D %%%%


[Atilde,Btilde,Ctilde] = mpcaugss(A,B,C);

Placing the eigenvalues of the observer closed-loop transition matrix à −


LC̃ is the dual of the problem of placing the eigenvalues of the closed-loop
state-feedback matrix à − B̃K, which is solved by the function place:

%%%% Assume we have vector P of desired closed-loop


%%%% observer pole locations, and solve
%%%% dual pole-placement problem:
L = place(Atilde’,Ctilde’,P); % Atilde and Ctilde
% must be transposed
L = L’; % and L itself must be transposed too to get its
% dimensions right.

Note that there is a restriction, that no pole in P can have multiplicity


greater than the number of outputs of the plant (because of the particular
algorithm used in place). So you can design a deadbeat observer only
approximately in this way, because you have to specify poles close to 0,
but not all exactly at 0.
Finally, as remarked in (c) above, the observer gain matrix required for
Model Predictive Control Toolbox functions such as smpcsim is L′ rather
than L, where L = ÃL′:

Lprime = Atilde\L;

This can now be used with Model Predictive Control Toolbox functions,
for example as shown in part (c) above.


2.16 Suppose that the (measured) plant output vector has constant value y(k) = y0 .
If the plant input is also constant and if the observer (2.86) is asymptotically
stable then the estimated state will converge to a constant value ξ̂0 = [x̂0ᵀ, d̂0ᵀ]ᵀ.
From the second ‘row’ of (2.86) we see that

d̂0 = d̂0 + Ld (y0 − Cy x̂0 − d̂0)    (41)

so that Ld (y0 − Cy x̂0 − d̂0) = 0, and hence (for nonsingular Ld) y0 = Cy x̂0 + d̂0.
But from (2.79) we have the output estimate

ŷ0 = Cy x̂0 + d̂0    (42)

and hence ŷ0 = y0.


Chapter 3

Solving predictive control problems

3.1 No solution required here. Follow Model Predictive Control Toolbox User’s Guide.

3.2 No solution required here. Follow Model Predictive Control Toolbox User’s Guide.

3.3 (a). The continuous-time state-space model of the swimming pool is

θ̇ = −(1/T) θ + [ k/T  1/T ] [ q; θa ],    (1)

where θ is taken as the state variable. The following MATLAB code
fragment gets the discrete-time model:
kmod = 0.2; Tmod = 1; % assumed model parameters
amod = -1/Tmod; bmod = [kmod/Tmod, 1/Tmod]; % A and B
cmod = 1; dmod = [0, 0]; % C and D
modc = ss(amod,bmod,cmod,dmod); % Makes LTI system object
Ts = 0.25; % Sampling interval
modd = c2d(modc,Ts); % Convert to discrete-time model
[ad,bd] = ssdata(modd); % A and B matrices of discrete-time
% model. C and D stay unchanged.
Now you should have ad = 0.7788 and bd = [0.0442, 0.2212], which
gives (3.94). This could also have been obtained by using transfer func-
tions, using Control System Toolbox function tf instead of ss, or possibly
tf2ss.
To form the model in the Model Predictive Control Toolbox’s MOD format,
first define the minfo vector, which specifies the sampling interval, and
that there is 1 state, 1 control input, 1 measured output and 1 unmeasured
disturbance:


minfo = [Ts,1,1,0,1,1,0];
Now create the model in MOD format:
imod = ss2mod(ad,bd,cmod,dmod,minfo);
(b). Define the parameters (using the Model Predictive Control Toolbox nota-
tion), then compute Ks :
ywt = 1; uwt = 0; % Q and R, respectively
P = 10; M = 3; % Hp and Hu respectively
Ks = smpccon(imod,ywt,uwt,M,P); % Should give
% [22.6041, -17.6041, -22.6041]
Closed-loop stability can be verified either by simulating the closed loop
using smpcsim — see below — or by forming the closed-loop system
and checking the closed-loop poles (that is, the eigenvalues of its state-
transition matrix):
clmod = smpccl(imod,imod,Ks); % Form closed-loop system
smpcpole(clmod) % Compute closed-loop poles
This shows 2 poles at the origin, and 1 at 0.7788. These are all inside the
unit disk, so the closed-loop is stable. (The poles at 0 indicate that
‘deadbeat’ closed-loop behaviour has been obtained, because of using
R = 0. The pole at 0.7788 is the open-loop plant pole, which arises
from the use of the default ‘DMC’ observer; it does not appear in the
set-point → output transfer function.)
(c). To simulate the behaviour with a perfect model over 25 hours use smpcsim
as follows:
tend = 25; % End time
setpoint = 20;
airtemp = 15;
[wtemp,power] = smpcsim(imod,imod,Ks,tend,setpoint,...
[],[],[],[],airtemp,[]);
% wtemp is water temperature.
To see what happens when the plant parameters change, create a new
model of the plant pmod:
kplant = 0.3; Tplant = 1.25;
% Now proceed as in part (a) above, get discrete-time model,
% and convert to MOD format, eventually getting:
pmod = ss2mod(aplantd,bplantd,c,d,minfo);
Now simulate (see Figure 1):
[wtemp,power] = smpcsim(pmod,imod,Ks,tend,setpoint,...
[],[],[],[],airtemp,[]);
timevector = (0:0.25:120)’; % for plotting purposes
plotall(wtemp,power,timevector(1:length(wtemp))) % Plot results


Figure 1: Solution to Exercise 3.3(c). (Top panel: temperature (deg C); bottom panel: input power (kW); horizontal axis: time (hours).)

(d). The only change required here is to redefine the air temperature variation.
Here we run the simulation over 120 hours (Figure 2):
airtemp = 15+10*sin(2*pi*timevector/24); % Diurnal variation
tend = 120;
[wtemp,power] = smpcsim(pmod,imod,Ks,tend,setpoint,...
[],[],[],[],airtemp,[]);
plotall(wtemp,power,timevector)
(e). Now the limits on q must be specified. In order to solve the QP problem,
Model Predictive Control Toolbox function scmpc is needed (Figure 3):
ulim = [ 0, 40, 1e6]; % Finite limit on slew rate required.
[wtemp,power] = scmpc(pmod,imod,ywt,uwt,M,P,tend,setpoint,...
ulim,[],[],[],[],airtemp,[]);

3.4 Error in question: The question should make reference to equation (2.66), not to
(2.23).
As in Exercise 1.14, the vector ∆U(k) should contain only the changes of
the control signal (using the definition of ∆U (k) introduced in Section 3.1,
equation (3.2)):

∆U(k) = [ ∆û(k + M1|k); ∆û(k + M2|k); ⋮ ; ∆û(k + MHu|k) ]    (2)


Figure 2: Solution to Exercise 3.3(d). The water and air temperatures are shown by the solid and dotted lines, respectively. (Top panel: temperature (deg C); bottom panel: input power (kW); horizontal axis: time (hours).)

Figure 3: Solution to Exercise 3.3(e). The water and air temperatures are shown by the solid and dotted lines, respectively. (Top panel: water and air temperature (deg C); bottom panel: input power (kW); horizontal axis: time (hours).)


and the matrix which multiplies this vector in (2.66) should contain only the
corresponding columns. In the following expression it should be understood
that sums whose upper limit is negative are to be replaced by the zero matrix,
so that a prediction x̂(k + j|k) does not depend on ∆û(k + ℓ|k) if ℓ ≥ j:

[ Σ_{i=0}^{−M1} A^i B          0                        · · ·   0
  Σ_{i=0}^{1−M1} A^i B         Σ_{i=0}^{1−M2} A^i B     · · ·   0
  ⋮                            ⋮                                ⋮
  Σ_{i=0}^{Hu−1−M1} A^i B      Σ_{i=0}^{Hu−1−M2} A^i B  · · ·   Σ_{i=0}^{Hu−1−MHu} A^i B
  Σ_{i=0}^{Hu−M1} A^i B        Σ_{i=0}^{Hu−M2} A^i B    · · ·   Σ_{i=0}^{Hu−MHu} A^i B
  ⋮                            ⋮                                ⋮
  Σ_{i=0}^{Hp−1−M1} A^i B      Σ_{i=0}^{Hp−1−M2} A^i B  · · ·   Σ_{i=0}^{Hp−1−MHu} A^i B ]    (3)

The first two terms in (2.66) remain unchanged.

These changes propagate through to the terms Θ and ∆U(k) in equation (3.5),
the other terms remaining unchanged.

3.5 (a). The only difference as regards prediction is that the values of û(k + i|k)
(i = 1, . . . , Hu − 1) must be obtained, either explicitly or implicitly, in
terms of ∆U(k) and u(k − 1). Explicitly, these are given by

U(k) = [ û(k|k); û(k + 1|k); ⋮ ; û(k + Hu − 1|k) ]
     = [ I; I; ⋮ ; I ] u(k − 1) + [ I 0 · · · 0
                                    I I · · · 0
                                    ⋮ ⋮       ⋮
                                    I I · · · I ] ∆U(k)    (4)
Now the cost function can be expressed as

V(k) = ‖Z(k) − T(k)‖²_Q + ‖U(k) − Uref(k)‖²_S + ‖∆U(k)‖²_R    (5)

which replaces (3.2) in the book, where Uref (k) is a vector containing the
prescribed future trajectory of the inputs and U (k) is defined above. S
is a block-diagonal matrix with the penalty weight S(i) in the ith block.
But we saw above that we can express U (k) in the form

U (k) = Mu(k − 1) + N ∆U (k) (6)

with suitable matrices M and N . So instead of equations (3.7)–(3.9) we


have

V(k) = ‖Θ∆U(k) − E(k)‖²_Q + ‖Mu(k − 1) + N∆U(k) − Uref(k)‖²_S + ‖∆U(k)‖²_R
     = E(k)ᵀQE(k) − 2∆U(k)ᵀΘᵀQE(k) + ∆U(k)ᵀ[ΘᵀQΘ + R]∆U(k)   (as before)
       + (Mu(k − 1) − Uref(k))ᵀ S (Mu(k − 1) − Uref(k))
       + 2∆U(k)ᵀNᵀS(Mu(k − 1) − Uref(k)) + ∆U(k)ᵀNᵀSN∆U(k)    (7)

which is of the form

V(k) = const − ∆U(k)ᵀG′ + ∆U(k)ᵀH′∆U(k)    (8)

where G′ = G − 2NᵀS(Mu(k − 1) − Uref(k)) and H′ = H + NᵀSN (compare
these with (3.10) and (3.11)).
So V (k) is a quadratic function of ∆U (k) again, as in the standard formu-
lation in the book. With the usual linear constraints on the inputs and
states, computation of the optimal ∆U (k) again requires the solution of
a QP problem.
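A small sketch of how the matrices M and N of (6) could be formed in MATLAB
(the dimensions used are illustrative):

ell = 2;  Hu = 3;                        % number of inputs and control horizon
M = repmat(eye(ell), Hu, 1);             % stacked identity matrices
N = kron(tril(ones(Hu)), eye(ell));      % block lower-triangular matrix of identity blocks
% U(k) = M*u(k-1) + N*DeltaU(k), as in (4) and (6)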
(b). The main reason for specifying trajectories for the inputs as well as for
outputs is usually economic. Some inputs may be associated with higher
costs than others, so if a choice of input trajectories exists it may be
important to ‘steer’ the solution towards the lowest possible cost. Most
commonly this relates to steady-state values, the optimal combination
being found from a steady-state economic optimization. (See also Sec-
tion 10.1.1.) The main reason against doing this is that if steady-state
gains are not known accurately, then pushing the inputs towards partic-
ular values will prevent the controller from achieving offset-free tracking,
because those values are likely to be inconsistent with offset-free tracking.

3.6 In this case the constraints are of the form

û(k + i|k) − uhigh(k + i) ≤ 0
−û(k + i|k) + ulow(k + i) ≤ 0

so that F has the form

F = [  I   0  · · ·   0   −uhigh(k)
      −I   0  · · ·   0   +ulow(k)
       ⋮   ⋮           ⋮   ⋮
       0   0  · · ·   I   −uhigh(k + Hu − 1)
       0   0  · · ·  −I   +ulow(k + Hu − 1) ]
Now z1 is obtained by summing block-columns 1 to Hu , z2 is obtained by
summing block-columns 2 to Hu , etc., so that z has the form indicated in the
question.


The right hand side is −z1 u(k − 1) − f, which is in this case:

− [ I; −I; ⋮ ; I; −I ] u(k − 1) − [ −uhigh(k); +ulow(k); ⋮ ; −uhigh(k + Hu − 1); +ulow(k + Hu − 1) ]

  = [ −u(k − 1) + uhigh(k)
      +u(k − 1) − ulow(k)
      ⋮
      −u(k − 1) + uhigh(k + Hu − 1)
      +u(k − 1) − ulow(k + Hu − 1) ]    (9)

3.7 Function modification:


At line 252 of function smpccon we find:

X = [diag(ywts)*Su;diag(uwts)]\[diag(ywts);zeros(mnu,pny)];
% This is X=A\B.

It will be seen that this implements the unconstrained solution to the MPC
problem in the form given in equation (3.28) in the book. diag(ywts) corre-
sponds to SQ , where SQ T S = Q, and diag(uwts) corresponds to S , where
Q R
SR SR = R. Here ywts is a vector made
T
p up ofpthe diagonal pelements of
the square roots of the output weights, [ Q11 (1), Q22 (1), . . . , Qmm (Hp )]
and
p uwts p is a similar vector
p made up of square roots of the input weights
[ R11 (0), R22 (1), . . . , R`` (Hu − 1)]. The ‘diag’ function converts them
into diagonal matrices, so that diag(ywts) becomes a square matrix of di-
mension mHp and diag(uwts) becomes a square matrix of dimension `Hu .
All that needs to be done is to replace these diagonal matrices by block-
diagonal matrices corresponding to the matrices SQ and SR , respectively. One
way of doing this is to allow the input arguments ywts and uwts to the function
smpccon to be three-dimensional arrays with dimensions m × m × Hp and
ℓ × ℓ × Hu, respectively, so that ywts(i,j,k) represents the (i, j) element of
Q(k)^(1/2), etc., then
form them into block-diagonal matrices as follows:

Qroot = []; Rroot = [];


for k=1:P, %%% P is prediction horizon in MPC Toolbox
Qroot = blkdiag(Qroot,ywts(:,:,k));
end
for k=1:M, %%% M is control horizon in MPC Toolbox
Rroot = blkdiag(Rroot,uwts(:,:,k));
end

These can then be used in the formation of X:


X = [Qroot*Su;Rroot]\[Qroot;zeros(mnu,pny)]; % This is X=A\B.

For this to work some dimension-checking code must be commented out in the
function smpccon.
It is also possible to have the input arguments ywts and uwts corresponding
to Q and R rather than SQ and SR , but then matrix square roots must be
computed when forming the variables Qroot and Rroot, for instance by finding
Cholesky factors:

for k=1:P,
Qroot = blkdiag(Qroot,chol(ywts(:,:,k)));
end
for k=1:M,
Rroot = blkdiag(Rroot,chol(uwts(:,:,k)));
end

Solution of Example 6.1: Using the function mysmpccon, modified as described


above, Example 6.1 can be solved as follows:

a = [0,0 ; 1,0]; b=[1 ; 0];


c = eye(2); d = zeros(2,1); % Both states are controlled outputs
imod = ss2mod(a,b,c,d); % Put model into MPC Toolbox MOD format.

Hp = 1; % Prediction horizon
Hu = 1; % Control horizon
ywt(:,:,1) = [1,2 ; 2,6]; % Only 1 matrix needed since Hp=1.
uwt = 1e-10; % Not zero because ’chol’ needs
% positive definite weight.

Ks = mysmpccon(imod,ywt,uwt,Hu,Hp)
%%% CHECK: I get Ks = [1, 2, -2, 0, -1, -2]

% Simulation of the resulting closed loop:


[y,u] = smpcsim(imod,imod,Ks,10,[1,1]);
plotall(y,u)

Figure 4 shows the resulting closed-loop behaviour. It can be seen to be


unstable, as predicted in Example 6.1.
Solution of Exercise 6.2:
For part (a) only Hp has to be changed from 1 to 2, and ywt(:,:,2) needs to
be defined:

Hp = 2;
ywt(:,:,2)=ywt(:,:,1);
Ks = mysmpccon(imod,ywt,uwt,Hu,Hp)
%%% CHECK: I get Ks = [0.3333 0.8333 -0.8333 0 -0.3333 -0.8333]


Figure 4: Exercise 3.7: simulation of Example 6.1. (Top panel: outputs; bottom panel: manipulated variables; horizontal axis: time.)

% Simulate and plot:


[y,u] = smpcsim(imod,imod,Ks,10,[1,1]);
plotall(y,u)

Figure 5 shows the resulting closed-loop behaviour. It can be seen to be stable,


as predicted in Exercise 6.2(a).
For part (b) Hu must be changed from 1 to 2, and uwt changed correspondingly:

Hu = 2;
uwt(1,1,2)=uwt(1,1,1);
Ks = mysmpccon(imod,ywt,uwt,Hu,Hp)
%%% CHECK: I get Ks = [0.3333 1.3333 -1.3333 0 -0.3333 -1.3333]
% Simulate and plot:
[y,u] = smpcsim(imod,imod,Ks,10,[1,1]);
plotall(y,u)

Figure 6 shows the resulting closed-loop behaviour. It can be seen to be stable,


as predicted in Exercise 6.2(b).

3.8 The objective function can be written as

(1/2) θᵀ [ 2 −1; −1 2 ] θ + [ 0; −3 ]ᵀ θ

where the matrix is Φ and the vector is φ.


Figure 5: Exercise 3.7: simulation of Exercise 6.2(a). (Top panel: outputs; bottom panel: manipulated variables; horizontal axis: time.)

Figure 6: Exercise 3.7: simulation of Exercise 6.2(b). (Top panel: outputs; bottom panel: manipulated variables; horizontal axis: time.)


and the constraints can be written as

[ −1 0; 0 −1; 1 1 ] θ ≤ [ 0; 0; 2 ]

where the matrix on the left is Ω and the vector on the right is ω.

The eigenvalues of Φ are 3 and 1, so Φ > 0. Hence the QP problem is convex.

KKT conditions:
There are no equality constraints in this problem, so H and h are empty in
this case. Condition (3.61) gives t = [3/2, 1/2, 0]ᵀ if θ = [3/2, 1/2]ᵀ. Hence,
from the complementarity condition tᵀλ = 0 (3.62) and the fact that λ ≥ 0
and t ≥ 0, the first two elements of λ must be zero, so that λ = [0, 0, λ3]ᵀ. So
finally we have to check the condition (3.59):

Ωᵀλ = −Φθ − φ

[ −1 0 1; 0 −1 1 ] [ 0; 0; λ3 ] = [ 1/2; 1/2 ]

which is satisfied by λ3 = 1/2.

Thus the KKT conditions are satisfied, and θ = [3/2, 1/2]ᵀ is the optimal
solution.

3.9 (a). The optimization problem can be written as

min over (θ, ε):  (1/2) [ θᵀ εᵀ ] [ Φ 0; 0 ρI ] [ θ; ε ] + [ φᵀ 0 ] [ θ; ε ]

subject to

[ Ω −I; 0 −I ] [ θ; ε ] ≤ [ ω; 0 ]

which is in the standard form of a QP problem.

(b). The expression (3.86) stays unchanged, except that the dimension of ε is
now the same as that of ω2. Inequalities (3.87) should be replaced by

Ω1 θ ≤ ω1
Ω2 θ ≤ ω2 + ε
ε ≥ 0

So the optimization problem can be written as

min over (θ, ε):  (1/2) [ θᵀ εᵀ ] [ Φ 0; 0 ρI ] [ θ; ε ] + [ φᵀ 0 ] [ θ; ε ]


subject to

[ Ω1 0; Ω2 −I; 0 −I ] [ θ; ε ] ≤ [ ω1; ω2; 0 ]

which is again in the standard form of a QP problem.
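As an illustration, the augmented QP of part (a) can be set up for a generic QP
solver such as quadprog from the Optimization Toolbox. This is only a sketch:
the data below (borrowed from Exercise 3.8) and the penalty rho are purely
illustrative.

% Sketch: soft-constrained QP of part (a) with illustrative data
Phi = [2 -1; -1 2];  phi = [0; -3];          % original Hessian and linear term
Omega = [-1 0; 0 -1; 1 1];  omega = [0; 0; 2];
rho = 1e3;  nc = size(Omega,1);              % slack penalty and number of constraints
H = blkdiag(Phi, rho*eye(nc));               % augmented Hessian
f = [phi; zeros(nc,1)];                      % augmented linear term
Aineq = [Omega, -eye(nc); zeros(nc,2), -eye(nc)];
bineq = [omega; zeros(nc,1)];
sol = quadprog(H, f, Aineq, bineq);          % sol = [theta; epsilon]
theta = sol(1:2), epsilon = sol(3:end)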

3.10 Error in question: The reference to (3.4) is spurious.

1-norm problem, (3.88):
The cost function can be written as

(1/2) [ θᵀ εᵀ ] [ Φ 0; 0 0 ] [ θ; ε ] + [ φᵀ ρ1ᵀ ] [ θ; ε ]

subject to

[ Ω −I; 0 −I ] [ θ; ε ] ≤ [ ω; 0 ]

where 1 is a vector of the same dimension as ε, each element of which is 1.
This is in the standard form of a QP problem.

∞-norm problem, (3.89)–(3.91):
Take the second formulation, namely that of (3.90)–(3.91). This can be written as

(1/2) [ θᵀ ε ] [ Φ 0; 0 0 ] [ θ; ε ] + [ φᵀ ρ ] [ θ; ε ]

subject to

[ Ω −1; 0 −1 ] [ θ; ε ] ≤ [ ω; 0 ]

Again this is in the standard form of a QP problem.

3.11 No solution required. Use function mismatch2 as suggested in the question.

3.12 No solution required. Use function disturb2 as suggested in the question.


Chapter 4

Step response and transfer function formulations

4.1 S(k) = H(0) + H(1) + . . . + H(k). Hence H(k) = S(k) − S(k − 1). So we can find
H(N ), H(N − 1), . . . , H(1) without any problems. For H(0) it seems that we
need H(−1); but recall that for any causal system we must have H(k) = 0 for
k < 0. Hence we can also get H(0) = S(0).

4.2 (a). This is probably the simplest, though not the most efficient, implementation.
Note that you get S(1), . . . , S(N ), even if M > 1. With the given interface
specification you can’t get S(0), since that would have to be stored in
S(:,:,0), but a zero index to an array is not allowed in MATLAB.
function S = ss2step(A,B,C,D,M,N)
S(:,:,1) = C*B+D;
for t=2:N,
S(:,:,t) = S(:,:,t-1) + C*A^(t-1)*B; % From (4.24)
end
(b). function upsilon = step2ups(S,Hw,Hp) % Follow (4.27)
[noutputs,ninputs,nsteps] = size(S);
if Hp>nsteps,
error(’Hp exceeds number of responses stored in S’)
end
% Reserve space (for speed):
upsilon = zeros((Hp-Hw+1)*noutputs,ninputs);
for t=Hw:Hp,
upsilon((t-Hw)*noutputs+1:(t-Hw+1)*noutputs,:) = S(:,:,t);
end

function theta = step2the(S,Hw,Hp,Hu) % Follow (4.28)


[noutputs,ninputs,nsteps] = size(S);

% Reserve space (for speed):
theta = zeros((Hp-Hw+1)*noutputs,Hu*ninputs);
for j=1:min(Hw,Hu), % Do blocks of columns at a time
theta(:,(j-1)*ninputs+1:j*ninputs) = ...
step2ups(S,Hw-j+1,Hp-j+1);
end
for j=Hw+1:Hu, % this skipped if Hu <= Hw
theta((j-Hw)*noutputs+1:(Hp-Hw+1)*noutputs,...
(j-1)*ninputs+1:j*ninputs) = step2ups(S,1,Hp-j+1);
end

4.3 In this case p(Hp − Hw + 1) = m, so that Θ and Υ are square. Furthermore
from (4.27) and (4.28) we have that Υ = Θ = S(Hw). But from (3.27)
we have KMPC = H⁻¹ΘᵀQ, since H has only m rows in this case. Also,
we have H = ΘᵀQΘ, since R = 0. So, assuming S(Hw)⁻¹ exists, we have
KMPC = Θ⁻¹Q⁻¹Θ⁻ᵀΘᵀQ = Θ⁻¹ = S(Hw)⁻¹. Therefore KMPC Υ = I, and
hence I − KMPC Υ = 0. Hence there is no feedback from u(k − 1) to u(k), and
hence the control law (in the full state measurement case) is:

u(k) = S(Hw)⁻¹ r(k + Hw) − S(Hw)⁻¹ Cz A^Hw x(k).

4.4 The pulse responses are obtained as: H(0) = S(0) = [0, 0] (which tells us that
the ‘D’ matrix of the state-space model will be D = [0, 0]), H(1) = S(1) −
S(0) = [0, 2], H(2) = S(2) − S(1) = [1, 1], H(3) = S(3) − S(2) = [0, 0] (since
S(k) = S(2) for k > 2) and similarly H(k) = [0, 0] for k > 3. So we have a
finite pulse response, hence all the eigenvalues of the ‘A’ matrix must be at
0. A naive (but adequate) way to proceed is to build two separate state-space
models, one for each input, and then put them together.
For the first input, the required pulse response sequence is (omitting H(0)):
{0, 1, 0, 0, . . .}. Hence use two states; as in Example 4.1 try:
   
0 1 0
A1 = B1 =
0 0 1

We need to find C1 = [c1 , c2 ] such that

C1 B1 = c2 = 0
C1 A1 B1 = c1 = 1

For the second input, the required pulse response sequence is (omitting H(0)):
{2, 1, 0, 0, . . .}. Again use two states; try:

A2 = [ 0 1; 0 0 ]    B2 = [ 0; 1 ]

We need to find C2 = [c3 , c4 ] such that

C2 B2 = c4 = 2
C2 A2 B2 = c3 = 1

Now we get a 4-state system with the required pulse response as follows:

A = [ A1 0; 0 A2 ]    B = [ B1 0; 0 B2 ]
C = [ C1  C2 ]        D = [ 0  0 ]

Although this is correct, the state dimension is unnecessarily large. It can be


checked that the observability matrix has rank 2 rather than 4, which shows
that a minimal realization with only two states can be found which gives the
same step response. The SVD-based method given in Section 4.1.4 will find
such a two-state model (and so will other less naive methods than the one used
here). The SVD-based method gives

A = [ 0.3536  0.5493; −0.2275  −0.3536 ]    B = [ 0.2367  1.3798; 0.8880  0.1524 ]
C = [ 1.4935  −0.3981 ]                     D = [ 0  0 ]
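Either realization is easily checked against the given step-response data, for
example (a sketch; C1 and C2 use the coefficients found above):

A1 = [0 1; 0 0];  B1 = [0; 1];  C1 = [1 0];   % c1 = 1, c2 = 0
A2 = [0 1; 0 0];  B2 = [0; 1];  C2 = [1 2];   % c3 = 1, c4 = 2
A = blkdiag(A1, A2);  B = blkdiag(B1, B2);
C = [C1, C2];  D = [0, 0];
sys = ss(A, B, C, D, 1);                      % discrete-time model, sampling interval 1
y = step(sys, 0:4);                           % y(:,1,j) is the response to a step on input j
squeeze(y)                                    % settles at [1, 3] after two samples
rank(obsv(A, C))                              % = 2, so a 2-state realization suffices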

4.5 If the controllability matrix Γ is ‘shifted left’ by m columns to form the matrix
Γ←, then we have Γ← = AΓ. So shift Γn left by m columns to form the matrix
Γn←, and estimate A by solving Γn← = AΓn.
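In MATLAB this estimate can be computed by least squares in one line; the
following sketch uses illustrative data:

% Sketch: estimate A by 'shifting' the controllability matrix
A = [0.9 0.2; 0 0.7];  B = [1; 0.5];  N = 4;         % illustrative system
Gamma = B;
for i = 1:N, Gamma = [Gamma, A*Gamma(:, end)]; end   % Gamma = [B, AB, ..., A^N B]
m = size(B, 2);
A_est = Gamma(:, m+1:end) / Gamma(:, 1:end-m)        % least-squares solution; recovers A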

4.6 Solution of equation (4.110): We need to find polynomials E′2(z⁻¹) and F′2(z⁻¹),
of degrees at most 1 and 0 respectively, such that

(1 + 0.5z⁻¹)/(1 − z⁻¹) = E′2(z⁻¹) + z⁻² F′2(z⁻¹)/(1 − z⁻¹)    (1)

Let E′2(z⁻¹) = e′0 + e′1 z⁻¹ and F′2(z⁻¹) = f′0. Then, multiplying through by
1 − z⁻¹, we need:

1 + 0.5z⁻¹ = (e′0 + e′1 z⁻¹)(1 − z⁻¹) + z⁻² f′0    (2)
           = e′0 + (e′1 − e′0)z⁻¹ + (f′0 − e′1)z⁻²    (3)

Hence, by comparing coefficients, e′0 = 1, e′1 = 1.5, f′0 = 1.5. So

E′2(z⁻¹) = 1 + 1.5z⁻¹,    F′2(z⁻¹) = 1.5    (4)

Solution of equation (4.121): We need to find a polynomial E2(z⁻¹) of degree
at most 1 (because i − d = 2 − 1 = 1), and a polynomial F2(z⁻¹), such that

0.5/(1 − 0.9z⁻¹) = E2(z⁻¹) + z⁻¹ F2(z⁻¹)/(1 − 0.9z⁻¹)    (5)
Letting E2(z⁻¹) = e0 + e1 z⁻¹ and multiplying through by 1 − 0.9z⁻¹ gives

0.5 = (e0 + e1 z⁻¹)(1 − 0.9z⁻¹) + z⁻¹ F2(z⁻¹)    (6)

from which it is clear that F2(z⁻¹) need be only a constant f0. Hence

0.5 = (e0 + e1 z⁻¹)(1 − 0.9z⁻¹) + z⁻¹ f0    (7)
    = e0 + (e1 − 0.9e0 + f0)z⁻¹ − 0.9e1 z⁻²    (8)

from which e0 = 0.5, e1 = 0, f0 = 0.45. So

E2(z⁻¹) = 0.5,    F2(z⁻¹) = 0.45    (9)
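Both identities can be verified numerically by polynomial multiplication, using
the values found above (a sketch; polynomials are written in powers of z⁻¹):

% First identity: (1 + 0.5 z^-1) = E2'(z^-1)(1 - z^-1) + z^-2 F2'(z^-1)
E2p = [1 1.5];  F2p = 1.5;
conv(E2p, [1 -1]) + [0 0 F2p]          % should equal [1 0.5 0]

% Second identity: 0.5 = E2(z^-1)(1 - 0.9 z^-1) + z^-1 F2(z^-1)
E2 = 0.5;  F2 = 0.45;
conv(E2, [1 -0.9]) + [0 F2]            % should equal [0.5 0]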

4.7 Let D(z⁻¹) = (1 − z⁻¹)² = 1 − 2z⁻¹ + z⁻², C(z⁻¹) = 1, v(0) = v0, v(1) = v1, and
v(k) = 0 for k > 1. The difference equation corresponding to equation (4.100)
in the book is

d(k) − 2d(k − 1) + d(k − 2) = v(k)    (10)
Assuming v(−1) = v(−2) = 0 gives

d(0) = v0
d(1) = v1 + 2d(0) = v1 + 2v0
d(2) = 2d(1) − d(0) = 2v1 + 3v0
d(3) = 2d(2) − d(1) = 3v1 + 4v0
..
.
d(k) = k(v1 + v0 ) + v0 (11)

which is a ramp disturbance of slope v0 + v1 , initial value v0 .
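
This can be confirmed numerically by filtering a short pulse pair through 1/(1 − z−1)²; the values of v0 and v1 below are illustrative:

v0 = 1; v1 = 0.5;
v = [v0, v1, zeros(1,8)];       % v(0)=v0, v(1)=v1, zero afterwards
d = filter(1, [1 -2 1], v);     % d(k) - 2d(k-1) + d(k-2) = v(k)
% d increases by v0+v1 at each step after the first two samples (a ramp).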

4.8 Proceeding as in (4.70)–(4.74), the transfer function from w(k) to y(k) is Cy (zI −
A)−1 W . But from (4.74) the transfer function from u(k) to y(k) is Cy (zI −
A)−1 B. But we have
(zI − A)^{−1} = adj(zI − A) / det(zI − A)
where the adjoint matrix adj(zI −A) contains only polynomial elements. Thus
the denominator of each element of both transfer function matrices is the
determinant det(zI − A).

4.9 The distinguishing feature of the GPC model is that the denominator polynomial
of the plant’s disturbance–output transfer function is (1 − z −1 )A(z −1 ), where
A(z −1 ) is the denominator of the plant’s input–output transfer function.

First write the model (4.142)–(4.144) in the form
        
[xp(k + 1); xd(k + 1)] = [Ap I; 0 I] [xp(k); xd(k)] + [Bp; 0] u(k) + [0; Bd] v2(k)
y(k) = [Cp Cd] [xp(k); xd(k)]

Applying (4.74) to this model, the transfer function from u to y is


Pyu(z) = [Cp Cd] [zI − Ap, −I; 0, (z − 1)I]^{−1} [Bp; 0]
       = Cp(zI − Ap)^{−1} Bp

from which it follows that the denominator of the plant's input–output transfer function is det(zI − Ap). Applying (4.74) again, the transfer function from the disturbance v2 to y is
Pyv(z) = [Cp Cd] [zI − Ap, −I; 0, (z − 1)I]^{−1} [0; Bd]
       = [Cp Cd] [ (zI − Ap)^{−1}, (zI − Ap)^{−1}/(z − 1); 0, I/(z − 1) ] [0; Bd]
       = ( Cp(zI − Ap)^{−1}Bd + Cd Bd ) / (z − 1)

from which it follows that the denominator of the plant's disturbance–output transfer function is det(zI − Ap)(z − 1), which is the same as det(zI − Ap)(1 − z−1)z. The additional z factor does not matter for the disturbance–output transfer function (it affects the phase but not the power spectrum), so this is essentially the same as the GPC model.

4.10 The first p rows of (4.151) give

y(k + 1) = (I + A1)y(k) + (A2 − A1)y(k − 1) + · · · − An y(k − n)
           + (B2 − B1)u(k − 1) + · · · − Bp u(k − p)
           + C2 v(k − 1) + · · · + Cq v(k − q + 1)
           + B1 u(k) + v(k)    (12)
         = y(k) + A1[y(k) − y(k − 1)] + · · · + An[y(k − n + 1) − y(k − n)]
           + B1[u(k) − u(k − 1)] + · · · + Bp[u(k − p + 1) − u(k − p)]
           + v(k) + C2 v(k − 1) + · · · + Cq v(k − q + 1)    (13)

which is the same as (4.149) with k replaced by k + 1. The remaining rows


of (4.151) are ‘housekeeping’ to ensure that identities are satisfied (such as
y(k) = y(k)) on each side of equation (4.151).

4.11 Equation (4.178) follows by forming the difference x̂(k|k) − x̂(k − 1|k − 1). From
the standard observer update equation (2.81) we have
        
[∆x̂(k|k); η̂(k|k)] = ( I − [L∆x; Lη] [0  I] ) [∆x̂(k|k − 1); η̂(k|k − 1)] + [L∆x; Lη] y(k)

This gives

∆x̂(k|k) = ∆x̂(k|k − 1) − L∆x η̂(k|k − 1) + L∆x y(k) (14)

(for any value of Lη ), and η̂(k|k) = y(k) if Lη = I. Now from the observer
prediction equation (2.82) we have
      
[∆x̂(k|k − 1); η̂(k|k − 1)] = [A 0; CA I] [∆x̂(k − 1|k − 1); η̂(k − 1|k − 1)] + [Bu; CBu] ∆u(k − 1)    (15)
Substituting the value obtained from this for η̂(k|k − 1) back into (14) gives

∆x̂(k|k) = (I − L∆x C)∆x̂(k|k − 1) + L∆x [y(k) − y(k − 1)]

since η̂(k − 1|k − 1) = y(k − 1) — if Lη = I. Finally, note that setting L∆x = L0


gives (4.178).

4.12 The following function performs the simulation and plots the results:

%GPC Apply GPC to unstable helicopter.


% Example 4.11 and Exercise 4.12.
%
% Usage: gpcheli(pole)

function gpcheli(pole)

% State-space model of helicopter as in Example 4.11:


A = [3.769 -5.334 3.3423 -0.7773 -8.948 10.27 -7.794 -0.8;
1 0 0 0 0 0 0 0;
0 1 0 0 0 0 0 0;
0 0 1 0 0 0 0 0;
0 0 0 0 0 0 0 0;
0 0 0 0 1 0 0 0;
0 0 0 0 0 1 0 0;
0 0 0 0 0 0 0 0 ];

Bu = [6.472 0 0 0 1 0 0 0]’;
C = [1,0,0,0,0,0,0,0];

imod = ss2mod(A,Bu,C,0,0.6); % Model in MOD format

% Observer gain - as in (4.166):
Kest = [1,0,0,0,0,0,0,1,1]’;

% Define MPC parameters:


Hp = 5; % Prediction horizon
Hu = 2; % Control horizon
ywt = 1; % Tracking error penalty
uwt = 0; % Control move penalty

% Simulation parameters:
tend = 30; % Simulation length
r = 1; % Set-point
wu = [zeros(16,1);-0.03;-0.07;-0.1]; % Input disturbance

% Generate response with obs pole at 0.8,


% with disturbance on input:
[y8w,u8w]=scmpc(imod,imod,ywt,uwt,Hu,Hp,tend,r,[],[],Kest,...
[],[],[],wu);

% Modify model for observer pole at ’pole’:


Anew = A;
Anew(1,8) = -pole;
imodnew = ss2mod(Anew,Bu,C,0,0.6);

% Generate response with obs pole at ’pole’,


% with disturbance on input:
[ypw,upw]=scmpc(imodnew,imodnew,ywt,uwt,Hu,Hp,tend,r,[],[],...
Kest,[],[],[],wu);

% Plot results:

tvec = 0:0.6:tend;

% Plots with disturbance:


figure
subplot(211)
plot(tvec,y8w,’-’)
hold on
plot(tvec,ypw,’--’)
xlabel(’Time (seconds)’)
ylabel(’Output’)
grid

subplot(212)
stairs(tvec,u8w,’-’)
hold on

[Plot: helicopter output (top) and input (bottom) against Time (seconds).]

Figure 1: Exercise 4.12: helicopter response with observer pole at 0.8 (solid lines)
and at 0.3 (broken lines).

stairs(tvec,upw,’--’)
xlabel(’Time (seconds)’)
ylabel(’Input’)
grid
%%% End of function GPCHELI

Figure 1 shows the results with observer poles at 0.8 (solid lines) and 0.3
(broken lines).
Figure 2 shows the results with observer poles at 0.8 (solid lines) and 0.99
(broken lines).


[Plot: helicopter output (top) and input (bottom) against Time (seconds).]

Figure 2: Exercise 4.12: helicopter response with observer pole at 0.8 (solid lines)
and at 0.99 (broken lines).



Chapter 5

Other formulations of predictive


control

5.1 (a). To represent the fact that the air temperature is a measured disturbance,
the vector minfo should be defined to be:
minfo2 = [Ts, 1, 1, 1, 0,1,0]
Then new models of the plant are formed from the state-space matrices,
as in Exercise 3.3(a):
imod2 = ss2mod(ad,bd,c,d,minfo2);
pmod2 = ss2mod(aplantd,bplantd,c,d,minfo2);
To see the effectiveness of feedforward when the model is perfect and
there are no input constraints use:
[wtemp,power] = smpcsim(imod2,imod2,Ks,tend,setpoint,...
[],[],[],airtemp,[],[]);
(b). To see the effect when there is a plant/model mismatch use:
[wtemp,power] = smpcsim(pmod2,imod2,Ks,tend,setpoint,...
[],[],[],airtemp,[],[]);
(c). When there are input power constraints then use function scmpc to ob-
tain the QP solution. Use pmod,imod to simulate unmeasured air tem-
perature, and pmod2,imod2 to simulate the effect of feedforward from air
temperature measurements. (Note that the argument airtemp must be
supplied in different positions in each of these cases.) Figure 1 shows the
behaviour of the water temperature in the two cases, over one period of
24 hours, when the model is perfect (pmod==imod etc).

5.2 The cost function can be written as

V(k) = ||Q[Θ∆U(k) − E(k)]||_1 + ||R∆U(k)||_1    (1)


[Plot: water temperature in deg C against Time (hours).]

Figure 1: Exercise 5.1(c): water temperature when air temperature is unmeasured


(solid line) and measured (dashed line).

But

||a||_1 + ||b||_1 = || [a; b] ||_1

so

V(k) = || [QΘ; R] ∆U(k) − [QE(k); 0] ||_1    (2)

So again the resulting problem is a 1-norm minimization problem, subject to


linear inequality constraints, and hence it is an LP problem.
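
As a sketch of how such a problem could be passed to an LP solver, suppose the stacked matrices A1 = [QΘ; R] and b1 = [QE(k); 0] have been formed (names illustrative); introducing a slack vector t with |A1 ∆U − b1| ≤ t elementwise gives a standard LP that linprog can solve:

[m,n] = size(A1);
f = [zeros(n,1); ones(m,1)];              % minimise sum of slacks t
Aineq = [ A1, -eye(m); -A1, -eye(m)];     %  A1*x - b1 <= t  and  b1 - A1*x <= t
bineq = [ b1; -b1];
sol = linprog(f, Aineq, bineq);
DeltaU = sol(1:n);                        % optimal input-move sequence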

5.3 We need to find a transfer function pair such that the numerator of Ñ (z) has
the zeros of P (z), and the numerator of M̃ (z) has the unstable poles of P (z),
with any unwanted factors cancelling out between the denominators of Ñ (z)
and M̃ (z), ensuring that all the roots of the denominators lie within the unit
circle. For example, any pair of the form

Ñ(z) = (6.472z² − 2.476z + 7.794) / ((z − p1)(z − p2)(z − 0.675))

M̃(z) = ((z − 1.047 + 0.2352i)(z − 1.047 − 0.2352i)) / ((z − p1)(z − p2))

with |p1 | < 1 and |p2 | < 1, will be a solution. For example, p1 = p2 = 0.5
or p1 = 0, p2 = 0.93, would be solutions. Note that we need to have at least


two poles which cancel between Ñ (z) and M̃ (z), in order to ensure that both
factors are proper, since P (z) has two unstable poles.
(There are standard algorithms for obtaining coprime factors in state-space
form, usually each pair having exactly the same set of poles, so that all the
poles of P (z) appear in the numerator of M̃ (z).)

5.4 If the predicted input signal is û(k + i|k) = u0 (k) + u1 (k)i, this can be visualized
as a superposition of the constant signal u0 (k), which is present throughout
the prediction horizon, with a step of amplitude u1 (k) which starts at time
k + 1, another step of amplitude u1 (k) starting at time k + 2, and so on. The
effect of each of these steps on the output at time k +Hp depends on how much
time has elapsed between the occurrence of the step signal and k + Hp (that
is, the usual convolution argument). So the effect of the step which occurs at
time k is given by the step response after Hp steps, namely S(Hp ), etc. Thus,
if we were starting with zero initial conditions, we would have

ŷ(k + Hp |k) =
S(Hp )u0 (k) + S(Hp − 1)u1 (k) + S(Hp − 2)u1 (k) + · · · + S(1)u1 (k) (3)

However, we start with some general initial conditions, so we must also take
into account the free response of the system. Hence we get

ŷ(k + Hp |k) =
S(Hp )u0 (k) + [S(Hp − 1) + S(Hp − 2) + · · · + S(1)]u1 (k) + ŷf (k + Hp |k) (4)

If û(k + i|k) denotes the actual control signal level to be applied to the plant
then ŷf (k + Hp |k) should be the free response which results from applying a
zero input signal. To be consistent with the way that ‘free response’ is used
in the book, it should be the response that would result if the input were kept
constant at u(k − 1); in this case û(k + i|k) should denote the change made at
time k + i relative to u(k − 1).

5.5 (a). A ramp is a polynomial of degree ν = 1. There must be at least ν + 1 = 2


coincidence points. (‘Common-sense’ reasoning: at least 2 points are
needed to define a straight line.)
(b). The future input trajectory should be structured as a polynomial of degree
ν = 1, that is, as a straight line:

û(k + i|k) = u0 (k) + u1 (k)i

This has 2 parameters, u0 (k) and u1 (k), which can be chosen to give zero
predicted error at the 2 coincidence points.


(c). If there are two inputs, then other possibilities may exist for achieving
zero predicted error at the coincidence points. If the transfer functions
from the two inputs to the output are different, then it may be sufficient
to assume that each input is a constant, and choose the two levels appro-
priately. (This is only possible if the dynamics are different, not just the
gains.) In some applications it may be preferable to structure the two
inputs identically as ramps (one a multiple of the other), or to use only
one input, keeping the other one in reserve.
(d). The other feature needed to get error-free tracking of the ramp set-point
is a model of the plant-model error. Assuming an asymptotically stable
model is used, the error can be modelled as a ramp, whose 2 parameters
should be fitted from previous errors between the plant output and the
(independent) model output.

5.6 (a). The Model Predictive Control Toolbox function smpcsim applies the uncon-
strained controller computed by smpccon, and merely ‘clips’ the control
input signal at its saturation limits. Proceed as in the solution to Ex-
ercise 3.3(e); in the last step, instead of using scmpc, use the following
MATLAB command:
[wtemp,power] = smpcsim(pmod,imod,Ks,tend,setpoint,...
ulim,[],[],[],airtemp,[]);
In this case this gives virtually the same results as the QP solution com-
puted by scmpc.
(b). Since the input power slew rate is limited to 20 kW/hour and the control
update interval is Ts = 0.25 hours, the control increment is limited to
|∆q| ≤ 20Ts = 5 kW. To represent this in the Model Predictive Control
Toolbox, re-define ulim as: ulim = [0, 40, 5]. Now proceed as above,
using smpcsim to get the ‘clipped’ solution and scmpc to get the optimal
QP solution.

5.7 From (5.50) the first element of Y(t) is given by Cx(t), which is y(t), from (5.49).
The second element is given by CAx(t) + CBu(t) = C[Ax(t) + Bu(t)] =
C ẋ(t) = ẏ(t), so this agrees. The third element is given by CA2 x(t) +
CABu(t) + CB u̇(t) = CA[Ax(t) + Bu(t)] + CB u̇(t) = C[Aẋ(t) + B u̇(t)] =
C ẍ(t) = ÿ(t), which agrees again. And so on . . . .

5.8 Let

U(t) = [u(t); u̇(t); . . . ; u^{(m)}(t)]    (5)


Then, from (5.48),


 
û(t + τ|t) = [1, τ, τ²/2!, · · · , τ^m/m!] U(t)    (6)

and, from (5.50), Y(t) = Ψx(t) + ΘU (t) for suitable matrices Ψ and Θ. Sub-
stituting this into (5.52) gives
∫_{τ1}^{τ2} ||r(t + τ) − ŷ(t + τ|t)||² dτ
  = [Ψx(t) + ΘU(t)]^T Φ(τ1, τ2)[Ψx(t) + ΘU(t)] − 2φ(τ1, τ2)[Ψx(t) + ΘU(t)] + ∫_{τ1}^{τ2} r(t + τ)² dτ
  = U(t)^T Θ^T Φ(τ1, τ2)Θ U(t) + 2U(t)^T Θ^T [Φ(τ1, τ2)Ψx(t) − φ(τ1, τ2)^T]
    + [ x(t)^T Ψ^T Φ(τ1, τ2)Ψx(t) + ∫_{τ1}^{τ2} r(t + τ)² dτ ]    (7)

Also,
∫_{τ1}^{τ2} ||û(t + τ|t)||² dτ = ∫_{τ1}^{τ2} || [1, τ, τ²/2!, · · · , τ^m/m!] U(t) ||² dτ    (8)
                             = U(t)^T Φm(τ1, τ2) U(t)    (9)

where Φm (τ1 , τ2 ) is defined as in (5.53) but with n replaced by m. Hence, from


(5.46),

V(t) = U(t)^T [Θ^T Φ(τ1, τ2)Θ + λΦm(τ1, τ2)]U(t)
       + 2U(t)^T Θ^T [Φ(τ1, τ2)Ψx(t) − φ(τ1, τ2)^T]
       + [ x(t)^T Ψ^T Φ(τ1, τ2)Ψx(t) + ∫_{τ1}^{τ2} r(t + τ)² dτ ]    (10)
     = U(t)^T H U(t) − U(t)^T G + const    (11)

for suitably defined matrices G and H.


Chapter 6

Stability

6.1 There are many ways of doing this. Here is a simple way, suitable for MATLAB
5.

a = [-2, 0 ; 1, 0] % closed-loop A matrix


b = [ 0 ; 0 ] % B matrix (irrelevant)
c = [ 0 , 1 ] % Output is second state
d = 0

clsys = ss(a,b,c,d,1);

x0 = [ 1 ; 0 ] % Initial state
initial(clsys, x0); % Plots response to initial conditions

6.2 (a). Using the model,

   
x̂(k + 2|k) = [0 0; 1 0] x̂(k + 1|k) + [1; 0] û(k + 1|k)    (1)
            = [û(k + 1|k); x̂1(k + 1|k)]    (2)
            = [û(k|k); û(k|k)]    (3)
            = [u(k); u(k)]    (4)

since Hu = 1. As in Example 6.1, we have x̂2 (k + 1|k) = x1 (k). Hence


we have

V(k) = u(k)² + 6x1(k) + 4u(k)x1(k)                 (first step)
       + u(k)² + 6u(k)² + 4u(k)²                   (second step)    (5)
     = 12u(k)² + 6x1(k) + 4u(k)x1(k)    (6)

Hence

∂V(k)/∂u(k) = 24u(k) + 4x1(k)    (7)
            = 0   if   u(k) = −(1/6)x1(k)    (8)
Substituting this back into the model gives the closed-loop equation:
x(k + 1) = [−1/6 0; 1 0] x(k)    (9)

(b). With Hu = 2 we can choose both û(k|k) and û(k + 1|k). With no distur-
bances and a perfect model, we can replace these by u(k) and u(k + 1),
respectively. Hence we have
 
x̂(k + 2|k) = [u(k + 1); u(k)]    (10)

so that

V(k) = u(k)² + 6x1(k) + 4u(k)x1(k)                 (first step)
       + u(k + 1)² + 6u(k)² + 4u(k + 1)u(k)        (second step)    (11)
     = 7u(k)² + u(k + 1)² + 4u(k)[x1(k) + u(k + 1)] + 6x1(k)    (12)

and hence
∂V(k)/∂u(k) = 14u(k) + 4[x1(k) + u(k + 1)]    (13)
∂V(k)/∂u(k + 1) = 2u(k + 1) + 4u(k)    (14)

Setting ∂V (k)/∂u(k + 1) = 0 gives u(k + 1) = −2u(k) and hence

∂V (k)/∂u(k) = 14u(k) + 4[x1 (k) − 2u(k)]


= 6u(k) + 4x1 (k) = 0 (15)


if u(k) = −(2/3)x1 (k). Substituting this back into the model gives the
closed-loop equation:
x(k + 1) = [−2/3 0; 1 0] x(k)    (16)

(c). Follow the same procedure as in Exercise 6.1, changing the matrix a
appropriately. (Compare the results with those you get from Exercise
6.3.)
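
A sketch of the simulations asked for in (c), reusing the approach of Exercise 6.1 with the two closed-loop matrices found above:

a1 = [-1/6, 0; 1, 0];   % closed loop with Hu = 1 (eigenvalues -1/6 and 0)
a2 = [-2/3, 0; 1, 0];   % closed loop with Hu = 2 (eigenvalues -2/3 and 0)
x0 = [1; 0];
initial(ss(a1,[0;0],[0,1],0,1), x0); hold on
initial(ss(a2,[0;0],[0,1],0,1), x0);
% Both responses decay, in contrast to the unstable loop of Exercise 6.1.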

6.3 The solution to this has already been given as the solution to the second part of
Exercise 3.7.

6.4 Since û(k + i − 1) = 0 for i ≥ Hu, we have W̃s x̂(k + i|k) = Js W̃s x̂(k + i − 1|k) for i > Hu, and hence W̃s x̂(k + Hu + i|k) = Js^i W̃s x̂(k + Hu|k). Consequently

Σ_{i=Hu+1}^{∞} ||Cz W̃s x̂(k + i − 1|k)||²_Q
    = Σ_{i=0}^{∞} x̂(k + Hu|k)^T W̃s^T (Js^T)^i W̃s^T Cz^T Q Cz W̃s Js^i W̃s x̂(k + Hu|k)
    = x̂(k + Hu|k)^T Q̄ x̂(k + Hu|k)    (17)

if

Q̄ = W̃s^T Π W̃s    ((6.30) in the book)

where

Π = Σ_{i=0}^{∞} (Js^T)^i W̃s^T Cz^T Q Cz W̃s Js^i    (18)

Pre-multiplying each side of this by Js^T and post-multiplying by Js gives

Js^T Π Js = Σ_{i=1}^{∞} (Js^T)^i W̃s^T Cz^T Q Cz W̃s Js^i    (19)
          = Π − W̃s^T Cz^T Q Cz W̃s    (20)

which is equation (6.31) in the book.
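
Equation (6.31) is a discrete Lyapunov equation, so Π and hence Q̄ can be computed with dlyap. A sketch, assuming Js, W̃s (written Ws below), Cz and Q are available:

M    = Ws'*Cz'*Q*Cz*Ws;     % constant term in (6.31)
Pi   = dlyap(Js', M);       % solves Js'*Pi*Js - Pi + M = 0
Qbar = Ws'*Pi*Ws;           % terminal weight, equation (6.30)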

6.5 (a). The plant is stable, so from Section 6.2 we have the equivalent finite-horizon
problem:

u −1
HX
min ||ẑ(k + j + 1|k) − r(k + j + 1)||2Q(j+1)
∆U (k)
j=0


where Q(j) = Q for j = 1, . . . , Hu − 1, Q(Hu ) = Q̄, and

AT Q̄A = Q̄ − CzT QCz

In this case, since we have A = 0.9, Cz = 1 and Q = 1, we find Q̄ = 1/(1 − 0.9²) = 5.263. Since Hu = 2, we have

Q = [1 0; 0 5.263]
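
A quick numerical check of this value (a sketch using dlyap):

A = 0.9; Cz = 1; Qy = 1;
Qbar = dlyap(A', Cz'*Qy*Cz);   % solves A'*Qbar*A - Qbar + Cz'*Qy*Cz = 0
% Qbar = 1/(1 - 0.81) = 5.263, so the stacked weight is Q = diag(1, 5.263).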

(b). The computation and simulation can be done with the Model Predictive
Control Toolbox, using ywt=[1 ; sqrt(5.263)].
(c). In general for multi-output systems, Q̄ is not diagonal, even if each Q(j)
is diagonal. (See Example 6.3.) But the Model Predictive Control Toolbox
assumes that the weighting matrices are always diagonal.

6.6 The pair (Ã, B̃) being stabilizable is the same as the pair
   
( [A B; 0 I] , [B; I] )

being stabilizable.
The pair (A, B) is stabilizable if and only if any vector x which is not in the
range space (column span) of the controllability matrix P = [B, AB, A2 B, . . .]
lies in a subspace spanned by eigenvectors associated with stable eigenvalues
of A. (This is standard linear systems theory.)
Similarly the pair (Ã, B̃) is stabilizable if and only if any vector x̃ which is not
in the range space of the controllability matrix P̃ = [B̃, ÃB̃, Ã2 B̃, . . .] lies in
a subspace spanned by eigenvectors associated with stable eigenvalues of Ã.
But
 
P̃ = [B, AB + B, A²B + AB + B, · · · ; I, I, I, · · ·]    (21)
   = [B, AB, A²B, · · · ; I, 0, 0, · · ·] · [I I I · · · ; 0 I I · · · ; 0 0 I · · · ; · · ·]    (22)

Note that the last matrix has full rank, so every vector (of suitable dimension)
is in its range space. Now suppose that we have a vector x̃ which is not in
the range space of P̃ , namely it cannot be expressed as P̃ v for any vector v.
Partition it as x̃ = [x̃T1 , x̃T2 ]T , where x̃1 has as many elements as B has rows.
x̃2 is always in the range space of [I, 0, 0, . . .]. So x̃ is not in the range space of
P̃ if and only if x̃1 is not in the range space of P . But if (A, B) is stabilizable
then any such x̃1 must be in the stable invariant subspace of A.


Note that equivalence of controllability for the two representations is easier to


establish: (A, B) is controllable if and only if P has full row rank, and it is
easy to see from equation (22) above that this happens if and only if P̃ has
full row rank.

6.7 Applying (6.36) with j = N gives KN −1 = [0, 1.8824]. This gives the closed-loop
state-transition matrix
 
A − BKN−1 = [0 0.1176; 2 0]

which has eigenvalues ±0.4851, which both have modulus smaller than 1. Solv-
ing (6.49) for QN −1 gives
 
QN−1 = [−48 0; 0 12.2353]

which is clearly not positive semi-definite.
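
These claims are easy to confirm numerically; a sketch:

Acl = [0, 0.1176; 2, 0];
eig(Acl)                      % approximately +/-0.4851, both inside the unit circle
QN1 = [-48, 0; 0, 12.2353];
eig(QN1)                      % one eigenvalue is negative, so QN-1 is indefinite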


Chapter 7

Tuning

7.1 From the definitions (7.6), (7.7),

S(z) + T(z) = [I + P(z)K(z)H(z)]^{−1} + [I + P(z)K(z)H(z)]^{−1} P(z)K(z)H(z)    (1)
            = [I + P(z)K(z)H(z)]^{−1} [I + P(z)K(z)H(z)]    (2)
            = I    (3)

7.2 From Figure 7.1 we have

u = K(z){r − H(z)[n + d + P (z)u]} (4)

Hence
[I + K(z)H(z)P (z)]u = K(z){r − H(z)[n + d]} (5)
So the transfer function from the disturbance d to the plant input u is

−[I + K(z)H(z)P(z)]^{−1} K(z)H(z) = − (X(z)/B(z)) / (1 + X(z)/A(z))    (6)
                                  = − X(z)A(z) / ( B(z)[A(z) + X(z)] )    (7)

So zeros of B(z) are poles of this transfer function. Zeros of B(z) outside the
unit circle would lead to internal instability of the feedback system. That is
why you cannot use the controller to cancel such zeros.

7.3 (a). As suggested in the question, the easiest way of generating the data for
this exercise is to run the Model Predictive Control Toolbox demo file
mpctutss. Variable pmod holds the model of the plant. The demo con-
cerns a multivariable control problem with two control inputs and two


controlled outputs. There is an unmeasured disturbance which acts on


the two outputs through different transfer functions. (See the Model
Predictive Control Toolbox User’s Guide for details.) Although the orig-
inal continuous-time transfer functions are simple, they include time de-
lays, which result in a large number of states (14) being required for the
discrete-time model.
For mean-level control the prediction horizon Hp should be large (infinite
in principle), the control horizon should be Hu = 1, and the weight on
control moves should be R = 0. The following code simulates the set-
point response when Hp = 50, with equal penalty weights for tracking
errors of the two outputs, over a simulation horizon of 40 steps:
imod = pmod; % Make model identical to plant
Hp = 50;
Hu = 1;
ywt = [1,1];
uwt = [0,0];
tend = 40;
r = [1,0]; % Step set-point change on output 1
Ks = smpccon(imod,ywt,uwt,Hu,Hp);
[y,u] = smpcsim(pmod,imod,Ks,tend,r);
Figure 1 shows the resulting response. Repeating the simulation, but with
r = [0,1], namely the set-point change on the second output, gives the
response shown in Figure 2.

[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 1: Exercise 7.3(a): mean-level control. Set-point change on output 1. Solid


lines: input and output 1. Broken lines: input and output 2.


[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 2: Exercise 7.3(a): mean-level control. Set-point change on output 2. Solid


lines: input and output 1. Broken lines: input and output 2.

To see the response to a step disturbance, when both set-points are held
at 0:
r = [0,0];
d = 1;
[y,u] = smpcsim(pmod,imod,Ks,tend,r,[],[],[],[],d,[]);
Figure 3 shows the resulting response.
(b). For ‘perfect’ control the horizons are Hp = Hu = 1. The ‘perfect’ con-
troller is obtained as follows:
Hp = 1; Hu = 1;
ywt = [1 1];
uwt = [0 0];
Ks = smpccon(imod,ywt,uwt,Hu,Hp);
The response to a set-point change on output 1 is shown in Figure 4. It is
much faster than the response of the mean-level controller. However, the
controller does not respond at all to a set-point change on output 2. This
is because there is a delay of more than one step between either input
and the second output. So it is impossible to influence the second output
within the prediction horizon Hp = 1, and hence the controller gain for
set-point errors on the second output is zero. Increasing the prediction
horizon to Hp = 2, and making Hu = Hp (to give the controller a chance
of hitting the set-point perfectly at both coincidence points), gives the
result shown in Figure 5. Note the very oscillatory behaviour of the


[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 3: Exercise 7.3(a): mean-level control. Response to step unmeasured disturbance. Solid lines: input and output 1. Broken lines: input and output 2.

inputs, typical of ‘perfect’ control. The response to a step disturbance is


shown in Figure 6. It can be seen that the disturbance is counteracted
much more quickly than with mean-level control.
In order to make the controller behave more like the mean-level controller,
but without changing the horizons, the main requirement is to reduce
the aggressiveness of the controller. We expect that this can be done by
increasing the penalty on control moves (which is 0 for Figures 4–6).
Using uwt=[10,10] (which corresponds to R = 100I) gives the results
shown in Figures 7–9, which look rather similar to the behaviour obtained
with the mean-level controller in Figures 1–3, at least as regards the
behaviour of the controlled outputs. (The inputs are very different.) Note
that the large penalty on input moves has slowed down the transients
generally, but also has delayed the elimination of steady-state error, so
that the outputs give the appearance of settling to incorrect values —
but those would eventually be corrected. The reader may be able to get
closer to the mean-level behaviour by further experimentation with R
(parameter uwt corresponds to R1/2 ).


[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 4: Exercise 7.3(b): ‘perfect’ control. Response to set-point change on output


1. Solid lines: input and output 1. Broken lines: input and output 2.

[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 5: Exercise 7.3(b): ‘Perfect’ control with Hp = Hu = 2. Response to set-


point change on output 2. Solid lines: input and output 1. Broken lines: input and
output 2.


[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 6: Exercise 7.3(b): ‘Perfect’ control with Hp = Hu = 2. Response to step


disturbance. Solid lines: input and output 1. Broken lines: input and output 2.

[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 7: Exercise 7.3(b): Hp = Hu = 2, R = 100I. Response to set-point change


on output 1. Solid lines: input and output 1. Broken lines: input and output 2.


[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 8: Exercise 7.3(b): Hp = Hu = 2, R = 100I. Response to set-point change


on output 2. Solid lines: input and output 1. Broken lines: input and output 2.

[Plot: Outputs (top) and Manipulated Variables (bottom) against Time.]

Figure 9: Exercise 7.3(b): Hp = Hu = 2, R = 100I. Response to step disturbance.


Solid lines: input and output 1. Broken lines: input and output 2.


(c). The frequency responses of the sensitivity and complementary sensitivity


can be evaluated for each controller by the following Model Predictive
Control Toolbox commands. First the function smpccl is used to com-
pute a state-space model of the complementary sensitivity T . Then the
function mod2frsp is used to compute the frequency responses of both T
and the sensitivity S = I − T . The frequencies at which T and S should
be computed are determined as follows: in (b) the responses obtained
exhibit a time constant of the order of 10 time steps, and Ts = 2; if this
were a simple first-order lag response, the corner frequency would be of
the order of 1/10Ts = 0.05. So we expect the closed-loop bandwidth to be
of this order. Therefore we are interested in seeing the frequency response
from at least a factor of 10 lower than this, say ωmin = 0.005 rad/sec.
The highest frequency which can be seen in a discrete-time system is
ωmax = π/Ts = 1.57 rad/sec. We specify (somewhat arbitrarily) that the
frequency response should be computed at 100 frequencies in this range.
For function mod2frsp the minimum and maximum frequencies must be
specified as powers of 10, so the range we want is covered by defining the
vector freq= [-3,log10(pi/Ts),100];. Finally the singular values of S and
T are computed using svdfrsp, and the largest one (at each frequency)
is plotted.

clmod = smpccl(pmod,imod,Ks); % Ks computed previously


% by ’smpccon’
[T,S] = mod2frsp(clmod,freq,[1,2],[1,2]); % Select inputs
% and outputs 1 and 2.
[svS,freqlist] = svdfrsp(S);
[svT,freqlist] = svdfrsp(T);
maxsvS = svS(:,1); % Largest ones are in first column
maxsvT = svT(:,1);
loglog(freqlist,maxsvS,’-’)
hold on
loglog(freqlist,maxsvT,’--’)
grid, xlabel(’Frequency’), ylabel(’Singular values’)
title(’Max SV of S (solid) and T (broken line)’)

Figure 10 shows the results obtained for the mean-level controller designed
in (a), while Figure 11 shows those for the ‘perfect’ controller designed in
(b). The sensitivity function S for the perfect controller has largest sin-
gular value of about 2 at frequencies above 0.6 rad/sec, which indicates
(usually) unacceptably low robustness to modelling errors. (Typically peak values larger than about √2 are considered unacceptable.)
largest singular value of the complementary sensitivity T is constant at 1
at all frequencies. While this is good at low frequencies, it indicates that
measurement noise is not attenuated at any frequency, and that the stabil-
ity of the feedback loop is vulnerable to modelling errors. Figure 10 shows
much more acceptable behaviour of the mean-level controller. The sensitivity function peaks at about √2, and the complementary sensitivity falls


at higher frequencies. Both show good behaviour at low frequencies; the


‘integral action’ which arises from the default ‘DMC’ disturbance model
shows up as the gain of S falling to zero and the gain of T approaching 1
at low frequencies. The fact that 'perfect' control is more aggressive and has faster reaction time shows up as a higher bandwidth of the
‘perfect’ controller, as measured by the frequency at which the gain of
the sensitivity S becomes 1. It is also seen that for a given frequency
below that bandwidth the gain of S for the ‘perfect’ controller is lower
than that of the mean-level controller at the same frequency.
[Plot: maximum singular values of S (solid) and T (broken line) against Frequency, log-log scale.]

Figure 10: Exercise 7.3(c): mean-level controller.

The same procedure can be followed to investigate the frequency variation


of the gains of S and T for any other design.


[Plot: maximum singular values of S (solid) and T (broken line) against Frequency, log-log scale.]

Figure 11: Exercise 7.3(c): ‘perfect’ controller.


7.4 This question is rather ill-posed, since the precise meaning of ‘best’ has not been
specified. However, the reader can interpret this as referring to the quality
of step responses and disturbance rejection in the time domain, or to the
sensitivity function in the frequency domain, etc. The simplest thing to do
is to judge the quality by the step response and disturbance rejection plots
already produced by the Model Predictive Control Toolbox demo file pmlin.m.
Frequency domain properties of the controller can be investigated as in the
solution to Exercise 7.3(c).
The pmlin demo is discussed at length in the Model Predictive Control Tool-
box User’s Guide, in the section entitled Application: Paper machine headbox
control.

(a). The first part of the demo file pmlin.m takes the user through two designs
in which the output weights are changed by changing the parameter ywt.
The user can edit this file to change the input weights by changing the
parameter uwt, and the prediction and control horizon parameters P and
M.
(b). The demo file pmlin.m also shows the effect of changing from the default
observer (estimator) to two alternatives. See the Model Predictive Control
Toolbox User’s Guide for details of these, and Sections 8.2 and 8.3 in the
next chapter. But it is possible to experiment with the observer design
by editing the file to change the parameters Q, R, taus and signoise,
without complete understanding of what is going on. Note that making
R or signoise very small should make the control more aggressive (the
observer becomes approximately ‘deadbeat’).

7.5 (a). A ‘perfect’ controller for this example is designed and simulated by the fol-
lowing MATLAB commands, using the Model Predictive Control Toolbox:
plant = tf([-10,1],conv([1,1],[1,30])); % continuous-time
plant = c2d(plant,0.2); % discrete-time
plant = ss(plant); % convert to state-space form
a = get(plant,’a’); b = get(plant,’b’); % get matrices
c = get(plant,’c’); d = get(plant,’d’);
pmod = ss2mod(a,b,c,d,0.2); % put plant into MOD format
% Compute ’perfect’ controller:
Ksperf = smpccon(pmod,1,0,1,1);
% Simulate step response:
[y,u] = smpcsim(pmod,pmod,Ksperf,150,1);
plotall(y,u,0.2) % Plot results.
The simulation shows that the closed loop is unstable. As shown in
the text, the ‘perfect’ controller attempts to invert the plant transfer


function. The (continuous-time) plant in this case has a zero with positive
value, at +10, which becomes a zero located outside the unit circle in the
discrete-time equivalent model. The ‘perfect’ controller has a pole at the
same location as this zero. This results in an unstable ‘hidden mode’ in
the controller-plant combination (in this case an unobservable unstable
mode), which is the source of the instability of the closed loop. The
simulated step response shows the plant output tracking the set-point
exactly, but the plant input going off to −∞.
(b). As discussed in Chapter 6, closed-loop stability is ensured if an infinite
prediction horizon is used (assuming constraints are not active). The
infinite-horizon controller is approached more and more closely as the
prediction horizon is increased. Thus increasing Hp will eventually result
in closed-loop stability.
By trial-and-error I found that closed-loop stability is obtained for Hp ≥
52. Using the Model Predictive Control Toolbox functions smpccl and
smpcpole the closed-loop poles with Hp = 52 were 0.9995, 0.8187, 0.0025,
0.0017 and 0. From the set-point response (or from the ‘dominant’ pole at
0.9995) the time constant is approximately (treating it as if it were a first-
order exponential) 1/0.0005=2000 steps, which corresponds to 2000 ×
0.2 = 400 sec. Notice that this is very different from 1 sec, which is
the time constant expected from the ‘mean-level’ controller, as Hp → ∞.
Values as high as Hp ≈ 1000 are needed to get close to this speed of
response for this plant (with Hu = 1 and R = 0).
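
A sketch of the pole computation referred to above (using pmod from part (a); the call pattern follows the Toolbox functions used elsewhere in these solutions):

Ks    = smpccon(pmod, 1, 0, 1, 52);   % ywt = 1, uwt = 0, Hu = 1, Hp = 52
clmod = smpccl(pmod, pmod, Ks);       % closed-loop system in MOD format
smpcpole(clmod)                       % closed-loop poles, all inside the unit circle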
(c). Error in question: To get stability Hp must be greater than 30. Try
Hp = 60 for instance.
The set-point response without constraints is obtained as in part (a)
above, making the necessary changes to the parameters. (Note that the
correct setting is uwt = sqrt(0.1), since the parameter uwt corresponds
to R1/2.)
Model Predictive Control Toolbox commands:
ulim = [-inf,inf,0.1];
[y,u] = scmpc(pmod,pmod,1,sqrt(0.1),2,60,200,1,ulim);
plotall(y,u)
Note that it is now necessary to use the function scmpc in place of the
pair of functions smpccon and smpcsim.

7.6 (a). See the function swimpool available on the web site. The only modification
needed to run the simulation with the default observer is in the call to
function scmpc: replace the argument Kest by the empty matrix [].
(b). The only modification required here is in the model of the disturbance. A
ramp air temperature disturbance has the continuous-time model θ̈a = 0,
which in state-space form becomes
    
(d/dt) [θa; θ̇a] = [0 1; 0 0] [θa; θ̇a]


Thus the only change required in function swimpool is to replace the line
aairc = [0, 1; -(2*pi/24)^2, 0];

by
aairc = [0, 1; 0, 0];

(c). The sensitivity function can be computed using the Model Predictive Con-
trol Toolbox in the same way as in Exercise 7.3, once the unconstrained
controller has been obtained by using the Model Predictive Control Tool-
box function smpccon. Figure 12 shows the gains of the sensitivity func-
tions for the two disturbance models. (The easiest way to generate this
figure is to edit the function swimpool.m: add commands similar to those
used in Exercise 7.3 at the end of swimpool.)
Note: I could not get the Model Predictive Control Toolbox function
smpccl to work correctly on this example for some reason, so I used
Control System Toolbox functions instead. Code extracts are shown at
the end of this solution.
The most prominent feature is that when the sinusoidal disturbance
model is used, the sensitivity falls sharply to zero at the frequency at
which the disturbance occurs (2π/24 = 0.2618 rad/sec). Note that when
the ramp disturbance model is assumed, the sensitivity at this frequency
is about −75 dB, which confirms that the ramp disturbance model is also
very effective at attenuating the effect of the sinusoidal disturbance.
When the disturbance is assumed to be a sinusoid, a step disturbance is
still assumed to be present, by default. The disturbance is thus assumed
to have a pole at +1, in addition to the two poles at exp(±2πi/24). The
gain of the disturbance transfer function thus increases with frequency
as ω −1 when ω → 0, and the gain of the sensitivity falls linearly with
frequency to counteract this (slope +1 on Bode magnitude plot). When
the disturbance is assumed to be a ramp, this corresponds to two poles at
+1; the gain of the disturbance transfer function increases as ω −2 at low
frequencies. The sensitivity falls cubically with frequency (slope +3 on
Bode magnitude plot) because the disturbance model now assumes that
there are three poles at +1, two for the ramp and one for the implicit
step disturbance.
Here are extracts of the code used to generate Figure 12:
% Define plant:

% Continuous-time:
aplantc = -1/Tplant; bplantc = [kplant, 1]/Tplant;
cplant = 1; dplant = [0, 0];
plantc = ss(aplantc,bplantc,cplant,dplant); % LTI object


[Plot: Bode magnitude diagram, Magnitude (dB) against Frequency (rad/sec).]

Figure 12: Exercise 7.6(c): sensitivity functions with the sinusoidal (solid line) and
the ramp (broken line) disturbance models.

% Discrete-time equivalent:
plantd = c2d(plantc,Ts);
[aplantd,bplantd] = ssdata(plantd); % A and B matrices.
% (C and D stay unchanged)

plantinfo = [Ts,1,1,0,1,1,0]; % information for MOD format


% (1 state, SISO, 1 unmeasured disturbance)
plant = ss2mod(aplantd,bplantd,cplant,dplant,plantinfo);
% plant in MOD format

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Define controller’s internal model:

% Disturbance dynamics (diurnal variation):


% Continuous-time:
if disturbance == ’sine’,
aairc = [0, 1; -(2*pi/24)^2, 0]; % Air temp A matrix
elseif disturbance == ’ramp’,
aairc = [0, 1; 0, 0];
end
% Complete continuous-time model, with state vector =
% [water temp, air temp, air temp derivative]’:
amodelc = [-1/Tmodel, 1/Tmodel, 0;


zeros(2,1), aairc ];
bmodelc = [kmodel/Tmodel; 0; 0];
cmodel = [1, 0, 0];
dmodel = 0;
modelc = ss(amodelc,bmodelc,cmodel,dmodel); % LTI object

% Discrete-time equivalent:
modeld = c2d(modelc,Ts);
[amodeld,bmodeld] = ssdata(modeld); % A and B matrices.
% (C and D stay unchanged)

modinfo = [Ts,3,1,0,0,1,0];
model = ss2mod(amodeld,bmodeld,cmodel,dmodel,modinfo);
% model in MOD format

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Compute observer gain which gives


% asymptotically stable observer:

% First need slightly different model, with


% artificial process noise:
% Use same ’A’ matrix, different ’B’ matrix.
bobs = [bmodeld, [0;0;1]];
obsinfo = [Ts,3,1,0,1,1,0]; % minfo vector for MOD format
% Model for observer:
modobs = ss2mod(amodeld,bobs,cmodel,[0,0],obsinfo);

% Now compute observer gain:


Kest = smpcest(modobs,W,V);

% Compute and display observer poles:


% Augment model:
[auga,augb,augc]=mpcaugss(amodeld,bobs,cmodel);
obspoles = eig(auga*(eye(4)-Kest*augc)); % pole computation
disp(’ Observer poles are:’), disp(obspoles)
if any(abs(obspoles)>=1),
disp(’Warning: Observer not asymptotically stable’)
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Compute MPC controller Ks:


Ks = smpccon(model,Q,R,Hu,Hp);
% Compute closed loop system and controller in MOD format:
[clmod,cmod] = smpccl(plant,model,Ks,Kest);


% Compute T and S frequency responses:


% Next 2 lines don’t work correctly - don’t know why
%[T,S] = mod2frsp(clmod,[-3,log10(pi/Ts),50],1,1);
%[svS,omega] = svdfrsp(S);
% So use Control Toolbox instead:
plantlti = ss(aplantd,bplantd,cplant,dplant,Ts);
% Make controller as LTI object:
[ka,kb,kc,kd] = mod2ss(cmod);
K = ss(ka,kb,kc,kd,Ts);
% Make T and S systems:
GK = plantlti(1,1)*K(1,1);
T = feedback(GK,1); % Complementary sensitivity
S = 1-T; % Sensitivity (display using ’bodemag’)

7.7 (a). If Λ = 0 then (7.36) becomes


   
Iz s(k + 1)
 Iz 2   s(k + 2) 
   
T (k) =  ..  s(k) =  ..  (8)
 .   . 
Iz Hp s(k + Hp )

(b). Suppose that the reference trajectory is specified as

r(k + j|k) = s(k + j) − (e^{αj}/2)[s(k + j) − z(k)] − (e^{βj}/2)[s(k + j) − z(k)]
Note that a simplification here, compared with the development in Section
7.5 of the book, is that we assume the same constants α and β apply to
each component of the reference trajectory (if there is more than one
output).
Assuming that future set-point changes are not known, the reference tra-
jectory can be rewritten in the same form as (7.32):
 
eαj + eβj eαj + eβj
r(k + j|k) = 1 − s(k) + z(k) (9)
2 2

and hence (7.33) becomes


   
T(k) = [ (1 − (e^{αHw} + e^{βHw})/2) I ;
         (1 − (e^{α(Hw+1)} + e^{β(Hw+1)})/2) I ;
         . . . ;
         (1 − (e^{αHp} + e^{βHp})/2) I ] s(k)
     + [ ((e^{αHw} + e^{βHw})/2) I ;
         ((e^{α(Hw+1)} + e^{β(Hw+1)})/2) I ;
         . . . ;
         ((e^{αHp} + e^{βHp})/2) I ] z(k)    (10)


If the controller is aware of future set-point changes then, taking Hw = 1


for simplicity, (7.35) becomes
   
T(k) = [ (1 − (e^α + e^β)/2) s(k + 1) ;
         (1 − (e^α + e^β)/2) s(k + 2) ;
         . . . ;
         (1 − (e^α + e^β)/2) s(k + Hp) ]
     + [ ((e^α + e^β)/2) r(k|k) ;
         ((e^α + e^β)/2) r(k + 1|k) ;
         . . . ;
         ((e^α + e^β)/2) r(k + Hp − 1|k) ]    (11)

and hence (7.36) becomes


   
T(k) = [ (1 − (e^α + e^β)/2) Iz ;
         (1 − (e^α + e^β)/2) Iz² ;
         . . . ;
         (1 − (e^α + e^β)/2) Iz^{Hp} ] s(k)
     + [ ((e^α + e^β)/2) I ; 0 ; . . . ; 0 ] z(k)
     + [ 0                  0                  · · ·  0                  0 ;
         ((e^α + e^β)/2) I  0                  · · ·  0                  0 ;
         0                  ((e^α + e^β)/2) I  · · ·  0                  0 ;
         . . . ;
         0                  0                  · · ·  ((e^α + e^β)/2) I  0 ] T(k)    (12)

The equations corresponding to (7.37) and (7.38) now follow by making


the obvious changes.


Chapter 8

Robust predictive control

8.1 The controller in feedback around the actual plant is shown in Figure 1. This can
be redrawn as in Figure 2. The transfer function matrix of the block enclosed
within the broken line is P0 (z)K(z)[I + P0 (z)K(z)]−1 , which is the comple-
mentary sensitivity T(z). Thus the loop gain is T(z)∆, and hence a sufficient condition for the feedback loop to remain stable is σ̄[T(e^{jωTs})] ||∆||_∞ < 1.
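
This condition can be checked numerically in the same way as in Exercise 7.3(c); a sketch, assuming clmod and freq have been set up as there, and that delmax is a given bound on ||∆||_∞:

[T,S]   = mod2frsp(clmod, freq, [1,2], [1,2]);  % frequency response of T (and S)
[svT,w] = svdfrsp(T);
robust  = max(svT(:,1))*delmax < 1;             % sufficient condition for stability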

[Block diagram: feedback loop of K(z) and P0(z) with an output multiplicative uncertainty block I + ∆.]

Figure 1: Exercise 8.1: plant with output multiplicative uncertainty, and feedback
controller.

[Block diagram: the same loop redrawn with ∆ extracted, so that ∆ sees the complementary sensitivity T(z) = P0(z)K(z)[I + P0(z)K(z)]^{−1}.]

Figure 2: Exercise 8.1: previous figure redrawn.


8.2 The observer’s state-transition matrix is


     
[A 0 0; 0 Aw 0; CA Aw I] − [0; L∆w; Lη] [0 0 I] = [A 0 0; 0 Aw −L∆w; CA Aw I − Lη]    (1)

This has a block-triangular structure, so its eigenvalues are the eigenvalues of


A together with those of
   
[Aw −L∆w; Aw I − Lη] = [diag{αi} −diag{φi}; diag{αi} diag{1 − ψi}]    (2)

8.3 The observer gain is determined by (8.28). But in the model (8.41)–(8.42) C is
replaced by [0 0 I], so with V = 0 we have
    
−1  
L∆x 0 0 P13
L∞ = L∆w = P∞ 0  [0
   0 I]P∞  0  =  P23  P33
−1
(3)
Lη I I P33

and hence Lη = I.

8.4 If F (v) is symmetric and affine in v then we can write

F(v) = F0 + Σ_{i=1}^{n} Fi(vi)    (4)

where each Fi is a symmetric matrix which depends only on vi , the ith element
of v (since no products such as vi vj , or more complicated functions of v, occur
in F (v)). Furthermore, by putting all terms independent of v into F0 , we have
Fi (vi ) = vi Fi , where each Fi is a symmetric constant matrix.

8.5 (a). Applying (8.60) and (8.61) to the first matrix inequality in (8.65), we see
that this inequality is equivalent to γ − r − q T x − xT Qx ≥ 0 (since I > 0).
The second matrix inequality in (8.65) is clearly equivalent to Ax − b ≥ 0.
So the problem (8.65) is equivalent to the problem (8.64), which is clearly
equivalent to the original QP problem (8.63).
(b). The variables over which the optimization in (8.65) is to be performed are
γ and x. The cost to be minimized is γ = 1γ + 0x, which is in the form
of the cost function in (8.59). The two matrix inequalities which appear
in (8.65) involve symmetric matrices which are affine in the variables γ


and x. So, by Exercise 8.4, they are LMIs. But we can write these two
LMIs as the single LMI:
 
[ I             Q^{1/2}x         0            · · ·   0            ;
  x^T Q^{1/2}   γ − r − q^T x    0            · · ·   0            ;
  0             0                (Ax − b)_1   · · ·   0            ;
  . . . ;
  0             0                0            · · ·   (Ax − b)_m ]  ≥ 0    (5)
Thus problem (8.65) is indeed in the form of problem (8.59).
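
The Schur-complement step in (a) can be sanity-checked numerically for a small random example (a sketch; all data below are made up):

n = 3; x = randn(n,1); Q = eye(n); q = randn(n,1); r = 0;
gamma = r + q'*x + x'*Q*x + 0.1;                 % any value above the quadratic cost
M = [eye(n), sqrtm(Q)*x; x'*sqrtm(Q), gamma - r - q'*x];
min(eig(M)) >= 0                                 % true iff gamma - r - q'x - x'Qx >= 0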

8.6 Clearly the cost function is already in the required form. The vector inequality
can be written as a matrix inequality, as in (8.65). But this is affine in x and
symmetric, so it is an LMI, by Exercise 8.4.

8.7 From the LMI (8.85) we get the following inequality using (8.62):
zi2 − ρ2 Ci [A(k + j)Q + B(k + j)Y ]Q−1 [A(k + j)Q + B(k + j)Y ]T CiT ≥ 0 (6)
which is the same as
zi2 − ρ2 Ci [A(k + j) + B(k + j)Y Q−1 ]Q[A(k + j) + B(k + j)Y Q−1 ]T CiT ≥ 0 (7)
from which we get, since Kk = Y Q−1 ,
zi2 − ρ2 ||Ci [A(k + j) + B(k + j)Kk ]Q1/2 ||22 ≥ 0 (8)
which is the same as (8.84).

8.8 Recall that feasibility is obtained at time k if the LMIs (8.67), (8.72), (8.79) and
(8.86) hold. As stated in the question, only the first of these depends explicitly
on x(k|k). Let Qk denote the value of Q obtained as the solution obtained by
minimising γ subject to these LMI constraints at time k, so that
 
[Qk, x(k|k); x(k|k)^T, 1] ≥ 0    (9)
and hence x(k|k) ∈ EV (x(k|k)) . But, as argued in the text, this is an invariant
set for the predicted state x̂(k + j|k), namely x̂(k + j|k) ∈ EV (x(k|k)) for all j, if
û(k + j|k) = Kk x̂(k + j|k). But if no disturbances occur then the actual state
trajectory will also remain in this set, so that x(k + 1|k + 1) ∈ EV (x(k|k)) , or
 
[Qk, x(k + 1|k + 1); x(k + 1|k + 1)^T, 1] ≥ 0    (10)
This means that there will exist at least one Q for which (8.67) will hold at
time k + 1, so the optimization problem to be solved at time k + 1 will be
feasible.


8.9 Let O∞ be the maximal output admissible set for x(k + 1) = F x(k), with the
constraint y(k) = Hx(k) ∈ Y . Suppose that x(0) ∈ O∞ , then y(k) = Hx(k) =
HF k x(0) ∈ Y for all k > 0. Now suppose that x(k) 6∈ O∞ for some k. Then,
by the definition of O∞ (and time-invariance), y(k + j) 6∈ Y for some j > 0.
But this contradicts the earlier statement that y(k) ∈ Y for all k > 0. Hence
x(k) ∈ O∞ for all k > 0, and so O∞ is positively invariant.


Chapter 9

Two case studies

Solutions are not provided for the exercises in this chapter. As stated in the book,
all of these exercises are open-ended, without unique correct answers. They can all
be solved by using and/or editing MATLAB and Simulink software provided on the
book’s web site.

Note on Exercise 9.2: This is worded rather carelessly. It is the singular values
of the sensitivity and of the complementary sensitivity functions that should be
investigated, not the singular values of the controller itself.

