Optimal Control and Decision Making: Eexam
Note:
• During the attendance check a sticker containing a unique code will be put on this exam.
Place student sticker here
• This code contains a unique number that associates this exam with your registration
number.
• This number is printed both next to the code and to the signature field in the
attendance check list.
Working instructions
• This exam consists of 18 pages with a total of 4 problems.
Please make sure now that you received a complete copy of the exam.
• The total number of credits achievable in this exam is 91.
• Detaching pages from the exam is prohibited.
• Allowed resources:
• Turn off all electronic devices that are not used for the Zoom session or reading questions, put them
into your bag and close the bag.
• Check your own submission after uploading! It is your responsibility to submit your full exam with
reasonable quality and according to our guidelines.
I am aware that I must immediately report any technical difficulties that may occur during the test. I
am also aware that in case of sickness, I must report this immediately in writing and provide credible
evidence. The corresponding notification must be submitted to the Student Services Office and addressed
to the chairperson of the examination board or the examiner.
Problem 1 Short Questions (12 credits)
Hint: Problems that can be solved without solving previous subproblems first are marked by *.
a)* If you have a discontinuous candidate Lyapunov function V(x), how can you check that it is a Lyapunov function that proves global stability for the system x+ = f(x) with f discontinuous? Give precise conditions.
b)* In an MPC control implementation, how will you see that the MPC is not recursively feasible? Describe two approaches that may make your implementation successful in case of loss of recursive feasibility.
c)* Give an example cost function, an example discrete-time dynamical system and one example inequality constraint such that in every MPC iteration, a QP solver can find the constrained minimum (x ∈ R^n and u ∈ R^m).
d)* In tube-based MPC, constraints are tightened to make sure that no constraint violation will occur even in the presence of disturbances. What is the problem if the disturbances are large? What can you do to avoid this problem?
Problem 2 Model Predictive Control (29 credits)
Hint: Problems that can be solved without solving previous subproblems first are marked by ∗.
Consider the linear discrete-time system

x^+ = Ax + Bu,

where x ∈ R^2 is the state vector and u ∈ R is the input. A and B are the system matrices

A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.5 \\ 1 \end{bmatrix}.
a)* For prediction horizon N = 2, calculate the predictions x(1) and x(2) with known initial state x(0) = [x_1(0), x_2(0)]^⊤ and control inputs u(0) and u(1).
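As a sanity check, the prediction can also be evaluated numerically. A minimal sketch, assuming NumPy; the numerical values chosen below for x(0), u(0), u(1) are placeholders, not exam data:

```python
import numpy as np

# System matrices from the problem statement
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])

# Placeholder initial state and inputs
x0 = np.array([[1.0], [-2.0]])
u0, u1 = 0.3, -0.1

# Two-step prediction: x(k+1) = A x(k) + B u(k)
x1 = A @ x0 + B * u0
x2 = A @ x1 + B * u1
print(x1.ravel(), x2.ravel())
```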
b)* Assuming the feedback control law u = Kx = [2, −0.5]x, give the corresponding closed-loop system in the form x^+ = Ã x. Is the closed-loop system stable? Why?
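One way to verify the stability claim numerically is to inspect the eigenvalues of the closed-loop matrix A + BK; a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
K = np.array([[2.0, -0.5]])   # feedback law u = K x

# Closed-loop matrix for x+ = (A + B K) x
A_cl = A + B @ K
eigvals = np.linalg.eigvals(A_cl)
print(A_cl)
print(eigvals, "all inside unit circle:", np.all(np.abs(eigvals) < 1))
```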
c) With the feedback control law from 2b), calculate x(1), x(2) and x(3) with known initial state x(0) = [x_1(0), x_2(0)]^⊤ = [1, −2]^⊤. Sketch the trajectory of x_1(k), k = 0, 1, 2, 3.
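A sketch, assuming NumPy, of how the three closed-loop steps and the resulting x_1(k) values could be computed:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[2.0, -0.5]])
A_cl = A + B @ K                      # closed-loop dynamics x+ = (A + B K) x

x = np.array([[1.0], [-2.0]])         # x(0) from the problem statement
traj = [x]
for _ in range(3):                    # compute x(1), x(2), x(3)
    x = A_cl @ x
    traj.append(x)

x1_values = [xk[0, 0] for xk in traj]
print(x1_values)                      # x1(k) for k = 0, 1, 2, 3
```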
d)* For proving the stability of MPC, choose the necessary 3 parts from the following items. (Notation: V_MPC is the Lyapunov function in MPC, defined as the optimal cost-to-go V_MPC = J_N^*. X_N represents the set of feasible initial values.)
Correct crosses give +1 credit, incorrect crosses give -1 credit. Missing crosses do not count.
The discontinuous Lyapunov function V_MPC is bounded by something continuous.
X_N is positive invariant.
X_N = {0}.
The cost function is given as
J = \sum_{k=0}^{N-1} \left( x(k)^\top Q\, x(k) + u(k)^\top R\, u(k) \right) + x(N)^\top S\, x(N)
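A sketch, assuming NumPy, of how this finite-horizon cost could be evaluated for given state and input sequences; the weights Q, R, S and the sample data below are placeholders:

```python
import numpy as np

def mpc_cost(xs, us, Q, R, S):
    """Evaluate J = sum_{k=0}^{N-1} (x_k' Q x_k + u_k' R u_k) + x_N' S x_N.

    xs: list of N+1 state vectors, us: list of N input vectors."""
    J = sum(x @ Q @ x + u @ R @ u for x, u in zip(xs[:-1], us))
    return J + xs[-1] @ S @ xs[-1]

# Placeholder data for a horizon of N = 2
Q, R, S = np.eye(2), np.eye(1), np.eye(2)
xs = [np.array([1.0, -2.0]), np.array([0.5, -1.0]), np.array([0.2, 0.0])]
us = [np.array([0.3]), np.array([-0.1])]
print(mpc_cost(xs, us, Q, R, S))
```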
e)* For prediction horizon N = 1, derive the cost function J and the derivative ∂J/∂u(0).
Hint: The prediction for x(1) from question 2a) can be used.
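A sketch, assuming SymPy, of how the N = 1 cost and its derivative with respect to u(0) could be cross-checked symbolically; the identity weights Q, R, S are placeholder assumptions, not the exam's values:

```python
import sympy as sp

# Symbols for the initial state and the single input
x1_0, x2_0, u0 = sp.symbols('x1_0 x2_0 u0')

A = sp.Matrix([[1, 1], [0, 1]])
B = sp.Matrix([[sp.Rational(1, 2)], [1]])
x0 = sp.Matrix([x1_0, x2_0])

# Placeholder weights (assumed identity here)
Q, R, S = sp.eye(2), sp.eye(1), sp.eye(2)

# One-step prediction and N = 1 cost
x1 = A * x0 + B * u0
J = (x0.T * Q * x0 + sp.Matrix([[u0]]).T * R * sp.Matrix([[u0]]) + x1.T * S * x1)[0, 0]

dJ_du0 = sp.diff(J, u0)
print(sp.expand(J))
print(sp.expand(dJ_du0))
```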
f) With the cost function J and the derivative ∂J/∂u(0) obtained in question 2e), derive the optimal control u(0)^*, and then give the closed-loop system x^+ = A_cl x.
g)* MPC is widely used in autonomous driving. Through designing cost functions, a vehicle can track reference trajectories x_ref and make the control input u as small as possible. Which of the following cost functions can meet the two requirements? (Notation: x_ref represents the reference states; u_ref represents the reference control inputs.)

J = \sum_{k=0}^{N-1} \left( x_{ref}(k)^\top Q\, x_{ref}(k) + u(k)^\top R\, u(k) \right) + x_{ref}(N)^\top S\, x_{ref}(N)

J = \sum_{k=0}^{N-1} \left( x(k)^\top Q\, x(k) + u(k)^\top R\, u(k) \right) + x(N)^\top S\, x(N)

J = \sum_{k=0}^{N-1} \left( x(k)^\top Q\, x(k) + (u(k) - u_{ref}(k))^\top R\, (u(k) - u_{ref}(k)) \right) + x(N)^\top S\, x(N)

J = \sum_{k=0}^{N-1} \left( (x(k) - x_{ref}(k))^\top Q\, (x(k) - x_{ref}(k)) + u(k)^\top R\, u(k) \right) + (x(N) - x_{ref}(N))^\top S\, (x(N) - x_{ref}(N))

J = \sum_{k=0}^{N-1} \left( (x(k) - x_{ref}(k))^\top Q\, (x(k) - x_{ref}(k)) + (u(k) - u_{ref}(k))^\top R\, (u(k) - u_{ref}(k)) \right) + (x(N) - x_{ref}(N))^\top S\, (x(N) - x_{ref}(N))
Problem 3 LQ control, Optimal Control (27 credits)
Hint: Problems that can be solved without solving previous subproblems first are marked by *.
Consider the system

x^+ = -2x + u,    (3.1)

with state x ∈ R and input u ∈ R, and the following cost function over the finite horizon N:
J = \frac{1}{2} x(N)^2 + \sum_{k=0}^{N-1} \frac{1}{2} u(k)^2.    (3.2)
b) Considering x(0) as a parameter, compute the cost over the full horizon, using two different methods.
Assume now that the input is constrained, u ∈ U = {u | −a ≤ u ≤ a}. Moreover, considering the terminal set X_f = {0}, let us define the set X_N of feasible initial conditions, i.e. if x(0) ∈ X_N there exists a feasible input sequence u(0), . . . , u(N − 1) ∈ U such that x(N) ∈ X_f.
d)* For N = 2, applying the system dynamics (3.1) twice, the cost function (3.2) can be written as

J(x(0), u(0), u(1)) = \frac{5}{2} u(0)^2 + u(1)^2 - 2 u(0) u(1) - 8 x(0) u(0) + 4 x(0) u(1) + 8 x(0)^2.    (3.3)

Find the optimal input sequence u(0)^*, u(1)^* minimizing the cost function (3.3), subject to the simplified input constraint u(0) ≤ a, with N = 2, x(0) = 3, a = 3.
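A sketch, assuming SciPy, of how the constrained minimizer of (3.3) could be cross-checked numerically for x(0) = 3 and a = 3:

```python
import numpy as np
from scipy.optimize import minimize

x0_val, a = 3.0, 3.0

def cost(u):
    # Cost (3.3) as a function of u = [u(0), u(1)] for fixed x(0)
    u0, u1 = u
    return (2.5 * u0**2 + u1**2 - 2 * u0 * u1
            - 8 * x0_val * u0 + 4 * x0_val * u1 + 8 * x0_val**2)

# Simplified input constraint u(0) <= a, written as a - u(0) >= 0
cons = {'type': 'ineq', 'fun': lambda u: a - u[0]}

res = minimize(cost, x0=np.zeros(2), constraints=cons)
print(res.x, res.fun)
```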
Consider now a changed cost function for the infinite horizon
J = \sum_{k=0}^{+\infty} \frac{1}{2} \left( q\, x(k)^2 + u(k)^2 \right),    (3.4)
with q ∈ R, q ≥ 0.
e)* Would an infinite-horizon LQ controller ensure the stability of the closed-loop system? Consider all possible values of q.
Hint: No computation is needed.
f)* Describe the expected behavior of the closed-loop state trajectory for q → 0 and for q → ∞.
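For intuition, the infinite-horizon LQ gain and the resulting closed-loop pole of the scalar system (3.1) can be computed for a few values of q; a sketch, assuming SciPy, meant only as an illustration:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[-2.0]])
B = np.array([[1.0]])
R = np.array([[1.0]])

for q in [1e-6, 1.0, 1e6]:
    Q = np.array([[q]])
    P = solve_discrete_are(A, B, Q, R)                    # discrete-time ARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # LQ gain, u = -K x
    pole = (A - B @ K).item()                             # closed-loop pole
    print(f"q = {q:g}: K = {K.item():.4f}, closed-loop pole = {pole:.4f}")
```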
Problem 4 Dynamic Programming (23 credits)
Hint: Problems that can be solved without solving previous subproblems first are marked by *.
[Figure: time steps 0, 1, . . . , N − 1, N.]
Let the target be at the origin, x_orig = 0. We want to determine the commanded speed u(k) for each step k = 0, . . . , N − 1 by minimizing the following cost function:
J_{0→N} = x(N)^2 + \sum_{k=0}^{N-1} \left( u(k) - u(k-1) \right)^2,    (4.1)

where x(N)^2 is the terminal cost and (u(k) − u(k−1))^2 is the stage cost.
a)* Give an interpretation of the stage cost and the terminal cost, referring to their physical meaning.
Let the robot be initially at position x̃, with a speed equal to v. We can then formulate the following optimal
control problem
min_{u(k), k=0,...,N−1}  J_{0→N}    (4.2a)
s.t.  x(k+1) = x(k) + T u(k),   k = 0, . . . , N − 1,
      x(0) = x̃,    (4.2b)
      u(−1) = v.
b)* Using Dynamic Programming, determine the first optimal input u(0)^* and the overall cost J_{0→2}^*, for T = 1, N = 2.
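The two-step problem can be cross-checked by direct minimization of J_{0→2} = x(2)^2 + (u(0) − v)^2 + (u(1) − u(0))^2 over u(0) and u(1), using x(2) = x̃ + u(0) + u(1) for T = 1; a sketch, assuming SymPy:

```python
import sympy as sp

x_tilde, v, u0, u1 = sp.symbols('x_tilde v u0 u1')

# With T = 1, N = 2: x(2) = x(0) + u(0) + u(1), and u(-1) = v
x2 = x_tilde + u0 + u1
J = x2**2 + (u0 - v)**2 + (u1 - u0)**2

# Stationarity conditions of the unconstrained problem
sol = sp.solve([sp.diff(J, u0), sp.diff(J, u1)], [u0, u1], dict=True)[0]
u0_opt, u1_opt = sol[u0], sol[u1]
J_opt = sp.simplify(J.subs(sol))

print(sp.simplify(u0_opt))   # first optimal input u(0)*
print(sp.simplify(u1_opt))
print(J_opt)                 # overall cost
```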
c) Compute the input sequence and the trajectory of the robot for the full horizon.
Observe that, using the solution from 4b), the robot reaches the target x_orig = 0 only for specific values of the parameters x̃ and v. To ensure that the vehicle reaches the origin, a terminal constraint can be used. We therefore consider the following optimal control problem:

min_{u(k), k=0,...,N−1}  J_{0→N}
s.t.  x(k+1) = x(k) + T u(k),   k = 0, . . . , N − 1,
      x(0) = x̃,   u(−1) = v,
      x(N) = 0,    (4.3)

where, again, J_{0→N} is defined as in (4.1). Observe that, due to the terminal constraint, the terminal cost is identically zero and can be neglected.
d)* Let us again consider T = 1, N = 2. Can you guess for which values of the parameters x̃ and v the two problems (4.2) and (4.3) yield the same solution?

v = 0, ∀x̃.
x̃ = 1, v = 1.
v = −x̃/2.
x̃ = 0, ∀v.
g) Using the result from 4f), compute the optimal cost J_{0→N}^* in (4.1) associated with the optimal solution to problem (4.3).
h) Compare the values of the cost for the two solutions to problems (4.2) and (4.3), respectively, and give an interpretation of the comparison. For which values of the parameters x̃ and v are the two costs equal? Why?
Additional space for solutions. Clearly mark the (sub)problem your answers are related to and strike out invalid solutions.