AA278A Lecture Notes 8. Optimal Control and Dynamic Games: Claire J. Tomlin May 11, 2005
These notes represent an introduction to the theory of optimal control and dynamic games;
they were written by S. S. Sastry [1].
There exist two main approaches to optimal control and dynamic games:
1. For the calculus of variations, the optimal curve should be such that neighboring curves
do not lead to smaller costs. Thus the ‘derivative’ of the cost function about the optimal
curve should be zero: one takes small variations about the candidate optimal solution
and attempts to make the change in the cost zero.
2. For dynamic programming, the optimal curve remains optimal at intermediate points
in time.
In these notes, both approaches are discussed for optimal control; the methods are then
extended to dynamic games.
known as Carthage in a circle of the appropriate radius¹. The calculus of variations is really
the ancient precursor to optimal control. Isoperimetric problems of the kind that gave
Dido her kingdom were treated in detail by Tonelli and later by Euler. Both Euler and
Lagrange laid the foundations of mechanics in a variational setting, culminating in the Euler-
Lagrange equations. Newton used variational methods to determine the shape of a body that
minimizes drag, and Bernoulli formulated his brachistochrone problem in the seventeenth
century, which attracted the attention of Newton and L’Hôpital. This intellectual heritage
was revived and generalized by Bellman [3] in the context of dynamic programming and by
Pontryagin and his school in the so-called Pontryagin principle for optimal control [6].
Consider a nonlinear, possibly time-varying dynamical system described by

ẋ = f (x, u, t)        (1)

with state x(t) ∈ Rⁿ and control input u(t) ∈ R^{ni}. Consider the problem of minimizing the
performance index

J = φ(x(tf ), tf ) + ∫_{t0}^{tf} L(x(t), u(t), t) dt        (2)
where t0 is the initial time, tf the final time (free), L(x, u, t) is the running cost, and
φ(x(tf ), tf ) is the cost at the terminal time. The initial time t0 is assumed to be fixed
and tf variable. Problems involving a cost only on the final and initial state are referred
to as Mayer problems, those involving only the integral or running cost are called Lagrange
problems and costs of the form of equation (2) are referred to as Bolza problems. We will
also have a constraint on the final state given by
ψ(x(tf ), tf ) = 0 (3)
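To make the notation in (1)–(3) concrete, here is a short Python sketch (an illustrative example, not part of the original notes) encoding a hypothetical Bolza problem for a double integrator: the dynamics f, running cost L, terminal cost φ, and a terminal constraint ψ that pins the final position to zero.

import numpy as np

# Hypothetical Bolza problem for a double integrator x = (position, velocity).
# ASSUMPTION: this concrete choice of f, L, phi, psi only illustrates the
# notation of equations (1)-(3); it does not come from the notes.

def f(x, u, t):
    """Dynamics xdot = f(x, u, t): position integrates velocity, velocity integrates u."""
    return np.array([x[1], u[0]])

def L(x, u, t):
    """Running cost: quadratic penalty on the state and on control effort."""
    return 0.5 * (x @ x + u @ u)

def phi(x_tf, tf):
    """Terminal cost on the final state."""
    return 0.5 * (x_tf @ x_tf)

def psi(x_tf, tf):
    """Terminal constraint psi(x(tf), tf) = 0: final position must be zero."""
    return np.array([x_tf[0]])

def J(xs, us, ts):
    """Bolza cost (2): terminal cost plus the integral of the running cost (trapezoidal rule)."""
    running = np.array([L(x, u, t) for x, u, t in zip(xs, us, ts)])
    return phi(xs[-1], ts[-1]) + np.sum(0.5 * (running[1:] + running[:-1]) * np.diff(ts))

# Example usage on a crude constant-control trajectory (purely illustrative):
ts = np.linspace(0.0, 1.0, 101)
us = [np.array([-1.0])] * len(ts)
xs = [np.array([1.0, 0.0])]
for k in range(len(ts) - 1):
    xs.append(xs[-1] + (ts[k + 1] - ts[k]) * f(xs[-1], us[k], ts[k]))   # Euler integration
print("cost J =", J(xs, us, ts), "  terminal constraint psi =", psi(xs[-1], ts[-1]))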
The variation of (4) is given by assuming independent variations in δu(·), δx(·), δp(·), δλ, and
δtf :

δJ̃ = (D1 φ + D1 ψᵀλ)δx|tf + (D2 φ + D2 ψᵀλ)δt|tf + ψᵀδλ + (H − pᵀẋ)δt|tf
      + ∫_{t0}^{tf} [ D1 H δx + D3 H δu − pᵀδẋ + (D2 Hᵀ − ẋ)ᵀδp ] dt        (6)

Here J̃ denotes the cost (2) augmented with the dynamics (1) and the final state constraint (3)
through the multipliers p(·) and λ, and H(x, p, u, t) = L(x, u, t) + pᵀf (x, u, t) is the Hamiltonian.
¹ The optimal control problem here is to enclose the maximum area using a closed curve of given length.
The notation Di H stands for the derivative of H with respect to the i-th argument. Thus,
for example,

D3 H(x, p, u, t) = ∂H/∂u,        D1 H(x, p, u, t) = ∂H/∂x
Integrating the term ∫_{t0}^{tf} pᵀδẋ dt by parts yields pᵀδx|tf − pᵀδx|t0 − ∫_{t0}^{tf} ṗᵀδx dt.
An extremum of J̃ is achieved when δJ̃ = 0 for all independent variations δλ, δx, δu, δp, and δtf .
These conditions are recorded in the following table.
Table 1

  State equation          ẋ = (∂H/∂p)ᵀ                      (from δp)
  Costate equation        ṗ = −(∂H/∂x)ᵀ                     (from δx)
  Input stationarity      ∂H/∂u = 0                          (from δu)
  Boundary conditions     D1 φ − pᵀ = −D1 ψᵀλ |tf            (from δx(tf ))
                          H + D2 φ = −D2 ψᵀλ |tf             (from δtf )
The conditions of Table (1) and the boundary conditions x(t0 ) = x0 and the constraint on
the final state ψ(x(tf ), tf ) = 0 constitute the necessary conditions for optimality. The boundary
conditions at the final time are referred to as the transversality conditions:
D1 φ − pᵀ = −D1 ψᵀλ
H + D2 φ = −D2 ψᵀλ        (8)
and the endpoint constraint ψ(x(tf ), tf ) = 0. The key point in the derivation of the necessary
conditions of optimality is that the Legendre transformation of the Lagrangian to be minimized
into a Hamiltonian converts a functional minimization problem into a static optimization
problem on the function H(x, u, p, t).
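As a numerical illustration of how the conditions in Table 1 are used (a sketch not contained in the original notes, for a hypothetical scalar problem), consider ẋ = u with L = ½(x² + u²), φ = 0, fixed tf and no terminal constraint. Input stationarity gives u = −p, the state and costate equations become ẋ = −p and ṗ = −x, and the boundary conditions are x(0) = x0 and p(tf ) = 0. A single-shooting sketch on the unknown p(0):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical scalar problem: xdot = u, L = 0.5*(x^2 + u^2), phi = 0, tf fixed.
# From Table 1: u = -p (stationarity), xdot = -p, pdot = -x, x(0) = x0, p(tf) = 0.

x0, tf = 1.0, 2.0

def hamiltonian_system(t, y):
    x, p = y
    return [-p, -x]          # xdot = -p (since u = -p), pdot = -dH/dx = -x

def p_terminal(p0):
    """Integrate forward from a guessed p(0) and return p(tf); we want this to be 0."""
    sol = solve_ivp(hamiltonian_system, (0.0, tf), [x0, p0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# Shooting: find p(0) so that the boundary condition p(tf) = 0 holds.
p0_star = brentq(p_terminal, -10.0, 10.0)
sol = solve_ivp(hamiltonian_system, (0.0, tf), [x0, p0_star], dense_output=True)
print("p(0) =", p0_star, "  p(tf) =", sol.y[1, -1])   # p(tf) should be ~0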
The question of when these equations also constitute sufficient conditions for (local) optimality
is an important one and needs to be ascertained by taking the second variation of J̃.
This is an involved procedure, but the input stationarity condition in Table (1) hints at the
sufficient condition for a given trajectory x∗(·), u∗(·), p∗(·) to be a local minimum: the Hessian
of the Hamiltonian with respect to the input,

∂²H/∂u² (x∗(t), u∗(t), p∗(t), t),

should be positive definite along the optimal trajectory. A sufficient condition for this is to ask
simply that the ni × ni Hessian matrix

∂²H/∂u² (x, u, p, t)

be positive definite for all x, u, p, t. As far as conditions for global minimality are concerned,
the stationarity condition again hints at a sufficient condition for global minimality, namely that

H(x∗, u∗, p∗, t) ≤ H(x∗, u, p∗, t) for all u.

Sufficient conditions for this are, for example, the convexity of the Hamiltonian H(x, u, p, t)
in u.
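For instance (an illustrative special case, not worked out in the original notes), for the linear quadratic problem with f (x, u) = Ax + Bu and L(x, u) = ½(xᵀQx + uᵀRu), the Hamiltonian is

H(x, u, p) = ½(xᵀQx + uᵀRu) + pᵀ(Ax + Bu),

so that ∂²H/∂u² = R. Requiring R to be positive definite therefore satisfies the Hessian condition above, and H is then strictly convex in u, so the stationary input u∗ = −R⁻¹Bᵀp is a global minimizer of H over u.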
Finally, there are instances in which the Hamiltonian H(x, u, p, t) is not a function of u at
some values of x, p, t. These cases are referred to as singular extremals and need to be treated
with care, since the value of u is left unspecified as far as the optimization is concerned.
Further, if there is no final state constraint the boundary condition simplifies even further
to
p(tf ) = D1 φT |tf (14)
In summary, for the time-invariant case the necessary conditions read:

  State equation              ẋ = (∂H/∂p)ᵀ = f (x, u∗)
  Costate equation            ṗ = −(∂H/∂x)ᵀ = −D1 f ᵀ p − D1 Lᵀ
  Stationarity condition      0 = (∂H/∂u)ᵀ = D2 Lᵀ + D2 f ᵀ p
  Transversality conditions   D1 φ − pᵀ = −D1 ψᵀλ,    H(tf ) = 0
In addition, it may be verified that

dH∗/dt = ∂H∗/∂x (x, p) ẋ + ∂H∗/∂p (x, p) ṗ = 0        (15)

which, combined with the transversality condition H(tf ) = 0, establishes that H∗(t) ≡ 0.
2 Optimal Control based on Dynamic Programming
To begin this discussion, we will embed the optimization problem which we are solving in
a larger class of problems; more specifically, we will consider the original cost function of
equation (2) from an initial time t ∈ [t0 , tf ], that is, the cost function on the interval [t, tf ]:

J(x(t), t) = φ(x(tf ), tf ) + ∫_t^{tf} L(x(τ ), u(τ ), τ ) dτ
Bellman’s principle of optimality says that if we have found the optimal trajectory on the
interval from [t0 , tf ] by solving the optimal control problem on that interval, the resulting
trajectory is also optimal on all subintervals of this interval of the form [t, tf ] with t > t0 ,
provided that the initial condition at time t was obtained from running the system forward
along the optimal trajectory from time t0 . The optimal value of J(x(t), t) is referred to as
the “cost-to-go”. To be able to state the following key theorem of optimal control, we will
need to define the “optimal Hamiltonian” to be

H∗(x, p, t) := H(x, u∗, p, t)
Proof: The proof uses the principle of optimality. This principle says that if we have found
the optimal trajectory on the interval from [t, tf ] by solving the optimal control problem
on that interval, the resulting trajectory is also optimal on all subintervals of this interval
of the form [t1 , tf ] with t1 > t, provided that the initial condition at time t1 was obtained
from running the system forward along the optimal trajectory from time t. Thus, from using
t1 = t + ∆t, it follows that
J∗(x, t) =    min_{u(τ ), t ≤ τ ≤ t+∆t}  [ ∫_t^{t+∆t} L(x, u, τ ) dτ + J∗(x + ∆x, t + ∆t) ]        (20)
□
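To see the principle of optimality at work numerically, the following Python sketch (an illustrative discretization, not part of the original notes; the scalar problem and grid sizes are arbitrary choices) computes the cost-to-go backwards in time by applying the recursion (20) with a fixed step ∆t on a state grid:

import numpy as np

# Discretized dynamic programming for a hypothetical scalar problem:
# xdot = u, L = 0.5*(x^2 + u^2), terminal cost phi = 0.5*x^2, on t in [0, 1].
# Backward recursion J*(x,t) = min_u [ L(x,u)*dt + J*(x + u*dt, t+dt) ]  (cf. eq. (20)).

dt, T = 0.02, 1.0
ts = np.arange(0.0, T + dt, dt)
xs = np.linspace(-3.0, 3.0, 301)           # state grid
us = np.linspace(-5.0, 5.0, 201)           # candidate controls

J = 0.5 * xs**2                            # J*(x, T) = phi(x)
for _ in range(len(ts) - 1):               # march backwards in time
    x_next = xs[:, None] + us[None, :] * dt             # successor states for each (x, u)
    J_next = np.interp(x_next, xs, J)                    # interpolate cost-to-go off-grid
    stage = 0.5 * (xs[:, None]**2 + us[None, :]**2) * dt
    J = np.min(stage + J_next, axis=1)                   # Bellman backup: minimize over u

print("approximate cost-to-go J*(x=1, t=0) ≈", np.interp(1.0, xs, J))

Each backward pass is one Bellman backup; refining the grids and ∆t makes the computed values approach the true cost-to-go J∗.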
Remarks:
1. The preceding theorem was stated as a necessary condition for extremal solutions of
the optimal control problem. As far as minimal and global solutions of the optimal
control problem are concerned, the Hamilton Jacobi Bellman equation reads as in equation (21). In
this sense, the form of the Hamilton Jacobi Bellman equation in (21) is more general.
2. The Eulerian conditions of Table (1) are easily obtained from the Hamilton Jacobi
Bellman equation by proving that pᵀ(t) := ∂J∗/∂x (x, t) satisfies the costate equations of
that Table. Indeed, consider the equation (21). Since u(t) is unconstrained, it follows
that it should satisfy

   ∂L/∂u (x∗, u∗) + (∂f/∂u)ᵀ p = 0        (22)
Now differentiating the definition of p(t) above with respect to t yields

   dpᵀ/dt = ∂²J∗/∂t∂x (x∗, t) + ∂²J∗/∂x² f (x∗, u∗, t)        (23)

Differentiating the Hamilton Jacobi equation (21) with respect to x and using the
relation (22) for a stationary solution yields

   −∂²J∗/∂t∂x (x∗, t) = ∂L/∂x + ∂²J∗/∂x² f + pᵀ ∂f/∂x        (24)

Using equation (24) in equation (23) yields

   −ṗ = (∂f/∂x)ᵀ p + (∂L/∂x)ᵀ        (25)
establishing that p is indeed the co-state of Table 1. The boundary conditions on p(t)
follow from the boundary conditions on the Hamilton Jacobi Bellman equation.
2.2 Free end time problems
In the instance that the final time tf is free, the transversality conditions are that
pᵀ(tf ) = D1 φ + D1 ψᵀλ
H(tf ) = −(D2 φ + D2 ψᵀλ)        (27)
A special class of free end time problems of especial interest is minimum time problems,
where tf is to be minimized subject to the constraints. This is accounted for by setting
the Lagrangian to be 1, and the terminal state cost φ ≡ 0, so that the Hamiltonian is
H(x, u, p, t) = 1 + pT f (x, u, t). Note that by differentiating H(x, u, p, t) with respect to time,
we get
dH∗/dt = D1 H∗ ẋ + D2 H∗ u̇ + D3 H∗ ṗ + ∂H∗/∂t        (28)

Continuing the calculation using the Hamilton Jacobi equation,

dH∗/dt = (∂H∗/∂x + ṗᵀ) f (x, u∗, t) + ∂H∗/∂t = ∂H∗/∂t        (29)
In particular, if H∗ is not an explicit function of t, it follows that H∗ is constant along the
optimal trajectory. Thus, for minimum time problems for which f (x, u, t) and ψ(x, t) are not
explicitly functions of t, it follows that 0 = H(tf ) ≡ H(t).
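As an illustration (a standard example, not worked out in the original notes), consider the minimum time problem for the double integrator ẋ1 = x2 , ẋ2 = u with the input constrained as |u| ≤ 1. The Hamiltonian is

H = 1 + p1 x2 + p2 u,

and minimizing H over the constrained input gives u∗ = −sgn(p2 ). The costate equations ṗ1 = −∂H/∂x1 = 0 and ṗ2 = −∂H/∂x2 = −p1 show that p1 is constant and p2 is affine in t, so p2 changes sign at most once: the minimum time control is bang-bang with at most one switch.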
to minimize and the other to maximize the same cost function taken to be of the form
J = φ(x(tf ), tf ) + ∫_{t0}^{tf} L(x, u, d, t) dt        (31)
We will assume that player 1 (u) is trying to minimize J and player 2 (d) is trying to
maximize J. For simplicity we have omitted the final state constraint and also assumed the
end time tf to be fixed. These two assumptions are made for simplicity but we will discuss
the tf free case when we study pursuit evasion games. The game is said to have perfect
information if both players have access to the full state x(t). The solution of two person zero
sum games proceeds very much along the lines of the optimal control problem by setting up
the Hamiltonian
H(x, u, d, p, t) = L(x, u, d, t) + pT f (x, u, d, t) (32)
Rather than simply minimizing H(x, u, d, p, t) the game is said to have a saddle point solution
if the following analog of the saddle point condition for two person zero sum static games
holds:
min_u max_d H(x, u, d, p, t) = max_d min_u H(x, u, d, p, t)        (33)
If the minmax is equal to the maxmin, the resulting optimal Hamiltonian is denoted H ∗ (x, p, t)
and the optimal inputs u∗ , d∗ are determined to be respectively,
u∗(t) = argmin_u ( max_d H(x, u, d, p, t) )        (34)

and

d∗(t) = argmax_d ( min_u H(x, u, d, p, t) )        (35)
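The saddle point condition (33) can be checked numerically for a given Hamiltonian. The following sketch (a hypothetical static example, not from the notes) evaluates a Hamiltonian that is convex in u and concave in d on a grid and verifies that the minmax and maxmin coincide:

import numpy as np

# Hypothetical static Hamiltonian, convex in u and concave in d, so a saddle point exists.
def H(u, d):
    return u**2 - d**2 + u * d

us = np.linspace(-2.0, 2.0, 81)            # grid includes u = 0
ds = np.linspace(-2.0, 2.0, 81)            # grid includes d = 0
Hgrid = H(us[:, None], ds[None, :])        # Hgrid[i, j] = H(us[i], ds[j])

minmax = Hgrid.max(axis=1).min()           # min over u of (max over d)
maxmin = Hgrid.min(axis=0).max()           # max over d of (min over u)
print("min_u max_d H =", minmax, "  max_d min_u H =", maxmin)   # both 0 at the saddle (0, 0)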
The equations for the state and costate and the transversality conditions are given as before
by
ẋ = ∂H∗ᵀ/∂p (x, p)
ṗ = −∂H∗ᵀ/∂x (x, p)        (36)
with boundary conditions x(t0 ) = x0 and pᵀ(tf ) = D1 φ(x(tf ), tf ), and the equation is the familiar
Hamilton Jacobi equation. As before, one can introduce the optimal cost to go J ∗ (x(t), t)
and we have the following analog of Theorem (1):
Remarks
1. We have dealt with saddle solutions for unconstrained input signals u, d thus far in the
development. If the inputs are constrained to lie in sets U, D respectively, the saddle
solutions can be guaranteed to exist if

   min_{u∈U} max_{d∈D} H(x, u, d, p, t) = max_{d∈D} min_{u∈U} H(x, u, d, p, t)
2. The sort of remarks that were made about free endpoint optimal control problems can
also be made of games.
3. In our problem formulation for games, we did not include explicit terminal state
constraints of the form ψ(x(tf ), tf ) = 0. These can be easily included, and we will study
this situation in greater detail under the heading of pursuit evasion games.
4. The key point in the theory of dynamical games is that the Legendre transformation
of the Lagrangian cost function into the Hamiltonian function converts the solution of
the “dynamic” game into a “static” game, where one needs to find a saddle point of
the Hamiltonian function H(x, u, d, p, t). This is very much in the spirit of the calculus
of variations and optimal control.
and each cost functional (to be minimized) is of the form
Ji (u1 (·), . . . , uN (·)) = φi (x(tf ), tf ) + ∫_{t0}^{tf} Li (x, u1 , . . . , uN , t) dt        (40)
different solution concepts need to be invoked. The simplest non-cooperative solution strategy
is a so-called non-cooperative Nash equilibrium. A set of controls u∗i , i = 1, . . . , N is
said to be a Nash strategy if, for each player, modifying that strategy while the others play
their Nash strategies results in an increase in his cost; that is, for i = 1, . . . , N
Ji (u∗1 , . . . , ui , . . . , u∗N ) ≥ Ji (u∗1 , . . . , u∗i , . . . , u∗N ) ∀ui (·) (41)
It is important to note that Nash equilibria may not be unique. It is also easy to see that
for 2 person zero sum games, a Nash equilibrium is a saddle solution.
As in the previous section on saddle solutions, we can write Hamilton Jacobi equations for
Nash equilibria by defining Hamiltonians Hi (x, u1 , . . . , uN , p, t) according to
Hi (x, u1 , . . . , uN , p, t) = Li (x, u1 , . . . , uN ) + pT f (x, u1 , . . . , uN , t) (42)
The conditions for a Nash equilibrium of equation (41) are that there exist u∗i (x, p, t) such that
Hi (x, u∗1 , . . . , ui , . . . , u∗N , p, t) ≥ Hi (x, u∗1 , . . . , u∗i , . . . , u∗N , p, t) (43)
Then, we have N sets of Hamilton Jacobi equations for the N players satisfying the Hamilton
Jacobi equations with Hi∗ = Hi∗ (x, u∗1 , . . . , u∗N , pi , t). Note that we have changed the costate
variables to pi to account for different Hamiltonians and boundary conditions.
ẋ = ∂Hi∗ᵀ/∂pi
ṗi = −∂Hi∗ᵀ/∂x        (44)
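As a simple numerical illustration of the Nash condition (41) (a hypothetical static quadratic game, not taken from the notes), each player minimizes his own cost; the Nash point follows from the two coupled first-order conditions, and the deviation inequality is then checked directly:

import numpy as np

# Hypothetical two-player static game: each player i chooses a scalar u_i to minimize J_i.
def J1(u1, u2):
    return u1**2 + u1 * u2 - 2.0 * u1

def J2(u1, u2):
    return u2**2 + u1 * u2 - 2.0 * u2

# First-order (stationarity) conditions: dJ1/du1 = 2*u1 + u2 - 2 = 0,
#                                        dJ2/du2 = u1 + 2*u2 - 2 = 0.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([2.0, 2.0])
u1_star, u2_star = np.linalg.solve(A, b)    # Nash strategies (2/3, 2/3)

# Check the Nash inequality (41): unilateral deviations do not decrease a player's cost.
devs = np.linspace(-5.0, 5.0, 1001)
ok1 = np.all(J1(devs, u2_star) >= J1(u1_star, u2_star) - 1e-12)
ok2 = np.all(J2(u1_star, devs) >= J2(u1_star, u2_star) - 1e-12)
print("Nash point:", u1_star, u2_star, " deviations unprofitable:", ok1 and ok2)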
In turn the leader chooses his strategy to be that u∗1 which minimizes J1 subject to the
assumption that player 2 will rationally play u∗2 (u∗1 ). Thus, the problem that he
has to solve is to minimize J1 subject to
ẋ = f (x, u1 , u2 , t)        x(t0 ) = x0
ṗ2 = −∂H2ᵀ/∂x (x, u1 , u2 (u1 ), p2 , t)        p2 (tf ) = D1ᵀφ2 (x(tf ), tf )        (46)
0 = D3 H2 (x, u1 , u2 , p2 , t)
The last equation in (46) is the stationarity condition for minimizing H2 . The optimization
problem of the system in (46) is not a standard optimal control problem in R^{2n} because there is an
equality constraint to be satisfied. Thus, Lagrange multipliers (co-states) taking values in R^{2n+n2} for
t ∈ [t0 , tf ] are needed. We will omit the details in these notes.
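For intuition (again a hypothetical static example, not from the notes, using the same quadratic costs as the Nash illustration above), the leader-follower logic behind (46) can be sketched directly: the follower's reaction u2∗(u1 ) is obtained from his stationarity condition, substituted into the leader's cost, and the leader then minimizes over u1 alone.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical static Stackelberg game with the same quadratic costs as the Nash example.
def J1(u1, u2):
    return u1**2 + u1 * u2 - 2.0 * u1

def J2(u1, u2):
    return u2**2 + u1 * u2 - 2.0 * u2

def follower_reaction(u1):
    """Follower's rational response: argmin over u2 of J2(u1, u2), from 2*u2 + u1 - 2 = 0."""
    return (2.0 - u1) / 2.0

# Leader minimizes J1(u1, u2*(u1)), anticipating the follower's reaction.
res = minimize_scalar(lambda u1: J1(u1, follower_reaction(u1)))
u1_s = res.x
u2_s = follower_reaction(u1_s)
print("Stackelberg strategies: u1 =", u1_s, " u2 =", u2_s)   # analytically u1 = 1, u2 = 0.5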
References
[1] S. S. Sastry, Lectures in Optimal Control and Dynamic Games, Notes for the course
EECS290A, Advanced Topics in Control Theory, University of California, Berkeley,
1996.
[2] M. Athans and P. Falb, Optimal Control, McGraw-Hill, 1966.
[3] R. Bellman, Dynamic Programming, Princeton University Press, 1957.
[4] L. D. Berkovitz, Optimal Control Theory, Springer-Verlag, 1974.
[5] A. E. Bryson and Y.-C. Ho, Applied Optimal Control, Blaisdell Publishing Company, Waltham, 1969.
[6] L. Pontryagin, V. Boltyanskii, R. Gamkrelidze, and E. Mishchenko, The Mathematical Theory of Optimal Processes, Wiley, New York, 1962.
[7] L. C. Young, Optimal Control Theory, Cambridge University Press, 2nd edition, 1980.
[8] D. Kirk, Optimal Control Theory: An Introduction, Prentice Hall, 1970.
[9] F. Lewis, Optimal Control, Wiley, New York, 1986.
[10] W. Fleming and R. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, 1975.
[11] V. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag, 2nd edition, 1989.
[12] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1947.
[13] T. Basar and G. Olsder, Dynamic Noncooperative Game Theory, Academic Press, 2nd edition, 1995.