
Necessary Conditions for Continuous-Time Optimization Problems
under Linear Independence Type Regularity

Moisés R. C. do Monte & Valeriano A. de Oliveira

UNESP - Universidade Estadual Paulista
São José do Rio Preto, São Paulo, Brazil

CNMAC 2017, S.J. Campos, SP

Acknowledgments: Grants 457785/2014-4 and 310955/2015-7, CNPq


Outline

1 Introduction

2 Preliminaries

3 The continuous-time problem

4 Optimality conditions
Constraint qualifications in mathematical programming

The importance of constraint qualifications in optimization theory is well known.

Consider a (smooth) mathematical programming problem. It is known that the optimal solutions of such problems satisfy Lagrange multiplier rules, such as the Fritz John rule.

However, the Fritz John rule allows the multiplier associated with the objective function to be zero, in which case important information about the problem is lost.

A rule in which this multiplier is guaranteed to be nonzero is therefore crucial.

Such a rule is given by the Karush-Kuhn-Tucker (KKT) conditions.
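As a finite-dimensional illustration of the KKT conditions (a toy problem chosen for this example, not taken from the slides), the multiplier can be recovered numerically from the stationarity equation:

```python
import numpy as np

# Toy problem (illustrative assumption, not from the slides):
#   maximize f(x) = -x1^2 - x2^2   subject to   g(x) = x1 + x2 - 1 >= 0.
# The optimum is x* = (1/2, 1/2), with the constraint active.

def grad_f(x):
    return np.array([-2.0 * x[0], -2.0 * x[1]])

def grad_g(x):
    return np.array([1.0, 1.0])

x_star = np.array([0.5, 0.5])

# KKT stationarity for a maximization problem with g >= 0:
#   grad f(x*) + v * grad g(x*) = 0,  v >= 0,  v * g(x*) = 0.
# Recover the (here unique) multiplier v by least squares.
v, *_ = np.linalg.lstsq(grad_g(x_star).reshape(-1, 1),
                        -grad_f(x_star), rcond=None)
v = float(v[0])

residual = grad_f(x_star) + v * grad_g(x_star)
print(v, np.linalg.norm(residual))  # v = 1.0, residual 0
```

Note that the multiplier attached to the objective is 1 (not zero), which is exactly what a constraint qualification guarantees.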


CQ’s: key role

Constraint qualifications play a key role in ensuring the validity of the KKT conditions.

It is worth mentioning that when a constraint qualification holds, so that the KKT conditions are verified, efficient algorithms can be designed, sensitivity analysis can be carried out, and duality theory can be developed.

This is because, when a constraint qualification is verified, it is possible to analytically capture the geometric properties of the problem, which in turn are fundamental in the development of optimality conditions.
Some CQ’s in mathematical programming

The importance of constraint qualifications in optimization theory is well known.

[Diagram: implication relations among the constraint qualifications Slater, LICQ, MFCQ, CRCQ and PLICQ/CPLD.]

Objective: Generalize some of them to continuous-time programming.
Continuous-time programming

Continuous-time optimization problems were first proposed by Bellman in

R. Bellman, Bottleneck problems and dynamic programming, Proceedings of the National Academy of Sciences of the United States of America, v. 39, n. 9, p. 947–951, 1953,

while he was investigating dynamical models of production and inventory “bottleneck processes”.

One of the applications pointed out by Bellman is that of determining the allocation policy that maximizes the amount of steel in the stock of an industry at the end of a given period.
The continuous-time problem studied by Bellman is posed as

maximize   ∫₀ᵀ aᵀz(t) dt
subject to z(t) ≥ 0, 0 ≤ t ≤ T,
           Bz(t) ≤ c + ∫₀ᵗ Kz(s) ds, 0 ≤ t ≤ T,

where a ∈ Rn, B, K ∈ Rm×n, c ∈ Rm and z ∈ L∞([0, T]; Rn).
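A rough numerical sketch of a scalar instance of Bellman’s problem (the data a = B = 1, K = 0.5, c = 1, T = 1 are illustrative assumptions). Since the kernel coefficient K is nonnegative, enlarging z(s) at earlier times only relaxes later constraints, so for this instance the constraint binds at the optimum, yielding a simple recursion:

```python
import numpy as np

# Discretized sketch (left-endpoint rule) of the scalar bottleneck instance
#   maximize  ∫_0^1 z(t) dt
#   s.t.      z(t) <= 1 + 0.5 * ∫_0^t z(s) ds,   z(t) >= 0.
# With K = 0.5 >= 0 the constraint binds, so z_k = (1 + 0.5*dt)^k.

T, N = 1.0, 100
dt = T / N
z = np.empty(N)
acc = 0.0  # running value of 0.5 * ∫_0^{t_k} z(s) ds
for k in range(N):
    z[k] = 1.0 + acc          # constraint held with equality
    acc += 0.5 * dt * z[k]

objective = dt * z.sum()      # Riemann sum of the objective
print(z[0], z[-1], objective)
```

As dt → 0 the discrete solution approaches the continuous one, z(t) = e^{0.5t}, with objective value 2(e^{0.5} − 1) ≈ 1.297.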


A small bibliographical review

Rigorous mathematical treatment:


W. F. Tyndall, A duality theorem for a class of continuous
linear programming problems, J. Soc. Ind. Appl. Math. 13,
644–666, 1965.
Non-linear case:
M. A. Hanson and B. Mond, A class of continuous convex
programming problems, J. Math. Anal. Appl. 22, 427–437,
1968.
KKT optimality conditions:
T. W. Reiland and M. A. Hanson, Generalized Kuhn-Tucker
conditions and duality for continuous nonlinear programming
problems, J. Math. Anal. Appl. 74, 578–598, 1980.
KKT via geometric ideas:
G. Zalmai, The Fritz John and Kuhn-Tucker optimality
conditions in continuous-time nonlinear programming, J.
Math. Anal. Appl. 110, 503–518, 1985.
Nonsmooth case:
A. J. V. Brandão, M. Rojas-Medar and G. N. Silva, Nonsmooth
continuous-time optimization problems: necessary conditions,
Comp. Math. Appl. 41, 1477–1486, 2001.
Multiobjective problems:
S. Nobakhtian and M. Pouryayevali, Optimality criteria for
nonsmooth continuous time problems of multiobjective
optimization, J. Optim. Theory Appl., 136, 69–76, 2008.
Constraint qualifications in continuous-time problems

Reiland, in 1980, obtained Karush-Kuhn-Tucker type optimality conditions by making use of an Abadie-type constraint qualification.
Later, in 1985, Zalmai also obtained KKT type optimality conditions, by means of a generalization of the Slater constraint qualification (SCQ).
However, to the best of our knowledge, other classical constraint qualifications, well known from nonlinear mathematical programming in finite dimensions, had not been studied in continuous-time programming.
Uniform Inverse Mapping Theorem

Consider A ⊂ Rk, α > 0, x0, y0 ∈ Rn, and a family of functions {F^a : Rn → Rn}_{a∈A} satisfying y0 = F^a(x0), a ∈ A.
It is assumed that:
i. F^a is continuously differentiable on x0 + αB for all a ∈ A, uniformly in a;
ii. There exists a monotone increasing function θ : (0, ∞) → (0, ∞) with θ(s) ↓ 0 as s ↓ 0 such that

   ‖∇F^a(x) − ∇F^a(x′)‖ ≤ θ(‖x − x′‖) ∀ x, x′ ∈ x0 + αB, a ∈ A;

iii. ∇F^a(x0) is nonsingular for each a ∈ A and there exists c > 0 such that

   ‖[∇F^a(x0)]⁻¹‖ ≤ c ∀ a ∈ A.
Uniform Inverse Mapping Theorem cont.

Then there exist ε ∈ (0, α), δ > 0 and a family of continuously differentiable functions {G^a : y0 + δB → x0 + αB}_{a∈A} which are Lipschitz continuous with a common Lipschitz constant K such that

F^a(G^a(y)) = y ∀ y ∈ y0 + δB, a ∈ A,
G^a(F^a(x)) = x ∀ x ∈ x0 + εB, a ∈ A.

The numbers ε, δ and K depend only on α, θ and c.

Furthermore, if A is a Borel set and a ↦ F^a(x) is Borel measurable for each x ∈ x0 + αB, then a ↦ G^a(y) is Borel measurable for each y ∈ y0 + δB.
Uniform Implicit Function Theorem

Consider A ⊂ Rk, α > 0, {ψ^a : Rm × Rn → Rn}_{a∈A}, and (u0, v0) ∈ Rm × Rn satisfying ψ^a(u0, v0) = 0, a ∈ A. It is assumed that:
i. ψ^a is continuously differentiable on (u0, v0) + αB for all a ∈ A, uniformly in a;
ii. There exists a monotone increasing function θ : (0, ∞) → (0, ∞) with θ(s) ↓ 0 as s ↓ 0 such that

   ‖∇ψ^a(u, v) − ∇ψ^a(u′, v′)‖ ≤ θ(‖(u, v) − (u′, v′)‖)

   for all (u, v), (u′, v′) ∈ (u0, v0) + αB, a ∈ A;

iii. ∇_v ψ^a(u0, v0) is nonsingular, a ∈ A, and there exists c > 0 such that

   ‖[∇_v ψ^a(u0, v0)]⁻¹‖ ≤ c ∀ a ∈ A.
Uniform Implicit Function Theorem cont.

Then there exist δ > 0 and a family of continuously differentiable functions {φ^a : u0 + δB → v0 + αB}_{a∈A} which are Lipschitz continuous with a common Lipschitz constant K such that

v0 = φ^a(u0) ∀ a ∈ A,
ψ^a(u, φ^a(u)) = 0 ∀ u ∈ u0 + δB, a ∈ A,
∇φ^a(u0) = −[∇_v ψ^a(u0, v0)]⁻¹ ∇_u ψ^a(u0, v0).

The numbers δ and K depend only on α, θ and c.

Furthermore, if A is a Borel set and a ↦ ψ^a(u, v) is Borel measurable for each (u, v) ∈ (u0, v0) + αB, then a ↦ φ^a(u) is Borel measurable for each u ∈ u0 + δB.
The results above were given in

M. do R. de Pinho and R. B. Vinter, Necessary conditions for optimal control problems involving nonlinear differential algebraic equations, J. Math. Anal. Appl. 212, 493–516, 1997.
The continuous-time problem

(CTP)  Maximize   P(z) = ∫₀ᵀ φ(z(t), t) dt
       subject to h(z(t), t) = 0 a.e. in [0, T],
                  g(z(t), t) ≥ 0 a.e. in [0, T],
                  z ∈ L∞([0, T]; Rn),

where φ : Rn × [0, T] → R, h : Rn × [0, T] → Rp and g : Rn × [0, T] → Rm.
Optimal solutions

The set of all feasible solutions is denoted by

Ω = {z ∈ L∞([0, T]; Rn) : h(z(t), t) = 0, g(z(t), t) ≥ 0 a.e. in [0, T]}.

Definition
z̄ ∈ Ω is said to be an optimal local solution of (CTP) if there exists ε > 0 such that P(z̄) ≥ P(z) for all z ∈ Ω satisfying z(t) ∈ z̄(t) + εB̄ a.e. in [0, T].
Hypotheses

Let z̄ be an optimal local solution of (CTP). It is assumed that

(H1) φ(·, t) is twice continuously differentiable on z̄(t) + εB̄ a.e. in [0, T], φ(z, ·) is measurable for each z, and there exists Kφ > 0 such that
     ‖∇φ(z̄(t), t)‖ ≤ Kφ a.e. in [0, T];
(H2) h(z, ·) and g(z, ·) are measurable for each z; h(·, t) and g(·, t) are twice continuously differentiable on z̄(t) + εB̄ a.e. in [0, T];
(H3) There exists a monotone increasing function θ : (0, ∞) → (0, ∞), θ(s) ↓ 0 as s ↓ 0, such that, for all z̃, z ∈ z̄(t) + εB̄,
     ‖∇[h, g](z̃, t) − ∇[h, g](z, t)‖ ≤ θ(‖z̃ − z‖) a.e. in [0, T];
     there exists K0 > 0 such that
     ‖∇[h, g](z̄(t), t)‖ ≤ K0 a.e. in [0, T].
The unconstrained case

(CTP)  Maximize   P(z) = ∫₀ᵀ φ(z(t), t) dt
       subject to z ∈ L∞([0, T]; Rn)
Optimality conditions

Theorem
If z̄ is an optimal local solution of the unconstrained (CTP), then

∇φ(z̄(t), t) = 0 a.e. in [0, T]

and

∫₀ᵀ γ(t)ᵀ ∇²φ(z̄(t), t) γ(t) dt ≤ 0 ∀ γ ∈ L∞([0, T]; Rn).

Proof.
Standard (from calculus in Banach spaces, Taylor expansion with Fréchet derivatives).
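These pointwise conditions can be checked numerically for a simple illustrative integrand (an assumption chosen for this example, not from the slides):

```python
import numpy as np

# Illustrative integrand: phi(z, t) = -(z - sin t)^2, whose pointwise
# maximizer is z̄(t) = sin t, so the first-order condition holds a.e.
# and the second-order integral is nonpositive for every variation.

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
z_bar = np.sin(t)

grad = -2.0 * (z_bar - np.sin(t))   # ∇phi(z̄(t), t), should vanish
hess = -2.0 * np.ones_like(t)       # ∇²phi ≡ -2 < 0

gamma = np.cos(3.0 * t)             # an arbitrary L^inf variation
second_order = float(np.sum(gamma * hess * gamma) * dt)  # Riemann sum

print(np.abs(grad).max(), second_order)
```

The gradient vanishes identically and the second-order Riemann sum is strictly negative, as the theorem predicts.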
Equality constraints

(CTP)  Maximize   P(z) = ∫₀ᵀ φ(z(t), t) dt
       subject to h(z(t), t) = 0 a.e. in [0, T],
                  z ∈ L∞([0, T]; Rn)

Ω = {z ∈ L∞([0, T]; Rn) : h(z(t), t) = 0 a.e. in [0, T]}.


The linear independence constraint qualification

(H4) The linear independence constraint qualification is said to be satisfied at z̄ ∈ Ω if

∇h(z̄(t), t) = [∇h1(z̄(t), t) ⋯ ∇hp(z̄(t), t)]ᵀ

has full rank a.e. in [0, T].
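A sampling-based sanity check of (H4) for a hypothetical constraint map h (both the constraints and the sampling grid below are illustrative assumptions):

```python
import numpy as np

# Illustrative equality constraints with p = 2, n = 3:
#   h1(z, t) = z1 + t*z3,   h2(z, t) = z2 - t^2*z3.
# Their Jacobian in z does not depend on z̄(t) here, so (H4) amounts to
# the 2x3 matrix below having full row rank for (almost) every t.

def jacobian_h(t):
    return np.array([[1.0, 0.0, t],
                     [0.0, 1.0, -t**2]])

ts = np.linspace(0.0, 1.0, 101)
ranks = [np.linalg.matrix_rank(jacobian_h(t)) for t in ts]
print(min(ranks))  # full row rank (2) at every sample point
```

A rank drop on a set of positive measure would violate (H4); sampling cannot prove the "a.e." statement, but it is a cheap way to detect a failure.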


Optimality conditions

Theorem
Let z̄ be an optimal local solution of (CTP) with equality constraints. Assume that (H1)–(H4) hold true.
Then there exists u ∈ L∞([0, T]; Rp) such that, for a.e. t ∈ [0, T],

∇φ(z̄(t), t) + ∑_{i=1}^p u_i(t) ∇h_i(z̄(t), t) = 0,

∫₀ᵀ γ(t)ᵀ [∇²φ(z̄(t), t) + ∑_{i=1}^p u_i(t) ∇²h_i(z̄(t), t)] γ(t) dt ≤ 0

for all γ ∈ N, where N is given by

N = {γ ∈ L∞([0, T]; Rn) : ∇h(z̄(t), t)γ(t) = 0 a.e. in [0, T]}.

Sketch of the proof

Proof.
Step 1: Define μ : Rn × Rp × [0, T] → Rp by

μ_j(ξ, η, t) = h_j(z̄(t) + ξ + Υ(t)ᵀη, t), j = 1, …, p,

where

Υ(t)ᵀ = [∇h1(z̄(t), t) ⋯ ∇hp(z̄(t), t)] a.e. in [0, T].

Note that
∇_η μ(0, 0, t) = Υ(t)Υ(t)ᵀ,
which is nonsingular, by (H4).
By the uniform implicit function theorem, there exist σ̄ ∈ (0, ε), δ ∈ (0, ε) and an implicit map d : σ̄B × [0, T] → δB such that
Proof cont.
d(ξ, ·) is measurable for each ξ; d(·, t) is Lipschitz continuous a.e. in [0, T] with a common Lipschitz constant; d(·, t) is continuously differentiable a.e. in [0, T]; and

d(0, t) = 0 a.e. in [0, T],
μ(ξ, d(ξ, t), t) = 0 a.e. in [0, T], ξ ∈ σ̄B,
∇d(0, t) = −[Υ(t)Υ(t)ᵀ]⁻¹Υ(t).

Take σ1 > 0 and δ1 > 0 such that

σ1 ∈ (0, min{σ̄, ε/2}), δ1 ∈ (0, min{δ, ε/2}), σ1 + K0δ1 ∈ (0, ε/2).

In what follows, without loss of generality, it is assumed that d is defined from σ1B × [0, T] into δ1B.
Proof cont.
Step 2: z̄ is an optimal local solution of the following auxiliary problem

maximize   P̃(z) = ∫₀ᵀ ϕ(z(t), t) dt
subject to z ∈ L∞([0, T]; Rn),

where
ϕ(z, t) = φ(z + Υ(t)ᵀ d(z − z̄(t), t), t).

Step 3: By applying the last theorem on optimality conditions for the unconstrained problem, the result follows. ∎
The general case

(CTP)  Maximize   P(z) = ∫₀ᵀ φ(z(t), t) dt
       subject to h(z(t), t) = 0 a.e. in [0, T],
                  g(z(t), t) ≥ 0 a.e. in [0, T],
                  z ∈ L∞([0, T]; Rn)

Ω = {z ∈ L∞([0, T]; Rn) : h(z(t), t) = 0, g(z(t), t) ≥ 0 a.e. in [0, T]}.

The linear independence constraint qualification

(H4') The uniform linear independence constraint qualification is said to be satisfied at z̄ ∈ Ω if

M(t) = [ ∇h(z̄(t), t)                        0
         ∇g(z̄(t), t)    diag{−2√g_j(z̄(t), t)}_{j=1}^m ]

has full rank a.e. in [0, T].
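The matrix M(t) can be assembled and its rank checked numerically for a fixed t; the gradients and constraint values below are illustrative assumptions (p = 1, m = 2, n = 3):

```python
import numpy as np

# Build M(t) as in (H4').  The lower-right block is
# diag{-2*sqrt(g_j(z̄(t), t))}: an inactive constraint (g_j > 0) gives a
# nonzero diagonal entry, an active one (g_j = 0) gives zero, so full
# rank then hinges on the gradients of h and of the active g_j.

def M(grad_h, grad_g, g_vals):
    p, _ = grad_h.shape
    m = grad_g.shape[0]
    top = np.hstack([grad_h, np.zeros((p, m))])
    bottom = np.hstack([grad_g, np.diag(-2.0 * np.sqrt(g_vals))])
    return np.vstack([top, bottom])

grad_h = np.array([[1.0, 1.0, 0.0]])           # ∇h(z̄(t), t), assumed
grad_g = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, -1.0]])          # ∇g(z̄(t), t), assumed
g_vals = np.array([0.0, 0.25])                 # g_1 active, g_2 inactive

Mt = M(grad_h, grad_g, g_vals)
print(Mt.shape, np.linalg.matrix_rank(Mt))
```

Here M(t) is 3 × 5 and has full row rank 3, so (H4') holds at this sample point of this hypothetical instance.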


Optimality conditions

Theorem
Let z̄ be an optimal local solution of (CTP). Assume that (H1)–(H4') hold true and that g(z̄(·), ·) is bounded on [0, T].
Then there exist u ∈ L∞([0, T]; Rp) and v ∈ L∞([0, T]; Rm) such that, for a.e. t ∈ [0, T],

∇φ(z̄(t), t) + ∑_{i=1}^p u_i(t) ∇h_i(z̄(t), t) + ∑_{j=1}^m v_j(t) ∇g_j(z̄(t), t) = 0,
v_j(t) g_j(z̄(t), t) = 0 a.e. in [0, T], j = 1, …, m,
v_j(t) ≥ 0 a.e. in [0, T], j = 1, …, m,
Theorem cont.

∫₀ᵀ γ(t)ᵀ [∇²φ(z̄(t), t) + ∑_{i=1}^p u_i(t) ∇²h_i(z̄(t), t) + ∑_{j=1}^m v_j(t) ∇²g_j(z̄(t), t)] γ(t) dt ≤ 0

for all γ ∈ N, where N is given by

N = {γ ∈ L∞([0, T]; Rn) : ∇h(z̄(t), t)γ(t) = 0 a.e. in [0, T],
     ∇g_j(z̄(t), t)ᵀγ(t) = 0 a.e. in [0, T], j ∈ J_a(t)},

where J_a(t) = {j ∈ {1, …, m} : g_j(z̄(t), t) = 0}.


Sketch of the proof

Proof.
Consider the auxiliary problem below

maximize   P(z, w) = ∫₀ᵀ φ(z(t), t) dt
subject to h(z(t), t) = 0 a.e. in [0, T],
           g(z(t), t) − w²(t) = 0 a.e. in [0, T],
           z ∈ L∞([0, T]; Rn), w ∈ L∞([0, T]; Rm),

where

w²(t) = (w_1²(t), …, w_m²(t))ᵀ a.e. in [0, T].
Proof cont.
(z̄, w̄) is an optimal local solution of the auxiliary problem with

w̄_j(t) = √g_j(z̄(t), t) a.e. in [0, T], j = 1, …, m.

Define, for a.a. t ∈ [0, T],

Ψ(z, w, t) = [ h(z, t)
               g(z, t) − w² ].

Then ∇Ψ(z̄(t), w̄(t), t) = M(t) a.e. in [0, T].

By applying the last theorem (on optimality conditions for (CTP) with equality constraints), the result follows.
The nonnegativity of the multipliers associated with the inequality constraints follows from the second-order conditions. ∎
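The identity ∇Ψ(z̄(t), w̄(t), t) = M(t) can be verified by finite differences for a small illustrative instance at a fixed t (all data below are assumptions chosen for the example):

```python
import numpy as np

# Illustrative scalar instance at a fixed t:
#   h(z) = z1 + z2 - 1,   g(z) = z1 - z2,
#   Psi(z, w) = (h(z), g(z) - w^2),   w̄ = sqrt(g(z̄)).
# The Jacobian of Psi at (z̄, w̄) should reproduce M(t) from (H4').

def psi(x):  # x = (z1, z2, w)
    z1, z2, w = x
    return np.array([z1 + z2 - 1.0, z1 - z2 - w**2])

z_bar = np.array([0.75, 0.25])
w_bar = np.sqrt(z_bar[0] - z_bar[1])      # = sqrt(0.5)
x0 = np.concatenate([z_bar, [w_bar]])

# Forward-difference Jacobian of Psi at x0.
eps = 1e-6
J = np.empty((2, 3))
for j in range(3):
    e = np.zeros(3); e[j] = eps
    J[:, j] = (psi(x0 + e) - psi(x0)) / eps

M_expected = np.array([[1.0, 1.0, 0.0],
                       [1.0, -1.0, -2.0 * w_bar]])
print(np.abs(J - M_expected).max())  # small finite-difference error
```

The lower-right entry −2w̄ = −2√g(z̄) is exactly the diagonal block appearing in M(t).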
Current work

[Diagram: implication relations among Slater (Zalmai, 1985), LICQ (Monte, 2017), MFCQ, CRCQ (Monte, 2017) and PLICQ/CPLD.]
Future work

Establish optimality conditions for (CTP) under CPLD;


Establish the maximum principle for optimal control problems
with mixed constraints under CRCQ and CPLD:
minimize l(x(0), x(1))
subject to ẋ(t) = f (t, x(t), u(t), v (t)) a.e. in [0, 1],
g (t, x(t), u(t), v (t)) ≤ 0 a.e. in [0, 1],
h(t, x(t), u(t), v (t)) = 0 a.e. in [0, 1],
v (t) ∈ V (t) a.e. in [0, 1],
(x(0), x(1)) ∈ C ,
where l : Rn × Rn → R,
(f , g , h) : [0, 1] × Rn × Rku × Rkv → Rn × Rmg × Rmh are given
functions, V (t) ⊂ Rkv for all t ∈ [0, 1] and C ⊂ Rn × Rn .
THANK YOU!

antunes@ibilce.unesp.br moises.monte@ufu.br
