
Convex Optimization – Final Exam, Thursday June 28, 2018.

Answer prepared by Tzu-Yu Jeng.


Textbook: Stephen Boyd and Lieven Vandenberghe, Convex Optimization. Cambridge U.
Press, 2004.
Exam policy: Open book. You can bring any books, handouts, and any kinds of paper-based
notes with you, but use of any electronic devices (including cellphones, laptops, tablets, etc.) is
strictly prohibited.

Note: the total score of all problems is 112 points.

1. (45%) For each of the following optimization problems, find (i) the Lagrangian $L(x, \lambda, \nu)$, (ii) the dual function $g(\lambda, \nu)$, and (iii) the dual problem.

(a) (5% + 5% + 5%)

$$\begin{array}{ll} \text{minimize} & x^T x \\ \text{subject to} & Ax \preceq b \end{array}$$

where $A \in \mathbf{R}^{m \times n}$ and $b \in \mathbf{R}^m$.

$$L(x, \lambda, \nu) = \|x\|_2^2 + \langle \lambda, Ax - b \rangle$$

Note that $\nu$ is absent, since there is no equality constraint. Completing the square,

$$L(x, \lambda, \nu) = \Bigl\| x + \tfrac{1}{2} A^T \lambda \Bigr\|_2^2 - \tfrac{1}{4} \| A^T \lambda \|_2^2 - \langle \lambda, b \rangle$$

Thus

$$g(\lambda, \nu) = \inf_x L(x, \lambda, \nu) = -\tfrac{1}{4} \| A^T \lambda \|_2^2 - \langle \lambda, b \rangle$$

The dual problem is

$$\begin{array}{ll} \text{maximize} & -\tfrac{1}{4} \| A^T \lambda \|_2^2 - \langle \lambda, b \rangle \\ \text{subject to} & \lambda \succeq 0 \end{array}$$
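As a quick numerical sanity check (not part of the exam answer), the primal and the derived dual can be solved side by side with cvxpy, assuming numpy and cvxpy are installed; the data below are made up, with $b$ constructed so that Slater's condition holds and strong duality applies.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
b = A @ x0 + np.ones(m)            # A x0 < b, so Slater's condition holds

# Primal: minimize x^T x subject to A x <= b
x = cp.Variable(n)
primal = cp.Problem(cp.Minimize(cp.sum_squares(x)), [A @ x <= b])
primal.solve()

# Dual: maximize -(1/4) ||A^T lambda||_2^2 - <lambda, b> subject to lambda >= 0
lam = cp.Variable(m)
dual = cp.Problem(cp.Maximize(-0.25 * cp.sum_squares(A.T @ lam) - b @ lam),
                  [lam >= 0])
dual.solve()

print(primal.value, dual.value)    # the two optimal values should agree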

(b) (5% + 5% + 5%)

$$\begin{array}{ll} \text{minimize} & x^T P x \\ \text{subject to} & x^T x \le 1 \\ & Ax = b \end{array}$$

where $P \in \mathbf{S}^n_{++}$.

$$L(x, \lambda, \nu) = \langle x, Px \rangle + \lambda \, (\|x\|_2^2 - 1) + \langle \nu, Ax - b \rangle$$

Here $\lambda$ is a scalar, since there is a single inequality constraint, and the inner product reduces to a multiplication.

$$\begin{aligned} g(\lambda, \nu) &= \inf_x L(x, \lambda, \nu) \\ &= \inf_x \bigl[ \langle x, (P + \lambda I) x \rangle + \langle \nu, Ax \rangle - \langle \nu, b \rangle - \lambda \bigr] \\ &= \inf_x \bigl[ \langle x, (P + \lambda I) x \rangle + \langle A^T \nu, x \rangle \bigr] - (\langle \nu, b \rangle + \lambda) \end{aligned}$$
It is well known that the infimum of the quadratic form $\langle x, A_0 x \rangle + \langle b_0, x \rangle$ is $-\tfrac{1}{4} \| A_0^{-1/2} b_0 \|_2^2$ for a self-adjoint (symmetric) positive definite linear operator $A_0$ (which is what we have here, since $P \succ 0$ and $\lambda \ge 0$). As a reminder, diagonalize $A_0$ (why is that possible?) as $U D U^T$, where $D$ has all-positive entries; then $A_0^{1/2} = U D^{1/2} U^T$, and the generalization of "completion of the square" goes through.
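A minimal numerical check of this infimum formula, assuming numpy (the minimizer of the quadratic form is $x^\star = -\tfrac{1}{2} A_0^{-1} b_0$):

import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)                   # symmetric positive definite
b0 = rng.standard_normal(n)

xstar = -0.5 * np.linalg.solve(A0, b0)         # minimizer -A0^{-1} b0 / 2
inf_val = xstar @ A0 @ xstar + b0 @ xstar      # attained infimum
closed = -0.25 * b0 @ np.linalg.solve(A0, b0)  # -(1/4) ||A0^{-1/2} b0||_2^2
print(inf_val, closed)                         # should match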
$$g(\lambda, \nu) = -\tfrac{1}{4} \bigl\| (P + \lambda I)^{-1/2} (A^T \nu) \bigr\|_2^2 - (\langle \nu, b \rangle + \lambda)$$

The dual problem is

$$\begin{array}{ll} \text{maximize} & -\tfrac{1}{4} \bigl\| (P + \lambda I)^{-1/2} (A^T \nu) \bigr\|_2^2 - (\langle \nu, b \rangle + \lambda) \\ \text{subject to} & \lambda \ge 0 \end{array}$$
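Again as a numerical sanity check (assuming numpy and cvxpy), one can solve the primal and evaluate the dual objective at a few arbitrary points with $\lambda \ge 0$; by weak duality every such value must lower-bound the primal optimum. The instance below is made up, with $b$ chosen so the primal is strictly feasible.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 4, 2
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)            # P in S^n_++
A = rng.standard_normal((m, n))
x0 = rng.standard_normal(n)
x0 /= 2 * np.linalg.norm(x0)           # strictly inside the unit ball
b = A @ x0                             # so the primal is strictly feasible

x = cp.Variable(n)
primal = cp.Problem(cp.Minimize(cp.quad_form(x, P)),
                    [cp.sum_squares(x) <= 1, A @ x == b])
primal.solve()

def g(lam, nu):
    # -(1/4) ||(P + lam I)^{-1/2} A^T nu||_2^2 - (<nu, b> + lam)
    w = A.T @ nu
    return -0.25 * w @ np.linalg.solve(P + lam * np.eye(n), w) - nu @ b - lam

for _ in range(5):
    lam, nu = rng.random(), rng.standard_normal(m)
    assert g(lam, nu) <= primal.value + 1e-6   # weak duality holds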

(c) (5% + 5% + 5%)

$$\begin{array}{ll} \text{minimize} & 2x_1 + 3x_2 + 4x_3 \\ \text{subject to} & x_1 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + x_2 \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} + x_3 \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \preceq_{\mathbf{S}^2_+} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \end{array}$$
Hint: the Lagrange multiplier for this problem is in the form of a symmetric matrix. You
can use the notation Z, as in, e.g., L(x, Z) and g(Z), etc.
Relevant material: Lecture 12, pp. 8-9. Let

$$c = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}, \quad F_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad F_2 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad F_3 = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}, \quad G = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$$

Then (if you have defined them clearly, you are allowed not to expand them)

$$L(x, Z) = \langle c, x \rangle + \operatorname{tr}\bigl( (x_1 F_1 + x_2 F_2 + x_3 F_3 + G) Z \bigr)$$

$$g(Z) = \begin{cases} \operatorname{tr}(GZ), & \operatorname{tr}(F_i Z) + c_i = 0, \; i = 1, 2, 3 \\ -\infty, & \text{otherwise} \end{cases}$$

And finally the dual problem is

$$\begin{array}{ll} \text{maximize} & \operatorname{tr}(GZ) \\ \text{subject to} & \operatorname{tr}(F_i Z) + c_i = 0, \; i = 1, 2, 3 \\ & Z \succeq 0 \end{array}$$
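For completeness, here is a sketch of how this primal-dual pair could be set up in cvxpy (assuming cvxpy with an SDP-capable solver such as the bundled SCS; the explicitly symmetric slack variable S is an implementation detail used to encode the matrix inequality). Weak duality guarantees tr(GZ) ≤ ⟨c, x⟩ for any feasible pair, which the printed statuses and values let you inspect.

import numpy as np
import cvxpy as cp

F1 = np.array([[1., 0.], [0., 1.]])
F2 = np.array([[1., 1.], [1., 1.]])
F3 = np.array([[1., -1.], [-1., 2.]])
G = np.array([[0., -1.], [-1., 0.]])
c = np.array([2., 3., 4.])

# Primal: minimize <c, x> s.t. x1 F1 + x2 F2 + x3 F3 + G <= 0 in the PSD order
x = cp.Variable(3)
S = cp.Variable((2, 2), symmetric=True)          # slack: S = -(sum_i xi Fi + G)
primal = cp.Problem(cp.Minimize(c @ x),
                    [S == -(x[0] * F1 + x[1] * F2 + x[2] * F3 + G), S >> 0])
primal.solve()

# Dual: maximize tr(GZ) s.t. tr(Fi Z) + ci = 0 for i = 1, 2, 3, and Z PSD
Z = cp.Variable((2, 2), symmetric=True)
dual = cp.Problem(cp.Maximize(cp.trace(G @ Z)),
                  [cp.trace(Fi @ Z) + ci == 0 for Fi, ci in zip([F1, F2, F3], c)]
                  + [Z >> 0])
dual.solve()

print(primal.status, primal.value)
print(dual.status, dual.value)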

2. (40%) Consider the equality constrained problem

$$\begin{array}{ll} \text{minimize} & f_0(x) = c^T x + \sum_{i=1}^n x_i^3 \\ \text{subject to} & Ax = b \end{array}$$

where $x \in \operatorname{dom} f_0 = \mathbf{R}^n_+$, $A \in \mathbf{R}^{p \times n}$, $\operatorname{rank} A = p$, $b \in \mathbf{R}^p$, and $c \in \mathbf{R}^n$.

(a) (10%) Derive the Lagrange dual function g(ν) of the problem (2a). Find also dom g.

$$L(x, \nu) = \langle c, x \rangle + \sum_{i=1}^n x_i^3 + \langle \nu, Ax - b \rangle$$

This function is cubic in each $x_i$, so the cubic terms dominate, and $L \to -\infty$ along $x = -t(1, \ldots, 1)$ as $t \to \infty$; thus $g(\nu) = -\infty$ for every $\nu$, and $\operatorname{dom} g = \emptyset$.
(b) (5%) Formulate the dual problem of the problem (2a).
We have the trivial dual problem

$$\underset{\nu \in \mathbf{R}^p}{\text{maximize}} \quad g(\nu)$$

with $g \equiv -\infty$ (there is no multiplier λ, since the primal has no inequality constraints). The problem is never dual feasible.


(c) (10%) Derive ∇f₀(x) and ∇²f₀(x).
This should be straightforward, so I only give the result:

$$\nabla f_0(x) = c + 3\,(x_1^2, \ldots, x_n^2)^T$$

$$\nabla^2 f_0(x) = 6 \operatorname{diag}(x)$$

(d) (5%) Find the KKT conditions for the problem (2a).
(1) Primal feasibility for inequality constraints: none (there are no inequality constraints).
(2) Primal feasibility for equality constraints: $Ax - b = 0$.
(3) Dual feasibility: none (there are no inequality constraints, hence no condition λ ⪰ 0).
(4) Complementary slackness: none (since there is no inequality constraint).
(5) Vanishing of the gradient of the Lagrangian:

$$\nabla_x L = c + 3\,(x_1^2, \ldots, x_n^2)^T + A^T \nu = 0$$

(e) (10%) Given a feasible point x (i.e., Ax = b), derive the Newton step ∆x_nt by writing down a linear system in which (∆x_nt, ν⁺) is the variable (ν⁺ is the updated version of the dual variable ν):

$$\begin{bmatrix} M_1 & M_2 \\ M_3 & M_4 \end{bmatrix} \begin{bmatrix} \Delta x_{\mathrm{nt}} \\ \nu^+ \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}.$$

Find $M_1$, $M_2$, $M_3$, $M_4$, $v_1$, and $v_2$ explicitly.
Quoting the result from the textbook, p. 526, eqn. (10.19):

$$\begin{bmatrix} 6 \operatorname{diag}(x) & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x_{\mathrm{nt}} \\ \nu^+ \end{bmatrix} = \begin{bmatrix} -c - 3\,(x_1^2, \ldots, x_n^2)^T \\ 0 \end{bmatrix}$$

This result need not be memorized. The first block row comes from linearizing the stationarity condition (5): (∇²f₀(x))∆x_nt is the first-order change in ∇f₀(x), and A^T ν⁺ simply plugs in the new value of ν. Likewise, the second block row comes from the feasibility condition (2), Ax − b = 0: since x is already feasible, the step must satisfy A ∆x_nt = 0.
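A minimal numpy sketch of the resulting Newton iteration (the problem data here are hypothetical, the starting point is constructed to be strictly positive and feasible, and a crude backtracking keeps the iterates inside dom f₀; convergence is not guaranteed if the minimizer lies on the boundary of the domain):

import numpy as np

# Hypothetical instance: minimize c^T x + sum_i x_i^3  s.t.  A x = b
rng = np.random.default_rng(0)
n, p = 6, 2
A = rng.standard_normal((p, n))
x = np.ones(n)                     # strictly positive starting point
b = A @ x                          # b chosen so that A x = b by construction
c = rng.standard_normal(n)

f0 = lambda v: c @ v + np.sum(v**3)

for _ in range(25):
    grad = c + 3 * x**2
    # KKT system of part (e): [[6 diag(x), A^T], [A, 0]] [dx; nu+] = [-grad; 0]
    K = np.block([[np.diag(6 * x), A.T],
                  [A, np.zeros((p, p))]])
    rhs = np.concatenate([-grad, np.zeros(p)])
    dx = np.linalg.solve(K, rhs)[:n]     # nu+ is the remaining block
    if np.linalg.norm(dx) < 1e-9:
        break
    t = 1.0                              # backtrack: stay in dom f0, decrease f0
    while np.any(x + t * dx <= 0) or f0(x + t * dx) >= f0(x):
        t *= 0.5
    x = x + t * dx

print(x, np.linalg.norm(A @ x - b))      # feasibility is preserved: A x - b ~ 0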

3. (15%) Consider the simple problem

$$\begin{array}{ll} \text{minimize} & x^2 - 1 \\ \text{subject to} & 1 \le x \le 3 \end{array}$$

whose feasible set is [1, 3]. Suppose you are going to apply the barrier method to this problem (whose objective function is denoted f₀ and whose constraint functions are f₁ and f₂).

(a) (6%) Derive tf₀ + φ as a function of x, where t is any given positive number and φ denotes the logarithmic barrier for the original problem.
f₀ = x² − 1, and for convenience set f₁ = 1 − x, f₂ = x − 3. Then

$$t f_0 + \phi = t f_0 - \log(-f_1) - \log(-f_2) = t (x^2 - 1) - \log(x - 1) - \log(3 - x)$$

(b) (7%) Find the optimal point for the "approximated" problem

$$\underset{x}{\text{minimize}} \quad t f_0 + \phi$$

for any given t > 0. In other words, find the central point x*(t) for any given t > 0.
Relevant material: textbook p. 597.
Since tf₀ + φ is convex in x on (1, 3), the central point is characterized by

$$\frac{\partial}{\partial x} (t f_0 + \phi) = 0, \quad \text{i.e.,} \quad 2tx - \frac{1}{x - 1} - \frac{1}{x - 3} = 0$$

Substituting x = x*(t) and clearing denominators (valid since x*(t) ∈ (1, 3), so x ≠ 1, 3 and the function is not singular there),

$$t\,x^*(t)^3 - 4t\,x^*(t)^2 + (3t - 1)\,x^*(t) + 2 = 0$$

(c) (2%) What is $\lim_{t \to \infty} x^*(t)$?
Dividing by t, the condition can be cast as

$$x^*(t)^3 - 4 x^*(t)^2 + (3 - 1/t)\,x^*(t) + 2/t = 0$$

When t → ∞, this becomes

$$x^*(t)^3 - 4 x^*(t)^2 + 3 x^*(t) = 0$$

whose roots are x = 0, 1, 3. Of course, x = 0 lies outside the feasible set and is not considered; x = 1 gives the minimum of f₀ on [1, 3] and x = 3 the maximum, so lim_{t→∞} x*(t) = 1, agreeing with the (originally) obvious result. Without computing the second derivative with respect to x, we simply plug the candidate values into f₀ to see which one is the minimum.
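A small numerical illustration of parts (b) and (c), in plain Python (the bisection helper is just a sketch): since d/dx (tf₀ + φ) = 2tx − 1/(x−1) − 1/(x−3) is strictly increasing on (1, 3) and runs from −∞ to +∞, the central point x*(t) is its unique root and can be found by bisection.

def central_point(t, lo=1.0, hi=3.0, iters=80):
    """Root of 2 t x - 1/(x - 1) - 1/(x - 3) on (1, 3), by bisection."""
    dphi = lambda x: 2 * t * x - 1 / (x - 1) - 1 / (x - 3)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid        # derivative negative: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

for t in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(t, central_point(t))    # x*(t) approaches 1 as t -> infinity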

4. (12%) For the following pairs of proper cones $K \subseteq \mathbf{R}^q$ and functions $\psi : \mathbf{R}^q \to \mathbf{R}$, determine whether ψ is a generalized logarithm for K. Briefly explain why.

(a) (4%) $K = \mathbf{R}^3_+$, $\psi(x) = \log x_1 + \log x_2 + \log x_3$.
(b) (4%) $K = \mathbf{R}^3_+$, $\psi(x) = \log x_1 + 2 \log x_2 + 3 \log x_3$.
(c) (4%) $K = \mathbf{R}^2_+$, $\psi(x) = \log x_1 - \log x_2$.
(a) Yes (b) Yes (c) No
We quote Lecture 15, p. 62: "We say that $\psi : \mathbf{R}^q \to \mathbf{R}$ is a generalized logarithm for K if (i) ψ is concave, (ii) closed, (iii) twice continuously differentiable, (iv) $\operatorname{dom} \psi = \operatorname{int} K$, (v) $\nabla^2 \psi(y) \prec 0$ for $y \in \operatorname{int} K$, and (vi) there is a constant θ > 0 such that for all $y \succ_K 0$ and all s > 0, we have $\psi(sy) = \psi(y) + \theta \log s$."
For (a), conditions (i)-(iv) are straightforward. For (ii), see textbook p. 457 for the definition of a closed function: a function is closed if all of its sublevel sets are closed. For (iv), note that the definition of K ensures $x_1, x_2, x_3 > 0$ (and nothing more) in each case, so the generalized inequality here reduces to an ordinary componentwise inequality. For (v), $\nabla^2 \psi(x) = \operatorname{diag}(-1/x_1^2, -1/x_2^2, -1/x_3^2) \prec 0$, and (vi) holds with θ = 3, since $\psi(sx) = \psi(x) + 3 \log s$.
The explanation for (b) is entirely similar to that for (a); here $\nabla^2 \psi(x) = \operatorname{diag}(-1/x_1^2, -2/x_2^2, -3/x_3^2) \prec 0$, and $\psi(sx) = \psi(x) + 6 \log s$, so θ = 6.
For (c), the (2,2)-entry of the Hessian matrix fails to be negative, as required by (v):

$$\frac{\partial^2 \psi}{\partial x_2^2} = \frac{1}{x_2^2} > 0$$

always, so $\nabla^2 \psi \prec 0$ fails everywhere on int K.
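A quick numerical spot-check of these claims, assuming numpy (the test point y and the scale s are arbitrary choices):

import numpy as np

y = np.array([0.7, 1.3, 2.1])
s = 4.0

psi_a = lambda v: np.sum(np.log(v))                       # (a): theta = 3
psi_b = lambda v: np.log(v) @ np.array([1.0, 2.0, 3.0])   # (b): theta = 6
print(psi_a(s * y) - psi_a(y), 3 * np.log(s))             # equal
print(psi_b(s * y) - psi_b(y), 6 * np.log(s))             # equal

# (c): the (2,2) second derivative of log x1 - log x2 is 1/x2^2 > 0,
# so the Hessian is never negative definite on int K.
x2 = 0.5
print(1 / x2**2 > 0)                                      # True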
