1 Intro
Course Information
▶ Instructors:
- Prof. Geert Leus
- Prof. Borbala Hunyadi (Bori)
- MSc. Alberto Natali
▶ Class schedule (watch out for any changes on TU Delft roosters, the timetable site):
- Wednesdays, 10:45-12:30
- Fridays, 10:45-12:30
Course Information
▶ Assessment
- Open-book written exam.
- Compulsory lab assignment worth 1 EC (20%); report and short presentation. Enroll via Brightspace.
▶ Course information:
- http://ens.ewi.tudelft.nl/Education/courses/ee4530/index.php
Mathematical optimization
      minimize   f0(x)
      subject to fi(x) ≤ bi, i = 1, . . . , m

• f0 : R^n → R: objective function
• fi : R^n → R, i = 1, . . . , m: constraint functions
Array processing: phased-array antenna beamforming

• omnidirectional antenna elements at positions (x1, y1), . . . , (xn, yn)
• a unit plane wave incident from angle θ induces in the ith element the signal e^{j(xi cos θ + yi sin θ − ωt)} (j = √−1, frequency ω, wavelength 2π)
• demodulate to get the output e^{j(xi cos θ + yi sin θ)} ∈ C
• linearly combine with complex weights wi:

      y(θ) = Σ_{i=1}^n wi e^{j(xi cos θ + yi sin θ)}

(θtar: target direction; 2α: beamwidth)

[Figure: beam pattern |y(θ)| with target direction θtar = 30◦, beam edges at 10◦ and 50◦, and the sidelobe level indicated.]

sidelobe level minimization via least-squares (discretize angles):

      minimize   Σi |y(θi)|^2
      subject to y(θtar) = 1

(the sum is over angles outside the beam)

minimize sidelobe level (discretize angles):

      minimize   maxi |y(θi)|
      subject to y(θtar) = 1
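
The minimax formulation can be handed to a solver almost verbatim. Below is a minimal CVX sketch; the array geometry, the target angle, and the sidelobe angle grid are illustrative assumptions, not the lecture's data.

% Sketch: minimize the sidelobe level of a phased array with CVX.
% Array geometry and angle grids below are illustrative placeholders.
n   = 10;
pos = [pi*(0:n-1)', zeros(n,1)];          % (xi, yi): linear array, half-wavelength spacing
th_tar = 30*pi/180;                       % target direction theta_tar
th_sl  = (pi/180)*[0:9, 51:180]';         % angles outside the 10-50 degree beam
A_sl   = exp(1j*(cos(th_sl)*pos(:,1)' + sin(th_sl)*pos(:,2)'));
a_tar  = exp(1j*(cos(th_tar)*pos(:,1)' + sin(th_tar)*pos(:,2)'));
cvx_begin
    variable w(n) complex
    minimize( max(abs(A_sl*w)) )          % max_i |y(theta_i)| over sidelobe angles
    subject to
        a_tar*w == 1;                     % y(theta_tar) = 1
cvx_end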
Geometric problems: linear discrimination

separate two sets of points {x1, . . . , xN} and {y1, . . . , yM} by a hyperplane:

      aT xi + b > 0, i = 1, . . . , N,      aT yi + b < 0, i = 1, . . . , M

these inequalities are homogeneous in a, b, hence equivalent to

      aT xi + b ≥ 1, i = 1, . . . , N,      aT yi + b ≤ −1, i = 1, . . . , M

Robust linear discrimination

the (Euclidean) distance between the hyperplanes

      H1 = {z | aT z + b = 1},      H2 = {z | aT z + b = −1}

is 2/∥a∥2, so to separate the two sets with maximum margin:

      minimize   (1/2)∥a∥2
      subject to aT xi + b ≥ 1, i = 1, . . . , N        (1)
                 aT yi + b ≤ −1, i = 1, . . . , M

after squaring the objective, (1) is a QP in a, b
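
The QP (1) translates directly into CVX. A minimal sketch, assuming the points are stored as columns of matrices X (the xi) and Y (the yi):

% Sketch: maximum-margin linear discrimination, the QP (1) above.
% X is n-by-N with the xi as columns, Y is n-by-M with the yi as columns.
n = size(X, 1);
cvx_begin
    variables a(n) b
    minimize( (1/2)*sum_square(a) )       % (1/2)||a||^2 (squared objective)
    subject to
        X'*a + b >= 1;                    % aT xi + b >= 1,  i = 1,...,N
        Y'*a + b <= -1;                   % aT yi + b <= -1, i = 1,...,M
cvx_end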
Examples
portfolio optimization
• variables: amounts invested in different assets
• constraints: budget, max./min. investment per asset, minimum return
• objective: overall risk or return variance
data fitting
• variables: model parameters
• constraints: prior information, parameter limits
• objective: measure of misfit or prediction error
Solving optimization problems
• least-squares problems
• linear programming problems
• convex optimization problems
Least-squares

      minimize ∥Ax − b∥2^2

Linear programming

      minimize   cT x
      subject to aiT x ≤ bi, i = 1, . . . , m

Convex optimization problem

      minimize   f0(x)
      subject to fi(x) ≤ bi, i = 1, . . . , m

• objective and constraint functions are convex:

      fi(αx + βy) ≤ αfi(x) + βfi(y)   if α + β = 1, α ≥ 0, β ≥ 0

• includes least-squares problems and linear programs as special cases
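
These formulations map almost one-to-one onto the modeling tools used in the lab. A minimal CVX sketch, with hypothetical problem data A, b, c, F, g:

% Sketch: the least-squares and LP formulations above in CVX,
% with hypothetical random problem data (A, b, c, F, g).
m = 20; n = 5;
A = randn(m, n); b = randn(m, 1);
cvx_begin
    variable x(n)
    minimize( norm(A*x - b) )             % least-squares (no constraints)
cvx_end
% least-squares also has the closed form x = A\b; LPs and general convex
% problems have no analytical solution and rely on iterative solvers:
c = randn(n, 1); F = randn(m, n); g = randn(m, 1);
cvx_begin
    variable x(n)
    minimize( c'*x )                      % linear program
    subject to
        F*x <= g;                         % rows of F play the role of the aiT
cvx_end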
The case of a convex cost function

Local minima: x⋆ is an unconstrained local minimum of f0 : R^n → R if it is no worse than its neighbors:

      f0(x⋆) ≤ f0(x), ∀x ∈ R^n with ∥x − x⋆∥ < ϵ,

for some ϵ > 0.

Global minima: x⋆ is an unconstrained global minimum of f0 : R^n → R if it is no worse than all other vectors:

      f0(x⋆) ≤ f0(x), ∀x ∈ R^n.

When the function is convex, every local minimum is also a global minimum.
Solving convex optimization problems:
• no analytical solution
• reliable and efficient algorithms
• computation time (roughly) proportional to max{n^3, n^2 m, F}, where F is the cost of evaluating the fi's and their first and second derivatives
• almost a technology
Example: illumination problem

[Figure: illumination geometry with distances rkj, angles θkj, and achieved illumination Ik; plot of the function h(u) over 0 ≤ u ≤ 4.]
• answer: with (1), still easy to solve; with (2), extremely difficult
• moral: (untrained) intuition doesn’t always work; without the proper
background very easy problems can appear quite similar to very difficult
problems
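
The problem itself is only sketched on the slide. In the textbook version (Boyd and Vandenberghe), one chooses lamp powers p so that the patch illuminations Ik = Σj akj pj stay close to a desired level Ides, by minimizing maxk h(Ik/Ides) with h(u) = max{u, 1/u}. A minimal CVX sketch under that assumption; A, Ides, and the power limit are placeholders:

% Sketch: illumination problem, assuming illumination is linear in the lamp
% powers, Ik = sum_j akj*pj, with A = [akj] and desired level Ides given.
cvx_begin
    variable p(m)                          % lamp powers
    I = A*p;                               % achieved illumination per patch
    minimize( max( max(I/Ides, Ides*inv_pos(I)) ) )   % h(u) = max{u, 1/u}
    subject to
        p >= 0;                            % assumed power limits
        p <= 1;
cvx_end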
Course goals and topics
Goals
1. recognize and formulate problems (such as the illumination problem, classification, etc.) as convex optimization problems;
2. use optimization tools (CVX, YALMIP, etc.) as part of the lab assignment;
3. characterize the optimal solution (e.g., the optimal power distribution), give limits of performance, etc.
Topics
1. Background and optimization basics;
2. Convex sets and functions;
3. Canonical convex optimization problems (LP, QP, SDP);
4. Second-order methods (unconstrained and constrained optimization);
5. First-order methods (gradient, subgradient);
Project 1: Change Detection in a Time Series Model
Context

In statistical signal processing, change point detection tries to identify time instances when the probability distribution of a stochastic process or time series changes. The problem in this assignment concerns both detecting whether or not a change has occurred, or whether several changes might have occurred, and identifying the times of any such changes.

This exercise consists of two parts: (a) formulate the step detection problem as a suitable convex optimization problem; and (b) implement the change detector. In a group of 2 students, make a short report (4-5 pages; pdf file) containing the required Matlab scripts, plots, and answers. Also, prepare a short presentation to explain your results and defend your choices.

[Figure: the time signal y(t) and the underlying AR coefficients over t = 1, . . . , 300.]

Dataset explanation

Consider a scalar autoregressive (AR) time-series model with coefficients a(t), b(t), and c(t). The assumption is that these AR coefficients are piecewise constant and change infrequently. Given y(t), t = 1, . . . , T, the problem is to estimate a(t), b(t), and c(t).
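
The piecewise-constant assumption suggests penalizing changes in the coefficients with a sum-of-norms (total-variation style) term. A minimal CVX sketch; the AR model form assumed below, y(t) = a(t)y(t−1) + b(t)y(t−2) + c(t) + noise, and the value of lambda are illustrative placeholders only (substitute the model given in the assignment):

% Sketch: change detection via convex optimization. ILLUSTRATIVE MODEL ONLY:
% we assume y(t) = a(t)*y(t-1) + b(t)*y(t-2) + c(t) + noise.
T = length(y);                        % y: observed time series (column vector)
lambda = 10;                          % regularization weight (tune this)
cvx_begin
    variables a(T) b(T) c(T)
    fit = sum_square( y(3:T) - a(3:T).*y(2:T-1) - b(3:T).*y(1:T-2) - c(3:T) );
    % sum-of-norms penalty on coefficient differences: encourages the
    % coefficient vector (a,b,c) to change at only a few time instances
    D = [a(2:T)-a(1:T-1), b(2:T)-b(1:T-1), c(2:T)-c(1:T-1)];
    minimize( fit + lambda*sum(norms(D, 2, 2)) )
cvx_end
% after solving, a, b, c hold the estimates; change points show up as jumps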
Project 2: Linear Support Vector Machines

[Figure: scatter plot of Class A and Class B points with the separating decision boundary.]
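
The max-margin QP from the lecture assumes separable classes; when the two classes overlap, a standard soft-margin variant adds slack variables. A minimal CVX sketch, with X, Y (points as columns) and the weight gamma as assumed inputs:

% Sketch: soft-margin linear SVM via slack variables (hinge loss), a standard
% convex formulation when the classes overlap. gamma trades margin width
% against misclassification; X, Y hold the two classes as columns.
n = size(X, 1);  N = size(X, 2);  M = size(Y, 2);
gamma = 1;
cvx_begin
    variables a(n) b u(N) v(M)
    minimize( (1/2)*sum_square(a) + gamma*(sum(u) + sum(v)) )
    subject to
        X'*a + b >= 1 - u;
        Y'*a + b <= -(1 - v);
        u >= 0;  v >= 0;
cvx_end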
Project 3: Multidimensional Scaling for Localization

[Figure: map showing the real positions (X markers) and the estimated positions of five locations.]

The pairwise measurements tij between the five locations A, R, M, V, G are

            A    R    M    V    G
      A  (   0   71  146  177  127 )
      R  (  71    0  136  104  159 )
      M  ( 146  136    0  208  258 )
      V  ( 177  104  208    0  279 )
      G  ( 127  159  258  279    0 )

Assumption: the squared measurements are proportional to squared Euclidean distances,

      tij^2 ∝ ∥xi − xj∥2^2,

so the matrix T of squared measurements satisfies, up to a scale factor,

      T = 1 diag(XT X)T − 2 XT X + diag(XT X) 1T,

where the columns of X are the unknown positions xi.
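
Given this relation, one classical route is multidimensional scaling via an eigendecomposition of the double-centered matrix (no solver needed). A minimal Matlab sketch, assuming T holds the measured tij and that tij^2 equals squared distance up to an unknown scale:

% Sketch: classical multidimensional scaling (MDS). Assumes T(i,j) = tij
% with tij^2 proportional to ||xi - xj||^2; positions are then recovered
% up to rotation, reflection, translation, and overall scale.
T2 = T.^2;                            % squared measurements
n  = size(T2, 1);
J  = eye(n) - ones(n)/n;              % centering matrix
B  = -0.5*J*T2*J;                     % double centering: B ~ Xc'*Xc
[V, L] = eig((B + B')/2);             % symmetrize for numerical safety
[l, idx] = sort(diag(L), 'descend');
Xhat = V(:, idx(1:2)) * diag(sqrt(max(l(1:2), 0)));   % rows = 2-D positions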
Project 4: MIMO Detection

This is basically an inverse problem with a finite-alphabet constraint on the transmitted symbols. This exercise consists of two parts: (a) formulate the MIMO detection problem as a suitable convex optimization problem; and (b) implement the receiver. In a group of 2 students, make a short report (4-5 pages; pdf file) containing the required Matlab scripts, plots, and answers. Also, prepare a short presentation to explain your results and defend your choices.

Dataset explanation

Consider a generic N-input M-output model

      yc = Hc sc + vc.

Here, yc ∈ C^M is the received vector, Hc ∈ C^{M×N} is the channel, sc ∈ C^N is the transmitted symbol vector, and vc is a white Gaussian noise vector. In this application example, the transmitted symbols follow a quaternary phase-shift-keying (QPSK) constellation.

[Figure: the QPSK constellation in the (Re(s), Im(s)) plane.]
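
The finite-alphabet constraint is what makes detection hard; a common convex approach relaxes it. A minimal CVX sketch of a box relaxation, assuming QPSK symbols with real and imaginary parts in {−1, +1} up to scaling (the assignment may prescribe a different formulation):

% Sketch: box relaxation for QPSK MIMO detection, one common convex
% relaxation of the finite-alphabet constraint (illustrative only).
% Real-valued reformulation of yc = Hc*sc + vc:
y = [real(yc); imag(yc)];
H = [real(Hc), -imag(Hc); imag(Hc), real(Hc)];
n = size(H, 2);
cvx_begin
    variable s(n)
    minimize( norm(y - H*s) )       % least-squares fit
    subject to
        s >= -1;                    % box relaxation of
        s <= 1;                     %   s in {-1,+1}^n
cvx_end
s_hat  = sign(s);                               % project onto the alphabet
sc_hat = s_hat(1:n/2) + 1j*s_hat(n/2+1:end);    % back to complex symbols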