CHAPTER THREE
Ch3: Eigenanalysis and Performance Surface
Ch4: Search Methods
Adaptive Filters: Theory and Applications
Behrouz Farhang-Boroujeny
Example: find the eigenvalues and eigenvectors of $R = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$.
Solution:
$$\det(R - \lambda I) = 0 \;\Rightarrow\; \det\left(\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right) = 0$$
$$\det\begin{pmatrix} 2-\lambda & 0 \\ 0 & 3-\lambda \end{pmatrix} = 0 \;\Rightarrow\; (2-\lambda)(3-\lambda) = 0$$
$$\lambda_1 = 2 \quad \text{and} \quad \lambda_2 = 3$$
For $\lambda_1 = 2$:
$$(R - \lambda I)q = 0 \;\Rightarrow\; \begin{pmatrix} 2-2 & 0 \\ 0 & 3-2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 0\cdot x + 0\cdot y \\ 0\cdot x + 1\cdot y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Rightarrow\; y = 0$$
$$q_1 = \begin{pmatrix} x \\ 0 \end{pmatrix} \text{ where } x \neq 0; \text{ for simplicity } q_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
In the same manner, for $\lambda_2 = 3$:
$$q_2 = \begin{pmatrix} 0 \\ y \end{pmatrix} \text{ where } y \neq 0; \text{ for simplicity } q_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
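As a quick numerical cross-check (not from the slides; a minimal NumPy sketch):

```python
import numpy as np

R = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# numpy returns the eigenvalues and unit-norm eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(R)
print(eigvals)     # [2. 3.]
print(eigvecs)     # columns are q1 = [1, 0]^T and q2 = [0, 1]^T

# verify R q = lambda q for each eigenpair
for lam, q in zip(eigvals, eigvecs.T):
    assert np.allclose(R @ q, lam * q)
```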
Example: find the eigenvalues and eigenvectors of $A = \begin{pmatrix} 1 & 3 \\ 4 & 2 \end{pmatrix}$.
Solution:
$$\det(A - \lambda I) = 0 \;\Rightarrow\; (1-\lambda)(2-\lambda) - 12 = 0$$
$$\lambda^2 - 3\lambda + 2 - 12 = 0 \;\Rightarrow\; \lambda^2 - 3\lambda - 10 = 0 \;\Rightarrow\; (\lambda - 5)(\lambda + 2) = 0$$
So the eigenvalues are $\lambda_1 = 5$ and $\lambda_2 = -2$.
For $\lambda_1 = 5$:
Let $V = \begin{pmatrix} x \\ y \end{pmatrix}$ be the eigenvector of the matrix A corresponding to $\lambda_1 = 5$.
$$(A - \lambda I)V = 0$$
$$\left(\begin{pmatrix} 1 & 3 \\ 4 & 2 \end{pmatrix} - 5\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 1-5 & 3 \\ 4 & 2-5 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} -4x + 3y \\ 4x - 3y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
So
$$-4x + 3y = 0$$
$$4x - 3y = 0$$
y:   1     4    8    …
x:   3/4   3    6    …
So any multiple of the vector $V_1 = \begin{pmatrix} 3 \\ 4 \end{pmatrix}$ is an eigenvector for $\lambda_1 = 5$.
For $\lambda_2 = -2$:
Let $V = \begin{pmatrix} x \\ y \end{pmatrix}$ be the eigenvector of the matrix A corresponding to $\lambda_2 = -2$.
$$(A - \lambda I)V = 0$$
$$\left(\begin{pmatrix} 1 & 3 \\ 4 & 2 \end{pmatrix} + 2\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 1+2 & 3 \\ 4 & 2+2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 3x + 3y \\ 4x + 4y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
So
$$3x + 3y = 0$$
$$4x + 4y = 0$$
y:  -1    1    2    …
x:   1   -1   -2    …
So any multiple of the vector $V_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ is an eigenvector for $\lambda_2 = -2$.
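The same numerical check applies here; a minimal NumPy sketch (not from the slides) confirming both eigenpairs of A, and that any scalar multiple also works:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [4.0, 2.0]])

# expected eigenpairs from the derivation above
pairs = [(5.0, np.array([3.0, 4.0])),
         (-2.0, np.array([1.0, -1.0]))]

for lam, v in pairs:
    # A v = lambda v must hold, and it keeps holding for any multiple of v
    assert np.allclose(A @ v, lam * v)
    assert np.allclose(A @ (7.3 * v), lam * (7.3 * v))

print(np.linalg.eigvals(A))   # eigenvalues 5 and -2 (order may vary)
```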
PROPERTIES OF EIGENVALUES AND EIGENVECTORS
Some of the properties derived here are directly related to the fact that the correlation matrix R is Hermitian and nonnegative definite.
A matrix A, in general, is said to be Hermitian (or self-adjoint) if $A = A^H = (A^*)^T$.
The N-by-N Hermitian matrix A is said to be nonnegative definite, or positive semidefinite, if
$$v^H A v \ge 0$$
for every vector v. The fact that A is Hermitian implies that $v^H A v$ is real-valued.
In practice, the correlation matrix R is almost always positive definite.
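These definitions are easy to exercise numerically; a minimal sketch (the data and dimensions are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# build a Hermitian, nonnegative-definite matrix the way a correlation
# matrix arises: R = E[x x^H], estimated here from random complex data
X = rng.standard_normal((4, 1000)) + 1j * rng.standard_normal((4, 1000))
R = (X @ X.conj().T) / X.shape[1]

# Hermitian: R equals its conjugate transpose
assert np.allclose(R, R.conj().T)

# nonnegative definite: v^H R v is real and >= 0 for any v
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
quad = v.conj() @ R @ v
print(quad)                       # imaginary part ~ 0
assert quad.real >= 0

# equivalently, all eigenvalues of R are real and >= 0
print(np.linalg.eigvalsh(R))
```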
9. Karhunen–Loève expansion
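As a brief illustration of this property (a sketch with made-up data, not from the slides): the Karhunen–Loève expansion expresses a random vector in the basis of the eigenvectors of its correlation matrix, and the resulting coefficients are uncorrelated, with variances equal to the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# correlated data: x = L s with a fixed mixing matrix (made up for the demo)
L = np.array([[1.0, 0.6, 0.0],
              [0.0, 1.0, 0.4],
              [0.3, 0.0, 1.0]])
X = L @ rng.standard_normal((3, 100_000))

R = (X @ X.T) / X.shape[1]        # sample correlation matrix
lam, Q = np.linalg.eigh(R)        # R = Q diag(lam) Q^T

# KL coefficients: project each sample onto the eigenvector basis
K = Q.T @ X

# coefficients are (near-)uncorrelated, with variances = eigenvalues
C = (K @ K.T) / K.shape[1]
print(np.round(C, 3))             # ~ diag(lam)
print(np.round(lam, 3))
```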
SEARCH METHODS
1. METHOD OF STEEPEST DESCENT
THE V-AXES
Translating the coordinates as $v(k) = w(k) - w_o$ moves the origin of the performance surface to the optimum (Wiener) solution $w_o$.
THE V'-AXES
Recall that R has the following unitary similarity decomposition:
$$R = Q \Lambda Q^H$$
where Q is the unitary matrix whose columns are the eigenvectors of R and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$. The rotated coordinates $v'(k) = Q^H v(k)$ align the axes with the eigenvectors of R, which decouples the modes of convergence.
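A quick numerical confirmation of this decomposition (a sketch; the 2-by-2 matrix values are made up):

```python
import numpy as np

# a small symmetric correlation matrix (values made up for the demo)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])

lam, Q = np.linalg.eigh(R)    # eigh: eigen-decomposition for Hermitian matrices
Lam = np.diag(lam)

# unitary similarity decomposition: R = Q Lam Q^H
assert np.allclose(R, Q @ Lam @ Q.conj().T)
# Q is unitary: Q^H Q = I
assert np.allclose(Q.conj().T @ Q, np.eye(2))
```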
Starting with an initial value w(0) = [w₀(0) w₁(0)]ᵀ and letting the recursive equation (5.29) run, we get two sequences of the tap-weight variables, w₀(k) and w₁(k).
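A minimal simulation of this iteration, assuming equation (5.29) is the standard steepest-descent recursion w(k+1) = w(k) + 2μ(p − Rw(k)); the R, p, and μ values here are made up for the demo:

```python
import numpy as np

# example two-tap problem (values made up for the demo)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])    # input correlation matrix
p = np.array([0.7, 0.3])       # cross-correlation vector
mu = 0.1                       # step size; must satisfy 0 < mu < 1/lambda_max

w = np.zeros(2)                # w(0) = [w0(0) w1(0)]^T
trajectory = [w.copy()]

for k in range(100):
    # steepest descent: move against the gradient of the MSE surface
    w = w + 2 * mu * (p - R @ w)
    trajectory.append(w.copy())

w_opt = np.linalg.solve(R, p)  # Wiener solution, R w_o = p
print(w, w_opt)                # w(k) converges toward w_o
```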
LEARNING CURVE
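The learning curve plots the mean-squared error ξ(k) against the iteration number k. For steepest descent, each eigenvalue λᵢ of R contributes a geometric mode, ξ(k) = ξ_min + Σᵢ λᵢ(1 − 2μλᵢ)^{2k} v′ᵢ(0)². A sketch evaluating this, reusing the made-up example above and assuming E[d²(n)] = 1 so that ξ_min = 1 − pᵀR⁻¹p:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])
mu = 0.1
xi_min = 1.0 - p @ np.linalg.solve(R, p)   # assumes E[d^2] = 1 for the demo

lam, Q = np.linalg.eigh(R)
w_opt = np.linalg.solve(R, p)
v0 = Q.T @ (np.zeros(2) - w_opt)           # v'(0) for w(0) = 0

# xi(k) = xi_min + sum_i lam_i (1 - 2 mu lam_i)^(2k) v'_i(0)^2
k = np.arange(100)
xi = xi_min + sum(l * (1 - 2 * mu * l) ** (2 * k) * v ** 2
                  for l, v in zip(lam, v0))
print(xi[:5])      # decays monotonically toward xi_min
```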
NEWTON’S METHOD
Our discussion in the preceding sections shows that the steepest-descent algorithm may suffer from slow modes of convergence, which arise from the spread of the eigenvalues of the correlation matrix R.
This means that if we can somehow remove the eigenvalue spread, we can achieve much faster convergence. This is exactly what Newton's method does.
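A minimal sketch of the idea, assuming the standard recursion w(k+1) = w(k) − μR⁻¹∇ξ(k) with ∇ξ(k) = 2(Rw(k) − p), and reusing the made-up example values: premultiplying the gradient by R⁻¹ collapses all modes to the same geometric ratio 1 − 2μ, so convergence no longer depends on the eigenvalue spread.

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.3])
mu = 0.1
R_inv = np.linalg.inv(R)
w_opt = np.linalg.solve(R, p)

w = np.zeros(2)
for k in range(50):
    grad = 2 * (R @ w - p)         # gradient of the MSE surface
    w = w - mu * R_inv @ grad      # Newton step: whitens the modes

    # every component decays with the same ratio (1 - 2*mu),
    # independent of the eigenvalues of R:
    # w(k) - w_opt = (1 - 2*mu)^k (w(0) - w_opt)
    assert np.allclose(w - w_opt, (1 - 2 * mu) ** (k + 1) * (0 - w_opt))

print(w, w_opt)
```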