
Matrix Decomposition

This chapter deals with the following topics.

(a) Eigenvalues and eigenvectors.

(b) Diagonalization of a square matrix.

(c) Singular value decomposition.

(d) Cholesky decomposition.

1 Eigenvalues and Eigenvectors


Let A be a square matrix of order n. A scalar λ is called an eigenvalue of A if there is a non-zero column vector v of length n such that Av = λv. The vector v is called an eigenvector of A corresponding to the eigenvalue λ.
Equivalently, a scalar λ is an eigenvalue of A if det(A − λI_n) = 0, and the eigenvectors corresponding to λ are the non-zero solutions of the homogeneous system (A − λI_n)X = 0.

The characteristic polynomial of A is denoted by ∆(A) and is defined as ∆(A) = det(λI_n − A). Hence, the eigenvalues of A are the roots of the characteristic polynomial.

If λ is an eigenvalue of A, then the solution space of the system (A − λI_n)X = 0 is called the eigenspace of A corresponding to the eigenvalue λ, and is denoted by E_λ(A).

The following theorem gives the characteristic polynomial of A in terms of the principal minors of A.

Theorem 1.1. Let A be a square matrix of order n. Let S_k denote the sum of all principal minors of A of order k. Then ∆(A) = λ^n − S_1 λ^{n−1} + S_2 λ^{n−2} − · · · + (−1)^n S_n.

Note: S_1 = trace(A) and S_n = det(A).

Example 1.2. (a) If A is a square matrix of order 2, then ∆(A) = λ^2 − trace(A) λ + det(A).

(b) If A is a square matrix of order 3, then ∆(A) = λ^3 − trace(A) λ^2 + S_2 λ − det(A).
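For a concrete matrix, these coefficients are easy to check numerically. A minimal sketch, assuming NumPy is available (`np.poly` returns the coefficients of the characteristic polynomial of a square matrix, highest power first):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Coefficients of λ^2 − trace(A) λ + det(A), highest power first.
print(np.poly(A))                        # [ 1. -2. -3.]
print(np.trace(A), np.linalg.det(A))     # 2.0  -3.0 (up to rounding)
```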

Problem 1.3. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.

Solution: The characteristic polynomial of A is λ^2 − 2λ − 3. Therefore the eigenvalues of A are λ = −1, 3. Let X = [x, y]^T.

Eigenspace corresponding to λ = −1:
From the equation (A − λI_2)X = 0, we get x + y = 0. Thus E_λ(A) = {(x, y) : x + y = 0}. So, {(1, −1)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 3:
From the equation (A − λI_2)X = 0, we get x − y = 0. Thus E_λ(A) = {(x, y) : x − y = 0}. So, {(1, 1)} is a basis of E_λ(A).
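The same answer can be confirmed numerically. A quick sketch, assuming NumPy (`np.linalg.eig` normalizes eigenvectors to unit length, so they agree with the basis vectors above only up to scaling):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)    # [ 3. -1.]  (order may vary)
print(eigvecs)    # columns are unit eigenvectors, proportional to (1, 1) and (1, -1)
```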

Problem 1.4. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = \begin{pmatrix} 0 & −1 & 1 \\ −1 & 0 & 1 \\ 1 & −1 & 0 \end{pmatrix}.

Solution: The characteristic polynomial of A is λ^3 − λ. Therefore the eigenvalues of A are λ = −1, 0, 1. Let X = [x, y, z]^T.

Eigenspace corresponding to λ = −1:
From the matrix equation (A − λI_3)X = 0, we get x − y + z = 0; −x + y + z = 0. Employing the Gauss elimination method, we have x − y + z = 0; z = 0. Thus E_λ(A) = {(x, y, 0) : x − y = 0}. So, {(1, 1, 0)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 0:
The equation (A − λI_3)X = 0 implies x = y and y = z. Thus E_λ(A) = {(x, x, x) : x ∈ R}. So, {(1, 1, 1)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 1:
The equation (A − λI_3)X = 0 implies x = z and y = 0. Thus E_λ(A) = {(x, 0, x) : x ∈ R}. So, {(1, 0, 1)} is a basis of E_λ(A).

Problem 1.5. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = \begin{pmatrix} 5 & −4 & 4 \\ 12 & −11 & 12 \\ 4 & −4 & 5 \end{pmatrix}.
Solution: The characteristic polynomial of A is λ^3 + λ^2 − 5λ + 3. Therefore the eigenvalues of A are λ = −3, 1, 1. Let X = [x, y, z]^T.

Eigenspace corresponding to λ = −3:
From the matrix equation (A − λI_3)X = 0, we get x − y + 2z = 0; 2x − y + z = 0; 3x − 2y + 3z = 0. Employing the Gauss elimination method, we have x − y + 2z = 0; y − 3z = 0. Thus E_λ(A) = {(y/3, y, y/3) : y ∈ R}. So, {(1/3, 1, 1/3)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 1:
The equation (A − λI_3)X = 0 implies x − y + z = 0. Thus E_λ(A) = {(x, y, z) : x − y + z = 0}. Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (1, 1, 0) and (−1, 0, 1) as two independent solutions. So, {(1, 1, 0), (−1, 0, 1)} is a basis of E_λ(A).

Properties of eigenvalues:
(1) Any square matrix and its transpose have the same eigenvalues.

(2) The eigenvalues of a triangular matrix are the diagonal elements of the matrix.

(3) The sum of the eigenvalues of a matrix is the sum of the elements of the principal diagonal (the trace of the matrix).

(4) The product of the eigenvalues of a matrix is equal to its determinant.


(5) If λ is an eigenvalue of a non-singular matrix A, then 1/λ is an eigenvalue of A^{−1}.
Theorem 1.6. Cayley-Hamilton Theorem: Every square matrix satisfies its characteristic polynomial.
Problems:

(1) Let A = \begin{pmatrix} 2 & −1 & 2 \\ −1 & 2 & −1 \\ 1 & −1 & 2 \end{pmatrix}. Using the Cayley-Hamilton theorem, express A^4 as a linear combination of the matrices A^3, A^2, A and I.

 
(2) Find the inverse of the matrix A = \begin{pmatrix} 1 & 1 & 2 \\ 9 & 2 & 0 \\ 5 & 0 & 3 \end{pmatrix} using the Cayley-Hamilton theorem.
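Both problems reduce to the fact that ∆(A) evaluated at A is the zero matrix, which is easy to verify numerically. A sketch, assuming NumPy, using the matrix of problem (2) (`np.poly` gives the characteristic-polynomial coefficients c_0 = 1, c_1, . . . , c_n):

```python
import numpy as np

# Matrix from problem (2).
A = np.array([[1.0, 1.0, 2.0],
              [9.0, 2.0, 0.0],
              [5.0, 0.0, 3.0]])
n = A.shape[0]

# Characteristic polynomial coefficients, highest power first.
c = np.poly(A)

# Cayley-Hamilton: A^3 + c[1] A^2 + c[2] A + c[3] I = 0.
ch = sum(c[k] * np.linalg.matrix_power(A, n - k) for k in range(n + 1))
print(np.allclose(ch, 0))                    # True

# Multiplying by A^{-1}: A^{-1} = -(A^2 + c[1] A + c[2] I) / c[3].
A_inv = -(A @ A + c[1] * A + c[2] * np.eye(n)) / c[3]
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```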
Proposition 1.7. If A is a real symmetric matrix of order n, then all its eigenvalues are real.

Proof. Suppose Av = λv for some non-zero (possibly complex) column vector v. Then v^∗Av = λ v^∗v, where v^∗ denotes the conjugate transpose of v. Since A is real symmetric, (v^∗Av)^∗ = v^∗Av, so the scalar v^∗Av is real; likewise v^∗v > 0 because v is a non-zero column vector. Therefore λ = (v^∗Av)/(v^∗v) is real.

Proposition 1.8. Let A be a square matrix of order n. If λ_1, λ_2, . . . , λ_k are distinct eigenvalues of A with corresponding eigenvectors v_1, v_2, . . . , v_k, respectively, then the vectors v_1, v_2, . . . , v_k are linearly independent.

Proposition 1.9. Let A be a symmetric matrix of order n. If λ_1, λ_2, . . . , λ_k are distinct eigenvalues of A with corresponding eigenvectors v_1, v_2, . . . , v_k, respectively, then the vectors v_1, v_2, . . . , v_k are orthogonal.

2 Diagonalization of a Matrix
• Square matrices A and B are similar if there exists an invertible matrix P such that A = P BP^{−1}.

Proposition 2.1. If A and B are similar matrices, then they share the same characteristic
polynomial.

• A square matrix A is diagonalizable if it is similar to a diagonal matrix.

Proposition 2.2. A square matrix A of order n is diagonalizable if and only if it has n linearly independent eigenvectors.

Note: If a square matrix A of order n has n distinct eigenvalues, then it is diagonalizable.

Diagonalization of a square matrix A of order n: Working procedure

Step 1 Find all the eigenvalues of the matrix A.

Step 2 Find n linearly independent eigenvectors of A.

Step 3 If v_1, v_2, . . . , v_n are linearly independent eigenvectors corresponding to the eigenvalues λ_1, λ_2, . . . , λ_n, respectively, then set P = [v_1 v_2 . . . v_n] and D = diag(λ_1, λ_2, . . . , λ_n).

Step 4 A = P DP^{−1}.
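A minimal numerical sketch of this procedure, assuming NumPy (`np.linalg.eig` returns the eigenvalues together with a matrix whose columns are corresponding eigenvectors):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# Steps 1-2: eigenvalues and a matrix of column eigenvectors.
eigvals, P = np.linalg.eig(A)

# Step 3: D is the diagonal matrix of eigenvalues.
D = np.diag(eigvals)

# Step 4: A = P D P^{-1} (valid since the eigenvectors are independent).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```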

Problems:

 
1. Diagonalize the matrix A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}. Hence find √A.
Solution: The characteristic polynomial of A is λ^2 − 3λ. Thus the eigenvalues of A are 0 and 3. Let X = [x, y]^T.

Eigenspace corresponding to λ = 0:
From the matrix equation (A − λI_2)X = 0, we get x + y = 0. Thus E_λ(A) = {(x, −x) : x ∈ R}. So, {(1, −1)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 3:
From the matrix equation (A − λI_2)X = 0, we get 2x − y = 0. Thus E_λ(A) = {(x, 2x) : x ∈ R}. So, {(1, 2)} is a basis of E_λ(A).
" #
2/3 −1/3
   
1 1 0 0
Let P = and D = . Then P −1 = .
−1 2 0 3 1/3 1/3
Hence, A = P DP −1 .

To find √A:
√A = P √D P^{−1} = \begin{pmatrix} 1 & 1 \\ −1 & 2 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & √3 \end{pmatrix} \begin{pmatrix} 2/3 & −1/3 \\ 1/3 & 1/3 \end{pmatrix} = \begin{pmatrix} √3/3 & √3/3 \\ 2√3/3 & 2√3/3 \end{pmatrix}.
 
2. Diagonalize the matrix A = \begin{pmatrix} 0 & −1 & 1 \\ −1 & 0 & 1 \\ 1 & −1 & 0 \end{pmatrix}. Hence find A^{100}.
Solution: The characteristic polynomial of A is λ^3 − λ. Therefore the eigenvalues of A are λ = −1, 0, 1. Let X = [x, y, z]^T.

Eigenspace corresponding to λ = −1:
From the matrix equation (A − λI_3)X = 0, we get x − y + z = 0; −x + y + z = 0. Employing the Gauss elimination method, we have x − y + z = 0; z = 0. Thus E_λ(A) = {(x, y, 0) : x − y = 0}. So, {(1, 1, 0)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 0:
The equation (A − λI_3)X = 0 implies x = y and y = z. Thus E_λ(A) = {(x, x, x) : x ∈ R}. So, {(1, 1, 1)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 1:
The equation (A − λI_3)X = 0 implies x = z and y = 0. Thus E_λ(A) = {(x, 0, x) : x ∈ R}. So, {(1, 0, 1)} is a basis of E_λ(A).
 
    1 0 −1
1 1 1 −1 0 0
Let P =  1 1 0  and D =  0 0 0 . Then P −1 =  .
 
 −1 1 1 
0 1 1 0 0 1
1 −1 0
Hence A = P DP −1 .

To find A^{100}:
A^{100} = P D^{100} P^{−1} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & −1 \\ −1 & 1 & 1 \\ 1 & −1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & −1 & −1 \\ 1 & 0 & −1 \\ 1 & −1 & 0 \end{pmatrix}.
 
3. Diagonalize the matrix A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 1 & 1 & 1 \end{pmatrix}.
Solution: The characteristic polynomial of A is λ^3 − 4λ^2. Therefore the eigenvalues of A are λ = 0, 0, 4. Let X = [x, y, z]^T.

Eigenspace corresponding to λ = 0:
The equation (A − λI_3)X = 0 implies x + y + z = 0. Thus E_λ(A) = {(x, y, z) : x + y + z = 0}. Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (−1, 1, 0) and (−1, 0, 1) as two independent solutions. So, {(−1, 1, 0), (−1, 0, 1)} is a basis of E_λ(A).

Eigenspace corresponding to λ = 4:
From the matrix equation (A − λI_3)X = 0, we get −3x + y + z = 0; x − y + z = 0; x + y − 3z = 0. Employing the Gauss elimination method, we have x + y − 3z = 0; y − 2z = 0. Thus E_λ(A) = {(z, 2z, z) : z ∈ R}. So, {(1, 2, 1)} is a basis of E_λ(A).
 
    −1/2 1/2 −1/2
−1 −1 1 0 0 0
Let P =  1 0 2  and D =  0 0 0 . Then P −1 =  .
 
 −1/4 −1/4 3/4 
0 1 1 0 0 4 1/4 1/4 1/4
Hence A = P DP −1 .

2.1 Spectral theorem for symmetric matrices


A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix P such that A = P DP^T, where D is a diagonal matrix.
The following is the spectral theorem for symmetric matrices.

Theorem 2.3. Let A be a symmetric matrix of order n. Then

a. A has n real eigenvalues, counting multiplicities.

b. If λ is an eigenvalue of A with multiplicity k, then the dimension of the eigenspace E_λ(A) is k.

c. The eigenvectors corresponding to different eigenvalues are orthogonal.

d. A is orthogonally diagonalizable.
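In NumPy, the orthogonal diagonalization of a symmetric matrix is available via `np.linalg.eigh`, which returns orthonormal eigenvectors. A sketch, using the matrix of problem (1) below:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# eigh is for symmetric matrices: ascending eigenvalues, orthonormal columns.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(P @ P.T, np.eye(3)))   # P is orthogonal: True
print(np.allclose(A, P @ D @ P.T))       # A = P D P^T: True
```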

Problems:

(1) Find an orthogonal matrix that diagonalizes the matrix A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.
Solution: The characteristic polynomial of A is λ^3 − 3λ − 2. Therefore the eigenvalues of A are λ = −1, −1, 2. Let X = [x, y, z]^T.

Eigenspace corresponding to λ = −1:
The equation (A − λI_3)X = 0 implies x + y + z = 0. Thus E_λ(A) = {(x, y, z) : x + y + z = 0}. Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (−1, 1, 0) and (−1, 0, 1) as two independent solutions. Applying the Gram-Schmidt process to these, we obtain {(1/√2)(−1, 1, 0), (1/√6)(1, 1, −2)} as an orthonormal basis of E_λ(A).

Eigenspace corresponding to λ = 2:
From the matrix equation (A − λI_3)X = 0, we get x + y − 2z = 0; x − 2y + z = 0; 2x − y − z = 0. Employing the Gauss elimination method, we have x + y − 2z = 0; y − z = 0. Thus E_λ(A) = {(z, z, z) : z ∈ R}. So, {(1/√3)(1, 1, 1)} is an orthonormal basis of E_λ(A).
 √ √   
−√ 3 1 √2 −1 0 0
1
Let P = √  3 1 √2  and D =  0 −1 0 . Then A = P DP T .
6 0 −2 2 0 0 2

 
(2) Find an orthogonal matrix that diagonalizes the matrix A = \begin{pmatrix} 2 & −1 \\ −1 & 2 \end{pmatrix}.
Solution: The characteristic polynomial of A is λ^2 − 4λ + 3. Therefore the eigenvalues of A are λ = 1, 3. Let X = [x, y]^T.

Eigenspace corresponding to λ = 1:
From the equation (A − λI_2)X = 0, we get x − y = 0. Thus E_λ(A) = {(y, y) : y ∈ R}. So, {(1/√2)(1, 1)} is an orthonormal basis of E_λ(A).
Eigenspace corresponding to λ = 3:
From the equation (A − λI_2)X = 0, we get x + y = 0. Thus E_λ(A) = {(x, y) : x + y = 0}. So, {(1/√2)(1, −1)} is an orthonormal basis of E_λ(A).

Let P = (1/√2) \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix} and D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}. Then A = P DP^T.

3 Singular Value Decomposition


3.1 Positive semi-definite matrix
A real symmetric matrix A is called positive semi-definite if v^T Av ≥ 0 for all non-zero real column vectors v.

Theorem 3.1. For a real symmetric matrix A, the following are equivalent:

(i) v^T Av ≥ 0 for all non-zero real column vectors v.

(ii) All the eigenvalues of A are non-negative.

(iii) There exists a matrix B such that A = BB^T.

Theorem 3.2. A real symmetric matrix A of order 2 is positive semi-definite if and only if its diagonal elements are non-negative and |A| ≥ 0.

Exercise: Check whether the following matrices are positive semi-definite or not.
(i) \begin{pmatrix} 1 & 3 \\ 3 & 4 \end{pmatrix} (ii) \begin{pmatrix} 1 & −2 \\ −2 & −3 \end{pmatrix} (iii) \begin{pmatrix} 1 & −2 \\ −2 & 5 \end{pmatrix}.
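Theorem 3.1 suggests a simple numerical check: test symmetry, then the sign of the smallest eigenvalue. A sketch of such a helper, assuming NumPy (the function name and tolerance are illustrative choices, not a library routine):

```python
import numpy as np

def is_positive_semidefinite(A, tol=1e-12):
    """Symmetric with all eigenvalues >= 0 (Theorem 3.1, condition (ii))."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    # eigvalsh assumes a symmetric matrix and returns real eigenvalues.
    return bool(np.linalg.eigvalsh(A).min() >= -tol)

print(is_positive_semidefinite([[1, 3], [3, 4]]))    # False (det = -5 < 0)
```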

3.2 Singular values and singular vectors


Let A be an m × n matrix. A scalar σ is called a singular value of the matrix A if there is a pair of non-zero column vectors u and v such that Av = σu and A^T u = σv. The vectors u and v are called a singular vector pair corresponding to the singular value σ. More precisely, u is called a left singular vector and v is called a right singular vector of A.

Equivalently, if min{m, n} = n, then the singular values of A are the non-negative square roots of the eigenvalues of A^T A. Otherwise, the singular values of A are the non-negative square roots of the eigenvalues of AA^T.

Note:

(i) rank(A) = rank(A^T A).

(ii) The matrices A^T A and AA^T have the same set of non-zero eigenvalues.

(iii) The left singular vectors of A are the eigenvectors of AA^T and the right singular vectors of A are the eigenvectors of A^T A.

Example 3.3.

(i) Let A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}. Then A^T A = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} and its eigenvalues are 1 and 9. Therefore the singular values of A are 1 and 3.

(ii) Let A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \end{pmatrix}. Then AA^T = \begin{pmatrix} 3 & 6 \\ 6 & 12 \end{pmatrix} and its eigenvalues are 15 and 0. Therefore the singular values of A are √15 and 0.

(iii) Let A = \begin{pmatrix} 4 & 11 & 14 \\ 8 & 7 & −2 \end{pmatrix}. Then AA^T = \begin{pmatrix} 333 & 81 \\ 81 & 117 \end{pmatrix} and its eigenvalues are 90 and 360. Therefore the singular values of A are √90 and √360.
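These values are easy to confirm with `np.linalg.svd` (a sketch; `compute_uv=False` returns only the singular values, in descending order):

```python
import numpy as np

A = np.array([[4.0, 11.0, 14.0],
              [8.0, 7.0, -2.0]])

# Singular values of A, largest first: sqrt(360) and sqrt(90).
print(np.linalg.svd(A, compute_uv=False))   # approximately [18.974  9.487]
```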

Theorem 3.4. Let A be an m × n matrix of rank r. Suppose {v_1, v_2, . . . , v_n} is an orthonormal set of eigenvectors of A^T A corresponding to the eigenvalues λ_1 ≥ λ_2 ≥ · · · ≥ λ_n, respectively. Then {Av_1, Av_2, . . . , Av_r} is an orthogonal set and spans the column space of A.

3.3 Singular value decomposition


Theorem 3.5 (Singular value decomposition). Let A be a matrix of order m × n. Let σ_1 ≥ σ_2 ≥ · · · ≥ σ_r be the non-zero singular values of A. Then A = UΣV^T, where

(i) U is an m × m orthogonal matrix whose column vectors u_1, u_2, . . . , u_m form a set of orthonormal eigenvectors of AA^T (the orthonormal left singular vectors of A),

(ii) V is an n × n orthogonal matrix whose column vectors v_1, v_2, . . . , v_n form a set of orthonormal eigenvectors of A^T A (the orthonormal right singular vectors of A), and

(iii) Σ is an m × n matrix of the form \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}, where D is the diagonal matrix of order r given by D = diag(σ_1, σ_2, . . . , σ_r).

Working procedure:
To determine a singular value decomposition of a matrix A of order m × n (m ≥ n):

(i) Determine an orthonormal set of eigenvectors v_1, v_2, . . . , v_n of A^T A corresponding to the eigenvalues λ_1 ≥ λ_2 ≥ · · · ≥ λ_r > 0, followed by 0 repeated n − r times.

(ii) Set V to be the n × n matrix whose ith column is v_i.

(iii) For 1 ≤ i ≤ r, set u_i := Av_i / ||Av_i||.

(iv) Determine vectors u_{r+1}, u_{r+2}, . . . , u_m such that {u_1, u_2, . . . , u_m} is an orthonormal set of m vectors.

(v) Set U to be the m × m matrix whose ith column is u_i.

(vi) Then A = UΣV^T.
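This procedure can be followed step by step in NumPy. A sketch for the square full-rank case, where step (iv) is not needed (`np.linalg.eigh` returns ascending eigenvalues, so both outputs are reversed to get λ_1 ≥ λ_2 ≥ · · ·):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

# Step (i): orthonormal eigenvectors of A^T A, eigenvalues descending.
lam, V = np.linalg.eigh(A.T @ A)
lam, V = lam[::-1], V[:, ::-1]

# Step (iii): u_i = A v_i / ||A v_i||; here ||A v_i|| = sigma_i = sqrt(lam_i).
sigma = np.sqrt(lam)
U = (A @ V) / sigma

# Step (vi): A = U Sigma V^T.
Sigma = np.diag(sigma)
print(np.allclose(A, U @ Sigma @ V.T))   # True
```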

Problems:

1. Find a singular value decomposition of the matrix A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & −1 & 1 \end{pmatrix}.

Solution: Consider A^T A = \begin{pmatrix} 2 & 0 & 2 \\ 0 & 2 & 0 \\ 2 & 0 & 2 \end{pmatrix}. Its eigenvalues are 0, 2 and 4. Therefore the singular values of A are 2, √2 and 0.

Orthonormal basis of the eigenspace corresponding to λ = 4:
From the equation (A^T A − λI_3)X = 0, we get x − z = 0 and y = 0. Thus, {(1/√2)(1, 0, 1)} is an orthonormal basis of E_λ(A^T A).

Orthonormal basis of the eigenspace corresponding to λ = 2:
From the equation (A^T A − λI_3)X = 0, we get x = z = 0. Thus, {(0, 1, 0)} is an orthonormal basis of E_λ(A^T A).

Orthonormal basis of the eigenspace corresponding to λ = 0:
From the matrix equation (A^T A − λI_3)X = 0, we get x + z = 0 and y = 0. Therefore, {(1/√2)(1, 0, −1)} is an orthonormal basis of E_λ(A^T A).
Set v_1 = (1/√2)(1, 0, 1)^T, v_2 = (0, 1, 0)^T and v_3 = (1/√2)(1, 0, −1)^T, so that u_1 = Av_1/||Av_1|| = (1/√2)(1, 1)^T and u_2 = Av_2/||Av_2|| = (1/√2)(1, −1)^T. Let
V = (1/√2) \begin{pmatrix} 1 & 0 & 1 \\ 0 & √2 & 0 \\ 1 & 0 & −1 \end{pmatrix}, U = (1/√2) \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix} and Σ = \begin{pmatrix} 2 & 0 & 0 \\ 0 & √2 & 0 \end{pmatrix}. Then A = UΣV^T.
 
2. Find a singular value decomposition of the matrix A = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}.

Solution: Consider A^T A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 4 \end{pmatrix}. Its eigenvalues are 1, 1 and 4. Thus the singular values of A are 1, 1 and 2.

Orthonormal basis corresponding to the eigenvalue λ = 4:
From the matrix equation (A^T A − λI_3)X = 0, we get x = 0 and y = 0. Therefore {(0, 0, 1)} is an orthonormal basis of E_λ(A^T A).

Orthonormal basis corresponding to the eigenvalue λ = 1:
From the matrix equation (A^T A − λI_3)X = 0, we get z = 0. Therefore {(1, 0, 0), (0, 1, 0)} is an orthonormal basis of E_λ(A^T A).
Set v_1 = (0, 0, 1)^T, v_2 = (1, 0, 0)^T and v_3 = (0, 1, 0)^T, so that u_1 = Av_1/||Av_1|| = (0, 0, 1)^T, u_2 = Av_2/||Av_2|| = (0, 1, 0)^T and u_3 = Av_3/||Av_3|| = (1, 0, 0)^T. Let
V = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, U = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} and Σ = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. Then A = UΣV^T.

 
3. Find a singular value decomposition of the matrix A = \begin{pmatrix} −3 & 1 \\ 6 & −2 \\ 6 & −2 \end{pmatrix}.

Solution: Consider AA^T = \begin{pmatrix} 10 & −20 & −20 \\ −20 & 40 & 40 \\ −20 & 40 & 40 \end{pmatrix}. Its eigenvalues are 0, 0 and 90. Therefore the singular values of A are 3√10 and 0.

Orthonormal basis corresponding to the eigenvalue λ = 90:
From the matrix equation (AA^T − λI_3)X = 0, we get 4x + y + z = 0, 2x + 5y − 4z = 0 and 2x − 4y + 5z = 0. Therefore, E_λ(AA^T) = {(x, y, z) : x/1 = y/(−2) = z/(−2)}. Thus, {(1/3)(1, −2, −2)} is an orthonormal basis of E_λ(AA^T).
Orthonormal basis corresponding to the eigenvalue λ = 0:
From the matrix equation (AA^T − λI_3)X = 0, we get x − 2y − 2z = 0. Therefore, E_λ(AA^T) = {(x, y, z) : x − 2y − 2z = 0}. Let (y, z) = (0, 1) and (y, z) = (1, 0). Then, {(2, 0, 1), (2, 1, 0)} is a basis of E_λ(AA^T). Employing the Gram-Schmidt orthogonalization process, we get {(1/√5)(2, 0, 1), (1/(3√5))(2, 5, −4)} as an orthonormal basis of E_λ(AA^T).
5 3 5
Set u_1 = (1/3)(1, −2, −2)^T, u_2 = (1/√5)(2, 0, 1)^T and u_3 = (1/(3√5))(2, 5, −4)^T, and v_1 = A^T u_1/||A^T u_1|| = (1/√10)(−3, 1)^T. A unit vector orthogonal to v_1 is (1/√10)(1, 3)^T. Define v_2 = (1/√10)(1, 3)^T.
 √   √ 
  5/3
√ 2 2/3 3 10 0
1 −3 1 1
Let V = √ , U = √  −2√5/3 0 5/3  and
P
=  0 0 .
10 1 3 5 −2 5/3 1 −4/3 0 0
Then A = U V .
P T

4 Cholesky Decomposition
Let A = [a_{ij}]_{n×n} be a positive definite matrix. Then A can be written as A = LL^T, where L = [l_{ij}]_{n×n} is lower triangular; this factorization is called the Cholesky decomposition of A.

The entries of the matrix L can be computed as follows:

∗ l_{11} = √(a_{11}).

∗ l_{j1} = a_{j1}/l_{11}, j = 2, . . . , n.

∗ l_{jj} = √(a_{jj} − Σ_{s=1}^{j−1} l_{js}^2), j = 2, 3, . . . , n.

∗ l_{pj} = (1/l_{jj}) (a_{pj} − Σ_{s=1}^{j−1} l_{js} l_{ps}), p = j + 1, j + 2, . . . , n and j ≥ 2.

Note: The Cholesky decomposition is mainly used to solve the linear system AX = b. If A is symmetric and positive definite, then we can solve it by first computing the Cholesky decomposition A = LL^T, then solving Ly = b for y by forward substitution, and finally solving L^T x = y for x by back substitution.
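A compact sketch of the recurrence above together with the two triangular solves, assuming NumPy (SciPy users would reach for `scipy.linalg.cho_factor`/`cho_solve` instead; the helper below and the right-hand side b are illustrative choices):

```python
import numpy as np

def cholesky(A):
    """Lower-triangular L with A = L L^T, following the recurrence above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # l_jj = sqrt(a_jj - sum_{s<j} l_js^2)
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        for p in range(j + 1, n):
            # l_pj = (a_pj - sum_{s<j} l_js l_ps) / l_jj
            L[p, j] = (A[p, j] - np.dot(L[j, :j], L[p, :j])) / L[j, j]
    return L

A = np.array([[4.0, 2.0, 14.0],
              [2.0, 17.0, -5.0],
              [14.0, -5.0, 83.0]])
b = np.array([1.0, 2.0, 3.0])

L = cholesky(A)                # equals the factor found in problem 1 below
y = np.linalg.solve(L, b)      # solve L y = b   (forward-substitution step)
x = np.linalg.solve(L.T, y)    # solve L^T x = y (back-substitution step)
print(np.allclose(A @ x, b))   # True
```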

Problems:

1. Find the Cholesky decomposition of \begin{pmatrix} 4 & 2 & 14 \\ 2 & 17 & −5 \\ 14 & −5 & 83 \end{pmatrix}.
Solution:

l_{11} = √(a_{11}) = 2.
l_{21} = a_{21}/l_{11} = 1; l_{31} = a_{31}/l_{11} = 7.
l_{22} = √(a_{22} − l_{21}^2) = 4.
l_{32} = (a_{32} − l_{21} l_{31})/l_{22} = −3.
l_{33} = √(a_{33} − l_{31}^2 − l_{32}^2) = 5.

Therefore,
\begin{pmatrix} 4 & 2 & 14 \\ 2 & 17 & −5 \\ 14 & −5 & 83 \end{pmatrix} = \begin{pmatrix} 2 & 0 & 0 \\ 1 & 4 & 0 \\ 7 & −3 & 5 \end{pmatrix} \begin{pmatrix} 2 & 1 & 7 \\ 0 & 4 & −3 \\ 0 & 0 & 5 \end{pmatrix}.

 
2. Find the Cholesky decomposition of \begin{pmatrix} 9 & 6 & 12 \\ 6 & 13 & 11 \\ 12 & 11 & 26 \end{pmatrix}.

Solution:

l_{11} = √(a_{11}) = 3.
l_{21} = a_{21}/l_{11} = 2; l_{31} = a_{31}/l_{11} = 4.
l_{22} = √(a_{22} − l_{21}^2) = 3.
l_{32} = (a_{32} − l_{31} l_{21})/l_{22} = 1.
l_{33} = √(a_{33} − l_{31}^2 − l_{32}^2) = 3.

Therefore,
\begin{pmatrix} 9 & 6 & 12 \\ 6 & 13 & 11 \\ 12 & 11 & 26 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 0 \\ 2 & 3 & 0 \\ 4 & 1 & 3 \end{pmatrix} \begin{pmatrix} 3 & 2 & 4 \\ 0 & 3 & 1 \\ 0 & 0 & 3 \end{pmatrix}.

Exercise:

(1) Diagonalize the following matrices.
(i) \begin{pmatrix} 1 & 1 \\ 3 & −1 \end{pmatrix} (ii) \begin{pmatrix} 1 & 3 & 3 \\ −3 & −5 & −3 \\ 3 & 3 & 1 \end{pmatrix}.

(2) Find an orthogonal matrix P that diagonalizes the following matrices.
(i) \begin{pmatrix} 1 & −2 \\ −2 & 1 \end{pmatrix} (ii) \begin{pmatrix} 6 & −2 & −1 \\ −2 & 6 & −1 \\ −1 & −1 & 5 \end{pmatrix} (iii) \begin{pmatrix} 4 & −1 & −1 \\ −1 & 4 & −1 \\ −1 & −1 & 4 \end{pmatrix}.
" #
2 −1
(3) Find a singular value decomposition of the following matrices. (i) (ii)
2 2
 
1 1 " #
1 2 1
 0 1  (iii) .
 
  2 1 1
−1 1

(4) Find the Cholesky decomposition of the following matrices.
(i) \begin{pmatrix} 4 & 6 & 8 \\ 6 & 34 & 52 \\ 8 & 52 & 129 \end{pmatrix} (ii) \begin{pmatrix} 1 & −1 & 3 & 2 \\ −1 & 5 & −5 & −2 \\ 3 & −5 & 19 & 3 \\ 2 & −2 & 3 & 21 \end{pmatrix}.

