Matrix Decompo 2024
The following theorem gives the characteristic polynomial of A in terms of the principal minors of A.
Theorem 1.1. Let A be a square matrix of order n, and let S_k denote the sum of all principal minors of A of order k. Then ∆(A) = λ^n − S_1 λ^{n−1} + S_2 λ^{n−2} − · · · + (−1)^n S_n.
Problem 1.3. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = [ 1 2; 2 1 ].
Solution: The characteristic polynomial of A is λ^2 − 2λ − 3, so the eigenvalues of A are λ = −1, 3.
Eigenspace corresponding to λ = −1:
From the equation (A − λI2)X = 0, we get x + y = 0. Thus Eλ(A) = {(x, −x) : x ∈ R}. So, {(1, −1)} is a basis of Eλ(A).
Eigenspace corresponding to λ = 3:
From the equation (A − λI2 )X = 0, we get x − y = 0. Thus Eλ (A) = {(x, y) : x − y = 0}.
So, {(1, 1)} is a basis of Eλ (A).
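The eigenvalues and the basis vector found above can be checked numerically (a sketch using NumPy's `eig`):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(np.allclose(np.sort(vals), [-1.0, 3.0]))  # True: eigenvalues -1 and 3

# (1, 1) is an eigenvector for lambda = 3, as found above.
v = np.array([1.0, 1.0])
print(np.allclose(A @ v, 3 * v))  # True
```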
Problem 1.4. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = [ 0 −1 1; −1 0 1; 1 −1 0 ].
Solution: The characteristic polynomial of A is λ^3 − λ, so the eigenvalues of A are λ = −1, 0, 1.
Eigenspace corresponding to λ = −1:
The equation (A − λI3)X = 0 implies x = y and z = 0. Thus Eλ(A) = {(x, x, 0) : x ∈ R}. So, {(1, 1, 0)} is a basis of Eλ(A).
Eigenspace corresponding to λ = 0:
The equation (A − λI3)X = 0 implies x = y and y = z. Thus Eλ(A) = {(x, x, x) : x ∈ R}.
So, {(1, 1, 1)} is a basis of Eλ (A).
Eigenspace corresponding to λ = 1:
The equation (A − λI3 )X = 0 implies x = z and y = 0. Thus Eλ (A) = {(x, 0, x) : x ∈ R}.
So, {(1, 0, 1)} is a basis of Eλ (A).
Problem 1.5. Find a basis for the eigenspace corresponding to each eigenvalue of the matrix A = [ 5 −4 4; 12 −11 12; 4 −4 5 ].
Solution: The characteristic polynomial of A is λ^3 + λ^2 − 5λ + 3. Therefore the eigenvalues of A are λ = −3, 1, 1. Let X = [x, y, z]^T .
Eigenspace corresponding to λ = 1:
The equation (A − λI3)X = 0 implies x − y + z = 0. Thus Eλ(A) = {(x, y, z) : x − y + z = 0}. Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (1, 1, 0) and (−1, 0, 1) as two independent solutions. So, {(1, 1, 0), (−1, 0, 1)} is a basis of Eλ(A).
Eigenspace corresponding to λ = −3:
The equation (A − λI3)X = 0 implies z = x and y = 3x. Thus Eλ(A) = {(x, 3x, x) : x ∈ R}. So, {(1, 3, 1)} is a basis of Eλ(A).
Properties of eigenvalues:
(1) Any square matrix and its transpose have the same eigenvalues.
(2) The eigenvalues of a triangular matrix are the diagonal elements of the matrix.
(3) The sum of the eigenvalues of a matrix is the sum of the elements of the principal diagonal (the trace of the matrix).
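These properties are easy to check numerically (a sketch; the matrices below are just illustrative examples):

```python
import numpy as np

A = np.array([[0.0, -1.0, 1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])

# (1) A and A^T have the same eigenvalues.
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(A.T))))  # True

# (2) The eigenvalues of a triangular matrix are its diagonal entries.
T = np.array([[2.0, 5.0, 1.0], [0.0, 3.0, 4.0], [0.0, 0.0, 7.0]])
print(np.allclose(np.sort(np.linalg.eigvals(T)), [2.0, 3.0, 7.0]))  # True

# (3) The sum of the eigenvalues equals the trace.
print(np.isclose(np.linalg.eigvals(A).sum(), np.trace(A)))  # True
```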
(2) Find the inverse of the matrix A = [ 1 1 2; 9 2 0; 5 0 3 ] using the Cayley–Hamilton theorem.
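For this exercise, the Cayley–Hamilton theorem gives A^{-1} = −(A^2 + c_2 A + c_1 I)/c_0, where λ^3 + c_2 λ^2 + c_1 λ + c_0 is the characteristic polynomial of A. A sketch of the computation:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0], [9.0, 2.0, 0.0], [5.0, 0.0, 3.0]])

# Characteristic polynomial lambda^3 + c2*lambda^2 + c1*lambda + c0.
_, c2, c1, c0 = np.poly(A)

# Cayley-Hamilton: A^3 + c2*A^2 + c1*A + c0*I = 0, so
# A @ (A^2 + c2*A + c1*I) = -c0*I, hence (c0 != 0 since A is invertible):
A_inv = -(A @ A + c2 * A + c1 * np.eye(3)) / c0
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```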
Proposition 1.7. If A is a real symmetric matrix of order n, then all its eigenvalues are real.
Proof. Suppose Av = λv for some non-zero column vector v (possibly with complex entries). Then v̄^T Av = λ v̄^T v. Since A is real symmetric, v̄^T A = (A v̄)^T = λ̄ v̄^T, so v̄^T Av = λ̄ v̄^T v. Therefore λ̄ v̄^T v = λ v̄^T v. So λ̄ = λ, i.e. λ is real, because v̄^T v > 0 for a non-zero column vector v.
2 Diagonalization of a Matrix
• Square matrices A and B are similar if there exists an invertible matrix P such that A = P BP^{-1}.
Proposition 2.1. If A and B are similar matrices, then they share the same characteristic
polynomial.
Note: If a square matrix A of order n has n distinct eigenvalues, then it is diagonalizable.
Working procedure (to diagonalize a square matrix A of order n):
Step 1. Find the eigenvalues of A.
Step 2. Find n linearly independent eigenvectors of A.
Step 3. Form the matrix P whose columns are these eigenvectors, and the diagonal matrix D whose diagonal entries are the corresponding eigenvalues.
Step 4. Then A = P DP^{-1}.
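The steps above can be sketched with NumPy (`diagonalize` is an illustrative helper; it assumes A is diagonalizable, and the example matrix is the one from Problem 1 below):

```python
import numpy as np

def diagonalize(A):
    """Return (P, D) with A = P @ D @ inv(P), assuming A is diagonalizable."""
    vals, vecs = np.linalg.eig(A)   # Steps 1-2: eigenvalues and eigenvectors
    P = vecs                        # Step 3: eigenvectors as columns of P
    D = np.diag(vals)               #         eigenvalues on the diagonal of D
    return P, D

A = np.array([[1.0, 1.0], [2.0, 2.0]])
P, D = diagonalize(A)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True (Step 4)
```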
Problems:
1. Diagonalize the matrix A = [ 1 1; 2 2 ]. Hence find √A.
Solution: The characteristic polynomial of A is λ^2 − 3λ. Thus the eigenvalues of A are 0 and 3.
Eigenspace corresponding to λ = 0:
From the matrix equation (A − λI2 )X = 0, we get x + y = 0. Thus Eλ (A) = {(x, −x) :
x ∈ R}. So, {(1, −1)} is a basis of Eλ (A).
Eigenspace corresponding to λ = 3:
From the matrix equation (A − λI2)X = 0, we get 2x − y = 0. Thus Eλ(A) = {(x, 2x) : x ∈ R}. So, {(1, 2)} is a basis of Eλ(A).
Let P = [ 1 1; −1 2 ] and D = [ 0 0; 0 3 ]. Then P^{-1} = [ 2/3 −1/3; 1/3 1/3 ].
Hence, A = P DP^{-1}.
To find √A:
√A = P √D P^{-1} = [ 1 1; −1 2 ] [ 0 0; 0 √3 ] [ 2/3 −1/3; 1/3 1/3 ] = [ (1/3)√3 (1/3)√3; (2/3)√3 (2/3)√3 ].
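A short numerical check that the matrix computed above really is a square root of A:

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 2.0]])
P = np.array([[1.0, 1.0], [-1.0, 2.0]])
D = np.diag([0.0, 3.0])

# sqrt(A) = P sqrt(D) P^{-1}, taking square roots of the diagonal of D.
sqrt_A = P @ np.sqrt(D) @ np.linalg.inv(P)
print(np.allclose(sqrt_A @ sqrt_A, A))  # True: (sqrt A)^2 = A
```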
2. Diagonalize the matrix A = [ 0 −1 1; −1 0 1; 1 −1 0 ]. Hence find A^100.
Solution: The characteristic polynomial of A is λ^3 − λ. Therefore the eigenvalues of A are λ = −1, 0, 1. Let X = [x, y, z]^T .
Eigenspace corresponding to λ = −1:
From the matrix equation (A − λI3 )X = 0, we get x − y + z = 0; −x + y + z = 0.
Employing Gauss elimination method, we have x − y + z = 0; z = 0. Thus Eλ (A) =
{(x, y, 0) : x − y = 0}. So, {(1, 1, 0)} is a basis of Eλ (A).
Eigenspace corresponding to λ = 0:
The equation (A − λI3)X = 0 implies x = y and y = z. Thus Eλ(A) = {(x, x, x) : x ∈ R}. So, {(1, 1, 1)} is a basis of Eλ(A).
Eigenspace corresponding to λ = 1:
The equation (A − λI3 )X = 0 implies x = z and y = 0. Thus Eλ (A) = {(x, 0, x) : x ∈
R}. So, {(1, 0, 1)} is a basis of Eλ (A).
Let P = [ 1 1 1; 1 1 0; 0 1 1 ] and D = [ −1 0 0; 0 0 0; 0 0 1 ]. Then P^{-1} = [ 1 0 −1; −1 1 1; 1 −1 0 ].
Hence A = P DP^{-1}.
To find A^100:
A^100 = P D^100 P^{-1} = [ 1 1 1; 1 1 0; 0 1 1 ] [ 1 0 0; 0 0 0; 0 0 1 ] [ 1 0 −1; −1 1 1; 1 −1 0 ] = [ 2 −1 −1; 1 0 −1; 1 −1 0 ].
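Since D^100 = diag((−1)^100, 0^100, 1^100) = diag(1, 0, 1), the result above can be verified against direct matrix powering:

```python
import numpy as np

A = np.array([[0, -1, 1], [-1, 0, 1], [1, -1, 0]])
P = np.array([[1, 1, 1], [1, 1, 0], [0, 1, 1]])
P_inv = np.array([[1, 0, -1], [-1, 1, 1], [1, -1, 0]])
D100 = np.diag([(-1) ** 100, 0, 1])  # D^100 = diag(1, 0, 1)

A100 = P @ D100 @ P_inv
print(np.array_equal(A100, np.linalg.matrix_power(A, 100)))  # True
```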
3. Diagonalize the matrix A = [ 1 1 1; 2 2 2; 1 1 1 ].
Solution: The characteristic polynomial of A is λ^3 − 4λ^2. Therefore the eigenvalues of A are λ = 0, 0, 4. Let X = [x, y, z]^T .
Eigenspace corresponding to λ = 0:
The equation (A−λI3 )X = 0 implies x+y+z = 0. Thus Eλ (A) = {(x, y, z) : x+y+z = 0}.
Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (−1, 1, 0) and (−1, 0, 1) as two independent solutions. So, {(−1, 1, 0), (−1, 0, 1)} is a basis of Eλ(A).
Eigenspace corresponding to λ = 4:
From the matrix equation (A−λI3 )X = 0, we get −3x+y+z = 0; x−y+z = 0; x+y−3z =
0. Employing Gauss elimination method, we have x + y − 3z = 0; y − 2z = 0. Thus
Eλ (A) = {(z, 2z, z) : z ∈ R}. So, {(1, 2, 1)} is a basis of Eλ (A).
Let P = [ −1 −1 1; 1 0 2; 0 1 1 ] and D = [ 0 0 0; 0 0 0; 0 0 4 ]. Then P^{-1} = [ −1/2 1/2 −1/2; −1/4 −1/4 3/4; 1/4 1/4 1/4 ].
Hence A = P DP^{-1}.
Note: Every real symmetric matrix A is orthogonally diagonalizable; that is, A = P DP^T for some orthogonal matrix P and diagonal matrix D.
Problems:
(1) Find an orthogonal matrix that diagonalizes the matrix A = [ 0 1 1; 1 0 1; 1 1 0 ].
Solution: The characteristic polynomial of A is λ^3 − 3λ − 2. Therefore the eigenvalues of A are λ = −1, −1, 2. Let X = [x, y, z]^T .
Eigenspace corresponding to λ = −1:
From the matrix equation (A − λI3)X = 0, we get x + y + z = 0. Thus Eλ(A) = {(x, y, z) : x + y + z = 0}. Substituting (y, z) = (1, 0) and (y, z) = (0, 1), we get (−1, 1, 0) and (−1, 0, 1) as two independent solutions. Applying the Gram–Schmidt process to these, {(1/√2)(−1, 1, 0), (1/√6)(1, 1, −2)} is an orthonormal basis of Eλ(A).
Eigenspace corresponding to λ = 2:
From the matrix equation (A − λI3)X = 0, we get x + y − 2z = 0; x − 2y + z = 0; 2x − y − z = 0. Employing the Gauss elimination method, we have x + y − 2z = 0; y − z = 0. Thus Eλ(A) = {(z, z, z) : z ∈ R}. So, {(1/√3)(1, 1, 1)} is an orthonormal basis of Eλ(A).
Let P = (1/√6) [ −√3 1 √2; √3 1 √2; 0 −2 √2 ] and D = [ −1 0 0; 0 −1 0; 0 0 2 ]. Then A = P DP^T.
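NumPy's `eigh` performs exactly this orthogonal diagonalization for symmetric matrices (a sketch; the eigenvector signs and ordering may differ from the hand computation):

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])

# eigh returns eigenvalues in ascending order and orthonormal eigenvectors
# as the columns of P, so P is orthogonal and A = P @ D @ P.T.
vals, P = np.linalg.eigh(A)
D = np.diag(vals)
print(np.allclose(P @ P.T, np.eye(3)))  # True: P is orthogonal
print(np.allclose(A, P @ D @ P.T))      # True
```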
(2) Find an orthogonal matrix that diagonalizes the matrix A = [ 2 −1; −1 2 ].
Solution: The characteristic polynomial of A is λ^2 − 4λ + 3. Therefore the eigenvalues of A are λ = 1, 3. Let X = [x, y]^T .
Eigenspace corresponding to λ = 1:
From the equation (A − λI2)X = 0, we get x − y = 0. Thus Eλ(A) = {(y, y) : y ∈ R}. So, {(1/√2)(1, 1)} is an orthonormal basis of Eλ(A).
Eigenspace corresponding to λ = 3:
From the equation (A − λI2)X = 0, we get x + y = 0. Thus Eλ(A) = {(x, y) : x + y = 0}. So, {(1/√2)(1, −1)} is an orthonormal basis of Eλ(A).
Let P = (1/√2) [ 1 1; 1 −1 ] and D = [ 1 0; 0 3 ]. Then A = P DP^T.
Theorem 3.1. For a real symmetric matrix A, the following are equivalent:
(i) v^T Av ≥ 0 for every non-zero real column vector v (A is positive semi-definite).
(ii) All the eigenvalues of A are non-negative.
Theorem 3.2. A real symmetric matrix A of order 2 is positive semi-definite if and only if its diagonal elements are non-negative and |A| ≥ 0.
Exercise: Check whether the following matrices are positive semi-definite or not.
(i) [ 1 3; 3 4 ] (ii) [ 1 −2; −2 −3 ] (iii) [ 1 −2; −2 5 ].
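The Theorem 3.2 test is easy to apply in code (a sketch; `is_psd_2x2` is an illustrative helper for the exercise matrices):

```python
import numpy as np

def is_psd_2x2(A):
    """Theorem 3.2 test for a 2x2 real symmetric matrix."""
    return A[0, 0] >= 0 and A[1, 1] >= 0 and np.linalg.det(A) >= 0

print(is_psd_2x2(np.array([[1.0, 3.0], [3.0, 4.0]])))     # False: det = -5
print(is_psd_2x2(np.array([[1.0, -2.0], [-2.0, -3.0]])))  # False: a22 = -3 < 0
print(is_psd_2x2(np.array([[1.0, -2.0], [-2.0, 5.0]])))   # True: det = 1
```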
Equivalently, if min{m, n} = n, then the singular values of A are the positive square roots of the eigenvalues of A^T A. Otherwise, the singular values of A are the positive square roots of the eigenvalues of AA^T .
Note:
(ii) The matrices AT A and AAT have the same set of non-zero eigenvalues.
(iii) The left singular vectors of A are the eigenvectors of AAT and the right singular
vectors of A are the eigenvectors of AT A.
Example 3.3.
(i) Let A = [ 1 2; 2 1 ]. Then A^T A = [ 5 4; 4 5 ], and its eigenvalues are 1 and 9. Therefore the singular values of A are 1 and 3.
(ii) Let A = [ 1 1 1; 2 2 2 ]. Then AA^T = [ 3 6; 6 12 ], and its eigenvalues are 15 and 0. Therefore the singular values of A are √15 and 0.
(iii) Let A = [ 4 11 14; 8 7 −2 ]. Then AA^T = [ 333 81; 81 117 ], and its eigenvalues are 90 and 360. Therefore the singular values of A are √90 and √360.
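For example (iii), the singular values obtained from the eigenvalues of AA^T agree with those returned by a library SVD routine (a sketch):

```python
import numpy as np

A = np.array([[4.0, 11.0, 14.0], [8.0, 7.0, -2.0]])

# Singular values via the eigenvalues of A A^T (sorted in descending order) ...
sv_from_eigs = np.sqrt(np.sort(np.linalg.eigvalsh(A @ A.T))[::-1])
# ... agree with those returned directly by the SVD routine.
print(np.allclose(sv_from_eigs, np.linalg.svd(A, compute_uv=False)))  # True
```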
Here r is the rank of A and D = diag(σ1 , σ2 , . . . , σr ).
Working procedure:
To determine a singular value decomposition of a matrix A of order m × n (m ≥ n):
(i) Compute A^T A and find its eigenvalues λ1 ≥ λ2 ≥ · · · ≥ λn .
(ii) Find an orthonormal basis of eigenvectors v1 , v2 , . . . , vn of A^T A, and let V = [v1 v2 · · · vn ].
(iii) The singular values are σi = √λi ; place them on the diagonal of the m × n matrix Σ.
(iv) For each non-zero σi , set ui = Avi /||Avi ||.
(v) If needed, extend u1 , u2 , . . . to an orthonormal basis of R^m and let U = [u1 u2 · · · um ].
(vi) Then, A = U Σ V^T .
Problems:
1. Find a singular value decomposition of the matrix A = [ 1 1 1; 1 −1 1 ].
Solution: Consider A^T A = [ 2 0 2; 0 2 0; 2 0 2 ]. Its eigenvalues are 0, 2 and 4. Therefore the singular values of A are 2, √2 and 0.
Orthonormal basis of the eigenspace corresponding to λ = 0:
From the matrix equation (A^T A − λI3)X = 0, we get x + z = 0 and y = 0. Therefore, {(1/√2)(1, 0, −1)} is an orthonormal basis of E_{A^T A}(λ).
Set v1 = (1/√2)(1, 0, 1)^T , v2 = (0, 1, 0)^T and v3 = (1/√2)(1, 0, −1)^T . Then u1 = Av1 /||Av1 || = (1/√2)(1, 1)^T and u2 = Av2 /||Av2 || = (1/√2)(1, −1)^T . Let
V = (1/√2) [ 1 0 1; 0 √2 0; 1 0 −1 ], U = (1/√2) [ 1 1; 1 −1 ] and Σ = [ 2 0 0; 0 √2 0 ].
Then A = U Σ V^T .
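The factors U, Σ and V found above can be verified numerically:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, 1.0]])
s2 = np.sqrt(2.0)
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / s2
S = np.array([[2.0, 0.0, 0.0], [0.0, s2, 0.0]])
V = np.array([[1.0, 0.0, 1.0], [0.0, s2, 0.0], [1.0, 0.0, -1.0]]) / s2

print(np.allclose(U @ S @ V.T, A))      # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(2)))  # True: U is orthogonal
print(np.allclose(V.T @ V, np.eye(3)))  # True: V is orthogonal
```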
0 1 0
2. Find a singular value decomposition of the matrix A =
1 0 0 .
0 0 2
Solution: Consider A^T A = [ 1 0 0; 0 1 0; 0 0 4 ]. Its eigenvalues are 1, 1 and 4. Thus the singular values of A are 1, 1 and 2.
3. Find a singular value decomposition of the matrix A = [ −3 1; 6 −2; 6 −2 ].
Solution: Consider AA^T = [ 10 −20 −20; −20 40 40; −20 40 40 ]. Its eigenvalues are 0, 0 and 90. Therefore the singular values of A are 3√10 and 0.
4 Cholesky decomposition
Let A = [aij ]n×n be a positive definite matrix. Then A can be written as A = LL^T , where L = [lij ]n×n is lower triangular; this factorization is called the Cholesky decomposition of A.
The entries of the matrix L can be computed as follows:
∗ l11 = √a11 .
∗ lj1 = aj1 /l11 , j = 2, . . . , n.
∗ ljj = √( ajj − Σ_{s=1}^{j−1} ljs^2 ), j = 2, 3, . . . , n.
∗ lpj = ( apj − Σ_{s=1}^{j−1} ljs lps ) / ljj , p = j + 1, j + 2, . . . , n and j ≥ 2.
Note: The Cholesky decomposition is mainly used to solve the linear system AX = b. If A is symmetric and positive definite, then we can solve the system by first computing the Cholesky decomposition A = LL^T , then solving Ly = b for y by forward substitution, and finally solving L^T x = y for x by back substitution.
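A sketch of this solution procedure, using the matrix of Problem 1 below with an assumed right-hand side b (the generic `np.linalg.solve` stands in for dedicated forward/back-substitution routines here):

```python
import numpy as np

A = np.array([[4.0, 2.0, 14.0], [2.0, 17.0, -5.0], [14.0, -5.0, 83.0]])
b = np.array([1.0, 2.0, 3.0])  # assumed right-hand side, for illustration

L = np.linalg.cholesky(A)    # A = L @ L.T
y = np.linalg.solve(L, b)    # solve L y = b   (forward substitution)
x = np.linalg.solve(L.T, y)  # solve L^T x = y (back substitution)
print(np.allclose(A @ x, b))  # True
```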
Problems:
1. Find the Cholesky decomposition of [ 4 2 14; 2 17 −5; 14 −5 83 ].
Solution:
l11 = √a11 = 2.
l21 = a21 /l11 = 1; l31 = a31 /l11 = 7.
l22 = √(a22 − l21^2 ) = 4.
l32 = (a32 − l21 l31 )/l22 = −3.
l33 = √(a33 − l31^2 − l32^2 ) = 5.
Therefore,
[ 4 2 14; 2 17 −5; 14 −5 83 ] = [ 2 0 0; 1 4 0; 7 −3 5 ] [ 2 1 7; 0 4 −3; 0 0 5 ].
2. Find the Cholesky decomposition of [ 9 6 12; 6 13 11; 12 11 26 ].
Solution:
l11 = √a11 = 3.
l21 = a21 /l11 = 2; l31 = a31 /l11 = 4.
l22 = √(a22 − l21^2 ) = 3.
l32 = (a32 − l31 l21 )/l22 = 1.
l33 = √(a33 − l31^2 − l32^2 ) = 3.
Therefore,
[ 9 6 12; 6 13 11; 12 11 26 ] = [ 3 0 0; 2 3 0; 4 1 3 ] [ 3 2 4; 0 3 1; 0 0 3 ].
Exercise:
(4) Find the Cholesky decomposition of the following matrices.
(i) [ 4 6 8; 6 34 52; 8 52 129 ] (ii) [ 1 −1 3 2; −1 5 −5 −2; 3 −5 19 3; 2 −2 3 21 ].