
SECTION 7

EIGENVALUES AND EIGENVECTORS

In this section, we will only be considering real matrices.

For an 𝑛 × 𝑛 matrix 𝐴, the linear transformation 𝒙 ↦ 𝐴𝒙 may move vectors in various
directions. However, for certain 𝒙, 𝐴𝒙 is a scalar multiple of 𝒙. That is, 𝐴𝒙 = 𝜆𝒙 for
some scalar 𝜆. This means that 𝐴𝒙 is in the same direction as 𝒙 or in the opposite
direction. In this section we will study such cases.

Example 7.1: Let

    𝐴 = ( 1   1 ),   𝒙 = ( 1 ),   𝒚 = ( 3 ).
        (−2   4 )        ( 1 )        ( 2 )

Then

    𝐴𝒙 = ( 1   1 ) ( 1 ) = ( 2 ) = 2 ( 1 ).
         (−2   4 ) ( 1 )   ( 2 )     ( 1 )

Therefore, 𝐴𝒙 = 𝜆𝒙 where 𝜆 = 2.

    𝐴𝒚 = ( 1   1 ) ( 3 ) = ( 5 ).
         (−2   4 ) ( 2 )   ( 2 )

It is clear that 𝐴𝒚 is not a scalar multiple of 𝒚.
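As a quick numerical check (a plain-Python sketch, not part of the original notes; the helper name is our own), we can verify both computations directly:

```python
# Sketch: verifying Example 7.1 numerically.
def matvec(A, x):
    """Multiply an n x n matrix (list of rows) by a vector."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 1], [-2, 4]]
x = [1, 1]
y = [3, 2]

Ax = matvec(A, x)   # [2, 2] = 2 * x, so x is an eigenvector for lambda = 2
Ay = matvec(A, y)   # [5, 2], which is not a scalar multiple of y = [3, 2]
print(Ax, Ay)
```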

Recall from section 4 that an 𝑛 × 1 column matrix is also called a column vector or
sometimes just a vector.

Definition 7.1: Let 𝐴 be an 𝑛 × 𝑛 matrix. A scalar 𝜆 is said to be an eigenvalue of 𝑨 if
there exists a nonzero 𝑛 × 1 column vector 𝒙 such that 𝐴𝒙 = 𝜆𝒙. Every nonzero vector
𝒙 satisfying this equation is called an eigenvector of 𝑨 associated with the eigenvalue 𝜆.
Eigenvalues of a matrix are also called proper values, latent values and characteristic
values of the matrix, and the corresponding eigenvectors are called proper vectors, latent
vectors and characteristic vectors of the matrix, respectively.

𝐴𝒙 = 𝒙 can also be written as the homogeneous system (𝐼𝑛 − 𝐴)𝒙 = 𝟎.


A homogeneous system 𝐵𝒙 = 𝟎 has a unique solution (the trivial solution 𝒙 = 𝟎) if and
only if 𝐵 is invertible. Therefore, (𝐼𝑛 − 𝐴)𝒙 = 𝟎 has a non-trivial solution if and only
if 𝐼𝑛 − 𝐴 is not invertible if and only if |𝐼𝑛 − 𝐴| = 0. If we expand |𝐼𝑛 − 𝐴|, we
obtain a polynomial of degree 𝑛 in . This is called the characteristic polynomial of 𝑨 and
|𝐼𝑛 − 𝐴| = 0 is called the characteristic equation of 𝑨.

The set of all solutions to (𝐼𝑛 − 𝐴)𝒙 = 𝟎 is just the null space of the matrix
𝐼𝑛 – 𝐴. This set is a subspace of ℝ𝑛 and is called the eigenspace of 𝐴 corresponding to
the eigenvalue . This eigenspace consists of the zero vector and all the eigenvectors
corresponding to .

Example 7.2: Show that 𝜆 = 3 is an eigenvalue of the matrix

    𝐴 = ( 1   1 ).
        (−2   4 )

Answer: We need to show that there is a nonzero 𝒙 such that 𝐴𝒙 = 3𝒙.
That is, that 𝐴𝒙 − 3𝒙 = 𝟎 has nontrivial solutions.
This is so if and only if (𝐴 − 3𝐼)𝒙 = 𝟎 has nontrivial solutions.

    𝐴 − 3𝐼 = ( 1   1 ) − 3 ( 1   0 ) = (−2   1 )
             (−2   4 )     ( 0   1 )   (−2   1 )

Obviously, (𝐴 − 3𝐼)𝒙 = 𝟎 has nontrivial solutions, since the rows of 𝐴 − 3𝐼 are linearly
dependent. Therefore, 3 is an eigenvalue of 𝐴.
Note from Example 7.1 that 2 is also an eigenvalue of 𝐴.
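Equivalently (a small sketch, not from the notes), 𝜆 = 3 is an eigenvalue exactly when det(𝐴 − 3𝐼) = 0, which is easy to check with the 2 × 2 determinant formula:

```python
# Sketch: 3 is an eigenvalue of A iff det(A - 3I) = 0.
A = [[1, 1], [-2, 4]]
lam = 3
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]          # A - 3I = [[-2, 1], [-2, 1]]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
print(det)  # 0, so 3 is an eigenvalue of A
```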

Example 7.3: Determine the eigenvalues and the corresponding eigenspaces of the matrix

    𝐴 = (−1   0   2 ).
        ( 0   1   2 )
        ( 2   2   0 )

Answer:

    |𝜆𝐼 − 𝐴| = | 𝜆+1    0    −2 | = (𝜆 + 1)[(𝜆 − 1)𝜆 − 4] − 0 − 2[0 + 2(𝜆 − 1)]
               |  0    𝜆−1   −2 |
               | −2    −2     𝜆 |

Therefore, |𝜆𝐼 − 𝐴| = 𝜆³ − 𝜆² − 4𝜆 + 𝜆² − 𝜆 − 4 − 4𝜆 + 4 = 𝜆(𝜆² − 9).

Now, |𝜆𝐼 − 𝐴| = 0 if and only if 𝜆 = 0, 𝜆 = 3 or 𝜆 = −3.

Thus 0, 3 and −3 are the eigenvalues of 𝐴.

Suppose 𝜆 = 0. Substituting in 𝐴𝒙 = 𝜆𝒙 we obtain

    (−1   0   2 ) ( 𝑥₁ )   ( 0 )
    ( 0   1   2 ) ( 𝑥₂ ) = ( 0 ).
    ( 2   2   0 ) ( 𝑥₃ )   ( 0 )

Therefore we obtain  −𝑥₁ + 2𝑥₃ = 0
                      𝑥₂ + 2𝑥₃ = 0
                     2𝑥₁ + 2𝑥₂ = 0.

Thus 𝑥₂ = −2𝑥₃, 𝑥₁ = 2𝑥₃, and 𝑥₃ is free. Therefore, the set of eigenvectors
corresponding to the eigenvalue 𝜆 = 0 is

    { 𝑡 ( 2 ) | 𝑡 ∈ ℝ, 𝑡 ≠ 0 },
        (−2 )
        ( 1 )

and the eigenspace corresponding to 𝜆 = 0 is

    { 𝑡 ( 2 ) | 𝑡 ∈ ℝ }.
        (−2 )
        ( 1 )

We can similarly obtain the eigenspaces corresponding to the eigenvalues 3 and −3.
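A sketch (not from the notes; the helper name is our own) confirming the computation: the roots of 𝑘(𝜆) = 𝜆³ − 9𝜆 are 0, 3 and −3, and the basis vector of the eigenspace for 𝜆 = 0 is indeed sent to the zero vector by 𝐴:

```python
# Sketch: checking Example 7.3 in plain Python.
def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[-1, 0, 2], [0, 1, 2], [2, 2, 0]]

# 0, 3 and -3 are roots of the characteristic polynomial lambda^3 - 9*lambda
for lam in (0, 3, -3):
    assert lam**3 - 9 * lam == 0

v = [2, -2, 1]       # basis vector of the eigenspace for lambda = 0
print(matvec(A, v))  # [0, 0, 0] = 0 * v
```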

Properties

Result 7.1: If 𝜆 is an eigenvalue of the matrix 𝐴, then

(a) 𝜆² is an eigenvalue of 𝐴²,
(b) 𝜆 is an eigenvalue of 𝐴ᵀ,
(c) if 𝐴 is nonsingular, 1/𝜆 is an eigenvalue of 𝐴⁻¹.

Proof: Suppose 𝜆 is an eigenvalue of the matrix 𝐴.

(a) Since 𝜆 is an eigenvalue of 𝐴 we have that 𝐴𝒙 = 𝜆𝒙 has nonzero solutions 𝒙.
Since 𝐴𝒙 = 𝜆𝒙, we obtain 𝐴(𝐴𝒙) = 𝐴(𝜆𝒙).
Therefore, 𝐴²𝒙 = 𝜆(𝐴𝒙) = 𝜆(𝜆𝒙) = 𝜆²𝒙.
Hence, 𝜆² is an eigenvalue of 𝐴².

(b) Since 𝜆 is an eigenvalue of 𝐴 we have |𝜆𝐼 − 𝐴| = 0.
Now, |(𝜆𝐼 − 𝐴)ᵀ| = |𝜆𝐼ᵀ − 𝐴ᵀ| = |𝜆𝐼 − 𝐴ᵀ|.
Since |𝜆𝐼 − 𝐴| = |(𝜆𝐼 − 𝐴)ᵀ|, we obtain
|𝜆𝐼 − 𝐴| = 0 if and only if |(𝜆𝐼 − 𝐴)ᵀ| = 0 if and only if |𝜆𝐼 − 𝐴ᵀ| = 0.
Thus, 𝜆 is an eigenvalue of 𝐴 if and only if 𝜆 is an eigenvalue of 𝐴ᵀ. Therefore, since
𝜆 is an eigenvalue of 𝐴, it is also an eigenvalue of 𝐴ᵀ.

(c) Suppose 𝐴 is nonsingular. Since 𝜆 is an eigenvalue of 𝐴, we have that 𝐴𝒙 = 𝜆𝒙
has nonzero solutions 𝒙. Note that 𝜆 ≠ 0, since otherwise 𝐴𝒙 = 𝟎 would have a nonzero
solution and 𝐴 would be singular.
Since 𝐴𝒙 = 𝜆𝒙, we have that 𝐴⁻¹(𝐴𝒙) = 𝐴⁻¹(𝜆𝒙). Hence, 𝒙 = 𝜆𝐴⁻¹𝒙. Thus,
𝐴⁻¹𝒙 = (1/𝜆)𝒙. Therefore, 1/𝜆 is an eigenvalue of 𝐴⁻¹.

Result 7.2: The determinant of a matrix 𝐴 is the product of its eigenvalues.

Proof:
Suppose 𝜆₁, 𝜆₂, …, 𝜆ₙ are the roots of the characteristic polynomial 𝑘(𝜆). Then
|𝜆𝐼 − 𝐴| = (𝜆 − 𝜆₁)(𝜆 − 𝜆₂) … (𝜆 − 𝜆ₙ).
Setting 𝜆 = 0 we obtain |−𝐴| = (−1)ⁿ𝜆₁𝜆₂…𝜆ₙ.
Since |−𝐴| = (−1)ⁿ|𝐴|, this gives |𝐴| = 𝜆₁𝜆₂…𝜆ₙ.

Result 7.3: An 𝑛 × 𝑛 matrix 𝐴 is singular if and only if 0 is an eigenvalue of 𝐴.

Proof:
By the previous result, |𝐴| = 𝜆₁𝜆₂…𝜆ₙ. Thus |𝐴| = 0 if and only if one of the eigenvalues is 0.
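A quick check of Results 7.2 and 7.3 (a sketch, not from the notes) on the two matrices seen so far:

```python
# Sketch: det(A) equals the product of the eigenvalues (Results 7.2 and 7.3).
A = [[1, 1], [-2, 4]]                      # eigenvalues 2 and 3 (Examples 7.1 and 7.2)
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert detA == 2 * 3

B = [[-1, 0, 2], [0, 1, 2], [2, 2, 0]]     # eigenvalues 0, 3, -3 (Example 7.3)
detB = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
        - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
        + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
assert detB == 0 * 3 * (-3)                # B is singular: 0 is an eigenvalue
print(detA, detB)
```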

Definition 7.2: Let 𝑝(𝜆) = 𝜆ⁿ + 𝑝₁𝜆ⁿ⁻¹ + 𝑝₂𝜆ⁿ⁻² + ⋯ + 𝑝ₙ₋₁𝜆 + 𝑝ₙ be an arbitrary
polynomial of degree 𝑛. We define the matrix polynomial 𝑝(𝐴) in the matrix 𝐴 to be the
𝑛 × 𝑛 matrix obtained by replacing 𝜆 by 𝐴 in 𝑝(𝜆), i.e.,

    𝑝(𝐴) = 𝐴ⁿ + 𝑝₁𝐴ⁿ⁻¹ + 𝑝₂𝐴ⁿ⁻² + ⋯ + 𝑝ₙ₋₁𝐴 + 𝑝ₙ𝐼ₙ.

Cayley–Hamilton Theorem

Every matrix satisfies its own characteristic equation, i.e.,

    𝑘(𝐴) = 𝐴ⁿ + 𝑘₁𝐴ⁿ⁻¹ + 𝑘₂𝐴ⁿ⁻² + ⋯ + 𝑘ₙ₋₁𝐴 + 𝑘ₙ𝐼ₙ = 0 (the 𝑛 × 𝑛 zero matrix).

Example 7.4: Consider the matrix

    𝐴 = (−1   0   2 ).
        ( 0   1   2 )
        ( 2   2   0 )

Verify the Cayley-Hamilton theorem for this matrix.

Answer:
In Example 7.3 we showed that the characteristic equation of 𝐴 is 𝜆³ − 9𝜆 = 0.
Thus 𝑘(𝐴) = 𝐴³ − 9𝐴.

Now, 𝐴² = ( 5   4  −2 ),  and  𝐴³ = (−9   0  18 ) = 9 (−1   0   2 ) = 9𝐴.
          ( 4   5   2 )            ( 0   9  18 )     ( 0   1   2 )
          (−2   2   8 )            (18  18   0 )     ( 2   2   0 )

Thus, we see that 𝐴³ − 9𝐴 = 0.
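The same verification can be scripted (a sketch, not from the notes; the helper name is our own):

```python
# Sketch: verifying the Cayley-Hamilton theorem for A from Example 7.4.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[-1, 0, 2], [0, 1, 2], [2, 2, 0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
# k(A) = A^3 - 9A should be the 3 x 3 zero matrix
kA = [[A3[i][j] - 9 * A[i][j] for j in range(3)] for i in range(3)]
print(kA)
```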

Similar Matrices

Definition 7.3: A matrix 𝐵 is said to be similar to a square matrix 𝐴 if there is a nonsingular
(invertible) matrix 𝑃 such that 𝐵 = 𝑃⁻¹𝐴𝑃.

Example 7.5: The matrix

    𝐵 = ( 2   0 )
        ( 0   3 )

is similar to the matrix

    𝐴 = ( 1   1 )
        (−2   4 )

since 𝐵 = 𝑃⁻¹𝐴𝑃 for

    𝑃 = ( 1   1 ).
        ( 1   2 )
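The claimed similarity can be verified by direct multiplication (a sketch, not from the notes; here 𝑃⁻¹ is written out by hand, which is easy since det 𝑃 = 1):

```python
# Sketch: verifying Example 7.5, B = P^-1 A P.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A    = [[1, 1], [-2, 4]]
P    = [[1, 1], [1, 2]]
Pinv = [[2, -1], [-1, 1]]                  # inverse of P, since det P = 1
assert matmul(P, Pinv) == [[1, 0], [0, 1]]
B = matmul(matmul(Pinv, A), P)
print(B)  # [[2, 0], [0, 3]]
```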

Result 7.4:
(i) Every square matrix 𝐴 is similar to itself.
(ii) If the matrix 𝐵 is similar to the matrix 𝐴, then 𝐴 is similar to 𝐵.
(iii) If the matrix 𝐴 is similar to the matrix 𝐵 and 𝐵 is similar to the matrix 𝐶, then
𝐴 is similar to 𝐶.

Proof: Let 𝐴, 𝐵 and 𝐶 be square matrices of order 𝑛.

(i) 𝐴 = 𝐼⁻¹𝐴𝐼, where 𝐼 is the identity matrix of order 𝑛. Therefore, 𝐴 is similar
to itself.
(ii) Suppose the matrix 𝐵 is similar to the matrix 𝐴. Then there is a nonsingular
matrix 𝑃 such that 𝐵 = 𝑃⁻¹𝐴𝑃. Thus, 𝐴 = (𝑃⁻¹)⁻¹𝐵𝑃⁻¹, and hence 𝐴 is
similar to 𝐵.
(iii) Suppose the matrix 𝐴 is similar to the matrix 𝐵 and 𝐵 is similar to the matrix
𝐶. Then there are nonsingular matrices 𝑃 and 𝑄 such that 𝐴 = 𝑃⁻¹𝐵𝑃 and
𝐵 = 𝑄⁻¹𝐶𝑄. Therefore, 𝐴 = 𝑃⁻¹𝐵𝑃 = 𝑃⁻¹𝑄⁻¹𝐶𝑄𝑃 = (𝑄𝑃)⁻¹𝐶(𝑄𝑃),
and hence 𝐴 is similar to 𝐶.

Result 7.5: If the 𝑛 × 𝑛 matrices 𝐴 and 𝐵 are similar, then they have the same
characteristic polynomial and hence the same eigenvalues.

Proof: Suppose the 𝑛 × 𝑛 matrices 𝐴 and 𝐵 are similar. Let 𝐵 = 𝑃⁻¹𝐴𝑃, where 𝑃 is
nonsingular. Then 𝐵 − 𝜆𝐼 = 𝑃⁻¹𝐴𝑃 − 𝜆𝑃⁻¹𝑃 = 𝑃⁻¹(𝐴𝑃 − 𝜆𝑃) = 𝑃⁻¹(𝐴 − 𝜆𝐼)𝑃.
Therefore,
|𝐵 − 𝜆𝐼| = |𝑃⁻¹(𝐴 − 𝜆𝐼)𝑃| = |𝑃⁻¹||𝐴 − 𝜆𝐼||𝑃| = |𝑃⁻¹||𝑃||𝐴 − 𝜆𝐼| = |𝐴 − 𝜆𝐼|,
since |𝑃⁻¹||𝑃| = |𝑃⁻¹𝑃| = |𝐼| = 1.
That is, |𝐵 − 𝜆𝐼| = |𝐴 − 𝜆𝐼|, and hence 𝐴 and 𝐵 have the same characteristic polynomial
and therefore the same eigenvalues.

Diagonalization

Definition 7.4: We say that the matrix 𝐴 is diagonalizable if it is similar to a diagonal
matrix. In this case we also say that 𝐴 can be diagonalized.

Recall the following definition from section 4 (Definition 4.5).

A set of vectors {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 } in ℝⁿ is said to be linearly independent if the vector
equation 𝑥₁𝒗𝟏 + 𝑥₂𝒗𝟐 + ⋯ + 𝑥ₖ𝒗𝒌 = 𝟎 has only the trivial solution. The set
{𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 } is said to be linearly dependent if there is a non-trivial solution to the
equation 𝑥₁𝒗𝟏 + 𝑥₂𝒗𝟐 + ⋯ + 𝑥ₖ𝒗𝒌 = 𝟎.

Theorem 7.1: An 𝑛 × 𝑛 matrix 𝐴 is diagonalizable if and only if it has 𝑛 linearly
independent eigenvectors. In this case, 𝐴 is similar to a diagonal matrix 𝐷 whose diagonal
elements are the eigenvalues of 𝐴, and if 𝐷 = 𝑃⁻¹𝐴𝑃, then 𝑃 is a matrix whose columns
are respectively the 𝑛 linearly independent eigenvectors of 𝐴.

Proof: Omitted.

Example 7.6:

Let

    𝐴 = ( 1   1 ).   Then   𝜆𝐼 − 𝐴 = ( 𝜆−1   −1  ).
        ( 0   1 )                    (  0    𝜆−1 )

Therefore, |𝜆𝐼 − 𝐴| = (𝜆 − 1)² and the eigenvalues of 𝐴 are 𝜆₁ = 1 and 𝜆₂ = 1.

Thus if (𝑥, 𝑦)ᵀ is an eigenvector of 𝐴 we have

    ( 1   1 ) ( 𝑥 ) = 1 ( 𝑥 ).
    ( 0   1 ) ( 𝑦 )     ( 𝑦 )

Hence, 𝑥 + 𝑦 = 𝑥, and therefore 𝑦 = 0.

The corresponding eigenvectors are of the form (𝑟, 0)ᵀ where 𝑟 ≠ 0, so 𝐴 does not have
two linearly independent eigenvectors. Therefore, 𝐴 is not diagonalizable by Theorem 7.1.

Example 7.7:

Let

    𝐴 = ( 1   1 ).   Then   𝜆𝐼 − 𝐴 = ( 𝜆−1   −1  ).
        (−2   4 )                    (  2    𝜆−4 )

Therefore, |𝜆𝐼 − 𝐴| = (𝜆 − 1)(𝜆 − 4) + 2 = 𝜆² − 5𝜆 + 6.

Thus, |𝜆𝐼 − 𝐴| = 0 if and only if 𝜆² − 5𝜆 + 6 = (𝜆 − 3)(𝜆 − 2) = 0. Therefore, the
eigenvalues of 𝐴 are 𝜆 = 3 and 𝜆 = 2.

If (𝑥, 𝑦)ᵀ is an eigenvector of 𝐴 corresponding to 𝜆 = 3 we have

    ( 1   1 ) ( 𝑥 ) = 3 ( 𝑥 ).
    (−2   4 ) ( 𝑦 )     ( 𝑦 )

Hence 𝑥 + 𝑦 = 3𝑥 and −2𝑥 + 4𝑦 = 3𝑦. Therefore, 𝑦 = 2𝑥.

Thus, any vector of the form (𝑟, 2𝑟)ᵀ, where 𝑟 is a nonzero real number, is an eigenvector
which corresponds to the eigenvalue 𝜆 = 3. In particular, (1, 2)ᵀ is an eigenvector
corresponding to the eigenvalue 𝜆 = 3.

It can similarly be shown that (𝑟, 𝑟)ᵀ, where 𝑟 is a nonzero real number, is an eigenvector
which corresponds to the eigenvalue 𝜆 = 2. Thus, (1, 1)ᵀ is an eigenvector corresponding
to the eigenvalue 𝜆 = 2.

Now, 𝑎(1, 2)ᵀ + 𝑏(1, 1)ᵀ = (0, 0)ᵀ implies that 𝑎 + 𝑏 = 0 and 2𝑎 + 𝑏 = 0.
Therefore 𝑎 = 𝑏 = 0, and hence (1, 2)ᵀ and (1, 1)ᵀ are linearly independent.
Therefore, 𝐴 is diagonalizable by Theorem 7.1.

By taking 𝜆₁ = 3 and 𝜆₂ = 2 we get

    𝑃 = ( 1   1 )   and   𝑃⁻¹ = (−1    1 ).
        ( 2   1 )               ( 2   −1 )

Hence

    𝑃⁻¹𝐴𝑃 = (−1    1 ) ( 1   1 ) ( 1   1 ) = ( 3   0 ).
            ( 2   −1 ) (−2   4 ) ( 2   1 )   ( 0   2 )

Also, by taking 𝜆₁ = 2 and 𝜆₂ = 3 we get

    𝑃 = ( 1   1 )   and   𝑃⁻¹ = ( 2   −1 ).
        ( 1   2 )               (−1    1 )

Hence

    𝑃⁻¹𝐴𝑃 = ( 2   −1 ) ( 1   1 ) ( 1   1 ) = ( 2   0 ).
            (−1    1 ) (−2   4 ) ( 1   2 )   ( 0   3 )
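Both diagonalizations can be confirmed by multiplying out (a sketch, not from the notes); note how reordering the eigenvector columns of 𝑃 reorders the diagonal entries:

```python
# Sketch: checking the two diagonalizations of A in Example 7.7.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 1], [-2, 4]]

# columns of P are the eigenvectors (1, 2) for lambda = 3 and (1, 1) for lambda = 2
P, Pinv = [[1, 1], [2, 1]], [[-1, 1], [2, -1]]
print(matmul(matmul(Pinv, A), P))   # [[3, 0], [0, 2]]

# swapping the eigenvector columns swaps the diagonal entries
Q, Qinv = [[1, 1], [1, 2]], [[2, -1], [-1, 1]]
print(matmul(matmul(Qinv, A), Q))   # [[2, 0], [0, 3]]
```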

Theorem 7.2: If the eigenvalues of a square matrix are distinct, then the associated set
of eigenvectors is linearly independent.

Proof:
Suppose 𝜆₁, 𝜆₂, …, 𝜆ₙ are the distinct eigenvalues of the 𝑛 × 𝑛 matrix 𝐴 and 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒏
are the corresponding eigenvectors. Suppose {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒏 } is linearly dependent. Then
there is a least index 𝑝 such that 𝒗𝒑+𝟏 is a linear combination of the preceding (linearly
independent) vectors.
That is, there are scalars 𝑐₁, 𝑐₂, …, 𝑐ₚ such that 𝑐₁𝒗𝟏 + 𝑐₂𝒗𝟐 + ⋯ + 𝑐ₚ𝒗𝒑 = 𝒗𝒑+𝟏 -------(1)
Therefore, 𝐴(𝑐₁𝒗𝟏 + 𝑐₂𝒗𝟐 + ⋯ + 𝑐ₚ𝒗𝒑 ) = 𝐴𝒗𝒑+𝟏.
That is, 𝑐₁𝐴𝒗𝟏 + 𝑐₂𝐴𝒗𝟐 + ⋯ + 𝑐ₚ𝐴𝒗𝒑 = 𝐴𝒗𝒑+𝟏.
Since 𝐴𝒗𝒌 = 𝜆ₖ𝒗𝒌 for each 𝑘, by substituting we obtain
𝑐₁𝜆₁𝒗𝟏 + 𝑐₂𝜆₂𝒗𝟐 + ⋯ + 𝑐ₚ𝜆ₚ𝒗𝒑 = 𝜆ₚ₊₁𝒗𝒑+𝟏 -------(2)
Multiplying equation (1) by 𝜆ₚ₊₁ and subtracting the result from (2) we obtain
𝑐₁(𝜆₁ − 𝜆ₚ₊₁)𝒗𝟏 + 𝑐₂(𝜆₂ − 𝜆ₚ₊₁)𝒗𝟐 + ⋯ + 𝑐ₚ(𝜆ₚ − 𝜆ₚ₊₁)𝒗𝒑 = 𝟎.
Since {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒑 } is linearly independent and the eigenvalues are all distinct, we have
𝑐₁ = 𝑐₂ = ⋯ = 𝑐ₚ = 0. But this means that 𝒗𝒑+𝟏 = 𝟎 from (1), which is impossible as
𝒗𝒑+𝟏 is an eigenvector. Therefore, {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒏 } cannot be linearly dependent and
hence is linearly independent.

Theorem 7.3: If an 𝑛 × 𝑛 matrix 𝐴 has 𝑛 distinct eigenvalues, then 𝐴 is diagonalizable.

Proof: Let 𝜆₁, 𝜆₂, …, 𝜆ₙ be the distinct eigenvalues of the 𝑛 × 𝑛 matrix 𝐴 and let
𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒏 be the corresponding eigenvectors. Then {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒏 } is linearly
independent by Theorem 7.2. Therefore, 𝐴 is diagonalizable by Theorem 7.1.

Note: If the eigenvalues of an 𝑛 × 𝑛 matrix 𝐴 are not distinct, 𝐴 may or may not have 𝑛
linearly independent eigenvectors. (See Examples 7.8 and 7.9.)

Example 7.8: Diagonalize the matrix

    𝐴 = ( 1   3   3 ).
        (−3  −5  −3 )
        ( 3   3   1 )

Answer:

    |𝜆𝐼 − 𝐴| = | 𝜆−1   −3    −3  |
               |  3    𝜆+5    3  |
               | −3    −3    𝜆−1 |

             = (𝜆 − 1)[(𝜆 + 5)(𝜆 − 1) + 9] + 3[3(𝜆 − 1) + 9] − 3[−9 + 3(𝜆 + 5)]
             = (𝜆 − 1)[𝜆² + 4𝜆 + 4] + 9(𝜆 + 2) − 9(𝜆 + 2)
             = (𝜆 − 1)(𝜆 + 2)²

Thus the eigenvalues of 𝐴 are 1 and −2.

If (𝑥, 𝑦, 𝑧)ᵀ is an eigenvector corresponding to the eigenvalue 1, then

    ( 1   3   3 ) ( 𝑥 )     ( 𝑥 )
    (−3  −5  −3 ) ( 𝑦 ) = 1 ( 𝑦 ).
    ( 3   3   1 ) ( 𝑧 )     ( 𝑧 )

Therefore, 𝑥 + 3𝑦 + 3𝑧 = 𝑥, −3𝑥 − 5𝑦 − 3𝑧 = 𝑦 and 3𝑥 + 3𝑦 + 𝑧 = 𝑧.
This gives us 𝑧 = −𝑦, 𝑥 = −𝑦, with 𝑦 free.
Hence the eigenspace corresponding to 𝜆 = 1 is

    { 𝑡 (−1 ) | 𝑡 ∈ ℝ }.
        ( 1 )
        (−1 )

If (𝑥, 𝑦, 𝑧)ᵀ is an eigenvector corresponding to the eigenvalue −2, then

    ( 1   3   3 ) ( 𝑥 )      ( 𝑥 )
    (−3  −5  −3 ) ( 𝑦 ) = −2 ( 𝑦 ).
    ( 3   3   1 ) ( 𝑧 )      ( 𝑧 )

Therefore, 𝑥 + 3𝑦 + 3𝑧 = −2𝑥, −3𝑥 − 5𝑦 − 3𝑧 = −2𝑦 and 3𝑥 + 3𝑦 + 𝑧 = −2𝑧.
This gives us 𝑥 = −𝑦 − 𝑧, with 𝑦 and 𝑧 free.
Hence the eigenspace corresponding to 𝜆 = −2 is

    { 𝑡 (−1 ) + 𝑟 (−1 ) | 𝑡, 𝑟 ∈ ℝ }.
        ( 1 )     ( 0 )
        ( 0 )     ( 1 )

Thus, (−1, 1, −1)ᵀ, (−1, 1, 0)ᵀ and (−1, 0, 1)ᵀ are eigenvectors of 𝐴.

Suppose 𝑎(−1, 1, −1)ᵀ + 𝑏(−1, 1, 0)ᵀ + 𝑐(−1, 0, 1)ᵀ = (0, 0, 0)ᵀ.
Then we have −𝑎 − 𝑏 − 𝑐 = 0, 𝑎 + 𝑏 = 0 and −𝑎 + 𝑐 = 0, from which we can
conclude that 𝑎 = 𝑏 = 𝑐 = 0. Thus, {(−1, 1, −1)ᵀ, (−1, 1, 0)ᵀ, (−1, 0, 1)ᵀ} is a linearly
independent set of 3 eigenvectors. Hence, 𝐴 is diagonalizable by Theorem 7.1, and
𝐷 = 𝑃⁻¹𝐴𝑃, where

    𝐷 = ( 1   0   0 )   and   𝑃 = (−1  −1  −1 ).
        ( 0  −2   0 )             ( 1   1   0 )
        ( 0   0  −2 )             (−1   0   1 )
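A direct check of this diagonalization (a sketch, not from the notes; 𝑃⁻¹ was computed by hand):

```python
# Sketch: verifying D = P^-1 A P for Example 7.8.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A    = [[1, 3, 3], [-3, -5, -3], [3, 3, 1]]
P    = [[-1, -1, -1], [1, 1, 0], [-1, 0, 1]]
Pinv = [[-1, -1, -1], [1, 2, 1], [-1, -1, 0]]   # inverse of P, computed by hand
assert matmul(P, Pinv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
D = matmul(matmul(Pinv, A), P)
print(D)  # diag(1, -2, -2)
```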

Example 7.9: Diagonalize the matrix

    𝐴 = ( 2   4   3 )
        (−4  −6  −3 )
        ( 3   3   1 )

if possible.

Answer:

    |𝜆𝐼 − 𝐴| = | 𝜆−2   −4    −3  |
               |  4    𝜆+6    3  |
               | −3    −3    𝜆−1 |

             = (𝜆 − 2)[(𝜆 + 6)(𝜆 − 1) + 9] + 4[4(𝜆 − 1) + 9] − 3[−12 + 3(𝜆 + 6)]
             = (𝜆 − 2)[𝜆² + 5𝜆 + 3] + 4(4𝜆 + 5) − 9(𝜆 + 2)
             = 𝜆³ + 3𝜆² − 7𝜆 − 6 + 16𝜆 + 20 − 9𝜆 − 18
             = 𝜆³ + 3𝜆² − 4 = (𝜆 − 1)(𝜆 + 2)²

Thus the eigenvalues are 1 and −2.

If (𝑥, 𝑦, 𝑧)ᵀ is an eigenvector corresponding to the eigenvalue 1, then

    ( 2   4   3 ) ( 𝑥 )     ( 𝑥 )
    (−4  −6  −3 ) ( 𝑦 ) = 1 ( 𝑦 ).
    ( 3   3   1 ) ( 𝑧 )     ( 𝑧 )

Therefore, 2𝑥 + 4𝑦 + 3𝑧 = 𝑥, −4𝑥 − 6𝑦 − 3𝑧 = 𝑦 and 3𝑥 + 3𝑦 + 𝑧 = 𝑧.
This gives us 𝑥 = −𝑦, 𝑧 = −𝑦, with 𝑦 free.
Hence the eigenspace corresponding to the eigenvalue 𝜆 = 1 is

    { 𝑡 (−1 ) | 𝑡 ∈ ℝ }.
        ( 1 )
        (−1 )

If (𝑥, 𝑦, 𝑧)ᵀ is an eigenvector corresponding to the eigenvalue −2, then

    ( 2   4   3 ) ( 𝑥 )      ( 𝑥 )
    (−4  −6  −3 ) ( 𝑦 ) = −2 ( 𝑦 ).
    ( 3   3   1 ) ( 𝑧 )      ( 𝑧 )

Therefore, 2𝑥 + 4𝑦 + 3𝑧 = −2𝑥, −4𝑥 − 6𝑦 − 3𝑧 = −2𝑦 and 3𝑥 + 3𝑦 + 𝑧 = −2𝑧.
This gives us 𝑥 = −𝑦, 𝑧 = 0. Hence the eigenspace corresponding to 𝜆 = −2 is

    { 𝑡 (−1 ) | 𝑡 ∈ ℝ }.
        ( 1 )
        ( 0 )

We see that in this case we do not have two independent eigenvectors corresponding to
the repeated eigenvalue 𝜆 = −2, and hence 𝐴 is not diagonalizable.
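A sketch (not from the notes, with a hand-rolled exact rank helper) confirming the failure of Theorem 7.1 here: the eigenspace of the double root 𝜆 = −2 is only one-dimensional.

```python
# Sketch: A from Example 7.9 has too few independent eigenvectors.
from fractions import Fraction

def rank(M):
    """Rank via Gauss-Jordan elimination over exact fractions."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 4, 3], [-4, -6, -3], [3, 3, 1]]
# eigenspace for lambda = -2 is the null space of A + 2I; its dimension is 3 - rank
A_plus_2I = [[A[i][j] + 2 * (i == j) for j in range(3)] for i in range(3)]
print(3 - rank(A_plus_2I))  # 1: only one independent eigenvector for the double root
```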

Determining powers of a diagonalizable matrix

If 𝐴 is a diagonalizable matrix then there exists an invertible matrix 𝑃 and a diagonal
matrix 𝐷 such that 𝐷 = 𝑃⁻¹𝐴𝑃.
This means that 𝐴 = 𝑃𝐷𝑃⁻¹.
Therefore, 𝐴𝐴 = 𝑃𝐷𝑃⁻¹𝑃𝐷𝑃⁻¹.
That is, 𝐴² = 𝑃𝐷²𝑃⁻¹.
Moreover, 𝐴𝐴² = 𝑃𝐷𝑃⁻¹𝑃𝐷²𝑃⁻¹.
Therefore, 𝐴³ = 𝑃𝐷³𝑃⁻¹.
In general, for an integer 𝑘 ≥ 1, 𝐴ᵏ = 𝑃𝐷ᵏ𝑃⁻¹.
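This shortcut is easy to code (a sketch, not from the notes; helper names are our own): only the diagonal entries get raised to the power 𝑘.

```python
# Sketch: computing A^k as P D^k P^-1.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def power_via_diag(P, diag, Pinv, k):
    """A^k = P D^k P^-1, where diag holds the diagonal entries of D."""
    n = len(diag)
    Dk = [[diag[i] ** k if i == j else 0 for j in range(n)] for i in range(n)]
    return matmul(matmul(P, Dk), Pinv)

# A = [[1, 1], [-2, 4]] from Example 7.7, with D = diag(3, 2)
P, Pinv = [[1, 1], [2, 1]], [[-1, 1], [2, -1]]
print(power_via_diag(P, [3, 2], Pinv, 1))  # recovers A itself
print(power_via_diag(P, [3, 2], Pinv, 3))  # A^3
```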

Example 7.10: Determine 𝐴⁴ for

    𝐴 = ( 1   3   3 )
        (−3  −5  −3 )
        ( 3   3   1 )

from Example 7.8.

Answer:
From Example 7.8 we obtain

    𝐷 = ( 1   0   0 )   and   𝑃 = (−1  −1  −1 ).
        ( 0  −2   0 )             ( 1   1   0 )
        ( 0   0  −2 )             (−1   0   1 )

Now,

    𝐷² = ( 1   0   0 ) ( 1   0   0 ) = ( (1)²    0      0   )
         ( 0  −2   0 ) ( 0  −2   0 )   (  0    (−2)²    0   )
         ( 0   0  −2 ) ( 0   0  −2 )   (  0      0    (−2)² )

and hence

    𝐷⁴ = ( (1)⁴    0      0   ) = ( 1    0    0 ).
         (  0    (−2)⁴    0   )   ( 0   16    0 )
         (  0      0    (−2)⁴ )   ( 0    0   16 )

Therefore,

    𝐴⁴ = 𝑃𝐷⁴𝑃⁻¹ = (−1  −1  −1 ) ( 1    0    0 ) (−1  −1  −1 )
                  ( 1   1   0 ) ( 0   16    0 ) ( 1   2   1 )
                  (−1   0   1 ) ( 0    0   16 ) (−1  −1   0 )

                = (−1  −1  −1 ) ( −1   −1   −1 ) = (  1  −15  −15 ).
                  ( 1   1   0 ) ( 16   32   16 )   ( 15   31   15 )
                  (−1   0   1 ) (−16  −16    0 )   (−15  −15    1 )
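The result can be double-checked by squaring 𝐴 twice (a sketch, not from the notes):

```python
# Sketch: verifying A^4 by repeated squaring.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 3, 3], [-3, -5, -3], [3, 3, 1]]
A2 = matmul(A, A)
A4 = matmul(A2, A2)
print(A4)  # [[1, -15, -15], [15, 31, 15], [-15, -15, 1]]
```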
