
Lecture 10: The Jordan Decomposition

10 Eigenbases and the Jordan Form


Change of basis is a powerful tool, and often, we would like to work in as natural a basis as possible.

Guiding Question
Given a linear operator, how can we find a basis in which the matrix is as nice as possible?

10.1 Review
Last time, we learned about eigenvectors and eigenvalues of linear operators, or more concretely, matrices, on
vector spaces. An eigenvector is a (nonzero) vector sent to itself, up to scaling, under the linear operator, and
its associated eigenvalue is the scaling factor; that is, if A⃗v = λ⃗v for some scalar λ, ⃗v is an eigenvector with
associated eigenvalue λ.
 
If there exists an eigenbasis,^{30} then in that basis, the linear operator P^{-1}AP^{31} will simply become
$$\begin{pmatrix} \lambda_1 & \cdots & 0 \\ 0 & \ddots & 0 \\ 0 & \cdots & \lambda_n \end{pmatrix},$$
since each basis vector ⃗vi is sent to λi⃗vi . So having an eigenbasis is equivalent to the matrix A being similar to
a diagonal matrix.
In order to concretely find the eigenvectors, it is easier to first find the eigenvalues, which are the roots of the characteristic polynomial p_A(t) = det(tI_n − A). Each root λ of p_A has at least one corresponding eigenvector. The eigenvectors for λ are precisely the nonzero vectors in ker(λI_n − A).
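As a concrete illustration of this recipe (our addition, not part of the original notes; the sample matrix is an arbitrary choice), here is a short Python/sympy sketch that computes p_A(t) = det(tI − A), finds its roots, and extracts the eigenvectors as kernels.

# Illustrative sketch (not from the lecture): eigenvalues as roots of
# det(tI - A), eigenvectors as the kernel of (lambda*I - A).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1],
               [1, 2]])                         # arbitrary sample matrix

p = (t * sp.eye(2) - A).det()                   # characteristic polynomial p_A(t)
for lam in sp.solve(p, t):                      # its roots, here 1 and 3
    kernel = (lam * sp.eye(2) - A).nullspace()  # basis of the lambda-eigenspace
    print(lam, kernel)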

10.2 The Characteristic Polynomial


Let’s start with an example.

Example 10.1

If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then p_A(t) = t^2 − (a + d)t + (ad − bc).

In general, for an n×n matrix,


$$p_A(t) = t^n - (a_{11} + \cdots + a_{nn})t^{n-1} + \cdots + (-1)^n \det(A).$$
Up to sign, the coefficient of t^{n−1} is the sum of the entries on the diagonal, called the trace of A.
Because the characteristic polynomial can be written as a determinant, it can be defined for general linear operators without specifying a basis, and so each of the coefficients is basis-independent. In particular, we get that
$$\operatorname{Tr}(P^{-1}AP) = \operatorname{Tr}(A).$$
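As a quick numerical sanity check of this basis-independence (again our own illustration, with an arbitrarily chosen invertible P):

# Sketch: the trace is invariant under conjugation A -> P^(-1) A P.
import sympy as sp

A = sp.Matrix([[2, 1], [1, 2]])
P = sp.Matrix([[1, 1], [0, 1]])  # any invertible change-of-basis matrix
assert (P.inv() * A * P).trace() == A.trace()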

Guiding Question
What can go wrong when hunting for an eigenbasis? How can we fix this?

Example 10.2
Over the real numbers R, take $A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. The characteristic polynomial is p_A(t) = t^2 − (2 cos θ)t + 1, which has no real roots (unless θ is a multiple of π). Geometrically, that makes sense, because under a rotation by θ ≠ kπ, every vector will end up pointing in a different direction than it initially was, so there should be no real eigenvectors.

Over a general field F, it is certainly possible for the characteristic polynomial not to have any roots at all; in order to fix this issue, we work over a field like C,^{32} where every degree n polynomial always has n roots (with
30 A basis consisting of eigenvectors
31 This comes from the change of basis formula
32 Fields where every non-constant polynomial has roots are called algebraically closed.


multiplicity). So p_A(t) = (t − λ_1) · · · (t − λ_n), where the λ_i can repeat. For the rest of the lecture, we will only consider linear operators on vector spaces over C, which takes care of the first obstacle of finding eigenvalues.
However, even over C, not every linear operator has an eigenbasis.

Example 10.3
Consider $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The characteristic polynomial is p_A(t) = t^2, so if A were similar to some diagonal matrix, it would be similar to the zero matrix; this would mean that A would be the zero matrix, and thus A cannot be diagonalizable.
In other words, p_A(t) only has one root, 0, so any eigenvector would be in ker(0 · I_2 − A) = Span(⃗e_1). So there is only a one-dimensional space of eigenvectors, which is not enough to create an eigenbasis.
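This computation is easy to confirm in sympy (an illustrative sketch of ours, not from the lecture): eigenvects() reports a single independent eigenvector for an eigenvalue of algebraic multiplicity two.

# Sketch: A = [[0,1],[0,0]] has a one-dimensional eigenspace, so no eigenbasis.
import sympy as sp

A = sp.Matrix([[0, 1],
               [0, 0]])
print(A.eigenvects())         # [(0, 2, [Matrix([[1], [0]])])]: eigenvalue 0 has
                              # multiplicity 2 but only one eigenvector, e_1
print(A.is_diagonalizable())  # False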

In some sense, which we will make precise later on in this lecture, this is the prototypical example of a nondiagonalizable linear operator.

Proposition 10.4
Given an n×n matrix A and eigenvectors ⃗v_1, · · · , ⃗v_k with distinct eigenvalues λ_1, · · · , λ_k (so A⃗v_i = λ_i⃗v_i), the vectors ⃗v_i are all linearly independent.

Proof. Let’s prove this by induction.


Base Case. If k = 1, then by the definition of an eigenvector, ⃗v_1 ≠ 0, so {⃗v_1} is linearly independent.
Inductive Hypothesis. Suppose the proposition is true for k − 1.
Inductive Step. Now, suppose the proposition is not true for k. Then there exist coefficients a_i, not all zero, such that
$$\sum_{i=1}^{k} a_i \vec{v}_i = 0.$$
Applying A to both sides, we get
$$\sum_{i=1}^{k} a_i \lambda_i \vec{v}_i = 0,$$
which is another linear relation between the k vectors. Subtracting λ_k times the first relation from the second one results in the linear relation
$$\sum_{i=1}^{k} a_i(\lambda_i - \lambda_k)\vec{v}_i = \sum_{i=1}^{k-1} a_i(\lambda_i - \lambda_k)\vec{v}_i = 0.$$

Since in the last sum λ_k − λ_k = 0, while λ_i − λ_k ≠ 0 for i ≠ k since the λ_i are distinct, we obtain a linear relation between ⃗v_1, · · · , ⃗v_{k−1}; it is nontrivial, because if every a_i with i < k were zero, the original relation would reduce to a_k⃗v_k = 0 and force a_k = 0 as well. This contradicts the inductive hypothesis, so {⃗v_i}_{i=1,··· ,k} is linearly independent.

Corollary 10.5
Consider a matrix A. If the characteristic polynomial is

pA (t) = (t − λ1 ) · · · (t − λn )

where each λi is distinct, A will have an eigenbasis and will thus be diagonalizable.

Proof. Each eigenvalue must have at least one eigenvector, so take ⃗v_1, · · · , ⃗v_n to be eigenvectors for λ_1, · · · , λ_n, respectively. Since there are n eigenvectors, which is the same as the dimension of the vector space, and by Proposition 10.4 they are linearly independent, they form an eigenbasis and A is diagonalizable.
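For instance (our illustration; sympy's diagonalize() returns P and D with A = P D P^(-1)):

# Sketch: a matrix with distinct eigenvalues is diagonalizable.
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 3]])      # upper triangular, so the eigenvalues 2 and 3 are distinct
P, D = A.diagonalize()       # the columns of P form an eigenbasis
assert P.inv() * A * P == D  # D = diag(2, 3)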

If there are repeated roots, then there will not necessarily be enough eigenvectors to form a basis. Luckily for
us, it is usually true that a matrix will be diagonalizable.33
33 More concretely, the space of n×n square matrices can be thought of as a metric space, and the non-diagonalizable matrices form a set of measure zero. In particular, the diagonalizable matrices are dense in the space of all square matrices. Intuitively, given a non-diagonalizable matrix, perturbing the entries a little bit will perturb the roots of the characteristic polynomial a little bit, making the repeated roots distinct.


In general, p_A(t) = (t − λ_1)^{e_1} · · · (t − λ_k)^{e_k}, where the λ_i are distinct. Let V_{λ_i} = ker(λ_i I − A). Any nonzero vector ⃗v ∈ V_{λ_i} is an eigenvector with eigenvalue λ_i. We know that for each i, dim V_{λ_i} ≥ 1. Using our proposition, given a basis for each subspace V_{λ_i}, if there are enough to get n total vectors, then combining all the bases would give an eigenbasis for A, since they would all be linearly independent.^{34}
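To make this dimension count concrete (our own sketch, with an arbitrary example matrix), one can compare the total number of independent eigenvectors against n:

# Sketch: an eigenbasis exists iff the eigenspace dimensions sum to n.
import sympy as sp

A = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 2]])   # p_A(t) = (t - 1)^2 (t - 2)
n = A.rows
total = sum(len(basis) for lam, mult, basis in A.eigenvects())
print(total == n)            # False: dim V_1 = 1 < e_1 = 2, so no eigenbasis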

10.3 Jordan Form


 
Keeping in mind the matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, we have the following question.

Guiding Question
If a matrix is not diagonalizable, what is the nicest form it can take on under a change of basis?

Let’s see a class of matrices that always have the issue of repeated eigenvalues.

Definition 10.6
Given a ≥ 1 and λ ∈ F, let the Jordan block J_a(λ) be the a×a matrix
$$J_a(\lambda) = \begin{pmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{pmatrix},$$
with λ on the diagonal, 1s directly above the diagonal, and 0s elsewhere.^a

^a The notation here is a little different from the textbook.

Example 10.7
For a = 1, 2, 3, we get
$$\begin{pmatrix} \lambda \end{pmatrix}, \qquad \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}, \qquad \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix}.$$

For J_a(λ), the characteristic polynomial is (t − λ)^a, and when a > 1, the only eigenvectors are the nonzero multiples of ⃗e_1, so it will not be diagonalizable. Although these Jordan blocks are very specific matrices, in some sense they are exactly the sources of all the problems.

Example 10.8 (J_4(0))
The matrix
$$J_4(0) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
Applying it to the basis vectors gives a "chain of vectors"
$$\vec{e}_4 \mapsto \vec{e}_3 \mapsto \vec{e}_2 \mapsto \vec{e}_1 \mapsto 0,$$

where each basis vector is mapped to the next.
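This chain behavior is easy to verify directly (an illustrative sketch of ours, not from the lecture):

# Sketch: J_4(0) sends e_4 -> e_3 -> e_2 -> e_1 -> 0, and J_4(0)^4 = 0.
import sympy as sp

J = sp.Matrix(4, 4, lambda i, j: 1 if j == i + 1 else 0)     # J_4(0)
e = [sp.Matrix([int(k == m) for k in range(4)]) for m in range(4)]

print(J * e[3] == e[2], J * e[2] == e[1], J * e[1] == e[0])  # True True True
print(J * e[0] == sp.zeros(4, 1))                            # True: e_1 maps to 0
print(J**4 == sp.zeros(4, 4))                                # True: J is nilpotent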

This leads us to the main theorem.


34 Computationally, it is simple to find basis vectors, and in a more computational class we would go further in-depth on finding these.


Theorem 10.9 (Jordan Decomposition Theorem)
Given a linear operator T : V −→ V, where dim V = n, there exist a basis ⃗v_1, · · · , ⃗v_n and pairs (a_1, λ_1), · · · , (a_r, λ_r) with a_1 + · · · + a_r = n such that the matrix of T in this basis is the block diagonal matrix
$$\begin{pmatrix} J_{a_1}(\lambda_1) & & & \\ & J_{a_2}(\lambda_2) & & \\ & & \ddots & \\ & & & J_{a_r}(\lambda_r) \end{pmatrix},$$
where all other entries are 0.

While it is not possible to diagonalize every linear operator, it is possible to write every linear operator as a block diagonal matrix with Jordan blocks on the diagonal. Additionally, these Jordan blocks are unique up to rearrangement. This block diagonal matrix is called the Jordan decomposition of the linear operator T.
Student Question. Do the ai correspond to the exponents of the roots?
Answer. Not quite, as we will see promptly.
Let’s continue with some examples.

Example 10.10 (n = 4)
If we have a_1 = 4, then the Jordan decomposition will look like
$$\begin{pmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & \lambda_1 & 1 \\ 0 & 0 & 0 & \lambda_1 \end{pmatrix}.$$
If a_1 = 3 and a_2 = 1, then it will look like
$$\begin{pmatrix} \lambda_1 & 1 & 0 & \\ 0 & \lambda_1 & 1 & \\ 0 & 0 & \lambda_1 & \\ & & & \lambda_2 \end{pmatrix}.$$
For a_1 = 2 and a_2 = 2, we have
$$\begin{pmatrix} \lambda_1 & 1 & & \\ 0 & \lambda_1 & & \\ & & \lambda_2 & 1 \\ & & 0 & \lambda_2 \end{pmatrix}.$$
Where a_1 = 2, a_2 = 1, and a_3 = 1, we would have
$$\begin{pmatrix} \lambda_1 & 1 & & \\ 0 & \lambda_1 & & \\ & & \lambda_2 & \\ & & & \lambda_3 \end{pmatrix},$$
and for a_1 = a_2 = a_3 = a_4 = 1, we would just get
$$\begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \lambda_3 & \\ & & & \lambda_4 \end{pmatrix},$$
which is really just a diagonal matrix.

Essentially, every matrix is similar to some Jordan decomposition matrix, and it is diagonalizable if and only if each a_i = 1.
The characteristic polynomial of T will be (t − λ_1)^{a_1} · · · (t − λ_r)^{a_r}. These exponents a_i are not quite the same as the exponents e_i from before, since the λ_i appearing in the Jordan blocks can repeat. However, for a given eigenvalue λ_j, the sum of the sizes a_i of the blocks with eigenvalue λ_j will in fact be e_j. From the characteristic polynomial of a matrix alone, it is not possible to precisely figure out the Jordan decomposition, but it does provide some amount of information. Next class, we will continue seeing what information we get from the Jordan Decomposition Theorem.
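For completeness, here is a sketch (our addition, with an arbitrarily chosen 4×4 example) of computing a Jordan decomposition in sympy; jordan_form() returns P and J with A = P J P^(-1).

# Sketch: computing the Jordan form of a sample matrix whose blocks are
# J_2(2), J_1(2), J_1(3); note p_A(t) = (t - 2)^3 (t - 3), so the block sizes
# for eigenvalue 2 sum to the exponent 3.
import sympy as sp

A = sp.Matrix([[2, 1, 0, 0],
               [0, 2, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3]])
P, J = A.jordan_form()
print(J)                     # the Jordan blocks appear on the diagonal
assert A == P * J * P.inv()  # changing basis recovers A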

MIT OpenCourseWare
https://ocw.mit.edu

Resource: Algebra I Student Notes


Fall 2021
Instructor: Davesh Maulik
Notes taken by Jakin Ng, Sanjana Das, and Ethan Yang

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.
