Linear Algebra Spring 2020 6th
Question
Let's ask this question: given a linear transformation T : V → V, where V is an n-dimensional vector space over F, does there exist an ordered basis B = (v_1, ..., v_n) such that the matrix [T]_B is a diagonal matrix:

[T]_B = \begin{pmatrix} \alpha_1 & 0 & \cdots & 0 \\ 0 & \alpha_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_n \end{pmatrix}   (1)

Obviously if this occurs then we should have Tv_i = α_i v_i for 1 ≤ i ≤ n. In other words the v_i's are eigenvectors and the α_i's are eigenvalues:
Eigenvalues and Eigenvectors
Definition 1.
Let V be a vector space over a field F and let T : V → V be a linear transformation. A scalar α ∈ F is called an eigenvalue of T if there exists a nonzero vector v ∈ V such that

Tv = αv.

The vector v is called an eigenvector of T.
In other words, eigenvalues are those elements α ∈ F for which ker(T − αI) is a nontrivial subspace of V, and each nonzero element of ker(T − αI) is called an eigenvector of T corresponding to the eigenvalue α. The subspace ker(T − αI) is also called the eigenspace of T corresponding to the eigenvalue α ∈ F.
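As a quick numerical illustration (not part of the original slides), the defining relation Tv = αv can be checked for a concrete matrix; this minimal sketch uses NumPy's `eig`, and the matrix is an arbitrary example chosen here:

```python
import numpy as np

# A 2x2 symmetric matrix whose eigenvalues are easy to check by hand:
# det(xI - A) = (x - 1)(x - 3), so the eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# Verify T v = alpha v for every eigenpair (Definition 1)
for alpha, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, alpha * v)
```

Each column of `eigvecs` spans the eigenspace ker(A − αI) for the matching entry of `eigvals`.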
Definition 2 (Diagonalizability).
A linear transformation T : V → V defined on an n-dimensional vector space is called diagonalizable if there exists an ordered basis B = (v_1, ..., v_n) with respect to which the matrix representation of T, denoted by [T]_B, has the diagonal form (1).
Diagonalizability
Proposition 1.
A linear transformation T : V → V on an n-dimensional vector space is diagonalizable iff there exists a basis B = (v_1, ..., v_n) all of whose members are eigenvectors of T.
Proof:
The proof should be clear from the way we introduced [T]_B.
Characteristic Polynomial
In order to answer the question on page 1 we first try to compute eigenvalues and corresponding eigenvectors.
Assume that α is an eigenvalue of the linear transformation T. Then according to Definition 1, T − αI must be singular. We have already seen that this is equivalent to saying that

det(T − αI) = 0.   (2)

In other words, in order to compute eigenvalues we may solve the equation

det(T − xI) = 0   (3)

in terms of the unknown x.
Definition 3.
The monic polynomial p(x) := det(xI − T) is called the characteristic polynomial of T.
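The coefficients of det(xI − A) for a concrete matrix can be obtained numerically; a minimal sketch using NumPy's `poly` (the example matrix is an assumption for illustration):

```python
import numpy as np

# np.poly applied to a square matrix returns the coefficients of its
# characteristic polynomial det(xI - A), highest degree first, so the
# polynomial is monic exactly as in Definition 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)  # x^2 - 4x + 3 = (x - 1)(x - 3)
```

The roots of this polynomial are the eigenvalues 1 and 3 of A.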
Example
Let T : R² → R² be a self-adjoint linear transformation (w.r.t. the standard inner product). We know that the matrix A = [T] in the standard basis B = {(1, 0), (0, 1)} is a symmetric matrix:

A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}

Av = λv
Theorem 1.
If T : V → V is diagonalizable then its characteristic polynomial p(x) splits into linear factors:

p(x) = (x − α_1)^{b_1} ⋯ (x − α_k)^{b_k}

where α_1, ..., α_k are scalars and b_1, ..., b_k are positive integers with ∑_{i=1}^k b_i = n.
For example, if a basis of eigenvectors contains two vectors with eigenvalue α_1 and three with eigenvalue α_2, then

det(xI − T) = (x − α_1)²(x − α_2)³.
Diagonalizability:
In fact if we reorder the elements of the basis B in such a way that the eigenvectors having identical eigenvalues lie next to each other, and if we denote the new ordered basis by B′, then the matrix of T with respect to B′ is written as

[T]_{B′} = \begin{pmatrix} \alpha_1 I_1 & 0 & \cdots & 0 \\ 0 & \alpha_2 I_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_k I_k \end{pmatrix}   (4)
Diagonalizability:
In fact if

B′ = (u_1, ..., u_{b_1}, ..., u_{b_1 + ... + b_{j−1} + 1}, ..., u_{b_1 + b_2 + ... + b_j}, ..., u_n)

then W_j = ⟨u_{b_1 + ... + b_{j−1} + 1}, ..., u_{b_1 + ... + b_j}⟩ for 1 ≤ j ≤ k, and we have

V = W_1 ⊕ ... ⊕ W_j ⊕ ... ⊕ W_k,

and the restriction T|_{W_j} : W_j → W_j is nothing but α_j Id_{W_j}, where Id_{W_j} : W_j → W_j is the identity map.
Diagonalizability:
To sum up, we arrive at the following theorem.
Theorem 2.
A linear transformation T : V → V from an n-dimensional vector space to itself is diagonalizable iff V can be decomposed into a direct sum

V = W_1 ⊕ ... ⊕ W_k

where each W_j ⊆ V for 1 ≤ j ≤ k is a b_j-dimensional subspace and T|_{W_j} : W_j → W_j is given by

T|_{W_j} = α_j Id_{W_j},   1 ≤ j ≤ k.
Theorem 3.
Let V be an n-dimensional vector space over a field F. Let T : V → V be a linear transformation. If T has n distinct eigenvalues then T is diagonalizable.
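Theorem 3 can be illustrated numerically: a matrix with n distinct eigenvalues has an invertible eigenvector matrix P, and P⁻¹AP is diagonal. A minimal sketch (the example matrix is an assumption chosen for illustration):

```python
import numpy as np

# A 3x3 upper triangular matrix with 3 distinct eigenvalues 1, 2, 3.
# By Theorem 3 it is diagonalizable: the matrix P whose columns are
# eigenvectors is invertible and P^{-1} A P is diagonal.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P  # change of basis to the eigenbasis

assert np.allclose(D, np.diag(eigvals))
```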
Proof:
Let α_1, ..., α_n denote the eigenvalues of T. We know that for 1 ≤ i, j ≤ n, if i ≠ j then α_i ≠ α_j. Let v_i for 1 ≤ i ≤ n be an eigenvector corresponding to the eigenvalue α_i:

Tv_i = α_i v_i.

In order to prove the theorem it suffices to show that v_1, ..., v_n are linearly independent and thus form a basis for V.
If v_1, ..., v_n are linearly dependent then we can find 2 ≤ j ≤ n such that v_1, ..., v_{j−1} are linearly independent but v_1, ..., v_{j−1}, v_j are linearly dependent. Thus v_j can be written as a linear combination of v_1, ..., v_{j−1}:

v_j = ∑_{1 ≤ i ≤ j−1} λ_i v_i,   λ_i ∈ F   (5)

so we get

Tv_j = ∑_{1 ≤ i ≤ j−1} λ_i Tv_i

or equivalently

α_j v_j = ∑_{1 ≤ i ≤ j−1} α_i λ_i v_i.
So using (5) we obtain

∑_{1 ≤ i ≤ j−1} α_j λ_i v_i = ∑_{1 ≤ i ≤ j−1} α_i λ_i v_i.

Since v_1, ..., v_{j−1} are linearly independent and α_i ≠ α_j for i < j, this forces λ_i = 0 for all i, contradicting v_j ≠ 0.
Definition 4.
A polynomial q(x) such that q(T) = 0 is called an annihilating polynomial for T.
Consider the subset A_T ⊆ F[x] of all annihilating polynomials for T:

A_T := {q ∈ F[x] | q(T) = 0}.
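One well-known member of A_T is the characteristic polynomial itself (the Cayley–Hamilton theorem, not proved in these slides). A minimal numerical sketch, with an example matrix chosen here for illustration:

```python
import numpy as np

# Cayley-Hamilton: the characteristic polynomial annihilates its own
# matrix, i.e. p(A) = 0, so p is an element of A_T.
A = np.array([[-1.0, 3.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 2.0, 1.0, -1.0]])
coeffs = np.poly(A)          # monic characteristic polynomial of A

pA = np.zeros_like(A)
for c in coeffs:             # Horner evaluation of p at the matrix A
    pA = pA @ A + c * np.eye(3)

assert np.allclose(pA, 0)    # p(A) is the zero matrix
```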
The theory of a single linear transformation
Minimal Polynomial:
Definition 5.
A ring R is a mathematical system consisting of a nonempty set R = {a, b, ...} together with two operations, addition and multiplication, each of which assigns to a pair of elements a and b in R another element of R, denoted by a + b in the case of addition and ab in the case of multiplication, such that the following conditions hold for all a, b, c ∈ R.
Definition 6.
Given a commutative ring R, a subset I ⊆ R is called an ideal if
1. (I, +) is a group;
2. for all a ∈ I and for all r ∈ R we have ra ∈ I.
The set A_T is an ideal of F[x]; in particular

∀r ∈ F[x], ∀p ∈ A_T : rp ∈ A_T.

The second property for A_T comes from the following lemma.
Lemma 1.
If p_1, p_2 ∈ F[x] are two polynomials with coefficients in F then we have

(p_1 p_2)(T) = p_2(T) ∘ p_1(T) = p_1(T) ∘ p_2(T).

The proof is easy and left as an exercise.
Let 0 ≠ m ∈ A_T be a nonzero polynomial of least degree in A_T.
Theorem 4.
Every polynomial p ∈ A_T is a multiple of m, i.e. for every p ∈ A_T there exists q ∈ F[x] such that

p = qm.

Proof:
This follows from the so-called division algorithm in the space of polynomials, by which we know that for every pair of polynomials p_1, p_2 ∈ F[x] with p_1 ≠ 0 there exist a unique quotient q ∈ F[x] and remainder r ∈ F[x] such that

p_2 = q p_1 + r,   deg(r) < deg(p_1).

In order to prove the theorem we divide p by m to obtain the corresponding quotient q and remainder r such that

p = mq + r

with deg(r) < deg(m). But since A_T is an ideal and p, m ∈ A_T, we have r = p − mq ∈ A_T. Since m ∈ A_T was a nonzero polynomial of least degree, we must have r = 0. This proves the theorem.
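The division algorithm used in this proof can be carried out numerically; a minimal sketch with NumPy's `polydiv` (the example polynomials are assumptions chosen for illustration):

```python
import numpy as np

# Division algorithm in F[x]: p2 = q * p1 + r with deg r < deg p1.
# Coefficient arrays are listed highest degree first.
p2 = np.array([1.0, 0.0, -2.0, 1.0])   # x^3 - 2x + 1
p1 = np.array([1.0, -1.0])             # x - 1
q, r = np.polydiv(p2, p1)              # quotient and remainder

# x = 1 is a root of p2, so here the remainder vanishes and p1 | p2
assert np.allclose(r, 0)
assert np.allclose(np.polyadd(np.polymul(q, p1), r), p2)
```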
Corollary 1.
If m_1 ∈ A_T is another nonzero polynomial of least degree then there exists a scalar α ∈ F such that m_1 = αm.
Corollary 2.
Given a linear transformation T ∈ L(V, V), there exists a unique monic polynomial m ∈ F[x] of least degree such that

m(T) = 0.

Definition 7.
Given a linear transformation T ∈ L(V, V), the monic polynomial m_T of least degree annihilating T is called the minimal polynomial of T.
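For a concrete matrix, the minimal polynomial can be found by testing the monic divisors of the characteristic polynomial that share its roots. A minimal sketch (the 3×3 matrix is an example chosen here; its characteristic polynomial is (x+1)²(x−2)):

```python
import numpy as np

# The minimal polynomial divides the characteristic polynomial and has
# the same roots, so here it is (x+1)(x-2) or (x+1)^2 (x-2).
# We test the lower-degree candidate by evaluating it at A.
A = np.array([[-1.0, 3.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 2.0, 1.0, -1.0]])
I = np.eye(3)

low  = (A + I) @ (A - 2 * I)             # candidate (x+1)(x-2)
full = (A + I) @ (A + I) @ (A - 2 * I)   # candidate (x+1)^2 (x-2)

assert not np.allclose(low, 0)   # the lower-degree candidate fails,
assert np.allclose(full, 0)      # so m_T(x) = (x+1)^2 (x-2)
```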
Reminder from polynomials
Definition 9.
Here the notation r | s (read "r divides s" or "s is a multiple of r") for two polynomials r, s ∈ F[x] means that there exists t ∈ F[x] such that

s = rt,

or in other words the remainder of the division of s by r is zero.
Definition 10.
Given two polynomials a, b ∈ F[x], a common divisor of a and b is a polynomial c ∈ F[x] such that c | a and c | b.
Definition 11.
Two distinct polynomials a and b are relatively prime if they do not have any common divisor of degree greater than zero; in other words, their only common divisors are constant polynomials.
Definition 12.
An element d ∈ F[x] is called a greatest common divisor of r_1, ..., r_k ∈ F[x] if d | r_i for 1 ≤ i ≤ k, and if d′ is such that d′ | r_i for 1 ≤ i ≤ k, then d′ | d.
Theorem 6.
There exist g_1, ..., g_k ∈ F[x] such that d = ∑_{i=1}^k g_i f_i and d is the greatest common divisor of f_1, ..., f_k.
Proof:
The existence of the g_i's follows from the fact that d ∈ I, the ideal generated by f_1, ..., f_k. If d′ is a common divisor of f_1, ..., f_k, we must have

d′ | d = ∑_{i=1}^k g_i f_i.

This proves that d is the greatest common divisor of f_1, ..., f_k.
Theorem 8.
Given a polynomial p ∈ F[x], there exists a factorization into irreducible polynomials

p(x) = p_1(x)^{a_1} ⋯ p_k(x)^{a_k}.

Proof: The proof of existence of a factorization into irreducible polynomials can be done by induction and is left as an exercise. The proof of uniqueness is an immediate consequence of the following lemma and is also left as an exercise.
Lemma 2.
Let a, b ∈ F[x] be two arbitrary polynomials and let p ∈ F[x] be an irreducible polynomial. Assume that p | ab. Then either p | a or p | b.
Proof: Suppose p does not divide a. Then a and p are relatively prime, so we have

au + pv = 1

for some u, v ∈ F[x]. Then

abu + pvb = b

and since p | ab, we have p | b.
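The identity au + pv = 1 used in this proof is produced by the extended Euclidean algorithm; a minimal sketch using SymPy's `gcdex` (the two polynomials are assumptions chosen for illustration):

```python
from sympy import symbols, gcdex, expand

x = symbols('x')

# p is irreducible over Q and does not divide a, so a and p are
# relatively prime: the extended Euclidean algorithm produces
# u, v with a*u + p*v = gcd(a, p) = 1, as used in Lemma 2.
a = x**2 + 1
p = x - 1
u, v, g = gcdex(a, p, x)

assert expand(g) == 1                    # a and p are relatively prime
assert expand(a * u + p * v - g) == 0    # Bezout identity holds
```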
Minimal Polynomials Toy Example:
Theorem 9.
With the above hypothesis (m_T = pq where p, q ∈ F[x] are relatively prime, W_1 := Ker(p(T)) and W_2 := Ker(q(T))), we have

Im(p(T)) = Ker(q(T)),   Ker(p(T)) = Im(q(T)),

V = W_1 ⊕ W_2.
Proof:
We first show that

Im(p(T)) ∩ ker(p(T)) = Im(q(T)) ∩ ker(q(T)) = {0}.

To see this we note that due to the property (p, q) = 1 (this is the notation for being relatively prime), according to Theorem 7 we can find polynomials a, b ∈ F[x] such that

ap + bq = 1,

or

a(T)p(T) + b(T)q(T) = Id.   (6)
Proof:
On the other hand we know that m_T(T) = p(T)q(T) = 0, so

Im(q(T)) ⊆ ker(p(T))   and   Im(p(T)) ⊆ ker(q(T)).

Also from (6) we know that

ap + bq = 1,

and thus every v ∈ V can be written as v = p(T)(a(T)v) + q(T)(b(T)v) ∈ Im(p(T)) + Im(q(T)).
Theorem 10.
With the hypothesis of the previous slide, if we set W_i := ker(p_i(T)^{a_i}), 1 ≤ i ≤ k, then we have

V = W_1 ⊕ ... ⊕ W_k.

Moreover the subspaces W_1, ..., W_k are invariant under T, i.e.

T(W_i) ⊆ W_i,   1 ≤ i ≤ k,

and the minimal polynomial m_i ∈ F[x] of T|_{W_i} : W_i → W_i is equal to

p_i(x)^{a_i}.
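The dimension count behind V = W_1 ⊕ ... ⊕ W_k can be checked numerically via ranks; a minimal sketch for a 3×3 example matrix (chosen here) whose minimal polynomial is (x+1)²(x−2):

```python
import numpy as np

# Primary decomposition: W1 = ker((A+I)^2), W2 = ker(A-2I).
# By Theorem 10 their dimensions must add up to dim V = 3.
A = np.array([[-1.0, 3.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 2.0, 1.0, -1.0]])
I = np.eye(3)

# dim ker = n - rank (rank-nullity theorem)
dim_W1 = 3 - np.linalg.matrix_rank((A + I) @ (A + I))
dim_W2 = 3 - np.linalg.matrix_rank(A - 2 * I)

assert dim_W1 + dim_W2 == 3   # matches V = W1 (+) W2
```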
Minimal Polynomials and Decomposition into Invariant Subspaces:
Proof:
The proof is easily done by induction, using Theorem 9. Note first that if we set p := p_1^{a_1} and q := p_2^{a_2} ⋯ p_k^{a_k} then p and q are relatively prime. So if we set W_1 := ker(p(T)) and W̃_2 := ker(q(T)) then by Theorem 9 we have

V = W_1 ⊕ W̃_2.

Furthermore we know that W_1 and W̃_2 are invariant under T.
Proof:
If m_1 and m̃_2 denote the minimal polynomials of T|_{W_1} : W_1 → W_1 and T|_{W̃_2} : W̃_2 → W̃_2 respectively, then we know by the definition of W_1 and W̃_2 that

m_1 | p_1^{a_1},   m̃_2 | (p_2^{a_2} ⋯ p_k^{a_k}).

By comparing (10) and (11) we conclude that m_T = m_1 m̃_2, and more precisely

m_1 = p_1^{a_1},   m̃_2 = p_2^{a_2} ⋯ p_k^{a_k}.
Theorem 11.
The operators Π_1, ..., Π_k are nothing but the projections onto W_1, ..., W_k respectively, with respect to the decomposition V = W_1 ⊕ ... ⊕ W_k. More precisely, if v ∈ V is decomposed as v = v_1 + ... + v_k with v_j ∈ W_j, 1 ≤ j ≤ k, then we have

Π_j(v) = v_j,   1 ≤ j ≤ k.
Proof:
If i ≠ j then p_i^{a_i} divides f_j = m_T / p_j^{a_j}. By the definition of the W_i's this means that Π_j|_{W_i} = 0 for i ≠ j.
On the other hand, since ∑_{j=1}^k Π_j = Id, we have Π_j|_{W_j} = Id_{W_j}.
The combination of the above two observations proves Theorem 11.
Corollary 3.
The projection operators Π_j : V → W_j can be expressed as polynomials in T.
The case of algebraically closed fields
As a consequence of the recent theorem we find that the problem of understanding the behavior of an arbitrary linear transformation T reduces to that of characterizing the behavior of the T|_{W_i}'s. In other words, we may restrict ourselves to the case where the minimal polynomial m_T has the form p(x)^a ∈ F[x], where p(x) is an irreducible polynomial in F[x] and a is a positive integer.
In the case where F = C we know by the fundamental theorem of algebra that the only irreducible polynomials, up to multiplication by scalars, are the polynomials of degree 1:

p(x) = x − ξ.
Generalized eigenvectors
Definition 13.
A nonzero vector v ∈ V is a generalized eigenvector of rank m of T corresponding to the eigenvalue α if

(T − α Id)^m v = 0.

In other words, given an eigenvalue α ∈ F we define the generalized eigenspace of rank m to be ker((T − α Id)^m), and the nonzero elements of ker((T − α Id)^m) are called generalized eigenvectors.
Note that for any positive integer m the subspace ker((T − α Id)^m) can be nontrivial only if α is an eigenvalue of T. (Prove this!)
The theory of a single linear transformation
Nilpotent Maps
Corollary 4.
Any upper triangular matrix with vanishing main diagonal

[T]_B = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}   (12)

is nilpotent.
We will see that the converse is also true; in other words, any nilpotent linear operator has, in an appropriate basis, a representation like (12).
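Corollary 4 is easy to verify for a small example; a minimal sketch with a strictly upper triangular 3×3 matrix chosen here:

```python
import numpy as np

# A strictly upper triangular 3x3 matrix as in (12); an n x n such
# matrix satisfies T^n = 0, so here the cube vanishes.
T = np.array([[0.0, 1.0, 4.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

assert not np.allclose(T @ T, 0)    # T^2 is not yet zero
assert np.allclose(T @ T @ T, 0)    # but T^3 = 0: T is nilpotent
```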
Nilpotent maps and triangular matrices
Theorem 12.
There exists a basis B for the linear map T : V → V with respect to which the matrix representation [T]_B has the upper triangular form (13) if and only if there exist nested subspaces

{0} = W_0 ⊆ W_1 ⊆ ... ⊆ W_{n−1} ⊆ W_n = V

such that

T(W_j) ⊆ W_{j−1} for 1 ≤ j ≤ n.

Proof:
The proof is easy and left as an exercise.
Theorem 13.
Let V be a vector space defined over an algebraically closed field F. Let T : V → V be a linear transformation. Then there exists an ordered basis B for V with respect to which the matrix representation [T]_B has the following block diagonal form:

[T]_B = \begin{pmatrix} A_1 & & & & 0 \\ & \ddots & & & \\ & & A_j & & \\ & & & \ddots & \\ 0 & & & & A_k \end{pmatrix}   (14)
Theorem 14.
A linear transformation T ∈ L(V, V) is diagonalizable iff the minimal polynomial of T has the form

m_T(x) = (x − α_1) ⋯ (x − α_k).

Proof: If T is diagonalizable then

(T − α_j Id)|_{W_j} = 0,   1 ≤ j ≤ k,

and thus

(T − α_1 Id) ∘ ... ∘ (T − α_k Id) = 0.
Diagonalizability and minimal polynomials:
Proof:
This means that

m_T(x) | (x − α_1) ⋯ (x − α_k),

and since every eigenvalue of T is a root of m_T, in fact

m_T(x) = (x − α_1) ⋯ (x − α_k).
In particular P(T) = 0.
The projections can be written as

Π_j = f_j(T),   1 ≤ j ≤ k.   (17)
Proof:
Now if we define D : V → V as

D := ∑_{j=1}^k α_j Π_j.

We set N := T − D. Then obviously N = g(T) for g(x) = x − f(x).
Also if a := max_{1 ≤ j ≤ k} {a_j} then we claim that N^a = 0. This is because we know that

N|_{W_j} = (T − α_j Id)|_{W_j}

and hence

(N|_{W_j})^a = 0 for all 1 ≤ j ≤ k.
Proof:
In order to prove the uniqueness part, assume that T = D′ + N′ where D′ and N′ satisfy the hypothesis of the last part of the theorem.
Since D′N′ = N′D′ we have TD′ = D′T and TN′ = N′T. Since D and N are polynomials in T we can deduce that DD′ = D′D and NN′ = N′N. Thus from T = D′ + N′ = D + N it follows that

D′ − D = N − N′.
Proof:
Due to the relation

(N − N′)^s = ∑_{j=0}^s \binom{s}{j} (−1)^{s−j} N^j N′^{s−j},

which comes from the commutativity of N and N′, we can deduce that if N^a = 0 and N′^b = 0 then (N − N′)^{a+b} = 0.
On the other hand, due to Theorem 15, D and D′ are simultaneously diagonalizable: we can find a basis B with respect to which both D and D′, and thus D − D′, are diagonal:
D − D′ = \begin{pmatrix} \beta_1 & & & & 0 \\ & \ddots & & & \\ & & \beta_j & & \\ & & & \ddots & \\ 0 & & & & \beta_k \end{pmatrix}   (18)
Proof:
This means that D − D′ can be nilpotent iff D = D′, and this completes the uniqueness part of the theorem.
Examples:
Exercise. Prove that the minimal polynomial and the characteristic polynomial of a linear transformation T : V → V have the same roots. In particular, if p(x) = ∏_{i=1}^k (x − α_i)^{b_i} can be decomposed into linear factors, then m_T(x) = ∏_{i=1}^k (x − α_i)^{a_i} where 1 ≤ a_i ≤ b_i for all 1 ≤ i ≤ k.
Let V be a three-dimensional vector space over C with basis {v_1, v_2, v_3}, and let T ∈ L(V, V) be defined by the equations:

Tv_1 = −v_1 + 2v_3
Tv_2 = 3v_1 + 2v_2 + v_3
Tv_3 = −v_3
Examples:
To this end we can proceed as follows. The characteristic polynomial p(x) of T is computed as

p(x) = det(xI − A) = \begin{vmatrix} x+1 & −3 & 0 \\ 0 & x−2 & 0 \\ −2 & −1 & x+1 \end{vmatrix} = (x+1)²(x−2).

Thus the distinct eigenvalues of T are {−1, 2}. Also the minimal polynomial of T is either

(x+1)(x−2)   or   (x+1)²(x−2).
Examples:
Now we consider A + I and A − 2I:

A + I = \begin{pmatrix} 0 & 3 & 0 \\ 0 & 3 & 0 \\ 2 & 1 & 0 \end{pmatrix},   A − 2I = \begin{pmatrix} −3 & 3 & 0 \\ 0 & 0 & 0 \\ 2 & 1 & −3 \end{pmatrix}

It is not difficult to see that rank(A + I) = rank(A − 2I) = 2, and thus dim ker(A + I) = dim ker(A − 2I) = 1. This means that T is not diagonalizable, and so we have

m_T(x) = (x+1)²(x−2).
Examples:
In order to find the Jordan decomposition we need to find ker(T + I)² and ker(T − 2I). It is an exercise to see that ker(T + I)² = Span{v_1, v_3} and ker(T − 2I) = Span{v_1 + v_2 + v_3}.
More precisely we look for a basis {w_1, w_2} for W_1 = ker(T + I)² such that

[T]_{B′} = \begin{pmatrix} −1 & 2 & 0 \\ 0 & −1 & 0 \\ 0 & 0 & 2 \end{pmatrix}
We can also see easily that the matrix of change of coordinates P is

P = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix}

and we have

[T]_{B′} = P^{−1} A P.
Examples:
In this example we have (with respect to the basis B′)

D = \begin{pmatrix} −1 & 0 & 0 \\ 0 & −1 & 0 \\ 0 & 0 & 2 \end{pmatrix}   and   N = \begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
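The computations in this example can be verified numerically; a minimal NumPy sketch (the matrices A, P, D, N are restated in the code with the signs written out, following the reconstruction of the extracted slides):

```python
import numpy as np

# A = [T] in the basis {v1, v2, v3}; P has the new basis vectors
# w1 = v3, w2 = v1, w3 = v1 + v2 + v3 as its columns.
A = np.array([[-1.0, 3.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 2.0, 1.0, -1.0]])
P = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0]])
D = np.diag([-1.0, -1.0, 2.0])
N = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

J = np.linalg.inv(P) @ A @ P      # [T]_{B'} = P^{-1} A P

assert np.allclose(J, D + N)      # D + N recovers [T]_{B'}
assert np.allclose(N @ N, 0)      # N is nilpotent
assert np.allclose(D @ N, N @ D)  # D and N commute
```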
Jordan Canonical Form
In fact we have

V_1 = S^{−1}({0}), V_2 = S^{−1}(V_1), ..., V_i = S^{−1}(V_{i−1}) for 1 ≤ i ≤ k.
V_{k−1} = V_{k−2} ⊕ W_{k−2},   S(W_{k−1}) ⊆ W_{k−2}

V_j = V_{j−1} ⊕ W_{j−1},   1 ≤ j ≤ k,

where V_j = ker(S^j). Consequently we obtain

S(W_{k−1}) ↪ W_{k−2}, S(W_{k−2}) ↪ W_{k−3}, ..., S(W_2) ↪ W_1
where

C_j = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddots & 1 \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}   (20)
A_j = \begin{pmatrix} \alpha_j & 1 & 0 & \cdots & 0 \\ 0 & \alpha_j & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddots & 1 \\ 0 & 0 & \cdots & 0 & \alpha_j \end{pmatrix}   (21)
Complexification of a real vector space
For u + iv, u′ + iv′ ∈ V ⊗ C,

T_c((u + iv) + (u′ + iv′)) = T_c(u + iv) + T_c(u′ + iv′).
Lemma 3.
If W̃ ⊆ V ⊗ C is a complex subspace such that W̃̄ = W̃ then there exists a subspace W ⊆ V such that W̃ = W ⊗ C.
Proof: In fact for every u + iv ∈ W̃ we have u ∈ W̃. This is because u − iv ∈ W̃ as well, so u = ½((u + iv) + (u − iv)) ∈ W̃.
Also −i(u + iv) = v − iu ∈ W̃, thus we also have v ∈ W̃.
Therefore if we set W := W̃ ∩ V then we get

W̃ = W ⊗ C.
Lemma 4.
W_0 := W̃ ∩ W̃̄.
Proof: By definition we have W̄_0 = W_0, so according to the previous lemma there exists a subspace V_0 ⊆ V such that W_0 = V_0 ⊗ C.
Now if the complex numbers a_1 + ib_1, ..., a_k + ib_k ∈ C are such that

v = ∑_{j=1}^k (a_j + ib_j)(u_j + iv_j) ∈ V_0,

or equivalently

∑_j [(a_j u_j − b_j v_j) + √−1 (a_j v_j + b_j u_j)] ∈ V_0.
If

W̃ ∩ W̃̄ = {0}

and {u_1 + iv_1, ..., u_k + iv_k} constitutes a basis for W̃, then the family of vectors {u_1, ..., u_k, v_1, ..., v_k} is linearly independent. This is because of the identity

∑_{j=1}^k (a_j + ib_j)(u_j + iv_j) = ∑_j [(a_j u_j − b_j v_j) + √−1 (a_j v_j + b_j u_j)].

Also, since α ≠ ᾱ, the eigenspaces W_1 = ker(T − αI) and W̄_1 = ker(T − ᾱI) satisfy

W_1 ∩ W̄_1 = {0}.
Orthogonal Transformations
Note that

T_c(u + iv) = (a + ib)(u + iv) ⟺ T_c(u − iv) = (a − ib)(u − iv),

which implies that ker(T_c − ᾱI) = W̄_1.
Now take an eigenvector u + iv ∈ W_1. Then {u, v} is linearly independent and we have

T_c(u + iv) = (a + ib)(u + iv),

which is equivalent to

Tu = au − bv,   Tv = bu + av.
Theorem 18.
Let (V, ⟨·,·⟩) be an inner product vector space and let T : V → V be an orthogonal transformation. Then there exists a decomposition

V = W_1 ⊕ ... ⊕ W_k ⊕ V_1 ⊕ ... ⊕ V_l

into subspaces with

dim W_i = 2 for 1 ≤ i ≤ k,   dim V_j = 1 for 1 ≤ j ≤ l.

Each of the subspaces W_i and V_j is invariant under the application of T, and they are mutually orthogonal subspaces of V.
Corollary 5.
With the hypothesis of the previous theorem there exists an orthonormal basis B for V such that the matrix A = [T]_B in this basis has the form

\begin{pmatrix} C_1 & & & & 0 \\ & \ddots & & & \\ & & C_j & & \\ & & & \ddots & \\ 0 & & & & C_r \end{pmatrix}   (22)

Then we have S² = TT^t. Now if we set U := S^{−1}T then we can write

U U^t = (S^{−1}T)(T^t (S^{−1})^t) = S^{−1}(S²)S^{−1} = I.

This means that U is orthogonal.
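The computation above is the polar decomposition T = SU; a minimal NumPy sketch (the test matrix is an invertible example chosen here, and S = √(TTᵗ) is built from the eigendecomposition of the symmetric matrix TTᵗ):

```python
import numpy as np

# T is an invertible matrix built as (symmetric positive definite) x
# (rotation), so its polar factors are easy to check.
T = np.array([[3.0, 1.0],
              [1.0, 2.0]]) @ np.array([[0.0, -1.0],
                                       [1.0,  0.0]])

# S = sqrt(T T^t): symmetric square root via eigendecomposition
w, Q = np.linalg.eigh(T @ T.T)       # T T^t is symmetric positive definite
S = Q @ np.diag(np.sqrt(w)) @ Q.T    # S is symmetric and S^2 = T T^t
U = np.linalg.inv(S) @ T             # U := S^{-1} T

assert np.allclose(S @ S, T @ T.T)       # S^2 = T T^t
assert np.allclose(U @ U.T, np.eye(2))   # U is orthogonal
```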