
Linear Algebra

May 26, 2020


Eigenvalues and Eigenvectors

Question
Let's ask this question:
- Given a linear transformation $T : V \to V$, where $V$ is an $n$-dimensional vector space over $F$, does there exist an ordered basis $B = (v_1, \dots, v_n)$ such that the matrix $[T]_B$ is diagonal:
$$[T]_B = \begin{pmatrix} \alpha_1 & 0 & \cdots & 0 \\ 0 & \alpha_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_n \end{pmatrix} \qquad (1)$$
- Obviously, if this occurs, then we must have $Tv_i = \alpha_i v_i$ for $1 \le i \le n$.
- In other words, the $v_i$'s are eigenvectors and the $\alpha_i$'s are eigenvalues:
Eigenvalues and Eigenvectors

- Definition 1.
Let $V$ be a vector space over a field $F$ and let $T : V \to V$ be a linear transformation. A scalar $\alpha \in F$ is called an eigenvalue of $T$ if there exists a nonzero vector $v \in V$ such that
$$Tv = \alpha v$$
The vector $v$ is called an eigenvector of $T$.
In other words, the eigenvalues are those elements $\alpha \in F$ for which $\ker(T - \alpha I)$ is a nontrivial subspace of $V$, and each nonzero element of $\ker(T - \alpha I)$ is called an eigenvector of $T$ corresponding to the eigenvalue $\alpha$. The subspace $\ker(T - \alpha I)$ is also called the eigenspace of $T$ corresponding to the eigenvalue $\alpha \in F$.

(Here $I$ denotes the identity transformation on $V$.)


Eigenvalues and Eigenvectors

- It is obvious that if $v$ is an eigenvector of $T$ corresponding to the eigenvalue $\alpha \in F$, then any scalar multiple $rv$ with $0 \ne r \in F$ is also an eigenvector of $T$. In other words, all the nonzero vectors in the one-dimensional subspace $\langle v \rangle$ generated by $v$ are also eigenvectors. This is why we can also speak of the eigendirection corresponding to the eigenvalue $\alpha$.
- If we are given an eigenvalue $\alpha$ of $T$, then in order to compute the corresponding eigenvectors we solve the linear system of equations
$$(T - \alpha I)X = 0$$
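In coordinates this is just a null-space computation. Here is a minimal sketch, assuming SymPy is available (the matrix is a hypothetical example, not one from the notes):

```python
import sympy as sp

A = sp.Matrix([[2, 1], [1, 2]])                 # hypothetical example matrix
for alpha in A.eigenvals():                     # the keys are the eigenvalues
    E = (A - alpha * sp.eye(2)).nullspace()     # basis of ker(A - alpha*I)
    print(f"alpha = {alpha}, eigenspace basis = {E}")
```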
Eigenvalues and Eigenvectors

- Definition 2 (Diagonalizability).
A linear transformation $T : V \to V$ defined on an $n$-dimensional vector space is called diagonalizable if there exists an ordered basis $B = (v_1, \dots, v_n)$ with respect to which the matrix representation of $T$, denoted by $[T]_B$, has the diagonal form (1).
Eigenvalues and Eigenvectors

Diagonalizability
- Proposition 1.
A linear transformation $T : V \to V$ on an $n$-dimensional vector space is diagonalizable iff there exists a basis $B = (v_1, \dots, v_n)$ all of whose members are eigenvectors of $T$.
Proof:
- The proof follows immediately from the way we introduced $[T]_B$.
Eigenvalues and Eigenvectors
Characteristic Polynomial
- In order to answer the question on the first slide, we first try to compute eigenvalues and the corresponding eigenvectors.
- Assume that $\alpha$ is an eigenvalue of the linear transformation $T$. Then, according to Definition 1, $T - \alpha I$ must be singular.
- We have already seen that this is equivalent to saying that
$$\det(T - \alpha I) = 0 \qquad (2)$$
- In other words, in order to compute the eigenvalues we may solve the equation
$$\det(T - xI) = 0 \qquad (3)$$
in terms of the unknown $x$.
Eigenvalues and Eigenvectors
Characteristic Polynomial

- Assume that $A = [a_{ij}]_{n \times n}$ represents the matrix of $T$ in some ordered basis $B$. Then we can rewrite equation (3) in these coordinates:
$$\det \begin{pmatrix} x - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & x - a_{22} & \cdots & -a_{2n} \\ \vdots & & \ddots & \vdots \\ -a_{n1} & -a_{n2} & \cdots & x - a_{nn} \end{pmatrix} = 0$$
- As can be seen from this expression, $\det(xI - T)$ is a monic polynomial of degree $n$ in terms of $x$.

(We have already seen that this equation is independent of the choice of the basis $B$.)
Eigenvalues and Eigenvectors
Characteristic Polynomial

- Definition 3.
The monic polynomial $p(x) := \det(xI - T)$ is called the characteristic polynomial of $T$.
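As a computational aside, the characteristic polynomial can be produced symbolically; a minimal sketch assuming SymPy (the matrix is a hypothetical example):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 1], [-2, 3]])       # hypothetical example matrix
p = A.charpoly(x)                      # the monic polynomial det(xI - A)
print(sp.factor(p.as_expr()))          # (x - 2)*(x - 1)
```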
Eigenvalues and Eigenvectors
Example
- Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be a self-adjoint linear transformation (w.r.t. the standard inner product). We know that the matrix $A = [T]$ in the standard basis $B = \{(1,0), (0,1)\}$ is a symmetric matrix:
$$A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$$
- Then the characteristic polynomial of $T$ is given by
$$p(x) = \det(xI - A) = \det \begin{pmatrix} x - a & -b \\ -b & x - c \end{pmatrix} = x^2 - (a + c)x + ac - b^2$$
- This polynomial has the following roots:
$$\lambda_{\pm} = \frac{(a + c) \pm \sqrt{(a - c)^2 + 4b^2}}{2}$$
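A one-line symbolic check of this closed form, assuming SymPy (the substitution is our own verification, not part of the notes):

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x', real=True)
p = x**2 - (a + c)*x + (a*c - b**2)          # characteristic polynomial
for s in (1, -1):
    lam = ((a + c) + s * sp.sqrt((a - c)**2 + 4*b**2)) / 2
    assert sp.simplify(p.subs(x, lam)) == 0  # both roots satisfy p(x) = 0
```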
Eigenvalues and Eigenvectors
Example

- Thus, in the case where $a \ne c$ or $b \ne 0$, there exist two distinct eigenvalues $\lambda_+ \ne \lambda_-$.
- This means that there also exist two distinct eigendirections $\mathrm{Span}\{v_+\} \ne \mathrm{Span}\{v_-\}$:
$$Av_+ = \lambda_+ v_+ \qquad Av_- = \lambda_- v_-$$
- This means that $A$, and thus $T$, is diagonalizable.


Eigenvalues and Eigenvectors
Example

- Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be the rotation with angle $\theta$ around the origin.
- In the standard basis $\{(1,0), (0,1)\}$ the matrix of $T$ has the following representation:
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
- If $\theta \ne k\pi$ for every integer $k$, then $T$ cannot have any eigenvector, since no straight line remains invariant under the application of $T$.
- And for $\theta \ne k\pi$, $k \in \mathbb{Z}$, $T$ is not diagonalizable. (Why?)
Eigenvalues and Eigenvectors
Example

- This can also be seen by direct computation, since we have
$$\det \begin{pmatrix} x - \cos\theta & \sin\theta \\ -\sin\theta & x - \cos\theta \end{pmatrix} = x^2 - 2\cos\theta\, x + 1$$
- Thus the characteristic polynomial $p(x) = x^2 - 2\cos\theta\, x + 1$ has no root in $\mathbb{R}$ unless $\cos\theta = \pm 1$.
Eigenvalues and Eigenvectors
Diagonalizability and Characteristic Polynomial

- Theorem 1.
If $T : V \to V$ is diagonalizable, then its characteristic polynomial $p(x)$ splits into linear factors:
$$p(x) = (x - \alpha_1)^{b_1} \cdots (x - \alpha_k)^{b_k}$$
where $\alpha_1, \dots, \alpha_k$ are scalars and $b_1, \dots, b_k$ are positive integers with $\sum_{i=1}^k b_i = n$.
Eigenvalues and Eigenvectors

- Proof: In fact $\alpha_1, \dots, \alpha_k$ are the distinct entries on the diagonal of $[T]_B$ in (1), and $b_1, \dots, b_k$ respectively represent the number of times they are repeated.
- Example: If
$$[T]_B = \begin{pmatrix} \alpha_1 & 0 & 0 & 0 & 0 \\ 0 & \alpha_1 & 0 & 0 & 0 \\ 0 & 0 & \alpha_2 & 0 & 0 \\ 0 & 0 & 0 & \alpha_2 & 0 \\ 0 & 0 & 0 & 0 & \alpha_2 \end{pmatrix}$$
then
$$\det(xI - T) = (x - \alpha_1)^2 (x - \alpha_2)^3$$
Eigenvalues and Eigenvectors
Diagonalizability:
- In fact, if we reorder the elements of the basis $B$ in such a way that the eigenvectors having identical eigenvalues lie next to each other, and if we denote the new ordered basis by $B'$, then the matrix of $T$ with respect to $B'$ is written as
$$[T]_{B'} = \begin{pmatrix} \alpha_1 I_1 & 0 & \cdots & 0 \\ 0 & \alpha_2 I_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_k I_k \end{pmatrix} \qquad (4)$$
where $I_j$ is a $b_j \times b_j$ identity matrix.

- From this representation we find that the eigenspace
$$W_j = \ker(T - \alpha_j I), \qquad 1 \le j \le k$$
has dimension $b_j$. (Here $b_j$ is the number of repetitions of $\alpha_j$ on the diagonal.)
Eigenvalues and Eigenvectors

Diagonalizability:
- In fact, if
$$B' = (u_1, \dots, u_{b_1}, \dots, u_{b_1 + \dots + b_{j-1} + 1}, \dots, u_{b_1 + \dots + b_j}, \dots, u_n)$$
then $W_j = \langle u_{b_1 + \dots + b_{j-1} + 1}, \dots, u_{b_1 + \dots + b_j} \rangle$ for $1 \le j \le k$, and we have
$$V = W_1 \oplus \dots \oplus W_j \oplus \dots \oplus W_k$$
Moreover, the restriction $T|_{W_j} : W_j \to W_j$ is nothing but $\alpha_j \mathrm{Id}_{W_j}$, where $\mathrm{Id}_{W_j} : W_j \to W_j$ is the identity map.
Eigenvalues and Eigenvectors
Diagonalizability:
- To sum up, we arrive at the following theorem.

Theorem 2.
A linear transformation $T : V \to V$ from an $n$-dimensional vector space to itself is diagonalizable iff $V$ can be decomposed into a direct sum
$$V = W_1 \oplus \dots \oplus W_k$$
where $W_j \subseteq V$ for $1 \le j \le k$ is a $b_j$-dimensional subspace and $T|_{W_j} : W_j \to W_j$ is given by
$$T|_{W_j} = \alpha_j \mathrm{Id}_{W_j}, \qquad 1 \le j \le k$$

- Moreover, if $T$ is diagonalizable then its characteristic polynomial splits into linear factors as follows:
$$p(x) = \prod_{j=1}^k (x - \alpha_j)^{b_j}$$
Eigenvalues and Eigenvectors
Example

- Consider the following $2 \times 2$ matrix:
$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
- The characteristic polynomial of $A$ is
$$p(x) = \det \begin{pmatrix} x - 1 & -1 \\ 0 & x - 1 \end{pmatrix} = (x - 1)^2$$
- This polynomial has a double root $x = 1$ (of multiplicity 2), and thus $A$ has only one eigenvalue.
Eigenvalues and Eigenvectors
Example
- In order to compute the corresponding eigenvectors we have to solve the following linear system of equations
$$(\alpha I - A)X = 0$$
where $\alpha$ is an eigenvalue of $A$; in our example $\alpha = 1$, and
$$X = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$
- We arrive at the following linear system of equations:
$$\begin{cases} -x_2 = 0 \\ 0 = 0 \end{cases}$$
- Therefore we have only one eigendirection corresponding to the eigenvalue $\alpha = 1$, which consists of the one-dimensional subspace $\mathrm{Span}\{(1, 0)\}$.
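A quick computational check of this example, assuming SymPy:

```python
import sympy as sp

A = sp.Matrix([[1, 1], [0, 1]])
E = (A - sp.eye(2)).nullspace()     # basis of the eigenspace ker(A - I)
print(len(E), E)                    # 1  [Matrix([[1], [0]])]
assert not A.is_diagonalizable()    # one eigendirection only: A is defective
```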
Eigenvalues and Eigenvectors
Example

- From the above computation and Proposition 1 we deduce that the matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is not diagonalizable.
- In other words, the fact that the characteristic polynomial splits into linear factors is not sufficient for a linear transformation to be diagonalizable.
- Yet in the next theorem we see that if the roots of the characteristic polynomial all have multiplicity one, then diagonalizability holds.
Eigenvalues and Eigenvectors
Diagonalizability:

- Theorem 3.
Let $V$ be an $n$-dimensional vector space over a field $F$ and let $T : V \to V$ be a linear transformation. If $T$ has $n$ distinct eigenvalues, then $T$ is diagonalizable.
Eigenvalues and Eigenvectors
Diagonalizability:

Proof:
- Let $\alpha_1, \dots, \alpha_n$ denote the eigenvalues of $T$. We know that for $1 \le i, j \le n$, if $i \ne j$ then $\alpha_i \ne \alpha_j$. Let $v_i$ for $1 \le i \le n$ be an eigenvector corresponding to the eigenvalue $\alpha_i$:
$$Tv_i = \alpha_i v_i$$
- In order to prove the theorem it suffices to show that $v_1, \dots, v_n$ are linearly independent and thus form a basis for $V$.
Eigenvalues and Eigenvectors
Diagonalizability:
Proof:
- If $v_1, \dots, v_n$ are linearly dependent, then we can find $2 \le j \le n$ such that $v_1, \dots, v_{j-1}$ are linearly independent but $v_1, \dots, v_{j-1}, v_j$ are linearly dependent.
- Thus $v_j$ can be written as a linear combination of $v_1, \dots, v_{j-1}$:
$$v_j = \sum_{1 \le i \le j-1} \lambda_i v_i, \qquad \lambda_i \in F, \quad 1 \le i \le j-1 \qquad (5)$$
so we get
$$Tv_j = \sum_{1 \le i \le j-1} \lambda_i T v_i$$
or equivalently
$$\alpha_j v_j = \sum_{1 \le i \le j-1} \alpha_i \lambda_i v_i$$
Eigenvalues and Eigenvectors
Diagonalizability:

Proof:
- So using (5) we obtain
$$\sum_{1 \le i \le j-1} \alpha_j \lambda_i v_i = \sum_{1 \le i \le j-1} \alpha_i \lambda_i v_i$$
- Since at least one of the vectors $\lambda_i v_i$ for $1 \le i \le j-1$ is nonzero, and since $v_1, \dots, v_{j-1}$ are linearly independent, the above equation implies that
$$\alpha_j = \alpha_i$$
for some $i \le j-1$, which is a contradiction.
- Hence $v_1, \dots, v_n$ must be linearly independent, and the theorem is proved.
The theory of a single linear transformation
Main Question and Jordan Canonical Form:
- From now on we try to understand, as clearly as possible, the behavior of a general linear transformation on finite dimensional vector spaces.
- Consider a finite dimensional vector space $V$ over a field $F$ and let $T : V \to V$ be a linear transformation.
- What we are looking for is to decompose the vector space $V$ as a direct sum
$$V = W_1 \oplus W_2 \oplus \dots \oplus W_k$$
such that each of the subspaces $W_i \subseteq V$ for $i = 1, \dots, k$ is invariant under the application of $T$:
$$T(W_i) \subseteq W_i, \qquad 1 \le i \le k$$
and moreover the restrictions $T|_{W_i} : W_i \to W_i$ are as simple as possible.
The theory of a single linear transformation
Minimal Polynomial:

- To do so, we will see that polynomials play a crucial role.
- If we assume that $\dim_F V = n$, then we know that the space of linear transformations $L(V, V)$ is a vector space of dimension $n^2$ over $F$.
- This means that the $n^2 + 1$ elements $I, T, T^2, \dots, T^{n^2}$ of $L(V, V)$ are linearly dependent, where $I$ denotes the identity transformation.
- It follows that there exist scalars $a_0, a_1, \dots, a_{n^2}$, not all zero, such that
$$\sum_{i=0}^{n^2} a_i T^i = 0$$
The theory of a single linear transformation
Minimal Polynomial:

- This means that there exists a nonzero polynomial $q(x) \in F[x]$, given by $q(x) = \sum_{i=0}^{n^2} a_i x^i$, for which we have
$$q(T) = 0$$

- Definition 4.
A polynomial $q(x)$ such that $q(T) = 0$ is called an annihilating polynomial for $T$.
- Consider the subset $A_T \subseteq F[x]$ of all annihilating polynomials for $T$:
$$A_T := \{q \in F[x] \mid q(T) = 0\}$$
The theory of a single linear transformation
Minimal Polynomial:

- Definition 5.
A ring $R$ is a mathematical system consisting of a nonempty set $R = \{a, b, \dots\}$ together with two operations, addition and multiplication, each of which assigns to a pair of elements $a$ and $b$ in $R$ another element of $R$, denoted by $a + b$ in the case of addition and $ab$ in the case of multiplication, such that the following conditions hold for all $a, b, c \in R$:
The theory of a single linear transformation
Minimal Polynomial:

1. $(R, +)$ is a commutative group.
2. Multiplication is associative: $(ab)c = a(bc)$.
3. There exists an identity element $1 \in R$ for multiplication, i.e.
$$1a = a1 = a$$
4. $a(b + c) = ab + ac$ and $(a + b)c = ac + bc$.
5. If the commutative law for multiplication holds ($ab = ba$ for all $a, b \in R$), then $R$ is called a commutative ring.
The theory of a single linear transformation
Minimal Polynomial:

- Example: $\mathbb{Z}$ with the natural $+$ and $\times$ is a commutative ring.
- Example: $F[x]$, the space of polynomials over a field of scalars $F$, with $+$ and $\times$ of polynomials, is a commutative ring.
- Example: $L(V, V)$, the space of linear transformations on a vector space $V$, with $+$ and the composition of operators $\circ$ as multiplication, is a (noncommutative) ring.
The theory of a single linear transformation

Minimal Polynomial:
- Definition 6.
Given a commutative ring $R$, a subset $I \subseteq R$ is called an ideal if
1. $(I, +)$ is a group;
2. for all $a \in I$ and for all $r \in R$ we have $ra \in I$.

- Example: The set of integers which are multiples of 3 forms an ideal in $\mathbb{Z}$.
- We will soon characterize the ideals of $F[x]$.
The theory of a single linear transformation
Minimal Polynomial:

- $A_T$ is an ideal of $F[x]$, which by definition means it has the following two properties:
1. $(A_T, +)$ is a subgroup of $F[x]$. This is equivalent to saying that $A_T$ is closed under addition and subtraction.
2. $A_T$ is closed under multiplication by arbitrary elements of $F[x]$:
$$\forall r \in F[x],\ \forall p \in A_T:\ rp \in A_T$$
- The second property of $A_T$ comes from the following lemma.
The theory of a single linear transformation
Minimal Polynomial:

- Lemma 1.
If $p_1, p_2 \in F[x]$ are two polynomials with coefficients in $F$, then we have
$$(p_1 p_2)(T) = p_2(T) \circ p_1(T) = p_1(T) \circ p_2(T)$$
- The proof is easy and left as an exercise.
The theory of a single linear transformation
Minimal Polynomial:
- Let $0 \ne m \in A_T$ be a nonzero polynomial of least degree in $A_T$.

- Theorem 4.
Every polynomial $p \in A_T$ is a multiple of $m$, i.e. for every $p \in A_T$ there exists $q \in F[x]$ such that
$$p = qm$$

Proof:
- This follows from the so-called division algorithm in the space of polynomials, by which we know that for every pair of polynomials $p_1, p_2 \in F[x]$ with $p_1 \ne 0$ there exists a unique polynomial $q \in F[x]$, named the quotient, such that
$$p_2 = q p_1 + r$$
where $r \in F[x]$, named the remainder, is either zero ($r = 0$) or satisfies $\deg(r) < \deg(p_1)$.
The theory of a single linear transformation
Minimal Polynomial:

Proof:
- In order to prove the theorem we divide $p$ by $m$ to obtain the corresponding quotient $q$ and remainder $r$ such that
$$p = mq + r$$
with $\deg(r) < \deg(m)$.
- But since $A_T$ is an ideal and $p, m \in A_T$, we have $r = p - mq \in A_T$.
- Since $m \in A_T$ was a nonzero polynomial of least degree, we must have $r = 0$. This proves the theorem.
The theory of a single linear transformation

- In fact we have just proved the following theorem.

- Theorem 5.
Every ideal $I$ of the ring of polynomials $F[x]$ is generated by a single element, i.e. there exists $a \in I$ such that
$$I = \langle a \rangle = \{ra \mid r \in F[x]\}$$

- Corollary 1.
If $m_1 \in A_T$ is another nonzero polynomial of least degree, then there exists a scalar $\alpha \in F$ such that $m_1 = \alpha m$.
The theory of a single linear transformation

- Corollary 2.
Given a linear transformation $T \in L(V, V)$, there exists a unique monic polynomial $m \in F[x]$ of least degree such that
$$m(T) = 0$$

- We recall that a monic polynomial is a polynomial whose leading coefficient (the nonzero coefficient of highest degree) is equal to 1. So $m$ has the following form:
$$m(x) = b_0 + b_1 x + \dots + b_{k-1} x^{k-1} + x^k$$
The theory of a single linear transformation

- Definition 7.
Given a linear transformation $T \in L(V, V)$, the monic polynomial $m_T$ of least degree annihilating $T$ is called the minimal polynomial of $T$.
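As a computational aside, here is a minimal sketch (SymPy assumed; `minimal_polynomial` and `poly_at_matrix` are our own hypothetical helpers, not library calls) that finds $m_T$ by searching among the monic divisors of the characteristic polynomial. This search is justified by the Cayley-Hamilton theorem proved later in these notes, since $m_T$ divides $p(x)$:

```python
from itertools import product

import sympy as sp

x = sp.symbols('x')

def poly_at_matrix(q, T):
    """Evaluate the polynomial q(x) at the matrix T by Horner's rule."""
    result = sp.zeros(T.rows, T.rows)
    for c in sp.Poly(q, x).all_coeffs():
        result = result * T + c * sp.eye(T.rows)
    return result

def minimal_polynomial(T):
    """Lowest-degree monic divisor of charpoly(T) that annihilates T."""
    _, factors = sp.factor_list(T.charpoly(x).as_expr())
    best = None
    # try every exponent pattern 0..a_i over the irreducible factors
    for exps in product(*[range(a + 1) for _, a in factors]):
        q = sp.Mul(*[f**e for (f, _), e in zip(factors, exps)])
        if poly_at_matrix(q, T) == sp.zeros(T.rows, T.rows):
            if best is None or sp.degree(q, x) < sp.degree(best, x):
                best = q
    return sp.expand(best)

A = sp.Matrix([[1, 1], [0, 1]])
print(minimal_polynomial(A))   # x**2 - 2*x + 1, i.e. (x - 1)**2
```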
The theory of a single linear transformation
Reminder from polynomials

- An important property of the polynomial ring $F[x]$, where $F$ is an arbitrary field, is the unique factorization property.

- Definition 8.
A polynomial $p \in F[x]$ is called reducible if there exist non-constant polynomials $q_1, q_2 \in F[x]$ such that
$$p = q_1 q_2$$
Since both $q_1$ and $q_2$ are non-constant, we then have $1 \le \deg q_1, \deg q_2 < \deg p$.
$p$ is called irreducible, or prime, if it is not reducible.
The theory of a single linear transformation
Reminder from polynomials

- Definition 9.
Here the notation $r \mid s$ (read "$r$ divides $s$" or "$s$ is a multiple of $r$") for two polynomials $r, s \in F[x]$ means that there exists $t \in F[x]$ such that
$$s = rt$$
or, in other words, the remainder of the division of $s$ by $r$ is zero.
The theory of a single linear transformation
Reminder from polynomials

- Definition 10.
Given two polynomials $a, b \in F[x]$, a common divisor of $a$ and $b$ is a polynomial $c \in F[x]$ such that $c \mid a$ and $c \mid b$.
The theory of a single linear transformation
Reminder from polynomials

- Definition 11.
Two polynomials $a$ and $b$ are relatively prime if they do not have any common divisor of degree greater than zero; in other words, their only common divisors are constant polynomials.
The theory of a single linear transformation
Reminder from polynomials

- Definition 12.
An element $d \in F[x]$ is called a greatest common divisor of $r_1, \dots, r_k \in F[x]$ if $d \mid r_i$ for $1 \le i \le k$, and if $d'$ is such that $d' \mid r_i$ for $1 \le i \le k$, then $d' \mid d$.
The theory of a single linear transformation
Reminder from polynomials

- Now consider polynomials $f_1, \dots, f_k \in F[x]$. Then we set
$$I := \langle f_1, \dots, f_k \rangle = \left\{ \sum_{i=1}^k a_i f_i \ \middle|\ a_i \in F[x],\ 1 \le i \le k \right\}$$
- It is not difficult to see that $I$ is an ideal of $F[x]$.
- By Theorem 5 we know that there exists $d \in F[x]$ which generates $I$. In other words, there exist $h_1, \dots, h_k \in F[x]$ such that $d = \sum_{i=1}^k h_i f_i$ and $I = \langle d \rangle$.
- Since $f_1, \dots, f_k \in I$, this means that $d$ is a divisor of $f_1, \dots, f_k$.
The theory of a single linear transformation
Reminder from polynomials

- Theorem 6.
There exist $g_1, \dots, g_k \in F[x]$ such that $d = \sum_{i=1}^k g_i f_i$, and $d$ is the greatest common divisor of $f_1, \dots, f_k$.
Proof:
- The existence of the $g_i$'s follows from the fact that $d \in I$.
- If $d'$ is a common divisor of $f_1, \dots, f_k$, we must have $d' \mid d = \sum_{i=1}^k g_i f_i$.
- This proves that $d$ is the greatest common divisor of $f_1, \dots, f_k$.
The theory of a single linear transformation
Reminder from polynomials

- Thus far we have proved the following.

- Theorem 7.
Given polynomials $f_1, \dots, f_k \in F[x]$, their greatest common divisor always exists. If we denote this greatest common divisor by $d$, then there exist polynomials $g_1, \dots, g_k \in F[x]$ such that
$$d = g_1 f_1 + \dots + g_k f_k$$
The theory of a single linear transformation
Reminder from polynomials

- Theorem 8.
Given a polynomial $p \in F[x]$, there exists a factorization
$$p(x) = p_1(x)^{a_1} \cdots p_k(x)^{a_k}$$
where $p_1, \dots, p_k \in F[x]$ are distinct irreducible polynomials and $a_1, \dots, a_k$ are positive integers. Moreover, the decomposition is unique in the sense that if
$$p(x) = q_1(x)^{b_1} \cdots q_l(x)^{b_l}$$
is another factorization, where $q_1, \dots, q_l$ are distinct irreducible polynomials, then $k = l$ and, after reordering, $p_i = \theta_i q_i$ and $a_i = b_i$ for $i = 1, \dots, k$, where the $\theta_i$'s are scalars belonging to $F$.
The theory of a single linear transformation
Reminder from polynomials

Proof
- Proof: The proof of the existence of a factorization into irreducible polynomials can be done by induction and is left as an exercise.
- The proof of uniqueness is an immediate consequence of the following lemma, and it too is left as an exercise.
The theory of a single linear transformation
Reminder from polynomials
- Lemma 2.
Let $a, b \in F[x]$ be two arbitrary polynomials and let $p \in F[x]$ be an irreducible polynomial. Assume that $p \mid ab$. Then either $p \mid a$ or $p \mid b$.
- Proof: Suppose $p$ does not divide $a$. Then $a$ and $p$ are relatively prime, so we have
$$au + pv = 1$$
for some $u, v \in F[x]$.
- Then
$$abu + pvb = b$$
and since $p \mid ab$, we have $p \mid b$.
The theory of a single linear transformation
Minimal Polynomials Toy Example:

- Now let's return to our initial problem of characterizing the behavior of linear transformations.
- For simplicity we first assume that $m_T$, the minimal polynomial of $T$, can be written as
$$m_T = pq$$
where $p$ and $q$ are relatively prime polynomials, by which we mean their only common divisors are scalars.
The theory of a single linear transformation
Minimal Polynomials Toy Example:

- Theorem 9.
With the above hypothesis we have
$$\mathrm{Im}(p(T)) = \ker(q(T)), \qquad \ker(p(T)) = \mathrm{Im}(q(T))$$
Moreover, if we set $W_1 := \ker(p(T))$ and $W_2 := \ker(q(T))$, then $W_1$ and $W_2$ are invariant subspaces of $V$, i.e. $T(W_1) \subseteq W_1$ and $T(W_2) \subseteq W_2$, and we have
$$V = W_1 \oplus W_2$$
The theory of a single linear transformation
Minimal Polynomials Toy Example:

Proof:
- We first show that
$$\mathrm{Im}(p(T)) \cap \ker(p(T)) = \mathrm{Im}(q(T)) \cap \ker(q(T)) = \{0\}$$
- To see this, we note that due to the property $(p, q) = 1$ (this is the notation for being relatively prime), according to Theorem 7 we can find polynomials $a, b \in F[x]$ such that
$$ap + bq = 1$$
or
$$a(T)p(T) + b(T)q(T) = \mathrm{Id} \qquad (6)$$
The theory of a single linear transformation
Minimal Polynomials Toy Example:

Proof:
- On the other hand, we know that $m_T(T) = p(T)q(T) = 0$, so
$$\mathrm{Im}(p(T)) \subseteq \ker(q(T)), \qquad \mathrm{Im}(q(T)) \subseteq \ker(p(T)) \qquad (7)$$
- Thus, if $v \in \mathrm{Im}(p(T)) \cap \ker(p(T))$, then we have $p(T)v = q(T)v = 0$; applying (6) to $v$ then forces $v = 0$.
- So we obtain
$$V = \ker(p(T)) \oplus \mathrm{Im}(p(T)) \qquad (8)$$
(due to the property $\dim\ker(p(T)) + \dim\mathrm{Im}(p(T)) = n$).
The theory of a single linear transformation
Minimal Polynomials Toy Example:

Proof:
- Also from (6) we know that
$$\mathrm{Im}(p(T)) + \mathrm{Im}(q(T)) = V \qquad (9)$$
- Comparing (7), (8) and (9), we can conclude that $\mathrm{Im}(q(T)) = \ker(p(T))$. Similarly we can derive that $\mathrm{Im}(p(T)) = \ker(q(T))$.
- Thus, due to (8), the assertion of Theorem 9 is proved, i.e.
$$V = \ker(p(T)) \oplus \ker(q(T))$$
The theory of a single linear transformation
Minimal Polynomials Toy Example:
Second proof of Theorem 9:
- We first note that
$$\mathrm{Im}(q(T)) \subseteq \ker(p(T)) \quad \text{and} \quad \mathrm{Im}(p(T)) \subseteq \ker(q(T))$$
This can be seen simply by taking an arbitrary vector $v \in V$ and observing that
$$p(T)(q(T)(v)) = q(T)(p(T)v) = 0$$
- Also, since $p$ and $q$ are relatively prime, we can find polynomials $a, b \in F[x]$ such that
$$ap + bq = 1$$
which implies that
$$a(T)p(T) + b(T)q(T) = \mathrm{Id}$$
The theory of a single linear transformation
Minimal Polynomials Toy Example:
Second proof of Theorem 9:
- From this we conclude that $\ker(p(T)) \cap \ker(q(T)) = \{0\}$.
- On the other hand, we have
$$\dim\mathrm{Im}(p(T)) + \dim\ker(p(T)) = \dim\mathrm{Im}(q(T)) + \dim\ker(q(T)) = \dim V$$
- Thus, if $\dim\ker(q(T)) > \dim\mathrm{Im}(p(T))$, it would follow that
$$\dim\ker(q(T)) + \dim\ker(p(T)) > \dim V$$
and thus
$$\ker(q(T)) \cap \ker(p(T)) \ne \{0\}$$
which is a contradiction.
The theory of a single linear transformation
Minimal Polynomials Toy Example:

Second proof of Theorem 9:
- We deduce that
$$\dim\ker(q(T)) = \dim\mathrm{Im}(p(T))$$
which, together with the containment $\mathrm{Im}(p(T)) \subseteq \ker(q(T))$, means $\ker(q(T)) = \mathrm{Im}(p(T))$; similarly we have $\ker(p(T)) = \mathrm{Im}(q(T))$.
- Since $\ker(p(T)) \cap \ker(q(T)) = \{0\}$, we conclude
$$V = \ker(p(T)) \oplus \mathrm{Im}(p(T)) = \ker(q(T)) \oplus \mathrm{Im}(q(T))$$
- The invariance of $\ker(p(T))$ and $\mathrm{Im}(p(T))$ with respect to $T$ comes from
$$T \circ p(T) = p(T) \circ T$$
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

- Consider a linear transformation $T : V \to V$, where $V$ is an $n$-dimensional vector space over a field $F$.
- Let $m_T \in F[x]$ denote the minimal polynomial of $T$, and consider the factorization of $m_T$ into irreducible polynomials:
$$m_T(x) = p_1(x)^{a_1} \cdots p_k(x)^{a_k}$$
- We recall that here $p_1, \dots, p_k$ are distinct irreducible polynomials in $F[x]$.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

- Theorem 10.
With the hypothesis of the previous slide, if we set $W_i := \ker(p_i(T)^{a_i})$ for $1 \le i \le k$, then we have
$$V = W_1 \oplus \dots \oplus W_k$$
Moreover, the subspaces $W_1, \dots, W_k$ are invariant under $T$, i.e.
$$T(W_i) \subseteq W_i, \qquad 1 \le i \le k$$
and the minimal polynomial $m_i \in F[x]$ of $T|_{W_i} : W_i \to W_i$ is equal to
$$p_i(x)^{a_i}$$
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

Proof:
- The proof is easily done by induction, using Theorem 9.
- Note first that if we set $p := p_1^{a_1}$ and $q := p_2^{a_2} \cdots p_k^{a_k}$, then $p$ and $q$ are relatively prime.
- So if we set $W_1 := \ker(p(T))$ and $\tilde{W}_2 := \ker(q(T))$, then by Theorem 9 we have
$$V = W_1 \oplus \tilde{W}_2$$
- Furthermore, we know that $W_1$ and $\tilde{W}_2$ are invariant under $T$.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:
Proof:
- If $m_1$ and $\tilde{m}_2$ denote the minimal polynomials of $T|_{W_1} : W_1 \to W_1$ and $T|_{\tilde{W}_2} : \tilde{W}_2 \to \tilde{W}_2$ respectively, then we know by the definition of $W_1$ and $\tilde{W}_2$ that
$$m_1 \mid p_1^{a_1}, \qquad \tilde{m}_2 \mid p_2^{a_2} \cdots p_k^{a_k}$$
and so $m_1$ and $\tilde{m}_2$ are relatively prime and
$$m_1 \tilde{m}_2 \mid m_T \qquad (10)$$
- On the other hand, due to the direct decomposition $V = W_1 \oplus \tilde{W}_2$, it is obvious that $m_1 \tilde{m}_2$ is an annihilator of $T$. Thus
$$m_T \mid m_1 \tilde{m}_2 \qquad (11)$$
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

Proof:
- By comparing (10) and (11) we conclude that $m_T = m_1 \tilde{m}_2$, and more precisely
$$m_1 = p_1^{a_1}, \qquad \tilde{m}_2 = p_2^{a_2} \cdots p_k^{a_k}$$
- Continuing inductively, the proof of the theorem is completed.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

- Let's assume the hypothesis of Theorem 10, i.e. a linear transformation $T : V \to V$ with minimal polynomial $m_T = p_1^{a_1} \cdots p_k^{a_k}$ and a decomposition $V = W_1 \oplus \dots \oplus W_k$, where $W_j = \ker(p_j^{a_j}(T))$ for $j = 1, \dots, k$.
- Let $q_1, \dots, q_k \in F[x]$ be such that
$$\sum_{i=1}^k q_i\, \frac{m_T}{p_i^{a_i}} = 1$$
This is feasible due to Theorem 7, since the cofactors $m_T / p_i^{a_i}$ have no common divisor of positive degree.
- We set
$$\Pi_j : V \to V, \qquad \Pi_j = q_j(T)\, \frac{m_T}{p_j^{a_j}}(T), \qquad 1 \le j \le k$$
- Then we have the following theorem.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

- Theorem 11.
The operators $\Pi_1, \dots, \Pi_k$ are nothing but the projections onto $W_1, \dots, W_k$ respectively, with respect to the decomposition $V = W_1 \oplus \dots \oplus W_k$. More precisely, if $v \in V$ is decomposed as $v = v_1 + \dots + v_k$ with $v_j \in W_j$, $1 \le j \le k$, then we have
$$\Pi_j(v) = v_j, \qquad 1 \le j \le k$$
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

Proof:
- If $i \ne j$, then $p_i^{a_i} \mid q_j \frac{m_T}{p_j^{a_j}}$. By the definition of the $W_i$'s this means that $\Pi_j|_{W_i} = 0$ for $i \ne j$.
- On the other hand, since we have $\sum_{j=1}^k \Pi_j = \mathrm{Id}$, it follows that $\Pi_j|_{W_j} = \mathrm{Id}_{W_j}$.
- The combination of the above two observations proves Theorem 11.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

- Corollary 3.
The projection operators $\Pi_j : V \to W_j$ can be expressed as polynomials in terms of $T$.
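Here is a small sketch of Corollary 3 for $k = 2$ (SymPy assumed; `poly_at_matrix` is our own hypothetical helper, and the matrix is the one from the worked example later in these notes, with $m_T = (x+1)^2(x-2)$). The extended Euclidean algorithm produces the $q_j$'s, so the $\Pi_j$'s come out as honest polynomials in $T$:

```python
import sympy as sp

x = sp.symbols('x')

def poly_at_matrix(q, T):
    """Evaluate the polynomial q(x) at the matrix T by Horner's rule."""
    result = sp.zeros(T.rows, T.rows)
    for c in sp.Poly(q, x).all_coeffs():
        result = result * T + c * sp.eye(T.rows)
    return result

f1 = x - 2                            # cofactor m_T / (x+1)^2, for W1 = ker (T + I)^2
f2 = (x + 1)**2                       # cofactor m_T / (x-2), for W2 = ker (T - 2I)
q1, q2, g = sp.gcdex(f1, f2, x)       # Bezout: q1*f1 + q2*f2 = 1
assert g == 1

A = sp.Matrix([[-1, 3, 0], [0, 2, 0], [2, 1, -1]])
Pi1 = poly_at_matrix(q1 * f1, A)      # projection onto W1, a polynomial in A
Pi2 = poly_at_matrix(q2 * f2, A)      # projection onto W2
assert Pi1 + Pi2 == sp.eye(3)
assert Pi1 * Pi2 == sp.zeros(3, 3)
assert Pi1 * Pi1 == Pi1               # idempotent, as a projection should be
```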
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:
The case of algebraically closed fields
- As a consequence of the recent theorem, we find that the problem of understanding the behavior of an arbitrary linear transformation $T$ reduces to that of characterizing the behavior of the $T|_{W_i}$'s.
- In other words, we may restrict ourselves to the case where the minimal polynomial $m_T$ has the form $p(x)^a \in F[x]$, where $p(x)$ is an irreducible polynomial in $F[x]$ and $a$ is a positive integer.
- In the case where $F = \mathbb{C}$, we know by the fundamental theorem of algebra that the only irreducible polynomials, up to multiplication by scalars, are the polynomials of degree 1:
$$p(x) = x - \xi$$
where $\xi \in \mathbb{C}$ is a fixed complex number.
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

The case of algebraically closed fields
- As a special case, let $V$ be a vector space over $\mathbb{C}$ and let $T : V \to V$ be a linear transformation.
- In this case, according to the fundamental theorem of algebra, the minimal polynomial $m_T$ of $T$ splits as
$$m_T(x) = (x - \alpha_1)^{a_1} \cdots (x - \alpha_k)^{a_k}$$
- If we set $W_i := \ker((T - \alpha_i \mathrm{Id})^{a_i})$, where $\mathrm{Id} : V \to V$ denotes the identity map, then we know by the previous theorem that
$$V = W_1 \oplus \dots \oplus W_k$$
The theory of a single linear transformation
Minimal Polynomials and Decomposition into Invariant Subspaces:

Generalized eigenvectors
- Definition 13.
A nonzero vector $v \in V$ is a generalized eigenvector of rank $m$ of $T$ corresponding to the eigenvalue $\alpha$ if
$$(T - \alpha\,\mathrm{Id})^m v = 0$$
In other words, given an eigenvalue $\alpha \in F$, we define the generalized eigenspace of rank $m$ to be $\ker(T - \alpha\,\mathrm{Id})^m$, and the nonzero elements of $\ker(T - \alpha\,\mathrm{Id})^m$ are called generalized eigenvectors.
- Note that for any positive integer $m$ the subspace $\ker(T - \alpha\,\mathrm{Id})^m$ can be nontrivial only if $\alpha$ is an eigenvalue of $T$. (Prove this!)
The theory of a single linear transformation
Nilpotent Maps

- As can be seen from Theorem 10 and our recent discussion, in the case where $F$ is algebraically closed, and in particular in the case where $F = \mathbb{C}$, the problem of characterizing arbitrary linear operators $T : V \to V$ reduces to the characterization of $T|_{W_j}$, where $W_j = \ker(T - \alpha_j I)^{a_j}$ for some eigenvalues $\alpha_j$ and some positive integers $a_j$.
- This means that $(T - \alpha_j I)|_{W_j}$ is a nilpotent operator on $W_j$, $1 \le j \le k$.
- Thus, if we can characterize nilpotent operators, then we will be able to understand the behavior of $T|_{W_j}$.
- This is because the difference between $(T - \alpha_j I)|_{W_j}$ and $T|_{W_j}$ is just $\alpha_j I$, which has a trivial behavior.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- There is a close relation between nilpotent maps and triangular matrices.

- Definition 14.
A square $n \times n$ matrix $A$ is called upper triangular if all the entries below the main diagonal are zero:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Now assume that $A$ is an upper triangular matrix such that all the entries on the main diagonal vanish: $a_{11} = \dots = a_{nn} = 0$.
- Then we claim that $A$ is a nilpotent matrix with index at most $n$:
$$A^n = 0$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- In fact, a simple computation shows that if we multiply an upper triangular matrix $A$ with vanishing main diagonal by itself, the result will have zero entries both on its main diagonal and on its superdiagonal.
- Similarly, $A^j$ has zero entries on its $i$-th superdiagonal for $i \le j - 1$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Let's write this argument more rigorously. Consider an $n \times n$ matrix $A = [a_{ij}]$ such that
$$a_{ij} = 0 \quad \text{if } j \le i$$
- Let $A^k = [b_{ij}^k]$ for $k \ge 2$ and $1 \le i, j \le n$. Then we have
$$b_{ij}^k = 0 \quad \text{if } j \le i + k - 1$$
- Prove the above claim.
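A quick numeric check of the claim (NumPy assumed; the random matrix is an arbitrary test case, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.triu(rng.integers(1, 10, size=(n, n)), k=1)  # strictly upper triangular

P = np.eye(n, dtype=int)
for k in range(1, n + 1):
    P = P @ A                                       # P = A^k
    # the entries b_ij of A^k vanish whenever j <= i + k - 1
    assert all(P[i, j] == 0 for i in range(n)
               for j in range(n) if j <= i + k - 1)
assert np.all(P == 0)                               # hence A^n = 0
```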
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Corollary 4.
Any upper triangular matrix with vanishing main diagonal,
$$\begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} \qquad (12)$$
is nilpotent.
We will see that the converse is also true; in other words, any nilpotent linear operator has, in an appropriate basis, a representation like (12).
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Let $T : V \to V$ be a linear transformation on an $n$-dimensional vector space $V$.
- Let the matrix of $T$ with respect to an ordered basis $B = (v_1, \dots, v_n)$ have the following form:
$$[T]_B = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} \qquad (13)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- If we set $W_j = \langle v_1, \dots, v_j \rangle$ for $j = 1, \dots, n$, and $W_0 = \{0\}$, then we have
1. $\{0\} = W_0 \subseteq W_1 \subseteq \dots \subseteq W_n = V$
2. $Tv_j \in W_{j-1}$ for $1 \le j \le n$, and thus
$$T(W_j) \subseteq W_{j-1} \quad \text{for } 1 \le j \le n$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Theorem 12.
There exists a basis $B$ for the linear map $T : V \to V$ with respect to which the matrix representation $[T]_B$ has the upper triangular form (13) if and only if there exist nested subspaces $\{0\} = W_0 \subsetneq W_1 \subsetneq \dots \subsetneq W_{n-1} \subsetneq W_n = V$ such that
$$T(W_j) \subseteq W_{j-1} \quad \text{for } 1 \le j \le n$$

Proof:
- The proof is easy and left as an exercise.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Theorem 13.
Let $V$ be a vector space defined over an algebraically closed field $F$, and let $T : V \to V$ be a linear transformation. Then there exists an ordered basis $B$ for $V$ with respect to which the matrix representation $[T]_B$ has the following form:
$$\begin{pmatrix} A_1 & & & & 0 \\ & \ddots & & & \\ & & A_j & & \\ & & & \ddots & \\ 0 & & & & A_k \end{pmatrix} \qquad (14)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Here $A_j$, $1 \le j \le k$, is a $b_j \times b_j$ upper triangular matrix of the form
$$A_j = \begin{pmatrix} \alpha_j & * & \cdots & * \\ 0 & \alpha_j & \cdots & * \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \alpha_j \end{pmatrix} \qquad (15)$$
such that $A_j - \alpha_j I_j$ is a nilpotent $b_j \times b_j$ matrix of index $a_j$:
$$(A_j - \alpha_j I_j)^{a_j} = 0$$
- It is obvious that
$$a_j \le b_j \qquad (16)$$
Eigenvalues and Eigenvectors
Diagonalizability and minimal polynomials:

- Theorem 14.
A linear transformation $T \in L(V, V)$ is diagonalizable iff the minimal polynomial of $T$ has the form
$$m_T(x) = (x - \alpha_1) \cdots (x - \alpha_k)$$
where $\alpha_1, \dots, \alpha_k$ are distinct elements of $F$.
Eigenvalues and Eigenvectors
Diagonalizability and minimal polynomials:
Proof:
- If $T$ is diagonalizable, we know from Theorem 2 that there exists a decomposition $V = \oplus_{j=1}^k W_j$ into a direct sum of subspaces $W_1, \dots, W_k$ such that
$$T|_{W_j} = \alpha_j \mathrm{Id}_{W_j}$$
for some distinct scalars $\alpha_1, \dots, \alpha_k \in F$.
- In other words,
$$(T - \alpha_j \mathrm{Id})|_{W_j} = 0, \qquad 1 \le j \le k$$
thus
$$(T - \alpha_1 \mathrm{Id}) \circ \dots \circ (T - \alpha_k \mathrm{Id}) = 0$$
Eigenvalues and Eigenvectors
Diagonalizability and minimal polynomials:
Proof:
- This means that
$$m_T(x) \mid (x - \alpha_1) \cdots (x - \alpha_k)$$
- On the other hand, if any one of the prime factors $x - \alpha_j$, $1 \le j \le k$, is deleted, the resulting polynomial
$$\hat{m}(x) = \frac{\prod_{i=1}^k (x - \alpha_i)}{x - \alpha_j}$$
does not annihilate $T$.
- In fact, if $0 \ne v_j \in W_j$, then $Tv_j = \alpha_j v_j$, so $(T - \alpha_i I)v_j = (\alpha_j - \alpha_i)v_j$.
- So $\hat{m}(T)v_j = \prod_{i \ne j} (\alpha_j - \alpha_i)\, v_j \ne 0$.
- This shows that
$$m_T(x) = (x - \alpha_1) \cdots (x - \alpha_k)$$
Eigenvalues and Eigenvectors

Diagonalizability and minimal polynomials:

Proof:
- Conversely, assume that the minimal polynomial $m_T$ of $T$ has the form
$$m_T(x) = (x - \alpha_1) \cdots (x - \alpha_k)$$
where $\alpha_1, \dots, \alpha_k$ are distinct scalars.
- Then by Theorem 10 we know that $V = W_1 \oplus \dots \oplus W_k$, where
$$W_j = \ker(T - \alpha_j I), \qquad 1 \le j \le k$$
- Thus, by Theorem 2, $T$ is diagonalizable.
Eigenvalues and Eigenvectors

Diagonalizability and minimal polynomials:

- Theorem 15.
Let $T_1, \dots, T_k$ be a set of diagonalizable linear transformations on $V$ such that $T_i T_j = T_j T_i$ for $1 \le i, j \le k$. Then there exists a basis $B$ for $V$ with respect to which all the matrices $[T_1]_B, \dots, [T_k]_B$ are diagonal.
Eigenvalues and Eigenvectors
Diagonalizability and minimal polynomials:
Proof:
- We proceed by induction on $k$.
- Let $\alpha_1, \dots, \alpha_r$ denote the distinct eigenvalues of $T_1$, and let $W_j = \ker(T_1 - \alpha_j I)$, $1 \le j \le r$, be the eigenspace corresponding to the eigenvalue $\alpha_j$. We know from Theorem 2 that $V = W_1 \oplus \dots \oplus W_r$.
- The fact that the $T_j$'s commute with each other implies that the subspaces $W_1, \dots, W_r$ are invariant with respect to all the operators $T_1, \dots, T_k$. This is because if $v_j \in W_j$ then we have
$$T_1 T_i v_j = T_i T_1 v_j = T_i(\alpha_j v_j) = \alpha_j T_i v_j$$
which means that $T_i v_j$ is also an eigenvector of $T_1$ with eigenvalue $\alpha_j$ (or is zero), and thus it belongs to $W_j$.
Eigenvalues and Eigenvectors
Diagonalizability and minimal polynomials:
Proof:
- Now we consider the operators $S_2, \dots, S_k : W_1 \to W_1$ defined by
$$S_i := T_i|_{W_1}, \qquad 2 \le i \le k$$
- We claim that $S_2, \dots, S_k$ are diagonalizable. This is because for each $2 \le i \le k$ the minimal polynomial $m_{S_i}$ of $S_i$ divides the minimal polynomial $m_{T_i}$ of $T_i$:
$$m_{S_i} \mid m_{T_i}, \qquad 2 \le i \le k$$
- Now we can apply Theorem 14 and the hypothesis on the diagonalizability of the $T_i$'s to deduce that the $S_i$'s are also diagonalizable.
- Since $S_2, \dots, S_k$ obviously commute with each other, by the induction hypothesis we can infer that $S_2, \dots, S_k$ are simultaneously diagonalizable.
- Repeating this for all the subspaces $W_1, \dots, W_r$ completes the proof.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Theorem 16 (Cayley-Hamilton Theorem for Algebraically Closed Fields).
Let $T$ be a linear transformation on a finite dimensional vector space $V$ over an algebraically closed field $F$. Then the minimal polynomial $m_T$ divides the characteristic polynomial $p(x)$:
$$m_T \mid p$$
In particular, $p(T) = 0$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Proof: The proof is more or less obvious from Theorem 13. Due to relation (14) we have
$$\det(xI - T) = \det(xI_1 - A_1) \cdots \det(xI_k - A_k)$$
- Also from (15) we have
$$\det(xI_j - A_j) = (x - \alpha_j)^{b_j}$$
- The inequality (16) shows that $a_j \le b_j$ for $1 \le j \le k$.
- The proof is complete by recalling that
$$m_T(x) = \prod_{j=1}^k (x - \alpha_j)^{a_j}$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

- Theorem 17 (Jordan Decomposition).
Let $T \in L(V, V)$, where $V$ is a finite dimensional vector space over an algebraically closed field. Then there exist linear transformations $D$ and $N$ on $V$ such that
a) $T = D + N$;
b) $D$ is diagonalizable and $N$ is nilpotent;
c) there exist polynomials $f(x), g(x) \in F[x]$ such that $D = f(T)$ and $N = g(T)$.
The transformations $D$ and $N$ are uniquely determined, in the sense that if $D'$ and $N'$ are diagonalizable and nilpotent transformations, respectively, such that $T = D' + N'$ and $D'N' = N'D'$, then $D = D'$ and $N = N'$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices
Proof:
- We have already seen that if $m_T \in F[x]$ denotes the minimal polynomial of $T$, then, since $F$ is algebraically closed, we have the decomposition
$$m_T(x) = (x - \alpha_1)^{a_1} \cdots (x - \alpha_k)^{a_k}$$
and if we set $W_j := \ker(T - \alpha_j I)^{a_j}$, $1 \le j \le k$, then $V = W_1 \oplus \dots \oplus W_k$, and according to Corollary 3 the projection maps
$$\Pi_j : V \to W_j, \qquad 1 \le j \le k$$
are polynomials in terms of $T$. So there exist $f_j(x) \in F[x]$ such that
$$\Pi_j = f_j(T), \qquad 1 \le j \le k \qquad (17)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Proof:
- Now if we define $D : V \to V$ as
$$D := \sum_{j=1}^k \alpha_j \Pi_j$$
then $D$ is diagonalizable, since $D|_{W_j} = \alpha_j \mathrm{Id}_{W_j}$ and moreover $V = \oplus_{j=1}^k W_j$.
- Also, if we set $f(x) = \sum_{j=1}^k \alpha_j f_j(x)$, then due to (17) we have
$$D = f(T)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Proof:
- We set $N := T - D$. Then obviously $N = g(T)$ for $g(x) = x - f(x)$.
- Also, if $a := \max_{1 \le j \le k} \{a_j\}$, then we claim that $N^a = 0$.
- This is because we know that
$$N|_{W_j} = (T - \alpha_j \mathrm{Id})|_{W_j}$$
thus $(N|_{W_j})^{a_j} = 0$ by the definition of $W_j$. In particular, since $a_j \le a$ for all $1 \le j \le k$, we have
$$(N|_{W_j})^a = 0 \qquad \forall\, 1 \le j \le k$$
- Since $V = \oplus_{j=1}^k W_j$, we conclude that $N^a = 0$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Proof:
- In order to prove the uniqueness part, assume that $T = D' + N'$, where $D'$ and $N'$ satisfy the hypothesis of the last part of the theorem.
- Since $D'N' = N'D'$, we have $TD' = D'T$ and $TN' = N'T$.
- Since $D$ and $N$ are polynomials in terms of $T$, we can deduce that $DD' = D'D$ and $NN' = N'N$.
- Thus, from $T = D' + N' = D + N$ it follows that
$$D' - D = N - N'$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices
Proof:
- Due to the relation
$$(N - N')^s = \sum_{j=0}^s \binom{s}{j} N^j (-N')^{s-j}$$
which comes from the commutativity of $N$ and $N'$, we can deduce that if $N^a = 0$ and $N'^b = 0$, then $(N - N')^{a+b} = 0$.
- On the other hand, due to Theorem 15, $D$ and $D'$ are simultaneously diagonalizable: we can find a basis $B$ with respect to which both $D$ and $D'$, and thus $D' - D$, are diagonal:
$$\begin{pmatrix} \beta_1 & & & & 0 \\ & \ddots & & & \\ & & \beta_j & & \\ & & & \ddots & \\ 0 & & & & \beta_k \end{pmatrix} \qquad (18)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Proof:
- This means that $D' - D$ can be nilpotent iff $D' - D = 0$, i.e. $D = D'$, and this completes the uniqueness part of the theorem.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- Exercise. Prove that the minimal polynomial and the characteristic polynomial of a linear transformation $T : V \to V$ have the same roots. In particular, if $p(x) = \prod_{i=1}^k (x - \alpha_i)^{b_i}$ can be decomposed into linear factors, then $m_T(x) = \prod_{i=1}^k (x - \alpha_i)^{a_i}$, where $1 \le a_i \le b_i$ for all $1 \le i \le k$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices
Examples:
- Let $V$ be a three dimensional vector space over $\mathbb{C}$ with basis $\{v_1, v_2, v_3\}$, and let $T \in L(V, V)$ be defined by the equations:
$$Tv_1 = -v_1 + 2v_3, \qquad Tv_2 = 3v_1 + 2v_2 + v_3, \qquad Tv_3 = -v_3$$
- Thus the matrix of $T$ with respect to $(v_1, v_2, v_3)$ is given by
$$A = \begin{pmatrix} -1 & 3 & 0 \\ 0 & 2 & 0 \\ 2 & 1 & -1 \end{pmatrix}$$
- Find the distinct eigenvalues of $T$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- To this end we proceed as follows. The characteristic polynomial $p(x)$ of $T$ is computed as
$$p(x) = \det(xI - A) = \det \begin{pmatrix} x + 1 & -3 & 0 \\ 0 & x - 2 & 0 \\ -2 & -1 & x + 1 \end{pmatrix} = (x + 1)^2 (x - 2)$$
- Thus the distinct eigenvalues of $T$ are $\{-1, 2\}$. Also, the minimal polynomial of $T$ is either
$$(x + 1)(x - 2) \quad \text{or} \quad (x + 1)^2 (x - 2)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- Now we consider $A + I$ and $A - 2I$:
$$A + I = \begin{pmatrix} 0 & 3 & 0 \\ 0 & 3 & 0 \\ 2 & 1 & 0 \end{pmatrix}, \qquad A - 2I = \begin{pmatrix} -3 & 3 & 0 \\ 0 & 0 & 0 \\ 2 & 1 & -3 \end{pmatrix}$$
- It is not difficult to see that $\mathrm{rank}(A + I) = \mathrm{rank}(A - 2I) = 2$, and thus $\dim\ker(A + I) = \dim\ker(A - 2I) = 1$.
- This means that $T$ is not diagonalizable, and so we have
$$m_T(x) = (x + 1)^2 (x - 2)$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- In order to find the Jordan decomposition we need to find $\ker(T + I)^2$ and $\ker(T - 2I)$.
- It is an exercise to see that $\ker(T + I)^2 = \mathrm{Span}\{v_1, v_3\}$ and $\ker(T - 2I) = \mathrm{Span}\{v_1 + v_2 + v_3\}$.
- We set
$$W_1 = \mathrm{Span}\{v_1, v_3\} \quad \text{and} \quad W_2 = \mathrm{Span}\{v_1 + v_2 + v_3\}$$
- According to our previous discussion, our desired matrix representation for $T$ is obtained by taking appropriate bases for $W_1$ and $W_2$.
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- More precisely, we look for a basis $\{w_1, w_2\}$ of $W_1$ such that
$$(T + I)w_1 = 0 \quad \text{and} \quad (T + I)w_2 \in \mathrm{Span}\{w_1\}$$
- We know that $Tv_1 = -v_1 + 2v_3$ and $Tv_3 = -v_3$. Thus $(T + I)v_1 = 2v_3$ and $(T + I)v_3 = 0$.
- Thus, if we set $w_1 = v_3$ and $w_2 = v_1$, we get
$$(T + I)w_1 = 0 \quad \text{and} \quad (T + I)w_2 = 2w_1$$
- If we set $w_3 = v_1 + v_2 + v_3$, we also have
$$Tw_3 = 2w_3$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices
Examples:
- Consequently, if we set $B' = (w_1, w_2, w_3)$, then
$$[T]_{B'} = \begin{pmatrix} -1 & 2 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$
- We can also easily see that the matrix of change of coordinates $P$ is
$$P = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix}$$
and we have
$$[T]_{B'} = P^{-1} A P$$
The theory of a single linear transformation
Nilpotent maps and triangular matrices

Examples:
- In this example we have (in the basis $B'$)
$$D = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad N = \begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
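A short symbolic check of this worked example, assuming SymPy:

```python
import sympy as sp

A = sp.Matrix([[-1, 3, 0], [0, 2, 0], [2, 1, -1]])
P = sp.Matrix([[0, 1, 1], [0, 0, 1], [1, 0, 1]])    # columns: w1=v3, w2=v1, w3=v1+v2+v3
J = P.inv() * A * P                                 # [T]_{B'}
assert J == sp.Matrix([[-1, 2, 0], [0, -1, 0], [0, 0, 2]])

D = sp.diag(-1, -1, 2)
N = J - D
assert N * N == sp.zeros(3, 3)                      # N is nilpotent
assert D * N == N * D                               # D and N commute
```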
The theory of a single linear transformation
Jordan Canonical Form

- Consider a linear transformation $T : V \to V$. Let $\alpha \in F$ be a scalar such that $T - \alpha I$ is a nilpotent linear transformation of index $k$, i.e.
$$(T - \alpha I)^k = 0$$
and $k$ is the smallest positive integer with this property.
- Let $S : V \to V$ be the operator $S = T - \alpha I$, and set
$$V_j = \{v \in V \mid S^j(v) = 0\} = \ker(S^j), \qquad 1 \le j \le k$$
- Obviously we have $V_1 = \ker(S)$ and $V_k = V$, and
$$\{0\} = V_0 \subseteq V_1 \subseteq V_2 \subseteq \dots \subseteq V_{k-1} \subseteq V_k = V$$
The theory of a single linear transformation
Jordan Canonical Form

- In fact we have
$$V_1 = S^{-1}(\{0\}), \quad V_2 = S^{-1}(V_1), \quad \dots, \quad V_i = S^{-1}(V_{i-1}) \quad \text{for } 1 \le i \le k$$
The theory of a single linear transformation
Jordan Canonical Form

- We have $V_{k-1} = \ker(S^{k-1})$. Let $W_{k-1}$ be a complement of $V_{k-1}$ in $V$:
$$V = V_{k-1} \oplus W_{k-1}$$
- Then for every nonzero $v \in W_{k-1}$ we have $S^{k-1}(v) \ne 0$.
- Thus
$$S(W_{k-1}) \cap V_{k-2} = \{0\}$$
where $V_{k-2} = \ker(S^{k-2})$.
- Let $W_{k-2}$ be a complement of $V_{k-2}$ in $V_{k-1}$ containing $S(W_{k-1})$:
$$V_{k-1} = V_{k-2} \oplus W_{k-2}, \qquad S(W_{k-1}) \subseteq W_{k-2}$$
The theory of a single linear transformation
Jordan Canonical Form

- Continuing inductively, we assume that the subspaces $W_{k-1}, \dots, W_i \subseteq V$ are constructed such that
$$V_j = V_{j-1} \oplus W_{j-1} \quad \text{for } i + 1 \le j \le k$$
where $V_j = \ker(S^j)$ (and $V_k = V$), satisfying
$$S(W_{j-1}) \subseteq W_{j-2} \quad \text{for } i + 2 \le j \le k$$
- Then $S(W_i) \subseteq V_i$ is such that $S(W_i) \cap V_{i-1} = \{0\}$.
- We then construct a complement $W_{i-1}$ of $V_{i-1}$ in $V_i$ containing $S(W_i)$:
$$V_i = V_{i-1} \oplus W_{i-1}, \qquad S(W_i) \subseteq W_{i-1}$$
The theory of a single linear transformation
Jordan Canonical Form

- Consequently we obtain
$$S(W_{k-1}) \hookrightarrow W_{k-2}, \quad S(W_{k-2}) \hookrightarrow W_{k-3}, \quad \dots, \quad S(W_2) \hookrightarrow W_1$$
- Also $S(W_1) \hookrightarrow V_1$, where $V_1 = \ker(S)$. We also set $W_0 := V_1$.
The theory of a single linear transformation
Jordan Canonical Form

- Assume that $\dim W_j = d_j$ for $0 \le j \le k - 1$.
- Let $B_j = (v_{j,1}, v_{j,2}, \dots, v_{j,d_j})$ be an ordered basis of $W_j$ for $0 \le j \le k - 1$, chosen such that
$$S v_{j,i} = v_{j-1,i} \quad \text{for } 1 \le j \le k - 1, \ 1 \le i \le d_j$$
- As an exercise, prove that such bases exist.
The theory of a single linear transformation
Jordan Canonical Form

- We know that $B_0, \dots, B_{k-1}$ are disjoint and their union $B = \bigcup_{j=0}^{k-1} B_j$ forms a basis for $V$.
- Consider chains of maximal length of the form $(v, Sv, S^2 v, \dots, S^l v)$ in $B$.
- For example, $(v_{0,1}, v_{1,1}, \dots, v_{k-1,1})$ is one such chain.
- In fact we have $d_{k-1}$ chains of length $k$, we have $(d_{k-2} - d_{k-1})$ chains of length $k - 1$, ..., and we have $(d_0 - d_1)$ chains of length 1. (Why?)
The theory of a single linear transformation

- The partition of $B$ into these chains gives rise to a decomposition of $V$ into a direct sum
$$V = E_1 \oplus \dots \oplus E_r$$
where each $E_j$, $1 \le j \le r$, is the subspace generated by the corresponding chain.
- For example, $E_1 = \langle v_{0,1}, v_{1,1}, \dots, v_{k-1,1} \rangle$, etc.
The theory of a single linear transformation
Jordan Canonical Form

- It is obvious that the subspaces $E_1, \dots, E_r$ are invariant under the application of $S$, and thus the matrix of $S$ with respect to the basis $B$ has the following block form:
$$\begin{pmatrix} C_1 & & & & 0 \\ & \ddots & & & \\ & & C_j & & \\ & & & \ddots & \\ 0 & & & & C_r \end{pmatrix} \qquad (19)$$
The theory of a single linear transformation

- where
$$C_j = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddots & 1 \\ 0 & 0 & \cdots & & 0 \end{pmatrix} \qquad (20)$$
The theory of a single linear transformation

- Correspondingly, on each such subspace the operator $T = S + \alpha_j I$ is represented by the Jordan block
$$A_j = \begin{pmatrix} \alpha_j & 1 & 0 & \cdots & 0 \\ 0 & \alpha_j & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \ddots & 1 \\ 0 & 0 & \cdots & & \alpha_j \end{pmatrix} \qquad (21)$$
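For reference, SymPy's `jordan_form` method computes this form directly. A small check on the matrix from the earlier example (the Jordan blocks are a $2 \times 2$ block for $\alpha = -1$ and a $1 \times 1$ block for $\alpha = 2$; the block ordering in the output may differ):

```python
import sympy as sp

A = sp.Matrix([[-1, 3, 0], [0, 2, 0], [2, 1, -1]])
P, J = A.jordan_form()        # A = P * J * P**(-1)
assert P * J * P.inv() == A
print(J)                      # one 2x2 Jordan block for -1, one 1x1 block for 2
```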
The theory of a single linear transformation
Complexication of a real vector space

- Let $V$ be a real $n$-dimensional vector space. We define a complex vector space $V \otimes \mathbb{C}$ as follows:
$$V \otimes \mathbb{C} := \{u + iv \mid u, v \in V\}$$
- The addition and scalar multiplication are defined by
$$(u + iv) + (u' + iv') := (u + u') + i(v + v')$$
for $u + iv, u' + iv' \in V \otimes \mathbb{C}$, and
$$(a + ib)(u + iv) := (au - bv) + i(bu + av)$$
for $a + ib \in \mathbb{C}$ and $u + iv \in V \otimes \mathbb{C}$.
The theory of a single linear transformation
Complexication of a real vector space

- It is not difficult to see that $V \otimes \mathbb{C}$ with the above operations is a vector space over $\mathbb{C}$. This is called the complexification of the vector space $V$.
- Now, if $T : V \to V$ is an $\mathbb{R}$-linear transformation, we can define
$$T_c : V \otimes \mathbb{C} \to V \otimes \mathbb{C}$$
by
$$T_c(u + iv) := Tu + iTv$$
We have
$$T_c((a + ib)(u + iv)) = T(au - bv) + iT(bu + av) = (a + ib)\,T_c(u + iv)$$
$$T_c((u + iv) + (u' + iv')) = T_c(u + iv) + T_c(u' + iv')$$
The theory of a single linear transformation
Complexication of a real vector space

- Thus $T_c$ is a $\mathbb{C}$-linear transformation on $V \otimes \mathbb{C}$.
- Example: If $V = \mathbb{R}^n$ then $V \otimes \mathbb{C}$ is nothing but $\mathbb{C}^n$. Also, if $A$ is an $n \times n$ matrix with real coefficients, then the complexification of $A$ is represented by the same matrix $A$.
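A quick numeric illustration of this example (NumPy assumed; the matrix and vectors are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))      # a real matrix: a linear map on R^3
u = rng.standard_normal(3)
v = rng.standard_normal(3)
z = u + 1j * v                       # an element of R^3 (x) C = C^3

# T_c(u + iv) = Tu + iTv: the same real matrix acts on complex vectors.
assert np.allclose(T @ z, T @ u + 1j * (T @ v))
```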
The theory of a single linear transformation
Complexication of a real vector space

- If $\tilde{W} \subseteq V \otimes \mathbb{C}$ is a complex subspace of $V \otimes \mathbb{C}$, then we define its conjugate
$$\bar{\tilde{W}} := \{u - iv \mid u + iv \in \tilde{W}\}$$
- $\bar{\tilde{W}}$ is also a complex subspace of $V \otimes \mathbb{C}$. To see this, we take a vector $u + iv \in \tilde{W}$ and note that
$$(a + ib)(u - iv) = \overline{(a - ib)(u + iv)}$$
and $(a - ib)(u + iv) \in \tilde{W}$. Also, it is not difficult to see that $\bar{\tilde{W}}$ is closed under addition.
The theory of a single linear transformation
Complexication of a real vector space

- Several cases might occur:
$$\bar{\tilde{W}} = \tilde{W}, \quad \text{or} \quad \bar{\tilde{W}} \cap \tilde{W} = \{0\}, \quad \text{or} \quad \{0\} \subsetneq \bar{\tilde{W}} \cap \tilde{W} \subsetneq \tilde{W}$$
- If $W \subseteq V$ is a real subspace of $V$, then $W \otimes \mathbb{C}$ is a complex subspace of $V \otimes \mathbb{C}$, and we have
$$\overline{W \otimes \mathbb{C}} = W \otimes \mathbb{C}$$
- This is not difficult to verify, but we can also prove the converse.
The theory of a single linear transformation
Complexication of a real vector space

- Lemma 3.
If $\tilde{W} \subseteq V \otimes \mathbb{C}$ is a complex subspace such that $\bar{\tilde{W}} = \tilde{W}$, then there exists a subspace $W \subseteq V$ such that $\tilde{W} = W \otimes \mathbb{C}$.
- Proof: In fact, for every $u + iv \in \tilde{W}$ we have $u \in \tilde{W}$. This is because $u - iv \in \tilde{W}$, so $u = \frac{1}{2}\big((u + iv) + (u - iv)\big) \in \tilde{W}$.
- Also $-i(u + iv) = v - iu \in \tilde{W}$, so we also have $v \in \tilde{W}$.
- Therefore, if we set $W := \tilde{W} \cap V$, then $\tilde{W} \subseteq W \otimes \mathbb{C}$, and thus we get
$$\tilde{W} = W \otimes \mathbb{C}$$
The theory of a single linear transformation
Complexication of a real vector space

- Now let $u_1, \dots, u_k, v_1, \dots, v_k$ be $2k$ linearly independent vectors in $V$.
- Set
$$\tilde{W} := \mathrm{Span}\{u_1 + iv_1, \dots, u_k + iv_k\} \subseteq V \otimes \mathbb{C}$$
by which we mean that $\tilde{W}$ is the complex subspace of $V \otimes \mathbb{C}$:
$$\tilde{W} = \left\{ \sum_{j=1}^k (a_j + ib_j)(u_j + iv_j) \ \middle|\ a_j, b_j \in \mathbb{R} \text{ for } 1 \le j \le k \right\}$$
The theory of a single linear transformation
Complexication of a real vector space

- Lemma 4.
$$W_0 := \tilde{W} \cap \bar{\tilde{W}} = \{0\}$$
- Proof: By definition we have $\bar{W}_0 = W_0$, so according to the previous lemma there exists a subspace $V_0 \subseteq V$ such that $W_0 = V_0 \otimes \mathbb{C}$.
- Now suppose the complex numbers $a_1 + ib_1, \dots, a_k + ib_k \in \mathbb{C}$ are such that
$$v = \sum_{j=1}^k (a_j + ib_j)(u_j + iv_j) \in V_0$$
- Or, equivalently,
$$\sum_{j=1}^k \big[(a_j u_j - b_j v_j) + \sqrt{-1}\,(a_j v_j + b_j u_j)\big] \in V_0$$
The theory of a single linear transformation
Complexication of a real vector space

- Then, since $V_0 \subseteq V$ consists of real vectors, the imaginary part must vanish, and we must have
$$\sum_{j=1}^k (a_j v_j + b_j u_j) = 0$$
- Since the family of vectors $\{u_1, \dots, u_k, v_1, \dots, v_k\}$ is linearly independent, this implies that
$$a_j = b_j = 0 \qquad \forall\, 1 \le j \le k$$
- Thus $v = 0$, and this completes the proof.
The theory of a single linear transformation
Complexication of a real vector space
- A converse of Lemma 4 also holds, in the sense that if $\tilde{W} \subseteq V \otimes \mathbb{C}$ is a complex vector subspace satisfying
$$\tilde{W} \cap \bar{\tilde{W}} = \{0\}$$
and if $u_1 + iv_1, \dots, u_k + iv_k$ constitute a basis for $\tilde{W}$, then the family of vectors $\{u_1, \dots, u_k, v_1, \dots, v_k\}$ is linearly independent.
- This is because, due to the identity
$$\sum_{j=1}^k (a_j + ib_j)(u_j + iv_j) = \sum_{j=1}^k \big[(a_j u_j - b_j v_j) + \sqrt{-1}\,(a_j v_j + b_j u_j)\big]$$
if $\{u_1, \dots, u_k, v_1, \dots, v_k\}$ were linearly dependent, we could find $a_1 + ib_1, \dots, a_k + ib_k$, not all zero, such that
$$w = \sum_{j=1}^k (a_j + ib_j)(u_j + iv_j) \in V$$
The theory of a single linear transformation
Complexication of a real vector space

- It follows that $0 \ne w \in \tilde{W} \cap \bar{\tilde{W}}$ (since $w$ is real, $\bar{w} = w$), which is a contradiction.
The theory of a single linear transformation
Orthogonal Transformations

- Now consider an inner product vector space $(V, \langle \cdot, \cdot \rangle)$, and let $T : V \to V$ be an orthogonal transformation.
- Let $T_c : V \otimes \mathbb{C} \to V \otimes \mathbb{C}$ be the corresponding complexification of $T$.
- Let $\alpha = a + ib \in \mathbb{C}$ be a complex number with $b \ne 0$, and assume that $\alpha$ is an eigenvalue of $T_c$.
- Since the characteristic polynomials of $T_c$ and $T$ are identical (and have real coefficients), we know that $\bar{\alpha}$ is also an eigenvalue of $T_c$.
The theory of a single linear transformation
Orthogonal Transformations

- Also, since $\alpha \ne \bar{\alpha}$, the eigenspaces $W_1 = \ker(T_c - \alpha I)$ and $\bar{W}_1 = \ker(T_c - \bar{\alpha} I)$ satisfy
$$W_1 \cap \bar{W}_1 = \{0\}$$
The theory of a single linear transformation
Orthogonal Transformations

- Note that
$$T_c(u + iv) = (a + ib)(u + iv) \iff T_c(u - iv) = (a - ib)(u - iv)$$
which implies that $\ker(T_c - \bar{\alpha} I) = \bar{W}_1$.
- Now take an eigenvector $u + iv \in W_1$. Then $\{u, v\}$ is linearly independent, and we have
$$T_c(u + iv) = (a + ib)(u + iv)$$
which is equivalent to
$$Tu = au - bv, \qquad Tv = bu + av$$
- Thus the 2-dimensional real subspace $\mathrm{Span}\{u, v\} \subseteq V$ is invariant under the application of $T$.
The theory of a single linear transformation
Orthogonal Transformations

- Since $T$ is an orthogonal transformation, if we set $W_2 = \mathrm{Span}\{u, v\}$, then the orthogonal complement $W_2^\perp$ is also invariant under $T$.
- For real eigenvalues $\lambda \in \mathbb{R}$ of $T_c$, due to orthogonality one can see that $\lambda = \pm 1$. If $v \in V$ denotes the eigenvector associated to $\lambda$ and $V_1 := \mathrm{Span}\{v\}$, then we can again decompose $V$ as
$$V = V_1 \oplus V_1^\perp$$
where $V_1^\perp$ is the orthogonal complement of $V_1$. We also know that $V_1^\perp$ is invariant under $T$.
- So by induction we arrive at the following theorem.
The theory of a single linear transformation
Orthogonal Transformations

- Theorem 18.
Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product vector space and let $T : V \to V$ be an orthogonal transformation. Then there exists a decomposition
$$V = W_1 \oplus \dots \oplus W_k \oplus V_1 \oplus \dots \oplus V_l$$
into subspaces with
$$\dim W_i = 2, \quad 1 \le i \le k, \qquad \dim V_j = 1, \quad 1 \le j \le l$$
Each of the subspaces $W_i$ and $V_j$ is invariant under the application of $T$, and they are mutually orthogonal subspaces of $V$.
The theory of a single linear transformation
Orthogonal Transformations
- Corollary 5.
With the hypothesis of the previous theorem, there exists an orthonormal basis $B$ for $V$ such that the matrix $A = [T]_B$ in this basis has the form
$$\begin{pmatrix} C_1 & & & & 0 \\ & \ddots & & & \\ & & C_j & & \\ & & & \ddots & \\ 0 & & & & C_r \end{pmatrix} \qquad (22)$$
where $C_1, \dots, C_k$ are $2 \times 2$ rotation matrices
$$C_j = \begin{pmatrix} \cos\theta_j & -\sin\theta_j \\ \sin\theta_j & \cos\theta_j \end{pmatrix}, \qquad 1 \le j \le k$$
The theory of a single linear transformation
Orthogonal Transformations

- And $C_i = \pm 1$ for $k + 1 \le i \le k + l$.
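A numeric illustration of Corollary 5 (NumPy/SciPy assumed; the random orthogonal matrix is our own test case). An orthogonal matrix is normal, so its real Schur form is block diagonal with exactly the structure of (22):

```python
import numpy as np
from scipy.linalg import qr, schur

rng = np.random.default_rng(2)
Q, _ = qr(rng.standard_normal((5, 5)))   # a random 5x5 orthogonal matrix
T, Z = schur(Q, output='real')           # Q = Z @ T @ Z.T with Z orthogonal

# T is block diagonal: 2x2 blocks [[c, -s], [s, c]] and isolated entries +-1.
assert np.allclose(Z @ T @ Z.T, Q)
print(np.round(T, 3))
```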


The theory of a single linear transformation
Self-Adjoint Transformations

- Let $(V, \langle \cdot, \cdot \rangle)$ be an inner product vector space and let $T : V \to V$ be a self-adjoint linear transformation, which by definition satisfies
$$\langle Tu, v \rangle = \langle u, Tv \rangle$$
- Theorem 19.
With the above hypothesis, $T$ is diagonalizable in an orthogonal basis.
The theory of a single linear transformation
Self-Adjoint Transformations

- Proof: Let $T_c : V \otimes \mathbb{C} \to V \otimes \mathbb{C}$ be the complexification of $T$ as defined before.
- Let $p(x)$ be the characteristic polynomial of $T$, which is also the characteristic polynomial of $T_c$.
- We first claim that all the roots of $p$ are real. Consequently, both $T$ and $T_c$ have $n$ real eigenvalues (taking into account the multiplicities of these eigenvalues).
The theory of a single linear transformation
Self-Adjoint Transformations

- To see this, assume that $\alpha = a + ib$ is an eigenvalue of $T_c$, and let $u + iv \in V \otimes \mathbb{C}$ be an eigenvector associated to $\alpha$.
- So we have
$$T_c(u + iv) = (a + ib)(u + iv)$$
- By the definition of $T_c$ this is equivalent to saying that
$$Tu = au - bv, \qquad Tv = bu + av \qquad (23)$$
- This relation means that the subspace $W = \mathrm{Span}\{u, v\}$ is invariant under the application of $T$:
$$T(W) \subseteq W$$
The theory of a single linear transformation
Self-Adjoint Transformations

- If $\dim W = 1$, then there exists a real $\lambda \in \mathbb{R}$ such that $u = \lambda v$.
- In this case, by (23) we find that
$$T(\lambda v) = (a\lambda - b)v, \qquad T(v) = (a + b\lambda)v$$
- It is not difficult to see from the above equations that $b = 0$ (since $T(\lambda v) = \lambda T(v)$, comparing coefficients gives $b(\lambda^2 + 1) = 0$). So $\alpha \in \mathbb{R}$.
- If $\dim W = 2$, then we use $\langle Tu, v \rangle = \langle u, Tv \rangle$ to obtain
$$a\langle u, v \rangle - b\langle v, v \rangle = b\langle u, u \rangle + a\langle u, v \rangle$$
We can again conclude that $b = 0$.
The theory of a single linear transformation
Self-Adjoint Transformations

- Consequently, all the eigenvalues of $T$ are real. Now let $\alpha \in \mathbb{R}$ be an eigenvalue of $T$ and let $v_1 \in V$ denote an associated eigenvector:
$$Tv_1 = \alpha v_1$$
- Now let $W_0 = \mathrm{Span}\{v_1\}^\perp$. Then $W_0$ is invariant under the application of $T$.
- This is because if $w \in W_0$ then
$$\langle Tw, v_1 \rangle = \langle w, Tv_1 \rangle = \alpha \langle w, v_1 \rangle = 0$$
- Thus $Tw \in W_0$.
The theory of a single linear transformation
Self-Adjoint Transformations

- $T|_{W_0} : W_0 \to W_0$ is also self-adjoint, and by repeating the above argument we can obtain an orthogonal basis of eigenvectors for $T$, say $B = \{v_1, \dots, v_n\}$.
The theory of a single linear transformation
Polar decomposition theorem

- Theorem 20 (Polar decomposition theorem).
Consider $\mathbb{R}^n$ with its standard inner product $\langle \cdot, \cdot \rangle$. Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be an invertible linear transformation. Then there exist an orthogonal transformation $U : \mathbb{R}^n \to \mathbb{R}^n$ and a self-adjoint operator $S : \mathbb{R}^n \to \mathbb{R}^n$ such that
$$T = SU$$
The theory of a single linear transformation
Polar decomposition theorem

- Proof: We first note that if such a decomposition exists, then we can write
$$TT^t = SUU^tS^t = S^2$$
Here we are using the orthogonality of $U$ and the self-adjointness of $S$.
- Thus, in some sense, $S = \sqrt{TT^t}$. This makes sense, since $TT^t$ is a positive definite symmetric matrix.
- By our last theorem, $TT^t$ is diagonalizable in an orthonormal basis, which means that there exists an orthogonal matrix $P$ such that
$$TT^t = P^t D P$$
where $D$ is a diagonal matrix, $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, and $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $TT^t$.
The theory of a single linear transformation
Polar decomposition theorem

- Also, since $TT^t$ is non-singular and positive definite, the $\lambda_i$'s are positive.
- Thus, if we set
$$S := P^t\, \mathrm{diag}(\sqrt{\lambda_1}, \dots, \sqrt{\lambda_n})\, P$$
then we have $S^2 = TT^t$.
- Now, if we set $U := S^{-1}T$, then we can write
$$UU^t = (S^{-1}T)(T^t(S^{-1})^t) = S^{-1}(S^2)S^{-1} = I$$
- This means that $U$ is orthogonal, which completes the proof.
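A numeric sketch of the polar decomposition $T = SU$ (NumPy/SciPy assumed; the random matrix is our own test case, and we cross-check against SciPy's `polar` routine, whose `side='left'` convention matches $T = SU$):

```python
import numpy as np
from scipy.linalg import polar, sqrtm

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4))          # generically invertible

S = sqrtm(T @ T.T).real                  # the SPD square root of T T^t
U = np.linalg.solve(S, T)                # U = S^{-1} T

assert np.allclose(U @ U.T, np.eye(4))   # U is orthogonal
assert np.allclose(S @ U, T)             # T = S U

U2, S2 = polar(T, side='left')           # SciPy: T = S2 @ U2 with S2 SPD
assert np.allclose(S2, S) and np.allclose(U2, U)
```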
