Ordinary Differential Equations notes, Chapter 4: Systems of Linear Equations
where, for each $t$, $A(t) \in M_n(\mathbb{R})$ and $r(t) \in \mathbb{R}^n$. If $r \equiv 0$, then the system is called homogeneous. If not, the system is called nonhomogeneous.
The proof is left as an exercise. A consequence of Theorem 4.1.1 is that if $A$ and $r$ are continuous on $\mathbb{R}$, then the unique solution is globally defined on $\mathbb{R}$, that is, $y : \mathbb{R} \to \mathbb{R}^n$.
\[
y' = A(t)y. \tag{4.2}
\]
\[
z'(t) = \alpha_1 y_1'(t) + \alpha_2 y_2'(t) = \alpha_1 A(t) y_1(t) + \alpha_2 A(t) y_2(t) = A(t)\bigl(\alpha_1 y_1(t) + \alpha_2 y_2(t)\bigr) = A(t) z(t).
\]
Since $z, w \in S$ and $z(t_0) = w(t_0)$, by Theorem 1.2.4 (Existence and Uniqueness), $w(t) = z(t)$ for all $t \in I$. Hence
\[
z(t) = w(t) = \sum_{i=1}^{n} z_0^i \, y_i(t) \in \langle B \rangle.
\]
Hence, $S \subset \langle B \rangle$, and we conclude that $S = \langle B \rangle$, that is, $B$ spans the solution set.
Finally, assume that there exist $\alpha_1, \dots, \alpha_n$ such that $\sum_{i=1}^{n} \alpha_i y_i(t) = 0$ for all $t \in I$.
In particular, at $t = t_0$,
\[
0 = \sum_{i=1}^{n} \alpha_i y_i(t_0) = \sum_{i=1}^{n} \alpha_i e_i = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix} \implies \alpha_i = 0 \quad \text{for all } i = 1, \dots, n,
\]
and hence
\[
\det Y(t) = \det Y_0 \, e^{\int_{t_0}^{t} \operatorname{tr} A(s)\, ds}. \tag{4.4}
\]
Proof. Let
\[
Y(t) = \begin{pmatrix} R_1(t) \\ R_2(t) \\ \vdots \\ R_n(t) \end{pmatrix},
\]
where $R_j(t)$ denotes the $j$-th row of $Y(t)$. Since the determinant is a multilinear function of its rows,
\[
\frac{d}{dt}\bigl(\det Y(t)\bigr) = \sum_{i=1}^{n} \det \begin{pmatrix} R_1(t) \\ \vdots \\ \frac{d}{dt} R_i(t) \\ \vdots \\ R_n(t) \end{pmatrix}. \tag{4.5}
\]
Denoting $Y(t) = \{y_{i,j}\}_{i,j=1}^{n}$ and $A(t) = \{a_{i,k}\}_{i,k=1}^{n}$, and writing out (4.2) in matrix form (that is, $Y'(t) = A(t) Y(t)$) gives the expression
\[
\frac{d}{dt} y_{i,j}(t) = \sum_{k=1}^{n} a_{i,k}(t)\, y_{k,j}(t).
\]
Our goal is now clear: look for $n$ distinct solutions $y_1, \dots, y_n$ of (4.2) and use the Wronskian in (4.6) to show that they are linearly independent by showing that $W(t_0) \neq 0$ at a given point $t_0 \in I$. The general solution is then given by
\[
y(t) = \sum_{i=1}^{n} c_i y_i(t), \qquad c_1, \dots, c_n \in \mathbb{R}.
\]
If an initial condition $y(t_0) = y_0 = (y_0^1, y_0^2, \dots, y_0^n)$ is given, the constants $c_1, \dots, c_n \in \mathbb{R}$ are determined uniquely as follows.
Since $W(t_0) \neq 0$, the matrix is invertible and the constants $c_1, \dots, c_n \in \mathbb{R}$ are uniquely determined by
\[
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} \vdots & \vdots & & \vdots \\ y_1(t_0) & y_2(t_0) & \cdots & y_n(t_0) \\ \vdots & \vdots & & \vdots \end{pmatrix}^{-1} \begin{pmatrix} y_0^1 \\ y_0^2 \\ \vdots \\ y_0^n \end{pmatrix}.
\]
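In practice this is a single linear solve; a minimal sketch (assuming NumPy, with a helper name of our choosing):
\begin{verbatim}
# Minimal sketch (assuming NumPy): solve Y(t0) c = y0 for the constants
# c_1, ..., c_n, where the columns of Y_t0 are y_1(t0), ..., y_n(t0).
import numpy as np

def constants_from_ic(Y_t0, y0):
    # Solvable precisely because the Wronskian W(t0) = det Y(t0) is nonzero.
    if abs(np.linalg.det(Y_t0)) < 1e-12:
        raise ValueError("Wronskian vanishes at t0")
    return np.linalg.solve(Y_t0, y0)
\end{verbatim}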
\[
y' = A y \tag{4.7}
\]
where $A \in M_n(\mathbb{R})$.
We know that if $n = 1$, then $A = \lambda \in \mathbb{R}$ and the solution to (4.7) takes the form $y(t) = e^{\lambda t}$. For general $n \geq 1$, it is therefore natural to look for solutions of (4.7) in the form
\[
y(t) = e^{\lambda t} u, \tag{4.8}
\]
Since $e^{\lambda t} \neq 0$,
\[
\lambda e^{\lambda t} u = y'(t) = A y(t) = A e^{\lambda t} u \implies A u = \lambda u.
\]
Recall that
\[
\ker A = \{ u \in \mathbb{C}^n : A u = 0 \}.
\]
\[
p(\lambda) = \det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & 3 \\ 3 & 1 - \lambda \end{vmatrix} = (1 - \lambda)^2 - 9 = \lambda^2 - 2\lambda - 8 = (\lambda + 2)(\lambda - 4) = 0,
\]
and hence $\lambda_1 := -2$ and $\lambda_2 := 4$, that is,
\[
\sigma(A) = \{-2, 4\}.
\]
Let us find eigenvectors $u_1$ and $u_2$ associated to $\lambda_1$ and $\lambda_2$, respectively. To find $u_1 = (a, b)^T$, we solve
\[
(A - \lambda_1 I) u_1 = (A + 2I) u_1 = \begin{pmatrix} 3 & 3 \\ 3 & 3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies a = -b.
\]
Similarly,
\[
(A - \lambda_2 I) u_2 = (A - 4I) u_2 = \begin{pmatrix} -3 & 3 \\ 3 & -3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies a = b.
\]
Hence $u_1 = (1, -1)^T$ and $u_2 = (1, 1)^T$, and
\[
y_1(t) = e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \qquad \text{and} \qquad y_2(t) = e^{4t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\]
are two solutions of (4.10). To verify that they are linearly independent, we compute their Wronskian at $t = 0$:
\[
W(0) = \det\bigl( y_1(0) \;\; y_2(0) \bigr) = \begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} = 2 \neq 0.
\]
By Proposition 4.1.3, $y_1(t)$ and $y_2(t)$ are two linearly independent solutions of (4.10). By Lemma 4.1.2, the solution set $S$ of (4.10) (recall (4.3)) is two-dimensional. Hence, we conclude that the general solution of (4.10) is given by
\[
y(t) = c_1 y_1(t) + c_2 y_2(t) = c_1 e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{4t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c_1, c_2 \in \mathbb{R}.
\]
For instance, the constants of the unique solution with initial condition $y(0) = (1, 2)^T$ must satisfy
\[
y(0) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix},
\]
and therefore
\[
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -\tfrac{1}{2} \\ \tfrac{3}{2} \end{pmatrix}.
\]
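A short numerical check of this example (assuming NumPy):
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])
lam, _ = np.linalg.eig(A)
print(np.sort(lam))                        # [-2.  4.]

Y0 = np.array([[1.0, 1.0], [-1.0, 1.0]])   # columns u1 = (1,-1)^T, u2 = (1,1)^T
print(np.linalg.solve(Y0, [1.0, 2.0]))     # [-0.5  1.5], i.e. c1 = -1/2, c2 = 3/2
\end{verbatim}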
Hence,
\[
\ker(A - 4I) = \left\langle \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} \right\rangle,
\]
and therefore $u_2 := (1, 1, 0)^T$ and $u_3 := (-2, 0, 1)^T$ are two eigenvectors associated to $\lambda = 4$.
By construction,
\[
y_1(t) := e^{-2t} \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}, \qquad y_2(t) := e^{4t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \qquad \text{and} \qquad y_3(t) := e^{4t} \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}
\]
are three solutions of (4.11). To verify that they are linearly independent, we compute their Wronskian at $t = 0$:
\[
W(0) = \det\bigl( y_1(0) \;\; y_2(0) \;\; y_3(0) \bigr) = \begin{vmatrix} 1 & 1 & -2 \\ 1 & 1 & 0 \\ 3 & 0 & 1 \end{vmatrix} = 6 \neq 0.
\]
By Proposition 4.1.3, $y_1(t)$, $y_2(t)$ and $y_3(t)$ are three linearly independent solutions of (4.11). By Lemma 4.1.2, the solution set $S$ of (4.11) (recall (4.3)) is three-dimensional. We conclude that the general solution of (4.11) is given by
\[
y(t) = c_1 y_1(t) + c_2 y_2(t) + c_3 y_3(t) = c_1 e^{-2t} \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix} + c_2 e^{4t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c_3 e^{4t} \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}, \qquad c_1, c_2, c_3 \in \mathbb{R},
\]
where the constants $c_1, c_2, c_3 \in \mathbb{R}$ can be determined by fixing an initial condition $y(t_0) = y_0 \in \mathbb{R}^3$. For instance, the unique solution $y(t)$ of (4.11) satisfying $y(0) = (1, 0, 0)^T$ must satisfy
\[
y(0) = c_1 \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & -2 \\ 1 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\]
that is,
\[
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 & 1 & -2 \\ 1 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{6} \\ -\tfrac{1}{6} \\ -\tfrac{1}{2} \end{pmatrix}.
\]
Hence the unique solution $y(t)$ of (4.11) satisfying $y(0) = (1, 0, 0)^T$ is given by
\[
y(t) = \begin{pmatrix} \tfrac{1}{6} e^{-2t} - \tfrac{1}{6} e^{4t} + e^{4t} \\ \tfrac{1}{6} e^{-2t} - \tfrac{1}{6} e^{4t} \\ \tfrac{1}{2} e^{-2t} - \tfrac{1}{2} e^{4t} \end{pmatrix} = \begin{pmatrix} \tfrac{1}{6} e^{-2t} + \tfrac{5}{6} e^{4t} \\ \tfrac{1}{6} e^{-2t} - \tfrac{1}{6} e^{4t} \\ \tfrac{1}{2} e^{-2t} - \tfrac{1}{2} e^{4t} \end{pmatrix}.
\]
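The matrix $A$ of this example is not restated in these notes, but it can be rebuilt from its eigen-decomposition $A = P D P^{-1}$ and the solution checked against the matrix exponential; a sketch, assuming NumPy/SciPy:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

P = np.array([[1.0, 1.0, -2.0],
              [1.0, 1.0,  0.0],
              [3.0, 0.0,  1.0]])            # eigenvectors as columns
lam = np.array([-2.0, 4.0, 4.0])
A = P @ np.diag(lam) @ np.linalg.inv(P)     # A = P D P^{-1}

t = 0.3
c = np.array([1/6, -1/6, -1/2])             # constants found above
y_formula = P @ (c * np.exp(lam * t))       # sum_i c_i e^{lam_i t} u_i
y0 = np.array([1.0, 0.0, 0.0])
print(np.allclose(y_formula, expm(A * t) @ y0))   # True
\end{verbatim}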
we get that
\[
P^{-1} A P = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.
\]
Proposition 4.2.7. Assume $\sigma(A) = \{\lambda_1, \dots, \lambda_n\}$ with the $\lambda_i$ real and distinct. Denote by $u_1, \dots, u_n$ the corresponding $n$ real-valued eigenvectors. Then $A$ is diagonalisable and the linear system $y' = Ay$ has $n$ linearly independent real-valued solutions $y_i(t) = e^{\lambda_i t} u_i$ ($i = 1, \dots, n$). The general solution is then given by
\[
y(t) = \sum_{i=1}^{n} c_i y_i(t) = \sum_{i=1}^{n} c_i e^{\lambda_i t} u_i, \qquad c_1, \dots, c_n \in \mathbb{R}.
\]
Proof. By Theorem 4.2.8, there exists a set $\{u_1, \dots, u_n\}$ of $n$ linearly independent (possibly complex-valued) eigenvectors of $A$. For $i = 1, \dots, n$, let $\lambda_i$ be the eigenvalue associated to
When $A$ is diagonalisable, a natural question arises: what happens when it has complex eigenvalues? Complex eigenvalues always come in conjugate pairs (after all, they are roots of a polynomial with real coefficients). Indeed, if $\alpha + i\beta$ is an eigenvalue associated to the eigenvector $u + iv$, then $\alpha - i\beta$ is an eigenvalue associated to the eigenvector $u - iv$.
Now,
\[
y(t) := e^{(\alpha + i\beta)t} (u + iv) = y_1(t) + i\, y_2(t) := e^{\alpha t} \bigl( \cos(\beta t) u - \sin(\beta t) v \bigr) + i\, e^{\alpha t} \bigl( \sin(\beta t) u + \cos(\beta t) v \bigr)
\]
is a complex-valued solution of $y' = Ay$. However, $y_1(t) = e^{\alpha t} (\cos(\beta t) u - \sin(\beta t) v)$ and $y_2(t) = e^{\alpha t} (\sin(\beta t) u + \cos(\beta t) v)$ are two real solutions of $y' = Ay$, as
\[
y'(t) = y_1'(t) + i\, y_2'(t) = A y(t) = A \bigl( y_1(t) + i\, y_2(t) \bigr) = A y_1(t) + i\, A y_2(t),
\]
and therefore $y_1'(t) = A y_1(t)$ and $y_2'(t) = A y_2(t)$.
Lemma 4.2.11. Let $u, v \in \mathbb{R}^n$ be such that $u \neq 0$ or $v \neq 0$. If $\beta \neq 0$, then
\[
y_1(t) = e^{\alpha t} (\cos(\beta t) u - \sin(\beta t) v) \qquad \text{and} \qquad y_2(t) = e^{\alpha t} (\sin(\beta t) u + \cos(\beta t) v)
\]
are two linearly independent functions on $\mathbb{R}$.
Proof. Assume the opposite, that is, there exists $c \neq 0$ such that $y_1(t) = c\, y_2(t)$ for all $t \in \mathbb{R}$. Dividing by $e^{\alpha t}$ leads to
\[
\cos(\beta t) u - \sin(\beta t) v = c \bigl( \sin(\beta t) u + \cos(\beta t) v \bigr), \qquad \forall\, t \in \mathbb{R}.
\]
In particular, at $t = 0$, we get that $u = c v$. Differentiating leads to
\[
\beta \bigl( -\sin(\beta t) u - \cos(\beta t) v \bigr) = c\, \beta \bigl( \cos(\beta t) u - \sin(\beta t) v \bigr), \qquad \forall\, t \in \mathbb{R},
\]
and in particular at $t = 0$, since $\beta \neq 0$, we get
\[
-v = c u = c^2 v \implies (c^2 + 1) v = 0 \implies u = v = 0,
\]
which is a contradiction.
Let
\[
y_1(t) := e^{at} \begin{pmatrix} \cos bt \\ -\sin bt \\ 0 \end{pmatrix} \qquad \text{and} \qquad y_2(t) := e^{at} \begin{pmatrix} \sin bt \\ \cos bt \\ 0 \end{pmatrix}
\]
be the real and the imaginary parts of $e^{(a+ib)t} w$, respectively. By Lemma 4.2.11, $y_1$ and $y_2$ are linearly independent. We conclude that
\[
\ker(A + 2iI) = \left\langle \begin{pmatrix} 8 + 6i \\ 19 + 7i \\ -12 + 14i \\ 10 \end{pmatrix} \right\rangle, \qquad
\ker(A - 2I) = \left\langle \begin{pmatrix} 2 \\ 3 \\ 2 \\ 4 \end{pmatrix} \right\rangle, \qquad
\ker(A + 6I) = \left\langle \begin{pmatrix} 2 \\ 1 \\ 2 \\ 0 \end{pmatrix} \right\rangle.
\]
This gives us four linearly independent complex-valued solutions:
\[
e^{2it} \begin{pmatrix} 8 - 6i \\ 19 - 7i \\ -12 - 14i \\ 10 \end{pmatrix}, \qquad e^{-2it} \begin{pmatrix} 8 + 6i \\ 19 + 7i \\ -12 + 14i \\ 10 \end{pmatrix}, \qquad e^{2t} \begin{pmatrix} 2 \\ 3 \\ 2 \\ 4 \end{pmatrix} \qquad \text{and} \qquad e^{-6t} \begin{pmatrix} 2 \\ 1 \\ 2 \\ 0 \end{pmatrix}.
\]
Taking the real and the imaginary parts of the first complex-valued solution yields two
linearly independent real-valued solutions
\[
y_1(t) = \begin{pmatrix} 8 \cos 2t + 6 \sin 2t \\ 19 \cos 2t + 7 \sin 2t \\ -12 \cos 2t + 14 \sin 2t \\ 10 \cos 2t \end{pmatrix} \qquad \text{and} \qquad y_2(t) = \begin{pmatrix} -6 \cos 2t + 8 \sin 2t \\ -7 \cos 2t + 19 \sin 2t \\ -14 \cos 2t - 12 \sin 2t \\ 10 \sin 2t \end{pmatrix}.
\]
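In code, producing the two real solutions from a complex eigenpair amounts to taking real and imaginary parts; a sketch (assuming NumPy, with a helper name of our choosing):
\begin{verbatim}
import numpy as np

def real_solutions_from_pair(lam, w):
    # y1(t), y2(t): real and imaginary parts of the complex solution e^{lam t} w.
    y1 = lambda t: np.real(np.exp(lam * t) * w)
    y2 = lambda t: np.imag(np.exp(lam * t) * w)
    return y1, y2

# First complex solution above: lam = 2i, w = (8-6i, 19-7i, -12-14i, 10)^T.
y1, y2 = real_solutions_from_pair(2j, np.array([8-6j, 19-7j, -12-14j, 10]))
print(y1(0.0))    # [  8.  19. -12.  10.], the cosine coefficients at t = 0
\end{verbatim}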
Definition 4.2.14. Let $A \in M_n(\mathbb{R})$ and let $p(\lambda) = \det(A - \lambda I)$ be its associated characteristic polynomial. Let $\lambda_0 \in \sigma(A)$, that is, $\lambda_0$ is such that $p(\lambda_0) = 0$.
(a) The algebraic multiplicity of $\lambda_0$ is defined as the largest integer $m \geq 1$ such that $(\lambda - \lambda_0)^m$ divides the characteristic polynomial $p(\lambda)$.

(b) The geometric multiplicity of $\lambda_0$ is defined as $\dim\bigl( \ker(A - \lambda_0 I) \bigr)$.
Definition 4.2.15. Let $A \in M_n(\mathbb{R})$. If $\lambda \in \sigma(A)$ has algebraic multiplicity $m$, we say that $\lambda$ is defective if
\[
\dim\bigl( \ker(A - \lambda I) \bigr) < m, \tag{4.13}
\]
that is, the geometric multiplicity of $\lambda$ is strictly less than its algebraic multiplicity. We define the defect of an eigenvalue $\lambda$ by
\[
d := m - \dim\bigl( \ker(A - \lambda I) \bigr) \geq 0. \tag{4.14}
\]
with $p(\lambda) = (\lambda + 2)(\lambda - 4)^2$, that is, with eigenvalues $-2$ and $4$. Hence, the algebraic multiplicity of $4$ is two. However, we showed that $\dim(\ker(A - 4I)) = 2$ and so $\lambda = 4$ is not defective: it has defect equal to $0$.
Definition 4.2.18. If $\lambda$ is an eigenvalue of $A \in M_n(\mathbb{R})$ with algebraic multiplicity $m$, we say that a vector $u$ is a generalized eigenvector associated to $\lambda$ if
\[
(A - \lambda I)^m u = 0.
\]
The following fundamental result from linear algebra is crucial for our understanding
of defective eigenvalues. Its technical linear algebra proof is omitted.
Theorem 4.2.19. Let $\lambda \in \sigma(A)$ with algebraic multiplicity $m \geq 1$. Then
\[
\dim\bigl( \ker (A - \lambda I)^m \bigr) = m.
\]
Hence, if $\lambda \in \sigma(A)$ has algebraic multiplicity $m > 1$ and is defective with defect $d > 0$, then the previous result shows that it is always possible to construct $d$ generalized eigenvectors associated to $\lambda$. To present this construction, it is convenient to introduce the following notation.
\[
K_j := \ker\bigl( (A - \lambda I)^j \bigr), \qquad j = 0, \dots, m.
\]
Note that $K_0 = \ker\bigl( (A - \lambda I)^0 \bigr) = \ker I = \{0\}$ and that $K_1 = \ker(A - \lambda I)$ corresponds to the set of standard eigenvectors. Moreover, for all $j = 1, \dots, m$,
\[
(A - \lambda I) : K_j \to K_{j-1},
\]
\[
K_j = K_{j+1} = K_{j+2} = \cdots = K_m.
\]
Proof. Assume that $K_j = K_{j+1}$. We now show that $K_{j+1} = K_{j+2}$. From (4.15), we get that $K_{j+1} \subset K_{j+2}$. It remains to show that $K_{j+2} \subset K_{j+1}$. Let $u \in K_{j+2} = \ker\bigl( (A - \lambda I)^{j+2} \bigr)$ and let $v = (A - \lambda I) u$. Then
\[
(A - \lambda I)^{j+1} v = (A - \lambda I)^{j+2} u = 0,
\]
so $v \in K_{j+1} = K_j$. Therefore, since $v \in K_j$,
\[
(A - \lambda I)^{j+1} u = (A - \lambda I)^{j} v = 0,
\]
that is, $u \in K_{j+1}$. We conclude that $K_{j+2} \subset K_{j+1}$ and hence that $K_{j+1} = K_{j+2}$. The proof follows by induction.
\[
\{0\} = K_0 \subsetneq K_1 \subsetneq K_2 \subsetneq \cdots \subsetneq K_{j-1} \subsetneq K_j = K_{j+1} = \cdots = K_m.
\]
The proof of Lemma 4.2.20 indicates the procedure to follow to construct all the generalized eigenvectors. Suppose that for a given $i \geq 1$ we have computed a basis for the subspace $K_i$ consisting of $k_i < m$ generalized eigenvectors. Since $k_i \neq m$, we cannot have $K_i = K_{i+1}$, that is, $K_i \subsetneq K_{i+1}$. Hence, there exists a vector $u \in K_{i+1} \setminus K_i$. Denote
\[
w := (A - \lambda I) u \in K_i.
\]
In practice, one picks a known $w \in K_i$ and produces such a $u$ by solving
\[
(A - \lambda I) u = w.
\]
Let us find a generalized eigenvector. Following the above procedure, we look for $u \in K_2 \setminus K_1$ such that
\[
(A - \lambda I) u = w,
\]
where $w \in K_1 \setminus K_0$. The only such vector (up to scaling) is $w = (1, 0)^T$. Hence, we look for $u = (a, b)^T$ such that
\[
(A - \lambda I) u = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = w.
\]
Hence $b = 1$ and $u = (0, 1)^T$. We conclude that $u = (0, 1)^T$ is the missing generalized eigenvector. Moreover,
\[
K_0 = \{0\} \subsetneq K_1 = \left\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\rangle \subsetneq K_2 = \left\langle \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\rangle.
\]
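The same chain can be confirmed numerically; a sketch (assuming NumPy/SciPy), with $N = A - \lambda I$ denoting the nilpotent part of this example:
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

N = np.array([[0.0, 1.0], [0.0, 0.0]])    # N = A - lam*I for this example
print(null_space(N).shape[1])              # 1: K1 is spanned by (1, 0)^T
print(null_space(N @ N).shape[1])          # 2: N^2 = 0, so K2 = R^2

# Solving N u = w with w = (1, 0)^T recovers the generalized eigenvector:
u, *_ = np.linalg.lstsq(N, np.array([1.0, 0.0]), rcond=None)
print(u)                                   # [0. 1.], i.e. u = (0, 1)^T
\end{verbatim}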
with defect $d = 2$. Let us find two generalized eigenvectors. We also showed that $K_1 = \ker(A)$ is spanned by $v_1 := (1, -1, 1, 1)^T \in K_1 \setminus K_0$. Let us find $u \in K_2 \setminus K_1$ such that $A u = v_1$, that is,
\[
\begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & -2 & -1 & -1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}.
\]
This leads to
\[
b + d = 1, \qquad -2b - c - d = -1, \qquad a + b = 1,
\]
and therefore
\[
\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} 1 - b \\ b \\ -b \\ 1 - b \end{pmatrix} = b \begin{pmatrix} -1 \\ 1 \\ -1 \\ -1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
\]
We choose the particular solution
\[
u = v_2 := \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \in K_2 \setminus K_1.
\]
Let us now find $u \in K_3 \setminus K_2$ such that $A u = v_2$, that is,
\[
\begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & -2 & -1 & -1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
\]
This leads to
\[
b + d = 1, \qquad -2b - c - d = 0, \qquad a + b = 0,
\]
and therefore
\[
\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} d - 1 \\ 1 - d \\ d - 2 \\ d \end{pmatrix} = d \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix} + \begin{pmatrix} -1 \\ 1 \\ -2 \\ 0 \end{pmatrix}.
\]
We choose the particular solution
\[
u = v_3 := \begin{pmatrix} -1 \\ 1 \\ -2 \\ 0 \end{pmatrix} \in K_3 \setminus K_2.
\]
Hence
\[
K_1 = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix} \right\rangle \subsetneq K_2 = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \right\rangle \subsetneq K_3 = \ker(A^3) = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ -2 \\ 0 \end{pmatrix} \right\rangle.
\]
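The strict chain $K_1 \subsetneq K_2 \subsetneq K_3$ can be confirmed by computing ranks of powers of $A$ (the matrix of this example, which is written out again in the next section); a sketch, assuming NumPy:
\begin{verbatim}
import numpy as np

A = np.array([[0.0,  1.0,  0.0,  1.0],
              [0.0, -2.0, -1.0, -1.0],
              [1.0,  1.0,  0.0,  0.0],
              [0.0,  1.0,  0.0,  1.0]])

for j in (1, 2, 3):
    Aj = np.linalg.matrix_power(A, j)
    print(4 - np.linalg.matrix_rank(Aj))   # dim K_j: prints 1, 2, 3
\end{verbatim}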
Now that we have developed a general approach to compute generalized eigenvectors,
we must find a way to construct solutions to y 0 = Ay with them. It is not surprising that
this construction involves once again the exponential.
where $A \in M_n(\mathbb{R})$.

Lemma 4.2.23. For all $x \in \mathbb{R}^n$ and for any $A \in M_n(\mathbb{R})$,
\[
\|A x\| \leq \|A\| \, \|x\|. \tag{4.17}
\]
Moreover, for any $A, B \in M_n(\mathbb{R})$,
\[
\|A B\| \leq \|A\| \, \|B\|. \tag{4.18}
\]
Proof. The proof is left as an exercise.
Proposition 4.2.24. Let $\{A_k\}_{k \geq 0} \subset M_n(\mathbb{R})$ be such that
\[
\sum_{k=0}^{\infty} \|A_k\| < \infty.
\]
Then, for each $x \in \mathbb{R}^n$, the vector series converges (in norm) and the resulting function
\[
x \mapsto \sum_{k=0}^{\infty} A_k x \tag{4.19}
\]
is an element of $M_n(\mathbb{R})$.
Proof. Fix $x \in \mathbb{R}^n$ and consider the sequence of partial sums $\{S_j\}_{j \geq 0}$ defined by
\[
S_j := \sum_{k=0}^{j} \|A_k x\|.
\]
By (4.17),
\[
S_j \leq \sum_{k=0}^{j} \|A_k\| \, \|x\| \leq \|x\| \sum_{k=0}^{\infty} \|A_k\| < \infty,
\]
which shows that $\{S_j\}_{j \geq 0}$ is an increasing sequence of real numbers which is bounded, hence convergent. Hence, the vector series converges in norm. The linearity of the function defined in (4.19) follows from the properties of series.
Hence,
\[
\sum_{k=0}^{\infty} \frac{1}{k!} \|A^k\| = \lim_{j \to \infty} \sum_{k=0}^{j} \frac{1}{k!} \|A^k\| \leq e^{\|A\|}.
\]
By Proposition 4.2.24, the function
\[
x \mapsto \sum_{k=0}^{\infty} \frac{1}{k!} A^k x
\]
is well defined, and it satisfies
\[
\| e^{A} \| \leq e^{\|A\|}.
\]
The identity $e^{A+B} = e^A e^B$ holds for commuting matrices. This being said, the partial sum (of order $m$) of $e^{A+B}$ is given by
\[
S_m^{A+B} := \sum_{k=0}^{m} \frac{1}{k!} (A + B)^k = \sum_{k=0}^{m} \frac{1}{k!} \sum_{j=0}^{k} \frac{k!}{j!(k-j)!} A^j B^{k-j} = \sum_{k=0}^{m} \sum_{j=0}^{k} \frac{1}{j!(k-j)!} A^j B^{k-j}.
\]
Moreover, letting
\[
S_m^{A} := \sum_{k=0}^{m} \frac{1}{k!} A^k \qquad \text{and} \qquad S_m^{B} := \sum_{k=0}^{m} \frac{1}{k!} B^k,
\]
we get from the Cauchy product formula that
\[
S_m^{A} S_m^{B} = \sum_{k=0}^{2m} \sum_{\substack{k_1 + k_2 = k \\ 0 \leq k_1, k_2 \leq m}} \frac{1}{k_1! \, k_2!} A^{k_1} B^{k_2}.
\]
However,
\[
\sum_{k=0}^{m} \sum_{k_1 + k_2 = k} \frac{1}{k_1! \, k_2!} A^{k_1} B^{k_2} = \sum_{k=0}^{m} \sum_{j=0}^{k} \frac{1}{j!(k-j)!} A^j B^{k-j} = S_m^{A+B},
\]
and therefore
\[
S_m^{A} S_m^{B} - S_m^{A+B} = \sum_{k=m+1}^{2m} \sum_{\substack{k_1 + k_2 = k \\ 0 \leq k_1, k_2 \leq m}} \frac{1}{k_1! \, k_2!} A^{k_1} B^{k_2}.
\]
Hence
\[
\| S_m^{A} S_m^{B} - S_m^{A+B} \| \leq \sum_{k=m+1}^{2m} \sum_{\substack{k_1 + k_2 = k \\ 0 \leq k_1, k_2 \leq m}} \frac{1}{k_1! \, k_2!} \|A\|^{k_1} \|B\|^{k_2},
\]
which yields
\[
\| S_m^{A} S_m^{B} - S_m^{A+B} \| \leq S_m^{\|A\|} S_m^{\|B\|} - S_m^{\|A\|+\|B\|}.
\]
Since $e^{\|A\|+\|B\|} = e^{\|A\|} e^{\|B\|}$, we conclude that the right-hand side of the last inequality tends to $0$ as $m$ tends to $\infty$.
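The identity is easy to test numerically; the sketch below (assuming NumPy/SciPy) uses two polynomials in the same matrix, which always commute, together with a non-commuting pair for contrast:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

M = np.array([[1.0, 2.0], [0.0, 3.0]])
A, B = 2.0 * M, M @ M - np.eye(2)                      # AB = BA
print(np.allclose(expm(A + B), expm(A) @ expm(B)))     # True

C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])                 # CD != DC
print(np.allclose(expm(C + D), expm(C) @ expm(D)))     # False
\end{verbatim}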
Proposition 4.2.26. The matrix-valued function defined in (4.21) satisfies the following properties:

(a) $e^{A \cdot 0} = I$.

(d) $\dfrac{d}{dt} e^{At} = A e^{At} = e^{At} A$.
Proof. Part (a) follows by plugging $t = 0$ into (4.21). The proof of (b) follows by observing that the matrices $As$ and $At$ commute (for any $s, t \in \mathbb{R}$) and by applying Proposition 4.2.27. Part (c) follows by plugging $s = -t$ into (b) and then using part (a). Finally, to prove part (d), we can show that the function $e^{At}$ converges uniformly on any compact set of time,
Similarly, we obtain that $\frac{d}{dt} e^{At} = e^{At} A$.
Part (d) of Proposition 4.2.28 implies that $X(t) := e^{At}$ satisfies
\[
\frac{d}{dt} X(t) = A X(t),
\]
that is, $X(t) = e^{At}$ is the unique fundamental matrix solution of $y' = Ay$ such that $X(0) = I$. For that reason, the matrix exponential $e^{At}$ is called the principal fundamental matrix solution of $y' = Ay$. The following result justifies the name principal.
is given by
\[
y(t) = e^{At} y_0. \tag{4.22}
\]
Proof. Since $e^{At}$ is a fundamental matrix solution, its columns form a basis of the solution set $S$, and therefore for any solution $y$ of $y' = Ay$ there exists $c = (c_1, \dots, c_n)^T \in \mathbb{R}^n$ such that $y(t) = e^{At} c$. Hence (4.22) is the general solution. In particular, the unique solution $y$ satisfying $y(0) = y_0$ is the one for which $y(0) = e^{A \cdot 0} c = I c = c = y_0$.
Remark 4.2.28. For any two fundamental matrix solutions $X(t)$ and $Y(t)$, there exists an invertible $C \in M_n(\mathbb{R})$ such that $X(t) = Y(t) C$. For instance, if $X(t) = e^{At}$, then $e^{At} = Y(t) C$, and in particular $I = e^{A \cdot 0} = Y(0) C$, which implies that $C = Y(0)^{-1}$. A consequence is that, given any fundamental matrix solution $Y(t)$, the matrix exponential $e^{At}$ can be recovered from it using the formula
\[
e^{At} = Y(t) \, Y(0)^{-1}. \tag{4.23}
\]
Example 4.2.29. Consider $A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix}$ as in Example 4.2.3, where we verified that
\[
y_1(t) = \begin{pmatrix} e^{-2t} \\ -e^{-2t} \end{pmatrix} \qquad \text{and} \qquad y_2(t) = \begin{pmatrix} e^{4t} \\ e^{4t} \end{pmatrix}
\]
are two linearly independent solutions of $y' = Ay$. Hence
\[
Y(t) := \begin{pmatrix} e^{-2t} & e^{4t} \\ -e^{-2t} & e^{4t} \end{pmatrix}
\]
is a fundamental matrix solution. Note that $Y(t)$ is not equal to $e^{At}$ since $Y(0) \neq I$. To recover $e^{At}$ from $Y(t)$ we use (4.23). Note that
\[
Y(0)^{-1} = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}^{-1} = \frac{1}{2} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix},
\]
and hence
\[
e^{At} = Y(t) \, Y(0)^{-1} = \begin{pmatrix} e^{-2t} & e^{4t} \\ -e^{-2t} & e^{4t} \end{pmatrix} \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} e^{-2t} + e^{4t} & -e^{-2t} + e^{4t} \\ -e^{-2t} + e^{4t} & e^{-2t} + e^{4t} \end{pmatrix}
\]
is the matrix exponential function of $A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix}$.
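Formula (4.23) can be verified numerically for this example; a sketch, assuming NumPy/SciPy:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0], [3.0, 1.0]])
Y = lambda t: np.array([[ np.exp(-2*t), np.exp(4*t)],
                        [-np.exp(-2*t), np.exp(4*t)]])

t = 0.7
print(np.allclose(expm(A * t), Y(t) @ np.linalg.inv(Y(0))))   # True
\end{verbatim}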
Formula (4.22) provides a beautiful and powerful construction of the general solution of a homogeneous linear system with constant coefficients. This being said, in practice, computing the matrix exponential function $e^{At}$ may be difficult. However, we can combine $e^{At}$ together with the notion of generalized eigenvectors as introduced in Section 4.2.1 to build the solution set of $y' = Ay$ explicitly.
\[
\begin{aligned}
y(t) &= e^{At} u \\
&= e^{\lambda I t} e^{(A - \lambda I) t} u \\
&= e^{\lambda t} I \, e^{(A - \lambda I) t} u \\
&= e^{\lambda t} \left( \sum_{k=0}^{\infty} \frac{1}{k!} t^k (A - \lambda I)^k \right) u \\
&= e^{\lambda t} \left( \sum_{k=0}^{j} \frac{1}{k!} t^k (A - \lambda I)^k \right) u,
\end{aligned}
\]
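Since the series terminates, the solution generated by a generalized eigenvector is computable with a finite loop; a sketch (assuming NumPy, with a function name of our choosing), applied here to Example 4.3.2 below:
\begin{verbatim}
import numpy as np

def solution_from_generalized(A, lam, u, j, t):
    # y(t) = e^{lam t} sum_{k=0}^{j} t^k/k! (A - lam I)^k u
    N = A - lam * np.eye(A.shape[0])
    acc = np.zeros_like(u, dtype=float)
    term = u.astype(float)                 # k = 0 term
    for k in range(j + 1):
        acc += term
        term = (t / (k + 1)) * (N @ term)  # next term of the finite sum
    return np.exp(lam * t) * acc

A = np.array([[2.0, 4.0], [-1.0, -2.0]])   # Example 4.3.2: lam = 0, u2 = (-1, 0)^T
print(solution_from_generalized(A, 0.0, np.array([-1.0, 0.0]), 1, 2.0))
# [-5.  2.], i.e. u2 + 2 u1 with u1 = (-2, 1)^T
\end{verbatim}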
\[
y_i(t) = e^{At} u_i.
\]
satisfies
\[
W(0) = \det \begin{pmatrix} \vdots & \vdots & & \vdots \\ u_1 & u_2 & \cdots & u_n \\ \vdots & \vdots & & \vdots \end{pmatrix} \neq 0,
\]
then we conclude that $\{y_1(t), \dots, y_n(t)\}$ is a set of linearly independent solutions of $y' = Ay$, that is, it forms a basis for the solution set $S$. The proof follows.
Example 4.3.2. Let $A = \begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix}$, which has characteristic polynomial
\[
p(\lambda) = (2 - \lambda)(-2 - \lambda) + 4 = \lambda^2 = 0,
\]
that is, $\lambda = 0 \in \sigma(A)$ has algebraic multiplicity $m = 2$. An associated eigenvector $u_1 = (a, b)^T$ satisfies
\[
\begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies a = -2b.
\]
Choosing $b = 1$ leads to $u_1 := (-2, 1)^T$, that is,
\[
K_1 = \ker A = \left\langle \begin{pmatrix} -2 \\ 1 \end{pmatrix} \right\rangle,
\]
which implies that $\lambda = 0$ is defective with defect $d = 1$. To find a generalized eigenvector $u_2 = (a, b)^T \in K_2 \setminus K_1$, we solve $A u_2 = u_1$, that is,
\[
\begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -2 \\ 1 \end{pmatrix} \implies a = -2b - 1 \implies u_2 = \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -2b - 1 \\ b \end{pmatrix} = b \begin{pmatrix} -2 \\ 1 \end{pmatrix} + \begin{pmatrix} -1 \\ 0 \end{pmatrix}.
\]
We choose $u_2 := (-1, 0)^T$ so that
\[
K_2 = \ker A^2 = \left\langle \begin{pmatrix} -2 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \end{pmatrix} \right\rangle.
\]
Hence,
\[
V = \{u_1, u_2\} = \left\{ \begin{pmatrix} -2 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \end{pmatrix} \right\}
\]
forms a set of two linearly independent generalized eigenvectors of A. Let
\[
y_1(t) := e^{At} u_1 = e^{\lambda t} u_1 = u_1
\]
and
\[
y_2(t) := e^{At} u_2 = e^{\lambda t} e^{(A - \lambda I)t} u_2 = \sum_{k=0}^{\infty} \frac{1}{k!} t^k A^k u_2 = u_2 + t A u_2 = u_2 + t u_1,
\]
since $\lambda = 0$ and $A^2 u_2 = A u_1 = 0$.
Similarly, for the earlier example with $\lambda = 1$, $u_1 = (1, 0)^T$ and $u_2 = (0, 1)^T$, so that $(A - I) u_2 = u_1$ and $(A - I)^2 u_2 = 0$, we obtain
\[
y_1(t) := e^{At} u_1 = e^{\lambda t} u_1 = e^t u_1
\]
and
\[
y_2(t) := e^{At} u_2 = e^{\lambda t} e^{(A - I)t} u_2 = e^t \sum_{k=0}^{\infty} \frac{1}{k!} t^k (A - I)^k u_2 = e^t \bigl( u_2 + t (A - I) u_2 \bigr) = e^t (u_2 + t u_1).
\]
\[
y(t) = c_1 y_1(t) + c_2 y_2(t) = c_1 e^t u_1 + c_2 e^t (u_2 + t u_1) = c_1 \begin{pmatrix} e^t \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} t e^t \\ e^t \end{pmatrix}, \qquad c_1, c_2 \in \mathbb{R}.
\]
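The data in this last computation are consistent with the Jordan block $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ (an assumption on our part, as the matrix is not restated above). For it, $(A - I)^2 = 0$, so $e^{At} = e^t (I + t(A - I))$ exactly; a short check, assuming NumPy/SciPy:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [0.0, 1.0]])     # assumed matrix for this example
t = 0.8
lhs = expm(A * t)
rhs = np.exp(t) * (np.eye(2) + t * (A - np.eye(2)))
print(np.allclose(lhs, rhs))               # True
print(lhs @ np.array([0.0, 1.0]))          # (t e^t, e^t)^T, the solution y2(t)
\end{verbatim}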
Consider again
\[
A = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & -2 & -1 & -1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix},
\]
for which we showed that
\[
K_1 = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix} \right\rangle \subsetneq K_2 = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \right\rangle \subsetneq K_3 = \ker(A^3) = \left\langle \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ -2 \\ 0 \end{pmatrix} \right\rangle.
\]
Denote by $u_1 := (1, -1, 1, 1)^T$ the (only) eigenvector associated to $\lambda_1 = 0$, and by $u_2 := (1, 0, 0, 1)^T$ and $u_3 := (-1, 1, -2, 0)^T$ the generalized eigenvectors associated to $\lambda_1 = 0$, chosen such that $A u_2 = u_1$ and $A u_3 = u_2$.
\[
z'(t) = y_2'(t) - y_1'(t) = A y_2(t) + r(t) - \bigl( A y_1(t) + r(t) \bigr) = A \bigl( y_2(t) - y_1(t) \bigr) = A z(t),
\]
that is, $z$ is a solution of $y' = Ay$. Hence, there exists $c \in \mathbb{R}^n$ such that $z(t) = Y(t) c$, for $Y(t)$ any fundamental matrix solution of $y' = Ay$. In other words,
\[
y(t) = Y(t) c + y_1(t)
\]
solves (4.25), where $Y(t) c$ is the general solution of the homogeneous equation $y' = Ay$ and $y_1(t)$ is a particular solution of the non-homogeneous system (4.25). This leads to the following result.
where $c_p(t)$ is to be determined. Assume that $y_p$ solves (4.25); then, by the product rule of differentiation,
\[
y_p'(t) = Y'(t) c_p(t) + Y(t) c_p'(t) = A Y(t) c_p(t) + Y(t) c_p'(t),
\]
while the equation gives
\[
y_p'(t) = A y_p(t) + r(t) = A Y(t) c_p(t) + r(t).
\]
\[
y(t) = Y(t) c + Y(t) \int Y(t)^{-1} r(t)\, dt = Y(t) \left( c + \int Y(t)^{-1} r(t)\, dt \right), \qquad c \in \mathbb{R}^n. \tag{4.27}
\]
In particular, letting $Y(t) = e^{At}$, the unique solution of the initial value problem
\[
y' = A y + r(t), \qquad y(0) = y_0
\]
is given by
\[
y(t) = e^{At} \left( y_0 + \int_0^t e^{-As} r(s)\, ds \right). \tag{4.28}
\]
\[
y' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix} y + \begin{pmatrix} e^t \\ -1 \end{pmatrix}, \qquad y(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]
The eigenvalues of $A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}$ are $\lambda_1 = 3$ and $\lambda_2 = -1$ with associated eigenvectors $u_1 = (\tfrac{1}{2}, 1)^T$ and $u_2 = (-\tfrac{1}{2}, 1)^T$. Hence a fundamental matrix solution is given by
\[
Y(t) = \begin{pmatrix} \tfrac{1}{2} e^{3t} & -\tfrac{1}{2} e^{-t} \\ e^{3t} & e^{-t} \end{pmatrix}
\]
and therefore
\[
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ 1 & 1 \end{pmatrix}^{-1} \left[ \begin{pmatrix} 1 \\ 0 \end{pmatrix} - \begin{pmatrix} -\tfrac{11}{24} \\ \tfrac{1}{12} \end{pmatrix} \right] = \begin{pmatrix} 1 & \tfrac{1}{2} \\ -1 & \tfrac{1}{2} \end{pmatrix} \begin{pmatrix} \tfrac{35}{24} \\ -\tfrac{1}{12} \end{pmatrix} = \begin{pmatrix} \tfrac{17}{12} \\ -\tfrac{3}{2} \end{pmatrix}.
\]
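As a final sanity check, formula (4.28) can be compared with a direct numerical integration of this initial value problem; a sketch, assuming NumPy/SciPy and the data $r(t) = (e^t, -1)^T$, $y(0) = (1, 0)^T$ as above:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[1.0, 1.0], [4.0, 1.0]])
r = lambda t: np.array([np.exp(t), -1.0])
y0 = np.array([1.0, 0.0])

def y_formula(t):
    # y(t) = e^{At} ( y0 + int_0^t e^{-As} r(s) ds ), formula (4.28)
    integral, _ = quad_vec(lambda s: expm(-A * s) @ r(s), 0.0, t)
    return expm(A * t) @ (y0 + integral)

sol = solve_ivp(lambda t, y: A @ y + r(t), (0.0, 1.0), y0,
                rtol=1e-10, atol=1e-12)
print(np.allclose(y_formula(1.0), sol.y[:, -1], atol=1e-6))   # True
\end{verbatim}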