Diffeq 3: Systems of Linear Differential Equations
Contents

1 Systems of First-Order Linear Differential Equations
  1.1 General Theory of First-Order Linear Systems
  1.2 The Eigenvalue Method (Diagonalizable Coefficient Matrix)
  1.3 The Eigenvalue Method (Non-Diagonalizable Coefficient Matrix)
  1.4 Matrix Exponentials
In many (perhaps most) applications of differential equations, we have not one but several quantities which change over time and interact with one another.
Example: The production of goods, availability of labor, prices of supplies, and many other quantities change over time and interact with one another in economic processes.
Alas, as is typical with differential equations, we cannot solve arbitrary systems in full generality: in fact it is very difficult even to solve individual nonlinear differential equations, let alone a system of nonlinear equations. The most we will be able to do in general is to solve systems of linear equations with constant coefficients, and give an existence-uniqueness theorem for general systems of linear equations.
Before we start our discussion of systems of linear differential equations, we first observe that we can reduce any system of linear differential equations to a system of first-order linear differential equations (in more variables): if we define new variables equal to the higher-order derivatives of our old variables, then we can rewrite the old system as a system of first-order equations (in more variables).
Example: The single equation y''' + y' = 0 can be converted as follows: define z = y' and w = y'' = z'. Then y''' = w', so the equation becomes w' = -z, and we obtain the equivalent first-order system

  y' = z,  z' = w,  w' = -z.

Example: The system y1'' + y1 - y2 = 0 and y2'' + y2' sin(x) = e^{y1'} can be converted as follows: define z1 = y1' and z2 = y2'. Then z1' = y1'' = -y1 + y2 and z2' = y2'' = e^{z1} - z2 sin(x), so we obtain the equivalent first-order system

  y1' = z1,  y2' = z2,  z1' = -y1 + y2,  z2' = e^{z1} - z2 sin(x).
Thus, whatever we can show about solutions of systems of first-order linear equations will carry over to arbitrary systems of linear differential equations, so we will restrict our attention to systems of first-order linear differential equations from now on.
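To make the reduction concrete, here is a minimal Python sketch (my own illustrative code; it assumes scipy is available) that solves y''' + y' = 0 by integrating the equivalent first-order system y' = z, z' = w, w' = -z:

    import numpy as np
    from scipy.integrate import solve_ivp

    # State vector u = (y, y', y''); the third-order equation y''' + y' = 0
    # becomes the first-order system y' = z, z' = w, w' = -z.
    def rhs(x, u):
        y, z, w = u
        return [z, w, -z]

    # With y(0) = 1, y'(0) = 0, y''(0) = 0 the exact solution is y(x) = 1.
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
    print(sol.y[0, -1])  # approximately 1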
1.1 General Theory of First-Order Linear Systems

Definition: A system of first-order linear differential equations in the functions y1, ..., yn is one of the form

  y1' = a_{1,1}(x) y1 + a_{1,2}(x) y2 + ... + a_{1,n}(x) yn + p1(x)
  y2' = a_{2,1}(x) y1 + a_{2,2}(x) y2 + ... + a_{2,n}(x) yn + p2(x)
   ...
  yn' = a_{n,1}(x) y1 + a_{n,2}(x) y2 + ... + a_{n,n}(x) yn + pn(x)

for some functions a_{i,j}(x) and pj(x), where 1 <= i, j <= n.

Most of the time we will be dealing with systems with constant coefficients, in which all of the a_{i,j}(x) are constant functions.

An initial condition for such a system specifies the values y1(x0) = b1, ..., yn(x0) = bn of all n functions at a point x0.
Theorem: The set of solutions (y1, y2, ..., yn) to a homogeneous system of first-order linear differential equations

  y1' = a_{1,1}(x) y1 + ... + a_{1,n}(x) yn
   ...
  yn' = a_{n,1}(x) y1 + ... + a_{n,n}(x) yn

(homogeneous meaning that each of the terms pj(x) is zero) is an n-dimensional vector space.

The fact that the set of solutions forms a vector space is not so hard to show using the subspace criteria. The real result of this theorem, which follows from the existence-uniqueness theorem below, is that the set of solutions is n-dimensional. Many of the theorems about general systems of first-order linear equations are very similar to the theorems about single nth-order linear equations.
Theorem (Existence-Uniqueness): For a system of first-order linear differential equations, if the coefficient functions a_{i,j}(x) and pj(x) are each continuous in an interval around x = x0, then the system

  y1' = a_{1,1}(x) y1 + ... + a_{1,n}(x) yn + p1(x)
   ...
  yn' = a_{n,1}(x) y1 + ... + a_{n,n}(x) yn + pn(x)

with initial conditions y1(x0) = b1, ..., yn(x0) = bn has a unique solution (y1, y2, ..., yn) in some (possibly smaller) interval around x = x0.

Example: The system y' = e^x y + sin(x) z, z' = 3x^2 y with initial conditions y(a) = b1, z(a) = b2 has a unique solution, since the coefficient functions are continuous on the whole real line.
Definition: Given n vectors s1 = (y_{1,1}, y_{1,2}, ..., y_{1,n}), s2 = (y_{2,1}, y_{2,2}, ..., y_{2,n}), ..., sn = (y_{n,1}, y_{n,2}, ..., y_{n,n}) with functions as entries, their Wronskian is defined as the determinant

  W = det [ y_{1,1}  y_{1,2}  ...  y_{1,n} ]
          [ y_{2,1}  y_{2,2}  ...  y_{2,n} ]
          [   ...      ...    ...    ...   ]
          [ y_{n,1}  y_{n,2}  ...  y_{n,n} ]

As with solutions of a single linear equation, the vectors s1, ..., sn are linearly independent when their Wronskian W is not the zero function.
1.2 The Eigenvalue Method (Diagonalizable Coefficient Matrix)

We now restrict our discussion to homogeneous first-order systems with constant coefficients: those of the form

  [ y1' ]   [ a_{1,1}  a_{1,2}  ...  a_{1,n} ] [ y1 ]
  [ y2' ] = [ a_{2,1}  a_{2,2}  ...  a_{2,n} ] [ y2 ]
  [ ... ]   [   ...      ...    ...    ...   ] [ ...]
  [ yn' ]   [ a_{n,1}  a_{n,2}  ...  a_{n,n} ] [ yn ]

which we abbreviate as ~y' = A ~y.
The idea behind the so-called Eigenvalue Method is the following observation:

Observation: If ~v = [c1; c2; ...; cn] is an eigenvector of A with eigenvalue λ, then ~y = e^{λx} ~v is a solution to ~y' = A ~y.

Proof: Differentiating ~y = e^{λx} ~v gives ~y' = λ e^{λx} ~v = λ ~y = A ~y.

Theorem: If A has n linearly independent eigenvectors ~v1, ..., ~vn with eigenvalues λ1, ..., λn, then the general solutions to ~y' = A ~y are given by ~y = c1 e^{λ1 x} ~v1 + c2 e^{λ2 x} ~v2 + ... + cn e^{λn x} ~vn, where c1, ..., cn are arbitrary constants.

Remark: A has n linearly independent eigenvectors precisely when A is diagonalizable, with A = P D P^{-1} where the diagonal elements of D are the eigenvalues λ1, ..., λn and the columns of P are the vectors ~v1, ..., ~vn.

Proof (of the Theorem): By the observation, each of e^{λ1 x} ~v1, ..., e^{λn x} ~vn is a solution to ~y' = A ~y.
We claim that these solutions are linearly independent. We can compute the Wronskian of these solutions; after factoring out the exponentials from each column, we obtain W = e^{(λ1 + ... + λn) x} det[~v1 | ... | ~vn]. This product is nonzero, because the exponential is nonzero and the vectors ~v1, ..., ~vn are linearly independent. Hence e^{λ1 x} ~v1, e^{λ2 x} ~v2, ..., e^{λn x} ~vn are linearly independent.

We also know by the existence-uniqueness theorem that the set of solutions to the system ~y' = A ~y is n-dimensional. So since we have n linearly independent solutions in an n-dimensional vector space, they form a basis. Finally, since these solutions are a basis, all solutions are of the form ~y = c1 e^{λ1 x} ~v1 + c2 e^{λ2 x} ~v2 + ... + cn e^{λn x} ~vn, where c1, ..., cn are arbitrary constants.
By the remark, the theorem allows us to solve all homogeneous systems of linear differential equations whose coefficient matrix A is diagonalizable:

Step 1: Write the system in matrix form ~y' = A ~y for an n×1 column matrix ~y and an n×n matrix A (if the system is not already in this form). If the system has equations which are not first order, introduce new variables to make the system first-order.

Step 2: Find the eigenvalues λ1, ..., λn of A and corresponding eigenvectors ~v1, ..., ~vn.

Step 3: The general solution is ~y = c1 e^{λ1 x} ~v1 + c2 e^{λ2 x} ~v2 + ... + cn e^{λn x} ~vn, where c1, ..., cn are arbitrary constants.

Note: If there are complex-conjugate eigenvalues then we generally want to write the solutions as real-valued functions. If λ = a + bi has an eigenvector ~v = ~w1 + i ~w2, then the conjugate eigenvalue a - bi has the eigenvector ~w1 - i ~w2 (the conjugate of ~v). Then to obtain real-valued solutions to the system, replace the two complex-valued solutions e^{λx} (~w1 + i ~w2) and e^{(a-bi)x} (~w1 - i ~w2) with the two real-valued solutions e^{ax} (~w1 cos(bx) - ~w2 sin(bx)) and e^{ax} (~w1 sin(bx) + ~w2 cos(bx)).

Step 4 (if necessary): Plug in any initial conditions and solve for c1, ..., cn.
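The whole procedure is easy to carry out numerically. Here is a minimal Python sketch (my own illustrative code and matrix, using numpy) of Steps 1-3 for a diagonalizable A, with a numerical check that the result satisfies ~y' = A ~y:

    import numpy as np

    A = np.array([[1.0, -3.0],
                  [1.0,  5.0]])          # illustrative coefficient matrix
    vals, vecs = np.linalg.eig(A)        # eigenvalues; eigenvectors as columns

    def y(x, c):
        # general solution: sum_i c_i * exp(lambda_i * x) * v_i
        return vecs @ (c * np.exp(vals * x))

    c = np.array([2.0, -1.0])            # arbitrary constants
    h = 1e-6
    deriv = (y(1 + h, c) - y(1 - h, c)) / (2 * h)
    print(np.allclose(deriv, A @ y(1, c)))  # True: y' = A y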
Example: Find all functions y1 and y2 such that y1' = y1 - 3y2 and y2' = y1 + 5y2.

Step 1: The system is ~y' = A ~y, with ~y = [y1; y2] and A = [1, -3; 1, 5].

Step 2: The characteristic polynomial of A is det(tI - A) = det [t-1, 3; -1, t-5] = (t-1)(t-5) + 3 = t^2 - 6t + 8, so the eigenvalues are λ = 2, 4.

For λ = 2 we want [1, -3; 1, 5] [a; b] = [2a; 2b], so that [a - 3b; a + 5b] = [2a; 2b]. This yields a = -3b, so [-3; 1] is an eigenvector.

For λ = 4 we want [1, -3; 1, 5] [a; b] = [4a; 4b], so that [a - 3b; a + 5b] = [4a; 4b]. This yields a = -b, so [-1; 1] is an eigenvector.

Step 3: The general solution is

  [y1; y2] = c1 [-3; 1] e^{2x} + c2 [-1; 1] e^{4x},  that is,  y1 = -3c1 e^{2x} - c2 e^{4x} and y2 = c1 e^{2x} + c2 e^{4x}.
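As a quick cross-check of this example, here is an illustrative sketch of mine using sympy (output format may vary slightly by version):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, -3], [1, 5]])
    print(sp.factor((t*sp.eye(2) - A).det()))  # (t - 2)*(t - 4)
    # Eigenvalue 2 pairs with eigenvector (-3, 1), eigenvalue 4 with (-1, 1):
    print(A.eigenvects())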
Example: Find all functions y1 and y2 such that y1' = y2 and y2' = -y1.

Step 1: The system is ~y' = A ~y, with ~y = [y1; y2] and A = [0, 1; -1, 0].

Step 2: The characteristic polynomial of A is det(tI - A) = det [t, -1; 1, t] = t^2 + 1, so the eigenvalues are λ = i, -i.

For λ = i we want [0, 1; -1, 0] [a; b] = [ia; ib], so b = ia, and thus [1; i] is an eigenvector.

For λ = -i we can take the complex conjugate of the eigenvector for λ = i to see that [1; -i] is an eigenvector.

Step 3: The general solution is [y1; y2] = c1 [1; i] e^{ix} + c2 [1; -i] e^{-ix}.

To write the solution in terms of real-valued functions, we use the formula in the note: we have λ = i and ~v = [1; i] = [1; 0] + i [0; 1], so that ~w1 = [1; 0] and ~w2 = [0; 1]. Plugging in (with a = 0 and b = 1) gives the equivalent real-valued solutions

  [1; 0] cos(x) - [0; 1] sin(x) = [cos(x); -sin(x)]  and  [1; 0] sin(x) + [0; 1] cos(x) = [sin(x); cos(x)],

so the general solution is

  [y1; y2] = c1 [cos(x); -sin(x)] + c2 [sin(x); cos(x)] = [c1 cos(x) + c2 sin(x); -c1 sin(x) + c2 cos(x)].
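A short numerical sanity check of the real-valued form (my own illustrative code):

    import numpy as np

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])

    def sol(x, c1=2.0, c2=3.0):
        # general real-valued solution found above
        return np.array([ c1*np.cos(x) + c2*np.sin(x),
                         -c1*np.sin(x) + c2*np.cos(x)])

    h, x = 1e-6, 0.7
    deriv = (sol(x + h) - sol(x - h)) / (2*h)   # numerical derivative
    print(np.allclose(deriv, A @ sol(x)))       # True: y' = A y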
1.3 The Eigenvalue Method (Non-Diagonalizable Coefficient Matrix)

We would now like to solve ~y' = A ~y in the case where the n×n matrix A is not diagonalizable.

Recall that the system ~y' = A ~y has an n-dimensional solution space. So if A has n linearly independent eigenvectors, then we can write down the general solution to the system directly. If A is not diagonalizable, the eigenvectors alone do not give enough solutions, and we generate the remaining ones via generalized eigenvectors: vectors which are not eigenvectors, but are close enough that we can still use them to write down solutions to ~y' = A ~y.

If λ is a root of the characteristic equation of A with multiplicity k, we say that λ has multiplicity k. If the λ-eigenspace has dimension equal to k, we can find k linearly independent eigenvectors ~v1, ..., ~vk for λ. If the λ-eigenspace has dimension less than k, we instead build chains of generalized eigenvectors: starting from an eigenvector ~w1, we successively solve

  (A - λI) ~w2 = ~w1
  (A - λI) ~w3 = ~w2
   ...
  (A - λI) ~wl = ~w_{l-1}.

The solutions to ~y' = A ~y arising from this chain are

  e^{λt} ~w1,
  e^{λt} [t ~w1 + ~w2],
  e^{λt} [(t^2/2) ~w1 + t ~w2 + ~w3],
   ...
  e^{λt} [(t^{l-1}/(l-1)!) ~w1 + (t^{l-2}/(l-2)!) ~w2 + ... + t ~w_{l-1} + ~wl].

Note: If the λ-eigenspace has dimension greater than 1, there can be several chains of different lengths, and it may be necessary to toss out some elements of some chains, as they may lead to linearly dependent solutions.

To solve such a system:

Step 1: Write the system in matrix form ~y' = A ~y and find the eigenvalues of A and a basis for each eigenspace.

Step 2: For each eigenvalue λ whose eigenspace has dimension less than the multiplicity of λ, compute chains of generalized eigenvectors as described above, until the total number of solutions attached to λ equals its multiplicity.

Step 3: The general solution is an arbitrary linear combination of the solutions obtained from all of the chains. If there are complex-conjugate eigenvalues then we generally want to write the solutions as real-valued functions: to obtain real-valued solutions to the system from a pair of complex-conjugate solutions y and its conjugate, replace the pair with Re(y) and Im(y).
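Computing a chain amounts to repeatedly solving a singular linear system. A minimal sketch (toy matrix of my choosing; numpy's least-squares solver handles the singular matrix):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])           # double eigenvalue lambda = 1
    lam = 1.0
    M = A - lam * np.eye(2)
    w1 = np.array([1.0, 0.0])            # ordinary eigenvector: M @ w1 = 0
    w2, *_ = np.linalg.lstsq(M, w1, rcond=None)  # one solution of M w2 = w1
    print(np.allclose(M @ w2, w1))       # True: (w1, w2) is a chain of length 2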
Example: Find all functions y1 and y2 such that y1' = 5y1 - 9y2 and y2' = 4y1 - 7y2.

Step 1: The system is ~y' = A ~y, where A = [5, -9; 4, -7].

We have A - tI = [5-t, -9; 4, -7-t], so det(A - tI) = (5-t)(-7-t) - (4)(-9) = 1 + 2t + t^2 = (t+1)^2. Thus there is a double eigenvalue λ = -1.

To compute the eigenvectors for λ = -1, we want [6, -9; 4, -6] [a; b] = [0; 0], so that 2a - 3b = 0. So the eigenvectors are of the form [(3/2)b; b], and the eigenspace is 1-dimensional with a basis given by [3; 2].

Step 2: There is only one independent eigenvector (for the double eigenvalue λ = -1), so we need to compute a chain of generalized eigenvectors. We start with ~w = [3; 2], and also have A - λI = A + I = [6, -9; 4, -6].

We want to find ~w2 = [a; b] with [6, -9; 4, -6] [a; b] = [3; 2]. Dividing the first row by 3 and the second row by 2 yields the single equation 2a - 3b = 1, so we can take a = 2 and b = 1, and so ~w2 = [2; 1].

Now we have a chain of the proper length (namely 2), so we can write down the two solutions for this eigenspace: they are [3; 2] e^{-t} and [3; 2] t e^{-t} + [2; 1] e^{-t}.

Step 3: The general solution is

  [y1; y2] = c1 [3; 2] e^{-t} + c2 ([3; 2] t e^{-t} + [2; 1] e^{-t}).
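A symbolic check that both chain solutions satisfy ~y' = A ~y (my own sketch using sympy):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[5, -9], [4, -7]])
    w1, w2 = sp.Matrix([3, 2]), sp.Matrix([2, 1])
    sol1 = w1 * sp.exp(-t)
    sol2 = (w1*t + w2) * sp.exp(-t)
    print(sp.simplify(sol1.diff(t) - A*sol1))  # Matrix([[0], [0]])
    print(sp.simplify(sol2.diff(t) - A*sol2))  # Matrix([[0], [0]])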
Example: Find all functions y1, y2, y3 such that y1' = 4y1 - y2 - 2y3, y2' = 2y1 + y2 - 2y3, and y3' = 5y1 - 3y3.

Step 1: In matrix form this is ~y' = A ~y, where A = [4, -1, -2; 2, 1, -2; 5, 0, -3].

Expanding det(A - tI) along the bottom row gives

  det(A - tI) = 5 det [-1, -2; 1-t, -2] + (-3-t) det [4-t, -1; 2, 1-t] = 2 - t + 2t^2 - t^3 = (2 - t)(1 + t^2).

Thus the eigenvalues are λ = 2, i, -i.

λ = 2: We want [2, -1, -2; 2, -1, -2; 5, 0, -5] [a; b; c] = [0; 0; 0], so that 2a - b - 2c = 0 and 5a - 5c = 0. Hence c = a and b = 2a - 2c = 0, so the eigenvectors are of the form [a; 0; a]. So the eigenspace is 1-dimensional, and has a basis given by [1; 0; 1].

λ = i: We want [4-i, -1, -2; 2, 1-i, -2; 5, 0, -3-i] [a; b; c] = [0; 0; 0]. Subtracting the second row from the first row and dividing the result by 2 - i yields the row (1, -1, 0), so the system reduces to a - b = 0 together with 5a - (3+i)c = 0. Hence b = a and c = 5a/(3+i) = ((3-i)/2) a. Taking a = 2 gives the eigenvector [2; 2; 3-i], so the eigenspace is 1-dimensional with a basis given by [2; 2; 3-i].

λ = -i: We can take the complex conjugate of the eigenvector for λ = i: the eigenspace is 1-dimensional with a basis given by [2; 2; 3+i].

Step 2: The eigenspaces are all of the proper sizes, so we do not need to compute any generalized eigenvectors.

Step 3: The general solution is

  [y1; y2; y3] = c1 [1; 0; 1] e^{2t} + c2 [2; 2; 3-i] e^{it} + c3 [2; 2; 3+i] e^{-it}.

With real-valued functions: from λ = i we have ~v = [2; 2; 3-i] = [2; 2; 3] + i [0; 0; -1], so ~w1 = [2; 2; 3] and ~w2 = [0; 0; -1], and the general solution is

  [y1; y2; y3] = c1 [1; 0; 1] e^{2t} + c2 [2cos(t); 2cos(t); 3cos(t) + sin(t)] + c3 [2sin(t); 2sin(t); 3sin(t) - cos(t)],

that is,

  y1 = c1 e^{2t} + 2c2 cos(t) + 2c3 sin(t)
  y2 = 2c2 cos(t) + 2c3 sin(t)
  y3 = c1 e^{2t} + c2 (3cos(t) + sin(t)) + c3 (3sin(t) - cos(t)).
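A quick numerical cross-check of the eigenvalues (my own sketch):

    import numpy as np

    A = np.array([[4.0, -1.0, -2.0],
                  [2.0,  1.0, -2.0],
                  [5.0,  0.0, -3.0]])
    print(np.round(np.linalg.eigvals(A), 6))  # 2, i, -i (in some order)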
Example: Find all functions y1, y2, y3 such that y1' = 4y1 - y3, y2' = 2y1 + 2y2 - y3, and y3' = 3y1 + y2.

Step 1: In matrix form this is ~y' = A ~y, where A = [4, 0, -1; 2, 2, -1; 3, 1, 0].

We have A - tI = [4-t, 0, -1; 2, 2-t, -1; 3, 1, -t], so expanding along the top row gives

  det(A - tI) = (4-t) det [2-t, -1; 1, -t] + (-1) det [2, 2-t; 3, 1] = 8 - 12t + 6t^2 - t^3 = (2 - t)^3.

Thus there is a triple eigenvalue λ = 2.

To compute the eigenvectors for λ = 2, we want [2, 0, -1; 2, 0, -1; 3, 1, -2] [a; b; c] = [0; 0; 0], so that 2a - c = 0 and 3a + b - 2c = 0. Hence c = 2a and b = 2c - 3a = a, so the eigenvectors are of the form [a; a; 2a], and the eigenspace is 1-dimensional with a basis given by [1; 1; 2].

Step 2: There is only one independent eigenvector (for the triple eigenvalue λ = 2), so we need to compute a chain of generalized eigenvectors. We start with ~w = [1; 1; 2], and also have A - λI = [2, 0, -1; 2, 0, -1; 3, 1, -2].

First we want to find ~w2 = [a; b; c] with (A - λI) ~w2 = ~w. Row-reducing the corresponding augmented matrix

  [2 0 -1 | 1]
  [2 0 -1 | 1]
  [3 1 -2 | 2]

gives the conditions -a + b = 0 and 2a - c = 1, so one possibility is ~w2 = [1; 1; 1].

Now we want to find ~w3 = [d; e; f] with (A - λI) ~w3 = ~w2. Row-reducing the corresponding augmented matrix

  [2 0 -1 | 1]
  [2 0 -1 | 1]
  [3 1 -2 | 1]

gives the conditions -d + e = -1 and 2d - f = 1, so one possibility is ~w3 = [1; 0; 1].

Now we have a chain of the proper length (namely 3), so we can write down the three solutions for this eigenspace: they are

  [1; 1; 2] e^{2t},
  [1; 1; 2] t e^{2t} + [1; 1; 1] e^{2t},
  [1; 1; 2] (t^2/2) e^{2t} + [1; 1; 1] t e^{2t} + [1; 0; 1] e^{2t}.

Step 3: We thus obtain the general solution as the (rather unwieldy and complicated) expression

  [y1; y2; y3] = c1 [1; 1; 2] e^{2t} + c2 ([1; 1; 2] t e^{2t} + [1; 1; 1] e^{2t}) + c3 ([1; 1; 2] (t^2/2) e^{2t} + [1; 1; 1] t e^{2t} + [1; 0; 1] e^{2t}),

that is,

  y1 = (c1 + c2 + c3 + c2 t + c3 t + (1/2) c3 t^2) e^{2t}
  y2 = (c1 + c2 + c2 t + c3 t + (1/2) c3 t^2) e^{2t}
  y3 = (2c1 + c2 + c3 + 2c2 t + c3 t + c3 t^2) e^{2t}.
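A symbolic check of the chain (my own sketch using sympy):

    import sympy as sp

    A = sp.Matrix([[4, 0, -1], [2, 2, -1], [3, 1, 0]])
    M = A - 2*sp.eye(3)
    w1, w2, w3 = sp.Matrix([1, 1, 2]), sp.Matrix([1, 1, 1]), sp.Matrix([1, 0, 1])
    print(M*w1, M*w2 - w1, M*w3 - w2)  # three zero vectors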
If the coefficient matrix is not diagonalizable, life is more difficult, as we cannot generate a basis for the solution space using eigenvectors alone. We can still solve the system using chains of generalized eigenvectors, but there are some slightly cumbersome technical problems that can occur when a defective eigenspace has more than one independent eigenvector: in order to generate enough solutions, in general one must construct a chain above each of the eigenvectors in a suitably-chosen basis of the eigenspace.

1.4 Matrix Exponentials

Recall that the solution to the single first-order equation y' = ky with y(0) = C is y(x) = e^{kx} C. We might hope that an analogous formula holds for systems.

Definition: If A is an n×n matrix, then the exponential of A, denoted e^A, is defined to be the infinite sum

  e^A = Σ_{n=0}^∞ A^n / n!
The definition is motivated by the Taylor series for the exponential of a real or complex number z; namely, e^z = Σ_{n=0}^∞ z^n / n!.

Remark: In order for this definition to make sense, we need to know that the infinite sum e^A = Σ_{n=0}^∞ A^n / n! actually converges.
Theorem: If A is an n×n matrix, then the solution to ~y' = A ~y with initial condition ~y(0) = ~y0 is given by ~y(x) = e^{Ax} ~y0.

The proof of this result follows from showing that the derivative (d/dx)[e^{Ax}] is A e^{Ax}, which can be done by differentiating the power series defining the matrix exponential (then ~y(x) = e^{Ax} ~y0 satisfies both the differential equation and the initial condition). The uniqueness part of the existence-uniqueness theorem guarantees it is the only solution.
So we see that the matrix exponential allows us to solve any homogeneous first-order linear system. All that remains is actually to compute the exponential of a matrix. In general, this is not so straightforward: the general computation requires the Jordan Canonical Form. In the special case where A is diagonalizable, the computation is simpler.
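In practice one rarely sums the series by hand; scipy provides a matrix-exponential routine. A minimal sketch (my own illustrative code) of solving ~y' = A ~y via the theorem above:

    import numpy as np
    from scipy.linalg import expm   # matrix exponential e^M

    A = np.array([[0.0, 2.0], [-3.0, 5.0]])
    y0 = np.array([1.0, 1.0])

    def y(x):
        return expm(A * x) @ y0     # solution with y(0) = y0

    h = 1e-6
    deriv = (y(1 + h) - y(1 - h)) / (2*h)
    print(np.allclose(deriv, A @ y(1)))  # True: y' = A y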
Proposition: If P is an invertible n×n matrix, then e^{P^{-1} A P} = P^{-1} e^A P.

Proof: The result follows from the fact that (P^{-1} A P)^n = P^{-1} (A^n) P, so that

  e^{P^{-1} A P} = Σ_{n=0}^∞ (P^{-1} A P)^n / n! = P^{-1} (Σ_{n=0}^∞ A^n / n!) P = P^{-1} e^A P.

We also know that if D is a diagonal matrix with diagonal entries λ1, ..., λn, then e^D is the diagonal matrix with diagonal entries e^{λ1}, ..., e^{λn}.

Putting these two results together shows that if A is diagonalizable, i.e., A = P D P^{-1} where D is the diagonal matrix whose entries are the eigenvalues λ1, ..., λn of A and the columns of P are corresponding eigenvectors, then

  e^{Ax} = P [e^{λ1 x}, ..., 0; ...; 0, ..., e^{λn x}] P^{-1}.

Example: Find e^{Ax}, if A = [0, 2; -3, 5].
We calculate the eigenvalues and eigenvectors of A: the characteristic polynomial is det(tI - A) = t(t-5) + 6 = (t-2)(t-3), so the eigenvalues are λ = 2, 3.

For λ = 2 we need to solve [0, 2; -3, 5] [a; b] = 2 [a; b], so [2b; -3a + 5b] = [2a; 2b] and thus a = b. The eigenvectors are of the form [b; b], so a basis for the λ = 2 eigenspace is [1; 1].

For λ = 3 we need to solve [0, 2; -3, 5] [a; b] = 3 [a; b], so [2b; -3a + 5b] = [3a; 3b] and thus a = (2/3)b. The eigenvectors are of the form [(2/3)b; b], so a basis for the λ = 3 eigenspace is [2; 3].
Since the eigenvalues are distinct we know that A is diagonalizable: we can write A = P D P^{-1} for

  D = [2, 0; 0, 3]  and  P = [1, 2; 1, 3].  We also compute P^{-1} = [3, -2; -1, 1].

Now we compute e^{Dx} = [e^{2x}, 0; 0, e^{3x}] from the formula for exponentiating diagonal matrices.

Finally, we have

  e^{Ax} = P e^{Dx} P^{-1} = [1, 2; 1, 3] [e^{2x}, 0; 0, e^{3x}] [3, -2; -1, 1] = [3e^{2x} - 2e^{3x}, -2e^{2x} + 2e^{3x}; 3e^{2x} - 3e^{3x}, -2e^{2x} + 3e^{3x}].
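A one-line numerical confirmation of this computation (my own sketch):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 2.0], [-3.0, 5.0]])
    P = np.array([[1.0, 2.0], [1.0, 3.0]])   # columns: eigenvectors for 2, 3
    x = 0.5
    eDx = np.diag(np.exp([2*x, 3*x]))
    print(np.allclose(P @ eDx @ np.linalg.inv(P), expm(A*x)))  # True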
For a general matrix A, computing e^{Ax} requires the Jordan Canonical Form of A; the key computation is the exponential of a single n×n Jordan block

  J = [λ, 1, 0, ..., 0; 0, λ, 1, ..., 0; ...; 0, 0, ..., λ, 1; 0, 0, ..., 0, λ] = λI + N,

where N is the matrix with 1s just above the diagonal and 0s elsewhere. The powers of N are easy to describe: N^k has 1s on the kth diagonal above the main diagonal, and N^n is the zero matrix. Since λI and N commute, the binomial theorem gives

  (Jx)^k = x^k (λI + N)^k = x^k [λ^k I + (k choose 1) λ^{k-1} N + ... + (k choose n) λ^{k-n} N^n + ...],

but since N^n is the zero matrix, only the terms up through N^{n-1} are nonzero. Then one can plug these expressions into the infinite sum defining e^{Jx} and actually evaluate the infinite sum explicitly. Eventually, one ends up with the answer

  e^{Jx} = [ e^{λx}, x e^{λx}, (x^2/2) e^{λx}, ..., (x^{n-1}/(n-1)!) e^{λx} ;
             0, e^{λx}, x e^{λx}, ..., (x^{n-2}/(n-2)!) e^{λx} ;
             ... ;
             0, 0, ..., e^{λx}, x e^{λx} ;
             0, 0, ..., 0, e^{λx} ],

if J is n×n.
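A short check of this closed form against scipy's matrix exponential for a 3×3 Jordan block (my own sketch):

    import numpy as np
    from scipy.linalg import expm
    from math import factorial

    lam, x, n = 2.0, 0.7, 3
    J = lam*np.eye(n) + np.diag(np.ones(n-1), 1)       # Jordan block, lambda = 2
    closed = np.exp(lam*x) * sum(
        np.diag(np.full(n-k, x**k / factorial(k)), k)  # x^k/k! on kth superdiagonal
        for k in range(n))
    print(np.allclose(closed, expm(J*x)))              # True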