Elementary Differential Equations with Boundary Value Problems
William F. Trench
Trinity University
This open text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the
hundreds of other open texts available within this powerful platform, it is licensed to be freely used, adapted, and distributed.
This book is openly licensed, which allows you to make changes to, save, and print this book; the applicable license is
indicated at the bottom of each page.
Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of
their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and
new technologies to support learning.
The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online
platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable
textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the
next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an
Open Access Resource environment. The project currently consists of 13 independently operating and interconnected libraries
that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books.
These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level)
and horizontally (across different fields) integrated.
The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot
Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning
Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant
Nos. 1246120, 1525057, and 1413739. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation nor the US Department of Education.
Have questions or comments? For information about adoptions or adaptations contact info@LibreTexts.org. More information
on our activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our
blog (http://Blog.Libretexts.org).
1: INTRODUCTION
1.1: APPLICATIONS LEADING TO DIFFERENTIAL EQUATIONS
1.2: BASIC CONCEPTS
3: NUMERICAL METHODS
In this chapter we study numerical methods for solving a first-order differential equation
4.3E: ELEMENTARY MECHANICS (EXERCISES)
4.4: AUTONOMOUS SECOND ORDER EQUATIONS
8: LAPLACE TRANSFORMS
8.1: INTRODUCTION TO THE LAPLACE TRANSFORM
8.1E: INTRODUCTION TO THE LAPLACE TRANSFORM (EXERCISES)
8.2: THE INVERSE LAPLACE TRANSFORM
8.2E: THE INVERSE LAPLACE TRANSFORM (EXERCISES)
8.3: SOLUTION OF INITIAL VALUE PROBLEMS
12: MATRICES
12.1: MATRIX ARITHMETIC
12.2: MULTIPLICATION OF MATRICES
12.3: THE IJTH ENTRY OF A PRODUCT
12.4: PROPERTIES OF MATRIX MULTIPLICATION
12.5: THE TRANSPOSE
12.6: THE IDENTITY AND INVERSES
12.7: FINDING THE INVERSE OF A MATRIX
12.E: EXERCISES
13: DETERMINANTS
13.1: BASIC TECHNIQUES
13.2: PROPERTIES OF DETERMINANTS
13.3: FINDING DETERMINANTS USING ROW OPERATIONS
13.4: APPLICATIONS OF THE DETERMINANT
13.E: EXERCISES
SECTION 10.4 ANSWERS
SECTION 10.5 ANSWERS
SECTION 10.6 ANSWERS
SECTION 10.7 ANSWERS
BACK MATTER
INDEX
CHAPTER OVERVIEW
1: INTRODUCTION
In this chapter, we begin our study of differential equations.
1.1: Applications Leading to Differential Equations
In order to apply mathematical methods to a physical or “real life” problem, we must formulate the problem in mathematical
terms; that is, we must construct a mathematical model for the problem. Many physical problems concern relationships
between changing quantities. Since rates of change are represented mathematically by derivatives, mathematical models often
involve equations relating an unknown function and one or more of its derivatives. Such equations are differential equations.
They are the subject of this book.
Much of calculus is devoted to learning mathematical techniques that are applied in later courses in mathematics and the
sciences; you wouldn't have time to learn much calculus if you insisted on seeing a specific application of every topic covered
in the course. Similarly, much of this book is devoted to methods that can be applied in later courses. Only a relatively small
part of the book is devoted to the derivation of specific differential equations from mathematical models, or relating the
differential equations that we study to specific applications. In this section we mention a few such applications. The
mathematical model for an applied problem is almost always simpler than the actual situation being studied, since simplifying
assumptions are usually required to obtain a mathematical problem that can be solved. For example, in modeling the motion of
a falling object, we might neglect air resistance and the gravitational pull of celestial bodies other than Earth, or in modeling
population growth we might assume that the population grows continuously rather than in discrete steps.
A good mathematical model has two important properties:
It’s sufficiently simple so that the mathematical problem can be solved.
It represents the actual situation sufficiently well so that the solution to the mathematical problem predicts the outcome of
the real problem to within a useful degree of accuracy. If results predicted by the model don’t agree with physical
observations, the underlying assumptions of the model must be revised until satisfactory agreement is obtained.
We will now give examples of mathematical models involving differential equations. We will return to these problems at the
appropriate times, as we learn how to solve the various types of differential equations that occur in the models. All the
examples in this section deal with functions of time, which we denote by $t$. If $y$ is a function of $t$, $y'$ denotes the derivative of $y$ with respect to $t$; thus,
$$y' = \frac{dy}{dt}.$$
where $a$ is a continuous function of $P$ that represents the rate of change of population per unit time per individual. In the
Malthusian model, it is assumed that $a(P)$ is a constant, so Equation 1.1.1 becomes
$$P' = aP. \tag{1.1.2}$$
This model assumes that the numbers of births and deaths per unit time are both proportional to the population. The constants of proportionality are the birth rate (births per unit time per individual) and the death rate (deaths per unit time per individual); $a$ is the birth rate minus the death rate. You learned in calculus that if $c$ is any constant then
$$P = ce^{at} \tag{1.1.3}$$
satisfies Equation 1.1.2, so Equation 1.1.2 has infinitely many solutions. To select the solution of the specific problem that we are considering, we must know the population $P_0$ at an initial time, say $t = 0$. Setting $t = 0$ in Equation 1.1.3 yields $c = P(0) = P_0$, so the applicable solution is $P = P_0e^{at}$. Therefore $\lim_{t\to\infty} P(t) = \infty$ if $a > 0$, or $\lim_{t\to\infty} P(t) = 0$ if $a < 0$;
that is, the population approaches infinity if the birth rate exceeds the death rate, or zero if the death rate exceeds the birth rate.
To see the limitations of the Malthusian model, suppose we are modeling the population of a country, starting from a time $t = 0$ when the birth rate exceeds the death rate (so $a > 0$), and the country's resources in terms of space, food supply, and other necessities of life can support the existing population. Then the prediction $P = P_0e^{at}$ may be reasonably accurate as long as it remains within limits that the country's resources can support. However, the model must inevitably lose validity when the prediction exceeds these limits. (If nothing else, eventually there will not be enough space for the predicted population!) This flaw in the Malthusian model suggests the need for a model that accounts for limitations of space and resources that tend to oppose the rate of population growth as the population increases.
Perhaps the most famous model of this kind is the Verhulst model, where Equation 1.1.2 is replaced by
$$P' = aP(1 - \alpha P), \tag{1.1.4}$$
where $\alpha$ is a positive constant. As long as $P$ is small compared to $1/\alpha$, the ratio $P'/P$ is approximately equal to $a$. Therefore the growth is approximately exponential; however, as $P$ increases, the ratio $P'/P$ decreases as opposing factors become significant.
Equation 1.1.4 is the logistic equation. You will learn how to solve it in Section 2.2 (see Exercise 2.2.28). The solution is
$$P = \frac{P_0}{\alpha P_0 + (1 - \alpha P_0)e^{-at}},$$
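The following Python sketch (not from the original text) simply evaluates this solution for illustrative parameter values; the function name and numbers are assumptions made for the example. It shows the population approaching the limiting value $1/\alpha$.

```python
import numpy as np

def verhulst(t, P0, a, alpha):
    # P(t) = P0 / (alpha*P0 + (1 - alpha*P0)*exp(-a*t)), the solution quoted above
    return P0 / (alpha * P0 + (1 - alpha * P0) * np.exp(-a * t))

t = np.linspace(0.0, 10.0, 6)
print(verhulst(t, P0=2.0, a=1.0, alpha=0.25))  # values approach 1/alpha = 4
```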
$$T' = -k(T - T_m) \tag{1.1.5}$$
where $k$ is a positive constant and the minus sign indicates that the temperature of the body increases with time if it is less than the temperature of the medium, or decreases if it is greater. We will see in Section 4.2 that if $T_m$ is constant then
$$\lim_{t\to\infty} T(t) = T_m.$$
be their initial values. Again, we assume that $T$ and $T_m$ are related by Equation 1.1.5. We also assume that the change in heat of the object as its temperature changes from $T_0$ to $T$ is $a(T - T_0)$ and the change in heat of the medium as its temperature changes from $T_{m0}$ to $T_m$ is $a_m(T_m - T_{m0})$, where $a$ and $a_m$ are positive constants depending upon the masses and thermal properties of the object and medium respectively. If we assume that the total heat of the object and the medium remains constant (that is, energy is conserved), then
$$a(T - T_0) + a_m(T_m - T_{m0}) = 0.$$
Solving this for $T_m$ and substituting the result into Equation 1.1.6 yields the differential equation
$$T' = -k\left(1 + \frac{a}{a_m}\right)T + k\left(T_{m0} + \frac{a}{a_m}T_0\right)$$
for the temperature of the object. After learning to solve linear first order equations, you'll be able to show (Exercise 4.2.17) that
$$T = \frac{aT_0 + a_mT_{m0}}{a + a_m} + \frac{a_m(T_0 - T_{m0})}{a + a_m}e^{-k(1 + a/a_m)t}.$$
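As a quick numerical illustration (not part of the original text), the sketch below evaluates this formula for assumed parameter values and shows the temperature approaching the equilibrium value $(aT_0 + a_mT_{m0})/(a + a_m)$.

```python
import numpy as np

def temperature(t, T0, Tm0, a, am, k):
    # closed-form temperature of the object, from the formula above
    steady = (a * T0 + am * Tm0) / (a + am)
    return steady + (am * (T0 - Tm0) / (a + am)) * np.exp(-k * (1 + a / am) * t)

t = np.linspace(0.0, 10.0, 6)
print(temperature(t, T0=100.0, Tm0=20.0, a=1.0, am=4.0, k=0.5))
# the values approach (a*T0 + am*Tm0)/(a + am) = 36.0
```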
be the number of units in the bloodstream at time $t > 0$. Then, since the glucose being absorbed by the body is leaving the bloodstream, $G$ satisfies the equation
$$G' = -\lambda G. \tag{1.1.7}$$
From calculus you know that if $c$ is any constant then
$$G = ce^{-\lambda t} \tag{1.1.8}$$
satisfies Equation 1.1.7, so Equation 1.1.7 has infinitely many solutions. Setting $t = 0$ in Equation 1.1.8 and requiring that $G(0) = G_0$ yields $c = G_0$, so
$$G(t) = G_0e^{-\lambda t}.$$
Now let's complicate matters by injecting glucose intravenously at a constant rate of $r$ units of glucose per unit of time. Then the rate of change of the amount of glucose in the bloodstream per unit time is
$$G' = -\lambda G + r, \tag{1.1.9}$$
where the first term on the right is due to the absorption of the glucose by the body and the second term is due to the injection. After you've studied Section 2.1, you'll be able to show that the solution of Equation 1.1.9 that satisfies $G(0) = G_0$ is
$$G = \frac{r}{\lambda} + \left(G_0 - \frac{r}{\lambda}\right)e^{-\lambda t}.$$
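A minimal sketch (added here for illustration, with assumed values of $\lambda$, $r$, and $G_0$) compares this closed form with a direct numerical solution of Equation 1.1.9; both approach $r/\lambda$.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, r, G0 = 0.5, 2.0, 1.0  # illustrative values

def closed_form(t):
    return r / lam + (G0 - r / lam) * np.exp(-lam * t)

# numerical solution of G' = -lam*G + r for comparison
sol = solve_ivp(lambda t, G: -lam * G + r, (0.0, 10.0), [G0], t_eval=np.linspace(0.0, 10.0, 5))
print(sol.y[0])
print(closed_form(sol.t))  # both approach r/lam = 4.0
```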
Spread of Epidemics
One model for the spread of epidemics assumes that the number of people infected changes at a rate proportional to the
product of the number of people already infected and the number of people who are susceptible, but not yet infected.
Therefore, if S denotes the total population of susceptible people and I = I (t) denotes the number of infected people at time
t , then S − I is the number of people who are susceptible, but not yet infected. Thus,
$$I' = rI(S - I),$$
where $r$ is a positive constant. Assuming that $I(0) = I_0$, the solution of this equation is
$$I = \frac{SI_0}{I_0 + (S - I_0)e^{-rSt}}$$
(Exercise 2.2.29). Graphs of this function are similar to those in Figure 1.1.1. (Why?) Since $\lim_{t\to\infty} I(t) = S$, this model predicts that eventually all the susceptible people become infected.
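The short sketch below (an addition, with an assumed population size, infection rate, and initial count) evaluates this solution and shows $I(t)$ rising toward $S$.

```python
import numpy as np

def infected(t, S, I0, r):
    # I(t) = S*I0 / (I0 + (S - I0)*exp(-r*S*t)), the solution quoted above
    return S * I0 / (I0 + (S - I0) * np.exp(-r * S * t))

t = np.linspace(0.0, 30.0, 7)
print(infected(t, S=1000.0, I0=1.0, r=0.001))  # rises from 1 toward S = 1000
```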
where a and b are positive constants. One way to model the effect of competition is to assume that the growth rate per
individual of each population is reduced by an amount proportional to the other population, so Equation 1.1.10 is replaced by
$$P' = aP - \alpha Q$$
$$Q' = -\beta P + bQ,$$
where $\alpha$ and $\beta$ are positive constants. (Since negative population doesn't make sense, this system works only while $P$ and $Q$ are both positive.) Now suppose $P(0) = P_0 > 0$ and $Q(0) = Q_0 > 0$. It can be shown (Exercise 10.4.42) that there's a positive constant $\rho$ such that if $(P_0, Q_0)$ is above the line $L$ through the origin with slope $\rho$, then the species with population $P$ becomes extinct in finite time, but if $(P_0, Q_0)$ is below $L$, the species with population $Q$ becomes extinct in finite time. Figure 1.1.3 illustrates this. The curves shown there are given parametrically by $P = P(t),\ Q = Q(t),\ t > 0$. The arrows indicate direction along the curves with increasing $t$.
Throughout this text, all variables and constants are real unless it is stated otherwise. We’ll usually use x for the
independent variable unless the independent variable is time; then we’ll use t .
The simplest differential equations are first order equations of the form
$$\frac{dy}{dx} = f(x)$$
or equivalently
$$y' = f(x),$$
where $f$ is a known function of $x$. We already know from calculus how to find functions that satisfy this kind of equation. For example, if
$$y' = x^3,$$
then
$$y = \int x^3\,dx = \frac{x^4}{4} + c,$$
where $c$ is an arbitrary constant. If $n > 1$ we can find functions $y$ that satisfy equations of the form
$$y^{(n)} = f(x) \tag{1.2.1}$$
$$\frac{dy}{dx} + 2xy^2 = -2 \quad \text{(first order)},$$
$$\frac{d^2y}{dx^2} + 2\frac{dy}{dx} + y = 2x \quad \text{(second order)},$$
$$xy''' + y^2 = \sin x \quad \text{(third order)},$$
$$y^{(n)} + xy' + 3y = x \quad \text{($n$-th order)}.$$
Although none of these equations is written as in Equation 1.2.2, all of them can be written in this form:
$$y' = -2 - 2xy^2,$$
$$y'' = 2x - 2y' - y,$$
$$y''' = \frac{\sin x - y^2}{x},$$
$$y^{(n)} = x - xy' - 3y.$$
for all x in some open interval (a, b). In this case, we also say that y is a solution of Equation 1.2.2 on (a, b) . Functions that
satisfy a differential equation at isolated points are not interesting. For example, $y = x^2$ satisfies
$$xy' + x^2 = 3x$$
if and only if $x = 0$ or $x = 1$, but it is not a solution of this differential equation because it does not satisfy the equation on an open interval.
The graph of a solution of a differential equation is a solution curve. More generally, a curve C is said to be an integral curve
of a differential equation if every function y = y(x) whose graph is a segment of C is a solution of the differential equation.
Thus, any solution curve of a differential equation is an integral curve, but an integral curve need not be a solution curve.
Example 1.2.1
If a is any positive constant, the circle
$$x^2 + y^2 = a^2 \tag{1.2.3}$$
is an integral curve of
$$y' = -\frac{x}{y}. \tag{1.2.4}$$
Solution
To see this, note that the only functions whose graphs are segments of Equation 1.2.3 are
$$y_1 = \sqrt{a^2 - x^2} \quad \text{and} \quad y_2 = -\sqrt{a^2 - x^2}.$$
We leave it to you to verify that these functions both satisfy Equation 1.2.4 on the open interval (−a, a) . However,
Equation 1.2.3 is not a solution curve of Equation 1.2.4, since it is not the graph of a function.
Example 1.2.2
Verify that
$$y = \frac{x^2}{3} + \frac{1}{x} \tag{1.2.5}$$
is a solution of
$$xy' + y = x^2 \tag{1.2.6}$$
for all x ≠ 0 . Therefore y is a solution of Equation 1.2.6 on (−∞, 0) and (0, ∞). However, y isn’t a solution of the
differential equation on any open interval that contains x = 0 , since y is not defined at x = 0 .
Figure 1.2.2 shows the graph of Equation 1.2.5. The part of the graph of Equation 1.2.5 on (0, ∞) is a solution curve of
Equation 1.2.6, as is the part of the graph on (−∞, 0).
Figure 1.2.2: $y = \dfrac{x^2}{3} + \dfrac{1}{x}$
Example 1.2.3
Show that if $c_1$ and $c_2$ are constants then
$$y = (c_1 + c_2x)e^{-x} + 2x - 4 \tag{1.2.7}$$
is a solution of
$$y'' + 2y' + y = 2x \tag{1.2.8}$$
on (−∞, ∞).
Solution
Differentiating Equation 1.2.7 twice yields
$$y' = -(c_1 + c_2x)e^{-x} + c_2e^{-x} + 2$$
and
$$y'' = (c_1 + c_2x)e^{-x} - 2c_2e^{-x},$$
so
$$\begin{aligned}
y'' + 2y' + y &= (c_1 + c_2x)e^{-x} - 2c_2e^{-x} + 2\left[-(c_1 + c_2x)e^{-x} + c_2e^{-x} + 2\right] + (c_1 + c_2x)e^{-x} + 2x - 4\\
&= (1 - 2 + 1)(c_1 + c_2x)e^{-x} + (-2 + 2)c_2e^{-x} + 4 + 2x - 4\\
&= 2x.
\end{aligned}$$
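If you have a computer algebra system available, the same verification can be done symbolically. The following sketch (not part of the original text) uses SymPy to check that $y'' + 2y' + y$ simplifies to $2x$ for every choice of $c_1$ and $c_2$.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = (c1 + c2 * x) * sp.exp(-x) + 2 * x - 4      # Equation 1.2.7
lhs = sp.diff(y, x, 2) + 2 * sp.diff(y, x) + y  # left side of Equation 1.2.8
print(sp.simplify(lhs))                          # prints 2*x for every c1, c2
```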
Solution
Integrating Equation 1.2.9 yields
$$y^{(n-1)} = \frac{e^{2x}}{2} + k_1,$$
$$y^{(n-2)} = \frac{e^{2x}}{4} + k_1x + k_2,$$
and, continuing in this fashion through $n$ integrations,
$$y = \frac{e^{2x}}{2^n} + \frac{k_1x^{n-1}}{(n-1)!} + \frac{k_2x^{n-2}}{(n-2)!} + \dots + k_n, \tag{1.2.10}$$
where $k_1$, $k_2$, ..., $k_n$ are constants. This shows that every solution of Equation 1.2.9 has the form Equation 1.2.10 for some choice of the constants $k_1$, $k_2$, ..., $k_n$. On the other hand, differentiating Equation 1.2.10 $n$ times shows that if $k_1$, $k_2$, ..., $k_n$ are arbitrary constants, then the function $y$ in Equation 1.2.10 satisfies Equation 1.2.9.
Since the constants $k_1$, $k_2$, ..., $k_n$ in Equation 1.2.10 are arbitrary, so are the constants
$$\frac{k_1}{(n-1)!}, \; \frac{k_2}{(n-2)!}, \; \dots, \; k_n.$$
Therefore Example 1.2.4 actually shows that all solutions of Equation 1.2.9 can be written as
$$y = \frac{e^{2x}}{2^n} + c_1 + c_2x + \dots + c_nx^{n-1},$$
where we renamed the arbitrary constants in Equation 1.2.10 to obtain a simpler formula. As a general rule, arbitrary constants
appearing in solutions of differential equations should be simplified if possible. You’ll see examples of this throughout the
text.
arbitrary constants $c_1$, $c_2$, ..., $c_n$. In the absence of additional conditions, there's no reason to prefer one solution of a
differential equation over another. However, we’ll often be interested in finding a solution of a differential equation that
satisfies one or more specific conditions. The next example illustrates this.
Example 1.2.5
Find a solution of
$$y' = x^3$$
such that $y(1) = 2$.
Solution
Integrating yields
$$y = \frac{x^4}{4} + c.$$
Imposing the condition $y(1) = 2$ gives $\frac{1}{4} + c = 2$, so
$$c = \frac{7}{4}.$$
Figure 1.2.2 shows the graph of this solution. Note that imposing the condition $y(1) = 2$ is equivalent to requiring the graph of $y$ to pass through the point $(1, 2)$.
Figure 1.2.2: $y = \dfrac{x^4 + 7}{4}$
We call this an initial value problem. The requirement y(1) = 2 is an initial condition. Initial value problems can also be
posed for higher order differential equations. For example,
$$y'' - 2y' + 3y = e^x, \quad y(0) = 1, \quad y'(0) = 2 \tag{1.2.11}$$
is an initial value problem for a second order differential equation where $y$ and $y'$ are required to have specified values at $x = 0$. In general, an initial value problem for an $n$-th order differential equation requires $y$ and its first $n - 1$ derivatives to have specified values at some point $x_0$. These requirements are the initial conditions. We'll denote an initial value problem for a differential equation by writing the initial conditions after the equation, as in Equation 1.2.11. For example, we would write an initial value problem for Equation 1.2.2 as
$$y^{(n)} = f(x, y, y', \dots, y^{(n-1)}), \quad y(x_0) = k_0, \; y'(x_0) = k_1, \; \dots, \; y^{(n-1)}(x_0) = k_{n-1}. \tag{1.2.12}$$
Consistent with our earlier definition of a solution of the differential equation in Equation 1.2.12, we say that y is a solution of
the initial value problem Equation 1.2.12 if y is n times differentiable and
$$y^{(n)}(x) = f\left(x, y(x), y'(x), \dots, y^{(n-1)}(x)\right)$$
for all $x$ in some open interval $(a, b)$ that contains $x_0$, and $y$ satisfies the initial conditions in Equation 1.2.12. The largest open interval that contains $x_0$ on which $y$ is defined and satisfies the differential equation is the interval of validity of $y$.
Example 1.2.6
Since the function in Equation 1.2.13 is defined for all x, the interval of validity of this solution is (−∞, ∞).
Example 1.2.7
In Example 1.2.2 we verified that
$$y = \frac{x^2}{3} + \frac{1}{x} \tag{1.2.14}$$
is a solution of
$$xy' + y = x^2$$
on $(0, \infty)$ and on $(-\infty, 0)$. By evaluating Equation 1.2.14 at $x = \pm 1$, you can see that Equation 1.2.14 is a solution of the initial value problems
$$xy' + y = x^2, \quad y(1) = \frac{4}{3} \tag{1.2.15}$$
and
$$xy' + y = x^2, \quad y(-1) = -\frac{2}{3}. \tag{1.2.16}$$
The interval of validity of Equation 1.2.14 as a solution of Equation 1.2.15 is $(0, \infty)$, since this is the largest interval that contains $x_0 = 1$ on which Equation 1.2.14 is defined. Similarly, the interval of validity of Equation 1.2.14 as a solution of Equation 1.2.16 is $(-\infty, 0)$, since this is the largest interval that contains $x_0 = -1$ on which Equation 1.2.14 is defined.
Example 1.2.8
An object falls under the influence of gravity near Earth’s surface, where it can be assumed that the magnitude of the
acceleration due to gravity is a constant g .
a. Construct a mathematical model for the motion of the object in the form of an initial value problem for a second order
differential equation, assuming that the altitude and velocity of the object at time t = 0 are known. Assume that
gravity is the only force acting on the object.
b. Solve the initial value problem derived above to obtain the altitude as a function of time.
Solution a
Let y(t) be the altitude of the object at time t . Since the acceleration of the object has constant magnitude g and is in the
downward (negative) direction, y satisfies the second order equation
$$y'' = -g,$$
where the prime now indicates differentiation with respect to $t$. If $y_0$ and $v_0$ denote the altitude and velocity when $t = 0$, then $y$ is a solution of the initial value problem
$$y'' = -g, \quad y(0) = y_0, \quad y'(0) = v_0. \tag{1.2.17}$$
Solution b
Integrating Equation 1.2.17 twice yields
$$y' = -gt + c_1,$$
$$y = -\frac{gt^2}{2} + c_1t + c_2.$$
Imposing the initial conditions $y(0) = y_0$ and $y'(0) = v_0$ in these two equations shows that $c_1 = v_0$ and $c_2 = y_0$.
Therefore the solution of the initial value problem Equation 1.2.17 is
$$y = -\frac{gt^2}{2} + v_0t + y_0.$$
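For a concrete feel for this formula, the sketch below (an illustration added here, borrowing the launch data of Exercise 7 merely as sample numbers) tabulates the altitude at a few times.

```python
def altitude(t, y0, v0, g=32.0):
    # y = -g*t**2/2 + v0*t + y0, the solution just derived (ft and ft/sec units)
    return -g * t**2 / 2 + v0 * t + y0

for t in (0.0, 1.0, 2.0, 3.0):
    print(t, altitude(t, y0=320.0, v0=128.0))
```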
b. $y'' - 3y' + 2y = x^7$
c. $y' - y^7 = 0$
d. $y''y - (y')^2 = 2$
2. Verify that the function is a solution of the differential equation on some interval, for any choice of the arbitrary constants
appearing in the function.
a. $y = ce^{2x}$; $\quad y' = 2y$
b. $y = \dfrac{x^2}{3} + \dfrac{c}{x}$; $\quad xy' + y = x^2$
c. $y = \dfrac{1}{2} + ce^{-x^2}$; $\quad y' + 2xy = x$
d. $y = (1 + ce^{-x^2/2})(1 - ce^{-x^2/2})^{-1}$; $\quad 2y' + x(y^2 - 1) = 0$
e. $y = \tan\left(\dfrac{x^3}{3} + c\right)$; $\quad y' = x^2(1 + y^2)$
f. $y = (c_1 + c_2x)e^x + \sin x + x^2$; $\quad y'' - 2y' + y = -2\cos x + x^2 - 4x + 2$
g. $y = c_1e^x + c_2x + \dfrac{2}{x}$; $\quad (1 - x)y'' + xy' - y = 4(1 - x - x^2)x^{-3}$
h. $y = x^{-1/2}(c_1\sin x + c_2\cos x) + 4x + 8$; $\quad x^2y'' + xy' + \left(x^2 - \dfrac{1}{4}\right)y = 4x^3 + 8x^2 + 3x - 2$
c. $y' = x\ln x$
d. $y'' = x\cos x$
e. $y'' = 2xe^x$
f. $y'' = 2x + \sin x + e^x$
g. $y''' = -\cos x$
h. $y''' = -x^2 + e^x$
i. $y''' = 7e^{4x}$
b. $y' = x\sin x^2, \quad y\left(\sqrt{\dfrac{\pi}{2}}\right) = 1$
c. $y' = \tan x, \quad y(\pi/4) = 3$
d. $y'' = x^4, \quad y(2) = -1, \quad y'(2) = -1$
e. $y'' = xe^{2x}, \quad y(0) = 7, \quad y'(0) = 1$
f. $y'' = -x\sin x, \quad y(0) = 1, \quad y'(0) = -3$
g. $y''' = x^2e^x, \quad y(0) = 1, \quad y'(0) = -2, \quad y''(0) = 3$
h. $y''' = 2 + \sin 2x, \quad y(0) = 1, \quad y'(0) = -6, \quad y''(0) = 3$
i. $y''' = 2x + 1, \quad y(2) = 1, \quad y'(2) = -4, \quad y''(2) = 7$
b. $y = \dfrac{1 + 2\ln x}{x^2} + \dfrac{1}{2}$; $\quad y' = \dfrac{x^2 - 2x^2y + 2}{x^3}, \quad y(1) = \dfrac{3}{2}$
c. $y = \tan\left(\dfrac{x^2}{2}\right)$; $\quad y' = x(1 + y^2), \quad y(0) = 0$
d. $y = \dfrac{2}{x - 2}$; $\quad y' = \dfrac{-y(y + 1)}{x}, \quad y(1) = -2$
b. $y = \dfrac{x^2}{3} + x - 1$; $\quad y'' = \dfrac{x^2 - xy' + y + 1}{x^2}, \quad y(1) = \dfrac{1}{3}, \quad y'(1) = \dfrac{5}{3}$
c. $y = (1 + x^2)^{-1/2}$; $\quad y'' = \dfrac{(x^2 - 1)y - x(x^2 + 1)y'}{(x^2 + 1)^2}, \quad y(0) = 1, \quad y'(0) = 0$
d. $y = \dfrac{x^2}{1 - x}$; $\quad y'' = \dfrac{2(x + y)(xy' - y)}{x^3}, \quad y(1/2) = 1/2, \quad y'(1/2) = 3$
7. Suppose an object is launched from a point 320 feet above the earth with an initial velocity of 128 ft/sec upward, and the
only force acting on it thereafter is gravity. Take $g = 32$ ft/sec$^2$.
is a solution of
$$y' = ay^{(a-1)/a} \tag{B}$$
on $(c, \infty)$.
b. Suppose a < 0 or a > 1 . Can you think of a solution of (B) that isn’t of the form (A)?
9. Verify that
$$y = \begin{cases} e^x - 1, & x \ge 0,\\ 1 - e^{-x}, & x < 0, \end{cases}$$
is a solution of
$$y' = |y| + 1$$
on (−∞, ∞).
10.
(a) Verify that if $c$ is any real number then
$$y = c^2 + cx + 2c + 1 \tag{A}$$
satisfies
$$y' = \frac{-(x + 2) + \sqrt{x^2 + 4x + 4y}}{2} \tag{B}$$
on some open interval. Identify the open interval.
(b) Verify that
$$y_1 = \frac{-x(x + 4)}{4}$$
also satisfies (B) on some open interval, and identify the open interval. (Note that $y_1$ can't be obtained by selecting a value of $c$ in (A).)
In this section we’ll simply assume that Equation 1.3.1 has solutions and discuss a graphical method for approximating them.
In Chapter 3 we discuss numerical methods for obtaining approximate solutions of Equation 1.3.1. Recall that a solution of
Equation 1.3.1 is a function y = y(x) such that
′
y (x) = f (x, y(x))
for all values of x in some interval, and an integral curve is either the graph of a solution or is made up of segments that are
graphs of solutions. Therefore, not being able to solve Equation 1.3.1 is equivalent to not knowing the equations of integral
curves of Equation 1.3.1. However, it is easy to calculate the slopes of these curves. To be specific, the slope of an integral
curve of Equation 1.3.1 through a given point $(x_0, y_0)$ is given by the number $f(x_0, y_0)$. This is the basis of the method of
direction fields.
If f is defined on a set R , we can construct a direction field for Equation 1.3.1 in R by drawing a short line segment through
each point (x, y) in R with slope f (x, y). Of course, as a practical matter, we can’t actually draw line segments through every
point in R ; rather, we must select a finite set of points in R . For example, suppose f is defined on the closed rectangular
region
R : {a ≤ x ≤ b, c ≤ y ≤ d}.
Let
$$x_i = a + i\frac{b - a}{m}, \quad i = 0, 1, \dots, m, \qquad \text{and} \qquad y_j = c + j\frac{d - c}{n}, \quad j = 0, 1, \dots, n,$$
where $m$ and $n$ are positive integers, so that the points $(x_i, y_j)$ form a rectangular grid (Figure 1.3.1). Through each point in the grid we draw a short line segment with slope $f(x_i, y_j)$. The
result is an approximation to a direction field for Equation 1.3.1 in R . If the grid points are sufficiently numerous and close
together, we can draw approximate integral curves of Equation 1.3.1 by drawing curves through points in the grid tangent to
the line segments associated with the points in the grid.
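In practice the grid and slopes are easily generated by software. The following sketch (added for illustration; the right-hand side and grid size are arbitrary choices) draws an approximate direction field with equal-length segments.

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: (x - y) / (1 + x**2)          # an illustrative right-hand side
a, b, c, d, m, n = -2.0, 2.0, -2.0, 2.0, 20, 20
X, Y = np.meshgrid(np.linspace(a, b, m), np.linspace(c, d, n))
S = f(X, Y)                                     # slope f(x_i, y_j) at each grid point
L = np.sqrt(1 + S**2)                           # normalize (1, slope) so segments have equal length
plt.quiver(X, Y, 1 / L, S / L, angles='xy', headwidth=1, headlength=0)
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```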
$$y' = \frac{x^2 - y^2}{1 + x^2 + y^2}, \qquad y' = 1 + xy^2, \qquad \text{and} \qquad y' = \frac{x - y}{1 + x^2},$$
which are all of the form Equation 1.3.1 with $f$ continuous for all $(x, y)$.
Figure 1.3.4: A direction field and integral curves for $y' = \dfrac{x - y}{1 + x^2}$.
if $R$ contains part of the $x$-axis, since $f(x, y) = -x/y$ is undefined when $y = 0$. Similarly, they will not work for the equation
$$y' = \frac{x^2}{1 - x^2 - y^2} \tag{1.3.3}$$
if $R$ contains any part of the unit circle $x^2 + y^2 = 1$, because the right side of Equation 1.3.3 is undefined if $x^2 + y^2 = 1$.
However, Equation 1.3.2 and Equation 1.3.3 can be written as
$$y' = \frac{A(x, y)}{B(x, y)} \tag{1.3.4}$$
where $A$ and $B$ are continuous on any rectangle $R$. Because of this, some differential equation software is based on numerically solving pairs of equations of the form
$$\frac{dx}{dt} = B(x, y), \quad \frac{dy}{dt} = A(x, y), \tag{1.3.5}$$
where $x$ and $y$ are regarded as functions of a parameter $t$. If $x = x(t)$ and $y = y(t)$ satisfy these equations, then
$$y' = \frac{dy}{dx} = \frac{dy}{dt}\bigg/\frac{dx}{dt} = \frac{A(x, y)}{B(x, y)},$$
so $y = y(x)$ satisfies Equation 1.3.4. In this way Equation 1.3.2 and Equation 1.3.3 can be solved numerically from the pairs
$$\frac{dx}{dt} = y, \quad \frac{dy}{dt} = -x$$
and
$$\frac{dx}{dt} = 1 - x^2 - y^2, \quad \frac{dy}{dt} = x^2,$$
respectively. Even if $f$ is continuous and otherwise "nice" throughout $R$, your software may require you to reformulate the equation $y' = f(x, y)$ as
$$\frac{dx}{dt} = 1, \quad \frac{dy}{dt} = f(x, y),$$
which is of the form Equation 1.3.5 with $A(x, y) = f(x, y)$ and $B(x, y) = 1$.
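A minimal sketch of this reformulation (an addition; the right-hand side and initial point are arbitrary) integrates the pair with SciPy and recovers points on an integral curve.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x, y: (x - y) / (1 + x**2)      # illustrative right-hand side of y' = f(x, y)

def rhs(t, u):
    x, y = u
    return [1.0, f(x, y)]                  # dx/dt = 1, dy/dt = f(x, y)

sol = solve_ivp(rhs, (0.0, 4.0), [0.0, 1.0])   # start the curve at (x, y) = (0, 1)
x_vals, y_vals = sol.y
print(x_vals[-1], y_vals[-1])
```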
Figure 1.3.5 shows a direction field and some integral curves for Equation 1.3.2. As we saw in Example 1.2.1 and will verify again in Section 2.2, the integral curves of Equation 1.3.2 are circles centered at the origin.
Figure 1.3.6 shows a direction field and some integral curves for Equation 1.3.3. The integral curves near the top and bottom are solution curves. However, the integral curves near the middle are more complicated. For example, Figure 1.3.7 shows the integral curve through the origin. The vertices of the dashed rectangle are on the circle $x^2 + y^2 = 1$ ($a \approx .846$, $b \approx .533$), where all integral curves of Equation 1.3.3 have infinite slope. There are three solution curves of Equation 1.3.3 on the integral curve in the figure: the segment above the level $y = b$ is the graph of a solution on $(-\infty, a)$, the segment below the level $y = -b$ is the graph of a solution on $(-a, \infty)$, and the segment between these two levels is the graph of a solution on $(-a, a)$.
Figure 1.3.6: A direction field and integral curves for $y' = \dfrac{x^2}{1 - x^2 - y^2}$.
Using Technology
As you study from this book, you’ll often be asked to use computer software and graphics. Exercises with this intent are
marked as (computer or calculator required), (computer and/or graphics required), or (laboratory work requiring software
and/or graphics). Often you may not completely understand how the software does what it does. This is similar to the situation
most people are in when they drive automobiles or watch television, and it doesn’t decrease the value of using modern
technology as an aid to learning. Just be careful that you use the technology as a supplement to thought rather than a substitute
for it.
2. Figure 1.3E.2: A direction field for $y' = \dfrac{2xy^2}{1 + x^2}$
4.
6.
8.
10.
In Exercises 12 - 22 construct a direction field and plot some integral curves in the indicated rectangular region.
12. $y' = y(y - 1)$; $\quad \{-1 \le x \le 2,\ -2 \le y \le 2\}$
13. $y' = 2 - 3xy$; $\quad \{-1 \le x \le 4,\ -4 \le y \le 4\}$
14. $y' = xy(y - 1)$; $\quad \{-2 \le x \le 2,\ -4 \le y \le 4\}$
15. $y' = 3x + y$; $\quad \{-2 \le x \le 2,\ 0 \le y \le 4\}$
16. $y' = y - x^3$; $\quad \{-2 \le x \le 2,\ -2 \le y \le 2\}$
17. $y' = 1 - x^2 - y^2$; $\quad \{-2 \le x \le 2,\ -2 \le y \le 2\}$
18. $y' = x(y^2 - 1)$; $\quad \{-3 \le x \le 3,\ -3 \le y \le 2\}$
19. $y' = \dfrac{x}{y(y^2 - 1)}$; $\quad \{-2 \le x \le 2,\ -2 \le y \le 2\}$
20. $y' = \dfrac{xy^2}{y - 1}$; $\quad \{-2 \le x \le 2,\ -1 \le y \le 4\}$
21. $y' = \dfrac{x(y^2 - 1)}{y}$; $\quad \{-1 \le x \le 1,\ -2 \le y \le 2\}$
22. $y' = -\dfrac{x^2 + y^2}{1 - x^2 - y^2}$; $\quad \{-2 \le x \le 2,\ -2 \le y \le 2\}$
23. By suitably renaming the constants and dependent variables in the equations
$$T' = -k(T - T_m) \tag{A}$$
and
$$G' = -\lambda G + r \tag{B}$$
discussed in Section 1.1 in connection with Newton's law of cooling and absorption of glucose in the body, we can write both as
$$y' = -ay + b, \tag{C}$$
where $a$ is a positive constant and $b$ is an arbitrary constant. Thus, (A) is of the form (C) with $y = T$, $a = k$, and $b = kT_m$, and (B) is of the form (C) with $y = G$, $a = \lambda$, and $b = r$. We'll encounter equations of the form (C) in many other applications in Chapter 2.
Choose a positive a and an arbitrary b . Construct a direction field and plot some integral curves for (C) in a rectangular region
of the form
{0 ≤ t ≤ T , c ≤ y ≤ d} (1.3E.1)
of the ty -plane. Vary T , c , and d until you discover a common property of all the solutions of (C). Repeat this experiment with
various choices of a and b until you can state this property precisely in terms of a and b .
24. By suitably renaming the constants and dependent variables in the equations
$$P' = aP(1 - \alpha P) \tag{A}$$
and
$$I' = rI(S - I) \tag{B}$$
discussed in Section 1.1 in connection with Verhulst's population model and the spread of an epidemic, we can write both in the form
$$y' = ay - by^2, \tag{C}$$
where $a$ and $b$ are positive constants. Thus, (A) is of the form (C) with $y = P$, $a = a$, and $b = a\alpha$, and (B) is of the form (C) with $y = I$, $a = rS$, and $b = r$. In Chapter 2 we'll encounter equations of the form (C) in other applications.
Choose positive numbers a and b . Construct a direction field and plot some integral curves for (C) in a rectangular region of
the form
{0 ≤ t ≤ T , 0 ≤ y ≤ d} (1.3E.2)
of the ty -plane. Vary T and d until you discover a common property of all solutions of (C) with y(0) > 0 . Repeat this
experiment with various choices of a and b until you can state this property precisely in terms of a and b .
Choose positive numbers a and b . Construct a direction field and plot some integral curves for (C) in a rectangular region of
the form
{0 ≤ t ≤ T , c ≤ y ≤ 0} (1.3E.3)
of the ty -plane. Vary a , b , T and c until you discover a common property of all solutions of (C) with y(0) < 0 .
You can verify your results later by doing Exercise 2.2.27.
2.1: Linear First Order Equations
A first order differential equation is said to be linear if it can be written as
$$y' + p(x)y = f(x). \tag{2.1.1}$$
A first order differential equation that cannot be written like this is nonlinear. We say that Equation 2.1.1 is homogeneous if $f \equiv 0$; otherwise it is nonhomogeneous. Since $y \equiv 0$ is obviously a solution of the homogeneous equation
$$y' + p(x)y = 0,$$
we call it the trivial solution. Any other solution is nontrivial.
Example 2.1.1
The first order equations
$$x^2y' + 3y = x^2$$
$$xy' - 8x^2y = \sin x$$
$$xy' + (\ln x)y = 0$$
$$y' = x^2y - 2$$
are not in the form in Equation 2.1.1, but they are linear, since they can be rewritten as
$$y' + \frac{3}{x^2}y = 1$$
$$y' - 8xy = \frac{\sin x}{x}$$
$$y' + \frac{\ln x}{x}y = 0$$
$$y' - x^2y = -2$$
Example 2.1.2
Here are some nonlinear first order equations:
$$xy' + 3y^2 = 2x \quad \text{(because $y$ is squared)}$$
$$yy' = 3 \quad \text{(because of the product $yy'$)}$$
$$y' + xe^y = 12 \quad \text{(because of $e^y$)}$$
where $c$ is an arbitrary constant. We call $c$ a parameter and say that Equation 2.1.3 defines a one-parameter family of functions. For each real number $c$, the function defined by Equation 2.1.3 is a solution of Equation 2.1.2 on $(-\infty, 0)$ and $(0, \infty)$;
that is, if $p$ and $f$ are continuous on some open interval $(a, b)$ then there's a unique formula $y = y(x, c)$ analogous to Equation 2.1.3 that involves $x$ and a parameter $c$ and has these properties:
For each fixed value of $c$, the resulting function of $x$ is a solution of Equation 2.1.4 on $(a, b)$.
If $y$ is a solution of Equation 2.1.4 on $(a, b)$, then $y$ can be obtained from the formula by choosing $c$ appropriately.
We'll call $y = y(x, c)$ the general solution of Equation 2.1.4.
When this has been established, it will follow that an equation of the form
$$P_0(x)y' + P_1(x)y = F(x) \tag{2.1.5}$$
has a general solution on any open interval $(a, b)$ on which $P_0$, $P_1$, and $F$ are all continuous and $P_0$ has no zeros, since in this case we can rewrite Equation 2.1.5 in the form Equation 2.1.4 with $p = P_1/P_0$ and $f = F/P_0$, which are both continuous on $(a, b)$.
To avoid awkward wording in examples and exercises, we will not specify the interval $(a, b)$ when we ask for the general solution of a specific linear first order equation. Let's agree that this always means that we want the general solution on every open interval on which $p$ and $f$ are continuous if the equation is of the form Equation 2.1.4, or on which $P_0$, $P_1$, and $F$ are continuous and $P_0$ has no zeros, if the equation is of the form Equation 2.1.5. We leave it to you to identify these intervals in specific examples and exercises. If $P_0$ has a zero in $(a, b)$, then Equation 2.1.5 may fail to have a general solution on $(a, b)$ in the sense just defined. Since this isn't a major point that needs to be developed in depth, we will not discuss it further; however, see Exercise 2.1.44 for an example.
Example 2.1.3
Let a be a constant.
a. Find the general solution of
$$y' - ay = 0. \tag{2.1.6}$$
Solution a
(a) You already know from calculus that if $c$ is any constant, then $y = ce^{ax}$ satisfies Equation 2.1.6. However, let's pretend you've forgotten this, and use this problem to illustrate a general method for solving a homogeneous linear first order equation.
We know that Equation 2.1.6 has the trivial solution $y \equiv 0$. Now suppose $y$ is a nontrivial solution of Equation 2.1.6. Then, since a differentiable function must be continuous, there must be some open interval $I$ on which $y$ has no zeros. We rewrite Equation 2.1.6 as
$$\frac{y'}{y} = a$$
for $x$ in $I$. Integrating yields $\ln|y| = ax + k$, so $|y| = e^ke^{ax}$; hence
$$y = ce^{ax}, \tag{2.1.7}$$
where
$$c = \begin{cases} e^k & \text{if } y > 0,\\ -e^k & \text{if } y < 0. \end{cases}$$
This shows that every nontrivial solution of Equation 2.1.6 is of the form $y = ce^{ax}$ for some nonzero constant $c$. Since setting $c = 0$ yields the trivial solution, all solutions of Equation 2.1.6 are of the form Equation 2.1.7. Conversely, Equation 2.1.7 is a solution of Equation 2.1.6 for every choice of $c$, since differentiating Equation 2.1.7 yields
$$y' = ace^{ax} = ay.$$
Solution b
Imposing the initial condition $y(x_0) = y_0$ yields $y_0 = ce^{ax_0}$, so $c = y_0e^{-ax_0}$ and
$$y = y_0e^{-ax_0}e^{ax} = y_0e^{a(x - x_0)}.$$
Figure 2.1.1 shows the graphs of this function with $x_0 = 0$, $y_0 = 1$, and various values of $a$.
Example 2.1.4
a. Find the general solution of
$$xy' + y = 0. \tag{2.1.8}$$
Solution a
We rewrite Equation 2.1.8 as
$$y' = -\frac{y}{x}, \tag{2.1.10}$$
where $x$ is restricted to either $(-\infty, 0)$ or $(0, \infty)$. If $y$ is a nontrivial solution of Equation 2.1.10, there must be some open interval $I$ on which $y$ has no zeros. We can rewrite Equation 2.1.10 as
$$\frac{y'}{y} = -\frac{1}{x}$$
for $x$ in $I$. Integrating yields $\ln|y| = -\ln|x| + k$, so $|y| = e^k/|x|$. Since a function that satisfies the last equation can't change sign on either $(-\infty, 0)$ or $(0, \infty)$, we can rewrite this result more simply as
$$y = \frac{c}{x} \tag{2.1.11}$$
where
$$c = \begin{cases} e^k & \text{if } y > 0,\\ -e^k & \text{if } y < 0. \end{cases}$$
We have now shown that every solution of Equation 2.1.10 is given by Equation 2.1.11 for some choice of c . (Even
though we assumed that y was nontrivial to derive Equation 2.1.11, we can get the trivial solution by setting c = 0 in
Equation 2.1.11.) Conversely, any function of the form Equation 2.1.11 is a solution of Equation 2.1.10, since
differentiating Equation 2.1.11 yields
c
′
y =− ,
2
x
and substituting this and Equation 2.1.11 into Equation 2.1.10 yields
′
1 c 1 c
y + y = − +
x x2 x x
c c
= − + = 0.
2 2
x x
Figure 2.1.2 shows the graphs of some solutions corresponding to various values of $c$.
Solution b
Imposing the initial condition $y(1) = 3$ in Equation 2.1.11 yields $c = 3$. Therefore the solution of Equation 2.1.9 is
$$y = \frac{3}{x}.$$
The results in Examples 2.1.3a and 2.1.4b are special cases of the next theorem.
Theorem 2.1.2
If $p$ is continuous on $(a, b)$, then the general solution of the homogeneous equation
$$y' + p(x)y = 0 \tag{2.1.12}$$
on $(a, b)$ is
$$y = ce^{-P(x)}, \tag{2.1.13}$$
where
$$P(x) = \int p(x)\,dx \tag{2.1.14}$$
is any antiderivative of $p$ on $(a, b)$; that is, $P'(x) = p(x)$ for $a < x < b$.
Proof
If $y = ce^{-P(x)}$, differentiating $y$ and using Equation 2.1.14 shows that
$$y' = -P'(x)ce^{-P(x)} = -p(x)ce^{-P(x)} = -p(x)y,$$
so $y' + p(x)y = 0$; that is, $y$ is a solution of Equation 2.1.12, for any choice of $c$.
Now we'll show that any solution of Equation 2.1.12 can be written as $y = ce^{-P(x)}$ for some constant $c$. The trivial solution can be written this way, with $c = 0$. Now suppose $y$ is a nontrivial solution. Then there's an open subinterval $I$ of $(a, b)$ on which $y$ has no zeros. We can rewrite Equation 2.1.12 as
$$\frac{y'}{y} = -p(x) \tag{2.1.15}$$
for $x$ in $I$. Integrating this yields
$$\ln|y| = -P(x) + k,$$
which implies that $|y| = e^ke^{-P(x)}$. Since $P$ is defined for all $x$ in $(a, b)$ and an exponential can never equal zero, we can take $I = (a, b)$, so $y$ has no zeros on $(a, b)$, and we can rewrite the last equation as $y = ce^{-P(x)}$, where
$$c = \begin{cases} e^k & \text{if } y > 0 \text{ on } (a, b),\\ -e^k & \text{if } y < 0 \text{ on } (a, b). \end{cases}$$
Rewriting a first order differential equation so that one side depends only on $y$ and $y'$ and the other side depends only on $x$ is called separation of variables. We did this in Examples 2.1.3 and 2.1.4, and in rewriting Equation 2.1.12 as Equation 2.1.15. We will apply this method to nonlinear equations in Section 2.2.
We look for solutions of Equation 2.1.16 in the form $y = uy_1$, where $y_1$ is a nontrivial solution of the complementary equation $y' + p(x)y = 0$ and $u$ is to be determined. This method of using a solution of the complementary equation to obtain solutions of a nonhomogeneous equation is a special case of a method called variation of parameters, which you'll encounter several times in this book. (Obviously, $u$ can't be constant, since if it were, the left side of Equation 2.1.16 would be zero. Recognizing this, the early users of this method viewed $u$ as a "parameter" that varies; hence, the name "variation of parameters.")
If $y = uy_1$, then $y' = u'y_1 + uy_1'$. Substituting this into Equation 2.1.16 yields
$$u'y_1 + u(y_1' + p(x)y_1) = f(x),$$
which reduces to
$$u'y_1 = f(x), \tag{2.1.17}$$
since $y_1$ is a solution of the complementary equation; that is,
$$y_1' + p(x)y_1 = 0.$$
In the proof of Theorem 2.1.2 we saw that $y_1$ has no zeros on an interval where $p$ is continuous. Therefore we can divide Equation 2.1.17 through by $y_1$ to obtain
$$u' = f(x)/y_1(x).$$
We can integrate this (introducing a constant of integration), and multiply the result by $y_1$ to get the general solution of Equation 2.1.16. Before turning to the formal proof of this claim, let's consider some examples.
Example 2.1.5
Find the general solution of
$$y' + 2y = x^3e^{-2x}. \tag{2.1.18}$$
By applying (a) of Example 2.1.3 with $a = -2$, we see that $y_1 = e^{-2x}$ is a solution of the complementary equation $y' + 2y = 0$. Therefore we seek solutions of Equation 2.1.18 in the form $y = ue^{-2x}$, so that
$$u'e^{-2x} = x^3e^{-2x} \quad \text{and} \quad u' = x^3.$$
Therefore
$$u = \frac{x^4}{4} + c,$$
and
$$y = ue^{-2x} = e^{-2x}\left(\frac{x^4}{4} + c\right)$$
is the general solution of Equation 2.1.18.
Example 2.1.6
a. Find the general solution of
$$y' + (\cot x)y = x\csc x. \tag{2.1.20}$$
b. Solve the initial value problem
$$y' + (\cot x)y = x\csc x, \quad y(\pi/2) = 1. \tag{2.1.21}$$
Solution a
Here $p(x) = \cot x$ and $f(x) = x\csc x$ are both continuous except at the points $x = r\pi$, where $r$ is an integer. Therefore we seek solutions of Equation 2.1.20 on the intervals $(r\pi, (r + 1)\pi)$. We need a nontrivial solution $y_1$ of the complementary equation; thus, $y_1' + (\cot x)y_1 = 0$, which we rewrite as
$$\frac{y_1'}{y_1} = -\cot x = -\frac{\cos x}{\sin x}. \tag{2.1.22}$$
Integrating this yields
$$\ln|y_1| = -\ln|\sin x|,$$
where we take the constant of integration to be zero since we need only one function that satisfies Equation 2.1.22. Clearly $y_1 = 1/\sin x$ is a suitable choice. Therefore we seek solutions of Equation 2.1.20 in the form
$$y = \frac{u}{\sin x},$$
so that
$$y' = \frac{u'}{\sin x} - \frac{u\cos x}{\sin^2 x} \tag{2.1.23}$$
and
$$y' + (\cot x)y = \frac{u'}{\sin x} - \frac{u\cos x}{\sin^2 x} + \frac{u\cos x}{\sin^2 x} = \frac{u'}{\sin x}. \tag{2.1.24}$$
Therefore $y$ is a solution of Equation 2.1.20 if and only if $u'/\sin x = x\csc x = x/\sin x$; that is, if $u' = x$. Integrating yields $u = \dfrac{x^2}{2} + c$, so
$$y = \frac{x^2}{2\sin x} + \frac{c}{\sin x} \tag{2.1.25}$$
is the general solution of Equation 2.1.20 on every interval $(r\pi, (r + 1)\pi)$ ($r$ = integer).
Solution b
Imposing the initial condition $y(\pi/2) = 1$ in Equation 2.1.25 yields
$$1 = \frac{\pi^2}{8} + c \quad \text{or} \quad c = 1 - \frac{\pi^2}{8}.$$
Thus,
$$y = \frac{x^2}{2\sin x} + \frac{1 - \pi^2/8}{\sin x}$$
is a solution of Equation 2.1.20. The interval of validity of this solution is $(0, \pi)$; Figure 2.1.4 shows its graph.
REMARK: It wasn't necessary to do the computations 2.1.23 and 2.1.24 in Example 2.1.6, since we showed in the discussion preceding Example 2.1.5 that if $y = uy_1$ where $y_1' + p(x)y_1 = 0$, then $y' + p(x)y = u'y_1$. We did these computations so you would see this happen in this specific example. We recommend that you include these "unnecessary" computations in doing exercises, until you're confident that you really understand the method. After that, omit them.
We summarize the method of variation of parameters for solving
$$y' + p(x)y = f(x) \tag{2.1.26}$$
as follows:
a. Find a function $y_1$ such that
$$\frac{y_1'}{y_1} = -p(x).$$
For convenience, take the constant of integration to be zero.
b. Write
$$y = uy_1 \tag{2.1.27}$$
to remind yourself of what you're doing.
c. Write $u'y_1 = f$ and solve for $u'$; thus, $u' = f/y_1$.
d. Integrate $u'$ to obtain $u$, with an arbitrary constant of integration.
e. Substitute $u$ into Equation 2.1.27 to obtain $y$.
To solve an equation written as $P_0(x)y' + P_1(x)y = F(x)$, we recommend that you divide through by $P_0(x)$ to obtain an equation of the form Equation 2.1.26 and then follow this procedure.
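The steps above translate directly into a computer algebra computation. The sketch below (not part of the original text) carries them out with SymPy for the equation of Example 2.1.5, $y' + 2y = x^3e^{-2x}$, and checks the result.

```python
import sympy as sp

x, c = sp.symbols('x c')
p = 2                                   # p(x) = 2
f = x**3 * sp.exp(-2 * x)               # f(x) = x^3 e^(-2x)

y1 = sp.exp(-sp.integrate(p, x))        # step a: y1 = e^(-2x), constant of integration zero
u = sp.integrate(f / y1, x) + c         # steps c-d: u' = f/y1, then integrate
y = sp.simplify(u * y1)                 # step e: y = u*y1
print(y)                                # (c + x**4/4)*exp(-2*x)
print(sp.simplify(sp.diff(y, x) + p * y - f))   # residual is 0, so y solves the equation
```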
Example 2.1.7
a. Find the general solution of
$$y' - 2xy = 1.$$
b. Solve the initial value problem
$$y' - 2xy = 1, \quad y(0) = y_0. \tag{2.1.28}$$
Solution a
To apply variation of parameters, we need a nontrivial solution $y_1$ of the complementary equation; thus, $y_1' - 2xy_1 = 0$, which we rewrite as
$$\frac{y_1'}{y_1} = 2x.$$
Integrating this and taking the constant of integration to be zero yields $\ln|y_1| = x^2$, so we can take $y_1 = e^{x^2}$. Therefore we seek solutions of the form $y = ue^{x^2}$, so that
$$u'e^{x^2} = 1, \quad \text{so} \quad u' = e^{-x^2}.$$
Therefore
$$u = c + \int e^{-x^2}\,dx,$$
but we can't simplify the integral on the right because there's no elementary function with derivative equal to $e^{-x^2}$. Therefore the best available form for the general solution of Equation 2.1.28 is
$$y = ue^{x^2} = e^{x^2}\left(c + \int e^{-x^2}\,dx\right). \tag{2.1.29}$$
Solution b
Since the initial condition in Equation 2.1.28 is imposed at $x_0 = 0$, it is convenient to rewrite Equation 2.1.29 as
$$y = e^{x^2}\left(c + \int_0^x e^{-t^2}\,dt\right), \quad \text{since} \quad \int_0^0 e^{-t^2}\,dt = 0.$$
Setting $x = 0$ and $y = y_0$ here shows that $c = y_0$. Therefore the solution of the initial value problem is
$$y = e^{x^2}\left(y_0 + \int_0^x e^{-t^2}\,dt\right). \tag{2.1.30}$$
For a given value of $y_0$ and each fixed $x$, the integral on the right can be evaluated by numerical methods. An alternative procedure is to apply the numerical integration procedures discussed in Chapter 3 directly to the initial value problem Equation 2.1.28. Figure 2.1.5 shows graphs of Equation 2.1.30 for several values of $y_0$.
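Since $\int_0^x e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}\,\mathrm{erf}(x)$, the solution 2.1.30 is also easy to evaluate numerically. The sketch below is an added illustration with an assumed value of $y_0$.

```python
import math

def y(x, y0):
    # Equation 2.1.30 with the integral written via the error function
    integral = math.sqrt(math.pi) / 2 * math.erf(x)
    return math.exp(x**2) * (y0 + integral)

for x0 in (0.0, 0.5, 1.0, 1.5):
    print(x0, y(x0, y0=1.0))
```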
a. The general solution of Equation 2.1.31 on $(a, b)$ is
$$y = y_1(x)\left(c + \int \frac{f(x)}{y_1(x)}\,dx\right), \tag{2.1.32}$$
where $y_1$ is any nontrivial solution of the complementary equation $y' + p(x)y = 0$ on $(a, b)$ and $c$ is an arbitrary constant.
b. If $x_0$ is an arbitrary point in $(a, b)$ and $y_0$ is an arbitrary real number, then the initial value problem
$$y' + p(x)y = f(x), \quad y(x_0) = y_0$$
has a unique solution on $(a, b)$.
Proof
(a) To show that Equation 2.1.32 is the general solution of Equation 2.1.31 on $(a, b)$, we must prove that:
i. If $c$ is any constant, the function $y$ in Equation 2.1.32 is a solution of Equation 2.1.31 on $(a, b)$.
ii. If $y$ is a solution of Equation 2.1.31 on $(a, b)$ then $y$ is of the form Equation 2.1.32 for some constant $c$.
To prove (i), we first observe that any function of the form Equation 2.1.32 is defined on $(a, b)$, since $p$ and $f$ are continuous on $(a, b)$. Differentiating Equation 2.1.32 yields
$$y' = y_1'(x)\left(c + \int \frac{f(x)}{y_1(x)}\,dx\right) + f(x).$$
Since $y_1' = -p(x)y_1$, this and Equation 2.1.32 imply that
$$y' = -p(x)y_1(x)\left(c + \int \frac{f(x)}{y_1(x)}\,dx\right) + f(x) = -p(x)y(x) + f(x),$$
which implies that $y$ is a solution of Equation 2.1.31.
To prove (ii), suppose $y$ is a solution of Equation 2.1.31 on $(a, b)$. From the proof of Theorem 2.1.2, $y_1$ has no zeros on $(a, b)$, so the function $u = y/y_1$ is defined on $(a, b)$. Moreover, since
$$y' = -py + f \quad \text{and} \quad y_1' = -py_1,$$
$$u' = \frac{y_1y' - y_1'y}{y_1^2} = \frac{y_1(-py + f) - (-py_1)y}{y_1^2} = \frac{f}{y_1}.$$
Integrating $u' = f/y_1$ yields
$$u = c + \int \frac{f(x)}{y_1(x)}\,dx,$$
which implies Equation 2.1.32, since $y = uy_1$.
(b) We've proved (a), where $\int f(x)/y_1(x)\,dx$ in Equation 2.1.32 is an arbitrary antiderivative of $f/y_1$. Now it is convenient to choose the antiderivative that equals zero when $x = x_0$, and write the general solution of Equation 2.1.31 as
$$y = y_1(x)\left(c + \int_{x_0}^x \frac{f(t)}{y_1(t)}\,dt\right).$$
Since
$$y(x_0) = y_1(x_0)\left(c + \int_{x_0}^{x_0} \frac{f(t)}{y_1(t)}\,dt\right) = cy_1(x_0),$$
we see that $y(x_0) = y_0$ if and only if $c = y_0/y_1(x_0)$.
3. $xy' + (\ln x)y = 0$
4. $xy' + 3y = 0$
5. $x^2y' + y = 0$
Q2.1.2
In Exercises 2.1.6-2.1.11 solve the initial value problem.
6. $y' + \left(\dfrac{1 + x}{x}\right)y = 0, \quad y(1) = 1$
7. $xy' + \left(1 + \dfrac{1}{\ln x}\right)y = 0, \quad y(e) = 1$
8. $xy' + (1 + x\cot x)y = 0, \quad y\left(\dfrac{\pi}{2}\right) = 2$
9. $y' - \left(\dfrac{2x}{1 + x^2}\right)y = 0, \quad y(0) = 2$
10. $y' + \dfrac{k}{x}y = 0, \quad y(1) = 3 \quad (k = \text{constant})$
11. $y' + (\tan kx)y = 0, \quad y(0) = 2 \quad (k = \text{constant})$
Q2.1.3
In Exercises 2.1.12-2.1.15 find the general solution. Also, plot a direction field and some integral curves on the rectangular
region {−2 ≤ x ≤ 2, −2 ≤ y ≤ 2} .
12. $y' + 3y = 1$
13. $y' + \left(\dfrac{1}{x} - 1\right)y = -\dfrac{2}{x}$
14. $y' + 2xy = xe^{-x^2}$
15. $y' + \dfrac{2x}{1 + x^2}y = \dfrac{e^{-x}}{1 + x^2}$
Q2.1.4
In Exercises 2.1.16-2.1.24 find the general solution.
16. $y' + \dfrac{1}{x}y = \dfrac{7}{x^2} + 3$
17. $y' + \dfrac{4}{x - 1}y = \dfrac{1}{(x - 1)^5} + \dfrac{\sin x}{(x - 1)^4}$
18. $xy' + (1 + 2x^2)y = x^3e^{-x^2}$
19. $xy' + 2y = \dfrac{2}{x^2} + 1$
20. $y' + (\tan x)y = \cos x$
21. $(1 + x)y' + 2y = \dfrac{\sin x}{1 + x}$
23. $y' + (2\sin x\cos x)y = e^{-\sin^2 x}$
24. $x^2y' + 3xy = e^x$
26. $(1 + x^2)y' + 4xy = \dfrac{2}{(1 + x^2)^2}, \quad y(0) = 1$
27. $xy' + 3y = \dfrac{2}{x(1 + x^2)}, \quad y(-1) = 0$
28. $y' + (\cot x)y = \cos x, \quad y\left(\dfrac{\pi}{2}\right) = 1$
29. $y' + \dfrac{1}{x}y = \dfrac{2}{x^2} + 1, \quad y(-1) = 0$
Q2.1.6
In Exercises 2.1.30-2.1.37, solve the initial value problem.
30. $(x - 1)y' + 3y = \dfrac{1}{(x - 1)^3} + \dfrac{\sin x}{(x - 1)^2}, \quad y(0) = 1$
31. $xy' + 2y = 8x^2, \quad y(1) = 3$
32. $xy' - 2y = -x^2, \quad y(1) = 1$
33. $y' + 2xy = x, \quad y(0) = 3$
34. $(x - 1)y' + 3y = \dfrac{1 + (x - 1)\sec^2 x}{(x - 1)^3}, \quad y(0) = -1$
35. $(x + 2)y' + 4y = \dfrac{1 + 2x^2}{x(x + 2)^3}, \quad y(-1) = 2$
36. $(x^2 - 1)y' - 2xy = x(x^2 - 1), \quad y(0) = 4$
37. $(x^2 - 5)y' - 2xy = -2x(x^2 - 5), \quad y(2) = 7$
Q2.1.7
In Exercises 2.1.38-2.1.42 solve the initial value problem and leave the answer in a form involving a definite integral. (You
can solve these problems numerically by methods discussed in Chapter 3.)
38. $y' + 2xy = x^2, \quad y(0) = 3$
39. $y' + \dfrac{1}{x}y = \dfrac{\sin x}{x^2}, \quad y(1) = 2$
40. $y' + y = \dfrac{e^{-x}\tan x}{x}, \quad y(1) = 0$
41. $y' + \dfrac{2x}{1 + x^2}y = \dfrac{e^x}{(1 + x^2)^2}, \quad y(0) = 1$
42. $xy' + (x + 1)y = e^{x^2}, \quad y(1) = 2$
43. Experiments indicate that glucose is absorbed by the body at a rate proportional to the amount of glucose present in the
bloodstream. Let λ denote the (positive) constant of proportionality. Now suppose glucose is injected into a patient’s
bloodstream at a constant rate of r units per unit of time. Let G = G(t) be the number of units in the patient’s bloodstream at
time t > 0 . Then
$$G' = -\lambda G + r, \tag{2.1E.1}$$
where the first term on the right is due to the absorption of the glucose by the patient's body and the second term is due to the injection. Determine $G$ for $t > 0$, given that $G(0) = G_0$. Also, find $\lim_{t\to\infty} G(t)$.
44.
(a) Plot a direction field and some integral curves for
$$xy' - 2y = -1 \tag{A}$$
on the rectangular region $\{-1 \le x \le 1,\ -.5 \le y \le 1.5\}$. What do all the integral curves have in common?
(d) Conclude from (c) that all solutions of (A) on $(-\infty, \infty)$ are solutions of the initial value problem
$$xy' - 2y = -1, \quad y(0) = \frac{1}{2}. \tag{2.1E.4}$$
(e) Use (b) to show that if $x_0 \ne 0$ and $y_0$ is arbitrary, then the initial value problem
$$xy' - 2y = -1, \quad y(x_0) = y_0 \tag{2.1E.5}$$
has infinitely many solutions on $(-\infty, \infty)$. Explain why this doesn't contradict Theorem 2.1.1.
(b) Suppose $(a, b) = (a, \infty)$, $\alpha > 0$ and $\lim_{x\to\infty} f(x) = L$. Show that if $y$ is the solution of (A), then $\lim_{x\to\infty} y(x) = L/\alpha$.
46. Assume that all functions in this exercise are defined on a common interval (a, b).
(a) Prove: If $y_1$ and $y_2$ are solutions of
$$y' + p(x)y = f_1(x) \tag{2.1E.6}$$
and
$$y' + p(x)y = f_2(x) \tag{2.1E.7}$$
$$y' + p(x)y = f(x), \quad \text{(A)} \tag{2.1E.9}$$
(c) Use (a) to show that if $y_1$ is a solution of (A) and $y_2$ is a solution of (B), then $y_1 + y_2$ is a solution of (A).
47. Some nonlinear equations can be transformed into linear equations by changing the dependent variable. Show that if
$$g'(y)y' + p(x)g(y) = f(x) \tag{2.1E.11}$$
where $y$ is a function of $x$ and $g$ is a function of $y$, then the new dependent variable $z = g(y)$ satisfies the linear equation
$$z' + p(x)z = f(x). \tag{2.1E.12}$$
(b) $e^{y^2}\left(2yy' + \dfrac{2}{x}\right) = \dfrac{1}{x^2}$
(c) $\dfrac{xy'}{y} + 2\ln y = 4x^2$
(d) $\dfrac{y'}{(1 + y)^2} - \dfrac{1}{x(1 + y)} = -\dfrac{3}{x^2}$
49. We've shown that if $p$ and $f$ are continuous on $(a, b)$ then every solution of
$$y' + p(x)y = f(x) \tag{A}$$
on $(a, b)$ can be written as $y = uy_1$, where $y_1$ is a nontrivial solution of the complementary equation for (A) and $u' = f/y_1$.
Now suppose $f$, $f'$, ..., $f^{(m)}$ and $p$, $p'$, ..., $p^{(m-1)}$ are continuous on $(a, b)$, where $m$ is a positive integer, and define
$$f_0 = f,$$
$$f_j = f_{j-1}' + pf_{j-1}, \quad 1 \le j \le m.$$
Show that
$$u^{(j+1)} = \frac{f_j}{y_1}, \quad 0 \le j \le m. \tag{2.1E.13}$$
A first order differential equation is separable if it can be written as
$$h(y)y' = g(x), \tag{2.2.1}$$
where the left side is a product of $y'$ and a function of $y$ and the right side is a function of $x$. Rewriting a separable differential equation in this form is called separation of variables. In Section 2.1, we used separation of variables to solve homogeneous linear equations. In this section we'll apply this method to nonlinear equations.
To see how to solve Equation 2.2.1, let's first assume that $y$ is a solution. Let $G(x)$ and $H(y)$ be antiderivatives of $g(x)$ and $h(y)$; that is,
$$H'(y) = h(y) \quad \text{and} \quad G'(x) = g(x). \tag{2.2.2}$$
Then Equation 2.2.1 can be written as $\dfrac{d}{dx}H(y(x)) = \dfrac{d}{dx}G(x)$. Integrating both sides of this equation and combining the constants of integration yields
$$H(y(x)) = G(x) + c. \tag{2.2.3}$$
Although we derived this equation on the assumption that $y$ is a solution of Equation 2.2.1, we can now view it differently: Any differentiable function $y$ that satisfies Equation 2.2.3 for some constant $c$ is a solution of Equation 2.2.1. To see this, we differentiate both sides of Equation 2.2.3, using the chain rule on the left, to obtain
$$H'(y(x))y'(x) = G'(x),$$
which is equivalent to
$$h(y(x))y'(x) = g(x)$$
Example 2.2.1
Solve the equation
$$y' = x(1 + y^2).$$
Solution
Separating variables yields
$$\frac{y'}{1 + y^2} = x.$$
Integrating yields
$$\tan^{-1} y = \frac{x^2}{2} + c.$$
Therefore
$$y = \tan\left(\frac{x^2}{2} + c\right).$$
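For comparison (an addition to the text), SymPy's general-purpose solver returns the same one-parameter family.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
eq = sp.Eq(y(x).diff(x), x * (1 + y(x)**2))
print(sp.dsolve(eq))   # y(x) = tan(C1 + x**2/2), matching the result above
```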
Example 2.2.2
a. Solve the equation
$$y' = -\frac{x}{y}. \tag{2.2.4}$$
b. Solve the initial value problem
$$y' = -\frac{x}{y}, \quad y(1) = 1. \tag{2.2.5}$$
c. Solve the initial value problem
$$y' = -\frac{x}{y}, \quad y(1) = -2. \tag{2.2.6}$$
Solution a
Separating variables in Equation 2.2.4 yields
$$yy' = -x.$$
Integrating yields
$$\frac{y^2}{2} = -\frac{x^2}{2} + c, \quad \text{or equivalently} \quad x^2 + y^2 = 2c.$$
The last equation shows that $c$ must be positive if $y$ is to be a solution of Equation 2.2.4 on an open interval. Therefore we let $2c = a^2$ (with $a > 0$) and rewrite the last equation as
$$x^2 + y^2 = a^2. \tag{2.2.7}$$
This equation defines two differentiable functions of $x$ on $(-a, a)$:
$$y = \sqrt{a^2 - x^2}, \quad -a < x < a, \tag{2.2.8}$$
and
$$y = -\sqrt{a^2 - x^2}, \quad -a < x < a. \tag{2.2.9}$$
The solution curves defined by Equation 2.2.8 are semicircles above the $x$-axis and those defined by Equation 2.2.9 are semicircles below the $x$-axis (Figure 2.2.1).
Solution b
The solution of Equation 2.2.5 is positive when $x = 1$; hence, it is of the form Equation 2.2.8. Substituting $x = 1$ and $y = 1$ into Equation 2.2.7 to satisfy the initial condition yields $a^2 = 2$; hence, the solution of Equation 2.2.5 is
$$y = \sqrt{2 - x^2}, \quad -\sqrt{2} < x < \sqrt{2}.$$
Solution c
The solution of Equation 2.2.6 is negative when $x = 1$ and is therefore of the form Equation 2.2.9. Substituting $x = 1$ and $y = -2$ into Equation 2.2.7 to satisfy the initial condition yields $a^2 = 5$. Hence, the solution of Equation 2.2.6 is
$$y = -\sqrt{5 - x^2}, \quad -\sqrt{5} < x < \sqrt{5}.$$
Then there's a function $y = y(x)$ defined on some open interval $(a_1, b_1)$, where $a \le a_1 < x_0 < b_1 \le b$, such that $y(x_0) = y_0$ and
It's convenient to say that Equation 2.2.11 with $c$ arbitrary is an implicit solution of $h(y)y' = g(x)$. Curves defined by Equation 2.2.11 are integral curves of $h(y)y' = g(x)$. If $c$ satisfies Equation 2.2.10, we'll say that Equation 2.2.11 is an implicit solution of the initial value problem Equation 2.2.12. However, keep these points in mind:
For some choices of $c$ there may not be any differentiable functions $y$ that satisfy Equation 2.2.11.
The function $y$ in Equation 2.2.11 (not Equation 2.2.11 itself) is a solution of $h(y)y' = g(x)$.
Example 2.2.3
a. Find implicit solutions of
$$y' = \frac{2x + 1}{5y^4 + 1}. \tag{2.2.13}$$
b. Find an implicit solution of
$$y' = \frac{2x + 1}{5y^4 + 1}, \quad y(2) = 1. \tag{2.2.14}$$
Solution a
Separating variables yields
$$(5y^4 + 1)y' = 2x + 1.$$
Integrating yields the implicit solution
$$y^5 + y = x^2 + x + c \tag{2.2.15}$$
of Equation 2.2.13.
Solution b
Imposing the initial condition $y(2) = 1$ in Equation 2.2.15 yields $1 + 1 = 4 + 2 + c$, so $c = -4$. Therefore
$$y^5 + y = x^2 + x - 4$$
is an implicit solution of the initial value problem Equation 2.2.14. Although more than one differentiable function $y = y(x)$ satisfies Equation 2.2.13 near $x = 2$, it can be shown that there is only one such function that satisfies the initial condition $y(2) = 1$. Figure 2.2.2 shows a direction field and some integral curves for Equation 2.2.13.
However, the division by p(y) is not legitimate if p(y) = 0 for some values of y . The next two examples show how to deal
with this problem.
Example 2.2.4
Find all solutions of
$$y' = 2xy^2. \tag{2.2.16}$$
Solution
Here we must divide by $p(y) = y^2$ to separate variables. This isn't legitimate if $y$ is a solution of Equation 2.2.16 that equals zero for some value of $x$. One such solution can be found by inspection: $y \equiv 0$. Now suppose $y$ is a solution of Equation 2.2.16 that isn't identically zero. Since $y$ is continuous there must be an interval on which $y$ is never zero. Since division by $y^2$ is legitimate for $x$ in this interval, we can separate variables in Equation 2.2.16 to obtain
$$\frac{y'}{y^2} = 2x.$$
Integrating this yields
$$-\frac{1}{y} = x^2 + c,$$
which is equivalent to
$$y = -\frac{1}{x^2 + c}. \tag{2.2.17}$$
We've now shown that if $y$ is a solution of Equation 2.2.16 that is not identically zero, then $y$ must be of the form Equation 2.2.17. By substituting Equation 2.2.17 into Equation 2.2.16, you can verify that Equation 2.2.17 is a solution of Equation 2.2.16. Thus, solutions of Equation 2.2.16 are $y \equiv 0$ and the functions of the form Equation 2.2.17. Note that the solution $y \equiv 0$ isn't of the form Equation 2.2.17 for any value of $c$.
Figure 2.2.3 shows a direction field and some integral curves for Equation 2.2.16.
Example 2.2.5
Find all solutions of
$$y' = \frac{x(1 - y^2)}{2}. \tag{2.2.18}$$
Here we must divide by $p(y) = 1 - y^2$ to separate variables. This isn't legitimate if $y$ is a solution of Equation 2.2.18 that equals $\pm 1$ for some value of $x$. Two such solutions can be found by inspection: $y \equiv 1$ and $y \equiv -1$. Now suppose $y$ is a solution of Equation 2.2.18 such that $1 - y^2$ isn't identically zero. Since $1 - y^2$ is continuous there must be an interval on which $1 - y^2$ is never zero. Since division by $1 - y^2$ is legitimate for $x$ in this interval, we can separate variables in Equation 2.2.18 to obtain
$$\frac{2y'}{(y - 1)(y + 1)} = -x.$$
Integrating (using the partial fraction expansion $\frac{2}{(y - 1)(y + 1)} = \frac{1}{y - 1} - \frac{1}{y + 1}$) yields
$$\ln\left|\frac{y - 1}{y + 1}\right| = -\frac{x^2}{2} + k;$$
hence,
$$\left|\frac{y - 1}{y + 1}\right| = e^ke^{-x^2/2}.$$
Since $y(x) \ne \pm 1$ for $x$ on the interval under discussion, the quantity $(y - 1)/(y + 1)$ cannot change sign in this interval. Therefore we can rewrite the last equation as
$$\frac{y - 1}{y + 1} = ce^{-x^2/2},$$
where $c = \pm e^k$, depending upon the sign of $(y - 1)/(y + 1)$ on the interval. Solving for $y$ yields
$$y = \frac{1 + ce^{-x^2/2}}{1 - ce^{-x^2/2}}. \tag{2.2.19}$$
We've now shown that if $y$ is a solution of Equation 2.2.18 that is not identically equal to $\pm 1$, then $y$ must be as in Equation 2.2.19. By substituting Equation 2.2.19 into Equation 2.2.18 you can verify that Equation 2.2.19 is a solution of Equation 2.2.18. Thus, the solutions of Equation 2.2.18 are $y \equiv 1$, $y \equiv -1$ and the functions of the form Equation 2.2.19. Note that the constant solution $y \equiv 1$ can be obtained from this formula by taking $c = 0$; however, the other constant solution, $y \equiv -1$, cannot be obtained in this way.
Figure 2.2.4: A direction field and integral curves for $y' = \dfrac{x(1 - y^2)}{2}$.
on $(a, b)$ can be obtained by choosing a value for the constant $c$ in the general solution, and if $x_0$ is any point in $(a, b)$ and $y_0$
$$\frac{dy}{dx} = -\frac{x}{y}, \quad y(x_0) = y_0$$
is valid on $(-a, a)$, where $a = \sqrt{x_0^2 + y_0^2}$.
Example 2.2.6
Solve the initial value problem
$$y' = 2xy^2, \quad y(0) = y_0.$$
Solution
If $y_0 \ne 0$, we know from Example 2.2.4 that the solution must be of the form
$$y = -\frac{1}{x^2 + c}. \tag{2.2.20}$$
Imposing the initial condition shows that $c = -1/y_0$. Substituting this into Equation 2.2.20 and rearranging terms yields the solution
$$y = \frac{y_0}{1 - y_0x^2}.$$
This is also the solution if $y_0 = 0$. If $y_0 < 0$, the denominator isn't zero for any value of $x$, so the solution is valid on $(-\infty, \infty)$.
1. $y' = \dfrac{3x^2 + 2x + 1}{y - 2}$
3. $xy' + y^2 + y = 0$
4. $y'\ln|y| + x^2y = 0$
5. $(3y^3 + 3y\cos y + 1)y' + \dfrac{(2x + 1)y}{1 + x^2} = 0$
6. $x^2yy' = (y^2 - 1)^{3/2}$
Q2.2.2
In Exercises 2.2.7-2.2.10 find all solutions. Also, plot a direction field and some integral curves on the indicated rectangular
region.
7. $y' = x^2(1 + y^2)$; $\quad \{-1 \le x \le 1,\ -1 \le y \le 1\}$
8. $y'(1 + x^2) + xy = 0$; $\quad \{-2 \le x \le 2,\ -1 \le y \le 1\}$
9. $y' = (x - 1)(y - 1)(y - 2)$; $\quad \{-2 \le x \le 2,\ -3 \le y \le 3\}$
10. $(y - 1)^2y' = 2x + 3$; $\quad \{-2 \le x \le 2,\ -2 \le y \le 5\}$
Q2.2.3
In Exercises 2.2.11 and 2.2.12 solve the initial value problem.
11. $y' = \dfrac{x^2 + 3x + 2}{y - 2}, \quad y(1) = 4$
12. $y' + x(y^2 + y) = 0, \quad y(2) = 1$
Q2.2.4
In Exercises 2.2.13-2.2.16 solve the initial value problem and graph the solution.
13. $(3y^2 + 4y)y' + 2x + \cos x = 0, \quad y(0) = 1$
14. $y' + \dfrac{(y + 1)(y - 1)(y - 2)}{x + 1} = 0, \quad y(1) = 0$
15. $y' + 2x(y + 1) = 0, \quad y(0) = 2$
16. $y' = 2xy(1 + y^2), \quad y(0) = 1$
Q2.2.5
In Exercises 2.2.17-2.2.23 solve the initial value problem and find the interval of validity of the solution.
17. $y'(x^2 + 2) + 4x(y^2 + 2y + 1) = 0, \quad y(1) = -1$
18. $y' = -2x(y^2 - 3y + 2), \quad y(0) = 3$
19. $y' = \dfrac{2x}{1 + 2y}, \quad y(2) = 0$
20. $y' = 2y - y^2, \quad y(0) = 1$
21. $x + yy' = 0, \quad y(3) = -4$
22. $y' + x^2(y + 1)(y - 2)^2 = 0, \quad y(4) = 2$
23. $\sin y, \quad y(\pi) = \dfrac{\pi}{2}$
explicitly.
and $P(0) = P_0 > 0$. Find $P$ for $t > 0$, and find $\lim_{t\to\infty} P(t)$.
29. An epidemic spreads through a population at a rate proportional to the product of the number of people already infected
and the number of people susceptible, but not yet infected. Therefore, if S denotes the total population of susceptible people
and I = I (t) denotes the number of infected people at time t , then
$$I' = rI(S - I), \tag{2.2E.3}$$
where $r$ is a positive constant. Assuming that $I(0) = I_0$, find $I(t)$ for $t > 0$, and show that $\lim_{t\to\infty} I(t) = S$.
30. The result of Exercise 2.2.29 is discouraging: if any susceptible member of the group is initially infected, then in the long
run all susceptible members are infected! On a more hopeful note, suppose the disease spreads according to the model of
Exercise 2.2.29, but there’s a medication that cures the infected population at a rate proportional to the number of infected
individuals. Now the equation for the number of infected individuals becomes
$$I' = rI(S - I) - qI \tag{A}$$
$$R = \{0 \le t \le T,\ 0 \le I \le d\} \tag{2.2E.4}$$
in the $(t, I)$-plane, verify that if $I$ is any solution of (A) such that $I(0) > 0$, then $\lim_{t\to\infty} I(t) = S - q/r$ if $q < rS$ and $\lim_{t\to\infty} I(t) = 0$ if $q > rS$.
b. To verify the experimental results of (a), use separation of variables to solve (A) with initial condition $I(0) = I_0 > 0$, and find $\lim_{t\to\infty} I(t)$.
where a , b are positive constants, and q is an arbitrary constant. Suppose y denotes a solution of this equation that satisfies the
initial condition y(0) = y . 0
in the (t, y)-plane, discover that there are numbers y and y with y < y such that if y > y then lim
1 2 y(t) = y , 1 2 0 1 t→∞ 2
and if y < y then y(t) = −∞ for some finite value of t . (What happens if y = y ?)
0 1 0 1
b. Choose a and b positive and q = a²/4b. By plotting direction fields and solutions of (A) on suitable rectangular grids of the form (B), discover that there's a number y_1 such that if y_0 ≥ y_1 then lim_{t→∞} y(t) = y_1, while if y_0 < y_1 then y(t) = −∞ for some finite value of t.
c. Choose positive a, b and q > a²/4b. By plotting direction fields and solutions of (A) on suitable rectangular grids of the form (B), discover that no matter what y_0 is, y(t) = −∞ for some finite value of t.
To decide what to do next you'll have to use the quadratic formula. This should lead you to see why there are three cases. Take it from there! Because of its role in the transition between these three cases, q_0 = a²/4b is called a bifurcation value of q. In general, if q is a parameter in any differential equation, q_0 is said to be a bifurcation value of q if the nature of the solutions of the equation with q < q_0 is qualitatively different from the nature of the solutions with q > q_0.
convince yourself that q 0 =0 is a bifurcation value of q for this equation. Explain what makes you draw this conclusion.
33. Suppose a disease spreads according to the model of Exercise 2.2.29, but there's a medication that cures the infected population at a constant rate of q individuals per unit time, where q > 0. Then the equation for the number of infected individuals becomes
I' = rI(S − I) − q. (2.2E.7)
Assuming that I(0) = I_0 > 0, use the results of Exercise 2.2.31 to describe what happens as t → ∞.
34. Assuming that p ≢ 0, state conditions under which the linear equation
y' + p(x)y = f(x) (2.2E.8)
is separable. If the equation satisfies these conditions, solve it by separation of variables and by the method developed in Section 2.1.
Q2.2.7
Solve the equations in Exercises 2.2.35-2.2.38 using variation of parameters followed by separation of variables.
35. y' + y = 2xe^{−x}/(1 + ye^x)
36. xy' − 2y = x⁶/(y + x²)
37. y' − y = (x + 1)e^{4x}/(y + e^x)²
38. y' − 2y = xe^{2x}/(1 − ye^{−2x})
39. Use variation of parameters to show that the solutions of the following equations are of the form y = uy_1, where u satisfies a separable equation.
a. xy' + y = h(x)p(xy)
b. xy' − y = h(x)p(y/x)
c. y' + y = h(x)p(e^x y)
d. xy' + ry = h(x)p(x^r y)
e. y' + (v'(x)/v(x))y = h(x)p(v(x)y)
(Figure 2.3.1). We’ll denote this set by R : {a < x < b, c < y < d} . “Open” means that the boundary rectangle (indicated by
the dashed lines in Figure 2.3.1) is not included in R .
The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for first order
nonlinear differential equations. We omit the proof, which is beyond the scope of this book.
has at least one solution on some open subinterval of (a, b) that contains x_0.
b. If both f and f_y are continuous on R, then Equation 2.3.1 has a unique solution on some open subinterval of (a, b) that contains x_0.
(a) is an existence theorem. It guarantees that a solution exists on some open interval that contains x_0, but provides no information on how to find the solution, or to determine the open interval on which it exists. Moreover, (a) provides no information on the number of solutions that Equation 2.3.1 may have. It leaves open the possibility that Equation 2.3.1 may have two or more solutions that differ for values of x arbitrarily close to x_0. We will see in Example 2.3.6 that this can happen.
(b) is a uniqueness theorem. It guarantees that Equation 2.3.1 has a unique solution on some open interval (a, b) that contains x_0. However, if (a, b) ≠ (−∞, ∞), Equation 2.3.1 may have more than one solution on a larger interval that contains (a, b). For example, it may happen that b < ∞ and all solutions have the same values on (a, b), but two solutions y_1 and y_2 are defined on some interval (a, b_1) with b_1 > b, and have different values for b < x < b_1; thus, the graphs of y_1 and y_2 "branch off" in different directions at x = b. (See Example 2.3.7 and Figure 2.3.3.) In this case, continuity implies that y_1(b) = y_2(b) (call their common value y), and y_1 and y_2 are both solutions of the initial value problem
y' = f(x, y),  y(b) = y (2.3.2)
that differ on every open interval that contains b. Therefore f or f_y must have a discontinuity at some point in each open rectangle that contains (b, y), since if this were not so, Equation 2.3.2 would have a unique solution on some open interval that contains b. We leave it to you to give a similar analysis of the case where a > −∞.
Example 2.3.1
Consider the initial value problem
y' = (x² − y²)/(1 + x² + y²),  y(x_0) = y_0. (2.3.3)
Since
f(x, y) = (x² − y²)/(1 + x² + y²)  and  f_y(x, y) = −2y(1 + 2x²)/(1 + x² + y²)²
are continuous for all (x, y), Theorem 2.3.1 implies that if (x_0, y_0) is arbitrary, then Equation 2.3.3 has a unique solution on some open interval that contains x_0.
Example 2.3.2
Consider the initial value problem
y' = (x² − y²)/(x² + y²),  y(x_0) = y_0. (2.3.4)
Here
f(x, y) = (x² − y²)/(x² + y²)  and  f_y(x, y) = −4x²y/(x² + y²)²
are continuous everywhere except at (0, 0). If (x_0, y_0) ≠ (0, 0), there's an open rectangle R that contains (x_0, y_0) that does not contain (0, 0). Since f and f_y are continuous on R, Theorem 2.3.1 implies that if (x_0, y_0) ≠ (0, 0) then Equation 2.3.4 has a unique solution on some open interval that contains x_0.
Example 2.3.3
Consider the initial value problem
y' = (x + y)/(x − y),  y(x_0) = y_0. (2.3.5)
Here
f(x, y) = (x + y)/(x − y)  and  f_y(x, y) = 2x/(x − y)²
are continuous everywhere except on the line y = x. If y_0 ≠ x_0, there's an open rectangle R that contains (x_0, y_0) that does not intersect the line y = x. Since f and f_y are continuous on R, Theorem 2.3.1 implies that if y_0 ≠ x_0, Equation 2.3.5 has a unique solution on some open interval that contains x_0.
Example 2.3.4
In Example 2.2.4, we saw that the solutions of
y' = 2xy² (2.3.6)
are y ≡ 0 and y = −1/(x² + c), where c is an arbitrary constant. In particular, this implies that no solution of Equation 2.3.6 other than y ≡ 0 can equal zero for any value of x. Show that Theorem 2.3.1(b) implies this.
We'll obtain a contradiction by assuming that Equation 2.3.6 has a solution y_1 that equals zero for some value of x, but is not identically zero. If y_1 has this property, there's a point x_0 such that y_1(x_0) = 0, but y_1(x) ≠ 0 for some value of x in every open interval that contains x_0. This means that the initial value problem
y' = 2xy²,  y(x_0) = 0 (2.3.7)
has two solutions y ≡ 0 and y = y_1 that differ on every open interval that contains x_0. This contradicts Theorem 2.3.1(b), since f(x, y) = 2xy² and f_y(x, y) = 4xy are continuous for all (x, y).
Example 2.3.5
Consider the initial value problem
y' = (10/3)xy^{2/5},  y(x_0) = y_0. (2.3.8)
a. For what points (x_0, y_0) does Theorem 2.3.1(a) imply that Equation 2.3.8 has a solution?
b. For what points (x_0, y_0) does Theorem 2.3.1(b) imply that Equation 2.3.8 has a unique solution on some open interval that contains x_0?
Solution a
Since
f(x, y) = (10/3)xy^{2/5}
is continuous for all (x, y), Theorem 2.3.1 implies that Equation 2.3.8 has a solution for every (x_0, y_0).
Solution b
Here
f_y(x, y) = (4/3)xy^{−3/5}
is continuous for all (x, y) with y ≠ 0. Therefore, if y_0 ≠ 0 there's an open rectangle on which both f and f_y are continuous, and Theorem 2.3.1 implies that Equation 2.3.8 has a unique solution on some open interval that contains x_0.
If y = 0 then f_y(x, y) is undefined, and therefore discontinuous; hence, Theorem 2.3.1 does not apply to Equation 2.3.8 if y_0 = 0.
Example 2.3.6
Example 2.3.5 leaves open the possibility that the initial value problem
y' = (10/3)xy^{2/5},  y(0) = 0 (2.3.9)
has more than one solution on every open interval that contains x 0 =0 . Show that this is true.
y' = (10/3)xy^{2/5}. (2.3.10)
Separating variables yields
y^{−2/5} y' = (10/3)x
on any open interval where y has no zeros. Integrating this and rewriting the arbitrary constant as 5c/3 yields
(5/3)y^{3/5} = (5/3)(x² + c).
Therefore
y = (x² + c)^{5/3}. (2.3.11)
Since we divided by y to separate variables in Equation 2.3.10, our derivation of Equation 2.3.11 is legitimate only on open intervals where y has no zeros. However, Equation 2.3.11 actually defines y for all x, and differentiating Equation 2.3.11 shows that
y' = (10/3)x(x² + c)^{2/3} = (10/3)xy^{2/5},  −∞ < x < ∞.
Therefore Equation 2.3.11 satisfies Equation 2.3.10 on (−∞, ∞) even if c ≤ 0, so that y(√|c|) = y(−√|c|) = 0. In particular, taking c = 0 in Equation 2.3.11 yields
y = x^{10/3}
as a second solution of Equation 2.3.9. Both solutions are defined on (−∞, ∞), and they differ on every open interval that contains x_0 = 0 (Figure 2.3.2). In fact, there are four distinct solutions of Equation 2.3.9 defined on (−∞, ∞) that differ from each other on every open interval that contains x_0 = 0. Can you identify the other two?
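As a quick check, both of the solutions just named satisfy the differential equation. The sketch below assumes SymPy and restricts attention to x > 0 to avoid branch issues with the fractional powers.

# A sketch (assumes SymPy) checking two solutions of y' = (10/3) x y^(2/5), y(0) = 0.
import sympy as sp

x = sp.symbols('x', positive=True)   # x > 0 keeps the fractional powers single-valued
ode = lambda y: sp.diff(y, x) - sp.Rational(10, 3)*x*y**sp.Rational(2, 5)

y1 = sp.Integer(0)           # the trivial solution y ≡ 0
y2 = x**sp.Rational(10, 3)   # the solution obtained by taking c = 0 in Equation 2.3.11

print(sp.simplify(ode(y1)))   # 0
print(sp.simplify(ode(y2)))   # 0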
Example 2.3.7
From Example 2.3.5, the initial value problem
y' = (10/3)xy^{2/5},  y(0) = −1 (2.3.12)
has a unique solution on some open interval that contains x_0 = 0. Find the solution and determine the largest open interval
(a, b) on which it is unique.
Solution
Let y be any solution of Equation 2.3.12. Because of the initial condition y(0) = −1 and the continuity of y, there's an open interval I that contains x_0 = 0 on which y has no zeros, and is consequently of the form Equation 2.3.11. Setting x = 0 and y = −1 in Equation 2.3.11 yields c = −1, so
y = (x² − 1)^{5/3} (2.3.13)
for x in I . Therefore every solution of Equation 2.3.12 differs from zero and is given by Equation 2.3.13 on (−1, 1); that
is, Equation 2.3.13 is the unique solution of Equation 2.3.12 on (−1, 1). This is the largest open interval on which
Equation 2.3.12 has a unique solution. To see this, note that Equation 2.3.13 is a solution of Equation 2.3.12 on
(−∞, ∞) . From Exercise 2.2.15, there are infinitely many other solutions of Equation 2.3.12 that differ from Equation
2.3.13 on every open interval larger than (−1, 1). One such solution is
y = { (x² − 1)^{5/3},  −1 ≤ x ≤ 1,
      0,  |x| > 1.
Figure 2.3.3: Two solutions of Equation 2.3.12 that coincide on (−1, 1), but on no larger open interval.
Example 2.3.8
From Example 2.3.5, the initial value problem
y' = (10/3)xy^{2/5},  y(0) = 1 (2.3.14)
has a unique solution on some open interval that contains x0 = 0 . Find the solution and determine the largest open
interval on which it is unique.
Solution
Let y be any solution of Equation 2.3.14. Because of the initial condition y(0) = 1 and the continuity of y, there's an open interval I that contains x_0 = 0 on which y has no zeros, and is consequently of the form Equation 2.3.11. Setting x = 0 and y = 1 in Equation 2.3.11 yields c = 1, so
y = (x² + 1)^{5/3} (2.3.15)
for x in I. Therefore every solution of Equation 2.3.14 differs from zero and is given by Equation 2.3.15 on (−∞, ∞); that is, Equation 2.3.15 is the unique solution of Equation 2.3.14 on (−∞, ∞). Figure 2.3.4 shows the graph of this
solution.
Q2.3.1
In Exercises 2.3.1-2.3.13 find all (x_0, y_0) for which Theorem 2.3.1 implies that the initial value problem y' = f(x, y), y(x_0) = y_0 has (a) a solution and (b) a unique solution on some open interval that contains x_0.
1. y' = (x² + y²)/sin x
2. y' = (e^x + y)/(x² + y²)
3. y' = tan xy
4. y' = (x² + y²)/ln xy
5. y' = (x² + y²)y^{1/3}
6. y' = 2xy
7. y' = ln(1 + x² + y²)
8. y' = (2x + 3y)/(x − 4y)
9. y' = (x² + y²)^{1/2}
10. y' = x(y² − 1)^{2/3}
11. y' = (x² + y²)²
12. y' = (x + y)^{1/2}
13. y' = tan y/(x − 1)
Q2.3.2
14. Apply Theorem 2.3.1 to the initial value problem
y' + p(x)y = q(x),  y(x_0) = y_0 (2.3E.1)
for a linear equation, and compare the conclusions that can be drawn from it to those that follow from Theorem 2.1.2.
15.
a. Verify that the function
y = { (x² − 1)^{5/3},  −1 < x < 1,
      0,  |x| ≥ 1, (2.3E.2)
is a solution of y' = (10/3)xy^{2/5} on (−∞, ∞).
17. Consider the initial value problem
y' = 3x(y − 1)^{1/3},  y(x_0) = y_0. (A)
a. For what points (x_0, y_0) does Theorem 2.3.1 imply that (A) has a solution?
b. For what points (x_0, y_0) does Theorem 2.3.1 imply that (A) has a unique solution on some open interval that contains x_0?
that are all defined on (−∞, ∞) and differ from each other for values of x in every open interval that contains x_0 = 0.
19. From Theorem 2.3.1, the initial value problem
y' = 3x(y − 1)^{1/3},  y(0) = 9 (2.3E.8)
has a unique solution on an open interval that contains x_0 = 0. Find the solution and determine the largest open interval on which it is unique.
20.
a. From Theorem 2.3.1, the initial value problem
y' = 3x(y − 1)^{1/3},  y(3) = −7 (A)
has a unique solution on some open interval that contains x_0 = 3. Determine the largest such open interval, and find the solution on this interval.
…
y' = f(x, y),  y(x_0) = y_0
on (a, b).
b. If f and f_y are continuous on an open rectangle that contains (x_0, y_0) and (A) holds, no solution of y' = f(x, y) other than y ≡ y_0 can equal y_0 at any point in (a, b).
y' + p(x)y = 0 (2.4.1)
and u is a solution of
u' y_1(x) = f(x).
In this section we'll consider nonlinear differential equations that are not separable to begin with, but can be solved in a similar fashion by writing their solutions in the form y = uy_1, where y_1 is a suitably chosen known function and u satisfies a separable equation. We'll say in this case that we transformed the given equation into a separable equation.
Bernoulli Equations
A Bernoulli equation is an equation of the form
y' + p(x)y = f(x)y^r, (2.4.2)
where r can be any real number other than 0 or 1. (Note that Equation 2.4.2 is linear if and only if r = 0 or r = 1.) We can transform Equation 2.4.2 into a separable equation by variation of parameters: if y_1 is a nontrivial solution of Equation 2.4.1, substituting y = uy_1 into Equation 2.4.2 yields
u' y_1 + u(y_1' + p(x)y_1) = f(x)(uy_1)^r,
which simplifies to the separable equation u' y_1 = f(x)(uy_1)^r, since y_1' + p(x)y_1 = 0.
Example 2.4.1 :
Solve the Bernoulli equation
y' − y = xy². (2.4.3)
Since y_1 = e^x is a solution of y' − y = 0, we look for solutions of Equation 2.4.3 in the form y = ue^x, where
u' e^x = xu²e^{2x}, or equivalently, u' = xu²e^x.
Separating variables yields u'/u² = xe^x, and integrating yields −1/u = (x − 1)e^x + c. Hence,
u = −1/((x − 1)e^x + c)
and
y = −1/(x − 1 + ce^{−x}).
Figure 2.4.1 shows a direction field and some integral curves of Equation 2.4.3.
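Direction fields like the one in Figure 2.4.1 are easy to produce with standard plotting tools. The sketch below assumes NumPy and Matplotlib; the viewing window and the values of c are arbitrary illustrative choices.

# A minimal direction-field sketch for Equation 2.4.3, y' = y + x*y**2.
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: y + x*y**2

X, Y = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
S = f(X, Y)
L = np.hypot(1.0, S)                       # normalize so every segment has the same length
plt.quiver(X, Y, 1.0/L, S/L, angles='xy', scale=30, width=0.002)

# Overlay a few integral curves y = -1/(x - 1 + c*exp(-x)).
x = np.linspace(-2, 2, 400)
for c in (-2.0, -0.5, 0.5, 2.0):
    y = -1.0/(x - 1 + c*np.exp(-x))
    y[np.abs(y) > 4] = np.nan              # mask the vertical asymptotes
    plt.plot(x, y)

plt.xlabel('x'); plt.ylabel('y'); plt.title("Direction field for y' = y + x*y**2")
plt.show()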
y_1 is suitably chosen. Now let's discover a sufficient condition for a nonlinear first order differential equation
y' = f(x, y) (2.4.4)
to be transformable into a separable equation in the same way. Substituting y = uy_1 into Equation 2.4.4 yields
u' y_1(x) + u y_1'(x) = f(x, uy_1(x)),
which is equivalent to
u' y_1(x) = f(x, uy_1(x)) − u y_1'(x). (2.4.5)
If
f(x, uy_1(x)) = q(u)y_1'(x)
for some function q, then Equation 2.4.5 becomes
u' y_1(x) = (q(u) − u)y_1'(x), (2.4.6)
which is separable. After checking for constant solutions u ≡ u_0 such that q(u_0) = u_0, we can separate variables to obtain
u'/(q(u) − u) = y_1'(x)/y_1(x).
and
y' = (y² + xy − x²)/x² = (y/x)² + y/x − 1,
respectively. The general method discussed above can be applied to Equation 2.4.7 with y_1 = x (and therefore y_1' = 1). Thus, substituting y = ux in Equation 2.4.7 yields
u'x + u = q(u),
and separation of variables (after checking for constant solutions u ≡ u_0 such that q(u_0) = u_0) yields
u'/(q(u) − u) = 1/x.
Before turning to examples, we point out something that you may have already noticed: the definition of homogeneous equation given here is not the same as the definition given in Section 2.1, where we said that a linear equation of the form
y' + p(x)y = 0
is homogeneous. We make no apology for this inconsistency, since we didn't create it; historically, "homogeneous" has been used in these two inconsistent ways. The one having to do with linear equations is the most important. This is the only section of the book where the meaning defined here will apply.
Since y/x is in general undefined if x = 0, we'll consider solutions of homogeneous equations only on open intervals that do not contain the point x = 0.
Example 2.4.2
Solve
y' = (y + xe^{−y/x})/x. (2.4.8)
Substituting y = ux yields u'x + u = u + e^{−u}, so e^u u' = 1/x. Integrating yields e^u = ln|x| + c. Therefore u = ln(ln|x| + c) and y = ux = x ln(ln|x| + c).
Figure 2.4.2 shows a direction field and integral curves for Equation 2.4.8.
Example 2.4.3
a. Solve
x²y' = y² + xy − x². (2.4.9)
Solution a
We first find solutions of Equation 2.4.9 on open intervals that don’t contain x = 0 . We can rewrite Equation 2.4.9 as
y' = (y² + xy − x²)/x²
for x in any such interval. Substituting y = ux yields u'x + u = u² + u − 1, so
u'x = u² − 1. (2.4.11)
By inspection this equation has the constant solutions u ≡ 1 and u ≡ −1 . Therefore y = x and y = −x are solutions of
Equation 2.4.9. If u is a solution of Equation 2.4.11 that does not assume the values ±1 on some interval, separating
variables yields
u'/(u² − 1) = 1/x,
which holds if
(u − 1)/(u + 1) = cx². (2.4.12)
Therefore
y = ux = x(1 + cx²)/(1 − cx²) (2.4.13)
is a solution of Equation 2.4.9 for any choice of the constant c. Setting c = 0 in Equation 2.4.13 yields the solution y = x. However, the solution y = −x can't be obtained from Equation 2.4.13. Thus, the solutions of Equation 2.4.9 on intervals that don't contain x = 0 are y = −x and functions of the form Equation 2.4.13.
The situation is more complicated if x = 0 is in the open interval. First, note that y = −x satisfies Equation 2.4.9 on (−∞, ∞). If c_1 and c_2 are arbitrary constants, the function
y = { x(1 + c_1 x²)/(1 − c_1 x²),  a < x < 0,
      x(1 + c_2 x²)/(1 − c_2 x²),  0 ≤ x < b, (2.4.14)
is a solution of Equation 2.4.9 on (a, b), where
a = −1/√c_1 if c_1 > 0 or a = −∞ if c_1 ≤ 0,  and  b = 1/√c_2 if c_2 > 0 or b = ∞ if c_2 ≤ 0.
We leave it to you to verify this. To do so, note that if y is any function of the form Equation 2.4.13 then y(0) = 0 and y'(0) = 1.
Figure 2.4.3 shows a direction field and some integral curves for Equation 2.4.9.
Solution b
The interval of validity of this solution is (−√3, √3). However, the largest interval on which Equation 2.4.10 has a unique solution is (0, √3). To see this, note from Equation 2.4.14 that any function of the form
y = { x(1 + cx²)/(1 − cx²),  a < x ≤ 0,
      x(1 + x²/3)/(1 − x²/3),  0 ≤ x < √3, (2.4.15)
is a solution of Equation 2.4.10 on (a, √3), where a = −1/√c if c > 0 or a = −∞ if c ≤ 0. Why does this not contradict Theorem 2.3.1?
Figure 2.4.4 shows several solutions of the initial value problem Equation 2.4.10. Note that these solutions coincide on (0, √3).
In the last two examples we were able to solve the given equations explicitly. However, this is not always possible, as you’ll
see in the exercises.
2. 7xy' − 2y = −x/y⁶
3. x²y' + 2y = 2e^{1/x} y^{1/2}
4. (1 + x²)y' + 2xy = 1/((1 + x²)y)
Q2.4.2
In Exercises 2.4.5 and 2.4.6 find all solutions. Also, plot a direction field and some integral curves on the indicated rectangular region.
5. y' − xy = x³y³;  {−3 ≤ x ≤ 3, −2 ≤ y ≤ 2}
6. y' − (3x/(1 + x))y = y⁴;  {−2 ≤ x ≤ 2, −2 ≤ y ≤ 2}
Q2.4.3
In Exercises 2.4.7-2.4.11 solve the initial value problem.
7. y' − 2y = xy³,  y(0) = 2√2
8. y' − xy = xy^{3/2},  y(1) = 4
9. xy' + y = x⁴y⁴,  y(1) = 1/2
10. y' − 2y = 2y^{1/2},  y(0) = 1
11. y' − 4y = 48x/y²,  y(0) = 1
Q2.4.4
In Exercises 2.4.12 and 2.4.13 solve the initial value problem and graph the solution.
12. x²y' + 2xy = y³,  y(1) = 1/√2
13. y' − y = xy^{1/2},  y(0) = 4
Q2.4.5
14. You may have noticed that the logistic equation
P' = aP(1 − αP) (2.4E.1)
from Verhulst's model for population growth can be written in Bernoulli form as
P' − aP = −aαP². (2.4E.2)
This isn't particularly interesting, since the logistic equation is separable, and therefore solvable by the method studied in Section 2.2. So let's consider a more complicated model, where a is a positive constant and α is a positive continuous function of t on [0, ∞). The equation for this model is
P' − aP = −aα(t)P². (2.4E.3)
c. Assuming that
lim_{t→∞} e^{−at} ∫_0^t α(τ)e^{aτ} dτ = L, (2.4E.4)
Q2.4.6
In Exercises 2.4.15-2.4.18 solve the equation explicitly.
15. y' = (y + x)/x
16. y' = (y² + 2xy)/x²
17. xy³y' = y⁴ + x⁴
18. y' = y/x + sec(y/x)
Q2.4.7
In Exercises 2.4.19-2.4.21 solve the equation explicitly. Also, plot a direction field and some integral curves on the indicated rectangular region.
19. x²y' = xy + x² + y²;  {−8 ≤ x ≤ 8, −8 ≤ y ≤ 8}
20. xyy' = x² + 2y²;  {−4 ≤ x ≤ 4, −4 ≤ y ≤ 4}
21. y' = (2y² + x²e^{−(y/x)²})/(2xy);  {−8 ≤ x ≤ 8, −8 ≤ y ≤ 8}
Q2.4.8
In Exercises 2.4.22-2.4.27 solve the initial value problem.
22. y' = (xy + y²)/x²,  y(−1) = 2
23. y' = (x³ + y³)/(xy²),  y(1) = 3
24. xyy' + x² + y² = 0,  y(1) = 2
25. y' = (y² − 3xy − 5x²)/x²,  y(1) = −1
26. x²y' = 2x² + y² + 4xy,  y(1) = 1
27. xyy' = 3x² + 4y²,  y(1) = √3
Q2.4.9
In Exercises 2.4.28-2.4.34 solve the given homogeneous equation implicitly.
28. y' = (x + y)/(x − y)
29. (y'x − y)(ln|y| − ln|x|) = x
30. y' = (y³ + 2xy² + x²y + x³)/(x(y + x)²)
31. y' = (x + 2y)/(2x + y)
32. y' = y/(y − 2x)
33. y' = (xy² + 2y³)/(x³ + x²y + xy²)
34. y' = (x³ + x²y + 3y³)/(x³ + 3xy²)
on the interval (−∞, 0). Verify that this solution is actually valid on (−∞, ∞).
b. Use Theorem 2.3.1 to show that (A) has a unique solution on (−∞, 0).
c. Plot a direction field for the differential equation in (A) on a square
{−r ≤ x ≤ r, −r ≤ y ≤ r}, (2.4E.5)
where r is any positive number. Graph the solution you obtained in (a) on this field.
d. Graph other solutions of (A) that are defined on (−∞, ∞).
e. Graph other solutions of (A) that are defined only on intervals of the form (−∞, a), where a is a finite positive number.
36.
a. Solve the equation
xyy' = x² − xy + y² (A)
implicitly.
b. Plot a direction field for (A) on a square
{0 ≤ x ≤ r, 0 ≤ y ≤ r} (2.4E.6)
for k = 1 , 2, …, K . Based on your observations, find conditions on the positive numbers x0 and y0 such that the initial
value problem
xyy' = x² − xy + y²,  y(x_0) = y_0, (B)
has a unique solution (i) on (0, ∞) or (ii) only on an interval (a, ∞), where a > 0 ?
d. What can you say about the graph of the solution of (B) as x → ∞ ? (Again, assume that x 0 >0 and y 0 >0 .)
37.
a. Solve the equation
y' = (2y² − xy + 2x²)/(xy + 2x²) (A)
implicitly.
b. Plot a direction field for (A) on a square
{−r ≤ x ≤ r, −r ≤ y ≤ r} (2.4E.8)
where r is any positive number. By graphing solutions of (A), determine necessary and sufficient conditions on (x0 , y0 )
such that (A) has a solution on (i) (−∞, 0) or (ii) (0, ∞) such that y(x_0) = y_0.
39. Pick any nonlinear homogeneous equation y' = q(y/x) you like, and plot direction fields on the square {−r ≤ x ≤ r, −r ≤ y ≤ r}, where r > 0. What happens to the direction field as you vary r? Why?
Q2.4.11
In Exercises 2.4.41-2.4.43 use a method suggested by Exercise 2.4.40 to solve the given equation implicitly.
41. y' = (−6x + y − 3)/(2x − y − 1)
42. y' = (2x + y + 1)/(x + 2y − 4)
43. y' = (−x + 3y − 14)/(x + y − 2)
Q2.4.12
In Exercises 2.4.44-2.4.51 find a function y_1 such that the substitution y = uy_1 transforms the given equation into a separable equation of the form Equation 2.4.6. Then solve the given equation explicitly.
44. 3xy²y' = y³ + x
45. xyy' = 3x⁶ + 6y²
46. x³y' = 2(y² + x²y − x⁴)
47. y' = y²e^{−x} + 4y + 2e^x
48. y' = (y² + y tan x + tan²x)/sin²x
49. x(ln x)²y' = −4(ln x)² + y ln x + y²
51. (y + e^{x²})y' = 2x(y² + ye^{x²} + e^{2x²})
Q2.4.13
52. Solve the initial value problem
y' + (2/x)y = (3x²y² + 6xy + 2)/(x²(2xy + 3)),  y(2) = 2. (2.4E.12)
(If R ≡ −1, (A) is a Riccati equation.) Let y_1 be a known solution and y an arbitrary solution of (A). Let z = y − y_1. Show that z is a solution of a Bernoulli equation with n = 2.
Q2.4.14
57. y' = e^{2x} + (1 − 2e^x)y + y²;  y_1 = e^x
58. xy' = 2 − x + (2x − 2)y − xy²;  y_1 = 1
59. xy' = x³ + (1 − 2x²)y + xy²;  y_1 = x
where y is the independent variable and x is the dependent variable. Since the solutions of Equation 2.5.2 and Equation 2.5.3
will often have to be left in implicit form we will say that F (x, y) = c is an implicit solution of Equation 2.5.1 if every
differentiable function y = y(x) that satisfies F (x, y) = c is a solution of Equation 2.5.2 and every differentiable function
x = x(y) that satisfies F (x, y) = c is a solution of Equation 2.5.3
(x² + y²) dx + 2xy dy = 0    (x² + y²) + 2xy (dy/dx) = 0    (x² + y²)(dx/dy) + 2xy = 0
3y sin x dx − 2xy cos x dy = 0    3y sin x − 2xy cos x (dy/dx) = 0    3y sin x (dx/dy) − 2xy cos x = 0
M (x) dx + N (y) dy = 0.
We will develop a method for solving Equation 2.5.1 under appropriate assumptions on M and N. This method is an extension of the method of separation of variables. Before stating it we consider an example.
Example 2.5.1
Show that
x⁴y³ + x²y⁵ + 2xy = c (2.5.4)
is an implicit solution of
(4x³y³ + 2xy⁵ + 2y) dx + (3x⁴y² + 5x²y⁴ + 2x) dy = 0. (2.5.5)
Solution
Regarding y as a function of x and differentiating Equation 2.5.4 implicitly with respect to x yields
(4x³y³ + 2xy⁵ + 2y) + (3x⁴y² + 5x²y⁴ + 2x) (dy/dx) = 0.
Similarly, regarding x as a function of y and differentiating Equation 2.5.4 implicitly with respect to y yields
(4x³y³ + 2xy⁵ + 2y) (dx/dy) + (3x⁴y² + 5x²y⁴ + 2x) = 0.
Therefore Equation 2.5.4 is an implicit solution of Equation 2.5.5 in either of its two possible interpretations.
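The implicit differentiation above can also be carried out mechanically. The following sketch, which assumes SymPy, recovers the coefficients of Equation 2.5.5 from F(x, y) = x⁴y³ + x²y⁵ + 2xy.

# A SymPy sketch verifying Example 2.5.1.
import sympy as sp

x, y = sp.symbols('x y')
F = x**4*y**3 + x**2*y**5 + 2*x*y

M = sp.expand(sp.diff(F, x))   # coefficient of dx: equals 4x^3 y^3 + 2x y^5 + 2y
N = sp.expand(sp.diff(F, y))   # coefficient of dy: equals 3x^4 y^2 + 5x^2 y^4 + 2x
print(M)
print(N)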
Theorem 2.5.1
If F = F(x, y) has continuous partial derivatives F_x and F_y, then
F(x, y) = c (2.5.6)
is an implicit solution of the differential equation
F_x(x, y) dx + F_y(x, y) dy = 0. (2.5.7)
Proof
Regarding y as a function of x and differentiating Equation 2.5.6 implicitly with respect to x yields
F_x(x, y) + F_y(x, y) (dy/dx) = 0.
On the other hand, regarding x as a function of y and differentiating Equation 2.5.6 implicitly with respect to y yields
F_x(x, y) (dx/dy) + F_y(x, y) = 0.
Thus, Equation 2.5.6 is an implicit solution of Equation 2.5.7 in either of its two possible interpretations.
We'll say that the equation
M(x, y) dx + N(x, y) dy = 0 (2.5.8)
is exact on an open rectangle R if there's a function F = F(x, y) such that F_x and F_y are continuous, and
F_x(x, y) = M(x, y),  F_y(x, y) = N(x, y) (2.5.9)
for all (x, y) in R. This usage of "exact" is related to its usage in calculus, where the expression
F_x(x, y) dx + F_y(x, y) dy
(obtained by substituting Equation 2.5.9 into the left side of Equation 2.5.8) is the exact differential of F.
Example 2.5.1 shows that it is easy to solve Equation 2.5.8 if it is exact and we know a function F that satisfies Equation
2.5.9. The important questions are:
Question 1. Given an equation Equation 2.5.8, how can we determine whether it is exact?
Question 2. If Equation 2.5.8 is exact, how do we find a function F satisfying Equation 2.5.9?
To discover the answer to Question 1, assume that there's a function F that satisfies Equation 2.5.9 on some open rectangle R, and in addition that F has continuous mixed partial derivatives F_xy and F_yx. Then a theorem from calculus implies that
F_xy = F_yx. (2.5.10)
If F_x = M and F_y = N, differentiating the first of these equations with respect to y and the second with respect to x yields
F_xy = M_y  and  F_yx = N_x. (2.5.11)
From Equation 2.5.10 and Equation 2.5.11, we conclude that a necessary condition for exactness is that M_y = N_x. This motivates the next theorem, which we state without proof.
To help you remember the exactness condition, observe that the coefficients of dx and dy are differentiated in Equation 2.5.12
with respect to the “opposite” variables; that is, the coefficient of dx is differentiated with respect to y , while the coefficient of
dy is differentiated with respect to x.
Example 2.5.2
Show that the equation
3x²y dx + 4x³ dy = 0
is not exact on any open rectangle. Here M(x, y) = 3x²y and N(x, y) = 4x³, so
M_y(x, y) = 3x²  and  N_x(x, y) = 12x².
Therefore M_y = N_x on the line x = 0, but not on any open rectangle, so there's no function F such that F_x(x, y) = M(x, y) and F_y(x, y) = N(x, y) for all (x, y) on any open rectangle.
The next example illustrates two possible methods for finding a function F that satisfies the conditions F_x = M and F_y = N if M dx + N dy = 0 is exact.
Example 2.5.3
Solve
(4x³y³ + 3x²) dx + (3x⁴y² + 6y²) dy = 0. (2.5.13)
Solution (Method 1)
Here
M(x, y) = 4x³y³ + 3x²,  N(x, y) = 3x⁴y² + 6y²,
and
M_y(x, y) = N_x(x, y) = 12x³y²
for all (x, y). Therefore Theorem 2.5.2 implies that there's a function F such that
F_x(x, y) = M(x, y) = 4x³y³ + 3x² (2.5.14)
and
F_y(x, y) = N(x, y) = 3x⁴y² + 6y² (2.5.15)
for all (x, y). To find F, we integrate Equation 2.5.14 with respect to x to obtain
F(x, y) = x⁴y³ + x³ + ϕ(y), (2.5.16)
where ϕ is an unknown function of y. Differentiating this with respect to y and comparing the result with Equation 2.5.15 shows that ϕ'(y) = 6y². We integrate this with respect to y and take the constant of integration to be zero because we are interested only in finding some F that satisfies Equation 2.5.14 and Equation 2.5.15. This yields
ϕ(y) = 2y³.
Therefore
x⁴y³ + x³ + 2y³ = c
is an implicit solution of Equation 2.5.13. Solving this for y yields the explicit solution
y = ((c − x³)/(2 + x⁴))^{1/3}.
Solution (Method 2)
Instead of first integrating Equation 2.5.14 with respect to x, we could begin by integrating Equation 2.5.15 with respect to y to obtain
F(x, y) = x⁴y³ + 2y³ + ψ(x), (2.5.18)
where ψ is an arbitrary function of x. To determine ψ, we assume that ψ is differentiable and differentiate F with respect to x, which yields
F_x(x, y) = 4x³y³ + ψ'(x).
Comparing this with Equation 2.5.14 shows that ψ'(x) = 3x². Integrating this and again taking the constant of integration to be zero yields
ψ(x) = x³.
This again gives the implicit solution x⁴y³ + x³ + 2y³ = c.
Here’s a summary of the procedure used in Method 1 of this example. You should summarize procedure used in Method 2.
satisfies the exactness condition M y = Nx . If not, don’t go further with this procedure.
Step 2. Integrate
∂F (x, y)
= M (x, y)
∂x
Step 4. Equate the right side of this equation to N and solve for ϕ ; thus, ′
∂G(x, y) ∂G(x, y)
′ ′
+ ϕ (y) = N (x, y), so ϕ (y) = N (x, y) − .
∂y ∂y
Step 5. Integrate ϕ with respect to y , taking the constant of integration to be zero, and substitute the result in Equation
′
Step 6. Set F (x, y) = c to obtain an implicit solution of Equation 2.5.19. If possible, solve for y explicitly as a
function of x.
It’s a common mistake to omit Step 6 in the procedure above. However, it is important to include this step, since F isn’t itself a
solution of Equation 2.5.19. Many equations can be conveniently solved by either of the two methods used in Example 2.5.3.
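For readers who want to experiment, the six steps can be mirrored almost line by line in a computer algebra system. The sketch below assumes SymPy and applies the procedure to the equation of Example 2.5.3; the variable names are illustrative.

# A sketch of the Method-1 procedure (assumes SymPy), applied to Equation 2.5.13.
import sympy as sp

x, y = sp.symbols('x y')
M = 4*x**3*y**3 + 3*x**2
N = 3*x**4*y**2 + 6*y**2

# Step 1: exactness check M_y = N_x.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Step 2: integrate M with respect to x; the "constant" is an unknown function phi(y).
G = sp.integrate(M, x)                       # x**4*y**3 + x**3

# Steps 3-5: phi'(y) = N - dG/dy, then integrate, taking the constant of integration to be zero.
phi_prime = sp.simplify(N - sp.diff(G, y))   # 6*y**2, independent of x as it must be
phi = sp.integrate(phi_prime, y)             # 2*y**3

F = G + phi
print(sp.Eq(F, sp.Symbol('c')))              # Step 6: x**4*y**3 + x**3 + 2*y**3 = c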
Example 2.5.4
Solve the equation
(ye^{xy} tan x + e^{xy} sec²x) dx + xe^{xy} tan x dy = 0. (2.5.21)
Solution
We leave it to you to check that M_y = N_x on any open rectangle where tan x and sec x are defined. Here we must find a function F such that
F_x(x, y) = ye^{xy} tan x + e^{xy} sec²x (2.5.22)
and
F_y(x, y) = xe^{xy} tan x. (2.5.23)
It's difficult to integrate Equation 2.5.22 with respect to x, but easy to integrate Equation 2.5.23 with respect to y. This yields
F(x, y) = e^{xy} tan x + ψ(x). (2.5.24)
Attempting to apply our procedure to a differential equation that is not exact will lead to failure in Step 4, since the function
N − ∂G/∂y
will not be independent of x if M_y ≠ N_x, and therefore cannot be the derivative of a function of y alone. Example 2.5.5 illustrates this.
Example 2.5.5
Verify that the equation
3x²y² dx + 6x³y dy = 0 (2.5.25)
is not exact, and show that the procedure for solving exact equations fails when applied to Equation 2.5.25.
Solution
Here
M_y(x, y) = 6x²y  and  N_x(x, y) = 18x²y,
so Equation 2.5.25 is not exact. Nevertheless, let us try to find a function F such that
F_x(x, y) = 3x²y² (2.5.26)
and
F_y(x, y) = 6x³y.
Integrating Equation 2.5.26 with respect to x yields F(x, y) = x³y² + ϕ(y), so F_y(x, y) = 2x³y + ϕ'(y). Setting this equal to N gives
2x³y + ϕ'(y) = 6x³y,
or
ϕ'(y) = 4x³y,
which is not independent of x, so the procedure fails.
2. (3y cos x + 4xe^x + 2x²e^x) dx + (3 sin x + 3) dy = 0
3. 14x²y³ dx + 21x²y² dy = 0
4. (2x − 2y²) dx + (12y² − 4xy) dy = 0
5. (x + y)² dx + (x + y)² dy = 0
10. (2x² + 8xy + y²) dx + (2x² + x³y/3) dy = 0
11. (1/x + 2x) dx + (1/y + 2y) dy = 0
12. (y sin xy + xy² cos xy) dx + (x sin xy + x²y cos xy) dy = 0
13. x dx/(x² + y²)^{3/2} + y dy/(x² + y²)^{3/2} = 0
14. (e^x(x²y² + 2xy²) + 6x) dx + (2x²ye^x + 2) dy = 0
15. (x²e^{x²+y}(2x² + 3) + 4x) dx + (x³e^{x²+y} − 12y²) dy = 0
16. (e^{xy}(x⁴y + 4x³) + 3y) dx + (x⁵e^{xy} + 3x) dy = 0
17. (3x² cos xy − x³y sin xy + 4x) dx + (8y − x⁴ sin xy) dy = 0
Q2.5.2
In Exercises 2.5.18-2.5.22 solve the initial value problem.
18. (4x³y² − 6x²y − 2x − 3) dx + (2x⁴y − 2x³) dy = 0,  y(1) = 3
20. (y³ − 1)e^x dx + 3y²(e^x + 1) dy = 0,  y(0) = 0
Q2.5.3
23. Solve the exact equation
Plot a direction field and some integral curves for this equation on the rectangle
{−1 ≤ x ≤ 1, −1 ≤ y ≤ 1}.
{−2 ≤ x ≤ 2, −1 ≤ y ≤ 1}.
25. Plot a direction field and some integral curves for the exact equation
(x³y⁴ + x) dx + (x⁴y³ + y) dy = 0
implicitly.
b. For what choices of (x_0, y_0) does Theorem 2.3.1 imply that the initial value problem
(x³y⁴ + 2x) dx + (x⁴y³ + 3y) dy = 0,  y(x_0) = y_0, (B)
has a unique solution y = y(x) on some open interval (a, b) that contains x_0?
c. Plot a direction field and some integral curves for (A) on a rectangular region centered at the origin. What is the interval of validity of the solution of (B)?
28.
a. Solve the exact equation
(x² + y²) dx + 2xy dy = 0 (A)
implicitly.
b. For what choices of (x_0, y_0) does Theorem 2.3.1 imply that the initial value problem
(x² + y²) dx + 2xy dy = 0,  y(x_0) = y_0, (B)
has a unique solution y = y(x) on some open interval (a, b) that contains x_0?
c. Plot a direction field and some integral curves for (A). From the plot, determine the interval (a, b) of (b), the monotonicity properties (if any) of the solution of (B), and lim_{x→a+} y(x) and lim_{x→b−} y(x).
is not independent of x.
32. Prove: If the equations M 1 dx + N1 dy = 0 and M 2 dx + N2 dy = 0 are exact on an open rectangle R , so is the equation
(M1 + M2 ) dx + (N1 + N2 ) dy = 0.
33. Find conditions on the constants A, B, C, and D such that the equation
(Ax + By) dx + (Cx + Dy) dy = 0
is exact.
34. Find conditions on the constants A, B, C, D, E, and F such that the equation
(Ax² + Bxy + Cy²) dx + (Dx² + Exy + Fy²) dy = 0
is exact.
35. Suppose M and N are continuous and have continuous partial derivatives M_y and N_x that satisfy the exactness condition M_y = N_x on an open rectangle R. Show that if (x, y) is in R and
F(x, y) = ∫_{x_0}^{x} M(s, y_0) ds + ∫_{y_0}^{y} N(x, t) dt,
then F_x = M and F_y = N.
36. Under the assumptions of Exercise 2.5.35, show that
F(x, y) = ∫_{y_0}^{y} N(x_0, s) ds + ∫_{x_0}^{x} M(t, y) dt.
37. Use the method suggested by Exercise 2.5.35, with (x_0, y_0) = (0, 0), to solve these exact equations:
a. (x³y⁴ + x) dx + (x⁴y³ + y) dy = 0
b. (x² + y²) dx + 2xy dy = 0
2 2
′
2 2xy
y + y =− , y(1) = −2.
2 2
x x + 2x y + 1
as an exact equation
M (x, y) dx + N (x, y) dy = 0. (B)
Show that applying the method of this section to (B) yields the same solutions that would be obtained by applying the method
of separation of variables to (A)
43. Suppose all second partial derivatives of F = F(x, y) are continuous and F_xx + F_yy = 0 on an open rectangle R. (A function with these properties is said to be harmonic; see also Exercise 2.5.42.) Show that −F_y dx + F_x dy = 0 is exact on R, and therefore there's a function G such that G_x = −F_y and G_y = F_x in R. (A function G with this property is said to be a harmonic conjugate of F.)
44. Verify that the following functions are harmonic, and find all their harmonic conjugates. (See Exercise 2.5.43.)
a. x² − y²
b. e^x cos y
c. x³ − 3xy²
d. cos x cosh y
e. sin x cosh y
is exact on R . Sometimes an equation that isn’t exact can be made exact by multiplying it by an appropriate function. For
example,
(3x + 2y²) dx + 2xy dy = 0 (2.6.2)
is not exact, since M_y(x, y) = 4y ≠ N_x(x, y) = 2y in Equation 2.6.2. However, multiplying Equation 2.6.2 by x yields
(3x² + 2xy²) dx + 2x²y dy = 0, (2.6.3)
which is exact. In general, a function μ = μ(x, y) is an integrating factor for Equation 2.6.1 if
μ(x, y)M(x, y) dx + μ(x, y)N(x, y) dy = 0 (2.6.4)
is exact. If we know an integrating factor μ for Equation 2.6.1, we can solve the exact equation Equation 2.6.4 by the method
of Section 2.5. It would be nice if we could say that Equation 2.6.1 and Equation 2.6.4 always have the same solutions, but
this isn’t so. For example, a solution y = y(x) of Equation 2.6.4 such that μ(x, y(x)) = 0 on some interval a < x < b could
fail to be a solution of 2.6.1 (Exercise 2.6.1), while Equation 2.6.1 may have a solution y = y(x) such that μ(x, y(x)) isn’t
even defined (Exercise 2.6.2). Similar comments apply if y is the independent variable and x is the dependent variable in
Equation 2.6.1 and Equation 2.6.4. However, if μ(x, y) is defined and nonzero for all (x, y), Equation 2.6.1 and Equation
2.6.4 are equivalent; that is, they have the same solutions.
∂(μM)/∂y = ∂(μN)/∂x, or, equivalently, μ_y M + μM_y = μ_x N + μN_x.
It is better to rewrite the last equation as
μ(M_y − N_x) = μ_x N − μ_y M, (2.6.5)
which reduces to the known result for exact equations; that is, if M_y = N_x then Equation 2.6.5 holds with μ = 1, so Equation 2.6.1 is exact.
You may think Equation 2.6.5 is of little value, since it involves partial derivatives of the unknown integrating factor μ , and
we haven’t studied methods for solving such equations. However, we’ll now show that Equation 2.6.5 is useful if we restrict
our search to integrating factors that are products of a function of x and a function of y ; that is, μ(x, y) = P (x)Q(y). We’re
not saying that every equation M dx + N dy = 0 has an integrating factor of this form; rather, we are saying that some
equations have such integrating factors. We'll now develop a way to determine whether a given equation has such an integrating
factor, and a method for finding the integrating factor in this case.
If μ(x, y) = P(x)Q(y), then μ_x(x, y) = P'(x)Q(y) and μ_y(x, y) = P(x)Q'(y), so Equation 2.6.5 becomes
P(x)Q(y)(M_y − N_x) = P'(x)Q(y)N − P(x)Q'(y)M. (2.6.6)
Now let
p(x) = P'(x)/P(x)  and  q(y) = Q'(y)/Q(y),
so that dividing Equation 2.6.6 through by P(x)Q(y) yields
M_y − N_x = p(x)N − q(y)M. (2.6.8)
We obtained Equation 2.6.8 by assuming that M dx + N dy = 0 has an integrating factor μ(x, y) = P (x)Q(y). However,
we can now view Equation 2.6.7 differently: If there are functions p = p(x) and q = q(y) that satisfy Equation 2.6.8 and we
define
∫ p(x) dx ∫ q(y) dy
P (x) = ±e and Q(y) = ±e , (2.6.9)
then reversing the steps that led from Equation 2.6.6 to Equation 2.6.8 shows that μ(x, y) = P (x)Q(y) is an integrating
factor for M dx + N dy = 0 . In using this result, we take the constants of integration in Equation 2.6.9 to be zero and choose
the signs conveniently so the integrating factor has the simplest form.
There’s no simple general method for ascertaining whether functions p = p(x) and q = q(y) satisfying Equation 2.6.8 exist.
However, the next theorem gives simple sufficient conditions for the given equation to have an integrating factor that depends
on only one of the independent variables x and y , and for finding an integrating factor in this case.
Theorem 2.6.1
Let M, N, M_y, and N_x be continuous on an open rectangle R. Then:
(a) If (M_y − N_x)/N is independent of y on R and we define
p(x) = (M_y − N_x)/N,
then
μ(x) = ±e^{∫ p(x) dx} (2.6.10)
is an integrating factor for
M(x, y) dx + N(x, y) dy = 0 (2.6.11)
on R.
(b) If (N_x − M_y)/M is independent of x on R and we define
q(y) = (N_x − M_y)/M,
then
μ(y) = ±e^{∫ q(y) dy} (2.6.12)
is an integrating factor for Equation 2.6.11 on R.
Proof
(a) If (M_y − N_x)/N is independent of y, then Equation 2.6.8 holds with p = (M_y − N_x)/N and q ≡ 0. Therefore
P(x) = ±e^{∫ p(x) dx}  and  Q(y) = ±e^{∫ q(y) dy} = ±e^0 = ±1,
so μ(x, y) = P(x)Q(y) = ±e^{∫ p(x) dx} is an integrating factor.
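The test in part (a) is easy to automate. The sketch below assumes SymPy and applies it to Equation 2.6.2, recovering the integrating factor μ = x mentioned at the start of this section.

# A sketch (assumes SymPy) of the test in Theorem 2.6.1(a), applied to Equation 2.6.2:
# (3x + 2y**2) dx + 2xy dy = 0.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = 3*x + 2*y**2
N = 2*x*y

p = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)   # (4y - 2y)/(2xy) = 1/x, independent of y
mu = sp.exp(sp.integrate(p, x))                        # integrating factor exp(ln x) = x

# Multiplying by mu makes the equation exact:
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)) == 0
print(p, mu)   # 1/x, x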
Example 2.6.1
Find an integrating factor for the equation
(2xy³ − 2x³y³ − 4xy² + 2x) dx + (3x²y² + 4y) dy = 0 (2.6.13)
and solve the equation.
Solution
Here
M(x, y) = 2xy³ − 2x³y³ − 4xy² + 2x,  N(x, y) = 3x²y² + 4y,
and
M_y − N_x = (6xy² − 6x³y² − 8xy) − 6xy² = −6x³y² − 8xy,
so
(M_y − N_x)/N = −2x(3x²y² + 4y)/(3x²y² + 4y) = −2x
is independent of y. Therefore Theorem 2.6.1(a) applies with p(x) = −2x. Since
∫ p(x) dx = −∫ 2x dx = −x²,
μ(x) = e^{−x²} is an integrating factor. Multiplying Equation 2.6.13 by μ yields the exact equation
e^{−x²}(2xy³ − 2x³y³ − 4xy² + 2x) dx + e^{−x²}(3x²y² + 4y) dy = 0. (2.6.14)
To solve this equation, we must find a function F such that
F_x(x, y) = e^{−x²}(2xy³ − 2x³y³ − 4xy² + 2x) (2.6.15)
and
F_y(x, y) = e^{−x²}(3x²y² + 4y). (2.6.16)
Integrating Equation 2.6.16 with respect to y yields F(x, y) = e^{−x²}(x²y³ + 2y²) + ψ(x). Differentiating this with respect to x and comparing the result with Equation 2.6.15 shows that ψ'(x) = 2xe^{−x²}; therefore, we can let ψ(x) = −e^{−x²} and conclude that
e^{−x²}(y²(x²y + 2) − 1) = c
is an implicit solution of Equation 2.6.14. It is also an implicit solution of Equation 2.6.13.
Example 2.6.2
Find an integrating factor for
2xy³ dx + (3x²y² + x²y³ + 1) dy = 0 (2.6.18)
and solve the equation.
Solution
Here
M(x, y) = 2xy³,  N(x, y) = 3x²y² + x²y³ + 1,
and
M_y − N_x = 6xy² − (6xy² + 2xy³) = −2xy³,
so (M_y − N_x)/N is not independent of y, and Theorem 2.6.1(a) does not apply. However, Theorem 2.6.1(b) does apply, since
(N_x − M_y)/M = 2xy³/(2xy³) = 1
is independent of x, so we can take q(y) = 1. Since
∫ q(y) dy = ∫ dy = y,
μ(y) = e^y
is an integrating factor. Multiplying Equation 2.6.18 by μ yields the exact equation
2xy³e^y dx + (3x²y² + x²y³ + 1)e^y dy = 0. (2.6.19)
To solve this equation, we must find a function F such that
F_x(x, y) = 2xy³e^y (2.6.20)
and
F_y(x, y) = (3x²y² + x²y³ + 1)e^y. (2.6.21)
Integrating Equation 2.6.20 with respect to x yields
F(x, y) = x²y³e^y + ϕ(y), (2.6.22)
and comparing this with Equation 2.6.21 shows that ϕ'(y) = e^y. Therefore we set ϕ(y) = e^y in Equation 2.6.22 and conclude that
(x²y³ + 1)e^y = c
is an implicit solution of Equation 2.6.19. It is also an implicit solution of Equation 2.6.18. Figure 2.6.2 shows a direction field and some integral curves for Equation 2.6.18.
Theorem 2.6.1 does not apply in the next example, but the more general argument that led to Theorem 2.6.1 provides an
integrating factor.
Example 2.6.3
Find an integrating factor for
(3xy + 6y²) dx + (2x² + 9xy) dy = 0 (2.6.23)
and solve the equation.
Solution
Here
M(x, y) = 3xy + 6y²,  N(x, y) = 2x² + 9xy,
and
M_y − N_x = (3x + 12y) − (4x + 9y) = −x + 3y.
Therefore
(M_y − N_x)/N = (−x + 3y)/(2x² + 9xy)  and  (N_x − M_y)/M = (x − 3y)/(3xy + 6y²),
so Theorem 2.6.1 does not apply. Following the more general argument that led to Theorem 2.6.1, we look for functions p = p(x) and q = q(y) such that
M_y − N_x = p(x)N − q(y)M;
that is,
−x + 3y = p(x)(2x² + 9xy) − q(y)(3xy + 6y²).
Since the left side contains only first degree terms in x and y, we rewrite this equation as
xp(x)(2x + 9y) − yq(y)(3x + 6y) = −x + 3y,
which suggests that we try p(x) = A/x and q(y) = B/y, or, equivalently,
A(2x + 9y) − B(3x + 6y) = −x + 3y.
Equating the coefficients of x and y on both sides shows that the last equation holds for all (x, y) if
2A − 3B = −1
9A − 6B = 3,
which has the solution A = 1, B = 1. Therefore p(x) = 1/x and q(y) = 1/y. Since
∫ p(x) dx = ln|x|  and  ∫ q(y) dy = ln|y|,
we can let P(x) = x and Q(y) = y; hence, μ(x, y) = xy is an integrating factor. Multiplying Equation 2.6.23 by μ yields the exact equation
(3x²y² + 6xy³) dx + (2x³y + 9x²y²) dy = 0.
We leave it to you to use the method of Section 2.5 to show that this equation has the implicit solution
x³y² + 3x²y³ = c. (2.6.25)
This is also an implicit solution of Equation 2.6.23. Since x ≡ 0 and y ≡ 0 satisfy Equation 2.6.25, you should check to see that x ≡ 0 and y ≡ 0 are also solutions of Equation 2.6.23. (Why is it necessary to check this?) Figure 2.6.3 shows a direction field and integral curves for Equation 2.6.23. See Exercise 2.6.28 for a general discussion of equations like Equation 2.6.23.
However, to solve Equation 2.6.27 by the method of Section 2.5 we would have to evaluate the nasty integral
∫ dx/(x + x⁶).
Instead, we solve Equation 2.6.26 explicitly for y by finding an integrating factor of the form μ(x, y) = x^a y^b.
Solution
In Equation 2.6.26,
M = −y,  N = x + x⁶,
and
M_y − N_x = −1 − (1 + 6x⁵) = −2 − 6x⁵.
We look for functions p = p(x) and q = q(y) such that
M_y − N_x = p(x)N − q(y)M;
that is,
−2 − 6x⁵ = p(x)(x + x⁶) + q(y)y. (2.6.28)
The right side will contain the term −6x⁵ if p(x) = −6/x. Then Equation 2.6.28 becomes
−2 − 6x⁵ = −6 − 6x⁵ + q(y)y,
so q(y) = 4/y. Since
∫ p(x) dx = −∫ (6/x) dx = −6 ln|x| = ln(|x|^{−6})
and
∫ q(y) dy = ∫ (4/y) dy = 4 ln|y| = ln(y⁴),
we can take P(x) = x^{−6} and Q(y) = y⁴, which yields the integrating factor μ(x, y) = x^{−6}y⁴. Multiplying Equation 2.6.26 by μ yields the exact equation
−(y⁵/x⁶) dx + (y⁴/x⁵ + y⁴) dy = 0.
We leave it to you to use the method of Section 2.5 to show that this equation has the implicit solution
(y/x)⁵ + y⁵ = k,
which we rewrite as
y = cx(1 + x⁵)^{−1/5}.
on any open rectangle that does not intersect the x axis or, equivalently, that
y² dx + (2xy + 1) dy = 0 (B)
is exact on any such rectangle.
c. Show that
y(xy + 1) = c (C)
is an implicit solution of (B), and explain why every differentiable function y = y(x) other than y ≡ 0 that satisfies (C) is also a solution of (A).
2.
a. Verify that μ(x, y) = 1/(x − y)² is an integrating factor for
−y² dx + x² dy = 0 (A)
on any open rectangle that does not intersect the line y = x or, equivalently, that
−y²/(x − y)² dx + x²/(x − y)² dy = 0 (B)
is exact on any such rectangle.
b. … is an implicit solution of (B), and explain why it is also an implicit solution of (A).
c. Verify that y = x is a solution of (A), even though it can’t be obtained from (C).
Q2.6.2
In Exercises 2.6.3-2.6.16 find an integrating factor; that is, a function of only one variable, and solve the given equation.
3. y dx − x dy = 0
4. 3x²y dx + 2x³ dy = 0
5. 2y³ dx + 3y² dy = 0
9. (6xy² + 2y) dx + (12x²y + 6x + 3) dy = 0
10. y² dx + (xy² + 3xy + 1/y) dy = 0
11. (12x³y + 24x²y²) dx + (9x⁴ + 32x³y + 4y) dy = 0
13. −y dx + (x⁴ − x) dy = 0
Q2.6.3
In Exercises 2.6.17-2.6.23 find an integrating factor of the form μ(x, y) = P (x)Q(y) and solve the given equation.
17. y(1 + 5 ln |x|) dx + 4x ln |x| dy = 0
18. (αy + γxy) dx + (βx + δxy) dy = 0
19. (3x²y³ − y² + y) dx + (−xy + 2x) dy = 0
20. 2y dx + 3(x² + x²y³) dy = 0
Q2.6.4
In Exercises 2.6.24-2.6.27 find an integrating factor and solve the equation. Plot a direction field and some integral curves for
the equation in the indicated rectangular region.
24. (x⁴y³ + y) dx + (x⁵y² − x) dy = 0;  {−1 ≤ x ≤ 1, −1 ≤ y ≤ 1}
25. (3xy + 2y² + y) dx + (x² + 2xy + x + 2y) dy = 0;  {−2 ≤ x ≤ 2, −2 ≤ y ≤ 2}
26. (12xy + 6y³) dx + (9x² + 10xy²) dy = 0;  {−2 ≤ x ≤ 2, −2 ≤ y ≤ 2}
27. (3x²y² + 2y) dx + 2x dy = 0;  {−4 ≤ x ≤ 4, −4 ≤ y ≤ 4}
Q2.6.5
28. Suppose a, b, c, and d are constants such that ad − bc ≠ 0, and let m and n be arbitrary real numbers. Show that
(ax^m y + by^{n+1}) dx + (cx^{m+1} + dxy^n) dy = 0 (2.6E.1)
has an integrating factor of the form μ(x, y) = x^α y^β.
29. … Assume that μ_x and μ_y are continuous for all (x, y), and suppose y = y(x) is a differentiable function such that μ(x, y(x)) = 0 and μ_x(x, y(x)) ≠ 0 for all x in some interval I. Show that y is a solution of (A) on I.
30. According to Theorem 2.1.2, the general solution of the linear nonhomogeneous equation
y' + p(x)y = f(x) (A)
is
y = y_1(x)(c + ∫ f(x)/y_1(x) dx), (B)
where y_1 is any nontrivial solution of the complementary equation y' + p(x)y = 0. In this exercise we obtain this conclusion in a different way. You may find it instructive to apply the method suggested here to solve some of the exercises in Section 2.1.
a. Rewrite (A) as
[p(x)y − f(x)] dx + dy = 0, (C)
and show that μ = ±e^{∫ p(x) dx} is an integrating factor for (C).
b. Multiply (A) through by μ = ±e^{∫ p(x) dx} and verify that the resulting equation can be rewritten as
(μ(x)y)' = μ(x)f(x). (2.6E.2)
Then integrate both sides of this equation and solve for y to show that the general solution of (A) is
y = (1/μ(x))(c + ∫ f(x)μ(x) dx). (2.6E.3)
3.1: Euler's Method
If an initial value problem
y' = f(x, y),  y(x_0) = y_0 (3.1.1)
cannot be solved analytically, it is necessary to resort to numerical methods to obtain useful approximations to a solution of Equation 3.1.1. We will consider such methods in this chapter.
We're interested in computing approximate values of the solution of Equation 3.1.1 at equally spaced points x_0, x_1, …, x_n = b in an interval [x_0, b]. Thus,
x_i = x_0 + ih,  i = 0, 1, …, n,
where
h = (b − x_0)/n.
We will denote the approximate values of the solution at these points by y_0, y_1, …, y_n; thus, y_i is an approximation to y(x_i). We will call
e_i = y(x_i) − y_i
the error at the ith step. Because of the initial condition y(x_0) = y_0, we will always have e_0 = 0. However, in general e_i ≠ 0 if i > 0.
We encounter two sources of error in applying a numerical method to solve an initial value problem:
The formulas defining the method are based on some sort of approximation. Errors due to the inaccuracy of the
approximation are called truncation errors.
Computers do arithmetic with a fixed number of digits, and therefore make errors in evaluating the formulas defining the
numerical methods. Errors due to the computer’s inability to do exact arithmetic are called roundoff errors.
Since a careful analysis of roundoff error is beyond the scope of this book, we will consider only truncation errors.
Euler’s Method
The simplest numerical method for solving Equation 3.1.1 is Euler’s method. This method is so crude that it is seldom used in
practice; however, its simplicity makes it useful for illustrative purposes. Euler’s method is based on the assumption that the
tangent line to the integral curve of Equation 3.1.1 at (x_i, y(x_i)) approximates the integral curve over the interval [x_i, x_{i+1}]. Since the slope of the integral curve of Equation 3.1.1 at (x_i, y(x_i)) is y'(x_i) = f(x_i, y(x_i)), the equation of the tangent line to the integral curve at (x_i, y(x_i)) is
y = y(x_i) + f(x_i, y(x_i))(x − x_i).
Setting x = x_{i+1} = x_i + h shows that
y_{i+1} = y(x_i) + hf(x_i, y(x_i))
approximates y(x_{i+1}). Since y(x_0) = y_0 is known, we can use this with i = 0 to compute
y_1 = y_0 + hf(x_0, y_0).
However, with i = 1 it yields y_2 = y(x_1) + hf(x_1, y(x_1)), which isn't useful, since we don't know y(x_1). Therefore we replace y(x_1) by its approximate value y_1 and redefine
y_2 = y_1 + hf(x_1, y_1).
Having computed y_2, we can compute y_3 = y_2 + hf(x_2, y_2) in the same way. In general, Euler's method starts with the known value y(x_0) = y_0 and computes y_1, y_2, …, y_n successively by the formula
y_{i+1} = y_i + hf(x_i, y_i),  0 ≤ i ≤ n − 1. (3.1.4)
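Equation 3.1.4 translates directly into a short program. The following sketch is a minimal Python implementation (the function and variable names are illustrative); it reproduces the values computed in Example 3.1.1 below.

# A minimal implementation of Euler's method, Equation 3.1.4 (a sketch).
def euler(f, x0, y0, b, n):
    """Approximate the solution of y' = f(x, y), y(x0) = y0 on [x0, b] with n steps."""
    h = (b - x0) / n
    xs, ys = [x0], [y0]
    for i in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))   # y_{i+1} = y_i + h f(x_i, y_i)
        xs.append(xs[-1] + h)
    return xs, ys

# Example 3.1.1: y' + 2y = x**3 * exp(-2x), y(0) = 1, so f(x, y) = -2y + x**3*exp(-2x).
import math
xs, ys = euler(lambda x, y: -2*y + x**3*math.exp(-2*x), 0.0, 1.0, 0.3, 3)
print(ys)   # [1, 0.8, 0.64008..., 0.51260...], as in the example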
The next example illustrates the computational procedure indicated in Euler’s method.
Example 3.1.1
Use Euler’s method with h = 0.1 to find approximate values for the solution of the initial value problem
y' + 2y = x³e^{−2x},  y(0) = 1 (3.1.5)
at x = 0.1, 0.2, 0.3.
Solution
Here f(x, y) = −2y + x³e^{−2x}, x_0 = 0, and y_0 = 1, so
y_1 = y_0 + hf(x_0, y_0) = 1 + (0.1)f(0, 1) = 1 + (0.1)(−2) = 0.8,
y_2 = y_1 + hf(x_1, y_1) = 0.8 + (0.1)f(0.1, 0.8) = 0.8 + (0.1)(−2(0.8) + (0.1)³e^{−0.2}) = 0.640081873,
y_3 = y_2 + hf(x_2, y_2) = 0.640081873 + (0.1)(−2(0.640081873) + (0.2)³e^{−0.4}) = 0.512601754.
We’ve written the details of these computations to ensure that you understand the procedure. However, in the rest of the
examples as well as the exercises in this chapter, we will assume that you can use a programmable calculator or a computer to
carry out the necessary computations.
Example 3.1.2
Use Euler’s method with step sizes h = 0.1 , h = 0.05, and h = 0.025 to find approximate values of the solution of the
initial value problem
y' + 2y = x³e^{−2x},  y(0) = 1
at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution
y = (e^{−2x}/4)(x⁴ + 4). (3.1.6)
… 0.0293 with h = 0.1, …
Based on this scanty evidence, you might guess that the error in approximating the exact solution at a fixed value of x by
Euler’s method is roughly halved when the step size is halved. You can find more evidence to support this conjecture by
examining Table 3.1.2, which lists the approximate values of y_exact − y_approx at x = 0.1, 0.2, …, 1.0.
Example 3.1.3
Tables 3.1.3 and 3.1.4 show analogous results for the nonlinear initial value problem
y' = −2y² + xy + x²,  y(0) = 1, (3.1.7)
except in this case we cannot solve Equation 3.1.7 exactly. The results in the "Exact" column were obtained by using a more accurate numerical method known as the Runge-Kutta method with a small step size. They are exact to eight decimal places.
|y(b) − y_n| ≤ Kh,
2. The error committed in replacing y(x_i) by y_i in Equation 3.1.2 and using Equation 3.1.4 rather than Equation 3.1.2 to compute y_{i+1}.
We will now use Taylor's theorem to estimate T_i, assuming for simplicity that f, f_x, and f_y are continuous and bounded for all (x, y). Then y'' exists and is bounded on [x_0, b]. To see this, we differentiate
y'(x) = f(x, y(x))
to obtain
y''(x) = f_x(x, y(x)) + f_y(x, y(x))y'(x).
Since we assumed that f, f_x and f_y are bounded, there's a constant M such that
|f_x(x, y(x)) + f_y(x, y(x))y'(x)| ≤ M,  x_0 < x < b;
that is, |y''(x)| ≤ M for x_0 < x < b.
where x̃_i is some number between x_i and x_{i+1}. Since y'(x_i) = f(x_i, y(x_i)), this can be written as
y(x_{i+1}) = y(x_i) + hf(x_i, y(x_i)) + (h²/2)y''(x̃_i),
or, equivalently,
y(x_{i+1}) − y(x_i) − hf(x_i, y(x_i)) = (h²/2)y''(x̃_i).
Although it may be difficult to determine the constant M, what is important is that there's an M such that Equation 3.1.10 holds. We say that the local truncation error of Euler's method is of order h², which we write as O(h²).
Note that the magnitude of the local truncation error in Euler's method is determined by the second derivative y'' of the solution of the initial value problem. Therefore the local truncation error will be larger where |y''| is large, or smaller where |y''| is small.
Since the local truncation error for Euler's method is O(h²), it is reasonable to expect that halving h reduces the local truncation error by a factor of 4. This is true, but halving the step size also requires twice as many steps to approximate the solution at a given point. To analyze the overall effect of truncation error in Euler's method, it is useful to derive an equation relating the errors e_{i+1} and e_i. To this end, recall that
y(x_{i+1}) = y(x_i) + hf(x_i, y(x_i)) + T_i (3.1.11)
and
y_{i+1} = y_i + hf(x_i, y_i). (3.1.12)
Subtracting Equation 3.1.12 from Equation 3.1.11 yields
e_{i+1} = e_i + h[f(x_i, y(x_i)) − f(x_i, y_i)] + T_i. (3.1.13)
The last term on the right is the local truncation error at the ith step. The other terms reflect the way errors made at previous
steps affect e_{i+1}. Since |T_i| ≤ Mh²/2, we see from Equation 3.1.13 that
|e_{i+1}| ≤ |e_i| + h|f(x_i, y(x_i)) − f(x_i, y_i)| + Mh²/2. (3.1.14)
Since we assumed that f_y is continuous and bounded, the mean value theorem implies that
f(x_i, y(x_i)) − f(x_i, y_i) = f_y(x_i, y_i*)(y(x_i) − y_i) = f_y(x_i, y_i*)e_i,
where y_i* is between y_i and y(x_i). Therefore
|f(x_i, y(x_i)) − f(x_i, y_i)| ≤ R|e_i|
for some constant R (any upper bound for |f_y|). From this and Equation 3.1.14,
|e_{i+1}| ≤ (1 + Rh)|e_i| + Mh²/2. (3.1.15)
For convenience, let C = 1 + Rh. Since e_0 = y(x_0) − y_0 = 0, applying Equation 3.1.15 repeatedly yields
|e_1| ≤ Mh²/2
|e_2| ≤ C|e_1| + Mh²/2 ≤ (1 + C)Mh²/2
|e_3| ≤ C|e_2| + Mh²/2 ≤ (1 + C + C²)Mh²/2
⋮
|e_n| ≤ C|e_{n−1}| + Mh²/2 ≤ (1 + C + ⋯ + C^{n−1})Mh²/2. (3.1.16)
Recalling the formula for the sum of a geometric series, we see that
1 + C + ⋯ + C^{n−1} = (1 − Cⁿ)/(1 − C) = ((1 + Rh)ⁿ − 1)/(Rh)
(verify), and
(1 + Rh)ⁿ < e^{nRh} = e^{R(b−x_0)}  (since nh = b − x_0).
This and Equation 3.1.16 imply that
|y(b) − y_n| ≤ Kh, (3.1.18)
with
K = M (e^{R(b−x_0)} − 1)/(2R).
Because of Equation 3.1.18 we say that the global truncation error of Euler’s method is of order h , which we write as O(h) .
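The O(h) behavior is easy to observe numerically. The sketch below (plain Python, with illustrative names) applies Euler's method to y' = y, y(0) = 1 on [0, 1] and prints the error at x = 1 for several step sizes; halving h roughly halves the error.

# A quick numerical illustration (a sketch) that the global error of Euler's method is O(h).
import math

def euler_final(f, x0, y0, b, n):
    h = (b - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

for n in (10, 20, 40, 80):
    err = abs(math.e - euler_final(lambda x, y: y, 0.0, 1.0, 1.0, n))
    print(n, err)   # the error is roughly proportional to h = 1/n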
An equation of the form
y' + p(x)y = h(x, y) (3.1.19)
with p ≢ 0 is said to be semilinear. (Of course, Equation 3.1.19 is linear if h is independent of y.) One way to apply Euler's method to an initial value problem
y' + p(x)y = h(x, y),  y(x_0) = y_0 (3.1.20)
is to think of it as y' = f(x, y), y(x_0) = y_0, where f(x, y) = −p(x)y + h(x, y).
However, we can also start by applying variation of parameters to Equation 3.1.20, as in Sections 2.1 and 2.4; thus, we write the solution of Equation 3.1.20 as y = uy_1, where y_1 is a nontrivial solution of the complementary equation y' + p(x)y = 0. Then y = uy_1 is a solution of Equation 3.1.20 if and only if u is a solution of the initial value problem
u' = h(x, uy_1(x))/y_1(x),  u(x_0) = y(x_0)/y_1(x_0). (3.1.21)
We can apply Euler's method to obtain approximate values u_0, u_1, …, u_n of the solution of this initial value problem, and then take
y_i = u_i y_1(x_i)
as approximate values of the solution of Equation 3.1.20. We will call this procedure the Euler semilinear method.
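The procedure translates into a short program. The sketch below is a minimal Python implementation (the names are illustrative, and the caller must supply a nontrivial solution y_1 of the complementary equation); it is applied to the initial value problem of Example 3.1.4.

# A sketch of the Euler semilinear method, Equation 3.1.21.
def euler_semilinear(h_func, y1, x0, y0, b, n):
    """Approximate y' + p(x)y = h(x, y), y(x0) = y0, writing y = u*y1(x)."""
    h = (b - x0) / n
    x, u = x0, y0 / y1(x0)                       # u(x0) = y(x0)/y1(x0)
    ys = [y0]
    for _ in range(n):
        u += h * h_func(x, u * y1(x)) / y1(x)    # Euler step for u' = h(x, u*y1(x))/y1(x)
        x += h
        ys.append(u * y1(x))                     # y_i = u_i * y1(x_i)
    return ys

# Example 3.1.4: y' - 2xy = 1, y(0) = 3; here p(x) = -2x, h(x, y) = 1, and y1(x) = exp(x**2).
import math
ys = euler_semilinear(lambda x, y: 1.0, lambda x: math.exp(x*x), 0.0, 3.0, 2.0, 10)
print(ys[-1])   # approximation to y(2)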
The next two examples show that the Euler and Euler semilinear methods may yield drastically different results.
Example 3.1.4
In Example 2.1.7 we had to leave the solution of the initial value problem
y' − 2xy = 1,  y(0) = 3 (3.1.22)
in the form
y = e^{x²}(3 + ∫_0^x e^{−t²} dt) (3.1.23)
because it was impossible to evaluate this integral exactly in terms of elementary functions. Use step sizes h = 0.2 ,
h = 0.1 , and h = 0.05 to find approximate values of the solution of Equation 3.1.22 at x = 0 , 0.2, 0.4, 0.6, …, 2.0 by
and applying Euler’s method with f (x, y) = 1 + 2xy yields the results shown in Table 3.1.5. Because of the large
differences between the estimates obtained for the three values of h , it would be clear that these results are useless even if
the “exact” values were not included in the table.
assumes very large values on this interval. To see this, we differentiate Equation 3.1.24 to obtain
y''(x) = 2y(x) + 2xy'(x) = 2y(x) + 2x(1 + 2xy(x)) = 2(1 + 2x²)y(x) + 2x,
where the second equality follows again from Equation 3.1.24. Since Equation 3.1.23 implies that y(x) > 3e^{x²} if x > 0,
y''(x) > 6(1 + 2x²)e^{x²} + 2x,  x > 0.
The results listed in Table 3.1.6 are clearly better than those obtained by Euler’s method.
is large on [x_0, b]. In many cases the results obtained by the two methods don't differ appreciably. However, we propose an intuitive way to decide which is the better method: Try both methods with multiple step sizes, as we did in Example 3.1.4, and accept the results obtained by the method for which the approximations change less as the step size decreases.
Example 3.1.5
Applying Euler’s method with step sizes h = 0.1 , h = 0.05, and h = 0.025 to the initial value problem
′
x
y − 2y = , y(1) = 7 (3.1.25)
2
1 +y
yields the results in Table 3.1.8. Since the latter are clearly less dependent on step size than the former, we conclude that
the Euler semilinear method is better than Euler’s method for Equation 3.1.25. This conclusion is supported by comparing
the approximate results obtained by the two methods with the “exact” values of the solution.
Example 3.1.6
Applying Euler’s method with step sizes h = 0.1 , h = 0.05, and h = 0.025 to the initial value problem
y' + 3x²y = 1 + y²,  y(2) = 2 (3.1.26)
on [2, 3] yields the results in Table 3.1.9. Applying the Euler semilinear method with
y = ue^{−x³}  and  u' = e^{x³}(1 + u²e^{−2x³}),  u(2) = 2e^8
yields the results in Table 3.1.10. Noting the close agreement among the three columns of Table 3.1.9 (at least for larger
values of x) and the lack of any such agreement among the columns of Table 3.1.10, we conclude that Euler’s method is
better than the Euler semilinear method for Equation 3.1.26. Comparing the results with the exact values supports this
conclusion.
In the next two sections we will study other numerical methods for solving initial value problems, called the improved Euler method, the midpoint method, Heun's method, and the Runge-Kutta method. If the initial value problem is semilinear as in Equation 3.1.19, we also have the option of using variation of parameters and then applying the given numerical method to the initial value problem Equation 3.1.21 for u. By analogy with the terminology used here, we will call the resulting procedure the improved Euler semilinear method, the midpoint semilinear method, Heun's semilinear method, or the Runge-Kutta semilinear method, as the case may be.
x_0 is the point where the initial condition is imposed and i = 1, 2, 3. The purpose of these exercises is to familiarize you with the computational procedure of Euler's method.
2. y' = y + √(x² + y²),  y(0) = 1;  h = 0.1
3. y' + 3y = x² − 3xy + y²,  y(0) = 2;  h = 0.05
4. y' = (1 + x)/(1 − y²),  y(2) = 3;  h = 0.1
5. y' + x²y = sin xy,  y(1) = π;  h = 0.2
Q3.1.2
6. Use Euler’s method with step sizes h = 0.1 , h = 0.05 , and h = 0.025 to find approximate values of the solution of the
initial value problem
y' + 3y = 7e^{4x},  y(0) = 2 (3.1E.1)
at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution y = e^{4x} + e^{−3x},
which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.1.1.
7. Use Euler’s method with step sizes h = 0.1 , h = 0.05 , and h = 0.025 to find approximate values of the solution of the
initial value problem
y' + (2/x)y = 3/x³ + 1,  y(1) = 1 (3.1E.2)
at x = 1.0, 1.1, 1.2, 1.3, …, 2.0. Compare these approximate values with the values of the exact solution
y = (9 ln x + x³ + 2)/(3x²), (3.1E.3)
which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.1.1.
8. Use Euler’s method with step sizes h = 0.05, h = 0.025, and h = 0.0125 to find approximate values of the solution of the
initial value problem
2 2
y + xy − x
′
y = , y(1) = 2 (3.1E.4)
2
x
at x = 1.0, 1.05, 1.10, 1.15, …, 1.5. Compare these approximate values with the values of the exact solution
2
x(1 + x /3)
y = (3.1E.5)
2
1 − x /3
obtained in Example [example:2.4.3}. Present your results in a table like Table 3.1.1.
9. In Example [example:2.2.3} it was shown that
5 2
y +y = x +x −4 (3.1E.6)
Use Euler’s method with step sizes h = 0.1 , h = 0.05, and h = 0.025 to find approximate values of the solution of (A) at
x = 1.0 , 1.1, 1.2, 1.3, …, 2.0. Present your results in tabular form. To check the error in these approximate values, construct
at x = 0 , 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution y = e −3x
(7x + 6) ,
which can be obtained by the method of Section 2.1. Do you notice anything special about the results? Explain.
Q3.1.3
The linear initial value problems in Exercises 3.1.14–3.1.19 can’t be solved exactly in terms of known elementary functions. In
each exercise, use Euler’s method and the Euler semilinear methods with the indicated step sizes to find approximate values of
the solution of the given initial value problem at 11 equally spaced points (including the endpoints) in the interval.
14. $y' - 2y = \frac{1}{1+x^2}, \quad y(2) = 2$;  h = 0.1, 0.05, 0.025 on [2, 3]

15. $y' + 2xy = x^2, \quad y(0) = 3$  (Exercise 2.1.38);  h = 0.2, 0.1, 0.05 on [0, 2]

16. $y' + \frac{1}{x}y = \frac{\sin x}{x^2}, \quad y(1) = 2$  (Exercise 2.1.39);  h = 0.2, 0.1, 0.05 on [1, 3]

17. $y' + y = \frac{e^{-x}\tan x}{x}, \quad y(1) = 0$  (Exercise 2.1.40);  h = 0.05, 0.025, 0.0125 on [1, 1.5]

18. $y' + \frac{2x}{1+x^2}y = \frac{e^x}{(1+x^2)^2}, \quad y(0) = 1$  (Exercise 2.1.41);  h = 0.2, 0.1, 0.05 on [0, 2]

21. $y' - 4y = \frac{x}{y^2(y+1)}, \quad y(0) = 1$;  h = 0.1, 0.05, 0.025 on [0, 1]

22. $y' + 2y = \frac{x^2}{1+y^2}, \quad y(2) = 1$;  h = 0.1, 0.05, 0.025 on [2, 3]
Q3.1.5
23. Numerical Quadrature. The fundamental theorem of calculus says that if f is continuous on a closed interval [a, b] then it has an antiderivative F such that $F'(x) = f(x)$ on [a, b] and

$\int_a^b f(x)\,dx = F(b) - F(a).$  (A)

This solves the problem of evaluating a definite integral if the integrand f has an antiderivative that can be found and evaluated easily. However, if f doesn't have this property, (A) doesn't provide a useful way to evaluate the definite integral. In this case we must resort to approximate methods. There's a class of such methods called numerical quadrature, where the approximation takes the form

$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i),$  (B)

where $a = x_0 < x_1 < \cdots < x_n = b$; (B) is called a quadrature formula.

a. Derive the quadrature formula

$\int_a^b f(x)\,dx \approx h\sum_{i=0}^{n-1} f(a + ih)$ (where $h = (b-a)/n$)  (C)

by applying Euler's method to the initial value problem $y' = f(x)$, $y(a) = 0$.

b. The quadrature formula (C) is sometimes called the left rectangle rule. Draw a figure that justifies this terminology.
c. For several choices of a , b , and A , apply (C) to f (x) = A with n = 10, 20, 40, 80, 160, 320. Compare your results with
the exact answers and explain what you find.
d. For several choices of a , b , A , and B , apply (C) to f (x) = A + Bx with n = 10 , 20, 40, 80, 160, 320. Compare your
results with the exact answers and explain what you find.
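As a quick check of parts (c) and (d), here is a small Python sketch (ours, not the text's) that applies the left rectangle rule (C) to a constant and to a linear integrand; the exact answers are $A(b-a)$ and $A(b-a) + B(b^2-a^2)/2$.

```python
def left_rectangle(f, a, b, n):
    """Quadrature formula (C): h times the sum of f at the n left endpoints."""
    h = (b - a) / n
    return h * sum(f(a + i*h) for i in range(n))

A, B, a, b = 3.0, 2.0, 1.0, 4.0        # sample choices; any values work the same way
exact_const = A * (b - a)
exact_linear = A * (b - a) + B * (b**2 - a**2) / 2

for n in (10, 20, 40, 80, 160, 320):
    err_const = left_rectangle(lambda x: A, a, b, n) - exact_const
    err_linear = left_rectangle(lambda x: A + B*x, a, b, n) - exact_linear
    print(f"n={n:4d}  error (constant): {err_const:10.3e}  error (linear): {err_linear:10.3e}")
```

The constant case is exact for every n, while the error in the linear case is proportional to h, mirroring the O(h) global error of Euler's method.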
In applying a numerical method to an initial value problem, the expensive part of the computation is the evaluation of f. Therefore we want methods that give good results for a given number of such evaluations. This is what motivates us to look for numerical methods better than Euler's.

To clarify this point, suppose we want to approximate the value of e by applying Euler's method to the initial value problem

$y' = y, \quad y(0) = 1$

(with solution $y = e^x$) on [0, 1], with h = 1/12, 1/24, and 1/48, respectively. Since each step in Euler's method requires one evaluation of f, the number of evaluations of f in each of these attempts is n = 12, 24, and 48, respectively. In each case we accept $y_n$ as an approximation to e. The second column of Table 3.2.1 shows the results. The first column of the table indicates the number of evaluations of f required to obtain the approximation, and the last column contains the value of e rounded to ten significant figures.
In this section we will study the improved Euler method, which requires two evaluations of f at each step. We’ve used this
method with h = 1/6 , 1/12, and 1/24. The required number of evaluations of f were 12, 24, and 48, as in the three
applications of Euler’s method; however, you can see from the third column of Table 3.2.1 that the approximation to e
obtained by the improved Euler method with only 12 evaluations of f is better than the approximation obtained by Euler’s
method with 48 evaluations.
In Section 3.3, we will study the Runge-Kutta method, which requires four evaluations of f at each step. We've used this method with h = 1/3, 1/6, and 1/12. The required number of evaluations of f were again 12, 24, and 48, as in the three applications of Euler's method and the improved Euler method; however, you can see from the fourth column of Table 3.2.1 that the approximation to e obtained by the Runge-Kutta method with only 12 evaluations of f is better than the approximation obtained by the improved Euler method with 48 evaluations.

[Table 3.2.1: columns n, Euler, Improved Euler, Runge-Kutta, Exact; data not shown.]
that is, $m_i$ is the average of the slopes of the tangents to the integral curve at the endpoints of $[x_i, x_{i+1}]$. Using this average slope in Euler's step suggests the formula

$y_{i+1} = y_i + \frac{h}{2}\bigl(f(x_i, y_i) + f(x_{i+1}, y(x_{i+1}))\bigr).$

However, this still will not work, because we do not know $y(x_{i+1})$, which appears on the right. We overcome this by replacing $y(x_{i+1})$ by $y_i + hf(x_i, y_i)$, the value that the Euler method would assign to $y_{i+1}$. Thus, the improved Euler method starts with the known value $y(x_0) = y_0$ and computes $y_1$, $y_2$, …, $y_n$ successively with the formula

$y_{i+1} = y_i + \frac{h}{2}\bigl(f(x_i, y_i) + f(x_{i+1}, y_i + hf(x_i, y_i))\bigr).$  (3.2.4)

The computation indicated here can be conveniently organized as follows: given $y_i$, compute

$k_{1i} = f(x_i, y_i),$
$k_{2i} = f(x_{i+1}, y_i + hk_{1i}),$
$y_{i+1} = y_i + \frac{h}{2}(k_{1i} + k_{2i}).$

The improved Euler method requires two evaluations of f(x, y) per step, while Euler's method requires only one. However, we will see at the end of this section that if f satisfies appropriate assumptions, the local truncation error with the improved Euler method is $O(h^3)$, rather than $O(h^2)$ as with Euler's method. Therefore the global truncation error with the improved Euler method is $O(h^2)$; that is, it is a second order method.

We note that the magnitude of the local truncation error in the improved Euler method and other methods discussed in this section is determined by the third derivative $y'''$ of the solution of the initial value problem. Therefore the local truncation error will be larger where $|y'''|$ is large, or smaller where $|y'''|$ is small.

The next example, which deals with the initial value problem considered in Example 3.1.2, illustrates the computational procedure indicated in the improved Euler method.

Example 3.2.1

Use the improved Euler method with h = 0.1 to find approximate values of the solution of the initial value problem

$y' + 2y = x^3e^{-2x}, \quad y(0) = 1$  (3.2.5)

at x = 0.1, 0.2, 0.3.

Solution

Rewriting Equation 3.2.5 as $y' = -2y + x^3e^{-2x}$, $y(0) = 1$, we apply the improved Euler method with $f(x, y) = -2y + x^3e^{-2x}$:

$k_{10} = f(x_0, y_0) = f(0, 1) = -2,$
$k_{20} = f(x_1, y_0 + hk_{10}) = f(0.1, 0.8) = -2(0.8) + (0.1)^3e^{-0.2} = -1.599181269,$
$y_1 = y_0 + \frac{h}{2}(k_{10} + k_{20}) = 1 + (0.05)(-2 - 1.599181269) = 0.820040937,$

$k_{11} = f(x_1, y_1) = f(0.1, 0.820040937) = -2(0.820040937) + (0.1)^3e^{-0.2} = -1.639263142,$
$k_{21} = f(x_2, y_1 + hk_{11}) = f(0.2, 0.656114622) = -2(0.656114622) + (0.2)^3e^{-0.4} = -1.306866684,$
$y_2 = y_1 + \frac{h}{2}(k_{11} + k_{21}) = 0.820040937 + (0.05)(-1.639263142 - 1.306866684) = 0.672734445,$

$k_{12} = f(x_2, y_2) = f(0.2, 0.672734445) = -2(0.672734445) + (0.2)^3e^{-0.4} = -1.340106330,$
$k_{22} = f(x_3, y_2 + hk_{12}) = f(0.3, 0.538723812) = -2(0.538723812) + (0.3)^3e^{-0.6} = -1.062629710,$
$y_3 = y_2 + \frac{h}{2}(k_{12} + k_{22}) = 0.672734445 + (0.05)(-1.340106330 - 1.062629710) = 0.552597643.$
Example 3.2.2

Table 3.2.2 shows results of using the improved Euler method with step sizes h = 0.1 and h = 0.05 to find approximate values of the solution of the initial value problem

$y' + 2y = x^3e^{-2x}, \quad y(0) = 1$

at x = 0, 0.1, 0.2, 0.3, …, 1.0. For comparison, it also shows the corresponding approximate values obtained with Euler's method in Example 3.1.2, and the values of the exact solution

$y = \frac{e^{-2x}}{4}(x^4 + 4).$

The results obtained by the improved Euler method with h = 0.1 are better than those obtained by Euler's method with h = 0.05.

Example 3.2.4

Use step sizes h = 0.2, h = 0.1, and h = 0.05 to find approximate values of the solution of

$y' - 2xy = 1, \quad y(0) = 3$  (3.2.6)

at x = 0, 0.2, 0.4, 0.6, …, 2.0 by (a) the improved Euler method; (b) the improved Euler semilinear method. (We used Euler's method and the Euler semilinear method on this problem in Example 3.1.4.)

Solution

a. Rewriting Equation 3.2.6 as

$y' = 1 + 2xy, \quad y(0) = 3$

and applying the improved Euler method with $f(x, y) = 1 + 2xy$ yields the results shown in Table 3.2.4.

b. Since $y_1 = e^{x^2}$ is a solution of the complementary equation $y' - 2xy = 0$, we can apply the improved Euler semilinear method to Equation 3.2.6, with

$y = ue^{x^2} \quad\text{and}\quad u' = e^{-x^2}, \quad u(0) = 3.$
The results listed in Table 3.2.5 are clearly better than those obtained by the improved Euler method.
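A minimal Python sketch (ours) of part (b): apply the improved Euler method to the u-equation $u' = e^{-x^2}$, $u(0) = 3$, and recover $y = ue^{x^2}$ at x = 2; the direct improved Euler results for $y' = 1 + 2xy$ are printed alongside for comparison.

```python
import math

def improved_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h*k1)
        y += (h/2) * (k1 + k2)
        x += h
    return y

f = lambda x, y: 1 + 2*x*y              # Equation 3.2.6 rewritten as y' = 1 + 2xy
g = lambda x, u: math.exp(-x**2)        # semilinear form: u' = e^{-x^2}, y = u*e^{x^2}

for h in (0.2, 0.1, 0.05):
    n = round(2.0 / h)
    y_direct = improved_euler(f, 0.0, 3.0, h, n)
    y_semi = improved_euler(g, 0.0, 3.0, h, n) * math.exp(2.0**2)
    print(f"h={h:5.2f}  improved Euler: {y_direct:14.6f}  improved Euler semilinear: {y_semi:14.6f}")
```

As in the tables, the semilinear results should vary far less with h than the direct results.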
In deriving these methods we will assume that f, $f_x$, $f_y$, $f_{xx}$, $f_{yy}$, and $f_{xy}$ are continuous and bounded for all (x, y). This implies that if y is the solution of Equation 3.2.1, then $y''$ and $y'''$ are bounded (Exercise 31).

We begin by approximating the integral curve of Equation 3.2.1 at $(x_i, y(x_i))$ by the line through $(x_i, y(x_i))$ with slope

$m_i = \sigma y'(x_i) + \rho y'(x_i + \theta h),$

where $\sigma$, $\rho$, and $\theta$ are constants that we will soon specify; however, we insist at the outset that $0 < \theta \le 1$, so that $x_i < x_i + \theta h \le x_{i+1}$. The equation of the approximating line is

$y = y(x_i) + m_i(x - x_i) = y(x_i) + [\sigma y'(x_i) + \rho y'(x_i + \theta h)](x - x_i).$  (3.2.7)

By Taylor's theorem,

$y(x_{i+1}) = y(x_i) + hy'(x_i) + \frac{h^2}{2}y''(x_i) + \frac{h^3}{6}y'''(\hat{x}_i),$

where $\hat{x}_i$ is in $(x_i, x_{i+1})$. Since $y'''$ is bounded, this implies that

$y(x_{i+1}) - y(x_i) - hy'(x_i) - \frac{h^2}{2}y''(x_i) = O(h^3).$

Again by Taylor's theorem,

$y'(x_i + \theta h) = y'(x_i) + \theta h\,y''(x_i) + \frac{(\theta h)^2}{2}y'''(\bar{x}_i),$

where $\bar{x}_i$ is in $(x_i, x_i + \theta h)$. Since $y'''$ is bounded, this implies that

$y'(x_i + \theta h) = y'(x_i) + \theta h\,y''(x_i) + O(h^2).$

Substituting this into Equation 3.2.9 and noting that the sum of two $O(h^2)$ terms is again $O(h^2)$ shows that $E_i = O(h^3)$ if

$(\sigma + \rho)y'(x_i) + \rho\theta h\,y''(x_i) = y'(x_i) + \frac{h}{2}y''(x_i),$

which is true if

$\sigma + \rho = 1 \quad\text{and}\quad \rho\theta = \frac{1}{2}.$  (3.2.10)

Since $y' = f(x, y)$, we can now conclude from Equation 3.2.8 that

$y(x_{i+1}) = y(x_i) + h\left[\sigma f(x_i, y(x_i)) + \rho f(x_i + \theta h, y(x_i + \theta h))\right] + O(h^3)$  (3.2.11)

if $\sigma$, $\rho$, and $\theta$ satisfy Equation 3.2.10. However, this formula would not be useful even if we knew $y(x_i)$ exactly (as we would for i = 0), since we still wouldn't know $y(x_i + \theta h)$ exactly. To overcome this difficulty, we again use Taylor's theorem to write

$y(x_i + \theta h) = y(x_i) + \theta h\,y'(x_i) + \frac{(\theta h)^2}{2}y''(\tilde{x}_i),$

where $\tilde{x}_i$ is in $(x_i, x_i + \theta h)$. Since $y'(x_i) = f(x_i, y(x_i))$ and $y''$ is bounded, this implies that

$|y(x_i + \theta h) - y(x_i) - \theta h f(x_i, y(x_i))| \le Kh^2$  (3.2.12)

for some constant K. Since $f_y$ is bounded, the mean value theorem implies that replacing $y(x_i + \theta h)$ by $y(x_i) + \theta h f(x_i, y(x_i))$ on the right side of Equation 3.2.11 changes it only by $O(h^3)$; that is,

$y(x_{i+1}) = y(x_i) + h\left[\sigma f(x_i, y(x_i)) + \rho f(x_i + \theta h, y(x_i) + \theta h f(x_i, y(x_i)))\right] + O(h^3).$

Therefore the method

$y_{i+1} = y_i + h\left[\sigma f(x_i, y_i) + \rho f(x_i + \theta h, y_i + \theta h f(x_i, y_i))\right]$

has $O(h^3)$ local truncation error if $\sigma$, $\rho$, and $\theta$ satisfy Equation 3.2.10. Substituting $\sigma = 1 - \rho$ and $\theta = 1/2\rho$ here yields

$y_{i+1} = y_i + h\left[(1-\rho)f(x_i, y_i) + \rho f\!\left(x_i + \frac{h}{2\rho},\, y_i + \frac{h}{2\rho}f(x_i, y_i)\right)\right].$  (3.2.13)
The computation indicated here can be conveniently organized as follows: given $y_i$, compute

$k_{1i} = f(x_i, y_i),$
$k_{2i} = f\!\left(x_i + \frac{h}{2\rho},\, y_i + \frac{h}{2\rho}k_{1i}\right),$
$y_{i+1} = y_i + h[(1-\rho)k_{1i} + \rho k_{2i}].$

Consistent with our requirement that $0 < \theta \le 1$, we require that $\rho \ge 1/2$. Letting $\rho = 1/2$ in Equation 3.2.13 yields the improved Euler method, Equation 3.2.4. Letting $\rho = 3/4$ yields Heun's method,

$y_{i+1} = y_i + h\left[\frac{1}{4}f(x_i, y_i) + \frac{3}{4}f\!\left(x_i + \frac{2}{3}h,\, y_i + \frac{2}{3}hf(x_i, y_i)\right)\right],$

which can be organized as

$k_{1i} = f(x_i, y_i),$
$k_{2i} = f\!\left(x_i + \frac{2h}{3},\, y_i + \frac{2h}{3}k_{1i}\right),$
$y_{i+1} = y_i + \frac{h}{4}(k_{1i} + 3k_{2i}).$

Letting $\rho = 1$ yields the midpoint method,

$k_{1i} = f(x_i, y_i),$
$k_{2i} = f\!\left(x_i + \frac{h}{2},\, y_i + \frac{h}{2}k_{1i}\right),$
$y_{i+1} = y_i + hk_{2i}.$
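The three choices of ρ differ only in where the second slope is sampled, so they can share one routine. This Python sketch (ours, with hypothetical function names) implements the one-parameter family (3.2.13) and runs it with ρ = 1/2 (improved Euler), ρ = 3/4 (Heun), and ρ = 1 (midpoint) on the test problem $y' = y$, $y(0) = 1$.

```python
import math

def rho_family_step(f, x, y, h, rho):
    """One step of the family (3.2.13); rho = 1/2, 3/4, 1 give improved Euler, Heun, midpoint."""
    k1 = f(x, y)
    k2 = f(x + h/(2*rho), y + h/(2*rho)*k1)
    return y + h*((1 - rho)*k1 + rho*k2)

def solve(f, x0, y0, h, n, rho):
    x, y = x0, y0
    for _ in range(n):
        y = rho_family_step(f, x, y, h, rho)
        x += h
    return y

f = lambda x, y: y   # y' = y, y(0) = 1, so y(1) = e
for name, rho in (("improved Euler", 0.5), ("Heun", 0.75), ("midpoint", 1.0)):
    approx = solve(f, 0.0, 1.0, 1.0/12, 12, rho)
    print(f"{name:15s} rho={rho:4.2f}  y(1) ≈ {approx:.10f}  error = {approx - math.e:.2e}")
```

All three members of the family are second order, so their errors here are of comparable size.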
Examples involving the midpoint method and Heun's method are given in Exercises 3.2.23–3.2.30.
Q3.2.1
In Exercises 3.2.1–3.2.5 use the improved Euler method to find approximate values of the solution of the given initial value problem at the points $x_i = x_0 + ih$, where $x_0$ is the point where the initial condition is imposed and i = 1, 2, 3.

1. $y' = 2x^2 + 3y^2 - 2, \quad y(2) = 1$;  h = 0.05

2. $y' = y + \sqrt{x^2 + y^2}, \quad y(0) = 1$;  h = 0.1

3. $y' + 3y = x^2 - 3xy + y^2, \quad y(0) = 2$;  h = 0.05

4. $y' = \frac{1+x}{1-y^2}, \quad y(2) = 3$;  h = 0.1

5. $y' + x^2y = \sin xy, \quad y(1) = \pi$;  h = 0.2

Q3.2.2

6. Use the improved Euler method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of the initial value problem

$y' + 3y = 7e^{4x}, \quad y(0) = 2$

at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution $y = e^{4x} + e^{-3x}$, which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.2.2.

7. Use the improved Euler method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of the initial value problem

$y' + \frac{2}{x}y = \frac{3}{x^3} + 1, \quad y(1) = 1$

at x = 1.0, 1.1, 1.2, 1.3, …, 2.0. Compare these approximate values with the values of the exact solution

$y = \frac{1}{3x^2}\left(9\ln x + x^3 + 2\right),$

which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.2.2.

8. Use the improved Euler method with step sizes h = 0.05, h = 0.025, and h = 0.0125 to find approximate values of the solution of the initial value problem

$y' = \frac{y^2 + xy - x^2}{x^2}, \quad y(1) = 2$

at x = 1.0, 1.05, 1.10, 1.15, …, 1.5. Compare these approximate values with the values of the exact solution

$y = \frac{x(1 + x^2/3)}{1 - x^2/3}$

obtained in Example 2.4.3. Present your results in a table like Table 3.2.2.

9. In Example 2.2.3 it was shown that

$y^5 + y = x^2 + x - 4$

is an implicit solution of an initial value problem (A).

… Use the improved Euler method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of (A) at x = 1.0, 1.1, 1.2, 1.3, …, 2.0. Present your results in tabular form. To check the error in these approximate values, construct another table of values of the residual

$R(x, y) = x^4y^3 + x^2y^5 + 2xy - 4.$

…

$y' + 3y = e^{-3x}(1 - 2x), \quad y(0) = 2,$

at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution $y = e^{-3x}(2 + x - x^2)$, which can be obtained by the method of Section 2.1. Do you notice anything special about the results? Explain.
Q3.2.3
The linear initial value problems in Exercises 3.2.14-3.2.19 can’t be solved exactly in terms of known elementary functions. In
each exercise use the improved Euler and improved Euler semilinear methods with the indicated step sizes to find approximate
values of the solution of the given initial value problem at 11 equally spaced points (including the endpoints) in the interval.
14. $y' - 2y = \frac{1}{1+x^2}, \quad y(2) = 2$;  h = 0.1, 0.05, 0.025 on [2, 3]

15. $y' + 2xy = x^2, \quad y(0) = 3$;  h = 0.2, 0.1, 0.05 on [0, 2] (Exercise 2.1.38)

16. $y' + \frac{1}{x}y = \frac{\sin x}{x^2}, \quad y(1) = 2$;  h = 0.2, 0.1, 0.05 on [1, 3] (Exercise 2.1.39)

17. $y' + y = \frac{e^{-x}\tan x}{x}, \quad y(1) = 0$;  h = 0.05, 0.025, 0.0125 on [1, 1.5] (Exercise 2.1.40)

18. $y' + \frac{2x}{1+x^2}y = \frac{e^x}{(1+x^2)^2}, \quad y(0) = 1$;  h = 0.2, 0.1, 0.05 on [0, 2] (Exercise 2.1.41)
Q3.2.4
In Exercises 3.2.20-3.2.22 use the improved Euler method and the improved Euler semilinear method with the indicated step
sizes to find approximate values of the solution of the given initial value problem at 11 equally spaced points (including the
endpoints) in the interval.
20. $y' + 3y = xy^2(y + 1), \quad y(0) = 1$;  h = 0.1, 0.05, 0.025 on [0, 1]

21. $y' - 4y = \frac{x}{y^2(y+1)}, \quad y(0) = 1$;  h = 0.1, 0.05, 0.025 on [0, 1]

22. $y' + 2y = \frac{x^2}{1+y^2}, \quad y(2) = 1$;  h = 0.1, 0.05, 0.025 on [2, 3]
Q3.2.5
23. Do Exercise 3.2E.7 with "improved Euler method" replaced by "midpoint method."
24. Do Exercise 3.2E.7 with "improved Euler method" replaced by "Heun's method."
25. Do Exercise 3.2E.8 with "improved Euler method" replaced by "midpoint method."
26. Do Exercise 3.2E.8 with "improved Euler method" replaced by "Heun's method."
27. Do Exercise 3.2E.11 with "improved Euler method" replaced by "midpoint method."
28. Do Exercise 3.2E.11 with "improved Euler method" replaced by "Heun's method."
29. Do Exercise 3.2E.12 with "improved Euler method" replaced by "midpoint method."
30. Do Exercise 3.2E.12 with "improved Euler method" replaced by "Heun's method."
31. Show that if f, $f_x$, $f_y$, $f_{xx}$, $f_{yy}$, and $f_{xy}$ are continuous and bounded for all (x, y) and y is the solution of the initial value problem

$y' = f(x, y), \quad y(x_0) = y_0,$

then $y''$ and $y'''$ are bounded.

… a. Derive the quadrature formula

$\int_a^b f(x)\,dx \approx \frac{h}{2}f(a) + h\sum_{i=1}^{n-1}f(a + ih) + \frac{h}{2}f(b)$ (where $h = (b-a)/n$)  (A)

by applying the improved Euler method to the initial value problem

$y' = f(x), \quad y(a) = 0.$

b. The quadrature formula (A) is called the trapezoid rule. Draw a figure that justifies this terminology.

c. For several choices of a, b, A, and B, apply (A) to f(x) = A + Bx, with n = 10, 20, 40, 80, 160, 320. Compare your results with the exact answers and explain what you find.

d. For several choices of a, b, A, B, and C, apply (A) to $f(x) = A + Bx + Cx^2$, with n = 10, 20, 40, 80, 160, 320. Compare your results with the exact answers and explain what you find.
$y' = f(x, y), \quad y(x_0) = y_0.$  (3.3.1)

Moreover, it can be shown that a method with local truncation error $O(h^{k+1})$ has global truncation error $O(h^k)$. In Sections 3.1 and 3.2 we studied numerical methods where k = 1 and k = 2. We'll skip methods for which k = 3 and proceed to the Runge-Kutta method, the most widely used method, for which k = 4. The magnitude of the local truncation error is determined by the fifth derivative $y^{(5)}$ of the solution of the initial value problem. Therefore the local truncation error will be larger where $|y^{(5)}|$ is large, or smaller where $|y^{(5)}|$ is small. The Runge-Kutta method computes approximate values $y_1$, $y_2$, …, $y_n$ as follows: given $y_i$, compute

$k_{1i} = f(x_i, y_i),$
$k_{2i} = f\!\left(x_i + \frac{h}{2},\, y_i + \frac{h}{2}k_{1i}\right),$
$k_{3i} = f\!\left(x_i + \frac{h}{2},\, y_i + \frac{h}{2}k_{2i}\right),$
$k_{4i} = f(x_i + h,\, y_i + hk_{3i}),$

and

$y_{i+1} = y_i + \frac{h}{6}(k_{1i} + 2k_{2i} + 2k_{3i} + k_{4i}).$
The next example, which deals with the initial value problem considered in Examples 3.2.1 and 3.2.2, illustrates the computational procedure indicated in the Runge-Kutta method.

Example 3.3.1

Use the Runge-Kutta method with h = 0.1 to find approximate values for the solution of the initial value problem

$y' + 2y = x^3e^{-2x}, \quad y(0) = 1,$  (3.3.2)

at x = 0.1, 0.2.

Solution

Again we rewrite Equation 3.3.2 as

$y' = -2y + x^3e^{-2x}, \quad y(0) = 1,$

so that $f(x, y) = -2y + x^3e^{-2x}$. The Runge-Kutta method yields

$k_{10} = f(x_0, y_0) = f(0, 1) = -2,$
$k_{20} = f(.05, .9) = -2(.9) + (.05)^3e^{-.1} = -1.799886895,$
$k_{30} = f(.05, .910005655) = -2(.910005655) + (.05)^3e^{-.1} = -1.819898206,$
$k_{40} = f(.1, .818010179) = -2(.818010179) + (.1)^3e^{-.2} = -1.635201628,$

$y_1 = y_0 + \frac{h}{6}(k_{10} + 2k_{20} + 2k_{30} + k_{40})$
$\quad = 1 + \frac{.1}{6}(-2 + 2(-1.799886895) + 2(-1.819898206) - 1.635201628) = .818753803,$

$k_{11} = f(x_1, y_1) = f(.1, .818753803) = -2(.818753803) + (.1)^3e^{-.2} = -1.636688875,$
$k_{21} = f(.15, .736919359) = -2(.736919359) + (.15)^3e^{-.3} = -1.471338457,$
$k_{31} = f(.15, .745186880) = -2(.745186880) + (.15)^3e^{-.3} = -1.487873498,$
$k_{41} = f(.2, .669966453) = -2(.669966453) + (.2)^3e^{-.4} = -1.334570346,$

$y_2 = y_1 + \frac{h}{6}(k_{11} + 2k_{21} + 2k_{31} + k_{41})$
$\quad = .818753803 + \frac{.1}{6}(-1.636688875 + 2(-1.471338457) + 2(-1.487873498) - 1.334570346) = .670592417.$
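Here is a minimal Python sketch (ours) of the classical Runge-Kutta step applied to this problem; it should reproduce $y_1 \approx 0.818753803$ and $y_2 \approx 0.670592417$.

```python
import math

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    values = [y]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2)
        k4 = f(x + h, y + h*k3)
        y += (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        x += h
        values.append(y)
    return values

f = lambda x, y: -2*y + x**3 * math.exp(-2*x)
for i, y in enumerate(rk4(f, 0.0, 1.0, 0.1, 2)):
    print(f"x = {0.1*i:.1f}   y = {y:.9f}")
```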
Example 3.3.2

Table 3.3.1 shows results of using the Runge-Kutta method with step sizes h = 0.1 and h = 0.05 to find approximate values of the solution of the initial value problem

$y' + 2y = x^3e^{-2x}, \quad y(0) = 1$

at x = 0, 0.1, 0.2, 0.3, …, 1.0. For comparison, it also shows the corresponding approximate values obtained with the improved Euler method in Example 3.2.2, and the values of the exact solution

$y = \frac{e^{-2x}}{4}(x^4 + 4).$

The results obtained by the Runge-Kutta method are clearly better than those obtained by the improved Euler method; in fact, the results obtained by the Runge-Kutta method with h = 0.1 are better than those obtained by the improved Euler method with h = 0.05.

[Table 3.3.1: columns x; Improved Euler and Runge-Kutta approximations; Exact. Data not shown.]

Example 3.3.3

Table 3.3.2 shows analogous results for the nonlinear initial value problem

$y' = -2y^2 + xy + x^2, \quad y(0) = 1.$

Example 3.3.4

Tables 3.3.3 and 3.3.4 show results obtained by applying the Runge-Kutta and Runge-Kutta semilinear methods to the initial value problem

$y' - 2xy = 1, \quad y(0) = 3.$

So far we have considered numerical methods for solving Equation 3.3.3 on an interval $[x_0, b]$, for which $x_0$ is the left endpoint. We haven't discussed numerical methods for solving Equation 3.3.3 on an interval $[a, x_0]$, for which $x_0$ is the right endpoint. To be specific, how can we obtain approximate values $y_{-1}$, $y_{-2}$, …, $y_{-n}$ of the solution of Equation 3.3.3 at $x_0 - h$, …, $x_0 - nh$, where $h = (x_0 - a)/n$? Here's the answer to this question:

Question

Consider the initial value problem

$z' = -f(-x, z), \quad z(-x_0) = y_0,$  (3.3.4)

on the interval $[-x_0, -a]$, for which $-x_0$ is the left endpoint. Use a numerical method to obtain approximate values $z_1$, $z_2$, …, $z_n$ of the solution of (3.3.4) at $-x_0 + h$, $-x_0 + 2h$, …, $-x_0 + nh = -a$. Then $y_{-1} = z_1$, $y_{-2} = z_2$, …, $y_{-n} = z_n$ are approximate values of the solution of Equation 3.3.3 at $x_0 - h$, $x_0 - 2h$, …, $x_0 - nh = a$.

The justification for this answer is sketched in Exercise 3.3.23. Note how easy it is to change the given problem Equation 3.3.3 to the modified problem Equation 3.3.4: first replace f by −f and then replace x, $x_0$, and y by −x, $-x_0$, and z, respectively.
at x = 0, 0.1, 0.2, …, 1.

Solution

We first rewrite Equation 3.3.5 in the form Equation 3.3.3 as

$y' = \frac{2x+3}{(y-1)^2}, \quad y(1) = 4.$  (3.3.6)

Since the initial condition y(1) = 4 is imposed at the right endpoint of the interval [0, 1], we apply the Runge-Kutta method to the initial value problem

$z' = \frac{2x-3}{(z-1)^2}, \quad z(-1) = 4$  (3.3.7)

on the interval [−1, 0]. (You should verify that Equation 3.3.7 is related to Equation 3.3.6 as Equation 3.3.4 is related to Equation 3.3.3.) Table 3.3.5 shows the results. Reversing the order of the rows in Table 3.3.5 and changing the signs of the values of x yields the first two columns of Table 3.3.6. The last column of Table 3.3.6 shows the exact values of y, which are given by

$y = 1 + (3x^2 + 9x + 15)^{1/3}.$

(Since the differential equation in Equation 3.3.6 is separable, this formula can be obtained by the method of Section 2.2.)

x        z
-1.0     4.000000000
-0.9     3.944536474
-0.8     3.889298649
-0.7     3.834355648
-0.6     3.779786399
-0.5     3.725680888
-0.4     3.672141529
-0.3     3.619284615
-0.2     3.567241862
-0.1     3.516161955
0.0      3.466212070

Table 3.3.5: Numerical solution of $z' = \frac{2x-3}{(z-1)^2}$, $z(-1) = 4$, on [−1, 0].

[Table 3.3.6: columns x, y (approximate), Exact; data not shown.]
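A minimal Python sketch (ours) of this reversal trick: integrate the transformed problem (3.3.7) forward on [−1, 0] with the Runge-Kutta method, then flip the sign of x to recover approximations on [0, 1] and compare with the exact solution $y = 1 + (3x^2 + 9x + 15)^{1/3}$.

```python
def rk4_path(f, x0, y0, h, n):
    """RK4, returning the full lists of x and y values."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2)
        k4 = f(x + h, y + h*k3)
        y += (h/6)*(k1 + 2*k2 + 2*k3 + k4)
        x += h
        xs.append(x); ys.append(y)
    return xs, ys

# Transformed problem (3.3.7): z' = (2x - 3)/(z - 1)^2, z(-1) = 4, on [-1, 0].
g = lambda x, z: (2*x - 3) / (z - 1)**2
xs, zs = rk4_path(g, -1.0, 4.0, 0.1, 10)

exact = lambda x: 1 + (3*x**2 + 9*x + 15)**(1/3)
for x, z in zip(reversed(xs), reversed(zs)):
    xr = -round(x, 10)                  # y(xr) is approximated by z(x) with xr = -x
    print(f"x = {xr:4.1f}   y ≈ {z:.9f}   exact = {exact(xr):.9f}")
```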
Q3.3.1
In Exercises 3.3.1–3.3.5 use the Runge-Kutta method to find approximate values of the solution of the given initial value problem at the points $x_i = x_0 + ih$, where $x_0$ is the point where the initial condition is imposed and i = 1, 2.

1. $y' = 2x^2 + 3y^2 - 2, \quad y(2) = 1$;  h = 0.05

2. $y' = y + \sqrt{x^2 + y^2}, \quad y(0) = 1$;  h = 0.1

3. $y' + 3y = x^2 - 3xy + y^2, \quad y(0) = 2$;  h = 0.05

4. $y' = \frac{1+x}{1-y^2}, \quad y(2) = 3$;  h = 0.1

5. $y' + x^2y = \sin xy, \quad y(1) = \pi$;  h = 0.2

Q3.3.2

6. Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of the initial value problem

$y' + 3y = 7e^{4x}, \quad y(0) = 2,$  (3.3E.1)

at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution $y = e^{4x} + e^{-3x}$, which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.3.1.

7. Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of the initial value problem

$y' + \frac{2}{x}y = \frac{3}{x^3} + 1, \quad y(1) = 1$  (3.3E.2)

at x = 1.0, 1.1, 1.2, 1.3, …, 2.0. Compare these approximate values with the values of the exact solution

$y = \frac{1}{3x^2}\left(9\ln x + x^3 + 2\right),$  (3.3E.3)

which can be obtained by the method of Section 2.1. Present your results in a table like Table 3.3.1.

8. Use the Runge-Kutta method with step sizes h = 0.05, h = 0.025, and h = 0.0125 to find approximate values of the solution of the initial value problem

$y' = \frac{y^2 + xy - x^2}{x^2}, \quad y(1) = 2$  (3.3E.4)

at x = 1.0, 1.05, 1.10, 1.15, …, 1.5. Compare these approximate values with the values of the exact solution

$y = \frac{x(1 + x^2/3)}{1 - x^2/3},$  (3.3E.5)

Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of (A) at x = 2.0, 2.1, 2.2, 2.3, …, 3.0. Present your results in tabular form. To check the error in these approximate values, construct another table of values of the residual

$R(x, y) = y^5 + y - x^2 - x + 4.$  (3.3E.7)

Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of (A) at x = 1.0, 1.1, 1.2, 1.3, …, 2.0. Present your results in tabular form. To check the error in these approximate values, construct another table of values of the residual

$R(x, y) = x^4y^3 + x^2y^5 + 2xy - 4.$  (3.3E.9)

… at x = 0, 0.1, 0.2, 0.3, …, 1.0. Compare these approximate values with the values of the exact solution $y = -e^{-3x}(3 - x + 2x^2 - x^3 + x^4)$, which can be obtained by the method of Section 2.1. Do you notice anything special about the results? Explain.
Q3.3.3
The linear initial value problems in Exercises 3.3.14–3.3.19 can't be solved exactly in terms of known elementary functions. In each exercise use the Runge-Kutta and the Runge-Kutta semilinear methods with the indicated step sizes to find approximate values of the solution of the given initial value problem at 11 equally spaced points (including the endpoints) in the interval.

15. $y' + 2xy = x^2, \quad y(0) = 3$;  h = 0.2, 0.1, 0.05 on [0, 2] (Exercise 2.1.38)

16. $y' + \frac{1}{x}y = \frac{\sin x}{x^2}, \quad y(1) = 2$;  h = 0.2, 0.1, 0.05 on [1, 3] (Exercise 2.1.39)

17. $y' + y = \frac{e^{-x}\tan x}{x}, \quad y(1) = 0$;  h = 0.05, 0.025, 0.0125 on [1, 1.5] (Exercise 2.1.40)

18. $y' + \frac{2x}{1+x^2}y = \frac{e^x}{(1+x^2)^2}, \quad y(0) = 1$;  h = 0.2, 0.1, 0.05 on [0, 2] (Exercise 2.1.41)

19. $xy' + (x+1)y = e^x, \quad y(1) = 2$;  h = 0.05, 0.025, 0.0125 on [1, 1.5] (Exercise 2.1.42)
Q3.3.4
In Exercises 3.3.20–3.3.22 use the Runge-Kutta method and the Runge-Kutta semilinear method with the indicated step sizes
to find approximate values of the solution of the given initial value problem at 11 equally spaced points (including the
endpoints) in the interval.
20. $y' + 3y = xy^2(y + 1), \quad y(0) = 1$;  h = 0.1, 0.05, 0.025 on [0, 1]

21. $y' - 4y = \frac{x}{y^2(y+1)}, \quad y(0) = 1$;  h = 0.1, 0.05, 0.025 on [0, 1]

22. $y' + 2y = \frac{x^2}{1+y^2}, \quad y(2) = 1$;  h = 0.1, 0.05, 0.025 on [2, 3]
Q3.3.5
23. Suppose $a < x_0$, so that $-x_0 < -a$. Use the chain rule to show that if z is a solution of

$z' = -f(-x, z), \quad z(-x_0) = y_0,$  (3.3E.13)

on $[-x_0, -a]$, then $y(x) = z(-x)$ is a solution of

$y' = f(x, y), \quad y(x_0) = y_0,$

on $[a, x_0]$.

24. Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of

$y' = \frac{y^2 + xy - x^2}{x^2}, \quad y(2) = -1$  (3.3E.15)

at x = 1.1, 1.2, 1.3, …, 2.0. Compare these approximate values with the values of the exact solution

$y = \frac{x(4 - 3x^2)}{4 + 3x^2},$  (3.3E.16)
at x = 0, 0.1, 0.2, …, 1.

26. Use the Runge-Kutta method with step sizes h = 0.1, h = 0.05, and h = 0.025 to find approximate values of the solution of

$y' + \frac{1}{x}y = \frac{7}{x^2} + 3, \quad y(1) = \frac{3}{2}$  (3.3E.18)

at x = 1.0, 1.1, 1.2, …, 3.0. Compare these approximate values with the values of the exact solution

$y = 2x^2 - \frac{12}{x^2},$  (3.3E.21)

… (where $h = (b-a)/n$) by applying the Runge-Kutta method to the initial value problem

$y' = f(x), \quad y(a) = 0.$  (3.3E.22)
4.1: Growth and Decay
This section begins with a discussion of exponential growth and decay, which you have probably already seen in calculus. We consider applications to radioactive decay, carbon dating, and compound interest. We also consider more complicated problems where the rate of change of a quantity is in part proportional to the magnitude of the quantity, but is also influenced by other factors; for example, a radioactive substance is manufactured at a certain rate, but decays at a rate proportional to its mass, or a saver makes regular deposits in a savings account that draws compound interest.

Since the applications in this section deal with functions of time, we'll denote the independent variable by t. If Q is a function of t, Q' will denote the derivative of Q with respect to t; thus,

$Q' = \frac{dQ}{dt}.$

The solution of the initial value problem

$Q' = aQ, \quad Q(t_0) = Q_0$  (4.1.1)

is

$Q = Q_0e^{a(t-t_0)}.$  (4.1.2)
Radioactive Decay

Experimental evidence shows that radioactive material decays at a rate proportional to the mass of the material present. According to this model the mass Q(t) of a radioactive material present at time t satisfies Equation 4.1.1, where a is a negative constant whose value for any given material must be determined by experimental observation. For simplicity, we'll write $a = -k$, where k is a positive constant that we call the decay constant of the material. If the mass of the material present at $t = t_0$ is $Q_0$, the mass present at time t is the solution of

$Q' = -kQ, \quad Q(t_0) = Q_0.$

From Equation 4.1.2 with $a = -k$, the solution of this initial value problem is

$Q = Q_0e^{-k(t-t_0)}.$  (4.1.3)

The half-life $\tau$ of a radioactive material is defined to be the time required for half of its mass to decay; that is, if $Q(t_0) = Q_0$, then

$Q(\tau + t_0) = \frac{Q_0}{2}.$  (4.1.4)

From Equation 4.1.3 with $t = \tau + t_0$,

$Q_0e^{-k\tau} = \frac{Q_0}{2},$

so

$e^{-k\tau} = \frac{1}{2}.$

Taking logarithms of both sides and solving for $\tau$ shows that the half-life is

$\tau = \frac{1}{k}\ln 2$  (4.1.5)

(Figure 4.1.2). The half-life is independent of $t_0$ and $Q_0$, since it is determined by the properties of the material, not by the amount of the material present at any particular time.
Example 4.1.1

A radioactive substance has a half-life of 1620 years.

a. If its mass is now 4 g (grams), how much will be left 810 years from now?
b. Find the time $t_1$ when 1.5 g of the substance remain.

Solution a

From Equation 4.1.3 with $t_0 = 0$ and $Q_0 = 4$,

$Q = 4e^{-kt},$  (4.1.6)

where, by Equation 4.1.5, $k = \frac{\ln 2}{1620}$; hence

$Q = 4e^{-t(\ln 2)/1620}.$  (4.1.7)

Setting t = 810 (half of the half-life) gives $Q = 4e^{-(\ln 2)/2} = 4/\sqrt{2} = 2\sqrt{2} \approx 2.83$ g.

Solution b

Setting $t = t_1$ in Equation 4.1.7 and requiring that $Q(t_1) = 1.5$ yields

$\frac{3}{2} = 4e^{-t_1(\ln 2)/1620},$

so $t_1 = \frac{1620\,\ln(8/3)}{\ln 2} \approx 2292$ years.
Now suppose $Q_0$ dollars are deposited in a savings account and left there for t years, during which the account bears interest at a constant annual rate r. To calculate the value of the account at the end of t years, we need one more piece of information: how the interest is added to the account, or, as the bankers say, how it is compounded. If the interest is compounded annually, the value of the account is multiplied by $1 + r$ at the end of each year. This means that after t years the value of the account is

$Q(t) = Q_0(1 + r)^t.$

If interest is compounded semiannually, the value of the account is multiplied by $(1 + r/2)$ every 6 months. Since this occurs twice annually, the value of the account after t years is

$Q(t) = Q_0\left(1 + \frac{r}{2}\right)^{2t}.$

In general, if interest is compounded n times per year, the value of the account is multiplied n times per year by $(1 + r/n)$; therefore, the value of the account after t years is

$Q(t) = Q_0\left(1 + \frac{r}{n}\right)^{nt}.$  (4.1.8)

Table 4.1.1 (value of the account after 5 years, for several values of n):

n      Value
1      $133.82
2      $134.39
4      $134.68
8      $134.83
364    $134.98
You can see from Table 4.1.1 that the value of the account after 5 years is an increasing function of n. Now suppose the maximum allowable rate of interest on savings accounts is restricted by law, but the time intervals between successive compoundings aren't; then competing banks can attract savers by compounding often. The ultimate step in this direction is to compound continuously, by which we mean that $n \to \infty$ in Equation 4.1.8. Since we know from calculus that

$\lim_{n\to\infty}\left(1 + \frac{r}{n}\right)^n = e^r,$

this yields

$Q(t) = \lim_{n\to\infty} Q_0\left(1 + \frac{r}{n}\right)^{nt} = Q_0\left[\lim_{n\to\infty}\left(1 + \frac{r}{n}\right)^n\right]^t = Q_0e^{rt}.$

Observe that $Q = Q_0e^{rt}$ is the solution of the initial value problem

$Q' = rQ, \quad Q(0) = Q_0;$

that is, with continuous compounding the value of the account grows exponentially.
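A short Python sketch (ours) illustrating the limit: the value of $Q_0$ dollars after t years under n compoundings per year approaches the continuous-compounding value $Q_0e^{rt}$ as n grows. The deposit, rate, and term below are sample figures only.

```python
import math

Q0, r, t = 100.0, 0.06, 5.0   # sample figures; any deposit, rate, and term behave the same way

continuous = Q0 * math.exp(r * t)
for n in (1, 2, 4, 8, 52, 365):
    discrete = Q0 * (1 + r/n)**(n*t)
    print(f"n = {n:4d}   Q(t) = {discrete:8.2f}   shortfall vs continuous: {continuous - discrete:7.4f}")
print(f"continuous compounding: {continuous:8.2f}")
```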
Example 4.1.2

If $150 is deposited in a bank that pays 5½% annual interest compounded continuously, the value of the account after t years is

$Q(t) = 150e^{.055t}$

dollars. (Note that it is necessary to write the interest rate as a decimal; thus, r = .055.) Therefore, after t = 10 years the value of the account is

$Q(10) = 150e^{.55} \approx \$259.99.$

Example 4.1.3

We wish to accumulate $10,000 in 10 years by making a single deposit in a savings account bearing 5½% annual interest compounded continuously. Since we want Q(10) to be $10,000, the initial deposit $Q_0$ must satisfy the equation

$10000 = Q_0e^{.55},$  (4.1.10)

so

$Q_0 = 10000e^{-.55} \approx \$5769.50.$

Example 4.1.4

A radioactive substance with decay constant k is produced at a constant rate of a units of mass per unit time.

a. Assuming that $Q(0) = Q_0$, find the mass Q(t) of the substance present at time t.

Solution a

Here

$Q' = \text{rate of increase of } Q - \text{rate of decrease of } Q.$

The rate of increase is the constant a. Since Q is radioactive with decay constant k, the rate of decrease is kQ. Therefore

$Q' = a - kQ.$

This is a linear first order differential equation. Rewriting it and imposing the initial condition shows that Q is the solution of the initial value problem

$Q' + kQ = a, \quad Q(0) = Q_0.$  (4.1.11)

Since $e^{-kt}$ is a solution of the complementary equation, the solutions of Equation 4.1.11 are of the form $Q = ue^{-kt}$, where $u'e^{-kt} = a$, so $u' = ae^{kt}$. Hence,

$u = \frac{a}{k}e^{kt} + c$

and

$Q = ue^{-kt} = \frac{a}{k} + ce^{-kt}.$

Since $Q(0) = Q_0$, setting t = 0 here yields

$Q_0 = \frac{a}{k} + c \quad\text{or}\quad c = Q_0 - \frac{a}{k}.$

Therefore

$Q = \frac{a}{k} + \left(Q_0 - \frac{a}{k}\right)e^{-kt},$  (4.1.12)

so

$\lim_{t\to\infty}Q(t) = \frac{a}{k}.$

This limit depends only on a and k, and not on $Q_0$. We say that a/k is the steady state value of Q. From Equation 4.1.12 we also see that Q approaches its steady state value from above if $Q_0 > a/k$, or from below if $Q_0 < a/k$. If $Q_0 = a/k$, then Q remains constant for all t.
Carbon Dating
The fact that Q approaches a steady state value in the situation discussed in Example 4.1.4 underlies the method of carbon dating, devised by the American chemist and Nobel Prize winner W. F. Libby.

Carbon-12 is stable, but carbon-14, which is produced by cosmic bombardment of nitrogen in the upper atmosphere, is
radioactive with a half-life of about 5570 years. Libby assumed that the quantity of carbon-12 in the atmosphere has been
constant throughout time, and that the quantity of radioactive carbon-14 achieved its steady state value long ago as a result of
its creation and decomposition over millions of years. These assumptions led Libby to conclude that the ratio of carbon-14 to
carbon-12 has been nearly constant for a long time. This constant, which we denote by R , has been determined experimentally.
Living cells absorb both carbon-12 and carbon-14 in the proportion in which they are present in the environment. Therefore
the ratio of carbon-14 to carbon-12 in a living cell is always R . However, when the cell dies it ceases to absorb carbon, and the
ratio of carbon-14 to carbon-12 decreases exponentially as the radioactive carbon-14 decays. This is the basis for the method
of carbon dating, as illustrated in the next example.
Example 4.1.5

An archaeologist investigating the site of an ancient village finds a burial ground where the amount of carbon-14 present in individual remains is between 42 and 44% of the amount present in live individuals. Estimate the age of the village and the length of time for which it survived.

Solution

Let Q = Q(t) be the quantity of carbon-14 in an individual set of remains t years after death, and let $Q_0$ be the quantity that would be present in live individuals. Since carbon-14 decays exponentially with half-life 5570 years, its decay constant is

$k = \frac{\ln 2}{5570}.$

Therefore

$Q = Q_0e^{-t(\ln 2)/5570}$

if we choose our time scale so that $t_0 = 0$ is the time of death. If we know the present value of Q we can solve this equation for t, the number of years since death occurred. This yields

$t = -5570\,\frac{\ln Q/Q_0}{\ln 2}.$

It is given that $Q = .42Q_0$ in the remains of individuals who died first. Therefore these deaths occurred about

$t_1 = -5570\,\frac{\ln .42}{\ln 2} \approx 6971$

years ago. For the most recent deaths, $Q = .44Q_0$; hence, these deaths occurred about

$t_2 = -5570\,\frac{\ln .44}{\ln 2} \approx 6597$

years ago. Therefore it is reasonable to conclude that the village was founded about 7000 years ago and lasted for roughly 400 years.
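The two ages are easy to verify numerically; here is a small sketch of ours in Python.

```python
import math

half_life = 5570.0                 # carbon-14 half-life, years
k = math.log(2) / half_life

age = lambda fraction: -math.log(fraction) / k   # t = -ln(Q/Q0)/k
t1, t2 = age(0.42), age(0.44)
print(f"oldest remains:  {t1:7.0f} years")                     # ≈ 6971
print(f"newest remains:  {t2:7.0f} years")                     # ≈ 6597
print(f"approximate lifetime of the village: {t1 - t2:4.0f} years")
```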
A Savings Program
Example 4.1.6

A person opens a savings account with an initial deposit of $1000 and subsequently deposits $50 per week. Find the value Q(t) of the account at time t > 0, assuming that the bank pays 6% interest compounded continuously.

Solution

Observe that Q isn't continuous, since there are 52 discrete deposits per year of $50 each. To construct a mathematical model for this problem in the form of a differential equation, we make the simplifying assumption that the deposits are made continuously at a rate of $2600 per year. This is essential, since solutions of differential equations are continuous functions. With this assumption, Q increases continuously at the rate

$Q' = 2600 + 0.06Q.$  (4.1.13)

(Of course, we must recognize that the solution of this equation is an approximation to the true value of Q at any given time. We'll discuss this further below.) Since $e^{.06t}$ is a solution of the complementary equation, the solutions of Equation 4.1.13 are of the form $Q = ue^{.06t}$, where $u'e^{.06t} = 2600$, so $u' = 2600e^{-.06t}$. Hence,

$u = -\frac{2600}{.06}e^{-.06t} + c$

and

$Q = ue^{.06t} = -\frac{2600}{.06} + ce^{.06t}.$  (4.1.14)

Setting t = 0 and Q = 1000 here gives $c = 1000 + \frac{2600}{.06}$, so

$Q = 1000e^{.06t} + \frac{2600}{.06}\left(e^{.06t} - 1\right),$  (4.1.15)

where the first term is the value due to the initial deposit and the second is due to the subsequent weekly deposits.

Mathematical models must be tested for validity by comparing predictions based on them with the actual outcome of experiments. Example 4.1.6 is unusual in that we can compute the exact value of the account at any specified time and compare it with the approximate value predicted by Equation 4.1.15 (see Exercise 4.1.21). The following table gives a comparison for a ten year period. Each exact answer corresponds to the time of the year-end deposit, and each year is assumed to have exactly 52 weeks.

[Table 4.1.3: columns Year, Approximate Value of Q (Example 4.1.6), Exact Value of P (Exercise 4.1.21), Error Q − P, Percentage Error (Q − P)/P; data not shown.]
2. The half-life of a radioactive substance is 2 days. Find the time required for a given amount of the material to decay to 1/10
of its original mass.
3. A radioactive material loses 25% of its mass in 10 minutes. What is its half-life?
4. A tree contains a known percentage $p_0$ of a radioactive substance with half-life $\tau$. When the tree dies the substance decays and isn't replaced. If the percentage of the substance in the fossilized remains of such a tree is found to be $p_1$, how long has the tree been dead?

6. Find the decay constant k for a radioactive substance, given that the mass of the substance is $Q_1$ at time $t_1$ and $Q_2$ at time $t_2$.

7. A process creates a radioactive substance at the rate of 2 g/hr and the substance decays at a rate proportional to its mass, with constant of proportionality $k = .1\,(\text{hr})^{-1}$. If Q(t) is the mass of the substance at time t, find $\lim_{t\to\infty}Q(t)$.
8. A bank pays interest continuously at the rate of 6%. How long does it take for a deposit of $Q_0$ to grow in value to $2Q_0$?
9. At what rate of interest, compounded continuously, will a bank deposit double in value in 8 years?
10. A savings account pays 5% per annum interest compounded continuously. The initial deposit is Q0 dollars. Assume that
there are no subsequent withdrawals or deposits.
a. How long will it take for the value of the account to triple?
b. What is $Q_0$ if the value of the account after 10 years is $100,000?
11. A candymaker makes 500 pounds of candy per week, while his large family eats the candy at a rate equal to Q(t)/10
pounds per week, where Q(t) is the amount of candy present at time t .
a. Find Q(t) for t > 0 if the candymaker has 250 pounds of candy at t = 0 .
b. Find $\lim_{t\to\infty}Q(t)$.
12. Suppose a substance decays at a yearly rate equal to half the square of the mass of the substance present. If we start with
50 g of the substance, how long will it be until only 25 g remain?
13. A super bread dough increases in volume at a rate proportional to the volume V present. If V increases by a factor of 10 in
2 hours and V (0) = V , find V at any time t . How long will it take for V to increase to 100V ?
0 0
14. A radioactive substance decays at a rate proportional to the amount present, and half the original quantity $Q_0$ is left after 1500 years. In how many years would the original amount be reduced to $3Q_0/4$? How much will be left after 2000 years?
15. A wizard creates gold continuously at the rate of 1 ounce per hour, but an assistant steals it continuously at the rate of 5% of however much is there per hour. Let W(t) be the number of ounces that the wizard has at time t. Find W(t) and $\lim_{t\to\infty}W(t)$ if W(0) = 1.

16. A process creates a radioactive substance at the rate of 1 g/hr, and the substance decays at an hourly rate equal to 1/10 of the mass present (expressed in grams). Assuming that there are initially 20 g, find the mass S(t) of the substance present at time t, and find $\lim_{t\to\infty}S(t)$.
17. A tank is empty at t = 0 . Water is added to the tank at the rate of 10 gal/min, but it leaks out at a rate (in gallons per
minute) equal to the number of gallons in the tank. What is the smallest capacity the tank can have if this process is to continue
forever?
22. A homebuyer borrows $P_0$ dollars at an annual interest rate r, agreeing to repay the loan with equal monthly payments of M dollars per month over N years.

a. Derive a differential equation for the loan principal (amount that the homebuyer owes) P(t) at time t > 0, making the simplifying assumption that the homebuyer repays the loan continuously rather than in discrete steps. (See Example 4.1.6.)
b. Solve the equation derived in (a).
c. Use the result of (b) to determine an approximate value for M assuming that each year has exactly 12 months of equal length.
d. It can be shown that the exact value of M is given by

$M = \frac{rP_0}{12}\left(1 - (1 + r/12)^{-12N}\right)^{-1}.$  (4.1E.4)

Compare the value of M obtained from the answer in (c) to the exact value if (i) $P_0 = \$50{,}000$, r = 7½%, N = 20
23. Assume that the homebuyer of Exercise 4.1.22 elects to repay the loan continuously at the rate of αM dollars per month,
where α is a constant greater than 1. (This is called accelerated payment.)
a. Determine the time T (α) when the loan will be paid off and the amount S(α) that the homebuyer will save.
b. Suppose $P_0 = \$50{,}000$, r = 8%, and N = 15. Compute the savings realized by accelerated payments with

24. A benefactor wishes to establish a trust fund to pay a researcher's salary for T years. The salary is to start at $S_0$ dollars per year and increase at a fractional rate of a per year. Find the amount of money $P_0$ that the benefactor must deposit in a trust fund paying interest at a rate r per year. Assume that the researcher's salary is paid continuously, the interest is compounded continuously, and the salary increases are granted continuously.

units of mass per unit time, where a and b are positive constants and Q(t) is the mass of the substance present at time t; thus, the rate of production is small at the start and tends to slow when Q is large.

a. Set up a differential equation for Q.
b. Choose your own positive values for a, b, k, and $Q_0 = Q(0)$. Use a numerical method to discover what happens to Q(t)
$T' = -k(T - T_m).$  (4.2.1)

Here k > 0, since the temperature of the object must decrease if $T > T_m$, or increase if $T < T_m$. We'll call k the temperature decay constant of the medium.

For simplicity, in this section we'll assume that the medium is maintained at a constant temperature $T_m$. This is another
example of building a simple mathematical model for a physical phenomenon. Like most mathematical models it has its
limitations. For example, it is reasonable to assume that the temperature of a room remains approximately constant if the
cooling object is a cup of coffee, but perhaps not if it is a huge cauldron of molten metal. (For more on this see Exercise
4.2.17.)
To solve Equation 4.2.1, we rewrite it as

$T' + kT = kT_m.$

Since $e^{-kt}$ is a solution of the complementary equation, the solutions of this equation are of the form $T = ue^{-kt}$, where $u'e^{-kt} = kT_m$, so $u' = kT_me^{kt}$. Hence,

$u = T_me^{kt} + c,$

so

$T = ue^{-kt} = T_m + ce^{-kt}.$

If $T(0) = T_0$, setting t = 0 here yields $c = T_0 - T_m$, so

$T = T_m + (T_0 - T_m)e^{-kt}.$  (4.2.2)
Example 4.2.1

A ceramic insulator is baked at 400°C and cooled in a room in which the temperature is 25°C. After 4 minutes the temperature of the insulator is 200°C. What is its temperature after 8 minutes?

Solution

Here $T_0 = 400$ and $T_m = 25$, so Equation 4.2.2 becomes

$T = 25 + 375e^{-kt}.$  (4.2.3)

We determine k from the stated condition that T(4) = 200; that is,

$200 = 25 + 375e^{-4k};$

hence,

$e^{-4k} = \frac{175}{375} = \frac{7}{15}.$

Therefore, since $e^{-8k} = (e^{-4k})^2$,

$T(8) = 25 + 375e^{-8k} = 25 + 375\left(\frac{7}{15}\right)^2 \approx 107°\text{C}.$
Example 4.2.2

An object with temperature 72°F is placed outside, where the temperature is −20°F. At 11:05 the temperature of the object is 60°F and at 11:07 its temperature is 50°F. At what time was the object placed outside?

Solution

Let T(t) be the temperature of the object at time t. For convenience, we choose the origin $t_0 = 0$ of the time scale to be 11:05, so that $T_0 = 60$. We must determine the time $\tau$ when $T(\tau) = 72$. Substituting $T_0 = 60$ and $T_m = -20$ into Equation 4.2.2 yields

$T = -20 + (60 - (-20))e^{-kt},$

or

$T = -20 + 80e^{-kt}.$  (4.2.4)

We obtain k from the stated condition that the temperature of the object is 50°F at 11:07. Since 11:07 is t = 2 on our time scale, setting t = 2 and T = 50 in Equation 4.2.4 gives $e^{-2k} = 70/80 = 7/8$. Requiring that $T(\tau) = 72$ in Equation 4.2.4 yields

$e^{-k\tau} = \frac{92}{80} = \frac{23}{20};$

hence,

$\tau = -\frac{1}{k}\ln\frac{23}{20} = -\frac{2\ln(23/20)}{\ln(8/7)} \approx -2.09\ \text{min}.$

Therefore the object was placed outside about 2 minutes and 5 seconds before 11:05; that is, at 11:02:55.
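A quick numerical check of this example in Python (a sketch of ours):

```python
import math

Tm = -20.0                      # outside temperature, °F
T0 = 60.0                       # temperature at t = 0 (11:05)
k = -0.5 * math.log((50 - Tm) / (T0 - Tm))        # from T(2) = 50: e^{-2k} = 70/80

T = lambda t: Tm + (T0 - Tm) * math.exp(-k * t)   # Equation 4.2.4
tau = -math.log((72 - Tm) / (T0 - Tm)) / k        # solve T(tau) = 72
print(f"k = {k:.5f} per minute")
print(f"tau = {tau:.3f} minutes, i.e. about {abs(tau)*60:.0f} seconds before 11:05")
print(f"check: T(tau) = {T(tau):.1f} °F")
```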
Mixing Problems
In the next two examples a saltwater solution with a given concentration (weight of salt per unit volume of solution) is added
at a specified rate to a tank that initially contains saltwater with a different concentration. The problem is to determine the
quantity of salt in the tank as a function of time. This is an example of a mixing problem. To construct a tractable mathematical
model for mixing problems we assume in our examples (and most exercises) that the mixture is stirred instantly so that the salt
is always uniformly distributed throughout the mixture. Exercises 4.2.22 and 4.2.23 deal with situations where this isn’t so, but
the distribution of salt becomes approximately uniform as t → ∞ .
Example 4.2.3
A tank initially contains 40 pounds of salt dissolved in 600 gallons of water. Starting at $t_0 = 0$, water that contains 1/2 pound of salt per gallon is poured into the tank at the rate of 4 gal/min and the mixture is drained from the tank at the same rate (Figure 4.2.3).

a. Find a differential equation for the quantity Q(t) of salt in the tank at time t > 0, and solve the equation to determine Q(t).
b. Find $\lim_{t\to\infty}Q(t)$.

Solution a

To find a differential equation for Q, we must use the given information to derive an expression for Q'. But Q' is the rate at which the quantity of salt in the tank changes with respect to time; thus, if rate in denotes the rate at which salt enters the tank and rate out denotes the rate at which it leaves, then

$Q' = \text{rate in} - \text{rate out}.$

The rate in is (1/2 lb/gal) × (4 gal/min) = 2 lb/min.

Determining the rate out requires a little more thought. We're removing 4 gallons of the mixture per minute, and there are always 600 gallons in the tank; that is, we are removing 1/150 of the mixture per minute. Since the salt is evenly distributed in the mixture, we are also removing 1/150 of the salt per minute. Therefore, if there are Q(t) pounds of salt in the tank at time t, the rate out at any time t is Q(t)/150. Alternatively, we can arrive at this conclusion by arguing that

$\text{rate out} = (\text{concentration}) \times (\text{rate of flow out}) = (\text{lb/gal}) \times (\text{gal/min}) = \frac{Q(t)}{600} \times 4 = \frac{Q(t)}{150}.$

Therefore

$Q' + \frac{Q}{150} = 2.$

Since $e^{-t/150}$ is a solution of the complementary equation, the solutions of this equation are of the form $Q = ue^{-t/150}$, where $u'e^{-t/150} = 2$, so $u' = 2e^{t/150}$. Hence,

$u = 300e^{t/150} + c,$

so

$Q = ue^{-t/150} = 300 + ce^{-t/150}.$  (4.2.6)

Since Q(0) = 40, c = −260, and therefore

$Q = 300 - 260e^{-t/150}.$

Solution b

From Equation 4.2.6, we see that $\lim_{t\to\infty}Q(t) = 300$ for any value of Q(0). This is intuitively reasonable, since the incoming solution contains 1/2 pound of salt per gallon and there are always 600 gallons of water in the tank.
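A small Python sketch (ours) comparing the exact solution $Q(t) = 300 - 260e^{-t/150}$ with a numerical solution of $Q' = 2 - Q/150$, $Q(0) = 40$, using the improved Euler method from Section 3.2.

```python
import math

def improved_euler(f, t0, q0, h, n):
    t, q = t0, q0
    for _ in range(n):
        k1 = f(t, q)
        k2 = f(t + h, q + h*k1)
        q += (h/2)*(k1 + k2)
        t += h
    return q

f = lambda t, Q: 2 - Q/150          # salt balance: rate in minus rate out
exact = lambda t: 300 - 260*math.exp(-t/150)

for t_end in (50, 150, 300, 600):
    approx = improved_euler(f, 0.0, 40.0, 1.0, t_end)
    print(f"t = {t_end:3d} min   numerical Q = {approx:8.3f} lb   exact Q = {exact(t_end):8.3f} lb")
```

Both columns approach the limiting value of 300 pounds.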
Example 4.2.4
A 500-liter tank initially contains 10 g of salt dissolved in 200 liters of water. Starting at $t_0 = 0$, water that contains 1/4 g of salt per liter is poured into the tank at the rate of 4 liters/min and the mixture is drained from the tank at the rate of 2 liters/min (Figure 4.2.5). Find a differential equation for the quantity Q(t) of salt in the tank at time t prior to the time when the tank overflows and find the concentration K(t) (g/liter) of salt in the tank at any such time.

Solution

We first determine the amount W(t) of solution in the tank at any time t prior to overflow. Since W(0) = 200 and we are adding 4 liters/min while removing only 2 liters/min, there's a net gain of 2 liters/min in the tank; therefore,

$W(t) = 2t + 200.$

Since W(150) = 500 liters (capacity of the tank), this formula is valid for $0 \le t \le 150$.

Now let Q(t) be the number of grams of salt in the tank at time t, where $0 \le t \le 150$. As in Example 4.2.3,

$Q' = \text{rate in} - \text{rate out}.$  (4.2.7)

The rate in is (1/4 g/liter) × (4 liters/min) = 1 g/min.

To determine the rate out, we observe that since the mixture is being removed from the tank at the constant rate of 2 liters/min and there are 2t + 200 liters in the tank at time t, the fraction of the mixture being removed per minute at time t is

$\frac{2}{2t + 200} = \frac{1}{t + 100}.$

We're removing this same fraction of the salt per minute. Therefore, since there are Q(t) grams of salt in the tank at time t,

$\text{rate out} = \frac{Q(t)}{t + 100}.$  (4.2.9)

Alternatively,

$\text{rate out} = (\text{concentration}) \times (\text{rate of flow out}) = \frac{Q(t)}{2t + 200} \times 2 = \frac{Q(t)}{t + 100}.$

Therefore

$Q' + \frac{Q}{t + 100} = 1.$  (4.2.10)

By separation of variables, $1/(t + 100)$ is a solution of the complementary equation, so the solutions of Equation 4.2.10 are of the form

$Q = \frac{u}{t + 100}, \quad\text{where}\quad \frac{u'}{t + 100} = 1, \quad\text{so}\quad u' = t + 100.$

Hence,

$u = \frac{(t + 100)^2}{2} + c.$  (4.2.11)

Since Q(0) = 10 and u = 100Q when t = 0,

$c = 100(10) - \frac{(100)^2}{2} = -4000,$

and therefore

$u = \frac{(t + 100)^2}{2} - 4000.$

Hence,

$Q = \frac{u}{t + 100} = \frac{t + 100}{2} - \frac{4000}{t + 100},$

and the concentration is

$K(t) = \frac{Q(t)}{2t + 200} = \frac{1}{4} - \frac{2000}{(t + 100)^2}.$
2. A fluid initially at 100°C is placed outside on a day when the temperature is −10°C, and the temperature of the fluid drops 20°C in one minute. Find the temperature T(t) of the fluid for t > 0.

3. At 12:00 pm a thermometer reading 10°F is placed in a room where the temperature is 70°F. It reads 56° when it is placed outside, where the temperature is 5°F, at 12:03. What does it read at 12:05 pm?

4. A thermometer initially reading 212°F is placed in a room where the temperature is 70°F. After 2 minutes the thermometer reads 125°F.

5. An object with initial temperature 150°C is placed outside, where the temperature is 35°C. Its temperatures at 12:15 and 12:20 are 120°C and 90°C, respectively.

6. An object is placed in a room where the temperature is 20°C. The temperature of the object drops by 5°C in 4 minutes and by 7°C in 8 minutes. What was the temperature of the object when it was initially placed in the room?

7. A cup of boiling water is placed outside at 1:00 pm. One minute later the temperature of the water is 152°F. After another minute its temperature is 112°F. What is the outside temperature?
8. A tank initially contains 40 gallons of pure water. A solution with 1 gram of salt per gallon of water is added to the tank at 3
gal/min, and the resulting solution drains out at the same rate. Find the quantity Q(t) of salt in the tank at time t > 0 .
9. A tank initially contains a solution of 10 pounds of salt in 60 gallons of water. Water with 1/2 pound of salt per gallon is
added to the tank at 6 gal/min, and the resulting solution leaves at the same rate. Find the quantity Q(t) of salt in the tank at
time t > 0 .
10. A tank initially contains 100 liters of a salt solution with a concentration of .1 g/liter. A solution with a salt concentration
of .3 g/liter is added to the tank at 5 liters/min, and the resulting mixture is drained out at the same rate. Find the concentration
K(t) of salt in the tank as a function of t .
11. A 200 gallon tank initially contains 100 gallons of water with 20 pounds of salt. A salt solution with 1/4 pound of salt per
gallon is added to the tank at 4 gal/min, and the resulting mixture is drained out at 2 gal/min. Find the quantity of salt in the
tank as it is about to overflow.
12. Suppose water is added to a tank at 10 gal/min, but leaks out at the rate of 1/5 gal/min for each gallon in the tank. What is
the smallest capacity the tank can have if the process is to continue indefinitely?
13. A chemical reaction in a laboratory with volume V (in ft³) produces $q_1$ ft³/min of a noxious gas as a byproduct. The gas is dangerous at concentrations greater than $\bar{c}$, but harmless at concentrations $\le \bar{c}$. Intake fans at one end of the laboratory pull in fresh air at the rate of $q_2$ ft³/min and exhaust fans at the other end exhaust the mixture of gas and air from the laboratory at the same rate. Assuming that the gas is always uniformly distributed in the room and its initial concentration $c_0$ is at a safe level, find the smallest value of $q_2$ required to maintain safe conditions in the laboratory for all time.
14. A 1200-gallon tank initially contains 40 pounds of salt dissolved in 600 gallons of water. Starting at $t_0 = 0$, water that contains 1/2 pound of salt per gallon is added to the tank at the rate of 6 gal/min and the resulting mixture is drained from the tank at 4 gal/min. Find the quantity Q(t) of salt in the tank at any time t > 0 prior to overflow.

poured into $T_1$ at the rate of 2 gal/min. The mixture is drained from $T_1$ at the same rate into a second tank $T_2$, which initially contains 50 gallons of pure water. Also starting at $t_0 = 0$, a mixture from another source that contains 2 pounds of salt per gallon is poured into $T_2$ at the rate of 2 gal/min. The mixture is drained from $T_2$ at the rate of 4 gal/min.

a. Find a differential equation for the quantity Q(t) of salt in tank $T_2$ at time t > 0.
16. Suppose an object with initial temperature $T_0$ is placed in a sealed container, which is in turn placed in a medium with temperature $T_m$. Let the initial temperature of the container be $S_0$. Assume that the temperature of the object does not affect the temperature of the container, which in turn does not affect the temperature of the medium. (These assumptions are reasonable, for example, if the object is a cup of coffee, the container is a house, and the medium is the atmosphere.)

a. Assuming that the container and the medium have distinct temperature decay constants k and $k_m$ respectively, use Newton's law of cooling to find the temperatures S(t) and T(t) of the container and object at time t.
b. Assuming that the container and the medium have the same temperature decay constant k, use Newton's law of cooling to find the temperatures S(t) and T(t) of the container and object at time t.
c. Find $\lim_{t\to\infty}S(t)$ and $\lim_{t\to\infty}T(t)$.
17. In our previous examples and exercises concerning Newton's law of cooling we assumed that the temperature of the medium remains constant. This model is adequate if the heat lost or gained by the object is insignificant compared to the heat required to cause an appreciable change in the temperature of the medium. If this isn't so, we must use a model that accounts for the heat exchanged between the object and the medium. Let T = T(t) and $T_m = T_m(t)$ be the temperatures of the object and the medium, respectively, and let $T_0$ and $T_{m0}$ be their initial values. Again, we assume that T and $T_m$ are related by (A). We also assume that the change in heat of the object as its temperature changes from $T_0$ to T is $a(T - T_0)$ and that the change in heat of the medium as its temperature changes from $T_{m0}$ to $T_m$ is $a_m(T_m - T_{m0})$, where a and $a_m$ are positive constants depending upon the masses and thermal properties of the object and medium, respectively. If we assume that the total heat of the system consisting of the object and the medium remains constant (that is, energy is conserved), then

a. Equation (A) involves two unknown functions T and $T_m$. Use (A) and (B) to derive a differential equation involving only T.
18. Control mechanisms allow fluid to flow into a tank at a rate proportional to the volume V of fluid in the tank, and to flow out at a rate proportional to $V^2$. Suppose $V(0) = V_0$ and the constants of proportionality are a and b, respectively. Find V(t)

19. Identical tanks $T_1$ and $T_2$ initially contain W gallons each of pure water. Starting at $t_0 = 0$, a salt solution with constant concentration c is pumped into $T_1$ at r gal/min and drained from $T_1$ into $T_2$ at the same rate. The resulting mixture in $T_2$ is also drained at the same rate. Find the concentrations $c_1(t)$ and $c_2(t)$ in tanks $T_1$ and $T_2$ for t > 0.

20. An infinite sequence of identical tanks $T_1$, $T_2$, …, $T_n$, …, initially contain W gallons each of pure water. They are hooked together so that fluid drains from $T_n$ into $T_{n+1}$ (n = 1, 2, ⋯). A salt solution is circulated through the tanks so that it enters and leaves each tank at the constant rate of r gal/min. The solution has a concentration of c pounds of salt per gallon when it enters $T_1$.

21. Tanks $T_1$ and $T_2$ have capacities $W_1$ and $W_2$ liters, respectively. Initially they are both full of dye solutions with concentrations $c_1$ and $c_2$ grams per liter. Starting at $t_0 = 0$, the solution from $T_1$ is pumped into $T_2$ at a rate of r liters per

a. Find the concentrations $c_1(t)$ and $c_2(t)$ of the dye in $T_1$ and $T_2$ for t > 0.
22. Consider the mixing problem of Example 4.2.3, but without the assumption that the mixture is stirred instantly so that the salt is always uniformly distributed throughout the mixture. Assume instead that the distribution approaches uniformity as $t \to \infty$. In this case the differential equation for Q is of the form

$Q' + \frac{a(t)}{150}Q = 2$  (4.2E.1)

23. Consider the mixing problem of Example 4.2.4 in a tank with infinite capacity, but without the assumption that the mixture is stirred instantly so that the salt is always uniformly distributed throughout the mixture. Assume instead that the distribution approaches uniformity as $t \to \infty$. In this case the differential equation for Q is of the form

$Q' + \frac{a(t)}{t + 100}Q = 1$  (4.2E.3)
Newton’s second law of motion asserts that the force F and the acceleration a are related by the equation
F = ma. (4.3.1)
Units
In applications there are three main sets of units in use for length, mass, force, and time: the cgs, mks, and British systems. All three use the second as the unit of time. Table 4.3.1 shows the other units. Consistent with Equation 4.3.1, the unit of force in each system is defined to be the force required to impart an acceleration of (one unit of length)/s² to one unit of mass.

[Table 4.3.1: columns Length, Force, Mass for the cgs, mks, and British systems; data not shown.]
If we assume that Earth is a perfect sphere with constant mass density, Newton’s law of gravitation (discussed later in this
section) asserts that the force exerted on an object by Earth’s gravitational field is proportional to the mass of the object and
inversely proportional to the square of its distance from the center of Earth. However, if the object remains sufficiently close to
Earth’s surface, we may assume that the gravitational force is constant and equal to its value at the surface. The magnitude of
this force is mg, where g is called the acceleration due to gravity. (To be completely accurate, g should be called the
magnitude of the acceleration due to gravity at Earth’s surface.) This quantity has been determined experimentally.
Approximate values of g are
g = 980 cm/s^2 (cgs)
g = 9.8 m/s^2 (mks)
g = 32 ft/s^2 (British).
In general, the force F in Equation 4.3.1 may depend upon t, y, and y'. Since a = y'', Equation 4.3.1 can be written in the form
my'' = F(t, y, y'),   (4.3.2)
which is a second order equation. We’ll consider this equation with restrictions on F later; however, since Chapter 2 dealt only
with first order equations, we consider here only problems in which Equation 4.3.2 can be recast as a first order equation. This
is possible if F does not depend on y, so Equation 4.3.2 is of the form
my'' = F(t, y').
Letting v = y' and v' = y'' yields a first order equation for v:
mv' = F(t, v).   (4.3.3)
Solving this equation yields v as a function of t . If we know y(t0 ) for some time t0 , we can integrate v to obtain y as a
function of t .
Equations of the form Equation 4.3.3 occur in problems involving motion through a resisting medium.
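As an added illustration (not part of the original text), the following Python sketch carries out this two-step procedure numerically: it solves mv' = F(t, v) for the velocity and then integrates v to recover y. The force law F(t, v) = −mg − kv and all parameter values are assumptions chosen only for the demonstration.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters (not taken from the text)
m, k, g = 1.0, 0.5, 9.8
y0, v0 = 0.0, 10.0                      # initial altitude and velocity

def F(t, v):
    # linear resistance acting against the motion, plus constant gravity
    return -m * g - k * v

# Step 1: solve m v' = F(t, v) for v(t)
sol_v = solve_ivp(lambda t, v: [F(t, v[0]) / m], (0, 10), [v0], dense_output=True)

# Step 2: integrate y' = v(t) to recover y(t)
sol_y = solve_ivp(lambda t, y: [sol_v.sol(t)[0]], (0, 10), [y0], dense_output=True)

print("v(10) ≈", sol_v.sol(10.0)[0])    # approaches the terminal velocity -mg/k
print("y(10) ≈", sol_y.sol(10.0)[0])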
Example 4.3.1
An object with mass m moves under constant gravitational force through a medium that exerts a resistance with
magnitude proportional to the speed of the object. (Recall that the speed of an object is |v|, the absolute value of its
velocity v.) Find the velocity of the object as a function of t, and find the terminal velocity. Assume that the initial velocity is v_0.
Solution
The total force acting on the object is
F = −mg + F_1,   (4.3.4)
where −mg is the force due to gravity and F_1 is the resisting force of the medium, which has magnitude k|v|, where k is a positive constant. If the object is moving downward (v ≤ 0), the resisting force is upward (Figure 4.3.1a), so
F_1 = k|v| = k(−v) = −kv.
On the other hand, if the object is moving upward (v ≥ 0), the resisting force is downward (Figure 4.3.1b), so
F_1 = −k|v| = −kv.
In either case Equation 4.3.4 becomes F = −mg − kv, so the equation of motion mv' = F can be rewritten as
v' + (k/m) v = −g.   (4.3.6)
Since e^{−kt/m} is a solution of the complementary equation, the solutions of Equation 4.3.6 are of the form v = u e^{−kt/m}, where u' e^{−kt/m} = −g, so u' = −g e^{kt/m}. Hence,
u = −(mg/k) e^{kt/m} + c,
so
v = u e^{−kt/m} = −mg/k + c e^{−kt/m}.   (4.3.7)
Since v(0) = v_0,
v_0 = −mg/k + c,
which we rewrite as
v' + (1/10) v = −32.
Since e^{−t/10} is a solution of the complementary equation, the solutions of this equation are of the form v = u e^{−t/10}, where u' e^{−t/10} = −32, so u' = −32 e^{t/10}. Hence,
u = −320 e^{t/10} + c,
so
v = u e^{−t/10} = −320 + c e^{−t/10}.   (4.3.8)
The initial velocity is 60 ft/s in the upward (positive) direction; hence, v0 = 60 . Substituting t =0 and v = 60 in
Equation 4.3.8 yields
60 = −320 + c, so c = 380 and
v = −320 + 380 e^{−t/10}.
The terminal velocity is lim_{t→∞} v(t) = −320 ft/s.
Example 4.3.3
A 10 kg mass is given an initial velocity v_0 ≤ 0 near Earth's surface. The only forces acting on it are gravity and
atmospheric resistance proportional to the square of the speed. Assuming that the resistance is 8 N if the speed is 2 m/s,
find the velocity of the object as a function of t , and find the terminal velocity.
Solution
Since the object is falling, the resistance is in the upward (positive) direction. Hence,
mv' = −mg + kv^2,   (4.3.9)
so k = 2 N-s^2/m^2. Since m = 10 and g = 9.8, Equation 4.3.9 becomes
10 v' = −98 + 2v^2 = 2(v^2 − 49).   (4.3.10)
so
[1/(v − 7) − 1/(v + 7)] v' = 14/5.
ln |v − 7| − ln |v + 7| = 14t/5 + k.
Therefore
|(v − 7)/(v + 7)| = e^k e^{14t/5}.
Since Theorem 2.3.1 implies that (v − 7)/(v + 7) cannot change sign (why?), we can rewrite the last equation as
(v − 7)/(v + 7) = c e^{14t/5},   (4.3.13)
where
c = (v_0 − 7)/(v_0 + 7).
Since v_0 ≤ 0, v is defined and negative for all t > 0. The terminal velocity is
lim_{t→∞} v(t) = −7 m/s,
independent of v_0. More generally, it can be shown (Exercise 4.3.11) that if v is any solution of Equation 4.3.9 such that v_0 ≤ 0 then
lim_{t→∞} v(t) = −√(mg/k)
(Figure 4.3.3).
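As a quick numerical check of this limit (a sketch added here, not in the original), integrating 10v' = −98 + 2v^2 from any v_0 ≤ 0 should drive v toward −√(mg/k) = −7 m/s; the starting velocities below are arbitrary sample values.

import numpy as np
from scipy.integrate import solve_ivp

m, g, k = 10.0, 9.8, 2.0                 # values from Example 4.3.3

def rhs(t, v):
    return [(-m * g + k * v[0] ** 2) / m]   # 10 v' = -98 + 2 v^2

for v0 in (0.0, -2.0, -20.0):            # assumed sample initial velocities
    sol = solve_ivp(rhs, (0, 20), [v0])
    print(f"v0 = {v0:6.1f}  ->  v(20) ≈ {sol.y[0, -1]:.4f}")

print("predicted terminal velocity:", -np.sqrt(m * g / k))   # -7.0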
Example 4.3.4
A 10-kg mass is launched vertically upward from Earth's surface with an initial velocity of v_0 m/s. The only forces acting
on the mass are gravity and atmospheric resistance proportional to the square of the speed. Assuming that the atmospheric
resistance is 8 N if the speed is 2 m/s, find the time T required for the mass to reach maximum altitude.
Solution
The mass will climb while v > 0 and reach its maximum altitude when v = 0. Therefore v > 0 for 0 ≤ t < T and v(T) = 0; therefore, we replace Equation 4.3.10 by
10 v' = −98 − 2v^2.   (4.3.15)
c = (5/7) tan^{−1}(v_0/7),
so v is defined implicitly by
(5/7) tan^{−1}(v/7) = −t + (5/7) tan^{−1}(v_0/7),   0 ≤ t ≤ T.   (4.3.16)
with A = tan^{−1}(v_0/7) and B = 7t/5, and noting that tan(tan^{−1} θ) = θ, we can simplify Equation 4.3.17 to
v = 7 (v_0 − 7 tan(7t/5)) / (7 + v_0 tan(7t/5)).
Therefore
T = (5/7) tan^{−1}(v_0/7).
Since tan^{−1}(v_0/7) < π/2 for all v_0, the time required for the mass to reach its maximum altitude is less than
5π/14 ≈ 1.122 s
regardless of the initial velocity. Figure 4.3.4 shows graphs of v over [0, T] for various values of v_0.
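The following short sketch (added for illustration, not from the text) simply evaluates T = (5/7) tan^{−1}(v_0/7) for a few assumed launch speeds and confirms that every value stays below the bound 5π/14 ≈ 1.122 s.

import numpy as np

def time_to_peak(v0):
    # T = (5/7) * arctan(v0/7), from Example 4.3.4
    return (5.0 / 7.0) * np.arctan(v0 / 7.0)

for v0 in (1, 5, 10, 50, 1000):          # assumed sample initial velocities in m/s
    print(f"v0 = {v0:5d} m/s  ->  T ≈ {time_to_peak(v0):.4f} s")

print("upper bound 5*pi/14 ≈", 5 * np.pi / 14)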
Escape Velocity
Suppose a space vehicle is launched vertically and its fuel is exhausted when the vehicle reaches an altitude h above Earth,
where h is sufficiently large so that resistance due to Earth’s atmosphere can be neglected. Let t = 0 be the time when burnout
occurs. Assuming that the gravitational forces of all other celestial bodies can be neglected, the motion of the vehicle for t > 0
is that of an object with constant mass m under the influence of Earth’s gravitational force, which we now assume to vary
inversely with the square of the distance from Earth’s center; thus, if we take the upward direction to be positive then
gravitational force on the vehicle at an altitude y above Earth is
F = −K/(y + R)^2,   (4.3.18)
where R is Earth's radius. Since F = −mg when y = 0, K = mgR^2, so
F = −mgR^2/(y + R)^2.   (4.3.19)
We'll show that there's a number v_e, called the escape velocity, with these properties:
1. If v_0 ≥ v_e then v(t) > 0 for all t > 0, and the vehicle continues to climb for all t > 0; that is, it "escapes" Earth. (Is it really so obvious that lim_{t→∞} y(t) = ∞ in this case? For a proof, see Exercise 4.3.20.)
2. If v_0 < v_e then v(t) decreases to zero and becomes negative. Therefore the vehicle attains a maximum altitude y_m and falls back to Earth.
It is easier to find v as a function of y than as a function of t. Since v = y', the chain rule implies that
v' = (dv/dy)(dy/dt) = v (dv/dy).
Substituting this into Equation 4.3.20 yields the first order separable equation
v (dv/dy) = −gR^2/(y + R)^2.   (4.3.21)
When t = 0, the velocity is v_0 and the altitude is h. Therefore we can obtain v as a function of y by solving the initial value problem
v (dv/dy) = −gR^2/(y + R)^2,   v(h) = v_0.
Since v(h) = v_0,
c = v_0^2/2 − gR^2/(h + R),
If
v_0 ≥ (2gR^2/(h + R))^{1/2},
the parenthetical expression in Equation 4.3.23 is nonnegative, so v(y) > 0 for y > h. This proves that there's an escape velocity v_e. We'll now prove that
v_e = (2gR^2/(h + R))^{1/2}.
If Equation 4.3.24 holds then the parenthetical expression in Equation 4.3.23 is negative and the vehicle will attain a maximum altitude y_m > h that satisfies the equation
0 = gR^2/(y_m + R) + (v_0^2/2 − gR^2/(h + R)).
The velocity will be zero at the maximum altitude, and the object will then fall to Earth under the influence of gravity.
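For a rough sense of scale, here is a small added sketch (not part of the text) that evaluates v_e = (2gR^2/(h + R))^{1/2} with assumed SI values for g and Earth's radius and burnout at the surface (h = 0); it gives roughly 11.2 km/s.

import math

def escape_velocity(g, R, h):
    # v_e = sqrt(2 g R^2 / (h + R)), the formula derived above
    return math.sqrt(2.0 * g * R ** 2 / (h + R))

g = 9.8          # m/s^2 (assumed)
R = 6.378e6      # approximate Earth radius in meters (assumed)
print("v_e ≈", escape_velocity(g, R, 0.0), "m/s")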
Q4.3.1
1. A firefighter who weighs 192 lb slides down an infinitely long fire pole that exerts a frictional resistive force with
magnitude proportional to his speed, with k = 2.5 lb-s/ft. Assuming that he starts from rest, find his velocity as a function of
time and find his terminal velocity.
2. A firefighter who weighs 192 lb slides down an infinitely long fire pole that exerts a frictional resistive force with
magnitude proportional to her speed, with constant of proportionality k . Find k , given that her terminal velocity is −16 ft/s,
and then find her velocity v as a function of t . Assume that she starts from rest.
3. A boat weighs 64,000 lb. Its propeller produces a constant thrust of 50,000 lb and the water exerts a resistive force with
magnitude proportional to the speed, with k = 2000 lb-s/ft. Assuming that the boat starts from rest, find its velocity as a
function of time, and find its terminal velocity.
4. A constant horizontal force of 10 N pushes a 20 kg mass through a medium that resists its motion with 0.5 N for every m/s
of speed. The initial velocity of the mass is 7 m/s in the direction opposite to the direction of the applied force. Find the
velocity of the mass for t > 0 .
5. A stone weighing 1/2 lb is thrown upward from an initial height of 5 ft with an initial speed of 32 ft/s. Air resistance is
proportional to speed, with k = 1/128 lb-s/ft. Find the maximum height attained by the stone.
6. A 3200-lb car is moving at 64 ft/s down a 30-degree grade when it runs out of fuel. Find its velocity after that if friction exerts a resistive force with magnitude proportional to the square of the speed, with k = 1 lb-s^2/ft^2. Also find its terminal velocity.
7. A 96 lb weight is dropped from rest in a medium that exerts a resistive force with magnitude proportional to the speed. Find
its velocity as a function of time if its terminal velocity is −128 ft/s.
8. An object with mass m moves vertically through a medium that exerts a resistive force with magnitude proportional to the
speed. Let y = y(t) be the altitude of the object at time t, with y(0) = y_0. Use the results of Example 4.3.1 to show that
y(t) = y_0 + (m/k)(v_0 − v − gt).
9. An object with mass m is launched vertically upward with initial velocity v_0 from Earth's surface (y_0 = 0) in a medium that exerts a resistive force with magnitude proportional to the speed. Find the time T when the object attains its maximum altitude y_m. Then use the result of Exercise 4.3.8 to find y_m.
10. An object weighing 256 lb is dropped from rest in a medium that exerts a resistive force with magnitude proportional to
the square of the speed. The magnitude of the resisting force is 1 lb when |v| = 4 ft/s . Find v for t > 0 , and find its terminal
velocity.
11. An object with mass m is given an initial velocity v_0 ≤ 0 in a medium that exerts a resistive force with magnitude proportional to the square of the speed. Find the velocity of the object for t > 0, and find its terminal velocity.
12. An object with mass m is launched vertically upward with initial velocity v_0 in a medium that exerts a resistive force with
14. An object of mass m falls in a medium that exerts a resistive force f = f (s) , where s = |v| is the speed of the object.
Assume that f (0) = 0 and f is strictly increasing and differentiable on (0, ∞).
a. Write a differential equation for the speed s = s(t) of the object. Take it as given that all solutions of this equation with
s(0) ≥ 0 are defined for all t > 0 (which makes good sense on physical grounds).
15. A 100-g mass with initial velocity v_0 ≤ 0 falls in a medium that exerts a resistive force proportional to the fourth power of the speed.
Q4.3.2
In Exercises 4.3.17-4.3.20, assume that the force due to gravity is given by Newton’s law of gravitation. Take the upward
direction to be positive.
17. A space probe is to be launched from a space station 200 miles above Earth. Determine its escape velocity in miles/s. Take
Earth’s radius to be 3960 miles.
18. A space vehicle is to be launched from the moon, which has a radius of about 1080 miles. The acceleration due to gravity
at the surface of the moon is about 5.31 ft/s^2. Find the escape velocity in miles/s.
19.
a. Show that Equation 4.3.27 can be rewritten as
v^2 = ((h − y)/(y + R)) v_e^2 + v_0^2.
b. Show that if v_0 = ρv_e with 0 ≤ ρ < 1, then the maximum altitude y_m attained by the space vehicle is
y_m = (h + Rρ^2)/(1 − ρ^2).
c. By requiring that v(y_m) = 0, use Equation 4.3.26 to deduce that if v_0 < v_e then
|v| = v_e [(1 − ρ^2)(y_m − y)/(y + R)]^{1/2},
d. Deduce from (c) that if v_0 < v_e, the vehicle takes equal times to climb from y = h to y = y_m and to fall back from y = y_m to y = h.
where F is independent of t, is said to be autonomous. An autonomous second order equation can be converted into a first order equation relating v = y' and y. If we let v = y', Equation 4.4.1 becomes
v' = F(y, v).   (4.4.2)
Since
v' = dv/dt = (dv/dy)(dy/dt) = v (dv/dy),   (4.4.3)
The integral curves of Equation 4.4.4 can be plotted in the (y, v) plane, which is called the Poincaré phase plane of Equation 4.4.1. If y is a solution of Equation 4.4.1 then y = y(t), v = y'(t) is a parametric equation for an integral curve of Equation 4.4.4. We will call these integral curves trajectories of Equation 4.4.1, and we will call Equation 4.4.4 the phase plane equivalent of Equation 4.4.1.
Equations of this form often arise in applications of Newton’s second law of motion. For example, suppose y is the
displacement of a moving object with mass m. It’s reasonable to think of two types of time-independent forces acting on the
object. One type - such as gravity - depends only on position. We could write such a force as −mp(y). The second type - such as atmospheric resistance or friction - may depend on position and velocity. (Forces that depend on velocity are called damping forces.) We write this force as −mq(y, y')y', where q(y, y') is usually a positive function and we've put the factor y' outside to make it explicit that the force is in the direction opposing the motion. In this case Newton's second law of motion leads to Equation 4.4.5.
The phase plane equivalent of Equation 4.4.5 is
v (dv/dy) + q(y, v)v + p(y) = 0.   (4.4.6)
Some statements that we will be making about the properties of Equation 4.4.5 and Equation 4.4.6 are intuitively reasonable,
but difficult to prove. Therefore our presentation in this section will be informal: we will just say things without proof, all of
which are true if we assume that p = p(y) is continuously differentiable for all y and q = q(y, v) is continuously differentiable
for all (y, v). We begin with the following statements:
Properties
Statement 1: If y_0 and v_0 are arbitrary real numbers then Equation 4.4.5 has a unique solution on (−∞, ∞) such that y(0) = y_0 and y'(0) = v_0.
Statement 2: If y = y(t) is a solution of Equation 4.4.5 and τ is any constant then y_1 = y(t − τ) is also a solution of Equation 4.4.5.
Statement 3: If two solutions y and y_1 of Equation 4.4.5 have the same trajectory then y_1(t) = y(t − τ) for some constant τ.
Statement 4: Distinct trajectories of Equation 4.4.5 cannot intersect; that is, if two trajectories of Equation 4.4.5 intersect, they are identical.
If ȳ is a constant such that p(ȳ) = 0 then y ≡ ȳ is a constant solution of Equation 4.4.5. We say that ȳ is an equilibrium of Equation 4.4.5 and (ȳ, 0) is a critical point of the phase plane equivalent equation Equation 4.4.6. We say that the equilibrium and the critical point are stable if, for any given ϵ > 0 no matter how small, there's a δ > 0, sufficiently small, such that if
√((y_0 − ȳ)^2 + v_0^2) < δ
then
√((y(t) − ȳ)^2 + v(t)^2) < ϵ
for all t > 0. Figure 4.4.1 illustrates the geometrical interpretation of this definition in the Poincaré phase plane: if (y_0, v_0) is in the smaller shaded circle (with radius δ), then (y(t), v(t)) must be in the larger circle (with radius ϵ) for all t > 0.
Figure 4.4.1: Stability: if (y_0, v_0) is in the smaller circle then (y(t), v(t)) is in the larger circle for all t > 0.
If an equilibrium and the associated critical point are not stable, we say they are unstable. To see if you really understand what stable means, try to give a direct definition of unstable (Exercise 4.4.22). We will illustrate both definitions in the following examples.
We say that this equation - as well as any physical situation that it may model - is undamped. The phase plane equivalent of
Equation 4.4.7 is the separable equation
v (dv/dy) + p(y) = 0.
potential energy of the object; thus, Equation 4.4.8 says that the total energy of the object remains constant, or is conserved. In particular, if a trajectory passes through a given point (y_0, v_0) then
c = v_0^2/2 + P(y_0).
which can be written in the form Equation 4.4.7 with p(y) = ky/m. This equation can be solved easily by a method that
we will study in Section 5.2, but that method isn’t available here. Instead, we will consider the phase plane equivalent of
Equation 4.4.9.
From Equation 4.4.3, we can rewrite Equation 4.4.9 as the separable equation
m v (dv/dy) = −ky.
Equation 4.4.9 has exactly one equilibrium, ȳ = 0, and it is stable. You can see intuitively why this is so: if the object is displaced in either direction from equilibrium, the spring tries to bring it back.
In this case we can find y explicitly as a function of t. (Don't expect this to happen in more complicated problems!) If v > 0 on an interval I, Equation 4.4.10 implies that
dy/dt = v = √((ρ − ky^2)/m)
on I. This is equivalent to
(√k / √(ρ − ky^2)) (dy/dt) = ω_0,   where ω_0 = √(k/m).   (4.4.12)
Since
∫ (√k / √(ρ − ky^2)) dy = sin^{−1}(√(k/ρ) y) + c = sin^{−1}(y/R) + c
(see Equation 4.4.11), Equation 4.4.12 implies that there's a constant ϕ such that
sin^{−1}(y/R) = ω_0 t + ϕ
or
y = R sin(ω_0 t + ϕ)
for all t in I. Although we obtained this function by assuming that v > 0, you can easily verify that y satisfies Equation 4.4.9 for all values of t. Thus, the displacement varies periodically between −R and R, with period T = 2π/ω_0 (Figure 4.4.4). (If you've taken a course in elementary mechanics you may recognize this as simple harmonic motion.)
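As a quick symbolic check (an added sketch, not from the text), one can verify with sympy that y = R sin(ω_0 t + ϕ) with ω_0 = √(k/m) satisfies m y'' + k y = 0 for arbitrary R and ϕ.

import sympy as sp

t, R, phi, k, m = sp.symbols('t R phi k m', positive=True)
omega0 = sp.sqrt(k / m)
y = R * sp.sin(omega0 * t + phi)

# m y'' + k y should simplify to zero
residual = sp.simplify(m * sp.diff(y, t, 2) + k * y)
print(residual)   # prints 0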
or
y'' = −(g/L) sin y.   (4.4.13)
Since sin nπ = 0 if n is any integer, Equation 4.4.13 has infinitely many equilibria ȳ_n = nπ. If n is even, the mass is directly below the axle (Figure 4.4.6a) and gravity opposes any deviation from the equilibrium.
v^2/2 = (g/L) cos y + c.   (4.4.14)
If v = v_0 when y = 0, then
c = v_0^2/2 − g/L,
so
v^2/2 = v_0^2/2 − (2g/L) sin^2(y/2),
which is equivalent to
v^2 = v_0^2 − v_c^2 sin^2(y/2),   (4.4.15)
where
v_c = 2√(g/L).
The curves defined by Equation 4.4.15 are the trajectories of Equation 4.4.13. They are periodic with period 2π in y, which isn't surprising, since if y = y(t) is a solution of Equation 4.4.13 then so is y_n = y(t) + 2nπ for any integer n. Figure 4.4.7 shows trajectories over the interval [−π, π]. From Equation 4.4.15, you can see that if |v_0| > v_c then v is nonzero for all t, which means that the object whirls in the same direction forever, as in Figure 4.4.8. The trajectories associated with this whirling motion are above the upper dashed curve and below the lower dashed curve in Figure 4.4.7. You can also see from Equation 4.4.15 that if 0 < |v_0| < v_c, then v = 0 when y = ±y_max, where
y_max = 2 sin^{−1}(|v_0|/v_c).
The trajectories associated with this kind of motion are the ovals between the dashed curves in Figure 4.4.7. It can be shown (see Exercise 4.4.21 for a partial proof) that the period of the oscillation is
T = 8 ∫_0^{π/2} dθ / √(v_c^2 − v_0^2 sin^2 θ).   (4.4.16)
Although this integral cannot be evaluated in terms of familiar elementary functions, you can see that it is finite if |v_0| < v_c.
The dashed curves in Figure 4.4.7 contain four trajectories. The critical points (π, 0) and (−π, 0) are the trajectories of the unstable equilibrium solutions ȳ = ±π. The upper dashed curve connecting (but not including) them is obtained from initial conditions of the form y(t_0) = 0, v(t_0) = v_c. If y is any solution with this trajectory then
lim_{t→∞} y(t) = π and lim_{t→−∞} y(t) = −π.
The lower dashed curve connecting (but not including) them is obtained from initial conditions of the form y(t_0) = 0, v(t_0) = −v_c. If y is any solution with this trajectory then
lim_{t→∞} y(t) = −π and lim_{t→−∞} y(t) = π.
Consistent with this, the integral Equation 4.4.16 diverges to ∞ if v_0 = ±v_c (Exercise 4.4.21).
Since the dashed curves separate trajectories of whirling solutions from trajectories of oscillating solutions, each of these
curves is called a separatrix.
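The trajectories and separatrices just described can be plotted directly from Equation 4.4.15. The sketch below is an added illustration (not from the text); the value of g/L is an arbitrary assumption, and the dashed curves correspond to v_0 = ±v_c.

import numpy as np
import matplotlib.pyplot as plt

g_over_L = 1.0                          # assumed value of g/L for the plot
vc = 2.0 * np.sqrt(g_over_L)            # critical velocity v_c = 2 sqrt(g/L)
y = np.linspace(-np.pi, np.pi, 600)

for v0 in (0.5 * vc, 0.9 * vc, vc, 1.3 * vc):
    v_sq = v0 ** 2 - vc ** 2 * np.sin(y / 2) ** 2     # Equation 4.4.15
    v = np.sqrt(np.where(v_sq >= 0, v_sq, np.nan))    # mask the forbidden region
    style = '--' if np.isclose(v0, vc) else '-'       # dashed curve = separatrix
    plt.plot(y, v, style)
    plt.plot(y, -v, style)

plt.xlabel('y'); plt.ylabel('v'); plt.title('Pendulum trajectories (sketch)')
plt.show()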
In general, if Equation 4.4.7 has both stable and unstable equilibria then the separatrices are the curves given by Equation 4.4.8 that pass through unstable critical points. Thus, if (ȳ, 0) is an unstable critical point, then
v^2/2 + P(y) = P(ȳ)   (4.4.17)
y'' + p(y) = 0   (4.4.18)
p(y) < 0 if a < y < ȳ and p(y) > 0 if ȳ < y < b.   (4.4.19)
If we regard p(y) as a force acting on a unit mass, Equation 4.4.19 means that the force resists all sufficiently small displacements from ȳ.
We've already seen examples illustrating this principle. The equation Equation 4.4.9 for the undamped spring-mass system is of the form Equation 4.4.18 with p(y) = ky/m, which has only the stable equilibrium ȳ = 0. In this case Equation 4.4.19 holds with a = −∞ and b = ∞. The equation Equation 4.4.13 for the undamped pendulum is of the form Equation 4.4.18 with p(y) = (g/L) sin y. We've seen that ȳ = 2mπ is a stable equilibrium if m is an integer. In this case
It can also be shown (Exercise 4.4.24) that ȳ is unstable if there's a b > ȳ such that
p(y) < 0 if ȳ < y < b   (4.4.20)
or an a < ȳ such that
p(y) > 0 if a < y < ȳ.   (4.4.21)
If we regard p(y) as a force acting on a unit mass, Equation 4.4.20 means that the force tends to increase all sufficiently small positive displacements from ȳ, while Equation 4.4.21 means that the force tends to increase the magnitude of all sufficiently small negative displacements from ȳ.
The undamped pendulum also illustrates this principle. We've seen that ȳ = (2m + 1)π is an unstable equilibrium if m is an integer.
Example 4.4.3
The equation
y'' + y(y − 1) = 0   (4.4.22)
is of the form Equation 4.4.18 with p(y) = y(y − 1), so ȳ = 0 is unstable and ȳ = 1 is stable.
Integrating yields
v^2/2 + y^3/3 − y^2/2 = C,
which we rewrite as
v^2 + (1/3) y^2 (2y − 3) = c   (4.4.23)
after renaming the constant of integration. These are the trajectories of Equation 4.4.22. If y is any solution of Equation 4.4.22, the point (y(t), v(t)) moves along the trajectory of y in the direction of increasing y in the upper half plane (v > 0).
Figure 4.4.10 shows typical trajectories. The dashed curve through the critical point (0, 0), obtained by setting c = 0 in
Equation 4.4.23, separates the y -v plane into regions that contain different kinds of trajectories; again, we call this curve a
separatrix. Trajectories in the region bounded by the closed loop (b) are closed curves, so solutions associated with them
are periodic. Solutions associated with other trajectories are not periodic. If y is any such solution with trajectory not on
the separatrix, then
lim_{t→∞} y(t) = −∞,   lim_{t→−∞} y(t) = −∞,
The separatrix contains four trajectories of Equation 4.4.22. One is the point (0, 0), the trajectory of the equilibrium ȳ = 0. Since distinct trajectories cannot intersect, the segments of the separatrix marked (a), (b), and (c) - which don't include (0, 0) - are distinct trajectories, none of which can be traversed in finite time. Solutions with these trajectories have the following asymptotic behavior:
is
v (dv/dy) + q(y, v)v + p(y) = 0.
This equation isn’t separable, so we cannot solve it for v in terms of y , as we did in the undamped case, and conservation of
energy doesn’t hold. (For example, energy expended in overcoming friction is lost.) However, we can study the qualitative
behavior of its solutions by rewriting it as
dv/dy = −q(y, v) − p(y)/v,   (4.4.25)
which is equivalent to Equation 4.4.24 in the sense defined in Section 10.1. Fortunately, most differential equation software
packages enable you to do this painlessly.
In the text we will confine ourselves to the case where q is constant, so Equation 4.4.24 and Equation 4.4.25 reduce to
y'' + c y' + p(y) = 0   (4.4.26)
and
dv/dy = −c − p(y)/v.
(We will consider more general equations in the exercises.) The constant c is called the damping constant. In situations where Equation 4.4.26 is the equation of motion of an object, c is positive; however, there are situations where c may be negative.
(A minor note: the c in Equation 4.4.26 actually corresponds to c/m in this equation.) Figure 4.4.11 shows a typical direction
field for an equation of this form. Recalling that motion along a trajectory must be in the direction of increasing y in the upper
half plane (v > 0 ) and in the direction of decreasing y in the lower half plane (v < 0 ), you can infer that all trajectories
approach the origin in clockwise fashion. To confirm this, Figure 4.4.12 shows the same direction field with some trajectories
filled in. All the trajectories shown there correspond to solutions of the initial value problem
my'' + c y' + ky = 0,   y(0) = y_0,   y'(0) = v_0,
where
m v_0^2 + k y_0^2 = ρ (a positive constant);
thus, if there were no damping (c = 0 ), all the solutions would have the same dashed elliptic trajectory, shown in Figure
4.4.14.
then all solutions of (4.4.27) are oscillatory, while if c ≥ c_1, no solutions of (4.4.27) have this property. (In fact, no solution not identically zero can have more than two zeros in this case.) Figure 4.4.14 shows a direction field and some integral curves for (4.4.28) in this case.
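To see these trajectories concretely, here is an added numerical sketch (not part of the text) that integrates my'' + cy' + ky = 0 and plots (y, v) for c = 0 and for a small positive c; the mass, spring constant, and initial data are assumptions chosen for the picture. With c = 0 the trajectory is the closed elliptic curve; with c > 0 it spirals clockwise into the origin.

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

m, k = 1.0, 1.0                          # assumed mass and spring constant
for c in (0.0, 0.3):                     # no damping vs. light damping
    def rhs(t, u, c=c):
        y, v = u
        return [v, -(c * v + k * y) / m]    # m y'' + c y' + k y = 0
    sol = solve_ivp(rhs, (0, 40), [1.0, 0.0], max_step=0.05)
    plt.plot(sol.y[0], sol.y[1], label=f'c = {c}')

plt.xlabel('y'); plt.ylabel('v'); plt.legend(); plt.show()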
Example 4.4.4
Now we return to the pendulum. If we assume that some mechanism (for example, friction in the axle or atmospheric
resistance) opposes the motion of the pendulum with a force proportional to its angular velocity, Newton’s second law of
motion implies that
mLy'' = −c y' − mg sin y,   (4.4.29)
where c > 0 is the damping constant. (Again, a minor note: the c in Equation 4.4.26 actually corresponds to c/mL in this equation.) To plot a direction field for Equation 4.4.29 we write its phase plane equivalent as
dv/dy = −c/(mL) − (g/(Lv)) sin y.
Figure 4.4.15 shows trajectories of four solutions of Equation 4.4.29, all satisfying y(0) = 0. For each m = 0, 1, 2, 3, imparting the initial velocity v(0) = v_m causes the pendulum to make m complete revolutions and then settle into
2. y'' + y^2 = 0
3. y'' + y|y| = 0
4. y'' + y e^{−y} = 0
Q4.4.2
In Exercises 4.4.5–4.4.8 find the equations of the trajectories of the given undamped equation. Identify the equilibrium solutions,
determine whether they are stable or unstable, and find the equations of the separatrices (that is, the curves through the unstable
equilibria). Plot the separatrices and some trajectories in each of the regions of Poincaré plane determined by them.
5. y'' − y^3 + 4y = 0
6. y'' + y^3 − 4y = 0
7. y'' + y(y^2 − 1)(y^2 − 4) = 0
8. y'' + y(y − 2)(y − 1)(y + 2) = 0
Q4.4.3
In Exercises 4.4.9–4.4.12 plot some trajectories of the given equation for various values (positive, negative, zero) of the parameter a.
Find the equilibria of the equation and classify them as stable or unstable. Explain why the phase plane plots corresponding to
positive and negative values of a differ so markedly. Can you think of a reason why zero deserves to be called the critical value of
a?
9. y'' + y^2 − a = 0
10. y'' + y^3 − ay = 0
11. y'' − y^3 + ay = 0
12. y'' + y − ay^3 = 0
Q4.4.4
In Exercises 4.4.13-4.4.18 plot trajectories of the given equation for c = 0 and small nonzero (positive and negative) values of c to
observe the effects of damping.
13. y'' + cy' + y^3 = 0
14. y'' + cy' − y = 0
15. y'' + cy' + y^3 = 0
16. y'' + cy' + y^2 = 0
17. y'' + cy' + y|y| = 0
18. y'' + y(y − 1) + cy' = 0
Q4.4.5
19. The van der Pol equation
y'' − μ(1 − y^2)y' + y = 0,   (A)
where μ is a positive constant and y is electrical current (Section 6.3), arises in the study of an electrical circuit whose resistive properties depend upon the current. The damping term −μ(1 − y^2)y' works to reduce |y| if |y| < 1 or to increase |y| if |y| > 1. It can be shown that van der Pol's equation has exactly one closed trajectory, which is called a limit cycle. Trajectories inside the limit
also has a limit cycle. Follow the directions of Exercise 4.4.19 for this equation.
21. In connection with Equation 4.4.16, suppose y(0) = 0 and y'(0) = v_0, where 0 < v_0 < v_c.
a. Let T_1 be the time required for y to increase from zero to y_max = 2 sin^{−1}(v_0/v_c). Show that
dy/dt = √(v_0^2 − v_c^2 sin^2(y/2)),   0 ≤ t < T_1.   (A)
d. Conclude from symmetry that the time required for (y(t), v(t)) to traverse the trajectory
v^2 = v_0^2 − v_c^2 sin^2(y/2)   (4.4E.2)
is T = 4T_1, and that consequently y(t + T) = y(t) and v(t + T) = v(t); that is, the oscillation is periodic with period T.
e. Show that if v_0 = v_c, the integral in (C) is improper and diverges to ∞. Conclude from this that y(t) < π for all t and lim_{t→∞} y(t) = π.
α(r) = min {∫_0^r p(x) dx, ∫_{−r}^0 |p(x)| dx}   and   β(r) = max {∫_0^r p(x) dx, ∫_{−r}^0 |p(x)| dx}.   (4.4E.3)
and define
c(y_0, v_0) = v_0^2 + 2 ∫_0^{y_0} p(x) dx.
a. Show that
b. Show that
v^2 + 2 ∫_0^y p(x) dx = c(y_0, v_0),   t > 0.   (4.4E.6)
c. Conclude from (b) that if c(y_0, v_0) < 2α(r) then |y| < r, t > 0.
Show that if √(y_0^2 + v_0^2) < δ then √(y^2 + v^2) < ϵ for t > 0, which implies that ȳ = 0 is a stable equilibrium of y'' + p(y) = 0.
e. Now let p be continuous for all y and p(ȳ) = 0, where ȳ is not necessarily zero. Suppose there's a positive number ρ such that p(y) > 0 if ȳ < y ≤ ȳ + ρ and p(y) < 0 if ȳ − ρ ≤ y < ȳ. Show that ȳ is a stable equilibrium of y'' + p(y) = 0.
with 0 < y_0 < ϵ, then y(t) ≥ ϵ for some t > 0. Conclude that ȳ = 0 is an unstable equilibrium of y'' + p(y) = 0.
b. Now let p(ȳ) = 0, where ȳ isn't necessarily zero. Suppose there's a positive number ρ such that p(y) < 0 if ȳ < y ≤ ȳ + ρ.
c. Modify your proofs of (a) and (b) to show that if there's a positive number ρ such that p(y) > 0 if ȳ − ρ ≤ y < ȳ, then ȳ is an unstable equilibrium of y'' + p(y) = 0.
Example 4.5.1
For each value of the parameter c , the equation
y − cx^2 = 0   (4.5.1)
defines a curve in the xy-plane. If c ≠ 0, the curve is a parabola through the origin, opening upward if c > 0 or downward if c < 0. If c = 0, the curve is the x axis (Figure 4.5.1).
Example 4.5.2
For each value of the parameter c the equation
y = x +c (4.5.2)
Theorem 4.5.1
An equation that can be written in the form
H (x, y, c) = 0 (4.5.3)
is said to define a one-parameter family of curves if, for each value of c in some nonempty set of real numbers, the set
of points (x, y) that satisfy Equation 4.5.3 forms a curve in the xy-plane.
Equations Equation 4.5.1 and Equation 4.5.2 define one–parameter families of curves. (Although Equation 4.5.2 isn’t in the
form Equation 4.5.3, it can be written in this form as y − x − c = 0 .)
Example 4.5.3
If c > 0 , the graph of the equation
x^2 + y^2 − c = 0   (4.5.4)
is a circle with center at (0, 0) and radius √c. If c = 0 , the graph is the single point (0, 0). (We don’t regard a single point
as a curve.) If c < 0 , the equation has no graph. Hence, Equation 4.5.4 defines a one–parameter family of curves for
positive values of c . This family consists of all circles centered at (0, 0) (Figure 4.5.3).
Example 4.5.4
The equation
x^2 + y^2 + c^2 = 0
does not define a one-parameter family of curves, since no (x, y) satisfies the equation if c ≠ 0 , and only the single point
(0, 0) satisfies it if c = 0 .
Recall from Section 1.2 that the graph of a solution of a differential equation is called an integral curve of the equation.
Solving a first order differential equation usually produces a one–parameter family of integral curves of the equation. Here we
are interested in the converse problem: given a one–parameter family of curves, is there a first order differential equation for which every member of the family is an integral curve? This suggests the next definition.
Definition 4.5.2
If every curve in a one-parameter family defined by the equation
H(x, y, c) = 0   (4.5.5)
To find a differential equation for a one–parameter family we differentiate its defining equation Equation 4.5.5 implicitly with
respect to x, to obtain
H_x(x, y, c) + H_y(x, y, c) y' = 0.   (4.5.7)
If this equation doesn't contain c, then it is a differential equation for the family. If it does contain c, it may be possible to obtain a differential equation for the family by eliminating c between Equation 4.5.5 and Equation 4.5.7.
Solution
Differentiating Equation 4.5.8 with respect to x yields
y' = 2cx.
Therefore c = y'/(2x), and substituting this into Equation 4.5.8 yields
y = xy'/2
as a differential equation for the family of curves defined by Equation 4.5.8. The graph of any function of the form y = cx^2 is an integral curve of this equation.
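The elimination of c can also be reproduced symbolically. The following sympy sketch (added here for illustration, not part of the text) differentiates y = c x^2, solves for c, and substitutes back to recover y = x y'/2.

import sympy as sp

x, c = sp.symbols('x c')
y = sp.Function('y')

family = sp.Eq(y(x), c * x ** 2)                 # the family y = c x^2
derived = sp.Eq(sp.diff(y(x), x), 2 * c * x)     # differentiate with respect to x

c_value = sp.solve(derived, c)[0]                # c = y'/(2x)
ode = family.subs(c, c_value)                    # y = x y'/2
print(sp.simplify(ode))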
The next example shows that members of a given family of curves may be obtained by joining integral curves for more than
one differential equation.
Example 4.5.6
a. Try to find a differential equation for the family of lines tangent to the parabola y = x^2.
b. Find two tangent lines to the parabola y = x^2 that pass through (2, 3), and find the points of tangency.
Solution
a. The equation of the line through a given point (x_0, y_0) with slope m is
y = y_0 + m(x − x_0).   (4.5.9)
or, equivalently,
y = −x_0^2 + 2x_0 x.   (4.5.10)
We must choose the plus sign in Equation 4.5.11 if x < x_0 and the minus sign if x > x_0; thus,
x_0 = (x + √(x^2 − y)) if x < x_0
and
x_0 = (x − √(x^2 − y)) if x > x_0.
Since y' = 2x_0, this implies that
y' = 2(x + √(x^2 − y)),  if x < x_0,   (4.5.12)
and
y' = 2(x − √(x^2 − y)),  if x > x_0.   (4.5.13)
Neither Equation 4.5.12 nor Equation 4.5.13 is a differential equation for the family of tangent lines to the parabola y = x^2. However, if each tangent line is regarded as consisting of two tangent half lines joined at the point of tangency, Equation 4.5.12 is a differential equation for the family of tangent half lines on which x is less than the abscissa of the point of tangency (Figure 4.5.4a), while Equation 4.5.13 is a differential equation for the family of tangent half lines on which x is greater than this abscissa (Figure 4.5.4b). The parabola y = x^2 is also an integral curve of both Equation 4.5.12 and Equation 4.5.13.
Figure 4.5.4
b. From Equation 4.5.10 the point (x, y) = (2, 3) is on the tangent line through (x_0, x_0^2) if and only if
3 = −x_0^2 + 4x_0,
which is equivalent to
x_0^2 − 4x_0 + 3 = (x_0 − 3)(x_0 − 1) = 0,
so x_0 = 3 or x_0 = 1. From Equation 4.5.10, the corresponding tangent lines are
y = −9 + 6x,
with point of tangency (3, 9), and
y = −1 + 2x,
with point of tangency (1, 1).
Geometric Problems
We now consider some geometric problems that can be solved by means of differential equations.
Example 4.5.7
Find curves y = y(x) such that every point (x_0, y(x_0)) on the curve is the midpoint of the line segment with endpoints on the coordinate axes and tangent to the curve at (x_0, y(x_0)) (Figure 4.5.6).
Solution
The equation of the line tangent to the curve at P = (x_0, y(x_0)) is
y = y(x_0) + y'(x_0)(x − x_0).
If we denote the x and y intercepts of the tangent line by x_I and y_I (Figure 4.5.6), then
0 = y(x_0) + y'(x_0)(x_I − x_0)   (4.5.14)
and
y_I = y(x_0) − y'(x_0) x_0.   (4.5.15)
From Figure 4.5.6, P is the midpoint of the line segment connecting (x_I, 0) and (0, y_I) if and only if x_I = 2x_0 and y_I = 2y(x_0). Substituting the first of these conditions into Equation 4.5.14 or the second into Equation 4.5.15 yields
y(x_0) + y'(x_0) x_0 = 0.
Since x_0 is arbitrary we drop the subscript and conclude that y = y(x) satisfies
y + xy' = 0,
Integrating yields xy = c, or
y = c/x.
If c = 0 this curve is the line y = 0 , which does not satisfy the geometric requirements imposed by the problem; thus,
c ≠ 0 , and the solutions define a family of hyperbolas (Figure 4.5.7).
Figure 4.5.7
Example 4.5.8
Find curves y = y(x) such that the tangent line to the curve at any point (x_0, y(x_0)) intersects the x-axis at (x_0^2, 0).
Figure 4.5.8 illustrates the situation in the case where the curve is in the first quadrant and 0 < x < 1 .
Solution
The equation of the line tangent to the curve at (x_0, y(x_0)) is
y = y(x_0) + y'(x_0)(x − x_0).
Since (x_0^2, 0) is on the tangent line,
0 = y(x_0) + y'(x_0)(x_0^2 − x_0).
Since x_0 is arbitrary we drop the subscript and conclude that y = y(x) satisfies
y + y'(x^2 − x) = 0.
Figure 4.5.8
so
ln |y| = ln |x| − ln |x − 1| + k = ln |x/(x − 1)| + k,
and
y = cx/(x − 1).
If c = 0 , the graph of this function is the x-axis. If c ≠ 0 , it is a hyperbola with vertical asymptote x = 1 and horizontal
asymptote y = c . Figure 4.5.9 shows the graphs for c ≠ 0 .
Orthogonal Trajectories
Two curves C_1 and C_2 are said to be orthogonal at a point of intersection (x_0, y_0) if they have perpendicular tangents at (x_0, y_0) (Figure 4.5.10). A curve is said to be an orthogonal trajectory of a given family of curves if it is orthogonal to every curve in the family. For example, every line through the origin is an orthogonal trajectory of the family of circles centered at the origin. Conversely, any such circle is an orthogonal trajectory of the family of lines through the origin (Figure 4.5.11).
Orthogonal trajectories occur in many physical applications. For example, if u = u(x, y) is the temperature at a point (x, y) ,
the curves defined by
u(x, y) = c (4.5.16)
are called isothermal curves. The orthogonal trajectories of this family are called heat-flow lines, because at any given point
the direction of maximum heat flow is perpendicular to the isothermal through the point. If u represents the potential energy of
an object moving under a force that depends upon (x, y), the curves Equation 4.5.16 are called equipotentials, and the
orthogonal trajectories are called lines of force.
From analytic geometry we know that two nonvertical lines L_1 and L_2 with slopes m_1 and m_2, respectively, are perpendicular if and only if m_2 = −1/m_1; therefore, the integral curves of the differential equation
y' = −1/f(x, y)
Figure 4.5.10
because at any point (x_0, y_0) where curves from the two families intersect the slopes of the respective tangent lines are
m_1 = f(x_0, y_0)   and   m_2 = −1/f(x_0, y_0).
This suggests a method for finding orthogonal trajectories of a family of integral curves of a first order equation.
y' = −1/f(x, y)
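As an added numerical illustration (not from the text), the sketch below traces one orthogonal trajectory for the family of circles x^2 + y^2 = c^2, whose members satisfy y' = f(x, y) = −x/y; the starting point (1, 2) is an arbitrary assumption. The orthogonal trajectory satisfies y' = −1/f(x, y) = y/x and should follow the straight line y = 2x.

import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    return -x / y                        # slope of the circle family at (x, y)

def orthogonal_rhs(x, y):
    return [-1.0 / f(x, y[0])]           # y' = -1/f(x, y)

sol = solve_ivp(orthogonal_rhs, (1.0, 3.0), [2.0], dense_output=True)
xs = np.linspace(1.0, 3.0, 5)
print(sol.sol(xs)[0])                    # approximately 2*xs, i.e. the line y = 2x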
Example 4.5.9
Find the orthogonal trajectories of the family of circles
x^2 + y^2 = c^2   (c > 0).   (4.5.17)
Solution
To find a differential equation for the family of circles we differentiate Equation 4.5.17 implicitly with respect to x to
obtain
2x + 2yy' = 0,
or
y' = −x/y.
are orthogonal trajectories of the given family. We leave it to you to verify that the general solution of this equation is
y = kx,
Example 4.5.10
Find the orthogonal trajectories of the family of hyperbolas
xy = c (c ≠ 0) (4.5.18)
(Figure 4.5.7).
Solution
Differentiating Equation 4.5.18 implicitly with respect to x yields
y + xy' = 0,
or
y' = −y/x;
Example 4.5.11
Find the orthogonal trajectories of the family of circles defined by
(x − c)^2 + y^2 = c^2   (c ≠ 0).   (4.5.19)
These circles are centered on the x-axis and tangent to the y -axis (Figure 4.5.13a).
Solution
Multiplying out the left side of Equation 4.5.19 yields
x^2 − 2cx + y^2 = 0,   (4.5.20)
so
x − c = x − (x^2 + y^2)/(2x) = (x^2 − y^2)/(2x).
y' = (y^2 − x^2)/(2xy).   (4.5.22)
The curves defined by Equation 4.5.19 are integral curves of Equation 4.5.22, and the integral curves of
y' = 2xy/(x^2 − y^2)
are orthogonal trajectories of the family Equation 4.5.19. This is a homogeneous nonlinear equation, which we studied in Section 2.4. Substituting y = ux yields
u'x + u = 2x(ux)/(x^2 − (ux)^2) = 2u/(1 − u^2),
so
u'x = 2u/(1 − u^2) − u = u(u^2 + 1)/(1 − u^2),
or, equivalently,
[1/u − 2u/(u^2 + 1)] u' = 1/x.
Therefore
ln |u| − ln(u^2 + 1) = ln |x| + k,
or
|y| = e^k (x^2 + y^2).
where
h = e^{−k}/2 if y ≥ 0,   and   h = −e^{−k}/2 if y ≤ 0.
Thus, the orthogonal trajectories are circles centered on the y axis and tangent to the x axis (Figure 4.5.13 (b)). The
circles for which h > 0 are above the x-axis, while those for which h < 0 are below.
2. e^{xy} = cy
3. ln |xy| = c(x^2 + y^2)
4. y = x^{1/2} + cx
5. y = e^{x^2} + ce^{−x^2}
6. y = x^3 + c/x
7. y = sin x + ce^x
8. y = e^x + c(1 + x^2)
Q4.5.2
9. Show that the family of circles
(x − x_0)^2 + y^2 = 1,   −∞ < x_0 < ∞,
can be obtained by joining integral curves of two first order differential equations. More specifically, find differential equations for the families of semicircles
(x − x_0)^2 + y^2 = 1,   x_0 < x < x_0 + 1,   −∞ < x_0 < ∞,
(x − x_0)^2 + y^2 = 1,   x_0 − 1 < x < x_0,   −∞ < x_0 < ∞.
10. Suppose f and g are differentiable for all x . Find a differential equation for the family of functions y = f + cg (
c =constant).
Q4.5.3
In Exercises 4.5.11-4.5.13 find a first order differential equation for the given family of curves.
11. Lines through a given point (x_0, y_0).
12. Circles through (−1, 0) and (1, 0).
13. Circles through (0, 0) and (0, 2).
Q4.5.4
14. Use the method of Example 4.5.6 (a) to find the equations of lines through the given points tangent to the parabola y = x^2.
Also, find the points of tangency.
a. (5, 9)
b. (6, 11)
c. (−6, 20)
d. (−3, 5)
15.
a. Show that the equation of the line tangent to the circle
x^2 + y^2 = 1   (A)
b. Show that if y' is the slope of a nonvertical tangent line to the circle (A) and (x, y) is a point on the tangent line then
(y')^2 (x^2 − 1) − 2xy y' + y^2 − 1 = 0.   (C)
c. Show that the segment of the tangent line (B) on which (x − x_0)/y_0 > 0 is an integral curve of the differential equation
y' = (xy − √(x^2 + y^2 − 1))/(x^2 − 1),   (D)
while the segment on which (x − x_0)/y_0 < 0 is an integral curve of the differential equation
y' = (xy + √(x^2 + y^2 − 1))/(x^2 − 1).   (E)
HINT: Use the quadratic formula to solve (C) for y'. Then substitute (B) for y and choose the ± sign in the quadratic formula so that the resulting expression for y' reduces to the known slope y' = −x_0/y_0.
d. Show that the upper and lower semicircles of (A) are also integral curves of (D) and (E).
e. Find the equations of two lines through (5,5) tangent to the circle (A), and find the points of tangency.
16.
a. Show that the equation of the line tangent to the parabola
2
x =y (A)
b. Show that if y is the slope of a nonvertical tangent line to the parabola (A) and (x, y) is a point on the tangent line then
′
2 ′ 2 ′
4 x (y ) − 4xy y + x = 0. (C)
c. Show that the segment of the tangent line defined in (a) on which x > x is an integral curve of the differential equation
0
−−−−−
2
y + √y − x
′
y = , (D)
2x
while the segment on which x < x is an integral curve of the differential equation
0
−−−−−
2
y − √y − x
′
y = , (E)
2x
HINT: Use the quadratic formula to solve (c) for y . Then substitute (B) for y and choose the
′
± sign in the quadratic
formula so that the resulting expression for y reduces to the known slope of y =
′ ′ 1
2y0
− −
d. Show that the upper and lower halves of the parabola (A), given by y = √x and y = −√x for x >0 , are also integral
curves of (D) and (E).
17. Use the results of Exercise 4.5.16 to find the equations of two lines tangent to the parabola x = y^2 and passing through the
19. Find all curves y = y(x) such that the tangent to the curve at any point (x_0, y(x_0)) intersects the x axis at x_I = x_0^3.
22. Find all curves y = y(x) such that the tangent to the curve at any point (x_0, y(x_0)) intersects the y axis at y_I = x_0.
23. Find a curve y = y(x) through (0, 2) such that the normal to the curve at any point (x_0, y(x_0)) intersects the x axis at x_I = x_0 + 1.
24. Find a curve y = y(x) through (2, 1) such that the normal to the curve at any point (x_0, y(x_0)) intersects the y axis at y_I = 2y(x_0).
Q4.5.5
In Exercises 4.5.25-4.5.29 find the orthogonal trajectories of the given family of curves.
25. x^2 + 2y^2 = c^2
26. x^2 + 4xy + y^2 = c
27. y = ce^{2x}
28. xy e^{x^2} = c
29. y = ce
Q4.5.6
30. Find a curve through (−1, 3) orthogonal to every parabola of the form
y = 1 + cx^2
that it intersects. Which of these parabolas does the desired curve intersect?
31. Show that the orthogonal trajectories of
x^2 + 2axy + y^2 = c
satisfy
|y − x|^{a+1} |y + x|^{a−1} = k.
32. If lines L and L_1 intersect at (x_0, y_0) and α is the smallest angle through which L must be rotated counterclockwise about (x_0, y_0) to bring it into coincidence with L_1, we say that α is the angle from L to L_1; thus, 0 ≤ α < π. If L and L_1 are tangents to curves C and C_1, respectively, that intersect at (x_0, y_0), we say that C_1 intersects C at the angle α. Use the identity
tan(A + B) = (tan A + tan B)/(1 − tan A tan B)
y' = f(x, y)   and   y' = (f(x, y) + tan α)/(1 − f(x, y) tan α)   (α ≠ π/2),
33. Use the result of Exercise 4.5.32 to find a family of curves that intersect every nonvertical line through the origin at the
angle α = π/4.
34. Use the result of Exercise 4.5.32 to find a family of curves that intersect every circle centered at the origin at a given angle
α ≠ π/2.
where a, b, and c are constant (a ≠ 0). When you've completed this section you'll know everything there is to know about solving such equations.
5.1: Homogeneous Linear Equations
A second order differential equation is said to be linear if it can be written as
y'' + p(x)y' + q(x)y = f(x).   (5.1.1)
We call the function f on the right a forcing function, since in physical applications it is often related to a force acting on some
system modeled by the differential equation. We say that Equation 5.1.1 is homogeneous if f ≡ 0 or nonhomogeneous if
f ≢ 0. Since these definitions are like the corresponding definitions in Section 2.1 for the linear first order equation
y' + p(x)y = f(x),   (5.1.2)
it is natural to expect similarities between methods of solving Equation 5.1.1 and Equation 5.1.2. However, solving Equation
5.1.1 is more difficult than solving Equation 5.1.2. For example, while Theorem 5.1.1 gives a formula for the general solution
of Equation 5.1.2 in the case where f ≡ 0 and Theorem 5.1.2 gives a formula for the case where f ≢ 0 , there are no formulas
for the general solution of Equation 5.1.1 in either case. Therefore we must be content to solve linear second order equations
of special forms.
In Section 2.1 we considered the homogeneous equation y' + p(x)y = 0 first, and then used a nontrivial solution of this equation to find the general solution of the nonhomogeneous equation y' + p(x)y = f(x). Although the progression from the homogeneous to the nonhomogeneous case isn't that simple for the linear second order equation, it is still necessary to solve the homogeneous equation
y'' + p(x)y' + q(x)y = 0   (5.1.3)
in order to solve the nonhomogeneous equation Equation 5.1.1. This section is devoted to Equation 5.1.3.
The next theorem gives sufficient conditions for existence and uniqueness of solutions of initial value problems for Equation
5.1.3. We omit the proof.
Theorem 5.1.1
Suppose p and q are continuous on an open interval (a, b), let x_0 be any point in (a, b), and let k_0 and k_1 be arbitrary real numbers. Then the initial value problem
y'' + p(x)y' + q(x)y = 0,   y(x_0) = k_0,   y'(x_0) = k_1
has a unique solution on (a, b).
Since y ≡ 0 is obviously a solution of Equation 5.1.3 we call it the trivial solution. Any other solution is nontrivial. Under the
assumptions of Theorem 5.1.1, the only solution of the initial value problem
y'' + p(x)y' + q(x)y = 0,   y(x_0) = 0,   y'(x_0) = 0
is the trivial solution.
Example 5.1.1
The coefficients of y' and y in
y'' − y = 0   (5.1.4)
are the constant functions p ≡ 0 and q ≡ −1, which are continuous on (−∞, ∞). Therefore Theorem 5.1.1 implies that every initial value problem for Equation 5.1.4 has a unique solution on (−∞, ∞).
a. Verify that y_1 = e^x and y_2 = e^{−x} are solutions of Equation 5.1.4 on (−∞, ∞).
Solution:
a. If y_1 = e^x then y_1' = e^x and y_1'' = e^x = y_1, so y_1'' − y_1 = 0. If y_2 = e^{−x}, then y_2' = −e^{−x} and y_2'' = e^{−x} = y_2, so y_2'' − y_2 = 0.
b. If
y = c_1 e^x + c_2 e^{−x}   (5.1.6)
then
y' = c_1 e^x − c_2 e^{−x}   (5.1.7)
and
y'' = c_1 e^x + c_2 e^{−x},
so
y'' − y = (c_1 e^x + c_2 e^{−x}) − (c_1 e^x + c_2 e^{−x})
        = c_1 (e^x − e^x) + c_2 (e^{−x} − e^{−x}) = 0
c_1 − c_2 = 3.
on (−∞, ∞).
Example 5.1.2
Let ω be a positive constant. The coefficients of y' and y in
y'' + ω^2 y = 0   (5.1.8)
are the constant functions p ≡ 0 and q ≡ ω^2, which are continuous on (−∞, ∞). Therefore Theorem 5.1.1 implies that every initial value problem for Equation 5.1.8 has a unique solution on (−∞, ∞).
a. Verify that y_1 = cos ωx and y_2 = sin ωx are solutions of Equation 5.1.8 on (−∞, ∞).
b. Verify that if c_1 and c_2 are arbitrary constants then y = c_1 cos ωx + c_2 sin ωx is a solution of Equation 5.1.8 on (−∞, ∞).
Solution:
a. If y_1 = cos ωx then y_1' = −ω sin ωx and y_1'' = −ω^2 cos ωx = −ω^2 y_1, so y_1'' + ω^2 y_1 = 0. If y_2 = sin ωx then y_2' = ω cos ωx and y_2'' = −ω^2 sin ωx = −ω^2 y_2, so y_2'' + ω^2 y_2 = 0.
b. If
y = c_1 cos ωx + c_2 sin ωx   (5.1.10)
then
y' = −c_1 ω sin ωx + c_2 ω cos ωx   (5.1.11)
and
y'' = −ω^2 (c_1 cos ωx + c_2 sin ωx),
so
y'' + ω^2 y = −ω^2 (c_1 cos ωx + c_2 sin ωx) + ω^2 (c_1 cos ωx + c_2 sin ωx)
            = c_1 ω^2 (−cos ωx + cos ωx) + c_2 ω^2 (−sin ωx + sin ωx) = 0
for all x. Therefore y = c_1 cos ωx + c_2 sin ωx is a solution of Equation 5.1.8 on (−∞, ∞).
c. To solve Equation 5.1.9, we must choose c_1 and c_2 in Equation 5.1.10 so that y(0) = 1 and y'(0) = 3. Setting x = 0 in Equation 5.1.10 and Equation 5.1.11 shows that c_1 = 1 and c_2 = 3/ω. Therefore
y = cos ωx + (3/ω) sin ωx
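As an added check (a sketch, not part of the text), sympy's dsolve reproduces this particular solution directly from the initial value problem; the symbol names are chosen arbitrarily.

import sympy as sp

x = sp.symbols('x')
omega = sp.symbols('omega', positive=True)
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + omega ** 2 * y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 1, y(x).diff(x).subs(x, 0): 3})
print(sol)   # expected: Eq(y(x), cos(omega*x) + 3*sin(omega*x)/omega)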
P_0(x)y'' + P_1(x)y' + P_2(x)y = 0,   y(x_0) = k_0,   y'(x_0) = k_1   (5.1.12)
has a unique solution on an interval (a, b) that contains x_0, provided that P_0, P_1, and P_2 are continuous and P_0 has no zeros on (a, b). To see this, we rewrite the differential equation in Equation 5.1.12 as
y'' + (P_1(x)/P_0(x)) y' + (P_2(x)/P_0(x)) y = 0
Example 5.1.3
The equation
x^2 y'' + xy' − 4y = 0   (5.1.13)
has the form of the differential equation in Equation 5.1.12, with P_0(x) = x^2, P_1(x) = x, and P_2(x) = −4, which are all continuous on (−∞, ∞). However, since P_0(0) = 0 we must consider solutions of Equation 5.1.13 on (−∞, 0) and (0, ∞). Since P_0 has no zeros on these intervals, Theorem 5.1.1 implies that the initial value problem
x^2 y'' + xy' − 4y = 0,   y(x_0) = k_0,   y'(x_0) = k_1
b. Verify that if c_1 and c_2 are any constants then y = c_1 x^2 + c_2/x^2 is a solution of Equation 5.1.13 on (−∞, 0) and (0, ∞).
Solution:
a. If y_1 = x^2 then y_1' = 2x and y_1'' = 2, so
x^2 y_1'' + x y_1' − 4y_1 = x^2 (2) + x(2x) − 4x^2 = 0.
If y_2 = 1/x^2 then y_2' = −2/x^3 and y_2'' = 6/x^4, so
x^2 y_2'' + x y_2' − 4y_2 = x^2 (6/x^4) − x(2/x^3) − 4/x^2 = 0.
then
y' = 2c_1 x − 2c_2/x^3   (5.1.17)
and
y'' = 2c_1 + 6c_2/x^4,
so
x^2 y'' + xy' − 4y = x^2 (2c_1 + 6c_2/x^4) + x(2c_1 x − 2c_2/x^3) − 4(c_1 x^2 + c_2/x^2)
                   = c_1 (2x^2 + 2x^2 − 4x^2) + c_2 (6/x^2 − 2/x^2 − 4/x^2)
                   = c_1 · 0 + c_2 · 0 = 0
2c_1 − 2c_2 = 0.
c_1 + c_2 = 2
−2c_1 + 2c_2 = 0.
Although the formulas for the solutions of Equation 5.1.14 and Equation 5.1.15 are both y = x^2 + 1/x^2, you should not conclude that these two initial value problems have the same solution. Remember that a solution of an initial value problem is defined on an interval that contains the initial point; therefore, the solution of Equation 5.1.14 is y = x^2 + 1/x^2 on the interval (0, ∞), which contains the initial point x_0 = 1, while the solution of Equation 5.1.15 is y = x^2 + 1/x^2 on the
y = c1 y1 + c2 y2
The next theorem states a fact that we’ve already verified in Examples 5.1.1, 5.1.2, 5.1.3.
Theorem 5.1.2
If y_1 and y_2 are solutions of the homogeneous equation
y'' + p(x)y' + q(x)y = 0   (5.1.18)
Proof
If
y = c1 y1 + c2 y2
then
y' = c_1 y_1' + c_2 y_2'   and   y'' = c_1 y_1'' + c_2 y_2''.
Therefore
y'' + p(x)y' + q(x)y = (c_1 y_1'' + c_2 y_2'') + p(x)(c_1 y_1' + c_2 y_2') + q(x)(c_1 y_1 + c_2 y_2)
                     = c_1 (y_1'' + p(x)y_1' + q(x)y_1) + c_2 (y_2'' + p(x)y_2' + q(x)y_2)
                     = c_1 · 0 + c_2 · 0 = 0,
We say that {y_1, y_2} is a fundamental set of solutions of (5.1.18) on (a, b) if every solution of Equation 5.1.18 on (a, b) can be written as a linear combination of y_1 and y_2 as in Equation 5.1.19. In this case we say that Equation 5.1.19 is the general solution of Equation 5.1.18 on (a, b).
Linear Independence
We need a way to determine whether a given set {y_1, y_2} of solutions of Equation 5.1.18 is a fundamental set. The next definition will enable us to state necessary and sufficient conditions for this.
We say that two functions y_1 and y_2 defined on an interval (a, b) are linearly independent on (a, b) if neither is a constant multiple of the other on (a, b). (In particular, this means that neither can be the trivial solution of Equation 5.1.18, since, for example, if y_1 ≡ 0 we could write y_1 = 0y_2.) We'll also say that the set {y_1, y_2} is linearly independent on (a, b).
Theorem 5.1.3
Suppose p and q are continuous on (a, b). Then a set {y_1, y_2} of solutions of
y'' + p(x)y' + q(x)y = 0   (5.1.20)
on (a, b) is a fundamental set of solutions on (a, b) if and only if {y_1, y_2} is linearly independent on (a, b).
Proof
We'll present the proof of Theorem 5.1.3 in steps worth regarding as theorems in their own right. However, let's first interpret Theorem 5.1.3 in terms of Examples 5.1.1, 5.1.2, 5.1.3.
on (−∞, ∞).
Since cos ωx / sin ωx = cot ωx is nonconstant, Theorem 5.1.3 implies that y = c_1 cos ωx + c_2 sin ωx is the general solution of y'' + ω^2 y = 0 on (−∞, ∞).
Since x^2 / x^{−2} = x^4 is nonconstant, Theorem 5.1.3 implies that y = c_1 x^2 + c_2/x^2 is the general solution of x^2 y'' + xy' − 4y = 0 on (−∞, 0) and (0, ∞).
fundamental set of solutions of Equation 5.1.20 on (a, b). Let x_0 be an arbitrary point in (a, b), and suppose y is an arbitrary solution of Equation 5.1.20 on (a, b). Then y is the unique solution of the initial value problem
y'' + p(x)y' + q(x)y = 0,   y(x_0) = k_0,   y'(x_0) = k_1;   (5.1.21)
that is, k_0 and k_1 are the numbers obtained by evaluating y and y' at x_0. Moreover, k_0 and k_1 can be any real numbers, since Theorem 5.1.1 implies that Equation 5.1.21 has a solution no matter how k_0 and k_1 are chosen. Therefore {y_1, y_2} is a fundamental set of solutions of Equation 5.1.20 on (a, b) if and only if it is possible to write the solution of an arbitrary initial value problem Equation 5.1.21 as y = c_1 y_1 + c_2 y_2. This is equivalent to requiring that the system
c_1 y_1(x_0) + c_2 y_2(x_0) = k_0
c_1 y_1'(x_0) + c_2 y_2'(x_0) = k_1   (5.1.22)
has a solution (c_1, c_2) for every choice of (k_0, k_1). Let's try to solve Equation 5.1.22.
Multiplying the first equation in Equation 5.1.22 by y_2'(x_0) and the second by y_2(x_0) yields
c_1 y_1(x_0) y_2'(x_0) + c_2 y_2(x_0) y_2'(x_0) = y_2'(x_0) k_0
c_1 y_1'(x_0) y_2(x_0) + c_2 y_2'(x_0) y_2(x_0) = y_2(x_0) k_1,
and subtracting the second equation here from the first yields
(y_1(x_0) y_2'(x_0) − y_1'(x_0) y_2(x_0)) c_1 = y_2'(x_0) k_0 − y_2(x_0) k_1.   (5.1.23)
c_1 y_1'(x_0) y_1(x_0) + c_2 y_2'(x_0) y_1(x_0) = y_1(x_0) k_1,
and subtracting the first equation here from the second yields
(y_1(x_0) y_2'(x_0) − y_1'(x_0) y_2(x_0)) c_2 = y_1(x_0) k_1 − y_1'(x_0) k_0.   (5.1.24)
If
y_1(x_0) y_2'(x_0) − y_1'(x_0) y_2(x_0) = 0,
it is impossible to satisfy Equation 5.1.23 and Equation 5.1.24 (and therefore Equation 5.1.22) unless k_0 and k_1 happen to satisfy
y_1(x_0) k_1 − y_1'(x_0) k_0 = 0
y_2'(x_0) k_0 − y_2(x_0) k_1 = 0.
we can divide Equation 5.1.23 and Equation 5.1.24 through by the quantity on the left to obtain
c_1 = (y_2'(x_0) k_0 − y_2(x_0) k_1) / (y_1(x_0) y_2'(x_0) − y_1'(x_0) y_2(x_0)),
c_2 = (y_1(x_0) k_1 − y_1'(x_0) k_0) / (y_1(x_0) y_2'(x_0) − y_1'(x_0) y_2(x_0)),   (5.1.26)
no matter how k_0 and k_1 are chosen. This motivates us to consider conditions on y_1 and y_2 that imply Equation 5.1.25.
Theorem 5.1.4
Suppose p and q are continuous on (a, b), let y_1 and y_2 be solutions of
y'' + p(x)y' + q(x)y = 0   (5.1.27)
on (a, b), and define
W = y_1 y_2' − y_1' y_2.   (5.1.28)
Then
W(x) = W(x_0) e^{−∫_{x_0}^x p(t) dt},   a < x < b.   (5.1.29)
Proof
Differentiating Equation 5.1.28 yields
W' = y_1' y_2' + y_1 y_2'' − y_2' y_1' − y_2 y_1'' = y_1 y_2'' − y_1'' y_2.   (5.1.30)
Since y_1 and y_2 are solutions of Equation 5.1.27,
y_1'' = −p y_1' − q y_1   and   y_2'' = −p y_2' − q y_2.
Substituting these into Equation 5.1.30 yields
W' = −p(y_1 y_2' − y_2 y_1') − q(y_1 y_2 − y_2 y_1)
   = −p(y_1 y_2' − y_2 y_1') = −pW.
Therefore W' + p(x)W = 0; that is, W is the solution of the initial value problem
y' + p(x)y = 0,   y(x_0) = W(x_0).
We leave it to you to verify by separation of variables that this implies Equation 5.1.29. If W(x_0) ≠ 0, Equation 5.1.29 implies that W has no zeros in (a, b), since an exponential is never zero. On the other hand, if W(x_0) = 0, Equation 5.1.29 implies that W(x) = 0 for all x in (a, b).
The function W defined in Equation 5.1.28 is the Wronskian of {y_1, y_2}. Formula Equation 5.1.29 is Abel's formula.
The Wronskian of {y_1, y_2} is usually written as the determinant
W = | y_1   y_2  |
    | y_1'  y_2' |.
c_1 = (1/W(x_0)) | k_0   y_2(x_0)  |    and    c_2 = (1/W(x_0)) | y_1(x_0)   k_0 |
                 | k_1   y_2'(x_0) |                            | y_1'(x_0)  k_1 |.
If you’ve taken linear algebra you may recognize this as Cramer’s rule.
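Before working through the examples below, it may help to see the determinant computed symbolically; the following sketch (added, not from the text) evaluates the Wronskian of e^x and e^{−x} from Example 5.1.1 and recovers the constant value −2, consistent with Abel's formula since p ≡ 0.

import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(x), sp.exp(-x)           # the solutions from Example 5.1.1

# Wronskian as the 2x2 determinant defined above
W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(sp.simplify(W))                    # -2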
Example 5.1.5
a. y'' − y = 0;   y_1 = e^x,  y_2 = e^{−x}
b. y'' + ω^2 y = 0;   y_1 = cos ωx,  y_2 = sin ωx
c. x^2 y'' + xy' − 4y = 0;   y_1 = x^2,  y_2 = 1/x^2
Solution:
a. Since p ≡ 0, we can verify Abel's formula by showing that W is constant, which is true, since
W(x) = | e^x    e^{−x} |
       | e^x   −e^{−x} |  = e^x (−e^{−x}) − e^x e^{−x} = −2
for all x.
b. Again, since p ≡ 0, we can verify Abel's formula by showing that W is constant, which is true, since
W(x) = | cos ωx      sin ωx   |
       | −ω sin ωx   ω cos ωx |  = ω(cos^2 ωx + sin^2 ωx) = ω
for all x.
c. Computing the Wronskian of y_1 = x^2 and y_2 = 1/x^2 directly yields
W = | x^2    1/x^2  |
    | 2x    −2/x^3  |  = x^2 (−2/x^3) − 2x (1/x^2) = −4/x.   (5.1.31)
to see that p(x) = 1/x. If x_0 and x are either both in (−∞, 0) or both in (0, ∞) then
∫_{x_0}^x p(t) dt = ∫_{x_0}^x dt/t = ln(x/x_0),
so Abel's formula yields
W(x) = W(x_0) e^{−ln(x/x_0)} = W(x_0)(x_0/x)
     = −(4/x_0)(x_0/x)   from (5.1.31)
     = −4/x,
The next theorem will enable us to complete the proof of Theorem 5.1.3.
Theorem 5.1.5
Suppose p and q are continuous on an open interval (a, b), let y_1 and y_2 be solutions of
y'' + p(x)y' + q(x)y = 0   (5.1.32)
on (a, b), and let W be the Wronskian of {y_1, y_2}. Then y_1 and y_2 are linearly independent on (a, b) if and only if W has no zeros on (a, b).
Proof
(y_2/y_1)' = (y_1 y_2' − y_1' y_2)/y_1^2 = W/y_1^2.   (5.1.33)
However, if W(x_0) = 0, Theorem 5.1.4 implies that W ≡ 0 on (a, b). Therefore Equation 5.1.33 implies that (y_2/y_1)' ≡ 0, so y_2/y_1 = c (constant) on I. This shows that y_2(x) = cy_1(x) for all x in I. However, we want to show that y_2 = cy_1(x) for all x in (a, b). Let Y = y_2 − cy_1. Then Y is a solution of Equation 5.1.32 on (a, b) such that Y ≡ 0 on I, which implies that Y ≡ 0 on (a, b), by the paragraph following Theorem 5.1.1 (see also Exercise 5.1.24). Hence, y_2 − cy_1 ≡ 0 on (a, b), which implies that y_1 and y_2 are not linearly independent on (a, b).
Now suppose W has no zeros on (a, b). Then y can’t be identically zero on (a, b) (why not?), and therefore there is a
1
subinterval I of (a, b) on which y has no zeros. Since Equation 5.1.33 implies that y /y is nonconstant on I , y isn’t a
1 2 1 2
constant multiple of y on (a, b). A similar argument shows that y isn’t a constant multiple of y on (a, b), since
1 1 2
′ ′ ′
y1 y y2 − y1 y W
1 2
( ) = =−
2 2
y2 y y
2 2
We can now complete the proof of Theorem 5.1.3. From Theorem 5.1.5, two solutions y and y of Equation 5.1.32 are 1 2
linearly independent on (a, b) if and only if W has no zeros on (a, b). From Theorem 5.1.4 and the motivating comments
preceding it, {y , y } is a fundamental set of solutions of Equation 5.1.32 if and only if W has no zeros on (a, b). Therefore
1 2
{ y , y } is a fundamental set for Equation 5.1.32 on (a, b) if and only if { y , y } is linearly independent on (a, b) .
1 2 1 2
The next theorem summarizes the relationships among the concepts discussed in this section.
Theorem 5.1.6

Suppose p and q are continuous on an open interval (a, b) and let y1 and y2 be solutions of

y'' + p(x)y' + q(x)y = 0                                                  (5.1.34)

on (a, b). Then the following statements are equivalent; that is, they are either all true or all false.

a. The general solution of Equation 5.1.34 on (a, b) is y = c1 y1 + c2 y2.
b. {y1, y2} is a fundamental set of solutions of Equation 5.1.34 on (a, b).
c. {y1, y2} is linearly independent on (a, b).
d. The Wronskian of {y1, y2} is nonzero at some point in (a, b).
e. The Wronskian of {y1, y2} is nonzero at all points in (a, b).

The same statements are equivalent for an equation P0(x)y'' + P1(x)y' + P2(x)y = 0 on an interval (a, b) where P0, P1, and P2 are continuous and P0 has no zeros.

Theorem 5.1.7

Suppose p and q are continuous on (a, b), let y1 and y2 be solutions of Equation 5.1.34 on (a, b), and suppose there are a point c in (a, b) and constants α and β, not both zero, such that

α y1(c) + β y1'(c) = 0   and   α y2(c) + β y2'(c) = 0.                    (5.1.35)

Then y1 and y2 are linearly dependent on (a, b).

Proof

Since α and β are not both zero, Equation 5.1.35 implies that

| y1(c)   y1'(c) |            | y1(c)    y2(c)  |
|                | = 0,  so   |                 | = 0;
| y2(c)   y2'(c) |            | y1'(c)   y2'(c) |

that is, W(c) = 0, and Theorem 5.1.5 implies that y1 and y2 are linearly dependent on (a, b).
1.
a. Verify that y1 = e^{2x} and y2 = e^{5x} are solutions of

y'' − 7y' + 10y = 0                                                       (A)

on (−∞, ∞).
b. Verify that if c1 and c2 are arbitrary constants then y = c1 e^{2x} + c2 e^{5x} is a solution of (A) on (−∞, ∞).
c. Solve the initial value problem

y'' − 7y' + 10y = 0,   y(0) = −1,   y'(0) = 1.

2.
a. Verify that y1 = e^x cos x and y2 = e^x sin x are solutions of

y'' − 2y' + 2y = 0                                                        (A)

on (−∞, ∞).
b. Verify that if c1 and c2 are arbitrary constants then y = c1 e^x cos x + c2 e^x sin x is a solution of (A) on (−∞, ∞).
c. Solve the initial value problem

y'' − 2y' + 2y = 0,   y(0) = 3,   y'(0) = −2.

3.
a. Verify that y1 = e^x and y2 = x e^x are solutions of

y'' − 2y' + y = 0                                                         (A)

on (−∞, ∞).
b. Verify that if c1 and c2 are arbitrary constants then y = e^x (c1 + c2 x) is a solution of (A) on (−∞, ∞).
c. Solve the initial value problem

y'' − 2y' + y = 0,   y(0) = 7,   y'(0) = 4.

4.
a. Verify that y1 = 1/(x − 1) and y2 = 1/(x + 1) are solutions of

(x^2 − 1) y'' + 4x y' + 2y = 0                                            (A)

on (−∞, −1), (−1, 1), and (1, ∞). What is the general solution of (A) on each of these intervals?
b. Solve the initial value problem

(x^2 − 1) y'' + 4x y' + 2y = 0,   y(0) = −5,   y'(0) = 1.
b. {e^x, e^x sin x}
c. {x + 1, x + 2} 2
d. {x^{1/2}, x^{−1/3}}
e. { sin x
,
x
}
cos x
f. {x ln|x|, x^2 ln|x|}
g. {e^x cos √x, e^x sin √x}

9. Suppose p and q are continuous on (a, b) and let y1 be a solution of

y'' + p(x)y' + q(x)y = 0                                                  (A)

that has no zeros on (a, b). Let P(x) = ∫ p(x) dx be any antiderivative of p on (a, b).
a. Show that if K is an arbitrary nonzero constant and y2 satisfies

y1 y2' − y1' y2 = K e^{−P(x)}                                             (B)

on (a, b), then y2 also satisfies (A) on (a, b), and {y1, y2} is a fundamental set of solutions of (A) on (a, b).
b. Conclude from (a) that if y2 = u y1 where u'(x) = K e^{−P(x)} / y1^2(x), then {y1, y2} is a fundamental set of solutions of (A) on (a, b).
Q5.1.2
In Exercises 5.1.10-5.1.23 use the method suggested by Exercise 5.1.9 to find a second solution y2 that isn't a constant multiple of the solution y1. Choose K conveniently to simplify y2.

10. y'' − 2y' − 3y = 0;   y1 = e^{3x}
11. y'' − 6y' + 9y = 0;   y1 = e^{3x}
12. y'' − 2a y' + a^2 y = 0 (a = constant);   y1 = e^{ax}
13. x^2 y'' + x y' − y = 0;   y1 = x
14. x^2 y'' − x y' + y = 0;   y1 = x
15. x^2 y'' − (2a − 1) x y' + a^2 y = 0 (a = nonzero constant), x > 0;   y1 = x^a
16. 4x^2 y'' − 4x y' + (3 − 16x^2) y = 0;   y1 = x^{1/2} e^{2x}
17. (x − 1) y'' − x y' + y = 0;   y1 = e^x
19. 4x^2 (sin x) y'' − 4x (x cos x + sin x) y' + (2x cos x + 3 sin x) y = 0;   y1 = x^{1/2}
21. (x^2 − 4) y'' + 4x y' + 2y = 0;   y1 = 1/(x − 2)
23. (x^2 − 2x) y'' + (2 − x^2) y' + (2x − 2) y = 0;   y1 = e^x
Q5.1.3
24. Suppose p and q are continuous on an open interval (a, b) and let x be in (a, b). Use Theorem 5.1.1 to show that the only 0
′′ ′
y + p(x)y + q(x)y = 0 (A)
z1 = α y1 + β y2 and z2 = γ y1 + δy2 ,
where α , β, γ, and δ are constants. Show that if {z 1, z2 } is a fundamental set of solutions of (A) on (a, b) then so is {y 1, y2 } .
27. Suppose p and q are continuous on (a, b) and {y 1, y2 } is a fundamental set of solutions of
′′ ′
y + p(x)y + q(x)y = 0 (A)
z1 = α y1 + β y2 and z2 = γ y1 + δy2 ,
where α, β, γ, and δ are constants. Show that {z1, z2} is a fundamental set of solutions of (A) on (a, b) if and only if αδ − βγ ≠ 0.
28. Suppose y is differentiable on an interval
1 (a, b) and y2 = ky1 , where k is a constant. Show that the Wronskian of
{ y , y } is identically zero on (a, b) .
1 2
29. Let
3
x , x ≥ 0,
3
y1 = x and y2 = {
3
−x , x < 0.
a. Show that the Wronskian of {y , y } is defined and identically zero on (−∞, ∞).
1 2
of an equation
′′ ′
y + p(x)y + q(x)y = 0
on (a, b). Show that if y 1 (x1 ) = y1 (x2 ) = 0 , where a < x 1 < x2 < b , then y 2 (x) =0 for some x in (x 1, x2 ) .
32. Suppose p and q are continuous on (a, b) and every solution of
′′ ′
y + p(x)y + q(x)y = 0 (A)
on (a, b) can be written as a linear combination of the twice differentiable functions {y 1, y2 } . Use Theorem 5.1.1 to show that
y and y are themselves solutions of (A) on (a, b) .
1 2
′′ ′ ′′ ′
y + p1 (x)y + q1 (x)y = 0 and y + p2 (x)y + q2 (x)y = 0
have the same solutions on (a, b). Show that p 1 = p2 and q 1 = q2 on (a, b).
34. (For this exercise you have to know about 3 ×3 determinants.) Show that if y and 1 y2 are twice continuously
differentiable on (a, b) and the Wronskian W of {y 1, y2 } has no zeros in (a, b) then the equation
∣ y y1 y2 ∣
1 ∣ ∣
′ ′ ′
∣ y y y ∣ =0
1 2
W ∣ ∣
′′ ′′ ′′
∣y y y ∣
1 2
can be written as
′′ ′
y + p(x)y + q(x)y = 0, (A)
where p and q are continuous on (a, b) and {y 1, y2 } is a fundamental set of solutions of (A) on (a, b).
35. Use the method suggested by Exercise 5.1.34 to find a linear homogeneous equation for which the given functions form a
fundamental set of solutions on some interval.
a. e cos 2x, e sin 2x
x x
b. x, e 2x
c. x, x ln x
d. cos(ln x), sin(ln x)
e. cosh x, sinh x
f. x − 1, x + 1
2 2
36. Suppose p and q are continuous on (a, b) and {y 1, y2 } is a fundamental set of solutions of
′′ ′
y + p(x)y + q(x)y = 0 (A)
on (a, b). Show that if y is a solution of (A) on (a, b), there’s exactly one way to choose c and c so that y = c 1 2 1 y1 + c2 y2 on
(a, b) .
37. Suppose p and q are continuous on (a, b) and x is in (a, b). Let y and y be the solutions of
0 1 2
such that
′ ′
y1 (x0 ) = 1, y (x0 ) = 0 and y2 (x0 ) = 0, y (x0 ) = 1.
1 2
(Theorem 5.1.1 implies that each of these initial value problems has a unique solution on (a, b).)
a. Show that {y , y } is linearly independent on (a, b).
1 2
b. Show that an arbitrary solution y of (A) on (a, b) can be written as y = y(x 0 )y1
′
+ y (x0 )y2 .
c. Express the solution of the initial value problem
′′ ′ ′
y + p(x)y + q(x)y = 0, y(x0 ) = k0 , y (x0 ) = k1
Then use Exercise 5.1.37 (c) to write the solution of the initial value problem
′′ ′
y = 0, y(0) = k0 , y (0) = k1
39. Let x be an arbitrary real number. Given (Example 5.1.1) that e and e
0
x −x
are solutions of y ′′
−y = 0 , find solutions y1
′ ′
y1 (x0 ) = 1, y (x0 ) = 0 and y2 (x0 ) = 0, y (x0 ) = 1.
1 2
Then use Exercise 5.1.37 (c) to write the solution of the initial value problem
′′ ′
y − y = 0, y(x0 ) = k0 , y (x0 ) = k1
′ ′
y1 (x0 ) = 1, y (x0 ) = 0 and y2 (x0 ) = 0, y (x0 ) = 1.
1 2
Then use Exercise 5.1.37 (c) to write the solution of the initial value problem
′′ 2 ′
y + ω y = 0, y(x0 ) = k0 , y (x0 ) = k1
41. Recall from Exercise 5.1.4 that 1/(x − 1) and 1/(x + 1) are solutions of
2 ′′ ′
(x − 1)y + 4x y + 2y = 0 (A)
Then use Exercise 5.1.37 (c) to write the solution of initial value problem
2 ′′ ′ ′
(x − 1)y + 4x y + 2y = 0, y(0) = k0 , y (0) = k1
42.
on (−∞, ∞) and that {y , y } is a fundamental set of solutions of (A) on (−∞, 0) and (0, ∞).
1 2
2 3
a1 x + a2 x , x ≥ 0,
y ={
2 3
b1 x + b2 x , x <0
is a solution of (A) on (−∞, ∞) if and only if a1 = b1 . From this, justify the statement that y is a solution of (A) on
(−∞, ∞) if and only if
2 3
c1 x + c2 x , x ≥ 0,
y ={
2 3
c1 x + c3 x , x < 0,
2 ′′ ′ ′
x y − 4x y + 6y = 0, y(0) = k0 , y (0) = k1
2 ′′ ′ ′
x y − 4x y + 6y = 0, y(x0 ) = k0 , y (x0 ) = k1 (B)
has infinitely many solutions on (−∞, ∞). On what interval does (B) have a unique solution?
43.
a. Verify that y 1 =x and y 2 =x
2
satisfy
2 ′′ ′
x y − 2x y + 2y = 0 (A)
on (−∞, ∞) and that {y , y } is a fundamental set of solutions of (A) on (−∞, 0) and (0, ∞).
1 2
2
a1 x + a2 x , x ≥ 0,
y ={
2
b1 x + b2 x , x <0
is a solution of (A) on (−∞, ∞) if and only if a = b and a = b . From this, justify the statement that the general
1 1 2 2
2 ′′ ′ ′
x y − 2x y + 2y = 0, y(0) = k0 , y (0) = k1
2 ′′ ′ ′
x y − 2x y + 2y = 0, y(x0 ) = k0 , y (x0 ) = k1
on (−∞, ∞), and that {y , y } is a fundamental set of solutions of (A) on (−∞, 0) and (0, ∞).
1 2
2 ′′ ′ ′
x y − 6x y + 12y = 0, y(0) = k0 , y (0) = k1
2 ′′ ′ ′
x y − 6x y + 12y = 0, y(x0 ) = k0 , y (x0 ) = k1 (B)
has infinitely many solutions on (−∞, ∞). On what interval does (B) have a unique solution?
where a , b , and c are constant (a ≠ 0 ). When you've completed this section you'll know everything there is to know about
solving such equations.
If a, b, and c are real constants and a ≠ 0 , then
′′ ′
ay + b y + cy = F (x)
is said to be a constant coefficient equation. In this section we consider the homogeneous constant coefficient equation
′′ ′
ay + b y + cy = 0. (5.2.2)
As we’ll see, all solutions of Equation 5.2.2 are defined on (−∞, ∞). This being the case, we’ll omit references to the
interval on which solutions are defined, or on which a given set of solutions is a fundamental set, etc., since the interval will
always be (−∞, ∞).
The key to solving Equation 5.2.2 is that if y = e^{rx} where r is a constant, then the left side of Equation 5.2.2 is a multiple of e^{rx}; in fact, since y' = r e^{rx} and y'' = r^2 e^{rx},

ay'' + by' + cy = ar^2 e^{rx} + br e^{rx} + c e^{rx} = (ar^2 + br + c) e^{rx}.     (5.2.3)

The quadratic polynomial

p(r) = ar^2 + br + c

is the characteristic polynomial of Equation 5.2.2, and p(r) = 0 is the characteristic equation. From Equation 5.2.3 we can see that y = e^{rx} is a solution of Equation 5.2.2 if and only if p(r) = 0.

The roots of the characteristic equation are given by the quadratic formula

r = (−b ± √(b^2 − 4ac)) / (2a).                                           (5.2.4)
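As an aside (not in the original text; NumPy assumed), Equation 5.2.4 is easy to evaluate numerically, and a complex square root covers all three cases treated below:

    import numpy as np

    def characteristic_roots(a, b, c):
        """Roots of a*r**2 + b*r + c = 0 via the quadratic formula (5.2.4)."""
        sqrt_disc = np.sqrt(complex(b*b - 4*a*c))   # complex sqrt handles b**2 - 4ac < 0
        return (-b + sqrt_disc) / (2*a), (-b - sqrt_disc) / (2*a)

    print(characteristic_roots(1, 6, 5))     # (-1+0j), (-5+0j): distinct real roots
    print(characteristic_roots(1, 6, 9))     # repeated root -3
    print(characteristic_roots(1, 4, 13))    # (-2+3j), (-2-3j): complex conjugate roots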
Example 5.2.1
a. Find the general solution of
y'' + 6y' + 5y = 0.                                                       (5.2.5)

b. Solve the initial value problem

y'' + 6y' + 5y = 0,   y(0) = 3,   y'(0) = −1.                             (5.2.6)

Solution

a. The characteristic polynomial of Equation 5.2.5 is

p(r) = r^2 + 6r + 5 = (r + 1)(r + 5),

so p(r) = 0 has the distinct real roots r1 = −1 and r2 = −5. Therefore the general solution of Equation 5.2.5 is

y = c1 e^{−x} + c2 e^{−5x}.

b. Imposing the initial conditions yields

c1 + c2 = 3
−c1 − 5c2 = −1.

The solution of this system is c1 = 7/2, c2 = −1/2. Therefore the solution of Equation 5.2.6 is

y = (7/2) e^{−x} − (1/2) e^{−5x}.

Figure 5.2.1: y = (7/2) e^{−x} − (1/2) e^{−5x}
Example 5.2.2
a. Find the general solution of
y'' + 6y' + 9y = 0.                                                       (5.2.9)

b. Solve the initial value problem

y'' + 6y' + 9y = 0,   y(0) = 3,   y'(0) = −1.                             (5.2.10)

Solution

a. The characteristic polynomial of Equation 5.2.9 is

p(r) = r^2 + 6r + 9 = (r + 3)^2,

so the characteristic equation has the repeated real root r1 = −3, and y1 = e^{−3x} is a solution of Equation 5.2.9. Since the characteristic equation has no other roots, Equation 5.2.9 has no other solutions of the form e^{rx}. We look for solutions of the form y = u y1 = u e^{−3x}, where u is a function that we'll now determine. (This should remind you of the method of variation of parameters used in Section 2.1 to solve the nonhomogeneous equation y' + p(x)y = f(x), given a solution y1 of the complementary equation y' + p(x)y = 0. It's also a special case of a method called reduction of order that we'll study in Section 5.6. For other ways to obtain a second solution of Equation 5.2.9 that's not a multiple of e^{−3x}, see the exercises.)

If y = u e^{−3x}, then y' = u' e^{−3x} − 3u e^{−3x} and y'' = u'' e^{−3x} − 6u' e^{−3x} + 9u e^{−3x}, so

y'' + 6y' + 9y = e^{−3x} [(u'' − 6u' + 9u) + 6(u' − 3u) + 9u]
              = e^{−3x} [u'' − (6 − 6)u' + (9 − 18 + 9)u] = u'' e^{−3x}.

Therefore y = u e^{−3x} is a solution of Equation 5.2.9 if and only if u'' = 0, which is equivalent to u = c1 + c2 x, where c1 and c2 are constants. Therefore any function of the form

y = e^{−3x} (c1 + c2 x)                                                   (5.2.11)

is a solution of Equation 5.2.9. Letting c1 = 1 and c2 = 0 here yields the solution y1 = e^{−3x} that we already knew. Letting c1 = 0 and c2 = 1 yields the second solution y2 = x e^{−3x}. Since y2/y1 = x is nonconstant, Theorem 5.1.6 implies that {y1, y2} is a fundamental set of solutions of Equation 5.2.9, and Equation 5.2.11 is the general solution.

b. Imposing the initial conditions y(0) = 3, y'(0) = −1 in Equation 5.2.11 and

y' = e^{−3x} (c2 − 3c1 − 3c2 x)                                           (5.2.12)

yields c1 = 3 and −3c1 + c2 = −1, so c2 = 8. Therefore the solution of Equation 5.2.10 is

y = e^{−3x} (3 + 8x).
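A quick symbolic check of this initial value problem (illustrative only; SymPy assumed):

    # Check the solution of Equation 5.2.10 with SymPy's dsolve.
    from sympy import symbols, Function, Eq, exp, dsolve, simplify

    x = symbols('x')
    y = Function('y')
    ode = Eq(y(x).diff(x, 2) + 6*y(x).diff(x) + 9*y(x), 0)
    sol = dsolve(ode, y(x), ics={y(0): 3, y(x).diff(x).subs(x, 0): -1})
    print(simplify(sol.rhs - exp(-3*x)*(3 + 8*x)))    # 0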
If the characteristic polynomial p(r) = ar^2 + br + c has a repeated real root r1, then it can be factored as

p(r) = a(r − r1)^2 = a(r^2 − 2r1 r + r1^2).

Therefore ay'' + by' + cy = 0 can be written as a(y'' − 2r1 y' + r1^2 y) = 0, which is equivalent to

y'' − 2r1 y' + r1^2 y = 0.                                                (5.2.13)

Since p(r1) = 0, y1 = e^{r1 x} is a solution of ay'' + by' + cy = 0, and therefore of Equation 5.2.13. Proceeding as in Example 5.2.2, we look for other solutions of Equation 5.2.13 of the form y = u e^{r1 x}; then

y' = u' e^{r1 x} + r1 u e^{r1 x}   and   y'' = u'' e^{r1 x} + 2r1 u' e^{r1 x} + r1^2 u e^{r1 x},

so

y'' − 2r1 y' + r1^2 y = e^{r1 x} [(u'' + 2r1 u' + r1^2 u) − 2r1 (u' + r1 u) + r1^2 u]
                     = e^{r1 x} [u'' + (2r1 − 2r1) u' + (r1^2 − 2r1^2 + r1^2) u] = u'' e^{r1 x}.

Therefore y = u e^{r1 x} is a solution of Equation 5.2.13 if and only if u'' = 0, which is equivalent to u = c1 + c2 x, where c1 and c2 are constants. Hence any function of the form

y = e^{r1 x} (c1 + c2 x)                                                  (5.2.14)

is a solution of Equation 5.2.13. Letting c1 = 1 and c2 = 0 yields the solution y1 = e^{r1 x}. Letting c1 = 0 and c2 = 1 yields the second solution y2 = x e^{r1 x}. Since y2/y1 = x is nonconstant, Theorem 5.1.6 implies that {y1, y2} is a fundamental set of solutions of Equation 5.2.13, and Equation 5.2.14 is the general solution.
Example 5.2.3
a. Find the general solution of
y'' + 4y' + 13y = 0.                                                      (5.2.15)

Solution:

a. The characteristic polynomial of Equation 5.2.15 is

p(r) = r^2 + 4r + 13 = r^2 + 4r + 4 + 9 = (r + 2)^2 + 9.

The roots of the characteristic equation are r1 = −2 + 3i and r2 = −2 − 3i. By analogy with Case 1, it is reasonable to expect that e^{(−2+3i)x} and e^{(−2−3i)x} are solutions of Equation 5.2.15; however, there are difficulties here, since you are probably not familiar with exponential functions with complex arguments, and even if you are, it is inconvenient to work with them, since they are complex-valued. We'll take a simpler approach, which we motivate as follows: the exponential notation suggests that

e^{(−2+3i)x} = e^{−2x} e^{3ix}   and   e^{(−2−3i)x} = e^{−2x} e^{−3ix},

so it is reasonable to look for solutions of Equation 5.2.15 of the form y = u e^{−2x}, where u is a function to be determined. If y = u e^{−2x}, then y' = u' e^{−2x} − 2u e^{−2x} and y'' = u'' e^{−2x} − 4u' e^{−2x} + 4u e^{−2x}, so

y'' + 4y' + 13y = e^{−2x} [(u'' − 4u' + 4u) + 4(u' − 2u) + 13u]
               = e^{−2x} [u'' − (4 − 4)u' + (4 − 8 + 13)u] = e^{−2x} (u'' + 9u).

Therefore y = u e^{−2x} is a solution of Equation 5.2.15 if and only if

u'' + 9u = 0.

From Example 5.1.2, the general solution of this equation is

u = c1 cos 3x + c2 sin 3x,

so any function of the form

y = e^{−2x} (c1 cos 3x + c2 sin 3x)                                       (5.2.17)

is a solution of Equation 5.2.15. Letting c1 = 1 and c2 = 0 yields the solution y1 = e^{−2x} cos 3x. Letting c1 = 0 and c2 = 1 yields the second solution y2 = e^{−2x} sin 3x. Since y2/y1 = tan 3x is nonconstant, Theorem 5.1.6 implies that {y1, y2} is a fundamental set of solutions of Equation 5.2.15, and Equation 5.2.17 is the general solution.

b. Imposing the condition y(0) = 2 in Equation 5.2.17 shows that c1 = 2. Differentiating Equation 5.2.17 yields

y' = −2e^{−2x} (c1 cos 3x + c2 sin 3x) + 3e^{−2x} (−c1 sin 3x + c2 cos 3x),

which we rewrite as

y' = e^{−2x} [(3c2 − 2c1) cos 3x − (3c1 + 2c2) sin 3x].
In general, if the characteristic equation of ay'' + by' + cy = 0 has complex roots, then by Equation 5.2.4 they are

r1 = λ + iω,   r2 = λ − iω,                                               (5.2.18)

with

λ = −b/(2a)   and   ω = √(4ac − b^2)/(2a).

Don't memorize these formulas. Just remember that r1 and r2 are of the form Equation 5.2.18, where λ is an arbitrary real number and ω is positive; λ and ω are the real and imaginary parts, respectively, of r1. Similarly, λ and −ω are the real and imaginary parts of r2. We say that r1 and r2 are complex conjugates, which means that they have the same real part and their imaginary parts have the same absolute values, but opposite signs.

As in Example 5.2.3, it is reasonable to expect that the solutions of ay'' + by' + cy = 0 are linear combinations of e^{(λ+iω)x} and e^{(λ−iω)x}; so even though we haven't defined e^{iωx} and e^{−iωx}, it is reasonable to expect that every linear combination of e^{(λ+iω)x} and e^{(λ−iω)x} can be written as y = u e^{λx}, where u depends upon x. To determine u we first observe that since r1 = λ + iω and r2 = λ − iω are the roots of the characteristic equation, p must be of the form

p(r) = a(r − r1)(r − r2)
     = a [(r − λ)^2 + ω^2]
     = a (r^2 − 2λr + λ^2 + ω^2).

Therefore ay'' + by' + cy = 0 can be written as

a [y'' − 2λy' + (λ^2 + ω^2) y] = 0,

which is equivalent to

y'' − 2λy' + (λ^2 + ω^2) y = 0.                                           (5.2.19)

If y = u e^{λx}, then y' = u' e^{λx} + λ u e^{λx} and y'' = u'' e^{λx} + 2λ u' e^{λx} + λ^2 u e^{λx}. Substituting these expressions into Equation 5.2.19 and dropping the common factor e^{λx} yields

(u'' + 2λu' + λ^2 u) − 2λ(u' + λu) + (λ^2 + ω^2)u = 0,

which simplifies to

u'' + ω^2 u = 0.

From Example 5.1.2, the general solution of this equation is u = c1 cos ωx + c2 sin ωx. Therefore any function of the form

y = e^{λx} (c1 cos ωx + c2 sin ωx)                                        (5.2.20)

is a solution of Equation 5.2.19. Letting c1 = 1 and c2 = 0 here yields the solution y1 = e^{λx} cos ωx. Letting c1 = 0 and c2 = 1 yields a second solution y2 = e^{λx} sin ωx. Since y2/y1 = tan ωx is nonconstant, Theorem 5.1.6 implies that {y1, y2} is a fundamental set of solutions of Equation 5.2.19, and Equation 5.2.20 is the general solution.
Summary
The next theorem summarizes the results of this section.
Theorem 5.2.1
Let p(r) = ar^2 + br + c be the characteristic polynomial of

ay'' + by' + cy = 0.                                                      (5.2.21)

Then:

a. If p(r) = 0 has distinct real roots r1 and r2, then the general solution of Equation 5.2.21 is

y = c1 e^{r1 x} + c2 e^{r2 x}.

b. If p(r) = 0 has a repeated root r1, then the general solution of Equation 5.2.21 is

y = e^{r1 x} (c1 + c2 x).

c. If p(r) = 0 has complex conjugate roots r1 = λ + iω and r2 = λ − iω (where ω > 0), then the general solution of Equation 5.2.21 is

y = e^{λx} (c1 cos ωx + c2 sin ωx).
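As a cross-check (not part of the original text; SymPy assumed), a computer algebra system reproduces the three forms in Theorem 5.2.1 for one representative equation of each type:

    from sympy import symbols, Function, dsolve

    x = symbols('x')
    y = Function('y')
    for a, b, c in [(1, 6, 5), (1, 6, 9), (1, 4, 13)]:   # distinct real, repeated, complex
        print(dsolve(a*y(x).diff(x, 2) + b*y(x).diff(x) + c*y(x), y(x)))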
2. y'' − 4y' + 5y = 0
3. y'' + 8y' + 7y = 0
4. y'' − 4y' + 4y = 0
5. y'' + 2y' + 10y = 0
6. y'' + 6y' + 10y = 0
7. y'' − 8y' + 16y = 0
8. y'' + y' = 0
9. y'' − 2y' + 3y = 0
10. y'' + 6y' + 13y = 0
11. 4y'' + 4y' + 10y = 0
12. 10y'' − 3y' − y = 0
Q5.2.2
In Exercises 5.2.13-5.2.17 solve the initial value problem.
13. y'' + 14y' + 50y = 0,   y(0) = 2,   y'(0) = −17
14. 6y'' − y' − y = 0,   y(0) = 10,   y'(0) = 0
15. 6y'' + y' − y = 0,   y(0) = −1,   y'(0) = 3
16. 4y'' − 4y' − 3y = 0,   y(0) = 13/12,   y'(0) = 23/24
17. 4y'' − 12y' + 9y = 0,   y(0) = 3,   y'(0) = 5
Q5.2.3
In Exercises 5.2.18-5.2.21 solve the initial value problem and graph the solution.
18. y'' + 7y' + 12y = 0,   y(0) = −1,   y'(0) = 0
19. y'' − 6y' + 9y = 0,   y(0) = 0,   y'(0) = 2
20. 36y'' − 12y' + y = 0,   y(0) = 3,   y'(0) = 5
21. y'' + 4y' + 10y = 0,   y(0) = 3,   y'(0) = −2
Q5.2.4
22.
a. Suppose y is a solution of the constant coefficient homogeneous equation
′′ ′
ay + b y + cy = 0. (A)
′′ ′
az + b z + cz = 0.
where the initial conditions are imposed at x 0 =0 . However, if the initial value problem is
′′ ′ ′
ay + b y + cy = 0, y(x0 ) = k0 , y (x0 ) = k1 , (B)
(whichever is applicable) is more complicated. Use (b) to restate Theorem 5.2.1 in a form more convenient for solving (B).
Q5.2.5
In Exercises 5.2.23-5.2.28 use a method suggested by Exercise 5.2.22 to solve the initial value problem.
23. y'' + 3y' + 2y = 0,   y(1) = −1,   y'(1) = 4
24. y'' − 6y' − 7y = 0,   y(2) = −1/3,   y'(2) = −5
25. y'' − 14y' + 49y = 0,   y(1) = 2,   y'(1) = 11
26. 9y'' + 6y' + y = 0,   y(2) = 2,   y'(2) = −14
27. 9y'' + 4y = 0,   y(π/4) = 2,   y'(π/4) = −2
28. y'' + 3y = 0,   y(π/3) = 2,   y'(π/3) = −1
Q5.2.6
29. Prove: If the characteristic equation of
′′ ′
ay + b y + cy = 0 (A)
has a repeated negative root or two roots with negative real parts, then every solution of (A) approaches zero as x → ∞ .
30. Suppose the characteristic polynomial of ay + by ′′ ′
+ cy = 0 has distinct real roots r and r . Use a method suggested by
1 2
e^u = Σ_{n=0}^{∞} u^n/n! = 1 + u/1! + u^2/2! + u^3/3! + ⋯ + u^n/n! + ⋯                        (A)

cos u = Σ_{n=0}^{∞} (−1)^n u^{2n}/(2n)! = 1 − u^2/2! + u^4/4! + ⋯ + (−1)^n u^{2n}/(2n)! + ⋯,    (B)

and

sin u = Σ_{n=0}^{∞} (−1)^n u^{2n+1}/(2n+1)! = u − u^3/3! + u^5/5! + ⋯ + (−1)^n u^{2n+1}/(2n+1)! + ⋯   (C)

for all real values of u. Even though you have previously considered (A) only for real values of u, we can set u = iθ, where θ is real, to obtain

e^{iθ} = Σ_{n=0}^{∞} (iθ)^n/n!.                                           (D)
Given the proper background in the theory of infinite series with complex terms, it can be shown that the series in (D)
converges for all real θ .
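The partial sums of (D) can also be checked numerically; the sketch below (standard library only, not part of the original exercise) compares a truncated sum with cos θ + i sin θ:

    import cmath, math

    theta = 0.7
    partial = sum((1j*theta)**n / math.factorial(n) for n in range(25))
    print(partial)                                    # approximately 0.7648 + 0.6442j
    print(complex(math.cos(theta), math.sin(theta)))  # same value
    print(cmath.exp(1j*theta))                        # cmath.exp agrees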
a. Recalling that i2
= −1, write enough terms of the sequence {i n
} to convince yourself that the sequence is repetitive:
collect the real part (the terms not multiplied by i) and the imaginary part (the terms multiplied by i) on the right, and use
the trigonometric identities
cos(θ1 + θ2 ) = cos θ1 cos θ2 − sin θ1 sin θ2
to verify that
i( θ1 +θ2 ) iθ1 iθ2
e =e e ,
d. Let a , b , and c be real numbers, with a ≠0 . Let z = u + iv where u and v are real-valued functions of x. Then we say
that z is a solution of
′′ ′
ay + b y + cy = 0 (G)
if u and v are both solutions of (G). Use Theorem 5.2.1 (c) to verify that if the characteristic equation of (G) has complex
conjugate roots λ ± iω then z = e 1 and z = e
(λ+iω)x
are both solutions of (G).
2
(λ−iω)x
where the forcing function f isn’t identically zero. The next theorem, an extension of Theorem 5.1.1, gives sufficient
conditions for existence and uniqueness of solutions of initial value problems for Equation 5.3.1. We omit the proof, which is
beyond the scope of this book.
To find the general solution of Equation 5.3.1 on an interval (a, b) where p, q, and f are continuous, it is necessary to find the
general solution of the associated homogeneous equation
′′ ′
y + p(x)y + q(x)y = 0 (5.3.2)
on (a, b). We call Equation 5.3.2 the complementary equation for Equation 5.3.1.
The next theorem shows how to find the general solution of Equation 5.3.1 if we know one solution y of Equation 5.3.1 and p
a fundamental set of solutions of Equation 5.3.2. We call y a particular solution of Equation 5.3.1 ; it can be any solution
p
Theorem 5.3.2
Suppose p, q, and f are continuous on (a, b). Let y be a particular solution of p
′′ ′
y + p(x)y + q(x)y = f (x) (5.3.3)
on (a, b), and let {y 1, y2 } be a fundamental set of solutions of the complementary equation
′′ ′
y + p(x)y + q(x)y = 0 (5.3.4)
y = yp + c1 y1 + c2 y2 , (5.3.5)
Proof
We first show that y in Equation 5.3.5 is a solution of Equation 5.3.3 for any choice of the constants c1 and c2 .
Differentiating Equation 5.3.5twice yields
′ ′ ′ ′ ′′ ′′ ′′ ′′
y = yp + c1 y + c2 y and y = yp + c1 y + c2 y ,
1 2 1 2
so
′′ ′ ′′ ′′ ′′ ′ ′ ′
y + p(x)y + q(x)y = (yp + c1 y + c2 y ) + p(x)(yp + c1 y + c2 y ) + q(x)(yp + c1 y1 + c2 y2 )
1 2 1 2
′′ ′ ′′ ′ ′′ ′
= (yp + p(x)yp + q(x)yp ) + c1 (y + p(x)y + q(x)y1 ) + c2 (y + p(x)y + q(x)y2 )
1 1 2 2
= f + c1 ⋅ 0 + c2 ⋅ 0 = f ,
and c . Suppose y is a solution of Equation 5.3.3. We’ll show that y − y is a solution of Equation 5.3.4, and therefore
2 p
′′ ′ ′′ ′′ ′ ′
(y − yp ) + p(x)(y − yp ) + q(x)(y − yp ) = (y − yp ) + p(x)(y − yp ) + q(x)(y − yp )
′′ ′ ′′ ′
= (y + p(x)y + q(x)y) − (yp + p(x)yp + q(x)yp )
= f (x) − f (x) = 0,
If P , P , and F are continuous and P has no zeros on (a, b), then Theorem 5.3.2 implies that the general solution of
0 1 0
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F (x) (5.3.6)
To avoid awkward wording in examples and exercises, we will not specify the interval (a, b) when we ask for the general
solution of a specific linear second order equation, or for a fundamental set of solutions of a homogeneous linear second
order equation. Let’s agree that this always means that we want the general solution (or a fundamental set of solutions, as
the case may be) on every open interval on which p, q, and f are continuous if the equation is of the form Equation 5.3.3,
or on which P , P , P , and F are continuous and P has no zeros, if the equation is of the form Equation 5.3.6. We
0 1 2 0
For completeness, we point out that if P , P , P , and F are all continuous on an open interval (a, b), but P does have a zero
0 1 2 0
in (a, b), then Equation 5.3.6 may fail to have a general solution on (a, b) in the sense just defined. Exercises 5.1.42-5.1.44
illustrate this point for a homogeneous equation.
In this section we limit ourselves to applications of Theorem 5.3.2 where we can guess at the form of the particular solution.
Example 5.3.1
a. Find the general solution of
y'' + y = 1.                                                              (5.3.7)

Solution

a. We can apply Theorem 5.3.2 with (a, b) = (−∞, ∞), since the functions p ≡ 0, q ≡ 1, and f ≡ 1 in Equation 5.3.7 are continuous on (−∞, ∞). By inspection we see that yp ≡ 1 is a particular solution of Equation 5.3.7. Since y1 = cos x and y2 = sin x form a fundamental set of solutions of the complementary equation y'' + y = 0, the general solution of Equation 5.3.7 is

y = 1 + c1 cos x + c2 sin x.

b. Imposing the initial conditions in this general solution and its derivative

y' = −c1 sin x + c2 cos x

yields c1 = 1 and c2 = 7, so the solution of the initial value problem is

y = 1 + cos x + 7 sin x.
Example 5.3.2
a. Find the general solution of
y'' − 2y' + y = −3 − x + x^2.                                             (5.3.10)

Solution

a. The characteristic polynomial of the complementary equation

y'' − 2y' + y = 0

is p(r) = r^2 − 2r + 1 = (r − 1)^2, so y1 = e^x and y2 = x e^x form a fundamental set of solutions of the complementary equation.

To guess a form for a particular solution of Equation 5.3.10, we note that substituting a second degree polynomial yp = A + Bx + Cx^2 into the left side of Equation 5.3.10 will produce another second degree polynomial with coefficients that depend upon A, B, and C. The trick is to choose A, B, and C so the polynomials on the two sides of Equation 5.3.10 have the same coefficients; thus, if

yp = A + Bx + Cx^2   then   yp' = B + 2Cx   and   yp'' = 2C,

so

yp'' − 2yp' + yp = 2C − 2(B + 2Cx) + (A + Bx + Cx^2)
                = (2C − 2B + A) + (−4C + B)x + Cx^2 = −3 − x + x^2.

Equating coefficients of like powers of x on the two sides of the last equality yields

           C = 1
      B − 4C = −1
A − 2B + 2C = −3,

so C = 1, B = −1 + 4C = 3, and A = −3 − 2C + 2B = 1. Therefore yp = 1 + 3x + x^2 is a particular solution of Equation 5.3.10 and Theorem 5.3.2 implies that

y = 1 + 3x + x^2 + e^x (c1 + c2 x)                                        (5.3.12)

is the general solution of Equation 5.3.10.

b. Imposing the initial conditions in Equation 5.3.12 and its derivative yields the solution of the initial value problem,

y = 1 + 3x + x^2 − e^x (3 − x).

Figure 5.3.2: y = 1 + 3x + x^2 − e^x (3 − x)
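A one-line symbolic check of the particular solution found above (illustrative only; SymPy assumed):

    from sympy import symbols, simplify

    x = symbols('x')
    yp = 1 + 3*x + x**2
    residual = yp.diff(x, 2) - 2*yp.diff(x) + yp - (-3 - x + x**2)
    print(simplify(residual))    # 0, so yp satisfies Equation 5.3.10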
Example 5.3.3
Find the general solution of
x^2 y'' + x y' − 4y = 2x^4                                                (5.3.13)

on (−∞, 0) and (0, ∞).

Solution

To find a particular solution of Equation 5.3.13, we note that if yp = Ax^4, where A is a constant, then both sides of Equation 5.3.13 will be constant multiples of x^4 and we may be able to choose A so the two sides are equal.

Theorem 5.3.3 (The Principle of Superposition)

Suppose yp1 is a particular solution of

y'' + p(x)y' + q(x)y = f1(x)

on (a, b) and yp2 is a particular solution of

y'' + p(x)y' + q(x)y = f2(x)

on (a, b). Then

yp = yp1 + yp2

is a particular solution of

y'' + p(x)y' + q(x)y = f1(x) + f2(x)

on (a, b).
Proof
If yp = yp1 + yp2, then

yp'' + p(x)yp' + q(x)yp = (yp1 + yp2)'' + p(x)(yp1 + yp2)' + q(x)(yp1 + yp2)
                       = (yp1'' + p(x)yp1' + q(x)yp1) + (yp2'' + p(x)yp2' + q(x)yp2)
                       = f1(x) + f2(x).
where
f = f1 + f2 + ⋯ + fk ;
′′ ′
y + p(x)y + q(x)y = fi (x)
on (a, b) for i = 1 , 2, …, k , then y + y + ⋯ + y is a particular solution of Equation 5.3.14 on (a, b). Moreover, by a
p1 p2 pk
proof similar to the proof of Theorem 5.3.3 we can formulate the principle of superposition in terms of a linear equation
written in the form
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F (x)
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F1 (x)
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F2 (x)
on (a, b).
Example 5.3.4
The function yp1 = x^4/15 is a particular solution of

x^2 y'' + 4x y' + 2y = 2x^4                                               (5.3.15)

on (−∞, ∞) and yp2 = x^2/3 is a particular solution of

x^2 y'' + 4x y' + 2y = 4x^2                                               (5.3.16)

on (−∞, ∞). Use the principle of superposition to find a particular solution of

x^2 y'' + 4x y' + 2y = 2x^4 + 4x^2                                        (5.3.17)

on (−∞, ∞).

Solution

The right side F(x) = 2x^4 + 4x^2 in Equation 5.3.17 is the sum of the right sides

F1(x) = 2x^4   and   F2(x) = 4x^2

in Equation 5.3.15 and Equation 5.3.16. Therefore the principle of superposition implies that

yp = yp1 + yp2 = x^4/15 + x^2/3

is a particular solution of Equation 5.3.17.
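Checking the superposition result symbolically (a minimal sketch, assuming SymPy is available):

    from sympy import symbols, simplify

    x = symbols('x')
    yp = x**4/15 + x**2/3
    lhs = x**2*yp.diff(x, 2) + 4*x*yp.diff(x) + 2*yp
    print(simplify(lhs - (2*x**4 + 4*x**2)))    # 0, so yp solves Equation 5.3.17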
2. y'' − 4y' + 5y = 1 + 5x
3. y'' + 8y' + 7y = −8 − x + 24x^2 + 7x^3
4. y'' − 4y' + 4y = 2 + 8x − 4x^2
5. y'' + 2y' + 10y = 4 + 26x + 6x^2 + 10x^3,   y(0) = 2,   y'(0) = 9
6. y'' + 6y' + 10y = 22 + 20x,   y(0) = 2,   y'(0) = −2
Q5.3.2
7. Show that the method used in Example 5.3.2 will not yield a particular solution of
′′ ′ 2
y +y = 1 + 2x + x ; (A)
Q5.3.3
In Exercises 5.3.8-5.3.13 find a particular solution by the method used in Example 5.3.3.
8. x^2 y'' + 7x y' + 8y = 6
9. x^2 y'' − 7x y' + 7y = 13x^{1/2}
10. x^2 y'' − x y' + y = 2x^3
11. x^2 y'' + 5x y' + 4y = 1/x^3
12. x^2 y'' + x y' + y = 10x^{1/3}
13. x^2 y'' − 3x y' + 13y = 2x^4
Q5.3.4
14. Show that the method suggested for finding a particular solution in Exercises 5.3.8-5.3.13 will not yield a particular
solution of
2 ′′ ′
1
x y + 3x y − 3y = ; (A)
3
x
Q5.3.5
If a, b, c, and α are constants, then

a(e^{αx})'' + b(e^{αx})' + c e^{αx} = (aα^2 + bα + c) e^{αx}.             (5.3E.2)
Use this in Exercises 5.3.16-5.3.21 to find a particular solution. Then find the general solution and, where indicated, solve the
initial value problem and graph the solution.
17. y'' − 4y' + 5y = e^{2x}
18. y'' + 8y' + 7y = 10e^{−2x},   y(0) = −2,   y'(0) = 10
19. y'' − 4y' + 4y = e^x,   y(0) = 2,   y'(0) = 0
20. y'' + 2y' + 10y = e^{x/2}
21. y'' + 6y' + 10y = e^{−3x}
Q5.3.6
22. Show that the method suggested for finding a particular solution in Exercises 5.3.16-5.3.21 will not yield a particular
solution of
′′ ′ 4x
y − 7 y + 12y = 5 e ; (A)
Q5.3.7
If ω is a constant, differentiating a linear combination of cos ωx and sin ωx with respect to x yields another linear combination
of cos ωx and sin ωx. In Exercises 5.3.24-5.3.29 use this to find a particular solution of the equation. Then find the general
solution and, where indicated, solve the initial value problem and graph the solution.
24. y ′′ ′
− 8 y + 16y = 23 cos x − 7 sin x
25. y ′′
+y
′
= −8 cos 2x + 6 sin 2x
26. y ′′ ′
− 2 y + 3y = −6 cos 3x + 6 sin 3x
27. y ′′ ′
+ 6 y + 13y = 18 cos x + 6 sin x
28. y ′′ ′
+ 7 y + 12y = −2 cos 2x + 36 sin 2x, y(0) = −3,
′
y (0) = 3
29. y ′′ ′
− 6 y + 9y = 18 cos 3x + 18 sin 3x, y(0) = 2,
′
y (0) = 2
Q5.3.8
30. Find the general solution of
y'' + ω0^2 y = M cos ωx + N sin ωx,                                       (5.3E.4)

where M and N are constants and ω and ω0 are distinct positive numbers.
31. Show that the method suggested for finding a particular solution in Exercises 5.3.24-5.3.29 will not yield a particular
solution of
′′
y + y = cos x + sin x; (A)
that is, (A) does not have a particular solution of the form y p = A cos x + B sin x .
32. Prove: If M , N are constants (not both zero) and ω > 0 , the constant coefficient equation
′′ ′
ay + b y + cy = M cos ωx + N sin ωx (A)
has a particular solution that's a linear combination of cos ωx and sin ωx if and only if the left side of (A) is not of the form a(y'' + ω^2 y); in that case cos ωx and sin ωx are solutions of the complementary equation.
Q5.3.9
Q5.3.10
39. Prove: If y is a particular solution of
p
1
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F1 (x) (5.3E.5)
on (a, b) , then y p = yp
1
+ yp
2
is a solution of
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F1 (x) + F2 (x) (5.3E.7)
on (a, b) .
40. Suppose p , q, and f are continuous on (a, b) . Let y1 , y2 , and yp be twice differentiable on (a, b) , such that
y = c1 y1 + c2 y2 + yp is a solution of
′′ ′
y + p(x)y + q(x)y = f (5.3E.8)
on (a, b) for every choice of the constants c 1, c2 . Show that y and y are solutions of the complementary equation on (a, b) .
1 2
′′ ′
ay + b y + cy = 0.
In Section 5.2 we showed how to find {y , y }. In this section we’ll show how to find y . The procedure that we’ll use is
1 2 p
called the method of undetermined coefficients. Our first example is similar to Exercises 5.3.16-5.3.21.
Example 5.4.1 :
Find a particular solution of
y'' − 7y' + 12y = 4e^{2x}.                                                (5.4.2)

Solution

Any function of the form yp = Ae^{2x}, where A is a constant, has derivatives that are constant multiples of e^{2x}. Since the right side of Equation 5.4.2 is also a constant multiple of e^{2x}, it may be possible to choose A so that yp is a solution of Equation 5.4.2. Let's try it; if yp = Ae^{2x} then

yp'' − 7yp' + 12yp = 4Ae^{2x} − 14Ae^{2x} + 12Ae^{2x} = 2Ae^{2x} = 4e^{2x}

if A = 2; therefore yp = 2e^{2x} is a particular solution of Equation 5.4.2. To find the general solution, we note that the characteristic polynomial of the complementary equation

y'' − 7y' + 12y = 0                                                       (5.4.3)

is p(r) = r^2 − 7r + 12 = (r − 3)(r − 4), so {e^{3x}, e^{4x}} is a fundamental set of solutions of Equation 5.4.3. Therefore the general solution of Equation 5.4.2 is

y = 2e^{2x} + c1 e^{3x} + c2 e^{4x}.
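The coefficient A can also be found mechanically; a minimal sketch (SymPy assumed):

    from sympy import symbols, exp, Eq, solve, cancel

    x, A = symbols('x A')
    yp = A*exp(2*x)
    lhs = yp.diff(x, 2) - 7*yp.diff(x) + 12*yp       # equals 2*A*exp(2*x)
    print(solve(Eq(cancel(lhs/exp(2*x)), 4), A))     # [2], so yp = 2*exp(2*x)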
Example 5.4.2
Find a particular solution of
′′ ′ 4x
y − 7 y + 12y = 5 e . (5.4.4)
side of Equation 5.4.2 is a constant multiple of e — it may seem reasonable to try y = Ae as a particular solution of
2x
p
4x
Equation 5.4.4. However, this will not work, since we saw in Example 5.4.1 that e is a solution of the complementary 4x
equation Equation 5.4.3, so substituting y = Ae into the left side of Equation 5.4.4) produces zero on the left, no
p
4x
matter how we chooseA . To discover a suitable form for y , we use the same approach that we used in Section 5.2 to find
p
a second solution of
′′ ′
ay + b y + cy = 0
4x ′ ′ 4x 4x ′′ ′′ 4x ′ 4x 4x
y = ue , y =u e + 4u e , and y =u e + 8u e + 16u e (5.4.5)
or
′′ ′
u + u = 5.
Example 5.4.3
Find a particular solution of
′′ ′ 4x
y − 8 y + 16y = 2 e . (5.4.6)
Solution
Since the characteristic polynomial of the complementary equation
′′ ′
y − 8 y + 16y = 0 (5.4.7)
5.4.6) does not have a solution of the form y = Ae or y = Ax e . As in Example 5.4.2, we look for solutions of
4x 4x
p p
Equation 5.4.6 in the form y = ue , where u is a function to be determined. Substituting from Equation 5.4.5 into
4x
′′ ′ ′
(u + 8 u + 16u) − 8(u + 4u) + 16u = 2,
or
′′
u = 2.
Integrating twice and taking the constants of integration to be zero shows that up = x
2
is a particular solution of this
equation, so y = x e is a particular solution of Equation 5.4.4. Therefore
p
2 4x
4x 2
y =e (x + c1 + c2 x)
The preceding examples illustrate the following facts concerning the form of a particular solution yp of a constant coefficient equation

ay'' + by' + cy = ke^{αx},

where k is a constant:

a. If e^{αx} isn't a solution of the complementary equation

ay'' + by' + cy = 0,                                                      (5.4.8)

then yp = Ae^{αx}, where A is a constant. (See Example 5.4.1.)
b. If e^{αx} is a solution of Equation 5.4.8 but xe^{αx} is not, then yp = Axe^{αx}, where A is a constant. (See Example 5.4.2.)
c. If both e^{αx} and xe^{αx} are solutions of Equation 5.4.8, then yp = Ax^2 e^{αx}, where A is a constant. (See Example 5.4.3.)

In all three cases you can just substitute the appropriate form for yp and its derivatives into

ayp'' + byp' + cyp = ke^{αx},

and solve for the constant A, as we did in Example 5.4.1. (See Exercises 5.4.31-5.4.33.) However, if the equation is

ay'' + by' + cy = ke^{αx} G(x),

where G is a polynomial of degree greater than zero, we recommend that you use the substitution y = ue^{αx} as we did in Examples 5.4.2 and 5.4.3. The equation for u will turn out to be

au'' + p'(α)u' + p(α)u = G(x),                                            (5.4.9)

where p(r) = ar^2 + br + c is the characteristic polynomial of the complementary equation and p'(r) = 2ar + b (Exercise 5.4.30); however, you shouldn't memorize this since it is easy to derive the equation for u in any particular case. Note, however, that if e^{αx} is a solution of the complementary equation then p(α) = 0, so Equation 5.4.9 reduces to

au'' + p'(α)u' = G(x),

and if, in addition, xe^{αx} is a solution of the complementary equation, then p'(α) = 0 as well and Equation 5.4.9 reduces to

au'' = G(x).
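The coefficients in Equation 5.4.9 are just a, p'(α), and p(α); the sketch below (not part of the original text; SymPy assumed, using the numbers from Example 5.4.4 that follows) is illustrative only:

    from sympy import symbols

    r = symbols('r')
    a, b, c = 1, -3, 2                 # from y'' - 3y' + 2y = e^{3x}(-1 + 2x + x^2)
    p = a*r**2 + b*r + c               # characteristic polynomial
    alpha = 3
    print(a, p.diff(r).subs(r, alpha), p.subs(r, alpha))
    # 1 3 2  ->  u'' + 3u' + 2u = G(x), which is Equation 5.4.11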
Example 5.4.4
Find a particular solution of
y'' − 3y' + 2y = e^{3x} (−1 + 2x + x^2).                                  (5.4.10)

Solution

Substituting

y = ue^{3x},   y' = u'e^{3x} + 3ue^{3x},   and   y'' = u''e^{3x} + 6u'e^{3x} + 9ue^{3x}

into Equation 5.4.10 and dividing through by e^{3x} yields

(u'' + 6u' + 9u) − 3(u' + 3u) + 2u = −1 + 2x + x^2,

or

u'' + 3u' + 2u = −1 + 2x + x^2.                                           (5.4.11)

As in Example 5.3.2, in order to guess a form for a particular solution of Equation 5.4.11, we note that substituting a second degree polynomial up = A + Bx + Cx^2 for u in the left side of Equation 5.4.11 produces another second degree polynomial with coefficients that depend upon A, B, and C; thus,

up'' + 3up' + 2up = 2C + 3(B + 2Cx) + 2(A + Bx + Cx^2)
                 = (2C + 3B + 2A) + (6C + 2B)x + 2Cx^2 = −1 + 2x + x^2.

Equating coefficients of like powers of x on the two sides of the last equality yields

          2C = 1
     2B + 6C = 2
2A + 3B + 2C = −1.

Solving these equations for C, B, and A (in that order) yields C = 1/2, B = −1/2, A = −1/4. Therefore

up = −(1/4)(1 + 2x − 2x^2)

is a particular solution of Equation 5.4.11, and yp = up e^{3x} = −(e^{3x}/4)(1 + 2x − 2x^2) is a particular solution of Equation 5.4.10.
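The same coefficients drop out of a linear solve (sketch only; SymPy assumed):

    from sympy import symbols, linsolve

    A, B, C = symbols('A B C')
    eqs = [2*C - 1, 2*B + 6*C - 2, 2*A + 3*B + 2*C + 1]
    print(linsolve(eqs, A, B, C))    # {(-1/4, -1/2, 1/2)}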
Example 5.4.5
Find a particular solution of
′′ ′ 3x 2
y − 4 y + 3y = e (6 + 8x + 12 x ). (5.4.12)
Solution
Substituting
3x ′ ′ 3x 3x ′′ ′′ 3x ′ 3x 3x
y = ue , y =u e + 3u e , and y =u e + 6u e + 9u e
or
′′ ′ 2
u + 2 u = 6 + 8x + 12 x . (5.4.13)
There’s no u term in this equation, since e is a solution of the complementary equation for Equation 5.4.12). (See
3x
Exercise 5.4.30.) Therefore Equation 5.4.13) does not have a particular solution of the form u = A + Bx + C x that p
2
′′ ′
up + 2 up = 2C + (B + 2C x)
can’t contain the last term (12x ) on the right side of Equation
2
5.4.13 ). Instead, let’s try u p = Ax + Bx
2
+ Cx
3
on the
grounds that
′ 2 ′′
up = A + 2Bx + 3C x and up = 2B + 6C x
together contain all the powers of x that appear on the right side of Equation 5.4.13).
Substituting these expressions in place of u and u in Equation 5.4.13) yields
′ ′′
2 2 2
(2B + 6C x) + 2(A + 2Bx + 3C x ) = (2B + 2A) + (6C + 4B)x + 6C x = 6 + 8x + 12 x .
Comparing coefficients of like powers of x on the two sides of the last equality shows that u satisfies Equation 5.4.13) if p
6C = 12
4B + 6C =8
2A + 2B = 6.
Example 5.4.6
Solution
Substituting
−x/2 ′ ′ −x/2
1 −x/2 ′′ ′′ −x/2 ′ −x/2
1 −x/2
y = ue , y =u e − ue , and y =u e −u e + ue
2 4
or
′′ 2
u = −2 + 12x + 36 x , (5.4.15)
Exercise 5.4.30.) To obtain a particular solution of Equation 5.4.15) we integrate twice, taking the constants of integration
to be zero; thus,
′ 2 3 2 3 4 2 2
up = −2x + 6 x + 12 x and up = −x + 2x + 3x = x (−1 + 2x + 3 x ).
Therefore
−x/2 2 −x/2 2
yp = up e =x e (−1 + 2x + 3 x )
Summary
The preceding examples illustrate the following facts concerning particular solutions of a constant coefficent equation of the
form
′′ ′ αx
ay + b y + cy = e G(x),
then y = e Q(x), where Q is a polynomial of the same degree as G. (See Example 5.4.4).
p
αx
b. If e is a solution of Equation 5.4.16 but xe is not, then y = x e Q(x), where Q is a polynomial of the same degree
αx αx
p
αx
′′ ′ αx
ayp + b yp + c yp = e G(x),
and solve for the coefficients of the polynomial Q. However, if you try this you will see that the computations are more
tedious than those that you encounter by making the substitution y = ue and finding a particular solution of the resulting αx
equation for u. (See Exercises 5.4.34-5.4.36.) In Case (a) the equation for u will be of the form
′′ ′ ′
au + p (α)u + p(α)u = G(x),
with a particular solution of the form u = Q(x) , a polynomial of the same degree as G, whose coefficients can be found by
p
the method used in Example 5.4.4. In Case (b) the equation for u will be of the form
′′ ′ ′
au + p (α)u = G(x)
whose coefficents can be found by the method used in Example 5.4.5. In Case (c), the equation for u will be of the form
′′
au = G(x)
with a particular solution of the form u = x Q(x) that can be obtained by integrating G(x)/a twice and taking the constants
p
2
Example 5.4.7
Find a particular solution of
′′ ′ 2x 4x
y − 7 y + 12y = 4 e + 5e . (5.4.17)
Solution
In Example 5.4.1 we found that y p1 = 2e
2x
is a particular solution of
′′ ′ 2x
y − 7 y + 12y = 4 e ,
2. y ′′
− 6 y + 5y = e
′ −3x
(35 − 8x)
3. y ′′ ′
− 2 y − 3y = e (−8 + 3x)
x
4. y ′′
+ 2y + y = e
′ 2x
(−7 − 15x + 9 x )
2
5. y ′′
+ 4y = e
−x
(7 − 4x + 5 x )
2
6. y ′′ ′
− y − 2y = e (9 + 2x − 4 x )
x 2
7. y ′′ ′
− 4 y − 5y = −6x e
−x
8. y ′′ ′
− 3 y + 2y = e (3 − 4x)
x
9. y ′′ ′
+ y − 12y = e
3x
(−6 + 7x)
10. 2y ′′ ′
− 3 y − 2y = e
2x
(−6 + 10x)
11. y ′′
+ 2y + y = e
′ −x
(2 + 3x)
12. y ′′ ′
− 2 y + y = e (1 − 6x)
x
13. y ′′
− 4 y + 4y = e
′ 2x
(1 − 3x + 6 x )
2
14. 9y ′′
+ 6y + y = e
′ −x/3
(2 − 4x + 4 x )
2
Q5.4.2
In Exercises 5.4.15-5.4.19 find the general solution.
15. y ′′
− 3 y + 2y = e
′ 3x
(1 + x)
16. y ′′ ′
− 6 y + 8y = e (11 − 6x)
x
17. y ′′
+ 6 y + 9y = e
′ 2x
(3 − 5x)
18. y ′′ ′
+ 2 y − 3y = −16x e
x
19. y ′′ ′
− 2 y + y = e (2 − 12x)
x
Q5.4.3
In Exercises 5.4.20-5.4.23 solve the initial value problem and plot the solution.
20. y ′′ ′
− 4 y − 5y = 9 e
2x
(1 + x), y(0) = 0,
′
y (0) = −10
21. y ′′
+ 3 y − 4y = e
′ 2x
(7 + 6x), y(0) = 2,
′
y (0) = 8
22. y ′′ ′
+ 4 y + 3y = −e
−x
(2 + 8x), y(0) = 1,
′
y (0) = 2
23. y ′′ ′
− 3 y − 10y = 7 e
−2x
, y(0) = 1,
′
y (0) = −17
Q5.4.4
In Exercises 5.4.24-5.4.29 use the principle of superposition to find a particular solution.
24. y ′′ ′
+ y + y = xe
x
+e
−x
(1 + 2x)
25. y ′′ ′
− 7 y + 12y = −e (17 − 42x) − e
x 3x
26. y ′′ ′
− 8 y + 16y = 6x e
4x
+ 2 + 16x + 16 x
2
27. y ′′ ′
− 3 y + 2y = −e
2x
(3 + 4x) − e
x
29. y ′′
+y = e
−x
(2 − 4x + 2 x ) + e
2 3x
(8 − 12x − 10 x )
2
Q5.4.5
30.
a. Prove that y is a solution of the constant coefficient equation
′′ ′ αx
ay + b y + cy = e G(x) (A)
if and only if y = ue αx
, where u satisfies
′′ ′ ′
au + p (α)u + p(α)u = G(x) (B)
and p(r) = ar 2
+ br + c is the characteristic polynomial of the complementary equation
′′ ′
ay + b y + cy = 0.
For the rest of this exercise, let G be a polynomial. Give the requested proofs for the case where
2 3
G(x) = g0 + g1 x + g2 x + g3 x .
b. Prove that if e isn’t a solution of the complementary equation then (B) has a particular solution of the form u = A(x) ,
αx
p
where A is a polynomial of the same degree as G, as in Example 5.4.4. Conclude that (A) has a particular solution of the
form y = e A(x) .
p
αx
c. Show that if e is a solution of the complementary equation and xe isn’t , then (B) has a particular solution of the form
αx αx
u = xA(x) , where A is a polynomial of the same degree as G, as in Example 5.4.5. Conclude that (A) has a particular
p
d. Show that if e and xe are both solutions of the complementary equation then (B) has a particular solution of the form
αx αx
u = x A(x) , where A is a polynomial of the same degree as G, and x A(x) can be obtained by integrating G/a twice,
2 2
p
taking the constants of integration to be zero, as in Example 5.4.6. Conclude that (A) has a particular solution of the form
A(x) .
2 αx
y =x e
p
Q5.4.6
Exercises 5.4.31–5.4.36 treat the equations considered in Examples 5.4.1–5.4.6. Substitute the suggested form of y into the p
equation and equate the resulting coefficients of like functions on the two sides of the resulting equation to derive a set of
simultaneous equations for the coefficients in y . Then solve for the coefficients to obtain y . Compare the work you’ve done
p p
with the work required to obtain the same results in Examples 5.4.1–5.4.6.
31. Compare with Example 5.4.1:
′′ ′ 2x 2x
y − 7 y + 12y = 4 e ; yp = Ae
a. y
′′
+ 2y + y =
′ e
√x
b. y ′′ ′
+ 6 y + 9y = e
−3x
ln x
2x
c. y − 4y + 4y =
′′ ′ e
1+x
d. 4y + 4y + y = 4e
′′ ′ −x/2
(
1
x
+ x)
38. Suppose α ≠ 0 and k is a positive integer. In most calculus books integrals like ∫ x e
k αx
dx are evaluated by integrating
by parts k times. This exercise presents another method. Let
αx
y =∫ e P (x) dx
with
k
P (x) = p0 + p1 x + ⋯ + pk x
(where p k ≠0 ).
a. Show that y = e αx
u , where
′
u + αu = P (x). (A)
c. Conclude that
αx k αx
∫ e P (x) dx = (A0 + A1 x + ⋯ + Ak x ) e + c,
b. ∫ e −x
(−1 + x )dx
2
c. ∫ x 3
e
−2x
dx
d. ∫ e x
(1 + x ) dx
2
e. ∫ e 3x
(−14 + 30x + 27 x )dx
2
f. ∫ e −x
(1 + 6 x
2
− 14 x
3
+ 3 x )dx
4
where λ and ω are real numbers, ω ≠ 0 , and P and Q are polynomials. We want to find a particular solution of Equation 5.5.1. As in Section 5.4,
the procedure that we will use is called the method of undetermined coefficients.
Note that

d/dx [x^r cos ωx] = −ω x^r sin ωx + r x^{r−1} cos ωx

and

d/dx [x^r sin ωx] = ω x^r cos ωx + r x^{r−1} sin ωx.

Therefore, if yp = A(x) cos ωx + B(x) sin ωx where A and B are polynomials, then

ayp'' + byp' + cyp = F(x) cos ωx + G(x) sin ωx,

where F and G are polynomials with coefficients that can be expressed in terms of the coefficients of A and B. This suggests that we try to choose A and B so that F = P and G = Q, respectively. Then yp will be a particular solution of Equation 5.5.2. The next theorem tells us how to choose the proper form for yp. For the proof see Exercise 5.5.37.

Theorem 5.5.1

Suppose ω is a positive number and P and Q are polynomials. Let k be the larger of the degrees of P and Q. Then the equation

ay'' + by' + cy = P(x) cos ωx + Q(x) sin ωx

has a particular solution

yp = A(x) cos ωx + B(x) sin ωx,                                           (5.5.3)

where

A(x) = A0 + A1 x + ⋯ + Ak x^k   and   B(x) = B0 + B1 x + ⋯ + Bk x^k,

provided that cos ωx and sin ωx are not solutions of the complementary equation. The solutions of

a(y'' + ω^2 y) = P(x) cos ωx + Q(x) sin ωx

(for which cos ωx and sin ωx are solutions of the complementary equation) are of the form Equation 5.5.3, where

A(x) = A0 x + A1 x^2 + ⋯ + Ak x^{k+1}   and   B(x) = B0 x + B1 x^2 + ⋯ + Bk x^{k+1}.
For an analog of this theorem that’s applicable to Equation 5.5.1, see Exercise 5.5.38.
Example 5.5.1
Find a particular solution of
y'' − 2y' + y = 5 cos 2x + 10 sin 2x.                                     (5.5.4)

Solution

In Equation 5.5.4 the coefficients of cos 2x and sin 2x are both zero degree polynomials (constants). Therefore Theorem 5.5.1 implies that Equation 5.5.4 has a particular solution of the form

yp = A cos 2x + B sin 2x.

Since

yp' = −2A sin 2x + 2B cos 2x   and   yp'' = −4(A cos 2x + B sin 2x),

we have

yp'' − 2yp' + yp = −4(A cos 2x + B sin 2x) − 4(−A sin 2x + B cos 2x) + (A cos 2x + B sin 2x)
                = (−3A − 4B) cos 2x + (4A − 3B) sin 2x.

Equating the coefficients of cos 2x and sin 2x here with the corresponding coefficients on the right side of Equation 5.5.4 shows that yp is a solution of Equation 5.5.4 if

−3A − 4B = 5
 4A − 3B = 10.

Solving these equations yields A = 1, B = −2. Therefore

yp = cos 2x − 2 sin 2x

is a particular solution of Equation 5.5.4.
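The 2×2 system above can of course be solved numerically as well (a sketch, NumPy assumed):

    import numpy as np

    M = np.array([[-3.0, -4.0],
                  [ 4.0, -3.0]])
    rhs = np.array([5.0, 10.0])
    print(np.linalg.solve(M, rhs))   # [ 1. -2.]  ->  yp = cos 2x - 2 sin 2x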
Example 5.5.2
Find a particular solution of
′′
y + 4y = 8 cos 2x + 12 sin 2x. (5.5.5)
Solution
The procedure used in Example 5.5.1 doesn’t work here; substituting y p = A cos 2x + B sin 2x for y in Equation 5.5.5 yields
′′
yp + 4 yp = −4(A cos 2x + B sin 2x) + 4(A cos 2x + B sin 2x) = 0
for any choice of A and B , since cos 2x and sin 2x are both solutions of the complementary equation for Equation 5.5.5 . We’re dealing with
the second case mentioned in Theorem 5.5.1, and should therefore try a particular solution of the form
Then
′
yp = A cos 2x + B sin 2x + 2x(−A sin 2x + B cos 2x)
′′
andyp = −4A sin 2x + 4B cos 2x − 4x(A cos 2x + B sin 2x)
so
′′
yp + 4 yp = −4A sin 2x + 4B cos 2x.
Example 5.5.3
Find a particular solution of
′′ ′
y + 3 y + 2y = (16 + 20x) cos x + 10 sin x. (5.5.7)
Solution
The coefficients of cos x and sin x in Equation 5.5.7 are polynomials of degree one and zero, respectively. Therefore Theorem 5.5.1 tells us to
look for a particular solution of Equation 5.5.7 of the form
yp = (A0 + A1 x) cos x + (B0 + B1 x) sin x. (5.5.8)
Then
and
′′
yp = (2 B1 − A0 − A1 x) cos x − (2 A1 + B0 + B1 x) sin x, (5.5.10)
so
′′ ′
yp + 3 yp + 2 yp = [ A0 + 3 A1 + 3 B0 + 2 B1 + (A1 + 3 B1 )x] cos x + [ B0 + 3 B1 − 3 A0 − 2 A1 + (B1 − 3 A1 )x] sin x. (5.5.11)
Comparing the coefficients of x cos x , x sin x , cos x , and sin x here with the corresponding coefficients in Equation 5.5.7 shows that yp is a
solution of Equation 5.5.7if
A1 + 3 B1 = 20
−3 A1 + B1 =0
A0 + 3 B0 + 3 A1 + 2 B1 = 16
−3 A0 + B0 − 2 A1 + 3 B1 = 10.
Solving the first two equations yields A 1 =2 ,B1 =6 . Substituting these into the last two equations yields
A0 + 3 B0 = 16 − 3 A1 − 2 B1 = −2
−3 A0 + B0 = 10 + 2 A1 − 3 B1 = −4.
Solving these equations yields A 0 =1 ,B 0 = −1 . Substituting A 0 =1 ,A 1 =2 ,B0 = −1 ,B 1 =6 into Equation 5.5.8shows that
yp = (1 + 2x) cos x − (1 − 6x) sin x
A Useful Observation
In Equations 5.5.9, 5.5.10, and 5.5.11 the polynomials multiplying sin x can be obtained by replacing A , A , B , and B by B , B , −A , and 0 1 0 1 0 1 0
−A , respectively, in the polynomials mutiplying cos x. An analogous result applies in general, as follows (Exercise 5.5.36).
1
Theorem 5.5.2
If
where A(x) and B(x) are polynomials with coefficients A …, A and B , …, B , then the polynomials multiplying sin ωx in
0 k 0 k
′ ′′ ′′ ′ ′′ 2
yp , yp , ayp + b yp + c yp and yp + ω yp
can be obtained by replacing A0 , …, Ak by B0 , …, Bk and B0 , …, Bk by −A0 , …, −Ak in the corresponding polynomials multiplying
cos ωx.
We will not use this theorem in our examples, but we recommend that you use it to check your manipulations when you work the exercises.
Example 5.5.4
Find a particular solution of
′′
y + y = (8 − 4x) cos x − (8 + 8x) sin x. (5.5.12)
Solution
According to Theorem 5.5.1, we should look for a particular solution of the form
2 2
yp = (A0 x + A1 x ) cos x + (B0 x + B1 x ) sin x, (5.5.13)
since cos x and sin x are solutions of the complementary equation. However, let’s try
yp = (A0 + A1 x) cos x + (B0 + B1 x) sin x (5.5.14)
first, so you can see why it doesn’t work. From Equation 5.5.10,
′′
yp = (2 B1 − A0 − A1 x) cos x − (2 A1 + B0 + B1 x) sin x,
Since the right side of this equation does not contain x cos x or x sin x , Equation 5.5.14 can’t satisfy Equation 5.5.12 no matter how we
choose A , A , B , and B .
0 1 0 1
′ 2
yp = [ A0 + (2 A1 + B0 )x + B1 x ] cos x
2
+ [ B0 + (2 B1 − A0 )x − A1 x ] sin x
and
′′ 2
yp = [2 A1 + 2 B0 − (A0 − 4 B1 )x − A1 x ] cos x
2
+ [2 B1 − 2 A0 − (B0 + 4 A1 )x − B1 x ] sin x,
so
′′
yp + yp = (2 A1 + 2 B0 + 4 B1 x) cos x + (2 B1 − 2 A0 − 4 A1 x) sin x.
Comparing the coefficients of cos x and sin x here with the corresponding coefficients in Equation 5.5.12 shows that yp is a solution of
Equation 5.5.12if
4B1 = −4
−4A1 = −8
2 B0 + 2 A1 =8
−2 A0 + 2 B1 = −8.
when λ ≠ 0 , we recall from Section 5.4 that substituting y = ue into Equation 5.5.15 will produce a constant coefficient equation for u with the
λx
forcing function P (x) cos ωx + Q(x) sin ωx. We can find a particular solution u of this equation by the procedure that we used in Examples p
Example 5.5.5
Find a particular solution of
′′ ′ −2x
y − 3 y + 2y = e [2 cos 3x − (34 − 150x) sin 3x] . (5.5.16)
Let y = ue −2x
. Then
′′ ′ −2x ′′ ′ ′
y − 3 y + 2y =e [(u − 4 u + 4u) − 3(u − 2u) + 2u]
−2x ′′ ′
=e (u − 7 u + 12u)
−2x
=e [2 cos 3x − (34 − 150x) sin 3x]
if
′′ ′
u − 7 u + 12u = 2 cos 3x − (34 − 150x) sin 3x. (5.5.17)
Theorem 5.5.1tells us to look for a particular solution of Equation 5.5.17of the form
up = (A0 + A1 x) cos 3x + (B0 + B1 x) sin 3x. (5.5.18)
Then
′
up = (A1 + 3 B0 + 3 B1 x) cos 3x + (B1 − 3 A0 − 3 A1 x) sin 3x
′′
and up = (−9 A0 + 6 B1 − 9 A1 x) cos 3x − (9 B0 + 6 A1 + 9 B1 x) sin 3x,
so
′′ ′
up − 7 up + 12 up = [3 A0 − 21 B0 − 7 A1 + 6 B1 + (3 A1 − 21 B1 )x] cos 3x
Comparing the coefficients of x cos 3x, x sin 3x, cos 3x , and sin 3x here with the corresponding coefficients on the right side of Equation
5.5.17shows that u is a solution of Equation 5.5.17if
p
21 A1 + 3 B1 = 150
(5.5.19)
3 A0 − 21 B0 − 7 A1 + 6 B1 = 2
21 A0 + 3 B0 − 6 A1 − 7 B1 = −34.
Solving the first two equations yields A 1 =7 ,B 1 =1 . Substituting these values into the last two equations of Equation 5.5.19yields
3 A0 − 21 B0 = 2 + 7 A1 − 6 B1 = 45
21 A0 + 3 B0 = −34 + 6 A1 + 7 B1 = 15.
Solving this system yields A 0 =1 ,B 0 = −2 . Substituting A 0 =1 ,A 1 =7 ,B0 = −2 , and B 1 =1 into Equation 5.5.18shows that
up = (1 + 7x) cos 3x − (2 − x) sin 3x
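A symbolic residual check of up against Equation 5.5.17 (illustrative only; SymPy assumed):

    from sympy import symbols, cos, sin, simplify

    x = symbols('x')
    up = (1 + 7*x)*cos(3*x) - (2 - x)*sin(3*x)
    residual = up.diff(x, 2) - 7*up.diff(x) + 12*up - (2*cos(3*x) - (34 - 150*x)*sin(3*x))
    print(simplify(residual))    # 0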
Example 5.5.6
Find a particular solution of
′′ ′ −x
y + 2 y + 5y = e [(6 − 16x) cos 2x − (8 + 8x) sin 2x] . (5.5.20)
Solution
Let y = ue −x
. Then
′′ ′ −x ′′ ′ ′
y + 2 y + 5y =e [(u − 2 u + u) + 2(u − u) + 5u]
−x ′′
=e (u + 4u)
−x
=e [(6 − 16x) cos 2x − (8 + 8x) sin 2x]
if
′′
u + 4u = (6 − 16x) cos 2x − (8 + 8x) sin 2x. (5.5.21)
Theorem 5.5.1tells us to look for a particular solution of Equation 5.5.21of the form
2 2
up = (A0 x + A1 x ) cos 2x + (B0 x + B1 x ) sin 2x.
Then
′ 2
up = [ A0 + (2 A1 + 2 B0 )x + 2 B1 x ] cos 2x
2
+ [ B0 + (2 B1 − 2 A0 )x − 2 A1 x ] sin 2x
and
′′ 2
up = [2 A1 + 4 B0 − (4 A0 − 8 B1 )x − 4 A1 x ] cos 2x
2
+ [2 B1 − 4 A0 − (4 B0 + 8 A1 )x − 4 B1 x ] sin 2x,
so
′′
up + 4 up = (2 A1 + 4 B0 + 8 B1 x) cos 2x + (2 B1 − 4 A0 − 8 A1 x) sin 2x.
Equating the coefficients of x cos 2x, x sin 2x, cos 2x , and sin 2x here with the corresponding coefficients on the right side of Equation 5.5.21
shows that u is a solution of Equation 5.5.21if
p
8B1 = −16
−8A1 =− 8
(5.5.22)
4 B0 + 2 A1 =6
−4 A0 + 2 B1 = −8.
with the corresponding coefficients on the right side of Equation 5.5.20. (See Exercise 5.5.38). This leads to the same system Equation 5.5.22
of equations for A , A , B , and B that we obtained in Example 5.5.6. However, if you try this approach you’ll see that deriving Equation
0 1 0 1
5.5.22 this way is much more tedious than the way we did it in Example 5.5.6.
2. y ′′ ′
+ 3 y + y = (2 − 6x) cos x − 9 sin x
3. y ′′ ′ x
+ 2 y + y = e (6 cos x + 17 sin x)
4. y ′′ ′
+ 3 y − 2y = −e
2x
(5 cos 2x + 9 sin 2x)
5. y ′′ ′ x
− y + y = e (2 + x) sin x
6. y ′′ ′
+ 3 y − 2y = e
−2x
[(4 + 20x) cos 3x + (26 − 32x) sin 3x]
7. y ′′
+ 4y = −12 cos 2x − 4 sin 2x
8. y ′′
+ y = (−4 + 8x) cos x + (8 − 4x) sin x
9. 4y ′′
+ y = −4 cos x/2 − 8x sin x/2
10. y ′′ ′
+ 2 y + 2y = e
−x
(8 cos x − 6 sin x)
11. y ′′ ′
− 2 y + 5y = e
x
[(6 + 8x) cos 2x + (6 − 8x) sin 2x]
12. y ′′ ′
+ 2y + y = 8x
2
cos x − 4x sin x
13. y ′′ ′
+ 3 y + 2y = (12 + 20x + 10 x ) cos x + 8x sin x
2
14. y ′′ ′ 2
+ 3 y + 2y = (1 − x − 4 x ) cos 2x − (1 + 7x + 2 x ) sin 2x
2
15. y ′′ ′
− 5 y + 6y = −e
x 2
[(4 + 6x − x ) cos x − (2 − 4x + 3 x ) sin x]
2
16. y ′′ ′
− 2 y + y = −e
x 2
[(3 + 4x − x ) cos x + (3 − 4x − x ) sin x]
2
17. y ′′ ′
− 2 y + 2y = e
x 2
[(2 − 2x − 6 x ) cos x + (2 − 10x + 6 x ) sin x]
2
Q5.5.2
In Exercises 5.5.18-5.5.21 find a particular solution and graph it.
18. y ′′ ′
+ 2y + y = e
−x
[(5 − 2x) cos x − (3 + 3x) sin x]
19. y ′′
+ 9y = −6 cos 3x − 12 sin 3x
20. y ′′ ′ 2
+ 3 y + 2y = (1 − x − 4 x ) cos 2x − (1 + 7x + 2 x ) sin 2x
2
21. y ′′ ′
+ 4 y + 3y = e
−x 2
[(2 + x + x ) cos x + (5 + 4x + 2 x ) sin x]
2
Q5.5.3
In Exercises 5.5.22-5.5.26 solve the initial value problem.
22. y ′′ ′ x
− 7 y + 6y = −e (17 cos x − 7 sin x),
′
y(0) = 4, y (0) = 2
23. y ′′ ′ x
− 2 y + 2y = −e (6 cos x + 4 sin x),
′
y(0) = 1, y (0) = 4
24. y ′′ ′
+ 6 y + 10y = −40 e
x
sin x, y(0) = 2,
′
y (0) = −3
25. y ′′ ′
− 6 y + 10y = −e
3x
(6 cos x + 4 sin x), y(0) = 2,
′
y (0) = 7
26. y ′′ ′
− 3 y + 2y = e
3x
[21 cos x − (11 + 10x) sin x] , y(0) = 0,
′
y (0) = 6
Q5.5.4
In Exercises 5.5.27-5.5.32 use the principle of superposition to find a particular solution. Where indicated, solve the initial
value problem.
28. y ′′
+ y = 4 cos x − 2 sin x + x e
x
+e
−x
29. y ′′ ′
− 3 y + 2y = x e
x
+ 2e
2x
+ sin x
30. y ′′ ′
− 2 y + 2y = 4x e
x
cos x + x e
−x
+1 +x
2
31. y ′′ ′
− 4 y + 4y = e
2x
(1 + x) + e
2x
(cos x − sin x) + 3 e
3x
+1 +x
32. y ′′ ′
− 4 y + 4y = 6 e
2x
+ 25 sin x, y(0) = 5, y (0) = 3
′
Q5.5.5
In Exercises 5.5.33-5.5.35 solve the initial value problem and graph the solution.
33. y ′′
+ 4y = −e
−2x
[(4 − 7x) cos x + (2 − 4x) sin x] , y(0) = 3,
′
y (0) = 1
34. y ′′ ′
+ 4 y + 4y = 2 cos 2x + 3 sin 2x + e
−x
, y(0) = −1, y (0) = 2
′
35. y ′′ x
+ 4y = e (11 + 15x) + 8 cos 2x − 12 sin 2x, y(0) = 3, y (0) = 5
′
Q5.5.6
36.
a. Verify that if
′′ ′′ ′ 2 ′′ ′ 2
yp = (A + 2ωB − ω A) cos ωx + (B − 2ωA − ω B) sin ωx.
2 ′ ′ ′′
[−bωA + (c − aω )B − 2aωA + b B + aB ] sin ωx.
where at least one of the coefficients p , q is nonzero, so k is the larger of the degrees of P and Q.
k k
a. Show that if cos ωx and sin ωx are not solutions of the complementary equation
′′ ′
ay + b y + cy = 0, (5.5E.4)
such that
2 ′ ′ ′′
(c − aω )A + bωB + 2aωB + b A + aA =P
(5.5E.5)
2 ′ ′ ′′
−bωA + (c − aω )B − 2aωA + b B + aB = Q,
where (A k, ,
Bk ) (Ak−1 , Bk−1 ) , …,(A 0, B0 ) can be computed successively by solving the systems
and, if 1 ≤ r ≤ k ,
2
(c − aω )Ak−r + bωBk−r = pk−r + ⋯
(5.5E.7)
2
−bωAk−r + (c − aω )Bk−r = qk−r + ⋯ ,
where the terms indicated by “⋯” depend upon the previously computed coefficients with subscripts greater than k−r .
Conclude from this and Exercise 5.5.36b that
yp = A(x) cos ωx + B(x) sin ωx (B)
is a particular solution of
′′ ′
ay + b y + cy = P (x) cos ωx + Q(x) sin ωx. (5.5E.8)
does not have a solution of the form (B) with A and B as in (A). Then show that there are polynomials
2 k+1 2 k+1
A(x) = A0 x + A1 x + ⋯ + Ak x and B(x) = B0 x + B1 x + ⋯ + Bk x (5.5E.9)
such that
′′ ′
a(A + 2ωB ) =P
(5.5E.10)
′′ ′
a(B − 2ωA ) = Q,
and, if k ≥ 1 ,
1 qk−j
Ak−j =− [ − (k − j + 2)Bk−j+1 ]
2ω a(k − j + 1)
1 pk−j
Bk−j = [ − (k − j + 2)Ak−j+1 ]
2ω a(k − j + 1)
for 1 ≤ j ≤ k . Conclude that (B) with this choice of the polynomials A and B is a particular solution of (C).
38. Show that Theorem 5.5.1 implies the next theorem:
Theorem 5.5E . 1
Suppose ω is a positive number and P and Q are polynomials. Let k be the larger of the degrees of P and . Then the
Q
equation
′′ ′ λx
ay + b y + cy = e (P (x) cos ωx + Q(x) sin ωx) (5.5E.11)
where
k k
A(x) = A0 + A1 x + ⋯ + Ak x and B(x) = B0 + B1 x + ⋯ + Bk x , (5.5E.12)
provided that e λx
cos ωx and e λx
sin ωx are not solutions of the complementary equation. The equation
(for which e λx
cos ωx and e λx
sin ωx are solutions of the complementary equation) has a particular solution of the form
(A), where
2 k+1 2 k+1
A(x) = A0 x + A1 x + ⋯ + Ak x and B(x) = B0 x + B1 x + ⋯ + Bk x . (5.5E.14)
λx
y =∫ e (P (x) cos ωx + Q(x) sin ωx) dx (5.5E.15)
where ω ≠ 0 and
k k
P (x) = p0 + p1 x + ⋯ + pk x , Q(x) = q0 + q1 x + ⋯ + qk x . (5.5E.16)
a. Show that y = e λx
u , where
′
u + λu = P (x) cos ωx + Q(x) sin ωx. (A)
where
k k
A(x) = A0 + A1 x + ⋯ + Ak x , B(x) = B0 + B1 x + ⋯ + Bk x , (5.5E.18)
and the pairs of coefficients (A , B ), (A , B ), …,(A , B ) can be computed successively as the solutions of pairs
k k k−1 k−1 0 0
c. Conclude that
λx λx
∫ e (P (x) cos ωx + Q(x) sin ωx) dx = e (A(x) cos ωx + B(x) sin ωx) + c, (5.5E.19)
b. ∫ x e cos xdx
2 x
c. ∫ x e sin 2xdx
−x
d. ∫ x e sin xdx
2 −x
e. ∫ x e sin xdx
3 x
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = 0. (5.6.2)
The method is called reduction of order because it reduces the task of solving Equation 5.6.1 to solving a first order equation.
Unlike the method of undetermined coefficients, it does not require P , P , and P to be constants, or F to be of any special 0 1 2
form.
By now you shoudn’t be surprised that we look for solutions of Equation 5.6.1 in the form
y = uy1 (5.6.3)
where u is to be determined so that y satisfies Equation 5.6.1. Substituting Equation 5.6.3 and
y' = u'y1 + uy1'
y'' = u''y1 + 2u'y1' + uy1''

into Equation 5.6.1 yields

(P0 y1)u'' + (2P0 y1' + P1 y1)u' + (P0 y1'' + P1 y1' + P2 y1)u = F.       (5.6.4)

However, the coefficient of u is zero, since y1 satisfies Equation 5.6.2. Therefore Equation 5.6.4 reduces to

Q0(x)u'' + Q1(x)u' = F,                                                   (5.6.5)

with

Q0 = P0 y1   and   Q1 = 2P0 y1' + P1 y1.

(It isn't worthwhile to memorize the formulas for Q0 and Q1!) Since Equation 5.6.5 is a linear first order equation in u', we can solve it for u' by variation of parameters as in Section 1.2, integrate the solution to obtain u, and then obtain y from Equation 5.6.3.
Example 5.6.1
a. Find the general solution of
′′ ′ 2
xy − (2x + 1)y + (x + 1)y = x , (5.6.6)
given that y 1 =e
x
is a solution of the complementary equation
′′ ′
xy − (2x + 1)y + (x + 1)y = 0. (5.6.7)
′
u
′′ −x
u − = xe . (5.6.8)
x
becomes
′
z −x
z − = xe . (5.6.9)
x
We leave it to you to show (by separation of variables) that z 1 =x is a solution of the complementary equation
′
z
z − =0
x
for Equation 5.6.9. By applying variation of parameters as in Section 1.2, we can now see that every solution of Equation
5.6.9 is of the form
′ −x ′ −x −x
z = vx where v x = xe , so v =e and v = −e + C1 .
Since u ′
= z = vx , u is a solution of Equation 5.6.8 if and only if
′ −x
u = vx = −x e + C1 x.
−x
C1 2
u = (x + 1)e + x + C2 .
2
x
C1 2 x x
y = ue = x +1 + x e + C2 e . (5.6.10)
2
C = 2 and C = 0 , we see that y is also a solution of Equation 5.6.6. Since the difference of two
2 x
1 2 = x +1 +x e p
2
solutions of Equation 5.6.6 is a solution of Equation 5.6.7, y = y − y = x e is a solution of Equation 5.6.7. Since
2 p
1
p
2
2 x
y /y
2 is nonconstant and we already know that y = e is a solution of Equation 5.6.6, Theorem 5.1.6 implies that
1 1
x
Although Equation 5.6.10 is a correct form for the general solution of Equation 5.6.6, it is silly to leave the arbitrary
coefficient of x e as C /2 where C is an arbitrary constant. Moreover, it is sensible to make the subscripts of the
2 x
1 1
coefficients of y = e and y = x e consistent with the subscripts of the functions themselves. Therefore we rewrite
1
x
2
2 x
Equation 5.6.10 as
x 2 x
y = x + 1 + c1 e + c2 x e
by simply renaming the arbitrary constants. We’ll also do this in the next two examples, and in the answers to the
exercises.
Example 5.6.2
a. Find the general solution of
2 ′′ ′ 2
x y + xy − y = x + 1,
Solution
a. If y = ux, then y ′ ′
= u x +u and y ′′ ′′
= u x + 2u
′
, so
2 ′′ ′ 2 ′′ ′ ′
x y + xy − y = x (u x + 2 u ) + x(u x + u) − ux
3 ′′ 2 ′
=x u + 3x u .
3 1 1
′′ ′
u + u = + . (5.6.13)
3
x x x
To focus on how we apply variation of parameters to this equation, we temporarily write z = u , so that Equation 5.6.13 ′
becomes
3 1 1
′
z + z = + . (5.6.14)
3
x x x
′
3
z + z =0
x
for Equation 5.6.14. By variation of parameters, every solution of Equation 5.6.14 is of the form
′ 3
v v 1 1 ′ 2
x
z = where = + , so v =x +1 and v= + x + C1 .
x3 x3 x x3 3
Since u ′ 3
= z = v/ x , u is a solution of Equation 5.6.14 if and only if
v 1 1 C1
′
u = = + + .
3 2 3
x 3 x x
Reasoning as in the solution of Example 5.6.1a , we conclude that y1 = x and y2 = 1/x form a fundamental set of
solutions for Equation 5.6.11.
As we explained above, we rename the constants in Equation 5.6.15 and rewrite it as
2
x c2
y = − 1 + c1 x + . (5.6.16)
3 x
Setting x =1 in Equation 5.6.16 and Equation 5.6.17 and imposing the initial conditions y(1) = 2 and ′
y (1) = −3
yields
Solving these equations yields c 1 = −1/2 ,c 2 = 19/6 . Therefore the solution of Equation 5.6.12is
2
x x 19
y = −1 − + .
3 2 6x
Using reduction of order to find the general solution of a homogeneous linear second order equation leads to a homogeneous
linear first order equation in u that can be solved by separation of variables. The next example illustrates this.
′
Example 5.6.3
Find the general solution and a fundamental set of solutions of
2 ′′ ′
x y − 3x y + 3y = 0, (5.6.18)
3 ′′ 2 ′
=x u −x u .
′′
u 1
= ,
′
u x
so
′ ′
ln | u | = ln |x| + k, or equivalently u = C1 x.
Therefore
C1
2
u = x + C2 ,
2
which we rewrite as
3
y = c1 x + c2 x .
Therefore {x, x 3
} is a fundamental set of solutions of Equation 5.6.18.
2. x 2
y
′′
+ xy − y =
′ 4
x2
; y1 = x
3. x 2
y
′′
− x y + y = x;
′
y1 = x
4. y ′′
− 3 y + 2y =
′
1+e
1
−x
; y1 = e
2x
5. y ′′
− 2y + y = 7x
′ 3/2
e ;
x
y1 = e
x
6. 4x 2
y
′′
+ (4x − 8 x )y + (4 x
2 ′ 2
− 4x − 1)y = 4 x
1/2 x
e (1 + 4x); y1 = x
1/2
e
x
7. y ′′
− 2 y + 2y = e
′ x
sec x; y1 = e
x
cos x
8. y ′′
+ 4x y + (4 x
′ 2
+ 2)y = 8 e
−x(x+2)
; y1 = e
−x
9. x 2
y
′′ ′
+ x y − 4y = −6x − 4; y1 = x
2
10. x 2
y
′′
+ 2x(x − 1)y + (x
′ 2
− 2x + 2)y = x e
3 2x
; y1 = x e
−x
11. x 2
y
′′
− x(2x − 1)y + (x
′ 2
− x − 1)y = x e ;
2 x
y1 = x e
x
12. (1 − 2x)y ′′ ′
+ 2 y + (2x − 3)y = (1 − 4x + 4 x )e ;
2 x
y1 = e
x
13. x 2
y
′′
− 3x y + 4y = 4 x ;
′ 4
y1 = x
2
14. 2x y ′′
+ (4x + 1)y + (2x + 1)y = 3 x
′ 1/2
e
−x
; y1 = e
−x
15. x y ′′
− (2x + 1)y + (x + 1)y = −e ;
′ x
y1 = e
x
16. 4x 2
y
′′
− 4x(x + 1)y + (2x + 3)y = 4 x
′ 5/2
e
2x
; y1 = x
1/2
17. x 2
y
′′
− 5x y + 8y = 4 x ;
′ 2
y1 = x
2
Q5.6.2
In Exercises 5.6.18-5.6.30 find a fundamental set of solutions, given that y is a solution. 1
18. x y ′′
+ (2 − 2x)y + (x − 2)y = 0;
′
y1 = e
x
19. x 2
y
′′
− 4x y + 6y = 0;
′
y1 = x
2
20. x 2
(ln |x| ) y
2 ′′
− (2x ln |x|)y + (2 + ln |x|)y = 0;
′
y1 = ln |x|
−
21. 4x y ′′
+ 2 y + y = 0;
′
y1 = sin √x
22. x y ′′
− (2x + 2)y + (x + 2)y = 0;
′
y1 = e
x
23. x 2
y
′′
− (2a − 1)x y + a y = 0;
′ 2
y1 = x
a
24. x 2
y
′′
− 2x y + (x
′ 2
+ 2)y = 0; y1 = x sin x
25. x y ′′
− (4x + 1)y + (4x + 2)y = 0;
′
y1 = e
2x
26. 4x 2
(sin x)y
′′
− 4x(x cos x + sin x)y + (2x cos x + 3 sin x)y = 0;
′
y1 = x
1/2
27. 4x 2
y
′′ ′
− 4x y + (3 − 16 x )y = 0;
2
y1 = x
1/2
e
2x
29. (x 2
− 2x)y
′′
+ (2 − x )y + (2x − 2)y = 0;
2 ′
y1 = e
x
30. x y ′′
− (4x + 1)y + (4x + 2)y = 0;
′
y1 = e
2x
31. x2
y
′′
− 3x y + 4y = 4 x ,
′ 4
y(−1) = 7,
′
y (−1) = −8; y1 = x
2
33. (x + 1) 2
y
′′
− 2(x + 1)y − (x
′ 2
+ 2x − 1)y = (x + 1 ) e ,
3 x
y(0) = 1,
′
y (0) = − 1; y1 = (x + 1)e
x
Q5.6.4
In Exercises 5.6.34 and 5.6.35 solve the initial value problem and graph the solution, given that y satisfies the complementary 1
equation.
34. x2
y
′′
+ 2x y − 2y = x ,
′ 2
y(1) =
5
4
′
, y (1) =
3
2
; y1 = x
35. (x 2
− 4)y
′′ ′
+ 4x y + 2y = x + 2, y(0) = −
1
3
,
′
y (0) = −1; y1 =
1
x−2
Q5.6.5
36. Suppose p and p are continuous on (a, b). Let y be a solution of
1 2 1
′′ ′
y + p1 (x)y + p2 (x)y = 0 (A)
that has no zeros on (a, b), and let x be in (a, b). Use reduction of order to show that y and
0 1
x t
1
y2 (x) = y1 (x) ∫ exp(− ∫ p1 (s) ds) dt (5.6E.1)
2
x0 y (t) x0
1
is a Riccati equation. (See Exercise 2.4.55.) Assume that p and q are continuous.
a. Show that y is a solution of (A) if and only if y = z ′
/z , where
′′ ′
z + p(x)z + q(x)z = 0. (B)
where {z , z } is a fundamental set of solutions of (B) and c and c are arbitrary constants.
1 2 1 2
c. Does the formula (C) imply that the first order equation (A) has a two–parameter family of solutions? Explain your answer.
38. Use a method suggested by Exercise 5.6.37 to find all solutions. of the equation.
a. y + y + k = 0
′ 2 2
b. y + y − 3y + 2 = 0
′ 2
c. y + y + 5y − 6 = 0
′ 2
d. y + y + 8y + 7 = 0
′ 2
e. y + y + 14y + 50 = 0
′ 2
f. 6y + 6y − y − 1 = 0
′ 2
39. Use a method suggested by Exercise 5.6.37 and reduction of order to find all solutions of the equation, given that y1 is a
solution.
a. x (y + y ) − x(x + 2)y + x + 2 = 0; y = 1/x
2 ′ 2
1
b. y + y + 4xy + 4x + 2 = 0; y = −2x
′ 2 2
1
e. x (y + y ) + xy + x − = 0; y = − tan x −
2 ′ 2 2 1
4
1
1
2x
f. x (y + y ) − 7xy + 7 = 0; y = 1/x
2 ′ 2
1
is the generalized Riccati equation. (See Exercise 2.4.55.) Assume that p and q are continuous and r is differentiable.
a. Show that y is a solution of (A) if and only if y = z ′
/rz , where
′
r (x)
′′ ′
z + [p(x) − ] z + r(x)q(x)z = 0. (B)
r(x)
where {z 1, z2 } is a fundamental set of solutions of (B) and c and c are arbitrary constants.
1 2
Having found a particular solution y by this method, we can write the general solution of Equation 5.7.1 as
p
y = yp + c1 y1 + c2 y2 .
Since we need only one nontrivial solution of Equation 5.7.2 to find the general solution of Equation 5.7.1 by reduction of
order, it is natural to ask why we are interested in variation of parameters, which requires two linearly independent solutions of
Equation 5.7.2 to achieve the same goal. Here’s the answer:
If we already know two linearly independent solutions of Equation 5.7.2 then variation of parameters will probably be
simpler than reduction of order.
Variation of parameters generalizes naturally to a method for finding particular solutions of higher order linear equations
(Section 9.4) and linear systems of equations (Section 10.7), while reduction of order doesn’t.
Variation of parameters is a powerful theoretical tool used by researchers in differential equations. Although a detailed
discussion of this is beyond the scope of this book, you can get an idea of what it means from Exercises 5.7.37–5.7.39.
We’ll now derive the method. As usual, we consider solutions of Equation 5.7.1 and Equation 5.7.2 on an interval (a, b)
where P , P , P , and F are continuous and P has no zeros. Suppose that {y , y } is a fundamental set of solutions of the
0 1 2 0 1 2
complementary equation Equation 5.7.2. We look for a particular solution of Equation 5.7.1 in the form
yp = u1 y1 + u2 y2 (5.7.3)
where u and u are functions to be determined so that y satisfies Equation 5.7.1. You may not think this is a good idea,
1 2 p
since there are now two unknown functions to be determined, rather than one. However, since u and u have to satisfy only 1 2
one condition (that y is a solution of Equation 5.7.1), we can impose a second condition that produces a convenient
p
simplification, as follows.
Differentiating Equation 5.7.3 yields
′ ′ ′ ′ ′
yp = u1 y + u2 y + u y1 + u y2 . (5.7.4)
1 2 1 2
′ ′
u y1 + u y2 = 0. (5.7.5)
1 2
that is, Equation 5.7.5 permits us to differentiate y (once!) as if u and u are constants. Differentiating Equation 5.7.4 yields
p 1 2
′′ ′′ ′′ ′ ′ ′ ′
yp = u1 y + u2 y +u y +u y . (5.7.7)
1 2 1 1 2 2
(There are no terms involving u and u here, as there would be if we hadn’t required Equation 5.7.5.) Substituting Equation
′′
1
′′
2
5.7.3, Equation 5.7.6, and Equation 5.7.7 into Equation 5.7.1 and collecting the coefficients of u and u yields 1 2
′′ ′ ′′ ′ ′ ′ ′ ′
u1 (P0 y + P1 y + P2 y1 ) + u2 (P0 y + P1 y + P2 y2 ) + P0 (u y +u y ) = F.
1 1 2 2 1 1 2 2
As in the derivation of the method of reduction of order, the coefficients of u and u here are both zero because 1 2 y1 and y2
satisfy the complementary equation. Hence, we can rewrite the last equation as
′ ′ ′ ′
P0 (u y +u y ) = F. (5.7.8)
1 1 2 2
′ ′
u y1 + u y2 =0
1 2
(5.7.9)
′ ′ ′ ′ F
u y +u y = ,
1 1 2 2 P0
where the first equation is the same as Equation 5.7.5 and the second is from Equation 5.7.8.
We’ll now show that you can always solve Equation 5.7.9 for u and u . (The method that we use here will always work, but
′
1
′
2
simpler methods usually work when you’re dealing with specific equations.) To obtain u , multiply the first equation in ′
1
′ ′ ′ ′
u y1 y + u y2 y =0
1 2 2 2
F y2
′ ′ ′ ′
u y y2 + u y y2 = .
1 1 2 2
P0
Since { y1 , y2 } is a fundamental set of solutions of Equation 5.7.2 on (a, b), Theorem 5.1.6 implies that the Wronskian
y1 y
2
′ ′
− y y2
1
has no zeros on (a, b). Therefore we can solve Equation 5.7.10 for u , to obtain ′
1
F y2
′
u =− . (5.7.11)
1 ′ ′
P0 (y1 y − y y2 )
2 1
We leave it to you to start from Equation 5.7.9 and show by a similar argument that
F y1
′
u = . (5.7.12)
2 ′ ′
P0 (y1 y − y y2 )
2 1
We can now obtain u and u by integrating u and u . The constants of integration can be taken to be zero, since any choice
1 2
′
1
′
2
You should not memorize Equation 5.7.11 and Equation 5.7.12. On the other hand, you don’t want to rederive the whole
procedure for every specific problem. We recommend the a compromise:
a. Write
yp = u1 y1 + u2 y2 (5.7.13)
F
(5.7.14)
′ ′ ′ ′
u y +u y =
1 1 2 2 P0
Example 5.7.1
Find a particular solution y of p
2 ′′ ′ 9/2
x y − 2x y + 2y = x , (5.7.15)
where
′ ′ 2
u x+ u x =0
1 2
9/2
x 5/2
′ ′
u + 2u x = =x .
1 2
x2
From the first equation, u = −u x . Substituting this into the second equation yields u x = x
′
1
′
2
′
2
5/2
, so ′
u
2
3/2
=x and
therefore u = −u x = −x . Integrating and taking the constants of integration to be zero yields
′
1
′
2
5/2
2 7/2
2 5/2
u1 = − x and u2 = x .
7 5
Therefore
2
2 7/2
2 5/2 2
4 9/2
yp = u1 x + u2 x =− x x+ x x = x ,
7 5 35
Example 5.7.2
Find a particular solution y of p
′′ ′ 2
(x − 1)y − x y + y = (x − 1 ) , (5.7.16)
where
′ ′ x
u x +u e =0
1 2
2
(x − 1)
′ ′ x
u +u e = = x − 1.
1 2
x −1
Subtracting the first equation from the second yields −u (x − 1) = x − 1 , so u = −1 . From this and the first equation,
′
1
′
1
′
u = −x e
2
−x
u = xe
′
1
. Integrating and taking the constants of integration to be zero yields
−x
−x
u1 = −x and u2 = −(x + 1)e .
Therefore
x −x x 2
yp = u1 x + u2 e = (−x)x + (−(x + 1)e )e = −x − x − 1,
There’s nothing wrong with leaving the general solution of Equation 5.7.16 in the form Equation 5.7.17; however, we think
you’ll agree that Equation 5.7.18 is preferable. We can also view the transition from Equation 5.7.17 to Equation 5.7.18
differently. In this example the particular solution y = −x − x − 1 contained the term −x, which satisfies the
p
2
complementary equation. We can drop this term and redefine y = −x − 1 , since −x − x − 1 is a solution of Equation p
2 2
5.7.16 and x is a solution of the complementary equation; hence, −x − 1 = (−x − x − 1) + x is also a solution of 2 2
Equation 5.7.16. In general, it is always legitimate to drop linear combinations of {y , y } from particular solutions obtained 1 2
by variation of parameters. (See Exercise 5.7.36 for a general discussion of this question.) We’ll do this in the following
examples and in the answers to exercises that ask for a particular solution. Therefore, don’t be concerned if your answer to
such an exercise differs from ours only by a solution of the complementary equation.
Example 5.7.3
Find a particular solution of
′′ ′
1
y + 3 y + 2y = . (5.7.19)
x
1 +e
is p(r) = r
2
+ 3r + 2 = (r + 1)(r + 2), so y = e and y = e
1 form a fundamental set of solutions of Equation
−x
2
−2x
−x −2x
yp = u1 e + u2 e ,
where
′ −x ′ −2x
u e + u e =0
1 2
−x −2x
1
′ ′
−u e − 2u e = .
1 2 x
1 +e
Integrating by means of the substitution v = e and taking the constants of integration to be zero yields
x
x
e dv x
u1 = ∫ dx = ∫ = ln(1 + v) = ln(1 + e )
x
1 +e 1 +v
and
Therefore
−x −2x
yp = u1 e + u2 e
x −x x x −2x
= [ln(1 + e )] e + [ln(1 + e ) − e ] e ,
so
−x −2x x −x
yp = (e +e ) ln(1 + e ) − e .
Since the last term on the right satisfies the complementary equation, we drop it and redefine
−x −2x x
yp = (e +e ) ln(1 + e ).
Example 5.7.4
Solve the initial value problem
2
2 ′′ ′ ′
(x − 1)y + 4x y + 2y = , y(0) = −1, y (0) = −5, (5.7.21)
x +1
given that
1 1
y1 = and y2 =
x −1 x +1
Solution
We first use variation of parameters to find a particular solution of
2 ′′ ′
2
(x − 1)y + 4x y + 2y =
x +1
where
′ ′
u u
1 2
+ =0 (5.7.22)
x −1 x +1
′ ′
u u 2
1 2
− − =
2 2 2
(x − 1) (x + 1) (x + 1)(x − 1)
Multiplying the first equation by 1/(x − 1) and adding the result to the second equation yields
1 1 2
′
[ − ]u = . (5.7.23)
2 2 2 2
x −1 (x + 1) (x + 1)(x − 1)
Since
2
=∫ [ − 1] dx = 2 ln(x + 1) − x
x +1
and
u2 = ∫ dx = x.
Therefore
u1 u2 1 1
yp = + = [2 ln(x + 1) − x] +x
x −1 x +1 x −1 x +1
2 ln(x + 1) 1 1 2 ln(x + 1) 2x
= +x [ − ] = − .
x −1 x +1 x −1 x −1 (x + 1)(x − 1)
However, since
2x 1 1
=[ + ]
(x + 1)(x − 1) x +1 x −1
2 2 ln(x + 1) c1 c2
′
y = − − − .
x2 − 1 (x − 1)2 (x − 1)2 (x + 1)2
Setting x = 0 in the last two equations and imposing the initial conditions y(0) = −1 and y ′
(0) = −5 yields the system
−c1 + c2 = −1
−2 − c1 − c2 = −5.
The solution of this system is c 1 = 2, c2 = 1 . Substituting these into Equation 5.7.24 yields
2 ln(x + 1) 2 1
y = + +
x −1 x −1 x +1
2 ln(x + 1) 3x + 1
= +
x −1 x2 − 1
We’ve now considered three methods for solving nonhomogeneous linear equations: undetermined coefficients, reduction of
order, and variation of parameters. It’s natural to ask which method is best for a given problem. The method of undetermined
coefficients should be used for constant coefficient equations with forcing functions that are linear combinations of
polynomials multiplied by functions of the form e , e cos ωx, or e sin ωx. Although the other two methods can be used
αx λx λx
to solve such problems, they will be more difficult except in the most trivial cases, because of the integrations involved.
If the equation isn’t a constant coefficient equation or the forcing function isn’t of the form just specified, the method of
undetermined coefficients does not apply and the choice is necessarily between the other two methods. The case could be
made that reduction of order is better because it requires only one solution of the complementary equation while variation of
parameters requires two. However, variation of parameters will probably be easier if you already know a fundamental set of
solutions of the complementary equation.
2. y ′′
+ 4y = sin 2x sec
2
2x
3. y ′′
− 3 y + 2y =
′
1+e
4
−x
4. y ′′
− 2 y + 2y = 3 e
′ x
sec x
5. y ′′
− 2 y + y = 14 x
′ 3/2
e
x
−x
6. y ′′
−y =
4e
1−e
−2x
Q5.7.2
In Exercises 5.7.7-5.7.29 use variation of parameters to find a particular solution, given the solutions y1 , y2 of the
complementary equation.
7. x 2
y
′′
+ xy − y = 2x
′ 2
+ 2; y1 = x, y2 =
1
8. x y ′′
+ (2 − 2x)y + (x − 2)y = e
′ 2x
; y1 = e ,
x
y2 =
e
9. 4x 2
y
′′
+ (4x − 8 x )y + (4 x
2 ′ 2
− 4x − 1)y = 4 x
1/2
e ,
x
x >0 ;y 1 =x
1/2 x
e , y2 = x
−1/2
e
x
2 2
10. y ′′
+ 4x y + (4 x
′ 2
+ 2)y = 4 e
−x(x+2)
; y1 = e
−x
, y2 = x e
−x
11. x 2
y
′′
− 4x y + 6y = x
′ 5/2
, x > 0; y1 = x , y2 = x
2 3
12. x 2
y
′′
− 3x y + 3y = 2 x
′ 4
sin x; y1 = x, y2 = x
3
− − −
14. 4x y ′′ ′
+ 2 y + y = sin √x ; y1 = cos √x , y2 = sin √x
15. x y ′′
− (2x + 2)y + (x + 2)y = 6 x e ;
′ 3 x
y1 = e ,
x
y2 = x e
3 x
16. x 2
y
′′
− (2a − 1)x y + a y = x
′ 2 a+1
; y1 = x ,
a
y2 = x
a
ln x
17. x 2
y
′′
− 2x y + (x
′ 2
+ 2)y = x
3
cos x; y1 = x cos x, y2 = x sin x
2 2
18. x y ′′ ′
− y − 4x y = 8x ;
3 5
y1 = e
x
, y2 = e
−x
− 2x − −2x
20. 4x 2
y
′′ ′
− 4x y + (3 − 16 x )y = 8 x
2 5/2
; y1 = √x e , y2 = √x e
− −
21. 4x 2
y
′′
− 4x y + (4 x
′ 2
+ 3)y = x
7/2
; y1 = √x sin x, y2 = √x cos x
22. x 2
y
′′
− 2x y − (x
′ 2
− 2)y = 3 x ;
4
y1 = x e , y2 = x e
x −x
23. x 2
y
′′
− 2x(x + 1)y + (x
′ 2
+ 2x + 2)y = x e ;
3 x
y1 = x e ,
x
y2 = x e
2 x
24. x 2
y
′′
− x y − 3y = x
′ 3/2
; y1 = 1/x, y2 = x
3
25. x 2
y
′′
− x(x + 4)y + 2(x + 3)y = x e ;
′ 4 x
y1 = x ,
2
y2 = x e
2 x
26. x 2
y
′′
− 2x(x + 2)y + (x
′ 2
+ 4x + 6)y = 2x e ;
x
y1 = x e ,
2 x
y2 = x e
3 x
27. x 2
y
′′
− 4x y + (x
′ 2
+ 6)y = x ;
4
y1 = x
2
cos x, y2 = x
2
sin x
28. (x − 1)y ′′ ′
− x y + y = 2(x − 1 ) e ;
2 x
y1 = x, y2 = e
x
− − x
29. 4x 2
y
′′
− 4x(x + 1)y + (2x + 3)y = x
′ 5/2 x
e ; y1 = √x , y2 = √x e
Q5.7.3
William F. Trench 9/11/2020 5.7E.1 CC-BY-NC-SA https://math.libretexts.org/@go/page/44265
In Exercises 5.7.30–5.7.32 use variation of parameters to solve the initial value problem, given y1 , y2 are solutions of the
complementary equation.
30. (3x − 1)y ′′ ′
− (3x + 2)y − (6x − 8)y = (3x − 1 ) e
2 2x
, y(0) = 1, y (0) = 2
′
;y 1 =e
2x
, y2 = x e
−x
31. (x − 1) 2
y
′′ ′
− 2(x − 1)y + 2y = (x − 1 ) ,
2
y(0) = 3,
′
y (0) = −6 ;
y1 = x − 1 ,y 2 =x
2
−1
32. (x − 1) 2
y
′′
− (x
2 ′
− 1)y + (x + 1)y = (x − 1 ) e ,
3 x
y(0) = 4,
′
y (0) = −6 ;
x
y1 = (x − 1)e , y2 = x − 1
Q5.7.4
In Exercises 5.7.33-5.7.35 use variation of parameters to solve the initial value problem and graph the solution, given that y , 1
33. (x 2
− 1)y
′′ ′
+ 4x y + 2y = 2x,
′
y(0) = 0, y (0) = −2; y1 =
1
x−1
, y2 =
x+1
1
34. x2
y
′′ ′
+ 2x y − 2y = −2 x ,
2
y(1) = 1, y (1) = −1;
′
y1 = x, y2 =
1
x
2
x+1
Q5.7.5
36. Suppose
¯
¯¯
yp = y + a1 y1 + a2 y2 (5.7E.1)
is a particular solution of
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = F (x), (A)
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = 0. (5.7E.2)
37. Suppose p, q, and f are continuous on (a, b) and let x be in (a, b). Let y and y be the solutions of 0 1 2
′′ ′
y + p(x)y + q(x)y = 0 (5.7E.3)
such that
′ ′
y1 (x0 ) = 1, y (x0 ) = 0, y2 (x0 ) = 0, y (x0 ) = 1. (5.7E.4)
1 2
Use variation of parameters to show that the solution of the initial value problem
′′ ′ ′
y + p(x)y + q(x)y = f (x), y(x0 ) = k0 , y (x0 ) = k1 , (5.7E.5)
is
x t (5.7E.6)
+∫ (y1 (t)y2 (x) − y1 (x)y2 (t)) f (t) exp(∫ p(s) ds) dt.
x0 x0
HINT: Use Abel's formula for the Wronskian of {y 1, y2 } , and integrate u and u from x to x. ′
1
′
2 0
x t
(5.7E.7)
′ ′
+∫ (y1 (t)y (x) − y (x)y2 (t)) f (t) exp(∫ p(s) ds) dt.
x0 2 1 x0
38. Suppose f is continuous on an open interval that contains x0 = 0 . Use variation of parameters to find a formula for the
solution of the initial value problem
HINT: You will need the addition formulas for the sine and cosine.
sin(A + B) = sin A cos B + cos A sin B
∞
For the rest of this exercise assume that the improper integral ∫ 0
f (t) dt is absolutely convergent.
b. Show that if y is a solution of
′′
y + y = f (x) (A)
and
′
lim (y (x) + A0 sin x − A1 cos x) = 0, (C)
x→∞
where
∞ ∞
∞ ∞
HINT: Recall from calculus that if ∫ f (t)dt converges absolutely, then lim
0
∫ x→∞
x
|f (t)|dt = 0 .
c. Show that if A and A are arbitrary constants, then there’s a unique solution of y
0 1
′′
+ y = f (x) on (a, ∞) that satisfies
(B) and (C).
1 9/16/2020
6.1: Spring Problems I
We consider the motion of an object of mass m, suspended from a spring of negligible mass. We say that the spring–mass
system is in equilibrium when the object is at rest and the forces acting on it sum to zero. The position of the object in this case
is the equilibrium position. We define y to be the displacement of the object from its equilibrium position (Figure 6.1.1),
measured positive upward.
attached. We assume that the spring obeys Hooke’s law: If the length of the spring is changed by an amount ΔL from its
natural length, then the spring exerts a force F = kΔL , where k is a positive number called the spring constant. If the
s
spring is stretched then ΔL > 0 and F > 0 , so the spring force is upward, while if the spring is compressed then ΔL < 0
s
A damping force F = −cy that resists the motion with a force proportional to the velocity of the object. It may be due to
d
′
air resistance or friction in the spring. However, a convenient way to visualize a damping force is to assume that the object
is rigidly attached to a piston with negligible mass immersed in a cylinder (called a dashpot) filled with a viscous liquid
(Figure 6.1.1). As the piston moves, the liquid exerts a damping force. We say that the motion is undamped if c = 0 , or
damped if c > 0 .
An external force F , other than the force due to gravity, that may vary with t , but is independent of displacement and
velocity. We say that the motion is free if F ≡ 0 , or forced if F ≢ 0 .
We must now relate F to y . In the absence of external forces the object stretches the spring by an amount Δl to assume its
s
equilibrium position (Figure 6.1.3). Since the sum of the forces acting on the object is then zero, Hooke’s Law implies that
mg = kΔl . If the object is displaced y units from its equilibrium position, the total change in the length of the spring is
Figure 6.1.3 :(a) Natural length of spring (b) Spring stretched by mass
Example 6.1.1
An object stretches a spring 6 inches in equilibrium.
a. Set up the equation of motion and find its general solution.
b. Find the displacement of the object for t > 0 if it is initially displaced 18 inches above equilibrium and given a
downward velocity of 3 ft/s.
Solution a:
Setting c = 0 and F =0 in Equation 6.1.2 yields the equation of motion
′′
my + ky = 0,
which we rewrite as
k
′′
y + y = 0. (6.1.3)
m
Although we would need the weight of the object to obtain k from the equation mg = kΔl we can obtain k/m from Δl
alone; thus, k/m = g/Δl . Consistent with the units used in the problem statement, we take g = 32 ft/s . Although Δl is
2
which has the zeros r = ±8i . Therefore the general solution of Equation 6.1.4 is
y = c1 cos 8t + c2 sin 8t. (6.1.5)
Solution b:
The initial upward displacement of 18 inches is positive and must be expressed in feet. The initial downward velocity is
negative; thus,
3 ′
y(0) = and y (0) = −3.
2
Setting t =0 in Equation 6.1.5 and Equation 6.1.6 and imposing the initial conditions shows that c1 = 3/2 and
c2 = −3/8 . Therefore
3 3
y = cos 8t − sin 8t,
2 8
−−−−
where m and k are arbitrary positive numbers. Dividing through by m and defining ω 0 = √k/m yields
′′ 2
y + ω y = 0.
0
2
cos 8t −
3
8
sin 8t
and
Substituting from Equation 6.1.9 into Equation 6.1.7 and applying the identity
yields
From Equation 6.1.8 and Equation 6.1.9 we see that the R and ϕ can be interpreted as polar coordinates of the point with
rectangular coordinates (c , c ) (Figure 6.1.5). Given c and c , we can compute R from Equation 6.1.8. From Equation
1 2 1 2
c1 c2
cos ϕ = −−−−−− and sin ϕ = −−−−−−.
2 2 2 2
√c +c √c +c
1 2 1 2
There are infinitely many angles ϕ , differing by integer multiples of 2π , that satisfy these equations. We will always
choose ϕ so that −π ≤ ϕ < π .
The motion described by Equation 6.1.7 or Equation 6.1.10 is simple harmonic motion. We see from either of these
equations that the motion is periodic, with period
T = 2π/ ω0 .
This is the time required for the object to complete one full cycle of oscillation (for example, to move from its highest
position to its lowest position and back to its highest position). Since the highest and lowest positions of the object are
y = R and y = −R , we say that R is the amplitude of the oscillation. The angle ϕ in Equation 6.1.10 is the phase angle.
It’s measured in radians. Equation Equation 6.1.10 is the amplitude–phase form of the displacement. If t is in seconds
then ω is in radians per second (rad/s); it is the frequency of the motion. It is also called the natural frequency of the
0
Example 6.1.2
We found the displacement of the object in Example 6.1.1 to be
3 3
y = cos 8t − sin 8t.
2 8
Find the frequency, period, amplitude, and phase angle of the motion.
Solution:
The frequency is ω 0 =8 rad/s, and the period is T = 2π/ ω0 = π/4 s. Since c 1 = 3/2 and c 2 = −3/8 , the amplitude is
−−−−−−−−−−−−
2 2
−−−−−− 3 3 3
2 2
−−
R = √c + c = √( ) +( ) = √17.
1 2
2 8 8
2 4
cos ϕ = = −− (6.1.11)
3 −−
√17 √17
8
and
3
− 1
8
sin ϕ = =− −−. (6.1.12)
3 −−
√17 √17
8
Since sin ϕ < 0 (see Equation 6.1.12), the minus sign applies here; that is,
ϕ ≈ −.245 rad.
Example 6.1.3
The natural length of a spring is 1 m. An object is attached to it and the length of the spring increases to 102 cm when the
object is in equilibrium. Then the object is initially displaced downward 1 cm and given an upward velocity of 14 cm/s.
Find the displacement for t > 0 . Also, find the natural frequency, period, amplitude, and phase angle of the resulting
motion. Express the answers in cgs units.
Solution:
In cgs units g = 980 cm/s . Since Δl = 2 cm, ω
2 2
0
= g/Δl = 490 . Therefore
so
′ −− −− −−
y = 7 √10 (−c1 sin 7 √10t + c2 cos 7 √10t) .
−−
Substituting the initial conditions into the last two equations yields c 1 = −1 and c2 = 2/ √10 . Hence,
−− 2 −−
y = − cos 7 √10t + sin 7 √10t.
−−
√10
−− −−
The frequency is 7√10 rad/s, and the period is T = 2π/(7 √10) s. The amplitude is
−−−−−−−−−−−−−−
2 −
−
−−−−−− 2 7
2 2 2
R = √c +c = √ (−1 ) +( ) =√ cm
1 2 −−
√10 5
F (t) = F0 cos ωt
′′
my + ky = F0 cos ωt,
which we rewrite as
′′
F0
2
y +ω y = cos ωt (6.1.13)
0
m
−−−−
with ω = √k/m . We’ll see from the next two examples that the solutions of Equation
0 6.1.13 with ω ≠ ω0 behave very
differently from the solutions with ω = ω . 0
Example 6.1.4
Solve the initial value problem
′′
F0 ′
2
y +ω y = cos ωt, y(0) = 0, y (0) = 0, (6.1.14)
0
m
given that ω ≠ ω . 0
Solution:
Since
′′ 2
yp = −ω (A cos ωt + B sin ωt),
F0
′′ 2
yp + ω yp = cos ωt
0
m
if and only if
2
F0
2
(ω − ω ) (A cos ωt + B sin ωt) = cos ωt.
0
m
so
F0
yp = cos ωt.
2 2
m(ω −ω )
0
so
′
−ωF0
y = sin ωt + ω0 (−c1 sin ω0 t + c2 cos ω0 t).
2 2
m(ω −ω )
0
It is revealing to write this in a different form. We start with the trigonometric identities
cos(α − β) = cos α cos β + sin α sin β
Now let
α − β = ωt and α + β = ω0 t, (6.1.18)
so that
where
2F0 (ω0 − ω)t
R(t) = sin . (6.1.21)
2 2
m(ω −ω ) 2
0
From Equation 6.1.20 we can regard y as a sinusoidal variation with frequency (ω + ω)/2 and variable amplitude 0
|R(t)|. In Figure 6.1.6 the dashed curve above the t axis is y = |R(t)| , the dashed curve below the t axis is y = −|R(t)| ,
and the displacement y appears as an oscillation bounded by them. The oscillation of y for t on an interval between
successive zeros of R(t) is called a beat.
moreover, if ω + ω is sufficiently large compared with ω − ω , then |y| assumes values close to (perhaps equal to) this
0 0
upper bound during each beat. However, the oscillation remains bounded for all t . (This assumes that the spring can
withstand deflections of this size and continue to obey Hooke’s law.) The next example shows that this isn’t so if ω = ω . 0
Example 6.1.5 :
Find the general solution of
′′
F0
2
y +ω y = cos ω0 t. (6.1.22)
0
m
Solution:
We first obtain a particular solution y of Equation 6.1.22. Since cos ω
p 0t is a solution of the complementary equation, the
form for y is
p
Then
′
yp = A cos ω0 t + B sin ω0 t + ω0 t(−A sin ω0 t + B cos ω0 t)
and
′′ 2
yp = 2 ω0 (−A sin ω0 t + B cos ω0 t) − ω t(A cos ω0 t + B sin ω0 t). (6.1.24)
0
F0
−2Aω0 sin ω0 t + 2Bω0 cos ω0 t = cos ω0 t;
m
that is, if
F0
A =0 and B = .
2mω0
Therefore
F0 t
yp = sin ω0 t
2mω0
F0 t F0 t
y = and y =−
2mω0 2mω0
with increasing amplitude that approaches ∞ as t → ∞ . Of course, this means that the spring must eventually fail to
obey Hooke’s law or break.
This phenomenon of unbounded displacements of a spring–mass system in response to a periodic forcing function at its
natural frequency is called resonance. More complicated mechanical structures can also exhibit resonance–like
phenomena. For example, rhythmic oscillations of a suspension bridge by wind forces or of an airplane wing by periodic
vibrations of reciprocating engines can cause damage or even failure if the frequencies of the disturbances are close to
critical frequencies determined by the parameters of the mechanical system in question.
Q6.1.1
1. An object stretches a spring 4 inches in equilibrium. Find and graph its displacement for t > 0 if it is initially displaced 36
inches above equilibrium and given a downward velocity of 2 ft/s.
2. An object stretches a string 1.2 inches in equilibrium. Find its displacement for t >0 if it is initially displaced 3 inches
below equilibrium and given a downward velocity of 2 ft/s.
3. A spring with natural length .5 m has length 50.5 cm with a mass of 2 gm suspended from it. The mass is initially displaced
1.5 cm below equilibrium and released with zero velocity. Find its displacement for t > 0 .
4. An object stretches a spring 6 inches in equilibrium. Find its displacement for t > 0 if it is initially displaced 3 inches above
equilibrium and given a downward velocity of 6 inches/s. Find the frequency, period, amplitude and phase angle of the motion.
5. An object stretches a spring 5 cm in equilibrium. It is initially displaced 10 cm above equilibrium and given an upward
velocity of .25 m/s. Find and graph its displacement for t > 0 . Find the frequency, period, amplitude, and phase angle of the
motion.
6. A 10 kg mass stretches a spring 70 cm in equilibrium. Suppose a 2 kg mass is attached to the spring, initially displaced 25
cm below equilibrium, and given an upward velocity of 2 m/s. Find its displacement for t > 0 . Find the frequency, period,
amplitude, and phase angle of the motion.
7. A weight stretches a spring 1.5 inches in equilibrium. The weight is initially displaced 8 inches above equilibrium and given
a downward velocity of 4 ft/s. Find its displacement for t > 0 .
8. A weight stretches a spring 6 inches in equilibrium. The weight is initially displaced 6 inches above equilibrium and given a
downward velocity of 3 ft/s. Find its displacement for t > 0 .
−−
9. A spring–mass system has natural frequency 7√10 rad/s. The natural length of the spring is .7 m. What is the length of the
spring when the mass is in equilibrium?
10. A 64 lb weight is attached to a spring with constant k = 8 lb/ft and subjected to an external force F (t) = 2 sin t . The
weight is initially displaced 6 inches above equilibrium and given an upward velocity of 2 ft/s. Find its displacement for
t > 0.
11. A unit mass hangs in equilibrium from a spring with constant k = 1/16 . Starting at t =0 , a force F (t) = 3 sin t is
applied to the mass. Find its displacement for t > 0 .
12. A 4 lb weight stretches a spring 1 ft in equilibrium. An external force F (t) = .25 sin 8t lb is applied to the weight, which
is initially displaced 4 inches above equilibrium and given a downward velocity of 1 ft/s. Find and graph its displacement for
t > 0.
13. A 2 lb weight stretches a spring 6 inches in equilibrium. An external force F (t) = sin 8t lb is applied to the weight, which
is released from rest 2 inches below equilibrium. Find its displacement for t > 0 .
14. A 10 gm mass suspended on a spring moves in simple harmonic motion with period 4 s. Find the period of the simple
harmonic motion of a 20 gm mass suspended from the same spring.
15. A 6 lb weight stretches a spring 6 inches in equilibrium. Suppose an external force F (t) = sin ωt + cos ωt lb is
3
16
3
applied to the weight. For what value of ω will the displacement be unbounded? Find the displacement if ω has this value.
Assume that the motion starts from equilibrium with zero initial velocity.
16. A 6 lb weight stretches a spring 4 inches in equilibrium. Suppose an external force F (t) = 4 sin ωt − 6 cos ωt lb is
applied to the weight. For what value of ω will the displacement be unbounded? Find and graph the displacement if ω has this
value. Assume that the motion starts from equilibrium with zero initial velocity.
17. A mass of one kg is attached to a spring with constant k = 4 N/m. An external force F (t) = − cos ωt − 2 sin ωt n is
applied to the mass. Find the displacement y for t > 0 if ω equals the natural frequency of the spring–mass system. Assume
t > 0 . Also, find the amplitude of the oscillation and give formulas for the sine and cosine of the initial phase angle.
19. Two objects suspended from identical springs are set into motion. The period of one object is twice the period of the other.
How are the weights of the two objects related?
20. Two objects suspended from identical springs are set into motion. The weight of one object is twice the weight of the other.
How are the periods of the resulting motions related?
21. Two identical objects suspended from different springs are set into motion. The period of one motion is 3 times the period
of the other. How are the two spring constants related?
Now suppose the object is displaced from equilibrium and given an initial velocity. Intuition suggests that if the damping force
is sufficiently weak the resulting motion will be oscillatory, as in the undamped case considered in the previous section, while
if it is sufficiently strong the object may just move slowly toward the equilibrium position without ever reaching it. We’ll now
confirm these intuitive ideas mathematically. The characteristic equation of Equation 6.2.1 is
2
mr + cr + k = 0.
We saw in Section 5.2 that the form of the solution of Equation 6.2.1 depends upon whether c 2
− 4mk is positive, negative, or
zero. We’ll now consider these three cases.
Underdamped Motion
−−−−
We say the motion is underdamped if c < √4mk . In this case r and r in Equation 6.2.2 are complex conjugates, which we
1 2
write as
c c
r1 = − − i ω1 and r2 = − + i ω1 ,
2m 2m
where
− −−−−−−−
√ 4mk − c2
ω1 = .
2m
By the method used in Section 6.1 to derive the amplitude–phase form of the displacement of an object in simple harmonic
motion, we can rewrite this equation as
−ct/2m
y = Re cos(ω1 t − ϕ), (6.2.3)
The factor Re in Equation 6.2.3 is called the time–varying amplitude of the motion, the quantity ω is called the
−ct/2m
1
frequency, and T = 2π/ω (which is the period of the cosine function in Equation 6.2.3 is called the quasi–period. A typical
1
graph of Equation 6.2.3 is shown in Figure 6.2.1. As illustrated in that figure, the graph of y oscillates between the dashed
exponential curves y = ±Re . −ct/2m
Overdamped Motion
−−−−
We say the motion is overdamped if c > √4mk . In this case the zeros r and r of the characteristic polynomial are real, with
1 2
r1 t r2 t
y = c1 e + c2 e .
Again lim t→∞y(t) = 0 as in the underdamped case, but the motion isn’t oscillatory, since y can’t equal zero for more than
one value of t unless c = c = 0 . (Exercise 6.2.23.)
1 2
−ct/2m
y =e (c1 + c2 t).
Again limt→∞ y(t) = 0 and the motion is nonoscillatory, since y can’t equal zero for more than one value of t unless
c1 = c2 = 0 . (Exercise 6.2.22).
Example 6.2.1
Suppose a 64 lb weight stretches a spring 6 inches in equilibrium and a dashpot provides a damping force of c lb for each
ft/sec of velocity.
a. Write the equation of motion of the object and determine the value of c for which the motion is critically damped.
b. Find the displacement y for t > 0 if the motion is critically damped and the initial conditions are y(0) = 1 and
y (0) = 20 .
′
c. Find the displacement y for t > 0 if the motion is critically damped and the initial conditions are y(0) = 1 and
y (0) = −20 .
′
Solution a
Solution b
Setting c = 32 in Equation 6.2.4 and cancelling the common factor 2 yields
′′
y + 16y + 64y = 0.
Imposing the initial conditions y(0) = 1 and y (0) = 20 in the last two equations shows that
6.2.2
′
1 = c1 and
20 = −8 + c . Hence, the solution of the initial value problem is
2
−8t
y =e (1 + 28t).
Therefore the object moves downward through equilibrium just once, and then approaches equilibrium from below as
t → ∞ . Again, there’s no oscillation. The solutions of these two initial value problems are graphed in Figure 6.2.2.
Example 6.2.2
Find the displacement of the object in Example 6.2.1 if the damping constant is c = 4 lb–sec/ft and the initial conditions
are y(0) = 1.5 ft and y (0) = −3 ft/sec.
′
Solution
With c = 4 , the equation of motion Equation 6.2.4 becomes
′′ ′
y + 2 y + 64y = 0 (6.2.7)
Therefore the motion is underdamped and the general solution of Equation 6.2.7is
−t – –
y =e (c1 cos 3 √7t + c2 sin 3 √7t).
Imposing the initial conditions y(0) = 1.5 and y (0) = −3 in the last two equations yields
′
1.5 = c1 and
–
−3 = −1.5 + 3 √7c . Hence, the solution of the initial value problem is
2
−t
3 – 1 –
y =e ( cos 3 √7t − – sin 3 √7t) . (6.2.8)
2 2 √7
where
–
3 3 √7 1 1
cos ϕ = = and sin ϕ = − – =− .
2R 8 2 √7R 8
Example 6.2.3
Let the damping constant in Example 6.2.1 be c = 40 lb–sec/ft. Find the displacement y for t >0 if y(0) = 1 and
y (0) = 1 .
′
Solution
With c = 40 , the equation of motion Equation 6.2.4 reduces to
Figure 6.2.3 : y = 17
12
e
−4t
−
5
12
−16t
e
The last two equations and the initial conditions y(0) = 1 and y ′
(0) = 1 imply that
c1 + c2 =1
−4c1 − 16c2 = 1.
′′ ′
my + c y + ky = F0 cos ωt. (6.2.11)
In Section 6.1 we considered this equation with c = 0 and found that the resulting displacement y assumed arbitrarily large
−−−−
values in the case of resonance (that is, when ω = ω = √k/m ). Here we’ll see that in the presence of damping the
0
displacement remains bounded for all t , and the initial conditions have little effect on the motion as t → ∞ . In fact, we’ll see
that for large t the displacement is closely approximated by a function of the form
y = R cos(ωt − ϕ), (6.2.12)
where the amplitude R depends upon m, c , k , F , and ω. We’re interested in the following question:
0
QUESTION
To answer this question, we must solve Equation 6.2.11 and determine R in terms of F , ω 0 0, , and c . We can obtain a
ω
particular solution of Equation 6.2.11 by the method of undetermined coefficients. Since cos ωt does not satisfy the
complementary equation
′′ ′
my + c y + ky = 0,
and
′′ 2
yp = −ω (A cos ωt + B sin ωt).
2
(k − m ω )A + cωB = F0
2
−cωA + (k − m ω )B = 0.
Solving for A and B and substituting the results into Equation 6.2.13 yields
F0 2
yp = [(k − m ω ) cos ωt + cω sin ωt] ,
2 2 2 2
(k − m ω ) +c ω
(6.2.14)
where
2
k − mω cω
cos ϕ = − −−−−−−−−−−−−− − and sin ϕ = − −−−−−−−−−−−−− −. (6.2.15)
√ (k − m ω2 )2 + c2 ω2 √ (k − m ω2 )2 + c2 ω2
To compare this with the undamped forced vibration that we considered in Section 6.1 it is useful to write
k
2 2 2 2
k − mω =m( − ω ) = m(ω − ω ), (6.2.16)
0
m
−−−−
where ω = √k/m is the natural angular frequency of the undamped simple harmonic motion of an object with mass m on a
0
spring with constant k . Substituting Equation 6.2.16 into Equation 6.2.14 yields
F0
yp = cos(ωt − ϕ). (6.2.17)
−−−−−−−−−−−−−−−−
2 2 2 2 2 2
√ m (ω −ω ) +c ω
0
−ct/2m
yc = e (c1 + c2 t),
r1 t r2 t
yc = c1 e + c2 e (r1 , r2 < 0).
In all three cases lim t→∞y (t) = 0 for any choice of c and c . For this reason we say that y is the transient component of
c 1 2 c
the solution y . The behavior of y for large t is determined by y , which we call the steady state component of y . Thus, for
p
large t the motion is like simple harmonic motion at the frequency of the external force.
The amplitude of y in Equation 6.2.17 is
p
F0
R = −−−−−−−−−−−−−−−−, (6.2.18)
2 2 2 2 2 2
√ m (ω −ω ) +c ω
0
which is finite for all ω; that is, the presence of damping precludes the phenomenon of resonance that we encountered in
studying undamped vibrations under a periodic forcing function. We’ll now find the value ω of ω for which R is max
in the denominator of Equation 6.2.18 attains its minimum value. By rewriting this as
2 4 4 2 2 2 2
ρ(ω) = m (ω + ω ) + (c − 2 m ω )ω , (6.2.19)
0 0
−−−−−−
2 2
−−−−
c ≥ √2m ω = √2mk.
0
which is consistent with Hooke’s law: if the mass is subjected to a constant force F , its displacement should approach a 0
−−−−
constant y such that ky = F . Now suppose c < √2mk . Then, from Equation 6.2.19,
p p 0
′ 2 2 2 2 2
ρ (ω) = 2ω(2 m ω +c − 2 m ω ),
0
and ω max is the value of ω for which the expression in parentheses equals zero; that is,
−−−−−−−− −−−−−−−−−−−−−
2 2
c k c
2
ωmax = √ ω − =√ (1 − ).
0 2
2m m 2km
(To see that ρ(ω max ) is the minimum value of ρ(ω), note that ρ (ω) < 0 if ω < ω and ρ (ω) > 0 if′
max
′
ω > ωmax .)
Substituting ω = ω max in Equation 6.2.18 and simplifying shows that the maximum amplitude R is max
2mF0 −−−−
Rmax = if c < √2mk.
− −−−−−−−
c √ 4mk − c2
Theorem 6.2.1
Suppose we consider the amplitude R of the steady state component of the solution of
′′ ′
my + c y + ky = F0 cos ωt
as a function of ω.
−−−−
a. If c ≥ √2mk , the maximum amplitude is R max = F0 /k and it is attained when ω = ω max =0 .
4. A 96 lb weight stretches a spring 3.2 ft in equilibrium. It is attached to a dashpot with damping constant c =18 lb-sec/ft. The
weight is initially displaced 15 inches below equilibrium and given a downward velocity of 12 ft/sec. Find its displacement for
t > 0.
5. A 16 lb weight stretches a spring 6 inches in equilibrium. It is attached to a damping mechanism with constant c . Find all
values of c such that the free vibration of the weight has infinitely many oscillations.
6. An 8 lb weight stretches a spring .32 ft. The weight is initially displaced 6 inches above equilibrium and given an upward
velocity of 4 ft/sec. Find its displacement for t > 0 if the medium exerts a damping force of 1.5 lb for each ft/sec of velocity.
7. A 32 lb weight stretches a spring 2 ft in equilibrium. It is attached to a dashpot with constant c = 8 lb-sec/ft. The weight is
initially displaced 8 inches below equilibrium and released from rest. Find its displacement for t > 0 .
8. A mass of 20 gm stretches a spring 5 cm. The spring is attached to a dashpot with damping constant 400 dyne sec/cm.
Determine the displacement for t > 0 if the mass is initially displaced 9 cm above equilibrium and released from rest.
9. A 64 lb weight is suspended from a spring with constant k = 25 lb/ft. It is initially displaced 18 inches above equilibrium
and released from rest. Find its displacement for t > 0 if the medium resists the motion with 6 lb of force for each ft/sec of
velocity.
10. A 32 lb weight stretches a spring 1 ft in equilibrium. The weight is initially displaced 6 inches above equilibrium and
given a downward velocity of 3 ft/sec. Find its displacement for t > 0 if the medium resists the motion with a force equal to 3
times the speed in ft/sec.
11. An 8 lb weight stretches a spring 2 inches. It is attached to a dashpot with damping constant c = 4 lb-sec/ft. The weight is
initially displaced 3 inches above equilibrium and given a downward velocity of 4 ft/sec. Find its displacement for t > 0 .
12. A 2 lb weight stretches a spring .32 ft. The weight is initially displaced 4 inches below equilibrium and given an upward
velocity of 5 ft/sec. The medium provides damping with constant c = 1/8 lb-sec/ft. Find and graph the displacement for
t > 0.
13. An 8 lb weight stretches a spring 8 inches in equilibrium. It is attached to a dashpot with damping constant c = .5 lb-sec/ft
and subjected to an external force F (t) = 4 cos 2t lb. Determine the steady state component of the displacement for t > 0 .
14. A 32 lb weight stretches a spring 1 ft in equilibrium. It is attached to a dashpot with constant c = 12 lb-sec/ft. The weight
is initially displaced 8 inches above equilibrium and released from rest. Find its displacement for t > 0 .
15. A mass of one kg stretches a spring 49 cm in equilibrium. A dashpot attached to the spring supplies a damping force of 4
N for each m/sec of speed. The mass is initially displaced 10 cm above equilibrium and given a downward velocity of 1 m/sec.
Find its displacement for t > 0 .
16. A mass of 100 grams stretches a spring 98 cm in equilibrium. A dashpot attached to the spring supplies a damping force of
600 dynes for each cm/sec of speed. The mass is initially displaced 10 cm above equilibrium and given a downward velocity
21. A mass m is suspended from a spring with constant k and subjected to an external force F (t) = α cos ω t + β sin ω t , 0 0
where ω is the natural frequency of the spring–mass system without damping. Find the steady state component of the
0
r1 t
y =e (c1 + c2 t) (6.2E.1)
r1 t r2 t
y = c1 e + c2 e (6.2E.2)
given that the motion is underdamped, so the general solution of the equation is
−ct/2m
y =e (c1 cos ω1 t + c2 sin ω1 t). (6.2E.4)
given that the motion is overdamped, so the general solution of the equation is
r1 t r2 t
y = c1 e + c2 e (r1 , r2 < 0). (6.2E.6)
given that the motion is critically damped, so that the general solution of the equation is of the form
r1 t
y =e (c1 + c2 t) (r1 < 0). (6.2E.8)
1 9/16/2020
7.1: Prelude to Series Solutions of Linear Second Order Equations
In this Chapter, we study a class of second order differential equations that occur in many applications, but cannot be solved in
closed form in terms of elementary functions. Here are some examples:
1. Bessel’s equation which occurs in problems displaying cylindrical symmetry, such as diffraction of light through a circular
aperture, propagation of electromagnetic radiation through a coaxial cable, and vibrations of a circular drum head.
2 ′′ ′ 2 2
x y + x y + (x −ν )y = 0,
3. Legendre’s equation which occurs in problems displaying spherical symmetry, particularly in electromagnetism.
2 ′′ ′
(1 − x )y − 2x y + α(α + 1)y = 0,
These equations and others considered in this chapter can be written in the form
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = 0, (7.1.1)
where P , P , and P are polynomials with no common factor. For most equations that occur in applications, these
0 1 2
polynomials are of degree two or less. We’ll impose this restriction, although the methods that we’ll develop can be extended
to the case where the coefficient functions are polynomials of arbitrary degree, or even power series that converge in some
circle around the origin in the complex plane.
Since Equation 7.1.1 does not in general have closed form solutions, we seek series representations for solutions. We’ll see
that if P (0) ≠ 0 then solutions of (A) can be written as power series
0
∞
n
y = ∑ an x
n=0
Definition 7.2.1
An infinite series of the form
∞
n
∑ an (x − x0 ) , (7.2.1)
n=0
N
n
lim ∑ an (x − x0 )
N →∞
n=0
exists; otherwise, we say that the power series diverges for the given x.
A power series in x − x must converge if x = x , since the positive powers of x − x are all zero in this case. This may be
0 0 0
the only value of x for which the power series converges. However, the next theorem shows that if the power series converges
for some x ≠ x then the set of all values of x for which it converges forms an interval.
0
Theorem 7.2.2
For any power series
∞
n
∑ an (x − x0 ) ,
n=0
In case (iii) we say that R is the radius of convergence of the power series. For convenience, we include the other two cases in
this definition by defining R = 0 in case (i) and R = ∞ in case (ii). We define the open interval of convergence of
∑
∞
n=0
a (x − x )
n 0 to be
n
If R is finite, no general statement can be made concerning convergence at the endpoints x = x 0 ±R of the open interval of
convergence; the series may converge at one or both points, or diverge at both.
Recall from calculus that a series of constants ∑ α is said to converge absolutely if the series of absolute values
∞
n=0 n
| α | converges. It can be shown that a power series ∑ with a positive radius of convergence R
∞ ∞ n
∑ n a (x − x ) n 0
n=0 n=0
converges absolutely in its open interval of convergence; that is, the series
∞
n
∑ | an ||x − x0 |
n=0
The next theorem provides a useful method for determining the radius of convergence of a power series. It’s derived in
calculus by applying the ratio test to the corresponding series of absolute values. For related theorems see Exercises 7.2.2 and
7.2.4.
Theorem 7.2.3
Suppose there’s an integer N such that a n ≠0 if n ≥ N and
∣ an+1 ∣
lim ∣ ∣ = L,
n→∞ ∣ an ∣
n=0
an (x − x0 )
n
is R = 1/L, which should be interpreted to
mean that R = 0 if L = ∞, or R = ∞ if L = 0 .
Example 7.2.1
Find the radius of convergence of the series:
a. ∞
n
∑ n! x
n=0
b. ∞
n
x
n
∑ (−1 )
n!
n=10
c. ∞
n 2 n
∑ 2 n (x − 1 ) .
n=0
Solution a
Here a n = n! , so
∣ an+1 ∣ (n + 1)!
lim ∣ ∣ = lim = lim (n + 1) = ∞.
n→∞ ∣ an ∣ n→∞ n! n→∞
Hence, R = 0 .
Solution b
Here a n
n
= (1 ) /n! for n ≥ N = 10 , so
∣ an+1 ∣ n! 1
lim ∣ ∣ = lim = lim = 0.
n→∞ ∣ an ∣ n→∞ (n + 1)! n→∞ n+1
Hence, R = ∞ .
Solution c
Here a n
n
=2 n
2
, so
n+1 2 2
∣ an+1 ∣ 2 (n + 1 ) 1
lim ∣ ∣ = lim = 2 lim (1 + ) = 2.
n
n→∞ ∣ an ∣ n→∞ 2 n2 n→∞ n
Hence, R = 1/2 .
Taylor Series
If a function f has derivatives of all orders at a point x = x , then the Taylor series of f about x is defined by
0 0
∞ 2n+1
n
x
sin x = ∑(−1 ) , −∞ < x < ∞ (7.2.3)
(2n + 1)!
n=0
∞ 2n
n
x
cos x = ∑(−1 ) −∞ < x < ∞ (7.2.4)
(2n)!
n=0
∞
1 n
= ∑x −1 < x < 1 (7.2.5)
1 −x
n=0
n
f (x) = ∑ an (x − x0 )
n=0
on its open interval of convergence. We say that the series represents f on the open interval of convergence. A function f
represented by a power series may be a familiar elementary function as in Equations 7.2.2 - 7.2.5; however, it often happens
that f isn’t a familiar function, so the series actually defines f .
The next theorem shows that a function represented by a power series has derivatives of all orders on the open interval of
convergence of the power series, and provides power series representations of the derivatives.
n=0
with positive radius of convergence R has derivatives of all orders in its open interval of convergence, and successive
derivatives can be obtained by repeatedly differentiating term by term; that is,
∞
′ n−1
f (x) = ∑ nan (x − x0 ) , (7.2.6)
n=1
∞
′′ n−2
f (x) = ∑ n(n − 1)an (x − x0 ) , (7.2.7)
n=2
⋮
∞
(k) n−k
f (x) = ∑ n(n − 1) ⋯ (n − k + 1)an (x − x0 ) . (7.2.8)
n=k
Example 7.2.2
Theorem 7.2.5
If the power series
∞
n
f (x) = ∑ an (x − x0 )
n=0
that is, ∑ ∞
n=0
n
an (x − x0 ) is the Taylor series of f about x . 0
(k)
f (x0 ) = k(k − 1) ⋯ 1 ⋅ ak = k! ak .
Theorem 7.2.6
a. If
∞ ∞
n n
∑ an (x − x0 ) = ∑ bn (x − x0 )
n=0 n=0
n
∑ an (x − x0 ) =0
n=0
Taylor Polynomials
If f has N derivatives at a point x , we say that
0
N (n)
f (x0 )
n
TN (x) = ∑ (x − x0 )
n!
n=0
is the N -th Taylor polynomial of f about x . This definition and Theorem 7.2.4 imply that if
0
n
f (x) = ∑ an (x − x0 ) ,
n=0
where the power series has a positive radius of convergence, then the Taylor polynomials of f about x are given by 0
n
TN (x) = ∑ an (x − x0 ) .
n=0
In numerical applications, we use the Taylor polynomials to approximate f on subintervals of the open interval of convergence
of the power series. For example, Equation 7.2.2 implies that the Taylor polynomial T of f (x) = e is N
x
N n
x
TN (x) = ∑ .
n!
n=0
The solid curve in Figure 7.2.1 is the graph of y = e on the interval [0, 5]. The dotted curves in Figure 7.2.1 are the graphs of
x
the Taylor polynomials T , …, T of y = e about x = 0 . From this figure, we conclude that the accuracy of the
1 6
x
0
7.2.6, Equation 7.2.7, and Equation 7.2.8, where the general terms are constant multiples of (x − x ) , (x − x ) , and n−1 n−2
0 0
(x − x )
0 , respectively. However, these series can all be rewritten so that their n -th terms are constant multiples of
n−k
′ k
f (x) = ∑(k + 1)ak+1 (x − x0 ) , (7.2.10)
k=0
where we start the new summation index k from zero so that the first term in Equation 7.2.10 (obtained by setting k = 0 ) is
the same as the first term in Equation 7.2.6 (obtained by setting n = 1 ). However, the sum of a series is independent of the
symbol used to denote the summation index, just as the value of a definite integral is independent of the symbol used to denote
the variable of integration. Therefore we can replace k by n in Equation 7.2.10 to obtain
∞
′ n
f (x) = ∑(n + 1)an+1 (x − x0 ) , (7.2.11)
n=0
n=n0
can be rewritten as
∞
n
∑ bn+k (x − x0 )
n=n0 −k
that is, replacing n by n+k in the general term and subtracting k from the lower limit of summation leaves the series
unchanged.
Example 7.2.3
Rewrite the following power series from Equation 7.2.7 and Equation 7.2.8 so that the general term in each is a constant
multiple of (x − x ) :0
n
∞ ∞
n−2 n−k
(a) ∑ n(n − 1)an (x − x0 ) (b) ∑ n(n − 1) ⋯ (n − k + 1)an (x − x0 ) .
n=2 n=k
Solution a
Replacing n by n + 2 in the general term and subtracting 2 from the lower limit of summation yields
∞ ∞
n−2 n
∑ n(n − 1)an (x − x0 ) = ∑(n + 2)(n + 1)an+2 (x − x0 ) .
n=2 n=0
Solution b
Replacing n by n + k in the general term and subtracting k from the lower limit of summation yields
∞ ∞
n−k n
∑ n(n − 1) ⋯ (n − k + 1)an (x − x0 ) = ∑(n + k)(n + k − 1) ⋯ (n + 1)an+k (x − x0 ) .
n=k n=0
Example 7.2.4
Given that
n
f (x) = ∑ an x ,
n=0
write the function xf as a power series in which the general term is a constant multiple of x .
′′ n
Solution
From Theorem 7.2.4 with x 0 =0 ,
∞
′′ n−2
f (x) = ∑ n(n − 1)an x .
n=2
Therefore
∞
′′ n−1
xf (x) = ∑ n(n − 1)an x .
n=2
Replacing n by n + 1 in the general term and subtracting 1 from the lower limit of summation yields
∞
′′ n
xf (x) = ∑(n + 1)nan+1 x .
n=1
′′ n
xf (x) = ∑(n + 1)nan+1 x ,
n=0
since the first term in this last series is zero. (We’ll see later that sometimes it is useful to include zero terms at the
beginning of a series.)
n=0 n=0
n n
f (x) = ∑ an (x − x0 ) and g(x) = ∑ bn (x − x0 )
n=0 n=0
with positive radii of convergence can be added term by term at points common to their open intervals of convergence; thus, if
the first series converges for |x − x | < R and the second converges for |x − x | < R , then
0 1 0 2
n
f (x) + g(x) = ∑(an + bn )(x − x0 )
n=0
for |x − x | < R , where R is the smaller of R and R . More generally, linear combinations of power series can be formed
0 1 2
n
c1 f (x) + c2 f (x) = ∑(c1 an + c2 bn )(x − x0 ) .
n=0
Example 7.2.5
Find the Maclaurin series for cosh x as a linear combination of the Maclaurin series for e and e x −x
.
Solution
Since
∞ n ∞ n
x
x −x n
x
e =∑ and e = ∑(−1 ) ,
n! n!
n=0 n=0
it follows that
∞ n
1 x
n
cosh x = ∑ [1 + (−1 ) ] . (7.2.12)
2 n!
n=0
Since
1 1 if n = 2m, an even integer,
n
[1 + (−1 ) ] = {
2 0 if n = 2m + 1, an odd integer,
This result is valid on (−∞, ∞), since this is the open interval of convergence of the Maclaurin series for e and e x −x
.
Example 7.2.6
Suppose
∞
n
y = ∑ an x
n=0
as a power series in x on I .
b. Use the result of (a) to find necessary and sufficient conditions on the coefficients { an } for y to be a solution of the
homogeneous equation
′′
(2 − x)y + 2y = 0 (7.2.13)
on I .
Solution a
From Equation 7.2.7 with x 0 =0 ,
∞
′′ n−2
y = ∑ n(n − 1)an x .
n=2
Therefore
′′ ′′ ′
(2 − x)y + 2y = 2y − x y + 2y
∞ ∞ ∞
(7.2.14)
n−2 n−1 n
= ∑n=2 2n(n − 1)an x − ∑n=2 n(n − 1)an x + ∑n=0 2 an x .
To combine the three series we shift indices in the first two to make their general terms constant multiples of x ; thus, n
n−2 n
∑ 2n(n − 1)an x = ∑ 2(n + 2)(n + 1)an+2 x (7.2.15)
n=2 n=0
and
∞ ∞ ∞
n−1 n n
∑ n(n − 1)an x = ∑(n + 1)nan+1 x = ∑(n + 1)nan+1 x , (7.2.16)
where we added a zero term in the last series so that when we substitute from Equation 7.2.15 and Equation 7.2.16 into
Equation 7.2.14 all three series will start with n = 0 ; thus,
∞
′′ n
(2 − x)y + 2y = ∑[2(n + 2)(n + 1)an+2 − (n + 1)nan+1 + 2 an ] x . (7.2.17)
n=0
Solution b
From Equation 7.2.17 we see that y satisfies Equation 7.2.13 on I if
n=0
an x
n
satisfies Equation 7.2.13 on I , then Equation 7.2.18 holds.
Example 7.2.7
Suppose
∞
n
y = ∑ an (x − 1 )
n=0
as a power series in x − 1 on I .
Solution
Since we want a power series in x −1 , we rewrite the coefficient of y
′′
in Equation 7.2.19 as 1 + x = 2 + (x − 1) , so
Equation 7.2.19 becomes
′′ ′′ 2 ′
2y + (x − 1)y + 2(x − 1 ) y + 3y.
n=1 n=2
Therefore
∞
′′ n−2
2y = ∑ 2n(n − 1)an (x − 1 ) ,
n=2
′′ n−1
(x − 1)y = ∑ n(n − 1)an (x − 1 ) ,
n=2
2 ′ n+1
2(x − 1) y = ∑ 2nan (x − 1 ) ,
n=1
n
3y = ∑ 3 an (x − 1 ) .
n=0
′′ n
2y = ∑ 2(n + 2)(n + 1)an+2 (x − 1 ) , (7.2.20)
n=0
′′ n
(x − 1)y = ∑(n + 1)nan+1 (x − 1 ) , (7.2.21)
n=0
2 ′ n
2(x − 1) y = ∑ 2(n − 1)an−1 (x − 1 ) , (7.2.22)
n=1
n
3y = ∑ 3 an (x − 1 ) , (7.2.23)
n=0
where we added initial zero terms to the series in Equation 7.2.21 and Equation 7.2.22 . Adding Equation 7.2.20 –
Equation 7.2.23 yields
′′ 2 ′ ′′ ′′ 2 ′
(1 + x)y + 2(x − 1 ) y + 3y = 2y + (x − 1)y + 2(x − 1 ) y + 3y
∞
n
= ∑ bn (x − 1 ) ,
n=0
where
b0 = 4 a2 + 3 a0 , (7.2.24)
The formula Equation 7.2.24 for b can’t be obtained by setting n = 0 in Equation 7.2.25, since the summation in
0
Equation 7.2.22 begins with n = 1 , while those in Equation 7.2.20, Equation 7.2.21, and Equation 7.2.23 begin with
n = 0.
2 n
n=0
∞
b. ∑ 2 n
n(x − 2 )
n
n=0
∞
n!
c. ∑
n
x
n
9
n=0
∞
n(n + 1)
d. ∑ n
(x − 2 )
n
16
n=0
∞ n
7
e. ∑(−1 )
n
x
n
n!
n=0
∞ n
3
f. ∑ n+1
(x + 7 )
n
2
n=0
4 (n + 1 )
2m
∑ bm (x − x0 )
m=0
−
−
is R = 1/ √L , which is interpreted to mean that R = 0 if L = ∞ or R = ∞ if L = 0 .
3. For each power series, use the result of Exercise 7.1.2 to find the radius of convergence R . If R > 0 , find the open interval
of convergence.
∞
a. ∑ (−1 )
m
(3m + 1)(x − 1 )
2m+1
m=0
∞
m(2m + 1)
b. ∑ (−1) m
m
(x + 2 )
2m
2
m=0
∞
m!
c. ∑ (x − 1 )
2m
(2m)!
m=0
∞
m!
d. ∑ (−1) m
m
(x + 8 )
2m
9
m=0
∞
(2m − 1)
e. ∑ (−1 )
m
m
x
2m+1
3
m=0
∞
f. ∑ (x − 1) 2m
m=0
km
∑ bm (x − x0 )
m=0
−
−
is R = 1/ √L
k
, which is interpreted to mean that R = 0 if L = ∞ or R = ∞ if L = 0 .
5. For each power series use the result of Exercise 7.1.4 to find the radius of convergence R . If R > 0 , find the open interval
of convergence.
∞ m
(−1)
a. ∑ (x − 3 )
3m+2
(27)m
m=0
∞ 7m+6
x
b. ∑
m
m=0
∞ m
9 (m + 1)
c. ∑ (x − 3 )
4m+2
(m + 2)
m=0
∞ m
2
d. ∑ (−1) m
x
4m+3
m!
m=0
∞
m!
e. ∑ (x + 1 )
4m+3
(26)m
m=0
∞ m
(−1)
f. ∑ m
(x − 1 )
3m+1
8 m(m + 1)
m=0
on the interval (−2π, 2π) for M =1 , 2, 3, …, until you find a value of M for which there’s no perceptible difference between
the two graphs.
7. Graph y = cos x and the Taylor polynomial
M n 2n
(−1) x
T2M (x) = ∑
(2n)!
n=0
on the interval (−2π, 2π) for M =1 , 2, 3, …, until you find a value of M for which there’s no perceptible difference between
the two graphs.
8. Graph y = 1/(1 − x) and the Taylor polynomial
N
n
TN (x) = ∑ x
n=0
on the interval [0, .95] for N = 1 , 2, 3, …, until you find a value of N for which there’s no perceptible difference between the
two graphs. Choose the scale on the y -axis so that 0 ≤ y ≤ 20 .
9. Graph y = cosh x and the Taylor polynomial
M 2n
x
T2M (x) = ∑
(2n)!
n=0
on the interval (−5, 5) for M = 1 , 2, 3, …, until you find a value of M for which there’s no perceptible difference between
the two graphs. Choose the scale on the y -axis so that 0 ≤ y ≤ 75 .
10. Graph y = sinh x and the Taylor polynomial
M 2n+1
x
T2M+1 (x) = ∑
(2n + 1)!
n=0
Q7.1.2
In Exercises 7.1.11-7.1.15 find a power series solution \(y(x)=\sum_{n=0}^{\infty} a_{n}x^{n}\].
11. (2 + x)y ′′
+ x y + 3y
′
12. (1 + 3x 2
)y
′′ 2
+ 3 x y − 2y
′
13. (1 + 2x 2
)y
′′
+ (2 − 3x)y + 4y
′
14. (1 + x 2
)y
′′
+ (2 − x)y + 3y
′
15. (1 + 3x 2
)y
′′ ′
− 2x y + 4y
Q7.1.3
∞
′′ ′
xy + (4 + 2x)y + (2 + x)y.
2 ′′ ′
x y + 2x y − 3xy.
18. Do the following experiment for various choices of real numbers a and a . 0 1
numerically on (−1.95, 1.95). Choose the most accurate method your software package provides. (See Section 10.1 for a
brief discussion of one such method.)
b. For N = 2 , 3, 4, …, compute a , …, a from Equation 7.1.18 and graph
2 N
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on the same axes. Continue increasing N until it is obvious that there’s no point in
continuing. (This sounds vague, but you’ll know when to stop.)
19. Follow the directions of Exercise 7.1.18 for the initial value problem
′′ 2 ′ ′
(1 + x)y + 2(x − 1 ) y + 3y = 0, y(1) = a0 , y (1) = a1 ,
on the interval (0, 2). Use Equations 7.1.24 and 7.1.25 to compute {a n} .
∞
∞ ∞
r n n+r
y(x) = x ∑ an x = ∑ an x
n=0 n=0
on (0, R). Use Theorem 7.1.4 and the rule for differentiating the product of two functions to show that
′ n+r−1
y (x) = ∑(n + r)an x ,
n=0
′′ n+r−2
y (x) = ∑(n + r)(n + r − 1)an x ,
n=0
⋮
∞
(k) n+r−k
y (x) = ∑(n + r)(n + r − 1) ⋯ (n + r − k)an x
n=0
on (0, R)
Q7.1.4
21. x2
(1 − x)y
′′ ′
+ x(4 + x)y + (2 − x)y
22. x2
(1 + x)y
′′ ′
+ x(1 + 2x)y − (4 + 6x)y
23. x2
(1 + x)y
′′ 2 ′
− x(1 − 6x − x )y + (1 + 6x + x )y
2
24. x2
(1 + 3x)y
′′ 2
+ x(2 + 12x + x )y + 2x(3 + x)y
′
25. x2 2
(1 + 2 x )y
′′ 2 ′
+ x(4 + 2 x )y + 2(1 − x )y
2
26. x2 2
(2 + x )y
′′ 2 ′
+ 2x(5 + x )y + 2(3 − x )y
2
where P , P , and P are polynomials. Usually the solutions of these equations can’t be expressed in terms of familiar
0 1 2
elementary functions. Therefore we’ll consider the problem of representing solutions of Equation 7.3.1 with series.
We assume throughout that P , P and P have no common factors. Then we say that
0 1 2 x0 is an ordinary point of Equation
7.3.1 if P (x ) ≠ 0 , or a singular point if P (x ) = 0 . For Legendre’s equation,
0 0 0 0
2 ′′ ′
(1 − x )y − 2x y + α(α + 1)y = 0, (7.3.2)
x0 = 1 and x 0 = −1 are singular points and all other points are ordinary points. For Bessel’s equation,
2 ′′ ′ 2 2
x y + x y + (x −ν )y = 0,
x0 = 0 is a singular point and all other points are ordinary points. If P is a nonzero constant as in Airy’s equation, 0
′′
y − xy = 0, (7.3.3)
Therefore, if x is an ordinary point of Equation 7.3.1 and a and a are arbitrary real numbers, then the initial value problem
0 0 1
′′ ′ ′
P0 (x)y + P1 (x)y + P2 (x)y = 0, y(x0 ) = a0 , y (x0 ) = a1 (7.3.4)
has a unique solution on the largest open interval that contains x and does not contain any zeros of P . To see this, we rewrite
0 0
and apply Theorem 5.1.1 with p = P /P and q = P /P . In this section and the next we consider the problem of
1 0 2 0
representing solutions of Equation 7.3.1 by power series that converge for values of x near an ordinary point x . 0
Theorem 7.3.1
Suppose P , P , and P are polynomials with no common factor and P isn’t identically zero. Let x be a point such that
0 1 2 0 0
′′ ′
P0 (x)y + P1 (x)y + P2 (x)y = 0 (7.3.5)
n=0
that converges at least on the open interval (x − ρ, x + ρ) . ( If P is nonconstant, so that ρ is necessarily finite, then
0 0 0
the open interval of convergence of (7.3.6) may be larger than (x − ρ, x + ρ). If P is constant then ρ = ∞ and 0 0 0
(x − ρ, x + ρ) = (−∞, ∞) .
0 0
We call Equation 7.3.6 a power series solution in x − x of Equation 7.3.5. We’ll now develop a method for finding power
0
series solutions of Equation 7.3.5. For this purpose we write Equation 7.3.5 as Ly = 0 , where
′′ ′
Ly = P0 y + P1 y + P2 y. (7.3.7)
n
y = ∑ an (x − x0 ) .
n=0
′ n−1
y = ∑ nan (x − x0 )
n=1
shows that y(x ) = a and y (x ) = a . Since every initial value problem Equation 7.3.4 has a unique solution, this means
0 0
′
0 1
that a and a can be chosen arbitrarily, and a , a , …are uniquely determined by them.
0 1 2 3
n
y = ∑ an (x − x0 ) ,
n=0
′ n−1
y = ∑ nan (x − x0 ) ,
n=1
′′ n−2
y = ∑ n(n − 1)an (x − x0 )
n=2
into Equation 7.3.7, and collect the coefficients of like powers of x − x . This yields 0
n
Ly = ∑ bn (x − x0 ) , (7.3.8)
n=0
powers of x − x . Since Equation 7.3.8 and (a) of Theorem 7.1.6 imply that Ly = 0 if and only if b = 0 for n ≥ 0 , all
0 n
power series solutions in x − x of Ly = 0 can be obtained by choosing a and a arbitrarily and computing a , a , …,
0 0 1 2 3
successively so that b = 0 for n ≥ 0 . For simplicity, we call the power series obtained this way the power series in x − x
n 0
for the general solution of Ly = 0 , without explicitly identifying the open interval of convergence of the series.
Example 7.3.1
Let x be an arbitrary real number. Find the power series in x − x for the general solution of
0 0
′′
y + y = 0. (7.3.9)
Solution
Here
′′
Ly = y + y.
If
∞
n
y = ∑ an (x − x0 ) ,
n=0
then
∞
′′ n−2
y = ∑ n(n − 1)an (x − x0 ) ,
n=2
so
n−2 n
Ly = ∑ n(n − 1)an (x − x0 ) + ∑ an (x − x0 ) .
n=2 n=0
To collect coefficients of like powers of x − x , we shift the summation index in the first sum. This yields
0
∞ ∞ ∞
n n n
Ly = ∑(n + 2)(n + 1)an+2 (x − x0 ) + ∑ an (x − x0 ) = ∑ bn (x − x0 ) ,
with
bn = (n + 2)(n + 1)an+2 + an .
where a and a are arbitrary. Since the indices on the left and right sides of Equation
0 1 7.3.10 differ by two, we write
Equation 7.3.10 separately for n even (n = 2m) and n odd (n = 2m + 1) . This yields
−a2m
a2m+2 = , m ≥ 0, (7.3.11)
(2m + 2)(2m + 1)
and
−a2m+1
a2m+3 = , m ≥ 0. (7.3.12)
(2m + 3)(2m + 2)
Computing the coefficients of the even powers of x − x from Equation 7.3.11 yields
0
a0
a2 = −
2⋅1
a2 1 a0 a0
a4 = − =− (− ) = ,
4⋅3 4⋅3 2⋅1 4⋅3⋅2⋅1
a4 1 a0 a0
a6 = − =− ( ) =− ,
6⋅5 6⋅5 4⋅3⋅2⋅1 6⋅5⋅4⋅3⋅2⋅1
and, in general,
m
a0
a2m = (−1 ) , m ≥ 0. (7.3.13)
(2m)!
Computing the coefficients of the odd powers of x − x from Equation 7.3.12 yields
0
a1
a3 = −
3⋅2
a3 1 a1 a1
a5 = − =− (− ) = ,
5⋅4 5⋅4 3⋅2 5⋅4⋅3⋅2
a5 1 a1 a1
a7 = − =− ( ) =− ,
7⋅6 7⋅6 5⋅4⋅3⋅2 7⋅6⋅5⋅4⋅3⋅2
and, in general,
m
(−1) a1
a2m+1 = m ≥ 0. (7.3.14)
(2m + 1)!
2m 2m+1
y = ∑ a2m (x − x0 ) + ∑ a2m+1 (x − x0 ) ,
m=0 m=0
y = a0 cos(x − x0 ) + a1 sin(x − x0 ),
Equations like Equations 7.3.10, 7.3.11, and 7.3.12, which define a given coefficient in the sequence {a } in terms of one or n
more coefficients with lesser indices are called recurrence relations. When we use a recurrence relation to compute terms of a
sequence we are computing recursively.
In the remainder of this section we consider the problem of finding power series solutions in x − x for equations of the form 0
2 ′′ ′
(1 + α(x − x0 ) ) y + β(x − x0 )y + γy = 0. (7.3.16)
Many important equations that arise in applications are of this form with x0 = 0 , including Legendre’s equation Equation
7.3.2, Airy’s equation Equation 7.3.3, Chebyshev’s equation,
2 ′′ ′ 2
(1 − x )y − x y + α y = 0,
Since
2
P0 (x) = 1 + α(x − x0 )
in Equation 7.3.16, the point x is an ordinary point of Equation 7.3.16, and Theorem 7.3.1 implies that the solutions of
0
Equation 7.3.16 can be written as power series in x − x that converge on the interval (x − 1/√|α|, x + 1/√|α|) if
0 0 0
α ≠ 0 , or on (−∞, ∞) if α = 0 . We’ll see that the coefficients in these power series can be obtained by methods similar to
∏ bj = br br+1 ⋯ bs if s ≥ r.
j=r
Thus,
7
∏ bj = b2 b3 b4 b5 b6 b7 ,
j=2
j=0
and
2
2 2
∏j =2 = 4.
j=2
We define
∏ bj = 1 if s < r,
j=r
Example 7.3.2
Find the power series in x for the general solution of
2 ′′ ′
(1 + 2 x )y + 6x y + 2y = 0. (7.3.17)
Solution
Here
2 ′′ ′
Ly = (1 + 2 x )y + 6x y + 2y.
If
∞
n
y = ∑ an x
n=0
then
∞ ∞
′ n−1 ′′ n−2
y = ∑ nan x and y = ∑ n(n − 1)an x ,
n=1 n=2
so
∞ ∞ ∞
2 n−2 n−1 n
Ly = (1 + 2 x ) ∑ n(n − 1)an x + 6x ∑ nan x + 2 ∑ an x
∞ ∞
n−2 n
= ∑ n(n − 1)an x + ∑ [2n(n − 1) + 6n + 2] an x
n=2 n=0
∞ ∞
n−2 2 n
= ∑ n(n − 1)an x + 2 ∑(n + 1 ) an x .
n=2 n=0
To collect coefficients of x , we shift the summation index in the first sum. This yields
n
∞ ∞ ∞
n 2 n n
Ly = ∑(n + 2)(n + 1)an+2 x + 2 ∑(n + 1 ) an x = ∑ bn x ,
with
2
bn = (n + 2)(n + 1)an+2 + 2(n + 1 ) an , n ≥ 0.
To obtain solutions of Equation 7.3.17, we set b n =0 for n ≥ 0 . This is equivalent to the recurrence relation
n+1
an+2 = −2 an , n ≥ 0. (7.3.18)
n+2
Since the indices on the left and right differ by two, we write Equation 7.3.18separately for n = 2m and n = 2m + 1 , as
in Example 7.3.1. This yields
2m + 1 2m + 1
a2m+2 = −2 a2m = − a2m , m ≥ 0, (7.3.19)
2m + 2 m +1
and
2m + 2 m +1
a2m+3 = −2 a2m+1 = −4 a2m+1 , m ≥ 0. (7.3.20)
2m + 3 2m + 3
3 3 1 1⋅3
a4 = − a2 = (− ) (− ) a0 = a0 ,
2 2 1 1⋅2
5 5 1⋅3 1⋅3⋅5
a6 = − a4 = − ( ) a0 = − a0 ,
3 3 1⋅2 1⋅2⋅3
7 7 1⋅3⋅5 1⋅3⋅5⋅7
a8 = − a6 = − (− ) a0 = a0 .
4 4 1⋅2⋅3 1⋅2⋅3⋅4
In general,
m
∏ (2j − 1)
j=1
m
a2m = (−1 ) a0 , m ≥ 0. (7.3.21)
m!
0
(Note that Equation 7.3.21is correct for m = 0 because we defined ∏ j=1
bj = 1 for any b .) j
2 2 1 1⋅2
2
a5 = −4 a3 = −4 (−4 ) a1 = 4 a1 ,
5 5 3 3⋅5
3 3 1⋅2 1⋅2⋅3
2 3
a7 = −4 a5 = −4 (4 ) a1 = −4 a1 ,
7 7 3⋅5 3⋅5⋅7
4 4 3
1⋅2⋅3 4
1⋅2⋅3⋅4
a9 = −4 a7 = −4 (4 ) a1 = 4 a1 .
9 9 3⋅5⋅7 3⋅5⋅7⋅9
In general,
m m
(−1 ) 4 m!
a2m+1 = a1 , m ≥ 0. (7.3.22)
m
∏ (2j + 1)
j=1
is the power series in x for the general solution of Equation 7.3.17. Since P (x) = 1 + 2x has no real zeros, Theorem 0
2
–
5.1.1 implies that every solution of Equation 7.3.17 is defined on (−∞, ∞). However, since P (±i/√2) = 0 , Theorem 0
– –
7.3.1 implies only that the power series converges in (−1/ √2, 1/ √2) for any choice of a and a . 0 1
The results in Examples 7.3.1 and 7.3.2 are consequences of the following general theorem.
Theorem 7.3.2
The coefficients {a n} in any solution y = ∑ ∞
n=0
an (x − x0 )
n
of
2 ′′ ′
(1 + α(x − x0 ) ) y + β(x − x0 )y + γy = 0 (7.3.23)
where
p(n) = αn(n − 1) + βn + γ. (7.3.25)
p(2m)
a2m+2 = − a2m , m ≥0 (7.3.26)
(2m + 2)(2m + 1)
p(2m + 1)
a2m+3 = − a2m+1 , m ≥ 0, (7.3.27)
(2m + 3)(2m + 2)
Proof
Here
2 ′′ ′
Ly = (1 + α(x − x0 ) )y + β(x − x0 )y + γy.
If
∞
n
y = ∑ an (x − x0 ) ,
n=0
then
∞ ∞
′ n−1 ′′ n−2
y = ∑ nan (x − x0 ) and y = ∑ n(n − 1)an (x − x0 ) .
n=1 n=2
Hence,
∞ n−2 ∞ n
Ly =∑ n(n − 1)an (x − x0 ) +∑ [αn(n − 1) + βn + γ] an (x − x0 )
n=2 n=0
∞ n−2 ∞ n
=∑ n(n − 1)an (x − x0 ) +∑ p(n)an (x − x0 ) ,
n=2 n=0
from Equation 7.3.25. To collect coefficients of powers of x − x , we shift the summation index in the first sum. This
0
yields
∞
n
Ly = ∑ [(n + 2)(n + 1)an+2 + p(n)an ] (x − x0 ) .
n=0
which is equivalent to Equation 7.3.24. Writing Equation 7.3.24 separately for the cases where n = 2m and
n = 2m + 1 yields Equation 7.3.26and Equation 7.3.27.
Example 7.3.3
Find the power series in x − 1 for the general solution of
2 ′′ ′
(2 + 4x − 2 x )y − 12(x − 1)y − 12y = 0. (7.3.28)
Solution
We must first write the coefficient P (x) = 2 + 4x − x in powers of
0
2
x −1 . To do this, we write x = (x − 1) + 1 in
P (x) and then expand the terms, collecting powers of x − 1 ; thus,
0
2 2
2 + 4x − 2x = 2 + 4[(x − 1) + 1] − 2[(x − 1) + 1]
2
= 4 − 2(x − 1 ) .
This is of the form Equation 7.3.23with α = −1/2 , β = −3 , and γ = −3 . Therefore, from Equation 7.3.25
n(n − 1) (n + 2)(n + 3)
p(n) = − − 3n − 3 = − .
2 2
(2m + 2)(2m + 3) 2m + 3
= a2m = a2m , m ≥0
2(2m + 2)(2m + 1) 2(2m + 1)
and
p(2m + 1)
a2m+3 = − a2m+1
(2m + 3)(2m + 2)
(2m + 3)(2m + 4) m +2
= a2m+1 = a2m+1 , m ≥ 0.
2(2m + 3)(2m + 2) 2(m + 1)
which implies that the power series in x − 1 for the general solution of Equation 7.3.28is
∞ ∞
2m + 1 2m
m +1 2m+1
y = a0 ∑ (x − 1 ) + a1 ∑ (x − 1 ) .
m m
2 2
m=0 m=0
In the examples considered so far we were able to obtain closed formulas for coefficients in the power series solutions. In
some cases this is impossible, and we must settle for computing a finite number of terms in the series. The next example
illustrates this with an initial value problem.
Example 7.3.4
∞
Compute a , a , …, a in the series solution y = ∑
0 1 7 n=0
an x
n
of the initial value problem
2 ′′ ′ ′
(1 + 2 x )y + 10x y + 8y = 0, y(0) = 2, y (0) = −3. (7.3.29)
Solution
Since α = 2 , β = 10, and γ = 8 in Equation 7.3.29,
2
p(n) = 2n(n − 1) + 10n + 8 = 2(n + 2 ) .
Therefore
2
(n + 2) n+2
an+2 = −2 an = −2 an , n ≥ 0.
(n + 2)(n + 1) n+1
2m + 3 2m + 3
a2m+3 = −2 a2m+1 = − a2m+1 , m ≥ 0. (7.3.31)
2m + 2 m +1
1
a2 = −4 2 = −8,
1
2 64
a4 = −4 (−8) = ,
3 3
3 64 256
a6 = −4 ( ) =− .
5 3 5
Starting with a 1
′
= y (0) = −3 , we compute a 3, a5 and a from Equation 7.3.31:
7
3
a3 = − (−3) = 9,
1
5 45
a5 = − 9 =− ,
2 2
7 45 105
a7 = − (− ) = .
3 2 2
2 3
64 4
45 5
256 6
105 7
y = 2 − 3x − 8 x + 9x + x − x − x + x +⋯ .
3 2 5 2
Using Technology
Computing coefficients recursively as in Example 7.3.4 is tedious. We recommend that you do this kind of computation by
writing a short program to implement the appropriate recurrence relation on a calculator or computer. You may wish to do this
in verifying examples and doing exercises (identified by the symbol C ) in this chapter that call for numerical computation of
the coefficients in series solutions. We obtained the answers to these exercises by using software that can produce answers in
the form of rational numbers. However, it is perfectly acceptable - and more practical - to get your answers in decimal form.
You can always check them by converting our fractions to decimals.
If you’re interested in actually using series to compute numerical approximations to solutions of a differential equation, then
whether or not there’s a simple closed form for the coefficients is essentially irrelevant. For computational purposes it is
usually more efficient to start with the given coefficients a = y(x ) and a = y (x ) , compute a , …, a recursively, and
0 0 1
′
0 2 N
then compute approximate values of the solution from the Taylor polynomial
N
n
TN (x) = ∑ an (x − x0 ) .
n=0
The trick is to decide how to choose N so the approximation y(x) ≈ T (x) is sufficiently accurate on the subinterval of the
N
interval of convergence that you’re interested in. In the computational exercises in this and the next two sections, you will
often be asked to obtain the solution of a given problem by numerical integration with software of your choice (see Section
10.1 for a brief discussion of one such method), and to compare the solution obtained in this way with the approximations
obtained with T for various values of N . This is a typical textbook kind of exercise, designed to give you insight into how
N
the accuracy of the approximation y(x) ≈ T (x) behaves as a function of N and the interval that you’re working on. In real
N
life, you would choose one or the other of the two methods (numerical integration or series solution). If you choose the method
of series solution, then a practical procedure for determining a suitable value of N is to continue increasing N until the
maximum of |T − T N | on the interval of interest is within the margin of error that you’re willing to accept.
N −1
In doing computational problems that call for numerical solution of differential equations you should choose the most accurate
numerical integration procedure your software supports, and experiment with the step size until you’re confident that the
numerical results are sufficiently accurate for the problem at hand.
2. (1 + x 2
)y
′′ ′
+ 2x y − 2y = 0
3. (1 + x 2
)y
′′ ′
− 8x y + 20y = 0
4. (1 − x 2
)y
′′ ′
− 8x y − 12y = 0
5. (1 + 2x 2
)y
′′ ′
+ 7x y + 2y = 0
6. (1 + x 2
)y
′′
+ 2x y +
′ 1
4
y =0
7. (1 − x 2
)y
′′ ′
− 5x y − 4y = 0
8. (1 + x 2
)y
′′
− 10x y + 28y = 0
′
Q7.2.2
9.
a. Find the power series in x for the general solution of y + x y + 2y = 0 . ′′ ′
b. For several choices of a and a , use differential equations software to solve the initial value problem
0 1
′′ ′ ′
y + x y + 2y = 0, y(0) = a0 , y (0) = a1 , (A)
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs.
10. Follow the directions of Exercise [exer:7.2.9} for the differential equation
′′ ′
y + 2x y + 3y = 0.
Q7.2.3
In Exercises 7.2.11-7.2.13 find a0 , . . . , aN for N at least 7 in the power series solution y =∑
∞
n=0
n
an x of the initial value
problem.
11. (1 + x 2
)y
′′ ′
+ x y + y = 0, y(0) = 2,
′
y (0) = −1
12. (1 + 2x 2
)y
′′ ′
− 9x y − 6y = 0, y(0) = 1,
′
y (0) = −1
13. (1 + 8x 2
)y
′′
+ 2y = 0, y(0) = 2,
′
y (0) = −1
Q7.2.4
–
14. Do the next experiment for various choices of real numbers a , a , and r, with 0 < r < 1/√2 . 0 1
n=0
n
an x of (A), and graph
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs.
15. Do (a) and (b) for several values of r in (0, 1):
a. Use differential equations software to solve the initial value problem
2 ′′ ′ ′
(1 + x )y + 10x y + 14y = 0, y(0) = 5, y (0) = 1, (A)
n=0
an x
n
of (A), and graph
N
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs. What happens to the required N as r → 1 ?
c. Try (a) and (b) with r = 1.2. Explain your results.
Q7.2.5
In Exercises 7.2.16-7.2.20 find the power series in x − x for the general solution. 0
16. y ′′
− y = 0; x0 = 3
17. y ′′ ′
− (x − 3)y − y = 0; x0 = 3
18. (1 − 4x + 2x 2
)y
′′
+ 10(x − 1)y + 6y = 0;
′
x0 = 1
19. (11 − 8x + 2x 2
)y
′′
− 16(x − 2)y + 36y = 0;
′
x0 = 2
20. (5 + 6x + 3x 2
)y
′′
+ 9(x + 1)y + 3y = 0;
′
x0 = −1
Q7.2.6
In Exercises 7.2.21-7.2.26 find a , . . . a for N at least 7 in the power series y = ∑
0 N
∞
n=0
n
an (x − x0 ) for the solution of the
initial value problem. Take x to be the point where the initial conditions are imposed.
0
21. (x 2
− 4)y
′′ ′
− x y − 3y = 0, y(0) = −1,
′
y (0) = 2
22. y ′′ ′
+ (x − 3)y + 3y = 0, y(3) = −2,
′
y (3) = 3
23. (5 − 6x + 3x 2
)y
′′ ′
+ (x − 1)y + 12y = 0, y(1) = −1,
′
y (1) = 1
24. (4x 2
− 24x + 37)y
′′
+ y = 0, y(3) = 4,
′
y (3) = −6
25. (x 2
− 8x + 14)y
′′
− 8(x − 4)y + 20y = 0,
′
y(4) = 3,
′
y (4) = −4
26. (2x 2
+ 4x + 5)y
′′
− 20(x + 1)y + 60y = 0,
′
y(−1) = 3,
′
y (−1) = −3
Q7.2.7
27.
a. Find a power series in x for the general solution of
2 ′′ ′
(1 + x )y + 4x y + 2y = 0. (A)
for the sum of a geometric series to find a closed form expression for the general solution of (A) on (−1, 1).
is
∞ m−1 2m ∞ m−1 2m+1
m
x m
x
y = a0 ∑ (−1 ) [ ∏ p(2j)] + a1 ∑ (−1 ) [ ∏ p(2j + 1)] .
(2m)! (2m + 1)!
m=0 j=0 m=0 j=0
is y = a0 y1 + a1 y2 , where
∞ m−1 2m
x
y1 = ∑ [ ∏ (2j − α)(2j + α + 2b − 1)]
(2m)!
m=0 j=0
and
∞ m−1 2m+1
x
y2 = ∑ [ ∏ (2j + 1 − α)(2j + α + 2b)]
(2m + 1)!
m=0 j=0
b. Suppose 2b isn’t a negative odd integer and k is a nonnegative integer. Show that y is a polynomial of degree 2k such that 1
Conclude that if n is a nonnegative integer, then there’s a polynomial P of degree n such that P (−x) = (−1) P (x) n n
n
n
and
2 ′′ ′
(1 − x )Pn − 2bx Pn + n(n + 2b − 1)Pn = 0. (A)
and use this to show that if m and n are nonnegative integers, then
2 b ′ 2 b ′ 2 b−1
[(1 − x ) Pn ] Pm − [(1 − x ) Pm ] Pn = [m(m + 2b − 1) − n(n + 2b − 1)](1 − x ) Pm Pn (B)
d. Now suppose b > 0 . Use (B) and integration by parts to show that if m ≠ n , then
1
2 b−1
∫ (1 − x ) Pm (x)Pn (x) dx = 0.
−1
(We say that P and P are orthogonal on (−1, 1) with respect to the weighting function(1 − x
m n
2 b−1
) .)
31.
a. Use Exercise 7.2.28 to show that the power series in x for the general solution of Hermite’s equation
′′ ′
y − 2x y + 2αy = 0
and
∞ m−1 m 2m+1
2 x
y2 = ∑ [ ∏ (2j + 1 − α)]
(2m + 1)!
m=0 j=0
b. Suppose k is a nonnegative integer. Show that y is a polynomial of degree 2k such that y (−x) = y (x) if α = 2k , while
1 1 1
integer then there’s a polynomial P of degree n such that P (−x) = (−1) P (x) and
n n
n
n
′′ ′
Pn − 2x Pn + 2nPn = 0. (A)
and use this to show that if m and n are nonnegative integers, then
2 2 2
−x ′ ′ −x ′ ′ −x
[e Pn ] Pm − [ e Pm ] Pn = 2(m − n)e Pm Pn . (B)
(We say that P and P are orthogonal on (−∞, ∞) with respect to the weighting function e
m n
−x
.)
32. Consider the equation
3 ′′ 2 ′
(1 + α x ) y + β x y + γxy = 0, (A)
n=0
b. Show from (a) that a n =0 unless n = 3m or n = 3m + 1 for some nonnegative integer m, and that
p(3m)
a3m+3 = − a3m , m ≥ 0,
(3m + 3)(3m + 2)
and
p(3m + 1)
a3m+4 = − a3m+1 , m ≥ 0,
(3m + 4)(3m + 3)
c. Conclude from (b) that the power series in x for the general solution of (A) is
3m
∞ m−1 p(3j) x
m
y = a0 ∑ (−1 ) [∏ ] m
m=0 j=0 3j+2 3 m!
∞ m−1 p(3j+1) x
3m+1
m
+a1 ∑ (−1 ) [∏ ] m
.
m=0 j=0 3j+4 3 m!
34. (1 − 2x 3
)y
′′ 2
− 10 x y − 8xy = 0
′
35. (1 + x 3
)y
′′ 2
+ 7 x y + 9xy = 0
′
36. (1 − 2x 3
)y
′′ 2
+ 6 x y + 24xy = 0
′
37. (1 − x 3
)y
′′ 2
+ 15 x y − 63xy = 0
′
Q7.2.9
38. Consider the equation
k+2 ′′ k+1 ′ k
(1 + α x )y + βx y + γ x y = 0, (A)
n
y = ∑ an x
n=0
b. Show from (a) that a n =0 unless n = (k + 2)m or n = (k + 2)m + 1 for some nonnegative integer m, and that
p ((k + 2)m)
a(k+2)(m+1) = − a(k+2)m , m ≥ 0,
(k + 2)(m + 1)[(k + 2)(m + 1) − 1]
and
p ((k + 2)m + 1)
a(k+2)(m+1)+1 = − a(k+2)m+1 , m ≥ 0,
[(k + 2)(m + 1) + 1](k + 2)(m + 1)
c. Conclude from (b) that the power series in x for the general solution of (A) is
∞ m−1 p((k+2)j) ( k+2) m
m x
y = a0 ∑ (−1 ) [∏ ] m
m=0 j=0 (k+2)(j+1)−1 (k+2 ) m!
( k+2) m+1
∞ m−1 p((k+2)j+1) x
m
+a1 ∑ (−1 ) [∏ ] m
.
m=0 j=0 (k+2)(j+1)+1 (k+2 ) m!
Q7.2.10
In Exercises 7.2.39-7.2.44 use the method of Exercise 7.2.38 to find the power series in x for the general solution.
39. (1 + 2x 5
)y
′′ 4
+ 14 x y + 10 x y = 0
′ 3
40. y ′′ 2
+x y = 0
41. y ′′ 6 ′
+ x y + 7x y = 0
5
42. (1 + x 8
)y
′′ 7
− 16 x y + 72 x y = 0
′ 6
43. (1 − x 6
)y
′′ 5
− 12 x y − 30 x y = 0
′ 4
44. y ′′ 5 ′
+ x y + 6x y = 0
4
n
y = ∑ an (x − x0 )
n=0
consider cases where the differential equation in Equation 7.4.1 is not of the form
2 ′′ ′
(1 + α(x − x0 ) ) y + β(x − x0 )y + γy = 0,
so Theorem 7.2.2 does not apply, and the computation of the coefficients {a } is more complicated. For the equations n
considered here it is difficult or impossible to obtain an explicit formula for a in terms of n . Nevertheless, we can calculate as n
Example 7.4.1
Find the coefficients a , …, a in the series solution y = ∑
0 7
∞
n=0
an x
n
of the initial value problem
2 ′′ ′ ′
(1 + x + 2 x )y + (1 + 7x)y + 2y = 0, y(0) = −1, y (0) = −2. (7.4.2)
Solution
Here
2 ′′ ′
Ly = (1 + x + 2 x )y + (1 + 7x)y + 2y.
– –
The zeros (−1 ± i √7)/4 of P (x) = 1 + x + 2x have absolute value 1/√2, so Theorem 7.2.2 implies that the series
0
2
– –
solution converges to the solution of Equation 7.4.2 on (−1/√2, 1/√2). Since
∞ ∞ ∞
n ′ n−1 ′′ n−2
y = ∑ an x , y = ∑ nan x and y = ∑ n(n − 1)an x ,
∞ ∞ ∞
n−2 n−1 n
Ly = ∑ n(n − 1)an x + ∑ n(n − 1)an x + 2 ∑ n(n − 1)an x
∞ ∞ ∞
n−1 n n
+ ∑ nan x + 7 ∑ nan x + 2 ∑ an x .
Shifting indices so the general term in each series is a constant multiple of x yields n
∞ ∞ ∞
n n n
Ly = ∑(n + 2)(n + 1)an+2 x + ∑(n + 1)nan+1 x + 2 ∑ n(n − 1)an x
∞ ∞ ∞ ∞
n n n n
+ ∑(n + 1)an+1 x + 7 ∑ nan x + 2 ∑ an x = ∑ bn x ,
where
2
bn = (n + 2)(n + 1)an+2 + (n + 1 ) an+1 + (n + 2)(2n + 1)an .
Therefore y = ∑ ∞
n=0
an x
n
is a solution of Ly = 0 if and only if
n+1 2n + 1
an+2 = − an+1 − an , n ≥ 0. (7.4.3)
n+2 n+1
yields
1 1
a2 = − a1 − a0 = − (−2) − (−1) = 2.
2 2
2
5 3
55 4
3 5
61 6
443 7
y = −1 − 2x + 2 x + x − x + x + x − x +⋯ .
3 12 4 8 56
We also leave it to you (Exercise [exer:7.3.13}) to verify numerically that the Taylor polynomials T N (x) = ∑
N
n=0
n
an x
– –
converge to the solution of Equation 7.4.2 on (−1/√2, 1/√2).
Example 7.4.2
Find the coefficients a , …, a in the series solution
0 5
n
y = ∑ an (x + 1 )
n=0
Since the desired series is in powers of x + 1 we rewrite the differential equation in Equation 7.4.4 as Ly = 0 , with
′′ ′
Ly = (2 + (x + 1)) y − (1 − 2(x + 1)) y − (3 − (x + 1)) y.
Since
∞ ∞ ∞
n ′ n−1 ′′ n−2
y = ∑ an (x + 1 ) , y = ∑ nan (x + 1 ) and y = ∑ n(n − 1)an (x + 1 ) ,
∞ ∞
n−2 n−1
Ly = 2 ∑ n(n − 1)an (x + 1 ) + ∑ n(n − 1)an (x + 1 )
n=2 n=2
∞ ∞
n−1 n
− ∑ nan (x + 1 ) + 2 ∑ nan (x + 1 )
n=1 n=1
∞ ∞
n n+1
− 3 ∑ an (x + 1 ) + ∑ an (x + 1 ) .
n=0 n=0
Shifting indices so that the general term in each series is a constant multiple of (x + 1) yields n
∞ ∞
n n
Ly = 2 ∑(n + 2)(n + 1)an+2 (x + 1 ) + ∑(n + 1)nan+1 (x + 1 )
n=0 n=0
∞ ∞ ∞
n n n
− ∑(n + 1)an+1 (x + 1 ) + ∑(2n − 3)an (x + 1 ) + ∑ an−1 (x + 1 )
∞
n
= ∑ bn (x + 1 ) ,
n=0
where
and
2
bn = 2(n + 2)(n + 1)an+2 + (n − 1)an+1 + (2n − 3)an + an−1 , n ≥ 1.
Therefore y = ∑ ∞
n=0
an (x + 1 )
n
is a solution of Ly = 0 if and only if
1
a2 = (a1 + 3 a0 ) (7.4.5)
4
and
1 2
an+2 = − [(n − 1)an+1 + (2n − 3)an + an−1 ] , n ≥ 1. (7.4.6)
2(n + 2)(n + 1)
From the initial conditions in Equation 7.4.4, a = y(−1) = 2 and a = y (−1) = −3 . We leave it to you to compute
0 1
′
a , …, a with Equation 7.4.5 and Equation 7.4.6 and show that the solution of Equation 7.4.4 is
2 5
3 2
5 3
7 4
1 5
y = −2 − 3(x + 1) + (x + 1 ) − (x + 1 ) + (x + 1 ) − (x + 1 ) +⋯ .
4 12 48 60
We also leave it to you (Exercise [exer:7.3.14}) to verify numerically that the Taylor polynomials T N
N
(x) = ∑
n=0
n
an x
converge to the solution of Equation 7.4.4 on the interval of convergence of the power series solution.
Example 7.4.3
Find the coefficients a , …, a in the series solution y = ∑
0 5
∞
n=0
an x
n
of the initial value problem
′′ ′ 2 ′
y + 3x y + (4 + 2 x )y = 0, y(0) = 2, y (0) = −3. (7.4.7)
Solution
Here
′′ ′ 2
Ly = y + 3x y + (4 + 2 x )y.
Since
∞ ∞ ∞
n ′ n−1 ′′ n−2
y = ∑ an x , y = ∑ nan x , and y = ∑ n(n − 1)an x ,
∞ ∞ ∞ ∞
n−2 n n n+2
Ly = ∑ n(n − 1)an x + 3 ∑ nan x + 4 ∑ an x + 2 ∑ an x .
Shifting indices so that the general term in each series is a constant multiple of x yields n
∞ ∞ ∞ ∞
n n n n
Ly = ∑(n + 2)(n + 1)an+2 x + ∑(3n + 4)an x + 2 ∑ an−2 x = ∑ bn x
where
b0 = 2 a2 + 4 a0 , b1 = 6 a3 + 7 a1 ,
and
Therefore y = ∑ ∞
n=0
an x
n
is a solution of Ly = 0 if and only if
7
a2 = −2 a0 , a3 = − a1 , (7.4.8)
6
and
From the initial conditions in Equation 7.4.7, a = y(0) = 2 and a = y (0) = −3 . We leave it to you to compute
0 1
′
a2 ,
…, a with Equation 7.4.8 and Equation 7.4.9 and show that the solution of Equation 7.4.7 is
5
2
7 3 4
79 5
y = 2 − 3x − 4 x + x + 3x − x +⋯ .
2 40
We also leave it to you (Exercise [exer:7.3.15}) to verify numerically that the Taylor polynomials T N (x) = ∑
N
n=0
n
an x
converge to the solution of Equation 7.4.9 on the interval of convergence of the power series solution.
n=0
n
an x of the initial
value problem.
1. (1 + 3x)y ′′ ′
+ x y + 2y = 0, y(0) = 2,
′
y (0) = −3
2. (1 + x + 2x 2
)y
′′
+ (2 + 8x)y + 4y = 0,
′
y(0) = −1,
′
y (0) = 2
3. (1 − 2x 2
)y
′′
+ (2 − 6x)y − 2y = 0,
′
y(0) = 1,
′
y (0) = 0
4. (1 + x + 3x 2
)y
′′
+ (2 + 15x)y + 12y = 0,
′
y(0) = 0,
′
y (0) = 1
5. (2 + x)y ′′
+ (1 + x)y + 3y = 0,
′
y(0) = 4,
′
y (0) = 3
6. (3 + 3x + x 2
)y
′′
+ (6 + 4x)y + 2y = 0,
′
y(0) = 7,
′
y (0) = 3
7. (4 + x)y ′′
+ (2 + x)y + 2y = 0,
′
y(0) = 2,
′
y (0) = 5
8. (2 − 3x + 2x 2
)y
′′
− (4 − 6x)y + 2y = 0,
′
y(1) = 1,
′
y (1) = −1
9. (3x + 2x 2
)y
′′
+ 10(1 + x)y + 8y = 0,
′
y(−1) = 1,
′
y (−1) = −1
10. (1 − x + x 2
)y
′′
− (1 − 4x)y + 2y = 0,
′
y(1) = 2,
′
y (1) = −1
11. (2 + x)y ′′
+ (2 + x)y + y = 0,
′
y(−1) = −2,
′
y (−1) = 3
12. x2
y
′′
− (6 − 7x)y + 8y = 0,
′
y(1) = 1,
′
y (1) = −2
Q7.3.2
–
13. Do the following experiment for various choices of real numbers a , a , and r, with 0 < r < 1/√2 . 0 1
n=0
n
an x of (A), and graph
N
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs.
14. Do the following experiment for various choices of real numbers a , a , and r, with 0 < r < 2 . 0 1
n
y = ∑ an (x + 1 )
n=0
n
TN (x) = ∑ an (x + 1 )
n=0
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs.
16. Do the following experiment for several choices of a and a . 0 1
n=0
an x
n
of (A), and graph
N
n
TN (x) = ∑ an x
n=0
and the solution obtained in (a) on (−r, r). Continue increasing N until there’s no perceptible difference between the two
graphs. What happens as you let r → 1 ?
17. Follow the directions of Exercise 7.3.16 for the initial value problem
′′ ′ ′
(1 + x)y + 3 y + 32y = 0, y(0) = a0 , y (0) = a1 .
18. Follow the directions of Exercise 7.3.16 for the initial value problem
2 ′′ ′ ′
(1 + x )y + y + 2y = 0, y(0) = a0 , y (0) = a1 .
Q7.3.3
In Exercises 7.3.19-7.3.28 find the coefficients a 0, . . . aN for N at least 7 in the series solution
∞
n
y = ∑ an (x − x0 )
n=0
of the initial value problem. Take x to be the point where the initial conditions are imposed.
0
19. (2 + 4x)y ′′ ′
− 4 y − (6 + 4x)y = 0, y(0) = 2,
′
y (0) = −7
20. (1 + 2x)y ′′ ′
− (1 − 2x)y − (3 − 2x)y = 0, y(1) = 1,
′
y (1) = −2
21. (5 + 2x)y ′′ ′
− y + (5 + x)y = 0, y(−2) = 2,
′
y (−2) = −1
22. (4 + x)y ′′ ′
− (4 + 2x)y + (6 + x)y = 0, y(−3) = 2,
′
y (−3) = −2
23. (2 + 3x)y ′′ ′
− x y + 2xy = 0, y(0) = −1,
′
y (0) = 2
24. (3 + 2x)y ′′ ′
+ 3 y − xy = 0, y(−1) = 2,
′
y (−1) = −3
25. (3 + 2x)y ′′ ′
− 3 y − (2 + x)y = 0, y(−2) = −2,
′
y (−2) = 3
27. (7 + x)y ′′ ′
+ (8 + 2x)y + (5 + x)y = 0, y(−4) = 1,
′
y (−4) = 2
Q7.3.4
29. Show that the coefficients in the power series in x for the general solution of
2 ′′ ′
(1 + αx + β x )y + (γ + δx)y + ϵy = 0
30.
a. Let α and β be constants, with β ≠ 0 . Show that y = ∑ ∞
n=0
an x
n
is a solution of
2 ′′ ′
(1 + αx + β x )y + (2α + 4βx)y + 2βy = 0 (A)
if and only if
an+2 + α an+1 + β an = 0, n ≥ 0. (B)
An equation of this form is called a second order homogeneous linear difference equation. The polynomial
p(r) = r + αr + β is called the characteristic polynomial of (B). If r and r are the zeros of p , then 1/r and 1/r are
2
1 2 1 2
the zeros of
2
P0 (x) = 1 + αx + β x
b. Suppose p(r) = (r − r1 )(r − r2 ) where r and r are real and distinct, and let
1 2 ρ be the smaller of the two numbers
{1/| r1 |, 1/| r2 |} . Show that if c and c are constants then the sequence
1 2
n n
an = c1 r + c2 r , n ≥0
1 2
satisfies (B). Conclude from this that any function of the form
∞
n n n
y = ∑(c1 r + c2 r )x
1 2
n=0
e. Suppose p(r) = (r − r ) , and let ρ = 1/|r |. Show that if c and c are constants then the sequence
1
2
1 1 2
n
an = (c1 + c2 n)r , n ≥0
1
satisfies (B). Conclude from this that any function of the form
∞
n n
y = ∑(c1 + c2 n)r x
1
n=0
a. (1 + 3x + 2x )y + (6 + 8x)y + 4y = 0 2 ′′ ′
c. (1 − 4x + 4x )y − (8 − 16x)y + 8y = 0 2 ′′ ′
d. (4 + 4x + x )y + (8 + 4x)y + 2y = 0
2 ′′ ′
e. (4 + 8x + 3x )y + (16 + 12x)y + 6y = 0 2 ′′ ′
Q7.3.5
∞
In Exercises 7.3.32-7.3.38 find the coefficients a 0, . . . , aN for N at least 7 in the series solution y = ∑ n=0
n
an x of the initial
value problem.
32. y ′′ ′
+ 2x y + (3 + 2 x )y = 0,
2
y(0) = 1,
′
y (0) = −2
33. y ′′ ′
− 3x y + (5 + 2 x )y = 0,
2
y(0) = 1,
′
y (0) = −2
34. y ′′ ′
+ 5x y − (3 − x )y = 0,
2
y(0) = 6,
′
y (0) = −2
35. y ′′ ′
− 2x y − (2 + 3 x )y = 0,
2
y(0) = 2,
′
y (0) = −5
36. y ′′ ′
− 3x y + (2 + 4 x )y = 0,
2
y(0) = 3,
′
y (0) = 6
37. 2y ′′ ′
+ 5x y + (4 + 2 x )y = 0,
2
y(0) = 3,
′
y (0) = −2
38. 3y ′′ ′
+ 2x y + (4 − x )y = 0,
2
y(0) = −2,
′
y (0) = 3
Q7.3.6
39. Find power series in x for the solutions y and y of 1 2
′′ ′ 2
y + 4x y + (2 + 4 x )y = 0
Q7.3.7
In Exercises 7.3.40-7.3.49 find the coefficients a 0, . . . , aN for N at least 7 in the series solution
∞
n
y = ∑ an (x − x0 )
n=0
of the initial value problem. Take x to be the point where the initial conditions are imposed. 0
40. (1 + x)y ′′ 2
+ x y + (1 + 2x)y = 0,
′
y(0) − 2,
′
y (0) = 3
41. y ′′
+ (1 + 2x + x )y + 2y = 0,
2 ′
y(0) = 2,
′
y (0) = 3
42. (1 + x 2
)y
′′
+ (2 + x )y + xy = 0,
2 ′
y(0) = −3,
′
y (0) = 5
43. (1 + x)y ′′
+ (1 − 3x + 2 x )y − (x − 4)y = 0,
2 ′
y(1) = −2,
′
y (1) = 3
44. y ′′
+ (13 + 12x + 3 x )y + (5 + 2x),
2 ′
y(−2) = 2,
′
y (−2) = −3
45. (1 + 2x + 3x 2
)y
′′
+ (2 − x )y + (1 + x)y = 0,
2 ′
y(0) = 1,
′
y (0) = −2
46. (3 + 4x + x 2
)y
′′
− (5 + 4x − x )y − (2 + x)y = 0,
2 ′
y(−2) = 2,
′
y (−2) = −1
47. (1 + 2x + x 2
)y
′′
+ (1 − x)y = 0, y(0) = 2,
′
y (0) = −1
48. (x − 2x 2
)y
′′
+ (1 + 3x − x )y + (2 + x)y = 0,
2 ′
y(1) = 1,
′
y (1) = 0
1 9/16/2020
8.1: Introduction to the Laplace Transform
Definition of the Laplace Transform
To define the Laplace transform, we first recall the definition of an improper integral. If g is integrable over the interval [a, T ]
for every T > a , then the improper integral of g over [a, ∞) is defined as
∞ T
We say that the improper integral converges if the limit in Equation 8.1.1 exists; otherwise, we say that the improper integral
diverges or does not exist. Here’s the definition of the Laplace transform of a function f .
It is important to keep in mind that the variable of integration in Equation 8.1.2 is t , while s is a parameter independent of t .
We use t as the independent variable for f because in applications the Laplace transform is usually applied to functions of
time.
The Laplace transform can be viewed as an operator L that transforms the function f = f (t) into the function F = F (s) .
Thus, Equation 8.1.2 can be expressed as
F = L(f ).
The functions f and F form a transform pair, which we’ll sometimes denote by
f (t) ↔ F (s).
It can be shown that if F (s) is defined for s = s then it is defined for all s > s (Exercise 8.1.14b).
0 0
Example 8.1.1
Find the Laplace transform of f (t) = 1 .
Solution
From Equation 8.1.2 with f (t) = 1 ,
∞ T
−st −st
F (s) = ∫ e dt = lim ∫ e dt.
T →∞
0 0
If s ≠ 0 then
T −sT
1 T 1 −e
−st −st ∣
∫ e dt = − e = . (8.1.3)
∣
0
s 0 s
Therefore
T 1
, s > 0,
−st s
lim ∫ e dt = { (8.1.4)
T →∞
0 ∞, s < 0.
Note
It is convenient to combine the steps of integrating from 0 to T and letting T → ∞ . Therefore, instead of writing
Equation 8.1.3 and 8.1.4 as separate steps we write
∞ ∞ 1
1 ∣ , s >0
−st −st s
∫ e dt = − e ∣ ={
s ∣
0 0 ∞, s <0
Example 8.1.2
Find the Laplace transform of f (t) = t .
From Equation 8.1.2 with f (t) = t ,
∞
−st
F (s) = ∫ e t dt. (8.1.5)
0
1
2
, s > 0,
s
={
∞, s < 0.
Example 8.1.3 :
Find the Laplace transform of f (t) = e , where a is a constant.
at
∞
−st at
F (s) = ∫ e e dt.
0
Example 8.1.4
[Find the Laplace transforms of f (t) = sin ωt and g(t) = cos ωt , where ω is a constant.
Define
∞
−st
F (s) = ∫ e sin ωt dt (8.1.6)
0
and
∞
−st
G(s) = ∫ e cos ωt dt. (8.1.7)
0
so
ω
F (s) = G(s). (8.1.8)
s
so
Example 8.1.5
Use the table of Laplace transforms to find L(t 3
e
4t
) .
The table includes the transform pair
n at
n!
t e ↔ .
n+1
(s − a)
We’ll sometimes write Laplace transforms of specific functions without explicitly stating how they are obtained. In such cases
you should refer to the table of Laplace transforms.
be constants. Then
Proof
We give the proof for the case where n = 2 . If s > s then 0
∞
−st
L(c1 f1 + c2 f2 ) = ∫ e (c1 f1 (t) + c2 f2 (t))) dt
0
∞ ∞
−st −st
= c1 ∫ e f1 (t) dt + c2 ∫ e f2 (t) dt
0 0
= c1 L(f1 ) + c2 L(f2 ).
at
1
L(e ) =
s−a
Therefore
1 bt 1 −bt
L(cosh bt) = L( e + e )
2 2
1 bt 1 −bt
= L(e )+ L(e ) (linearity property) (8.1.9)
2 2
1 1 1 1
= + ,
2 s−b 2 s+b
where the first transform on the right is defined for s > b and the second for s > −b ; hence, both are defined for s > |b| .
Simplifying the last expression in Equation 8.1.9 yields
s
L(cosh bt) = , s > |b|.
s2 − b2
The next theorem enables us to start with known transform pairs and derive others. (For other results of this kind, see
Exercises 8.1.6 and 8.1.13.)
is the Laplace transform of f (t) for s > s , then F (s − a) is the Laplace transform of e
0
at
f (t) for s > s
0 +a .
Proof
Replacing s by s − a in Equation 8.1.10yields
∞
−(s−a)t
F (s − a) = ∫ e f (t) dt (8.1.11)
0
Example 8.1.7
Use Theorem 8.1.3 and the known Laplace transforms of 1, t , cos ωt, and sin ωt to find
at at λt λt
L(e ), L(te ), L(e sin ωt), and L(e cos ωt).
Solution
1 at 1
1 ↔ , s > 0 e ↔ , s > a
s (s−a)
1 at 1
t ↔ , s > 0 te ↔ , s > a
2 2
s (s−a)
ω λt ω
sin ωt ↔ , s > 0 e sin ωt ↔ , s > λ
2 2 2
s +ω (s−λ ) +ω2
s λt s−λ
cosωt ↔ , s > 0 e sin ωt ↔ , s > λ
2 2 2
s +ω (s−λ ) +ω2
for every real number s . Hence, the function f (t) = e does not have a Laplace transform.
2
t
Our next objective is to establish conditions that ensure the existence of the Laplace transform of a function. We first review
some relevant definitions from calculus.
Recall that a limit
lim f (t)
t→t0
Recall also that f is continuous at a point t in an open interval (a, b) if and only if
0
which is equivalent to
If f (t
0 +) and f (t 0 −) have finite but distinct values, we say that f has a jump discontinuity at t , and 0
f (t0 +) − f (t0 −)
we say that f has a removable discontinuity at t (Figure 8.1.2). This terminolgy is appropriate since a function
0 f with a
removable discontinuity at t can be made continuous at t by defining (or redefining)
0 0
Note
We know from calculus that a definite integral isn’t affected by changing the values of its integrand at isolated points.
Therefore, redefining a function f to make it continuous at removable discontinuities does not change L(f ).
T
−st
∫ e f (t) dt
0
Figure 8.1.2
converges for s in some interval (s , ∞). For example, we noted earlier that Equation 8.1.13 diverges for all s if f (t) = e .
0
t
Stated informally, this occurs because e increases too rapidly as t → ∞ . The next definition provides a constraint on the
2
t
growth of a function that guarantees convergence of its Laplace transform for s in some interval (s , ∞) . 0
s0 t
|f (t)| ≤ M e , t ≥ t0 . (8.1.14)
In situations where the specific value of s is irrelevant we say simply that f is of exponential order.
0
The next theorem gives useful sufficient conditions for a function f to have a Laplace transform. The proof is sketched in
Exercise 8.1.10.
Theorem 8.1.6
If f is piecewise continuous on [0, ∞) and of exponential order s , then L(f ) is defined for s > s .
0 0
Note
We emphasize that the conditions of Theorem 8.1.6 are sufficient, but not necessary, for f to have a Laplace transform.
For example, Exercise 8.1.14(c) shows that f may have a Laplace transform even though f isn’t of exponential order
Example 8.1.8
If f is bounded on some interval [t 0, ∞) , say
|f (t)| ≤ M , t ≥ t0 ,
then Equation 8.1.14 holds with s = 0 , so f is of exponential order zero. Thus, for example, sin ωt and cos ωt are of
0
exponential order zero, and Theorem 8.1.6 implies that L(sin ωt) and L(cos ωt) exist for s > 0 . This is consistent with
the conclusion of Example 8.1.4.
Example 8.1.9
It can be shown that if lim e
t→∞ f (t) exists and is finite then f is of exponential order s (Exercise 8.1.9). If α is any
−s0 t
0
−s0 t α
lim e t = 0,
t→∞
s > 0 . This is consistent with the results of Example 8.1.2 and Exercises 8.1.6 and 8.1.8.
Example 8.1.10
Find the Laplace transform of the piecewise continuous function
1, 0 ≤ t < 1,
f (t) = {
−t
−3 e , t ≥ 1.
Solution
Since f is defined by different formulas on [0, 1) and [1, ∞), we write
∞ 1 ∞
−st −st −st −t
F (s) = ∫ e f (t) dt = ∫ e (1) dt + ∫ e (−3 e ) dt.
0 0 1
Since
−s
1 1−e
−st s ≠0
s
∫ e dt = {
0 1 s =0
and
∞ ∞ −(s+1)
−st −t −(s+1)t
3e
∫ e (−3 e ) dt = −3 ∫ e dt = − , s > −1,
1 1
s+1
it follows that
−s −( s+1)
1−e e
−3 s > −1, s ≠ 0
s s+1
F (s) = {
3
1− s =0
e
Note
In Section 8.4 we’ll develop a more efficient method for finding Laplace transforms of piecewise continuous functions.
Example 8.1.11
We stated earlier that
∞
2
−st t
∫ e e dt = ∞
0
for all s , so Theorem 8.1.6 implies that f (t) = e is not of exponential order, since
2
t
2
t
e 1 2
t −s0 t
lim = lim e = ∞,
s0 t
t→∞ Me t→∞ M
so
2
t s0 t
e > Me
for sufficiently large values of t , for any choice of M and s (Exercise 8.1.3). 0
c. sinh bt
d. e − 3e
2t t
e. t 2
2. Use the table of Laplace transforms to find the Laplace transforms of the following functions.
a. cosh t sin t
b. sin t 2
c. cos 2t 2
d. cosh t 2
e. t sinh 2t
f. sin t cos t
g. sin(t + ) π
h. cos 2t − cos 3t
i. sin 2t + cos 4t
3. Show that
∞
2
−st t
∫ e e dt = ∞
0
a. f (t) = ⎨ t − 4, 2 ≤ t < 3,
⎩
⎪
1, t ≥ 3.
2
⎧t
⎪ + 2, 0 ≤ t < 1,
b. f (t) = ⎨ 4, t = 1,
⎩
⎪
t, t > 1.
⎧ t, 0 ≤ t < 1,
⎪
⎪
⎪
⎪
⎪ 2, t = 1,
⎪
d. f (t) = ⎨ 2 − t, 1 ≤ t < 2,
⎪
⎪
⎪
⎪ 3, t = 2,
⎪
⎩
⎪
6, t > 2.
1, 0 ≤ t < 4,
b. f (t) = {
t, t ≥ 4.
t, 0 ≤ t < 1,
c. f (t) = {
1, t ≥ 1.
6. Prove that if f (t) ↔ F (s) then t f (t) ↔ (−1) F (s) . HINT: Assume that it's permissible to differentiate the integral
k k (k)
∞
∫
0
e
−st
f (t)dt with respect to s under the integral sign.
7. Use the known Laplace transforms
ω s−λ
λt λt
L(e sin ωt) = and L(e cos ωt) =
2 2 2 2
(s − λ ) +ω (s − λ ) +ω
n
n!
L(t ) = , n = integer.
n+1
s
9.
a. Show that if lim e f (t) exists and is finite then f is of exponential order s .
t→∞
−s0 t
0
b. Show that if f is of exponential order s then lim e f (t) = 0 for all s > s .
0 t→∞
−st
0
c. Show that if f is of exponential order s and g(t) = f (t + τ ) where τ > 0 , then g is also of exponential order s .
0 0
Theorem 8.1E . 1
Let g be integrable on [0, T ] for every T > 0. Suppose there’s a function w defined on some interval [τ , ∞) (with τ ≥0 )
∞ ∞
such that |g(t)| ≤ w(t) for t ≥ τ and ∫ τ
converges. Then ∫ g(t) dt converges.
w(t) dt
0
Use Theorem 8.1E. 1 to show that if f is piecewise continuous on [0, ∞) and of exponential order s0 , then f has a Laplace
transform F (s) defined for s > s . 0
11. Prove: If f is piecewise continuous and of exponential order then lim s→∞ F (s) = 0 .
12. Prove: If f is continuous on [0, ∞) and of exponential order s 0 >0 , then
t
1
L (∫ f (τ ) dτ ) = L(f ), s > s0 .
0
s
T T
−st −(s−s0 )T −(s−s0 )t
∫ e f (t)dt = e g(T ) + (s − s0 ) ∫ e g(t)dt
0 0
b. Show that if L(f ) exists for s = s then it exists for s > s . Show that the function
0 0
2 2
t t
f (t) = te cos(e )
has a Laplace transform defined for s > 0 , even though f isn’t of exponential order.
c. Show that the function
has a Laplace transform defined for s > 0 , even though f isn’t of exponential order.
15. Use the table of Laplace transforms and the result of Exercise 8.1.13 to find the Laplace transforms of the following
functions.
a. sin ωt
t
(ω > 0)
b. cos ωt−1
t
(ω > 0)
at bt
c. e −e
d. cosh t−1
t
2
e. sinh
t
t
if α is a nonnegative integer. Show that this formula is valid for any α > −1 . HINT: Change the variable of integration in
the integral for Γ(α + 1) .
17. Suppose f is continuous on [0, T ] and f (t + T ) = f (t) for all t ≥0 . (We say in this case that f is periodic with period
T .)
a. Conclude from Theorem 8.1.6 that the Laplace transform of f is defined for s > 0 .
b. Show that
T
1 −st
F (s) = ∫ e f (t) dt, s > 0.
−sT
1 −e 0
HINT: Write
∞ (n+1)T
−st
F (s) = ∑ ∫ e f (t)dt
n=0 nT
c. f (t) = | sin t|
To solve differential equations with the Laplace transform, we must be able to obtain f from its transform F . There’s a
formula for doing this, but we can’t use it because it requires the theory of functions of a complex variable. Fortunately, we
can use the table of Laplace transforms to find inverse transforms that we’ll need.
Example 8.2.1 :
Use the table of Laplace transforms to find
a. −1
1
L ( )
2
s −1
b. −1
s
L ( ).
2
s +9
Solution a
Setting b = 1 in the transform pair
b
sinh bt ↔
2 2
s −b
shows that
−1
1
L ( ) = sinh t.
s2 − 1
Solution b
Setting ω = 3 in the transform pair
s
cos ωt ↔
2 2
s +ω
shows that
s
−1
L ( ) = cos 3t.
2
s +9
The next theorem enables us to find inverse transforms of linear combinations of transforms in the table. We omit the proof.
−1 −1 −1 −1
L (c1 F1 + c2 F2 + ⋯ + cn Fn ) = c1 L (F1 ) + c2 L (F2 ) + ⋯ + cn L Fn .
−1
8 7
L ( + ).
2
s+5 s +3
Solution
From the table of Laplace transforms in Section 8.8,,
1 ω
at
e ↔ and sin ωt ↔ .
2 2
s−a s +ω
–
Theorem 8.2.1 with a = −5 and ω = √3 yields
−1
8 7 −1
1 −1
1
L ( + ) = 8L ( ) + 7L ( )
2 2
s+5 s +3 s+5 s +3
–
1 7 √3
−1 −1
= 8L ( )+ –L ( )
s+5 √3 s2 + 3
−5t
7 –
= 8e + – sin √3t.
√3
Example 8.2.3
Find
−1
3s + 8
L ( ).
2
s + 2s + 5
Solution
Completing the square in the denominator yields
3s + 8 3s + 8
= .
2 2
s + 2s + 5 (s + 1 ) +4
and write
3s + 8 3s + 3 5
−1 −1 −1
L ( ) =L ( ) +L ( )
2 2 2
(s + 1 ) +4 (s + 1 ) +4 (s + 1 ) +4
−1
s+1 5 −1
2
= 3L ( )+ L ( )
2
(s + 1 ) +4 2 (s + 1 )2 + 4
−t
5
=e (3 cos 2t + sin 2t).
2
Note
We’ll often write inverse Laplace transforms of specific functions without explicitly stating how they are obtained. In
such cases you should refer to the table of Laplace transforms in Section 8.8.
where P and Q are polynomials in s with no common factors. Since it can be shown that lim F (s) = 0 if F is a Laplace
s→∞
transform, we need only consider the case where degree(P ) < degree(Q). To obtain L (F ), we find the partial fraction
−1
expansion of F , obtain inverse transforms of the individual terms in the expansion from the table of Laplace transforms, and
use the linearity property of the inverse transform. The next two examples illustrate this.
Example 8.2.4
Find the inverse Laplace transform of
3s + 2
F (s) = . (8.2.1)
2
s − 3s + 2
Solution
(Method 1)
Factoring the denominator in Equation 8.2.1 yields
3s + 2
F (s) = . (8.2.2)
(s − 1)(s − 2)
3s + 2 = (s − 2)A + (s − 1)B.
and
−1 −1
1 −1
1
t 2t
L (F ) = −5 L ( ) + 8L ( ) = −5 e + 8 e .
s−1 s−2
(Method 2) We don’t really have to multiply Equation 8.2.3 by (s − 1)(s − 2) to compute A and B . We can obtain A by
simply ignoring the factor s − 1 in the denominator of Equation 8.2.2 and setting s = 1 elsewhere; thus,
3s + 2 ∣ 3 ⋅ 1 +2
A = ∣ = = −5. (8.2.4)
s−2 ∣ s=1
1 −2
Similarly, we can obtain B by ignoring the factor s−2 in the denominator of Equation 8.2.2 and setting s =2
elsewhere; thus,
3s + 2 ∣ 3 ⋅ 2 +2
B = ∣ = = 8. (8.2.5)
s − 1 ∣s=2 2 −1
and setting s = 1 leads to Equation 8.2.4. Similarly, multiplying Equation 8.2.3 by s − 2 yields
3s + 2 A
= (s − 2) +B
s−1 s−2
The shortcut employed in the second solution of Example 8.2.4 is Heaviside’s method. The next theorem states this method
formally. For a proof and an extension of this theorem, see Exercise 8.2.10.
Theorem 8.2.2
Suppose
P (s)
F (s) = , (8.2.6)
(s − s1 )(s − s2 ) ⋯ (s − sn )
A1 A2 An
F (s) = + +⋯ + ,
s − s1 s − s2 s − sn
where A can be computed from Equation 8.2.6 by ignoring the factor s − s and setting s = s elsewhere.
i i i
Example 8.2.5
Find the inverse Laplace transform of
2
6 + (s + 1)(s − 5s + 11)
F (s) = . (8.2.7)
s(s − 1)(s − 2)(s + 1)
Solution
The partial fraction expansion of Equation 8.2.7 is of the form
A B C D
F (s) = + + + . (8.2.8)
s s−1 s−2 s+1
To find A , we ignore the factor s in the denominator of Equation 8.2.7 and set s = 0 elsewhere. This yields
6 + (1)(11) 17
A = = .
(−1)(−2)(1) 2
6 + 3(5) 7
C = = ,
2(1)(3) 2
and
6
D = = −1.
(−1)(−2)(−3)
Therefore
17 1 10 7 1 1
F (s) = − + −
2 s s−1 2 s−2 s+1
and
17 t
7 2t −t
= − 10 e + e −e .
2 2
Note
We didn’t “multiply out” the numerator in Equation 8.2.7 before computing the coefficients in Equation , since it
8.2.8
Example 8.2.6
Find the inverse Laplace transform of
8 − (s + 2)(4s + 10)
F (s) = . (8.2.9)
2
(s + 1)(s + 2)
Solution
The form for the partial fraction expansion is
A B C
F (s) = + + . (8.2.10)
2
s+1 s+2 (s + 2)
Because of the repeated factor (s + 2) in Equation 8.2.9, Heaviside’s method doesn’t work. Instead, we find a common
2
The two sides of this equation are polynomials of degree two. From a theorem of algebra, they will be equal for all s if
they are equal for any three distinct values of s . We may determine A , B and C by choosing convenient values of s .
The left side of Equation 8.2.12 suggests that we take s = −2 to obtain C = −8 , and s = −1 to obtain A = 2 . We can
now choose any third value of s to determine B . Taking s = 0 yields 4A + 2B + C = −12 . Since A = 2 and C = −8
this implies that B = −6 . Therefore
2 6 8
F (s) = − −
2
s+1 s+2 (s + 2)
and
1 1 1
−1 −1 −1 −1
L (F ) = 2 L ( ) − 6L ( ) − 8L ( )
2
s+1 s+2 (s + 2)
−t −2t −2t
= 2e − 6e − 8te .
Example 8.2.7
Find the inverse Laplace transform of
2
s − 5s + 7
F (s) = .
(s + 2)3
Solution
The form for the partial fraction expansion is
The easiest way to obtain A , B , and C is to expand the numerator in powers of s + 2 . This yields
2 2 2
s − 5s + 7 = [(s + 2) − 2 ] − 5[(s + 2) − 2] + 7 = (s + 2 ) − 9(s + 2) + 21.
Therefore
2
(s + 2 ) − 9(s + 2) + 21
F (s) =
3
(s + 2)
1 9 21
= − +
2 3
s+2 (s + 2) (s + 2)
and
−1 −1
1 −1
1 21 −1
2
L (F ) = L ( ) − 9L ( )+ L ( )
2
s+2 (s + 2) 2 (s + 2)3
21
−2t 2
=e (1 − 9t + t ).
2
Example 8.2.8
Find the inverse Laplace transform of
1 − s(5 + 3s)
F (s) = . (8.2.13)
2
s [(s + 1 ) + 1]
Solution
One form for the partial fraction expansion of F is
A Bs + C
F (s) = + . (8.2.14)
2
s (s + 1 ) +1
However, we see from the table of Laplace transforms that the inverse transform of the second fraction on the right of
Equation 8.2.14 will be a linear combination of the inverse transforms
−t −t
e cos t and e sin t
of
s+1 1
and
(s + 1 )2 + 1 (s + 1 )2 + 1
A B(s + 1) + C
F (s) = + . (8.2.15)
2
s (s + 1 ) +1
This is true for all s if it is true for three distinct values of s . Choosing s = 0 , −1, and 1 yields the system
A−C = 3
5A + 2B + C = −7.
Therefore
−1
1 −1
1 7 −1
s+1 5 −1
1
L (F ) = L ( )− L ( )− L ( )
2 2
2 s 2 (s + 1 ) +1 2 (s + 1 ) +1
1 7 5
−t −t
= − e cos t − e sin t.
2 2 2
Example 8.2.9 :
Find the inverse Laplace transform of
8 + 3s
F (s) = . (8.2.17)
2 2
(s + 1)(s + 4)
Solution
The form for the partial fraction expansion is
A + Bs C + Ds
F (s) = + .
2 2
s +1 s +4
The coefficients A , B , C and D can be obtained by finding a common denominator and equating the resulting numerator
to the numerator in Equation 8.2.17. However, since there’s no first power of s in the denominator of Equation 8.2.17,
there’s an easier way: the expansion of
1
F1 (s) =
2 2
(s + 1)(s + 4)
1 1 1 1
= ( − ).
2 2 2 2
(s + 1)(s + 4) 3 s +1 s +4
Therefore
−1
8 4
L (F ) = sin t + cos t − sin 2t − cos 2t.
3 3
b. 2s−4
s2 −4s+13
c. s2 +4s+20
1
d. 2
s +9
2
2
s −1
e. 2
2
( s +1 )
f. 1
2
(s−2 ) −4
g. 2
12s−24
2
( s −4s+85 )
h. 2
2
(s−3 ) −9
2
i. s −4s+3
2
2
( s −4s+5 )
2. Use Theorem 8.2.1 and the table of Laplace transforms to find the inverse Laplace transform.
a. 2s+3
4
(s−7)
2
s −1
b. 6
(s−2)
s+5
c. 2
s +6s+18
d. 2s+1
s2 +9
e. 2
s +2s+1
s
s+1
f. 2
s −9
3 2
g. s +2 s −s−3
4
(s+1)
h. 2s+3
2
(s−1 ) +4
i. 1
s
−
s2 +1
s
j. 3s+4
s2 −1
k. s−1
3
+
4s+1
s +9
2
l. 3
2
−
2s+6
s +4
2
(s+2)
7+(s+4)(18−3s)
b. (s−3)(s−1)(s+4)
2+(s−2)(3−2s)
c. (s−2)(s+2)(s−3)
3−(s−1)(s+1)
d. (s+4)(s−2)(s−1)
2
3+(s−2)(10−2s−s )
e. (s−2)(s+2)(s−1)(s+3)
2
3+(s−3)(2 s +s−21)
f. (s−3)(s−1)(s+4)(s−2)
c. (s−2)( s +2s+5)
3s+2
2
d. 3 s +2s+1
2
(s−1 ) (s+2)(s+3)
2
2 s +s+3
e. 2 2
(s−1 ) (s+2 )
3s+2
f. 2
( s2 +1)(s−1 )
5. Use the method of Example 8.2.9 to find the inverse Laplace transform.
a. 3s+2
( s2 +4)( s2 +9)
b. 2
−4s+1
( s +1)( s +16)
2
c. 2
5s+3
( s +1)( s +4)
2
−s+1
d. 2
(4 s +1)( s +1)
2
e. 2
17s−34
( s +16)(16 s +1)
2
f. 2s−1
(4 s2 +1)(9 s2 +1)
( s2 −2s+5)( s2 +2s+10)
b. 2
8s+56
( s −6s+13)( s +2s+5)
2
c. 2
( s +4s+5)( s −4s+13)
s+9
2
d. ( s2 −4s+5)( s2 −6s+13)
3s−2
3s−1
e. 2
( s −2s+2)( s +2s+5)
2
f. 2
20s+40
(4 s −4s+5)(4 s +4s+5)
2
s( s2 +1)
b. (s−1)( s −2s+17)
1
2
c. 3s+2
(s−2)( s2 +2s+10)
34−17s
d. (2s−1)( s −2s+5)
2
e. s+2
(s−3)( s +2s+5)
2
f. 2s−2
(s−2)( s2 +2s+10)
b. 2
( s +2s+2)( s −1)
s+2
2
c. 2
2s−1
( s −2s+2)(s+1)(s−2)
d. s−6
( s2 −1)( s2 +4)
2s−3
e. s(s−2)( s −2s+5)
2
f. 2
5s−15
( s −4s+13)(s−2)(s−1)
9. Given that f (t) ↔ F (s) , find the inverse Laplace transform of F (as − b) , where a > 0 .
10.
a. If s , s , …, s are distinct and P is a polynomial of degree less than n , then
1 2 n
P (s) A1 A2 An
= + +⋯ + .
(s − s1 )(s − s2 ) ⋯ (s − sn ) s − s1 s − s2 s − sn
Multiply through by s − si to show that Ai can be obtained by ignoring the factor s − si on the left and setting s = si
elsewhere.
is P (s )/Q (s ).
1 1 1
Theorem 8.3.1
Suppose f is continuous on [0, ∞) and of exponential order s , and f is piecewise continuous on [0, ∞). Then f and f 0
′ ′
′
L(f ) = sL(f ) − f (0). (8.3.1)
Proof
We know from Theorem 8.1.6 that L(f ) is defined for s > s . We first consider the case where 0 f
′
is continuous on
[0, ∞). Integration by parts yields
T T
T
−st ′ −st ∣ −st
∫ e f (t) dt = e f (t) +s∫ e f (t) dt
∣
0
0 0
T
−sT −st
=e f (T ) − f (0) + s ∫ e f (t) dt (8.3.2)
0
∞ ∞
−st ′ −st
∫ e f (t) dt = −f (0) + s ∫ e f (t) dt
0 0
= −f (0) + sL(f ),
ti ti
ti
−st ′ −st ∣ −st
∫ e f (t) dt = e f (t) +s∫ e f (t) dt
∣
ti− 1
ti− 1 ti− 1
ti
−sti −sti− 1 −st
=e f (ti ) − e f (ti−1 ) + s ∫ e f (t) dt.
ti− 1
Example 8.3.1 :
In Example 8.1.4 we saw that
s
L(cos ωt) = .
2 2
s +ω
Solution
Therefore
ω
L(sin ωt) = ,
2 2
s +ω
In Section 2.1 we showed that the solution of the initial value problem
′
y = ay, y(0) = y0 , (8.3.3)
is y = y 0e
at
. We’ll now obtain this result by using the Laplace transform.
Let Y (s) = L(y) be the Laplace transform of the unknown solution of Equation 8.3.3 . Taking Laplace transforms of both
sides of Equation 8.3.3 yields
′
L(y ) = L(ay),
or
sY (s) − y0 = aY (s).
so
−1 −1
y0 −1
1 at
y =L (Y (s)) = L ( ) = y0 L ( ) = y0 e ,
s−a s−a
Theorem 8.3.2
Suppose f and f are continuous on [0, ∞) and of exponential order
′
s0 , and that f
′′
is piecewise continuous on [0, ∞).
′
L(f ) = sL(f ) − f (0), (8.3.4)
and
′′ 2 ′
L(f ) = s L(f ) − f (0) − sf (0). (8.3.5)
Proof
Theorem 8.3.1 implies that L(f ) exists and satisfies Equation 8.3.4 for s > s . To prove that L(f ) exists and
′
0
′′
satisfies Equation 8.3.5 for s > s , we first apply Theorem 8.3.1 to g = f . Since g satisfies the hypotheses of
0
′
′
L(g ) = sL(g) − g(0)
Example 8.3.2
Use the Laplace transform to solve the initial value problem
y′′ − 6y′ + 5y = 3e^{2t}, y(0) = 2, y′(0) = 3. (8.3.6)
Solution
Taking Laplace transforms of both sides of the differential equation in Equation 8.3.6 yields
L(y′′ − 6y′ + 5y) = L(3e^{2t}) = 3/(s − 2),
which we rewrite as
L(y′′) − 6L(y′) + 5L(y) = 3/(s − 2). (8.3.7)
Now denote L(y) = Y(s). Theorem 8.3.2 and the initial conditions in Equation 8.3.6 imply that
L(y′) = sY(s) − y(0) = sY(s) − 2
and
L(y′′) = s²Y(s) − y′(0) − sy(0) = s²Y(s) − 3 − 2s.
Substituting from the last two equations into Equation 8.3.7 yields
(s²Y(s) − 3 − 2s) − 6(sY(s) − 2) + 5Y(s) = 3/(s − 2).
Therefore
(s² − 6s + 5)Y(s) = 3/(s − 2) + (3 + 2s) − 12, (8.3.8)
so
(s − 5)(s − 1)Y(s) = (3 + (s − 2)(2s − 9))/(s − 2),
and
Y(s) = (3 + (s − 2)(2s − 9)) / ((s − 2)(s − 5)(s − 1)).
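For readers who want to check the algebra, here is a short SymPy sketch that expands Y(s) in partial fractions, inverts it, and confirms the initial value problem 8.3.6. (SymPy attaches a Heaviside(t) factor to each inverse transform; it is discarded below since only t ≥ 0 matters.)

import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = (3 + (s - 2)*(2*s - 9)) / ((s - 2)*(s - 5)*(s - 1))
y = sp.inverse_laplace_transform(sp.apart(Y, s), s, t)
y = y.subs(sp.Heaviside(t), 1)           # keep only the t >= 0 branch
print(sp.expand(y))                      # 5*exp(t)/2 - exp(2*t) + exp(5*t)/2
# confirm the differential equation and the initial conditions in (8.3.6)
print(sp.simplify(sp.diff(y, t, 2) - 6*sp.diff(y, t) + 5*y - 3*sp.exp(2*t)))   # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))                                  # 2, 3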
Taking Laplace transforms of both sides of the differential equation in Equation 8.3.9 yields
aL(y′′) + bL(y′) + cL(y) = F(s). (8.3.10)
Now let Y(s) = L(y). Theorem 8.3.2 and the initial conditions in Equation 8.3.9 imply that
L(y′) = sY(s) − k_0 and L(y′′) = s²Y(s) − k_1 − k_0 s.
Substituting these into Equation 8.3.10 yields
a(s²Y(s) − k_1 − k_0 s) + b(sY(s) − k_0) + cY(s) = F(s), (8.3.11)
where p(s) = as² + bs + c is the characteristic polynomial of the complementary equation for Equation 8.3.9. Using this and moving the terms involving k_0 and k_1 to the right side of Equation 8.3.11 yields
p(s)Y(s) = F(s) + a(k_1 + k_0 s) + bk_0. (8.3.12)
This equation corresponds to Equation 8.3.8 of Example 8.3.2. Having established the form of this equation in the general case, it is preferable to go directly from the initial value problem to this equation. You may find it easier to remember Equation 8.3.12 rewritten as
p(s)Y(s) = F(s) + a(y′(0) + sy(0)) + by(0). (8.3.13)
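Equation 8.3.13 is easy to turn into a small computational helper. The sketch below is ours (the function name laplace_ivp_Y is not from the text): it builds Y(s) directly from a, b, c, F(s), y(0), and y′(0), and is applied to Example 8.3.2 as a check.

import sympy as sp

s = sp.symbols('s')

def laplace_ivp_Y(a, b, c, F, y0, yp0):
    """Y(s) for a y'' + b y' + c y = f(t), y(0)=y0, y'(0)=yp0, given F = L(f), via Equation 8.3.13."""
    p = a*s**2 + b*s + c                    # characteristic polynomial p(s)
    return sp.apart((F + a*(yp0 + s*y0) + b*y0) / p, s)

# Example 8.3.2 again: a=1, b=-6, c=5, f(t) = 3 e^{2t}, y(0)=2, y'(0)=3
print(laplace_ivp_Y(1, -6, 5, 3/(s - 2), 2, 3))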
Example 8.3.3
Use the Laplace transform to solve the initial value problem
2y′′ + 3y′ + y = 8e^{−2t}, y(0) = −4, y′(0) = 2. (8.3.14)
Solution
The characteristic polynomial is
p(s) = 2s² + 3s + 1 = (2s + 1)(s + 1)
and
F(s) = L(8e^{−2t}) = 8/(s + 2),
(Figure 8.3.1).
Figure 8.3.1: y = (4/3)e^{−t/2} − 8e^{−t} + (8/3)e^{−2t}
Example 8.3.4
Solve the initial value problem
y′′ + 2y′ + 2y = 1, y(0) = −3, y′(0) = 1. (8.3.15)
Solution
The characteristic polynomial is
p(s) = s² + 2s + 2 = (s + 1)² + 1
and
F(s) = L(1) = 1/s,
Figure 8.3.2: y = 1/2 − (7/2)e^{−t} cos t − (5/2)e^{−t} sin t
Note
2. y ′′ ′
− y − 6y = 2, y(0) = 1,
′
y (0) = 0
3. y ′′ ′
+ y − 2y = 2 e
3t
, y(0) = −1,
′
y (0) = 4
4. y ′′
− 4y = 2 e
3t
, y(0) = 1,
′
y (0) = −1
5. y ′′ ′
+ y − 2y = e
3t
, y(0) = 1,
′
y (0) = −1
6. y ′′ ′
+ 3 y + 2y = 6 e ,
t
y(0) = 1,
′
y (0) = −1
7. y ′′
+ y = sin 2t, y(0) = 0,
′
y (0) = 1
8. y ′′ ′
− 3 y + 2y = 2 e
3t
, y(0) = 1,
′
y (0) = −1
9. y ′′
− 3 y + 2y = e
′ 4t
, y(0) = 1,
′
y (0) = −2
10. y ′′
− 3 y + 2y = e
′ 3t
, y(0) = −1,
′
y (0) = −4
11. y ′′ ′
+ 3 y + 2y = 2 e ,
t
y(0) = 0,
′
y (0) = −1
12. y ′′ ′
+ y − 2y = −4, y(0) = 2,
′
y (0) = 3
13. y ′′
+ 4y = 4, y(0) = 0,
′
y (0) = 1
14. y ′′ ′
− y − 6y = 2, y(0) = 1,
′
y (0) = 0
15. y ′′ ′
+ 3 y + 2y = e ,
t
y(0) = 0,
′
y (0) = 1
16. y ′′
− y = 1, y(0) = 1,
′
y (0) = 0
17. y ′′
+ 4y = 3 sin t, y(0) = 1,
′
y (0) = −1
18. y ′′
+y
′
= 2e
3t
, y(0) = −1,
′
y (0) = 4
19. y ′′
+ y = 1, y(0) = 2,
′
y (0) = 0
20. y ′′
+ y = t, y(0) = 0,
′
y (0) = 2
21. y ′′
+ y = t − 3 sin 2t, y(0) = 1,
′
y (0) = −3
22. y ′′ ′
+ 5 y + 6y = 2 e
−t
, y(0) = 1,
′
y (0) = 3
23. y ′′ ′
+ 2 y + y = 6 sin t − 4 cos t, y(0) = −1, y (0) = 1
′
24. y ′′ ′
− 2 y − 3y = 10 cos t, y(0) = 2,
′
y (0) = 7
25. y ′′
+ y = 4 sin t + 6 cos t, y(0) = −6, y (0) = 2
′
26. y ′′
+ 4y = 8 sin 2t + 9 cos t, y(0) = 1,
′
y (0) = 0
27. y ′′ ′
− 5 y + 6y = 10 e cos t,
t
y(0) = 2,
′
y (0) = 1
28. y ′′ ′
+ 2 y + 2y = 2t, y(0) = 2,
′
y (0) = −7
29. y ′′ ′
− 2 y + 2y = 5 sin t + 10 cos t, y(0) = 1, y (0) = 2
′
30. y ′′ ′
+ 4 y + 13y = 10 e
−t
− 36 e ,
t
y(0) = 0, y (0) = −16
′
31. y ′′
+ 4 y + 5y = e
′ −t
(cos t + 3 sin t), y(0) = 0,
′
y (0) = 4
Q8.3.2
32. 2y ′′ ′
− 3 y − 2y = 4 e ,
t
y(0) = 1, y (0) = −2
′
34. 2y ′′ ′
+ 2 y + y = 2t,
′
y(0) = 1, y (0) = −1
35. 4y ′′ ′
− 4 y + 5y = 4 sin t − 4 cos t, y(0) = 0, y (0) = 11/17
′
36. 4y ′′ ′
+ 4 y + y = 3 sin t + cos t, y(0) = 2, y (0) = −1
′
37. 9y ′′ ′
+ 6y + y = 3e
3t
, y(0) = 0, y (0) = −3
′
y_1 = L^{−1}((as + b)/(as² + bs + c)) and y_2 = L^{−1}(a/(as² + bs + c)).
Show that
y_1(0) = 1, y_1′(0) = 0 and y_2(0) = 0, y_2′(0) = 1.
HINT: Use the Laplace transform to solve the initial value problems
ay′′ + by′ + cy = 0, y(0) = 1, y′(0) = 0
ay′′ + by′ + cy = 0, y(0) = 0, y′(0) = 1
f(t) ↔ F(s)
1 ↔ 1/s (s > 0)
t^n (n = integer > 0) ↔ n!/s^{n+1} (s > 0)
t^p, p > −1 ↔ Γ(p + 1)/s^{p+1} (s > 0)
e^{at} ↔ 1/(s − a) (s > a)
t^n e^{at} (n = integer > 0) ↔ n!/(s − a)^{n+1} (s > a)
cos ωt ↔ s/(s² + ω²) (s > 0)
sin ωt ↔ ω/(s² + ω²) (s > 0)
e^{λt} cos ωt ↔ (s − λ)/((s − λ)² + ω²) (s > λ)
e^{λt} sin ωt ↔ ω/((s − λ)² + ω²) (s > λ)
cosh bt ↔ s/(s² − b²) (s > |b|)
sinh bt ↔ b/(s² − b²) (s > |b|)
t cos ωt ↔ (s² − ω²)/(s² + ω²)² (s > 0)
t sin ωt ↔ 2ωs/(s² + ω²)² (s > 0)
sin ωt − ωt cos ωt ↔ 2ω³/(s² + ω²)² (s > 0)
ωt − sin ωt ↔ ω³/(s²(s² + ω²)) (s > 0)
(1/t) sin ωt ↔ arctan(ω/s) (s > 0)
e^{at} f(t) ↔ F(s − a)
t^k f(t) ↔ (−1)^k F^{(k)}(s)
f(ωt) ↔ (1/ω)F(s/ω), ω > 0
u(t − τ) ↔ e^{−τs}/s (s > 0)
u(t − τ)f(t − τ) (τ > 0) ↔ e^{−τs}F(s)
δ(t − a) ↔ e^{−as} (s > 0)
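A few of these entries can be spot-checked with SymPy's laplace_transform; this is only a sketch, with the symbols assumed positive so that the convergence conditions in the table are met.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
w, lam, tau = sp.symbols('omega lambda tau', positive=True)

print(sp.laplace_transform(sp.cos(w*t), t, s, noconds=True))            # s/(s**2 + omega**2)
print(sp.laplace_transform(t*sp.sin(w*t), t, s, noconds=True))          # 2*omega*s/(s**2 + omega**2)**2
print(sp.laplace_transform(sp.exp(lam*t)*sp.sin(w*t), t, s, noconds=True))  # omega/((s - lambda)**2 + omega**2)
print(sp.laplace_transform(sp.Heaviside(t - tau), t, s, noconds=True))  # exp(-s*tau)/s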
9.1: Introduction to Linear Higher Order Equations
An nth order differential equation is said to be linear if it can be written in the form
y^{(n)} + p_1(x)y^{(n−1)} + ⋯ + p_n(x)y = f(x). (9.1.1)
We considered equations of this form with n = 1 in Section 2.1 and with n = 2 in Chapter 5. In this chapter n is an arbitrary
positive integer. In this section we sketch the general theory of linear n th order equations. Since this theory has already been
discussed for n = 2 in Sections 5.1 and 5.3, we’ll omit proofs.
For convenience, we consider linear differential equations written as
P_0(x)y^{(n)} + P_1(x)y^{(n−1)} + ⋯ + P_n(x)y = F(x), (9.1.2)
which can be rewritten as Equation 9.1.1 on any interval on which P_0 has no zeros, with p_1 = P_1/P_0, …, p_n = P_n/P_0 and f = F/P_0. For simplicity, throughout this chapter we'll abbreviate the left side of Equation 9.1.2 by Ly; that is,
Ly = P_0 y^{(n)} + P_1 y^{(n−1)} + ⋯ + P_n y.
We say that the equation Ly = F is normal on (a, b) if P_0, P_1, …, P_n and F are continuous on (a, b) and P_0 has no zeros on (a, b). If this is so then Ly = F can be written as Equation 9.1.1 with p_1, …, p_n and f continuous on (a, b).
Theorem 9.1.1
Suppose Ly = F is normal on (a, b), let x_0 be a point in (a, b), and let k_0, k_1, …, k_{n−1} be arbitrary real numbers. Then the initial value problem
Ly = F, y(x_0) = k_0, y′(x_0) = k_1, …, y^{(n−1)}(x_0) = k_{n−1}
has a unique solution on (a, b).
Homogeneous Equations
Equation 9.1.2 is said to be homogeneous if F ≡ 0 and nonhomogeneous otherwise. Since y ≡ 0 is obviously a solution of Ly = 0, we call it the trivial solution. Any other solution is nontrivial.
If y_1, y_2, …, y_n are defined on (a, b) and c_1, c_2, …, c_n are constants, then
y = c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n (9.1.3)
is a linear combination of {y_1, y_2, …, y_n}. It's easy to show that if y_1, y_2, …, y_n are solutions of Ly = 0 on (a, b), then so is any linear combination of {y_1, y_2, …, y_n}. (See the proof of Theorem 5.1.2.) We say that {y_1, y_2, …, y_n} is a fundamental set of solutions of Ly = 0 on (a, b) if every solution of Ly = 0 on (a, b) can be written as a linear combination of {y_1, y_2, …, y_n}, as in Equation 9.1.3. In this case we say that Equation 9.1.3 is the general solution of Ly = 0 on (a, b).
It can be shown (Exercises 9.1.14 and 9.1.15) that if the equation Ly = 0 is normal on (a, b) then it has infinitely many fundamental sets of solutions on (a, b). The next definition will help to identify fundamental sets of solutions of Ly = 0.
We say that {y_1, y_2, …, y_n} is linearly independent on (a, b) if the only constants c_1, c_2, …, c_n such that
c_1 y_1(x) + c_2 y_2(x) + ⋯ + c_n y_n(x) = 0, a < x < b, (9.1.4)
are c_1 = c_2 = ⋯ = c_n = 0. If Equation 9.1.4 holds for some set of constants c_1, c_2, …, c_n that are not all zero, then {y_1, y_2, …, y_n} is linearly dependent on (a, b).
The next theorem is analogous to Theorem 5.1.3.
Theorem 9.1.2
Example 9.1.1
The equation
x³y′′′ − x²y′′ − 2xy′ + 6y = 0 (9.1.5)
is normal and has the solutions y_1 = x², y_2 = x³, and y_3 = 1/x on (−∞, 0) and (0, ∞). Show that {y_1, y_2, y_3} is linearly independent on (−∞, 0) and (0, ∞). Then find the general solution of Equation 9.1.5 on (−∞, 0) and (0, ∞).
Solution
Suppose
c_1 x² + c_2 x³ + c_3/x = 0 (9.1.6)
on (0, ∞). We must show that c_1 = c_2 = c_3 = 0. Differentiating Equation 9.1.6 twice yields the system
c_1 x² + c_2 x³ + c_3/x = 0
2c_1 x + 3c_2 x² − c_3/x² = 0 (9.1.7)
2c_1 + 6c_2 x + 2c_3/x³ = 0.
If Equation 9.1.7 holds for all x in (0, ∞), then it certainly holds at x = 1; therefore,
c_1 + c_2 + c_3 = 0
2c_1 + 3c_2 − c_3 = 0 (9.1.8)
2c_1 + 6c_2 + 2c_3 = 0.
By solving this system directly, you can verify that it has only the trivial solution c_1 = c_2 = c_3 = 0; however, for our purposes it is more useful to recall from linear algebra that a homogeneous linear system of n equations in n unknowns has only the trivial solution if its determinant is nonzero. Since the determinant of Equation 9.1.8 is
∣ 1 1 1; 2 3 −1; 2 6 2 ∣ = ∣ 1 0 0; 2 1 −3; 2 4 0 ∣ = 12,
it follows that Equation 9.1.8 has only the trivial solution, so {y_1, y_2, y_3} is linearly independent on (0, ∞). Now Theorem 9.1.2 implies that
y = c_1 x² + c_2 x³ + c_3/x
is the general solution of Equation 9.1.5 on (0, ∞). To see that this is also true on (−∞, 0), assume that Equation 9.1.6 holds on (−∞, 0). Setting x = −1 in Equation 9.1.7 yields
c_1 − c_2 − c_3 = 0
−2c_1 + 3c_2 − c_3 = 0
2c_1 − 6c_2 − 2c_3 = 0.
{ y , y , y , y } is linearly independent on (−∞, ∞) . Then find the general solution of Equation 9.1.9.
1 2 3 4
Solution
Suppose c_1, c_2, c_3, and c_4 are constants such that
c_1 e^x + c_2 e^{−x} + c_3 e^{2x} + c_4 e^{−3x} = 0 (9.1.10)
for all x. We must show that c_1 = c_2 = c_3 = c_4 = 0. Differentiating Equation 9.1.10 three times yields the system
c_1 e^x + c_2 e^{−x} + c_3 e^{2x} + c_4 e^{−3x} = 0
c_1 e^x − c_2 e^{−x} + 2c_3 e^{2x} − 3c_4 e^{−3x} = 0
c_1 e^x + c_2 e^{−x} + 4c_3 e^{2x} + 9c_4 e^{−3x} = 0 (9.1.11)
c_1 e^x − c_2 e^{−x} + 8c_3 e^{2x} − 27c_4 e^{−3x} = 0.
If Equation 9.1.11 holds for all x, then it certainly holds for x = 0. Therefore
c_1 + c_2 + c_3 + c_4 = 0
c_1 − c_2 + 2c_3 − 3c_4 = 0
c_1 + c_2 + 4c_3 + 9c_4 = 0
c_1 − c_2 + 8c_3 − 27c_4 = 0.
The determinant of this system reduces to
∣ −2 1 −4; 0 3 8; 0 6 −24 ∣ = −2 ∣ 3 8; 6 −24 ∣ = 240,
so the system has only the trivial solution c_1 = c_2 = c_3 = c_4 = 0. Now Theorem 9.1.2 implies that
y = c_1 e^x + c_2 e^{−x} + c_3 e^{2x} + c_4 e^{−3x}
The Wronskian
We can use the method used in Examples 9.1.1 and 9.1.2 to test n solutions {y_1, y_2, …, y_n} of any nth order equation Ly = 0 for linear independence on an interval (a, b) on which the equation is normal. Thus, if c_1, c_2, …, c_n are constants such that
c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n = 0, a < x < b,
then differentiating n − 1 times yields the system
c_1 y_1(x) + c_2 y_2(x) + ⋯ + c_n y_n(x) = 0
c_1 y_1′(x) + c_2 y_2′(x) + ⋯ + c_n y_n′(x) = 0
⋮ (9.1.13)
c_1 y_1^{(n−1)}(x) + c_2 y_2^{(n−1)}(x) + ⋯ + c_n y_n^{(n−1)}(x) = 0.
The determinant of this system is
W(x) = ∣ y_1(x) y_2(x) ⋯ y_n(x); y_1′(x) y_2′(x) ⋯ y_n′(x); ⋮; y_1^{(n−1)}(x) y_2^{(n−1)}(x) ⋯ y_n^{(n−1)}(x) ∣.
We call this determinant the Wronskian of {y_1, y_2, …, y_n}. If W(x) ≠ 0 for some x in (a, b) then the system Equation 9.1.13 has only the trivial solution c_1 = c_2 = ⋯ = c_n = 0, and Theorem 9.1.2 implies that
y = c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n
Theorem 9.1.4
Suppose Ly = 0 is normal on (a, b) and let y_1, y_2, …, y_n be n solutions of Ly = 0 on (a, b). Then the following statements are equivalent; that is, they are either all true or all false:
a. The general solution of Ly = 0 on (a, b) is y = c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n.
Example 9.1.3
In Example 9.1.1 we saw that the solutions y_1 = x², y_2 = x³, and y_3 = 1/x of
x³y′′′ − x²y′′ − 2xy′ + 6y = 0
are linearly independent on (−∞, 0) and (0, ∞). Calculate the Wronskian of {y 1, y2 , y3 } .
Solution
If x ≠ 0, then
W(x) = ∣ x² x³ 1/x; 2x 3x² −1/x²; 2 6x 2/x³ ∣ = 2x³ ∣ 1 x 1/x³; 2 3x −1/x³; 1 3x 1/x³ ∣,
where we factored x², x, and 2 out of the first, second, and third rows of W(x), respectively. Adding the second row of the last determinant to the first and third rows yields
W(x) = 2x³ ∣ 3 4x 0; 2 3x −1/x³; 3 6x 0 ∣ = 2x³ (1/x³) ∣ 3 4x; 3 6x ∣ = 12x.
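The Wronskian in this example can be checked symbolically. A minimal SymPy sketch that builds the Wronskian matrix directly from the three solutions:

import sympy as sp

x = sp.symbols('x')
funcs = [x**2, x**3, 1/x]
# rows are the functions and their first and second derivatives
M = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(len(funcs))])
print(sp.simplify(M.det()))     # 12*x, which agrees with the computation above and is nonzero for x != 0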
Example 9.1.4
In Example 9.1.2 we saw that the solutions y_1 = e^x, y_2 = e^{−x}, y_3 = e^{2x}, and y_4 = e^{−3x} of
y^{(4)} + y′′′ − 7y′′ − y′ + 6y = 0
Note
Under the assumptions of Theorem 9.1.4, it isn’t necessary to obtain a formula for W (x) . Just evaluate W (x) at a
convenient point in (a, b), as we did in Examples 9.1.1 and 9.1.2.
Theorem 9.1.5
Suppose c is in (a, b) and α , α , …, are real numbers, not all zero. Under the assumptions of Theorem 10.3.3, suppose
1 2
(n−1)
′
α yi (c) + y (c) + ⋯ + y (c) = 0, 1 ≤ i ≤ n. (9.1.16)
i i
Proof
Since α , α , …, α are not all zero, Equation 9.1.14implies that
1 2 n
so
∣ y1 (c) y2 (c) ⋯ yn (c) ∣
∣ ∣
′ ′ ′
y (c) y (c) ⋯ yn (c)
∣ 1 2 ∣
∣ ∣ =0
∣ ⋮ ⋮ ⋱ ⋮ ∣
∣ ∣
(n−1) (n−1) (n−1)
∣y (c) y (c)(c) ⋯ yn (c)(c) ∣
1 2
Theorem 9.1.6
Proof
Suppose Ly = F is normal on (a, b). Let y_p be a particular solution of Ly = F on (a, b), and let {y_1, y_2, …, y_n} be a fundamental set of solutions of the complementary equation Ly = 0 on (a, b). Then every solution of Ly = F on (a, b) can be written as
y = y_p + c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n,
where c_1, c_2, …, c_n are constants. Moreover, if y_{p_1}, y_{p_2}, …, y_{p_r} are particular solutions of Ly = F_1, Ly = F_2, …, Ly = F_r, respectively, then
y_p = y_{p_1} + y_{p_2} + ⋯ + y_{p_r}
is a particular solution of
Ly = F_1 + F_2 + ⋯ + F_r
on (a, b).
We’ll apply Theorems 9.1.6 and 9.1.7 throughout the rest of this chapter.
x
x
2
1 1 x −4 3 1 1
b. y ′′′
− y
′′
−y +
′
y =
4
, y(1) = ,
′
y (1) = ,y
′′
(1) = 1 ;y = x +
x x x 2 2 2x
c. xy
′′′
−y
′′
− xy + y = x ,
′ 2
y(1) = 2,
′
y (1) = 5, y
′′
(1) = −1 ;y = −x 2
− 2 + 2e
(x−1)
−e
−(x−1)
+ 4x
17
d. 4x 3
y
′′′
+ 4x y
2 ′′ ′
− 5x y + 2y = 30 x ,
2
y(1) = 5, y (1) =
′
;
2
63
′′ 2 1/2 −1/2 2
y (1) = ; y = 2x ln x − x + 2x + 4x
4
e. x y
4 (4)
− 4x y
3 ′′′
+ 12 x y
2 ′′ ′
− 24x y + 24y = 6 x ,
4
y(1) = −2 ;y ′
(1) = −9, y
′′
(1) = −27, y
′′′
(1) = −52 ;
4 2 3 4
y =x ln x + x − 2 x + 3x − 4x
f. x y (4)
−y
′′′
− 4x y
′′
+ 4y
′
= 96 x ,
2
y(1) = −5,
′
y (1) = −24 ;
′′ ′′′ 2 3
y (1) = −36; y (1) = −48; y = 9 − 12x + 6 x − 8x
0, j ≠ i − 1,
(j)
y (x0 ) = { 1 ≤ i ≤ n.
i
1, j = i − 1,
5.
a. Verify that the function
c3
3 2
y = c1 x + c2 x +
x
satisfies
3 ′′′ 2 ′′ ′
x y −x y − 2x y + 6y = 0 (A)
′ ′′
y1 (1) = 1, y (1) = 0, y (1) = 0
1 1
′ ′′
y2 (1) = 0, y (1) = 1, y (1) = 0
2 2
′ ′′
y3 (1) = 0, y (1) = 0, y (1) = 1.
3 3
6. Verify that the given functions are solutions of the given equation, and show that they form a fundamental set of solutions of
the equation on any interval on which the equation is normal.
c. xy′′′ − y′′ − xy′ + y = 0; {e^x, e^{−x}, x}
e. (x² − 2x + 2)y′′′ − x²y′′ + 2xy′ − 2y = 0; {x, x², e^x}
g. xy^{(4)} − y′′′ − 4xy′′ + 4y′ = 0; {1, x², e^{2x}, e^{−2x}}
′′′ ′′ ′
y − 3y + 3 y − y = 0. (A)
b. {e , e sin x, e cos x}
x x x
c. {2, x + 1, x + 2} 2
d. x, x ln x, 1/x}
2 3 n
e. {1, x, , ,⋯,
x
2!
}
x
3!
x
n!
f. {e , e , x}
x −x
g. {e /x, e /x, 1}
x −x
h. {x, x , e } 2 x
j. {e , e , x, e }
x −x 2x
k. {e , e , 1, x }
2x −2x 2
11. Suppose Ly = 0 is normal on (a, b) and x is in (a, b). Use Theorem 9.1.1 to show that y ≡ 0 is the only solution of the
0
on (a, b).
12. Prove: If y , y , …, y are solutions of Ly = 0 and the functions
1 2 n
zi = ∑ aij yj , 1 ≤ i ≤ n,
j=1
13. Prove: If
y = c1 y1 + c2 y2 + ⋯ + ck yk + yp
(j) 0, j ≠ i − 1,
Lyi = 0, y (x0 ) = { 1 ≤ i ≤ n,
i
1, j = i − 1,
where x is an arbitrary point in (a, b). Show that any solution of Ly = 0 on (a, b), can be written as
0
y = c1 y1 + c2 y2 + ⋯ + cn yn ,
with cj =y
(j−1)
(x0 ) .
15. Suppose {y 1, y2 , … , yn } is a fundamental set of solutions of
(n) (n−1)
P0 (x)y + P1 (x)y + ⋯ + Pn (x)y = 0
⋮ ⋮ ⋮ ⋮
where the {a ij } are constants. Show that { z1 , z2 , … , zn } is a fundamental set of solutions of (A) if and only if the
determinant
∣ a11 a12 ⋯ a1n ∣
∣ ∣
∣ a21 a22 ⋯ a2n ∣
∣ ∣
∣ ∣
⋮ ⋮ ⋱ ⋮
∣ ∣
∣ an1 an2 ⋯ ann ∣
is nonzero. HINT: The determinant of a product of n × n matrices equals the product of the determinants.
16. Show that {y , y , … , y } is linearly dependent on (a, b) if and only if at least one of the functions y , y , …, y can be
1 2 n 1 2 n
Q9.1.2
Take the following as a hint in Exercises 9.1.17-9.1.19:
By the definition of determinant,
∣ a11 a12 ⋯ a1n ∣
∣ ∣
∣ a21 a22 ⋯ a2n ∣
∣ ∣ = ∑ ±a1i a2i , ⋯ , ani ,
1 2 n
∣ ∣
⋮ ⋮ ⋱ ⋮
∣ ∣
∣ an1 an2 ⋯ ann ∣
where the sum is over all permutations (i , i i 2, ⋯ , in ) of (1, 2, ⋯ , n) and the choice of + or − in each term depends only on
the permutation associated with that term.
17. Prove: If
then
18. Let
∣ f11 f12 ⋯ f1n ∣
∣ ∣
∣ f f22 ⋯ f2n ∣
21
∣ ∣
F = ,
∣ ∣
∣ ⋮ ⋮ ⋱ ⋮ ∣
∣ ∣
∣ fn1 fn2 ⋯ fnn ∣
19. Use Exercise 9.1.18 to show that if W is the Wronskian of the n -times differentiable functions y , y , …, y , then 1 2 n
∣ y1 y2 ⋯ yn ∣
∣ ∣
′ ′ ′
∣ y y ⋯ yn ∣
1 2
∣ ∣
∣ ∣
′
W =∣ ⋮ ⋮ ⋱ ⋮ ∣.
∣ ∣
∣ (n−2) (n−2) (n−2) ∣
y y ⋯ yn
∣ 1 2 ∣
∣ ∣
(n) (n) (n)
∣ y y ⋯ yn ∣
1 2
Q9.1.3
20. Use Exercises 9.1.17 and 9.1.19 to show that if W is the Wronskian of solutions {y 1, y2 , … , yn } of the normal equation
(n) (n−1)
P0 (x)y + P1 (x)y + ⋯ + Pn (x)y = 0, (A)
then W ′
= −P1 W / P0 . Derive Abel’s formula (Equation 9.1.15) from this.
21. Prove Theorem 9.1.6.
22. Prove Theorem 9.1.7.
23. Show that if the Wronskian of the n -times continuously differentiable functions { y1 , y2 , … , yn } has no zeros in (a, b) ,
then the differential equation obtained by expanding the determinant
∣ y y1 y2 ⋯ yn ∣
∣ ∣
′ ′ ′ ′
∣ y y y ⋯ yn ∣
1 2
∣ ∣
= 0,
∣ ∣
∣ ⋮ ⋮ ⋮ ⋱ ⋮ ∣
∣ ∣
(n) (n) (n) (n)
∣y y
1
y
2
⋯ yn ∣
in cofactors of its first column is normal and has {y 1, y2 , … , yn } as a fundamental set of solutions on (a, b).
b. {e , e , x}
x −x
c. {e , x e , 1}
x −x
d. {x, x , e }
2 x
e. {x, x , 1/x}
2
f. {x + 1, e , e }
x 3x
h. {x, x ln x, 1/x, x } 2
i. {e , e , x, e }
x −x 2x
j. {e , e , 1, x }
2x −2x 2
is said to be a constant coefficient equation. In this section we consider the homogeneous constant coefficient equation
(n) (n−1)
a0 y + a1 y + ⋯ + an y = 0. (9.2.1)
Since Equation 9.2.1 is normal on (−∞, ∞), the theorems in Section 9.1 all apply with (a, b) = (−∞, ∞) .
As in Section 5.2, we call
n n−1
p(r) = a0 r + a1 r + ⋯ + an (9.2.2)
the characteristic polynomial of Equation 9.2.1. We saw in Section 5.2 that when n = 2 the solutions of Equation 9.2.1 are
determined by the zeros of the characteristic polynomial. This is also true when n > 2 , but the situation is more complicated
in this case. Consequently, we take a different approach.
If k is a positive integer, let D^k stand for the k-th derivative operator; that is,
D^k y = y^{(k)}.
If
q(r) = b_0 r^m + b_1 r^{m−1} + ⋯ + b_m
is an arbitrary polynomial, we define the operator q(D) = b_0 D^m + b_1 D^{m−1} + ⋯ + b_m such that
q(D)y = (b_0 D^m + b_1 D^{m−1} + ⋯ + b_m)y = b_0 y^{(m)} + b_1 y^{(m−1)} + ⋯ + b_m y
whenever y has m derivatives. In particular, if y = e^{rx} then
p(D)e^{rx} = (a_0 r^n + a_1 r^{n−1} + ⋯ + a_n)e^{rx};
that is,
p(D)(e^{rx}) = p(r)e^{rx}.
This shows that y = e^{rx} is a solution of Equation 9.2.1 if p(r) = 0. In the simplest case, where p has n distinct real zeros r_1, r_2, …, r_n, this argument yields the n solutions
y_1 = e^{r_1 x}, y_2 = e^{r_2 x}, …, y_n = e^{r_n x}.
It can be shown (Exercise 9.2.39) that the Wronskian of {e^{r_1 x}, e^{r_2 x}, …, e^{r_n x}} is nonzero if r_1, r_2, …, r_n are distinct; hence, {e^{r_1 x}, e^{r_2 x}, …, e^{r_n x}} is a fundamental set of solutions of Equation 9.2.1 in this case.
Example 9.2.1
a. Find the general solution of
y′′′ − 6y′′ + 11y′ − 6y = 0. (9.2.3)
Solution a
The characteristic polynomial of Equation 9.2.3 is
p(r) = r³ − 6r² + 11r − 6 = (r − 1)(r − 2)(r − 3).
Therefore {e^x, e^{2x}, e^{3x}} is a set of solutions of Equation 9.2.3. It is a fundamental set, since its Wronskian is
W(x) = ∣ e^x e^{2x} e^{3x}; e^x 2e^{2x} 3e^{3x}; e^x 4e^{2x} 9e^{3x} ∣ = e^{6x} ∣ 1 1 1; 1 2 3; 1 4 9 ∣ = 2e^{6x} ≠ 0.
Solution b
We must determine c_1, c_2 and c_3 in Equation 9.2.5 so that y satisfies the initial conditions in Equation 9.2.4. Differentiating Equation 9.2.5 twice yields
y′ = c_1 e^x + 2c_2 e^{2x} + 3c_3 e^{3x}
y′′ = c_1 e^x + 4c_2 e^{2x} + 9c_3 e^{3x}. (9.2.6)
Setting x = 0 in Equation 9.2.5 and Equation 9.2.6 and imposing the initial conditions yields
c_1 + c_2 + c_3 = 4
c_1 + 2c_2 + 3c_3 = 5
c_1 + 4c_2 + 9c_3 = 9.
Solving this system yields c_1 = 4, c_2 = −1, c_3 = 1, so
y = 4e^x − e^{2x} + e^{3x}
(Figure 9.2.1).
Figure 9.2.1: y = 4e^x − e^{2x} + e^{3x}
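The computation in this example is easy to mirror numerically: find the roots of the characteristic polynomial, then solve the linear system for c_1, c_2, c_3. A NumPy sketch (the variable names are ours):

import numpy as np

roots = np.sort(np.roots([1.0, -6.0, 11.0, -6.0]).real)   # characteristic polynomial r^3 - 6r^2 + 11r - 6
print(roots)                                               # approximately [1. 2. 3.]

# y = c1 e^{r1 x} + c2 e^{r2 x} + c3 e^{r3 x}; impose y(0)=4, y'(0)=5, y''(0)=9
A = np.vander(roots, 3, increasing=True).T                 # rows [1,1,1], [r1,r2,r3], [r1^2,r2^2,r3^2]
b = np.array([4.0, 5.0, 9.0])
print(np.linalg.solve(A, b))                               # approximately [ 4. -1.  1.]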
Now we consider the case where the characteristic polynomial Equation 9.2.2 does not have n distinct real zeros. For this
purpose it is useful to define what we mean by a factorization of a polynomial operator. We begin with an example.
Example 9.2.2
However, before we can make this assertion we must define what we mean by saying that two operators are equal, and
what we mean by the products of operators in Equation 9.2.7. We say that two operators are equal if they apply to the
same functions and always produce the same result. The definitions of the products in Equation 9.2.7 is this: if y is any
three-times differentiable function then
a. (D − 1)(D² + 1)y is the function obtained by first applying D² + 1 to y and then applying D − 1 to the resulting function
b. (D² + 1)(D − 1)y is the function obtained by first applying D − 1 to y and then applying D² + 1 to the resulting function.
From (a),
(D − 1)(D² + 1)y = (D − 1)[(D² + 1)y]
= (D − 1)(y′′ + y) = D(y′′ + y) − (y′′ + y)
= (y′′′ + y′) − (y′′ + y) (9.2.8)
= y′′′ − y′′ + y′ − y = (D³ − D² + D − 1)y.
From (b),
(D² + 1)(D − 1)y = (D² + 1)[(D − 1)y]
= (D² + 1)(y′ − y) = D²(y′ − y) + (y′ − y)
= (y′′′ − y′′) + (y′ − y) (9.2.9)
= y′′′ − y′′ + y′ − y = (D³ − D² + D − 1)y,
so
(D² + 1)(D − 1) = (D³ − D² + D − 1),
Example 9.2.3
Use the result of Example 9.2.2 to find the general solution of
′′′ ′′ ′
y −y + y − y = 0. (9.2.10)
Solution
From Equation 9.2.8, we can rewrite Equation 9.2.10 as
(D − 1)(D² + 1)y = 0,
which implies that any solution of (D² + 1)y = 0 is a solution of Equation 9.2.10. Therefore y_1 = cos x and y_2 = sin x are solutions of Equation 9.2.10. From Equation 9.2.9, we can also rewrite Equation 9.2.10 as
(D² + 1)(D − 1)y = 0,
which implies that any solution of (D − 1)y = 0 is a solution of Equation 9.2.10. Therefore y_3 = e^x is a solution of Equation 9.2.10.
The Wronskian of {cos x, sin x, e^x} is
W(x) = ∣ cos x sin x e^x; −sin x cos x e^x; −cos x −sin x e^x ∣.
Since
W(0) = ∣ 1 0 1; 0 1 1; −1 0 1 ∣ = 2,
{cos x, sin x, e^x} is linearly independent and
y = c_1 cos x + c_2 sin x + c_3 e^x
is the general solution of Equation 9.2.10.
Example 9.2.4
Find the general solution of
(4)
y − 16y = 0. (9.2.11)
Solution
The characteristic polynomial of Equation 9.2.11 is
p(r) = r⁴ − 16 = (r² − 4)(r² + 4) = (r − 2)(r + 2)(r² + 4).
By arguments similar to those used in Examples 9.2.2 and 9.2.3, it can be shown that Equation 9.2.11 can be written as
(D² + 4)(D + 2)(D − 2)y = 0
or
(D² + 4)(D − 2)(D + 2)y = 0
or
(D − 2)(D + 2)(D² + 4)y = 0.
Hence, {e^{2x}, e^{−2x}, cos 2x, sin 2x} is a set of solutions of Equation 9.2.11. The Wronskian of this set is
W(x) = ∣ e^{2x} e^{−2x} cos 2x sin 2x; 2e^{2x} −2e^{−2x} −2 sin 2x 2 cos 2x; 4e^{2x} 4e^{−2x} −4 cos 2x −4 sin 2x; 8e^{2x} −8e^{−2x} 8 sin 2x −8 cos 2x ∣.
Since W(0) ≠ 0, {e^{2x}, e^{−2x}, cos 2x, sin 2x} is linearly independent, and
y = c_1 e^{2x} + c_2 e^{−2x} + c_3 cos 2x + c_4 sin 2x
is the general solution of Equation 9.2.11.
where no pair of the polynomials p_1, p_2, …, p_k has a common factor and each is either of the form
p_j(r) = (r − r_j)^{m_j}, (9.2.12)
or of the form
p_j(r) = [(r − λ_j)² + ω_j²]^{m_j}, (9.2.13)
where λ_j and ω_j are real, ω_j ≠ 0, and m_j is a positive integer. If Equation 9.2.12 holds then r_j is a real zero of p, while if Equation 9.2.13 holds then λ_j + iω_j and λ_j − iω_j are complex conjugate zeros of p. In either case, m_j is the multiplicity of the zero(s).
By arguments similar to those used in our examples, it can be shown that
p(D) = p_1(D)p_2(D) ⋯ p_k(D) (9.2.14)
and that the order of the factors on the right can be chosen arbitrarily. Therefore, if p_j(D)y = 0 for some j then p(D)y = 0. To see this, we simply rewrite Equation 9.2.14 so that p_j(D) is applied first. Therefore the problem of finding solutions of p(D)y = 0 with p as in Equation 9.2.14 reduces to finding solutions of each of these equations
p_j(D)y = 0, 1 ≤ j ≤ k,
where p_j is a power of a first degree term or of an irreducible quadratic. To find a fundamental set of solutions {y_1, y_2, …, y_n} of p(D)y = 0, we find fundamental sets of solutions of each of the equations and take {y_1, y_2, …, y_n} to be the set of all functions in these separate fundamental sets. In Exercise 9.2.40 we sketch the proof that {y_1, y_2, …, y_n} is indeed a fundamental set of solutions of p(D)y = 0. We have therefore reduced the problem to finding fundamental sets of solutions of
(D − a)^m y = 0
and
[(D − λ)² + ω²]^m y = 0,
where m is an arbitrary positive integer. The next two theorems show how to do this.
Theorem 9.2.1
If m is a positive integer, then
{e^{ax}, xe^{ax}, …, x^{m−1}e^{ax}} (9.2.15)
is a fundamental set of solutions of
(D − a)^m y = 0. (9.2.16)
Proof
We'll show that if
f(x) = c_1 + c_2 x + ⋯ + c_m x^{m−1}
is an arbitrary polynomial of degree ≤ m − 1, then y = e^{ax} f is a solution of Equation 9.2.16. First note that if g is any differentiable function then
(D − a)e^{ax} g = e^{ax} g′. (9.2.17)
Therefore
(D − a)e^{ax} f = e^{ax} f′ (from (9.2.17) with g = f)
(D − a)² e^{ax} f = (D − a)e^{ax} f′ = e^{ax} f′′ (from (9.2.17) with g = f′)
(D − a)³ e^{ax} f = (D − a)e^{ax} f′′ = e^{ax} f′′′ (from (9.2.17) with g = f′′)
⋮
(D − a)^m e^{ax} f = e^{ax} f^{(m)} = 0,
since f is a polynomial of degree ≤ m − 1. In particular, each function in Equation 9.2.15 is a solution of Equation 9.2.16. To see that Equation 9.2.15 is linearly independent (and therefore a fundamental set of solutions of Equation 9.2.16), note that if
c_1 e^{ax} + c_2 xe^{ax} + ⋯ + c_m x^{m−1} e^{ax} = 0
for all x in (a, b), then c_1 + c_2 x + ⋯ + c_m x^{m−1} = 0 for all x in (a, b). However, we know from algebra that if this polynomial has more than m − 1 zeros then c_1 = c_2 = ⋯ = c_m = 0.
Example 9.2.5
Find the general solution of
y′′′ + 3y′′ + 3y′ + y = 0. (9.2.18)
Solution
The characteristic polynomial of Equation 9.2.18 is
p(r) = r³ + 3r² + 3r + 1 = (r + 1)³.
Hence Theorem 9.2.1 with a = −1 and m = 3 implies that the general solution of Equation 9.2.18 is
y = e^{−x}(c_1 + c_2 x + c_3 x²).
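A short SymPy sketch of the same idea: read off the roots of the characteristic polynomial, with multiplicities, and build the corresponding solutions x^k e^{rx}. This sketch only handles the real repeated root occurring in this example.

import sympy as sp

x, r = sp.symbols('x r')
p = r**3 + 3*r**2 + 3*r + 1
basis = []
for root, mult in sp.roots(p, r).items():        # {-1: 3}
    basis += [x**k * sp.exp(root*x) for k in range(mult)]
print(basis)                                     # [exp(-x), x*exp(-x), x**2*exp(-x)]
# quick check: each basis function satisfies y''' + 3y'' + 3y' + y = 0
for y in basis:
    print(sp.simplify(sp.diff(y, x, 3) + 3*sp.diff(y, x, 2) + 3*sp.diff(y, x) + y))   # 0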
Theorem 9.2.2
If ω ≠ 0 and m is a positive integer, then
{e^{λx} cos ωx, xe^{λx} cos ωx, …, x^{m−1}e^{λx} cos ωx, e^{λx} sin ωx, xe^{λx} sin ωx, …, x^{m−1}e^{λx} sin ωx}
is a fundamental set of solutions of [(D − λ)² + ω²]^m y = 0.
Example 9.2.6
Find the general solution of
(D² + 4D + 13)³ y = 0. (9.2.19)
Solution
The characteristic polynomial of Equation 9.2.19 is
p(r) = (r² + 4r + 13)³ = ((r + 2)² + 9)³.
Example 9.2.7
Find the general solution of
y^{(4)} + 4y′′′ + 6y′′ + 4y′ = 0. (9.2.20)
Solution
The characteristic polynomial of Equation 9.2.20 is
p(r) = r⁴ + 4r³ + 6r² + 4r = r(r³ + 4r² + 6r + 4) = r(r + 2)(r² + 2r + 2) = r(r + 2)[(r + 1)² + 1].
Fundamental sets of solutions of the equations [(D + 1)² + 1]y = 0, (D + 2)y = 0, and Dy = 0 are given by
{e^{−x} cos x, e^{−x} sin x}, {e^{−2x}}, and {1},
respectively. Therefore the general solution of Equation 9.2.20 is
y = e^{−x}(c_1 cos x + c_2 sin x) + c_3 e^{−2x} + c_4.
Example 9.2.8
Find a fundamental set of solutions of
[(D + 1)² + 1]²(D − 1)³(D + 1)D² y = 0. (9.2.21)
Solution
A fundamental set of solutions of Equation 9.2.21 can be obtained by combining fundamental sets of solutions of
[(D + 1)² + 1]² y = 0
(D − 1)³ y = 0
(D + 1)y = 0
D² y = 0,
which are given by
{e^{−x} cos x, xe^{−x} cos x, e^{−x} sin x, xe^{−x} sin x},
{e^x, xe^x, x²e^x},
{e^{−x}},
{1, x},
respectively. These ten functions form a fundamental set of solutions of Equation 9.2.21.
2. y (4)
+ 8y
′′
− 9y = 0
3. y ′′′
−y
′′
+ 16 y − 16y = 0
′
4. 2y ′′′
+ 3y
′′
− 2 y − 3y = 0
′
5. y ′′′
+ 5y
′′
+ 9 y + 5y = 0
′
6. 4y ′′′
− 8y
′′
+ 5y − y = 0
′
7. 27y ′′′
+ 27 y
′′
+ 9y + y = 0
′
8. y (4)
+y
′′
=0
9. y (4)
− 16y = 0
10. y (4)
+ 12 y
′′
+ 36y = 0
12. 6y (4)
+ 5y
′′′
+ 7y
′′ ′
+ 5y + y = 0
13. 4y (4)
+ 12 y
′′′
+ 3y
′′ ′
− 13 y − 6y = 0
14. y (4)
− 4y
′′′
+ 7y
′′ ′
− 6 y + 2y = 0
Q9.2.2
In Exercises 9.2.15-9.2.27 solve the initial value problem. Graph the solution for Exercises 9.2.17-9.2.19 and 9.2.27.
15. y ′′′
− 2y
′′
+ 4 y − 8y = 0,
′
y(0) = 2,
′
y (0) = −2, y
′′
(0) = 0
16. y ′′′
+ 3y
′′
− y − 3y = 0,
′
y(0) = 0,
′
y (0) = 14, y
′′
(0) = −40
17. y ′′′
−y
′′
− y + y = 0,
′
y(0) = −2,
′
y (0) = 9, y
′′
(0) = 4
18. y ′′′
− 2 y − 4y = 0,
′
y(0) = 6,
′
y (0) = 3, y
′′
(0) = 22
19. 3y ′′′
−y
′′
− 7 y + 5y = 0,
′
y(0) =
14
5
,
′
y (0) = 0, y
′′
(0) = 10
20. y ′′′
− 6y
′′
+ 12 y − 8y = 0,
′
y(0) = 1,
′
y (0) = −1, y
′′
(0) = −4
21. 2y ′′′
− 11 y
′′
+ 12 y + 9y = 0,
′
y(0) = 6,
′
y (0) = 3, y
′′
(0) = 13
22. 8y ′′′
− 4y
′′
− 2 y + y = 0,
′
y(0) = 4,
′
y (0) = −3, y
′′
(0) = −1
23. y (4)
− 16y = 0, y(0) = 2, y (0) = 2, y
′ ′′
(0) = −2, y
′′′
(0) = 0
24. y (4)
− 6y
′′′
+ 7y
′′ ′
+ 6 y − 8y = 0, y(0) = −2,
′
y (0) = −8, y
′′
(0) = −14, y
′′′
(0) = −62
25. 4y (4)
− 13 y
′′
+ 9y = 0, y(0) = 1,
′
y (0) = 3, y
′′
(0) = 1, y
′′′
(0) = 3
26. y (4)
+ 2y
′′′
− 2y
′′ ′
− 8 y − 8y = 0, y(0) = 5,
′
y (0) = −2, y
′′
(0) = 6, y
′′′
(0) = 8
27. 4y (4)
+ 8y
′′′
+ 19 y
′′ ′
+ 32 y + 12y = 0, y(0) = 3,
′
y (0) = −3, y
′′
(0) = −
7
2
, y
′′′
(0) =
31
Q9.2.3
28. Find a fundamental set of solutions of the given equation, and verify that it is a fundamental set by evaluating its
Wronskian at x = 0 .
a. 2
(D − 1 ) (D − 2)y = 0
c. (D + 2D + 2)(D − 1)y = 0
2
d. D (D − 1)y = 0
3
e. (D − 1)(D + 1)y = 0
2 2
f. (D − 2D + 2)(D + 1)y = 0
2 2
Q9.2.4
In Exercises 9.2.29-9.2.38 find a fundamental set of solutions.
29. (D 2
+ 6D + 13)(D − 2 ) D y = 0
2 3
30. (D − 1) 2
(2D − 1 ) (D
3 2
+ 1)y = 0
31. (D 2 3
+ 9) D y = 0
2
32. (D − 2) 3
(D + 1 ) Dy = 0
2
33. (D 2
+ 1)(D
2 2
+ 9 ) (D − 2)y = 0
34. (D 4
− 16 ) y = 0
2
35. (4D 2
+ 4D + 9 ) y = 0
3
36. D 3
(D − 2 ) (D
2 2 2
+ 4) y = 0
37. (4D 2
+ 1 ) (9 D
2 2 3
+ 4) y = 0
38. [(D − 1) 4
− 16] y = 0
Q9.2.5
39 It can be shown that
∣ 1 1 ⋯ 1 ∣
∣ ∣
∣ a1 a2 ⋯ an ∣
∣ ∣
2 2 2
∣ a a ⋯ an ∣
1 2 = ∏ (aj − ai ), (A)
∣ ∣
1≤i<j≤n
∣ ∣
∣ ⋮ ⋮ ⋱ ⋮ ∣
∣ ∣
n−1 n−1 n−1
∣a a ⋯ an ∣
1 2
where the left side is the Vandermonde determinant and the right side is the product of all factors of the form (a j − ai ) with i
and j between 1 and n and i < j .>
a. Verify (A) for n = 2 and n = 3 .
b. Find the Wronskian of {e , e a1 x a2 x
,…,e
an x
} .
40. A theorem from algebra says that if P1 and P2 are polynomials with no common factors then there are polynomials Q1
Q1 P1 + Q2 P2 = 1.
for every function y with enough derivatives for the left side to be defined.
a. Use this to show that if P and P have no common factors and
1 2
P1 (D)y = P2 (D)y = 0
then y = 0 .
nj 2 2 mj
pj (r) = (r − rj ) or pj (r) = [(r − λj ) +w ] (ωj > 0)
j
and no two of the polynomials p , p , …, p have a common factor. Show that we can find a fundamental set of solutions
1 2 k
pj (D)y = 0, 1 ≤ j ≤ k,
and taking {y 1, y2 , … , yn } to be the set of all functions in these separate fundamental sets.
41.
a. Show that if
z = p(x) cos ωx + q(x) sin ωx, (A)
b. Apply (a) m times to show that if z is of the form (A) where p and q are polynomial of degree ≤ m − 1 , then
2 2 m
(D +ω ) z = 0. (B)
d. Conclude from (b) and (c) that if p and q are arbitrary polynomials of degree ≤ m − 1 then
λx
y =e (p(x) cos ωx + q(x) sin ωx)
is a solution of
2 2 m
[(D − λ ) +ω ] y = 0. (C)
to show that
k=1
d. Show that (A) also holds if n = 0 or a negative integer. HINT: Verify by direct calculation that
−1
(cos θ + i sin θ) = (cos θ − i sin θ).
and
(2k + 1)π (2k + 1)π
ζk = cos( ) + i sin( ), k = 0, 1, … , n − 1,
n n
then
n n
z =1 and ζ = −1, k = 0, 1, … , n − 1.
k k
and
n 1/n 1/n 1/n
z + ρ = (z − ρ ζ0 )(z − ρ ζ1 ) ⋯ (z − ρ ζn−1 ).
43. Use (e) of Exercise 9.2.42 to find a fundamental set of solutions of the given equation.
a. y − y = 0
′′′
b. y + y = 0
′′′
c. y + 64y = 0
(4)
d. y − y = 0
(6)
e. y + 64y = 0
(6)
f. [(D − 1) − 1] y = 0
6
g. y + y + y + y + y
(5) (4) ′′′ ′′ ′
+y = 0
t
x =e and Y (t) = y(x(t)), (B)
then
where A , …, A
1r are integers. Use these results to show that the substitution (B) transforms (A) into a constant coefficient
r−1,r
or
e^{λx}[(p_0 + p_1 x + ⋯ + p_k x^k) cos ωx + (q_0 + q_1 x + ⋯ + q_k x^k) sin ωx].
From Theorem 9.1.5, the general solution of Equation 9.3.1 is y = y_p + y_c, where y_p is a particular solution of Equation 9.3.1 and y_c is the general solution of the complementary equation
a_0 y^{(n)} + a_1 y^{(n−1)} + ⋯ + a_n y = 0.
In Section 9.2 we learned how to find y_c. Here we will learn how to find y_p when the forcing function has the form stated above. The procedure that we use is a generalization of the method that we used in Sections 5.4 and 5.5, and is again called the method of undetermined coefficients. Since the underlying ideas are the same as those in those sections, we'll give an informal presentation based on examples.
Example 9.3.1
Find a particular solution of
y′′′ + 3y′′ + 2y′ − y = e^x(21 + 24x + 28x² + 5x³). (9.3.2)
Solution
Substituting
y = ue^x,
y′ = e^x(u′ + u),
y′′ = e^x(u′′ + 2u′ + u),
y′′′ = e^x(u′′′ + 3u′′ + 3u′ + u)
into Equation 9.3.2 and canceling e^x yields
(u′′′ + 3u′′ + 3u′ + u) + 3(u′′ + 2u′ + u) + 2(u′ + u) − u = 21 + 24x + 28x² + 5x³,
or
u′′′ + 6u′′ + 11u′ + 5u = 21 + 24x + 28x² + 5x³. (9.3.3)
Since the unknown u appears on the left, we can see that Equation 9.3.3 has a particular solution of the form
u_p = A + Bx + Cx² + Dx³.
Then
u_p′ = B + 2Cx + 3Dx²
u_p′′ = 2C + 6Dx
u_p′′′ = 6D,
so
u_p′′′ + 6u_p′′ + 11u_p′ + 5u_p = 6D + 6(2C + 6Dx) + 11(B + 2Cx + 3Dx²) + 5(A + Bx + Cx² + Dx³)
= (5A + 11B + 12C + 6D) + (5B + 22C + 36D)x + (5C + 33D)x² + 5Dx³.
Comparing coefficients of like powers of x on the right sides of this equation and Equation 9.3.3 shows that u_p satisfies Equation 9.3.3 if
5D = 5
5C + 33D = 28
5B + 22C + 36D = 24
5A + 11B + 12C + 6D = 21.
Solving these equations successively yields D = 1, C = −1, B = 2, A = 1. Therefore
u_p = 1 + 2x − x² + x³
and
y_p = e^x(1 + 2x − x² + x³)
is a particular solution of Equation 9.3.2 (Figure 9.3.1).
Figure 9.3.1: y_p = e^x(1 + 2x − x² + x³)
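The particular solution can be verified by direct substitution; a short SymPy check:

import sympy as sp

x = sp.symbols('x')
yp = sp.exp(x)*(1 + 2*x - x**2 + x**3)
lhs = sp.diff(yp, x, 3) + 3*sp.diff(yp, x, 2) + 2*sp.diff(yp, x) - yp
rhs = sp.exp(x)*(21 + 24*x + 28*x**2 + 5*x**3)
print(sp.simplify(lhs - rhs))      # 0, so yp satisfies (9.3.2)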
Example 9.3.2
Find a particular solution of
(4) ′′′ ′′ ′ 2x 2
y −y − 6y + 4 y + 8y = e (4 + 19x + 6 x ). (9.3.4)
Solution
Substituting
y = ue^{2x},
y′ = e^{2x}(u′ + 2u),
y′′ = e^{2x}(u′′ + 4u′ + 4u),
y′′′ = e^{2x}(u′′′ + 6u′′ + 12u′ + 8u),
y^{(4)} = e^{2x}(u^{(4)} + 8u′′′ + 24u′′ + 32u′ + 16u)
into Equation 9.3.4 and canceling e^{2x} yields
u^{(4)} + 7u′′′ + 12u′′ = 4 + 19x + 6x². (9.3.5)
Since neither u nor u′ appear on the left, we can see that Equation 9.3.5 has a particular solution of the form
u_p = Ax² + Bx³ + Cx⁴. (9.3.6)
Then
u_p′ = 2Ax + 3Bx² + 4Cx³
u_p′′ = 2A + 6Bx + 12Cx²
u_p′′′ = 6B + 24Cx
u_p^{(4)} = 24C.
Substituting u_p′′, u_p′′′, and u_p^{(4)} into the left side of Equation 9.3.5 yields
u_p^{(4)} + 7u_p′′′ + 12u_p′′ = 24C + 7(6B + 24Cx) + 12(2A + 6Bx + 12Cx²)
= (24A + 42B + 24C) + (72B + 168C)x + 144Cx².
Comparing coefficients of like powers of x on the right sides of this equation and Equation 9.3.5 shows that up satisfies
Equation 9.3.5 if
144C =6
72B + 168C = 19
Solving these equations successively yields C = 1/24 , B = 1/6 , A = −1/6 . Substituting these into Equation 9.3.6
shows that
u_p = (x²/24)(−4 + 4x + x²)
and
y_p = (x²e^{2x}/24)(−4 + 4x + x²)
is a particular solution of Equation 9.3.4 (Figure 9.3.2).
Figure 9.3.2: y_p = (x²e^{2x}/24)(−4 + 4x + x²)
Example 9.3.3
Find a particular solution of
′′′ ′′ ′ x
y +y − 4 y − 4y = e [(5 − 5x) cos x + (2 + 5x) sin x]. (9.3.7)
Solution
Substituting
x
y = ue ,
′ x ′
y = e (u + u),
′′ x ′′ ′
y = e (u + 2 u + u),
′′′ x ′′′ ′′ ′
y = e (u + 3u + 3 u + u)
′′′ ′′ ′ ′′ ′ ′
(u + 3u + 3 u + u) + (u + 2 u + u) − 4(u + u) − 4u = (5 − 5x) cos x + (2 + 5x) sin x,
or
′′′ ′′ ′
u + 4u + u − 6u = (5 − 5x) cos x + (2 + 5x) sin x. (9.3.8)
Since cos x and sin x are not solutions of the complementary equation
′′′ ′′ ′
u + 4u + u − 6u = 0,
a theorem analogous to Theorem 5.5.1 implies that Equation 9.3.8 has a particular solution of the form
Then
′
up = (A1 + B0 + B1 x) cos x + (B1 − A0 − A1 x) sin x,
′′
up = (2 B1 − A0 − A1 x) cos x − (2 A1 + B0 + B1 x) sin x,
′′′
up = −(3 A1 + B0 + B1 x) cos x − (3 B1 − A0 − A1 x) sin x,
so
′′′ ′′ ′
up + 4 up + up − 6 up = − [10 A0 + 2 A1 − 8 B1 + 10 A1 x] cos x − [10 B0 + 2 B1 + 8 A1 + 10 B1 x] sin x.
Comparing the coefficients of x cos x, x sin x, cos x, and sin x here with the corresponding coefficients in Equation 9.3.8
shows that u is a solution of Equation 9.3.8 if
p
−10A1 = −5
−10B1 = 5
−10 A0 − 2 A1 + 8 B1 =5
−10 B0 − 2 B1 − 8 A1 = 2.
Solving the first two equations yields A 1 = 1/2 ,B 1 = −1/2 . Substituting these into the last two equations yields
−10A0 = 5 + 2 A1 − 8 B1 = 10
−10B0 = 2 + 2 B1 + 8 A1 = 5,
so A 0 = −1 ,B 0 = −1/2 . Substituting A 0 = −1 ,A 1 = 1/2 ,B 0 = −1/2 ,B 1 = −1/2 into Equation 9.3.9 shows that
u_p = −(1/2)[(2 − x) cos x + (1 + x) sin x]
and
y_p = e^x u_p = −(e^x/2)[(2 − x) cos x + (1 + x) sin x]
is a particular solution of Equation 9.3.7 (Figure 9.3.3).
Figure 9.3.3: y_p = −(e^x/2)[(2 − x) cos x + (1 + x) sin x]
Example 9.3.4
Find a particular solution of
′′′ ′′ ′ −x
y + 4y + 6 y + 4y = e [(1 − 6x) cos x − (3 + 2x) sin x] . (9.3.10)
Solution
Substituting
−x
y = ue ,
′ −x ′
y =e (u − u),
′′ −x ′′ ′
y =e (u − 2 u + u),
′′′ −x ′′′ ′′ ′
y =e (u − 3u + 3 u − u)
or
′′′ ′′ ′
u +u + u + u = (1 − 6x) cos x − (3 + 2x) sin x. (9.3.11)
a theorem analogous to Theorem 5.5.1 implies that Equation 9.3.11 has a particular solution of the form
2 2
up = (A0 x + A1 x ) cos x + (B0 x + B1 x ) sin x. (9.3.12)
Then
′ 2 2
up = [ A0 + (2 A1 + B0 )x + B1 x ] cos x + [ B0 + (2 B1 − A0 )x − A1 x ] sin x,
′′ 2
up = [2 A1 + 2 B0 − (A0 − 4 B1 )x − A1 x ] cos x
2
+ [2 B1 − 2 A0 − (B0 + 4 A1 )x − B1 x ] sin x,
′′′ 2
up = − [3 A0 − 6 B1 + (6 A1 + B0 )x + B1 x ] cos x
2
− [3 B0 + 6 A1 + (6 B1 − A0 )x − A1 x ] sin x,
so
− [2 B0 + 2 A0 − 2 B1 + 6 A1 + (4 B1 + 4 A1 )x] sin x.
Comparing the coefficients of x cos x, x sin x, cos x, and sin x here with the corresponding coefficients in Equation
9.3.11 shows that u is a solution of Equation 9.3.11 if
p
−4 A1 + 4 B1 = −6
−4 A1 − 4 B1 = −2
−2 A0 + 2 B0 + 2 A1 + 6 B1 = 1
−2 A0 − 2 B0 − 6 A1 + 2 B1 = −3.
Solving the first two equations yields A 1 =1 ,B 1 = −1/2 . Substituting these into the last two equations yields
−2 A0 + 2 B0 = 1 − 2 A1 − 6 B1 = 2
−2 A0 − 2 B0 = −3 + 6 A1 − 2 B1 = 4,
so A_0 = −3/2 and B_0 = −1/2. Substituting A_0 = −3/2, A_1 = 1, B_0 = −1/2, B_1 = −1/2 into Equation 9.3.12 shows that
u_p = −(x/2)[(3 − 2x) cos x + (1 + x) sin x]
and
y_p = −(xe^{−x}/2)[(3 − 2x) cos x + (1 + x) sin x]
is a particular solution of Equation 9.3.10 (Figure 9.3.4).
Figure 9.3.4: y_p = −(xe^{−x}/2)[(3 − 2x) cos x + (1 + x) sin x]
2. y ′′′
− 2y
′′
− 5 y + 6y = e
′ −3x
(32 − 23x + 6 x )
2
3. 4y ′′′
+ 8y
′′ ′
− y − 2y = −e (4 + 45x + 9 x )
x 2
4. y ′′′
+ 3y
′′
− y − 3y = e
′ −2x
(2 − 17x + 3 x )
2
5. y ′′′
+ 3y
′′ ′
− y − 3y = e (−1 + 2x + 24 x
x 2
+ 16 x )
3
6. y ′′′
+y
′′
− 2y = e (14 + 34x + 15 x )
x 2
7. 4y ′′′
+ 8y
′′ ′
− y − 2y = −e
−2x
(1 − 15x)
8. y ′′′
−y
′′ ′
− y + y = e (7 + 6x)
x
9. 2y ′′′
− 7y
′′
+ 4 y + 4y = e
′ 2x
(17 + 30x)
10. y ′′′
− 5y
′′ ′
+ 3 y + 9y = 2 e
3x
(11 − 24 x )
2
11. y ′′′
− 7y
′′ ′
+ 8 y + 16y = 2 e
4x
(13 + 15x)
12. 8y ′′′
− 12 y
′′
+ 6y − y = e
′ x/2
(1 + 4x)
13. y (4)
+ 3y
′′′
− 3y
′′
− 7 y + 6y = −e
′ −x
(12 + 8x − 8 x )
2
14. y (4)
+ 3y
′′′
+y
′′
− 3 y − 2y = −3 e
′ 2x
(11 + 12x)
15. y (4)
+ 8y
′′′
+ 24 y
′′
+ 32 y
′
= −16 e
−2x
(1 + x + x
2
−x )
3
16. 4y (4)
− 11 y
′′ ′
− 9 y − 2y = −e (1 − 6x)
x
17. y (4)
− 2y
′′′ ′
+ 3 y − y = e (3 + 4x + x )
x 2
18. y (4)
− 4y
′′′
+ 6y
′′
− 4 y + 2y = e
′ 2x
(24 + x + x )
4
19. 2y (4)
+ 5y
′′′ ′
− 5 y − 2y = 18 e (5 + 2x)
x
20. y (4)
+y
′′′
− 2y
′′
− 6 y − 4y = −e
′ 2x
(4 + 28x + 15 x )
2
21. 2y (4)
+y
′′′
− 2y − y = 3e
′ −x/2
(1 − 6x)
22. y (4)
− 5y
′′
+ 4y = e (3 + x − 3 x )
x 2
23. y (4)
− 2y
′′′
− 3y
′′
+ 4 y + 4y = e
′ 2x
(13 + 33x + 18 x )
2
24. y (4)
− 3y
′′′
+ 4y
′
=e
2x
(15 + 26x + 12 x )
2
25. y (4)
− 2y
′′′ ′
+ 2 y − y = e (1 + x)
x
26. 2y (4)
− 5y
′′′
+ 3y
′′ ′
+ y − y = e (11 + 12x)
x
27. y (4)
+ 3y
′′′
+ 3y
′′
+y
′
=e
−x
(5 − 24x + 10 x )
2
28. y (4)
− 7y
′′′
+ 18 y
′′ ′
− 20 y + 8y = e
2x
(3 − 8x − 5 x )
2
29. y ′′′
−y
′′
− 4 y + 4y = e
′ −x
[(16 + 10x) cos x + (30 − 10x) sin x]
30. y ′′′
+y
′′
− 4 y − 4y = e
′ −x
[(1 − 22x) cos 2x − (1 + 6x) sin 2x]
31. y ′′′
−y
′′
+ 2 y − 2y = e
′ 2x 2
[(27 + 5x − x ) cos x + (2 + 13x + 9 x ) sin x]
2
32. y ′′′
− 2y
′′ ′ x
+ y − 2y = −e [(9 − 5x + 4 x ) cos 2x − (6 − 5x − 3 x ) sin 2x]
2 2
33. y ′′′
+ 3y
′′ ′
+ 4 y + 12y = 8 cos 2x − 16 sin 2x
35. y ′′′
− 7y
′′ ′
+ 20 y − 24y = −e
2x
[(13 − 8x) cos 2x − (8 − 4x) sin 2x]
36. y ′′′
− 6y
′′
+ 18 y
′
= −e
3x
[(2 − 3x) cos 3x − (3 + 3x) sin 3x]
37. y (4)
+ 2y
′′′
− 2y
′′ ′
− 8 y − 8y = e (8 cos x + 16 sin x)
x
38. y (4)
− 3y
′′′
+ 2y
′′ ′
+ 2 y − 4y = e (2 cos 2x − sin 2x)
x
39. y (4)
− 8y
′′′
+ 24 y
′′ ′
− 32 y + 15y = e
2x
(15x cos 2x + 32 sin 2x)
40. y (4)
+ 6y
′′′
+ 13 y
′′ ′
+ 12 y + 4y = e
−x
[(4 − x) cos x − (5 + x) sin x]
41. y (4)
+ 3y
′′′
+ 2y
′′ ′
− 2 y − 4y = −e
−x
(cos x − sin x)
42. y (4)
− 5y
′′′
+ 13 y
′′ ′
− 19 y + 10y = e (cos 2x + sin 2x)
x
43. y (4)
+ 8y
′′′
+ 32 y
′′ ′
+ 64 y + 39y = e
−2x
[(4 − 15x) cos 3x − (4 + 15x) sin 3x]
44. y (4)
− 5y
′′′
+ 13 y
′′ ′ x
− 19 y + 10y = e [(7 + 8x) cos 2x + (8 − 4x) sin 2x]
45. y (4)
+ 4y
′′′
+ 8y
′′ ′
+ 8 y + 4y = −2 e
−x
(cos x − 2 sin x)
46. y (4)
− 8y
′′′
+ 32 y
′′ ′
− 64 y + 64y = e
2x
(cos 2x − sin 2x)
47. y (4)
− 8y
′′′
+ 26 y
′′ ′
− 40 y + 25y = e
2x
[3 cos x − (1 + 3x) sin x]
48. y ′′′
− 4y
′′ ′
+ 5 y − 2y = e
2x
− 4e
x
− 2 cos x + 4 sin x
49. y ′′′
−y
′′ ′
+ y − y = 5e
2x
+ 2e
x
− 4 cos x + 4 sin x
50. y ′′′
−y
′
= −2(1 + x) + 4 e
x
− 6e
−x
+ 96 e
3x
51. y ′′′
− 4y
′′ ′
+ 9 y − 10y = 10 e
2x
+ 20 e
x
sin 2x − 10
52. y ′′′
+ 3y
′′ ′
+ 3 y + y = 12 e
−x
+ 9 cos 2x − 13 sin 2x
53. y ′′′
+y
′′ ′
− y − y = 4e
−x
(1 − 6x) − 2x cos x + 2(1 + x) sin x
54. y (4)
− 5y
′′
+ 4y = −12 e
x
+ 6e
−x
+ 10 cos x
55. y (4)
− 4y
′′′
+ 11 y
′′ ′
− 14 y + 10y = −e (sin x + 2 cos 2x)
x
56. y (4)
+ 2y
′′′
− 3y
′′ ′
− 4 y + 4y = 2 e (1 + x) + e
x −2x
57. y (4)
+ 4y = sinh x cos x − cosh x sin x
58. y (4)
+ 5y
′′′
+ 9y
′′ ′
+ 7 y + 2y = e
−x
(30 + 24x) − e
−2x
59. y (4)
− 4y
′′′
+ 7y
′′ ′ x
− 6 y + 2y = e (12x − 2 cos x + 2 sin x)
Q9.3.2
In Exercises 9.3.60-9.3.68 find the general solution.
60. y ′′′
−y
′′ ′
−y +y = e
2x
(10 + 3x)
61. y ′′′
+y
′′
− 2y = −e
3x
(9 + 67x + 17 x )
2
62. y ′′′
− 6y
′′
+ 11 y − 6y = e
′ 2x
(5 − 4x − 3 x )
2
63. y ′′′
+ 2y
′′
+y
′
= −2 e
−x
(7 − 18x + 6 x )
2
64. y ′′′
− 3y
′′ ′
+ 3 y − y = e (1 + x)
x
65. y (4)
− 2y
′′
+ y = −e
−x
(4 − 9x + 3 x )
2
66. y ′′′
+ 2y
′′ ′
− y − 2y = e
−2x
[(23 − 2x) cos x + (8 − 9x) sin x]
67. y (4)
− 3y
′′′
+ 4y
′′
− 2y
′
=e
x
[(28 + 6x) cos 2x + (11 − 12x) sin 2x]
68. y (4)
− 4y
′′′
+ 14 y
′′ ′
− 20 y + 25y = e
x
[(2 + 6x) cos 2x + 3 sin 2x]
70. y ′′′
−y
′′ ′
− y + y = −e
−x
(4 − 8x), y(0) = 2,
′
y (0) = 0, y
′′
(0) = 0
71. 4y ′′′
− 3y − y = e
′ −x/2
(2 − 3x), y(0) = −1,
′
y (0) = 15, y
′′
(0) = −17
72. y (4)
+ 2y
′′′
+ 2y
′′ ′
+ 2y + y = e
−x
(20 − 12x), y(0) = 3, y (0) = −4, y
′ ′′
(0) = 7, y
′′′
(0) = −22
73. y ′′′
+ 2y
′′ ′
+ y + 2y = 30 cos x − 10 sin x, y(0) = 3,
′
y (0) = −4, y
′′
(0) = 16
74. y (4)
− 3y
′′′
+ 5y
′′
− 2y
′ x
= −2 e (cos x − sin x), y(0) = 2, y (0) = 0, y
′ ′′
(0) = − 1, y
′′′
(0) = −5
Q9.3.4
75. Prove: A function y is a solution of the constant coefficient nonhomogeneous equation
(n) (n−1) αx
a0 y + a1 y + ⋯ + an y = e G(x) (A)
if and only if y = ue αx
, where u satisfies the differential equation
(n−1) (n−2)
p (α) p (α)
(n) (n−1) (n−2)
a0 u + u + u + ⋯ + p(α)u = G(x) (B)
(n − 1)! (n − 2)!
and
n n−1
p(r) = a0 r + a1 r + ⋯ + an
76. Prove:
a. The equation
( n−1) ( n−2)
p (α) p (α)
(n) (n−1) (n−2)
a0 u + u + u + ⋯ + P (α)u
(n−1)! (n−2)!
k (A)
= (p0 + p1 x + ⋯ + pk x ) cos ωx
k
+ (q0 + q1 x + ⋯ + qk x ) sin ωx
where
2 k+1 2 k+1
U (x) = u0 x + u1 x + ⋯ + uk x , V (x) = v0 x + v1 x + ⋯ + vk x
and
′′ ′ k
a(U (x) + 2ωV (x)) = p0 + p1 x + ⋯ + pk x
′′ ′ k
a(V (x) − 2ωU (x)) = q0 + q1 x + ⋯ + qk x .
When we speak of solutions of this equation and its complementary equation Ly = 0 , we mean solutions on (a, b). We’ll
show how to use the method of variation of parameters to find a particular solution of Ly = F , provided that we know a
fundamental set of solutions {y , y , … , y } of Ly = 0 .
1 2 n
yp = u1 y1 + u2 y2 + ⋯ + un yn (9.4.2)
′ ′ ′
u y1 + u y2 + ⋯ +un yn = 0
1 2
′ ′ ′ ′ ′ ′
u y +u y + ⋯ +un yn = 0
1 1 2 2
(9.4.3)
⋮
These formulas are easy to remember, since they look as though we obtained them by differentiating Equation 9.4.2 n − 1
times while treating u , u , …, u as constants. To see that Equation 9.4.3 implies Equation 9.4.4, we first differentiate
1 2 n
which reduces to
′ ′ ′ ′
yp = u1 y + u2 y + ⋯ + un yn
1 2
which reduces to
′′ ′′ ′′ ′′
yp = u1 y + u2 y + ⋯ + un yn
1 2
because of the second equation in Equation 9.4.3. Continuing in this way yields Equation 9.4.4.
The last equation in Equation 9.4.4 is
(n−1) (n−1) (n−1) (n−1)
yp = u1 y + u2 y + ⋯ + un yn .
1 2
yp = u1 y1 + u2 y2 + ⋯ + un yn
′ ′ ′ ′ ′ ′
u y +u y + ⋯ +un yn = 0
1 1 2 2
The determinant of this system is the Wronskian W of the fundamental set of solutions {y_1, y_2, …, y_n}, which has no zeros on (a, b), by Theorem 9.1.4. Solving Equation 9.4.5 by Cramer's rule yields
u_j′ = (−1)^{n−j} F W_j / (P_0 W), 1 ≤ j ≤ n, (9.4.6)
where W_j is the Wronskian of the set of functions obtained by deleting y_j from {y_1, y_2, …, y_n} and keeping the remaining functions in the same order. Equivalently, W_j is the determinant obtained by deleting the last row and j-th column of W.
Having obtained u_1′, u_2′, …, u_n′, we can integrate to obtain u_1, u_2, …, u_n. As in Section 5.7, we take the constants of integration to be zero, and we drop any linear combination of {y_1, y_2, …, y_n} that may appear in y_p.
Note
For efficiency, it's best to compute W_1, W_2, ⋯, W_n first, and then compute W by expanding in cofactors of the last row; thus,
W = Σ_{j=1}^{n} (−1)^{n−j} y_j^{(n−1)} W_j.
Therefore
W_1 = ∣ y_2 y_3; y_2′ y_3′ ∣, W_2 = ∣ y_1 y_3; y_1′ y_3′ ∣, W_3 = ∣ y_1 y_2; y_1′ y_2′ ∣,
Example 9.4.1
Find a particular solution of
xy′′′ − y′′ − xy′ + y = 8x²e^x, (9.4.8)
given that y_1 = x, y_2 = e^x, and y_3 = e^{−x} form a fundamental set of solutions of the complementary equation.
Solution
The Wronskian of {y_1, y_2, y_3} is
W(x) = ∣ x e^x e^{−x}; 1 e^x −e^{−x}; 0 e^x e^{−x} ∣,
so
W_1 = ∣ e^x e^{−x}; e^x −e^{−x} ∣ = −2,
W_2 = ∣ x e^{−x}; 1 −e^{−x} ∣ = −e^{−x}(x + 1),
W_3 = ∣ x e^x; 1 e^x ∣ = e^x(x − 1).
Expanding W(x) in cofactors of its last row yields W(x) = 2x. Since F(x) = 8x²e^x and P_0(x) = x,
F/(P_0 W) = 8x²e^x/(x · 2x) = 4e^x.
Therefore, from Equation 9.4.6,
u_1′ = 4e^x W_1 = 4e^x(−2) = −8e^x,
u_2′ = −4e^x W_2 = −4e^x(−e^{−x}(x + 1)) = 4(x + 1),
u_3′ = 4e^x W_3 = 4e^x(e^x(x − 1)) = 4e^{2x}(x − 1).
Integrating and taking the constants of integration to be zero yields u_1 = −8e^x, u_2 = 2(x + 1)², and u_3 = e^{2x}(2x − 3). Hence,
y_p = u_1 y_1 + u_2 y_2 + u_3 y_3
= (−8e^x)x + e^x(2(x + 1)²) + e^{−x}(e^{2x}(2x − 3))
= e^x(2x² − 2x − 1).
Since e^x is a solution of the complementary equation, we drop the term −e^x and redefine
y_p = 2xe^x(x − 1).
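As a check, substituting y_p back into Equation 9.4.8 with SymPy gives a zero residual:

import sympy as sp

x = sp.symbols('x')
yp = 2*x*sp.exp(x)*(x - 1)
residual = x*sp.diff(yp, x, 3) - sp.diff(yp, x, 2) - x*sp.diff(yp, x) + yp - 8*x**2*sp.exp(x)
print(sp.simplify(residual))       # 0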
Therefore
∣ y2 y3 y4 ∣ ∣ y1 y3 y4 ∣
∣ ∣ ∣ ∣
∣ ′ ′ ′ ∣ ∣ ′ ′ ′ ∣
W1 = y y y , W2 = y y y ,
∣ 2 3 4 ∣ ∣ 1 3 4 ∣
∣ ∣ ∣ ∣
′′ ′′ ′′ ′′ ′′ ′′
∣ y2 y
3
y
4
∣ ∣ y1 y
3
y
4
∣
∣ y1 y2 y4 ∣ ∣ y1 y2 y3 ∣
∣ ∣ ∣ ∣
′ ′ ′ ∣ ′ ′ ′ ∣
W3 = ∣ y1 y
2
y
4
∣, W4 = y
1
y
2
y
3
,
∣ ∣
∣ ∣ ∣ ∣
′′ ′′ ′′ ′′ ′′ ′′
∣y y y ∣ ∣ y1 y y ∣
1 2 4 2 3
Example 9.4.2
Find a particular solution of
x⁴y^{(4)} + 6x³y′′′ + 2x²y′′ − 4xy′ + 4y = 12x², (9.4.10)
given that y_1 = x, y_2 = x², y_3 = 1/x and y_4 = 1/x² form a fundamental set of solutions of the complementary equation. Then find the general solution of Equation 9.4.10 on (−∞, 0) and (0, ∞).
Solution
We seek a particular solution of Equation 9.4.10 of the form
y_p = u_1 x + u_2 x² + u_3/x + u_4/x².
The Wronskian of {y_1, y_2, y_3, y_4} is
W(x) = ∣ x x² 1/x 1/x²; 1 2x −1/x² −2/x³; 0 2 2/x³ 6/x⁴; 0 0 −6/x⁴ −24/x⁵ ∣,
so
W_1 = ∣ x² 1/x 1/x²; 2x −1/x² −2/x³; 2 2/x³ 6/x⁴ ∣ = −12/x⁴,
W_2 = ∣ x 1/x 1/x²; 1 −1/x² −2/x³; 0 2/x³ 6/x⁴ ∣ = −6/x⁵,
W_3 = ∣ x x² 1/x²; 1 2x −2/x³; 0 2 6/x⁴ ∣ = 12/x²,
W_4 = ∣ x x² 1/x; 1 2x −1/x²; 0 2 2/x³ ∣ = 6/x.
Expanding W(x) in cofactors of its last row yields
W = (6/x⁴)(12/x²) − (24/x⁵)(6/x) = 72/x⁶ − 144/x⁶ = −72/x⁶.
Since F(x) = 12x² and P_0(x) = x⁴,
F/(P_0 W) = 12x² / (x⁴(−72/x⁶)) = −x⁴/6.
Therefore, from Equation 9.4.6,
u_1′ = −(−x⁴/6)W_1 = (x⁴/6)(−12/x⁴) = −2,
u_2′ = (−x⁴/6)W_2 = (−x⁴/6)(−6/x⁵) = 1/x,
u_3′ = −(−x⁴/6)W_3 = (x⁴/6)(12/x²) = 2x²,
u_4′ = (−x⁴/6)W_4 = (−x⁴/6)(6/x) = −x³.
Integrating and taking the constants of integration to be zero yields
u_1 = −2x, u_2 = ln |x|, u_3 = 2x³/3, u_4 = −x⁴/4.
Hence,
y_p = u_1 y_1 + u_2 y_2 + u_3 y_3 + u_4 y_4
= (−2x)x + (ln |x|)x² + (2x³/3)(1/x) + (−x⁴/4)(1/x²)
= x² ln |x| − 19x²/12.
Since −19x²/12 is a solution of the complementary equation, we redefine
y_p = x² ln |x|.
Therefore
y = x² ln |x| + c_1 x + c_2 x² + c_3/x + c_4/x²
is the general solution of Equation 9.4.10 on (−∞, 0) and (0, ∞).
2. y ; {e
2 2 2 2
′′′ ′′ 2 ′ 3 1/2 −x −x −x 2 −x
+ 6x y + (6 + 12 x )y + (12x + 8 x )y = x e , xe , x e }
3. x 3
y
′′′
− 3x y
2 ′′
+ 6x y − 6y = 2x
′
; {x, x 2
,x }
3
4. x 2
y
′′′
+ 2x y
′′
− (x
2
+ 2)y
′
= 2x
2
;{1, x
e /x, e
−x
/x}
5. x 3
y
′′′ 2
− 3 x (x + 1)y
′′
+ 3x(x
2
+ 2x + 2)y − (x
′ 3
+ 3x
2
+ 6x + 6)y = x e
4 −3x
;{x e x 2
, x e , x e }
x 3 x
6. x(x 2
− 2)y
′′′
+ (x
2
− 6)y
′′
+ x(2 − x )y + (6 − x )y = 2(x
2 ′ 2 2
− 2)
2
; {e x
, e
−x
, 1/x}
7. x y ′′′
− (x − 3)y
′′
− (x + 2)y + (x − 1)y = −4 e
′ −x
; {e x x
, e /x, e
−x
/x}
− −
8. 4x 3
y
′′′
+ 4x y
2 ′′
− 5x y + 2y = 30 x
′ 2
; {√x, 2
1/ √x , x }
9. x(x 2
− 1)y
′′′
+ (5 x
2
+ 1)y
′′
+ 2x y − 2y = 12 x
′ 2
; {x, 1/(x − 1), 1/(x + 1)}
11. x 3
y
′′′
+x y
2 ′′
− 2x y + 2y = x
′ 2
; {x, x , 1/x}
2
12. x y ′′′
−y
′′
− xy + y = x
′ 2
; {x, e , e
x −x
}
13. x y (4)
+ 4y
′′′
= 6 ln |x| ; {1, x, x , 1/x}
2
14. 16x 4
y
(4)
+ 96 x y
3 ′′′
+ 72 x y
2 ′′
− 24x y + 9y = 96 x
′ 5/2
; {√−
x,
−
1/ √x , x
3/2
, x
−3/2
}
15. x(x 2
− 6)y
(4)
+ 2(x
2
− 12)y
′′′
+ x(6 − x )y
2 ′′
+ 2(12 − x )y
2 ′
= 2(x
2
− 6)
2
;{1, 1/x, e , e
x −x
}
16. x 4
y
(4)
− 4x y
3 ′′′
+ 12 x y
2 ′′
− 24x y + 24y = x
′ 4
; {x, 2
x , x , x }
3 4
17. x 4
y
(4)
− 4x y
3 ′′′
+ 2 x (6 − x )y
2 2 ′′
+ 4x(x
2
− 6)y + (x
′ 4
− 4x
2
+ 24)y = 4 x e
5 x
;{x e x 2
, x e , xe
x −x 2
, x e
−x
}
18. x 4
y
(4)
+ 6x y
3 ′′′
+ 2x y
2 ′′
− 4x y + 4y = 12 x
′ 2
; {x, x 2
, 1/x, 1/ x }
2
19. x y (4)
+ 4y
′′′
− 2x y
′′
− 4 y + xy = 4 e
′ x
; {e x
, e
−x
, e /x, e
x −x
/x}
20. x y (4)
+ (4 − 6x)y
′′′
+ (13x − 18)y
′′
+ (26 − 12x)y + (4x − 12)y = 3 e
′ x
; {e x
, e
2x
, e /x, e
x 2x
/x}
21. x 4
y
(4)
− 4x y
3 ′′′ 2
+ x (12 − x )y
2 ′′
+ 2x(x
2 ′
− 12)y + 2(12 − x )y = 2 x
2 5
; {x, 2
x , xe , xe
x −x
}
Q9.4.2
In Exercises 9.4.22-9.4.33 solve the initial value problem, given the fundamental set of solutions of the complementary
equation. Graph the solution for Exercises 9.4.22, 9.4.26, 9.4.29, and 9.4.30.
22. x 3
y
′′′
− 2x y
2 ′′
+ 3x y − 3y = 4x,
′
y(1) = 4,
′
y (1) = 4, y
′′
(1) = 2 ; {x, 3
x , x ln x}
23. x 3
y
′′′
− 5x y
2 ′′
+ 14x y − 18y = x ,
′ 3
y(1) = 0,
′
y (1) = 1, y
′′
(1) = 7 ; {x 2
, x , x
3 3
ln x}
2
,
′′ x 2x −x
y (0) = −19; { e , e , xe }
25. x 3
y
′′′
− 6x y
2 ′′
+ 16x y − 16y = 9 x ,
′ 4
y(1) = 2,
′
y (1) = 1, y
′′
(1) = 5 ;{x, 4
x , x
4
ln |x|}
26. (x 2
− 2x + 2)y
′′′
−x y
2 ′′
+ 2x y − 2y = (x
′ 2
− 2x + 2 ) ,
2
y(0) = 0,
′
y (0) = 5 ,y ′′
(0) = 0 {x, x , e } ; 2 x
27. x 3
y
′′′
+x y
2 ′′ ′
− 2x y + 2y = x(x + 1), y(−1) = −6,
′
y (−1) =
43
6
, y
′′
(−1) = −
5
2
; {x, x , 1/x}
2
4
,
′
y (0) =
5
4
,
′′ 1 x 3x
y (0) = ; {x + 1, e , e }
4
30. x 4
y
(4)
+ 3x y
3 ′′′
−x y
2 ′′ ′
+ 2x y − 2y = 9 x ,
2
y(1) = −7,
′
y (1) = −11, y
′′
(1) = −5, y
′′′
(1) = 6;
2
{x, x , 1/x, x ln x}
4
,
′
y (0) = 0, y
′′
(0) = 13,
′′′ x −x 2x
y (0) = 1; {x, e , e , e }
32. 4x 4
y
(4)
+ 24 x y
3 ′′′
+ 23 x y
2 ′′
− x y + y = 6x,
′
y(1) = 2,
′
y (1) = 0, y
′′
(1) = 4,
′′′ 37 − −
y (1) = − ; {x, √x , 1/x, 1/ √x }
4
33. x 4
y
4
+ 5x y
3 ′′′
− 3x y
2 ′′ ′
− 6x y + 6y = 40 x ,
3
y(−1) = −1, y (−1) = −7
′
,
y
′′
(−1) = −1, y
′′′
(−1) = −31 ; {x, 3
x , 1/x, 1/ x }
2
Q9.4.3
34. Suppose the equation
(n) (n−1)
P0 (x)y + P1 (x)y + ⋯ + Pn (x)y = F (x) (A)
is normal on an interval (a, b). Let {y , y , … , y } be a fundamental set of solutions of its complementary equation on (a, b),
1 2 n
let W be the Wronskian of {y , y , … , y }, and let W be the determinant obtained by deleting the last row and the j -th
1 2 n j
x
F (t)Wj (t)
(n−j)
uj (x) = (−1 ) ∫ dt, 1 ≤ j ≤ n,
x0 P0 (t)W (t)
and define
yp = u1 y1 + u2 y2 + ⋯ + un yn .
and
HINT: See the derivation of the method of variation of parameters at the beginning of the section.
b. Show that y is the solution of the initial value problem
p
(n) (n−1)
P0 (x)y + P1 (x)y + ⋯ + Pn (x)y = F (x),
′ (n−1)
y(x0 ) = 0, y (x0 ) = 0, … , y (x0 ) = 0.
where
∣ y1 (t) y2 (t) ⋯ yn (t) ∣
∣ ∣
′ ′ ′
∣ y (t) y (t) ⋯ yn (t) ∣
1 2
∣ ∣
1 ∣ ∣
G(x, t) = ⋮ ⋮ ⋱ ⋮ ,
P0 (t)W (t) ∣ ∣
∣ ∣
(n−2) (n−2) (n−2)
∣y (t) y (t) ⋯ yn (t) ∣
1 2
∣ ∣
∣ y1 (x) y2 (x) ⋯ yn (x) ∣
f. Show that
j
x ∂ G(x,t)
⎧
⎪ ∫ j
F (t)dt, 1 ≤ j ≤ n − 1,
(j) x0 ∂x
yp (x) = ⎨ ( n)
F (x) x ∂ G(x,t)
⎩
⎪ +∫ n
F (t)dt, j = n.
P0 (x) x0 ∂x
Q9.4.4
In Exercises 9.4.35-9.4.42 use the method suggested by Exercise 9.4.34 to find a particular solution in the form
x
y =∫
p G(x, t)F (t)dt , given the indicated fundamental set of solutions. Assume that x and x are in an interval on which 0
x0
36. x 3
y
′′′
+x y
2 ′′ ′
− 2x y + 2y = F (x); {x, x , 1/x}
2
37. x 3
y
′′′ 2
− x (x + 3)y
′′ ′
+ 2x(x + 3)y − 2(x + 3)y = F (x); {x, x , x e }
2 x
39. y (4)
− 5y
′′
+ 4y = F (x);
x
{e , e
−x
, e
2x
, e
−2x
}
40. x y (4)
+ 4y
′′′
= F (x); {1, x, x , 1/x}
2
41. x 4
y
(4)
+ 6x y
3 ′′′
+ 2x y
2 ′′ ′
− 4x y + 4y = F (x) ; {x, x 2
, 1/x, 1/ x }
2
42. x y (4)
−y
′′′
− 4x y + 4 y
′ ′
= F (x); {1, x , e
2 2x
,e
−2x
}
10.1: Introduction to Systems of Differential Equations
Many physical situations are modelled by systems of n differential equations in n unknown functions, where n ≥ 2 . The next
three examples illustrate physical problems that lead to systems of differential equations. In these examples and throughout
this chapter we’ll denote the independent variable by t .
added to both tanks from external sources, pumped from each tank to the other, and drained from both tanks (Figure 10.1.1). A solution with 1 pound of salt per gallon is pumped into T_1 from an external source at 5 gal/min, and a solution with 2 pounds of salt per gallon is pumped into T_2 from an external source at 4 gal/min. The solution from T_1 is pumped into T_2 at 2 gal/min, and the solution from T_2 is pumped into T_1 at 3 gal/min. T_1 is drained at 6 gal/min and T_2 is drained at 3 gal/min. Let Q_1(t) and Q_2(t) be the number of pounds of salt in T_1 and T_2, respectively, at time t > 0. Derive a system of differential equations for Q_1 and Q_2. Assume that both mixtures are well stirred.
Figure 10.1.1
As in Section 4.2, let rate in and rate out denote the rates (lb/min) at which salt enters and leaves a tank; thus,
Q_1′ = (rate in)_1 − (rate out)_1,
Q_2′ = (rate in)_2 − (rate out)_2.
Note that the volumes of the solutions in T_1 and T_2 remain constant at 100 gallons and 300 gallons, respectively.
T_1 receives salt from the external source at the rate of (1 lb/gal) × (5 gal/min) = 5 lb/min, and from T_2 at the rate of
(lb/gal in T_2) × (3 gal/min) = (1/300)Q_2 × 3 = (1/100)Q_2 lb/min.
Therefore
(rate in)_1 = 5 + (1/100)Q_2. (10.1.1)
Solution leaves T_1 at the rate of 8 gal/min, since 6 gal/min are drained and 2 gal/min are pumped to T_2; hence,
(rate out)_1 = (lb/gal in T_1) × (8 gal/min) = (1/100)Q_1 × 8 = (2/25)Q_1. (10.1.2)
Combining Equations 10.1.1 and 10.1.2 gives
Q_1′ = 5 − (2/25)Q_1 + (1/100)Q_2. (10.1.3)
Similarly, T_2 receives salt from the external source at the rate of (2 lb/gal) × (4 gal/min) = 8 lb/min, and from T_1 at the rate of
(lb/gal in T_1) × (2 gal/min) = (1/100)Q_1 × 2 = (1/50)Q_1 lb/min.
Therefore
(rate in)_2 = 8 + (1/50)Q_1. (10.1.4)
Solution leaves T_2 at the rate of 6 gal/min, since 3 gal/min are drained and 3 gal/min are pumped to T_1; hence,
(rate out)_2 = (lb/gal in T_2) × (6 gal/min) = (1/300)Q_2 × 6 = (1/50)Q_2. (10.1.5)
Combining Equations 10.1.4 and 10.1.5 gives
Q_2′ = 8 + (1/50)Q_1 − (1/50)Q_2. (10.1.6)
We say that Equations 10.1.3 and 10.1.6 form a system of two first order equations in two unknowns, and write them together as
Q_1′ = 5 − (2/25)Q_1 + (1/100)Q_2
Q_2′ = 8 + (1/50)Q_1 − (1/50)Q_2. (10.1.7)
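This system can also be integrated numerically. The sketch below uses scipy.integrate.solve_ivp; the initial amounts Q_1(0) = Q_2(0) = 0 are an assumption made only for illustration, since the example does not specify them.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, Q):
    Q1, Q2 = Q
    return [5 - 2*Q1/25 + Q2/100,
            8 + Q1/50 - Q2/50]

sol = solve_ivp(rhs, (0, 100), [0.0, 0.0])
print(sol.y[:, -1])    # pounds of salt in T_1 and T_2 after 100 minutes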
spring S (Figure 10.1.2). The springs obey Hooke’s law, with spring constants k and k . Internal friction causes the
2 1 2
springs to exert damping forces proportional to the rates of change of their lengths, with damping constants c and c . Let 1 2
y = y (t) and y = y (t) be the displacements of the two masses from their equilibrium positions at time t , measured
1 1 2 2
positive upward. Derive a system of differential equations for y and y , assuming that the masses of the springs are
1 2
negligible and that vertical external forces F and F also act on the objects.
1 2
Solution
In equilibrium, S supports both m and m and S supports only m . Therefore, if Δℓ and Δℓ are the elongations of
1 1 2 2 2 1 2
Let H be the Hooke’s law force acting on m , and let D be the damping force on m . Similarly, let H and D be the
1 1 1 1 2 2
Hooke’s law and damping forces acting on m . According to Newton’s second law of motion,
2
′′
m1 y = −m1 g + H1 + D1 + F1 ,
1
(10.1.8)
′′
m2 y = −m2 g + H2 + D2 + F2 .
2
When the displacements are y and y , the change in length of S is −y + Δℓ and the change in length of S is
1 2 1 1 1 2
−y + y + Δℓ . Both springs exert Hooke’s law forces on m , while only S exerts a Hooke’s law force on m . These
2 1 2 1 2 2
forces are in directions that tend to restore the springs to their natural lengths. Therefore
H1 = k1 (−y1 + Δℓ1 ) − k2 (−y2 + y1 + Δℓ2 ) and H2 = k2 (−y2 + y1 + Δℓ2 ). (10.1.9)
When the velocities are y and y , S and S are changing length at the rates −y and −y + y , respectively. Both
′
1
′
2
1 2
′
1
′
2
′
1
springs exert damping forces on m , while only S exerts a damping force on m . Since the force due to damping exerted
1 2 2
by a spring is proportional to the rate of change of length of the spring and in a direction that opposes the change, it
follows that
′ ′ ′ ′ ′
D1 = −c1 y + c2 (y −y ) and D2 = −c2 (y − y ). (10.1.10)
1 2 1 2 1
and
′′ ′ ′
m2 y = −m2 g + k2 (−y2 + y1 + Δℓ2 ) − c2 (y − y ) + F2
2 2 1
(10.1.12)
′ ′
= −(m2 g − k2 Δℓ2 ) − k2 (y2 − y1 ) − c2 (y − y ) + F2 .
2 1
′′ ′ ′
m2 y = c2 y − c2 y + k2 y1 − k2 y2 + F2 .
2 1 2
Figure 10.1.3
According to Newton’s law of gravitation, Earth’s gravitational force F = F(x, y, z) on the object is inversely
proportional to the square of the distance of the object from Earth’s center, and directed toward the center; thus,
K X xi+yj +z k
F = (− ) = −K , (10.1.13)
2 3/2
||X|| ||X|| (x2 + y2 + z2 )
Let R be Earth’s radius. Since ∥F∥ = mg when the object is at Earth’s surface,
K
2
mg = , so K = mgR .
2
R
2
x i+y j +zk
F = −mgR .
2 2 2 3/2
(x +y +z )
Now suppose F is the only force acting on the object. According to Newton’s second law of motion, F = mX ; that is, ′′
x i+y j +zk
′′ ′′ ′′ 2
m(x i+y j +z k) = −mgR .
3/2
(x2 + y 2 + z 2 )
Cancelling the common factor m and equating components on the two sides of this equation yields the system
2
gR x
′′
x =−
2 2 2 3/2
(x +y +z )
2
gR y
′′
y =− (10.1.14)
2 2 2 3/2
(x +y +z )
2
gR z
′′
z =− .
2 2 2 3/2
(x +y +z )
A system of the form
y_1′ = g_1(t, y_1, y_2, …, y_n)
y_2′ = g_2(t, y_1, y_2, …, y_n)
⋮ (10.1.15)
y_n′ = g_n(t, y_1, y_2, …, y_n)
is called a first order system, since the only derivatives occurring in it are first derivatives. The derivative of each of the unknowns may depend upon the independent variable and all the unknowns, but not on the derivatives of other unknowns. When we wish to emphasize the number of unknown functions in Equation 10.1.15 we will say that Equation 10.1.15 is an n × n system.
Systems involving higher order derivatives can often be reformulated as first order systems by introducing additional
unknowns. The next two examples illustrate this.
Example 10.1.4
Rewrite the system
′′ ′ ′
m1 y = −(c1 + c2 )y + c2 y − (k1 + k2 )y1 + k2 y2 + F1
1 1 2
(10.1.16)
′′ ′ ′
m2 y = c2 y − c2 y + k2 y1 − k2 y2 + F2 .
2 1 2
′
m2 v = c2 v1 − c2 v2 + k2 y1 − k2 y2 + F2 .
2
′
y = v2
2
1
v
′
= [−(c1 + c2 )v1 + c2 v2 − (k1 + k2 )y1 + k2 y2 + F1 ] (10.1.17)
1
m1
1
′
v = [ c2 v1 − c2 v2 + k2 y1 − k2 y2 + F2 ] .
2
m2
Note
The difference in form between Equation 10.1.15 and Equation 10.1.17, due to the way in which the unknowns are
denoted in the two systems, isn’t important; Equation 10.1.17 is a first order system, in that each equation in Equation
10.1.17 expresses the first derivative of one of the unknown functions in a way that does not involve derivatives of any of
Example 10.1.5
Rewrite the system
′′ ′ ′ ′′
x = f (t, x, x , y, y , y )
′′′ ′ ′ ′′
y = g(t, x, x , y, y y )
′
x = f (t, x1 , x2 , y1 , y2 , y3 )
2
′
y = y2
1
′
y = y3
2
′
y = g(t, x1 , x2 , y1 , y2 , y3 ).
3
Example 10.1.6
Rewrite the equation
y^{(4)} + 4y′′′ + 6y′′ + 4y′ + y = 0 (10.1.18)
as a first order system.
Solution
Let
y = y_1, y′ = y_2, y′′ = y_3, and y′′′ = y_4.
Then y^{(4)} = y_4′, so Equation 10.1.18 can be written as
y_4′ + 4y_4 + 6y_3 + 4y_2 + y_1 = 0.
Therefore {y_1, y_2, y_3, y_4} satisfy the first order system
y_1′ = y_2
y_2′ = y_3
y_3′ = y_4
y_4′ = −4y_4 − 6y_3 − 4y_2 − y_1.
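In matrix form this system reads Y′ = AY with Y = (y_1, y_2, y_3, y_4). A small NumPy sketch (the matrix A below is just the companion form of Equation 10.1.18):

import numpy as np

A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [ 0.0,  0.0,  1.0,  0.0],
              [ 0.0,  0.0,  0.0,  1.0],
              [-1.0, -4.0, -6.0, -4.0]])   # last row from y4' = -4y4 - 6y3 - 4y2 - y1
print(np.linalg.eigvals(A))                # numerically all four eigenvalues are approximately -1,
                                           # consistent with the characteristic polynomial (r + 1)^4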
Example 10.1.7
Rewrite
′′′ ′ ′′
x = f (t, x, x , x )
′ ′′
x = y1 , x = y2 , and x = y3 .
Then
′ ′ ′ ′′ ′ ′′′
y = x = y2 , y =x = y3 , and y =x .
1 2 3
′
y = y3
2
′
y = f (t, y1 , y2 , y3 ).
3
Since systems of differential equations involving higher derivatives can be rewritten as first order systems by the method used
in Examples 10.1.5-10.1.7 , we’ll consider only first order systems.
y1' = g1(t, y1, y2),   y1(t0) = y10,
y2' = g2(t, y1, y2),   y2(t0) = y20,

ti = t0 + ih,   i = 0, 1, ..., n,

where

h = (b - t0)/n.

I2i = g1(ti + h/2, y1i + (h/2)I1i, y2i + (h/2)J1i),
J2i = g2(ti + h/2, y1i + (h/2)I1i, y2i + (h/2)J1i),
I3i = g1(ti + h/2, y1i + (h/2)I2i, y2i + (h/2)J2i),
J3i = g2(ti + h/2, y1i + (h/2)I2i, y2i + (h/2)J2i),

and

y1,i+1 = y1i + (h/6)(I1i + 2I2i + 2I3i + I4i),
y2,i+1 = y2i + (h/6)(J1i + 2J2i + 2J3i + J4i)

for i = 0, ..., n - 1. Under appropriate conditions on g1 and g2, it can be shown that the global truncation error for the Runge-Kutta method is O(h^4), as in the scalar case.
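Here is a minimal Python sketch of this Runge-Kutta scheme for a 2 × 2 system. The function names are ours, and the first and fourth stages are taken to follow the usual Runge-Kutta pattern, with I1i = g1(ti, y1i, y2i), J1i = g2(ti, y1i, y2i), I4i = g1(ti + h, y1i + hI3i, y2i + hJ3i), and J4i = g2(ti + h, y1i + hI3i, y2i + hJ3i); the sample right-hand sides at the end are illustrative only.

def rk4_system(g1, g2, t0, y10, y20, b, n):
    # Classical Runge-Kutta for y1' = g1(t, y1, y2), y2' = g2(t, y1, y2)
    # on [t0, b] with step h = (b - t0)/n; returns (y1(b), y2(b)) approximately.
    h = (b - t0) / n
    t, y1, y2 = t0, y10, y20
    for _ in range(n):
        I1 = g1(t, y1, y2);                         J1 = g2(t, y1, y2)
        I2 = g1(t + h/2, y1 + h/2*I1, y2 + h/2*J1); J2 = g2(t + h/2, y1 + h/2*I1, y2 + h/2*J1)
        I3 = g1(t + h/2, y1 + h/2*I2, y2 + h/2*J2); J3 = g2(t + h/2, y1 + h/2*I2, y2 + h/2*J2)
        I4 = g1(t + h, y1 + h*I3, y2 + h*J3);       J4 = g2(t + h, y1 + h*I3, y2 + h*J3)
        y1 += h/6 * (I1 + 2*I2 + 2*I3 + I4)
        y2 += h/6 * (J1 + 2*J2 + 2*J3 + J4)
        t += h
    return y1, y2

# Example: y1' = y2, y2' = -y1 with y1(0) = 0, y2(0) = 1, so y1(1) should be close to sin(1).
print(rk4_system(lambda t, y1, y2: y2, lambda t, y1, y2: -y1, 0.0, 0.0, 1.0, 1.0, 20))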
gallon is pumped into T1 from an external source at 1 gal/min, and a solution with 3 pounds of salt per gallon is pumped into T2 from an external source at 2 gal/min. The solution from T1 is pumped into T2 at 3 gal/min, and the solution from T2 is pumped into T1 at 4 gal/min. T1 is drained at 2 gal/min and T2 is drained at 1 gal/min. Let Q1(t) and Q2(t) be the number of pounds of salt in T1 and T2, respectively, at time t > 0. Derive a system of differential equations for Q1 and Q2. Assume that
gallon is pumped into T1 from an external source at 6 gal/min, and a solution with 1 pound of salt per gallon is pumped into T2 from an external source at 5 gal/min. The solution from T1 is pumped into T2 at 2 gal/min, and the solution from T2 is pumped into T1 at 1 gal/min. Both tanks are drained at 3 gal/min. Let Q1(t) and Q2(t) be the number of pounds of salt in T1 and T2, respectively, at time t > 0. Derive a system of differential equations for Q1 and Q2 that's valid until a tank is about to overflow.
mass m2 is suspended from the first on a spring S2 with spring constant k2 and damping constant c2, and a third mass m3 is suspended from the second on a spring S3 with spring constant k3 and damping constant c3. Let y1 = y1(t), y2 = y2(t), and y3 = y3(t) be the displacements of the three masses from their equilibrium positions at time t, measured positive upward. Derive a system of differential equations for y1, y2 and y3, assuming that the masses of the springs are negligible and that
4. Let X = x i + y j + z k be the position vector of an object with mass m, expressed in terms of a rectangular coordinate
system with origin at Earth’s center (Figure 10.1.3). Derive a system of differential equations for x, y , and z , assuming that the
object moves under Earth’s gravitational force (given by Newton’s law of gravitation, as in Example 10.1.3) and a resistive
force proportional to the speed of the object. Let α be the constant of proportionality.
5. Rewrite the given system as a first order system.

a. x''' = f(t, x, y, y')
   y'' = g(t, y, y')

b. u' = f(t, u, v, v', w')
   v'' = g(t, u, v, v', w)
   w'' = h(t, u, v, v', w, w')

c. y''' = f(t, y, y', y'')

d. y^(4) = f(t, y)

e. x'' = f(t, x, y)
   y'' = g(t, x, y)
6. Rewrite the system Equation 10.1.14 of differential equations derived in Example 10.1.3 as a first order system.
7. Formulate a version of Euler's method (Section 3.1) for the numerical solution of the initial value problem

y1' = g1(t, y1, y2),   y1(t0) = y10,
y2' = g2(t, y1, y2),   y2(t0) = y20,

on an interval [t0, b].

8. Formulate a version of the improved Euler method (Section 3.2) for the numerical solution of the initial value problem

y1' = g1(t, y1, y2),   y1(t0) = y10,
y2' = g2(t, y1, y2),   y2(t0) = y20,

on an interval [t0, b].
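For Exercise 7, one natural formulation (a sketch of the expected idea, not the book's stated answer) applies the scalar Euler update to each component: with ti = t0 + ih and h = (b - t0)/n,

y1,i+1 = y1i + h g1(ti, y1i, y2i),
y2,i+1 = y2i + h g2(ti, y1i, y2i),   i = 0, 1, ..., n - 1.

The improved Euler method of Exercise 8 extends to systems in the same componentwise way.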
y1' = a11(t)y1 + a12(t)y2 + ⋯ + a1n(t)yn + f1(t)
y2' = a21(t)y1 + a22(t)y2 + ⋯ + a2n(t)yn + f2(t)
⋮                                                          (10.2.1)
yn' = an1(t)y1 + an2(t)y2 + ⋯ + ann(t)yn + fn(t)

or more briefly as

y' = A(t)y + f(t),   (10.2.2)

where A(t) = [aij(t)] is the n × n matrix of coefficient functions, y = [y1; y2; ...; yn], and f(t) = [f1(t); f2(t); ...; fn(t)].
We call A the coefficient matrix of Equation 10.2.2 and f the forcing function. We’ll say that A and f are continuous if their
entries are continuous. If f = 0 , then Equation 10.2.2 is homogeneous; otherwise, Equation 10.2.2 is nonhomogeneous.
An initial value problem for Equation 10.2.2 consists of finding a solution of Equation 10.2.2 that equals a given constant vector

k = [k1; k2; ...; kn]

at some initial point t0. We write this initial value problem as

y' = A(t)y + f(t),   y(t0) = k.
The next theorem gives sufficient conditions for the existence of solutions of initial value problems for Equation 10.2.2 . We
omit the proof.
Theorem 10.2.1

Suppose A = A(t) and f = f(t) are continuous on (a, b), let t0 be a point in (a, b), and let k be a constant n-vector. Then the initial value problem

y' = A(t)y + f(t),   y(t0) = k

has a unique solution on (a, b).

Example 10.2.1

a. Write the system

y1' = y1 + 2y2 + 2e^{4t}
y2' = 2y1 + y2 + e^{4t}   (10.2.3)

in matrix form and conclude from Theorem 10.2.1 that every initial value problem for Equation 10.2.3 has a unique solution on (−∞, ∞).

b. Verify that

y = (1/5)[8; 7]e^{4t} + c1[1; 1]e^{3t} + c2[1; −1]e^{−t}   (10.2.4)

is a solution of Equation 10.2.3 for all values of the constants c1 and c2.

c. Solve the initial value problem

y' = [1  2; 2  1]y + [2; 1]e^{4t},   y(0) = (1/5)[3; 22].   (10.2.5)
Solution a

The system Equation 10.2.3 can be written in matrix form as

y' = [1  2; 2  1]y + [2; 1]e^{4t}.

An initial value problem for Equation 10.2.3 can be written as

y' = [1  2; 2  1]y + [2; 1]e^{4t},   y(t0) = [k1; k2].

Since the coefficient matrix and the forcing function are both continuous on (−∞, ∞), Theorem 10.2.1 implies that this problem has a unique solution on (−∞, ∞).
Solution b

If y is given by Equation 10.2.4, then

Ay + f = (1/5)[1  2; 2  1][8; 7]e^{4t} + c1[1  2; 2  1][1; 1]e^{3t} + c2[1  2; 2  1][1; −1]e^{−t} + [2; 1]e^{4t}
       = (1/5)[22; 23]e^{4t} + c1[3; 3]e^{3t} + c2[−1; 1]e^{−t} + [2; 1]e^{4t}
       = (1/5)[32; 28]e^{4t} + 3c1[1; 1]e^{3t} − c2[1; −1]e^{−t}
       = y'.
Solution c

We must choose c1 and c2 in Equation 10.2.4 so that

(1/5)[8; 7] + c1[1; 1] + c2[1; −1] = (1/5)[3; 22],

which is equivalent to

[1  1; 1  −1][c1; c2] = [−1; 3].
Note
The theory of n × n linear systems of differential equations is analogous to the theory of the scalar n-th order equation

P0(t)y^(n) + P1(t)y^(n−1) + ⋯ + Pn(t)y = F(t)   (10.2.6)

as developed in Section 9.1. For example, by rewriting Equation 10.2.6 as an equivalent linear system it can be shown that Theorem 10.2.1 implies Theorem 9.1.1 (Exercise 10.2.12).
1. Rewrite the system in matrix form and verify that the given vector function satisfies the system for any choice of the constants c1 and c2.

a. y1' = 2y1 + 4y2
   y2' = 4y1 + 2y2;      y = c1[1; 1]e^{6t} + c2[1; −1]e^{−2t}

b. y1' = −2y1 − 2y2
   y2' = −5y1 + y2;      y = c1[1; 1]e^{−4t} + c2[−2; 5]e^{3t}

c. y1' = −4y1 − 10y2
   y2' = 3y1 + 7y2;      y = c1[−5; 3]e^{2t} + c2[2; −1]e^{t}

d. y1' = 2y1 + y2
   y2' = y1 + 2y2;       y = c1[1; 1]e^{3t} + c2[1; −1]e^{t}
2. Rewrite the system in matrix form and verify that the given vector function satisfies the system for any choice of the constants c1, c2, and c3.

a. y1' = −y1 + 2y2 + 3y3
   y2' = y2 + 6y3
   y3' = −2y3;          y = c1[1; 1; 0]e^{t} + c2[1; 0; 0]e^{−t} + c3[1; −2; 1]e^{−2t}

b. y1' = 2y2 + 2y3
   y2' = 2y1 + 2y3
   y3' = 2y1 + 2y2;     y = c1[−1; 0; 1]e^{−2t} + c2[0; −1; 1]e^{−2t} + c3[1; 1; 1]e^{4t}

c. y1' = −y1 + 2y2 + 2y3
   y2' = 2y1 − y2 + 2y3
   y3' = 2y1 + 2y2 − y3;   y = c1[−1; 0; 1]e^{−3t} + c2[0; −1; 1]e^{−3t} + c3[1; 1; 1]e^{3t}

d. y1' = 3y1 − y2 − y3
   y2' = −2y1 + 3y2 + 2y3
   y3' = 4y1 − y2 − 2y3;   y = c1[1; 0; 1]e^{2t} + c2[1; −1; 1]e^{3t} + c3[1; −3; 7]e^{−t}
3. Rewrite the initial value problem in matrix form and verify that the given vector function is a solution.

a. y1' = y1 + y2,      y1(0) = 1
   y2' = −2y1 + 4y2,   y2(0) = 0;      y = 2[1; 1]e^{2t} − [1; 2]e^{3t}

b. y1' = 5y1 + 3y2,    y1(0) = 12
   y2' = −y1 + y2,     y2(0) = −6;     y = 3[1; −1]e^{2t} + 3[3; −1]e^{4t}
4. Rewrite the initial value problem in matrix form and verify that the given vector function is a solution.

a. y1' = 6y1 + 4y2 + 4y3,    y1(0) = 3
   y2' = −7y1 − 2y2 − y3,    y2(0) = −6
   y3' = 7y1 + 4y2 + 3y3,    y3(0) = 4;
   y = [1; −1; 1]e^{6t} + 2[1; −2; 1]e^{2t} + [0; −1; 1]e^{−t}

b. y1' = 8y1 + 7y2 + 7y3,    y1(0) = 2
   y2' = −5y1 − 6y2 − 9y3,   y2(0) = −4
   y3' = 5y1 + 7y2 + 10y3,   y3(0) = 3;
   y = [1; −1; 1]e^{8t} + [0; −1; 1]e^{3t} + [1; −2; 1]e^{t}
5. Rewrite the system in matrix form and verify that the given vector function satisfies the system for any choice of the constants c1 and c2.

a. y1' = −3y1 + 2y2 + 3 − 2t
   y2' = −5y1 + 3y2 + 6 − 3t;
   y = c1[2 cos t; 3 cos t − sin t] + c2[2 sin t; 3 sin t + cos t] + [1; t]

b. y1' = 3y1 + y2 − 5e^t
   y2' = −y1 + y2 + e^t;
   y = c1[−1; 1]e^{2t} + c2[1 + t; −t]e^{2t} + [1; 3]e^t

c. y1' = −y1 − 4y2 + 4e^t + 8te^t
   y2' = −y1 − y2 + e^{3t} + (4t + 2)e^t;
   y = c1[2; 1]e^{−3t} + c2[−2; 1]e^{t} + [e^{3t}; 2te^t]

d. y1' = −6y1 − 3y2 + 14e^{2t} + 12e^t
   y2' = y1 − 2y2 + 7e^{2t} − 12e^t;
   y = c1[−3; 1]e^{−5t} + c2[−1; 1]e^{−3t} + [e^{2t} + 3e^t; 2e^{2t} − 3e^t]
and show that A and f are continuous on an interval (a, b) if and only if (A) is normal on (a, b).
7. A matrix function

Q(t) = [q11(t)  q12(t)  ⋯  q1s(t); ⋮; qr1(t)  qr2(t)  ⋯  qrs(t)]

is said to be differentiable if its entries {qij} are differentiable. Then the derivative Q' is defined by

Q'(t) = [q11'(t)  q12'(t)  ⋯  q1s'(t); ⋮; qr1'(t)  qr2'(t)  ⋯  qrs'(t)].

a. Prove: If P and Q are differentiable matrices such that P + Q is defined and if c1 and c2 are constants, then

(c1 P + c2 Q)' = c1 P' + c2 Q'.
8. Verify that Y' = AY.

a. Y = [e^{6t}  e^{−2t}; e^{6t}  −e^{−2t}],   A = [2  4; 4  2]

b. Y = [e^{−4t}  −2e^{3t}; e^{−4t}  5e^{3t}],   A = [−2  −2; −5  1]

c. Y = [−5e^{2t}  2e^{t}; 3e^{2t}  −e^{t}],   A = [−4  −10; 3  7]

d. Y = [e^{3t}  e^{t}; e^{3t}  −e^{t}],   A = [2  1; 1  2]

e. Y = [e^{t}  e^{−t}  e^{−2t}; e^{t}  0  −2e^{−2t}; 0  0  e^{−2t}],   A = [−1  2  3; 0  1  6; 0  0  −2]

f. Y = [−e^{−2t}  −e^{−2t}  e^{4t}; 0  e^{−2t}  e^{4t}; e^{−2t}  0  e^{4t}],   A = [0  2  2; 2  0  2; 2  2  0]

g. Y = [e^{3t}  e^{−3t}  0; e^{3t}  0  −e^{−3t}; e^{3t}  e^{−3t}  e^{−3t}],   A = [−9  6  6; −6  3  6; −6  6  3]

h. Y = [e^{2t}  e^{3t}  e^{−t}; 0  −e^{3t}  −3e^{−t}; e^{2t}  e^{3t}  7e^{−t}],   A = [3  −1  −1; −2  3  2; 4  −1  −2]
9. Suppose
y11 y12
y1 = [ ] and y2 = [ ]
y21 y22
and define
y11 y12
Y =[ ].
y21 y22
a. Show that Y = AY . ′
n
b. Find the derivative of Y −n
= (Y
−1
) , where n is a positive integer.
c. State how the results obtained in (a) and (b) are analogous to results from calculus concerning scalar functions.
12. Show that Theorem 10.2.1 implies Theorem 9.1.1. HINT: Write the scalar equation

P0(x)y^(n) + P1(x)y^(n−1) + ⋯ + Pn(x)y = F(x)

as an equivalent linear system.

differentiable on (a, b). Find a matrix B such that the function x = Py is a solution of x' = Bx on (a, b).
on an interval (a, b). The theory of linear homogeneous systems has much in common with the theory of linear homogeneous
scalar equations, which we considered in Sections 2.1, 5.1, and 9.1.
Whenever we refer to solutions of y' = A(t)y we'll mean solutions on (a, b). Since y ≡ 0 is obviously a solution of y' = A(t)y, we call it the trivial solution. Any other solution is nontrivial.

If y1, y2, ..., yn are vector functions defined on an interval (a, b) and c1, c2, ..., cn are constants, then

y = c1 y1 + c2 y2 + ⋯ + cn yn   (10.3.1)

is a linear combination of y1, y2, ..., yn. It's easy to show that if y1, y2, ..., yn are solutions of y' = A(t)y on (a, b), then so is any linear combination of y1, y2, ..., yn (Exercise 10.3.1). We say that {y1, y2, ..., yn} is a fundamental set of solutions of y' = A(t)y on (a, b) if every solution of y' = A(t)y on (a, b) can be written as a linear combination of y1, y2, ..., yn, as in Equation 10.3.1. In this case we say that Equation 10.3.1 is the general solution of y' = A(t)y on (a, b).

It can be shown that if A is continuous on (a, b) then y' = A(t)y has infinitely many fundamental sets of solutions on (a, b) (Exercises 10.3.15 and 10.3.16). The next definition will help to characterize fundamental sets of solutions of y' = A(t)y.
A set {y1, y2, ..., yn} of n vector functions is linearly independent on (a, b) if the only constants c1, c2, ..., cn such that

c1 y1(t) + c2 y2(t) + ⋯ + cn yn(t) = 0,   a < t < b,   (10.3.2)

are c1 = c2 = ⋯ = cn = 0. If Equation 10.3.2 holds for some set of constants c1, c2, ..., cn that are not all zero, then {y1, y2, ..., yn} is linearly dependent on (a, b).
The next theorem is analogous to Theorems 5.1.3 and 9.1.2.
Theorem 10.3.1
Suppose the n × n matrix A = A(t) is continuous on (a, b). Then a set {y1, y2, ..., yn} of n solutions of y' = A(t)y on (a, b) is a fundamental set of solutions if and only if it's linearly independent on (a, b).
Example 10.3.1
Show that the vector functions

y1 = [e^t; 0; e^{−t}],   y2 = [0; e^{3t}; 1],   and   y3 = [e^{2t}; e^{3t}; 0]

are linearly independent on every interval (a, b).

Solution

Suppose c1 y1 + c2 y2 + c3 y3 = 0 on (a, b); that is,

[e^t  0  e^{2t}; 0  e^{3t}  e^{3t}; e^{−t}  1  0][c1; c2; c3] = [0; 0; 0],   a < t < b.

Since the determinant of the matrix on the left is −2e^{4t}, which is never zero, the matrix is invertible for every t, so c1 = c2 = c3 = 0.
We can use the method in Example 10.3.1 to test n solutions {y1, y2, ..., yn} of any n × n system y' = A(t)y for linear independence on an interval (a, b) on which A is continuous. To explain this (and for other purposes later), it is useful to write a linear combination of y1, y2, ..., yn in a different way. We first write the vector functions in terms of their components as

y1 = [y11; y21; ...; yn1],   y2 = [y12; y22; ...; yn2],   ...,   yn = [y1n; y2n; ...; ynn].

If

y = c1 y1 + c2 y2 + ⋯ + cn yn

then

y = [y11  y12  ⋯  y1n; y21  y22  ⋯  y2n; ⋮; yn1  yn2  ⋯  ynn][c1; c2; ...; cn] = Yc,

where

c = [c1; c2; ...; cn]

and Y is the matrix whose columns are y1, y2, ..., yn. If y1, y2, ..., yn are solutions of y' = A(t)y, then

Y' = [y1'  y2'  ⋯  yn'] = [Ay1  Ay2  ⋯  Ayn] = A[y1  y2  ⋯  yn] = AY;

that is, Y satisfies the matrix differential equation Y' = AY.
The determinant of Y,

W = |y11  y12  ⋯  y1n; y21  y22  ⋯  y2n; ⋮; yn1  yn2  ⋯  ynn|   (10.3.5)

is called the Wronskian of {y1, y2, ..., yn}. It can be shown (Exercises 10.3.2 and 10.3.3) that this definition is analogous to definitions of the Wronskian of scalar functions given in Sections 5.1 and 9.1. The next theorem is analogous to Theorems 5.1.4 and 9.1.3. The proof is sketched in Exercise 10.3.4 for n = 2 and in Exercise 10.3.5 for general n.
W(t) = W(t0) exp(∫_{t0}^{t} [a11(s) + a22(s) + ⋯ + ann(s)] ds),   a < t < b.   (10.3.6)
Note
The sum of the diagonal entries of a square matrix A is called the trace of A, denoted by tr(A). Thus, for an n × n matrix A,

tr(A) = a11 + a22 + ⋯ + ann,

and Equation 10.3.6 can be written as

W(t) = W(t0) exp(∫_{t0}^{t} tr(A(s)) ds),   a < t < b.
Theorem 10.3.3
Suppose the n × n matrix A = A(t) is continuous on (a, b) and let y1, y2, ..., yn be solutions of y' = A(t)y on (a, b). Then the following statements are equivalent; that is, they are either all true or all false:

a. The general solution of y' = A(t)y on (a, b) is y = c1 y1 + c2 y2 + ⋯ + cn yn, where c1, c2, ..., cn are arbitrary constants.

b. {y1, y2, ..., yn} is a fundamental set of solutions of y' = A(t)y on (a, b).
Theorem 10.3.2 are true for the columns of Y . In this case, Equation 10.3.3 implies that the general solution of y = A(t)y ′
Example 10.3.2
The vector functions

y1 = [−e^{2t}; 2e^{2t}]   and   y2 = [−e^{−t}; e^{−t}]

are solutions of the system

y' = [−4  −3; 6  5]y   (10.3.7)

on (−∞, ∞).

a. Compute the Wronskian of {y1, y2} directly from Equation 10.3.5.
b. Verify Abel's formula Equation 10.3.6 for the Wronskian of {y1, y2}.
c. Find the general solution of Equation 10.3.7.
d. Solve the initial value problem

y' = [−4  −3; 6  5]y,   y(0) = [4; −5].   (10.3.8)
Solution a

From Equation 10.3.5,

W(t) = |−e^{2t}  −e^{−t}; 2e^{2t}  e^{−t}| = e^{2t} e^{−t} |−1  −1; 2  1| = e^t.   (10.3.9)
Solution b

Here

A = [−4  −3; 6  5],

so a11(t) + a22(t) = −4 + 5 = 1, and Equation 10.3.6 yields

W(t) = W(t0) exp(∫_{t0}^{t} 1 ds) = |−e^{2t0}  −e^{−t0}; 2e^{2t0}  e^{−t0}| e^{t−t0} = e^{t0} e^{t−t0} = e^t,

consistent with Equation 10.3.9.
is a fundamental matrix for Equation 10.3.7. Therefore the general solution of Equation 10.3.7 is

y = c1 y1 + c2 y2 = c1[−e^{2t}; 2e^{2t}] + c2[−e^{−t}; e^{−t}] = [−e^{2t}  −e^{−t}; 2e^{2t}  e^{−t}][c1; c2].   (10.3.10)
Solution d

Setting t = 0 in Equation 10.3.10 and imposing the initial condition in Equation 10.3.8 yields

c1[−1; 2] + c2[−1; 1] = [4; −5].

Thus,

−c1 − c2 = 4
2c1 + c2 = −5.
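Completing the calculation (a routine step): adding these two equations gives c1 = −1, and then c2 = −3, so the solution of the initial value problem Equation 10.3.8 is

y = −y1 − 3y2 = [e^{2t} + 3e^{−t}; −2e^{2t} − 3e^{−t}].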
2. In Section 5.1 the Wronskian of two solutions y1 and y2 of the scalar second order equation

P0(x)y'' + P1(x)y' + P2(x)y = 0   (A)

was defined to be

W = |y1  y2; y1'  y2'|.

a. Rewrite (A) as a system of first order equations and show that W is the Wronskian (as defined in this section) of two solutions of this system.

b. Apply Equation 10.3.6 to the system derived in (a), and show that

W(x) = W(x0) exp{−∫_{x0}^{x} P1(s)/P0(s) ds}.
3. In Section 9.1 the Wronskian of n solutions y1, y2, ..., yn of the n-th order equation

P0(x)y^(n) + P1(x)y^(n−1) + ⋯ + Pn(x)y = 0   (A)

was defined to be

W = |y1  y2  ⋯  yn; y1'  y2'  ⋯  yn'; ⋮; y1^(n−1)  y2^(n−1)  ⋯  yn^(n−1)|.

a. Rewrite (A) as a system of first order equations and show that W is the Wronskian (as defined in this section) of n solutions of this system.
and

[y21'  y22'] = a21[y11  y12] + a22[y21  y22].

W(t) = W(t0) exp(∫_{t0}^{t} [a11(s) + a22(s)] ds),   a < t < b.

b. Use the equation Y' = AY and the definition of matrix multiplication to show that

rm' = am1 r1 + am2 r2 + ⋯ + amn rn.

det(Wm) = amm W.

W(t) = W(t0) exp(∫_{t0}^{t} [a11(s) + a22(s) + ⋯ + ann(s)] ds),   a < t < b.
6. Suppose the n × n matrix A is continuous on (a, b) and t0 is a point in (a, b). Let Y be a fundamental matrix for y' = A(t)y on (a, b).

b. Show that the solution of the initial value problem y' = A(t)y, y(t0) = k is

y = Y(t)Y^{−1}(t0)k.
7. Let

A = [2  4; 4  2],   y1 = [e^{6t}; e^{6t}],   y2 = [e^{−2t}; −e^{−2t}],   k = [−3; 9].

c. Use the result of Exercise 10.3.6(b) to find a formula for the solution of (A) for an arbitrary initial vector k.

8. Repeat Exercise 10.3.7 with

A = [−2  −2; −5  1],   y1 = [e^{−4t}; e^{−4t}],   y2 = [−2e^{3t}; 5e^{3t}],   k = [10; −4].
11. Let

A = [3  −1  −1; −2  3  2; 4  −1  −2],
y1 = [e^{2t}; 0; e^{2t}],   y2 = [e^{3t}; −e^{3t}; e^{3t}],   y3 = [e^{−t}; −3e^{−t}; 7e^{−t}],   k = [2; −7; 20].

c. Use the result of Exercise 10.3.6(b) to find a formula for the solution of (A) for an arbitrary initial vector k.

12. Repeat Exercise 10.3.11 with

A = [0  2  2; 2  0  2; 2  2  0],
y1 = [−e^{−2t}; 0; e^{−2t}],   y2 = [−e^{−2t}; e^{−2t}; 0],   y3 = [e^{4t}; e^{4t}; e^{4t}],   k = [0; −9; 12].
14. Suppose Y and Z are fundamental matrices for the n × n system y' = A(t)y. Then some of the four matrices YZ^{−1}, Y^{−1}Z, Z^{−1}Y, ZY^{−1} are necessarily constant. Identify them and prove that they are constant.
15. Suppose the columns of an n × n matrix Y are solutions of the n × n system y' = Ay and C is an n × n constant matrix.

a. Show that the matrix Z = YC satisfies the differential equation Z' = AZ.

b. Show that Z is a fundamental matrix for y' = A(t)y if and only if C is invertible and Y is a fundamental matrix for y' = A(t)y.
16. Suppose the n × n matrix A = A(t) is continuous on (a, b) and t0 is in (a, b). For i = 1, 2, ..., n, let yi be the solution of the initial value problem y' = A(t)y, y(t0) = ei, where

e1 = [1; 0; ...; 0],   e2 = [0; 1; ...; 0],   ...,   en = [0; 0; ...; 1].

a. Show that {y1, y2, ..., yn} is a fundamental set of solutions of y' = A(t)y on (a, b).

b. Conclude from (a) and Exercise 10.3.15 that y' = A(t)y has infinitely many fundamental sets of solutions on (a, b).
17. Show that Y is a fundamental matrix for the system y' = A(t)y if and only if (Y^{−1})^T is a fundamental matrix for y' = −A^T(t)y, where A^T denotes the transpose of A. HINT: See Exercise 10.3.11.
18. Let Z be the fundamental matrix for the constant coefficient system y' = Ay such that Z(0) = I.

a. Show that Z(t)Z(s) = Z(t + s) for all s and t. HINT: For fixed s let Γ1(t) = Z(t)Z(s) and Γ2(t) = Z(t + s). Show that Γ1 and Γ2 are both solutions of the matrix initial value problem Γ' = AΓ, Γ(0) = Z(s). Then conclude from Theorem 10.2.1 that Γ1 = Γ2.

c. The matrix Z defined above is sometimes denoted by e^{tA}. Discuss the motivation for this notation.
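For readers meeting the notation e^{tA} for the first time, one standard motivation (a sketch, not the exercise's stated answer) is the power series

e^{tA} = I + tA + t^2 A^2/2! + t^3 A^3/3! + ⋯,

which reduces to the scalar exponential when n = 1 and satisfies (e^{tA})' = A e^{tA} and e^{0A} = I, so t ↦ e^{tA} is precisely the fundamental matrix Z with Z(0) = I; the identity Z(t)Z(s) = Z(t + s) of part (a) then mirrors e^{tA} e^{sA} = e^{(t+s)A}.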
where A is an n × n constant matrix. Since A is continuous on (−∞, ∞), Theorem 10.2.1 implies that all solutions of Equation 10.4.1 are defined on (−∞, ∞). Therefore, when we speak of solutions of y' = Ay, we'll mean solutions on (−∞, ∞).
In this section we assume that all the eigenvalues of A are real and that A has a set of n linearly independent eigenvectors. In
the next two sections we consider the cases where some of the eigenvalues of A are complex, or where A does not have n
linearly independent eigenvectors.
In Example 10.3.2 we showed that the vector functions

y1 = [−e^{2t}; 2e^{2t}]   and   y2 = [−e^{−t}; e^{−t}]

form a fundamental set of solutions of the system

y' = [−4  −3; 6  5]y,   (10.4.2)

but we did not show how we obtained y1 and y2 in the first place. To see how these solutions can be obtained we write Equation 10.4.2 as

y1' = −4y1 − 3y2
                        (10.4.3)
y2' = 6y1 + 5y2

and look for solutions of the form

y1 = x1 e^{λt}   and   y2 = x2 e^{λt},   (10.4.4)

where x1, x2, and λ are constants to be determined. Differentiating Equation 10.4.4 yields

y1' = λx1 e^{λt}   and   y2' = λx2 e^{λt}.

Substituting this and Equation 10.4.4 into Equation 10.4.3 and canceling the common factor e^{λt} yields

−4x1 − 3x2 = λx1
6x1 + 5x2 = λx2,

which we rewrite as

(−4 − λ)x1 − 3x2 = 0
6x1 + (5 − λ)x2 = 0.   (10.4.5)
The trivial solution x1 = x2 = 0 of this system isn't useful, since it corresponds to the trivial solution y1 ≡ y2 ≡ 0 of Equation 10.4.3, which can't be part of a fundamental set of solutions of Equation 10.4.2. Therefore we consider only those values of λ for which Equation 10.4.5 has nontrivial solutions. These are the values of λ for which the determinant of Equation 10.4.5 is zero; that is,
|−4 − λ  −3; 6  5 − λ| = (−4 − λ)(5 − λ) + 18 = λ^2 − λ − 2 = (λ − 2)(λ + 1) = 0,

so λ1 = 2 and λ2 = −1 are the values of λ for which Equation 10.4.5 has nontrivial solutions. Taking λ = 2 in Equation 10.4.5 yields

−6x1 − 3x2 = 0
6x1 + 3x2 = 0,

so x2 = −2x1. Taking x1 = −1 here yields the solution y1 = −e^{2t}, y2 = 2e^{2t} of Equation 10.4.3. We can write this solution in vector form as

y1 = [−1; 2]e^{2t}.   (10.4.6)

Taking λ = −1 in Equation 10.4.5 yields

−3x1 − 3x2 = 0
6x1 + 6x2 = 0,

so x1 = −x2. Taking x2 = 1 here yields the solution y1 = −e^{−t}, y2 = e^{−t} of Equation 10.4.3. We can write this solution in vector form as

y2 = [−1; 1]e^{−t}.   (10.4.7)
In Equation 10.4.6 and Equation 10.4.7 the constant coefficients in the arguments of the exponential functions are the
eigenvalues of the coefficient matrix in Equation 10.4.2, and the vector coefficients of the exponential functions are associated
eigenvectors. This illustrates the next theorem.
Theorem 10.4.1
Suppose the n × n constant matrix A has n real eigenvalues λ1, λ2, ..., λn (which need not be distinct) with associated linearly independent eigenvectors x1, x2, ..., xn. Then the functions

y1 = x1 e^{λ1 t},   y2 = x2 e^{λ2 t},   ...,   yn = xn e^{λn t}

form a fundamental set of solutions of y' = Ay; that is, the general solution of this system is

y = c1 x1 e^{λ1 t} + c2 x2 e^{λ2 t} + ⋯ + cn xn e^{λn t}.

Proof

Differentiating yi = xi e^{λi t} and recalling that Axi = λi xi yields

yi' = λi xi e^{λi t} = Axi e^{λi t} = Ayi,

so each yi is a solution of y' = Ay. The Wronskian of {y1, y2, ..., yn} is

W(t) = det[x1 e^{λ1 t}  x2 e^{λ2 t}  ⋯  xn e^{λn t}] = e^{(λ1 + λ2 + ⋯ + λn)t} det[x1  x2  ⋯  xn].

Since the columns of the determinant on the right are x1, x2, ..., xn, which are assumed to be linearly independent, the determinant is nonzero. Therefore Theorem 10.3.3 implies that {y1, y2, ..., yn} is a fundamental set of solutions of y' = Ay.
Example 10.4.1
a. Find the general solution of

y' = [2  4; 4  2]y.   (10.4.8)

b. Solve the initial value problem

y' = [2  4; 4  2]y,   y(0) = [5; −1].   (10.4.9)

Solution a

The characteristic polynomial of the coefficient matrix A in Equation 10.4.8 is

|2 − λ  4; 4  2 − λ| = (λ − 2)^2 − 16 = (λ − 2 − 4)(λ − 2 + 4) = (λ − 6)(λ + 2).
Hence, λ1 = 6 and λ2 = −2 are eigenvalues of A. To obtain the eigenvectors, we must solve the system

[2 − λ  4; 4  2 − λ][x1; x2] = [0; 0]   (10.4.10)

with λ = 6 and λ = −2. Setting λ = 6 in Equation 10.4.10 yields

[−4  4; 4  −4][x1; x2] = [0; 0],

so x1 = x2. Taking x2 = 1 yields the eigenvector [1; 1], so

y1 = [1; 1]e^{6t}

is a solution of Equation 10.4.8. Setting λ = −2 in Equation 10.4.10 yields

[4  4; 4  4][x1; x2] = [0; 0],

so x1 = −x2. Taking x2 = 1 yields the eigenvector [−1; 1], so

y2 = [−1; 1]e^{−2t}

is a solution of Equation 10.4.8. From Theorem 10.4.1, the general solution of Equation 10.4.8 is

y = c1 y1 + c2 y2 = c1[1; 1]e^{6t} + c2[−1; 1]e^{−2t}.   (10.4.11)
Solution b

To satisfy the initial condition in Equation 10.4.9, we must choose c1 and c2 in Equation 10.4.11 so that

c1[1; 1] + c2[−1; 1] = [5; −1];

that is,

c1 − c2 = 5
c1 + c2 = −1.
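Completing the calculation (a routine step): solving this pair gives c1 = 2 and c2 = −3, so the solution of the initial value problem Equation 10.4.9 is

y = 2[1; 1]e^{6t} − 3[−1; 1]e^{−2t}.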
Example 10.4.2
a. Find the general solution of

y' = [3  −1  −1; −2  3  2; 4  −1  −2]y.   (10.4.12)

b. Solve the initial value problem

y' = [3  −1  −1; −2  3  2; 4  −1  −2]y,   y(0) = [2; −1; 8].   (10.4.13)

Solution a

The characteristic polynomial of the coefficient matrix A in Equation 10.4.12 is

|3 − λ  −1  −1; −2  3 − λ  2; 4  −1  −2 − λ| = −(λ − 2)(λ − 3)(λ + 1).

Hence, the eigenvalues of A are λ1 = 2, λ2 = 3, and λ3 = −1. To find the eigenvectors, we must solve the system

[3 − λ  −1  −1; −2  3 − λ  2; 4  −1  −2 − λ][x1; x2; x3] = [0; 0; 0]   (10.4.14)

with λ = 2, 3, and −1.
Setting λ = 2 in Equation 10.4.14 yields the system with augmented matrix

[1  −1  −1  ⋮  0; −2  1  2  ⋮  0; 4  −1  −4  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  1  0  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = x3 and x2 = 0. Taking x3 = 1 yields

y1 = [1; 0; 1]e^{2t}

as a solution of Equation 10.4.12.
Setting λ = 3 in Equation 10.4.14 yields the system with augmented matrix

[0  −1  −1  ⋮  0; −2  0  2  ⋮  0; 4  −1  −5  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  1  1  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = x3 and x2 = −x3. Taking x3 = 1 yields

y2 = [1; −1; 1]e^{3t}

as a solution of Equation 10.4.12.
Setting λ = −1 in Equation 10.4.14 yields the system with augmented matrix

[4  −1  −1  ⋮  0; −2  4  2  ⋮  0; 4  −1  −1  ⋮  0],

which is row equivalent to

[1  0  −1/7  ⋮  0; 0  1  3/7  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = x3/7 and x2 = −3x3/7. Taking x3 = 7 yields

y3 = [1; −3; 7]e^{−t}

as a solution of Equation 10.4.12. By Theorem 10.4.1, the general solution of Equation 10.4.12 is

y = c1[1; 0; 1]e^{2t} + c2[1; −1; 1]e^{3t} + c3[1; −3; 7]e^{−t},

which can also be written as

y = [e^{2t}  e^{3t}  e^{−t}; 0  −e^{3t}  −3e^{−t}; e^{2t}  e^{3t}  7e^{−t}][c1; c2; c3].   (10.4.15)
Solution b

To satisfy the initial condition in Equation 10.4.13 we must choose c1, c2, c3 in Equation 10.4.15 so that

[1  1  1; 0  −1  −3; 1  1  7][c1; c2; c3] = [2; −1; 8].

Solving this system yields c1 = 3, c2 = −2, c3 = 1. Hence, the solution of the initial value problem Equation 10.4.13 is

y = 3[1; 0; 1]e^{2t} − 2[1; −1; 1]e^{3t} + [1; −3; 7]e^{−t}.
Example 10.4.3
Find the general solution of

y' = [−3  2  2; 2  −3  2; 2  2  −3]y.   (10.4.16)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.4.16 is

|−3 − λ  2  2; 2  −3 − λ  2; 2  2  −3 − λ| = −(λ − 1)(λ + 5)^2.

Hence, the eigenvalues of A are λ1 = 1 with multiplicity 1 and λ2 = −5 with multiplicity 2.
Eigenvectors associated with λ1 = 1 are solutions of the system with augmented matrix

[−4  2  2  ⋮  0; 2  −4  2  ⋮  0; 2  2  −4  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  1  −1  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = x3 and x2 = x3. Taking x3 = 1 yields the solution

y1 = [1; 1; 1]e^{t}   (10.4.17)
of Equation 10.4.16. Eigenvectors associated with λ2 = −5 are solutions of the system with augmented matrix

[2  2  2  ⋮  0; 2  2  2  ⋮  0; 2  2  2  ⋮  0].

Hence, the components of these eigenvectors need only satisfy the single condition

x1 + x2 + x3 = 0.

For example, taking (x2, x3) = (0, 1) and (x2, x3) = (1, 0) shows that

[−1; 0; 1]   and   [−1; 1; 0]

are linearly independent eigenvectors associated with λ2 = −5, and the corresponding solutions of Equation 10.4.16 are

y2 = [−1; 0; 1]e^{−5t}   and   y3 = [−1; 1; 0]e^{−5t}.
Because of this and Equation 10.4.17, Theorem 10.4.1 implies that the general solution of Equation 10.4.16 is

y = c1[1; 1; 1]e^{t} + c2[−1; 0; 1]e^{−5t} + c3[−1; 1; 0]e^{−5t}.
It is convenient to think of a "y1-y2 plane," where a point is identified by rectangular coordinates (y1, y2). If y = [y1; y2] is a non-constant solution of Equation 10.4.18, then the point (y1(t), y2(t)) moves along a curve C in the y1-y2 plane as t varies from −∞ to ∞. We call C the trajectory of y. (We also say that C is a trajectory of the system Equation 10.4.18.) It's important to note that C is the trajectory of infinitely many solutions of Equation 10.4.18, since if τ is any real number, then y(t − τ) is a solution of Equation 10.4.18 (Exercise 10.4.28b), and (y1(t − τ), y2(t − τ)) also moves along C as t varies from −∞ to ∞. Moreover, Exercise 10.4.28c implies that distinct trajectories of Equation 10.4.18 can't intersect, and that two solutions y1 and y2 of Equation 10.4.18 have the same trajectory if and only if y2(t) = y1(t − τ) for some τ.

From Exercise 10.4.28a, a trajectory of a nontrivial solution of Equation 10.4.18 can't contain (0, 0), which we define to be the trajectory of the trivial solution y ≡ 0. More generally, if y = [k1; k2] ≠ 0 is a constant solution of Equation 10.4.18 (which could occur if zero is an eigenvalue of the matrix of Equation 10.4.18), we define the trajectory of y to be the single point (k1, k2).
To be specific, this is the question: What do the trajectories look like, and how are they traversed? In this section we’ll answer
this question, assuming that the matrix
a11 a12
A =[ ]
a21 a22
of Equation 10.4.18 has real eigenvalues λ1 and λ2 with associated linearly independent eigenvectors x1 and x2 . Then the
general solution of Equation 10.4.18 is
λ1 t λ2 t
y = c1 x1 e + c2 x2 e . (10.4.19)
If λ1 = λ2, then Equation 10.4.19 reduces to

y = (c1 x1 + c2 x2)e^{λ1 t}.

In this case every solution of Equation 10.4.18 can be written as y = xe^{λ1 t} where x is an arbitrary 2-vector, and the trajectories of nontrivial solutions of Equation 10.4.18 are half-lines through (but not including) the origin. The direction of motion is away from the origin if λ1 > 0 (Figure 10.4.1), toward it if λ1 < 0 (Figure 10.4.2). (In these and the next figures an arrow through a point indicates the direction of motion along the trajectory through the point.)
Now suppose λ1 ≠ λ2, and let L1 and L2 denote the lines through the origin parallel to x1 and x2, respectively. By a half-line of L1 (or L2), we mean either of the rays obtained by removing the origin from L1 (or L2).
Letting c2 = 0 in Equation 10.4.19 yields y = c1 x1 e^{λ1 t}. If c1 ≠ 0, the trajectory defined by this solution is a half-line of L1. The direction of motion is away from the origin if λ1 > 0, toward the origin if λ1 < 0. Similarly, the trajectory of y = c2 x2 e^{λ2 t} with c2 ≠ 0 is a half-line of L2.
Henceforth, we assume that c1 and c2 in Equation 10.4.19 are both nonzero. In this case, the trajectory of Equation 10.4.19 can't intersect L1 or L2, since every point on these lines is on the trajectory of a solution for which either c1 = 0 or c2 = 0. (Remember: distinct trajectories can't intersect!) Therefore the trajectory of Equation 10.4.19 must lie entirely in one of the four open sectors bounded by L1 and L2, and does not contain any point on L1 or L2. Since the initial point (y1(0), y2(0)) defined by

y(0) = c1 x1 + c2 x2

is on the trajectory, we can determine which sector contains the trajectory from the signs of c1 and c2, as shown in Figure 10.4.3.
Since the right side of Equation 10.4.20 approaches c2 x2 as t → ∞, the trajectory is asymptotically parallel to L2 as t → ∞. Since the right side of Equation 10.4.21 approaches c1 x1 as t → −∞, the trajectory is asymptotically parallel to L1 as t → −∞.
The shape and direction of traversal of the trajectory of Equation 10.4.19 depend upon whether λ1 and λ2 are both positive, both negative, or of opposite signs. We'll now analyze these three cases. Henceforth, ∥u∥ will denote the length of the vector u.
Figure 10.4.4 shows some typical trajectories. In this case, lim_{t→−∞} ∥y(t)∥ = 0, so the trajectory is not only asymptotically parallel to L1 as t → −∞, but is actually asymptotically tangent to L1 at the origin. On the other hand, lim_{t→∞} ∥y(t)∥ = ∞ and

lim_{t→∞} ∥y(t) − c2 x2 e^{λ2 t}∥ = lim_{t→∞} ∥c1 x1 e^{λ1 t}∥ = ∞,

so, although the trajectory is asymptotically parallel to L2 as t → ∞, it is not asymptotically tangent to L2. The direction of motion along each trajectory is away from the origin.
Figure 10.4.5 shows some typical trajectories. In this case, lim_{t→∞} ∥y(t)∥ = 0, so the trajectory is asymptotically tangent to L2 at the origin as t → ∞. On the other hand, lim_{t→−∞} ∥y(t)∥ = ∞ and

lim_{t→−∞} ∥y(t) − c1 x1 e^{λ1 t}∥ = lim_{t→−∞} ∥c2 x2 e^{λ2 t}∥ = ∞,

so, although the trajectory is asymptotically parallel to L1 as t → −∞, it is not asymptotically tangent to it. The direction of motion along each trajectory is toward the origin.
so the trajectory is asymptotically tangent to L1 as t → −∞. The direction of motion is toward the origin on L1 and away from the origin on L2. The direction of motion along any other trajectory is away from L1, toward L2.
2. y' = (1/4)[−5  3; 3  −5]y

3. y' = (1/5)[−4  3; −2  −11]y

4. y' = [−1  −4; −1  −1]y

5. y' = [2  −4; −1  −1]y

6. y' = [4  −3; 2  −1]y

7. y' = [−6  −3; 1  −2]y

8. y' = [1  −1  −2; 1  −2  −3; −4  1  −1]y

9. y' = [−6  −4  −8; −4  0  −4; −8  −4  −6]y

10. y' = [3  5  8; 1  −1  −2; −1  −1  −1]y

11. y' = [1  −1  2; 12  −4  10; −6  1  −7]y

12. y' = [4  −1  −4; 4  −3  −2; 1  −1  −1]y

13. y' = [−2  2  −6; 2  6  2; −2  −2  2]y

14. y' = [3  2  −2; −2  7  −2; −10  10  −5]y

15. y' = [3  1  −1; 3  5  1; −6  2  4]y
Q10.4.2

17. y' = (1/6)[7  2; −2  2]y,   y(0) = [0; −3]

18. y' = [21  −12; 24  −15]y,   y(0) = [5; 3]

19. y' = [−7  4; −6  7]y,   y(0) = [−1; 7]

20. y' = (1/6)[1  2  0; 4  −1  0; 0  0  3]y,   y(0) = [4; 7; 1]

21. y' = (1/3)[2  −2  3; −4  4  3; 2  1  0]y,   y(0) = [1; 1; 5]

22. y' = [6  −3  −8; 2  1  −2; 3  −3  −5]y,   y(0) = [0; −1; −1]

23. y' = (1/3)[2  4  −7; 1  5  −5; −4  4  −1]y,   y(0) = [4; 1; 3]

24. y' = [3  0  1; 11  −2  7; 1  0  3]y,   y(0) = [2; 7; 6]

25. y' = [−2  −5  −1; −4  −1  1; 4  5  3]y,   y(0) = [8; −10; −4]

26. y' = [3  −1  0; 4  −2  0; 4  −4  2]y,   y(0) = [7; 10; 2]

27. y' = [−2  2  6; 2  6  2; −2  −2  2]y,   y(0) = [6; −10; 7]
Q10.4.3
28. Let A be an n × n constant matrix. Then Theorem 10.2.1 implies that the solutions of

y' = Ay   (A)

are all defined on (−∞, ∞).
Q10.4.4
In Exercises 10.4.29-10.4.34 describe and graph trajectories of the given system.
30. y' = [−4  3; −2  −11]y

31. y' = [9  −3; −1  11]y

32. y' = [−1  −10; −5  4]y

33. y' = [5  −4; 1  10]y

34. y' = [−7  1; 3  −5]y
Q10.4.5
35. Suppose the eigenvalues of the 2 × 2 matrix A are λ = 0 and μ ≠ 0, with corresponding eigenvectors x1 and x2. Let L1 be the line through the origin parallel to x1.

b. Show that the trajectories of nonconstant solutions of y' = Ay are half-lines parallel to x2 and on either side of L1, and that the direction of motion along these trajectories is away from L1 if μ > 0, or toward L1 if μ < 0.
Q10.4.6
The matrices of the systems in Exercises 10.4.36-10.4.41 are singular. Describe and graph the trajectories of nonconstant
solutions of the given systems.
36. y' = [−1  1; 1  −1]y

37. y' = [−1  −3; 2  6]y

38. y' = [1  −3; −1  3]y

39. y' = [1  −2; −1  2]y

40. y' = [−4  −4; 1  1]y

41. y' = [3  −1; −3  1]y
Q10.4.6
42. Let P = P(t) and Q = Q(t) be the populations of two species at time t, and assume that each population would grow exponentially if the other didn't exist; that is, in the absence of competition,

P' = aP   and   Q' = bQ,   (A)

where a and b are positive constants. One way to model the effect of competition is to assume that the growth rate per individual of each population is reduced by an amount proportional to the other population, so (A) is replaced by

P' = aP − αQ
Q' = −βP + bQ,

where α and β are positive constants. (Since negative population doesn't make sense, this system holds only while P and Q are both positive.) Now suppose P(0) = P0 > 0 and Q(0) = Q0 > 0.

what happens if Q0 = ρP0.

c. Confirm your experimental results and determine γ by expressing the eigenvalues and associated eigenvectors of

A = [a  −α; −β  b]

in terms of a, b, α, and β, and applying the geometric arguments developed at the end of this section.
y = c1 x1 e^{λ1 t} + c2 x2 e^{λ2 t} + ⋯ + cn xn e^{λn t}.
In this section we consider the case where A has n real eigenvalues, but does not have n linearly independent eigenvectors. It
is shown in linear algebra that this occurs if and only if A has at least one eigenvalue of multiplicity r > 1 such that the
associated eigenspace has dimension less than r. In this case A is said to be defective. Since it is beyond the scope of this book
to give a complete analysis of systems with defective coefficient matrices, we will restrict our attention to some commonly
occurring special cases.
Example 10.5.1
Show that the system

y' = [11  −25; 4  −9]y   (10.5.1)

does not have a fundamental set of solutions of the form {x1 e^{λ1 t}, x2 e^{λ2 t}}, where λ1 and λ2 are eigenvalues of the coefficient matrix A of Equation 10.5.1 and x1 and x2 are associated linearly independent eigenvectors.

Solution

The characteristic polynomial of A is

|11 − λ  −25; 4  −9 − λ| = (λ − 11)(λ + 9) + 100 = λ^2 − 2λ + 1 = (λ − 1)^2.
Hence, λ1 = 1 is the only eigenvalue of A. Eigenvectors of A satisfy (A − I)x = 0, with augmented matrix

[10  −25  ⋮  0; 4  −10  ⋮  0],

which is row equivalent to

[1  −5/2  ⋮  0; 0  0  ⋮  0].

Hence, x1 = 5x2/2 where x2 is arbitrary. Therefore all eigenvectors of A are scalar multiples of x1 = [5; 2], so A does not have two linearly independent eigenvectors.

From Example 10.5.1, we know that all scalar multiples of y1 = [5; 2]e^{t} are solutions of Equation 10.5.1; however, to find the general solution we need a second solution that is not a scalar multiple of y1. Recalling the procedure for solving a constant coefficient scalar equation in the case where the characteristic polynomial has a repeated root, you might expect to obtain a second solution of Equation 10.5.1 by multiplying the first solution by t. However, this yields y2 = [5; 2]te^{t}, which does not work (substituting it into Equation 10.5.1 shows that it is not a solution).
Theorem 10.5.1
Suppose the n × n matrix A has an eigenvalue λ1 of multiplicity ≥ 2 and the associated eigenspace has dimension 1; that is, all λ1-eigenvectors of A are scalar multiples of an eigenvector x. Then there are infinitely many vectors u such that

(A − λ1 I)u = x,   (10.5.2)

and, if u is any such vector, then

y1 = xe^{λ1 t}   and   y2 = ue^{λ1 t} + xte^{λ1 t}   (10.5.3)

are linearly independent solutions of y' = Ay.

A complete proof of this theorem is beyond the scope of this book. The difficulty is in proving that there's a vector u satisfying Equation 10.5.2, since det(A − λ1 I) = 0. We'll take this without proof and verify the other assertions of the theorem. We already know that y1 in Equation 10.5.3 is a solution of y' = Ay. To see that y2 is also a solution, we compute

y2' − Ay2 = λ1 ue^{λ1 t} + xe^{λ1 t} + λ1 xte^{λ1 t} − Aue^{λ1 t} − Axte^{λ1 t}
          = (λ1 u + x − Au)e^{λ1 t} + (λ1 x − Ax)te^{λ1 t}.

Since Ax = λ1 x and Au = λ1 u + x, the right side is zero; hence y2 is a solution of y' = Ay. To see that y1 and y2 are linearly independent, suppose c1 and c2 are constants such that c1 y1 + c2 y2 = 0. Multiplying by e^{−λ1 t} gives

c1 x + c2(u + tx) = 0.   (10.5.5)
By differentiating this with respect to t, we see that c2 x = 0, which implies c2 = 0, because x ≠ 0. Substituting c2 = 0 into Equation 10.5.5 yields c1 x = 0, which implies that c1 = 0, again because x ≠ 0.
Example 10.5.2
Use Theorem 10.5.1 to find the general solution of the system

y' = [11  −25; 4  −9]y   (10.5.6)

considered in Example 10.5.1.

Solution

From Example 10.5.1, λ1 = 1 is an eigenvalue of multiplicity 2 and all eigenvectors of A are scalar multiples of x = [5; 2], so

y1 = [5; 2]e^{t}

is a solution of Equation 10.5.6. From Theorem 10.5.1, a second solution is y2 = ue^{t} + xte^{t}, where (A − I)u = x. The augmented matrix of this system is

[10  −25  ⋮  5; 4  −10  ⋮  2],

which is row equivalent to

[1  −5/2  ⋮  1/2; 0  0  ⋮  0].

Therefore u1 = (5u2 + 1)/2 where u2 is arbitrary. Taking u2 = 0 yields

u = [1/2; 0].

Thus,

y2 = [1; 0](e^{t}/2) + [5; 2]te^{t}.

Since y1 and y2 are linearly independent by Theorem 10.5.1, they form a fundamental set of solutions of Equation 10.5.6, and the general solution of Equation 10.5.6 is

y = c1[5; 2]e^{t} + c2([1; 0](e^{t}/2) + [5; 2]te^{t}).

Note that choosing the arbitrary constant u2 to be nonzero is equivalent to adding a scalar multiple of y1 to the second solution y2 (Exercise 10.5.33).
Example 10.5.3
Find the general solution of

y' = [3  4  −10; 2  1  −2; 2  2  −5]y.   (10.5.7)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.5.7 is

|3 − λ  4  −10; 2  1 − λ  −2; 2  2  −5 − λ| = −(λ − 1)(λ + 1)^2.

Hence, the eigenvalues are λ1 = 1 with multiplicity 1 and λ2 = −1 with multiplicity 2. Eigenvectors associated with λ1 = 1 satisfy (A − I)x = 0. The augmented matrix of this system is

[2  4  −10  ⋮  0; 2  0  −2  ⋮  0; 2  2  −6  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  1  −2  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = x3 and x2 = 2x3. Taking x3 = 1 yields the eigenvector [1; 2; 1]. Therefore

y1 = [1; 2; 1]e^{t}
is a solution of Equation 10.5.7. Eigenvectors associated with λ 2 = −1 satisfy (A + I )x = 0 . The augmented matrix of
this system is
[4  4  −10  ⋮  0; 2  2  −2  ⋮  0; 2  2  −4  ⋮  0],

which is row equivalent to

[1  1  0  ⋮  0; 0  0  1  ⋮  0; 0  0  0  ⋮  0].

Hence, x3 = 0 and x1 = −x2, where x2 is arbitrary. Taking x2 = 1 yields the eigenvector x2 = [−1; 1; 0], so

y2 = [−1; 1; 0]e^{−t}

is a solution of Equation 10.5.7. Since all the eigenvectors of A associated with λ2 = −1 are multiples of x2, we must now use Theorem 10.5.1 to find a third solution of Equation 10.5.7 in the form

y3 = ue^{−t} + [−1; 1; 0]te^{−t},   (10.5.8)

where u satisfies (A + I)u = x2. The augmented matrix of this system is

[4  4  −10  ⋮  −1; 2  2  −2  ⋮  1; 2  2  −4  ⋮  0],

which is row equivalent to

[1  1  0  ⋮  1; 0  0  1  ⋮  1/2; 0  0  0  ⋮  0].

Therefore the components of u must satisfy u1 + u2 = 1 and u3 = 1/2. Taking u2 = 0 yields

u = [1; 0; 1/2],
and {y1, y2, y3} is a fundamental set of solutions of Equation 10.5.7. Therefore the general solution of Equation 10.5.7 is

y = c1[1; 2; 1]e^{t} + c2[−1; 1; 0]e^{−t} + c3([1; 0; 1/2]e^{−t} + [−1; 1; 0]te^{−t}).
Theorem 10.5.2
Suppose the n × n matrix A has an eigenvalue λ1 of multiplicity ≥ 3 and the associated eigenspace is one-dimensional; that is, all eigenvectors associated with λ1 are scalar multiples of the eigenvector x. Then there are infinitely many vectors u such that

(A − λ1 I)u = x,   (10.5.9)

and, if u is any such vector, there are infinitely many vectors v such that

(A − λ1 I)v = u.   (10.5.10)

If u satisfies Equation 10.5.9 and v satisfies Equation 10.5.10, then

y1 = xe^{λ1 t},
y2 = ue^{λ1 t} + xte^{λ1 t},   and
y3 = ve^{λ1 t} + ute^{λ1 t} + x(t^2 e^{λ1 t})/2

are linearly independent solutions of y' = Ay (Exercise 10.5.34).
Example 10.5.4
Use Theorem 10.5.2 to find the general solution of

y' = [1  1  1; 1  3  −1; 0  2  2]y.   (10.5.11)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.5.11 is

|1 − λ  1  1; 1  3 − λ  −1; 0  2  2 − λ| = −(λ − 2)^3.

Hence, λ1 = 2 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A − 2I)x = 0, with augmented matrix
[−1  1  1  ⋮  0; 1  1  −1  ⋮  0; 0  2  0  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  1  0  ⋮  0; 0  0  0  ⋮  0].
Hence, x1 = x3 and x2 = 0, so the eigenvectors are all scalar multiples of

x1 = [1; 0; 1].

Therefore

y1 = [1; 0; 1]e^{2t}

is a solution of Equation 10.5.11. We now find a second solution of Equation 10.5.11 in the form

y2 = ue^{2t} + [1; 0; 1]te^{2t},

where u satisfies (A − 2I)u = x1. The augmented matrix of this system is
[−1  1  1  ⋮  1; 1  1  −1  ⋮  0; 0  2  0  ⋮  1],

which is row equivalent to

[1  0  −1  ⋮  −1/2; 0  1  0  ⋮  1/2; 0  0  0  ⋮  0].
Therefore the components of u must satisfy u1 − u3 = −1/2 and u2 = 1/2. Taking u3 = 0 yields u = (1/2)[−1; 1; 0], and

y2 = (1/2)[−1; 1; 0]e^{2t} + [1; 0; 1]te^{2t}
is a solution of Equation 10.5.11. We now find a third solution of Equation 10.5.11 in the form

y3 = ve^{2t} + (1/2)[−1; 1; 0]te^{2t} + [1; 0; 1](t^2 e^{2t})/2,

where v satisfies (A − 2I)v = u. The augmented matrix of this system is

[−1  1  1  ⋮  −1/2; 1  1  −1  ⋮  1/2; 0  2  0  ⋮  0],
which is row equivalent to

[1  0  −1  ⋮  1/2; 0  1  0  ⋮  0; 0  0  0  ⋮  0].
Therefore the components of v must satisfy v1 − v3 = 1/2 and v2 = 0. Taking v3 = 0 yields v = (1/2)[1; 0; 0], so

y3 = (1/2)[1; 0; 0]e^{2t} + (1/2)[−1; 1; 0]te^{2t} + [1; 0; 1](t^2 e^{2t})/2
is a solution of Equation 10.5.11. Since y1, y2, and y3 are linearly independent by Theorem 10.5.2, they form a fundamental set of solutions of Equation 10.5.11. Therefore the general solution of Equation 10.5.11 is

y = c1[1; 0; 1]e^{2t} + c2((1/2)[−1; 1; 0]e^{2t} + [1; 0; 1]te^{2t}) + c3((1/2)[1; 0; 0]e^{2t} + (1/2)[−1; 1; 0]te^{2t} + [1; 0; 1](t^2 e^{2t})/2).
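A quick numerical way to detect this defective situation (a sketch using NumPy; the check is ours, not part of the example):

import numpy as np

# Coefficient matrix of Equation 10.5.11.
A = np.array([[1.0, 1.0,  1.0],
              [1.0, 3.0, -1.0],
              [0.0, 2.0,  2.0]])

# lambda = 2 has algebraic multiplicity 3, but A - 2I has rank 2,
# so the eigenspace is only one-dimensional and A is defective.
print(np.linalg.matrix_rank(A - 2 * np.eye(3)))   # prints 2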
that is, all eigenvectors of A associated with λ1 are linear combinations of two linearly independent eigenvectors x1 and x2. Then there are constants α and β (not both zero) such that if

x3 = αx1 + βx2,   (10.5.12)

then there are infinitely many vectors u such that

(A − λ1 I)u = x3,   (10.5.13)

and, if u is any such vector, then

y1 = x1 e^{λ1 t},
y2 = x2 e^{λ1 t},   and   (10.5.14)
y3 = ue^{λ1 t} + x3 te^{λ1 t}

are linearly independent solutions of y' = Ay.
Example 10.5.5
Use Theorem 10.5.3 to find the general solution of

y' = [0  0  1; −1  1  1; −1  0  2]y.   (10.5.15)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.5.15 is

|−λ  0  1; −1  1 − λ  1; −1  0  2 − λ| = −(λ − 1)^3.

Hence, λ1 = 1 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A − I)x = 0, with augmented matrix
[−1  0  1  ⋮  0; −1  0  1  ⋮  0; −1  0  1  ⋮  0],

which is row equivalent to

[1  0  −1  ⋮  0; 0  0  0  ⋮  0; 0  0  0  ⋮  0].
Hence, x1 = x3 and x2 is arbitrary, so the eigenvectors are of the form

x = [x3; x2; x3] = x3[1; 0; 1] + x2[0; 1; 0].

Therefore

x1 = [1; 0; 1]   and   x2 = [0; 1; 0]   (10.5.16)

are linearly independent eigenvectors, and

y1 = [1; 0; 1]e^{t}   and   y2 = [0; 1; 0]e^{t}
are linearly independent solutions of Equation 10.5.15. To find a third linearly independent solution of Equation 10.5.15,
we must find constants α and β (not both zero) such that the system
(A − I )u = α x1 + β x2 (10.5.17)
[−1  0  1  ⋮  α; −1  0  1  ⋮  β; −1  0  1  ⋮  α],

which is row equivalent to

[1  0  −1  ⋮  −α; 0  0  0  ⋮  β − α; 0  0  0  ⋮  0].   (10.5.18)
Therefore Equation 10.5.17 has a solution if and only if β = α, where α is arbitrary. If α = β = 1 then Equation 10.5.12 and Equation 10.5.16 yield

x3 = x1 + x2 = [1; 0; 1] + [0; 1; 0] = [1; 1; 1],

and the augmented matrix Equation 10.5.18 becomes

[1  0  −1  ⋮  −1; 0  0  0  ⋮  0; 0  0  0  ⋮  0].

Hence, u1 = −1 + u3, while u2 and u3 are arbitrary. Choosing u2 = u3 = 0 yields

u = [−1; 0; 0].
Therefore

y3 = ue^{t} + x3 te^{t} = [−1; 0; 0]e^{t} + [1; 1; 1]te^{t}

is a solution of Equation 10.5.15. Since y1, y2, and y3 are linearly independent by Theorem 10.5.3, they form a fundamental set of solutions for Equation 10.5.15. Therefore the general solution of Equation 10.5.15 is

y = c1[1; 0; 1]e^{t} + c2[0; 1; 0]e^{t} + c3([−1; 0; 0]e^{t} + [1; 1; 1]te^{t}).
under the assumptions of this section; that is, when the matrix

A = [a11  a12; a21  a22]

has a repeated eigenvalue λ1 and the associated eigenspace is one-dimensional. In this case we know from Theorem 10.5.1 that the general solution of y' = Ay is

y = c1 xe^{λ1 t} + c2(ue^{λ1 t} + xte^{λ1 t}),   (10.5.21)

where x is a λ1-eigenvector of A and u is any one of the infinitely many solutions of

(A − λ1 I)u = x.   (10.5.22)

We assume that λ1 ≠ 0.

Let L denote the line through the origin parallel to x. By a half-line of L we mean either of the rays obtained by removing the origin from L. Setting c2 = 0 in Equation 10.5.21 yields a solution whose trajectory is a half-line of L in the direction of x if c1 > 0, or a half-line of L in the direction of −x if c1 < 0. The origin is the trajectory of the trivial solution y ≡ 0.
Henceforth, we assume that c2 ≠ 0. In this case, the trajectory of Equation 10.5.21 can't intersect L, since every point of L is on a trajectory obtained by setting c2 = 0. Therefore the trajectory of Equation 10.5.21 must lie entirely in one of the open half-planes bounded by L, and does not contain any point on L. Since the initial point (y1(0), y2(0)) defined by y(0) = c1 x + c2 u is on the trajectory, we can determine which half-plane contains the trajectory from the sign of c2, as shown in Figure 10.5.1. For convenience we'll call the half-plane where c2 > 0 the positive half-plane. Similarly, the half-plane where c2 < 0 is the negative half-plane. You should convince yourself (Exercise 10.5.35) that even though there are infinitely many vectors u that satisfy Equation 10.5.22, they all define the same positive and negative half-planes. In the figures, simply regard u as an arrow pointing to the positive half-plane, since we haven't attempted to give u its proper length or direction in comparison with x. For our purposes here, only the relative orientation of x and u is important; that is, whether the positive half-plane is to the right of an observer facing the direction of x (as in Figures 10.5.2 and 10.5.5), or to the left of the observer (as in Figures 10.5.3 and 10.5.4).
Multiplying Equation 10.5.21 by e^{−λ1 t} yields

e^{−λ1 t} y(t) = c1 x + c2 u + c2 tx.

There are four possible patterns for the trajectories of Equation 10.5.20, depending upon the signs of c2 and λ1; Figures 10.5.2-10.5.5 illustrate them. If λ1 and c2 have the same sign, the direction of the trajectory approaches the direction of x as ∥y∥ → ∞. If λ1 and c2 have opposite signs, then the direction of the trajectory approaches the direction of x as ∥y∥ → 0.
2. y' = [0  −1; 1  −2]y

3. y' = [−7  4; −1  −11]y

4. y' = [3  1; −1  1]y

5. y' = [4  12; −3  −8]y

6. y' = [−10  9; −4  2]y

7. y' = [−13  16; −9  11]y

8. y' = [0  2  1; −4  6  1; 0  4  2]y

9. y' = (1/3)[1  1  −3; −4  −4  3; −2  1  0]y

10. y' = [−1  1  −1; −2  0  2; −1  3  −1]y

11. y' = [4  −2  −2; −2  3  −1; 2  −1  3]y

12. y' = [6  −5  3; 2  −1  3; 2  1  1]y
Q10.5.2
In Exercises 10.5.13-10.5.23 solve the initial value problem.
13. y' = [−11  8; −2  −3]y,   y(0) = [6; 2]

14. y' = [15  −9; 16  −9]y,   y(0) = [5; 8]

15. y' = [−3  −4; 1  −7]y,   y(0) = [2; 3]

16. y' = [−7  24; −6  17]y,   y(0) = [3; 1]

18. y' = [−1  1  0; 1  −1  −2; −1  −1  −1]y,   y(0) = [6; 5; −7]

19. y' = [−2  2  1; −2  2  1; −3  3  2]y,   y(0) = [−6; −2; 0]

20. y' = [−7  −4  4; −1  0  1; −9  −5  6]y,   y(0) = [−6; 9; −1]

21. y' = [−1  −4  −1; 3  6  1; −3  −2  3]y,   y(0) = [−2; 1; 3]

22. y' = [4  −8  −4; −3  −1  −3; 1  −1  9]y,   y(0) = [−4; 1; −3]

23. y' = [−5  −1  11; −7  1  13; −4  0  8]y,   y(0) = [0; 2; 2]
Q10.5.3
The coefficient matrices in Exercises 10.5.24-10.5.32 have eigenvalues of multiplicity 3. Find the general solution.
24. y' = [5  −1  1; −1  9  −3; −2  2  4]y

25. y' = [1  10  −12; 2  2  3; 2  −1  6]y

26. y' = [−6  −4  −4; 2  −1  1; 2  3  1]y

27. y' = [0  2  −2; −1  5  −3; 1  1  1]y

28. y' = [−2  −12  10; 2  −24  11; 2  −24  8]y

29. y' = [−1  −12  8; 1  −9  4; 1  −6  1]y

30. y' = [−4  0  −1; −1  −3  −1; 1  0  −2]y

32. y' = [−3  −1  0; 1  −1  0; −1  −1  −2]y
Q10.5.4
33. Under the assumptions of Theorem 10.5.1, suppose u and û are vectors such that

(A − λ1 I)u = x   and   (A − λ1 I)û = x,

and let

y2 = ue^{λ1 t} + xte^{λ1 t}   and   ŷ2 = ûe^{λ1 t} + xte^{λ1 t}.

Show that y2 − ŷ2 is a scalar multiple of y1 = xe^{λ1 t}.
34. Under the assumptions of Theorem 10.5.2, let

y1 = xe^{λ1 t},
y2 = ue^{λ1 t} + xte^{λ1 t},   and
y3 = ve^{λ1 t} + ute^{λ1 t} + x(t^2 e^{λ1 t})/2.

Complete the proof of Theorem 10.5.2 by showing that y3 is a solution of y' = Ay and that {y1, y2, y3} is linearly independent.

35. Suppose the matrix

A = [a11  a12; a21  a22]

has a repeated eigenvalue λ1 and the associated eigenspace is one-dimensional. Let x be a λ1-eigenvector of A. Show that if (A − λ1 I)u1 = x and (A − λ1 I)u2 = x, then u2 − u1 is parallel to x. Conclude from this that all vectors u such that (A − λ1 I)u = x define the same positive and negative half-planes with respect to the line L through the origin parallel to x.
Q10.5.5
In Exercises 10.5.36-10.5.45 plot trajectories of the given system.
36. y' = [−3  −1; 4  1]y

37. y' = [2  −1; 1  0]y

38. y' = [−1  −3; 3  5]y

39. y' = [−5  3; −3  1]y

40. y' = [−2  −3; 3  4]y

41. y' = [−4  −3; 3  2]y

42. y' = [0  −1; 1  −2]y

44. y' = [−2  1; −1  0]y

45. y' = [0  −4; 1  −4]y
that A has real entries, so the characteristic polynomial of A has real coefficients. This implies that λ̄ = α − iβ is also an eigenvalue of A.

An eigenvector x of A associated with λ = α + iβ will have complex entries, so we'll write

x = u + iv,

where u and v have real entries; that is, u and v are the real and imaginary parts of x. Since Ax = λx,

A(u + iv) = (α + iβ)(u + iv).   (10.6.1)

Taking complex conjugates here and recalling that A has real entries yields

A(u − iv) = (α − iβ)(u − iv),

which shows that x̄ = u − iv is an eigenvector associated with λ̄ = α − iβ. The complex conjugate eigenvalues λ and λ̄ can be separately associated with linearly independent solutions of y' = Ay; however, we will not pursue this approach, since solutions obtained in this way turn out to be complex-valued. Instead, we'll obtain solutions of y' = Ay in the form

y = f1 u + f2 v,   (10.6.2)

where f1 and f2 are real-valued scalar functions. The next theorem shows how to do this.
Theorem 10.6.1
Let A be an n × n matrix with real entries. Let λ = α + iβ (β ≠ 0) be a complex eigenvalue of A and let x = u + iv be an associated eigenvector, where u and v have real components. Then u and v are both nonzero and

y1 = e^{αt}(u cos βt − v sin βt)   and   y2 = e^{αt}(u sin βt + v cos βt)

are linearly independent solutions of y' = Ay.
Proof
A function of the form Equation 10.6.2 is a solution of y' = Ay if and only if

f1' u + f2' v = f1 Au + f2 Av.   (10.6.4)

Carrying out the multiplication indicated on the right side of Equation 10.6.1 and collecting the real and imaginary parts of the result yields

Au + iAv = (αu − βv) + i(αv + βu).

Equating real and imaginary parts on the two sides of this equation yields

Au = αu − βv
Av = αv + βu.

We leave it to you (Exercise 10.6.25) to show from this that u and v are both nonzero. Substituting from these equations into Equation 10.6.4 yields

f1' u + f2' v = f1(αu − βv) + f2(αv + βu)
             = (αf1 + βf2)u + (−βf1 + αf2)v.
This is true if

f1' = αf1 + βf2
f2' = −βf1 + αf2.

If we let f1 = g1 e^{αt} and f2 = g2 e^{αt}, where g1 and g2 are to be determined, then the last two equations become

g1' = βg2
g2' = −βg1,

so

g1'' + β^2 g1 = 0.

The general solution of this equation is g1 = c1 cos βt + c2 sin βt. Moreover, since g2 = g1'/β,

g2 = −c1 sin βt + c2 cos βt.

Multiplying g1 and g2 by e^{αt} shows that

f1 = e^{αt}(c1 cos βt + c2 sin βt),
f2 = e^{αt}(−c1 sin βt + c2 cos βt).
Substituting these into Equation 10.6.2 shows that

y = e^{αt}[(c1 cos βt + c2 sin βt)u + (−c1 sin βt + c2 cos βt)v]   (10.6.5)

is a solution of y' = Ay for any choice of the constants c1 and c2. In particular, by first taking c1 = 1 and c2 = 0 and then taking c1 = 0 and c2 = 1, we see that y1 and y2 are solutions of y' = Ay. We leave it to you to verify that they are, respectively, the real and imaginary parts of Equation 10.6.3 (Exercise 10.6.26), and that they are linearly independent (Exercise 10.6.27).
Example 10.6.1
Find the general solution of

y' = [4  −5; 5  −2]y.   (10.6.6)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.6.6 is

|4 − λ  −5; 5  −2 − λ| = (λ − 1)^2 + 16.

Hence, λ = 1 + 4i is an eigenvalue of A. The associated eigenvectors satisfy (A − (1 + 4i)I)x = 0. The augmented matrix of this system is

[3 − 4i  −5  ⋮  0; 5  −3 − 4i  ⋮  0],

which is row equivalent to

[1  (−3 − 4i)/5  ⋮  0; 0  0  ⋮  0].

Therefore x1 = ((3 + 4i)/5)x2. Taking x2 = 5 yields the eigenvector

x = [3 + 4i; 5].

The real and imaginary parts of

e^{t}(cos 4t + i sin 4t)[3 + 4i; 5]

are

y1 = e^{t}[3 cos 4t − 4 sin 4t; 5 cos 4t]   and   y2 = e^{t}[3 sin 4t + 4 cos 4t; 5 sin 4t],

which are linearly independent solutions of Equation 10.6.6. The general solution of Equation 10.6.6 is

y = c1 e^{t}[3 cos 4t − 4 sin 4t; 5 cos 4t] + c2 e^{t}[3 sin 4t + 4 cos 4t; 5 sin 4t].
Example 10.6.2
Find the general solution of

y' = [−14  39; −6  16]y.   (10.6.7)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.6.7 is

|−14 − λ  39; −6  16 − λ| = (λ − 1)^2 + 9.

Hence, λ = 1 + 3i is an eigenvalue of A. The associated eigenvectors satisfy (A − (1 + 3i)I)x = 0. The augmented matrix of this system is

[−15 − 3i  39  ⋮  0; −6  15 − 3i  ⋮  0],

which is row equivalent to

[1  (−5 + i)/2  ⋮  0; 0  0  ⋮  0].

Therefore x1 = ((5 − i)/2)x2. Taking x2 = 2 yields the eigenvector

x = [5 − i; 2].

The real and imaginary parts of

e^{t}(cos 3t + i sin 3t)[5 − i; 2]

are

y1 = e^{t}[sin 3t + 5 cos 3t; 2 cos 3t]   and   y2 = e^{t}[−cos 3t + 5 sin 3t; 2 sin 3t],

which are linearly independent solutions of Equation 10.6.7. The general solution of Equation 10.6.7 is

y = c1 e^{t}[sin 3t + 5 cos 3t; 2 cos 3t] + c2 e^{t}[−cos 3t + 5 sin 3t; 2 sin 3t].
Example 10.6.3
Find the general solution of

y' = [−5  5  4; −8  7  6; 1  0  0]y.   (10.6.8)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.6.8 is

|−5 − λ  5  4; −8  7 − λ  6; 1  0  −λ| = −(λ − 2)(λ^2 + 1).

Hence, the eigenvalues of A are λ1 = 2, λ2 = i, and λ3 = −i. Eigenvectors associated with λ1 = 2 satisfy (A − 2I)x = 0. The augmented matrix of this system is

[−7  5  4  ⋮  0; −8  5  6  ⋮  0; 1  0  −2  ⋮  0],

which is row equivalent to

[1  0  −2  ⋮  0; 0  1  −2  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = 2x3 and x2 = 2x3. Taking x3 = 1 yields the eigenvector [2; 2; 1], so

y1 = [2; 2; 1]e^{2t}

is a solution of Equation 10.6.8. Eigenvectors associated with λ2 = i satisfy (A − iI)x = 0. The augmented matrix of this system is

[−5 − i  5  4  ⋮  0; −8  7 − i  6  ⋮  0; 1  0  −i  ⋮  0],

which is row equivalent to

[1  0  −i  ⋮  0; 0  1  1 − i  ⋮  0; 0  0  0  ⋮  0].

Therefore x1 = ix3 and x2 = −(1 − i)x3. Taking x3 = 1 yields the eigenvector

x2 = [i; −1 + i; 1].

The real and imaginary parts of

(cos t + i sin t)[i; −1 + i; 1]

are

y2 = [−sin t; −cos t − sin t; cos t]   and   y3 = [cos t; cos t − sin t; sin t],

which are solutions of Equation 10.6.8, and {y1, y2, y3} is a fundamental set of solutions of Equation 10.6.8. The general solution of Equation 10.6.8 is

y = c1[2; 2; 1]e^{2t} + c2[−sin t; −cos t − sin t; cos t] + c3[cos t; cos t − sin t; sin t].
Example 10.6.4
Find the general solution of

y' = [1  −1  −2; 1  3  2; 1  −1  2]y.   (10.6.9)

Solution

The characteristic polynomial of the coefficient matrix A in Equation 10.6.9 is

|1 − λ  −1  −2; 1  3 − λ  2; 1  −1  2 − λ| = −(λ − 2)((λ − 2)^2 + 4).

Hence, the eigenvalues of A are λ1 = 2, λ2 = 2 + 2i, and λ3 = 2 − 2i. Eigenvectors associated with λ1 = 2 satisfy (A − 2I)x = 0. The augmented matrix of this system is

[−1  −1  −2  ⋮  0; 1  1  2  ⋮  0; 1  −1  0  ⋮  0],

which is row equivalent to

[1  0  1  ⋮  0; 0  1  1  ⋮  0; 0  0  0  ⋮  0].

Hence, x1 = −x3 and x2 = −x3. Taking x3 = 1 yields the eigenvector [−1; −1; 1], so

y1 = [−1; −1; 1]e^{2t}

is a solution of Equation 10.6.9. Eigenvectors associated with λ2 = 2 + 2i satisfy (A − (2 + 2i)I)x = 0. The augmented matrix of this system is

[−1 − 2i  −1  −2  ⋮  0; 1  1 − 2i  2  ⋮  0; 1  −1  −2i  ⋮  0],

which is row equivalent to

[1  0  −i  ⋮  0; 0  1  i  ⋮  0; 0  0  0  ⋮  0].

Therefore x1 = ix3 and x2 = −ix3. Taking x3 = 1 yields the eigenvector

x2 = [i; −i; 1].

The real and imaginary parts of

e^{2t}(cos 2t + i sin 2t)[i; −i; 1]

are

y2 = e^{2t}[−sin 2t; sin 2t; cos 2t]   and   y3 = e^{2t}[cos 2t; −cos 2t; sin 2t],

which are solutions of Equation 10.6.9, and {y1, y2, y3} is a fundamental set of solutions of Equation 10.6.9. The general solution of Equation 10.6.9 is

y = c1[−1; −1; 1]e^{2t} + c2 e^{2t}[−sin 2t; sin 2t; cos 2t] + c3 e^{2t}[cos 2t; −cos 2t; sin 2t].
under the assumptions of this section; that is, when the matrix

A = [a11  a12; a21  a22]

has a complex eigenvalue λ = α + iβ (β ≠ 0) and x = u + iv is an associated eigenvector, where u and v have real components. To describe the trajectories accurately it is necessary to introduce a new rectangular coordinate system in the y1-y2 plane. This raises a point that hasn't come up before: It is always possible to choose x so that (u, v) = 0. A special effort is required to do this, since not every eigenvector has this property. However, if we know an eigenvector that doesn't, we can multiply it by a suitable complex constant to obtain one that does. To see this, note that if x is a λ-eigenvector of A and k is an arbitrary real number, then

x1 = (1 + ik)x = (u − kv) + i(v + ku)

is also a λ-eigenvector of A. The real and imaginary parts of x1 are

u1 = u − kv   and   v1 = v + ku,   (10.6.11)

so

(u1, v1) = (u − kv, v + ku) = −[(u, v)k^2 + (∥v∥^2 − ∥u∥^2)k − (u, v)].

Therefore (u1, v1) = 0 if

(u, v)k^2 + (∥v∥^2 − ∥u∥^2)k − (u, v) = 0.   (10.6.12)

If (u, v) ≠ 0 we can use the quadratic formula to find two real values of k such that (u1, v1) = 0 (Exercise 10.6.28).
Example 10.6.5
In Example 10.6.1 we found the eigenvector

x = [3 + 4i; 5] = [3; 5] + i[4; 0]

for the matrix of the system Equation 10.6.6. Here u = [3; 5] and v = [4; 0] are not orthogonal, since (u, v) = 12. Since ∥v∥^2 − ∥u∥^2 = −18, Equation 10.6.12 is equivalent to

2k^2 − 3k − 2 = 0.

The zeros of this equation are k1 = 2 and k2 = −1/2. Letting k = 2 in Equation 10.6.11 yields

u1 = u − 2v = [−5; 5]   and   v1 = v + 2u = [10; 10],

and (u1, v1) = 0. Letting k = −1/2 in Equation 10.6.11 yields

u1 = u + v/2 = [5; 5]   and   v1 = v − u/2 = [5/2; −5/2],

and again (u1, v1) = 0.
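Such computations are easy to automate. The following Python sketch (the variable names are ours) finds the values of k in Equation 10.6.12 for the data of this example and checks orthogonality:

import numpy as np

# u and v from Example 10.6.5 (the eigenvector of Equation 10.6.6).
u = np.array([3.0, 5.0])
v = np.array([4.0, 0.0])

# Coefficients of Equation 10.6.12: (u,v)k^2 + (||v||^2 - ||u||^2)k - (u,v) = 0.
a = u @ v
b = v @ v - u @ u
k1, k2 = np.roots([a, b, -a])
print(k1, k2)                      # 2.0 and -0.5, in some order

u1, v1 = u - k1 * v, v + k1 * u
print(u1 @ v1)                     # approximately 0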
(The numbers don't always work out as nicely as in this example. You'll need a calculator or computer to do Exercises 10.6.29-10.6.40.) Henceforth, we'll assume that (u, v) = 0. Let U and V be unit vectors in the directions of u and v, respectively; that is, U = u/∥u∥ and V = v/∥v∥. The new rectangular coordinate system will have the same origin as the y1-y2 system. The coordinates of a point in this system will be denoted by (z1, z2), where z1 and z2 are the displacements in the directions of U and V, respectively. From Equation 10.6.5, the solutions of Equation 10.6.10 are given by

y = e^{αt}[(c1 cos βt + c2 sin βt)u + (−c1 sin βt + c2 cos βt)v].   (10.6.13)

Therefore

e^{−αt} y(t) = z1(t)U + z2(t)V,

where

z1(t) = ∥u∥(c1 cos βt + c2 sin βt)
z2(t) = ∥v∥(−c1 sin βt + c2 cos βt).

Therefore

(z1(t))^2/∥u∥^2 + (z2(t))^2/∥v∥^2 = c1^2 + c2^2

(verify!), which means that the shadow trajectories of Equation 10.6.10 are ellipses centered at the origin, with axes of symmetry parallel to U and V. Since

z1' = (β∥u∥/∥v∥) z2   and   z2' = −(β∥v∥/∥u∥) z1,

the vector from the origin to a point on the shadow ellipse rotates in the same direction that V would have to be rotated by π/2 radians to bring it into coincidence with U (Figures 10.6.1 and 10.6.2).

If α > 0, then lim_{t→−∞} ∥y(t)∥ = 0 and lim_{t→∞} ∥y(t)∥ = ∞, so the trajectory spirals away from the origin as t varies from −∞ to ∞. The direction of the spiral depends upon the relative orientation of U and V, as shown in Figures 10.6.3 and 10.6.4. If α < 0, then lim_{t→−∞} ∥y(t)∥ = ∞ and lim_{t→∞} ∥y(t)∥ = 0, so the trajectory spirals toward the origin as t varies from −∞ to ∞. Again, the direction of the spiral depends upon the relative orientation of U and V, as shown in Figures 10.6.5 and 10.6.6.
2. y' = [−11  4; −26  9]y

3. y' = [1  2; −4  5]y

4. y' = [5  −6; 3  −1]y

5. y' = [3  −3  1; 0  2  2; 5  1  1]y

6. y' = [−3  3  1; 1  −5  −3; −3  7  3]y

7. y' = [2  1  −1; 0  1  1; 1  0  1]y

8. y' = [−3  1  −3; 4  −1  2; 4  −2  3]y

9. y' = [5  −4; 10  1]y

10. y' = (1/3)[7  −5; 2  5]y

11. y' = [3  2; −5  1]y

12. y' = [34  52; −20  −30]y

13. y' = [1  1  2; 1  0  −1; −1  −2  −1]y

14. y' = [3  −4  −2; −5  7  −8; −10  13  −8]y

15. y' = [6  0  −3; −3  3  3; 1  −2  6]y

16. y' = [1  2  −2; 0  2  −1; 1  0  0]y
18. y' = [7  15; −3  1]y,   y(0) = [5; 1]

19. y' = [7  −15; 3  −5]y,   y(0) = [17; 7]

20. y' = (1/6)[4  −2; 5  2]y,   y(0) = [1; −1]

21. y' = [5  2  −1; −3  2  2; 1  3  2]y,   y(0) = [4; 0; 6]

22. y' = [4  4  0; 8  10  −20; 2  3  −2]y,   y(0) = [8; 6; 5]

23. y' = [1  15  −15; −6  18  −22; −3  11  −15]y,   y(0) = [15; 17; 10]

24. y' = [4  −4  4; −10  3  15; 2  −3  1]y,   y(0) = [16; 14; 6]
Q10.6.3
25. Suppose an n × n matrix A with real entries has a complex eigenvalue λ = α + iβ (β ≠ 0) with associated eigenvector x = u + iv, where u and v have real components. Show that u and v are both nonzero.

27. Show that if the vectors u and v are not both 0 and β ≠ 0 then the vector functions

y1 = e^{αt}(u cos βt − v sin βt)   and   y2 = e^{αt}(u sin βt + v cos βt)

are linearly independent on every interval.

U1 = V2 and V1 = −U2, and that therefore the counterclockwise angles from U1 to V1 and from U2 to V2 are both
Q10.6.4
In Exercises 10.6.29-10.6.32 find vectors U and V parallel to the axes of symmetry of the trajectories, and plot some typical
trajectories.
29. y' = [3  −5; 5  −3]y

30. y' = [−15  10; −25  15]y

31. y' = [−4  8; −4  4]y

32. y' = [−3  −15; 3  3]y
Q10.6.5
In Exercises 10.6.33-10.6.40 find vectors U and V parallel to the axes of symmetry of the shadow trajectories, and plot a
typical trajectory.
33. y' = [−5  6; −12  7]y

34. y' = [5  −12; 6  −7]y

35. y' = [4  −5; 9  −2]y

36. y' = [−4  9; −5  2]y

37. y' = [−1  10; −10  −1]y

38. y' = [−1  −5; 20  −1]y

39. y' = [−7  10; −10  9]y

40. y' = [−7  6; −12  5]y
11.1: GEOMETRY
11.2: ELEMENTARY OPERATIONS
We have taken an in depth look at graphical representations of systems of equations, as well as
how to find possible solutions graphically. Our attention now turns to working with systems
algebraically.
11.4: EXERCISES
11.1: Geometry
Learning Objectives
1. Relate the types of solution sets of a system of two (three) variables to the intersections of lines in a plane (the
intersection of planes in three space)
As you may remember, linear equations like 2x + 3y = 6 can be graphed as straight lines in the coordinate plane. We say that
this equation is in two variables, in this case x and y . Suppose you have two such equations, each of which can be graphed as a
straight line, and consider the resulting graph of two lines. What would it mean if there exists a point of intersection between
the two lines? This point, which lies on both graphs, gives x and y values for which both equations are true. In other words,
this point gives the ordered pair (x, y) that satisfies both equations. If the point (x, y) is a point of intersection, we say that
(x, y) is a solution to the two equations. In linear algebra, we often are concerned with finding the solution(s) to a system of
equations, if such solutions exist. First, we consider graphical representations of solutions and later we will consider the
algebraic methods for finding solutions.
When looking for the intersection of two lines in a graph, several situations may arise. The following picture demonstrates the
possible situations when considering two equations (two lines in the graph) involving two variables.
In the first diagram, there is a unique point of intersection, which means that there is only one (unique) solution to the two
equations. In the second, there are no points of intersection and no solution. When no solution exists, this means that the two
lines are parallel and they never intersect. The third situation which can occur, as demonstrated in diagram three, is that the
two lines are really the same line. For example, x + y = 1 and 2x + 2y = 2 are equations which when graphed yield the same
line. In this case there are infinitely many points which are solutions of these two equations, as every ordered pair which is on
the graph of the line satisfies both equations. When considering linear systems of equations, there are always three possibilities: exactly one (unique) solution, infinitely many solutions, or no solution.
Example

Use a graph to find the solution to the following system of equations: x + y = 3 and y − x = 5.

Solution
Through graphing the above equations and identifying the point of intersection, we can find the solution(s). Remember
that we must have either one solution, infinitely many, or no solutions at all. The following graph shows the two
equations, as well as the intersection. Remember, the point of intersection represents the solution of the two equations, or
the (x, y) which satisfy both equations. In this case, there is one point of intersection at (−1, 4) which means we have one
unique solution, x = −1, y = 4 .
In the above example, we investigated the intersection point of two equations in two variables, x and y . Now we will consider
the graphical solutions of three equations in two variables.
Consider a system of three equations in two variables. Again, these equations can be graphed as straight lines in the plane, so
that the resulting graph contains three straight lines. Recall the three possible types of solutions; no solution, one solution, and
infinitely many solutions. There are now more complex ways of achieving these situations, due to the presence of the third
line. For example, you can imagine the case of three intersecting lines having no common point of intersection. Perhaps you
can also imagine three intersecting lines which do intersect at a single point. These two situations are illustrated below.
Consider the first picture above. While all three lines intersect with one another, there is no common point of intersection
where all three lines meet at one point. Hence, there is no solution to the three equations. Remember, a solution is a point
(x, y) which satisfies all three equations. In the case of the second picture, the lines intersect at a common point. This means that this point of intersection is a solution to all three equations; the system has exactly one (unique) solution.
Exercise 11.1.1
Graphically, find the point (x 1, y1 ) which lies on both lines, x + 3y = 1 and 4x − y = 3 . That is, graph each line and see
where they intersect.
Exercise 11.1.2
Graphically, find the point of intersection of the two lines, 3x + y = 3 and x + 2y = 1 . That is, graph each line and see
where they intersect.
We have taken an in-depth look at graphical representations of systems of equations, as well as how to find possible solutions graphically. Our attention now turns to working with systems algebraically.
A system of linear equations is a list of equations of the form
$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$
$\vdots$
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$
where $a_{ij}$ and $b_j$ are real numbers. The above is a system of $m$ equations in the $n$ variables $x_1, x_2, \cdots, x_n$. Written more simply in terms of summation notation, the above can be written in the form
$\sum_{j=1}^{n} a_{ij}x_j = b_i, \quad i = 1, 2, 3, \cdots, m$  (11.2.2)
The relative size of $m$ and $n$ is not important here. Notice that we have allowed $a_{ij}$ and $b_j$ to be any real number. We can also call these numbers scalars. We will use this term throughout the text, so keep in mind that the term scalar just means that we are working with real numbers.
Now, suppose we have a system where $b_i = 0$ for all $i$. In other words, every equation equals 0. This is a special type of system.
Recall from the previous section that our goal when working with systems of linear equations was to find the point of intersection of the equations when graphed. In other words, we looked for the solutions to the system. We now wish to find these solutions algebraically. We want to find values for $x_1, \cdots, x_n$ which solve all of the equations. If such a set of values exists, we call it a solution to the system.

Definition: System of Linear Equations

A system of linear equations is a list of equations of the form
$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$
$\vdots$
$a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m$
where $a_{ij}$ and $b_j$ are real numbers. The above is a system of $m$ equations in the $n$ variables $x_1, x_2, \cdots, x_n$. Written more simply in terms of summation notation, the above can be written in the form
$\sum_{j=1}^{n} a_{ij}x_j = b_i, \quad i = 1, 2, 3, \cdots, m$  (11.2.5)
A system of linear equations is called consistent if there exists at least one solution. It is called inconsistent if there is no
solution.
If you think of each equation as a condition which must be satisfied by the variables, consistent would mean there is some
choice of variables which can satisfy all the conditions. Inconsistent would mean there is no choice of the variables which can
satisfy all of the conditions.
The following sections provide methods for determining if a system is consistent or inconsistent, and finding solutions if they
exist.
Elementary Operations
We begin this section with an example. Recall from Example [graphicalsoln] that the solution to the given system was
(x, y) = (−1, 4).
Solution
By graphing these two equations and identifying the point of intersection, we previously found that (x, y) = (−1, 4) is
the unique solution.
We can verify algebraically by substituting these values into the original equations, and ensuring that the equations hold.
First, we substitute the values into the first equation and check that it equals 3.
x + y = (−1) + (4) = 3 (11.2.7)
This equals 3 as needed, so we see that (−1, 4) is a solution to the first equation. Substituting the values into the second
equation yields
y − x = (4) − (−1) = 4 + 1 = 5 (11.2.8)
which is true. For (x, y) = (−1, 4) each equation is true and therefore, this is a solution to the system.
Now, the interesting question is this: If you were not given these numbers to verify, how could you algebraically determine the
solution? Linear algebra gives us the tools needed to answer this question. The following basic operations are important tools
It is important to note that none of these operations will change the set of solutions of the system of equations. In fact,
elementary operations are the key tool we use in linear algebra to find solutions to systems of equations.
Consider the following example.
x +y = 7
(11.2.9)
2x − y = 8
Solution
Notice that the second system has been obtained by taking the second equation of the first system and adding -2 times the
first equation, as follows:
By simplifying, we obtain
−3y = −6 (11.2.12)
which is the second equation in the second system. Now, from here we can solve for y and see that y =2 . Next, we
substitute this value into the first equation as follows
x +y = x +2 = 7 (11.2.13)
Hence x = 5 and so (x, y) = (5, 2) is a solution to the second system. We want to check if (5, 2) is also a solution to the
first system. We check this by substituting (x, y) = (5, 2) into the system and ensuring the equations are true.
x + y = (5) + (2) = 7
(11.2.14)
2x − y = 2 (5) − (2) = 8
This example illustrates how an elementary operation applied to a system of two equations in two variables does not affect the
solution set. However, a linear system may involve many equations and many variables and there is no reason to limit our
study to small systems. For any size of system in any number of variables, the solution set is still the collection of solutions to
the equations. In every case, the above operations of Definition [def:elementaryoperations] do not change the set of solutions
to the system of linear equations.
In the following theorem, we use the notation $E_i$ to represent an equation, while $b_i$ denotes a constant. Suppose we have a system of two equations
$E_1 = b_1, \quad E_2 = b_2$  [system]
Then the following systems have the same solution set as [system]:
1. $E_2 = b_2, \; E_1 = b_1$  (11.2.16)
2. $E_1 = b_1, \; kE_2 = kb_2$ for any scalar $k \neq 0$  (11.2.17)
3. $E_1 = b_1, \; E_2 + kE_1 = b_2 + kb_1$ for any scalar $k$  (11.2.18)
Recall the elementary operations that we used to modify the system in the solution to the example. First, we added (−2)
times the first equation to the second equation. In terms of Theorem [thm:elementaryoperationsandsolns], this action is
given by
E2 + (−2) E1 = b2 + (−2) b1 (11.2.20)
or
2x − y + (−2) (x + y) = 8 + (−2) 7 (11.2.21)
From this point, we were able to find the solution to the system. Theorem [thm:elementaryoperationsandsolns] tells us
that the solution we found is in fact a solution to the original system.
We will now prove Theorem [thm:elementaryoperationsandsolns].
Proof
The proof that the systems [system] and [thm1.9.1] have the same solution set is as follows. Suppose that $(x_1, \cdots, x_n)$ is a solution to $E_1 = b_1, E_2 = b_2$. We want to show that this is a solution to the system in [thm1.9.1] above. This is clear, because the system in [thm1.9.1] is the original system, but listed in a different order. Changing the order does not affect the solution set, so $(x_1, \cdots, x_n)$ is a solution to [thm1.9.1].

Next we want to prove that the systems [system] and [thm1.9.2] have the same solution set. That is, $E_1 = b_1, E_2 = b_2$ has the same solution set as the system $E_1 = b_1, kE_2 = kb_2$ provided $k \neq 0$. Let $(x_1, \cdots, x_n)$ be a solution of $E_1 = b_1, E_2 = b_2$. The only difference between these two systems is that the second involves multiplying the equation $E_2 = b_2$ by the scalar $k$. Recall that when you multiply both sides of an equation by the same number, the sides are still equal to each other. Hence if $(x_1, \cdots, x_n)$ is a solution to $E_2 = b_2$, then it will also be a solution to $kE_2 = kb_2$. Hence, $(x_1, \cdots, x_n)$ is also a solution to [thm1.9.2].

Similarly, let $(x_1, \cdots, x_n)$ be a solution of $E_1 = b_1, kE_2 = kb_2$. Then we can multiply the equation $kE_2 = kb_2$ by the scalar $1/k$, which is possible only because we have required that $k \neq 0$. Just as above, this action preserves equality, so $(x_1, \cdots, x_n)$ is also a solution to [system].

Finally, we will prove that the systems [system] and [thm1.9.3] have the same solution set. We will show that any solution of $E_1 = b_1, E_2 = b_2$ is also a solution of [thm1.9.3]. Then, we will show that any solution of [thm1.9.3] is also a solution of $E_1 = b_1, E_2 = b_2$. Suppose $(x_1, \cdots, x_n)$ is a solution of $E_1 = b_1, E_2 = b_2$. Then in particular it solves $E_1 = b_1$. Hence, it solves the first equation in [thm1.9.3]. Similarly, it also solves $E_2 = b_2$. By our proof of [thm1.9.2], it also solves $kE_1 = kb_1$. Notice that if we add $E_2$ and $kE_1$, this is equal to $b_2 + kb_1$. Therefore, if $(x_1, \cdots, x_n)$ solves $E_1 = b_1, E_2 = b_2$ it must also solve $E_2 + kE_1 = b_2 + kb_1$.

Now suppose $(x_1, \cdots, x_n)$ solves the system [thm1.9.3]. Then in particular it is a solution of $E_1 = b_1$. Again by our proof of [thm1.9.2], it is also a solution to $kE_1 = kb_1$. Now if we subtract these equal quantities from both sides of $E_2 + kE_1 = b_2 + kb_1$ we obtain $E_2 = b_2$, which shows that the solution also satisfies $E_1 = b_1, E_2 = b_2$.
Stated simply, the above theorem shows that the elementary operations do not change the solution set of a system of equations.
We will now look at an example of a system of three equations and three variables. Similarly to the previous examples, the goal is to find values for $x, y, z$ such that each of the given equations is satisfied when these values are substituted in.

Example

Solve the following system of equations.
$x + 3y + 6z = 25$
$2x + 7y + 14z = 58$  (11.2.23)
$2y + 5z = 19$
Solution
We can relate this system to Theorem [thm:elementaryoperationsandsolns] above. In this case, we have
E1 = x + 3y + 6z, b1 = 25
E2 = 2x + 7y + 14z, b2 = 58 (11.2.24)
E3 = 2y + 5z, b3 = 19
Theorem [thm:elementaryoperationsandsolns] claims that if we do elementary operations on this system, we will not
change the solution set. Therefore, we can solve this system using the elementary operations given in Definition
[def:elementaryoperations]. First, replace the second equation by (−2) times the first equation added to the second. This
yields the system
x + 3y + 6z = 25
y + 2z = 8 (11.2.25)
2y + 5z = 19
Now, replace the third equation with (−2) times the second added to the third. This yields the system
x + 3y + 6z = 25
y + 2z = 8 (11.2.26)
z =3
At this point, we can easily find the solution. Simply take z =3 and substitute this back into the previous equation to
solve for y , and similarly to solve for x.
x + 3y + 6 (3) = x + 3y + 18 = 25
y + 2 (3) = y + 6 = 8 (11.2.27)
z =3
You can see from this equation that y = 2 . Therefore, we can substitute this value into the first equation as follows:
x + 3 (2) + 18 = 25 (11.2.29)
By simplifying this equation, we find that x = 1 . Hence, the solution to this system is (x, y, z) = (1, 2, 3). This process is
called back substitution.
Alternatively, in [solvingasystem3] you could have continued as follows. Add (−2) times the third equation to the second
and then add (−6) times the second to the first. This yields
x + 3y = 7
y =2 (11.2.30)
z =3
Now add (−3) times the second to the first. This yields
x =1
y =2 (11.2.31)
z =3
a system which has the same solution set as the original system. This avoided back substitution and led to the same
solution set. It is your decision which you prefer to use, as both methods lead to the correct solution, (x, y, z) = (1, 2, 3).
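The arithmetic above is easy to cross-check numerically. The short Python sketch below (an illustration of my own, not part of the original text; the array names are assumptions) solves the same system with NumPy and verifies the solution $(x, y, z) = (1, 2, 3)$ by substitution.

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#    x + 3y +  6z = 25
#   2x + 7y + 14z = 58
#        2y +  5z = 19
A = np.array([[1.0, 3.0, 6.0],
              [2.0, 7.0, 14.0],
              [0.0, 2.0, 5.0]])
b = np.array([25.0, 58.0, 19.0])

print(np.linalg.solve(A, b))                     # expected: [1. 2. 3.]

# Substituting (1, 2, 3) back into the equations reproduces the right-hand side
assert np.allclose(A @ np.array([1.0, 2.0, 3.0]), b)
```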
x + 3y + 6z = 25
2x + 7y + 14z = 58 (11.3.1)
2y + 5z = 19
⎡ 1 3 6 25 ⎤
⎢ 2 7 14 58 ⎥ (11.3.2)
⎢ ⎥
⎣ 0 2 5 19 ⎦
Notice that it has exactly the same information as the original system. Here it is understood that the first column contains the coefficients from $x$ in each equation, in order, $\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$. Similarly, we create a column from the coefficients on $y$ in each equation, $\begin{bmatrix} 3 \\ 7 \\ 2 \end{bmatrix}$, and a column from the coefficients on $z$ in each equation, $\begin{bmatrix} 6 \\ 14 \\ 5 \end{bmatrix}$. For a system of more than three variables, we would continue in this way, constructing a column for each variable. Similarly, for a system of fewer than three variables, we simply construct a column for each variable.

Finally, we construct a column from the constants of the equations, $\begin{bmatrix} 25 \\ 58 \\ 19 \end{bmatrix}$.
The rows of the augmented matrix correspond to the equations in the system. For example, the top row in the augmented
matrix, [ 1 3 6 | 25 ] corresponds to the equation
x + 3y + 6z = 25. (11.3.3)
For a linear system of the form
$a_{11}x_1 + \cdots + a_{1n}x_n = b_1$
$\vdots$  (11.3.4)
$a_{m1}x_1 + \cdots + a_{mn}x_n = b_m$
where the $x_i$ are variables and the $a_{ij}$ and $b_i$ are constants, the augmented matrix of this system is given by
$\begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{bmatrix}$  (11.3.5)
Recall how we solved Example [exa:solvingasystemwithelementaryops]. We can do the exact same steps as above, except now
in the context of an augmented matrix and using row operations. The augmented matrix of this system is
⎡ 1 3 6 25 ⎤
⎢ 2 7 14 58 ⎥ (11.3.6)
⎢ ⎥
⎣ 0 2 5 19 ⎦
Thus the first step in solving the system given by [solvingasystem1] would be to take (−2) times the first row of the
augmented matrix and add it to the second row,
⎡ 1 3 6 25 ⎤
⎢ 0 1 2 8 ⎥ (11.3.7)
⎢ ⎥
⎣ 0 2 5 19 ⎦
Note how this corresponds to [solvingasystem2]. Next take (−2) times the second row and add to the third,
⎡ 1 3 6 25 ⎤
⎢ 0 1 2 8 ⎥ (11.3.8)
⎢ ⎥
⎣ 0 0 1 3 ⎦
x + 3y + 6z = 25
y + 2z = 8 (11.3.9)
z =3
which is the same as [solvingasystem3]. By back substitution you obtain the solution x = 1, y = 2, and z = 3.
Through a systematic procedure of row operations, we can simplify an augmented matrix and carry it to row-echelon form or
reduced row-echelon form, which we define next. These forms are used to find the solutions of the system of equations
corresponding to the augmented matrix.
In the following definitions, the term leading entry refers to the first nonzero entry of a row when scanning the row from left
to right.
Notice that the first three conditions on a reduced row-echelon form matrix are the same as those for row-echelon form.
Hence, every reduced row-echelon form matrix is also in row-echelon form. The converse is not necessarily true; we cannot
assume that every matrix in row-echelon form is also in reduced row-echelon form. However, it often happens that the row-
echelon form is sufficient to provide information about the solution of a system.
The following examples describe matrices in these various forms. As an exercise, take the time to carefully verify that they are
in the specified form.
The following matrices are not in row-echelon form (and therefore not in reduced row-echelon form).
$\begin{bmatrix} 0&0&0&0 \\ 1&2&3&3 \\ 0&1&0&2 \\ 0&0&0&1 \\ 0&0&0&0 \end{bmatrix}, \quad \begin{bmatrix} 1&2&3 \\ 2&4&-6 \\ 4&0&7 \end{bmatrix}, \quad \begin{bmatrix} 0&2&3&3 \\ 1&5&0&2 \\ 7&5&0&1 \\ 0&0&1&0 \end{bmatrix}$  (11.3.10)

The following matrices are in row-echelon form, but not in reduced row-echelon form.
$\begin{bmatrix} 1&0&6&5&8&2 \\ 0&0&1&2&7&3 \\ 0&0&0&0&1&1 \\ 0&0&0&0&0&0 \end{bmatrix}, \quad \begin{bmatrix} 1&3&5&4 \\ 0&1&0&7 \\ 0&0&1&0 \\ 0&0&0&1 \\ 0&0&0&0 \end{bmatrix}, \quad \begin{bmatrix} 1&0&6&0 \\ 0&1&4&0 \\ 0&0&1&0 \\ 0&0&0&0 \end{bmatrix}$  (11.3.11)

Notice that we could apply further row operations to these matrices to carry them to reduced row-echelon form. Take the time to try that on your own. Consider the following matrices, which are in reduced row-echelon form.
$\begin{bmatrix} 1&0&0&5&0&0 \\ 0&0&1&2&0&0 \\ 0&0&0&0&1&1 \\ 0&0&0&0&0&0 \end{bmatrix}, \quad \begin{bmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \\ 0&0&0&0 \end{bmatrix}, \quad \begin{bmatrix} 1&0&0&4 \\ 0&1&0&3 \\ 0&0&1&2 \end{bmatrix}$  (11.3.12)
$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 3 & 2 & 1 & 6 \\ 4 & 4 & 4 & 10 \end{bmatrix}$  (11.3.13)
Where are the pivot positions and pivot columns of the augmented matrix A ?
Solution
The row-echelon form of this matrix is
$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & \frac{3}{2} \\ 0 & 0 & 0 & 0 \end{bmatrix}$  (11.3.14)
This is all we need in this example, but note that this matrix is not in reduced row-echelon form.
In order to identify the pivot positions in the original matrix, we look for the leading entries in the row-echelon form of
the matrix. Here, the entry in the first row and first column, as well as the entry in the second row and second column are
the leading entries. Hence, these locations are the pivot positions. We identify the pivot positions in the original matrix, as
in the following:
$\begin{bmatrix} \boxed{1} & 2 & 3 & 4 \\ 3 & \boxed{2} & 1 & 6 \\ 4 & 4 & 4 & 10 \end{bmatrix}$  (11.3.15)
Thus the pivot columns in the matrix are the first two columns.
The following is an algorithm for carrying a matrix to row-echelon form and reduced row-echelon form. You may wish to use
this algorithm to carry the above matrix to row-echelon form or reduced row-echelon form yourself for practice.
Most often we will apply this algorithm to an augmented matrix in order to find the solution to a system of linear equations.
However, we can use this algorithm to compute the reduced row-echelon form of any matrix which could be useful in other
applications.
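As an illustration of what such an algorithm produces, the short Python sketch below uses the SymPy library (my own choice of tool here, not something the text relies on) to compute the reduced row-echelon form and the pivot columns of the matrix from the pivot-position example above.

```python
from sympy import Matrix

# Matrix from the pivot-position example
A = Matrix([[1, 2, 3, 4],
            [3, 2, 1, 6],
            [4, 4, 4, 10]])

# rref() returns the reduced row-echelon form together with the pivot column indices
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, -1, 1], [0, 1, 2, 3/2], [0, 0, 0, 0]])
print(pivots)  # (0, 1) -> the first two columns are the pivot columns, as found by hand
```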
Consider the following example of Algorithm [algo:rrefalgorithm].
$A = \begin{bmatrix} 0 & -5 & -4 \\ 1 & 4 & 3 \\ 5 & 10 & 7 \end{bmatrix}$  (11.3.16)
Find the row-echelon form of A . Then complete the process until A is in reduced row-echelon form.
Solution
In working through this example, we will use the steps outlined in Algorithm [algo:rrefalgorithm].
1. The first pivot column is the first column of the matrix, as this is the first nonzero column from the left. Hence the
first pivot position is the one in the first row and first column. Switch the first two rows to obtain a nonzero entry in
the first pivot position, outlined in a box below.
$\begin{bmatrix} \boxed{1} & 4 & 3 \\ 0 & -5 & -4 \\ 5 & 10 & 7 \end{bmatrix}$  (11.3.17)

2. Step two involves creating zeros in the entries below the first pivot position. The first entry of the second row is already a zero. All we need to do is subtract 5 times the first row from the third row. The resulting matrix is
$\begin{bmatrix} 1 & 4 & 3 \\ 0 & -5 & -4 \\ 0 & -10 & -8 \end{bmatrix}$  (11.3.18)

3. Now ignore the top row. Apply steps 1 and 2 to the smaller matrix
$\begin{bmatrix} -5 & -4 \\ -10 & -8 \end{bmatrix}$  (11.3.19)
In this matrix, the first column is a pivot column, and $-5$ is in the first pivot position. Therefore, we need to create a zero below it. To do this, add $-2$ times the first row (of this matrix) to the second. The resulting matrix is
$\begin{bmatrix} -5 & -4 \\ 0 & 0 \end{bmatrix}$  (11.3.20)

4. Putting these rows back into the original matrix, we have
$\begin{bmatrix} 1 & 4 & 3 \\ 0 & -5 & -4 \\ 0 & 0 & 0 \end{bmatrix}$  (11.3.21)
which is in row-echelon form.

5. To continue to reduced row-echelon form, divide the second row by $-5$ so that its leading entry equals 1,
$\begin{bmatrix} 1 & 4 & 3 \\ 0 & 1 & \frac{4}{5} \\ 0 & 0 & 0 \end{bmatrix}$
and then add $-4$ times the second row to the first row. This yields
$\begin{bmatrix} 1 & 0 & -\frac{1}{5} \\ 0 & 1 & \frac{4}{5} \\ 0 & 0 & 0 \end{bmatrix}$  (11.3.23)
which is in reduced row-echelon form.
Find the solution to the following system of equations.
$2x + 4y - 3z = -1$
$5x + 10y - 7z = -2$  (11.3.24)
$3x + 6y + 5z = 9$
Solution
The augmented matrix for this system is
⎡ 2 4 −3 −1 ⎤
⎢ 5 10 −7 −2 ⎥ (11.3.25)
⎢ ⎥
⎣ 3 6 5 9 ⎦
In order to find the solution to this system, we wish to carry the augmented matrix to row-echelon form. We will do so using Algorithm [algo:rrefalgorithm]. Notice that the first column is nonzero, so this is our first pivot column. The first entry in the first row, 2, is the first leading entry and it is in the first pivot position. We will use row operations to create zeros in the entries below the 2. First, replace the second row with $-5$ times the first row plus 2 times the second row. This yields
$\begin{bmatrix} 2 & 4 & -3 & -1 \\ 0 & 0 & 1 & 1 \\ 3 & 6 & 5 & 9 \end{bmatrix}$  (11.3.26)
Now, replace the third row with $-3$ times the first row plus 2 times the third row. This yields
$\begin{bmatrix} 2 & 4 & -3 & -1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 19 & 21 \end{bmatrix}$  (11.3.27)
Finally, replace the third row with $-19$ times the second row plus the third row. This yields
$\begin{bmatrix} 2 & 4 & -3 & -1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 2 \end{bmatrix}$  (11.3.28)
We could proceed with the algorithm to carry this matrix to row-echelon form or reduced row-echelon form. However, remember that we are looking for the solutions to the system of equations. Take another look at the third row of the matrix. Notice that it corresponds to the equation
$0x + 0y + 0z = 2$  (11.3.29)
There is no solution to this equation because for all $x, y, z$, the left side will equal 0 and $0 \neq 2$. This shows there is no solution to the given system of equations. In other words, this system is inconsistent.
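One way to see the inconsistency by machine (a sketch of my own using SymPy; the text itself does not use software) is to row reduce the augmented matrix: the constant column turns out to be a pivot column.

```python
from sympy import Matrix

# Augmented matrix of the inconsistent system above
aug = Matrix([[2, 4, -3, -1],
              [5, 10, -7, -2],
              [3, 6,  5,  9]])

R, pivots = aug.rref()
print(R)       # Matrix([[1, 2, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
print(pivots)  # (0, 2, 3) -- the constants column is a pivot column, i.e. some row
               # is equivalent to 0x + 0y + 0z = 1, so no solution exists
```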
The following is another example of how to find the solution to a system of equations by carrying the corresponding
augmented matrix to reduced row-echelon form.
3x − y − 5z = 9
y − 10z = 0 (11.3.30)
−2x + y = −6
Solution
The augmented matrix of this system is
⎡ 3 −1 −5 9 ⎤
⎢ 0 1 −10 0 ⎥ (11.3.31)
⎢ ⎥
⎣ −2 1 0 −6 ⎦
In order to find the solution to this system, we will carry the augmented matrix to reduced row-echelon form, using
Algorithm [algo:rrefalgorithm]. The first column is the first pivot column. We want to use row operations to create
zeros beneath the first entry in this column, which is in the first pivot position. Replace the third row with 2 times the
first row added to 3 times the third row. This gives
⎡ 3 −1 −5 9 ⎤
⎢ 0 1 −10 0 ⎥ (11.3.32)
⎢ ⎥
⎣ 0 1 −10 0 ⎦
Now, we have created zeros beneath the 3 in the first column, so we move on to the second pivot column (which is
the second column) and repeat the procedure. Take −1 times the second row and add to the third row.
⎡ 3 −1 −5 9 ⎤
⎢ 0 1 −10 0 ⎥ (11.3.33)
⎢ ⎥
⎣ 0 0 0 0 ⎦
The entry below the pivot position in the second column is now a zero. Notice that we have no more pivot columns
because we have only two leading entries.
At this stage, we also want the leading entries to be equal to one. To do so, divide the first row by 3.
$\begin{bmatrix} 1 & -\frac{1}{3} & -\frac{5}{3} & 3 \\ 0 & 1 & -10 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$  (11.3.34)
Finally, add $\frac{1}{3}$ times the second row to the first row. This yields
$\begin{bmatrix} 1 & 0 & -5 & 3 \\ 0 & 1 & -10 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$  (11.3.35)
This is in reduced row-echelon form, which you should verify using Definition [def:rref]. The equations
corresponding to this reduced row-echelon form are
x − 5z = 3
(11.3.36)
y − 10z = 0
or
x = 3 + 5z
(11.3.37)
y = 10z
Observe that z is not restrained by any equation. In fact, z can equal any number. For example, we can let z = t ,
where we can choose t to be any number. In this context t is called a parameter . Therefore, the solution set of this
system is
x = 3 + 5t
y = 10t (11.3.38)
z =t
where t is arbitrary. The system has an infinite set of solutions which are given by these equations. For any value of
t we select, x, y, and z will be given by the above equations. For example, if we choose t = 4 then the solution is
x = 3 + 5(4) = 23
y = 10(4) = 40 (11.3.39)
z =4
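A computer algebra system reports the same one-parameter family. The sketch below uses SymPy's linsolve (an illustrative choice of my own); it expresses the free variable z itself as the parameter, playing the role of t above.

```python
from sympy import Matrix, linsolve, symbols

x, y, z = symbols('x y z')

# Augmented matrix of the system  3x - y - 5z = 9,  y - 10z = 0,  -2x + y = -6
aug = Matrix([[ 3, -1,  -5,  9],
              [ 0,  1, -10,  0],
              [-2,  1,   0, -6]])

print(linsolve(aug, x, y, z))
# {(5*z + 3, 10*z, z)}  -- i.e. x = 3 + 5t, y = 10t, z = t with t = z free
```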
In Example [exa:infinitesetofsoln] the solution involved one parameter. It may happen that the solution to a system
involves more than one parameter, as shown in the following example.
Find the solution to the following system of equations.
$x + 2y - z + w = 3$
$x + y - z + w = 1$  (11.3.40)
$x + 3y - z + w = 5$

Solution

The augmented matrix is
$\begin{bmatrix} 1 & 2 & -1 & 1 & 3 \\ 1 & 1 & -1 & 1 & 1 \\ 1 & 3 & -1 & 1 & 5 \end{bmatrix}$  (11.3.41)
We wish to carry this matrix to row-echelon form. Here, we will outline the row operations used. However, make
sure that you understand the steps in terms of Algorithm [algo:rrefalgorithm].
Take −1 times the first row and add to the second. Then take −1 times the first row and add to the third. This yields
⎡ 1 2 −1 1 3 ⎤
⎢ 0 −1 0 0 −2 ⎥ (11.3.42)
⎢ ⎥
⎣ 0 1 0 0 2 ⎦
Now add the second row to the third row and divide the second row by −1.
⎡ 1 2 −1 1 3 ⎤
⎢ 0 1 0 0 2 ⎥ (11.3.43)
⎢ ⎥
⎣ 0 0 0 0 0 ⎦
This matrix is in row-echelon form and we can see that x and y correspond to pivot columns, while z and w do not.
Therefore, we will assign parameters to the variables z and w. Assign the parameter s to z and the parameter t to w.
Then the first row yields the equation x + 2y − s + t = 3 , while the second row yields the equation y = 2 . Since
y = 2 , the first equation becomes x + 4 − s + t = 3 showing that the solution is given by
x = −1 + s − t
y =2
(11.3.44)
z =s
w =t
This solution can also be written in vector form as
$\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = \begin{bmatrix} -1 + s - t \\ 2 \\ s \\ t \end{bmatrix}$  (11.3.45)
This example shows a system of equations with an infinite solution set which depends on two parameters. It can be less
confusing in the case of an infinite solution set to first place the augmented matrix in reduced row-echelon form rather
than just row-echelon form before seeking to write down the description of the solution.
In the above steps, this means we don’t stop with the row-echelon form in equation [twoparameters1]. Instead we first
place it in reduced row-echelon form as follows.
⎡ 1 0 −1 1 −1 ⎤
⎢ 0 1 0 0 2 ⎥ (11.3.46)
⎢ ⎥
⎣ 0 0 0 0 0 ⎦
Then the solution is y = 2 from the second row and x = −1 + z − w from the first. Thus letting z =s and w = t, the
solution is given by [twoparameters2].
You can see here that there are two paths to the correct answer, which both yield the same answer. Hence, either approach
may be used. The process which we first used in the above solution is called Gaussian Elimination. This process
involves carrying the matrix to row-echelon form, converting back to equations, and using back substitution to find the
solution. When you do row operations until you obtain reduced row-echelon form, the process is called Gauss-Jordan
Elimination.
No Solution: In the case where the system of equations has no solution, the row-echelon form of the augmented matrix will contain a row of the form
$\begin{bmatrix} 0 & 0 & 0 & | & 1 \end{bmatrix}$  (11.3.47)
This row indicates that the system is inconsistent and has no solution.

One Solution: In the case where the system of equations has one solution, every column of the coefficient matrix is a
pivot column. The following is an example of an augmented matrix in row-echelon form for a system of equations
with one solution.
⎡ 1 0 0 5 ⎤
⎢ 0 1 0 0 ⎥ (11.3.48)
⎢ ⎥
⎣ 0 0 1 2 ⎦
Infinitely Many Solutions: In the case where the system of equations has infinitely many solutions, the solution
contains parameters. There will be columns of the coefficient matrix which are not pivot columns. The following are
examples of augmented matrices in reduced row-echelon form for systems of equations with infinitely many
solutions.
⎡ 1 0 0 5 ⎤
⎢ 0 1 2 −3 ⎥ (11.3.49)
⎢ ⎥
⎣ 0 0 0 0 ⎦
or
1 0 0 5
[ ] (11.3.50)
0 1 0 −3
Exercise 11.4.1
Graphically, find the point (x1, y1 ) which lies on both lines, x + 3y = 1 and 4x − y = 3. That is, graph each line and see
where they intersect.
Answer
$x + 3y = 1, \; 4x - y = 3$. Solution is: $\left[x = \frac{10}{13}, \; y = \frac{1}{13}\right]$.
Exercise 11.4.2
Graphically, find the point of intersection of the two lines 3x + y = 3 and x + 2y = 1. That is, graph each line and see
where they intersect.
Answer
3x + y = 3
, Solution is: [x = 1, y = 0]
x + 2y = 1
Exercise 11.4.3
You have a system of k equations in two variables, k ≥ 2 . Explain the geometric significance of
1. No solution.
2. A unique solution.
3. An infinite number of solutions.
Exercise 11.4.4
Consider the following augmented matrix in which ∗ denotes an arbitrary number and ■ denotes a nonzero number.
Determine whether the given augmented matrix is consistent. If consistent, is the solution unique?
⎡ ■ ∗ ∗ ∗ ∗ ∗ ⎤
⎢ 0 ■ ∗ ∗ 0 ∗ ⎥
⎢ ⎥ (11.4.1)
⎢ ⎥
⎢ 0 0 ■ ∗ ∗ ∗ ⎥
⎣ ⎦
0 0 0 0 ■ ∗
Answer
The solution exists but is not unique.
Exercise 11.4.5
Consider the following augmented matrix in which ∗ denotes an arbitrary number and ■ denotes a nonzero number.
Determine whether the given augmented matrix is consistent. If consistent, is the solution unique?
⎡ ■ ∗ ∗ ∗ ⎤
⎢ 0 ■ ∗ ∗ ⎥ (11.4.2)
⎢ ⎥
⎣ 0 0 ■ ∗ ⎦
Exercise 11.4.6
Consider the following augmented matrix in which ∗ denotes an arbitrary number and ■ denotes a nonzero number.
Determine whether the given augmented matrix is consistent. If consistent, is the solution unique?
⎡ ■ ∗ ∗ ∗ ∗ ∗ ⎤
⎢ 0 ■ 0 ∗ 0 ∗ ⎥
⎢ ⎥ (11.4.3)
⎢ ⎥
⎢ 0 0 0 ■ ∗ ∗ ⎥
⎣ 0 0 0 0 ■ ∗ ⎦
Exercise 11.4.7
Consider the following augmented matrix in which ∗ denotes an arbitrary number and ■ denotes a nonzero number.
Determine whether the given augmented matrix is consistent. If consistent, is the solution unique?
⎡ ■ ∗ ∗ ∗ ∗ ∗ ⎤
⎢ 0 ■ ∗ ∗ 0 ∗ ⎥
⎢ ⎥ (11.4.4)
⎢ ⎥
⎢ 0 0 0 0 ■ 0 ⎥
⎣ 0 0 0 0 ∗ ■ ⎦
Answer
There might be a solution. If so, there are infinitely many.
Exercise 11.4.8
Suppose a system of equations has fewer equations than variables. Will such a system necessarily be consistent? If so,
explain why and if not, give an example which is not consistent.
Answer
No. Consider x + y + z = 2 and x + y + z = 1.
Exercise 11.4.9
If a system of equations has more equations than variables, can it have a solution? If so, give an example and if not, tell
why not.
Answer
These can have a solution. For example, x + y = 1, 2x + 2y = 2, 3x + 3y = 3 even has an infinite set of solutions.
Exercise 11.4.10
Find h such that
2 h 4
[ ] (11.4.5)
3 6 7
Exercise 11.4.11
Find h such that
$\begin{bmatrix} 1 & h & 3 \\ 2 & 4 & 6 \end{bmatrix}$  (11.4.6)
is the augmented matrix of a consistent system.
Answer
Any h will work.
Exercise 11.4.12
Find h such that
$\begin{bmatrix} 1 & 1 & 4 \\ 3 & h & 12 \end{bmatrix}$  (11.4.7)
is the augmented matrix of a consistent system.
Answer
Any h will work.
Exercise 11.4.13
Choose h and k such that the augmented matrix shown has each of the following:
1. one solution
2. no solution
3. infinitely many solutions
1 h 2
[ ] (11.4.8)
2 4 k
Answer
If h ≠ 2 there will be a unique solution for any k. If h = 2 and k ≠ 4, there are no solutions. If h = 2 and k = 4, then there are infinitely many solutions.
Exercise 11.4.14
Choose h and k such that the augmented matrix shown has each of the following:
1. one solution
2. no solution
3. infinitely many solutions
1 2 2
[ ] (11.4.9)
2 h k
Exercise 11.4.15
Determine if the system is consistent. If so, is the solution unique?
x + 2y + z − w = 2
x −y +z+w = 1
(11.4.10)
2x + y − z = 1
4x + 2y + z = 5
Answer
There is no solution. The system is inconsistent. You can see this from the augmented matrix.
$\begin{bmatrix} 1 & 2 & 1 & -1 & 2 \\ 1 & -1 & 1 & 1 & 1 \\ 2 & 1 & -1 & 0 & 1 \\ 4 & 2 & 1 & 0 & 5 \end{bmatrix}$, whose reduced row-echelon form is $\begin{bmatrix} 1 & 0 & 0 & \frac{1}{3} & 0 \\ 0 & 1 & 0 & -\frac{2}{3} & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$.
Exercise 11.4.16
Determine if the system is consistent. If so, is the solution unique?
x + 2y + z − w = 2
x −y +z+w = 0
(11.4.11)
2x + y − z = 1
4x + 2y + z = 3
Answer
Solution is: $\left[w = \frac{3}{2}y - 1, \; x = \frac{2}{3} - \frac{1}{2}y, \; z = \frac{1}{3}\right]$
Exercise 11.4.17
Determine which matrices are in reduced row-echelon form.
1. $\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 7 \end{bmatrix}$
2. $\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
3. $\begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 5 \\ 0 & 0 & 1 & 2 & 0 & 4 \\ 0 & 0 & 0 & 0 & 1 & 3 \end{bmatrix}$
Answer
1. This one is not.
2. This one is.
3. This one is.
⎢1 0 2 1⎥ (11.4.12)
⎣ ⎦
1 −1 1 −2
Exercise 11.4.19
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
0 0 −1 −1
⎡ ⎤
⎢1 1 1 0⎥ (11.4.13)
⎣ ⎦
1 1 0 −1
Exercise 11.4.20
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
3 −6 −7 −8
⎡ ⎤
⎢1 −2 −2 −2 ⎥ (11.4.14)
⎣ ⎦
1 −2 −3 −4
Exercise 11.4.21
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
2 4 5 15
⎡ ⎤
⎢1 2 3 9⎥ (11.4.15)
⎣ ⎦
1 2 2 6
Exercise 11.4.22
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
4 −1 7 10
⎡ ⎤
⎢1 0 3 3⎥ (11.4.16)
⎣ ⎦
1 −1 −2 1
Exercise 11.4.23
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
3 5 −4 2
⎡ ⎤
⎢1 2 −1 1⎥ (11.4.17)
⎣ ⎦
1 1 −2 0
Exercise 11.4.24
Row reduce the following matrix to obtain the row-echelon form. Then continue to obtain the reduced row-echelon form.
⎢ 1 −2 5 −5 ⎥ (11.4.18)
⎣ ⎦
1 −3 7 −8
Exercise 11.4.25
Find the solution of the system whose augmented matrix is
⎡ 1 2 0 2 ⎤
⎢ 1 3 4 2 ⎥ (11.4.19)
⎢ ⎥
⎣ 1 0 2 1 ⎦
Exercise 11.4.26
Find the solution of the system whose augmented matrix is
⎡ 1 2 0 2 ⎤
⎢ 2 0 1 1 ⎥ (11.4.20)
⎢ ⎥
⎣ 3 2 1 3 ⎦
Answer
The reduced row-echelon form is $\begin{bmatrix} 1 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & -\frac{1}{4} & \frac{3}{4} \\ 0 & 0 & 0 & 0 \end{bmatrix}$ and so the solution is of the form $z = t, \; y = \frac{3}{4} + \frac{1}{4}t, \; x = \frac{1}{2} - \frac{1}{2}t$ where $t \in \mathbb{R}$.
Exercise 11.4.27
Find the solution of the system whose augmented matrix is
1 1 0 1
[ ] (11.4.21)
1 0 4 2
Answer
The reduced row-echelon form is $\begin{bmatrix} 1 & 0 & 4 & 2 \\ 0 & 1 & -4 & -1 \end{bmatrix}$ and so the solution is $z = t, \; y = 4t - 1, \; x = 2 - 4t$.
Exercise 11.4.28
Find the solution of the system whose augmented matrix is
⎡ 1 0 2 1 1 2 ⎤
⎢ 0 1 0 1 2 1 ⎥
⎢ ⎥ (11.4.22)
⎢ ⎥
⎢ 1 2 0 0 1 3 ⎥
⎣ 1 0 1 0 2 2 ⎦
Answer
The reduced row-echelon form is $\begin{bmatrix} 1 & 0 & 0 & 0 & 9 & 3 \\ 0 & 1 & 0 & 0 & -4 & 0 \\ 0 & 0 & 1 & 0 & -7 & -1 \\ 0 & 0 & 0 & 1 & 6 & 1 \end{bmatrix}$ and so, letting $x_5 = t$, the solution is $x_1 = 3 - 9t, \; x_2 = 4t, \; x_3 = -1 + 7t, \; x_4 = 1 - 6t, \; x_5 = t$.
Exercise 11.4.29
Find the solution of the system whose augmented matrix is
⎡ 1 0 2 1 1 2 ⎤
⎢ 0 1 0 1 2 1 ⎥
⎢ ⎥ (11.4.23)
⎢ ⎥
⎢ 0 2 0 0 1 3 ⎥
⎣ 1 −1 2 2 2 0 ⎦
Answer
The reduced row-echelon form is $\begin{bmatrix} 1 & 0 & 2 & 0 & -\frac{1}{2} & \frac{5}{2} \\ 0 & 1 & 0 & 0 & \frac{1}{2} & \frac{3}{2} \\ 0 & 0 & 0 & 1 & \frac{3}{2} & -\frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$. Therefore, let $x_3 = s$ and $x_5 = t$. Then $x_4 = -\frac{1}{2} - \frac{3}{2}t$, $x_2 = \frac{3}{2} - \frac{1}{2}t$, and $x_1 = \frac{5}{2} + \frac{1}{2}t - 2s$.
Exercise 11.4.30
Find the solution to the system of equations, 7x + 14y + 15z = 22, 2x + 4y + 3z = 5, and 3x + 6y + 10z = 13.
Answer
Solution is: [x = 1 − 2t, z = 1, y = t]
Exercise 11.4.31
Find the solution to the system of equations, 3x − y + 4z = 6, y + 8z = 0, and −2x + y = −4.
Answer
Solution is: [x = 2 − 4t, y = −8t, z = t]
Exercise 11.4.32
Find the solution to the system of equations, 9x − 2y + 4z = −17, 13x − 3y + 6z = −25, and −2x − z = 3.
Answer
Solution is: [x = −1, y = 2, z = −1]
Exercise 11.4.33
Find the solution to the system of equations, 65x + 84y + 16z = 546, 81x + 105y + 20z = 682, and
84x + 110y + 21z = 713.
Exercise 11.4.34
Find the solution to the system of equations, 8x + 2y + 3z = −3, 8x + 3y + 3z = −1, and 4x + y + 3z = −9.
Answer
Solution is: [x = 1, y = 2, z = −5]
Exercise 11.4.35
Find the solution to the system of equations, −8x + 2y + 5z = 18, −8x + 3y + 5z = 13, and −4x + y + 5z = 19.
Answer
Solution is: [x = −1, y = −5, z = 4]
Exercise 11.4.36
Find the solution to the system of equations, 3x − y − 2z = 3, y − 4z = 0, and −2x + y = −2.
Answer
Solution is: [x = 2t + 1, y = 4t, z = t]
Exercise 11.4.37
Find the solution to the system of equations, −9x + 15y = 66, −11x + 18y = 79 , −x + y = 4 , and z = 3 .
Answer
Solution is: [x = 1, y = 5, z = 3]
Exercise 11.4.38
Find the solution to the system of equations, −19x + 8y = −108, −71x + 30y = −404, −2x + y = −12,
4x + z = 14.
Answer
Solution is: [x = 4, y = −4, z = −2]
Exercise 11.4.39
Suppose a system of equations has fewer equations than variables and you have found a solution to this system of
equations. Is it possible that your solution is the only one? Explain.
Answer
No. Consider x + y + z = 2 and x + y + z = 1.
Exercise 11.4.40
Answer
No. This would lead to 0 = 1.
Exercise 11.4.41
Suppose the coefficient matrix of a system of n equations with n variables has the property that every column is a pivot
column. Does it follow that the system of equations must have a solution? If so, must the solution be unique? Explain.
Answer
Yes. It has a unique solution.
Exercise 11.4.42
Suppose there is a unique solution to a system of linear equations. What must be true of the pivot columns in the
augmented matrix?
Answer
The last column must not be a pivot column. The remaining columns must each be pivot columns.
Exercise 11.4.43
The steady state temperature, u, of a plate solves Laplace’s equation, Δu = 0. One way to approximate the solution is to
divide the plate into a square mesh and require the temperature at each node to equal the average of the temperature at the
four adjacent nodes. In the following picture, the numbers represent the observed temperature at the indicated nodes. Find
the temperature at the interior nodes, indicated by x, y, z, and w. One of the equations is
$z = \frac{1}{4}(10 + 0 + w + x)$.

(Figure: a square mesh with interior nodes labeled x, y, z, and w; the boundary temperatures are 10 along the bottom, 20 along the left, 30 along the top, and 0 along the right.)

Answer

You need
$\frac{1}{4}(20 + 30 + w + x) - y = 0$
$\frac{1}{4}(y + 30 + 0 + z) - w = 0$
$\frac{1}{4}(20 + y + z + 10) - x = 0$
$\frac{1}{4}(x + w + 0 + 10) - z = 0$
Solution is: $[w = 15, \; x = 15, \; y = 20, \; z = 10]$.
Exercise 11.4.44
Find the rank of the following matrix.
4 −16 −1 −5
⎡ ⎤
⎢1 −4 0 −1 ⎥ (11.4.24)
⎣ ⎦
1 −4 −1 −2
⎢1 2 2 5⎥ (11.4.25)
⎣ ⎦
1 2 1 2
Exercise 11.4.46
Find the rank of the following matrix.
0 0 −1 0 3
⎡ ⎤
⎢ 1 4 1 0 −8 ⎥
⎢ ⎥ (11.4.26)
⎢ 1 4 0 1 2⎥
⎣ ⎦
−1 −4 0 −1 −2
Exercise 11.4.47
Find the rank of the following matrix.
4 −4 3 −9
⎡ ⎤
⎢1 −1 1 −2 ⎥ (11.4.27)
⎣ ⎦
1 −1 0 −3
Exercise 11.4.48
Find the rank of the following matrix.
2 0 1 0 1
⎡ ⎤
⎢1 0 1 0 0⎥
⎢ ⎥ (11.4.28)
⎢1 0 0 1 7⎥
⎣ ⎦
1 0 0 1 7
Exercise 11.4.49
Find the rank of the following matrix.
4 15 29
⎡ ⎤
⎢1 4 8⎥
⎢ ⎥ (11.4.29)
⎢1 3 5⎥
⎣ ⎦
3 9 15
Exercise 11.4.50
Find the rank of the following matrix.
0 0 −1 0 1
⎡ ⎤
⎢ 1 2 3 −2 −18 ⎥
⎢ ⎥ (11.4.30)
⎢ 1 2 2 −1 −11 ⎥
⎣ ⎦
−1 −2 −2 1 11
⎢1 −2 0 4 15 ⎥
⎢ ⎥ (11.4.31)
⎢1 −2 0 3 11 ⎥
⎣ ⎦
0 0 0 0 0
Exercise 11.4.52
Find the rank of the following matrix.
−2 −3 −2
⎡ ⎤
⎢ 1 1 1⎥
⎢ ⎥ (11.4.32)
⎢ 1 0 1⎥
⎣ ⎦
−3 0 −3
Exercise 11.4.53
Find the rank of the following matrix.
4 4 20 −1 17
⎡ ⎤
⎢1 1 5 0 5⎥
⎢ ⎥ (11.4.33)
⎢1 1 5 −1 2⎥
⎣ ⎦
3 3 15 −3 6
Exercise 11.4.54
Find the rank of the following matrix.
−1 3 4 −3 8
⎡ ⎤
⎢ 1 −3 −4 2 −5 ⎥
⎢ ⎥ (11.4.34)
⎢ 1 −3 −4 1 −2 ⎥
⎣ ⎦
−2 6 8 −2 4
Exercise 11.4.55
Suppose A is an m × n matrix. Explain why the rank of A is always no larger than min (m, n) .
Answer
It is because you cannot have more than min (m, n) nonzero rows in the reduced row-echelon form. Recall that the
number of pivot columns is the same as the number of nonzero rows from the description of this reduced row-echelon
form.
Exercise 11.4.56
State whether each of the following sets of data are possible for the matrix equation AX = B . If possible, describe the
solution set. That is, tell whether there exists a unique solution, no solution or infinitely many solutions. Here, [A|B]
denotes the augmented matrix.
1. A is a 5 × 6 matrix, rank (A) = 4 and rank [A|B] = 4.
2. A is a 3 × 4 matrix, rank (A) = 3 and rank [A|B] = 2.
Answer
This says B is in the span of four of the columns. Thus the columns are not independent. Infinite solution set.
This surely can’t happen. If you add in another column, the rank does not get smaller.
This says B is in the span of the columns and the columns must be independent. You can’t have the rank equal 4 if you
only have two columns.
This says B is not in the span of the columns. In this case, there is no solution to the system of equations represented
by the augmented matrix.
In this case, there is a unique solution since the columns of A are independent.
Exercise 11.4.57
Consider the system −5x + 2y − z = 0 and −5x − 2y − z = 0. Both equations equal zero and so
−5x + 2y − z = −5x − 2y − z which is equivalent to y = 0. Does it follow that x and z can equal anything? Notice
that when x = 1 , z = −4, and y = 0 are plugged in to the equations, the equations do not equal 0. Why?
Answer
These are not legitimate row operations. They do not preserve the solution set of the system.
Exercise 11.4.58
Balance the following chemical reactions.
1. $KNO_3 + H_2CO_3 \to K_2CO_3 + HNO_3$
2. $AgI + Na_2S \to Ag_2S + NaI$
3. $Ba_3N_2 + H_2O \to Ba(OH)_2 + NH_3$
4. $CaCl_2 + Na_3PO_4 \to Ca_3(PO_4)_2 + NaCl$
Exercise 11.4.59
In the section on dimensionless variables it was observed that $\rho V^2 AB$ has the units of force. Describe a systematic way to obtain such combinations of the variables which will yield something which has the units of force.
Exercise 11.4.60
Consider the following diagram of four circuits.
The current in amps in the four circuits is denoted by $I_1, I_2, I_3, I_4$ and it is understood that the motion is in the counterclockwise direction. If $I_k$ ends up being negative, then it just means the current flows in the clockwise direction.
In the above diagram, the top left circuit should give the equation
2 I2 − 2 I1 + 5 I2 − 5 I3 + 3 I2 = 5 (11.4.35)
Write equations for each of the other three circuits and then give a solution to the resulting system of equations.
Answer
2 I4 + 3 I4 + 6 I4 − 6 I3 + I4 − I1 = 0
2 I2 − 2 I1 + 5 I2 − 5 I3 + 3 I2 = 5
4 I1 + I1 − I4 + 2 I1 − 2 I2 = −10
(11.4.37)
6 I3 − 6 I4 + I3 + I3 + 5 I3 − 5 I2 = −20
2 I4 + 3 I4 + 6 I4 − 6 I3 + I4 − I1 = 0
Exercise 11.4.61
Consider the following diagram of three circuits.
The current in amps in the three circuits is denoted by $I_1, I_2, I_3$ and it is understood that the motion is in the counterclockwise direction. If $I_k$ ends up being negative, then it just means the current flows in the clockwise direction.

Find $I_1, I_2, I_3$.
Answer
You have
2 I1 + 5 I1 + 3 I1 − 5 I2 = 10
I2 − I3 + 3 I2 + 7 I2 + 5 I2 − 5 I1 = −12
2 I3 + 4 I3 + 4 I3 + I3 − I2 = 0
10 I1 − 5 I2 = 10
−5 I1 + 16 I2 − I3 = −12
−I2 + 11 I3 = 0
12.E: EXERCISES
12.1: Matrix Arithmetic
Learning Objectives
1. Perform the matrix operations of matrix addition, scalar multiplication, transposition and matrix multiplication.
Identify when these operations are not defined. Represent these operations in terms of the entries of a matrix.
2. Prove algebraic properties for matrix addition, scalar multiplication, transposition, and matrix multiplication. Apply
these properties to manipulate an algebraic expression involving matrices.
3. Compute the inverse of a matrix using row operations, and prove identities involving matrix inverses.
4. Solve a linear system using matrix algebra.
5. Use multiplication by an elementary matrix to apply row operations.
6. Write a matrix as a product of elementary matrices.
You have now solved systems of equations by writing them in terms of an augmented matrix and then doing row operations on
this augmented matrix. It turns out that matrices are important not only for systems of equations but also in many applications.
Recall that a matrix is a rectangular array of numbers. Several of them are referred to as matrices. For example, here is a
matrix.
1 2 3 4
⎡ ⎤
⎢5 2 8 7⎥ (12.1.1)
⎣ ⎦
6 −9 1 2
Recall that the size or dimension of a matrix is defined as m × n where m is the number of rows and n is the number of
columns. The above matrix is a 3 × 4 matrix because there are three rows and four columns. You can remember the columns
are like columns in a Greek temple. They stand upright while the rows lay flat like rows made by a tractor in a plowed field.
When specifying the size of a matrix, you always list the number of rows before the number of columns.You might remember
that you always list the rows before the columns by using the phrase Rowman Catholic.
Consider the following definition.
There is some notation specific to matrices which we now introduce. We denote the columns of a matrix $A$ by $A_j$ as follows:
$A = \begin{bmatrix} A_1 & A_2 & \cdots & A_n \end{bmatrix}$  (12.1.2)
Therefore, $A_j$ is the $j^{th}$ column of $A$, when counted from left to right.

The individual elements of the matrix are called entries or components of $A$. Elements of the matrix are identified according to their position. The $(i, j)$-entry of a matrix is the entry in the $i^{th}$ row and $j^{th}$ column. For example, in the matrix in Equation 12.1.1, 8 is in position $(2, 3)$ (and is called the $(2, 3)$-entry) because it is in the second row and the third column.

In order to remember which matrix we are speaking of, we will denote the entry in the $i^{th}$ row and the $j^{th}$ column of matrix $A$ by $a_{ij}$. Then, we can write $A$ in terms of its entries, as $A = [a_{ij}]$. Using this notation on the matrix in Equation 12.1.1, $a_{23} = 8$, $a_{32} = -9$, $a_{12} = 2$, etc.
There are various operations which are done on matrices of appropriate sizes. Matrices can be added to and subtracted from
other matrices, multiplied by a scalar, and multiplied by other matrices. We will never divide a matrix by another matrix, but
we will see later how matrix inverses play a similar role.
In doing arithmetic with matrices, we often define the action by what happens in terms of the entries (or components) of the
matrices. Before looking at these operations in depth, consider a few general definitions.
Note there is a 2 × 3 zero matrix, a 3 × 4 zero matrix, etc. In fact there is a zero matrix for every size!
In other words, two matrices are equal exactly when they are the same size and the corresponding entries are identical. Thus
$\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$  (12.1.3)
because they are not the same size. Two matrices of the same size can also fail to be equal when their corresponding entries are not identical.
In the following section, we explore addition of matrices.
Addition of Matrices
When adding matrices, all matrices in the sum need have the same size. For example,
1 2
⎡ ⎤
⎢3 4⎥ (12.1.5)
⎣ ⎦
5 2
and
−1 4 8
[ ] (12.1.6)
2 8 5
cannot be added, as one has size 3 × 2 while the other has size 2 × 3 .
However, the addition
4 6 3 0 5 0
⎡ ⎤ ⎡ ⎤
⎢ 5 0 4 ⎥+⎢4 −4 14 ⎥ (12.1.7)
⎣ ⎦ ⎣ ⎦
11 −2 3 1 2 6
is possible.
The formal definition is as follows.
This definition tells us that when adding matrices, we simply add corresponding entries of the matrices. This is demonstrated
in the next example.
Example

Add the following matrices, if possible.
$A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 0 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 2 & 3 \\ -6 & 2 & 1 \end{bmatrix}$

Solution
Notice that both A and B are of size 2 × 3 . Since A and B are of the same size, the addition is possible. Using Definition
12.1.4, the addition is done as follows.
$A + B = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 0 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 2 & 3 \\ -6 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 1+5 & 2+2 & 3+3 \\ 1+(-6) & 0+2 & 4+1 \end{bmatrix} = \begin{bmatrix} 6 & 4 & 6 \\ -5 & 2 & 5 \end{bmatrix}$  (12.1.10)
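Entrywise addition is exactly what array libraries implement. The following minimal NumPy sketch (my own illustration, not part of the text) reproduces the sum just computed.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 0, 4]])
B = np.array([[5, 2, 3],
              [-6, 2, 1]])

# The + operator adds matrices of the same size entry by entry
print(A + B)
# [[ 6  4  6]
#  [-5  2  5]]
```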
Addition of matrices obeys very much the same properties as normal addition with numbers. Note that when we write for
example A + B then we assume that both matrices are of equal size so that the operation is indeed possible.
Theorem 12.1.1
Properties of Matrix Addition: Let A, B and C be matrices. Then, the following properties hold.
Commutative Law of Addition
A+B = B+A (12.1.11)
Proof
Consider the Commutative Law of Addition given in [mat1]. Let A, B, C , and D be matrices such that A + B = C
and B + A = D. We want to show that D = C . To do so, we will use the definition of matrix addition given in
Definition 12.1.4. Now,
cij = aij + bij = bij + aij = dij (12.1.15)
Therefore, $C = D$ because the $ij^{th}$ entries are the same for all $i$ and $j$. Note that the conclusion follows from the commutative law of addition of numbers, which says that if $a$ and $b$ are two numbers, then $a + b = b + a$. The proof
of the other results are similar, and are left as an exercise.
is defined to equal $(-1)A = [-a_{ij}]$. In other words, every entry of $A$ is multiplied by $-1$.
In the next section we will study scalar multiplication in more depth to understand what is meant by (−1) A.
$3\begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 2 & 8 & 7 \\ 6 & -9 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 6 & 9 & 12 \\ 15 & 6 & 24 & 21 \\ 18 & -27 & 3 & 6 \end{bmatrix}$
The new matrix is obtained by multiplying every entry of the original matrix by the given scalar.
The formal definition of scalar multiplication is as follows.
Solution
By Definition [def:scalarmultofmatrices], we multiply each element of A by 7. Therefore,
$7A = 7\begin{bmatrix} 2 & 0 \\ 1 & -4 \end{bmatrix} = \begin{bmatrix} 7(2) & 7(0) \\ 7(1) & 7(-4) \end{bmatrix} = \begin{bmatrix} 14 & 0 \\ 7 & -28 \end{bmatrix}$  (12.1.18)
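Scalar multiplication is likewise entrywise; a one-line NumPy check of the computation above (illustrative only):

```python
import numpy as np

A = np.array([[2, 0],
              [1, -4]])

# Multiplying a matrix by the scalar 7 multiplies every entry by 7
print(7 * A)
# [[ 14   0]
#  [  7 -28]]
```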
Similarly to addition of matrices, there are several properties of scalar multiplication which hold.
The $n \times 1$ matrix
$X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$  (12.2.1)
is called a column vector, while a $1 \times n$ matrix is called a row vector.
We may simply use the term vector throughout this text to refer to either a column or row vector. If we do so, the context will
make it clear which we are referring to.
In this chapter, we will again use the notion of linear combination of vectors as in Definition [def:linearcombination]. In this
context, a linear combination is a sum consisting of vectors multiplied by scalars. For example,
$\begin{bmatrix} 50 \\ 122 \end{bmatrix} = 7\begin{bmatrix} 1 \\ 4 \end{bmatrix} + 8\begin{bmatrix} 2 \\ 5 \end{bmatrix} + 9\begin{bmatrix} 3 \\ 6 \end{bmatrix}$  (12.2.3)
In general, the system of equations
$a_{11}x_1 + \cdots + a_{1n}x_n = b_1$
$\vdots$  (12.2.4)
$a_{m1}x_1 + \cdots + a_{mn}x_n = b_m$
can be written in terms of a linear combination as
$x_1\begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$
Notice that each vector used here is one column from the corresponding augmented matrix. There is one vector for each
variable in the system, along with the constant vector.
In general terms,
$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1\begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix} + x_3\begin{bmatrix} a_{13} \\ a_{23} \end{bmatrix}$
Thus you take $x_1$ times the first column, add to $x_2$ times the second column, and finally $x_3$ times the third column. The above
sum is a linear combination of the columns of the matrix. When you multiply a matrix on the left by a vector on the right, the
numbers making up the vector are just the scalars to be used in the linear combination of the columns as illustrated above.
Here is the formal definition of how to multiply an m × n matrix by an n × 1 column vector.
$A = \begin{bmatrix} A_1 & \cdots & A_n \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$  (12.2.8)
Then the product AX is the m × 1 column vector which equals the following linear combination of the columns of A :
$x_1A_1 + x_2A_2 + \cdots + x_nA_n = \sum_{j=1}^{n} x_jA_j$  (12.2.9)
If we write the columns of A in terms of their entries, they are of the form
$A_j = \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix}$  (12.2.10)
Then the product $AX$ is given by
$AX = x_1\begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix}$
Example

Compute the product $AX$ for
$A = \begin{bmatrix} 1 & 2 & 1 & 3 \\ 0 & 2 & 1 & -2 \\ 2 & 1 & 4 & 1 \end{bmatrix}, \quad X = \begin{bmatrix} 1 \\ 2 \\ 0 \\ 1 \end{bmatrix}$

Solution

We will use Definition [def:multiplicationvectormatrix] to compute the product. Therefore, we compute the product $AX$ as follows.
$AX = 1\begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + 2\begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} + 0\begin{bmatrix} 1 \\ 1 \\ 4 \end{bmatrix} + 1\begin{bmatrix} 3 \\ -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + \begin{bmatrix} 4 \\ 4 \\ 2 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 3 \\ -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 8 \\ 2 \\ 5 \end{bmatrix}$
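The "linear combination of columns" point of view is easy to check numerically. The sketch below (my own, using NumPy) computes AX both ways for the matrix and vector of this example.

```python
import numpy as np

A = np.array([[1, 2, 1,  3],
              [0, 2, 1, -2],
              [2, 1, 4,  1]])
X = np.array([1, 2, 0, 1])

# Direct matrix-vector product
print(A @ X)                                   # [8 2 5]

# Same result as the linear combination x1*A1 + x2*A2 + x3*A3 + x4*A4 of the columns
combo = sum(X[j] * A[:, j] for j in range(A.shape[1]))
print(combo)                                   # [8 2 5]
```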
Using the above operation, we can also write a system of linear equations in matrix form. In this form, we express the system
as a matrix multiplied by a vector. Consider the following definition.
Suppose we have a system of equations given by
$a_{11}x_1 + \cdots + a_{1n}x_n = b_1$
$a_{21}x_1 + \cdots + a_{2n}x_n = b_2$
$\vdots$  (12.2.13)
$a_{m1}x_1 + \cdots + a_{mn}x_n = b_m$
Then we can express this system in matrix form as
$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$
The expression AX = B is also known as the Matrix Form of the corresponding system of linear equations. The matrix A is
simply the coefficient matrix of the system, the vector X is the column vector constructed from the variables of the system,
and finally the vector B is the column vector constructed from the constants of the system. It is important to note that any
system of linear equations can be written in this form.
Notice that if we write a homogeneous system of equations in matrix form, it would have the form AX = 0 , for the zero
vector 0.
You can see from this definition that a vector
$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$  (12.2.15)
will satisfy the equation $AX = B$ only when the entries $x_1, x_2, \cdots, x_n$ of the vector $X$ are solutions to the original system.
Now that we have examined how to multiply a matrix by a vector, we wish to consider the case where we multiply two
matrices of more general sizes, although these sizes still need to be appropriate as we will see. For example, in Example
[exa:vectormultbymatrix], we multiplied a 3 × 4 matrix by a 4 × 1 vector. We want to investigate how to multiply other sizes
of matrices.
We have not yet given any conditions on when matrix multiplication is possible! For matrices A and B , in order to form the
product AB, the number of columns of A must equal the number of rows of B. Consider a product AB where A has size
m × n and B has size n × p . Then, the product in terms of size of matrices is given by
$(m \times n)(n \times p) = m \times p$  (12.2.16)
Note the two outside numbers give the size of the product. One of the most important rules regarding matrix multiplication is
the following. If the two middle numbers don’t match, you can’t multiply the matrices!
When the number of columns of A equals the number of rows of B the two matrices are said to be conformable and the
product AB is obtained as follows.
Write $B$ in terms of its columns as
$B = \begin{bmatrix} B_1 & \cdots & B_p \end{bmatrix}$  (12.2.17)
Then the product $AB$ is the $m \times p$ matrix whose $j^{th}$ column is $AB_j$; that is,
$AB = \begin{bmatrix} AB_1 & AB_2 & \cdots & AB_p \end{bmatrix}$
Example

Find $AB$ if possible, where
$A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 1 \\ -2 & 1 & 1 \end{bmatrix}$  (12.2.19)
Solution
The first thing you need to verify when calculating a product is whether the multiplication is possible. The first matrix has
size 2 × 3 and the second matrix has size 3 × 3 . The inside numbers are equal, so A and B are conformable matrices.
According to the above discussion AB will be a 2 × 3 matrix. Definition [def:multiplicationoftwomatrices] gives us a
way to calculate each column of AB, as follows.
The first, second, and third columns of $AB$ are, respectively,
$\begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}, \quad \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$  (12.2.20)
Carrying out the three matrix-by-vector products gives
$AB = \begin{bmatrix} -1 & 9 & 3 \\ -2 & 7 & 3 \end{bmatrix}$
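The product can be confirmed column by column with NumPy (again an illustration of my own, not part of the text): each column of AB is A times the corresponding column of B.

```python
import numpy as np

A = np.array([[1, 2, 1],
              [0, 2, 1]])
B = np.array([[ 1, 2, 0],
              [ 0, 3, 1],
              [-2, 1, 1]])

print(A @ B)
# [[-1  9  3]
#  [-2  7  3]]

# Column j of AB equals A times column j of B
for j in range(B.shape[1]):
    assert np.array_equal((A @ B)[:, j], A @ B[:, j])
```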
Since vectors are simply n × 1 or 1 × m matrices, we can also multiply a vector by another vector.
Solution
In this case we are multiplying a matrix of size 3 × 1 by a matrix of size 1 × 4. The inside numbers match so the product
is defined. Note that the product will be a matrix of size 3 × 4 . Using Definition [def:multiplicationoftwomatrices], we
can compute this product as follows
$\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 1 & 0 \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 1 \end{bmatrix}, \; \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 2 \end{bmatrix}, \; \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 1 \end{bmatrix}, \; \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 0 \end{bmatrix} \end{bmatrix}$  (12.2.22)
which equals
$\begin{bmatrix} 1 & 2 & 1 & 0 \\ 2 & 4 & 2 & 0 \\ 1 & 2 & 1 & 0 \end{bmatrix}$  (12.2.23)
Solution
First check if it is possible. This product is of the form (3 × 3) (2 × 3) . The inside numbers do not match and so you
can’t do this multiplication.
In this case, we say that the multiplication is not defined. Notice that these are the same matrices which we used in Example
[exa:multiplicationoftwomatrices]. In this example, we tried to calculate BA instead of AB. This demonstrates another
property of matrix multiplication. While the product AB may be defined, we cannot assume that the product BA will be
possible. Therefore, it is important to always check that the product is defined before carrying out any calculations.
Earlier, we defined the zero matrix 0 to be the matrix (of appropriate size) containing zeros in all entries. Consider the
following example for multiplication by the zero matrix.
Solution
In this product, we compute
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$  (12.2.27)
Hence, A0 = 0 .
Notice that we could also multiply $A$ by the $2 \times 1$ zero vector given by $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. The result would be the $2 \times 1$ zero vector.
Therefore, it is always the case that A0 = 0 , for an appropriately sized zero matrix or vector.
The $j^{th}$ column of $AB$ is of the form
$b_{1j}\begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + b_{2j}\begin{bmatrix} a_{12} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + b_{nj}\begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix}$
Therefore, the $ij^{th}$ entry is the entry in row $i$ of this vector. This is computed by
$\sum_{k=1}^{n} a_{ik}b_{kj}$
The following is the formal definition for the $ij^{th}$ entry of a product of matrices: if $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix, then
$(AB)_{ij} = \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix}\begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{nj} \end{bmatrix} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$  (12.3.6)
In other words, to find the $(i, j)$-entry of the product $AB$, or $(AB)_{ij}$, you multiply the $i^{th}$ row of $A$ on the left by the $j^{th}$ column of $B$.

Example

Compute $AB$ if possible, where
$A = \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{bmatrix}, \quad B = \begin{bmatrix} 2 & 3 & 1 \\ 7 & 6 & 2 \end{bmatrix}$

Solution

First check if the product is possible. It is of the form $(3 \times 2)(2 \times 3)$ and since the inside numbers match, it is possible to do the multiplication. The result should be a $3 \times 3$ matrix. We can first compute $AB$:
$\begin{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{bmatrix}\begin{bmatrix} 2 \\ 7 \end{bmatrix}, \; \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{bmatrix}\begin{bmatrix} 3 \\ 6 \end{bmatrix}, \; \begin{bmatrix} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} \end{bmatrix}$  (12.3.8)
where the commas separate the columns in the resulting product. Thus the above product equals
$\begin{bmatrix} 16 & 15 & 5 \\ 13 & 15 & 5 \\ 46 & 42 & 14 \end{bmatrix}$  (12.3.9)
As a check, the $(3, 2)$-entry computed directly from the rule above is
$\sum_{k=1}^{2} a_{3k}b_{k2} = 2 \times 3 + 6 \times 6 = 42$
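The entry-by-entry rule is equally easy to check by machine. In the NumPy sketch below (my own illustration), the (3, 2)-entry of AB is recomputed as the dot product of the third row of A with the second column of B.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 1],
              [2, 6]])
B = np.array([[2, 3, 1],
              [7, 6, 2]])

AB = A @ B
print(AB)
# [[16 15  5]
#  [13 15  5]
#  [46 42 14]]

# (AB)_{32}: row 3 of A dotted with column 2 of B (0-based indices 2 and 1)
print(np.dot(A[2, :], B[:, 1]))  # 42, matching the entry computed above
```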
Solution
This product is of the form (3 × 3) (3 × 2) . The middle numbers match so the matrices are conformable and it is possible
to compute the product.
We want to find the (2, 1)-entry of AB, that is, the entry in the second row and first column of the product. We will use
Definition [def:ijentryofproduct], which states
$(AB)_{21} = \sum_{k=1}^{n} a_{2k}b_{k1} = \begin{bmatrix} a_{21} & a_{22} & a_{23} \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix}$
Hence, $(AB)_{21} = 29$.
You should take a moment to find a few other entries of AB . You can multiply the matrices to check that your answers
are correct. The product AB is given by
$AB = \begin{bmatrix} 13 & 13 \\ 29 & 32 \\ 0 & 0 \end{bmatrix}$  (12.3.14)
Solution
First, notice that A and B are both of size 2 ×2 . Therefore, both products AB and BA are defined. The first product,
AB is
$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}$
The second product, $BA$, is
$BA = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}$
Therefore, AB ≠ BA .
This example illustrates that you cannot assume AB = BA even when multiplication is defined in both orders. If for some
matrices A and B it is true that AB = BA , then we say that A and B commute. This is one important property of matrix
multiplication.
The following are other important properties of matrix multiplication. Notice that these properties hold only when the sizes of the matrices are such that the products are defined.
$A(rB + sC) = r(AB) + s(AC)$  (12.4.1)
$(B + C)A = BA + CA$  (12.4.2)
$A(BC) = (AB)C$  (12.4.3)
Proof
First we will prove 12.4.1. We will use Definition [def:ijentryofproduct] and prove this statement using the $ij^{th}$ entries of a matrix. Therefore,
$(A(rB + sC))_{ij} = \sum_k a_{ik}(rB + sC)_{kj} = \sum_k a_{ik}(rb_{kj} + sc_{kj}) = r\sum_k a_{ik}b_{kj} + s\sum_k a_{ik}c_{kj} = r(AB)_{ij} + s(AC)_{ij} = (r(AB) + s(AC))_{ij}$
Thus $A(rB + sC) = r(AB) + s(AC)$ as claimed.

The proof of Equation 12.4.2 follows the same pattern and is left as an exercise.

Statement Equation 12.4.3 is the associative law of multiplication. Using Definition [def:ijentryofproduct],
$((AB)C)_{ij} = \sum_l (AB)_{il}c_{lj} = \sum_l \sum_k a_{ik}b_{kl}c_{lj} = \sum_k a_{ik}(BC)_{kj} = (A(BC))_{ij}$
Before formally defining the transpose, we explore this operation on the following matrix.
$\begin{bmatrix} 1 & 4 \\ 3 & 1 \\ 2 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 2 \\ 4 & 1 & 6 \end{bmatrix}$
What happened? The first column became the first row and the second column became the second row. Thus the 3 × 2 matrix
became a 2 × 3 matrix. The number 4 was in the first row and the second column and it ended up in the second row and first
column.
The definition of the transpose is as follows.
$A^T = [a_{ij}]^T = [a_{ji}]$  (12.5.1)

Example

Calculate $A^T$ where
$A = \begin{bmatrix} 1 & 2 & -6 \\ 3 & 5 & 4 \end{bmatrix}$
Solution
By Definition 12.5.1, we know that for $A = [a_{ij}]$, $A^T = [a_{ji}]$. In other words, we switch the row and column location of each entry. The $(1, 2)$-entry becomes the $(2, 1)$-entry.

Thus,
$A^T = \begin{bmatrix} 1 & 3 \\ 2 & 5 \\ -6 & 4 \end{bmatrix}$
1. (A
T
)
T
=A (12.5.2)
2. (AB)
T
=B
T
A
T
(12.5.3)
3. (rA + sB)
T
= rA
T
+ sB
T
(12.5.4)
Proof
k k
T T T T T T
= ∑ [ bik ] [ akj ] = [ bij ] [ aij ] =B A
The transpose of a matrix is related to other important topics. Consider the following definition.
2 1 3
⎡ ⎤
A = ⎢1 5 −3 ⎥
⎣ ⎦
3 −3 7
2 1 3
⎡ ⎤
T
A = ⎢1 5 −3 ⎥
⎣ ⎦
3 −3 7
Hence, A = A , so A is symmetric.
T
The first is the 1 × 1 identity matrix, the second is the 2 × 2 identity matrix, and so on. By extension, you can likely see what
the n × n identity matrix would be. When it is necessary to distinguish which size of identity matrix is being discussed, we
will use the notation I for the n × n identity matrix.
n
The identity matrix is so important that there is a special symbol to denote the ij entry of the identity matrix. This symbol is
th
1 if i = j
δij = { (12.6.2)
0 if i ≠ j
In is called the identity matrix because it is a multiplicative identity in the following sense.
Proof
The (i, j)-entry of AI is given by:
n
We now define the matrix operation which in some ways plays the role of division.
Such a matrix A will have the same size as the matrix A . It is very important to observe that the inverse of a matrix, if it
−1
exists, is unique. Another way to think of this is that if it acts like the inverse, then it is the inverse.
Proof
−1 −1 −1 −1
A =A I =A (AB) = (A A) B = I B = B (12.6.5)
Hence, A −1
=B which tells us that the inverse is unique.
Solution
To check this, multiply
1 1 2 −1 1 0
[ ][ ] = [ ] =I (12.6.6)
1 2 −1 1 0 1
and
2 −1 1 1 1 0
[ ][ ] = [ ] =I (12.6.7)
−1 1 1 2 0 1
Unlike ordinary multiplication of numbers, it can happen that A ≠0 but A may fail to have an inverse. This is illustrated in
the following example.
Solution
One might think A would have an inverse because it does not equal zero. However, note that
1 1 −1 0
[ ][ ] =[ ] (12.6.8)
1 1 1 0
If A −1
existed, we would have the following
0 −1
0
[ ] = A ([ ])
0 0
−1
−1
= A (A [ ])
1
−1
−1
= (A A) [ ]
1
−1
= I [ ]
1
−1
= [ ]
1
In the next section, we will explore how to find the inverse of a matrix, if it exists.
Let
1 1
A =[ ] (12.7.1)
1 2
x z
as in Example [exa:verifyinginverse]. In order to find A −1
, we need to find a matrix [ ] such that
y w
1 1 x z 1 0
[ ][ ] =[ ] (12.7.2)
1 2 y w 0 1
We can multiply these two matrices, and see that in order for this equation to be true, we must find the solution to the systems
of equations,
x +y = 1
(12.7.3)
x + 2y = 0
and
z+w = 0
(12.7.4)
z + 2w = 1
1 1 1
[ ] (12.7.5)
1 2 0
1 1 0
[ ] (12.7.6)
1 2 1
1 1 1
[ ] (12.7.7)
0 1 −1
Now take −1 times the second row and add to the first to get
1 0 2
[ ] (12.7.8)
0 1 −1
−1
x z 2 −1
A =[ ] =[ ] (12.7.9)
y w −1 1
After taking the time to solve the second system, you may have noticed that exactly the same row operations were used to
solve both systems. In each case, the end result was something of the form [I |X] where I is the identity and X gave a column
the first column of the inverse was obtained by solving the first system and then the second column
z
[ ] (12.7.11)
w
To simplify this procedure, we could have solved both systems at once! To do so, we could have written
1 1 1 0
[ ] (12.7.12)
1 2 0 1
1 0 2 −1
[ ] (12.7.13)
0 1 −1 1
and read off the inverse as the 2 × 2 matrix on the right side.
This exploration motivates the following important algorithm.
[A|I ] (12.7.14)
[I |B] (12.7.15)
When this has been done, B = A . In this case, we say that A is invertible. If it is impossible to row reduce to a matrix
−1
This algorithm shows how to find the inverse if it exists. It will also tell you if A does not have an inverse.
Consider the following example.
Solution
Set up the augmented matrix
⎡ 1 2 2 1 0 0 ⎤
[A|I ] = ⎢
⎢ 1 0 2 0 1 0 ⎥
⎥ (12.7.16)
⎣ 3 1 −1 0 0 1 ⎦
Now we row reduce, with the goal of obtaining the 3 × 3 identity matrix on the left hand side. First, take −1 times the
first row and add to the second followed by −3 times the first row added to the third row. This yields
⎢ 0 −2 0 −1 1 0 ⎥ (12.7.17)
⎢ ⎥
⎣ ⎦
0 −5 −7 −3 0 1
Then take 5 times the second row and add to -2 times the third row.
⎡ 1 2 2 1 0 0 ⎤
⎢ 0 −10 0 −5 5 0 ⎥ (12.7.18)
⎢ ⎥
⎣ 0 0 14 1 5 −2 ⎦
Next take the third row and add to −7 times the first row. This yields
⎡ −7 −14 0 −6 5 −2 ⎤
⎢ 0 −10 0 −5 5 0 ⎥ (12.7.19)
⎢ ⎥
⎣ 0 0 14 1 5 −2 ⎦
Now take − times the second row and add to the first row.
7
⎡ −7 0 0 1 −2 −2 ⎤
⎢ 0 −10 0 −5 5 0 ⎥ (12.7.20)
⎢ ⎥
⎣ 0 0 14 1 5 −2 ⎦
Finally divide the first row by -7, the second row by -10 and the third row by 14 which yields
1 2 2
⎡ 1 0 0 − ⎤
7 7 7
⎢ 1 1 ⎥
⎢ 0 1 0 − 0 ⎥ (12.7.21)
⎢ 2 2 ⎥
1 5 1
⎣ 0 0 1 − ⎦
14 14 7
Notice that the left hand side of this matrix is now the 3 × 3 identity matrix I . Therefore, the inverse is the 3 × 3 matrix 3
It may happen that through this algorithm, you discover that the left hand side cannot be row reduced to the identity matrix.
Consider the following example of this situation.
Let A = ⎢ 1 0 2⎥ . Find A
−1
if it exists.
⎣ ⎦
2 2 4
Solution
Write the augmented matrix [A|I ]
⎡ 1 2 2 1 0 0 ⎤
⎢ 1 0 2 0 1 0 ⎥ (12.7.23)
⎢ ⎥
⎣ 2 2 4 0 0 1 ⎦
⎢ 0 −2 0 −1 1 0 ⎥ (12.7.24)
⎢ ⎥
⎣ ⎦
0 −2 0 −2 0 1
⎡ 1 2 2 1 0 0 ⎤
⎢ 0 −2 0 −1 1 0 ⎥ (12.7.25)
⎢ ⎥
⎣ 0 0 0 −1 −1 1 ⎦
At this point, you can see there will be no way to obtain I on the left side of this augmented matrix. Hence, there is no
way to complete this algorithm, and therefore the inverse of A does not exist. In this case, we say that A is not invertible.
If the algorithm provides an inverse for the original matrix, it is always possible to check your answer. To do so, use the
method demonstrated in Example [exa:verifyinginverse]. Check that the products AA and A A both equal the identity −1 −1
matrix. Through this method, you can always be sure that you have calculated A properly! −1
One way in which the inverse of a matrix is useful is to find the solution of a system of linear equations. Recall from
Definition [def:matrixform] that we can write a system of equations in matrix form, which is of the form AX = B . Suppose
you find the inverse of the matrix A . Then you could multiply both sides of this equation on the left by A and simplify to
−1 −1
obtain
−1 −1
(A ) AX = A B
−1 −1
(A A) X = A B
(12.7.26)
−1
IX = A B
−1
X =A B
Therefore we can find X, the solution to the system, by computing X = A B . Note that once you have found A
−1 −1
, you can
easily get the solution for different right hand sides (different B ). It is always just A B . −1
We will explore this method of finding the solution to a system in the following example.
x −y +z = 3 (12.7.27)
x +y −z = 2
Solution
First, we can write the system of equations in matrix form
1 0 1 x 1
⎡ ⎤⎡ ⎤ ⎡ ⎤
AX = ⎢ 1 −1 1 ⎥⎢ y ⎥ = ⎢3 ⎥ = B (12.7.28)
⎣ ⎦⎣ ⎦ ⎣ ⎦
1 1 −1 z 2
1 0 1
⎡ ⎤
A = ⎢1 −1 1⎥ (12.7.29)
⎣ ⎦
1 1 −1
is
0
⎡ ⎤
What if the right side, B , of [inversesystem1] had been ⎢ 1 ⎥? In other words, what would be the solution to
⎣ ⎦
3
1 0 1 x 0
⎡ ⎤⎡ ⎤ ⎡ ⎤
⎢1 −1 1 ⎥ ⎢ y ⎥ = ⎢ 1 ⎥? (12.7.32)
⎣ ⎦⎣ ⎦ ⎣ ⎦
1 1 −1 z 3
3. If A ,A ,...,A
1 2 kare invertible, then the product A 1 A2 ⋯ Ak is invertible, and
−1 −1 −1 −1 −1
(A1 A2 ⋯ Ak ) =A A ⋯A A
k k−1 2 1
p
−1
A
Exercise 12.E . 1
For the following pairs of matrices, determine if the sum A + B is defined. If so, find the sum.
1 0 0 1
1. A = [ ],B = [ ]
0 1 1 0
2 1 2 −1 0 3
2. A = [ ],B = [ ]
1 1 0 0 1 4
1 0
⎡ ⎤
2 7 −1
3. A = ⎢ −2 3⎥,B = [ ]
0 3 4
⎣ ⎦
4 2
Exercise 12.E . 2
For each matrix A , find the matrix −A such that A + (−A) = 0 .
1 2
1. A = [ ]
2 1
−2 3
2. A = [ ]
0 2
0 1 2
⎡ ⎤
3. A = ⎢ 1 −1 3⎥
⎣ ⎦
4 2 0
Exercise 12.E . 3
In the context of Proposition [prop:propertiesofaddition], describe −A and 0.
Answer
To get −A, just replace every entry of A with its additive inverse. The 0 matrix is the one which has all zeros in it.
Exercise 12.E . 4
For each matrix A , find the product (−2)A, 0A, and 3A.
1 2
1. A = [ ]
2 1
−2 3
2. A = [ ]
0 2
0 1 2
⎡ ⎤
3. A = ⎢ 1 −1 3⎥
⎣ ⎦
4 2 0
Exercise 12.E . 5
Using only the properties given in Proposition [prop:propertiesofaddition] and Proposition [prop:propertiesscalarmult],
show −A is unique.
−A = −A + (A + B) = (−A + A) + B = 0 + B = B (12.E.1)
Exercise 12.E . 6
Using only the properties given in Proposition [prop:propertiesofaddition] and Proposition [prop:propertiesscalarmult]
show 0 is unique.
Answer
Suppose 0 also works. Then 0
′ ′ ′
= 0 + 0 = 0.
Exercise 12.E . 7
Using only the properties given in Proposition [prop:propertiesofaddition] and Proposition [prop:propertiesscalarmult]
show 0A = 0. Here the 0 on the left is the scalar 0 and the 0 on the right is the zero matrix of appropriate size.
Answer
0A = (0 + 0) A = 0A + 0A. Now add − (0A) to both sides. Then 0 = 0A.
Exercise 12.E . 8
Using only the properties given in Proposition [prop:propertiesofaddition] and Proposition [prop:propertiesscalarmult], as
well as previous problems show (−1) A = −A.
Answer
A + (−1) A = (1 + (−1)) A = 0A = 0. Therefore, from the uniqueness of the additive inverse proved in the above
Problem [addinvrstunique], it follows that −A = (−1) A .
2.2
Exercise 12.E . 9
1 2 3 3 −1 2 1 2
Consider the matrices A = [ ],B = [ ],C = [ ], .
2 1 7 −3 2 1 3 1
−1 2 2
D =[ ],E = [ ]
2 −3 3
Exercise 12.E . 10
Find the following if possible. If it is not possible explain why.
1. −3A
2. 3B − A
3. AC
4. C B
5. AE
6. EA
Answer
8 −5 3
2. [ ]
−11 5 −4
3. Not possible
−3 3 4
4. [ ]
6 −1 7
5. Not possible
6. Not possible
Exercise 12.E . 11
1 2
⎡ ⎤
2 −5 2 1 2
Consider the matrices A = ⎢ 3 2⎥,B = [ ],C = [ ],
⎣ ⎦ −3 2 1 5 0
1 −1
−1 1 1
D =[ ],E = [ ]
4 −3 3
Exercise 12.E . 12
Find the following if possible. If it is not possible explain why.
1. −3A
2. 3B − A
3. AC
4. C A
5. AE
6. EA
7. BE
8. DE
Answer
−3 −6
⎡ ⎤
1. ⎢ −9 −6 ⎥
⎣ ⎦
−3 3
2. Not possible.
11 2
⎡ ⎤
3. ⎢ 13 6⎥
⎣ ⎦
−4 2
4. Not possible.
7
⎡ ⎤
5. ⎢ 9⎥
⎣ ⎦
−2
6. Not possible.
7. Not possible.
2
8. [ ]
−5
Exercise 12.E . 13
1. AB
2. BA
3. AC
4. C A
5. C B
6. BC
Answer
3 0 −4
⎡ ⎤
1. ⎢ −4 1 6⎥
⎣ ⎦
5 1 −6
1 −2
2. [ ]
−2 −3
3. Not possible
−4 −6
⎡ ⎤
4. ⎢ −5 −3 ⎥
⎣ ⎦
−1 −2
8 1 −3
5. [ ]
7 6 −6
Exercise 12.E . 14
−1 −1
Let A = [ ] . Find all 2 × 2 matrices, B such that AB = 0.
3 3
Answer
−1 −1 x y −x − z −w − y
[ ][ ] = [ ]
3 3 z w 3x + 3z 3w + 3y
0 0
= [ ]
0 0
x y
Solution is: w = −y, x = −z so the matrices are of the form [ ].
−x −y
Exercise 12.E . 15
Let X = [ −1 −1 1] and Y =[0 1 2]. Find X T
Y and XY T
if possible.
Answer
0 −1 −2
⎡ ⎤
T T
X Y = ⎢0 −1 −2 ⎥ , X Y =1
⎣ ⎦
0 1 2
Exercise 12.E . 16
1 2 1 2
Let A = [ ],B = [ ]. Is it possible to choose k such that AB = BA? If so, what should k equal?
3 4 3 k
1 2 1 2 7 10
[ ][ ] = [ ]
3 k 3 4 3k + 3 4k + 6
3k + 3 = 15
Thus you must have , Solution is: [k = 4]
2k + 2 = 10
Exercise 12.E . 17
1 2 1 2
Let A = [ ],B = [ ]. Is it possible to choose k such that AB = BA? If so, what should k equal?
3 4 1 k
Answer
1 2 1 2 3 2k + 2
[ ][ ] = [ ]
3 4 1 k 7 4k + 6
1 2 1 2 7 10
[ ][ ] = [ ]
1 k 3 4 3k + 1 4k + 2
However, 7 ≠ 3 and so there is no possible choice of k which will make these matrices commute.
Exercise 12.E . 18
Find 2 × 2 matrices, A , B, and C such that A ≠ 0, C ≠ B, but AC = AB.
Exercise 12.E . 19
1 −1 1 1 2 2
Let A = [ ],B = [ ],C = [ ] .
−1 1 1 1 2 2
Answer
1 −1 1 1 0 0
[ ][ ] = [ ]
−1 1 1 1 0 0
1 −1 2 2 0 0
[ ][ ] = [ ]
−1 1 2 2 0 0
Exercise 12.E . 20
Give an example of matrices (of any size), A, B, C such that B ≠ C , A ≠ 0, and yet AB = AC .
Exercise 12.E . 21
Find 2 × 2 matrices A and B such that A ≠ 0 and B ≠ 0 but AB = 0 .
Exercise 12.E . 22
1 −1 1 1
Let A = [ ],B = [ ].
−1 1 1 1
Exercise 12.E . 23
Give an example of matrices (of any size), A, B such that A ≠ 0 and B ≠ 0 but AB = 0.
Exercise 12.E . 24
Find 2 × 2 matrices A and B such that A ≠ 0 and B ≠ 0 with AB ≠ BA .
Exercise 12.E . 25
0 1 1 2
Let A = [ ],B = [ ] .
1 0 3 4
0 1 1 2 3 4
[ ][ ] = [ ]
1 0 3 4 1 2
1 2 0 1 2 1
[ ][ ] = [ ]
3 4 1 0 4 3
Exercise 12.E . 26
Write the system
x1 − x2 + 2 x3
2 x3 + x1
(12.E.3)
3x3
3 x4 + 3 x2 + x1
x1
⎡ ⎤
⎢ x2 ⎥
in the form A⎢ ⎥ where A is an appropriate matrix.
⎢x ⎥
3
⎣ ⎦
x4
Answer
1 −1 2 0
⎡ ⎤
⎢1 0 2 0⎥
A =⎢ ⎥
⎢0 0 3 0⎥
⎣ ⎦
1 3 0 3
Exercise 12.E . 27
Write the system
x1 + 3 x2 + 2 x3
2 x3 + x1
(12.E.4)
6x3
x4 + 3 x2 + x1
⎢ x2 ⎥
in the form A⎢ ⎥ where A is an appropriate matrix.
⎢x ⎥
3
⎣ ⎦
x4
Answer
1 3 2 0
⎡ ⎤
⎢1 0 2 0⎥
A =⎢ ⎥
⎢0 0 6 0⎥
⎣ ⎦
1 3 0 1
Exercise 12.E . 28
Write the system
x1 + x2 + x3
2 x3 + x1 + x2
(12.E.5)
x3 − x1
3 x4 + x1
x1
⎡ ⎤
⎢ x2 ⎥
in the form A⎢ ⎥ where A is an appropriate matrix.
⎢x ⎥
3
⎣ ⎦
x4
Answer
1 1 1 0
⎡ ⎤
⎢ 1 1 2 0⎥
A =⎢ ⎥
⎢ −1 0 1 0⎥
⎣ ⎦
1 0 0 3
Exercise 12.E . 29
A matrix A is called idempotent if A 2
= A. Let
2 0 2
⎡ ⎤
A =⎢ 1 1 2⎥ (12.E.6)
⎣ ⎦
−1 0 −1
2.3
Exercise 12.E . 30
For each pair of matrices, find the (1, 2)-entry and (2, 3)-entry of the product AB.
1 2 −1 4 6 −2
⎡ ⎤ ⎡ ⎤
1. A = ⎢ 3 4 0⎥,B = ⎢ 7 2 1⎥
⎣ ⎦ ⎣ ⎦
2 5 1 −1 0 0
1 3 1 2 3 0
⎡ ⎤ ⎡ ⎤
2. A = ⎢ 0 2 4 ⎥ , B = ⎢ −4 16 1⎥
⎣ ⎦ ⎣ ⎦
1 0 5 0 2 2
Exercise 12.E . 31
Suppose A and B are square matrices of the same size. Which of the following are necessarily true?
1. (A − B) = A − 2AB + B
2 2 2
2
2. (AB) = A B 2 2
3. (A + B) = A + 2AB + B
2 2 2
4. (A + B) = A + AB + BA + B
2 2 2
5. A B = A (AB) B
2 2
3
6. (A + B) = A + 3A B + 3AB + B 3 2 2 3
7. (A + B) (A − B) = A − B 2 2
Answer
1. Not necessarily true.
2. Not necessarily true.
3. Not necessarily true.
4. Necessarily true.
5. Necessarily true.
6. Not necessarily true.
7. Not necessarily true.
2.5
Exercise 12.E . 1
1 2
⎡ ⎤
2 −5 2 1 2
Consider the matrices A = ⎢ 3 2⎥,B = [ ],C = [ ],
−3 2 1 5 0
⎣ ⎦
1 −1
−1 1 1
D =[ ],E = [ ]
4 −3 3
Exercise 12.E . 1
Find the following if possible. If it is not possible explain why.
1. −3A T
2. 3B − A T
3. E B
T
4. EE T
5. B B
T
6. C A T
7. D BE
T
Answer
−3 −9 −3
1. [ ]
−6 −6 3
5 −18 5
2. [ ]
−11 4 4
3. [ −7 1 5]
13 −16 1
⎡ ⎤
5. ⎢ −16 29 −8 ⎥
⎣ ⎦
1 −8 5
5 7 −1
6. [ ]
5 15 5
7. Not possible.
Exercise 12.E . 1
Let A be an n × n matrix. Show A equals the sum of a symmetric and a skew symmetric matrix.
Hint
Show that 1
2
(A
T
+ A) is symmetric and then consider using this as one of the matrices.
Exercise 12.E . 1
T T
Show that 1
2
(A
T
+ A) is symmetric and then consider using this as one of the matrices. A = A+A
2
+
A−A
2
.
Exercise 12.E . 1
Show that the main diagonal of every skew symmetric matrix consists of only zeros. Recall that the main diagonal
consists of every entry of the matrix which is of the form a . ii
Answer
If A is symmetric then A = −A T
. It follows that a
ii = −aii and so each a
ii =0 .
Exercise 12.E . 1
Prove [matrixtranspose2]. That is, show that for an m × n matrix A , an m × n matrix B , and scalars r, s , the following
holds:
T T T
(rA + sB) = rA + sB (12.E.7)
2.6
Exercise 12.E . 1
Prove that I mA =A where A is an m × n matrix.
Answer
(Im A) ≡∑ δik Akj = Aij
ij j
Exercise 12.E . 1
Suppose AB = AC and A is an invertible n × n matrix. Does it follow that B = C ? Explain why or why not.
Answer
Yes B = C . Multiply AB = AC on the left by A −1
.
Exercise 12.E . 1
Give an example of a matrix A such that A 2
=I and yet A ≠ I and A ≠ −I .
Answer
1 0 0
⎡ ⎤
A = ⎢0 −1 0⎥
⎣ ⎦
0 0 1
2.7
Exercise 12.E . 1
Let
2 1
A =[ ] (12.E.8)
−1 3
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
−1 3 1
2 1 \vspace0.05in −\vspace0.05in
7 7
[ ] =[ ]
1 2
−1 3 \vspace0.05in \vspace0.05in
7 7
Exercise 12.E . 1
Let
0 1
A =[ ] (12.E.9)
5 3
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
−1 3 1
0 1 −\vspace0.05in \vspace0.05in
5 5
[ ] =[ ]
5 3 1 0
Exercise 12.E . 1
Add exercises text here.Let
2 1
A =[ ] (12.E.10)
3 0
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
Exercise 12.E . 1
Let
2 1
A =[ ] (12.E.11)
4 2
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
−1 1
2 1 1 \vspace0.05in
[ ] does not exist. The of this matrix is [ 2
]
4 2 0 0
Exercise 12.E . 1
a b
Let A be a 2 × 2 invertible matrix, with A = [ ]. Find a formula for A −1
in terms of a, b, c, d.
c d
Answer
−1 d b
a b −
ad−bc ad−bc
[ ] =[ ]
c a
c d −
ad−bc ad−bc
Exercise 12.E . 1
Let
1 2 3
⎡ ⎤
A = ⎢2 1 4⎥ (12.E.12)
⎣ ⎦
1 0 2
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
−1
1 2 3 −2 4 −5
⎡ ⎤ ⎡ ⎤
⎢2 1 4⎥ =⎢ 0 1 −2 ⎥
⎣ ⎦ ⎣ ⎦
1 0 2 1 −2 3
Exercise 12.E . 1
Let
1 0 3
⎡ ⎤
A = ⎢2 3 4⎥ (12.E.13)
⎣ ⎦
1 0 2
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
Exercise 12.E . 1
Let
1 2 3
⎡ ⎤
A = ⎢2 1 4⎥ (12.E.14)
⎣ ⎦
4 5 10
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
5
⎡1 0 \vspace0.05in
3 ⎤
The is ⎢
⎢0 1 \vspace0.05in
2 ⎥
⎥
. There is no inverse.
3
⎣ ⎦
0 0 0
Exercise 12.E . 1
Let
1 2 0 2
⎡ ⎤
⎢1 1 2 0⎥
A =⎢ ⎥ (12.E.15)
⎢2 1 −3 2⎥
⎣ ⎦
1 2 1 2
Find A −1
if possible. If A −1
does not exist, explain why.
Answer
1 1 1
−1
1 2 0 2 ⎡ −1 \vspace0.05in
2
\vspace0.05in
2
\vspace0.05in
2 ⎤
⎡ ⎤
⎢ 1 1 5⎥
⎢1 1 2 0⎥ ⎢ 3 \vspace0.05in −\vspace0.05in −\vspace0.05in ⎥
2 2 2
⎢ ⎥ =⎢ ⎥
⎢2 1 −3 2⎥ ⎢ ⎥
⎢ −1 0 0 1⎥
⎣ ⎦
1 2 1 2 ⎣ 3 1 9 ⎦
−2 −\vspace0.05in \vspace0.05in \vspace0.05in
4 4 4
Exercise 12.E . 1
Using the inverse of the matrix, find the solution to the systems:
1. 2 4 x 1
[ ][ ] =[ ] (12.E.16)
1 1 y 2
2. 2 4 x 2
[ ][ ] =[ ] (12.E.17)
1 1 y 0
1. ⎡
1 0 3
⎤⎡
x
⎤ ⎡
1
⎤
⎢2 3 4 ⎥⎢ y ⎥ = ⎢0 ⎥ (12.E.19)
⎣ ⎦⎣ ⎦ ⎣ ⎦
1 0 2 z 1
2. ⎡
1 0 3
⎤⎡
x
⎤ ⎡
3
⎤
⎢2 3 4 ⎥ ⎢ y ⎥ = ⎢ −1 ⎥ (12.E.20)
⎣ ⎦⎣ ⎦ ⎣ ⎦
1 0 2 z −2
⎢2 3 4⎥⎢ y ⎥ = ⎢ b ⎥ (12.E.21)
⎣ ⎦⎣ ⎦ ⎣ ⎦
1 0 2 z c
Answer
x 1
⎡ ⎤ ⎡ ⎤
1. ⎢ y ⎥ = ⎢ −\vspace0.05in
2
3
⎥
⎣ ⎦ ⎣ ⎦
z 0
x −12
⎡ ⎤ ⎡ ⎤
2. ⎢ y ⎥ =⎢ 1⎥
⎣ ⎦ ⎣ ⎦
z 5
x 3c − 2a
⎡ ⎤ ⎡ ⎤
3. ⎢ y ⎥ =⎢
1
3
b−
2
3
c ⎥
⎣ ⎦ ⎣ ⎦
z a−c
Exercise 12.E . 1
Show that if A is an n×n invertible matrix and X is a n×1 matrix such that AX = B for B an n×1 matrix, then
B.
−1
X =A
Answer
Multiply both sides of AX = B on the left by A −1
.
Exercise 12.E . 1
Prove that if A −1
exists and AX = 0 then X = 0 .
Answer
Multiply on both sides on the left by A −1
. Thus
−1 −1 −1
0 =A 0 =A (AX) = (A A) X = I X = X (12.E.22)
Answer
−1 −1 −1 −1
A =A I =A (AB) = (A A) B = I B = B.
Answer
T
You need to show that (A ) acts like the inverse of A because from uniqueness in the above problem, this will
−1 T
T T
−1 T −1 T
(A ) A = (AA ) =I =I
T −1
Hence (A −1
) = (A
T
) and this last matrix exists.
Exercise 12.E . 1
Show (AB) −1
=B
−1 −1
A by verifying that
−1 −1
AB (B A ) =I (12.E.23)
and
−1 −1
B A (AB) = I (12.E.24)
Answer
−1 −1 −1 −1 −1 −1 −1 −1 −1 −1 −1
(AB) B A = A (BB )A = AA =I B A (AB) = B (A A) B = B IB = B B =I
Exercise 12.E . 1
Show that (ABC ) −1
=C
−1
B
−1
A
−1
by verifying that
−1 −1 −1
(ABC ) (C B A ) =I (12.E.25)
and
−1 −1 −1
(C B A ) (ABC ) = I (12.E.26)
Answer
The proof of this exercise follows from the previous one.
Exercise 12.E . 1
−1 2
If A is invertible, show (A 2
) = (A
−1
) . Hint: Use Problem [exerinverseprod].
Answer
2 2
2 −1 −1 −1 −1 −1 −1 2 −1 −1 −1 −1
A (A ) = AAA A = AI A = AA =I (A ) A =A A AA = A IA = A A =I
Exercise 12.E . 1
Answer
−1
−1
A A = AA
−1
=I and so by uniqueness, (A −1
) =A .
2.8
Exercise 12.E . 1
2 3 1 2
Let A = [ ] . Suppose a row operation is applied to A and the result is B = [ ] . Find the elementary matrix E
1 2 2 3
Exercise 12.E . 1
4 0 8 0
Let A = [ ] . Suppose a row operation is applied to A and the result is B = [ ] . Find the elementary matrix E
2 1 2 1
Exercise 12.E . 1
1 −3 1 −3
Let A =[ ] . Suppose a row operation is applied to A and the result is B =[ ] . Find the elementary
0 5 2 −1
Exercise 12.E . 1
1 2 1 1 2 1
⎡ ⎤ ⎡ ⎤
Let A = ⎢ 0 5 1⎥ . Suppose a row operation is applied to A and the result is B = ⎢ 2 −1 4⎥ .
⎣ ⎦ ⎣ ⎦
2 −1 4 0 5 1
Exercise 12.E . 1
1 2 1 1 2 1
⎡ ⎤ ⎡ ⎤
Let A = ⎢ 0 5 1⎥ . Suppose a row operation is applied to A and the result is B = ⎢ 0 10 2⎥ .
⎣ ⎦ ⎣ ⎦
2 −1 4 2 −1 4
Exercise 12.E . 1
1 2 1 1 2 1
⎡ ⎤ ⎡ ⎤
2.10
Exercise 12.E . 1
1 2 0
⎡ ⎤
Find an LU factorization of ⎢ 2 1 3⎥.
⎣ ⎦
1 2 3
Answer
1 2 0 1 0 0 1 2 0
⎡ ⎤ ⎡ ⎤⎡ ⎤
Exercise 12.E . 1
1 2 3 2
⎡ ⎤
Find an LU factorization of ⎢ 1 3 2 1⎥.
⎣ ⎦
5 0 1 3
Answer
1 2 3 2 1 0 0 1 2 3 2
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢1 3 2 1⎥ = ⎢1 1 0 ⎥⎢0 1 −1 −1 ⎥ (12.E.28)
⎣ ⎦ ⎣ ⎦⎣ ⎦
5 0 1 3 5 −10 1 0 0 −24 −17
Exercise 12.E . 1
1 −2 −5 0
⎡ ⎤
Find an LU factorization of the matrix ⎢ −2 5 11 3⎥.
⎣ ⎦
3 −6 −15 1
Answer
1 −2 −5 0 1 0 0 1 −2 −5 0
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢ −2 5 11 3 ⎥ = ⎢ −2 1 0 ⎥⎢0 1 1 3⎥ (12.E.29)
⎣ ⎦ ⎣ ⎦⎣ ⎦
3 −6 −15 1 3 0 1 0 0 0 1
Exercise 12.E . 1
1 −1 −3 −1
⎡ ⎤
Answer
⎢ −1 2 4 3 ⎥ = ⎢ −1 1 0 ⎥⎢0 1 1 2⎥ (12.E.30)
⎣ ⎦ ⎣ ⎦⎣ ⎦
2 −3 −7 −3 2 −1 1 0 0 0 1
Exercise 12.E . 1
1 −3 −4 −3
⎡ ⎤
Answer
1 −3 −4 −3 1 0 0 1 −3 −4 −3
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢ −3 10 10 10 ⎥ = ⎢ −3 1 0 ⎥⎢0 1 −2 1⎥ (12.E.31)
⎣ ⎦ ⎣ ⎦⎣ ⎦
1 −6 2 −5 1 −3 1 0 0 0 1
Exercise 12.E . 1
1 3 1 −1
⎡ ⎤
Find an LU factorization of the matrix ⎢ 3 10 8 −1 ⎥ .
⎣ ⎦
2 5 −3 −3
Answer
1 3 1 −1 1 0 0 1 3 1 −1
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢3 10 8 −1 ⎥ = ⎢ 3 1 0 ⎥⎢0 1 5 2⎥ (12.E.32)
⎣ ⎦ ⎣ ⎦⎣ ⎦
2 5 −3 −3 2 −1 1 0 0 0 1
Exercise 12.E . 1
3 −2 1
⎡ ⎤
⎢ 9 −8 6⎥
Find an LU factorization of the matrix ⎢ ⎥.
⎢ −6 2 2⎥
⎣ ⎦
3 2 −7
Answer
3 −2 1 1 0 0 0 3 −2 1
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢ 9 −8 6⎥ ⎢ 3 1 0 0 ⎥⎢0 −2 3⎥
⎢ ⎥ =⎢ ⎥⎢ ⎥ (12.E.33)
⎢ −6 2 2⎥ ⎢ −2 1 1 0 ⎥⎢0 0 1⎥
⎣ ⎦ ⎣ ⎦⎣ ⎦
3 2 −7 1 −2 −2 1 0 0 0
Exercise 12.E . 1
−3 −1 3
⎡ ⎤
⎢ 9 9 −12 ⎥
Find an LU factorization of the matrix ⎢ ⎥.
⎢ 3 19 −16 ⎥
⎣ ⎦
12 40 −26
Exercise 12.E . 1
⎢ 1 3 0⎥
Find an LU factorization of the matrix ⎢ ⎥.
⎢ 3 9 0⎥
⎣ ⎦
4 12 16
Answer
−1 −3 −1 1 0 0 0 −1 −3 −1
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢ 1 3 0⎥ ⎢ −1 1 0 0⎥⎢ 0 0 −1 ⎥
⎢ ⎥ =⎢ ⎥⎢ ⎥ (12.E.34)
⎢ 3 9 0 ⎥ ⎢ −3 0 1 0⎥⎢ 0 0 −3 ⎥
⎣ ⎦ ⎣ ⎦⎣ ⎦
4 12 16 −4 0 −4 1 0 0 0
Exercise 12.E . 1
Find the LU factorization of the coefficient matrix using Dolittle’s method and use it to solve the system of equations.
x + 2y = 5
(12.E.35)
2x + 3y = 6
Answer
An LU factorization of the coefficient matrix is
1 2 1 0 1 2
[ ] =[ ][ ] (12.E.36)
2 3 2 1 0 −1
First solve
1 0 u 5
[ ][ ] =[ ] (12.E.37)
2 1 v 6
u 5
which gives [ ] = [ ]. Then solve
v −4
1 2 x 5
[ ][ ] =[ ] (12.E.38)
0 −1 y −4
Exercise 12.E . 1
Find the LU factorization of the coefficient matrix using Dolittle’s method and use it to solve the system of equations.
x + 2y + z = 1
y + 3z = 2 (12.E.39)
2x + 3y = 6
Answer
An LU factorization of the coefficient matrix is
1 2 1 1 0 0 1 2 1
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢0 1 3 ⎥ = ⎢0 1 0 ⎥⎢0 1 3⎥ (12.E.40)
⎣ ⎦ ⎣ ⎦⎣ ⎦
2 3 0 2 −1 1 0 0 1
First solve
1 2 1 x 1
⎡ ⎤⎡ ⎤ ⎡ ⎤
⎢0 1 3⎥⎢ y ⎥ = ⎢2⎥ (12.E.42)
⎣ ⎦⎣ ⎦ ⎣ ⎦
0 0 1 z 6
Exercise 12.E . 1
Find the LU factorization of the coefficient matrix using Dolittle’s method and use it to solve the system of equations.
x + 2y + 3z = 5
2x + 3y + z = 6 (12.E.43)
x −y +z = 2
Exercise 12.E . 1
Find the LU factorization of the coefficient matrix using Dolittle’s method and use it to solve the system of equations.
x + 2y + 3z = 5
2x + 3y + z = 6 (12.E.44)
3x + 5y + 4z = 11
Answer
An LU factorization of the coefficient matrix is
1 2 3 1 0 0 1 2 3
⎡ ⎤ ⎡ ⎤⎡ ⎤
⎢2 3 1 ⎥ = ⎢2 1 0 ⎥⎢0 −1 −5 ⎥ (12.E.45)
⎣ ⎦ ⎣ ⎦⎣ ⎦
3 5 4 3 1 1 0 0 0
First solve
1 0 0 u 5
⎡ ⎤⎡ ⎤ ⎡ ⎤
⎢2 1 0⎥⎢ v ⎥ = ⎢ 6 ⎥ (12.E.46)
⎣ ⎦⎣ ⎦ ⎣ ⎦
3 1 1 w 11
u 5
⎡ ⎤ ⎡ ⎤
Solution is: ⎢ v ⎥ = ⎢ −4 ⎥ . Next solve
⎣ ⎦ ⎣ ⎦
w 0
1 2 3 x 5
⎡ ⎤⎡ ⎤ ⎡ ⎤
⎢0 −1 −5 ⎥ ⎢ y ⎥ = ⎢ −4 ⎥ (12.E.47)
⎣ ⎦⎣ ⎦ ⎣ ⎦
0 0 0 z 0
x 7t − 3
⎡ ⎤ ⎡ ⎤
Solution is: ⎢ y ⎥ = ⎢ 4 − 5t ⎥ , t ∈ R .
⎣ ⎦ ⎣ ⎦
z t
Exercise 12.E . 1
Answer
Sometimes there is more than one LU factorization as is the case in this example. The given equation clearly gives an
LU factorization. However, it appears that the following equation gives another LU factorization.
0 1 1 0 0 1
[ ] =[ ][ ] (12.E.49)
0 1 0 1 0 1
13.E: EXERCISES
1 9/16/2020
13.1: Basic Techniques
Learning Objectives
Evaluate the determinant of a square matrix using either Laplace Expansion or row operations.
Demonstrate the effects that row operations have on determinants.
Verify the following:
The determinant of a product of matrices is the product of the determinants.
The determinant of a matrix is equal to the determinant of its transpose.
Let A be an n × n matrix. That is, let A be a square matrix. The determinant of A , denoted by det (A) is a very important
number which we will explore throughout this section.
If A is a 2×2 matrix, the determinant is given by the following formula.
The determinant is also often denoted by enclosing the matrix with two vertical lines. Thus
a b ∣a b∣
det [ ] =∣ ∣ = ad − bc (13.1.2)
c d ∣ c d∣
Solution
From Equation 13.1.1:
det (A) = (2) (6) − (−1) (4) = 12 + 4 = 16 (13.1.3)
The 2 × 2 determinant can be used to find the determinant of larger matrices. We will now explore how to find the
determinant of a 3 × 3 matrix, using several tools including the 2 × 2 determinant.
We begin with the following definition.
Hence, there is a minor associated with each entry of A . Consider the following example which demonstrates this definition.
⎣ ⎦
3 2 1
Find minor(A) 12
and minor(A) . 23
Solution
First we will find minor(A) . By Definition [def:ijthminor], this is the determinant of the
12
2 ×2 matrix which results
when you delete the first row and the second column. This minor is given by
4 2
minor(A)12 = det [ ] (13.1.5)
3 1
Therefore minor(A) 12
= −2 .
Similarly, minor(A) is the determinant of the 2 × 2 matrix which results when you delete the second row and the third
23
i+j
cof (A) = (−1) minor(A) (13.1.8)
ij ij
It is also convenient to refer to the cofactor of an entry of a matrix as follows. If aij is the ij
th
entry of the matrix, then its
cofactor is just cof (A) . ij
Solution
We will use Definition [def:ijthcofactor] to compute these cofactors.
First, we will compute cof (A) . Therefore, we need to find minor(A) . This is the determinant of the
12 12
2 ×2 matrix
which results when you delete the first row and the second column. Thus minor(A) is given by 12
Then,
1+2 1+2
cof (A)12 = (−1) minor(A)12 = (−1) (−2) = 2
Hence,
2+3 2+3
cof (A)23 = (−1) minor(A)23 = (−1) (−4) = 4 (13.1.10)
You may wish to find the remaining cofactors for the above matrix. Remember that there is a cofactor for every entry in the
matrix.
We have now established the tools we need to find the determinant of a 3 × 3 matrix.
det (A) = ai1 cof (A)i1 + ai2 cof (A)i2 + ai3 cof (A)i3
When calculating the determinant, you can choose to expand any row or any column. Regardless of your choice, you will
always get the same number which is the determinant of the matrix A. This method of evaluating a determinant by expanding
along a row or a column is called Laplace Expansion or Cofactor Expansion.
Consider the following example.
⎣ ⎦
3 2 1
1+1
∣3 2∣
1 (−1) ∣ ∣ = (1)(1)(−1) = −1
∣2 1∣
Similarly, we take the 4 in the first column and multiply it by its cofactor, as well as with the 3 in the first column. Finally,
we add these numbers together, as given in the following equation.
2+1
∣2 3∣ 2+2
∣1 3∣ 2+3
∣1 2∣
det (A) = 4 (−1) ∣ ∣ + 3 (−1) ∣ ∣ + 2 (−1) ∣ ∣
∣2 1∣ ∣3 1∣ ∣3 2∣
You can see that for both methods, we obtained det (A) = 0 .
As mentioned above, we will always come up with the same value for det (A) regardless of the row or column we choose to
expand along. You should try to compute the above determinant by expanding along other rows and columns. This is a good
way to check your work, because you should come up with the same number each time!
We present this idea formally in the following theorem.
We have now looked at the determinant of 2 × 2 and 3 × 3 matrices. It turns out that the method used to calculate the
determinant of a 3 × 3 matrix can be used to calculate the determinant of any sized matrix. Notice that Definition
[def:ijthminor], Definition [def:ijthcofactor] and Definition [def:threebythreedeterminant] can all be applied to a matrix of any
size.
For example, the ij minor of a 4 × 4 matrix is the determinant of the 3 × 3 matrix you obtain when you delete the i row
th th
and the j column. Just as with the 3 × 3 determinant, we can compute the determinant of a 4 × 4 matrix by Laplace
th
⎢5 4 2 3⎥
A =⎢ ⎥ (13.1.13)
⎢1 3 4 5⎥
⎣ ⎦
3 4 3 2
Solution
As in the case of a 3 × 3 matrix, you can expand this along any row or column. Lets pick the third column. Then, using
Laplace Expansion,
∣1 2 4∣ ∣1 2 4∣
3+3 ∣ ∣ 4+3 ∣ ∣
4 (−1) 5 4 3 + 3 (−1) 5 4 3 (13.1.15)
∣ ∣ ∣ ∣
∣3 4 2∣ ∣1 3 5∣
Now, you can calculate each 3 × 3 determinant using Laplace Expansion, as we did above. You should complete these as
an exercise and verify that det (A) = −12 .
The following provides a formal definition for the determinant of an n × n matrix. You may wish to take a moment and
consider the above definitions for 2 × 2 and 3 × 3 determinants in context of this definition.
j=1 i=1
In the following sections, we will explore some important properties and characteristics of the determinant.
⎢ ⎥
⎢0 ∗ ⋯ ⋮ ⎥
⎢ ⎥ (13.1.17)
⎢ ⎥
⎢ ⎥
⎢ ⋮ ⋮ ⋱ ∗⎥
⎣ ⎦
0 ⋯ 0 ∗
A lower triangular matrix is defined similarly as a matrix for which all entries above the main diagonal are equal to zero.
The following theorem provides a useful way to calculate the determinant of a triangular matrix.
The verification of this Theorem can be done by computing the determinant using Laplace Expansion along the first row or
column.
Consider the following example.
⎢0 2 6 7⎥
A =⎢ ⎥ (13.1.18)
⎢0 0 3 33.7 ⎥
⎣ ⎦
0 0 0 −1
∣2 3 77 ∣ ∣2 3 77 ∣
3+1 ∣ ∣ 4+1 ∣ ∣
0 (−1) 2 6 7 + 0 (−1) 2 6 7
∣ ∣ ∣ ∣
∣0 0 −1 ∣ ∣0 3 33.7 ∣
Now find the determinant of this 3 × 3 matrix, by expanding along the first column to obtain
∣3 33.7 ∣ 2+1
∣6 7∣ 3+1
∣6 7∣
det (A) = 1 × (2 × ∣ ∣ + 0 (−1) ∣ ∣ + 0 (−1) ∣ ∣) (13.1.20)
∣0 −1 ∣ ∣0 −1 ∣ ∣3 33.7 ∣
∣3 33.7 ∣
= 1 ×2 ×∣ ∣ (13.1.21)
∣0 −1 ∣
Next use Definition [def:twobytwodeterminant] to find the determinant of this 2 ×2 matrix, which is just
3 × −1 − 0 × 33.7 = −3 . Putting all these steps together, we have
which is just the product of the entries down the main diagonal of the original matrix!
You can see that while both methods result in the same answer, Theorem [thm:determinantoftriangularmatrix] provides a much
quicker method.
In the next section, we explore some important properties of determinants.
We will now consider the effect of row operations on the determinant of a matrix. In future sections, we will see that using the
following properties can greatly assist in finding determinants. This section will use the theorems as motivation to provide
various examples of the usefulness of the properties.
The first theorem explains the effect on the determinant of a matrix when two rows are switched.
When we switch two rows of a matrix, the determinant is multiplied by −1. Consider the following example.
Solution
By Definition [def:twobytwodeterminant], det (A) = 1 × 4 − 3 × 2 = −2 . Notice that the rows of B are the rows of A
but switched. By Theorem 13.2.1 since two rows of A have been switched, det (B) = − det (A) = − (−2) = 2 . You
can verify this using Definition [def:twobytwodeterminant].
The next theorem demonstrates the effect on the determinant of a matrix when we multiply a row by a scalar.
Notice that this theorem is true when we multiply one row of the matrix by k . If we were to multiply two rows of A by k to
obtain B , we would have det (B) = k det (A) . Suppose we were to multiply all n rows of A by k to obtain the matrix B , so
2
that B = kA . Then, det (B) = k det (A) . This gives the next theorem.
n
Solution
By Definition [def:twobytwodeterminant], det (A) = −2. We can also compute det (B) using Definition
[def:twobytwodeterminant], and we see that det (B) = −10 .
Now, let’s compute det (B) using Theorem [thm:multiplyingrowbyscalar] and see if we obtain the same answer. Notice
that the first row of B is 5 times the first row of A , while the second row of B is equal to the second row of A . By
Theorem [thm:multiplyingrowbyscalar], det (B) = 5 × det (A) = 5 × −2 = −10.
You can see that this matches our answer above.
Finally, consider the next theorem for the last row operation, that of adding a multiple of a row to another row.
Therefore, when we add a multiple of a row to another row, the determinant of the matrix is unchanged. Note that if a matrix
A contains a row which is a multiple of another row, det (A) will equal 0 . To see this, suppose the first row of A is equal to
−1 times the second row. By Theorem [thm:addingmultipleofrow], we can add the first row to the second row, and the
determinant will be unchanged. However, this row operation will result in a row of zeros. Using Laplace Expansion along the
row of zeros, we find that the determinant is 0.
Consider the following example.
Solution
By Definition [def:twobytwodeterminant], det (A) = −2 . Notice that the second row of B is two times the first row of A
added to the second row. By Theorem [thm:switchingrows], det (B) = det (A) = −2 . As usual, you can verify this
answer using Definition [def:twobytwodeterminant].
Solution
Using Definition [def:twobytwodeterminant], the determinant is given by
det (A) = 1 × 4 − 2 × 2 = 0
However notice that the second row is equal to 2 times the first row. Then by the discussion above following Theorem
[thm:addingmultipleofrow] the determinant will equal 0.
Until now, our focus has primarily been on row operations. However, we can carry out the same operations with columns,
rather than rows. The three operations outlined in Definition [def:operations] can be done with columns instead of rows. In this
In order to find the determinant of a product of matrices, we can simply take the product of the determinants.
Consider the following example.
Solution
First compute AB, which is given by
1 2 3 2 11 4
AB = [ ][ ] =[ ]
−3 2 4 1 −1 −4
11 4
det (AB) = det [ ] = −40
−1 −4
Now
1 2
det (A) = det [ ] =8
−3 2
and
3 2
det (B) = det [ ] = −5
4 1
Computing det (A) × det (B) we have 8 × −5 = −40 . This is the same answer as above and you can see that
det (A) det (B) = 8 × (−5) = −40 = det (AB) .
T
det (A ) = det (A) (13.2.3)
Find det (A T
.
)
Solution
First, note that
2 4
T
A =[ ]
5 3
Using Definition [def:twobytwodeterminant], we can compute det (A) and det (A ). It follows T
that
det (A) = 2 × 3 − 4 × 5 = −14 and det (A ) = 2 × 3 − 5 × 4 = −14 . Hence, det (A) = det (A ) .
T T
The following provides an essential property of the determinant, as well as a useful way to determine if a matrix is invertible.
−1
1
det(A ) =
det(A)
Solution
Consider the matrix A first. Using Definition [def:twobytwodeterminant] we can find the determinant as follows:
1
=
−13
1
= −
13
i=1
Example 13.2.7 :
1. Let E be the elementary matrix obtained by interchanging ith and j th rows of I . Then det E = −1 .
ij ij
2. Let E be the elementary matrix obtained by multiplying the ith row of I by k . Then det E = k .
ik ik
3. Let E be the elementary matrix obtained by multiplying ith row of I by k and adding it to its j th row. Then
ijk
det E =1.
ijk
4. If C and B are such that C B is defined and the ith row of C consists of zeros, then the ith row of C B consists of
zeros.
5. If E is an elementary matrix, then det E = det E . T
Many of the proofs in section use the Principle of Mathematical Induction. This concept is discussed in Appendix A.2 and is
reviewed here for convenience. First we check that the assertion is true for n = 2 (the case n = 1 is either completely trivial
or meaningless).
Next, we assume that the assertion is true for n − 1 (where n ≥ 3 ) and prove it for n . Once this is accomplished, by the
Principle of Mathematical Induction we can conclude that the statement is true for all n × n matrices for every n ≥ 2 .
If A is an n×n matrix and 1 ≤ j ≤ n , then the matrix obtained by removing 1st column and j th row from A is an
n−1 ×n−1 matrix (we shall denote this matrix by A(j) below). Since these matrices are used in computation of cofactors
cof (A)1,i , for 1 ≤ i ≠ n , the inductive assumption applies to these matrices.
Consider the following lemma.
Lemma 13.2.1 :
If A is an n × n matrix such that one of its rows consists of zeros, then det A = 0 .
Proof
We will prove this lemma using Mathematical Induction.
If n = 2 this is easy (check!).
Let n ≥ 3 be such that every matrix of size n − 1 × n − 1 with a row consisting of zeros has determinant equal to
zero. Let i be such that the ith row of A consists of zeros. Then we have a = 0 for 1 ≤ j ≤ n .
ij
Fix j ∈ {1, 2, … , n} such that j ≠ i . Then matrix A(j) used in computation of cof (A)1,j has a row consisting of
zeros, and by our inductive assumption cof (A) = 0 .
1,j
On the other hand, if j = i then a1,j =0 . Therefore a 1,j cof (A)1,j =0 for all j and by [E1] we have
n
j=1
Lemma 13.2.2 :
Assume A , B and C are n × n matrices that for some 1 ≤ i ≤ n satisfy the following.
1. j th rows of all three matrices are identical, for j ≠ i .
2. Each entry in the j th row of A is the sum of the corresponding entries in j th rows of B and C .
Proof
This is not difficult to check for n = 2 (do check it!).
Now assume that the statement of Lemma is true for n − 1 × n − 1 matrices and fix A, B and C as in the statement.
The assumptions state that we have a = b = c l,j for j ≠ i and for 1 ≤ l ≤ n and a = b + c
l,j l,j for all l,i l,i l,i
1 ≤ l ≤ n . Therefore A(i) = B(i) = C (i) , and A(j) has the property that its ith row is the sum of ith rows of B(j)
and C (j) for j ≠ i while the other rows of all three matrices are identical. Therefore by our inductive assumption we
have cof (A) = cof (B) + cof (C ) for j ≠ i .
1j 1j 1j
l=1
l≠i
= det B + det C
This proves that the assertion is true for all n and completes the proof.
Theorem 13.2.7 :
Let A and B be n × n matrices.
1. If A is obtained by interchanging ith and j th rows of B (with i ≠ j ), then det A = − det B .
2. If A is obtained by multiplying ith row of B by k then det A = k det B .
3. If two rows of A are identical then det A = 0 .
4. If A is obtained by multiplying ith row of B by k and adding it to j th row of B (i ≠ j ) then det A = det B .
Proof
We prove all statements by induction. The case n = 2 is easily checked directly (and it is strongly suggested that you
do check it).
We assume n ≥ 3 and (1)–(4) are true for all matrices of size n − 1 × n − 1 .
(1) We prove the case when j = i + 1 , i.e., we are interchanging two consecutive rows.
Let l ∈ {1, … , n} ∖ {i, j}. Then A(l) is obtained from B(l) by interchanging two of its rows (draw a picture) and by
our assumption
cof (A)1,l = −cof (B)1,l . (13.2.9)
Now consider a 1,i cof (A)1,l . We have that a 1,i = b1,j and also that A(i) = B(j) . Since j = i + 1 , we have
1+j 1+i+1 1+i
(−1 ) = (−1 ) = −(−1 ) (13.2.10)
and therefore a cof (A) = −b cof (B) and a cof (A) = −b cof (B) . Putting this together with [E2] into
1i 1i 1j 1j 1j 1j 1i 1i
[E1] we see that if in the formula for det A we change the sign of each of the summands we obtain the formula for
det B .
n n
l=1 l=1
We have therefore proved the case of (1) when j = i + 1 . In order to prove the general case, one needs the following
fact. If i < j , then in order to interchange ith and j th row one can proceed by interchanging two adjacent rows
2(j − i) + 1 times: First swap ith and i + 1 st, then i + 1 st and i + 2 nd, and so on. After one interchanges j − 1 st
and j th row, we have ith row in position of j th and lth row in position of l − 1 st for i + 1 ≤ l ≤ j . Then proceed
backwards swapping adjacent rows until everything is in place.
for 1 ≤ j ≤ n . In particular a = kb , and for l ≠ i matrix A(l) is obtained from B(l) by multiplying one of its
1i 1i
rows by k . Therefore cof (A) = kcof (B) for l ≠ i , and for all l we have a cof (A) = kb cof (B) . By [E1],
1l 1l 1l 1l 1l 1l
and we ‘only’ need to show that det C = 0 . But ith and j th rows of C are proportional. If D is obtained by
multiplying the j th row of C by then by (2) we have det C = det D (recall that k ≠ 0 !). But ith and j th rows of
1
k
1
Theorem 13.2.8 :
If A is an n × n matrix such that one of its rows consists of zeros, then det A = 0 .
Proof
We will prove this lemma using Mathematical Induction.
If n = 2 this is easy (check!).
Let n ≥ 3 be such that every matrix of size n − 1 × n − 1 with a row consisting of zeros has determinant equal to
zero. Let i be such that the ith row of A consists of zeros. Then we have a = 0 for 1 ≤ j ≤ n .
ij
Fix j ∈ {1, 2, … , n} such that j ≠ i . Then matrix A(j) used in computation of cof (A)1,j has a row consisting of
zeros, and by our inductive assumption cof (A) = 0 . 1,j
On the other hand, if j = i then a 1,j =0 . Therefore a 1,j cof (A)1,j =0 for all j and by [E1] we have
n
j=1
Theorem 13.2.9 :
Let A and B be two n × n matrices. Then
det (AB) = det (A) det (B) (13.2.14)
Proof
If A is an elementary matrix of either type, then multiplying by A on the left has the same effect as performing the
corresponding elementary row operation. Therefore the equality det(AB) = det A det B in this case follows by
Example [exa:EX1] and Theorem [thm:T1].
If C is the reduced row-echelon form of A then we can write A = E ⋅ E ⋅ ⋯ ⋅ E ⋅ C for some elementary
1 2 m
matrices E , … , E .
1 m
= det(E1 ⋅ E2 ⋅ ⋯ ⋅ Em ) det B
= det A det B.
Now assume C ≠ I . Since it is in reduced row-echelon form, its last row consists of zeros and by (4) of Example
[exa:EX1] the last row of C B consists of zeros. By Lemma [lem:L1] we have det C = det(C B) = 0 and therefore
and also
The same ‘machine’ used in the previous proof will be used again.
Theorem 13.2.10 :
Let A be a matrix where A is the transpose of A . Then,
T
T
det (A ) = det (A) (13.2.17)
Proof
Note first that the conclusion is true if A is elementary by (5) of Example [exa:EX1].
Let C be the reduced row-echelon form of A. Then we can write A = E 1 ⋅ E2 ⋅ ⋯ ⋅ Em C . Then
A
T
=C
T
⋅E
T
m ⋅⋯⋅E
T
2
⋅E1 . By Theorem [thm:T2] we have
T T T T
det(A ) = det(C ) ⋅ det(Em ) ⋅ ⋯ ⋅ det(E ) ⋅ det(E1 ). (13.2.18)
2
By (5) of Example [exa:EX1] we have that det E = det E for all j . Also, det C is either 0 or 1 (depending on
j
T
j
whether C = I or not) and in either case det C = det C . Therefore det A = det A .
T T
The above discussions allow us to now prove Theorem [thm:welldefineddeterminant]. It is restated below.
Theorem 13.2.11 :
Expanding an n × n matrix along any row or column always gives the same result, which is the determinant.
Proof
We first show that the determinant can be computed along any row. The case n = 1 does not apply and thus let n ≥ 2 .
Let Abe an n × n matrix and fix j > 1 . We need to prove that
n
i=1
Now we have
i=1
Since B is obtained by interchanging the 1st and 2nd rows of A we have that b 1,i = a2,i for all i and one can see that
minor(B) = minor(A)
1,i . 2,i
Further,
1+i 2+i
cof (B)1,i = (−1 ) minorB1,i = −(−1 ) minor(A)2,i = −cof (A)2,i (13.2.22)
n n
hence det B = − ∑ i=1
a2,i cof (A)2,i , and therefore det A = − det B = ∑ i=1
a2,i cof (A)2,i as desired.
The case when j>2 is very similar; we still have minor(B)1,i = minor(A)j,i but checking that
det B = − ∑
n
i=1
aj,i cof (A)j,i is slightly more involved.
Now the cofactor expansion along column j of A is equal to the cofactor expansion along row j of A , which is by T
the above result just proved equal to the cofactor expansion along row 1 of A , which is equal to the cofactor
T
expansion along column 1 of A. Thus the cofactor cofactor along any column yields the same result.
Finally, since det A = det A by Theorem [thm:T.T], we conclude that the cofactor expansion along row 1 of A is
T
equal to the cofactor expansion along row 1 of A , which is equal to the cofactor expansion along column 1 of A.
T
⎢5 1 2 3⎥
A =⎢ ⎥ (13.3.1)
⎢4 5 4 3⎥
⎣ ⎦
2 2 −4 5
Solution
We will use the properties of determinants outlined above to find det (A) . First, add −5 times the first row to the second
row. Then add −4 times the first row to the third row, and −2 times the first row to the fourth row. This yields the matrix
1 2 3 4
⎡ ⎤
⎢0 −9 −13 −17 ⎥
B =⎢ ⎥ (13.3.2)
⎢0 −3 −8 −13 ⎥
⎣ ⎦
0 −2 −10 −3
Notice that the only row operation we have done so far is adding a multiple of a row to another row. Therefore, by
Theorem [thm:addingmultipleofrow], det (B) = det (A) .
At this stage, you could use Laplace Expansion to find det (B) . However, we will continue with row operations to find an
even simpler matrix to work with.
Add −3 times the third row to the second row. By Theorem [thm:addingmultipleofrow] this does not change the value of
the determinant. Then, multiply the fourth row by −3. This results in the matrix
1 2 3 4
⎡ ⎤
⎢0 0 11 22 ⎥
C =⎢ ⎥ (13.3.3)
⎢0 −3 −8 −13 ⎥
⎣ ⎦
0 6 30 9
3
) det (C )
Since det (A) = det (B) , we now have that det (A) = (− ) det (C ) . Again, you could use Laplace Expansion here to
1
1 2 3 4
⎡ ⎤
⎢0 −3 −8 −13 ⎥
D =⎢ ⎥ (13.3.4)
⎢0 0 11 22 ⎥
⎣ ⎦
0 0 14 −17
3
) det (C ) = (
1
3
) det (D)
You could do more row operations or you could note that this can be easily expanded along the first column. Then,
expand the resulting 3 × 3 matrix also along the first column. This results in
∣ 11 22
∣
det (D) = 1 (−3) ∣ ∣ = 1485 (13.3.5)
∣ 14 −17 ∣
3
) (1485) = 495.
You can see that by using row operations, we can simplify a matrix to the point where Laplace Expansion involves only a few
steps. In Example [exa:findingdeterminant], we also could have continued until the matrix was in upper triangular form, and
taken the product of the entries on the main diagonal. Whenever computing the determinant, it is useful to consider all the
possible methods and tools.
Consider the next example.
⎢1 −3 2 1⎥
A =⎢ ⎥ (13.3.6)
⎢2 1 2 5⎥
⎣ ⎦
3 −4 1 2
Solution
Once again, we will simplify the matrix through row operations. Add −1 times the first row to the second row. Next add
−2 times the first row to the third and finally take −3 times the first row and add to the fourth row. This yields
1 2 3 2
⎡ ⎤
⎢0 −5 −1 −1 ⎥
B =⎢ ⎥ (13.3.7)
⎢0 −3 −4 1⎥
⎣ ⎦
0 −10 −8 −4
1 −8 3 2
⎡ ⎤
⎢0 0 −1 −1 ⎥
C =⎢ ⎥ (13.3.8)
⎢0 −8 −4 1⎥
⎣ ⎦
0 10 −8 −4
⎢0 0 −1 −1 ⎥
D =⎢ ⎥ (13.3.9)
⎢0 −8 −4 1⎥
⎣ ⎦
0 10 −8 −4
⎣ ⎦
10 −8 −4
Now since det (A) = det (D) , it follows that det (A) = −82 .
Remember that you can verify these answers by using Laplace Expansion on A . Similarly, if you first compute the determinant
using Laplace Expansion, you can use the row operation method to verify.
A =I.
−1
A
We now define a new matrix called the cofactor matrix of A . The cofactor matrix of A is the matrix whose ij th
entry is the
ij cofactor of A . The formal definition is as follows.
th
Note that cof (A) denotes the ij entry of the cofactor matrix.
ij
th
We will use the cofactor matrix to create a formula for the inverse of A . First, we define the adjugate of A to be the transpose
of the cofactor matrix. We can also call this matrix the classical adjoint of A , and we denote it by adj (A) .
In the specific case where A is a 2 × 2 matrix given by
a b
A =[ ] (13.4.1)
c d
In general, adj (A) can always be found by taking the transpose of the cofactor matrix of A . The following theorem provides
a formula for A using the determinant and adjugate of A .
−1
−1
1
A = adj (A) (13.4.4)
det (A)
Notice that the first formula holds for any n × n matrix A , and in the case A is invertible we actually have a formula for A −1
.
Consider the following example.
⎣ ⎦
1 2 1
First we will find the determinant of this matrix. Using Theorems [thm:switchingrows], [thm:multiplyingrowbyscalar],
and [thm:addingmultipleofrow], we can first simplify the matrix through row operations. First, add −3 times the first row
to the second row. Then add −1 times the first row to the third row to obtain
1 2 3
⎡ ⎤
B = ⎢0 −6 −8 ⎥ (13.4.7)
⎣ ⎦
0 0 −2
Now, we need to find adj (A) . To do so, first we will find the cofactor matrix of A . This is given by
−2 −2 6
⎡ ⎤
cof (A) = ⎢ 4 −2 0⎥ (13.4.8)
⎣ ⎦
2 8 −6
⎣− 5 2 1 ⎦
−
6 3 2
6
. The inverse is
therefore equal to
−1
1
A = adj (A) = 6 adj (A) (13.4.12)
(1/6)
We continue to calculate as follows. Here we show the 2 × 2 determinants needed to find the cofactors.
T
1 1 1 1 1 1
∣ − ∣ ∣− − ∣ ∣− ∣
⎡ 3 2 6 2 6 3 ⎤
∣ ∣ ∣ ∣ ∣ ∣
−
2 ∣ 1 ∣ ∣ 5 1 ∣ ∣ 5 2 ∣ ⎥
⎢ −
⎢ 3
∣ 2
∣ ∣− 6
−
2
∣ ∣− 6 3
∣⎥
⎢ ⎥
⎢ 1 1 1 1 ⎥
⎢ ∣ 0 ∣ ∣ ∣ ∣ 0∣⎥
−1 ∣ 2 ∣ ∣ 2 2 ∣ ∣ 2 ∣⎥
A = 6⎢ − − (13.4.13)
⎢ ∣ 2 1 ∣ ∣ 5 1 ∣ ∣ 5 2 ∣ ⎥
⎢ ∣ 3 − ∣ ∣− − ∣ ∣− ∣⎥
⎢ 2 6 2 6 3 ⎥
⎢ ⎥
1 1 1 1
⎢ ∣ 0 ∣ ∣ ∣ ∣ 0∣⎥
⎢ ∣ 2 ∣ ∣ 2 2 ∣ ∣ 2 ∣⎥
−
⎣ ∣ 1 1 ∣ ∣ 1 1 ∣ ∣ 1 1 ∣ ⎦
∣ − ∣ ∣− − ∣ ∣− ∣
3 2 6 2 6 3
The verification step is very important, as it is a simple way to check your work! If you multiply A A and AA and these −1 −1
products are not both equal to I , be sure to go back and double check each step. One common error is to forget to take the
transpose of the cofactor matrix, so be sure to complete this step.
We will now prove Theorem [thm:inverseanddeterminant].
(of Theorem [thm:inverseanddeterminant]) Recall that the (i, j)-entry of adj(A) is equal to cof (A) . Thus the ji (i, j) -
entry of B = A ⋅ adj(A) is :
n n
k=1 k=1
By the cofactor expansion theorem, we see that this expression for B is equal to the determinant of the matrix ij
and thus det (A) ≠ 0 . Equivalently, if det (A) = 0 , then A is not invertible.
Finally if det (A) ≠ 0 , then the above formula shows that A is invertible and that:
1
−1
A = adj (A) (13.4.20)
det (A)
This method for finding the inverse of A is useful in many contexts. In particular, it is useful with complicated matrices where
the entries are functions, rather than numbers.
Consider the following example.
−1
Show that A(t) exists and then find it.
Solution
−1
First note det (A (t)) = e t 2
(cos t + sin
2
t) = e
t
≠0 so A(t) exists.
The cofactor matrix is
1 0 0
⎡ ⎤
t t
C (t) = ⎢ 0 e cos t e sin t ⎥ (13.4.22)
⎣ t t ⎦
0 −e sin t e cos t
Cramer’s Rule
Another context in which the formula given in Theorem [thm:inverseanddeterminant] is important is Cramer’s Rule. Recall
that we can represent a system of linear equations in the form AX = B , where the solutions to this system are given by X.
Cramer’s Rule gives a formula for the solutions X in the special case that A is a square invertible matrix. Note this rule does
AX = B
−1 −1
A (AX) = A B
−1 −1
(A A) X = A B
−1
IX = A B
−1
X = A B
Hence, the solutions X to the system are given by X = A B . Since we assume that A
−1 −1
exists, we can use the formula for
A
−1
given above. Substituting this formula into the equation for X, we have
−1
1
X =A B = adj (A) B (13.4.24)
det (A)
Let x be the i
i
th
entry of X and b be the j
j
th
entry of B . Then this equation becomes
n n
−1
1
xi = ∑ [ aij ] bj = ∑ adj(A) bj (13.4.25)
ij
det (A)
j=1 j=1
∗ ⋯ b1 ⋯ ∗
⎡ ⎤
1
xi = det ⎢
⎢
⎥ (13.4.26)
⋮ ⋮ ⋮ ⎥
det (A)
⎣ ⎦
∗ ⋯ bn ⋯ ∗
T
where here the i column of A is replaced with the column vector [b
th
1 ⋯ ⋅, bn ] . The determinant of this modified matrix is
taken and divided by det (A) . This formula is known as Cramer’s rule.
We formally define this method now.
b1
⎡ ⎤
B =⎢ ⎥
⎢ ⋮ ⎥ (13.4.28)
⎣ ⎦
bn
Example 13.4.1
Find x, y, z if
⎢3 2 1 ⎥⎢ y ⎥ = ⎢2 ⎥ (13.4.29)
⎣ ⎦⎣ ⎦ ⎣ ⎦
2 −3 2 z 3
Solution
We will use method outlined in Procedure [proc:cramersrule] to find the values for x, y, z which give the solution to this
system. Let
1
⎡ ⎤
B = ⎢2 ⎥ (13.4.30)
⎣ ⎦
3
where A is the matrix obtained from replacing the first column of A with B .
1
Hence, A is given by
1
1 2 1
⎡ ⎤
A1 = ⎢ 2 2 1⎥ (13.4.32)
⎣ ⎦
3 −3 2
Therefore,
∣1 2 1∣
∣ ∣
2 2 1
∣ ∣
det (A1 ) ∣3 −3 2∣ 1
x = = = (13.4.33)
det (A) ∣1 2 1∣ 2
∣ ∣
3 2 1
∣ ∣
∣2 −3 2∣
Similarly, to find y we construct A by replacing the second column of A with B . Hence, A is given by
2 2
1 1 1
⎡ ⎤
A2 = ⎢ 3 2 1⎥ (13.4.34)
⎣ ⎦
2 3 2
Therefore,
∣1 1 1∣
∣ ∣
3 2 1
∣ ∣
det (A2 ) ∣2 3 2∣ 1
y = = =− (13.4.35)
det (A) ∣1 2 1∣ 7
∣ ∣
3 2 1
∣ ∣
∣2 −3 2∣
1 2 1
⎡ ⎤
A3 = ⎢ 3 2 2⎥ (13.4.36)
⎣ ⎦
2 −3 3
Cramer’s Rule gives you another tool to consider when solving a system of linear equations.
We can also use Cramer’s Rule for systems of non linear equations. Consider the following system where the matrix A
Example 13.4.1
Solve for z if
1 0 0 x 1
⎡ ⎤⎡ ⎤ ⎡ ⎤
t t
⎢0 e cos t e sin t ⎥ ⎢ y ⎥ = ⎢ t ⎥ (13.4.38)
⎣ t t ⎦⎣ ⎦ ⎣ 2 ⎦
0 −e sin t e cos t z t
Solution
We are asked to find the value of z in the solution. We will solve using Cramer’s rule. Thus
∣1 0 1 ∣
∣ t ∣
0 e cos t t
∣ ∣
t 2
∣0 −e sin t t ∣
−t
z = \vspace.05in = t ((cos t) t + sin t) e (13.4.39)
∣1 0 0 ∣
∣ t t
∣
0 e cos t e sin t
∣ ∣
t t
∣0 −e sin t e cos t ∣
Polynomial Interpolation
In studying a set of data that relates variables x and y , it may be the case that we can use a polynomial to “fit” to the data. If
such a polynomial can be established, it can be used to estimate values of x and y which have not been provided.
Consider the following example.
Polynomial Interpolationpolynomialinterpolation Given data points (1, 4), (2, 9), (3, 12), find an interpolating polynomial
p(x) of degree at most 2 and then estimate the value corresponding to x = . 1
such that p(1) = 4, p(2) = 9 and p(3) = 12 . To find this polynomial, substitute the known values in for x and solve for
r , r , and r .
0 1 2
p(1) = r0 + r1 + r2 = 4
p(2) = r0 + 2 r1 + 4 r2 = 9
p(3) = r0 + 3 r1 + 9 r2 = 12
⎢ 1 2 4 9 ⎥ (13.4.41)
⎢ ⎥
⎣ ⎦
1 3 9 12
⎡ 1 0 0 −3 ⎤
⎢ 0 1 0 8 ⎥ (13.4.42)
⎢ ⎥
⎣ 0 0 1 −1 ⎦
Therefore the solution to the system is r 0 = −3, r1 = 8, r2 = −1 and the required interpolating polynomial is
2
p(x) = −3 + 8x − x (13.4.43)
2
, we calculate p( 1
2
) :
1 1 1 2
p( ) = −3 + 8( )−( )
2 2 2
1
= −3 + 4 −
4
3
=
4
This procedure can be used for any number of data points, and any degree of polynomial. The steps are outlined below.
Finding an Interpolating Polynomialfindinginterpolynomial Suppose that values of x and corresponding values of y are given,
such that the actual relationship between x and y is unknown. Then, values of y can be estimated using an interpolating
polynomial p(x). If given x , . . . , x and the corresponding y , . . . , y , the procedure to find p(x) is as follows:
1 n 1 n
2 n−1
r0 + r1 x2 + r2 x +. . . +rn−1 x = y2
2 2
(13.4.45)
2 n−1
r0 + r1 xn + r2 xn +. . . +rn−1 xn = yn
⎢ 2 n−1 ⎥
⎢ 1 x2 x ⋯ x y2 ⎥
2 2
⎢ ⎥ (13.4.46)
⎢ ⎥
⎢ ⎥
⎢ ⋮ ⋮ ⋮ ⋮ ⋮ ⎥
2 n−1
⎣ 1 xn xn ⋯ xn yn ⎦
4. Solving this system will result in a unique solution r0 , r1 , ⋯ , rn−1 . Use these values to construct p(x) , and estimate the
value of p(a) for any x = a .
This procedure motivates the following theorem.
Polynomial Interpolationpolynomialinterpolation Given n data points (x , y ), (x , y 1 1 2 2 ), ⋯ , (xn , yn ) with the distinct,
xi
p(0) = r0 = 1
p(1) = r0 + r1 + r2 + r3 = 2
p(3) = r0 + 3 r1 + 9 r2 + 27 r3 = 22
p(5) = r0 + 5 r1 + 25 r2 + 125 r3 = 66
⎡ 1 0 0 0 1 ⎤
⎢ 1 1 1 1 2 ⎥
⎢ ⎥ (13.4.48)
⎢ ⎥
⎢ 1 3 9 27 22 ⎥
⎣ 1 5 25 125 66 ⎦
⎡ 1 0 0 0 1 ⎤
⎢ 0 1 0 0 −2 ⎥
⎢ ⎥ (13.4.49)
⎢ ⎥
⎢ 0 0 1 0 3 ⎥
⎣ ⎦
0 0 0 1 0
1 9/16/2020
14.1: Algebraic Considerations
Learning Objectives
Develop the abstract concept of a vector space through axioms.
Deduce basic properties of vector spaces.
Use the vector space axioms to determine if a set and its operations constitute a vector space.
In this section we consider the idea of an abstract vector space. A vector space is something which has two operations
satisfying the following vector space axioms.
In the following definition we define two operations; vector addition, denoted by + and scalar multiplication denoted by
placing the scalar next to the vector. A vector space need not have usual operations, and for this reason the operations will
always be given in the definition of the vector space. The below axioms for addition (written +) and scalar multiplication must
hold for however addition and scalar multiplication are defined for the vector space.
It is important to note that we have seen much of this content before, in terms of R . We will prove in this section that R is
n n
an example of a vector space and therefore all discussions in this chapter will pertain to R . While it may be useful to consider
n
all concepts of this chapter in terms of R , it is also important to understand that these concepts apply to all vector spaces.
n
In the following definition, we will choose scalars a, b to be real numbers and are thus dealing with real vector spaces.
However, we could also choose scalars which are complex numbers. In this case, we would call the vector space V complex.
⃗ = 0⃗
v ⃗ + (−v) (14.1.5)
vectorspaceaxiomsaddition
⃗ = (ab)v ⃗
a (b v) (14.1.9)
1 v ⃗ = v ⃗ (14.1.10)
Consider the following example, in which we prove that R is in fact a vector space.
n
Example 14.1.1 : R n
Rnvectorspace R , under the usual operations of vector addition and scalar multiplication, is a vector space.
n
Solution
To show that R is a vector space, we need to show that the above axioms hold. Let
n
x⃗ , y ,⃗ z ⃗ be vectors in R
n
. We first
prove the axioms for vector addition.
To show that R^n is closed under addition, we must show that the sum of two vectors in R^n is also in R^n. The sum is
\[ \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}+\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n\end{bmatrix}=\begin{bmatrix} x_1+y_1\\ x_2+y_2\\ \vdots\\ x_n+y_n\end{bmatrix} \]  (14.1.11)
The sum is a vector with n entries, showing that it is in R^n. Hence R^n is closed under vector addition.
Next, addition is commutative, because addition of real numbers is commutative in each entry:
\[ \vec{x}+\vec{y}=\begin{bmatrix} x_1+y_1\\ x_2+y_2\\ \vdots\\ x_n+y_n\end{bmatrix}=\begin{bmatrix} y_1+x_1\\ y_2+x_2\\ \vdots\\ y_n+x_n\end{bmatrix}=\vec{y}+\vec{x} \]
Addition is also associative, because addition of real numbers is associative in each entry:
\[ (\vec{x}+\vec{y})+\vec{z}=\begin{bmatrix} (x_1+y_1)+z_1\\ (x_2+y_2)+z_2\\ \vdots\\ (x_n+y_n)+z_n\end{bmatrix}=\begin{bmatrix} x_1+(y_1+z_1)\\ x_2+(y_2+z_2)\\ \vdots\\ x_n+(y_n+z_n)\end{bmatrix}=\vec{x}+(\vec{y}+\vec{z}) \]
Next, we show the existence of an additive identity. Let 0⃗ denote the vector in R^n whose entries are all 0. Then
\[ \vec{x}+\vec{0}=\begin{bmatrix} x_1+0\\ x_2+0\\ \vdots\\ x_n+0\end{bmatrix}=\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}=\vec{x} \]
Next, we prove the existence of an additive inverse. Let −x⃗ be the vector with entries −x_1, −x_2, ⋯, −x_n. Then
\[ \vec{x}+(-\vec{x})=\begin{bmatrix} x_1-x_1\\ x_2-x_2\\ \vdots\\ x_n-x_n\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ \vdots\\ 0\end{bmatrix}=\vec{0} \]
We now verify the axioms for scalar multiplication. First, R^n is closed under scalar multiplication, since ax⃗ is again a vector with n entries:
\[ a\vec{x}=a\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}=\begin{bmatrix} ax_1\\ ax_2\\ \vdots\\ ax_n\end{bmatrix} \]  (14.1.12)
Next, scalar multiplication distributes over vector addition:
\[ a(\vec{x}+\vec{y})=\begin{bmatrix} a(x_1+y_1)\\ a(x_2+y_2)\\ \vdots\\ a(x_n+y_n)\end{bmatrix}=\begin{bmatrix} ax_1+ay_1\\ ax_2+ay_2\\ \vdots\\ ax_n+ay_n\end{bmatrix}=a\vec{x}+a\vec{y} \]
Scalar multiplication also distributes over scalar addition:
\[ (a+b)\vec{x}=\begin{bmatrix} (a+b)x_1\\ (a+b)x_2\\ \vdots\\ (a+b)x_n\end{bmatrix}=\begin{bmatrix} ax_1+bx_1\\ ax_2+bx_2\\ \vdots\\ ax_n+bx_n\end{bmatrix}=a\vec{x}+b\vec{x} \]
Next, a(bx⃗) = (ab)x⃗:
\[ a(b\vec{x})=\begin{bmatrix} a(bx_1)\\ a(bx_2)\\ \vdots\\ a(bx_n)\end{bmatrix}=\begin{bmatrix} (ab)x_1\\ (ab)x_2\\ \vdots\\ (ab)x_n\end{bmatrix}=(ab)\vec{x} \]
Finally,
\[ 1\vec{x}=\begin{bmatrix} 1x_1\\ 1x_2\\ \vdots\\ 1x_n\end{bmatrix}=\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}=\vec{x} \]
By the above proofs, it is clear that R^n satisfies the vector space axioms. Hence R^n is a vector space under the usual operations of vector addition and scalar multiplication.
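The verification above is symbolic and holds for all vectors. As an aside (not part of the original text), one can also spot-check the axioms numerically for particular vectors; such a check is only an illustration, not a proof. A minimal Python/NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    x, y, z = rng.random(n), rng.random(n), rng.random(n)
    a, b = 2.0, -3.0

    # Commutativity and associativity of addition.
    assert np.allclose(x + y, y + x)
    assert np.allclose((x + y) + z, x + (y + z))

    # Additive identity and inverse.
    zero = np.zeros(n)
    assert np.allclose(x + zero, x)
    assert np.allclose(x + (-x), zero)

    # Scalar multiplication axioms.
    assert np.allclose(a * (x + y), a * x + a * y)
    assert np.allclose((a + b) * x, a * x + b * x)
    assert np.allclose(a * (b * x), (a * b) * x)
    assert np.allclose(1.0 * x, x)
    print("axioms hold for these sample vectors")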
Let P_2 be the set of all polynomials of degree at most two, with addition defined as the usual addition of polynomials, and scalar multiplication the usual multiplication of a polynomial by a number. Then P_2 is a vector space, where
\[ P_2=\{a_2x^2+a_1x+a_0 \mid a_i\in\mathbb{R} \text{ for all } i\} \]  (14.1.13)
To show that P_2 is a vector space, we verify the axioms. Let p(x), q(x), r(x) be polynomials in P_2 and let a, b, c be real numbers. Write p(x) = p_2 x^2 + p_1 x + p_0, q(x) = q_2 x^2 + q_1 x + q_0, and r(x) = r_2 x^2 + r_1 x + r_0.
We first prove that addition of polynomials in P_2 is closed. For two polynomials in P_2 we need to show that their sum is also a polynomial in P_2. From the definition of P_2, a polynomial is contained in P_2 if it is of degree at most 2 or the zero polynomial. Now
\[ p(x)+q(x)=p_2x^2+p_1x+p_0+q_2x^2+q_1x+q_0=(p_2+q_2)x^2+(p_1+q_1)x+(p_0+q_0) \]
The sum is a polynomial of degree at most 2 and therefore is in P_2. It follows that P_2 is closed under addition.
We need to show that addition is commutative, that is p(x) + q(x) = q(x) + p(x):
\[ p(x)+q(x)=(p_2+q_2)x^2+(p_1+q_1)x+(p_0+q_0)=(q_2+p_2)x^2+(q_1+p_1)x+(q_0+p_0)=q(x)+p(x) \]
Next, we need to show that addition is associative, that is (p(x) + q(x)) + r(x) = p(x) + (q(x) + r(x)):
\[ (p(x)+q(x))+r(x)=(p_2+q_2+r_2)x^2+(p_1+q_1+r_1)x+(p_0+q_0+r_0)=p(x)+(q(x)+r(x)) \]
Next, we must prove that there exists an additive identity. Let 0(x) = 0x^2 + 0x + 0. Then
\[ p(x)+0(x)=(p_2+0)x^2+(p_1+0)x+(p_0+0)=p_2x^2+p_1x+p_0=p(x) \]
Next, we prove the existence of an additive inverse. Let −p(x) = −p_2 x^2 − p_1 x − p_0. Then
\[ p(x)+(-p(x))=(p_2-p_2)x^2+(p_1-p_1)x+(p_0-p_0)=0x^2+0x+0=0(x) \]
Hence an additive inverse −p(x) exists such that p(x) + (−p(x)) = 0(x).
We now need to verify the axioms related to scalar multiplication.
First we prove that P_2 is closed under scalar multiplication. That is, we show that ap(x) is also a polynomial of degree at most 2:
\[ ap(x)=a(p_2x^2+p_1x+p_0)=ap_2x^2+ap_1x+ap_0 \]  (14.1.14)
Next, a(p(x) + q(x)) = ap(x) + aq(x):
\[ a(p(x)+q(x))=a\bigl((p_2+q_2)x^2+(p_1+q_1)x+(p_0+q_0)\bigr)=(ap_2+aq_2)x^2+(ap_1+aq_1)x+(ap_0+aq_0)=ap(x)+aq(x) \]
Next, (a + b)p(x) = ap(x) + bp(x):
\[ (a+b)p(x)=(a+b)p_2x^2+(a+b)p_1x+(a+b)p_0=ap_2x^2+ap_1x+ap_0+bp_2x^2+bp_1x+bp_0=ap(x)+bp(x) \]
Next, a(bp(x)) = (ab)p(x):
\[ a(bp(x))=a(bp_2x^2+bp_1x+bp_0)=(ab)(p_2x^2+p_1x+p_0)=(ab)p(x) \]
Finally, 1p(x) = 1p_2 x^2 + 1p_1 x + 1p_0 = p_2 x^2 + p_1 x + p_0 = p(x).
Since the above axioms hold, we know that P_2 as described above is a vector space.
Another important example of a vector space is the set of all matrices of the same size.
You can see that the sum is a 2 × 3 matrix, so it is in M_{2,3}. It follows that M_{2,3} is closed under matrix addition.
The remaining axioms regarding matrix addition follow from properties of matrix addition. Therefore M_{2,3} satisfies the axioms of addition.
This is a 2 × 3 matrix in M_{2,3}, which proves that the set is closed under scalar multiplication.
The remaining axioms of scalar multiplication follow from properties of scalar multiplication of matrices. Therefore M_{2,3} satisfies the axioms of scalar multiplication.
In conclusion, M_{2,3} satisfies the required axioms and is a vector space.
While here we proved that the set of all 2 × 3 matrices is a vector space, there is nothing special about this choice of matrix size. In fact if we instead consider M_{m,n}, the set of all m × n matrices, then M_{m,n} is a vector space under the operations of matrix addition and scalar multiplication.
For instance, with the addition defined for V we obtain
\[ A + B = A = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \quad\text{while}\quad B + A = B = \begin{pmatrix} 0 & 0 & 0\\ 1 & 0 & 0 \end{pmatrix} \]
It follows that A + B ≠ B + A. Therefore addition as defined for V is not commutative and V fails this axiom.
Hence V is not a vector space.
Example 14.1.1
Solution
To verify that F_S is a vector space, we must prove the axioms, beginning with those for addition. Let f, g, h be functions in F_S.
First we check that addition is closed. For functions f, g defined on the set S, their sum given by
\[ (f+g)(x) = f(x) + g(x) \]
is again a function defined on S. Hence this sum is in F_S and F_S is closed under addition.
Next we check that addition is commutative. For any x in S, (f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x). Since x is arbitrary, f + g = g + f.
Next we check the associative law of addition:
\[ ((f+g)+h)(x) = (f(x)+g(x))+h(x) = f(x)+(g(x)+h(x)) = (f+(g+h))(x) \]
and so (f + g) + h = f + (g + h).
Next we check for an additive identity. Let 0 denote the function which is given by 0(x) = 0. Then this is an additive identity because
\[ (f+0)(x) = f(x) + 0(x) = f(x) \]  (14.1.20)
and so f + 0 = f.
Finally, check for an additive inverse. Let −f be the function which satisfies (−f)(x) = −f(x). Then
\[ (f+(-f))(x) = f(x) + (-f)(x) = f(x) - f(x) = 0 \]  (14.1.21)
Hence f + (−f) = 0.
Now, check the axioms for scalar multiplication.
We first need to check that F_S is closed under scalar multiplication. For a function f(x) in F_S and real number a, the function (af)(x) = a(f(x)) is again a function defined on the set S. Hence a(f(x)) is in F_S and F_S is closed under scalar multiplication.
Next,
\[ ((a+b)f)(x) = (a+b)f(x) = af(x) + bf(x) = (af+bf)(x) \]  (14.1.22)
and so (a + b)f = af + bf.
Similarly,
\[ (a(f+g))(x) = a(f+g)(x) = a(f(x)+g(x)) = af(x) + ag(x) = (af+ag)(x) \]  (14.1.23)
and so a(f + g) = af + ag.
Also,
\[ ((ab)f)(x) = (ab)f(x) = a(bf(x)) = (a(bf))(x) \]  (14.1.25)
so (ab)f = a(bf).
Finally, (1f)(x) = 1f(x) = f(x), so 1f = f.
It follows that F_S satisfies all the required axioms and is a vector space.
Proof
When we say that the additive identity, 0⃗ , is unique, we mean that if a vector acts like the additive identity, then it is
the additive identity. To prove this uniqueness, we want to show that another vector which acts like the additive
identity is actually equal to 0⃗ .
Suppose 0⃗′ is a vector which also acts like an additive identity, so that 0⃗′ + x⃗ = x⃗ for every x⃗ in V. Taking x⃗ = 0⃗ gives 0⃗′ + 0⃗ = 0⃗. Now, for 0⃗ the additive identity given above in the axioms, we have that
\[ \vec{0}' + \vec{0} = \vec{0}' \]  (14.1.27)
Hence 0⃗′ = 0⃗′ + 0⃗ = 0⃗. This says that if a vector acts like an additive identity (such as 0⃗′), it in fact equals 0⃗. This proves the uniqueness of 0⃗.
When we say that the additive inverse, −x⃗, is unique, we mean that if a vector acts like the additive inverse, then it is the additive inverse. Suppose that y⃗ acts like an additive inverse:
\[ \vec{x} + \vec{y} = \vec{0} \]  (14.1.29)
Then
\[ \vec{y} = \vec{y} + \vec{0} = \vec{y} + (\vec{x} + (-\vec{x})) = (\vec{y} + \vec{x}) + (-\vec{x}) = \vec{0} + (-\vec{x}) = -\vec{x} \]
Thus if y⃗ acts like the additive inverse, it is equal to the additive inverse −x⃗. This proves the uniqueness of −x⃗.
This next statement claims that for all vectors x⃗, scalar multiplication by the scalar 0 equals the zero vector 0⃗. Consider the following, using the distributive law:
\[ 0\vec{x} = (0+0)\vec{x} = 0\vec{x} + 0\vec{x} \]
We use a small trick here: add −0x⃗ to both sides. This gives
\[ 0\vec{x} + (-0\vec{x}) = (0\vec{x} + 0\vec{x}) + (-0\vec{x}), \quad\text{that is,}\quad \vec{0} = 0\vec{x} \]
This proves that scalar multiplication of any vector by 0 results in the zero vector 0⃗.
Finally, we wish to show that scalar multiplication of −1 and any vector x⃗ results in the additive inverse of that vector, −x⃗. Recall from 2. above that the additive inverse is unique. Consider the following:
\[ (-1)\vec{x} + \vec{x} = (-1)\vec{x} + 1\vec{x} = (-1+1)\vec{x} = 0\vec{x} = \vec{0} \]
By the uniqueness of the additive inverse, it follows that (−1)x⃗ = −x⃗.
Theorem 14.1.1
Let V be a vector space. Then v ⃗ + w⃗ = v ⃗ + z ⃗ implies that w⃗ = z ⃗ for all v,⃗ w⃗ , z ⃗ ∈ V
Proof
The proof follows from the vector space axioms, in particular the existence of an additive inverse (−v⃗). The proof is left as an exercise to the reader.
In this section we will examine the concept of spanning introduced earlier in terms of R^n. Here, we will discuss these concepts in terms of abstract vector spaces.
X ⊆Y (14.2.1)
In particular, we often speak of subsets of a vector space, such as X ⊆ V . By this we mean that every element in the set X is
contained in the vector space V .
A vector v⃗ is a linear combination of the vectors v⃗_1, ⋯, v⃗_n if there exist scalars c_1, ⋯, c_n such that
\[ \vec{v} = c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n \]  (14.2.2)
The span of these vectors is the set of all such linear combinations,
\[ \mathrm{span}\{\vec{v}_1, \cdots, \vec{v}_n\} = \left\{ \sum_{i=1}^{n} c_i\vec{v}_i : c_i \in \mathbb{R} \right\} \]  (14.2.3)
When we say that a vector w⃗ is in span{v⃗_1, ⋯, v⃗_n} we mean that w⃗ can be written as a linear combination of the v⃗_i.
Let A = \(\begin{bmatrix} 1 & 0\\ 0 & 2\end{bmatrix}\) and B = \(\begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}\), and consider
\[ \mathrm{span}\{M_1, M_2\} = \mathrm{span}\left\{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \right\} \]  (14.2.4)
Solution
First consider A. We want to see if scalars s, t can be found such that A = sM_1 + tM_2:
\[ \begin{bmatrix} 1 & 0\\ 0 & 2\end{bmatrix} = s\begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix} + t\begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \]  (14.2.5)
Comparing entries gives s = 1 and t = 2, so A = M_1 + 2M_2 and A is in span{M_1, M_2}.
Next consider B. Here we would need
\[ \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix} = s\begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix} + t\begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \]  (14.2.6)
Clearly no values of s and t can be found such that this equation holds, since every matrix sM_1 + tM_2 has zero off-diagonal entries. Therefore B is not in span{M_1, M_2}.
\[ \begin{aligned} 4a + b &= 7\\ a - 2b &= 4\\ 3b &= -3 \end{aligned} \]
You can verify that a = 2, b = −1 satisfies this system of equations. This means that we can write p(x) as follows:
\[ 7x^2 + 4x - 3 = 2(4x^2 + x) - (x^2 - 2x + 3) \]  (14.2.8)
Solution
Let p(x) = ax^2 + bx + c be an arbitrary polynomial in P_2. To show that S is a spanning set, it suffices to show that p(x) can be written as a linear combination of the elements of S. In other words, can we find r, s, t such that
\[ p(x) = ax^2 + bx + c = r(x^2 + 1) + s(x - 2) + t(2x^2 - x) \]  (14.2.9)
If a solution r, s, t can be found, then this shows that for any such polynomial p(x), it can be written as a linear combination of the above polynomials and S is a spanning set. Expanding,
\[ ax^2 + bx + c = rx^2 + r + sx - 2s + 2tx^2 - tx = (r + 2t)x^2 + (s - t)x + (r - 2s) \]
Matching coefficients gives
\[ \begin{aligned} a &= r + 2t\\ b &= s - t\\ c &= r - 2s \end{aligned} \]
To check that a solution exists, set up the augmented matrix and row reduce:
\[ \left[\begin{array}{ccc|c} 1 & 0 & 2 & a\\ 0 & 1 & -1 & b\\ 1 & -2 & 0 & c \end{array}\right] \to \cdots \to \left[\begin{array}{ccc|c} 1 & 0 & 0 & \tfrac{1}{2}a + b + \tfrac{1}{2}c\\ 0 & 1 & 0 & \tfrac{1}{4}a + \tfrac{1}{2}b - \tfrac{1}{4}c\\ 0 & 0 & 1 & \tfrac{1}{4}a - \tfrac{1}{2}b - \tfrac{1}{4}c \end{array}\right] \]  (14.2.10)
Clearly a solution exists for any choice of a, b, c. Hence S is a spanning set for P_2.
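The row reduction above can be reproduced symbolically. The following is an illustrative Python/SymPy sketch (not part of the original text); it carries the symbolic right-hand side a, b, c through the reduction.

    import sympy as sp

    a, b, c = sp.symbols('a b c')

    # Augmented matrix for r(x^2+1) + s(x-2) + t(2x^2-x) = ax^2 + bx + c,
    # with unknowns r, s, t and right-hand side a, b, c.
    M = sp.Matrix([[1, 0, 2, a],
                   [0, 1, -1, b],
                   [1, -2, 0, c]])

    R, pivots = M.rref()
    sp.pprint(R)
    # The pivots sit in the first three columns, so r, s, t exist for every
    # choice of a, b, c, confirming that S spans P2.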
A set of vectors {v⃗_1, ⋯, v⃗_n} is linearly independent if
\[ \sum_{i=1}^{n} a_i\vec{v}_i = \vec{0} \quad\text{implies}\quad a_1 = \cdots = a_n = 0 \]  (14.3.1)
Consider the set
\[ S = \{x^2 + 2x - 1,\; 2x^2 - x + 3\} \]  (14.3.2)
Suppose a(x^2 + 2x − 1) + b(2x^2 − x + 3) = 0x^2 + 0x + 0 for scalars a, b. Then
\[ \begin{aligned} ax^2 + 2ax - a + 2bx^2 - bx + 3b &= 0x^2 + 0x + 0\\ (a + 2b)x^2 + (2a - b)x + (-a + 3b) &= 0x^2 + 0x + 0 \end{aligned} \]
It follows that
\[ \begin{aligned} a + 2b &= 0\\ 2a - b &= 0\\ -a + 3b &= 0 \end{aligned} \]
The augmented matrix and its reduced row-echelon form are
\[ \left[\begin{array}{cc|c} 1 & 2 & 0\\ 2 & -1 & 0\\ -1 & 3 & 0 \end{array}\right] \to \cdots \to \left[\begin{array}{cc|c} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{array}\right] \]  (14.3.4)
so a = b = 0 is the only solution and S is linearly independent.
Now consider the set
\[ S = \left\{ \begin{bmatrix} -1\\ 0\\ 1\end{bmatrix}, \begin{bmatrix} 1\\ 1\\ 1\end{bmatrix}, \begin{bmatrix} 1\\ 3\\ 5\end{bmatrix} \right\} \]  (14.3.5)
Solution
To determine if S is linearly independent, we look for solutions to
\[ a\begin{bmatrix} -1\\ 0\\ 1\end{bmatrix} + b\begin{bmatrix} 1\\ 1\\ 1\end{bmatrix} + c\begin{bmatrix} 1\\ 3\\ 5\end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\end{bmatrix} \]
Notice that this equation has nontrivial solutions, for example a = 2, b = 3 and c = −1. Therefore S is dependent.
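As a quick numerical confirmation (an aside, not part of the original text), the nontrivial solution a = 2, b = 3, c = −1 can be checked directly in Python/NumPy:

    import numpy as np

    v1 = np.array([-1, 0, 1])
    v2 = np.array([1, 1, 1])
    v3 = np.array([1, 3, 5])

    # The coefficients a = 2, b = 3, c = -1 give a nontrivial combination equal
    # to the zero vector, so {v1, v2, v3} is linearly dependent.
    print(2 * v1 + 3 * v2 - 1 * v3)   # [0 0 0]

    # Equivalently, the matrix with these vectors as columns has rank 2 < 3.
    print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))   # 2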
Revisit Example [exa:dependent] with this in mind. Notice that we can write one of the three vectors as a linear combination of the others:
\[ \begin{bmatrix} 1\\ 3\\ 5\end{bmatrix} = 2\begin{bmatrix} -1\\ 0\\ 1\end{bmatrix} + 3\begin{bmatrix} 1\\ 1\\ 1\end{bmatrix} \]
Solution
To determine if R is linearly independent, we write
\[ a(2\vec{u} - \vec{w}) + b(\vec{w} + \vec{v}) + c\left(3\vec{v} + \tfrac{1}{2}\vec{u}\right) = \vec{0} \]  (14.3.8)
If the set is linearly independent, the only solution will be a = b = c = 0. We proceed as follows:
\[ \begin{aligned} a(2\vec{u} - \vec{w}) + b(\vec{w} + \vec{v}) + c\left(3\vec{v} + \tfrac{1}{2}\vec{u}\right) &= \vec{0}\\ 2a\vec{u} - a\vec{w} + b\vec{w} + b\vec{v} + 3c\vec{v} + \tfrac{1}{2}c\vec{u} &= \vec{0}\\ \left(2a + \tfrac{1}{2}c\right)\vec{u} + (b + 3c)\vec{v} + (-a + b)\vec{w} &= \vec{0} \end{aligned} \]
We know that the set S = {u⃗, v⃗, w⃗} is linearly independent, which implies that the coefficients in the last line of this equation must all equal 0. In other words:
\[ \begin{aligned} 2a + \tfrac{1}{2}c &= 0\\ b + 3c &= 0\\ -a + b &= 0 \end{aligned} \]
The augmented matrix and its reduced row-echelon form are
\[ \left[\begin{array}{ccc|c} 2 & 0 & \tfrac{1}{2} & 0\\ 0 & 1 & 3 & 0\\ -1 & 1 & 0 & 0 \end{array}\right] \to \cdots \to \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{array}\right] \]  (14.3.9)
so the only solution is a = b = c = 0 and R is linearly independent.
The following theorem was discussed earlier in terms of R^n. We consider it here in the general case.
Consider the span of a linearly independent set of vectors. Suppose we take a vector which is not in this span and add it to the set. The following lemma claims that the resulting set,
\[ \{\vec{u}_1, \cdots, \vec{u}_k, \vec{v}\} \]  (14.3.10)
is still linearly independent.
Proof
Suppose \(\sum_{i=1}^{k} c_i\vec{u}_i + d\vec{v} = \vec{0}\). It is required to verify that d = 0 and that each c_i = 0. If d ≠ 0, then we could divide by d and solve for v⃗:
\[ \vec{v} = -\sum_{i=1}^{k} \left(\frac{c_i}{d}\right)\vec{u}_i \]  (14.3.11)
contrary to the assumption that v⃗ is not in the span of the u⃗_i. Therefore, d = 0. But then \(\sum_{i=1}^{k} c_i\vec{u}_i = \vec{0}\) and the linear independence of {u⃗_1, ⋯, u⃗_k} implies each c_i = 0 also.
Consider the set
\[ S = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix} \right\} \]  (14.3.12)
and the enlarged set
\[ R = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix} \right\} \]  (14.3.13)
We claim that
\[ \begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix} \notin \mathrm{span}\left\{ \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix} \right\} \]  (14.3.14)
Write
\[ \begin{bmatrix} 0 & 0\\ 1 & 0 \end{bmatrix} = a\begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix} = \begin{bmatrix} a & 0\\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & b\\ 0 & 0 \end{bmatrix} = \begin{bmatrix} a & b\\ 0 & 0 \end{bmatrix} \]
Clearly there are no possible a, b to make this equation true. Hence the new matrix does not lie in the span of the matrices in S. By Lemma [lem:addinglinearlyindependent], R is also linearly independent.
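As an aside (not part of the original text), one convenient way to double-check independence claims about matrices is to flatten each 2 × 2 matrix into a coordinate vector in R^4 and compute a rank. A minimal Python/NumPy sketch:

    import numpy as np

    M1 = np.array([[1, 0], [0, 0]])
    M2 = np.array([[0, 1], [0, 0]])
    M3 = np.array([[0, 0], [1, 0]])

    # Flatten each matrix to a vector in R^4 and stack them as columns.
    A = np.column_stack([M.flatten() for M in (M1, M2, M3)])

    # Rank 3 means the three matrices of R are linearly independent in M_22.
    print(np.linalg.matrix_rank(A))   # 3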
In this section we will examine the concept of subspaces introduced earlier in terms of R^n. Here, we will discuss these concepts in terms of abstract vector spaces.
The span of a set of vectors as described in Definition [def:span] is an example of a subspace. The following fundamental result says that subspaces are subsets of a vector space which are
themselves vector spaces.
Proof
Suppose first that W is a subspace. It is obvious that all the algebraic laws hold on W because it is a subset of V and they hold on V . Thus u⃗ + v ⃗ = v ⃗ + u⃗ along with the other
axioms. Does W contain 0⃗ ? Yes because it contains 0u⃗ = 0⃗ . See Theorem [thm:axiomuniqueness].
Are the operations of V defined on W ? That is, when you add vectors of W do you get a vector in W ? When you multiply a vector in W by a scalar, do you get a vector in W ? Yes.
This is contained in the definition. Does every vector in W have an additive inverse? Yes by Theorem [thm:axiomuniqueness] because −v ⃗ = (−1) v ⃗ which is given to be in W
provided v ⃗ ∈ W .
Next suppose W is a vector space. Then by definition, it is closed with respect to linear combinations. Hence it is a subspace.
In other words, this theorem claims that any subspace that contains a set of vectors must also contain the span of these vectors.
The following example will show that two spans, described differently, can in fact be equal.
Notice that 2p(x) − q(x) and p(x) + 3q(x) are both in W = span {p(x), q(x)} . Then by Theorem 14.4.3 W must contain the span of these polynomials and so U ⊆W .
2. W ⊆U
Notice that
\[ \begin{aligned} p(x) &= \tfrac{3}{7}\bigl(2p(x) - q(x)\bigr) + \tfrac{1}{7}\bigl(p(x) + 3q(x)\bigr)\\ q(x) &= -\tfrac{1}{7}\bigl(2p(x) - q(x)\bigr) + \tfrac{2}{7}\bigl(p(x) + 3q(x)\bigr) \end{aligned} \]
Hence p(x), q(x) are in span{2p(x) − q(x), p(x) + 3q(x)}. By Theorem 14.4.3, U must contain the span of these polynomials and so W ⊆ U.
To prove that a set is a vector space, one must verify each of the axioms given in Definition [def:vectorspaceaxiomsaddition] and [def:vectorspaceaxiomsscalarmult]. This is a cumbersome task, and therefore a shorter procedure, the subspace test, is used to verify a subspace:
1. The zero vector 0⃗ of V is contained in W.
2. For any vectors w⃗_1, w⃗_2 in W, the sum w⃗_1 + w⃗_2 is also in W.
3. For any vector w⃗ in W and scalar a, the product aw⃗ is also in W.
Solution
Using the subspace test in Procedure [proc:subspacetest] we can show that V and {0⃗ } are subspaces of V .
Since V satisfies the vector space axioms it also satisfies the three steps of the subspace test. Therefore V is a subspace.
1. The vector 0⃗ is clearly contained in {0⃗}, so the first condition is satisfied.
2. Let w⃗_1, w⃗_2 be in {0⃗}. Then w⃗_1 = 0⃗ and w⃗_2 = 0⃗ and so
\[ \vec{w}_1 + \vec{w}_2 = \vec{0} + \vec{0} = \vec{0} \]  (14.4.1)
Hence the sum is contained in {0⃗} and the second condition is satisfied.
3. Let w⃗ be in {0⃗} and let a be a scalar. Then aw⃗ = a0⃗ = 0⃗. Hence the product is contained in {0⃗} and the third condition is satisfied.
The two subspaces described above are called improper subspaces. Any subspace of a vector space V which is not equal to V or {0⃗} is called a proper subspace.
Solution
First, express W as follows:
\[ W = \{p(x) = ax^2 + bx + c,\; a, b, c \in \mathbb{R} \mid p(1) = 0\} \]  (14.4.3)
We apply the subspace test.
1. The zero polynomial 0(x) = 0x^2 + 0x + 0 satisfies 0(1) = 0, so 0(x) is in W.
2. Let p(x), q(x) be polynomials in W. It follows that p(1) = 0 and q(1) = 0. Now consider p(x) + q(x), and let r(x) represent this sum. Then
\[ r(1) = p(1) + q(1) = 0 + 0 = 0 \]
so r(x) is in W and the second condition is satisfied.
3. Let p(x) be a polynomial in W and let a be a scalar. Then (ap)(1) = a\,p(1) = a \cdot 0 = 0, so ap(x) is in W.
By the subspace test, W is a subspace of P_2.
Recall the definition of basis, considered now in the context of vector spaces.
Solution
It can be verified that P_2 is a vector space defined under the usual addition and scalar multiplication of polynomials.
Now, since P_2 = span{x^2, x, 1}, the set {x^2, x, 1} is a basis if it is linearly independent. Suppose then that
\[ ax^2 + bx + c = 0x^2 + 0x + 0 \]  (14.4.4)
where a, b, c are real numbers. It is clear that this can only occur if a = b = c = 0. Hence the set is linearly independent and forms a basis of P_2.
Proof
The proof will proceed as follows. First, we set up the necessary steps for the proof. Next, we will assume that r > s and show that this leads to a contradiction, thus requiring that r ≤ s.
Define span{y⃗_1, ⋯, y⃗_s} = V. Since each x⃗_i is in span{y⃗_1, ⋯, y⃗_s}, it follows there exist scalars c_1, ⋯, c_s such that
\[ \vec{x}_1 = \sum_{i=1}^{s} c_i\vec{y}_i \]
Note that not all of these scalars c_i can equal zero. Suppose that all the c_i = 0. Then it would follow that x⃗_1 = 0⃗ and so {x⃗_1, ⋯, x⃗_r} would not be linearly independent. Indeed, if x⃗_1 = 0⃗, then
\[ 1\vec{x}_1 + \sum_{i=2}^{r} 0\vec{x}_i = \vec{x}_1 = \vec{0} \]
and so there would exist a nontrivial linear combination of the vectors {x⃗_1, ⋯, x⃗_r} which equals zero. Therefore at least one c_i is nonzero.
Say c_k ≠ 0. Then we can solve the equation above for y⃗_k, which shows that
\[ \vec{y}_k \in \mathrm{span}\left\{ \vec{x}_1, \vec{y}_1, \cdots, \vec{y}_{k-1}, \vec{y}_{k+1}, \cdots, \vec{y}_s \right\} \]  (14.4.6)
Define {z⃗_1, ⋯, z⃗_{s−1}} to be
\[ \{\vec{z}_1, \cdots, \vec{z}_{s-1}\} = \{\vec{y}_1, \cdots, \vec{y}_{k-1}, \vec{y}_{k+1}, \cdots, \vec{y}_s\} \]  (14.4.7)
Now any v⃗ ∈ V can be written as
\[ \vec{v} = \sum_{i=1}^{s-1} c_i\vec{z}_i + c_s\vec{y}_k \]  (14.4.9)
Replace this y⃗_k with a linear combination of the vectors {x⃗_1, z⃗_1, ⋯, z⃗_{s−1}} to obtain v⃗ ∈ span{x⃗_1, z⃗_1, ⋯, z⃗_{s−1}}. The vector y⃗_k, in the list {y⃗_1, ⋯, y⃗_s}, has now been replaced with the vector x⃗_1 and the resulting modified list of vectors has the same span as the original list of vectors, {y⃗_1, ⋯, y⃗_s}.
We are now ready to move on to the proof. Suppose that r > s and that
\[ \mathrm{span}\{\vec{x}_1, \cdots, \vec{x}_l, \vec{z}_1, \cdots, \vec{z}_p\} = V \]
where the process established above has continued. In other words, the vectors z⃗_1, ⋯, z⃗_p are each taken from the set {y⃗_1, ⋯, y⃗_s} and l + p = s. This was done for l = 1 above.
Then since r > s, it follows that l ≤ s < r and so l + 1 ≤ r. Therefore, x⃗_{l+1} is a vector not in the list {x⃗_1, ⋯, x⃗_l}, and since span{x⃗_1, ⋯, x⃗_l, z⃗_1, ⋯, z⃗_p} = V, there exist scalars c_i and d_j such that
\[ \vec{x}_{l+1} = \sum_{i=1}^{l} c_i\vec{x}_i + \sum_{j=1}^{p} d_j\vec{z}_j \]  (14.4.12)
Not all the d_j can equal zero because if this were so, it would follow that {x⃗_1, ⋯, x⃗_r} would be a linearly dependent set because one of the vectors would equal a linear combination of the others. Therefore, (14.4.12) can be solved for one of the z⃗_i, say z⃗_k, in terms of x⃗_{l+1} and the other z⃗_i and, just as in the above argument, that z⃗_i can be replaced with x⃗_{l+1} while preserving the span. Continuing this replacement process (which is possible as long as some y⃗ vectors remain, since r > s), we eventually obtain span{x⃗_1, ⋯, x⃗_s} = V. But then x⃗_r ∈ span{x⃗_1, ⋯, x⃗_s}, contrary to the assumption that {x⃗_1, ⋯, x⃗_r} is linearly independent. Therefore, r ≤ s as claimed.
Proof
By Theorem [thm:exchangetheorem], m ≤ n and n ≤ m . Therefore m = n .
This corollary is very important so we provide another proof independent of the exchange theorem above.
Proof
Suppose n > m. Then since the vectors {u⃗_1, ⋯, u⃗_m} span V, there exist scalars c_{ij} such that
\[ \vec{v}_j = \sum_{i=1}^{m} c_{ij}\vec{u}_i \]
Therefore,
\[ \sum_{j=1}^{n} d_j\vec{v}_j = \vec{0} \quad\text{if and only if}\quad \sum_{j=1}^{n}\sum_{i=1}^{m} c_{ij}d_j\,\vec{u}_i = \vec{0} \]  (14.4.16)
if and only if
\[ \sum_{i=1}^{m}\left( \sum_{j=1}^{n} c_{ij}d_j \right)\vec{u}_i = \vec{0} \]  (14.4.17)
Now consider the homogeneous system
\[ \sum_{j=1}^{n} c_{ij}d_j = 0,\quad i = 1, 2, \cdots, m \]  (14.4.18)
This is a system of m equations in the n variables d_1, ⋯, d_n and m < n. Therefore, there exists a solution to this system of equations in which not all the d_j are equal to zero. Recall why this is so: the augmented matrix for the system is of the form [C | 0⃗] where C is a matrix which has more columns than rows. Therefore, there are free variables and hence nonzero solutions to the system of equations. For such a choice of the d_j, (14.4.17) holds, and therefore ∑_j d_j v⃗_j = 0⃗ with not all d_j zero. However, this contradicts the linear independence of {v⃗_1, ⋯, v⃗_n}. Similarly it cannot happen that m > n.
Given the result of the previous corollary, the following definition follows.
Notice that the dimension is well defined by Corollary [cor:baseslength]. It is assumed here that n < ∞ and therefore such a vector space is said to be finite dimensional.
Solution
If we can find a basis of P_2 then the number of vectors in the basis will give the dimension. Recall from Example [exa:polydegreetwo] that a basis of P_2 is given by
\[ S = \{x^2, x, 1\} \]  (14.4.19)
Since S contains three vectors, the dimension of P_2 is three.
It is important to note that a basis for a vector space is not unique. A vector space can have many bases. Consider the following example.
Solution
Suppose these vectors are linearly independent but do not form a spanning set for P_2. Then by Lemma [lem:addinglinearlyindependent], we could find a fourth polynomial in P_2 to create a new linearly independent set containing four polynomials. However this would imply that we could find a basis of P_2 of more than three polynomials. This contradicts the result of Example [exa:dimension] in which we determined the dimension of P_2 is three. Therefore, if these vectors are linearly independent they must also form a spanning set and thus a basis for P_2.
Setting a linear combination of the given polynomials equal to the zero polynomial and collecting terms gives
\[ (a + 3c)x^2 + (a + 2b)x + (a + b + c) = 0 \]
We know that {x^2, x, 1} is linearly independent, and so it follows that
\[ \begin{aligned} a + 3c &= 0\\ a + 2b &= 0\\ a + b + c &= 0 \end{aligned} \]
and there is only one solution to this system of equations, a = b = c = 0. Therefore, these are linearly independent and form a basis for P_2.
Proof
Let v⃗_1 ∈ V where v⃗_1 ≠ 0⃗. If span{v⃗_1} = V, then it follows that {v⃗_1} is a basis for V. Otherwise, there exists v⃗_2 ∈ V which is not in span{v⃗_1}. By Lemma [lem:addinglinearlyindependent], {v⃗_1, v⃗_2} is a linearly independent set of vectors. If span{v⃗_1, v⃗_2} = V, then {v⃗_1, v⃗_2} is a basis for V and we are done. If span{v⃗_1, v⃗_2} ≠ V, then there exists v⃗_3 ∉ span{v⃗_1, v⃗_2} and {v⃗_1, v⃗_2, v⃗_3} is a larger linearly independent set of vectors. Continuing this way, the process must stop before n + 1 steps because if not, it would be possible to obtain n + 1 linearly independent vectors contrary to the exchange theorem, Theorem [thm:exchangetheorem].
Proof
First suppose W = V . Then obviously the dimension of W = n.
Now suppose that the dimension of W is n . Let a basis for W be {w⃗ , ⋯ , w⃗ }. If W is not equal to V , then let v⃗ be a vector of V which is not contained in W . Thus v⃗ is not in
1 n
independent set of n + 1 vectors even though each of these vectors is in a spanning set of n vectors, a basis of V .
Solution
Let A = \(\begin{bmatrix} a & b\\ c & d\end{bmatrix}\) ∈ M_{22}. Then
\[ A\begin{bmatrix} 1 & 0\\ 1 & -1\end{bmatrix} = \begin{bmatrix} a & b\\ c & d\end{bmatrix}\begin{bmatrix} 1 & 0\\ 1 & -1\end{bmatrix} = \begin{bmatrix} a+b & -b\\ c+d & -d\end{bmatrix} \]  (14.4.20)
and
\[ \begin{bmatrix} 1 & 1\\ 0 & -1\end{bmatrix}A = \begin{bmatrix} 1 & 1\\ 0 & -1\end{bmatrix}\begin{bmatrix} a & b\\ c & d\end{bmatrix} = \begin{bmatrix} a+c & b+d\\ -c & -d\end{bmatrix} \]  (14.4.21)
If A ∈ U, then \(\begin{bmatrix} a+b & -b\\ c+d & -d\end{bmatrix} = \begin{bmatrix} a+c & b+d\\ -c & -d\end{bmatrix}\).
Equating entries leads to a system of four equations in the four variables a, b, c and d:
\[ \begin{array}{rcl} a+b &=& a+c\\ -b &=& b+d\\ c+d &=& -c\\ -d &=& -d \end{array} \qquad\text{or}\qquad \begin{array}{rcl} b-c &=& 0\\ -2b-d &=& 0\\ 2c+d &=& 0 \end{array} \]
The solution to this system is a = s, b = −\tfrac{1}{2}t, c = −\tfrac{1}{2}t, d = t for any s, t ∈ R, and thus
\[ A = \begin{bmatrix} s & -\tfrac{t}{2}\\ -\tfrac{t}{2} & t\end{bmatrix} = s\begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix} + t\begin{bmatrix} 0 & -\tfrac{1}{2}\\ -\tfrac{1}{2} & 1\end{bmatrix} \]  (14.4.22)
Let
\[ B = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & -\tfrac{1}{2}\\ -\tfrac{1}{2} & 1\end{bmatrix} \right\} \]  (14.4.23)
Then span(B) = U, and it is routine to verify that B is an independent subset of M_{22}. Therefore B is a basis of U, and dim(U) = 2.
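The system above can also be solved symbolically. The following is an illustrative Python/SymPy sketch (not part of the original text) that recovers the same two-parameter family of solutions, and hence dim(U) = 2.

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    A = sp.Matrix([[a, b], [c, d]])
    P = sp.Matrix([[1, 0], [1, -1]])
    Q = sp.Matrix([[1, 1], [0, -1]])

    # A is in U exactly when A*P = Q*A; collect the nonzero entrywise equations.
    eqs = [e for e in (A * P - Q * A) if e != 0]
    sol = sp.solve(eqs, [b, c, d], dict=True)
    print(sol)   # [{b: -d/2, c: -d/2}] -- a and d remain free, so dim(U) = 2
    # Substituting back: A = a*[[1,0],[0,0]] + d*[[0,-1/2],[-1/2,1]],
    # which matches the basis B found above.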
The following theorem claims that a spanning set of a vector space V can be shrunk down to a basis of V . Similarly, a linearly independent set within V can be enlarged to create a basis of V .
Proof
Let
\[ S = \{E \subseteq \{\vec{u}_1, \cdots, \vec{u}_n\} \text{ such that } \mathrm{span}\,E = V\} \]  (14.4.24)
From among the sets in S, choose one, {v⃗_1, ⋯, v⃗_m}, such that
\[ \mathrm{span}\{\vec{v}_1, \cdots, \vec{v}_m\} = V \]  (14.4.27)
and m is as small as possible for this to happen. If this set is linearly independent, it follows it is a basis for V and the theorem is proved. On the other hand, if the set is not linearly independent, then there exist scalars, c_1, ⋯, c_m, not all zero, such that
\[ \vec{0} = \sum_{i=1}^{m} c_i\vec{v}_i \]  (14.4.28)
Say c_k ≠ 0. Solving for v⃗_k in terms of the other v⃗_i, it follows that
\[ V = \mathrm{span}\{\vec{v}_1, \cdots, \vec{v}_{k-1}, \vec{v}_{k+1}, \cdots, \vec{v}_m\} \]  (14.4.29)
contradicting the definition of m. This proves the first part of the theorem.
To obtain the second part, begin with {u⃗_1, ⋯, u⃗_k} and suppose a basis for V is
\[ \{\vec{v}_1, \cdots, \vec{v}_n\} \]  (14.4.30)
If
\[ \mathrm{span}\{\vec{u}_1, \cdots, \vec{u}_k\} = V, \]  (14.4.31)
we are done. Otherwise, there exists a vector u⃗_{k+1} ∉ span{u⃗_1, ⋯, u⃗_k}. Then from Lemma [lem:addinglinearlyindependent], {u⃗_1, ⋯, u⃗_k, u⃗_{k+1}} is also linearly independent. Continue adding vectors in this way until n linearly independent vectors have been obtained. Then
\[ \mathrm{span}\{\vec{u}_1, \cdots, \vec{u}_n\} = V \]
because if it did not do so, there would exist u⃗_{n+1} as just described and {u⃗_1, ⋯, u⃗_{n+1}} would be a linearly independent set of vectors having n + 1 elements. This contradicts the fact that {v⃗_1, ⋯, v⃗_n} is a basis. In turn this would contradict Theorem [thm:exchangetheorem]. Therefore, this list is a basis.
\[ S = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix} \right\} \]  (14.4.34)
Enlarge S to a basis of M_{22}.
Solution
Recall from the solution of Example [exa:addinglinind] that the set R ⊆ M_{22} given by
\[ R = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix} \right\} \]  (14.4.35)
is also linearly independent. However this set is still not a basis for M_{22} as it is not a spanning set. In particular, \(\begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix}\) is not in span R. Therefore, this matrix can be added to the set by Lemma [lem:addinglinearlyindependent] to obtain the linearly independent set
\[ T = \left\{ \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix}, \begin{bmatrix} 0 & 0\\ 0 & 1\end{bmatrix} \right\} \]  (14.4.36)
This set also spans M_{22}, so T is a basis of M_{22} containing S.
Next we consider the case where you have a spanning set and you want a subset which is a basis. The above discussion involved adding vectors to a set. The next theorem involves removing
vectors.
Proof
Let S denote the set of positive integers such that for k ∈ S, there exists a subset of {w⃗_1, ⋯, w⃗_m} consisting of exactly k vectors which is a spanning set for W. Thus m ∈ S. Pick the smallest positive integer in S. Call it k. Then there exists {u⃗_1, ⋯, u⃗_k} ⊆ {w⃗_1, ⋯, w⃗_m} such that span{u⃗_1, ⋯, u⃗_k} = W. If
\[ \sum_{i=1}^{k} c_i\vec{u}_i = \vec{0} \]  (14.4.37)
and not all of the c_i = 0, then you could pick c_j ≠ 0, divide by it and solve for u⃗_j in terms of the others:
\[ \vec{u}_j = \sum_{i \ne j} \left(-\frac{c_i}{c_j}\right)\vec{u}_i \]  (14.4.38)
Then you could delete u⃗_j from the list and have the same span. In any linear combination involving u⃗_j, the linear combination would equal one in which u⃗_j is replaced with the above sum, showing that it could have been obtained as a linear combination of u⃗_i for i ≠ j. Thus k − 1 ∈ S, contrary to the choice of k. Hence each c_i = 0 and so {u⃗_1, ⋯, u⃗_k} is a basis for W.
Consider the polynomials
\[ 2x^2 + x + 1,\quad x^3 + 4x^2 + 2x + 2,\quad 2x^3 + 2x^2 + 2x + 1,\quad x^3 + 4x^2 - 3x + 2,\quad x^3 + 3x^2 + 2x + 1 \]
Then, as mentioned above, V has dimension 4 and so clearly these vectors are not linearly independent. A basis for V is {1, x, x^2, x^3}. Determine a linearly independent subset of these which has the same span. Determine whether this subset is a basis for V.
Solution
Consider an isomorphism which maps R^4 to V in the obvious way. Thus
\[ \begin{bmatrix} 1\\ 1\\ 2\\ 0\end{bmatrix} \]  (14.4.39)
corresponds to 2x^2 + x + 1 through the use of this isomorphism. Then corresponding to the above vectors in V we would have the following vectors in R^4:
\[ \begin{bmatrix} 1\\ 1\\ 2\\ 0\end{bmatrix},\; \begin{bmatrix} 2\\ 2\\ 4\\ 1\end{bmatrix},\; \begin{bmatrix} 1\\ 2\\ 2\\ 2\end{bmatrix},\; \begin{bmatrix} 2\\ -3\\ 4\\ 1\end{bmatrix},\; \begin{bmatrix} 1\\ 2\\ 3\\ 1\end{bmatrix} \]  (14.4.40)
Now if we obtain a subset of these which has the same span but which is linearly independent, then the corresponding vectors from V will also be linearly independent. If there are four in the list, then the resulting vectors from V must be a basis for V. The reduced row-echelon form of the matrix which has the above vectors as columns is
\[ \begin{bmatrix} 1 & 0 & 0 & -15 & 0\\ 0 & 1 & 0 & 11 & 0\\ 0 & 0 & 1 & -5 & 0\\ 0 & 0 & 0 & 0 & 1\end{bmatrix} \]  (14.4.41)
The pivot columns are the first, second, third and fifth, so the corresponding polynomials
\[ 2x^2 + x + 1,\quad x^3 + 4x^2 + 2x + 2,\quad 2x^3 + 2x^2 + 2x + 1,\quad x^3 + 3x^2 + 2x + 1 \]
form a linearly independent set with the same span, and since there are four of them, they are a basis for V. Note how this is a subset of the original set of vectors. If there had been only three pivot columns in this matrix, then we would not have had a basis for V, but we would at least have obtained a linearly independent subset of the original set of vectors in this way.
Note also that, since all linear relations are preserved by an isomorphism,
\[ -15(2x^2 + x + 1) + 11(x^3 + 4x^2 + 2x + 2) + (-5)(2x^3 + 2x^2 + 2x + 1) = x^3 + 4x^2 - 3x + 2 \]
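The row reduction above is easy to reproduce. Here is an illustrative Python/SymPy sketch (not part of the original text) using the coordinate vectors of the five polynomials with respect to the basis {1, x, x^2, x^3}:

    import sympy as sp

    # Columns are the coordinate vectors (constant, x, x^2, x^3) of the polynomials.
    A = sp.Matrix([[1, 2, 1, 2, 1],
                   [1, 2, 2, -3, 2],
                   [2, 4, 2, 4, 3],
                   [0, 1, 2, 1, 1]])

    R, pivots = A.rref()
    print(pivots)   # (0, 1, 2, 4): columns 1, 2, 3 and 5 give an independent subset
    sp.pprint(R)    # fourth column of R is (-15, 11, -5, 0), the dependency shown above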
\[ S = \{1,\; x,\; x^2,\; x^2 + 1\} \]  (14.4.42)
Show that S spans P_2, then remove vectors from S until it creates a basis.
Solution
First we need to show that S spans P_2. Let ax^2 + bx + c be an arbitrary polynomial in P_2. Write
\[ ax^2 + bx + c = r(1) + s(x) + t(x^2) + u(x^2 + 1) \]  (14.4.43)
Then,
\[ ax^2 + bx + c = r(1) + s(x) + t(x^2) + u(x^2 + 1) = (t + u)x^2 + s(x) + (r + u) \]
It follows that
\[ \begin{aligned} a &= t + u\\ b &= s\\ c &= r + u \end{aligned} \]
Clearly a solution exists for all a, b, c and so S is a spanning set for P_2. By Theorem [thm:basisfromspanninglinind], some subset of S is a basis for P_2.
Recall that a basis must be both a spanning set and a linearly independent set. Therefore we must remove a vector from S keeping this in mind. Suppose we remove x from S. The resulting set would be {1, x^2, x^2 + 1}. This set is clearly linearly dependent (and also does not span P_2) and so is not a basis.
Also, if {w⃗_1, ⋯, w⃗_s} is a linearly independent set of vectors, then W has a basis of the form {w⃗_1, ⋯, w⃗_s, ⋯, w⃗_r} for r ≥ s.
Proof
Let the dimension of V be n. Pick w⃗_1 ∈ W where w⃗_1 ≠ 0⃗. If w⃗_1, ⋯, w⃗_s have been chosen such that {w⃗_1, ⋯, w⃗_s} is linearly independent, and if span{w⃗_1, ⋯, w⃗_s} = W, stop. You have the desired basis. Otherwise, there exists w⃗_{s+1} ∉ span{w⃗_1, ⋯, w⃗_s} and {w⃗_1, ⋯, w⃗_s, w⃗_{s+1}} is linearly independent. Continue this way until the process stops. It must stop since otherwise, you could obtain a linearly independent set of vectors having more than n vectors, which is impossible.
The last claim is proved by following the above procedure starting with {w⃗_1, ⋯, w⃗_s} as above.
This also proves the following corollary. Let V play the role of W in the above theorem and begin with a basis for W , enlarging it to form a basis for V as discussed above.
\[ W = \mathrm{span}\left\{ \begin{bmatrix} 1\\ 0\\ 1\\ 1\end{bmatrix}, \begin{bmatrix} 0\\ 1\\ 0\\ 1\end{bmatrix} \right\} \]  (14.4.45)
To extend this basis of W to a basis of R^4, form the matrix whose first two columns are the given vectors and whose remaining columns are the standard basis vectors of R^4:
\[ \begin{bmatrix} 1 & 0 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ 1 & 1 & 0 & 0 & 0 & 1\end{bmatrix} \]  (14.4.46)
Note how the given vectors were placed as the first two and then the matrix was extended in such a way that it is clear that the span of the columns of this matrix yields all of R^4. Now determine the pivot columns. The reduced row-echelon form is
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & -1 & 1\\ 0 & 0 & 1 & 0 & -1 & 0\\ 0 & 0 & 0 & 1 & 1 & -1\end{bmatrix} \]  (14.4.47)
The pivot columns are the first four. These are
\[ \begin{bmatrix} 1\\ 0\\ 1\\ 1\end{bmatrix},\; \begin{bmatrix} 0\\ 1\\ 0\\ 1\end{bmatrix},\; \begin{bmatrix} 1\\ 0\\ 0\\ 0\end{bmatrix},\; \begin{bmatrix} 0\\ 1\\ 0\\ 0\end{bmatrix} \]
and now this is an extension of the given basis for W to a basis for R^4.
Why does this work? The columns of (14.4.46) obviously span R^4, and the reduced row-echelon form shows that the last two columns are linear combinations of the first four, so the span of the first four is the same as the span of all six.
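The pivot-column computation above is straightforward to check. Here is an illustrative Python/SymPy sketch (not part of the original text):

    import sympy as sp

    # Columns: the two given basis vectors of W followed by the standard basis of R^4.
    M = sp.Matrix([[1, 0, 1, 0, 0, 0],
                   [0, 1, 0, 1, 0, 0],
                   [1, 0, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0, 1]])

    R, pivots = M.rref()
    print(pivots)   # (0, 1, 2, 3): keep the two given vectors together with e1 and e2
    sp.pprint(R)    # matches the reduced row-echelon form displayed above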
Therefore the intersection of two subspaces is all the vectors shared by both. If there are no vectors shared by both subspaces, meaning that U ∩ W = {0⃗}, the sum U + W takes on a special name.
An interesting result is that both the sum U + W and the intersection U ∩ W are subspaces of V.
3. Let a be a scalar and v⃗ ∈ U ∩ W. Then in particular, v⃗ ∈ U. Since U is a subspace, it follows that av⃗ ∈ U. The same argument holds for W, so av⃗ is in both U and W. By definition, it is in U ∩ W.
Therefore U ∩ W is a subspace of V.
It can also be shown that U + W is a subspace of V.
Recall that a transformation is simply a function which takes a vector and produces a new vector. Consider the following definition: a transformation T : V → W between vector spaces is a linear transformation if, for all scalars k, p and all vectors v⃗_1, v⃗_2 in V,
\[ T(k\vec{v}_1 + p\vec{v}_2) = kT(\vec{v}_1) + pT(\vec{v}_2) \]  (14.6.1)
Several important examples of linear transformations include the zero transformation, the identity transformation, and the
scalar transformation.
By Definition [def:lineartransformation] we must show that for all scalars k, p and vectors v⃗_1 and v⃗_2 in V, s_a(kv⃗_1 + pv⃗_2) = k s_a(v⃗_1) + p s_a(v⃗_2). Assume that a is also a scalar.
\[ \begin{aligned} s_a(k\vec{v}_1 + p\vec{v}_2) &= a(k\vec{v}_1 + p\vec{v}_2)\\ &= ak\vec{v}_1 + ap\vec{v}_2\\ &= k(a\vec{v}_1) + p(a\vec{v}_2)\\ &= k\,s_a(\vec{v}_1) + p\,s_a(\vec{v}_2) \end{aligned} \]
Proof
Let 0⃗_V denote the zero vector of V and let 0⃗_W denote the zero vector of W. We want to prove that T(0⃗_V) = 0⃗_W. Let v⃗ ∈ V. Then 0v⃗ = 0⃗_V and
\[ T(\vec{0}_V) = T(0\vec{v}) = 0\,T(\vec{v}) = \vec{0}_W \]  (14.6.5)
Let v⃗ ∈ V; then −v⃗ ∈ V is the additive inverse of v⃗, so v⃗ + (−v⃗) = 0⃗_V. Thus
\[ \begin{aligned} T(\vec{v} + (-\vec{v})) &= T(\vec{0}_V)\\ T(\vec{v}) + T(-\vec{v}) &= \vec{0}_W\\ T(-\vec{v}) &= \vec{0}_W - T(\vec{v}) = -T(\vec{v}) \end{aligned} \]
This result follows from preservation of addition and preservation of scalar multiplication. A formal proof would be
by induction on m.
Find T(4x^2 + 5x − 3).
We provide two solutions to this problem.
Solution 1:
Suppose a(x^2 + x) + b(x^2 − x) + c(x^2 + 1) = 4x^2 + 5x − 3. Then
\[ (a + b + c)x^2 + (a - b)x + c = 4x^2 + 5x - 3. \]  (14.6.7)
Equating coefficients gives c = −3, a − b = 5 and a + b + c = 4, so a = 6 and b = 1. Therefore
\[ T(4x^2 + 5x - 3) = 6\,T(x^2 + x) + T(x^2 - x) - 3\,T(x^2 + 1) = 6(-1) + 1 - 9 = -14. \]
Solution 2:
Notice that
\[ \begin{aligned} x^2 &= \tfrac{1}{2}(x^2 + x) + \tfrac{1}{2}(x^2 - x)\\ x &= \tfrac{1}{2}(x^2 + x) - \tfrac{1}{2}(x^2 - x)\\ 1 &= (x^2 + 1) - \tfrac{1}{2}(x^2 + x) - \tfrac{1}{2}(x^2 - x). \end{aligned} \]
Then
\[ \begin{aligned} T(x^2) &= \tfrac{1}{2}T(x^2 + x) + \tfrac{1}{2}T(x^2 - x) = \tfrac{1}{2}(-1) + \tfrac{1}{2}(1) = 0.\\ T(x) &= \tfrac{1}{2}T(x^2 + x) - \tfrac{1}{2}T(x^2 - x) = \tfrac{1}{2}(-1) - \tfrac{1}{2}(1) = -1.\\ T(1) &= T(x^2 + 1) - \tfrac{1}{2}T(x^2 + x) - \tfrac{1}{2}T(x^2 - x) = 3 - \tfrac{1}{2}(-1) - \tfrac{1}{2}(1) = 3. \end{aligned} \]
Therefore,
\[ T(4x^2 + 5x - 3) = 4T(x^2) + 5T(x) - 3T(1) = 4(0) + 5(-1) - 3(3) = -14. \]
The advantage of Solution 2 over Solution 1 is that if you were now asked to find T(−6x^2 − 13x + 9), it is easy to use T(x^2) = 0, T(x) = −1 and T(1) = 3:
\[ T(-6x^2 - 13x + 9) = -6T(x^2) - 13T(x) + 9T(1) = 0 + 13 + 27 = 40. \]
More generally,
\[ T(ax^2 + bx + c) = aT(x^2) + bT(x) + cT(1) = -b + 3c. \]
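To mirror Solution 2 computationally, here is a minimal Python sketch (an aside, not part of the original text). It assumes the values T(x^2) = 0, T(x) = −1, T(1) = 3 computed above; the function name is an arbitrary choice.

    # Values of T on the basis {x^2, x, 1} of P2, as computed in Solution 2.
    T_x2, T_x, T_1 = 0, -1, 3

    def T(a, b, c):
        """Apply T to the polynomial a*x^2 + b*x + c by linearity."""
        return a * T_x2 + b * T_x + c * T_1

    print(T(4, 5, -3))     # -14, the value of T(4x^2 + 5x - 3) found above
    print(T(-6, -13, 9))   # -6*0 - 13*(-1) + 9*3 = 40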
Suppose two linear transformations act in the same way on v ⃗ for all vectors. Then we say that these transformations are equal.
The definition above requires that two transformations have the same action on every vector in order for them to be equal. The
next theorem argues that it is only necessary to check the action of the transformations on basis vectors.
This theorem tells us that a linear transformation is completely determined by its actions on a spanning set. We can also
examine the effect of a linear transformation on a basis.
Furthermore, if
\[ \vec{v} = k_1\vec{v}_1 + k_2\vec{v}_2 + \cdots + k_n\vec{v}_n \]  (14.6.9)
is a vector of V, then
\[ T(\vec{v}) = k_1\vec{w}_1 + k_2\vec{w}_2 + \cdots + k_n\vec{w}_n. \]  (14.6.10)
A Brief Table of Integrals
\[ \int u^{\alpha}\,du = \frac{u^{\alpha+1}}{\alpha+1} + c,\quad \alpha \ne -1 \]
\[ \int \frac{du}{u} = \ln|u| + c \]
\[ \int \cos u\,du = \sin u + c \]
\[ \int \sin u\,du = -\cos u + c \]
\[ \int \tan u\,du = -\ln|\cos u| + c \]
\[ \int \cot u\,du = \ln|\sin u| + c \]
\[ \int \sec^2 u\,du = \tan u + c \]
\[ \int \csc^2 u\,du = -\cot u + c \]
\[ \int \cos^2 u\,du = \frac{u}{2} + \frac{1}{4}\sin 2u + c \]
\[ \int \sin^2 u\,du = \frac{u}{2} - \frac{1}{4}\sin 2u + c \]
\[ \int \frac{du}{1+u^2} = \tan^{-1} u + c \]
\[ \int \frac{du}{\sqrt{1-u^2}} = \sin^{-1} u + c \]
\[ \int \frac{1}{u^2-1}\,du = \frac{1}{2}\ln\left|\frac{u-1}{u+1}\right| + c \]
\[ \int \cosh u\,du = \sinh u + c \]
\[ \int \sinh u\,du = \cosh u + c \]
\[ \int u\,dv = uv - \int v\,du \]
\[ \int u e^{u}\,du = u e^{u} - e^{u} + c \]
\[ \int e^{\lambda u}\cos\omega u\,du = \frac{e^{\lambda u}(\lambda\cos\omega u + \omega\sin\omega u)}{\lambda^2+\omega^2} + c \]
\[ \int e^{\lambda u}\sin\omega u\,du = \frac{e^{\lambda u}(\lambda\sin\omega u - \omega\cos\omega u)}{\lambda^2+\omega^2} + c \]
\[ \int \ln|u|\,du = u\ln|u| - u + c \]
\[ \int u\ln|u|\,du = \frac{u^2\ln|u|}{2} - \frac{u^2}{4} + c \]
Index
A F method of undetermined coefficients
Abel's theorem forcing function 5.4: The Method of Undetermined Coefficients I
5.1: Homogeneous Linear Equations 5.5: The Method of Undetermined Coefficients II
9.1: Introduction to Linear Higher Order Equations 9.3: Undetermined Coefficients for Higher Order
10.2: Linear Systems of Differential Equations
10.3: Basic Theory of Homogeneous Linear Equations
Systems midpoint method
augmented matrix G 3.2: The Improved Euler Method and Related
11.3: Gaussian Elimination Gaussian Elimination Methods
autonomous differential equation 11.2: Elementary Operations
11.3: Gaussian Elimination
minor of a matrix
4.4: Autonomous Second Order Equations 13.1: Basic Techniques
gravity
Autonomous Second Order Equations minors or a matrix
4.3: Elementary Mechanics
4.4: Autonomous Second Order Equations 13.1: Basic Techniques
H multiplicity
B Heun’s method 9.2: Higher Order Constant Coefficient
Bernoulli Equations Homogeneous Equations
3.2: The Improved Euler Method and Related
2.4: Transformation of Nonlinear Equations into Methods
Separable Equations
homogeneous N
5.1: Homogeneous Linear Equations
Newton’s Law of Cooling
C Homogeneous Differential Equations 4.2: Cooling and Mixing
characteristic polynomial Nonhomgeneous Linear Equations
5.2: Constant Coefficient Homogeneous Equations with constant Coefficients 5.3: Nonhomogeneous Linear Equations
9.2: Higher Order Constant Coefficient 5.2: Constant Coefficient Homogeneous Equations
Homogeneous Equations
Chebyshev’s equation O
I open interval of convergence
7.3: Series Solutions Near an Ordinary Point I identity matrix 7.2: Review of Power Series
coefficient matrix 12.6: The Identity and Inverses
open rectangle
10.2: Linear Systems of Differential Equations implicit function Theorem 2.3: Existence and Uniqueness of Solutions of
cofactor 2.2: Separable Equations Nonlinear Equations
13.1: Basic Techniques improved Euler method ordinary point
complementary equation 3.2: The Improved Euler Method and Related 7.3: Series Solutions Near an Ordinary Point I
2.1: Linear First Order Equations Methods 7.4: Series Solutions Near an Ordinary Point II
5.3: Nonhomogeneous Linear Equations initial value problems Overdamped Motion
compound interest 8.3: Solution of Initial Value Problems 6.2: Spring Problems II
4.1: Growth and Decay integrating factor
constant coefficient equation 2.6: Integrating Factors P
9.2: Higher Order Constant Coefficient Inverse Laplace Transform particular solution
Homogeneous Equations
8.2: The Inverse Laplace Transform 5.3: Nonhomogeneous Linear Equations
constant matrix phase plane equivalent
11.3: Gaussian Elimination
K 4.4: Autonomous Second Order Equations
Critically Damped Motion Kronecker delta Poincaré phase plane
6.2: Spring Problems II 12.6: The Identity and Inverses 4.4: Autonomous Second Order Equations
polynomial operator
D L 9.2: Higher Order Constant Coefficient
damping Laplace Expansion Homogeneous Equations
4.4: Autonomous Second Order Equations 13.1: Basic Techniques power series
defective Laplace Transform 7.2: Review of Power Series
10.5: Constant Coefficient Homogeneous Systems 8: Laplace Transforms
II
determinant
8.1: Introduction to the Laplace Transform R
8.3: Solution of Initial Value Problems
radius of convergence
13.1: Basic Techniques Linear Independence 7.2: Review of Power Series
Determinants 14.3: Linear Independence
reduction of orders
13.2: Properties of Determinants
5.6: Reduction of Order
direction field M Riccati equation
1.3: Direction Fields for First Order Equations Malthusian model 2.4E: Transformation of Nonlinear Equations into
1.1: Applications Leading to Differential Separable Equations (Exercises)
E Equations
roundoff errors
Elementary operations matrix 3.1: Euler's Method
11.2: Elementary Operations 12.1: Matrix Arithmetic
Row Operations
escape velocity matrix inverse 13.2: Properties of Determinants
4.3: Elementary Mechanics 12.7: Finding the Inverse of a Matrix
Euler’s Method Matrix Multiplication S
3.1: Euler's Method 12.4: Properties of Matrix Multiplication
Separable Differential Equations
Exact Differentials Mechanics 2.2: Separable Equations
2.5: Exact Equations 4.3: Elementary Mechanics
separatrix
Exact Equations 4.4: Autonomous Second Order Equations
2.5: Exact Equations Simple Harmonic Motion
6.1: Spring Problems I
skew lines T V
11.1: Geometry temperature decay constant van der Pol equation
skew symmetric matrix 4.2: Cooling and Mixing 4.4E: Autonomous Second Order Equations
12.5: The Transpose terminal velocity (Exercises)
span 4.3: Elementary Mechanics Variation of Parameters
14.2: Spanning Sets transpose 5.7: Variation of Parameters
9.4: Variation of Parameters for Higher Order
Spanning Sets 12.5: The Transpose
Equations
14.2: Spanning Sets Triangular Matrix Verhulst model
square matrix 13.1: Basic Techniques
1.1: Applications Leading to Differential
12.1: Matrix Arithmetic truncation errors Equations
subsets 3.1: Euler's Method
14.2: Spanning Sets W
superposition principle U Wronskian
5.3: Nonhomogeneous Linear Equations Underdamped Motion 9.1: Introduction to Linear Higher Order Equations
system of linear equations 6.2: Spring Problems II 10.3: Basic Theory of Homogeneous Linear
11.2: Elementary Operations Systems
Section 1.2 Answers
1. 3
2. 2
3. 1
4. 2
2
13. y = − x
2
+c
15. y = x
2
ln x −
x
4
+c
18. y = x
3
− sin x + e
x
+ c1 + c2 x
19. y = sin x + c 1 + c2 x + c3 x
2
20. y = − x
60
+e
x
+ c1 + c2 x + c3 x
2
21. y = 7
64
e
4x
+ c1 + c2 x + c3 x
2
23. y = 1 − 1
2
cos x
2
–
24. y = 3 − ln(√2 cos x)
5
25. y = − 47
15
−
37
5
(x − 2) +
x
30
26. y = 1
4
xe
2x
−
1
4
e
2x
+
29
28. y = (x 2
− 6x + 12)e
x
+
x
2
− 8x − 11
29. y = x
3
+
cos 2x
6
+
7
4
x
2
− 6x +
7
4 3
30. y = x
12
+
x
6
+
1
2
(x − 2 )
2
−
26
3
(x − 2) −
5
40.
a. 576 ft
b. 10 s
41.
b. y = 0
43.
a. (−2c − 2, ∞) (−∞, ∞)
Section 2.1 Answers
1. y = e −ax
2. y = ce
3
−x
3. y = ce −(ln x) /2
4. y = c
x
3
5. y = ce 1/x
−( x−1)
6. y = e
7. y = x ln x
e
8. y = x sin x
π
9. y = 2(1 + x 2
)
10. y = 3x −k
12. y = 1
3
+ ce
−3x
13. y = 2
x
+
x
c
e
x
14. y = e
2
−x x
( + c)
2
−x
15. y = − e
1+x
+c
2
7 ln |x|
16. x
+
3
2
x+
c
17. y = (x − 1) −4
(ln |x − 1| − cos x + c)
18. y = e
2
−x x c
( + )
4 x
2 ln |x|
19. y = x2
+
1
2
+
c
x2
20. y = (x + c) cos x
21. y = c−cos x
2
(1+x)
3 5
(x−2) (x−2)
22. y = − 1
2 (x−1)
+c
(x−1)
23. y = (x + c)e
2
− sin x
x x
24. y = e
x2
−
e
x
3
+
c
x2
3x −7x
25. y = e −e
10
26. y = 2x+1
2
2
(1+x )
27. y = x2
1
ln(
1+x
2
)
28. y = 1
2
(sin x + csc x)
2 ln |x|
29. y = x
+
x
2
−
1
2x
30. y = (x − 1) −3
[ln(1 − x) − cos x]
31. y = 2x 2
+
x
1
2
(0, ∞)
32. y = x 2
(1 − ln x)
33. y =
2
1 5 −x
+ e
2 2
ln |x−1|+tan x+1
34. y = 3
(x−1)
2
ln |x|+x +1
35. y = 4
(x+2)
36. y = (x 2
− 1) (
1
2
ln | x
2
− 1| − 4)
37. y = −(x 2
− 5)(7 + ln | x
2
− 5|)
x
38. y = e
2 2
−x 2 t
(3 + ∫ t e dt)
0
x
39. y = 1
x
(2 + ∫
1
sin t
t
dt)
x
40. y = e −x
∫
1
tan t
t
dt
t
x
41. y = 1+x
1
2
(1 + ∫
0
e
1+t
2
dt)
2
x
42. y = 1
x
(2 e
−(x−1)
+e
−x
∫
1
t
e e
t
dt)
43. G = r
λ
+ (G0 −
r
λ
)e
−λt
limt→∞ G(t) =
r
x
45. y = y 0e
−a(x−x0 )
+e
−ax
∫
x0
e
at
f (t)dt
48.
a. y = tan
−1
(
1
3
+ ce
3x
)
1/2
b. y = ±[ln( 1
x
+
c
x2
)]
c. y = exp (x
2
+
x
c
2
)
d. y = −1 + c+3 ln |x|
x
Section 2.2 Answers
−−−− −−−− −−−−−−−
1. y = 2 ± √2(x 3 2
+ x + x + c)
3. y = x−c
c
y ≡ −1
2
(ln y) 3
4. 2
=−
x
3
+c
5. y 3
+ 3 sin y + ln |y| + ln(1 + x ) + tan
2 −1
x = c; y ≡0
1/2
2
6. y = ±(1 + ( 1+cx
x
) ) ; y ≡ ±1
7. y = tan( x
3
+ c)
8. y = c
√1+x2
2
( x−1) /2
2−ce
9. y = ( x−1)
2
/2
; y ≡1
1−ce
10. y = 1 + (3x 2
+ 9x + c )
1/3
−−−−−−−−−−−−−−−−−
11. y = 2 + √ 2
3
x
3
+ 3x
2
+ 4x −
11
2
−( x −4) /2
12. y = e
2
2−e−( x −4) /2
13. y 3
+ 2y
2
+x
2
+ sin x = 3
14. (y + 1)(y − 1) −3
(y − 2 )
2
= −256(x + 1 )
−6
15. y = −1 + 3e −x
16. y = 1
2
√2 e−2x −1
2
−x
18. y = 4−e
−x
2
; (−∞, ∞)
2−e
20. y = 1+e
2
−2x
(−∞, ∞)
−−−−−−
21. y = −√25 − x 2
; (−5, 5)
22. y ≡ 2, (−∞, ∞)
1/3
23. y = 3( x+1
2x−4
) ; (−∞, 2)
24. y = x+c
1−cx
−−−−−
25. y = −x cos c + √1 − x 2
sin c; y ≡ 1; y ≡ −1
26. y = −x + 3π/2
P0
28. P =
αP0 +(1−αP0 ) e
−a t
; limt→∞ P (t) = 1/α
SI0
29. I =
I0 +(S−I0 ) e−rSt
I0 αI0
30. If q = rS then I =
1+rI0 t
and limt→∞ I (t) = 0. If q ≠ Rs, then I =
I0 +(α−I0 ) e
−rαt
. If q < rs, then limt→∞ I (t)
q
=α =S− if q > rS, then limt→∞ I (t) = 0
r
34. f = ap, where a = constant
−− −−−−
35. y = e −x 2
(−1 ± √2 x + c )
−− −−−
36. y = x 2 2
(−1 + √x + c )
37. y = e x
(−1 + (3x e
x
+ c)
1/3
)
−−− −−
38. y =e
2x 2
(1 ± √c − x )
39.
a. y 1 = 1/x; g(x) = h(x)
b. y 1 = x; g(x) = h(x)/ x
2
c. y 1 =e
−x
;
x
g(x) = e h(x)
d. y 1 =x
−r
; g(x) = x
r−1
h(x)
Section 2.3 Answers
1. (b) x 0 ≠ kπ(l = integer)
2. (b) x 0, y0 ) ≠ (0, 0)
3. (b) x 0 y0 ≠ (2k + 1)
π
2
(k = integer)
5.
a. all (x , y ) 0 0
b. (x , y ) woth y
0 0 0 ≠0
6. (b) all (x 0, y0 )
7. (b) all (x 0, y0 )
9.
a. all (x 0, y0 )
b. all (x 0, y0 ) ≠ (0, 0)
10.
a. all (x 0, y0 )
b. all (x 0, y0 ) with y 0 ≠ ±1
2
(k = integer)
5/3
16. y = ( 3
5
x + 1) , −∞ < x < ∞ , is a solution.
Also,
5
0, −∞ < x ≤ −
3
y ={
3 5/3 5
( x + 1) , − <x <∞
5 3
3
, the following function is also a solution:
3 5/3
⎧ ( (x + a)) , −∞ < x < −a,
⎪ 5
⎪
5
y =⎨ 0, −a ≤ x ≤ −
3
⎪
⎩
⎪ 3 5/3 5
( x + 1) , − <x <∞
5 3
17.
a. all (x 0, y0 )
b. all (x 0, y0 ) with y 0 ≠1
18. y 1
3
= 1; y2 = 1 + |x | ; y3 = 1 − |x | ; y4 = 1 + x ; y5 = 1 − x
3 3 3
3 3
1 +x , x ≥ 0, 1 −x , x ≥ 0,
y6 = { ; y7 = { ;
1, x <0 1, x <0
1, x ≥ 0, 1, x ≥ 0,
y8 = { ; y9 = {
3 3
1 +x , x <0 1 −x , x <0
19. y = 1 + (x 2
+ 4)
3/2
, −∞ < x < ∞
20.
a. The solution is unique on (0, ∞). It is given by
–
1, 0 < x ≤ √5
y ={ –
2 3/2
1 − (x − 5) , √5 < x < ∞
–
b. 1, −∞ < x ≤ √5,
y ={
2 3/2 –
1 − (x − 5) , √5 < x < ∞
and
2 2 3/2
⎧ 1 − (x −α ) , −∞ < x < −α,
⎪
–
y =⎨ 1, −α ≤ x ≤ √5,
⎩
⎪ –
2 3/2
1 − (x − 5) , √5 < x < ∞,
Section 2.4 Answers
1. y = 1−ce
1
x
2. y = x 2/7
(c − ln |x| )
1/7
3. y = e 2/x
(c − 1/x )
2
√2x+c
4. y = ± 1+x
2
5. y = ±(1 − x
2
2 −x −1/2
+ ce )
1/3
6. y = [ 3(1−x)+ce
x
−x
]
2 √2
7. y = √1−4x
−2
8. y = [1 −
2
3 −( x −1)/4
e ]
2
9. y = 1
1/3
x(11−3x)
10. y = (2e x
− 1)
2
1/2
12. y = [ 5x
2(1+4 x )
5
]
⎧ ∞ if L = 0,
⎪
at
P0 e
14. P = t
; limt→∞ P (t) = ⎨ 0 if L = ∞,
1+aP0 ∫ α(τ) ea τ dτ
0 ⎩
⎪
1/aL if 0 < L < ∞
16. y = cx
1−cx
y = −x
18. y = x sin −1
(ln |x| + c)
2 ln |x|+1
4
1/2
9−x
24. y = 1
x
(
2
)
25. y = −x
x(4x−3)
26. y = − (2x−3)
−−−−−−
27. y = x √4x 6
−1
y
28. tan −1
x
−
1
2
ln(x
2
+y ) = c
2
31. (y + x) = c(y − x ) 3
; y = x; y = −x
32. y 2
(y − 3x) = c; y ≡ 0; y = 3x
33. (x − y ) 3
(x + y) = c y x ;
2 4
y = 0; y = x; y = −x
3
y y
34. x
+
x
3
= ln |x| + c
aX0 + b Y0 = α
c X0 + dY0 = β
41. (y + 2x + 1) 4
)2y − 6x − 3) = c; y = 3x + 3/2; y = −2x − 1
42. (y + x − 1)(y − x − 5) 3
= c; y = x + 5; y = −x + 1
2(x+2)
43. ln |y − x − 6| − y−x−6
= c; y = x +6
44. (y 1 =x
1/3
y =x
1/3
(ln |x| + c )
1/3
−− − −−−
45. y 1 =x ;
3 3 6
y = ±x √c x − 1
2 4
x (1+cx )
46. y 1 =x ;
2
y =
1−cx
4
y = −x
2
x x
e (1−2ce
47. y 1 =e ;
x
y =−
1−cex
; y = −2 e
x
4
2 ln x(1+c(ln x) )
49. y 1 = ln x; y =
4
; y = −2 ln x
1−c(ln x)
−−−−−− −
50. y 1 =x
1/2
; y =x
1/2
(−2 ± √ln |x| + c )
−− −−−−
51. y
2 2
x x 2
1 =e ; y =e (−1 ± √2 x + c )
−3+√1+60x
52. y = 2x
−5+√1+48x
53. y = 2x2
56. y = 1 + x+1+ce
1
x
57. y = e x
−
1+ce
1
−x
58. y = 1 − x(1−cx)
1
59. y = x − 2x
x2 +c
Section 2.5 Answers
1. 2x 3
y
2
=c
2. 3y sin x + 2x 2
e
x
+ 3y = c
3. Not exact
4. x 2
− 2x y
2
+ 4y
3
=c
5. x + y = c
6. Not exact
7. 2y 2
cos x + 3x y
3
−x
2
=c
8. Not exact
9. x 3
+ x y + 4x y
2 2
+ 9y
2
=c
14. x 2
y e
2 x
+ 2y + 3 x
2
=c
15. x
2
3 x +y 3 2
e − 4y + 2x =c
16. x 4
e
xy
+ 3xy = c
17. x 3
cos xy + 4 y
2
+ 2x
2
=c
2
x+√2 x +3x−1
18. y = x
2
−−−−−−−
19. y = sin x − √1 − tan x
x
1/3
20. y = ( e −1
ex +1
)
21. y = 1 + 2 tan x
2
22. y = x −x+6
(x+2)(x−3)
2 2
3y
23. 7x
2
+ 4xy +
2
=c
24. (x 4
y
2
+ 1)e
x
+y
2
=c
29.
a. M (x, y) = 2xy + f (x)
b. M (x, y) = 2(sin x + x cos x)(y sin y + cos y) + f (x)
c. M (x, y) = y e − e cos x + f (x) x y
30.
4
x y
a. N (x, y) = + x + 6xy + g(y)
2
2
37.
a. 2x + x y + y = c
2 4 4 2
b. x + 3x y = c
3 2
c. x + y + 2xy = c
3 2
38. y = −1 − x
1
2
2 4 2
−3( x +1)+√9 x +34 x +21
39. y = x 3
(
2
)
2
2x+√9−5x
40. y = −e
2
−x
( )
3
44.
a. G(x, y) = 2xy + c
b. G(x, y) = e sin y + c x
c. G(x, y) = 3x y − y + c 2 3
Section 2.6 Answers
3. μ(x) = 1/x 2
; y = cx and μ(y) = 1/ y ;
2
x = cy
4. μ(x) = x −3/2
; x
3/2
y =c
5. μ(y) = 1/y 3
; y e
3 2x
=c
6. μ(x) = e 5x/2
; e
5x/2
(xy + 1) = c
7. μ(x) = e x
;
x
e (xy + y + x) = c
8. μ(x) = x; 2 2
x y (9x + 4y) = c
9. μ(y) = y 2
;
3 2
y (3 x y + 2x + 1) = c
10. μ(y) = y e y
;
y
e (x y
3
+ 1) = c
11. μ(y) = y 2
; y (3 x
3 4 3
+ 8 x y + y) = c
12.μ(x) = x e x
;
2
x y(x + 1)e
x
=c
13. μ(x) = (x 3
− 1)
−4/3
; xy(x
3
− 1)
−1/3
= c and x ≡ 1
14. μ(y) = e y
;
y
e (sin x cos y + y − 1) = c
15. μ(y) = e
2 2
−y −y
; xy e (x + y) = c
xy
16. sin y
= c and y = kπ(k = integer)
17. μ(x, y) = x 4
y ;
3
x y
5 4
ln x = c
19. μ(x, y) = x −2
y
−3
; 3x y
2 2 2
+ y = 1 + cx y and x ≡ 0, y ≡ 0
20. μ(x, y) = x −2
y
−1
; −
2
x
+y
3
+ 3 ln |y| = c and x ≡ 0, y ≡ 0
21. μ(x, y) = e ax
e
by
; e
ac
e
by
cos xy = c
22. μ(x, y) = x −4
y
−3
(and others) xy = c
23. μ(x, y) = x e y
; x ye
2 y
sin x = c
3 3
x y y
24. μ(x, y) = 1/x 2
;
3
−
x
=c
26. μ(x, y) = x 2
y ;
2 3 3
x y (3x + 2 y ) = c
2
27. μ(x, y) = x −2
y
−2
;
2
3 x y = cxy + 2 and x ≡ 0, y ≡ 0
Section 3.1 Answers
1. y 1 = 1.450000000, y2 = 2.085625000, y3 = 3.079099746
6.
7.
8.
9.
10.
11.
12.
13.
Euler's Method
x h = 0.1 h = 0.05 h = 0.025 Exact
Euler semilinear Method
x h = 0.1 h = 0.05 h = 0.025 Exact
Applying variation of parameters to the given initial value problem yields y = ue^{-3x}, where (A): u′ = 7, u(0) = 6. Since u″ = 0, Euler's method yields the exact solution of (A). Therefore the Euler semilinear method produces the exact solution of the given problem.
15.
Euler's method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
16.
Euler's method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
17.
Euler's method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
18.
Euler's method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
19.
Euler's method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
20.
Euler's method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
21.
Euler's method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
22.
Euler's method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
Section 3.2 Answers
1. y 1 = 1.542812500, y2 = 2.421622101, y3 = 4.208020541
6.
7.
8.
9.
10.
11.
12.
13. Applying variation of parameters to the given initial value problem yields y = ue^{-3x}, where (A): u′ = 1 − 2x, u(0) = 2. Since u‴ = 0, the improved Euler method yields the exact solution of (A). Therefore the improved Euler semilinear method produces the exact solution of the given problem.
1.0 0.105660401 0.100924399 0.099893685 0.099574137
14.
Improved Euler method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
15.
16.
Improved Euler method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
17.
Improved Euler method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
18.
2.0 0.732679223 0.732721613 0.732667905 0.732638628
19.
Improved Euler method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
20.
21.
Improved Euler method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
22.
Improved Euler method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
23.
2.0 1.349489056 1.352345900 1.352990822 1.353193719
24.
25.
26.
27.
28.
29.
30.
Section 3.3 Answers
1. y 1 = 1.550598190, y2 = 2.469649729
2. y 1 = 1.221551366, y2 = 1.492920208
3. y 1 = 1.890339767, y2 = 1.763094323
4. y 1 = 2.961316248, y2 = 2.920128958
5. y 1 = 2.475605264, y2 = 1.825992433
6.
7.
8.
9.
10.
11.
12.
(A): u′ = 1 − 4x + 3x^2 − 4x^3, u(0) = −3. Since u^{(5)} = 0, the Runge-Kutta method yields the exact solution of (A). Therefore the Runge-Kutta semilinear method produces the exact solution of the given problem.
14.
Runge-Kutta method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
15.
Runge-Kutta method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
16.
Runge-Kutta method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
17.
Runge-Kutta method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
18.
Runge-Kutta method
x h = 0.2 h = 0.1 h = 0.05 "Exact"
19.
Runge-Kutta method
x h = 0.0500 h = 0.0250 h = 0.0125 "Exact"
20.
Runge-Kutta method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
21.
Runge-Kutta method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
x h = 0.1 h = 0.05 h = 0.025 "Exact"
22.
Runge-Kutta method
x h = 0.1 h = 0.05 h = 0.025 "Exact"
24.
25.
26.
27.
Section 4.1 Answers
1. Q = 20e −(t ln 2)/3200
2. 2 ln 10
ln 2
days
3. τ = 10
ln 2
ln 4/3
minutes
ln( p0 / p1 )
4. τ ln 2
tp ln p
5. tq
=
ln q
Q1
6. k = 1
t2 −t1
ln
Q
2
7. 20 g
8. 50 ln 2
3
yrs
9. 25
2
ln 2
10.
a. = 20 ln 3yr
b. Q = 100000e
0
−5
11.
a. Q(t) = 5000 − 4750e −t/10
b. 5000lbs
12. 25
1
yrs
13. V = V0 e
t ln 10/2
4 hours
4
1500 ln
14. ln 2
3
yrs; 2
−4/3
Q0
17. 10 gallons
18. V (t) = 15000 + 10000e t/20
19. W (t) = 4 × 10 6
(t + 1 )
2
dollars t years from now
20. p = 100
−t/2
25−24e
21.
.06t
a. P (t) = 1000 e
.06t
+ 50
e
.06/52
−1
e −1
b. 5.64 × 10 −4
22.
a. P − = rP − 12M
b. P = 12M
(1 − e ) + P
r
rt
0e
rt
rP0
c. M ≈ 12(1−e−rN )
d. For (i) approximate M = $402.25 , exact M = $402.80 for (ii) approximate M = $1206.05 , exact M = $1206.93 .
23.
P0
a. T (α) = −
1
r
ln(1 − (1 − e
−rN
)/α)) years S(α) =
(1−e
−rN
)
[rN + α ln(1 − (1 − e
−rN
)/α)]
b. T (1.05) = 13.69yrs, S(1.05) = $3579.94T (1.10) = 12.61yrs, S(1.10) = $6476.63T (1.15) = 11.70yrs, S(1.15) =
$8874.
( a −r) T
S0 (1−e )
if a ≠ r
24. P0 ={ r−a
S0 T if a = r
Section 4.2 Answers
1. ≈ 15.15 ∘
F
11
2. T = −10 + 110e
−t ln
9
3. ≈ 24.33 ∘
F
4.
a. ≈ 91.30 F ∘
7. 32 ∘
F
15.
a. Q + Q = 6 − 2e
′ 2
25
−t/25
b. Q = 75 − 50e − 25 e
−t/25 −2t/25
c. 75
16.
k( S0 −Tm )
a. T = Tm + (T0 − Tm )e
−kt
+
(k−km )
(e
−kmt
−e
−kt
)
b. T = T m + k(S0 − Tm )te
−kt
+ (T0 − Tm )e
−kt
17.
a. T
′
= −k(1 +
a
am
)T + k(Tm0 +
a
am
T0 )
V0
18. V =
a
b V0 −( V0 −a/b) e
−a t
; limt→∞ V (t) = a/b
19. c 1 = c(1 − e
−rt/W
), c2 = c(1 − e
−rt/W
−
r
W
te
−rt/W
)
20.
n−1 j
a. cn = c (1 − e
−rt/W
∑j=0
1
j!
(
W
rt
) )
b. c
c. 0
2 2
c1 W1 +c2 W2 c2 W −c1 W W1 +W2
21. Let c ∞ =
W1 +W2
, α =
2
W1 +W2
1
, andβ =
W1 W2
. Then:
a. c1 (t) = c∞ +
α
W1
e
−rβt
, c2 (t) = c∞ −
α
W2
e
−rβt
Section 4.3 Answers
1. v = − 384
5
(1 − e
−5t/12
); −
384
5
ft/s
2. k = 12; v = −16(1 − e
−2t
)
3. v = 25(1 − e −t
); 25ft/s
4. v = 20 − 27e −t/40
5. ≈ 17.10 ft
−4t/5
40(13+3 e )
6. v = − −4t/5
; −40ft/s
13−3e
7. v = −128(1 − e −t/4
)
v0 k mg v0 k
9. T =
m
k
ln(1 +
mg
); ym = y0 =
m
k
[v0 −
l
ln(1 +
mg
)]
−t
64(1−e )
10. v = − 1+e−t
; 64ft/s
−βt −βt −
−− −−
v0 (1+e )−α(1−e ) mg kg
11. v = α α(1+e−βt )−v0 (1−e−βt )
; −α where α = √ k
and β = 2√ m
ak
−− −
−− −
−− −2√
m
( t−T )
mg
12. T =√
m
kg
tan
−1
(v0 √
mg
k
) v = −√
k
;
1−e
ak
−2√ ( t−T )
m
1+e
13. s ′
= mg −
s+1
as
; a0 = mg
14. (a) ms ′
= mg − f (s)
15.
a. v ′
= −9.8 + v /81
4
b. v T ≈ −5.308m/s
16.
−−
a. v = −32 + 8√|v| ; v = −16 ft/s
′
T
17. ≈ 6.76miles/s
18. ≈ 1.47miles/s
2
gR
20. α = 2
( ym +R)
Section 4.4 Answers
2
y
1. ȳ = 0 0 is a stable equilibrium; trajectories are v
¯
¯ 2
+
4
=c
3
2y
2. ȳ = 0 0 is an unstable equilibrium; trajectories are v
¯
¯ 2
+
3
=c
3
2|y|
3. ȳ = 0 0 is a stable equilibrium; trajectories are v
¯
¯ 2
+
3
=c
−
− −
−
9. No equilibria if a < 0; 0 is unstable if a = 0 ; √a is stable and −√a is unstable if a > 0 .
−
− −
−
10. 0 is a stable equilibrium if a ≤ 0 ; −√a and √a are stable and 0 is unstable if a > 0 .
11. 0 is unstable if a ≤ 0 ; −√−
− −
−
a and √a are unstable and 0 is stable if a > 0 .
22. An equilibrium solution ȳ of y + p(y) = 0 is unstable if there’s an € > 0 such that, for every δ > 0 , there’s a solution of
¯
¯ ′′
− −−−−−−−−−−−−− − −−−−−−−−−−−−− −
(A) with √(y(0) − y) + v (0) < δ , but √(y(t) − y) + v (t) ≥ € for some t > 0 .
¯
¯¯ 2 2 ¯
¯¯ 2 2
Section 4.5 Answers
2xy
1. y ′
=
x2 +3 y 2
2
y
2. y ′
=−
(xy−1)
2 2 2
y( x +y −2 x ln |xy|)
3. y ′
=−
x( x +y
2 2
−2 y
2
ln |xy|)
1/2
4. x y ′
−y = −
x
5. y
2
′ x
+ 2xy = 4x e
6. x y ′
+ y = 4x
3
7. y ′
− y = cos x − sin x
8. (1 + x 2 ′
)y − 2xy = (1 − x ) e
2 x
10. y ′
g − yg
′
= f g −fg
′ ′
11. (x − x 0 )y
′
= y − y0
12. y ′
(y
2
−x
2
+ 1) + 2xy = 0
14.
a. y = −81 + 18x, (9, 81) y = −1 + 2x, (1, 1)
b. y = −121 + 22x, (11, 121) y = −1 + 2x, (1, 1)
c. y = −100 − 20x, (−10, 100) y = −4 − 4x, (−2, 4)
d. y = −25 − 10x, (−5, 25) y = −1 − 2x, (−1, 1)
15. (e) y = 5+3x
4
, (−3/5, 4/5) y =−
5−4x
3
, (4/5, −3/5)
17.
a. y = − (1 + x), (1, −1); y = + , (25, 5)
1
2
5
2
x
10
4
1
2
7
2
x
14
2
5
2
x
10
18. y = 2x 2
19. y = cx
√| x2 =1|
20. y = y 1 + c(x − x1 )
21. y = − x
2
−
x
22. y = −x ln |x| + cx
−−−−−
23. y = √2x + 4
−−−−−
24. y = √x 2
−3
25. y = kx 2
26. (y − x ) 3
(y + x) = k
27. y 2
= −x + k
28. y 2
=−
1
2
ln(1 + 2 x ) + k
2
29. y 2
= −2x − ln(x − 1 )
2
+k
−−−−
2
30. y = 1 + √ 9−x
2
; those with c > 0
y
33. tan −1
x
−
1
2
ln(x
2 2
+y ) = k
y
34. 1
2
ln(x
2 2
+ y ) + (tan α) tan
−1
x
=k
Section 5.1 Answers
1.
(c) y = −2e 2x
+e
5x
2x 5x
(d) y = (5k 0 − k1 )
e
3
+ (k1 − 2 k0 )
e
2.
(c) y = e 3x
(3 cos x − 5 sin x)
(d) y = e x
(k0 cos x + (k1 − k0 ) sin x
3.
(c) y = e x
(7 − 3x)
(d) y = e x
(k0 + (k1 − k0 )x)
4.
c1 c2
a. y = x−1
+
x+1
b. y = 2
x−1
−
3
x+1
; (−1, 1)
5.
a. e x
b. e cos x
2x
c. x + 2x − 2
2
d. − x 5
6
−5/6
e. − x
1
2
f. (x ln |x|) 2
2x
g. e
2 √x
6. 0
7. W (x) = (1 − x 2
)
−1
8. W (x) = 1
10. y 2 =e
−x
11. y 2 = xe
3x
12. y 2 = xe
ax
13. y 2 =
1
14. y 2 = x ln x
15. y 2 =x
a
ln x
16. y 2 =x
1/2
e
−2x
17. y 2 =x
18. y 2 = x sin x
19. y 2 =x
1/2
cos x
20. y 2 = xe
−x
21. y 2 =
1
x2 −4
22. y 2 =e
2x
23. y 2 =x
2
35.
a. y " −2x y + 5y = 0 ′
c. x y " −x y + y = 0
2 ′
d. x y " +x y + y = 0
2 ′
e. y " −y = 0
f. xy " −y = 0 ′
37. (c) y = k 0 y1 + k1 y2
38. y 1 = 1, y2 = x − x0 ; y = k0 + k1 (x − x0 )
k1
40. y 1 = cos ω(x − x0 ), y2 =
1
ω
sin ω(x − x0 )y = k0 cos ω(x − x0 ) +
ω
sin ω(x − x0 )
k0 +k1 x
41. y 1 =
1
1−x
2
y2 =
x
1−x
2
y =
1−x
2
42.
2 3
c1 x + c2 x x ≥0
(c) k 0 = k1 = 0; y = {
2 3
c1 x + c3 x x <0
44.
3 4
a1 x + a2 x x ≥0
(c) k 0 = k1 = 0; y = {
3 4
b1 x + b2 x x <0
Section 5.2 Answers
1. y = c1 e^{−6x} + c2 e^{x}
2. y = e^{2x}(c1 cos x + c2 sin x)
3. y = c1 e^{−7x} + c2 e^{−x}
4. y = e^{2x}(c1 + c2 x)
5. y = e^{−x}(c1 cos 3x + c2 sin 3x)
6. y = e^{−3x}(c1 cos x + c2 sin x)
7. y = e^{4x}(c1 + c2 x)
8. y = c1 + c2 e^{−x}
9. y = e^{x}(c1 cos √2 x + c2 sin √2 x)
10. y = e^{−3x}(c1 cos 2x + c2 sin 2x)
11. y = e^{−x/2}(c1 cos (3x/2) + c2 sin (3x/2))
12. y = c1 e^{−x/5} + c2 e^{x/2}
13. y = e^{−7x}(2 cos x − 3 sin x)
14. y = 4e^{x/2} + 6e^{−x/3}
15. y = 3e^{x/3} − 4e^{−x/2}
16. y = (1/3)e^{−x/2} + 3e^{3x/2}
17. y = e^{3x/2}(3 − 2x)
18. y = 3e^{−4x} − 4e^{−3x}
19. y = 2xe^{3x}
20. y = e^{x/6}(3 + 2x)
21. y = e^{−2x}(3 cos √6 x + (2√6/3) sin √6 x)
23. y = 2e^{−(x−1)} − 3e^{−2(x−1)}
24. y = (1/3)e^{−(x−2)} − (2/3)e^{7(x−2)}
25. y = e^{7(x−1)}(2 − 3(x − 1))
26. y = e^{−(x−2)/3}(2 − 4(x − 2))
27. y = 2 cos((2/3)(x − π/4)) − 3 sin((2/3)(x − π/4))
28. y = 2 cos √3(x − π/3) − (1/√3) sin √3(x − π/3)
30. y = [k0/(r2 − r1)](r2 e^{r1(x−x0)} − r1 e^{r2(x−x0)}) + [k1/(r2 − r1)](e^{r2(x−x0)} − e^{r1(x−x0)})
31. y = e^{r1(x−x0)}[k0 + (k1 − r1 k0)(x − x0)]
32. y = e^{λ(x−x0)}[k0 cos ω(x − x0) + ((k1 − λk0)/ω) sin ω(x − x0)]
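Each general solution above corresponds to the roots of a constant-coefficient characteristic polynomial; answer 1, for example, has roots −6 and 1 and should therefore satisfy y″ + 5y′ − 6y = 0 (an equation reconstructed here from the roots, not quoted from the exercise set). A brief SymPy check of that claim:

```python
# Answer 1: y = c1*e^(-6x) + c2*e^x; characteristic roots -6 and 1
# correspond to y'' + 5y' - 6y = 0 (equation inferred from the roots).
from sympy import symbols, Function, Eq, exp, diff
from sympy.solvers.ode import checkodesol

x, c1, c2 = symbols('x c1 c2')
y = Function('y')
ode = Eq(diff(y(x), x, 2) + 5*diff(y(x), x) - 6*y(x), 0)
sol = Eq(y(x), c1*exp(-6*x) + c2*exp(x))
print(checkodesol(ode, sol))   # expected: (True, 0)
```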
Section 5.3 Answers
1. y
p = −1 + 2x + 3 x ; y = −1 + 2x + 3 x
2 2
+ c1 e
−6x
+ c2 e
x
2. y
p = 1 + x; y = 1 + x + e
2x
(c1 cos x + c2 sin x)
3. y
p = −x + x ; y = −x + x
3 3
+ c1 e
−7x
+ c2 e
−x
4. y
p = 1 −x ; y = 1 −x
2 2
+e
2x
(c1 + c2 x)
5. y
p = 2x + x ; y = 2x + x
3 3
+e
−x
(c1 cos 3x + c2 sin 3x); y = 2x + x
3
+e
−x
(2 cos 3x + 3 sin 3x)
6. y
p = 1 + 2x; y = 1 + 2x + e
−3x
(c1 cos x + c2 sin x); y = 1 + 2x + e
−3x
(cos x − sin x)
8. y
p =
2
9. y
p = 4x
1/2
10. y p =
x
11. y p =
1
x3
12. y p = 9x
1/3
13. y p =
2x
13
3x 3x
16. y p =
e
3
; y =
e
3
+ c1 e
−6x
+ c2 e
x
17. y p =e
2x
; y =e
2x
(1 + c1 cos x + c2 sin x)
19. y p
x
=e ; y =e
x
+e
2x
(c1 + c2 x); y = e
x
+e
2x
(1 − 3x)
20. y p =
4
25
e
x/2
; y =
45
4
e
x/2
+e
−x
(c1 cos 3x + c2 sin 3x)
21. y p =e
−3x
; y =e
−3x
(1 + c1 cos x + c2 sin x)
– –
26. y p
x
= cos 3x; y = cos 3x + e (c1 cos √2x + c2 sin √2x)
30. y = 2
ω −ω
1
2
(M cos ωx + N sin ωx) + c1 cos ω0 x + c2 sin ω0 x
0
3x 3x
33. y p = −1 + 2x + 3 x
2
+
e
3
; y = −1 + 2x + 3 x
2
+
e
3
+ c1 e
−6x
+ c2 e
x
34. y p = 1 +x +e
2x
; y = 1 +x +e
2x
(1 + c1 cos x + c2 sin x)
35. y p = −x + x
3
− 2e
−2x
; y = −x + x
3
− 2e
−2x
+ c1 e
−7x
+ c2 e
−x
36. y p = 1 −x
2 x
+e ; y = 1 −x
2
+e
x
+e
2x
(c1 + c2 x)
37. y p = 2x + x
3
+
45
4
e
x/2
; y = 2x + x
3
+
45
4
e
x/2
+e
−x
(c1 cos 3x + c2 sin 3x)
38. y p = 1 −x
2 x
+e ; y = 1 −x
2
+e
x
+e
2x
(1 + c1 cos x + c2 sin x)
Section 5.4 Answers
1. y
p =e
3x
(−
1
4
+
x
2
)
2. y
p =e
−3x
(1 −
x
4
)
3. y
p =e
x
(2 −
3x
4
)
4. y
p =e
2x
(1 − 3x + x )
2
5. y
p =e
−x
(1 + x )
2
6. y
p
x
= e (−2 + x + 2 x )
2
7. y
p = xe
−x
(
1
6
+
x
2
)
8. y
p
x
= x e (1 + 2x)
9. y
p = xe
3x
(−1 +
x
2
)
10. y p = xe
2x
(−2 + x)
11. y p =x e
2 −x
(1 +
x
2
)
12. y p =x e
2 x
(
1
2
− x)
2 2x
13. y p =
x e
2
(1 − x + x )
2
2 −x/3
14. y p =
x e
27
(3 − 2x + x )
2
3x
15. y = e
4
(−1 + 2x) + c1 e
x
+ c2 e
2x
16. y = e x
(1 − 2x) + c1 e
2x
+ c2 e
4x
2x
17. y = e
5
(1 − x) + e
−3x
(c1 + c2 x)
18. y = x e x
(1 − 2x) + c1 e
x
+ c2 e
−3x
19. y = e x 2
[ x (1 − 2x) + c1 + c2 x]
20. y = −e 2x
(1 + x) + 2 e
−x
−e
5x
21. y = x e 2x
+ 3e
x
−e
−4x
22. y = e −x
(2 + x − 2 x ) − e
2 −3x
23. y = e −2x
(3 − x) − 2 e
5x
24. y p =−
e
3
(1 − x) + e
−x
(3 + 2x)
25. y p
x
= e (3 + 7x) + x e
3x
26. y p =x e
3 4x
+ 1 + 2x + x
2
27. y p = xe
2x
(1 − 2x) + x e
x
28. y p
x
= e (1 + x) + x e
2 −x
29. y p =x e
2 −x
+e
3x
(1 − x )
2
31. y p = 2e
2x
32. y p = 5x e
4x
33. y p =x e
2 4x
3x
34. y p =−
e
4
(1 + 2x − 2 x )
2
35. y p = xe
3x
(4 − x + 2 x )
2
36. y p =x e
2 −x/2
(−1 + 2x + 3 x )
2
37.
a. y =e
−x
(
4
3
x
3/2
+ c1 x + c2 )
2
b. y = e −3x
[
x
4
(2 ln x − 3) + c1 x + c2 ]
c. y =e
2x
[(x + 1) ln |x + 1| + c1 x + c2 ]
3
d. y = e −x/2
(x ln |x| +
x
6
+ c1 x + c2 )
39.
a. e (3 + x) + c
x
b. −e (1 + x ) + c
−x 2
−2x
c. − e
(3 + 6x + 6 x = 4 x
8
2 3
)+c
d. e (1 + x ) + c
x 2
e. e (−6 + 4x + 9x ) + c
3x 2
f. −e (1 − 2x + 3x ) + c
−x 3 4
k αx r
(−1 ) k! e (−αx)
40. α
k+1
∑
k
r=0 r!
+c
Section 5.5 Answers
1. y p = cos x + 2 sin x
3. y p
x
= e (−2 cos x + 3 sin x)
2x
4. y p =
e
2
(cos 2x − sin 2x)
5. y p
x
= −e (x cos x − sin x)
6. y p =e
−2x
(1 − 2x)(cos 3x − sin 3x)
9. y p = x [x cos(
x
2
) − 3 sin(
x
2
)]
10. y p = xe
−x
(3 cos x + 4 sin x)
11. y p
x
= x e [(−1 + x) cos 2x + (1 + x) sin 2x]
13. y p
2
= (1 + 2x + x ) cos x + (1 + 3 x ) sin x
2
14. y p =
x
2
(cos 2x − sin 2x)
15. y p = e (x
x 2
cos x + 2 sin x)
16. y p
x
= e (1 − x )(cos x + sin x)
2
17. y p = e (x
x 2 3
− x )(cos x + sin x)
18. y p =e
−x
[(1 + 2x) cos x − (1 − 3x) sin x]
20. y p = −x
3
cos x + (x + 2 x ) sin x
2
21. y p = −e
−x 2
[(x + x ) cos x − (1 + 2x) sin x]
22. y = e x
(2 cos x + 3 sin x) + 3 e
x
−e
6x
23. y = e x
[(1 + 2x) cos x + (1 − 3x) sin x]
24. y = e x
(cos x − 2 sin x) + e
−3x
(cos x + sin x)
25. y = e 3x
[(2 + 2x) cos x − (1 + 3x) sin x]
26. y = e 3x
[(2 + 3x) cos x + (4 − x) sin x] + 3 e
x
− 5e
2x
27. y p = xe
3x
−
e
5
(cos x − 2 sin x)
x −x
2
(1 − x) +
e
29. y p =−
xe
2
(2 + x) + 2x e
2x
+
1
10
(3 cos x + sin x)
x 2
30. y p
x
= x e (cos x + x sin x) +
e
25
(4 + 5x) + 1 + x +
x
2 2x
31. y p =
x e
6
(3 + x) − e
2x
(cos x − sin x) + 3 e
3x
+
1
4
(2 + x)
32. y = (1 − 2x + 3x 2
)e
2x
+ 4 cos x + 3 sin x
33. y = x e −2x
cos x + 3 cos 2x
34. y = − 3
8
cos 2x +
1
4
sin 2x + e
−x
−
13
8
e
−2x
−
3
4
xe
−2x
40.
a. 2x cos x − (2 − x ) sin x + c
2
2
2 2
−x
25
−x
2
2 2
2
2 3
Section 5.6 Answers
1. y = 1 − 2x + c 1e
−x
+ c2 x e ; { e
x −x
, xe }
x
c2
2. y = 4
3x
2
+ c1 x +
x
; {x, 1/x}
2
x(ln |x|)
3. y = 2
+ c1 x + c2 x ln |x|; {x, x ln |x|}
4. y = (e 2x x
+ e ) ln(1 + e
−x
) + c1 e
2x
+ c2 e ; { e
x 2x x
,e }
5. y = e x
(
4
5
x
7/2
+ c1 + c2 x) ; { e , x e }
x x
6. y = e x
(2 x
3/2
+x
1/2
ln x + c1 x
1/2
+ c2 x
−1/2
); { x
1/2 x
e ,x
−1/2
e
−x
}
7. y = e x
(x sin x + cos x ln | cos x| + c1 cos x + c2 sin x); { e
x
cos x, e
x
sin x}
8. y = e
2 2 2
−x −2x −x −x
(2 e + c1 + c2 x); { e , xe }
c2
9. y = 2x + 1 + c 1x
2
+
x
2
2
; { x , 1/ x }
2
2x
10. y = xe
9
+ xe
−x
(c1 + c2 x); {x e
−x
,x e
2 −x
}
11. y = x e x
(
x
3
+ c1 +
c2
x2
) ; {x e , e /x}
x x
2 x
(2x−1) e
12. y = − 8
+ c1 e
x
+ c2 x e
−x
; {e , xe
x −x
}
13. y = x 4
+ c1 x
2
+ c2 x
2
ln |x|; { x , x
2 2
ln |x|}
14. y = e −x
(x
3/2
+ c1 + c2 x
1/2
); { e
−x
,x
1/2
e
−x
}
15. y = e x
(x + c1 + c2 x ); { e , x e }
2 x 2 x
2x
16. y = x 1/2
(
e
2
+ c1 + c2 e ) ; { x
x 1/2
,x
1/2
e }
x
17. y = −2x 2
ln x + c1 x
2 4
+ c2 x ; { x , x }
2 4
18. {e x x
, e /x}
19. {x 2
,x }
3
22. {e x
,x e }
3 x
23. {x a
,x
a
ln |x|}
26. {x 1/2
,x
1/2
cos x}
27. {x 1/2
e
2x
,x
1/2
e
−2x
}
28. {1/x, e 2x
}
29. {e x
,x }
2
30. {e 2x
,x e
2 2x
}
31. y = x 4
= 6x
2
− 8x
2
ln |x|
32. y = 2e 2x
− xe
−x
(x+1)
33. y = 4
x
[−e (3 − 2x) + 7 e
−x
]
2
34. y = x
4
+x
2
(x+2)
35. y = 6(x−2)
+
2x
2
x −4
38.
−kc1 sin kx+kc2 cos kx
a. y =
c1 cos kx+c2 sin kx
x
c1 +2 c2 e
b. y = c1 +c2 e
x
7x
−6 c1 +c2 e
c. y =
c1 +c2 e
7x
6x
7 c1 +c2 e
d. y = − c1 +c2 e6x
c1 +c2 (x+6)
g. y = 6( c1 +c2 x)
39.
x
c1 +c2 e (1+x)
a. y =
x( c1 +c2 e )
x
2
−2 c1 x+c2 (1−2 x )
b. y = c1 +c2 x
2x
−c1 +c2 e (x+1)
c. y =
c1 +c2 xe2x
−3x
2 c1 +c2 e (1−x)
d. y = c1 +c2 xe
−3x
Section 5.7 Answers
− cos 3x ln | sec 3x+tan 3x|
1. y
p =
9
3. y
p
x
= 4 e (1 + e ) ln(1 + e
x −x
)
4. y
p
x
= 3 e (cos x ln | cos x| + x sin x)
5. y
p =
8
5
x
7/2
e
x
6. y
p =e
x
ln(1 − e
−2x
)−e
−x
ln(e
2x
− 1)
3
2( x −3)
7. y
p =
3
2x
8. y
p =
e
9. y
p =x
1/2
e
x
ln x
10. y p =e
−x(x+2)
11. y p = −4 x
5/2
12. y p = −2 x
2
sin x − 2x cos x
−x
xe (x+1)
13. y p =−
2
√x cos √x
14. y p =−
2
4 x
15. y p =
3x e
16. y p =x
a+1
17. y p =
x sin x
18. y p = −2 x
2
19. y p = −e
−x
sin x
√x
20. y p =−
2
3/2
21. y p =
x
22. y p = −3 x
2
3 x
23. y p =
x e
3/2
24. y p =−
4x
15
25. y p =x e
3 x
26. y p = xe
x
27. y p =x
2
28. y p = x e (x − 2)
x
− x
29. y p = √x e (x − 1)/4
2x 2
e (3 x −2x+6) −x
30. y = 6
+
xe
31. y = (x − 1) 2
ln(1 − x) + 2 x
2
− 5x + 3
32. y = (x 2
− 1)e
x
− 5(x − 1)
2
x( x +6)
33. y = 3( x2 −1)
2
34. y = − x
2
+x +
1
2x
2
2
x (4x+9)
35. y = 6(x+1)
38.
x
a. y = k 0 cosh x + k1 sinh x + ∫
0
sinh(x − t)f (t)dt
x
b. y = k
′
0 sinh x + k1 cosh x + ∫
0
cosh(x − t)f (t)dt
39.
x
a. y(x) = k cos x + k sin x + ∫ sin(x − t)f (t)dt
0 1
0
x
b. y (x) = −k sin x + k cos x + ∫ cos(x − t)f (t)dt
′
0 1 0
Section 6.1 Answers
1. y = 3 cos 4√6 t − (1/(2√6)) sin 4√6 t ft
2. y = −(1/4) cos 8√5 t − (1/(4√5)) sin 8√5 t ft
3. y = 1.5 cos 14√10 t cm
4. y = (1/4) cos 8t − (1/16) sin 8t ft; R = √17/16 ft; ω0 = 8 rad/s; T = π/4 s; ϕ ≈ −.245 rad ≈ −14.04°
5. y = 10 cos 14t + (25/14) sin 14t cm; R = (5/14)√809 cm; ω0 = 14 rad/s; T = π/7 s; ϕ ≈ .177 rad ≈ 10.12°
6. y = −(1/4) cos √70 t + (2/√70) sin √70 t m; R = (1/4)√(67/35) m; ω0 = √70 rad/s; T = 2π/√70 s; ϕ ≈ 2.38 rad ≈ 136.28°
7. y = (2/3) cos 16t − (1/4) sin 16t ft
8. y = (1/2) cos 8t − (3/8) sin 8t ft
9. .72 m
10. y = (1/3) sin t + (1/2) cos 2t + (5/6) sin 2t ft
11. y = (16/5)(4 sin(t/4) − sin t)
12. y = −(1/16) sin 8t + (1/3) cos 4√2 t − (1/(8√2)) sin 4√2 t
13. y = −t cos 8t − (1/6) cos 8t + (1/8) sin 8t ft
14. T = 4√2 s
15. ω = 8 rad/s; y = −(t/16)(−cos 8t + 2 sin 8t) + (1/128) sin 8t ft
16. ω = 4√6 rad/s; y = −(t/√6)[(8/3) cos 4√6 t + 4 sin 4√6 t] + (1/9) sin 4√6 t ft
17. y = (t/2) cos 2t − (t/4) sin 2t + 3 cos 2t + 2 sin 2t m
18. y = y0 cos ω0 t + (v0/ω0) sin ω0 t; R = (1/ω0)√((ω0 y0)^2 + (v0)^2); cos ϕ = ω0 y0/√((ω0 y0)^2 + (v0)^2); sin ϕ = v0/√((ω0 y0)^2 + (v0)^2)
19. The object with the longer period weighs four times as much as the other.
20. T2 = √2 T1, where T1 is the period of the smaller object.
21. k1 = 9k2, where k1 is the spring constant of the system with the shorter period.
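The amplitude and phase values quoted in answers 4-6 follow from the formulas in answer 18. As a numerical illustration (a sketch added here, using the data of answer 4), y = (1/4) cos 8t − (1/16) sin 8t gives R = √17/16 ft, T = π/4 s, and ϕ ≈ −.245 rad ≈ −14.04°:

```python
# Amplitude/phase for y = a*cos(w0*t) + b*sin(w0*t) with a = 1/4, b = -1/16,
# w0 = 8 (the data of answer 4): R = sqrt(a^2 + b^2), phi = atan2(b, a).
import math

a, b, w0 = 1/4, -1/16, 8.0
R = math.hypot(a, b)
phi = math.atan2(b, a)
print(R, math.sqrt(17)/16)        # both approximately 0.2577 (= sqrt(17)/16)
print(phi, math.degrees(phi))     # approximately -0.245 rad, -14.04 degrees
print(2*math.pi/w0)               # period T = pi/4, about 0.785 s
```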
Section 6.2 Answers
1. y = (e^{−2t}/2)(3 cos 2t − sin 2t) ft; √(5/2) e^{−2t} ft
2. y = −e^{−t}(3 cos 3t + (1/3) sin 3t) ft; (√82/3) e^{−t} ft
3. y = e^{−16t}(1/4 + 10t) ft
4. y = −(e^{−3t}/4)(5 cos t + 63 sin t) ft
5. 0 ≤ c < 8 lb-sec/ft
6. y = (1/2)e^{−3t}(cos √91 t + (11/√91) sin √91 t) ft
7. y = −(e^{−4t}/3)(2 + 8t) ft
8. y = e^{−10t}(9 cos 4√6 t + (45/(2√6)) sin 4√6 t) cm
9. y = e^{−3t/2}((3/2) cos (√41/2)t + (9/(2√41)) sin (√41/2)t) ft
10. y = e^{−3t/2}((1/2) cos (√119/2)t − (9/(2√119)) sin (√119/2)t) ft
11. y = e^{−8t}((1/4) cos 8√2 t − (1/(4√2)) sin 8√2 t) ft
12. y = e^{−t}(−(1/3) cos 3√11 t + (14/(9√11)) sin 3√11 t) ft
13. yp = (22/61) cos 2t + (2/61) sin 2t ft
14. y = −(2/3)(e^{−8t} − 2e^{−4t})
15. y = e^{−2t}((1/10) cos 4t − (1/5) sin 4t) m
16. y = e^{−3t}(10 cos t − 70 sin t) cm
17. yp = −(2/15) cos 3t + (1/15) sin 3t ft
18. yp = (11/100) cos 4t + (27/100) sin 4t cm
19. yp = (42/73) cos t + (39/73) sin t ft
20. y = −(1/2) cos 2t + (1/4) sin 2t m
21. yp = (1/(cω0))(−β cos ω0 t + α sin ω0 t)
24. y = e^{−ct/2m}(y0 cos ω1 t + (1/ω1)(v0 + cy0/(2m)) sin ω1 t)
25. y = ((r2 y0 − v0)/(r2 − r1)) e^{r1 t} + ((v0 − r1 y0)/(r2 − r1)) e^{r2 t}
26. y = e^{r1 t}(y0 + (v0 − r1 y0)t)
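Answer 25 writes the solution directly in terms of the roots r1, r2 and the initial data; the small symbolic sketch below (an added check, not part of the answer list) confirms that it satisfies y(0) = y0 and y′(0) = v0.

```python
# Check answer 25: y(0) = y0 and y'(0) = v0 for
# y = (r2*y0 - v0)/(r2 - r1)*e^(r1*t) + (v0 - r1*y0)/(r2 - r1)*e^(r2*t).
from sympy import symbols, exp, diff, simplify

t, r1, r2, y0, v0 = symbols('t r1 r2 y0 v0')
y = (r2*y0 - v0)/(r2 - r1)*exp(r1*t) + (v0 - r1*y0)/(r2 - r1)*exp(r2*t)

assert simplify(y.subs(t, 0) - y0) == 0
assert simplify(diff(y, t).subs(t, 0) - v0) == 0
```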
Section 6.3 Answers
1. I = e^{−15t}(2 cos 5√15 t − (6/√31) sin 5√31 t)
2. I = e^{−20t}(2 cos 40t − 101 sin 40t)
3. I = −(200/3) e^{−10t} sin 30t
4. I = −10e^{−30t}(cos 40t + 18 sin 40t)
5. I = −e^{−40t}(2 cos 30t − 86 sin 30t)
6. Ip = −(1/3)(cos 10t + 2 sin 10t)
7. Ip = (20/37)(cos 25t − 6 sin 25t)
8. Ip = (3/13)(8 cos 50t − sin 50t)
9. Ip = (20/123)(17 sin 100t − 11 cos 100t)
10. Ip = −(45/52)(cos 30t + 8 sin 30t)
12. ω0 = 1/√(LC); maximum amplitude = √(U^2 + V^2)/R
Section 6.4 Answers
2 2 2
eρ ρ
1. If e =1 , then Y
2
= ρ(ρ − 2X) ; if e ≠ 1 (X +
1−e
2
) +
Y
1−e
2
= 2
if; e <1 let
(1−e2 )
eρ ρ ρ
X0 = − 2
, a = 2
, b = 2
1−e 1−e 1−e
1/2
2 ′ 2
2
ρ ρr
2. Let h =r θ
2
0
′
0
; then ρ =
h
k
, e = [(
r0
− 1) +(
h
0
) ] . If e =0 , then θ0 is undefined, but also irrelevant if e ≠0
′
ρ pr
then ϕ = θ 0 −α , where −π ≤ α < π cos α = 1
e
(
r0
− 1) and sin α = eh
0
.
3.
γ2 −γ1
a. e =
γ +γ
1 2
1/2
2gγ2
b. r 0 = Rγ1 , r
′
0
=0 , θ arbitrary, θ
0
′
0
=[ 3
]
Rγ ( γ1 +γ2 )
1
4. f (r) = −mh 2
(
6c
r
4
+
1
r
3
)
2 2
mh ( γ +1)
5. f (r) = − r3
6.
2 ′
du( θ0 ) r0
a. d u
2
+ (1 −
k
2
) u = 0, u(θ0 ) =
1
r0
,
dθ
=−
h
dθ h
1/2
b. with γ = ∣∣1 − k
2
∣
∣
h
′ −1
r0 r
i. r = r0 (cosh γ(θ − θ0 ) −
γh
0
sinh γ(θ − θ0 ))
′ −1
r0 r
ii. r = r 0 (1 −
h
0
(θ − θ0 ))
′ −1
r0 r
iii. r = r0 (cos γ(θ − θ0 ) −
γh
0
sin γ(θ − θ0 ))
Section 7.1 Answers
1.
a. R = 2; I = (−1, 3)
b. R = 1/2; I = (3/2, 5/2)
c. R = 0
d. R = 16; I = (−14, 18)
e. R = ∞; I = (−∞, ∞)
f. R = 4/3; I = (−25/3, −17/3)
3.
a. R = 1; I = (0, 2)
b. R = √2; I = (−2 − √2, −2 + √2)
c. R = ∞; I = (−∞, ∞)
d. R = 0
e. R = √3; I = (−√3, √3)
f. R = 1; I = (0, 2)
5.
a. R = 3; I = (0, 6)
b. R = 1; I = (−1, 1)
c. R = 1/√3; I = (3 − 1/√3, 3 + 1/√3)
d. R = ∞; I = (−∞, ∞)
e. R = 0
f. R = 2; I = (−1, 3)
11. b n = 2(n + 2)(n + 1)an+2 + (n + 1)nan+1 + (n + 3)an
17. b 0
2
= 8 a2 + 4 a1 − 6 a0 , bn = 4(n + 2)(n + 1)an+2 + 4(n + 1 ) an+1 + (n
2
+ n − 6)an − 3 an−1 , n ≥ 1
21. b 0
2
= (r + 1)(r + 2)a0 , bn = (n + r + 1)(n + r + 2)an − (n + r − 2 ) an−1 , n ≥ 1.
23. b 0
2 2 2
= (r − 1 ) a0 , b1 = r a1 + (r + 2)(r + 3)a0 , bn = (n + r − 1 ) an + (n + r + 1)(n + r + 2)an−1
+ (n + r − 1)an−2 , n ≥ 2
24. b 0 = r(r + 1)a0 , b1 = (r + 1)(r + 2)a1 + 3(r + 1)(r + 2)a0 , bn = (n + r)(n + r + 1)an
25. b 0 = (r + 2)(r + 1)a0 , b1 = (r + 3)(r + 2)a1 , bn = (n + r + 2)(n + r + 1)an + 2(n + r − 1)(n + r − 3)an−2 , n
≥2
26. b 0 = 2(r + 1)(r + 3)a0 , b1 = 2(r + 2)(r + 4)a1 , bn = 2(n + r + 1)(n + r + 3)an + (n + r − 3)(n + r)an−2 , n ≥ 2
Section 7.2 Answers
1. y = a0 ∑
∞
m=0
(−1 )
m
(2m + 1)x
2m
+ a1 ∑
∞
m=0
(−1 )
m
(m + 1)x
2m+1
2m
∞
2. y = a0 ∑m=0 (−1 )
m+1
2m−1
x
+ a1 x
3. y = a0 (1 − 10 x
2 4
+ 5 x ) + a1 (x − 2 x
3
+
1
5
x )
5
4. y = a0 ∑
∞
m=0
(m + 1)(2m + 1)x
2m
+
a1
3
∑
∞
m=0
(m + 1)(2m + 3)x
2m+1
4j+1 2m+1
5. y = a0 ∑
∞
m=0
(−1 )
m
[∏
m−1
j=0 2j+1
]x
2m
+ a1 ∑
∞
m=0
(−1 )
m
[∏
m−1
j=0
(4j + 3)]
x
2
m
m!
2 2
∞ m−1 (4j+1) 2m
∞ m−1 (4j+3) 2m+1
6. y = a0 ∑m=0 (−1 )
m
[∏j=0
2j+1
]
8
x
m
m!
+ a1 ∑m=0 (−1 )
m
[∏j=0
2j+3
]
x
m
8 m!
m−1
m ∏ (2j+3)
∞ ∞
7. y = a 2 m! j=0
2m 2m+1
0 ∑m=0 m−1
x + a1 ∑m=0 m x
∏ (2j+1) 2 m!
j=0
8. y = a0 (1 − 14 x
2
+
35
3
4
x ) + a1 (x − 3 x
3
+
3
5
x
5
+
1
35
x )
7
2m 2m+1
∞ ∞
9. (a) y = a 0 ∑
m=0
(−1 )
m
m−1
x
+ a1 ∑
m=0
(−1 )
m x
2
m
m!
∏j=0 (2j+1)
∞ m−1 4j+3 2m
∞ m−1 4j+5 2m+1
10. (a) y = a 0 ∑
m=0
(−1 )
m
[∏
j=0 2j+1
]
2
x
m
m!
+ a1 ∑
m=0
(−1 )
m
[∏
j=0 2j+3
]
x
2
m
m!
11. y = 2 − x − x 2
+
1
3
x
3
+
5
12
x
4
−
1
6
x
5
−
17
72
x
6
+
126
13
x
7
+…
12. y = 1 − x + 3x 2
−
5
2
x
3
+ 5x
4
−
21
8
x
5
+ 3x
6
−
11
16
x
7
+…
13. y = 2 − x − 2x 2
+
1
3
x
3
+ 3x
4
−
5
6
x
5
−
49
5
x
6
+
45
14
x
7
+…
2m 2m+1
∞ (x−3) ∞ (x−3)
16. y = a 0 ∑m=0
(2m)!
+ a1 ∑m=0
(2m+1)!
2m 2m+1
(x−3) (x−3)
17. y = a 0 ∑
∞
m=0 2
m
m!
+ a1 ∑
∞
m=0 m−1
∏ (2j+3)
j=0
2m m
(x−1) 4 (m+1)!
18. y = a 0 ∑
∞
m=0
[∏
m−1
j=0
(2j + 3)]
m!
+ a1 ∑
∞
m=0 m−1
(x − 1 )
2m+1
∏j=0 (2j+3)
19. y = a 0 (1 − 6(x − 2 )
2
+
4
3
(x − 2 )
4
+
135
8 6
(x − 2 ) ) + a1 ((x − 2) −
10
9
(x − 2 ) )
3
m m
20. y = a 0 ∑
∞
m=0
(−1 )
m
[∏
m−1
j=0
(2j + 1)]
4
m
3
m!
(x + 1 )
2m
+ a1 ∑
∞
m=0
(−1 )
m
m−1
3 m!
(x + 1 )
2m+1
∏j=0 (2j+3)
21. y = −1 + 2x + 3
8
x
2
−
1
3
x
3
−
128
3
x
4
−
1024
1
x
6
+…
4
(x − 3 )
4
+
3
5
(x − 3 )
5
+
7
24
(x − 3 )
6
−
35
4
(x − 3 )
7
+…
23. y = −1 + (x − 1) + 3(x − 1) 2
−
5
2
(x − 1 )
3
−
27
4
(x − 1 )
4
+
21
4
(x − 1 )
5
+
27
2
(x − 1 )
6
−
81
8
(x − 1 )
7
+…
2
(x − 3 )
4
−
5
4
(x − 3 )
5
−
49
20
(x − 3 )
6
+
135
56
(x − 3 )
7
+…
4
(x − 4 )
4
−
1
5
(x − 4 )
5
3
(x + 1 )
3
+ 20(x + 1 )
4
−
4
3
(x + 1 )
5
−
8
9
(x + 1 )
6
27.
a. y = a 0 ∑
∞
m=0
(−1 )
m
x
2m
+ a1 ∑
∞
m=0
(−1 )
m
x
2m+1
a0 +a1 x
b. y = 1+x
2
3m 3m+1
∞ ∞
33. y = a 0 ∑m=0
m
x
m−1
+ a1 ∑m=0
m
x
m−1
3 m! ∏ (3j+2) 3 m! ∏ (3j+4)
j=0 j=0
∞ m m−1 3m
∞
m
34. y = a 0 ∑m=0 (
2
3
) [∏j=0 (3j + 2)]
x
m!
+ a1 ∑m=0 m−1
6 m!
x
3m+1
∏ (3j+4)
j=0
m 3m+1
35. y = a 0 ∑
∞
m=0
(−1 )
m
m−1
3 m!
x
3m
+ a1 ∑
∞
m=0
(−1 )
m
[∏
m−1
j=0
(3j + 4)]
x
3
m
m!
∏j=0 (3j+2)
3j−5
36. y = a 0 (1 − 4x
3 6
+ 4 x ) + a1 ∑
∞
m=0
m
2 [∏
m−1
j=0 3j+4
]x
3m+1
37. y = a 0 (1 +
21
2
x
3
+
42
5
x
6
+
7
20
9
x ) + a1 (x + 4 x
4
+
10
7
x )
7
5j+1 m 5m+1
39. y = a 0 ∑
∞
m=0
(−2 )
m
[∏
m−1
j=0 5j+4
]x
5m
+ a1 ∑
∞
m=0
(−
2
5
) [∏
m−1
j=0
(5j + 2)]
x
m!
4m 4m+1
40. y = a 0 ∑
∞
m=0
(−1 )
m
m
x
m−1
+ a1 ∑
∞
m=0
(−1 )
m
m
x
m−1
4 m! ∏ (4j+3) 4 m! ∏ (4j+5)
j=0 j=0
7m 7m+1
41. y = a 0 ∑
∞
m=0
(−1 )
m
∏
∞
x
(7j+6)
+ a1 ∑
∞
m=0
(−1 )
m x
7
m
m!
j=0
42. y = a 0 (1 −
9
7
8
x ) + a1 (x −
7
9
9
x )
43. y = a 0 ∑
∞
m=0
x
6m
+ a1 ∑
∞
m=0
x
6m+1
6m 6m+1
∞ ∞
44. y = a 0 ∑m=0 (−1 )
m
m−1
x
+ a1 ∑m=0 (−1 )
m x
6
m
m!
∏ (6j+5)
j=0
Section 7.3 Answers
1. y = 2 − 3x − 2x 2
+
7
2
x
3
−
55
12
x
4
+
59
8
x
5
−
83
6
x
6
+
9547
336
x
7
+…
2. y = −1 + 2x − 4x 3
+ 4x
4
+ 4x
5
− 12 x
6
+ 4x
7
+…
3. y = 1 + x 2
−
2
3
x
3
+
11
6
x
4
−
9
5
x
5
+
329
90
x
6
−
1301
315
x
7
+…
4. y = x − x 2
−
7
2
x
3
+
15
2
x
4
+
45
8
x
5
−
261
8
x
6
+
207
16
x
7
+…
5. y = 4 + 3x − 15
4
x
2
+
1
4
x
3
+
11
16
x
4
−
5
16
x
5
+
1
20
x
6
+
120
1
x
7
+…
6. y = 7 + 3x − 16
3
x
2
+
13
3
x
3
−
23
9
x
4
+
10
9
x
5
−
27
7
x
6
−
1
9
x7 + …
7. y = 2 + 5x − 7
4
x
2
−
3
16
x
3
+
37
192
x
4
−
192
7
x
5
−
1920
1
x
6
+
19
11520
x
7
+…
8. y = 1 − (x − 1) + 4
3
(x − 1 )
3
−
4
3
(x − 1 )
4
−
4
5
(x − 1 )
5
+
136
45
(x − 1 )
6
−
104
63
(x − 1 )
7
+…
9. y = 1 − (x + 1) + 4(x + 1) 2
−
13
3
(x + 1 )
3
+
77
6
(x + 1 )
4
−
278
15
(x + 1 )
5
+
1942
45
(x + 1 )
6
−
23332
315
(x + 1 )
7
+…
10. y = 2 − (x − 1) − 1
2
(x − 1 )
2
+
5
3
(x − 1 )
3
−
19
12
(x − 1 )
4
+
7
30
(x − 1 )
5
+
59
45
(x − 1 )
6
−
1091
630
(x − 1 )
7
+…
11. y = −2 + 3(x + 1) − 1
2
(x + 1 )
2
−
2
3
(x + 1 )
3
+
5
8
(x + 1 )
4
−
11
30
(x + 1 )
5
+
29
144
(x + 1 )
6
−
101
840
(x + 1 )
7
+…
5
(x − 1 )
5
+ 19(x − 1 )
6
−
604
35
(x − 1 )
7
+…
19. y = 2 − 7x − 4x 2
−
17
6
x
3
−
3
4
x
4
−
40
9
x
5
+…
20. y = 1 − 2(x − 1) + 1
2
(x − 1 )
2
−
1
6
(x − 1 )
3
+
5
36
(x − 1 )
4
−
1080
73
(x − 1 )
5
+…
21. y = 2 − (x + 2) − 7
2
(x + 2 )
2
+
4
3
(x + 2 )
3
−
1
24
(x + 2 )
4
+
1
60
(x + 2 )
5
+…
22. y = 2 − 2(x + 3) − (x + 3) 2
+ (x + 3 )
3
−
11
12
(x + 3 )
4
+
67
60
(x + 3 )
5
+…
23. y = −1 + 2x + 1
3
x
3
−
12
5
x
4
+
2
5
x
5
+…
24. y = 2 − 3(x + 1) + 7
2
(x + 1 )
2
− 5(x + 1 )
3
+
197
24
(x + 1 )
4
−
287
20
(x + 1 )
5
+…
25. y = −2 + 3(x + 2) − 9
2
(x + 2 )
2
+
11
6
(x + 2 )
3
+
5
24
(x + 2 )
4
+
7
20
(x + 2 )
5
+…
26. y = 2 − 4(x − 2) − 1
2
(x − 2 )
2
+
2
9
(x − 2 )
3
+
49
432
(x − 2 )
4
+
23
1080
(x − 2 )
5
+…
27. y = 1 + 2(x + 4) − 1
6
(x + 4 )
2
−
10
27
(x + 4 )
3
+
19
648
(x + 4 )
4
+
13
324
(x + 4 )
5
+…
28. y = −1 + 2(x + 1) − 1
4
(x + 1 )
2
+
1
2
(x + 1 )
3
−
65
96
(x + 1 )
4
+
67
80
(x + 1 )
5
+…
31.
c1 c2
a. y = 1+x
+
1+2x
c1 c2
b. y = 1−2x
+
1−3x
c1 c2 x
c. y = 1−2x
+ 2
(1−2x)
c1 c2 x
d. y = 2+x
+ 2
(2+x)
e. y =
c1
2+x
+
2+3x
c2
32. y = 1 − 2x − 3
2
x
2
+
5
3
x
3
+
17
24
x
4
−
11
20
x
5
+…
33. y = 1 − 2x − 5
2
x
2
+
2
3
x
3
−
3
8
x
4
+
1
3
x
5
+…
34. y = 6 − 2x + 9x 2
+
2
3
x
3
−
23
4
x
4
−
10
3
x
5
+…
35. y = 2 − 5x + 2x 2
−
10
3
x
3
+
3
2
x
4
−
25
12
x
5
+…
36. y = 3 + 6x − 3x 2
+x
3
− 2x
4
−
17
20
x
5
+…
37. y = 3 − 2x − 3x 2
+
3
2
x
3
+
3
2
x
4
−
49
80
x
5
+…
38. y = −2 + 3x + 4
3
x
2
−x
3
−
19
54
x
4
+
13
60
x
5
+…
m 2m m 2m+1
∞ (−1) x 2
∞ (−1) x 2
39. y1 = ∑m=0
m!
=e
−x
, y2 = ∑m=0
m!
= xe
−x
40. y = −2 + 3x + x 2
−
1
6
x
3
−
3
4
x
4
+
31
120
x
5
+…
41. y = 2 + 3x − 7
2
x
2
−
5
6
x
3
+
41
24
x
4
+
41
120
x
5
+…
42. y = −3 + 5x − 5x 2
+
23
6
x
3
−
23
12
x
4
+
11
30
x
5
+…
43. y = −2 + 3(x − 1) + 3
2
(x − 1 )
2
−
17
12
(x − 1 )
3
−
12
1
(x − 1 )
4
+
1
8
(x − 1 )
5
+…
44. y = 2 − 3(x + 2) + 1
2
(x + 2 )
2
−
1
3
(x + 2 )
3
+
31
24
(x + 2 )
4
−
53
120
(x + 2 )
5
+…
45. y = 1 − 2x + 3
2
x
2
−
11
6
x
3
+
15
8
x
4
−
71
60
x
5
+…
46. y = 2 − (x + 2) − 7
2
(x + 2 )
2
−
43
6
(x + 2 )
3
−
203
24
(x + 2 )
4
−
167
30
(x + 2 )
5
+…
47. y = 2 − x − x 2
+
7
6
x
3
−x
4
+
120
89
x
5
+…
48. y = 1 + 3
2
(x − 1 )
2
+
1
6
(x − 1 )
3
−
1
8
(x − 1 )
5
+…
49. y = 1 − 2(x − 3) + 1
2
(x − 3 )
2
−
1
6
(x − 3 )
3
+
1
4
(x − 3 )
4
−
1
6
(x − 3 )
5
+…
Section 7.4 Answers
1. y = c1 x^{−4} + c2 x^{−2}
2. y = c1 x + c2 x^7
3. y = x(c1 + c2 ln x)
4. y = x^{−2}(c1 + c2 ln x)
5. y = c1 cos(ln x) + c2 sin(ln x)
6. y = x^2[c1 cos(3 ln x) + c2 sin(3 ln x)]
7. y = c1 x + c2/x^3
8. y = c1 x^{2/3} + c2 x^{3/4}
9. y = x^{−1/2}(c1 + c2 ln x)
10. y = c1 x + c2 x^{1/3}
11. y = c1 x^2 + c2 x^{1/2}
12. y = (1/x)[c1 cos(2 ln x) + c2 sin(2 ln x)]
13. y = x^{−1/3}(c1 + c2 ln x)
x2
15. y = c 1x
3
+
x
2
16. y = c1/x + c2 x^{1/2}
17. y = x^2(c1 + c2 ln x)
18. y = (1/x^2)[c1 cos((1/√2) ln x) + c2 sin((1/√2) ln x)]
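A general solution c1 x^{r1} + c2 x^{r2} corresponds to an Euler equation x^2 y″ + bxy′ + cy = 0 whose indicial polynomial r(r − 1) + br + c has roots r1, r2. For answer 2 the roots are 1 and 7, which forces b = −7 and c = 7; the sketch below (an illustration, with the equation reconstructed from the roots rather than quoted from the exercise) verifies that y = c1 x + c2 x^7 satisfies x^2 y″ − 7xy′ + 7y = 0.

```python
# Answer 2: y = c1*x + c2*x**7; roots 1 and 7 give the Euler equation
# x^2*y'' - 7*x*y' + 7*y = 0 (reconstructed from the roots).
from sympy import symbols, diff, simplify

x, c1, c2 = symbols('x c1 c2', positive=True)
y = c1*x + c2*x**7
residual = x**2*diff(y, x, 2) - 7*x*diff(y, x) + 7*y
assert simplify(residual) == 0
```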
Section 7.5 Answers
1. y
1 =x
1/2
(1 −
1
5
x−
2
35
x
2
+
315
31
x
3
+ …) ; y2 = x
−1
(1 + x +
1
2
x
2
−
1
6
x
3
+ …)
2. y
1 =x
1/3
(1 −
2
3
x+
8
9
x
2
−
40
81
x
3
+ …) ; y2 = 1 − x +
6
5
x
2
−
4
5
x
3
+…
3. y
1 =x
1/3
(1 −
4
7
x−
7
45
x
2
+
970
2457
x
3
+ …) ; y2 = x
−1
(1 − x
2
+
2
3
x
3
+ …)
4. y
1 =x
1/4
(1 −
1
2
x−
19
104
x
2
+
1571
10608
x
3
+ …) ; y2 = x
−1
(1 + 2x −
11
6
x
2
−
1
7
x
3
+ …)
5. y
1 =x
1/3
(1 − x +
28
31
x
2
−
1111
1333
x
3
+ …) ; y2 = x
−1/4
(1 − x +
7
8
x
2
−
19
24
x
3
+ …)
6. y
1 =x
1/5
(1 −
6
25
x−
1217
625
x
2
+
41972
46875
x
3
+ …) ; y2 = x −
1
4
x
2
−
35
18
x
3
+
11
12
x
4
+…
7. y
1 =x
3/2
(1 − x +
11
26
x
2
−
109
1326
x
3
+ …) ; y2 = x
1/4
(1 + 4x −
131
24
x
2
+
39
14
x
3
+ …)
8. y
1 =x
1/3
(1 −
1
3
x+
2
15
x
2
−
63
5
x
3
+ …) ; y2 = x
−1/6
(1 −
12
1
x
2
+
1
18
x
3
+ …)
9. y
1 =1−
14
1
x
2
+
105
1
x
3
+… ; y2 = x
−1/3
(1 −
1
18
x−
405
71
x
2
+
719
34992
x
3
+ …)
10. y 1 =x
1/5
(1 +
3
17
x−
153
7
x
2
−
547
5661
x
3
+ …) ; y2 = x
−1/2
(1 + x +
14
13
x
2
−
556
897
3
x + …)
n n
(−2) (−1)
14. y 1 =x
1/2
∑
∞
n=0 n
x ;
n
y2 = x
−1
∑
∞
n=0 n!
x
n
∏j=1 (2j+3)
n n
(−1 ) ∏ (3j+1)
15. y 1 =x
1/3
∑
∞
n=0 9 n!
j=1
n
x ;
n
x
−1
n
(−1) n
16. y 1 =x
1/2
∑
∞
n=0 n
x ;
n
y2 =
1
x2
∑
∞
n=0 n
2
x
n
n n
∞ (−1) ∞ (−1)
17. y 1 =x∑
n=0 ∏
n
(3j+4)
x ;
n
y2 = x
−1/3
∑
n=0 3 n!
n
x
n
j=1
n n
18. y 1 =x∑
∞
n=0 n! ∏
n
2
(2j+1)
x ;
n
y2 = x
1/2
∑
∞
n=0 n! ∏
n
2
(2j−1)
x
n
j=1 j=1
19. y 1 =x
1/3
∑
∞
n=0 n! ∏
n
1
(3j+2)
x ;
n
y2 = x
−1/3
∑
∞
n=0 n! ∏
n
1
(3j−2)
x
n
j=1 j=1
n
∞ (−1) n 3j−13
20. y 1 = x (1 +
2
7
x+
70
1
x );
2
y2 = x
−1/3
∑n=0 n
3 n!
(∏j=1
3j−4
)x
n
n
2j+1 (−1)
21. y 1 =x
1/2
∑
∞
n=0
(−1 )
n
(∏
n
j=1 6j+1
)x ;
n
y2 = x
1/3
∑
∞
n=0 n
9 n!
(∏
n
j=1
(3j + 1)) x
n
n n
(−1 ) (n+2)! (−1)
22. y 1 =x∑
∞
n=0 2 ∏
n
(4j+3)
x ;
n
y2 = x
1/4
∑
∞
n=0 16 n!
n
∏
n
j=1
(4j + 5)x
n
j=1
n n
∞ (1) ∞ (1)
23. y 1 =x
−1/2
∑n=0 n
n! ∏j=1 (2j+1)
x ;
n
y2 = x
−1
∑n=0 n
n! ∏j=1 (2j−1)
x
n
n
(−1) n 2j−1
24. y 1 =x
1/3
∑
∞
n=0 n!
(
2
9
) (∏
n
j=1
(6j + 5)) x ;
n
y2 = x
−1
∑
∞
n=0
(−1 ) 2
n n
(∏
n
j=1 3j−4
)x
n
25. y 1 = 4x
1/3
∑
∞
n=0 n
6 n!(3n+4)
1
x ;
n
x
−1
28. y 1 =x
1/2
(1 −
9
40
x+
128
5
x
2
−
245
39936
x
3
+ …) ; y2 = x
1/4
(1 −
25
96
x+
675
14336
x
2
−
38025
5046272
x
3
+ …)
29. y 1 =x
1/3
(1 +
32
117
x−
1053
28
x
2
+
4480
540189
x
3
+ …) ; y2 = x
−3
(1 +
32
7
x+
48
7
x )
2
30.y 1 =x
1/2
(1 −
5
8
x+
55
96
x
2
−
935
1536
x
3
+ …) ; y2 = x
−1/2
(1 +
1
4
x−
5
32
x
2
−
55
384
x
3
+ …)
31. y 1 =x
1/2
(1 −
3
4
x+
5
96
x
2
+
4224
5
x
3
+ …) ; y2 = x
−2
(1 + 8x + 60 x
2
− 160 x
3
+ …)
32. y 1 =x
−1/3
(1 −
10
63
x+
7371
200
x
2
−
17600
3781323
x
3
+ …) ; y2 = x
−1/2
(1 −
20
3
x+
352
9
x
2
−
105
23936
x
3
+ …)
m m
(−1) 4j−3 (−1) 8j−7
33. y 1 =x
1/2
∑
∞
m=0 8
m
m!
(∏
m
j=1 8j+1
)x
2m
; y2 = x
1/4
∑
∞
m=0 16
m
m!
(∏
m
j=1 8j−1
)x
2m
∞ m 8j−3 ∞ m
34. y 1 =x
1/2
∑
m=0
(∏
j=1 8j+1
)x
2m
; y2 = x
1/4
∑
m=0 2
m
1
m!
(∏
j=1
(2j − 1)) x
2m
35. y1 =x
4
∑
∞
m=0
(−1 )
m
(m + 1)x
2m
; y2 = −x ∑
∞
m=0
(−1 )
m
(2m − 1)x
2m
m
(−1)
36. y1 =x
1/3
∑
∞
m=0 18
m
m!
(∏
m
j=1
(6j − 17)) x
2m
; y2 = 1 +
4
5
x
2
+
55
8
x
4
m
8j+1 ∏ (2j−1)
37. y ∞ m ∞ j=1
1/4 2m −1 2m
1 =x ∑ (∏ )x ; y2 = x ∑ m
x
m=0 j=1 8j+5 m=0 2 m!
∞ m ∞ m 3j−1
38. y1 =x
1/2
∑m=0
8
m
1
m!
(∏j=1 (4j − 1)) x
2m
; y2 = x
1/3
∑m=0 2
m
(∏j=1
12j−1
)x
2m
m
m
∏j=1 (4j+5) (−1) 4j−1
39. y1 =x
7/2
∑
∞
m=0
(−1 )
m
8
m
m!
x
2m
; y2 = x
1/2
∑
∞
m=0 4
m
(∏
m
j=1 2j−3
)x
2m
m m
(−1) 4j−1 (−1)
40. y1 =x
1/2
∑
∞
m=0 4
m
(∏
m
j=1 2j+1
)x
2m
; y2 = x
−1/2
∑
∞
m=0 8
m
m!
(∏
m
j=1
(4j − 3)) x
2m
m
∞ (−1) m ∞ m 4j−3
41. y1 =x
1/2
∑
m=0 m!
(∏
j=1
(2j + 1)) x
2m
; y2 =
1
x
2
∑
m=0
(−2 )
m
(∏
j=1 4j−5
)x
2m
∞ m 3j−4
42. y1 =x
1/3
∑m=0 (−1 )
m
(∏j=1
3j+2
)x
2m
; y2 = x
−1
(1 + x )
2
m
m
2 (m+1)! ∏ (2j−1)
43. y ∞ 1 ∞ j=1
m 2m m 2m
1 =∑ (−1 ) m
x ; y2 = ∑ (−1 ) m
x
m=0 ∏ (2j+3) x
3 m=0 2 m!
j=1
m 2 m 2
(−1) (4j−3) (−1) (2j−3)
44. y1 =x
1/2
∑
∞
m=0 8
m
m!
(∏
m
j=1 4j+3
)x
2m
; y2 = x
−1
∑
∞
m=0 2
m
m!
(∏
m
j=1 4j−3
)x
2m
m
2j+1 (−1)
45. y1 =x∑
∞
m=0
(−2 )
m
(∏
m
j=1 4j+5
)x
2m
; y2 = x
−3/2
∑
∞
m=0 4
m
m!
(∏
m
j=1
(4j − 3)) x
2m
m m
(−1) (−1)
46. y1 =x
1/3
∑
∞
m=0 2
m
∏
m
(3j+1)
x
2m
; y2 = x
−1/3
∑
∞
m=0 m
6 m!
x
2m
j=1
47. y1 =x
1/2
(1 −
6
13
x
2
+
325
36
x
4
−
216
12025
x
6
+ …) ; y2 = x
1/3
(1 −
1
2
x
2
+
1
8
x
4
−
1
48
x
6
+ …)
48. y1 =x
1/4
(1 −
13
64
x
2
+
8192
273
x
4
−
2639
524288
x
6
+ …) ; y2 = x
−1
(1 −
1
3
x
2
+
2
33
x
4
−
209
2
x
6
+ …)
49. y1 =x
1/3
(1 −
3
4
x
2
+
14
9
x
4
−
81
140
x
6
+ …) ; y2 = x
−1/3
(1 −
2
3
x
2
+
5
9
x
4
−
40
81
x
6
+ …)
50. y1 =x
1/2
(1 −
3
2
x
2
+
15
8
x
4
−
35
16
x
6
+ …) ; y2 = x
−1/2
(1 − 2 x
2
+
8
3
x
4
−
16
5
x
6
+ …)
51. y1 =x
1/4
(1 − x
2
+
3
2
x
4
−
5
2
x
6
+ …) ; y2 = x
−1/2
(1 −
2
5
x
2
+
36
65
x
4
−
408
455
x
6
+ …)
m m
∞ (−1) ∞ (−1)
53. (a) y1 =x
v
∑
m=0 4
m
m! ∏
m
(j+v)
x
2m
; y2 = x
−v
∑
m=0 4
m
m! ∏
m
(j−v)
x
2m
y1 =
sin x
√x
; y2 =
cos x
√x
j=1 j=1
1/2
61. y1 =
x
1+x
; y2 =
1+x
x
1/3 1/2
62. y1 =
x
1+2x
2
; y2 =
1+2x
x
2
1/4 2
63. y1 =
x
1−3x
; y2 =
1−3x
x
1/3 −1/3
64. y1 =
x
5+x
; y2 =
x
5+x
1/4 −1/2
65. y1 =
x
2−x
2
; y2 =
x
2−x
2
1/2 3/2
66. y1 =
x
1+3x+x2
; y2 =
x
1+3x+x2
1/3
67. y1 =
x
2
; y2 =
x
2
(1+x) (1+x)
1/4
68. y1 =
3+2x+x
x
2
; y2 =
3+2x+x
x
2
Section 7.6 Answers
1. y
1 = x (1 − x +
3
4
x
2
−
13
36
x
3
+ …) ; y2 = y1 ln x + x
2
(1 − x +
65
108
x
2
+ …)
2. y
1 =x
−1
(1 − 2x +
9
2
x
2
−
20
3
x
3
+ …) ; y2 = y1 ln x + 1 −
15
4
x+
133
18
x
2
+…
3. y
1 = 1 +x −x
2
+
1
3
x
3
+… ; y2 = y1 ln x − x (3 −
1
2
x−
31
18
x
2
+ …)
4. y
1 =x
1/2
(1 − 2x +
5
2
x
2
− 2x
3
+ …) ; y2 = y1 ln x + x
3/2
(1 −
9
4
x+
17
6
x
2
+ …)
5. y
1 = x (1 − 4x +
19
2
x
2
−
49
3
x
3
+ …) ; y2 = y1 ln x + x
2
(3 −
43
4
x+
208
9
x
2
+ …)
6. y
1 =x
−1/3
(1 − x +
5
6
x
2
−
1
2
x
3
+ …) ; y2 = y1 ln x + x
2/3
(1 −
11
12
x+
25
36
x
2
+ …)
7. y
1 = 1 − 2x +
7
4
x
2
−
7
9
x
3
+… ; y2 = y1 ln x + x (3 −
15
4
x+
239
108
x
2
+ …)
8. y
1 =x
−2
(1 − 2x +
5
2
x
2
− 3x
3
+ …) ; y2 = y1 ln x +
3
4
−
13
6
x +…
9. y
1 =x
−1/2
(1 − x +
1
4
x
2
+
18
1
x
3
+ …) ; y2 = y1 ln x + x
1/2
(
3
2
−
13
16
x+
1
54
x
2
+ …)
10. y 1 =x
−1/4
(1 −
1
4
x−
32
7
x
2
+
384
23
x
3
+ …) ; y2 = y1 ln x + x
3/4
(
1
4
+
5
64
x−
2304
157
x
2
+ …)
11. y 1 =x
−1/3
(1 − x +
7
6
x
2
−
23
18
x
3
+ …) ; y2 = y1 ln x − x
5/3
(
1
12
−
108
13
x …)
n n
(−1) (−1)
12. y 1 =x
1/2
∑
∞
n=0 2
x ;
n
y2 = y1 ln x − 2 x
1/2
∑
∞
n=1 2
(∑
n
j=1
1
j
)x
n
(n!) (n!)
n n
n ∏j=1 (3j+1) n ∏j=1 (3j+1)
13. y 1 =x
1/6
∑
∞
n=0
(
2
3
)
n!
n
x ; y2 = y1 ln x − x
1/6
∑
∞
n=1
(
2
3
)
n!
(∑
n
j=1
1
j(3j+1)
)x
n
∞ ∞
14. y 1 =x
2
∑n=0 (−1 ) (n + 1 ) x ;
n 2 n
y2 = y1 ln x − 2 x
2
∑n=1 (−1 ) n(n + 1)x
n n
15. y 1 =x
3
∑
∞
n=0
n
2 (n + 1)x ;
n
y2 = y1 ln x − x
3
∑
∞
n=1
n
2 nx
n
n n n n
(−1 ) ∏j=1 (5j+1) (−1 ) ∏j=1 (5j+1) 5j+2
16. y 1 =x
1/5
∑
∞
n=0 n 2
x ;
n
y2 = y1 ln x − x
1/5
∑
∞
n=1 n 2
(∑
n
j=1 j(5j+1)
)x
n
n n n n
(−1 ) ∏ (2j−3) (−1 ) ∏ (2j−3)
17. y ∞ ∞ n
j=1 j=1
1/2 n 1/2 1 n
1 =x ∑ n
x ; y2 = y1 ln x + 3 x ∑ n
(∑ )x
n=0 4 n! n=1 4 n! j=1 j(2j−3)
n n 2 n n 2
(−1 ) ∏ (6j−7 ) (−1 ) ∏ (6j−7 )
18. y ∞ ∞ n 1
j=1 j=1
1/3 n 1/3 n
1 =x ∑ x ; y2 = y1 ln x + 14 x ∑ (∑ )x
n=0 n 2 n=1 n 2 j=1 j(6j−7)
81 (n! ) 81 (n! )
n n n n
(−1 ) ∏ (2j+5) (−1 ) ∏ (2j+5) j+5
19. y ∞ ∞ n
j=1 j=1
2 n 2 n
1 =x ∑ x ; y2 = y1 ln x − 2 x ∑ (∑ )x
n=0 2 n=1 2 j=1 j(2j+5)
(n!) (n!)
n n n n
(2 ) ∏ (2j−1) (2 ) ∏ (2j−1)
∞ ∞ n
20. y 1 1 1
j=1 j=1
n n
1 = ∑ x ; y2 = y1 ln x + ∑ (∑ )x
x n=0 n! x n=1 n! j=1 j(2j−1)
n n n n
(−1 ) ∏j=1 (2j−5) (−1 ) ∏j=1 (2j−5)
21. y 1 =
1
x
∑
∞
n=0 n!
x ;
n
y2 = y1 ln x +
5
x
∑
∞
n=1 n!
(∑
n
j=1 j(2j−5)
1
)x
n
n n n n
(−1 ) ∏ (2j+3) (−1 ) ∏ (2j+3)
∞ ∞ n
22. y 1
j=1 j=1
2 n 2 n
1 =x ∑n=0 n
x ; y2 = y1 ln x − 3 x ∑n=1 n
(∑j=1 )x
2 n! 2 n! j(2j+3)
2
23. y 1 =x
−2
(1 + 3x +
3
2
−
1
2
x
3
+ …) ; y2 = y1 ln x − 5 x
−1
(1 +
5
4
x−
1
4
x
2
+ …)
24. y 1
3
= x (1 + 20x + 180 x
2
+ 1120 x
3
+ …); y2 = y1 ln x − x
4
(26 + 324x + 69683 x
2
+ …)
25. y 1 = x (1 − 5x +
85
4
x
2
−
3145
36
x
3
+ …) ; y2 = y1 ln x + x
2
(2 −
39
4
x+
4499
108
x
2
+ …)
26. y 1 = 1 −x +
3
4
x
2
−
12
7
x
3
+… ; y2 = y1 ln x + x (1 −
3
4
x+
5
9
x
2
+ …)
27. y 1 =x
−3
(1 + 16x + 36 x
2
+ 16 x
3
+ …); y2 = y1 ln x − x
−2
(40 + 150x +
280
3
x
2
+ …)
m m
∞ (−1) ∞ (−1) ∞
28. y 1 = x ∑m=0
2
m
m!
x
2m
; y2 = y1 ln x −
x
2
∑m=1
2
m
m!
(∑j=1
1
j
)x
2m
2
∞ ∞
29. y 1 =x
2
∑m=0 (−1 )
m
(m + 1)x
2m
; y2 = y1 ln x −
x
2
∑m=1 (−1 )
m
mx
2m
m m
(−1) 1/2 (−1)
30. y 1 =x
1/2
∑
∞
m=0 m
4 m!
x
2m
; y2 = y1 ln x −
x
2
∑
∞
m=1 4
m
m!
(∑
m
j=1
1
j
)x
2m
m m m m
(−1 ) ∏ (2j−1) (−1 ) ∏ (2j−1)
∞ ∞ m
31. y x 1
j=1 j=1
2m 2m
1 = x ∑m=0 m x ; y2 = y1 ln x + ∑m=1 m (∑j=1 )x
2 m! 2 2 m! j(2j−1)
m m m m
(−1 ) ∏ (4j−1) 1/2 (−1 ) ∏ (4j−1)
32. y ∞ x ∞ m 1
j=1 j=1
1/2 2m 2m
1 =x ∑ m
x ; y2 = y1 ln x + ∑ m
(∑ )x
m=0 8 m! 2 m=1 8 m! j=1 j(4j−1)
m m m m
(−1 ) ∏ (2j+1) (−1 ) ∏ (2j+1)
33. y 1 =x∑
∞
m=0 2
m
j=1
m!
x
2m
; y2 = y1 ln x −
x
2
∑
∞
m=1 2
m
j=1
m!
(∑
m
j=1
1
j(2j+1)
)x
2m
m m m m
(−1 ) ∏ (8j−13) (−1 ) ∏ (8j−13)
34. y ∞ 13 ∞ m 1
j=1 j=1
−1/4 2m −1/4 2m
1 =x ∑ m
x ; y2 = y1 ln x + x ∑ m
(∑ )x
m=0 (32 ) m! 2 m=1 (32 ) m! j=1 j(8j−13)
m m m m
(−1 ) ∏j=1 (3j−1) 1/3 (−1 ) ∏j=1 (3j−1)
35. y 1 =x
1/3
∑
∞
m=0 9
m
m!
x
2m
; y2 = y1 ln x +
x
2
∑
∞
m=1 9
m
m!
(∑
m
j=1
1
j(3j−1)
)x
2m
36.
m m m m
(−1 ) ∏ (4j−3)(4j−1) (−1 ) ∏ (4j−3)(4j−1) 8j−3
1/2 ∞ j=1
2m 1/2 ∞ j=1 m 2m
y1 = x ∑ x ; y2 = y1 ln x + x ∑ (∑ )x
m=0 m 2 m=1 m 2 j=1 j(4j−3)(4j−1)
4 (m! ) 4 (m! )
m m
(−1) 5/3 (−1)
37. y 1 =x
5/3
∑
∞
m=0 m
3 m!
x
2m
; y2 = y1 ln x −
x
2
∑
∞
m=1 3
m
m!
(∑
m
j=1
1
j
)x
2m
m m m m
(−1 ) ∏ (4j−7) (−1 ) ∏ (4j−7)
38. y 1 =
1
x
∑
∞
m=0 2
m
j=1
m!
x
2m
; y2 = y1 ln x +
7
2x
∑
∞
m=1 m
2
j=1
m!
(∑
m
j=1 j(4j−7)
1
)x
2m
39. y 1 =x
−1
(1 −
3
2
x
2
+
15
8
x
4
−
35
16
x
6
+ …) ; y2 = y1 ln x + x (
1
4
−
13
32
x
2
+
101
192
x
4
+ …)
40. y 1 = x (1 −
1
2
x
2
+
1
8
x
4
−
1
48
x
6
+ …) ; y2 = y1 ln x + x
3
(
1
4
−
32
3
x
2
+
11
576
x
4
+ …)
41. y 1 =x
−2
(1 −
3
4
x
2
−
9
64
x
4
−
25
256
x
6
+ …) ; y2 = y1 ln x +
1
2
−
128
21
x
2
−
215
1536
x
4
+…
42. y 1 =x
−3
(1 −
17
8
x
2
+
85
256
x
4
−
85
18432
x
6
+ …) ; y2 = y1 ln x + x
−1
(
25
8
−
471
512
x
2
+
1583
110592
x
4
+ …)
43. y 1 =x
−1
(1 −
3
4
x
2
+
45
64
x
4
−
175
256
x
6
+ …) ; y2 = y1 ln x − x (
1
4
−
128
33
x
2
+
395
1536
x
4
+ …)
44. y 1 =
1
x
; y2 = y1 ln x − 6 + 6x −
8
3
x
2
45. y 1 = 1 − x; y2 = y1 ln x + 4x
2
(x−1)
46. y 1 =
x
; y2 = y1 ln x + 3 − 3x + 2 ∑
∞
n=2
1
n( n2 −1)
x
n
n
(−1)
47. y 1 =x
1/2
(x + 1 ) ;
2
y2 = y1 ln x − x
3/2
(3 + 3x + 2 ∑
∞
n=2 n( n −1)
2
x )
n
∞
48. y 1 = x (1 − x ) ;
2 3
y2 = y1 ln x + x
3
(4 − 7x +
11
3
x
2
−6 ∑
n=3 n(n−2)( n −1)
1
2
x )
n
49. y 1 = x − 4x
3
+x ;
5
y2 = y1 ln x + 6 x
3
− 3x
5
50. y 1 =x
1/3
(1 −
1
6
x );
2
y2 = y1 ln x + x
7/3
(
1
4
−
1
12
∑
∞
m=1 6
m
m(m+1)(m+1)!
1
x
2m
)
m
∞ (−1)
51. y 1 = (1 + x ) ;
2 2
y2 = y1 ln x −
3
2
x
2
−
3
2
x
4
+ ∑m=3
m(m−1)(m−2)
x
2m
52. y 1 =x
−1/2
(1 −
1
2
x
2
+
32
1
x );
4
y2 = y1 ln x + x
3/2
(
5
8
−
9
128
x
2
+∑
∞
m=2 m+1
1
x
2m
)
4 (m−1)m(m+1)(m+1)!
m m
∞ (−1) ∞ (−1) m
56. y 1 = ∑m=0
m 2
x
2m
; y2 = y1 ln x − ∑m=1
m 2
(∑j=1
1
j
)x
2m
4 (m! ) 4 (m! )
1/2 1/2
58. x
1+x
;
x
1+x
ln x
1/3 1/3
59. x
3+x
;
x
3+x
ln x
60. 2−x
x
2
;
x ln x
2−x
2
1/3 1/4
61. x
1+x
2
;
x
1+x
ln x
2
62. 4+3x
x
;
x ln x
4+3x
1/2 1/2
63. x
1+3x+x
2
;
x
1+3x+x
ln x
2
64. x
2
;
x ln x
2
(1−x) (1−x)
1/3 1/3
65. x
1+x+x2
;
x
1+x+x2
ln x
Section 7.7 Answers
n n
(−4) (−4) j+1
1. y
1 = 2x
3
∑
∞
n=0 n!(n+2)!
x ;
n
y2 = x + 4 x
2
− 8 (y1 ln x − 4 ∑
∞
n=1 n!(n+2)!
(∑
n
j=1 j(j+2)
)x )
n
n n
∞ (−1) ∞ (−1) n 2j+1
2. y
1 =x∑
n=0 n!(n+1)!
x ;
n
y2 = 1 − y1 ln x + x ∑
n=1 n!(n+1)!
(∑
j=1 j(j+1)
)x
n
n
∞ (−1)
3. y
1 =x
1/2
; y2 = x
−1/2
+ y1 ln x + x
1/2
∑n=1
n
x
n
n n
(−1) (−1)
4. y
1 =x∑
∞
n=0 n!
x
n
= xe
−x
; y2 = 1 − y1 ln x + x ∑
∞
n=1 n!
(∑ j = 1
n 1
j
)x
n
n n
n ∏j=1 (2j+1) n ∏j=1 (2j+1)
5. y
1 =x
1/2
∑
∞
n=0
(−
3
4
)
n!
x ;
n
y2 = x
−1/2
−
3
4
( y1 ln x − x
1/2
∑
∞
n=1
(−
3
4
)
n!
(∑
n
j=1
1
j(2j+1)
n
)x )
n n
(−1) (−1)
6. y
1 =x∑
∞
n=0 n!
x
n
= xe
−x
; y2 = x
−2
(1 +
1
2
x+
1
2
x )−
2 1
2
(y1 ln x − x ∑
∞
n=1 n!
(∑
n
j=1
1
j
)x )
n
n
∞ (−1)
7. y
1 = 6x
3/2
∑
n=0 n
4 n!(n+3)!
x ;
n
n
∞ (−1) n 2j+3
−3/2 1 1 2 1 3/2 n
y2 = x (1 + x+ x )− (y1 ln x − 6 x ∑ n
(∑ )x )
8 64 768 n=1 4 n!(n+3)! j=1 j(j+3)
n
∞ (−1)
8. y
1 =
120
x
2
∑
n=0 n!(n+5)!
x ;
n
y2 = x
−7
(1 +
1
4
x+
1
24
x
2
+
1
144
x
3
+
576
1
x )
4
n
120 ∞ (−1) n 2j+5
1 n
− (y1 ln x − ∑ (∑ )x )
2880 x
2 n=1 n!(n+f )! j=1 j(j+5)
1/2
∞
9. y
1 =
x
6
n
∑n=0 (−1 ) (n + 1)(n + 2)(n + 3)x ;
n
y2 = x
−5/2
(1 +
1
2
2
x + x ) − 3 y1 ln x +
3
2
x
1/2
∞ n n 1 n
∑ (−1 ) (n + 1)(n + 2)(n + 3) (∑ )x
n=1 j=1 j(j+3)
10. y 1 =x
4
(1 −
2
5
x) ; y2 = 1 + 10x + 50 x
2
+ 200 x
3
− 300 (y1 ln x +
27
25
x
5
−
1
30
x )
6
n
(−1 ) 6!
11. y 1 =x ;
3
y2 = x
−3
(1 −
6
5
x+
3
4
x
2
−
1
3
x
3
+
1
8
x
4
−
1
20
x )−
5
120
1
(y1 ln x + x
3
∑
∞
n=1 n(n+6)!
x )
n
∞ n 2j+3 2
12. y 1 =x
2
∑n=0
1
n!
(∏j=1
j+4
)x ;
n
y2 = x
−2
(1 + x +
1
4
x
2
−
12
1
x )−
3
16
1
y1 ln x +
x
2
∞ 1 n 2j+3 n ( j +3j+6)
n
∑ (∏ ) (∑ )x
n=1 n! j=1 j+4 j=1 j(j+4)(2j+3)
13. y 1 =x
5
∑
∞
n=0
n
(−1 ) (n + 1)(n + 2)x ;
n
y2 = 1 −
x
2
+
x
n
(−1) (j+3)(2j−3)
14. y 1 =
1
x
∑
∞
n=0 n!
(∏
n
j=1 j+6
)x ;
n
y2 = x
−7
(1 +
26
5
x+
143
20
x )
2
n
(−1)
15. y 1 =x
7/2
∑
∞
n=0 n
2 (n+4)!
x ;
n
y2 = x
−1/2
(1 −
1
2
x+
1
8
x
2
−
48
1
x )
3
n
∞ (−1 ) (n+1) n 3j+7
16. y 1 =x
10/3
∑n=0
9
n (∏j=1
j+4
)x ;
n
y2 = x
−2/3
(1 +
27
4
x−
1
243
x )
2
j−8
17. y 1 =x
3
∑
7
n=0
(−1 ) (n + 1) (∏
n n
j=1 j+6
)x ;
n
y2 = x
−3
(1 +
52
5
x+
234
5
x
2
+
572
5
x
3
+ 143 x )
4
n 2
(−1) (j+3)
18. y 1 =x
3
∑
∞
n=0 n!
(∏
n
j=1 j+5
)x ;
n
y2 = x
−2
(1 +
1
4
x)
j−5
19. y 1 =x
6
∑
4
n=0
(−1 ) 2
n n
(∏
n
j=1 j+5
)x ;
n
y2 = x(1 + 18x + 144 x
2
+ 672 x
3
+ 2016 x )
4
20. y 1 =x
6
(1 +
2
3
x+
1
7
x );
2
y2 = x (1 +
21
4
x+
21
2
x
2
+
35
4
x )
3
∞
21. y 1 =x
7/2
∑n=0 (−1 ) (n + 1)x ;
n n
y2 = x
−7/2
(1 −
5
6
x+
2
3
x
2
−
1
2
x
3
+
1
3
x
4
−
1
6
x )
5
10
22. y 1 =
x
6
∑
∞
n=0
n n
(−1 ) 2 (n + 1)(n + 2)(n + 3)x ;
n
y2 = (1 −
4
3
x+
5
3
x
2
−
40
21
x
3
+
40
21
x
4
−
32
21
x
5
+
16
21
6
x )
9/11/2020 1 https://math.libretexts.org/@go/page/45924
m m
(−1 ) ∏ (2j+5)
∞
23. y 3 15 75
j=1
6 2m 2 2 6
1 =x ∑m=0 m
x ; y2 = x (1 + x )− y1 ln x + x
2 m! 2 2 2
m m
(−1 ) ∏j=1 (2j+5)
∞ m 1 2m
∑ (∑ )x
m=1 m+1 j=1 j(2j+5)
2 m!
m m
(−1) 6 (−1)
24. y
2
6 ∞ 2m 6 −x 2 1 2 1 x ∞ m 1 2m
1 =x ∑ m
x =x e ; y2 = x (1 + x )− y1 ln x + ∑ m
(∑ )x
m=0 2 m! 2 2 4 m=1 2 m! j=1 j
m m
∞ (−1) ∞ (−1( m 2j+3
25. y1 = 6x
6
∑m=0
4
m
m!(m+3)!
x
2m
; y2 = 1 +
1
8
x
2
+
1
64
x
4
−
1
384
(y1 ln x − 3 x
6
∑m=1 m
4 m!(m+3)!
(∑j=1
j(j+3)
)x
2m
)
m m
2
∞ (−1 ) (m+2) ∞ (−1 ) (m+2) m j +4j+2
26. y1 =
x
2
∑m=0
m!
x
2m
; y2 = x
−1
− 4 y1 ln x ∑m=1
m!
(∑j=1
j(j+1)(j+2)
)x
2m
m m
(−1) (−1) j+1
27. y1 = 2x
3
∑
∞
m=0 4
m
m!(m+2)!
x
2m
; y2 = x
−1
(1 +
1
4
2
x )−
1
16
(y1 ln x − 2 x
3
∑
∞
m=1 4
m
m!(m+2)!
(∑
m
j=1 j(j+2)
)x
2m
)
m m
(−1 ) ∏ (2j−1)
28. y ∞ j=1
−1/2 2m
1 =x ∑ m
x ;
m=0 8 m!(m+1)!
m m
(−1 ) ∏ (2j−1) 2
2 j −2j−1
−5/2 1 −1/2 ∞ j=1 m 2m
y2 = x + y1 ln x − x ∑ (∑ )x
4 m=1 m+1 j=1 j(j+1)(2j−1)
8 m!(m+1)!
m m
(−1) (−1)
29. y
2
∞ 2m −x /2 −1 x ∞ m 1 2m
1 =x∑ m
x = xe ; y2 = x − y1 ln x + ∑ m
(∑ )x
m=0 2 m! 2 m=1 2 m! j=1 j
30. y
2
2 ∞ 1 2m 2 x −2 2 2 ∞ 1 m 1 2m
1 =x ∑ x =x e ; y2 = x (1 − x ) − 2 y1 ln x + x ∑ (∑ )x
m=0 m! m=1 m! j=1 j
m
(−1)
31. y1 = 6x
5/2
∑
∞
m=0 16
m
m!(m+3)!
x
2m
; y2 = x
−7/2
(1 +
1
32
x
2
+
1024
1
x )
4
m
∞ (−1) m 2j+3
1 5/2 2m
− (y1 ln x − 3 x ∑ m
(∑ )x )
24576 m=1 16 m!(m+3)! j=1 j(j+3)
m
∏ (3j+1)
∞
32. y
j=1
13/3 2m
1 = 2x ∑m=0 m
x ;
9 m!(m+2)!
m
∏ (3j+1) 2
3 j +2j+2
1/3 2 2 2 13/3 ∞ j=1 m 2m
y2 = x (1 + x )+ ( y1 ln x − x ∑ m
(∑ )x )
9 81 m=0 9 m!(m+2)! j=1 j(j+2)(3j+1)
∞
33. y1 =x ;
2
y2 = x
−2 2
(1 + 2 x ) − 2 (y1 ln x + x
2
∑m=1
1
m(m+2)!
x
2m
)
m
3
( )
34. y ∞ 2
2 1 2 −2 9 2 27 7 4 2 2m
1 =x (1 − x ); y2 = x (1 + x )− ( y1 ln x + x −x ∑ x )
2 2 2 12 m=2 m(m−1)(m+2)!
35. y1 =∑
∞
m=0
(−1 )
m
(m + 1)x
2m
; y2 = x
−4
m
∞ (−1)
36. y1 =x
5/2
∑m=0
(m+1)(m+2)(m+3)
x
2m
; y2 = x
−7/2
(1 + x )
2 2
37. y1 =
x
5
∑
∞
m=0
(−1 )
m
(m + 5)x
2m
; y2 = x
−1
(1 − 2 x
2
+ 3x
4
− 4x )
6
∞ 2j+1
m m+1 m
38. y1 =x
3
∑m=0 (−1 )
2
m (∏j=1
j+5
)x
2m
; y2 = x
−7
(1 +
21
8
x
2
+
35
16
x
4
+
35
64
x )
6
m
∏j=1 (4j+5)
39. y1 = 2x
4
∑
∞
m=0
(−1 )
m
2
m
(m+2)!
x
2m
; y2 = 1 −
1
2
x
2
m m
(−1 ) ∏j=1 (2j−1)
40. y1 =x
3/2
∑
∞
m=0 m−1
x
2m
; y2 = x
−5/2
(1 +
3
2
x )
2
2 (m+2)!
m m
∞ (−1) v−1 (−1)
42. y1 =x
v
∑m=0
4
m
m! ∏
m
(j+v)
x
2m
; y2 = x
−v
∑m=0
4
m
m! ∏
m
(j−v)
x
2m
j=1 j=1
m
2 x
v
∞ (−1) m 2j+v
2m
− v
( y1 ln x − ∑ m m
(∑ )x )
4 v!(v−1)! 2 m=1 4 m! ∏ (j+v) j=1 j(j+v)
j=1
Section 8.1 Answers
1.
a. 1
s
2
b. 1
2
(s+1)
c. 2
b
2
s −b
d. −2s+5
(s−1)(s−2)
e. 2
s3
2.
2
s +2
a. 2 2
[(s−1 ) +1][(s+1 ) +1]
b. s( s +4)
2
2
2
s +8
c. s( s +16)
2
d. s −2
s( s2 −4)
e. 4s
2
2
( s −4 )
f. s2 +4
1
g. 1 s+1
s2 +1
√2
h. 2
( s +4)( s +9)
5s
2
3 2
i. s +2 s +4s+32
2
( s +4)( s +16)
2
4.
a. f (3−) = −1, f (3) = f (3+) = 1
b. f (1−) = 3, f (1) = 4, f (1+) = 1
c. f ( −) = 1, f ( ) = f ( +) = 2, f (π−) = 0, f (π) = f (π+) = −1
π
2
π
2
π
a. 1−e
s+1
+
e
s+2
b. 1
s
+e
−4s
(
1
s2
+
3
s
)
−s
c. 1−e
s
2
−( s−1)
d. 1−e
2
(s−1)
2 2
(s−λ) −ω 2ω(s−λ)
7. L(e λt
cos ωt) = 2
2
2
L(e
λt
sin ωt) = 2
2
2
((s−λ) +ω ) ((s−λ) +ω )
15.
a. tan −1 ω
s
, s >0
2
b. ln1
2 2
s +ω
s
2
, s >0
c. ln
s−b
s−a
, s > max(a, b)
2
d. 1
2
ln
s2 −1
s
, s >1
2
e. 1
4
ln 2
s −4
s
, s >2
18.
a. 1
s
2
tanh
s
b. 1
s
tanh
s
c. 1
s2 +1
coth
πs
d. 2
1
( s +1)(1−e −πs
)
Section 8.2 Answers
1.
3 7t
a. t e
b. 2e 2t
cos 3t
−2t
c. e
sin 4t
4
d. sin 3t
2
e. t cos t
2t
f. sinh 2t
e
2
2t
g. 2te
3
sin 9t
3t
h. 2e
3
sinh 3t
i. e 2t
t cos t
2.
a. t e + t e
2 7t 17
6
3 7t
b. e ( t + t + t )
2t 1
6
3 1
6
4 1
40
5
d. 2 cos 3t + sin 3t 1
e. (1 − t)e −t
f. cosh 3t + sinh 3t 1
g. (1 − t − t − t ) e 2 1
6
3 −t
i. 1 − cos t
j. 3 cosh t + 4 sinh t
k. 3e + 4 cos 3t + sin 3t
t 1
3.
a. e − e − e
1
4
2t 1
4
−2t −t
b. e − e + 5e
1
5
−4t 41
5
t 3t
c. − e − e − e1
2
2t 13
10
−2t 1
5
3t
d. − e − e 2
5
−4t 3
5
t
e. e − e + e + e
3
20
2t 37
12
−2t 1
3
t 8
5
−3t
f. e + e +
39
10
e − e
t
14
3 3t 23
105
−4t 7
3
2t
4.
a. e − e − cos t + sin t
4
5
−2t 1
2
−t 3
10
11
10
5
6
5
7
5
−t 6
5
−t
c. e − e cos 2t + e sin 2t
8
13
2t
13
8 −t 15
26
−t
d. te + e + e − e
1
2
t 3
8
t −2t 11
8
−3t
e. te + e + te − e
2
3
t 1
9
t −2t 1
9
−2t
f. −e + te + cos t − sin t
t 5
2
t 3
5.
a. cos 2t + sin 2t − cos 3t − sin 3t
3
5
1
5
3
5 15
2
15
1
15
4
15
1
60
3
5
3
1
3
t
2
2
3
t
2
1
3
1
e. 1
15
cos
t
4
−
15
8
sin
t
4
−
15
1
cos 4t +
30
1
sin 4t
f. 2
5
cos
t
3
−
3
5
sin
t
3
−
2
5
cos
t
2
+
2
5
sin
t
6.
a. e t
(cos 2t + sin 2t) − e
−t
(cos 3t +
4
3
sin 3t)
b. e 3t
(− cos 2t +
3
2
sin 2t) + e
−t
(cos 2t +
1
2
sin 2t)
c. e −2t
(
1
8
cos t +
1
4
sin t) − e
2t
(
1
8
cos 3t −
12
1
sin 3t)
d. e 2t
(cos t +
1
2
sin t) − e
3t
(cos 2t −
1
4
sin 2t)
e. e t
(
1
5
cos t +
2
5
sin t) − e
−t
(
1
5
cos 2t +
2
5
sin 2t)
f. e t/2
(− cos t +
9
8
sin t) + e
−t/2
(cos t −
1
8
sin t)
7.
a. 1 − cos t
t
b. (1 − cos 4t)
e
16
c. e + e sin 3t − e cos 3t
4
9
2t 5
9
−t 4
9
−t
d. 3e − e sin 2t − 3e cos 2t
t/2 7
2
t t
e. e − e cos 2t
1
4
3t 1
4
−t
f. e − e cos 3t + e sin 3t
1
9
2t 1
9
−t 5
9
−t
8.
a. − 3
10
sin t +
2
5
cos t −
3
4
e +
t
20
7
e
3t
b. − 3
5
e
−t
sin t +
1
5
e
−t
cos t −
1
2
e
−t
+
3
10
e
t
c. − 1
10
t
e sin t −
7
10
t
e cos t +
1
5
e
−t
+
1
2
e
2t
d. − 1
2
e +
t 7
10
e
−t
−
1
5
cos 2t +
3
5
sin 2t
e. 3
10
+
1
10
e
2t
+
10
1 t
e sin 2t −
2
5
t
e cos 2t
f. − 4
9
e
2t
cos 3t +
1
3
e
2t
sin 3t −
5
9
e
2t
+e
t
9. 1
a
e a
t
f (
t
a
)
Section 8.3 Answers
1. y = 1
6
e −
t 9
2
e
−t
+
16
3
e
−2t
2. y = − 1
3
+
15
8
e
3t
+
4
5
e
−2t
3. y = − 23
15
e
−2t
+
1
3
e +
t 1
5
e
3t
4. y = − 1
4
e
2t
+
17
20
e
−2t
+
2
5
e
3t
5. y = 11
15
e
−2t
+
1
6
e +
t 1
10
e
3t
6. y = e t
+ 2e
−2t
− 2e
−t
7. y = 5
3
sin t −
1
3
sin 2t
8. y = 4e t
− 4e
2t
+e
3t
9. y = − 7
2
e
2t
+
13
3
e +
t 1
6
e
4t
10. y = 5
2
t
e − 4e
2t
+
1
2
e
3t
11. y = 1
3
t
e − 2e
−t
+
5
3
e
−2t
12. y = 2 − e −2t
+e
t
13. y = 1 − cos 2t + 1
2
sin 2t
14. y = − 1
3
+
15
8
e
3t
+
4
5
e
−2t
15. y = 1
6
e −
t 2
3
e
−2t
+
1
2
e
−t
16. y = −1 + e t
+e
−t
3
−
7
2
e
−t
+
1
6
e
3t
19. y = 1 + cos t
20. y = t + sin t
21. y = t − 6 sin t + cos t + sin 2t
22. y = e −t
+ 4e
−2t
− 4e
−3t
28. y = −1 + t + e −t
(3 cos t − 5 sin t)
30. y = e −t
− 2e + e
t −2t
(cos 3t − 11/3 sin 3t)
31. y = e −t
(sin t − cos t) + e
−2t
(cos t + 4 sin t)
32. y = 1
5
e
2t
−
4
3
e +
t 32
15
e
−t/2
33. y = 1
7
e
2t
−
2
5
e
t/2
+
9
35
e
−t/3
34. y = e −t/2
(5 cos(t/2) − sin(t/2)) + 2t − 4
35. y = 1
17
(12 cos t + 20 sin t − 3 e
t/2
(4 cos t + sin t))
−t/2
36. y = e
10
(5t + 26) −
1
5
(3 cos t + sin t)
37. y = 100
1
(3 e
3t
−e
t/3
(3 + 310t))
Section 8.4 Answers
1. 1 + u(t − 4)(t − 1); 1
s
+e
−4s
(
s
1
2
+
3
s
)
−s
s
2
s2
−
1
s
)−e
−2s
(
1
s2
+
1
s
)
s
+e
−s
(
s
1
2
+
2
s
)
s
−e
−2s
(
1
s
2
−
3
s
)
6. t
2
(1 − u(t − 1));
s
2
3
−e
−s
(
s
2
3
+
2
2
s
+
1
s
)
7. u(t − 2)(t 2
+ 3t); e
−2s
(
2
s3
+
s2
7
+
10
s
)
8. t
2
+ 2 + u(t − 1)(t − t
2
− 2);
s3
2
+
2
s
−e
−s
(
s3
2
+
1
s2
+
2
s
)
−( s−1)
9. te t
+ u(t − 1)(e − te );
t t 1−e
2
(s−1)
−( s+1) −( s+2)
10. e −t
+ u(t − 1)(e
−2t
−e
−t
);
1−e
s+1
+
e
s+2
−2s
s
2
+e
−3s
(
2
s
−
1
2
s
)
s
)−e
−2s
(
1
s
2
+
2
s
)
s
2
+e
−s
(
s
2
3
+
s
1
2
)−e
−2s
(
s
2
3
+
4
s
2
+
4
s
)
−s
s2
−2
e
s2
+e
−2s
(
s2
1
+
6
s
)
π
− s
−πs
1+e 2 s−e (s−2)
15. sin t + u(t − π/2) sin t + u(t − π)(cos t − 2 sin t); 2
s +1
s
−e
−s
(
2
s2
+
2
s
)+e
−3s
(
5
s2
+
13
s
)
s
+e
−2s
(
s2
3
+
5
s
)+e
−4s
(
1
s2
+
2
s
)
18. (t + 1) 2
+ u(t − 1)(2t + 3);
2
s
3
+
2
s
2
+
1
s
+e
−s
(
s
2
2
+
5
s
)
0, 0 ≤ t < 2,
19. u(t − 2)e 2(t−2)
={
2(t−2)
e , t ≥2
0, 0 ≤ t < 1,
20. u(t − 1) (1 − e −(t−1)
) ={
−(t−1)
1 −e , t ≥1
⎧ 0, 0 ≤ t < 1,
⎪
⎪
2
⎪ 2
(t−1) (t−1)
21. u(t − 1) 2
+ u(t − 2)(t − 2) = ⎨
2
, 1 ≤ t < 2,
⎪
⎪
⎩
⎪ 2
t −3
, t ≥2
2
⎧ 2 + t, 0 ≤ t < 1,
⎪
⎧ 5 − t, 0 ≤ t < 3,
⎪
2
u(t − 6)(t − 6 )
2
=⎨ 6t − 10, 3 ≤ t < 6,
⎩
⎪ 3 2
44 − 12t + t , t ≥6
2
0, 0 ≤ t < π,
24. u(t − π)e −2(t−π)
(2 cos t − 5 sin t) = {
−2(t−π)
e (2 cos t − 5 sin t) t ≥π
π
1 − cos t, 0 ≤t < ,
25. 1 − cos t + u(t − π/2)(3 sin t + cos t) = { π
2
1 + 3 sin t, t ≥
2
0, 0 ≤ t < 2,
26. u(t − 2)(4e −(t−2)
− 4e
2(t−2)
+ 2e
(t−2)
={
−(t−2) 2(t−2) (t−2)
4e − 4e + 2e , t ≥2
⎧ t + 1,
⎪
0 ≤ t < 1,
2
⎧ 1 −t , 0 ≤ t < 2,
⎪
⎪
2
2
3t
28. 1 − t 2
+ u(t − 2) (−
t
2
+ 2t + 1) + u(t − 4)(t − 4) = ⎨ −
2
+ 2t + 2, 2 ≤ t < 4,
⎪
⎩
⎪ 2
3t
− + 3t − 2, t ≥4
2
−τ s
29. e
m=1
u(t − m);
1
s(1−e
−s
)
−s
∞ 1−e
34. 1 + 2 ∑ m=1
(−1 )
m
u(t − m);
1
s
;
1+e
−s
−s −s
∞ e (1+e )
35. 1 + ∑ m=1
(2m + 1)u(t − m);
−s
2
s(1−e )
s
∞ (1−e )
36. ∑ m=1
(−1 )
m
(2m − 1)u(t − m);
1
s s
2
(1+e )
Section 8.5 Answers
1. y = 3(1 − cos t) − 3u(t − π)(1 + cos t)
2. y = 3 − 2 cos t + 2u(t − 4)(t − 4 − sin(t − 4))
u(t−1)
3. y = − 15
2
+
3
2
e
2t
− 2t +
2
(e
2(t−1)
− 2t + 1)
4. y = 1
2
e +
t 13
6
e
−t
+
1
3
e
2t
+ u(t − 2) (−1 +
1
2
e
t−2
+
1
2
e
−(t−2)
+
1
2
e
t+2
−
1
6
e
−(t−6)
−
1
3
e
2t
)
5. y = −7e t
+ 4e
2t
+ u(t − 1) (
1
2
−e
t−1
+
1
2
e
2(t−1)
) − 2u(t − 2) (
1
2
−e
t−2
+
1
2
e
2(t−2)
)
6. y = 1
3
sin 2t − 3 cos 2t +
1
3
sin t − 2u(t − π) (
1
3
sin t +
1
6
sin 2t) + u(t − 2π) (
1
3
sin t −
1
6
sin 2t)
7. y = 1
4
−
31
12
e
4t
+
16
3
t
e + u(t − 1) (
2
3
e
t−1
−
1
6
e
4(t−1)
−
1
2
) + u(t − 2) (
1
4
+
12
1
e
4(t−2)
−
1
3
e
t−2
)
8. y = 1
8
(cos t − cos 3t) −
1
8
u (y −
3π
2
) (sin t − cos t + sin 3t −
1
3
cos 3t)
9. y = 4
t
−
1
8
sin 2t +
1
8
u (t −
π
2
) (π cos 2t − sin 2t + 2π − 2t)
11. y = u(t − 2) (t − 1
2
+
e
2
− 2e
t−2
)
2
+
1
2
e
−2t
−e
−t
+ u(t − 2) (2 e
−(t−2)
−e
−2(t−2)
− 1)
14. y = − 1
3
−
1
6
e
3t
+
1
2
t
e + u(t − 1) (
2
3
+
1
3
e
3(t−1)
−e
t−1
)
15. y = 1
4
(e + e
t −t
(11 + 6t)) + u(t − 1)(te
−(t−1)
− 1)
16. y = e t
−e
−t
− 2te
−t
− u(t − 1) (e − e
t −(t−2)
− 2(t − 1)e
−(t−2)
)
17. y = te −t
+e
−2t
+ u(t − 1) (e
−t
(2 − t) − e
−(2t−1)
)
2 2t
18. y = t e
2
− te
2t
− u(t − 2)(t − 2 ) e
2 2t
19. y = t
12
+1 −
1
12
u(t − 1)(t
4
+ 2t
3
− 10t + 7) +
1
6
u(t − 2)(2 t
3
+ 3t
2
− 36t + 44)
20. y = 1
2
e
−t
(3 cos t + sin t) +
1
2
− u(t − 2π) (e
−(t−2π)
((π − 1) cos t +
2π−1
2
sin t) + 1 −
t
2
)
1 −(t−3π)
− u(t − 3π)(e (3π cos t + (3π + 1) sin t) + t)
2
2
2
∞ (t−m)
21. y = t
2
+∑
m=1
u(t − m)
2
22.
2m + 1 − cos t, 2mπ ≤ t < (2m + 1)π (m = 0, 1, …)
a. y ={
2m, (2m − 1)π ≤ t < 2mπ (m = 1, 2, …)
m+1
d. y = e
2(e−1)
−1
(e
t−m
+e
−t
) − m − 1, m ≤ t < m +1 (m = 0, 1, …)
2( m+1) π
e. y = (m + 1 − (
e
e
2π
−1
−1
)e
−t
) sin t 2mπ ≤ t < 2(m + 1)π (m = 0, 1, …)
m+1 2m+2
f. y = m+1
2
−e
t−m e
e−1
−1
+
1
2
e
2(t−m) e
e −1
2
−1
, m ≤ t < m +1 (m = 0, 1, …)
Section 8.6 Answers
1.
t
a. 1
2
∫
0
τ sin 2(t − τ )dτ
t
b. ∫ 0
e
−2τ
cos 3(t − τ )dτ
t t
c. 1
2
∫
0
sin 2τ cos 3(t − τ )dτ or 1
3
∫
0
sin 3τ cos 2(t − τ )dτ
t
d. ∫ 0
cos τ sin(t − τ )dτ
t
e. ∫ 0
e
aτ
dτ
t
f. e −t
∫
0
sin(t − τ )dτ
t
g. e −2t
∫
0
τe
τ
sin(t − τ )dτ
−2t
t
h. e
2
∫
0
τ
2
(t − τ )e
3τ
dτ
t
i. ∫ 0
(t − τ )e
τ
cos τ dτ
t
j. ∫ 0
e
−3τ
cos τ cos 2(t − τ )dτ
t
k. 4!5!
1
∫
0
τ
4
(t − τ ) e
5 3τ
dτ
t
l. 1
4
∫
0
τ
2
e
τ
sin 2(t − τ )dτ
t
m. 1
2
∫
0
τ (t − τ ) e
2 2(t−τ)
dτ
t
n. 5!6!
1
∫
0
(t − τ ) e
5 2(t−τ)
τ
6
dτ
2.
a. as
2
( s2 +a2 )( s2 +b )
b. (s−1)( s2 +a2 )
a
c. as
2
( s2 −a2 )
2 2
2ωs( s −ω )
d. 2 2
4
( s +ω )
(s−1)ω
e. 2 2 2
((s−1 ) +ω )
f. 3
2
2
(s−2 ) (s−1 )
s+1
g. 2 2
(s+2 ) [(s+1 ) +ω2 ]
h. 1
2
(s−3)((s−1 ) −1)
i. 2
2
2
(s−2 ) ( s +4)
j. 4
s (s−1)
6
k. 3⋅6!
2
7
s [(s+1 ) +9]
l. 12
s
7
m. 2⋅7!
2
8
s [(s+1 ) +4]
n. 5
48
s ( s +4)
2
3.
t √5τ
a. y =
√5
2
∫
0
f (t − τ )e
−3τ/2
sinh
2
dτ
t
b. y = 1
2
∫
0
f (t − τ ) sin 2τ dτ
t
c. y = ∫ τ e 0
−τ
f (t − τ )dτ
t
d. y(t) = − 1
k
sin kt + cos kt +
1
k
∫
0
f (t − τ ) sin kτ dτ
t
e. y = −2te + ∫ −3t
0
τe
−3τ
f (t − τ )dτ
t
f. y = sinh 2t + 3
2
1
2
∫
0
f (t − τ ) sinh 2τ dτ
t
g. y = e 3t
+∫
0
(e
3τ
−e
2τ
)f (t − τ )dτ
k1 t
h. y = ω
sin ωt + k0 cos ωt +
1
ω
∫
0
f (t − τ ) sin ωτ dτ
4.
a. y = sin t
b. y = te −t
c. y = 1 + 2te t
d. y = t + t
e. y = 4 + t + 5
2
2 1
24
4
t
f. y = 1 − t
5.
a. 7!8!
16!
t
16
b. 13!7!
21!
t
21
c. 6!7!
14!
t
14
d. 1
2
(e
−t
+ sin t − cos t)
e. 1
3
(cos t − cos 2t)
Section 8.7 Answers
1. y = 1
2
e
2t
− 4e
−t
+
11
2
e
−2t
+ 2u(t − 1)(e
−(t−1)
−e
−2(t−1)
)
2. y = 2e −2t
+ 5e
−t
+
5
3
u(t − 1)(e
(t−1)
−e
−2(t−1)
)
3. y = 1
6
e
2t
−
2
3
e
−t
−
1
2
e
−2t
+
5
2
u(t − 1) sinh 2(t − 1)
4. y = 1
8
(8 cos t − 5 sin t − sin 3t) − 2u(t − π/2) cos t
5. y = 1 − cos 2t + 1
2
sin 2t +
1
2
u(t − 3π) sin 2t
6. y = 4e t
+ 3e
−t
− 8 + 2u(t − 2) sinh(t − 2)
7. y = 1
2
e −
t 7
2
e
−t
+ 2 + 3u(t − 6)(1 − e
−(t−6)
)
8. y = e 2t
+ 7 cos 2t − sin 2t −
1
2
u(t − π/2) sin 2t
9. y = 1
2
(1 + e
−2t
) + u(t − 1)(e
−(t−1)
−e
−2(t−1)
)
10. y = 1
4
e +
t 1
4
e
−t
(2t − 5) + 2u(t − 2)(t − 2)e
−(t−2)
11. y = 1
6
(2 sin t + 5 sin 2t) −
1
2
u(t − π/2) sin 2t
12. y = e −t
(sin t − cos t) − e
−(t−π)
sin t − 3u(t − 2π)e
−(t−2π)
sin t
13. y = e −2t
(cos 3t +
4
3
sin 3t) −
1
3
u(t − π/6)e
−2(t−π/6)
cos 3t −
2
3
u(t − π/3)e
−2(t−π/3)
sin 3t
14. y = 7
10
e
2t
−
6
5
e
−t/2
−
1
2
+
1
5
u(t − 2)(e
2(t−2)
−e
−(t−2)/2
)
15. y = 1
17
(12 cos t + 20 sin t) +
1
34
e
t/2
(10 cos t − 11 sin t) − u(t − π/2)e
(2t−π)/4
cos t + u(t − π)e
(t−π)/2
sin t
16. y = 1
3
(cos t − cos 2t − 3 sin t) − 2u(t − π/2) cos t + 3u(t − π) sin t
17. y = e t
−e
−t
(1 + 2t) − 5u(t − 1) sinh(t − 1) + 3u(t − 2) sinh(t − 2)
18. y = 1
4
(e − e
t −t
(1 + 6t)) − u(t − 1)e
−(t−1)
+ 2u(t − 2)e
−(t−2)
)
19. y = 5
3
sin t −
1
3
sin 2t +
1
3
u(t − π)(sin 2t + 2 sin t) + u(t − 2π) sin t
20. y = 3
4
cos 2t −
1
2
sin 2t +
1
4
+
1
4
u(t − π/2)(1 + cos2t) +
1
2
u(t − π) sin 2t +
3
2
u(t − 3π/2) sin 2t
4
(8 e
3t
− 12 e
−2t
)
24. y = e −2t
(1 + 6t)
25. y = 1
4
e
−t/2
(4 − 19t)
29. y = (−1) k
m ω1 Re
−cτ/2m
δ(t − τ ) if ω 1τ − ϕ = (2k + 1)π/2(k = integer )
30.
m+1 t−m −t
(e −1)( e −e )
a. y =
2(e−1)
, m ≤ t < m + 1, (m = 0, 1, …)
Section 9.1 Answers
2. y = 2x 2
− 3x
3
+
x
1
3. y = 2e x
+ 3e
−x
−e
2x
+e
−3x
i−1
(x−x0 )
4. y i =
(i−1)!
, 1 ≤i ≤n
5.
b. y 1 =−
1
2
x
3
+x
2
+
2x
1
, y2 =
1
3
2
x −
1
3x
, y3 =
1
4
3
x −
1
3
2
x +
1
12x
c. y = k 0 y1 + k1 y2 + k2 y3
7. 2e −x
–
8. √2K cos x
9.
a. W (x) = 2e 3x
d. y = e x
(c1 + c2 x + c3 x )
2
10.
a. 2
b. −e 3x
c. 4
d. 4/x 2
e. 1
f. 2x
g. 2/x 2
h. e (x − 2x + 2)
x 2
i. −240/x 5
j. 6e (2x − 1)
2x
k. −128x
24.
a. y = 0 ′′′
b. x y − y − x y + y = 0
′′′ ′′ ′
d. (x − 2x + 2)y − x y + 2x y − 2y = 0
2 ′′′ 2 ′′ ′
e. x y + x y − 2x y + 2y = 9
3 ′′′ 2 ′′ ′
g. x y + 5x y − 3x y − 6x y + 6y = 0
4 (4) 3 ′′′ 2 ′′ ′
h. x y + 3x y − x y + 2x y − 2y = 0
4 (4) 2 ′′′ 2 ′′ ′
j. x y − y − 4x y + 4y = 0
(4) ′′′ ′′ ′
Section 9.2 Answers
1. y = e x
(c1 + c2 x + c3 x )
2
2. y = c 1e
x
+ c2 e
−x
+ c3 cos 3x + c4 sin 3x
3. y = c 1e
x
+ c2 cos 4x + c3 sin 4x
4. y = c 1e
x
+ c2 e
−x
+ c3 e
−3x/2
5. y = c 1e
−x
+e
−2x
(c1 cos x + c2 sin x)
6. y = c 1e
x
+e
x/2
(c2 + c3 x)
7. y = e −x/3
(c1 + c2 x + c3 x )
2
8. y = c 1 + c2 x + c3 cos x + c4 sin x
9. y = c 1e
2x
+ c2 e
−2x
+ c3 cos 2x + c4 sin 2x
– –
10. y = (c 1 + c2 x) cos √6x + (c3 + c4 x) sin √6x
11. y = e 3x/2
(c1 + c2 x) + e
−3x/2
(c3 + c4 x)
12. y = c 1e
−x/2
+ c2 e
−x/3
+ c3 cos x + c4 sin x
13. y = c 1e
x
+ c2 e
−2x
+ c3 e
−x/2
+ c4 e
−3x/2
14. y = e x
(c1 + c2 x + c3 cos x + c4 sin x)
16. y = 2e x
+ 3e
−x
− 5e
−3x
17. y = 2e x
+ 3x e
x
− 4e
−x
18. y = 2e −x
cos x − 3 e
−x
sin x + 4 e
2x
19. y = 9
5
e
−5x/3 x
+ e (1 + 2x)
20. y = e 2x
(1 − 3x + 2 x )
2
21. y = e 3x
(2 − x) + 4 e
−x/2
22. y = e x/2
(1 − 2x) + 3 e
−x/2
23. y = 1
8
(5 e
2x
+e
−2x
+ 10 cos 2x + 4 sin 2x)
24. y = −4e x
+e
2x
−e
4x
+ 2e
−x
25. y = 2e x
−e
−x
26. y = e 2x
+e
−2x
+e
−x
(3 cos x + sin x)
27. y = 2e −x/2
+ cos 2x − sin 2x
28.
a. {e , x e , e } : 1
x x 2x
c. {e cos x, e sin x, e } : 5
−x −x x
d. {1, x, x , e } 2e 2 x x
e. {e , e , cos x, sin x} 8
x −x
29. {e −3x
cos 2x, e
−3x
sin 2x, e
2x
, xe
2x 2
, 1, x, x }
30. {e x
, xe , e
x x/2
, xe
x/2
,x e
2 x/2
, cos x, sin x}
31. {cos 3x, x cos 3x, x 2
cos 3x, sin 3x, x sin 3x, x
2
sin 3x, 1, x}
32. {e 2x
, xe
2x
,x e
2 2x
,e
−x
, xe
−x
, 1}
33. {cos x, sin x, cos 3x, x cos 3x, sin 3x, x sin 3x, e 2x
}
34. {e 2x
, xe
2x
,e
−2x
, xe
−2x
, cos 2x, x cos 2x, sin 2x, x sin 2x}
35. {e −x/2
cos 2x, x e
−x/2
cos 2x, x e
2 −x/2
cos 2x, e
−x/2
sin 2x, x e
−x/2
sin 2x, x e
2 −x/2
sin 2x}
36. {1, x, x 2
,e
2x
, xe
2x
, cos 2x, x cos 2x, sin 2x, x sin 2x}
(2x/3)}
38. {e −x
,e
3x
,e
x
cos 2x, e
x
sin 2x}
43.
√3 √3
a. x
{e , e
−x/2
cos(
2
x), e
−x/2
sin(
2
x)}
√3 √3
b. {e −x
,e
x/2
cos(
2
x), e
x/2
sin(
2
x)}
c. {e
2x
cos 2x, e
2x
sin 2x, e
−2x
cos 2x, e
−2x
sin 2x}
√3 √3 √3 √3
d. {e x
,e
−x
,e
x/2
cos(
2
x), e
x/2
sin(
2
x), e
−x/2
cos(
2
x), e
−x/2
sin(
2
x)}
√3 √3 √3 √3
f. {1, e 2x
,e
3x/2
cos(
2
x), e
3x/2
sin(
2
x), e
x/2
cos(
2
x), e
x/2
sin(
2
x)}
√3 √3 √3 √3
g. {e −x
.e
x/2
cos(
2
x), e
x/2
sin(
2
x), e
−x/2
cos(
2
x), e
−x/2
sin(
2
x)}
45. y = c 1x
r1
+ c2 x
r2
+ c3 x
r3
(r1 , r2 , r3 distinct); y = c1 x
r1
+ (c2 + c3 ln x)x
r2
2 r1 r1 λ
(r1 , r2 distinct); y = [ c1 + c2 ln x + c3 (ln x ) ] x ; y = c1 x + x [ c2 cos(ω ln x) + c3 sin(ω ln x)]
Section 9.3 Answers
1. y
p =e
−x
(2 + x − x )
2
−3x
2. y
p =−
e
4
(3 − x + x )
2
3. y
p
x
= e (1 + x − x )
2
4. y
p =e
−2x
(1 − 5x + x )
2
5. y
p =−
xe
2
(1 − x + x
2
−x )
3
6. y
p
2
= x e (1 + x)
x
−2x
7. y
p =
xe
2
(2 + x)
2 x
8. y
p =
x e
2
(2 + x)
2 2x
9. y
p =
x e
2
(1 + 2x)
10. y p =x e
2 3x
(2 + x − x )
2
11. y p =x e
2 4x
(2 + x)
3 x/2
12. y p =
x e
48
(1 + x)
13. y p =e
−x
(1 − 2x + x )
2
14. y p =e
2x
(1 − x)
15. y p =e
−2x
(1 + x + x
2
−x )
3
16. y p =
e
3
(1 − x)
17. y p
x
= e (1 + x )
2
18. y p = x e (1 + x )
x 3
19. y p = x e (2 + x)
x
2x
20. y p =
xe
6
(1 − x )
2
21. y p = 4x e
−x/2
(1 + x)
22. y p =
xe
6
(1 + x )
2
2 2x
23. y p =
x e
6
(1 + x + x )
2
2 2x
24. y p =
x e
6
(3 + x + x )
2
3 x
25. y p =
x e
48
(2 + x)
3 x
26. y p =
x e
6
(1 + x)
3 −x
27. y p =−
x e
6
(1 − x + x )
2
3 2x
28. y p =
x e
12
(2 + x − x )
2
29. y p =e
−x
[(1 + x) cos x + (2 − x) sin x]
30. y p =e
−x
[(1 − x) cos 2x + (1 + x) sin 2x]
31. y p =e
2x 2
[(1 + x − x ) cos x + (1 + 2x) sin x]
32. y p =
e
2
[(1 + x) cos 2x + (1 − x + x ) sin 2x]
2
33. y p =
x
13
(8 cos 2x + 14 sin 2x)
34. y p = xe
x
[(1 + x) cos x + (3 + x) sin x]
35. y_p = (xe^{2x}/2)[(3 − x) cos 2x + sin 2x]
36. y_p = −(xe^{3x}/12)(x cos 3x + sin 3x)
37. y_p = −(e/10)(cos x + 7 sin x)
38. y_p = (e/12)(cos 2x − sin 2x)
39. y_p = xe^{2x} cos 2x
40. y_p = −(e^{−x}/2)[(1 + x) cos x + (2 − x) sin x]
41. y_p = (xe^{−x}/10)(cos x + 2 sin x)
42. y_p = (xe/4)(3 cos 2x − sin 2x)
43. y_p = (xe^{−2x}/8)[(1 − x) cos 3x + (1 + x) sin 3x]
44. y_p = −(xe/4)(1 + x) sin 2x
45. y_p = (x^{2}e^{−x}/4)(cos x − 2 sin x)
46. y_p = −(x^{2}e^{2x}/32)(cos 2x − sin 2x)
47. y_p = (x^{2}e^{2x}/8)(1 + x) sin x
48. y_p = 2x^{2}e^{x} + xe^{2x} − cos x
49. y_p = e^{2x} + xe^{x} + 2x cos x
50. y_p = 2x + x^{2} + 2xe^{x} − 3xe^{−x} + 4e^{3x}
51. y_p = xe^{x}(cos 2x − 2 sin 2x) + 2xe^{2x} + 1
52. y_p = x^{2}e^{−2x}(1 + 2x) − cos 2x + sin 2x
53. y_p = 2x^{2}(1 + x)e^{−x} + x cos x − 2 sin x
54. y_p = 2xe^{x} + xe^{2x} + cos x
55. y_p = (xe/6)(cos x + sin 2x)
56. y_p = (x/54)[(2 + 2x)e^{x} + 3e^{−2x}]
57. y_p = (x/8) sinh x sin x
58. y_p = x^{3}(1 + x)e^{−x} + xe^{−2x}
59. y_p = xe^{x}(2x^{2} + cos x + sin x)
60. y = e^{2x}(1 + x) + c1 e^{−x} + e^{x}(c2 + c3 x)
61. y = e^{3x}(1 − x − x/2) + c1 e^{x} + e^{−x}(c2 cos x + c3 sin x)
62. y = xe^{2x}(1 + x^{2}) + c1 e^{x} + c2 e^{2x} + c3 e^{3x}
63. y = x^{2}e^{−x}(1 − x^{2}) + c1 + e^{−x}(c2 + c3 x)
64. y = (x^{3}e^{x}/24)(4 + x) + e^{x}(c1 + c2 x + c3 x^{2})
65. y = (x^{2}e^{−x}/16)(1 + 2x − x^{2}) + e^{x}(c1 + c2 x) + e^{−x}(c3 + c4 x)
66. y = e^{−2x}[(1 + x/2) cos x + (3/2 − 2x) sin x] + c1 e^{x} + c2 e^{−x} + c3 e^{−2x}
67. y = −xe^{x} sin 2x + c1 + c2 e^{x} + e^{x}(c3 cos x + c4 sin x)
68. y = −(x^{2}e^{x}/16)(1 + x) cos 2x + e^{x}[(c1 + c2 x) cos 2x + (c3 + c4 x) sin 2x]
69. y = (x^{2} + 2)e^{x} − e^{−2x} + e^{3x}
70. y = e^{−x}(1 + x + x^{2}) + (1 − x)e^{x}
71. y = (x/12 + 16)xe^{−x/2} − e^{x}
72. y = (2 − x)(x^{2} + 1)e^{−x} + cos x − sin x
74. 2 + e^{x}[(1 + x) cos x − sin x − 1]
Section 9.4 Answers
1. y_p = 2x^{3}
2. y_p = (8/105)x^{7/2}e^{−x}
3. y_p = (x ln |x|)/2
4. y_p = −2(x^{2} + 2)/x
5. y_p = −xe^{−3x}/64
6. y_p = −2x
7. y_p = −e^{−x}(x + 1)/x
8. y_p = 2x^{2} ln |x|
9. y_p = x^{2} + 1
10. y_p = (x^{2} + 6)/2
11. y_p = (x ln |x|)/3
12. y_p = −x^{2} − 2
13. (1/4)x^{3} ln |x| − (25/48)x^{3}
14. y_p = x^{5/2}/2
15. y_p = x(12 − x^{2})/6
16. y_p = (x^{4} ln |x|)/6
17. y_p = x^{3}e^{x}
18. y_p = x^{2} ln |x|
19. y_p = xe
20. y_p = 3xe
21. y_p = −x^{3}
22. y = −x(ln x)^{2} + 3x + x^{3} − 2x ln x
23. y = x^{2}(ln |x|)^{2} + x^{2} − x^{3} + 2x^{3} ln |x|
24. y = −(1/2)(3x + 1)xe^{x} − 3e^{x} − e^{2x} + 4xe^{−x}
25. y = (3/2)x^{4}(ln x)^{2} + 3x − x^{4} + 2x^{4} ln x
26. y = −(x + 12)/6 + 3x − x^{2} + 2e^{x}
27. y = (x^{3} − x^{2}) ln |x| + 4x − 2x^{2}
28. y = −xe^{x}(1 + 3x)/2 + (x + 1)/2 − e^{x}/4 + e^{3x}/4
29. y = −8x + 2x^{2} − 2x^{3} + 2e^{x} − e^{−x}
30. y = 3x^{2} ln x − 7x^{2}/2
31. y = 3(4x^{2} + 9)/2 + x/2 − e^{x}/2 + e^{−x}/2 + e^{2x}/2
32. y = x ln x + x − √x + 1/x + 1/√x
33. y = x^{3} ln |x| + x − 2x^{3} + 1/x − 1/x^{2}
6
+2 e
F (t)dt
36. y_p = ∫_{x0}^{x} [(x − t)^{2}(2x + t)/(6xt^{3})] F(t) dt
37. y_p = ∫_{x0}^{x} [(xe^{x−t} − x^{2} + x(t − 1))/(4t)] F(t) dt
38. y_p = ∫_{x0}^{x} [(x^{2} − t(t − 2) − 2te^{x−t})/2] F(t) dt
2x(t−1)
12
−e
F (t)dt
40. y_p = ∫_{x0}^{x} [(x − t)^{3}/(6x)] F(t) dt
41. y_p = ∫_{x0}^{x} [(x + t)(x − t)^{3}/(12x^{2}t^{3})] F(t) dt
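Once a specific forcing function F is chosen, formulas of this kind can be evaluated numerically. A minimal scipy sketch for the kernel in Exercise 40, with the hypothetical choices F(t) = 1 and x0 = 1:

    from scipy.integrate import quad

    def yp(x, F=lambda t: 1.0, x0=1.0):
        # Exercise 40: y_p(x) = integral from x0 to x of (x - t)^3 / (6x) * F(t) dt
        value, _ = quad(lambda t: (x - t) ** 3 / (6.0 * x) * F(t), x0, x)
        return value

    # With F(t) = 1 and x0 = 1 the integral works out to (x - 1)^4 / (24 x).
    for x in (1.5, 2.0):
        print(x, yp(x), (x - 1) ** 4 / (24 * x))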
Section 10.1 Answers
1. Q1' = 2 − (1/10)Q1 + (1/25)Q2
   Q2' = 6 + (3/50)Q1 − (1/20)Q2
2. Q1' = 12 − [5/(100 + 2t)]Q1 + [1/(100 + 3t)]Q2
   Q2' = 5 + [1/(50 + t)]Q1 − [4/(100 + 3t)]Q2
3. m1 y1'' = −(c1 + c2)y1' + c2 y2' − (k1 + k2)y1 + k2 y2 + F1
   m2 y2'' = (c2 − c3)y1' − (c2 + c3)y2' + c3 y3' + (k2 − k3)y1 − (k2 + k3)y2 + k3 y3 + F2
   m3 y3'' = c3 y1' + c3 y2' − c3 y3' + k3 y1 + k3 y2 − k3 y3 + F3
4. x'' = −(α/m)x' − gR^{2}x/(x^{2} + y^{2} + z^{2})^{3/2}
   y'' = −(α/m)y' − gR^{2}y/(x^{2} + y^{2} + z^{2})^{3/2}
   z'' = −(α/m)z' − gR^{2}z/(x^{2} + y^{2} + z^{2})^{3/2}
5.
a. x1' = x2, x2' = x3, x3' = f(t, x1, y1, y2); y1' = y2, y2' = g(t, y1, y2)
b. u1' = f(t, u1, v1, v2, w2); v1' = v2, v2' = g(t, u1, v1, v2, w1); w1' = w2, w2' = h(t, u1, v1, v2, w1, w2)
c. y1' = y2, y2' = y3, y3' = f(t, y1, y2, y3)
d. y1' = y2, y2' = y3, y3' = y4, y4' = f(t, y1)
e. x1' = x2, x2' = f(t, x1, y1); y1' = y2, y2' = g(t, x1, y1)
6. x' = x1, x1' = −gR^{2}x/(x^{2} + y^{2} + z^{2})^{3/2}
   y' = y1, y1' = −gR^{2}y/(x^{2} + y^{2} + z^{2})^{3/2}
   z' = z1, z1' = −gR^{2}z/(x^{2} + y^{2} + z^{2})^{3/2}
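Rewriting a higher order equation in first-order form is exactly what numerical solvers expect. A minimal scipy sketch for the form in Exercise 5(c), with a hypothetical right-hand side f (the exercise leaves f general):

    from scipy.integrate import solve_ivp

    # Hypothetical f; Exercise 5(c) only requires f = f(t, y1, y2, y3).
    def f(t, y1, y2, y3):
        return -y1

    def rhs(t, y):
        y1, y2, y3 = y
        return [y2, y3, f(t, y1, y2, y3)]   # y1' = y2, y2' = y3, y3' = f(t, y1, y2, y3)

    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0])
    print(sol.y[0, -1])   # y1(5) for this hypothetical f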
Section 10.2 Answers
1.
a. y' = [[2, 4], [4, 2]] y
b. y' = [[−2, −2], [−5, 1]] y
c. y' = [[−4, −10], [3, 7]] y
d. y' = [[2, 1], [1, 2]] y
2.
a. y' = [[−1, 2, 3], [0, 1, 6], [0, 0, −2]] y
b. y' = [[0, 2, 2], [2, 0, 2], [2, 2, 0]] y
c. y' = [[−1, 2, 2], [2, −1, 2], [2, 2, −1]] y
d. y' = [[3, −1, −1], [−2, 3, 2], [4, −1, −2]] y
3.
a. y' = [[1, 1], [−2, 4]] y, y(0) = [1; 0]
b. y' = [[5, 3], [−1, 1]] y, y(0) = [9; −5]
4.
a. y' = [[6, 4, 4], [−7, −2, −1], [7, 4, 3]] y, y(0) = [3; −6; 4]
b. y' = [[8, 7, 7], [−5, −6, −9], [5, 7, 10]] y, y(0) = [2; −4; 3]
5.
a. y' = [[−3, 2], [−5, 3]] y + [3 − 2t; 6 − 3t]
b. y' = [[3, 1], [−1, 1]] y + [−5e^{t}; e^{t}]
10.
a. (d/dt)Y^{2} = Y'Y + YY'
b. (d/dt)Y^{n} = Y'Y^{n−1} + YY'Y^{n−2} + Y^{2}Y'Y^{n−3} + ⋯ + Y^{n−1}Y' = ∑_{r=0}^{n−1} Y^{r}Y'Y^{n−r−1}
13. B = (P' + PA)P^{−1}
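Matrix identities like the one in Exercise 10(a) are easy to sanity-check symbolically. A minimal sympy sketch with an arbitrary 2×2 matrix function of t; the particular entries are only an illustration:

    import sympy as sp

    t = sp.symbols('t')
    # An arbitrary 2x2 matrix function of t, used only to illustrate d/dt Y^2 = Y'Y + YY'.
    Y = sp.Matrix([[sp.sin(t), t**2],
                   [sp.exp(t), 1 / t]])

    lhs = sp.diff(Y**2, t)
    rhs = sp.diff(Y, t) * Y + Y * sp.diff(Y, t)
    print(sp.simplify(lhs - rhs))   # zero matrix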
Section 10.3 Answers
2. y' = [[0, 1], [−P2(x)/P0(x), −P1(x)/P0(x)]] y
3. y' = [[0, 1, …, 0], [⋮, ⋮, ⋱, ⋮], [0, 0, …, 1], [−Pn(x)/P0(x), −Pn−1(x)/P0(x), …, −P1(x)/P0(x)]] y
7.
b. y = [3e^{6t} − 6e^{−2t}; 3e^{6t} + 6e^{−2t}]
c. y = (1/2)[[e^{6t} + e^{−2t}, e^{6t} − e^{−2t}], [e^{6t} − e^{−2t}, e^{6t} + e^{−2t}]] k
8.
b. y = [6e^{−4t} + 4e^{3t}; 6e^{−4t} − 10e^{3t}]
c. y = (1/7)[[5e^{−4t} + 2e^{3t}, 2e^{−4t} − 2e^{3t}], [5e^{−4t} − 5e^{3t}, 2e^{−4t} + 5e^{3t}]] k
9.
b. y = [−15e^{2t} − 4e^{t}; 9e^{2t} + 2e^{t}]
c. y = [[−5e^{2t} + 6e^{t}, −10e^{2t} + 10e^{t}], [3e^{2t} − 3e^{t}, 6e^{2t} − 5e^{t}]] k
10.
b. y = [5e^{3t} − 3e^{t}; 5e^{3t} + 3e^{t}]
c. y = (1/2)[[e^{3t} + e^{t}, e^{3t} − e^{t}], [e^{3t} − e^{t}, e^{3t} + e^{t}]] k
11.
b. y = [e^{2t} − 2e^{3t} + 3e^{−t}; 2e^{3t} − 9e^{−t}; e^{2t} − 2e^{3t} + 21e^{−t}]
c. y = (1/6)[[4e^{2t} + 3e^{3t} − e^{−t}, 6e^{2t} − 6e^{3t}, 2e^{2t} − 3e^{3t} + e^{−t}], [−3e^{3t} + 3e^{−t}, 6e^{3t}, 3e^{3t} − 3e^{−t}], [4e^{2t} + 3e^{3t} − 7e^{−t}, 6e^{2t} − 6e^{3t}, 2e^{2t} − 3e^{3t} + 7e^{−t}]] k
12.
b. y = (1/3)[−e^{−2t} + e^{4t}; −10e^{−2t} + e^{4t}; 11e^{−2t} + e^{4t}]
c. y = (1/3)[[2e^{−2t} + e^{4t}, −e^{−2t} + e^{4t}, −e^{−2t} + e^{4t}], [−e^{−2t} + e^{4t}, 2e^{−2t} + e^{4t}, −e^{−2t} + e^{4t}], [−e^{−2t} + e^{4t}, −e^{−2t} + e^{4t}, 2e^{−2t} + e^{4t}]] k
13.
b. y = [3e^{t} + 3e^{−t} − e^{−2t}; 3e^{t} + 2e^{−2t}; −e^{−2t}]
c. y = [[e^{−t}, e^{t} − e^{−t}, 2e^{t} − 3e^{−t} + e^{−2t}], [0, e^{t}, 2e^{t} − 2e^{−2t}], [0, 0, e^{−2t}]] k
14. YZ^{−1} and ZY^{−1}
Section 10.4 Answers
1. y = c1 [1; 1] e^{3t} + c2 [1; −1] e^{−t}
2. y = c1 [1; 1] e^{−t/2} + c2 [−1; 1] e^{−2t}
3. y = c1 [−3; 1] e^{−t} + c2 [−1; 2] e^{−2t}
4. y = c1 [2; 1] e^{−3t} + c2 [−2; 1] e^{t}
5. y = c1 [1; 1] e^{−2t} + c2 [−4; 1] e^{3t}
6. y = c1 [3; 2] e^{2t} + c2 [1; 1] e^{t}
7. y = c1 [−3; 1] e^{−5t} + c2 [−1; 1] e^{−3t}
8. y = c1 [1; 2; 1] e^{−3t} + c2 [−1; −4; 1] e^{−t} + c3 [−1; −1; 1] e^{2t}
9. y = c1 [2; 1; 2] e^{−16t} + c2 [−1; 2; 0] e^{2t} + c3 [−1; 0; 1] e^{2t}
10. y = c1 [−2; −4; 3] e^{t} + c2 [−1; 1; 0] e^{−2t} + c3 [−7; −5; 4] e^{2t}
11. y = c1 [−1; −1; 1] e^{−2t} + c2 [−1; −2; 1] e^{−3t} + c3 [−2; −6; 3] e^{−5t}
12. y = c1 [11; 7; 1] e^{3t} + c2 [1; 2; 1] e^{−2t} + c3 [1; 1; 1] e^{−t}
13. y = c1 [4; −1; 1] e^{−4t} + c2 [−1; −1; 1] e^{6t} + c3 [−1; 0; 1] e^{4t}
14. y = c1 [1; 1; 5] e^{−5t} + c2 [−1; 0; 1] e^{5t} + c3 [1; 1; 0] e^{5t}
15. y = c1 [1; −1; 2] + c2 [−1; 0; 3] e^{6t} + c3 [1; 3; 0] e^{6t}
16. y = −[2; 6] e^{5t} + [4; 2] e^{−5t}
17. y = [2; −4] e^{t/2} + [−2; 1] e^{t}
18. y = [7; 7] e^{9t} − [2; 4] e^{−3t}
19. y = [3; 9] e^{5t} − [4; 2] e^{−5t}
20. y = [5; 5; 0] e^{t/2} + [0; 0; 1] e^{t/2} + [−1; 2; 0] e^{−t/2}
21. y = [3; 3; 3] e^{t} + [−2; −2; 2] e^{−t}
22. y = [2; −2; 2] e^{t} − [3; 0; 3] e^{−2t} + [1; 1; 0] e^{3t}
23. y = −[1; 2; 1] e^{t} + [4; 2; 4] e^{−t} + [1; 1; 0] e^{2t}
24. y = [−2; −2; 2] e^{2t} − [0; 3; 0] e^{−2t} + [4; 12; 4] e^{4t}
25. y = [−1; −1; 1] e^{−6t} + [2; −2; 2] e^{2t} + [7; −7; −7] e^{4t}
26. y = [1; 4; 4] e^{−t} + [6; 6; −2] e^{2t}
27. y = [4; −2; 2] + [3; −9; 6] e^{4t} + [−1; 1; −1] e^{2t}
29. Half-lines of L1: y2 = y1 and L2: y2 = −y1 are trajectories; other trajectories are asymptotically tangent to L1 as t → −∞ and asymptotically tangent to L2 as t → ∞.
30. Half-lines of L1: y2 = −2y1 and L2: y2 = −y1/3 are trajectories; other trajectories are asymptotically parallel to L1 as …
31. Half-lines of L1: y2 = y1/3 and L2: y2 = −y1 are trajectories; other trajectories are asymptotically tangent to L1 as t → −∞ and asymptotically parallel to L2 as t → ∞.
32. Half-lines of L1: y2 = y1/2 and L2: y2 = −y1 are trajectories; other trajectories are asymptotically tangent to L1 as t → −∞ and asymptotically tangent to L2 as t → ∞.
33. Half-lines of L1: y2 = −y1/4 and L2: y2 = −y1 are trajectories; other trajectories are asymptotically tangent to L1 as t → −∞ and asymptotically parallel to L2 as t → ∞.
34. Half-lines of L1: y2 = −y1 and L2: y2 = 3y1 are trajectories; other trajectories are asymptotically parallel to L1 as t → −∞ and asymptotically tangent to L2 as t → ∞.
36. Points on L2: y2 = y1 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L1, parallel to [1; −1], traversed toward L1.
37. Points on L1: y2 = −y1/3 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L1, parallel to [−1; 2], traversed away from L1.
38. Points on L1: y2 = y1/3 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L1, parallel to [1; −1], [−1; 1], traversed away from L1.
39. Points on L1: y2 = y1/2 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L1, parallel to [1; −1], …
40. Points on L2: y2 = −y1 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L2, parallel to [−4; 1], traversed toward L1.
41. Points on L1: y2 = 3y1 are trajectories of constant solutions. The trajectories of nonconstant solutions are half-lines on either side of L1, parallel to [1; −1], traversed away from L1.
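The answers in this section all come from eigenpairs of the coefficient matrix, so they can be cross-checked numerically. A minimal numpy sketch with a hypothetical matrix A (not one of the exercise matrices):

    import numpy as np

    # Hypothetical coefficient matrix for y' = A y (not from the exercises).
    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    print(eigvals)    # the exponents lambda_i in the terms c_i v_i e^{lambda_i t}
    print(eigvecs)    # columns are the corresponding constant vectors v_i

    # Spot-check one eigenpair: A v = lambda v.
    v, lam = eigvecs[:, 0], eigvals[0]
    print(np.allclose(A @ v, lam * v))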
Section 10.5 Answers
1. y = c1 [2; 1] e^{5t} + c2 ([−1; 0] e^{5t} + [2; 1] te^{5t})
2. y = c1 [1; 1] e^{−t} + c2 ([1; 0] e^{−t} + [1; 1] te^{−t})
3. y = c1 [−2; 1] e^{−9t} + c2 ([−1; 0] e^{−9t} + [−2; 1] te^{−9t})
4. y = c1 [−1; 1] e^{2t} + c2 ([−1; 0] e^{2t} + [−1; 1] te^{2t})
5. y = c1 [−1; 1] e^{−2t} + c2 ([−1; 0] (e^{−2t}/3) + [−2; 1] te^{−2t})
6. y = c1 [3; 2] e^{−4t} + c2 ([−1; 0] (e^{−4t}/2) + [3; 2] te^{−4t})
7. y = c1 [4; 3] e^{−t} + c2 ([−1; 0] (e^{−t}/3) + [4; 3] te^{−t})
8. y = c1 [−1; −1; 2] + c2 [1; 1; 2] e^{4t} + c3 ([0; 1; 0] (e^{4t}/2) + [1; 1; 2] te^{4t})
9. y = c1 [−1; 1; 1] e^{t} + c2 [1; −1; 1] e^{−t} + c3 ([0; 3; 0] e^{−t} + [1; −1; 1] te^{−t})
10. y = c1 [0; 1; 1] e^{2t} + c2 [1; 0; 1] e^{−2t} + c3 ([1; 1; 0] (e^{−2t}/2) + [1; 0; 1] te^{−2t})
11. y = c1 [−2; −3; 1] e^{2t} + c2 [0; −1; 1] e^{4t} + c3 ([1; 0; 0] (e^{4t}/2) + [0; −1; 1] te^{4t})
12. y = c1 [−1; −1; 1] e^{−2t} + c2 [1; 1; 1] e^{4t} + c3 ([1; 0; 0] (e^{4t}/2) + [1; 1; 1] te^{4t})
13. y = [6; 2] e^{−7t} − [8; 4] te^{−7t}
14. y = [5; 8] e^{3t} − [12; 16] te^{3t}
15. y = [2; 3] e^{−5t} − [8; 4] te^{−5t}
16. y = [3; 1] e^{5t} − [12; 6] te^{5t}
17. y = [0; 2] e^{−4t} + [6; 6] te^{−4t}
18. y = [4; 8; −6] e^{t} + [2; −3; −1] e^{−2t} + [−1; 1; 0] te^{−2t}
19. y = [3; 3; 6] e^{2t} − [9; 5; 6] + [2; 2; 0] t
20. y = −[2; 0; 2] e^{−3t} + [−4; 9; 1] e^{t} − [0; 4; 4] te^{t}
21. y = [−2; 2; 2] e^{4t} + [0; −1; 1] e^{2t} + [3; −3; 3] te^{2t}
22. y = −[1; 1; 0] e^{−4t} + [−3; 2; −3] e^{8t} + [8; 0; −8] te^{8t}
23. y = [3; 6; 3] e^{4t} − [3; 4; 1] + [8; 4; 4] t
24. y = c1 [0; 1; 1] e^{6t} + c2 ([−1; 1; 0] (e^{6t}/4) + [0; 1; 1] te^{6t}) + c3 ([1; 1; 0] (e^{6t}/8) + [−1; 1; 0] (te^{6t}/4) + [0; 1; 1] (t^{2}e^{6t}/2))
25. y = c1 [−1; 1; 1] e^{3t} + c2 ([1; 0; 0] (e^{3t}/2) + [−1; 1; 1] te^{3t}) + c3 ([1; 2; 0] (e^{3t}/36) + [1; 0; 0] (te^{3t}/2) + [−1; 1; 1] (t^{2}e^{3t}/2))
26. y = c1 [0; −1; 1] e^{−2t} + c2 ([−1; 1; 0] e^{−2t} + [0; −1; 1] te^{−2t}) + c3 ([3; −2; 0] (e^{−2t}/4) + [−1; 1; 0] te^{−2t} + [0; −1; 1] (t^{2}e^{−2t}/2))
27. y = c1 [0; 1; 1] e^{2t} + c2 ([1; 1; 0] (e^{2t}/2) + [0; 1; 1] te^{2t}) + c3 ([−1; 1; 0] (e^{2t}/8) + [1; 1; 0] (te^{2t}/2) + [0; 1; 1] (t^{2}e^{2t}/2))
28. y = c1 [−2; 1; 2] e^{−6t} + c2 (−[6; 1; 0] (e^{−6t}/6) + [−2; 1; 2] te^{−6t}) + c3 (−[12; 1; 0] (e^{−6t}/36) − [6; 1; 0] (te^{−6t}/6) + [−2; 1; 2] (t^{2}e^{−6t}/2))
29. y = c1 [−4; 0; 1] e^{−3t} + c2 [6; 1; 0] e^{−3t} + c3 ([1; 1; 0] e^{−3t} + [2; 1; 1] te^{−3t})
30. y = c1 [−1; 0; 1] e^{−3t} + c2 [0; 1; 0] e^{−3t} + c3 ([1; 1; 0] e^{−3t} + [−1; −1; 1] te^{−3t})
31. y = c1 [2; 0; 1] e^{−t} + c2 [−3; 2; 0] e^{−t} + c3 ([1; 0; 0] (e^{−t}/2) + [−1; 2; 1] te^{−t})
32. y = c1 [−1; 1; 0] e^{−2t} + c2 [0; 0; 1] e^{−2t} + c3 ([−1; 0; 0] e^{−2t} + [1; −1; 1] te^{−2t})
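The te^{λt} terms above come from generalized eigenvectors: u solves (A − λI)u = v, where v is an ordinary eigenvector. A minimal numpy sketch with a hypothetical defective matrix (not one of the exercise matrices):

    import numpy as np

    # Hypothetical defective matrix: double eigenvalue 2 with a one-dimensional eigenspace.
    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])
    lam = 2.0

    v = np.array([1.0, 0.0])                                     # eigenvector: (A - lam*I) v = 0
    u = np.linalg.lstsq(A - lam * np.eye(2), v, rcond=None)[0]   # generalized eigenvector

    # General solution: y = c1 v e^{lam t} + c2 (u e^{lam t} + v t e^{lam t})
    print(u, np.allclose((A - lam * np.eye(2)) @ u, v))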
Section 10.6 Answers
1. y = c1 e^{2t} [3 cos t + sin t; 5 cos t] + c2 e^{2t} [3 sin t − cos t; 5 sin t]
5. y = c1 [… ; −1; 2] e^{−2t} + c2 e^{4t} [… ; cos 2t + sin 2t; 2 cos 2t] + c3 e^{4t} [… ; sin 2t − cos 2t; 2 sin 2t]
7. y = c1 [1; 1; 1] e^{2t} + c2 e^{t} [−sin t; sin t; cos t] + c3 e^{t} [cos t; −cos t; sin t]
13. y = c1 [−1; 1; 1] e^{−2t} + c2 e^{t} [sin t; −cos t; cos t] + c3 e^{t} [−cos t; −sin t; sin t]
15. y = c1 [1; 2; 1] e^{3t} + c2 e^{6t} [−sin 3t; sin 3t; cos 3t] + c3 e^{6t} [cos 3t; −cos 3t; sin 3t]
16. y = c1 [… ; 1; 1] e^{t} + c2 e^{t} [… ; cos t − sin t; 2 cos t] + c3 e^{t} [… ; cos t + sin t; 2 sin t]
17. y = e^{t} [5 cos 3t + sin 3t; 2 cos 3t + 3 sin 3t]
18. y = e^{4t} [5 cos 6t + 5 sin 6t; cos 6t − 3 sin 6t]
19. y = e^{t} [17 cos 3t − sin 3t; 7 cos 3t + 3 sin 3t]
20. y = e^{t/2} [cos(t/2) + sin(t/2); −cos(t/2) + 2 sin(t/2)]
21. y = [1; −1; 2] e^{t} + e^{4t} [3 cos t + sin t; cos t − 3 sin t; 4 cos t − 2 sin t]
22. y = [4; 4; 2] e^{8t} + e^{2t} [4 cos 2t + 8 sin 2t; −6 sin 2t + 2 cos 2t; 3 cos 2t + sin 2t]
23. y = [0; 3; 3] e^{−4t} + e^{4t} [15 cos 6t + 10 sin 6t; 14 cos 6t − 8 sin 6t; 7 cos 6t − 4 sin 6t]
24. y = [6; −3; 3] e^{8t} + [10 cos 4t − 4 sin 4t; 17 cos 4t − sin 4t; 3 cos 4t − 7 sin 4t]
29. U = (1/√2)[−1; 1], V = (1/√2)[1; 1]
30. U ≈ [.5257; .8507], V ≈ [−.8507; .5257]
31. U ≈ [.8507; .5257], V ≈ [−.5257; .8507]
32. U ≈ [−.9732; .2298], V ≈ [.2298; .9732]
33. U ≈ [.5257; .8507], V ≈ [−.8507; .5257]
34. U ≈ [−.5257; .8507], V ≈ [.8507; .5257]
35. U ≈ [−.8817; .4719], V ≈ [.4719; .8817]
36. U ≈ [.8817; .4719], V ≈ [−.4719; .8817]
37. U = [0; 1], V = [−1; 0]
38. U = [0; 1], V = [1; 0]
39. U = (1/√2)[1; 1], V = (1/√2)[−1; 1]
40. U ≈ [.5257; .8507], V ≈ [−.8507; .5257]
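For complex eigenvalues λ = α ± iβ with eigenvector a + ib, real solutions of the kind used above are built from e^{αt}(a cos βt − b sin βt) and e^{αt}(a sin βt + b cos βt). A minimal numpy sketch that extracts these pieces for a hypothetical matrix (not one of the exercise matrices):

    import numpy as np

    # Hypothetical matrix with complex eigenvalues (its eigenvalues are +/- i).
    A = np.array([[2.0, -5.0],
                  [1.0, -2.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    lam, w = eigvals[0], eigvecs[:, 0]      # lam = alpha + i*beta, w = a + i*b
    alpha, beta = lam.real, lam.imag
    a, b = w.real, w.imag

    def y1(t):  # e^{alpha t} (a cos(beta t) - b sin(beta t))
        return np.exp(alpha * t) * (a * np.cos(beta * t) - b * np.sin(beta * t))

    def y2(t):  # e^{alpha t} (a sin(beta t) + b cos(beta t))
        return np.exp(alpha * t) * (a * np.sin(beta * t) + b * np.cos(beta * t))

    print(y1(0.0), y2(0.0))   # initial values of the two real solutions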
Index
A
Abel's theorem: 5.1: Homogeneous Linear Equations; 9.1: Introduction to Linear Higher Order Equations; 10.2: Linear Systems of Differential Equations; 10.3: Basic Theory of Homogeneous Linear Systems
augmented matrix: 11.3: Gaussian Elimination
autonomous differential equation: 4.4: Autonomous Second Order Equations
Autonomous Second Order Equations: 4.4: Autonomous Second Order Equations
B
Bernoulli Equations: 2.4: Transformation of Nonlinear Equations into Separable Equations
C
characteristic polynomial: 5.2: Constant Coefficient Homogeneous Equations; 9.2: Higher Order Constant Coefficient Homogeneous Equations
Chebyshev's equation: 7.3: Series Solutions Near an Ordinary Point I
coefficient matrix: 10.2: Linear Systems of Differential Equations
cofactor: 13.1: Basic Techniques
complementary equation: 2.1: Linear First Order Equations; 5.3: Nonhomogeneous Linear Equations
compound interest: 4.1: Growth and Decay
constant coefficient equation: 9.2: Higher Order Constant Coefficient Homogeneous Equations
constant matrix: 11.3: Gaussian Elimination
Critically Damped Motion: 6.2: Spring Problems II
D
damping: 4.4: Autonomous Second Order Equations
defective: 10.5: Constant Coefficient Homogeneous Systems II
determinant: 13.1: Basic Techniques
Determinants: 13.2: Properties of Determinants
direction field: 1.3: Direction Fields for First Order Equations
E
Elementary operations: 11.2: Elementary Operations
escape velocity: 4.3: Elementary Mechanics
Euler's Method: 3.1: Euler's Method
Exact Differentials: 2.5: Exact Equations
Exact Equations: 2.5: Exact Equations
F
forcing function: 5.1: Homogeneous Linear Equations
G
Gaussian Elimination: 11.2: Elementary Operations; 11.3: Gaussian Elimination
gravity: 4.3: Elementary Mechanics
H
Heun's method: 3.2: The Improved Euler Method and Related Methods
homogeneous: 5.1: Homogeneous Linear Equations
Homogeneous Differential Equations with Constant Coefficients: 5.2: Constant Coefficient Homogeneous Equations
I
identity matrix: 12.6: The Identity and Inverses
implicit function theorem: 2.2: Separable Equations
improved Euler method: 3.2: The Improved Euler Method and Related Methods
initial value problems: 8.3: Solution of Initial Value Problems
integrating factor: 2.6: Integrating Factors
Inverse Laplace Transform: 8.2: The Inverse Laplace Transform
K
Kronecker delta: 12.6: The Identity and Inverses
L
Laplace Expansion: 13.1: Basic Techniques
Laplace Transform: 8: Laplace Transforms; 8.1: Introduction to the Laplace Transform; 8.3: Solution of Initial Value Problems
Linear Independence: 14.3: Linear Independence
M
Malthusian model: 1.1: Applications Leading to Differential Equations
matrix: 12.1: Matrix Arithmetic
matrix inverse: 12.7: Finding the Inverse of a Matrix
Matrix Multiplication: 12.4: Properties of Matrix Multiplication
Mechanics: 4.3: Elementary Mechanics
method of undetermined coefficients: 5.4: The Method of Undetermined Coefficients I; 5.5: The Method of Undetermined Coefficients II; 9.3: Undetermined Coefficients for Higher Order Equations
midpoint method: 3.2: The Improved Euler Method and Related Methods
minor of a matrix: 13.1: Basic Techniques
minors of a matrix: 13.1: Basic Techniques
multiplicity: 9.2: Higher Order Constant Coefficient Homogeneous Equations
N
Newton's Law of Cooling: 4.2: Cooling and Mixing
Nonhomogeneous Linear Equations: 5.3: Nonhomogeneous Linear Equations
O
open interval of convergence: 7.2: Review of Power Series
open rectangle: 2.3: Existence and Uniqueness of Solutions of Nonlinear Equations
ordinary point: 7.3: Series Solutions Near an Ordinary Point I; 7.4: Series Solutions Near an Ordinary Point II
Overdamped Motion: 6.2: Spring Problems II
P
particular solution: 5.3: Nonhomogeneous Linear Equations
phase plane equivalent: 4.4: Autonomous Second Order Equations
Poincaré phase plane: 4.4: Autonomous Second Order Equations
polynomial operator: 9.2: Higher Order Constant Coefficient Homogeneous Equations
power series: 7.2: Review of Power Series
R
radius of convergence: 7.2: Review of Power Series
reduction of order: 5.6: Reduction of Order
Riccati equation: 2.4E: Transformation of Nonlinear Equations into Separable Equations (Exercises)
roundoff errors: 3.1: Euler's Method
Row Operations: 13.2: Properties of Determinants
S
Separable Differential Equations: 2.2: Separable Equations
separatrix: 4.4: Autonomous Second Order Equations
Simple Harmonic Motion: 6.1: Spring Problems I
skew lines: 11.1: Geometry
skew symmetric matrix: 12.5: The Transpose
span: 14.2: Spanning Sets
Spanning Sets: 14.2: Spanning Sets
square matrix: 12.1: Matrix Arithmetic
subsets: 14.2: Spanning Sets
superposition principle: 5.3: Nonhomogeneous Linear Equations
system of linear equations: 11.2: Elementary Operations
T
temperature decay constant: 4.2: Cooling and Mixing
terminal velocity: 4.3: Elementary Mechanics
transpose: 12.5: The Transpose
Triangular Matrix: 13.1: Basic Techniques
truncation errors: 3.1: Euler's Method
U
Underdamped Motion: 6.2: Spring Problems II
V
van der Pol equation: 4.4E: Autonomous Second Order Equations (Exercises)
Variation of Parameters: 5.7: Variation of Parameters; 9.4: Variation of Parameters for Higher Order Equations
Verhulst model: 1.1: Applications Leading to Differential Equations
W
Wronskian: 9.1: Introduction to Linear Higher Order Equations; 10.3: Basic Theory of Homogeneous Linear Systems