Differential Equations & Linear Algebra
with
Wolfram Mathematica

[Cover figure: a damped oscillation, y(t) [m] versus t [s], 0 ≤ t ≤ 100]
July 2022
Preface
We are pleased to present this first edition of the “Differential Equations and Linear Algebra
with Wolfram Mathematica” student guidebook. This book is very comprehensive, but we hope
its comprehensiveness does not keep it from being an enjoyable read.
.
This book is written primarily for students enrolled in the course “Engineering Mathematics III
(Differential Equations and Linear Algebra)” (ENG200) here at Nazarbayev University (NU).
This is a common compulsory course offered to all 2nd-year engineering students. There is a
Computational Lab session in this course where students are expected to practice what they
have learned in the theoretical sessions on computers with Wolfram Mathematica. Such a
design of the course is credited to Prof. V. Zarikas (now at the University of Thessaly,
Greece), who was the course leader when YW taught this course in Fall 2020 and Fall 2021.
We want to thank Prof. Zarikas for selflessly sharing his course materials, which we have
benefited from when developing this book.
.
Every topic has been summarized and supported by a sufficient number of solved problems.
The present book has been designed to equip young engineering students with as much
knowledge on all topics as is desirable from the point of view of the ENG 200 learning
outcomes. Efforts have been made to make Differential equations and Linear algebra, the
fundamental subjects in every engineering curriculum, more interesting and engaging with the
help of the Wolfram Mathematica language. In addition to the above-mentioned math skills, the
book helps readers learn a new programming language, Wolfram Mathematica. Just like learning any
new skill, learning Wolfram Mathematica takes time, effort, and dedication. Therefore, we
believe this journey will also benefit our readers to become self-disciplined learners.
.
BZ wishes to express her appreciation to Dr. YW, the instructor for the ENG200 course at NU.
She is grateful to him for all the knowledge gained in the ENG200 course and for awakening
her interest in learning the newly introduced Wolfram Mathematica tool/language. BZ hopes
that this book, developed under the supervision of YW, will help the reader learn the basics of
Wolfram Mathematica and use their acquired skills for further work/research. In addition, BZ
thanks Dr. Devendra Kapadia for developing the interactive course “Introduction to Linear
Algebra”, which teaches linear algebra using the Wolfram Language and was used in the
preparation of this book; she strongly encourages students to take a look at the other Wolfram U
Interactive Courses listed in the References section of the book.
.
AT would like to express her gratitude to Dr. YW for his invaluable advice, continuous support,
and patience both during the ENG 200 course and the book writing process. Without YW’s
encouragement and supervision, this book would not have been possible. AT also would like to
thank her ENG 200 coursemates and friends for a cherished time spent together solving
rigorous math problems and learning new skills in class and social settings.
.
YW would like to express his sincere gratitude to Prof. H. Tobita (University of Fukui, Japan),
who introduced Wolfram Mathematica to him in Fall 2002. Since then, YW has been in love
with this fantastic tool/language. Life would be different if he didn’t know about Wolfram
Mathematica, and this book would certainly not have been possible. YW would also like to
express his gratitude to students enrolled in the ENG200 course. YW has benefited from close
interactions with students since he started to teach this course in Fall 2020. The two coauthors
(BZ and AT) were also students enrolled in ENG200 in Fall 2021. This book would not have
been finished without those two brilliant and hardworking students/coauthors.
.
We acknowledge that this version of the book might have uncorrected mistakes,
spelling/grammatical errors, and ambiguities. We aim to eliminate them in a 2nd version
(to be released in July 2023). We would be grateful to readers, students, instructors, or anyone
who encounters this book for sending us their valuable feedback so that we may
make further improvements in future editions.
.
Table of Contents
◼ Preface
Week 0: Preliminary
Introduction to Wolfram Mathematica
The secret to getting ahead is getting started. --- Mark Twain
Table of Contents
1. Prerequisites
1.1. To Begin With
1.2. Basic Algebra and Calculus
1.3. Some of the Basic Operations
2. Help Options
2.1. Help Browser
2.2. Text-based Help
3. How to | Clear User Defined Symbols
3.1. ClearAll["Global`*"]
3.2. Quit[]
4. Create Plots
4.1. Defining a Function
4.2. Graph of a Function of One Variable
4.3. Multiple Functions on a Graph
4.4. Graph of a Function of Two Variables
4.5. Parametric Plots
5. DSolve
6. How to | Visualize the Direction Field
6.1. Stream Plots
6.2. Vector Plots
6.3. Contour Plots
7. More to Explore
7.1. Animation
7.2. Interactive Manipulation
7.3. Sound Effects
Commands list
◼ N
◼ Table
◼ D
◼ Integrate
◼ Solve
◼ Coefficient
◼ ClearAll
◼ Clear
◼ Quit
◼ Plot
◼ Plot3D
◼ ParametricPlot
◼ StreamPlot
◼ VectorPlot
◼ ContourPlot
◼ DSolve
◼ Manipulate
◼ Sound
Prerequisites
To Begin With
There are a few things to keep in mind when using Mathematica.
☑ When using a PC, in order to execute a command you must hit Shift-Enter.
☑ Mathematica is Case-SenSitive (AA is not the same as aA), so be careful about what you
type.
☑ All built-in Mathematica functions are spelled out and capitalized, such as Table,
ListPlot, IntegerPart, Plot, Sin, Cos, etc.
☑ The parameters inside a function are always enclosed with square brackets, [ ].
For example, the logarithm function log is spelled out, capitalized, and written with square brackets: Log[10].
☑ You can use a semicolon (;) at the end of a line if you want to perform the action, but
don’t want to see the output.
☑ Don’t forget about the copy and paste commands. This will be useful if you have to type
similar commands and don’t want to have to retype the entire command.
☑ In Mathematica, it is important to distinguish between parentheses (), brackets [], and
braces {}:
◼ Parentheses (): Used to group mathematical expressions, such as (3 + 4) / (5 + 7).
In[ ]:= (3 + 4) / (5 + 7)
Out[ ]= 7/12

In[ ]:= N[Log[10]]
Out[ ]= 2.30259

In[ ]:= Table[i, {i, 1, 20}]
Out[ ]= {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
In[ ]:= N[Pi]
Out[ ]= 3.14159

In[ ]:= N[E]
Out[ ]= 2.71828

In[ ]:= D[x^2 + 3 x, x]
Out[ ]= 3 + 2 x
☑ To take the integral of a function, use Integrate and specify the variable of integration.
For instance, find the integral of x² + 3x:
In[ ]:= Integrate[x^2 + 3 x, x]
Out[ ]= (3 x^2)/2 + x^3/3
☑ To solve for the roots of a x² + b x + c = 0 symbolically, use Solve[a x^2 + b x + c == 0, x].
☑ Notice the double equals sign (==). (Mathematica is searching for when the expression is
True.)
In[ ]:= Solve[a x^2 + b x + c == 0, x]
Out[ ]=
{{x → (-b - Sqrt[b^2 - 4 a c])/(2 a)}, {x → (-b + Sqrt[b^2 - 4 a c])/(2 a)}}
Mathematica also provides many elementary mathematical functions, including:
◼ Sqrt[x]
◼ Exp[x]
◼ Log[x]
◼ Log[b, x] (logarithm with base b)
◼ Sin[x]
◼ Cos[x]
◼ Tan[x]
◼ ArcSin[x]
◼ ArcCos[x]
◼ ArcTan[x]
◼ Sinh[x]
◼ n! (factorial)
◼ Abs[x] (absolute value)
◼ Round[x] (nearest integer)
◼ Floor[x] (greatest integer not exceeding x)
◼ Mod[n, m]
◼ Random[ ]
◼ Max[x, y, …]
◼ Min[x ,y, …]
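A quick sketch of a few of these built-ins (all are standard Wolfram Language functions; the specific arguments are our own illustration):

```mathematica
In[ ]:= {Sqrt[16], Exp[0], Log[E], 5!, Abs[-3], Round[2.6], Floor[2.9], Mod[10, 3], Max[1, 7, 4]}
Out[ ]= {4, 1, 1, 120, 3, 3, 2, 1, 7}
```

Note that exact inputs return exact results; wrap an expression in N to get a decimal approximation.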
Help Options
Help Browser
.
To access the help browser, go to the Help menu and choose Wolfram Documentation. If you
want to know about a particular function in Mathematica, select it and then go to Find
Selected Function or simply hit the F1 key on your keyboard.
.
There are a lot of fun examples on the Wolfram Demonstrations Project (URL). You may
also share your work with the world. Getting started is simple.
.
Text-based Help
.
The Question Mark function ? allows you to get basic information about a particular
Mathematica function.
In[ ]:= ? /.
Out[ ]=
Symbol
ReplaceAll[rules] represents an operator form of ReplaceAll that can be applied to an expression.
For example, suppose we want to find out how to use the derivative function D; the question
mark function ? yields:
In[ ]:= ? D
Out[ ]=
Symbol
D[f, {x, n}, {y, m}, …] gives the multiple partial derivative ∂^m/∂y^m ⋯ ∂^n/∂x^n f.
D[f, {{x1, x2, …}}] for a scalar f gives the vector derivative (∂f/∂x1, ∂f/∂x2, …).
The double question mark ?? gives the same information as ? but also gives information
about attributes and options.
In[ ]:= ?? D
Out[ ]=
Symbol
D[f, {x, n}, {y, m}, …] gives the multiple partial derivative ∂^m/∂y^m ⋯ ∂^n/∂x^n f.
D[f, {{x1, x2, …}}] for a scalar f gives the vector derivative (∂f/∂x1, ∂f/∂x2, …).
Documentation: Web »
Options: NonConstants → {}
Attributes: {Protected, ReadProtected}
Full Name: System`D
If you are trying to recall a function that has the word Solve in it, you can use the asterisk (*)
as a wildcard in conjunction with the word Solve, i.e., ?*Solve*, as shown below:
In[ ]:= ?*Solve*
Out[ ]=
System`
AsymptoticDSolveValue, AsymptoticRSolveValue, AsymptoticSolve, DiscreteLyapunovSolve,
DiscreteRiccatiSolve, DSolve, DSolveChangeVariables, DSolveValue, FrobeniusSolve,
KnapsackSolve, LinearSolve, LinearSolveFunction, LyapunovSolve, MainSolve, NDSolve,
NDSolveValue, NSolve, NSolveValues, ParametricNDSolve, ParametricNDSolveValue,
RiccatiSolve, RSolve, RSolveValue, Solve, SolveAlways, SolveDelayed, SolveValues
How to | Clear User Defined Symbols

ClearAll["Global`*"]
When you set a value to a symbol, that value will be used for the symbol for the entire
Wolfram System session. Since symbols no longer in use can introduce unexpected errors
when used in new computations, clearing your definitions is very desirable.
.
To clear all definitions of quantities you’ve introduced in a Mathematica session so far, type:
ClearAll[“Global`*”].
.
In[ ]:= x = 5; y = 7; x + y
Out[ ]=
12
.
Read this page (How to | Clear My Definitions | URL) for more details.
Quit[]
.
To clear all definitions or to reclaim resources used by the kernel, you may want to restart it.
There are at least two options.
.
◼ Option 1: Quit the kernel by choosing Evaluation ▶ Quit Kernel ▶ “kernel name”,
where “kernel name” is typically “Local”.
◼ Option 2: Quit the kernel by evaluating Quit. Quit[] (URL) terminates a Wolfram
Language kernel session. Quit[] quits only the Wolfram Language kernel, not the front
end. To quit a notebook front end, choose the Quit menu item. All kernel definitions
are lost when the kernel session terminates.
In[ ]:= Quit[]
Create Plots
Defining a Function
.
There are many built-in functions in the Wolfram Language, and some of them were introduced in
previous sections. This section focuses on learning how to define our own functions in
Mathematica.
.
☑ Syntax for defining a function that takes any single argument is f [ x _ ] := … (definition
of a function).
For example, the command for defining a function f (x) = x2 is
In[ ]:= f[x_] := x^2
.
Notice the underscore “_” to the right of the variable x on the left side of the definition. If
the underscore character is not used, then the function is only defined for that particular
symbol as its argument.
In[ ]:= Clear[f]
f[x] = x^3 / 2;
In[ ]:= f[5]
Out[ ]=
f[5]
.
The use of the “:=” symbol in the definition of a function, i.e., delayed assignment
(SetDelayed), is most often the correct choice. The choice of direct (Set)
assignment “=” can lead to undesirable results.
In[ ]:= a = 3;
setDelayed[x_] := x + y + a^2;
set[x_] = x + y + a^2;
In[ ]:= setDelayed[x]
Out[ ]= 9 + x + y
In[ ]:= set[x]
Out[ ]= 9 + x + y
In[ ]:= a = 4;
setDelayed[x]
Out[ ]= 16 + x + y
In[ ]:= set[x]
Out[ ]= 9 + x + y
In[ ]:= q[y_] := y - 1/2 + C1 * Exp[-2 y];
D[q[y], y]
Out[ ]= 1 - 2 C1 E^(-2 y)
In[ ]:= D[y - 1/2 + C1 * Exp[-2 y], {y, 2}]
Out[ ]= 4 C1 E^(-2 y)
In[ ]:= ? q
Out[ ]=
Symbol
Global`q
Definitions: q[y_] := y - 1/2 + C1 Exp[-2 y]
2
☑ The name of a function, e.g. f, is just a symbol to Mathematica. Thus, do not begin a
function name with a capital letter, to avoid confusion with built-in Mathematica functions.
Also, the symbol must not have been previously used to define another object
(variable, table, etc.).
.
☑ Functions in Mathematica can have more than one argument, so we can define functions of
several variables.
In[ ]:= product[x_, y_] := x * y;
7
.
☑ If you later give a new definition to the function, the latest definition is the one that
applies, while the previous one is discarded.
In[ ]:= product[x_, y_] := 1 + x * y
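As a minimal sketch of this behavior (the call product[2, 3] is our own illustration, not from the original notebook):

```mathematica
In[ ]:= product[x_, y_] := x * y;
product[2, 3]
Out[ ]= 6

In[ ]:= product[x_, y_] := 1 + x * y;  (* the new definition replaces the old one *)
product[2, 3]
Out[ ]= 7
```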
[Plot output: graph of a function of one variable, -6 ≤ x ≤ 6, -1 ≤ y ≤ 1]
To include two functions on the same graph, we simply write the Plot command with the two
functions in a list, separated by commas.
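For instance, a minimal sketch with two trigonometric functions (our own example, not the notebook's):

```mathematica
In[ ]:= Plot[{Sin[x], Cos[x]}, {x, -2 Pi, 2 Pi},
 PlotLegends -> {"Sin[x]", "Cos[x]"}, Frame -> True]
```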
.
[Plot output: two functions on the same axes, -15 ≤ x ≤ 15]
In[ ]:= Plot[{x * Sin[1 / x], x, - x}, {x, - 0.1, 0.1}, PlotRange → 0.1,
 Filling → Axis, Frame → True, AspectRatio → 1 / GoldenRatio]
Out[ ]=
[Plot output: x Sin[1/x] between the envelopes ±x, with filling, -0.1 ≤ x ≤ 0.1]
In[ ]:= Plot3D[x ^ 2 - y ^ 2, {x, - 1, 1}, {y, - 1, 1}, BoxRatios → {1, 1, 1}, ImageSize → {270, 270}]
Out[ ]=
Parametric Plots
.
DSolve Command
.
The DSolve command is used to solve a differential equation, a list of differential equations,
or a partial differential equation.
.
Symbol
DSolve[eqn, u, x] solves a differential equation for the function u, with independent variable x.
DSolve[eqn, u, {x, xmin , xmax }] solves a differential equation for x between xmin and xmax .
DSolve[eqn, u, {x1 , x2 , …} ∈ Ω] solves the partial differential equation eqn over the region Ω.
For example, find the general solution to the given ODE: y′ = -2 x y.
In[ ]:= ClearAll["Global`*"]
DSolve[y'[x] == -2 x y[x], y[x], x]
Out[ ]= {{y[x] → c1 E^(-x^2)}}
Find the particular solution to the same ODE with initial condition: y(0) = 1.8.
In[ ]:= solution = DSolve[{y'[x] == -2 x y[x], y[0] == 1.8}, y[x], x]
Out[ ]=
{{y[x] → 1.8 E^(-x^2)}}
.
[Plot output: the particular solution y(x) = 1.8 e^(-x²), -3 ≤ x ≤ 3]
Stream Plots
In[ ]:= ? StreamPlot
Out[ ]=
Symbol
StreamPlot[{vx, vy}, {x, xmin, xmax}, {y, ymin, ymax}] generates a stream plot of the vector field {vx, vy}.
StreamPlot[{{vx, vy}, {wx, wy}, …}, …] generates plots of several vector fields.
StreamPlot[…, {x, y} ∈ reg] takes the variables {x, y} to be in the geometric region reg.
In[ ]:= f1[x_, y_] := -(2 x * y)/(1 + x^2);
plot1 = StreamPlot[{1, f1[x, y]}, {x, - 10, 10},
 {y, - 10, 10}, Frame → True, Axes → True, AspectRatio → 1 / GoldenRatio]
Out[ ]=
[StreamPlot output: the direction field over -10 ≤ x, y ≤ 10]
Vector Plots
In[ ]:= ? VectorPlot
Out[ ]=
Symbol
VectorPlot[{vx, vy}, {x, xmin, xmax}, {y, ymin, ymax}] generates a vector plot of the vector field {vx, vy}.
VectorPlot[{{vx, vy}, {wx, wy}, …}, …] plots several vector fields.
VectorPlot[…, {x, y} ∈ reg] takes the variables {x, y} to be in the geometric region reg.
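As a sketch, the same direction field from the Stream Plots subsection can be drawn with arrows instead of streamlines (f1 is redefined here so the example is self-contained):

```mathematica
In[ ]:= f1[x_, y_] := -(2 x * y)/(1 + x^2);  (* same slope function as above *)
VectorPlot[{1, f1[x, y]}, {x, -10, 10}, {y, -10, 10},
 Frame -> True, Axes -> True, AspectRatio -> 1/GoldenRatio]
```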
[VectorPlot outputs: the direction field over -10 ≤ x, y ≤ 10 and over -4 ≤ x, y ≤ 4]
Contour Plots
In[ ]:= ? ContourPlot
Out[ ]=
Symbol
ContourPlot[f , {x, xmin , xmax }, {y, ymin , ymax }] generates a contour plot of f as a function of x and y.
ContourPlot[f == g, {x, xmin , xmax }, {y, ymin , ymax }] plots contour lines for which f = g.
ContourPlot[{f1 == g1 , f2 == g2 , …}, {x, xmin , xmax }, {y, ymin , ymax }] plots several contour lines.
ContourPlot[…, {x, y} ∈ reg ] takes the variables {x, y} to be in the geometric region reg.
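A minimal sketch (our own example, not the notebook's): plotting level curves of f(x, y) = x² + y², whose contours are concentric circles.

```mathematica
In[ ]:= ContourPlot[x^2 + y^2, {x, -2, 2}, {y, -2, 2},
 Contours -> 10, ContourLabels -> True]
```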
In[ ]:= f3[x_, y_] := -Cos[x + y]/(3 y^2 + 2 y + Cos[x + y]);
p3 = StreamPlot[{1, f3[x, y]}, {x, - 5, 5},
 {y, - 5, 5}, Frame → True, Axes → True, AspectRatio → 1 / GoldenRatio]
Out[ ]=
Out[ ]=
[StreamPlot output: the direction field over -5 ≤ x, y ≤ 5]
More to Explore
Animation
Symbol
Animate[expr, {u, umin, umax}] generates an animation of expr in which u varies continuously from umin to umax.
In[ ]:= Animate[Plot3D[Sin[Sqrt[x ^ 2 + y ^ 2] + 2 * Pi * t], {x, - 8 * Pi, 8 * Pi}, {y, - 8 * Pi, 8 * Pi},
 PlotRange → 10, PlotPoints → 50, AspectRatio → 1,
 Boxed → False, Mesh → None, Axes → False], {t, 0, 2}, ControlPlacement → Top]
Out[ ]=
Interactive Manipulation
Symbol
Manipulate[expr, {u, umin , umax , du}] allows the value of u to vary between umin and umax in steps du.
Manipulate[expr, {{u, uinit }, umin , umax , …}] takes the initial value of u to be uinit .
Manipulate[expr, {{u, uinit , ulbl }, …}] labels the controls for u with ulbl .
Manipulate[expr, {u, …}, {v, …}, …] provides controls to manipulate each of the u, v, ….
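The sine-wave demonstration shown below can be sketched as follows (the slider ranges here are our own assumptions; the labels match the demonstration):

```mathematica
In[ ]:= Manipulate[
 Plot[A*Sin[ω*t + ϕ], {t, 0, 12}, PlotRange -> {-1.5, 1.5},
  Frame -> True, FrameLabel -> {"t", "A*sin(ωt+ϕ)"}, PlotLabel -> "A Sine Wave"],
 {{A, 1, "Amplitude, A"}, 0, 2},
 {{ω, 1, "Angular frequency, ω"}, 0, 5},
 {{ϕ, 0, "Phase, ϕ"}, 0, 2 Pi}]
```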
[Manipulate output: sliders for Amplitude A = 1, Angular frequency ω = 1, and Phase ϕ = 0, with the plot “A Sine Wave” of A*sin(ωt+ϕ) for 0 ≤ t ≤ 12]
Sound Effects
In[ ]:= ? Sound
Out[ ]=
Symbol
Sound[primitives, {tmin, tmax}] specifies that the sound should extend from time tmin to time tmax.
Symbol
SoundNote[pitch, {tmin , tmax }] takes the note to occupy the time interval tmin to tmax .
SoundNote[pitch, tspec, "style", opts] uses the specified rendering options for the note.
In[ ]:= OdeToJoy = {{"B", "B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "B", "A", "A", "B",
"B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "A", "G", "G", "A", "A",
"B", "G", "A", "B", "C5", "B", "G", "A", "B", "C5", "B", "A", "G", "A", "D", "B",
"B", "B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "A", "G", "G"},
{0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1, 0.5, 0.5,
0.5, 0.5, 0.5, 0.25, 0.25, 0.5, 0.5, 0.5, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1}};
.
Piano sound:
In[ ]:= Sound[SoundNote[##, "Piano"] & @@@ Transpose[OdeToJoy]] // EmitSound
.
Violin sound:
In[ ]:= Sound[SoundNote[##, "Violin"] & @@@ Transpose[OdeToJoy]] // EmitSound
Week 1: First-Order ODEs
How to Solve First-Order ODEs Step-by-step?
Table of Contents
1. Separable equations
1.1. Example 1.1: Separable ODE
1.2. Example 1.2: Initial Value Problem (IVP)
2. Exact ODEs & Integrating factors
2.1. Example 1.3: An Exact ODE
2.2. Non-Exact ODEs and Integrating Factors
2.3. Example 1.4: A Non-exact ODE with IVP
3. First-Order Linear ODEs
3.1. Example 1.5: First-Order ODE, IVP
4. Bernoulli Equation
4.1. Example 1.6: Logistic Equation
5. Summary
Commands list
◼ Integrate[f, x]
◼ ClearAll[symb1, symb2, ...]
◼ Simplify[expr]
◼ FullSimplify[expr]
◼ Solve[expr, vars]
Separable Equations
Many practically useful ODEs can be reduced to the form:
g(y) y' = f (x)
Then, by integrating both sides with respect to x, we obtain:
∫ g(y) y′ dx = ∫ f(x) dx + C
According to calculus, y′ dx = dy, so the variable of integration on the left side becomes y:
∫ g(y) dy = ∫ f(x) dx + C
When f and g are continuous functions, the integrals mentioned above exist, and by evaluating
them, we obtain a general solution to the given ODE.
Example 1.1: Separable ODE
Solve the ODE: y′ = (x + 1) e^(-x) y²
Symbol
Integrate[f, {x, xmin, xmax}, {y, ymin, ymax}, …] gives the multiple integral ∫ dx from xmin to xmax, ∫ dy from ymin to ymax, … of f.
In[ ]:= Integrate[(x + 1) * Exp[-x], x]
Out[ ]= -E^(-x) (2 + x)
◆ Step 4. By integration, -1/y = -e^(-x) (2 + x) + C.
1
In[ ]:= FullSimplify[Solve[-1/y == -Exp[-x] * (2 + x) + C, y]]
Out[ ]=
{{y → E^x/(2 - C E^x + x)}}
◆ Step 5. Verify the solution by differentiating it and comparing with the RHS of the ODE:
In[ ]:= FullSimplify[D[E^x/(2 - C E^x + x), x]]
Out[ ]= E^x (1 + x)/(2 - C E^x + x)^2
◆ This equals (x + 1) e^(-x) y² for y = E^x/(2 - C E^x + x), so the general solution is correct.
◆ DSolve returns the same general solution:
Out[ ]= {{y[x] → E^x/(2 + x - E^x c1)}}

Example 1.2: Initial Value Problem (IVP)
Solve the IVP: 2 x y + y′ = 0, y(0) = 1.8.
◆ Separate the variables and integrate both sides:
In[ ]:= Integrate[1/y, y]
Out[ ]= Log[y]
In[ ]:= Integrate[-2 x, x]
Out[ ]= -x^2
Symbol
Solve[expr, vars] attempts to solve the system expr of equations or inequalities for the variables vars.
Solve[expr, vars, dom] solves over the domain dom. Common choices of dom are Reals, Integers, and Complexes.
◆ We solve the expression over the domain of Real numbers, because the natural loga-
rithm of y exists only when y > 0:
In[ ]:= FullSimplify[Solve[Log[y] == -x^2 + c, y, Reals]]
Out[ ]=
{{y → E^(c - x^2)}}
◆ Applying the initial condition y(0) = 1.8 gives E^c = 1.8, so the particular solution is
y(x) = 1.8 E^(-x^2)
Out[ ]= 0.
◆ Substituting the solution into the LHS of the ODE returns 0., so the solution is verified.
Exact ODEs & Integrating Factors

An ODE of the form

M(x, y) dx + N(x, y) dy = 0

is an exact differential equation if it can be written as the differential of some function u(x, y):

(∂u/∂x) dx + (∂u/∂y) dy = du

Then du = 0, which implies ∂u/∂x = M and ∂u/∂y = N.

Since the mixed partial derivatives are equal,

∂M/∂y = ∂²u/∂y ∂x,  ∂N/∂x = ∂²u/∂x ∂y,

the condition for the exactness of the ODE is that the following partial
derivatives are equal:

∂M/∂y = ∂N/∂x
The function u(x, y) can be found in the following systematic way: EITHER by integrating M
with respect to x, where k(y) is the “constant” of integration,

u = ∫ M dx + k(y)

OR by integrating N with respect to y, where l(x) is the “constant” of integration:

u = ∫ N dy + l(x)
Example 1.3: An Exact ODE
Solve the ODE: cos(x + y) dx + (3y² + 2y + cos(x + y)) dy = 0.
◆ Step 1. Test for exactness. By looking at the equation, we see that M = cos (x + y) and
N = 3 y2 + 2 y + cos (x + y). But instead of M & N , we use P & Q, because the capital
letter N is protected by Mathematica.
ClearAll["Global`*"]
P[x_, y_] := Cos[x + y];
Q[x_, y_] := 3 y^2 + 2 y + Cos[x + y];
◆ NOTE: The variable cannot be named “N” because the Wolfram language has a built-in
symbol described below.
In[ ]:= ? N
Out[ ]=
Symbol
N[expr] gives the numerical value of expr.
◆ Now test for exactness:
In[ ]:= D[P[x, y], y] == D[Q[x, y], x]
Out[ ]= True
◆ Step 2. Integrate P with respect to x and determine k(y) from ∂u/∂y = Q:
In[ ]:= u = Integrate[P[x, y], x] + k[y];
kEqn = FullSimplify[D[u, y] == Q[x, y]]
Out[ ]= y (2 + 3 y) == k′[y]
In[ ]:= KSoln = DSolve[kEqn, k[y], y]
Out[ ]= {{k[y] → y^2 + y^3 + c1}}
In[ ]:= u = u /. KSoln〚1〛
Out[ ]=
y^2 + y^3 + c1 + Sin[x + y]
◆ So, the general solution to the ODE is: u(x, y) = sin(x + y) + y² + y³ + c₁ = constant, i.e.,
u(x, y) = sin(x + y) + y² + y³ = c
◆ Check: the slope recovered from u, y′ = -Cos[x + y]/(2 y + 3 y^2 + Cos[x + y]), matches the given ODE.
Non-Exact ODEs and Integrating Factors

Consider a non-exact ODE

P(x, y) dx + Q(x, y) dy = 0.

Multiplying it by an integrating factor F gives an exact equation:

F P dx + F Q dy = 0

Thus, the condition for exactness when the integrating factor is present is:

∂(F P)/∂y = ∂(F Q)/∂x

By the product rule, with subscripts denoting partial derivatives, this gives

F_y P + F P_y = F_x Q + F Q_x

Because the integrating factor depends on only one variable (either x or y), this simplifies easily.

Let’s assume that the integrating factor depends on x only. Then F_y = 0, and denoting the
derivative of F as F′ = dF/dx, this leads to

F′ Q = F (P_y − Q_x)

Simplifying, we get the formula for the integrating factor F(x):

F(x) = exp(∫ R(x) dx), where R(x) = (1/Q) (∂P/∂y − ∂Q/∂x)

After similar mathematical manipulations, the formula for the integrating factor F*(y) is found.
F*(y) = exp(∫ R*(y) dy), where R*(y) = (1/P) (∂Q/∂x − ∂P/∂y)
Example 1.4: A Non-exact ODE with IVP
Solve the ODE: (e^(x+y) + y e^y) dx + (x e^y − 1) dy = 0.
◆ Step 1. Test for exactness. By looking at the equation, we see that P = e^(x+y) + y e^y and
Q = x e^y − 1.
In[ ]:= ClearAll["Global`*"]
P[x_, y_] := Exp[x + y] + y * Exp[y];
Q[x_, y_] := x * Exp[y] - 1;
FullSimplify[D[P[x, y], y] == D[Q[x, y], x]]
Out[ ]=
E^y (E^x + y) == 0
◆ The result is not True, so the ODE is not exact.
◆ Computing R(x) = (1/Q)(∂P/∂y − ∂Q/∂x), we see that R contains both x and y. Therefore, the first assumption (that F depends on x only) is wrong.
◆ Now, let’s assume that F depends on y.
In[ ]:= Ry = FullSimplify[1/P[x, y] * (D[Q[x, y], x] - D[P[x, y], y])]
Out[ ]= -1
◆ The second assumption is correct, and the integrating factor depends only on y, F(y).
In[ ]:= Fy = FullSimplify[Exp[Integrate[Ry, y]]]
Out[ ]=
E^(-y)
◆ Let’s redefine P[x,y] and Q[x,y] after multiplying both sides of the given ODE by the
integrating factor e^(-y):
In[ ]:= ClearAll[P, Q, x, y];
P[x_, y_] := (Exp[x + y] + y * Exp[y]) * Exp[- y];
Q[x_, y_] := (x * Exp[y] - 1) * Exp[- y];
In[ ]:= FullSimplify[D[P[x, y], y] == D[Q[x, y], x]]
Out[ ]= True
◆ Indeed, it is exact.
◆ Step 3. General Solution. As shown before, the general solution to the ODE can be
found by the following formula, where k(y) is the constant of integration.
u = ∫ P dx + k(y)

In[ ]:= u = Integrate[P[x, y], x] + k[y]
Out[ ]= E^x + x y + k[y]
In[ ]:= kEqn = FullSimplify[D[u, y] == Q[x, y]]
Out[ ]= E^(-y) + k′[y] == 0
In[ ]:= KSoln = DSolve[kEqn, k[y], y]
Out[ ]= {{k[y] → E^(-y) + c1}}
◆ Thus we have:
In[ ]:= u /. KSoln〚1〛
Out[ ]=
E^x + E^(-y) + x y + c1
◆ So the general solution to the ODE is:
u(x, y) = e^x + e^(-y) + x y = c
◆ Verify by differentiating u:
In[ ]:= D[u /. KSoln〚1〛, x]
Out[ ]= E^x + y
In[ ]:= D[u /. KSoln〚1〛, y]
Out[ ]= -E^(-y) + x
◆ It can be seen that D[u, x] dx + D[u, y] dy = 0 recovers the given ODE. Since
u = const., we have du = D[u, x] dx + D[u, y] dy = 0 .
First-Order Linear ODEs

An ODE of the form

y′ + p(x) y = r(x)

is called a linear ODE. If r(x) equals 0, the equation becomes a homogeneous linear ODE:

y′ + p(x) y = 0
khx
It is easily noticed that this ODE is separable, so by separating variables, the solution to the
equation is

y(x) = c e^(−∫ p(x) dx)  (with c = ±e^(c*) when y ≷ 0),

together with the trivial solution y(x) = 0 for all x in the interval considered.
For the nonhomogeneous linear ODE y′ + p(x) y = r(x), the ODE has the pleasant property
that the integrating factor F depends only on x. Multiplying by F gives:

F y′ + p F y = r F
After some mathematical manipulations (refer to the textbook), the general solution to the
nonhomogeneous linear ODE is obtained:

y(x) = e^(−h) ∫ e^h r dx + c e^(−h), where h = ∫ p(x) dx
Example 1.5: First-Order ODE, IVP
Solve the initial value problem: y′ + y tan x = sin 2x, y(0) = 1. Here p = tan x and r = sin 2x = 2 sin x cos x.
In[ ]:= ClearAll["Global`*"]
p = Tan[x]; r = Sin[2 x];
◆ We can introduce p & r as functions as was done in the previous example, but we don’t
have to.
◆ Step 2. Find h using the formula above.
In[ ]:= h = Integrate[p, x]
Out[ ]=
- Log[Cos[x]]
◆ Step 3. Find the general solution to the given ODE. ysoln0 is the first term and ysoln1
is the second term in the general solution.
In[ ]:= ysoln0 = Exp[- h] * Integrate[Exp[h] * r, x]
Out[ ]=
-2 Cos[x]^2
In[ ]:= ysoln1 = c1 * Exp[-h]
Out[ ]= c1 Cos[x]
In[ ]:= ygen = ysoln0 + ysoln1
Out[ ]= c1 Cos[x] - 2 Cos[x]^2
◆ Step 4. Apply the initial condition y(0) = 1:
In[ ]:= cond = (ygen /. x → 0) == 1
Out[ ]= -2 + c1 == 1
Out[ ]= {{c1 → 3}}
◆ So the particular solution is:
y(x) = 3 Cos[x] - 2 Cos[x]^2
◆ Substituting the solution back into the ODE returns True, verifying it.
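As a cross-check, DSolve solves the linear IVP in one step (here assuming the example's p = tan x and r = sin 2x, consistent with h = -Log[Cos[x]] obtained above):

```mathematica
In[ ]:= DSolve[{y'[x] + y[x]*Tan[x] == Sin[2 x], y[0] == 1}, y[x], x]
```

Up to trigonometric simplification, the result is equivalent to y(x) = 3 cos x − 2 cos² x.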
Bernoulli Equation
Many ODEs of great importance in engineering are nonlinear but can be transformed into a
linear ODE. One of the most useful is the Bernoulli equation:

y′ = p(x) y + g(x) y^a

When a = 0, the Bernoulli equation is a linear 1st-order ODE, which we have solved in the
previous section.

When a = 1, the Bernoulli equation is a separable, linear, 1st-order, homogeneous ODE,
which is even simpler to solve.

For other values of a, the trick to solve the Bernoulli equation is to introduce the following
variable transformation:

u(x) = [y(x)]^(1-a)

Using the u(x) transformation variable, we get a linear ODE, which we know how to solve:

u′ − (1 − a) p u = (1 − a) g

Example 1.6: Logistic Equation
Solve the logistic equation:
y′ = A y − B y²
◆ Step 1. Find the u(x) transformation variable. From the equation, we see that a is equal
to 2 .
In[ ]:= u[y] = y[x]^(1 - a) /. a → 2
Out[ ]=
1/y[x]
◆ Step 2. We found earlier that u(x) = 1/y(x). Hence, using it, u′(x) becomes
u′(x) = B − A u(x).
◆ Step 3. Solve the linear ODE u′ + A u = B with the integrating-factor formula, where h = ∫ A dx:
In[ ]:= h = Integrate[A, x]
Out[ ]= A x
In[ ]:= ysoln0 = Exp[-h] * Integrate[Exp[h] * B, x]
Out[ ]= B/A
In[ ]:= ysoln1 = c1 * Exp[-h]
Out[ ]= c1 E^(-A x)
In[ ]:= usolnGen = ysoln0 + ysoln1
◆ Step 4. Since u(x) = 1/y(x), the general solution y(x) is obtained as follows:
In[ ]:= FullSimplify[Solve[(u[x] /. u[x] → usolnGen) == 1/y[x], y[x]]]
Out[ ]=
{{y[x] → A/(B + A c1 E^(-A x))}}
◆ Step 5. Also, directly from the ODE, it is seen that y(x) = 0 for all x is the solution to
the equation as well (a trivial solution).
◆ Step 6. Always verify the solution.
In[ ]:= ysoln = A/(B + A c1 Exp[-A x])
Out[ ]=
A/(B + A c1 E^(-A x))
True
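As an additional check (a sketch; DSolve may return an algebraically equivalent form of the logistic solution):

```mathematica
In[ ]:= FullSimplify[DSolve[y'[x] == A*y[x] - B*y[x]^2, y[x], x]]
```

The result can be rearranged into the form y(x) = A/(B + A c₁ e^(-A x)) obtained above.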
Summary
After completing this chapter, you should be able to
◼ solve several types of first-order ODEs step-by-step using Wolfram Mathematica.
◼ develop SOPs to solve first-order ODEs.
◼ develop the habit of always checking your solutions for quality assurance.
Week 2: Second-Order ODEs (Part 1)
How to solve 2nd-Order ODEs Step-by-step?
Table of Contents
1. Homogeneous Linear ODEs of Second Order
1.1. Example 2.1: Solve Second-Order ODE using DSolve
2. Homogeneous Linear ODEs with Constant Coefficients
2.1. Example 2.2: Case I with IVP
2.2. Example 2.3: Case II with IVP
2.3. Example 2.4: Case III with IVP
3. Modeling of Free Oscillations of Mass-Spring System
3.1. Example 2.5: Harmonic Oscillation of an Undamped Mass-Spring System
3.2. Example 2.6: The Three Cases of Damped Motion
4. Wolfram Demonstration Project: Unforced, Damped, Simple Harmonic Motion
5. Summary
Commands list
◼ DSolve[eqn, u, x]
◼ expr[[i]] or Part[expr, i]
◼ Log[z]
◼ D[f, x]
◼ Chop[expr]
Linear homogeneous second-order ODEs have a rich solution structure that relies on the
Superposition Principle.
.
The superposition principle or linearity principle means that we can obtain further solutions
from the given ones by adding them or multiplying them with any constants.
.
Note: This principle works only for Homogeneous AND Linear ODEs.
.
For a second-order homogeneous linear ODE, the Initial Value Problem consists of two
initial conditions:
y(x₀) = K₀,  y′(x₀) = K₁
.
The general solution has the form y = c₁ y₁ + c₂ y₂. Here y₁ and y₂ are not proportional, and c₁ and c₂ are arbitrary
constants. This pair of linearly independent solutions is called a basis of solutions.
Example 2.1: Solve Second-Order ODE using DSolve
Solve the ODE: (x² − x) y″ − x y′ + y = 0.
.
◆ Step 1. Use the DSolve function directly, including the equation for the function y[x],
with independent variable x
In[ ]:= ClearAll["Global`*"]
sol = DSolve[(x^2 - x) * y''[x] - x * y'[x] + y[x] == 0, y[x], x]
Out[ ]=
{{y[x] → c1 x + c2 (-1 - x Log[x])}}
◆ The solution for y[x] is written to the “ysol” variable. Here the double square brackets
[[ ]] are the short form of the Part function, which is used to get parts of lists.
◆ In short, the program gets the 1st part of the expression “sol” and writes it to the new
variable “ysol”.
In[ ]:= ysol = y[x] /. sol〚1〛
Out[ ]= c1 x + c2 (-1 - x Log[x])
Symbol
expr[["key"]] gives the value associated with the key "key" in an association expr.
expr[[Key[k ]]] gives the value associated with an arbitrary key k in the association expr.
◆ The new function called GeneralSol[x_] takes the solution to y[x] from the variable
“ysol”. It is done so in the next step, we can plot the graph of the obtained solution.
In[ ]:= GeneralSol[x_] := ysol; GeneralSol[x]
Out[ ]=
c1 x + c2 (-1 - x Log[x])
[Plot output: the general solution for sample values of c1 and c2, 0 ≤ x ≤ 100]
◆ From the solution, it is seen that the solution perfectly matches the form y = c1 y1 + c2 y2 ,
thus a basis of solutions is the following: y1 = x and y2 = - 1 - x ln(x).
◆ Note: In Wolfram Mathematica, the function Log[x] gives the natural logarithm of x.
In[ ]:= ? Log
Out[ ]=
Symbol
Log[z] gives the natural logarithm of z (logarithm to base e).
◆ Step 2. Check the obtained solution by comparing Left-Hand-Side (LHS) and Right-
Hand-Side (RHS).
In[ ]:= LHS = FullSimplify[
 (x^2 - x) * D[GeneralSol[x], {x, 2}] - x * D[GeneralSol[x], {x, 1}] + GeneralSol[x]]
Out[ ]= 0
In[ ]:= LHS == 0
Out[ ]= True
Homogeneous Linear ODEs with Constant Coefficients

These ODEs, of the form y″ + a y′ + b y = 0, have huge implications for mechanical and
electrical vibrations, as we will see later.

To solve homogeneous linear second-order ODEs with constant coefficients, we need to solve
the characteristic equation (or auxiliary equation)

λ² + a λ + b = 0
.
Because the characteristic equation is quadratic, it may have three different kinds of
roots, depending on the sign of the discriminant a² − 4b. These 3 cases are as follows:

◼ Case I: a² − 4b > 0 — two distinct real roots
◼ Case II: a² − 4b = 0 — a real double root
◼ Case III: a² − 4b < 0 — two complex conjugate roots
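The three cases can be sketched programmatically; caseOf is a hypothetical helper name of our own:

```mathematica
In[ ]:= caseOf[a_, b_] := Which[
   a^2 - 4 b > 0, "Case I: two distinct real roots",
   a^2 - 4 b == 0, "Case II: a real double root",
   True, "Case III: complex conjugate roots"];
{caseOf[1, -2], caseOf[1, 0.25], caseOf[0.4, 9.04]}
Out[ ]= {Case I: two distinct real roots, Case II: a real double root, Case III: complex conjugate roots}
```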
Example 2.2: Case I with IVP
Solve the IVP: y″ + y′ − 2y = 0, y(0) = 4, y′(0) = −5.
.
◆ Step 1. Solve the characteristic equation and determine what case the ODE refers to.
In[ ]:= ClearAll["Global`*"]
roots = Solve[λ^2 + λ - 2 == 0, λ] (** Note that we have to use ==, not = **)
Out[ ]=
{{λ → -2}, {λ → 1}}
◆ Step 2. Find the general solution. We got two distinct real roots, so we proceed with
Case I.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=
{- 2, 1}
In[ ]:= GeneralSol[x_] := c1 * Exp[λ1 * x] + c2 * Exp[λ2 * x]; GeneralSol[x]
Out[ ]= c1 E^(-2 x) + c2 E^x
◆ Step 3. Find the particular solution using the initial conditions: y(0) = 4, y' (0) = - 5
In[ ]:= cond1 = GeneralSol[0] == 4;
cond2 = (D[GeneralSol[x], x] /. x → 0) == -5;
csol = Solve[{cond1, cond2}, {c1, c2}]
Out[ ]= {{c1 → 3, c2 → 1}}
In[ ]:= ivpSoln[x_] = GeneralSol[x] /. csol〚1〛
Out[ ]= 3 E^(-2 x) + E^x
In[ ]:= D[ivpSoln[x], x] /. x → 0
Out[ ]= -5
◆ Check that the solution satisfies the given ODE y'' + y' - 2 y = 0.
In[ ]:= LHS = D[ivpSoln[x], {x, 2}] + D[ivpSoln[x], {x, 1}] - 2 * ivpSoln[x]
Out[ ]=
6 E^(-2 x) + 2 E^x - 2 (3 E^(-2 x) + E^x)
True
◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[y''[x] + y'[x] - 2 y[x] == 0, y[x], x]
Out[ ]= {{y[x] → E^(-2 x) c1 + E^x c2}}
In[ ]:= yp = DSolve[{y''[x] + y'[x] - 2 y[x] == 0, y[0] == 4, y'[0] == -5}, y[x], x]
Out[ ]=
{{y[x] → E^(-2 x) (3 + E^(3 x))}}
[Plot output: the particular solution y(x) = 3 e^(-2x) + e^x]
Example 2.3: Case II with IVP
Solve the IVP: y″ + y′ + 0.25 y = 0, y(0) = 3.0, y′(0) = −3.5.
◆ Step 1. Solve the characteristic equation and determine what case the ODE refers to.
In[ ]:= roots = Solve[λ^2 + λ + 0.25 == 0, λ] (** Note that we have to use ==, not = **)
Out[ ]=
{{λ → -0.5}, {λ → -0.5}}
◆ Step 2. Find the general solution. We got a real double root, so we proceed with Case II.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1, λ2}
Out[ ]= {-0.5, -0.5}
In[ ]:= GeneralSol[x_] := Exp[λ1 * x] * (c1 + c2 * x); GeneralSol[x]
Out[ ]= E^(-0.5 x) (c1 + c2 x)
◆ Step 3. Find the particular solution using the initial conditions: y(0) = 3.0, y' (0) = - 3.5
In[ ]:= cond1 = GeneralSol[0] == 3.0;
cond2 = (D[GeneralSol[x], x] /. x → 0) == -3.5;
csol = Solve[{cond1, cond2}, {c1, c2}];
ivpSoln[x_] = GeneralSol[x] /. csol〚1〛
Out[ ]= E^(-0.5 x) (3. - 2. x)
In[ ]:= {ivpSoln[0], D[ivpSoln[x], x] /. x → 0}
Out[ ]= {3., -3.5}
◆ Check that the solution satisfies the given ODE y'' + y' + 0.25 y = 0 .
In[ ]:= LHS = D[ivpSoln[x], {x, 2}] + D[ivpSoln[x], {x, 1}] + 0.25 * ivpSoln[x]
Out[ ]=
0.
True
◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[y''[x] + y'[x] + 0.25 y[x] == 0, y[x], x]
Out[ ]= {{y[x] → E^(-0.5 x) c1 + E^(-0.5 x) x c2}}
In[ ]:= yp = FullSimplify[
 DSolve[{y''[x] + y'[x] + 0.25 y[x] == 0, y[0] == 3.0, y'[0] == -3.5}, y[x], x]]
Out[ ]= {{y[x] → E^(-0.5 x) (3. - 2. x)}}
[Plot output: the particular solution y(x) = e^(-0.5x)(3 − 2x), 0 ≤ x ≤ 20]
Example 2.4: Case III with IVP
Solve the IVP: y″ + 0.4 y′ + 9.04 y = 0, y(0) = 0, y′(0) = 3.
◆ Step 1. Solve the characteristic equation and determine what case the ODE refers to.
◆ Step 2. Find the general solution. We got two complex roots, so we proceed with Case
III.
In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:
y1 = e^(-a x/2) cos(ω x) and y2 = e^(-a x/2) sin(ω x), where ω = √(b − a²/4)
3.
◆ Step 3. Find the particular solution using the initial conditions: y(0) = 0, y' (0) = 3
In[ ]:= cond1 = GeneralSol[0] == 0;
cond2 = ( D[GeneralSol[x], x] /. x → 0) == 3;
0.
3.
◆ Check that the solution satisfies the given ODE y'' + 0.4 y' + 9.04 y = 0 .
In[ ]:= LHS = FullSimplify[ D[ivpSoln[x], {x, 2}] + 0.4 * D[ivpSoln[x], {x, 1}] + 9.04 * ivpSoln[x]]
Out[ ]=
True
◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[y ''[x] + 0.4 y '[x] + 9.04 y[x] == 0, y[x], x]
Out[ ]=
In[ ]:= yp = DSolve[{y ''[x] + 0.4 y '[x] + 9.04 y[x] == 0, y[0] == 0, y '[0] == 3}, y[x], x]
Out[ ]=
In[ ]:= Plot[{E^(-0.2 x), ivpSoln[x], -E^(-0.2 x)}, {x, 0, 30}, Frame → True,
 PlotStyle → {{Black, Dashed}, {Red, Thick}, {Black, Dashed}},
 FrameLabel → {"x", "y(x)"}, BaseStyle → {FontWeight → "Bold", Black, FontSize → 12},
 GridLines → Automatic,
 PlotLegends → {"e^(-0.2 x)", "y(x) = e^(-0.2 x) sin(3 x)", "-e^(-0.2 x)"},
 AxesStyle → Directive[RGBColor[0, 0, 0], AbsoluteThickness[1]],
 Method → {"DefaultBoundaryStyle" → Automatic, "DefaultMeshStyle" → AbsolutePointSize[6],
  "ScalingFunctions" → None}, PlotRange → {- 1.0, 1.0}]
Out[ ]=
(Figure: y(x) = e^(-0.2 x) sin(3 x) plotted between the envelopes e^(-0.2 x) and -e^(-0.2 x) for 0 ≤ x ≤ 30.)
There are two possible scenarios for the mass-spring system motion.
.
3.3
where m is the mass of the object and k is the spring constant. This is a homogeneous linear
ODE with constant coefficients, whose general solution is obtained easily:
.
y(t) = A cos(ω0 t) + B sin(ω0 t),   ω0 = √(k/m)
.
An alternative representation that shows physical characteristics of amplitude and phase shift is
y(t) = C cos(ω0 t − δ),   C = √(A² + B²),   tan δ = B/A
.
2.1, 2.5
.
If a mass–spring system with an iron ball of weight W = 98 nt (about 22 lb) can be regarded
as undamped, and the spring is such that the ball stretches it 1.09 m (about 43 in.), how
many cycles per minute will the system execute? What will its motion be if we pull the ball
down from rest by 16 cm (about 6 in.) and let it start with zero initial velocity?
.
89.9083
9.98981
3.
In[ ]:= f = ω0/(2 Pi) (** In [Hz] **)
Out[ ]=
0.477465
◆ The system therefore executes f × 60 ≈ 28.6 ≈ 29 cycles per minute.
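The numbers above can be reproduced outside Mathematica as well; here is a quick cross-check in Python (not part of the original notebook), using the same data W = 98 N, stretch 1.09 m, and g = 9.81 m/s²:

```python
import math

# Undamped mass-spring data from the example above
W = 98.0        # weight of the iron ball [N]
stretch = 1.09  # static stretch of the spring [m]
g = 9.81        # gravitational acceleration [m/s^2]

k = W / stretch         # spring constant, approx. 89.9083 N/m
m = W / g               # mass, approx. 9.98981 kg
w0 = math.sqrt(k / m)   # natural angular frequency, approx. 3 rad/s
f = w0 / (2 * math.pi)  # frequency in Hz, approx. 0.477465
print(k, m, w0, f, 60 * f)  # 60*f gives cycles per minute
```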
◆ Find the coefficients A and B using the initial conditions: y(0) = 0.16 , y' (0) = ω0 B = 0
In[ ]:= y[t_] := A * Cos[ω0 * t] + B * Sin[ω0 * t]
0. + 1. A == 0.16
{{A → 0.16}}
{{B → 0.}}
0.16 Cos[3. t]
True
True
True
◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 3. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[m * y ''[x] + k * y[x] == 0, y[x], x]
Out[ ]=
In[ ]:= yp = DSolve[{m * y ''[x] + k * y[x] == 0, y[0] == 0.16, y '[0] == 0}, y[x], x]
Out[ ]=
(Figure: the harmonic oscillation 0.16 cos(3 x) plotted for 0 ≤ x ≤ 10, with range ±0.2.)
here c is called the damping constant. This is a homogeneous linear ODE with constant
coefficients. We can obtain the general solution by solving the characteristic equation as
discussed before
c k
λ2 + λ+ =0
m m
.
Again there are three cases with three different kinds of roots, depending on the sign of the
discriminant (c/m)² − 4 (k/m).
.
Case I. Overdamping
y(t) = c1 e^(−(α−β) t) + c2 e^(−(α+β) t),   α = c/(2 m),   β = (1/(2 m)) √(c² − 4 m k)
.
.
C² = A² + B²,   tan δ = B/A,   α = c/(2 m)
ω* = (1/(2 m)) √(4 m k − c²) = √(k/m − c²/(4 m²))
.
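The sign test on the discriminant c² − 4 m k decides which of the three cases applies. A small classification sketch in Python (not part of the original notebook), using the m = 10, k = 90 data of the example below:

```python
m, k = 10.0, 90.0  # mass and spring constant from the worked example

def classify(c):
    """Classify the damping from the sign of the discriminant c^2 - 4 m k."""
    disc = c * c - 4 * m * k
    if disc > 0:
        return "overdamping"
    if disc == 0:
        return "critical damping"
    return "underdamping"

print(classify(100.0), classify(60.0), classify(10.0))
```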
3.2
90 y + 100 y′ + 10 y′′
◆ There are two distinct roots, so we proceed with Case I, overdamping. This gives the
general solution:
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=
{- 9, - 1}
c1 e^(-9 x) + c2 e^(-x)
◆ Find the particular solution using the initial conditions: y(0) = 0.16 , y' (0) = ω0 B = 0
In[ ]:= cond1 = GeneralSol[0] == 0.16;
cond2 = ( D[GeneralSol[x], x] /. x → 0) == 0;
0.16
0.
◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y '' → D[ivpSoln[x], {x, 2}], y ' → D[ivpSoln[x], {x, 1}], y → ivpSoln[x]}]
Out[ ]=
True
◆ So the solution satisfies both the initial conditions and the ODE check.
(II) c = 60 kg / sec
90 y + 60 y′ + 10 y′′
◆ There is a real double root, so we proceed with Case II, critical damping. This gives the
general solution.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=
{- 3, - 3}
e^(-3 x) (c1 + c2 x)
◆ Find the particular solution using the initial conditions: y(0) = 0.16 , y' (0) = ω0 B = 0.
In[ ]:= cond1 = GeneralSol[0] == 0.16;
cond2 = ( D[GeneralSol[x], x] /. x → 0) == 0;
0.16
0.
◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y '' → D[ivpSoln[x], {x, 2}], y ' → D[ivpSoln[x], {x, 1}], y → ivpSoln[x]}]
Out[ ]=
True
◆ So the solution satisfies both the initial conditions and the ODE check.
(III) c = 10 kg / sec
90 y + 10 y′ + 10 y′′
◆ Find the general solution. We got two complex roots, so we proceed with Case III.
In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:
y1 = e^(-a x/2) cos(ω* x) and y2 = e^(-a x/2) sin(ω* x), where ω* = √(k/m − c²/(4 m²))
In[ ]:= ω* = Sqrt[k/m - c^2/(4 m^2)]
Out[ ]=
√35 / 2
e^(-x/2) (A Cos[(√35 x)/2] + B Sin[(√35 x)/2])
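As a cross-check on the value of ω* (in Python, not part of the original notebook):

```python
import math

m, c, k = 10.0, 10.0, 90.0  # underdamped case (III) of the example
omega_star = math.sqrt(k / m - c**2 / (4 * m**2))  # sqrt(9 - 0.25) = sqrt(35)/2
print(omega_star)
```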
◆ Find the particular solution using the initial conditions: y(0) = 0.16 , y' (0) = ω0 B = 0.
In[ ]:= cond1 = GeneralSol[0] == 0.16;
cond2 = ( D[GeneralSol[x], x] /. x → 0) == 0;
e^(-x/2) (0.16 Cos[(√35 x)/2] + 0.0270449 Sin[(√35 x)/2])
0.16
0.
◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y '' → D[ivpSoln[x], {x, 2}], y ' → D[ivpSoln[x], {x, 1}], y → ivpSoln[x]}]
Out[ ]=
e^(-x/2) (- 1.77636 × 10^-15 Cos[(√35 x)/2] - 4.44089 × 10^-16 Sin[(√35 x)/2])
True
◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Let’s plot three curves at the same graph.
In[ ]:= Plot[{yp1, yp2, yp3}, {x, 0, 10}, Frame True, FrameLabel {"t", "y(t)"},
GridLines Automatic, BaseStyle {FontWeight "Bold", Black, FontSize 12},
PlotRange {- 0.1, 0.15}, PlotStyle {Red, Green, Blue},
PlotLegends {"c = 100 kg/sec (Overdamping)",
"c = 60 kg/sec (Critical damping)", "c = 10 kg/sec (Underdamping)"}]
Out[ ]=
(Figure: the three solutions for c = 100 kg/sec (overdamping, red), c = 60 kg/sec (critical damping, green), and c = 10 kg/sec (underdamping, blue) on 0 ≤ t ≤ 10.)
The source code below was developed by John Erickson, Chicago State University (2009).
Open content licensed under CC BY-NC-SA.
John Erickson, Chicago State University
“Unforced, Damped, Simple Harmonic Motion”
https://demonstrations.wolfram.com/UnforcedDampedSimpleHarmonicMotion/
Wolfram Demonstrations Project; Published: March 10, 2009; Accessed on July 26, 2022.
In[ ]:= ClearAll["Global`*"]
(Interactive output: a Manipulate panel with sliders for time, spring length, initial position, initial velocity, damping coefficient c, mass m, and Hooke's constant k; it reports the discriminant c² − 4 k m and plots the position of the mass over time.)
Summary
After completing this chapter, you should be able to
◼ solve 2nd-order linear homogeneous ODEs step-by-step using Wolfram Mathematica.
Week 3: Second-Order ODEs (Part 2)
How to Solve Second-Order ODEs Step-by-step?
Table of Contents
1. Nonhomogeneous Linear ODEs of Second Order
1.1. Example 3.1. Method of Undetermined Coefficients
1.2. Example 3.2. Application of Modification Rule
1.3. Example 3.3. Application of Sum Rule
1.4. Example 3.4. Another example of the Method of Undetermined Coefficients
2. Summary
Commands list
◼ Sqrt[z]
◼ Exp[z]
◼ Collect[expr,x]
◼ Chop[expr]
◼ Plot[f, {x, x_min, x_max}]
y(x) = yh(x) + yp(x)
.
here yh(x) = c1 y1 + c2 y2 is the general solution of the homogeneous ODE on the same
interval I. We learned how to solve it earlier (by solving the characteristic equation).
.
2 Week 3_Second-Order ODEs-2 (Nonhomogeneous).nb
The yp(x) is any solution on I containing no arbitrary constants. It can be found by using the
Method of Undetermined Coefficients. The method is suitable for linear ODEs with constant
coefficients a and b:
y'' + a y' + b y = r(x)
.
Note 1: If a term in your choice for yp(x) happens to be a solution of the homogeneous ODE,
use the Modification Rule (multiply this term by x or by x2).
.
Note 2: If a term in your choice for yp(x) happens to be a sum of functions in the first column
of the Table above, then for yp(x) choose the sum of the functions in the corresponding lines of
the second column (Sum Rule).
.
y'' + y = 0.001 x²
◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 0; b = 1;
roots = Solve[λ^2 + a * λ + b == 0, λ]
Out[ ]=
y1 = e^(-a x/2) cos(ω x) and y2 = e^(-a x/2) sin(ω x), where ω = √(b − a²/4)
A Cos[x] + B Sin[x]
K0 + K1 x + K2 x²
K0 + 2 K2 + K1 x + K2 x²
0.001 x²
◆ Next, equate the coefficients of the constant term, x, and x², because the coefficient of
each power of x must be the same on both sides. Hence, LHS−RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {x, x^2}]
Out[ ]=
K0 + 2 K2 + K1 x + (- 0.001 + K2) x²
◆ The conditions that all coefficients must be 0 gives us 3 equations with 3 unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn0 = K0 + 2 K2 == 0 (** constant term **);
eqn1 = K1 == 0 (** coefficient of x **);
eqn2 = (- 0.001 + K2) == 0 (** coefficient of x² **);
- 0.002 + 0.001 x²
0. + 0.001 x²
0.001 x²
True
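The step above can be double-checked numerically: with K0 = −0.002, K1 = 0, K2 = 0.001 the trial solution must satisfy y'' + y = 0.001 x² identically. A quick Python residual check (not part of the original notebook), using a central-difference second derivative:

```python
def yp(x):
    # particular solution found above: y_p = -0.002 + 0.001 x^2
    return -0.002 + 0.001 * x**2

def residual(x, h=1e-3):
    # central-difference y'' plus y, minus the right-hand side 0.001 x^2
    ypp = (yp(x - h) - 2 * yp(x) + yp(x + h)) / h**2
    return ypp + yp(x) - 0.001 * x**2

print(max(abs(residual(x)) for x in (0.0, 1.0, 3.0, 10.0)))
```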
◆ Step 4. Find the particular solution using the initial conditions: y(0) = 0, y' (0) = 1.5.
◆ Find the derivative y' = dy/dx.
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=
- 0.002 + A == 0
0. + B == 1.5
◆ Step 5. Verify the solution to the ODE y'' + y = 0.001 x² with initial conditions:
y(0) = 0, y' (0) = 1.5.
In[ ]:= LHSOp[yparticular, x]
Out[ ]=
0. + 0.001 x²
True
0.
1.5
◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.
In[ ]:= Plot[yparticular[x], {x, 0, 60}, Frame → True,
 PlotStyle → {{Blue, Thick}}, FrameLabel → {"x", "y(x)"},
 PlotLegends → Placed["y(x)=-0.002+0.001 x²+0.002 cos(x)+1.5 sin(x)", Above],
 BaseStyle → {FontWeight → "Bold", Black, FontSize → 12}, GridLines → Automatic,
 AxesStyle → Directive[RGBColor[0, 0, 0], AbsoluteThickness[1]],
 Method → {"DefaultBoundaryStyle" → Automatic, "DefaultMeshStyle" → AbsolutePointSize[6],
  "ScalingFunctions" → None}, PlotRange → {- 2, 5}]
Out[ ]=
(Figure: the solution y(x) plotted for 0 ≤ x ≤ 60.)
◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DSolve[LHSOp[y, x] == rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=
y'' + 3 y' + 2.25 y = - 10 e^(-1.5 x)
◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 3; b = 2.25;
roots = Solve[λ^2 + a * λ + b == 0, λ]
Out[ ]=
{{λ → - 1.5}, {λ → - 1.5}}
{- 1.5, - 1.5}
e^(-1.5 x) (c1 + c2 x)
e^(-1.5 x) K0 x²
- 10 e^(-1.5 x)
◆ Next, equate the coefficients of e^(-1.5 x), x e^(-1.5 x), and x² e^(-1.5 x), because the
coefficient of each term must be the same on both sides. Hence, LHS−RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {Exp[- 1.5 x], x * Exp[- 1.5 x], x^2 * Exp[- 1.5 x]}]
Out[ ]=
◆ The condition that all coefficients must be 0 gives us only one equation with the
unknown K0, which we can solve using Solve[ ].
In[ ]:= eqn0 = 10 + 2 K0 == 0
Out[ ]=
10 + 2 K0 == 0
{{K0 → - 5}}
- 5 e^(-1.5 x) x²
- 10. e^(-1.5 x)
True
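With the Modification Rule choice yp = K0 x² e^(-1.5 x) and K0 = −5, the ODE y'' + 3 y' + 2.25 y = −10 e^(-1.5 x) should hold identically. A hand-differentiated Python check (not part of the original notebook):

```python
import math

def lhs(x):
    e = math.exp(-1.5 * x)
    y = -5 * x**2 * e                        # y_p = -5 x^2 e^{-1.5 x}
    dy = (-10 * x + 7.5 * x**2) * e          # y_p'
    d2y = (-10 + 30 * x - 11.25 * x**2) * e  # y_p''
    return d2y + 3 * dy + 2.25 * y

# lhs(x) + 10 e^{-1.5 x} should vanish for every x
print(max(abs(lhs(x) + 10 * math.exp(-1.5 * x)) for x in (0.0, 0.7, 2.0, 5.0)))
```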
◆ Step 4. Find the particular solution using the initial conditions: y(0) = 1, y' (0) = 0.
◆ Find the derivative y' = dy/dx.
0. + 1. c1 == 1
0. - 1.5 c1 + 1. c2 == 0
◆ Step 5. Verify the solution to the ODE y'' + 3 y' + 2.25 y = - 10 e^(-1.5 x) with initial
conditions: y(0) = 1, y' (0) = 0.
In[ ]:= LHSOp[yparticular, x]
Out[ ]=
- 14.5 e^(-1.5 x) + 30. e^(-1.5 x) x - 11.25 e^(-1.5 x) x² + 2.25 e^(-1.5 x) (1. + 1.5 x) +
 3 (1.5 e^(-1.5 x) - 10 e^(-1.5 x) x + 7.5 e^(-1.5 x) x² - 1.5 e^(-1.5 x) (1. + 1.5 x)) +
 2.25 (- 5 e^(-1.5 x) x² + e^(-1.5 x) (1. + 1.5 x))
True
1.
0.
◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.
(Figure: the solution y(x) plotted for 0 ≤ x ≤ 10.)
◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DSolve[LHSOp[y, x] == rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=
◆ Create expressions for the LHS operator and the RHS function.
◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 2; b = 0.75;
roots = Solve[λ^2 + a * λ + b == 0, λ]
Out[ ]=
◆ We got two distinct roots, so we proceed with Case I to obtain the general solution.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
Out[ ]=
{- 1.5, - 0.5}
c1 e^(-1.5 x) + c2 e^(-0.5 x)
K0 + K1 x + M1 Cos[x] + M2 Sin[x]
◆ Note: In the Wolfram Language the variable cannot be named “K”, because it is already a
built-in symbol, so use M1, M2 instead.
In[ ]:= ?K
Out[ ]=
Symbol
◆ Next, equate the coefficients of the constant term, x, cos x, and sin x, because the
coefficient of each term must be the same on both sides. Hence, LHS−RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {x, Cos[x], Sin[x]}]
Out[ ]=
0.75 K0 + 2 K1 + (- 0.09 + 0.75 K1) x + (- 2 - 0.25 M1 + 2 M2) Cos[x] + (0.25 - 2 M1 - 0.25 M2) Sin[x]
◆ The condition that all coefficients must be 0 gives us 4 equations with 4 unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn0 = 0.75 K0 + 2 K1 == 0;
eqn1 = - 0.09 + 0.75 K1 == 0;
eqn2 = - 2 - 0.25 M1 + 2 M2 == 0;
eqn3 = 0.25 - 2 M1 - 0.25 M2 == 0;
In[ ]:= coeffSoln = Solve[{eqn0, eqn1, eqn2, eqn3}, {K0, K1, M1, M2}]
Out[ ]=
True
◆ Step 4. Find the particular solution using the initial conditions: y(0) = 2.78
y' (0) = - 0.43.
◆ Find the derivative y' = dy/dx.
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=
- 0.32 + 1. c1 + 1. c2 == 2.78
◆ Note: In doing numerical computations, it is inevitable that you will sometimes end up
with results that are less precise than you want. Particularly when you get numerical
results that are very close to zero, you may well want to assume that the results should be
exactly zero. The function Chop allows you to replace approximate real numbers that are
close to zero by the exact integer 0.
◆ Chop[expr] replaces all approximate real numbers in expr with magnitude less
than 10^(-10) by the exact integer 0.
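The same idea is easy to mimic in any language; a rough Python analogue of Chop (an illustration, not the real Mathematica function):

```python
def chop(x, tol=1e-10):
    """Replace an approximate real number with magnitude below tol by exact 0."""
    return 0 if abs(x) < tol else x

print(chop(-1.77636e-15), chop(0.16))  # the tiny residual becomes exactly 0
```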
◆ Step 5. Verify the solution to the ODE:
.
True
2.78
- 0.43
◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.
(Figure: the solution y(x) plotted for 0 ≤ x ≤ 50.)
◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DsolveSoln0 = DSolve[LHSOp[y, x] == rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=
{{y[x] → e^(-1.5 x) C[1] + e^(-0.5 x) C[2] - 0.32 + 0.12 x + 1. Sin[x]}}
(DSolve returns this with numerical noise of order 10^-15 attached, which Chop removes.)
In[ ]:= DsolveSoln1 = DSolve[{LHSOp[y, x] rhsFunc[x], y[0] 2.78, y '[0] - 0.43} , y[x], x]
(** A particular solution **)
Out[ ]=
{{y[x] → 3.1 e^(-0.5 x) - 0.32 + 0.12 x + 1. Sin[x]}}
(after Chop removes numerical noise of order 10^-15)
In[ ]:= yparticular[x] (** The solution that we obtained through a step-by-step SOP **)
Out[ ]=
◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 2; b = 5;
roots = Solve[λ^2 + a * λ + b == 0, λ]
Out[ ]=
y1 = e^(-a x/2) cos(ω x) and y2 = e^(-a x/2) sin(ω x), where ω = √(b − a²/4)
◆ Next, equate the coefficients of e^(0.5 x), cos 4x, and sin 4x, because the coefficient of
each term must be the same on both sides. Hence, LHS−RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {Exp[0.5 x], Cos[4 x], Sin[4 x]}]
Out[ ]=
◆ The condition that all coefficients must be zero gives us 3 equations in 3 unknowns,
which we can solve using Solve[ ].
0.05 e^(0.5 x) + 2 (0.1 e^(0.5 x) + 20 Cos[4 x]) - 80 Sin[4 x] + 5 (0.2 e^(0.5 x) + 5 Sin[4 x])
True
◆ Step 4. Find the particular solution using the initial conditions: y(0) = 0.2, y' (0) = 60.1.
◆ Find the derivative y' = dy/dx.
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=
0.1 e^(0.5 x) + 20 Cos[4 x] + e^(-x) (2 B Cos[2 x] - 2 A Sin[2 x]) - e^(-x) (A Cos[2 x] + B Sin[2 x])
0.2 + A == 0.2
20.1 - A + 2 B == 60.1
True
0.2
60.1
◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.
(Figure: the solution y(x) plotted for 0 ≤ x ≤ 14, range 0 to 200.)
◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DsolveSoln0 = DSolve[LHSOp[y, x] == rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=
In[ ]:= DsolveSoln1 = DSolve[{LHSOp[y, x] == rhsFunc[x], y[0] == 0.2, y '[0] == 60.1}, y[x], x]
(** A particular solution **)
Out[ ]=
y[x] → (0. + 10. I) e^((-1. - 2. I) x) - (0. + 10. I) e^((-1. + 2. I) x) + 0.2 e^(0.5 x) + 5. Sin[4. x]
In[ ]:= yfromDsolve = y[x] /. FullSimplify[Chop[DsolveSoln1〚1〛]] (**A result from DSolve **)
Out[ ]=
In[ ]:= yparticular[x] (** The solution that we obtained through a step-by-step SOP **)
Out[ ]=
◆ Let’s compare the solution that we obtained through a step-by-step SOP (standard
operating procedure) to that from DSolve by plotting them together.
In[ ]:= Plot[{yparticular[x], yfromDsolve}, {x, 0, 15}, Frame → True,
 PlotStyle → {{Orange, Thick}, {Black, Dashed}}, FrameLabel → {"x", "y(x)"},
 BaseStyle → {FontWeight → "Bold", Black, FontSize → 12}, GridLines → Automatic,
 AxesStyle → Directive[RGBColor[0, 0, 0], AbsoluteThickness[1]],
 Method → {"DefaultBoundaryStyle" → Automatic,
  "DefaultMeshStyle" → AbsolutePointSize[6], "ScalingFunctions" → None},
 PlotLegends → Placed[{"Step-by-step SOP", "DSolve"}, {0.4, 0.75}]]
Out[ ]=
(Figure: the step-by-step SOP solution and the DSolve solution coincide on 0 ≤ x ≤ 14.)
Summary
After completing this chapter, you should be able to
◼ solve 2nd-order linear non-homogeneous ODEs step-by-step by the method of
undetermined coefficients using Wolfram Mathematica.
◼ develop SOPs for the method of undetermined coefficients.
◼ develop the habit of always checking your solutions for quality assurance.
◼ develop your attention-to-detail skills in solving problems.
Week 4: Second-Order ODEs (Part 3)
Forced Oscillations & Resonance
Table of Contents
1. Modeling: Forced Oscillations
2. Nonhomogeneous ODE
3. Maximum amplitude of Damped Forced Oscillations
3.1. Example 4.1. Amplitude of the Steady State solution. Practical Resonance
4. Summary
Commands list
◼ Collect[expr, x]
◼ expr[[i]] or Part[expr, i]
◼ Solve[expr, vars]
◼ Plot[f, {x, x_min, x_max}]
2 Week 4_Second-Order ODEs-3 (Forced Oscillations).nb
◆ To start with, let’s write the governing equation in the standard form:
.
y''(t) + (c/m) y'(t) + (k/m) y(t) = (F0/m) cos ωt
.
◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (t ) .
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = c / m; b = k / m;
roots = Solve[λ^2 + a * λ + b == 0, λ]
Out[ ]=
{{λ → (1/2) (- c/m - √((c² - 4 k m)/m²))}, {λ → (1/2) (- c/m + √((c² - 4 k m)/m²))}}
◆ Since the r(t ) term is in the form of k cos ω t, the corresponding yp(t ) choice (second row
in the Table) is yp = K1 cos ω t + K2 sin ω t .
◆ K1, K2 are coefficients to be determined.
In[ ]:= yp[t_] = K1 * Cos[ω * t] + K2 * Sin[ω * t]
K1 Cos[t ω] + K2 Sin[t ω]
◆ Next, gather coefficients of cos ω t and sin ω t. Notice that we are working with LHS-
RHS at this point.
◆ LHS-RHS must be zero for all t, which means the coefficients of cos ω t and sin ω t must
be zero independently.
In[ ]:= Q = Collect[(LHS - RHS), {Cos[ω * t], Sin[ω * t]}]
Out[ ]=
(- F0/m + (k K1)/m + (c K2 ω)/m - K1 ω²) Cos[t ω] + ((k K2)/m - (c K1 ω)/m - K2 ω²) Sin[t ω]
◆ The conditions that all coefficients must be 0 gives us two equations with two unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn1 = - F0/m + (k K1)/m + (c K2 ω)/m - K1 ω^2 == 0 (** coefficient of Cos[ω*t] **);
eqn2 = (k K2)/m - (c K1 ω)/m - K2 ω^2 == 0 (** coefficient of Sin[ω*t] **);
{{K1 → (F0 (k - m ω²))/(k² + c² ω² - 2 k m ω² + m² ω⁴), K2 → (c F0 ω)/(k² + c² ω² - 2 k m ω² + m² ω⁴)}}
{{K1 → (F0 m (ω0² - ω²))/(c² ω² + m² (ω² - ω0²)²), K2 → (c F0 ω)/(c² ω² + m² (ω² - ω0²)²)}}
(F0 (k - m ω²) Cos[t ω])/(k² + c² ω² - 2 k m ω² + m² ω⁴) + (c F0 ω Sin[t ω])/(k² + c² ω² - 2 k m ω² + m² ω⁴)
Writing D = k² + c² ω² − 2 k m ω² + m² ω⁴ for the common denominator, the check yp'' + (c/m) yp' + (k/m) yp evaluates to
- (F0 ω² (k - m ω²) Cos[t ω])/D - (c F0 ω³ Sin[t ω])/D
 + (k ((F0 (k - m ω²) Cos[t ω])/D + (c F0 ω Sin[t ω])/D))/m
 + (c ((c F0 ω² Cos[t ω])/D - (F0 ω (k - m ω²) Sin[t ω])/D))/m
In[ ]:= FullSimplify[LHSCheck - RHS ]
Out[ ]=
True
(F0 (k - m ω²) Cos[t ω])/(k² + c² ω² - 2 k m ω² + m² ω⁴) + (c F0 ω Sin[t ω])/(k² + c² ω² - 2 k m ω² + m² ω⁴) + yh[t]
◆ yh[t] is the solution to the corresponding homogeneous equation based on three cases
depending on the discriminant sign.
yp = F0 (m (ω0² − ω²))/(m² (ω0² − ω²)² + ω² c²) cos(ω t) + F0 (ω c)/(m² (ω0² − ω²)² + ω² c²) sin(ω t)
.
After a sufficiently long time, the output of a damped vibrating system under a purely
sinusoidal driving force will practically be a harmonic oscillation whose frequency is that of
the input. This is called the steady-state solution, in which y(t) ≈ yp(t).
Amplitude
.
.
C* = √(a² + b²) = F0/√(m² (ω0² − ω²)² + ω² c²)
.
Phase angle η
.
tan η = b/a = (ω c)/(m (ω0² − ω²))
.
The amplitude C *(ω) has a maximum at a certain ω value. Find its location, then its size.
.
{{ω → 0}, {ω → - √(- c² + 2 m² w0²)/(√2 m)}, {ω → √(- c² + 2 m² w0²)/(√2 m)}}
104
Week 4_Second-Order ODEs-3 (Forced Oscillations).nb 7
In[ ]:= FullSimplify[(Sqrt[- c^2 + 2 m^2 w0^2]/(Sqrt[2] m))^2] (** ωmax^2 **)
Out[ ]=
- c^2/(2 m^2) + w0^2
◆ Let’s plot the amplification C*/F0 as a function of ω to see how the amplitude changes
when the damping term varies. The data for the mass, spring constant, and driving force
are assigned arbitrarily.
In[ ]:= C0 = Ampl[ω]/F0 /. {F0 → 10, m → 2, w0 → 5}
Out[ ]=
1/√(c² ω² + 4 (25 − ω²)²)
In[ ]:= fC0[c_, ω_] := 1/Sqrt[c^2 ω^2 + 4 (25 - ω^2)^2];
fontsize = 20;
fig01 = Plot[{fC0[2, ω], fC0[4, ω], fC0[8, ω]}, {ω, 0, 6},
PlotRange {0, 0.3}, PlotStyle {Blue, Green, Red}, Background White,
BaseStyle {FontFamily "Times New Roman", fontsize},
Frame True,
FrameLabel {"ω [1/s]", "C * (ω )"},
FrameStyle Directive[Black, Thick], AxesOrigin {0, 0},
PlotLegends Placed[LineLegend[Automatic, {Text[Style["c = 2 kg/s", fontsize]],
Text[Style["c = 4 kg/s", fontsize]], Text[Style["c = 8 kg/s", fontsize]]},
Spacings 0.2, LegendLayout {"Column", 1}], {0.75, 0.8}],
ImageSize 480, AspectRatio 3 / 4]
Out[ ]=
(Figure: C*(ω) for c = 2, 4, and 8 kg/s on 0 ≤ ω ≤ 6; the smaller the damping, the higher and sharper the resonance peak.)
In[ ]:= Export["fig01.pdf", fig01,
"AllowRasterization" True, ImageSize 480, ImageResolution 600] ;
◆ From the graph, we can conclude that the biggest practical resonance happens when
the damping term is the smallest.
◆ Please note that this figure is of publication quality.
Summary
After completing this chapter, you should be able to
◼ develop standard operating procedures to solve second-order linear non-homogeneous
ODEs step-by-step using Wolfram Mathematica.
◼ model simple physical situations encountered in engineering using differential
equations.
◼ learn and use information, tools, and technology to solve engineering math problems.
◼ analyze results graphically and create figures of publication quality.
Week 5: Laplace Transforms
Basics of Laplace Transforms
Table of Contents
1. Basics of Laplace Transforms
1.1. Built-in Functions in Wolfram Mathematica
1.2. Laplace Transform by Integration
1.3. Linearity of the Laplace Transform
1.4. Laplace Transform of Derivatives
2. Unit Step Function and Dirac’s Delta Function
2.1. Unit Step Function (Heaviside Function)
2.1.1. Example 5.1
2.1.2. Example 5.2
2.2. Dirac's Delta Function (Impulse Function)
2.2.1. Properties of Dirac's Delta
2.2.2. What is the Laplace Transform of the Dirac’s Delta Function?
3. Summary
Commands list
◼ LaplaceTransform[f[t],t,s]
◼ InverseLaplaceTransform[F[s],s,t]
◼ Integrate[f, x]
◼ Limit[f, x → x*]
◼ HeavisideTheta[x]
◼ UnitStep[x]
◼ DiracDelta[x]
2 Week 5_Laplace Transforms-1 (Basics).nb
If f(t) is a function defined for all t ≥ 0, its Laplace transform is the integral of f(t) times e^(-s t)
from t = 0 to ∞. It is a function of s, say, F(s), and is denoted by ℒ(f); thus
.
.
F(s) = ℒ(f) = ∫₀^∞ e^(-s t) f(t) dt
.
Not only is the result F(s) called the Laplace transform, but the operation just described, which
yields F(s) from a given f(t), is also called the Laplace transform. It is an “integral
transform” with “kernel” k(s, t) = e^(-s t).
.
.
F(s) = ∫₀^∞ k(s, t) f(t) dt
.
Laplace transforms are also extensively used in control theory and signal processing as a
way to represent and manipulate linear systems in the form of transfer functions and transfer
matrices. The Laplace transform and its inverse are then a way to transform between the
time domain and the frequency domain.
.
LaplaceTransform[f[t], t, s] gives the symbolic Laplace transform of f[t] in the variable t and
returns a transform F[s] in the variable s; given a numerical value of s, it gives the numeric
Laplace transform at that value.
◆ Find the Inverse of the Laplace transform using the built-in function.
In[ ]:= InverseLaplaceTransform[F[s], s, t]
Out[ ]=
e^(a t) Cos[t w]
1/(- a + s)   if Re[a] < Re[s]
(- 1 + e^((a - s) T))/(a - s)
1/(- a + s)   if a < s
In[ ]:= Limit[(- 1 + E^((a - s) T))/(a - s), T → Infinity, Assumptions → {Re[a] < Re[s]}]
Out[ ]=
1/(- a + s)
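The same transform ℒ{e^(a t)} = 1/(s − a) can also be checked by brute-force numerical integration of the defining integral; a Python sketch (not part of the original notebook):

```python
import math

def laplace_exp(a, s, T=40.0, n=200000):
    """Midpoint rule for the truncated integral of e^{-s t} e^{a t} on [0, T]."""
    h = T / n
    return h * sum(math.exp((a - s) * (i + 0.5) * h) for i in range(n))

a, s = 1.0, 3.0
print(laplace_exp(a, s), 1.0 / (s - a))  # both approx. 0.5
```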
◆ Define the RHS of the above equation. For that, find the Laplace transforms
a ℒ{f(t)} and b ℒ{g(t)}.
In[ ]:= RHS1 = LaplaceTransform[c1 * Exp[a * t], t, s]
Out[ ]=
c1
-a + s
True
◆ Find the Laplace Transform of the 1st derivative of the function f(t).
In[ ]:= LaplaceTransform[f '[t], t, s]
Out[ ]=
- f[0] + s LaplaceTransform[f[t], t, s]
◆ Find the Laplace Transform of the 2nd derivative of the function f(t).
In[ ]:= LaplaceTransform[f ''[t], t, s]
Out[ ]=
◆ HeavisideTheta[x] returns 0 or 1 for all real numeric x other than 0. HeavisideTheta can
be used in integrals, integral transforms, and differential equations.
In[ ]:= ? HeavisideTheta
Out[ ]=
Symbol
HeavisideTheta[x] represents the Heaviside theta function θ(x), equal to 0 for x < 0 and 1 for x > 0.
◆ UnitStep[x] represents the unit step function, equal to 0 for x < 0 and 1 for x ≥ 0 .
In[ ]:= ? UnitStep
Symbol
UnitStep [x] represents the unit step function, equal to 0 for x < 0 and 1 for x ≥ 0.
Out[ ]=
(Figure: UnitStep[t] and HeavisideTheta[t] plotted together on −1 ≤ t ≤ 4; both equal 0 for t < 0 and 1 for t > 0.)
ℒ{u(t − a)} = ∫₀^(+∞) e^(-s t) u(t − a) dt = e^(-a s)/s
.
e^(-a s)/s
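A numerical cross-check of ℒ{u(t − a)} = e^(-a s)/s in Python (not part of the original notebook):

```python
import math

def laplace_unitstep(a, s, T=40.0, n=200000):
    """Midpoint rule for the integral of e^{-s t} u(t - a) on [0, T]."""
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        if t >= a:  # the unit step switches the integrand on at t = a
            total += math.exp(-s * t)
    return h * total

a, s = 2.0, 1.0
print(laplace_unitstep(a, s), math.exp(-a * s) / s)  # both approx. e^{-2}
```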
◆ In engineering, the unit step function is mainly used in problems that involve switching
on and off, and time shifts.
In[ ]:= Plot[UnitStep[t - 1] - UnitStep[t - 2], {t, 0, 10}, PlotStyle {Red, Thick},
PlotLegends Placed[{"u[t-1]-u[t-2]"}, {0.7, 0.2}], Exclusions None, Frame True]
Out[ ]=
(Figure: the pulse u[t-1] - u[t-2], equal to 1 on 1 < t < 2 and 0 elsewhere.)
(Figure: u[t-1] - 2 u[t-4] + u[t-6] plotted for 0 ≤ t ≤ 10.)
Example 5.1
f(t) = 2 for 0 < t < 1;   (1/2) t² for 1 < t < π/2;   cos(t) for t > π/2
In[ ]:= Plot[2 * (1 - UnitStep[t - 1]) + (1/2) t^2 * (UnitStep[t - 1] - UnitStep[t - Pi/2]) +
  Cos[t] * UnitStep[t - Pi/2], {t, 0, 5 Pi}, PlotStyle → {Red, Thick},
 PlotLegends → Placed["f[t]=2*(1-u[t-1])+(1/2) t²*(u[t-1]-u[t-π/2])+cos(t) u[t-π/2]",
  {0.5, 0.85}], Exclusions → None, Frame → True]
Out[ ]=
(Figure: the piecewise function f(t) plotted for 0 ≤ t ≤ 5π.)
◆ Let’s define the given f(t) function as a function y[t_] of the variable t.
In[ ]:= y[t_] := 2 * (1 - UnitStep[t - 1]) +
  (1/2) t^2 * (UnitStep[t - 1] - UnitStep[t - Pi/2]) + Cos[t] * UnitStep[t - Pi/2]; y[t]
Out[ ]=
2 (1 - UnitStep[- 1 + t]) + (1/2) t² (UnitStep[- 1 + t] - UnitStep[- π/2 + t]) + Cos[t] UnitStep[- π/2 + t]
◆ As we can see, the initial function and the expression obtained from the Laplace trans-
form method yield the same result.
In[ ]:= Plot[{y[t], y2[t]}, {t, 0, 5 Pi},
PlotStyle {{Red, Thick}, {Black, Dashed}}, Exclusions None, Frame True,
PlotLegends {"f(t) initial", "f(t) from the Laplace Transform"}]
Out[ ]=
(Figure: f(t) initial and f(t) from the Laplace transform coincide on 0 ≤ t ≤ 5π.)
Example 5.2
f(t) = (1 + t)² for 0 ≤ t < 1;   1 + t² for t ≥ 1
2/s³ + 2/s² + 1/s − (2 e^(-s) (1 + s))/s²
(1 + t)2 - 2 t HeavisideTheta[- 1 + t]
◆ As we can see, the initial function and the expression obtained from the Laplace trans-
form method yield the same result.
(Figure: f(t) initial and f(t) from the Laplace transform coincide.)
DiracDelta[x]
DiracDelta[0]
(Figure: DiracDelta[x] plotted on −2 ≤ x ≤ 2; it is zero everywhere except at x = 0.)
δ(t − a) = ∞ for t = a and 0 for t ≠ a,   with ∫₋∞^(+∞) δ(t − a) dt = 1
.
In[ ]:= Integrate[DiracDelta[t - a], {t, 0, Infinity}, Assumptions {a ∈ Reals && a > 0}]
Out[ ]=
In[ ]:= Integrate[g[t] * DiracDelta[t - a], {t, 0, Infinity}, Assumptions {a ∈ Reals && a > 0}]
Out[ ]=
g[a]
e^(-a s) HeavisideTheta[a]
e^(-a s)
1
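The result ℒ{δ(t − a)} = e^(-a s) can be approximated by replacing δ(t − a) with a tall, narrow pulse of unit area; a Python sketch (not part of the original notebook):

```python
import math

def laplace_delta_pulse(a, s, eps=1e-6):
    """Laplace transform of a pulse of height 1/eps on [a, a + eps] (closed form)."""
    return (math.exp(-s * a) - math.exp(-s * (a + eps))) / (s * eps)

a, s = 1.0, 2.0
print(laplace_delta_pulse(a, s), math.exp(-a * s))  # the two agree as eps -> 0
```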
◆ Laplace transform of 125 δ(t − π/3):
In[ ]:= LaplaceTransform[125 * DiracDelta[t - Pi/3], t, s]
Out[ ]=
125 e^(-π s/3)
Summary
After completing this chapter, you should be able to
Week 6: Laplace Transforms (Part 2)
Applications of Laplace Transforms
Table of Contents
1. Solving an IVP by Laplace Transforms: The SOP
1.1. Example 6.1
1.2. Example 6.2
2. Modeling Mass-Spring System using the Unit Step & the Dirac's Delta Functions
2.1. Mass-Spring System Under a Square Wave
2.2. Hammer-blow Response of a Mass-Spring System
2.3. Mass-Spring System Under a Sinusoidal Force for Some Time Interval
3. Convolution
4. Summary
Commands list
◼ LaplaceTransform[f[t],t,s]
◼ InverseLaplaceTransform[F[s],s,t]
◼ HeavisideTheta[x]
◼ UnitStep[x]
◼ DiracDelta[x]
◼ Convolve[f, g, x, y]
Week 6_Laplace Transforms-2 (Solving ODEs).nb
Example 6.1
y''(t) + 2 y'(t) + 15 y(t) = t ℯ^(-t),   y(0) = 0, y'(0) = 1
This example and its sample solutions were developed by Prof. Katharine Long, Texas Tech
University - Math Dept.
In[ ]:= ClearAll["Global`*"]
◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y''[t] + 2 y'[t] + 15 y[t] == t Exp[-t]
Out[ ]=
{y[0] → 0, y′[0] → 1}
◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=
-1 + 15 LaplaceTransform[y[t], t, s] + 2 s LaplaceTransform[y[t], t, s] + s^2 LaplaceTransform[y[t], t, s] == 1/(1 + s)^2
◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s] → Y[s]
Out[ ]=
-1 + 15 Y[s] + 2 s Y[s] + s^2 Y[s] == 1/(1 + s)^2
◆ Step 2. Solve the equation for Y(s):
Y[s] → (2 + 2 s + s^2)/((1 + s)^2 (15 + 2 s + s^2))
(2 + 2 s + s^2)/((1 + s)^2 (15 + 2 s + s^2))
◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
(1/196) ℯ^(-t) (14 t + 13 √14 Sin[√14 t])

(1/196) ℯ^(-t) (14 t + 13 √14 Sin[√14 t])
-(1/98) ℯ^(-t) (14 + 182 Cos[√14 t]) - (13 ℯ^(-t) Sin[√14 t])/√14 + (4/49) ℯ^(-t) (14 t + 13 √14 Sin[√14 t]) +
2 ((1/196) ℯ^(-t) (14 + 182 Cos[√14 t]) - (1/196) ℯ^(-t) (14 t + 13 √14 Sin[√14 t])) == ℯ^(-t) t
True
{True, True}
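As an extra sanity check outside Mathematica, the closed-form solution can be compared against a direct numerical integration of the IVP. The sketch below (plain Python, classic fourth-order Runge-Kutta; all names are our own) assumes nothing beyond the ODE y'' + 2y' + 15y = t e^(-t) with y(0) = 0, y'(0) = 1:

```python
import math

def rhs(t, y, v):
    # second-order ODE rewritten as y'' = -2 y' - 15 y + t e^(-t)
    return -2.0 * v - 15.0 * y + t * math.exp(-t)

def rk4(t_end, h=1e-3):
    t, y, v = 0.0, 0.0, 1.0          # y(0) = 0, y'(0) = 1
    for _ in range(int(round(t_end / h))):
        k1y, k1v = v, rhs(t, y, v)
        k2y, k2v = v + 0.5*h*k1v, rhs(t + 0.5*h, y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = v + 0.5*h*k2v, rhs(t + 0.5*h, y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = v + h*k3v, rhs(t + h, y + h*k3y, v + h*k3v)
        y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += h
    return y

def closed_form(t):
    s14 = math.sqrt(14.0)
    return math.exp(-t) * (14*t + 13*s14*math.sin(s14*t)) / 196

diff = abs(rk4(1.0) - closed_form(1.0))
print(diff)  # essentially machine-precision agreement
```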
◆ A particular solution:
[Plot of the solution y(t) = (1/196) ℯ^(-t) (14 t + 13 √14 sin(√14 t)) on 0 ≤ t ≤ 5.]
Example 6.2
y''(t) + 2 y'(t) + 5 y(t) = 1.25 exp(0.5 t) + 40 cos(4 t) - 55 sin(4 t),   y(0) = 0.2, y'(0) = 60.1
.
◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= LHSOp[y_, t_] = y ''[t] + 2.0 y '[t] + 5.0 y[t]
Out[ ]=
5. y[t] + 2. y′ [t] + y′′ [t]
5. y[t] + 2. y′[t] + y′′[t] == 1.25 ℯ^(0.5 t) + 40. Cos[4. t] - 55. Sin[4. t]
◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=
◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s] → Y[s]
Out[ ]=
-60.1 - 0.2 s + 5. Y[s] + s^2 Y[s] + 2. (-0.2 + s Y[s]) == 1.25/(-0.5 + s) - 220./(16. + s^2) + (40. s)/(16. + s^2)
Y[s] → (60.5 + 0.2 s + 1.25/(-0.5 + s) - 220./(16. + s^2) + (40. s)/(16. + s^2))/(5. + 2. s + s^2)
◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
0.2 ℯ^(0.5 t) + ℯ^((-1. - 2. ⅈ) t) ((-1.19349×10^-15 + 10. ⅈ) + (-1.19349×10^-15 - 10. ⅈ) ℯ^((0. + 4. ⅈ) t)) +
ℯ^((0. - 4. ⅈ) t) ((-4.44089×10^-16 + 2.5 ⅈ) + (-4.44089×10^-16 - 2.5 ⅈ) ℯ^((0. + 8. ⅈ) t))
5. y[t] + 2. y′[t] + y′′[t] == 1.25 ℯ^(0.5 t) + 40. Cos[4. t] - 55. Sin[4. t]
-80. ℯ^(-1. t) Cos[2. t] + 0.05 Cosh[0.5 t] - 60. ℯ^(-1. t) Sin[2. t] - 80. Sin[4. t] +
2. (40. ℯ^(-1. t) Cos[2. t] + 20. Cos[4. t] + 0.1 Cosh[0.5 t] - 20. ℯ^(-1. t) Sin[2. t] + 0.1 Sinh[0.5 t]) +
5. (0.2 Cosh[0.5 t] + 20. ℯ^(-1. t) Sin[2. t] + 5. Sin[4. t] + 0.2 Sinh[0.5 t]) + 0.05 Sinh[0.5 t]
True
In[ ]:= IC
Out[ ]=
{0.2, 60.1}
y[x] → ℯ^(-1. x) C[2] Cos[2. x] + ℯ^(-1. x) C[1] Sin[2. x] + 5. ℯ^(-1. x) (0. + 0.04 ℯ^(1.5 x) Cos[2. x]^2 +
0.04 ℯ^(1.5 x) Sin[2. x]^2 + 1. ℯ^(1. x) Cos[2. x]^2 Sin[4. x] + 1. ℯ^(1. x) Sin[2. x]^2 Sin[4. x])
◆ A particular solution:
In[ ]:= DsolveSoln1 = DSolve[{LHSOp[y, x] == rhsFunc[x], y[0] == 0.2, y'[0] == 60.1}, y[x], x]
Out[ ]=
◆ Let’s compare the obtained results by plotting them on the same graph:
In[ ]:= Plot[{ySoln[x], yfromDsolve}, {x, 0, 15}, Frame → True,
PlotStyle → {{Orange, Thick}, {Black, Dashed}}, FrameLabel → {"x", "y(x)"},
BaseStyle → {FontWeight → "Bold", Black, FontSize → 12}, GridLines → Automatic,
AxesStyle → Directive[RGBColor[0., 0., 0.], AbsoluteThickness[1]],
Method → {"DefaultBoundaryStyle" → Automatic,
"DefaultMeshStyle" → AbsolutePointSize[6], "ScalingFunctions" → None},
PlotLegends → Placed[{"Laplace Transform", "DSolve"}, {0.4, 0.75}], Background → White]
Out[ ]=
[Plot: the Laplace-transform solution and the DSolve solution coincide on 0 ≤ x ≤ 15.]
This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
227.
In[ ]:= Plot[r[t], {t, 0, 5}, PlotRange → {-0.5, 1.5}, PlotStyle → {{Red, Thick}},
Frame → True, Exclusions → None, FrameLabel → {"t", "r(t)"},
BaseStyle → {FontWeight → "Bold", Black, FontSize → 12}, GridLines → Automatic,
AxesStyle → Directive[RGBColor[0., 0., 0.], AbsoluteThickness[1]],
PlotLegends → Placed[{"r[ t ]=u[ t-1 ] - u[ t-2 ]"}, {0.65, 0.87}],
Method → {"DefaultBoundaryStyle" → Automatic,
"DefaultMeshStyle" → AbsolutePointSize[6], "ScalingFunctions" → None}]
Out[ ]=
[Plot of the square wave r(t) = u(t-1) - u(t-2) on 0 ≤ t ≤ 5.]
◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y''[t] + 3 y'[t] + 2 y[t] == r[t]
Out[ ]=
{y[0] → 0, y′[0] → 0}
◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=
2 LaplaceTransform[y[t], t, s] + 3 s LaplaceTransform[y[t], t, s] + s^2 LaplaceTransform[y[t], t, s] == -(ℯ^(-2 s)/s) + ℯ^(-s)/s
◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
2 Y[s] + 3 s Y[s] + s^2 Y[s] == -(ℯ^(-2 s)/s) + ℯ^(-s)/s
Y[s] → (ℯ^(-2 s) (-1 + ℯ^s))/(s (2 + 3 s + s^2))

(ℯ^(-2 s) (-1 + ℯ^s))/(s (2 + 3 s + s^2))
◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
-(1/2) ℯ^(-2 (-2 + t)) (-1 + ℯ^(-2 + t))^2 HeavisideTheta[-2 + t] + (1/2) ℯ^(-2 (-1 + t)) (-1 + ℯ^(-1 + t))^2 HeavisideTheta[-1 + t]

-(1/2) ℯ^(-2 (-2 + t)) (-1 + ℯ^(-2 + t))^2 HeavisideTheta[-2 + t] + (1/2) ℯ^(-2 (-1 + t)) (-1 + ℯ^(-1 + t))^2 HeavisideTheta[-1 + t]
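The closed-form response can also be checked against a direct time-domain integration of the mass-spring equation with the square-wave input. The sketch below (plain Python, RK4; all names are our own choices) uses the equivalent real form of the solution, y(t) = g(t-1)u(t-1) - g(t-2)u(t-2) with g(u) = (1/2)(1 - e^(-u))^2:

```python
import math

def r(t):
    # square-wave input u(t-1) - u(t-2)
    return 1.0 if 1.0 <= t < 2.0 else 0.0

def rk4_response(t_end, h=1e-4):
    # integrate y'' + 3 y' + 2 y = r(t), y(0) = y'(0) = 0, with classic RK4
    f = lambda t, y, v: -3.0*v - 2.0*y + r(t)
    t, y, v = 0.0, 0.0, 0.0
    for _ in range(int(round(t_end / h))):
        k1y, k1v = v, f(t, y, v)
        k2y, k2v = v + 0.5*h*k1v, f(t + 0.5*h, y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = v + 0.5*h*k2v, f(t + 0.5*h, y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = v + h*k3v, f(t + h, y + h*k3y, v + h*k3v)
        y += h*(k1y + 2*k2y + 2*k3y + k4y)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += h
    return y

def closed_form(t):
    g = lambda u: 0.5*(1 - math.exp(-u))**2 if u > 0 else 0.0
    return g(t - 1) - g(t - 2)

diff = abs(rk4_response(3.0) - closed_form(3.0))
print(diff)  # small; accuracy is limited by the jumps in r(t)
```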
2 (-(1/2) ℯ^(-2 (-2 + t)) (-1 + ℯ^(-2 + t))^2 HeavisideTheta[-2 + t] + (1/2) ℯ^(-2 (-1 + t)) (-1 + ℯ^(-1 + t))^2 HeavisideTheta[-1 + t]) -
(1/2) ℯ^(-2 (-2 + t)) (-1 + ℯ^(-2 + t))^2 DiracDelta′[-2 + t] + (1/2) ℯ^(-2 (-1 + t)) (-1 + ℯ^(-1 + t))^2 DiracDelta′[-1 + t]
- HeavisideTheta[- 2 + t] + HeavisideTheta[- 1 + t]
{True, True}
[Plot of the input r(t) and the response y(t) of the mass-spring system on 0 ≤ t ≤ 5.]
This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
227.
In[ ]:= ClearAll["Global`*"]
DiracDelta[- 1 + t]
◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y''[t] + 3 y'[t] + 2 y[t] == r[t]
Out[ ]=
{y[0] → 0, y′[0] → 0}
◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=
2 LaplaceTransform[y[t], t, s] + 3 s LaplaceTransform[y[t], t, s] + s^2 LaplaceTransform[y[t], t, s] == ℯ^(-s)
◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s] → Y[s]
Out[ ]=
Y[s] → ℯ^(-s)/(2 + 3 s + s^2)

ℯ^(-s)/(2 + 3 s + s^2)
◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
ℯ^(1 - 2 t) (-ℯ + ℯ^t) HeavisideTheta[-1 + t]
True
{True, True}
[Plot of the hammer-blow response y(t) and its derivative on 0 ≤ t ≤ 10.]
This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
229.
In[ ]:= ClearAll["Global`*"]
10 HeavisideTheta[π - t] Sin[2 t]
[Plot of r(t) = 10 u(π - t) Sin[2 t] on 0 ≤ t ≤ 6.]
◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y''[t] + 3 y'[t] + 2 y[t] == r[t]
Out[ ]=
{y[0] → 1, y′[0] → -5}
◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
5 - s + 2 LaplaceTransform[y[t], t, s] + s^2 LaplaceTransform[y[t], t, s] + 3 (-1 + s LaplaceTransform[y[t], t, s]) == (10 (2 - 2 ℯ^(-π s)))/(4 + s^2)
◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s] → Y[s]
Out[ ]=
5 - s + 2 Y[s] + s^2 Y[s] + 3 (-1 + s Y[s]) == (10 (2 - 2 ℯ^(-π s)))/(4 + s^2)

Y[s] → (-2 + s + (10 (2 - 2 ℯ^(-π s)))/(4 + s^2))/(2 + 3 s + s^2)
◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
-ℯ^(-2 t) (-2 + ℯ^t) - 2 ℯ^(-2 t) (-1 + ℯ^t) + 20 (-(1/8) ℯ^(-2 t) + ℯ^(-t)/5 + (1/40) (-3 Cos[2 t] - Sin[2 t])) -
20 HeavisideTheta[-π + t] (-(1/8) ℯ^(-2 (-π + t)) + ℯ^(π - t)/5 + (1/40) (-3 Cos[2 (-π + t)] - Sin[2 (-π + t)]))
(1/2) ℯ^(-2 t) (-16 ℯ^(π + t) DiracDelta[-π + t] + 2 ℯ^(2 t) (1 - 4 π HeavisideTheta[-π + t]) +
2 ℯ^(2 t) DiracDelta[-π + t] (2 Cos[2 t] - 6 Sin[2 t]) +
4 ℯ^(2 t) (-1 + HeavisideTheta[-π + t]) (2 Cos[2 t] - 6 Sin[2 t]) +
ℯ^(2 t) (-1 + HeavisideTheta[-π + t]) (-12 Cos[2 t] - 4 Sin[2 t]) +
4 ℯ^(2 t) DiracDelta[-π + t] (3 Cos[2 t] + Sin[2 t]) +
4 ℯ^(2 t) (-1 + HeavisideTheta[-π + t]) (3 Cos[2 t] + Sin[2 t]) + 5 ℯ^(2 π) DiracDelta′[-π + t] -
8 ℯ^(π + t) DiracDelta′[-π + t] + ℯ^(2 t) (3 Cos[2 t] + Sin[2 t]) DiracDelta′[-π + t])
10 HeavisideTheta[π - t] Sin[2 t]
{True, True}
[Plot of the input r(t) and the response y(t), y′(t) on 0 ≤ t ≤ 10.]
Convolution
◆ According to the textbook,
ℒ(f ) ℒ(g) is the transform of the convolution of f and g, denoted by the standard notation f * g
and defined by the integral:
h(t) = (f * g)(t) = ∫_0^t f(τ) g(t - τ) dτ

∫_(-∞)^(+∞) ∫_(-∞)^(+∞) ⋯ f(x1, x2, …) g(y1 - x1, y2 - x2, …) dx1 dx2 ⋯
.
◆ Convolution uses the Convolve command, which is somewhat tricky and requires a bit
of explanation.
◆ The syntax is: Convolve[ first function , second function , dummy variable , final
variable]
Symbol
◆ The only difference between the two is the presence of the Heaviside function multi-
plied onto the result. However, that is fully consistent with the limits on the convolution
integral.
In[ ]:= Convolve[τ * UnitStep[τ], 1 * UnitStep[τ], τ, t]
Out[ ]=
(1/2) t^2 UnitStep[t]

t^2/2
◆ Another example:
In[ ]:= convolve = Convolve[Sin[τ] * UnitStep[τ], Sin[τ] * UnitStep[τ], τ, t]
Out[ ]=
(1/2) (-t Cos[t] + Sin[t]) UnitStep[t]
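The convolution integral itself is easy to check numerically. The sketch below (our own helper `convolve_num`, trapezoidal rule; not part of the Mathematica session) compares the integral definition of (sin * sin)(t) with the closed form (1/2)(sin t - t cos t) at t = 2:

```python
import math

def convolve_num(f, g, t, n=4000):
    # trapezoidal approximation of the convolution integral on [0, t]
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * h
        total += f(tau) * g(t - tau)
    return total * h

t = 2.0
numeric = convolve_num(math.sin, math.sin, t)
exact = 0.5 * (math.sin(t) - t * math.cos(t))
print(abs(numeric - exact))  # trapezoid error, roughly h^2
```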
True
[Plot of the convolution y(t) = (1/2) (-t cos(t) + sin(t)) on 0 ≤ t ≤ 12.]
Summary
After completing this chapter, you should be able to
◼ develop SOPs to solve 1st-/2nd-order ODEs (IVPs) by the Laplace transform method
◼ perform Laplace and inverse Laplace transforms using Wolfram Mathematica
◼ use Mathematica to find the convolution of two functions.
◼ develop the habit of always checking your solutions for quality assurance.
Week 7: Series Solutions of ODEs
How to Use Series to Solve ODEs?
Table of Contents
1. The Series Command in Wolfram Mathematica
1.1. Taylor and Maclaurin Series
1.1.1. Example 7.1
1.1.2. Example 7.2
2. Basic Concepts
2.1. Convergent vs. Divergent Series
2.1.1. Example 7.3
2.1.2. Example 7.4
2.1.3. Example 7.5
2.2. Analytic at Point
3. Solving ODEs by the Power Series Method
3.1. Standard Operating Procedures (SOPs)
3.1.1. Example 7.6
3.1.2. Example 7.7
3.1.3. Example 7.8
3.2. Different Approach: Built-in Function in Wolfram Mathematica
4. Extended Power Series Method: Frobenius Method
4.1. Standard Operating Procedures (SOPs)
4.1.1. Example 7.9
4.1.2. Example 7.10
5. Summary
Commands list
◼ Quit[]
◼ Series[f , {x, x0, n}]
◼ Normal[expr]
Week 7_Series Solutions of ODEs.nb
◼ SeriesCoefficient[series, n]
◼ Sum[expr, {n, nmin, nmax}]
◼ SumConvergence[ f, n]
◼ Factorial[n]
◼ Log[z]
◼ LogicalExpand[expr]
◼ Coefficient[expr, form]
◼ Table[expr, n]
◼ AsymptoticDSolveValue[eqn, f, x x0]
Symbol
Use Series to make a power series out of a function. The first argument is the function. The
second argument has the form {var, pt, order}, where var is the variable, pt is the point around
which to expand, and order is the order:
In[ ]:= ? Series
Out[ ]=
Symbol
Series[f , x x0 ] generates the leading term of a power series expansion for f about the point x = x0 .
Series[f, {x, x0, n_x}, {y, y0, n_y}, …] successively finds series expansions with respect to x, then y, etc.
1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320 + x^9/362880 + x^10/3628800 + O[x]^11
◆ Power series for the function 1/ℯ^x around x = 0:
In[ ]:= Series[1 / Exp[x], {x, 0, 10}]
Out[ ]=
1 - x + x^2/2 - x^3/6 + x^4/24 - x^5/120 + x^6/720 - x^7/5040 + x^8/40320 - x^9/362880 + x^10/3628800 + O[x]^11
◆ Power series for the function 1/x around x = 0:
In[ ]:= Series[1 / x, {x, 0, 10}]
Out[ ]=
1/x + O[x]^11
Log[x] + O[x]^11
1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320 - x^10/3628800 + O[x]^11
1 + x - x^2/2 - x^3/6 + x^4/24 + x^5/120 - x^6/720 - x^7/5040 + x^8/40320 + x^9/362880 - x^10/3628800 + O[x]^11
◆ We may find the derivatives of the power series using the D[ ] command.
In[ ]:= ?D
Out[ ]=
Symbol
D[f, {x, n}, {y, m}, …] gives the multiple partial derivative ∂^m/∂y^m ∂^n/∂x^n f.
D[f, {{x1, x2, …}}] for a scalar f gives the vector derivative (∂f/∂x1, ∂f/∂x2, …).
1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320 + O[x]^9

1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320 + x^9/362880 + O[x]^10
◆ Normal[ ] turns the power series back into an ordinary polynomial expression.
In[ ]:= Normal[s1]
Out[ ]=
1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320 + x^9/362880
◆ We can find the coefficients of the terms in the particular power series by using the
command SeriesCoefficient[ ].
In[ ]:= Table[SeriesCoefficient[s1, n], {n, 0, 9}]
Out[ ]=
{1, 1, 1/2, 1/6, 1/24, 1/120, 1/720, 1/5040, 1/40320, 1/362880}
Symbol
SeriesCoefficient[series, n] finds the coefficient of the n-th-order term in a power series in the form generated by Series.
SeriesCoefficient[f, {x, x0, n}] finds the coefficient of (x - x0)^n in the expansion of f about the point x = x0.
x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6 + x^7/7 + O[x]^8
◆ Note: when we do operations on a power series, the result is computed only to the appro-
priate order of x.
In[ ]:= s1^2
Out[ ]=
1 + 2 x + 2 x^2 + (4 x^3)/3 + (2 x^4)/3 + (4 x^5)/15 + (4 x^6)/45 + (8 x^7)/315 + (2 x^8)/315 + (4 x^9)/2835 + O[x]^10
Example 7.1
Find the Taylor expansion of the given function around x0 = 1 (up to 9th-order terms):
sinh(3 x^2 - 4)
Example 7.2
Find the Maclaurin expansion of the given function (up to 15th-order terms):
log(2 x^3 + 5)
In[ ]:= Series[Log[2 x^3 + 5], {x, 0, 15}] (* a Maclaurin series is a Taylor series with x0 = 0 *)
Log[5] + (2 x^3)/5 - (2 x^6)/25 + (8 x^9)/375 - (4 x^12)/625 + (32 x^15)/15625 + O[x]^16
Basic Concepts
What do we mean by “Convergent vs. Divergent Series” ?
Convergent Series: A series is said to be convergent if it approaches some limit (D’Angelo
and West 2000, p. 259).
Divergent Series: A series which is not convergent.
.
◆ Sum[expr, {n, nmin, nmax}] finds the sum of expr as n goes from nmin to nmax .
In[ ]:= Sum[x^n / (n !)^2, {n, 0, Infinity}]
Out[ ]=
BesselI[0, 2 √x]
Symbol
BesselI[n, z] gives the modified Bessel function of the first kind I n (z).
In[ ]:= Sum[(n !) * x^n/(2 n) !, {n, 0, Infinity}]
Out[ ]=
(1/2) (2 + ℯ^(x/4) √π √x Erf[√x/2])
In[ ]:= Sum[(n !) * x^n/(2 n) !, {n, 1, Infinity}]
Out[ ]=
(1/2) ℯ^(x/4) √π √x Erf[√x/2]
◆ We can also use a built-in function SumConvergence to find out if the series is conver-
gent or divergent.
In[ ]:= ? SumConvergence
Out[ ]=
Symbol
Example 7.3
Test for convergence of the sum:
∑_(n=1)^∞ 1/n
In[ ]:= SumConvergence[1/n, n]
Out[ ]=
False
Example 7.4
Test for convergence of the sum:
∑_(n=1)^∞ (3^n n^2)/n !
In[ ]:= SumConvergence[3^n * n^2/n !, n]
Out[ ]=
True
Example 7.5
Test for convergence of the sum:
∑_(n=1)^∞ 1/n !
Analytic at Point
A function f(x) is analytic at x = 0 if it can be represented by its Maclaurin series there:
∑_(n=0)^∞ (f^(n)(0)/n !) x^n = f(0) + f'(0) x + (f''(0)/2 !) x^2 + ⋯
◆ Not analytic at x = 0.
Series[Log[x], {x, 0, 5}]
Out[ ]=
Log[x] + O[x]^6
◆ Not analytic at x = 0.
Series[Log[1 + x], {x, 0, 5}]
Out[ ]=
x - x^2/2 + x^3/3 - x^4/4 + x^5/5 + O[x]^6
◆ Analytic at x = 0.
Step 1. Assume the solution in the form of a power series:
y = a0 + a1 x + a2 x^2 + a3 x^3 + ⋯ = ∑_(m=0)^∞ a_m x^m
Step 2. Insert the power series of y and the power series of y', y'' (obtained by term-wise differentiation) into the ODE.
y' = a1 + 2 a2 x + 3 a3 x^2 + ⋯ = ∑_(m=1)^∞ m a_m x^(m-1)
Step 3. Equating the coefficient of each power of x to zero, we obtain a system of equations for the coefficients a_m.
.
a1 - a0 = 0 , 2 a2 - a1 = 0 , 3 a3 - a2 = 0 , ⋯ .
.
Step 4. Solving these equations, we may express a1, a2, ... in terms of a0 (for the first-order
ODEs) or a2, a3, ... in terms of a0 and a1 (for the second-order ODEs).
.
a1 = a0 ,  a2 = a1/2 = a0/2! ,  a3 = a2/3 = a0/3! ,  ⋯ .
.
Step 5. With these values of the coefficients, the series solution becomes the familiar general
solution.
y = a0 + a0 x + (a0/2!) x^2 + (a0/3!) x^3 + ⋯ = a0 (1 + x + x^2/2! + x^3/3! + ⋯) = a0 ℯ^x
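The five-step recipe above can be mimicked in a few lines of Python (an illustrative sketch; variable names are our own): the Step 4 recurrence a_(m+1) = a_m/(m+1) reproduces the coefficients a0/m!, and the truncated series converges to a0 e^x.

```python
import math

# Step 4 recurrence for y' - y = 0:  a_{m+1} = a_m / (m + 1)
a0 = 1.0
coeffs = [a0]
for m in range(8):
    coeffs.append(coeffs[-1] / (m + 1))

# Step 5: these are a0 / m!, i.e. the Maclaurin coefficients of a0 * e^x
x = 0.5
partial = sum(c * x**m for m, c in enumerate(coeffs))
print(abs(partial - math.exp(x)))  # tiny truncation error of the 8th-order series
```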
Example 7.6
Find the general solution to the given ODE:
y' - y = 0
◆ Step 1. Define the solution as a Power Series. Here, we omit the terms of p + 1. The
value of p (max value) can be varied.
In[ ]:= p = 8; y = Sum[c[i] x ^ i, {i, 0, p}] + O[x] ^ (p + 1)
Out[ ]=
c[0] + c[1] x + c[2] x^2 + c[3] x^3 + c[4] x^4 + c[5] x^5 + c[6] x^6 + c[7] x^7 + c[8] x^8 + O[x]^9
◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.
In[ ]:= de = D[y, x] - y == 0
Out[ ]=
-c[0] + c[1] == 0 && -c[1] + 2 c[2] == 0 && -c[2] + 3 c[3] == 0 && -c[3] + 4 c[4] == 0 &&
-c[4] + 5 c[5] == 0 && -c[5] + 6 c[6] == 0 && -c[6] + 7 c[7] == 0 && -c[7] + 8 c[8] == 0
Symbol
LogicalExpand[expr] expands out logical combinations of equations, inequalities, and other functions.
◆ Step 4. Solve the equations for the coefficients c[i]. We can also feed equations involving power series directly to Solve[]:
In[ ]:= solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 8}]]
Out[ ]=
{{c[1] → c[0], c[2] → c[0]/2, c[3] → c[0]/6, c[4] → c[0]/24,
c[5] → c[0]/120, c[6] → c[0]/720, c[7] → c[0]/5040, c[8] → c[0]/40320}}
1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320
◆ Summation of Series: The Wolfram System recognizes this as the power series expan-
sion of exp(x).
In[ ]:= Sum[x ^ n / n !, {n, 0, Infinity}]
Out[ ]=
ℯ^x

1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + x^6/720 + x^7/5040 + x^8/40320 + O[x]^9
True
Example 7.7
Find the general solution to the given ODE:
y'' + y = 0
◆ Step 1. Define the solution as a Power Series. Here, we omit the terms of p + 1. The
value of p (max value) can be varied.
In[ ]:= p = 9; y = Sum[c[i] x ^ i, {i, 0, p}] + O[x] ^ (p + 1)
Out[ ]=
c[0] + c[1] x + c[2] x^2 + c[3] x^3 + c[4] x^4 + c[5] x^5 + c[6] x^6 + c[7] x^7 + c[8] x^8 + c[9] x^9 + O[x]^10
◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.
c[0] + 2 c[2] == 0 && c[1] + 6 c[3] == 0 && c[2] + 12 c[4] == 0 && c[3] + 20 c[5] == 0 &&
c[4] + 30 c[6] == 0 && c[5] + 42 c[7] == 0 && c[6] + 56 c[8] == 0 && c[7] + 72 c[9] == 0
◆ Step 4. Solve the equations for the coefficients c[i]. We can also feed equations involving power series directly to Solve[]:
In[ ]:= solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 10}]]
Solve: Equations may not give solutions for all "solve" variables.
Out[ ]=
{{c[2] → -c[0]/2, c[3] → -c[1]/6, c[4] → c[0]/24, c[5] → c[1]/120,
c[6] → -c[0]/720, c[7] → -c[1]/5040, c[8] → c[0]/40320, c[9] → c[1]/362880}}
1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320

1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320 - x^10/3628800 + O[x]^11
◆ Expressing the coefficients in terms of the arbitrary c[0], we get the solution of
y = c0 cos(x).
x - x^3/6 + x^5/120 - x^7/5040 + x^9/362880

x - x^3/6 + x^5/120 - x^7/5040 + x^9/362880 + O[x]^11
◆ Expressing the coefficients in terms of the arbitrary c[1], we get the solution
y = c1 sin(x).
In[ ]:= ysoln = Coefficient[y, c[0]] + Coefficient[y, c[1]]
Out[ ]=
1 + x - x^2/2 - x^3/6 + x^4/24 + x^5/120 - x^6/720 - x^7/5040 + x^8/40320 + x^9/362880

1 + x - x^2/2 - x^3/6 + x^4/24 + x^5/120 - x^6/720 - x^7/5040 + x^8/40320 + x^9/362880 + O[x]^10
True
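The recurrence behind Example 7.7 can also be checked numerically. The sketch below (plain Python; all names are ours) implements c_(m+2) = -c_m/((m+1)(m+2)), which follows from the coefficient equations above, and compares the truncated series (c0 = 1, c1 = 0) with cos(x):

```python
import math

# coefficient recurrence for y'' + y = 0:  c_{m+2} = -c_m / ((m + 1) (m + 2))
def series_coeffs(c0, c1, order):
    c = [c0, c1]
    for m in range(order - 1):
        c.append(-c[m] / ((m + 1) * (m + 2)))
    return c

c = series_coeffs(1.0, 0.0, 12)   # c0 = 1, c1 = 0 should reproduce cos(x)
x = 1.0
partial = sum(ck * x**k for k, ck in enumerate(c))
err = abs(partial - math.cos(x))
print(err)  # truncation error of the 12th-order polynomial
```

Choosing c0 = 0, c1 = 1 instead reproduces the sine series in the same way.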
Example 7.8
Find the general solution to the given ODE:
(y')^2 - y = x
c[0] + c[1] x + c[2] x^2 + c[3] x^3 + c[4] x^4 + c[5] x^5 + c[6] x^6 + c[7] x^7 + c[8] x^8 + O[x]^9
◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.
-c[0] + c[1]^2 == 0 && -1 - c[1] + 4 c[1] c[2] == 0 && -c[2] + 4 c[2]^2 + 6 c[1] c[3] == 0 &&
-c[3] + 12 c[2] c[3] + 8 c[1] c[4] == 0 && 9 c[3]^2 - c[4] + 16 c[2] c[4] + 10 c[1] c[5] == 0 &&
24 c[3] c[4] - c[5] + 20 c[2] c[5] + 12 c[1] c[6] == 0 &&
16 c[4]^2 + 30 c[3] c[5] - c[6] + 24 c[2] c[6] + 14 c[1] c[7] == 0 &&
40 c[4] c[5] + 36 c[3] c[6] - c[7] + 28 c[2] c[7] + 16 c[1] c[8] == 0
◆ Step 4. Solve the equations for the coefficients c[i]. We can also feed equations involving power series directly to Solve[]:
In[ ]:= c[0] = 1; solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 8}]]
Out[ ]=
True
In[ ]:= D[1 + x + x^2/2 - x^3/12 + 5 x^4/96 - 41 x^5/960 + O[x]^6, x]^2 - (1 + x + x^2/2 - x^3/12 + 5 x^4/96 - 41 x^5/960 + O[x]^6) == x
Out[ ]=
x + O[x]^5 == x
Symbol
The same ODE as in Example 7.7, with the corresponding initial conditions:
y''(x) + y(x) = 0,   y(0) = 1, y'(0) = 0
In[ ]:= sol1 = AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, 8}]
Out[ ]=
1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320
In[ ]:= sol2 = AsymptoticDSolveValue[{y''[x] + y[x] == 0, y[0] == 1, y'[0] == 0}, y[x], {x, 0, 16}]
Out[ ]=
[Plot: truncated series solutions with p = 4, 8, 12, 16, 24 against cos(x) on 0 ≤ x ≤ 8; higher truncation orders follow cos(x) farther before diverging.]
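The effect seen in that comparison — longer truncations staying close to cos(x) farther out — can be quantified with a small Python sketch (the helper name `cos_partial` and the sample point x = 3 are our own choices):

```python
import math

def cos_partial(x, p):
    # Maclaurin polynomial of cos(x) keeping terms up to x^p
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(p // 2 + 1))

x = 3.0
err8 = abs(cos_partial(x, 8) - math.cos(x))
err16 = abs(cos_partial(x, 16) - math.cos(x))
print(err8 > err16)  # the longer truncation is far more accurate at x = 3
```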
Step 1. Assume a solution of the form
y(x) = x^r ∑_(m=0)^∞ a_m x^m = x^r (a0 + a1 x + a2 x^2 + ⋯)
.
where the exponent r may be any (real or complex) number (and r is chosen so that a0 ≠ 0).
.
Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and c(x)
must be analytic at x = 0 . If b(x) and c(x) are polynomials we do nothing in this step. The
purpose of this step is to obtain b0 = b(x = 0) and c0 = c(x = 0).
.
b(x) = b0 + b1 x + b2 x2 + ⋯ , c (x ) = c 0 + c 1 x + c 2 x 2 + ⋯
.
Step 3. Obtain the indicial equation: r(r - 1) + b0 r + c0 = 0
Step 4. Solve the indicial equation, and obtain its roots r1 and r2. Depending on the values
of r1 and r2, we have the following three cases:
(i) Distinct roots not differing by an integer;
(ii) Double root r1 = r2 ;
(iii) Roots differing by an integer.
.
y1(x) = x^r1 (a0 + a1 x + a2 x^2 + ⋯)
.
and
.
y2(x) = k y1(x) ln(x) + x^r2 (A0 + A1 x + A2 x^2 + ⋯)
.
where the roots are so denoted that r1 - r2 > 0 and k may turn out to be zero.
.
Example 7.9
x(x - 1) y'' + (3 x - 1) y' + y = 0
◆ Step 1. Rewrite the ODE in the form of x2 y'' + x b(x) y' + c(x) y = 0. Find b(x) and
c ( x ).
.
y'' + (3 x - 1)/(x (x - 1)) y' + 1/(x (x - 1)) y = 0  ⟹  x^2 y'' + x (3 x - 1)/(x - 1) y' + x^2/(x (x - 1)) y = 0
⟹  b(x) = (3 x - 1)/(x - 1),  c(x) = x^2/(x (x - 1))
.
◆ Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and
c(x) must be analytic at x = 0 . If b(x) and c(x) are polynomials we do nothing in this
step. The purpose of this step is to obtain b0 = b(x = 0) and c0 = c(x = 0).
In[ ]:= Series[(3 x - 1)/(x - 1), {x, 0, 5}]
Out[ ]=
1 - 2 x - 2 x^2 - 2 x^3 - 2 x^4 - 2 x^5 + O[x]^6
In[ ]:= Series[x^2/(x (x - 1)), {x, 0, 5}]
Out[ ]=
-x - x^2 - x^3 - x^4 - x^5 + O[x]^6
◆ Step 3. Obtain the indicial equation. With b(0) = 1 and c(0) = 0, we have an indicial
equation r(r - 1) + r = 0.
In[ ]:= Solve[r (r - 1) + r == 0, r]
Out[ ]=
{{r → 0}, {r → 0}}
◆ Step 4. Solving the indicial equation, we obtain its roots r1 = 0 and r2 = 0. This corresponds to Case (ii), a double root.
◆ The indicial root:
In[ ]:= ClearAll[a, r];
In[ ]:= r = 0;
◆ Substitute this series into the given ODE: x(x - 1) y'' + (3 x - 1) y' + y = 0 .
In[ ]:= deq = x * (x - 1) * D[y, {x, 2}] + (3 x - 1) * D[y, {x, 1}] + y == 0
Out[ ]=
{{a[1] → a[0], a[2] → a[0], a[3] → a[0], a[4] → a[0], a[5] → a[0], a[6] → a[0]}}
1 + x + x^2 + x^3 + x^4 + x^5 + x^6
1/(1 - x)
1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + O[x]^7
◆ Now we have obtained one solution to the given ODE, which is given by
y = a[0] ∑_(m=0)^∞ x^m = a[0]/(1 - x)
◆ By choosing a[0] = 1, we have y1 = 1/(1 - x).
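Numerically, the partial sums of this series do approach a0/(1 - x) inside the radius of convergence |x| < 1 (a quick Python sketch; the sample point x = 0.5 and the cutoff of 30 terms are our own choices):

```python
# partial sums of y = a0 * (1 + x + x^2 + ...) versus the closed form a0/(1 - x)
a0 = 1.0
x = 0.5
partial = a0 * sum(x**m for m in range(30))
exact = a0 / (1 - x)
print(abs(partial - exact))  # remaining tail is a0 * x^30 / (1 - x)
```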
◆ We may get a second independent solution y2(x) by using two methods:
{{A[1] → A[0], A[2] → A[0], A[3] → A[0], A[4] → A[0], A[5] → A[0], A[6] → A[0]}}
◆ We can easily see that the second independent solution is y2(x) = ln(x)/(1 - x).
In[ ]:= y1Soln[x_] = 1/(1 - x)
Out[ ]=
1/(1 - x)
True
In[ ]:= y2Soln[x_] = Log[x]/(1 - x)
Out[ ]=
Log[x]/(1 - x)
True
◆ Substitute y2(x) = u(x)/(1 - x), where the form of u(x) is yet to be determined.
In[ ]:= yuSoln[x_] = u[x]/(1 - x)
Out[ ]=
u[x]/(1 - x)
t[x] + x t′[x] == 0
{{u[x] → 1 + Log[x]}}
◆ Thus we have y2(x) = u(x)/(1 - x) = (1 + ln(x))/(1 - x); dropping the multiple of y1 leaves y2(x) = ln(x)/(1 - x).
◆ y1(x) = 1/(1 - x) and y2(x) = ln(x)/(1 - x) are linearly independent and thus form a basis of solutions of the given ODE.
In[ ]:= yGen = c1 * 1/(1 - x) + c2 * Log[x]/(1 - x);
FullSimplify[x * (x - 1) * D[yGen, {x, 2}] + (3 x - 1) * D[yGen, x] + yGen == 0]
True
Example 7.10
(x^2 - x) y'' - x y' + y = 0
◆ Step 1. Rewrite the ODE in the form of x^2 y'' + x b(x) y' + c(x) y = 0. Find b(x) and c(x).
y'' + (-x)/(x (x - 1)) y' + 1/(x (x - 1)) y = 0  ⟹  x^2 y'' + x (-x)/(x - 1) y' + x^2/(x (x - 1)) y = 0
⟹  b(x) = -x/(x - 1),  c(x) = x^2/(x (x - 1))
b(0) = 0;  c(0) = 0
.
◆ Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and
c(x) must be analytic at x = 0 . b(x) and c(x) are already polynomials so we do nothing
in this step.
◆ Step 3. Obtain the indicial equation. With b(0) = 0 and c(0) = 0, we have the indicial equation r(r - 1) = 0.
In[ ]:= ClearAll[r]; Solve[r (r - 1) == 0, r]
Out[ ]=
{{r → 0}, {r → 1}}
◆ Step 4. Solving the indicial equation, we obtain the roots 0 and 1. This corresponds to Case (iii), roots differing by an integer. Notice that in this case we need r1 > r2 in order to follow the recipe of the Frobenius method, so we set r1 = 1 and r2 = 0.
◆ For the first solution, we have r = r1 = 1. Based on the recipe of the Frobenius method,
we have:
In[ ]:= r = 1;
k = 6;
y = x ^ r (Sum[ a[n] x ^ n, {n, 0, k} ] + O[x] ^ (k + 1))
Out[ ]=
◆ Substitute this series into the given ODE: (x^2 - x) y'' - x y' + y = 0 .
In[ ]:= deq = x * (x - 1) D[y, {x, 2}] - x * D[y, {x, 1}] + y == 0
Out[ ]=
◆ Write the equations that the coefficients must satisfy:
In[ ]:= coeffEqns = LogicalExpand[ deq ]
Out[ ]=
a[0] x + O[x]^8
True
◆ Let’s find the second independent solution y2(x) by the method of Reduction of Order.
In[ ]:= yuSoln[x_] = u[x] * x;
uODE = FullSimplify[myODE /. y → yuSoln]
Out[ ]=
◆ Let's take the constant of integration equal to -1. So we have t(x) = -(1 - x)/x^2 = u'(x), from which we can find u[x].
True
◆ y1(x) = x and y2(x) = 1 + x ln(x) are linearly independent and y2(x) has a logarithmic
term, thus they constitute a basis of solutions for positive x.
In[ ]:= yGen = c1 * x + c2 * (1 + x * Log[x]);
FullSimplify[(x^2 - x) * D[yGen, {x, 2}] - x * D[yGen, x] + yGen == 0]
True
Summary
After completing this chapter, you should be able to
◼ use Mathematica to find the power series representation of a function.
◼ recognise and work with some higher transcendental functions of mathematics.
◼ develop SOPs to solve ODEs by the power series method.
◼ develop SOPs to solve ODEs by the Frobenius method.
◼ develop the habit of always checking your solutions for quality assurance.
Week 8: Systems of Linear Equations
How to solve systems of linear equations?
Table of Contents
1. Solving the Systems of Linear Equations
1.1. Example 8.1: The Solve Command
1.2. Example 8.2: The LinearSolve Command
1.3. Example 8.3: Gaussian Elimination
1.4. Example 8.4: Gauss-Jordan Elimination
2. Summary
Commands list
◼ Column
◼ Solve
◼ MatrixForm
◼ FullSimplify
◼ LinearSolve
◼ ArrayFlatten
◼ Normal
◼ CoefficientArrays
◼ RowReduce
Week 8_Systems of Linear Equations.nb
a11 x1 + a12 x2 + a13 x3 + ⋯ + a1n xn = b1
⋮
am1 x1 + am2 x2 + am3 x3 + ⋯ + amn xn = bm
.
Find all solutions to the consistent system of linear equations using the Solve command:
x1 + 2 x2 - x3 + 3 x5 = 7
x2 - 4 x3 + x5 = - 2
x4 - 2 x5 = 1
◆ Step 2. Set leading and free variable. In this system, x1, x2, x4 are leading variables
and x3, x5 are free variables. Therefore,
In[ ]:= x3 = s1 ;
x5 = s2 ;
◆ Step 3. Looking at the given system of equations, it is apparent that the easiest way to
start is at the bottom. Therefore, apply back substitution method.
Symbol
Solve[expr, vars] attempts to solve the system expr of equations or inequalities for the variables vars.
Solve[expr, vars, dom] solves over the domain dom. Common choices of dom are Reals, Integers, and Complexes.
In[ ]:= Solve[x4 - 2 x5 == 1, x4]
Out[ ]=
{{x4 → 1 + 2 s2}}
◆ So, x4 = 1 + 2 s2. Solving the next equation up for x2 by substituting values for x3 and x5
gives:
In[ ]:= x4 = 1 + 2 s2 ;
Solve[x2 - 4 x3 + x5 == -2, x2]
Out[ ]=
{{x2 → -2 + 4 s1 - s2}}
◆ So, x2 = -2 + 4 s1 - s2. Finally, substituting x2, x3 and x5 into the top equation gives:
In[ ]:= x2 = - 2 + 4 s1 - s2 ;
Solve[x1 + 2 x2 - x3 + 3 x5 == 7, x1]
Out[ ]=
{{x1 → 11 - 7 s1 - s2}}
◆ So, x1 = 11 - 7 s1 - s2 .
◆ Step 4. Verify the solution.
In[ ]:= Clear[x1, x2, x4]
soln = Solve[sys, {x1, x2, x4}]
Out[ ]=
{{x1 → 11 - 7 s1 - s2, x2 → -2 + 4 s1 - s2, x4 → 1 + 2 s2}}
◆ Hence, the general solution for given system of linear equations is:
x1 = 11 - 7 s1 - s2
x2 = -2 + 4 s1 - s2
x3 = s1
x4 = 1 + 2 s2
x5 = s2
where s1 and s2 are free parameters and can be any real numbers. Remember that each distinct choice of the
free parameters gives a new particular solution, so the given system has infinitely many solutions.
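That claim — every choice of s1, s2 yields a solution — is easy to spot-check in Python (an illustrative sketch, not part of the Mathematica session; the test values of s1 and s2 are arbitrary):

```python
# general solution of Example 8.1 with free parameters s1, s2
def solution(s1, s2):
    return (11 - 7*s1 - s2, -2 + 4*s1 - s2, s1, 1 + 2*s2, s2)

for s1, s2 in [(0.0, 0.0), (1.5, -2.0), (-3.0, 7.0)]:
    x1, x2, x3, x4, x5 = solution(s1, s2)
    assert abs(x1 + 2*x2 - x3 + 3*x5 - 7) < 1e-12   # equation 1
    assert abs(x2 - 4*x3 + x5 + 2) < 1e-12          # equation 2
    assert abs(x4 - 2*x5 - 1) < 1e-12               # equation 3
print("all three equations hold for every tested parameter choice")
```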
◆ In a system of linear equations the coefficients change, but the variables do not. Therefore, the
coefficients can simply be transferred to a matrix, which can be thought of as a rectangular
table of numbers.
◆ Step 1. Construct the coefficient matrix of the given system.
ClearAll["Global`*"]
A = {{1, - 2, 1}, {0, 2, - 8}, {- 4, 5, 9}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -2 1
0 2 -8
-4 5 9
◆ Step 2. Construct a column matrix that contains all the constant terms on the right-hand-
side of each equation.
In[ ]:= b = {0, 8, - 9}; MatrixForm[b]
Out[ ]//MatrixForm=
0
8
-9
Symbol
In[ ]:= LinearSolve[A, b]
Out[ ]=
{29, 16, 3}
◆ Hence, the unique solution for given consistent system of linear equations is:
x = 29
y = 16
z=3
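What LinearSolve does here can be sketched in Python with Gaussian elimination over exact rationals (the function `linear_solve` is our own toy implementation, not the Wolfram algorithm):

```python
from fractions import Fraction

def linear_solve(A, b):
    # Gaussian elimination with partial pivoting over exact rationals;
    # returns the unique solution of A x = b (assumes A is nonsingular)
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[1, -2, 1], [0, 2, -8], [-4, 5, 9]]
b = [0, 8, -9]
print(linear_solve(A, b))  # → [Fraction(29, 1), Fraction(16, 1), Fraction(3, 1)]
```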
◆ Augmented matrix:
In[ ]:= AugMat = {{2, - 2, - 6, 1, 3}, {- 1, 1, 3, - 1, - 3}, {1, - 2, - 1, 1, 2}}; AugMat // MatrixForm
Out[ ]//MatrixForm=
2 -2 -6 1 3
-1 1 3 -1 -3
1 -2 -1 1 2
.
where the leading term of a row is defined as the leftmost nonzero term in that row.
.
◆ Step 2. Apply elementary row operations to transform the augmented matrix to row
echelon form.
◆ Identify pivot position for Row 1. R1 ↔ R3
◆ Eliminate the coefficients down the first column below the pivot position, a21 and a31,
by transforming them to zero. R1 + R2 → R2 and - 2 R1 + R3 → R3.
In[ ]:= AugMat〚2〛 = AugMat〚1〛 + AugMat〚2〛;
AugMat〚3〛 = - 2 * AugMat〚1〛 + AugMat〚3〛; AugMat // MatrixForm
Out[ ]//MatrixForm=
1 -2 -1 1 2
0 -1 2 0 -1
0 2 -4 -1 -1
◆ Eliminate the coefficient of a32 below the pivot position in the second column:
2 R2 + R3 → R3 .
In[ ]:= AugMat〚3〛 = 2 * AugMat〚2〛 + AugMat〚3〛; AugMat // MatrixForm
Out[ ]//MatrixForm=
1 -2 -1 1 2
0 -1 2 0 -1
0 0 0 -1 -3
◆ Apply the back-substitution procedure from Example 8.1. Here, x3 is a free variable.
In[ ]:= Solve[sys /. x3 → s1 , {x1, x2, x4}]
Out[ ]=
{{x1 → 1 + 5 s1 , x2 → 1 + 2 s1 , x4 → 3}}
{1, 1, 0, 3}
◆ Hence, the given system of linear equations is consistent and has a general solution of:
x1 = 1 + 5 s1
x2 = 1 + 2 s1
x3 = s1
x4 = 3
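As with Example 8.1, the parametric answer can be spot-checked by direct substitution; a plain-Python sketch (the equations are read off the rows of the augmented matrix in Step 1):

```python
# Rows of [A | b]: 2x1 - 2x2 - 6x3 + x4 = 3, -x1 + x2 + 3x3 - x4 = -3,
# x1 - 2x2 - x3 + x4 = 2, with x3 = s1 free and x4 = 3.
for s1 in (-3, 0, 2, 5):
    x1, x2, x3, x4 = 1 + 5*s1, 1 + 2*s1, s1, 3
    assert 2*x1 - 2*x2 - 6*x3 + x4 == 3
    assert -x1 + x2 + 3*x3 - x4 == -3
    assert x1 - 2*x2 - x3 + x4 == 2
print("general solution verified for all sampled s1")
```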
◆ Coefficient matrix A:
In[ ]:= A = Normal[CoefficientArrays[sys, {x, y, z}]]〚2〛; MatrixForm[A]
Out[ ]//MatrixForm=
5 2 11
7 3 4
12 5 15
◆ Augmented matrix:
In[ ]:= AugMat = ArrayFlatten[{{A, b}}]; MatrixForm[AugMat]
Out[ ]//MatrixForm=
5 2 11 4
7 3 4 1
12 5 15 6
.
In[ ]:= AugMat〚1〛 = (1 / 5) * AugMat〚1〛;
AugMat〚2〛 = - 7 * AugMat〚1〛 + AugMat〚2〛;
AugMat〚3〛 = - 12 * AugMat〚1〛 + AugMat〚3〛;
AugMat〚2〛 = 5 * AugMat〚2〛;
AugMat〚1〛 = AugMat〚1〛 - (2 / 5) * AugMat〚2〛;
AugMat〚3〛 = - (1 / 5) * AugMat〚2〛 + AugMat〚3〛;
AugMat〚1〛 = AugMat〚1〛 - 10 * AugMat〚3〛;
AugMat〚2〛 = AugMat〚2〛 + 23 * AugMat〚3〛;
AugMat // MatrixForm
Out[ ]//MatrixForm=
1 0 25 0
0 1 - 57 0
0 0 0 1
◆ The matrix is now in reduced row echelon form. But the third row reveals an inconsistency:
since 0 = 1 is false, the given system of equations has no solution.
◆ Step 4. Verify the solution.
In[ ]:= RREF = RowReduce[ArrayFlatten[{{A, b}}]]; RREF // MatrixForm
Out[ ]//MatrixForm=
1 0 25 0
0 1 - 57 0
0 0 0 1
Out[ ]=
True
◆ Considering the last row, the system is inconsistent, i.e., it has no solution.
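The same conclusion follows from ranks (Rouché-Capelli): the system is consistent exactly when rank(A) = rank([A | b]). A NumPy sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[5, 2, 11],
              [7, 3, 4],
              [12, 5, 15]], dtype=float)
b = np.array([[4], [1], [6]], dtype=float)

# Consistent iff rank(A) == rank([A | b]); here 2 != 3, so no solution
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
assert (rank_A, rank_Ab) == (2, 3)
print("rank(A) =", rank_A, " rank([A|b]) =", rank_Ab, "-> inconsistent")
```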
Summary
After completing this chapter, you should be able to
◼ develop SOPs to solve systems of linear equations using Wolfram Mathematica.
◼ be familiar with the list form and matrix form representations of data in Mathematica.
Week 9: Matrix Operations and Inverse
Properties of Matrix Operations and Inverse
Table of Contents
1. Properties of Matrix Algebra
1.1. Example 9.1: Matrix Addition and Scalar Multiplication
1.2. Example 9.2: Matrix Multiplication
1.3. Example 9.3: Transpose of a Matrix
2. Example 9.4: Inverse of a Matrix
3. Summary
Commands list
◼ Table
◼ Dimensions
◼ ConstantArray
◼ RandomInteger
◼ RandomReal
◼ Do
◼ For
◼ Sum
◼ SeedRandom
◼ UpperTriangularize
◼ LowerTriangularize
◼ Dot
◼ Transpose
◼ Inverse
Language also has commands for creating diagonal matrices, constant matrices, and other
special matrix types.
Let A, B, and C be n × m matrices, and let s and t be scalars. Then addition and scalar multiplication of matrices are defined as follows:
Construct two 3 × 3 matrices and prove one of the properties below by performing the
corresponding matrix operations.
.
(b) s (A + B) = sA + sB
(c) (s + t) A = sA + tA
(d) (A + B) + C = A + (B + C )
(e) (st) A = s (tA)
(f) A + 0nm = A
.
Note that two matrices are equal if and only if they have the same dimensions and their corresponding
entries are equal.
True
{3, 3}
◆ Initially, the new matrix sumAB will be a zero matrix and its elements will be replaced later.
◆ Step 5. Find the scalar multiples of A and B, sA and sB using a for loop.
◆ Define new zero matrices elements of which will be replaced later.
sA = ConstantArray[0, dim];
sB = ConstantArray[0, dim];
◆ Step 7. Check and verify the 2nd property of matrix addition and scalar multiplication.
In[ ]:= FullSimplify[RHS == LHS]
Out[ ]=
True
True
True
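The property being verified is entrywise, so any numeric environment can replicate the check. A NumPy sketch (the sample matrices are a hypothetical stand-in; any 3 × 3 integer matrices work):

```python
import numpy as np

rng = np.random.default_rng(2024)  # seeded so the check is reproducible
A = rng.integers(-9, 10, size=(3, 3))
B = rng.integers(-9, 10, size=(3, 3))
s = 5

# Property (b): s (A + B) == s A + s B holds entry by entry
assert np.array_equal(s * (A + B), s * A + s * B)
print("s(A + B) == sA + sB verified")
```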
which is an m × p matrix.
cij = ai1 b1j + ai2 b2j + · · · + ain bnj = ∑k=1..n aik bkj
Note that for AB to exist, the number of columns of A must equal the number of rows of B.
.
Construct two matrices of dimensions 4 × 3 and 3 × 4, respectively, with random entries and
prove one of the properties below by performing the corresponding matrix operations.
.
(b) A (B +C ) = AB +AC
(c) (A + B) C = AC + BC
(d) s (AB) = (sA) B = A (sB)
(e) AI = A
(f) AB ≠ BA (even if the matrices have compatible dimensions)
.
◆ Initially, the new matrix AB will be a zero matrix and its elements will be replaced later.
In[ ]:= AB = ConstantArray[0, {4, 4}];
In[ ]:= AB ≠ BA
Out[ ]=
True
True
True
True
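Non-commutativity of the matrix product is easy to reproduce elsewhere too; a NumPy sketch with the same 4 × 3 and 3 × 4 shapes (the random matrices are hypothetical sample data):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.integers(-5, 6, size=(4, 3))
B = rng.integers(-5, 6, size=(3, 4))

AB = A @ B  # 4 x 4
BA = B @ A  # 3 x 3

# Here AB and BA do not even share dimensions, so AB != BA trivially;
# for square matrices equality still fails in general.
assert AB.shape == (4, 4) and BA.shape == (3, 3)
print("AB and BA have different shapes:", AB.shape, BA.shape)
```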
Construct upper/lower triangular matrix or matrices with random entries and prove one of the
properties below by performing corresponding matrix operations.
.
(b) (A + B)T = AT + BT
(c) (sA)T = sAT
(d) (AC )T = C T AT
Similarly, an n × n matrix A is lower triangular if the terms above the diagonal are all zero.
.
a11 0 0 ⋯ 0
a21 a22 0 ⋯ 0
A = a31 a32 a33 ⋯ 0
⋮ ⋮ ⋮ ⋱ ⋮
an 1 an 2 an 3 ⋯ ann
.
◆ Step 2. Verify that (AT)T = A.
In[ ]:= At = Transpose [A]; At // MatrixForm
Out[ ]//MatrixForm=
-5 0 0 0
- 10 - 7 0 0
- 3 - 10 - 7 0
- 10 - 10 - 2 6
True
True
True
     1 0 0 ⋯ 0
     0 1 0 ⋯ 0
In = 0 0 1 ⋯ 0
     ⋮ ⋮ ⋮ ⋱ ⋮
     0 0 0 ⋯ 1
.
Construct invertible matrix or matrices with random entries and prove one of the properties
below.
.
(b) If A and B are both invertible n × n matrices, then (AB)-1 = B-1 A-1 .
(c) If A is invertible, then (AT)-1 = (A-1)T .
◆ Step 2. Check whether this matrix is invertible or not. The matrix is invertible since its
determinant is nonzero.
In[ ]:= Det[A] ≠ 0
Out[ ]=
True
◆ On the left-hand side is the identity matrix and on the right-hand side is the inverse
matrix. Thus, we find that A-1 is:
In[ ]:= InvA = RREF〚All, 4 ;; 6〛; InvA // MatrixForm
Out[ ]//MatrixForm=
- 1.04865 - 1.80522 2.39039
0.505178 1.36229 - 1.23333
1.14102 1.67668 - 2.3007
◆ Step 4. Find (A-1)-1 .
◆ Augmented matrix, [A-1 | I3 ].
In[ ]:= AugMat2 = ArrayFlatten[{{InvA, IdentityMatrix[3]}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
- 1.04865 - 1.80522 2.39039 1 0 0
0.505178 1.36229 - 1.23333 0 1 0
1.14102 1.67668 - 2.3007 0 0 1
◆ The right-hand side is the inverse matrix. Thus, (A-1)-1 is:
In[ ]:= RHS = RREF2〚All, 4 ;; 6〛; RHS // MatrixForm
Out[ ]//MatrixForm=
8.17389 1.1142 7.89526
1.87803 2.41361 0.657388
5.42247 2.31155 3.96006
True
True
True
Summary
After completing this chapter, you should be able to
◼ perform various standard matrix operations using Mathematica.
◼ write simple programs that involve looping and conditional expressions in Mathematica.
◼ generate random numbers in Mathematica.
Week 10: LU Factorization and Determinants
Applications of Matrix Operations and Determinants
Table of Contents
1. Example 10.1: The LU Factorization
1.1. Method 1: LU Factorization using Row Operations
1.2. Method 2: LUDecomposition Command
2. Example 10.2: Determinant and Its Properties | Part 1
2.1. Method 1: The Shortcut Method
2.2. Method 2: The Cofactor Expansion
3. Example 10.3: Determinant and Its Properties | Part 2
3.1. Method 3: Row Operations to Compute the Determinant
4. Example 10.4: Applications of the Determinant
4.1. Cramer's Rule
4.2. Inverses from Determinants
5. Summary
Commands list
◼ LUDecomposition
◼ UpperTriangularize
◼ LowerTriangularize
◼ Det
◼ Times
◼ Diagonal
◼ Reverse
◼ Transpose
◼ Inverse
◼ Minors
◼ Cofactor
A = LU
.
where L is a lower triangular matrix with 1’s on the diagonal, and U is an upper triangular matrix in
echelon form.
◆ Step 1. Define coefficient matrix A and matrix b from the system expressed as Ax = b.
In[ ]:= ClearAll["Global`*"]
A = {{1, - 1, - 2}, {- 3, 2, 1}, {6, 11, - 2}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -1 -2
-3 2 1
6 11 - 2
◆ Obtain U by the process of row reduction of A, and build up L one column at a time as
we transform A to echelon form. For this, first construct a lower triangular matrix L1,
where the • symbol represents a matrix entry that has not yet been determined.
In[ ]:= L1 = ConstantArray["•", {3, 3}]; L1 // MatrixForm
Out[ ]//MatrixForm=
• • •
• • •
• • •
◆ Take the first column of A, divide each entry by the pivot (1), and use the resulting val-
ues to form the first column of L1.
In[ ]:= L1〚All, 1〛 = A〚All, 1〛 / 1; L1 // MatrixForm
Out[ ]//MatrixForm=
1 • •
- 3 • •
6 • •
◆ Take the second column of A, starting from the pivot entry (-1) down, and divide each
entry by the pivot. Use the resulting values to form the lower portion of the second col-
umn of L1.
In[ ]:= L1〚2 ;; 3, 2〛 = U1〚2 ;; 3, 2〛 / (- 1); L1 // MatrixForm
Out[ ]//MatrixForm=
1 • •
- 3 1 •
6 - 17 •
◆ Now we have finished with U1. The original matrix is in echelon form and upper
triangular.
◆ Finish filling in L1. Since L1 must be unit lower triangular, we put a 1 in the lower right
corner and fill in the remaining entries with 0’s.
In[ ]:= L1〚3, 3〛 = 1;
L1〚1, 2〛 = 0;
L1〚1 ;; 2, 3〛 = 0; L1 // MatrixForm
Out[ ]//MatrixForm=
1 0 0
-3 1 0
6 - 17 1
True
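The factorization found above can be confirmed with plain matrix multiplication in any environment. A NumPy sketch, with U taken as the echelon form reached by the same row operations (R2 + 3 R1, R3 - 6 R1, then R3 + 17 R2):

```python
import numpy as np

A = np.array([[1, -1, -2],
              [-3, 2, 1],
              [6, 11, -2]])

L = np.array([[1, 0, 0],      # multipliers recorded column by column above
              [-3, 1, 0],
              [6, -17, 1]])
U = np.array([[1, -1, -2],    # echelon form of A
              [0, -1, -5],
              [0, 0, -75]])

assert np.array_equal(L @ U, A)
print("A == L U verified")
```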
◆ Define the permutation vector. The permutation vector indicates that the rows were
interchanged while factoring the matrix.
In[ ]:= p
Out[ ]=
{1, 2, 3}
True
◆ Step 3. Solve the given system of linear equations using the result of one of the methods
above.
◆ Since we have verified that A = LU, the system can be written as LUx = b. The first step
is to denote y = Ux, so that our system can be expressed as Ly = b.
◆ Solve the equation Ly = b:
In[ ]:= y = LinearSolve[L2, b〚p〛]
Out[ ]=
{- 49/25, 11/15, - 176/75}
True
det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a11 a23 a32 - a12 a21 a33 - a13 a22 a31
5. If A has a row (or column) of zeros or if A has two identical rows (or columns), then det(A) = 0.
6. det(AB) = det(A)det(B)
7. Let A be an invertible matrix; then det(A-1) = 1 / det(A).
Note that the Shortcut Method does not work for matrices larger than 3 × 3.
-3 1 2
5 5 -8
4 2 -5
◆ Step 2. Duplicate the first two columns of the matrix to the right of the third column of
the original matrix.
In[ ]:= extraColumns = {A1〚All, 1〛, A1〚All, 2〛};
newA1 = Transpose[Join[Transpose[A1], extraColumns]]; newA1 // MatrixForm
Out[ ]//MatrixForm=
-3 1 2 -3 1
5 5 -8 5 5
4 2 -5 4 2
◆ Step 3.1. Now, for each diagonal arrow multiply terms and then add or subtract based on
the labels.
In[ ]:= detA1 =
A1〚1, 1〛 * A1〚2, 2〛 * A1〚3, 3〛 +
A1〚1, 2〛 * A1〚2, 3〛 * A1〚3, 1〛 +
A1〚1, 3〛 * A1〚2, 1〛 * A1〚3, 2〛 -
A1〚1, 3〛 * A1〚2, 2〛 * A1〚3, 1〛 -
A1〚1, 1〛 * A1〚2, 3〛 * A1〚3, 2〛 -
A1〚1, 2〛 * A1〚2, 1〛 * A1〚3, 3〛
Out[ ]=
0
◆ Here the Diagonal command extracts the elements along a given diagonal, Reverse
reverses the rows so that the back (anti-)diagonals can be read off, and Times (applied
with @@) forms the product of the elements of each diagonal.
In[ ]:= detA13 = 0;
For[i = 0, i ≤ 2, i ++,
detA13 = detA13 + Times @@ Diagonal[newA1, i] - Times @@ Diagonal[Reverse[newA1], i]];
Print[detA13]
True
Out[ ]=
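The same diagonal-walking trick can be sketched in NumPy, where `np.diagonal` plays the role of Diagonal and reversing the rows exposes the back diagonals:

```python
import numpy as np

A1 = np.array([[-3, 1, 2],
               [5, 5, -8],
               [4, 2, -5]])

# Step 2 again: append the first two columns on the right
ext = np.hstack([A1, A1[:, :2]])

# Sarrus: add forward-diagonal products, subtract back-diagonal products
det = 0
for i in range(3):
    det += np.prod(np.diagonal(ext, offset=i))
    det -= np.prod(np.diagonal(ext[::-1], offset=i))

assert det == round(np.linalg.det(A1))  # both give 0 for this matrix
print("det =", det)
```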
First of all, let’s focus on the concepts of minor and cofactor, which are the basis of the Cofactor
Expansion Method.
.
matrix that results from deleting the ith row and jth column of the original n × n matrix.
(a) det(A) = ai1 Ci1 +ai2 Ci2 +⋯ + ain Cin (Expand across row i)
.
◆ Use a for loop and the formula of column expansion given above to calculate the
det(A2.1).
minorA2 = {{m11}, {m21}, {m31}};
detA21 = 0;
For[i = 1, i ≤ 3, i ++,
 For[j = 1, j < 2, j ++,
  detA21 = detA21 + A2〚i, j〛 * (- 1)^(i + j) * minorA2〚i, j〛]]
detA21
Out[ ]=
(- a1,3 a2,2 + a1,2 a2,3 ) a3,1 - a2,1 (- a1,3 a3,2 + a1,2 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )
◆ Calculate the det(A2.2) using the formula for row expansion given above.
cofactor = Transpose[{{C11}, {C12}, {C13}}];
detA22 = 0;
For[i = 1, i < 2, i ++,
For[j = 1, j ≤ 3, j ++,
detA22 = detA22 + A2〚i, j〛 * cofactor〚i, j〛]]
detA22
Out[ ]=
a1,3 (- a2,2 a3,1 + a2,1 a3,2 ) + a1,2 (a2,3 a3,1 - a2,1 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )
(- a1,3 a2,2 + a1,2 a2,3 ) a3,1 + a2,1 (a1,3 a3,2 - a1,2 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )
True
True
True
True
◆ Step 2. Convert the matrix A to echelon form using elementary row operations and
reduce it to triangular form.
◆ Further steps are separated to keep track of the effect of the row operations on the determi-
nant.
(1/4) R1 → R1 | det(A) = 4 det(A1 )
In[ ]:= A1 = A;
A1〚1〛 = (1 / 4) * A1〚1〛; A1 // MatrixForm
Out[ ]//MatrixForm=
1 2 - 5/2 - 3/2
- 3 - 4 - 3 0
1 - 4 0 2
- 1 7 - 10 - 1
R2 + 3 R1 → R2 | det(A1 ) = det(A2 )
R3 - R1 → R 3 | det(A2 ) = det(A3 )
R4 + R 1 → R4 | det(A3 ) = det(A4 )
(1/2) R2 → R2 | det(A4 ) = 2 det(A5 )
R3 + 6 R2 → R3 | det(A5 ) = det(A6 )
R4 - 9 R2 → R4 | det(A6 ) = det(A7 )
- (1/29) R3 → R3 | det(A7 ) = - 29 det(A8 )
R4 - (139/4) R3 → R4 | det(A8 ) = det(A9 )
◆ Step 3. Since A9 is a triangular form of A, find its determinant using the property of
triangular matrices: the determinant of a triangular matrix equals the product of its
diagonal entries (7th property from the previous example).
In[ ]:= detA9 = 1;
For[i = 1, i ≤ 4, i ++,
detA9 = detA9 * A9〚i, i〛];
detA9
Out[ ]=
669/116
True
◆ Step 4. Substitute the value of det(A9) and find det(A) using a back substitution.
In[ ]:= detA = 4 detA1;
detA1 = detA2;
detA2 = detA3;
detA3 = detA4;
detA4 = 2 detA5;
detA5 = detA6;
detA6 = detA7;
detA7 = - 29 detA8;
detA8 = detA9;
Print[detA]
- 1338
True
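Since A1 differs from A only in the scaled first row, A itself can be recovered by undoing (1/4) R1 → R1, which allows a numeric check of the final value. A NumPy sketch (the only assumption is that A is reconstructed this way from the A1 shown in Step 2):

```python
import numpy as np

# A reconstructed by multiplying the first row of A1 back by 4
A = np.array([[4, 8, -10, -6],
              [-3, -4, -3, 0],
              [1, -4, 0, 2],
              [-1, 7, -10, -1]], dtype=float)

# det(A) = 4 * 2 * (-29) * det(A9) = -232 * (669/116) = -1338
assert round(np.linalg.det(A)) == -1338
print("det(A) =", round(np.linalg.det(A)))
```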
xi = det(Ai ) / det(A) for i = 1, 2, … , n
◆ Step 3. Replace the 1st, 2nd, and 3rd column values with the values of the answer
column b and construct three new matrices, respectively.
In[ ]:= A1 = A;
A1〚All, 1〛 = b; A1 // MatrixForm
Out[ ]//MatrixForm=
0 -2 1
8 2 -8
-9 5 9
In[ ]:= A2 = A;
A2〚All, 2〛 = b; A2 // MatrixForm
Out[ ]//MatrixForm=
1 0 1
0 8 -8
-4 -9 9
In[ ]:= A3 = A;
A3〚All, 3〛 = b; A3 // MatrixForm
Out[ ]//MatrixForm=
1 -2 0
0 2 8
-4 5 -9
58
32
◆ Step 5. Find the solution using the formula for Cramer’s rule.
In[ ]:= x1 = detA1 / detA
Out[ ]=
29
In[ ]:= x2 = detA2 / detA
Out[ ]=
16
In[ ]:= x3 = detA3 / detA
Out[ ]=
3
{29, 16, 3}
True
{29, 16, 3}
True
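Cramer's rule itself is only a few lines in any matrix library; a NumPy sketch of Steps 3-5:

```python
import numpy as np

A = np.array([[1, -2, 1],
              [0, 2, -8],
              [-4, 5, 9]], dtype=float)
b = np.array([0, 8, -9], dtype=float)

x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                              # replace column i with b
    x.append(np.linalg.det(Ai) / np.linalg.det(A))

assert np.allclose(x, [29, 16, 3])
print("x =", [round(v) for v in x])
```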
3 1 0
-1 2 1
0 -1 2
.
A-1 = (1 / det(A)) adj(A)
17
5/17 - 2/17 1/17
2/17 6/17 - 3/17
1/17 3/17 7/17
True
True
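The adjugate construction can be replicated with cofactors computed from minors; a NumPy sketch (the double loop mirrors the definition adj(A) as the transpose of the cofactor matrix):

```python
import numpy as np

A = np.array([[3, 1, 0],
              [-1, 2, 1],
              [0, -1, 2]], dtype=float)

detA = round(np.linalg.det(A))  # 17

# adj(A) is the transpose of the cofactor matrix
adj = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)

invA = adj / detA
assert detA == 17
assert np.allclose(invA, np.linalg.inv(A))
print("A^-1 == adj(A)/det(A) verified")
```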
Summary
After completing this chapter, you should be able to
◼ perform LU factorization of a matrix in Mathematica
◼ find the determinant of a matrix in Mathematica
◼ practice applications of determinants in Mathematica.
◼ develop the habit of always checking your solutions for quality assurance.
Week 11: Eigenvalues and Eigenvectors
Determination and applications of eigenvalues and eigenvectors
Table of Contents
1. Example 11.1: Characteristic Polynomial and Equation
2. Example 11.2: Multiplicity of an Eigenvalue
3. Example 11.3: Diagonalization
3.1. Non-Diagonalizable Matrix
3.2. Diagonalizable Matrix
4. Example 11.4: Matrix Powers
5. Summary
Commands list
◼ NullSpace
◼ CharacteristicPolynomial
◼ Eigenvalues
◼ Eigenvectors
◼ Eigensystem
◼ Factor
◼ DiagonalizableMatrixQ
◼ Diagonal
◼ Power
◼ MatrixPower
Note that an eigenvalue λ can be zero, but an eigenvector u must be a nonzero vector.
◆ Step 1. Define the matrix A.
In[ ]:= ClearAll["Global`*"]
A = {{1, - 3, 3}, {2, - 2, 2}, {2, 0, 0}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -3 3
2 -2 2
2 0 0
where the polynomial from det(A - λ In ) is called the characteristic polynomial of A, and the equation
det(A - λ In ) = 0 is called the characteristic equation.
.
◆ Form a new matrix by subtracting λ from the diagonal elements of A, such that Poly = A - λ I3 .
In[ ]:= Poly = A - λ * IdentityMatrix[3]; Poly // MatrixForm
Out[ ]//MatrixForm=
1-λ -3 3
2 -2 - λ 2
2 0 -λ
◆ The eigenvalues for a matrix A are given by the roots of the characteristic equation.
2 λ - λ² - λ³ == 0
(A - λ In ) u = 0
.
◆ Step 3.1. Find the eigenvectors associated with λ1 = - 2 by solving the corresponding
homogeneous system: (A + 2 I3) u1 = 0.
◆ Step 3.1.1. Construct the augmented matrix of the system, AugMat= [A1|b].
◆ Construct the coefficient matrix A1 = A + 2 I3 .
In[ ]:= A1 = A + 2 * IdentityMatrix[3]; A1 // MatrixForm
Out[ ]//MatrixForm=
3 -3 3
2 0 2
2 0 2
◆ Augmented matrix:
In[ ]:= AugMat1 = ArrayFlatten[{{A1, b}}]; MatrixForm[AugMat1]
Out[ ]//MatrixForm=
3 -3 3 0
2 0 2 0
2 0 2 0
◆ - (3/2) R1 + R2 → R2
In[ ]:= AugMat1〚2〛 = - (3 / 2) * AugMat1〚1〛 + AugMat1〚2〛; AugMat1 // MatrixForm
Out[ ]//MatrixForm=
2 0 2 0
0 -3 0 0
2 0 2 0
◆ - R 1 + R3 → R 3
◆ Step 3.1.3. Perform the back substitution to find the eigenvector. Take x3 = s:
In[ ]:= x3 = s;
eq0 = 2 x1 + 2 x3 == 0;
eq1 = - 3 x2 == 0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=
{{x1 → - s, x2 → 0}}
-1
0
1
◆ Step 3.2. Find the eigenvectors associated with λ2 = 1 by solving the corresponding
homogeneous system: (A - I3) u2 = 0.
◆ Step 3.2.1. Construct the augmented matrix of the system, AugMat= [A2|b].
◆ Construct the coefficient matrix A2 = A - I3.
In[ ]:= A2 = A - IdentityMatrix[3]; A2 // MatrixForm
Out[ ]//MatrixForm=
0 -3 3
2 -3 2
2 0 -1
◆ Augmented matrix:
In[ ]:= AugMat2 = ArrayFlatten[{{A2, b}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
0 -3 3 0
2 -3 2 0
2 0 -1 0
◆ - R 1 + R2 → R 2
◆ - R 2 + R3 → R 3
◆ Step 3.2.3. Perform the back substitution to find the eigenvector. Take x3 = 2 s:
In[ ]:= Clear[x1, x2, x3]
x3 = 2 s;
eq0 = 2 x1 - x3 == 0;
eq1 = - 3 x2 + 3 x3 == 0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=
{{x1 → s, x2 → 2 s}}
1
2
2
◆ Step 3.3. Find the eigenvectors associated with λ3 = 0 by solving the homogeneous
system: (A - 0 I3) u3 = A u3 = 0.
◆ Step 3.3.1. Construct the matrix A3 = A - 0 I3 = A.
In[ ]:= A3 = A - 0 * IdentityMatrix[3]; A3 // MatrixForm
Out[ ]//MatrixForm=
1 -3 3
2 -2 2
2 0 0
0
1
1
2 λ - λ² - λ³
{- 2, 1, 0}
{{- 2, 1, 0}, {{- 1, 0, 1}, {1, 2, 2}, {0, 1, 1}}}
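The whole eigensystem can also be cross-checked numerically; a NumPy sketch that re-verifies the defining relation A u = λ u for each pair:

```python
import numpy as np

A = np.array([[1, -3, 3],
              [2, -2, 2],
              [2, 0, 0]], dtype=float)

vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(vals.real), [-2, 0, 1])

# Each returned column satisfies A u == lambda u
for lam, u in zip(vals, vecs.T):
    assert np.allclose(A @ u, lam * u)
print("eigenvalues:", sorted(np.round(vals.real).astype(int)))
```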
12 - 16 λ + 7 λ² - λ³
12 - 16 λ + 7 λ² - λ³ == 0
◆ Step 4. The eigenvalues for a matrix A are given by the roots of the characteristic equa-
tion.
◆ Factorise the characteristic equation.
In[ ]:= Factor[eqn]
Out[ ]=
- (- 3 + λ) (- 2 + λ)² == 0
◆ From the factored form we see that the matrix A has two distinct eigenvalues, λ1 = 2
(multiplicity 2) and λ2 = 3 (multiplicity 1). Confirm the results by solving the character-
istic equation for λ.
In[ ]:= Solve[eqn, λ]
Out[ ]=
{{λ → 2}, {λ → 2}, {λ → 3}}
◆ Step 5.1. Find the eigenvectors associated with λ1 = 2 by solving the corresponding
homogeneous system: (A - 2 I3) u1 = 0.
◆ Step 5.1.1. Construct the augmented matrix of the system, AugMat= [A1|b].
◆ Construct the coefficient matrix A1 = A - 2 I3 .
In[ ]:= A1 = A - 2 * IdentityMatrix[3]; A1 // MatrixForm
Out[ ]//MatrixForm=
2 4 -2
1 2 -1
3 6 -3
◆ Augmented matrix:
In[ ]:= AugMat1 = ArrayFlatten[{{A1, b}}]; MatrixForm[AugMat1]
Out[ ]//MatrixForm=
2 4 -2 0
1 2 -1 0
3 6 -3 0
◆ - 2 R1 + R2 → R2
◆ - 3 R1 + R3 → R3
◆ Step 5.1.3. Perform the back substitution to find the eigenvector. Let x2 = s1 and x3 = s2:
In[ ]:= x3 = s2 ;
x2 = s1 ;
eqn = x1 + 2 x2 - x3 == 0;
soln = Solve[eqn, x1]
Out[ ]=
{{x1 → - 2 s1 + s2 }}
-2 1
1 , 0
0 1
Note that for the eigenvalue λ1 = 2, which has multiplicity of 2, its associated eigenspace has
dimension 2 (i.e., a plane).
.
◆ Step 5.2. Find the eigenvectors associated with λ2 = 3 by solving the corresponding
homogeneous system: (A - 3 I3) u3 = 0.
◆ Step 5.2.1. Construct the augmented matrix of the system, AugMat= [A2|b].
◆ Construct the coefficient matrix A2 = A - 3 I3.
In[ ]:= A2 = A - 3 IdentityMatrix[3]; A2 // MatrixForm
Out[ ]//MatrixForm=
1 4 -2
1 1 -1
3 6 -4
◆ Augmented matrix:
In[ ]:= AugMat2 = ArrayFlatten[{{A2, b}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
1 4 -2 0
1 1 -1 0
3 6 -4 0
◆ - 3 R1 + R3 → R3
◆ - 2 R2 + R3 → R3
◆ Step 5.2.3. Perform the back substitution to find the eigenvector. Take x3 = 3 s:
In[ ]:= Clear[x1, x2, x3]
x3 = 3 s;
eq0 = x1 + 4 x2 - 2 x3 == 0;
eq1 = - 3 x2 + x3 == 0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=
{{x1 → 2 s, x2 → s}}
2
1
3
12 - 16 λ + 7 λ² - λ³
{3, 2, 2}
{{3, 2, 2}, {{2, 1, 3}, {1, 0, 1}, {- 2, 1, 0}}}
where the columns of P are the eigenvectors of the matrix A, such that P = [u1 . . . un ], and the diagonal
entries of D are given by the corresponding eigenvalues λ1 , . . . , λn .
Note that the order of the eigenvalues in D does not matter, as long as it matches the order of the
corresponding eigenvectors in P.
.
- (- 1 + λ) (1 + λ)²
{- 1, - 1, 1}
◆ Results show that the matrix A has two distinct eigenvalues, λ1 = - 1 (multiplicity 2)
and λ2 = 1 (multiplicity 1).
Note that although the eigenvalue λ1 = - 1 has multiplicity of 2, its associated eigenspace has
dimension 1 (i.e., a line).
.
◆ Step 4. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
For the given matrix A, the dimension of eigenspace of λ1 = - 1 is less than the multiplicity of the
corresponding eigenvalue, therefore, according to the Theorem 11.4.3 and Theorem 11.4.4, the given
matrix A is non-diagonalizable.
False
.
Example 11.3.2. If possible, find matrices P and D to diagonalize the given matrix A:
1 -4 3
A= 0 7 1
0 0 2
- ((- 7 + λ) (- 2 + λ) (- 1 + λ))
{7, 2, 1}
◆ Results show that the matrix A has three distinct eigenvalues, λ1 = 7, λ2 = 2, and λ3 = 1.
◆ Step 3. Find the corresponding eigenvectors.
◆ Basis for eigenspace of λ1 = 7:
In[ ]:= λ1 = 7;
u1 = NullSpace[A - λ1 IdentityMatrix[3]]; u1 // Transpose // MatrixForm
Out[ ]//MatrixForm=
-2
3
0
◆ Step 4. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
◆ Since the matrix A has three distinct eigenvalues and three eigenvectors corresponding to
these eigenvalues, by Theorem 11.4.3 the set of eigenvectors is linearly independent
and thus forms a basis for R3.
◆ According to the Theorem 11.4.1 and Theorem 11.4.4, the given matrix A is
diagonalizable.
15
True
7 0 0
0 2 0
0 0 1
-2 19 1
3 -1 0
0 5 0
True
True
True
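The diagonalization can be confirmed by direct multiplication; a NumPy sketch using the P and D assembled above:

```python
import numpy as np

A = np.array([[1, -4, 3],
              [0, 7, 1],
              [0, 0, 2]], dtype=float)
P = np.array([[-2, 19, 1],
              [3, -1, 0],
              [0, 5, 0]], dtype=float)
D = np.diag([7.0, 2.0, 1.0])

assert round(np.linalg.det(P)) == 15          # nonzero -> independent columns
assert np.allclose(P @ D @ np.linalg.inv(P), A)
print("A == P D P^-1 verified")
```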
Aᵏ = P Dᵏ P⁻¹
◆ Step 3. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
◆ The following nonzero determinant implies that the eigenvectors are linearly independent .
True
True
◆ Diagonal entries of D:
In[ ]:= d = Diagonal[D1]
Out[ ]=
{1, 1/9}
◆ Step 6. Compute A5 based on the Definition 11.5. Calculate the 5th power of the diagonal matrix.
In[ ]:= A5 = P.D5.InvP; A5 // MatrixForm
Out[ ]//MatrixForm=
4921/19683 14762/59049
14762/19683 44287/59049
True
Summary
After completing this chapter, you should be able to
◼ develop SOPs to find the eigensystem (eigenvalues and eigenvectors) of a given
square matrix.
◼ perform step-by-step matrix diagonalization in Mathematica.
◼ perform matrix power operations in Mathematica.
◼ develop the habit of always checking your solutions for quality assurance.
Week 12: Linear Algebra and Geometry
Vectors, Vector Operations, and Linear Transformations
Table of Contents
1. Example 12.1: Vectors and Vector Operations
2. Example 12.2: Geometry of Vectors
2.1. Vector Addition
2.2. Scalar Multiplication
2.3. Vector Subtraction
3. Example 12.3: Span
4. Example 12.4: Linear Independence
5. Example 12.5: Dot Product and its Applications
5.1. Properties of the Dot Product
5.2. Norm of a Vector
5.3. Distance Between Vectors
5.4. Angle Between Vectors
5.5. Orthogonal Vectors
6. Example 12.6: Linear Transformations
6.1. Linear Transformations
6.2. One-to-One Linear Transformations
6.3. Onto Linear Transformations
7. Example 12.7: Geometry of Linear Transformations
7.1. Reflection Across the x-Axis
7.2. Reflection Across the y-Axis
7.3. Rotation by Angle θ
7.4. Vertical Shear Transformation
7.5. Horizontal Shear Transformation
7.6. Dilation
7.7. Projection onto the x-Axis
7.8. Projection onto the y-Axis
8. Summary
Commands list
◼ Arrowheads
◼ Arrow
◼ Show
◼ Graphics
◼ Point
◼ Line
◼ Dot
◼ Norm
◼ EuclideanDistance
◼ VectorAngle
◼ Expand
◼ BezierCurve
Operations on Vectors:
https://reference.wolfram.com/language/guide/OperationsOnVectors.html
The Wolfram Language provides fully integrated support for plane geometry, including basic
regions such as points, lines, triangles, and disks; functions for computing basic properties
such as arc length and area; and nearest-point functions, as well as solvers to find the
intersection of regions or integrals over regions. It is a powerful tool for visualizing
geometrical figures and graphical images, and consists of an explicit list of primitives,
directives, wrappers, and options.
Plane Geometry:
https://reference.wolfram.com/language/guide/PlaneGeometry.html
u1
u2
u=
⋮
un
or as u = (u1 , u2 , … , un ).
The set of all vectors with n entries is denoted by Rn (real coordinate space of dimension n).
u1 v1
u2 v2
u= and v=
⋮ ⋮
un vn
u1 v1 u1 + v 1
u2 v2 u2 + v 2
Addition: u +v = + =
⋮ ⋮ ⋮
un vn un + v n
u1 c · u1
u2 c · u2
Scalar Multiplication: cu = c =
⋮ ⋮
un c · un
The set of all vectors in Rn , taken together with these definitions of addition and scalar multiplication,
is called Euclidean space. The Euclidean space is an example of a vector space.
.
(b) a (u + v) = av + au
(c) (a + b) u = au + bu
(d) (u + v) + w = u +(v + w)
(e) a (bu) = (ab) u
(f) u +(- u) = 0
(g) u + 0 = 0 + u = u
(h) 1 u = u
(i) - u = (- 1) u
where the zero vector is given by 0 = (0, 0, … , 0).
Note that two vectors can be equal only if they have the same number of components. Similarly, it is
impossible to add two vectors that have a different number of components.
Suppose that we have the vectors in R4. Perform operations on following vectors to find u+v,
-4v, and 2u-3v.
u = (2, - 3, 0, - 1) and v = (- 4, 6, - 2, 7)
◆ Step 2. Check the number of components of the given vectors for equality.
True
-2
3
-2
6
◆ Step 4. Find the scalar multiplication: - 4 v. Use a Do loop to perform vector operations.
◆ Get the dimension of the vector.
In[ ]:= dim = Dimensions[v]
Out[ ]=
{4, 1}
16
-24
8
-28
c 1 u1 + c 2 u2 + ⋯ + c m um
is a linear combination of u1 , u2 , … , um .
{4, 1}
16
-24
6
-23
True
True
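Componentwise vector arithmetic translates directly to any numeric environment; a NumPy sketch reproducing all three results of this example:

```python
import numpy as np

u = np.array([2, -3, 0, -1])
v = np.array([-4, 6, -2, 7])

assert np.array_equal(u + v, [-2, 3, -2, 6])
assert np.array_equal(-4 * v, [16, -24, 8, -28])
assert np.array_equal(2*u - 3*v, [16, -24, 6, -23])
print("u+v, -4v, and 2u-3v all match")
```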
The end of the vector with the arrow is called the tip, and the end at the origin is called the tail.
◆ Then, Graphics and Show functions were used to display them in coordinate system.
In[ ]:= Show[Graphics[{Thick, Blue, u1, u2, u3,
Red, PointSize[0.02], Point[{- 1, 1}], Point[{1, 2}], Point[{2, - 1}], Black,
Text[Style["(-1, 1)", FontFamily "Times", FontSize 14], {- 1, 1.3}],
Text[Style["(1, 2)", FontFamily "Times", FontSize 14], {1.2, 2.1}],
Text[Style["(2, -1)", FontFamily "Times", FontSize 14], {2, - 0.6}]}],
Axes True, AxesLabel {"x1 ", "x2 "}, LabelStyle
Directive[FontFamily "Times", FontSize 14, Black],
ImageSize 360, AspectRatio 1 / GoldenRatio]
Out[ ]=
[Graphics: vectors u1, u2, u3 drawn as arrows from the origin to the labeled points (-1, 1), (1, 2), and (2, -1) in the x1-x2 plane.]
Suppose that we have the vectors in R2. Perform vector operations and illustrate them
graphically.
Theorem 12.2.1: Geometric Procedures for Adding Vectors
1. The Tip-to-Tail Rule: Let u and v be two vectors. Translate the graph of v, preserving direction, so
that its tail is at the tip of u. Then the tip of the translated v is at the tip of u + v.
2. The Parallelogram Rule: Let vectors u and v form two adjacent sides of a parallelogram with
vertices at the origin, the tip of u, and the tip of v. Then the tip of u + v is at the fourth vertex.
.
◆ Step 2. Check the number of components of the given vectors for equality.
In[ ]:= Dimensions[u] Dimensions[v]
Out[ ]=
True
◆ Step 5. Display the geometric interpretation of the Tip-to-Tail Rule for vector addition.
[Graphics: the Tip-to-Tail Rule — vectors u, v, the translated copy of v (dashed), and their sum u + v in the x1-x2 plane.]
◆ The figure above shows vectors u, v, the translated v (dashed), and u + v. When we add
v to u, we add each component of v to the corresponding component of u. Note that we
get to the same place if we translate u instead of v.
◆ Step 6. Display the geometric interpretation of the Parallelogram Rule for vector
addition.
[Graphics: the Parallelogram Rule — u and v as adjacent sides of a parallelogram, with u + v along the diagonal from the origin, in the x1-x2 plane.]
◆ The figure above shows that the third and fourth sides of the parallelogram are translated
copies of u and v, which shows the connection to the Tip-to-Tail Rule.
◆ Step 4. Display the geometric interpretation of the scalar multiples of the vector u.
In[ ]:= Show[Graphics[{Thick, Pink, u3, Yellow, u4, Blue, u1, Green, u2, Black,
Text[Style["u", Bold, FontFamily "Times", FontSize 14], {- 0.7, 1}],
Text[Style["-u", Bold, FontFamily "Times", FontSize 14], {1, - 0.6}],
Text[Style["2.5u", Bold, FontFamily "Times", FontSize 14], {- 1.9, 2.5}],
Text[Style["-2u", Bold, FontFamily "Times", FontSize 14], {2.2, - 1.7}]}],
Axes True, AxesLabel {"x1 ", "x2 "}, LabelStyle
Directive[FontFamily "Times", FontSize 14, Black],
ImageSize 360, AspectRatio 1 / GoldenRatio]
Out[ ]=
[Graphics: scalar multiples of u — the vectors u, -u, 2.5u, and -2u drawn along the same line through the origin in the x1-x2 plane.]
◆ Step 2. Check the number of components of the given vectors for equality.
In[ ]:= Dimensions[u] Dimensions[v]
Out[ ]=
True
[Graphics: vector subtraction — the vectors u, v, w, and u - v in the x1-x2 plane.]
x 1 u1 + x 2 u2 + ⋯ + x m um
x 1 u1 + x 2 u2 + ⋯ + x m um = v
[ u1 u2 ⋯ um v ]
have a solution.
◆ Step 3.1. Construct the augmented matrix of the system, A = [u1 u2 v1].
In[ ]:= A = ArrayFlatten[{{u1, u2, v1}}]; MatrixForm[A]
Out[ ]//MatrixForm=
2 1 -1
1 2 4
1 3 7
◆ Step 5.1. Perform the back substitution to find scalars x1 and x2 and check the solution
using LinearSolve function.
In[ ]:= x2 = 3;
x1 = - 2;
{- 2, 3}
v1 = - 2 u1 + 3 u2
◆ The third row of the echelon matrix corresponds to the equation 0=1. Thus the system
has no solutions and v2 is not in S = span {u1, u2}.
◆ Step 5.2. Check the solution using LinearSolve function.
In[ ]:= LinearSolve[{{2, 1}, {1, 2}, {1, 3}}, {8, 2, 1}]
Out[ ]=
A = [ u1 u2 ⋯ um ] ∼ B
where B is in echelon form. Then span {u1 , … , um } = Rn exactly when B has a pivot position in every
row.
Let {u1 , u2 , … , um } be a set of vectors in Rn . If m < n, then this set does not span Rn . If m ≥ n, then
the set might span Rn or it might not. In this case, we cannot say more without additional information
about the vectors.
Let a1 , a2 , … , am and b be vectors in Rn . Then the following statements are equivalent. That is, if one
is true, then so are the others, and if one is false, then so are the others.
(a) b is in span {a1 , a2 , … , am }.
(b) The vector equation x1 a1 + x2 a2 + ⋯ + xm am = b has at least one solution.
(c) The linear system corresponding to [ a1 a2 ⋯ am b ] has at least one solution.
(d) The equation Ax = b, with A = [ a1 a2 ⋯ am ] and x = [ x1 x2 ⋯ xm ]ᵀ, has at least one solution.
x1 u1 + x2 u2 + ⋯ + xm um = 0
◆ Step 5.1. Perform the back substitution and check the solution using the LinearSolve function.
x3 = 0;
x2 = 0;
x1 = 0;
◆ The results show that the only solution is the trivial one, x1 = x2 = x3 = 0. Hence the set
{u1, u2, u3} is linearly independent.
.
Suppose that {u1 , u2 , … , um } is a set of vectors in Rn . If n < m, then the set is linearly dependent.
Let {u1 , u2 , … , um } be a set of vectors in Rn . Then this set is linearly dependent if and only if one of
the vectors in the set is in the span of the other vectors.
Let a1 , a2 , … , am and b be vectors in Rn . Then the following statements are equivalent. That is, if one
is true, then so are the others, and if one is false, then so are the others.
(a) The set {a1 , a2 , … , am } is linearly independent.
(b) The vector equation x1 a1 + x2 a2 + ⋯ + xm am = b has at most one solution for every b.
(c) The linear system corresponding to [ a1 a2 ⋯ am b ] has at most one solution for every b.
(d) The equation Ax = b, with A = [ a1 a2 ⋯ am ], has at most one solution for every b.
u · v = u1 v1 + ⋯ + un vn
An alternative way to define the dot product of u and v is with matrix multiplication:
u · v = uᵀ v = ( u1 ⋯ un ) (v1, …, vn)ᵀ = u1 v1 + ⋯ + un vn
Unlike vector addition, which produces a new vector, the dot product of two vectors yields a scalar.
.
Prove the property (b) of Theorem 12.6 using the given vectors.
u = (2, 1, - 3, 2),  v = (- 1, 4, 0, 3),  w = (5, 0, 1, 2)
12
{4}
◆ Step 5. Initialize the two new variables for the dot products u · w and v · w to
zero.
In[ ]:= dotUW = 0;
dotVW = 0;
◆ Step 6. Run a do loop for computing the dot product of the vectors u and w.
In[ ]:= Do[dotUW = dotUW + u〚i〛 * w〚i〛, {i, 1, dim〚1〛}];
dotUW
Out[ ]=
11
◆ Step 7. Run a for loop for computing the dot product of the vectors v and w.
12
True
◆ Step 10. Confirm the results using the built-in function Dot(.).
In[ ]:= RHS == Dot[(u + v), w]
Out[ ]=
True
True
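The same distributivity property, (u + v) · w = u · w + v · w, can be cross-checked outside Mathematica. Below is a minimal sketch in plain Python, using the vectors of this example; the explicit loop mirrors the Do loop above:

```python
# Vectors from the example above
u = [2, 1, -3, 2]
v = [-1, 4, 0, 3]
w = [5, 0, 1, 2]

def dot(a, b):
    """Dot product via an explicit loop, mirroring the Do loop above."""
    total = 0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

lhs = dot([ui + vi for ui, vi in zip(u, v)], w)   # (u + v) . w
rhs = dot(u, w) + dot(v, w)                       # u . w + v . w
print(lhs, rhs, lhs == rhs)  # 12 12 True
```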
The dot product is used to find lengths, distances, and angles of vectors, and to test their orthogonality.
.
|| cx || = |c| || x ||
√26
5 √26
True
True
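The scaling property can also be checked numerically. In the sketch below, x and c are illustrative values chosen for the demonstration, not necessarily the ones used in the notebook; note the absolute value of c in the property:

```python
import math

x = [1, 3, 4]   # illustrative vector (not necessarily the notebook's)
c = -5          # illustrative scalar; the property uses |c|

norm = lambda v: math.sqrt(sum(vi * vi for vi in v))

lhs = norm([c * vi for vi in x])   # ||c x||
rhs = abs(c) * norm(x)             # |c| ||x||
print(abs(lhs - rhs) < 1e-12)  # True
```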
◆ Step 3. Compute the distance between two vectors using the definition.
In[ ]:= d = Sqrt[diff〚1〛^2 + diff〚2〛^2 + diff〚3〛^2]
Out[ ]=
3 √13
◆ Step 4. Confirm the results using the built-in functions Norm and EuclideanDistance.
In[ ]:= Norm[u - v] == d
Out[ ]=
True
True
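The distance computation itself is easy to mirror in plain Python. The vectors below are illustrative stand-ins (the notebook's u and v are not shown on this page), chosen so that the distance also comes out to 3√13:

```python
import math

# Illustrative 3-vectors standing in for the notebook's u and v
u = [3, 8, 9]
v = [1, 1, 1]

diff = [ui - vi for ui, vi in zip(u, v)]     # u - v = [2, 7, 8]
d = math.sqrt(sum(di * di for di in diff))   # distance by definition
print(abs(d - 3 * math.sqrt(13)) < 1e-12)  # True, since sqrt(117) = 3 sqrt(13)
```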
u · v = || u || || v || cos(θ)
70.56
u = (2, - 1, 5, - 2),  v = (3, 2, - 4, 0),  w = (2, 9, 6, 4)
In[ ]:= u.v
Out[ ]=
- 16
◆ The dot product is not equal to zero, hence u and v are not orthogonal.
In[ ]:= u.w
Out[ ]=
17
◆ The dot product is not equal to zero, hence u and w are not orthogonal.
In[ ]:= v.w
Out[ ]=
0
120.633
In[ ]:= N[VectorAngle[u, w]] * 180 / Pi
Out[ ]=
75.5766
In[ ]:= N[VectorAngle[v, w]] * 180 / Pi
Out[ ]=
90.
◆ The angle between the vectors v and w is 90°, hence they are perpendicular (orthogonal).
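These orthogonality and angle checks can be reproduced with a few lines of plain Python, using the same u, v, w as above:

```python
import math

u = [2, -1, 5, -2]
v = [3, 2, -4, 0]
w = [2, 9, 6, 4]

dot  = lambda a, b: sum(x * y for x, y in zip(a, b))
norm = lambda a: math.sqrt(dot(a, a))

def angle_deg(a, b):
    """Angle from u . v = ||u|| ||v|| cos(theta), in degrees."""
    return math.degrees(math.acos(dot(a, b) / (norm(a) * norm(b))))

print(dot(u, v), dot(u, w), dot(v, w))   # -16 17 0
print(round(angle_deg(v, w), 1))         # 90.0 -> v and w are orthogonal
```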
(a) Approach 1: To show that T is a linear transformation requires verifying both
conditions (a) and (b) of Definition 12.12.
.
◆ Step 1. Define the vectors u and v, and the given linear transformation.
In[ ]:= ClearAll["Global`*"]
u = {u1, u2, u3}; u // MatrixForm
Out[ ]//MatrixForm=
u1
u2
u3
In[ ]:= T[{x1_, x2_, x3_}] := {{2 x1 + x3}, {- x1 + 2 x2}, {x1 - 3 x2 + 5 x3}, {4 x2}}
◆ Expanded form:
In[ ]:= Expand[%] // MatrixForm
Out[ ]//MatrixForm=
2 u1 + u3 + 2 v1 + v3
- u1 + 2 u2 - v1 + 2 v2
u1 - 3 u2 + 5 u3 + v1 - 3 v2 + 5 v3
4 u2 + 4 v2
True
◆ Right-hand-side: rT (u).
In[ ]:= bRHS = r * T[u]; bRHS // MatrixForm
Out[ ]//MatrixForm=
r (2 u1 + u3)
r (- u1 + 2 u2)
r (u1 - 3 u2 + 5 u3)
4 r u2
◆ Expanded form:
In[ ]:= Expand[%] // MatrixForm
Out[ ]//MatrixForm=
2 r u1 + r u3
- r u1 + 2 r u2
r u1 - 3 r u2 + 5 r u3
4 r u2
True
The results verify that both parts of the Definition 12.12 hold, so T is a linear
transformation.
.
(a) Approach 2: Apply Theorem 12.7 and find the matrix A such that T (x) = Ax to show that T is a linear transformation.
Out[ ]//MatrixForm=
x1
x2
x3
In[ ]:= T[{x1_, x2_, x3_}] := {{2 x1 + x3}, {- x1 + 2 x2}, {x1 - 3 x2 + 5 x3}, {4 x2}}
◆ Step 3. Rewrite the given linear transformation function by adding the missing elements.
In[ ]:= newT[{x1_, x2_, x3_}] :=
{{2 x1 + 0 x2 + x3}, {- x1 + 2 x2 + 0 x3}, {x1 - 3 x2 + 5 x3}, {0 x1 + 4 x2 + 0 x3}}
◆ Step 4. Verify that nothing has changed and the linear transformations are the same.
In[ ]:= T[{x1, x2, x3}] == newT[{x1, x2, x3}]
Out[ ]=
True
2 0 1
-1 2 0
1 -3 5
0 4 0
True
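With the matrix A found above, linearity of T(x) = Ax can also be spot-checked numerically. The sample inputs u, v, and r below are arbitrary test values:

```python
# Matrix of the transformation found in Step 3
A = [[2, 0, 1],
     [-1, 2, 0],
     [1, -3, 5],
     [0, 4, 0]]

def T(x):
    """T(x) = A x for a 3-vector x."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(4)]

u, v, r = [1, 2, 3], [-1, 0, 4], 2.5   # arbitrary sample inputs

add_uv = [ui + vi for ui, vi in zip(u, v)]
assert T(add_uv) == [a + b for a, b in zip(T(u), T(v))]    # T(u+v) = T(u)+T(v)
assert T([r * ui for ui in u]) == [r * ti for ti in T(u)]  # T(ru) = r T(u)
print("both linearity conditions hold")
```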
(b) To determine whether the linear transformation is one-to-one or not, we should apply either part
of Theorem 12.8.
.
According to Theorem 12.8.1, we need to find the solution to T (x) = 0, which is
equivalent to solving Ax = 0.
.
◆ The echelon form shows that Ax = 0 has only the trivial solution.
◆ Step 4. Verify the result using built-in function.
In[ ]:= LinearSolve[A, b]
Out[ ]=
(c) To determine whether the linear transformation is onto or not, we should apply Theorem 12.9.
.
{4, 3}
◆ Step 3. Check the condition (c) of the given theorem and compare the number of rows
and columns.
In[ ]:= n>m
Out[ ]=
True
1 0
0 -1
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=
{x, - y}
In[ ]:= T[u]
Out[ ]=
{5, - 4}
[Figure: u = (5, 4) and its reflection u' = (5, -4) across the x-axis]
-1 0
0 1
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=
{- x, y}
In[ ]:= T[u]
Out[ ]=
{- 5, 4}
[Figure: u = (5, 4) and its reflection u' = (-5, 4) across the y-axis]
Rotation by Angle θ
Cos[θ] - Sin[θ]
Sin[θ] Cos[θ]
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=
{x Cos[θ] - y Sin[θ], y Cos[θ] + x Sin[θ]}
[Figure: u and its image u' after rotation by θ = 25°]
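The rotation can be checked numerically as well. In this sketch, θ = 25° matches the figure, and u = (5, 4) is an assumed input, consistent with the earlier examples:

```python
import math

theta = math.radians(25)
u = (5, 4)   # assumed input vector, consistent with the earlier examples

# Rotation matrix [[cos t, -sin t], [sin t, cos t]] applied to u:
x, y = u
u_rot = (x * math.cos(theta) - y * math.sin(theta),
         x * math.sin(theta) + y * math.cos(theta))

# A rotation preserves length and adds exactly theta to the polar angle:
norm = lambda p: math.hypot(*p)
print(abs(norm(u_rot) - norm(u)) < 1e-12)  # True
```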
1 0
v 1
{x, v x + y}
{5, 19}
[Figure: u = (5, 4) and its image u' = (5, 19) under the vertical shear]
1 h
0 1
{x + h y, y}
In[ ]:= h = 2;
hSht = T[u]
Out[ ]=
{13, 4}
[Figure: u = (5, 4) and its image u' = (13, 4) under the horizontal shear]
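Both shears are easy to reproduce in plain Python; v = 3, h = 2, and u = (5, 4) are the values used above:

```python
u = (5, 4)

def vertical_shear(p, v):
    """[[1, 0], [v, 1]] . (x, y) = (x, v x + y)"""
    x, y = p
    return (x, v * x + y)

def horizontal_shear(p, h):
    """[[1, h], [0, 1]] . (x, y) = (x + h y, y)"""
    x, y = p
    return (x + h * y, y)

print(vertical_shear(u, 3))    # (5, 19)
print(horizontal_shear(u, 2))  # (13, 4)
```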
Dilation
d 0
0 d
◆ where d is a scale factor which determines how much larger or smaller the image will be
compared to the original geometric object.
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
{d x, d y}
{7.5, 6.}
[Figure: u = (5, 4) and its dilated image u' = (7.5, 6.) for d = 1.5]
1 0
0 0
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=
{x, 0}
{5, 0}
◆ Show the image of vector u under the projection onto the x-axis.
In[ ]:= Show[Graphics[{Thickness[0.008], Dashed, Line[{{5, 0}, {5, 4}}], Dashing[None], Blue, u1,
    RGBColor[1, 0, 0.4], Px1, Red, PointSize[0.02], Point[u], Point[Px], Black,
    Text[Style["u", Bold, FontFamily -> "Times", FontSize -> 16], u + 0.3],
    Text[Style["u'", Bold, FontFamily -> "Times", FontSize -> 16], Px + 0.3]},
  GridLines -> Automatic], Axes -> True, AxesLabel -> {"x", "y"},
  LabelStyle -> Directive[FontFamily -> "Times", FontSize -> 14, Black],
  ImageSize -> 360, AspectRatio -> 1 / GoldenRatio]
Out[ ]=
[Figure: u = (5, 4) and its projection u' = (5, 0) onto the x-axis]
0 0
0 1
◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=
{0, y}
{0, 4}
◆ Show the image of vector u under the projection onto the y-axis.
In[ ]:= Show[Graphics[{Thickness[0.008], Black, Dashed, Line[{{0, 4}, {5, 4}}], Dashing[None],
    Blue, u1, RGBColor[1, 0, 0.4], Py1, Red, PointSize[0.02], Point[u], Point[Py], Black,
    Text[Style["u", Bold, FontFamily -> "Times", FontSize -> 16], u + 0.3],
    Text[Style["u'", Bold, FontFamily -> "Times", FontSize -> 16], Py + 0.3]},
  GridLines -> Automatic], Axes -> True, PlotRange -> {{- 0.5, 5.5}, {- 0.5, 4.5}},
  AxesLabel -> {"x", "y"}, LabelStyle -> Directive[FontFamily -> "Times", FontSize -> 14, Black],
  ImageSize -> 360, AspectRatio -> 1 / GoldenRatio]
Out[ ]=
[Figure: u = (5, 4) and its projection u' = (0, 4) onto the y-axis]
Summary
After completing this chapter, you should be able to
◼ analyze vectors, simple vector operations, and geometry of vectors in Mathematica.
◼ analyze span and linear independence of vectors in Mathematica.
◼ analyze the dot product of two vectors and related applications in Mathematica.
◼ analyze linear transformations in Mathematica.
◼ learn and use information, tools, and technology to solve problems.
Week 13: Linear Systems of ODEs
How to solve a system of differential equations?
Table of Contents
1. Homogeneous First-Order Linear System of ODEs with Initial Condition
1.1. Method 1: Separation of Variables
1.2. Method 2: Laplace Transforms
1.3. Method 3: Eigenvalues and Eigenvectors
2. Summary
Commands list
◼ Integrate
◼ Solve
◼ DSolve
◼ LaplaceTransform
◼ InverseLaplaceTransform
◼ RowReduce
◼ Transpose
◼ Join
◼ Inverse
◼ CharacteristicPolynomial
◼ Eigenvalues
◼ Eigenvectors
◼ Eigensystem
◼ Wronskian
◼ DiagonalizableMatrixQ
systems of first-order ODEs with initial conditions. Therefore, it is recommended to visit and
read the sections for Weeks 1, 6, and 11 of this guidebook to learn more about these methods.
Solution: Instead of using notations like [A], [B], and [C], let’s introduce y1 = [A], y2 = [B],
and y3 = [C] and rewrite the system of ODEs as
dy1/dt = y1′ = - k1 y1
dy2/dt = y2′ = k1 y1 - k2 y2
dy3/dt = y3′ = k2 y2
Log[y1]
- k1 t
◆ Step 4. By integration we get ln y1 = - k1 t + C. Solve the expression to get the general
solution to the ODE.
In[ ]:= Solve[LHS == RHS + C, y1, Reals]
Out[ ]=
y1 → C e^(-k1 t)
C e^(-k1 t)
◆ Step 5. Use initial value condition y1(0) = A0 to find the particular solution.
In[ ]:= y10 = y1genSoln /. t → 0
Out[ ]=
283
4 Week 13_Linear Systems of ODEs.nb
{{C → A0}}
y1[t] → A0 e^(-k1 t)
y1[t] → A0 e^(-k1 t)
True
.
Now we move to the second ODE. Substituting the solution y1(t) into the second ODE,
y2′ = k1 y1 - k2 y2, we get a non-homogeneous linear ODE of first order.
.
◆ Step 1. Define the second ODE and substitute the solution y1(t ).
In[ ]:= expr2 = y2 '[t] - k1 * y1[t] + k2 * y2[t]
Out[ ]=
◆ Now we have a non-homogeneous linear ODE of the form y' + p(t ) y = r(t ).
◆ Step 2. Define the p(t) and r(t).
In[ ]:= p = k2;
r = A0 * Exp[- k1 * t] * k1;
k2 t
◆ Step 4. Find the general solution to the given ODE: y(t) = e^(-h) ∫ e^h r dt + c e^(-h).
C e^(-k2 t) + (A0 e^(-k2 t + (- k1 + k2) t) k1) / (- k1 + k2)
C + (A0 k1) / (- k1 + k2)
{{C → (A0 k1) / (k1 - k2)}}
y2[t] → (A0 e^(-k2 t) (- 1 + e^((- k1 + k2) t)) k1) / (- k1 + k2)
y2[t] → (A0 e^(-k2 t) (- 1 + e^((- k1 + k2) t)) k1) / (- k1 + k2)
In[ ]:= FullSimplify[y2partSol == (A0 e^(-k2 t) (- 1 + e^((- k1 + k2) t)) k1) / (- k1 + k2)]
Out[ ]=
True
.
Substituting the solution y2(t) into the third ODE, y3′ = k2 y2, we again get a separable equation
which can be solved by separation of variables.
.
◆ Step 1. Define the third ODE and substitute the solution y2(t ).
In[ ]:= expr3 = y3 '[t] - k2 * y2[t]
Out[ ]=
(A0 (e^(-k1 t) - e^(-k2 t)) k1 k2) / (k1 - k2) + y3′[t]
◆ Now we have a separable equation: dy3 = - (A0 (e^(-k1 t) - e^(-k2 t)) k1 k2) / (k1 - k2) dt.
.
y3
◆ Step 5. Use initial value condition y3(0) = 0 to find the particular solution.
In[ ]:= y30 = y3genSoln /. t → 0
Out[ ]=
- A0 + C
{{C → A0}}
True
{y1[0] → A0}
◆ Step 2.1. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT1 = LaplaceTransform[ode1, t, s] /. IC1
Out[ ]=
- A0 + s LaplaceTransform[y1[t], t, s] == - k1 LaplaceTransform[y1[t], t, s]
- A0 + s Y1[s] == - k1 Y1[s]
◆ Step 1.2. Define the second ODE as an equation and its intial condition as a substitution.
In[ ]:= ode2 = y2 '[t] == k1 * y1[t] - k2 * y2[t]
Out[ ]=
{y2[0] → 0}
◆ Step 2.2. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT2 = LaplaceTransform[ode2, t, s] /. IC2
Out[ ]=
s LaplaceTransform[y2[t], t, s] ==
k1 LaplaceTransform[y1[t], t, s] - k2 LaplaceTransform[y2[t], t, s]
◆ Step 1.3. Define the third ODE as an equation and its intial condition as a substitution.
In[ ]:= ode3 = y3 '[t] == k2 * y2[t]
Out[ ]=
{y3[0] → 0}
◆ Step 2.3. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT3 = LaplaceTransform[ode3, t, s] /. IC3
Out[ ]=
s LaplaceTransform[y3[t], t, s] == k2 LaplaceTransform[y2[t], t, s]
s Y3[s] == k2 Y2[s]
Y1[s] → A0 / (k1 + s),  Y2[s] → (A0 k1) / ((k1 + s) (k2 + s)),  Y3[s] → (A0 k1 k2) / (s (k1 + s) (k2 + s))
◆ Step 4. Take the inverse transforms of Y1, Y2, and Y3 to get the solution to the given
system of ODEs.
y1 → A0 e^(-k1 t)
y2 → - (A0 (e^(-k1 t) - e^(-k2 t)) k1) / (k1 - k2)
True
True
True
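Before moving on, the closed-form solutions can be sanity-checked numerically, independent of Mathematica: mass is conserved (y1 + y2 + y3 = A0) and each ODE holds to finite-difference accuracy. The values of A0, k1, k2, and t below are arbitrary samples:

```python
import math

A0, k1, k2 = 1.0, 2.0, 3.0   # arbitrary sample parameters (k1 != k2)
t = 0.7                      # arbitrary sample time

y1 = lambda t: A0 * math.exp(-k1 * t)
y2 = lambda t: A0 * k1 * (math.exp(-k1 * t) - math.exp(-k2 * t)) / (k2 - k1)
y3 = lambda t: A0 - y1(t) - y2(t)   # mass conservation

# Check the first ODE, y1' = -k1 y1, by central difference:
h = 1e-6
d_y1 = (y1(t + h) - y1(t - h)) / (2 * h)
print(abs(d_y1 + k1 * y1(t)) < 1e-6)   # True

# Check the second ODE, y2' = k1 y1 - k2 y2:
d_y2 = (y2(t + h) - y2(t - h)) / (2 * h)
print(abs(d_y2 - (k1 * y1(t) - k2 * y2(t))) < 1e-6)   # True
```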
- λ (k1 + λ) (k2 + λ)
◆ The eigenvalues for a matrix A are given by the roots of the characteristic equation.
In[ ]:= Solve[Det[A - λ * IdentityMatrix[3]] == 0, λ]
Out[ ]=
◆ Results show that the matrix A has three distinct eigenvalues, λ1 = 0, λ2 = - k1, and
λ3 = - k2.
In[ ]:= {λ1, λ2, λ3} = Eigenvalues[A]
Out[ ]=
Theorem 13.1: General Solution to the 1st Order Linear System of ODEs.
Suppose that y' = Ay is a first-order linear system of differential equations. If A is an n × n
diagonalizable matrix, then the general solution to the system is given by
y = c1 e^(λ1 t) u1 + ⋯ + cn e^(λn t) un
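Theorem 13.1's ingredients can be spot-checked for the coefficient matrix of this example, A = {{-k1, 0, 0}, {k1, -k2, 0}, {0, k2, 0}}. In the sketch below, k1 = 2 and k2 = 3 are arbitrary sample values, and the (unnormalized) eigenvectors come from a hand calculation; Mathematica's Eigenvectors may return scalar multiples of them:

```python
k1, k2 = 2.0, 3.0   # arbitrary sample rate constants (k1 != k2)

A = [[-k1, 0,   0],
     [ k1, -k2, 0],
     [ 0,  k2,  0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

# Eigenpairs from a hand calculation (unnormalized eigenvectors):
pairs = [
    (0.0, [0, 0, 1]),
    (-k2, [0, 1, -1]),
    (-k1, [1, k1 / (k2 - k1), -k2 / (k2 - k1)]),
]

for lam, u in pairs:
    Au = matvec(A, u)
    assert all(abs(a - lam * ui) < 1e-12 for a, ui in zip(Au, u))
print("A u = lambda u holds for all three eigenpairs")
```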
True
◆ Step 5. By Theorem 13.1, the corresponding solutions of the differential equations are:
In[ ]:= sol1 = u1 * Exp[λ1 * t]; sol1 // MatrixForm
Out[ ]//MatrixForm=
0
0
1
◆ Step 6. The Wronskian determinant can be used to check if these functions form a
fundamental solution set:
In[ ]:= Wronskian[{sol1, sol2, sol3}, t]
Out[ ]=
(e^(- (k1 + k2) t) (- k1 + k2)) / k2
◆ Since the Wronskian determinant is not equal to zero for any real value of t, these functions
form a fundamental solution set.
◆ Step 7. By Theorem 13.1, the general solution of the system is:
.
y = c1 e^(λ1 t) u1 + c2 e^(λ2 t) u2 + c3 e^(λ3 t) u3
.
◆ Step 8. Find the particular solution with the initial conditions y1(0) = A0, y2(0) = 0,
y3(0) = 0.
◆ Define the initial-condition vector (y1(0), y2(0), y3(0))ᵀ = (A0, 0, 0)ᵀ:
In[ ]:= IC = {{A0}, {0}, {0}}; IC // MatrixForm
Out[ ]//MatrixForm=
A0
0
0
◆ Define the corresponding augmented matrix of the system with initial conditions.
In[ ]:= AugMat =
Transpose[Join[Transpose[sol1], Transpose[sol2], Transpose[sol3], Transpose[IC]]];
AugMat // MatrixForm
Out[ ]//MatrixForm=
0   - (- k1 + k2) / k2   0    A0
0   - k1 / k2            - 1   0
1   1                    1    0
y1 → A0 e^(-k1 t)
y2 → - (A0 (e^(-k1 t) - e^(-k2 t)) k1) / (k1 - k2)
y3 → (A0 (k1 - e^(-k2 t) k1 + (- 1 + e^(-k1 t)) k2)) / (k1 - k2)
◆ Or alternatively:
In[ ]:= MatrixForm[{y1, y2, y3}] == A0 * MatrixForm[partSol /. A0 → 1]
Out[ ]=
{y1, y2, y3}ᵀ = A0 {e^(-k1 t),  - ((e^(-k1 t) - e^(-k2 t)) k1) / (k1 - k2),  (k1 - e^(-k2 t) k1 + (- 1 + e^(-k1 t)) k2) / (k1 - k2)}ᵀ
Another technique for solving initial-value problems and a sample solution were taken from the
book “Differential Equations with Mathematica, 5th Ed.” by Martha L. Abell and James
P. Braselton, Chapter 6.
Φ(t) = [ e^(λ1 t) u1  e^(λ2 t) u2  ⋯  e^(λn t) un ]
Then a general solution is X(t) = Φ(t) C, where C is a constant vector. If the initial condition is X(0) = X0, then
X0 = Φ(0) C,
so that
C = Φ⁻¹(0) X0.
- k1 0 0
k1 - k2 0
0 k2 0
A0 e^(-k1 t)
- (A0 (e^(-k1 t) - e^(-k2 t)) k1) / (k1 - k2)
(A0 (k1 - e^(-k2 t) k1 + (- 1 + e^(-k1 t)) k2)) / (k1 - k2)
Summary
After completing this chapter, you should be able to
◼ improve problem-solving skills by practicing different methods.
◼ develop SOPs and streamline your workflow after you are familiar with the methods.
◼ always check/verify your solutions for quality assurance.
◼ Remember, “to learn and not to do is really not to learn. To know and not to do is
really not to know.” - Stephen R. Covey.
References and Suggested Readings
Table of Contents
1. Mathematica-Related Books
2. Wolfram U Interactive Courses
3. Books on Engineering Mathematics (ODE & Linear Algebra)
Mathematica-Related Books
◆ An Elementary Introduction to the Wolfram Language, 2nd Ed., by Stephen Wolfram,
Wolfram Media, Inc., 2017. URL: https://www.wolfram.com/language/elementary-
introduction/2nd-ed/index.html
.
◆ The Student’s Introduction to Mathematica and the Wolfram Language, 3rd Ed., by
Bruce F. Torrence and Eve A. Torrence, Cambridge University Press, 2019. URL:
https://doi.org/10.1017/9781108290937
.
◆ Hands-on Start to Wolfram Mathematica and Programming with the Wolfram Language,
2nd Ed., by Cliff Hastings, Kelvin Mischo, Michael Morrison, Wolfram Media, Inc.,
2016
.
◆ Differential Equations with Mathematica, 5th Ed., by Martha L. Abell and James P.
Braselton, Academic Press, 2022. URL: https://doi.org/10.1016/C2020-0-00005-8
.
◆ Linear Algebra with Applications, 2nd Ed., by Jeffrey Holt, W.H. Freeman and Company,
2017
.
◆ Paul’s Online Math Notes - Differential Equations, by Paul Dawkins, 2018. URL:
https://tutorial.math.lamar.edu/Classes/DE/DE.aspx
.
◆ Interactive Linear Algebra, by Dan Margalit and Joseph Rabinoff, Georgia Institute of
Technology, 2019. URL: https://textbooks.math.gatech.edu/ila/
.
◆ MIT OpenCourseWare - Linear Algebra (Instructor: Prof. Gilbert Strang), 2010. URL:
https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/
.