
Differential Equations & Linear Algebra

with
Wolfram Mathematica
Student Guidebook | 1st Edition

[Cover figure: a solution curve y(t) [m] plotted against t [s], 0 ≤ t ≤ 100.]

Balnur Zhaidarbek    Aruzhan Tleubek    Yanwei Wang

July 2022
Preface
We are pleased to present this first edition of the “Differential Equations and Linear Algebra
with Wolfram Mathematica” student guidebook. The book is comprehensive, but we hope that
does not keep it from being an enjoyable read.

This book is written primarily for students enrolled in the course “Engineering Mathematics III
(Differential Equations and Linear Algebra)” (ENG200) here at Nazarbayev University (NU).
This is a common compulsory course offered to all 2nd-year engineering students. There is a
Computational Lab session in this course where students are expected to practice what they
have learned in the theoretical sessions on computers with Wolfram Mathematica. Such a
design of the course was credited to Prof. V. Zarikas (now at the University of Thessaly,
Greece), who was the course leader when YW taught this course in Fall 2020 and Fall 2021.
We want to thank Prof. Zarikas for selflessly sharing his course materials, which we have
benefited from when developing this book.

Every topic has been summarized and supported by a sufficient number of solved problems.
The book has been designed to equip young engineering students with as much knowledge of
each topic as is desirable from the point of view of the ENG 200 learning outcomes. Efforts
have been made to make differential equations and linear algebra, fundamental subjects in
every engineering curriculum, more interesting and engaging with the help of the Wolfram
Mathematica language. In addition to the above-mentioned math skills, the book helps readers
learn a new programming language, Wolfram Mathematica. Just like learning any new skill,
learning Wolfram Mathematica takes time, effort, and dedication. Therefore, we believe this
journey will also help our readers become self-disciplined learners.

BZ wishes to express her appreciation to Dr. YW, the instructor for the ENG200 course at NU.
She is grateful to him for all the knowledge gained in the ENG200 course and for awakening
her interest in learning the newly introduced Wolfram Mathematica tool/language. BZ hopes
that this book, developed under the supervision of YW, will help the reader learn the basics of
Wolfram Mathematica and use their acquired skills for further work/research. In addition, BZ
thanks Dr. Devendra Kapadia for developing the interactive course “Introduction to Linear
Algebra” for learning linear algebra using the Wolfram Language used in the preparation of this
book and strongly encourages students to take a look at other Wolfram U Interactive Courses
listed in the References section of the book.

AT would like to express her gratitude to Dr. YW for his invaluable advice, continuous support,
and patience both during the ENG 200 course and the book writing process. Without YW’s
encouragement and supervision, this book would not have been possible. AT also would like to
thank her ENG 200 coursemates and friends for a cherished time spent together solving

rigorous math problems and learning new skills in class and social settings.

YW would like to express his sincere gratitude to Prof. H. Tobita (University of Fukui, Japan),
who introduced Wolfram Mathematica to him in Fall 2002. Since then, YW has been in love
with this fantastic tool/language. Life would be different if he didn’t know about Wolfram
Mathematica, and this book would certainly not have been possible. YW would also like to
express his gratitude to students enrolled in the ENG200 course. YW has benefited from close
interactions with students since he started to teach this course in Fall 2020. The two coauthors
(BZ and AT) were also students enrolled in ENG200 in Fall 2021. This book would not have
been finished now without those two brilliant and hardworking students/coauthors.

We acknowledge that this version of the book may contain uncorrected mistakes,
spelling and grammatical errors, and ambiguities. We aim to eliminate them in a second edition
(to be released in July 2023). We are grateful to any reader, student, or instructor who has
encountered this book and sends us feedback, so that we may make further improvements in
future editions.

Balnur Zhaidarbek (BZ) Email: balnur.zhaidarbek@nu.edu.kz


Aruzhan Tleubek (AT) Email: aruzhan.tleubek@nu.edu.kz
Yanwei Wang (YW) Email: yanwei.wang@nu.edu.kz

Table of Contents
◼ Preface

◼ Week 0: Introduction to Wolfram Mathematica


0.1. Prerequisites
0.1.1. To Begin With
0.1.2. Basic Algebra and Calculus
0.1.3. Some of the Basic Operations
0.2. Help Options
0.2.1. Help Browser
0.2.2. Text-based Help
0.3. How to | Clear User Defined Symbols
0.3.1. ClearAll["Global`*"]
0.3.2. Quit[]
0.4. Create Plots
0.4.1. Defining a Function
0.4.2. Graph of a Function of One Variable
0.4.3. Multiple Functions on a Graph
0.4.4. Graph of a Function of Two Variables
0.4.5. Parametric Plots
0.5. DSolve
0.6. How to | Visualize the Direction Field
0.6.1. Stream Plots
0.6.2. Vector Plots
0.6.3. Contour Plots
0.7. More to Explore
0.7.1. Animation
0.7.2. Interactive Manipulation
0.7.3. Sound Effects

1. Week 1: First-Order ODEs


1.1. Separable equations


1.1.1. Example 1.1: Separable ODE
1.1.2. Example 1.2: Initial Value Problem (IVP)
1.2. Exact ODEs & Integrating factors
1.2.1. Example 1.3: An Exact ODE
1.2.2. Non-Exact ODEs and Integrating Factors
1.2.3. Example 1.4: A Non-Exact ODE with IVP
1.3. First-Order Linear ODEs
1.3.1. Example 1.5: First-Order ODE, IVP
1.4. Bernoulli Equation
1.4.1. Example 1.6: Logistic Equation
1.5. Summary

2. Week 2: Second-Order ODEs - 1 (Homogeneous)


2.1. Homogeneous Linear ODEs of Second Order
2.1.1. Example 2.1: Solve Second-Order ODE using DSolve
2.2. Homogeneous Linear ODEs with Constant Coefficients
2.2.1. Example 2.2: Case I with IVP
2.2.2. Example 2.3: Case II with IVP
2.2.3. Example 2.4: Case III with IVP
2.3. Modeling of Free Oscillations of Mass-Spring System
2.3.1. Example 2.5: Harmonic Oscillation of an Undamped Mass-Spring System
2.3.2. Example 2.6: The Three Cases of Damped Motion
2.4. Wolfram Demonstration Project: Unforced, Damped, Simple Harmonic
Motion
2.5. Summary

3. Week 3: Second-Order ODEs - 2 (Nonhomogeneous)


3.1. Nonhomogeneous Linear ODEs of Second Order
3.1.1. Example 3.1. Method of Undetermined Coefficients
3.1.2. Example 3.2. Application of Modification Rule
3.1.3. Example 3.3. Application of Sum Rule
3.1.4. Example 3.4. Another example of the Method of Undetermined Coefficients
3.2. Summary


4. Week 4: Second-Order ODEs - 3 (Forced Oscillations)


4.1. Modeling: Forced Oscillations
4.2. Nonhomogeneous ODE
4.3. Maximum amplitude of Damped Forced Oscillations
4.3.1. Example 4.1. Amplitude of the Steady State Solution. Practical Resonance
4.4. Summary

5. Week 5: Laplace Transforms - 1 (Basics)


5.1. Basics of Laplace Transforms
5.1.1. Built-in Functions in Wolfram Mathematica
5.1.2. Laplace Transform by Integration
5.1.3. Linearity of the Laplace Transform
5.1.4. Laplace Transform of Derivatives
5.2. Unit Step Function and Dirac's Delta Function
5.3. Dirac's Delta Function (Impulse Function)
5.4. Summary

6. Week 6: Laplace Transforms - 2 (Solving ODEs)


6.1. Solving an IVP by Laplace Transforms: The SOP
6.1.1. Example 6.1
6.1.2. Example 6.2
6.2. Modeling Mass-Spring System using the Unit Step & Dirac's Delta Functions
6.2.1. Mass-Spring System Under a Square Wave
6.2.2. Hammer-blow Response of a Mass-Spring System
6.2.3. Mass-Spring System Under a Sinusoidal Force for Some Time Interval
6.3. Convolution
6.4. Summary

7. Week 7: Series Solutions of ODEs


7.1. The Series Command in Wolfram Mathematica
7.1.1. Taylor and Maclaurin Series
7.2. Basic Concepts
7.2.1. Convergent vs. Divergent Series


7.2.2. Analytic at a Point


7.3. Solving ODEs by the Power Series Method
7.3.1. Standard Operating Procedures (SOPs)
7.3.2. Different Approach: Built-in Function in Wolfram Mathematica
7.4. Extended Power Series Method: Frobenius Method
7.4.1. Standard Operating Procedures (SOPs)
7.5. Summary

8. Week 8: Systems of Linear Equations


8.1. Solving the Systems of Linear Equations
8.1.1. Example 8.1: The Solve Command
8.1.2. Example 8.2: The LinearSolve Command
8.1.3. Example 8.3: Gaussian Elimination
8.1.4. Example 8.4: Gauss-Jordan Elimination
8.2. Summary

9. Week 9: Matrix Operations and Inverse


9.1. Properties of Matrix Algebra
9.1.1. Example 9.1: Matrix Addition and Scalar Multiplication
9.1.2. Example 9.2: Matrix Multiplication
9.1.3. Example 9.3: Transpose of a Matrix
9.2. Inverse of a Matrix
9.3. Summary

10. Week 10: LU Factorization and Determinants


10.1. The LU Factorization
10.1.1. Method 1: LU Factorization using Row Operations
10.1.2. Method 2: LUDecomposition Command
10.2. Determinant and Its Properties | Part 1
10.2.1. Method 1: The Shortcut Method
10.2.2. Method 2: The Cofactor Expansion
10.3. Determinant and Its Properties | Part 2
10.3.1. Method 3: Row Operations to Compute the Determinant
10.4. Some Applications of the Determinant


10.4.1. Cramer's Rule


10.4.2. Inverses from Determinants
10.5. Summary

11. Week 11: Eigenvalues and Eigenvectors


11.1. Characteristic Polynomial and Equation
11.2. Multiplicity of an Eigenvalue
11.3. Diagonalization
11.3.1. Non-Diagonalizable Matrix
11.3.2. Diagonalizable Matrix
11.4. Matrix Power
11.5. Summary

12. Week 12: Linear Algebra and Geometry


12.1. Vectors and Vector Operations
12.2. Geometry of Vectors
12.3. Span of a Set of vectors
12.4. Linear Independence
12.5. Dot Product and its Applications
12.6. Linear Transformations
12.7. Geometry of Linear Transformations
12.8. Summary

13. Week 13: Linear Systems of ODEs


13.1. System of linear first-order ODEs (IVP)
13.1.1. Method 1: Separation of Variables
13.1.2. Method 2: Laplace Transforms
13.1.3. Method 3: Eigenvalues and Eigenvectors
13.2. Summary

◼ References and Suggested Readings


◼ Mathematica-Related Books
◼ Wolfram U Interactive Courses


◼ Books on Engineering Mathematics (ODE & Linear Algebra)

Week 0: Preliminary
Introduction to Wolfram Mathematica
The secret to getting ahead is getting started. --- Mark Twain

Table of Contents
1. Prerequisites
1.1. To Begin With
1.2. Basic Algebra and Calculus
1.3. Some of the Basic Operations
2. Help Options
2.1. Help Browser
2.2. Text-based Help
3. How to | Clear User Defined Symbols
3.1. ClearAll["Global`*"]
3.2. Quit[]
4. Create Plots
4.1. Defining a Function
4.2. Graph of a Function of One Variable
4.3. Multiple Functions on a Graph
4.4. Graph of a Function of Two Variables
4.5. Parametric Plots
5. DSolve
6. How to | Visualize the Direction Field
6.1. Stream Plots
6.2. Vector Plots
6.3. Contour Plots
7. More to Explore
7.1. Animation
7.2. Interactive Manipulation
7.3. Sound Effects


Commands list
◼ N
◼ Table
◼ D
◼ Integrate
◼ Solve
◼ Coefficient
◼ ClearAll
◼ Clear
◼ Quit
◼ Plot
◼ Plot3D
◼ ParametricPlot
◼ StreamPlot
◼ VectorPlot
◼ ContourPlot
◼ DSolve
◼ Manipulate
◼ Sound

Prerequisites
To Begin With
There are a few things to keep in mind when using Mathematica.
☑ When using a PC, in order to execute a command you must hit Shift-Enter.
☑ Mathematica is Case-SenSitive (AA is not the same as aA), so be careful about what you
type.
☑ All built-in Mathematica functions are spelled out and capitalized, such as Table,
ListPlot, IntegerPart, Plot, Sin, Cos, etc.
☑ The parameters inside a function are always enclosed with square brackets, [ ].


In[ ]:= log


Out[ ]=

log

In[ ]:= Log[10]


Out[ ]=

Log[10]

☑ You can use a semicolon (;) at the end of a line if you want to perform the action, but
don’t want to see the output.
☑ Don’t forget about the copy and paste commands. This will be useful if you have to type
similar commands and don’t want to have to retype the entire command.
☑ In Mathematica, it is important to distinguish between parentheses (), brackets [], and
braces {}:
◼ Parentheses (): Used to group mathematical expressions, such as (3 + 4) / (5 + 7).
In[ ]:= (3 + 4) / (5 + 7)
Out[ ]=
7/12

◼ Brackets []: Used when calling functions, such as N[Log[10]].


In[ ]:= N[Log[10]]
Out[ ]=

2.30259

◼ Braces {}: Used when making lists, such as {i,1,20}.


In[ ]:= Table[i, {i, 1, 20}]
Out[ ]=

{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}

☑ In Mathematica, there are four types of equals: = , := , == , and === .


◼ To define a variable and store it in memory, use =. For example, to define z to be 3,
write z = 3. The syntax for setting a variable is x = … (definition of a variable).
◼ You use == to check for equality. For example, 1 - 1 == 0 will evaluate to True and
1 == 0 will evaluate to False.
◼ You use := to define your own command. (This is advanced.)
◼ You will likely not use === in this class (URL).

Basic Algebra and Calculus

☑ Use ^ (or hit CTRL+6 ) to put something to a power.


In[ ]:= Table[n ^ 2, {n, 10}]


Out[ ]=

{1, 4, 9, 16, 25, 36, 49, 64, 81, 100}

In[ ]:= Table[n ^ 2, {n, 0, 10}]


Out[ ]=

{0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100}

In[ ]:= Table[n ^ 2, {n, 0, 10, 2}]


Out[ ]=

{0, 4, 16, 36, 64, 100}

☑ pi is Pi, e is E and sqrt(-1) is I.


☑ If you want to see the numerical approximation to a fraction or irrational number, use
the function N.
For example, to find the decimal representation of pi, write N[Pi].
In[ ]:= N[Pi]
Out[ ]=

3.14159

In[ ]:= N[E]


Out[ ]=

2.71828

In[ ]:= Sqrt[- 1]


Out[ ]=

☑ Use E^x or Exp[x] to represent the function e^x.


☑ To take the derivative of a function, use D and specify the derivative with respect to
which variable.
For instance, find the first derivative of x^2 + 3 x.
In[ ]:= D[x^2 + 3 x, x]
Out[ ]=

3+2x

☑ To take the integral of a function, use Integrate and specify the integral with respect to
which variable.
For instance, find the integral of x^2 + 3 x.


In[ ]:= Integrate [x^2 + 3 x, x]


Out[ ]=

3 x2 x3
+
2 3

☑ To solve for the roots of a x^2 + b x + c = 0 symbolically, use Solve[a x^2 + b x + c == 0, x].
☑ Notice the double equals sign (==). (Mathematica is searching for when the expression is
True.)
In[ ]:= Solve[a x^2 + b x + c == 0, x]
Out[ ]=

{{x → (- b - Sqrt[b^2 - 4 a c])/(2 a)}, {x → (- b + Sqrt[b^2 - 4 a c])/(2 a)}}

☑ Coefficient[(1 + x)^10, x^3] gives the coefficient of x^3 in the expansion of (1 + x)^10.


In[ ]:= Coefficient[(1 + x)^10, x^3]
Out[ ]=

120

Some of the Basic Operations

◼ Sqrt[x]
◼ Exp[x]
◼ Log[x]
◼ Log[b, x] (logarithm with base b)
◼ Sin[x]
◼ Cos[x]
◼ Tan[x]
◼ ArcSin[x]
◼ ArcCos[x]
◼ ArcTan[x]
◼ Sinh[x]
◼ n! (factorial)
◼ Abs[x] (absolute value)
◼ Round[x] (nearest integer)
◼ Floor[x] (round down to the nearest integer)
◼ Mod[n, m]


◼ Random[ ]
◼ Max[x, y, …]
◼ Min[x ,y, …]

Help Options
Help Browser

To access the help browser, go to the Help menu and choose Wolfram Documentation. If you
want to know about a particular function in Mathematica, select it and then go to Find
Selected Function or simply hit the F1 key on your keyboard.

There are a lot of fun examples on the Wolfram Demonstrations Project (URL). You may
also share your work with the world. Getting started is simple.

Text-based Help

The Question Mark function ? allows you to get basic information about a particular
Mathematica function.
In[ ]:= ? /.
Out[ ]=

Symbol

expr /. rules or ReplaceAll [expr, rules] applies a rule or list

of rules in an attempt to transform each subpart of an expression expr.

ReplaceAll [rules] represents an operator form of ReplaceAll that can be applied to an expression.

For example, suppose we want to find out how to use the derivative function D; the question
mark function ? yields:


In[ ]:= ? D
Out[ ]=

Symbol

D[f , x] gives the partial derivative ∂ f  ∂ x.

D[f , {x, n}] gives the multiple derivative ∂n f  ∂ xn .

D[f , x, y, …] gives the partial derivative ⋯ (∂ / ∂ y) (∂ / ∂ x) f.

D[f , {x, n}, {y, m}, …] gives the multiple partial derivative ⋯ ∂m  ∂ ym  ∂n  ∂ xn  f.

D[f , {{x1 , x2 , …}}] for a scalar f gives the vector derivative ∂ f  ∂ x1 , ∂ f  ∂ x2 , ….

D[f , {array}] gives an array derivative.

The double question mark ?? gives the same information as ? but also gives information
about attributes and options.
In[ ]:= ?? D
Out[ ]=

Symbol

D[f , x] gives the partial derivative ∂ f  ∂ x.

D[f , {x, n}] gives the multiple derivative ∂n f  ∂ xn .

D[f , x, y, …] gives the partial derivative ⋯ (∂ / ∂ y) (∂ / ∂ x) f.

D[f , {x, n}, {y, m}, …] gives the multiple partial derivative ⋯ ∂m  ∂ ym  ∂n  ∂ xn  f.

D[f , {{x1 , x2 , …}}] for a scalar f gives the vector derivative ∂ f  ∂ x1 , ∂ f  ∂ x2 , ….

D[f , {array}] gives an array derivative.

Documentation Web »

Options NonConstants  {}
Attributes {Protected, ReadProtected }
Full Name System`D

If you are trying to recall a function that has the word Solve in it, then you can use the asterisk
(*) wildcard in conjunction with the word Solve, as shown below:


In[ ]:= ? *Solve*


Out[ ]=

System`

AsymptoticDSol
DSolve LinearSolve NDSolveValue RiccatiSolve SolveDelayed
veValue
AsymptoticRSol DSolveChangeV LinearSolveFun
NSolve RSolve SolveValues
veValue ariables ction
AsymptoticSolve DSolveValue LyapunovSolve NSolveValues RSolveValue
DiscreteLyapun ParametricNDS
FrobeniusSolve MainSolve Solve
ovSolve olve
DiscreteRiccatiS ParametricNDS
KnapsackSolve NDSolve SolveAlways
olve olveValue

How to | Clear User Defined Symbols


ClearAll["Global`*"]

When you set a value to a symbol, that value will be used for the symbol for the entire
Wolfram System session. Since symbols no longer in use can introduce unexpected errors
when used in new computations, clearing your definitions is very desirable.

ClearAll[symb1, symb2, …] clears all values, definitions, attributes, messages, and
defaults associated with the given symbols.

To clear all definitions of quantities you've introduced in a Mathematica session so far, type
ClearAll["Global`*"].

In[ ]:= ClearAll["Global`*"]

Assign values to two symbols (x and y) and observe their sum:

In[ ]:= x = 5; y = 7; x + y
Out[ ]=

12
Use Clear to clear the definitions for x and y:

In[ ]:= Clear[x, y]


Read this page (How to | Clear My Definitions | URL) for more details.

Quit[]


To clear all definitions or to reclaim resources used by the kernel, you may want to restart it.
There are at least two options.

◼ Option 1: Quit the kernel by choosing Evaluation ▶ Quit Kernel ▶ “kernel name”,
where “kernel name” is typically “Local”.
◼ Option 2: Quit the kernel by evaluating Quit. Quit[] (URL) terminates a Wolfram
Language kernel session. Quit[] quits only the Wolfram Language kernel, not the front
end. To quit a notebook front end, choose the Quit menu item. All kernel definitions
are lost when the kernel session terminates.
In[ ]:= Quit[]

Create Plots
Defining a Function

There are many built-in functions in the Wolfram Language, and some of them were introduced
in previous sections. This section focuses on learning how to define our own functions in
Mathematica.

☑ The syntax for defining a function that takes any single argument is f[x_] := … (definition
of a function).
For example, the command for defining a function f(x) = x^2 is
In[ ]:= f[x_] := x^2

 Notice the underscore "_" to the right of the variable x on the left-hand side of the
definition. If the underscore is not used, then the function is defined only for that particular
symbol, not for an arbitrary argument:
In[ ]:= Clear[f]
f[x] = x^3/2;

In[ ]:= f[5]
Out[ ]=

f[5]
 The delayed assignment ":=" (SetDelayed) is most often the correct choice when defining
a function. The direct (Set) assignment "=" can lead to undesirable results, as the following
comparison shows.


In[ ]:= a = 3;
setDelayed[x_] := x + y + a2 ;
set[x_] = x + y + a2 ;

In[ ]:= setDelayed[x]


Out[ ]=

9+x+y

In[ ]:= set[x]


Out[ ]=

9+x+y

In[ ]:= a = 4;
setDelayed[x]
Out[ ]=

16 + x + y

In[ ]:= set[x]


Out[ ]=

9+x+y

☑ The argument of a function may be a number or any complex algebraic expression.


In[ ]:= f[4]
Out[ ]=

16

In[ ]:= fa2 + a + 1


Out[ ]=
2
1 + a + a2 

☑ It is also possible to use a function in a calculation.


Define a function q(y) = y - 1/2 + c1 e^(-2 y):
In[ ]:= q[y_] := y - 1/2 + C1 * Exp[- 2 y];
Find the first derivative of this function:
In[ ]:= D[y - 1/2 + C1 * Exp[- 2 y], y]
Out[ ]=

1 - 2 C1 E^(-2 y)

In[ ]:= D[q[y], y]


Out[ ]=

1 - 2 C1 E^(-2 y)

Find the second derivative of the function:


In[ ]:= D[y - 1/2 + C1 * Exp[- 2 y], {y, 2}]
Out[ ]=

4 C1 E^(-2 y)

In[ ]:= D[q[y], {y, 2}]


Out[ ]=

4 C1 E^(-2 y)

☑ The Question Mark function ? allows you to get the definition of f.


In[ ]:= ?q
Out[ ]=

Symbol

Global`q

Definitions
1
q[y_] := y - + C1 Exp[- 2 y]
2

Full Name Global`q

☑ The name of a function, e.g. f, is just a symbol to Mathematica. Do not begin a function
name with a capital letter, to avoid confusion with built-in Mathematica functions. Also, this
symbol must not have been previously used for the definition of another element (variable,
table, etc.).

☑ Functions in Mathematica can have more than one argument, so we can define functions
of several variables.
In[ ]:= product[x_, y_] := x * y;

In[ ]:= 1 + product[2, 3]


Out[ ]=

7

☑ If you later give a new definition to the function, the latter definition is the one that
applies, while the previous one is replaced.
In[ ]:= product[x_, y_] := 1 + x * y

In[ ]:= product[2, 3]


Out[ ]=

7
Graph of a Function of One Variable


The command for plotting a function of one variable is


Plot[function, {variable, lower bound, upper bound}]

In[ ]:= Plot[Sin[x], {x, - 2 Pi, 2 Pi}]


Out[ ]=

1.0

0.5

-6 -4 -2 2 4 6

-0.5

-1.0

Multiple Functions on a Graph


To include several functions on the same graph, we simply give the Plot command a list of
functions separated by commas:

Plot[{f1, f2, …}, {variable, lower bound, upper bound}]

In[ ]:= Plot[{Sin[x], Cos[x], Tan[x]}, {x, - 5 Pi, 5 Pi},


PlotRange  {- 2, 2}, Frame  True, PlotStyle  {Red, Blue, Gray}]
Out[ ]=
2

-1

-2
-15 -10 -5 0 5 10 15


In[ ]:= Plot[{x * Sin[1 / x], x, - x}, {x, - 0.1, 0.1}, PlotRange  0.1,
Filling  Axis, Frame  True, AspectRatio  1 / GoldenRatio]
Out[ ]=
0.10

0.05

0.00

-0.05

-0.10
-0.10 -0.05 0.00 0.05 0.10

Graph of a Function of Two Variables


The relative command for functions of two variables is


Plot3D[function, {variable1, lower bound, upper bound}, {variable2, lower bound,
upper bound}]

In[ ]:= Plot3D[x ^ 2 - y ^ 2, {x, - 1, 1}, {y, - 1, 1}, BoxRatios  {1, 1, 1}, ImageSize  {270, 270}]
Out[ ]=

Parametric Plots

The relative command for making a parametric plot is


ParametricPlot[{fx, fy}, {t, tmin, tmax}]


The relative command for plotting several parametric curves together is


ParametricPlot[{{fx, fy}, {gx, gy}, …}, {t, tmin, tmax}]


In[ ]:= ParametricPlot[{u * Sin[u], u * Cos[u]}, {u, 0, 100},


PlotPoints → 125, Axes → False, MaxRecursion → 0, ColorFunction → "Rainbow"]
Out[ ]=

[Plot: a rainbow-colored spiral traced by {u Sin[u], u Cos[u]}]

DSolve Command

The DSolve command is used to solve a differential equation, a list of differential equations, or
a partial differential equation.


In[ ]:= ? DSolve


Out[ ]=

Symbol

DSolve[eqn, u, x] solves a differential equation for the function u, with independent variable x.

DSolve[eqn, u, {x, xmin , xmax }] solves a differential equation for x between xmin and xmax .

DSolve[{eqn1 , eqn2 , …}, {u1 , u2 , …}, …] solves a list of differential equations.

DSolve[eqn, u, {x1 , x2 , …}] solves a partial differential equation.

DSolve[eqn, u, {x1 , x2 , …} ∈ Ω] solves the partial differential equation eqn over the region Ω.

For example, find the general solution to the given ODE: y' = - 2 x y .
In[ ]:= ClearAll["Global`*"]

In[ ]:= DSolve[y '[x]  - 2 x y[x], y[x], x]


Out[ ]=
2
y[x]  -x 1 
.

Find the particular solution to the same ODE with initial condition y(0) = 1.8.
In[ ]:= solution = DSolve[{y '[x] == - 2 x y[x], y[0] == 1.8}, y[x], x]
Out[ ]=
2
y[x]  1.8 -x 
.

Plot its graph:


In[ ]:= Plot[y[x] /. solution, {x, - 3, 3}, Frame  True]
Out[ ]=

1.5

1.0

0.5

0.0

-3 -2 -1 0 1 2 3

How to | Visualize the Direction Field


Stream Plots
In[ ]:= ? StreamPlot
Out[ ]=

Symbol

StreamPlot vx , vy , {x, xmin , xmax }, {y, ymin , ymax }

generates a stream plot of the vector field vx , vy  as a function of x and y.

StreamPlot vx , vy , wx , wy , …, {x, xmin , xmax }, {y, ymin , ymax } generates plots of several vector fields.

StreamPlot […, {x, y} ∈ reg ] takes the variables {x, y} to be in the geometric region reg.

In[ ]:= f1[x_, y_] := - (2 x * y)/(1 + x^2);
plot1 = StreamPlot[{1, f1[x, y]}, {x, - 10, 10},
{y, - 10, 10}, Frame → True, Axes → True, AspectRatio → 1 / GoldenRatio]
Out[ ]=

[Plot: direction field of y' = -2xy/(1 + x^2) on -10 ≤ x, y ≤ 10]

Vector Plots
In[ ]:= ? VectorPlot
Out[ ]=

Symbol

VectorPlot vx , vy , {x, xmin , xmax }, {y, ymin , ymax }

generates a vector plot of the vector field vx , vy  as a function of x and y.

VectorPlot vx , vy , wx , wy , …, {x, xmin , xmax }, {y, ymin , ymax } plots several vector fields.

VectorPlot […, {x, y} ∈ reg ] takes the variables {x, y} to be in the geometric region reg.


In[ ]:= plot2 = VectorPlot[{1, f1[x, y]}, {x, - 10, 10},


{y, - 10, 10}, Frame  True, Axes  True, AspectRatio  1 / GoldenRatio]
Out[ ]=
10

-5

-10
-10 -5 0 5 10

In[ ]:= Show[plot1, plot2]


Out[ ]=

10

-5

-10

-10 -5 0 5 10


In[ ]:= f2[x_, y_] := x ^ 2 / (1 - y ^ 2);


Show[VectorPlot[{1, f2[x, y]} / Sqrt[1 + f2[x, y] ^ 2], {x, - 4, 4},
{y, - 4, 4}, VectorScale  0.03, VectorPoints  Fine, VectorStyle  "Arrow"],
Table[ContourPlot[- x ^ 3 + 3 y - y ^ 3  c, {x, - 4, 4}, {y, - 4, 4}, ContourStyle  Green],
{c, {- 10, - 5, 0, 5, 10}}], AspectRatio  3 / 4]
Out[ ]=
4

-2

-4
-4 -2 0 2 4

Contour Plots
In[ ]:= ? ContourPlot
Out[ ]=

Symbol

ContourPlot[f , {x, xmin , xmax }, {y, ymin , ymax }] generates a contour plot of f as a function of x and y.

ContourPlot[f == g, {x, xmin , xmax }, {y, ymin , ymax }] plots contour lines for which f = g.

ContourPlot[{f1 == g1 , f2 == g2 , …}, {x, xmin , xmax }, {y, ymin , ymax }] plots several contour lines.

ContourPlot[…, {x, y} ∈ reg ] takes the variables {x, y} to be in the geometric region reg.


In[ ]:= f3[x_, y_] := - Cos[x + y] / (3 y^2 + 2 y + Cos[x + y]);
p3 = StreamPlot[{1, f3[x, y]}, {x, - 5, 5},
{y, - 5, 5}, Frame → True, Axes → True, AspectRatio → 1 / GoldenRatio]
Out[ ]=

[Plot: direction field of y' = -cos(x + y)/(3y^2 + 2y + cos(x + y)) on -5 ≤ x, y ≤ 5]

In[ ]:= p4 = Showp3,


TableContourPlotS in[x + y] + y3 + y2  c, {x, - 6, 6}, {y, - 5, 5}, ContourStyle  Green,
{c, {- 4, - 3, - 2, - 1, 0, 1, 2, 3, 4, 8, 16, 32, 64}}
Out[ ]=

-2

-4

-4 -2 0 2 4

More to Explore
Animation


In[ ]:= ? Animate


Out[ ]=

Symbol

Animate[expr, {u, umin , umax }] generates an animation of expr in which u varies continuously from u min to umax .

Animate[expr, {u, umin , umax , du}] takes u to vary in steps du.

Animate[expr, {u, {u1 , u2 , …}}] makes u take on discrete values u1 , u2 , ….

Animate[expr, {u, …}, {v, …}, …] varies all the variables u, v, ….

In[ ]:= Animate[Plot3D[Sin[Sqrt[x ^ 2 + y ^ 2] + 2 * Pi * t], {x, - 8 * Pi, 8 * Pi}, {y, - 8 * Pi, 8 * Pi},
PlotRange  10, PlotPoints  50, AspectRatio  1,
Boxed  False, Mesh  None, Axes  False], {t, 0, 2}, ControlPlacement  Top]
Out[ ]=

Interactive Manipulation


In[ ]:= ? Manipulate


Out[ ]=

Symbol

Manipulate[expr, {u, umin , umax }] generates a version of

expr with controls added to allow interactive manipulation of the value of u.

Manipulate[expr, {u, umin , umax , du}] allows the value of u to vary between umin and umax in steps du.

Manipulate[expr, {{u, uinit }, umin , umax , …}] takes the initial value of u to be uinit .

Manipulate[expr, {{u, uinit , ulbl }, …}] labels the controls for u with ulbl .

Manipulate[expr, {u, {u1 , u2 , …}}] allows u to take on discrete values u1 , u2 , ….

Manipulate[expr, {u, …}, {v, …}, …] provides controls to manipulate each of the u, v, ….

Manipulate[expr, cu → {u, …}, cv → {v, …}, …]

links the controls to the specified controllers on an external device.


In[ ]:= g[x_, A_, w_, phi_] := A * Sin[w * x + phi];


Manipulate[
plt = Plot[g[x, A, w, phi], {x, 0, 4 Pi}, Frame  True, FrameLabel  {"t", "A*sin(ωt+ϕ}"},
LabelStyle  Directive[Black, Bold], PlotStyle  Red, PlotLabel  "A Sine Wave"],
{{A, 1, "Amplitude, A"}, 0.1, 10, Appearance  "Labeled"},
{{w, 1, "Angular frequency, ω"}, 0.1, 10, Appearance  "Labeled"},
{{phi, 0, "Phase, ϕ"}, - 2 Pi, 2 Pi, Appearance  "Labeled"},
ControlPlacement  Top, SaveDefinitions  True]
(**Introduction to Manipulate: Demo for a sine wave**)
Out[ ]=

Amplitude, A 1

Angular frequency, ω 1

Phase, ϕ 0

A Sine Wave
1.0

0.5
A*sin(ωt+ϕ}

0.0

-0.5

-1.0

0 2 4 6 8 10 12
t

Sound Effects
In[ ]:= ? Sound
Out[ ]=

Symbol

Sound[primitives] represents a sound.

Sound[primitives, t] specifies that the sound should have duration t.

Sound[primitives, {tmin , tmax }] specifies that the sound should extend from time t min to time tmax .


In[ ]:= ? SoundNote


Out[ ]=

Symbol

SoundNote[pitch] represents a music-like sound note with the specified pitch.

SoundNote[pitch, t] takes the note to have duration t.

SoundNote[pitch, {tmin , tmax }] takes the note to occupy the time interval tmin to tmax .

SoundNote[pitch, tspec, "style"] takes the note to be in the specified style.

SoundNote[pitch, tspec, "style", opts] uses the specified rendering options for the note.

In[ ]:= OdeToJoy = {{"B", "B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "B", "A", "A", "B",
"B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "A", "G", "G", "A", "A",
"B", "G", "A", "B", "C5", "B", "G", "A", "B", "C5", "B", "A", "G", "A", "D", "B",
"B", "B", "C5", "D5", "D5", "C5", "B", "A", "G", "G", "A", "B", "A", "G", "G"},
{0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1, 0.5, 0.5,
0.5, 0.5, 0.5, 0.25, 0.25, 0.5, 0.5, 0.5, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.75, 0.25, 1}};
Piano sound:
In[ ]:= Sound[SoundNote[##, "Piano"] & @@@ Transpose[OdeToJoy]] // EmitSound

Violin sound:
In[ ]:= Sound[SoundNote[##, "Violin"] & @@@ Transpose[OdeToJoy]] // EmitSound

Week 1: First-Order ODEs
How to Solve First-Order ODEs Step-by-step?

Table of Contents
1. Separable equations
1.1. Example 1.1: Separable ODE
1.2. Example 1.2: Initial Value Problem (IVP)
2. Exact ODEs & Integrating factors
2.1. Example 1.3: An Exact ODE
2.2. Non-Exact ODEs and Integrating Factors
2.3. Example 1.4: A Non-exact ODE with IVP
3. First-Order Linear ODEs
3.1. Example 1.5: First-Order ODE, IVP
4. Bernoulli Equation
4.1. Example 1.6: Logistic Equation
5. Summary

Commands list
◼ Integrate[f, x]
◼ ClearAll [symb1, symb2, ...]
◼ Simplify[expr]
◼ FullSimplify[expr]
◼ Solve[expr, vars]

Separable Equations
Many practically useful ODEs can be reduced to the form

g(y) y' = f(x)

Then, by integrating both sides with respect to x, we obtain

∫ g(y) y' dx = ∫ f(x) dx + C


According to calculus, y' dx = dy. So the variable of integration for the left side becomes y:

∫ g(y) dy = ∫ f(x) dx + C

When f and g are continuous functions, the integrals mentioned above exist, and by evaluating
them, we obtain a general solution to the given ODE.
Example 1.1: Separable ODE

y' = (x + 1) e^(-x) y^2

◆ Step 1. The given ODE is separable: y^(-2) dy = (x + 1) e^(-x) dx


In[ ]:= ClearAll["Global`*"]

In[ ]:= expr = y '[x] - (x + 1) * Exp[- x] * y[x]2


Out[ ]=

- -x (1 + x) y[x]2 + y′ [x]

◆ Step 2. Integrate the left-side with respect to y.


In[ ]:= Integratey-2 , y
Out[ ]=
1
-
y

In[ ]:= ? Integrate


Out[ ]=

Symbol

Integrate[f, x] gives the indefinite integral ∫ f dx.

Integrate[f, {x, xmin, xmax}] gives the definite integral ∫_xmin^xmax f dx.

Integrate[f, {x, xmin, xmax}, {y, ymin, ymax}, …] gives the multiple integral ∫_xmin^xmax dx ∫_ymin^ymax dy … f.

Integrate[f, {x, y, …} ∈ reg] integrates over the geometric region reg.

◆ Step 3. Integrate the right-side with respect to x.


In[ ]:= Integrate[(x + 1) * Exp[- x], x]
Out[ ]=

-x (- 2 - x)

In[ ]:= FullSimplify[Integrate[(x + 1) * Exp[- x], x]]


Out[ ]=

- -x (2 + x)


◆ Step 4. By integration, - 1/y = - e^(-x) (2 + x) + C.

In[ ]:= FullSimplify[Solve[- 1/y == - Exp[- x] * (2 + x) + C, y]]
Out[ ]=

{{y → E^x/(2 + x - C E^x)}}

◆ Step 5. Verify the answer:


In[ ]:= ySoln = Exp[x] / (2 - C * Exp[x] + x)
Out[ ]=

E^x/(2 - C E^x + x)

In[ ]:= yDSoln = FullSimplify[D[ySoln, x]]


Out[ ]=

x (1 + x)
2
2 - C x + x

◆ Substituting y and y' to the initially given ODE, we get:


In[ ]:= FullSimplify[expr /. {y[x]  ySoln, y '[x]  yDSoln}]
Out[ ]=

◆ Check the answer using DSolve command:


In[ ]:= FullSimplifyDSolvey '[x]  (x + 1) * Exp[- x] * y[x]2 , y[x], x
Out[ ]=

x
y[x]  
2 + x -  x 1

Example 1.2: Initial Value Problem (IVP)


y' = - 2 x y,  y(0) = 1.8

◆ Step 1. The given ODE is separable: (1/y) dy = - 2 x dx
In[ ]:= ClearAll["Global`*"]

In[ ]:= expr = y '[x] + 2 x * y[x]


Out[ ]=

2 x y[x] + y′ [x]

◆ Step 2. Integrate the left-side with respect to y.


In[ ]:= Integrate[1/y, y]
Out[ ]=

Log[y]

◆ Step 3. Integrate the right-side with respect to x.


In[ ]:= Integrate[- 2 x, x]
Out[ ]=

- x2

◆ Step 4. By integration, we get ln y = - x^2 + c. Solving this expression, we get a general


solution to the ODE:
In[ ]:= ? Solve
Out[ ]=

Symbol

Solve[expr, vars] attempts to solve the system expr of equations or inequalities for the variables vars.

Solve[expr, vars, dom] solves over the domain dom. Common choices of dom are Reals, Integers, and Complexes.

◆ We solve the expression over the domain of real numbers, because the natural loga-
rithm of y exists only when y > 0:
In[ ]:= FullSimplify[Solve[Log[y] == - x^2 + c, y, Reals]]
Out[ ]=

{{y → E^(c - x^2)}}

◆ Step 5. Now let's use the initial value to get a particular solution:


In[ ]:= ySoln = c * Exp[- x^2]
Out[ ]=

c E^(-x^2)

In[ ]:= y0 = ySoln /. x → 0


Out[ ]=

c

In[ ]:= Solve[y0 == 1.8, c]


Out[ ]=

{{c → 1.8}}

In[ ]:= yIVP = ySoln /. c → 1.8


Out[ ]=

1.8 E^(-x^2)


◆ Step 6. Verify the answer:


In[ ]:= yDIVP = FullSimplify[D[yIVP, x]]
Out[ ]=
2
- 3.6 -x x

◆ Substituting y and y' to the initially given ODE, we get:


In[ ]:= FullSimplify[expr /. {y[x]  yIVP, y '[x]  yDIVP}]
Out[ ]=

0.

◆ Check the answer using DSolve command:


In[ ]:= FullSimplify[DSolve[{y '[x]  - 2 x * y[x], y[0]  1.8}, y[x], x]]
Out[ ]=
2
y[x]  1.8 -x 

Exact ODEs & Integrating Factors


A 1st-order ODE M(x, y) + N(x, y) y' = 0, written as

M(x, y) dx + N(x, y) dy = 0

is an exact differential equation if its left side can be written as the differential of some
function u(x, y):

du = (∂u/∂x) dx + (∂u/∂y) dy

Comparing the ODE and the differential form, we see that

du = 0  ⟹  ∂u/∂x = M and ∂u/∂y = N

Let's do some partial-derivative manipulation to get

∂M/∂y = ∂²u/∂y∂x and ∂N/∂x = ∂²u/∂x∂y

Consequently, the condition for the exactness of the ODE is that the mixed partial
derivatives are equal:

∂M/∂y = ∂N/∂x

Finally, by integration we obtain an implicit solution to the ODE as a function u(x, y):

u(x, y) = c

The function u(x, y) can be found in the following systematic way: EITHER by integrating M
with respect to x, where k(y) plays the role of the constant of integration,

u = ∫ M dx + k(y)

OR by integrating N with respect to y, where l(x) is the constant of integration:

u = ∫ N dy + l(x)

Example 1.3: An Exact ODE

cos(x + y) dx + (3 y^2 + 2 y + cos(x + y)) dy = 0

◆ Step 1. Test for exactness. By looking at the equation, we see that M = cos(x + y) and
N = 3 y^2 + 2 y + cos(x + y). But instead of M & N, we use P & Q, because the capital
letter N is protected by Mathematica.
ClearAll["Global`*"]
P[x_, y_] := Cos[x + y];
Q[x_, y_] := 3 y^2 + 2 y + Cos[x + y];

◆ NOTE: The variable cannot be named “N” because the Wolfram language has a built-in
symbol described below.
In[ ]:= ?N
Out[ ]=

Symbol

N[expr] gives the numerical value of expr.

N[expr, n] attempts to give a result with n-digit precision.

◆ Let’s check if the given ODE is exact:


In[ ]:= D[P[x, y], y]  D[Q[x, y], x] (**Check for exactness **)
Out[ ]=

True

◆ The given ODE is exact.


◆ Step 2. Find the general solution u(x, y) by integrating with respect to x:

u = ∫ P dx + k(y)


In[ ]:= u = Integrate[P[x, y], x] + k[y]


Out[ ]=

k[y] + Cos[y] Sin[x] + Cos[x] Sin[y]

◆ where k[y] is a yet-to-be-determined function.


◆ Step 3. Let’s solve for k(y) :
In[ ]:= ODEofK = Simplify[D[u, y]  Q[x, y]]
Out[ ]=

y (2 + 3 y)  k′ [y]

In[ ]:= KSoln = DSolve[ODEofK, k[y], y]


Out[ ]=

k[y]  y2 + y3 + 1 

In[ ]:= u
Out[ ]=

k[y] + Cos[y] Sin[x] + Cos[x] Sin[y]

◆ Step 4. Substitute the value of k[y] to the given ODE:


In[ ]:= u /. KSoln〚1〛
Out[ ]=

y2 + y3 + 1 + Cos[y] Sin[x] + Cos[x] Sin[y]

In[ ]:= FullSimplify[u /. KSoln〚1〛]


Out[ ]=

y2 + y3 + 1 + Sin[x + y]

◆ So, the general solution to the ODE is u(x, y) = sin(x + y) + y^2 + y^3 + C[1] = constant, i.e.

u(x, y) = sin(x + y) + y^2 + y^3 = c

◆ Step 5. Check the obtained solution:


In[ ]:= uSoln = y2 + y3 + 1 + Sin[x + y];
D[uSoln, x]
Out[ ]=

Cos[x + y]

In[ ]:= D[uSoln, y]


Out[ ]=

2 y + 3 y2 + Cos[x + y]

◆ So, the solution is correct.

Non-Exact ODEs and Integrating Factors


What to do if the equation is not exact?


In the case of nonexactness, the ODE can be solved by reducing the equation to the exact form.
This is done with integrating factors. The nonexact ODE is given in the following form:

P(x, y) dx + Q(x, y) dy = 0

If both sides of the equation are multiplied by a function F, the result is

F P dx + F Q dy = 0

This function F(x, y) is called an integrating factor.


How to find the integrating factor?


As discussed before, the condition for the exactness of an ODE is that the partial derivatives
are equal: ∂M/∂y = ∂N/∂x.

Thus, the condition for exactness when the integrating factor is present is

∂(F P)/∂y = ∂(F Q)/∂x

By the product rule, with subscripts denoting partial derivatives, this gives

F_y P + F P_y = F_x Q + F Q_x

Because the integrating factor is assumed to depend on only one variable (either x or y), this
simplifies easily.

Let's assume that the integrating factor depends on x only (so F_y = 0), and let's denote the
derivative F_x by F' = ∂F/∂x. Then this leads to

F P_y = F' Q + F Q_x

Simplifying, we get the formula for the integrating factor F(x):

F(x) = exp(∫ R(x) dx),  where  R(x) = (1/Q) (∂P/∂y - ∂Q/∂x)

After similar mathematical manipulations, the formula for the integrating factor F*(y) is found:

F*(y) = exp(∫ R*(y) dy),  where  R*(y) = (1/P) (∂Q/∂x - ∂P/∂y)

Example 1.4: A Non-exact ODE with IVP


(e^(x+y) + y e^y) dx + (x e^y - 1) dy = 0,  y(0) = - 1

◆ Step 1. Test for exactness. By looking at the equation, we see that P = e^(x+y) + y e^y and
Q = x e^y - 1.
In[ ]:= ClearAll["Global`*"]
P[x_, y_] := Exp[x + y] + y * Exp[y];
Q[x_, y_] := x * Exp[y] - 1;
FullSimplify[D[P[x, y], y] == D[Q[x, y], x]]
Out[ ]=

E^y (E^x + y) == 0
◆ The result shows that the given ODE is NOT exact.


◆ Step 2. Finding the integrating factor. First, assume that the integrating factor
depends only on x.
In[ ]:= Rx = FullSimplify[(1/Q[x, y]) * (D[P[x, y], y] - D[Q[x, y], x])]
Out[ ]=

(E^y (E^x + y))/(- 1 + E^y x)

◆ We see that Rx contains both x and y. Therefore, the first assumption is wrong.
◆ Now, let's assume that F depends on y.
In[ ]:= Ry = FullSimplify[(1/P[x, y]) * (D[Q[x, y], x] - D[P[x, y], y])]
Out[ ]=

- 1

◆ The second assumption is correct, and the integrating factor depends only on y, F(y).
In[ ]:= Fy = FullSimplify[Exp[Integrate[Ry, y]]]
Out[ ]=

E^(-y)
◆ Let's redefine P[x,y] and Q[x,y] after multiplying both sides of the given ODE by the
integrating factor e^(-y):
In[ ]:= ClearAll[P, Q, x, y];
P[x_, y_] := (Exp[x + y] + y * Exp[y]) * Exp[- y];
Q[x_, y_] := (x * Exp[y] - 1) * Exp[- y];

◆ Check the obtained equation for exactness: (e^x + y) dx + (x - e^(-y)) dy = 0


In[ ]:= FullSimplify[D[P[x, y], y]  D[Q[x, y], x]]


Out[ ]=

True

◆ Indeed, it is exact.
◆ Step 3. General solution. As shown before, the general solution to the ODE can be
found by the following formula, where k(y) is the constant of integration:

u = ∫ P F(y) dx + k(y)

In[ ]:= u = Integrate[P[x, y], x] + k[y]


Out[ ]=

x + x y + k[y]

◆ where k[y] is a yet-to-be-determined function.


◆ Let’s solve for k(y) :
In[ ]:= ODEofK = Simplify[D[u, y]  Q[x, y]]
Out[ ]=

-y + k′ [y]  0

In[ ]:= KSoln = DSolve[ODEofK, k[y], y]


Out[ ]=

k[y]  -y + 1 

In[ ]:= u
Out[ ]=

x + x y + k[y]

◆ Thus we have:
In[ ]:= u /. KSoln〚1〛
Out[ ]=

x + -y + x y + 1

In[ ]:= uSoln = FullSimplify[u /. KSoln〚1〛]


Out[ ]=

x + -y + x y + 1

◆ Hence, the general solution is

u(x, y) = e x + e -y + xy = 

◆ Step 4. Find the particular solution with y(0) = - 1:


In[ ]:= u0 = uSoln /. x → 0
Out[ ]=

1 + E^(-y) + C[1]


In[ ]:= Solve[u0  - 1, 1 ]


Out[ ]=

1  - -y 1 + 2 y 

In[ ]:= uIVP = SimplifyuSoln /. 1  - -y 1 + 2 y 


Out[ ]=

- 2 + x + x y

◆ Step 5. Check the obtained solution:


In[ ]:= D[uSoln, x]
Out[ ]=

x + y

In[ ]:= D[uSoln, y]


Out[ ]=

- -y + x

◆ It can be seen that D[u, x] dx + D[u, y] dy = 0 recovers the given ODE. Since
u = const., we have du = D[u, x] dx + D[u, y] dy = 0 .

First-Order Linear Equations


A first-order ODE (on an interval a < x < b) written in the standard form

y' + p(x) y = r(x)

is called a linear ODE. If r(x) equals 0, the equation becomes a homogeneous linear ODE:

y' + p(x) y = 0

It is easily noticed that this ODE is separable, so by separating variables, the solution to the
equation is

y(x) = c e^(-∫ p(x) dx)    (c = ±e^(c*) when y ≷ 0)

together with the trivial solution y(x) = 0 for all x in the mentioned interval.

In the case that the equation is a nonhomogeneous linear ODE,

y' + p(x) y = r(x)

another method is used. Here the ODE has the pleasant property that the integrating factor
depends only on x:

F y' + p F y = r F

After some mathematical manipulations (refer to the textbook), the general solution to the
nonhomogeneous linear ODE is obtained:

y(x) = e^(-h) ( ∫ e^h r dx + c ),  where  h = ∫ p(x) dx

or, expanded,

y(x) = e^(-h) ∫ e^h r dx + c e^(-h)

Example 1.5: First-Order ODE, IVP


y' + y tan x = sin 2 x, y(0) = 1

◆ Step 1. From the standard form, here p = tan x, r = sin 2 x.


ClearAll["Global`*"]
p = Tan[x];
r = Sin[2 x];

◆ We can introduce p & r as functions as was done in the previous example, but we don’t
have to.
◆ Step 2. Find h using the formula above.
In[ ]:= h = Integrate[p, x]
Out[ ]=

- Log[Cos[x]]

◆ Step 3. Find the general solution to the given ODE. ysoln0 is the first term and ysoln1
is the second term in the general solution.
In[ ]:= ysoln0 = Exp[- h] * Integrate[Exp[h] * r, x]
Out[ ]=

- 2 Cos[x]2

In[ ]:= ysoln1 = Exp[- h] * c1


Out[ ]=

c1 Cos[x]

In[ ]:= ysolnGen = ysoln0 + ysoln1


Out[ ]=

c1 Cos[x] - 2 Cos[x]2

◆ Step 4. Find the particular solution by the initial data: y(0) = 1.


In[ ]:= ysolnGen /. x  0
Out[ ]=

- 2 + c1

◆ Solve for c1:


In[ ]:= Solvec1 Cos[x] - 2 Cos[x]2  /. x  0  1, c1


Out[ ]=

{{c1  3}}

In[ ]:= ysoln = (ysoln0 + ysoln1) /. c1  3


Out[ ]=

3 Cos[x] - 2 Cos[x]2

◆ Step 5. Verify the solution.


In[ ]:= FullSimplify[D[ysoln, x] + p * ysoln  r]
Out[ ]=

True

Bernoulli Equation
Many ODEs of great importance in engineering are nonlinear but can be transformed into a
linear ODE. One of the most useful is the Bernoulli equation:

y' + p(x) y = g(x) y^a    (a is any real number)

When a = 0, the Bernoulli equation is a linear 1st-order ODE, which we have solved in the
previous section.

When a = 1, the Bernoulli equation is a separable, linear, 1st-order, homogeneous ODE,
which is even simpler to solve.

When a is neither 0 nor 1, we have a nonlinear ODE for y(x).

The trick to solving the Bernoulli equation is to introduce the following variable transformation:

u(x) = [y(x)]^(1-a)

Using the transformation variable u(x), we get a linear ODE, which we know how to solve
(differentiating u = y^(1-a) and substituting the Bernoulli equation for y' gives):

u' + (1 - a) p u = (1 - a) g

Example 1.6: Logistic Equation

y' = A y - B y^2

The given ODE is a Bernoulli equation (with a = 2) known as the Logistic Equation (Verhulst
Equation).


In[ ]:= ClearAll["Global`*"]

◆ Step 1. Find the u(x) transformation variable. From the equation, we see that a is equal
to 2 .
In[ ]:= u[y] = y[x]1-a /. a  2
Out[ ]=
1
y[x]

In[ ]:= D[u[y], x]


Out[ ]=
y′ [x]
-
y[x]2

In[ ]:= FullSimplifyD[u[y], x] /. y′ [x]  A * y[x] - B * y[x]2 


Out[ ]=
A
B - 
y[x]

◆ Step 2. We found earlier that u(x) = 1/y(x). Hence, using it, u'(x) becomes
u'(x) = B - A u(x).

◆ Now we have a linear ODE of the form u' + A u = B.


◆ Step 3. Solve the obtained linear ODE. It is nonhomogeneous. Thus, we use the same
method as in Example 1.3.
In[ ]:= p = A;
r = B;

In[ ]:= h = Integrate[p, x]


Out[ ]=

Ax

In[ ]:= usoln0 = Exp[- h] * Integrate[Exp[h] * r, x]


Out[ ]=
B
A

In[ ]:= usoln1 = Exp[- h] * c1


Out[ ]=

c1 -A x

In[ ]:= usolnGen = ysoln0 + ysoln1


Out[ ]=

ysoln0 + ysoln1


◆ Step 4. Since u(x) = 1/y(x), the general solution y(x) is found as follows:

In[ ]:= FullSimplify[Solve[usolnGen == 1/y[x], y[x]]]
Out[ ]=

{{y[x] → A/(B + A c1 E^(-A x))}}

◆ Step 5. Also, directly from the ODE, it is seen that y(x) = 0 for all x is a solution to
the equation as well (a trivial solution).
◆ Step 6. Always verify the solution.
In[ ]:= ysoln = A/(B + A c1 E^(-A x))
Out[ ]=

A/(B + A c1 E^(-A x))

In[ ]:= FullSimplify[D[ysoln, x]] == FullSimplify[A * ysoln - B * ysoln^2]


Out[ ]=

True

Summary
After completing this chapter, you should be able to
◼ solve several types of first-order ODEs step-by-step using Wolfram Mathematica.
◼ develop SOPs to solve first-order ODEs.
◼ develop the habit of always checking your solutions for quality assurance.

Week 2: Second-Order ODEs (Part 1)
How to solve 2nd-Order ODEs Step-by-step?

Table of Contents
1. Homogeneous Linear ODEs of Second Order
1.1. Example 2.1: Solve Second-Order ODE using DSolve
2. Homogeneous Linear ODEs with Constant Coefficients
2.1. Example 2.2: Case I with IVP
2.2. Example 2.3: Case II with IVP
2.3. Example 2.4: Case III with IVP
3. Modeling of Free Oscillations of Mass-Spring System
3.1. Example 2.5: Harmonic Oscillation of an Undamped Mass-Spring System
3.2. Example 2.6: The Three Cases of Damped Motion
4. Wolfram Demonstration Project: Unforced, Damped, Simple Harmonic Motion
5. Summary

Commands list
◼ DSolve[eqn, u, x]
◼ expr[[i]] or Part[expr, i]
◼ Log[z]
◼ D[f, x]
◼ Chop[expr]

Homogeneous Linear ODEs of Second Order


The standard form of a linear second-order ODE is as follows:

y'' + p(x) y' + q(x) y = r(x)

If the r(x) term is equal to 0:


y'' + p(x) y' + q(x) y = 0

the ODE is called homogeneous. If r(x) ≠ 0, then it is called nonhomogeneous.

Linear homogeneous second-order ODEs have a rich solution structure that relies on the
Superposition Principle.

The superposition principle (or linearity principle) means that we can obtain further solutions
from given ones by adding them or multiplying them by any constants:

y = c1 y1 + c2 y2    (c1, c2 arbitrary constants)

Note: This principle works only for homogeneous AND linear ODEs. A machine check of this
fact on a concrete equation is sketched below.

For a second-order homogeneous linear ODE, the Initial Value Problem consists of two
initial conditions:

y(x0) = K0,  y'(x0) = K1

The general solution to the ODE is

y = c1 y1 + c2 y2

Here y1 and y2 are not proportional, and c1 and c2 are arbitrary constants. Such a pair of
linearly independent solutions is called a basis of solutions.

Example 2.1: Solve Second-Order ODE using DSolve

(x^2 - x) y'' - x y' + y = 0

◆ Step 1. Use the DSolve function directly, including the equation for the function y[x],
with independent variable x:
In[ ]:= ClearAll["Global`*"]
sol = DSolve[(x^2 - x) * y ''[x] - x * y '[x] + y[x] == 0, y[x], x]
Out[ ]=

{{y[x] → x C[1] + C[2] (- 1 - x Log[x])}}

◆ The solution for y[x] is written to the "ysol" variable. Here the double square brackets
[[ ]] are the short form of the Part function, which is used to get parts of lists.
◆ In short, the program gets the 1st part of the expression "sol" and writes it to the new
variable "ysol".


In[ ]:= ysol = y[x] /. sol〚1〛; ysol


Out[ ]=

x 1 + 2 (- 1 - x Log[x])

In[ ]:= ? Part


Out[ ]=

Symbol

expr[[i]] or Part [expr, i] gives the ith part of expr.

expr[[-i]] counts from the end.

expr[[i, j, …]] or Part [expr, i, j, …] is equivalent to expr[[i]][[j]] ….

expr[[{i1 , i2 , …}]] gives a list of the parts i1 , i2 , … of expr.

expr[[m ;; n]] gives parts m through n.

expr[[m ;; n ;; s]] gives parts m through n in steps of s.

expr[["key"]] gives the value associated with the key "key" in an association expr.

expr[[Key[k ]]] gives the value associated with an arbitrary key k in the association expr.

◆ The new function called GeneralSol[x_] takes the solution to y[x] from the variable
“ysol”. It is done so in the next step, we can plot the graph of the obtained solution.
In[ ]:= GeneralSol[x_] := ysol; GeneralSol[x]
Out[ ]=

x 1 + 2 (- 1 - x Log[x])

In[ ]:= Plot[GeneralSol[x] /. {C[1]  1, C[2]  1}, {x, 0, 100},


Frame  True, FrameLabel  {"x", "y(x)"}, GridLines  Automatic,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, PlotStyle  {Dashed, Red},
PlotLegends  Placed[{"y(x)= x 1 + 2 (-1 - x Log[x]) 1 1, 2 1 "}, Above]]
Out[ ]=

y(x)= x 1 + 2 (-1 - x Log[x]) 1 1, 2 1

-100
y (x )

-200

-300

0 20 40 60 80 100
x


◆ From the output, it is seen that the solution perfectly matches the form y = c1 y1 + c2 y2;
thus a basis of solutions is the following: y1 = x and y2 = - 1 - x ln(x).
◆ Note: In Wolfram Mathematica, the function Log[x] gives the natural logarithm of x.
In[ ]:= ? Log
Out[ ]=

Symbol

Log[z] gives the natural logarithm of z (logarithm to base e).

Log[b, z] gives the logarithm to base b.

◆ Step 2. Check the obtained solution by comparing the Left-Hand Side (LHS) and Right-
Hand Side (RHS).
In[ ]:= LHS = FullSimplify[
(x^2 - x) * D[GeneralSol[x], {x, 2}] - x * D[GeneralSol[x], {x, 1}] + GeneralSol[x]]
Out[ ]=

0

In[ ]:= RHS = 0


Out[ ]=

0

In[ ]:= LHS == RHS


Out[ ]=

True

Homogeneous Linear ODEs with Constant Coefficients


Now let's consider homogeneous linear second-order ODEs with constant coefficients a
and b:

y'' + a y' + b y = 0

These ODEs have huge implications for mechanical and electrical vibrations, as we will
see further on.

To solve homogeneous linear second-order ODEs of this kind, we need to solve the
characteristic equation (or auxiliary equation)

λ^2 + a λ + b = 0

Because the characteristic equation is quadratic, it may have three different kinds of roots,
depending on the sign of the discriminant a^2 - 4 b. These 3 cases are as follows:

(Case I) Two real roots if a^2 - 4 b > 0,


(Case II) A real double root if a^2 - 4 b = 0,
(Case III) Complex conjugate roots if a^2 - 4 b < 0.
Depending on the case, the basis of solutions and the general solution to the ODE are
summarized in the following table (reconstructed here; it matches the forms used in
Examples 2.2-2.4 below):

Case | Roots of λ^2 + a λ + b = 0            | Basis of solutions                        | General solution
I    | distinct real λ1, λ2                  | e^(λ1 x), e^(λ2 x)                        | y = c1 e^(λ1 x) + c2 e^(λ2 x)
II   | real double root λ = -a/2             | e^(-a x/2), x e^(-a x/2)                  | y = (c1 + c2 x) e^(-a x/2)
III  | complex conjugates -a/2 ± i ω,        | e^(-a x/2) cos(ω x), e^(-a x/2) sin(ω x)  | y = e^(-a x/2) (A cos(ω x) + B sin(ω x))
     | ω = Sqrt[b - a^2/4]                   |                                           |

Example 2.2: Case I with IVP


y'' + y' - 2 y = 0,  y(0) = 4,  y'(0) = - 5

◆ Step 1. Solve the characteristic equation and determine which case the ODE refers to.
In[ ]:= ClearAll["Global`*"]
roots = Solve[λ^2 + λ - 2 == 0, λ] (** Note that we have to use ==, not = **)
Out[ ]=

{{λ → - 2}, {λ → 1}}

◆ Step 2. Find the general solution. We got two distinct real roots, so we proceed with
Case I.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=

{- 2, 1}


In[ ]:= GeneralSol[x_] := c1 * Exp[λ1 * x] + c2 * Exp[λ2 * x]; GeneralSol[x]


Out[ ]=

c1 -2 x + c2 x

◆ Step 3. Find the particular solution using the initial conditions: y(0) = 4, y'(0) = - 5
In[ ]:= cond1 = GeneralSol[0] == 4;
cond2 = (D[GeneralSol[x], x] /. x → 0) == - 5;

◆ Solve for the arbitrary constants.


In[ ]:= soln = Solve[{cond1, cond2}, {c1, c2}]
Out[ ]=

{{c1  3, c2  1}}

◆ Obtain the particular solution.


In[ ]:= GeneralSol[x] /. soln
Out[ ]=

 3  -2 x +  x 

◆ Step 4. Verify the solution.


In[ ]:= ivpSoln[x_] := 3 -2 x + x ;

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

-5

◆ Check that the solution satisfies the given ODE y'' + y' - 2 y = 0.
In[ ]:= LHS = D[ivpSoln[x], {x, 2}] + D[ivpSoln[x], {x, 1}] - 2 * ivpSoln[x]
Out[ ]=

6 E^(-2 x) + 2 E^x - 2 (3 E^(-2 x) + E^x)

In[ ]:= RHS = 0


Out[ ]=

In[ ]:= FullSimplify[LHS  RHS]


Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).


In[ ]:= ClearAll[y]; DSolve[y ''[x] + y '[x] - 2 y[x]  0, y[x], x]


Out[ ]=

y[x]  -2 x 1 + x 2 

In[ ]:= yp = DSolve[{y ''[x] + y '[x] - 2 y[x]  0, y[0]  4, y '[0]  - 5}, y[x], x]
Out[ ]=

y[x]  -2 x 3 + 3 x 

◆ Using the DSolve function yields in the same result.


◆ Let’s also take a look at the graph of the solution:
In[ ]:= Ploty[x] /. yp, {x, 0, 3}, Frame  True, FrameLabel  {"x", "y(x)"},
GridLines  Automatic, BaseStyle  {FontWeight  "Bold", Black, FontSize  12},
PlotStyle  {Orange}, PlotLegends  Placed"y(x)=  -2 x (3 + 3 x )", Above
Out[ ]=

[Plot of y(x) = e^(-2x)(3 + 3e^(3x)) for 0 ≤ x ≤ 3, with frame labels x and y(x)]

Example 2.3: Case II with IVP


y'' + y' + 0.25 y = 0, y(0) = 3.0, y' (0) = - 3.5

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Solve the characteristic equation and determine what case the ODE refers to.
In[ ]:= roots = Solveλ2 + λ + 0.25  0, λ (** Note that we have to use  , not = **)
Out[ ]=

{{λ  - 0.5}, {λ  - 0.5}}

◆ Step 2. Find the general solution. We got a real double root, so we proceed with Case II.


In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }


(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=

{- 0.5, - 0.5}

In[ ]:= GeneralSol[x_] := (c1 + c2 * x) * Exp[λ1 * x]; GeneralSol[x]


Out[ ]=

-0.5 x (c1 + c2 x)

◆ Step 3. Find the particular solution using the initial conditions: y(0) = 3.0, y' (0) = - 3.5
In[ ]:= cond1 = GeneralSol[0]  3.0;
cond2 = ( D[GeneralSol[x], x] /. x  0)  - 3.5;

◆ Solve for the arbitrary constants.


In[ ]:= soln = Solve[{cond1, cond2}, {c1, c2}]
Out[ ]=

{{c1  3., c2  - 2.}}

◆ Obtain the particular solution.


In[ ]:= GeneralSol[x] /. soln
Out[ ]=

-0.5 x (3. - 2. x)

◆ Step 4. Verify the solution.


In[ ]:= ivpSoln[x_] := -0.5` x (3.` - 2.` x); ivpSoln[x]
Out[ ]=

-0.5 x (3. - 2. x)

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

3.

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

- 3.5

◆ Check that the solution satisfies the given ODE y'' + y' + 0.25 y = 0 .
In[ ]:= LHS = D[ivpSoln[x], {x, 2}] + D[ivpSoln[x], {x, 1}] + 0.25 * ivpSoln[x]
Out[ ]=

0.

In[ ]:= RHS = 0


Out[ ]=


In[ ]:= LHS  RHS


Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[y ''[x] + y '[x] + 0.25 y[x]  0, y[x], x]
Out[ ]=

y[x]  -0.5 x 1 + -0.5 x x 2 

In[ ]:= yp =
FullSimplify[DSolve[{y ''[x] + y '[x] + 0.25 y[x]  0, y[0]  3.0, y '[0]  - 3.5}, y[x], x]]
Out[ ]=

y[x]  -0.5 x (3. - 2. x)

◆ Using the DSolve function yields the same result.


◆ Let’s also take a look at the graph of the solution.
In[ ]:= Ploty[x] /. yp, {x, 0, 20}, Frame  True, FrameLabel  {"x", "y(x)"}, GridLines  Automatic,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, PlotRange  {- 1, 3},
PlotStyle  Automatic, PlotLegends  Placed"y(x)=  -0.5 x (3 - 2x)", Above
Out[ ]=

[Plot of y(x) = e^(-0.5x)(3 - 2x) for 0 ≤ x ≤ 20]


Example 2.4: Case III with IVP


y'' + 0.4 y' + 9.04 y = 0, y(0) = 0, y' (0) = 3

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Solve the characteristic equation and determine what case the ODE refers to.


In[ ]:= a = 0.4; b = 9.04;


roots = Solveλ2 + a * λ + b  0, λ(** Note that we have to use  , not = **)
Out[ ]=

{{λ  - 0.2 - 3. }, {λ  - 0.2 + 3. }}

◆ Step 2. Find the general solution. We got two complex roots, so we proceed with Case
III.
In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:

y1 = e^(-ax/2) cos ωx and y2 = e^(-ax/2) sin ωx, where ω = √(b - a²/4)

In[ ]:= ω = Sqrtb - a2  4


Out[ ]=

3.

In[ ]:= ClearAll[A, B];


GeneralSol[x_] := (A * Cos[ω * x] + B * Sin[ω * x] ) * Exp[(- a / 2) * x];
GeneralSol[x]
Out[ ]=

-0.2 x (A Cos[3. x] + B Sin[3. x])

◆ Step 3. Find the particular solution using the initial conditions: y(0) = 0, y' (0) = 3
In[ ]:= cond1 = GeneralSol[0]  0;
cond2 = ( D[GeneralSol[x], x] /. x  0)  3;

◆ Solve for the constants A and B.


In[ ]:= soln = Solve[{cond1, cond2}, {A, B}]
Out[ ]=

{{A  0., B  1.}}

◆ Obtain the particular solution.


In[ ]:= GeneralSol[x] /. soln
Out[ ]=

-0.2 x (0. + 1. Sin[3. x])

In[ ]:= FullSimplify[GeneralSol[x] /. soln]


Out[ ]=

1. -0.2 x Sin[3. x]

◆ The solution is y = e-0.2 x sin (3 x) .


◆ Step 4. Verify the solution.


In[ ]:= ivpSoln[x_] := -0.2` x (0.` + 1.` Sin[3.` x]);

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

0.

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

3.

◆ Check that the solution satisfies the given ODE y'' + 0.4 y' + 9.04 y = 0 .
In[ ]:= LHS = FullSimplify[ D[ivpSoln[x], {x, 2}] + 0.4 * D[ivpSoln[x], {x, 1}] + 9.04 * ivpSoln[x]]
Out[ ]=

- 1.72085 × 10-15 -0.2 x Sin[3. x]

In[ ]:= Chop[LHS]


Out[ ]=

In[ ]:= RHS = 0


Out[ ]=

In[ ]:= Chop[LHS]  RHS


Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 5. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[y ''[x] + 0.4 y '[x] + 9.04 y[x]  0, y[x], x]
Out[ ]=

y[x]  -0.2 x 2 Cos[3. x] + -0.2 x 1 Sin[3. x]

In[ ]:= yp = DSolve[{y ''[x] + 0.4 y '[x] + 9.04 y[x]  0, y[0]  0, y '[0]  3}, y[x], x]
Out[ ]=

y[x]  1. -0.2 x Sin[3. x]

◆ Using the DSolve function yields the same result.


◆ Let’s also take a look at the graph of the solution.


In[ ]:= Plot1.` -0.2` x , ivpSoln[x], - 1.` -0.2` x , {x, 0, 30}, Frame  True,
PlotStyle  {{Black, Dashed}, {Red, Thick}, {Black, Dashed}}, Frame  True,
FrameLabel  {"x", "y(x)"}, BaseStyle  {FontWeight  "Bold", Black, FontSize  12},
PlotStyle  {Black}, GridLines  Automatic,
PlotLegends  "e-0.2 x ", "y(x)= -0.2 x sin (3 x)", "- e-0.2 x ",
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic, "DefaultMeshStyle"  AbsolutePointSize[6],
"ScalingFunctions"  None}, PlotRange  {- 1.0, 1.0}
Out[ ]=

[Plot of y(x) = e^(-0.2x) sin(3x) together with the envelope curves e^(-0.2x) and -e^(-0.2x) (dashed), for 0 ≤ x ≤ 30]

◆ The solution oscillates between the envelope curves e^(-0.2x) and -e^(-0.2x).



Modeling of Free Oscillations of Mass-Spring System


The motion of the mechanical mass-spring system is determined by Newton’s second law:
.

Mass × Acceleration = m y'' = Force


.

There are two possible scenarios for the mass-spring system motion.
.


First Case. Undamped System.


The damping in the system is negligible. In this case the ODE of the Undamped System is as
follows:
m y'' + k y = 0
.

where m is the object's mass and k is the spring constant. This is a homogeneous linear ODE with
constant coefficients, whose general solution is obtained easily
.


y(t) = A cos ω0 t + B sin ω0 t,  where ω0 = √(k/m)
.

An alternative representation that shows physical characteristics of amplitude and phase shift is

y(t) = C cos(ω0 t - δ),  C = √(A² + B²),  tan δ = B/A
.
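A minimal sketch of this conversion, with illustrative values A = 3, B = 4, ω0 = 2 (not taken from any example in the text):

(* convert (A, B) into amplitude C and phase shift δ; w0 stands in for ω0 *)
A0 = 3; B0 = 4; w0 = 2;
Camp = Sqrt[A0^2 + B0^2]; (* amplitude, here 5 *)
δ0 = ArcTan[A0, B0];      (* phase shift, tan δ = B/A *)
FullSimplify[A0 Cos[w0 t] + B0 Sin[w0 t] == Camp Cos[w0 t - δ0]] (* -> True *)
.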

.

Example 2.5: Harmonic Oscillation of an Undamped Mass–Spring System


.

If a mass–spring system with an iron ball of weight W = 98 nt (about 22 lb) can be regarded
as undamped, and the spring is such that the ball stretches it 1.09 m (about 43 in.), how
many cycles per minute will the system execute? What will its motion be if we pull the ball
down from rest by 16 cm (about 6 in.) and let it start with zero initial velocity?
.

◆ Step 1. Set up the model and determine the suitable ODE .


◆ Find the spring constant from Hooke’s law.
In[ ]:= ClearAll["Global`*"]
W = 98;
l = 1.09;
k = W/l
Out[ ]=

89.9083

◆ Find the mass of the object.


In[ ]:= g = 9.81;
m = W/g
Out[ ]=

9.98981

◆ Find the frequency.


In[ ]:= ω0 = Sqrt[k/m]
Out[ ]=

3.

In[ ]:= f = ω0/(2 Pi) (**In [Hz]**)
Out[ ]=

0.477465


In[ ]:= fcpm = Round[f * 60] (**In [cycles per minute]**)


Out[ ]=

29

◆ Find the coefficients A and B using the initial conditions: y(0) = 0.16 , y' (0) = ω0 B = 0
In[ ]:= y[t_] := A * Cos[ω0 * t] + B * Sin[ω0 * t]

In[ ]:= y0 = y[0]


Out[ ]=

0. + 1. A

In[ ]:= Solve[y0  0.16, A]


Out[ ]=

{{A  0.16}}

In[ ]:= Solve[{ω0 * B  0}, B]


Out[ ]=

{{B  0.}}

In[ ]:= y[t] /. {A  0.16, B  0}


Out[ ]=

0.16 Cos[3. t]

◆ Step 2. Verify the solution.


In[ ]:= ySoln[t_] := 0.16 * Cos[3 * t]

In[ ]:= ySoln[0]  0.16


Out[ ]=

True

In[ ]:= D[ySoln[t] /. t  0, t]  0


Out[ ]=

True

In[ ]:= LHS = FullSimplify[m * D[ySoln[t], {t, 2}] + k * ySoln[t]]


Out[ ]=

1.77636 × 10-15 Cos[3 t]

In[ ]:= RHS = 0


Out[ ]=

In[ ]:= Chop[LHS]  RHS


Out[ ]=

True


◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Step 3. Verify the solution by DSolve (Not Required).
In[ ]:= ClearAll[y]; DSolve[m * y ''[x] + k * y[x]  0, y[x], x]
Out[ ]=

{{y[x]  1. 1 Cos[3. x] + 1. 2 Sin[3. x]}}

In[ ]:= yp = DSolve[{m * y ''[x] + k * y[x]  0, y[0]  0.16, y '[0]  0}, y[x], x]
Out[ ]=

{{y[x]  0.16 Cos[3. x]}}

◆ Using the DSolve function yields the same result.


◆ Let’s also take a look at the graph of the solution.
In[ ]:= Plot[y[x] /. yp, {x, 0, 10}, Frame  True, FrameLabel  {"x", "y(x)"},
GridLines  Automatic, BaseStyle  {FontWeight  "Bold", Black, FontSize  12},
PlotRange  {- 0.2, 0.2}, PlotStyle  RGBColor[0.3, 0.8, 0.5],
PlotLegends  Placed[{"y(x)= 0.16 cos(3 x)"}, Above]]
Out[ ]=

[Plot of y(x) = 0.16 cos(3x) for 0 ≤ x ≤ 10]

Second Case. Damped System.


The system has considerable damping. In this case, the ODE of the Damped System is as
follows:
m y'' + c y' + k y = 0
.

here c is called the damping constant. This is a homogeneous linear ODE with constant
coefficients. We can obtain the general solution by solving the characteristic equation as
discussed before


λ² + (c/m) λ + (k/m) = 0
.

Again there are three cases with three different kinds of roots, depending on the sign of the
discriminant (c/m)² - 4(k/m).
.

Case I: c² > 4 m k. Distinct real roots λ1, λ2. (Overdamping)
Case II: c² = 4 m k. A real double root. (Critical damping)
Case III: c² < 4 m k. Complex conjugate roots. (Underdamping)
.

As before, the solution to the ODE in each case is summarized below:

Case I. Overdamping

y(t) = c1 e^(-(α-β)t) + c2 e^(-(α+β)t),  where α = c/(2m) and β = (1/(2m))√(c² - 4 m k)
.
.

Case II. Critical damping


y(t) = (c1 + c2 t) e^(-αt),  α = c/(2m)
.
.

Case III. Underdamping

y(t) = e^(-αt) (A cos ω* t + B sin ω* t) = C e^(-αt) cos(ω* t - δ)
C² = A² + B²,  tan δ = B/A,  α = c/(2m)
ω* = (1/(2m))√(4 m k - c²) = √(k/m - c²/(4 m²))
.
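A minimal sketch that classifies the damping regime from the sign of c² - 4 m k; the values m = 10 kg and k = 90 N/m anticipate Example 2.6 below:

(* classify damping from the discriminant c^2 - 4 m k *)
dampingCase[m_, c_, k_] := Which[
  c^2 - 4 m k > 0, "Case I: overdamping",
  c^2 - 4 m k == 0, "Case II: critical damping",
  True, "Case III: underdamping"];
dampingCase[10, #, 90] & /@ {100, 60, 10}
(* -> {"Case I: overdamping", "Case II: critical damping", "Case III: underdamping"} *)
.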


Example 2.6: The Three Cases of Damped Motion


.

If the mass–spring system of Example 2.5, with an iron ball of mass m = 10 kg, is now
regarded as damped, and the spring has a spring constant k = 90 N/m. We pull the ball down
from rest by 16 cm (about 6 in.) and let it start with zero initial velocity as before. How does
the motion change if we change the damping constant c from one to another of the


following three values?


.

(I) c = 100 kg/sec


(II) c = 60 kg/sec
(III) c = 10 kg/sec
.

In[ ]:= ClearAll["Global`*"]

(I) c = 100 kg / sec

In[ ]:= m = 10;


k = 90;
c = 100;

In[ ]:= LHS = m * y '' + c * y ' + k * y


RHS = 0;
Out[ ]=

90 y + 100 y′ + 10 y′′

◆ Solve the characteristic equation.


In[ ]:= roots = Solve[λ^2 + (c/m)*λ + (k/m)  0, λ]
Out[ ]=

{{λ  - 9}, {λ  - 1}}

◆ There are two distinct roots, so we proceed with Case I, overdamping. This gives the
general solution:
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=

{- 9, - 1}

In[ ]:= GeneralSol[x_] := c1 * Exp[λ1 * x] + c2 * Exp[λ2 * x]; GeneralSol[x]


Out[ ]=

c1 -9 x + c2 -x

◆ Find the particular solution using the initial conditions: y(0) = 0.16, y'(0) = 0.
In[ ]:= cond1 = GeneralSol[0]  0.16;
cond2 = ( D[GeneralSol[x], x] /. x  0)  0;

◆ Solve for the arbitrary constants.


In[ ]:= soln = Solve[{cond1, cond2}, {c1, c2}]
Out[ ]=

{{c1  - 0.02, c2  0.18}}

◆ Obtain the particular solution.


In[ ]:= yp1 = GeneralSol[x] /. soln


Out[ ]=

- 0.02 -9 x + 0.18 -x 

◆ Check the particular solution.


In[ ]:= ivpSoln[x_] := - 0.02` -9 x + 0.18` -x ;

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

0.16

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

0.

◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]
Out[ ]=

- 2.88658 × 10-15 -9 x

In[ ]:= Chop[FullSimplify[


LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]]  RHS
Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.

(II) c = 60 kg / sec

In[ ]:= m = 10;


k = 90;
c = 60;

In[ ]:= LHS = m * y '' + c * y ' + k * y


RHS = 0;
Out[ ]=

90 y + 60 y′ + 10 y′′

◆ Solve the characteristic equation.


In[ ]:= roots = Solve[λ^2 + (c/m)*λ + (k/m)  0, λ]
Out[ ]=

{{λ  - 3}, {λ  - 3}}


◆ There is a real double root, so we proceed with Case II, critical damping. This gives the
general solution.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
(** Double squared brackets [[]] get the ith element from the list**)
Out[ ]=

{- 3, - 3}

In[ ]:= GeneralSol[x_] := (c1 + c2 * x) * Exp[λ1 * x]; GeneralSol[x]


Out[ ]=

-3 x (c1 + c2 x)

◆ Find the particular solution using the initial conditions: y(0) = 0.16, y'(0) = 0.
In[ ]:= cond1 = GeneralSol[0]  0.16;
cond2 = ( D[GeneralSol[x], x] /. x  0)  0;

◆ Solve for the arbitrary constants.


In[ ]:= soln = Solve[{cond1, cond2}, {c1, c2}]
Out[ ]=

{{c1  0.16, c2  0.48}}

◆ Obtain the particular solution.


In[ ]:= yp2 = GeneralSol[x] /. soln
Out[ ]=

-3 x (0.16 + 0.48 x)

◆ Check the particular solution.


In[ ]:= ivpSoln[x_] := -3 x (0.16` + 0.48` x);

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

0.16

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

0.

◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]
Out[ ]=

7.10543 × 10-15 -3 x x

In[ ]:= Chop[FullSimplify[


LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]]  RHS


Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.

(III) c = 10 kg / sec

In[ ]:= m = 10;


k = 90;
c = 10;

In[ ]:= LHS = m * y '' + c * y ' + k * y


RHS = 0;
Out[ ]=

90 y + 10 y′ + 10 y′′

◆ Solve the characteristic equation.


In[ ]:= roots = Solve[λ^2 + (c/m)*λ + (k/m)  0, λ]
Out[ ]=
{{λ  (1/2) (-1 -  √35)}, {λ  (1/2) (-1 +  √35)}}

◆ Find the general solution. We got two complex roots, so we proceed with Case III.
In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:

y1 = e^(-cx/(2m)) cos ω* x and y2 = e^(-cx/(2m)) sin ω* x, where ω* = √(k/m - c²/(4 m²))

In[ ]:= ω* = Sqrt[k/m - c^2/(4 m^2)]
Out[ ]=

√35 / 2

In[ ]:= ClearAll[A, B];


GeneralSol[x_] := (A * Cos[ω* * x] + B * Sin[ω* * x] ) * Exp[(- c / (2 m) ) * x];
GeneralSol[x]
Out[ ]=

^(-x/2) (A Cos[√35 x/2] + B Sin[√35 x/2])

◆ Find the particular solution using the initial conditions: y(0) = 0.16, y'(0) = 0.
In[ ]:= cond1 = GeneralSol[0]  0.16;
cond2 = ( D[GeneralSol[x], x] /. x  0)  0;


◆ Solve for the arbitrary constants.


In[ ]:= soln = Solve[{cond1, cond2}, {A, B}]
Out[ ]=

{{A  0.16, B  0.0270449}}

◆ Obtain the particular solution.


In[ ]:= yp3 = GeneralSol[x] /. soln
Out[ ]=

^(-x/2) (0.16 Cos[√35 x/2] + 0.0270449 Sin[√35 x/2])

◆ Check the particular solution.


In[ ]:= ivpSoln[x_] := ^(-x/2) (0.16` Cos[√35 x/2] + 0.02704493615131253` Sin[√35 x/2]);

◆ Check for the initial conditions.


In[ ]:= ivpSoln[0]
Out[ ]=

0.16

In[ ]:= D[ivpSoln[x], {x, 1}] /. x  0


Out[ ]=

0.

◆ Check that the solution satisfies the given ODE my'' + cy' + ky = 0.
In[ ]:= FullSimplify[
LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]
Out[ ]=

^(-x/2) (-1.77636 × 10^-15 Cos[√35 x/2] - 4.44089 × 10^-16 Sin[√35 x/2])

In[ ]:= Chop[FullSimplify[


LHS /. {y ''  D[ivpSoln[x], {x, 2}], y '  D[ivpSoln[x], {x, 1}], y  ivpSoln[x]}]]  RHS
Out[ ]=

True

◆ So the solution satisfies both the initial conditions and the ODE check.
◆ Let's plot all three curves on the same graph.


In[ ]:= Plot[{yp1, yp2, yp3}, {x, 0, 10}, Frame  True, FrameLabel  {"t", "y(t)"},
GridLines  Automatic, BaseStyle  {FontWeight  "Bold", Black, FontSize  12},
PlotRange  {- 0.1, 0.15}, PlotStyle  {Red, Green, Blue},
PlotLegends  {"c = 100 kg/sec (Overdamping)",
"c = 60 kg/sec (Critical damping)", "c = 10 kg/sec (Underdamping)"}]
Out[ ]=

[Plot of yp1, yp2, yp3 for 0 ≤ t ≤ 10, with legend: c = 100 kg/sec (Overdamping), c = 60 kg/sec (Critical damping), c = 10 kg/sec (Underdamping)]

Wolfram Demonstrations Project: Unforced, Damped,


Simple Harmonic Motion
This Demonstration illustrates unforced, damped, simple harmonic motion using the standard
mass and spring setup. By manipulating the mass, Hooke’s constant, and the damping
coefficient parameters, you can observe the phenomenon of critical damping.
.

The source code below was developed by John Erickson, Chicago State University (2009).
Open content licensed under CC BY-NC-SA.
John Erickson, Chicago State University
“Unforced, Damped, Simple Harmonic Motion”
https://demonstrations.wolfram.com/UnforcedDampedSimpleHarmonicMotion/
Wolfram Demonstrations Project; Published: March 10, 2009; Accessed on July 26, 2022.
In[ ]:= ClearAll["Global`*"]


In[ ]:= solPlotDamp[springLength_, initEquibPos_, initVel_, mm_, kk_, cc_, tt_] :=


Module{x, t},
cc kk
sol = x[t] /. DSolvex ''[t] + x '[t] + x[t]  0,
mm mm
x[0]  initEquibPos, x '[0]  initVel, x[t], t〚1〛;

ColumnText @ TraditionalForm @ Rowc2 - 4 k m, " = ", cc2 - 4 kk mm,

GraphicsGridPlot[{springLength, springLength + sol}, {t, 0, 40}, PlotRange 


{- 5, 15}, PlotStyle  {Red, Black}, AxesLabel  {"time", "position"},
Epilog  {Black, PointSize[.03], Point[{0, springLength + sol /. t  tt}],
Green, PointSize[.05], Point[{tt, springLength + sol /. t  tt}]}],
Plot1 + .3 Sin[π s (5 - sol /. t  tt)], {s, 0, springLength + sol /. t  tt},
PlotRange  {{0, 15}, {0, 3}}, PlotStyle  { Black, Thickness[.005 kk]},
AxesLabel  {"position", None}, Epilog  Orange, Thickness[.03 cc],
springLength + sol /. t  tt
Line{0, 1.6},  , 1.6,
2
springLength + sol /. t  tt
Thickness[.06 cc], Line , 1.6,
2
{springLength + sol /. t  tt, 1.6},
Blue, Dashed, Line[{{springLength, 0}, {springLength, 2}}],
Black, PointSize[.05], Point[{springLength + sol /. t  tt, 0}],
Red, Rectangle[{springLength + sol /. t  tt, 0},
{springLength + mm + sol /. t  tt, 2}], ImageSize  {540, 270}, Center


In[ ]:= Manipulate[solPlotDamp[springLength, initEquibPos, initVel, mm, kk, cc, tt],


{{tt, 0.0, "time"}, 0, 40, Appearance  "Labeled"},
{{springLength, 5, "spring length"}, 3, 8, Appearance  "Labeled"},
{{initEquibPos, 2, "initial position"}, 1, 5, Appearance  "Labeled"},
{{initVel, 0.0, "initial velocity"}, 0, 1, Appearance  "Labeled"},
{{cc, 0, "damping coefficient c"}, 0, 2, Appearance  "Labeled"},
{{mm, 1, "mass m"}, 1, 4, Appearance  "Labeled"},
{{kk, 1, "Hooke's constant k"}, 1, 3, Appearance  "Labeled"},
ControlPlacement  Top, SaveDefinitions  True, SynchronousUpdating  False]
Out[ ]=

[Manipulate output: labeled sliders for time, spring length, initial position, initial velocity, damping coefficient c, mass m, and Hooke's constant k; the display shows the value of c² - 4 k m together with a position-vs-time plot and the spring diagram]

Summary
After completing this chapter, you should be able to
◼ solve 2nd-order linear homogeneous ODEs step-by-step using Wolfram Mathematica.


◼ develop SOPs to solve 2nd-order linear homogeneous ODEs.


◼ develop the habit of always checking your solutions for quality assurance.
◼ learn and use information, tools, and technology to solve engineering math problems.

Week 3: Second-Order ODEs (Part 2)
How to Solve Second-Order ODEs Step-by-step?

Table of Contents
1. Nonhomogeneous Linear ODEs of Second Order
1.1. Example 3.1. Method of Undetermined Coefficients
1.2. Example 3.2. Application of Modification Rule
1.3. Example 3.3. Application of Sum Rule
1.4. Example 3.4. Another example of the Method of Undetermined Coefficients
2. Summary

Commands list
◼ Sqrt[z]
◼ Exp[z]
◼ Collect[expr,x]
◼ Chop[expr]
◼ Plot[f, {x, x_min, x_max}]

Nonhomogeneous Linear ODEs of Second Order


The standard form of the Nonhomogeneous Linear Second Order ODE is as follows:
.

y'' + p(x) y' + q(x) y = r(x)


.

The general solution of the nonhomogeneous ODE on the open interval I is


.

y (x ) = y h (x ) + y p (x )
.

Here yh(x) = c1 y1 + c2 y2 is the general solution of the homogeneous ODE on the same
interval I. We learned how to solve it earlier (by solving the characteristic equation).
.


The yp(x) is any solution on I containing no arbitrary constants. It can be found by using the
Method of Undetermined Coefficients. The method is suitable for linear ODEs with constant
coefficients a and b:
y'' + a y' + b y = r(x)
.
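For reference, the standard table of choices for yp(x) (the usual undetermined-coefficients table, e.g. as in Kreyszig; the rows below are exactly the ones invoked in the examples that follow):

Term in r(x) → Choice for yp(x)
k e^(γx) → C e^(γx)
k x^n (n = 0, 1, …) → K_n x^n + K_(n-1) x^(n-1) + … + K_1 x + K_0
k cos(ωx) or k sin(ωx) → K cos(ωx) + M sin(ωx)
k e^(αx) cos(ωx) or k e^(αx) sin(ωx) → e^(αx) (K cos(ωx) + M sin(ωx))
.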

Note 1: If a term in your choice for yp(x) happens to be a solution of the homogeneous ODE,
use the Modification Rule (multiply this term by x or by x2).
.

Note 2: If a term in your choice for yp(x) happens to be a sum of functions in the first column
of the Table above, then for yp(x) choose the sum of the functions in the corresponding lines of
the second column (Sum Rule).
.

Example 3.1. Method of Undetermined Coefficients

y'' + y = 0.001 x2 y(0) = 0, y' (0) = 1.5


In[ ]:= ClearAll["Global`*"]

In[ ]:= LHSOp[y_, x_] = y ''[x] + y[x]


Out[ ]=

y[x] + y′′ [x]

In[ ]:= rhsFunc[x_] = 0.001 x2


Out[ ]=

0.001 x2

◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 0; b = 1;
roots = Solveλ2 + a * λ + b  0, λ
Out[ ]=

{{λ  - }, {λ  }}

◆ We got two complex roots, so we proceed with Case III.


In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:

y1 = e^(-ax/2) cos ωx and y2 = e^(-ax/2) sin ωx, where ω = √(b - a²/4)

In[ ]:= ω = Sqrtb - a2  4


Out[ ]=

In[ ]:= ClearAll[A, B];


yh[x_] := (A * Cos[ω * x] + B * Sin[ω * x] ) * Exp[(- a / 2) * x];
yh[x]
Out[ ]=

A Cos[x] + B Sin[x]

◆ Step 2. Applying the method of undetermined coefficients, find a solution yp(x).


◆ Since the r(x) term is in the form of k xn for (n = 0, 1, ...), the corresponding yp(x)
choice (second row in the Table) is yp = K2 x2 + K1 x + K0.
◆ K2, K1, K0 are coefficients to be determined
In[ ]:= yp[x_] = K2 * x2 + K1 * x + K0
Out[ ]=

K0 + K1 x + K2 x2

◆ Let’s plug the assumed solution yp(x) into the LHS:


In[ ]:= LHS = LHSOp[yp, x]


Out[ ]=

K0 + 2 K2 + K1 x + K2 x2

In[ ]:= RHS = rhsFunc[x]


Out[ ]=

0.001 x2

◆ Next, equate coefficients of x and x2 because the coefficient of each power of x must be
the same on both sides. Hence, LHS-RHS must be zero for all x.
In[ ]:= Q = Collect(LHS - RHS), x, x2 
Out[ ]=

K0 + 2 K2 + K1 x + (- 0.001 + K2) x2

◆ The condition that all coefficients must be 0 gives us 3 equations with 3 unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn0 = K0 + 2 K2  0 (** constant term **);
eqn1 = K1  0 (**coefficient of x **);
eqn2 = (- 0.001` + K2)  0 (**coefficient of x2 **);

In[ ]:= coeffSoln = Solve[{eqn0, eqn1, eqn2}, {K2, K1, K0}]


Out[ ]=

{{K2  0.001, K1  0, K0  - 0.002}}

◆ Let’s substitute the coefficients to the solution.


In[ ]:= ypSoln[x_] = yp[x] /. coeffSoln〚1〛
Out[ ]=

- 0.002 + 0.001 x2

◆ Check the solution.


In[ ]:= LHSCheck = LHSOp[ypSoln, x]
Out[ ]=

0. + 0.001 x2

In[ ]:= FullSimplify[Chop[LHSCheck]]


Out[ ]=

0.001 x2

In[ ]:= FullSimplify[Chop[LHSCheck]]  RHS


Out[ ]=

True

◆ Great! The same as the right hand side.


◆ Step 3. Then, the general solution to the initial nonhomogeneous ODE is:


In[ ]:= ygeneral[x_] = yh[x] + ypSoln[x]


Out[ ]=

- 0.002 + 0.001 x2 + A Cos[x] + B Sin[x]

◆ Step 4. Find the particular solution using the initial conditions: y(0) = 0, y' (0) = 1.5.
dy
◆ Find the derivative y' = .
dx
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=

0.002 x + B Cos[x] - A Sin[x]

In[ ]:= IC1 = ygeneral[0]  0


Out[ ]=

- 0.002 + A  0

In[ ]:= IC2 = dygeneral[0]  1.5


Out[ ]=

0. + B  1.5

In[ ]:= valuesofcofficients = Solve[{IC1, IC2}, {A, B}]


Out[ ]=

{{A  0.002, B  1.5}}

◆ Hence, the particular solution to the given ODE is:


In[ ]:= yparticular[x_] = ygeneral[x] /. valuesofcofficients〚1〛
Out[ ]=

- 0.002 + 0.001 x2 + 0.002 Cos[x] + 1.5 Sin[x]

◆ Step 5. Verify the solution to the ODE y'' + y = 0.001 x2 with initial conditions:
y(0) = 0, y' (0) = 1.5.
In[ ]:= LHSOp[yparticular, x]
Out[ ]=

0. + 0.001 x2

In[ ]:= FullSimplify[Chop[LHSOp[yparticular, x]]]  rhsFunc[x]


Out[ ]=

True

In[ ]:= yparticular[0]


Out[ ]=

0.

In[ ]:= D[yparticular[x], {x, 1}] /. {x  0}


Out[ ]=

1.5


◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.
In[ ]:= Plotyparticular[x], {x, 0, 60}, Frame  True,
PlotStyle  {{Blue, Thick}}, Frame  True, FrameLabel  {"x", "y(x)"},
PlotLegends  Placed"y(x)=-0.002+0.001 x2 +0.002 cos(x)+ 1.5 sin(x)", Above,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic, "DefaultMeshStyle"  AbsolutePointSize[6],
"ScalingFunctions"  None}, PlotRange  {- 2, 5}
Out[ ]=

[Plot of y(x) = -0.002 + 0.001x² + 0.002 cos(x) + 1.5 sin(x) for 0 ≤ x ≤ 60]

◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DSolve[LHSOp[y, x]  rhsFunc[x] , y[x], x] (** A general solution **)
Out[ ]=

y[x]  - 0.002 + 0.001 x2 + 1. 1 Cos[1. x] + 1. 2 Sin[1. x]

In[ ]:= DSolve[{LHSOp[y, x]  rhsFunc[x], y[0]  0, y '[0]  1.5} , y[x], x]


(** A particular solution **)
Out[ ]=

y[x]  - 0.002 + 0.001 x2 + 0.002 Cos[1. x] + 1.5 Sin[1. x]

Example 3.2. Application of Modification Rule

y'' + 3 y' + 2.25 y = - 10 e-1.5 x y(0) = 1, y' (0) = 0

In[ ]:= ClearAll["Global`*"]

In[ ]:= LHSOp[y_, x_] = y ''[x] + 3 y '[x] + 2.25 y[x]


Out[ ]=

2.25 y[x] + 3 y′ [x] + y′′ [x]


In[ ]:= rhsFunc[x_] = - 10 Exp[- 1.5 x]


Out[ ]=

- 10 -1.5 x

◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 3; b = 2.25;
roots = Solveλ2 + a * λ + b  0, λ
Out[ ]=
{{λ  - 1.5}, {λ  - 1.5}}

◆ We got a real double root, so we proceed with Case II.


In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
Out[ ]=

{- 1.5, - 1.5}

In[ ]:= yh[x_] := (c1 + c2 * x) * Exp[λ1 * x]; yh[x]


Out[ ]=

-1.5 x (c1 + c2 x)

◆ Step 2. Applying the method of undetermined coefficients, find a solution yp(x).


◆ Since the r(x) term is in the form of k eγ x , the corresponding yp(x) choice (first row in
the Table) is yp = K0 e-1.5 x . K0 is a coefficient to be determined.
◆ Warning! Here we need to use the Modification Rule, because yp(x) term happens to be
the solution to the corresponding homogeneous equation. Thus, multiply the term by x2
and get the correct expression of form yp = K0 x2 e-1.5 x .
In[ ]:= yp[x_] = K0 * x2 * Exp[- 1.5 x]
Out[ ]=

-1.5 x K0 x2

◆ Let’s plug the assumed solution yp(x) into the LHS.


In[ ]:= LHS = LHSOp[yp, x]
Out[ ]=

2 -1.5 x K0 - 6. -1.5 x K0 x + 4.5 -1.5 x K0 x2 + 3 2 -1.5 x K0 x - 1.5 -1.5 x K0 x2 

In[ ]:= RHS = rhsFunc[x]


Out[ ]=

- 10 -1.5 x

In[ ]:= LHS - RHS


Out[ ]=

10 -1.5 x + 2 -1.5 x K0 - 6. -1.5 x K0 x + 4.5 -1.5 x K0 x2 + 3 2 -1.5 x K0 x - 1.5 -1.5 x K0 x2 


◆ Next, gather the coefficients of e^(-1.5x), x e^(-1.5x), and x² e^(-1.5x), because the coefficient of
each term must be the same on both sides. Hence, LHS-RHS must be zero for all x.
In[ ]:= Q = Collect(LHS - RHS), Exp[- 1.5 x], x * Exp[- 1.5 x], x2 * Exp[- 1.5 x]
Out[ ]=

-1.5 x (10 + 2 K0)

◆ The condition that all coefficients must be 0 gives us only one equation with the
unknown K0, which we can solve using Solve[ ].
In[ ]:= eqn0 = 10 + 2 K0  0
Out[ ]=

10 + 2 K0  0

In[ ]:= coeffSoln = Solve[{eqn0}, {K0}]


Out[ ]=

{{K0  - 5}}

◆ Let’s substitute the coefficient to the solution.


In[ ]:= ypSoln[x_] = yp[x] /. coeffSoln〚1〛
Out[ ]=

- 5 -1.5 x x2

◆ Check the solution.


In[ ]:= LHSCheck = LHSOp[ypSoln, x]
Out[ ]=

- 10 -1.5 x + 30. -1.5 x x - 22.5 -1.5 x x2 + 3 - 10 -1.5 x x + 7.5 -1.5 x x2 

In[ ]:= FullSimplify[Chop[LHSCheck]]


Out[ ]=

- 10. -1.5 x

In[ ]:= FullSimplify[Chop[LHSCheck]]  RHS


Out[ ]=

True

◆ Great! The same as the right hand side.


◆ Step 3. Then, the general solution to the initial nonhomogeneous ODE is:
In[ ]:= ygeneral[x_] = yh[x] + ypSoln[x]
Out[ ]=

- 5 -1.5 x x2 + -1.5 x (c1 + c2 x)

◆ Step 4. Find the particular solution using the initial conditions: y(0) = 1, y' (0) = 0.
dy
◆ Find the derivative y' = .
dx


In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]


Out[ ]=

c2 -1.5 x - 10 -1.5 x x + 7.5 -1.5 x x2 - 1.5 -1.5 x (c1 + c2 x)

In[ ]:= IC1 = ygeneral[0]  1


Out[ ]=

0. + 1. c1  1

In[ ]:= IC2 = dygeneral[0]  0


Out[ ]=

0. - 1.5 c1 + 1. c2  0

In[ ]:= valuesofcofficients = Solve[{IC1, IC2}, {c1, c2}]


Out[ ]=

{{c1  1., c2  1.5}}

◆ Hence, the particular solution to the given ODE is:


In[ ]:= yparticular[x_] = ygeneral[x] /. valuesofcofficients〚1〛
Out[ ]=

- 5 -1.5 x x2 + -1.5 x (1. + 1.5 x)

◆ Step 5. Verify the solution to the ODE y'' + 3 y' + 2.25 y = - 10 e-1.5 x with initial
conditions: y(0) = 1, y' (0) = 0.
In[ ]:= LHSOp[yparticular, x]
Out[ ]=

- 14.5 -1.5 x + 30. -1.5 x x - 11.25 -1.5 x x2 + 2.25 -1.5 x (1. + 1.5 x) +
3 1.5 -1.5 x - 10 -1.5 x x + 7.5 -1.5 x x2 - 1.5 -1.5 x (1. + 1.5 x) +
2.25 - 5 -1.5 x x2 + -1.5 x (1. + 1.5 x)

In[ ]:= FullSimplify[Chop[LHSOp[yparticular, x]]]  rhsFunc[x]


Out[ ]=

True

In[ ]:= yparticular[0]


Out[ ]=

1.

In[ ]:= D[yparticular[x], {x, 1}] /. {x  0}


Out[ ]=

0.

◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.


In[ ]:= Plotyparticular[x], {x, 0, 10}, Frame  True,


PlotStyle  {{Red, Thick}}, Frame  True, FrameLabel  {"x", "y(x)"},
PlotLegends  Placed"y(x)=-5 -1.5 x x2 + -1.5 x (1 + 1.5 x)", Above,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic, "DefaultMeshStyle"  AbsolutePointSize[6],
"ScalingFunctions"  None}, PlotRange  {- 2, 2}
Out[ ]=

y(x)=-5 -1.5 x x 2 + -1.5 x (1 + 1.5 x)

1
y (x )

-1

-2
0 2 4 6 8 10
x

◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DSolve[LHSOp[y, x]  rhsFunc[x] , y[x], x] (** A general solution **)
Out[ ]=

y[x]  - 5. -1.5 x x2 + -1.5 x 1 + -1.5 x x 2 

In[ ]:= DSolve[{LHSOp[y, x]  rhsFunc[x], y[0]  1, y '[0]  0} , y[x], x]


(** A particular solution **)
Out[ ]=

y[x]  - 5. -1.5 x - 0.2 - 0.3 x + 1. x2 

In[ ]:= FullSimplify[yparticular[x]]


(** The solution that we obtained through a step-by-step SOP **)
Out[ ]=

- 5. -1.5 x (- 0.2 + (- 0.3 + x) x)

Example 3.3. Application of Sum Rule


y'' + 2 y' + 0.75 y = 2 cos (x) - 0.25 sin (x) + 0.09 x y(0) = 2.78, y' (0) = - 0.43

In[ ]:= ClearAll["Global`*"]

◆ Create expressions for the LHS operator and the RHS function.


In[ ]:= LHSOp[y_, x_] = y ''[x] + 2 y '[x] + 0.75 y[x]


Out[ ]=

0.75 y[x] + 2 y′ [x] + y′′ [x]

In[ ]:= rhsFunc[x_] = 2 Cos[x] - 0.25 Sin[x] + 0.09 x


Out[ ]=

0.09 x + 2 Cos[x] - 0.25 Sin[x]

◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 2; b = 0.75;
roots = Solveλ2 + a * λ + b  0, λ
Out[ ]=

{{λ  - 1.5}, {λ  - 0.5}}

◆ We got two distinct roots, so we proceed with Case I to obtain the general solution.
In[ ]:= λ1 = λ /. roots〚1〛; λ2 = λ /. roots〚2〛; {λ1 , λ2 }
Out[ ]=

{- 1.5, - 0.5}

In[ ]:= yh[x_] := c1 * Exp[λ1 * x] + c2 * Exp[λ2 * x]; yh[x]


Out[ ]=

c1 -1.5 x + c2 -0.5 x

◆ Step 2. Applying the method of undetermined coefficients, find a solution yp(x).


◆ Since the r(x) term is the sum of several functions
.

r(x) = 2 cos (x) - 0.25 sin (x) + 0.09 x


.

◆ The corresponding yp(x) choice (from the Undetermined Coefficients Table) is


yp = K cos (x) + M sin(x) + K1 x + K0 (based on the Sum Rule).
◆ K, M, K1, K0 are coefficients to be determined. Don't forget to also check against the Modification Rule.
In[ ]:= yp[x_] = M1 * Cos[x] + M2 * Sin[x] + K1 * x + K0
Out[ ]=

K0 + K1 x + M1 Cos[x] + M2 Sin[x]

◆ Note: In Wolfram Language the variable cannot be named “K”, because it is already a
built-in symbol, so use M1, M2 instead.


In[ ]:= ?K
Out[ ]=

Symbol

K is a default generic name for a summation index in a symbolic sum.

◆ Let’s plug the assumed solution yp(x) into the LHS.


In[ ]:= LHS = LHSOp[yp, x]
Out[ ]=

- M1 Cos[x] - M2 Sin[x] + 2 (K1 + M2 Cos[x] - M1 Sin[x]) + 0.75 (K0 + K1 x + M1 Cos[x] + M2 Sin[x])

In[ ]:= RHS = rhsFunc[x]


Out[ ]=

0.09 x + 2 Cos[x] - 0.25 Sin[x]

In[ ]:= LHS - RHS


Out[ ]=

- 0.09 x - 2 Cos[x] - M1 Cos[x] + 0.25 Sin[x] - M2 Sin[x] +


2 (K1 + M2 Cos[x] - M1 Sin[x]) + 0.75 (K0 + K1 x + M1 Cos[x] + M2 Sin[x])

◆ Next, gather the coefficients of x, cos(x), and sin(x), because the coefficient of each term
must be the same on both sides. Hence, LHS-RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {x, Cos[x], Sin[x]}]
Out[ ]=

0.75 K0 + 2 K1 + (- 0.09 + 0.75 K1) x + (- 2 - 0.25 M1 + 2 M2) Cos[x] + (0.25 - 2 M1 - 0.25 M2) Sin[x]

◆ The condition that all coefficients must be 0 gives us 4 equations with 4 unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn0 = 0.75` K0 + 2 K1  0;
eqn1 = - 0.09` + 0.75` K1  0;
eqn2 = - 2 - 0.25` M1 + 2 M2  0;
eqn3 = 0.25` - 2 M1 - 0.25` M2  0;

In[ ]:= coeffSoln = Solve[{eqn0, eqn1, eqn2, eqn3}, {K0, K1, M1, M2}]
Out[ ]=

{{K0  - 0.32, K1  0.12, M1  0., M2  1.}}

◆ Let’s substitute the coefficients to the solution.


In[ ]:= ypSoln[x_] = yp[x] /. coeffSoln〚1〛
Out[ ]=

- 0.32 + 0.12 x + 1. Sin[x]

◆ Check the solution.


In[ ]:= LHSCheck = LHSOp[ypSoln, x]


Out[ ]=

2 (0.12 + 1. Cos[x]) - 1. Sin[x] + 0.75 (- 0.32 + 0.12 x + 1. Sin[x])

In[ ]:= FullSimplify[Chop[LHSCheck]]


Out[ ]=

2.77556 × 10-17 + 0.09 x + 2. Cos[x] - 0.25 Sin[x]

In[ ]:= Chop[FullSimplify[Chop[LHSCheck]]]


Out[ ]=

0.09 x + 2. Cos[x] - 0.25 Sin[x]

In[ ]:= Chop[FullSimplify[Chop[LHSCheck]]]  RHS


Out[ ]=

True

◆ Excellent! The same as the right hand side.


◆ Step 3. Then, the general solution to the initial nonhomogeneous ODE is:
In[ ]:= ygeneral[x_] = yh[x] + ypSoln[x]
Out[ ]=

- 0.32 + c1 -1.5 x + c2 -0.5 x + 0.12 x + 1. Sin[x]

◆ Step 4. Find the particular solution using the initial conditions: y(0) = 2.78
y' (0) = - 0.43.
dy
◆ Find the derivative y' = .
dx
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=

0.12 - 1.5 c1 -1.5 x - 0.5 c2 -0.5 x + 1. Cos[x]

In[ ]:= IC1 = ygeneral[0]  2.78


Out[ ]=

- 0.32 + 1. c1 + 1. c2  2.78

In[ ]:= IC2 = dygeneral[0]  - 0.43


Out[ ]=

1.12 - 1.5 c1 - 0.5 c2  - 0.43

In[ ]:= valuesofcofficients = Solve[{IC1, IC2}, {c1, c2}]


Out[ ]=

c1  2.22045 × 10-16 , c2  3.1

◆ Hence, the particular solution to the given ODE is:


In[ ]:= yparticular[x_] = ygeneral[x] /. valuesofcofficients〚1〛
Out[ ]=

- 0.32 + 2.22045 × 10-16 -1.5 x + 3.1 -0.5 x + 0.12 x + 1. Sin[x]


In[ ]:= yparticular[x_] = Chop[ ygeneral[x] /. valuesofcofficients〚1〛]


Out[ ]=

- 0.32 + 3.1 -0.5 x + 0.12 x + 1. Sin[x]

◆ Note: In doing numerical computations, it is inevitable that you will sometimes end up
with results that are less precise than you want. Particularly when you get numerical
results that are very close to zero, you may well want to assume that the results should be
exactly zero. The function Chop allows you to replace approximate real numbers that are
close to zero by the exact integer 0.
◆ Chop[expr] replaces all approximate real numbers in expr with magnitude less than 10^-10 by 0.
◆ Step 5. Verify the solution to the ODE:
.

y'' + 2 y' + 0.75 y = 2 cos (x) - 0.25 sin (x) + 0.09 x


.

◆ with initial conditions y(0) = 2.78, y' (0) = - 0.43.


In[ ]:= LHSOp[yparticular, x]
Out[ ]=

0.775 -0.5 x + 2 0.12 - 1.55 -0.5 x + 1. Cos[x] -


1. Sin[x] + 0.75 - 0.32 + 3.1 -0.5 x + 0.12 x + 1. Sin[x]

In[ ]:= Chop[FullSimplify[Chop[LHSOp[yparticular, x]]]]  rhsFunc[x]


Out[ ]=

True

In[ ]:= yparticular[0]


Out[ ]=

2.78

In[ ]:= D[yparticular[x], {x, 1}] /. {x  0}


Out[ ]=

- 0.43

◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.


In[ ]:= Plotyparticular[x], {x, 0, 50}, Frame  True,


PlotStyle  {{Magenta, Thick}, {Black, Thick}},
PlotLegends  Placed"y(x)=-0.32+3.1 -0.5 x +0.12x + sin(x)", Above,
Frame  True, FrameLabel  {"x", "y(x)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic, "DefaultMeshStyle"  AbsolutePointSize[6],
"ScalingFunctions"  None}, PlotRange  {- 1, 9}
Out[ ]=

y(x)=-0.32+3.1 -0.5 x +0.12x + sin(x)

6
y(x)

0 10 20 30 40 50
x

◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DsolveSoln0 = DSolve[LHSOp[y, x]  rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=

y[x] 
-16 -16 -16
-1.5 x 1 + -0.5 x 2 - 0.06 -1.11022×10 x
6. - 0.666667 1.11022×10 x
- 3. x + 1. 1.11022×10 x
x-
-16
16.6667 + 2.77556 × 10-15  Cos[x] + 16.6667 + 9.25186 × 10-16  1.11022×10 x
Cos[x] -
-16
-15 -15 1.11022×10 x
25. - 1.85037 × 10  Sin[x] + 8.33333 - 2.77556 × 10   Sin[x]

In[ ]:= DsolveSoln1 = DSolve[{LHSOp[y, x]  rhsFunc[x], y[0]  2.78, y '[0]  - 0.43} , y[x], x]
(** A particular solution **)
Out[ ]=

y[x] 
- 0.06 -2. x 5.73615 × 10-15 - 1.85037 × 10-15  0.5 x - 51.6667 - 3.70074 × 10-15  1.5 x +
5.33333 2. x - 2. 2. x x + 3.55271 × 10-15 - 1.85037 × 10-15  2. x Cos[x] -
16.6667 + 9.25186 × 10-16  2. x Sin[x]


In[ ]:= Chop[FullSimplify[DsolveSoln1]]


Out[ ]=

y[x]  - 0.32 + 3.1 -0.5 x + 0.12 x + 1. Sin[x]

In[ ]:= yparticular[x] (** The solution that we obtained through a step-by-step SOP **)
Out[ ]=

- 0.32 + 3.1 -0.5 x + 0.12 x + 1. Sin[x]


.

Example 3.4. Another example of the Method of Undetermined Coefficients


y'' + 2 y' + 5 y = 1.25 exp(0.5 x) + 40 cos(4 x) - 55 sin(4 x) y(0) = 0.2, y' (0) = 60.1

In[ ]:= ClearAll["Global`*"]

In[ ]:= LHSOp[y_, x_] = y ''[x] + 2 y '[x] + 5 y[x]


Out[ ]=

5 y[x] + 2 y′ [x] + y′′ [x]

In[ ]:= rhsFunc[x_] = 1.25 Exp[0.5 x] + 40 Cos[4 x] - 55 Sin[4 x]


Out[ ]=

1.25 0.5 x + 40 Cos[4 x] - 55 Sin[4 x]

◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (x ).
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = 2; b = 5;
roots = Solveλ2 + a * λ + b  0, λ
Out[ ]=

{{λ  - 1 - 2 }, {λ  - 1 + 2 }}

◆ We got two complex roots, so we proceed with Case III.


In this case, the roots of the characteristic equation are complex numbers that give the complex
solutions of the ODE. However, it can be shown that we can obtain a basis of real solutions:

y1 = e^(-ax/2) cos ωx and y2 = e^(-ax/2) sin ωx, where ω = √(b - a²/4)

In[ ]:= ω = Sqrtb - a2  4


Out[ ]=


In[ ]:= ClearAll[A, B];


yh[x_] := (A * Cos[ω * x] + B * Sin[ω * x] ) * Exp[(- a / 2) * x];
yh[x]
Out[ ]=

-x (A Cos[2 x] + B Sin[2 x])

◆ Step 2. Applying the method of undetermined coefficients, find a solution yp(x).


◆ Since the r(x) term is the sum of several functions:
.

r(x) = 1.25 exp(0.5 x) + 40 cos(4 x) - 55 sin(4 x)


.

◆ The corresponding yp(x) choice (from the Undetermined Coefficients Table) is


yp = C exp(0.5 x) + K cos(4 x) + M sin(4 x) (based on the Sum Rule).
◆ C, K, M are coefficients to be determined. Don't forget to also check against the Modification Rule.
In[ ]:= yp[x_] = M0 * Exp[0.5 x] + M1 * Cos[4 x] + M2 * Sin[4 x]
Out[ ]=

0.5 x M0 + M1 Cos[4 x] + M2 Sin[4 x]

◆ Let’s plug the assumed solution yp(x) into the LHS.


In[ ]:= LHS = LHSOp[yp, x]
Out[ ]=

0.25 0.5 x M0 - 16 M1 Cos[4 x] - 16 M2 Sin[4 x] +


2 0.5 0.5 x M0 + 4 M2 Cos[4 x] - 4 M1 Sin[4 x] + 5 0.5 x M0 + M1 Cos[4 x] + M2 Sin[4 x]

In[ ]:= RHS = rhsFunc[x]


Out[ ]=

1.25 0.5 x + 40 Cos[4 x] - 55 Sin[4 x]

In[ ]:= LHS - RHS


Out[ ]=

- 1.25 0.5 x + 0.25 0.5 x M0 - 40 Cos[4 x] - 16 M1 Cos[4 x] + 55 Sin[4 x] - 16 M2 Sin[4 x] +


2 0.5 0.5 x M0 + 4 M2 Cos[4 x] - 4 M1 Sin[4 x] + 5 0.5 x M0 + M1 Cos[4 x] + M2 Sin[4 x]

◆ Next, gather the coefficients of e^(0.5x), cos(4x), and sin(4x), because the coefficient of each
term must be the same on both sides. Hence, LHS-RHS must be zero for all x.
In[ ]:= Q = Collect[(LHS - RHS), {Exp[0.5 x], Cos[4 x], Sin[4 x]}]
Out[ ]=

0.5 x (- 1.25 + 6.25 M0) + (- 40 - 11 M1 + 8 M2) Cos[4 x] + (55 - 8 M1 - 11 M2) Sin[4 x]

◆ The condition that all coefficients must be zero gives us 3 equations in 3 unknowns,
which we can solve using Solve[ ].


In[ ]:= eqn0 = - 1.25` + 6.25` M0  0;


eqn1 = - 40 - 11 M1 + 8 M2  0;
eqn2 = 55 - 8 M1 - 11 M2  0;

In[ ]:= coeffSoln = Solve[{eqn0, eqn1, eqn2}, {M0, M1, M2}]


Out[ ]=

{{M0  0.2, M1  0, M2  5}}

◆ Let’s substitute the coefficients to the solution.


In[ ]:= ypSoln[x_] = yp[x] /. coeffSoln〚1〛
Out[ ]=

0.2 0.5 x + 5 Sin[4 x]

◆ Check the solution.


In[ ]:= LHSCheck = LHSOp[ypSoln, x]
Out[ ]=

0.05 0.5 x + 2 0.1 0.5 x + 20 Cos[4 x] - 80 Sin[4 x] + 5 0.2 0.5 x + 5 Sin[4 x]

In[ ]:= FullSimplify[Chop[LHSCheck]]


Out[ ]=

1.25 0.5 x + 40. Cos[4 x] - 55. Sin[4 x]

In[ ]:= FullSimplify[Chop[LHSCheck]]  RHS


Out[ ]=

True

◆ Great! The same as the right hand side.


◆ Step 3. Then, the general solution to the initial nonhomogeneous ODE is:
In[ ]:= ygeneral[x_] = yh[x] + ypSoln[x]
Out[ ]=

0.2 0.5 x + -x (A Cos[2 x] + B Sin[2 x]) + 5 Sin[4 x]

◆ Step 4. Find the particular solution using the initial conditions: y(0) = 0.2, y' (0) = 60.1.
dy
◆ Find the derivative y' = .
dx
In[ ]:= dygeneral[x_] = D[ygeneral[x], {x, 1}]
Out[ ]=

0.1 0.5 x + 20 Cos[4 x] + -x (2 B Cos[2 x] - 2 A Sin[2 x]) - -x (A Cos[2 x] + B Sin[2 x])

In[ ]:= IC1 = ygeneral[0]  0.2


Out[ ]=

0.2 + A  0.2

In[ ]:= IC2 = dygeneral[0]  60.1


Out[ ]=

20.1 - A + 2 B  60.1


In[ ]:= valuesofcofficients = Solve[{IC1, IC2}, {A, B}]


Out[ ]=

{{A  0., B  20.}}

◆ Hence, the particular solution to the given ODE is:


In[ ]:= yparticular[x_] = ygeneral[x] /. valuesofcofficients〚1〛
Out[ ]=

0.2 0.5 x + -x (0. + 20. Sin[2 x]) + 5 Sin[4 x]

In[ ]:= yparticular[x_] = Chop[ ygeneral[x] /. valuesofcofficients〚1〛]


Out[ ]=

0.2 0.5 x + 20. -x Sin[2 x] + 5 Sin[4 x]

◆ Step 5. Verify the solution to the ODE:


.

y'' + 2 y' + 5 y = 1.25 exp(0.5 x) + 40 cos(4 x) - 55 sin(4 x)


.

◆ with initial conditions: y(0) = 0.2, y' (0) = 60.1.


In[ ]:= LHSOp[yparticular, x]
Out[ ]=

0.05 0.5 x - 80. -x Cos[2 x] - 60. -x Sin[2 x] +


2 0.1 0.5 x + 40. -x Cos[2 x] + 20 Cos[4 x] - 20. -x Sin[2 x] -
80 Sin[4 x] + 5 0.2 0.5 x + 20. -x Sin[2 x] + 5 Sin[4 x]

In[ ]:= FullSimplify[Chop[LHSOp[yparticular, x]]]  rhsFunc[x]


Out[ ]=

True

In[ ]:= yparticular[0]


Out[ ]=

0.2

In[ ]:= D[yparticular[x], {x, 1}] /. {x  0}


Out[ ]=

60.1

◆ The solution satisfies both the ODE and initial conditions check!
◆ Let’s take a look at the graph of the solution.


In[ ]:= Plotyparticular[x], {x, 0, 15}, Frame  True,


PlotStyle  {{Orange, Thick}, {Black, Thick}}, Frame  True, FrameLabel  {"x", "y(x)"},
PlotLegends  Placed"y(x)=0.2 0.5 x +20 -x sin(2x) + 5 sin(4 x)", Above,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None}
Out[ ]=

y(x)=0.2 0.5 x +20 -x sin(2x) + 5 sin(4 x)

200

150
y (x )

100

50

0
0 2 4 6 8 10 12 14
x

◆ Step 6. Solve the ODE using a built-in DSolve function (Not Required).
In[ ]:= DsolveSoln0 = DSolve[LHSOp[y, x]  rhsFunc[x], y[x], x] (** A general solution **)
Out[ ]=

y[x]  (-1.+2. ) x 1 + (-1.-2. ) x 2 + (0.1 - 0.075 ) -2. x (1.28 + 0.96 ) 2.5 x -


2.84217 × 10-15 + 2.13163 × 10-15  2. x Cos[4. x] + (32. + 24. ) 2. x Sin[4. x]

In[ ]:= DsolveSoln1 = DSolve[{LHSOp[y, x]  rhsFunc[x], y[0]  0.2, y '[0]  60.1} , y[x], x]
(** A particular solution **)
Out[ ]=

y[x]  (10. + 0. ) (-4.-2. ) x (0. - 1. ) (3.+4. ) x + 1.70974 × 10-16 + 1.  3. x +


0.02 + 4.16334 × 10-18  (4.5+2. ) x - 4.44089 × 10-17 + 0.  (4.+2. ) x Cos[4. x] +
0.5 + 4.44089 × 10-17  (4.+2. ) x Sin[4. x]

In[ ]:= FullSimplify[Chop[DsolveSoln1〚1〛]]


Out[ ]=

y[x]  (0. + 10. ) (-1.-2. ) x - (0. + 10. ) (-1.+2. ) x + 0.2 0.5 x + 5. Sin[4. x]

In[ ]:= yfromDsolve = y[x] /. FullSimplify[Chop[DsolveSoln1〚1〛]] (**A result from DSolve **)
Out[ ]=

(0. + 10. ) (-1.-2. ) x - (0. + 10. ) (-1.+2. ) x + 0.2 0.5 x + 5. Sin[4. x]


In[ ]:= yparticular[x] (** The solution that we obtained through a step-by-step SOP **)
Out[ ]=

0.2 0.5 x + 20. -x Sin[2 x] + 5 Sin[4 x]

◆ Let's compare the solution that we obtained through a step-by-step SOP (standard operating procedure) to that from DSolve by plotting them together.
In[ ]:= Plot[{yparticular[x], yfromDsolve}, {x, 0, 15}, Frame  True,
PlotStyle  {{Orange, Thick}, {Black, Dashed}}, Frame  True, FrameLabel  {"x", "y(x)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None},
PlotLegends  Placed[{"Step-by-step SOP", "DSolve"}, {0.4, 0.75}]]
Out[ ]=

[Plot comparing the step-by-step SOP solution (orange, thick) with the DSolve result (black, dashed) for 0 ≤ x ≤ 15; the two curves coincide]

◆ Exactly the same! Well done.

Summary
After completing this chapter, you should be able to
◼ solve 2nd-order linear non-homogeneous ODEs step-by-step by the method of
undetermined coefficients using Wolfram Mathematica.
◼ develop SOPs for the method of undetermined coefficients.
◼ develop the habit of always checking your solutions for quality assurance.
◼ develop your attention-to-detail skills in solving problems.

Week 4: Second-Order ODEs (Part 3)
Forced Oscillations & Resonance

Table of Contents
1. Modeling: Forced Oscillations
2. Nonhomogeneous ODE
3. Maximum amplitude of Damped Forced Oscillations
3.1. Example 4.1. Amplitude of the Steady State solution. Practical Resonance
4. Summary

Commands list
◼ Collect[expr, x]
◼ expr[[i]] or Part[expr, i]
◼ Solve[expr, vars]
◼ Plot[f, {x, x_min, x_max}]

Modeling: Forced Oscillations


The oscillations in the presence of the external force is described by the Nonhomogeneous
Linear Second Order ODE as follows:
.

m y'' (t ) + c y' (t ) + k y(t ) = F0 cos ωt


.

Here r(t) = F0 cos ωt is called the driving force.


m is the mass of the object that undergoes the oscillations.
c is the damping constant.
k is the spring constant.
y(t ) is displacement as a function of time, t .

Solve the Nonhomogeneous ODE


In[ ]:= ClearAll["Global`*"]


◆ To start with, let’s write the governing equation in the standard form:
.

y''(t) + (c/m) y'(t) + (k/m) y(t) = (F0/m) cos ωt
.

◆ Define the equations for the LHS and RHS.


In[ ]:= LHSOp[y_, x_] = y ''[t] + (c / m) * y '[t] + (k / m) * y[t]
Out[ ]=
(k y[t])/m + (c y′[t])/m + y′′[t]

In[ ]:= rhsFunc[x_] = (F0 / m) * Cos[ω * t]


Out[ ]=
(F0 Cos[t ω])/m

◆ Step 1. Solve the corresponding homogeneous ODE to obtain the general solution of
y h (t ) .
◆ To do so, let’s solve the characteristic equation.
In[ ]:= a = c / m; b = k / m;
roots = Solveλ2 + a * λ + b  0, λ
Out[ ]=

{{λ  (1/2) (-(c/m) - √(c² - 4 k m)/m)}, {λ  (1/2) (-(c/m) + √(c² - 4 k m)/m)}}

In[ ]:= Discriminantλ2 + a * λ + b, λ


Out[ ]=

(c² - 4 k m)/m²


◆ Step 2. Applying the method of undetermined coefficients, find a solution yp(t ).

◆ Since the r(t ) term is in the form of k cos ω t, the corresponding yp(t ) choice (second row
in the Table) is yp = K1 cos ω t + K2 sin ω t .
◆ K1, K2 are coefficients to be determined.
In[ ]:= yp[t_] = K1 * Cos[ω * t] + K2 * Sin[ω * t]

◆ Let’s plug the assumed solution yp(t ) into the LHS.


In[ ]:= yp[t_] = K1 * Cos[ω * t] + K2 * Sin[ω * t]
Out[ ]=

K1 Cos[t ω] + K2 Sin[t ω]

In[ ]:= LHS = LHSOp[yp, t]


Out[ ]=
-K1 ω² Cos[t ω] - K2 ω² Sin[t ω] + (k (K1 Cos[t ω] + K2 Sin[t ω]))/m + (c (K2 ω Cos[t ω] - K1 ω Sin[t ω]))/m


In[ ]:= RHS = rhsFunc[t]


Out[ ]=
(F0 Cos[t ω])/m

In[ ]:= LHS - RHS


Out[ ]=
-(F0 Cos[t ω])/m - K1 ω² Cos[t ω] - K2 ω² Sin[t ω] + (k (K1 Cos[t ω] + K2 Sin[t ω]))/m + (c (K2 ω Cos[t ω] - K1 ω Sin[t ω]))/m

◆ Next, gather coefficients of cos ω t and sin ω t. Notice that we are working with LHS-
RHS at this point.
◆ LHS-RHS must be zero for all t, which means the coefficients of cos ω t and sin ω t must
be zero independently.
In[ ]:= Q = Collect[(LHS - RHS), {Cos[ω * t], Sin[ω * t]}]
Out[ ]=
(-(F0/m) + (k K1)/m + (c K2 ω)/m - K1 ω²) Cos[t ω] + ((k K2)/m - (c K1 ω)/m - K2 ω²) Sin[t ω]

◆ The condition that all coefficients must be 0 gives us two equations with two unknowns,
which we can solve using Solve[ ].
In[ ]:= eqn1 = -(F0/m) + (k K1)/m + (c K2 ω)/m - K1 ω^2  0 (**coefficient of Cos[ω*t] **);
eqn2 = (k K2)/m - (c K1 ω)/m - K2 ω^2  0 (**coefficient of Sin[ω*t] **);

In[ ]:= coeffSoln = Solve[{eqn1, eqn2}, {K1, K2}]


Out[ ]=

F0 k - m ω2  c F0 ω
K1  , K2  
2 2 2 2 2 4
k +c ω -2 k m ω +m ω k + c ω - 2 k m ω 2 + m 2 ω4
2 2 2

In[ ]:= FullSimplifycoeffSoln /. k  m * ω20 


Out[ ]=

F0 m -ω2 + ω20  c F0 ω
K1  , K2  
ω2 c2 + m2 ω2  + m2 ω20 -2 ω2 + ω20  ω2 c2 + m2 ω2  + m2 ω20 -2 ω2 + ω20 

◆ Let’s substitute the coefficients to the solution.


In[ ]:= ypSoln[t_] = yp[t] /. coeffSoln〚1〛
Out[ ]=

F0 k - m ω2  Cos[t ω] c F0 ω Sin[t ω]
+
2 2 2 2 2 4
k +c ω -2kmω +m ω k + c2 ω2 - 2 k m ω2 + m2 ω4
2


◆ Check the solution.


In[ ]:= LHSCheck = LHSOp[ypSoln, t]
Out[ ]=

F0 ω2 k - m ω2  Cos[t ω] c F0 ω3 Sin[t ω]
- - +
k 2 + c 2 ω2 - 2 k m ω 2 + m 2 ω4 k 2 + c 2 ω2 - 2 k m ω 2 + m 2 ω4
F0 k-m ω2  Cos[t ω] c F0 ω Sin[t ω]
k + 
k2 +c2 ω2 -2 k m ω2 +m2 ω4 k2 +c2 ω2 -2 k m ω2 +m2 ω4
+
m
c F0 ω2 Cos[t ω] F0 ω k-m ω2  Sin[t ω]
c - 
k +c2 ω2 -2 k m ω2 +m2 ω4
2
k2 +c2 ω2 -2 k m ω2 +m2 ω4

m
In[ ]:= FullSimplify[LHSCheck - RHS ]
Out[ ]=

In[ ]:= FullSimplify[LHSCheck]  RHS


Out[ ]=

True

◆ Great! The same as the right hand side.


◆ Step 3. Then, the general solution to the initial nonhomogeneous ODE is:
In[ ]:= ygeneral[t_] = yh[t] + ypSoln[t]
Out[ ]=

F0 k - m ω 2  Cos[t ω ] c F0 ω Sin[t ω ]
+ + yh[t]
k2 + c2 ω 2 - 2 k m ω 2 + m2 ω 4 k2 + c2 ω 2 - 2 k m ω 2 + m2 ω 4

◆ yh[t] is the solution to the corresponding homogeneous equation based on three cases
depending on the discriminant sign.

Find the maximum amplitude of Damped Forced Oscillations


From the previous chapter, we know that the solution yp(t ) for the nonhomogeneous ODE is as
follows:
.

m  ω20- ω2 ωc
yp = F0 2 2 2 cos(ω t ) + F0 sin(ω t )
m ω0- ω +ω2 c2 m 2
ω20 - ω2 +ω2 c2
.

After a sufficiently long time the output of a damped vibrating system under a purely
sinusoidal driving force will practically be a harmonic oscillation whose frequency is that of
the input. It is called a Steady State solution, when y(t ) = yp(t ).


yp(t) = a cos(ωt) + b sin(ωt) = C* cos(ωt - η)


.

Amplitude
.

.
C* = √(a² + b²) = F0 / √(m²(ω0² - ω²)² + ω² c²)
.

Phase angle η
.

tan η = b/a = ωc / (m(ω0² - ω²))
.
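◆ A minimal sketch that recovers these two formulas from the coefficients K1 (= a) and K2 (= b) obtained in the previous section:

(* amplitude and phase of the steady-state solution from K1 and K2 *)
a1 = F0 (k - m ω^2)/(k^2 + c^2 ω^2 - 2 k m ω^2 + m^2 ω^4);
b1 = c F0 ω/(k^2 + c^2 ω^2 - 2 k m ω^2 + m^2 ω^4);
FullSimplify[Sqrt[a1^2 + b1^2],
  Assumptions -> {F0 > 0, m > 0, c > 0, k > 0, ω > 0}]
(* -> F0/Sqrt[(k - m ω^2)^2 + c^2 ω^2]; with k = m ω0^2 this is exactly C* above *)
FullSimplify[b1/a1] (* -> c ω/(k - m ω^2), i.e. tan η with k = m ω0^2 *)
.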

Example 4.1. Amplitude of the Steady State solution. Practical Resonance


.

The amplitude C *(ω) has a maximum at a certain ω value. Find its location, then its size.
.

In[ ]:= ClearAll["Global`*"]

◆ Define C *(ω) as a function:


In[ ]:= Ampl[ω_] := F0/Sqrt[m^2 (w0^2 - ω^2)^2 + ω^2 c^2];

◆ Find its maximum by taking the first derivative.


In[ ]:= Solve[D[Ampl[ω], ω]  0, ω] (** 1st derivative=0, solve for ω **)
Out[ ]=

{{ω  0}, {ω  -(√(-c² + 2 m² w0²)/(√2 m))}, {ω  √(-c² + 2 m² w0²)/(√2 m)}}

◆ Then C *(ωmax) is equal to:


In[ ]:= C0 = FullSimplify[Ampl[Sqrt[-c^2 + 2 m^2 w0^2]/(Sqrt[2] m)]] (** C*(ωmax) **)
Out[ ]=
(2 F0)/√(-(c⁴/m²) + 4 c² w0²)

◆ (ωmax)² is equal to:


In[ ]:= FullSimplify[(Sqrt[-c^2 + 2 m^2 w0^2]/(Sqrt[2] m))^2] (** ωmax² **)
Out[ ]=
-(c²/(2 m²)) + w0²
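◆ As a quick numerical sanity check (using the same illustrative parameter values as the plot below), the second derivative of the amplitude is negative at ωmax, so this critical point is indeed a maximum:

(* second-derivative test at ωmax, with F0 = 10, m = 2, w0 = 5, c = 2 *)
ωmax = Sqrt[w0^2 - c^2/(2 m^2)];
N[D[Ampl[ω], {ω, 2}] /. ω -> ωmax /. {F0 -> 10, m -> 2, w0 -> 5, c -> 2}]
(* -> ≈ -3.98 < 0, confirming a maximum *)
.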

◆ Let's plot the amplification C*/F0 as a function of ω to see how the amplitude changes
when the damping term varies. The data for the mass, spring constant, and driving force
are assigned arbitrarily.
In[ ]:= C0 = Ampl[ω]/F0 /. {F0  10, m  2, w0  5}
Out[ ]=
1/√(c² ω² + 4 (25 - ω²)²)


In[ ]:= fC0[c_, ω_] := 1/Sqrt[c^2 ω^2 + 4 (25 - ω^2)^2];

fontsize = 20;
fig01 = Plot[{fC0[2, ω], fC0[4, ω], fC0[8, ω]}, {ω, 0, 6},
PlotRange  {0, 0.3}, PlotStyle  {Blue, Green, Red}, Background  White,
BaseStyle  {FontFamily  "Times New Roman", fontsize},
Frame  True,
FrameLabel  {"ω [1/s]", "C * (ω )"},
FrameStyle  Directive[Black, Thick], AxesOrigin  {0, 0},
PlotLegends  Placed[LineLegend[Automatic, {Text[Style["c = 2 kg/s", fontsize]],
Text[Style["c = 4 kg/s", fontsize]], Text[Style["c = 8 kg/s", fontsize]]},
Spacings  0.2, LegendLayout  {"Column", 1}], {0.75, 0.8}],
ImageSize  480, AspectRatio  3 / 4]
Out[ ]=

[Plot of C*(ω) for c = 2, 4, and 8 kg/s over 0 ≤ ω ≤ 6; the resonance peak is highest and sharpest for the smallest c]
In[ ]:= Export["fig01.pdf", fig01,
"AllowRasterization"  True, ImageSize  480, ImageResolution  600] ;

In[ ]:= SystemOpen["fig01.pdf"]

◆ From the graph, we can conclude that the biggest practical resonance happens when
the damping term is the smallest.
◆ Please note that this is a figure of publication quality.


Summary
After completing this chapter, you should be able to
◼ develop standard operating procedures to solve second-order linear non-homogeneous
ODEs step-by-step using Wolfram Mathematica.
◼ model simple physical situations encountered in engineering using differential
equations.
◼ learn and use information, tools, and technology to solve engineering math problems.
◼ analyze results graphically and create figures of publication quality.

Week 5: Laplace Transforms
Basics of Laplace Transforms

Table of Contents
1. Basics of Laplace Transforms
1.1. Built-in Functions in Wolfram Mathematica
1.2. Laplace Transform by Integration
1.3. Linearity of the Laplace Transform
1.4. Laplace Transform of Derivatives
2. Unit Step Function and Dirac’s Delta Function
2.1. Unit Step Function (Heaviside Function)
2.1.1. Example 5.1
2.1.2. Example 5.2
2.2. Dirac's Delta Function (Impulse Function)
2.2.1. Properties of Dirac's Delta
2.2.2. What is the Laplace Transform of the Dirac’s Delta Function?
3. Summary

Commands list
◼ LaplaceTransform[f[t],t,s]
◼ InverseLaplaceTransform[F[s],s,t]
◼ Integrate[f, x]
◼ Limit[f , x  x* ]
◼ HeavisideTheta[x]
◼ UnitStep[x]
◼ DiracDelta[x]

Basics of Laplace Transforms


Laplace transforms are essential tools for solving engineering problems, since they make
solving linear ODEs and IVPs, as well as systems of linear ODEs, much easier.


If f(t) is a function defined for all t ≥ 0, its Laplace transform is the integral of f(t) times e^(-s t)
from t = 0 to ∞. It is a function of s, say, F(s), and is denoted by ℒ(f); thus
.

F(s) = ℒ(f) = ∫₀^∞ e^(-s t) f(t) t
.

Not only is the result F(s) called the Laplace transform, but the operation just described, which
yields F(s) from a given f(t), is also called the Laplace transform. It is an “integral
transform” with “kernel” k(s, t) = e^(-s t).
.

.

F (s ) = ∫ k(s, t ) f (t )  t
0
..

Built-in Functions in Wolfram Mathematica


Laplace transforms are typically used to transform differential and partial differential
equations into algebraic equations, which are solved and then inverse-transformed back into a solution.
.

Laplace transforms are also extensively used in control theory and signal processing as a
way to represent and manipulate linear systems in the form of transfer functions and transfer
matrices. The Laplace transform and its inverse are then a way to transform between the time
domain and the frequency domain.
.

LaplaceTransform[f[t], t, s] gives the symbolic Laplace transform of f[t] in the variable


t and returns a transform F[s] in the variable s.
.

InverseLaplaceTransform[F[s], s, t] gives the symbolic inverse Laplace transform of


F[s] in the variable s and returns f[t] in the variable t.
.


In[ ]:= ? LaplaceTransform


Out[ ]=

Symbol

LaplaceTransform [f [t], t, s] gives the symbolic Laplace

transform of f [t] in the variable t and returns a transform F [s] in the variable s.

LaplaceTransform f [t], t, s gives the numeric Laplace transform at the numerical value s.

LaplaceTransform [f [t1 , …, tn ], {t1 , …, tn }, {s1 , …, sn }]

gives the multidimensional Laplace transform of f [t1 , …, tn ].

In[ ]:= ? InverseLaplaceTransform


Out[ ]=

Symbol

InverseLaplaceTransform [F [s], s, t] gives the symbolic

inverse Laplace transform of F [s] in the variable s as f [t] in the variable t.



InverseLaplaceTransform F [s], s, t gives the numeric inverse Laplace transform at the numerical value t .

InverseLaplaceTransform [F [s1 , …, sn ], {s1 , s2 , …}, {t1 , t2 , …}]

gives the multidimensional inverse Laplace transform of F [s1 , …, sn ].

In[ ]:= ClearAll["Global`*"]

◆ Define a function f(t) in the variable t.


f[t_] := Exp[a * t] * Cos[w * t];

◆ Find the Laplace transform using the built-in function.


In[ ]:= LaplaceTransform[f[t], t, s]
Out[ ]=
(-a + s)/((a - s)² + w²)

◆ Define a function F(s) in the variable s.


In[ ]:= F[s_] := (-a + s)/((a - s)^2 + w^2);

◆ Find the Inverse of the Laplace transform using the built-in function.
In[ ]:= InverseLaplaceTransform[F[s], s, t]
Out[ ]=

a t Cos[t w]


Laplace Transform by Integration


In[ ]:= ClearAll["Global`*"]

◆ Define a kernel function.


In[ ]:= kernel[s_, t_] := Exp[- s * t];

◆ Define a function f(t) in the variable t.


In[ ]:= f[t_] := Exp[a * t];

◆ Perform the integration.


In[ ]:= Integrate[kernel[s, t] * f[t], {t, 0, + Infinity}]
Out[ ]=

1/(-a + s)   if Re[a] < Re[s]

In[ ]:= Integrate[kernel[s, t] * f[t], {t, 0, T}]


Out[ ]=

(- 1 + ^((a - s) T))/(a - s)

◆ Take a limit of the integral as T approaches the infinity.


In[ ]:= Limit[(- 1 + ^((a - s) T))/(a - s), T  Infinity]
Out[ ]=

1/(-a + s)   if a < s

In[ ]:= Limit[(- 1 + ^((a - s) T))/(a - s), T  Infinity, Assumptions  {Re[a] < Re[s]}]
Out[ ]=

1/(-a + s)

Linearity of the Laplace Transform


The Laplace transform is a linear operation; that is, for any functions f(t) and g(t) whose
transforms exist and any constants a and b the transform of af (t ) + bg (t ) exists, and
.

ℒ{ af (t) + bg(t) } = aℒ{ f (t) } + bℒ{ g(t) }


.


◆ Let’s verify the linearity of the Laplace transform.


In[ ]:= ClearAll["Global`*"]

◆ Define the RHS of the above equation. For that, find the Laplace transforms
aℒ{ f(t) } and bℒ{ g(t) }.
In[ ]:= RHS1 = LaplaceTransform[c1 * Exp[a * t], t, s]
Out[ ]=
c1/(-a + s)

In[ ]:= RHS2 = LaplaceTransform[c2 * Exp[b * t], t, s]


Out[ ]=
c2/(-b + s)

In[ ]:= RHS = RHS1 + RHS2


Out[ ]=
c1/(-a + s) + c2/(-b + s)

◆ Define the LHS of the above equation.


In[ ]:= LHS = LaplaceTransform[c1 * Exp[a * t] + c2 * Exp[b * t], t, s]
Out[ ]=
c1/(-a + s) + c2/(-b + s)

◆ Check for equality.


In[ ]:= LHS  RHS
Out[ ]=

True

Laplace Transform of Derivatives


In[ ]:= ClearAll["Global`*"]

◆ Find the Laplace Transform of the 1st derivative of the function f(t).
In[ ]:= LaplaceTransform[f '[t], t, s]
Out[ ]=

- f[0] + s LaplaceTransform[f[t], t, s]

◆ Find the Laplace Transform of the 2nd derivative of the function f(t).
In[ ]:= LaplaceTransform[f ''[t], t, s]
Out[ ]=

- s f[0] + s2 LaplaceTransform[f[t], t, s] - f′ [0]
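
◆ As a quick concrete check of the rule above (a minimal sketch with the test function f(t) = sin(a t), our own choice):
In[ ]:= Simplify[LaplaceTransform[D[Sin[a t], t], t, s] ==
          s LaplaceTransform[Sin[a t], t, s] - Sin[0]]

◆ This should return True, confirming ℒ{f ′} = s ℒ{f} - f(0) for this f.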


Unit Step Function (Heaviside Function)


There are two ways to generate a unit step function in Wolfram Mathematica. You may use
either the UnitStep[x] command or the HeavisideTheta[x] command.

◆ HeavisideTheta[x] returns 0 or 1 for all real numeric x other than 0. HeavisideTheta can
be used in integrals, integral transforms, and differential equations.
In[ ]:= ? HeavisideTheta
Out[ ]=

Symbol

HeavisideTheta[x] represents the Heaviside theta function θ(x), equal to 0 for x < 0 and 1 for x > 0.

HeavisideTheta[x1 , x2 , …] represents the multidimensional

Heaviside theta function, which is 1 only if all of the x i are positive.

◆ UnitStep[x] represents the unit step function, equal to 0 for x < 0 and 1 for x ≥ 0 .
In[ ]:= ? UnitStep

Symbol

UnitStep [x] represents the unit step function, equal to 0 for x < 0 and 1 for x ≥ 0.

UnitStep [x1 , x2 , …] represents the multidimensional

unit step function which is 1 only if none of the x i are negative.

In[ ]:= ClearAll["Global`*"]

In[ ]:= Plot[{UnitStep[t], HeavisideTheta[t]}, {t, - 1, 4},


PlotStyle  {{Green, Thick}, {Black, Dashed}}, Exclusions  None, Frame  True,
PlotLegends  Placed[{"UnitStep[t]", "HeavisideTheta[t]"}, {0.7, 0.2}],
Background  White]


Out[ ]=

1.0

0.8

0.6

0.4

0.2 UnitStep[t]

HeavisideTheta[t]

0.0

-1 0 1 2 3 4

What is the Laplace Transform of the Unit Step Function?


.

ℒ{u(t - a)} = ∫₀^(+∞) e^(-s t) u(t - a) t = e^(-a s)/s
.

In[ ]:= LaplaceTransform[HeavisideTheta[t - a], t, s]


Out[ ]=

UnitStep[- a] + -a s UnitStep[a]


s

In[ ]:= LaplaceTransform[UnitStep[t - a], t, s]


Out[ ]=

UnitStep[- a] + -a s UnitStep[a]


s

In[ ]:= FullSimplify[LaplaceTransform[UnitStep[t - a], t, s], a ∈ Reals && a > 0]


Out[ ]=

^(-a s)/s
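
◆ Going the other way (a quick sketch of ours, again assuming a > 0), the inverse transform recovers the shifted step:
In[ ]:= InverseLaplaceTransform[Exp[- a s]/s, s, t]

◆ This should return the shifted step function, e.g., HeavisideTheta[- a + t] (the exact form may vary with the Mathematica version).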

◆ In engineering, the unit step function is mainly used in problems that involve switching
on and off, and time shifts.


In[ ]:= Plot[UnitStep[t - 1] - UnitStep[t - 2], {t, 0, 10}, PlotStyle  {Red, Thick},
PlotLegends  Placed[{"u[t-1]-u[t-2]"}, {0.7, 0.2}], Exclusions  None, Frame  True]
Out[ ]=

1.0

0.8

0.6

0.4

0.2
u[t-1]-u[t-2]

0.0

0 2 4 6 8 10

In[ ]:= Plot[UnitStep[t - 1] - 2 * UnitStep[t - 4] + UnitStep[t - 6], {t, 0, 10},


PlotStyle  {Red, Thick}, PlotLegends  Placed[{"u[t-1]-2*u[t-4]+u[t-6]"}, {0.75, 0.8}],
Exclusions  None, Frame  True]
Out[ ]=

1.0

u[t-1]-2*u[t-4]+u[t-6]
0.5

0.0

-0.5

-1.0

0 2 4 6 8 10

Example 5.1
f(t) = 2 for 0 < t < 1,   (1/2) t² for 1 < t < (1/2)π,   cos(t) for t > (1/2)π

In[ ]:= ClearAll["Global`*"]


In[ ]:= Plot[2 * (1 - UnitStep[t - 1]) + (1/2) t^2 * (UnitStep[t - 1] - UnitStep[t - (1/2) Pi]) +
          Cos[t] UnitStep[t - (1/2) Pi], {t, 0, 5 Pi}, PlotStyle  {Red, Thick},
          PlotLegends  Placed["f[t]=2*(1-u[t-1])+(1/2)t²*(u[t-1]-u[t-(1/2)Pi])+Cos[t]u[t-(1/2)Pi]",
            {0.5, 0.85}], Exclusions  None, Frame  True]
Out[ ]=

2.0
1 1 1
f[t]=2*(1-u[t-1])+ t 2 *(u[t-1]-u[t- Pi])+Cos[t]u[t- Pi]
2 2 2
1.5

1.0

0.5

0.0

-0.5

-1.0

0 5 10 15

◆ Let’s define the given f(t) function as a function y[t_] of the variable t.
In[ ]:= y[t_] := 2 * (1 - UnitStep[t - 1]) +
1 2 1 1
t * UnitStep[t - 1] - UnitStept - Pi + Cos[t] UnitStept - Pi; y[t]
2 2 2
Out[ ]=
1 π π
2 (1 - UnitStep[- 1 + t]) + t2 UnitStep[- 1 + t] - UnitStep- + t + Cos[t] UnitStep- + t
2 2 2

◆ Find the Laplace transform of the function.


In[ ]:= Y[s_] := LaplaceTransform[y[t], t, s]; Y[s]
Out[ ]=
2/s - (2 ^(-s))/s + (1/2) ^(-s) (2/s³ + 2/s² + 1/s) - (1/2) ^(-π s/2) (2/s³ + π/s² + π²/(4 s)) - ^(-π s/2)/(1 + s²)

◆ Find the Inverse of the Laplace transform of the function.


In[ ]:= y2[t_] := InverseLaplaceTransform[Y[s], s, t]; y2[t]


Out[ ]=
2 - (3/2) HeavisideTheta[- 1 + t] + (- 1 + t) HeavisideTheta[- 1 + t] + (1/2) (- 1 + t)² HeavisideTheta[- 1 + t] -
(1/8) π² HeavisideTheta[- (π/2) + t] - (1/2) π (- (π/2) + t) HeavisideTheta[- (π/2) + t] -
(1/2) (- (π/2) + t)² HeavisideTheta[- (π/2) + t] + Cos[t] HeavisideTheta[- (π/2) + t]

◆ As we can see, the initial function and the expression obtained from the Laplace trans-
form method yield the same result.
In[ ]:= Plot[{y[t], y2[t]}, {t, 0, 5 Pi},
PlotStyle  {{Red, Thick}, {Black, Dashed}}, Exclusions  None, Frame  True,
PlotLegends  {"f(t) initial", "f(t) from the Laplace Transform"}]
Out[ ]=

2.0

1.5

1.0

0.5
f(t) initial
f(t) from the Laplace Transform
0.0

-0.5

-1.0

0 5 10 15

Example 5.2

f(t) = (1 + t)² for 0 ≤ t < 1,   1 + t² for t ≥ 1

In[ ]:= ClearAll["Global`*"]

◆ Let’s define the given f(t) function as a function of the variable t.


In[ ]:= f[t_] := (1 + t)2 * (1 - HeavisideTheta[t - 1]) + 1 + t2  * HeavisideTheta[t - 1];


In[ ]:= Plotf[t], {t, 0, 3}, PlotStyle  {Orange, Thick},


PlotLegends  Placed"f[t]=(1+t)2 * (1-u[t-1])+(1+t2 )*u[t-1]", {0.3, 0.85},
Exclusions  None, Frame  True
Out[ ]=

10

f[t]=(1+t)2 * (1-u[t-1])+(1+t 2 )*u[t-1]

0.0 0.5 1.0 1.5 2.0 2.5 3.0

◆ Find the Laplace transform of the function.


In[ ]:= F[s_] := LaplaceTransform[f[t], t, s]; F[s]
Out[ ]=

2/s³ + 2/s² + 1/s - (2 ^(-s) (1 + s))/s²

◆ Find the Inverse of the Laplace transform of the function.


In[ ]:= f2[t_] := InverseLaplaceTransform[F[s], s, t]; f2[t]
Out[ ]=

(1 + t)2 - 2 t HeavisideTheta[- 1 + t]

◆ As we can see, the initial function and the expression obtained from the Laplace trans-
form method yield the same result.


In[ ]:= Plot[{f[t], f2[t]}, {t, 0, 3},


PlotStyle  {{Orange, Thick}, {Black, Dashed}}, Exclusions  None, Frame  True,
PlotLegends  {"f(t) initial", "f(t) from the Laplace Transform"}]
Out[ ]=

[Figure: f(t) initial (orange) and f(t) from the Laplace transform (black, dashed) coincide on 0 ≤ t ≤ 3]


Dirac’s Delta Function (Impulse Function)


◆ DiracDelta[x] returns 0 for all real numeric x other than 0.
◆ DiracDelta can be used in integrals, integral transforms, and differential equations.
◆ Some transformations are done automatically when DiracDelta appears in a product of
terms.
◆  Differentiate the Heaviside function to obtain DiracDelta:
In[ ]:= D[HeavisideTheta[x], x]
Out[ ]=

DiracDelta[x]

◆  DiracDelta vanishes for nonzero arguments:


In[ ]:= DiracDelta[1 / 2]
Out[ ]=

◆  DiracDelta stays unevaluated for x = 0:


In[ ]:= DiracDelta[0]
Out[ ]=

DiracDelta[0]

In[ ]:= Plot[DiracDelta[x], {x, - 2, 2}, AxesOrigin  {0, - 1},


PlotStyle  {Red, Thick}, Exclusions  None, Frame  True]
Out[ ]=

[Figure: DiracDelta[x] on -2 ≤ x ≤ 2, equal to 0 for every x ≠ 0]

◆ Use DiracDelta in an integral:


In[ ]:= Integrate[DiracDelta[x] Cos[x], {x, - Infinity, Infinity}]
1

Properties of Dirac’s Delta


δ(t - a) = ∞ for t = a, 0 for t ≠ a,    and    ∫_(-∞)^(+∞) δ(t - a) t = 1
.

For a > 0, we have


.
∫₀^(+∞) δ(t - a) t = 1
.

In[ ]:= Integrate[DiracDelta[t - a], {t, 0, Infinity}, Assumptions  {a ∈ Reals && a > 0}]
Out[ ]=

For a continuous function ℊ(t ), we have


.
∫₀^(+∞) ℊ(t) δ(t - a) t = ℊ(a)
.

In[ ]:= Integrate[g[t] * DiracDelta[t - a], {t, 0, Infinity}, Assumptions  {a ∈ Reals && a > 0}]
Out[ ]=

g[a]
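
◆ For instance (a quick concrete check of the sifting property, with ℊ(t) = cos(t) and a = π, our choice):
In[ ]:= Integrate[Cos[t] * DiracDelta[t - Pi], {t, 0, Infinity}]

◆ This should return Cos[π] = - 1.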

What is the Laplace Transform of the Dirac’s Delta Function?


.
ℒ{δ(t - a)} = ∫₀^(+∞) e^(-s t) δ(t - a) t = e^(-a s)
.

◆ Laplace transform of δ(t - a) :


In[ ]:= LaplaceTransform[DiracDelta[t - a], t, s]
Out[ ]=

-a s HeavisideTheta[a]

In[ ]:= FullSimplify[LaplaceTransform[DiracDelta[t - a], t, s], a ∈ Reals && a > 0]


Out[ ]=

-a s

◆ Laplace transform of 125 δ(t - (1/3)π):
In[ ]:= LaplaceTransform[125 * DiracDelta[t - Pi / 3], t, s]
Out[ ]=
125 ^(-π s/3)

Summary
After completing this chapter, you should be able to


◼ perform Laplace and inverse Laplace transforms using Wolfram Mathematica


◼ analyze special functions such as the unit step function and the Dirac delta function
◼ learn and use information, tools, and technology to solve engineering math problems.

Week 6: Laplace Transforms (Part 2)
Applications of Laplace Transforms

Table of Contents
1. Solving an IVP by Laplace Transforms: The SOP
1.1. Example 6.1
1.2. Example 6.2
2. Modeling Mass-Spring System using the Unit Step & the Dirac's Delta Functions
2.1. Mass-Spring System Under a Square Wave
2.2. Hammer-blow Response of a Mass-Spring System
2.3. Mass-Spring System Under a Sinusoidal Force for Some Time Interval
3. Convolution
4. Summary

Commands list
◼ LaplaceTransform[f[t],t,s]
◼ InverseLaplaceTransform[F[s],s,t]
◼ HeavisideTheta[x]
◼ UnitStep[x]
◼ DiracDelta[x]
◼ Convolve[f, g, x, y]

Solving an IVP by Laplace Transforms: The SOP


The process of solving an ODE using the Laplace transform method consists of the three steps
shown below (a compact sketch of the whole pipeline follows the list):
 Step 1. The given ODE is transformed into an algebraic equation, called the subsidiary
equation.
 Step 2. The subsidiary equation is solved by purely algebraic manipulations.
 Step 3. The solution in Step 2 is transformed back, resulting in the solution of the given
problem.
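
◆ A minimal reusable sketch of this SOP (our own wrapper, not part of the official course materials; it assumes a linear ODE written in terms of y[t], with initial conditions given as substitution rules):
In[ ]:= solveIVPviaLaplace[ode_, ics_List] := Module[{ltODE, eqnY, Ysol},
          ltODE = LaplaceTransform[ode, t, s] /. ics;              (* Step 1: subsidiary equation *)
          eqnY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s];
          Ysol = Y[s] /. Solve[eqnY, Y[s]]〚1〛;                     (* Step 2: solve algebraically *)
          InverseLaplaceTransform[Ysol, s, t]]                     (* Step 3: transform back *)

◆ For example, solveIVPviaLaplace[y''[t] + 2 y'[t] + 15 y[t]  t Exp[-t], {y[0]  0, y'[0]  1}] should reproduce the solution of Example 6.1 below.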


Example 6.1
y''(t) + 2 y'(t) + 15 y(t) = t e^(-t),    y(0) = 0, y'(0) = 1

This example and its sample solutions were developed by Prof. Katharine Long, Texas Tech
University - Math Dept.
In[ ]:= ClearAll["Global`*"]

◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y ''[t] + 2 y '[t] + 15 y[t]  t Exp[- t]
Out[ ]=

15 y[t] + 2 y′ [t] + y′′ [t]  -t t

In[ ]:= IC = {y[0]  0, y '[0]  1}


Out[ ]=

{y[0]  0, y′ [0]  1}

◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=

- 1 + 15 LaplaceTransform[y[t], t, s] + 2 s LaplaceTransform[y[t], t, s] +
s² LaplaceTransform[y[t], t, s]  1/(1 + s)²

◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s]
Out[ ]=
- 1 + 15 Y[s] + 2 s Y[s] + s² Y[s]  1/(1 + s)²

◆ Step 2. Solve the subsidiary equation by algebraic manipulations.


In[ ]:= Solve[eqnForY, Y[s]]
Out[ ]=

{{Y[s]  (2 + 2 s + s²)/((1 + s)² (15 + 2 s + s²))}}


In[ ]:= YSoln[s_] := Y[s] /. Solve[eqnForY, Y[s]]〚1〛; YSoln[s]


Out[ ]=

(2 + 2 s + s²)/((1 + s)² (15 + 2 s + s²))

◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
(1/196) ^(-t) (14 t + 13 Sqrt[14] Sin[Sqrt[14] t])

In[ ]:= ySoln[t_] = InverseLaplaceTransform[YSoln[s], s, t]; ySoln[t]


Out[ ]=

(1/196) ^(-t) (14 t + 13 Sqrt[14] Sin[Sqrt[14] t])

◆ Step 4. Verify the solution.


In[ ]:= ODECheck = myODE /. y  ySoln
Out[ ]=

1 13 -t Sin 14 t 4
- -t 14 + 182 Cos 14 t - + -t 14 t + 13 14 Sin 14 t +
98 14 49
1 1
2 -t 14 + 182 Cos 14 t - -t 14 t + 13 14 Sin 14 t  -t t
196 196

In[ ]:= FullSimplify[ODECheck]


Out[ ]=

True

In[ ]:= ICCheck = {ySoln[0]  y[0], ySoln '[0]  y '[0]} /. IC


Out[ ]=

{True, True}

◆ Step 5. Verify the solution by DSolve function (Not Required).


◆ A general solution:
In[ ]:= DsolveSoln0 = DSolve[myODE, y[t], t]
Out[ ]=
{{y[t]  ^(-t) C[2] Cos[Sqrt[14] t] + ^(-t) C[1] Sin[Sqrt[14] t] +
   (1/14) ^(-t) t (Cos[Sqrt[14] t]² + Sin[Sqrt[14] t]²)}}

◆ A particular solution:


In[ ]:= DsolveSoln1 = DSolve[{myODE, y[0]  0, y '[0]  1} , y[t], t]


Out[ ]=
{{y[t]  (1/196) ^(-t) (14 t Cos[Sqrt[14] t]² + 13 Sqrt[14] Sin[Sqrt[14] t] + 14 t Sin[Sqrt[14] t]²)}}

In[ ]:= FullSimplify[Chop[DsolveSoln1〚1〛]]


Out[ ]=
y[t]  (1/196) ^(-t) (14 t + 13 Sqrt[14] Sin[Sqrt[14] t])

◆ Result from DSolve:


In[ ]:= yfromDsolve = y[t] /. FullSimplify[Chop[DsolveSoln1〚1〛]]
Out[ ]=
(1/196) ^(-t) (14 t + 13 Sqrt[14] Sin[Sqrt[14] t])

◆ Solution from the method of Laplace transform:


In[ ]:= ySoln[t]
Out[ ]=
(1/196) ^(-t) (14 t + 13 Sqrt[14] Sin[Sqrt[14] t])

◆ Also, let’s take a look at the solution by plotting its graph:


In[ ]:= Plot{ySoln[t]}, {t, 0, 5}, PlotRange  {- 0.3, 0.3},


PlotStyle  {Blue, Thick}, Frame  True, FrameLabel  {"t", "y(t)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
1
PlotLegends  Placed"y(x)= -t (14 t+13 14 sin( 14 t))", Above,
196
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None}
Out[ ]=

1
y(x)= -t (14 t+13 14 sin( 14 t))
196

0.3

0.2

0.1
y(t)

0.0

-0.1

-0.2

-0.3
0 1 2 3 4 5
t

Example 6.2
y'' (t ) + 2 y' (t ) + 5 y(t ) = 1.25 exp(0.5 t ) + 40 cos(4 t ) - 55 sin(4 t )
.

y(0) = 0.2, y' (0) = 60.1

In[ ]:= ClearAll["Global`*"]

◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= LHSOp[y_, t_] = y ''[t] + 2.0 y '[t] + 5.0 y[t]
Out[ ]=
5. y[t] + 2. y′ [t] + y′′ [t]

In[ ]:= rhsFunc[t_] = 1.25 Exp[0.5 t] + 40.0 Cos[4.0 t] - 55.0 Sin[4.0 t]


Out[ ]=

1.25 0.5 t + 40. Cos[4. t] - 55. Sin[4. t]

In[ ]:= myODE = LHSOp[y, t]  rhsFunc[t]


Out[ ]=

5. y[t] + 2. y′ [t] + y′′ [t]  1.25 0.5 t + 40. Cos[4. t] - 55. Sin[4. t]


In[ ]:= IC = {y[0]  0.2, y '[0]  60.1}


Out[ ]=

{y[0]  0.2, y′ [0]  60.1}

◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=

- 60.1 - 0.2 s + 5. LaplaceTransform[y[t], t, s] + s² LaplaceTransform[y[t], t, s] +
2. (- 0.2 + s LaplaceTransform[y[t], t, s])  1.25/(- 0.5 + s) - 220./(16. + s²) + (40. s)/(16. + s²)

◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s]
Out[ ]=
- 60.1 - 0.2 s + 5. Y[s] + s² Y[s] + 2. (- 0.2 + s Y[s])  1.25/(- 0.5 + s) - 220./(16. + s²) + (40. s)/(16. + s²)

◆ Step 2. Solve the subsidiary equation by algebraic manipulations.


In[ ]:= Solve[eqnForY, Y[s]]
Out[ ]=
{{Y[s]  (60.5 + 1.25/(- 0.5 + s) + 0.2 s - 220./(16. + s²) + (40. s)/(16. + s²))/(5. + 2. s + s²)}}

In[ ]:= YSoln[s_] := Y[s] /. Solve[eqnForY, Y[s]]〚1〛; YSoln[s]


Out[ ]=
(60.5 + 1.25/(- 0.5 + s) + 0.2 s - 220./(16. + s²) + (40. s)/(16. + s²))/(5. + 2. s + s²)

◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=

0.2 0.5 t + (-1.-2. ) t - 1.19349 × 10-15 + 10.  - 1.19349 × 10-15 + 10.  (0.+4. ) t  +
(0.-4. ) t - 4.44089 × 10-16 + 2.5  - 4.44089 × 10-16 + 2.5  (0.+8. ) t 

In[ ]:= ySoln[t_] = FullSimplify[Chop[InverseLaplaceTransform[YSoln[s], s, t]]]; ySoln[t]


Out[ ]=

0.2 Cosh[0.5 t] + 20. -1. t Sin[2. t] + 5. Sin[4. t] + 0.2 Sinh[0.5 t]


◆ Step 4. Verify the solution.


In[ ]:= myODE
Out[ ]=

5. y[t] + 2. y′ [t] + y′′ [t]  1.25 0.5 t + 40. Cos[4. t] - 55. Sin[4. t]

In[ ]:= LHS = LHSOp[ySoln, t]


Out[ ]=

- 80. -1. t Cos[2. t] + 0.05 Cosh[0.5 t] - 60. -1. t Sin[2. t] - 80. Sin[4. t] +
2. 40. -1. t Cos[2. t] + 20. Cos[4. t] + 0.1 Cosh[0.5 t] - 20. -1. t Sin[2. t] + 0.1 Sinh[0.5 t] +
5. 0.2 Cosh[0.5 t] + 20. -1. t Sin[2. t] + 5. Sin[4. t] + 0.2 Sinh[0.5 t] + 0.05 Sinh[0.5 t]

In[ ]:= RHS = rhsFunc[t]


Out[ ]=

1.25 0.5 t + 40. Cos[4. t] - 55. Sin[4. t]

In[ ]:= Chop[FullSimplify[LHS  RHS]]


Out[ ]=

True

In[ ]:= IC
Out[ ]=

{y[0]  0.2, y′ [0]  60.1}

In[ ]:= {ySoln[0], ySoln '[0]}


Out[ ]=

{0.2, 60.1}

◆ Step 5. Verify the solution by DSolve function (Not Required).


◆ A general solution:
DsolveSoln0 = DSolve[LHSOp[y, x]  rhsFunc[x], y[x], x]
Out[ ]=

y[x]  -1. x 2 Cos[2. x] + -1. x 1 Sin[2. x] + 5. -1. x 0. + 0.04 1.5 x Cos[2. x]2 +
0.04 1.5 x Sin[2. x]2 + 1. 1. x Cos[2. x]2 Sin[4. x] + 1. 1. x Sin[2. x]2 Sin[4. x]

◆ A particular solution:
DsolveSoln1 = DSolve[{LHSOp[y, x]  rhsFunc[x], y[0]  0.2, y '[0]  60.1} , y[x], x]
Out[ ]=

y[x]  5. -1. x 0.04 1.5 x Cos[2. x]2 + 4. Sin[2. x] +


0.04 1.5 x Sin[2. x]2 + 1. 1. x Cos[2. x]2 Sin[4. x] + 1. 1. x Sin[2. x]2 Sin[4. x]

In[ ]:= FullSimplify[Chop[DsolveSoln1〚1〛]]


Out[ ]=

y[x]  0.2 0.5 x + 20. -1. x Sin[2. x] + 5. Sin[4. x]

◆ Result from DSolve:


yfromDsolve = y[x] /. FullSimplify[Chop[DsolveSoln1〚1〛]]


Out[ ]=

0.2 0.5 x + 20. -1. x Sin[2. x] + 5. Sin[4. x]

◆ Solution from the method of Laplace transform:


ySoln[t]
Out[ ]=

0.2 Cosh[0.5 t] + 20. -1. t Sin[2. t] + 5. Sin[4. t] + 0.2 Sinh[0.5 t]

◆ Let’s compare the obtained results by plotting them on the same graph (note that 0.2 Cosh[0.5 t] + 0.2 Sinh[0.5 t] = 0.2 ^(0.5 t), so the two expressions agree):
In[ ]:= Plot[{ySoln[x] , yfromDsolve}, {x, 0, 15}, Frame  True,
PlotStyle  {{Orange, Thick}, {Black, Dashed}}, Frame  True, FrameLabel  {"x", "y(x)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None},
PlotLegends  Placed[{"Laplace Transform", "DSolve"}, {0.4, 0.75}], Background  White]
Out[ ]=

200 Laplace Transform


DSolve
150
y (x )

100

50

0
0 2 4 6 8 10 12 14
x

Modeling Mass-Spring System using the Unit Step & the


Dirac’s Delta Functions
Mass-Spring System Under a Square Wave
Determine the response of a damped mass-spring system under a square wave, modelled by
y'' (t ) + 3 y' (t ) + 2 y(t ) = r(t ) = u(t - 1) - u(t - 2)
.

y(0) = 0, y' (0) = 0

This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
227.


In[ ]:= ClearAll["Global`*"]

◆ Let’s define the RHS as a function r(t) and plot it:


In[ ]:= r[t_] := HeavisideTheta[t - 1] - HeavisideTheta[t - 2];

In[ ]:= Plot[r[t], {t, 0, 5}, PlotRange  {- 0.5, 1.5}, PlotStyle  {{Red, Thick}},
Frame  True, Exclusions  None, FrameLabel  {"t", "r(t)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
PlotLegends  Placed[{"r[ t ]=u[ t-1 ] - u[ t-2 ]"}, {0.65, 0.87}],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None}]
Out[ ]=
[Figure: the square wave r(t) = u(t-1) - u(t-2) on 0 ≤ t ≤ 5]

◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y ''[t] + 3 y '[t] + 2 y[t]  r[t]
Out[ ]=

2 y[t] + 3 y′ [t] + y′′ [t]  - HeavisideTheta[- 2 + t] + HeavisideTheta[- 1 + t]

In[ ]:= IC = {y[0]  0, y '[0]  0}


Out[ ]=

{y[0]  0, y′ [0]  0}

◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=

2 LaplaceTransform[y[t], t, s] + 3 s LaplaceTransform[y[t], t, s] +
s² LaplaceTransform[y[t], t, s]  - ^(-2 s)/s + ^(-s)/s

◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.

133
10 Week 6_Laplace Transforms-2 (Solving ODEs).nb

In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s]


Out[ ]=

-2 s -s
2 Y[s] + 3 s Y[s] + s2 Y[s]  - +
s s

◆ Step 2. Solve the subsidiary equation by algebraic manipulations.


In[ ]:= Solve[eqnForY, Y[s]]
Out[ ]=

-2 s - 1 + s 
Y[s]  
s 2 + 3 s + s2 

In[ ]:= YSoln[s_] := Y[s] /. Solve[eqnForY, Y[s]]〚1〛; YSoln[s]


Out[ ]=

-2 s - 1 + s 

s 2 + 3 s + s2 

◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=
- (1/2) ^(-2 (-2 + t)) (- 1 + ^(-2 + t))² HeavisideTheta[- 2 + t] + (1/2) ^(-2 (-1 + t)) (- 1 + ^(-1 + t))² HeavisideTheta[- 1 + t]

In[ ]:= ySoln[t_] = InverseLaplaceTransform[YSoln[s], s, t]; ySoln[t]


Out[ ]=

- (1/2) ^(-2 (-2 + t)) (- 1 + ^(-2 + t))² HeavisideTheta[- 2 + t] + (1/2) ^(-2 (-1 + t)) (- 1 + ^(-1 + t))² HeavisideTheta[- 1 + t]

◆ Step 4. Verify the solution.


In[ ]:= ODECheck = myODE /. y  ySoln


Out[ ]=
[Lengthy output: a combination of ^(...), HeavisideTheta, DiracDelta, and DiracDelta′ terms equated to - HeavisideTheta[- 2 + t] + HeavisideTheta[- 1 + t]; FullSimplify below reduces it to True.]

In[ ]:= FullSimplify[ODECheck]


Out[ ]=
True

In[ ]:= ICCheck = {ySoln[0]  y[0], ySoln '[0]  y '[0]} /. IC


Out[ ]=

{True, True}

◆ Now, let’s take a look at the solution by plotting it:


Plot[{r[t], ySoln[t]}, {t, 0, 5}, PlotRange  {- 0.5, 1.5},


PlotStyle  {{Blue, Thick}, {Red, Thick}}, Frame  True,
FrameLabel  {"t", "y(t)"}, Exclusions  None,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None},
PlotLegends  Placed[{"r(t)", "y(t)"}, {0.8, 0.75}], Background  White]
Out[ ]=
[Figure: square-wave input r(t) (blue) and response y(t) (red) on 0 ≤ t ≤ 5]

Hammer-blow Response of a Mass-Spring System


Determine the response of a mass-spring system under a unit impulse at t = 1, i.e.,
r(t) = δ(t - 1), modelled by
y'' (t ) + 3 y' (t ) + 2 y(t ) = r(t ) y(0) = 0, y' (0) = 0

This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
227.
In[ ]:= ClearAll["Global`*"]

◆ Let’s define the RHS as a function of r(t).


In[ ]:= r[t_] := DiracDelta[t - 1]; r[t]
Out[ ]=

DiracDelta[- 1 + t]

◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y ''[t] + 3 y '[t] + 2 y[t]  r[t]
Out[ ]=

2 y[t] + 3 y′ [t] + y′′ [t]  DiracDelta[- 1 + t]


In[ ]:= IC = {y[0]  0, y '[0]  0}


Out[ ]=

{y[0]  0, y′ [0]  0}

◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.
In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC
Out[ ]=

2 LaplaceTransform[y[t], t, s] +
3 s LaplaceTransform[y[t], t, s] + s2 LaplaceTransform[y[t], t, s]  -s

◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s]
Out[ ]=

2 Y[s] + 3 s Y[s] + s2 Y[s]  -s

◆ Step 2. Solve the subsidiary equation by algebraic manipulations.


In[ ]:= Solve[eqnForY, Y[s]]
Out[ ]=

{{Y[s]  ^(-s)/(2 + 3 s + s²)}}

In[ ]:= YSoln[s_] := Y[s] /. Solve[eqnForY, Y[s]]〚1〛; YSoln[s]


Out[ ]=

^(-s)/(2 + 3 s + s²)

◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=

1-2 t -  + t  HeavisideTheta[- 1 + t]

In[ ]:= ySoln[t_] = InverseLaplaceTransform[YSoln[s], s, t]; ySoln[t]


Out[ ]=

1-2 t - + t  HeavisideTheta[- 1 + t]

◆ Step 4. Verify the solution.


In[ ]:= ODECheck = myODE /. y  ySoln


Out[ ]=

2 1-t DiracDelta[- 1 + t] - 4 1-2 t -  + t  DiracDelta[- 1 + t] - 3 1-t HeavisideTheta[- 1 + t] +


6 1-2 t -  + t  HeavisideTheta[- 1 + t] + 3 1-2 t -  + t  DiracDelta[- 1 + t] +
1-t HeavisideTheta[- 1 + t] - 2 1-2 t -  + t  HeavisideTheta[- 1 + t] +
1-2 t -  + t  DiracDelta′ [- 1 + t]  DiracDelta[- 1 + t]

In[ ]:= FullSimplify[ODECheck]


Out[ ]=

True

In[ ]:= ICCheck = {ySoln[0]  y[0], ySoln '[0]  y '[0]} /. IC


Out[ ]=

{True, True}
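
◆ As an optional cross-check (a sketch of ours mirroring Step 5 of Example 6.1), DSolve also accepts DiracDelta forcing directly; its result should agree with ySoln after simplification:
In[ ]:= FullSimplify[y[t] /. DSolve[{myODE, y[0]  0, y '[0]  0}, y[t], t]〚1〛]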

◆ Now, let’s take a look at the solution by plotting it:


In[ ]:= Plot[{ySoln[t]}, {t, 0, 10}, PlotRange  {- 0.1, 0.4}, PlotStyle  {{Orange, Thick}},
Frame  True, FrameLabel  {"t", "y(t)"}, Exclusions  None,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None},
PlotLegends  Placed[{"y(t)"}, {0.8, 0.75}], Background  White]
Out[ ]=
[Figure: the hammer-blow response y(t) on 0 ≤ t ≤ 10: zero until t = 1, a jump at the impulse, then decay back to 0]
t

Mass-Spring System Under a Sinusoidal Force for Some Time Interval


Determine the response of a mass-spring system under a sinusoidal force acting only over a
finite time interval, modelled by
y''(t) + 3 y'(t) + 2 y(t) = r(t),    y(0) = 1, y'(0) = - 5
.

r(t) = 10 sin(2 t) for 0 < t < π,   r(t) = 0 for t > π


This example is taken from the Textbook (Kreyszig, 2011, 10th Edition), Section 6.4, page
229.
In[ ]:= ClearAll["Global`*"]

◆ Let’s define the RHS as a function of r(t) and plot it.


In[ ]:= r[t_] := 10 * Sin[2 t] * HeavisideTheta[Pi - t] ; r[t]
Out[ ]=

10 HeavisideTheta[π - t] Sin[2 t]

In[ ]:= Plot[r[t], {t, 0, 2 * Pi}, PlotStyle  {{Red, Thick}},


Frame  True, Exclusions  None, FrameLabel  {"t", "r(t)"},
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
PlotLegends  Placed[{"r[t]=10 u[π-t] Sin[2 t]"}, {0.65, 0.85}],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None}]
Out[ ]=

[Figure: r(t) = 10 u(π - t) sin(2 t) on 0 ≤ t ≤ 2π: a sine wave of amplitude 10 up to t = π, zero afterwards]

◆ Step 0. Write the ODE as an equation, and the initial conditions as a set of substitution
rules.
In[ ]:= myODE = y ''[t] + 3 y '[t] + 2 y[t]  r[t]
Out[ ]=

2 y[t] + 3 y′ [t] + y′′ [t]  10 HeavisideTheta[π - t] Sin[2 t]

In[ ]:= IC = {y[0]  1, y '[0]  - 5}


Out[ ]=

{y[0]  1, y′ [0]  - 5}

◆ Step 1. Take Laplace transforms of both sides of the equation, and substitute the initial
conditions into the equation.


In[ ]:= ltODE = LaplaceTransform[myODE, t, s] /. IC


Out[ ]=

5 - s + 2 LaplaceTransform[y[t], t, s] + s² LaplaceTransform[y[t], t, s] +
3 (- 1 + s LaplaceTransform[y[t], t, s])  (10 (2 - 2 ^(-π s)))/(4 + s²)

◆ This equation will be easier to read if we write Y(s) for ℒ{y(t)}(s), which we can do
using a substitution rule.
In[ ]:= eqnForY = ltODE /. LaplaceTransform[y[t], t, s]  Y[s]
Out[ ]=

5 - s + 2 Y[s] + s² Y[s] + 3 (- 1 + s Y[s])  (10 (2 - 2 ^(-π s)))/(4 + s²)

◆ Step 2. Solve the subsidiary equation by algebraic manipulations.


In[ ]:= Solve[eqnForY, Y[s]]
Out[ ]=
{{Y[s]  (- 2 + s + (10 (2 - 2 ^(-π s)))/(4 + s²))/(2 + 3 s + s²)}}

In[ ]:= YSoln[s_] := Y[s] /. Solve[eqnForY, Y[s]]〚1〛; YSoln[s]


Out[ ]=
(- 2 + s + (10 (2 - 2 ^(-π s)))/(4 + s²))/(2 + 3 s + s²)

◆ Now we have computed the Laplace transform of the solution. Take its inverse Laplace
transform to get the solution.
◆ Step 3. The solution in Step 2, Y(s), is transformed back, resulting in the solution of the
given problem.
In[ ]:= InverseLaplaceTransform[YSoln[s], s, t]
Out[ ]=

- ^(-2 t) (- 2 + ^t) - 2 ^(-2 t) (- 1 + ^t) + 20 (- (1/8) ^(-2 t) + ^(-t)/5 + (1/40) (- 3 Cos[2 t] - Sin[2 t])) -
20 HeavisideTheta[- π + t] ((1/5) ^(π - t) - (1/8) ^(-2 (- π + t)) + (1/40) (- 3 Cos[2 (- π + t)] - Sin[2 (- π + t)]))

In[ ]:= ySoln[t_] = FullSimplify[InverseLaplaceTransform[YSoln[s], s, t]]; ySoln[t]


Out[ ]=
(1/2) ^(-2 t) (3 + 5 ^(2 π) HeavisideTheta[- π + t] + 2 ^t (1 - 4 ^π HeavisideTheta[- π + t]) +
^(2 t) (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]))

◆ Step 4. Verify the solution.


In[ ]:= ODECheck = myODE /. y  ySoln


Out[ ]=

3 -2 t 3 + 5 2 π HeavisideTheta[- π + t] + 2 t (1 - 4 π HeavisideTheta[- π + t]) +


2 t (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]) - 2 -2 t
5 2 π DiracDelta[- π + t] - 8 π+t DiracDelta[- π + t] + 2 t (1 - 4 π HeavisideTheta[- π + t]) +
2 t (- 1 + HeavisideTheta[- π + t]) (2 Cos[2 t] - 6 Sin[2 t]) + 2 t DiracDelta[- π + t]
(3 Cos[2 t] + Sin[2 t]) + 2 2 t (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]) +

3 - -2 t 3 + 5 2 π HeavisideTheta[- π + t] + 2 t (1 - 4 π HeavisideTheta[- π + t]) +

2 t (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]) +


1
-2 t 5 2 π DiracDelta[- π + t] - 8 π+t DiracDelta[- π + t] +
2
2 t (1 - 4 π HeavisideTheta[- π + t]) + 2 t (- 1 + HeavisideTheta[- π + t])
(2 Cos[2 t] - 6 Sin[2 t]) + 2 t DiracDelta[- π + t] (3 Cos[2 t] + Sin[2 t]) +

2 2 t (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]) +

1
-2 t - 16 π+t DiracDelta[- π + t] + 2 t (1 - 4 π HeavisideTheta[- π + t]) +
2
2 2 t DiracDelta[- π + t] (2 Cos[2 t] - 6 Sin[2 t]) +
4 2 t (- 1 + HeavisideTheta[- π + t]) (2 Cos[2 t] - 6 Sin[2 t]) +
2 t (- 1 + HeavisideTheta[- π + t]) (- 12 Cos[2 t] - 4 Sin[2 t]) +
4 2 t DiracDelta[- π + t] (3 Cos[2 t] + Sin[2 t]) +
4 2 t (- 1 + HeavisideTheta[- π + t]) (3 Cos[2 t] + Sin[2 t]) + 5 2 π DiracDelta′ [- π + t] -
8 π+t DiracDelta′ [- π + t] + 2 t (3 Cos[2 t] + Sin[2 t]) DiracDelta′ [- π + t] 
10 HeavisideTheta[π - t] Sin[2 t]

In[ ]:= FullSimplify[ODECheck]


Out[ ]=

(- 1 + HeavisideTheta[π - t] + HeavisideTheta[- π + t]) Sin[2 t]  0

In[ ]:= ICCheck = {ySoln[0]  y[0], ySoln '[0]  y '[0]} /. IC


Out[ ]=

{True, True}

◆ Now, let’s take a look at the solution by plotting it:


In[ ]:= Plot[{r[t], ySoln[t]}, {t, 0, 10},


PlotRange  {- 12, 12}, PlotStyle  {{Blue, Thick}, {Red, Thick}},
Frame  True, FrameLabel  {"t", "y(t)"}, Exclusions  None,
BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
Method  {"DefaultBoundaryStyle"  Automatic,
"DefaultMeshStyle"  AbsolutePointSize[6], "ScalingFunctions"  None},
PlotLegends  Placed[{"r(t)", "y(t)"}, {0.8, 0.75}], Background  White]

Out[ ]=
[Figure: input r(t) (blue) and response y(t) (red) on 0 ≤ t ≤ 10, plotted within ±12]

Convolution
◆ According to the textbook,
ℒ(f ) ℒ(g) is the transform of the convolution of f and g, denoted by the standard notation f * g
and defined by the integral:
h(t) = (f * g)(t) = ∫₀^t f(τ) g(t - τ) τ

◆ According to the Wolfram Mathematica,


▪ The convolution (f * g)(y) of two functions f(x) and g(x) is given by ∫_(-∞)^(+∞) f(x) g(y - x) x.
▪ The multidimensional convolution is given by
.

∫_(-∞)^(+∞) ∫_(-∞)^(+∞) ⋯ f(x1, x2, …) g(y1 - x1, y2 - x2, …) x1 x2 ⋯
.

◆ Convolution uses the Convolve command, which is somewhat tricky and requires a bit
of explanation.
◆ The syntax is: Convolve[ first function , second function , dummy variable , final
variable]


In[ ]:= ? Convolve


Out[ ]=

Symbol

Convolve[f , g, x, y] gives the convolution with respect to x of the expressions f and g.

Convolve[f , g, {x1 , x2 , …}, {y1 , y2 , …}] gives the multidimensional convolution.

◆ Convolution is defined in Wolfram Mathematica as an integral from -∞ to +∞, which is


consistent with its use in signal processing.
◆ Many textbooks define convolution as an integral from 0 to t. The Heaviside function
will be required in order to input functions into the Convolve command.
In[ ]:= ClearAll["Global`*"]

In[ ]:= Convolve[Sin[τ] * UnitStep[τ], Cos[τ] * UnitStep[τ], τ, t]


Out[ ]=
(1/2) t Sin[t] UnitStep[t]

◆ Alternatively, Mathematica can be used to evaluate the integral directly:


In[ ]:= Integrate[Sin[τ] Cos[t - τ], {τ, 0, t}]
Out[ ]=
(1/2) t Sin[t]

◆ The only difference between the two is the presence of the Heaviside function multi-
plied onto the result. However, that is fully consistent with the limits on the convolution
integral.
In[ ]:= Convolve[τ * UnitStep[τ], 1 * UnitStep[τ], τ, t]
Out[ ]=
(1/2) t² UnitStep[t]

In[ ]:= Integrate[τ * 1, {τ, 0, t}]


Out[ ]=

t²/2

◆ Another example:
In[ ]:= convolve = Convolve[Sin[τ] * UnitStep[τ], Sin[τ] * UnitStep[τ], τ, t]
Out[ ]=
(1/2) (- t Cos[t] + Sin[t]) UnitStep[t]


In[ ]:= integral = Integrate[Sin[τ] Sin[t - τ], {τ, 0, t}]


Out[ ]=
(1/2) (- t Cos[t] + Sin[t])

In[ ]:= convolve  integral * UnitStep[t]


Out[ ]=

True
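
◆ The convolution theorem itself, ℒ(f) ℒ(g) = ℒ(f * g), is just as easy to verify (a quick sketch of ours using the result (f * g)(t) = (1/2) t sin t obtained above for f = sin, g = cos):
In[ ]:= Simplify[LaplaceTransform[Sin[t], t, s] * LaplaceTransform[Cos[t], t, s] ==
          LaplaceTransform[(1/2) t Sin[t], t, s]]

◆ This should return True: both sides equal s/(1 + s²)².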

◆ Let’s take a look at the graph:


In[ ]:= Plot[(1/2) (- t Cos[t] + Sin[t]), {t, 0, 4 Pi}, PlotStyle  {{Red, Thick}},
          Frame  True, FrameLabel  {"t", "y(t)"}, Exclusions  None,
          BaseStyle  {FontWeight  "Bold", Black, FontSize  12}, GridLines  Automatic,
          AxesStyle  Directive[RGBColor[0.`, 0.`, 0.`], AbsoluteThickness[1]],
          PlotLegends  Placed["(1/2)(-t cos(t)+sin(t))", {0.3, 0.86}],
          Method  {"DefaultBoundaryStyle"  Automatic, "DefaultMeshStyle"  AbsolutePointSize[6],
            "ScalingFunctions"  None}, Background  White]
Out[ ]=

[Figure: y(t) = (1/2)(- t cos t + sin t) on 0 ≤ t ≤ 4π, oscillating with growing amplitude]

Summary
After completing this chapter, you should be able to
◼ develop SOPs to solve 1st-/2nd-order ODEs (IVPs) by the Laplace transform method
◼ perform Laplace and inverse Laplace transforms using Wolfram Mathematica
◼ use Mathematica to find the convolution of two functions.
◼ develop the habit of always checking your solutions for quality assurance.

Week 7: Series Solutions of ODEs
How to Use Series to Solve ODEs?

Table of Contents
1. The Series Command in Wolfram Mathematica
1.1. Taylor and Maclaurin Series
1.1.1. Example 7.1
1.1.2. Example 7.2
2. Basic Concepts
2.1. Convergent vs. Divergent Series
2.1.1. Example 7.3
2.1.2. Example 7.4
2.1.3. Example 7.5
2.2. Analytic at Point
3. Solving ODEs by the Power Series Method
3.1. Standard Operating Procedures (SOPs)
3.1.1. Example 7.6
3.1.2. Example 7.7
3.1.3. Example 7.8
3.2. Different Approach: Built-in Function in Wolfram Mathematica
4. Extended Power Series Method: Frobenius Method
4.1. Standard Operating Procedures (SOPs)
4.1.1. Example 7.9
4.1.2. Example 7.10
5. Summary

Commands list
◼ Quit[]
◼ Series[f , {x, x0, n}]
◼ Normal[expr]


◼ SeriesCoefficient[series, n]
◼ Sum[expr, {n, nmin, nmax}]
◼ SumConvergence[ f, n]
◼ Factorial[n]
◼ Log[z]
◼ LogicalExpand[expr]
◼ Coefficient[expr, form]
◼ Table[expr, n]
◼ AsymptoticDSolveValue[eqn, f, x  x0]

The Series Command in Wolfram Mathematica


To clear all definitions or to reclaim resources used by the kernel, you may want to quit the
kernel by evaluating Quit[].
In[ ]:= ? Quit
Out[ ]=

Symbol

Quit[] terminates a Wolfram Language kernel session.

In[ ]:= Quit[]

Use Series to make a power series out of a function. The first argument is the function. The
second argument has the form {var, pt, order}, where var is the variable, pt is the point around
which to expand, and order is the order:
In[ ]:= ? Series
Out[ ]=

Symbol

Series[f , {x, x0 , n}] generates a power series expansion

for f about the point x = x0 to order (x - x0 )n , where n is an explicit integer.

Series[f , x  x0 ] generates the leading term of a power series expansion for f about the point x = x0 .

Seriesf , {x, x0 , nx }, y, y0 , ny , … successively finds series expansions with respect to x, then y, etc.


◆ Read more on How to | Compute a Power Series


https://en.wikipedia.org/wiki/Power_series
https://reference.wolfram.com/language/ref/Series.html
.

◆ Power series for the exponential function around x = 0:


In[ ]:= Series[Exp[x], {x, 0, 10}]
Out[ ]=

1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + x⁹/362 880 + x¹⁰/3 628 800 + O[x]¹¹

◆ Power series for the function 1/eˣ around x = 0:
In[ ]:= Series[1 / Exp[x], {x, 0, 10}]
Out[ ]=

1 - x + x²/2 - x³/6 + x⁴/24 - x⁵/120 + x⁶/720 - x⁷/5040 + x⁸/40 320 - x⁹/362 880 + x¹⁰/3 628 800 + O[x]¹¹

◆ Power series for the function 1/x around x = 0:
In[ ]:= Series[1 / x, {x, 0, 10}]
Out[ ]=
1/x + O[x]¹¹

◆ Power series for the function of natural logarithm of x around x = 0:


Series[Log[x], {x, 0, 10}] (** Note that Log[x] gives the natural logarithm of x **)
Out[ ]=

Log[x] + O[x]11

◆ Power series for the function of cos(x) around x = 0:


In[ ]:= Series[Cos[x], {x, 0, 10}]
Out[ ]=

1 - x²/2 + x⁴/24 - x⁶/720 + x⁸/40 320 - x¹⁰/3 628 800 + O[x]¹¹

◆ Power series for the function e^(ⅈ x) around x = 0:


In[ ]:= Series[Exp[I * x], {x, 0, 10}]
Out[ ]=

x2  x3 x4  x5 x6  x7 x8  x9 x10
1+x- - + + - - + + - + O[x]11
2 6 24 120 720 5040 40 320 362 880 3 628 800

◆ We may find the derivatives of the power series using the D[ ] command.


In[ ]:= ?D
Out[ ]=

Symbol

D[f , x] gives the partial derivative ∂ f  ∂ x.

D[f , {x, n}] gives the multiple derivative ∂n f  ∂ xn .

D[f , x, y, …] gives the partial derivative ⋯ (∂ / ∂ y) (∂ / ∂ x) f.

D[f , {x, n}, {y, m}, …] gives the multiple partial derivative ⋯ ∂m  ∂ ym  ∂n  ∂ xn  f.

D[f , {{x1 , x2 , …}}] for a scalar f gives the vector derivative ∂ f  ∂ x1 , ∂ f  ∂ x2 , ….

D[f , {array}] gives an array derivative.

In[ ]:= D[D[Series[Exp[x], {x, 0, 10}], {x, 1}], {x, 1}]


Out[ ]=

1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + O[x]⁹

In[ ]:= s1 = D[Series[Exp[x], {x, 0, 10}], {x, 1}]


Out[ ]=

1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + x⁹/362 880 + O[x]¹⁰

◆ Normal[ ] turns the power series back into an ordinary polynomial expression.
In[ ]:= Normal[s1]
Out[ ]=

1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + x⁹/362 880

◆ We can find the coefficients of the terms in the particular power series by using the
command SeriesCoefficient[ ].
In[ ]:= Table[SeriesCoefficient[s1, n], {n, 0, 9}]
Out[ ]=
{1, 1, 1/2, 1/6, 1/24, 1/120, 1/720, 1/5040, 1/40 320, 1/362 880}


In[ ]:= ? SeriesCoefficient


Out[ ]=

Symbol

SeriesCoefficient[series, n] finds the coefficient of

the nth -order term in a power series in the form generated by Series.

SeriesCoefficient[f , {x, x0 , n}] finds the coefficient of (x - x0 )n in the expansion of f about the point x = x0 .

SeriesCoefficientf , {x, x0 , nx }, y, y0 , ny , … finds a coefficient in a multivariate series.

In[ ]:= Series[Log[1 + x], {x, 0, 7}]


Out[ ]=

x - x²/2 + x³/3 - x⁴/4 + x⁵/5 - x⁶/6 + x⁷/7 + O[x]⁸

◆ Note: when we do operations on a power series, the result is computed only to the appropriate order of x.
In[ ]:= s12
Out[ ]=

1 + 2 x + 2 x² + (4 x³)/3 + (2 x⁴)/3 + (4 x⁵)/15 + (4 x⁶)/45 + (8 x⁷)/315 + (2 x⁸)/315 + (4 x⁹)/2835 + O[x]¹⁰

In[ ]:= (Normal[s1])2


Out[ ]=
(1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + x⁹/362 880)²

Taylor and Maclaurin series


If f has derivatives of all orders at x = a, then the Taylor series for the function f at a is
.
∑_(n=0)^∞ f⁽ⁿ⁾(a)/n! (x - a)ⁿ = f(a) + f′(a)(x - a) + (f′′(a)/2!)(x - a)² + ⋯ + (f⁽ⁿ⁾(a)/n!)(x - a)ⁿ + ⋯
.

The Taylor series for f at 0 is known as the Maclaurin series for f.


.

Read more on:


https://math.libretexts.org/Bookshelves/Calculus/Book%3A_Calculus_(OpenStax)/10%3A_Power_Series/10.3%3A_Taylor_and_Maclaurin_Series
.
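
◆ The definition is easy to probe directly (a minimal sketch of ours): the n-th series coefficient about x = a must equal f⁽ⁿ⁾(a)/n!. Taking f = exp and a = 1:
In[ ]:= Table[SeriesCoefficient[Exp[x], {x, 1, n}] == Derivative[n][Exp][1]/n !, {n, 0, 4}]

◆ Every entry should come back True, since all derivatives of eˣ at x = 1 equal e, making each coefficient e/n!.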

Example 7.1
Find the Taylor expansion of the given function around x0 = 1 (up to 9 th order terms):


sinh 3 x2 - 4

In[ ]:= SeriesSinh3 x2 - 4, {x, 1, 9}


Out[ ]=

- Sinh[1] + 6 Cosh[1] (x - 1) + (3 Cosh[1] - 18 Sinh[1]) (x - 1)² +
(36 Cosh[1] - 18 Sinh[1]) (x - 1)³ + (54 Cosh[1] - (117 Sinh[1])/2) (x - 1)⁴ +
((459 Cosh[1])/5 - 108 Sinh[1]) (x - 1)⁵ + ((333 Cosh[1])/2 - (729 Sinh[1])/5) (x - 1)⁶ +
((7614 Cosh[1])/35 - (1107 Sinh[1])/5) (x - 1)⁷ + ((1377 Cosh[1])/5 - (80 649 Sinh[1])/280) (x - 1)⁸ +
((47 547 Cosh[1])/140 - (11 502 Sinh[1])/35) (x - 1)⁹ + O[x - 1]¹⁰

Example 7.2
Find the Maclaurin expansion of the given function (up to 15 th order terms):

log 2 x3 + 5

Series[Log[2 x ^ 3 + 5], {x, 0, 15}] (** Maclaurin series means that x 0 =0 **)
3 6 9 12 15
2x 2x 8x 4x 32 x
Log[5] + - + - + + O[x]16
5 25 375 625 15 625

Basic Concepts
What do we mean by “Convergent vs. Divergent Series” ?
Convergent Series: A series is said to be convergent if it approaches some limit (D’Angelo
and West 2000, p. 259).
Divergent Series: A series which is not convergent.
.

Read more on: https://en.wikipedia.org/wiki/Convergent_series


.

◆ Sum[expr, {n, nmin, nmax}] finds the sum of expr as n goes from nmin to nmax .
In[ ]:= Sum[x ^ n / n !, {n, 0, Infinity}]
Out[ ]=

x

In[ ]:= Sumx ^ n  (n !)2 , {n, 0, Infinity}


Out[ ]=

BesselI0, 2 x


In[ ]:= ? BesselI


Out[ ]=

Symbol

BesselI[n, z] gives the modified Bessel function of the first kind I n (z).

In[ ]:= Sum[(n !) * x^n/(2 n) !, {n, 0, Infinity}]
Out[ ]=
(1/2) (2 + ^(x/4) Sqrt[π] Sqrt[x] Erf[Sqrt[x]/2])

In[ ]:= Sum[(n !) * x^n/(2 n) !, {n, 1, Infinity}]
Out[ ]=
(1/2) ^(x/4) Sqrt[π] Sqrt[x] Erf[Sqrt[x]/2]

In[ ]:= Sum[1 / n, {n, 1, Infinity}]

Sum: Sum does not converge.


Out[ ]=
∑_(n=1)^∞ 1/n

◆ We can also use a built-in function SumConvergence to find out if the series is conver-
gent or divergent.
In[ ]:= ? SumConvergence
Out[ ]=

Symbol

SumConvergence[f , n] gives conditions for the sum ∑∞


n f to be convergent.

SumConvergence[f , {n1 , n2 , …}] gives conditions for the multiple sum ∑∞


n1 ∑n2 … f to be convergent.

Example 7.3
Test for convergence of the sum:
∑_(n=1)^∞ 1/n

In[ ]:= SumConvergence[1 / n, n]


Out[ ]=

False


Example 7.4
Test for convergence of the sum:
∑_(n=1)^∞ 3ⁿ n²/n!

In[ ]:= SumConvergence[3^n * n^2/n !, n]
Out[ ]=

True
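
◆ Since the series converges, Mathematica can also evaluate it in closed form (a quick sketch of ours; the identity ∑ n² xⁿ/n! = x(x + 1) eˣ gives 12 e³ at x = 3):
In[ ]:= Sum[3^n * n^2/n !, {n, 1, Infinity}]

◆ This should return 12 ^3.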

Example 7.5
Test for convergence of the sum:
∑_(n=1)^∞ 1/n!

In[ ]:= SumConvergence[1 / Factorial[n], n]


True

What do we mean by “Analytic at point” ?


An analytic function is a function that is locally given by a convergent power series.
A function is said to be analytic at a point a if it can be represented by a power series in x - a.
For example, functions such as eˣ, sin x, and log x (about suitable points) can be represented by
Taylor series, so these functions are analytic.
If a function is analytic at a point, it can be represented either by a Taylor series or by a
Maclaurin series, both of which are power series.

∑_(n=0)^∞ f⁽ⁿ⁾(a)/n! (x - a)ⁿ = f(a) + f′(a)(x - a) + (f′′(a)/2!)(x - a)² + ⋯

∑_(n=0)^∞ f⁽ⁿ⁾(0)/n! xⁿ = f(0) + f′(0) x + (f′′(0)/2!) x² + ⋯

Series[1 / x, {x, 0, 5}]


Out[ ]=
1
+ O[x]6
x

◆ Not analytic at x = 0.
Series[Log[x], {x, 0, 5}]
Out[ ]=

Log[x] + O[x]6


◆ Not analytic at x = 0.
Series[Log[1 + x], {x, 0, 5}]
Out[ ]=

x2 x3 x4 x5
x- + - + + O[x]6
2 3 4 5

◆ Analytic at x = 0.

Solving ODEs by the Power Series Method


Standard Operating Procedures (SOPs)
 Step 1. Define the solution as a Power Series (with coefficients to be determined; the center
of the series is usually taken to be at x0 = 0).

y = a0 + a1 x + a2 x² + a3 x³ + ⋯ = ∑_(m=0)^∞ am xᵐ

 Step 2. Insert the power series of y and the power series of y′, y′′, obtained by term-wise
differentiation, into the ODE.

y′ = a1 + 2 a2 x + 3 a3 x² + ⋯ = ∑_(m=1)^∞ m am x^(m-1)

Collect like powers of x, finding


.

(a1 - a0) + (2 a2 - a1) x + (3 a3 - a2) x2 + ⋯ = 0


.

 Step 3. Equating the coefficient of each power of x to zero, we have a system of equations of
the coefficients, am.
.

a1 - a0 = 0 , 2 a2 - a1 = 0 , 3 a3 - a2 = 0 , ⋯ .
.

 Step 4. Solving these equations, we may express a1, a2, ... in terms of a0 (for the first-order
ODEs) or a2, a3, ... in terms of a0 and a1 (for the second-order ODEs).
.
a1 = a0,  a2 = a1/2 = a0/2!,  a3 = a2/3 = a0/3!,  ⋯
.

 Step 5. With these values of the coefficients, the series solution becomes the familiar general
solution.
y = a0 + a0 x + (a0/2!) x² + (a0/3!) x³ + ⋯ = a0 (1 + x + x²/2! + x³/3! + ⋯) = a0 eˣ
2! 3! 2! 3!
Example 7.6
Find the general solution to the given ODE:


y' - y = 0

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Define the solution as a Power Series. Here, we omit the terms of p + 1. The
value of p (max value) can be varied.
In[ ]:= p = 8; y = Sum[c[i] x ^ i, {i, 0, p}] + O[x] ^ (p + 1)
Out[ ]=

c[0] + c[1] x + c[2] x2 + c[3] x3 + c[4] x4 + c[5] x5 + c[6] x6 + c[7] x7 + c[8] x8 + O[x]9

◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.
In[ ]:= de = D[y, x] - y  0
Out[ ]=

(- c[0] + c[1]) + (- c[1] + 2 c[2]) x + (- c[2] + 3 c[3]) x2 + (- c[3] + 4 c[4]) x3 +


(- c[4] + 5 c[5]) x4 + (- c[5] + 6 c[6]) x5 + (- c[6] + 7 c[7]) x6 + (- c[7] + 8 c[8]) x7 + O[x]8  0

◆ Step 3. Use LogicalExpand[] to generate a sequence of equations for each power of x.


In[ ]:= coeffeqns = LogicalExpand[de]
Out[ ]=

- c[0] + c[1]  0 && - c[1] + 2 c[2]  0 && - c[2] + 3 c[3]  0 && - c[3] + 4 c[4]  0 &&
- c[4] + 5 c[5]  0 && - c[5] + 6 c[6]  0 && - c[6] + 7 c[7]  0 && - c[7] + 8 c[8]  0

In[ ]:= ? LogicalExpand


Out[ ]=

Symbol

LogicalExpand[expr] expands out logical combinations of equations, inequalities, and other functions.

◆ Step 4. Solve the equations for the coefficients a[i]. We can also feed equations involv-
ing power series directly to Solve[]:
In[ ]:= solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 8}]]
Out[ ]=
{{c[1]  c[0], c[2]  c[0]/2, c[3]  c[0]/6, c[4]  c[0]/24,
  c[5]  c[0]/120, c[6]  c[0]/720, c[7]  c[0]/5040, c[8]  c[0]/40 320}}

◆ Step 5. Substitute the obtained coefficients to get our solution.


In[ ]:= y = y /. solvedcoeffs


Out[ ]=
{c[0] + c[0] x + (1/2) c[0] x² + (1/6) c[0] x³ + (1/24) c[0] x⁴ + (1/120) c[0] x⁵ +
 (1/720) c[0] x⁶ + (c[0] x⁷)/5040 + (c[0] x⁸)/40 320 + O[x]⁹}

In[ ]:= Coefficient[y, c[0]]


Out[ ]=

x2 x3 x4 x5 x6 x7 x8
1 + x + + + + + + + 
2 6 24 120 720 5040 40 320

◆ Summation of Series: The Wolfram System recognizes this as the power series expan-
sion of exp(x).
In[ ]:= Sum[x ^ n / n !, {n, 0, Infinity}]
Out[ ]=

x

In[ ]:= Series[Exp[x], {x, 0, 8}]


Out[ ]=

1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + x⁶/720 + x⁷/5040 + x⁸/40 320 + O[x]⁹

◆ Thus we have obtained the familiar solution y = c0 eˣ.


◆ Step 6. Verify the solution.
In[ ]:= D[c0 * Exp[x], x] - c0 * Exp[x]  0
Out[ ]=

True

Example 7.7
Find the general solution to the given ODE:
y'' + y = 0

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Define the solution as a Power Series. Here, we omit the terms of p + 1. The
value of p (max value) can be varied.
In[ ]:= p = 9; y = Sum[c[i] x ^ i, {i, 0, p}] + O[x] ^ (p + 1)
Out[ ]=

c[0] + c[1] x + c[2] x2 + c[3] x3 + c[4] x4 + c[5] x5 + c[6] x6 + c[7] x7 + c[8] x8 + c[9] x9 + O[x]10

◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.


In[ ]:= de = D[y, {x, 2}] + y  0


Out[ ]=

(c[0] + 2 c[2]) + (c[1] + 6 c[3]) x + (c[2] + 12 c[4]) x2 + (c[3] + 20 c[5]) x3 +


(c[4] + 30 c[6]) x4 + (c[5] + 42 c[7]) x5 + (c[6] + 56 c[8]) x6 + (c[7] + 72 c[9]) x7 + O[x]8  0

◆ Step 3. Use LogicalExpand[] to generate a sequence of equations for each power of x.


In[ ]:= coeffeqns = LogicalExpand[de]
Out[ ]=

c[0] + 2 c[2]  0 && c[1] + 6 c[3]  0 && c[2] + 12 c[4]  0 && c[3] + 20 c[5]  0 &&
c[4] + 30 c[6]  0 && c[5] + 42 c[7]  0 && c[6] + 56 c[8]  0 && c[7] + 72 c[9]  0

◆ Step 4. Solve the equations for the coefficients a[i]. We can also feed equations involv-
ing power series directly to Solve[]:
In[ ]:= solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 10}]]

Solve: Equations may not give solutions for all "solve" variables.
Out[ ]=
{{c[2]  - c[0]/2, c[3]  - c[1]/6, c[4]  c[0]/24, c[5]  c[1]/120,
  c[6]  - c[0]/720, c[7]  - c[1]/5040, c[8]  c[0]/40 320, c[9]  c[1]/362 880}}

◆ Step 5. Substitute the obtained coefficients to get our solution.


In[ ]:= y = y /. solvedcoeffs
Out[ ]=
{c[0] + c[1] x - (1/2) c[0] x² - (1/6) c[1] x³ + (1/24) c[0] x⁴ + (1/120) c[1] x⁵ -
 (1/720) c[0] x⁶ - (c[1] x⁷)/5040 + (c[0] x⁸)/40 320 + (c[1] x⁹)/362 880 + O[x]¹⁰}

In[ ]:= Coefficient[y, c[0]]


Out[ ]=

{1 - x²/2 + x⁴/24 - x⁶/720 + x⁸/40 320}

In[ ]:= Series[Cos[x], {x, 0, 10}]


Out[ ]=

1 - x²/2 + x⁴/24 - x⁶/720 + x⁸/40 320 - x¹⁰/3 628 800 + O[x]¹¹

◆ Expressing the coefficients in terms of the arbitrary c[0], we get the solution of
y = c0 cos(x).


In[ ]:= Coefficient[y, c[1]]


Out[ ]=

{x - x³/6 + x⁵/120 - x⁷/5040 + x⁹/362 880}

In[ ]:= Series[Sin[x], {x, 0, 10}]


Out[ ]=

x - x³/6 + x⁵/120 - x⁷/5040 + x⁹/362 880 + O[x]¹¹

◆ Expressing the coefficients in terms of the arbitrary c[1], we get the solution
y = c1 sin(x).
In[ ]:= ysoln = Coefficient[y, c[0]] + Coefficient[y, c[1]]
Out[ ]=

{1 + x - x²/2 - x³/6 + x⁴/24 + x⁵/120 - x⁶/720 - x⁷/5040 + x⁸/40 320 + x⁹/362 880}

◆ Thus the general solution is y = c0 cos(x) + c1 sin(x).


In[ ]:= Series[Cos[x] + Sin[x], {x, 0, 9}]
Out[ ]=

1 + x - x²/2 - x³/6 + x⁴/24 + x⁵/120 - x⁶/720 - x⁷/5040 + x⁸/40 320 + x⁹/362 880 + O[x]¹⁰

◆ Step 6. Verify the solution.


In[ ]:= D[c0 * Cos[x] + c1 * Sin[x], {x, 2}] + c0 * Cos[x] + c1 * Sin[x]  0
Out[ ]=

True

Example 7.8
Find the general solution to the given ODE:

(y' )2 - y = x

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Define the solution as a Power Series.


In[ ]:= y = Sum[c[i] x ^ i, {i, 0, 8}] + O[x] ^ 9
Out[ ]=

c[0] + c[1] x + c[2] x2 + c[3] x3 + c[4] x4 + c[5] x5 + c[6] x6 + c[7] x7 + c[8] x8 + O[x]9

◆ Step 2. Insert the power series solution (with undetermined coefficients) to the given
ODE.


In[ ]:= de = D[y, x] ^ 2 - y  x


Out[ ]=

(-c[0] + c[1]^2) + (-c[1] + 4 c[1] c[2]) x + (-c[2] + 4 c[2]^2 + 6 c[1] c[3]) x^2 +
(-c[3] + 12 c[2] c[3] + 8 c[1] c[4]) x^3 + (9 c[3]^2 - c[4] + 16 c[2] c[4] + 10 c[1] c[5]) x^4 +
(24 c[3] c[4] - c[5] + 20 c[2] c[5] + 12 c[1] c[6]) x^5 +
(16 c[4]^2 + 30 c[3] c[5] - c[6] + 24 c[2] c[6] + 14 c[1] c[7]) x^6 +
(40 c[4] c[5] + 36 c[3] c[6] - c[7] + 28 c[2] c[7] + 16 c[1] c[8]) x^7 + O[x]^8  x

◆ Step 3. Use LogicalExpand[] to generate a sequence of equations for each power of x.


In[ ]:= coeffeqns = LogicalExpand[de]
Out[ ]=

-c[0] + c[1]^2  0 && -1 - c[1] + 4 c[1] c[2]  0 && -c[2] + 4 c[2]^2 + 6 c[1] c[3]  0 &&
-c[3] + 12 c[2] c[3] + 8 c[1] c[4]  0 && 9 c[3]^2 - c[4] + 16 c[2] c[4] + 10 c[1] c[5]  0 &&
24 c[3] c[4] - c[5] + 20 c[2] c[5] + 12 c[1] c[6]  0 &&
16 c[4]^2 + 30 c[3] c[5] - c[6] + 24 c[2] c[6] + 14 c[1] c[7]  0 &&
40 c[4] c[5] + 36 c[3] c[6] - c[7] + 28 c[2] c[7] + 16 c[1] c[8]  0

◆ Step 4. Solve the equations for the coefficients c[i]. We can also feed equations involving power series directly to Solve[]:
In[ ]:= c[0] = 1; solvedcoeffs = Solve[coeffeqns, Table[c[i], {i, 1, 8}]]
Out[ ]=

{c[1]  - 1, c[2]  0, c[3]  0, c[4]  0, c[5]  0, c[6]  0, c[7]  0, c[8]  0},


1 1 5
c[1]  1, c[2]  , c[3]  - , c[4]  ,
2 12 96
41 469 6889 24 721
c[5]  - , c[6]  , c[7]  - , c[8]  
960 11 520 161 280 516 096

◆ Step 5. Substitute the obtained coefficients to get our solution.


In[ ]:= y = y /. solvedcoeffs
Out[ ]=

{1 - x + O[x]^9, 1 + x + x^2/2 - x^3/12 + 5 x^4/96 - 41 x^5/960 + 469 x^6/11520 - 6889 x^7/161280 + 24721 x^8/516096 + O[x]^9}

In[ ]:= (D[1 - x, x])2 - (1 - x)  x


Out[ ]=

True

In[ ]:= D[1 + x + x^2/2 - x^3/12 + 5 x^4/96 - 41 x^5/960 + O[x]^6, x]^2 -
(1 + x + x^2/2 - x^3/12 + 5 x^4/96 - 41 x^5/960 + O[x]^6)  x
Out[ ]=

x + O[x]^5  x

Different Approach Using a Function Embedded in Wolfram Mathematica


In[ ]:= ? AsymptoticDSolveValue


Out[ ]=

Symbol

AsymptoticDSolveValue [eqn, f , x  x0 ] computes an

asymptotic approximation to the differential equation eqn for f [x] centered at x0 .

AsymptoticDSolveValue [{eqn1 , eqn2 , …}, {f1 , f2 , …}, x  x0 ]

computes an asymptotic approximation to a system of differential equations.

AsymptoticDSolveValue [eqn, f , x, ϵ  ϵ0 ] computes

an asymptotic approximation of f [x, ϵ] for the parameter ϵ centered at ϵ0 .

AsymptoticDSolveValue [eqn, f , …, {ξ, ξ0 , n}] computes the asymptotic approximation to order n.

The same ODE as in Example 7.2, with the corresponding initial conditions:
y'' + y = 0,   y(0) = 1,   y'(0) = 0

In[ ]:= ClearAll["Global`*"]

In[ ]:= sol1 = AsymptoticDSolveValue[{y ''[x] + y[x]  0, y[0]  1, y '[0]  0}, y[x], {x, 0, 8}]
Out[ ]=

1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320

In[ ]:= sol2 = AsymptoticDSolveValue[{y ''[x] + y[x]  0, y[0]  1, y '[0]  0}, y[x], {x, 0, 16}]
Out[ ]=

1 - x^2/2 + x^4/24 - x^6/720 + x^8/40320 - x^10/3628800 + x^12/479001600 - x^14/87178291200 + x^16/20922789888000

◆ Asymptotic approximation by varying the order n.


In[ ]:= sol[n_] := AsymptoticDSolveValue[{y ''[x] + y[x]  0, y[0]  1, y '[0]  0}, y[x], {x, 0, n}]


In[ ]:= Plot[{sol[4], sol[8], sol[12], sol[16], sol[24], Cos[x]} // Evaluate,


{x, 0, 3 Pi}, PlotRange  {- 2, 5}, Frame  True,
PlotLegends  {"p=4", "p=8", "p=12", "p=16", "p=24", "cos(x)"},
PlotStyle  {Blue, Orange, Yellow, Green, Red, Gray}]
[Plot: truncated series solutions sol[p] for p = 4, 8, 12, 16, 24 plotted against cos(x) on 0 ≤ x ≤ 3π; higher-order truncations follow cos(x) over a progressively wider interval before diverging.]
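◆ A quick numeric spot check (our addition): at x = 2, the order-8 truncation should already agree with cos(x) to about three decimal places.
In[ ]:= {sol[8] /. x  2.0, Cos[2.0]}
Out[ ]=
{-0.415873, -0.416147}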

Extended Power Series Method: Frobenius Method


Let b(x) and c(x) be any functions that are analytic at x = 0. Then the ODE
y'' + (b(x)/x) y' + (c(x)/x^2) y = 0

has at least one solution that can be represented in the form


.

y(x) = x^r ∑_{m=0}^{∞} am x^m = x^r (a0 + a1 x + a2 x^2 + ⋯)
.

where the exponent r may be any (real or complex) number (and r is chosen so that a0 ≠ 0).
.

Frobenius Method. Standard Operating Procedures (SOPs)


 Step 1. Rewrite the ODE in the form x^2 y'' + x b(x) y' + c(x) y = 0. Find b(x) and c(x).

 Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and c(x)
must be analytic at x = 0 . If b(x) and c(x) are polynomials we do nothing in this step. The
purpose of this step is to obtain b0 = b(x = 0) and c0 = c(x = 0).
.

b(x) = b0 + b1 x + b2 x^2 + ⋯ ,    c(x) = c0 + c1 x + c2 x^2 + ⋯
.

 Step 3. Obtain the indicial equation: r(r - 1) + b0 r + c0 = 0


 Step 4. Solve the indicial equation, and obtain its roots r1 and r2. Depending on the values
of r1 and r2, we have the following three cases:
(i) Distinct roots not differing by an integer;
(ii) Double root r1 = r2 ;
(iii) Roots differing by an integer.
.

 Case 1. Distinct Roots Not Differing by an Integer. A basis is


.

y1(x) = x^(r1) (a0 + a1 x + a2 x^2 + ⋯)
.

and
.

y2(x) = x^(r2) (A0 + A1 x + A2 x^2 + ⋯)


.

 Case 2. Double Root r1 = r2 = r . A basis is


.
y1(x) = x^r (a0 + a1 x + a2 x^2 + ⋯),    r = (1 - b0)/2
.

(of the same general form as before) and


.

y2(x) = y1(x) ln x + x^r (A1 x + A2 x^2 + ⋯)    (x > 0)


.

 Case 3. Roots Differing by an Integer. A basis is


.

y1(x) = x^(r1) (a0 + a1 x + a2 x^2 + ⋯)
.

(of the same general form as before) and


.

y2(x) = k y1(x) ln x + x^(r2) (A0 + A1 x + A2 x^2 + ⋯)


.

where the roots are denoted so that r1 - r2 > 0; the constant k may turn out to be zero.
.

Example 7.9
x(x - 1) y'' + (3 x - 1) y' + y = 0

◆ Step 1. Rewrite the ODE in the form x^2 y'' + x b(x) y' + c(x) y = 0. Find b(x) and c(x).
.


y'' + ((3 x - 1)/(x (x - 1))) y' + (1/(x (x - 1))) y = 0  ⟹  x^2 y'' + x ((3 x - 1)/(x - 1)) y' + (x^2/(x (x - 1))) y = 0  ⟹  b(x) = (3 x - 1)/(x - 1),  c(x) = x^2/(x (x - 1))
.

◆ Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and
c(x) must be analytic at x = 0 . If b(x) and c(x) are polynomials we do nothing in this
step. The purpose of this step is to obtain b0 = b(x = 0) and c0 = c(x = 0).
In[ ]:= Series[(3 x - 1)/(x - 1), {x, 0, 5}]
Out[ ]=

1 - 2 x - 2 x2 - 2 x3 - 2 x4 - 2 x5 + O[x]6

In[ ]:= Series[x^2/(x (x - 1)), {x, 0, 5}]
Out[ ]=

- x - x2 - x3 - x4 - x5 + O[x]6

◆ Step 3. Obtain the indicial equation. With b(0) = 1 and c(0) = 0, we have an indicial
equation r(r - 1) + r = 0.
In[ ]:= Solve[r (r - 1) + r  0, r]
Out[ ]=

{{r  0}, {r  0}}

◆ Step 4. Solving the indicial equation, we obtain the double root r1 = r2 = 0. This corresponds to Case (ii).
◆ The indicial root:
In[ ]:= ClearAll[a, r];

In[ ]:= r = 0;

◆ The first k + 1 terms of a proposed Frobenius series solution:


In[ ]:= k = 6;
y = x ^ r (Sum[ a[n] x ^ n, {n, 0, k} ] + O[x] ^ (k + 1))
Out[ ]=

a[0] + a[1] x + a[2] x2 + a[3] x3 + a[4] x4 + a[5] x5 + a[6] x6 + O[x]7

◆ Substitute this series into the given ODE: x(x - 1) y'' + (3 x - 1) y' + y = 0 .
In[ ]:= deq = x * (x - 1) * D[y, {x, 2}] + (3 x - 1) * D[y, {x, 1}] + y  0
Out[ ]=

(a[0] - a[1]) + (4 a[1] - 4 a[2]) x + (9 a[2] - 9 a[3]) x2 +


(16 a[3] - 16 a[4]) x3 + (25 a[4] - 25 a[5]) x4 + (36 a[5] - 36 a[6]) x5 + O[x]6  0

◆ Write the equations that the coefficients must satisfy.


In[ ]:= coeffEqns = LogicalExpand[ deq ]


Out[ ]=

a[0] - a[1]  0 && 4 a[1] - 4 a[2]  0 && 9 a[2] - 9 a[3]  0 &&


16 a[3] - 16 a[4]  0 && 25 a[4] - 25 a[5]  0 && 36 a[5] - 36 a[6]  0

◆ Table listing the successive coefficients.


In[ ]:= succCoeffs = Table[ a[n], {n, 1, 6} ]
Out[ ]=

{a[1], a[2], a[3], a[4], a[5], a[6]}

◆ Solve for these coefficients in terms of a[0].


In[ ]:= ourCoeffs = Solve[coeffEqns, succCoeffs]
Out[ ]=

{{a[1]  a[0], a[2]  a[0], a[3]  a[0], a[4]  a[0], a[5]  a[0], a[6]  a[0]}}

◆ Substitute these coefficients into the original series to obtain the desired particular solution.


In[ ]:= y = y /. ourCoeffs
Out[ ]=

a[0] + a[0] x + a[0] x2 + a[0] x3 + a[0] x4 + a[0] x5 + a[0] x6 + O[x]7 

◆ Take the common factor a[0] out:


In[ ]:= Coefficient[y, a[0]]
Out[ ]=

1 + x + x2 + x3 + x4 + x5 + x6 

◆ Calculate the infinite sum of our series solution.


In[ ]:= Sum[x ^ n, {n, 0, Infinity}]
Out[ ]=

1/(1 - x)

◆ Verify the infinite sum of the series solution.


In[ ]:= Series[1/(1 - x), {x, 0, k}]
Out[ ]=

1 + x + x2 + x3 + x4 + x5 + x6 + O[x]7

◆ Now we have obtained one solution to the given ODE, which is given by
y = a[0] ∑_{m=0}^{∞} x^m = a[0]/(1 - x)
◆ By choosing a[0] = 1, we have y1 = 1/(1 - x).
◆ We may get a second independent solution y2(x) by using two methods:


(1) following the Case (ii) rule of the Frobenius method;


(2) the method of reduction of order.
◆ Let’s find the second independent solution y2(x).
◆ Method 1. Following the Case (ii) rule of the Frobenius method.
In[ ]:= r = 0; k = 6;
Log[x]
y = + x ^ r (Sum[ A[n] x ^ n, {n, 0, k} ] + O[x] ^ (k + 1))
1-x
Out[ ]=

(A[0] + Log[x]) + (A[1] + Log[x]) x + (A[2] + Log[x]) x2 + (A[3] + Log[x]) x3 +


(A[4] + Log[x]) x4 + (A[5] + Log[x]) x5 + (A[6] + Log[x]) x6 + O[x]7

In[ ]:= deq = x * (x - 1) D[y, {x, 2}] + (3 x - 1) D[y, {x, 1}] + y  0


Out[ ]=

(A[0] - A[1]) + (- 3 + A[1] - 4 A[2] - 3 Log[x] + 3 (1 + A[1] + Log[x])) x +


(- 3 + 3 A[2] - 9 A[3] - 6 Log[x] + 3 (1 + 2 A[2] + 2 Log[x])) x2 +
(- 3 + 7 A[3] - 16 A[4] - 9 Log[x] + 3 (1 + 3 A[3] + 3 Log[x])) x3 +
(- 3 + 13 A[4] - 25 A[5] - 12 Log[x] + 3 (1 + 4 A[4] + 4 Log[x])) x4 +
(- 3 + 21 A[5] - 36 A[6] - 15 Log[x] + 3 (1 + 5 A[5] + 5 Log[x])) x5 + O[x]6  0

In[ ]:= coeffEqns = LogicalExpand[ deq ]


Out[ ]=

A[0] - A[1]  0 && - 3 + A[1] - 4 A[2] - 3 Log[x] + 3 (1 + A[1] + Log[x])  0 &&


- 3 + 3 A[2] - 9 A[3] - 6 Log[x] + 3 (1 + 2 A[2] + 2 Log[x])  0 &&
- 3 + 7 A[3] - 16 A[4] - 9 Log[x] + 3 (1 + 3 A[3] + 3 Log[x])  0 &&
- 3 + 13 A[4] - 25 A[5] - 12 Log[x] + 3 (1 + 4 A[4] + 4 Log[x])  0 &&
- 3 + 21 A[5] - 36 A[6] - 15 Log[x] + 3 (1 + 5 A[5] + 5 Log[x])  0

In[ ]:= succCoeffs = Table[ A[n], {n, 1, 6} ]


Out[ ]=

{A[1], A[2], A[3], A[4], A[5], A[6]}

In[ ]:= ourCoeffs = Solve[coeffEqns, succCoeffs]


Out[ ]=

{{A[1]  A[0], A[2]  A[0], A[3]  A[0], A[4]  A[0], A[5]  A[0], A[6]  A[0]}}

In[ ]:= y = y /. ourCoeffs


Out[ ]=

(A[0] + Log[x]) + (A[0] + Log[x]) x + (A[0] + Log[x]) x2 + (A[0] + Log[x]) x3 +


(A[0] + Log[x]) x4 + (A[0] + Log[x]) x5 + (A[0] + Log[x]) x6 + O[x]7 

In[ ]:= y = y /. A[0]  0 (* Note that Log[x] represents the natural logarithm of x *)


Out[ ]=

Log[x] + Log[x] x + Log[x] x2 + Log[x] x3 + Log[x] x4 + Log[x] x5 + Log[x] x6 + O[x]7 

◆ We can easily see that the second independent solution is y2(x) = ln(x)/(1 - x).


◆ Verify two solutions to the given ODE: x(x - 1) y'' + (3 x - 1) y' + y = 0 .


In[ ]:= ClearAll[y];
myODE = x (x - 1) * y ''[x] + (3 x - 1) y '[x] + y[x]  0
Out[ ]=

y[x] + (- 1 + 3 x) y′ [x] + (- 1 + x) x y′′ [x]  0

In[ ]:= y1Soln[x_] = 1/(1 - x)
Out[ ]=
1/(1 - x)

In[ ]:= ODECheck = myODE /. y  y1Soln


Out[ ]=
1/(1 - x) + 2 (-1 + x) x/(1 - x)^3 + (-1 + 3 x)/(1 - x)^2  0

In[ ]:= FullSimplify[ODECheck]


Out[ ]=

True

In[ ]:= y2Soln[x_] = Log[x]/(1 - x)
Out[ ]=
Log[x]/(1 - x)

In[ ]:= ODECheck = myODE /. y  y2Soln


Out[ ]=
Log[x]/(1 - x) + (-1 + x) x (-1/((1 - x) x^2) + 2/((1 - x)^2 x) + 2 Log[x]/(1 - x)^3) + (-1 + 3 x) (1/((1 - x) x) + Log[x]/(1 - x)^2)  0

In[ ]:= FullSimplify[ODECheck]


Out[ ]=

True

◆ Method 2. The method of Reduction of Order.


In[ ]:= ClearAll[y];
myODE = x (x - 1) * y ''[x] + (3 x - 1) y '[x] + y[x]  0
Out[ ]=

y[x] + (- 1 + 3 x) y′ [x] + (- 1 + x) x y′′ [x]  0

◆ Substitute y2(x) = u(x)/(1 - x), where the form of u(x) is yet to be determined.


In[ ]:= yuSoln[x_] = u[x]/(1 - x)
Out[ ]=
u[x]/(1 - x)

In[ ]:= uODE = myODE /. y  yuSoln


Out[ ]=
u[x]/(1 - x) + (-1 + 3 x) (u[x]/(1 - x)^2 + u′[x]/(1 - x)) + (-1 + x) x (2 u[x]/(1 - x)^3 + 2 u′[x]/(1 - x)^2 + u′′[x]/(1 - x))  0

In[ ]:= u2 = FullSimplify[uODE]


Out[ ]=

u′ [x] + x u′′ [x]  0

◆ Let’s introduce a new variable t so that t = u' and t ' = u''.


In[ ]:= u2 /. { u '  t, u ''  t '}
Out[ ]=

t[x] + x t′ [x]  0

In[ ]:= DSolve[t[x] + x t′ [x]  0, t[x], x]


Out[ ]=
1
t[x]  
x

In[ ]:= DSolve[u '[x]  1 / x, u[x], x]


Out[ ]=

{{u[x]  C[1] + Log[x]}}

◆ Thus we have y2(x) = u(x)/(1 - x) = ln(x)/(1 - x).
◆ y1(x) = 1/(1 - x) and y2(x) = ln(x)/(1 - x) are linearly independent and thus form a basis of solutions of the given ODE.
In[ ]:= yGen = c1 * 1/(1 - x) + c2 * Log[x]/(1 - x);
FullSimplify[x * (x - 1) * D[yGen, {x, 2}] + (3 x - 1) * D[yGen, x] + (yGen)]  0
True
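◆ The linear independence of the two solutions can also be confirmed (an added check) with the built-in Wronskian command, which should return a nonzero expression equivalent to 1/(x (1 - x)^2):
In[ ]:= Wronskian[{1/(1 - x), Log[x]/(1 - x)}, x] // FullSimplify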

Example 7.10

(x^2 - x) y'' - x y' + y = 0

In[ ]:= ClearAll["Global`*"]

◆ Step 1. Rewrite the ODE in the form x^2 y'' + x b(x) y' + c(x) y = 0. Find b(x) and c(x).


y'' + (-x/(x (x - 1))) y' + (1/(x (x - 1))) y = 0  ⟹  x^2 y'' + x (-x/(x - 1)) y' + (x^2/(x (x - 1))) y = 0  ⟹  b(x) = -x/(x - 1),  c(x) = x^2/(x (x - 1))
b(0) = 0;  c(0) = 0
.

◆ Step 2. Expand b(x) and c(x) in power series. To apply the Frobenius Method, b(x) and
c(x) must be analytic at x = 0 . b(x) and c(x) are already polynomials so we do nothing
in this step.
◆ Step 3. Obtain the indicial equation. With b(0) = 0 and c(0) = 0, we have the indicial equation r(r - 1) = 0.
In[ ]:= ClearAll[r]; Solve[r (r - 1)  0, r]
Out[ ]=

{{r  0}, {r  1}}

◆ Step 4. Solving the indicial equation, we obtain the roots 0 and 1. This corresponds to Case (iii), with two roots differing by an integer. Notice that in this case we must label the roots so that r1 > r2 (here r1 = 1 and r2 = 0) in order to follow the recipe of the Frobenius method.
◆ For the first solution, we have r = r1 = 1. Based on the recipe of the Frobenius method,
we have:
In[ ]:= r = 1;
k = 6;
y = x ^ r (Sum[ a[n] x ^ n, {n, 0, k} ] + O[x] ^ (k + 1))
Out[ ]=

a[0] x + a[1] x2 + a[2] x3 + a[3] x4 + a[4] x5 + a[5] x6 + a[6] x7 + O[x]8

◆ Substitute this series into the given ODE: (x^2 - x) y'' - x y' + y = 0.
In[ ]:= deq = x * (x - 1) D[y, {x, 2}] - x * D[y, {x, 1}] + y  0
Out[ ]=

- 2 a[1] x + (a[1] - 6 a[2]) x2 + (4 a[2] - 12 a[3]) x3 +


(9 a[3] - 20 a[4]) x4 + (16 a[4] - 30 a[5]) x5 + (25 a[5] - 42 a[6]) x6 + O[x]7  0

◆ Write the equations that the coefficients must satisfy:
In[ ]:= coeffEqns = LogicalExpand[ deq ]
Out[ ]=

- 2 a[1]  0 && a[1] - 6 a[2]  0 && 4 a[2] - 12 a[3]  0 &&


9 a[3] - 20 a[4]  0 && 16 a[4] - 30 a[5]  0 && 25 a[5] - 42 a[6]  0

◆ Table listing the successive coefficients:


In[ ]:= succCoeffs = Table[ a[n], {n, 1, 6} ]


Out[ ]=

{a[1], a[2], a[3], a[4], a[5], a[6]}

◆ Solve for these coefficients in terms of a[0]:


In[ ]:= ourCoeffs = Solve[coeffEqns, succCoeffs]
Out[ ]=

{{a[1]  0, a[2]  0, a[3]  0, a[4]  0, a[5]  0, a[6]  0}}

◆ Substitute these coefficients into the original series to obtain the desired particular solution:


In[ ]:= y = y /. ourCoeffs
Out[ ]=

a[0] x + O[x]8 

◆ By choosing a[0] = 1, we have y1(x) = x.


◆ Let’s check the result.
In[ ]:= ClearAll[y];
myODE = x (x - 1) * y ''[x] - x * y '[x] + y[x]  0 ;
y1Soln[x_] = x;
ODECheck = myODE /. y  y1Soln;
FullSimplify[ODECheck]
Out[ ]=

True

◆ Let’s find the second independent solution y2(x) by the method of Reduction of Order.
In[ ]:= yuSoln[x_] = u[x] * x;
uODE = FullSimplify[ myODE /. y  yuSoln]
Out[ ]=

x ((- 2 + x) u′ [x] + (- 1 + x) x u′′ [x])  0

◆ Let’s introduce a new variable t so that t = u' and t ' = u''.


In[ ]:= tODE = FullSimplify[uODE /. { u '  t, u ''  t '}]
Out[ ]=

x ((- 2 + x) t[x] + (- 1 + x) x t′ [x])  0

In[ ]:= DSolve[tODE, t[x], x]


Out[ ]=
(1 - x) 1
t[x]  
x2

◆ Let’s take C[1] = -1. So we have t(x) = -(1 - x)/x^2 = u'(x), from which we can find u[x].


In[ ]:= DSolveu '[x]  - (1 - x)  x2 , u[x], x


Out[ ]=
{{u[x]  1/x + C[1] + Log[x]}}
◆ Take C[1] = 0. So we have u(x) = 1/x + ln(x), and
y2(x) = u(x) y1(x) = (1/x + ln(x)) x = 1 + x ln(x)
y2(x) = u(x) y1(x) =  x1 + ln(x) x = 1 + x ln(x)


◆ Let’s check the second solution.
In[ ]:= ClearAll[y];
myODE = x (x - 1) * y ''[x] - x * y '[x] + y[x]  0 ;
y2Soln[x_] = x * Log[x] + 1;
ODECheck = myODE /. y  y2Soln;
FullSimplify[ODECheck]
Out[ ]=

True

◆ y1(x) = x and y2(x) = 1 + x ln(x) are linearly independent and y2(x) has a logarithmic
term, thus they constitute a basis of solutions for positive x.
In[ ]:= yGen = c1 * x + c2 * (1 + x * Log[x]);
FullSimplifyx2 - x * D[yGen, {x, 2}] - x * D[yGen, x] + yGen  0
True
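◆ As in Example 7.9, the Wronskian gives a quick independence check (our addition); it should simplify to -1 + x, which is nonzero away from x = 1:
In[ ]:= FullSimplify[Wronskian[{x, 1 + x * Log[x]}, x]]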

Summary
After completing this chapter, you should be able to
◼ use Mathematica to find the power series representation of a function.
◼ recognise and work with some higher transcendental functions of mathematics.
◼ develop SOPs to solve ODEs by the power series method.
◼ develop SOPs to solve ODEs by the Frobenius method.
◼ develop the habit of always checking your solutions for quality assurance.

Week 8: Systems of Linear Equations
How to solve systems of linear equations?

Table of Contents
1. Solving the Systems of Linear Equations
1.1. Example 8.1: The Solve Command
1.2. Example 8.2: The LinearSolve Command
1.3. Example 8.3: Gaussian Elimination
1.4. Example 8.4: Gauss-Jordan Elimination
2. Summary

Commands list
◼ Column
◼ Solve
◼ MatrixForm
◼ FullSimplify
◼ LinearSolve
◼ ArrayFlatten
◼ Normal
◼ CoefficientArrays
◼ RowReduce

Prerequisite: How to Get Parts of a Matrix


The Wolfram Language has many matrix operations that support operations such as building,
computing, and visualizing matrices. It also has a rich language for picking out and extracting
parts of matrices.

How to | Get Parts of a Matrix:


https://reference.wolfram.com/language/howto/GetPartsOfAMatrix.html


Example 8.1: The Solve Command


Definition 8.1: System of Linear Equations
A system of linear equations is a collection of equations of the form
.

a11 x1 + a12 x2 + a13 x3 + ⋯ + a1 n xn = b1


.

a21 x1 + a22 x2 + a23 x3 + ⋯ + a2 n xn = b2


.

a31 x1 + a32 x2 + a33 x3 + ⋯ + a3 n xn = b3


.

⋮ ⋮ ⋮ ⋮
am 1 x1 + am 2 x2 + am 3 x3 + ⋯ + am n xn = bm
.

Definition 8.2: Consistent and Inconsistent Linear System


If a linear system has at least one solution, then we say that it is consistent. If not,
inconsistent.
.

Find all solutions to the consistent system of linear equations using the Solve command:
x1 + 2 x2 - x3 + 3 x5 = 7
x2 - 4 x3 + x5 = - 2
x4 - 2 x5 = 1

◆ Step 1. First, set up the system of equations:


In[ ]:= ClearAll["Global`*"]
sys = {x1 + 2 x2 - x3 + 3 x5  7, x2 - 4 x3 + x5  - 2, x4 - 2 x5  1};
Column[sys]
Out[ ]=
x1 + 2 x2 - x3 + 3 x5  7
x2 - 4 x3 + x5  - 2
x4 - 2 x5  1

◆ Step 2. Set the leading and free variables. In this system, x1, x2, x4 are leading variables and x3, x5 are free variables. Therefore,
In[ ]:= x3 = s1 ;
x5 = s2 ;

◆ Step 3. Looking at the given system of equations, it is apparent that the easiest way to
start is at the bottom. Therefore, apply the back-substitution method.


In[ ]:= ? Solve


Out[ ]=

Symbol

Solve[expr, vars] attempts to solve the system expr of equations or inequalities for the variables vars.

Solve[expr, vars, dom] solves over the domain dom. Common choices of dom are Reals, Integers, and Complexes.

◆ Substituting x5 into the bottom equation and solving it for x4 gives:


In[ ]:= Solve[x4 - 2 x5  1, x4]
Out[ ]=

{{x4  1 + 2 s2 }}

◆ So, x4 = 1 + 2 s2. Solving the next equation up for x2 by substituting values for x3 and x5
gives:
In[ ]:= x4 = 1 + 2 s2 ;
Solve[ x2 - 4 x3 + x5  - 2, x2]
Out[ ]=

{{x2  - 2 + 4 s1 - s2 }}

◆ So, x2 = -2 + 4 s1 - s2. Finally, substituting x2, x3 and x5 into the top equation gives:
In[ ]:= x2 = - 2 + 4 s1 - s2 ;
Solve[x1 + 2 x2 - x3 + 3 x5  7, x1]
Out[ ]=

{{x1  11 - 7 s1 - s2 }}

◆ So, x1 = 11 - 7 s1 - s2 .
◆ Step 4. Verify the solution.
In[ ]:= Clear[x1, x2, x4]
soln = Solve[sys, {x1, x2, x4}]
Out[ ]=

{{x1  11 - 7 s1 - s2 , x2  - 2 + 4 s1 - s2 , x4  1 + 2 s2 }}

In[ ]:= FullSimplify[sys /. soln〚1〛]


Out[ ]=

{True, True, True}

◆ Hence, the general solution for given system of linear equations is:
x1 = 11 - 7 s1 - s2
x2 = -2 + 4 s1 - s2
x3 = s1
x4 = 1 + 2 s2
x5 = s2


where s1 and s2 are free parameters that can be any real numbers. Remember that each distinct choice of the free parameters gives a new particular solution, so the given system has infinitely many solutions.
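◆ For instance (an added illustration), setting both free parameters to zero picks out one particular solution:
In[ ]:= soln /. {s1  0, s2  0}
Out[ ]=
{{x1  11, x2  -2, x4  1}}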

Example 8.2: The LinearSolve Command


Solve the following system of linear equations using command LinearSolve:
x-2y+z = 0
2y-8z = 8
-4 x + 5 y + 9 z = -9

◆ In a system of linear equations the coefficients change, but the variables do not. Therefore, the coefficients can simply be transferred to a matrix, which can be thought of as a rectangular table of numbers.
◆ Step 1. Construct the coefficient matrix of the given system.
ClearAll["Global`*"]
A = {{1, - 2, 1}, {0, 2, - 8}, {- 4, 5, 9}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -2 1
0 2 -8
-4 5 9

◆ Step 2. Construct a column matrix that contains all the constant terms on the right-hand-
side of each equation.
In[ ]:= b = {0, 8, - 9}; MatrixForm[b]

Out[ ]//MatrixForm=
0
8
-9

◆ Step 3. Solve the system using LinearSolve command.


In[ ]:= ? LinearSolve
Out[ ]=

Symbol

LinearSolve[m, b] finds an x that solves the matrix equation m.x == b.

LinearSolve[m] generates a LinearSolveFunction[…] that can be applied repeatedly to different b.


In[ ]:= LinearSolve[A, b]


Out[ ]=

{29, 16, 3}

◆ Step 4. Verify the result.


In[ ]:= sys = {x - 2 y + z  0, 2 y - 8 z  8, - 4 x + 5 y + 9 z  - 9}; Column[sys]
Out[ ]=
x-2y+z  0
2y-8z  8
-4 x + 5 y + 9 z  -9

In[ ]:= FullSimplify[sys /. {x  29, y  16, z  3}]


Out[ ]=

{True, True, True}

◆ Hence, the unique solution for the given consistent system of linear equations is:
x = 29
y = 16
z=3
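◆ Since the coefficient matrix here is invertible, the same solution can also be obtained (an alternative we add for comparison) by applying the inverse matrix, although LinearSolve is generally the more efficient choice:
In[ ]:= Inverse[A].b
Out[ ]=
{29, 16, 3}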

Example 8.3: Gaussian Elimination


Solve the following system of linear equations using Gaussian elimination:
2 x1 - 2 x2 - 6 x3 + x4 = 3
- x1 + x2 + 3 x3 - x4 = - 3
x1 - 2 x2 - x3 + x4 = 2
.

Definition 8.3: Augmented Matrix


When a matrix contains all the coefficients of a linear system, including the constant terms on the right
side of each equation, it is called an augmented matrix. Augmented matrices include a vertical line
separating the left and right sides of the equation.

◆ Step 1. Construct the augmented matrix of the system, AugMat= [A | b].


◆ Construct the coefficient matrix.
In[ ]:= ClearAll["Global`*"]
A = {{2, - 2, - 6, 1}, {- 1, 1, 3, - 1}, {1, - 2, - 1, 1}}; A // MatrixForm
Out[ ]//MatrixForm=
2 -2 -6 1
-1 1 3 -1
1 -2 -1 1


◆ Construct the matrix of constant terms on the RHS (right-hand-side).


In[ ]:= b = {3, - 3, 2}; b // MatrixForm
Out[ ]//MatrixForm=
3
-3
2

◆ Augmented matrix:
In[ ]:= AugMat = {{2, - 2, - 6, 1, 3}, {- 1, 1, 3, - 1, - 3}, {1, - 2, - 1, 1, 2}}; AugMat // MatrixForm
Out[ ]//MatrixForm=
2 -2 -6 1 3
-1 1 3 -1 -3
1 -2 -1 1 2
.

Definition 8.4: Elementary Row Operations


1. Interchange two rows
2. Multiply a row by nonzero constant
3. Add a multiple of one row to another row
.

Definition 8.5: Gaussian Elimination, Echelon Form, Leading Term


The procedure of reducing the matrix to the echelon form (or row echelon form) using the elementary
row operations is known as Gaussian elimination.
.

A matrix is in echelon form if


(a) Every leading term is in a column to the left of the leading term of the row below it.
(b) Any zero rows are at the bottom of the matrix.
.

where the leading term of a row is defined as the leftmost nonzero term in that row.
.

Definition 8.6: Pivot Positions, Pivot Columns, Pivot


For a matrix in echelon form, the pivot positions are those that contain a leading term. The pivot
columns are the columns that contain pivot positions, and a pivot is a nonzero number in a pivot
position.

◆ Step 2. Apply elementary row operations to transform the augmented matrix to row
echelon form.
◆ Identify pivot position for Row 1. R1 ↔ R3


In[ ]:= AugMat〚{1, 3}〛 = AugMat〚{3, 1}〛; AugMat // MatrixForm


Out[ ]//MatrixForm=
1 -2 -1 1 2
-1 1 3 -1 -3
2 -2 -6 1 3

◆ Eliminate the coefficients down the first column below the pivot position, a21 and a31, by transforming them to zero. R1 + R2 → R2 and -2 R1 + R3 → R3.
In[ ]:= AugMat〚2〛 = AugMat〚1〛 + AugMat〚2〛;
AugMat〚3〛 = - 2 * AugMat〚1〛 + AugMat〚3〛; AugMat // MatrixForm
Out[ ]//MatrixForm=
1 -2 -1 1 2
0 -1 2 0 -1
0 2 -4 -1 -1

◆ Eliminate the coefficient of a32 below the pivot position in the second column:
2 R2 + R3 → R3 .
In[ ]:= AugMat〚3〛 = 2 * AugMat〚2〛 + AugMat〚3〛; AugMat // MatrixForm
Out[ ]//MatrixForm=
1 -2 -1 1 2
0 -1 2 0 -1
0 0 0 -1 -3

◆ The matrix is now in row echelon form.


◆ Step 3. Interpret the result of Step 2 and find all solutions.
◆ The corresponding echelon system is:
In[ ]:= sys = {x1 - 2 x2 - x3 + x4  2, - x2 + 2 x3  - 1, - x4  - 3}; Column[sys]
Out[ ]=
x1 - 2 x2 - x3 + x4  2
- x2 + 2 x3  - 1
- x4  - 3

◆ Apply the back-substitution procedure from Example 8.1. Here, x3 is a free variable.
In[ ]:= Solve[sys /. x3  s1 , {x1, x2, x4}]
Out[ ]=

{{x1  1 + 5 s1 , x2  1 + 2 s1 , x4  3}}

◆ Step 4. Verify the solution.


In[ ]:= sys1 = {2 x1 - 2 x2 - 6 x3 + x4  3, - x1 + x2 + 3 x3 - x4  - 3, x1 - 2 x2 - x3 + x4  2};

In[ ]:= FullSimplify[sys1 /. {x1  1 + 5 s1 , x2  1 + 2 s1 , x3  s1 , x4  3}]


Out[ ]=

{True, True, True}


In[ ]:= LinearSolve[A, b]


Out[ ]=

{1, 1, 0, 3}

In[ ]:= FullSimplify[sys1 /. {x1  1, x2  1, x3  0, x4  3}]


Out[ ]=

{True, True, True}

◆ Hence, the given system of linear equations is consistent and has the general solution:

x1 = 1 + 5 s1
x2 = 1 + 2 s1
x3 = s1
x4 = 3

where s1 can be any real number.
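◆ The whole elimination can also be compressed into a single command (a shortcut we add here): RowReduce takes the augmented matrix straight to reduced row echelon form, where the non-pivot third column marks x3 as the free variable:
In[ ]:= RowReduce[AugMat] // MatrixForm
Out[ ]//MatrixForm=
1 0 -5 0 1
0 1 -2 0 1
0 0 0 1 3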

Example 8.4: Gauss-Jordan Elimination


Solve the following system of linear equations using Gauss-Jordan elimination:
5 x + 2 y + 11 z = 4
7x+3y+4z = 1
12 x + 5 y + 15 z = 6

◆ Step 1. Construct the augmented matrix of the system, AugMat= [A|b].


In[ ]:= ClearAll["Global`*"]
sys = {5 x + 2 y + 11 z  4, 7 x + 3 y + 4 z  1, 12 x + 5 y + 15 z  6}; Column[sys]
Out[ ]=
5 x + 2 y + 11 z  4
7x+3y+4z  1
12 x + 5 y + 15 z  6

◆ Coefficient matrix A:
In[ ]:= A = Normal[CoefficientArrays[sys, {x, y, z}]]〚2〛; MatrixForm[A]
Out[ ]//MatrixForm=
5 2 11
7 3 4
12 5 15

◆ Column matrix b with right-hand-side constants:


In[ ]:= b = {{4}, {1}, {6}}; MatrixForm[b]


Out[ ]//MatrixForm=
4
1
6

◆ Augmented matrix:
In[ ]:= AugMat = ArrayFlatten[{{A, b}}]; MatrixForm[AugMat]
Out[ ]//MatrixForm=
5 2 11 4
7 3 4 1
12 5 15 6
.

Definition 8.7: Gauss-Jordan Elimination, Reduced Echelon Form


The procedure of reducing the matrix to the reduced echelon form (or reduced row echelon form) using
the elementary row operations is known as Gauss-Jordan elimination.
.

A matrix is in reduced echelon form if


(a) It is in echelon form.
(b) All pivot positions contain a 1.
(c) The only nonzero term in a pivot column is in the pivot position.
.

◆ Step 2. Apply elementary row operations to transform the augmented matrix to


reduced row echelon form.
(1/5) R1 → R1
-7 R1 + R2 → R2
-12 R1 + R3 → R3
5 R2 → R2
R1 - (2/5) R2 → R1
-(1/5) R2 + R3 → R3
R1 - 10 R3 → R1
R2 + 23 R3 → R2


In[ ]:= AugMat〚1〛 = (1/5) * AugMat〚1〛;
AugMat〚2〛 = -7 * AugMat〚1〛 + AugMat〚2〛;
AugMat〚3〛 = -12 * AugMat〚1〛 + AugMat〚3〛;
AugMat〚2〛 = 5 * AugMat〚2〛;
AugMat〚1〛 = AugMat〚1〛 - (2/5) * AugMat〚2〛;
AugMat〚3〛 = -(1/5) * AugMat〚2〛 + AugMat〚3〛;
AugMat〚1〛 = AugMat〚1〛 - 10 * AugMat〚3〛;
AugMat〚2〛 = AugMat〚2〛 + 23 * AugMat〚3〛;
AugMat // MatrixForm
AugMat // MatrixForm
Out[ ]//MatrixForm=
1 0 25 0
0 1 - 57 0
0 0 0 1

◆ The matrix is now in reduced row echelon form. However, the third row reveals an inconsistency: since 0 = 1 can never hold, the given system of equations has no solution.
◆ Step 3. Verify the result.
In[ ]:= RREF = RowReduce[ArrayFlatten[{{A, b}}]]; RREF // MatrixForm
Out[ ]//MatrixForm=
1 0 25 0
0 1 - 57 0
0 0 0 1

In[ ]:= AugMat  RREF


Out[ ]=

True

In[ ]:= LinearSolve[A, b]

LinearSolve: Linear equation encountered that has no solution.

Out[ ]=

LinearSolve[{{5, 2, 11}, {7, 3, 4}, {12, 5, 15}}, {{4}, {1}, {6}}]

Considering the last row, the system is inconsistent, i.e., has no solution.
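◆ One more cross-check (our addition): Solve returns an empty list for an inconsistent system, confirming that no solution exists.
In[ ]:= Solve[sys, {x, y, z}]
Out[ ]=
{}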

Summary
After completing this chapter, you should be able to
◼ develop SOPs to solve systems of linear equations using Wolfram Mathematica.
◼ be familiar with the list form and matrix form representations of data in Mathematica.


◼ practice Gaussian elimination and Gauss-Jordan elimination in Mathematica.


◼ develop the habit of always checking your solutions for quality assurance.

Week 9: Matrix Operations and Inverse
Properties of Matrix Operations and Inverse

Table of Contents
1. Properties of Matrix Algebra
1.1. Example 9.1: Matrix Addition and Scalar Multiplication
1.2. Example 9.2: Matrix Multiplication
1.3. Example 9.3: Transpose of a Matrix
2. Example 9.4: Inverse of a Matrix
3. Summary

Commands list
◼ Table
◼ Dimensions
◼ ConstantArray
◼ RandomInteger
◼ RandomReal
◼ Do
◼ For
◼ Sum
◼ SeedRandom
◼ UpperTriangularize
◼ LowerTriangularize
◼ Dot
◼ Transpose
◼ Inverse

Prerequisite: How to Create a Matrix


Matrices are represented in the Wolfram Language with lists. They can be entered directly
with the { } notation, constructed from a formula, or imported from a data file. The Wolfram


Language also has commands for creating diagonal matrices, constant matrices, and other
special matrix types.

How to | Create a Matrix:


https://reference.wolfram.com/language/howto/CreateAMatrix.html

Properties of Matrix Algebra


Example 9.1: Matrix Addition | Scalar Multiplication
Definition 9.1: Addition, Scalar Multiplication of Matrices
Let c be a scalar, and let

a11 a12 ⋯ a1 m b11 b12 ⋯ b1 m


a21 a22 ⋯ a2 m b21 b22 ⋯ b2 m
A= and B=
⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮
an1 an2 ⋯ anm bn1 bn2 ⋯ bnm

be n × m matrices. Then addition and scalar multiplication of matrices are defined as follows:

(a11 + b11 ) (a12 + b12 ) ⋯ (a1 m + b1 m )


(a21 + b21 ) (a22 + b22 ) ⋯ (a2 m + b2 m )
Addition: A+B =
⋮ ⋮ ⋱ ⋮
(an1 + bn1 ) (an2 + bn2 ) ⋯ (anm + bnm )

ca11 ca12 ⋯ ca1 m


ca21 ca22 ⋯ ca2 m
Scalar Multiplication: cA =
⋮ ⋮ ⋱ ⋮
can1 can2 ⋯ canm

Construct two 3  3 matrices and prove one of the properties below by performing
corresponding matrix operations.
.

Theorem 9.1: Properties of Addition and Scalar Multiplication


Let s and t be scalars, A, B, and C be matrices of dimensions n  m, and 0nm be the n  m matrix with all
zero entries. Then
(a) A + B = B + A


(b) s (A + B) = sA + sB
(c) (s + t) A = sA + tA
(d) (A + B) + C = A + (B + C )
(e) (st) A = s (tA)
(f) A + 0nm = A
.

 Note that two matrices can be equal if they have the same dimensions and if their corresponding entries
are equal.

◆ 2nd property is chosen for this example.

◆ Step 1. Construct two 3  3 matrices.


In[ ]:= ClearAll["Global`*"]
A = Tableai,j , {i, 3}, {j, 3}; MatrixForm[A]
Out[ ]//MatrixForm=
a1,1 a1,2 a1,3
a2,1 a2,2 a2,3
a3,1 a3,2 a3,3

In[ ]:= B = Tablebi,j , {i, 3}, {j, 3}; MatrixForm[B]


Out[ ]//MatrixForm=

b1,1 b1,2 b1,3


b2,1 b2,2 b2,3
b3,1 b3,2 b3,3

◆ Step 2. Check that they have the same size.


In[ ]:= Dimensions[A]  Dimensions[B]
Out[ ]=

True

Method 1. Use a Do loop to perform matrix operations.

◆ Step 3. Compute their sum A + B.


◆ Get the dimension of matrix A(or B).
In[ ]:= dim = Dimensions[A]
Out[ ]=

{3, 3}

◆ Initially, the new matrix sumAB will be a zero matrix whose elements will be replaced later.


In[ ]:= sumAB = ConstantArray[0, dim]; MatrixForm[sumAB]


Out[ ]//MatrixForm=
0 0 0
0 0 0
0 0 0

◆ Run a do loop to perform matrix addition.


In[ ]:= Do[sumAB〚i, j〛 = A〚i, j〛 + B〚i, j〛, {i, 1, dim〚1〛}, {j, 1, dim〚2〛}];
MatrixForm[sumAB]
Out[ ]//MatrixForm=

a1,1 + b1,1 a1,2 + b1,2 a1,3 + b1,3


a2,1 + b2,1 a2,2 + b2,2 a2,3 + b2,3
a3,1 + b3,1 a3,2 + b3,2 a3,3 + b3,3

◆ Step 4. Multiply the sum A + B by the scalar s.


◆ Assign a value of zero to the resulting matrix RHS (right-hand side).
In[ ]:= RHS = ConstantArray[0, dim]; RHS // MatrixForm
Out[ ]//MatrixForm=
0 0 0
0 0 0
0 0 0

◆ Run a do loop to perform scalar multiplication.


In[ ]:= Do[RHS〚i, j〛 = s * sumAB〚i, j〛, {i, 1, dim〚1〛}, {j, 1, dim〚2〛}];
MatrixForm[RHS]
Out[ ]//MatrixForm=
s (a1,1 + b1,1 ) s (a1,2 + b1,2 ) s (a1,3 + b1,3 )
s (a2,1 + b2,1 ) s (a2,2 + b2,2 ) s (a2,3 + b2,3 )
s (a3,1 + b3,1 ) s (a3,2 + b3,2 ) s (a3,3 + b3,3 )

Method 2. Use a For loop to perform matrix operations.

◆ Step 5. Find the scalar multiples of A and B, sA and sB using a for loop.
◆ Define new zero matrices whose elements will be replaced later.
sA = ConstantArray[0, dim];
sB = ConstantArray[0, dim];

◆ Run a for loop to perform scalar multiplication.


In[ ]:= For[i = 1, i ≤ 3, i ++,


For[j = 1, j ≤ 3, j ++,
{sA〚i, j〛 = s * A〚i, j〛, sB〚i, j〛 = s * B〚i, j〛}
]];
MatrixForm[sA]
Out[ ]//MatrixForm=
s a1,1 s a1,2 s a1,3
s a2,1 s a2,2 s a2,3
s a3,1 s a3,2 s a3,3

In[ ]:= MatrixForm[sB]


Out[ ]//MatrixForm=

s b1,1 s b1,2 s b1,3


s b2,1 s b2,2 s b2,3
s b3,1 s b3,2 s b3,3

◆ Step 6. Find the sum sA + sB using a for loop.


In[ ]:= LHS = ConstantArray[0, dim]; MatrixForm[LHS]
Out[ ]//MatrixForm=
0 0 0
0 0 0
0 0 0

In[ ]:= For[i = 1, i ≤ 3, i ++,


For[j = 1, j ≤ 3, j ++,
LHS〚i, j〛 = sA〚i, j〛 + sB〚i, j〛
]];
MatrixForm[LHS]
Out[ ]//MatrixForm=

s a1,1 + s b1,1 s a1,2 + s b1,2 s a1,3 + s b1,3


s a2,1 + s b2,1 s a2,2 + s b2,2 s a2,3 + s b2,3
s a3,1 + s b3,1 s a3,2 + s b3,2 s a3,3 + s b3,3

◆ Step 7. Check and verify the 2nd property of matrix addition and scalar multiplication.
In[ ]:= FullSimplify[RHS  LHS]
Out[ ]=

True

◆ Step 8. Verify the results.


In[ ]:= RHS  s * (A + B)
Out[ ]=

True

In[ ]:= LHS  s * A + s * B


Out[ ]=

True


Example 9.2: Matrix Multiplication


Definition 9.2: Matrix Multiplication
Let A be an n  k matrix and B = [ b1 b2 ⋯ bm ] a k  m matrix. We define the product
.

AB = [ Ab1 Ab2 ⋯ Abm ]


.

which is an n  m matrix.

cij = ai1 b1j + ai2 b2j + ⋯ + ain bnj = ∑_{k=1}^{n} aik bkj

 Note that for AB to exist, the number of columns of A must equal the number of rows of B.
.

Construct two matrices of dimensions 4  3 and 3  4, respectively, with random entries and
prove one of the properties below by performing corresponding matrix operations.
.

Theorem 9.2: Properties of Matrix Multiplication


Let s be a scalar, and let A, B, and C be matrices. Then each of the following holds in the cases where
the indicated operations are defined:
(a) A (BC ) = (AB) C

(b) A (B +C ) = AB +AC

(c) (A + B) C = AC + BC
(d) s (AB) = (sA) B = A (sB)
(e) AI = A
(f) In general, AB ≠ BA (even when the matrices have compatible dimensions)
.

Here I denotes an identity matrix of appropriate dimension.

◆ 6th property is chosen for this example.

◆ Step 1. Construct two matrices with appropriate dimensions and elements.


In[ ]:= ClearAll["Global`*"]
A = RandomInteger[{- 10, 10}, {4, 3}]; MatrixForm[A]
Out[ ]//MatrixForm=
3 -2 -1
- 10 6 - 6
-8 -7 2
-5 -2 -7


In[ ]:= B = RandomInteger[{- 10, 10}, {3, 4}]; MatrixForm[B]


Out[ ]//MatrixForm=
- 10 7 2 1
8 -1 -5 -1
1 -3 7 -1

◆ Since the dimension of A is 4  3 and the dimension of B is 3  4, their products give new matrices AB of dimension 4  4 and BA of dimension 3  3.
◆ Step 2. Calculate the product AB using a do loop and based on the given formula below:
cij = ai1 b1j + ai2 b2j + ⋯ + ain bnj = ∑_{k=1}^{n} aik bkj

◆ Get the dimension of matrix A and B.


In[ ]:= dimA = Dimensions[A];
dimB = Dimensions[B];

◆ Initially, the new matrix AB will be a zero matrix whose elements will be replaced later.
In[ ]:= AB = ConstantArray[0, {4, 4}];

◆ Run a do loop to perform matrix multiplication.


In[ ]:= Do[AB〚i, j〛 = Sum[(A〚i, k〛) * (B〚k, j〛), {k, 1, 3}], {i, 1, dimA〚1〛}, {j, 1, dimB〚2〛}];
MatrixForm[AB]
Out[ ]//MatrixForm=
- 47 26 9 6
142 - 58 - 92 - 10
26 - 55 33 - 3
27 - 12 - 49 4

◆ Step 3. Calculate the product BA using a for loop.


cij = ai1 b1j + ai2 b2j + ⋯ + ain bnj = ∑_{k=1}^{n} aik bkj

◆ Define a zero matrix BA.


In[ ]:= BA = ConstantArray[0, {3, 3}];

◆ Run a for loop based on the formula above.


In[ ]:= For[i = 1, i ≤ 3, i ++,
For[j = 1, j ≤ 3, j ++,
BA〚i, j〛 = Sum[(B〚i, k〛) * (A〚k, j〛), {k, 1, 4}]]];
MatrixForm[BA]
Out[ ]//MatrixForm=
- 121 46 - 35
79 15 - 5
- 18 - 67 38

◆ Step 4. Check and verify the 6th property.


In[ ]:= AB ≠ BA
Out[ ]=

True

◆ Step 5. Verify the results.


In[ ]:= AB  Dot[A, B]
Out[ ]=

True

In[ ]:= BA  Dot[B, A]


Out[ ]=

True

In[ ]:= A.B ≠ B.A


Out[ ]=

True

Example 9.3: Transpose of a Matrix


Definition 9.3: Transpose
The transpose of a matrix A is denoted by AT and results from interchanging the rows and columns of
A. Focusing on individual entries, the entry in row i and column j of A becomes the entry in row j and
column i of AT .
.

Construct upper/lower triangular matrix or matrices with random entries and prove one of the
properties below by performing corresponding matrix operations.
.

Theorem 9.3: Properties of Matrix Transposes


Let s be a scalar, A and B be n  m matrices, and C an m × k matrix. Then
(a) (AT)T = A

(b) (A + B)T = AT + BT
(c) (sA)T = sAT
(d) (AC )T = C T AT

◆ The 1st and 2nd properties are chosen for this example.


.

Definition 9.4: Upper and Lower Triangular Matrices


An n  n matrix A is upper triangular if the entries below the diagonal are all zero.
.


a11 a12 a13 ⋯ a1 n


0 a22 a23 ⋯ a2 n
A = 0 0 a33 ⋯ a3 n
⋮ ⋮ ⋮⋱ ⋮
0 0 0 ⋯ ann
.

Similarly, an n  n matrix A is lower triangular if the terms above the diagonal are all zero.
.

a11 0 0 ⋯ 0
a21 a22 0 ⋯ 0
A = a31 a32 a33 ⋯ 0
⋮ ⋮ ⋮ ⋱ ⋮
an 1 an 2 an 3 ⋯ ann
.

◆ Step 1. Construct two matrices with appropriate dimensions and elements.


In[ ]:= ClearAll["Global`*"]
SeedRandom[1];
A = UpperTriangularize[RandomInteger[{- 10, 10}, {4, 4}]]; A // MatrixForm
Out[ ]//MatrixForm=
- 5 - 10 - 3 - 10
0 - 7 - 10 - 10
0 0 -7 -2
0 0 0 6

In[ ]:= SeedRandom[2]; B = LowerTriangularize[RandomInteger[{- 10, 10}, {4, 4}]]; B // MatrixForm


Out[ ]//MatrixForm=
-7 0 0 0
7 -2 0 0
- 10 9 - 1 0
-6 -7 -6 2

◆ Step 2. Verify that (AT)T = A.
In[ ]:= At = Transpose [A]; At // MatrixForm
Out[ ]//MatrixForm=
-5 0 0 0
- 10 - 7 0 0
- 3 - 10 - 7 0
- 10 - 10 - 2 6

In[ ]:= Att = Transpose[At]; Att // MatrixForm


Out[ ]//MatrixForm=
- 5 - 10 - 3 - 10
0 - 7 - 10 - 10
0 0 -7 -2
0 0 0 6


In[ ]:= Att  A


Out[ ]=

True

In[ ]:= Transpose [Transpose[A]]  A


Out[ ]=

True

◆ Step 3. Verify that (A + B)T = AT + BT .


In[ ]:= RHS = Transpose[A + B]; RHS // MatrixForm
Out[ ]//MatrixForm=
- 12 7 - 10 - 6
- 10 - 9 9 -7
- 3 - 10 - 8 - 6
- 10 - 10 - 2 8

In[ ]:= LHS = Transpose[A] + Transpose[B]; LHS // MatrixForm


Out[ ]//MatrixForm=
- 12 7 - 10 - 6
- 10 - 9 9 -7
- 3 - 10 - 8 - 6
- 10 - 10 - 2 8

In[ ]:= RHS  LHS


Out[ ]=

True

Example 9.4: Inverse of a Matrix


Definition 9.5: Invertible Matrix
An n  n matrix A is invertible if there exists an n  n matrix B such that AB = In.
.

Definition 9.6: Identity Matrix


The identity matrix is an n  n matrix that contains only 1’s on its main diagonal and 0’s elsewhere.
.

1 0 0 ⋯ 0
0 1 0 ⋯ 0
In = 0 0 1 ⋯ 0
⋮ ⋮ ⋱ ⋮⋮
0 0 0 ⋯ 1
.

Construct invertible matrix or matrices with random entries and prove one of the properties


below.
.

Theorem 9.4: Properties of Matrix Inverse


Let A and B be invertible n  n matrices. Then
(a) If matrix A is invertible, then (A-1)-1 = A.
(b) If A and B are both invertible n  n matrices, then (AB)-1 = B-1 A-1.
(c) If A is invertible, then (AT)-1 = (A-1)T.

◆ 1st property is chosen for this example.

◆ Step 1. Construct a matrix with appropriate dimension and elements.


In[ ]:= ClearAll["Global`*"]
SeedRandom[1];
A = RandomReal[{0, 10}, {3, 3}]; A // MatrixForm
Out[ ]//MatrixForm=
8.17389 1.1142 7.89526
1.87803 2.41361 0.657388
5.42247 2.31155 3.96006

◆ Step 2. Check whether this matrix is invertible or not. The matrix is invertible since its
determinant is nonzero.
In[ ]:= Det[A] ≠ 0
Out[ ]=

True

Theorem 9.5: Condition for an Invertible Matrix


An n  n matrix A is invertible if and only if det(A) ≠ 0.

◆ Step 3. Find A-1.


◆ Construct the augmented matrix, [A | I3].
In[ ]:= AugMat = ArrayFlatten[{{A, IdentityMatrix[3]}}]; MatrixForm[AugMat]
Out[ ]//MatrixForm=
8.17389 1.1142 7.89526 1 0 0
1.87803 2.41361 0.657388 0 1 0
5.42247 2.31155 3.96006 0 0 1

◆ Transform the augmented matrix to reduced row echelon form.


In[ ]:= RREF = RowReduce[AugMat]; RREF // MatrixForm
Out[ ]//MatrixForm=
1 0. 0. - 1.04865 - 1.80522 2.39039
0 1 0. 0.505178 1.36229 - 1.23333
0 0 1 1.14102 1.67668 - 2.3007


◆ On the left-hand-side is the identity matrix and on the right-hand-side is the inverse
matrix. Thus, we find that A-1 is:
In[ ]:= InvA = RREF〚All, 4 ;; 6〛; InvA // MatrixForm
Out[ ]//MatrixForm=
- 1.04865 - 1.80522 2.39039
0.505178 1.36229 - 1.23333
1.14102 1.67668 - 2.3007

◆ Step 4. Find (A-1)-1.
◆ Augmented matrix, [A-1 | I3].
In[ ]:= AugMat2 = ArrayFlatten[{{InvA, IdentityMatrix[3]}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
- 1.04865 - 1.80522 2.39039 1 0 0
0.505178 1.36229 - 1.23333 0 1 0
1.14102 1.67668 - 2.3007 0 0 1

◆ Reduced row echelon form:


In[ ]:= RREF2 = RowReduce[AugMat2]; RREF2 // MatrixForm
Out[ ]//MatrixForm=
1 0. 0. 8.17389 1.1142 7.89526
0 1 0. 1.87803 2.41361 0.657388
0 0 1 5.42247 2.31155 3.96006

◆ The right-hand side is the inverse matrix. Thus, (A-1)-1 is:
In[ ]:= RHS = RREF2〚All, 4 ;; 6〛; RHS // MatrixForm
Out[ ]//MatrixForm=
8.17389 1.1142 7.89526
1.87803 2.41361 0.657388
5.42247 2.31155 3.96006

◆ Step 5. Check and verify the 1st property.


In[ ]:= RHS  A
Out[ ]=

True

◆ Step 6. Verify the solution.


In[ ]:= RHS  Inverse[Inverse[A]]
Out[ ]=

True

In[ ]:= Inverse[Inverse[A]]  A


Out[ ]=

True
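◆ An additional sanity check (our addition): multiplying A by the computed inverse should reproduce the identity matrix up to machine-precision round-off; Chop removes the tiny numerical residues.
In[ ]:= Chop[A.InvA]  IdentityMatrix[3]
Out[ ]=
True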


Summary
After completing this chapter, you should be able to
◼ perform various standard matrix operations using Mathematica.
◼ write
simple programs that involve looping and conditional expressions in
Mathematica
◼ generate random numbers in Mathematica.

Week 10: LU Factorization and Determinants
Applications of Matrix Operations and Determinants

Table of Contents
1. Example 10.1: The LU Factorization
1.1. Method 1: LU Factorization using Row Operations
1.2. Method 2: LUDecomposition Command
2. Example 10.2: Determinant and Its Properties | Part 1
2.1. Method 1: The Shortcut Method
2.2. Method 2: The Cofactor Expansion
3. Example 10.3: Determinant and Its Properties | Part 2
3.1. Method 3: Row Operations to Compute the Determinant
4. Example 10.4: Applications of the Determinant
4.1. Cramer's Rule
4.2. Inverses from Determinants
5. Summary

Commands list
◼ LUDecomposition
◼ UpperTriangularize
◼ LowerTriangularize
◼ Det
◼ Times
◼ Diagonal
◼ Reverse
◼ Transpose
◼ Inverse
◼ Minors
◼ Cofactor


Prerequisite: Basic Matrix Operations


The Wolfram Language’s matrix operations handle both numeric and symbolic matrices,
automatically accessing large numbers of highly efficient algorithms. The Wolfram Language
uses state-of-the-art algorithms to work with both dense and sparse matrices, and incorporates
a number of powerful original algorithms, especially for high-precision and symbolic matrices.

Basic Matrix Operations:


https://reference.wolfram.com/language/guide/MatrixOperations.html
https://reference.wolfram.com/language/tutorial/LinearAlgebra.html#25697

Example 10.1: The LU Factorization


The LU factorization can be used to solve linear systems of equations in a convenient manner.
Solve the following linear system of equations using LU factorization algothithm.
x1 - x2 - 2 x3 = 2
-3 x1 + 2 x2 + x3 = 5
6 x1 + 11 x2 - 2 x3 = 1
.

Definition 10.1: LU Factorization


A nonsingular square matrix A is said to have an LU factorization if it can be written in the form:
.

A = LU
.

where L is a lower triangular matrix with 1’s on the diagonal and U is an upper triangular matrix in
echelon form.

◆ Step 1. Define coefficient matrix A and matrix b from the system expressed as Ax = b.
In[ ]:= ClearAll["Global`*"]
A = {{1, - 1, - 2}, {- 3, 2, 1}, {6, 11, - 2}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -1 -2
-3 2 1
6 11 - 2

In[ ]:= b = {2, 5, 1}; MatrixForm[b]


Out[ ]//MatrixForm=
2
5
1

◆ Step 2. Compute the LU factorization of A.


Method 1. LU factorization using Row Operations.

◆ Obtain U by the process of row reduction of A, and build up L one column at a time as
we transform A to echelon form. For this, first construct a lower triangular matrix L1,
where the • symbol represents a matrix entry that has not yet been determined.
In[ ]:= L1 = ConstantArray[, {3, 3}]; L1 // MatrixForm
Out[ ]//MatrixForm=
  
  
  

◆ Take the first column of A, divide each entry by the pivot (1), and use the resulting val-
ues to form the first column of L1.
In[ ]:= L1〚All, 1〛 = A〚All, 1〛 / 1; L1 // MatrixForm
Out[ ]//MatrixForm=
1  
-3  
6  

◆ Our goal is to construct the upper triangular matrix U1 by transforming A to echelon form. For this, set U1 equal to A and perform row operations to introduce zeros down the first column.
In[ ]:= U1 = A;
U1〚2〛 = U1〚2〛 + 3 U1〚1〛;
U1〚3〛 = U1〚3〛 - 6 U1〚1〛; U1 // MatrixForm
Out[ ]//MatrixForm=
1 -1 -2
0 -1 -5
0 17 10

◆ Take the second column of A, starting from the pivot entry (-1) down, and divide each
entry by the pivot. Use the resulting values to form the lower portion of the second col-
umn of L1.
In[ ]:= L1〚2 ;; 3, 2〛 = U1〚2 ;; 3, 2〛 / (-1); L1 // MatrixForm
Out[ ]//MatrixForm=
1  
-3 1 
6 - 17 

◆ Perform row operations to introduce zeros down the second column.


In[ ]:= U1〚3〛 = U1〚3〛 + 17 U1〚2〛; U1 // MatrixForm


Out[ ]//MatrixForm=
1 -1 -2
0 -1 -5
0 0 - 75

◆ Now we have finished with U1. The original matrix is in echelon form and upper
triangular.
◆ Finish filling in L1. Since L1 must be unit lower triangular, we put a 1 in the lower right
corner and fill in the remaining entries with 0’s.
In[ ]:= L1〚3, 3〛 = 1;
L1〚1, 2〛 = 0;
L1〚1 ;; 2, 3〛 = 0; L1 // MatrixForm
Out[ ]//MatrixForm=
1 0 0
-3 1 0
6 - 17 1

◆ Verify the results of Method 1 using standard matrix multiplication.


Dot[L1, U1]  A
Out[ ]=

True

Method 2. LU factorization using LUDecomposition command


.

Definition 10.2: LUDecomposition Command


LUDecomposition returns a list of three elements. The first element is a combination of upper and
lower-triangular matrices, the second element is a permutation vector specifying rows used for pivoting,
and the third element is an estimate of the condition number.

◆ Find the LU factorization of A using LUDecomposition command.


In[ ]:= ? LUDecomposition
Out[ ]=

Symbol

LUDecomposition[m] generates a representation of the LU decomposition of a square matrix m.

In[ ]:= {lu, p, c} = LUDecomposition[A];

◆ Compute the factors L2 and U2.


In[ ]:= L2 = LowerTriangularize[lu, - 1] + IdentityMatrix[3]; L2 // MatrixForm


Out[ ]//MatrixForm=
1 0 0
-3 1 0
6 - 17 1

In[ ]:= U2 = UpperTriangularize[lu]; U2 // MatrixForm


Out[ ]//MatrixForm=
1 -1 -2
0 -1 -5
0 0 - 75

◆ Examine the permutation vector. The permutation vector records which rows were interchanged while factoring the matrix; here {1, 2, 3} indicates that no rows were swapped.
In[ ]:= p
Out[ ]=

{1, 2, 3}

◆ Verify the results of Method 2 by permutting the rows of A.


In[ ]:= L2.U2  A〚p〛
Out[ ]=

True

◆ Step 3. Solve the given system of linear equations using the result of one of the methods
above.
◆ Since we have verified that A = LU, the system can be written as LUx = b. The first step
is to denote y = Ux, so that our system can be expressed as Ly = b.
◆ Solve the equation Ly = b:
In[ ]:= y = LinearSolve[L2, b〚p〛]
Out[ ]=

{2, 11, 176}

◆ Solve the equation y = Ux:


In[ ]:= x = LinearSolve[U2, y]
Out[ ]=

49 11 176
- , ,- 
25 15 75

◆ Step 4. Verify the solution.


In[ ]:= x  LinearSolve[A, b]
Out[ ]=

True
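◆ A practical aside (not one of the original steps): the point of factoring once is that the factorization can be reused for many right-hand sides. LinearSolve[A] returns a LinearSolveFunction that caches the decomposition and can be applied repeatedly to different b:
In[ ]:= f = LinearSolve[A]; f[b]
Out[ ]=
{-49/25, 11/15, -176/75}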


Example 10.2: Determinant and Its Properties | Part 1


Definition 10.3: Determinant of a 3  3 Matrix
Let A be the 3  3 matrix.
.
a11 a12 a13
A = a21 a22 a23
a31 a32 a33
.

Then the determinant of A is given by


.

det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a11 a23 a32 - a12 a21 a33 - a13 a22 a31

Construct a matrix (or matrices) of dimension 3  3. Compute its (or their) determinant(s) and prove the properties of your choice from the list below.
.

Theorem 10.1: Properties of the Determinant


Let A and B be a n×n square matrices.

1. A is invertible if and only if det(A) ≠ 0.


2. For n ≥ 1, we have det(In ) = 1.
3. If A is a triangular n×n matrix, then det(A) is the product of the terms along the diagonal.
4. det AT  = det (A)

5. If A has a row( or column) of zeros or if A has two identical rows(or columns), then det(A) = 0.
6. det(AB) = det(A)det(B)
7. Let A be an invertible matrix; then det(A-1) = 1/det(A)

8. In general, det(A + B) ≠ det(A) + det(B)

◆ 1st and 4th properties are chosen for this example.


.

Method 1. Use the Shortcut Method to compute the determinant.


.

Definition 10.4: The Shortcut Method for 33 Matrices


This method is also known as Rule of Sarrus.


 Note that the Shortcut Method will not work for n × n matrices with dimensions larger than 3×3.

◆ Step 1. Construct a non-invertible matrix for proving the 1st property.


In[ ]:= ClearAll["Global`*"]
A1 = {{- 3, 1, 2}, {5, 5, - 8}, {4, 2, - 5}}; MatrixForm[A1]
Out[ ]//MatrixForm=

-3 1 2
5 5 -8
4 2 -5

◆ Step 2. Duplicate the first two columns of the matrix to the right of the third column of
the original matrix.
In[ ]:= extraColumns = {A1〚All, 1〛, A1〚All, 2〛};
newA1 = Transpose[Join[Transpose[A1], extraColumns]]; newA1 // MatrixForm
Out[ ]//MatrixForm=

-3 1 2 -3 1
5 5 -8 5 5
4 2 -5 4 2

◆ Step 3.1. Now multiply the terms along each diagonal and then add or subtract the products according to the Rule of Sarrus.
In[ ]:= detA1 =
A1〚1, 1〛 * A1〚2, 2〛 * A1〚3, 3〛 +
A1〚1, 2〛 * A1〚2, 3〛 * A1〚3, 1〛 +
A1〚1, 3〛 * A1〚2, 1〛 * A1〚3, 2〛 -
A1〚1, 3〛 * A1〚2, 2〛 * A1〚3, 1〛 -
A1〚1, 1〛 * A1〚2, 3〛 * A1〚3, 2〛 -
A1〚1, 2〛 * A1〚2, 1〛 * A1〚3, 3〛
Out[ ]=

In[ ]:= detA12 = (- 3 * 5 * - 5) + (1 * - 8 * 4) + (2 * 5 * 2) - (2 * 5 * 4) - (- 3 * - 8 * 2) - (1 * 5 * - 5)


Out[ ]=

◆ Step 3.2. Use a For loop to avoid the repetitive typing.


◆ Here the Diagonal command extracts the elements along the desired diagonal, Reverse gives access to the back (anti-)diagonals, and Times @@ takes the product of the elements of each diagonal.
In[ ]:= detA13 = 0;
For[i = 0, i ≤ 2, i ++,
detA13 = detA13 + Times @@ Diagonal[newA1, i] - Times @@ Diagonal[Reverse[newA1], i]];
Print[detA13]
0
◆ Step 4. Verify the result and check the 1st property.


In[ ]:= detA1  detA12  detA13  Det[A1]
Out[ ]=

True

In[ ]:= Inverse[A1]

Inverse: Matrix {{-3, 1, 2}, {5, 5, -8}, {4, 2, -5}} is singular.

Out[ ]=

Inverse[{{- 3, 1, 2}, {5, 5, - 8}, {4, 2, - 5}}]

Method 2. Use the Cofactor Expansion to compute the determinant.

First of all, let’s focus on the concepts like minor and cofactor that are basis of the Cofactor
Expansion Method.
.

Definition 10.5: Minor, Cofactor


detMij ), the i,j minor of the matrix, is defined as the determinant of the Mij matrix, (n - 1) × (n - 1)

matrix that results from deleting the ith row and jth column of the original n × n matrix.

Cij , the i,j cofactor of the matrix, is defined by the formula:


.

Cij = (-1)^(i+j) det(Mij)


.

where detMij ) is the i,j minor of the matrix.


.

Theorem 10.2: Cofactor Expansions


Let A be the n×n matrix. Then
.

(a) det(A) = ai1 Ci1 +ai2 Ci2 +⋯ + ain Cin (Expand across row i)
.

(b) det(A) = a1j C1j + a2j C2j + ⋯ + anj Cnj (Expand down column j)


where Cij is denoted as the cofactor of aij .

◆ Step 1. Construct a 33 matrix for proving the 4th property.


In[ ]:= A2 = Array[Subscript[a, ##] &, {3, 3}]; A2 // MatrixForm
Out[ ]//MatrixForm=

a1,1 a1,2 a1,3


a2,1 a2,2 a2,3
a3,1 a3,2 a3,3

◆ Step 2.1. Compute the determinant of a matrix A2 using column expansion.


◆ Calculate the minors of matrix A2 across the 1st column.
.

Definition 10.6: Determinant of a 2  2 Matrix


The determinant of a 22 matrix is equal to the difference between the product of the elements along
the main diagonal and the product of the elements on the secondary diagonal.

In[ ]:= m11 = A2〚2, 2〛 * A2〚3, 3〛 - A2〚2, 3〛 * A2〚3, 2〛


Out[ ]=

- a2,3 a3,2 + a2,2 a3,3

In[ ]:= m21 = A2〚1, 2〛 * A2〚3, 3〛 - A2〚1, 3〛 * A2〚3, 2〛


Out[ ]=

- a1,3 a3,2 + a1,2 a3,3

In[ ]:= m31 = A2〚1, 2〛 * A2〚2, 3〛 - A2〚1, 3〛 * A2〚2, 2〛


Out[ ]=

- a1,3 a2,2 + a1,2 a2,3

◆ Use a For loop and the column-expansion formula given above to calculate detA21.
minorA2 = {{m11}, {m21}, {m31}};
detA21 = 0;
Fori = 1, i ≤ 3, i ++,
Forj = 1, j < 2, j ++,
detA21 = detA21 + A2〚i, j〛 * (- 1)i+j * minorA2〚i, j〛
detA21
Out[ ]=

(- a1,3 a2,2 + a1,2 a2,3 ) a3,1 - a2,1 (- a1,3 a3,2 + a1,2 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )

◆ Step 2.2. Compute the determinant of a matrix A2 using row expansion.


◆ Calculate the minors of matrix A2 across the 1st row using built-in command Minors.


In[ ]:= (mA2 = Minors[A2]) // MatrixForm


Out[ ]//MatrixForm=
- a1,2 a2,1 + a1,1 a2,2 - a1,3 a2,1 + a1,1 a2,3 - a1,3 a2,2 + a1,2 a2,3
- a1,2 a3,1 + a1,1 a3,2 - a1,3 a3,1 + a1,1 a3,3 - a1,3 a3,2 + a1,2 a3,3
- a2,2 a3,1 + a2,1 a3,2 - a2,3 a3,1 + a2,1 a3,3 - a2,3 a3,2 + a2,2 a3,3

◆ Calculate the cofactors using the corresponding formula.


In[ ]:= C11 = (- 1)1+1 mA2〚3, 3〛
Out[ ]=

- a2,3 a3,2 + a2,2 a3,3

In[ ]:= C12 = (- 1)1+2 mA2〚3, 2〛


Out[ ]=

a2,3 a3,1 - a2,1 a3,3

In[ ]:= C13 = (- 1)1+3 mA2〚3, 1〛


Out[ ]=

- a2,2 a3,1 + a2,1 a3,2

◆ Calculate detA22 using the formula for row expansion given above.
cofactor = Transpose[{{C11}, {C12}, {C13}}];
detA22 = 0;
For[i = 1, i < 2, i ++,
For[j = 1, j ≤ 3, j ++,
detA22 = detA22 + A2〚i, j〛 * cofactor〚i, j〛]]
detA22
Out[ ]=

a1,3 (- a2,2 a3,1 + a2,1 a3,2 ) + a1,2 (a2,3 a3,1 - a2,1 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )

◆ Step 3. Compute the determinant of AT using the built-in command Cofactor.


◆ Find the transpose of the A2.
In[ ]:= A2t = Transpose[A2]; A2t // MatrixForm
Out[ ]//MatrixForm=
a1,1 a2,1 a3,1
a1,2 a2,2 a3,2
a1,3 a2,3 a3,3

◆ To use Cofactor, first load the Combinatorica Package.


In[ ]:= Needs["Combinatorica`"]

◆ Calculate the detAT ) using the formula for row expansion.


In[ ]:= detA2t = 0;


For[i = 1, i < 2, i ++,
For[j = 1, j ≤ 3, j ++,
detA2t = detA2t + A2t〚i, j〛 * Cofactor[A2t, {i, j}]]]
detA2t
Out[ ]=

(- a1,3 a2,2 + a1,2 a2,3 ) a3,1 + a2,1 (a1,3 a3,2 - a1,2 a3,3 ) + a1,1 (- a2,3 a3,2 + a2,2 a3,3 )

◆ Step 4. Verify the results and check the 4th property.


FullSimplify[Det[A2]  detA21  detA22]
Out[ ]=

True

In[ ]:= FullSimplify[Det[A2t]  detA2t]


Out[ ]=

True

In[ ]:= FullSimplify[detA21  detA2t]


Out[ ]=

True

In[ ]:= FullSimplify[Det[A2t]  Det[A2]]


Out[ ]=

True

Example 10.3: Determinant and Its Properties | Part 2


Computing the determinant using row operations may be more efficient than cofactor
expansion. Construct a 4  4 matrix and find its determinant using properties below.
.

Theorem 10.3: Influence of Row Operations on Determinants


Let A be a square matrix.
1. Let B be produced by interchanging two rows of A. Then det (A) = - det (B).
2. Let B be produced by multiplying a row of A by a scalar c. Then det(A) = (1/c) det(B).
3. Let B be produced by adding a multiple of one row of A to another. Then det (A) = det (B).

This is also true if rows are replaced with columns.


.

Method 3. Use Row Operations to compute the determinant.

◆ Step 1. Construct a matrix with appropriate dimension and elements.


In[ ]:= ClearAll["Global`*"]


SeedRandom[123];
A = RandomInteger[{- 10, 10}, {4, 4}]; A // MatrixForm
Out[ ]//MatrixForm=
4 8 - 10 - 6
-3 -4 -3 0
1 -4 0 2
- 1 7 - 10 - 1

◆ Step 2. Convert the matrix A to echelon form using elementary row operations and
reduce it to triangular form.
◆ Further steps are separated to keep track of the effect of the row operations on the determi-
nant.
(1/4) R1 → R1 | det(A) = 4 det(A1)

In[ ]:= A1 = A;
A1〚1〛 = (1/4) A1〚1〛; A1 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
-3 -4 -3 0
1 -4 0 2
- 1 7 - 10 - 1

R2 + 3 R1 → R2 | det(A1 ) = det(A2 )

In[ ]:= A2 = A1;


A2〚2〛 = A2〚2〛 + 3 A2〚1〛; A2 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 2 - -
2 2
1 -4 0 2
- 1 7 - 10 - 1

R3 - R1 → R3 | det(A2) = det(A3)


In[ ]:= A3 = A2;


A3〚3〛 = A3〚3〛 - A3〚1〛; A3 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 2 - -
2 2
5 7
0 -6
2 2
-1 7 - 10 - 1

R4 + R1 → R4 | det(A3) = det(A4)

In[ ]:= A4 = A3;


A4〚4〛 = A4〚4〛 + A4〚1〛; A4 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 2 - -
2 2
5 7
0 -6
2 2
25 5
0 9 - -
2 2

(1/2) R2 → R2 | det(A4) = 2 det(A5)

In[ ]:= A5 = A4;


A5〚2〛 = (1/2) A5〚2〛; A5 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 1 - -
4 4
5 7
0 -6
2 2
25 5
0 9 - -
2 2

R3 + 6 R2 → R3 | det(A5 ) = det(A6 )

In[ ]:= A6 = A5;


A6〚3〛 = A6〚3〛 + 6 A6〚2〛; A6 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 1 - -
4 4
0 0 - 29 - 10
25 5
0 9 - -
2 2

R4 - 9 R2 → R4 | det(A6) = det(A7)


In[ ]:= A7 = A6;


A7〚4〛 = A7〚4〛 - 9 A7〚2〛; A7 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 1 - -
4 4
0 0 - 29 - 10
139 71
0 0
4 4

-(1/29) R3 → R3 | det(A7) = -29 det(A8)

In[ ]:= A8 = A7;


A8〚3〛 = -(1/29) A8〚3〛; A8 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 1 - -
4 4
10
0 0 1
29
139 71
0 0
4 4

R4 - (139/4) R3 → R4 | det(A8) = det(A9)

In[ ]:= A9 = A8;


A9〚4〛 = A9〚4〛 - (139/4) A9〚3〛; A9 // MatrixForm
Out[ ]//MatrixForm=
5 3
1 2 - -
2 2
21 9
0 1 - -
4 4
10
0 0 1
29
669
0 0 0
116

◆ Step 3. Since A9 is a triangular form of A, find its determinant using the property of
triangular matrices: the determinant of a triangular matrix equals the product of its
diagonal entries (7th property from the previous example).
In[ ]:= detA9 = 1;
For[i = 1, i ≤ 4, i ++,
detA9 = detA9 * A9〚i, i〛];
detA9
Out[ ]=
669/116


◆ Verify the result.


In[ ]:= detA9  Det[A9]
Out[ ]=

True

◆ Step 4. Substitute the value of det(A9) and find det(A) using a back substitution.
In[ ]:= detA = 4 detA1;
detA1 = detA2;
detA2 = detA3;
detA3 = detA4;
detA4 = 2 detA5;
detA5 = detA6;
detA6 = detA7;
detA7 = - 29 detA8;
detA8 = detA9;
Print[detA]

- 1338

◆ Step 5. Verify the final result using built-in command.


In[ ]:= Det[A]  detA
Out[ ]=

True

Example 10.4: Applications of the Determinant


Cramer’s Rule

Solve the following system of linear equations using Cramer’s rule.


x1 - 2 x2 + x3 = 0
2 x2 - 8 x3 = 8
- 4 x1 + 5 x2 + 9 x3 = - 9
.

Theorem 10.4: Cramer’s Rule


Let A be an invertible n × n matrix. Then the components of the unique solution to Ax = b are given by

xi = det(Ai) / det(A)   for i = 1, 2, … , n
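◆ For reference, Cramer’s rule can be wrapped into a short reusable function; a sketch, where cramerSolve is a hypothetical helper name rather than a built-in command.
In[ ]:= cramerSolve[A_?SquareMatrixQ, b_List] :=
  Module[{Ai}, Table[Ai = A; Ai〚All, i〛 = b; Det[Ai] / Det[A], {i, Length[A]}]]
cramerSolve[{{1, - 2, 1}, {0, 2, - 8}, {- 4, 5, 9}}, {0, 8, - 9}]
Out[ ]=
{29, 16, 3}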

◆ Step 1. The system is equivalent to Ax = b, where A and b are defined as follows:


In[ ]:= ClearAll["Global`*"]


A = {{1, - 2, 1}, {0, 2, - 8}, {- 4, 5, 9}}; A // MatrixForm
Out[ ]//MatrixForm=
1 -2 1
0 2 -8
-4 5 9

In[ ]:= b = {0, 8, - 9}; b // MatrixForm


Out[ ]//MatrixForm=
0
8
-9

◆ Step 2. Compute the determinant of the coefficient matrix A.


In[ ]:= detA = Det[A]
Out[ ]=

◆ Step 3. Replace the 1st, 2nd, and 3rd column values with the values of the answer
column b and construct three new matrices, respectively.
In[ ]:= A1 = A;
A1〚All, 1〛 = b; A1 // MatrixForm
Out[ ]//MatrixForm=
0 -2 1
8 2 -8
-9 5 9

In[ ]:= A2 = A;
A2〚All, 2〛 = b; A2 // MatrixForm
Out[ ]//MatrixForm=
1 0 1
0 8 -8
-4 -9 9

In[ ]:= A3 = A;
A3〚All, 3〛 = b; A3 // MatrixForm
Out[ ]//MatrixForm=
1 -2 0
0 2 8
-4 5 -9

◆ Step 4. Compute their determinants.


In[ ]:= detA1 = Det[A1]
Out[ ]=

58


In[ ]:= detA2 = Det[A2]


Out[ ]=

32

In[ ]:= detA3 = Det[A3]


Out[ ]=

◆ Step 5. Find the solution using the formula for Cramer’s rule.
In[ ]:= x1 = detA1 / detA
Out[ ]=

29

In[ ]:= x2 = detA2 / detA
Out[ ]=

16

In[ ]:= x3 = detA3 / detA
Out[ ]=
3

In[ ]:= x = {x1, x2, x3}


Out[ ]=

{29, 16, 3}

◆ Step 6. Verify the obtained solution.


In[ ]:= A.x  b
Out[ ]=

True

In[ ]:= LinearSolve[A, b]


Out[ ]=

{29, 16, 3}

In[ ]:= %x


Out[ ]=

True

Inverses from Determinants

Find the inverse of the following matrix using its determinant.


3 1 0
-1 2 1
0 -1 2
.

Theorem 10.5: Inverse from Determinant


If A is an invertible matrix, then

A⁻¹ = (1 / det(A)) adj(A)

Definition 10.7: Adjoint Matrix


The formal adjoint of A is the transpose of the matrix formed by the cofactors, C, of A and is denoted
by adj(A).
adj(A) = Cᵀ =
C11   C21   ⋯   Cn1
C12   C22   ⋯   Cn2
⋮     ⋮     ⋱   ⋮
C1n   C2n   ⋯   Cnn
.
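◆ A useful sanity check, sketched below: the adjoint always satisfies A.adj(A) = det(A) I. This uses the same Combinatorica Cofactor command that is loaded in the steps that follow.
In[ ]:= Needs["Combinatorica`"]
A = {{3, 1, 0}, {- 1, 2, 1}, {0, - 1, 2}};
adjA = Transpose[Table[Cofactor[A, {i, j}], {i, 3}, {j, 3}]];
A.adjA == Det[A] IdentityMatrix[3]
Out[ ]=
True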

◆ Step 1. Construct the matrix A.


In[ ]:= ClearAll["Global`*"]
A = {{3, 1, 0}, {- 1, 2, 1}, {0, - 1, 2}}; A // MatrixForm
Out[ ]//MatrixForm=
3 1 0
-1 2 1
0 -1 2

◆ Step 2. Find the determinant of A. Since det(A)≠0, the matrix A is invertible.


In[ ]:= detA = Det[A]
Out[ ]=

17

◆ Step 3. Compute the nine cofactors for A as a matrix.


In[ ]:= Needs["Combinatorica`"]


cofactorA = ConstantArray[0, {3, 3}];
For[i = 1, i ≤ 3, i ++,
For[j = 1, j ≤ 3, j ++,
cofactorA〚i, j〛 = Cofactor[A, {i, j}]]]
cofactorA // MatrixForm
Out[ ]//MatrixForm=
5 2 1
-2 6 3
1 -3 7

◆ Step 4. Define the adjoint matrix corresponding to A.


In[ ]:= adjA = Transpose[cofactorA]; adjA // MatrixForm
Out[ ]//MatrixForm=
5 -2 1
2 6 -3
1 3 7

◆ Step 5. Finally, compute the inverse of A.


In[ ]:= invA = (1 / detA) * adjA // MatrixForm
Out[ ]//MatrixForm=
5/17   -2/17   1/17
2/17   6/17    -3/17
1/17   3/17    7/17

◆ Step 6. Verify the solution.


In[ ]:= invA.A  A.invA  IdentityMatrix[3]
Out[ ]=

True

In[ ]:= Inverse[A]  invA


Out[ ]=

True

Summary
After completing this chapter, you should be able to
◼ perform LU factorization of a matrix in Mathematica
◼ find the determinant of a matrix in Mathematica
◼ practice applications of determinants in Mathematica.


◼ develop the habit of always checking your solutions for quality assurance.

Week 11: Eigenvalues and Eigenvectors
Determination and applications of eigenvalues and eigenvectors

Table of Contents
1. Example 11.1: Characteristic Polynomial and Equation
2. Example 11.2: Multiplicity of an Eigenvalue
3. Example 11.3: Diagonalization
3.1. Non-Diagonalizable Matrix
3.2. Diagonalizable Matrix
4. Example 11.4: Matrix Powers
5. Summary

Commands list
◼ NullSpace
◼ CharacteristicPolynomial
◼ Eigenvalues
◼ Eigenvectors
◼ Eigensystem
◼ Factor
◼ DiagonalizableMatrixQ
◼ Diagonal
◼ Power
◼ MatrixPower

Prerequisite: Eigenvalues and Eigenvectors


The Wolfram Language includes functions for finding the eigenvalues, eigenvectors, and
eigensystems of matrices. There are also functions related to characteristic polynomials,
diagonalization and many other matrix-related topics.


Eigenvalues and Eigenvectors:


https://reference.wolfram.com/language/tutorial/LinearAlgebra.html#9501

Example 11.1: Characteristic Polynomial and Equation


Determine the eigenvalues for the given matrix and eigenvectors associated with each
eigenvalue using the characteristic polynomial:
1 -3 3
A = 2 -2 2
2 0 0

Definition 11.1: Eigenvector, Eigenvalue


Let A be an n × n matrix. Then a nonzero vector u is an eigenvector of A if there exists a scalar λ such
that
Au = λu

where the scalar λ is called an eigenvalue of A.

 Note that an eigenvalue λ can be zero, but an eigenvector u must be a nonzero vector.
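◆ The defining equation Au = λu can be checked directly; a minimal sketch on an illustrative 2 × 2 matrix B (not the matrix of this example).
In[ ]:= B = {{2, 1}, {1, 2}};
{vals, vecs} = Eigensystem[B];
Table[B.vecs〚k〛 == vals〚k〛 vecs〚k〛, {k, Length[vals]}]
Out[ ]=
{True, True}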
◆ Step 1. Define the matrix A.
In[ ]:= ClearAll["Global`*"]
A = {{1, - 3, 3}, {2, - 2, 2}, {2, 0, 0}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -3 3
2 -2 2
2 0 0

◆ Step 2. Determine the eigenvalues of A by calculating the characteristic polynomial.


Theorem 11.1: How to use determinants to find eigenvalues
Let A be an n × n matrix. Then λ is an eigenvalue of A if and only if det(A - λ In ) = 0.

where the polynomial from det(A - λ In ) is called the characteristic polynomial of A, and the equation
det(A - λ In ) = 0 is called the characteristic equation.
.

◆ Form a new matrix by subtracting λ from the diagonal elements of A such that P = A - λI3.
In[ ]:= Poly = A - λ * IdentityMatrix[3]; Poly // MatrixForm
Out[ ]//MatrixForm=
1-λ -3 3
2 -2 - λ 2
2 0 -λ

◆ The eigenvalues for a matrix A are given by the roots of the characteristic equation.


In[ ]:= eqn = Det[Poly]  0


Out[ ]=

2 λ - λ2 - λ3  0

In[ ]:= Solve[eqn, λ]


Out[ ]=

{{λ  - 2}, {λ  0}, {λ  1}}

Theorem 11.2: Eigenvectors


The corresponding eigenvectors are found by solving the homogeneous system:
.

(A - λ In ) u = 0
.

◆ Step 3.1. Find the eigenvectors associated with λ1 = - 2 by solving the corresponding
homogeneous system: (A + 2 I3) u1 = 0.
◆ Step 3.1.1. Construct the augmented matrix of the system, AugMat= [A1|b].
◆ Construct the coefficient matrix A1 = A + 2 I3 .
In[ ]:= A1 = A + 2 * IdentityMatrix[3]; A1 // MatrixForm
Out[ ]//MatrixForm=
3 -3 3
2 0 2
2 0 2

◆ Column matrix b with right-hand-side constants:


In[ ]:= b = {{0}, {0}, {0}}; MatrixForm[b]
Out[ ]//MatrixForm=
0
0
0

◆ Augmented matrix:
In[ ]:= AugMat1 = ArrayFlatten[{{A1, b}}]; MatrixForm[AugMat1]
Out[ ]//MatrixForm=
3 -3 3 0
2 0 2 0
2 0 2 0

◆ Step 3.1.2. Transform the augmented matrix to echelon form.


◆ R1 ⟷ R2


In[ ]:= AugMat1〚{1, 2}〛 = AugMat1〚{2, 1}〛; AugMat1 // MatrixForm


Out[ ]//MatrixForm=
2 0 2 0
3 -3 3 0
2 0 2 0

◆ -(3/2) R1 + R2 → R2
In[ ]:= AugMat1〚2〛 = -(3/2) AugMat1〚1〛 + AugMat1〚2〛; AugMat1 // MatrixForm
Out[ ]//MatrixForm=
2 0 2 0
0 -3 0 0
2 0 2 0

◆ - R 1 + R3 → R 3

In[ ]:= AugMat1〚3〛 = - AugMat1〚1〛 + AugMat1〚3〛; AugMat1 // MatrixForm


Out[ ]//MatrixForm=
2 0 2 0
0 -3 0 0
0 0 0 0

◆ Step 3.1.3. Perform the back substitution to find the eigenvector. Take x3 = s:
In[ ]:= x3 = s;
eq0 = 2 x1 + 2 x3  0;
eq1 = - 3 x2  0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=

{{x1  - s, x2  0}}

◆ The general solution to u1:


In[ ]:= u1 = {x1, x2, x3} /. soln〚1〛; u1 // MatrixForm
Out[ ]//MatrixForm=
-s
0
s

Definition 11.2: Eigenspace


Each distinct eigenvalue λ has its own associated eigenspace. The subspace of all eigenvectors
associated with that eigenvalue λ, together with the zero vector, is called the eigenspace of λ.

◆ Basis for eigenspace of λ1 = - 2:


In[ ]:= u1 = u1 /. s  1; u1 // MatrixForm


Out[ ]//MatrixForm=

-1
0
1

◆ Step 3.2. Find the eigenvectors associated with λ2 = 1 by solving the corresponding
homogeneous system: (A - I3) u2 = 0.
◆ Step 3.2.1. Construct the augmented matrix of the system, AugMat= [A2|b].
◆ Construct the coefficient matrix A2 = A - I3.
In[ ]:= A2 = A - IdentityMatrix[3]; A2 // MatrixForm
Out[ ]//MatrixForm=
0 -3 3
2 -3 2
2 0 -1

◆ Augmented matrix:
In[ ]:= AugMat2 = ArrayFlatten[{{A2, b}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
0 -3 3 0
2 -3 2 0
2 0 -1 0

◆ Step 3.2.2. Transform the augmented matrix to echelon form.


◆ R1 ⟷ R3

In[ ]:= AugMat2〚{1, 3}〛 = AugMat2〚{3, 1}〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
2 0 -1 0
2 -3 2 0
0 -3 3 0

◆ - R 1 + R2 → R 2

In[ ]:= AugMat2〚2〛 = - AugMat2〚1〛 + AugMat2〚2〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
2 0 -1 0
0 -3 3 0
0 -3 3 0

◆ - R 2 + R3 → R 3

In[ ]:= AugMat2〚3〛 = - AugMat2〚2〛 + AugMat2〚3〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
2 0 -1 0
0 -3 3 0
0 0 0 0


◆ Step 3.2.3. Perform the back substitution to find the eigenvector. Take x3 = 2 s:
In[ ]:= Clear[x1, x2, x3]
x3 = 2 s;
eq0 = 2 x1 - x3  0;
eq1 = - 3 x2 + 3 x3  0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=

{{x1  s, x2  2 s}}

◆ The general solution to u2:


In[ ]:= u2 = {x1, x2, x3} /. soln〚1〛; u2 // MatrixForm
Out[ ]//MatrixForm=
s
2s
2s

◆ Basis for eigenspace of λ2 = 1:


In[ ]:= u2 = u2 /. s  1; u2 // MatrixForm
Out[ ]//MatrixForm=

1
2
2

◆ Step 3.3. Find the eigenvectors associated with λ3 = 0 by solving the homogeneous
system: (A - 0 I3) u3 = Au3 = 0.
◆ Step 3.3.1. Construct the matrix A3 = A - 0 I3 = A.
In[ ]:= A3 = A - 0 * IdentityMatrix[3]; A3 // MatrixForm
Out[ ]//MatrixForm=
1 -3 3
2 -2 2
2 0 0

◆ Step 3.3.2. Use the NullSpace function to find the eigenvector.


◆ The general solution to u3:
In[ ]:= u3 = MatrixForm[Transpose[NullSpace[A3] * s]]  MatrixForm[Transpose[NullSpace[A3]]] * s
Out[ ]=
(0, s, s)ᵀ  s (0, 1, 1)ᵀ

◆ Basis for eigenspace of λ3 = 0:


In[ ]:= u3 = NullSpace[A3]; Transpose[u3] // MatrixForm


Out[ ]//MatrixForm=

0
1
1

◆ Step 4. Verify the results using the built-in functions.


In[ ]:= checkPoly = CharacteristicPolynomial[A, λ]
Out[ ]=

2 λ - λ² - λ³

In[ ]:= Solve[checkPoly  0, λ]


Out[ ]=

{{λ  - 2}, {λ  0}, {λ  1}}

In[ ]:= Eigenvalues[A]


Out[ ]=

{- 2, 1, 0}

In[ ]:= v = Eigenvectors[A];


{MatrixForm[v〚1〛], MatrixForm[v〚2〛], MatrixForm[v〚3〛]}
Out[ ]=
{(-1, 0, 1)ᵀ, (1, 2, 2)ᵀ, (0, 1, 1)ᵀ}

In[ ]:= sys = Eigensystem[A];


{sys〚1〛, MatrixForm[sys〚2, 1〛], MatrixForm[sys〚2, 2〛], MatrixForm[sys〚2, 3〛]}
Out[ ]=

{{-2, 1, 0}, (-1, 0, 1)ᵀ, (1, 2, 2)ᵀ, (0, 1, 1)ᵀ}

Example 11.2: Multiplicity of an Eigenvalue


Find the eigenvalues and a basis for each eigenspace of the given matrix A:
4 4 -2
A = 1 4 -1
3 6 -1

◆ Step 1. Define the matrix A.


In[ ]:= ClearAll["Global`*"]


A = {{4, 4, - 2}, {1, 4, - 1}, {3, 6, - 1}}; MatrixForm[A]
Out[ ]//MatrixForm=
4 4 -2
1 4 -1
3 6 -1

◆ Step 2. Determine the eigenvalues of A by calculating the characteristic polynomial.


◆ The characteristic polynomial of A is P = det(A - λI3).
In[ ]:= poly = A - λ * IdentityMatrix[3]; poly // MatrixForm
Out[ ]//MatrixForm=
4-λ 4 -2
1 4-λ -1
3 6 -1 - λ

In[ ]:= Poly = Det[poly]


Out[ ]=

12 - 16 λ + 7 λ² - λ³

◆ Step 3. Define the characteristic equation of A as det(A - λI3) = 0.


In[ ]:= eqn = Poly  0
Out[ ]=

12 - 16 λ + 7 λ2 - λ3  0

◆ Step 4. The eigenvalues for a matrix A are given by the roots of the characteristic equa-
tion.
◆ Factorise the characteristic equation.
In[ ]:= Factor[eqn]
Out[ ]=

- (- 3 + λ) (- 2 + λ)2   0

◆ From the factored form we see that the matrix A has two distinct eigenvalues, λ1 = 2
(multiplicity 2) and λ2 = 3 (multiplicity 1). Confirm the results by solving the character-
istic equation for λ.
In[ ]:= Solve[eqn, λ]
Out[ ]=

{{λ  2}, {λ  2}, {λ  3}}

Definition 11.3: Multiplicity


The multiplicity of an eigenvalue is equal to its factor’s exponent. In general, for a polynomial P(x), a
root α of P(x) = 0 has multiplicity r if P(x) = (x - α)^r Q(x) with Q(α) ≠ 0.
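◆ A quick way to read off multiplicities, assuming exact arithmetic: Eigenvalues repeats each eigenvalue according to its multiplicity, so Tally counts them. A sketch with the matrix of this example:
In[ ]:= Tally[Eigenvalues[{{4, 4, - 2}, {1, 4, - 1}, {3, 6, - 1}}]]
Out[ ]=
{{3, 1}, {2, 2}}
◆ That is, eigenvalue 3 has multiplicity 1 and eigenvalue 2 has multiplicity 2.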


◆ Step 5.1. Find the eigenvectors associated with λ1 = 2 by solving the corresponding
homogeneous system: (A - 2 I3) u1 = 0.
◆ Step 5.1.1. Construct the augmented matrix of the system, AugMat= [A1|b].
◆ Construct the coefficient matrix A1 = A - 2 I3.
In[ ]:= A1 = A - 2 * IdentityMatrix[3]; A1 // MatrixForm
Out[ ]//MatrixForm=
2 4 -2
1 2 -1
3 6 -3

◆ Column matrix b with right-hand-side constants:


In[ ]:= b = {{0}, {0}, {0}}; MatrixForm[b]
Out[ ]//MatrixForm=
0
0
0

◆ Augmented matrix:
In[ ]:= AugMat1 = ArrayFlatten[{{A1, b}}]; MatrixForm[AugMat1]
Out[ ]//MatrixForm=
2 4 -2 0
1 2 -1 0
3 6 -3 0

◆ Step 5.1.2. Transform the augmented matrix to echelon form.


◆ R2 ⟷ R1

In[ ]:= AugMat1〚{1, 2}〛 = AugMat1〚{2, 1}〛; AugMat1 // MatrixForm


Out[ ]//MatrixForm=
1 2 -1 0
2 4 -2 0
3 6 -3 0

◆ - 2 R1 + R2 → R2

In[ ]:= AugMat1〚2〛 = - 2 * AugMat1〚1〛 + AugMat1〚2〛; AugMat1 // MatrixForm


Out[ ]//MatrixForm=
1 2 -1 0
0 0 0 0
3 6 -3 0

◆ - 3 R1 + R3 → R3

In[ ]:= AugMat1〚3〛 = - 3 * AugMat1〚1〛 + AugMat1〚3〛; AugMat1 // MatrixForm


Out[ ]//MatrixForm=
1 2 -1 0
0 0 0 0
0 0 0 0


◆ Step 5.1.3. Perform the back substitution to find the eigenvector. Let x2 = s1 and x3 = s2:
In[ ]:= x3 = s2 ;
x2 = s1 ;
eqn = x1 + 2 x2 - x3  0;
soln = Solve[eqn, x1]
Out[ ]=

{{x1  - 2 s1 + s2 }}

◆ The general solution to u1:


In[ ]:= u1 = {x1, x2, x3} /. soln〚1〛;
u11 = u1 /. {s1  1, s2  0};
u12 = u1 /. {s1  0, s2  1};
MatrixForm[u1]  s1 * MatrixForm[u11] + s2 * MatrixForm[u12]
Out[ ]=
(-2 s1 + s2, s1, s2)ᵀ  s1 (-2, 1, 0)ᵀ + s2 (1, 0, 1)ᵀ

◆ Basis for eigenspace of λ1 = 2:


In[ ]:= u1 = { MatrixForm[u11] , MatrixForm[u12] }
Out[ ]=

{(-2, 1, 0)ᵀ, (1, 0, 1)ᵀ}

Note that for the eigenvalue λ1 = 2, which has multiplicity 2, the associated eigenspace has
dimension 2 (i.e., a plane).
.

Theorem 11.3: Multiplicity and Eigenspace Dimension


Let A be a square matrix with eigenvalue λ. Then the dimension of the associated eigenspace is less
than or equal to the multiplicity of λ.

◆ Step 5.2. Find the eigenvectors associated with λ2 = 3 by solving the corresponding
homogeneous system: (A - 3 I3) u2 = 0.
◆ Step 5.2.1. Construct the augmented matrix of the system, AugMat= [A2|b].
◆ Construct the coefficient matrix A2 = A - 3 I3.
In[ ]:= A2 = A - 3 IdentityMatrix[3]; A2 // MatrixForm
Out[ ]//MatrixForm=
1 4 -2
1 1 -1
3 6 -4


◆ Augmented matrix:
In[ ]:= AugMat2 = ArrayFlatten[{{A2, b}}]; MatrixForm[AugMat2]
Out[ ]//MatrixForm=
1 4 -2 0
1 1 -1 0
3 6 -4 0

◆ Step 5.2.2. Transform the augmented matrix to echelon form.


◆ - 1 R1 + R2 → R2

In[ ]:= AugMat2〚2〛 = - 1 * AugMat2〚1〛 + AugMat2〚2〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
1 4 -2 0
0 -3 1 0
3 6 -4 0

◆ - 3 R1 + R3 → R3

In[ ]:= AugMat2〚3〛 = - 3 * AugMat2〚1〛 + AugMat2〚3〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
1 4 -2 0
0 -3 1 0
0 -6 2 0

◆ - 2 R2 + R3 → R3

In[ ]:= AugMat2〚3〛 = - 2 * AugMat2〚2〛 + AugMat2〚3〛; AugMat2 // MatrixForm


Out[ ]//MatrixForm=
1 4 -2 0
0 -3 1 0
0 0 0 0

◆ Step 5.2.3. Perform the back substitution to find the eigenvector. Take x3 = 3 s:
In[ ]:= Clear[x1, x2, x3]
x3 = 3 s;
eq0 = x1 + 4 x2 - 2 x3  0;
eq1 = - 3 x2 + x3  0;
soln = Solve[{eq0, eq1}, {x1, x2}]
Out[ ]=

{{x1  2 s, x2  s}}

◆ The general solution to u2:


In[ ]:= u2 = {x1, x2, x3} /. soln〚1〛; u2 // MatrixForm
Out[ ]//MatrixForm=
2s
s
3s

◆ Basis for eigenspace of λ2 = 3:


In[ ]:= u2 = u2 /. s  1; u2 // MatrixForm


Out[ ]//MatrixForm=

2
1
3

◆ Step 6. Verify the results using the built-in functions.


In[ ]:= checkPoly = CharacteristicPolynomial[A, λ]
Out[ ]=

12 - 16 λ + 7 λ² - λ³

In[ ]:= Solve[checkPoly  0, λ]


Out[ ]=

{{λ  2}, {λ  2}, {λ  3}}

In[ ]:= Eigenvalues[A]


Out[ ]=

{3, 2, 2}

In[ ]:= v = Eigenvectors[A];


{MatrixForm[v〚1〛], MatrixForm[v〚2〛], MatrixForm[v〚3〛]}
Out[ ]=
{(2, 1, 3)ᵀ, (1, 0, 1)ᵀ, (-2, 1, 0)ᵀ}

In[ ]:= sys = Eigensystem[A];


{sys〚1〛, MatrixForm[sys〚2, 1〛], MatrixForm[sys〚2, 2〛], MatrixForm[sys〚2, 3〛]}
Out[ ]=

{{3, 2, 2}, (2, 1, 3)ᵀ, (1, 0, 1)ᵀ, (-2, 1, 0)ᵀ}

Example 11.3: Diagonalization


Definition 11.4: Diagonalizable Matrix
An n × n matrix A is diagonalizable if there exist n × n matrices D and P, with D diagonal and P
invertible, such that
A = PDP⁻¹

where the columns of P are the eigenvectors of the matrix A, such that P = [u1 . . . un ] and the diagonal
entries of D given by the corresponding eigenvalues λ1 , . . . , λn .

 Note that the order of the eigenvalues in D does not matter, as long as it matches the order of the


corresponding eigenvectors in P.
.
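◆ The factorization in Definition 11.4 can be assembled in a few lines from Eigensystem; a sketch, assuming the matrix is diagonalizable (B is the matrix of Example 11.3.2 below).
In[ ]:= B = {{1, - 4, 3}, {0, 7, 1}, {0, 0, 2}};
{vals, vecs} = Eigensystem[B];
P = Transpose[vecs]; d = DiagonalMatrix[vals];
B == P.d.Inverse[P]
Out[ ]=
True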

Theorem 11.4: Conditions for a Matrix to be Diagonalizable


1. An n × n matrix A is diagonalizable if and only if A has eigenvectors that form a basis for Rn.
2. If A has n linearly independent eigenvectors u1 , . . . , un , then A is diagonalizable.
3. Suppose that an n × n matrix A has only real eigenvalues. Then A is diagonalizable if and only if the
. dimension of each eigenspace is equal to the multiplicity of the corresponding eigenvalue.
4. If A is an n × n matrix with n distinct real eigenvalues, then A is always diagonalizable.
.

Theorem 11.5: Eigenvalues and Linear Independence


If {λ1 , . . . , λk } are distinct eigenvalues of a matrix A, then any set of associated eigenvectors
{u1 , . . . , uk } is linearly independent.
.

Example 11.3.1. If possible, diagonalize the given matrix A:


0 2 -1
A = 1 -1 0
1 -2 0

◆ Step 1. Define the matrix A.


In[ ]:= ClearAll["Global`*"]
A = {{0, 2, - 1}, {1, - 1, 0}, {1, - 2, 0}}; MatrixForm[A]
Out[ ]//MatrixForm=
0 2 -1
1 -1 0
1 -2 0

◆ Step 2. Find the eigenvalues of the matrix A.


In[ ]:= Factor[CharacteristicPolynomial[A, λ]]
Out[ ]=

-((- 1 + λ) (1 + λ)²)

In[ ]:= Solve[Det[A - λ * IdentityMatrix[3]]  0, λ]


Out[ ]=

{{λ  - 1}, {λ  - 1}, {λ  1}}

In[ ]:= Eigenvalues[A]


Out[ ]=

{- 1, - 1, 1}

◆ Results show that the matrix A has two distinct eigenvalues, λ1 = - 1 (multiplicity 2)
and λ2 = 1 (multiplicity 1).


◆ Step 3. Find the corresponding eigenvectors.


◆ Basis for eigenspace of λ1 = - 1:
In[ ]:= λ1 = - 1;
NullSpace[A - λ1 IdentityMatrix[3]] // Transpose // MatrixForm
Out[ ]//MatrixForm=
0
1
2

 Note that although the eigenvalue λ1 = - 1 has multiplicity of 2, its associated eigenspace has
dimension 1 (i.e., a line).
.

◆ Basis for eigenspace of λ2 = 1:


In[ ]:= λ2 = 1;
NullSpace[A - λ2 IdentityMatrix[3]] // Transpose // MatrixForm
Out[ ]//MatrixForm=
2
1
0

◆ Step 4. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
For the given matrix A, the dimension of the eigenspace of λ1 = - 1 is less than the multiplicity of the
corresponding eigenvalue; therefore, by Theorem 11.4 (condition 3), the given matrix A is
non-diagonalizable.

◆ Step 5. Verify the conclusion using DiagonalizableMatrixQ.


In[ ]:= DiagonalizableMatrixQ[A]
Out[ ]=

False
.

Example 11.3.2. If possible, find matrices P and D to diagonalize the given matrix A:
1 -4 3
A= 0 7 1
0 0 2

◆ Step 1. Define the matrix A.


In[ ]:= ClearAll["Global`*"]
A = {{1, - 4, 3}, {0, 7, 1}, {0, 0, 2}}; MatrixForm[A]
Out[ ]//MatrixForm=
1 -4 3
0 7 1
0 0 2


◆ Step 2. Find the eigenvalues of the matrix A.


In[ ]:= Factor[CharacteristicPolynomial[A, λ]]
Out[ ]=

- ((- 7 + λ) (- 2 + λ) (- 1 + λ))

In[ ]:= Solve[Det[A - λ * IdentityMatrix[3]]  0, λ]


Out[ ]=

{{λ  1}, {λ  2}, {λ  7}}

In[ ]:= Eigenvalues[A]


Out[ ]=

{7, 2, 1}

◆ Results show that the matrix A has three distinct eigenvalues, λ1 = 7, λ2 = 2, and λ3 = 1.
◆ Step 3. Find the corresponding eigenvectors.
◆ Basis for eigenspace of λ1 = 7:
In[ ]:= λ1 = 7;
u1 = NullSpace[A - λ1 IdentityMatrix[3]]; u1 // Transpose // MatrixForm
Out[ ]//MatrixForm=
-2
3
0

◆ Basis for eigenspace of λ2 = 2:


In[ ]:= λ2 = 2;
u2 = NullSpace[A - λ2 IdentityMatrix[3]]; u2 // Transpose // MatrixForm
Out[ ]//MatrixForm=
19
-1
5

◆ Basis for eigenspace of λ3 = 1:


In[ ]:= λ3 = 1;
u3 = NullSpace[A - λ3 IdentityMatrix[3]]; u3 // Transpose // MatrixForm
Out[ ]//MatrixForm=
1
0
0

◆ Step 4. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
◆ Since the matrix A has three distinct eigenvalues with corresponding eigenvectors, by
Theorem 11.5 the set of eigenvectors is linearly independent and thus forms a basis for R3.
◆ According to Theorem 11.4 (conditions 1 and 4), the given matrix A is diagonalizable.


◆ Step 5. Verify the conclusion.


◆ The following nonzero determinant implies that the eigenvectors are linearly independent.
In[ ]:= Det[Eigenvectors[A]]
Out[ ]=

15

In[ ]:= DiagonalizableMatrixQ[A]


Out[ ]=

True

◆ Step 6. Define the matrices P and D based on the Definition 11.4.


In[ ]:= D1 = DiagonalMatrix[{λ1 , λ2 , λ3 }]; D1 // MatrixForm
Out[ ]//MatrixForm=

7 0 0
0 2 0
0 0 1

In[ ]:= P = Transpose[{u1 〚1〛, u2 〚1〛, u3 〚1〛}]; P // MatrixForm


Out[ ]//MatrixForm=

-2 19 1
3 -1 0
0 5 0

◆ Step 7. If possible, obtain the inverse of matrix P.


◆ Calculate the determinant of P.
In[ ]:= Det[P] ≠ 0
Out[ ]=

True

◆ The nonzero determinant implies that the matrix P is invertible.


In[ ]:= InvP = Inverse[P]; InvP // MatrixForm
Out[ ]//MatrixForm=
1 1
0
3 15
1
0 0
5
2 11
1 -
3 3

◆ Step 7.1. Show that A = PDP⁻¹.


In[ ]:= A  Dot[P, D1, InvP]
Out[ ]=

True

◆ Step 7.2. Alternatively, since P is invertible, show that AP = PD .


In[ ]:= A.P  P.D1


Out[ ]=

True

Example 11.4: Matrix Powers


Definition 11.5: Matrix Powers
Suppose that an n × n matrix A is diagonalizable with A = PDP⁻¹. Then the k th power of A is defined to be

Aᵏ = P Dᵏ P⁻¹
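◆ A quick sketch of this identity on an illustrative diagonalizable matrix B (not the matrix below): only the diagonal entries need to be raised to the k-th power.
In[ ]:= B = {{4, 1}, {2, 3}};
{vals, vecs} = Eigensystem[B];
P = Transpose[vecs];
MatrixPower[B, 5] == P.DiagonalMatrix[vals^5].Inverse[P]
Out[ ]=
True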

Find A⁵ via diagonalization.

A = ( 1/3   2/9 )
    ( 2/3   7/9 )

◆ Step 1. Define the matrix A.


In[ ]:= ClearAll["Global`*"]
A = {{1/3, 2/9}, {2/3, 7/9}}; MatrixForm[A]
Out[ ]//MatrixForm=
1/3   2/9
2/3   7/9

◆ Step 2. Find the eigenvalues and corresponding eigenvectors of the matrix A.


In[ ]:= λ = Eigenvalues[A]
Out[ ]=
1
1, 
9

In[ ]:= u = Eigenvectors[A];


{ MatrixForm[u〚1〛], MatrixForm[u〚2〛]}
Out[ ]=
1
-1
 3 , 
1 1

◆ Step 3. Check the matrix for diagonalizability based on the eigenvalues and eigenvectors.
◆ The following nonzero determinant implies that the eigenvectors are linearly independent .


In[ ]:= Det[Eigenvectors[A]]


Out[ ]=
4/3

In[ ]:= DiagonalizableMatrixQ[A]


Out[ ]=

True

◆ Step 4. Define the matrices P and D based on the Definition 11.4.


In[ ]:= D1 = DiagonalMatrix[{λ〚1〛, λ〚2〛}]; D1 // MatrixForm
Out[ ]//MatrixForm=
1 0
1
0
9

In[ ]:= P = Transpose[u]; P // MatrixForm


Out[ ]//MatrixForm=
1
-1
3
1 1

◆ Step 5. If possible, obtain the inverse of matrix P.


◆ Calculate the determinant of P.
In[ ]:= Det[P] ≠ 0
Out[ ]=

True

◆ The nonzero determinant implies that the matrix P is invertible.


In[ ]:= InvP = Inverse[P]; InvP // MatrixForm
Out[ ]//MatrixForm=
3 3
4 4
3 1
-
4 4

◆ Step 6. Calculate the 5th power of the diagonal matrix.


.

Theorem 11.6: Powers of a Diagonal Matrix


The k th power of a diagonal matrix D is a diagonal matrix whose diagonal entries are the k th powers of
the corresponding entries of D.

◆ Diagonal entries of D:
In[ ]:= d = Diagonal[D1]
Out[ ]=
1
1, 
9


In[ ]:= D5 = DiagonalMatrix[Power[d, 5]]; D5 // MatrixForm


Out[ ]//MatrixForm=
1 0
1
0
59 049

◆ Step 7. Compute A⁵ based on Definition 11.5.
In[ ]:= A5 = P.D5.InvP; A5 // MatrixForm
Out[ ]//MatrixForm=

4921/19683    14762/59049
14762/19683   44287/59049

◆ Step 8. Verify the obtained solution.


In[ ]:= MatrixPower[A, 5]  A5
Out[ ]=

True

Summary
After completing this chapter, you should be able to
◼ develop SOPs to find the eigensystem (eigenvalues and eigenvectors) of a given
square matrix.
◼ perform step-by-step matrix diagonalization in Mathematica.
◼ perform matrix power operations in Mathematica.
◼ develop the habit of always checking your solutions for quality assurance.

Week 12: Linear Algebra and Geometry
Vectors, Vector Operations, and Linear Transformations

Table of Contents
1. Example 12.1: Vectors and Vector Operations
2. Example 12.2: Geometry of Vectors
2.1. Vector Addition
2.2. Scalar Multiplication
2.3. Vector Subtraction
3. Example 12.3: Span
4. Example 12.4: Linear Independence
5. Example 12.5: Dot Product and its Applications
5.1. Properties of the Dot Product
5.2. Norm of a Vector
5.3. Distance Between Vectors
5.4. Angle Between Vectors
5.5. Orthogonal Vectors
6. Example 12.6: Linear Transformations
6.1. Linear Transformations
6.2. One-to-One Linear Transformations
6.3. Onto Linear Transformations
7. Example 12.7: Geometry of Linear Transformations
7.1. Reflection Across the x-Axis
7.2. Reflection Across the y-Axis
7.3. Rotation by Angle θ
7.4. Vertical Shear Transformation
7.5. Horizontal Shear Transformation
7.6. Dilation
7.7. Projection onto the x-Axis
7.8. Projection onto the y-Axis


8. Summary

Commands list
◼ Arrowheads
◼ Arrow
◼ Show
◼ Graphics
◼ Point
◼ Line
◼ Dot
◼ Norm
◼ EuclideanDistance
◼ VectorAngle
◼ Expand
◼ BezierCurve

Prerequisite: Operations on Vectors | Plane Geometry


The Wolfram Language represents vectors as lists, and never needs to distinguish between
row and column cases. Vectors in the Wolfram Language can always mix numbers and
arbitrary symbolic or algebraic elements. There are various built-in functions for constructing
and displaying vectors, performing operations on vectors, and for vector analysis.

Operations on Vectors:
https://reference.wolfram.com/language/guide/OperationsOnVectors.html

The Wolfram Language provides fully integrated support for plane geometry, including basic
regions such as points, lines, triangles, and disks; functions for computing basic properties
such as arc length and area; and solvers to find nearest points, intersections of regions, or
integrals over regions. It is a powerful tool for visualizing geometrical figures and graphical
images, built from explicit lists of primitives, directives, wrappers, and options.

Plane Geometry:
https://reference.wolfram.com/language/guide/PlaneGeometry.html

Example 12.1: Vectors and Vector Operations


Definition 12.1: Vector and Rn


A vector is an object that has both magnitude and direction. It can be represented by an ordered list of
real numbers u1 , u2 , … , un expressed as

u = (u1 , u2 , … , un )ᵀ   (a column vector)

or as u = (u1 , u2 , … , un ).

The set of all vectors with n entries is denoted by Rn (real coordinate space of dimension n).

Definition 12.2: Vector Arithmetic, Scalar, Euclidean Space


Let u and v be vectors in Rn given by

u = (u1 , u2 , … , un )ᵀ   and   v = (v1 , v2 , … , vn )ᵀ

Equality: u=v if and only if u1 = v1 , u2 = v2 , … , un = vn .

Addition: u + v = (u1 + v1 , u2 + v2 , … , un + vn )ᵀ

Scalar Multiplication: cu = (c · u1 , c · u2 , … , c · un )ᵀ

where c is a real number, called a scalar.

The set of all vectors in Rn , taken together with these definitions of addition and scalar multiplication,
is called Euclidean space. The Euclidean space is an example of a vector space.
.

Theorem 12.1: Algebraic Properties of Vectors


Let a and b be scalars, and u, v, and w be vectors in Rn . Then
(a) u + v = v + u

(b) a (u + v) = au + av
(c) (a + b) u = au + bu
(d) (u + v) + w = u +(v + w)
(e) a (bu) = (ab) u
(f) u +(- u) = 0
(g) u + 0 = 0 + u = u
(h) 1 u = u
(i) - u = (- 1) u
where the zero vector is given by 0 = (0, 0, … , 0)ᵀ.

 Note that two vectors can be equal only if they have the same number of components. Similarly, it is
impossible to add two vectors that have a different number of components.
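◆ Property (a) of Theorem 12.1 can also be verified symbolically before working the numeric exercise below; a minimal sketch for n = 4, where uu and vv are hypothetical variable names holding generic components.
In[ ]:= uu = Array[Subscript[u, #] &, 4];
vv = Array[Subscript[v, #] &, 4];
uu + vv == vv + uu
Out[ ]=
True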

Suppose that we have the following vectors in R4. Perform operations on these vectors to find u + v,
-4v, and 2u - 3v.

u = (2, -3, 0, -1)ᵀ    v = (-4, 6, -2, 7)ᵀ

◆ Step 1. Define vectors u and v.


In[ ]:= ClearAll["Global`*"]
u = {{2}, {- 3}, {0}, {- 1}}; MatrixForm[u]
Out[ ]//MatrixForm=
2
-3
0
-1

In[ ]:= v = {{- 4}, {6}, {- 2}, {7}}; MatrixForm[v]


Out[ ]//MatrixForm=
-4
6
-2
7

◆ Step 2. Check the number of components of the given vectors for equality.


In[ ]:= Dimensions[u]  Dimensions[v]


Out[ ]=

True

◆ Step 3. Find the sum, u + v, by adding the corresponding components.


In[ ]:= sol1 = {u〚1〛 + v〚1〛, u〚2〛 + v〚2〛, u〚3〛 + v〚3〛, u〚4〛 + v〚4〛}; sol1 // MatrixForm
Out[ ]//MatrixForm=

-2
3
-2
6

◆ Step 4. Find the scalar multiplication: - 4 v. Use a Do loop to perform vector operations.
◆ Get the dimension of the vector.
In[ ]:= dim = Dimensions[v]
Out[ ]=

{4, 1}

◆ Construct a zero vector.


In[ ]:= sol2 = ConstantArray[0, dim]; MatrixForm[sol2]
Out[ ]//MatrixForm=
0
0
0
0

◆ Run a do loop to perform the scalar multiplication.


In[ ]:= Do[sol2〚i, j〛 = - 4 * v〚i, j〛, {i, 1, dim〚1〛}, {j, 1, dim〚2〛}];
MatrixForm[sol2]

16
-24
8
-28

◆ Step 5. Find the expression 2 u - 3 v, which is an example of a linear combination of
vectors. Use a For loop to perform vector operations.

Definition 12.3: Linear Combination


If u1 , u2 , … , um are vectors and c1 , c2 , … , cm are scalars, then

c 1 u1 + c 2 u2 + ⋯ + c m um

is a linear combination of u1 , u2 , … , um .


 Note that it is possible for scalars to be negative or equal to zero.

◆ Get the dimension of one of the vectors.


In[ ]:= dim = Dimensions[u]
Out[ ]=

{4, 1}

◆ Construct a zero vector.


In[ ]:= sol3 = ConstantArray[0, dim]; MatrixForm[sol3]
Out[ ]//MatrixForm=
0
0
0
0

◆ Run a for loop to perform the given vector operation.


◆ Difference of two vectors can be rewritten in the following way:
2 u - 3 v = 2 u + (- 3) v
In[ ]:= sol3 = ConstantArray[0, dim];
For[i = 1, i ≤ dim〚1〛, i ++,
For[j = 1, j ≤ dim〚2〛, j ++,
sol3〚i, j〛 = 2 * u〚i, j〛 + (- 3) * v〚i, j〛
]];
MatrixForm[sol3]

16
-24
6
-23

◆ Step 6. Verify the results.


In[ ]:= FullSimplify[sol1  u + v]
Out[ ]=
True

In[ ]:= FullSimplify[sol2  - 4 v]


Out[ ]=

True

In[ ]:= FullSimplify[sol3  2 u - 3 v]


Out[ ]=

True


Example 12.2: Geometry of Vectors


Definition 12.4: Tip, Tail of Vector
Vectors have a geometric interpretation that is most easily understood in R2. We plot the vector (x1 , x2 )ᵀ
by drawing an arrow from the origin to the point (x1 , x2 ) in the plane.

The end of the vector with the arrow is called the tip, and the end at the origin is called the tail.

◆ The built-in command Arrow can be used to visualize the vectors.


In[ ]:= u1 = {Arrowheads[Medium], Arrow[{{0, 0}, {- 1, 1}}]};
u2 = {Arrowheads[Medium], Arrow[{{0, 0}, {1, 2}}]};
u3 = {Arrowheads[Medium], Arrow[{{0, 0}, {2, - 1}}]};

◆ Then, the Graphics and Show functions are used to display them in the coordinate system.
In[ ]:= Show[Graphics[{Thick, Blue, u1, u2, u3,
Red, PointSize[0.02], Point[{- 1, 1}], Point[{1, 2}], Point[{2, - 1}], Black,
Text[Style["(-1, 1)", FontFamily  "Times", FontSize  14], {- 1, 1.3}],
Text[Style["(1, 2)", FontFamily  "Times", FontSize  14], {1.2, 2.1}],
Text[Style["(2, -1)", FontFamily  "Times", FontSize  14], {2, - 0.6}]}],
Axes  True, AxesLabel  {"x1 ", "x2 "}, LabelStyle 
Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: three vectors drawn as arrows from the origin to the points (-1, 1), (1, 2), and (2, -1), with axes x1 and x2.]

Suppose that we have the vectors in R2. Perform vector operations and illustrate them
graphically.
Theorem 12.2.1: Geometric Procedures for Adding Vectors
1. The Tip-to-Tail Rule: Let u and v be two vectors. Translate the graph of v, preserving direction, so
that its tail is at the tip of u. Then the tip of the translated v is at the tip of u + v.


2. The Parallelogram Rule: Let vectors u and v form two adjacent sides of a parallelogram with
vertices at the origin, the tip of u, and the tip of v. Then the tip of u + v is at the fourth vertex.
.

Example 12.2.1. Vector addition


.

◆ Step 1. Define vectors u and v.


In[ ]:= ClearAll["Global`*"]
u = {2, 5}; u // MatrixForm
Out[ ]//MatrixForm=
2
5

In[ ]:= v = {4, 2}; v // MatrixForm


Out[ ]//MatrixForm=
4
2

◆ Step 2. Check the number of components of the given vectors for equality.
In[ ]:= Dimensions[u]  Dimensions[v]
Out[ ]=

True

◆ Step 3. Find the sum, u + v.


In[ ]:= sum = u + v; sum // MatrixForm
Out[ ]//MatrixForm=
6
7

◆ Step 4. Represent the vectors u, v, and their sum as an arrow.


In[ ]:= u1 = {Arrowheads[Medium], Arrow[{{0, 0}, u}]};
v1 = {Arrowheads[Medium], Arrow[{{0, 0}, v}]};
sum1 = {Arrowheads[Medium], Arrow[{{0, 0}, sum}]};

◆ Step 5. Display the geometric interpretation of the Tip-to-Tail Rule for vector addition.


In[ ]:= Show[Graphics[{Black, Dashed, Thickness[0.005], {Arrowheads[Medium], Arrow[{u, sum}]} ,


Dashing[None], Thick, Blue, u1, v1, RGBColor[1, 0, 0.4], sum1, Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  14], {1.5, 4.9}],
Text[Style["v", Bold, FontFamily  "Times", FontSize  14], {4, 1.6}],
Text[Style["u + v", Bold, FontFamily  "Times", FontSize  14], {6.6, 7.2}],
Text[Style["v", Bold, FontFamily  "Times", FontSize  14], {4.2, 6.7}]}],
Axes  True, AxesLabel  {"x1 ", "x2 "},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: vectors u, v, the translated copy of v (dashed), and u + v, illustrating the Tip-to-Tail Rule.]

◆ The figure above shows vectors u, v, the translated v (dashed), and u + v. When we add
v to u, we add each component of v to the corresponding component of u. Note that we
get to the same place if we translate u instead of v.
◆ Step 6. Display the geometric interpretation of the Parallelogram Rule for vector
addition.


In[ ]:= Show[Graphics[{Thick, Blue, u1, v1, RGBColor[1, 0, 0.4], sum1,


Black, Dashed, Thickness[0.005], Line[{u, sum}] , Line[{v, sum}],
Text[Style["u", Bold, FontFamily  "Times", FontSize  14], {1.5, 4.9}],
Text[Style["v", Bold, FontFamily  "Times", FontSize  14], {4, 1.6}],
Text[Style["u + v", Bold, FontFamily  "Times", FontSize  14], {6.6, 7.2}]}],
Axes  True, AxesLabel  {"x1 ", "x2 "},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the parallelogram with adjacent sides u and v; the diagonal from the origin is u + v.]

◆ The figure above shows that the third and fourth sides of the parallelogram are translated
copies of u and v, which shows the connection to the Tip-to-Tail Rule.

Theorem 12.2.2: Geometric Interpretation of the Scalar Multiplication of Vectors


If a vector u is multiplied by a scalar c, then the new vector cu points in the same direction as u when
c > 0 and in the opposite direction when c < 0. The length of cu is equal to the length of u multiplied by
|c|.
.

Example 12.2.2. Scalar Multiplication


.

◆ Step 1. Define vector u.


In[ ]:= ClearAll["Global`*"]
u = {- 1, 1}; u // MatrixForm
Out[ ]//MatrixForm=
-1
1

◆ Step 2. Multiply the vector u by scalars.


In[ ]:= MatrixForm[u2 = - u]


Out[ ]//MatrixForm=
1
-1

In[ ]:= MatrixForm[u3 = 2.5 * u]


Out[ ]//MatrixForm=
- 2.5
2.5

In[ ]:= MatrixForm[u4 = - 2 u]


Out[ ]//MatrixForm=
2
-2

◆ Step 3. Represent all resulting vectors as arrows.


In[ ]:= u1 = {Arrowheads[Medium], Arrow[{{0, 0}, u}]};
u2 = {Arrowheads[Medium], Arrow[{{0, 0}, u2}]};
u3 = {Arrowheads[Medium], Arrow[{{0, 0}, u3}]};
u4 = {Arrowheads[Medium], Arrow[{{0, 0}, u4}]};

◆ Step 4. Display the geometric interpretation of the scalar multiples of the vector u.
In[ ]:= Show[Graphics[{Thick, Pink, u3, Yellow, u4, Blue, u1, Green, u2, Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  14], {- 0.7, 1}],
Text[Style["-u", Bold, FontFamily  "Times", FontSize  14], {1, - 0.6}],
Text[Style["2.5u", Bold, FontFamily  "Times", FontSize  14], {- 1.9, 2.5}],
Text[Style["-2u", Bold, FontFamily  "Times", FontSize  14], {2.2, - 1.7}]}],
Axes  True, AxesLabel  {"x1 ", "x2 "}, LabelStyle 
Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vectors u, -u, 2.5u, and -2u drawn from the origin.]
◆ 2.5u points in the same direction as u and is 2.5 times as long.


◆ -u points in the opposite direction of u and is equal in length.


◆ -2u points in the opposite direction of u and is twice as long.

Theorem 12.2.3: Geometric Interpretation of the Subtraction of Vectors


Draw a vector w from the tip of v to the tip of u. Then translate w, preserving direction and placing the
tail at the origin. The resulting vector is u - v.
.

Example 12.2.3. Vector Subtraction


.

◆ Step 1. Define vectors u and v.


In[ ]:= ClearAll["Global`*"]
u = {2, 6}; u // MatrixForm
Out[ ]//MatrixForm=
2
6

In[ ]:= v = {5, 4}; v // MatrixForm


Out[ ]//MatrixForm=
5
4

◆ Step 2. Check the number of components of the given vectors for equality.
In[ ]:= Dimensions[u]  Dimensions[v]
Out[ ]=

True

◆ Step 3. Find the difference, u - v.


In[ ]:= diff = u - v; diff // MatrixForm
Out[ ]//MatrixForm=
-3
2

◆ Step 4. Represent the vectors u, v, and their difference as an arrow.


In[ ]:= u1 = {Arrowheads[Medium], Arrow[{{0, 0}, u}]};
v1 = {Arrowheads[Medium], Arrow[{{0, 0}, v}]};
diff1 = {Arrowheads[Medium], Arrow[{{0, 0}, diff}]};

◆ Step 5. Display the geometric interpretation of the vector subtraction procedure.

248
Week 12_Linear Algebra and Geometry.nb 13

In[ ]:= Show[Graphics[{Thick, Blue, u1, v1, RGBColor[1, 0, 0.4], diff1,


Black, Dashed, Thickness[0.005], {Arrowheads[Medium], Arrow[{v, u}]},
Text[Style["u", Bold, FontFamily  "Times", FontSize  14], {1.5, 5.7}],
Text[Style["v", Bold, FontFamily  "Times", FontSize  14], {4.8, 3.2}],
Text[Style["u - v", Bold, FontFamily  "Times", FontSize  14], {- 2, 2.3}],
Text[Style["w", Bold, FontFamily  "Times", FontSize  14], {3.5, 5.6}]}],
Axes  True, AxesLabel  {"x1 ", "x2 "},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: vectors u, v, u - v, and the vector w from the tip of v to the tip of u (dashed).]

Example 12.3: Span


Definition 12.5: Span
Let {u1 , u2 , … , um } be a set of vectors in Rn . The span of this set is denoted span {u1 , u2 , … , um }
and is defined as the set of all linear combinations

x 1 u1 + x 2 u2 + ⋯ + x m um

where x1 , x2 , … , xm can be any real numbers.

If span {u1 , u2 , … , um } = Rn , then we say that the set {u1 , u2 , … , um } spans Rn .


.

Show that v1 is in S = span {u1, u2}, and that v2 is not.


u1 = (2, 1, 1)ᵀ    u2 = (1, 2, 3)ᵀ    v1 = (-1, 4, 7)ᵀ    v2 = (8, 2, 1)ᵀ

◆ Step 1.1. Specify the given vectors.


In[ ]:= ClearAll["Global`*"]


u1 = {{2}, {1}, {1}}; u1 // MatrixForm
Out[ ]//MatrixForm=
2
1
1

In[ ]:= u2 = {{1}, {2}, {3}}; u2 // MatrixForm


Out[ ]//MatrixForm=
1
2
3

In[ ]:= v1 = {{- 1}, {4}, {7}}; v1 // MatrixForm


Out[ ]//MatrixForm=
-1
4
7

Theorem 12.3: Vectors in Rn


Let u1 , u2 , … , um and v be vectors in Rn . Then v is an element of span {u1 , u2 , … , um } if and only if
the linear system with vector equation and augmented matrix

x 1 u1 + x 2 u2 + ⋯ + x m um = v

[ u1 u2 ⋯ um v ]

has a solution.

◆ Step 2.1. Construct the vector equation of the form: x1 u1 + x2 u2 = v1.


In[ ]:= x1 * MatrixForm[u1] + x2 * MatrixForm[u2]  MatrixForm[v1]
Out[ ]=
x2 (1, 2, 3)ᵀ + x1 (2, 1, 1)ᵀ  (-1, 4, 7)ᵀ

◆ Step 3.1. Construct the augmented matrix of the system, A = [u1 u2 v1].
In[ ]:= A = ArrayFlatten[{{u1, u2, v1}}]; MatrixForm[A]
Out[ ]//MatrixForm=
2 1 -1
1 2 4
1 3 7

◆ Step 4.1. Transform the augmented matrix to echelon form.

250
Week 12_Linear Algebra and Geometry.nb 15

In[ ]:= RowReduce[A] // MatrixForm


Out[ ]//MatrixForm=
1 0 -2
0 1 3
0 0 0

◆ Step 5.1. Perform the back substitution to find scalars x1 and x2 and check the solution
using LinearSolve function.
In[ ]:= x2 = 3;
x1 = - 2;

In[ ]:= LinearSolve[{{2, 1}, {1, 2}, {1, 3}}, {- 1, 4, 7}]


Out[ ]=

{- 2, 3}

◆ Since the system has a solution, v1 is in S = span {u1, u2}.


◆ Step 6.1. Define the linear combination.
In[ ]:= Clear[u1, u2, v1]
v1  x1 * u1 + x2 * u2
Out[ ]=

v1  - 2 u1 + 3 u2

◆ Step 1.2. Specify the given vectors.


In[ ]:= ClearAll["Global`*"]
u1 = {{2}, {1}, {1}}; u1 // MatrixForm
Out[ ]//MatrixForm=
2
1
1

In[ ]:= u2 = {{1}, {2}, {3}}; u2 // MatrixForm


Out[ ]//MatrixForm=
1
2
3

In[ ]:= v2 = {{8}, {2}, {1}}; v2 // MatrixForm


Out[ ]//MatrixForm=
8
2
1

◆ Step 2.2. Construct the vector equation of the form: x1 u1 + x2 u2 = v2.

251
16 Week 12_Linear Algebra and Geometry.nb

In[ ]:= x1 * MatrixForm[u1] + x2 * MatrixForm[u2]  MatrixForm[v2]


Out[ ]=
x2 (1, 2, 3)ᵀ + x1 (2, 1, 1)ᵀ  (8, 2, 1)ᵀ

◆ Step 3.2. Construct the augmented matrix of the system, A = [u1 u2 v2 ].


In[ ]:= A = ArrayFlatten[{{u1, u2, v2}}]; MatrixForm[A]
Out[ ]//MatrixForm=
2 1 8
1 2 2
1 3 1

◆ Step 4.2. Transform the augmented matrix to echelon form.


In[ ]:= RowReduce[A] // MatrixForm
Out[ ]//MatrixForm=
1 0 0
0 1 0
0 0 1

◆ The third row of the echelon matrix corresponds to the equation 0=1. Thus the system
has no solutions and v2 is not in S = span {u1, u2}.
◆ Step 5.2. Check the solution using LinearSolve function.
In[ ]:= LinearSolve[{{2, 1}, {1, 2}, {1, 3}}, {8, 2, 1}]

LinearSolve: Linear equation encountered that has no solution.

Out[ ]=

LinearSolve[{{2, 1}, {1, 2}, {1, 3}}, {8, 2, 1}]

◆ Other theorems on span are listed below.


Theorem 12.4: Theorems on Span
Let u1 , u2 , … , um and u be vectors in Rn . If u is in span {u1 , u2 , … , um } then

span {u, u1 , u2 , … , um } = span {u1 , u2 , … , um }

Suppose that u1 , u2 , … , um are in Rn , and let

A = [ u1 u2 ⋯ um ] ∼ B

where B is in echelon form. Then span {u1 , … , um } = Rn exactly when B has a pivot position in every
row.


Let {u1 , u2 , … , um } be a set of vectors in Rn . If m < n, then this set does not span Rn . If m ≥ n, then
the set might span Rn or it might not. In this case, we cannot say more without additional information
about the vectors.

Let a1 , a2 , … , am and b be vectors in Rn . Then the following statements are equivalent. That is, if one
is true, then so are the others, and if one is false then so are the others.
(a) b is in span {a1 , a2 , … , am }.
(b) The vector equation x1 a1 + x2 a2 + ⋯ + xm am = b has at least one solution.
(c) The linear system corresponding to [ a1 a2 ⋯ am b ] has at least one solution.
(d) The equation Ax = b, with A and x given as
A = [ a1 a2 ⋯ am ]   and   x = (x1 , x2 , … , xm )ᵀ
has at least one solution.
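◆ The pivot criterion above can be automated with MatrixRank, which counts the pivot positions; a sketch using the vectors u1 and u2 of this example:
In[ ]:= MatrixRank[Transpose[{{2, 1, 1}, {1, 2, 3}}]]
Out[ ]=
2
◆ The rank is 2 < 3, so span {u1, u2} is a plane in R3 rather than all of R3.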

Example 12.4: Linear Independence


Definition 12.6: Linear Independence
Let {u1 , u2 , … , um } be a set of vectors in Rn . If the only solution to the vector equation

x 1 u1 + x 2 u2 + ⋯ + x m um = 0

is the trivial solution given by x1 = x2 = ⋯ = xm = 0, then the set {u1 , u2 , … , um } is linearly
independent.

If there are nontrivial solutions, then the set is linearly dependent.


.

Determine if the given set is linearly dependent or linearly independent.


u1 = (-1, 4, -2, -3)ᵀ    u2 = (3, -13, 7, 7)ᵀ    u3 = (-2, 1, 9, -5)ᵀ

◆ Step 1.1. Specify the given vectors.


In[ ]:= ClearAll["Global`*"]


u1 = {{- 1}, {4}, {- 2}, {- 3}}; u1 // MatrixForm
Out[ ]//MatrixForm=
-1
4
-2
-3

In[ ]:= u2 = {{3}, {- 13}, {7}, {7}}; u2 // MatrixForm


Out[ ]//MatrixForm=
3
- 13
7
7

In[ ]:= u3 = {{- 2}, {1}, {9}, {- 5}}; u3 // MatrixForm


Out[ ]//MatrixForm=
-2
1
9
-5

◆ Step 2.1. Construct the vector equation of the form: x1 u1 + x2 u2 + x3 u3 = 0.


In[ ]:= b = ConstantArray[0, {4, 1}];
MatrixForm[x1 * u1] + MatrixForm[x2 * u2] + MatrixForm[x3 * u3]  MatrixForm[b]
Out[ ]=
(-x1, 4 x1, -2 x1, -3 x1)ᵀ + (3 x2, -13 x2, 7 x2, 7 x2)ᵀ + (-2 x3, x3, 9 x3, -5 x3)ᵀ  (0, 0, 0, 0)ᵀ

◆ Step 3.1. Construct the corresponding augmented matrix: A = [u1 u2 u3 0]


In[ ]:= A = ArrayFlatten[{{u1, u2, u3, b}}]; MatrixForm[A]
Out[ ]//MatrixForm=
-1 3 -2 0
4 - 13 1 0
-2 7 9 0
-3 7 -5 0

◆ Step 4.1. Transform the augmented matrix to echelon form.


In[ ]:= RowReduce[A] // MatrixForm
Out[ ]//MatrixForm=
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 0

◆ Step 5.1. Perform the back substitution and check the solution using LinearSolve func-
tion.


x3 = 0;
x2 = 0;
x1 = 0;

In[ ]:= LinearSolve[{{- 1, 3, - 2}, {4, - 13, 1}, {- 2, 7, 9}, {- 3, 7, - 5}}, b]


Out[ ]=

{{0}, {0}, {0}}

◆ The results show that the only solution is the trivial one, x1 = x2 = x3 = 0. Hence the set
{u1, u2, u3} is linearly independent.
.

◆ Other theorems on linear independence are listed below.


Theorem 12.5: Theorems on Linear Independence
Suppose that {0, u1 , u2 , … , um } is a set of vectors in Rn . Then the set is linearly dependent.

Suppose that {u1 , u2 , … , um } is a set of vectors in Rn . If n < m, then the set is linearly dependent.

Let {u1 , u2 , … , um } be a set of vectors in Rn . Then this set is linearly dependent if and only if one of
the vectors in the set is in the span of the other vectors.

Let u1 , u2 , … , um be in Rn , and suppose


A = [ u1 u2 ⋯ um ] ∼ B
where B is in echelon form. Then
(a) span {u1 , … , um } = Rn exactly when B has a pivot position in every row.
(b) {u1 , … , um } is linearly independent exactly when B has a pivot position in every column.

Let a1 , a2 , … , am and b be vectors in Rn . Then the following statements are equivalent. That is, if one
is true, then so are the others, and if one is false then so are the others.
(a) The set {a1 , a2 , … , am } is linearly independent.
(b) The vector equation x1 a1 + x2 a2 + ⋯ + xm am = b has at most one solution for every b.
(c) The linear system corresponding to [ a1 a2 ⋯ am b ] has at most one solution for every b.
(d) The equation Ax = b, with A = [ a1 a2 ⋯ am ], has at most one solution for every b.
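◆ Part (b) of the pivot criterion gives a one-line independence test: a set of vectors is linearly independent exactly when MatrixRank of the matrix of columns equals the number of vectors. A sketch with the set from this example:
In[ ]:= MatrixRank[Transpose[{{- 1, 4, - 2, - 3}, {3, - 13, 7, 7}, {- 2, 1, 9, - 5}}]] == 3
Out[ ]=
True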

Example 12.5: Dot Product and its Applications


Definition 12.7: Dot Product
Suppose that
u = (u1 , … , un )ᵀ   and   v = (v1 , … , vn )ᵀ

are both in Rn . Then the dot product of u and v is given by

u · v = u1 v1 + ⋯ + un vn

An alternative way to define the dot product of u and v is with matrix multiplication

u · v = uᵀ v = ( u1 ⋯ un ) (v1 , … , vn )ᵀ = u1 v1 + ⋯ + un vn

 Unlike vector addition, which produces a new vector, the dot product of two vectors yields a scalar.
.

Theorem 12.6: Properties of the Dot Product


Let u, v, and w be vectors in Rn , and c be a scalar. Then
(a) u · v = v · u
(b) (u + v) · w = u · w + v · w
(c) (cu) · v = u · (cv) = c (u · v)
(d) u · u ≥ 0, and u · u = 0 only when u = 0
.
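◆ Property (a) can be verified symbolically in one line; a minimal sketch for R3, where uu and vv are hypothetical variable names holding generic components.
In[ ]:= uu = Array[Subscript[a, #] &, 3]; vv = Array[Subscript[b, #] &, 3];
uu.vv == vv.uu
Out[ ]=
True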

Prove the property (b) of Theorem 12.6 using the given vectors.
u = (2, 1, -3, 2)ᵀ    v = (-1, 4, 0, 3)ᵀ    w = (5, 0, 1, 2)ᵀ

◆ Step 1. Define the given vectors.


In[ ]:= ClearAll["Global`*"]
u = {2, 1, - 3, 2}; u // MatrixForm
Out[ ]//MatrixForm=
2
1
-3
2

In[ ]:= v = {- 1, 4, 0, 3}; v // MatrixForm


Out[ ]//MatrixForm=
-1
4
0
3


In[ ]:= w = {5, 0, 1, 2}; w // MatrixForm


Out[ ]//MatrixForm=
5
0
1
2

◆ Let’s start with the right-hand side of (b).


◆ Step 2. Compute the sum u + v.
In[ ]:= sumUV = u + v; sumUV // MatrixForm
Out[ ]//MatrixForm=
1
5
-3
5

◆ Step 3. Compute the dot product of u + v and w by multiplying the corresponding


entries and adding them together.
◆ So the right-hand-side of Theorem 12.6 (b) equals to:
In[ ]:= RHS = sumUV〚1〛 * w〚1〛 + sumUV〚2〛 * w〚2〛 + sumUV〚3〛 * w〚3〛 + sumUV〚4〛 * w〚4〛
Out[ ]=

12

◆ Compute the left-hand-side expression of (b) using a Do and For loops.


◆ Step 4. Get the dimension of one of the vectors.
In[ ]:= dim = Dimensions[u]
Out[ ]=

{4}

◆ Step 5. Assign the two new variables responsible for the dot products u · w and v · w a
zero value.
In[ ]:= dotUW = 0;
dotVW = 0;

◆ Step 6. Run a do loop for computing the dot product of the vectors u and w.
In[ ]:= Do[dotUW = dotUW + u〚i〛 * w〚i〛, {i, 1, dim〚1〛}];
dotUW
Out[ ]=

11

◆ Step 7. Run a for loop for computing the dot product of the vectors v and w.


In[ ]:= For[i = 1, i ≤ dim〚1〛, i ++,


dotVW = dotVW + v〚i〛 * w〚i〛
];
dotVW
Out[ ]=

◆ Step 8. Sum up the two dot products.


In[ ]:= LHS = dotUW + dotVW
Out[ ]=

12

◆ Step 9. Check the two sides of (b) for equality.


In[ ]:= RHS  LHS
Out[ ]=

True

◆ Step 10. Confirm the results using the built-in function Dot(.).
In[ ]:= RHS  Dot[(u + v), w]
Out[ ]=

True

In[ ]:= LHS  u.w + v.w


Out[ ]=

True

The dot product is used to find the lengths, distances, angles, and orthogonality of vectors.
.

Definition 12.8: Norm of a Vector


Let x be a vector in Rn . Then the norm (or length) of x is given by

|| x || = √(x · x) = √(x1² + x2² + ⋯ + xn²)

For a scalar c and a vector x, it follows from Theorem 12.6c that

|| cx || = |c| || x ||

Find || x || and || -5x || for the given vector x.


x = (-3, 1, 4)ᵀ


◆ Step 1. Define the given vector x.


In[ ]:= ClearAll["Global`*"]
x = {- 3, 1, 4}; x // MatrixForm
Out[ ]//MatrixForm=
-3
1
4

◆ Step 2. Compute the length of x using the definition.


In[ ]:= norm1 = x〚1〛2 + x〚2〛2 + x〚3〛2
Out[ ]=

√26

◆ Step 3. Compute the length of -5x using the definition.


In[ ]:= norm2 = Abs[- 5] * norm1
Out[ ]=

5 √26

◆ Step 4. Confirm the results using the built-in functions.


In[ ]:= x.x  norm1
Out[ ]=

True

In[ ]:= Norm[- 5 x]  norm2


Out[ ]=

True

Definition 12.9: Distance Between Vectors


For two vectors u and v in Rn , the distance between u and v is given by || u - v ||.
.

Compute the distance between the given vectors u and v.


u = (-1, 3, 2)ᵀ    v = (7, 1, -5)ᵀ

◆ Step 1. Define the given vectors u and v.


In[ ]:= ClearAll["Global`*"]


u = {- 1, 3, 2}; u // MatrixForm
Out[ ]//MatrixForm=
-1
3
2

In[ ]:= v = {7, 1, - 5}; v // MatrixForm


Out[ ]//MatrixForm=
7
1
-5

◆ Step 2. Compute the difference of two vectors: u - v.


In[ ]:= diff = u - v; diff // MatrixForm
Out[ ]//MatrixForm=
-8
2
7

◆ Step 3. Compute the distance between two vectors using the definition.
In[ ]:= d= diff〚1〛2 + diff〚2〛2 + diff〚3〛2
Out[ ]=

3 13

◆ Step 4. Confirm the results using the built-in functions Norm and EuclideanDistance.
In[ ]:= Norm[u - v]  d
Out[ ]=

True

In[ ]:= EuclideanDistance[u, v]  d


Out[ ]=

True

Definition 12.10: Angle Between Vectors


Let u and v be vectors in Rn . Then the angle θ between u and v is given by

u · v = || u || || v || cos(θ)

Compute the angle between the given vectors u and v.


u = (2, 3)ᵀ    v = (-3, 4)ᵀ


◆ Step 1. Define the given vectors u and v.


In[ ]:= ClearAll["Global`*"]
u = {2, 3}; u // MatrixForm
Out[ ]//MatrixForm=
2
3

In[ ]:= v = {- 3, 4}; v // MatrixForm


Out[ ]//MatrixForm=
-3
4

◆ Step 2. Solve the equation in definition for cos(θ).


In[ ]:= cos = u.v / (Norm[u] * Norm[v])
Out[ ]=
6 / (5 √13)

◆ Step 3. Find the angle θ.


In[ ]:= angle = ArcCos[cos]
Out[ ]=
ArcCos[6 / (5 √13)]

◆ Step 4. Convert the angle into degrees.


In[ ]:= angleDeg = N[angle] * 180 / Pi
Out[ ]=

70.56

◆ Step 5. Confirm the results using the built-in function VectorAngle.


In[ ]:= VectorAngle[u, v]
Out[ ]=
ArcCos[6 / (5 √13)]

Definition 12.11: Orthogonal Vectors


Vectors u and v in Rn are orthogonal if u · v = 0.
.

Determine if any pair among u, v, and w is orthogonal.


u = (2, -1, 5, -2)ᵀ    v = (3, 2, -4, 0)ᵀ    w = (2, 9, 6, 4)ᵀ

◆ Step 1. Define the given vectors u, v, and w.


In[ ]:= ClearAll["Global`*"]
u = {2, - 1, 5, - 2}; u // MatrixForm
Out[ ]//MatrixForm=
2
-1
5
-2

In[ ]:= v = {3, 2, - 4, 0}; v // MatrixForm


Out[ ]//MatrixForm=
3
2
-4
0

In[ ]:= w = {2, 9, 6, 4}; w // MatrixForm


Out[ ]//MatrixForm=
2
9
6
4

◆ Step 2. Compute their dot products.


In[ ]:= u.v
Out[ ]=

- 16

◆ The dot product is not equal to zero, hence u and v are not orthogonal.
In[ ]:= u.w
Out[ ]=

17

◆ The dot product is not equal to zero, hence u and w are not orthogonal.
In[ ]:= v.w
Out[ ]=

◆ The dot product is equal to zero, hence v and w are orthogonal.


.

◆ Step 3. Confirm the results by computing the angles between vectors.


◆ The term orthogonal is commonly said to be equivalent to perpendicular.


In[ ]:= N[VectorAngle[u, v]] * 180 / Pi
Out[ ]=

120.633

In[ ]:= N[VectorAngle[u, w]] * 180 / Pi
Out[ ]=

75.5766

In[ ]:= N[VectorAngle[v, w]] * 180 / Pi
Out[ ]=

90.

◆ The angle between vectors v and w is 90°, hence they are perpendicular (orthogonal).

Example 12.6: Linear Transformations


Definition 12.12: Linear Transformation
A function T: Rm → Rn is a linear transformation if for all vectors u and v in Rm and all scalars r and
s we have
(a) T (u + v) = T (u) + T (v)
(b) T (ru) = rT (u)
Conditions (a) and (b) can be combined into a single condition
T (ru + sv) = rT (u) + sT (v)
.

Suppose that T: R3 → R4 is defined by


T (x1, x2, x3)ᵀ = (2 x1 + x3, -x1 + 2 x2, x1 - 3 x2 + 5 x3, 4 x2)ᵀ

(a) Show that T is a linear transformation.


.

(b) Is a linear transformation one-to-one?


.

(c) Is the linear transformation onto?


.


(a) Approach 1: To show that T is a linear transformation, we verify both conditions (a) and (b) of Definition 12.12.
.

◆ Step 1. Define the vectors u and v, and the given linear transformation.
In[ ]:= ClearAll["Global`*"]
u = {u1, u2, u3}; u // MatrixForm
Out[ ]//MatrixForm=
u1
u2
u3

In[ ]:= v = {v1, v2, v3}; v // MatrixForm


Out[ ]//MatrixForm=
v1
v2
v3

In[ ]:= T[{x1_, x2_, x3_}] := {{2 x1 + x3}, {- x1 + 2 x2}, {x1 - 3 x2 + 5 x3}, {4 x2}}

◆ Step 2. For Part (a), verify that T (u + v) = T (u) + T (v).


◆ Left-hand-side: T (u + v).
In[ ]:= aLHS = T[u + v]; aLHS // MatrixForm
Out[ ]//MatrixForm=
u3 + 2 (u1 + v1) + v3
- u1 - v1 + 2 (u2 + v2)
u1 + v1 - 3 (u2 + v2) + 5 (u3 + v3)
4 (u2 + v2)

◆ Expanded form:
In[ ]:= Expand[%] // MatrixForm
Out[ ]//MatrixForm=
2 u1 + u3 + 2 v1 + v3
- u1 + 2 u2 - v1 + 2 v2
u1 - 3 u2 + 5 u3 + v1 - 3 v2 + 5 v3
4 u2 + 4 v2

◆ Right-hand-side: T (u) + T (v).


In[ ]:= aRHS = T[u] + T[v]; aRHS // MatrixForm
Out[ ]//MatrixForm=
2 u1 + u3 + 2 v1 + v3
- u1 + 2 u2 - v1 + 2 v2
u1 - 3 u2 + 5 u3 + v1 - 3 v2 + 5 v3
4 u2 + 4 v2

◆ Checking both sides of (a) for equality:


In[ ]:= FullSimplify[aLHS  aRHS]


Out[ ]=

True

◆ Step 3. For Part (b), verify that T (ru) = rT (u).


◆ Left-hand-side: T (ru).
In[ ]:= bLHS = T[r * u]; bLHS // MatrixForm
Out[ ]//MatrixForm=
2 r u1 + r u3
- r u1 + 2 r u2
r u1 - 3 r u2 + 5 r u3
4 r u2

◆ Right-hand-side: rT (u).
In[ ]:= bRHS = r * T[u]; bRHS // MatrixForm
Out[ ]//MatrixForm=
r (2 u1 + u3)
r (- u1 + 2 u2)
r (u1 - 3 u2 + 5 u3)
4 r u2

◆ Expanded form:
In[ ]:= Expand[%] // MatrixForm
Out[ ]//MatrixForm=
2 r u1 + r u3
- r u1 + 2 r u2
r u1 - 3 r u2 + 5 r u3
4 r u2

◆ Checking both sides of (b) for equality:


In[ ]:= FullSimplify[bLHS  bRHS]
Out[ ]=

True

The results verify that both parts of Definition 12.12 hold, so T is a linear transformation.
.
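◆ Alternatively, the combined condition T(ru + sv) = rT(u) + sT(v) from Definition 12.12 can be checked in a single step; a short sketch reusing u, v, and T as defined above, with symbolic scalars r and s:

(* one combined linearity check instead of two separate ones *)
FullSimplify[T[r*u + s*v]  r*T[u] + s*T[v]]  (* returns True *)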

Theorem 12.7: Matrix Transformation


Let T: Rm → Rn . Then T is a linear transformation if and only if T (x) = Ax for some n × m matrix A.
.

(a) Approach 2: Apply Theorem 12.7 and find a matrix A such that T(x) = Ax, which shows that T is a linear transformation.


◆ Step 1. Define the vector x and the given linear transformation.
In[ ]:= ClearAll["Global`*"]
x = {x1, x2, x3}; x // MatrixForm

Out[ ]//MatrixForm=
x1
x2
x3

In[ ]:= T[{x1_, x2_, x3_}] := {{2 x1 + x3}, {- x1 + 2 x2}, {x1 - 3 x2 + 5 x3}, {4 x2}}

◆ Step 2. Compute the linear transformation of x.


In[ ]:= T[x] // MatrixForm
Out[ ]//MatrixForm=
2 x1 + x3
- x1 + 2 x2
x1 - 3 x2 + 5 x3
4 x2

◆ Step 3. Rewrite the given linear transformation function by adding the missing elements.
In[ ]:= newT[{x1_, x2_, x3_}] :=
{{2 x1 + 0 x2 + x3}, {- x1 + 2 x2 + 0 x3}, {x1 - 3 x2 + 5 x3}, {0 x1 + 4 x2 + 0 x3}}

In[ ]:= newT[{x1, x2, x3}] // MatrixForm


Out[ ]//MatrixForm=
2 x1 + x3
- x1 + 2 x2
x1 - 3 x2 + 5 x3
4 x2

◆ Step 4. Verify that nothing has changed and the two definitions give the same linear transformation.
In[ ]:= T[{x1, x2, x3}]  newT[{x1, x2, x3}]
Out[ ]=

True

◆ Step 5. Find the matrix A so that T (x) = Ax.


In[ ]:= newT[{x1_, x2_, x3_}] :=
{{2 x1, + 0 x2, + x3}, {- x1, + 2 x2, + 0 x3}, {x1, - 3 x2, + 5 x3}, {0 x1, + 4 x2, + 0 x3}}

A = newT[{1, 1, 1}]; A // MatrixForm


Out[ ]//MatrixForm=

2 0 1
-1 2 0
1 -3 5
0 4 0


◆ Step 6. Verify the equality T (x) = Ax.


In[ ]:= T[x]  A.{{x1}, {x2}, {x3}} // FullSimplify
Out[ ]=

True

◆ Since T(x) = Ax, T is a linear transformation by Theorem 12.7.


.
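◆ Instead of rewriting T with explicit zero coefficients, the matrix A can also be assembled from the images of the standard basis vectors, since the columns of A are T(e1), T(e2), T(e3). A small sketch using a flattened variant Tf of the transformation (Tf is a helper introduced here for illustration, not part of the original notebook):

(* hypothetical helper returning a flat vector instead of nested lists *)
Tf[{x1_, x2_, x3_}] := {2 x1 + x3, -x1 + 2 x2, x1 - 3 x2 + 5 x3, 4 x2};
(* the columns of A are the images of the standard basis vectors *)
A2 = Transpose[Tf /@ IdentityMatrix[3]];
A2 // MatrixForm  (* the same 4 × 3 matrix A as in Step 5 *)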

Definition 12.13: One-to-One and Onto Linear Transformations


Let T: Rm → Rn be a linear transformation. Then
(a) T is one-to-one if for every vector w in Rn there exists at most one vector u in Rm such that
T (u) = w.
(b) T is onto if for every vector w in Rn there exists at least one vector u in Rm such that T (u) = w.
.

Theorem 12.8: Conditions for One-to-One Linear Transformation


Let T be a linear transformation. Then T is one-to-one if and only if the solution to T (x) = 0 is the
trivial solution x = 0.

Let A be an n × m matrix and define T: Rm → Rn by T (x) = Ax. Then


(a) T is one-to-one if and only if the columns of A are linearly independent.
(b) If A ∼ B and B is in echelon form, then T is one-to-one if and only if B has a pivot position in every
column.
(c) If n < m , then T is not one-to-one.
.

(b) To determine whether the linear transformation is one-to-one, we should apply either part of Theorem 12.8.
.

According to the first part of Theorem 12.8, we need to find the solution to T (x) = 0, which is equivalent to solving Ax = 0.
.

◆ Step 1. Define the given linear transformation and corresponding matrix A.


In[ ]:= T[{x1_, x2_, x3_}] :=


{{2 x1, + 0 x2, + x3}, {- x1, + 2 x2, + 0 x3}, {x1, - 3 x2, + 5 x3}, {0 x1, + 4 x2, + 0 x3}}

A = T[{1, 1, 1}]; A // MatrixForm


Out[ ]//MatrixForm=
2 0 1
-1 2 0
1 -3 5
0 4 0

◆ Step 2. Construct the corresponding augmented matrix for Ax = 0.


In[ ]:= b = ConstantArray[0, {4, 1}];
AugMat = ArrayFlatten[{{A, b}}]; AugMat // MatrixForm
Out[ ]//MatrixForm=
2 0 1 0
-1 2 0 0
1 -3 5 0
0 4 0 0

◆ Step 3. Reduce the augmented matrix to row echelon form.


In[ ]:= RowReduce[AugMat] // MatrixForm
Out[ ]//MatrixForm=
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 0

◆ The echelon form shows that Ax = 0 has only the trivial solution.
◆ Step 4. Verify the result using built-in function.
In[ ]:= LinearSolve[A, b]
Out[ ]=

{{0}, {0}, {0}}

T (x) = 0 has only the trivial solution and thus T is one-to-one.


.
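◆ Part (a) of Theorem 12.8 offers an even shorter route: the columns of A are linearly independent exactly when the null space of A is trivial. A one-line sketch with the matrix A defined above:

(* an empty null space means only the trivial solution, so T is one-to-one *)
NullSpace[A]  {}  (* returns True *)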

Theorem 12.9: Conditions for Onto Linear Transformation


Let A be an n × m matrix and define T: Rm → Rn by T (x) = Ax. Then
(a) T is onto if and only if the columns of A span the codomain Rn .
(b) If A ∼ B and B is in echelon form, then T is onto if and only if B has a pivot position in every row.
(c) If n > m , then T is not onto.

(c) To determine whether the linear transformation is onto, we should apply Theorem 12.9.
.


◆ Step 1. Define the given linear transformation and corresponding matrix A.


In[ ]:= T[{x1_, x2_, x3_}] :=
{{2 x1, + 0 x2, + x3}, {- x1, + 2 x2, + 0 x3}, {x1, - 3 x2, + 5 x3}, {0 x1, + 4 x2, + 0 x3}}

A = T[{1, 1, 1}]; A // MatrixForm


Out[ ]//MatrixForm=
2 0 1
-1 2 0
1 -3 5
0 4 0

◆ Step 2. Get the dimensions of the matrix A.


In[ ]:= dim = Dimensions[A]
Out[ ]=

{4, 3}

In[ ]:= n = dim〚1〛;


m = dim〚2〛;

◆ Step 3. Check the condition (c) of the given theorem and compare the number of rows
and columns.
In[ ]:= n>m
Out[ ]=

True

Since n = 4 > m = 3, T is not onto.
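◆ Condition (b) of Theorem 12.9 can also be tested directly: T is onto exactly when A has a pivot position in every row, i.e., when the rank of A equals n. A one-line sketch with the matrix A and n defined above:

(* rank < n means the columns do not span R^4, so T is not onto *)
MatrixRank[A]  n  (* returns False, since MatrixRank[A] = 3 < 4 *)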

Example 12.7: Geometry of Linear Transformations


Every linear transformation of the plane has a geometric effect and can be used to change the
position, orientation, or shape of geometric objects. This section discusses the common types
of geometric linear transformations: reflections, rotations, shears, dilations, and projections.

◆ Define the vector u.


In[ ]:= ClearAll["Global`*"]
u = {5, 4}; u // MatrixForm
Out[ ]//MatrixForm=
5
4

Reflection Across the x-Axis


◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{1, 0}, {0, - 1}}; A // MatrixForm
Out[ ]//MatrixForm=

1 0
0 -1

◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.u
Out[ ]=

{5, - 4}

◆ Find the reflection of the vector u across the x-axis.


In[ ]:= Rx = T[u]
Out[ ]=

{5, - 4}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Rx1 = {Arrowheads[0.05], Arrow[{{0, 0}, Rx}]};

◆ Show the image of vector u under the reflection across x-axis.


In[ ]:= Show[Graphics[{Thickness[0.008], Black, Dashed, Line[{{0, 0}, {5.2, 0}}], Dashing[None],
Blue, u1, RGBColor[1, 0, 0.4], Rx1, Red, PointSize[0.02], Point[u], Point[Rx], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], {4.5, 4.3}],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], {4.5, - 4.3}]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
PlotRange  {{- 0.5, 5.5}, {- 5, 5}},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its reflection u' (red) across the x-axis]

Reflection Across the y-Axis


◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{- 1, 0}, {0, 1}}; A // MatrixForm
Out[ ]//MatrixForm=

-1 0
0 1

◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.u
Out[ ]=

{- 5, 4}

◆ Find the reflection of the vector u across the y-axis.


In[ ]:= Ry = T[u]
Out[ ]=

{- 5, 4}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Ry1 = {Arrowheads[0.05], Arrow[{{0, 0}, Ry}]};

◆ Show the image of vector u under the reflection across y-axis.


In[ ]:= Show[Graphics[{Thickness[0.008], Black, Dashed, Line[{{0, 0}, {0, 4.5}}], Dashing[None],
Blue, u1, RGBColor[1, 0, 0.4], Ry1, Red, PointSize[0.02], Point[u], Point[Ry], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], {4.5, 4.3}],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], {- 4.5, 4.3}]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
PlotRange  {{- 5.5, 5.5}, {- 0.5, 4.5}},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its reflection u' (red) across the y-axis]


Rotation by Angle θ

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{Cos[θ], - Sin[θ]}, {Sin[θ], Cos[θ]}}; A // MatrixForm
Out[ ]//MatrixForm=

Cos[θ] - Sin[θ]
Sin[θ] Cos[θ]

◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=

{x Cos[θ] - y Sin[θ], y Cos[θ] + x Sin[θ]}

◆ Find the rotation of the vector u by angle θ = 25°.


In[ ]:= θ = 25 Degree;
Rθ = T[u]
Out[ ]=

{5 Cos[25 °] - 4 Sin[25 °], 4 Cos[25 °] + 5 Sin[25 °]}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Rθ1 = {Arrowheads[0.05], Arrow[{{0, 0}, Rθ}]};

◆ Show the image of vector u under the rotation by angle θ.


In[ ]:= ShowGraphicsThickness[0.008], Blue, u1, RGBColor[1, 0, 0.4], Rθ1,


Red, PointSize[0.02], Point[u], Point[Rθ], Black, Thickness[0.005],
Arrow[BezierCurve[{{1, 0.8}, {1.2, 1.8}, {0.7, 1.4}}]],
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], u + 0.3],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], {2.6, 6.1}],
TextStyle"θ=25o ", Bold, FontFamily  "Times", FontSize  14, {1.4, 1.8},
GridLines  Automatic, Axes  True, AxesLabel  {"x", "y"},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio
Out[ ]=

[Plot: the vector u (blue) and its image u' (red) rotated by θ = 25°]
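◆ For rotations, the Wolfram Language also provides the built-in RotationMatrix, which produces exactly the (counterclockwise) matrix used above; a quick consistency check, assuming the definitions in this subsection:

(* the built-in rotation matrix agrees with A *)
RotationMatrix[θ]  A  (* returns True *)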

Vertical Shear Transformation

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{1, 0}, {v, 1}}; A // MatrixForm
Out[ ]//MatrixForm=

1 0
v 1

◆ where v is a vertical shear factor.


◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=

{x, v x + y}

◆ Find the vertical shear of the vector u by a factor of 3.


In[ ]:= v = 3;
vSht = T[u]
Out[ ]=

{5, 19}


◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
vSht1 = {Arrowheads[0.05], Arrow[{{0, 0}, vSht}]};

◆ Show the image of vector u under the vertical shearing.


In[ ]:= Show[Graphics[{Thickness[0.008], Blue, u1, RGBColor[1, 0, 0.4],
vSht1, Red, PointSize[0.02], Point[u], Point[vSht], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], u + 0.3],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], vSht + 0.3]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its image u' (red) under the vertical shear]

Horizontal Shear Transformation

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{1, h}, {0, 1}}; A // MatrixForm
Out[ ]//MatrixForm=

1 h
0 1

◆ where h is a horizontal shear factor.


◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=

{x + h y, y}

◆ Find the horizontal shear of the vector u by a factor of 2.


In[ ]:= h = 2;
hSht = T[u]
Out[ ]=

{13, 4}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
hSht1 = {Arrowheads[0.05], Arrow[{{0, 0}, hSht}]};

◆ Show the image of vector u under the horizontal shearing.


In[ ]:= Show[Graphics[{Thickness[0.008], Blue, u1, RGBColor[1, 0, 0.4],
hSht1, Red, PointSize[0.02], Point[u], Point[hSht], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], {5.7, 4.3}],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], {13.7, 4.3}]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its image u' (red) under the horizontal shear]

Dilation

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{d, 0}, {0, d}}; A // MatrixForm
Out[ ]//MatrixForm=

d 0
0 d

◆ where d is a scale factor which determines how much larger or smaller the image will be
compared to the original geometric object.
◆ Define the corresponding linear transformation using the definition T (x) = Ax.


In[ ]:= T[{x_, y_}] = A.{x, y}


Out[ ]=

{d x, d y}

◆ Find the dilation of the vector u by a factor of 1.5.


In[ ]:= d = 1.5;
Du = T[u]
Out[ ]=

{7.5, 6.}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Du1 = {Arrowheads[0.05], Arrow[{{0, 0}, Du}]};

◆ Show the image of vector u under the dilation.


In[ ]:= Show[Graphics[{Thickness[0.008], RGBColor[1, 0, 0.4],
Du1, Blue, u1, Red, PointSize[0.02], Point[u], Point[Du], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], {5.1, 4.5}],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], {7.6, 6.5}]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its dilated image u' (red)]
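◆ Dilations likewise have a built-in counterpart, ScalingMatrix; a quick check, assuming the definitions in this subsection (d = 1.5):

(* the built-in scaling matrix with equal factors reproduces A *)
ScalingMatrix[{d, d}]  A  (* returns True *)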

Projection onto the x-Axis

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{1, 0}, {0, 0}}; A // MatrixForm
Out[ ]//MatrixForm=

1 0
0 0


◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=

{x, 0}

◆ Find the projection of the vector u onto the x-axis.


In[ ]:= Px = T[u]
Out[ ]=

{5, 0}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Px1 = {Arrowheads[0.05], Arrow[{{0, 0}, Px}]};

◆ Show the image of vector u under the projection onto the x-axis.
In[ ]:= Show[Graphics[{Thickness[0.008], Dashed, Line[{{5, 0}, {5, 4}}], Dashing[None], Blue, u1,
RGBColor[1, 0, 0.4], Px1, Red, PointSize[0.02], Point[u], Point[Px], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], u + 0.3],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], Px + 0.3]},
GridLines  Automatic], Axes  True, AxesLabel  {"x", "y"},
LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its projection u' (red) onto the x-axis]

Projection onto the y-Axis

◆ Define the given geometric transformation with the matrix A.


In[ ]:= A = {{0, 0}, {0, 1}}; A // MatrixForm
Out[ ]//MatrixForm=

0 0
0 1


◆ Define the corresponding linear transformation using the definition T (x) = Ax.
In[ ]:= T[{x_, y_}] = A.{x, y}
Out[ ]=

{0, y}

◆ Find the projection of the vector u onto the y-axis.


In[ ]:= Py = T[u]
Out[ ]=

{0, 4}

◆ Represent the vector u and its image as an arrow.


In[ ]:= u1 = {Arrowheads[0.05], Arrow[{{0, 0}, u}]};
Py1 = {Arrowheads[0.05], Arrow[{{0, 0}, Py}]};

◆ Show the image of vector u under the projection onto the y-axis.
In[ ]:= Show[Graphics[{Thickness[0.008], Black, Dashed, Line[{{0, 4}, {5, 4}}], Dashing[None],
Blue, u1, RGBColor[1, 0, 0.4], Py1, Red, PointSize[0.02], Point[u], Point[Py], Black,
Text[Style["u", Bold, FontFamily  "Times", FontSize  16], u + 0.3],
Text[Style["u'", Bold, FontFamily  "Times", FontSize  16], Py + 0.3]},
GridLines  Automatic], Axes  True, PlotRange  {{- 0.5, 5.5}, {- 0.5, 4.5}},
AxesLabel  {"x", "y"}, LabelStyle  Directive[FontFamily  "Times", FontSize  14, Black],
ImageSize  360, AspectRatio  1 / GoldenRatio]
Out[ ]=

[Plot: the vector u (blue) and its projection u' (red) onto the y-axis]

In[ ]:=

Summary
After completing this chapter, you should be able to
◼ analyze vectors, simple vector operations, and geometry of vectors in Mathematica.
◼ analyze span and linear independence of vectors in Mathematica.


◼ analyze the dot product of two vectors and related applications in Mathematica.
◼ analyze linear transformations in Mathematica.
◼ learn and use information, tools, and technology to solve problems.

Week 13: Linear Systems of ODEs
How to solve a system of differential equations?

Table of Contents
1. Homogeneous First-Order Linear System of ODEs with Initial Condition
1.1. Method 1: Separation of Variables
1.2. Method 2: Laplace Transforms
1.3. Method 3: Eigenvalues and Eigenvectors
2. Summary

Commands list
◼ Integrate
◼ Solve
◼ DSolve
◼ LaplaceTransform
◼ InverseLaplaceTransform
◼ RowReduce
◼ Transpose
◼ Join
◼ Inverse
◼ CharacteristicPolynomial
◼ Eigenvalues
◼ Eigenvectors
◼ Eigensystem
◼ Wronskian
◼ DiagonalizableMatrixQ

Prerequisite: Eigenvalues and Eigenvectors


The Wolfram Language includes built-in functions that are helpful in solving systems of
differential equations. This section discusses three methods for solving homogeneous
first-order linear systems of ODEs with initial conditions. It is therefore recommended to
read the sections for Weeks 1, 6, and 11 of this guidebook to learn more about these methods.

Week 1 | 1st Order ODEs


Week 6 | Laplace Transforms-2 (Solving ODEs)
Week 11 | Eigenvalues and Eigenvectors

Homogeneous First-Order Linear System of ODEs with IVP


Example 13.1: Consider the reaction network of two irreversible (one-way), first-order
reactions in series:

A ⟶[k1] B ⟶[k2] C

Suppose at time t = 0, we have initial conditions [A] = [A0], [B] = 0, [C ] = 0, where [A], [B],
and [C] denote the concentrations of species A, B, and C, respectively. Using the Guldberg-
Waage form of the reaction rates to describe the network A ⟶ B ⟶ C gives, at constant
volume:

d[A]/dt = -k1 [A]
d[B]/dt = k1 [A] - k2 [B]
d[C]/dt = k2 [B]
Solve the above rate equations (system of ODEs) to determine the concentrations [A], [B], and
[C] as a function of time.
.

Solution: Instead of using notations like [A], [B], and [C], let us introduce y1 = [A], y2 = [B],
and y3 = [C] and rewrite the system of ODEs as

dy1/dt = y1′ = -k1 y1
dy2/dt = y2′ = k1 y1 - k2 y2
dy3/dt = y3′ = k2 y2
with initial conditions y1(0) = [A0], y2(0) = 0, and y3(0) = 0.

Method 1: Separation of Variables


Since the first ODE is separable, we solve it by the method of separation of variables.
◆ Step 1. The first ODE is separable: y1^(-1) dy1 = -k1 dt.
In[ ]:= ClearAll["Global`*"]
expr = y1 '[t] + k1 * y1[t]
Out[ ]=

k1 y1[t] + y1′ [t]

◆ Step 2. Integrate the left-hand-side with respect to y1.


In[ ]:= LHS = Integratey1-1 , y1
Out[ ]=

Log[y1]

◆ Step 3. Integrate the right-hand-side with respect to t.


In[ ]:= RHS = Integrate[- k1, t]
Out[ ]=

- k1 t

◆ Step 4. By integration we obtain ln y1 = -k1 t + C. Solve the expression to get the general
solution to the ODE.
In[ ]:= Solve[LHS  RHS + C, y1, Reals]
Out[ ]=

y1  C-k1 t 

In[ ]:= y1genSoln = C * Exp[- k1 * t]


Out[ ]=

C -k1 t

◆ Step 5. Use initial value condition y1(0) = A0 to find the particular solution.
In[ ]:= y10 = y1genSoln /. t  0
Out[ ]=

◆ Solve for the arbitrary constant.


In[ ]:= Solve[y10  A0, C]


Out[ ]=

{{C  A0}}

◆ Substitute the arbitrary constant to the general solution.


In[ ]:= y1partSol = y1genSoln /. C  A0;
y1 [t]  y1partSol
Out[ ]=

y1 [t]  A0 -k1 t

◆ Step 6. Verify the obtained solution.


In[ ]:= dsol1 = DSolve[{expr  0, y1[0]  A0}, y1[t], t]
Out[ ]=

y1[t]  A0 -k1 t 

In[ ]:= FullSimplifyy1partSol  A0 -k1 t 


Out[ ]=

True
.

Now we move to the second ODE. Substituting the solution y1(t) into the second ODE,
y2′ = k1 y1 - k2 y2, we get a non-homogeneous linear ODE of first order.
.

◆ Step 1. Define the second ODE and substitute the solution y1(t ).
In[ ]:= expr2 = y2 '[t] - k1 * y1[t] + k2 * y2[t]
Out[ ]=

- k1 y1[t] + k2 y2[t] + y2′ [t]

In[ ]:= expr2 = expr2 /. y1[t]  A0 * Exp[- k1 * t]


Out[ ]=

- A0 -k1 t k1 + k2 y2[t] + y2′ [t]

◆ Now we have a non-homogeneous linear ODE of the form y' + p(t ) y = r(t ).
◆ Step 2. Define the p(t) and r(t).
In[ ]:= p = k2;
r = A0 * Exp[- k1 * t] * k1;

◆ Step 3. Calculate the integrating factor h as: h = ∫ p(t) dt.


In[ ]:= h = Integrate[p, t]
Out[ ]=

k2 t

◆ Step 4. Find the general solution to the given ODE: y(t) = e^(-h) ∫ e^h r dt + C e^(-h).


In[ ]:= y2genSol = Exp[- h] * Integrate[Exp[h] * r, t] + Exp[- h] * C


Out[ ]=

A0 -k2 t+(-k1+k2) t k1
C -k2 t +
- k1 + k2

◆ Step 5. Find the particular solution by the initial condition y2(0) = 0.


In[ ]:= y20 = y2genSol /. t  0
Out[ ]=

C + (A0 k1)/(-k1 + k2)

◆ Solve for the arbitrary constant.


In[ ]:= Solve[y20  0, C]
Out[ ]=

{{C  (A0 k1)/(k1 - k2)}}

◆ Substitute the arbitrary constant to the general solution.


In[ ]:= y2partSol = y2genSol /. C  (k1/(k1 - k2)) A0;
y2[t]  FullSimplify[y2partSol]
Out[ ]=

A0 -k2 t - 1 + (-k1+k2) t  k1
y2 [t] 
- k1 + k2

◆ Step 6. Verify the obtained solution.


In[ ]:= dsol2 = DSolve[{expr2  0, y2[0]  0}, y2[t], t]
Out[ ]=

A0 -k2 t - 1 + (-k1+k2) t  k1
y2[t]  
- k1 + k2

A0 -k2 t - 1 + (-k1+k2) t  k1
In[ ]:= FullSimplifyy2partSol  
- k1 + k2
Out[ ]=

True
.

Substituting the solution y2(t) into the third ODE, y3′ = k2 y2, we again get a separable
equation which can be solved by separation of variables.
.

◆ Step 1. Define the third ODE and substitute the solution y2(t ).
In[ ]:= expr3 = y3 '[t] - k2 * y2[t]
Out[ ]=

- k2 y2[t] + y3′ [t]


In[ ]:= expr3 = expr3 /. y2[t]  (A0 k1 Exp[-k2 t] (-1 + Exp[(-k1 + k2) t]))/(-k1 + k2)
Out[ ]=

A0 -k2 t - 1 + -k1 t+k2 t  k1 k2


- + y3′ [t]
- k1 + k2

In[ ]:= expr3 // FullSimplify


Out[ ]=

A0 -k1 t - -k2 t  k1 k2
+ y3′ [t]
k1 - k2

A0 e-k1 t -e-k2 t  k1 k2
◆ Now we have a separable equation: dy3 = - dt.
k1-k2
.

◆ Step 2. Integrate the left-hand-side with respect to y3.


In[ ]:= LHS = Integrate[1, y3]
Out[ ]=

y3

◆ Step 3. Integrate the right-hand-side with respect to t.


In[ ]:= RHS = Integrate[-(A0 (Exp[-k1 t] - Exp[-k2 t]) k1 k2)/(k1 - k2), t]
Out[ ]=
-k1 t -k2 t
A0 k1 - +  k2
k1 k2
-
k1 - k2

In[ ]:= RHS = FullSimplify[RHS]


Out[ ]=

A0 -k2 t k1 - -k1 t k2


-
k1 - k2

A0e-k2 t k1-e-k1 t k2


◆ Step 4. By integration we got: y3 = - + C. Solve the expression for
k1-k2
y3 to get the general solution to ODE.
In[ ]:= Solve[LHS  RHS + C, y3]
Out[ ]=

A0 -k2 t k1 - -k1 t k2


y3  C - 
k1 - k2

In[ ]:= y3genSoln = C - (A0 (Exp[-k2 t] k1 - Exp[-k1 t] k2))/(k1 - k2)
Out[ ]=

C - (A0 (E^(-k2 t) k1 - E^(-k1 t) k2))/(k1 - k2)


◆ Step 5. Use initial value condition y3(0) = 0 to find the particular solution.
In[ ]:= y30 = y3genSoln /. t  0
Out[ ]=

- A0 + C

◆ Solve for the arbitrary constant.


In[ ]:= Solve[y30  0, C]
Out[ ]=

{{C  A0}}

◆ Substitute the arbitrary constant to the general solution.


In[ ]:= y3partSol = y3genSoln /. C  A0;
y3  y3partSol
Out[ ]=

A0 -k2 t k1 - -k1 t k2


y3  A0 -
k1 - k2

◆ Step 6. Verify the obtained solution.


In[ ]:= dsol3 = FullSimplify[DSolve[{expr3  0, y3[0]  0}, y3[t], t]]
Out[ ]=

A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


y3[t]  
k1 - k2

A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


In[ ]:= FullSimplifyy3partSol  
k1 - k2
Out[ ]=

True
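With all three concentrations in hand, it is natural to visualize the kinetics. A short sketch, assuming illustrative parameter values A0 = 1, k1 = 1, and k2 = 0.5 (chosen here only for the plot, not part of the original problem):

(* plot [A], [B], [C] for sample parameter values *)
With[{A0 = 1, k1 = 1, k2 = 0.5},
 Plot[{A0 E^(-k1 t),
   (A0 k1 (E^(-k1 t) - E^(-k2 t)))/(k2 - k1),
   A0 (1 - (E^(-k2 t) k1 - E^(-k1 t) k2)/(k1 - k2))}, {t, 0, 10},
  PlotLegends  {"[A]", "[B]", "[C]"}, AxesLabel  {"t", "concentration"}]]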

Method 2: Laplace Transform


Now let's solve the system of ODEs using Laplace transforms.

dy1/dt = y1′ = -k1 y1
dy2/dt = y2′ = k1 y1 - k2 y2
dy3/dt = y3′ = k2 y2

with initial conditions y1(0) = [A0], y2(0) = 0, and y3(0) = 0.


◆ Step 1.1. Define the first ODE as an equation and its initial condition as a substitution.


In[ ]:= ClearAll["Global`*"]


ode1 = y1 '[t]  - k1 * y1[t]
Out[ ]=

y1′ [t]  - k1 y1[t]

In[ ]:= IC1 = {y1[0]  A0}


Out[ ]=

{y1[0]  A0}

◆ Step 2.1. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT1 = LaplaceTransform[ode1, t, s] /. IC1
Out[ ]=

- A0 + s LaplaceTransform[y1[t], t, s]  - k1 LaplaceTransform[y1[t], t, s]

◆ Replace ℒ {y1} with Y1.


In[ ]:= eqnforY1 = LT1 /. LaplaceTransform[y1[t], t, s]  Y1[s]
Out[ ]=

- A0 + s Y1[s]  - k1 Y1[s]

◆ Step 1.2. Define the second ODE as an equation and its initial condition as a substitution.
In[ ]:= ode2 = y2 '[t]  k1 * y1[t] - k2 * y2[t]
Out[ ]=

y2′ [t]  k1 y1[t] - k2 y2[t]

In[ ]:= IC2 = {y2[0]  0}


Out[ ]=

{y2[0]  0}

◆ Step 2.2. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT2 = LaplaceTransform[ode2, t, s] /. IC2
Out[ ]=

s LaplaceTransform[y2[t], t, s] 
k1 LaplaceTransform[y1[t], t, s] - k2 LaplaceTransform[y2[t], t, s]

◆ Replace ℒ {y1} with Y1 and ℒ {y2} with Y2.


In[ ]:= eqnforY2 =
LT2 /. {LaplaceTransform[y1[t], t, s]  Y1[s], LaplaceTransform[y2[t], t, s]  Y2[s]}
Out[ ]=

s Y2[s]  k1 Y1[s] - k2 Y2[s]

◆ Step 1.3. Define the third ODE as an equation and its initial condition as a substitution.
In[ ]:= ode3 = y3 '[t]  k2 * y2[t]
Out[ ]=

y3′ [t]  k2 y2[t]


In[ ]:= IC3 = {y3[0]  0}


Out[ ]=

{y3[0]  0}

◆ Step 2.3. Compute the Laplace transforms of both sides of the ODE and substitute the
initial condition.
In[ ]:= LT3 = LaplaceTransform[ode3, t, s] /. IC3
Out[ ]=

s LaplaceTransform[y3[t], t, s]  k2 LaplaceTransform[y2[t], t, s]

◆ Replace ℒ {y2} with Y2 and ℒ {y3} with Y3.


In[ ]:= eqnforY3 =
LT3 /. {LaplaceTransform[y2[t], t, s]  Y2[s], LaplaceTransform[y3[t], t, s]  Y3[s]}
Out[ ]=

s Y3[s]  k2 Y2[s]

◆ Step 3. Solve the obtained algebraic system of equations.


In[ ]:= sys = {eqnforY1, eqnforY2, eqnforY3}; sys // Column
Out[ ]=
- A0 + s Y1[s]  - k1 Y1[s]
s Y2[s]  k1 Y1[s] - k2 Y2[s]
s Y3[s]  k2 Y2[s]

In[ ]:= soln = Solve[sys, {Y1[s], Y2[s], Y3[s] }]


Out[ ]=

{{Y1[s]  A0/(k1 + s), Y2[s]  (A0 k1)/((k1 + s) (k2 + s)), Y3[s]  (A0 k1 k2)/(s (k1 + s) (k2 + s))}}

◆ The Laplace transforms of the solutions are:


In[ ]:= Y1sol[s_] := Y1[s] /. soln〚1, 1〛; Y1sol[s]
Out[ ]=
A0/(k1 + s)

In[ ]:= Y2sol[s_] := Y2[s] /. soln〚1, 2〛; Y2sol[s]


Out[ ]=
(A0 k1)/((k1 + s) (k2 + s))

In[ ]:= Y3sol[s_] := Y3[s] /. soln〚1, 3〛; Y3sol[s]


Out[ ]=
(A0 k1 k2)/(s (k1 + s) (k2 + s))

◆ Step 4. Take the inverse transforms of Y1, Y2, and Y3 to get the solution to the given
system of ODEs.


In[ ]:= y1Soln[t_] = InverseLaplaceTransform[Y1sol[s], s, t]; y 1  y1Soln[t]


Out[ ]=

y1  A0 -k1 t

In[ ]:= y2Soln[t_] = InverseLaplaceTransform[Y2sol[s], s, t]; y 2  y2Soln[t]


Out[ ]=

A0 -k1 t - -k2 t  k1
y2  -
k1 - k2

In[ ]:= y3Soln[t_] = InverseLaplaceTransform[Y3sol[s], s, t]; y 3  FullSimplify[y3Soln[t]]


Out[ ]=

A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


y3 
k1 - k2

◆ Step 5. Verify the obtained results.


In[ ]:= dsol = FullSimplify[
DSolve[{ode1, ode2, ode3, y1[0]  A0, y2[0]  0, y3[0]  0}, {y1[t], y2[t], y3[t]}, t]]
Out[ ]=

A0 -k1 t - -k2 t  k1 A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


y1[t]  A0 -k1 t , y2[t]  - , y3[t]  
k1 - k2 k1 - k2

In[ ]:= FullSimplify[y1Soln[t]  y1[t] /. dsol〚1, 1〛]


Out[ ]=

True

In[ ]:= FullSimplify[y2Soln[t]  y2[t] /. dsol〚1, 2〛]


Out[ ]=

True

In[ ]:= FullSimplify[y3Soln[t]  y3[t] /. dsol〚1, 3〛]


Out[ ]=

True
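Because LaplaceTransform threads over lists, Steps 1-3 can be compressed: transform all three ODEs at once and solve the resulting algebraic system in one call. A condensed sketch, assuming ode1, ode2, ode3, and the initial conditions defined above:

(* transform the whole system at once and solve for the three transforms *)
lts = LaplaceTransform[{ode1, ode2, ode3}, t, s] /.
   {y1[0]  A0, y2[0]  0, y3[0]  0};
Solve[lts /. {LaplaceTransform[y1[t], t, s]  Y1,
   LaplaceTransform[y2[t], t, s]  Y2,
   LaplaceTransform[y3[t], t, s]  Y3}, {Y1, Y2, Y3}]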

Method 3: Eigenvalues and Eigenvectors


The given system of linear first-order ODEs can be expressed as y′ = Ay:

( y1′ )   ( -k1    0   0 ) ( y1 )
( y2′ ) = (  k1  -k2   0 ) ( y2 )
( y3′ )   (   0   k2   0 ) ( y3 )

◆ Step 1. Define the coefficient matrix A.


In[ ]:= ClearAll["Global`*"]


A = {{- k1, 0, 0}, {k1, - k2, 0}, {0, k2, 0}}; A // MatrixForm
Out[ ]//MatrixForm=
- k1 0 0
k1 - k2 0
0 k2 0

◆ Step 2. Determine the eigenvalues of A by calculating the characteristic polynomial,


det (A - λI3).
In[ ]:= Factor[CharacteristicPolynomial[A, λ]]
Out[ ]=

- λ (k1 + λ) (k2 + λ)

◆ The eigenvalues for a matrix A are given by the roots of the characteristic equation.
In[ ]:= Solve[Det[A - λ * IdentityMatrix[3]]  0  0, λ]
Out[ ]=

{{λ  0}, {λ  - k1}, {λ  - k2}}

◆ Results show that the matrix A has three distinct eigenvalues, λ1 = 0, λ2 = - k1, and
λ3 = - k2.
In[ ]:= {λ1, λ2, λ3} = Eigenvalues[A]
Out[ ]=

{0, - k1, - k2}

◆ Step 3. Find the eigenvectors associated with corresponding eigenvalues.


In[ ]:= {u1, u2, u3} = Eigenvectors[A];
{MatrixForm[u1], MatrixForm[u2], MatrixForm[u3]}
Out[ ]=
{(0, 0, 1)^T, ((k1 - k2)/k2, -(k1/k2), 1)^T, (0, -1, 1)^T}

Theorem 13.1: General Solution to the 1st Order Linear System of ODEs.
Suppose that y' = Ay is a first-order linear system of differential equations. If A is an n × n
diagonalizable matrix, then the general solution to the system is given by

y = c1 e^(λ1 t) u1 + ⋯ + cn e^(λn t) un

where u1, ..., un are n linearly independent eigenvectors with associated eigenvalues λ1, ..., λn, and
c1, …, cn are constants.

◆ Step 4. Check the matrix for diagonalizability.


In[ ]:= DiagonalizableMatrixQ[A]


Out[ ]=

True

◆ Step 5. By Theorem 13.1, the corresponding solutions of the differential equations are:
In[ ]:= sol1 = u1 * Exp[λ1 * t]; sol1 // MatrixForm
Out[ ]//MatrixForm=
0
0
1

In[ ]:= sol2 = u2 * Exp[λ2 * t]; sol2 // MatrixForm


Out[ ]//MatrixForm=
-((E^(-k1 t) (-k1 + k2))/k2)
-((E^(-k1 t) k1)/k2)
E^(-k1 t)

In[ ]:= sol3 = u3 * Exp[λ3 * t]; sol3 // MatrixForm


Out[ ]//MatrixForm=
0
-E^(-k2 t)
E^(-k2 t)

◆ Step 6. The Wronskian determinant can be used to check if these functions form a
fundamental solution set:
In[ ]:= Wronskian[{sol1, sol2, sol3}, t]
Out[ ]=

-((k1+k2) t) (- k1 + k2)
k2

◆ Since the Wronskian determinant is not equal to zero for real values of t, these functions
form a fundamental solution set.
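◆ Equivalently, the Wronskian of a set of vector solutions is the determinant of the matrix whose columns are those solutions; a one-line cross-check using sol1, sol2, and sol3 from Step 5:

(* determinant of the fundamental matrix equals the Wronskian above *)
Det[Transpose[{sol1, sol2, sol3}]] // Simplify  (* (E^(-(k1 + k2) t) (k2 - k1))/k2 *)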
◆ Step 7. By Theorem 13.1, the general solution of the system is:
.

y = c1 e^(λ1 t) u1 + c2 e^(λ2 t) u2 + c3 e^(λ3 t) u3
.

In[ ]:= genSol = c1 * sol1 + c2 * sol2 + c3 * sol3;


MatrixForm[{y1 , y2 , y3 }] 
c1 * MatrixForm[sol1] + c2 * MatrixForm[sol2] + c3 * MatrixForm[sol3]
Out[ ]=
(y1, y2, y3)  c1 (0, 0, 1) + c3 (0, -E^(-k2 t), E^(-k2 t)) + c2 (-((E^(-k1 t) (-k1 + k2))/k2), -((E^(-k1 t) k1)/k2), E^(-k1 t))


◆ Step 8. Find the particular solution with initial conditions: y1(0) = A0, y2(0) = 0,
y3(0) = 0.
◆ Define the initial-condition vector (y1(0), y2(0), y3(0)) = (A0, 0, 0):
In[ ]:= IC = {{A0}, {0}, {0}}; IC // MatrixForm
Out[ ]//MatrixForm=
A0
0
0

◆ Substitute the value t0 = 0:


In[ ]:= sol1 = {{sol1〚1〛}, {sol1〚2〛}, {sol1〚3〛}} /. t  0;
sol1 // MatrixForm
Out[ ]//MatrixForm=
0
0
1

In[ ]:= sol2 = {{sol2〚1〛}, {sol2〚2〛}, {sol2〚3〛}} /. t  0;


sol2 // MatrixForm
Out[ ]//MatrixForm=
-((-k1 + k2)/k2)
-(k1/k2)
1

In[ ]:= sol3 = {{sol3〚1〛}, {sol3〚2〛}, {sol3〚3〛}} /. t  0;


sol3 // MatrixForm
Out[ ]//MatrixForm=
0
-1
1

◆ Define the corresponding augmented matrix of the system with initial conditions.
In[ ]:= AugMat =
Transpose[Join[Transpose[sol1], Transpose[sol2], Transpose[sol3], Transpose[IC]]];
AugMat // MatrixForm
Out[ ]//MatrixForm=
0   -((-k1 + k2)/k2)    0   A0
0   -(k1/k2)           -1   0
1    1                  1   0

◆ Reduce the augmented matrix to row echelon form.


In[ ]:= RowReduce[AugMat] // MatrixForm


Out[ ]//MatrixForm=
1  0  0  A0
0  1  0  (A0 k2)/(k1 - k2)
0  0  1  -((A0 k1)/(k1 - k2))

◆ Perform a back substitution and find the arbitrary constants.


In[ ]:= c3 = -(k1/(k1 - k2)) A0;
c2 = (k2/(k1 - k2)) A0;
c1 = A0;

◆ Obtain the particular solution.


In[ ]:= partSol = Simplify[genSol /. {c1  A0, c2  (k2/(k1 - k2)) A0, c3  -(k1/(k1 - k2)) A0}];
MatrixForm[{y1 , y2 , y3 }]  MatrixForm[partSol]
Out[ ]=

A0 -k1 t
y1
A0 -k1 t --k2 t  k1
y2  - k1-k2
y3 A0 k1--k2 t k1+-1+-k1 t  k2
k1-k2

◆ Or alternatively:
In[ ]:= MatrixForm[{y1 , y2 , y3 }]  A0 * MatrixForm[partSol /. A0  1]
Out[ ]=

(y1, y2, y3)  A0 (E^(-k1 t), -(((E^(-k1 t) - E^(-k2 t)) k1)/(k1 - k2)), (k1 - E^(-k2 t) k1 + (-1 + E^(-k1 t)) k2)/(k1 - k2))

◆ Step 9. Verify the solution using DSolve function.


In[ ]:= ClearAll["Global`*"]
dsol = FullSimplify[DSolve[{y1 '[t]  - k1 * y1[t], y2 '[t]  k1 * y1[t] - k2 * y2[t],
y3 '[t]  k2 * y2[t], y1[0]  A0, y2[0]  0, y3[0]  0}, {y1[t], y2[t], y3[t]}, t]]
Out[ ]=

A0 -k1 t - -k2 t  k1 A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


y1[t]  A0 -k1 t , y2[t]  - , y3[t]  
k1 - k2 k1 - k2
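A closely related shortcut: for a constant-coefficient system y′ = Ay with y(0) = y0, the solution is y(t) = e^(At) y0, and the matrix exponential is available directly as MatrixExp. A short sketch (the matrix A is redefined here because of the ClearAll in Step 9):

(* matrix-exponential solution y(t) = MatrixExp[A t].y(0) *)
A = {{-k1, 0, 0}, {k1, -k2, 0}, {0, k2, 0}};
FullSimplify[MatrixExp[A*t].{A0, 0, 0}]  (* reproduces y1[t], y2[t], y3[t] above *)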

Another technique for solving initial-value problems, together with the example solution below, was taken from the book “Differential Equations with Mathematica, 5th Ed.” by Martha L. Abell and James P. Braselton, Chapter 6.


Theorem 13.2: Solving Initial-Value Problems

Let Φ(t) be a fundamental matrix for the system of equations X' (t) = A(t) X(t), defined as follows:

Φ(t) = ( e^(λ1 t) u1  e^(λ2 t) u2  ⋯  e^(λn t) un )

Then a general solution is X(t) = Φ(t) C, where C is a constant vector. If the initial condition
X(0) = X0 is given, then X(0) = Φ(0) C, so

X0 = Φ(0) C,

C = Φ^(-1)(0) X0.

Therefore, the solution to the initial-value problem is X(t) = Φ(t) Φ^(-1)(0) X0.

◆ Step 1. Define the coefficient matrix A.


In[ ]:= ClearAll["Global`*"]
A = {{- k1, 0, 0}, {k1, - k2, 0}, {0, k2, 0}}; A // MatrixForm
Out[ ]//MatrixForm=

- k1 0 0
k1 - k2 0
0 k2 0

◆ Step 2. Compute the eigenvalues and corresponding eigenvectors of A.


In[ ]:= s1 = Eigensystem[A]
Out[ ]=
{{0, -k1, -k2}, {{0, 0, 1}, {-((-k1 + k2)/k2), -(k1/k2), 1}, {0, -1, 1}}}

◆ Results show that the eigenvalues are λ1 = 0, λ2 = -k1, and λ3 = -k2 with corresponding
eigenvectors, respectively:
.
u1 = (0, 0, 1)^T,  u2 = ((k1 - k2)/k2, -k1/k2, 1)^T,  u3 = (0, -1, 1)^T.
.

◆ Step 3. Define the fundamental matrix as follows: Φ(t) = ( e^(λ1 t) u1  e^(λ2 t) u2  e^(λ3 t) u3 ).


In[ ]:= phi[t_] = {s1〚2, 1〛 * Exp[s1〚1, 1〛 * t],


s1〚2, 2〛 * Exp[s1〚1, 2〛 * t], s1〚2, 3〛 * Exp[s1〚1, 3〛 * t]} // Transpose;
phi[t] // MatrixForm
Out[ ]//MatrixForm=
0   -((E^(-k1 t) (-k1 + k2))/k2)    0
0   -((E^(-k1 t) k1)/k2)           -E^(-k2 t)
1    E^(-k1 t)                      E^(-k2 t)

◆ Step 4. Calculate Φ^(-1)(t) with the built-in Inverse function.


In[ ]:= Inverse[phi[t]] // MatrixForm
Out[ ]//MatrixForm=
1 1 1
-k2 t

- -k1 t-k2 t k1
0 0
-k1 t-k2 t -
k2
-k1 t k1
-k1 t k1 --k1 t +
k2
-k1 t-k2 t k1 -k1 t-k2 t k1
0
-k1 t-k2 t -  k2 -k1 t-k2 t -
k2 k2

◆ Step 5. Calculate the solution to the initial-value problem.


In[ ]:= IC = {A0, 0, 0};

In[ ]:= sol = Dot[phi[t], Inverse[phi[0]], IC] // Simplify // MatrixForm


Out[ ]//MatrixForm=

A0 -k1 t
A0 -k1 t --k2 t  k1
- k1-k2
-k2 t
A0 k1- k1+-1+-k1 t  k2
k1-k2

◆ Step 6. Verify the solution using DSolve function.


In[ ]:= ClearAll["Global`*"]
dsol = FullSimplify[DSolve[{y1 '[t]  - k1 * y1[t], y2 '[t]  k1 * y1[t] - k2 * y2[t],
y3 '[t]  k2 * y2[t], y1[0]  A0, y2[0]  0, y3[0]  0}, {y1[t], y2[t], y3[t]}, t]]
Out[ ]=

A0 -k1 t - -k2 t  k1 A0 k1 - -k2 t k1 + - 1 + -k1 t  k2


-k1 t
y1[t]  A0  , y2[t]  - , y3[t]  
k1 - k2 k1 - k2

Summary
After completing this chapter, you should be able to
◼ improve problem-solving skills by practicing different methods.
◼ develop SOPs and streamline your workflow once you are familiar with the methods.
◼ always check/verify your solutions for quality assurance.


◼ Remember, “to learn and not to do is really not to learn. To know and not to do is
really not to know.” - Stephen R. Covey.

References and Suggested Readings
Table of Contents
1. Mathematica-Related Books
2. Wolfram U Interactive Courses
3. Books on Engineering Mathematics (ODE & Linear Algebra)

Mathematica-Related Books
◆ An Elementary Introduction to the Wolfram Language, 2nd Ed., by Stephen Wolfram, Wolfram Media, Inc., 2017. URL: https://www.wolfram.com/language/elementary-introduction/2nd-ed/index.html
.

◆ The Student’s Introduction to Mathematica and the Wolfram Language, 3rd Ed., by Bruce F. Torrence and Eve A. Torrence, Cambridge University Press, 2019. URL: https://doi.org/10.1017/9781108290937
.

◆ Hands-on Start to Wolfram Mathematica and Programming with the Wolfram Language,
2nd Ed., by Cliff Hastings, Kelvin Mischo, Michael Morrison, Wolfram Media, Inc.,
2016
.

◆ Differential Equations with Mathematica, 5th Ed., by Martha L. Abell and James P.
Braselton, Academic Press, 2022. URL: https://doi.org/10.1016/C2020-0-00005-8
.

◆ Advanced Engineering Mathematics with Mathematica, by Edward B. Magrab, CRC


Press, 2020
.

◆ Wolfram Documentation Center: Symbolic Differential Equation Solving. URL: https://reference.wolfram.com/language/tutorial/DSolveOverview.html
.

◆ Wolfram Documentation Center: Advanced Numerical Differential Equation Solving in the Wolfram Language. URL: https://reference.wolfram.com/language/tutorial/NDSolveOverview.html
.

◆ Wolfram Documentation Center: Linear Algebra. URL: https://reference.wolfram.com/language/tutorial/LinearAlgebra.html
.


◆ Wolfram Documentation Center: Lists. URL: https://reference.wolfram.com/language/tutorial/Lists.html
.

Wolfram U Interactive Courses


◆ Wolfram U - FULL INTERACTIVE COURSE: An Elementary Introduction to the Wolfram Language. URL: https://www.wolfram.com/wolfram-u/an-elementary-introduction-to-the-wolfram-language/
.

◆ Wolfram U - FULL INTERACTIVE COURSE: Introduction to Notebooks. URL: https://www.wolfram.com/wolfram-u/introduction-to-notebooks/
.

◆ Wolfram U - FULL INTERACTIVE COURSE: Introduction to Calculus. URL: https://www.wolfram.com/wolfram-u/introduction-to-calculus/
.

◆ Wolfram U - FULL INTERACTIVE COURSE: Introduction to Differential Equations. URL: https://www.wolfram.com/wolfram-u/introduction-to-differential-equations/
.

◆ Wolfram U - FULL INTERACTIVE COURSE: Introduction to Linear Algebra.


URL: https://www.wolfram.com/wolfram-u/introduction-to-linear-algebra/
.

Books on Engineering Mathematics (ODE & Linear Algebra)


◆ Kreyszig, Erwin, Advanced engineering mathematics, 10th Ed., International ed., John
Wiley & Sons, Inc, 2011
.

◆ Holt, Jeffrey, Linear algebra with applications, 2nd Ed., W.H. Freeman and Company,
2017
.

◆ Paul's Online Math Notes - Differential Equations, by Paul Dawkins, 2018. URL: https://tutorial.math.lamar.edu/Classes/DE/DE.aspx
.

◆ The LibreTexts libraries - Differential Equations. URL: https://math.libretexts.org/Bookshelves/Differential_Equations
.

◆ The LibreTexts libraries - Linear Algebra. URL: https://math.libretexts.org/Bookshelves/Linear_Algebra
.


◆ Interactive Linear Algebra, by Dan Margalit and Joseph Rabinoff, Georgia Institute of
Technology, 2019. URL: https://textbooks.math.gatech.edu/ila/
.

◆ MIT OpenCourseWare - Linear Algebra (Instructor: Prof. Gilbert Strang), 2010. URL:
https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010/
.

◆ MIT OpenCourseWare - Differential Equations and Linear Algebra (Instructors: Prof. Gilbert Strang and Dr. Cleve Moler), 2015. URL: https://ocw.mit.edu/courses/res-18-009-learn-differential-equations-up-close-with-gilbert-strang-and-cleve-moler-fall-2015/
.

◆ Differential Equations and Linear Algebra, by Gilbert Strang, Wellesley-Cambridge


Press, 2014. URL: https://math.mit.edu/~gs/dela/
.

