MCSC 202 Numerical Methods


NUMERICAL METHODS

(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Errors in numerical computation
1
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
2
Errors
Outline

• Introduction to numerical error

• Exact and approximated numbers

• Significant digits (Significant figures), Rounding off

• Error, absolute, relative, error limit

• Examples

3
Chapter-1

Introduction
to
Errors in numerical computation

4
Errors in numerical computation Chap-1:Error
• While using numerical methods, it is impossible to
ignore the numerical errors

• Errors come in a variety of forms and sizes

• Some errors are avoidable and some are not. For
example, data conversion and round-off errors cannot
be avoided, but human errors can be eliminated
completely

• Although certain errors cannot be eliminated


completely, we must at least know the bounds of these
errors to make use of our final solutions
5
Errors in numerical computation Chap 1: Errors
• It is therefore essential to know how errors arise, how
they grow during the numerical process, and how they
affect the accuracy of a solution

• A number of different types of errors occur during the
process of numerical computing. All these errors
contribute to the total error in the final result

• A classification of the errors encountered in a numerical
process is given in the figure below, which shows that every
stage of the numerical computing cycle contributes to
the total error
6
Errors in numerical computation Chap 1: Errors
Classification of Errors

Total error
• Modeling error — caused by missing information
• Inherent error — data errors (measuring method) and conversion errors (computing machine)
• Numerical error — round-off errors (computing machine) and truncation errors (numerical method)
• Blunders — caused by human imperfection
7
Chapter-1

Exact and Approximation of


Numbers

8
Exact and Approximation of Numbers Chap 1: Errors
• There are two kinds of numbers, exact and approximate
numbers

• For example, numbers like 1, 2, 3, …, 1/2, 3/2, √2, π, e, etc. are
exact numbers

• Approximate numbers are those that represent the numbers


to a certain degree of accuracy

• Some numbers cannot be represented exactly in a given
number of decimal digits. For example, the quantity π is equal
to 3.1415926535897932384642…

• π can never be represented exactly. We may write it as 3.14,
3.14159, or 3.141592653. In all cases we have omitted some
digits
9
Exact and Approximation of Numbers Chap 1: Errors
Continue ...
• Transcendental numbers like π and e, and irrational
numbers like √2 and √5, do not have a terminating
representation

• Some rational numbers also have a repeating pattern. For
instance, the rational number 2/7 = 0.285714285714…,
which also cannot be written down exactly

10
Chapter-1

Significant Digits or Significant


Figures

11
Significant Digits or Significant Figures Chap 1: Errors
• The concept of significant digits has been introduced primarily to
indicate the accuracy of the numerical values

• The digits that are used to express a number are called the
significant digits or significant figures

• Thus, the numbers 3.1416, 0.36567 and 4.0345 each contain five
significant digits

• The number 0.00345 has three significant digits, viz. 3, 4, and 5,
since the zeros serve only to fix the position of the decimal point

• However, in the number 453,000 the number of significant digits
is uncertain, whereas the numbers 4.53×10⁵, 4.530×10⁵ and
4.5300×10⁵ have three, four and five significant figures
respectively
12
Significant Digits or Significant Figures Chap 1: Errors
Continue ...
The following statements describe the notion of significant digits:
1. All non-zero digits are significant.

2. All zeros occurring between non-zero digits are significant.

3. Trailing zeros following a decimal point are significant. For example,


3.50, 65.0, and 0.230 have three significant digits.

4. Zeros between the decimal point and the first non-zero digit (leading
zeros) are not significant. For example, the following numbers have only
four significant digits:
0.0001234 (= 1234×10⁻⁷), 0.001234 (= 1234×10⁻⁶), 0.01234 (= 1234×10⁻⁵)

5. When the decimal point is not written, trailing zeros are not
considered to be significant.
13
Significant Digits or Significant Figures Chap 1: Errors
Continue ...
More examples:
• 96.763 has -------- significant digits
Ans: 5
• 0.008472 has ---------significant digits
Ans: 4
• 0.0456000 has ---------significant digits
Ans: 6
• 36 has ----------- significant digits
Ans: 2
• 3600 has ------------ significant digits
Ans: This number has an uncertain number of significant digits.
• 3600.00 has ------------ significant digits.
Ans: 6
14
Chapter-1

Numerical Errors and Rounding off


Error

15
Numerical Errors Chap 1: Errors

• Numerical Errors are introduced during the process of


implementation of a numerical method

• They come in two forms, round-off errors and truncation


errors

• The total numerical error is the summation of these two


errors

• The total error can be reduced by devising suitable
techniques for implementing the solution
16
Round-off Errors
Chap 1: Errors
• Round-off errors occur when a fixed number of digits are
used to represent exact numbers

• Round-off error is introduced at the end of every


arithmetic operation

• Consequently, even though an individual round-off error
could be very small, the cumulative effect of a series of
computations can be very significant.

• It is usual to round-off numbers according to the following


rule:
17
Round-off Errors continue …
Chap 1: Errors
• It is usual to round-off numbers according to the following
rule:
To round off a number to n significant digits, discard all
digits to the right of the nth digit; if the first discarded digit
is
1. greater than 5, round up the last retained digit by 1
2. less than 5, leave the last retained digit unchanged
3. exactly 5, round up the last retained digit by 1 if it is
odd; otherwise, leave it unchanged

The number thus rounded off is said to be correct to n
significant digits 18
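A minimal Python sketch of the rounding rule stated above (the function name and the digit-by-digit approach are illustrative assumptions, not from the text):

```python
from decimal import Decimal

def round_significant(x, n):
    """Round x to n significant digits using the stated rule: first discarded
    digit > 5 rounds up, < 5 keeps the last retained digit, = 5 rounds up
    only when the last retained digit is odd."""
    d = Decimal(str(x))
    sign, digits, exponent = d.as_tuple()
    if len(digits) <= n:
        return float(d)                      # nothing to discard
    retained = list(digits[:n])
    first_discarded = digits[n]
    if first_discarded > 5 or (first_discarded == 5 and retained[-1] % 2 == 1):
        i = n - 1
        retained[i] += 1                     # round up and propagate the carry
        while retained[i] == 10 and i > 0:
            retained[i] = 0
            retained[i - 1] += 1
            i -= 1
        if retained[0] == 10:                # carry out of the leading digit
            retained = [1] + [0] * n
    new_exp = exponent + (len(digits) - n)
    return float(Decimal((sign, tuple(retained), new_exp)))

# The slide's examples, each rounded to four significant digits:
for value in (2.64570, 12.0354, 0.547326, 3.24152):
    print(value, "->", round_significant(value, 4))   # 2.646, 12.04, 0.5473, 3.242
```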
Round-off Errors continue … Chap 1: Errors
Examples: Following number are rounded-off to four
significant digits:
a) 2.64570
Ans: 2.646
b) 12.0354
Ans: 12.04
c) 0.547326
Ans: 0.5473
d) 3.24152
Ans: 3.242

In manual computation, the round-off error can be reduced


by carrying out computations to more significant figures at
each step of the computation 19
Chapter-1

Absolute, Relative and Percentage


Errors

20
Absolute, Relative and Percentage Errors Chap 1: Errors
1. Error: Let 𝑋 denote the true value of a data item and 𝑋1
its approximate value. Then these two quantities are
related as
True value = Approximate value + Error
i.e. 𝑋 = 𝑋1 + 𝐸
or 𝐸 = 𝑋 − 𝑋1

2. Absolute Error: The error may be negative or positive
depending on the values of 𝑋 and 𝑋1. In error analysis, what
is important is the magnitude of the error, not its sign,
and therefore we normally consider what is known
as the absolute error, which is denoted by 𝐸𝐴 and given by
𝐸𝐴 = |𝑋 − 𝑋1| 21
Absolute, Relative and Percentage Errors Chap 1: Errors
Continue ...

3. Relative Error: In many cases, the absolute error may not
reflect the accuracy correctly, as it does not take into account
the order of magnitude of the value under study. In such cases we use the
relative error, which is nothing but the "normalized" absolute error.
The relative error is denoted by 𝐸𝑅 and defined by
𝐸𝑅 = 𝐸𝐴/𝑋 = (𝑋 − 𝑋1)/𝑋 = 1 − 𝑋1/𝑋

4. Percentage Error: The percentage error 𝐸𝑝 is given by
𝐸𝑝 = 𝐸𝑅 × 100%

22
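These definitions translate directly into a small Python sketch (the function name is an assumption for illustration):

```python
import math

def error_measures(X, X1):
    """Return (error, absolute error, relative error, percentage error)
    for a true value X and its approximation X1, as defined above."""
    E = X - X1              # error
    E_A = abs(E)            # absolute error
    E_R = E_A / abs(X)      # relative error ("normalized" absolute error)
    E_p = E_R * 100.0       # percentage error
    return E, E_A, E_R, E_p

# Example: pi approximated by 3.14
print(error_measures(math.pi, 3.14))
```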
Chapter-1

Limiting Absolute Error

23
Limiting Absolute Error Chap 1: Errors
Definition: Let ∆𝑋 > 0 be a number such that
|𝑋 − 𝑋1| ≤ Δ𝑋, i.e. 𝐸𝐴 ≤ Δ𝑋. Then ∆𝑋 is an upper limit on
the magnitude of the absolute error and is said to measure
absolute accuracy.
Similarly, Δ𝑋/|𝑋| ≈ Δ𝑋/|𝑋1| measures the relative accuracy.

Result to Remember: If the number 𝑋 is rounded to 𝑁
decimal places, then the absolute error does not exceed the
amount Δ𝑋 = (1/2) × 10⁻ᴺ

24
Limiting Absolute Error Chap 1: Errors
Continue ...
Example: The number 𝑋 = 1.325 is correct to three
decimal places. Find the limiting absolute error, and the maximum
relative and percentage errors.

Solution: The given number 𝑋 = 1.325 is correct to three
decimal places, so N = 3
Then,
(i) Maximum absolute error Δ𝑋 = (1/2) × 10⁻ᴺ = (1/2) × 10⁻³ = 0.0005
(ii) Maximum relative error = Δ𝑋/𝑋 = 0.0005/1.325 ≈ 0.000377
(iii) Maximum percentage error = (Δ𝑋/𝑋) × 100% ≈ 0.0377% 25
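A quick numerical check of this example in Python, assuming only the "Result to Remember" above (the function name is illustrative):

```python
def limiting_errors(X, N):
    """Limiting absolute, relative and percentage errors for a number X
    that is correct to N decimal places."""
    dX = 0.5 * 10 ** (-N)        # limiting absolute error
    rel = dX / abs(X)            # maximum relative error
    return dX, rel, rel * 100.0  # percentage error

print(limiting_errors(1.325, 3))   # -> (0.0005, 0.000377..., 0.0377... %)
```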
End of Lecture-1

Next
Lecture-2
Error propagation, General Error
Formula 26
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 2
Errors in numerical computation
1
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
2
Errors
Outline

• Error propagation due to arithmetic operations

• General error formula

• Examples

3
Chapter-1

Error Propagation

4
Error Propagation Chap 1: Errors
• Numerical computing involves a number of basic arithmetic
operations (+, −, *, /)

• Therefore, it is not the individual round-off errors that are


important but the final error on the result

• Our main concern is how an error at one point in the


process propagates and how it affects the final error

• In this section, we will discuss the arithmetic of error


propagation and its effect.

5
Error Propagation continue … Chap 1: Errors
Addition and Subtraction:
Consider two numbers 𝑋 = 𝑋1 + 𝐸𝑋 and 𝑌 = 𝑌1 + 𝐸𝑌,
where 𝐸𝑋 and 𝐸𝑌 are the errors in 𝑋1 and 𝑌1 respectively.
Then,
𝑋 + 𝑌 = (𝑋1 + 𝐸𝑋) + (𝑌1 + 𝐸𝑌)
⟹ 𝑋 + 𝑌 = (𝑋1 + 𝑌1) + (𝐸𝑋 + 𝐸𝑌)
(true value = approximate value + error)

Therefore, the total error in addition is
𝑬𝑿+𝒀 = 𝑬𝑿 + 𝑬𝒀

Similarly, the total error in subtraction is
𝑬𝑿−𝒀 = 𝑬𝑿 − 𝑬𝒀
6
Error Propagation Chap 1: Errors
Addition and Subtraction continue…
Generally, we are interested in the magnitude of the
maximum error produced during addition and
subtraction rather than its sign. This is obtained by
taking absolute values on both sides, that is,

|𝐸𝑋±𝑌| = |𝐸𝑋 ± 𝐸𝑌| ≤ |𝐸𝑋| + |𝐸𝑌| (Triangle Inequality)

⟹ |𝑬𝑿±𝒀| ≤ |𝑬𝑿| + |𝑬𝒀|

This implies that the magnitude of the absolute error of a sum
(or difference) is less than or equal to the sum of the
magnitudes of the errors.
7
Error Propagation Chap 1: Errors
Addition and Subtraction continue…

How to add several numbers and find the total error

Example: Find the sum of the following numbers:


1.35265, 2.00468, 1.532, 28.201, 31.00123,
where each is correct to the digits given. Also, find the
total absolute error.

We adopt the following procedures


8
Error Propagation Chap 1: Errors
Addition and Subtraction continue…
How to add several numbers of different absolute accuracies:
The following procedure may be adopted:

1. Isolate the number with the greatest absolute error

2. Round off all other numbers, retaining one digit more
than in the isolated number

3. Add up, and

4. Round off the sum by discarding the last digit

This procedure is explained by an example 9


Error Propagation Chap 1: Errors
Addition and Subtraction continue…
Example: Find the sum of the following numbers:
1.35265, 2.00468, 1.532, 28.201, 31.00123,
where each is correct to the digits given. Also, find the total
absolute error.
Solution: We have two numbers 1.532 and 28.201 having greatest
absolute error of 0.0005.
Round-off all other numbers to four decimal digits. These are 1.3526,
2.0047, 31.0012
The sum of all the numbers is given by
S = 1.3526+2.0047+31.0012+1.532+28.201
= 64.0915
= 64.092 (Rounding-off by discarding last digit)
Thus, the sum of the given numbers is
𝑺 = 𝟔𝟒. 𝟎𝟗𝟐
10
Error Propagation Chap 1: Errors
Addition and Subtraction continue…
To find absolute error:
Two numbers have each an absolute error of 0.0005 and three
numbers have each an absolute error of 0.00005.
Therefore, the absolute error in the sum of all five numbers is
𝐸𝐴 = 2 × 0.0005 + 3 × 0.00005
𝐸𝐴 = 0.00115
In addition to the above absolute error, we have to take into account the
rounding-off error in the sum S, which is |64.0915 − 64.092| = 0.0005.
Therefore, the total absolute error in the sum is
𝐸𝑇 = 0.00115 + 0.0005
𝑬𝑻 = 𝟎. 𝟎𝟎𝟏𝟔𝟓

It has to be noted that the sum lies in the range S = 64.092 ± 0.00165

11
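The inherent-error bound used above can be sketched in a few lines of Python (the helper name and the (value, decimal-places) input format are assumptions; the additional rounding error of the final sum is handled separately, as on the slide):

```python
def sum_error_bound(values_with_decimals):
    """Bound on the inherent absolute error of a sum: a term correct to d
    decimal places contributes at most 0.5 * 10**(-d)."""
    total = sum(v for v, _ in values_with_decimals)
    bound = sum(0.5 * 10 ** (-d) for _, d in values_with_decimals)
    return total, bound

data = [(1.35265, 5), (2.00468, 5), (1.532, 3), (28.201, 3), (31.00123, 5)]
print(sum_error_bound(data))   # sum 64.09156, inherent error bound 0.00115
```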
Error Propagation Chap 1: Errors
Multiplication:
Consider two numbers 𝑋 = 𝑋1 + 𝐸𝑋 and 𝑌 = 𝑌1 + 𝐸𝑌,
where 𝐸𝑋 and 𝐸𝑌 are the errors in 𝑋1 and 𝑌1 respectively.
Then the product of these two numbers is
𝑋𝑌 = (𝑋1 + 𝐸𝑋)(𝑌1 + 𝐸𝑌)
⟹ 𝑋𝑌 = 𝑋1𝑌1 + 𝑋1𝐸𝑌 + 𝑌1𝐸𝑋 + 𝐸𝑋𝐸𝑌
Neglecting the product of the error terms 𝐸𝑋𝐸𝑌, we get
𝑋𝑌 = 𝑋1𝑌1 + (𝑋1𝐸𝑌 + 𝑌1𝐸𝑋)
(true value = approximate value + error)
Then the total error in the product is
𝐸𝑋𝑌 = 𝑋1𝐸𝑌 + 𝑌1𝐸𝑋
⟹ 𝑬𝑿𝒀 = 𝑿𝟏𝒀𝟏 (𝑬𝑿/𝑿𝟏 + 𝑬𝒀/𝒀𝟏)
12
Error Propagation Chap 1: Errors
Multiplication continue …
We are interested in the magnitude of the maximum error.
Taking absolute values on both sides:
|𝐸𝑋𝑌| = |𝑋1𝑌1| |𝐸𝑋/𝑋1 + 𝐸𝑌/𝑌1| ≤ |𝑋1𝑌1| (|𝐸𝑋/𝑋1| + |𝐸𝑌/𝑌1|)
⟹ |𝑬𝑿𝒀| ≤ |𝑿𝟏𝒀𝟏| (|𝑬𝑿/𝑿𝟏| + |𝑬𝒀/𝒀𝟏|)
Division:
The error in division can be found in a similar manner.
The total error in the quotient is found to be
𝑬𝑿/𝒀 = (𝑿𝟏/𝒀𝟏) (𝑬𝑿/𝑿𝟏 − 𝑬𝒀/𝒀𝟏)
The maximum absolute error is given by
|𝑬𝑿/𝒀| ≤ |𝑿𝟏/𝒀𝟏| (|𝑬𝑿/𝑿𝟏| + |𝑬𝒀/𝒀𝟏|)
13
Error Propagation Chap 1: Errors
Multiplication/Division continue …
How to multiply or divide any two numbers
Following procedure may be adopted:
1. Isolate the number with greatest absolute error
2. Round off the other numbers so that they have the same absolute error as
the isolated number
3. Multiply (or divide) the numbers
4. Round off the result so that it has the same number of significant digits as the
isolated number
Example: Find the product of the numbers 56.54 and 12.4, which are
both correct to the significant figures given
Solution: Here, the number 12.4 has the greatest absolute error of 0.05, so
we round off the other number to one decimal digit, i.e. 56.5
Then, the product is given by
𝑃 = 12.4 × 56.5
= 700.6 14
Error Propagation Chap 1: Errors
Multiplication/Division continue …
Now, round off the product to 3 significant digits, because the
isolated number 12.4 has three significant digits; we get

P = 701

Absolute error, EA = 0.05 × 56.5 + 0.05 × 12.4

𝐄𝐀 = 𝟑. 𝟒𝟒𝟓

Round-off error = |700.6 − 701| = 𝟎. 𝟒

Total absolute error, ET = 𝟑. 𝟒𝟒𝟓 + 𝟎. 𝟒 = 3.845

𝑬𝑻 = 𝟑. 𝟖𝟒𝟓
15
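The product-error bound can be sketched in Python as follows (the function name is illustrative):

```python
def product_error_bound(x1, dx, y1, dy):
    """Maximum absolute error of a product X*Y when X = x1 ± dx and
    Y = y1 ± dy, using |E_XY| <= |x1*y1| * (|dx/x1| + |dy/y1|)."""
    return abs(x1 * y1) * (abs(dx / x1) + abs(dy / y1))

# Product example above: 56.5 and 12.4, each taken with absolute error 0.05
print(56.5 * 12.4, product_error_bound(56.5, 0.05, 12.4, 0.05))  # 700.6, 3.445
```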
Chapter-1

General Error Formula

16
Mathematical Preliminaries: Error
Result 1: Taylor's series for a function of a single variable
𝑓(𝑥₁ + Δ𝑥₁) = 𝑓(𝑥₁) + (𝑓′(𝑥₁)/1!) Δ𝑥₁ + (𝑓″(𝑥₁)/2!) (Δ𝑥₁)² + ⋯ + (𝑓⁽ⁿ⁾(𝑥₁)/𝑛!) (Δ𝑥₁)ⁿ + ⋯

Result 2: Taylor's series for a function of two variables
𝑓(𝑥₁ + Δ𝑥₁, 𝑥₂ + Δ𝑥₂) = 𝑓(𝑥₁, 𝑥₂) + (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂
+ (1/2) [ (∂²𝑓/∂𝑥₁²) (Δ𝑥₁)² + 2 (∂²𝑓/∂𝑥₁∂𝑥₂) Δ𝑥₁Δ𝑥₂ + (∂²𝑓/∂𝑥₂²) (Δ𝑥₂)² ] + ⋯

Result 3: Taylor's series for a function of 𝑛 variables
𝑓(𝑥₁ + Δ𝑥₁, 𝑥₂ + Δ𝑥₂, ⋯, 𝑥ₙ + Δ𝑥ₙ) = 𝑓(𝑥₁, 𝑥₂, ⋯, 𝑥ₙ)
+ (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ
+ (1/2) [ (∂²𝑓/∂𝑥₁²) (Δ𝑥₁)² + ⋯ + (∂²𝑓/∂𝑥ₙ²) (Δ𝑥ₙ)² + 2 (∂²𝑓/∂𝑥₁∂𝑥₂) Δ𝑥₁Δ𝑥₂ + ⋯ + 2 (∂²𝑓/∂𝑥ₙ₋₁∂𝑥ₙ) Δ𝑥ₙ₋₁Δ𝑥ₙ ] + ⋯
17
General Error Formula Chap 1: Errors
We derive a general formula for the error committed while using a
certain formula or a functional relation
Let 𝑢 = 𝑓(𝑥1 , 𝑥2 , … , 𝑥𝑛 ) be a function of several variables
𝑥𝑖 , 𝑖 = 1,2, … , 𝑛 and let Δ𝑥𝑖 be the error in each 𝑥𝑖 .
Then the error Δ𝑢 in 𝑢 is given by
Δ𝑢 = 𝑓 𝑥1 + Δ𝑥1 , 𝑥2 + Δ𝑥2 , … , 𝑥𝑛 + Δ𝑥𝑛 − 𝑓(𝑥1 , 𝑥2 , … , 𝑥𝑛 )

Expanding the first term on the right hand side by Taylor's series, we
obtain
Δ𝑢 = 𝑓(𝑥₁, 𝑥₂, …, 𝑥ₙ) + (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ
+ (terms involving (Δ𝑥ᵢ)² and higher powers of Δ𝑥ᵢ)
− 𝑓(𝑥₁, 𝑥₂, …, 𝑥ₙ)
18
General Error Formula Continue … Chap 1: Errors
⟹ Δ𝑢 = 𝑓(𝑥₁, 𝑥₂, …, 𝑥ₙ) + (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ
+ (terms involving (Δ𝑥ᵢ)² and higher powers of Δ𝑥ᵢ)
− 𝑓(𝑥₁, 𝑥₂, …, 𝑥ₙ)

⟹ Δ𝑢 = (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ
+ (terms involving (Δ𝑥ᵢ)² and higher powers of Δ𝑥ᵢ)

Assuming that the errors Δ𝑥ᵢ in 𝑥ᵢ are small enough that squares and
higher powers of Δ𝑥ᵢ can be neglected, the above relation gives
Δ𝑢 = (∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ
19
General Error Formula Continue … Chap 1: Errors
To find the maximum absolute error, take absolute values on both sides:
|Δ𝑢| = |(∂𝑓/∂𝑥₁) Δ𝑥₁ + (∂𝑓/∂𝑥₂) Δ𝑥₂ + ⋯ + (∂𝑓/∂𝑥ₙ) Δ𝑥ₙ|
≤ |(∂𝑓/∂𝑥₁) Δ𝑥₁| + |(∂𝑓/∂𝑥₂) Δ𝑥₂| + ⋯ + |(∂𝑓/∂𝑥ₙ) Δ𝑥ₙ|

⟹ |Δ𝑢| ≤ |∂𝑓/∂𝑥₁| |Δ𝑥₁| + |∂𝑓/∂𝑥₂| |Δ𝑥₂| + ⋯ + |∂𝑓/∂𝑥ₙ| |Δ𝑥ₙ|

The maximum absolute error in 𝑢 is

(Δ𝑢)max = |∂𝑓/∂𝑥₁| |Δ𝑥₁| + |∂𝑓/∂𝑥₂| |Δ𝑥₂| + ⋯ + |∂𝑓/∂𝑥ₙ| |Δ𝑥ₙ|

The maximum relative error is given by

(Δ𝑢)max / 𝑢 = |∂𝑓/∂𝑥₁| |Δ𝑥₁/𝑢| + |∂𝑓/∂𝑥₂| |Δ𝑥₂/𝑢| + ⋯ + |∂𝑓/∂𝑥ₙ| |Δ𝑥ₙ/𝑢|
20
General Error Formula Continue … Chap 1: Errors
Example: If 𝒖 = 𝟒𝒙²𝒚³/𝒛⁴ and the errors in 𝒙, 𝒚, 𝒛 are each 0.0001, compute the maximum
absolute error and relative error in evaluating 𝒖 when 𝒙 = 𝒚 = 𝒛 = 𝟏.
Solution:
The given function is 𝑢 = 4𝑥²𝑦³/𝑧⁴ = 𝑓(𝑥, 𝑦, 𝑧)
Here, we compute the following:
∂𝑓/∂𝑥 = 8𝑥𝑦³/𝑧⁴, which at (𝑥, 𝑦, 𝑧) = (1, 1, 1) gives 8×1×1³/1⁴ = 8
∂𝑓/∂𝑦 = 12𝑥²𝑦²/𝑧⁴, which at (𝑥, 𝑦, 𝑧) = (1, 1, 1) gives 12×1²×1²/1⁴ = 12
∂𝑓/∂𝑧 = −16𝑥²𝑦³/𝑧⁵, which at (𝑥, 𝑦, 𝑧) = (1, 1, 1) gives −16×1²×1³/1⁵ = −16
The value of 𝑢 = 𝑓(𝑥, 𝑦, 𝑧) at (𝑥, 𝑦, 𝑧) = (1, 1, 1) is
𝑢 = 𝑓(1, 1, 1) = 4 × 1² × 1³ / 1⁴ = 4

The given errors in 𝑥, 𝑦, 𝑧 are respectively Δ𝑥 = Δ𝑦 = Δ𝑧 = 0.0001.
21
General Error Formula Continue … Chap 1: Errors
The maximum absolute error is
(Δ𝑢)max = |∂𝑓/∂𝑥| |Δ𝑥| + |∂𝑓/∂𝑦| |Δ𝑦| + |∂𝑓/∂𝑧| |Δ𝑧|
⟹ (Δ𝑢)max = 8 × 0.0001 + 12 × 0.0001 + |−16| × 0.0001
⟹ (𝜟𝒖)𝐦𝐚𝐱 = 𝟎. 𝟎𝟎𝟑𝟔
Thus, the maximum absolute error in the given function 𝑢 = 4𝑥²𝑦³/𝑧⁴ evaluated
at (𝑥, 𝑦, 𝑧) = (1, 1, 1) is (Δ𝑢)max = 0.0036

The maximum relative error is
(Δ𝑢)max / 𝑢 = 0.0036/4 = 𝟎. 𝟎𝟎𝟎𝟗
The maximum percentage error is
((Δ𝑢)max / 𝑢) × 100% = 0.0009 × 100% = 𝟎. 𝟎𝟗%

22
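The example can be reproduced with a short SymPy sketch (using SymPy for the partial derivatives is an assumption here; any way of computing the partials would do):

```python
import sympy as sp

# General error formula applied to u = 4*x**2*y**3/z**4 at (1, 1, 1),
# with dx = dy = dz = 0.0001.
x, y, z = sp.symbols("x y z")
u = 4 * x**2 * y**3 / z**4
point = {x: 1, y: 1, z: 1}
errors = {x: 0.0001, y: 0.0001, z: 0.0001}

# (Δu)_max = Σ |∂u/∂v| * |Δv|
du_max = sum(abs(sp.diff(u, v).subs(point)) * dv for v, dv in errors.items())
u_val = u.subs(point)

print(du_max)                 # 0.0036   (maximum absolute error)
print(du_max / u_val)         # 0.0009   (maximum relative error)
print(du_max / u_val * 100)   # 0.09     (percentage error, %)
```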
End of Lecture-2

Next
Lecture-3
Problems from Book
23
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 3
Errors in numerical computation
1
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
2
Errors

Outline

• Related problems from the text book

3
Chapter-1

Book Exercises (Pages19-20)

4
Book Exercises (Pages19-20) Chap 1: Errors
1.3 Calculate the value of the difference √𝟏𝟎𝟐 − √𝟏𝟎𝟏
correct to four significant figures.
Solution: We have
√102 = 10.09950493
√101 = 10.04987562
Therefore, √102 − √101 = 10.09950493 − 10.04987562
= 0.04962931
= 𝟎. 𝟎𝟒𝟗𝟔𝟑, correct to four
significant figures

5
Book Exercises (Pages19-20) Chap 1: Errors
1.4 If 𝒑 = 𝟑𝒄⁶ − 𝟔𝒄², find the percentage error in 𝒑 at 𝒄 = 𝟏, if
the error in 𝒄 is 𝟎. 𝟎𝟓
Solution: The given function is 𝑝 = 3𝑐⁶ − 6𝑐²
The error in 𝑐 is Δ𝑐 = 0.05
The maximum absolute error in 𝑝 from the general error formula is
given by
(Δ𝑝)max = |𝜕𝑝/𝜕𝑐| |Δ𝑐| --------------------(1)
The maximum relative error is given by
(𝐸𝑅)max = (Δ𝑝)max / |𝑝| ----------------------(2)
Now, we compute
𝜕𝑝/𝜕𝑐 = 18𝑐⁵ − 12𝑐, which at 𝑐 = 1 gives 18 × 1⁵ − 12 × 1 = 6
⟹ (𝝏𝒑/𝝏𝒄) at 𝒄 = 𝟏 is 𝟔
6
Book Exercises (Pages19-20) Chap 1: Errors
Continue 1.4...

The value of 𝑝 at 𝑐 = 1 is
𝑝 = 3 × 16 − 6 × 12 = −3
⟹ 𝒑 = −𝟑
From (1), the maximum absolute error in p is
(Δ𝑝)max = 6 × 0.05 = 0.3
From (2), the maximum relative error in p is
(𝐸𝑅)max = 0.3/|−3| = 0.3/3 = 0.1
The percentage error in p is
𝐸𝑝 = 𝐸𝑅 𝑚𝑎𝑥 × 100% = 0.1 × 100% = 10%
7
Book Exercises (Pages19-20) Chap 1: Errors
1.7 Find the absolute error in the product 𝒖𝒗 if 𝒖 =
𝟓𝟔. 𝟓𝟒 ± 𝟎. 𝟎𝟎𝟓 and 𝒗 = 𝟏𝟐. 𝟒 ± 𝟎. 𝟎𝟓.
Solution: The given number are 𝑢 = 56.54 and 𝑣 =
12.4 with their respective errors
Δ𝑢 = 0.005 and Δ𝑣 = 0.05
The maximum absolute error due to the product of two numbers
is given by
𝐸𝑢𝑣 = |𝑢𝑣| (|Δ𝑢/𝑢| + |Δ𝑣/𝑣|)
⟹ 𝐸𝑢𝑣 = 56.54 × 12.4 × (0.005/56.54 + 0.05/12.4)
⟹ 𝐸𝑢𝑣 = 2.889

8
Book Exercises (Pages19-20) Chap 1: Errors
1.8 Prove that the relative error in a product of three non-zero
numbers does not exceed the sum of the relative errors of the given
numbers.
Solution: Let 𝑥, 𝑦, 𝑧 be three non-zero numbers with their respective
errors Δ𝑥, Δ𝑦, Δ𝑧.
Let their product be given by
𝑢 = 𝑥𝑦𝑧
Here, 𝑓 𝑥, 𝑦, 𝑧 = 𝑥𝑦𝑧
Now, we compute the partial derivatives of 𝑓 w.r.t. 𝑥, 𝑦, 𝑧
𝜕𝑓 𝜕𝑓 𝜕𝑓
= 𝑦𝑧, = 𝑥𝑧, = 𝑥𝑦
𝜕𝑥 𝜕𝑦 𝜕𝑧
From the general error formula, the absolute error satisfies
|Δ𝑢| ≤ |𝜕𝑓/𝜕𝑥| |Δ𝑥| + |𝜕𝑓/𝜕𝑦| |Δ𝑦| + |𝜕𝑓/𝜕𝑧| |Δ𝑧|
⟹ |Δ𝑢| ≤ |𝑦𝑧Δ𝑥| + |𝑥𝑧Δ𝑦| + |𝑥𝑦Δ𝑧| --------------(1)
9
Book Exercises (Pages19-20) Chap 1: Errors
Continue 1.8...
The relative error is given by
𝐸𝑅 = |Δ𝑢/𝑢| ≤ (|𝑦𝑧Δ𝑥| + |𝑥𝑧Δ𝑦| + |𝑥𝑦Δ𝑧|) / |𝑥𝑦𝑧|   (from (1))
⟹ 𝐸𝑅 ≤ |𝑦𝑧Δ𝑥/𝑥𝑦𝑧| + |𝑥𝑧Δ𝑦/𝑥𝑦𝑧| + |𝑥𝑦Δ𝑧/𝑥𝑦𝑧|
⟹ 𝑬𝑹 ≤ |Δ𝑥/𝑥| + |Δ𝑦/𝑦| + |Δ𝑧/𝑧| -----------------(2)

The right hand side of (2) is the sum of the relative errors of the numbers 𝑥, 𝑦, 𝑧.
Hence, the relative error in a product of three non-zero numbers does not
exceed the sum of the relative errors of the given numbers.
Problem: Prove that the relative error in a product of 𝑛 non-zero
numbers 𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 does not exceed the sum of the relative error of
given numbers. 10
Book Exercises (Pages 19-20) Chap 1: Errors
1.9 Find the relative error in the quotient 4.536/1.32, the numbers
being correct to digits given.
Solution: Let the number be 𝑥 = 4.536 and y = 1.32 with their
respective errors Δ𝑥 = 0.0005 and Δ𝑦 = 0.005.
The maximum absolute error in the quotient is 𝐸𝑥/𝑦 ≤ |𝑥/𝑦| (|Δ𝑥/𝑥| + |Δ𝑦/𝑦|)
The relative error in the quotient is
𝐸𝑅 = 𝐸𝑥/𝑦 / |𝑥/𝑦| ≤ |Δ𝑥/𝑥| + |Δ𝑦/𝑦|
⟹ 𝐸𝑅 ≤ 0.0005/4.536 + 0.005/1.32 = 0.0038981
⟹ 𝑬𝑹 ≤ 𝟎. 𝟎𝟎𝟑𝟖𝟗𝟖𝟏

Thus, the relative error in the quotient does not exceed 0.0038981
11
End of Chapter-1
Error in Numerical Computation

Next
Chapter-2
Roots of the Equations
12
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-2: Roots Finding
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
• Root findings [7]
• Finite differences and Interpolation [8]
• Solving ODE (IVP) [6]
• Numerical Differentiation and Integration [7]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Roots
Outline

• Introduction

• Bisection method with examples

• Method of false position with example

• Classwork

4
Chapter-2

Introduction
to
Roots Finding

5
Introduction: Chap-2:Roots
• A common problem encountered in science and
engineering can be formulated into the equation of the
form:

𝑓(𝑥) = 0 ---------------------(1)

where 𝑥 and 𝑓(𝑥) may be real, complex, or vector quantities.


• The solution process often involves finding the values of 𝑥
that would satisfy the Equation (1).
• These values of 𝑥 are called the roots of the Equations (1).
• Since the function 𝑓(𝑥) becomes zero at these values,
they are also known as the zeros of the function 𝑓(𝑥).
6
Introduction: Continue … Chap-2:Roots
Equation (1) could be one of the following types:

Algebraic Equations:
2𝑥 + 3𝑥𝑦 − 25 = 0
𝑥³ − 𝑥𝑦 − 3𝑦² = 0
Polynomial Equations:
5𝑥³ − 𝑥² + 𝑥 − 2 = 0
𝑥² − 4𝑥 + 4 = 0
Transcendental Equations:
2 sin 𝑥 − 𝑥 = 0
𝑒^𝑥 sin 𝑥 − 𝑥 = 0
ln 𝑥² − 1 = 0
𝑥 − 𝑒^𝑥 = 0
7
Introduction: Continue … Chap-2:Roots
Graphically, a root of the equation 𝑓(𝑥) = 0 is a point where the
graph of 𝑓(𝑥) intersects the 𝑥-axis; more specifically, it is an
𝑥-intercept of the function 𝑓(𝑥)
[Figure: graph of 𝒇(𝒙) crossing the 𝑥-axis at 𝒙 = 𝝃, the root of 𝒇(𝒙) = 𝟎]

Note: The graph of 𝑓(𝑥) may intersect the 𝑥-axis at more than one
point if the equation 𝑓(𝑥) = 0 has more than one root. 8
Introduction: Chap-2:Roots
Methods of Solution:
There are a number of ways to find the roots of equations:
Direct Analytical Methods: Some equations can be solved
exactly in certain cases, for example, lower degree
polynomial equations (quadratic, cubic or biquadratic) and
some trigonometric equations.

Graphical Methods: Plotting the graph of 𝑓(𝑥) versus 𝑥, the root
can be estimated by finding the point of intersection of the graph of
𝑓(𝑥) with the 𝑥-axis.

Trial and Error: This method involves a series of guesses for 𝑥,
each time evaluating the function value 𝑓(𝑥) to see whether it
is close to zero. The value of 𝑥 that brings the function closest to
zero is taken as an approximate root of the equation. 9
Introduction: Chap-2:Roots
Methods of Solution: Continue …
Iterative Methods: An iterative technique usually begins with
an approximate value of the root, known as the initial guess,
which is then successively corrected, iteration by iteration,
using a formula. The process stops when the desired
level of accuracy is reached.
The methods that start with two initial guesses are:
• Bisection Method
• False Position Method

The methods that start with single initial guess are:


• Newton-Raphson Method
• Secant Method
• Fixed-Point Method or Iteration Method 10
Chapter-2

Bisection Method

11
Bisection Method: Chap-2:Roots
• The bisection method is one of the simplest and most reliable
methods for finding a solution of the equation 𝑓(𝑥) = 0.
• This method is based on the Intermediate Value Theorem, which
states that "If the function 𝑓(𝑥) is continuous on the interval
[𝑎, 𝑏] and 𝑓(𝑎) and 𝑓(𝑏) are of opposite signs, i.e.
𝑓(𝑎) 𝑓(𝑏) < 0, then there exists at least one root 𝑥 = 𝜉 of the
equation 𝑓(𝑥) = 0 in [𝑎, 𝑏]".
• The root 𝑥 = 𝜉 lying between 𝑎 and 𝑏 is approximated by
𝑥0 = (𝑎 + 𝑏)/2, the midpoint of the interval [𝑎, 𝑏].
Now, one of the following three possibilities holds:
1. If 𝑓(𝑥0) = 0, we conclude that 𝑥0 is a root of the equation 𝑓 𝑥 = 0

2. If 𝑓(𝑎) 𝑓(𝑥0) < 0, there is a root between 𝑎 and 𝑥0 .

3. If 𝑓(𝑥0) 𝑓(𝑏) < 0, there is a root between 𝑥0 and 𝑏. 12


Bisection Method: Chap-2:Roots
• In cases (2) and (3), we get a new sub-interval, which we designate
[𝑎1, 𝑏1], that contains the root. The length of this interval
[𝑎1, 𝑏1] is (𝑏 − 𝑎)/2.
• The next approximation of the root is now 𝑥1 = (𝑎1 + 𝑏1)/2
• Now, we repeat the check of the above three possibilities with 𝑥1 and
continue this process by finding new sub-intervals.
• The process is continued until the length of the sub-interval is
sufficiently small, so that the approximated root is close enough to the
exact root.
Calculation of the number of steps 𝒏 needed:
• It is clear that the interval length is halved in each step
• At the end of the nth step, the new interval will be [𝑎𝑛, 𝑏𝑛], which has
length (𝑏 − 𝑎)/2ⁿ
• We require the length of this interval to be smaller than or equal to a
prescribed small positive number 𝜖 > 0 13
Bisection Method: Chap-2:Roots
Calculation of number of steps 𝒏 needed : Continue...
• That means,
(𝑏 − 𝑎)/2ⁿ ≤ 𝜖
⇒ (𝑏 − 𝑎)/𝜖 ≤ 2ⁿ
Taking logarithms on both sides,
⇒ ln((𝑏 − 𝑎)/𝜖) ≤ ln 2ⁿ
⇒ ln((𝑏 − 𝑎)/𝜖) ≤ 𝑛 ln 2
⇒ 𝑛 ≥ ln((𝑏 − 𝑎)/𝜖) / ln 2
This gives the number 𝑛 of iterations required to achieve an accuracy 𝜖
14
Bisection Method: Chap-2:Roots
Calculation of number of steps 𝒏 needed : Continue...
Example: Find the number of iteration steps required to achieve an
accuracy 𝜖 = 0.0001 by the bisection method to approximate the root of
𝑓(𝑥) = 0 that lies in the interval [−1, 1].
Solution:
Here 𝜖 = 0.0001 and 𝑏 − 𝑎 = 2; then we have
𝑛 ≥ ln((𝑏 − 𝑎)/𝜖) / ln 2
⇒ 𝑛 ≥ ln(2/0.0001) / ln 2
⇒ 𝑛 ≥ 14.29
This means 𝑛 = 15 iterations are required to achieve an
accuracy 𝜖 = 0.0001

15
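A one-line Python version of this step-count formula (the function name is an assumption):

```python
import math

def bisection_steps(a, b, eps):
    """Smallest n with (b - a)/2**n <= eps, i.e. the number of bisection
    steps needed to reach the accuracy eps."""
    return math.ceil(math.log((b - a) / eps) / math.log(2))

print(bisection_steps(-1.0, 1.0, 0.0001))   # 15
```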
Bisection Method: Chap-2:Roots
The Bisection method is shown graphically:

16
Bisection Method: Chap-2:Roots
Algorithm: Bisection method
Step 1: Choose initial values for a and b and stopping criterion ,𝑒𝑝𝑠 > 0.
Step 2: Compute 𝑓𝑎 = 𝑓(𝑎) and 𝑓𝑏 = 𝑓(𝑏)
Step 3: If 𝑓𝑎 𝑓𝑏 > 0, the interval [𝑎, 𝑏] does not bracket a root. Go to Step 1
Otherwise
𝑎+𝑏
Step 4: Compute 𝑥0 = and compute 𝑓0 = 𝑓(𝑥0)
2
Step 5: If 𝑓𝑎 𝑓0 < 0 (root lies between 𝑎 and 𝑥0)
Set 𝑏 = 𝑥0
Set 𝑓𝑏 = 𝑓0
else (root lies between x0 and b)
Set 𝑎 = 𝑥0
Set 𝑓𝑎 = 𝑓0
Step 6: If |𝑏 − 𝑎| < 𝑒𝑝𝑠, then 𝑥0 is the approximated root of the
equation. STOP
else go to Step 4
17
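A minimal Python sketch of the bisection algorithm above (function name, default tolerance and iteration cap are assumptions):

```python
def bisect(f, a, b, eps=1e-4, max_iter=100):
    """Bisection method following the algorithm above.
    Assumes f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x0 = (a + b) / 2
        f0 = f(x0)
        if fa * f0 < 0:        # root lies between a and x0
            b, fb = x0, f0
        else:                  # root lies between x0 and b
            a, fa = x0, f0
        if abs(b - a) < eps:
            return x0
    return x0

# Example 1 below: x^2 - 4x - 10 = 0 on (-2, -1); root ≈ -1.74
print(bisect(lambda x: x**2 - 4*x - 10, -2.0, -1.0))
```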
Chapter-2

Bisection Method
Examples

18
Bisection Method: Chap-2:Roots
Example-1: Find the root, correct to two decimal places, of the equation
𝑥 2 − 4𝑥 − 10 = 0 in the interval −2 < 𝑥 < −1 by using bisection method.
Solution: Given equation 𝑥 2 − 4𝑥 − 10 = 0, 𝑎 = −2, 𝑏 = −1. Here
𝑓 𝑥 = 𝑥 2 − 4𝑥 − 10 = 0.
First, we compute 𝒇 𝒂 = 𝑓(−2) = 𝟐 and 𝒇 𝒃 = 𝑓(−1) = −𝟓
Since 𝑓(𝑎)𝑓(𝑏) < 0, root lies in the interval (−2, −1). Now,

Step 1: Compute 𝑥0 = (𝑎 + 𝑏)/2 = −1.5 and 𝑓(𝑥0) = 𝑓(−1.5) = −1.75.
Since 𝑓(−2)𝑓(−1.5) < 0, the root must lie in the new interval (−2, −1.5).

Step 2: Compute 𝑥1 = (−2 + (−1.5))/2 = −1.75 and 𝑓(𝑥1) = 𝑓(−1.75) = 0.0625.
Since 𝑓(−1.75)𝑓(−1.5) < 0, the root lies in the interval (−1.75, −1.5).

Step 3: Compute 𝑥2 = (−1.75 + (−1.5))/2 = −1.625 and 𝑓(−1.625) = −0.8594.
Since 𝑓(−1.625)𝑓(−1.75) < 0, the root lies in the interval (−1.75, −1.625)
Continuing this procedure, we can obtain the following table: 19
Bisection Method: Chap-2:Roots
Example-1: Continue ...

Iteration 𝒂 𝒃 𝒙𝒊 𝒇(𝒙𝒊 )
1 -2.0000 (+) -1.0000 (-) -1.5000 (-) -1.7500
2 -2.000 (+) -1.5000 (-) -1.7500 (+) 0.0625
3 -1.7500 (+) -1.500 (-) -1.6250 (-) -0.8594
4 -1.7500 (+) -1.6250 (-) -1.6875 (-) -0.4023
5 -1.7500 (+) -1.6875 (-) -1.7188 (-) -0.1705
6 -1.7500 (+) -1.7188 (-) -1.7344 (-) -0.0544
7 -1.7500 (+) -1.7344 (-) -1.7422 (+) 0.0040
8 -1.7422 (+) -1.7344 (-) -1.7383 (-) -0.0253
9 -1.7422 (+) -1.7383 (-) -1.7402 (-) -0.0106
10 -1.7422 (+) -1.7402 (-) -1.7412 (-) -0.0033

The root of the equation correct to two decimal places is 𝒙𝟎 = −𝟏. 𝟕𝟒.
20
Bisection Method: Chap-2:Roots
Example-2: Find a positive root correct to two decimal places of the equation
𝑥𝑒 𝑥 = 1 using bisection method, which lies between 0 and 1
Solution: Given equation 𝑥𝑒 𝑥 − 1 = 0, 𝑎 = 0, 𝑏 = 1, so 𝑓(𝑥) = 𝑥𝑒 𝑥 −1
First, we compute 𝑓 0 = −1 and 𝑓 1 = 1.7183
Since 𝑓(0)𝑓(1) < 0, root lies in the interval [0, 1]. Now,

Step 1: Compute 𝑥0 = (0 + 1)/2 = 0.5 and 𝑓(0.5) = −0.1756.
Since 𝑓(0.5)𝑓(1) < 0, the root must lie in the new interval (0.5, 1).

Step 2: Compute 𝑥1 = (0.5 + 1)/2 = 0.75 and 𝑓(0.75) = 0.5878.
Since 𝑓(0.5)𝑓(0.75) < 0, the root lies in the interval (0.5, 0.75).

Step 3: Compute 𝑥2 = (0.5 + 0.75)/2 = 0.625 and 𝑓(0.625) = 0.1677.
Since 𝑓(0.5)𝑓(0.625) < 0, the root lies in the interval (0.5, 0.625)
Continuing this procedure, we can obtain the following table:
21
Bisection Method: Chap-2:Roots
Example-2: Continue ...

Iteration 𝒂 𝒃 𝒙𝒊 𝒇(𝒙𝒊 )
1 0 (-) 1.0000 (+) 0.5000 (-) -0.1756
2 0.5000 (-) 1.0000 (+) 0.7500 (+) 0.5878
3 0.5000 (-) 0.7500 (+) 0.6250 (+) 0.1677
4 0.5000 (-) 0.6250 (+) 0.5625 (-) -0.0128
5 0.5625 (-) 0.6250 (+) 0.5938 (+) 0.0751
6 0.5625 (-) 0.5938 (+) 0.5781 (+) 0.0306
7 0.5625 (-) 0.5781 (+) 0.5703 (+) 0.0088
8 0.5625 (-) 0.5703 (+) 0.5664 (-) -0.0020
9 0.5664 (-) 0.5703 (+) 0.5684 (+) 0.0034

The root of the equation correct to two decimal places is 𝒙𝟎 = 𝟎. 𝟓𝟕.


22
Chapter-2
Classwork
On Bisection Method

Problem 1: Compute a root of the equation 𝑒^𝑥 − 3𝑥 = 0 correct
to two decimal places, which lies between 1.4 and 1.6.

Problem 2: Find the root of 𝑥² = sin 𝑥, which lies between 0.5


and 1 correct to three decimal places.

23
Chapter-2

Method of False Position

24
Method of False Position: Chap-2:Roots
• The oldest method for finding a real root of a non-linear equation
𝑓(𝑥) = 0, and the one that most closely resembles the bisection method,
is the false position method.
• This method is also known as regula falsi or the method of chords.
• In this method, we choose two initial guesses 𝑎 and 𝑏 such that
𝑓(𝑎) and 𝑓(𝑏) have opposite signs, i.e. 𝑓(𝑎) ∗ 𝑓(𝑏) < 0, so the root
lies between 𝑎 and 𝑏.
• Now, we find the equation of the chord joining the points
𝐴(𝑎, 𝑓(𝑎)) and 𝐵(𝑏, 𝑓(𝑏)), which is given by

𝑦 = 𝑓(𝑎) + [(𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)] (𝑥 − 𝑎)
Then, we find the first approximate root 𝑥1 of the equation by
computing the abscissa of the intersection point of the chord line with
𝑥 −axis. The point of intersection in this case is obtained by putting
𝑦 = 0 in the above equation. Thus, we obtain
25
Method of False Position: Continue … Chap-2:Roots
0 = 𝑓(𝑎) + [(𝑓(𝑏) − 𝑓(𝑎))/(𝑏 − 𝑎)] (𝑥1 − 𝑎)
⇒ 𝑥1 = (𝑎𝑓(𝑏) − 𝑏𝑓(𝑎)) / (𝑓(𝑏) − 𝑓(𝑎)) ---------------------------(1)
Now, there exist one of the following three possibilities :
I. If 𝑓(𝑥1 ) = 0, we conclude that 𝑥1 is a root of the equation 𝑓 𝑥 = 0
and we stop.

II. If 𝑓(𝑎) 𝑓(𝑥1 ) < 0, there is a root between 𝑎 and 𝑥1 . Then, we set
𝑎 = 𝑎 and 𝑏 = 𝑥1 and get the next approximation 𝑥2 using (1)

III. If 𝑓(𝑥1 ) 𝑓(𝑏) < 0, there is a root between 𝑥1 and 𝑏. Then, we set
𝑎 = 𝑥1 and 𝑏 = 𝑏 and get the next approximation 𝑥2 using (1)
The procedure is repeated till the root is obtained to the desired
accuracy. The figure below illustrates the graphical representation of
this method. 26
Method of False Position: Continue … Chap-2:Roots
[Figure: the chord joining 𝐴(𝑎, 𝑓(𝑎)) and 𝐵(𝑏, 𝑓(𝑏)) crosses the 𝑥-axis at 𝑥1]

Figure: Illustration of the method of false position


27
Method of False Position: Chap-2:Roots
Algorithm: False Position Method
Step 1: Choose initial values for a and b and stopping criterion , eps>0
Step 2: Compute 𝑓(𝑎) and 𝑓(𝑏)
Step 3: If 𝑓(𝑎)𝑓(𝑏) > 0, the interval [𝑎, 𝑏] does not bracket a root. Go to Step 1.
Otherwise
Step 4: Compute 𝑥𝑛 = (𝑎𝑓(𝑏) − 𝑏𝑓(𝑎)) / (𝑓(𝑏) − 𝑓(𝑎))
Step 5:
If 𝑓 𝑥𝑛 = 0, the root is found and is 𝑥𝑛
else If 𝑓(𝑎)𝑓(𝑥𝑛 ) < 0 (root lies between 𝑎 and 𝑥𝑛 )
Set 𝑏 = 𝑥𝑛
else (root lies between 𝑥𝑛 and 𝑏)
Set 𝑎 = 𝑥𝑛
Step 6: If |𝑥𝑛 − 𝑥𝑛−1| ≤ 𝜖, then 𝑥𝑛 is the approximated root of the
equation. STOP.
Otherwise
go to Step 4 to compute the next approximation 𝑥𝑛+1
28
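A minimal Python sketch of the false position algorithm above (names and defaults are assumptions):

```python
def false_position(f, a, b, eps=1e-4, max_iter=100):
    """Method of false position (regula falsi) following the algorithm above.
    Assumes f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_prev = a
    for _ in range(max_iter):
        xn = (a * fb - b * fa) / (fb - fa)   # x-intercept of the chord
        fn = f(xn)
        if fn == 0:
            return xn
        if fa * fn < 0:                      # root lies between a and xn
            b, fb = xn, fn
        else:                                # root lies between xn and b
            a, fa = xn, fn
        if abs(xn - x_prev) <= eps:
            return xn
        x_prev = xn
    return xn

# Example 1 below: x*e^x = 1 on [0, 1]; root ≈ 0.567
import math
print(false_position(lambda x: x * math.exp(x) - 1, 0.0, 1.0))
```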
Method of False Position: Chap-2:Roots
Example-1 Find a positive root correct to three decimal places of the equation
𝑥𝑒 𝑥 = 1 using method of false position, which lies between 0 and 1.
Solution: Given equation𝑥𝑒 𝑥 − 1 = 0, 𝑎 = 0, 𝑏 = 1, so 𝑓(𝑥) = 𝑥𝑒 𝑥 −1
First, we compute 𝑓 0 = −1 and 𝑓 1 = 1.7183
Since 𝑓(0)𝑓(1) < 0, root lies in the interval [0, 1]. Now,
Step 1: Compute 𝑥1 = (𝑎𝑓(𝑏) − 𝑏𝑓(𝑎)) / (𝑓(𝑏) − 𝑓(𝑎)) = (0 × 𝑓(1) − 1 × 𝑓(0)) / (𝑓(1) − 𝑓(0)) = 0.3679
and 𝑓(0.3679) = −0.4685.
Since 𝑓(0.3679)𝑓(1) < 0, the root must lie in the new interval (0.3679, 1).
Step 2: Compute 𝑥2 = (0.3679 × 𝑓(1) − 1 × 𝑓(0.3679)) / (𝑓(1) − 𝑓(0.3679)) = 0.5033
and 𝑓(0.5033) = −0.1674.
Since 𝑓(0.5033)𝑓(1) < 0, the root lies in the interval (0.5033, 1).
Step 3: Compute 𝑥3 = (0.5033 × 𝑓(1) − 1 × 𝑓(0.5033)) / (𝑓(1) − 𝑓(0.5033)) = 0.5474
and 𝑓(0.5474) = −0.0536.
Since 𝑓(0.5474)𝑓(1) < 0, the root lies in the interval (0.5474, 1).

Continuing this procedure, we can obtain the following table: 29


Method of False Position: Chap-2:Roots
Example-1: Continue ...
Iteration 𝒂 𝒃 𝒙𝒊 𝒇(𝒙𝒊 )
1 0.0000 (-) 1.0000 (+) 0.3679 (-) -0.4685
2 0.3679 (-) 1.0000 (+) 0.5033 (-) -0.1674
3 0.5033 (-) 1.0000 (+) 0.5474 (-) -0.0536
4 0.5474 (-) 1.0000 (+) 0.5611 (-) -0.0166
5 0.5611 (-) 1.0000 (+) 0.5653 (-) -0.0051
6 0.5653 (-) 1.0000 (+) 0.5666 (-) -0.0015
7 0.5666 (-) 1.0000 (+) 0.5670 (-) -0.0005
8 0.5670 (-) 1.0000 (+) 0.5671 (-) -0.0001
9 0.5671 (-) 1.0000 (+) 0.5671 (-) -0.0000
10 0.5671 (-) 1.0000 (+) 0.5671 (-) -0.0000

The root of the equation correct to three decimal places is 𝒙𝟏𝟎 = 𝟎. 𝟓𝟔𝟕.
30
Chapter-2
Classwork
On Method of False Position

Problem 1: Find the root of 𝑥³ − 4𝑥 − 9 = 0 correct to two


decimal places, which lies between 2.5 and 3.

Problem 2: Compute the root of the equation


log 𝑥 = cos 𝑥 correct to three decimal places, which lies
between 1 and 1.5.

31
End of Lecture-1

Next
Lecture-2
Fixed point method, Newton-Raphson
method
32
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 2
Root Finding
1
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
2
Roots
Outline

• Fixed point iteration method

• Newton-Raphson method

• Secant method

• System of non-linear equations

• Classwork

3
Chapter-2
• We have so far discussed the methods to find the root of a
given equation 𝑓 𝑥 = 0 which require the interval that
contains the root 𝜉

• We now discuss the methods which require one or more


starting values of 𝑥 not necessarily contains the root 𝜉 of the
given equation 𝑓 𝑥 = 0

Iteration Method

4
(Fixed Point) Iteration Method: Chap-2:Roots
Let the given equation be
𝑓(𝑥) = 0 ---------------------(1)
We re-write equation (1) into the form:
𝑥 = 𝜙(𝑥) ---------------------(2)
Remarks:
• Equations (1) and (2) are equivalent and, therefore, a root of Equation
(1) is also a root of Equation (2)
• Geometrically, the root of Equation (1) is the point where the curves
𝑦 = 𝑥 and 𝑦 = 𝜙(𝑥) intersect. This point of intersection is known as
fixed point of the function
• The above transformation can be obtained either by algebraic
manipulation of the given Equation (1) or by simply adding 𝑥 to both
sides of Equation (1)
• Equation (2) is known as the fixed point equation
• For example, the equation 𝑥² − 2𝑥 + 1 = 0 can be written in the
form (2) as 𝑥 = (𝑥² + 1)/2, or 𝑥 = 𝑥² − 𝑥 + 1, or 𝑥 = √(2𝑥 − 1) 5
(Fixed Point) Iteration Method: Continue … Chap-2:Roots
Let 𝑥0 be an initial guess to a root of equation (1); then the next
approximation 𝑥1 is obtained from equation (2) and is given by
𝑥1 = 𝜙(𝑥0)
Similarly, the next approximation 𝑥2 is given by
𝑥2 = 𝜙(𝑥1)

In general, the ith approximation is given by

𝑥𝑖 = 𝜙(𝑥𝑖−1), 𝑖 = 1, 2, 3, … ----------------(3)
Relation (3) is called the fixed point iteration formula. The iteration is
continued until |𝑥𝑖 − 𝑥𝑖−1| < 𝜖, for a very small positive number 𝜖 > 0.
Remarks: The following questions might arise:
• Does the sequence of approximations 𝑥0, 𝑥1, 𝑥2, … 𝑥𝑖, … always
converge?
• If it does, will it converge to a root 𝜉 of equation (1)?
• How should we choose the fixed point equation 𝜙(𝑥) so that the sequence
𝑥0, 𝑥1, 𝑥2, … 𝑥𝑖, … converges to the root 𝜉?
• These questions are answered by the following theorem
• These question will be answered by statement of a Theorem 7
(Fixed Point) Iteration Method: Continue … Chap-2:Roots
Theorem: Let 𝑥 = 𝜉 be a root of 𝑓(𝑥) = 0 and let 𝐼 be an interval
containing the root 𝜉. Let 𝜙(𝑥) and 𝜙′(𝑥) be continuous in
the interval 𝐼, where 𝑥 = 𝜙(𝑥) is the fixed point equation
of the given equation 𝑓(𝑥) = 0. Then, if |𝜙′(𝑥)| < 1 for all
𝑥 ∈ 𝐼, the sequence of approximations 𝑥0, 𝑥1, 𝑥2, … 𝑥𝑖, … defined
by Equation (3) converges to the root 𝜉, provided that the initial
approximation 𝑥0 is chosen in 𝐼.

Iteration Method Graphically:
[Figure: successive iterates 𝑥0, 𝑥1, 𝑥2, … approaching the intersection of 𝒚 = 𝒙 and 𝒚 = 𝝓(𝒙)]
Fig: The iteration method converges to the exact root when |𝜙′(𝑥)| < 1 8
Iteration Method: Chap-2:Roots

Algorithm: Iteration Method

Step 1: Rearrange the given equation 𝑓(𝑥) = 0 into the fixed point
equation 𝑥 = 𝜙(𝑥) such that |𝜙′(𝑥)| < 1 for all 𝑥 ∈ 𝐼.
Step 2: Choose an initial guess 𝑥0 and a stopping criterion 𝑒𝑝𝑠 > 0.
Step 3: Compute 𝑥𝑖 = 𝜙(𝑥𝑖−1), 𝑖 = 1, 2, 3, …
Step 4: Stop if |𝑥𝑖 − 𝑥𝑖−1| < 𝑒𝑝𝑠; otherwise go to Step 3

9
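A minimal Python sketch of the iteration algorithm above (names and defaults are assumptions):

```python
def fixed_point(phi, x0, eps=1e-4, max_iter=100):
    """Fixed point iteration x_i = phi(x_{i-1}), following the algorithm above.
    phi should satisfy |phi'(x)| < 1 near the root for convergence."""
    x_prev = x0
    for _ in range(max_iter):
        x = phi(x_prev)
        if abs(x - x_prev) < eps:
            return x
        x_prev = x
    return x

# Example 1 below: e^(-x) = -10x, i.e. x = -e^(-x)/10; root ≈ -0.112
import math
print(fixed_point(lambda x: -math.exp(-x) / 10, 0.0))
```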
Chapter-2

Iteration Method
Examples

10
Iteration Method: Chap-2:Roots
Example-1: Compute a root, correct to three significant figures, of the
equation 𝑒^(−𝑥) = −10𝑥 lying in the interval [−1, 1] using the iteration
method.

Solution: The given equation is 𝑒^(−𝑥) = −10𝑥. Writing this equation as the fixed
point equation 𝑥 = 𝑒^(−𝑥)/(−10); here 𝜙(𝑥) = −𝑒^(−𝑥)/10 and 𝜙′(𝑥) = 𝑒^(−𝑥)/10.
Clearly, |𝜙′(𝑥)| = 𝑒^(−𝑥)/10 < 1 for all 𝑥 ∈ [−1, 1]
Choosing the initial guess 𝑥0 = 0, we obtain the successive
approximations:
𝑥1 = 𝜙(𝑥0) = 𝜙(0) = 𝑒^(−0)/(−10) = −0.1000
𝑥2 = 𝜙(𝑥1) = 𝑒^(0.1000)/(−10) = −0.1105
𝑥3 = 𝜙(𝑥2) = −0.1117
𝑥4 = 𝜙(𝑥3) = −0.1118
𝑥5 = 𝜙(𝑥4) = −0.1118
The root of the equation correct to three significant figures is −0.112 11
Iteration Method: Chap-2:Roots

12
Chapter-2
Classwork
On Iteration Method
Problem 1: Using iteration method, find the root of the
equation cos 𝑥 = 3𝑥 – 1 correct to three decimal places,
where root lies between 0 and 1

Problem 2: Find the root of the equation 𝑥² − 7 = 0 correct


to three decimal places using iteration method.

13
Chapter-2

Newton-Raphson Method
• This method is generally used to improve the result
obtained by one of the previous method

• This method is developed by using Taylor‘s series


expansion

14
Newton-Raphson Method: Chap-2:Roots
Let the given equation be 𝑓(𝑥) = 0 ---------------------(1)
Let 𝑥0 be an approximate root of equation (1) and
let 𝑥1 = 𝑥0 + ℎ be the exact root, so that 𝑓(𝑥1) = 0, i.e.
𝑓(𝑥0 + ℎ) = 0.
Using Taylor's series to expand 𝑓(𝑥0 + ℎ), we obtain
𝑓(𝑥0 + ℎ) = 𝑓(𝑥0) + ℎ𝑓′(𝑥0) + (ℎ²/2!) 𝑓″(𝑥0) + ⋯ = 0
Neglecting the second and higher derivative terms, under the assumption
that ℎ is sufficiently small, we have
𝑓(𝑥0) + ℎ𝑓′(𝑥0) = 0

⇒ ℎ = −𝑓(𝑥0)/𝑓′(𝑥0)

Thus, the first approximation 𝑥1 is given by 𝑥1 = 𝑥0 − 𝑓(𝑥0)/𝑓′(𝑥0)
The next approximation 𝑥2 is given by 𝑥2 = 𝑥1 − 𝑓(𝑥1)/𝑓′(𝑥1) 15
Newton-Raphson Method: Continue … Chap-2:Roots
Similarly, the successive approximations are given by
𝑥𝑖+1 = 𝑥𝑖 − 𝑓(𝑥𝑖)/𝑓′(𝑥𝑖), 𝑖 = 0, 1, 2, …
The iteration stops whenever |𝑥𝑖+1 − 𝑥𝑖| < 𝜖 for a small tolerance 𝜖 > 0.

Note: Newton-Raphson method fails to proceed if 𝑓 ′ 𝑥𝑖 = 0, for some 𝑖


Remark:
Geometrically, the Newton-Raphson method approximates the root of the
equation 𝑓(𝑥) = 0 by the 𝑥-intercept of the tangent drawn to the curve of
𝑓(𝑥). The first approximation 𝑥1 to the root of the equation 𝑓(𝑥) = 0 is
the 𝑥-intercept of the tangent drawn at the point 𝑷(𝒙𝟎, 𝒇(𝒙𝟎)) on the
curve of 𝑓(𝑥), as shown in the figure.
[Figure: tangent to 𝑦 = 𝑓(𝑥) at 𝑷(𝒙𝟎, 𝒇(𝒙𝟎)) meeting the 𝑥-axis at 𝑥1]
Fig: Newton-Raphson method
16
Newton-Raphson Method: Chap-2:Roots

Algorithm: Newton Raphson


Step 1: Choose an initial guess 𝑥 = 𝑥0 such that 𝑓 ′ 𝑥0 ≠ 0 and
stopping criterion 𝑒𝑝𝑠 > 0.
Step 2: Compute 𝑓(𝑥0) and 𝑓′(𝑥0)
Step 3: Compute the improved estimate 𝑥1 using
𝑥1 = 𝑥0 − 𝑓(𝑥0)/𝑓′(𝑥0)
Step 4: Check for the accuracy of the latest estimate:
If 𝑥1 − 𝑥0 < 𝑒𝑝𝑠 STOP. Root = 𝑥1 .
else
𝑥0 = 𝑥1 and repeat Step 2 , Step 3 and Step 4.

17
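A minimal Python sketch of the Newton-Raphson algorithm above (names and defaults are assumptions):

```python
def newton_raphson(f, fprime, x0, eps=1e-4, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i),
    following the algorithm above."""
    x = x0
    for _ in range(max_iter):
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) = 0; the method fails to proceed")
        x_new = x - f(x) / d
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example 1 below: x^2 - 3x + 2 = 0 with x0 = 0; converges to the root 1.000
print(newton_raphson(lambda x: x**2 - 3*x + 2, lambda x: 2*x - 3, 0.0))
```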
Chapter-2

Newton-Raphson Method
Examples

18
Newton-Raphson Method: Example-1 Chap-2:Roots
Example: Find a real root, correct to three decimal places, of the
equation 𝑥 2 − 3𝑥 + 2 = 0 using Newton-Raphson method
Solution: The given equation is 𝑥² − 3𝑥 + 2 = 0.
Here, 𝑓(𝑥) = 𝑥² − 3𝑥 + 2 and 𝑓′(𝑥) = 2𝑥 − 3.
Choosing 𝑥0 = 0,
we compute 𝑓(𝑥0) = 2 and 𝑓′(𝑥0) = −3.
The first approximation 𝑥1 is
𝑥1 = 𝑥0 − 𝑓(𝑥0)/𝑓′(𝑥0)
⇒ 𝑥1 = 0 − 2/(−3)
⇒ 𝑥1 = 0.6667
Similarly,
𝑓(𝑥1) = 0.4444 and 𝑓′(𝑥1) = −1.6667.
𝑥2 = 𝑥1 − 𝑓(𝑥1)/𝑓′(𝑥1)
⇒ 𝑥2 = 0.6667 − 0.4444/(−1.6667)
⇒ 𝑥2 = 0.9333
The next approximations 𝑥3, 𝑥4, … are computed in a similar manner:
𝑥3 = 0.9959, 𝑥4 = 0.9999, 𝑥5 = 1.0000
The root of the given equation correct to three decimal places is 1.000
19
Newton-Raphson Method: Example-2 Chap-2:Roots

𝑥1 = 𝑥0 − 𝑓(𝑥0)/𝑓′(𝑥0),

The root of the equation correct to three decimal places is 0.567.


20
Chapter-2
Classwork
On Newton-Raphson Method
Problem 1: Find the root of the equation 𝑥 + ln 𝑥 = 2 correct to
three decimal place by using Newton-Raphson method.

Problem 2: Compute the root of 𝑒 𝑥 = 4𝑥 correct to three


decimal places by using Newton-Raphson method.

Problem 3: Use the Newton-Raphson method to establish the
formula 𝑥𝑖+1 = (1/2)(𝑥𝑖 + 𝑁/𝑥𝑖) for finding the square root of a positive
real number 𝑁 that is not a perfect square, and hence compute the
value of √2 correct to six decimal places.
21
Chapter-2

The Secant Method


• To avoid the derivative in the Newton-Raphson method,
the secant method is developed
• This method is obtained by approximating the derivative
in the Newton-Raphson method by
𝑓′(𝑥𝑖) ≈ (𝑓(𝑥𝑖) − 𝑓(𝑥𝑖−1)) / (𝑥𝑖 − 𝑥𝑖−1)
• This method demands two initial guesses to proceed for
the next approximation

22
Secant-Method: Chap-2:Roots
• The Newton-Raphson method requires the evaluation of the derivative 𝑓′(𝑥𝑖)
in each iteration step, which may not be possible, especially in practical
problems
• In the secant method, the derivative 𝑓′(𝑥𝑖) is approximated by the
formula 𝑓′(𝑥𝑖) ≈ (𝑓(𝑥𝑖) − 𝑓(𝑥𝑖−1)) / (𝑥𝑖 − 𝑥𝑖−1); substituting this in the
Newton-Raphson method, we get
𝑥𝑖+1 = 𝑥𝑖 − (𝑥𝑖 − 𝑥𝑖−1) 𝑓(𝑥𝑖) / (𝑓(𝑥𝑖) − 𝑓(𝑥𝑖−1))
On simplification, we get
𝑥𝑖+1 = (𝑥𝑖−1 𝑓(𝑥𝑖) − 𝑥𝑖 𝑓(𝑥𝑖−1)) / (𝑓(𝑥𝑖) − 𝑓(𝑥𝑖−1)), 𝑖 = 1, 2, 3, …
which is the secant method for approximating a root of the equation
𝑓(𝑥) = 0; this method needs the two previous values 𝑥𝑖−1, 𝑥𝑖 to
approximate the next value 𝑥𝑖+1.
The iteration stops whenever |𝑥𝑖+1 − 𝑥𝑖| < 𝜖 for a small tolerance
𝜖 > 0.
23
Secant-Method: Continue … Chap-2:Roots
Note: The secant method demands two initial guesses 𝑥0, 𝑥1 to approximate
the next values, but the initial guesses need not bracket the exact
root 𝜉, unlike the bisection and false position methods.

Remark:
Geometrically, the secant method approximates the root of the
equation 𝑓(𝑥) = 0 by the 𝑥-intercept of the secant line joining
the points 𝑷(𝒙𝒊−𝟏, 𝒇(𝒙𝒊−𝟏)) and 𝑄(𝑥𝑖, 𝑓(𝑥𝑖)) on the curve of 𝑓(𝑥),
as shown in the figure.
[Figure: secant line through 𝑷(𝒙𝒊−𝟏, 𝒇(𝒙𝒊−𝟏)) and 𝑄(𝑥𝑖, 𝑓(𝑥𝑖)) meeting the 𝑥-axis at 𝑥𝑖+1]
Fig: Secant method

24
Secant Method: Chap-2:Roots

Algorithm: Secant Method

Step 1: Choose two initial guesses 𝑥0, 𝑥1, and stopping criterion
𝑒𝑝𝑠 > 0.
Step 2: Compute 𝑓(𝑥0) and 𝑓(𝑥1).
Step 3: Compute the new approximation using the formula
𝑥2 = (𝑥0 𝑓(𝑥1) − 𝑥1 𝑓(𝑥0)) / (𝑓(𝑥1) − 𝑓(𝑥0))
Step 4: Check for the accuracy of the latest estimate:
If 𝑥2 − 𝑥1 < 𝑒𝑝𝑠 STOP. Root = 𝑥2 .
else set 𝑥0 = 𝑥1 , 𝑥1 = 𝑥2 , and repeat Step 2 , Step 3 and
Step 4.

25
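A minimal Python sketch of the secant algorithm above (names and defaults are assumptions):

```python
def secant(f, x0, x1, eps=1e-4, max_iter=100):
    """Secant method following the algorithm above; the two starting
    values need not bracket the root."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x1) = f(x0); cannot form the secant")
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example 1 below: x^2 - 4x - 10 = 0 with x0 = 4, x1 = 2; root ≈ 5.74
print(secant(lambda x: x**2 - 4*x - 10, 4.0, 2.0))
```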
Secant Method: Examples Chap-2:Roots
Example-1 Use secant method to estimate the root of the equation
𝑥 2 − 4𝑥 − 10 = 0 with initial guesses 𝑥0 = 4 and 𝑥1 = 2.
Solution: Given equation 𝑥 2 − 4𝑥 − 10 = 0,𝑥0 = 4 and 𝑥1 = 2.
𝑓(𝑥0) = 𝑓(4) = −10 𝑎𝑛𝑑 𝑓(𝑥1) = 𝑓(2) = −14.
The first approximation 𝑥2 is
𝑥2 = (𝑥0 𝑓(𝑥1) − 𝑥1 𝑓(𝑥0)) / (𝑓(𝑥1) − 𝑓(𝑥0))
⇒ 𝑥2 = (4 × (−14) − 2 × (−10)) / ((−14) − (−10))
⇒ 𝑥2 = 9
For the next approximation, we take 𝑥1 = 2 and 𝑥2 = 9,
and 𝑥3 is approximated by
𝑥3 = (𝑥1 𝑓(𝑥2) − 𝑥2 𝑓(𝑥1)) / (𝑓(𝑥2) − 𝑓(𝑥1))
⇒ 𝑥3 = (2 × 𝑓(9) − 9 × 𝑓(2)) / (𝑓(9) − 𝑓(2))
⇒ 𝑥3 = 4
26
Secant Method: Examples Chap-2:Roots
Example-1 (Continue...)
Solution:
For the next approximation, we take 𝑥2 = 9 and 𝑥3 = 4,
and 𝑥4 is approximated by
𝑥4 = (𝑥2 𝑓(𝑥3) − 𝑥3 𝑓(𝑥2)) / (𝑓(𝑥3) − 𝑓(𝑥2))
⇒ 𝑥4 = (9 × 𝑓(4) − 4 × 𝑓(9)) / (𝑓(4) − 𝑓(9))
⇒ 𝑥4 = 5.1111
For the next approximation, we take 𝑥3 = 4 and 𝑥4 = 5.1111,
and 𝑥5 is approximated by
𝑥5 = (𝑥3 𝑓(𝑥4) − 𝑥4 𝑓(𝑥3)) / (𝑓(𝑥4) − 𝑓(𝑥3))
⇒ 𝑥5 = (4 × 𝑓(5.1111) − 5.1111 × 𝑓(4)) / (𝑓(5.1111) − 𝑓(4))
⇒ 𝑥5 = 5.9563
Continuing this process, we can obtain the next approximations
27
Chapter-2
Classwork
On Secant Method
Problem 1: Use the secant method to determine the root,
between 5 and 8 of the equation 𝑥 2.2 = 69, correct to two
decimal places.

Problem 2: Compute the root of the equation 𝑥 – 𝑒 −𝑥 = 0


correct to three decimal places by using secant method.

28
Chapter-2

Order of Convergence
• Order of convergence tells the quality of numerical
methods to approximate the root of the given equation
𝑓 𝑥 =0
• Higher the order of convergence of an approximation
method, faster is the convergence of the method.

29
Order of Convergence Chapter-2
Definition: A method that approximates the exact root 𝜉 of the
equation 𝑓(𝑥) = 0 by a sequence of numbers 𝑥𝑖, 𝑖 = 0, 1, 2, … is said to
have order of convergence 𝑝 if 𝑝 is the largest positive real number for
which there exists a finite constant 𝐶 > 0 such that
lim (𝑖→∞) |𝜖𝑖+1| / |𝜖𝑖|^𝑝 = 𝐶,
where 𝜖𝑖 = 𝜉 − 𝑥𝑖 is the error in the 𝑖th iteration.
Remarks:
• If 𝑝 = 1, the sequence of approximations 𝑥𝑖, 𝑖 = 0, 1, 2, …
converging to 𝜉, i.e. lim (𝑖→∞) 𝑥𝑖 = 𝜉, is said to be
linearly convergent, or to have first order convergence.
• If 𝑝 = 2, the sequence of approximations 𝑥𝑖, 𝑖 = 0, 1, 2, …
converging to 𝜉, i.e. lim (𝑖→∞) 𝑥𝑖 = 𝜉, is said to be
quadratically convergent, or to have second order convergence.
• Larger values of 𝑝 correspond to faster convergence 30
Order of Convergence Chapter-2
Here are the orders of convergence of the methods that we
have discussed so far:

• The order of convergence of the bisection method is 𝑝 = 1, that is,
first order convergence.
• The order of convergence of the method of false position is 𝑝 = 1,
that is, first order convergence.
• The order of convergence of the (fixed point) iteration
method is 𝑝 = 1, that is, first order convergence.
• The order of convergence of the Newton-Raphson method is
𝑝 = 2, that is, second order convergence.
• The order of convergence of the secant method is
𝑝 = (1 + √5)/2 ≈ 1.6. 31
System of Non-linear Equations Chapter-2
A system of equations is a set consisting of more than one
equation in more than one variable (unknowns)

For example:
𝑥² + 2𝑥 − 𝑦² − 2 = 0
𝑥² + 2𝑥𝑦 − 4 = 0
is a system of two non-linear equations in the unknowns 𝑥
and 𝑦. The solution of the system requires the values of 𝑥 and
𝑦 that satisfy both equations simultaneously.

Two methods:
(i) Fixed Point Iteration Method
(ii) Newton-Raphson Method
are used to obtain the approximate solution numerically 32
Chapter-2

Fixed Point Iteration Method


(System of Non-Linear Equations)

33
Fixed Point Iteration Method Chapter-2
Let us consider a system of two non-linear equations in 𝑥 and 𝑦:
𝑓(𝑥, 𝑦) = 0, 𝑔(𝑥, 𝑦) = 0 --------------(1)
Let us assume that the above equations can be written in the form
𝑥 = 𝐹(𝑥, 𝑦), 𝑦 = 𝐺(𝑥, 𝑦) -------------------(2)
such that the functions 𝐹 and 𝐺 satisfy the conditions
|𝜕𝐹/𝜕𝑥| + |𝜕𝐹/𝜕𝑦| < 1 and |𝜕𝐺/𝜕𝑥| + |𝜕𝐺/𝜕𝑦| < 1 --------------(3)
Let (𝑥0, 𝑦0) be the initial approximation to a root of the given non-linear
system. Then we construct the successive approximations using (2) as
follows:
𝑥1 = 𝐹(𝑥0, 𝑦0), 𝑦1 = 𝐺(𝑥0, 𝑦0),
𝑥2 = 𝐹(𝑥1, 𝑦1), 𝑦2 = 𝐺(𝑥1, 𝑦1),
𝑥3 = 𝐹(𝑥2, 𝑦2), 𝑦3 = 𝐺(𝑥2, 𝑦2),
In general, 𝑥𝑖+1 = 𝐹(𝑥𝑖, 𝑦𝑖), 𝑦𝑖+1 = 𝐺(𝑥𝑖, 𝑦𝑖) --------------(4)
Remarks: (i) For faster convergence, the recently computed value 𝑥𝑖+1 may
be used in the evaluation of 𝑦𝑖+1 in the iteration process (4)
(ii) Condition (3) guarantees the convergence of the iteration (4) to a root
of the system (1) 34
Fixed Point Method: Chap-2:Roots
(System of Non-linear Equations)

Algorithm: Fixed Point Method


Step 1: Define the iteration functions 𝐹(𝑥, 𝑦) and 𝐺(𝑥, 𝑦) such that
|𝜕𝐹/𝜕𝑥| + |𝜕𝐹/𝜕𝑦| < 1 and |𝜕𝐺/𝜕𝑥| + |𝜕𝐺/𝜕𝑦| < 1
Step 2: Choose starting point (𝑥0, 𝑦0) and stopping criterion 𝑒𝑝𝑠 > 0.
Step 3: Compute, x1 = 𝐹(𝑥0 , 𝑦0 ) and y1 = 𝐺(𝑥0 , 𝑦0 )
Step 4: If 𝑥1 − 𝑥0 < 𝑒𝑝𝑠 and 𝑦1 − 𝑦0 < 𝑒𝑝𝑠 STOP. (Solution
achieved)
Otherwise
Update 𝑥0 = 𝑥1 and 𝑦0 = 𝑦1 and go to Step 3 and Step 4 and
continue the process until stopping criterion meet.
35
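A minimal Python sketch of this algorithm, using the Gauss-Seidel-style update suggested in the remarks (names and defaults are assumptions):

```python
def fixed_point_system(F, G, x0, y0, eps=1e-4, max_iter=100):
    """Fixed point iteration for x = F(x, y), y = G(x, y).
    The newly computed x is reused when evaluating y, as in the remarks."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = F(x, y)
        y_new = G(x_new, y)          # use the recently computed x
        if abs(x_new - x) < eps and abs(y_new - y) < eps:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Example above: x = 0.2x^2 + 0.8, y = 0.3xy^2 + 0.7, starting at (0.5, 0.5);
# the iterates converge to the exact solution (1, 1).
print(fixed_point_system(lambda x, y: 0.2*x**2 + 0.8,
                         lambda x, y: 0.3*x*y**2 + 0.7,
                         0.5, 0.5))
```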
Chap-2:Roots
Example: Solve the following system of non-linear equations:
𝑥 = 0.2𝑥² + 0.8 and 𝑦 = 0.3𝑥𝑦² + 0.7
using the fixed point iteration method.
Solution: Here, we have
𝐹(𝑥, 𝑦) = 0.2𝑥² + 0.8 and 𝐺(𝑥, 𝑦) = 0.3𝑥𝑦² + 0.7
Now, 𝜕𝐹/𝜕𝑥 = 0.4𝑥, 𝜕𝐹/𝜕𝑦 = 0 and 𝜕𝐺/𝜕𝑥 = 0.3𝑦², 𝜕𝐺/𝜕𝑦 = 0.6𝑥𝑦
It can be seen that |𝜕𝐹/𝜕𝑥| + |𝜕𝐹/𝜕𝑦| < 1 and |𝜕𝐺/𝜕𝑥| + |𝜕𝐺/𝜕𝑦| < 1
in the region of interest.
Let us start with the initial guess (𝑥0, 𝑦0) = (0.5, 0.5). Then,
𝑥1 = 𝐹(𝑥0, 𝑦0) = 0.8500, 𝑦1 = 𝐺(𝑥0, 𝑦0) = 0.7375.
For the second approximation, we obtain
𝑥2 = 𝐹(𝑥1, 𝑦1) = 0.9445, 𝑦2 = 𝐺(𝑥1, 𝑦1) = 0.8387.

36
Chap-2:Roots
Example: Continue …
To speed up the convergence, we use the recently computed values of 𝑥𝑖.
Third approximation:
𝑥3 = 𝐹(𝑥2, 𝑦2) = 0.9784, 𝑦3 = 𝐺(𝑥3, 𝑦2) = 0.9065.
Fourth approximation:
𝑥4 = 𝐹(𝑥3, 𝑦3) = 0.9915, 𝑦4 = 𝐺(𝑥4, 𝑦3) = 0.9444.
Fifth approximation:
𝑥5 = 𝐹(𝑥4, 𝑦4) = 0.9966, 𝑦5 = 𝐺(𝑥5, 𝑦4) = 0.9667.
Sixth approximation:
𝑥6 = 𝐹(𝑥5, 𝑦5) = 0.9986, 𝑦6 = 𝐺(𝑥6, 𝑦5) = 0.9799.
Continuing the iteration, we see that the sequence
(𝑥𝑖, 𝑦𝑖) converges to (1, 1), which is the exact solution of the given
system.

37
Chapter-2

Newton Raphson Method


(System of Non-linear Equations)

Students will derive this method and develop


the algorithm

38
Chapter-2
Classwork
On System of Non-Linear Equations
Problem-1: Solve the system of equations 𝑥² + 𝑦² = 5,
𝑥² − 𝑦² = 1 using the fixed-point iteration method with starting
values 𝑥0 = 1, 𝑦0 = 1.

Problem-2: Solve the system of non-linear equations
𝑥² = 3𝑥𝑦 − 7, 𝑦 = 2(𝑥 + 1) by using the Newton-Raphson
method.
method.

39
End of Lecture-2

Next
Interpolation
40
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-3: Interpolation
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
• Finite differences and Interpolation [8]
• Solving ODE (IVP) [6]
• Numerical Differentiation and Integration [7]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Interpolation

Outline

• Introduction

• Finite differences
• Forward, Backward, central, Shifting operator

• Relations between the operators

• Examples and Class work

4
Chapter-3

Introduction
to
Interpolation

5
Introduction: Chap-3:Interpolation
Scientists and engineers are often faced with the task of estimating
the value of the dependent variable 𝑦 at intermediate value(s)
of the independent variable 𝑥, given a discrete data set
(𝑥𝑖, 𝑦𝑖), 𝑖 = 0, 1, 2, …, 𝑛 of some unknown function 𝑦 = 𝑓(𝑥), by
constructing a function 𝑦 = 𝜙(𝑥) s.t. 𝑓(𝑥) ≈ 𝜙(𝑥)
[Figure: data points (𝑥𝑖, 𝑦𝑖) and the interpolating curve 𝒚 = 𝝓(𝒙)]

6
Introduction: Continue … Chap-3:Interpolation

• From a given set of data points (𝑥𝑖, 𝑦𝑖), 𝑖 = 0, 1, 2, …, 𝑛 of
some unknown function 𝑦 = 𝑓(𝑥), estimating the value of the
dependent variable 𝑦 at an intermediate value of the
independent variable 𝑥 by constructing a simple function 𝑦 = 𝜙(𝑥) is
called interpolation
• The function 𝑦 = 𝜙(𝑥) so constructed is called the interpolating
function.
• If 𝑦 = 𝜙(𝑥) is taken as a polynomial, then the process is
called polynomial interpolation and 𝑦 = 𝜙(𝑥) is called the
interpolating polynomial.
• It has to be noted that the interpolating function 𝑦 = 𝜙(𝑥)
passes through the given set of points (𝑥𝑖, 𝑦𝑖), 𝑖 = 0, 1, 2, …, 𝑛

7
Introduction: Continue … Chap-3:Interpolation
The following interpolation formulas will be discussed:

A. Interpolation formula with equally spaced points:


(i) Newton‘s forward and backward interpolation formulas
(ii) Gauss forward and backward interpolation formulas
(iii) Stirling‘s formula
(iv) Bessel‘s formula

B. Interpolation formula with unequally spaced points:


(i) Lagrange‘s interpolation formula
(ii) Newton‘s general interpolation formula

8
Chapter-2

Finite Differences
• To develop the interpolation formulas with equally
spaced points, finite difference and its notations are
explained

• Three types of finite differences:


(i) Forward difference
(ii) Backward difference
(iii) Central difference
9
Forward Differences Chap-3:Interpolation
Let us assume that a discrete set of points 𝑥𝑖 , 𝑦𝑖 , 𝑖 =
0,1,2, … , 𝑛 of any function 𝑦 = 𝑦 𝑥 is given with values of 𝑥
being equally spaced, that means, 𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛
The first order forward difference of 𝑦𝑖, 𝑖 = 0, 1, 2, …, 𝑛 is
denoted by Δ𝑦𝑖 and defined by
Δ𝑦𝑖 = 𝑦𝑖+1 − 𝑦𝑖, 𝑖 = 0, 1, …, 𝑛 − 1
Here, Δ is known as the forward difference operator.
For example:
Δ𝑦0 = 𝑦1 − 𝑦0 ,
Δ𝑦1 = 𝑦2 − 𝑦1 ,

Δ𝑦𝑛−1 = 𝑦𝑛 − 𝑦𝑛−1

10
Forward Differences: Continue ... Chap-3:Interpolation

Second order forward differences of 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 is


denoted by Δ2 𝑦𝑖 and defined by
Δ2 𝑦𝑖 = Δ𝑦𝑖+1 − Δ𝑦𝑖
which is the forward difference of the first order forward differences
For example:
Δ2 𝑦0 = Δ𝑦1 − Δ𝑦0 = 𝑦2 − 𝑦1 − 𝑦1 − 𝑦0 = 𝑦2 − 2𝑦1 + 𝑦0 ,

Δ2 𝑦1 = Δ𝑦2 − Δ𝑦1 = 𝑦3 − 𝑦2 − 𝑦2 − 𝑦1 = 𝑦3 − 2𝑦2 + 𝑦1 ,

Δ2 𝑦𝑛−2 = Δ𝑦𝑛−1 − Δ𝑦𝑛−2 = 𝑦𝑛 − 2𝑦𝑛−1 + 𝑦𝑛−2

Similarly, third order, fourth order and higher order forward


differences can be defined.
11
Forward Differences: Continue ... Chap-3:Interpolation

12
Forward Differences: Continue ... Chap-3:Interpolation
Example: Construct the forward difference table of the following data:
𝑥 15 20 25 30 35 40
𝑦 0.2588190 0.3420201 0.4226183 0.5 0.5735764 0.6427876

Solution: The forward difference table of the given data is :


𝑥 𝑦 Δ Δ2 Δ3 Δ4 Δ5

13
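The body of the table above is given as a figure. As a minimal sketch (plain Python, assuming nothing beyond the tabulated data on this slide), the successive forward-difference columns can be generated as follows; printing them should reproduce the figure's entries up to rounding:

# Forward difference table: each new column holds the differences of the previous one.
x = [15, 20, 25, 30, 35, 40]
y = [0.2588190, 0.3420201, 0.4226183, 0.5, 0.5735764, 0.6427876]

columns = [y]                       # column 0: the y-values themselves
while len(columns[-1]) > 1:
    prev = columns[-1]
    # first order: dy_i = y_{i+1} - y_i ; higher orders repeat the same rule
    columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

for order, col in enumerate(columns[1:], start=1):
    print("Δ^%d:" % order, [round(v, 7) for v in col])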
Backward Differences Chap-3:Interpolation
Let us assume that a discrete set of points 𝑥𝑖 , 𝑦𝑖 , 𝑖 =
0,1,2, … , 𝑛 of any function 𝑦 = 𝑦 𝑥 is given with values of x
being equally spaced, that means, 𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛
First order backward differences of 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 is
denoted by 𝛻𝑦𝑖 and defined by
𝛻𝑦𝑖 = 𝑦𝑖 − 𝑦𝑖−1 , 𝑖 = 1, … , 𝑛
Here, 𝛻 is known as the backward difference operator.
For example:
𝛻𝑦1 = 𝑦1 − 𝑦0 ,
𝛻𝑦2 = 𝑦2 − 𝑦1 ,

𝛻𝑦𝑛 = 𝑦𝑛 − 𝑦𝑛−1

14
Backward Differences: Continue ... Chap-3:Interpolation

Second order backward differences of 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 is


denoted by 𝛻²𝑦𝑖 and defined by
𝛻²𝑦𝑖 = 𝛻𝑦𝑖 − 𝛻𝑦𝑖−1 ,
which is the backward difference of the first order backward
differences.
For example:
𝛻²𝑦2 = 𝛻𝑦2 − 𝛻𝑦1 = (𝑦2 − 𝑦1) − (𝑦1 − 𝑦0) = 𝑦2 − 2𝑦1 + 𝑦0 ,
𝛻²𝑦3 = 𝛻𝑦3 − 𝛻𝑦2 = (𝑦3 − 𝑦2) − (𝑦2 − 𝑦1) = 𝑦3 − 2𝑦2 + 𝑦1 ,
⋮
𝛻²𝑦𝑛 = 𝛻𝑦𝑛 − 𝛻𝑦𝑛−1 = 𝑦𝑛 − 2𝑦𝑛−1 + 𝑦𝑛−2

Similarly, third order, fourth order and higher order backward


differences can be defined. 15
Backward Differences: Continue ... Chap-3:Interpolation

16
Backward Differences: Continue Chap-3:Interpolation
...
Example: Construct the backward difference table of the following data:
𝑥 15 20 25 30 35 40
𝑦 0.2588190 0.3420201 0.4226183 0.5 0.5735764 0.6427876

Solution: The backward difference table of the given data is :


𝑥 𝑦 𝛻 𝛻2 𝛻3 𝛻4 𝛻5

17
Central Differences Chap-3:Interpolation
Let us assume that a discrete set of points 𝑥𝑖 , 𝑦𝑖 , 𝑖 =
0,1,2, … , 𝑛 of any function 𝑦 = 𝑦 𝑥 is given with values of x
being equally spaced, that means, 𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛
First order central differences of 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 are denoted by
𝛿𝑦𝑖 and defined by
𝛿𝑦𝑖 = 𝑦𝑖+1/2 − 𝑦𝑖−1/2 ,
Here, 𝛿 is known as the central difference operator.
For example:
𝛿𝑦1/2 = 𝑦1 − 𝑦0 ,
𝛿𝑦3/2 = 𝑦2 − 𝑦1 ,
⋮
Similarly, second order, third order and higher order central
differences can be defined.
18
Central Differences: Continue ... Chap-3:Interpolation

19
Central Differences: Continue ... Chap-3:Interpolation
Example: Construct the central difference table of the following data:
𝑥 15 20 25 30 35 40
𝑦 0.2588190 0.3420201 0.4226183 0.5 0.5735764 0.6427876

Solution: The central difference table of the given data is :


𝑥 𝑦 𝛿 𝛿2 𝛿3 𝛿4 𝛿5

20
Central Differences Chap-3:Interpolation
Shift Operator
The shift operator acting on 𝑦𝑖 is denoted by 𝐸𝑦𝑖 and defined by
𝐸𝑦𝑖 = 𝑦𝑖+1

Higher order shift operators can also be defined. For examples,

The second order shift operator is


𝑬𝟐 𝒚𝒊 = 𝐸𝐸𝑦𝑖 = 𝐸𝑦𝑖+1 = 𝒚𝒊+𝟐

In general, 𝑬𝒏 𝒚𝒊 = 𝒚𝒊+𝒏

21
Central Differences Chap-3:Interpolation
Symbolic Relation Between the Operators:
1. Relation between 𝚫 and 𝑬
Show that Δ and 𝐸 are related by Δ ≡ 𝐸 − 1
Proof: Let 𝑦𝑖 be any number, then by the definition of Δ,
Δ𝑦𝑖 = 𝑦𝑖+1 − 𝑦𝑖
⇒ Δ𝑦𝑖 = 𝐸𝑦𝑖 − 𝑦𝑖
⇒ Δ𝑦𝑖 = (𝐸 − 1)𝑦𝑖 , which is true for all 𝑦𝑖 .
Hence, Δ ≡ (𝐸 − 1), equivalently 𝐸 ≡ Δ + 1. Proved
2. Relation between 𝛁 and 𝑬
Show that 𝛁 and 𝐸 are related by 𝛻 ≡ 1 − 𝐸 −1
Proof: Let 𝑦𝑖 be any number, then by the definition of 𝛻,
𝛻𝑦𝑖 = 𝑦𝑖 − 𝑦𝑖−1
⇒ 𝛻𝑦𝑖 = 𝑦𝑖 − 𝐸 −1 𝑦𝑖
⇒ 𝛻𝑦𝑖 = (1 − 𝐸 −1 )𝑦𝑖 , which is true for all 𝑦𝑖 .
Hence, 𝛻 ≡ (1 − 𝐸 −1 ). Proved 22
Central Differences Chap-3:Interpolation
Symbolic Relation Between the Operators:
3. Relation between 𝜹 and 𝑬
Show that 𝛿 and 𝐸 are related by 𝛿 ≡ 𝐸^(1/2) − 𝐸^(−1/2)
Proof: Let 𝑦𝑖 be any number, then by the definition of 𝛿,
𝛿𝑦𝑖 = 𝑦𝑖+1/2 − 𝑦𝑖−1/2
⇒ 𝛿𝑦𝑖 = 𝐸^(1/2) 𝑦𝑖 − 𝐸^(−1/2) 𝑦𝑖
⇒ 𝛿𝑦𝑖 = (𝐸^(1/2) − 𝐸^(−1/2)) 𝑦𝑖 , which is true for all 𝑦𝑖 .
Hence, 𝛿 ≡ 𝐸^(1/2) − 𝐸^(−1/2). Proved.
23
Central Differences Chap-3:Interpolation
Symbolic Relation Between the Operators:
4. Show that 𝚫𝛁 ≡ 𝛁𝚫
Proof: Let 𝑦𝑖 be any number, then
LHS:
Δ𝛻𝑦𝑖 = Δ(𝑦𝑖 − 𝑦𝑖−1 )
⇒ Δ𝛻𝑦𝑖 = Δ𝑦𝑖 − Δ𝑦𝑖−1
⇒ Δ𝛻𝑦𝑖 = (𝑦𝑖+1 −𝑦𝑖 ) − (𝑦𝑖 − 𝑦𝑖−1 )
⇒ Δ𝛻𝑦𝑖 = (𝑦𝑖+1 −2𝑦𝑖 + 𝑦𝑖−1 ) -------------------------(1)
RHS:
𝛻Δ𝑦𝑖 = 𝛻(𝑦𝑖+1 − 𝑦𝑖 )
⇒ 𝛻Δ𝑦𝑖 = 𝛻𝑦𝑖+1 − 𝛻𝑦𝑖
⇒ 𝛻Δ𝑦𝑖 = (𝑦𝑖+1 −𝑦𝑖 ) − (𝑦𝑖 −𝑦𝑖−1 )
⇒ 𝛻Δ𝑦𝑖 = (𝑦𝑖+1 −2𝑦𝑖 + 𝑦𝑖−1 ) --------------------------(2)
From (1) and (2), we get
Δ𝛻yi = 𝛻Δyi , for all 𝑦𝑖
Thus, Δ𝛻 ≡ 𝛻Δ. Proved. 24
Chapter-3
Classwork
On Finite Differences
Problem 1: From the given table
𝑥 0.1 0.2 0.3 0.4 0.5
𝑦 0.21 0.35 0.40 0.35 0.21

Compute: (i) Δ²𝑦1 (ii) 𝛻²𝑦2 (iii) 𝛿𝑦5/2 (iv) 𝐸⁻²𝑦3

Problem 2: Show the following relations:
(i) 𝛿²𝐸 ≡ Δ²
(ii) 𝛻 ≡ 𝛿𝐸^(−1/2)
(iii) Δ − 𝛻 ≡ 𝛿²
25
End of Lecture-1

Next
Lecture-2
Newton‘s Forward and Backward
interpolation Formulas
26
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 2
Chap-3: Interpolation
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
• Finite differences and Interpolation [8]
• Solving ODE (IVP) [6]
• Numerical Differentiation and Integration [7]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Interpolation

Outline

• Newton‘s forward interpolation formula

• Newton‘s backward interpolation formula

• Examples and Class work

4
Chapter-3

Newton’s Forward Interpolation


Formula

5
Chap-3:Interpolation
Newton’s Forward Interpolation Formula
Let 𝑛 + 1 set of points 𝑥𝑖 , 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 of some unknown
function 𝑦 = 𝑦 𝑥 be given with values of 𝑥 being equally spaced, that
means, 𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛.
We construct a polynomial 𝑦 = 𝑦𝑛 (𝑥) of degree n that agrees at the
given set of points, that means,
𝑦𝑛 𝑥𝑖 = 𝑦𝑖 for all 𝑖 = 0,1,2, … , 𝑛 --------------------(1)
Let us write the polynomial 𝑦𝑛(𝑥) of degree n in the following way:
𝑦𝑛(𝑥) = 𝑎0 + 𝑎1(𝑥 − 𝑥0) + 𝑎2(𝑥 − 𝑥0)(𝑥 − 𝑥1) + 𝑎3(𝑥 − 𝑥0)(𝑥 − 𝑥1)(𝑥 − 𝑥2) + ⋯ + 𝑎𝑛(𝑥 − 𝑥0)(𝑥 − 𝑥1)…(𝑥 − 𝑥𝑛−1) --------------------(2)
where 𝑎0 , 𝑎1 , … , 𝑎𝑛 are scalar constants and will be computed by
applying condition (1) to the polynomial (2)
For 𝒊 = 𝟎: 𝑦𝑛(𝑥0) = 𝑦0 ⇒ 𝒂𝟎 = 𝒚𝟎
For 𝒊 = 𝟏: 𝑦𝑛(𝑥1) = 𝑦1 ⇒ 𝑎0 + 𝑎1(𝑥1 − 𝑥0) = 𝑦1 ⇒ 𝑦0 + 𝑎1(𝑥1 − 𝑥0) = 𝑦1
⇒ 𝑎1 = (𝑦1 − 𝑦0)/(𝑥1 − 𝑥0) ⇒ 𝒂𝟏 = 𝚫𝒚𝟎/𝒉
6
Chap-3:Interpolation
Newton’s Forward Interpolation Formula
For 𝒊 = 𝟐:
𝑦𝑛(𝑥2) = 𝑦2
⇒ 𝑎0 + 𝑎1(𝑥2 − 𝑥0) + 𝑎2(𝑥2 − 𝑥0)(𝑥2 − 𝑥1) = 𝑦2
⇒ 𝑦0 + (Δ𝑦0/ℎ)(2ℎ) + 𝑎2(2ℎ)(ℎ) = 𝑦2
⇒ 𝑦0 + 2Δ𝑦0 + 2ℎ²𝑎2 = 𝑦2
⇒ 𝑎2 = (𝑦2 − 2𝑦1 + 𝑦0)/(2ℎ²)
⇒ 𝒂𝟐 = 𝚫²𝒚𝟎/(𝟐! 𝒉²)
Similarly, we can compute the other coefficients:
𝒂𝟑 = 𝚫³𝒚𝟎/(𝟑! 𝒉³), 𝒂𝟒 = 𝚫⁴𝒚𝟎/(𝟒! 𝒉⁴), … , 𝒂𝒏 = 𝚫ⁿ𝒚𝟎/(𝒏! 𝒉ⁿ)
7
Chap-3:Interpolation
Newton’s Forward Interpolation Formula
Now, we set (𝑥 − 𝑥0)/ℎ = 𝑝, then we have
• 𝑥 − 𝑥0 = ℎ𝑝
• 𝑥 − 𝑥1 = ℎ(𝑝 − 1)
• 𝑥 − 𝑥2 = ℎ(𝑝 − 2)
⋮
• 𝑥 − 𝑥𝑛−1 = ℎ(𝑝 − 𝑛 + 1)
Substituting the values of the coefficients 𝑎0 , 𝑎1 , 𝑎2 , … , 𝑎𝑛 and the above
setting into relation (2), we get
𝑦𝑛(𝑥) = 𝑦0 + (Δ𝑦0/(1!ℎ))ℎ𝑝 + (Δ²𝑦0/(2!ℎ²))ℎ²𝑝(𝑝 − 1) + (Δ³𝑦0/(3!ℎ³))ℎ³𝑝(𝑝 − 1)(𝑝 − 2) + ⋯ + (Δⁿ𝑦0/(𝑛!ℎⁿ))ℎⁿ𝑝(𝑝 − 1)(𝑝 − 2)…(𝑝 − 𝑛 + 1)
⇒ 𝒚𝒏(𝒙) = 𝒚𝟎 + (𝚫𝒚𝟎/𝟏!)𝒑 + (𝚫²𝒚𝟎/𝟐!)𝒑(𝒑 − 𝟏) + (𝚫³𝒚𝟎/𝟑!)𝒑(𝒑 − 𝟏)(𝒑 − 𝟐) + ⋯ + (𝚫ⁿ𝒚𝟎/𝒏!)𝒑(𝒑 − 𝟏)(𝒑 − 𝟐)…(𝒑 − 𝒏 + 𝟏), where 𝑝 = (𝑥 − 𝑥0)/ℎ ------------- (3)
Equation (3) is called Newton‘s forward interpolation formula. It is
useful for interpolating near the beginning of the given set of data points.
8
Chapter-3

Newton’s Backward Interpolation


Formula

9
Chap-3:Interpolation
Newton’s Backward Interpolation Formula
Let 𝑛 + 1 set of points 𝑥𝑖 , 𝑦𝑖 , 𝑖 = 0,1,2, … , 𝑛 of some unknown
function 𝑦 = 𝑦 𝑥 be given with values of 𝑥 being equally spaced, that
means, 𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛.
We construct a polynomial 𝑦 = 𝑦𝑛 (𝑥) of degree n that agrees at the
given set of points, that means,
𝑦𝑛 𝑥𝑖 = 𝑦𝑖 for all 𝑖 = 0,1,2, … , 𝑛 --------------------(1)
Let us write the polynomial 𝑦𝑛(𝑥) of degree n in the following way:
𝑦𝑛(𝑥) = 𝑎0 + 𝑎1(𝑥 − 𝑥𝑛) + 𝑎2(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1) + 𝑎3(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1)(𝑥 − 𝑥𝑛−2) + ⋯ + 𝑎𝑛(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1)…(𝑥 − 𝑥1) --------------------(2)
where 𝑎0 , 𝑎1 , … , 𝑎𝑛 are scalar constants and will be computed by
applying condition (1) to the polynomial (2)
For 𝒊 = 𝒏: 𝑦𝑛(𝑥𝑛) = 𝑦𝑛 ⇒ 𝒂𝟎 = 𝒚𝒏
For 𝒊 = 𝒏 − 𝟏: 𝑦𝑛(𝑥𝑛−1) = 𝑦𝑛−1 ⇒ 𝑎0 + 𝑎1(𝑥𝑛−1 − 𝑥𝑛) = 𝑦𝑛−1 ⇒ 𝑦𝑛 + 𝑎1(𝑥𝑛−1 − 𝑥𝑛) = 𝑦𝑛−1
⇒ 𝑎1 = (𝑦𝑛 − 𝑦𝑛−1)/(𝑥𝑛 − 𝑥𝑛−1) ⇒ 𝒂𝟏 = 𝛁𝒚𝒏/𝒉
10
Chap-3:Interpolation
Newton’s backward Interpolation Formula
For 𝒊 = 𝒏 − 𝟐:
𝑦𝑛(𝑥𝑛−2) = 𝑦𝑛−2
⇒ 𝑎0 + 𝑎1(𝑥𝑛−2 − 𝑥𝑛) + 𝑎2(𝑥𝑛−2 − 𝑥𝑛)(𝑥𝑛−2 − 𝑥𝑛−1) = 𝑦𝑛−2
⇒ 𝑦𝑛 + (𝛻𝑦𝑛/ℎ)(−2ℎ) + 𝑎2(−2ℎ)(−ℎ) = 𝑦𝑛−2
⇒ 𝑦𝑛 − 2𝛻𝑦𝑛 + 2ℎ²𝑎2 = 𝑦𝑛−2
⇒ 𝑎2 = (𝑦𝑛 − 2𝑦𝑛−1 + 𝑦𝑛−2)/(2ℎ²)
⇒ 𝒂𝟐 = 𝛁²𝒚𝒏/(𝟐! 𝒉²)
Similarly, we can compute the other coefficients:
𝒂𝟑 = 𝛁³𝒚𝒏/(𝟑! 𝒉³), 𝒂𝟒 = 𝛁⁴𝒚𝒏/(𝟒! 𝒉⁴), … , 𝒂𝒏 = 𝛁ⁿ𝒚𝒏/(𝒏! 𝒉ⁿ)
11
Chap-3:Interpolation
Newton’s backward Interpolation Formula
Now, we set (𝑥 − 𝑥𝑛)/ℎ = 𝑝, then we have
• 𝑥 − 𝑥𝑛 = ℎ𝑝
• 𝑥 − 𝑥𝑛−1 = ℎ(𝑝 + 1)
• 𝑥 − 𝑥𝑛−2 = ℎ(𝑝 + 2)
⋮
• 𝑥 − 𝑥1 = ℎ(𝑝 + 𝑛 − 1)
Substituting the values of the coefficients 𝑎0 , 𝑎1 , 𝑎2 , … , 𝑎𝑛 and the above
setting into relation (2), we get
𝑦𝑛(𝑥) = 𝑦𝑛 + (𝛻𝑦𝑛/(1!ℎ))ℎ𝑝 + (𝛻²𝑦𝑛/(2!ℎ²))ℎ²𝑝(𝑝 + 1) + (𝛻³𝑦𝑛/(3!ℎ³))ℎ³𝑝(𝑝 + 1)(𝑝 + 2) + ⋯ + (𝛻ⁿ𝑦𝑛/(𝑛!ℎⁿ))ℎⁿ𝑝(𝑝 + 1)(𝑝 + 2)…(𝑝 + 𝑛 − 1)
⇒ 𝒚𝒏(𝒙) = 𝒚𝒏 + (𝛁𝒚𝒏/𝟏!)𝒑 + (𝛁²𝒚𝒏/𝟐!)𝒑(𝒑 + 𝟏) + (𝛁³𝒚𝒏/𝟑!)𝒑(𝒑 + 𝟏)(𝒑 + 𝟐) + ⋯ + (𝛁ⁿ𝒚𝒏/𝒏!)𝒑(𝒑 + 𝟏)(𝒑 + 𝟐)…(𝒑 + 𝒏 − 𝟏), where 𝑝 = (𝑥 − 𝑥𝑛)/ℎ ------------- (3)
Equation (3) is called Newton‘s backward interpolation formula. It is
useful for interpolating near the end of the given set of data points. 12
Chap-3:Interpolation
Newton‘s Forward and Backward Interpolation Formulas : Continue ...
Example: Find the values of 𝑓(0.23) and 𝑓 0.29 from the following
table:
𝑥 0.20 0.22 0.24 0.26 0.28 0.30
𝑓(𝑥) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139

Solution: We construct the finite difference table:


𝒙 𝒚 = 𝒇(𝒙) 1st 2nd 3rd 4th 5th
0.20 1.6596
0.0102
0.22 1.6698 0.0004
0.0106 -0.0002
0.24 1.6804 0.0002 0.0004
0.0108 0.0002 -0.0007
0.26 1.6912 0.0004 -0.0003
0.0112 -0.0001
0.28 1.7024 0.0003
0.0115
0.30 1.7139 13
Chap-3:Interpolation
Solution: Continue ...
I. To find the value of 𝑓(0.23), we apply Newton’s forward
interpolation formula, here ℎ = 0.02 , 𝑥0 = 0.20, 𝑥 = 0.23
So, 𝒑 = (𝑥 − 𝑥0)/ℎ = (0.23 − 0.20)/0.02 = 𝟏.𝟓
The Newton‘s forward interpolation formula gives
1.5 × (1.5 − 1)
𝑦 = 1.6596 + 1.5 × 0.0102 + × 0.0004
2!
1.5 × (1.5 − 1)(1.5 − 2)
+ × (−0.0002)
3!
𝑦 = 1.6751

So, 𝒇 𝟎. 𝟐𝟑 = 𝟏. 𝟔𝟕𝟓𝟏
II. To find the value of 𝑓(0.29), we apply Newton’s backward
interpolation formula, here ℎ = 0.02 , 𝑥𝑛 = 0.30, 𝑥 = 0.29
So, 𝒑 = (𝑥 − 𝑥𝑛)/ℎ = (0.29 − 0.30)/0.02 = −𝟎.𝟓
14
Chap-3:Interpolation

The Newton‘s backward interpolation formula gives

(−0.5) × (−0.5 + 1)
𝑦 = 1.7139 + (−0.5) × 0.0115 + × 0.0003
2!
(−0.5) × (−0.5 + 1)(−0.5 + 2)
+ × (−0.0001)
3!
𝑦 = 1.7081

So, 𝒇 𝟎. 𝟐𝟗 = 𝟏. 𝟕𝟎𝟖𝟏

15
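As a cross-check of part I above, here is a hedged Python sketch of Newton's forward formula (3); the data, h and p follow the slide, and the printed value should agree with f(0.23) ≈ 1.6751 up to rounding. The function name newton_forward is an illustrative choice, not taken from the slides.

from math import factorial

def newton_forward(xs, ys, x):
    # Newton's forward interpolation for equally spaced xs.
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # build the leading forward differences Δy0, Δ²y0, ...
    diffs, col = [], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    result, term = ys[0], 1.0
    for k, d in enumerate(diffs, start=1):
        term *= (p - (k - 1))          # p(p-1)...(p-k+1)
        result += term * d / factorial(k)
    return result

xs = [0.20, 0.22, 0.24, 0.26, 0.28, 0.30]
ys = [1.6596, 1.6698, 1.6804, 1.6912, 1.7024, 1.7139]
print(newton_forward(xs, ys, 0.23))    # ≈ 1.6751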
Chap-3:Interpolation

⇒ 𝑦(𝑥) = 𝑥³ + 6𝑥² + 11𝑥 + 6 -------------------- (1)
The value 𝑦(8) is obtained from (1) by
𝑦(8) = 8³ + 6 × 8² + 11 × 8 + 6 = 990
⇒ 𝒚(𝟖) = 𝟗𝟗𝟎 16
Chapter-3
Classwork
Newton‘s Forward & Backward Interpolation
Problem 1: Using the given data
𝒙 0.1 0.2 0.3 0.4 0.5 0.6 0.7
𝒇(𝒙) 2.631 3.328 4.097 4.944 5.875 6.896 8.013

estimate the functional values 𝑓(0.15) and 𝑓(0.65)

Problem 2: Find the missing term in the following data:


𝒙 0 5 10 15 20 25 30
𝒚 1 3 ? 73 225 ? 1153

20
End of Lecture-2

Next
Lecture-3
Gauss Forward and Backward, Stirling‘s,
Bessel‘s interpolation Formulas
21
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 3
Chap-3: Interpolation
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
• Finite differences and Interpolation [8]
• Solving ODE (IVP) [6]
• Numerical Differentiation and Integration [7]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Interpolation

Outline

• Gauss forward interpolation formula

• Gauss backward interpolation formula

• Stirling‘s formula

• Bessel‘s Formula

• Examples and Class work

4
Chapter-3
Central Difference Interpolation
Formula
• Newton‘s forward and backward interpolation formulas are
applicable for interpolating near the beginning and end of the given
tabulated data points
• We now discuss the central difference interpolation formulas
which are most suited for interpolating near the middle of the
tabulated data points
• Gauss‘s Central Difference formulas
• Gauss forward interpolation
• Gauss backward interpolation
• Stirling‘s Formula 5
Chapter-3

Gauss Forward Interpolation Formula

6
Chap-3:Interpolation
Gauss Forward Interpolation Formula
Let us consider the difference table in which the central
ordinate is taken as 𝑦0 corresponding to 𝑥0.
The Gauss forward interpolation formula is written in the form:
𝑦 = 𝑦0 + 𝐺1Δ𝑦0 + 𝐺2Δ²𝑦−1 + 𝐺3Δ³𝑦−1 + 𝐺4Δ⁴𝑦−2 + 𝐺5Δ⁵𝑦−2 + ⋯,
where 𝐺1 , 𝐺2 , 𝐺3 , … can be calculated and are given by
𝐺1 = 𝑝
𝐺2 = 𝑝(𝑝 − 1)/2!
𝐺3 = (𝑝 + 1)𝑝(𝑝 − 1)/3!
𝐺4 = (𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/4!
𝐺5 = (𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/5!
where 𝑝 = (𝑥 − 𝑥0)/ℎ
7
Chap-3:Interpolation
Gauss Forward Interpolation Formula (continue …)
Thus, the Gauss forward interpolation formula is given by
𝑦 = 𝑦0 + 𝑝Δ𝑦0 + (𝑝(𝑝 − 1)/2!)Δ²𝑦−1 + ((𝑝 + 1)𝑝(𝑝 − 1)/3!)Δ³𝑦−1 + ((𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/4!)Δ⁴𝑦−2 + ((𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/5!)Δ⁵𝑦−2 + ⋯,
where 𝑝 = (𝑥 − 𝑥0)/ℎ.
This formula is used to interpolate near the centre of the
tabulated data at a point that falls after the central point (𝑥0 , 𝑦0).

8
Chapter-3

Gauss Backward Interpolation Formula

9
Chap-3:Interpolation
Gauss Backward Interpolation Formula
Let us consider the difference table in which the central
ordinate is taken as 𝑦0 corresponding to 𝑥0.
The Gauss backward interpolation formula is written in the form:
𝑦 = 𝑦0 + 𝐺1′Δ𝑦−1 + 𝐺2′Δ²𝑦−1 + 𝐺3′Δ³𝑦−2 + 𝐺4′Δ⁴𝑦−2 + 𝐺5′Δ⁵𝑦−3 + ⋯,
where 𝐺1′ , 𝐺2′ , 𝐺3′ , … can be calculated and are given by
𝐺1′ = 𝑝
𝐺2′ = (𝑝 + 1)𝑝/2!
𝐺3′ = (𝑝 + 1)𝑝(𝑝 − 1)/3!
𝐺4′ = (𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)/4!
𝐺5′ = (𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/5!
where 𝑝 = (𝑥 − 𝑥0)/ℎ
10
Chap-3:Interpolation
Gauss Backward Interpolation Formula (continue …)
Thus, the Gauss backward interpolation formula is given by
𝑦 = 𝑦0 + 𝑝Δ𝑦−1 + ((𝑝 + 1)𝑝/2!)Δ²𝑦−1 + ((𝑝 + 1)𝑝(𝑝 − 1)/3!)Δ³𝑦−2 + ((𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)/4!)Δ⁴𝑦−2 + ((𝑝 + 2)(𝑝 + 1)𝑝(𝑝 − 1)(𝑝 − 2)/5!)Δ⁵𝑦−3 + ⋯,
where 𝑝 = (𝑥 − 𝑥0)/ℎ
This formula is used to interpolate near the centre of the tabulated data
at a point that falls before the central point (𝑥0 , 𝑦0).

11
Chapter-3

Stirling’s Interpolation Formula

Stirling‘s formula is obtained by taking the average of


Gauss‘ forward and backward interpolation formulas

12
Chap-3:Interpolation
Stirling’s Formula:

Stirling‘s formula is the average of the Gauss forward and Gauss
backward interpolation formulas, which is given by
𝑦 = 𝑦0 + 𝑝(Δ𝑦−1 + Δ𝑦0)/2 + (𝑝²/2!)Δ²𝑦−1 + (𝑝(𝑝² − 1)/3!)(Δ³𝑦−1 + Δ³𝑦−2)/2 + (𝑝²(𝑝² − 1)/4!)Δ⁴𝑦−2 + ⋯ ,
where 𝑝 = (𝑥 − 𝑥0)/ℎ
This formula is used to interpolate near the centre of the tabulated data
at points that may fall before or after the central point (𝑥0 , 𝑦0).

13
Stirling’s Formula: Continue … Chap-3:Interpolation

𝑦 = 𝑦0 + 𝑝(Δ𝑦−1 + Δ𝑦0)/2 + (𝑝²/2!)Δ²𝑦−1 + (𝑝(𝑝² − 1)/3!)(Δ³𝑦−1 + Δ³𝑦−2)/2 + (𝑝²(𝑝² − 1)/4!)Δ⁴𝑦−2 + ⋯ ,
14
Chapter-3

Examples

15
Example 1: Chap-3:Interpolation
From the following table estimate the value of 𝑦 for 𝑥 = 0.4 and 𝑥 = 0.6:
𝑥 0 0.16 0.32 0.48 0.64 0.80 0.96
𝑦 0 0.1682 0.3463 0.5463 0.7868 1.1008 1.5574

Solution: From the given data, we construct the difference table:


𝒙 𝒚 𝚫 𝚫𝟐 𝚫𝟑 𝚫𝟒 𝚫𝟓 𝚫𝟔
0 0
0.1682
0.16 0.1682 0.0098
0.1780 0.0122
0.32 0.3463 0.0220 0.0062
0.2000 0.0185 0.0082
(𝒙𝟎 , 𝒚𝟎 ) 0.48 0.5463 0.0405 0.0144 0.0138
0.2405 0.0329 0.0220
0.64 0.7868 0.0734 0.0364
0.3139 0.0693
0.80 1.1008 0.1424
0.4566
16
0.96 1.5574
Example 1: Chap-3:Interpolation
Solution: Continue …

We choose the central point to be 𝑥0 = 0.48 , ℎ = 0.16, then
𝒑 = (𝑥 − 𝑥0)/ℎ = (𝒙 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 .
At 𝑥 = 0.4, we apply the Gauss backward interpolation formula, where
𝒑 = (𝟎.𝟒 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 = −𝟎.𝟓.
So, 𝑦 = 𝑦0 + 𝑝Δ𝑦−1 + ((𝑝 + 1)𝑝/2!)Δ²𝑦−1 + ((𝑝 + 1)𝑝(𝑝 − 1)/3!)Δ³𝑦−2 + ⋯
⇒ 𝑦 = 𝟎.𝟓𝟒𝟔𝟑 + (−0.5) × 𝟎.𝟐𝟎𝟎𝟎 + ((−0.5 + 1)(−0.5)/2!) × 𝟎.𝟎𝟒𝟎𝟓 + ((−0.5 + 1)(−0.5)(−0.5 − 1)/3!) × 𝟎.𝟎𝟏𝟖𝟓 + ⋯
⇒ 𝒚 = 𝟎.𝟒𝟒𝟐𝟒
Thus, the estimated value of 𝑦 = 0.4424 at 𝑥 = 0.4

17
Example 1: Continue… Chap-3:Interpolation
Gauss Backward Interpolation
𝒙 𝒚 𝚫 𝚫𝟐 𝚫𝟑 𝚫𝟒 𝚫𝟓 𝚫𝟔
0 0
0.1682
0.16 0.1682 0.0098
0.1780 0.0122
0.32 0.3463 0.0220 0.0062
0.2000 0.0185 0.0082

0.48 0.5463 0.0405 0.0144 0.0138


0.2405 0.0329 0.0220
0.64 0.7868 0.0734 0.0364
0.3139 0.0693
0.80 1.1008 0.1424
0.4566
18
0.96 1.5574
Example 1: Chap-3:Interpolation
Solution: Continue …
We choose the central point to be 𝑥0 = 0.48 , ℎ = 0.16, then
𝒑 = (𝑥 − 𝑥0)/ℎ = (𝒙 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 .
At 𝑥 = 0.6, we apply the Gauss forward interpolation formula, where
𝒑 = (𝟎.𝟔 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 = 𝟎.𝟕𝟓.
So, 𝑦 = 𝑦0 + 𝑝Δ𝑦0 + (𝑝(𝑝 − 1)/2!)Δ²𝑦−1 + ((𝑝 + 1)𝑝(𝑝 − 1)/3!)Δ³𝑦−1 + ⋯
⇒ 𝑦 = 𝟎.𝟓𝟒𝟔𝟑 + 0.75 × 𝟎.𝟐𝟒𝟎𝟓 + (0.75(0.75 − 1)/2!) × 𝟎.𝟎𝟒𝟎𝟓 + ((0.75 + 1)(0.75)(0.75 − 1)/3!) × 𝟎.𝟎𝟑𝟐𝟗 + ⋯
⇒ 𝒚 = 𝟎.𝟕𝟐𝟏𝟏
Thus, the estimated value of 𝑦 = 0.7211 at 𝑥 = 0.6
19
Example 1: Continue… Chap-3:Interpolation
Gauss Forward Interpolation
𝒙 𝒚 𝚫 𝚫𝟐 𝚫𝟑 𝚫𝟒 𝚫𝟓 𝚫𝟔
0 0
0.1682
0.16 0.1682 0.0098
0.1780 0.0122
0.32 0.3463 0.0220 0.0062
0.2000 0.0185 0.0082

0.48 0.5463 0.0405 0.0144 0.0138


0.2405 0.0329 0.0220
0.64 0.7868 0.0734 0.0364
0.3139 0.0693
0.80 1.1008 0.1424
0.4566
20
0.96 1.5574
Example 2: Chap-3:Interpolation
From the following table, use Stirling’s formula to estimate the value of 𝑦 for
𝑥 = 0.4 and 𝑥 = 0.6:
𝑥 0 0.16 0.32 0.48 0.64 0.80 0.96
𝑦 0 0.1682 0.3463 0.5463 0.7868 1.1008 1.5574

Solution: From the given data, we construct the difference table:


𝒙 𝒚 𝚫 𝚫𝟐 𝚫𝟑 𝚫𝟒 𝚫𝟓 𝚫𝟔
0 0
0.1682
0.16 0.1682 0.0098
0.1780 0.0122
0.32 0.3463 0.0220 0.0062
0.2000 0.0185 0.0082
(𝒙𝟎 , 𝒚𝟎 ) 0.48 0.5463 0.0405 0.0144 0.0138
0.2405 0.0329 0.0220
0.64 0.7868 0.0734 0.0364
0.3139 0.0693
0.80 1.1008 0.1424
0.4566
21
0.96 1.5574
Example 2: Continue… Chap-3:Interpolation
For Stirling‘s Formula
𝒙 𝒚 𝚫 𝚫𝟐 𝚫𝟑 𝚫𝟒 𝚫𝟓 𝚫𝟔
0 0
0.1682
0.16 0.1682 0.0098
0.1780 0.0122
0.32 0.3463 0.0220 0.0062
0.2000 0.0185 0.0082

0.48 0.5463 0.0405 0.0144 0.0138


0.2405 0.0329 0.0220
0.64 0.7868 0.0734 0.0364
0.3139 0.0693
0.80 1.1008 0.1424
0.4566
22
0.96 1.5574
Example 2: Chap-3:Interpolation
Solution: Continue …

We choose the central point to be 𝑥0 = 0.48 , ℎ = 0.16, then
𝒑 = (𝑥 − 𝑥0)/ℎ = (𝒙 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 .
At 𝑥 = 0.4, by Stirling‘s formula, where 𝒑 = (𝟎.𝟒 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 = −𝟎.𝟓.
So, 𝑦 = 𝑦0 + 𝑝(Δ𝑦−1 + Δ𝑦0)/2 + (𝑝²/2!)Δ²𝑦−1 + (𝑝(𝑝² − 1)/3!)(Δ³𝑦−1 + Δ³𝑦−2)/2 + ⋯
𝑦 = 0.5463 + (−0.5)(0.2000 + 0.2405)/2 + ((−0.5)²/2!) × 0.0405 + ((−0.5)((−0.5)² − 1)/3!) × (0.0185 + 0.0329)/2 + ⋯
⇒ 𝒚 = 𝟎.𝟒𝟒𝟐𝟖
Thus, the estimated value of 𝒚 = 𝟎. 𝟒𝟒𝟐𝟖 at 𝒙 = 𝟎. 𝟒
23
Example 2: Chap-3:Interpolation
Solution: Continue …

We choose the central point to be 𝑥0 = 0.48 , ℎ = 0.16, then
𝒑 = (𝑥 − 𝑥0)/ℎ = (𝒙 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 .
At 𝑥 = 0.6, by Stirling‘s formula, where 𝒑 = (𝟎.𝟔 − 𝟎.𝟒𝟖)/𝟎.𝟏𝟔 = 𝟎.𝟕𝟓.
So, 𝑦 = 𝑦0 + 𝑝(Δ𝑦−1 + Δ𝑦0)/2 + (𝑝²/2!)Δ²𝑦−1 + (𝑝(𝑝² − 1)/3!)(Δ³𝑦−1 + Δ³𝑦−2)/2 + ⋯
𝑦 = 0.5463 + 0.75 × (0.2000 + 0.2405)/2 + ((0.75)²/2!) × 0.0405 + ((0.75)((0.75)² − 1)/3!) × (0.0185 + 0.0329)/2 + ⋯
⇒ 𝒚 = 𝟎.𝟕𝟐𝟏𝟓
Thus, the estimated value of 𝒚 = 𝟎. 𝟕𝟐𝟏𝟓 at 𝒙 = 𝟎. 𝟔
24
Chap-3:Interpolation
Classwork
Gauss‘ Forward and Backward & Stirling‘s
Problem 1: Use Stirling’s formula for the given data
𝒙 0.20 0.22 0.24 0.26 0.28 0.30
𝒇(𝒙) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139
to estimate the functional values 𝑓(0.25)
Problem 2:

Problem 3:

25
Chap-3:Interpolation

Bessel’s Formula
To be done by students

26
Chap-3:Interpolation
Bessel‘s formula uses following table:

27
Bessel‘s formula (Continue ...) Chap-3:Interpolation

28
Bessel‘s formula (Continue ...) Chap-3:Interpolation

29
Bessel‘s formula (Continue ...) Chap-3:Interpolation

30
Chap-3:Interpolation

31
Chap-3:Interpolation
End of Lecture-3

Next
Lecture-4
Lagrange Interpolation & Newton‘s
General Interpolation
33
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 4
Chap-3: Interpolation
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
• Finite differences and Interpolation [8]
• Solving ODE (IVP) [6]
• Numerical Differentiation and Integration [7]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Interpolation

Outline

• Lagrange‘s interpolation formula

• Divided differences

• Newton‘s general interpolation formula

• Examples and Class work

4
Chap-3:Interpolation
Lagrange Interpolation Formula
[Slides on Lagrange‘s interpolation formula and a worked example are given as figures.]
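Since the Lagrange slides survive only as figures, here is a minimal sketch of the standard Lagrange form (not necessarily the slides' exact notation): for data (xᵢ, yᵢ) the interpolant is P(x) = Σ yᵢ lᵢ(x) with lᵢ(x) = Π over j ≠ i of (x − xⱼ)/(xᵢ − xⱼ). The data used below are hypothetical.

def lagrange(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis l_i(x)
        total += yi * li
    return total

# Four points of y = x^3: a cubic is reproduced exactly, so y(2) = 8
print(lagrange([0, 1, 3, 4], [0, 1, 27, 64], 2))   # 8.0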
Chapter-3
Newton’s General Interpolation Formula
• The Lagrange interpolation formula has the disadvantage that if
another interpolating point is added, the interpolation
coefficients 𝑙𝑖(𝑥) will have to be recomputed
• It is therefore necessary to look for some other form of
formula to overcome this drawback
• Newton‘s general interpolation formula is one such formula that deals
with this kind of situation
• To develop Newton‘s general interpolation formula, we need
the idea of divided differences
16
Chap-3:Interpolation
[Slides on divided differences and Newton‘s general (divided difference) interpolation formula, with worked examples, are given as figures.]
Chapter-3
Classwork
Lagrange and Newton‘s Genereal Formula
Problem 1:

Problem 2:

27
End of Chapter-3

Next
Chapter-4
Numerical Differentiation and Integration

28
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-4: Numerical Differentiation &
Integration 1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
 Finite differences and Interpolation [8]
• Numerical Differentiation and Integration [7]
• Numerical Methods for Differential Equations
ODE (IVP) [6]
• Matrices and System of linear equations [6]
• Curve fitting [2]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Chap-4: Numerical Differentiation &
Integration

Outline

• Introduction
• Numerical Differentiation formulas based on
• Newton‘s Forward Interpolation Method
• Newton‘s Backward Interpolation Method
• Stirling‘s Formula
• Maximum and Minimum values
• Numerical Integration
- Trapezoidal, Simpson‘s 1/3, Simpson‘s 3/8 Rules

• Examples and Class work


4
Chap-4: Numerical Differentiation &
Integration

Introduction
to
Numerical Differentiation &
Integration

5
Chap-4: Numerical Differentiation &
Integration
Introduction:
• The need for differentiation and integration of a function arises
quite often in engineering and science problems
• If the function is given explicitly, its derivative and in many
cases its integral can be found exactly
• However, in many situations we may not know the exact
function; what we know is only the values of the
function at a discrete set of points (𝑥𝑖 , 𝑦𝑖), 𝑖 = 0,1,2, … , 𝑛
• In some situations the function is known but too complex to
differentiate or integrate directly
• In both situations, we seek the help of numerical
techniques to compute the derivative and the integral
• The process of estimating derivatives and integrals using
such approximation techniques is known as Numerical
Differentiation and Numerical Integration
6
Chap-4: Numerical Differentiation &
Introduction: Continues … Integration

Idea !!!
• From the given set of data points (𝑥𝑖 , 𝑦𝑖), 𝑖 = 0,1,2, … , 𝑛, we
construct the interpolating polynomial 𝑦 = 𝑦𝑛(𝑥) that we
learned in Chapter 3
• Then, differentiation and integration can be very easily
performed on such interpolating polynomials
[Figure: data points with the interpolating polynomial 𝒚 = 𝒚𝒏(𝒙) plotted against 𝒙]
7
Chap-4: Numerical Differentiation &
Integration

Numerical Differentiation
• Differentiation formula developed by using Newton‘s
forward interpolation

• Differentiation formula developed by using Newton‘s


backward interpolation

• Differentiation formula developed by Stirling‘s


interpolation
8
Chap-4: Numerical Differentiation & Integration
Numerical Differentiation Formulas:
[Formulas (1)–(9), obtained by differentiating Newton‘s forward, Newton‘s backward, and Stirling‘s interpolation formulas, are given as figures on slides 9–12.]
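The formulas (1)–(9) themselves are in the figures and are derived from the interpolation formulas of Chapter 3. As a rough, hedged sketch of the same idea for equally spaced tabulated data, the standard difference quotients below estimate dy/dx and d²y/dx² (these are the simplest such formulas, not the slides' higher-order expressions):

def derivatives_equally_spaced(y, h, i):
    # First and second derivative estimates at index i of a table with spacing h.
    # Forward differences at the left end, backward at the right end,
    # central differences in the interior.
    n = len(y)
    if i == 0:
        d1 = (y[1] - y[0]) / h
        d2 = (y[2] - 2 * y[1] + y[0]) / h**2
    elif i == n - 1:
        d1 = (y[-1] - y[-2]) / h
        d2 = (y[-1] - 2 * y[-2] + y[-3]) / h**2
    else:
        d1 = (y[i + 1] - y[i - 1]) / (2 * h)
        d2 = (y[i + 1] - 2 * y[i] + y[i - 1]) / h**2
    return d1, d2

# y = x^2 sampled with h = 0.5: exact values at x = 1 are dy/dx = 2, d²y/dx² = 2
ys = [t * t for t in (0.0, 0.5, 1.0, 1.5, 2.0)]
print(derivatives_equally_spaced(ys, 0.5, 2))   # (2.0, 2.0)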
Chap-4: Numerical Differentiation &
Integration

Examples
Finding the Derivatives

13
Chap-4: Numerical Differentiation & Integration
[Worked examples on finding derivatives from tabulated data (slides 14–20, including a case with 𝑥𝑛 = 2.0) are given as figures.]
Chapter-4
Classwork
Numerical Differentiation
Problem 1: Compute the first and second derivatives 𝑑𝑦/𝑑𝑥 and 𝑑²𝑦/𝑑𝑥² at the
points 𝑥 = 0 and 𝑥 = 6 using the following tabulated data:
𝑥 0 1 2 3 4 5 6
𝑦 2 3 10 29 66 127 218

Problem 2: Apply Stirling’s formula to compute the derivatives 𝑑𝑦/𝑑𝑥 and 𝑑²𝑦/𝑑𝑥²
of the function at the point 𝑥 = 1.4 by using the following
tabulated data:
𝑥 1.0 1.2 1.4 1.6 1.8
𝑦 2.7183 3.3201 4.0552 4.9530 6.0496
21
Chapter-4
Classwork
Numerical Differentiation
Problem 3:

Problem 4:

22
Chap-4: Numerical Differentiation &
Integration

Maximum and Minimum Values

23
Chap-4: Numerical Differentiation &
Integration

24
Chap-4: Numerical Differentiation &
Integration

25
Chap-4: Numerical Differentiation &
Integration

Examples
Finding Maximum/Minimum Values

26
Chap-4: Numerical Differentiation &
Integration

27
Chap-4: Numerical Differentiation &
Integration

28
Chapter-4
Classwork
Maximum/Minimum Values
Problem 1:

29
Chap-4: Numerical Differentiation &
Integration

Numerical Integration
• Tabulated data (𝑥𝑖 , 𝑦𝑖), 𝑖 = 0,1,2, … , 𝑛 of an unknown function 𝑦 = 𝑦(𝑥)
may be given and we are asked to evaluate ∫ₐᵇ 𝑦 𝑑𝑥
• An integral of the type ∫ₐᵇ 𝑒^(−𝑥²) 𝑑𝑥 may have to be evaluated, which
cannot be done exactly as 𝑒^(−𝑥²) has no antiderivative in closed form
• In both the situations above, we need to apply numerical methods
of integration
• The idea is to construct the interpolating polynomial 𝑦 = 𝑦𝑛(𝑥) using
the given set of data points (𝑥𝑖 , 𝑦𝑖); the integrand function is
replaced by the interpolating polynomial 𝑦 = 𝑦𝑛(𝑥) so that
∫ₐᵇ 𝑦 𝑑𝑥 ≈ ∫ₐᵇ 𝑦𝑛 𝑑𝑥
30
Chap-4: Numerical Differentiation &
Integration

31
General formula for Numerical Integration: Chap-4: Numerical Differentiation &
Integration

32
Chap-4: Numerical Differentiation &
Integration

Trapezoidal Rule

33
Chap-4: Numerical Differentiation &
Integration

34
Trapezoidal Rule: Continue ... Chap-4: Numerical Differentiation &
Integration

35
Chap-4: Numerical Differentiation &
Integration

Simpson’s 1/3- Rule

36
Chap-4: Numerical Differentiation &
Integration

37
Simpson‘s 1/3 Rule: Continue ... Chap-4: Numerical Differentiation &
Integration

38
Chap-4: Numerical Differentiation &
Integration

Simpson’s 3/8- Rule

39
Chap-4: Numerical Differentiation &
Integration

40
Simpson‘s 3/8 Rule: Continue ... Chap-4: Numerical Differentiation &
Integration

41
Chap-4: Numerical Differentiation &
Integration

Examples
Trapezoidal and Simpson‘s-1/3

42
Chap-4: Numerical Differentiation &
Integration

43
Chap-4: Numerical Differentiation &
Integration

44
Chap-4: Numerical Differentiation &
Integration

45
Chap-4: Numerical Differentiation &
Integration

𝒉 Trapezoidal Simpson‘s-1/3 Exact


0.5 0.7084 0.6945 0.693147
0.25 0.6970 0.6932 0.693147
0.125 0.6941 0.6932 0.693147
46
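The Exact column above equals ln 2 ≈ 0.693147, so the example appears to be ∫₀¹ dx/(1+x) (equivalently ∫₁² dx/x); that integrand is an inference, not stated on this slide. A minimal composite Trapezoidal / Simpson's-1/3 sketch that reproduces the table's trend (agreeing to within a unit in the last printed digit):

def trapezoidal(f, a, b, n):
    # Composite trapezoidal rule with n strips.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson13(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even.
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of strips")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd ordinates
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even ordinates
    return h * s / 3

f = lambda x: 1 / (1 + x)
for h in (0.5, 0.25, 0.125):
    n = int(round(1 / h))            # number of strips on [0, 1]
    print(h, round(trapezoidal(f, 0, 1, n), 4), round(simpson13(f, 0, 1, n), 4))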
Chap-4: Numerical Differentiation &
Integration

47
Chap-4: Numerical Differentiation &
Integration

48
Chap-4: Numerical Differentiation &
Integration

49
Chapter-4
Classwork
Numerical Integration
Problem 1: Evaluate the integral ∫₀^π 𝑥 sin 𝑥 𝑑𝑥 using the Trapezoidal rule
with five ordinates.

Problem 2: Estimate the value of the integral ∫₁³ (1/𝑥) 𝑑𝑥 by Simpson’s-1/3 rule
with 4 strips and 8 strips respectively. Determine the error in each
case.

50
Chapter-4
Classwork
Numerical Integration
Problem 3: The velocities of a car (running on a straight road) at
interval of 2 minutes are given below.
Time in minutes 0 2 4 6 8 10 12
Velocity in km/hr 0 22 30 27 18 7 0

Apply the Simpson’s-1/3-Rule to find the distance covered by the car.

Problem 4: A curve 𝑦 = 𝑓(𝑥) is given by the points of the table given


below:
𝑥 0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
𝑦 23 19 14 11 12.5 16 19 20 20

Estimate the area bounded by the curve, the x-axis and the extreme
ordinates. 51
Chap-4: Numerical Differentiation &
Integration

Numerical Double Integration


Left for students to prepare

52
End of Lecture-1

Next
Chapter-5
Least Square Curve Fitting

53
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-5: Least Square Curve Fitting
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
 Finite differences and Interpolation [8]
 Numerical Differentiation and Integration [7]
• Curve fitting [2]
• Numerical Methods for Differential Equations
ODE (IVP) [6]
• Matrices and System of linear equations [6]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Chap-5: Curve Fitting

Outline

• Introduction
• Least square curve fitting procedure
• Fitting a straight line
• Fitting a quadratic curve
• Non-linear curve fitting
- Fitting by power function
- Fitting by exponential function
• Fitting a polynomial degree n

4
Chap-5: Curve Fitting

Introduction
to
Curve Fitting

5
Chap-5: Curve Fitting
Introduction:

• In many applications, it often becomes necessary to
establish a mathematical relationship between experimental
values
• This mathematical relation could be used to find a missing
value in the data or to predict/forecast a value in the future
• A common strategy for such cases is to derive an
approximating function that broadly fits the general trend of
the data without necessarily passing through the individual
points
• The curve is fitted so that the error between the data points and
the fitted curve is least. Hence, the method is widely known as the
Least Squares Method
6
Introduction: Continues … Chap-5: Curve Fitting

Idea !!!
From the given set of data points (𝑥𝑖 , 𝑦𝑖), 𝑖 = 1,2, … , 𝑚, we fit a
curve 𝑌 = 𝑓(𝑥) that minimizes the sum of the squares of the errors
[Figure: data points with the fitted curve 𝒀 = 𝒇(𝒙)]

7
Chap-5: Curve Fitting

8
Chap-5: Curve Fitting

9
Fitting a Straight line (Continue ...) Chap-5: Curve Fitting

10
Fitting a Straight line (Continue ...) Chap-5: Curve Fitting

11
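The straight-line derivation above is given as figures. As a minimal sketch, assuming the usual model y = a + bx, the normal equations Σy = ma + bΣx and Σxy = aΣx + bΣx² give a and b directly; the data below are hypothetical.

def fit_line(xs, ys):
    # Least-squares straight line y = a + b x via the normal equations.
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    a = (sy - b * sx) / m
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)
print("y = %.3f + %.3f x" % (a, b))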
Chap-5: Curve Fitting

Examples
Straight Line Fitting

12
Chap-5: Curve Fitting

13
Chap-5: Curve Fitting

14
Chap-5: Curve Fitting

Non-linear Curve Fitting


(Transforming into Linear Form)
• Given data may not always follow a linear relationship, which
can be checked by plotting the data
• Some non-linear models can be easily transformed into a
linear relationship
• Some non-linear laws and their transformations into linear
form are presented in this section

15
Chap-5: Curve Fitting
Non-linear curves transformed into linear form:

16
Chap-5: Curve Fitting
Non-linear curves transformed into linear form: Continue ...

17
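The transformation table above is in figures. As one hedged illustration, the exponential law y = a·e^(bx) becomes linear after taking logarithms, ln y = ln a + b x, so the straight-line normal equations can be reused on (x, ln y); the data below are hypothetical.

import math

def fit_exponential(xs, ys):
    # Fit y = a * exp(b x) by least squares on the transformed data (x, ln y); requires y > 0.
    zs = [math.log(y) for y in ys]
    m = len(xs)
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    b = (m * sxz - sx * sz) / (m * sxx - sx * sx)
    ln_a = (sz - b * sx) / m
    return math.exp(ln_a), b

xs = [0, 1, 2, 3, 4]
ys = [2.0, 3.3, 5.4, 9.0, 14.7]      # roughly y = 2 e^(0.5 x)
a, b = fit_exponential(xs, ys)
print("y ≈ %.3f e^(%.3f x)" % (a, b))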
Chap-5: Curve Fitting

Examples
Non-linear Curve Fitting

18
Chap-5: Curve Fitting
Example: (Non-linear curve fitting ): Continue ...

19
Chap-5: Curve Fitting
Example: (Non-linear curve fitting) : Continue ...

(2)

20
Chap-5: Curve Fitting
Example: (Non-linear curve fitting) : Continue ...

21
Chap-5: Curve Fitting

Curve Fitting by Polynomials

22
Chap-5: Curve Fitting

23
Curve Fitting by Polynomials: Continue ... Chap-5: Curve Fitting

24
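The polynomial-fitting slides are figures as well. A minimal numpy sketch: build the Vandermonde matrix V with columns 1, x, x², … and solve the least-squares problem V c ≈ y (equivalent to the normal equations); the degree and data below are illustrative.

import numpy as np

def fit_polynomial(xs, ys, degree):
    # Least-squares polynomial fit: minimise ||V c - y|| for the Vandermonde matrix V.
    V = np.vander(np.asarray(xs, float), degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(ys, float), rcond=None)
    return coeffs        # c0 + c1 x + c2 x^2 + ...

xs = [0, 1, 2, 3, 4]
ys = [1.1, 5.9, 17.2, 33.8, 57.1]    # close to y = 1 + 2x + 3x^2
print(fit_polynomial(xs, ys, 2))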
Example (Curve Fitting by Polynomials) Chap-5: Curve Fitting

25
Chap-5: Curve Fitting
Example (Curve Fitting by Polynomials): Continue ...

26
Chapter-5
Classwork
Least Square Fitting
Problem 1:

Problem 2

27
Chapter-5
Classwork
Least Square Fitting
Problem 3:

Problem 4

28
End of Lecture-1

Next
Chapter-6
Numerical Solutions of Ordinary
Differential Equations (ODEs)
29
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-6: Numerical Solution of ODEs
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
 Finite differences and Interpolation [8]
 Numerical Differentiation and Integration [7]
 Curve fitting [2]
• Numerical Solutions of Ordinary Differential
Equations (ODE-IVP) [6]
• Matrices and System of linear equations [6]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Chap-6: Numerical Solutions of ODEs
Outline
 Introduction

 Solution by Taylor’s Series

 Picard’s Method of Successive Approximations

 Examples and classwork


Chap-6: Numerical Solutions of ODEs

Introduction
Chap-6: Numerical Solutions of ODEs

Why do we need numerical methods for differential equations?

A simple first order equation:
𝑑𝑦/𝑑𝑡 = 𝑦 − 𝑒^(−𝑡²), 𝑦(𝑡0) = 𝑦0

Remark: This first order equation cannot be solved
analytically (exactly)
Chap-6: Numerical Solutions of ODEs

Why do we need to learn numerical methods for differential


equations? (Continue ...)

Motion of a simple pendulum:


[Figure: pendulum of length 𝒍, mass 𝒎, angular displacement 𝜽]
𝑑²𝜃/𝑑𝑡² + (𝑔/𝑙) sin 𝜃 = 0,
𝜃(𝑡0) = 𝜃0 , 𝜃′(𝑡0) = 𝜔0

Remark: This second order ODE cannot be solved
analytically
Chap-6: Numerical Solutions of ODEs

Why do we need to learn numerical methods for differential


equations? (Continue ...)

Planetary Motion under gravity:

𝑚 𝑑²𝒓/𝑑𝑡² = −𝐺𝑚𝑀 𝒓/|𝒓|³
[Figure: position vector 𝒓 of the planet]

Remark: This second order ODE cannot be solved
analytically
Numerical Methods: System of ODEs

System of First Order ODEs : Spread of Infectious Diseases


Kermack-McKendrick’s SIR Model:
𝑑𝑆(𝑡)/𝑑𝑡 = −𝛽𝑆(𝑡)𝐼(𝑡),
𝑑𝐼(𝑡)/𝑑𝑡 = 𝛽𝑆(𝑡)𝐼(𝑡) − 𝛾𝐼(𝑡),
𝑑𝑅(𝑡)/𝑑𝑡 = 𝛾𝐼(𝑡),
Figure: Dynamics of the spread of diseases (compartments S → I → R with rates 𝜷, 𝜸)
where,
𝑆(𝑡): susceptible population at time t
𝐼(𝑡): infectious population at time t
𝑅(𝑡): recovered population at time t
𝛽, 𝛾 are the rates
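To see how such a system is actually integrated numerically (anticipating Euler's method from the next lecture), here is a hedged sketch; β, γ, the initial populations and the step size are illustrative assumptions, not values from the slides.

def sir_euler(beta, gamma, S0, I0, R0, h, steps):
    # Forward-Euler integration of S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I.
    S, I, R = S0, I0, R0
    history = [(0.0, S, I, R)]
    for k in range(1, steps + 1):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + h * dS, I + h * dI, R + h * dR
        history.append((k * h, S, I, R))
    return history

# Assumed parameters: beta = 0.0005, gamma = 0.1, one infected person in a population of 1000
for t, S, I, R in sir_euler(0.0005, 0.1, 999, 1, 0, h=0.5, steps=10):
    print("t=%4.1f  S=%8.2f  I=%8.2f  R=%8.2f" % (t, S, I, R))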
Chap-6: Numerical Solutions of ODEs

Why do we need to learn numerical methods for differential


equations? (Continue ...)
• Differential equations play a central role in modeling problems of
engineering, physics, aeronautics, astronomy, biology, medicine,
chemistry, environmental science, economics, and many other
areas
• There are many differential equations encountered in applications
which may be difficult or impossible to solve analytically
• To solve such problems, one has to devise alternative approaches
that provide an approximate solution of the given equation,
known as numerical methods
• The goal of this chapter is to develop numerical methods so that
one can use a calculator or computer program to solve
differential equations and obtain an approximate solution
• The most used ones are Euler‘s and Runge-Kutta methods for solving
ODEs
Chap-6: Numerical Solutions of ODEs

Numerical Solution of Ordinary Differential Equations (ODEs)


We will discuss various numerical methods to find the numerical
solutions of the ordinary differential equation of the form
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦)……………………..(1)
with initial condition 𝑦(𝑥0) = 𝑦0 .

A. Solution in terms of the power series of 𝒙


(i) Solution by Taylor‘s Series method
(ii) Picard‘s method of successive approximations

B. Solution as a set of tabulated values of 𝒙 and 𝒚


(i) Euler‘s Method
(ii) Runge-Kutta Methods
Chap-6: Numerical Solutions of ODEs

A. Solution in Terms of Power


Series of 𝒙

Solution by Taylor‘s Series


Solution by Taylor‘s Series: Chap-6: Numerical Solutions of ODEs
Chap-6: Numerical Solutions of ODEs
Solution by Taylor‘s Series: Continue ...
Chap-6: Numerical Solutions of ODEs
Solution by Taylor‘s Series: Continue ...
Chap-6: Numerical Solutions of ODEs
Solution by Taylor‘s Series: Continue ...
Chap-6: Numerical Solutions of ODEs
Example:
Chap-6: Numerical Solutions of ODEs
Example: Continue ...
Chap-6: Numerical Solutions of ODEs
Example: Continue ...
Chap-6: Numerical Solutions of ODEs
Example: Continue ...
Example: Chap-6: Numerical Solutions of ODEs

Solution:
Example: Continue ... Chap-6: Numerical Solutions of ODEs
Chapter-6
Classwork
Taylor‘s Series Method
Problem 1:

Problem 2

23
Chap-6: Numerical Solutions of ODEs

A. Solution in Terms of Power


Series of 𝒙

Picard‘s Method of Successive


Approximations
Picard‘s Method: Chap-6: Numerical Solutions of ODEs
Picard‘s Method: Continue ... Chap-6: Numerical Solutions of ODEs
Picard‘s Method: Continue ... Chap-6: Numerical Solutions of ODEs

This is known as Picard‘s method of successive approximations.
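The worked examples on the following slides are figures. As a minimal symbolic sketch of this iteration, y_{n+1}(x) = y0 + ∫ from x0 to x of f(t, y_n(t)) dt, carried out with sympy; the ODE dy/dx = x + y with y(0) = 1 is an assumed illustration, not necessarily the slides' example.

import sympy as sp

x, t = sp.symbols('x t')
f = lambda xx, yy: xx + yy          # assumed right-hand side f(x, y) = x + y
x0, y0 = 0, 1

y_n = sp.Integer(y0)                # zeroth approximation y_0(x) = y0
for n in range(1, 4):
    integrand = f(t, y_n.subs(x, t))             # f(t, y_n(t))
    y_n = y0 + sp.integrate(integrand, (t, x0, x))
    print("y_%d(x) =" % n, sp.expand(y_n))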


Example: Chap-6: Numerical Solutions of ODEs
Example: Continue ... Chap-6: Numerical Solutions of ODEs
Example: Continue ... Chap-6: Numerical Solutions of ODEs
Example: Continue ... Chap-6: Numerical Solutions of ODEs
Example: Chap-6: Numerical Solutions of ODEs

Solution:
Chapter-6
Classwork
Picard‘s Successive Approx. Method
Problem 1:

Problem 2

33
End of Lecture-1

Next Lecture
Euler‘s and Runge-Kutta Methods

34
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 2
Chap-6: Numerical Solution of ODEs
1
Numerical Methods
Contents

 Basic introduction of Computer programming


language [4]
 Errors in numerical computation [5]
 Root findings [7]
 Finite differences and Interpolation [8]
 Numerical Differentiation and Integration [7]
 Curve fitting [2]
• Numerical Solutions of Ordinary Differential
Equations (ODE-IVP) [6]
• Matrices and System of linear equations [6]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Chap-6: Numerical Solutions of ODEs
Outline
 Euler’s Method

 Euler’s modified method/Heun’s Method/Runge-


Kutta 2nd order method

 Runge-Kutta fourth order method

 Boundary value problem ODE and finite


difference method

 Examples and classwork


Chap-6: Numerical Solutions of ODEs

Numerical Solution of Ordinary Differential Equations (ODEs)


We will discuss various numerical methods to find the numerical
solutions of the ordinary differential equation of the form
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦)……………………..(1)
with initial condition 𝑦(𝑥0) = 𝑦0 .

A. Solution in terms of the power series of 𝒙


(i) Solution by Taylor‘s Series method
(ii) Picard‘s method of successive approximations

B. Solution as a set of tabulated values of 𝒙 and 𝒚


(i) Euler‘s Method
(ii) Runge-Kutta Methods
Chap-6: Numerical Solutions of ODEs

B. Solution as a Set of
Tabulated Values of 𝒙 and 𝒚
• Euler‘s Method
• Euler‘s Modified Method
• Runge-Kutta Method
Chap-6: Numerical Solutions of ODEs
General Idea:
Let us consider first order ODE (IVP)
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0 --------------------- (1)
Let us assume that (1) has unique solution 𝑦 = 𝜙(𝑥) on the
interval 𝑎 < 𝑥 < 𝑏.
We divide the interval by a set of equidistant points
𝑥0 , 𝑥1 , 𝑥2 , … , 𝑥𝑛 such
that 𝑎 = 𝑥0 < 𝑥1 < 𝑥2 < ⋯ < 𝑥𝑛 = 𝑏
and 𝑥𝑖+1 − 𝑥𝑖 = ℎ (fixed step size).
At each point 𝑥 = 𝑥𝑖 , we compute the
approximate solution 𝑦 = 𝑦𝑖 of (1),
that means, 𝑦𝑖 ≈ 𝜙(𝑥𝑖), 𝑖 = 1,2, … , 𝑛,
by using some numerical scheme of the
form:
𝑦𝑖+1 = 𝑦𝑖 + ℎΨ(𝑥𝑖 , 𝑦𝑖 , ℎ), where Ψ is
known as the slope estimator at (𝑥𝑖 , 𝑦𝑖)
[Figure: grid points 𝒙𝟎 , 𝒙𝟏 , … , 𝒙𝒏 on [𝒂, 𝒃] with the point (𝒙𝒊 , 𝒚𝒊) marked]
Chap-6: Numerical Solutions of ODEs
General Idea: (Continue...)
𝒚𝒊+𝟏 = 𝒚𝒊 + 𝒉𝚿(𝒙𝒊 , 𝒚𝒊 , 𝒉), where 𝚿 is known as the slope estimator at (𝒙𝒊 , 𝒚𝒊)
(i) Euler‘s method: (1st order)
Ψ = 𝑘1 , where 𝑘1 = 𝑓(𝑥𝑖 , 𝑦𝑖)
(ii) Euler‘s modified method: (2nd order)
Ψ = (𝑘1 + 𝑘2)/2, where 𝑘1 = 𝑓(𝑥𝑖 , 𝑦𝑖),
𝑘2 = 𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + ℎ𝑘1)
(iii) Runge-Kutta method: (4th order)
Ψ = (𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)/6, where
𝑘1 = 𝑓(𝑥𝑖 , 𝑦𝑖),
𝑘2 = 𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + (ℎ/2)𝑘1),
𝑘3 = 𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + (ℎ/2)𝑘2),
𝑘4 = 𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + ℎ𝑘3)
[Figure: solution curve 𝒚 = 𝝓(𝒙) with the point (𝒙𝒊 , 𝒚𝒊) and grid points 𝒙𝟎 , 𝒙𝟏 , … , 𝒙𝒏]
Chap-6: Numerical Solutions of ODEs

Euler‘s Method
Explained by Three Approaches
(i) By Tangent Line
(ii) By Derivative Approximation
(iii) By Integral Approximation
Chap-6: Numerical Solutions of ODEs

Euler‘s Method:
Let us consider first order ODE (IVP)
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0 --------------------- (1)
Let us assume that (1) has unique solution 𝑦 = 𝜙(𝑥) on the
interval 𝑎 < 𝑥 < 𝑏.
Then, the Euler‘s method for the (1) is given by
𝑦𝑖+1 = 𝑦𝑖 + (𝑥𝑖+1 − 𝑥𝑖 )𝑓(𝑥𝑖 , 𝑦𝑖 )

If step size is uniform, i.e., 𝑥𝑖+1 − 𝑥𝑖 = ℎ, then Euler’s method


is
𝒚𝒊+𝟏 = 𝒚𝒊 + 𝒉𝒇 𝒙𝒊 , 𝒚𝒊 , 𝒊 = 𝟎, 𝟏, … 𝒏 − 𝟏 ------------------(2)

where 𝑦0 = 𝑦(𝑡0 ) and 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) is the solution approximation


of (1) at 𝑥 = 𝑥𝑖
Chap-6: Numerical Solutions of ODEs

Derivation (Euler‘s Method): By tangent method


Let us divide the interval 𝑎 ≤ 𝑥 ≤ 𝑏 by set of points (equidistance )
𝑥0 , 𝑥1 , 𝑥2 , … , 𝑥𝑛 such that 𝑎 = 𝑥0 < 𝑥1 < 𝑥2 < ⋯ < 𝑥𝑛 = 𝑏 and
𝑥𝑖+1 − 𝑥𝑖 = ℎ (fixed step size).
[Figure: solution curve 𝒚 = 𝝓(𝒙) with the tangent line at (𝒙𝒊 , 𝒚𝒊) and grid points 𝒙𝟎 , 𝒙𝟏 , … , 𝒙𝒏]
The equation of the tangent to the solution curve
𝑦 = 𝜙(𝑥) of (1) at the point (𝑥𝑖 , 𝑦𝑖) is
𝑦 = 𝑦𝑖 + 𝑓(𝑥𝑖 , 𝑦𝑖)(𝑥 − 𝑥𝑖)
The value of 𝑦 from this tangent at 𝑥 = 𝑥𝑖+1 is
𝑦𝑖+1 = 𝑦𝑖 + 𝑓(𝑥𝑖 , 𝑦𝑖)(𝑥𝑖+1 − 𝑥𝑖)
⇒ 𝑦𝑖+1 = 𝑦𝑖 + ℎ𝑓(𝑥𝑖 , 𝑦𝑖), 𝑖 = 0,1, … , 𝑛 − 1,
which is Euler‘s method to approximate the
value of 𝑦 = 𝜙(𝑥) at 𝑥 = 𝑥𝑖+1 , that means,
𝑦𝑖+1 ≈ 𝜙(𝑥𝑖+1) when 𝑦𝑖 = 𝜙(𝑥𝑖) is given.
Chap-6: Numerical Solutions of ODEs
Derivation (Euler‘s Method): By forward difference
approximation of derivative
Let us divide the interval 𝑎 ≤ 𝑥 ≤ 𝑏 by set of points (equidistance )
𝑥0 , 𝑥1 , 𝑥2 , … , 𝑥𝑛 such that 𝑎 = 𝑥0 < 𝑥1 < 𝑥2 < ⋯ < 𝑥𝑛 = 𝑏 and
𝑥𝑖+1 − 𝑥𝑖 = ℎ (fixed step size).
ODE (1) will be true at each point 𝑥 = 𝑥𝑖 for the solution 𝑦 = 𝜙 𝑥 ,
that means,
𝑑𝜙(𝑥𝑖 )
= 𝑓(𝑥𝑖 , 𝜙(𝑥𝑖 ))
𝑑𝑥
Approximating the derivative by the forward finite difference method,
𝜙 𝑥𝑖+1 − 𝜙(𝑥𝑖 )
⇒ ≈ 𝑓(𝑥𝑖 , 𝜙(𝑥𝑖 ))
𝑥𝑖+1 − 𝑥𝑖
𝜙 𝑥𝑖+1 ≈ 𝜙 𝑥𝑖 + (𝑥𝑖+1 − 𝑥𝑖 )𝑓(𝑥𝑖 , 𝜙(𝑥𝑖 ))

Let 𝑦𝑖 denote the approximate value of 𝜙(𝑥𝑖 ), then the above relation


becomes 𝒚𝒊+𝟏 = 𝒚𝒊 + 𝒉𝒇 𝒙𝒊 , 𝒚𝒊 , 𝒊 = 𝟎, 𝟏, … , 𝒏 − 𝟏
Which is the Euler‘s method for ODE(IVP) (1).
Chap-6: Numerical Solutions of ODEs
Derivation (Euler‘s Method): By approximation of integral
Integrate both sides of the given ODE (1)
w.r.t. x from 𝑥 = 𝑥𝑖 to 𝑥𝑖+1 , we get
∫[𝑥𝑖, 𝑥𝑖+1] 𝑑𝜙(𝑥) = ∫[𝑥𝑖, 𝑥𝑖+1] 𝑓(𝑥, 𝜙(𝑥)) 𝑑𝑥
If 𝑥𝑖+1 − 𝑥𝑖 ≈ 0 ⇒ 𝑓(𝑥, 𝜙(𝑥)) ≈ 𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)),
then the above integral becomes
∫[𝑥𝑖, 𝑥𝑖+1] 𝑑𝜙(𝑥) ≈ ∫[𝑥𝑖, 𝑥𝑖+1] 𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) 𝑑𝑥
⇒ 𝜙(𝑥𝑖+1) − 𝜙(𝑥𝑖) ≈ (𝑥𝑖+1 − 𝑥𝑖) 𝑓(𝑥𝑖 , 𝜙(𝑥𝑖))
Replacing 𝜙(𝑥𝑖) ⟶ 𝑦𝑖 , we have Euler‘s method
𝒚𝒊+𝟏 = 𝒚𝒊 + 𝒉𝒇(𝒙𝒊 , 𝒚𝒊), 𝒊 = 𝟎, 𝟏, … , 𝒏 − 𝟏
[Figure: the integrand 𝒇(𝒙, 𝝓(𝒙)) approximated by its value at 𝒙𝒊 over [𝒙𝒊 , 𝒙𝒊+𝟏]]
Chap-6: Numerical Solutions of ODEs

ALGORITHM EULER (𝐟, 𝒙𝟎 , 𝒚𝟎 , 𝒉, 𝒏)


This algorithm computes the solution of the ODE (IVP):
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0
at equidistant points 𝑥1 = 𝑥0 + ℎ, 𝑥2 = 𝑥1 + ℎ, …, 𝑥𝑛 = 𝑥𝑛−1 + ℎ;
here ƒ is such that this problem has a unique solution on the interval
[𝑥0 , 𝑥𝑛 ]

INPUT: Initial values 𝑥0 , 𝑦0 , 𝑓 = 𝑓(𝑥, 𝑦), step size ℎ, number of steps 𝑛

OUTPUT: 𝒙𝒊 , 𝒚𝒊 where 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) at 𝑥𝑖 , 𝑖 = 1,2, … , 𝑛


for i = 0,1,2,...,n-1
𝑘1 = ℎ𝑓(𝑥𝑖 , 𝑦𝑖 )
𝑦𝑖+1 = 𝑦𝑖 + 𝑘1
𝑥𝑖+1 = 𝑥𝑖 + ℎ
end
Chap-6: Numerical Solutions of ODEs

Example by Euler‘s
Method
Chap-6: Numerical Solutions of ODEs
Example: Consider the ODE (IVP) 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1,
use Euler’s method to find the approximate solution on the
interval 0 ≤ 𝑥 ≤ 2 by taking the step size 0.5
Solution: The given ODE (IVP) is 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1. Here,
𝑓(𝑥, 𝑦) = 3 − 2𝑥 − (1/2)𝑦, 𝑥0 = 0, 𝑦0 = 1
Now, we compute 𝑓 𝑥0 , 𝑦0 = 2.5 and so
𝑦1 = 𝑦0 + ℎ𝑓(𝑥0 , 𝑦0 )
⇒ 𝑦1 = 1 + 0.5 × 2.5
⇒ 𝒚𝟏 = 𝟐. 𝟐𝟓
Now, we have 𝑥1 = 𝑥0 + ℎ = 0.5 and we compute 𝑓 𝑥1 , 𝑦1 =
0.875 and so
𝑦2 = 𝑦1 + ℎ𝑓(𝑥1 , 𝑦1 )
⇒ 𝑦2 = 2.25 + 0.5 × 0.875
⇒ 𝒚𝟐 = 𝟐. 𝟔𝟖𝟕𝟓
Chap-6: Numerical Solutions of ODEs

Solution: (Continue...)
Next, we have 𝑥2 = 𝑥1 + ℎ = 1.0 and we compute 𝑓 𝑥2 , 𝑦2 =
− 0.3438 and so
𝑦3 = 𝑦2 + ℎ𝑓(𝑥2 , 𝑦2 )
⇒ 𝑦3 = 2.6875 + 0.5 × (−0.3438)
⇒ 𝒚𝟑 = 𝟐.𝟓𝟏𝟓𝟔
Finally, we have 𝑥3 = 𝑥2 + ℎ = 1.5 and we compute 𝑓 𝑥3 , 𝑦3 =
− 1.2578 and so
𝑦4 = 𝑦3 + ℎ𝑓(𝑥3 , 𝑦3 )
⇒ 𝑦4 = 2.5156 + 0.5 × (−1.2578)
⇒ 𝒚𝟒 = 𝟏. 𝟖𝟖𝟔𝟕
The solution set is shown in the following table:
𝒙 0 0.5 1.0 1.5 2.0
𝒚 1 2.25 2.6875 2.5156 1.8867
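As a check of the table above, a direct Python transcription of the EULER algorithm (a sketch; function, step size and number of steps are supplied by the caller):

def euler(f, x0, y0, h, n):
    # Euler's method: y_{i+1} = y_i + h*f(x_i, y_i) for i = 0, ..., n-1.
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

f = lambda x, y: 3 - 2 * x - 0.5 * y      # the slide's example
xs, ys = euler(f, 0.0, 1.0, 0.5, 4)
print([round(y, 4) for y in ys])          # [1.0, 2.25, 2.6875, 2.5156, 1.8867]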
Chap-6: Numerical Solutions of ODEs
Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎. 𝟓
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs
Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎. 𝟏
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎. 𝟎𝟏
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Error in Numerical
Approximation
 Local Truncation Error

 Global Truncation Error

 Order of Convergence
Chap-6: Numerical Solutions of ODEs

Error in Numerical Approximations


Two important question arises:
1. As step size ℎ → 0, do the values of the numerical
approximation 𝑦1 , 𝑦2 , . . . , 𝑦𝑛 approach the
corresponding values of the exact solution ?

2. Even if the numerical approximation approach to


exact solution, there remains the important
practical question of how fast the numerical
approximation converges to the exact solution
Chap-6: Numerical Solutions of ODEs

Error in Numerical Approximations


Sources of Errors: Basically, there are two types of
sources of error in the approximate solution of an ODE
(IVP):
1. Rounding off error (Machine precision)
2. Truncation error

(i) Local truncation error


𝑒𝑛+1 = 𝜙 𝑥𝑛+1 − 𝑦𝑛+1 , given that 𝑦𝑛 = 𝜙 𝑥𝑛

(ii) Global trunction error


𝐸𝑛+1 = 𝜙 𝑥𝑛+1 − 𝑦𝑛+1 , given that 𝑦𝑛 ≈ 𝜙 𝑥𝑛
Chap-6: Numerical Solutions of ODEs

Truncation errors in Euler’s method:


(i) Local truncation error in Euler’s method:
𝑒𝑛+1 = (1/2)𝜙′′(𝑥𝑛∗)ℎ², where 𝑥𝑛 < 𝑥𝑛∗ < 𝑥𝑛 + ℎ
⇒ |𝑒𝑛+1| ≤ 𝑀ℎ²/2
That means, Euler‘s method has a local truncation error
proportional to ℎ²
(ii) Global truncation error in Euler‘s method:
The error in each step is at most 𝑀ℎ²/2; if it needs 𝑛 steps to reach
from the initial point 𝑥 = 𝑥0 to the final point 𝑥 = 𝑏, the global
truncation error will be at most 𝑛·𝑀ℎ²/2 and 𝑛 ∝ 1/ℎ. So the global
truncation error is proportional to ℎ.
Thus, Euler’s method is called a first order method
Chap-6: Numerical Solutions of ODEs

Euler‘s Modified Method


(Heun‘s Method/Runge-Kutta Second
Order Method)

Explained by Two Approaches


(i) By Integral Approximation
(ii) Explained Graphically
Chap-6: Numerical Solutions of ODEs

Euler‘s Modified Method


• For many problems, Euler’s method requires a
very small step size to produce sufficiently accurate
results
• Euler‘s method is said to be a first order approximation
method
• There are methods better than Euler‘s method that do not
require as small a step size to produce good results
• Those methods are called higher-order methods
• Euler‘s modified and Runge-Kutta methods are better
methods than Euler‘s method
• Euler‘s modified method is a second order approximation method
Chap-6: Numerical Solutions of ODEs

Euler‘s Modified Method:


Let us consider first order ODE (IVP)
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0 --------------------- (1)
Let us assume that (1) has unique solution 𝑦 = 𝜙(𝑥) on the
interval 𝑎 < 𝑥 < 𝑏.
Then, the modified Euler‘s method for (1) is given by
𝑦𝑖+1 = 𝑦𝑖 + (ℎ/2)[𝑓(𝑥𝑖 , 𝑦𝑖) + 𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + ℎ𝑓(𝑥𝑖 , 𝑦𝑖))]
This can also be written in the form:
𝒚𝒊+𝟏 = 𝒚𝒊 + (𝟏/𝟐)(𝒌𝟏 + 𝒌𝟐) ------------------(2),
where 𝑘1 = ℎ𝑓(𝑥𝑖 , 𝑦𝑖) and 𝑘2 = ℎ𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + 𝑘1)
Here, 𝑦0 = 𝑦(𝑥0 ) and 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) is the solution approximation
of (1) at 𝑥 = 𝑥𝑖
Chap-6: Numerical Solutions of ODEs
Derivation (Euler‘s Modified Method):
Integrate both sides of the given ODE (1) w.r.t. 𝑥 from 𝑥 = 𝑥𝑖 to 𝑥𝑖+1 , we get
∫[𝑥𝑖, 𝑥𝑖+1] 𝑑𝜙(𝑥) = ∫[𝑥𝑖, 𝑥𝑖+1] 𝑓(𝑥, 𝜙(𝑥)) 𝑑𝑥
Let 𝑓(𝑥, 𝜙(𝑥)) ≈ (1/2)[𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) + 𝑓(𝑥𝑖+1 , 𝜙(𝑥𝑖+1))], then the above integral
becomes
∫[𝑥𝑖, 𝑥𝑖+1] 𝑑𝜙(𝑥) ≈ ∫[𝑥𝑖, 𝑥𝑖+1] (1/2)[𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) + 𝑓(𝑥𝑖+1 , 𝜙(𝑥𝑖+1))] 𝑑𝑥
⇒ 𝜙(𝑥𝑖+1) − 𝜙(𝑥𝑖) ≈ (𝑥𝑖+1 − 𝑥𝑖)(1/2)[𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) + 𝑓(𝑥𝑖+1 , 𝜙(𝑥𝑖+1))]
Replacing 𝜙(𝑥𝑖) ⟶ 𝑦𝑖 ,
𝑦𝑖+1 = 𝑦𝑖 + (ℎ/2)[𝑓(𝑥𝑖 , 𝑦𝑖) + 𝑓(𝑥𝑖+1 , 𝒚𝒊+𝟏)]
But we take 𝒚𝒊+𝟏 = 𝑦𝑖 + ℎ𝑓(𝑥𝑖 , 𝑦𝑖) from Euler‘s method to get Euler‘s
modified method
𝒚𝒊+𝟏 = 𝒚𝒊 + (𝒉/𝟐)[𝒇(𝒙𝒊 , 𝒚𝒊) + 𝒇(𝒙𝒊+𝟏 , 𝒚𝒊 + 𝒉𝒇(𝒙𝒊 , 𝒚𝒊))]
Chap-6: Numerical Solutions of ODEs
Derivation (Euler‘s Modified Method): Graphically
Integrating both sides of the given ODE (1) w.r.t. 𝑥 from 𝑥 = 𝑥𝑖 to 𝑥𝑖+1 , we
get
∫[𝑥𝑖, 𝑥𝑖+1] 𝑑𝜙(𝑥) = ∫[𝑥𝑖, 𝑥𝑖+1] 𝑓(𝑥, 𝜙(𝑥)) 𝑑𝑥 ------------ (2)
If the integral on the right-hand side is approximated by the area of
the trapezium as shown in the figure, that means
∫[𝑥𝑖, 𝑥𝑖+1] 𝑓(𝑥, 𝜙(𝑥)) 𝑑𝑥 ≈ (ℎ/2)[𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) + 𝑓(𝑥𝑖+1 , 𝜙(𝑥𝑖+1))] --------(3)
Then, from (2) and (3),
⇒ 𝜙(𝑥𝑖+1) − 𝜙(𝑥𝑖) ≈ (ℎ/2)[𝑓(𝑥𝑖 , 𝜙(𝑥𝑖)) + 𝑓(𝑥𝑖+1 , 𝜙(𝑥𝑖+1))]
Replacing 𝜙(𝑥𝑖) ⟶ 𝑦𝑖 ,
𝒚𝒊+𝟏 = 𝒚𝒊 + (𝒉/𝟐)[𝒇(𝒙𝒊 , 𝒚𝒊) + 𝒇(𝒙𝒊 + 𝒉, 𝒚𝒊 + 𝒉𝒇(𝒙𝒊 , 𝒚𝒊))]
[Figure: trapezium with ordinates 𝒇(𝒙𝒊 , 𝝓(𝒙𝒊)) and 𝒇(𝒙𝒊+𝟏 , 𝝓(𝒙𝒊+𝟏)) over [𝒙𝒊 , 𝒙𝒊+𝟏]]
Chap-6: Numerical Solutions of ODEs
ALGORITHM EULER Modified (𝐟, 𝒙𝟎 , 𝒚𝟎 , 𝒉, 𝒏)

This algorithm computes the solution of the ODE (IVP):


𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0
at equidistant points 𝑥1 = 𝑥0 + ℎ, 𝑥2 = 𝑥1 + ℎ, …, 𝑥𝑛 = 𝑥𝑛−1 + ℎ;
here ƒ is such that this problem has a unique solution on the interval
[𝑥0 , 𝑥𝑛 ]

INPUT: Initial values 𝑥0 , 𝑦0 , 𝑓 = 𝑓(𝑥, 𝑦), step size ℎ, number of steps 𝑛


OUTPUT: 𝒙𝒊 , 𝒚𝒊 where 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) at 𝑥𝑖 , 𝑖 = 1,2, … , 𝑛
for i = 0,1,2,...,n-1
𝑘1 = ℎ𝑓 𝑥𝑖 , 𝑦𝑖
𝑘2 = ℎ𝑓 𝑥𝑖 + ℎ, 𝑦𝑖 + 𝑘1
𝑦𝑖+1 = 𝑦𝑖 + (1/2)(𝑘1 + 𝑘2)
𝑥𝑖+1 = 𝑥𝑖 + ℎ
end
Chap-6: Numerical Solutions of ODEs

Example by Euler‘s
Modified Method
Chap-6: Numerical Solutions of ODEs
Example: Consider the ODE (IVP) 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1; use
Euler’s modified method to find the approximate solution on the
interval 0 ≤ 𝑥 ≤ 2 by taking the step size 0.5
Solution: The given ODE (IVP) is 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1. Here,
𝑓(𝑥, 𝑦) = 3 − 2𝑥 − (1/2)𝑦, 𝑥0 = 0, 𝑦0 = 1
Now, we compute 𝑘1 = ℎ𝑓(𝑥0 , 𝑦0) = 1.250 and 𝑘2 = ℎ𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘1) = 0.4375, so
𝑦1 = 𝑦0 + (1/2)(𝑘1 + 𝑘2)
⇒ 𝑦1 = 1 + (1/2)(1.250 + 0.4375)
⇒ 𝒚𝟏 = 1.8438
Now, we have 𝑥1 = 𝑥0 + ℎ = 0.5 and we compute 𝑘1 = ℎ𝑓(𝑥1 , 𝑦1) = 0.5391
and 𝑘2 = ℎ𝑓(𝑥1 + ℎ, 𝑦1 + 𝑘1) = −0.0957, so
𝑦2 = 𝑦1 + (1/2)(𝑘1 + 𝑘2)
⇒ 𝑦2 = 1.8438 + (1/2)(0.5391 − 0.0957) ⇒ 𝒚𝟐 = 2.0654
Chap-6: Numerical Solutions of ODEs
Solution: (Continue...)
Next, we have 𝑥2 = 𝑥1 + ℎ = 1.0 and we compute 𝑘1 = ℎ𝑓(𝑥2 , 𝑦2) = −0.0164
and 𝑘2 = ℎ𝑓(𝑥2 + ℎ, 𝑦2 + 𝑘1) = −0.5123, so
𝑦3 = 𝑦2 + (1/2)(𝑘1 + 𝑘2)
⇒ 𝑦3 = 2.0654 + (1/2)(−0.0164 − 0.5123) ⇒ 𝒚𝟑 = 𝟏.𝟖𝟎𝟏𝟏
Finally, we have 𝑥3 = 𝑥2 + ℎ = 1.5 and we compute 𝑘1 = ℎ𝑓(𝑥3 , 𝑦3) = −0.4503
and 𝑘2 = ℎ𝑓(𝑥3 + ℎ, 𝑦3 + 𝑘1) = −0.8377, so
𝑦4 = 𝑦3 + (1/2)(𝑘1 + 𝑘2)
⇒ 𝑦4 = 1.8011 + (1/2)(−0.4503 − 0.8377) ⇒ 𝒚𝟒 = 𝟏.𝟏𝟓𝟕𝟏
The solution set is shown in the following table:
𝒙 0 0.5 1.0 1.5 2.0
𝒚 1 1.8438 2.0654 1.8011 1.1571
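As a check of the table above, a Python transcription of the EULER Modified algorithm (a sketch along the same lines as the Euler code earlier):

def modified_euler(f, x0, y0, h, n):
    # Euler's modified (Heun) method: y_{i+1} = y_i + (k1 + k2)/2.
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)
        ys.append(y + 0.5 * (k1 + k2))
        xs.append(x + h)
    return xs, ys

f = lambda x, y: 3 - 2 * x - 0.5 * y      # the slide's example
xs, ys = modified_euler(f, 0.0, 1.0, 0.5, 4)
print([round(y, 4) for y in ys])          # [1.0, 1.8438, 2.0654, 1.8011, 1.1571]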
Chap-6: Numerical Solutions of ODEs

Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎.𝟓
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎. 𝟏
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Comparison of Euler‘s
and Euler‘modified
𝒅𝒚 𝟏
Let us take the ODE (IVP) = 𝟑 − 𝟐𝒙 − 𝒚, 𝒚
𝟎 = 𝟏, find the
𝒅𝒙 𝟐
approximate solution on the interval 𝟎 ≤ 𝒙 ≤ 𝟐 by taking the step
sizes 0.5 and 0.1
The analytic solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Solution comparison for 𝒉 = 𝟎. 𝟓


Chap-6: Numerical Solutions of ODEs

Solution comparison for 𝒉 = 𝟎. 𝟏


Chap-6: Numerical Solutions of ODEs

Example: Euler‘s Modified Method


Chap-6: Numerical Solutions of ODEs

Truncation errors in Euler’s Modified method:


(i) Local truncation error in Euler’s modified method:

The improved (modified) Euler‘s method has a local truncation error proportional
to ℎ³
(ii) Global truncation error in modified Euler‘s method:
The improved (modified) Euler’s method has a global truncation error
proportional to ℎ².

Thus, Euler’s modified method is called a second order method


Chapter-6
Classwork
Euler‘s and Euler‘s Modified Method

Problem 1: Solve the ODE (IVP) 𝑑𝑦/𝑑𝑥 = 5𝑥 − 3𝑦, 𝑦(0) = 2 using
Euler’s Method

Problem 2: Solve the ODE (IVP) 𝑑𝑦/𝑑𝑥 = 𝑥 + 𝑦, 𝑦(0) = 0 using Euler’s
Modified Method

41
Chap-6: Numerical Solutions of ODEs

Runge-Kutta Method

• Method is originally developed by Runge and Kutta

• This method is now called the classic fourth order


four-stage Runge–Kutta method

• This method is two orders of magnitude more


accurate than the modified Euler method and three
orders of magnitude better than the Euler method
Chap-6: Numerical Solutions of ODEs

Runge-Kutta Method:
Let us consider first order ODE (IVP)
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0 --------------------- (1)
Let us assume that (1) has unique solution 𝑦 = 𝜙(𝑥) on the
interval 𝑎 < 𝑥 < 𝑏.
Then, the Runge-Kutta method for the (1) is given by
𝒚𝒊+𝟏 = 𝒚𝒊 + (𝟏/𝟔)(𝒌𝟏 + 𝟐𝒌𝟐 + 𝟐𝒌𝟑 + 𝒌𝟒) −−−−−−−−−−−(2),
where 𝒌𝟏 = ℎ𝑓(𝑥𝑖 , 𝑦𝑖), 𝒌𝟐 = ℎ𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + 𝑘1/2),
𝒌𝟑 = ℎ𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + 𝑘2/2), 𝒌𝟒 = ℎ𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + 𝑘3)

Here, 𝑦0 = 𝑦(𝑥0 ) and 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) is the solution approximation


of (1) at 𝑥 = 𝑥𝑖
Chap-6: Numerical Solutions of ODEs
ALGORITHM RUNGE-KUTTA (𝐟, 𝒕𝟎 , 𝒚𝟎 , 𝒉, 𝒏)
This algorithm computes the solution of the ODE (IVP):
𝑑𝑦/𝑑𝑥 = 𝑓(𝑥, 𝑦), 𝑦(𝑥0) = 𝑦0
at equidistant points 𝑥1 = 𝑥0 + ℎ, 𝑥2 = 𝑥1 + ℎ, …, 𝑥𝑛 = 𝑥𝑛−1 + ℎ; here ƒ is
such that this problem has a unique solution on the interval [𝑥0 , 𝑥𝑛 ]

INPUT: Initial values 𝑥0 , 𝑦0 , 𝑓 = 𝑓(𝑥, 𝑦), step size ℎ, number of steps 𝑛


OUTPUT: 𝒙𝒊 , 𝒚𝒊 where 𝑦𝑖 ≈ 𝜙(𝑥𝑖 ) at 𝑥𝑖 , 𝑖 = 1,2, … , 𝑛
for i = 0,1,2,...,n-1
𝑘1 = ℎ𝑓 𝑥𝑖 , 𝑦𝑖
𝑘2 = ℎ𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + 𝑘1/2)
𝑘3 = ℎ𝑓(𝑥𝑖 + ℎ/2, 𝑦𝑖 + 𝑘2/2)
𝑘4 = ℎ𝑓(𝑥𝑖 + ℎ, 𝑦𝑖 + 𝑘3)
𝑦𝑖+1 = 𝑦𝑖 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)
𝑥𝑖+1 = 𝑥𝑖 + ℎ
end
Chap-6: Numerical Solutions of ODEs

Example by Runge-
Kutta Method
Chap-6: Numerical Solutions of ODEs
Example: Consider the ODE (IVP) 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1; use
the Runge-Kutta method to find the approximate solution on the interval
0 ≤ 𝑥 ≤ 2 by taking the step size 0.5
Solution: The given ODE (IVP) is 𝑑𝑦/𝑑𝑥 = 3 − 2𝑥 − (1/2)𝑦, 𝑦(0) = 1. Here,
𝑓(𝑥, 𝑦) = 3 − 2𝑥 − (1/2)𝑦, 𝑥0 = 0, 𝑦0 = 1
Now, we compute 𝒌𝟏 = ℎ𝑓(𝑥0 , 𝑦0), 𝒌𝟐 = ℎ𝑓(𝑥0 + ℎ/2, 𝑦0 + 𝑘1/2),
𝒌𝟑 = ℎ𝑓(𝑥0 + ℎ/2, 𝑦0 + 𝑘2/2), 𝒌𝟒 = ℎ𝑓(𝑥0 + ℎ, 𝑦0 + 𝑘3), so
𝑦1 = 𝑦0 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)
⇒ 𝐲𝟏 = 1.8755
Now, 𝑥1 = 𝑥0 + ℎ = 0.5 and we compute 𝒌𝟏 = ℎ𝑓(𝑥1 , 𝑦1), 𝒌𝟐 = ℎ𝑓(𝑥1 + ℎ/2, 𝑦1 + 𝑘1/2),
𝒌𝟑 = ℎ𝑓(𝑥1 + ℎ/2, 𝑦1 + 𝑘2/2), 𝒌𝟒 = ℎ𝑓(𝑥1 + ℎ, 𝑦1 + 𝑘3), so
𝑦2 = 𝑦1 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)
⇒ 𝒚𝟐 = 2.1149
Chap-6: Numerical Solutions of ODEs
Solution: (Continue...)

Next, 𝑥2 = 𝑥1 + ℎ = 1.0 and we compute 𝒌𝟏 = ℎ𝑓(𝑥2 , 𝑦2), 𝒌𝟐 = ℎ𝑓(𝑥2 + ℎ/2, 𝑦2 + 𝑘1/2),
𝒌𝟑 = ℎ𝑓(𝑥2 + ℎ/2, 𝑦2 + 𝑘2/2), 𝒌𝟒 = ℎ𝑓(𝑥2 + ℎ, 𝑦2 + 𝑘3), so
𝑦3 = 𝑦2 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)
⇒ 𝒚𝟑 = 1.8591
Finally, we have 𝑥3 = 𝑥2 + ℎ = 1.5 and we compute 𝒌𝟏 = ℎ𝑓(𝑥3 , 𝑦3), 𝒌𝟐 = ℎ𝑓(𝑥3 + ℎ/2, 𝑦3 + 𝑘1/2),
𝒌𝟑 = ℎ𝑓(𝑥3 + ℎ/2, 𝑦3 + 𝑘2/2), 𝒌𝟒 = ℎ𝑓(𝑥3 + ℎ, 𝑦3 + 𝑘3), so
𝑦4 = 𝑦3 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4)
⇒ 𝒚𝟒 = 1.2174
The solution set is shown in the following table:
𝒙 0 0.5 1.0 1.5 2.0
𝒚 1 1.8755 2.1149 1.8591 1.2174
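As a check of the table above, a Python transcription of the RUNGE-KUTTA algorithm (a sketch in the same style as the earlier ODE codes):

def runge_kutta4(f, x0, y0, h, n):
    # Classical fourth order Runge-Kutta method.
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        ys.append(y + (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        xs.append(x + h)
    return xs, ys

f = lambda x, y: 3 - 2 * x - 0.5 * y      # the slide's example
xs, ys = runge_kutta4(f, 0.0, 1.0, 0.5, 4)
print([round(y, 4) for y in ys])          # [1.0, 1.8755, 2.1149, 1.8591, 1.2174]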
Chap-6: Numerical Solutions of ODEs

Solution: (Continue...)
Plot of the solution for 𝒉 = 𝟎.𝟓
Exact solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Comparison of Euler, Modified


Euler and Runge-Kutta
𝒅𝒚 𝟏
Let us take the ODE (IVP) = 𝟑 − 𝟐𝒙 − 𝒚, 𝒚
𝟎 = 𝟏, find the
𝒅𝒙 𝟐
approximate solution on the interval 𝟎 ≤ 𝒙 ≤ 𝟐 by taking the step
size 0.5
The analytic solution is 𝜙(𝑥) = 14 − 4𝑥 − 13𝑒^(−𝑥/2)
Chap-6: Numerical Solutions of ODEs

Solution comparison for h = 0.5


Chap-6: Numerical Solutions of ODEs

Example: Runge-Kutta 4th Order Method

Solution:
Chap-6: Numerical Solutions of ODEs

Example: Continue...
Chap-6: Numerical Solutions of ODEs

Truncation errors in the Runge-Kutta method:

(i) Local truncation error:
    The Runge-Kutta method has a local truncation error proportional to ℎ⁵

(ii) Global truncation error:
    The Runge-Kutta method has a global truncation error proportional to ℎ⁴.

Thus, the Runge-Kutta method is called a fourth-order method
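A short empirical check of the fourth-order behaviour (a Python sketch using the earlier example; halving h should reduce the global error at a fixed point by roughly 2⁴ = 16):

import math

f = lambda x, y: 3 - 2 * x - 0.5 * y
phi = lambda x: 14 - 4 * x - 13 * math.exp(-x / 2)    # exact solution of the earlier example

def rk4_error(h, x_end=2.0):
    """Absolute error of the RK4 approximation of y(x_end)."""
    x, y = 0.0, 1.0
    for _ in range(round(x_end / h)):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return abs(y - phi(x_end))

e1, e2 = rk4_error(0.1), rk4_error(0.05)
print(f"error(h = 0.1)  = {e1:.3e}")
print(f"error(h = 0.05) = {e2:.3e}")
print(f"ratio ≈ {e1 / e2:.1f}  (≈ 16 expected for a fourth-order method)")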


Chapter-6
Classwork
Runge-Kutta Method

Problem 1: Use the Runge-Kutta fourth-order method to find
y(0.1), y(0.2) and y(0.3) for the ODE (IVP)
    dy/dx = 1 + 2xy/(1 + x²),  y(0) = 0

54
Chap-6: Numerical Solutions of ODEs

Boundary Value Problem and Finite Difference Method



Example
Chapter-6
Classwork
Boundary Value Problem

Problem 1: Solve the boundary value problem given by
y″ − y = 0 with boundary conditions y(0) = 0, y(1) = 1 by using the
finite difference method (FDM), taking step size h = 0.25.
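A minimal Python sketch of one standard way to set this problem up, assuming the central-difference approximation y″(xᵢ) ≈ (yᵢ₋₁ − 2yᵢ + yᵢ₊₁)/h², which gives yᵢ₋₁ − (2 + h²)yᵢ + yᵢ₊₁ = 0 at the interior nodes; the tridiagonal system is solved with the Thomas algorithm, and the comparison with the exact solution sinh(x)/sinh(1) is added for illustration:

import math

# Discretize y'' - y = 0, y(0) = 0, y(1) = 1 by the central difference
#   (y[i-1] - 2*y[i] + y[i+1]) / h**2 - y[i] = 0,
# i.e.  y[i-1] - (2 + h**2)*y[i] + y[i+1] = 0  at every interior node.
h = 0.25
n = round(1 / h) - 1          # number of interior nodes (here 3)

sub = [1.0] * n               # sub-diagonal (sub[0] unused)
diag = [-(2 + h * h)] * n     # main diagonal
sup = [1.0] * n               # super-diagonal (sup[n-1] unused)
rhs = [0.0] * n
rhs[0] -= 0.0                 # move the boundary value y(0) = 0 to the right-hand side
rhs[-1] -= 1.0                # move the boundary value y(1) = 1 to the right-hand side

# Thomas algorithm: forward elimination, then back-substitution
for i in range(1, n):
    m = sub[i] / diag[i - 1]
    diag[i] -= m * sup[i - 1]
    rhs[i] -= m * rhs[i - 1]
y = [0.0] * n
y[-1] = rhs[-1] / diag[-1]
for i in range(n - 2, -1, -1):
    y[i] = (rhs[i] - sup[i] * y[i + 1]) / diag[i]

for i, yi in enumerate(y, start=1):
    x = i * h
    print(f"x = {x:.2f}  FDM: {yi:.5f}  exact: {math.sinh(x) / math.sinh(1):.5f}")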

71
End of Lecture-2

Next Chapter
Matrices and System of Linear
Equations
72
NUMERICAL METHODS
(MCSC-202)

By
Samir Shrestha
Department of Mathematics
Kathmandu University, Dhulikhel

Lecture 1
Chap-7: Matrices and System of linear
equations 1
Numerical Methods
Contents

 Basic introduction of Computer programming language [4]
 Errors in numerical computation [5]
 Root findings [7]
 Finite differences and Interpolation [8]
 Numerical Differentiation and Integration [7]
 Curve fitting [2]
 Numerical Solutions of Ordinary Differential Equations (ODE-IVP) [6]
 Matrices and System of linear equations [6]
References
Recommended Text Book
• Introductory Methods of Numerical analysis, S. S. Sastry, PHI
Learning Private Limited, New Delhi, 5th edition, 2012.

Supplementary Text Book


• Numerical Methods for Scientific and Engineering computation,
M. K. Jain, S. R. K Iyengar & R. K. Jain, New Age International
Publisher, 4th edition, 2005.
3
Chap-7: System of Linear Equations
Outline
 Introduction

 LU-decomposition method
- Tri-diagonal system

 Iterative methods
- Jacobi method
- Gauss-Seidel method

 Examples and classwork


Chap-7: System of Linear Equations

Introduction
Chap-7: System of Linear Equations

LU-Decomposition of a Matrix


Chap-7: System of Linear Equations

LU-Decomposition Method


Chap-7: System of Linear Equations

Tri-diagonal System of Linear Equations
Chap-7: System of Linear Equations

Iteration Methods

• Jacobi Method
• Gauss-Seidel Method
Chap-7: System of Linear Equations
Solution by Iterative Methods

Remark: Condition (5) is also known as the diagonal dominance condition; that
means, in each row of the coefficient matrix A = (aᵢⱼ), the sum of the absolute
values of the non-diagonal elements should be less than or equal to the
absolute value of the diagonal element.
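A minimal Python sketch of the two iterative schemes named above, together with a row-wise check of the diagonal dominance condition; the 3×3 test system, tolerance, and iteration limit are illustrative choices, not taken from the slides:

def is_diagonally_dominant(A):
    """Row-wise check: |a_ii| >= sum of |a_ij| over j != i, in every row."""
    return all(abs(A[i][i]) >= sum(abs(a) for j, a in enumerate(A[i]) if j != i)
               for i in range(len(A)))

def jacobi(A, b, tol=1e-6, max_iter=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, tol=1e-6, max_iter=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)  # uses already-updated values for j < i
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            return x
    return x

# Illustrative diagonally dominant system with exact solution (1, 1, 1)
A = [[10.0, 1.0, 2.0],
     [1.0, 10.0, 3.0],
     [2.0, 3.0, 10.0]]
b = [13.0, 14.0, 15.0]
print("diagonally dominant:", is_diagonally_dominant(A))
print("Jacobi:      ", jacobi(A, b))
print("Gauss-Seidel:", gauss_seidel(A, b))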
Chap-7: System of Linear Equations
Solution by Iterative Methods: Continue ...

Solution:
Classwork Chapter-7
Problem 1:

Problem 2:

30
Classwork Chapter-7
Problem 3:

Problem 4:

31
End of Lecture-1

End of the Course: MCSC-202

32
