Handouts Mth603 Final

The document outlines a series of lectures on Numerical Analysis, covering topics such as errors in computations, solutions to non-linear and linear equations, eigenvalue problems, interpolation, differentiation, and ordinary differential equations. It includes detailed explanations of finite difference operators, interpolation methods, and various numerical techniques. The content is structured in a lecture format, providing a comprehensive guide to numerical analysis methods and their applications.


Table of Contents

Lecture # Topics Page #


Lecture 1 Introduction 3
Lecture 2 Errors in Computations 6
Lecture 3 Solution of Non Linear Equations (Bisection Method) 8
Lecture 4 Solution of Non Linear Equations (Regula-Falsi Method) 15
Lecture 5 Solution of Non Linear Equations (Method of Iteration) 21
Lecture 6 Solution of Non Linear Equations (Newton Raphson Method) 26
Lecture 7 Solution of Non Linear Equations (Secant Method) 35
Lecture 8 Muller's Method 42
Lecture 9 Solution of Linear System of Equations (Gaussian Elimination Method) 48
Lecture 10 Solution of Linear System of Equations (Gauss–Jordan Elimination Method) 58
Lecture 11 Solution of Linear System of Equations (Jacobi Method) 68
Lecture 12 Solution of Linear System of Equations (Gauss–Seidel Iteration Method) 74
Lecture 13 Solution of Linear System of Equations (Relaxation Method) 82
Lecture 14 Solution of Linear System of Equations (Matrix Inversion) 88
Lecture 15 Eigen Value Problems (Power Method) 96
Lecture 16 Eigen Value Problems (Jacobi’s Method) 104
Lecture 17 Eigen Value Problems (continued) 105
Lecture 18 Interpolation (Introduction and Difference Operators) 110
Lecture 19 Interpolation (Difference Operators Cont.) 114
Lecture 20 Interpolation (Operators Cont.) 118
Lecture 21 Interpolation (Newton’s Forward Difference Formula) 122
Lecture 22 Newton’s Backward Difference Interpolation Formula 127
Lecture 23 Lagrange’s Interpolation formula 131
Lecture 24 Divided Differences 135
Lecture 25 Lagrange’s Interpolation formula, Divided Differences (Examples) 140
Lecture 26 Error Term in Interpolation Formula 144
Lecture 27 Differentiation Using Difference Operators 148
Lecture 28 Differentiation Using Difference Operators (continued) 152
Lecture 29 Differentiation Using Interpolation 157
Lecture 30 Richardson’s Extrapolation Method 162
Lecture 31 Numerical Differentiation and Integration 165
Lecture 32 Numerical Differentiation and Integration (Trapezoidal and Simpson’s Rules) 170
Lecture 33 Numerical Differentiation and Integration (Trapezoidal and Simpson’s Rules) Continued 174
Lecture 34 Numerical Differentiation and Integration (Romberg’s Integration and Double Integration) Continued 177
Lecture 35 Ordinary Differential Equations (Taylor’s Series Method, Euler Method) 183
Lecture 36 Ordinary Differential Equations (Euler Method) 188
Lecture 37 Ordinary Differential Equations (Runge-Kutta Method) 194
Lecture 38 Ordinary Differential Equations (Runge-Kutta Method) Continued 198
Lecture 39 Ordinary Differential Equations (Adams-Moulton Predictor-Corrector Method) 206
Lecture 40 Ordinary Differential Equations (Adams-Moulton Predictor-Corrector Method) 213
Lecture 41 Examples of Differential Equations 220
Lecture 42 Examples of Numerical Differentiation 226
Lecture 43 An Introduction to MAPLE 236
Lecture 44 Algorithms for method of Solution of Non-linear Equations 247
Lecture 45 Non-linear Equations 255

Numerical Analysis –MTH603 VU

Interpolation
Introduction
Finite differences play an important role in numerical techniques, where tabulated values
of the functions are available.
For instance, consider a function y = f(x). As x takes the values x0, x1, x2, …, xn,
let the corresponding values of y be y0, y1, y2, …, yn.
That is, we are given a table of values (xk, yk), k = 0, 1, 2, …, n.
The process of estimating the value of y for any intermediate value of x is called
interpolation. The method of computing the value of y for a given value of x lying
outside the table of values of x is known as extrapolation. If the function f (x) is known,
the value of y corresponding to any x can be readily computed to the desired accuracy.
For interpolation of a tabulated function, the concept of finite differences is important.
A knowledge of the various finite difference operators and their symbolic relations is
very much needed to establish the various interpolation formulae.
Finite Difference Operators
Forward Differences
For a given table of values ( xk , yk ), k = 0,1, 2,..., n with equally spaced abscissas of a
function y = f ( x), we define the forward difference operator ∆ as follows
∆yi = yi+1 − yi, i = 0, 1, …, (n − 1)
To be explicit, we write
∆y0 = y1 − y0
∆y1 = y2 − y1
⋮
∆yn−1 = yn − yn−1

These differences are called first differences of the function y and are denoted by the
symbol ∆yi. Here, ∆ is called the first difference operator.
Similarly, the differences of the first differences are called second differences, defined by
∆²y0 = ∆y1 − ∆y0, ∆²y1 = ∆y2 − ∆y1
Thus, in general
∆²yi = ∆yi+1 − ∆yi
Here ∆² is called the second difference operator. Continuing in this way, we can define the
r-th difference of y as
∆ʳyi = ∆ʳ⁻¹yi+1 − ∆ʳ⁻¹yi
By defining a difference table as a convenient device for displaying various differences,
the above defined differences can be written down systematically by constructing a
difference table for values
( xk , yk ), k = 0,1,..., 6
Forward Difference Table

© Copyright Virtual University of Pakistan 1


This difference table is called forward difference table or diagonal difference table. Here,
each difference is located in its appropriate column, midway between the elements of the
previous column.
Please note that the subscript remains constant along each diagonal of the table. The first
term in the table, that is y0, is called the leading term, while the differences
∆y0, ∆²y0, ∆³y0, … are called the leading differences.
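As an aside from us (not in the original handout), the table construction is easy to automate. A minimal Python sketch, with sample values taken from the cubic data used in a later example:

```python
def forward_difference_table(y):
    """Return the columns [y, Δy, Δ²y, ...] of a forward difference table."""
    columns = [list(y)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        # Each new column holds the differences of the previous one.
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

# Sample data: y = x³ - 2x² + 7x - 3 tabulated at x = 0, 1, ..., 5.
table = forward_difference_table([-3, 3, 11, 27, 57, 107])
# The leading differences y0, Δy0, Δ²y0, ... sit at the top of each column.
leading = [col[0] for col in table]
print(leading)
```

The top entries of each column are exactly the leading differences used in the interpolation formulas below.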
Example
Construct a forward difference table for the following values of x and y:

Solution


Example
Express ∆²y0 and ∆³y0 in terms of the values of the function y.
Solution:
Noting that each higher order difference is defined in terms of the lower order difference,
we have
∆²y0 = ∆y1 − ∆y0 = (y2 − y1) − (y1 − y0)
= y2 − 2y1 + y0
And
∆³y0 = ∆²y1 − ∆²y0 = (∆y2 − ∆y1) − (∆y1 − ∆y0)
= (y3 − y2) − (y2 − y1) − [(y2 − y1) − (y1 − y0)]
= y3 − 3y2 + 3y1 − y0
Hence, we observe that the coefficients of the values of y in the expansions of
∆²y0, ∆³y0 are binomial coefficients.
Thus, in general, we arrive at the following result:
∆ⁿy0 = yn − ⁿC1 yn−1 + ⁿC2 yn−2 − ⁿC3 yn−3 + ⋯ + (−1)ⁿ y0
Example
Show that the value of yn can be expressed in terms of the leading value y0 and the
leading differences
∆y0, ∆²y0, …, ∆ⁿy0.

Solution
The forward difference table will be


y1 − y0 = ∆y0 or y1 = y0 + ∆y0
y2 − y1 = ∆y1 or y2 = y1 + ∆y1
y3 − y2 = ∆y2 or y3 = y2 + ∆y2

Similarly,

∆y1 − ∆y0 = ∆²y0 or ∆y1 = ∆y0 + ∆²y0
∆y2 − ∆y1 = ∆²y1 or ∆y2 = ∆y1 + ∆²y1

Similarly, we can also write

∆²y1 − ∆²y0 = ∆³y0 or ∆²y1 = ∆²y0 + ∆³y0
∆²y2 − ∆²y1 = ∆³y1 or ∆²y2 = ∆²y1 + ∆³y1

From these equations,

∆y2 = (∆y0 + ∆²y0) + (∆²y0 + ∆³y0)
= ∆y0 + 2∆²y0 + ∆³y0

and hence

y3 = y2 + ∆y2 = (y1 + ∆y1) + (∆y1 + ∆²y1)
= y1 + 2∆y1 + ∆²y1
= (y0 + ∆y0) + 2(∆y0 + ∆²y0) + (∆²y0 + ∆³y0)
= y0 + 3∆y0 + 3∆²y0 + ∆³y0
= (1 + ∆)³ y0

Similarly, we can symbolically write

y1 = (1 + ∆) y0,
y2 = (1 + ∆)² y0,
y3 = (1 + ∆)³ y0,
…
yn = (1 + ∆)ⁿ y0

Hence, we obtain

yn = y0 + ⁿC1 ∆y0 + ⁿC2 ∆²y0 + ⁿC3 ∆³y0 + ⋯ + ∆ⁿy0

or

yn = ∑ᵢ₌₀ⁿ ⁿCi ∆ⁱy0
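This identity can be checked numerically. A short Python sketch of ours (the sample values are arbitrary) that recomputes yn from the leading differences:

```python
from math import comb

def leading_differences(y):
    """Leading forward differences y0, Δy0, Δ²y0, ..., Δⁿy0."""
    out, col = [], list(y)
    while col:
        out.append(col[0])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    return out

y = [2, 7, 1, 8, 2, 8]          # arbitrary sample values
d = leading_differences(y)
n = len(y) - 1
# yn = Σ nCi · Δ^i y0
recovered = sum(comb(n, i) * d[i] for i in range(n + 1))
print(recovered, y[n])
```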


Backward Differences
For a given table of values ( xk , yk ), k = 0,1, 2,..., n of a function y = f (x) with equally
spaced abscissas, the first backward differences are usually expressed in terms of the
backward difference operator ∇ as
∇yi = yi − yi−1, i = n, (n − 1), …, 1
To be explicit, we write

∇y1 = y1 − y0
∇y2 = y2 − y1
⋮
∇yn = yn − yn−1
The differences of these differences are called second differences and they are denoted by
∇²y2, ∇²y3, …, ∇²yn. That is,

∇²y2 = ∇y2 − ∇y1
∇²y3 = ∇y3 − ∇y2
⋮
∇²yn = ∇yn − ∇yn−1

Thus, in general, the second backward differences are
∇²yi = ∇yi − ∇yi−1, i = n, (n − 1), …, 2
while the k-th backward differences are given as
∇ᵏyi = ∇ᵏ⁻¹yi − ∇ᵏ⁻¹yi−1, i = n, (n − 1), …, k
These backward differences can be systematically arranged for a table of values
( xk , yk ), k = 0,1,..., 6 shown below.
Backward Difference Table


From this table, it can be observed that the subscript remains constant along every
backward diagonal.
Example
Show that any value of y can be expressed in terms of yn and its backward differences.
Solution:
From
∇yi = yi − yi−1, i = n, (n − 1), …, 1
We get

yn−1 = yn − ∇yn
yn−2 = yn−1 − ∇yn−1

From ∇²yi = ∇yi − ∇yi−1, i = n, (n − 1), …, 2, we get

∇yn−1 = ∇yn − ∇²yn

From these equations, we obtain

yn−2 = yn − 2∇yn + ∇²yn

Similarly, we can show that

yn−3 = yn − 3∇yn + 3∇²yn − ∇³yn

Symbolically, these results can be rewritten as follows:

yn−1 = (1 − ∇) yn
yn−2 = (1 − ∇)² yn
yn−3 = (1 − ∇)³ yn
…
yn−r = (1 − ∇)ʳ yn

That is,

yn−r = yn − ʳC1 ∇yn + ʳC2 ∇²yn − ⋯ + (−1)ʳ ∇ʳyn
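The backward expansion can be verified the same way. A Python sketch of ours (arbitrary sample values):

```python
from math import comb

def leading_backward_differences(y):
    """∇-differences at the last entry: yn, ∇yn, ∇²yn, ..."""
    out, col = [], list(y)
    while col:
        out.append(col[-1])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    return out

y = [3, 1, 4, 1, 5, 9]          # arbitrary sample values
d = leading_backward_differences(y)
n, r = len(y) - 1, 3
# y_{n-r} = Σ (-1)^i · rCi · ∇^i yn
recovered = sum((-1) ** i * comb(r, i) * d[i] for i in range(r + 1))
print(recovered, y[n - r])
```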


Central Differences
In some applications, central difference notation is found to be more convenient to
represent the successive differences of a function. Here, we use the symbol δ to
represent the central difference operator, and the subscript of δy for any difference is taken
as the average of the subscripts of its two members. For example,

δy1/2 = y1 − y0, δy3/2 = y2 − y1, …
In general,
δyi = yi+(1/2) − yi−(1/2)


Higher order differences are defined as follows:


δ²yi = δyi+(1/2) − δyi−(1/2)
⋮
δⁿyi = δⁿ⁻¹yi+(1/2) − δⁿ⁻¹yi−(1/2)
These central differences can be systematically arranged as indicated in the Table

Thus, we observe that all the odd differences have a fractional suffix and all the even
differences with the same subscript lie horizontally.
The following alternative notation may also be adopted to introduce the finite difference
operators. Let y = f(x) be a functional relation between x and y, which is also denoted by yx.
Suppose we are given consecutive values of x differing by h, say x, x + h, x + 2h, x + 3h,
etc. The corresponding values of y are yx, yx+h, yx+2h, yx+3h, …
As before, we can form the differences of these values.
Thus
∆yx = yx+h − yx = f(x + h) − f(x)
∆²yx = ∆yx+h − ∆yx

Similarly,
∇yx = yx − yx−h = f(x) − f(x − h)
and
δyx = yx+(h/2) − yx−(h/2) = f(x + h/2) − f(x − h/2)

To be explicit, we write


∆y0 = y1 − y0
∆y1 = y2 − y1
⋮
∆yn−1 = yn − yn−1

∇yi = yi − yi−1, i = n, (n − 1), …, 1
or
∇y1 = y1 − y0
∇y2 = y2 − y1
⋮
∇yn = yn − yn−1

δy1/2 = y1 − y0, δy3/2 = y2 − y1, …
In general,
δyi = yi+(1/2) − yi−(1/2)
Higher order differences are defined as follows:
δ²yi = δyi+(1/2) − δyi−(1/2)
⋮
δⁿyi = δⁿ⁻¹yi+(1/2) − δⁿ⁻¹yi−(1/2)


Shift operator, E
Let y = f(x) be a function of x, and let x take the consecutive values x, x + h, x + 2h, etc.
We then define an operator having the property
E f ( x ) = f ( x + h)
Thus, when E operates on f (x), the result is the next value of the function. Here, E is
called the shift operator. If we apply the operator E twice on f (x), we get
E 2 f ( x) = E[ E f ( x)]
= E[ f ( x + h)] = f ( x + 2h)
Thus, in general, if we apply the operator ‘E’ n times on f (x), we get
Eⁿ f(x) = f(x + nh)
or
Eⁿ yx = yx+nh
For example, Ey0 = y1, E²y0 = y2, E⁴y0 = y4, …, E²y2 = y4.
The inverse operator E⁻¹ is defined as
E⁻¹ f(x) = f(x − h)
Similarly,
E⁻ⁿ f(x) = f(x − nh)
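The shift operator has a direct computational counterpart. A small Python sketch of ours (the sample function and step size are arbitrary choices):

```python
h = 0.5                       # arbitrary step size

def f(x):
    return x * x              # arbitrary sample function

def E(g, n=1):
    """Return the function x -> g(x + n*h), i.e. the shift operator E^n."""
    return lambda x: g(x + n * h)

# Applying E twice is the same as applying E²: both give f(x + 2h).
print(E(E(f))(1.0), E(f, 2)(1.0), f(1.0 + 2 * h))
```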
Average Operator, µ
It is defined as
µ f(x) = ½ [f(x + h/2) + f(x − h/2)]
= ½ [yx+(h/2) + yx−(h/2)]
Differential Operator, D
It is defined as
D f(x) = (d/dx) f(x) = f′(x)
D² f(x) = (d²/dx²) f(x) = f′′(x)

Important Results Using {∆, ∇, δ, E, µ}

∆yx = yx+h − yx = Eyx − yx = (E − 1) yx
⇒ ∆ = E − 1
Also,
∇yx = yx − yx−h = yx − E⁻¹yx = (1 − E⁻¹) yx


⇒ ∇ = 1 − E⁻¹ = (E − 1)/E
And
δyx = yx+(h/2) − yx−(h/2) = E^(1/2) yx − E^(−1/2) yx = (E^(1/2) − E^(−1/2)) yx
⇒ δ = E^(1/2) − E^(−1/2)
The definition of µ and E similarly yields

µ yx = ½ [yx+(h/2) + yx−(h/2)]
= ½ (E^(1/2) + E^(−1/2)) yx
⇒ µ = ½ (E^(1/2) + E^(−1/2))
We know that
Eyx = yx+h = f(x + h)
= f(x) + h f′(x) + (h²/2!) f′′(x) + ⋯
= f(x) + hD f(x) + (h²D²/2!) f(x) + ⋯
= [1 + hD/1! + h²D²/2! + ⋯] f(x) = e^(hD) yx
Thus, E = e^(hD), and hence
hD = log E
Example:
Prove that
hD = log(1 + ∆) = −log(1 − ∇) = sinh⁻¹(µδ)
Solution:
Using the standard relations, we have
hD = log E = log(1 + ∆)
Also, hD = log E = −log E⁻¹ = −log(1 − ∇)


Further,
µδ = ½ (E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2))
= ½ (E − E⁻¹)
= ½ (e^(hD) − e^(−hD))
= sinh(hD)
⇒ hD = sinh⁻¹(µδ)
Example
Prove that
1) 1 + µ²δ² = (1 + δ²/2)²
2) E^(1/2) = µ + δ/2
3) ∆ = δ²/2 + δ √(1 + δ²/4)
4) µδ = ∆E⁻¹/2 + ∆/2
5) µδ = (∆ + ∇)/2
Solution
From the definition, we have:
(1) µδ = ½ (E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2)) = ½ (E − E⁻¹)
∴ 1 + µ²δ² = 1 + ¼ (E − E⁻¹)² = 1 + ¼ (E² − 2 + E⁻²) = ¼ (E + E⁻¹)²
Also,
1 + δ²/2 = 1 + ½ (E^(1/2) − E^(−1/2))² = ½ (E + E⁻¹)
so that (1 + δ²/2)² = ¼ (E + E⁻¹)² = 1 + µ²δ²

(2) µ + δ/2 = ½ (E^(1/2) + E^(−1/2)) + ½ (E^(1/2) − E^(−1/2)) = E^(1/2)

(3) Since
1 + δ²/4 = 1 + ¼ (E^(1/2) − E^(−1/2))² = ¼ (E^(1/2) + E^(−1/2))²
we have
√(1 + δ²/4) = ½ (E^(1/2) + E^(−1/2))
Therefore,
δ²/2 + δ √(1 + δ²/4) = ½ (E^(1/2) − E^(−1/2))² + ½ (E^(1/2) − E^(−1/2))(E^(1/2) + E^(−1/2))
= (E − 2 + E⁻¹)/2 + (E − E⁻¹)/2
= E − 1 = ∆
(4) µδ = ½ (E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2)) = ½ (E − E⁻¹)
= ½ (1 + ∆ − E⁻¹) = ∆/2 + ½ (1 − E⁻¹)
= ∆/2 + ½ [(E − 1)/E] = ∆/2 + ∆/(2E) = ∆/2 + ∆E⁻¹/2
(5) µδ = ½ (E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2))
= ½ (E − E⁻¹)
= ½ [(1 + ∆) − (1 − ∇)] = ½ (∆ + ∇)
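Identities such as (5) are easy to sanity-check on any tabulated function. A Python sketch of ours (the sample function and step are arbitrary choices):

```python
import math

h = 0.1
f = math.sin                              # arbitrary smooth sample function
x = 0.7

mu_delta = 0.5 * (f(x + h) - f(x - h))    # µδ y_x = ½(y_{x+h} - y_{x-h})
forward = f(x + h) - f(x)                 # Δy_x
backward = f(x) - f(x - h)                # ∇y_x
# (Δ + ∇)/2 applied to y_x should equal µδ y_x.
print(mu_delta, 0.5 * (forward + backward))
```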


Interpolation Newton’s Forward Difference Formula


Let y = f(x) be a function which takes the values f(x0), f(x0 + h), f(x0 + 2h), …, corresponding
to the equally spaced values x0, x0 + h, x0 + 2h, …, of x with spacing h.
Suppose we wish to evaluate the function f(x) at x0 + ph, where p is any real
number. For any real number p, we have the operator E such that
E^p f(x) = f(x + ph).
Then
f(x0 + ph) = E^p f(x0) = (1 + ∆)^p f(x0)
= [1 + p∆ + p(p − 1)/2! ∆² + p(p − 1)(p − 2)/3! ∆³ + ⋯] f(x0)
so that
f(x0 + ph) = f(x0) + p ∆f(x0) + p(p − 1)/2! ∆²f(x0) + p(p − 1)(p − 2)/3! ∆³f(x0)
+ ⋯ + p(p − 1)⋯(p − n + 1)/n! ∆ⁿf(x0) + Error
This is known as Newton’s forward difference formula for interpolation, which gives the
value of f(x0 + ph) in terms of f(x0) and its leading differences.
This formula is also known as the Newton–Gregory forward difference interpolation formula.
Here p = (x − x0)/h.
An alternate expression is
yx = y0 + p ∆y0 + p(p − 1)/2! ∆²y0 + p(p − 1)(p − 2)/3! ∆³y0 + ⋯
+ p(p − 1)⋯(p − n + 1)/n! ∆ⁿy0 + Error
If we retain (r + 1) terms, we obtain a polynomial of degree r agreeing with yx at
x0, x1, …, xr.
This formula is mainly used for interpolating the values of y near the beginning of a set of
tabular values and for extrapolating values of y a short distance backward from y0.
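The formula translates directly into code. The following Python sketch is ours; the sample table is the one used in a later worked example (x = 0.2, …, 0.6):

```python
def newton_forward(xs, ys, x):
    """Newton's forward difference interpolation on equally spaced xs."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # Leading differences y0, Δy0, Δ²y0, ...
    diffs, col = [], list(ys)
    while col:
        diffs.append(col[0])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    total, term = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += term * d
        term *= (p - k) / (k + 1)   # builds p(p-1)...(p-k)/(k+1)!
    return total

xs = [0.2, 0.3, 0.4, 0.5, 0.6]
ys = [0.2304, 0.2788, 0.3222, 0.3617, 0.3979]
print(newton_forward(xs, ys, 0.36))   # ≈ 0.30536
```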

Example:
Evaluate f (15), given the following table of values:

Solution:
The forward differences are tabulated as


We have Newton’s forward difference interpolation formula


yx = y0 + p ∆y0 + p(p − 1)/2! ∆²y0 + p(p − 1)(p − 2)/3! ∆³y0 + ⋯
+ p(p − 1)⋯(p − n + 1)/n! ∆ⁿy0 + Error
Here we have
x0 = 10, y0 = 46, ∆y0 = 20, ∆²y0 = −5, ∆³y0 = 2, ∆⁴y0 = −3

Let y15 be the value of y when x = 15. Then
p = (x − x0)/h = (15 − 10)/10 = 0.5
f(15) = y15 = 46 + (0.5)(20) + (0.5)(0.5 − 1)/2 (−5)
+ (0.5)(0.5 − 1)(0.5 − 2)/6 (2) + (0.5)(0.5 − 1)(0.5 − 2)(0.5 − 3)/24 (−3)
= 46 + 10 + 0.625 + 0.125 + 0.1172
f(15) = 56.8672, correct to four decimal places.

Example
Find Newton’s forward difference, interpolating polynomial for the following data:

Solution;
The forward difference table to the given data is


Since the 3rd and 4th leading differences are zero, we have Newton’s forward difference
interpolating formula as
y = y0 + p ∆y0 + p(p − 1)/2 ∆²y0
In this problem,
x0 = 0.1, y0 = 1.40, ∆y0 = 0.16, ∆²y0 = 0.04,
and
p = (x − 0.1)/0.1 = 10x − 1
Substituting these values,
y = f(x) = 1.40 + (10x − 1)(0.16) + (10x − 1)(10x − 2)/2 (0.04)
This is the required Newton’s interpolating polynomial.
Example
Estimate the missing figure in the following table:

Solution
Since we are given four entries in the table, the function y = f (x) can be represented by a
polynomial of degree three.

∆³f(x) = constant
and ∆⁴f(x) = 0, for all x.
In particular,
∆⁴f(x0) = 0
Equivalently,
(E − 1)⁴ f(x0) = 0


Expanding, we have
(E⁴ − 4E³ + 6E² − 4E + 1) f(x0) = 0
That is,
f(x4) − 4 f(x3) + 6 f(x2) − 4 f(x1) + f(x0) = 0

Using the values given in the table, we obtain

32 − 4 f(x3) + 6 × 7 − 4 × 5 + 2 = 0

which gives f(x3), the missing value, equal to 14.
Example
Consider the following table of values
x .2 .3 .4 .5 .6
F(x) .2304 .2788 .3222 .3617 .3979
Find f (.36) using Newton’s Forward Difference Formula.
Solution

x y = f ( x) ∆y ∆2 y ∆3 y ∆4 y
0.2 0.2304 0.0484 -0.005 0.0011 -0.0005
0.3 0.2788 0.0434 -0.0039 0.0006
0.4 0.3222 0.0395 -0.0033
0.5 0.3617 0.0362
0.6 0.3979

yx = y0 + p ∆y0 + p(p − 1)/2! ∆²y0 + p(p − 1)(p − 2)/3! ∆³y0
+ p(p − 1)(p − 2)(p − 3)/4! ∆⁴y0 + ⋯ + p(p − 1)(p − 2)⋯(p − n + 1)/n! ∆ⁿy0
where
x0 = 0.2, y0 = 0.2304, ∆y0 = 0.0484, ∆²y0 = −0.005, ∆³y0 = 0.0011, ∆⁴y0 = −0.0005
and
p = (x − x0)/h = (0.36 − 0.2)/0.1 = 1.6
Therefore,
yx = 0.2304 + 1.6(0.0484) + 1.6(1.6 − 1)/2! (−0.005) + 1.6(1.6 − 1)(1.6 − 2)/3! (0.0011)
+ 1.6(1.6 − 1)(1.6 − 2)(1.6 − 3)/4! (−0.0005)
= 0.2304 + 0.07744 − 0.0024 + [1.6(0.6)(−0.4)/6](0.0011) + [1.6(0.6)(−0.4)(−1.4)/24](−0.0005)
= 0.3078 − 0.0024 − 0.00007 − 0.00001
= 0.3053


Example
Find a cubic polynomial in x which takes on the values
-3, 3, 11, 27, 57 and 107, when x = 0, 1, 2, 3, 4 and 5 respectively.
Solution
Here, the observations are given at equal intervals of unit width.
To determine the required polynomial, we first construct the difference table
Difference Table

Since the 4th and higher order differences are zero, the required Newton’s interpolation
formula is

f(x0 + ph) = f(x0) + p ∆f(x0) + p(p − 1)/2 ∆²f(x0) + p(p − 1)(p − 2)/6 ∆³f(x0)
Here
p = (x − x0)/h = (x − 0)/1 = x
∆f(x0) = 6
∆²f(x0) = 2
∆³f(x0) = 6
Substituting these values into the formula, we have
f(x) = −3 + 6x + x(x − 1)/2 (2) + x(x − 1)(x − 2)/6 (6)
f(x) = x³ − 2x² + 7x − 3,
which is the required cubic polynomial.
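The result is easily verified: the cubic must reproduce every tabulated value. A quick Python check (ours):

```python
def f(x):
    # The interpolating cubic found above.
    return x**3 - 2 * x**2 + 7 * x - 3

values = [f(x) for x in range(6)]
print(values)   # should match the tabulated data -3, 3, 11, 27, 57, 107
```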


NEWTON’S BACKWARD DIFFERENCE INTERPOLATION FORMULA

For interpolating the value of the function y = f(x) near the end of a table of values, and to
extrapolate the value of the function a short distance forward from yn, Newton’s backward
interpolation formula is used.
Derivation
Let y = f(x) be a function which takes on the values
f(xn), f(xn − h), f(xn − 2h), …, f(x0) corresponding to the equispaced values xn, xn − h, xn − 2h,
…, x0. Suppose we wish to evaluate the function f(x) at (xn + ph), where p is any real
number. Then we have the shift operator E, such that
f(xn + ph) = E^p f(xn) = (E⁻¹)^(−p) f(xn) = (1 − ∇)^(−p) f(xn)
Binomial expansion yields

f(xn + ph) = [1 + p∇ + p(p + 1)/2! ∇² + p(p + 1)(p + 2)/3! ∇³ + ⋯
+ p(p + 1)(p + 2)⋯(p + n − 1)/n! ∇ⁿ + Error] f(xn)

That is,

f(xn + ph) = f(xn) + p ∇f(xn) + p(p + 1)/2! ∇²f(xn) + p(p + 1)(p + 2)/3! ∇³f(xn) + ⋯
+ p(p + 1)(p + 2)⋯(p + n − 1)/n! ∇ⁿf(xn) + Error

This formula is known as Newton’s backward interpolation formula. This formula is also
known as the Newton–Gregory backward difference interpolation formula.
If we retain (r + 1) terms, we obtain a polynomial of degree r agreeing with f(x) at xn,
xn−1, …, xn−r. Alternatively, this formula can also be written as

yx = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn + ⋯
+ p(p + 1)(p + 2)⋯(p + n − 1)/n! ∇ⁿyn + Error

Here p = (x − xn)/h.
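A Python sketch of the backward formula (ours). For sample data we assume the next example tabulates f(x) = x³ at x = 4, …, 8, which is consistent with the quoted differences ∇yn = 169, ∇²yn = 42, ∇³yn = 6:

```python
def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation on equally spaced xs."""
    h = xs[1] - xs[0]
    p = (x - xs[-1]) / h
    # Differences at the last entry: yn, ∇yn, ∇²yn, ...
    diffs, col = [], list(ys)
    while col:
        diffs.append(col[-1])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    total, term = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += term * d
        term *= (p + k) / (k + 1)   # builds p(p+1)...(p+k)/(k+1)!
    return total

xs = [4, 5, 6, 7, 8]
ys = [64, 125, 216, 343, 512]     # assumed: y = x³, matching ∇yn = 169 etc.
print(newton_backward(xs, ys, 7.5))   # 7.5³ = 421.875
```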

Example
For the following table of values, estimate f (7.5).


Solution
The value to be interpolated is at the end of the table. Hence, it is appropriate to use
Newton’s backward interpolation formula. Let us first construct the backward difference
table for the given data
Difference Table

Since the 4th and higher order differences are zero, the required Newton’s backward
interpolation formula is
yx = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn
In this problem,
p = (x − xn)/h = (7.5 − 8.0)/1 = −0.5
∇yn = 169, ∇²yn = 42, ∇³yn = 6
Therefore,
y7.5 = 512 + (−0.5)(169) + (−0.5)(0.5)/2 (42) + (−0.5)(0.5)(1.5)/6 (6)
= 512 − 84.5 − 5.25 − 0.375
= 421.875
Example
The sales for the last five years are given in the table below. Estimate the sales for the year 1979.


Solution
Newton’s backward difference Table

In this example,
p = (1979 − 1982)/2 = −1.5
and
∇yn = 5, ∇²yn = 1, ∇³yn = 2, ∇⁴yn = 5
Newton’s interpolation formula gives
y1979 = 57 + (−1.5)(5) + (−1.5)(−0.5)/2 (1) + (−1.5)(−0.5)(0.5)/6 (2)
+ (−1.5)(−0.5)(0.5)(1.5)/24 (5)
= 57 − 7.5 + 0.375 + 0.125 + 0.1172
Therefore,
y1979 = 50.1172
Example
Consider the following table of values
x 1 1.1 1.2 1.3 1.4 1.5
F(x) 2 2.1 2.3 2.7 3.5 4.5
Use Newton’s Backward Difference Formula to estimate the value of f(1.45) .

Solution

x y=F(x) ∇y ∇2 y ∇3 y ∇4 y ∇5 y
1 2
1.1 2.1 0.1
1.2 2.3 0.2 0.1
1.3 2.7 0.4 0.2 0.1
1.4 3.5 0.8 0.4 0.2 0.1
1.5 4.5 1 0.2 -0.2 -0.4 -0.5


p = (x − xn)/h = (1.45 − 1.5)/0.1 = −0.5, ∇yn = 1, ∇²yn = 0.2, ∇³yn = −0.2, ∇⁴yn = −0.4,
∇⁵yn = −0.5
As we know that
yx = yn + p ∇yn + p(p + 1)/2! ∇²yn + p(p + 1)(p + 2)/3! ∇³yn
+ p(p + 1)(p + 2)(p + 3)/4! ∇⁴yn + p(p + 1)(p + 2)(p + 3)(p + 4)/5! ∇⁵yn
we get
yx = 4.5 + (−0.5)(1) + (−0.5)(0.5)/2! (0.2) + (−0.5)(0.5)(1.5)/3! (−0.2)
+ (−0.5)(0.5)(1.5)(2.5)/4! (−0.4) + (−0.5)(0.5)(1.5)(2.5)(3.5)/5! (−0.5)
= 4.5 − 0.5 − 0.025 + 0.0125 + 0.015625 + 0.013672
= 4.016797


LAGRANGE’S INTERPOLATION FORMULA


Newton’s interpolation formulae developed earlier can be used only when the values of
the independent variable x are equally spaced. Also, the differences of y must ultimately
become small.
If the values of the independent variable are not given at equidistant intervals, then we
have the basic formula associated with the name of Lagrange, which will be derived now.
Let y = f(x) be a function which takes the values y0, y1, …, yn corresponding to x0,
x1, …, xn. Since there are (n + 1) values of y corresponding to (n + 1) values of x, we
can represent the function f(x) by a polynomial of degree n.
Suppose we write this polynomial in the form
f(x) = A0 xⁿ + A1 xⁿ⁻¹ + ⋯ + An
or in the form
y = f(x) = a0 (x − x1)(x − x2) ⋯ (x − xn)
+ a1 (x − x0)(x − x2) ⋯ (x − xn)
+ a2 (x − x0)(x − x1)(x − x3) ⋯ (x − xn) + ⋯
+ an (x − x0)(x − x1) ⋯ (x − xn−1)
Here, the coefficients ak are so chosen as to satisfy this equation for the (n + 1) pairs
(xi, yi). Thus we get
y0 = f(x0) = a0 (x0 − x1)(x0 − x2) ⋯ (x0 − xn)
Therefore,
a0 = y0 / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)]
Similarly, we obtain
a1 = y1 / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)]
and, in general,
ai = yi / [(xi − x0)(xi − x1) ⋯ (xi − xi−1)(xi − xi+1) ⋯ (xi − xn)]
⋮
an = yn / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)]

Substituting these values of a0, a1, …, an, we get

y = f(x) = [(x − x1)(x − x2) ⋯ (x − xn)] / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)] y0
+ [(x − x0)(x − x2) ⋯ (x − xn)] / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)] y1 + ⋯
+ [(x − x0)(x − x1) ⋯ (x − xi−1)(x − xi+1) ⋯ (x − xn)] / [(xi − x0)(xi − x1) ⋯ (xi − xi−1)(xi − xi+1) ⋯ (xi − xn)] yi + ⋯
This is Lagrange’s formula for interpolation.


This formula can be used whether the values x0, x1, …, xn are equally spaced or not.
Alternatively, this can also be written in compact form as
y = f(x) = L0(x) y0 + L1(x) y1 + ⋯ + Li(x) yi + ⋯ + Ln(x) yn
= ∑ₖ₌₀ⁿ Lk(x) yk = ∑ₖ₌₀ⁿ Lk(x) f(xk)
where
Li(x) = [(x − x0)(x − x1) ⋯ (x − xi−1)(x − xi+1) ⋯ (x − xn)] / [(xi − x0)(xi − x1) ⋯ (xi − xi−1)(xi − xi+1) ⋯ (xi − xn)]
We can easily observe that
Li(xi) = 1 and Li(xj) = 0, i ≠ j.
Thus, introducing the Kronecker delta notation,
Li(xj) = δij = 1 if i = j, and 0 if i ≠ j.
Further, if we introduce the notation
∏(x) = ∏ᵢ₌₀ⁿ (x − xi) = (x − x0)(x − x1) ⋯ (x − xn)
that is, ∏(x) is a product of (n + 1) factors, then its derivative
∏′(x) contains a sum of (n + 1) terms, in each of which one of the
factors of ∏(x) is absent.
We also define
Pk(x) = ∏ᵢ≠ₖ (x − xi)
which is the same as ∏(x) except that the factor (x − xk) is absent. Then
∏′(x) = P0(x) + P1(x) + ⋯ + Pn(x)
But, when x = xk, all terms in the above sum vanish except Pk(xk).
Hence
∏′(xk) = Pk(xk) = (xk − x0) ⋯ (xk − xk−1)(xk − xk+1) ⋯ (xk − xn)
Lk(x) = Pk(x)/Pk(xk) = Pk(x)/∏′(xk) = ∏(x) / [(x − xk) ∏′(xk)]
Finally, the Lagrange interpolation polynomial of degree n can be written as
y(x) = f(x) = ∑ₖ₌₀ⁿ [∏(x) / ((x − xk) ∏′(xk))] f(xk)
= ∑ₖ₌₀ⁿ Lk(x) f(xk) = ∑ₖ₌₀ⁿ Lk(x) yk
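The compact form maps directly to code. A minimal Python sketch (the function name is ours), exercised on the data of the next example:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        Lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                # Basis polynomial: 1 at xk, 0 at every other node.
                Lk *= (x - xi) / (xk - xi)
        total += Lk * yk
    return total

# The nodes need not be equally spaced.
print(lagrange([1, 3, 4, 6], [-3, 0, 30, 132], 5))   # ≈ 75
```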

Example

Find Lagrange’s interpolation polynomial fitting the points
y(1) = −3, y(3) = 0, y(4) = 30, y(6) = 132.
Hence find y(5).
Solution

The given data can be arranged as

Using Lagrange’s interpolation formula, we have

y(x) = f(x) = [(x − 3)(x − 4)(x − 6)] / [(1 − 3)(1 − 4)(1 − 6)] (−3) + [(x − 1)(x − 4)(x − 6)] / [(3 − 1)(3 − 4)(3 − 6)] (0)
+ [(x − 1)(x − 3)(x − 6)] / [(4 − 1)(4 − 3)(4 − 6)] (30) + [(x − 1)(x − 3)(x − 4)] / [(6 − 1)(6 − 3)(6 − 4)] (132)

On simplification, we get
y(x) = (1/10)(−5x³ + 135x² − 460x + 300)
= ½ (−x³ + 27x² − 92x + 60)
which is the required Lagrange interpolation polynomial.
Now, y(5) = 75.
Example
Given the following data, evaluate f (3) using Lagrange’s interpolating
polynomial.

Solution
Using Lagrange’s formula,
f(x) = [(x − x1)(x − x2)] / [(x0 − x1)(x0 − x2)] f(x0) + [(x − x0)(x − x2)] / [(x1 − x0)(x1 − x2)] f(x1) + [(x − x0)(x − x1)] / [(x2 − x0)(x2 − x1)] f(x2)
Therefore,
f(3) = [(3 − 2)(3 − 5)] / [(1 − 2)(1 − 5)] (1) + [(3 − 1)(3 − 5)] / [(2 − 1)(2 − 5)] (4) + [(3 − 1)(3 − 2)] / [(5 − 1)(5 − 2)] (10)
= −0.5 + 5.3333 + 1.6667
= 6.5
Example
Find the interpolating polynomial for the data using Lagrange’s formula

x 1 2 -4
F(x) 3 -5 -4
Solution:
As Lagrange’s formula for the interpolating polynomial is given by

y(x) = f(x) = [(x − x1)(x − x2)] / [(x0 − x1)(x0 − x2)] f(x0) + [(x − x0)(x − x2)] / [(x1 − x0)(x1 − x2)] f(x1) + [(x − x0)(x − x1)] / [(x2 − x0)(x2 − x1)] f(x2)

we have

y(x) = [(x − 2)(x + 4)] / [(1 − 2)(1 + 4)] (3) + [(x − 1)(x + 4)] / [(2 − 1)(2 + 4)] (−5) + [(x − 1)(x − 2)] / [(−4 − 1)(−4 − 2)] (−4)
= −(3/5)(x² + 2x − 8) − (5/6)(x² + 3x − 4) − (4/30)(x² − 3x + 2)
= (−3/5 − 5/6 − 4/30) x² + (−6/5 − 15/6 + 12/30) x + (24/5 + 20/6 − 8/30)
= −(47/30) x² − (33/10) x + 118/15
which is the required polynomial.


DIVIDED DIFFERENCES
Let us assume that the function y = f(x) is known for several values of x, (xi, yi), for
i = 0, 1, …, n. The divided differences of orders 0, 1, 2, …, n are now defined recursively as:
y[x0] = y(x0) = y0
is the zero-th order divided difference.
The first order divided difference is defined as
y[x0, x1] = (y1 − y0)/(x1 − x0)
Similarly, the higher order divided differences are defined in terms of the lower order divided
differences by relations of the form
y[x0, x1, x2] = (y[x1, x2] − y[x0, x1])/(x2 − x0)
Generally,
y[x0, x1, …, xn] = (y[x1, x2, …, xn] − y[x0, x1, …, xn−1])/(xn − x0)
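The recursion can be sketched as a short routine (ours); the sample data is the table constructed in the example below:

```python
def divided_differences(xs, ys):
    """Return [y[x0], y[x0,x1], y[x0,x1,x2], ...] via the recursion above."""
    coeffs = [list(ys)]
    for order in range(1, len(xs)):
        prev = coeffs[-1]
        # Each entry divides by the spread of the (order+1) points involved.
        coeffs.append([
            (prev[i + 1] - prev[i]) / (xs[i + order] - xs[i])
            for i in range(len(prev) - 1)
        ])
    return [col[0] for col in coeffs]

xs = [1, 2, 3, 4, 5, 6]
ys = [-3, 0, 15, 48, 105, 192]
print(divided_differences(xs, ys))
```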

Standard format of the Divided Differences

We can easily verify that the divided difference is a symmetric function of its arguments.
That is,
y[x1, x0] = y[x0, x1] = y0/(x0 − x1) + y1/(x1 − x0)
Now,
y[x0, x1, x2] = (y[x1, x2] − y[x0, x1])/(x2 − x0)
= 1/(x2 − x0) [(y2 − y1)/(x2 − x1) − (y1 − y0)/(x1 − x0)]


Therefore,

y[x0, x1, x2] = y0/[(x0 − x1)(x0 − x2)] + y1/[(x1 − x0)(x1 − x2)] + y2/[(x2 − x0)(x2 − x1)]

This is a symmetric form, and hence suggests the general result

y[x0, …, xk] = y0/[(x0 − x1) ⋯ (x0 − xk)] + y1/[(x1 − x0) ⋯ (x1 − xk)] + ⋯ + yk/[(xk − x0) ⋯ (xk − xk−1)]
or
y[x0, …, xk] = ∑ᵢ₌₀ᵏ yi / ∏ (xi − xj),
where the product in the denominator runs over j = 0, 1, …, k with j ≠ i.

Example:
Construct Newton’s divided difference table for the values x = 1, 2, 3, 4, 5, 6
and f(x) = −3, 0, 15, 48, 105, 192.
Solution:

x:                        1, 2, 3, 4, 5, 6
f(x):                     −3, 0, 15, 48, 105, 192
1st divided differences:  (0 − (−3))/(2 − 1) = 3, (15 − 0)/(3 − 2) = 15, 33, 57, 87
2nd divided differences:  (15 − 3)/(3 − 1) = 6, (33 − 15)/(4 − 2) = 9, (57 − 33)/(5 − 3) = 12, (87 − 57)/(6 − 4) = 15
3rd divided differences:  (9 − 6)/(4 − 1) = 1, (12 − 9)/(5 − 2) = 1, (15 − 12)/(6 − 3) = 1

NEWTON’S DIVIDED DIFFERENCE INTERPOLATION FORMULA

Let y = f(x) be a function which takes the values y0, y1, …, yn corresponding to x = xi,
i = 0, 1, …, n. We choose an interpolating polynomial, interpolating at x = xi, i = 0, 1, …, n,
in the following form:
y = f(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1)
+ ⋯ + an(x − x0)(x − x1) ⋯ (x − xn−1)

© Copyright Virtual University of Pakistan 2

140
Numerical Analysis –MTH603 VU

Here, the coefficients ak are chosen so that the above equation is satisfied by the (n + 1) pairs (xi, yi).
Thus, we have

y(x0) = f(x0) = y0 = a0
y(x1) = f(x1) = y1 = a0 + a1(x1 − x0)
y(x2) = f(x2) = y2 = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1)
...
yn = a0 + a1(xn − x0) + a2(xn − x0)(xn − x1) + ... + an(xn − x0)...(xn − xn−1)

The coefficients a0, a1, ..., an can be easily obtained from the above system of equations, as its coefficient matrix is lower triangular.
The first equation gives
a0 = y ( x0 ) = y0
The second equation gives
a1 = (y1 − y0)/(x1 − x0) = y[x0, x1]
Third equation yields
a2 = [ y2 − y0 − (x2 − x0) y[x0, x1] ] / [ (x2 − x0)(x2 − x1) ]
which can be rewritten as
a2 = [ (y2 − y1) + y[x0, x1](x1 − x0) − (x2 − x0) y[x0, x1] ] / [ (x2 − x0)(x2 − x1) ]
That is,
a2 = [ (y2 − y1) + y[x0, x1](x1 − x2) ] / [ (x2 − x0)(x2 − x1) ] = ( y[x1, x2] − y[x0, x1] ) / (x2 − x0)
Thus, in terms of second order divided differences, we have
a2 = y[x0, x1, x2]
Similarly, we can show that an = y[x0, x1, ..., xn].
Newton's divided difference interpolation formula can therefore be written as
y = f(x) = y0 + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2] + ... + (x − x0)(x − x1)...(x − xn−1) y[x0, x1, ..., xn]
Newton’s divided differences can also be expressed in terms of forward, backward and
central differences. They can be easily derived.
Assuming equi-spaced values of abscissa, we have

© Copyright Virtual University of Pakistan 3

141
Numerical Analysis –MTH603 VU

y[x0, x1] = (y1 − y0)/(x1 − x0) = ∆y0 / h
y[x0, x1, x2] = (y[x1, x2] − y[x0, x1]) / (x2 − x0) = (∆y1/h − ∆y0/h) / (2h) = ∆²y0 / (2! h²)
By induction, we can in general arrive at the result
y[x0, x1, ..., xn] = ∆ⁿy0 / (n! hⁿ)
Similarly,
y[x0, x1] = (y1 − y0)/(x1 − x0) = ∇y1 / h
y[x0, x1, x2] = (y[x1, x2] − y[x0, x1]) / (x2 − x0) = (∇y2/h − ∇y1/h) / (2h) = ∇²y2 / (2! h²)
In general, we have
y[x0, x1, ..., xn] = ∇ⁿyn / (n! hⁿ)
Also, in terms of central differences, we have
y[x0, x1] = (y1 − y0)/(x1 − x0) = δy1/2 / h
y[x0, x1, x2] = (y[x1, x2] − y[x0, x1]) / (x2 − x0) = (δy3/2/h − δy1/2/h) / (2h) = δ²y1 / (2! h²)
In general, we have the following pattern:
y[x0, x1, ..., x2m] = δ^(2m) ym / ((2m)! h^(2m))
y[x0, x1, ..., x2m+1] = δ^(2m+1) ym+(1/2) / ((2m+1)! h^(2m+1))

© Copyright Virtual University of Pakistan 4

142
Numerical Analysis –MTH603 VU

Example
Find the interpolating polynomial by (i) Lagrange's formula and (ii) Newton's divided difference formula for the following data. Hence show that they represent the same interpolating polynomial.

x   0   1   2   4
y   1   1   2   5

Solution
The divided difference table for the given data is constructed as follows:

x   y   1st D.D   2nd D.D   3rd D.D
0   1
1   1   0
2   2   1         1/2
4   5   3/2       1/6       -1/12

i) Lagrange's interpolation formula gives

y = f(x) = [(x − 1)(x − 2)(x − 4) / ((−1)(−2)(−4))] (1) + [(x − 0)(x − 2)(x − 4) / ((1 − 0)(1 − 2)(1 − 4))] (1)
         + [(x − 0)(x − 1)(x − 4) / ((2)(2 − 1)(2 − 4))] (2) + [(x − 0)(x − 1)(x − 2) / (4(4 − 1)(4 − 2))] (5)
  = −(x³ − 7x² + 14x − 8)/8 + (x³ − 6x² + 8x)/3 − (x³ − 5x² + 4x)/2 + 5(x³ − 3x² + 2x)/24
  = −x³/12 + 3x²/4 − 2x/3 + 1
(ii) Newton's divided difference formula gives

y = f(x) = 1 + (x − 0)(0) + (x − 0)(x − 1)(1/2) + (x − 0)(x − 1)(x − 2)(−1/12)
         = −x³/12 + 3x²/4 − 2x/3 + 1
We observe that the interpolating polynomial by both Lagrange’s and Newton’s divided
difference formulae is one and the same.
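The equivalence can also be checked numerically. The sketch below (an added illustration; the helper names are our own) evaluates the Newton form and the Lagrange form on the data x = 0, 1, 2, 4 and y = 1, 1, 2, 5 at a few sample points:

```python
def newton_eval(x, xs, coeffs):
    # coeffs[k] = y[x0, ..., xk]; evaluate by nested multiplication
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

def lagrange_eval(x, xs, ys):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs, ys = [0, 1, 2, 4], [1, 1, 2, 5]
coeffs = [1, 0, 1/2, -1/12]   # leading diagonal of the divided difference table
for x in [0.5, 1.7, 3.0, 3.9]:
    assert abs(newton_eval(x, xs, coeffs) - lagrange_eval(x, xs, ys)) < 1e-9
```

Both forms agree to rounding error, as the theory predicts.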

© Copyright Virtual University of Pakistan 1

143
Numerical Analysis –MTH603 VU

Note!
Newton's formula involves fewer arithmetic operations than Lagrange's.
Example
Using Newton's divided difference formula, find the quadratic equation for the following data. Hence find y(2).

x   0   1   4
y   2   1   4

Solution:
The divided difference table for the given data is constructed as:
x   y   1st D.D   2nd D.D
0   2
1   1   -1        1/2
4   4   1

Now, using Newton's divided difference formula, we have

y = 2 + (x − 0)(−1) + (x − 0)(x − 1)(1/2)
  = (x² − 3x + 4)/2

Hence, y(2) = 1.
Example
Find the equation of the cubic curve which passes through the points (4, -43), (7, 83), (9, 327) and (12, 1053), using the divided difference formula.
Solution
The Newton’s divided difference table is given by

x    y      1st D.D   2nd D.D   3rd D.D
4    -43
7    83     42
9    327    122       16
12   1053   242       24        1

© Copyright Virtual University of Pakistan 2

144
Numerical Analysis –MTH603 VU

Newton's divided difference formula is
y = f(x) = y0 + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2] + ... + (x − x0)(x − x1)...(x − xn−1) y[x0, x1, ..., xn]
  = −43 + (x − 4)(42) + (x − 4)(x − 7)(16) + (x − 4)(x − 7)(x − 9)(1)
  = −43 + (x − 4){42 + 16x − 112 + x² − 16x + 63}
  = −43 + (x − 4)(x² − 7)
  = −43 + x³ − 7x − 4x² + 28
  = x³ − 4x² − 7x − 15
Which is the required polynomial.
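As a quick check (an added sketch, not part of the handout), the cubic can be evaluated at the four given abscissae to confirm that it reproduces the data:

```python
def p(x):
    # x^3 - 4x^2 - 7x - 15, written in nested (Horner) form
    return ((x - 4) * x - 7) * x - 15

assert [p(4), p(7), p(9), p(12)] == [-43, 83, 327, 1053]
```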
Example
A function y = f(x) is given at the sample points x = x0, x1 and x2. Show that Newton's divided difference interpolation formula and the corresponding Lagrange's interpolation formula are identical.
Solution
For the function y = f (x), we have the data ( xi , yi ), i = 0,1, 2.
The interpolation polynomial using Newton's divided difference formula is given as
y = f(x) = y0 + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2]
Using the definition of divided differences, we can rewrite the equation in the form
y = y0 + (x − x0)(y1 − y0)/(x1 − x0)
    + (x − x0)(x − x1) [ y0/((x0 − x1)(x0 − x2)) + y1/((x1 − x0)(x1 − x2)) + y2/((x2 − x0)(x2 − x1)) ]
  = [ 1 − (x0 − x)/(x0 − x1) + (x − x0)(x − x1)/((x0 − x1)(x0 − x2)) ] y0
    + [ (x − x0)/(x1 − x0) + (x − x0)(x − x1)/((x1 − x0)(x1 − x2)) ] y1
    + [ (x − x0)(x − x1)/((x2 − x0)(x2 − x1)) ] y2
On simplification, it reduces to
y = (x − x1)(x − x2)/((x0 − x1)(x0 − x2)) y0 + (x − x0)(x − x2)/((x1 − x0)(x1 − x2)) y1 + (x − x0)(x − x1)/((x2 − x0)(x2 − x1)) y2
which is the Lagrange form of the interpolation polynomial.
Hence the two forms are identical.
Newton’s Divided Difference Formula with Error Term
Following the basic definition of divided differences, we have for any x

© Copyright Virtual University of Pakistan 3

145
Numerical Analysis –MTH603 VU

y(x) = y0 + (x − x0) y[x, x0]
y[x, x0] = y[x0, x1] + (x − x1) y[x, x0, x1]
y[x, x0, x1] = y[x0, x1, x2] + (x − x2) y[x, x0, x1, x2]
...
y[x, x0, ..., xn−1] = y[x0, x1, ..., xn] + (x − xn) y[x, x0, ..., xn]
Multiplying the second equation by (x − x0), the third by (x − x0)(x − x1), and so on, and the last by (x − x0)(x − x1)...(x − xn−1), and adding the resulting equations, we obtain
y(x) = y0 + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2] + ... + (x − x0)(x − x1)...(x − xn−1) y[x0, x1, ..., xn] + ε(x)
where
ε(x) = (x − x0)(x − x1)...(x − xn) y[x, x0, ..., xn]
Please note that for x = x0, x1, ..., xn, the error term ε(x) vanishes.

© Copyright Virtual University of Pakistan 4

146
Numerical Analysis –MTH603 VU

Error Term in Interpolation Formulae


We know that if y(x) is approximated by a polynomial Pn(x) of degree n, then the error is given by
ε(x) = y(x) − Pn(x),
where
ε(x) = (x − x0)(x − x1)...(x − xn) y[x, x0, ..., xn]
Alternatively, it is also expressed as
ε(x) = Π(x) y[x, x0, ..., xn] = K Π(x)
Now, consider a function F(x) such that
F(x) = y(x) − Pn(x) − K Π(x)
Determine the constant K in such a way that F(x) vanishes for x = x0, x1, ..., xn and also for an arbitrarily chosen point x̄, which is different from the given (n + 1) points.
Let I denote the closed interval spanned by the values x0, ..., xn, x̄. Then F(x) vanishes (n + 2) times in the interval I.
By Rolle's theorem, F′(x) vanishes at least (n + 1) times in the interval I, F″(x) at least n times, and so on.
Eventually, we can show that F^(n+1)(x) vanishes at least once in the interval I, say at x = ξ. Thus, we obtain
0 = y^(n+1)(ξ) − Pn^(n+1)(ξ) − K Π^(n+1)(ξ)
Since Pn(x) is a polynomial of degree n, its (n + 1)th derivative is zero. Also, from the definition of Π(x),
Π^(n+1)(x) = (n + 1)!
Therefore we get
K = y^(n+1)(ξ) / (n + 1)!
Hence
ε(x) = y(x) − Pn(x) = Π(x) y^(n+1)(ξ) / (n + 1)!
for some ξ in the interval I.
Thus the error committed in replacing y(x) by either Newton's divided difference formula or by the identical Lagrange formula is
ε(x) = Π(x) y[x, x0, ..., xn] = Π(x) y^(n+1)(ξ) / (n + 1)!
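To make the error formula concrete, the sketch below (an added illustration; the function f(x) = eˣ and the nodes are chosen arbitrarily) compares the actual interpolation error with the bound |Π(x)| max|f^(n+1)| / (n + 1)!, using the fact that every derivative of eˣ on [0, 1] is at most e:

```python
import math

def lagrange_eval(x, xs, ys):
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 0.25, 0.5, 0.75, 1.0]       # n + 1 = 5 nodes, so n = 4
ys = [math.exp(x) for x in xs]

x = 0.6
actual_error = abs(math.exp(x) - lagrange_eval(x, xs, ys))
pi_x = math.prod(x - xi for xi in xs)  # Π(x) at the evaluation point
bound = abs(pi_x) * math.e / math.factorial(5)
assert actual_error <= bound           # the bound must dominate the true error
```

Here the true error is roughly two thirds of the bound, since e^ξ for ξ in (0, 1) is somewhat smaller than e.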

© Copyright Virtual University of Pakistan 1

147
Numerical Analysis –MTH603 VU

INTERPOLATION IN TWO DIMENSIONS

Let u be a polynomial function in two variables, say x and y, in particular quadratic in x and cubic in y, which in general can be written as
u = f(x, y) = a0 + a1x + a2y + a3x² + a4xy + a5y² + a6y³ + a7y²x + a8yx² + a9y³x + a10y²x² + a11y³x²
This relation involves many terms. If we have to write a relation involving three or more variables, even low degree polynomials give rise to long expressions. If necessary, we can certainly write them out, but such complications can be avoided by handling each variable separately.
If we hold x constant, say x = c, the equation simplifies to the form
u (at x = c) = b0 + b1y + b2y² + b3y³
Now we adopt the following procedure to interpolate at a point (l, m) in a table of two variables, by treating one variable as constant, say x = x1. The problem reduces to that of single variable interpolation.
Any one of the methods discussed in the preceding sections can then be applied to get f(x1, m). We then repeat this procedure for various values of x, say x = x1, x2, ..., xn, keeping y constant. Thus, we get a new table with y constant at the value y = m and with x varying. We can then interpolate at x = l.
Example
Tabulate the values of the function f(x, y) = x² + y² − y for x = 0, 1, 2, 3, 4 and y = 0, 1, 2, 3, 4. Using the table of values, compute f(2.5, 3.5) by numerical double interpolation.
Solution
The values of the function for the given values of x and y are given in the following table:

         y: 0    1    2    3    4
x = 0       0    0    2    6    12
x = 1       1    1    3    7    13
x = 2       4    4    6    10   16
x = 3       9    9    11   15   21
x = 4       16   16   18   22   28

Using quadratic interpolation in both the x and y directions, we need to consider three points in each direction. To start with, we have to treat one variable as constant, say x. Keeping x = 2.5, y = 3.5 as near the centre of the set as possible, we choose the table of values corresponding to x = 1, 2, 3 and y = 2, 3, 4.
The region of fit for the construction of our interpolation polynomial is thus the block of entries with x = 1, 2, 3 and y = 2, 3, 4 in the table above.

Thus, using Newton's forward difference formula, we have

At x = 1:
y    f    ∆f   ∆²f
2    3
3    7    4    2
4    13   6
Similarly, at x = 2:
y    f    ∆f   ∆²f
2    6
3    10   4    2
4    16   6

Similarly, at x = 3:
y    f    ∆f   ∆²f
2    11
3    15   4    2
4    21   6

With p = (y − y0)/h = (3.5 − 2)/1 = 1.5, Newton's forward difference formula gives
f(1, 3.5) = f0 + p ∆f0 + [p(p − 1)/2!] ∆²f0
          = 3 + (1.5)(4) + [(1.5)(0.5)/2] (2) = 9.75
Similarly,
f(2, 3.5) = 6 + (1.5)(4) + [(1.5)(0.5)/2] (2) = 12.75
f(3, 3.5) = 11 + (1.5)(4) + [(1.5)(0.5)/2] (2) = 17.75
Cont !
Therefore we arrive at the following result.

At y = 3.5:
x    f       ∆f   ∆²f
1    9.75
2    12.75   3    2
3    17.75   5

© Copyright Virtual University of Pakistan 4

150
Numerical Analysis –MTH603 VU

Now defining p = (2.5 − 1)/1 = 1.5, we get
f(2.5, 3.5) = 9.75 + (1.5)(3) + [(1.5)(0.5)/2] (2) = 15
From the functional relation, we also find that
f(2.5, 3.5) = (2.5)² + (3.5)² − 3.5 = 15
and hence there is no error in the interpolation.
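The two-stage procedure can be coded directly; the sketch below (an added illustration, with the node choices mirroring the worked example) interpolates first in y and then in x:

```python
def newton_forward_quadratic(f0, d1, d2, p):
    """Newton's forward difference formula truncated at the second difference."""
    return f0 + p * d1 + p * (p - 1) / 2 * d2

def f(x, y):
    return x**2 + y**2 - y

# Stage 1: interpolate in y at y = 3.5 for each x = 1, 2, 3 (nodes y = 2, 3, 4).
p = (3.5 - 2) / 1
stage1 = []
for x in [1, 2, 3]:
    f2, f3, f4 = f(x, 2), f(x, 3), f(x, 4)
    d1, d2 = f3 - f2, (f4 - f3) - (f3 - f2)
    stage1.append(newton_forward_quadratic(f2, d1, d2, p))
# stage1 reproduces 9.75, 12.75, 17.75

# Stage 2: interpolate the stage-1 values in x at x = 2.5 (nodes x = 1, 2, 3).
q = (2.5 - 1) / 1
g1, g2, g3 = stage1
e1, e2 = g2 - g1, (g3 - g2) - (g2 - g1)
result = newton_forward_quadratic(g1, e1, e2, q)
print(result)  # 15.0, equal to f(2.5, 3.5) exactly
```

Because f is itself a quadratic in each variable, the quadratic fit introduces no error here.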

© Copyright Virtual University of Pakistan 5

151
Numerical Analysis –MTH603 VU

DIFFERENTIATION USING DIFFERENCE OPERATORS


Introduction
Consider a function of a single variable, y = f(x). If the function is known and simple, we can easily obtain its derivative(s) or evaluate its definite integral. However, if we do not know the function as such, or the function is complicated and is given in tabular form at a set of points x0, x1, ..., xn, we use only numerical methods for differentiation or integration of the given function.
We shall discuss numerical approximation to derivatives of functions of two or more
variables in the lectures to follow when we shall talk about partial differential equations.
In what follows, we shall derive and illustrate various formulae for numerical
differentiation of a function of a single variable based on finite difference operators and
interpolation.
Subsequently, we shall develop Newton-Cotes formulae and related trapezoidal rule and
Simpson’s rule for numerical integration of a function.
We assume that the function y = f(x) is given for the values of the independent variable x = x0 + ph, for p = 0, 1, 2, ... and so on. To find the derivatives of such a tabular function, we proceed as follows.
Case I:
Using the forward difference operator ∆ and combining the relations

E f(x) = f(x + h),   ∆ = E − 1,   and   hD = log E = log(1 + ∆)

Remember that the differential operator D has the property

D f(x) = d/dx f(x) = f′(x)
D² f(x) = d²/dx² f(x) = f″(x)

This would mean that, in terms of ∆:

D = (1/h) [ ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ∆⁵/5 − ... ]

Therefore
D f(x0) = f′(x0) = (1/h) [ ∆f(x0) − ∆²f(x0)/2 + ∆³f(x0)/3 − ∆⁴f(x0)/4 + ∆⁵f(x0)/5 − ... ] = d/dx f(x0)

© Copyright Virtual University of Pakistan 1

152
Numerical Analysis –MTH603 VU

In other words,

Dy0 = y0′ = (1/h) [ ∆y0 − ∆²y0/2 + ∆³y0/3 − ∆⁴y0/4 + ... ]

Also, we can easily verify that

D² = (1/h²) [ ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ... ]²
   = (1/h²) [ ∆² − ∆³ + (11/12)∆⁴ − (5/6)∆⁵ + ... ]

so that

D²y0 = d²y0/dx² = y0″ = (1/h²) [ ∆²y0 − ∆³y0 + (11/12)∆⁴y0 − (5/6)∆⁵y0 + ... ]

Case II:
Using the backward difference operator ∇, we have

hD = −log(1 − ∇)

On expansion, we have

D = (1/h) [ ∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ... ]

We can also verify that

D² = (1/h²) [ ∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ... ]²
   = (1/h²) [ ∇² + ∇³ + (11/12)∇⁴ + (5/6)∇⁵ + ... ]

Hence

dyn/dx = Dyn = yn′ = (1/h) [ ∇yn + ∇²yn/2 + ∇³yn/3 + ∇⁴yn/4 + ... ]

and

yn″ = D²yn = (1/h²) [ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + ... ]
The formulae for Dy0 and D²y0 are useful for calculating the first and second derivatives at the beginning of the table of values in terms of forward differences, while the formulae for yn′ and yn″ are useful for calculating the derivatives at the end of the table in terms of backward differences.
Case III: Using the central difference operator δ and following the definitions of the differential operator D, the central difference operator δ and the shift operator E, we have

δ = E^(1/2) − E^(−1/2) = e^(hD/2) − e^(−hD/2) = 2 sinh(hD/2)

Therefore, we have

hD/2 = sinh⁻¹(δ/2)

But
sinh⁻¹ x = x − (1/2)(x³/3) + ((1·3)/(2·4))(x⁵/5) − ((1·3·5)/(2·4·6))(x⁷/7) + ...

Using the last two equations we get

hD/2 = δ/2 − δ³/(6·8) + 3δ⁵/(40·32) − ...

That is,

D = (1/h) [ δ − δ³/24 + 3δ⁵/640 − ... ]

Therefore

dy/dx = y′ = Dy = (1/h) [ δy − δ³y/24 + 3δ⁵y/640 − ... ]

Also

D² = (1/h²) [ δ² − δ⁴/12 + δ⁶/90 − ... ]

Hence

y″ = D²y = (1/h²) [ δ²y − δ⁴y/12 + δ⁶y/90 − ... ]
For calculating the second
derivative at an interior tabular
point, we use the equation


D² = (1/h²) [ δ² − δ⁴/12 + δ⁶/90 − ... ]
while for computing the first derivative at an interior tabular point, we in general use another convenient form for D, which is derived as follows. Multiply the right hand side of

dy/dx = y′ = Dy = (1/h) [ δy − δ³y/24 + 3δ⁵y/640 − ... ]

by μ / sqrt(1 + δ²/4), which is unity, and note the binomial expansion

(1 + δ²/4)^(−1/2) = 1 − δ²/8 + (3/128)δ⁴ − (15/(48·64))δ⁶ + ...

We get

D = (μ/h) [ 1 − δ²/8 + (3/128)δ⁴ − ... ] [ δ − δ³/24 + 3δ⁵/640 − ... ]

On simplification, we obtain

D = (μ/h) [ δ − δ³/6 + δ⁵/30 − ... ]

Therefore the equation can also be written in another useful form as

y′ = Dy = (μ/h) [ δy − δ³y/6 + δ⁵y/30 − ... ]

The last two equations, for y″ and y′ respectively, are known as Stirling's formulae for computing the derivatives of a tabular function. Similar formulae can be derived for computing higher order derivatives of a tabular function.
The equation for y′ can also be written as

y′ = (μ/h) [ δy − (1²/3!) δ³y + (1²·2²/5!) δ⁵y − (1²·2²·3²/7!) δ⁷y + ... ]

In order to illustrate the use of the formulae derived so far for computing the derivatives of a tabulated function, we shall consider some examples in the next lecture.


DIFFERENTIATION USING DIFFERENCE OPERATORS:


Applications:
Remember that using the forward difference operator ∆, the shift operator E, the backward difference operator ∇ and the averaging operator μ, we obtained the following formulae:

Dy0 = y0′ = (1/h) [ ∆y0 − ∆²y0/2 + ∆³y0/3 − ∆⁴y0/4 + ... ]

D²y0 = d²y0/dx² = y0″ = (1/h²) [ ∆²y0 − ∆³y0 + (11/12)∆⁴y0 − (5/6)∆⁵y0 + ... ]

dyn/dx = Dyn = yn′ = (1/h) [ ∇yn + ∇²yn/2 + ∇³yn/3 + ∇⁴yn/4 + ... ]

yn″ = D²yn = (1/h²) [ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + ... ]

y″ = D²y = (1/h²) [ δ²y − δ⁴y/12 + δ⁶y/90 − ... ]

dy/dx = y′ = Dy = (1/h) [ δy − δ³y/24 + 3δ⁵y/640 − ... ]
Recall from the last lecture that for calculating the second derivative at an interior tabular point we use the equation

D² = (1/h²) [ δ² − δ⁴/12 + δ⁶/90 − ... ]

while for computing the first derivative at an interior tabular point we in general use another convenient form for D, which is derived as follows. Multiply the right hand side of

dy/dx = y′ = Dy = (1/h) [ δy − δ³y/24 + 3δ⁵y/640 − ... ]

by μ / sqrt(1 + δ²/4), which is unity, and note the binomial expansion

(1 + δ²/4)^(−1/2) = 1 − δ²/8 + (3/128)δ⁴ − (15/(48·64))δ⁶ + ...

We get

D = (μ/h) [ 1 − δ²/8 + (3/128)δ⁴ − ... ] [ δ − δ³/24 + 3δ⁵/640 − ... ]


On simplification we get

D = (μ/h) [ δ − δ³/6 + δ⁵/30 − ... ]

Therefore the equation can also be written in another useful form as

y′ = Dy = (μ/h) [ δy − δ³y/6 + δ⁵y/30 − ... ]

The last two equations, for y″ and y′ respectively, are known as Stirling's formulae for computing the derivatives of a tabular function. Similar formulae can be derived for computing higher order derivatives of a tabular function.
The equation for y′ can also be written as

y′ = (μ/h) [ δy − (1²/3!) δ³y + (1²·2²/5!) δ⁵y − (1²·2²·3²/7!) δ⁷y + ... ]

In order to illustrate the use of the formulae derived so far for computing the derivatives of a tabulated function, we consider the following example:
Example:
Compute f ′′(0) and f ′(0.2) from the following tabular data.
x 0.0 0.2 0.4 0.6 0.8 1.0
F(x) 1.00 1.16 3.56 13.96 41.96 101.00

Solution
Since x = 0 and 0.2 appear at and near the beginning of the table, it is appropriate to use formulae based on forward differences to find the derivatives. The difference table for the given data is depicted below:

x     f(x)     ∆f(x)   ∆²f(x)   ∆³f(x)   ∆⁴f(x)   ∆⁵f(x)
0.0   1.00
0.2   1.16     0.16
0.4   3.56     2.40    2.24
0.6   13.96    10.40   8.00     5.76
0.8   41.96    28.00   17.60    9.60     3.84
1.0   101.00   59.04   31.04    13.44    3.84     0.0

Using the forward difference formula for D²f(x), i.e.

D²f(x) = (1/h²) [ ∆²f(x) − ∆³f(x) + (11/12)∆⁴f(x) − (5/6)∆⁵f(x) ]

we obtain

f″(0) = (1/(0.2)²) [ 2.24 − 5.76 + (11/12)(3.84) − (5/6)(0) ] = 0.0

Also, using the formula for Df(x),

Df(x) = (1/h) [ ∆f(x) − ∆²f(x)/2 + ∆³f(x)/3 − ∆⁴f(x)/4 ]

we have

f′(0.2) = (1/0.2) [ 2.40 − 8.00/2 + 9.60/3 − 3.84/4 ] = 3.2
Example
Find y′(2.2) and y′′(2.2) from the table.
x 1.4 1.6 1.8 2.0 2.2
Y(x) 4.0552 4.9530 6.0496 7.3891 9.0250
Solution:
Since x=2.2 occurs at the end of the table, it is appropriate to use backward difference
formulae for derivatives. The backward difference table for the given data is shown
below:
x     y        ∇y       ∇²y      ∇³y      ∇⁴y
1.4   4.0552
1.6   4.9530   0.8978
1.8   6.0496   1.0966   0.1988
2.0   7.3891   1.3395   0.2429   0.0441
2.2   9.0250   1.6359   0.2964   0.0535   0.0094

Using the backward difference formulae for y′(x) and y″(x), we have

yn′ = (1/h) [ ∇yn + ∇²yn/2 + ∇³yn/3 + ∇⁴yn/4 ]

Therefore,

y′(2.2) = (1/0.2) [ 1.6359 + 0.2964/2 + 0.0535/3 + 0.0094/4 ] = 5(1.8044) = 9.02

Also

yn″ = (1/h²) [ ∇²yn + ∇³yn + (11/12)∇⁴yn ]

Therefore

y″(2.2) = (1/(0.2)²) [ 0.2964 + 0.0535 + (11/12)(0.0094) ] = 25(0.3585) = 8.9629
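The backward difference case can be checked with the same machinery; note that the k-th backward difference at the last tabular point equals the last entry of the k-th forward difference column (an added sketch, not part of the handout):

```python
def difference_columns(values):
    cols = [list(values)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

h = 0.2
y = [4.0552, 4.9530, 6.0496, 7.3891, 9.0250]
d = difference_columns(y)
nabla = [col[-1] for col in d]   # [y_n, ∇y_n, ∇²y_n, ∇³y_n, ∇⁴y_n]

yp = (nabla[1] + nabla[2]/2 + nabla[3]/3 + nabla[4]/4) / h
ypp = (nabla[2] + nabla[3] + 11/12 * nabla[4]) / h**2
print(yp, ypp)   # ≈ 9.0219 and ≈ 8.9629
```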
Example
Given the table of values, estimate y″(1.3):

x 1.3 1.5 1.7 1.9 2.1 2.3


y 2.9648 2.6599 2.3333 1.9922 1.6442 1.2969

Solution Since x = 1.3 appears at the beginning of the table, it is appropriate to use formulae based on forward differences to find the derivatives. The difference table for the given data is depicted below:
x     y        ∆f(x)     ∆²f(x)    ∆³f(x)   ∆⁴f(x)   ∆⁵f(x)
1.3   2.9648
1.5   2.6599   -0.3049
1.7   2.3333   -0.3266   -0.0217
1.9   1.9922   -0.3411   -0.0145   0.0072
2.1   1.6442   -0.3480   -0.0069   0.0076   0.0004
2.3   1.2969   -0.3473   0.0007    0.0076   0.0      -0.0004

h = 0.2
Using the forward difference formula for D²f(x),

D²f(x) = (1/h²) [ ∆²f(x) − ∆³f(x) + (11/12)∆⁴f(x) − (5/6)∆⁵f(x) ]

we obtain

© Copyright Virtual University of Pakistan 4

160
Numerical Analysis –MTH603 VU

f″(1.3) = (1/(0.2)²) [ −0.0217 − 0.0072 + (11/12)(0.0004) − (5/6)(−0.0004) ]
        = (1/(0.2)²) [ −0.0217 − 0.0072 + 0.0003667 + 0.0003334 ]
        = (1/(0.2)²) [ −0.0282 ] = −0.7050

Case IV: Derivation of two and three point formulae:

Retaining only the first term in the equation

Dy0 = y0′ = (1/h) [ ∆y0 − ∆²y0/2 + ∆³y0/3 − ∆⁴y0/4 + ... ]

we can get another useful form for the first derivative:

yi′ = ∆yi / h = (yi+1 − yi)/h = [ y(xi + h) − y(xi) ] / h

Similarly, by retaining only the first term in the equation

dyn/dx = Dyn = yn′ = (1/h) [ ∇yn + ∇²yn/2 + ∇³yn/3 + ∇⁴yn/4 + ... ]

we get

yi′ = ∇yi / h = (yi − yi−1)/h = [ y(xi) − y(xi − h) ] / h

Adding the last two equations and dividing by two, we have

yi′ = [ y(xi + h) − y(xi − h) ] / (2h)

These equations constitute two-point formulae for the first derivative. By retaining only the first term in the equation

D²y0 = d²y0/dx² = y0″ = (1/h²) [ ∆²y0 − ∆³y0 + (11/12)∆⁴y0 + ... ]

we get

yi″ = ∆²yi / h² = (yi+2 − 2yi+1 + yi)/h² = [ y(xi + 2h) − 2y(xi + h) + y(xi) ] / h²

Similarly, we get

yi″ = ∇²yi / h² = [ y(xi) − 2y(xi − h) + y(xi − 2h) ] / h²

While retaining only the first term in the expression for y″ in terms of δ, we obtain

yi″ = δ²yi / h² = [ δyi+(1/2) − δyi−(1/2) ] / h² = (yi+1 − 2yi + yi−1)/h²
    = [ y(xi − h) − 2y(xi) + y(xi + h) ] / h²

The last three equations constitute three-point formulae for computing the second derivative. We shall see later that these two- and three-point formulae become handy for developing extrapolation methods for numerical differentiation and integration.

© Copyright Virtual University of Pakistan 6

162
Numerical Analysis –MTH603 VU

Example 1:
Find the first derivative of f(x) at x = 0.1, 0.2, 0.3, where f(x) is given by

x      0.1     0.2     0.3     0.4     0.5     0.6
F(x)   0.425   0.475   0.400   0.450   0.525   0.575

Solution:
Using the two-point formula, we have

f′(0.1) = [f(0.2) − f(0.1)]/h = (0.475 − 0.425)/0.1 = 0.050/0.1 = 0.5
f′(0.2) = [f(0.3) − f(0.2)]/h = (0.400 − 0.475)/0.1 = −0.075/0.1 = −0.75
f′(0.3) = [f(0.4) − f(0.3)]/h = (0.450 − 0.400)/0.1 = 0.050/0.1 = 0.5
Example 2:
Find the 2nd derivative at 0.3, 0.4, 0.5 for the function given in the example above.
Solution:
With h² = 0.01, the three-point formula gives

f″(0.3) = [f(0.4) − 2f(0.3) + f(0.2)]/h² = 0.125/0.01 = 12.5
f″(0.4) = [f(0.5) − 2f(0.4) + f(0.3)]/h² = 0.025/0.01 = 2.5
f″(0.5) = [f(0.6) − 2f(0.5) + f(0.4)]/h² = −0.025/0.01 = −2.5
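A sketch of the two-point and three-point formulae applied to this table (an added illustration; the helper names are our own):

```python
h = 0.1
xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
fs = [0.425, 0.475, 0.400, 0.450, 0.525, 0.575]

def two_point(i):
    # forward two-point formula: f'(x_i) ≈ (f_{i+1} - f_i) / h
    return (fs[i + 1] - fs[i]) / h

def three_point(i):
    # central three-point formula: f''(x_i) ≈ (f_{i+1} - 2 f_i + f_{i-1}) / h²
    return (fs[i + 1] - 2 * fs[i] + fs[i - 1]) / h**2

print([round(two_point(i), 2) for i in range(3)])     # f' at 0.1, 0.2, 0.3
print([round(three_point(i), 1) for i in (2, 3, 4)])  # f'' at 0.3, 0.4, 0.5
```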
DIFFERENTIATION USING INTERPOLATION
If the given tabular function y(x) is reasonably well approximated by a polynomial Pn(x) of degree n, it is hoped that Pn′(x) will also satisfactorily approximate the corresponding derivative of y(x).
However, even if Pn(x) and y(x) coincide at the tabular points, their derivatives or slopes may substantially differ at these points, as is illustrated in the figure below:
© Copyright Virtual University of Pakistan 1

163
Numerical Analysis –MTH603 VU

[Figure: y(x) and Pn(x) coincide at the tabular point xi, yet their slopes there differ, illustrating the deviation of derivatives.]

For higher order derivatives, the deviations may be still worse. However, we can estimate the error involved in such an approximation.
For non-equidistant tabular pairs (xi, yi), i = 0, ..., n, we can fit the data by using either Lagrange's interpolating polynomial or Newton's divided difference interpolating polynomial. In view of economy of computation, we prefer the latter.
Thus, recalling Newton's divided difference interpolating polynomial for fitting this data:

Pn(x) = y[x0] + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2] + ... + [ Π (i = 0 to n−1) (x − xi) ] y[x0, x1, ..., xn]

Assuming that Pn(x) is a good approximation to y(x), a polynomial approximation to y′(x) can be obtained by differentiating Pn(x). Using the product rule of differentiation, the derivative of the products in Pn(x) can be seen as follows:
© Copyright Virtual University of Pakistan 2

164
Numerical Analysis –MTH603 VU

d/dx Π (i = 0 to n−1) (x − xi) = Σ (i = 0 to n−1) [ (x − x0)(x − x1)...(x − xn−1) / (x − xi) ]

Thus, y′(x) is approximated by Pn′(x), which is given by

Pn′(x) = y[x0, x1] + [(x − x1) + (x − x0)] y[x0, x1, x2] + ...
       + Σ (i = 0 to n−1) [ (x − x0)(x − x1)...(x − xn−1) / (x − xi) ] y[x0, x1, ..., xn]
The error estimate in this approximation can be seen from the following.
We have seen that if y(x) is approximated by Pn(x), the error estimate is

En(x) = y(x) − Pn(x) = [ Π(x) / (n + 1)! ] y^(n+1)(ξ)

Its derivative with respect to x can be written as

En′(x) = y′(x) − Pn′(x) = [ Π′(x) / (n + 1)! ] y^(n+1)(ξ) + [ Π(x) / (n + 1)! ] d/dx y^(n+1)(ξ)

Since ξ(x) depends on x in an unknown way, the derivative d/dx y^(n+1)(ξ) cannot be evaluated. However, for any of the tabular points x = xi, Π(x) vanishes and the difficult term drops out. Thus, the error term at the tabular point x = xi simplifies to

En′(xi) = Error = [ y^(n+1)(ξ) / (n + 1)! ] Π′(xi)

for some ξ in the interval I defined by the smallest and largest of x, x0, x1, ..., xn, where

Π′(xi) = (xi − x0)...(xi − xn), with the factor (xi − xi) omitted, i.e. Π′(xi) = Π (j = 0 to n, j ≠ i) (xi − xj)

The error in the r-th derivative at the tabular points can indeed be expressed analogously.

To understand this method better, we consider the following example.


Example

© Copyright Virtual University of Pakistan 3

165
Numerical Analysis –MTH603 VU

Find f′(0.25) and f′(0.22) from the following data using the method based on divided differences:

x        0.15     0.21     0.23     0.27     0.32     0.35
y=f(x)   0.1761   0.3222   0.3617   0.4314   0.5051   0.5441

Solution We first construct the divided difference table for the given data, as shown below:

x      y        1st D.D   2nd D.D   3rd D.D
0.15   0.1761
0.21   0.3222   2.4350    -5.7500
0.23   0.3617   1.9750    -3.8750   15.6250
0.27   0.4314   1.7425    -2.9833   8.1064
0.32   0.5051   1.4740    -2.1750   6.7358
0.35   0.5441   1.3000

Using the divided difference formula for Pn′(x) with a cubic polynomial fit, we have

y′(x) = P3′(x) = y[x0, x1] + {(x − x1) + (x − x0)} y[x0, x1, x2]
      + {(x − x1)(x − x2) + (x − x0)(x − x2) + (x − x0)(x − x1)} y[x0, x1, x2, x3]

Thus, using the first, second and third differences from the table, the above equation yields

y′(0.25) = 2.4350 + [(0.25 − 0.21) + (0.25 − 0.15)](−5.75)
         + [(0.25 − 0.21)(0.25 − 0.23) + (0.25 − 0.15)(0.25 − 0.23) + (0.25 − 0.15)(0.25 − 0.21)](15.625)

Therefore
f′(0.25) = 2.4350 − 0.805 + 0.10625 = 1.7363
Similarly, we can show that
f′(0.22) = 1.9734
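The derivative of the Newton form can be coded directly; the sketch below (added for illustration, with the divided differences taken from the table above) reproduces both values:

```python
def p3_prime(x, xs, c1, c2, c3):
    """Derivative of the cubic Newton form built on nodes xs[0..3].

    c1, c2, c3 are the divided differences y[x0,x1], y[x0,x1,x2], y[x0,..,x3].
    """
    x0, x1, x2 = xs[0], xs[1], xs[2]
    return (c1
            + ((x - x1) + (x - x0)) * c2
            + ((x - x1) * (x - x2) + (x - x0) * (x - x2)
               + (x - x0) * (x - x1)) * c3)

xs = [0.15, 0.21, 0.23, 0.27]
v1 = p3_prime(0.25, xs, 2.4350, -5.75, 15.625)
v2 = p3_prime(0.22, xs, 2.4350, -5.75, 15.625)
print(v1, v2)   # ≈ 1.7363 and ≈ 1.9734, matching the worked values
```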
Example

The following divided difference table is for y = 1/x. Use it to find y′(0.75) from a cubic polynomial fit.

x      y=1/x    1st divided   2nd divided   3rd divided
                difference    difference    difference
0.25   4.0000   -8.0000
0.50   2.0000   -2.6668       10.6664
0.75   1.3333   -1.3332       2.6672        -10.6656
1.00   1.0000   -0.8000       1.0664        -2.1344
1.25   0.8000   -0.5332       0.5336        -0.7104
1.50   0.6667

Solution:
For a cubic fit we use the formula

y′(x) = P3′(x) = y[x0, x1] + {(x − x1) + (x − x0)} y[x0, x1, x2]
      + {(x − x1)(x − x2) + (x − x0)(x − x2) + (x − x0)(x − x1)} y[x0, x1, x2, x3]

with x0 = 0.25, x1 = 0.50, x2 = 0.75, x3 = 1.00. Using the first, second and third differences from the table and putting x = 0.75, we get

y′(0.75) = P3′(0.75) = −8.0000 + {(0.75 − 0.50) + (0.75 − 0.25)}(10.6664)
         + {(0.75 − 0.50)(0.75 − 0.75) + (0.75 − 0.25)(0.75 − 0.75) + (0.75 − 0.25)(0.75 − 0.50)}(−10.6656)
         = −8.0000 + (0.75)(10.6664) + (0.125)(−10.6656)
         = −8.0000 + 7.9998 − 1.3332
         = −1.3334
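The divided differences and the derivative can also be recomputed from the raw data as a check (an added sketch; the helper names are our own):

```python
def divided_differences(x, y):
    table = [list(y)]
    for j in range(1, len(x)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (x[i + j] - x[i])
                      for i in range(len(prev) - 1)])
    return table

xs = [0.25, 0.50, 0.75, 1.00]
ys = [4.0000, 2.0000, 1.3333, 1.0000]
t = divided_differences(xs, ys)
c1, c2, c3 = t[1][0], t[2][0], t[3][0]   # y[x0,x1], y[x0,x1,x2], y[x0,x1,x2,x3]

x = 0.75
x0, x1, x2 = xs[0], xs[1], xs[2]
dp = (c1 + ((x - x1) + (x - x0)) * c2
      + ((x - x1)*(x - x2) + (x - x0)*(x - x2) + (x - x0)*(x - x1)) * c3)
print(c3, dp)   # third difference ≈ -10.6656, y'(0.75) ≈ -1.3334
```

Note that the cubic-fit value differs noticeably from the exact derivative −1/x² = −1.7778, because 1/x varies steeply over [0.25, 1].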

© Copyright Virtual University of Pakistan 5

167
Numerical Analysis –MTH603 VU

© Copyright Virtual University of Pakistan 6

168
Numerical Analysis –MTH603 VU

RICHARDSON’S EXTRAPOLATION METHOD


To improve the accuracy of the derivative of a function computed by starting with an arbitrarily selected value of h, Richardson's extrapolation method is often employed in practice, in the following manner.
Suppose we use the two-point formula to compute the derivative of a function; then we have

y′(x) = [ y(x + h) − y(x − h) ] / (2h) + ET = F(h) + ET

where ET is the truncation error. Using Taylor's series expansion, we can see that

ET = c1h² + c2h⁴ + c3h⁶ + ...

The idea of Richardson's extrapolation is to combine two computed values of y′(x) using the same method but with two different step sizes, usually h and h/2, to yield a higher order method. Thus, we have

y′(x) = F(h) + c1h² + c2h⁴ + ...

and

y′(x) = F(h/2) + c1h²/4 + c2h⁴/16 + ...

Here, the ci are constants independent of h, and F(h) and F(h/2) represent approximate values of the derivative. Eliminating c1 from the above pair of equations, we get

y′(x) = [ 4F(h/2) − F(h) ] / 3 + d1h⁴ + O(h⁶)

Now assuming that

F1(h/2) = [ 4F(h/2) − F(h) ] / 3

the equation for y′(x) above reduces to

y′(x) = F1(h/2) + d1h⁴ + O(h⁶)

Thus, we have obtained a fourth-order accurate differentiation formula by combining two results which are second-order accurate. Now, repeating the above argument, we have

y′(x) = F1(h/2) + d1h⁴ + O(h⁶)
y′(x) = F1(h/4) + d1h⁴/16 + O(h⁶)

Eliminating d1 from the above pair of equations, we get a better approximation

y′(x) = F2(h/4) + O(h⁶)


which is sixth-order accurate, where

    F2(h/4) = [4² F1(h/4) − F1(h/2)] / (4² − 1)

This extrapolation process can be repeated further until the required accuracy is achieved, which is called extrapolation to the limit. Therefore the equation for F2 above can be generalized as

    Fm(h/2^m) = [4^m Fm−1(h/2^m) − Fm−1(h/2^(m−1))] / (4^m − 1),   m = 1, 2, 3, …

where F0(h) = F(h).
To illustrate this procedure, we consider the following example.

Example: Using Richardson's extrapolation to the limit, find y′(0.05) for the function
y = −1/x, with h = 0.0128, 0.0064, 0.0032.
Solution
To start with, we take h = 0.0128 and compute F(h) as

    F(h) = [y(x + h) − y(x − h)] / (2h)
         = [−1/(0.05 + 0.0128) + 1/(0.05 − 0.0128)] / [2(0.0128)]
         = (−15.923566 + 26.88172) / 0.0256
         = 428.05289
Similarly, F(h/2) = 406.66273. Therefore, using the extrapolation formula, we get

    F1(h/2) = [4F(h/2) − F(h)] / (4 − 1) = 399.5327

which is accurate to O(h⁴). Halving the step size further, we compute

    F(h/4) = [−1/(0.05 + 0.0032) + 1/(0.05 − 0.0032)] / [2(0.0032)] = 401.64515
and

    F1(h/4) = [4F(h/4) − F(h/2)] / (4 − 1) = 399.97263

Again, using the generalized formula, we obtain

    F2(h/4) = [4² F1(h/4) − F1(h/2)] / (4² − 1) = 400.00195
The above computation can be summarized in the following table:

h        F          F1         F2
0.0128   428.0529
0.0064   406.6627   399.5327
0.0032   401.6452   399.9726   400.00195

Thus, after two steps, it is found that y′(0.05) ≈ 400.00195, while the exact value is

    y′(0.05) = (1/x²) at x = 0.05 = 1/0.0025 = 400
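The extrapolation table above can be reproduced in a few lines of code. The following Python sketch (function names are illustrative, not part of the handout) builds the columns F, F1, F2 for the example y = −1/x at x = 0.05:

```python
# Richardson extrapolation for the derivative y'(x), starting from the
# two-point central-difference formula F(h) = [y(x+h) - y(x-h)] / (2h).
def central_diff(f, x, h):
    """Two-point central-difference estimate F(h) of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, levels):
    """Build the columns F, F1, F2, ... and return the last (most accurate) entry."""
    # Column 0: central differences with step sizes h, h/2, h/4, ...
    table = [[central_diff(f, x, h / 2**i) for i in range(levels + 1)]]
    for m in range(1, levels + 1):
        prev = table[m - 1]
        table.append([(4**m * prev[i + 1] - prev[i]) / (4**m - 1)
                      for i in range(len(prev) - 1)])
    return table[-1][0]

f = lambda x: -1.0 / x
d = richardson(f, 0.05, 0.0128, 2)   # two extrapolation levels, as in the example
print(round(d, 3))                   # approximately 400.002; the exact derivative is 400
```

Each extra level combines two entries of the previous column, raising the accuracy from O(h²) to O(h⁴) to O(h⁶), exactly as in the derivation above.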


Numerical Differentiation and Integration


INTRODUCTION
DIFFERENTIATION USING DIFFERENCE OPERATORS
DIFFERENTIATION USING INTERPOLATION
RICHARDSON’S EXTRAPOLATION METHOD
NUMERICAL INTEGRATION
NEWTON-COTES INTEGRATION FORMULAE
THE TRAPEZOIDAL RULE ( COMPOSITE FORM)
SIMPSON’S RULES (COMPOSITE FORM)
ROMBERG’S INTEGRATION
DOUBLE INTEGRATION
Basic Issues in Integration

What does an integral represent?

    AREA = ∫_a^b f(x) dx

    VOLUME = ∫_c^d ∫_a^b g(x, y) dx dy

Basic definition of an integral:

    ∫_a^b f(x) dx = lim (n → ∞) Σ (k = 1 to n) f(xk) Δx

that is, a sum of height × width terms.

Objective: evaluate

    I = ∫_a^b f(x) dx

without doing the calculation analytically.

When would we want to do this?

1. The integrand is too complicated to integrate analytically, e.g.

    ∫_0^2 [(2 + cos(1 + x²)) / (1 + 0.5x)] e^(0.5x) dx

2. The integrand is not precisely defined by an equation, i.e., we are given a set of data (xi, f(xi)), i = 1, …, n.

All methods are applicable to integrands that are functions. Some are also applicable to tabulated values.
Key concepts:

1. Integration is a summing process. Thus virtually all numerical approximations can be represented by

    I = ∫_a^b f(x) dx = Σ (i = 0 to n) Δxi f(xi) + Et

where the Δxi are the weights, the xi are the sampling points, and Et is the truncation error.

2. Closed and open forms: closed forms include the end points a and b among the xi; open forms do not.
NUMERICAL INTEGRATION
Consider the definite integral

    I = ∫_a^b f(x) dx
Where f (x) is known either explicitly or is given as a table of values corresponding to
some values of x, whether equispaced or not. Integration of such functions can be carried
out using numerical techniques.
Of course, we assume that the function to be integrated is smooth and Riemann integrable
in the interval of integration. In the following section, we shall develop Newton-Cotes
formulae based on interpolation which form the basis for trapezoidal rule and Simpson’s
rule of numerical integration.
NEWTON-COTES INTEGRATION FORMULAE
In this method, as in the case of numerical differentiation, we shall approximate the given tabulated function by a polynomial Pn(x) and then integrate this polynomial.
Suppose we are given the data (xi, yi), i = 0(1)n, at equispaced points with spacing h = xi+1 − xi; we can represent the polynomial by any standard interpolation polynomial. Suppose we use the Lagrangian approximation; then we have

    f(x) = Σ Lk(x) y(xk)

with associated error given by

    E(x) = [∏(x) / (n + 1)!] y^(n+1)(ξ)

where

    Lk(x) = ∏(x) / [(x − xk) ∏′(xk)]

and

    ∏(x) = (x − x0)(x − x1)…(x − xn)


Then we obtain an integration formula equivalent to the definite integral, in the form

    ∫_a^b f(x) dx ≈ Σ (k = 0 to n) ck y(xk)

where the ck are the weighting coefficients, given by

    ck = ∫_a^b Lk(x) dx

These are also called Cotes numbers. Let the equispaced nodes be defined by

    x0 = a,  xn = b,  h = (b − a)/n,  and  xk = x0 + kh

so that xk − x1 = (k − 1)h, etc. Now, we change the variable x to p such that x = x0 + ph; then we can rewrite the equations

    Lk(x) = ∏(x) / [(x − xk) ∏′(xk)],    ∏(x) = (x − x0)(x − x1)…(x − xn)

as

    ∏(x) = h^(n+1) p(p − 1)…(p − n)

and

    Lk(x) = [(x − x0)(x − x1)…(x − xk−1)(x − xk+1)…(x − xn)] / [(xk − x0)(xk − x1)…(xk − xk−1)(xk − xk+1)…(xk − xn)]
          = [(ph)(p − 1)h…(p − k + 1)h (p − k − 1)h…(p − n)h] / [(kh)(k − 1)h…(1)h(−1)h…(k − n)h]

or

    Lk(x) = (−1)^(n−k) [p(p − 1)…(p − k + 1)(p − k − 1)…(p − n)] / [k!(n − k)!]

Also, noting that dx = h dp, the limits of the integral for ck change from 0 to n, and the formula reduces to

    ck = [(−1)^(n−k) h / (k!(n − k)!)] ∫_0^n p(p − 1)…(p − k + 1)(p − k − 1)…(p − n) dp

The error in approximating the integral can be obtained from

    En = [h^(n+2) / (n + 1)!] ∫_0^n p(p − 1)…(p − n) y^(n+1)(ξ) dp

where x0 < ξ < xn. For illustration, consider the cases n = 1 and n = 2, for which we get


    c0 = −h ∫_0^1 (p − 1) dp = h/2,    c1 = h ∫_0^1 p dp = h/2

and

    E1 = (h³/2) y′′(ξ) ∫_0^1 p(p − 1) dp = −(h³/12) y′′(ξ)

Thus, the integration formula is found to be

    ∫_x0^x1 f(x) dx = c0 y0 + c1 y1 + Error = (h/2)(y0 + y1) − (h³/12) y′′(ξ)

This equation represents the trapezoidal rule in the interval [x0, x1] with its error term. Geometrically, it represents the area between the curve y = f(x), the x-axis and the ordinates erected at x = x0 (= a) and x = x1, as shown in the figure.
[Figure: the curve y = f(x) over [a, b], with ordinates y0, y1, …, yn erected at x0 = a, x1, x2, …, xn = b.]

This area is approximated by the trapezium formed by replacing the curve with its secant line drawn between the end points (x0, y0) and (x1, y1).
For n = 2, we have

    c0 = (h/2) ∫_0^2 (p − 1)(p − 2) dp = h/3
    c1 = −h ∫_0^2 p(p − 2) dp = 4h/3
    c2 = (h/2) ∫_0^2 p(p − 1) dp = h/3

and the error term is given by

    E2 = −(h⁵/90) y^(iv)(ξ)

Thus, for n = 2, the integration takes the form

    ∫_x0^x2 f(x) dx = c0 y0 + c1 y1 + c2 y2 + Error = (h/3)(y0 + 4y1 + y2) − (h⁵/90) y^(iv)(ξ)

This is known as Simpson's 1/3 rule. Geometrically, this equation represents the area between the curve y = f(x), the x-axis and the ordinates at x = x0 and x2, after replacing the arc of the curve between (x0, y0) and (x2, y2) by an arc of a quadratic polynomial, as in the figure.

[Figure: the arc of the curve y = f(x) between (x0, y0) and (x2, y2) replaced by an arc of a quadratic polynomial through the ordinates y0, y1, y2.]

Thus Simpson’s 1/3 rule is based on fitting three points with a quadratic.
Similarly, for n = 3, the integration is found to be
    ∫_x0^x3 f(x) dx = (3/8)h(y0 + 3y1 + 3y2 + y3) − (3/80)h⁵ y^(iv)(ξ)
This is known as Simpson’s 3/8 rule, which is based on fitting four points by a cubic.
Still higher order Newton-Cotes integration formulae can be derived for large values of n.
But for all practical purposes, Simpson’s 1/3 rule is found to be sufficiently accurate.
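The three single-panel rules just derived (n = 1, 2, 3) translate directly into code. A minimal Python sketch (function names are illustrative), applied to f(x) = x³ on [0, 1], whose exact integral is 0.25 — the Simpson rules reproduce it exactly, since both are exact for cubics, while the single trapezium does not:

```python
# Single-panel Newton-Cotes rules for n = 1, 2, 3:
# trapezoidal, Simpson's 1/3 and Simpson's 3/8, on [a, b].
def trapezoid_panel(f, a, b):
    h = b - a
    return h / 2.0 * (f(a) + f(b))

def simpson13_panel(f, a, b):
    h = (b - a) / 2.0
    return h / 3.0 * (f(a) + 4.0 * f(a + h) + f(b))

def simpson38_panel(f, a, b):
    h = (b - a) / 3.0
    return 3.0 * h / 8.0 * (f(a) + 3.0 * f(a + h) + 3.0 * f(a + 2 * h) + f(b))

cube = lambda x: x**3                      # exact integral over [0, 1] is 0.25
print(round(trapezoid_panel(cube, 0, 1), 10))   # 0.5  (error term has y'')
print(round(simpson13_panel(cube, 0, 1), 10))   # 0.25 (error term has y'''' = 0)
print(round(simpson38_panel(cube, 0, 1), 10))   # 0.25 (error term has y'''' = 0)
```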


The Trapezoidal Rule (Composite Form)


The Newton-Cotes formula based on approximating y = f(x) between (x0, y0) and (x1, y1) by a straight line, thus forming a trapezium, is called the trapezoidal rule. In order to evaluate the definite integral

    I = ∫_a^b f(x) dx

we divide the interval [a, b] into n sub-intervals, each of size h = (b − a)/n, and denote the sub-intervals by [x0, x1], [x1, x2], …, [xn−1, xn], such that x0 = a, xn = b and xk = x0 + kh, k = 1, 2, …, n − 1.
Thus, we can write the above definite integral as a sum. Therefore,

    I = ∫_x0^xn f(x) dx = ∫_x0^x1 f(x) dx + ∫_x1^x2 f(x) dx + … + ∫_xn−1^xn f(x) dx

The area under the curve in each sub-interval is approximated by a trapezium. The integral I, which represents the area between the curve y = f(x), the x-axis and the ordinates at x = x0 and x = xn, is obtained by adding all the trapezoidal areas in each sub-interval.
Now, using the trapezoidal rule

    ∫_x0^x1 f(x) dx = c0 y0 + c1 y1 + Error = (h/2)(y0 + y1) − (h³/12) y′′(ξ)
we get

    I = ∫_x0^xn f(x) dx = (h/2)(y0 + y1) − (h³/12) y′′(ξ1) + (h/2)(y1 + y2) − (h³/12) y′′(ξ2)
        + … + (h/2)(yn−1 + yn) − (h³/12) y′′(ξn)

where xk−1 < ξk < xk, for k = 1, 2, …, n.
Thus, we arrive at the result

    ∫_x0^xn f(x) dx = (h/2)(y0 + 2y1 + 2y2 + … + 2yn−1 + yn) + En

where the error term En is given by

    En = −(h³/12)[y′′(ξ1) + y′′(ξ2) + … + y′′(ξn)]
This equation represents the trapezoidal rule over [x0, xn], which is also called the composite form of the trapezoidal rule. The error term

    En = −(h³/12)[y′′(ξ1) + y′′(ξ2) + … + y′′(ξn)]

is called the global error.


However, if we assume that y′′(x) is continuous over [x0, xn], then there exists some ξ in [x0, xn] such that xn = x0 + nh and

    En = −(h³/12)[n y′′(ξ)] = −[(xn − x0)/12] h² y′′(ξ)

Then the global error can be conveniently written as O(h²).
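The composite rule and its O(h²) global error can be checked numerically; a minimal Python sketch (illustrative names), which halves h and observes the error dropping by a factor of about 4:

```python
import math

# Composite trapezoidal rule over n equal sub-intervals:
# (h/2)(y0 + 2y1 + ... + 2y_{n-1} + yn).
def trapezoidal(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))                        # end ordinates, weight 1/2
    total += sum(f(a + k * h) for k in range(1, n))    # interior ordinates, weight 1
    return h * total

# Halving h should divide the error by about 4, confirming the O(h^2) global error.
exact = 2.0                                            # exact value of integral of sin x over [0, pi]
e1 = abs(trapezoidal(math.sin, 0.0, math.pi, 6) - exact)
e2 = abs(trapezoidal(math.sin, 0.0, math.pi, 12) - exact)
print(round(e1 / e2, 2))                               # close to 4
```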
Simpson’s Rules (Composite Forms)
In deriving the equation

    ∫_x0^x2 f(x) dx = c0 y0 + c1 y1 + c2 y2 + Error = (h/3)(y0 + 4y1 + y2) − (h⁵/90) y^(iv)(ξ)

that is, Simpson's 1/3 rule, we used two sub-intervals of equal width. In order to get a composite formula, we shall divide the interval of integration [a, b] into an even number of sub-intervals, say 2N, each of width (b − a)/(2N), so that

    x0 = a, x1, …, x2N = b  and  xk = x0 + kh,  k = 1, 2, …, (2N − 1)
Thus, the definite integral I can be written as

    I = ∫_a^b f(x) dx = ∫_x0^x2 f(x) dx + ∫_x2^x4 f(x) dx + … + ∫_x2N−2^x2N f(x) dx

Applying Simpson's 1/3 rule

    ∫_x0^x2 f(x) dx = (h/3)(y0 + 4y1 + y2) − (h⁵/90) y^(iv)(ξ)

to each of the integrals on the right-hand side of the above equation, we obtain

    I = (h/3)[(y0 + 4y1 + y2) + (y2 + 4y3 + y4) + … + (y2N−2 + 4y2N−1 + y2N)] − (N/90) h⁵ y^(iv)(ξ)
That is,

    ∫_x0^x2N f(x) dx = (h/3)[y0 + 4(y1 + y3 + … + y2N−1) + 2(y2 + y4 + … + y2N−2) + y2N] + Error term

This formula is called the composite Simpson's 1/3 rule. The error term E, which is also called the global error, is given by

    E = −(N/90) h⁵ y^(iv)(ξ) = −[(x2N − x0)/180] h⁴ y^(iv)(ξ)

for some ξ in [x0, x2N]. Thus, in Simpson's 1/3 rule, the global error is of O(h⁴).
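The composite 1/3 rule above can be sketched in Python (illustrative names); the second assertion in the demonstration relies on the rule being exact for cubics, since the error term involves y^(iv):

```python
import math

# Composite Simpson's 1/3 rule with n sub-intervals (n must be even, n = 2N).
def simpson13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + k * h) for k in range(1, n, 2))   # odd ordinates, weight 4
    s += 2.0 * sum(f(a + k * h) for k in range(2, n, 2))   # even interior, weight 2
    return h / 3.0 * s

print(round(simpson13(math.sin, 0.0, math.pi, 6), 4))      # 2.0009, exact value 2
```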
Similarly, in deriving the composite Simpson's 3/8 rule, we divide the interval of integration into n sub-intervals, where n is divisible by 3, and apply the integration formula

    ∫_x0^x3 f(x) dx = (3/8)h(y0 + 3y1 + 3y2 + y3) − (3/80)h⁵ y^(iv)(ξ)

to each of the integrals below:

    ∫_x0^xn f(x) dx = ∫_x0^x3 f(x) dx + ∫_x3^x6 f(x) dx + … + ∫_xn−3^xn f(x) dx

We obtain the composite form of Simpson's 3/8 rule as

    ∫_a^b f(x) dx = (3/8)h[y(a) + 3y1 + 3y2 + 2y3 + 3y4 + 3y5 + 2y6 + … + 2yn−3 + 3yn−2 + 3yn−1 + y(b)]

with the global error E given by

    E = −[(xn − x0)/80] h⁴ y^(iv)(ξ)
It may be noted that the global errors in Simpson's 1/3 and 3/8 rules are of the same order. However, if we consider the magnitudes of the error terms, we notice that Simpson's 1/3 rule is superior to Simpson's 3/8 rule. For illustration, we consider a few examples.
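The composite 3/8 rule can be sketched the same way (illustrative names); interior ordinates at multiples of 3 get weight 2, all others weight 3:

```python
import math

# Composite Simpson's 3/8 rule; n must be a multiple of 3.
def simpson38(f, a, b, n):
    if n % 3:
        raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        # ordinates at panel joints (k divisible by 3) carry weight 2, others 3
        s += (2.0 if k % 3 == 0 else 3.0) * f(a + k * h)
    return 3.0 * h / 8.0 * s

# Both composite Simpson rules are O(h^4); the 1/3 rule has the smaller constant.
print(round(simpson38(math.sin, 0.0, math.pi, 6), 4))   # 2.002 (exact value 2)
```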
Example
Find the approximate value of y = ∫_0^π sin x dx using
(i) the trapezoidal rule,
(ii) Simpson's 1/3 rule,
dividing the range of integration into six equal parts. Calculate the percentage error from the true value in both cases.
Solution
We shall first divide the range of integration (0, π) into six equal parts, so that each part is of width π/6, and write down the table of values:

x         0     π/6    π/3     π/2   2π/3    5π/6   π
y = sin x 0.0   0.5    0.8660  1.0   0.8660  0.5    0.0

Applying the trapezoidal rule, we have

    ∫_0^π sin x dx = (h/2)[y0 + y6 + 2(y1 + y2 + y3 + y4 + y5)]

Here h, the width of each interval, is π/6. Therefore,

    y = ∫_0^π sin x dx = (π/12)[0 + 0 + 2(3.732)] = (3.1415/6)(3.732) = 1.9540
Applying Simpson's 1/3 rule, we have

    ∫_0^π sin x dx = (h/3)[y0 + y6 + 4(y1 + y3 + y5) + 2(y2 + y4)]
                   = (π/18)[0 + 0 + (4 × 2) + 2(1.732)]
                   = (3.1415/18)(11.464) = 2.0008
But the actual value of the integral is

    ∫_0^π sin x dx = [−cos x] from 0 to π = 2

Hence, in the case of the trapezoidal rule, the percentage error is

    (2 − 1.954)/2 × 100 = 2.3

while in the case of Simpson's rule the percentage error is

    (2 − 2.0008)/2 × 100 = 0.04  (sign ignored)
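The same example can be run directly from the tabulated ordinates rather than from a callable function, which is how both rules are typically applied to recorded data; a short Python sketch:

```python
import math

# Ordinates of y = sin x at steps of pi/6: 0.0, 0.5, 0.8660, 1.0, 0.8660, 0.5, 0.0
h = math.pi / 6
y = [math.sin(k * h) for k in range(7)]

trap = h / 2 * (y[0] + y[6] + 2 * sum(y[1:6]))                       # trapezoidal
simp = h / 3 * (y[0] + y[6] + 4 * (y[1] + y[3] + y[5]) + 2 * (y[2] + y[4]))  # Simpson 1/3

pct_trap = abs(2 - trap) / 2 * 100    # percentage error, about 2.3 %
pct_simp = abs(2 - simp) / 2 * 100    # percentage error, about 0.04 %
print(round(trap, 4), round(simp, 4))
```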


Example:
From the following data, estimate the value of ∫_1^5 log x dx using Simpson's 1/3 rule. Also, obtain the value of h so that the value of the integral will be accurate up to five decimal places.

x         1.0     1.5     2.0     2.5     3.0     3.5     4.0     4.5     5.0
y = log x 0.0000  0.4055  0.6931  0.9163  1.0986  1.2528  1.3863  1.5041  1.6094

Solution
From the data, we have i = 0, 1, …, 8 and h = 0.5. Now, using Simpson's 1/3 rule,

    ∫_1^5 log x dx = (h/3)[y0 + y8 + 4(y1 + y3 + y5 + y7) + 2(y2 + y4 + y6)]
                   = (0.5/3)[(0 + 1.6094) + 4(4.0787) + 2(3.178)]
                   = (0.5/3)(1.6094 + 16.3148 + 6.356) = 4.0467

The error in Simpson's rule is given by (ignoring the sign)

    E = [(x2N − x0)/180] h⁴ y^(iv)(ξ)

Since y = log x,

    y′ = 1/x,  y′′ = −1/x²,  y′′′ = 2/x³,  y^(iv) = −6/x⁴

so that, over 1 ≤ x ≤ 5,

    max |y^(iv)(x)| = 6,    min |y^(iv)(x)| = 6/5⁴ = 0.0096

Therefore, the error bounds are given by

    (0.0096)(4)h⁴/180 < E < (6)(4)h⁴/180
If the result is to be accurate up to five decimal places, then

    24h⁴/180 < 10⁻⁵

that is, h⁴ < 0.000075, or h < 0.09. It may be noted that the actual value of the integral is

    ∫_1^5 log x dx = [x log x − x] from 1 to 5 = 5 log 5 − 4
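The step-size bound can be solved for h directly; a minimal sketch (variable names are illustrative):

```python
# Solve the error bound 24 h^4 / 180 < 1e-5 for h, as in the example above.
max_f4 = 6.0          # max |y''''| = 6/x^4 on [1, 5], attained at x = 1
length = 4.0          # x_2N - x_0 = 5 - 1
tol = 1e-5            # five-decimal-place accuracy
h = (180.0 * tol / (length * max_f4)) ** 0.25
print(round(h, 4))    # about 0.0931, so any h < 0.09 is safe
```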
Example :
Evaluate the integral

    I = ∫_0^1 dx/(1 + x²)

by using
(i) the trapezoidal rule,
(ii) Simpson's 1/3 rule,
taking h = 1/4. Hence compute the approximate value of π.
Solution
At first, we shall tabulate the function as

x              0      1/4      1/2      3/4      1
y = 1/(1 + x²) 1      0.9412   0.8000   0.6400   0.5000

Using the trapezoidal rule, and taking h = 1/4,

    I = (h/2)[y0 + y4 + 2(y1 + y2 + y3)] = (1/8)[1.5 + 2(2.3812)] = 0.7828

Using Simpson's 1/3 rule, and taking h = 1/4, we have

    I = (h/3)[y0 + y4 + 4(y1 + y3) + 2y2] = (1/12)[1.5 + 4(1.5812) + 1.6] = 0.7854
But the closed form solution to the given integral is

    ∫_0^1 dx/(1 + x²) = [tan⁻¹ x] from 0 to 1 = π/4

Equating the last two results, we get π ≈ 4(0.7854) = 3.1416.
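The π estimate can be reproduced in a few lines (illustrative names):

```python
# Estimate pi from I = integral of 1/(1+x^2) over [0,1] = pi/4,
# using Simpson's 1/3 rule with h = 1/4.
f = lambda x: 1.0 / (1.0 + x * x)
h = 0.25
y = [f(k * h) for k in range(5)]
I = h / 3 * (y[0] + y[4] + 4 * (y[1] + y[3]) + 2 * y[2])
print(round(I, 4), round(4 * I, 4))   # 0.7854 and 3.1416
```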


Example: Compute the integral

    I = √(2/π) ∫_0^1 e^(−x²/2) dx

using Simpson's 1/3 rule, taking h = 0.125.
Solution: At the outset, we shall construct the table of the function as required.

x                       0       0.125   0.250   0.375   0.5     0.625   0.750   0.875   1.0
y = √(2/π) e^(−x²/2)    0.7979  0.7917  0.7733  0.7437  0.7041  0.6563  0.6023  0.5441  0.4839

Using Simpson's 1/3 rule, we have

    I = (h/3)[y0 + y8 + 4(y1 + y3 + y5 + y7) + 2(y2 + y4 + y6)]
      = (0.125/3)[0.7979 + 0.4839 + 4(0.7917 + 0.7437 + 0.6563 + 0.5441) + 2(0.7733 + 0.7041 + 0.6023)]
      = (0.125/3)(1.2818 + 10.9432 + 4.1594)
      = 0.6827
Example :
A missile is launched from a ground station. The acceleration during the first 80 seconds of flight, as recorded, is given in the following table:

t (s)      0      10     20     30     40     50     60     70     80
a (m/s²)   30.00  31.63  33.34  35.47  37.75  40.33  43.25  46.69  50.67

Compute the velocity of the missile when t = 80 s, using Simpson's 1/3 rule.

Compute the velocity of the missile when t = 80 s, using Simpson’s 1/3 rule.
Solution:
Since acceleration is defined as the rate of change of velocity, we have

    dv/dt = a,   so that   v = ∫_0^80 a dt

Using Simpson's 1/3 rule, we have

    v = (h/3)[(y0 + y8) + 4(y1 + y3 + y5 + y7) + 2(y2 + y4 + y6)]
      = (10/3)[(30 + 50.67) + 4(31.63 + 35.47 + 40.33 + 46.69) + 2(33.34 + 37.75 + 43.25)]
      = 3086.1 m/s

Therefore, the required velocity is v ≈ 3.0861 km/s.
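The same computation from the recorded data, as a short Python sketch:

```python
# Simpson's 1/3 rule applied to the recorded acceleration data (h = 10 s).
a = [30.00, 31.63, 33.34, 35.47, 37.75, 40.33, 43.25, 46.69, 50.67]
h = 10.0
v = h / 3 * (a[0] + a[8]
             + 4 * (a[1] + a[3] + a[5] + a[7])    # odd ordinates
             + 2 * (a[2] + a[4] + a[6]))          # even interior ordinates
print(round(v, 1))   # 3086.1 m/s
```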


Simpson’s 3/8 rule

Consider the definite integral I = ∫_a^b f(x) dx and the basic two-point result

    ∫_x0^x1 f(x) dx = c0 y0 + c1 y1 + Error = (h/2)(y0 + y1) − (h³/12) y′′(ξ)

Then, for n = 2, the integration takes the form

    ∫_x0^x2 f(x) dx = c0 y0 + c1 y1 + c2 y2 + Error = (h/3)(y0 + 4y1 + y2) − (h⁵/90) y^(iv)(ξ)

Thus Simpson's 1/3 rule is based on fitting three points with a quadratic. Similarly, for n = 3, the integration is found to be

    ∫_x0^x3 f(x) dx = (3/8)h(y0 + 3y1 + 3y2 + y3) − (3/80)h⁵ y^(iv)(ξ)

This is known as Simpson's 3/8 rule, which is based on fitting four points by a cubic. Still higher order Newton-Cotes integration formulae can be derived for large values of n.
TRAPEZOIDAL RULE

    ∫_x0^xn f(x) dx = (h/2)(y0 + 2y1 + 2y2 + … + 2yn−1 + yn) + En

SIMPSON’S 1/3 RULE

    ∫_x0^x2 f(x) dx = (h/3)(y0 + 4y1 + y2) − (h⁵/90) y^(iv)(ξ)

    ∫_x0^x2N f(x) dx = (h/3)[y0 + 4(y1 + y3 + … + y2N−1) + 2(y2 + y4 + … + y2N−2) + y2N] + Error term

    E = −[(x2N − x0)/180] h⁴ y^(iv)(ξ)

SIMPSON’S 3/8 RULE

    ∫_a^b f(x) dx = (3/8)h[y(a) + 3y1 + 3y2 + 2y3 + 3y4 + 3y5 + 2y6 + … + 2yn−3 + 3yn−2 + 3yn−1 + y(b)]

with the global error E given by

    E = −[(xn − x0)/80] h⁴ y^(iv)(ξ)
ROMBERG’S INTEGRATION

We have observed that the trapezoidal rule of integration of a definite integral is of O(h²), while Simpson's 1/3 and 3/8 rules are fourth-order accurate.


We can improve the accuracy of the trapezoidal and Simpson's rules using Richardson's extrapolation procedure, which is also called Romberg's integration method.
For example, the error in the trapezoidal rule for a definite integral

    I = ∫_a^b f(x) dx

can be written in the form

    I = IT + c1 h² + c2 h⁴ + c3 h⁶ + …

By applying Richardson's extrapolation procedure to the trapezoidal rule, we obtain the following general formula:

    ITm(h/2^m) = [4^m IT(m−1)(h/2^m) − IT(m−1)(h/2^(m−1))] / (4^m − 1)

where m = 1, 2, …, with IT0(h) = IT(h).

For illustration, we consider the following example.


Example:
Using Romberg's integration method, find the value of ∫_1^1.8 y(x) dx, starting with the trapezoidal rule, for the tabular values:

x        1.0    1.1    1.2    1.3    1.4    1.5    1.6    1.7    1.8
y = f(x) 1.543  1.669  1.811  1.971  2.151  2.352  2.577  2.828  3.107

Solution:
Here x0 = 1, xn = 1.8, h = (1.8 − 1.0)/N, and xi = x0 + ih.
Let IT denote the result of integration by the trapezoidal rule. Then for

    N = 1, h = 0.8:  IT = (h/2)(y0 + y1) = 0.4(1.543 + 3.107) = 1.8600

    N = 2, h = 0.4:  IT = (h/2)(y0 + 2y1 + y2) = 0.2[1.543 + 2(2.151) + 3.107] = 1.7904

    N = 4, h = 0.2:  IT = (h/2)[y0 + 2(y1 + y2 + y3) + y4] = 0.1[1.543 + 2(1.811 + 2.151 + 2.577) + 3.107] = 1.7728

Similarly, for

    N = 8, h = 0.1:  IT = 1.7684
Now, using Romberg's formula with these trapezoidal values, we have

    IT1(h/2) = [4(1.7904) − 1.8600] / (4 − 1) = 1.7672
    IT1(h/4) = [4(1.7728) − 1.7904] / (4 − 1) = 1.7669
    IT1(h/8) = [4(1.7684) − 1.7728] / (4 − 1) = 1.7669

    IT2(h/4) = [4²(1.7669) − 1.7672] / (4² − 1) = 1.7669
    IT2(h/8) = [4²(1.7669) − 1.7669] / (4² − 1) = 1.7669

Thus, after the extrapolation steps, it is found that the value of the tabulated integral is 1.7669.
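The full Romberg tableau can be generated programmatically; a minimal Python sketch (illustrative names) which, carrying full precision through every level, converges near 1.7669:

```python
# Romberg integration starting from trapezoidal values on the tabulated data.
y = [1.543, 1.669, 1.811, 1.971, 2.151, 2.352, 2.577, 2.828, 3.107]
a, b = 1.0, 1.8

def trap_from_table(y, a, b, step):
    """Composite trapezoidal rule using every `step`-th tabulated ordinate."""
    pts = y[::step]
    h = (b - a) / (len(pts) - 1)
    return h * (0.5 * pts[0] + sum(pts[1:-1]) + 0.5 * pts[-1])

# Column 0: N = 1, 2, 4, 8 (strides of 8, 4, 2, 1 through the table).
col = [trap_from_table(y, a, b, s) for s in (8, 4, 2, 1)]
m = 1
while len(col) > 1:   # Richardson-extrapolate each column from the previous one
    col = [(4**m * col[i + 1] - col[i]) / (4**m - 1) for i in range(len(col) - 1)]
    m += 1
print(round(col[0], 4))   # about 1.7669
```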

DOUBLE INTEGRATION
To evaluate numerically a double integral of the form

    I = ∫ [ ∫ f(x, y) dx ] dy

over a rectangular region bounded by the lines x = a, x = b, y = c, y = d, we shall employ either the trapezoidal rule or Simpson's rule repeatedly, with respect to one variable at a time. Noting that both integrations are just linear combinations of values of the given function at different values of the independent variables, we divide the interval [a, b] into N equal sub-intervals of size h, such that h = (b − a)/N, and the interval (c, d) into M equal sub-intervals of size k, so that k = (d − c)/M. Thus, we have

    xi = x0 + ih,  x0 = a,  xN = b,  for i = 1, 2, …, N − 1
    yj = y0 + jk,  y0 = c,  yM = d,  for j = 1, 2, …, M − 1

Thus, we can generate a table of values of the integrand; the above procedure of integration is illustrated by considering a couple of examples.
Example: Evaluate the double integral

    I = ∫_1^2 ∫_1^2 dx dy / (x + y)

by using the trapezoidal rule, with h = k = 0.25.


Solution


Taking x = 1, 1.25, 1.50, 1.75, 2.0 and y = 1, 1.25, 1.50, 1.75, 2.0, the following table is generated using the integrand f(x, y) = 1/(x + y):

x \ y   1.00     1.25     1.50     1.75     2.00
1.00    0.5      0.4444   0.4      0.3636   0.3333
1.25    0.4444   0.4      0.3636   0.3333   0.3077
1.50    0.4      0.3636   0.3333   0.3077   0.2857
1.75    0.3636   0.3333   0.3077   0.2857   0.2667
2.00    0.3333   0.3077   0.2857   0.2667   0.25

Keeping one variable, say x, fixed and varying the variable y, the application of the trapezoidal rule to each row in the above table gives

    ∫_1^2 f(1, y) dy = (0.25/2)[0.5 + 2(0.4444 + 0.4 + 0.3636) + 0.3333] = 0.4062

    ∫_1^2 f(1.25, y) dy = (0.25/2)[0.4444 + 2(0.4 + 0.3636 + 0.3333) + 0.3077] = 0.3682

    ∫_1^2 f(1.5, y) dy = (0.25/2)[0.4 + 2(0.3636 + 0.3333 + 0.3077) + 0.2857] = 0.3369

    ∫_1^2 f(1.75, y) dy = (0.25/2)[0.3636 + 2(0.3333 + 0.3077 + 0.2857) + 0.2667] = 0.3105

and

    ∫_1^2 f(2, y) dy = (0.25/2)[0.3333 + 2(0.3077 + 0.2857 + 0.2667) + 0.25] = 0.2879
Therefore,

    I = ∫_1^2 ∫_1^2 dx dy/(x + y)
      = (h/2){∫ f(1, y) dy + 2[∫ f(1.25, y) dy + ∫ f(1.5, y) dy + ∫ f(1.75, y) dy] + ∫ f(2, y) dy}

By use of the row integrals computed above, we get the required result as

    I = (0.25/2)[0.4062 + 2(0.3682 + 0.3369 + 0.3105) + 0.2879] = 0.3407
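The repeated (row-then-column) trapezoidal procedure is just a tensor product of one-dimensional weights, so it can be written as a double loop; a minimal Python sketch (illustrative names) reproducing the result above:

```python
# Repeated trapezoidal rule on the rectangle [a,b] x [c,d] with n x m panels.
# Corner points get weight 1/4, edges 1/2, interior points 1 (times h*k).
def trap2d(f, a, b, c, d, n, m):
    h, k = (b - a) / n, (d - c) / m
    total = 0.0
    for i in range(n + 1):
        wi = 0.5 if i in (0, n) else 1.0
        for j in range(m + 1):
            wj = 0.5 if j in (0, m) else 1.0
            total += wi * wj * f(a + i * h, c + j * k)
    return h * k * total

f = lambda x, y: 1.0 / (x + y)
print(round(trap2d(f, 1, 2, 1, 2, 4, 4), 4))   # 0.3407, as in the worked example
```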
Example: Evaluate

    ∫_0^(π/2) ∫_0^(π/2) √(sin(x + y)) dx dy

by numerical double integration.

Solution
Taking x = y = 0, π/8, π/4, 3π/8, π/2, we can generate the following table of the integrand f(x, y) = √(sin(x + y)):

x \ y   0        π/8      π/4      3π/8     π/2
0       0.0      0.6186   0.8409   0.9612   1.0
π/8     0.6186   0.8409   0.9612   1.0      0.9612
π/4     0.8409   0.9612   1.0      0.9612   0.8409
3π/8    0.9612   1.0      0.9612   0.8409   0.6186
π/2     1.0      0.9612   0.8409   0.6186   0.0

Keeping one variable, say x, fixed and y varying, and applying the trapezoidal rule to each row of the above table, we get

    ∫_0^(π/2) f(0, y) dy = (π/16)[0.0 + 2(0.6186 + 0.8409 + 0.9612) + 1.0] = 1.1469

    ∫_0^(π/2) f(π/8, y) dy = (π/16)[0.6186 + 2(0.8409 + 0.9612 + 1.0) + 0.9612] = 1.4106

Similarly, we get

    ∫_0^(π/2) f(π/4, y) dy = 1.4778
    ∫_0^(π/2) f(3π/8, y) dy = 1.4106
    ∫_0^(π/2) f(π/2, y) dy = 1.1469

Using these results, we finally obtain

    ∫_0^(π/2) ∫_0^(π/2) √(sin(x + y)) dx dy
      = (π/16)[∫ f(0, y) dy + 2(∫ f(π/8, y) dy + ∫ f(π/4, y) dy + ∫ f(3π/8, y) dy) + ∫ f(π/2, y) dy]
      = (π/16)[1.1469 + 2(1.4106 + 1.4778 + 1.4106) + 1.1469]
      = 2.1386


Ordinary Differential Equations


Introduction
Taylor Series
Euler Method
Runge-Kutta Method
Predictor Corrector Method
Introduction
Many problems in science and engineering when formulated mathematically are readily
expressed in terms of ordinary differential equations (ODE) with initial and boundary
condition.
Example
The trajectory of a ballistic missile, the motion of an artificial satellite in its orbit, are
governed by ordinary differential equations.
Theories concerning electrical networks, bending of beams, stability of aircraft, etc., are
modeled by differential equations.
To be more precise, the rate of change of any quantity with respect to another can be
modeled by an ODE
Closed form solutions may not be possible to obtain, for every modeled problem, while
numerical methods exist, to solve them using computers.
In general, a linear or non-linear ordinary differential equation can be written as

    d^n y/dt^n = f(t, y, dy/dt, …, d^(n−1)y/dt^(n−1))
Here we shall focus on a system of first order differential equations of the form

    dy/dt = f(t, y),  with the initial condition  y(t0) = y0

which is called an initial value problem (IVP).

It is justified, in view of the fact that any higher order ODE can be reduced to a system of
first order differential equations by substitution.
For example, consider a second order differential equation of the form
y′′ = f (t , y, y′)
Introducing the substitution p = y′, the above equation reduces to a system of two first
order differential equations, such as

y ′ = p, p′ = f (t , y, p)
Theorem
Let f(t, y) be real and continuous in the strip R, defined by t ∈ [t0, T], −∞ ≤ y ≤ ∞. Then for any t ∈ [t0, T] and for any y1, y2, there exists a constant L satisfying the inequality

    |f(t, y1) − f(t, y2)| ≤ L |y1 − y2|

so that |f_y(t, y)| ≤ L for every (t, y) ∈ R. Here, L is called the Lipschitz constant.



If the above conditions are satisfied, then for any y0, the IVP has a unique solution y(t) for t ∈ [t0, T].
In fact, we assume the existence and uniqueness of the solution to the above IVP
The function may be linear or non-linear. We also assume that the function f (t, y) is
sufficiently differentiable with respect to either t or y.
TAYLOR’S SERIES METHOD
Consider an initial value problem described by

    dy/dt = f(t, y),  y(t0) = y0

Here, we assume that f(t, y) is sufficiently differentiable with respect to t and y. If y(t) is the exact solution, we can expand y(t) by Taylor's series about the point t = t0 and obtain

    y(t) = y(t0) + (t − t0) y′(t0) + [(t − t0)²/2!] y′′(t0) + [(t − t0)³/3!] y′′′(t0) + [(t − t0)⁴/4!] y^(iv)(t0) + …
Since, the solution is not known, the derivatives in the above expansion are not known
explicitly. However, f is assumed to be sufficiently differentiable and therefore, the
derivatives can be obtained directly from the given differential equation.
Noting that f is an implicit function of y, we have

    y′ = f(t, y)
    y′′ = ∂f/∂t + (∂f/∂y)(dy/dt) = f_t + f f_y

Similarly,

    y′′′ = f_tt + 2f f_ty + f² f_yy + f_y (f_t + f f_y)

and

    y^(iv) = f_ttt + 3f f_tty + 3f² f_tyy + f³ f_yyy
             + f_y (f_tt + 2f f_ty + f² f_yy)
             + 3(f_t + f f_y)(f_ty + f f_yy)
             + f_y² (f_t + f f_y)

Continuing in this manner, we can express any derivative of y in terms of f(t, y) and its partial derivatives.
Example
Using Taylor's series method, find the solution of the initial value problem

    dy/dt = t + y,  y(1) = 0

at t = 1.2, with h = 0.1, and compare the result with the closed form solution.
Solution
Let us compute the first few derivatives from the given differential equation as follows:


    y′ = t + y,  y′′ = 1 + y′,  y′′′ = y′′,  y^(iv) = y′′′,  y^(v) = y^(iv)

Prescribing the initial condition, that is, at t0 = 1, y0 = y(t0) = 0, we have

    y0′ = 1,  y0′′ = 2,  y0′′′ = y0^(iv) = y0^(v) = 2
Now, using Taylor's series method, we have

    y(t) = y0 + (t − t0) y0′ + [(t − t0)²/2] y0′′ + [(t − t0)³/6] y0′′′ + [(t − t0)⁴/24] y0^(iv) + [(t − t0)⁵/120] y0^(v) + …

Substituting the above values of the derivatives, and the initial condition, we obtain

    y(1.1) = 0 + (0.1)(1) + (0.01/2)(2) + (0.001/6)(2) + (0.0001/24)(2) + (0.00001/120)(2) + …
           = 0.1 + 0.01 + 0.000333 + 0.0000083 + 0.0000002 + …
           = 0.1103418

Therefore

    y(1.1) = y1 = 0.1103418 ≅ 0.1103
Taking y1 = 0.1103 at t = 1.1, the values of the derivatives are

    y1′ = 1.1 + 0.1103 = 1.2103
    y1′′ = 1 + 1.2103 = 2.2103
    y1′′′ = y1^(iv) = y1^(v) = 2.2103

Substituting the value of y1 and its derivatives into the Taylor's series expansion about t1 = 1.1, and retaining terms up to the fifth derivative only, we get

    y(1.2) = y1 + (t − t1) y1′ + [(t − t1)²/2] y1′′ + [(t − t1)³/6] y1′′′ + [(t − t1)⁴/24] y1^(iv) + [(t − t1)⁵/120] y1^(v)
           = 0.1103 + 0.12103 + 0.0110515 + 0.0003684 + 0.0000092 + 0.0000002
           = 0.2427593 ≈ 0.2428 ≈ 0.243
To obtain the closed form solution, we rewrite the given IVP as

    dy/dt − y = t,   or   d(y e^(−t)) = t e^(−t) dt

On integration, we get

    y = −e^t (t e^(−t) + e^(−t)) + c e^t = c e^t − t − 1

Using the initial condition y(1) = 0, we get c = 2/e. Therefore, the closed form solution is

    y = −t − 1 + 2e^(t−1)

When t = 1.2, the closed form solution becomes

    y(1.2) = −1.2 − 1 + 2(1.2214028) = −2.2 + 2.4428056 = 0.2428 ≈ 0.243
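For this particular IVP the Taylor step is especially cheap: y′′ = 1 + y′ and every higher derivative equals y′′, so one step needs only two evaluations. A Python sketch (illustrative names) of the fifth-order step, checked against the closed form solution:

```python
import math

# Fifth-order Taylor step specialised to y' = t + y:
# y'' = 1 + y', and y''' = y'''' = y''''' = y''.
def taylor_step(t, y, h):
    d1 = t + y          # y'
    d2 = 1.0 + d1       # y'' and all higher derivatives
    return y + h * d1 + d2 * (h**2 / 2 + h**3 / 6 + h**4 / 24 + h**5 / 120)

t, y, h = 1.0, 0.0, 0.1
for _ in range(2):      # two steps: t = 1.1, then t = 1.2
    y = taylor_step(t, y, h)
    t += h

exact = -t - 1 + 2 * math.exp(t - 1)   # closed form solution of the IVP
print(round(y, 4), round(exact, 4))    # both about 0.2428
```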
Example
Using Taylor's series method with an algorithm of order 3, solve the initial value problem
y′ = 1 − y,  y(0) = 0, with h = 0.25, at x = 0.50.
Solution:


Let us compute the first three derivatives:

    y′ = 1 − y,   y′′ = −y′,   y′′′ = −y′′

The initial condition is t0 = 0, y0 = 0, so we have

    y0′ = 1 − 0 = 1
    y0′′ = −y0′ = −1
    y0′′′ = −y0′′ = −(−1) = 1

Now, the third-order Taylor's series algorithm is

    y(t1) = y0 + (t1 − t0) y0′ + [(t1 − t0)²/2] y0′′ + [(t1 − t0)³/3!] y0′′′

or, with h = 0.25,

    y(t1) = y0 + h y0′ + (h²/2) y0′′ + (h³/3!) y0′′′

    y(0.25) = 0 + (0.25)(1) + [(0.25)²/2](−1) + [(0.25)³/3!](1)
            = 0.25 − 0.0625/2 + 0.015625/6
            = 0.25 − 0.03125 + 0.002604167
            = 0.22135

Taking y1 = 0.22135, we now have

    y1′ = 1 − y1 = 1 − 0.22135 = 0.7786
    y1′′ = −y1′ = −0.7786
    y1′′′ = −y1′′ = −(−0.7786) = 0.7786

and

    y(t2) = y1 + h y1′ + (h²/2) y1′′ + (h³/6) y1′′′,   h = 0.25

    y(0.5) = 0.22135 + (0.25)(0.7786) + [(0.25)²/2](−0.7786) + [(0.25)³/6](0.7786)
           = 0.22135 + 0.19465 − 0.02433 + 0.002027
           = 0.39369


EULER METHOD
Euler's method is one of the oldest numerical methods used for integrating ordinary differential equations. Though this method is seldom used in practice, its understanding will help us gain insight into the nature of predictor-corrector methods.
Consider the differential equation of first order with the initial condition y(t0) = y0:

    dy/dt = f(t, y)

The integral of this equation is a curve in the ty-plane. Here, we find successively y1, y2, …, ym, where ym is the value of y at t = tm = t0 + mh, m = 1, 2, …, and h is small.
Here we use the property that in a small interval, a curve is nearly a straight line. Thus at (t0, y0), we approximate the curve by its tangent at that point. Therefore,

    (dy/dt) at (t0, y0) = (y − y0)/(t − t0) = f(t0, y0)

that is,

    y = y0 + (t − t0) f(t0, y0)
Hence, the value of y corresponding to t = t1 is given by

    y1 = y0 + (t1 − t0) f(t0, y0)

Similarly, approximating the solution curve in the next interval (t1, t2) by a line through (t1, y1) having slope f(t1, y1), we obtain

    y2 = y1 + h f(t1, y1)

Thus, we obtain, in general, the solution of the given differential equation in the form of the recurrence relation

    ym+1 = ym + h f(tm, ym)
Geometrically, this method has a very simple meaning. The desired function curve is
approximated by a polygon train, where the direction of each part is determined by the
value of the function f (t, y) at its starting point.
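The recurrence ym+1 = ym + h f(tm, ym) takes only a few lines of code. A minimal sketch in Python (the function name `euler` and its argument order are our own choices, not part of the lecture):

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from (t0, y0) through n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # y_{m+1} = y_m + h f(t_m, y_m)
        t = t + h
    return y
```

For instance, with y′ = y and y(0) = 1, one step of size 0.1 gives 1.1, exactly as the tangent-line picture predicts.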
Example
Given dy/dt = (y − t)/(y + t) with the initial condition y = 1 at t = 0. Using Euler's
method, find y approximately at t = 0.1, in five steps.
Solution
Since the number of steps is five, we shall proceed in steps of (0.1)/5 = 0.02.
Therefore, taking step size h = 0.02, we shall compute the value of y at
t = 0.02, 0.04, 0.06, 0.08 and 0.1
Thus y1 = y0 + hf (t0 , y0 ), where y0 = 1, t0 = 0
Therefore y1 = 1 + 0.02 (1 − 0)/(1 + 0) = 1.02
y2 = y1 + hf(t1, y1) = 1.02 + 0.02 (1.02 − 0.02)/(1.02 + 0.02) = 1.0392
y3 = y2 + hf(t2, y2) = 1.0392 + 0.02 (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577
Similarly,
y4 = y3 + hf(t3, y3) = 1.0577 + 0.02 (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756
y5 = y4 + hf(t4, y4) = 1.0756 + 0.02 (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928
Hence the value of y corresponding to t = 0.1 is 1.0928
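The five steps can be replayed in full precision with a short sketch (variable names are our own):

```python
# dy/dt = (y - t)/(y + t), y(0) = 1, h = 0.02, five Euler steps
f = lambda t, y: (y - t) / (y + t)
t, y = 0.0, 1.0
h = 0.02
for _ in range(5):
    y += h * f(t, y)
    t += h
# y now holds the Euler approximation to y(0.1), about 1.0928
```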
Example
Solve the differential equation
y′ = x + y ;  y(0) = 1
in the interval [0,0.5] using Euler’s method by taking h=0.1
Solution:

x0 0
x1 0.1
x2 0.2
x3 0.3
x4 0.4
x5 0.5

Since h = 0.1, we calculate values at x = 0, 0.1, 0.2, 0.3, 0.4, 0.5


y(0) = 1; here x0 = 0, y0 = 1
y1 = y0 + hf ( x0 , y0 )
y1 = 1 + (0.1)(0 + 1) = 1 + 0.1 = 1.1
y1 = 1.1 x1 = 0.1
y2 = y1 + hf ( x1 , y1 )
y2 = 1.1 + (0.1)(0.1 + 1.1) = 1.1 + 0.12 = 1.22
now
y2 = 1.22 x2 = 0.2
y3 = y2 + hf ( x2 , y2 )
y3 = 1.22 + (0.1)(1.22 + 0.2) = 1.22 + 0.142 = 1.3620
now
y3 = 1.3620 x3 = 0.3
y4 = y3 + hf ( x3 , y3 )
y4 = 1.3620 + (0.1)(1.3620 + 0.3) = 1.3620 + 0.1662 = 1.5282
now
y4 = 1.5282 x4 = 0.4
y5 = y4 + hf ( x4 , y4 )
y5 = 1.5282 + (0.1)(1.5282 + 0.4) = 1.7210
now
y5 = 1.7210 x5 = 0.5
y6 = y5 + hf ( x5 , y5 )
y6 = 1.7210 + (0.1)(1.7210 + 0.5) = 1.9431
Hence the value of y corresponding to x = 0.5 is y5 = 1.7210; the final step above gives y6 = 1.9431 at x = 0.6.
MODIFIED EULER’S METHOD
The modified Euler's method gives greater improvement in accuracy over the original
Euler's method. Here, the core idea is that we use a line through (t0, y0) whose slope is
the average of the slopes at (t0, y0) and (t1, y1(1)), where y1(1) = y0 + hf(t0, y0) is the
value of y at t = t1 as obtained in Euler's method, which approximates the curve in the
interval (t0, t1).
[Figure: Modified Euler's Method. The tangent L1 at (t0, y0), the line L2 through
(t1, y1(1)), and the line L through (t0, y0) with the averaged slope, meeting the ordinate
at t1 in the point B.]
Geometrically, from the figure, L1 is the tangent at (t0, y0), L2 is the line through
(t1, y1(1)) of slope f(t1, y1(1)), and L̄ is the line through (t1, y1(1)) with slope equal to
the average of f(t0, y0) and f(t1, y1(1)). The line L through (t0, y0) parallel to L̄ is used
to approximate the curve in the interval (t0, t1).
Thus, the ordinate of the point B will give the value of y1.
Now, the equation of the line AL is given by
y1 = y0 + [( f(t0, y0) + f(t1, y1(1)) ) / 2] (t1 − t0)
   = y0 + (h/2) [ f(t0, y0) + f(t1, y1(1)) ]
Similarly proceeding, we arrive at the recurrence relation
ym+1 = ym + (h/2) [ f(tm, ym) + f(tm+1, ym+1(1)) ]
This is the modified Euler’s method.
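The recurrence translates directly into code; a minimal sketch with one corrector application per step (function and variable names are our own):

```python
def modified_euler(f, t0, y0, h, n):
    """Modified Euler: predict with a tangent step, correct with the mean slope."""
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                       # Euler predictor y^(1)
        y = y + h * (f(t, y) + f(t + h, y_pred)) / 2   # averaged-slope corrector
        t += h
    return y
```

For y′ = y, y(0) = 1, one step of size 0.1 gives 1 + 0.05(1 + 1.1) = 1.105.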

Example
Using the modified Euler's method, obtain the solution of the differential equation
dy/dt = t + √y = f(t, y)
with the initial condition y0 = 1 at t0 = 0, for the range 0 ≤ t ≤ 0.6, in steps of 0.2.
Solution
At first, we use Euler's method to get
y1(1) = y0 + hf(t0, y0) = 1 + 0.2(0 + √1) = 1.2
Then, we use the modified Euler's method to find
y(0.2) = y1 = y0 + (h/2)[ f(t0, y0) + f(t1, y1(1)) ]
        = 1.0 + (0.2/2)[ (0 + √1) + (0.2 + √1.2) ] = 1.2295
Similarly proceeding, we have from Euler's method
y2(1) = y1 + hf(t1, y1) = 1.2295 + 0.2(0.2 + √1.2295) = 1.4913
Using the modified Euler's method, we get
y2 = y1 + (h/2)[ f(t1, y1) + f(t2, y2(1)) ]
   = 1.2295 + (0.2/2)[ (0.2 + √1.2295) + (0.4 + √1.4913) ]
   = 1.5225
Finally,
y3(1) = y2 + hf(t2, y2) = 1.5225 + 0.2(0.4 + √1.5225) = 1.8493
Modified Euler's method gives
y(0.6) = y3 = y2 + (h/2)[ f(t2, y2) + f(t3, y3(1)) ]
        = 1.5225 + 0.1[ (0.4 + √1.5225) + (0.6 + √1.8493) ]
        = 1.8819
Hence, the solution to the given problem is given by
t    0.2     0.4     0.6
y    1.2295  1.5225  1.8819
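Assuming the right-hand side is f(t, y) = t + √y (which reproduces the numbers above), the three steps can be checked in code:

```python
from math import sqrt

f = lambda t, y: t + sqrt(y)    # assumed right-hand side of the example
t, y, h = 0.0, 1.0, 0.2
values = []
for _ in range(3):
    y_pred = y + h * f(t, y)                       # Euler predictor
    y = y + h * (f(t, y) + f(t + h, y_pred)) / 2   # modified Euler corrector
    t += h
    values.append(y)
# values ≈ [1.2295, 1.5225, 1.8819]
```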
RUNGE – KUTTA METHOD


These are, computationally, the most efficient methods in terms of accuracy. They were
developed by two German mathematicians, Runge and Kutta.
They are distinguished by their orders in the sense that they agree with the Taylor's series
solution up to terms of h^r, where r is the order of the method. These methods do not
demand prior computation of higher derivatives of y(t), as the Taylor series method does.
Fourth-order Runge-Kutta methods are widely used for finding numerical solutions of
linear or non-linear ordinary differential equations, the development of which is
algebraically complicated.
Therefore, we convey the basic idea of these methods by developing the second-order
Runge-Kutta method, which we shall refer to hereafter as the R-K method.
Recall that the modified Euler's method can be viewed as
yn+1 = yn + h (average of slopes)
This, in fact, is the basic idea of the R-K method.
Here, we find the slopes not only at tn but also at several other interior points, take the
weighted average of these slopes, and add it to yn to get yn+1. We shall now derive the
second-order R-K method.
Consider the IVP
dy/dt = f(t, y),  y(tn) = yn
We also define
k1 = hf(tn, yn),  k2 = hf(tn + αh, yn + βk1)
and take a weighted average of k1 and k2 and add it to yn to get yn+1.
We seek a formula of the form
yn+1 = yn + W1 k1 + W2 k2
where α, β, W1 and W2 are constants to be determined so that the above equation agrees
with the Taylor's series expansion to as high an order as possible.
Thus, using Taylor's series expansion, we have
y(tn+1) = y(tn) + h y′(tn) + (h²/2) y″(tn) + (h³/6) y‴(tn) + …
Rewriting the derivatives of y in terms of f, we get
yn+1 = yn + hf(tn, yn) + (h²/2)(ft + f fy)
       + (h³/6)[ ftt + 2f fty + f² fyy + fy(ft + f fy) ] + O(h⁴)
Here, all derivatives are evaluated at (tn, yn).
Next, we rewrite the given equation after inserting the expressions for k1 and k2 as
yn+1 = yn + W1 hf(tn, yn) + W2 hf(tn + αh, yn + βk1)
Now, using the Taylor's series expansion in two variables, we obtain
yn+1 = yn + W1 hf(tn, yn) + W2 h [ f(tn, yn) + (αh ft + βk1 fy)
       + (α²h²/2) ftt + αh βk1 fty + (β²k1²/2) fyy + O(h³) ]
Here again, all derivatives are computed at (tn, yn). On inserting the expression for k1,
the above equation becomes
yn+1 = yn + (W1 + W2) hf + W2 h (αh ft + βh f fy)
       + W2 h [ (α²h²/2) ftt + αβh² f fty + (β²h²/2) f² fyy ] + O(h⁴)
On rearranging in increasing powers of h, we get
yn+1 = yn + (W1 + W2) hf + W2 h² (α ft + β f fy)
       + W2 h³ [ (α²/2) ftt + αβ f fty + (β²/2) f² fyy ] + O(h⁴)
Now, equating coefficients of h and h² in the two expansions, we obtain
W1 + W2 = 1,  W2 (α ft + β f fy) = (ft + f fy)/2
implying
W1 + W2 = 1,  W2 α = W2 β = 1/2
Thus, we have three equations in four unknowns, and so we can choose one value
arbitrarily. Solving, we get
W1 = 1 − W2,  α = 1/(2W2),  β = 1/(2W2)
where W2 is arbitrary and various values can be assigned to it.
We now consider two cases, which are popular.
Case I
If we choose W2 = 1/3, then W1 = 2/3 and α = β = 3/2, giving
yn+1 = yn + (1/3)(2k1 + k2)
k1 = hf(t, y),  k2 = hf(t + (3/2)h, y + (3/2)k1)
Case II
If we choose W2 = 1/2, then W1 = 1/2 and α = β = 1.
Then yn+1 = yn + (k1 + k2)/2
k1 = hf (t , y ), k2 = hf (t + h, y + k1 )

In fact, we can recognize that this equation is the modified Euler's method, and it is
therefore a special case of a second-order Runge-Kutta method. These equations are
known as second-order R-K methods, since they agree with the Taylor's series solution
up to the term in h².
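Both cases fit one small function; a sketch parameterized by W2 (our own framing of the derivation, with α = β = 1/(2W2) built in):

```python
def rk2_step(f, t, y, h, w2=0.5):
    """One second-order R-K step: w2 = 1/2 gives Case II (modified Euler),
    w2 = 1/3 gives Case I."""
    alpha = 1.0 / (2.0 * w2)           # alpha = beta = 1/(2 W2)
    k1 = h * f(t, y)
    k2 = h * f(t + alpha * h, y + alpha * k1)
    return y + (1.0 - w2) * k1 + w2 * k2
```

For the linear test problem y′ = t + y from (0, 1) with h = 0.1, both choices of W2 return 1.11, as expected of methods that agree through h².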
Defining the local truncation error, TE, as the difference between the exact solution
y(tn+1) at t = tn+1 and the numerical solution yn+1 obtained using the second-order
R-K method, we have
TE = y(tn+1) − yn+1
Now, substituting
W2 = 1/(2α),  W1 = 1 − 1/(2α),  β = α
into the above equation, we get
yn+1 = yn + hf + (h²/2)(ft + f fy) + (αh³/4)(ftt + 2f fty + f² fyy) + …
with all terms evaluated at t = tn.
Finally, we obtain
TE = h³ [ (1/6 − α/4)(ftt + 2f fty + f² fyy) + (1/6) fy (ft + f fy) ]
The expression can further be simplified to
TE = h³ [ (1/6 − α/4)(y‴ − fy y″) + (1/6) fy y″ ]
Therefore, the expression for the local truncation error is given by
TE = h³ [ (1/6 − α/4) y‴ + (α/4) fy y″ ]
Please verify that the magnitude of the TE in Case I is less than that of Case II.
Following a similar procedure, Runge-Kutta formulae of any order can be obtained;
however, their derivation becomes exceedingly lengthy and complicated.
Amongst them, the most popular and commonly used in practice is the R-K method of
fourth order, which agrees with the Taylor series method up to terms of O(h⁴).

This well-known fourth-order R-K method is described in the following steps.


yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
where
k1 = hf(tn, yn)
k2 = hf(tn + h/2, yn + k1/2)
k3 = hf(tn + h/2, yn + k2/2)
k4 = hf(tn + h, yn + k3)
Please note that the second-order Runge-Kutta method described above requires the
evaluation of the function twice for each complete step of integration.
Similarly, fourth-order Runge-Kutta method requires the evaluation of the function four
times. The discussion on optimal order R-K method is very interesting, but will be
discussed some other time.
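The four-stage step described above translates directly into code; a minimal sketch (names are our own):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

With f(t, y) = t + y and h = 0.1, one step from (0, 1) gives about 1.110342, matching the first step of the worked example later in these notes.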
Example
Use the second-order Runge-Kutta method described by
yn+1 = yn + (1/3)(2k1 + k2)
where k1 = hf(xn, yn) and k2 = hf(xn + (3/2)h, yn + (3/2)k1),
and find the numerical solution of the initial value problem described as
dy/dx = (y + x)/(y − x),  y(0) = 1
at x = 0.4, taking h = 0.2.
Solution
Here f(x, y) = (y + x)/(y − x), h = 0.2, x0 = 0, y0 = 1.
We calculate
k1 = hf(x0, y0) = 0.2 (1 + 0)/(1 − 0) = 0.2
k2 = hf[x0 + 0.3, y0 + (1.5)(0.2)] = hf(0.3, 1.3) = 0.2 (1.3 + 0.3)/(1.3 − 0.3) = 0.32
Now, using the given R-K method, we get
y(0.2) = y1 = 1 + (1/3)(0.4 + 0.32) = 1.24
Now, taking x1 = 0.2, y1 = 1.24, we calculate
k1 = hf(x1, y1) = 0.2 (1.24 + 0.2)/(1.24 − 0.2) = 0.2769
k2 = hf(x1 + (3/2)h, y1 + (3/2)k1) = hf(0.5, 1.6554)
   = 0.2 (1.6554 + 0.5)/(1.6554 − 0.5) = 0.3731
Again using the given R-K method, we obtain
y(0.4) = y2 = 1.24 + (1/3)[2(0.2769) + 0.3731] = 1.54897
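The two steps can be checked numerically; a sketch using the Case-I weights from the statement:

```python
f = lambda x, y: (y + x) / (y - x)
x, y, h = 0.0, 1.0, 0.2
for _ in range(2):
    k1 = h * f(x, y)
    k2 = h * f(x + 1.5 * h, y + 1.5 * k1)   # alpha = beta = 3/2
    y = y + (2 * k1 + k2) / 3               # W1 = 2/3, W2 = 1/3
    x += h
# y ≈ 1.5490 at x = 0.4 (the hand computation above keeps four figures)
```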
Example
Solve the differential equation
dy/dt = t + y
with the initial condition y(0) = 1, using the fourth-order Runge-Kutta method from t = 0
to t = 0.4, taking h = 0.1.
Solution
The fourth-order Runge-Kutta method is described as
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)    (1)
where
k1 = hf(tn, yn)
k2 = hf(tn + h/2, yn + k1/2)
k3 = hf(tn + h/2, yn + k2/2)
k4 = hf(tn + h, yn + k3)
In this problem, f(t, y) = t + y, h = 0.1, t0 = 0, y0 = 1.
As a first step, we calculate
k1 = hf(t0, y0) = 0.1(1) = 0.1
k2 = hf(t0 + 0.05, y0 + 0.05) = hf(0.05, 1.05) = 0.1[0.05 + 1.05] = 0.11
k3 = hf(t0 + 0.05, y0 + 0.055) = 0.1(0.05 + 1.055) = 0.1105
k4 = 0.1(0.1 + 1.1105) = 0.12105
Now, we compute
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
   = 1 + (1/6)(0.1 + 0.22 + 0.2210 + 0.12105)
   = 1.11034
Therefore y(0.1) = y1 = 1.1103
In the second step, we have to find y2 = y(0.2).
We compute
k1 = hf(t1, y1) = 0.1(0.1 + 1.11034) = 0.121034
k2 = hf(t1 + h/2, y1 + k1/2) = 0.1[0.15 + (1.11034 + 0.060517)] = 0.13208
k3 = hf(t1 + h/2, y1 + k2/2) = 0.1[0.15 + (1.11034 + 0.06604)] = 0.132638
k4 = hf(t1 + h, y1 + k3) = 0.1[0.2 + (1.11034 + 0.132638)] = 0.1442978
From equation (1), we see that
y2 = 1.11034 + (1/6)[0.121034 + 2(0.13208) + 2(0.132638) + 0.1442978] = 1.2428
Similarly, we calculate
k1 = hf(t2, y2) = 0.1[0.2 + 1.2428] = 0.14428
k2 = hf(t2 + h/2, y2 + k1/2) = 0.1[0.25 + (1.2428 + 0.07214)] = 0.156494
k3 = hf(t2 + h/2, y2 + k2/2) = 0.1[0.25 + (1.2428 + 0.078247)] = 0.1571047
k4 = hf(t2 + h, y2 + k3) = 0.1[0.3 + (1.2428 + 0.1571047)] = 0.16999047
Using equation (1), we compute
y(0.3) = y3 = y2 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.399711
Finally, we calculate
k1 = hf(t3, y3) = 0.1[0.3 + 1.3997] = 0.16997
k2 = hf(t3 + h/2, y3 + k1/2) = 0.1[0.35 + (1.3997 + 0.084985)] = 0.1834685
k3 = hf(t3 + h/2, y3 + k2/2) = 0.1[0.35 + (1.3997 + 0.091734)] = 0.1841434
k4 = hf(t3 + h, y3 + k3) = 0.1[0.4 + (1.3997 + 0.1841434)] = 0.19838434
Using them in equation (1), we get
y(0.4) = y4 = y3 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.58363
which is the required result.
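The four RK4 steps can be verified in a few lines; for comparison, the exact solution of y′ = t + y, y(0) = 1 is y = 2eᵗ − t − 1, so y(0.4) ≈ 1.583649:

```python
from math import exp

f = lambda t, y: t + y
t, y, h = 0.0, 1.0, 0.1
for _ in range(4):
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
exact = 2 * exp(0.4) - 0.4 - 1   # analytic solution at t = 0.4
# y ≈ 1.58365, agreeing with the hand computation to four decimals
```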
Example
Apply the Runge-Kutta method of order four to solve the initial value problem at x = 1.2:
y′ = x y^(1/3) ;  y(1) = 1 ;  h = 0.1
Solution:
The fourth-order Runge-Kutta method is described as
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
where
k1 = hf(tn, yn)
k2 = hf(tn + h/2, yn + k1/2)
k3 = hf(tn + h/2, yn + k2/2)
k4 = hf(tn + h, yn + k3)
(Here, we are taking x as t.)
First Iteration:-
t0 = 1, y0 = 1
k1 = hf(t0, y0) = 0.1 [1 × 1^(1/3)] = 0.1
k2 = hf(t0 + h/2, y0 + k1/2) = 0.1 [(1 + 0.05) × (1 + k1/2)^(1/3)] = 0.106721617
k3 = hf(t0 + h/2, y0 + k2/2) = 0.1 [(1 + 0.05) × (1 + k2/2)^(1/3)] = 0.10683536
k4 = hf(t0 + h, y0 + k3) = 0.1 [(1 + 0.1) × (1 + k3)^(1/3)] = 0.113785527
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
y1 = 1 + 0.10681658
y1 = 1.10681658

Second Iteration:-
t1 = 1.1, y1 = 1.10681658
k1 = hf(t1, y1) = 0.1 [1.1 × 1.10681658^(1/3)] = 0.113784884
k2 = hf(t1 + h/2, y1 + k1/2) = 0.1 [(1.1 + 0.05) × (1.10681658 + k1/2)^(1/3)] = 0.120961169
k3 = hf(t1 + h/2, y1 + k2/2) = 0.1 [(1.1 + 0.05) × (1.10681658 + k2/2)^(1/3)] = 0.121085364
k4 = hf(t1 + h, y1 + k3) = 0.1 [(1.1 + 0.1) × (1.10681658 + k3)^(1/3)] = 0.128499807
y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
y2 = 1.10681658 + 0.12106296
y2 = 1.22787954
So, at x = 1.2, we get:
y(1.2) = y2 = 1.22787954
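The two iterations can be replayed in code; separating variables gives the exact solution y = ((x² + 2)/3)^(3/2), so y(1.2) ≈ 1.227864 for comparison:

```python
f = lambda x, y: x * y ** (1 / 3)   # y > 0 throughout, so ** is safe here
x, y, h = 1.0, 1.0, 0.1
for _ in range(2):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
exact = ((1.2 ** 2 + 2) / 3) ** 1.5
# y ≈ 1.2278795, matching the hand computation; |y − exact| is of order 1e-5
```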

RUNGE – KUTTA METHOD FOR SOLVING A SYSTEM OF EQUATIONS
The fourth-order Runge-Kutta method can be extended to numerically solve higher-order
ordinary differential equations, linear or non-linear.
For illustration, let us consider a second-order ordinary differential equation of the form
d²y/dt² = f(t, y, dy/dt)
Using the substitution dy/dt = p, this equation can be reduced to two first-order
simultaneous differential equations, as given by
dy/dt = p = f1(t, y, p),   dp/dt = f2(t, y, p)
Now, we can directly write down the Runge-Kutta fourth-order formulae for solving the
system. Let the initial conditions of the above system be given by
y(tn) = yn,  y′(tn) = p(tn) = pn
Then we define
k1 = hf1(tn, yn, pn),                        l1 = hf2(tn, yn, pn)
k2 = hf1(tn + h/2, yn + k1/2, pn + l1/2),    l2 = hf2(tn + h/2, yn + k1/2, pn + l1/2)
k3 = hf1(tn + h/2, yn + k2/2, pn + l2/2),    l3 = hf2(tn + h/2, yn + k2/2, pn + l2/2)
k4 = hf1(tn + h, yn + k3, pn + l3),          l4 = hf2(tn + h, yn + k3, pn + l3)
Now, using the initial conditions yn, pn and the fourth-order R-K formula, we compute
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
pn+1 = pn + (1/6)(l1 + 2l2 + 2l3 + l4)
This method can be extended on similar lines to solve system of n first order differential
equations.
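The paired formulae can be coded as one step function for (y, p); a minimal sketch (names our own):

```python
def rk4_system_step(f1, f2, t, y, p, h):
    """One RK4 step for the system y' = f1(t, y, p), p' = f2(t, y, p)."""
    k1, l1 = h * f1(t, y, p), h * f2(t, y, p)
    k2 = h * f1(t + h / 2, y + k1 / 2, p + l1 / 2)
    l2 = h * f2(t + h / 2, y + k1 / 2, p + l1 / 2)
    k3 = h * f1(t + h / 2, y + k2 / 2, p + l2 / 2)
    l3 = h * f2(t + h / 2, y + k2 / 2, p + l2 / 2)
    k4 = h * f1(t + h, y + k3, p + l3)
    l4 = h * f2(t + h, y + k3, p + l3)
    return (y + (k1 + 2 * k2 + 2 * k3 + k4) / 6,
            p + (l1 + 2 * l2 + 2 * l3 + l4) / 6)
```

For y″ = −y with y(0) = 0, y′(0) = 1 (so f1 = p, f2 = −y), one step of size 0.1 reproduces sin(0.1) and cos(0.1) to about seven decimals.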
Example
Solve the following equation
y″ − (0.1)(1 − y²) y′ + y = 0
using the fourth-order Runge-Kutta method for x = 0.2, with the initial values
y(0) = 1, y′(0) = 0.
Solution:
Let dy/dx = p = f1(x, y, p)
Then dp/dx = (0.1)(1 − y²) p − y = f2(x, y, p)
Thus, the given equation reduces to two first-order equations.
In the present problem, we are given x0 = 0, y0 = 1, p0 = y0′ = 0.
Taking h = 0.2, we compute
k1 = hf1(x0, y0, p0) = 0.2(0.0) = 0.0
l1 = hf2(x0, y0, p0) = 0.2(0.0 − 1) = −0.2
k2 = hf1(x0 + h/2, y0 + k1/2, p0 + l1/2) = hf1(0.1, 1.0, −0.1) = 0.2(−0.1) = −0.02
l2 = hf2(x0 + h/2, y0 + k1/2, p0 + l1/2) = hf2(0.1, 1.0, −0.1) = 0.2(−1.0) = −0.2
k3 = hf1(x0 + h/2, y0 + k2/2, p0 + l2/2) = hf1(0.1, 0.99, −0.1) = 0.2(−0.1) = −0.02
l3 = hf2(x0 + h/2, y0 + k2/2, p0 + l2/2) = hf2(0.1, 0.99, −0.1)
   = 0.2[(0.1)(0.0199)(−0.1) − 0.99] = −0.1980
k4 = hf1(x0 + h, y0 + k3, p0 + l3) = hf1(0.2, 0.98, −0.1980) = −0.0396
l4 = hf2(x0 + h, y0 + k3, p0 + l3) = hf2(0.2, 0.98, −0.1980)
   = 0.2[(0.1)(1 − 0.9604)(−0.1980) − 0.98] = −0.19616
Now, y(0.2) = y1 is given by
y(0.2) = y1 = y0 + (1/6)[k1 + 2k2 + 2k3 + k4]
        = 1 + (1/6)[0.0 + 2(−0.02) + 2(−0.02) + (−0.0396)]
        = 1 − 0.019933 = 0.9801
y′(0.2) = p1 = p0 + (1/6)[l1 + 2l2 + 2l3 + l4]
        = 0 + (1/6)[−0.2 + 2(−0.2) + 2(−0.1980) + (−0.19616)]
        = −0.19869 (≈ −0.1987)
Therefore, the required solution is
y(0.2) = 0.9801,  y′(0.2) = −0.1987
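The single step above can be reproduced with a few lines (f1 = p, f2 = 0.1(1 − y²)p − y, as in the reduction):

```python
f1 = lambda x, y, p: p
f2 = lambda x, y, p: 0.1 * (1 - y * y) * p - y
x, y, p, h = 0.0, 1.0, 0.0, 0.2
k1, l1 = h * f1(x, y, p), h * f2(x, y, p)
k2, l2 = h * f1(x + h/2, y + k1/2, p + l1/2), h * f2(x + h/2, y + k1/2, p + l1/2)
k3, l3 = h * f1(x + h/2, y + k2/2, p + l2/2), h * f2(x + h/2, y + k2/2, p + l2/2)
k4, l4 = h * f1(x + h, y + k3, p + l3), h * f2(x + h, y + k3, p + l3)
y1 = y + (k1 + 2*k2 + 2*k3 + k4) / 6   # ≈ 0.9801
p1 = p + (l1 + 2*l2 + 2*l3 + l4) / 6   # ≈ -0.1987
```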
PREDICTOR – CORRECTOR METHOD
The methods presented so far are called single-step methods, where we have seen that the
computation of y at tn+1, that is yn+1, requires the knowledge of yn only.
The predictor-corrector methods which we discuss now are also known as multi-step
methods: to compute the value of y at tn+1, we must know the solution y at tn, tn−1,
tn−2, etc.
Thus, a predictor formula is used to predict the value of y at tn+1, and then a corrector
formula is used to improve the value of yn+1.
Let us consider an IVP
dy/dt = f(t, y),  y(tn) = yn
Using the simple Euler and modified Euler methods, we can write down a simple
predictor-corrector (P-C) pair as
P: yn+1(0) = yn + hf(tn, yn)
C: yn+1(1) = yn + (h/2)[ f(tn, yn) + f(tn+1, yn+1(0)) ]
Here, yn+1(1) is the first corrected value of yn+1. The corrector formula may be used
iteratively as defined below:
yn+1(r) = yn + (h/2)[ f(tn, yn) + f(tn+1, yn+1(r−1)) ],  r = 1, 2, …
The iteration is terminated when two successive iterates agree to the desired accuracy.
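The pair, with the iterated corrector, can be sketched as follows (the tolerance and iteration cap are our own choices):

```python
def pc_step(f, t, y, h, tol=1e-6, max_iter=20):
    """One predictor-corrector step: Euler predictor, iterated trapezoidal corrector."""
    y_next = y + h * f(t, y)                               # predictor y^(0)
    for _ in range(max_iter):
        y_corr = y + h / 2 * (f(t, y) + f(t + h, y_next))  # corrector y^(r)
        if abs(y_corr - y_next) < tol:                     # successive iterates agree
            return y_corr
        y_next = y_corr
    return y_next
```

For y′ = t + y from (0, 1) with h = 0.1, the iterates converge to the fixed point 1.055/0.95 ≈ 1.1105263.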
In this pair, to extrapolate the value of yn+1, we have approximated the solution curve in
the interval (tn, tn+1) by a straight line passing through (tn, yn) and (tn+1, yn+1).
The accuracy of the predictor formula can be improved by considering a quadratic curve
through the equally spaced points (tn−1, yn−1), (tn, yn), (tn+1, yn+1).
Suppose we fit a quadratic curve of the form
y = a + b(t − tn−1) + c(t − tn)(t − tn−1)
where a, b, c are constants to be determined. As the curve passes through (tn−1, yn−1)
and (tn, yn) and satisfies
(dy/dt)|(tn, yn) = f(tn, yn)
we obtain  yn−1 = a,  yn = a + bh = yn−1 + bh
Therefore  b = (yn − yn−1)/h
and
  = f (tn , yn ) = {b + c[(t − tn −1 ) + (t − tn )]}( tn , yn )
 dt ( tn , yn )
Which give
f (tn , yn ) = b + c(tn − tn −1 ) = b + ch
f (tn , yn ) ( yn − yn −1 )
or c= −
h h2
Substituting these values of a, b and c into the quadratic equation, we get
yn +1 = yn −1 + 2( yn − yn −1 ) + 2[hf (tn , yn ) − ( yn − yn −1 )]
That is
yn +1 = yn −1 + 2hf (tn , yn )
Thus, instead of considering the P-C pair, we may consider the P-C pair given by
P : yn +1 = yn −1 + 2hf (tn , yn ) 

h 
C : yn +1 = yn + [ f (tn , yn ) + f (tn +1 , yn +1 )]
2 
The essential difference between the two pairs is that the one given above is more
accurate. However, this predictor cannot be used to start the solution of a given IVP,
because its use requires the knowledge of the past two points. In such a situation, an
R-K method is generally used to start the predictor method.
Milne's Method
It is also a multi-step method, where we assume that the solution to the given IVP is
known at the past four equally spaced points t0, t1, t2 and t3.
To derive Milne's predictor-corrector pair, let us consider a typical IVP
dy/dt = f(t, y),  y(t0) = y0
On integration between the limits t0 and t4, we get
∫[t0, t4] (dy/dt) dt = ∫[t0, t4] f(t, y) dt
y4 − y0 = ∫[t0, t4] f(t, y) dt
But we know from Newton's forward difference formula
f(t, y) = f0 + s∆f0 + [s(s − 1)/2]∆²f0 + [s(s − 1)(s − 2)/6]∆³f0 + …
where  s = (t − t0)/h,  t = t0 + sh
so that
y4 = y0 + ∫[t0, t4] { f0 + s∆f0 + [s(s − 1)/2]∆²f0 + [s(s − 1)(s − 2)/6]∆³f0
     + [s(s − 1)(s − 2)(s − 3)/24]∆⁴f0 + … } dt
Now, by changing the variable of integration (from t to s), the limits of integration also
change (from 0 to 4), and thus the above expression becomes
y4 = y0 + h ∫[0, 4] { f0 + s∆f0 + [s(s − 1)/2]∆²f0 + [s(s − 1)(s − 2)/6]∆³f0
     + [s(s − 1)(s − 2)(s − 3)/24]∆⁴f0 + … } ds
which simplifies to
y4 = y0 + h [ 4f0 + 8∆f0 + (20/3)∆²f0 + (8/3)∆³f0 + (28/90)∆⁴f0 ]
Substituting the differences
∆f0 = f1 − f0,  ∆²f0 = f2 − 2f1 + f0, …
it can be further simplified to
y4 = y0 + (4h/3)(2f1 − f2 + 2f3) + (28/90) h ∆⁴f0
Alternatively, it can also be written as
y4 = y0 + (4h/3)(2y1′ − y2′ + 2y3′) + (28/90) h ∆⁴y0′
This is known as Milne's predictor formula.
Similarly, integrating the original equation over the interval t0 to t2 (i.e. s = 0 to 2) and
repeating the above steps, we get
y2 = y0 + (h/3)(y0′ + 4y1′ + y2′) − (1/90) h ∆⁴y0′
which is known as Milne's corrector formula.
In general, Milne's predictor-corrector pair can be written as
P: yn+1 = yn−3 + (4h/3)(2y′n−2 − y′n−1 + 2y′n)
C: yn+1 = yn−1 + (h/3)(y′n−1 + 4y′n + y′n+1)
From these equations, we observe that the magnitude of the truncation error in the
corrector formula is (1/90) h ∆⁴y0′, while that in the predictor formula is (28/90) h ∆⁴y0′.
Thus, the TE in the corrector formula is less than the TE in the predictor formula.
In order to apply this P-C method to solve a given initial value problem numerically, we
first predict the value of yn+1 by means of the predictor formula, where the derivatives
are computed using the given differential equation itself.
Using the predicted value yn+1, we calculate the derivative y′n+1 from the given
differential equation, and then use the corrector formula of the pair to obtain the
corrected value of yn+1. This in turn may be used to obtain an improved value of yn+1
by applying the corrector again. This cycle is repeated until we achieve the required
accuracy.
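One Milne step, given the last four solution points, can be sketched as follows (the list layout ts[0..3], ys[0..3] for the points t_{n−3}..t_n is our own convention):

```python
def milne_step(f, ts, ys, h):
    """One Milne P-C step; ts, ys hold the last four equally spaced points."""
    d = [f(t, y) for t, y in zip(ts, ys)]                  # y'_{n-3}, ..., y'_n
    y_pred = ys[0] + 4 * h / 3 * (2 * d[1] - d[2] + 2 * d[3])        # predictor
    t_next = ts[3] + h
    y_corr = ys[2] + h / 3 * (d[2] + 4 * d[3] + f(t_next, y_pred))   # corrector
    return t_next, y_pred, y_corr
```

Run on the worked example that follows (f = (t + y)/2, h = 0.5), it reproduces the predicted value 6.8710 and the first corrected value 6.8731667.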
Example
Find y(2.0) if y(t) is the solution of
dy/dt = (t + y)/2
given y(0) = 2, y(0.5) = 2.636, y(1.0) = 3.595 and y(1.5) = 4.968. Use Milne's P-C method.
Solution
Taking t0 = 0.0, t1 = 0.5, t2 = 1.0, t3 = 1.5, with y0, y1, y2 and y3 given, we have to
compute y4, the solution of the given differential equation corresponding to t = 2.0.
The Milne's P-C pair is given as
P: yn+1 = yn−3 + (4h/3)(2y′n−2 − y′n−1 + 2y′n)
C: yn+1 = yn−1 + (h/3)(y′n−1 + 4y′n + y′n+1)
From the given differential equation, y′ = (t + y)/2, we have
y1′ = (t1 + y1)/2 = (0.5 + 2.636)/2 = 1.5680
y2′ = (t2 + y2)/2 = (1.0 + 3.595)/2 = 2.2975
y3′ = (t3 + y3)/2 = (1.5 + 4.968)/2 = 3.2340
Now, using the predictor formula, we compute
y4 = y0 + (4h/3)(2y1′ − y2′ + 2y3′)
   = 2 + [4(0.5)/3][2(1.5680) − 2.2975 + 2(3.2340)]
   = 6.8710
Using this predicted value, we shall compute the improved value of y4 from the corrector
formula
y4 = y2 + (h/3)(y2′ + 4y3′ + y4′)
Using the available predicted value y4 and the initial values, we compute
y4′ = (t4 + y4)/2 = (2 + 6.8710)/2 = 4.4355
y3′ = (t3 + y3)/2 = (1.5 + 4.968)/2 = 3.2340
and y2′ = 2.2975
Thus, the first corrected value of y4 is given by
y4(1) = 3.595 + (0.5/3)[2.2975 + 4(3.234) + 4.4355]
      = 6.8731667
Suppose we apply the corrector formula again; then we have
y4(2) = y2 + (h/3)[y2′ + 4y3′ + (y4(1))′]
      = 3.595 + (0.5/3)[2.2975 + 4(3.234) + (2 + 6.8731667)/2]
      = 6.8733467
Finally, y(2.0) = y4 = 6.8734.
Example
Tabulate the solution of dy/dt = t + y, y(0) = 1 in the interval [0, 0.4] with h = 0.1,
using Milne's P-C method.
Solution
Milne's P-C method demands the solution at the first four points t0, t1, t2 and t3. As it is
not a self-starting method, we shall use the fourth-order R-K method to get the required
starting values and then switch over to Milne's P-C method.
Thus, taking t0 = 0, t1 = 0.1, t2 = 0.2, t3 = 0.3, we get the corresponding y values using
the fourth-order R-K method; that is, y0 = 1, y1 = 1.1103, y2 = 1.2428 and y3 = 1.3997
(Reference Lecture 38).
Now we compute
y1′ = t1 + y1 = 0.1 + 1.1103 = 1.2103
y2′ = t2 + y2 = 0.2 + 1.2428 = 1.4428
y3′ = t3 + y3 = 0.3 + 1.3997 = 1.6997
Using Milne's predictor formula
P: y4 = y0 + (4h/3)(2y1′ − y2′ + 2y3′)
     = 1 + [4(0.1)/3][2(1.2103) − 1.4428 + 2(1.6997)]
     = 1.58363
Before using the corrector formula, we compute
y4′ = t4 + y4 (predicted value) = 0.4 + 1.5836 = 1.9836
Finally, using Milne's corrector formula, we compute
C: y4 = y2 + (h/3)(y2′ + 4y3′ + y4′)
     = 1.2428 + (0.1/3)(1.4428 + 6.7988 + 1.9836)
     = 1.5836
The required solution is:
t    0      0.1     0.2     0.3     0.4
y    1      1.1103  1.2428  1.3997  1.5836

Example
Using Milne's predictor-corrector formula, find f(0.4) for the ordinary differential
equation
y′ = x − y ;  y(0) = 1 ;  h = 0.1
with the help of the following table.
x    0    0.1     0.2     0.3
y    1    0.9097  0.8375  0.7816

Solution:
Here,
x0 = 0, x1 = 0.1, x2 = 0.2, x3 = 0.3, x4 = 0.4
y1′ = x1 − y1 = 0.1 − 0.9097 = −0.8097
y2′ = x2 − y2 = 0.2 − 0.8375 = −0.6375
y3′ = x3 − y3 = 0.3 − 0.7816 = −0.4816
Now, using the predictor formula
y4 = y0 + (4h/3)(2y1′ − y2′ + 2y3′)
y4 = 1 + (4 × 0.1/3)(−1.9451)
y4 = 0.740653333
Using the predicted value, we shall now compute the corrected value as
y4 = y2 + (h/3)(y2′ + 4y3′ + y4′)
Now,
y4′ = x4 − y4 = 0.4 − 0.740653333 = −0.340653333
Putting the values into the corrector formula:
y4 = 0.8375 + (0.1/3)[−0.6375 + 4(−0.4816) − 0.340653333]
y4 = 0.8375 − 0.096818444
y4 = 0.740681556  Ans.
Adam-Moulton Method
It is another predictor-corrector method, where we use the fact that the solution to the
given initial value problem is known at the past four equally spaced points tn, tn−1,
tn−2, tn−3.
The task is to compute the value of y at tn+1.
Let us consider the differential equation
dy/dt = f(t, y)
Integrating between the limits tn and tn+1, we have
∫[tn, tn+1] (dy/dt) dt = ∫[tn, tn+1] f(t, y) dt
That is,
yn+1 − yn = ∫[tn, tn+1] f(t, y) dt
To carry out the integration, we employ Newton's backward interpolation formula, so that
f(t, y) = fn + s∇fn + [s(s + 1)/2]∇²fn + [s(s + 1)(s + 2)/6]∇³fn + …
where  s = (t − tn)/h
After substitution, we obtain
yn+1 = yn + ∫[tn, tn+1] { fn + s∇fn + [s(s + 1)/2]∇²fn
       + [s(s + 1)(s + 2)/6]∇³fn + [s(s + 1)(s + 2)(s + 3)/24]∇⁴fn + … } dt
Now, by changing the variable of integration (from t to s), the limits of integration also
change (from 0 to 1), and thus the above expression becomes
yn+1 = yn + h ∫[0, 1] { fn + s∇fn + [s(s + 1)/2]∇²fn
       + [s(s + 1)(s + 2)/6]∇³fn + [s(s + 1)(s + 2)(s + 3)/24]∇⁴fn + … } ds
Actual integration reduces the above expression to
yn+1 = yn + h [ fn + (1/2)∇fn + (5/12)∇²fn + (3/8)∇³fn + (251/720)∇⁴fn ]
Now substituting the differences
∇fn = fn − fn−1
∇²fn = fn − 2fn−1 + fn−2
∇³fn = fn − 3fn−1 + 3fn−2 − fn−3
the equation simplifies to
yn+1 = yn + (h/24)(55fn − 59fn−1 + 37fn−2 − 9fn−3) + (251/720) h ∇⁴fn
Alternatively, it can be written as
yn+1 = yn + (h/24)[55y′n − 59y′n−1 + 37y′n−2 − 9y′n−3] + (251/720) h ∇⁴y′n
This is known as Adam's predictor formula.
The truncation error is (251/720) h ∇⁴y′n.
To obtain the corrector formula, we use Newton's backward interpolation formula about
fn+1 instead of fn. We obtain
yn+1 = yn + h ∫[−1, 0] { fn+1 + s∇fn+1 + [s(s + 1)/2]∇²fn+1
       + [s(s + 1)(s + 2)/6]∇³fn+1 + [s(s + 1)(s + 2)(s + 3)/24]∇⁴fn+1 + … } ds
Carrying out the integration and repeating the steps, we get the corrector formula as
yn+1 = yn + (h/24)(9y′n+1 + 19y′n − 5y′n−1 + y′n−2) − (19/720) h ∇⁴y′n+1
Here, the truncation error is (19/720) h ∇⁴y′n+1.
The truncation error in Adam's predictor is approximately thirteen times larger than that
in the corrector, but of opposite sign.
In general, the Adam-Moulton predictor-corrector pair can be written as
P: yn+1 = yn + (h/24)(55y′n − 59y′n−1 + 37y′n−2 − 9y′n−3)
C: yn+1 = yn + (h/24)(9y′n+1 + 19y′n − 5y′n−1 + y′n−2)
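The pair can be sketched in the same style as Milne's method (the layout ts[0..3], ys[0..3] for the last four points is our own convention):

```python
def adams_step(f, ts, ys, h):
    """One Adam-Moulton P-C step from the last four equally spaced points."""
    d = [f(t, y) for t, y in zip(ts, ys)]          # y'_{n-3}, y'_{n-2}, y'_{n-1}, y'_n
    y_pred = ys[3] + h / 24 * (55 * d[3] - 59 * d[2] + 37 * d[1] - 9 * d[0])
    t_next = ts[3] + h
    y_corr = ys[3] + h / 24 * (9 * f(t_next, y_pred) + 19 * d[3] - 5 * d[2] + d[1])
    return t_next, y_pred, y_corr
```

On the example that follows (f = y − t², h = 0.2), it reproduces the predicted 2.01451 and corrected 2.01434 at t = 0.8.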
Example
Using the Adam-Moulton predictor-corrector method, find the solution of the initial value
problem
dy/dt = y − t²,  y(0) = 1
at t = 1.0, taking h = 0.2. Compare it with the analytical solution.
Solution
In order to use Adam's P-C method, we require the solution of the given differential
equation at the past four equally spaced points, for which we use the fourth-order R-K
method, which is self-starting.
Thus taking t =0, y = 1, h = 0.2, we compute


0 0
k = 0.2, k = 0.218,
1 2
k = 0.2198, k = 0.23596,
3 4
and get
1
y1 = y0 + (k1 + 2k2 + 2k3 + k4 ) = 1.21859
6
Taking t = 0.2,
1
y = 1.21859, h = 0.2,
1
we compute k = 0.23571,
1
k = 0.2492,
2
k = 0.25064, k = 0.26184, and get
3 4
1
y2 = y1 + (k1 + 2k2 + 2k3 + k4 ) = 1.46813
6
Now, we take t = 0.4,
2
y = 1.46813, h = 0.2, and compute k = 0.2616,
2 1
k = 0.2697,
2
k = 0.2706, k = 0.2757
3 4
to get
1
y3 = y (0.6) = y2 + (k1 + 2k2 + 2k3 + k4 ) = 1.73779
6
Thus, we have at our disposal
y0 = y (0) = 1
y1 = y (0, 2) = 1.21859
y2 = (0.4) = 1.46813
y3 = y (0.6) = 1.73779
Now, we use Adam’s P-C pair to calculate y (0.8) and y (1.0) as follows:
P : y_{n+1} = y_n + (h/24)(55y′_n − 59y′_{n−1} + 37y′_{n−2} − 9y′_{n−3})
C : y_{n+1} = y_n + (h/24)(9y′_{n+1} + 19y′_n − 5y′_{n−1} + y′_{n−2})
Thus
y₄ᵖ = y₃ + (h/24)(55y′₃ − 59y′₂ + 37y′₁ − 9y′₀)   (1)
From the given differential equation, we have

y′ = y − t².
Therefore


y′₀ = y₀ − t₀² = 1.0
y′₁ = y₁ − t₁² = 1.17859
y′₂ = y₂ − t₂² = 1.30813
y′₃ = y₃ − t₃² = 1.37779
Hence, from Eq. (1), we get
y(0.8) = y₄ᵖ = 1.73779 + (0.2/24)(75.77845 − 77.17967 + 43.60783 − 9) = 2.01451

Now, to obtain the corrector value of y at t = 0.8, we use
y₄ᶜ = yᶜ(0.8) = y₃ + (h/24)(9y′₄ + 19y′₃ − 5y′₂ + y′₁)   (2)
But 9y′₄ = 9(y₄ᵖ − t₄²) = 9[2.01451 − (0.8)²] = 12.37059
Therefore
y₄ = yᶜ(0.8) = 1.73779 + (0.2/24)(12.37059 + 26.17801 − 6.54065 + 1.17859) = 2.01434   (3)
24
Proceeding similarly, we get
y₅ᵖ = yᵖ(1.0) = y₄ + (h/24)(55y′₄ − 59y′₃ + 37y′₂ − 9y′₁)
Noting that y′₄ = y₄ − t₄² = 1.3743, we calculate
y₅ᵖ = 2.01434 + (0.2/24)(75.5887 − 81.28961 + 48.40081 − 10.60731) = 2.28178
Now, the corrector formula for computing y₅ is given by
y₅ᶜ = yᶜ(1.0) = y₄ + (h/24)(9y′₅ + 19y′₄ − 5y′₃ + y′₂)   (4)
But 9y′₅ = 9(y₅ᵖ − t₅²) = 11.53602
Then finally we get
y₅ = y(1.0) = 2.01434 + (0.2/24)(11.53602 + 26.17801 − 6.54065 + 1.17859) = 2.28393   (5)
The analytical solution can be seen in the following steps.
dy/dt − y = −t²
Multiplying through by the integrating factor e^{−t}, we get
d/dt (y e^{−t}) = −t² e^{−t}


Integrating, we get
y e^{−t} = −∫ e^{−t} t² dt = ∫ t² d(e^{−t}) = t² e^{−t} + 2t e^{−t} + 2e^{−t} + c
That is
y = t² + 2t + 2 + c eᵗ
Now using the initial condition, y(0) = 1,
we get c = – 1.
Therefore, the analytical solution is given by
y = t² + 2t + 2 − eᵗ
From which we get
y (1.0) = 5 − e = 2.2817
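The whole computation above can be sketched in Python: RK4 bootstraps the first three steps, then the Adams-Moulton pair advances to t = 1.0. This is our own sketch, not part of the original handout; full floating-point arithmetic is used, so the result differs from the five-decimal hand calculation in the last digits.

```python
import math

def f(t, y):          # right-hand side of y' = y - t^2
    return y - t * t

h = 0.2
t = [0.2 * i for i in range(6)]
y = [1.0]             # y(0) = 1

# Bootstrap y1..y3 with classical RK4 (self-starting).
for n in range(3):
    k1 = h * f(t[n], y[n])
    k2 = h * f(t[n] + h / 2, y[n] + k1 / 2)
    k3 = h * f(t[n] + h / 2, y[n] + k2 / 2)
    k4 = h * f(t[n] + h, y[n] + k3)
    y.append(y[n] + (k1 + 2 * k2 + 2 * k3 + k4) / 6)

# Adams-Moulton predictor-corrector for y4 and y5.
for n in range(3, 5):
    d = [f(t[j], y[j]) for j in range(n + 1)]   # y'_0 .. y'_n
    p = y[n] + h / 24 * (55 * d[n] - 59 * d[n-1] + 37 * d[n-2] - 9 * d[n-3])
    c = y[n] + h / 24 * (9 * f(t[n+1], p) + 19 * d[n] - 5 * d[n-1] + d[n-2])
    y.append(c)

exact = 5 - math.e    # analytical solution y = t^2 + 2t + 2 - e^t at t = 1
```

With h = 0.2 the corrected value lands close to the hand-computed 2.28393, about 0.002 away from the analytical value 5 − e.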
Example
Using Adam-Moulton Predictor-Corrector Formula find f(0.4) from Ordinary
Differential Equation
y′ = 1 + 2xy;  y(0) = 0;  h = 0.1
with the help of following table.
X 0 0.1 0.2 0.3
Y 0 0.1007 0.2056 0.3199

Solution:
Here


h = 0.1

f ( x, y ) = 1 + 2 xy

y0 ' = 1 + 2 x0 y0 = 1 + 2 ( 0 )( 0 ) = 1
y1 ' = 1 + 2 x1 y1 = 1 + 2 ( 0.1)( 0.1007 ) = 1.02014
y2 ' = 1 + 2 x2 y2 = 1 + 2 ( 0.2 )( 0.2056 ) = 1.08224
y3 ' = 1 + 2 x3 y3 = 1 + 2 ( 0.3)( 0.3199 ) = 1.19194

Now, Using Adam's P-C Pair Formula:-

y_{n+1} = y_n + (h/24)(55y′_n − 59y′_{n−1} + 37y′_{n−2} − 9y′_{n−3})

Putting the values;

y₄ = y₃ + (h/24)(55y′₃ − 59y′₂ + 37y′₁ − 9y′₀)
y₄ = 0.3199 + (0.1/24)(55(1.19194) − 59(1.08224) + 37(1.02014) − 9(1))
y₄ = 0.446773833


Computing y '4 for the Corrector Formula;

y '4 = 1 + 2 x4 y4 = 1 + 2 ( 0.4 )( 0.446773833)


y '4 = 1.3574190664

Now Applying the Corrector Formula;

y_{n+1} = y_n + (h/24)(9y′_{n+1} + 19y′_n − 5y′_{n−1} + y′_{n−2})
y₄ = y₃ + (h/24)(9y′₄ + 19y′₃ − 5y′₂ + y′₁)
y₄ = 0.3199 + (0.1/24)(9(1.3574190664) + 19(1.19194) − 5(1.08224) + 1.02014)

y4 = 0.446869048
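The predictor and corrector steps of this example can be checked with a short Python sketch (ours, not part of the handout), working from the tabulated y-values:

```python
h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ys = [0.0, 0.1007, 0.2056, 0.3199]

def f(x, y):                            # y' = 1 + 2xy
    return 1 + 2 * x * y

d = [f(x, y) for x, y in zip(xs, ys)]   # y'_0 .. y'_3

# Adams predictor for y4 = y(0.4)
p = ys[3] + h / 24 * (55 * d[3] - 59 * d[2] + 37 * d[1] - 9 * d[0])

# Moulton corrector, with y'_4 evaluated at the predicted value
c = ys[3] + h / 24 * (9 * f(0.4, p) + 19 * d[3] - 5 * d[2] + d[1])
```

Running this reproduces the predicted 0.446773833 and corrected 0.446869048 above.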

Convergence and Stability Considerations


The numerical solution of a differential equation can be shown to converge to its exact solution if the step size h is very small. The numerical solution of a differential equation is said to be stable if the errors do not grow exponentially as we compute from one step to another. Stability considerations are very important in finding the numerical solutions of differential equations, whether by single-step methods or by multi-step methods. However, the theoretical analysis of stability and convergence of R-K methods and P-C methods is highly involved; for the simple problem y′ = Ay, the 4th-order R-K method gives a numerically stable solution under the condition −2.78 < Ah < 0.
In practice, to get numerically stable solutions to similar problems, we choose the value of h much smaller than the value given by the above condition and also check for consistency of the result.
Another topic of interest, not considered here, is the stiff systems of differential equations that arise in many chemical engineering systems, such as chemical reactors, where the rate constants for the reactions involved are widely different.
Most realistic stiff DEs do not have analytical solutions, and therefore only numerical solutions can be obtained. However, to get numerically stable solutions with either R-K methods or P-C methods, a very small step size h is required, and hence more computer time.

Lecture # 41

Examples of Differential Equations

Recall EULER METHOD

We considered the differential equation of first order with the initial condition y(t0) = y0.
dy/dt = f(t, y)
We obtained the solution of the given differential equation in the form of a recurrence relation
ym +1 = ym + hf (tm , ym )
In fact, Euler's method constructs wᵢ ≈ y(tᵢ) for each i = 0, 1, …, N−1 by deleting the remainder term. Thus Euler's method is
w0 = α ,
wi +1 = wi + hf (ti , wi )
for each i = 0,1,..., N − 1

Euler’s algorithm
Let us try to approximate the solution of the given IVP at (N+1) equally spaced numbers in the
interval [a ,b]
y′ = f (t , y ),
a ≤ t ≤ b, y (a) = α
INPUT endpoints a, b; integer N, initial condition (alpha)

OUTPUT approximate w to y at the (N+1) values of t


Step 1
Set h=(b-a) / N
t=a
w = (alpha)
OUTPUT (t , w)
Step 2
For i = 0,1,…N do Step 3, 4.
Step 3
Set w = w + h f (t , w); (compute wi ).
t = a + i h (compute ti )
Step 4 OUTPUT (t , w)
Step 5 STOP
Example
Use Euler’s method to approximate the solution of IVP
y′ = y − t² + 1,  0 ≤ t ≤ 2,
y(0) = 0.5, with N = 10.
Solution
Here, h = 0.2, tᵢ = 0.2i, w₀ = 0.5 and
w_{i+1} = wᵢ + h(wᵢ − tᵢ² + 1) = wᵢ + 0.2[wᵢ − 0.04i² + 1] = 1.2wᵢ − 0.008i² + 0.2
for i = 0,1,…,9.
The exact solution is
y(t) = (t + 1)² − 0.5eᵗ
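The recurrence above can be sketched directly in Python (our sketch, not part of the handout); it reproduces the values printed by the Maple session that follows:

```python
def f(t, w):                      # y' = y - t^2 + 1
    return w - t * t + 1

a, b, N, w = 0.0, 2.0, 10, 0.5    # endpoints, subintervals, y(0) = 0.5
h = (b - a) / N
ws = [w]
for i in range(N):
    w = w + h * f(a + i * h, w)   # w_{i+1} = w_i + h f(t_i, w_i)
    ws.append(w)
```

For instance, ws[5] is the approximation at t = 1.0, which matches the 2.4581760 shown in the transcript below within rounding.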
> alg051();
This is Euler's Method.
Input the function F(t,y) in terms of t and y

For example: y-t^2+1
> y-t^2+1
Input left and right endpoints separated by blank
> 0 2
Input the initial condition
> 0.5
Input a positive integer for the number of subintervals
> 10
Choice of output method:
1. Output to screen
2. Output to text file
Please enter 1 or 2
>1
Output
t w
0.000 0.5000000
0.200 0.8000000
0.400 1.1520000
0.600 1.5504000
0.800 1.9884800
1.000 2.4581760
> alg051();
This is Euler's Method.
Input the function F (t,y) in terms of t and y
For example: y-3*t^2+4
> y-3*t^2+4
Input left and right hand points separated by a blank
>0 1
Input the initial condition
> 0.5
Input a positive integer for the number of subintervals
> 10
Choice of output method:
1. Output to screen
2. Output to text file
Please enter 1 or 2
>1
Output
t w
0.000 0.5000000
0.100 0.9500000
0.200 1.4420000
0.300 1.9742000
0.400 2.5446200
0.500 3.1510820
0.600 3.7911902
0.700 4.4623092
0.800 5.1615401
0.900 5.8856942
1.000 6.6312636
Recall Runge-Kutta (Order Four) METHOD
The fourth-order R-K method was described as
y_{n+1} = yₙ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
where

k₁ = hf(tₙ, yₙ)
k₂ = hf(tₙ + h/2, yₙ + k₁/2)
k₃ = hf(tₙ + h/2, yₙ + k₂/2)
k₄ = hf(tₙ + h, yₙ + k₃)

Example
Solve the following differential equation
dy/dt = t + y, with the initial condition y(0) = 1, using the fourth-order Runge-Kutta method from t = 0 to t = 0.4, taking h = 0.1.

Solution
The fourth-order Runge-Kutta method is described as
y_{n+1} = yₙ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)   ..................(1)
where
k₁ = hf(tₙ, yₙ)
k₂ = hf(tₙ + h/2, yₙ + k₁/2)
k₃ = hf(tₙ + h/2, yₙ + k₂/2)
k₄ = hf(tₙ + h, yₙ + k₃)
In this problem,
f (t , y ) = t + y, h = 0.1, t0 = 0, y0 = 1.
As a first step, we calculate
k1 = hf (t0 , y0 ) = 0.1(1) = 0.1
k2 = hf (t0 + 0.05, y0 + 0.05)
= hf (0.05,1.05) = 0.1[0.05 + 1.05]
= 0.11
k3 = hf (t0 + 0.05, y0 + 0.055)
= 0.1(0.05 + 1.055)
= 0.1105
k4 = 0.1(0.1 + 1.1105) = 0.12105
Now, we compute from

y₁ = y₀ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = 1 + (1/6)(0.1 + 0.22 + 0.2210 + 0.12105) = 1.11034
Therefore y(0.1) = y1=1.1103
In the second step, we have to find y2 = y(0.2)
We compute
k₁ = hf(t₁, y₁) = 0.1(0.1 + 1.11034) = 0.121034
k₂ = hf(t₁ + h/2, y₁ + k₁/2) = 0.1[0.15 + (1.11034 + 0.060517)] = 0.13208
k₃ = hf(t₁ + h/2, y₁ + k₂/2) = 0.1[0.15 + (1.11034 + 0.06604)] = 0.132638
k₄ = hf(t₁ + h, y₁ + k₃) = 0.1[0.2 + (1.11034 + 0.132638)] = 0.1442978
From Equation (1), we see that
y₂ = 1.11034 + (1/6)[0.121034 + 2(0.13208) + 2(0.132638) + 0.1442978] = 1.2428
Similarly we calculate
k₁ = hf(t₂, y₂) = 0.1[0.2 + 1.2428] = 0.14428
k₂ = hf(t₂ + h/2, y₂ + k₁/2) = 0.1[0.25 + (1.2428 + 0.07214)] = 0.156494
k₃ = hf(t₂ + h/2, y₂ + k₂/2) = 0.1[0.25 + (1.2428 + 0.078247)] = 0.1571047
k₄ = hf(t₂ + h, y₂ + k₃) = 0.1[0.3 + (1.2428 + 0.1571047)] = 0.16999047
Using equation (1), we compute
y(0.3) = y₃ = y₂ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = 1.399711
Finally, we calculate
k₁ = hf(t₃, y₃) = 0.1[0.3 + 1.3997] = 0.16997
k₂ = hf(t₃ + h/2, y₃ + k₁/2) = 0.1[0.35 + (1.3997 + 0.084985)] = 0.1834685
k₃ = hf(t₃ + h/2, y₃ + k₂/2) = 0.1[0.35 + (1.3997 + 0.091734)] = 0.1841434
k₄ = hf(t₃ + h, y₃ + k₃) = 0.1[0.4 + (1.3997 + 0.1841434)] = 0.19838434
Using them in equation (1), we get

y(0.4) = y₄ = y₃ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = 1.58363
which is the required result
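The four RK4 steps above can be carried out in a few lines of Python (a sketch of ours, not part of the handout); full floating-point arithmetic gives essentially the same y(0.4):

```python
import math

def f(t, y):                      # y' = t + y
    return t + y

h, t, y = 0.1, 0.0, 1.0           # step size and initial condition y(0) = 1
for _ in range(4):                # march from t = 0 to t = 0.4
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h

exact = 2 * math.exp(0.4) - 0.4 - 1   # analytical solution y = 2e^t - t - 1
```

The result agrees with the analytical value 2e^{0.4} − 1.4 ≈ 1.583649 to about six decimals, illustrating the fourth-order accuracy.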
Runge-Kutta Order Four

w₀ = α
k₁ = hf(tᵢ, wᵢ)
k₂ = hf(tᵢ + h/2, wᵢ + k₁/2)
k₃ = hf(tᵢ + h/2, wᵢ + k₂/2)
k₄ = hf(tᵢ + h, wᵢ + k₃)
w_{i+1} = wᵢ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)   .................(1)
RK4 algorithm
Let us try to approximate the solution of the given IVP at (N+1) equally spaced numbers in the
interval [a ,b]
y′ = f (t , y ),
a ≤ t ≤ b, y (a) = α
INPUT endpoints a, b; integer N, initial condition (alpha)

OUTPUT approximate w to y at the (N+1) values of t


Step 1
Set h=(b-a) / N
t=a
w = (alpha)
OUTPUT (t , w)
Step 2
For i = 1, 2, …, N do Steps 3-5.
Step 3
Set
K₁ = hf(t, w)
K₂ = hf(t + h/2, w + K₁/2)
K₃ = hf(t + h/2, w + K₂/2)
K₄ = hf(t + h, w + K₃)
Step 4
Set w = w + (K₁ + 2K₂ + 2K₃ + K₄)/6; (compute wᵢ)
t = a + i h (compute tᵢ)
Step 5 OUTPUT (t, w)
Step 6 STOP
> alg052();
This is the Runge-Kutta Order Four Method.
Input the function F(t,y) in terms of t and y
For example: y-t^2+1
> y-t^2+1
Input left and right endpoints separated by blank
> 0 2
Input the initial condition
> 0.5
Input a positive integer for the number of subintervals
> 10
Choice of output method:
1. Output to screen
2. Output to text file
Please enter 1 or 2
>1
Output
t w
0.000 0.5000000
0.200 0.8292933
0.400 1.2140762
0.600 1.6489220
0.800 2.1272027
1.000 2.6408227
1.200 3.1798942
1.400 3.7323401
1.600 4.2834095
1.800 4.8150857
2.000 5.3053630

Lecture 42
Examples of Numerical Differentiation

The simplest formula for differentiation is

f′(x₀) = (1/h)[f(x₀ + h) − f(x₀)] − (h/2) f″(ξ),

Example
Let f(x) = ln x and x₀ = 1.8. Then the quotient
[f(1.8 + h) − f(1.8)] / h,   h > 0,
is used to approximate f′(1.8) with error
|h f″(ξ)/2| = h/(2ξ²) ≤ h/(2(1.8)²), where 1.8 < ξ < 1.8 + h.
Let us see the results for h = 0.1, 0.01, and 0.001.

h        f(1.8 + h)    [f(1.8 + h) − f(1.8)]/h    h/(2(1.8)²)
0.1      0.64185389    0.5406722                  0.0154321
0.01     0.59332685    0.5540180                  0.0015432
0.001    0.58834207    0.5554013                  0.0001543

Since f′(x) = 1/x, the exact value of f′(1.8) is 1/1.8 = 0.5555…, and the error bounds are appropriate.
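The forward-difference quotients and error bounds in this example can be reproduced with a short Python sketch (ours, not part of the handout):

```python
import math

x0 = 1.8
exact = 1 / x0                    # f'(x) = 1/x for f(x) = ln x

results = {}
for h in (0.1, 0.01, 0.001):
    q = (math.log(x0 + h) - math.log(x0)) / h   # forward-difference quotient
    bound = h / (2 * x0 * x0)                   # error bound h / (2(1.8)^2)
    results[h] = (q, bound)
```

Each quotient differs from 1/1.8 by less than its corresponding bound, as the table above shows.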
The following two three-point formulas become especially useful if the nodes are equally spaced, that is, when x₁ = x₀ + h and x₂ = x₀ + 2h:
f′(x₀) = (1/2h)[−3f(x₀) + 4f(x₀ + h) − f(x₀ + 2h)] + (h²/3) f⁽³⁾(ξ₀),
where ξ₀ lies between x₀ and x₀ + 2h, and
f′(x₀) = (1/2h)[f(x₀ + h) − f(x₀ − h)] − (h²/6) f⁽³⁾(ξ₁),
where ξ₁ lies between (x₀ − h) and (x₀ + h).
Given in the table below are values for f(x) = xeˣ.
x f (x)

1.8 10.889365

1.9 12.703199

2.0 14.778112

2.1 17.148957

2.2 19.855030

Since
f ′( x) = ( x + 1)e x , f ′(2.0) = 22.167168.
Approximating f ′(2.0)
using the various three-and five-point formulas produces the following results.
Three point formulas:
f′(x₀) = (1/2h)[−3f(x₀) + 4f(x₀ + h) − f(x₀ + 2h)] + (h²/3) f⁽³⁾(ξ₀),
f′(x₀) = (1/2h)[f(x₀ + h) − f(x₀ − h)] − (h²/6) f⁽³⁾(ξ₁)
Using three point formulas we get
1
h = 0.1: [ −3 f (2.0) + 4 f (2.1) − f (2.2)]
0.2
= 22.032310,

236
1
h = −0.1: [ −3 f (2.0) + 4 f (1.9) − f (1.8)]
−0.2
= 22.0054525,
1
h = 0.1: [ f (2.1) − f (1.9)]
0.2
= 22.228790,
1
h = 0.2 : [ f (2.2) − f (1.8)]
0.4
= 22.414163.
Five-point formula
Using the five-point formula with h = 0.1 (the only formula applicable):
f′(x₀) = (1/12h)[f(x₀ − 2h) − 8f(x₀ − h) + 8f(x₀ + h) − f(x₀ + 2h)]
= (1/1.2)[f(1.8) − 8f(1.9) + 8f(2.1) − f(2.2)] = 22.166999.
The errors in the formulas are approximately 1.35 × 10⁻¹, 1.13 × 10⁻¹, −6.16 × 10⁻², −2.47 × 10⁻¹, and 1.69 × 10⁻⁴, respectively. Clearly, the five-point formula gives the superior result.
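A Python sketch (ours, not part of the handout) comparing the three-point and five-point formulas for f(x) = xeˣ at x₀ = 2.0 confirms these numbers:

```python
import math

def f(x):
    return x * math.exp(x)        # f(x) = x e^x

x0, h = 2.0, 0.1
exact = 3 * math.exp(2)           # f'(x) = (x + 1)e^x, so f'(2.0) = 3e^2

three_pt = (-3 * f(x0) + 4 * f(x0 + h) - f(x0 + 2 * h)) / (2 * h)
central  = (f(x0 + h) - f(x0 - h)) / (2 * h)
five_pt  = (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)
```

The five-point value is several orders of magnitude closer to 3e² than the three-point values.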
Consider approximating f′(0.900) for f(x) = sin x, using the values in the table [the true value is cos(0.900) = 0.62161].

x sin x x sin x

0.800 0.71736 0.901 0.78395

0.850 0.75128 0.902 0.78457

0.880 0.77074 0.905 0.78643

0.890 0.77707 0.910 0.78950

0.895 0.78021 0.920 0.79560

0.898 0.78208 0.950 0.81342

0.899 0.78270 1.000 0.84147

Using the formula

f′(0.900) ≈ [f(0.900 + h) − f(0.900 − h)] / (2h)
with different values of h gives the approximations in table given below:

h        Approximation to f′(0.900)    Error

0.001 0.62500 0.00339

0.002 0.62250 0.00089

0.005 0.62200 0.00039

0.010 0.62150 -0.00011

0.020 0.62150 -0.00011

0.050 0.62140 -0.00021

0.100 0.62055 -0.00106
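The errors in this table do not shrink monotonically as h decreases, because the tabulated sin values carry only five decimals. A Python sketch (ours, not part of the handout) reproduces this behavior by rounding the data to five decimals before differencing; we assume that rounding is the only data error:

```python
import math

x0, exact = 0.900, math.cos(0.900)

def quotient(h):
    # sin values rounded to five decimals, as in the table above
    fp = round(math.sin(x0 + h), 5)
    fm = round(math.sin(x0 - h), 5)
    return (fp - fm) / (2 * h)

err_small = abs(quotient(0.001) - exact)   # dominated by rounding in the data
err_mid   = abs(quotient(0.010) - exact)   # near-optimal h for 5-digit data
```

With h = 0.001 the rounded data give 0.62500 (error 0.00339), while h = 0.010 gives a much smaller error, matching the table.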

Examples of
Numerical Integration
EXAMPLE
The Trapezoidal rule for a function f on the interval [0, 2] (with h = 2) is
∫₀² f(x) dx ≈ (h/2)[f(x₀) + f(x₁)] = f(0) + f(2),
while Simpson's rule for f on [0, 2] (with h = 1) is
∫₀² f(x) dx ≈ (h/3)[f(x₀) + 4f(x₁) + f(x₂)] = (1/3)[f(0) + 4f(1) + f(2)].
3
f(x)          x²      x⁴      1/(x + 1)   √(1 + x²)   sin x   eˣ
Exact value   2.667   6.400   1.099       2.958       1.416   6.389
Trapezoidal   4.000   16.000  1.333       3.326       0.909   8.389
Simpson's     2.667   6.667   1.111       2.964       1.425   6.421
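Two of the table entries can be checked with a minimal Python sketch (ours, not part of the handout); note that Simpson's rule is exact for x² since it integrates cubics exactly:

```python
def trap(f, a, b):                 # single-interval Trapezoidal rule
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):              # single Simpson's rule with h = (b - a)/2
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

t2 = trap(lambda x: x**2, 0, 2)    # 4.000, as in the table
s2 = simpson(lambda x: x**2, 0, 2) # 8/3, exact
s4 = simpson(lambda x: x**4, 0, 2) # 20/3 ≈ 6.667
```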

Use the closed and open formulas listed below to approximate
∫₀^{π/4} sin x dx = 1 − √2/2 ≈ 0.29289322.

Some of the common closed Newton-Cotes formulas with their error terms are as follows:
n = 1: Trapezoidal rule
∫_{x₀}^{x₁} f(x) dx = (h/2)[f(x₀) + f(x₁)] − (h³/12) f″(ξ),
where x₀ < ξ < x₁.
n = 2: Simpson's rule
∫_{x₀}^{x₂} f(x) dx = (h/3)[f(x₀) + 4f(x₁) + f(x₂)] − (h⁵/90) f⁽⁴⁾(ξ), x₀ < ξ < x₂.

n = 3: Simpson's three-eighths rule
∫_{x₀}^{x₃} f(x) dx = (3h/8)[f(x₀) + 3f(x₁) + 3f(x₂) + f(x₃)] − (3h⁵/80) f⁽⁴⁾(ξ), x₀ < ξ < x₃.
n = 4:
∫_{x₀}^{x₄} f(x) dx = (2h/45)[7f(x₀) + 32f(x₁) + 12f(x₂) + 32f(x₃) + 7f(x₄)] − (8h⁷/945) f⁽⁶⁾(ξ),
where x₀ < ξ < x₄.

n = 0: Midpoint rule (an open formula)
∫_{x₋₁}^{x₁} f(x) dx = 2h f(x₀) + (h³/3) f″(ξ), where x₋₁ < ξ < x₁.

n                 1            2            3            4
Closed formulas   0.27768018   0.29293264   0.29291070   0.29289318
Error             0.01521303   0.00003942   0.00001748   0.00000004
Open formulas     0.29798754   0.29285866   0.29286923
Error             0.00509432   0.00003456   0.00002399
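Three of the closed-formula values can be verified with a Python sketch (ours, not part of the handout) applied to ∫₀^{π/4} sin x dx:

```python
import math

a, b = 0.0, math.pi / 4
exact = 1 - math.sqrt(2) / 2       # true value of the integral
f = math.sin

h1 = b - a                         # n = 1: Trapezoidal rule
trap = h1 / 2 * (f(a) + f(b))

h2 = (b - a) / 2                   # n = 2: Simpson's rule
simp = h2 / 3 * (f(a) + 4 * f(a + h2) + f(b))

h4 = (b - a) / 4                   # n = 4 closed Newton-Cotes formula
nc4 = 2 * h4 / 45 * (7*f(a) + 32*f(a + h4) + 12*f(a + 2*h4)
                     + 32*f(a + 3*h4) + 7*f(b))
```

The errors shrink rapidly with n, as the table shows; the n = 4 formula is already within about 4 × 10⁻⁸ of the exact value.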

Composite Numerical Integration

EXAMPLE 1
Consider approximating ∫₀^π sin x dx with an absolute error less than 0.00002, using the Composite Simpson's rule, which gives
∫₀^π sin x dx = (h/3)[2 Σ_{j=1}^{(n/2)−1} sin x₂ⱼ + 4 Σ_{j=1}^{n/2} sin x₂ⱼ₋₁] − (πh⁴/180) sin μ.
Since the absolute error is required to be less than 0.00002, the inequality
|(πh⁴/180) sin μ| ≤ πh⁴/180 = π⁵/(180n⁴) < 0.00002
is used to determine n and h. Computing these calculations gives n greater than or equal to 18. If
n = 20, then the formula becomes
π π  9
 jπ  10
 (2 j − 1)π 
∫ sin xdx ≈  ∑ sin 
2  ∑ sin 
+ 4   = 2.000006.
0 60  j =1  10  j =1  20 
To be assured of this degree of accuracy using
the Composite Trapezoidal rule requires that
|(πh²/12) sin μ| ≤ πh²/12 = π³/(12n²) < 0.00002
or that n ≥ 360. Since this is many more calculations than are needed for the Composite Simpson's rule, it is clear that it would be undesirable to use the Composite Trapezoidal rule on this problem. For comparison purposes, the Composite Trapezoidal rule with n = 20 and h = π/20 gives
∫₀^π sin x dx ≈ (π/40)[2 Σ_{j=1}^{19} sin(jπ/20) + sin 0 + sin π] = (π/40)[2 Σ_{j=1}^{19} sin(jπ/20)] = 1.9958860.
The exact answer is 2; so Simpson’s rule with n = 20 gave an answer well within the required
error bound, whereas the Trapezoidal rule with n = 20 clearly did not.
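The comparison above can be reproduced with a Python sketch (ours, not part of the handout) of the two composite rules:

```python
import math

def composite_simpson(f, a, b, n):     # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*j - 1) * h) for j in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*j * h) for j in range(1, n // 2))
    return h / 3 * s

def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + j * h) for j in range(1, n)))

simp = composite_simpson(math.sin, 0, math.pi, 20)     # ≈ 2.000006
trap = composite_trapezoid(math.sin, 0, math.pi, 20)   # ≈ 1.9958860
```

With n = 20, Simpson's rule is within the 0.00002 bound of the exact value 2, while the trapezoidal rule is off by about 0.004.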
An example of industrial applications: A company advertises that every roll of toilet paper has at least 250 sheets. The probability that there are 250 or more sheets in the toilet paper is given by
P(y ≥ 250) = ∫_{250}^{∞} 0.3515 e^{−0.3881(y − 252.2)²} dy
Approximating the above integral as
P(y ≥ 250) ≈ ∫_{250}^{270} 0.3515 e^{−0.3881(y − 252.2)²} dy
a) Use the single-segment Trapezoidal rule to find the probability that there are 250 or more sheets.
b) Find the true error, Eₜ, for part (a).
c) Find the absolute relative true error for part (a).

a) I ≈ (b − a)[f(a) + f(b)]/2, where a = 250, b = 270, and
f(y) = 0.3515 e^{−0.3881(y − 252.2)²}
f(250) = 0.3515 e^{−0.3881(250 − 252.2)²} = 0.053721
f(270) = 0.3515 e^{−0.3881(270 − 252.2)²} = 1.3888 × 10⁻⁵⁴
I = (270 − 250)(0.053721 + 1.3888 × 10⁻⁵⁴)/2 = 0.53721
b) The exact value of the above integral cannot be found analytically. We take the value obtained by adaptive numerical integration using Maple as the exact value for calculating the true error and relative true error:
P(y ≥ 250) = ∫_{250}^{270} 0.3515 e^{−0.3881(y − 252.2)²} dy = 0.97377

so the true error is Eₜ = 0.97377 − 0.53721 = 0.43656


c) The absolute relative true error, ∈ₜ, would then be
∈ₜ = |True Error / True Value| × 100 = (0.97377 − 0.53721)/0.97377 × 100 = 44.832%
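A Python sketch of this example (ours, not part of the handout) shows how poor a single trapezoid is on a peaked integrand; a composite Simpson's rule with many segments serves as the reference value in place of Maple's adaptive integrator:

```python
import math

def f(y):                               # integrand, a scaled normal density
    return 0.3515 * math.exp(-0.3881 * (y - 252.2) ** 2)

a, b = 250.0, 270.0
single_trap = (b - a) * (f(a) + f(b)) / 2          # ≈ 0.53721

n = 200                                            # composite Simpson reference
h = (b - a) / n
s = f(a) + f(b)
s += 4 * sum(f(a + (2*j - 1) * h) for j in range(1, n // 2 + 1))
s += 2 * sum(f(a + 2*j * h) for j in range(1, n // 2))
reference = h / 3 * s                              # ≈ 0.97377

rel_err = abs(reference - single_trap) / reference * 100   # ≈ 44.8%
```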
Improper Integrals
EXAMPLE
To approximate the value of the improper integral
∫₀¹ (eˣ/√x) dx,
we will use the Composite Simpson's rule with h = 0.25. Since the fourth Taylor polynomial for eˣ about x = 0 is
P₄(x) = 1 + x + x²/2 + x³/6 + x⁴/24,
we have
∫₀¹ (P₄(x)/√x) dx = lim_{M→0⁺} [2x^{1/2} + (2/3)x^{3/2} + (1/5)x^{5/2} + (1/21)x^{7/2} + (1/108)x^{9/2}] evaluated from M to 1
= 2 + 2/3 + 1/5 + 1/21 + 1/108 ≈ 2.9235450.
Table below lists the approximate values of

G(x) = (eˣ − P₄(x))/√x when 0 < x ≤ 1, and G(0) = 0.
x G(x)

0.00 0

0.25 0.0000170

0.50 0.0004013

0.75 0.0026026

1.00 0.0099485

Applying the Composite Simpson's rule to G using these data gives
∫₀¹ G(x) dx ≈ (0.25/3)[0 + 4(0.0000170) + 2(0.0004013) + 4(0.0026026) + 0.0099485] = 0.0017691
Hence
∫₀¹ (eˣ/√x) dx ≈ 2.9235450 + 0.0017691 = 2.9253141.
This result is accurate to within the accuracy of the Composite Simpson's rule approximation for the function G. Since |G⁽⁴⁾(x)| < 1 on [0, 1], the error is bounded by
((1 − 0)/180)(0.25)⁴(1) = 0.0000217.
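The subtraction-of-the-singularity technique above translates directly into a Python sketch (ours, not part of the handout):

```python
import math

def P4(x):                         # fourth Taylor polynomial of e^x about 0
    return 1 + x + x**2/2 + x**3/6 + x**4/24

def G(x):                          # smooth remainder (e^x - P4(x)) / sqrt(x)
    return 0.0 if x == 0 else (math.exp(x) - P4(x)) / math.sqrt(x)

# Exact integral of P4(x)/sqrt(x) on [0, 1]
analytic = 2 + 2/3 + 1/5 + 1/21 + 1/108          # ≈ 2.9235450

# Composite Simpson's rule with h = 0.25 applied to G
h = 0.25
simpson_G = h/3 * (G(0) + 4*G(0.25) + 2*G(0.5) + 4*G(0.75) + G(1.0))

total = analytic + simpson_G                      # ≈ 2.9253141
```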
EXAMPLE
To approximate the value of the improper integral
I = ∫₁^∞ x^{−3/2} sin(1/x) dx,
we make the change of variable t = x⁻¹ to obtain
I = ∫₀¹ t^{−1/2} sin t dt.

The fourth Taylor polynomial, P₄(t), for sin t about 0 is
P₄(t) = t − (1/6)t³,
6

so we have
I = ∫₀¹ (sin t − t + t³/6)/t^{1/2} dt + ∫₀¹ (t^{1/2} − (1/6)t^{5/2}) dt
= ∫₀¹ (sin t − t + t³/6)/t^{1/2} dt + [(2/3)t^{3/2} − (1/21)t^{7/2}]₀¹
= ∫₀¹ (sin t − t + t³/6)/t^{1/2} dt + 0.61904761.

Applying the Composite Simpson's rule with n = 8 to the remaining integral gives
I ≈ 0.0014890097 + 0.61904761 = 0.62053661,
which is accurate to within 4.0 × 10⁻⁸.

An Introduction to MAPLE

Maple is a comprehensive
computer system for advanced mathematics.

It includes facilities for interactive algebra, calculus, discrete mathematics, graphics, numerical
computation etc.

It provides a unique environment for rapid development of mathematical programs using its vast
library of built-in functions and operations.
Syntax :As with any computer language, Maple has its own syntax.
We try to explain some of the symbols used in Maple

Symbol     Description                                              Examples                 Sample Output
;          End-of-line. Tells Maple to process the line and         hello;                   hello
           show the output.
:          End-of-line. Tells Maple to process the line and         hello:
           hide the output.
:=         Assignment. Lets you assign values to variables.         a := 3; a;               a := 3; 3
+, -       Addition, subtraction.                                   1 + 3; 1 - 3;            4; -2
*, /       Multiplication, division.                                3*412; 1236/3; 7/3;      1236; 412; 7/3
^, sqrt    Power, square root.                                      2^3; sqrt(2); 2^(1/2);
evalf, .   Floating-point (decimal) evaluation.                     evalf(7/3); 7.0/3;       2.333333333; 2.333333333
I, Pi      Imaginary unit, Pi.                                      2 + 3*I; (2*I)^2;        2+3I; -4;
                                                                    evalf(Pi);               3.14159265
%, %%      Recall the last output, the second-to-last output, etc.  %; %%%;                  3.14159265; -4
to-last output, etc. %%%;

Some syntactical Tips:


Maple is case sensitive. foo, Foo, and FOO are three different things.

x*y gives the product of x and y,


xy is one variable

To get the constant e use exp(1).

Using the % operator can give confusing results. It always returns the last output from the kernel, which may have nothing to do with where the cursor is (or which worksheet is active).
If Maple doesn't recognize something, it assumes it is a variable; e.g., typing i^2 will give you i², while we may have wanted -1 (the imaginary unit is I).

Spaces are optional.

Greek letters may be entered by spelling their name. For example, alpha is always displayed
as α , and Gamma is displayed as Γ
(note upper-case).
Built-in Data Capabilities
Maple can handle arbitrary-precision floating point numbers. In other words, Maple can store as
many digits for a number as you like, up to the physical limits of your computer's memory. To
control this, use the Digits variable.
sqrt(2.0);
1.414213562
Digits := 20:
sqrt(2.0);
1.4142135623730950488

Maple sets Digits to be 10 by default. You can also temporarily get higher-precision results by calling evalf with a second argument.
evalf(sqrt(2), 15);
1.41421356237310
Large integers are handled automatically

Using symbolic computation


The main feature of Maple is symbolic computation.

In other words, Maple does algebra.

Example Output Comments

(x + y)^2; (x + y)2 A basic expression.

k is now an alias for the expression. Note that k is


k := x*y + y^2; k := xy + y2 simply another name for the expression - they are not
equal in the mathematical sense.

You can now use k to refer to the expression. Maple


p := k /(x+y);
immediately substitutes the value of k.

You can unassign a variable by assigning it to its own


k := 'k'; k
name in single quotes.

simplify(p); y The simplify command does algebraic simplification.

p := x^2 - p := x2 - Maple doesn't mind if you re-use names. The old value is
8*x +15; 8x + 15 lost.

Use the solve command to solve equations. Note the


use of the = sign. Here, it is used in a mathematical
solve(p=3,x); 2,6 sense. Maple will try different values for x until it finds all
of them that make the mathematical statement x2 - 8x +
15 = 3 true.

dpdx := diff(p,x);    dpdx := 2x - 8    The diff command differentiates an expression with respect to a variable.

The int command integrates an expression.


int(p,x);
Note that the constant of integration is left off.

Basic Plotting
Maple can produce graphs very easily. Here are some examples, showcasing the basic
capabilities.

plot( x^2, x=-2..2); plot( x^2, x=-2..2, y=-10..10);

A basic plot. A plot with vertical axis control.

plot([x, x^2, x^3], x=-2..2);


Plot multiple expressions by enclosing them in brackets.

plot3d(4-x^2-y^2, x=-3..3, y=-2..2);

A basic 3-d plot.

smartplot3d(x^2-y^2);

Using smartplot to let Maple set its own scaling.

Eigenvals and vectors of a numeric matrix :


Calling Sequence
Eigenvals( A, vecs)
Eigenvals( A, B, vecs)

Parameters
A,B - square matrices of real or complex numbers
vecs - (optional) name to be assigned the matrix of eigenvectors

Example
> A := array([[1,2,4],[3,7,2],[5,6,9]]);

evalf(Eigenvals(A));

> lambda := evalf(Eigenvals(A,vecs));

> print(vecs);

linalg[eigenvectors] - find the eigenvectors of a matrix

Calling Sequence
eigenvectors( A)
eigenvectors( A, 'radical')
eigenvectors( A, 'implicit')

Parameters
A - square matrix
The command with(linalg,eigenvectors) allows the use of the abbreviated form of this
command.
> with(linalg):
Warning, the protected names norm and trace have been redefined and unprotected
> A := matrix(3,3, [1,-3,3,3, -5,3,6,-6,4]);

> e := eigenvalues(A);

> v := [eigenvectors(A)];

> v[1][1]; # The first eigenvalue


4
> v[1][2]; # Its multiplicity
1
> v[1][3]; # Its eigenvectors

> v[2][1]; # The second eigenvalue


-2
> v[2][2]; # Its multiplicity
> v[2][2]; # Its multiplicity
2
Help: its worksheet interface
- ; waiting for a command
- restart; refreshes memory
- # marks comments, so no action is implied
Eval - Evaluate an expression
Calling Sequence
Eval(a, x=n))
Eval(a, {x1=n1,x2=n2,...})
Parameters
a - an expression
x, x1, x2,... - names
n, n1, n2,... - evaluation points
Description
The Eval function is a place holder for evaluation at a point.
The expression a is evaluated at
x = n (x1=n1, x2=n2, ... for the multivariate case).

The call Eval (a, x=n) mod p evaluates the polynomial a at x=n modulo p .
Note: The polynomial must be a multivariate polynomial over a finite field.
The call modp1(Eval(a,n),p) evaluates the polynomial a at x = n modulo p where a
must be a univariate polynomial in the modp1 representation, with n an integer and p an
integer > 1.
Examples

> Eval(x^2+1,x=3) mod 5;

> Eval(x^2+y,{x=3,y=2}) mod 5;
> Eval (int (f(x),x), x=y);
⌠ f( x ) d x

⌡ x=y
Solution of Problems
We can use Maple For:
Solution of non-linear equations
by Newton’s Method
by Bisection Method
Solution of System of linear equations.
Numerical Integration.
Numerical Solution of ODE’s.
Maple performs both numerical and symbolic integration.
Please note that Maple uses the int function for both numerical and symbolic integration, but for numerical integration we have to use the additional evalf command.
Some inbuilt functions in Maple being used for integration
Numerical Integration
Calling Sequences
evalf(Int(f, x=a..b))
evalf(Int(f, a..b))
evalf(Int(f, x=a..b, opts))
evalf(Int(f, a..b, opts))
evalf(int(f, x=a..b))
We Define Parameters
f - algebraic expression or procedure; integrand
x - name; variable of integration
a,b - endpoints of the interval of integration
opts - (optional) name or equation of the form
option=name; options
Description
In the case of a definite integral, which is returned unevaluated, numerical integration
can be invoked by applying evalf to the unevaluated integral. To invoke numerical
integration without
first invoking symbolic integration, use the inert function Int as in: evalf( Int(f, x=a..b) ).

If the integrand f is specified as a procedure or a Maple operator, then the second
argument must be the range a..b and not an equation. (i.e., a variable of integration must
not be specified.)
>evalf(Int( exp(-x^3), x = 0..1 )); .8075111821
>alg041(); This is Simpson’s Method.
`Input the function F(x) in terms of x`
`For example:
> sin (x)
`Input lower limit of integration and upper limit of integration`
`separated by a blank`
> 0 3.14159265359
`Input an even positive integer N.`
> 20
The integral of F from 0.00000000
to
3.14159265

is 2.00000678
alg041(); This is Simpson’s Method.
`Input the function F(x) in terms of x`
> x^2
`Input lower limit of integration and upper limit of integration separated by a blank’
>0 2
Input an even positive integer N
>20
The integral of F from
0.00000000
to
2.00000000

is 2.66666667
> alg041();
This is Simpson’s Method.
Input the function F(x) in terms of x, for example: cos(x)
> exp(x-x^2/2)
Input lower limit of integration and upper limit of integration separated by a blank
> 0 3.14159265359
Input an even positive integer N.
> 20
The integral of F from
0.00000000
to
3.14159265

is 3.41046542
> alg044();
This is Simpson's Method for double integrals.

Input the functions F(X,Y), C(X), and D(X) in terms of x and y separated by a space.
For example: cos(x+y) x^3 x
> exp(y/x) x^3 x^2
Input lower limit of integration and upper limit of integration separated by a blank
> 0.1 0.5
Input two even positive integers N, M ; there will be N subintervals for outer integral and
M subintervals for inner integral - separate with blank
> 10 10
The double integral of F from
0.100 to 0.500
Is .03330546
obtained with
N := 10 and M := 10
> alg045();
`This is Gaussian Quadrature for double integrals.`
`Input the function F(x,y) in terms of x and y`
`For example: sqrt(x^2+y^2)`
> exp (y/x)
Input the functions C(x), and D(x) in terms of x separated by a space
For example: cos (x) sin (x)
> x^3 x^2
Input lower limit of integration and upper limit of integration separated by a blank space
>0.1 0.5
Input two integers M > 1 and N > 1. This implementation of Gaussian quadrature
requires both to be less than or equal to 5.
M is used for the outer integral and N for the inner integral - separated by a space.
> 5 5
The double integral of F from
0.1000 to 0.5000
is 3.3305566120e-02, or 0.033305566120,
obtained with
M = 5 and N = 5

Lecture 44

Solution of
Non-Linear Equations

Bisection Method
Regula-Falsi Method
Method of iteration
Newton - Raphson Method
Muller’s Method
Graeffe’s Root Squaring Method

Newton -Raphson Method


An approximation to the root is given by
x₁ = x₀ − f(x₀)/f′(x₀)
Better and successive approximations x₂, x₃, …, x_{n+1} to the root are obtained from the N-R formula
x_{n+1} = xₙ − f(xₙ)/f′(xₙ)
Newton’s algorithm
To find a solution to f(x)=0 given an initial approximation p0
INPUT initial approximation p0; tolerance TOL; maximum number of iterations N0

OUTPUT approximate solution p or message of failure


Step 1
Set i = 1

Step 2
While i ≤ N0 do Steps 3-6
Step 3
Set p = p0 − f(p0)/f′(p0). (Compute pᵢ.)
Step 4
If |p − p0| < TOL then OUTPUT (p);
(The procedure was successful.)
STOP
Step 5 Set i = i + 1
Step 6 Set p0 = p (Update p0)
Step 7 OUTPUT
('The method failed after N0 iterations, N0 =', N0)
(The procedure was unsuccessful.)
STOP
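The algorithm above can be sketched in Python (ours, not part of the handout); run on cos(x) − x = 0 with the same starting point and tolerance as the session below, it converges in three iterations:

```python
import math

def newton(f, fprime, p0, tol, n0):
    """Newton iteration following the algorithm above."""
    for i in range(1, n0 + 1):
        p = p0 - f(p0) / fprime(p0)
        if abs(p - p0) < tol:
            return p, i            # success after i iterations
        p0 = p
    return None, n0                # failure after n0 iterations

root, its = newton(lambda x: math.cos(x) - x,
                   lambda x: -math.sin(x) - 1,
                   p0=0.7853981635, tol=0.00005, n0=25)
```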
Example
Using Maple to solve a non-linear equation.
cos( x) − x = 0

Solution
The Maple command will be as follows (note that Maple is case-sensitive):
fsolve(cos(x) - x);
> alg023();
This is Newton's Method
Input the function F(x) in terms of x
For example:
> cos(x)-x
Input initial approximation
> 0.7853981635
Input tolerance
> 0.00005
Input maximum number of iterations - no decimal point
> 25
Select output destination
1. Screen
2. Text file
Enter 1 or 2
>1
Select amount of output
1. Answer only
2. All intermediate approximations
Enter 1 or 2
>2
Newton's Method
I P F(P)
1 0.739536134 -7.5487470e-04
2 0.739085178 -7.5100000e-08
3 0.739085133 0.0000000e-01
Approximate solution = 0.73908513
with F(P) = 0.0000000000
Number of iterations = 3
Tolerance = 5.0000000000e-05

Another Example
> alg023();
Input the function F(x) in terms of x ,
> sin(x)-1
Input initial approximation
> 0.17853
Input tolerance
> 0.00005
Input maximum number of iterations – no decimal point
> 25
Select output destination
1. Screen
2. Text file
Enter 1 or 2
>2
Select amount of output
1. Answer only
2. All intermediate approximations
Enter 1 or 2
>2
Newton's Method

I P F(P)
1 1.01422964e+00 -1.5092616e-01
2 1.29992628e+00 -3.6461537e-02
3 1.43619550e+00 -9.0450225e-03
4 1.50359771e+00 -2.2569777e-03
5 1.53720967e+00 -5.6397880e-04
6 1.55400458e+00 -1.4097820e-04
7 1.56240065e+00 -3.5243500e-05

More…
8 1.56659852e+00 -8.8108000e-06
9 1.56869743e+00 -2.2027000e-06
10 1.56974688e+00 -5.5070000e-07
11 1.57027163e+00 -1.3770000e-07
12 1.57053407e+00 -3.4400000e-08
13 1.57066524e+00 -8.6000000e-09
14 1.57073085e+00 -2.1000000e-09
15 1.57076292e+00 -6.0000000e-10
Approximate solution
= 1.57076292
with
F(P) =6.0000000000e-10
Number of iterations = 15
Tolerance = 5.0000000000e-05
Bisection Method

> alg021();
This is the Bisection Method.
Input the function F(x) in terms of x
For example:
> x^3+4*x^2-10
Input endpoints A < B separated by blank
> 1 2
Input tolerance
> 0.0005
Input maximum number of iterations - no decimal point
> 25
Select output destination
1. Screen ,
2. Text file
Enter 1 or 2
>1
Select amount of output
1. Answer only
2. All intermediate approximations
Enter 1 or 2
>2
Bisection Method
I P F(P)
1 1.50000000e+00 2.3750000e+00
2 1.25000000e+00 -1.7968750e+00
3 1.37500000e+00 1.6210938e-01
4 1.31250000e+00 -8.4838867e-01
5 1.34375000e+00 -3.5098267e-01
6 1.35937500e+00 -9.6408842e-02

7 1.36718750e+00 3.2355780e-02
8 1.36328125e+00 -3.2149969e-02
9 1.36523438e+00 7.2030000e-05
10 1.36425781e+00 -1.6046697e-02
11 1.36474609e+00 -7.9892590e-03

Approximate solution P = 1.36474609


with F(P) = -.00798926
Number of iterations = 11
Tolerance = 5.00000000e-04
> alg021();
Another example of the Bisection Method.
Input the function F(x) in terms of x,
> cos(x)
Input endpoints A < B separated by blank
> 1 2
Input tolerance
> 0.0005
Input maximum number of iterations - no decimal point
> 25
Select output destination
1. Screen , 2. Text file
Enter 1 or 2
>1
Select amount of output
1. Answer only
2. All intermediate approximations
Enter 1 or 2
>2
Bisection Method
I P F(P)
1 1.50000000e+00 7.0737202e-02
2 1.75000000e+00 -1.7824606e-01
3 1.62500000e+00 -5.4177135e-02
4 1.56250000e+00 8.2962316e-03
5 1.59375000e+00 -2.2951658e-02
6 1.57812500e+00 -7.3286076e-03
7 1.57031250e+00 4.8382678e-04
8 1.57421875e+00 -3.4224165e-03
9 1.57226563e+00 -1.4692977e-03
10 1.57128906e+00 -4.9273519e-04
11 1.57080078e+00 -4.4542051e-06
Approximate solution P = 1.57080078
with F(P) = -.00000445
Number of iterations = 11
Tolerance = 5.00000000e-04
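The bisection runs above can be reproduced with a minimal Python sketch. The code and its names (`bisect`, `f`) are ours, not from the handout.

```python
def bisect(f, a, b, tol=5e-4, max_iter=25):
    """Bisection (Bolzano): halve a sign-changing bracket [a, b]."""
    fa = f(a)
    p = a
    for _ in range(max_iter):
        p = a + (b - a) / 2.0            # midpoint of the current bracket
        fp = f(p)
        if fp == 0.0 or (b - a) / 2.0 < tol:
            return p                     # midpoint is within tol of the root
        if fa * fp > 0:
            a, fa = p, fp                # root lies in the right half
        else:
            b = p                        # root lies in the left half
    return p

# the handout's example: x^3 + 4x^2 - 10 = 0 on [1, 2]
root = bisect(lambda x: x**3 + 4*x**2 - 10, 1.0, 2.0)
```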
> alg025();
This is the Method of False Position
Input the function F(x) in terms of x
> cos(x)-x
Input endpoints P0 < P1 separated by a blank space
> 0.5 0.7853981635
Input tolerance
>0.0005
Input maximum number of
iterations - no decimal point
> 25

Select output destination
1. Screen
2. Text file
Enter 1 or 2
>1
Select amount of output
1. Answer only
2. All intermediate approximations
Enter 1 or 2
>2
METHOD OF FALSE POSITION
I P F(P)
2 7.36384139e-01 4.51771860e-03
3 7.39058139e-01 4.51772000e-05
4 7.39084864e-01 4.50900000e-07

Approximate solution P = .73908486


with F(P) = .00000045
Number of iterations = 4
Tolerance = 5.00000000e-04
System of Linear Equations
Gaussian Elimination
Gauss-Jordan Elimination
Crout's Reduction
Jacobi's Iteration
Gauss-Seidel Iteration
Relaxation
Matrix Inversion
> alg061();
This is Gaussian Elimination to solve a linear system.
The array will be input from a text file in the order:
A(1,1), A(1,2), ..., A(1,N+1), A(2,1), A(2,2), ..., A(2,N+1),..., A(N,1), A(N,2), ..., A(N,N+1)

Place as many entries as desired on each line, but separate entries with
at least one blank.
Has the input file been created? - enter Y or N.
>y
Input the file name in the form - drive:\name.ext
for example: A:\DATA.DTA
> d:\maple00\dta\alg061.dta
Input the number of equations - an integer.
>4
Choice of output method:
1. Output to screen 2. Output to text file
Please enter 1 or 2.
>1
GAUSSIAN ELIMINATION
The reduced system - output by rows:
1.00000000 -1.00000000 2.00000000 -1.00000000 -8.00000000
0.00000000 2.00000000 -1.00000000 1.00000000 6.00000000
0.00000000 0.00000000 -1.00000000 -1.00000000 -4.00000000
0.00000000 0.00000000 0.00000000 2.00000000 4.00000000

Has solution vector:


-7.00000000 3.00000000 2.00000000 2.00000000

with 1 row interchange(s)
> alg071();
This is the Jacobi Method for Linear Systems.
The array will be input from a text file in the order
A(1,1), A(1,2), ..., A(1,n+1), A(2,1), A(2,2), ...,
A(2,n+1),..., A(n,1), A(n,2), ..., A(n,n+1)
Place as many entries as desired on each line, but separate
entries with at least one blank.
The initial approximation should follow in the same format.
Has the input file been created? - enter Y or N.
>y
Input the file name in the form - drive:\name.ext
for example: A:\DATA.DTA
> d:\maple00\alg071.dta
Input the number of equations - an integer.
>4
Input the tolerance.
> 0.001
Input maximum number of iterations.
> 15
Choice of output method:
1. Output to screen
2. Output to text file
Please enter 1 or 2.
>1
JACOBI ITERATIVE METHOD FOR LINEAR SYSTEMS

The solution vector is :


1.00011860 1.99976795
-.99982814 0.99978598
using 10 iterations
with Tolerance 1.0000000000e-03

Summing up

Non-Linear Equations

Bisection Method (Bolzano)


Regula-Falsi Method
Method of iteration
Newton - Raphson Method
Muller’s Method
Graeffe’s Root Squaring Method

In the method of False Position, the first approximation to the root of f (x) = 0 is
given by

x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / [f(x_n) − f(x_{n−1})]   ……….. (2.2)

Here f(x_{n−1}) and f(x_n) are of opposite sign. Successive approximations to the
root of f(x) = 0 are given by Eq. (2.2).
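As an illustration (our code, not the handout's), Eq. (2.2) can be applied while keeping a sign-changing bracket; the names `false_position`, `f` are assumptions.

```python
import math

def false_position(f, x0, x1, tol=5e-4, max_iter=25):
    """Regula-Falsi: apply Eq. (2.2), always retaining a sign change."""
    f0, f1 = f(x0), f(x1)
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Eq. (2.2)
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:
            x1, f1 = x2, f2     # root lies between x0 and x2
        else:
            x0, f0 = x2, f2     # root lies between x2 and x1
    return x2

# the handout's example: cos(x) - x = 0 on [0.5, 0.7853981635]
root = false_position(lambda x: math.cos(x) - x, 0.5, 0.7853981635)
```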

The METHOD OF ITERATION can be applied to find a real root of the equation
f(x) = 0 by rewriting it in the form x = φ(x). Successive approximations are then

x1 = φ(x0)
x2 = φ(x1)
⋮
x_{n+1} = φ(x_n)
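A minimal Python sketch of this fixed-point iteration (ours; the names `fixed_point` and `phi` are assumptions):

```python
import math

def fixed_point(phi, x0, tol=1e-6, max_iter=100):
    """Iterate x1 = phi(x0) until two successive values agree to tol."""
    for _ in range(max_iter):
        x1 = phi(x0)
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    return None

# cos(x) - x = 0 rewritten as x = cos(x), so phi = cos (|phi'| < 1 near the root)
root = fixed_point(math.cos, 0.5)
```

The iteration converges here because |φ′(x)| = |sin x| < 1 near the root.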

In the Newton-Raphson Method, successive approximations x2, x3, …, xn to the root are obtained from

x_{n+1} = x_n − f(x_n) / f′(x_n)      (N-R Formula)

In the Secant Method, successive approximations to the root are obtained from

x_{n+1} = [x_{n−1} f(x_n) − x_n f(x_{n−1})] / [f(x_n) − f(x_{n−1})],   n = 1, 2, 3, …

This sequence converges to the root b of f(x) = 0, i.e. f(b) = 0.

The Secant Method converges faster than linearly but slower than Newton's
quadratic rate.
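The secant formula can be sketched in Python as follows (our illustration; the names `secant`, `f` are assumptions):

```python
import math

def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant iteration: N-R with f'(x_n) replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # the secant formula
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                        # shift the two working points
        x1, f1 = x2, f(x2)
    return None

root = secant(lambda x: math.cos(x) - x, 0.5, 1.0)
```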

In Muller’s Method we can get a better approximation to the root, by using


x_{i+1} = x_i + h_i λ

λ = −2 f_i δ_i / ( g_i ± [ g_i² − 4 f_i δ_i λ_i ( f_{i−2} λ_i − f_{i−1} δ_i + f_i ) ]^{1/2} )

where we defined

λ = h / h_i = (x − x_i) / (x_i − x_{i−1})
λ_i = h_i / h_{i−1}
δ_i = 1 + λ_i

Systems of Linear Equations

Gaussian Elimination
Gauss-Jordan Elimination
Crout's Reduction
Jacobi's Iteration
Gauss-Seidel Iteration
Relaxation
Matrix Inversion

In the Gaussian Elimination method, the solution to the system of equations is obtained in two stages:
•the given system of equations is reduced to an equivalent upper-triangular form using elementary row operations
•the upper-triangular system is solved using the back substitution procedure
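The two stages can be sketched in Python (our illustration, with partial pivoting added for numerical stability; the names `gauss_eliminate`, `A`, `b` are assumptions):

```python
def gauss_eliminate(A, b):
    """Reduce the augmented matrix [A|b] to upper-triangular form,
    then recover x by back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]        # augmented matrix
    for k in range(n - 1):
        # partial pivoting: bring the largest pivot in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                        # eliminate below pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                       # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# small illustrative system: 2x + y = 3, x + 3y = 5
x = gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```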

The Gauss-Jordan method is a variation of the Gaussian method. In this method,
the elements above and below the diagonal are simultaneously made zero.

In Crout's Reduction Method the coefficient matrix [A] of the system of equations
is decomposed into the product of two matrices [L] and [U], where [L] is a lower-triangular
matrix and [U] is an upper-triangular matrix with 1's on its main diagonal.

For the purpose of illustration, consider a general matrix in the form


[L][U] = [A]

[ l11   0    0  ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22   0  ] [ 0   1   u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31  a32  a33 ]
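A Python sketch of Crout's decomposition (our illustration; the name `crout` and the 2×2 test matrix are assumptions):

```python
def crout(A):
    """Crout's scheme: A = L U, with L lower-triangular and
    U upper-triangular having 1's on its main diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):        # fill column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):    # fill row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

L, U = crout([[4.0, 2.0], [2.0, 3.0]])
```

Once L and U are known, [A]x = b is solved by one forward and one backward substitution.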

Jacobi's Method is an iterative method, where an initial approximate solution to a
given system of equations is assumed and is improved towards the exact
solution in an iterative way.
In Jacobi's method, the (r + 1)th approximation to the above system is given by the equations
x1^{(r+1)} = b1/a11 − (a12/a11) x2^{(r)} − ⋯ − (a1n/a11) xn^{(r)}
x2^{(r+1)} = b2/a22 − (a21/a22) x1^{(r)} − ⋯ − (a2n/a22) xn^{(r)}
⋮
xn^{(r+1)} = bn/ann − (an1/ann) x1^{(r)} − ⋯ − (a_{n,n−1}/ann) x_{n−1}^{(r)}

Here we can observe that no element of x_i^{(r+1)} replaces x_i^{(r)} until the entire
cycle of computation is complete.
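A minimal Python sketch of one Jacobi sweep (ours; `jacobi` and the 2×2 test system are assumptions):

```python
def jacobi(A, b, x0, tol=1e-3, max_iter=15):
    """Jacobi iteration: each sweep uses only the previous iterate x."""
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new            # only now does x_new replace x
    return x

# 2x + y = 3, x + 2y = 3 has the solution x = y = 1
x = jacobi([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0], [0.0, 0.0])
```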

In the Gauss-Seidel method, the corresponding elements of x_i^{(r+1)} replace
those of x_i^{(r)} as soon as they become available. It is also called the method of
Successive Displacement.
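The only change from the Jacobi sketch is that each updated component is used immediately within the same sweep (again our illustration; names assumed):

```python
def gauss_seidel(A, b, x0, tol=1e-3, max_iter=15):
    """Gauss-Seidel: updated components are used as soon as available."""
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new          # successive displacement: overwrite in place
        if diff < tol:
            return x
    return x

x = gauss_seidel([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0], [0.0, 0.0])
```

On this example Gauss-Seidel reaches the tolerance in roughly half the sweeps Jacobi needs.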

The Relaxation Method is also an iterative method and is due to Southwell.

dx_i = R_i / a_ii
Eigen Value Problems

Power Method
Jacobi's Method
In the Power Method the result looks like

u^{(k)} = [A] v^{(k−1)} = q_k v^{(k)}

Here, q_k is the desired largest eigenvalue and v^{(k)} is the corresponding
eigenvector.
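The iteration can be sketched in Python (ours; `power_method` and the symmetric 2×2 test matrix are assumptions):

```python
def power_method(A, v, iters=50):
    """Iterate u = A v, normalise by the largest component, and read off
    the dominant eigenvalue q and its eigenvector v."""
    n = len(A)
    q = 0.0
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        q = max(u, key=abs)            # largest-magnitude component of u
        v = [ui / q for ui in u]       # normalised eigenvector estimate
    return q, v

# eigenvalues of [[2,1],[1,2]] are 3 and 1; the dominant eigenvector is (1, 1)
q, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```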

Interpolation

Finite Difference Operators


Newton’s Forward Difference Interpolation Formula
Newton’s Backward Difference Interpolation Formula
Lagrange’s Interpolation Formula
Divided Differences
Interpolation in Two Dimensions
Cubic Spline Interpolation

Finite Difference Operators

Forward Differences

Backward Differences

Central Difference
Δ^r y_i = Δ^{r−1} y_{i+1} − Δ^{r−1} y_i
∇^k y_i = ∇^{k−1} y_i − ∇^{k−1} y_{i−1},   i = n, (n − 1), …, k
δ^n y_i = δ^{n−1} y_{i+(1/2)} − δ^{n−1} y_{i−(1/2)}

Thus
Δy_x = y_{x+h} − y_x = f(x + h) − f(x)
Δ²y_x = Δy_{x+h} − Δy_x

Similarly,

∇y_x = y_x − y_{x−h} = f(x) − f(x − h)
δy_x = y_{x+(h/2)} − y_{x−(h/2)} = f(x + h/2) − f(x − h/2)

Shift operator, E
E f ( x ) = f ( x + h)
E n f ( x) = f ( x + nh)
E n y x = y x + nh
The inverse operator E-1 is defined as
E −1 f ( x) = f ( x − h)
Similarly,
E − n f ( x) = f ( x − nh)
Average Operator, µ
μ f(x) = (1/2) [ f(x + h/2) + f(x − h/2) ]
       = (1/2) [ y_{x+(h/2)} + y_{x−(h/2)} ]
Differential Operator, D

Df(x) = (d/dx) f(x) = f′(x)
D²f(x) = (d²/dx²) f(x) = f″(x), …, Dⁿf(x) = (dⁿ/dxⁿ) f(x)
Important Results
Δ = E − 1
∇ = 1 − E⁻¹ = (E − 1)/E
δ = E^{1/2} − E^{−1/2}
hD = log E
μ = (1/2)(E^{1/2} + E^{−1/2})
The Newton’s forward difference formula for interpolation, which gives the value
of f (x0 + ph) in terms of f (x0) and its leading differences.
f(x0 + ph) = f(x0) + p Δf(x0) + [p(p − 1)/2!] Δ²f(x0)
  + [p(p − 1)(p − 2)/3!] Δ³f(x0) + ⋯
  + [p(p − 1)⋯(p − n + 1)/n!] Δⁿf(x0) + Error

An alternate expression is
y_x = y0 + p Δy0 + [p(p − 1)/2!] Δ²y0
  + [p(p − 1)(p − 2)/3!] Δ³y0 + ⋯
  + [p(p − 1)⋯(p − n + 1)/n!] Δⁿy0 + Error
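Truncating after the available differences gives a direct computation; here is a Python sketch of ours (the names `newton_forward`, `xs`, `ys` are assumptions):

```python
def newton_forward(xs, ys, x):
    """Evaluate Newton's forward difference formula at x,
    assuming equally spaced nodes xs with spacing h."""
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    table = ys[:]              # successive rows of the difference table
    result = table[0]          # y0
    coeff = 1.0                # running product p(p-1)...(p-k+1)
    fact = 1.0                 # running k!
    for k in range(1, n):
        table = [table[i + 1] - table[i] for i in range(len(table) - 1)]
        coeff *= (p - (k - 1))
        fact *= k
        result += coeff / fact * table[0]   # table[0] is now Delta^k y0
    return result

# y = x^2 sampled at x = 0, 1, 2, 3; interpolate at x = 1.5
val = newton_forward([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 1.5)
```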
Newton’s Backward difference formula is,
f(xn + ph) = f(xn) + p ∇f(xn) + [p(p + 1)/2!] ∇²f(xn)
  + [p(p + 1)(p + 2)/3!] ∇³f(xn) + ⋯
  + [p(p + 1)(p + 2)⋯(p + n − 1)/n!] ∇ⁿf(xn) + Error
Alternatively, this formula can also be written as
y_x = y_n + p ∇y_n + [p(p + 1)/2!] ∇²y_n
  + [p(p + 1)(p + 2)/3!] ∇³y_n + ⋯
  + [p(p + 1)(p + 2)⋯(p + n − 1)/n!] ∇ⁿy_n + Error

Here

p = (x − x_n)/h
The Lagrange’s formula for interpolation
y = f(x) = [(x − x1)(x − x2)⋯(x − xn) / (x0 − x1)(x0 − x2)⋯(x0 − xn)] y0
  + [(x − x0)(x − x2)⋯(x − xn) / (x1 − x0)(x1 − x2)⋯(x1 − xn)] y1 + ⋯
  + [(x − x0)(x − x1)⋯(x − x_{i−1})(x − x_{i+1})⋯(x − xn) / (xi − x0)(xi − x1)⋯(xi − x_{i−1})(xi − x_{i+1})⋯(xi − xn)] yi + ⋯
  + [(x − x0)(x − x1)(x − x2)⋯(x − x_{n−1}) / (xn − x0)(xn − x1)(xn − x2)⋯(xn − x_{n−1})] yn
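The formula translates directly into code; this Python sketch is ours (the names `lagrange`, `xs`, `ys` are assumptions):

```python
def lagrange(xs, ys, x):
    """Sum y_i times the i-th Lagrange basis polynomial evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis factor (x - xj)/(xi - xj)
        total += term
    return total

# quadratic through (0,1), (1,3), (2,7), i.e. y = x^2 + x + 1
val = lagrange([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 1.5)
```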

Newton’s divided difference interpolation formula can be written as


y = f(x) = y0 + (x − x0) y[x0, x1] + (x − x0)(x − x1) y[x0, x1, x2] + ⋯
  + (x − x0)(x − x1)⋯(x − x_{n−1}) y[x0, x1, …, xn]

where the first-order divided difference is defined as

y[x0, x1] = (y1 − y0) / (x1 − x0)
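Higher-order divided differences are built from the first-order ones recursively; a Python sketch of ours (names assumed), valid for unequally spaced nodes:

```python
def divided_diff_interp(xs, ys, x):
    """Newton's divided difference interpolation; nodes need not be
    equally spaced."""
    n = len(xs)
    coef = ys[:]                       # coef[j] becomes y[x0, ..., xj]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    result = coef[-1]
    for i in range(n - 2, -1, -1):     # Horner-style nested evaluation
        result = result * (x - xs[i]) + coef[i]
    return result

# y = x^2 at the unequally spaced nodes 0, 1, 4; interpolate at x = 2
val = divided_diff_interp([0.0, 1.0, 4.0], [0.0, 1.0, 16.0], 2.0)
```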
Numerical Differentiation and Integration
We expressed D in terms of Δ:

D = (1/h) [ Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + Δ⁵/5 − ⋯ ]

Using the backward difference operator, we have hD = −log(1 − ∇). On expansion, we have

D = (1/h) [ ∇ + ∇²/2 + ∇³/3 + ∇⁴/4 + ⋯ ]

Using the central difference operator,

D = (1/h) [ δ − δ³/24 + (3/640) δ⁵ − ⋯ ]
Differentiation Using Interpolation
Richardson’s Extrapolation
(d/dx) ∏_{i=0}^{n−1} (x − x_i) = Σ_{i=0}^{n−1} (x − x0)(x − x1)⋯(x − x_{n−1}) / (x − x_i)

Thus, y′(x) is approximated by P_n′(x), which is given by

P_n′(x) = y[x0, x1] + [(x − x1) + (x − x0)] y[x0, x1, x2] + ⋯
  + Σ_{i=0}^{n−1} [ (x − x0)(x − x1)⋯(x − x_{n−1}) / (x − x_i) ] y[x0, x1, …, xn]

F_m(h/2^m) = [ 4^m F_{m−1}(h/2^m) − F_{m−1}(h/2^{m−1}) ] / (4^m − 1),   m = 1, 2, 3, …
Basic Issues in Integration
What does an integral represent?
∫_a^b f(x) dx = AREA

∫_c^d ∫_a^b g(x, y) dx dy = VOLUME

[Figure: the curve y = f(x) over [a, b], with ordinates y0, y1, …, yn erected at the points x0 = a, x1, x2, …, xn = b]
∫_{x0}^{x3} f(x) dx = (3h/8)(y0 + 3y1 + 3y2 + y3) − (3/80) h⁵ y^{(iv)}(ξ)

[Figure: the curve y = f(x) over [a, b], showing the ordinates y0, y1, y2 at x0 = a, x1, x2]

TRAPEZOIDAL RULE
∫_{x0}^{xn} f(x) dx = (h/2)(y0 + 2y1 + 2y2 + ⋯ + 2y_{n−1} + yn) + E_n
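The composite rule is a one-liner in practice; this Python sketch is ours (the name `trapezoid` is an assumption):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals of width h:
    (h/2)(y0 + 2*y1 + ... + 2*y_{n-1} + yn)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# integral of sin(x) over [0, pi]; the exact value is 2
approx = trapezoid(math.sin, 0.0, math.pi, 100)
```

The error term E_n behaves like h², so doubling n reduces the error by about a factor of four.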

DOUBLE INTEGRATION
We described a procedure to evaluate numerically a double integral of the form
I = ∫ [ ∫ f(x, y) dx ] dy

Differential Equations

Taylor Series
Euler Method
Runge-Kutta Method
Predictor Corrector Method

In Taylor's series we expanded y(t) about the point t = t0 and obtained

y(t) = y(t0) + (t − t0) y′(t0) + [(t − t0)²/2!] y″(t0) + [(t − t0)³/3!] y‴(t0) + [(t − t0)⁴/4!] y^{(iv)}(t0) + ⋯

In Euler Method we obtained the solution of the differential equation in the form
of a recurrence relation
y_{m+1} = y_m + h f(t_m, y_m)
We derived the recurrence relation

y_{m+1} = y_m + (h/2) [ f(t_m, y_m) + f(t_{m+1}, y_{m+1}^{(1)}) ]

which is the modified Euler's method.
The fourth-order R-K method was described as
y_{n+1} = y_n + (1/6)(k1 + 2k2 + 2k3 + k4)
where
k1 = h f(t_n, y_n)
k2 = h f(t_n + h/2, y_n + k1/2)
k3 = h f(t_n + h/2, y_n + k2/2)
k4 = h f(t_n + h, y_n + k3)
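The four stages translate directly into code; a Python sketch of ours (the name `rk4` is an assumption):

```python
import math

def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta for the IVP y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = y, y(0) = 1: the exact solution gives y(1) = e
y1 = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```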
In general, Milne's predictor-corrector pair can be written as

P : y_{n+1} = y_{n−3} + (4h/3)(2y′_{n−2} − y′_{n−1} + 2y′_n)
C : y_{n+1} = y_{n−1} + (h/3)(y′_{n−1} + 4y′_n + y′_{n+1})

Adams' predictor formula is

y_{n+1} = y_n + (h/24)(55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3}) + (251/720) h ∇⁴ f_n

Alternatively, it can be written as

y_{n+1} = y_n + (h/24)[55 y′_n − 59 y′_{n−1} + 37 y′_{n−2} − 9 y′_{n−3}] + (251/720) h ∇⁴ y′_n
