MTH - 209 Lecture Note
1.1 Introduction
In engineering and sciences, a frequently occurring problem is that of determining
the solutions of equations of the form
f (x) = 0 (1.1)
where f(x) may be a linear, quadratic, cubic or quartic expression, in which case algebraic methods of solution are available. However, when f(x) is a polynomial of higher degree, or a transcendental function, no general algebraic methods of solution exist, and we have to look for approximate methods of solution. In this section, we shall consider approximation methods such as the bisection method, the fixed-point iteration method, the method of false position and the Newton-Raphson method.
1.2 The Bisection Method
Recall from MTH103 the intermediate value theorem:
Theorem 1. If f(x) is a continuous function on [a, b], and if f(a) and f(b) are of opposite signs, then there exists ξ in (a, b) such that f(ξ) = 0.
In particular, suppose that f(a) < 0 and f(b) > 0; then there is a root lying between a and b. The approximate value of this root is given by
x_0 = (a + b)/2.
If f (x0 ) = 0, we conclude that x0 is the root of the equation (1.1). Otherwise, the
root lies between a and x0 or between x0 and b according as f (x0 ) is negative or
positive. Continuing, we bisect the interval and repeat the process until a root is
known to the desired accuracy. This process is shown in the diagram below:
Example 1
Find a real root of the equation x^3 - x - 1 = 0.
Solution
Note that f(1) < 0 and f(2) > 0, thus a root lies between 1 and 2, and we take
x_0 = (1 + 2)/2 = 3/2.
Hence
f(x_0) = f(3/2) = 27/8 - 3/2 - 1 = 7/8 > 0.
Thus the root lies between 1 and 3/2. We then obtain x_1 = (1 + 1.5)/2 = 1.25, and f(x_1) = -19/64 < 0. We therefore conclude that the root lies between 1.25 and 1.5.
Continuing, one obtains
x_2 = (1.25 + 1.5)/2 = 1.375.
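The halving process above is mechanical enough to sketch in code. Below is a minimal bisection routine (the function and variable names are my own, not from the notes), applied to x^3 - x - 1 = 0 on [1, 2]:

```python
def bisect(f, a, b, tol=1e-6):
    """Repeatedly halve [a, b], keeping the half on which f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        x0 = (a + b) / 2
        if f(x0) == 0:
            return x0            # x0 is exactly the root
        if f(a) * f(x0) < 0:
            b = x0               # root lies between a and x0
        else:
            a = x0               # root lies between x0 and b
    return (a + b) / 2

root = bisect(lambda x: x**3 - x - 1, 1.0, 2.0)  # root near 1.3247
```

Each pass halves the bracket, so about log2((b - a)/tol) iterations bracket the root to within tol.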
The procedure is repeated until the root is known to the desired accuracy.

1.3 The Fixed-Point Iteration Method
To find a root of equation (1.1), we first rewrite it in the form
x = φ(x)    (1.2)
The solution of (1.2) is called a fixed point of φ, hence the name of the method.
From (1.1), we may get several forms of (1.2). For example, the equation
x^3 + x^2 - 1 = 0
can be expressed as
(i) x = 1/√(1 + x),
(ii) x = (1 - x^2)^(1/3),
(iii) x = (1 - x^2)/x^2, and so on.
Starting from an initial approximation x_0, we compute successively
x_1 = φ(x_0)
x_2 = φ(x_1)
x_3 = φ(x_2)
...
x_n = φ(x_{n-1})
and in general,
x_{n+1} = φ(x_n).
Example 2
Use the fixed point iteration method to find the solution of the equation
x^3 + x - 1 = 0.
Solution:
The equation may be written in the form x = 1/(1 + x^2), and the iteration can be expressed as
x_{n+1} = 1/(1 + x_n^2).
A rough sketch indicates that there is a solution near x = 1, and taking x_0 = 1, we obtain the successive iterates x_1 = 0.5, x_2 = 0.8, x_3 = 0.609756, and so on. The solution to 6 places of decimals is 0.682328.
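The iteration x_{n+1} = φ(x_n) is a one-line loop. A minimal sketch (function names are my own), using φ(x) = 1/(1 + x^2) from Example 2:

```python
def fixed_point(phi, x0, tol=1e-8, max_iter=200):
    """Iterate x_{n+1} = phi(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

root = fixed_point(lambda x: 1 / (1 + x**2), 1.0)
```

This converges because |φ'(x)| < 1 near the root; a rearrangement with |φ'| > 1 there would diverge instead.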
Exercise
Use the fixed-point iteration method to determine the solution of the equation x^2 - x = 0. Compare your solution with the exact solution.
Example 3
Determine the solution of the equation x^3 + x - 1 = 0 using the Newton-Raphson method.
Solution:
Here f(x) = x^3 + x - 1 and f'(x) = 3x^2 + 1, so the iteration is
x_{n+1} = x_n - (x_n^3 + x_n - 1)/(3x_n^2 + 1).
Taking x_0 = 1, we obtain
x_1 = 0.75
x_2 = 0.686047
x_3 = 0.682340
x_4 = 0.682328,
which agrees with the fixed-point result of Example 2.
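A minimal sketch of the Newton-Raphson update x_{n+1} = x_n - f(x_n)/f'(x_n) (function names are my own), applied to x^3 + x - 1 = 0 with x_0 = 1:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: follow the tangent at x_n to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("iteration did not converge")

root = newton(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, 1.0)
```

Convergence at a simple root is quadratic: the number of correct digits roughly doubles each step.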
1.4.1 Convergence of the Newton-Raphson Method
Consider φ(x) = x - f(x)/f'(x); then φ'(x) = f(x)f''(x)/[f'(x)]^2. We assume f(x), f'(x) and f''(x) are continuous and bounded on any interval containing the root x = ξ of equation (1.1). If ξ is a simple root, then f'(ξ) ≠ 0, and since f'(x) is continuous, |f'(x)| ≥ ε for some ε > 0 in a suitable neighbourhood of ξ. Within this neighbourhood we can select an interval such that |f(x)f''(x)| < ε^2; this is possible since f(ξ) = 0 and f(x) is continuously twice differentiable. Thus, in this interval, |φ'(x)| < 1.
Hence, by the convergence criterion for fixed-point iteration, the Newton-Raphson method converges, provided the initial approximation is sufficiently close to ξ. The method still converges, but slowly, when ξ is a multiple root. To determine the rate of convergence of the method, we note that f(ξ) = 0, so that Taylor's expansion about x_n gives
f(x_n) + (ξ - x_n)f'(x_n) + (1/2)(ξ - x_n)^2 f''(x_n) + ... = 0.
2. e^x tan^{-1} x - 3/2 = 0
3. x^2 + cos x - x e^x = 0
correct to 3 places of decimals using (a) the fixed-point iteration method and (b) the Newton-Raphson method.
Most of the root-finding methods considered so far perform poorly when the root being sought is a multiple root. We now investigate this for the Newton-Raphson method, viewed as a fixed-point method with f(x) satisfying (1.5). Thus
x_{n+1} = g(x_n),   g(x) = x - f(x)/f'(x),   x ≠ ξ.
Using (1.5), the expression for g(x) is obtained as
g(x) = x - (x - ξ)^p h(x) / ((x - ξ)^{p-1} [p h(x) + (x - ξ)h'(x)]) = x - (x - ξ)h(x)/(p h(x) + (x - ξ)h'(x)).
One then obtains
g'(x) = 1 - h(x)/(p h(x) + (x - ξ)h'(x)) - (x - ξ) d/dx [h(x)/(p h(x) + (x - ξ)h'(x))],
and g'(ξ) = 1 - 1/p ≠ 0 for p > 1. This shows that, at a root of multiplicity p, the Newton-Raphson method is a linear method with rate of convergence (p - 1)/p.
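The prediction g'(ξ) = 1 - 1/p can be checked numerically. The sketch below (my own example, not from the notes) applies plain Newton steps to f(x) = (x - 1)^2 (x + 2), which has a root of multiplicity p = 2 at x = 1; successive errors should shrink by roughly (p - 1)/p = 1/2:

```python
f = lambda x: (x - 1) ** 2 * (x + 2)
fp = lambda x: 2 * (x - 1) * (x + 2) + (x - 1) ** 2  # product rule

x = 2.0
errors = []
for _ in range(20):
    x = x - f(x) / fp(x)       # one Newton-Raphson step
    errors.append(abs(x - 1))  # distance to the double root

ratio = errors[-1] / errors[-2]  # estimates g'(xi) = (p - 1)/p = 0.5
```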
Example 5
Determine the multiplicity of the root of the equation
1.5 Systems of Nonlinear Equations
Consider the system
f(x, y) = 0    (1.6)
g(x, y) = 0
To apply fixed-point iteration, we rewrite the system in the form
x = F(x, y)    (1.7)
y = G(x, y)
and, provided that
|∂F/∂x| + |∂G/∂x| < 1 and |∂F/∂y| + |∂G/∂y| < 1
in a neighbourhood of the root, compute successively
x_1 = F(x_0, y_0),  y_1 = G(x_1, y_0),
x_2 = F(x_1, y_1),  y_2 = G(x_2, y_1),
x_3 = F(x_2, y_2),  y_3 = G(x_3, y_2),
...
Iterate to convergence.
Example 6
Find the root of the system of equations
x^2 + y^2 - 1 = 0    (1.8)
x^2/2 + 4y^2 - 1 = 0    (1.9)
correct to 3 places of decimals using the initial value (x_0, y_0) = (1, 1). Transforming the equations leads to the iteration scheme
x_{n+1} = √(1 - y_n^2),   y_{n+1} = √(1/4 - x_{n+1}^2/8).
Thus we obtain the iterations
x1 = 0, y1 = 0.5
x2 = 0.866025404, y2 = 0.3952847075
x3 = 0.9185586535, y3 = 0.3801726581
x4 = 0.9249155367, y4 = 0.3782412011
x5 = 0.9257070778, y5 = 0.3779990751
x6 = 0.9258059728, y6 = 0.3779687984
We have then achieved 3 places of decimals accuracy; that is, x ≈ 0.926, y ≈ 0.378.
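The scheme of Example 6 can be sketched as follows (function name is my own); note that each sweep uses the newly computed x_{n+1} when updating y:

```python
import math

def iterate(x, y, sweeps):
    """Fixed-point sweeps for x^2 + y^2 = 1 and x^2/2 + 4y^2 = 1."""
    for _ in range(sweeps):
        x = math.sqrt(1 - y**2)          # x_{n+1} from (1.8)
        y = math.sqrt(0.25 - x**2 / 8)   # y_{n+1} from (1.9), using x_{n+1}
    return x, y

x, y = iterate(1.0, 1.0, 20)  # starts from (x0, y0) = (1, 1)
```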
Suppose (x_0, y_0) is an initial approximation to the root of the system, and (x_0 + h, y_0 + k) is the exact root; then we must have
f (x0 + h, y0 + k) = 0, g(x0 + h, y0 + k) = 0.
If f and g are sufficiently differentiable, we can expand them in Taylor series as
f_0 + h (∂f/∂x)_0 + k (∂f/∂y)_0 + ... = 0    (1.10)
g_0 + h (∂g/∂x)_0 + k (∂g/∂y)_0 + ... = 0
where
f_0 = f(x_0, y_0),  g_0 = g(x_0, y_0),  and (∂f/∂x)_0, (∂f/∂y)_0, ... denote the partial derivatives evaluated at (x_0, y_0).
Neglecting the second and higher order terms, we obtain the system of linear
equations
h (∂f/∂x)_0 + k (∂f/∂y)_0 = -f_0    (1.11)
h (∂g/∂x)_0 + k (∂g/∂y)_0 = -g_0
The system (1.11) can be solved for h and k to obtain the new approximation
x_1 = x_0 + h,   y_1 = y_0 + k.
Example
Solve the system
x^2 - y^2 = 4
x^2 + y^2 = 16
Solution:
This system has the exact solutions x = √10 = 3.162 and y = √6 = 2.449. To apply the Newton-Raphson method, we replace the hyperbola by its asymptote y = x to obtain the initial values x_0 = y_0 = 2√2 = 2.828. In general, such initial values are obtained from a rough graph.
Taking f = x^2 - y^2 - 4 and g = x^2 + y^2 - 16, we have
∂f/∂x = 2x,  ∂f/∂y = -2y,  ∂g/∂x = 2x,  ∂g/∂y = 2y,
and f_0 = x_0^2 - y_0^2 - 4 = -4,  g_0 = x_0^2 + y_0^2 - 16 = 0.
Also (∂f/∂x)_0 = 2x_0 = 4√2, (∂f/∂y)_0 = -2y_0 = -4√2, (∂g/∂x)_0 = 4√2, (∂g/∂y)_0 = 4√2. Thus one obtains the equations
h - k = 0.707
h + k = 0
so that h = 0.354, k = -0.354, giving x_1 = 3.182, y_1 = 2.475.
To solve the system
f(x, y) = x^2 + y^2 - 1 = 0
g(x, y) = x^2/2 + 4y^2 - 1 = 0
subject to the initial approximation (x_0, y_0) = (1, 1), we replace f'(x_n) in the Newton-Raphson iteration scheme by the Jacobian J(X_n) of the system. The scheme becomes
X_{n+1} = X_n - J^{-1}(X_n) F(X_n)    (1.12)
where X_n = (x_n, y_n)^T, F(X_n) = (f(x_n, y_n), g(x_n, y_n))^T,
J(X_n) = [ ∂f/∂x  ∂f/∂y ]
         [ ∂g/∂x  ∂g/∂y ]
evaluated at (x_n, y_n), and J^{-1}(X_n) is the inverse of
J(X_n). In component form, the iteration scheme reads
(x_{n+1}, y_{n+1})^T = (x_n, y_n)^T - J^{-1}(X_n)(f(x_n, y_n), g(x_n, y_n))^T.    (1.13)
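A sketch of scheme (1.12) for the 2x2 case (all names are mine), solving J (h, k)^T = -(f, g)^T by Cramer's rule at each step, applied to the circle/ellipse system above from (1, 1):

```python
def newton2(f, g, jac, x, y, steps=10):
    """Newton-Raphson for two equations: solve J * (h, k)^T = -(f, g)^T."""
    for _ in range(steps):
        fx, fy, gx, gy = jac(x, y)          # Jacobian entries at (x, y)
        det = fx * gy - fy * gx
        fv, gv = f(x, y), g(x, y)
        h = (-fv * gy + fy * gv) / det      # Cramer's rule for the 2x2 system
        k = (-fx * gv + fv * gx) / det
        x, y = x + h, y + k
    return x, y

f = lambda x, y: x**2 + y**2 - 1
g = lambda x, y: x**2 / 2 + 4 * y**2 - 1
jac = lambda x, y: (2 * x, 2 * y, x, 8 * y)
x, y = newton2(f, g, jac, 1.0, 1.0)
```

This reproduces the fixed-point answer x ≈ 0.926, y ≈ 0.378 of Example 6, but with quadratic convergence.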
2. Use the fixed-point iteration method to find a root of each of the following equations correct to 4 s.f.
(a) cos x = 3x - 1    (b) x = 1/(1 + x)^2
Chapter 2
Interpolation
Given a function f(x) defined on the interval x_0 ≤ x ≤ x_n, and assuming that f(x) is single-valued, continuous and known explicitly, the values of f(x) corresponding to certain given values of x, say x_0, x_1, x_2, ..., x_n, can easily be computed and tabulated. However, a major problem of numerical analysis is the converse: given the set of tabular values (x_0, y_0), (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) satisfying the relation y = f(x), where the explicit nature of f(x) is unknown, it is required to find a simpler function φ(x) such that f(x) and φ(x) agree at the set of tabulated points. Such a process is known as interpolation. If φ(x) is a polynomial, then the process is called polynomial interpolation and φ(x) is called the interpolating polynomial.
The methods for the solution of these problems are based on the concept of the differences of a function, which we now define. Given tabulated values y_0, y_1, ..., y_n, the first forward differences are
y_1 - y_0,  y_2 - y_1,  y_3 - y_2,  ...,  y_n - y_{n-1},
denoted by
Δy_0 = y_1 - y_0,  Δy_1 = y_2 - y_1,  Δy_2 = y_3 - y_2,  ...,  Δy_{n-1} = y_n - y_{n-1}.
Higher-order differences are defined recursively; for example,
Δ^2 y_0 = Δ(Δy_0) = Δy_1 - Δy_0 = y_2 - 2y_1 + y_0,
Δ^3 y_0 = Δ^2 y_1 - Δ^2 y_0 = y_3 - 3y_2 + 3y_1 - y_0.
Similarly for still higher orders. Note that the coefficients in the above expansions are binomial coefficients.
x     y     Δ      Δ^2      Δ^3      Δ^4      Δ^5      Δ^6
x_0   y_0   Δy_0
x_1   y_1   Δy_1   Δ^2y_0   Δ^3y_0
x_2   y_2   Δy_2   Δ^2y_1   Δ^3y_1   Δ^4y_0
x_3   y_3   Δy_3   Δ^2y_2   Δ^3y_2   Δ^4y_1   Δ^5y_0
x_4   y_4   Δy_4   Δ^2y_3   Δ^3y_3   Δ^4y_2   Δ^5y_1   Δ^6y_0
x_5   y_5   Δy_5   Δ^2y_4   Δ^3y_4   Δ^4y_3
x_6   y_6   Δy_6   Δ^2y_5
The backward differences are defined by ∇y_i = y_i - y_{i-1}, so that, for example,
∇^2 y_2 = ∇y_2 - ∇y_1 = y_2 - 2y_1 + y_0,
∇^3 y_3 = ∇^2 y_3 - ∇^2 y_2 = y_3 - 3y_2 + 3y_1 - y_0,
and so forth. The central differences are defined by
δy_{1/2} = y_1 - y_0,  δy_{3/2} = y_2 - y_1,  ...,  δy_{n-1/2} = y_n - y_{n-1}.
Higher-order central differences can be obtained in a similar manner. Using the values above, the backward and central difference tables can be formed thus:
x     y     ∇      ∇^2      ∇^3      ∇^4      ∇^5      ∇^6
x_0   y_0
x_1   y_1   ∇y_1
x_2   y_2   ∇y_2   ∇^2y_2
x_3   y_3   ∇y_3   ∇^2y_3   ∇^3y_3
x_4   y_4   ∇y_4   ∇^2y_4   ∇^3y_4   ∇^4y_4
x_5   y_5   ∇y_5   ∇^2y_5   ∇^3y_5   ∇^4y_5   ∇^5y_5
x_6   y_6   ∇y_6   ∇^2y_6   ∇^3y_6   ∇^4y_6   ∇^5y_6   ∇^6y_6
x     y     δ          δ^2      δ^3          δ^4      δ^5          δ^6
x_0   y_0   δy_{1/2}
x_1   y_1   δy_{3/2}   δ^2y_1   δ^3y_{3/2}
x_2   y_2   δy_{5/2}   δ^2y_2   δ^3y_{5/2}   δ^4y_2   δ^5y_{5/2}
x_3   y_3   δy_{7/2}   δ^2y_3   δ^3y_{7/2}   δ^4y_3   δ^5y_{7/2}   δ^6y_3
x_4   y_4   δy_{9/2}   δ^2y_4   δ^3y_{9/2}   δ^4y_4
x_5   y_5   δy_{11/2}  δ^2y_5
x_6   y_6
Comparing the three tables, note that
Δy_0 = ∇y_1 = δy_{1/2},
Δ^3 y_2 = ∇^3 y_5 = δ^3 y_{7/2},
and so on.
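Building a difference table is a simple repeated-subtraction loop. A sketch (names are mine); since the data are equally spaced, the same columns serve for Δ, ∇ and δ, merely read along different diagonals:

```python
def difference_columns(y):
    """Return the columns [first differences, second differences, ...]."""
    cols, col = [], list(y)
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        cols.append(col)
    return cols

# Sample data y = 1, 0, 1, 10 (a cubic): third differences are constant.
cols = difference_columns([1, 0, 1, 10])
```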
2.1.4 Symbolic Relations
Let E denote the shift operator, defined by
E y_i = y_{i+1},
and in general
E^n y_i = y_{i+n}.    (2.1)
The difference operators can be expressed in terms of E. For the backward difference,
∇y_0 = y_0 - E^{-1} y_0 = (1 - E^{-1}) y_0,
so that ∇ ≡ 1 - E^{-1}. Also,
δy_0 = E^{1/2} y_0 - E^{-1/2} y_0 = (E^{1/2} - E^{-1/2}) y_0,
so that δ ≡ E^{1/2} - E^{-1/2}. Hence
δ ≡ ∇E^{1/2}.
2.1.5 Detecting Errors by Use of Difference Tables
Errors in tabulated values can easily be detected using difference tables. For example, if we introduce an error e into a table of values that are otherwise zero, we can see how the error propagates fanwise into the higher differences and is also magnified. The following characteristics of error propagation can be deduced from the table that follows:
(ii) The errors in any one column are the binomial coefficients with alternating signs.
(iii) The algebraic sum of the errors in any difference column is zero.
(iv) The maximum error occurs opposite the function value containing the error.
These indicators can be used to detect errors in difference tables, as can be seen from the table below.
(If y = a_0 x^n + a_1 x^{n-1} + ... + a_n, then Δy = a_0 n h x^{n-1} + ..., a polynomial of degree n - 1; hence the nth difference of a polynomial of degree n is constant.)
y:     0    0    0    0    e    0    0    0    0    0
Δ:       0    0    0    e   -e    0    0    0    0
Δ^2:       0    0    e  -2e    e    0    0    0
Δ^3:         0    e  -3e   3e   -e    0    0
Δ^4:           e  -4e   6e  -4e    e    0
Δ^5:            -5e  10e -10e   5e   -e
For example, if y = ax^2 + bx + c, then
Δy = a(x + h)^2 + b(x + h) + c - (ax^2 + bx + c) = (2ax + ah + b)h,
Δ^2 y = 2ah^2,
which is constant.
2.2.1 Newton's Interpolation Formulae
Given the set of n + 1 values (x_0, y_0), (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) of x and y, it is required to find y_n(x), a polynomial of degree n, such that y and y_n(x) agree at the tabulated points. Suppose the values of x are equidistant, with x_i = x_0 + ih, i = 0, 1, 2, ..., n. Since y_n(x) is a polynomial of degree n, it may be written as
y_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + ... + a_n(x - x_0)(x - x_1)...(x - x_{n-1}).    (2.4)
Imposing the condition that y and y_n(x) agree at the tabulated points gives
a_0 = y_0,   y_1 = a_1(x_1 - x_0) + y_0, so a_1 = (y_1 - y_0)/(x_1 - x_0) = Δy_0/h.
Similarly,
a_2 = Δ^2 y_0/(2! h^2),  a_3 = Δ^3 y_0/(3! h^3),  a_4 = Δ^4 y_0/(4! h^4),  ...,  a_n = Δ^n y_0/(n! h^n).
Setting x = x_0 + ph and substituting for a_0, a_1, a_2, ..., a_n in (2.4) above gives
y_n(x) = y_0 + p Δy_0 + [p(p - 1)/2!] Δ^2 y_0 + ... + [p(p - 1)...(p - n + 1)/n!] Δ^n y_0.    (2.5)
This is called Newtons forward interpolation formula and it is useful for interpo-
lation near the beginning of a set of tabular values.
The error committed in replacing the function y(x) by the polynomial y_n(x) can be obtained as
y(x) - y_n(x) = [(x - x_0)(x - x_1)...(x - x_n)/(n + 1)!] y^{(n+1)}(ξ)    (2.6)
for x_0 < ξ < x_n. However, (2.6) is not useful in practice, since we have no information concerning y^{(n+1)}(x). In any case, if y^{(n+1)}(x) does not vary rapidly in the interval, a useful estimate of the derivative can be obtained thus. The Taylor series expansion of y(x + h) is
y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + ...,
so that, to first order,
y'(x) ≈ (1/h)[y(x + h) - y(x)] = (1/h) Δy(x).
Thus y^{(n+1)}(x) ≈ (1/h^{n+1}) Δ^{n+1} y(x), and equation (2.6) can then be written as
y(x) - y_n(x) ≈ [p(p - 1)(p - 2)...(p - n)/(n + 1)!] Δ^{n+1} y_0.
If instead we write
y_n(x) = a_0 + a_1(x - x_n) + a_2(x - x_n)(x - x_{n-1}) + ... + a_n(x - x_n)(x - x_{n-1})...(x - x_1)
and then impose the condition that y(x) and y_n(x) agree at the tabulated points x_n, x_{n-1}, x_{n-2}, ..., x_2, x_1, x_0, we obtain, in a similar manner as above,
y_n(x) = y_n + p ∇y_n + [p(p + 1)/2!] ∇^2 y_n + ... + [p(p + 1)...(p + n - 1)/n!] ∇^n y_n    (2.8)
where p = (x - x_n)/h. This is called Newton's backward difference interpolation formula; it uses tabular values to the left of y_n, and is useful for interpolation near the end of a set of tabulated values. The error can be estimated in the same manner as for the forward formula.

Example
Find Newton's forward interpolating polynomial for the data below, and hence obtain y(4).
x    y    Δ     Δ^2    Δ^3
0    1
1    0    -1
2    1    1     2
3    10   9     8      6
Here h = 1 (from the forward difference table). Using x = x_0 + ph and choosing x_0 = 0, one obtains x = 0 + p, i.e. p = x. Thus substituting in (2.5), one obtains
y(x) = 1 + x(-1) + [x(x - 1)/2!](2) + [x(x - 1)(x - 2)/3!](6) = x^3 - 2x^2 + 1,
which is the polynomial from which the tabulated values above were obtained. The value y(4) = 4^3 - 2(4^2) + 1 = 33 is obtained by substituting x = 4 in this polynomial. Note that this process of obtaining the value of y for some value of x outside the given range is called extrapolation; it also demonstrates that if a tabulated function is a polynomial, then interpolation and extrapolation yield exact values.
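Formula (2.5) can be evaluated by accumulating the factors p, (p - 1)/2, ... while walking down the leading diagonal of the difference table. A sketch (names are mine), reproducing y(4) = 33 for the data above:

```python
def newton_forward(xs, ys, x):
    """Evaluate Newton's forward interpolation formula for equally spaced xs."""
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # leading entries Δ^k y_0 of the forward difference table
    deltas, col = [], list(ys)
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        deltas.append(col[0])
    total, coeff = ys[0], 1.0
    for k, d in enumerate(deltas, start=1):
        coeff *= (p - (k - 1)) / k     # builds p(p-1)...(p-k+1)/k!
        total += coeff * d
    return total

y4 = newton_forward([0, 1, 2, 3], [1, 0, 1, 10], 4)   # extrapolation to x = 4
```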
Example 8
The table below gives the values of tan x for 0.10 ≤ x ≤ 0.30. Find (i) tan(0.12) and (ii) tan(0.26).
x      y        Δ        Δ^2      Δ^3      Δ^4
0.10   0.1003
0.15   0.1511   0.0508
0.20   0.2027   0.0516   0.0008
0.25   0.2553   0.0526   0.0010   0.0002
0.30   0.3093   0.0540   0.0014   0.0004   0.0002
(i) Taking x = x_0 + ph with x_0 = 0.10 and h = 0.05, to find tan(0.12) we have 0.12 = 0.10 + 0.05p, so p = 0.4, and using (2.5) yields
tan(0.12) = 0.1003 + 0.4(0.0508) + [0.4(0.4 - 1)/2](0.0008) + [0.4(0.4 - 1)(0.4 - 2)/6](0.0002) + [0.4(0.4 - 1)(0.4 - 2)(0.4 - 3)/24](0.0002) = 0.1205.
(ii) To obtain tan(0.26), we have 0.26 = 0.30 + p(0.05), so p = -0.8. We now use Newton's backward interpolation formula (2.8), since we are estimating a value near the end of the data. Hence one obtains
tan(0.26) = 0.3093 - 0.8(0.0540) + [(-0.8)(-0.8 + 1)/2](0.0014) + [(-0.8)(-0.8 + 1)(-0.8 + 2)/6](0.0004) + [(-0.8)(-0.8 + 1)(-0.8 + 2)(-0.8 + 3)/24](0.0002)
= 0.2660.
Let y(x) be continuous and differentiable (n + 1) times in the interval (a, b). Given the n + 1 points (x_0, y_0), (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), where the values of x need not be equally spaced, we wish to find a polynomial of degree n, say φ_n(x), such that
φ_n(x_i) = y(x_i) = y_i,   i = 0, 1, 2, ..., n.    (2.10)
Let
φ_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n    (2.11)
be the desired polynomial. Substituting the condition (2.10) into (2.11), we obtain the system of equations
y_0 = a_0 + a_1 x_0 + a_2 x_0^2 + ... + a_n x_0^n
y_1 = a_0 + a_1 x_1 + a_2 x_1^2 + ... + a_n x_1^n
...
y_n = a_0 + a_1 x_n + a_2 x_n^2 + ... + a_n x_n^n    (2.12)
Eliminating a_0, a_1, a_2, ..., a_n from equations (2.11) and (2.12), we obtain
| φ_n(x)  1  x    x^2    ...  x^n   |
| y_0     1  x_0  x_0^2  ...  x_0^n |
| y_1     1  x_1  x_1^2  ...  x_1^n |  = 0    (2.14)
| ...                               |
| y_n     1  x_n  x_n^2  ...  x_n^n |
Expanding this determinant expresses φ_n(x) as a linear combination of the y_i,
φ_n(x) = Σ_{i=0}^{n} L_i(x) y_i,    (2.15)
where each L_i(x) is a polynomial of degree n satisfying L_i(x_i) = 1 and L_i(x_j) = 0 for j ≠ i. Writing
Π(x) = (x - x_0)(x - x_1)...(x - x_n),
we may express the Lagrange coefficient function in the more compact form
L_i(x) = Π(x) / [(x - x_i) Π'(x_i)].
Hence (2.15) yields
φ_n(x) = Σ_{i=0}^{n} [Π(x) / ((x - x_i) Π'(x_i))] y_i,    (2.18)
which is known as Lagrange's interpolation formula. The L_i(x) are called Lagrange's interpolation coefficients. Interchanging x and y in (2.18) yields the formula
φ_n(y) = Σ_{i=0}^{n} [Π(y) / ((y - y_i) Π'(y_i))] x_i,    (2.19)
which is useful for inverse interpolation. It is important to note that the Lagrange interpolation polynomial is unique. From (2.18) with n = 1, we obtain
φ_1(x) = L_0(x) y_0 + L_1(x) y_1,
where
L_0(x) = (x - x_0)(x - x_1) / [(x - x_0)(x_0 - x_1)] = (x - x_1)/(x_0 - x_1),
L_1(x) = (x - x_0)(x - x_1) / [(x - x_1)(x_1 - x_0)] = (x - x_0)/(x_1 - x_0),
so that
φ_1(x) = [(x - x_1)/(x_0 - x_1)] y_0 + [(x - x_0)/(x_1 - x_0)] y_1.
This is the linear interpolation formula. For quadratic interpolation (n = 2), we have similarly
φ_2(x) = L_0(x) y_0 + L_1(x) y_1 + L_2(x) y_2,
where
L_0(x) = (x - x_1)(x - x_2) / [(x_0 - x_1)(x_0 - x_2)],
L_1(x) = (x - x_0)(x - x_2) / [(x_1 - x_0)(x_1 - x_2)],
L_2(x) = (x - x_0)(x - x_1) / [(x_2 - x_0)(x_2 - x_1)].
Example 9
The table below gives two values of y = ln x.
x          9.0      9.5
y = ln x   2.1972   2.2513
Use linear interpolation to find the value of y(9.2) correct to 4 places of decimals, and determine the error in the estimate.
Solution:
Here, x_0 = 9.0, x_1 = 9.5, y_0 = 2.1972, y_1 = 2.2513.
φ_1(x) = L_0(x) y_0 + L_1(x) y_1 = [(x - x_1)/(x_0 - x_1)] y_0 + [(x - x_0)/(x_1 - x_0)] y_1
= [(x - 9.5)/(-0.5)](2.1972) + [(x - 9.0)/(0.5)](2.2513),
φ_1(9.2) = [(9.2 - 9.5)/(-0.5)](2.1972) + [(9.2 - 9.0)/(0.5)](2.2513) = 0.6(2.1972) + 0.4(2.2513) = 2.2188.
Since ln(9.2) = 2.2192, the error = 2.2192 - 2.2188 = 0.0004. Thus linear interpolation is not sufficient for 4 d.p. accuracy, though it would suffice for 3 d.p. accuracy.
Example 10
Obtain the value of ln(9.2) from the quadratic interpolation polynomial based on the data below:
x          9.0      9.5      11.0
y = ln x   2.1972   2.2513   2.3979
Solution:
L_0(x) = (x - 9.5)(x - 11.0) / [(9.0 - 9.5)(9.0 - 11.0)] = x^2 - 20.5x + 104.5,   L_0(9.2) = 0.5400
L_1(x) = (x - 9.0)(x - 11.0) / [(9.5 - 9.0)(9.5 - 11.0)] = -(4/3)(x^2 - 20x + 99),   L_1(9.2) = 0.4800
L_2(x) = (x - 9.0)(x - 9.5) / [(11.0 - 9.0)(11.0 - 9.5)] = (1/3)(x^2 - 18.5x + 85.5),   L_2(9.2) = -0.0200.
φ_2(9.2) = ln(9.2) ≈ 0.5400(2.1972) + 0.4800(2.2513) - 0.0200(2.3979) = 2.2192.
Example 11
Find the Lagrange interpolating polynomial for the data
x    0    1    2    3
y    1    0    1    10
Solution:
L_0(x) y_0 = [(x - x_1)(x - x_2)(x - x_3) / ((x_0 - x_1)(x_0 - x_2)(x_0 - x_3))] y_0 = -(1/6)(x - 1)(x - 2)(x - 3)
L_1(x) y_1 = [(x - x_0)(x - x_2)(x - x_3) / ((x_1 - x_0)(x_1 - x_2)(x_1 - x_3))] y_1 = 0
L_2(x) y_2 = [(x - x_0)(x - x_1)(x - x_3) / ((x_2 - x_0)(x_2 - x_1)(x_2 - x_3))] y_2 = -(1/2)x(x - 1)(x - 3)
L_3(x) y_3 = [(x - x_0)(x - x_1)(x - x_2) / ((x_3 - x_0)(x_3 - x_1)(x_3 - x_2))] y_3 = (5/3)x(x - 1)(x - 2).
φ_3(x) = L_0(x) y_0 + L_1(x) y_1 + L_2(x) y_2 + L_3(x) y_3
= -(1/6)(x - 1)(x - 2)(x - 3) + 0 - (1/2)x(x - 1)(x - 3) + (5/3)x(x - 1)(x - 2) = (x - 1)(x^2 - x - 1),
so that φ_3(x) = x^3 - 2x^2 + 1.
This is the same polynomial we obtained earlier using Newton's forward interpolation.
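Formula (2.18) translates directly into two nested loops (names are mine). Checking it against Example 10's value ln(9.2) ≈ 2.2192:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # L_i(x_j) = 0 for j != i, L_i(x_i) = 1
        total += Li * yi
    return total

value = lagrange([9.0, 9.5, 11.0], [2.1972, 2.2513, 2.3979], 9.2)
```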
Let the set of data points be (x_i, y_i), i = 1, 2, ..., m, and let the curve given by y = f(x) be fitted to this data. At x = x_i, the experimental (or observed) value of the ordinate is y_i and the corresponding value on the fitted curve is f(x_i). If e_i is the error of approximation at x = x_i, then we have
e_i = y_i - f(x_i).    (2.22)
Now, if we let
S = [y_1 - f(x_1)]^2 + [y_2 - f(x_2)]^2 + ... + [y_m - f(x_m)]^2,    (2.23)
then the method of least squares consists of minimising S, that is, the sum of the squares of the errors.
Let y = a_0 + a_1 x be the straight line to be fitted to the given data. Then, from equation (2.23), setting the partial derivatives of S to zero gives
∂S/∂a_0 = -2[y_1 - (a_0 + a_1 x_1)] - 2[y_2 - (a_0 + a_1 x_2)] - ... - 2[y_m - (a_0 + a_1 x_m)] = 0
and
∂S/∂a_1 = -2x_1[y_1 - (a_0 + a_1 x_1)] - 2x_2[y_2 - (a_0 + a_1 x_2)] - ... - 2x_m[y_m - (a_0 + a_1 x_m)] = 0,
which simplify to
m a_0 + a_1(x_1 + x_2 + ... + x_m) = y_1 + y_2 + ... + y_m
and
a_0(x_1 + x_2 + ... + x_m) + a_1(x_1^2 + ... + x_m^2) = x_1 y_1 + x_2 y_2 + ... + x_m y_m.    (2.25)
Solving these gives
a_0 = [(Σy_i)(Σx_i^2) - (Σx_i)(Σx_i y_i)] / [m Σx_i^2 - (Σx_i)^2]
and
a_1 = [m Σx_i y_i - (Σx_i)(Σy_i)] / [m Σx_i^2 - (Σx_i)^2].
Since the x_i and y_i are known quantities, the equations (2.25) are called the normal equations, and can be solved for the unknowns a_0 and a_1. We also note that ∂^2S/∂a_0^2 and ∂^2S/∂a_1^2 are both positive at the point (a_0, a_1); thus these values provide a minimum of S.
Now, dividing the first of equations (2.25) by m yields
a_0 + a_1 x̄ = ȳ    (2.26)
where (x̄, ȳ) is the centroid of the given data points. Hence the fitted straight line passes through the centroid of the data points.
Example 12
The table below gives the temperatures T (in °C) and lengths l (in mm) of a heated rod. If l = a + bT, find the best values of a and b to fit the data of the table below.
T 20 30 40 50 60 70
l 800.3 800.4 800.6 800.7 800.9 801.0
Solution:
T     l        T^2     lT
20    800.3    400     16006
30    800.4    900     24012
40    800.6    1600    32024
50    800.7    2500    40035
60    800.9    3600    48054
70    801.0    4900    56070
270   4803.9   13900   216201
The normal equations (2.25) become
6a + 270b = 4803.9
270a + 13900b = 216201.
Solving gives b = 25.5/1750 ≈ 0.0146 and a ≈ 799.99, so the fitted line is l = 799.99 + 0.0146T.
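The normal equations (2.25) solve in closed form, so the fit is just a few sums. A sketch (names are mine) on the rod data of Example 12:

```python
def fit_line(xs, ys):
    """Least-squares line y = a0 + a1*x via the normal equations."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (m * sxy - sx * sy) / (m * sxx - sx**2)
    a0 = (sy - a1 * sx) / m          # equivalent to the centroid relation (2.26)
    return a0, a1

T = [20, 30, 40, 50, 60, 70]
l = [800.3, 800.4, 800.6, 800.7, 800.9, 801.0]
a, b = fit_line(T, l)   # l ≈ a + b*T
```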
A nonlinear function of the form y = a x^c can also be fitted by least squares: taking logarithms of both sides yields log y = log a + c log x, which is of the linear form
Y = a_0 + a_1 X    (2.27)
with Y = log y, X = log x, a_0 = log a and a_1 = c.
y = a_0 + a_1 x + a_2 x^2    (2.28)
The least squares parabola approximating the set of points (x_i, y_i), i = 1, 2, ..., n, has the equation (2.28). The constants a_0, a_1 and a_2 are determined by solving simultaneously the equations
Σy = n a_0 + a_1 Σx + a_2 Σx^2
Σxy = a_0 Σx + a_1 Σx^2 + a_2 Σx^3    (2.29)
Σx^2 y = a_0 Σx^2 + a_1 Σx^3 + a_2 Σx^4
The equations (2.29) are called the normal equations for the least squares parabola
(2.28). This parabola is said to be the regression curve of y on x, since an estimate
of y is obtained for any given value of x. Similarly, we can have a regression
equation of x on y.
Example 13
Fit a curve of the form Y = αe^{βx} to the data below.
x 1 2 3 4
Y 7 11 17 27
Solution:
Taking the natural logarithms of both sides leads to
ln Y = ln α + βx.
Fitting a straight line to (x, ln Y):
x       1      2      3      4
ln Y    1.95   2.40   2.83   3.30
one obtains α = 4.48 and β = 0.45. Thus the curve becomes
Y = 4.48 e^{0.45x}.
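The linearisation of Example 13 can be sketched as below (names are mine). Working with full-precision logarithms rather than the 2 d.p. values of the table gives α ≈ 4.47 and β ≈ 0.449, close to the note's rounded answer:

```python
import math

def fit_exponential(xs, ys):
    """Fit Y = alpha * exp(beta * x) by least squares on (x, ln Y)."""
    logs = [math.log(y) for y in ys]
    m = len(xs)
    sx, sy = sum(xs), sum(logs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, logs))
    beta = (m * sxy - sx * sy) / (m * sxx - sx**2)
    alpha = math.exp((sy - beta * sx) / m)
    return alpha, beta

alpha, beta = fit_exponential([1, 2, 3, 4], [7, 11, 17, 27])
```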
Exercise
Obtain a quadratic curve which fits the data in the table below, using the least
squares method.
Chapter 3
Numerical Differentiation and Integration
3.1 Introduction
In Chapter 2, we considered the problem of interpolation: given a set of tabulated values (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) of x and y, can we find a polynomial y_n(x) of the lowest degree such that y(x) and y_n(x) agree at the set of tabulated points? In this chapter, we are concerned with numerical differentiation and integration; this involves deriving formulae for derivatives and integrals from a given set of tabulated values of x and y in the interval [x_0, x_n].
Differentiating Newton's forward difference formula (2.5) term by term, where
(p choose r) = p!/[(p - r)! r!],
leads to the following results. Using only the first terms of the formulae, one obtains the forward finite difference approximations to the derivatives:
y'(x) ≈ (1/h) Δy_0 = [y(x + h) - y(x)]/h
y''(x) ≈ (1/h^2) Δ^2 y_0 = [y(x + 2h) - 2y(x + h) + y(x)]/h^2
y'''(x) ≈ (1/h^3) Δ^3 y_0 = [y(x + 3h) - 3y(x + 2h) + 3y(x + h) - y(x)]/h^3
and so on.
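The leading-term approximations above can be sketched as follows (names are mine; tested on y = e^x at x = 1, where every derivative equals e):

```python
import math

def d1(y, x, h):
    """Forward-difference estimate of y'(x), error O(h)."""
    return (y(x + h) - y(x)) / h

def d2(y, x, h):
    """Forward-difference estimate of y''(x), error O(h)."""
    return (y(x + 2 * h) - 2 * y(x + h) + y(x)) / h**2

first = d1(math.exp, 1.0, 1e-5)
second = d2(math.exp, 1.0, 1e-3)
```

Note the trade-off in h: too large and the truncation error O(h) dominates; too small and floating-point cancellation takes over.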
In a similar vein, we obtain other formulae using Newton's backward difference interpolation formula (2.8).
Solution:
We construct the difference table for the tabulated values as follows
x     y        Δ        Δ^2      Δ^3      Δ^4      Δ^5      Δ^6
1.0 2.7183
1.2 3.3201 0.6018
1.4 4.0552 0.7351 0.1333
1.6 4.9530 0.8978 0.1627 0.0294
1.8 6.0496 1.0966 0.1988 0.0361 0.0067
2.0 7.3891 1.3395 0.2429 0.0441 0.0080 0.0013
2.2 9.0250 1.6359 0.2964 0.0535 0.0096 0.0014 0.0001
Approximating y(x) by Newton's forward difference interpolation formula leads to the integral
I = ∫_{x_0}^{x_n} [y_0 + p Δy_0 + (p(p - 1)/2!) Δ^2 y_0 + (p(p - 1)(p - 2)/3!) Δ^3 y_0 + ...] dx.    (3.14)
Since x = x_0 + ph, dx = h dp, and so the integral (3.14) becomes
I = h ∫_0^n [y_0 + p Δy_0 + (p(p - 1)/2!) Δ^2 y_0 + (p(p - 1)(p - 2)/3!) Δ^3 y_0 + ...] dp,    (3.15)
which on simplification yields
∫_{x_0}^{x_n} y(x) dx = nh [y_0 + (n/2) Δy_0 + (n(2n - 3)/12) Δ^2 y_0 + (n(n - 2)^2/24) Δ^3 y_0 + ...].    (3.16)
Equation (3.16) is a general formula from which one can obtain different integra-
tion formulae by putting n = 1, 2, 3, .
We derive from this general formula a few important numerical integration
formulae such as the trapezoidal and Simpsons rules.
Putting n = 1 into (3.16), all differences higher than the first become zero and one obtains
∫_{x_0}^{x_1} y(x) dx = h [y_0 + (1/2) Δy_0] = (h/2)[y_0 + y_1].
Similarly, we deduce for the remaining intervals [x_1, x_2], ..., [x_{n-1}, x_n]
∫_{x_1}^{x_2} y(x) dx = (h/2)[y_1 + y_2]
...
∫_{x_{n-1}}^{x_n} y(x) dx = (h/2)[y_{n-1} + y_n].
Combining these integrals, we obtain the composite trapezoidal rule
∫_{x_0}^{x_n} y(x) dx = (h/2)[y_0 + 2y_1 + 2y_2 + 2y_3 + ... + 2y_{n-1} + y_n].    (3.17)
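Rule (3.17) in code (names are mine), checked on ∫_0^1 dx/(1 + x^2) = π/4, the integral of Example 15:

```python
import math

def trapezoid(y, a, b, n):
    """Composite trapezoidal rule (3.17) with n equal subintervals."""
    h = (b - a) / n
    interior = sum(y(a + i * h) for i in range(1, n))
    return (h / 2) * (y(a) + 2 * interior + y(b))

approx = trapezoid(lambda x: 1 / (1 + x**2), 0.0, 1.0, 8)   # h = 0.125
```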
The error of the trapezoidal formula can be estimated as follows. Let y = f(x) be continuous and sufficiently differentiable in [x_0, x_n]. Expanding f(x) in a Taylor series about x = x_0, we get
∫_{x_0}^{x_1} f(x) dx = ∫_{x_0}^{x_1} [y_0 + (x - x_0) y_0' + ((x - x_0)^2/2!) y_0'' + ...] dx = h y_0 + (h^2/2) y_0' + (h^3/6) y_0'' + ...    (3.18)
In a similar vein,
(h/2)[y_0 + y_1] = (h/2)[y_0 + y_0 + h y_0' + (h^2/2) y_0'' + ...] = h y_0 + (h^2/2) y_0' + (h^3/4) y_0'' + ...    (3.19)
From (3.18) and (3.19) we obtain
∫_{x_0}^{x_1} f(x) dx - (h/2)[y_0 + y_1] = -(h^3/12) y_0'' + ...,    (3.20)
which is the error in the interval [x_0, x_1]. Continuing in a similar way, we obtain the errors in the remaining subintervals [x_1, x_2], [x_2, x_3], ..., [x_{n-1}, x_n].
Summing all these, one obtains the total error as
-(h^3/12)[y_0'' + y_1'' + y_2'' + ... + y_{n-1}''].    (3.21)
Taking y''(x̄) as the largest of the n quantities on the right-hand side of (3.21), we can bound the error by
-(h^3/12) n y''(x̄) = -((b - a)/12) h^2 y''(x̄),    (3.22)
where nh = b - a. Error bounds can now be obtained by taking the largest value of y''(x), say M_U, and the smallest value, M_L, in the interval of integration; hence
-K M_U ≤ error ≤ -K M_L,   where K = ((b - a)/12) h^2.
To obtain Simpson's rule, otherwise called Simpson's 1/3-rule, we need to divide the interval of integration a ≤ x ≤ b into an even number of subintervals, n = 2m. Simpson's 1/3-rule can easily be obtained from (3.16) by substituting n = 2. This leads to the relation
∫_{x_0}^{x_2} y(x) dx = 2h [y_0 + Δy_0 + (1/6) Δ^2 y_0] = (h/3)[y_0 + 4y_1 + y_2].    (3.23)
Similarly, for the remaining intervals [x_2, x_4], [x_4, x_6], ..., [x_{n-2}, x_n], one obtains
∫_{x_2}^{x_4} y(x) dx = (h/3)[y_2 + 4y_3 + y_4]
∫_{x_4}^{x_6} y(x) dx = (h/3)[y_4 + 4y_5 + y_6]
...
∫_{x_{n-2}}^{x_n} y(x) dx = (h/3)[y_{n-2} + 4y_{n-1} + y_n].
Summing the above integrals, we obtain the composite Simpson's 1/3-rule as
∫_{x_0}^{x_n} y(x) dx = (h/3)[y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + ... + 2y_{n-2} + 4y_{n-1} + y_n].    (3.24)
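Rule (3.24) as code (names are mine), checked on ∫_0^1 dx/(1 + x^2) = π/4; with n = 8 Simpson's rule already matches to several decimal places:

```python
import math

def simpson(y, a, b, n):
    """Composite Simpson's 1/3-rule (3.24); n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = y(a) + y(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * y(a + i * h)  # 4 at odd, 2 at even interior points
    return (h / 3) * total

approx = simpson(lambda x: 1 / (1 + x**2), 0.0, 1.0, 8)
```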
Simpson's 1/3-rule has the error term
-((b - a)/180) h^4 f^{(4)}(x̄).
Simpson's 3/8-rule, obtained by putting n = 3 in (3.16), has the error term
-(3/80) h^5 f^{(4)}(x̄)
per application, and it is much less accurate than Simpson's 1/3-rule; hence it is rarely used for computations.
Example 15
Evaluate I = ∫_0^1 dx/(1 + x^2), correct to 4 places of decimals, using the trapezoidal rule and Simpson's 1/3- and 3/8-rules with h = 0.5, 0.25 and 0.125. We have the following results:
(a) h = 0.5. The values are tabulated below:
x    0          0.5        1.0
y    1.00000    0.80000    0.50000
(b) h = 0.125. Simpson's 1/3-rule gives
I = (0.125/3)[1.50000 + 4(0.98452 + 0.87671 + 0.71910 + 0.56637) + 2(0.94118 + 0.80000 + 0.64000)] = 0.78538.
Simpson's 3/8-rule can now be applied to obtain
I = (0.375/8)[1.50000 + 3(0.98452 + 0.80000 + 0.56637) + 3(0.94118 + 0.71910) + 2(0.87671 + 0.64000)] = 0.77657.
Note that the exact solution correct to 5 places of decimals is I ≈ 0.78540.
Example 16
Evaluate the integral I = ∫_0^1 e^{-x^2} dx, using
(i) the trapezoidal rule with n = 5,
(ii) the trapezoidal rule with n = 10,
(iii) Simpson's 1/3-rule with n = 10,
(iv) Simpson's 3/8-rule with n = 10.
Solution:
(i) With n = 5, h = 0.2:
n    x_n    x_n^2    f(x_n) = e^{-x_n^2}
0    0      0        1.000000
1    0.2    0.04     0.960789
2    0.4    0.16     0.852144
3    0.6    0.36     0.697676
4    0.8    0.64     0.527292
5    1.0    1.00     0.367879
sums:                1.367879 (end values), 3.037901 (interior values)
I = (0.2/2)(1.367879 + 2 × 3.037901) = 0.744368.
(ii) With n = 10, h = 0.1:
n     x_n    x_n^2    f(x_n) = e^{-x_n^2}
0     0      0        1.000000
1     0.1    0.01     0.990050
2     0.2    0.04     0.960789
3     0.3    0.09     0.913931
4     0.4    0.16     0.852144
5     0.5    0.25     0.778801
6     0.6    0.36     0.697676
7     0.7    0.49     0.612626
8     0.8    0.64     0.527292
9     0.9    0.81     0.444858
10    1.0    1.00     0.367879
sums:                 1.367879 (end values), 6.778167 (interior values)
I = (0.1/2)(1.367879 + 2 × 6.778167) = 0.746211.
(iii) Simpson's 1/3-rule:
I = (0.1/3)(1.367879 + 4 × 3.740266 + 2 × 3.037901) = 0.746825.
(iv) Simpson's 3/8-rule:
I = (0.3/8)(1.367879 + 3 × 2.454820 + 3 × 2.266882 + 2 × 2.056465) = 0.736722.