Chapter 7: Numerical Differentiation and Numerical Integration

What’s Ahead
In an elementary calculus course, the students learn the concept of the derivative of a function
y = f(x), denoted by f'(x), dy/dx, or d/dx (f(x)), along with various scientific and engineering
applications.
The need for numerical differentiation arises from the fact that very often, either
• f(x) is not explicitly given and only the values of f(x) at certain discrete points are known,
or
• f(x) is known explicitly but is too complicated to differentiate analytically.
Consider the blood flow through an artery or vein. It is known that the nature of viscosity
dictates a flow profile, where the velocity v increases toward the center of the tube and is zero
at the wall, as illustrated in the following diagram:
Let v be the velocity of blood that flows along a blood vessel which has radius R and length
l at a distance r from the central axis. Let ∆P = Pressure difference between the ends of the
tube and η = Viscosity of blood.
7.2 Problem Statement
From the law of laminar flow, which gives the relationship between v and r, we have

v(r) = v_m (1 − r²/R²)     (7.1)

where

v_m = (1/(4η)) (∆P/l) R²  is the maximum velocity.

Substituting the expression for v_m in (7.1), we obtain

v(r) = (1/(4η)) (∆P/l) (R² − r²)     (7.2)

Thus, if ∆P and l are constant, then the velocity v of the blood flow is a function of r in [0, R].
In an experimental setup, one can then measure velocities at several different values of r, given
η, ∆P, l, and R.
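The velocity profile (7.2) is easy to tabulate. The following Python sketch evaluates v(r) at a few radii; the parameter values (CGS-style R, l, ∆P, and η below) are hypothetical illustrations, not data from the text:

```python
def velocity(r, R=0.008, l=2.0, delta_p=4000.0, eta=0.027):
    """Laminar flow profile, Equation (7.2):
    v(r) = (1/(4*eta)) * (delta_p/l) * (R**2 - r**2).
    Parameter values are hypothetical, for illustration only."""
    return (delta_p / (4.0 * eta * l)) * (R**2 - r**2)

# Maximum velocity v_m occurs at the center (r = 0); v is zero at the wall (r = R)
v_m = velocity(0.0)
samples = [velocity(r) for r in (0.0, 0.002, 0.004, 0.006, 0.008)]
```

The samples decrease monotonically from v_m at the center to 0 at the wall, matching the flow profile described above.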
The problem of interest is now to compute the velocity gradient (that is, dv/dr) from r = r1
to r = r2. We will consider this problem later with numerical values.
Given the functional values f(x0), f(x1), . . . , f(xn) of a function f(x), which is not explicitly
known, at the points x0, x1, . . . , xn in [a, b], or a differentiable function f(x) on [a, b], the
problem is to compute an approximate value of f'(x). Recall that

f'(x) = lim_{h→0} [f(x + h) − f(x)] / h.

Thus, it is the slope of the tangent line at the point (x, f(x)). The difference quotient (DQ)

[f(x + h) − f(x)] / h
is the slope of the secant line passing through (x, f (x)) and (x + h, f (x + h)).
[Figure: secant line through the points (x, f(x)) and (x + h, f(x + h)).]
As h gets smaller and smaller, the difference quotient gets closer and closer to f ′ (x). However,
if h is too small, then the round-off error becomes large, yielding an inaccurate value of the
derivative.
In any case, if the DQ is taken as an approximate value of f'(x), then it is called the two-point
forward difference formula (FDF) for f'(x).
The two-point backward difference and two-point central difference formulas are similarly
defined, in terms of the functional values f(x − h) and f(x), and f(x − h) and f(x + h),
respectively.
Example 7.1. Given the following table, where the functional values correspond to f(x) = x ln x:

x      f(x)
1      0
2      1.3863
3      3.2958
Approximate f'(2) by using two-point (i) FDF, (ii) BDF, and (iii) CDF. (Note that f'(x) =
1 + ln x; f'(2) = 1 + ln 2 = 1.6931.)
Input Data: x = 2, h = 1, x + h = 3, x − h = 1.
Solution.
Two-point FDF: f'(x) ≈ [f(x + h) − f(x)] / h = [f(3) − f(2)] / 1 = 3.2958 − 1.3863 = 1.9095.
Two-point BDF: f'(x) ≈ [f(x) − f(x − h)] / h = [f(2) − f(1)] / 1 = 1.3863 − 0 = 1.3863.
Two-point CDF: f'(x) ≈ [f(x + h) − f(x − h)] / (2h) = [f(3) − f(1)] / 2 = 1.6479.
Remarks: The above example shows that two-point CDF is more accurate than two-point FDF
and BDF. The reason will be clear from our discussion on truncation errors in the next section.
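The three difference quotients in this example can be checked with a short script. This is a sketch assuming only the function f(x) = x ln x and the data above:

```python
import math

def f(x):
    return x * math.log(x)

x, h = 2.0, 1.0
fdf = (f(x + h) - f(x)) / h            # two-point forward difference
bdf = (f(x) - f(x - h)) / h            # two-point backward difference
cdf = (f(x + h) - f(x - h)) / (2 * h)  # two-point central difference
exact = 1 + math.log(x)                # f'(x) = 1 + ln x
```

Running it reproduces the tabulated values 1.9095, 1.3863, and 1.6479, and confirms that the CDF error is the smallest of the three.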
Derivations of the Two-Point Finite Difference Formulas and Errors: Taylor Series
Approach
In this section, we will show how to derive the two-point difference formulas and the truncation
errors associated with them using the Taylor series, and state without proofs the three-point
FDF and BDF. The derivations of these and other higher-order formulas and their errors will
be given in Section 7.2.2, using Lagrange interpolation techniques.
Consider the two-term Taylor series expansion of f (x) about the points x + h and x − h,
respectively:
f(x + h) = f(x) + h f'(x) + (h²/2) f''(ξ0), where x < ξ0 < x + h     (7.6)

and

f(x − h) = f(x) − h f'(x) + (h²/2) f''(ξ1), where x − h < ξ1 < x     (7.7)

Solving for f'(x) from (7.6), we get

f'(x) = [f(x + h) − f(x)] / h − (h/2) f''(ξ0)     (7.8)
• The term within brackets on the right-hand side of (7.8) is the two-point FDF.
• The second term (remainder) on the right-hand side of (7.8) is the truncation error for
two-point FDF.
Similarly, solving for f ′ (x) from (7.7), we get
f'(x) = [f(x) − f(x − h)] / h + (h/2) f''(ξ1)     (7.9)
• The first term within brackets on the right-hand side of (7.9) is the two-point BDF.
• The second term (remainder) on the right-hand side of (7.9) is the truncation error for
two-point BDF.
Assume that f'''(x) is continuous. Consider this time a three-term Taylor series expansion
of f(x) about the points (x + h) and (x − h):

f(x + h) = f(x) + h f'(x) + (h²/2) f''(x) + (h³/3!) f'''(ξ2)     (7.10)

f(x − h) = f(x) − h f'(x) + (h²/2) f''(x) − (h³/3!) f'''(ξ3)     (7.11)

Subtracting (7.11) from (7.10), we obtain

f(x + h) − f(x − h) = 2h f'(x) + (h³/3!) [f'''(ξ2) + f'''(ξ3)]     (7.12)

Solving for f'(x), we get

f'(x) = [f(x + h) − f(x − h)] / (2h) − (h²/12) [f'''(ξ2) + f'''(ξ3)]     (7.13)
To simplify the expression within the brackets in (7.13), we need the following theorem from
calculus:

Theorem (Intermediate Value Theorem for Sums). Let f(x) be continuous on [a, b], let
x1, x2, . . . , xn be points in [a, b], and let c1, c2, . . . , cn be nonnegative constants. Then

Σ_{i=1}^{n} f(xi) ci = f(c) Σ_{i=1}^{n} ci, for some c in [a, b].
We now apply the IVT to f'''(x) in (7.13), with n = 2 and c1 = c2 = 1. For this, we note that

(i) f'''(x) is continuous on [x − h, x + h] (Hypothesis (i) of the IVT is satisfied), and

(ii) c1 = c2 = 1 ≥ 0 (Hypothesis (ii) of the IVT is satisfied).

Thus, there exists a number ξ, x − h < ξ < x + h, such that f'''(ξ2) + f'''(ξ3) = 2 f'''(ξ), and (7.13) becomes

f'(x) = [f(x + h) − f(x − h)] / (2h) − (h²/6) f'''(ξ)     (7.14)
The term within brackets on the right-hand side of (7.14) is the two-point CDF, and the
term −(h²/6) f'''(ξ) is the truncation error for the two-point CDF.

Error for two-point FDF: (h/2) f''(ξ0), x < ξ0 < x + h
Error for two-point BDF: (h/2) f''(ξ1), x − h < ξ1 < x
Error for two-point CDF: (h²/6) f'''(ξ), x − h < ξ < x + h
• Two-point FDF and BDF are O(h) (they are first-order approximations).

• Two-point CDF is O(h²) (it is a second-order approximation).

It is now clear why two-point CDF is more accurate than both two-point FDF and BDF: both
two-point FDF and BDF are O(h), while two-point CDF is O(h²). Note that Example 7.1
supports this statement.
7.2.2 Three-point and Higher-Order Formulas for f'(x): Lagrange Interpolation Approach
Three-point and higher-order derivative formulas and their truncation errors can be derived in
a similar way as in the last section. Three-point FDF and BDF approximate f'(x) in terms
of the functional values at three points: x, x + h, and x + 2h for FDF, and x, x − h, and x − 2h
for BDF, respectively.

Three-point FDF for f'(x): f'(x) ≈ [−3f(x) + 4f(x + h) − f(x + 2h)] / (2h)

X ————— X ————— X
x       x + h    x + 2h

Three-point BDF for f'(x): f'(x) ≈ [f(x − 2h) − 4f(x − h) + 3f(x)] / (2h)

X ————— X ————— X
x − 2h   x − h    x

The derivations and error formulas of these and other higher-order approximations are given
in the next section.
The difference formulas are simple to use, but they are only good for approximating f'(x) when
x is one of the tabulated points x0, x1, . . . , xn. On the other hand, if x is a nontabulated point
in [a, b], and f'(x) is sought at that point, then the simplest thing to do is to:

• Find the interpolating polynomial Pn(x) passing through x0, x1, . . . , xn (we will use Lagrange
interpolation here).

• Accept Pn'(x) as an approximation to f'(x).

As we will see later, if x coincides with one of the points x0, x1, . . . , xn, then we can recover
some of the finite difference formulas of the last section as special cases.
Let (x0, f0), (x1, f1), . . . , (xn, fn) be (n + 1) distinct points in [a, b] with a = x0 < x1 < x2 < · · · <
xn−1 < xn = b. Then recall that the Lagrange interpolating polynomial Pn(x) of degree at most n
is given by

Pn(x) = L0(x) f0 + L1(x) f1 + · · · + Ln(x) fn,

where L0(x), L1(x), . . . , Ln(x) are the Lagrange basis polynomials. So,

Pn'(x) = L0'(x) f0 + L1'(x) f1 + · · · + Ln'(x) fn.     (7.17)

Thus, the derivative of f(x) at any point x (tabulated or nontabulated) can be approximated by
Pn'(x). Obviously, the procedure is quite tedious, as it requires computation of all the Lagrange
polynomials Lk(x) and their derivatives at the point of interest. We will derive below the
derivative formulas in three special cases: n = 1, n = 2, and n = 4, which are commonly used.
These formulas become the respective difference formulas in the special case when x is a
tabulated point. In order to distinguish these formulas from the corresponding finite difference
formulas, they will be called, respectively, two-point, three-point, and five-point formulas.
n = 1 (Two-point formula and two-point FDF and BDF). Here the two tabulated points
are x0 and x1.

L0(x) = (x − x1)/(x0 − x1),   L1(x) = (x − x0)/(x1 − x0),

so L0'(x) = 1/(x0 − x1) and L1'(x) = 1/(x1 − x0). This gives us the two-point formula for f'(x):
f'(x) ≈ (f1 − f0)/(x1 − x0).

Summarizing:

Two-point formula: f'(x) ≈ (f1 − f0) / (x1 − x0)

Two-point FDF: f'(x) ≈ [f(x + h) − f(x)] / h

Two-point BDF: f'(x) ≈ [f(x) − f(x − h)] / h
n = 2 (Three-point formula). Here the three tabulated points are x0, x1, and x2.

X ————— X ————— X
x0       x1       x2

Summarizing:

Three-point formula:

f'(x) ≈ f0 (2x − x1 − x2) / [(x0 − x1)(x0 − x2)] + f1 (2x − x0 − x2) / [(x1 − x0)(x1 − x2)]
      + f2 (2x − x0 − x1) / [(x2 − x0)(x2 − x1)].     (7.20)
Three-point FDF: f'(x) ≈ [−3f(x) + 4f(x + h) − f(x + 2h)] / (2h)

Three-point BDF: f'(x) ≈ [f(x − 2h) − 4f(x − h) + 3f(x)] / (2h)
n = 4. The five-point formula and the associated four-point CDF can similarly
be obtained [Exercise]. We list below the five-point FDF and four-point CDF for easy
reference:

Five-point FDF:

f'(x) ≈ (1/(12h)) [−25f(x) + 48f(x + h) − 36f(x + 2h) + 16f(x + 3h) − 3f(x + 4h)].     (7.21)

Four-point CDF:

f'(x) ≈ [f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h)] / (12h).     (7.22)

Note: Analogous to the two-point CDF, we call the above formula the four-point CDF, because
the function value f(x) does not appear in the formula. It uses only four function values:
f(x − 2h), f(x − h), f(x + h), and f(x + 2h).
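As a sketch, the three-point formulas and the higher-order formulas (7.21) and (7.22) can be coded directly. The test function f(x) = sin x and the values x = 1, h = 0.1 below are illustrative choices, not data from the text:

```python
import math

def d_3pt_fdf(f, x, h):
    """Three-point forward difference for f'(x), O(h^2)."""
    return (-3*f(x) + 4*f(x + h) - f(x + 2*h)) / (2*h)

def d_3pt_bdf(f, x, h):
    """Three-point backward difference for f'(x), O(h^2)."""
    return (f(x - 2*h) - 4*f(x - h) + 3*f(x)) / (2*h)

def d_4pt_cdf(f, x, h):
    """Four-point central difference for f'(x), O(h^4), Equation (7.22)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

def d_5pt_fdf(f, x, h):
    """Five-point forward difference for f'(x), O(h^4), Equation (7.21)."""
    return (-25*f(x) + 48*f(x + h) - 36*f(x + 2*h)
            + 16*f(x + 3*h) - 3*f(x + 4*h)) / (12*h)

f, x, h = math.sin, 1.0, 0.1
exact = math.cos(1.0)   # (sin x)' = cos x
```

At this spacing, the O(h⁴) formulas are already several orders of magnitude more accurate than the O(h²) ones.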
Recall that the error term in (n + 1)-point polynomial interpolation of f(x) is given by
(Theorem 6.7):

En(x) = [(x − x0)(x − x1) · · · (x − xn) / (n + 1)!] f^(n+1)(ξx), where x0 < ξx < xn.     (7.23)

Differentiating, we obtain

En'(x) = (1/(n + 1)!) d/dx [(x − x0)(x − x1) · · · (x − xn)] f^(n+1)(ξx)
       + [(x − x0)(x − x1) · · · (x − xn) / (n + 1)!] d/dx [f^(n+1)(ξx)].     (7.24)
Simplification. The error formula (7.24) can be simplified if the point x at which the derivative
is to be evaluated happens to be one of the nodes xi , as in the case of finite difference formulas.
First, if x = xi, then the second term on the right-hand side of (7.24) becomes zero, because
(x − xi) is a factor.
Secondly, the value of

d/dx [(x − x0)(x − x1) · · · (x − xn)] at x = xi is (xi − x0)(xi − x1) · · · (xi − xi−1)(xi − xi+1) · · · (xi − xn).

Thus, at x = xi, with equally spaced nodes of spacing h, we obtain

En(xi) = [h^n / (n + 1)!] f^(n+1)(η).
Theorem 7.2 (Error Theorem for Numerical Differentiation).
Let f(x) be (n + 1)-times continuously differentiable on [a, b], and let Pn(x) be the polynomial
interpolating f(x) at the distinct nodes x0, x1, . . . , xn in [a, b]. Then

(a) En(xi) = f'(xi) − Pn'(xi) = [f^(n+1)(η) / (n + 1)!] ∏_{j=0, j≠i}^{n} (xi − xj)     (7.25)

for some η in (a, b), and

(b) if the nodes are equally spaced with spacing h, then

En(xi) = [h^n / (n + 1)!] f^(n+1)(η).
Special cases: Since the finite difference formulas concern finding the derivative at a tabulated
point x = xi, we immediately recover the following error results established earlier:

• Two-point FDF and BDF (n = 1) are O(h). (First-order approximations)

• Two-point CDF and three-point FDF and BDF (n = 2) are O(h²). (Second-order
approximations)

• Four-point CDF and five-point FDF and BDF (n = 4) are O(h⁴). (Fourth-order
approximations)

and so on.
Example 7.3. Given the following table, where the functional values correspond to f(x) = x ln x:

x      f(x)
1      0
2      1.3863
2.5    2.2907

Input Data:
(i) Nodes: x0 = 1, x1 = 2, x2 = 2.5
(ii) Functional values: f0 = 0, f1 = 1.3863, f2 = 2.2907
(iii) The point at which the derivative is to be approximated: x = 2.1
(iv) The degree of interpolation: n = 2

(i) Compute P2(x), the Lagrange interpolating polynomial of degree 2, using Equation (7.18).
(ii) Compute P2'(x) using Equation (7.20) and accept it as an approximation to f'(x).
Solution:

f'(x) ≈ P2'(x) = f0 (2x − x1 − x2) / [(x0 − x1)(x0 − x2)] + f1 (2x − x0 − x2) / [(x1 − x0)(x1 − x2)]
              + f2 (2x − x0 − x1) / [(x2 − x0)(x2 − x1)]

So, f'(2.1) ≈ P2'(2.1) = 0 + 1.3863 [(4.2 − 1 − 2.5) / (1 × (−0.5))] + 2.2907 [(4.2 − 1 − 2) / (1.5 × 0.5)] = 1.7243.
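Formula (7.20) is straightforward to evaluate programmatically. The sketch below reproduces the computation of P2'(2.1) from the data of Example 7.3:

```python
import math

def p2_prime(x, xs, fs):
    """Derivative of the degree-2 Lagrange interpolant, Equation (7.20)."""
    x0, x1, x2 = xs
    f0, f1, f2 = fs
    return (f0 * (2*x - x1 - x2) / ((x0 - x1) * (x0 - x2))
          + f1 * (2*x - x0 - x2) / ((x1 - x0) * (x1 - x2))
          + f2 * (2*x - x0 - x1) / ((x2 - x0) * (x2 - x1)))

approx = p2_prime(2.1, (1.0, 2.0, 2.5), (0.0, 1.3863, 2.2907))
exact = 1 + math.log(2.1)   # f'(x) = 1 + ln x for f(x) = x ln x
```

The computed value agrees with the hand calculation above, and is within about 0.02 of the exact derivative f'(2.1) = 1 + ln 2.1 ≈ 1.7419.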
The following example will help in understanding the Richardson extrapolation technique. Here
we will show how the technique can be used to derive the four-point CDF, which has error
O(h⁴), by combining two two-point CDFs with spacing h and h/2, respectively, each of which
has error O(h²).
Recall that the two-point CDF was derived from three-term Taylor series expansions of f(x + h)
and f(x − h). If instead five-term Taylor series expansions are used, then proceeding exactly
as in Section 7.2.1 and assuming that f'''(x) is continuous in [x − h, x + h], we can write

f'(x) = [f(x + h) − f(x − h)] / (2h) − [f'''(x)/3!] h² + O(h⁴).     (7.26)

The first term on the right-hand side is the two-point CDF with spacing h, which has error
O(h²). Now, suppose that f'(x) is evaluated with spacing h/2. Then we have

f'(x) = [f(x + h/2) − f(x − h/2)] / h − [f'''(x)/(4 · 3!)] h² + O(h⁴).     (7.27)

The first term on the right-hand side is the two-point CDF with spacing h/2, which also has
error O(h²). So the order of the truncation error remains the same.
It turns out, however, that these two approximations can be combined to obtain an approximation
which has error O(h⁴). This can be done by eliminating the term involving h², as follows.
Multiplying (7.27) by 4 gives

4 f'(x) = 4 [f(x + h/2) − f(x − h/2)] / h − [f'''(x)/3!] h² + O(h⁴).

Next, subtract Equation (7.26) from the last equation and divide the result by 3, yielding

f'(x) = (1/3) { 4 [f(x + h/2) − f(x − h/2)] / h − [f(x + h) − f(x − h)] / (2h) } + O(h⁴).

This is an approximation of f'(x) with error O(h⁴). Indeed, the reader will easily recognize the
term in the braces as the four-point CDF with spacing h/2; that is, with the points x − h,
x − h/2, x + h/2, and x + h. Replacing h by 2h in the above formula, we have

f'(x) ≈ [8f(x + h) − 8f(x − h) − f(x + 2h) + f(x − 2h)] / (12h)     (7.28)

which is the four-point CDF with spacing h, and we already know that this approximation has
error O(h⁴).
Richardson’s Technique

    two-point CDF with spacing h  +  two-point CDF with spacing h/2  →  four-point CDF

Summarizing: Starting from two two-point CDFs with spacing h and h/2, each of which has
truncation error O(h²), the Richardson extrapolation technique enables us to obtain a four-point
CDF with error O(h⁴).
General Case: from O(h^{2k}) to O(h^{2k+2}). We will now consider the general case of deriving
a derivative formula of O(h^{2k+2}) starting from two formulas of O(h^{2k}).

k = 1: from O(h²) to O(h⁴). Let D0(h) and D0(h/2) be two approximate derivative formulas
of O(h²), with spacing h and h/2, respectively, which can be written as

f'(x) = D0(h) + A1 h² + A2 h⁴ + A3 h⁶ + · · ·     (7.29)

f'(x) = D0(h/2) + A1 (h/2)² + A2 (h/2)⁴ + A3 (h/2)⁶ + · · ·     (7.30)

where A1, A2, A3, etc. are constants independent of h.
As in the last section, a formula of O(h⁴) can now be obtained by eliminating the terms
involving h² from the above two equations. This is done as follows: multiply (7.30) by 4,
subtract (7.29), and divide the result by 3, so that

f'(x) = (4/3) D0(h/2) − (1/3) D0(h) − (1/4) A2 h⁴ + · · ·

or

f'(x) = D0(h/2) + [D0(h/2) − D0(h)] / 3 − (1/4) A2 h⁴ + · · ·     (7.31)

Set D1(h) = D0(h/2) + [D0(h/2) − D0(h)] / 3. Then

f'(x) = D1(h) − (1/4) A2 h⁴ + · · ·, and, with h replaced by h/2,

f'(x) = D1(h/2) − (1/4) A2 (h/2)⁴ + · · ·     (7.32)

Similarly, combining D1(h) and D1(h/2), set

D2(h) = D1(h/2) + [D1(h/2) − D1(h)] / 15.
• Start with two approximations Dk−1 (h) and Dk−1 ( h2 ), each of order O(h2k ).
• Combine Dk−1 (h) and Dk−1 ( h2 ) using Richardson’s extrapolation technique to obtain an
approximation of O(h2k+2 ).
Richardson Technique

The above computation can be systematically arranged in the form of the following table, to
be called the Richardson Extrapolation Table.
The arrows pointing towards an entry of the table show the dependence of that entry on the
two entries of the previous column.

Output: the extrapolated approximations Dk(h).

For k = 1, 2, . . . do

    Dk(h) = Dk−1(h/2) + [Dk−1(h/2) − Dk−1(h)] / (4^k − 1)

End
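The loop above can be sketched in Python. The function below builds the extrapolation table column by column using the recurrence D_k(h) = D_{k-1}(h/2) + [D_{k-1}(h/2) - D_{k-1}(h)]/(4^k - 1); the driver values (f(x) = x ln x at x = 1, h = 0.5) anticipate Example 7.6:

```python
import math

def richardson_table(D0, h, levels):
    """Richardson extrapolation table for formulas whose error has only
    even powers of h. D0 should be an O(h^2) formula such as the two-point
    CDF; table[k][0] is then the O(h^(2k+2)) estimate with spacing h."""
    col = [D0(h / 2**i) for i in range(levels + 1)]   # first column
    table = [col]
    for k in range(1, levels + 1):
        col = [col[i + 1] + (col[i + 1] - col[i]) / (4**k - 1)
               for i in range(len(col) - 1)]
        table.append(col)
    return table

f = lambda x: x * math.log(x)
D0 = lambda h: (f(1 + h) - f(1 - h)) / (2 * h)   # two-point CDF at x = 1
tab = richardson_table(D0, 0.5, 2)
```

With h = 0.5 this reproduces the column entries 0.9548, 0.9894, 0.9974 and the extrapolated values 1.0009, 1.0001, converging toward the exact derivative f'(1) = 1.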
Example 7.6

Given f(x) = x ln x and h = 0.5, compute an approximation of f'(1) of O(h⁶), starting with
the two-point CDF with spacing h and h/2.
Inputs: f(x) = x ln x, x = 1, h = 0.5.

(i) D0(h) = [f(x + h) − f(x − h)] / (2h) = [f(1.5) − f(0.5)] / 1
          = 1.5 × ln(1.5) − 0.5 × ln(0.5)
          = 0.9548.     (two-point CDF with spacing h)

(ii) D0(h/2) = [f(x + h/2) − f(x − h/2)] / h = [f(1.25) − f(0.75)] / 0.5
             = [1.25 × ln(1.25) − 0.75 × ln(0.75)] / 0.5
             = 0.9894.     (two-point CDF with spacing h/2)

(iii) D1(h) = D0(h/2) + [D0(h/2) − D0(h)] / 3
            = 0.9894 + (0.9894 − 0.9548) / 3
            = 1.0009.

(iv) Compute D0(h/4) by replacing h by h/4 in the formula for D0(h):
    D0(h/4) = [f(1.125) − f(0.875)] / 0.25
            = [1.125 × ln(1.125) − 0.875 × ln(0.875)] / 0.25
            = 0.9974.

(v) Compute D1(h/2) by replacing h by h/2 in the formula for D1(h):
    D1(h/2) = D0(h/4) + [D0(h/4) − D0(h/2)] / 3
            = 0.9974 + (0.9974 − 0.9894) / 3
            = 1.0001.

(vi) Finally,
    D2(h) = D1(h/2) + [D1(h/2) − D1(h)] / 15
          = 1.0001 + (1.0001 − 1.0009) / 15
          = 1.0000,

which is the desired O(h⁶) approximation. (The exact value is f'(1) = 1 + ln 1 = 1.)

Richardson Extrapolation Table:

    D0(h)   = 0.9548
                         D1(h)   = 1.0009
    D0(h/2) = 0.9894                          D2(h) = 1.0000
                         D1(h/2) = 1.0001
    D0(h/4) = 0.9974
In the preceding section, we described how two approximations of O(h^{2k}) can be combined to
obtain an approximation of O(h^{2k+2}). The underlying assumption there was that the error
can be expressed in terms of even powers of h. The same type of technique can be used to
obtain an approximation of O(h^{k+1}) starting from two approximations of O(h^k), in a similar
way.

Richardson’s Technique

We state the result in the following theorem. The proof is left as an Exercise.

Theorem. Let Dk−1(h) and Dk−1(h/2) be two O(h^{k−1}) approximations of f'(x). Then

Dk(h) = Dk−1(h/2) + [Dk−1(h/2) − Dk−1(h)] / (2^{k−1} − 1), k = 2, 3, 4, . . .     (7.37)

is an O(h^k) approximation of f'(x).
Given f (x) = x ln x, h = 0.5, (i) Compute an O(h2 ) approximation of f ′ (1) starting from an
O(h) approximation. (ii) Compare the result with that obtained by two-point CDF, which is
also an O(h2 ) approximation.
Solution (i). Step 0. Compute D1(h) using the two-point FDF (which is an O(h) approximation):

D1(h) = [f(x + h) − f(x)] / h = [f(1.5) − f(1)] / 0.5
      = [1.5 × ln(1.5) − 1 × ln(1)] / 0.5
      = 1.2164.

Compute D1(h/2) by replacing h by h/2:

D1(h/2) = [f(1.25) − f(1)] / 0.25
        = [1.25 × ln(1.25) − 1 × ln(1)] / 0.25
        = 1.1157.
Step 1. Extrapolate using (7.37) with k = 2:

D2(h) = D1(h/2) + [D1(h/2) − D1(h)] / (2 − 1)
      = 1.1157 + (1.1157 − 1.2164)
      = 1.0150.

Solution (ii). Two-point CDF: [f(1.5) − f(0.5)] / 1
      = 1.5 × ln(1.5) − 0.5 × ln(0.5)
      = 0.9548.
Clearly, the O(h²) Richardson extrapolation result is more accurate than the two-point CDF,
which is also O(h²). (The exact value is f'(1) = 1.)
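The extrapolation step (7.37) used in this example can be reproduced with a few lines of Python, a sketch using the same f(x) = x ln x, x = 1, and h = 0.5 as above:

```python
import math

f = lambda x: x * math.log(x)
x, h = 1.0, 0.5

def D1(h):
    """Two-point FDF, an O(h) approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# One Richardson step, Equation (7.37) with k = 2: denominator 2**(k-1) - 1 = 1
D2 = D1(h / 2) + (D1(h / 2) - D1(h)) / (2**1 - 1)

cdf = (f(x + h) - f(x - h)) / (2 * h)   # two-point CDF, for comparison
```

The script reproduces D1(h) = 1.2164, D1(h/2) = 1.1157, and D2(h) = 1.0150, and confirms that D2 is closer to the exact value 1 than the two-point CDF value 0.9548.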
So far, we have considered truncation error for approximations of O(hk ), obtained from trun-
cation of a Taylor series or polynomial interpolation.
These error formulas suggest that the smaller h is, the better the approximation. However, this
is not entirely true. To come up with an optimal choice of h, we must also take into consideration
the round-off error due to floating point computations. We illustrate the idea with the
two-point CDF:
f'(x) ≈ [f(x + h) − f(x − h)] / (2h)     (7.38)
First, consider the round-off error. From the laws of floating point arithmetic (Theorem
3.7), we obtain

fl( (f1 − f−1) / (2h) ) = [(f1 − f−1) / (2h)] (1 + δ), |δ| ≤ 2µ,     (7.39)

where µ is the machine precision. Thus, the round-off error in computing the CDF is ≤ µ/h.
Next, let's consider the truncation error. Assume that |f'''(x)| ≤ M. Then from (7.14), we
see that the truncation error for the two-point CDF is ≤ M h²/6.
So, the total (absolute) error = round-off error + truncation error ≤ µ/h + M h²/6.
This simple illustration reveals the fact that too small a value of h is hardly an advantage.
What is important is how to choose a value of h that will minimize the total error.
Choosing an optimal h. Setting E'(h) = 0 for the total error bound E(h) = µ/h + M h²/6
gives the optimal spacing h = (3µ/M)^{1/3}.
Example 7.9

Given f(x) = e^x, 0 ≤ x ≤ 1, find the optimal value of h for which the total error in the
two-point CDF approximation to f'(x) will be as small as possible.

Solution. We need to find the h for which the maximum absolute total error (which is a function
of h)

E(h) = µ/h + M h²/6

will be minimized.

Find M: f(x) = e^x, so f'''(x) = e^x, and M = max of e^x over [0, 1] = e.

Thus, |E(h)| ≤ µ/h + e h²/6.
Minimize E(h):

E(h) = µ/h + e h²/6

is a simple function of h. It is easy to see that E(h) will be minimized if h = (3µ/e)^{1/3}.
Assume now µ = 2 × 10⁻⁷ (single precision). Then E(h) will be minimized if

h = (3 × 2 × 10⁻⁷ / e)^{1/3} ≈ 0.0060.
Verification. The readers are invited to verify [Exercise] the above result by computing
errors with different values of h in the neighborhood of 0.0060.
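The verification suggested above can be sketched by evaluating the error bound E(h) = µ/h + M h²/6 at several spacings. Note that this scans the analytic bound with the assumed single-precision µ, not actual floating point errors:

```python
import math

mu = 2e-7     # assumed machine precision (single precision), as in Example 7.9
M = math.e    # bound on |f'''| for f(x) = e^x on [0, 1]

def total_error_bound(h):
    """Round-off bound mu/h plus truncation bound M*h**2/6 for the CDF."""
    return mu / h + M * h**2 / 6

h_opt = (3 * mu / M) ** (1 / 3)   # analytic minimizer of the bound

# Scan a few h values near h_opt: the bound should be smallest near 0.0060
hs = [0.001, 0.003, 0.006, 0.012, 0.024]
best = min(hs, key=total_error_bound)
```

The scan confirms that among the sampled spacings, h = 0.006 gives the smallest total error bound, in agreement with the analytic minimizer.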
The need for computing higher-order and partial derivatives arises in a wide variety of scientific
and engineering applications. Mathematical models of many of these applications are
second- or higher-order differential equations and/or partial differential equations. Typically,
such equations are solved in two stages:

Stage I. The differential equations are approximated by means of finite difference or finite
element techniques that lead to a system of algebraic equations.

Stage II. Solution of the system of algebraic equations gives the solution of the applied problem.

We will consider finite difference approximations of second-order derivatives and of first- and
second-order partial derivatives. These formulas can be derived exactly in the same way as
their counterparts for the first-order derivatives. We will illustrate the derivation of the
three-point difference formulas for f''(x).
Three-Point Difference Formulas for Second Derivatives and their Truncation Errors

Suppose that the functional values at x, x + h, and x + 2h are known. Using the three-term
Taylor series expansion of f(x + h) with error term, we can write

f(x + h) = f(x) + h f'(x) + (h²/2!) f''(x) + (h³/3!) f'''(ξ1), where x < ξ1 < x + h.     (7.40)

Similarly,

f(x + 2h) = f(x) + 2h f'(x) + (4h²/2!) f''(x) + (8h³/3!) f'''(ξ2), where x < ξ2 < x + 2h.     (7.41)
Eliminate now f'(x) from these two equations. To do this, multiply Equation (7.40) by 2 and
subtract it from Equation (7.41):

f(x + 2h) − 2f(x + h) = −f(x) + h² f''(x) + (h³/3!) [8 f'''(ξ2) − 2 f'''(ξ1)]     (7.42)

Assuming that f'''(x) is continuous on [x, x + 2h], an argument similar to the Intermediate
Value Theorem used earlier shows that there is a number ξ in (x, x + 2h) such that

(1/3!) [8 f'''(ξ2) − 2 f'''(ξ1)] = f'''(ξ).     (7.43)

Thus, we have

f(x + 2h) − 2f(x + h) = −f(x) + h² f''(x) + h³ f'''(ξ)     (7.44)

Solving for f''(x), we have the three-point FDF for f''(x).
In the same way, we can derive the three-point BDF and CDF, and other higher-order formulas
for f''(x). We state some of these formulas below. Their derivations are left as exercises.

Three-point FDF for f''(x) with error term:

f''(x) = [f(x) − 2f(x + h) + f(x + 2h)] / h² − h f'''(ξ), where x < ξ < x + 2h.     (7.45)

Three-point BDF for f''(x) with error term:

f''(x) = [f(x − 2h) − 2f(x − h) + f(x)] / h² + O(h)     (7.46)

Three-point CDF for f''(x) with error term:

f''(x) = [f(x − h) − 2f(x) + f(x + h)] / h² − (h²/12) f⁽⁴⁾(ξ), where x − h < ξ < x + h.     (7.47)
Five-point CDF for f''(x) with error term:

f''(x) = [−f(x − 2h) + 16f(x − h) − 30f(x) + 16f(x + h) − f(x + 2h)] / (12h²) + O(h⁴)     (7.48)

Solution: The exact value of f''(x) at x = 1 is 1.
As in the case of the first-order derivative, formulas for the second and higher-order derivatives
can also be derived based on Lagrange or Newton interpolation, and the finite difference
formulas can be recovered as special cases. Of course, the primary advantage of the
interpolating formulas is that they can also be used to approximate the derivatives at
nontabulated points. As noted before, these formulas are, however, computation-intensive.
As an illustration, we will derive here the four-point formula for f''(x) based on Lagrange
interpolation.
The four points are x0, x1, x2, and x3. The Lagrange polynomial P3(x) of degree 3 is
constructed from the corresponding Lagrange basis polynomials. Differentiating twice, we get
the four-point formula for f''(x).

Note: The above formula can be used to approximate f''(x) at any point x in
[x0, x3], not necessarily only at tabulated points.

It is easy to see that E3''(x) is O(h²), where h is the spacing of the equally spaced nodes.
The four-point FDF for f''(x) can be obtained by setting x0 = x, x1 = x + h, x2 = x + 2h,
and x3 = x + 3h.

Four-point FDF for f''(x):

f''(x) = [2f(x) − 5f(x + h) + 4f(x + 2h) − f(x + 3h)] / h² + O(h²)

The other formulas for f''(x) and higher-order derivatives can similarly be computed (though
not easily).
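As an illustrative sketch, the three-point CDF (7.47) and the four-point FDF above can be applied to a smooth test function; f(x) = e^x and h = 0.01 below are illustrative choices, not taken from the text:

```python
import math

def d2_3pt_cdf(f, x, h):
    """Three-point central difference for f''(x), O(h^2), Equation (7.47)."""
    return (f(x - h) - 2*f(x) + f(x + h)) / h**2

def d2_4pt_fdf(f, x, h):
    """Four-point forward difference for f''(x), O(h^2)."""
    return (2*f(x) - 5*f(x + h) + 4*f(x + 2*h) - f(x + 3*h)) / h**2

f, x, h = math.exp, 0.0, 0.01
exact = 1.0   # (e^x)'' = e^x, so f''(0) = 1
```

Both formulas agree with the exact value to several decimal places at this spacing, with the central formula noticeably more accurate, as its smaller error constant suggests.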
Let u(x, t) be a function of the two variables x and t. Then the partial derivatives of first
order with respect to x and t are defined by

∂u/∂x (x, t) = lim_{h→0} [u(x + h, t) − u(x, t)] / h

∂u/∂t (x, t) = lim_{k→0} [u(x, t + k) − u(x, t)] / k

Idea: Apply the corresponding finite difference formulas of the single-variable case to the one
of the two variables for which the approximation is sought, while keeping the other variable
constant.

• Two-point FDF for ∂u/∂x (x, t) and ∂u/∂t (x, t):

∂u/∂x (x, t) = [u(x + h, t) − u(x, t)] / h + O(h)

∂u/∂t (x, t) = [u(x, t + k) − u(x, t)] / k + O(k)

• Two-point BDF for ∂u/∂x and ∂u/∂t:

∂u/∂x (x, t) = [u(x, t) − u(x − h, t)] / h + O(h)

∂u/∂t (x, t) = [u(x, t) − u(x, t − k)] / k + O(k)

• Two-point CDF for ∂u/∂x and ∂u/∂t:

∂u/∂x (x, t) = [u(x + h, t) − u(x − h, t)] / (2h) + O(h²)

∂u/∂t (x, t) = [u(x, t + k) − u(x, t − k)] / (2k) + O(k²)

• Three-point CDF for ∂²u/∂x² and ∂²u/∂t²:

∂²u/∂x² (x, t) = [u(x − h, t) − 2u(x, t) + u(x + h, t)] / h² + O(h²)

∂²u/∂t² (x, t) = [u(x, t − k) − 2u(x, t) + u(x, t + k)] / k² + O(k²)
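The idea above can be sketched by differencing a sample function of two variables; u(x, t) = sin(x) e^(−t) and the spacings below are illustrative assumptions:

```python
import math

def u(x, t):
    """Hypothetical test function of two variables."""
    return math.sin(x) * math.exp(-t)

def ux(x, t, h=1e-5):
    """Two-point CDF in x (t held constant), O(h^2)."""
    return (u(x + h, t) - u(x - h, t)) / (2 * h)

def ut(x, t, k=1e-5):
    """Two-point CDF in t (x held constant), O(k^2)."""
    return (u(x, t + k) - u(x, t - k)) / (2 * k)

def uxx(x, t, h=1e-4):
    """Three-point CDF for the second partial in x, O(h^2)."""
    return (u(x - h, t) - 2*u(x, t) + u(x + h, t)) / h**2

# Exact partials for comparison:
# u_x = cos(x) e^{-t},  u_t = -sin(x) e^{-t},  u_xx = -sin(x) e^{-t}
```

Evaluating at (x, t) = (0.5, 1.0) matches the analytic partial derivatives to high accuracy.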
Summary of Finite Difference Formulas for f'(x)

O(h):
  Two-point FDF:   f'(x) ≈ [f(x + h) − f(x)] / h
  Two-point BDF:   f'(x) ≈ [f(x) − f(x − h)] / h

O(h²):
  Two-point CDF:   f'(x) ≈ [f(x + h) − f(x − h)] / (2h)
  Three-point FDF: f'(x) ≈ [−3f(x) + 4f(x + h) − f(x + 2h)] / (2h)
  Three-point BDF: f'(x) ≈ [f(x − 2h) − 4f(x − h) + 3f(x)] / (2h)

O(h⁴):
  Four-point CDF:  f'(x) ≈ [f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h)] / (12h)
  Five-point FDF:  f'(x) ≈ [−25f(x) + 48f(x + h) − 36f(x + 2h) + 16f(x + 3h) − 3f(x + 4h)] / (12h)

Summary of Finite Difference Formulas for f''(x)

O(h):
  Three-point FDF: f''(x) ≈ [f(x) − 2f(x + h) + f(x + 2h)] / h²
  Three-point BDF: f''(x) ≈ [f(x − 2h) − 2f(x − h) + f(x)] / h²

O(h²):
  Four-point FDF:  f''(x) ≈ [2f(x) − 5f(x + h) + 4f(x + 2h) − f(x + 3h)] / h²
  Four-point BDF:  f''(x) ≈ [−f(x − 3h) + 4f(x − 2h) − 5f(x − h) + 2f(x)] / h²
  Three-point CDF: f''(x) ≈ [f(x − h) − 2f(x) + f(x + h)] / h²

O(h⁴):
  Five-point CDF:  f''(x) ≈ [−f(x − 2h) + 16f(x − h) − 30f(x) + 16f(x + h) − f(x + 2h)] / (12h²)
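The orders listed in the summary above can be estimated empirically: if a formula is O(h^p), then log2[E(h)/E(h/2)] ≈ p. A sketch, where the test function e^x and the values x = 0.3, h = 0.05 are illustrative choices:

```python
import math

def order_estimate(formula, f, x, exact, h):
    """Estimate the order p from the error ratio log2(E(h) / E(h/2))."""
    e1 = abs(formula(f, x, h) - exact)
    e2 = abs(formula(f, x, h / 2) - exact)
    return math.log2(e1 / e2)

fdf2 = lambda f, x, h: (f(x + h) - f(x)) / h                      # O(h)
cdf2 = lambda f, x, h: (f(x + h) - f(x - h)) / (2 * h)            # O(h^2)
cdf4 = lambda f, x, h: (f(x - 2*h) - 8*f(x - h)
                        + 8*f(x + h) - f(x + 2*h)) / (12 * h)     # O(h^4)

f, x, h = math.exp, 0.3, 0.05
exact = math.exp(0.3)   # (e^x)' = e^x
p1 = order_estimate(fdf2, f, x, exact, h)
p2 = order_estimate(cdf2, f, x, exact, h)
p4 = order_estimate(cdf4, f, x, exact, h)
```

The estimated orders come out close to 1, 2, and 4, in agreement with the table.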
Example 7.11

Given f(x) = x ln x and h = 1:
(a) Approximate f''(1) using the three-point FDF.
(b) Approximate f''(3) using the three-point BDF.

Analytical formulas: f'(x) = 1 + ln x, f''(x) = 1/x.

Solutions:
(a) f''(1) ≈ [f(1) − 2f(2) + f(3)] / h² = [0 − 2(1.3863) + 3.2958] / 1 = 0.5232.
Absolute error: |f''(1) − 0.5232| = |1 − 0.5232| = 0.4768 (a relative error of 47.68%).
(b) f''(3) ≈ [f(1) − 2f(2) + f(3)] / h² = 0.5232.
Absolute error: |f''(3) − 0.5232| = |0.3333 − 0.5232| = 0.1899.

Remarks: Clearly, the above approximations are not good. The readers are invited to compute
three-point and five-point CDF approximations of f''(x) using Formulas (7.47) and (7.48) and
verify the improvement in accuracy with these formulas.
Exercises on Part I

7.1. (Computational) For each function, approximate the following derivative values and
compare them with the actual values.
7.2. (Computational) Given the following tables of functional values for the functions (as
indicated):

(a)
x        f(x) = √x · sin x
0        0
π/4      0.6267
π/2      1.2533
3π/4     1.0856
π        0

(b)
x        f(x) = sin(sin(sin(x)))
0        0
π/4      0.6049
π/2      0.7456
3π/4     0.6049
π        0

(c)
x        f(x) = sqrt(x + sqrt(x + √x))
0        0
0.5      1.2644
1        1.5538
1.5      1.7750
2        1.9616

(d)
x        f(x) = x ln x + x²     f'(x)
0.1      −0.2168
0.5      0.7738
0.8      6.8763
1        20.0855
B. For functions in Table (c), repeat Part A with x = 0 in (i), x = 1 in (ii), and x = 1.2
in (iii).
C. Fill in the missing entries, as accurately as possible, in Table (d) using the appropriate
formulas.
7.3. (Analytical) Derive the following formulas with their associated truncation errors:

(c) Three-point forward difference formula for f''(x): f''(x) ≈ [f(x) − 2f(x + h) + f(x + 2h)] / h²
7.4. (Applied) The amount of force F needed to move an object along a horizontal plane is
given by

F(θ) = µW / (µ sin θ + cos θ)

where µ is the coefficient of friction, W is the weight of the object, and θ is the angle between
the applied force and the plane.
7.5. (Applied) The following table gives the estimated world population (in millions) at
various dates:
Year Population
1960 2,982
1970 3,692
1980 4,435
1990 5,263
2000 6,070
2010 6,092
Estimate the rate of the world’s population growth in 1980, 2010, and 1985; using the
appropriate derivative formulas (as accurately as possible).
Qx = −k dT/dx

where
x = distance (m) along the path of heat flow,
T = temperature (°C),
k = thermal conductivity, and
Qx = heat flux (W/m²).
Using the above table, predict the rate of change for the trade deficits (as accurately as
possible) for the years 2007 and 2010.
(a) Starting with two-point CDF, compute an O(h6 ) approximation of f ′ (x) using the
Richardson extrapolation technique.
(b) Present your results in the form of a Richardson extrapolation table.
(a) Starting from two-point FDF, compute an O(h3 ) approximation of f ′ (0.5) using the
Richardson extrapolation technique.
(b) Present your results in the form of a Richardson extrapolation table.
7.10. (Computational) Verify the claim made in Exercise 7.6 for the optimal value of h =
0.0060, by computing the errors with several different values of h in the neighborhood of
0.0060.
7.11. (Analytical) Using Newton's interpolation, give a proof of the Error Theorem for
Numerical Differentiation (Theorem 7.2).
7.13. For the functions in Exercises 7.1 and 7.2 (a), find an optimal value of h for which the
error in computing a two-point CDF approximation to f'(x) will be as small as possible.
MORE TO COME
What’s Ahead
• Romberg Integration
• Gaussian Quadrature
• Improper Integral
• Multiple Integrals
In a beginning calculus course, the students learn various analytical techniques for finding the
integral

∫_a^b f(x) dx
and a variety of applications in science and engineering. A few of these applications include
• Finding the area between two curves y = f1(x) and y = f2(x) and the lines x = a and
x = b:

Area = ∫_a^b f(x) dx, where f(x) = f1(x) − f2(x), f1(x) ≥ f2(x),

and both functions f1(x) and f2(x) are continuous on [a, b].
• Volume of a solid obtained by rotating a curve or a region bounded by two curves about
a line
• The arc length of the curve y = f(x) from x = a to x = b:

L = ∫_a^b √(1 + [f'(x)]²) dx

• The area of a surface of revolution: the area of the surface obtained by rotating the
curve y = f(x), a ≤ x ≤ b, about the x-axis is

S = ∫_a^b 2π f(x) √(1 + [f'(x)]²) dx

• Statistical applications, such as computing the mean of a probability density function
f(x):

µ = ∫_{−∞}^{∞} x f(x) dx
• Biological applications, such as the volume of the blood flow in the heart and the
cardiac output when dye is injected.
For some detailed discussions of some of these applications and many others, the readers may
refer to the authoritative calculus book by Stewart [].
As in the case of numerical differentiation, the need to numerically evaluate an integral comes
from the fact that

• In many practical applications, the integrand is not explicitly known; all that is known
are certain discrete values of the integrand from experimental measurements,

or

• the integrand is known explicitly, but its antiderivative cannot be found in closed form
(for example, f(x) = e^{−x²}).
Blood flow: Recall that in Section 7.1.1, we considered the velocity of blood flow in a tube
using the law of laminar flow:

v(r) = (1/(4η)) (∆P/l) (R² − r²).

Here we consider the volume of blood flow. It can be shown (see Stewart []) that the volume
V of the blood that passes a cross-section per unit time is given by

V = ∫_0^R 2πr · (∆P/(4ηl)) (R² − r²) dr

If the integrand is explicitly known, then it is quite easy to compute this integral. However,
in many practical applications, only certain discrete values of the integrand will be known at
points r in [0, R]. The integral thus needs to be computed numerically. We will later consider
this application with numerical data.
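As a preview of the numerical integration techniques developed later in this chapter, the integral for V can be approximated by a simple midpoint sum and compared with the closed form V = π ∆P R⁴ / (8ηl) (Poiseuille's law); the parameter values below are hypothetical:

```python
import math

# Hypothetical CGS-style data, for illustration only
R, l = 0.008, 2.0             # vessel radius and length
delta_p, eta = 4000.0, 0.027  # pressure difference and viscosity

def integrand(r):
    """2*pi*r * v(r), the integrand for the volume flow rate V."""
    return 2 * math.pi * r * (delta_p / (4 * eta * l)) * (R**2 - r**2)

# Composite midpoint sum as a stand-in for the quadrature rules developed later
n = 1000
dr = R / n
V_num = sum(integrand((i + 0.5) * dr) for i in range(n)) * dr

# Closed form (Poiseuille's law)
V_exact = math.pi * delta_p * R**4 / (8 * eta * l)
```

The midpoint sum with n = 1000 agrees with the closed form to better than one part in 10⁵.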
Cardiac output: The cardiac output of the heart is the volume of blood pumped by the
heart per unit time. Usually, a dye is injected into the right atrium to measure the cardiac
output.
Let c(t) denote the concentration of the dye at time t and let the dye be injected for the time
interval [0, T ]. Then the cardiac output CO is given by:
$$CO = \frac{A}{\int_0^T c(t)\, dt},$$
where A is the amount of dye injected.
Again, in practical applications, c(t) will be measured at certain equally spaced times over the
interval [0, T]. Thus, all that will be known to a user is some discrete values of c(t) at these
instants of time, from which the integral must be computed numerically.
A solution of this problem with numerical data will be considered later in this Chapter.
Given: the functional values f(x₀), f(x₁), . . . , f(xₙ) of a function f(x) at the points
x₀, x₁, . . . , xₙ in [a, b], where f(x) is not explicitly known,
or
an integrable function f(x) on [a, b].
Compute: an approximate value of $I = \int_a^b f(x)\, dx$ using these functional values or those
computed from the given function.
The numerical techniques discussed in this chapter have the following form:
$$\int_a^b f(x)\, dx \approx I = w_0 f(x_0) + w_1 f(x_1) + \cdots + w_n f(x_n) = w_0 f_0 + w_1 f_1 + \cdots + w_n f_n$$
where x0 , x1 , . . . , xn are called the nodes, and w0 , w1 , . . . , wn are called the weights. Thus,
we can have two types of formulas:
334CHAPTER 7. NUMERICAL DIFFERENTIATION AND NUMERICAL INTEGRATION
Type 1. The (n + 1) nodes are given and the (n + 1) weights are determined by the rule. The
well-known classical quadrature rules, such as the Trapezoidal rule and Simpson’s rule are
examples of this type.
Type 2. Both nodes and weights are determined by the rule. Gaussian quadrature is an
example of this type.
• Find the interpolating polynomial Pn (x) of degree at most n, passing through the (n + 1)
points: (x0 , f0 ), (x1 , f1 ), . . . , (xn , fn ).
• Evaluate $\int_a^b P_n(x)\, dx$ and accept this value as an approximation to I.
Exactness of the Type 1 Quadrature Rules: These rules, by construction, should be exact
for all polynomials of degree less than or equal to n.
We know that there are different ways to construct the unique interpolating polynomial using
different basis functions.
We will describe here the quadrature rules based on monomial and Lagrange interpolations.
$f(x) = 1$: $\int_a^b 1\, dx = w_0 + w_1 + \cdots + w_n$
$\implies b - a = w_0 + w_1 + \cdots + w_n$

$f(x) = x$: $\int_a^b x\, dx = w_0 x_0 + w_1 x_1 + \cdots + w_n x_n$
$\implies \frac{b^2 - a^2}{2} = w_0 x_0 + w_1 x_1 + \cdots + w_n x_n$
...
$f(x) = x^n$: $\int_a^b x^n\, dx = w_0 x_0^n + w_1 x_1^n + \cdots + w_n x_n^n$
$\implies \frac{b^{n+1} - a^{n+1}}{n+1} = w_0 x_0^n + w_1 x_1^n + \cdots + w_n x_n^n$

That is, the weights satisfy the linear system

$w_0 + w_1 + \cdots + w_n = b - a$
$w_0 x_0 + w_1 x_1 + \cdots + w_n x_n = \frac{b^2 - a^2}{2}$
...
$w_0 x_0^n + w_1 x_1^n + \cdots + w_n x_n^n = \frac{b^{n+1} - a^{n+1}}{n+1}$
In matrix-vector notation:
$$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_0 & x_1 & \cdots & x_n \\ \vdots & \vdots & & \vdots \\ x_0^{n-1} & x_1^{n-1} & \cdots & x_n^{n-1} \\ x_0^n & x_1^n & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} w_0 \\ w_1 \\ \vdots \\ w_{n-1} \\ w_n \end{pmatrix} = \begin{pmatrix} b-a \\ \frac{b^2-a^2}{2} \\ \vdots \\ \frac{b^n-a^n}{n} \\ \frac{b^{n+1}-a^{n+1}}{n+1} \end{pmatrix}$$
The readers will recognize that the matrix on the left-hand side is a Vandermonde matrix,
which is nonsingular by virtue of the fact that x0 , x1 , . . . , xn are distinct. The unique
solution of this system will yield the unknowns w0 , w1 , . . . , wn .
Given x₀ < x₁ < · · · < xₙ, there exists a unique set of weights w₀, w₁, . . . , wₙ such that
$$\int_a^b f(x)\, dx \approx w_0 f_0 + w_1 f_1 + \cdots + w_n f_n$$
where fi = f (xi ), i = 0, 1, . . . , n.
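The Vandermonde system above can be solved numerically to obtain the weights for any given set of nodes. A minimal sketch in Python (the nodes and interval below are illustrative choices):

```python
# Sketch: determine Type 1 quadrature weights by solving the Vandermonde system.
import numpy as np

def quadrature_weights(nodes, a, b):
    """Solve V w = m for the weights, where m_k = (b^(k+1) - a^(k+1))/(k+1)
    are the monomial moments over [a, b]."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T  # row k holds nodes**k
    m = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, m)

# With the two nodes x0 = a = 0, x1 = b = 1 we recover the trapezoidal
# weights h/2, h/2 = 0.5, 0.5:
print(quadrature_weights([0.0, 1.0], 0.0, 1.0))
# With three equally spaced nodes we recover Simpson's weights h/3, 4h/3, h/3:
print(quadrature_weights([0.0, 0.5, 1.0], 0.0, 1.0))
```

Since the Vandermonde matrix is nonsingular for distinct nodes, `np.linalg.solve` always produces the unique weight vector, though for large n this system becomes ill-conditioned.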
Special cases. The following two special cases give rise to two famous quadrature rules.
We will now derive Trapezoidal, Simpson’s, and other rules based on Lagrange interpolation.
The use of Lagrange interpolation also will help us derive the error formulas for these rules.
• Given n, find the Lagrange interpolating polynomial Pn (x) of degree at most n, approxi-
mating f (x):
Z b Z b
• Compute Pn (x)dx and accept the result as an approximation to f (x)dx.
a a
with the two nodes x₀ = a and x₁ = b.
Recall that $L_0(x) = \frac{x - x_1}{x_0 - x_1}$ and $L_1(x) = \frac{x - x_0}{x_1 - x_0}$.
Thus, the trapezoidal rule approximation $I_T$ of I is given by
$$I_T = \int_{x_0}^{x_1} \left[ f_0 \frac{x - x_1}{x_0 - x_1} + f_1 \frac{x - x_0}{x_1 - x_0} \right] dx
= \frac{f_0}{x_0 - x_1} \int_{x_0}^{x_1} (x - x_1)\, dx + \frac{f_1}{x_1 - x_0} \int_{x_0}^{x_1} (x - x_0)\, dx$$
$$= \frac{f_0}{x_0 - x_1} \left[ \frac{(x - x_1)^2}{2} \right]_{x_0}^{x_1} + \frac{f_1}{x_1 - x_0} \left[ \frac{(x - x_0)^2}{2} \right]_{x_0}^{x_1} = \frac{x_1 - x_0}{2} (f_0 + f_1).$$
Trapezoidal Rule
$$I_T = \frac{x_1 - x_0}{2} (f_0 + f_1) = \frac{h}{2} (f_0 + f_1) = \frac{b - a}{2} (f(a) + f(b)) \tag{7.49}$$
Since the above formula only gives a crude approximation to the actual value of the integral,
we need to assess the error.
To obtain an error formula for this integral approximation, recall that the error formula for
interpolation with a polynomial of degree at most n is given by
$$E_n(x) = \frac{f^{(n+1)}(\xi(x))\, \Psi_n(x)}{(n+1)!}, \tag{7.50}$$
Since in the case of the trapezoidal rule n = 1, the associated interpolation error is
$$E_1(x) = \frac{f''(\xi(x))\, \Psi_1(x)}{2!},$$
where $\Psi_1(x) = (x - x_0)(x - x_1)$, and $f''(x)$ is the second derivative of f(x).
Integrating this formula we have the following error formula for the Trapezoidal rule
(denoted by ET (x)):
$$E_T = \int_{x_0}^{x_1} \frac{f''(\xi(x))}{2!} (x - x_0)(x - x_1)\, dx = \int_{x_0}^{x_1} \frac{f''(\xi(x))}{2!} \Psi_1(x)\, dx. \tag{7.51}$$
We now show how the above formula can be simplified. The Weighted Mean Value Theorem
(WMT) from calculus will be needed for this purpose.
Note that
(i) $f''(x)$ is continuous on $[x_0, x_1]$ (Hypothesis (i) of WMT is satisfied).
(ii) $\Psi_1(x) = (x - x_0)(x - x_1)$ does not change sign over $[x_0, x_1]$, because for any x
in $[x_0, x_1]$, $(x - x_0) \ge 0$ and $(x - x_1) \le 0$ (Hypothesis (ii) of WMT is satisfied).
So, by applying the WMT to $E_T$, with $g(x) = \Psi_1(x)$ and noting that $h = x_1 - x_0$, we obtain
7.6. NUMERICAL INTEGRATION RULES BASED ON LAGRANGE INTERPOLATION339
$$E_T = \frac{f''(\eta)}{2!} \int_{x_0}^{x_1} (x - x_0)(x - x_1)\, dx = -\frac{h^3}{12} f''(\eta) = -\frac{(b-a)^3}{12} f''(\eta) = -\frac{b-a}{12} h^2 f''(\eta) \tag{7.52}$$
$$\int_{x_0=a}^{x_1=b} f(x)\, dx = \underbrace{\frac{b-a}{2} [f(a) + f(b)]}_{\text{Trapezoidal Rule}} \; \underbrace{-\, \frac{b-a}{12} h^2 f''(\eta)}_{\text{Error}}, \quad a < \eta < b.$$
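A small numerical sketch of the rule and its error bound; the choice f(x) = cos x on [0, 1] is illustrative (there |f''(x)| = |cos x| ≤ 1, so the error bound is (b − a)³/12):

```python
import math

def trapezoid(f, a, b):
    """Basic trapezoidal rule: I_T = (b - a)/2 * (f(a) + f(b))."""
    return (b - a) / 2 * (f(a) + f(b))

a, b = 0.0, 1.0
approx = trapezoid(math.cos, a, b)
exact = math.sin(b) - math.sin(a)
bound = (b - a) ** 3 / 12          # max|f''| = 1 on [0, 1]
print(approx)                       # about 0.7702
print(abs(exact - approx) <= bound)
```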
Exactness of the Trapezoidal Rule: From the above error formula it follows that the trapezoidal
rule is exact only for polynomials of degree 1 or less, because $f''(x) = 0$ for all such
polynomials, and is nonzero whenever f(x) is a polynomial of degree 2 or higher.
Trapezoidal rule approximates the area under the curve y = f (x) from x0 = a to x1 = b by the
area of the trapezoid as shown below:
[Figure: the trapezoid ABCD, with A = x₀ and B = x₁ on the x-axis, D = (x₀, f₀) and C = (x₁, f₁) on the curve y = f(x).]
Note: The area of the trapezoid ABCD = length of the base × average height
$= h \cdot \frac{1}{2}(f_0 + f_1) = \frac{h}{2}(f_0 + f_1) = \frac{h}{2}[f(a) + f(b)]$.
The nodes are $x_0 = a$, $x_1 = \frac{a+b}{2}$, and $x_2 = b$. So,
$$I = \int_{a=x_0}^{b=x_2} f(x)\, dx \approx \int_{x_0}^{x_2} [L_0(x) f_0 + L_1(x) f_1 + L_2(x) f_2]\, dx \tag{7.53}$$
Now, $L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}$, $L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}$, and $L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}$.
Let h be the distance between two consecutive points of interpolation, assumed to be equally
spaced. That is, x1 − x0 = h and x2 − x1 = h.
Substituting these expressions of L0 (x), L1 (x) and L2 (x) into (7.53) and integrating, we obtain
[Exercise] the famous Simpson’s Rule:
$$I_S = \frac{h}{3} (f_0 + 4 f_1 + f_2) \tag{7.54}$$
Noting that
$$h = \frac{b-a}{2}, \quad f_0 = f(x_0) = f(a), \quad f_1 = f(x_1) = f(x_0 + h) = f\left(a + \frac{b-a}{2}\right) = f\left(\frac{a+b}{2}\right), \quad f_2 = f(x_2) = f(b),$$
we can also write $I_S = \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right) + f(b)\right]$.
$$E_2(x) = \frac{f'''(\xi(x))\, \Psi_2(x)}{3!}, \quad \text{where } \Psi_2(x) = (x - x_0)(x - x_1)(x - x_2).$$
Since Ψ2 (x) does change sign in [x0 , x2 ], we can not apply WMT directly to obtain the error
formula for Simpson’s rule. In this case, we use the following modified formula:
Let
(i) Ψn (x) = (x − x0 )(x − x1 ) · · · (x − xn ) be such that it changes sign on (a, b), but
Z b
Ψn (x)dx = 0
a
(ii) $x_{n+1}$ be a point such that $\Psi_{n+1}(x) = (x - x_{n+1}) \Psi_n(x)$ is of one sign in [a, b], and
(iii) f(x) be (n + 2)-times continuously differentiable on [a, b].
To apply the above modified error formula to obtain an error expression for Simpson’s rule, we
note the following:
(i) $\int_{x_0}^{x_2} \Psi_2(x)\, dx = \int_{x_0}^{x_2} (x - x_0)(x - x_1)(x - x_2)\, dx = 0$ (Hypothesis (i) is satisfied).
(ii) Choosing $x_3 = x_1$, $\Psi_3(x) = (x - x_1)^2 (x - x_0)(x - x_2) \le 0$ on $[x_0, x_2]$, so $\Psi_3(x)$ is of one sign (Hypothesis (ii) is satisfied).
Assume further that f(x) is 4 times continuously differentiable (Hypothesis (iii) is satisfied).
Then by (7.55) we have the following modified error formula for Simpson's rule:
$$E_S = \frac{f^{(4)}(\eta)}{4!} \int_{x_0}^{x_2} \Psi_3(x)\, dx = \frac{f^{(4)}(\eta)}{24} \int_{x_0}^{x_2} (x - x_1)^2 (x - x_0)(x - x_2)\, dx = \frac{f^{(4)}(\eta)}{24} \left( -\frac{4}{15} h^5 \right) = -\frac{h^5}{90} f^{(4)}(\eta), \quad a < \eta < b.$$
Substituting $h = \frac{b-a}{2}$, we have
Error in Simpson's Rule: $E_S = -\dfrac{\left(\frac{b-a}{2}\right)^5}{90} f^{(4)}(\eta)$, where $a < \eta < b$.
$$\int_a^b f(x)\, dx = \int_{a=x_0}^{b=x_2} f(x)\, dx = \underbrace{\frac{b-a}{6} \left[ f(a) + 4 f\left(\frac{a+b}{2}\right) + f(b) \right]}_{\text{Simpson's Rule}} \; \underbrace{-\, \frac{\left(\frac{b-a}{2}\right)^5}{90} f^{(4)}(\eta)}_{\text{Error formula}},$$
where $a < \eta < b$.
Since $f^{(4)}(x)$ is zero for all polynomials of degree less than or equal to 3, but is nonzero for
polynomials of higher degree, we conclude from the above error formula:
Simpson’s rule is exact for all polynomials of degree less than or equal to 3.
Remarks: Because of the use of the modified error formula (7.55), the error for Simpson’s rule
is of one order higher than that warranted by the usual error formula for interpolation. That
is why, Simpson’s rule is exact for all polynomials of degree less than or equal to 3, even when
Simpson’s formula is obtained by approximating f (x) by a polynomial of degree 2.
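This exactness claim is easy to check numerically; a short sketch (the test integrands x³ and x⁴ on [0, 2] are illustrative choices):

```python
def simpson(f, a, b):
    """Basic Simpson's rule: I_S = (b-a)/6 * [f(a) + 4 f((a+b)/2) + f(b)]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Exact for cubics: x^3 on [0, 2] integrates to exactly 4.
print(simpson(lambda x: x**3, 0.0, 2.0))  # -> 4.0
# Not exact for x^4 on [0, 2] (exact value is 32/5 = 6.4):
print(simpson(lambda x: x**4, 0.0, 2.0))
```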
A quadrature rule is said to have degree of precision k if its error term is zero for all
polynomials of degree less than or equal to k, but is different from zero for some polynomial
of degree k + 1. Thus, Simpson's rule has degree of precision 3.
Simpson’s rule was developed by approximating f (x) with a polynomial of degree 2. If f (x) is
approximated using a polynomial of degree 3, then we have Simpson’s Three-Eighth Rule
[Exercise]:
Let
(i) $x_0, x_1, x_2, x_3$ be equally spaced points, with
(ii) $h = x_{i+1} - x_i$, $i = 0, 1, 2$.
Then
$$\int_{x_0}^{x_3} f(x)\, dx \approx \frac{3h}{8} \left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right].$$
The error term in this case is of the same order as that of Simpson's rule.
Specifically, the error in Simpson's $\frac{3}{8}$ rule, denoted by $E_{S3/8}$, is given by:
$$E_{S3/8} = -\frac{3 h^5}{80} f^{(4)}(\eta), \quad \text{where } x_0 < \eta < x_3.$$
Remarks: Applications of Simpson's rule and Simpson's three-eighth rule are restricted to
numbers of subintervals that are multiples of two and of three, respectively. Thus, these two
rules are often used in conjunction with each other.
Simpson's rule and Simpson's $\frac{3}{8}$ rule were developed by approximating f(x) by polynomials
of degree 2 and 3, respectively. Yet another rule can be developed by approximating f(x)
by the Hermite interpolating polynomial of degree 3, with the special choice of nodes
$x_0 = x_1 = a$ and $x_2 = x_3 = b$.
This rule, for obvious reasons, is called the corrected trapezoidal rule ($I_{CT}$). The rule and
its error expression are stated below; their proofs are left as an [Exercise].
$$\int_a^b f(x)\, dx \approx I_{CT} = \underbrace{\frac{b-a}{2} [f(a) + f(b)] + \frac{(b-a)^2}{12} [f'(a) - f'(b)]}_{\text{Corrected Trapezoidal Rule}} + \underbrace{\frac{(b-a)^5}{720} f^{(4)}(\eta)}_{\text{Error Formula}}$$
Remarks: Comparing the error formulas of the trapezoidal rule and the corrected trapezoidal
rule, it is obvious that the corrected trapezoidal rule is much more accurate than the trapezoidal
rule.
However, the price to pay for this gain is that $I_{CT}$ requires computation of $f'(a)$ and $f'(b)$.
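A sketch of the corrected trapezoidal rule; the derivative must be supplied by the caller, and the test function cos x on [0, 1] is an illustrative choice:

```python
import math

def corrected_trapezoid(f, fp, a, b):
    """Corrected trapezoidal rule; fp is the derivative f'."""
    h = b - a
    return h / 2 * (f(a) + f(b)) + h**2 / 12 * (fp(a) - fp(b))

# f(x) = cos x, f'(x) = -sin x on [0, 1]; the exact integral is sin(1).
approx = corrected_trapezoid(math.cos, lambda x: -math.sin(x), 0.0, 1.0)
print(approx, abs(approx - math.sin(1.0)))  # about 0.8403, error about 1.2e-3
```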
Example 7.12 (a) Approximate $\int_0^1 \cos x\, dx$ using (i) the trapezoidal rule, (ii) Simpson's
rule, (iii) Simpson's $\frac{3}{8}$ rule, and (iv) the corrected trapezoidal rule.
(b) In each case, compute the maximum error and compare this maximum error with the
actual error, obtained by an analytical formula.
Solution (a).
(i) Trapezoidal Rule Approximation
Input Data: $x_0 = a = 0$, $x_1 = b = 1$, $f(x) = \cos x$
Formula to be used: $I_T = \frac{b-a}{2} [f(a) + f(b)]$
$I_T = \frac{1}{2} [\cos(0) + \cos(1)] = \frac{1}{2} [1 + 0.5403] = 0.7702.$
(ii) Simpson's Rule Approximation
Input Data: $x_0 = 0$, $x_1 = 0.5$, $x_2 = 1$, $f(x) = \cos x$, $a = 0$, $b = 1$
Formula to be used: $I_S = \frac{b-a}{6} \left[ f(a) + 4 f\left(\frac{a+b}{2}\right) + f(b) \right]$
$I_S = \frac{1}{6} [\cos(0) + 4\cos(0.5) + \cos(1)] = \frac{1}{6} [1 + 4 \times 0.8776 + 0.5403] = 0.8418.$
(iii) Simpson's $\frac{3}{8}$ Rule Approximation
Input Data: $x_0 = 0$, $x_1 = \frac{1}{3}$, $x_2 = \frac{2}{3}$, $x_3 = 1$, $h = \frac{1}{3}$
Formula to be used: $I_{S3/8} = \frac{3h}{8} [f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3)]$
$I_{S3/8} = \frac{1}{8} \left[\cos(0) + 3 \cos\left(\frac{1}{3}\right) + 3 \cos\left(\frac{2}{3}\right) + \cos(1)\right] = \frac{1}{8} [1 + 3 \times 0.9450 + 3 \times 0.7859 + 0.5403] = 0.8416.$
(iv) Corrected Trapezoidal Rule Approximation
Input Data: $a = 0$, $b = 1$, $f(x) = \cos x$
Formula to be used: $I_{CT} = \frac{b-a}{2} [f(a) + f(b)] + \frac{(b-a)^2}{12} [f'(a) - f'(b)]$
$I_{CT} = \frac{1}{2} [\cos(0) + \cos(1)] + \frac{1}{12} [-\sin(0) + \sin(1)] = 0.8403.$
Solution (b).
To compute maximum absolute errors and compare them with the actual error, we need
(i) $f''(x) = -\cos x$; $\max_{0 \le x \le 1} |f''(x)| = 1$ (for the trapezoidal rule)
(ii) $f^{(4)}(x) = \cos x$; $\max_{0 \le x \le 1} |f^{(4)}(x)| = 1$ (for Simpson's rule)
(iii) $I = \int_0^1 \cos x\, dx = \sin(1) - \sin(0) = 0.8415$ (analytical value of I)
Observations: (i) Actual absolute error in each case is comparable with the corresponding
maximum (worst possible) error, but is always less than the latter.
(ii) The corrected trapezoidal rule is more accurate than the trapezoidal rule.
The trapezoidal, Simpson's, and Simpson's three-eighth rules, developed in the last section, are
special cases of a more general rule known as the Closed Newton-Cotes (CNC) rule.
An n-point closed Newton-Cotes rule over [a, b] has the (n + 1) nodes $x_i = a + i\,\frac{b-a}{n}$, $i = 0, 1, \ldots, n$.
(Note that the CNC rule includes the end points among the nodes.)
The open Newton-Cotes rule has (n + 1) nodes which do not include the end points.
These nodes are given by $x_i = a + i\,\frac{b-a}{n+2}$, $i = 1, 2, \ldots, n + 1$.
Mid-Point Rule: A well-known example of the open Newton-Cotes rule is the midpoint
rule (with n = 0). Thus, the midpoint rule is based on interpolation of f(x)
with a constant function. The only node in this case is $x_1 = \frac{a+b}{2}$.
So,
$$I_M = \text{Midpoint approximation to the integral } \int_a^b f(x)\, dx = (b - a)\, f\left(\frac{a+b}{2}\right).$$
In this case $\Psi_0(x) = x - x_1 = x - \frac{a+b}{2}$ changes sign in (a, b).
However, note that if we let $x_0 = x_1$, then
$$\Psi_1(x) = (x - x_1)^2 = \left( x - \frac{a+b}{2} \right)^2$$
is always of the same sign. Thus, as in the case of Simpson’s rule, we can derive the error
formula for IM [Exercise]:
$$E_M = \frac{(b-a)^3}{24} f''(\eta), \quad \text{where } a < \eta < b.$$
$$\int_a^b f(x)\, dx = \underbrace{(b - a)\, f\left(\frac{a+b}{2}\right)}_{\text{Midpoint Rule } I_M} + \underbrace{\frac{(b-a)^3}{24} f''(\eta)}_{\text{Error Formula}}, \quad \text{where } a < \eta < b.$$
Remark: Comparing the error terms of IM and IT , we easily see that the midpoint rule is more
accurate than the trapezoidal rule. The following simple example compares the accuracy
of these different rules: trapezoidal, Simpson’s, midpoint, and corrected trapezoidal.
Example 7.13
Apply the midpoint, trapezoidal, corrected trapezoidal, and Simpson’s rule to approximate
$$I = \int_0^1 e^x\, dx.$$
Error Comparisons:
Observations:
(i) As predicted by theory, the corrected trapezoidal rule is more accurate than the trape-
zoidal rule.
(iii) The midpoint rule is also more accurate than the trapezoidal rule.
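The comparison in this example can be reproduced with a few lines; a sketch using the single-interval rules derived above (here f'(x) = eˣ, so the corrected term uses f(a) and f(b) directly):

```python
import math

f, a, b = math.exp, 0.0, 1.0
exact = math.e - 1.0
mid = (b - a) * f((a + b) / 2)                              # midpoint rule
trap = (b - a) / 2 * (f(a) + f(b))                          # trapezoidal rule
corr = trap + (b - a) ** 2 / 12 * (f(a) - f(b))             # corrected trapezoidal
simp = (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))     # Simpson's rule
for name, val in [("midpoint", mid), ("trapezoidal", trap),
                  ("corrected trap.", corr), ("Simpson", simp)]:
    print(f"{name:16s} {val:.6f}  error {abs(val - exact):.2e}")
```

Running this confirms the observations: the corrected trapezoidal and midpoint rules both beat the plain trapezoidal rule.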
To obtain a greater accuracy, the idea then will be to subdivide the interval [a, b] into smaller
intervals, apply these quadrature formulas in each of these smaller intervals and add up the
results to obtain a more accurate approximation. The resulting quadrature rule is called the
composite rule. Thus, a procedure for constructing a composite rule will be as follows:
7.8. THE COMPOSITE RULES 349
• Divide [a, b] into n equal subintervals and let the points of subdivision be
$$x_0 = a, \; x_1 = a + h, \; x_2 = a + 2h, \; \ldots, \; x_n = a + nh = b,$$
where $h = \frac{b-a}{n}$ is the length of each subinterval.
$$\int_a^b f(x)\, dx = \int_{x_0=a}^{x_1} f(x)\, dx + \int_{x_1}^{x_2} f(x)\, dx + \cdots + \int_{x_{n-1}}^{x_n=b} f(x)\, dx$$
Applying the basic trapezoidal rule (7.49) to each of the integrals on the right-hand side and
adding the results, we obtain the composite trapezoidal rule, ICT .
Thus,
$$\int_a^b f(x)\, dx \approx I_{CT} = \frac{h}{2}(f_0 + f_1) + \frac{h}{2}(f_1 + f_2) + \cdots + \frac{h}{2}(f_{n-1} + f_n) = h \left( \frac{f_0}{2} + f_1 + f_2 + \cdots + f_{n-1} + \frac{f_n}{2} \right).$$
Noting that $f_0 = f(x_0) = f(a)$, $f_1 = f(x_1) = f(a + h)$, ..., $f_{n-1} = f(x_{n-1}) = f(a + (n-1)h)$, $f_n = f(x_n) = f(b)$, we have
$$I_{CT} = \frac{h}{2} \left[ f(a) + 2f(a + h) + 2f(a + 2h) + \cdots + 2f(a + (n-1)h) + f(b) \right]$$
Using the summation notation, we have the following formula for the composite trapezoidal
rule:
$$I_{CT} = \frac{h}{2} \left[ f(a) + 2 \sum_{i=1}^{n-1} f(a + ih) + f(b) \right]$$
The error formula for ICT is obtained by adding the individual error terms of the trapezoidal
rule in each of the subintervals. Thus, the error formula for the composite trapezoidal rule,
denoted by ECT , is given by:
$$E_{CT} = -\frac{h^3}{12} \left[ f''(\eta_1) + f''(\eta_2) + \cdots + f''(\eta_n) \right].$$
To simplify the expression within the brackets, we assume that $f''(x)$ is continuous on [a, b].
Then by the Intermediate Value Theorem (IVT), we can write
$$f''(\eta_1) + \cdots + f''(\eta_n) = n f''(\eta), \quad \text{where } \eta_1 < \eta < \eta_n.$$
Thus,
$$E_{CT} = -\frac{n h^3}{12} f''(\eta) = -n\, \frac{b-a}{n} \cdot \frac{h^2}{12} f''(\eta) = -\frac{b-a}{12} h^2 f''(\eta), \quad \eta_1 < \eta < \eta_n,$$
because $h = \frac{b-a}{n}$.
$$\int_{a=x_0}^{b=x_n} f(x)\, dx = \underbrace{\frac{h}{2} \left[ f(a) + 2 \sum_{i=1}^{n-1} f(a + ih) + f(b) \right]}_{\text{Composite Trapezoidal Rule}} \; \underbrace{-\, \frac{b-a}{12} h^2 f''(\eta)}_{\text{Error}}$$
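A sketch of the composite trapezoidal rule; the O(h²) behavior predicted by the error formula can be observed by doubling n (the test integrand cos x on [0, 1] is an illustrative choice):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * s

# The error is O(h^2): doubling n should cut the error by about a factor of 4.
exact = math.sin(1.0)
e1 = abs(composite_trapezoid(math.cos, 0.0, 1.0, 10) - exact)
e2 = abs(composite_trapezoid(math.cos, 0.0, 1.0, 20) - exact)
print(e1 / e2)  # close to 4
```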
Since Simpson's rule was obtained with two subintervals, to derive the composite Simpson's
rule we divide the interval [a, b] into an even number of subintervals, say n = 2m, where m
is a positive integer, apply Simpson's rule over each pair of subintervals, and finally add up
the results.
• Divide the interval [a, b] into an even number n of equal subintervals, giving the pairs
$[x_0, x_2], [x_2, x_4], \ldots, [x_{n-2}, x_n]$. Set $h = \frac{b-a}{n}$.
Then we have
$$\int_a^b f(x)\, dx = \int_{x_0}^{x_2} f(x)\, dx + \int_{x_2}^{x_4} f(x)\, dx + \cdots + \int_{x_{n-2}}^{x_n} f(x)\, dx.$$
• Apply the basic Simpson’s rule (Formula 7.54) to each of the integrals on the right-hand
side and add the results to obtain the Composite Simpson’s rule, ICS .
$$I_{CS} = \frac{h}{3} \left[ (f_0 + 4f_1 + f_2) + (f_2 + 4f_3 + f_4) + \cdots + (f_{n-2} + 4f_{n-1} + f_n) \right] \tag{7.56}$$
$$= \frac{h}{3} \left[ f(a) + f(b) + 4(f_1 + f_3 + \cdots + f_{n-1}) + 2(f_2 + f_4 + \cdots + f_{n-2}) \right].$$
$$E_{CS} = -\frac{h^5}{90} \left[ f^{(4)}(\eta_1) + f^{(4)}(\eta_2) + \cdots + f^{(4)}(\eta_{n/2}) \right], \quad \text{where } x_{2i-2} < \eta_i < x_{2i}, \; i = 1, \ldots, \frac{n}{2}.$$
As before, we can now invoke the IVT to simplify the above error expression. Assuming that
$f^{(4)}(x)$ is continuous on [a, b], by the IVT we can write
$$f^{(4)}(\eta_1) + f^{(4)}(\eta_2) + \cdots + f^{(4)}(\eta_{n/2}) = \frac{n}{2} f^{(4)}(\eta), \quad a < \eta < b.$$
Thus, the error formula for the composite Simpson's rule is simplified to:
$$E_{CS} = -\frac{h^5}{90} \times \frac{n}{2} f^{(4)}(\eta) = -\frac{h^5}{180} \cdot \frac{b-a}{h} f^{(4)}(\eta) = -\frac{h^4}{180} (b - a) f^{(4)}(\eta),$$
since $n = \frac{b-a}{h}$.
$$\int_a^b f(x)\, dx = \underbrace{\frac{h}{3} \left[ f(a) + f(b) + 4(f_1 + f_3 + \cdots + f_{n-1}) + 2(f_2 + f_4 + \cdots + f_{n-2}) \right]}_{\text{Composite Simpson's Rule}} \; \underbrace{-\, \frac{h^4}{180} (b - a) f^{(4)}(\eta)}_{\text{Error}}$$
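A sketch of the composite Simpson's rule; the O(h⁴) error predicted above shows up as a factor of about 16 when n is doubled (cos x on [0, 1] is an illustrative test integrand):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-indexed nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h / 3 * s

exact = math.sin(1.0)
e1 = abs(composite_simpson(math.cos, 0.0, 1.0, 10) - exact)
e2 = abs(composite_simpson(math.cos, 0.0, 1.0, 20) - exact)
print(e1 / e2)  # close to 16, since the error is O(h^4)
```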
Example 7.14
Let $f(x) = \begin{cases} x & \text{if } 0 \le x \le \frac{1}{2} \\ 1 - x & \text{if } \frac{1}{2} \le x \le 1 \end{cases}$
(b) Composite Trapezoidal Rule with $h = \frac{1}{2}$:
$$I_{CT} = \frac{1}{4} \left[ f(0) + f\left(\tfrac{1}{2}\right) \right] + \frac{1}{4} \left[ f\left(\tfrac{1}{2}\right) + f(1) \right] = \frac{1}{4} \left[ 0 + \tfrac{1}{2} \right] + \frac{1}{4} \left[ \tfrac{1}{2} + 0 \right] = \frac{1}{4}.$$
Determine h to approximate
$$I = \int_{0.1}^{10} \frac{1}{t e^t}\, dt$$
with an accuracy of $\epsilon = 10^{-3}$ using the composite trapezoidal rule.
Solution.
Input Data: $f(t) = \frac{1}{t e^t}$; $a = 0.1$, $b = 10$.
Formula to be used: $E_{CT} = -\frac{b-a}{12} h^2 f''(\eta)$.
Step 1. Find the maximum value (in magnitude) of $E_{CT}$ on the interval [0.1, 10].
Since $|f''(t)|$ attains its maximum on [0.1, 10] at t = 0.1,
$$\max_{0.1 \le t \le 10} |f''(t)| = \frac{1}{0.1 \times e^{0.1}} (200 + 20 + 1) = 1999.69.$$
So, the absolute maximum value of $E_{CT}$ is $\frac{9.9}{12} h^2 \times 1999.69$.
Step 2. Find h.
To approximate I with an accuracy of ǫ = 10−3 , h has to be such that the absolute maximum
error is less than or equal to ǫ. That is,
$$\frac{9.9}{12} h^2 \times 1999.69 \le 10^{-3},$$
or
$$h^2 \le 12 \times 10^{-3} \times \frac{1}{9.9 \times 1999.69},$$
or
$$h \le 7.7856 \times 10^{-4}.$$
The readers are invited to repeat the calculations with a smaller interval, [0, 1], and compare
the results [Exercise].
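The step-size computation above can be packaged as a small helper; a sketch using the numbers of this example:

```python
import math

def trapezoid_stepsize(max_f2, a, b, eps):
    """Largest step h with (b - a)/12 * h^2 * max|f''| <= eps,
    from the composite trapezoidal error bound."""
    return math.sqrt(12 * eps / ((b - a) * max_f2))

# Data of the example: [0.1, 10], max|f''| = 1999.69, eps = 1e-3.
h = trapezoid_stepsize(1999.69, 0.1, 10.0, 1e-3)
print(h)  # about 7.7856e-4, matching the hand computation
```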
$I_{CCT}$ and the associated error formula can be developed in the same way as for the composite
trapezoidal and Simpson's rules. We leave the derivations as [Exercises].
$$\int_a^b f(x)\, dx = \underbrace{h(f_1 + f_2 + \cdots + f_{n-1}) + \frac{h}{2} (f(a) + f(b)) + \frac{h^2}{12} [f'(a) - f'(b)]}_{\text{Composite Corrected Trapezoidal Rule}} + \underbrace{\frac{h^4 (b-a)}{720} f^{(4)}(\eta)}_{\text{Error Term}}$$
Starting with two lower-order approximations having the same order of truncation error,
obtained by a certain quadrature rule with step sizes h and $\frac{h}{2}$, respectively, generate
successively higher-order approximations.
We will take the trapezoidal rule (with two subintervals), which is $O(h^2)$, as our basic
quadrature rule and show how Richardson's extrapolation technique can be applied to obtain a
sequence of approximations of $O(h^4)$, $O(h^6)$, ....
7.9. ROMBERG INTEGRATION 355
An implementation of the above idea is possible, because one can prove that the error for the
trapezoidal rule satisfies an equation involving only even powers of h. Thus, proceeding exactly
in the same way, as in the case of numerical differentiation, the following result, analogous to
the Richardson’s theorem for numerical differentiation, can be established [Exercise].
Then, $R_{k-1,j-1}$ and $R_{k,j-1}$ can be combined to give an improved approximation $R_{kj}$ of $O(h_k^{2j})$:
$$R_{k,j} = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1} \tag{7.57}$$
Romberg Table. The numbers {Rkj } are called Romberg numbers and can be arranged
in the form of the following table.
Romberg Table
The entries R11 , R21 , . . . , Rn1 of the 1st column of the Romberg table are just trapezoidal
approximations with spacing h1 , h2 , . . . , hn , respectively. These numbers can be computed by
using the composite trapezoidal rule. However, once R11 is computed, the other entries can be
computed recursively, as shown below, without repeatedly applying the composite trapezoidal
rule with increasing intervals.
$$R_{21} = \frac{h_2}{2} [f(a) + f(b) + 2 f(a + h_2)] = \frac{b-a}{4} \left[ f(a) + f(b) + 2 f\left(a + \frac{b-a}{2}\right) \right]$$
$$R_{k1} = \frac{1}{2} \left[ R_{k-1,1} + h_{k-1} \left( f(a + h_k) + f(a + 3h_k) + \cdots + f(a + (2^{k-1} - 1) h_k) \right) \right]$$
$$= \frac{1}{2} \left[ R_{k-1,1} + h_{k-1} \sum_{i=1}^{2^{k-2}} f(a + (2i - 1) h_k) \right]. \tag{7.58}$$
$$R_{11} = \frac{1}{2} (b - a) [f(a) + f(b)].$$
$$R_{21} = \frac{1}{2} [R_{11} + h_1 f(a + h_2)], \qquad R_{22} = R_{21} + \frac{R_{21} - R_{11}}{3},$$
$$R_{31} = \frac{1}{2} [R_{21} + h_2 (f(a + h_3) + f(a + 3h_3))], \qquad R_{32} = R_{31} + \frac{R_{31} - R_{21}}{3}, \qquad R_{33} = R_{32} + \frac{R_{32} - R_{22}}{15},$$
and in general
$$R_{kj} = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1}, \quad j = 2, 3, \ldots, k.$$
Step 0. Set $h = b - a$.
Step 1. Compute $R_{11} = \frac{h}{2} [f(a) + f(b)]$ (the 1st row).
Step 2. For $k = 2, 3, \ldots, n$ do (compute the 2nd through nth rows):
  Set $h = \frac{h}{2}$.
  Compute $R_{k1} = \frac{1}{2} R_{k-1,1} + h \sum_{i=1}^{2^{k-2}} f(a + (2i - 1) h)$ (Formula 7.58).
  For $j = 2, \ldots, k$ do
    Compute $R_{kj} = R_{k,j-1} + \dfrac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1}$ (Formula 7.57).
  End
  Stop if $|R_{k,k} - R_{k-1,k-1}| < \epsilon$.
End
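The algorithm above can be sketched directly in Python; the test integrand x² ln x on [1, 1.5] reproduces the Romberg table of Example 7.18 below:

```python
import math

def romberg(f, a, b, n):
    """Build an n-row Romberg table following formulas (7.57)/(7.58)."""
    R = [[0.0] * n for _ in range(n)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))
    for k in range(1, n):
        h /= 2
        # Trapezoid refinement: reuse the previous row, add only new midpoints.
        R[k][0] = R[k - 1][0] / 2 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R

R = romberg(lambda x: x * x * math.log(x), 1.0, 1.5, 3)
print(R[0][0], R[2][2])  # R11 and R33 of the table in Example 7.18
```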
Example 7.18
Compute
$$I = \int_1^{1.5} x^2 \ln x\, dx \quad \text{with } n = 3.$$
Input Data: $f(x) = x^2 \ln x$; $a = 1$, $b = 1.5$; $n = 3$; $h = 1.5 - 1 = 0.5$.
Solution.
Step 1. Compute the 1st row of the Romberg table: $R_{11} = \frac{1}{2}(1.5 - 1)[f(1) + f(1.5)] = 0.2280741$.
The complete Romberg table is:
$R_{11} = 0.2280741$
$R_{21} = 0.2012025 \quad R_{22} = 0.1922453$
$R_{31} = 0.1944945 \quad R_{32} = 0.1922585 \quad R_{33} = 0.1922593$
The adaptive quadrature rule is a way to adaptively adjust the step size h so that the error
becomes less than a prescribed tolerance (say $\epsilon$). Let $I_F^h$ denote the approximation produced
by the chosen ("favorite") rule with step size h, and let
$$E_F^h = |I - I_F^h|.$$
If $E_F^h < \epsilon$, the given error tolerance, then stop and accept $I_F^h$ as the approximation.
Step 4. If $E_F^h \ge \epsilon$, then compute $E_F^{h/2}$ as follows: divide the interval into two equal
subintervals, apply the chosen rule to each subinterval, and add the results.
If the error of each integral is less than $\frac{\epsilon}{2}$, then accept $I_F^{h/2}$ as the final approximation.
If not, continue the process of subdividing until the desired accuracy of ǫ is obtained or a
stopping criterion is satisfied.
We will now derive a computable stopping criterion when Simpson's rule is adopted as the
favorite rule.
With step size h, that is, with the points of subdivision a, a + h, and b, we have the
Simpson's rule approximation
$$I_S^h = \frac{h}{3} [f(a) + 4 f(a + h) + f(b)] \quad \text{(Formula 7.70)}$$
with error $E_S^h = -\frac{h^5}{90} f^{(4)}(\eta)$, where $a < \eta < b$.
Now let's use 4 subintervals; that is, this time the points of subdivision are
$$a, \quad a + \frac{h}{2}, \quad a + h, \quad a + \frac{3h}{2}, \quad b.$$
Then, using the composite Simpson's rule (Formula 7.73) with n = 4, that is, with
subintervals of length $\frac{h}{2}$, we have
$$I_S^{h/2} = \frac{h}{6} \left[ f(a) + 4 f\left(a + \frac{h}{2}\right) + 2 f(a + h) + 4 f\left(a + \frac{3h}{2}\right) + f(b) \right]$$
and the error this time is $E_S^{h/2} = -\frac{1}{16} \cdot \frac{h^5}{90} f^{(4)}(\eta)$, where $a < \eta < b$.
That is, $I - I_S^{h/2} = -\frac{1}{16} \cdot \frac{h^5}{90} f^{(4)}(\eta)$, where $a < \eta < b$.
Now the question is: how much better does $I_S^{h/2}$ approximate I than $I_S^h$ does? Subtracting
the two error equations (and treating $f^{(4)}$ as roughly constant over [a, b]), we get
$$I_S^{h/2} - I_S^h \approx -\frac{15}{16} \cdot \frac{h^5}{90} f^{(4)}(\eta), \quad \text{or} \quad \frac{h^5}{90} f^{(4)}(\eta) \approx \frac{16}{15} \left[ I_S^h - I_S^{h/2} \right].$$
Thus,
$$|I - I_S^{h/2}| \approx \frac{1}{15} \left| I_S^h - I_S^{h/2} \right|. \tag{7.59}$$
This is a rather pleasant result, because $I_S^h - I_S^{h/2}$ is easy to compute. Suppose this
quantity is denoted by $\delta$. Then, if we choose h such that $|\delta| < 15\epsilon$, we see that $E_S^{h/2} < \epsilon$.
Thus a stopping criterion for the adaptive Simpson’s rule will be:
$$\frac{1}{15} \left| I_S^h - I_S^{h/2} \right| < \epsilon. \tag{7.60}$$
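A recursive sketch of adaptive Simpson based on the stopping test (7.60). Halving ε on each subinterval is one common design choice (mirroring the ε/2 test above), not something the text prescribes:

```python
import math

def adaptive_simpson(f, a, b, eps):
    """Adaptive Simpson using the test (1/15)|IS_h - IS_h/2| < eps."""
    def basic(a, b):
        return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

    def recurse(a, b, whole, eps):
        m = (a + b) / 2
        left, right = basic(a, m), basic(m, b)
        if abs(left + right - whole) / 15 < eps:
            return left + right            # accept the refined value
        # otherwise subdivide further, splitting the tolerance
        return recurse(a, m, left, eps / 2) + recurse(m, b, right, eps / 2)

    return recurse(a, b, basic(a, b), eps)

approx = adaptive_simpson(math.cos, 0.0, math.pi / 2, 1e-3)
print(approx, abs(approx - 1.0))  # the error is well under 1e-3
```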
Example 7.19
Approximate $\int_0^{\pi/2} \cos x\, dx$ using Simpson's adaptive quadrature rule with an accuracy
of $\epsilon = 10^{-3}$.
$x_0 = a = 0$, $x_1 = \frac{\pi}{4}$, $x_2 = b = \frac{\pi}{2}$. Note that $h = \frac{\pi}{4}$.
Formula to be used: $I_S^h = \frac{h}{3} \left[ f(a) + 4 f\left(\frac{a+b}{2}\right) + f(b) \right]$.
$$I_S^h = \frac{\pi}{12} \left[ f(0) + 4 f\left(\frac{\pi}{4}\right) + f\left(\frac{\pi}{2}\right) \right] = \frac{\pi}{12} \left[ \cos(0) + 4 \cos\frac{\pi}{4} + \cos\frac{\pi}{2} \right] = 1.0023.$$
$x_0 = a = 0$, $x_1 = \frac{\pi}{8}$, $x_2 = \frac{\pi}{4}$, $x_3 = \frac{3\pi}{8}$, $x_4 = b = \frac{\pi}{2}$.
Formula to be used:
$$I_S^{h/2} = \frac{\pi}{24} \left[ f(0) + 4 f\left(\frac{\pi}{8}\right) + 2 f\left(\frac{\pi}{4}\right) + 4 f\left(\frac{3\pi}{8}\right) + f\left(\frac{\pi}{2}\right) \right] = \frac{\pi}{24} \left[ \cos(0) + 4 \cos\frac{\pi}{8} + 2 \cos\frac{\pi}{4} + 4 \cos\frac{3\pi}{8} + \cos\frac{\pi}{2} \right] = 1.0001.$$
Now $\frac{1}{15} \left| I_S^h - I_S^{h/2} \right| = 1.4667 \times 10^{-4}$. Since this is less than $\epsilon = 10^{-3}$, we can stop.
Note that the actual error is
$$\left| \int_0^{\pi/2} \cos x\, dx - I_S^{h/2} \right| = 10^{-4}.$$
$$\int_a^b f(x)\, dx \approx w_0 f(x_0) + w_1 f(x_1) + \cdots + w_{n-1} f(x_{n-1}), \tag{7.61}$$
where the numbers $x_0, x_1, \ldots, x_{n-1}$, called nodes, are given, and the numbers $w_0, \ldots, w_{n-1}$,
called weights, are determined by a quadrature rule.
For example, in the
• Trapezoidal Rule: $\int_a^b f(x)\, dx \approx \frac{h}{2} [f(x_0) + f(x_1)] = \frac{h}{2} f(x_0) + \frac{h}{2} f(x_1)$, the nodes $x_0$ and $x_1$
are specified and the weights $w_0 = w_1 = \frac{h}{2}$ are determined by the rule.
In fact, recall that in this case $w_0$ and $w_1$ are determined as follows:
$$w_0 = \int_{x_0}^{x_1} \frac{x - x_1}{x_0 - x_1}\, dx = \frac{x_1 - x_0}{2} = \frac{h}{2}, \qquad w_1 = \int_{x_0}^{x_1} \frac{x - x_0}{x_1 - x_0}\, dx = \frac{x_1 - x_0}{2} = \frac{h}{2}.$$
• Simpson's Rule: $\int_a^b f(x)\, dx \approx \frac{h}{3} [f(x_0) + 4 f(x_1) + f(x_2)] = \frac{h}{3} f(x_0) + \frac{4h}{3} f(x_1) + \frac{h}{3} f(x_2)$.
The nodes $x_0$, $x_1$, and $x_2$ are specified and the weights $w_0 = w_2 = \frac{h}{3}$ and $w_1 = \frac{4h}{3}$ are
determined by Simpson's rule.
7.11. GAUSSIAN QUADRATURE 363
It is natural to wonder if we can devise a quadrature rule by determining both the nodes and
the weights. The idea is to obtain a quadrature rule that is exact for all polynomials of degree
less than or equal to 2n - 1, the largest class of polynomials for which one can hope the rule
to be exact: we have 2n parameters (n nodes and n weights), and a polynomial of degree
2n - 1 has exactly 2n coefficients. The resulting rule is known as the Gaussian quadrature rule.
We first derive this rule in the simple cases when n = 2 and n = 3 with the interval [−1, 1].
Case n = 2. Here the abscissas are $x_0$ and $x_1$. The nodes $x_0$, $x_1$ and the weights $w_0$, $w_1$ are
to be found such that the rule will be exact for all polynomials of degree less than or equal to 3.
Since the polynomials 1, x, x2 , and x3 form a basis of all polynomials of degree less than or
equal to 3, we can set f (x) = 1, x, x2 and x3 successively in the formula (7.61) to determine w0
and w1 .
• $f(x) = 1$: $\int_{-1}^1 1\, dx = w_0 f(x_0) + w_1 f(x_1)$, or $2 = w_0 + w_1$.
• $f(x) = x$: $\int_{-1}^1 x\, dx = w_0 f(x_0) + w_1 f(x_1)$, or $0 = w_0 x_0 + w_1 x_1$.
• $f(x) = x^2$: $\int_{-1}^1 x^2\, dx = w_0 f(x_0) + w_1 f(x_1)$, or $\frac{2}{3} = w_0 x_0^2 + w_1 x_1^2$.
• $f(x) = x^3$: $\int_{-1}^1 x^3\, dx = w_0 f(x_0) + w_1 f(x_1)$, or $0 = w_0 x_0^3 + w_1 x_1^3$.
That is,
$$w_0 + w_1 = 2$$
$$w_0 x_0 + w_1 x_1 = 0$$
$$w_0 x_0^2 + w_1 x_1^2 = \frac{2}{3} \tag{7.62}$$
$$w_0 x_0^3 + w_1 x_1^3 = 0$$
Solving this system yields $w_0 = w_1 = 1$ and $x_0 = -\frac{1}{\sqrt{3}}$, $x_1 = \frac{1}{\sqrt{3}}$, so that
$$\int_{-1}^1 f(x)\, dx \approx f\left(-\frac{1}{\sqrt{3}}\right) + f\left(\frac{1}{\sqrt{3}}\right).$$
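The resulting two-point rule is easy to verify numerically; a sketch (the cubic test integrand is an illustrative choice):

```python
import math

def gauss2(f):
    """Two-point Gauss-Legendre rule on [-1, 1]: nodes ±1/sqrt(3), weights 1."""
    t = 1.0 / math.sqrt(3.0)
    return f(-t) + f(t)

# Exact for all cubics: f(t) = t^3 + t^2 integrates to 2/3 over [-1, 1].
print(gauss2(lambda t: t**3 + t**2))  # -> 0.666666...
```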
Case n=3. Here we seek to find x0 , x1 , and x2 ; and w0 , w1 , and w2 such that the rule will be
exact for all polynomials of degree up to five (note that n = 3; so 2n − 1 = 5).
Taking f (x) = 1, x, x2 , x3 , x4 , and x5 , and proceeding exactly as above, we can show [Exercise]
that the following systems of equations are obtained:
$$w_0 + w_1 + w_2 = 2$$
$$w_0 x_0 + w_1 x_1 + w_2 x_2 = 0$$
$$w_0 x_0^2 + w_1 x_1^2 + w_2 x_2^2 = \frac{2}{3} \tag{7.63}$$
$$w_0 x_0^3 + w_1 x_1^3 + w_2 x_2^3 = 0$$
$$w_0 x_0^4 + w_1 x_1^4 + w_2 x_2^4 = \frac{2}{5}$$
$$w_0 x_0^5 + w_1 x_1^5 + w_2 x_2^5 = 0$$
The above nonlinear system is rather difficult to solve. However, it turns out that the system
will be satisfied if x0 , x1 , and x2 are chosen as the roots of an orthogonal polynomial, called
Legendre polynomial and with this particular choice of x’s, the weights, w0 , w1 , and w2 are
computed as
$$w_i = \int_{-1}^1 L_i(x)\, dx, \quad i = 0, 1, 2,$$
where $L_i(x)$ is the ith Lagrange polynomial of degree 2 (on the three nodes). This is also true
for n = 2, and in fact for any n, as the following discussion shows.
$$\int_a^b w(x)\, P_n(x)\, P_m(x)\, dx = 0, \quad n \ne m.$$
As in the case of Chebyshev polynomials, the Legendre polynomials can also be generated
recursively. Starting from $P_0(x) = 1$ and $P_1(x) = x$, the Legendre polynomials $\{P_k(x)\}$ can be
generated from the recursive relation
$$P_{k+1}(x) = \frac{(2k+1)\, x\, P_k(x) - k\, P_{k-1}(x)}{k+1}, \quad k = 1, 2, \ldots$$
Thus,
$k = 1$: $P_2(x) = \dfrac{3x P_1(x) - P_0(x)}{2} = \frac{3}{2} x^2 - \frac{1}{2}$ (Legendre polynomial of degree 2)
$k = 2$: $P_3(x) = \dfrac{5x P_2(x) - 2 P_1(x)}{3} = \frac{5}{2} x^3 - \frac{3}{2} x$ (Legendre polynomial of degree 3)
and so on.
The following theorem shows how to choose the nodes $x_0, x_1, \ldots, x_{n-1}$ and the weights
$w_0, w_1, \ldots, w_{n-1}$ such that the quadrature formula $\sum_{i=0}^{n-1} w_i f(x_i)$ is exact for all
polynomials of degree less than or equal to 2n - 1.
Theorem 7.20. (Choosing the Nodes and Weights in n-point Gaussian Quadrature)
Let
(i) the nodes $x_0, x_1, \ldots, x_{n-1}$ be chosen as the n zeros of the nth-degree Legendre
polynomial $P_n(x)$, and
(ii) the weights be given by
$$w_i = \int_{-1}^1 L_i(x)\, dx, \quad i = 0, \ldots, n - 1,$$
where $L_i(x)$ is the ith Lagrange polynomial, each of degree n - 1, given by
$$L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n-1} \frac{x - x_j}{x_i - x_j}. \tag{7.64}$$
Then
$$\int_{-1}^1 P(x)\, dx = \sum_{i=0}^{n-1} w_i P(x_i). \tag{7.65}$$
Proof. Case 1: First, assume that P(x) is a polynomial of degree at most n - 1. Write P(x)
in Lagrange form as
$$P(x) = \sum_{i=0}^{n-1} P(x_i)\, L_i(x),$$
where $x_0, x_1, \ldots, x_{n-1}$ are the zeros of the Legendre polynomial of degree n. This
representation is exact, since the error term
is zero by virtue of the fact that P(x) is a polynomial of degree at most n - 1, so its nth
derivative $P^{(n)}(x)$ is identically zero.
So, integrating (7.84) from x = -1 to x = 1 and noting that $P(x_i)$, $i = 0, \ldots, n-1$, are
constants, we obtain
$$\int_{-1}^1 P(x)\, dx = P(x_0) \int_{-1}^1 L_0(x)\, dx + P(x_1) \int_{-1}^1 L_1(x)\, dx + \cdots + P(x_{n-1}) \int_{-1}^1 L_{n-1}(x)\, dx.$$
Remembering that $w_i = \int_{-1}^1 L_i(x)\, dx$, $i = 0, \ldots, n-1$, the above equation becomes
$$\int_{-1}^1 P(x)\, dx = w_0 P(x_0) + w_1 P(x_1) + \cdots + w_{n-1} P(x_{n-1}) = \sum_{i=0}^{n-1} w_i P(x_i).$$
Case 2: Next, let's assume that the degree of P(x) is at most 2n - 1, but at least n. Dividing
P(x) by $P_n(x)$, we can write P(x) in the form
$$P(x) = P_n(x)\, Q_{n-1}(x) + R_{n-1}(x),$$
where $P_n(x)$ is the nth-degree Legendre polynomial and $Q_{n-1}(x)$ and $R_{n-1}(x)$ are polynomials
of degree at most n - 1. Substituting $x = x_i$, we get
Since the $x_i$ are the zeros of $P_n(x)$, $P_n(x_i) = 0$, and thus $P(x_i) = R_{n-1}(x_i)$.
By the orthogonality property of the Legendre polynomial $P_n(x)$, $\int_{-1}^1 Q_{n-1}(x)\, P_n(x)\, dx = 0$.
So, $\int_{-1}^1 P(x)\, dx = \int_{-1}^1 R_{n-1}(x)\, dx$.
Again, since $R_{n-1}(x)$ is a polynomial of degree at most n - 1, we can write, by Case 1:
$$\int_{-1}^1 R_{n-1}(x)\, dx = \sum_{i=0}^{n-1} w_i R_{n-1}(x_i).$$
Thus,
$$\int_{-1}^1 P(x)\, dx = \sum_{i=0}^{n-1} w_i R_{n-1}(x_i) = \sum_{i=0}^{n-1} w_i P(x_i).$$
Step 1. Pick n.
Step 2. Compute the n zeros $x_0, x_1, \ldots, x_{n-1}$ of the nth-degree Legendre polynomial
(or read them from a table).
Step 3. Compute the weights $w_0, w_1, \ldots, w_{n-1}$ (or read them from a table), and form
$$I_{GL}^n = w_0 f(x_0) + w_1 f(x_1) + \cdots + w_{n-1} f(x_{n-1}).$$
Remarks: The zeros of the Legendre polynomials and the corresponding weights are often
available from a table.
Working with an arbitrary interval [a, b]: So far our discussion of Gaussian quadrature
has been about approximating $\int_{-1}^1 f(x)\, dx$. If we need to approximate $\int_a^b f(x)\, dx$
by Gaussian quadrature, we must transform the interval [a, b] into [-1, 1] as follows:
Substitute
$$x = \frac{1}{2} [(b - a)\, t + a + b].$$
Then
$$dx = \frac{b-a}{2}\, dt,$$
and
$$\int_a^b f(x)\, dx = \int_{-1}^1 f\left( \frac{(b-a)t + a + b}{2} \right) \frac{b-a}{2}\, dt.$$
Thus, to approximate $I = \int_a^b f(x)\, dx$ using the Gauss quadrature rule, do the following:
Step 1. Transform the integral $\int_a^b f(x)\, dx$ to $\frac{b-a}{2} \int_{-1}^1 f\left( \frac{(b-a)t + a + b}{2} \right) dt = \frac{b-a}{2} \int_{-1}^1 F(t)\, dt$.
Step 2. Apply the n-point Gauss quadrature rule to $\int_{-1}^1 F(t)\, dt$.
Step 3. Multiply the result of Step 2 by $\frac{b-a}{2}$ to approximate I.
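Steps 1-3 can be sketched in Python; here NumPy's `numpy.polynomial.legendre.leggauss` supplies the nodes and weights instead of a printed table:

```python
import numpy as np

def gauss_ab(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = ((b - a) * t + a + b) / 2              # Step 1: transform the nodes
    return (b - a) / 2 * np.sum(w * f(x))      # Steps 2-3

# Data of Example 7.21: 3-point rule for the integral of e^{-x} over [1, 2];
# the result is close to the exact value 0.2325.
print(gauss_ab(lambda x: np.exp(-x), 1.0, 2.0, 3))
```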
Error in n-point Gaussian Quadrature: It can be shown that the error in the n-point
Gaussian quadrature rule is given by
$$E_G^n = \frac{f^{(2n)}(\xi)}{(2n)!} \int_a^b [\psi(x)]^2\, dx,$$
where $\psi(x) = (x - x_0)(x - x_1) \cdots (x - x_{n-1})$.
Example 7.21
Apply the 3-point Gaussian quadrature rule to approximate $\int_1^2 e^{-x}\, dx$.
Input Data: $f(x) = e^{-x}$; $a = 1$, $b = 2$; $n = 3$.
Solution.
Step 1. Transform the integral to change the interval from [1, 2] to [-1, 1]:
$$\int_1^2 f(x)\, dx = \int_1^2 e^{-x}\, dx = \int_{-1}^1 f\left( \frac{t+3}{2} \right) \frac{1}{2}\, dt = \frac{1}{2} \int_{-1}^1 e^{-\frac{t+3}{2}}\, dt.$$
Step 2. Approximate $\int_{-1}^1 e^{-\frac{t+3}{2}}\, dt$ using the Gauss-Legendre rule with n = 3.
The three nodes are the zeros of $P_3(x)$: $t_0 = -0.7746$, $t_1 = 0$, $t_2 = 0.7746$;
the corresponding weights are (to 4 significant digits) $w_0 = 0.5555$, $w_1 = 0.8889$, $w_2 = 0.5555$.
Z 2
0.4652
Step 3. Compute f (x)dx = = 0.2326.
1 2
Z 2
Exact Solution: e−x dx = 0.2325 (up to four significant digits).
1
|0.2325 − 0.2326|
Relative Error: = 4.3011 × 10−4
|0.2325|
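Example 7.21 is easy to reproduce in a few lines (variable names below are illustrative):

```python
import math

# 3-point Gauss-Legendre data on [-1, 1]
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

# Step 1: the transformed integrand F(t) = e^{-(t+3)/2} on [-1, 1]
def F(t):
    return math.exp(-(t + 3.0) / 2.0)

# Step 2: apply the 3-point rule to F on [-1, 1]
inner = sum(w * F(t) for t, w in zip(nodes, weights))

# Step 3: multiply by (b - a)/2 = 1/2
approx = inner / 2.0

exact = math.exp(-1.0) - math.exp(-2.0)  # e^{-1} - e^{-2}
```

Both `approx` and `exact` round to 0.2325; the 3-point rule is already accurate here to roughly seven digits.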
7.12. EXERCISES ON PART II
7.1. (Computational) Suppose $x_0 = 0$, $x_1 = 1$, and $x_2 = 2$ are the three nodes and we are required to find a quadrature rule of the form $\int_0^2 f(x)\,dx = w_0 f(0) + w_1 f(1) + w_2 f(2)$ which should be exact for all polynomials of degree less than or equal to 2. What are $w_0$, $w_1$, and $w_2$?
Z 1
7.2. (Computational) Suppose a quadrature rule of the form f (x)dx = w0 f (0) + f (x1 )
0
has to be devised such that it is exact for all polynomials of degree less than or equal to
1. What are the values of x1 and w0 ?
7.3. (Computational) Is it possible to devise a quadrature rule of the form $\int_1^{20} f(x)\,dx = w_0 f(1) + w_1 f(2) + w_2 f(3) + \cdots + w_{19} f(20)$ that is exact for all polynomials of as high a degree as possible? Explain why solving the system for the weights will not be computationally effective.
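Exercises 7.1 and 7.3 both rest on the method of undetermined coefficients: impose exactness on $1, x, \ldots, x^{n-1}$ and solve the resulting Vandermonde-type linear system for the weights. A sketch in exact rational arithmetic (`solve_weights` is a hypothetical helper, not a library routine); note that, as Exercise 7.3 hints, this system becomes severely ill-conditioned in floating point when there are many equally spaced nodes:

```python
from fractions import Fraction as Fr

def solve_weights(nodes, a, b):
    """Weights making sum(w_i * f(x_i)) exact for 1, x, ..., x^(n-1)
    on [a, b], via Gaussian elimination on the Vandermonde-type system."""
    n = len(nodes)
    # Row k enforces: sum_i x_i^k * w_i = integral of x^k over [a, b]
    A = [[Fr(x) ** k for x in nodes] for k in range(n)]
    rhs = [(Fr(b) ** (k + 1) - Fr(a) ** (k + 1)) / (k + 1) for k in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            A[r] = [arj - m * acj for arj, acj in zip(A[r], A[col])]
            rhs[r] -= m * rhs[col]
    # Back substitution
    w = [Fr(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = (rhs[r] - s) / A[r][r]
    return w
```

For instance, `solve_weights([0, 1], 0, 1)` recovers the trapezoidal weights $[1/2, 1/2]$; the same routine answers Exercise 7.1 directly.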
7.4. (Computational) Using the appropriate data from Exercise 7.1 in (Exercises on Nu-
merical Differentiation),
7.5. (Computational) Using the appropriate data of Exercise 7.2, approximate the following integrals:
(a) $\int_0^\pi \sqrt{x}\,\sin x\,dx$
(b) $\int_0^\pi \sin(\sin(\sin x))\,dx$
(c) $\int_0^2 \sqrt{x + \sqrt{x + \sqrt{x}}}\,dx$
7.6. (Analytical) Derive the trapezoidal and Simpson’s rule using Newton’s interpolation
with forward differences.
7.7. (Analytical)
7.9. (Analytical) Derive the corrected trapezoidal rule with error formula.
7.10. Derive the error formula for Simpson's rule using a Taylor series of order 3 about $x_1$.
7.11. (Computational) Using the appropriate data set of Exercise 1 in Section 7.3,
7.13. True or False. The degree of precision of a quadrature rule is the degree of the interpo-
lating polynomial on which the rule is based. Explain your answer.
7.14. (a) Show that when Simpson's rule is applied to $\int_0^{2\pi} \sin x\,dx$, there is zero error, assuming no round-off.
(b) Obtain a generalization to the other integrals $\int_a^b f(x)\,dx$. Show that Simpson's rule is exact for $\int_0^{\pi/2} \cos^2 x\,dx$.
7.15. Prove that the quadrature rule of the form $\int_1^3 f(x)\,dx \approx w_0 f(1) + w_1 f(2) + w_2 f(3)$ that is exact for all polynomials of as high a degree as possible is nothing but Simpson's rule.
7.13. MATLAB PROBLEMS ON NUMERICAL QUADRATURE AND THEIR APPLICATIONS
M7.1