
UNIVERSITY OF SOUTH AFRICA

COS2633 NUMERICAL METHODS 1 May/June 2014

SOLUTIONS

QUESTION 1

(a) Consider the equation f (x) = 0.


(i) Describe the false position (regula falsi) method for solving the equation f(x) = 0. (2)
The false position (regula falsi) method is an iterative method for solving equations of the form f(x) = 0. We start with two initial approximations p0 and p1, whose images q0 = f(p0) and q1 = f(p1) have opposite signs, so that the root is bracketed. We then set p as the x-value of the intersection of the line joining (p0, q0) and (p1, q1) with the x-axis:

p = p1 − q1(p1 − p0)/(q1 − q0);  q = f(p)

(p, q) becomes the new (p0, q0) if q and q0 have the same sign, or the new (p1, q1) if q and q1 have the same sign, so that the root remains bracketed. We then repeat the process until we find a p such that f(p) = 0 or the distance between p and p1 is less than a given tolerance.
(ii) Explain briefly the main difference between the false position method and the secant method. (2)
The difference between the false position method and the secant method is that in the false position
method, every successive pair of approximations of the root actually brackets the root, which is not
a requirement for the secant method.
(iii) Give two examples of methods which find an approximate solution to f(x) = 0 by using a linear (2)
approximation of the function f.
Two examples are the false position method and the secant method, both of which replace f by the straight line through two points on its graph. (The bisection method, by contrast, uses only the sign of f at the midpoint, not a linear approximation of f.)
(iv) Give two examples of methods which find an approximate solution to f(x) = 0 by using a non-linear (2)
approximation of the function f.
Two examples are Müller's method (which interpolates f by a quadratic through three points), and fixed-point iteration on a non-linear g, for example g(x) = x − f(x), or g(x) = x − f(x)/f′(x), which is Newton's method.

(b) Assume that we need to find the roots of (4)

x² + 1/x = 20.

Give two different fixed-point iteration algorithms for finding the root.
We have f(x) = x² + 1/x − 20, and thus f′(x) = 2x − 1/x². Hence we can apply the fixed-point method as follows:
• On g(x) = x − f(x) = x − x² − 1/x + 20;
• On g(x) = x − f(x)/f′(x) = x − (x² + 1/x − 20)/(2x − 1/x²) = x(x³ + 20x − 2)/(2x³ − 1). This is Newton's method.
Note: There are many other options.
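As an illustrative check (not part of the exam memo), the Newton-form scheme can be run numerically; the helper name `fixed_point` is our own. The simpler scheme g(x) = x − f(x) is omitted because |g′(x)| > 1 near the positive root, so it does not converge without damping.

```python
def fixed_point(g, p0, tol=1e-12, max_iter=200):
    """Iterate p = g(p) until two successive values agree within tol."""
    for _ in range(max_iter):
        p = g(p0)
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("fixed-point iteration did not converge")

# Newton form: g(x) = x - f(x)/f'(x) for f(x) = x^2 + 1/x - 20
g_newton = lambda x: x - (x**2 + 1/x - 20) / (2*x - 1/x**2)

root = fixed_point(g_newton, 4.0)   # converges to the positive root (about 4.447)
```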

(c) Use the false position method to approximate the root of f (x) = x2 + 2x − 3. Use x0 = 0 and x1 = 3 as (5)
starting values. Complete two iterations.


• Iteration 1: We have

f(x0) = f(0) = −3 < 0;  f(x1) = f(3) = 12 > 0

Hence

p = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) = 3 − 12 × (3 − 0)/(12 − (−3)) = 3/5
f(p) = (3/5)² + 6/5 − 3 = −36/25

• Iteration 2: f(p) has the same sign as f(0), hence x0 = 3/5 and f(x0) = −36/25. Hence

p = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) = 3 − 12 × (3 − 3/5)/(12 − (−36/25)) = 6/7

(d) Suppose Newton's method is to be used to solve the following system of non-linear equations: (8)

x = x² + y
xy + 2y = 1

Write down the system of equations necessary to complete one iteration, using x0 = 1 and y0 = 2 as starting values. Do not solve the system.
We define

F(x, y) = [ x − x² − y  ]
          [ xy + 2y − 1 ]

Then the Jacobian of F is

JF(x, y) = [ 1 − 2x   −1    ]
           [   y      x + 2 ]

and its inverse is

JF⁻¹(x, y) = (1/D) [ x + 2     1    ]
                   [  −y     1 − 2x ]

where D = det JF(x, y) = (1 − 2x)(x + 2) + y = −2x² − 3x + y + 2.
Newton's algorithm (recurrence relation) is given by

(x[n+1], y[n+1])ᵀ = (x[n], y[n])ᵀ − JF⁻¹(x[n], y[n]) F(x[n], y[n])

where Dn = det JF(x[n], y[n]) = (1 − 2x[n])(x[n] + 2) + y[n] = −2(x[n])² − 3x[n] + y[n] + 2.
After substitution, we get

x[n+1] = x[n] − (1/Dn)[ (x[n] + 2)(x[n] − (x[n])² − y[n]) + (x[n]y[n] + 2y[n] − 1) ]
y[n+1] = y[n] − (1/Dn)[ −y[n](x[n] − (x[n])² − y[n]) + (1 − 2x[n])(x[n]y[n] + 2y[n] − 1) ]

With the given starting values x[0] = 1 and y[0] = 2, we have F(1, 2) = (−2, 5)ᵀ and JF(1, 2) = [ −1 −1 ; 2 3 ], so the system to be written down for the first iteration is

[ −1  −1 ] [Δx]   [  2 ]
[  2   3 ] [Δy] = [ −5 ]

with (x[1], y[1]) = (1 + Δx, 2 + Δy); the question asks us not to solve it.
Note: There are many options, depending on how you choose F (at least four obvious options), but the reasoning and procedure are the same.
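Although the exam asks only for the system, the iteration is easy to carry out as a check. This sketch (our own code, not part of the memo) applies the component formulas above once from (1, 2):

```python
# One Newton step for F(x, y) = (x - x^2 - y, x*y + 2*y - 1), using the
# explicit 2x2 Jacobian inverse derived above.
def newton_step(x, y):
    f1 = x - x**2 - y
    f2 = x*y + 2*y - 1
    d = (1 - 2*x)*(x + 2) + y            # det of the Jacobian
    # J^{-1} = (1/d) * [[x+2, 1], [-y, 1-2x]]
    dx = ((x + 2)*f1 + f2) / d
    dy = (-y*f1 + (1 - 2*x)*f2) / d
    return x - dx, y - dy

x1, y1 = newton_step(1.0, 2.0)           # one iteration from (1, 2)
```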
[25]


QUESTION 2

(a)
(i) Describe the Gauss-Seidel iterative method for approximating the solution to the system of linear (7)
equations, Ax = b, where A is an n × n matrix and b is an n−vector. Then also give the matrix
formulation of this method.
The Gauss-Seidel method uses the transformation of the matrix A in the form A = D + L + U where
D is the main diagonal, L is the lower triangular part of A excluding the main diagonal, and U is
the upper triangular part of A, also excluding the main diagonal. From A = D + L + U , we have
(provided D + L is invertible)

Ax = b ⇐⇒ (D + L + U )x = b
⇐⇒ (D + L)x = −U x + b
⇐⇒ x = −(D + L)−1 U x + (D + L)−1 b
This leads to the recurrence relation

(D + L)x[n+1] = −U x[n] + b ⇐⇒ x[n+1] = −(D + L)−1 U x[n] + (D + L)−1 b

Note: If A = D − L − U , the recurrence relation becomes

(D − L)x[n+1] = U x[n] + b ⇐⇒ x[n+1] = (D − L)−1 U x[n] + (D − L)−1 b

And if x = (x1, x2, . . . , xn), this can also be written componentwise as

(1)   xi[n+1] = (1/aii) [ bi − Σ_{j=1}^{i−1} aij xj[n+1] − Σ_{j=i+1}^{n} aij xj[n] ]

At each step, the components already computed are involved in the computation of the following
(remaining) components.

(ii) Use the Gauss-Seidel iterative method to find the solution to the linear system (6)

[ 2  −3 ] [x]   [ 1 ]
[ 3   1 ] [y] = [ 7 ]

Start with the initial value (2, 2), and do two iterations. Is the solution you found after two iterations the exact solution to the system? Justify your answer!
Using the procedure described in (1) above, we have
Iteration 1

x1 = (b1 − a12 y0)/a11 = (1 + 3 × 2)/2 = 7/2
y1 = (b2 − a21 x1)/a22 = (7 − 3 × 7/2)/1 = −7/2

Iteration 2

x2 = (b1 − a12 y1)/a11 = (1 − 3 × 7/2)/2 = −19/4
y2 = (b2 − a21 x2)/a22 = (7 + 3 × 19/4)/1 = 85/4

Indeed (x2, y2) is not the exact solution, and this can be checked by straightforward substitution.
The exact solution is actually (x, y) = (2, 1).
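The two iterations can be verified with a small script (illustrative; the names are our own). Note that the iterates move away from (2, 1): the coefficient matrix is not diagonally dominant, and in fact this Gauss-Seidel iteration diverges.

```python
# Component form of Gauss-Seidel for a 2x2 system: each new value is used
# as soon as it is computed.
def gauss_seidel_2x2(a, b, x, y, n_iter):
    history = []
    for _ in range(n_iter):
        x = (b[0] - a[0][1] * y) / a[0][0]
        y = (b[1] - a[1][0] * x) / a[1][1]
        history.append((x, y))
    return history

A = [[2, -3], [3, 1]]
b = [1, 7]
steps = gauss_seidel_2x2(A, b, 2.0, 2.0, 2)   # [(7/2, -7/2), (-19/4, 85/4)]
```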

(b)


(i) Find the inverse of the matrix (7)

A = [  2  −1   1 ]
    [  2   2   2 ]
    [ −1  −1   2 ]

by performing Gaussian elimination on the larger augmented matrix [A | I], where I denotes the 3 × 3 identity matrix. Check your answer by computing the product AA⁻¹. You can work with fractions to be exact.
We start with the augmented matrix and perform the row operations.

[  2   −1     1  |   1     0    0 ]
[  2    2     2  |   0     1    0 ]   Row2 ← Row2 − Row1
[ −1   −1     2  |   0     0    1 ]   Row3 ← Row3 + (1/2)Row1

[  2   −1     1  |   1     0    0 ]
[  0    3     1  |  −1     1    0 ]
[  0  −3/2  5/2  |  1/2    0    1 ]   Row3 ← Row3 + (1/2)Row2

[  2   −1     1  |   1     0    0 ]
[  0    3     1  |  −1     1    0 ]
[  0    0     3  |   0    1/2   1 ]   Row3 ← (1/3)Row3

[  2   −1     1  |   1     0    0  ]
[  0    3     1  |  −1     1    0  ]   Row2 ← (1/3)(Row2 − Row3)
[  0    0     1  |   0    1/6  1/3 ]

[  2   −1     1  |   1     0     0  ]   Row1 ← (1/2)(Row1 + Row2 − Row3)
[  0    1     0  | −1/3   5/18 −1/9 ]
[  0    0     1  |   0    1/6   1/3 ]

[  1    0     0  |  1/3   1/18 −2/9 ]
[  0    1     0  | −1/3   5/18 −1/9 ]
[  0    0     1  |   0    1/6   1/3 ]

hence the inverse of A is

B = A⁻¹ = [  1/3   1/18  −2/9 ]
          [ −1/3   5/18  −1/9 ]
          [   0    1/6    1/3 ]

It can be verified by straightforward multiplication that indeed AB = BA = I.
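The elimination can be mirrored in exact rational arithmetic. This is our own Gauss-Jordan sketch (it assumes nonzero pivots, which holds for this matrix), including the AA⁻¹ = I check:

```python
from fractions import Fraction

# Gauss-Jordan inversion of A over exact rationals.
def invert(a):
    n = len(a)
    # Augmented matrix [A | I] with Fraction entries.
    aug = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        aug[col] = [v / aug[col][col] for v in aug[col]]   # scale the pivot row
        for r in range(n):                                 # eliminate the column
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [v - factor * p for v, p in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

A = [[2, -1, 1], [2, 2, 2], [-1, -1, 2]]
A_inv = invert(A)

# Check A * A_inv = I by straightforward multiplication.
identity = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```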

(ii) What is the essential difference between direct methods and iterative methods for solving linear (3)
systems?
The essential difference is that a direct method produces the exact solution (up to rounding error) in a finite, predetermined number of operations, whereas an iterative method produces a sequence of successive approximations to the solution. Direct methods are therefore more suitable for small-scale problems, and iterative methods for large-scale (typically sparse) problems.
[23]


QUESTION 3

(a) Consider the data


x 0.0 0.25 0.5
f(x) 1.5 1 0.5
(i) Construct the divided-difference table for the data. (2)
The divided-difference table is given below:

i   xi     f[xi]   f[xi, xi+1]   f[xi, xi+1, xi+2]
0   0      1.5
                   −2
1   0.25   1                     0
                   −2
2   0.5    0.5
Note: Do not confuse divided-difference table with difference table. Refer to the textbook to see
the difference between the two concepts.
(ii) Construct the Newton’s form of the polynomial p(x) of lowest degree that interpolates f (x) at these (3)
points.
The required polynomial is given by

p(x) = f[x0] + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1)
     = 1.5 + (−2)(x − 0) + 0(x − x0)(x − x1)
     = −2x + 3/2

(iii) Suppose that these data were generated by the function (3)

f(x) = 1 + (cos 2πx)/2.

Use the next term rule to approximate the error |p(x) − f(x)| over the interval [0, 0.5].
Your answer should be a number.

|p(x) − f(x)| = |−2x + 3/2 − 1 − (cos 2πx)/2|
             ≈ |f‴(ξ)| |(x − x0)(x − x1)(x − x2)|/3!,  0 < ξ < 1/2

From the definition of f, we have
f′(x) = −π sin 2πx;  f″(x) = −2π² cos 2πx;  f‴(x) = 4π³ sin 2πx
Also note that x0 = 0, x1 = 1/4 and x2 = 1/2, and therefore for all x such that 0 < x < 1/2, we have the following:
• 0 ≤ 2πx ≤ π
• 0 ≤ x − x0 = x − 0 ≤ 1/2
• −1/4 ≤ x − x1 = x − 1/4 ≤ 1/4
• −1/2 ≤ x − x2 = x − 1/2 ≤ 0
• 0 ≤ sin 2πx ≤ 1 ⇒ 0 ≤ f‴(x) ≤ 4π³
Therefore

0 ≤ |p(x) − f(x)| ≤ (1/3!) × (1/2) × (1/4) × (1/2) × 4π³ = π³/24 ≈ 1.2919
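As a sanity check (our own, not part of the memo), the actual interpolation error on [0, 1/2] can be sampled and compared against the bound π³/24; the bound turns out to be quite loose here (the true maximum error is roughly 0.1).

```python
import math

# Compare |p(x) - f(x)| on [0, 1/2] against the next-term bound pi^3/24.
f = lambda x: 1 + math.cos(2 * math.pi * x) / 2
p = lambda x: 1.5 - 2 * x

bound = math.pi**3 / 24                       # about 1.2919
# Sample the error on a fine grid over [0, 1/2].
max_err = max(abs(p(0.5 * k / 1000) - f(0.5 * k / 1000)) for k in range(1001))
```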

(b) Let (8)

F = { f(x) = c0 + c1 x | c0, c1 ∈ R }

Set up the normal equations to find the least-squares best f ∈ F to approximate the data


X     Y
x0    y0
x1    y1
x2    y2
...   ...
xm    ym

Let zi = c0 + c1 xi; then ei = yi − zi, 0 ≤ i ≤ m.
We wish to minimize the error, or equivalently, the sum of squares of errors

S = Σ_{i=0}^{m} (yi − zi)²

with respect to c0 and c1. In that case,

∂S/∂c0 = 0 ⇐⇒ 2 Σ_{i=0}^{m} (yi − c1 xi − c0)(−1) = 0
∂S/∂c1 = 0 ⇐⇒ 2 Σ_{i=0}^{m} (yi − c1 xi − c0)(−xi) = 0

which after rearrangement gives (remember we have m + 1 data points, from index 0 to index m)

(m + 1)c0 + c1 Σ xi = Σ yi
c0 Σ xi + c1 Σ xi² = Σ xi yi

and these are the required normal equations.
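Solving the two normal equations with Cramer's rule gives a compact check (illustrative code; the function name and test data are our own). Data lying on an exact line must be recovered exactly:

```python
# Solve the 2x2 normal equations for the least-squares line z = c0 + c1*x.
def least_squares_line(xs, ys):
    m1 = len(xs)                              # m + 1 data points
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = m1 * sxx - sx * sx
    c0 = (sy * sxx - sx * sxy) / det          # Cramer's rule
    c1 = (m1 * sxy - sx * sy) / det
    return c0, c1

# Points on the exact line y = 1 + 2x should be recovered exactly.
c0, c1 = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
```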

(c) Find the constants a, b and c such that (9)

S(x) = { a(x − 1)³ + b(x − 1)² + 6(x − 1) + 1  : 1 ≤ x < 2
       { cx³ + 18x² − 30x + 19                 : 2 ≤ x ≤ 3

is a natural cubic spline.


If we denote by S0 the definition of S when 1 ≤ x < 2 and S1 the definition of S when 2 ≤ x ≤ 3, then, to have a cubic spline, we must have

S0(2) = S1(2);  S0′(2) = S1′(2);  S0″(2) = S1″(2)

And we have

S′(x) = { 3a(x − 1)² + 2b(x − 1) + 6  : 1 ≤ x < 2
        { 3cx² + 36x − 30             : 2 ≤ x ≤ 3

and also

S″(x) = { 6a(x − 1) + 2b  : 1 ≤ x < 2
        { 6cx + 36        : 2 ≤ x ≤ 3

Now
S0(2) = S1(2) ⇐⇒ a + b + 6 + 1 = 8c + 72 − 60 + 19 ⇐⇒ a + b − 8c = 24
Similarly,
S0′(2) = S1′(2) ⇐⇒ 3a + 2b + 6 = 12c + 72 − 30 ⇐⇒ 3a + 2b − 12c = 36
Also,
S0″(2) = S1″(2) ⇐⇒ 6a + 2b = 12c + 36 ⇐⇒ 6a + 2b − 12c = 36
And finally, solving the system

a + b − 8c = 24
3a + 2b − 12c = 36
6a + 2b − 12c = 36

gives a = 0, b = 0, c = −3.
Note: The values of a, b and c found here make S a cubic spline, but not a natural one: the natural condition S″ = 0 at the endpoints fails at x = 3, since S1″(3) = 18c + 36 = −18 ≠ 0.
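A quick numerical check (our own, not part of the memo) confirms that a = 0, b = 0, c = −3 satisfy the three matching conditions at x = 2, while the second derivative at x = 3 is nonzero:

```python
# Verify the spline matching conditions at x = 2 for a = 0, b = 0, c = -3.
a, b, c = 0.0, 0.0, -3.0

s0   = lambda x: a*(x-1)**3 + b*(x-1)**2 + 6*(x-1) + 1
s1   = lambda x: c*x**3 + 18*x**2 - 30*x + 19
ds0  = lambda x: 3*a*(x-1)**2 + 2*b*(x-1) + 6
ds1  = lambda x: 3*c*x**2 + 36*x - 30
d2s0 = lambda x: 6*a*(x-1) + 2*b
d2s1 = lambda x: 6*c*x + 36

gap_value = s0(2) - s1(2)        # continuity of S at x = 2
gap_slope = ds0(2) - ds1(2)      # continuity of S' at x = 2
gap_curv  = d2s0(2) - d2s1(2)    # continuity of S'' at x = 2
end_curvature = d2s1(3)          # nonzero, so S is not a *natural* spline
```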
[25]

QUESTION 4

(a) What is the degree of the approximating polynomial on which Simpson's 3/8 rule is based? (2)
The Simpson 3/8 rule involves three subintervals of equal length, and is thus based on Lagrange interpolation of degree 3.

(b) Find an approximation for the integral (7)

∫₀¹ eˣ dx

by means of the composite Simpson's 3/8 rule for 3 subintervals (or panels).
We apply the Simpson 3/8 rule on the three subintervals [0, 1/3], [1/3, 2/3] and [2/3, 1]. On each, the formula is

∫_{x0}^{x3} f(x) dx ≈ (3h/8)[f(x0) + 3f(x1) + 3f(x2) + f(x3)]

This gives h = 1/9 and thus

∫₀¹ f(x) dx ≈ (3/8)(1/9)[f(0) + 3f(1/9) + 3f(2/9) + f(1/3)]
            + (3/8)(1/9)[f(1/3) + 3f(4/9) + 3f(5/9) + f(2/3)]
            + (3/8)(1/9)[f(2/3) + 3f(7/9) + 3f(8/9) + f(1)]
            ≈ 1.7183
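The computation can be reproduced with a short composite 3/8 routine (an illustrative sketch; the function name is our own):

```python
import math

# Composite Simpson 3/8: n_panels panels of three subintervals each,
# so the step is h = (b - a) / (3 * n_panels).
def simpson38_composite(f, a, b, n_panels):
    h = (b - a) / (3 * n_panels)
    total = 0.0
    for k in range(n_panels):
        x0 = a + 3 * k * h
        total += (3 * h / 8) * (f(x0) + 3*f(x0 + h) + 3*f(x0 + 2*h) + f(x0 + 3*h))
    return total

approx = simpson38_composite(math.exp, 0.0, 1.0, 3)   # about 1.7183
```

The exact value is e − 1 ≈ 1.71828, so the rule with h = 1/9 is accurate to a few parts in a million.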

(c) For the two-term Gaussian quadrature formula for the integral between −1 and 1, we have the points (6)
t1 = −0.5773 and t2 = 0.5773, as well as the weights w1 = 1 and w2 = 1. Use the two-term Gaussian
quadrature formula to calculate an approximation to the integral

∫₀⁵ 5x³ dx.

Here, a = 0 and b = 5. Now, to rescale from [0, 5] to [−1, 1], let

t = (2x − a − b)/(b − a) ⇐⇒ x = [(b − a)t + (b + a)]/2

Thus

t = (2x − 5)/5 ⇐⇒ x = (5t + 5)/2;  therefore dx = (5/2) dt


Hence

∫₀⁵ f(x) dx = ∫₋₁¹ f((5t + 5)/2) (5/2) dt

Now if we let g(t) = (5/2) f((5t + 5)/2), then

∫₀⁵ f(x) dx = ∫₋₁¹ g(t) dt
            = 1 × g(−0.5773) + 1 × g(0.5773)
            = (5/2)[ f((5(−0.5773) + 5)/2) + f((5(0.5773) + 5)/2) ]
            = 781.1820

(d) Explain why the value of the integral calculated in (4.3) above is exact. (2)
The value of the integral is exact because the n-term Gaussian quadrature is exact for polynomials of degree 2n − 1 or less; hence the two-term Gaussian quadrature is exact for polynomials of degree 2 × 2 − 1 = 3 or less, and our integrand is a polynomial of degree 3.
Note: Any error in this case comes from the approximation of the roots of the second Legendre polynomial:

x² − 1/3 = 0 ⇐⇒ x = ±√3/3 ≈ ±0.5773

With the exact nodes, the formula returns the true value ∫₀⁵ 5x³ dx = 5 × 5⁴/4 = 781.25.
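With the exact nodes ±1/√3, the two-point rule reproduces the integral up to roundoff, as this sketch (our own helper) shows:

```python
import math

# Two-point Gauss-Legendre on [a, b] via the linear change of variable
# x = (a + b)/2 + t*(b - a)/2, with exact nodes t = ±1/sqrt(3) and weights 1.
def gauss2(f, a, b):
    t = 1 / math.sqrt(3)
    xm = (a + b) / 2          # midpoint of the interval
    xr = (b - a) / 2          # half-length; also the Jacobian dx/dt
    return xr * (f(xm - xr * t) + f(xm + xr * t))

f = lambda x: 5 * x**3
approx = gauss2(f, 0.0, 5.0)  # exact value is 5 * 5**4 / 4 = 781.25
```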
[17]

TOTAL MARKS: [90]


© UNISA 2014
