
Numerical Analysis: Chapter 3

Finding Zeros

Nguyen Duc Manh


Last update: June 2022

Department of Mathematics and Informatics


Hanoi National University of Education
nguyendm@hnue.edu.vn
Finding the zeros of a function
Problem: Finding the zeros of a given function f, that is, the arguments ξ for which f(ξ) = 0, is a classical problem. In particular, determining the zeros of a polynomial (the zeros of a polynomial are also known as its roots)
p(x) = a_0 x^n + a_1 x^(n-1) + ... + a_n
has captured the attention of pure and applied mathematicians for centuries. However, much more general problems can be formulated in terms of finding zeros, depending upon the definition of the function f: E → F, its domain E, and its range F.

In this course, we consider f to be a function of a single variable.

The process of finding the zeros of this function consists of two steps:
i) Locate an interval (a, b) which is quite small and contains a zero of the function f.
ii) Refine the solution: compute the zero to within a prescribed error.

Bisection Method
Idea: Suppose that f is continuous on the interval [a, b] and f(a)f(b) < 0.
Start with Δ0 = [a, b], then divide Δ0 at its midpoint and choose Δ1 = [a1, b1] to be the one of the two subintervals of Δ0 such that f(a1)f(b1) < 0.
This interval Δ1 is divided again and we choose the subinterval Δ2 containing the solution, and so on. More precisely, at step n we have a sequence of intervals satisfying:
Δ0 ⊃ Δ1 ⊃ ... ⊃ Δn = [a_n, b_n],  f(a_n) f(b_n) < 0,
b_n − a_n = (b − a) / 2^n → 0.
Note that {a_n} increases and is bounded above by b, and {b_n} decreases and is bounded below by a. Moreover, b_n − a_n → 0, thus a_n, b_n → ξ.
Since f(a_n) f(b_n) < 0, letting n → ∞ gives (f(ξ))^2 ≤ 0. Thus ξ is the solution of the equation.
If we stop the process at step n and take the midpoint of Δn as the approximate solution, then the error is
|ξ − (a_n + b_n)/2| ≤ (b − a) / 2^(n+1) → 0.
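
As a quick illustration, here is a minimal Python sketch of the bisection loop described above; the function name bisection, the tolerance tol, and the iteration cap max_iter are illustrative choices, not part of the slides.

def bisection(f, a, b, tol=1e-6, max_iter=100):
    # Assumes f is continuous on [a, b] and f(a)*f(b) < 0.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if f(a) * f(m) < 0:          # the root lies in [a, m]
            b = m
        else:                        # the root lies in [m, b]
            a = m
        if (b - a) / 2.0 < tol:      # error bound (b - a) / 2^(n+1)
            break
    return (a + b) / 2.0             # midpoint of the final interval

# The example on the next slide: f(x) = x^4 + 2x^3 - x - 1 on [0, 1]
print(bisection(lambda x: x**4 + 2*x**3 - x - 1, 0.0, 1.0))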
Bisection Method
Example: Find the solution of this equation on the interval [0, 1]:

f(x) := x^4 + 2x^3 − x − 1 = 0
f(0) = −1, f(1) = 1, Δ0 = [0, 1]
f(0.5) = −1.19 ⇒ Δ1 = [0.5, 1]
f(0.75) = −0.59 ⇒ Δ2 = [0.75, 1]
f(0.875) = 0.05 ⇒ Δ3 = [0.75, 0.875]
f(0.8125) = −0.304 ⇒ Δ4 = [0.8125, 0.875]
f(0.8438) = −0.135 ⇒ Δ5 = [0.8438, 0.875]
f(0.8594) = −0.044 ⇒ Δ6 = [0.8594, 0.875]
If we stop at step 6 and take the midpoint of Δ6 as the approximate solution, then the error is

|ξ − 0.8672| ≤ 0.008.
Iterative Method
In general it is not possible to determine a zero ξ of a function f: E → F
explicitly within a finite number of steps, so we have to resort to
approximation methods. These methods are usually iterative and have
the following form: beginning with a starting value x0, successive approximations xk, k = 1, 2, ..., to ξ are computed with the aid of an iteration function φ: E → E:
x_{k+1} = φ(x_k),  k = 0, 1, 2, ...
If ξ is a fixed point of φ (i.e., φ(ξ) = ξ), if all fixed points of φ are also zeros of f, and if φ is continuous in a neighborhood of each of its fixed points, then each limit point of the sequence xk, k = 1, 2, ..., is a fixed point of φ, and hence a zero of f.

The following questions arise in this connection:


(1) How is a suitable iteration function φ to be found?
(2) Under what conditions will the sequence xk converge?
(3) How quickly will the sequence xk converge?

Iterative Method
Theorem: If the function φ is twice differentiable on [a, b] and such that:
i) ∀x ∈ [a, b], |φ'(x)| ≤ q < 1;
ii) ∀x ∈ [a, b], φ(x) ∈ [a, b];
then the sequence xk defined by x_{k+1} = φ(x_k) converges to the solution ξ of the equation x = φ(x) on [a, b], with the convergence rate given by:

|x_k − ξ| ≤ (q / (1 − q)) · |x_k − x_{k−1}|        (1)
|x_k − ξ| ≤ (q^k / (1 − q)) · |x_1 − x_0|          (2)
Remark: conditions i) and ii) mean that the function φ is a contraction mapping on the interval [a, b]. The theorem above is just a special case of the Banach fixed-point theorem for a contraction mapping on a non-empty complete metric space.
Iterative Method
Remark:
- Formula (2) is used to estimate the error in advance, specifically to calculate the number of iterations necessary to achieve the desired accuracy (see the sketch after these remarks).

- When a = x0 − r, b = x0 + r, a sufficient condition for φ to map [a, b] into [a, b] is
|φ(x0) − x0| ≤ (1 − q)·r.
- When φ is increasing (resp. decreasing), the image of the interval [a, b] under φ is [φ(a), φ(b)] (resp. [φ(b), φ(a)]). Thus it is easy to check whether φ maps [a, b] into [a, b].

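
As a small illustration of the first remark, formula (2) can be solved for the number of iterations: the error is below a tolerance eps as soon as q^k · |x1 − x0| / (1 − q) ≤ eps. The following Python sketch is only an illustration; the helper name and the sample numbers (roughly those of the example on the next slides) are not from the original.

import math

def iterations_needed(q, dx1, eps):
    # Smallest k with q**k * dx1 / (1 - q) <= eps, from estimate (2);
    # dx1 stands for |x1 - x0|.
    return math.ceil(math.log(eps * (1.0 - q) / dx1) / math.log(q))

# Illustrative numbers: q ~ 1/300 and |x1 - x0| ~ 0.034, as in the
# example with phi_3 below; two iterations already give an error < 1e-6.
print(iterations_needed(1.0 / 300.0, 0.034, 1e-6))   # -> 2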
Iterative Method
Example: Find the solution of the following equation on [9, 10]:

f(x) = x^3 + x − 1000 = 0.
Solution: we can transform this equation into the three equivalent forms:
a) x = φ1(x) = 1000 − x^3;
b) x = φ2(x) = 1000/x^2 − 1/x;
c) x = φ3(x) = (1000 − x)^(1/3).
Thus we have:
a) φ1'(x) = −3x^2 ⇒ max_{9 ≤ x ≤ 10} |φ1'(x)| = 300 > 1;

b) φ2'(x) = −2000x^(−3) + x^(−2) ⇒ max_{9 ≤ x ≤ 10} |φ2'(x)| ≈ 2.7 > 1;

c) φ3'(x) = −(1/3)(1000 − x)^(−2/3) ⇒ max_{9 ≤ x ≤ 10} |φ3'(x)| ≈ 1/300 ≪ 1.
Iterative Method
If we use the iterative method with φ1 or φ2, the sequence xk is not guaranteed to converge, while if we use φ3 to generate the sequence xk, the sequence converges to the solution quickly.
On the other hand, it is easy to check that φ3 maps [9, 10] into [9, 10].
We thus construct the sequence xk as follows:
x0 = 10;
x1 = φ3(x0) = 9.96655;
x2 = φ3(x1) = 9.96666;
x3 = φ3(x2) = 9.96667.
If we choose x3 as the approximate solution, the error is:

|x3 − ξ| ≤ (q / (1 − q)) · |x3 − x2| ≈ 0.0001/300 ≈ 3.33·10^(−7).

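
A minimal Python sketch of the fixed-point iteration x_{k+1} = φ(x_k) used above; the stopping rule based on |x_{k+1} − x_k| and the tolerance are illustrative choices.

def fixed_point(phi, x0, tol=1e-7, max_iter=50):
    # Iterate x_{k+1} = phi(x_k) until successive iterates are close.
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# phi_3 from the example above: x = (1000 - x)^(1/3), starting at x0 = 10
phi3 = lambda x: (1000.0 - x) ** (1.0 / 3.0)
print(fixed_point(phi3, 10.0))   # approximately 9.96667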
Interpolation Method
(the regula falsi method, method of false position, or false position method)

Hypothesis: In this method, we need the following two conditions:

1) The equation f(x) = 0 has a unique solution on the interval [a, b].

2) f ∈ C^2[a, b], and f', f'' do not change sign on [a, b].

Without loss of generality, in what follows we suppose that f'' > 0 on [a, b] (i.e., f is a convex function on [a, b]).

Interpolation Method
(the regula falsi method, method of false position, or false position method)

Case 1: If f' < 0 on the interval [a, b] (f is decreasing), we create the sequence xk as follows:
x0 = b,
x_{k+1} = a − (f(a) / (f(x_k) − f(a))) · (x_k − a) = x_k − (f(x_k) / (f(x_k) − f(a))) · (x_k − a),  k ≥ 0.

[Figure: the chords joining A = (a, f(a)) to (x_k, f(x_k)) cut the x-axis at x_{k+1}; the iterates b = x0 > x1 > x2 > ... decrease toward the root ξ.]

This sequence decreases and converges to the solution ξ of the equation.
Interpolation Method
(the regula falsi method, method of false position, or false position method)

Case 2: If f' > 0 on the interval [a, b] (f is increasing), we create the sequence xk as follows:
x0 = a,
x_{k+1} = b − (f(b) / (f(x_k) − f(b))) · (x_k − b) = x_k − (f(x_k) / (f(x_k) − f(b))) · (x_k − b),  k ≥ 0.

[Figure: the chords joining B = (b, f(b)) to (x_k, f(x_k)) cut the x-axis at x_{k+1}; the iterates a = x0 < x1 < x2 < ... increase toward the root ξ.]

This sequence increases and converges to the solution ξ of the equation.

Interpolation Method
(the regula falsi method, method of false position, or false position method)

Error: we have two formulas to estimate the error.

1) If |f'(x)| ≥ m > 0 for all x ∈ [a, b], then
|x_k − ξ| ≤ |f(x_k)| / m.

2) If f' does not change sign on [a, b] and
0 < m ≤ |f'(x)| ≤ M for all x ∈ [a, b],
then
|x_{k+1} − ξ| ≤ ((M − m) / m) · |x_{k+1} − x_k|.

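
A Python sketch of the iteration above, written with the fixed endpoint passed in explicitly (a in Case 1, b in Case 2); the function name, tolerance, and stopping rule are illustrative.

def false_position(f, x0, c, tol=1e-9, max_iter=100):
    # Regula falsi as on the slides: c is the endpoint kept fixed,
    # x0 is the other endpoint, from which the iterates start.
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) * (x - c) / (f(x) - f(c))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The example on the next slide: f(x) = x^2 - 3 on [0, 3],
# moving endpoint x0 = a = 0, fixed endpoint c = b = 3
print(false_position(lambda x: x * x - 3.0, 0.0, 3.0))   # approximately 1.7320508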
Interpolation Method
(the regula falsi method, method of false position, or false position method)
Example: Calculate the value of √3.
Solution: Consider the equation on [0, 3]:
x^2 = 3.
It is easy to see that f(x) = x^2 − 3 satisfies all the conditions of the interpolation method on the interval [0, 3]. Take x0 = 0 and construct the sequence xk as follows:
x_{k+1} = x_k − (f(x_k) / (f(x_k) − f(b))) · (x_k − b) = x_k − ((x_k^2 − 3) / (x_k^2 − 9)) · (x_k − 3) = 3(x_k + 1) / (x_k + 3).
Thus:
x1 = 1;            x6 = 1.7307692;
x2 = 1.5;          x7 = 1.7317070;
x3 = 1.666667;     x8 = 1.7319587;
x4 = 1.71428571;   x9 = 1.7320261;
x5 = 1.72727272;   ξ = √3 ≈ 1.7320508.
Secant Method
We create the sequence {xk} as follows: initialize two points x0, x1 and compute
x_{k+1} = x_k − f(x_k) · (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})),  k ≥ 1,
or equivalently,
x_{k+1} = (x_{k−1} f(x_k) − x_k f(x_{k−1})) / (f(x_k) − f(x_{k−1})).

[Figure: the secant line through (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)) cuts the x-axis at x_{k+1}.]
The sequence xk of the secant method converges to a root ξ of f if the initial values x0 and x1 are sufficiently close to the root. The order of convergence is
φ = (1 + √5)/2 ≈ 1.618,
the golden ratio. In particular, the convergence is superlinear, but not quite quadratic.
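
A small Python sketch of the secant recursion above; the tolerance and the guard against a zero denominator are illustrative choices.

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    # x_{k+1} = x_k - f(x_k) * (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # avoid division by zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Illustrative use: the same root as in the earlier examples, f(x) = x^2 - 3
print(secant(lambda x: x * x - 3.0, 1.0, 2.0))   # approximately 1.7320508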
Secant Method
This result only holds under some
technical conditions, namely that f be
twice continuously differentiable and
the root in question be simple (i.e.,
with multiplicity 1).


If the initial values are not close enough to the root, then there is no
guarantee that the secant method converges. There is no general
definition of "close enough", but the criterion has to do with how "wiggly"
the function is on the interval [x0, x1]. For example, if f is differentiable on
that interval and there is a point where f’ = 0 on the interval, then the
algorithm may not converge.
Convergence speed for iterative methods
In numerical analysis, the order of convergence and the rate of
convergence of a convergent sequence are quantities that represent
how quickly the sequence approaches its limit.
Definition: A sequence {x_n} that converges to x* is said to have order of convergence q ≥ 1 and rate of convergence μ if
lim_{n→∞} |x_{n+1} − x*| / |x_n − x*|^q = μ.

The rate of convergence μ is also called the asymptotic error constant.

Note that this terminology is not standardized, and some authors use rate where these notes use order.
In practice, the rate and order of convergence provide useful insights
when using iterative methods for calculating numerical approximations. If
the order of convergence is higher, then typically fewer iterations are
necessary to yield a useful approximation. Strictly speaking, however,
the asymptotic behavior of a sequence does not give conclusive
information about any finite part of the sequence.
Convergence speed for iterative methods
Similar concepts are used for discretization methods. The solution of the
discretized problem converges to the solution of the continuous problem
as the grid size goes to zero, and the speed of convergence is one of
the factors of the efficiency of the method. However, the terminology in this case is different from the terminology for iterative methods.
Convergence speed for iterative methods (Q-convergence definitions): Suppose that the sequence {xk} converges to the number L.
The sequence is said to converge Q-linearly to L if there exists a number μ ∈ (0, 1) such that
lim_{k→∞} |x_{k+1} − L| / |x_k − L| = μ.
The number μ is called the rate of convergence.

The sequence is said to converge Q-superlinearly to L (i.e. faster than linearly) if
lim_{k→∞} |x_{k+1} − L| / |x_k − L| = 0.
Convergence speed for iterative methods
Convergence speed for iterative methods: Suppose that the sequence
{xk} converges to the number L.
The sequence is said to converge Q-sublinearly to L (i.e. slower than linearly) if
lim_{k→∞} |x_{k+1} − L| / |x_k − L| = 1.
If the sequence converges sublinearly and additionally
lim_{k→∞} |x_{k+2} − x_{k+1}| / |x_{k+1} − x_k| = 1,
then it is said that the sequence {xk} converges logarithmically to L. Note


that unlike previous definitions, logarithmic convergence is not called "Q-
logarithmic."

Convergence speed for iterative methods
In order to further classify convergence, the order of convergence is
defined as follows. The sequence is said to converge with order q to L for q ≥ 1 if
lim_{k→∞} |x_{k+1} − L| / |x_k − L|^q = M
for some positive constant M (not necessarily less than 1 if q > 1). In particular, convergence with order:
• q = 1 is called linear convergence (if M < 1),
• q = 2 is called quadratic convergence,
• q = 3 is called cubic convergence,
• etc.

It is not necessary, however, that q be an integer. For example, the


secant method, when converging to a regular, simple root, has an order
of q ≈ 1.618.
In the definitions above, the "Q-" stands for "quotient" because the terms are defined using the quotient between two successive terms. Often, however, the "Q-" is dropped, and a sequence is simply said to have linear convergence, quadratic convergence, etc.
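
As a rough numerical check of these definitions, the order q can be estimated from three consecutive errors via q ≈ log(e_{k+1}/e_k) / log(e_k/e_{k−1}). Below is a Python sketch; the helper name is illustrative, and the iterates are those of the Newton example further below.

import math

def estimate_order(xs, L):
    # Estimate the order of convergence from consecutive errors e_k = |x_k - L|
    # using q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).
    e = [abs(x - L) for x in xs]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)]

# Newton iterates for sqrt(3): x_{k+1} = (x_k^2 + 3) / (2 x_k), x0 = 2
xs = [2.0]
for _ in range(3):
    xs.append((xs[-1] ** 2 + 3.0) / (2.0 * xs[-1]))
print(estimate_order(xs, math.sqrt(3)))   # estimates close to 2 (quadratic)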
Newton’s Method
Hypothesis: In this method, we need the following two conditions:

1) The equation f(x) = 0 has a unique solution on the interval [a, b].
2) f ∈ C^2[a, b], and f', f'' do not change sign on [a, b].
Method: choose a point x0 in the interval [a, b] which is a Fourier point, that is:
f(x0)·f''(x0) > 0.
Construct the sequence xk by:
x_{k+1} = x_k − f(x_k) / f'(x_k),  k ≥ 0.

[Figure: the tangent line to the graph of f at (x_k, f(x_k)) cuts the x-axis at x_{k+1}; the iterates x0 > x1 > x2 > ... approach the root ξ monotonically.]

This is a monotone sequence which converges to the solution ξ of the equation.
Newton’s Method
Error: if
|f''(x)| ≤ M1 for all x ∈ [a, b],
|f'(x)| ≥ M2 > 0 for all x ∈ [a, b],
then
|x_{k+1} − ξ| ≤ (M1 / (2·M2)) · |x_{k+1} − x_k|^2.

Remark: Newton's method has a quadratic order of convergence, that is, the sequence xk created by this method converges to the solution very quickly.

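
A minimal Python sketch of Newton's iteration above; the derivative is passed in explicitly, and the function name, tolerance, and stopping rule are illustrative.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    # x_{k+1} = x_k - f(x_k) / f'(x_k), starting from a Fourier point x0.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# The example on the next slide: sqrt(3) via f(x) = x^2 - 3, x0 = 2
print(newton(lambda x: x * x - 3.0, lambda x: 2.0 * x, 2.0))   # approximately 1.732050808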
Newton’s Method
Example: Calculate the value of √3.

Solution: Consider the equation on [1, 2]:
x^2 = 3.
It is easy to see that f(x) = x^2 − 3 satisfies all the conditions of Newton's method on the interval [1, 2]. Take x0 = 2 and construct the sequence xk as follows:
x_{k+1} = x_k − f(x_k) / f'(x_k) = x_k − (x_k^2 − 3) / (2x_k) = (x_k^2 + 3) / (2x_k).
Thus:
x1 = 1.75;
x2 = 1.7321429;
x3 = 1.73205081;
x4 = 1.732050808;
ξ = √3 ≈ 1.732050808.
