
Numerical methods for finding the roots of a function

Roots of equations
• The roots of a function f(x) are the values of x for which the function equals zero.
• So, finding the roots of f(x) means solving the equation f(x) = 0.
• Example: if f(x) = ax^2 + bx + c is a quadratic polynomial, the roots are given by the well-known formula

  x_{1,2} = (-b ± √(b^2 - 4ac)) / (2a)
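As a quick illustration (a minimal sketch in Python; the helper name quadratic_roots is ours, not from the slides), the formula can be evaluated directly:

import cmath  # complex square root also handles a negative discriminant

def quadratic_roots(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = cmath.sqrt(b**2 - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

print(quadratic_roots(1, -5, 6))  # roots of x^2 - 5x + 6 are 3 and 2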
Roots of equations
• For a polynomial of degree 3 or higher, it is sometimes possible to find the roots by factorizing the polynomial.
  Example:
• f(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3), so the roots are 1, 2 and 3
• f(x) = x^4 - 16 = (x^2 - 4)(x^2 + 4), so the real roots are 2 and -2
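These factorizations can be double-checked numerically; a small sketch assuming NumPy is available (np.roots takes the coefficients in descending powers of x):

import numpy as np

# x^3 - 6x^2 + 11x - 6: coefficients in descending powers of x
print(np.roots([1, -6, 11, -6]))      # approximately [3., 2., 1.]

# x^4 - 16: the real roots are 2 and -2; the other two are the complex pair +-2i
print(np.roots([1, 0, 0, 0, -16]))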
Roots of equations
  f(x) = ax^5 + bx^4 + cx^3 + dx^2 + ex + f
  f(x) = sin(x) + x

  x = ?
• For a large number of problems it is not possible to find exact values for the roots of the function, so we have to find approximations instead.
Roots of Equations
• Non-computer methods:
  – Factorizing (not always possible)
  – Graphical solution (inaccurate)

• Numerical methods suitable for computers

Roots of Equations
• Bracketing Methods
  – Bisection Method
  – False Position Method
• Open Methods
  – Fixed Point Iteration
  – Newton-Raphson
  – Secant
Bracketing Methods
• Intermediate Value Theorem
  If f(x) is continuous on the interval [a, b] and f(a)·f(b) < 0, then there exists a c with a < c < b such that f(c) = 0.
Bisection Method
• Algorithm:
  Set a_1 = a and b_1 = b, and let c_1 be the midpoint of [a, b]; that is

    c_1 = a_1 + (b_1 - a_1)/2 = (a_1 + b_1)/2

  – If f(c_1) = 0, then c = c_1 and we are done.
  – If f(c_1) ≠ 0, then f(c_1) has the same sign as either f(a_1) or f(b_1):
    • If f(c_1) and f(a_1) have the same sign, the root c ∈ (c_1, b_1); set a_2 = c_1 and b_2 = b_1.
    • If f(c_1) and f(a_1) have opposite signs, the root c ∈ (a_1, c_1); set a_2 = a_1 and b_2 = c_1.
  – Then re-apply the process to the interval [a_2, b_2], and so on.
Bisection Method
The Bisection Method to Solve f(x) = 0
Given the function f defined on [a, b] satisfying f(a)·f(b) < 0:
1. a_1 = a, b_1 = b, c_0 = a;
2. i = 1;
3. c_i = (a_i + b_i)/2;
4. If |c_i - c_{i-1}| < ε or |f(c_i)| < ε, then go to 10;
5. If f(c_i)·f(a_i) > 0, then go to 6;
   If f(c_i)·f(a_i) < 0, then go to 8;
6. a_{i+1} = c_i, b_{i+1} = b_i;
7. i = i + 1; go to 3;
8. a_{i+1} = a_i, b_{i+1} = c_i;
9. i = i + 1; go to 3;
10. End of procedure
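A compact Python sketch of this procedure (variable and function names are ours; the stopping test mirrors step 4):

def bisect(f, a, b, eps=1e-6, max_iter=100):
    """Approximate a root of f in [a, b] by repeated halving.

    Assumes f is continuous on [a, b] and f(a) * f(b) < 0.
    """
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c_old = a
    for _ in range(max_iter):
        c = (a + b) / 2.0                          # step 3: midpoint
        if abs(c - c_old) < eps or abs(f(c)) < eps:
            return c                               # step 4: stopping test
        if f(c) * f(a) > 0:
            a = c                                  # step 6: root lies in [c, b]
        else:
            b = c                                  # step 8: root lies in [a, c]
        c_old = c
    return c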
Relative Error Test
The iteration is terminated when a bound for the relative error is less than ε:

  |c_n - c_{n-1}| / |c_n| < ε
Error Bounds
Let a_n, b_n, and c_n denote the nth computed values of a, b, and c:

  b_{n+1} - a_{n+1} = (b_n - a_n)/2,  n ≥ 1

and

  b_n - a_n = (b - a)/2^{n-1}

where b - a denotes the length of the original interval with which we started.
Since the root α ∈ [a_n, c_n] or α ∈ [c_n, b_n], we know that

  |α - c_n| ≤ c_n - a_n = b_n - c_n = (b_n - a_n)/2

So

  |α - c_n| ≤ (b - a)/2^{n+1}

This shows that the iterates c_n → α as n → ∞.

To see how many iterations will be necessary, require

  |α - c_n| < ε

which is guaranteed when

  (b - a)/2^{n+1} < ε

We can solve this to give

  n ≥ log((b - a)/(2ε)) / log 2
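In code, the bound gives a one-line estimate of the required number of iterations (a small sketch; math.ceil rounds up to the next whole iteration):

import math

def bisection_iterations(a, b, eps):
    """Smallest n with (b - a) / 2**(n + 1) < eps."""
    return math.ceil(math.log((b - a) / (2 * eps)) / math.log(2))

print(bisection_iterations(1, 2, 0.1))  # the example on [1, 2] with tolerance 0.1 needs 3 steps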
Example: The Bisection Method
Show that f(x) = x^3 + 4x^2 - 10 = 0 has a root in [1, 2] and use the bisection method to determine an approximation to the root that is accurate to within 0.1.
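A possible run, reusing the bisect sketch from above (f(1) = -5 and f(2) = 14 have opposite signs, so [1, 2] brackets a root):

f = lambda x: x**3 + 4*x**2 - 10

root = bisect(f, 1, 2, eps=0.1)
print(root)  # 1.3125, within 0.1 of the root 1.3652...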
Advantages and Disadvantages
of the Bisection Method
Advantages
– The method is guaranteed to converge
– The error bound decreases by half with each iteration
Disadvantages
– The bisection method converges very slowly
– The bisection method cannot detect multiple roots
Exercise
1. Consider the nonlinear equation
     f(x) = x^6 - x - 1 = 0
   a. Show there is a root α in the interval [1, 2]
   b. Estimate how many iterations will be needed in order to approximate this root with an accuracy of ε = 0.01 using the bisection method
False Position (Regula Falsi) Method
• It is very similar to the bisection method.
• However, the next iteration point is not the midpoint of the interval, but the intersection of the x-axis with the secant line through (a_k, f(a_k)) and (b_k, f(b_k)).
  The root of the secant line is estimated by

    x_{k+1} = a_k - f(a_k) · (a_k - b_k) / (f(a_k) - f(b_k))

• If f(x_{k+1}) = 0, then x* = x_{k+1} and stop.
• If f(x_{k+1}) ≠ 0, then:
  – If f(a_k)·f(x_{k+1}) < 0, then
      a_{k+1} = a_k
      b_{k+1} = x_{k+1}
  – If f(a_k)·f(x_{k+1}) > 0, then
      a_{k+1} = x_{k+1}
      b_{k+1} = b_k
From the construction of (a_{k+1}, b_{k+1}) it follows that f(a_{k+1})·f(b_{k+1}) < 0, so each interval (a_k, b_k) contains a root.
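A minimal Python sketch of regula falsi (names are ours; stopping on |f(x)| < ε is one common choice and is an assumption here, not part of the slide):

def false_position(f, a, b, eps=1e-6, max_iter=100):
    """Regula falsi: replace the midpoint of bisection with the root of the secant line."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = a - f(a) * (a - b) / (f(a) - f(b))   # root of the secant line
        if abs(f(x)) < eps:
            return x
        if f(a) * f(x) < 0:
            b = x                                # root lies in [a, x]
        else:
            a = x                                # root lies in [x, b]
    return x

print(false_position(lambda x: x**3 + 4*x**2 - 10, 1, 2, eps=1e-4))  # ~1.3652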
Example: The False Position Method
Show that f(x) = x^3 + 4x^2 - 10 = 0 has a root in [1, 2] and use the false position method to determine an approximation to the root that is accurate to within 10^-4.
Advantages and Disadvantages
of the False Position Method
Advantages:
• No need to calculate a complicated derivative
Disadvantages:
• May converge slowly for functions with large curvature
Exercise
Apply the method of false position on the initial interval [-1, 1] to find the root of
  f(x) = x^6 - x - 1 = 0
Fixed Point Iteration
• A number x is a fixed point of a given function g if g(x) = x.
• The root-finding problem f(x) = 0 is related to the fixed-point problem g(x) = x:
  – Given a root-finding problem f(x) = 0, there are many choices of g with a fixed point at x.
    Example: g(x) = x - f(x)
             g(x) = x + 3f(x)
  – If g has a fixed point at x, then f(x) = x - g(x) has a zero at x.
A Fixed Point Problem
• To approximate the fixed point of a function g, we choose an initial approximation x_1 and generate the sequence {x_n} by letting x_n = g(x_{n-1}) for each n ≥ 1.
• If the sequence converges to x and g is continuous, then

    x = lim_{n→∞} x_n = lim_{n→∞} g(x_{n-1}) = g(lim_{n→∞} x_{n-1}) = g(x)

  and a solution to x = g(x) is obtained.
• This technique is called fixed-point iteration, or functional iteration.
A Fixed Point Problem
To find the fixed point of g in an interval [a, b], given the equation x = g(x) with an initial guess x_0 ∈ [a, b]:
1. n = 1
2. x_n = g(x_{n-1})
3. If |x_n - x_{n-1}| / |x_n| < ε, or x = g(x) and f(x) = 0, then go to 5;
4. n = n + 1; go to 2
5. End of procedure
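A short Python sketch of this procedure (names are ours; the stopping rule is the relative error test of step 3):

def fixed_point(g, x0, eps=1e-6, max_iter=100):
    """Iterate x_n = g(x_{n-1}) until the relative change drops below eps."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < eps * abs(x_new):    # relative error test (step 3)
            return x_new
        x = x_new
    return x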
Example
The equation x^2 - 2x - 3 = 0 with x_0 = 4 has a unique root in [1, 4].

Solution
  x^2 - 2x - 3 = 0
  x(x - 2) - 3 = 0
  x = 3 / (x - 2)

  x_{n+1} = 3 / (x_n - 2),  x_0 = 4
  n    x_n         g(x_n) = x_{n+1}   f(x_n)
  0     4            1.5                5
  1     1.5         -6                 -3.75
  2    -6           -0.375             45
  3    -0.375       -1.2631            -2.1093
  4    -1.2631      -0.9193             1.1216
  ⋮     ⋮            ⋮                  ⋮
 14    -1.000004    -0.99999            0.000017
 15    -0.99999     -1.00000           -0.000006
 16    -1.00000     -1.00000            0.000002
 17    -1.00000     -1.00000           -0.000001
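The table can be reproduced with a few explicit iterations (a minimal sketch; the iteration drifts from x_0 = 4 toward the fixed point -1, as the rows above show):

g = lambda x: 3 / (x - 2)
f = lambda x: x**2 - 2*x - 3

x = 4
for n in range(18):
    print(n, x, g(x), f(x))
    x = g(x)    # next iterate x_{n+1} = g(x_n)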
Convergence Result
Let g ∈ C[a, b] with g(x) ∈ [a, b] for all x ∈ [a, b]. Let g'(x) exist on (a, b) with

  |g'(x)| ≤ M < 1 for all x ∈ (a, b)

If x_0 is any point in [a, b], then the sequence defined by x_n = g(x_{n-1}), n ≥ 1, will converge to the unique fixed point x in [a, b].
Example: The equation x^2 - 2x - 3 = 0 with x_0 = 4 has a unique root in [1, 4].

a. x_{n+1} = 3 / (x_n - 2)
   g(x) = 3 / (x - 2)
   g'(x) = -3 / (x - 2)^2 < 0
   → converges

b. x_{n+1} = √(2x_n + 3)
   g(x) = √(2x + 3)
   g'(x) = 1 / √(2x + 3) < 1
   → converges

c. x_{n+1} = (x_n^2 - 3) / 2
   g(x) = (x^2 - 3) / 2
   g'(x) = x > 1 for x ∈ (1, 4]
   → does not converge
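A quick numerical comparison of the three rearrangements, reusing the fixed_point sketch from earlier (choice (c) is capped at a few iterations because its iterates grow without bound):

import math

g_a = lambda x: 3 / (x - 2)
g_b = lambda x: math.sqrt(2 * x + 3)
g_c = lambda x: (x**2 - 3) / 2

print(fixed_point(g_a, 4))               # settles at the fixed point -1, as in the table
print(fixed_point(g_b, 4))               # converges to the root 3
print(fixed_point(g_c, 4, max_iter=10))  # iterates blow up: no convergence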
Exercises
The equation x^3 + 4x^2 - 10 = 0 with x_0 = 1.5 has a unique root in [1, 2].
Newton-Raphson Method
• We will work with the tangent to the graph of the function f.
• Therefore we assume that f is differentiable.
• We choose an initial approximation x_0 to the root.
• We draw the tangent line to the graph of f through the point (x_0, f(x_0)).
• Its intersection with the x-axis is x_1.
• Then we draw the tangent line through (x_1, f(x_1)).
• Its intersection with the x-axis is x_2, and so on.
Newton-Raphson Method
• The line tangent to the graph of y = f(x) at (x_0, f(x_0)) is the graph of the linear Taylor polynomial:

    y = f(x_0) + f'(x_0)(x - x_0)

• The root of this line is x_1:

    f(x_0) + f'(x_0)(x_1 - x_0) = 0
    f'(x_0)(x_1 - x_0) = -f(x_0)
    x_1 - x_0 = -f(x_0) / f'(x_0)
    x_1 = x_0 - f(x_0) / f'(x_0)
Newton-Raphson Method
• Since x_1 is expected to be an improvement over x_0 as an estimate of α, we repeat the procedure with x_1 as the initial guess:

    x_2 = x_1 - f(x_1) / f'(x_1)

• Repeating this process, we obtain a sequence of numbers (iterates) x_1, x_2, x_3, ... that hopefully approaches the root α.
• The iteration formula

    x_{n+1} = x_n - f(x_n) / f'(x_n)

  is referred to as Newton's method, or the Newton-Raphson method, for solving f(x) = 0.
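A minimal Python sketch of the iteration (the caller supplies the derivative; the stopping rule on the step size is our assumption):

def newton(f, df, x0, eps=1e-8, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < eps:
            return x
    return x

# The running example f(x) = x^3 + 4x^2 - 10 from the bisection slides:
print(newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.5))  # ~1.36523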
The Secant Method
• Newton's method is based on approximating the graph of y = f(x) with a tangent line and then using a root of this straight line as an approximation to the root α of f(x).
• From this perspective, other straight-line approximations to y = f(x) would also lead to methods for approximating a root of f(x).
• One such straight-line approximation leads to the secant method.
• Assume that two initial guesses to α are known and denote them by x_0 and x_1. They may occur
  • on opposite sides of α, or
  • on the same side of α.
The Secant Method
• To derive a formula for x_2, we proceed in a manner similar to that used to derive Newton's method:
  find the equation of the line and then find its root x_2.
• The equation of the line is given by

    y = p(x) = f(x_1) + (x - x_1) · (f(x_1) - f(x_0)) / (x_1 - x_0)

• Solving p(x_2) = 0, we obtain

    x_2 = x_1 - f(x_1) · (x_1 - x_0) / (f(x_1) - f(x_0))
The Secant Method
• Having found x_2, we can drop x_0 and use x_1, x_2 as a new pair of approximate values for α.
• This leads to an improved value x_3, and the process can be continued indefinitely.
• We obtain the general formula for the secant method:

    x_{n+1} = x_n - f(x_n) · (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))

• It is called a two-point method, since two approximate values are needed to obtain an improved value.
• The bisection method is also a two-point method, but the secant method will almost always converge faster than bisection.
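A short Python sketch of the two-point iteration (names are ours):

def secant(f, x0, x1, eps=1e-8, max_iter=50):
    """Secant method: Newton's formula with f'(x_n) replaced by a difference quotient."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

# Same test problem as before: a root of x^3 + 4x^2 - 10 near 1.3652
print(secant(lambda x: x**3 + 4*x**2 - 10, 1, 2))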
Exercise
Consider the equation f(x) = x^2 - 2x - 3.
a. Apply four steps of Newton's method starting at x_0 = 0.
b. Apply four steps of the secant method starting at x_0 = -2 and x_1 = 1.
