2019-2020
ECADVMIL (ADVANCED ENGINEERING MATHEMATICS FOR ECE – LAB)
MATLAB DOCUMENTATION
ECE181
Submitted by:
Pagulayan, Milen Joan T.
2018-101945
BSECE
Submitted to:
Engr. Armil S. Monsura
Instructor
I. BRACKETING METHODS
A. Bisection Method
B. False Position Method
C. Modified False Position Method
II. OPEN METHODS
A. Fixed Point Iteration Method
B. Newton-Raphson Method
C. Secant Method
D. Modified Secant Method
III. BRENT'S METHOD
IV. POLYNOMIAL ROOT-FINDING
A. Muller's Method
B. Bairstow's Method
V. GAUSS ELIMINATION
A. Naive Gauss Elimination
B. Gauss-Jordan
I. BRACKETING METHODS
Bracketing methods determine successively smaller intervals (brackets)
that contain a root. When the interval is small enough, then a root has been
found. They generally use the intermediate value theorem, which asserts that
if a continuous function has values of opposite signs at the end points of an
interval, then the function has at least one root in the interval. Therefore, they
require starting with an interval such that the function takes opposite signs at
the end points of the interval. However, in the case of polynomials there are
other methods (Descartes' rule of signs, Budan's theorem and Sturm's
theorem) for getting information on the number of roots in an interval. They
lead to efficient algorithms for real-root isolation of polynomials, which ensure
finding all real roots with a guaranteed accuracy.
A. BISECTION METHOD
MATHEMATICAL BACKGROUND
METHOD: FORMULA
The method is applicable for numerically solving the equation f(x) = 0 for
the real variable x, where f is a continuous function defined on an interval [a, b]
and where f(a) and f(b) have opposite signs. In this case a and b are said to bracket
a root since, by the intermediate value theorem the continuous function f must
have at least one root in the interval (a, b).
At each step the method divides the interval in two by computing the midpoint c =
(a+b) / 2 of the interval and the value of the function f(c) at that point. Unless c is
itself a root (which is very unlikely, but possible) there are now only two
possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c)
and f(b) have opposite signs and bracket a root. The method selects the
subinterval that is guaranteed to be a bracket as the new interval to be used in the
next step. In this way an interval that contains a zero of f is reduced in width by
50% at each step. The process is continued until the interval is sufficiently small.
Explicitly, if f(a) and f(c) have opposite signs, then the method sets c as the new
value for b, and if f(b) and f(c) have opposite signs then the method sets c as the
new a. (If f(c)=0 then c may be taken as the solution and the process stops.) In both
cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this
smaller interval.
ALGORITHM
EXAMPLE
Example-1
1. Find a root of the equation f(x) = x³ − x − 1 using the Bisection method.
Solution:
Here x³ − x − 1 = 0
Let f(x) = x³ − x − 1
Here
x      0    1    2
f(x)  -1   -1    5
1st iteration :
Here f(1) = −1 < 0 and f(2) = 5 > 0
∴ Now, the root lies between 1 and 2
x0 = (1 + 2)/2 = 1.5
f(x0) = f(1.5) = 0.875 > 0
2nd iteration :
Here f(1) = −1 < 0 and f(1.5) = 0.875 > 0
∴ Now, the root lies between 1 and 1.5
x1 = (1 + 1.5)/2 = 1.25
f(x1) = f(1.25) = −0.29688 < 0
3rd iteration :
The root lies between 1.25 and 1.5
x2 = (1.25 + 1.5)/2 = 1.375
f(x2) = f(1.375) = 0.22461 > 0
4th iteration :
The root lies between 1.25 and 1.375
x3 = (1.25 + 1.375)/2 = 1.3125
f(x3) = f(1.3125) = −0.05151 < 0
5th iteration :
The root lies between 1.3125 and 1.375
x4 = (1.3125 + 1.375)/2 = 1.34375
f(x4) = f(1.34375) = 0.08261 > 0
6th iteration :
The root lies between 1.3125 and 1.34375
x5 = (1.3125 + 1.34375)/2 = 1.32812
f(x5) = f(1.32812) = 0.01458 > 0
7th iteration :
The root lies between 1.3125 and 1.32812
x6 = (1.3125 + 1.32812)/2 = 1.32031
f(x6) = f(1.32031) = −0.01871 < 0
8th iteration :
The root lies between 1.32031 and 1.32812
x7 = (1.32031 + 1.32812)/2 = 1.32422
f(x7) = f(1.32422) = −0.00213 < 0
9th iteration :
The root lies between 1.32422 and 1.32812
x8 = (1.32422 + 1.32812)/2 = 1.32617
f(x8) = f(1.32617) = 0.00621 > 0
10th iteration :
The root lies between 1.32422 and 1.32617
x9 = (1.32422 + 1.32617)/2 = 1.3252
f(x9) = f(1.3252) = 0.00204 > 0
11th iteration :
The root lies between 1.32422 and 1.3252
x10 = (1.32422 + 1.3252)/2 = 1.32471
f(x10) = f(1.32471) = −0.00005 < 0
The approximate root of x³ − x − 1 = 0 after eleven bisection steps is 1.32471.
FUNCTIONS
Below is example pseudocode for the bisection method:
INPUT: Function f,
endpoint values a, b,
tolerance TOL,
maximum iterations NMAX
CONDITIONS: a < b,
either f(a) < 0 and f(b) > 0 or f(a) > 0 and f(b) < 0
OUTPUT: value which differs from a root of f(x) = 0 by less than TOL
N ← 1
while N ≤ NMAX do // limit iterations to prevent infinite loop
c ← (a + b)/2 // new midpoint
if f(c) = 0 or (b – a)/2 < TOL then // solution found
Output(c)
Stop
end if
N ← N + 1 // increment step counter
if sign(f(c)) = sign(f(a)) then a ← c else b ← c // new interval
end while
Output("Method failed.") // max number of steps exceeded
Suppose, for example, that the function f = @(x) x.^3 - 2 (whose root is
2^(1/3) ≈ 1.2599, matching the output below) exists. Then:
>> format long
>> eps_abs = 1e-5;
>> eps_step = 1e-5;
>> a = 0.0;
>> b = 2.0;
>> while (b - a >= eps_step || (abs(f(a)) >= eps_abs && abs(f(b)) >= eps_abs))
c = (a + b)/2;
if ( f(c) == 0 )
break;
elseif ( f(a)*f(c) < 0 )
b = c;
else
a = c;
end
end
>> [a b]
ans = 1.259918212890625 1.259925842285156
>> abs(f(a))
ans = 0.0000135103601622
>> abs(f(b))
ans = 0.0000228224229404
The same loop can be packaged as a function:
function [r] = bisection( f, a, b, N, eps_step, eps_abs )
% f: function handle; [a, b]: initial bracket; N: maximum iterations;
% eps_step, eps_abs: tolerances on the interval width and on |f|.
if ( f(a) == 0 )
r = a;
return;
elseif ( f(b) == 0 )
r = b;
return;
elseif ( f(a) * f(b) > 0 )
error( 'f(a) and f(b) do not have opposite signs' );
end
for k = 1:N
% Find the mid-point
c = (a + b)/2;
if ( f(c) == 0 )
r = c;
return;
elseif ( f(c)*f(a) < 0 )
b = c;
else
a = c;
end
if ( b - a < eps_step )
if ( abs( f(a) ) < abs( f(b) ) && abs( f(a) ) < eps_abs )
r = a;
return;
elseif ( abs( f(b) ) < eps_abs )
r = b;
return;
end
end
end
error( 'the method did not converge within N iterations' );
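A quick check of this function, assuming it is saved as bisection.m (the handle and
tolerances here are illustrative):
>> f = @(x) x.^3 - 2;
>> r = bisection( f, 0, 2, 100, 1e-5, 1e-5 )
This should return a value near 2^(1/3) ≈ 1.259921.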
B. FALSE POSITION METHOD
Method
The method of false position provides an exact solution for linear functions,
but more direct algebraic techniques have supplanted its use for these functions.
However, in numerical analysis, double false position became a root-finding
algorithm used in iterative numerical approximation techniques. Many equations,
including most of the more complicated ones, can be solved only by iterative
numerical approximation. This consists of trial and error, in which various values
of the unknown quantity are tried. That trial-and-error may be guided by
calculating, at each step of the procedure, a new estimate for the solution. There
are many ways to arrive at a calculated estimate and regula falsi provides one of
these.
Given an equation, move all of its terms to one side so that it has the form, f (x) =
0, where f is some function of the unknown variable x. A value c that satisfies this
equation, that is, f (c) = 0, is called a root or zero of the function f and is a solution
of the original equation. If f is a continuous function and there exist two points a0
and b0 such that f (a0) and f (b0) are of opposite signs, then, by the intermediate
value theorem, the function f has a root in the interval (a0, b0).
The false position method differs from the bisection method only in the
choice it makes for subdividing the interval at each iteration. It converges faster
to the root because it is an algorithm which uses appropriate weighting of the
initial end points x1 and x2 using the information about the function, or the data of
the problem. In other words, finding x3 is a static procedure in the case of the
bisection method since for a given x1 and x2, it gives identical x3, no matter what
the function we wish to solve. On the other hand, the false position method uses
the information about the function to arrive at x3.
Formula
The poor convergence of the bisection method as well as its poor adaptability
to higher dimensions (example systems of two or more non-linear equations)
motivate the use of better techniques. One such method is the Method of False
Position. Here, we start with an initial interval [x1, x2], and we assume that the
function changes sign only once in this interval. Next, we find an x3 in this interval,
which is given by the intersection of the x axis and the straight line passing
through (x1, f(x1)) and (x2, f(x2)). It is easy to verify that x3 is given by
x3 = x2 − f(x2)·(x2 − x1)/(f(x2) − f(x1)) = (x1·f(x2) − x2·f(x1))/(f(x2) − f(x1)).
Then, choose the new interval from the two choices [x1,x3] or [x3,x2]
depending on in which interval the function changes sign.
More precisely, suppose that in the k-th iteration the bracketing interval
is (ak, bk). Construct the line through the points (ak, f(ak)) and (bk, f(bk)), as
illustrated. This line is a secant or chord of the graph of the function f. In point-
slope form, its equation is given by
y − f(bk) = [(f(bk) − f(ak))/(bk − ak)]·(x − bk).
Now choose ck to be the x-intercept of this line, that is, the value of x for which y =
0; substituting these values gives
ck = bk − f(bk)·(bk − ak)/(f(bk) − f(ak)).
Algorithm
Example 1
Find a root of the equation f(x) = x³ − x − 1 using the False Position method.
Solution:
Here x³ − x − 1 = 0
Let f(x) = x³ − x − 1
Here
x      0    1    2
f(x)  -1   -1    5
1st iteration :
Here f(1) = −1 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1 and x1 = 2
x2 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x2 = 1 − (−1)·(2 − 1)/(5 − (−1))
x2 = 1.16667
f(x2) = f(1.16667) = −0.5787 < 0
2nd iteration :
Here f(1.16667) = −0.5787 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.16667 and x1 = 2
x3 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x3 = 1.16667 − (−0.5787)·(2 − 1.16667)/(5 − (−0.5787))
x3 = 1.25311
f(x3) = f(1.25311) = −0.28536 < 0
3rd iteration :
Here f(1.25311) = −0.28536 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.25311 and x1 = 2
x4 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x4 = 1.25311 − (−0.28536)·(2 − 1.25311)/(5 − (−0.28536))
x4 = 1.29344
f(x4) = f(1.29344) = −0.12954 < 0
4th iteration :
Here f(1.29344) = −0.12954 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.29344 and x1 = 2
x5 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x5 = 1.29344 − (−0.12954)·(2 − 1.29344)/(5 − (−0.12954))
x5 = 1.31128
f(x5) = f(1.31128) = −0.05659 < 0
5th iteration :
Here f(1.31128) = −0.05659 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.31128 and x1 = 2
x6 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x6 = 1.31128 − (−0.05659)·(2 − 1.31128)/(5 − (−0.05659))
x6 = 1.31899
f(x6) = f(1.31899) = −0.0243 < 0
6th iteration :
Here f(1.31899) = −0.0243 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.31899 and x1 = 2
x7 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x7 = 1.31899 − (−0.0243)·(2 − 1.31899)/(5 − (−0.0243))
x7 = 1.32228
f(x7) = f(1.32228) = −0.01036 < 0
7th iteration :
Here f(1.32228) = −0.01036 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.32228 and x1 = 2
x8 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x8 = 1.32228 − (−0.01036)·(2 − 1.32228)/(5 − (−0.01036))
x8 = 1.32368
f(x8) = f(1.32368) = −0.0044 < 0
8th iteration :
Here f(1.32368) = −0.0044 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.32368 and x1 = 2
x9 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x9 = 1.32368 − (−0.0044)·(2 − 1.32368)/(5 − (−0.0044))
x9 = 1.32428
f(x9) = f(1.32428) = −0.00187 < 0
9th iteration :
Here f(1.32428) = −0.00187 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.32428 and x1 = 2
x10 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x10 = 1.32428 − (−0.00187)·(2 − 1.32428)/(5 − (−0.00187))
x10 = 1.32453
f(x10) = f(1.32453) = −0.00079 < 0
10th iteration :
Here f(1.32453) = −0.00079 < 0 and f(2) = 5 > 0
∴ Now, the root lies between x0 = 1.32453 and x1 = 2
x11 = x0 − f(x0)·(x1 − x0)/(f(x1) − f(x0))
x11 = 1.32453 − (−0.00079)·(2 − 1.32453)/(5 − (−0.00079))
x11 = 1.32464
f(x11) = f(1.32464) = −0.00034 < 0
The approximate root of the equation x³ − x − 1 = 0 using the False Position method is 1.3246.
PSEUDOCODE
1. Start
2. Define function f(x)
3. Input
a. Lower and upper guesses x0 and x1
b. Tolerable error e
4. If f(x0)*f(x1) > 0
print "Incorrect initial guesses"
goto 3
End If
5. Do
x2 = x0 - ((x0-x1) * f(x0))/(f(x0) - f(x1))
If f(x0)*f(x2) < 0
x1 = x2
Else
x0 = x2
End If
While |f(x2)| > e
6. Print root as x2
7. Stop
EXAMPLES
f(x) = x³ + 4x² − 10
x = 0:0.05:4;
f = @(x) (x.^3) + (4*(x.^2)) - 10;
plot(x, f(x)); grid            % plot f to choose the bracketing guesses
x1 = input('x1=');
x2 = input('x2=');
for i = 1:20
    f1 = f(x1);
    f2 = f(x2);
    x3 = x2 - ((f2*(x1 - x2)/(f1 - f2)));   % false-position point
    f3 = f(x3);
    if sign(f2) == sign(f3)
        x2 = x3;               % root remains between x1 and x3
    else
        x1 = x2; x2 = x3;      % root is between x2 and x3
    end
end
x3                             % display the final estimate
2. Find the real root of the equation x³ − 2x − 5 = 0 by using the false position
method.
Solution:
Let f(x) = x³ − 2x − 5
f(2) = 8 − 4 − 5 = −1 (negative)
f(3) = 27 − 6 − 5 = 16 (positive)
So the root lies between 2 and 3, and the first approximation is
x = 2 − f(2)·(3 − 2)/(f(3) − f(2)) = 2 − (−1)·1/(16 − (−1)) = 2 + 1/17 ≈ 2.0588,
with f(2.0588) ≈ −0.39 < 0, so the root lies in [2.0588, 3]. The function below
continues this iteration.
FUNCTION:
% Setting x as symbolic variable
syms x;
% Input Section
y = input('Enter non-linear equation: ');
a = input('Enter first guess: ');
b = input('Enter second guess: ');
e = input('Tolerable error: ');
fa = eval(subs(y,x,a));
fb = eval(subs(y,x,b));
if fa*fb > 0
    disp('Given initial values do not bracket the root.');
else
    c = a - (a-b) * fa/(fa-fb);
    fc = eval(subs(y,x,c));
    fprintf('\n\na\t\t\tb\t\t\tc\t\t\tf(c)\n');
    while abs(fc)>e
        fprintf('%f\t%f\t%f\t%f\n',a,b,c,fc);
        if fa*fc< 0
            b =c;
            fb = eval(subs(y,x,b));
        else
            a =c;
            fa = eval(subs(y,x,a));
        end
        c = a - (a-b) * fa/(fa-fb);
        fc = eval(subs(y,x,c));
    end
    fprintf('\nRoot is: %f\n', c);
end
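Running this script on Example 2 (entering x^3 - 2*x - 5 at the equation prompt,
guesses 2 and 3, and a tolerance such as 0.0001) should converge to the root
x ≈ 2.0946.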
C. MODIFIED FALSE POSITION METHOD
Mathematical Background
The straight line through (a, f(a)) and (b, f(b)) is
(y − f(a))/(x − a) = (f(b) − f(a))/(b − a),
and setting y = 0 gives the first approximation to the root,
x1 = (a·f(b) − b·f(a))/(f(b) − f(a)).
According to the Regula Falsi method, the second and higher approximations of the
desired root are as follows:
x_k+1 = (a·f(b) − b·f(a))/(f(b) − f(a)), k = 1, 2, 3, 4, ...
where each iteration set ends with either a = x_k if f(b) × f(x_k) < 0, or b = x_k otherwise.
But sometimes this method takes more than a reasonable number of
iterations to solve a problem. Considering this problem, an efficient
improvement of the method has been developed. In section 2 the improved method is
described. An efficient algorithm and an illustration with a numerical example are
presented in section 3. Section 4 presents a conclusion.
Method: Formula
For a given function f(x), which is continuous on [a, b] and such that f(a) × f(b) < 0,
there exists a real root c in the interval [a, b]. Now the straight line joining the
points (a, f(a)) and (b, f(b)) cuts the x-axis at the point (c1, 0). So the first
approximation of the desired root is
c1 = b − f(b)·(b − a)/(f(b) − f(a)) = (a·f(b) − b·f(a))/(f(b) − f(a)).
Again for each case we check f(c_k) × f(c_k+1) for k = 0, 1, 2, ..., where c0 = a. If
this value is positive then, as in the algorithm below, the stored function value at
the retained endpoint is halved (Fa = F(a)/2 or Fb = F(b)/2) before the next
approximation is computed, so that the same endpoint cannot remain fixed
indefinitely.
Therefore, the root of the equation f(x) = 0 can be found by the iterative process
using the following formula:
c_k+1 = (a·f(b) − b·f(a))/(f(b) − f(a)) for k = 0, 1, 2, ...
The iterative process continues until the absolute difference between two
successive values of 𝑐𝑘 is less than a desired value (app. zero).
Algorithm
To find a real root of the equation f(x) = 0 which lies in the interval [a, b], the following
algorithm and a computer program have been developed using MATLAB R2018 for the
simulation results:
Step 1: Define the function F(x)
Step 2: INPUT the tolerance tolx and the maximum number of iterations max
Step 3: INPUT a, b
Step 4: Fa = F(a); Fb = F(b); i = 1
Step 5: If Fa*Fb < 0
w = (a·Fb − b·Fa)/(Fb − Fa);
end
Step 6: For i = 2 to max
Step 7: Fw = F(w);
If Fw*Fa < 0
b = w;
Fa = F(a)/2;
w = (a·Fw − b·Fa)/(Fw − Fa);
Fw = F(w);
Else if Fw*Fb < 0
a = w;
Fb = F(b)/2;
w = (w·Fb − b·Fw)/(Fb − Fw);
Fw = F(w);
end
Step 8: fprintf(i, a, b, Fw)
Step 9: Error = abs(Fw)
Step 10: If (Error < tolx)
fprintf('An exact solution x = w was found')
break
end
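A compact MATLAB translation of these steps (a minimal sketch; the function handle,
bracket, and tolerances are supplied by the caller):

function w = mod_regula_falsi(F, a, b, tolx, maxit)
% Modified regula falsi: the stored function value at the retained
% endpoint is halved, so neither endpoint can remain fixed forever.
Fa = F(a); Fb = F(b);
if Fa*Fb > 0
    error('F(a) and F(b) must have opposite signs');
end
w = (a*Fb - b*Fa)/(Fb - Fa);        % first false-position point
for i = 2:maxit
    Fw = F(w);
    if abs(Fw) < tolx
        break                       % |F(w)| small enough: accept w
    end
    if Fw*Fa < 0                    % root lies in [a, w]
        b = w; Fb = Fw;
        Fa = Fa/2;                  % halve the retained value at a
    else                            % root lies in [w, b]
        a = w; Fa = Fw;
        Fb = Fb/2;                  % halve the retained value at b
    end
    w = (a*Fb - b*Fa)/(Fb - Fa);    % next false-position point
end

For f(x) = x² − 2 on [1, 3] with a small tolerance this converges to √2 ≈ 1.414214,
the same value reached in the example table below.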
EXAMPLE
The following table shows the final iterations for an equation solved on the interval
[1, 3] using the said modified Regula Falsi method (the iterates converge to
√2 ≈ 1.414214):

Iteration No. k |    a     |    b     |   c_k    |  f(c_k)
      17        | 1.414212 | 1.414216 | 1.414214 | -0.000007
      18        | 1.414212 | 1.414214 | 1.414213 |  0.000003
      19        | 1.414213 | 1.414214 | 1.414214 | -0.000002
II. OPEN METHODS
A. FIXED POINT ITERATION METHOD
A point, say, s is called a fixed point if it satisfies the equation x = g(x). In this
method, we first rewrite the equation
f(x) = 0, (1)
in the form
x = g(x), (2)
in such a way that any solution of equation (2), which is a fixed point of g, is
a solution of equation (1). Then consider the following algorithm.
Algorithm 1: Start from any point x0 and consider the recursive process
x_n+1 = g(x_n), n = 0, 1, 2, ... (3)
If g : [a, b] → [a, b] is continuous and satisfies |g(x) − g(y)| ≤ α|x − y| for all x, y
in [a, b] with some constant α < 1, then g has exactly one fixed point l0 in [a, b] and
the sequence (xn) defined by the process (3), with a starting point x0 ∈ [a, b],
converges to l0.
Proof (*): By the intermediate value property g has a fixed point, say l0. The
convergence of (xn) to l0 follows from the inequalities
|x_n − l0| = |g(x_n−1) − g(l0)| ≤ α|x_n−1 − l0| ≤ ... ≤ αⁿ|x0 − l0| → 0 as n → ∞.
Theorem: Let l0 be a fixed point of g(x). Suppose g(x) is differentiable on [l0 − ε, l0
+ ε] for some ε > 0 and g satisfies the condition |g′(x)| ≤ α < 1 for all x ∈ [l0 − ε, l0 +
ε]. Then the sequence (xn) defined by (3), with a starting point x0 ∈ [l0 − ε, l0 + ε],
converges to l0.
Proof: By the mean value theorem g([l0 −ε,l0 + ε]) ⊆ [l0 −ε,l0 + ε] (Prove!).
Therefore, the proof follows from the previous theorem. The previous theorem
essentially says that if the starting point is sufficiently close to the fixed point then
the chance of convergence of the iterative process is high.
Algorithm 2:
It is understood that here we assume all the necessary conditions, so that xn is
well defined.
Steps in Writing
1. Initialize with a guess p0 and i = 0 for the given equation, rewritten in the
form x = g(x).
2. Set p_i+1 = g(p_i), i.e. substitute the current value p_i for every occurrence
of x in g(x).
3. Repeat step 2 until two successive values p_i and p_i+1 agree to within the
tolerable error.
4. Stop with p = p_i+1; it approximates the real root of the given equation.
In general, by using x_n+1 = φ(x_n), n = 0, 1, 2, ..., you can find the required root
of the given equation, as in the sketch below.
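A minimal MATLAB sketch of Algorithm 1 (the rearrangement g, starting point, and
tolerance are illustrative choices):

g = @(x) (x + 1).^(1/3);    % rewrite x^3 - x - 1 = 0 as x = g(x) = (x+1)^(1/3)
p = 1.5;                    % initial guess p0
for i = 1:100
    p_new = g(p);           % fixed point update p_{i+1} = g(p_i)
    if abs(p_new - p) < 1e-6
        break               % successive iterates close enough
    end
    p = p_new;
end
p_new                       % approximately 1.324718, the root of x^3 - x - 1 = 0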
B. Newton-Raphson Method
MATHEMATICAL BACKGROUND
Newton-Raphson method, named after Isaac Newton and Joseph Raphson,
is a popular iterative method to find the root of a polynomial equation. It is also
known as Newton’s method, and is considered a limiting case of the secant method.
Based on the first few terms of Taylor’s series, the Newton-Raphson method works
best when the first derivative of the given function/equation is large in magnitude
near the root. It is often used to improve the value of the root obtained using other
root-finding methods in Numerical Methods.
Derivation of Newton-Raphson Method:
The theoretical and mathematical background behind Newton-Raphson
method and its MATLAB program (or program in any programming language) is
approximation of the given function by tangent line with the help of derivative,
after choosing a guess value of root which is reasonably close to the actual root.
The x-intercept of the tangent is calculated by using elementary algebra,
and this calculated x-intercept is typically a better approximation to the root of the
function. This procedure is repeated till the root of desired accuracy is found.
Let us now go through a short mathematical background of Newton’s
method. For this, consider a real value function f(x) as shown in the figure below:
Consider x1 to be the initial guess root of the function f(x), which is
essentially a differentiable function. Now, to derive a better approximation, a tangent
line is drawn as shown in the figure. The equation of this tangent line is given by:
y = f′(x1)·(x − x1) + f(x1)
where, f’(x) is the derivative of function f(x).
As shown in the figure, the tangent crosses the x-axis at x2, i.e. at x = x2, y = 0.
Therefore, 0 = f′(x1)·(x2 − x1) + f(x1).
Solving, x2 = x1 − f(x1)/f′(x1).
Repeating the above process for xn and xn+1 terms of the iteration process,
we get the general iteration formula for the Newton-Raphson Method as:
x_n+1 = x_n − f(x_n)/f′(x_n)
This formula is used in the program code for Newton Raphson method in
MATLAB to find new guess roots. Just as with fixed-point iteration, the Newton-
Raphson approach will often diverge if the initial guesses are not sufficiently close
to the true roots. Whereas graphical methods could be employed to derive good
guesses for the single-equation case, no such simple procedure is available for the
multi-equation version.
FUNCTION OF NEWTON-RAPHSON
In this code for Newton’s method in MATLAB, any polynomial function can
be given as input. Initially in the program, the input function has been defined and
is assigned to a variable ‘a’.
After getting the initial guess value of the root and the allowed error, the
program, following basic MATLAB syntax, finds the root by the iteration
procedure explained in the theory above; a sketch of such a program is shown below.
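A minimal sketch of such a Newton-Raphson program (the input function here is the
polynomial used in the example below, and the variable name 'a' follows the
description above; everything else is an illustrative assumption):

syms x;
a = x^3 - x - 1;                 % input function assigned to variable 'a'
da = diff(a);                    % its first derivative
x0 = 1.5;                        % initial guess
err = 1e-9;                      % allowed error
for i = 1:100
    fx = double(subs(a, x, x0));
    dfx = double(subs(da, x, x0));
    x1 = x0 - fx/dfx;            % Newton-Raphson update
    if abs(x1 - x0) < err
        break
    end
    x0 = x1;
end
fprintf('Root: %.9f after %d iterations\n', x1, i);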
EXAMPLE OF NEWTON-RAPHSON METHOD
Apply the Newton-Raphson method to the same function used in the program
above, solving it numerically. The root is to be corrected to 9 decimal places.
Solution:
Given function: x³ − x − 1 = 0, which is differentiable.
The first derivative of f(x) is f′(x) = 3x² − 1.
Let’s determine the guess value.
f(1) = 1 − 1 − 1 = −1 and f(2) = 8 − 2 − 1 = 5
Therefore, the root lies in the interval [1, 2]. So, assume x1 = 1.5 as the initial
guess root of the function f(x) = x³ − x − 1.
Now,
f(1.5) = 1.5³ − 1.5 − 1 = 0.875
f′(1.5) = 3(1.5²) − 1 = 5.750
Using Newton’s iteration formula:
x2 = x1 − f(x1)/f′(x1) = 1.5 − 0.875/5.750 = 1.347826
The iteration for x3, x4, .... is done similarly. The table below shows the whole
iteration procedure for the given function in the program code for Newton-Raphson
in MATLAB and this numerical example (values shown to 9 decimal places):

n | x_n         | f(x_n)
1 | 1.5         | 0.875
2 | 1.347826087 | 0.100682174
3 | 1.325200399 | 0.002058363
4 | 1.324718174 | 0.000000924
5 | 1.324717957 | 0.000000000

The root corrected to 9 decimal places is x = 1.324717957.
C. SECANT METHOD
MATHEMATICAL BACKGROUND
The Secant method is an iterative tool of numerical methods and
mathematics used to estimate the roots of polynomial equations. In using
the Secant method, the difficulty of computing derivatives and the inconvenience
of evaluating some of the functions are avoided. During each iteration, this method
assumes the function to be approximately linear in the region of interest. It is often
considered to be a finite-difference version of Newton’s method, even though the
secant method was developed independently. However, it is generally used as an
alternative to that method because it is free of derivatives.
In the late 20th century, Potra et al. noted that the secant method is one of
the most efficient algorithms and procedures for solving nonlinear equations. It has
been used since the time of the early Italian algebraists and has been extensively
studied in the literature. It is well known that for smooth equations the classical
secant method is superlinearly convergent with Q-order (1 + √5)/2 ≈ 1.618.
Ostrowski (1973) observed that, with the exception of the first step, only one
function value per step is used, so its efficiency index is also (1 + √5)/2. The first
generalization of the secant process to systems of two nonlinear equations
goes back to Gauss.
Its rate of convergence is more rapid than that of the bisection method, so the
secant method is considered to be a much faster root-finding method. At the same
time, there is no need to find the derivative of the function as in the Newton-Raphson
method. However, its limitations are unavoidable: the method fails to converge
when f(xn) = f(xn−1), and if the x-axis is tangential to the curve it may not
converge to the solution.
METHOD: FORMULA
As stated above, the Secant method can be inconvenient in the process of
manual calculation of the functions. As a known method in Numerical Methods,
the Secant method estimates the point of intersection of the curve and the x-axis
(i.e. the root of the equation that represents the curve) as exactly as possible.
For that, it uses a succession of roots of secant lines of the curve. Assume x0
and x1 to be the initial guess values, and construct a secant line to the curve
through (x0, f(x0)) and (x1, f(x1)). The equation of this secant line is given by
y = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)
The slope of this line is used in place of the derivative:
f′(x_k) ≈ (f(x_k−1) − f(x_k))/(x_k−1 − x_k)
Remember the derivative is the “slope of the line tangent to the curve”. Once
the pattern is seen, the remaining work is done by substituting this approximation
into Newton’s formula; the resulting update leads back to the equation of this
secant line:
x_k+1 = x_k − f(x_k)·(x_k−1 − x_k)/(f(x_k−1) − f(x_k))
STEPS:
1. Write the primary equation. The equation of this secant line is given by:
y = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)
2. Set y = 0 and solve for x:
0 = [(f(x1) − f(x0))/(x1 − x0)]·(x − x1) + f(x1)
x = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))
3. Now, considering this new x as x2, and repeating the same process for x2,
x3, x4, we end up with the following expressions:
x2 = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))
x3 = x2 − f(x2)·(x2 − x1)/(f(x2) − f(x1))
⇊
For example, take f(x) = cos(x) + 2 sin(x) + x² with initial guesses x0 = 0 and
x1 = −0.1, so that f(x0) = 1 and f(x1) = 0.8053. As we know,
x2 = x1 − f(x1)·(x1 − x0)/(f(x1) − f(x0))
x2 = −0.1 − 0.8053·(−0.1 − 0)/(0.8053 − 1)
x2 = −0.5136
The complete calculation and iteration of secant method (and MATLAB program)
for the given function is presented in the table below:
n |  x_n−1  |   x_n   |  x_n+1  | |f(x_n+1)| | |x_n+1 − x_n|
1 |   0.0   |  -0.1   | -0.5136 |  0.1522   |   0.4136
2 |  -0.1   | -0.5136 | -0.6100 |  0.0457   |   0.0964
3 | -0.5136 | -0.6100 | -0.6514 |  0.0065   |   0.0414
4 | -0.6100 | -0.6514 | -0.6582 |  0.0013   |   0.0068
5 | -0.6514 | -0.6582 | -0.6598 |  0.0006   |   0.0016
6 | -0.6582 | -0.6598 | -0.6595 |  0.0002   |   0.0003
Thus, the root of f(x) = cos(x) + 2 sin(x) + x² as obtained from the secant method
as well as its MATLAB program is −0.6595.
Proof:
f(−0.6595) ≈ 0.0002
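A minimal secant-method sketch for this example (initial guesses and stopping rule
chosen to match the table above):

f = @(x) cos(x) + 2*sin(x) + x.^2;
x0 = 0; x1 = -0.1;                               % initial guesses
for n = 1:20
    x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % secant update
    if abs(x2 - x1) < 1e-4
        break                                    % estimates close enough
    end
    x0 = x1; x1 = x2;
end
x2                                               % approximately -0.6595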
D. MODIFIED SECANT METHOD
Mathematical Background
Newton’s method (Newton-Raphson) is fast (quadratic convergence), but the
derivative may not be available.
Secant method uses two points to approximate the derivative, but
approximation may be poor if points are far apart.
Modified Secant method is a much better approximation because it uses one
point, and the derivative is found by using another point some small distance, d,
away.
The modified secant method needs two evaluations of the function in each
iteration. If we consider a complicated function (or operator), this fact can
reduce its competitiveness. In this case, the idea is to set the modification
parameter (called αn below) to 0 after the first few iterations, because by then the
secant method usually obtains good enough results.
whenever x* lies between the two most recent iterates, that is, in the cases
x_n−1 < x* < x_n or x_n < x* < x_n−1. Using the above results, the definition of
the method, and Lemma 1, it is not hard to prove the three-step Q-superlinear
convergence of (x_n). However, in practice, there are some advantages to this
modified secant method. First, since the modified divided difference is a better
approximation to F′(x_n) than the classical one, the convergence will be faster
(the first iterations will be better). Next, the size of the neighbourhoods can be
higher, that is, we can consider worse starting points x0, as we will see in the
numerical experiments. Finally, with our modification the divided differences tend
to F′(x*), and then we could obtain Q-superlinear convergence (or Q-quadratic
convergence if F is strongly semismooth).
Present Algorithms
Steps in Writing
In order to show the performance of the modified secant method, we have
compared it with the classical secant method. We have tested it on several
semismooth equations. In Table 1, we display the iterates
with x0 = 0.1, x1 = 0.05. In this case, we have d−·d+ > 0, and obviously, the secant
method is three-step Q-quadratically convergent, the modified secant method
with αn = 0.9 is two-step Q-quadratically convergent, and the modified secant
method with αn close to 1 is Q-quadratically convergent.
If we consider as starting points x0 = 0.2, x1 = 0.3, the conclusions are similar,
but the new approach is more convenient to use; see Table 2.
Examples
Determine the highest real root of
f(x) = 2x³ − 11.7x² + 17.7x − 5
function [root, approximate_error] = secant(func, xr, es, a, maxit)
% func = function handle
% xr = initial guess
% es = desired relative error (%)
% a = perturbation fraction
% maxit = maximum allowable iterations
if nargin < 5, maxit = 50; end   % if maxit blank set to 50
if nargin < 4, a = 0.01; end     % if perturbation fraction blank, 0.01 (assumed default)
% Modified secant method
iter = 0;
while (1)
    xrn = xr - a*xr*func(xr)/(func(xr + a*xr) - func(xr));  % modified secant update
    iter = iter + 1;
    if xrn ~= 0
        ea = abs((xrn - xr)/xrn)*100;   % approximate relative error (%)
    end
    approximate_error(iter) = ea;
    if ea <= es || iter >= maxit
        break
    end
    xr = xrn;
end
root = xrn;
disp(['total number of iterations to display the root (modified secant) = ', num2str(iter)]);
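A hedged run for the polynomial above (the starting point and perturbation fraction
are illustrative):

f = @(x) 2*x.^3 - 11.7*x.^2 + 17.7*x - 5;
[root, ea] = secant(f, 3, 0.001, 0.01, 50)

Starting from xr = 3, this should converge to the highest real root near x ≈ 3.56.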
Open methods rely on formulas requiring a single initial guess
point or two initial guess points that do not necessarily bracket a real root. They
may sometimes diverge from the root as the iteration progresses. Some of the
known open methods are the Secant method, Newton-Raphson method, and Muller’s
method.
The bracketing methods require two initial guess points, a and b, that
bracket or contain the root. The function also has opposite signs at these
two initial guesses, i.e. f(a)f(b) < 0. The width of the bracket is reduced as the
iteration progresses until the approximate solution to a desired accuracy is
reached. By following this procedure, a root of f is certainly found. Some of the
known bracketing methods are the Bisection method, Regula Falsi method (or False
Position), and the improved or modified Regula Falsi method. In this article we will
focus only on the bracketing methods. In the rest of the text, let f be a real and
continuous function on an interval [a, b], with f(a) and f(b) of opposite signs,
i.e. f(a)f(b) < 0. Therefore, there is at least one real root r in the interval [a, b] of
the equation f(x) = 0.
A. BRENT’S METHOD
MATHEMATICAL BACKGROUND
Since its development in 1972, Brent’s method has been the most popular
method for finding the zeros of functions. This method usually converges very
quickly to a zero; even for the occasional difficult functions encountered in practice,
it typically takes only on the order of n iterations, where n is the number of steps
required for the bisection method to find the zero to approximately the same
accuracy. Brent has shown that this method requires at most about n²
iterations in the worst case.
METHOD: FORMULA
Given three current points a, b, c with distinct function values, inverse quadratic
interpolation takes the quadratic p(y) = αy² + βy + γ in y that satisfies p(f(a)) = a,
p(f(b)) = b, and p(f(c)) = c, and evaluates it at y = 0:
b̂ = p(0) = a·f(b)·f(c) / [(f(a) − f(b))·(f(a) − f(c))]
         + b·f(a)·f(c) / [(f(b) − f(a))·(f(b) − f(c))]
         + c·f(a)·f(b) / [(f(c) − f(a))·(f(c) − f(b))]
with γ = p(0).
STEPS
1. Maintain three points: b, the current best approximation to the zero; a, a point
with f(a) and f(b) of opposite signs, so that a and b bracket a zero; and c, a previous
iterate.
2. Compute a candidate point b̂: (i) by linear (secant) interpolation through
(a, f(a)) and (b, f(b)) when only two distinct function values are available, or (ii) by
inverse quadratic interpolation when f(a), f(b), and f(c) are distinct.
3. If necessary, b̂ is adjusted or replaced with the bisection point. (The rules are
complicated.)
4. Once b̂ has been finalized, a, b, c, and b̂ are used to determine new values of
a, b, and c. (The rules are complicated.)
Remark: In part (ii) of step 2, the coefficients α, β, and (especially) γ are easily
determined using standard methods at the cost of a few arithmetic operations.
(Of course, there needs to be a safeguard against the unlikely event that f(a),
f(b), and f(c) are not distinct.) Note that γ is just p(0), so if f really were the
inverse of a quadratic, i.e., f⁻¹(y) = p(y) = αy² + βy + γ for all y, then b̂ = γ
would satisfy f(b̂) = f(p(0)) = f(f⁻¹(0)) = 0. Thus inverse quadratic
interpolation provides a low-cost approximate zero of f that should be more
accurate than that obtained by linear (secant) interpolation. Note that if direct
quadratic interpolation were used instead of inverse quadratic interpolation, i.e.,
if we found p(x) = αx² + βx + γ such that p(a) = f(a), p(b) = f(b), and p(c) = f(c),
then it would be necessary to find b̂ such that p(b̂) = 0 using the quadratic
formula, which involves a square root. By using inverse quadratic interpolation,
Brent’s method avoids this square root.
PRESENT ALGORITHMS
EXAMPLE
Use the secant iteration inside the method to find the three roots of the cubic
polynomial f[x] = 4x³ − 16x² + 17x − 4.
Show information on the actual computations for the beginning values p0 = 3 and
p1 = 2.8.
Solution:
Hopefully, the iteration p_n+1 = g[p_n−1, p_n] will converge to a root of
f[x]. Graph the function y = f[x]:
y = f[x] = −4 + 17x − 16x² + 4x³
Root (1): find this root starting with the values p0 = 3.0 and p1 = 2.8.
Use the secant method to find a numerical approximation for the root. First,
do the iteration one step at a time.
Type each one of the subsequent commands inside an individual cell and execute
them one by one.
Compare the outcome with Mathematica's built-in numerical root finder.
FUNCTIONS
function b = fzerotx(F,ab,varargin)
% Textbook version of fzero: Brent's zero-finding algorithm.
% b = fzerotx(F,[a,b]) finds a zero of F(x) in the interval [a,b].
a = ab(1);
b = ab(2);
fa = F(a,varargin{:});
fb = F(b,varargin{:});
if sign(fa) == sign(fb)
    error('Function must change sign on the interval')
end
c = a;
fc = fa;
d = b - c;
e = d;
while fb ~= 0
    % Arrange so that b is the best approximation and [a,b] brackets the zero
    if sign(fa) == sign(fb)
        a = c; fa = fc;
        d = b - c; e = d;
    end
    if abs(fa) < abs(fb)
        c = b; b = a; a = c;
        fc = fb; fb = fa; fa = fc;
    end
    % Convergence test and possible exit
    m = 0.5*(a - b);
    tol = 2.0*eps*max(abs(b),1.0);
    if (abs(m) <= tol) | (fb == 0.0)
        break
    end
    % Choose bisection or interpolation
    if (abs(e) < tol) | (abs(fc) <= abs(fb))
        % Bisection
        d = m;
        e = m;
    else
        % Interpolation
        s = fb/fc;
        if (a == c)
            % Linear (secant) interpolation
            p = 2.0*m*s;
            q = 1.0 - s;
        else
            % Inverse quadratic interpolation
            q = fc/fa;
            r = fb/fa;
            p = s*(2.0*m*q*(q - r) - (b - c)*(r - 1.0));
            q = (q - 1.0)*(r - 1.0)*(s - 1.0);
        end
        if p > 0, q = -q; else p = -p; end
        % Is the interpolated point acceptable?
        if (2.0*p < 3.0*m*q - abs(tol*q)) & (p < abs(0.5*e*q))
            e = d;
            d = p/q;
        else
            d = m;
            e = m;
        end
    end
    % Next point
    c = b;
    fc = fb;
    if abs(d) > tol
        b = b + d;
    else
        b = b - sign(b-a)*tol;
    end
    fb = F(b,varargin{:});
end
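For example, with the cubic from the example above (the bracketing interval is an
illustrative choice):

F = @(x) 4*x.^3 - 16*x.^2 + 17*x - 4;
z = fzerotx(F, [2.2 3])     % the root near 2.40; F(2.2) < 0 and F(3) > 0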
A. MULLER’S METHOD
This method is better suited to finding the roots of polynomials, and therefore
we will focus on this particular application of Muller's method.
MATHEMATICAL BACKGROUND
This method was first presented by D.E. Muller in 1956. This technique can be
used for any root finding program but it is particularly useful for approximating
the roots of polynomials. Muller’s method is an extension of the Secant Method.
The secant method begins with the two initial approximations x0 and x1 and
determines the next approximation x2 as the intersection of the x-axis with the
line through (x0, f(x0)) and (x1, f(x1)).
The power of Muller Method comes from the fact that it finds the complex
roots of the functions. This property makes it more useful when compared with
the other methods. (like Bisection, Newton, Regula-Falsi …)
FORMULA
Given three approximations x0, x1, x2, Muller’s method passes a parabola through
(x0, f(x0)), (x1, f(x1)), (x2, f(x2)). Let
h1 = x1 − x0,  h2 = x2 − x1. (1)
Then define
δ1 = (f(x1) − f(x0))/h1,  δ2 = (f(x2) − f(x1))/h2,  d = (δ2 − δ1)/(h2 + h1), (2)
b = δ2 + h2·d, (3)
x3 = x2 − 2f(x2)/(b ± sqrt(b² − 4f(x2)·d)), (4)
where the sign in (4) is chosen to agree with the sign of b, so that the denominator
is largest in magnitude.
This method can also be used to find complex zeros of analytic functions.
ALGORITHM
1. Start
2. Declare function f(x)
3. Get initial approximation in array x
4. Get values of aerr and maxitr
*Here aerr is the absolute error
Maxitr is the maximum number of iterations for the desired degree of
accuracy*
5. Loop for itr = 1 to maxitr
6. Calculate li, di, mu and s
7. If mu < 0,
l = (2*y(x[2])*di)/(-mu + s)
8. Else,
l = (2*y(x[2])*di)/(-mu - s)
9. x[3] = x[2] + l*(x[2] - x[1])
10. Print itr and x[3]
11. If fabs(x[3] - x[2]) < aerr,
Print the required root as x[3]
12. Else,
Loop for i = 0 to 2
x[i] = x[i + 1]
13. End loop (i)
14. End loop (itr)
15. Print "the solution does not converge"
16. Stop
STEPS IN WRITING
Step 1: Set h1 = x1 − x0; h2 = x2 − x1; δ1 = (f(x1) − f(x0))/h1;
δ2 = (f(x2) − f(x1))/h2; d = (δ2 − δ1)/(h2 + h1); i = 3.
Step 2: While i ≤ maxitr, do Steps 3–7.
Step 3: Compute b = δ2 + h2·d and D = sqrt(b² − 4f(x2)·d).
Step 4: If |b − D| < |b + D| then set E = b + D; else set E = b − D.
Step 5: Set h = −2f(x2)/E and p = x2 + h.
Step 6: If |h| < TOL then OUTPUT(p) (procedure successful) and
STOP.
Step 7: Set x0 = x1 (prepare for next iteration)
x1 = x2
x2 = p
h1 = x1 - x0
h2 = x2 – x1
δ1 = (f(x1) − f(x0))/h1; δ2 = (f(x2) − f(x1))/h2; d = (δ2 − δ1)/(h2 + h1); i = i + 1.
Step 8: OUTPUT('Method failed after maxitr iterations') and
STOP.
EXAMPLE
The following demonstrates the first six iterations of Müller's method in MATLAB.
Suppose we wish to find a root of the polynomial
p(x) = x⁷ + 3x⁶ + 7x⁵ + x⁴ + 5x³ + 2x² + 5x + 5
starting with the three initial approximations x0 = 0, x1 = -0.1, and x2 = -0.2.
At each step, the root of the fitted quadratic polynomial is added
onto the middle approximation x(2).
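The transcript below does not show how x and M are built; a setup consistent with
the displayed numbers (treat it as an inferred assumption) is:

x = [0; -0.1; -0.2];                          % the three current approximations
M = [(x - x(2)).^2, (x - x(2)), ones(3,1)];   % quadratic in (x - x(2))
% After each solve c = M \ y, the next approximation is
%   x(2) - 2*c(3)/(c(2) + sign(c(2))*sqrt(c(2)^2 - 4*c(1)*c(3))),
% and x and M are then rebuilt from the three most recent approximations.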
>> p = [1 3 7 1 5 2 5 5]
p=
1 3 7 1 5 2 5 5
>> y = polyval( p, x )
y =
5.00000
4.51503
4.03954
>> c = M \ y
c =
0.47367
4.80230
4.51503
>> y = polyval( p, x )
y =
4.5150
4.0395
-13.6858
>> c = M \ y
c =
-13.2838
6.0833
4.0395
>> y = polyval( p, x )
y =
4.0395
-13.6858
1.6597
>> c = M \ y
c =
-21.0503
38.6541
-13.6858
>> y = polyval( p, x )
y =
-13.6858
1.6597
0.5160
>> c = M \ y
c =
-31.6627
8.0531
1.6597
>> y = polyval( p, x )
y =
1.65973
0.51602
0.05802
>> c = M \ y
c =
-18.6991
13.1653
0.5160
>> y = polyval( p, x )
y =
0.51602
0.05802
-0.00046
>> c = M \ y
c =
-21.8018
14.5107
0.0580
The successive approximations are:
0.000000000000000
-0.100000000000000
-0.200000000000000
-1.148643697414111
-0.568122032631211
-0.669630566165950
-0.702851144883234
-0.706857484921269
-0.706825973130949
-0.706825980788168
-0.706825980788170
FUNCTION
function Muller()
clc, clear all            % clear the command window and variables
syms x                    % declare x as a symbolic variable
R_Accuracy = 1e-8;        % number of digits for termination
A_x = 0;                  % function initialization
flag = 1;                 % flag used for terminating the process
Root_index = 0;
disp('Polynomial function of Order "n" is of type: a[1]X^n+a[2]X^(n-1)+...+a[n]X^1+a[n+1]');
disp('Type Coeff as "[ 1 2 3 ...]" i.e Row vector form');
Coeff = input('Enter the coefficient in order? ');
[row_initial,col_initial] = size(Coeff);
for i = 1:col_initial
A_x = A_x + Coeff(i)*(x^(col_initial-i)); % Polynomial function building
end
clc
disp('Polynomial is : ');
disp(A_x)
while(flag)
[row,col] = size(Coeff);
if (col ==1)
flag =0;
elseif(col==2)
flag =0;
Root_index = Root_index + 1;
Root(Root_index)= -Coeff(2)/Coeff(1);
disp(['Root found:' num2str(-Coeff(2)/Coeff(1)) '']);
disp(' ')
elseif(col >= 3)
Guess = input('Give the three initial guess point [x0, x1, x2]: ');
if isempty(Guess)
Guess = [1 2 3];
disp('Using default value [1 2 3]')
elseif(Guess == zeros(1,3))
break
end
disp(['Three initial guess are: ' num2str(Guess) ' ']);
for i = 1:100
h1 = Guess(2)-Guess(1);
h2 = Guess(3)-Guess(2);
d1 = (polyval(Coeff,Guess(2))-polyval(Coeff,Guess(1)))/h1;
d2 = (polyval(Coeff,Guess(3))-polyval(Coeff,Guess(2)))/h2;
d = (d2-d1)/(h1+h2);
b = d2 + h2*d;
Delta = sqrt(b^2-4*polyval(Coeff,Guess(3))*d);
if (abs(b-Delta)<abs(b+Delta))
E = b + Delta;
else
E = b - Delta;
end
h = -2*polyval(Coeff,Guess(3))/E;
p = Guess(3) + h;
if (abs(h) < R_Accuracy)
Factor = [1 -p];
Root_index = Root_index + 1;
Root(Root_index)= p;
disp(['Root found: ' num2str(p) ' ']);
% disp(['Root found after' num2str(i) ' no of iteration.']);
disp(' ')
break;
else
Guess = [Guess(2) Guess(3) p];
end
if (i == 100)
disp('Method failed to find root!!!');
end
end
end
[Coeff,rem] = deconv(Coeff,Factor);
Coeff;
end
disp(['Function has ' num2str(Root_index) ' roots, given as:']);
for i = 1:Root_index
disp(['Root no ' num2str(i) ' is ' num2str(Root(i)) ' .'])
end
disp('End of Program');
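The function prompts for its inputs; for the degree-7 polynomial of the example
above, a session might look like this (output abbreviated and approximate):

>> Muller()
Enter the coefficient in order? [1 3 7 1 5 2 5 5]
Give the three initial guess point [x0, x1, x2]: [0 -0.1 -0.2]
Root found: -0.70683
...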
B. BAIRSTOW’S METHOD
MATHEMATICAL BACKGROUND
METHOD: FORMULA
Bairstow's method is an iterative method used to find both the real and complex
roots of a polynomial. It is based on the idea of synthetic division of the given
polynomial by a quadratic factor and can be used to find all the roots of a
polynomial. Given a polynomial, say,
f_n(x) = a_0 + a_1·x + a_2·x² + ... + a_n·xⁿ, (B.1)
dividing it by the quadratic x² − r·x − s yields a quotient polynomial of degree n − 2,
f_n−2(x) = b_2 + b_3·x + ... + b_n·x^(n−2), (B.2)
and a remainder
R(x) = b_1·(x − r) + b_0, (B.3)
where the coefficients b_i are obtained from the synthetic-division recurrence
b_n = a_n, (B.4)
b_n−1 = a_n−1 + r·b_n, (B.5a)
b_i = a_i + r·b_i+1 + s·b_i+2 (B.5b)
for i = n − 2, ..., 1, 0. (B.5c)
The quadratic x² − r·x − s is an exact divisor of f_n(x), and its two roots are roots of
f_n(x), exactly when the remainder vanishes, i.e. when b_0 = b_1 = 0. Bairstow's
method adjusts r and s until this happens, using an approach equivalent to Newton-
Raphson's method.
Since both b_0 and b_1 are functions of r and s, we can have Taylor series expansions
of them, as:
b_1(r + Δr, s + Δs) ≈ b_1 + (∂b_1/∂r)·Δr + (∂b_1/∂s)·Δs, (B.6a)
b_0(r + Δr, s + Δs) ≈ b_0 + (∂b_0/∂r)·Δr + (∂b_0/∂s)·Δs. (B.6b)
Setting the left-hand sides to zero gives the linear system
(∂b_1/∂r)·Δr + (∂b_1/∂s)·Δs = −b_1, (B.7a)
(∂b_0/∂r)·Δr + (∂b_0/∂s)·Δs = −b_0. (B.7b)
To solve the system of equations (B.7), we need the partial derivatives of b_0 and b_1
w.r.t. r and s. Bairstow has shown that these partial derivatives can be obtained by
synthetic division of f_n−2(x), which amounts to using the recurrence relation
replacing a_i with b_i and b_i with c_i, i.e.
c_n = b_n, (B.8a)
c_n−1 = b_n−1 + r·c_n, (B.8b)
c_i = b_i + r·c_i+1 + s·c_i+2 (B.8c)
for i = n − 2, ..., 1, where
∂b_0/∂r = c_1,  ∂b_0/∂s = ∂b_1/∂r = c_2,  ∂b_1/∂s = c_3. (B.9)
Solving (B.7) for Δr and Δs, the values of r and s are improved by
r ← r + Δr, (B.10a)
s ← s + Δs, (B.10b)
and the process is repeated until b_0 and b_1 are close enough to zero. The two roots
delivered by the quadratic factor are then
x = (r ± sqrt(r² + 4s))/2. (B.12)
If we want to find all the roots of f_n(x), then at this point we have the following
three possibilities:
1. If the quotient f_n−2(x) has degree three or more, apply Bairstow's method to it,
using the final r and s as starting values.
2. If the quotient is quadratic, find its two roots directly from the quadratic formula.
3. If the quotient is linear, the one remaining root follows directly from setting it
to zero.
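A quick MATLAB check of the recurrence (B.4)-(B.5): dividing the quartic consistent
with the Solver example below, x⁴ + 2x³ − 11x² + 8x − 60, by its exact quadratic
divisor x² + 4 (r = 0, s = −4) leaves no remainder:

p = [1 2 -11 8 -60];            % coefficients, highest power first
[q, r] = deconv(p, [1 0 4])     % divide by x^2 + 4
% q = [1 2 -15]   -> quotient x^2 + 2x - 15 = (x - 3)(x + 5)
% r = [0 0 0 0 0] -> zero remainder, i.e. b0 = b1 = 0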
STEPS IN WRITING
As the first quadratic polynomial one may choose the normalized polynomial formed
from the leading three coefficients of f(x); for the polynomial
f(x) = 6x⁵ + 11x⁴ − 33x³ − 33x² + 11x + 6 this gives
u = a_n−1/a_n = 11/6;  v = a_n−2/a_n = −33/6.
After eight iterations the method produced a quadratic factor that contains the
roots −1/3 and −3 within the represented precision. The step length from the
fourth iteration on demonstrates the superlinear speed of convergence.
EXAMPLE
Example 1: Find the (real/complex) roots of the following equation using Solver.
(The coefficients in B6:B10 below correspond to x⁴ + 2x³ − 11x² + 8x − 60 = 0,
consistent with the roots ±2i, 3, and −5 found in this example.)
In order to limit calculations with complex numbers, instead of finding each root
individually, we find quadratic divisors as done using Bairstow’s method. The
calculations are shown in Figure 1.
Figure 1 – Using Solver to find roots of a polynomial
We show the coefficients of the polynomial in range B6:B10. The parameters r and
s from Bairstow’s algorithm are shown in cells B12 and B13. These are initially set
to zeros. The polynomial which results from division by x² − rx − s is shown in range
E8:E10 with the remainder shown in E6:E7.
The formula in cell E10 is =B10. The formula in cell E9 is =B9+$B$12*E10. The
formula in cell E8 is =B8+$B$12*E9+$B$13*E10. The formulas in cells E6 and E7
are similar to the formula in cell E8. Our goal now is to use Solver to modify the r
and s values in order to get cells E6 and E7 (the remainder after division) to
become zero.
Since Solver is only able to target one cell (the Set Objective value), we place the
formula =E6^2+E7^2 in cell E12, and use this as the target cell. Cells E6 and E7
only become zero when cell E12 becomes zero and
vice versa.
After running Solver, we see that cells E6, E7 and E12 are close to zero and the values for r and s have
changed to r = 0 and s = -4. This means that one of the quadratic divisors x2 – rx –
s of the original polynomial is x² + 4. This quadratic is zero only when x =
±2√(−1), which is usually written as ±2i where i = √(−1).
These values are shown in B17:C17 and B18:C18, where the value in the B cell
contains the real part of the root and the value in the C cell contains the imaginary
part. To accomplish this in Excel we place the formula =IF(B15>=0,(B12-
SQRT(B15))/2,B12/2) in cell B17 and the formula =IF(B15>=0,0,-SQRT(-B15)/2)
in cell C17. The formulas for cells B18 and C18 are identical except that we replace
–SQRT by +SQRT.
In this case we can simply use the quadratic formula. The new r and s values are
shown in cells H12 and H13. Cell H12 contains the formula =-E9, while cell H13
contains the formula =-E8. This time the resulting roots x = 3 and x = -5 are real
(since the discriminant in cell H15 is positive) as shown in range H17:I18.
FUNCTIONS
function [rts,it] = bairstow(a,n,tol)
% Bairstow's method for the roots of x^n + a(1)x^(n-1) + ... + a(n).
% a: row vector of real coefficients of a monic polynomial, n: degree,
% tol: tolerance on the (u,v) Newton update. The roots are returned in
% the n x 2 matrix rts (real part, imaginary part); it counts iterations.
it = 1;
while n > 2
    % initialise for this quadratic factor x^2 + u*x + v
    u = 1; v = 1; st = 1;
    while st > tol
        b(1) = a(1)-u; b(2) = a(2)-b(1)*u-v;
        for k = 3:n
            b(k) = a(k)-b(k-1)*u-b(k-2)*v;
        end
        c(1) = b(1)-u; c(2) = b(2)-c(1)*u-v;
        for k = 3:n-1
            c(k) = b(k)-c(k-1)*u-c(k-2)*v;
        end
        if n > 3, cn3 = c(n-3); else cn3 = 1; end  % c(0) = 1 for a cubic
        % Newton step for (u,v) via Cramer's rule on the c-array derivatives
        c1 = c(n-1)*cn3; b1 = b(n)*cn3; cb = c(n-1)*b(n-1);
        c2 = c(n-2)*c(n-2); bc = b(n-1)*c(n-2);
        dn = c1-c2;
        du = (b1-bc)/dn; dv = (cb-c(n-2)*b(n))/dn;
        u = u+du; v = v+dv;
        st = norm([du dv]);        % size of the update
        it = it+1;
    end
    [r1,r2,im1,im2] = solveq(u,v,n,a);
    rts(n,1:2) = [r1 im1]; rts(n-1,1:2) = [r2 im2];
    n = n-2;                       % deflate by the quadratic factor
    a(1:n) = b(1:n);
end
% solve the remaining linear or quadratic factor
u = a(1); v = a(2);
[r1,r2,im1,im2] = solveq(u,v,n,a);
rts(n,1:2) = [r1 im1];
if n == 2
    rts(n-1,1:2) = [r2 im2];
end

function [r1,r2,im1,im2] = solveq(u,v,n,a)
% roots of x^2 + u*x + v = 0, or of x + a(1) = 0 when n == 1
if n == 1
    r1 = -a(1); r2 = 0; im1 = 0; im2 = 0;
else
    d = u*u-4*v;
    if d < 0                       % complex conjugate pair
        d = -d;
        r1 = -u/2; r2 = r1; im1 = sqrt(d)/2; im2 = -im1;
    elseif d > 0                   % two distinct real roots
        r1 = (-u+sqrt(d))/2; r2 = (-u-sqrt(d))/2; im1 = 0; im2 = 0;
    else                           % repeated real root
        r1 = -u/2; r2 = r1; im1 = 0; im2 = 0;
    end
end
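A hedged usage example (the function assumes a monic polynomial, so the leading
coefficient is dropped; convergence from the built-in start u = v = 1 is not guaranteed
for every polynomial):

% roots of x^3 - 10x^2 + 31x - 30 = (x - 2)(x - 3)(x - 5)
[rts, it] = bairstow([-10 31 -30], 3, 1e-10)   % expect 2, 3, and 5 in rts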
V. GAUSS ELIMINATION
MATHEMATICAL BACKGROUND
A critical step in this process is the ability to divide row values by the value of a
"pivot entry" (the value of an entry along the top-left to bottom-right diagonal of
(a possibly modified) coefficient matrix).
Naive Gaussian Elimination assumes that this division will always be possible i.e.
that the pivot value will never be zero. (Note, by the way, that a pivot value close to,
but not necessarily equal to, zero can make the results unreliable when working with
calculators or computers with limited accuracy.)
METHOD: FORMULA
The following sections divide Naïve Gauss elimination into two steps:
1) Forward Elimination
2) Back Substitution
To conduct Naïve Gauss Elimination, we join the [A] and [RHS]
matrices into one augmented matrix, [C]. In step k of forward elimination, the
coefficient of x_k is eliminated from every subsequent equation that follows the kth
row. For example, in step 2 (i.e. k = 2), the coefficient of x2 will be zeroed from
rows 3 .. n. With each step that is conducted, a new matrix is generated until the
coefficient matrix is transformed to an upper triangular matrix.
ALGORITHM
1. Start
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
Do for k=1 to n
Do for j=1 to n+1
Read a[k][j]
End for j
End for k
4. Do for k=1 to n-1
Do for i=k+1 to n
Do for j=k+1 to n+1
a[i][j] = a[i][j] – a[i][k] /a[k][k] * a[k][j]
End for j
End for i
End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
sum = 0
Do for j=k+1 to n
sum = sum + a[k][j] * x[j]
End for j
x[k] = 1/a[k][k] * (a[k][n+1] – sum)
End for k
7. Display the result x[k]
8. Stop
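The algorithm above maps directly onto MATLAB; a minimal sketch for a 3 × 3
system (the augmented matrix is assumed to need no pivoting, as in the naive
method):

n = 3;
a = [2 1 -1 8; -3 -1 2 -11; -2 1 2 -3];   % augmented matrix [A | b]
for k = 1:n-1                             % forward elimination (step 4)
    for i = k+1:n
        a(i,k+1:n+1) = a(i,k+1:n+1) - a(i,k)/a(k,k) * a(k,k+1:n+1);
    end
end
x = zeros(n,1);
x(n) = a(n,n+1)/a(n,n);                   % back substitution (steps 5-6)
for k = n-1:-1:1
    x(k) = (a(k,n+1) - a(k,k+1:n)*x(k+1:n)) / a(k,k);
end
x                                         % solution: [2; 3; -1]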
FLOWCHART
EXAMPLE
We could proceed to try and replace the first element of row 2 with a zero, but
we can actaully stop. To see why, convert back to a system of equations:
Notice the last equation: 0=5. This is not possible. So the system has no solutions;
it is not possible to find values x, y, and z that satisfy all three equations
simultaneously.
FUNCTIONS
function C = gauss_elimination(A,B)
i = 1;
X = [ A B ];
[ nX mX ] = size( X);
while i <= nX
if X(i,i) == 0
disp('Diagonal element zero')   % stop if a zero pivot is encountered
return
end
X = elimination(X,i,i);
i = i +1;
end
C = X(:,mX);
function X = elimination(X,i,j)
[ nX mX ] = size( X);
a = X(i,j);
X(i,:) = X(i,:)/a;
for k = 1:nX
if k == i
continue
end
X(k,:) = X(k,:) - X(i,:)*X(k,j);
end
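For example, using the same 3 × 3 system as in the sketch above:

A = [2 1 -1; -3 -1 2; -2 1 2];
B = [8; -11; -3];
C = gauss_elimination(A, B)     % returns [2; 3; -1]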
B. GAUSS-JORDAN
MATHEMATICAL BACKGROUND
METHOD: FORMULA
• Ri ↔ Rj means: Interchange row i and row j.
• αRi means: Replace row i with α times row i.
• Ri + αRj means: Replace row i with the sum of row i and α times row j.
STEPS
1. Write the augmented matrix of the system.
2. Use elementary row operations to transform the augmented matrix into
reduced row echelon form, in which:
(a) The rows (if any) consisting entirely of zeros are grouped
together at the bottom of the matrix.
(b) In each row that does not consist entirely of zeros, the leftmost
nonzero element is a 1 (called a leading 1 or a pivot).
(c) Each column that contains a leading 1 has zeros in all other
entries.
(d) The leading 1 in any row is to the left of any leading 1’s in the
rows below it.
3. Stop process in step 2 if you obtain a row whose elements are all
zeros except the last one on the right. In that case, the system is
inconsistent and has no solutions. Otherwise, finish step 2 and
read the solutions of the system from the final matrix.
ALGORITHM
1. Start.
2. Read the number of unknowns n and the augmented matrix a of size n × (n + 1).
3. For each column j, if the pivot a(j, j) is zero, interchange that row with a lower
row having a nonzero entry in column j.
4. Eliminate the entries below each pivot (forward pass) and then above each
pivot (backward pass).
5. Divide each row by its pivot; the solution appears in the last column.
6. Display Result.
7. Stop.
EXAMPLE
Example 4. Solve the following system by using the Gauss-Jordan elimination
method.
FUNCTIONS
function x = gauss_jordan_elim(a, b)
a = [a b];                     % form the augmented matrix [A | b]
[m, n] = size(a);
for j = 1:m-1
    for z = j+1:m              % if the pivot is zero, swap with a lower row
        if a(j,j) == 0
            t = a(j,:); a(j,:) = a(z,:);
            a(z,:) = t;
        end
    end
for i=j+1:m
a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
end
end
for j=m:-1:2
for i=j-1:-1:1
a(i,:)=a(i,:)-a(j,:)*(a(i,j)/a(j,j));
end
end
for s=1:m
a(s,:)=a(s,:)/a(s,s);
x(s)=a(s,n);
end
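Using the same system once more (note that the function returns the solution as a
row vector):

x = gauss_jordan_elim([2 1 -1; -3 -1 2; -2 1 2], [8; -11; -3])
% x = [2 3 -1]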
References:
BISECTION METHOD
https://en.wikipedia.org/wiki/Bisection_method
FALSE POSITION METHOD
https://en.wikipedia.org/wiki/Regula_falsi
https://brainly.in/question/6700140#readmore
FIXED POINT ITERATION
http://home.iitk.ac.in/~psraj/mth101/lecture_notes/lecture8.pdf
SECANT METHOD
http://www.cs.utexas.edu/users/kincaid/PGE310/Ch6-Roots-Eqn.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.9154&rep=rep1&type=pdf
https://www.codewithc.com/secant-method-matlab-program/
BAIRSTOW METHOD
https://en.wikipedia.org/wiki/Bairstow%27s_method
BRENT’S METHOD
https://en.wikipedia.org/wiki/Brent%27s_method
MULLER’S METHOD
https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/10RootFinding/mueller/
GAUSS ELIMINATION
https://socratic.org/questions/what-is-naive-gaussian-elimination
GAUSS-JORDAN
https://brilliant.org/wiki/gaussian-elimination/