
Unit – III

Numerical Methods
Numerical methods are techniques to approximate mathematical procedures (e.g.,
integrals). Approximations are required because we either cannot solve the procedure
analytically (e.g., solving transcendental equations) or because the analytical method is
intractable (e.g., solving a set of a thousand simultaneous linear equations for a thousand
unknowns). By the end of this chapter, students will be able to apply numerical methods to
the following mathematical procedures and topics: integration, solutions of nonlinear
equations and simultaneous linear equations, and first order ordinary differential equations.

Numerical methods are often iterative in nature: they consist of repeated execution of the
same process, where at each step the result of the preceding step is used. This is known as
an iteration process, and it is repeated until the result is obtained to the desired degree
of accuracy.

Root Finding:
One of the most common mathematical tasks we encounter is the need to solve
equations. That is to say, for some function f(x) and a value b, we wish to know for
which x it is true that f(x)=b.

In fact, this problem can be reduced to the special case of finding the values of x
for which a given function takes the value zero. Suppose we wish to solve f(x)=b. We
can simply define a new function g(x)= f (x) – b with the result that our problem is now
to solve g(x) = 0.

Quite a lot of the algebra which you learned at school is directed to solving this
problem for particular sorts of functions. For example, if g(x) is of the form ax² + bx + c,
then the quadratic formula can be used to solve g(x) = 0. However, for some functions
an algebraic solution may be difficult to find or may not even exist, or we might not
have an algebraic representation of our function at all! In these cases, how do we solve
the equation?
Intermediate Value Theorem: Suppose f(x) is a continuous function on a closed
interval [a, b]. If u is any number between f(a) and f(b), then there exists at least one
number c in [a, b] such that f(c) = u.

Consequently,

If f(x) is any continuous function on a closed interval [a, b] and if f(a) and f(b)
are of opposite signs then there exists a root x=c of f(x)=0 in [a, b].

The bisection method:

This method locates a root of the equation f(x) = 0 between a and b. If f(x) is
continuous on [a, b], and f(a) and f(b) are of opposite signs, then there is a root
between a and b.

For definiteness, let f(a) be negative and f(b) be positive. Then the first
approximation to the root is x1 = (a + b)/2.

If f(x1) = 0, then x1 is a root of f(x) = 0. Otherwise, the root lies between a and
x1 or between x1 and b according as f(x1) is positive or negative.
Then we bisect that interval as before and continue the process until the root is
found to the desired accuracy.

In the figure, f(x1) is positive, so that the root lies between a and x1. Then the second
approximation to the root is x2 = (a + x1)/2.

If f(x2) is negative, the root lies between x1 and x2. Then the third approximation to the
root is x3 = (x1 + x2)/2, and so on.

Example 1: Find a real root of the equation x³ − 4x − 9 = 0 using the bisection
method, correct to 3 decimal places.

Example 2: By using the bisection method, find an approximate root of the equation
sin x = 1/x that lies between x = 1 and x = 1.5.
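
The bisection loop is straightforward to program. The following is a minimal Python sketch
(the name bisection and the tolerance parameter are illustrative choices, not part of these
notes), applied to Example 1, f(x) = x³ − 4x − 9, which is negative at x = 2 and positive at
x = 3, so a root lies in [2, 3].

def bisection(f, a, b, tol=1e-4, max_iter=100):
    """Bisection method: f must be continuous with f(a) and f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2.0            # midpoint: the next approximation
        fx = f(x)
        if fx == 0 or (b - a) / 2.0 < tol:
            return x
        if fa * fx < 0:              # sign change in [a, x]: root lies there
            b, fb = x, fx
        else:                        # otherwise the root lies in [x, b]
            a, fa = x, fx
    return (a + b) / 2.0

# Example 1: x^3 - 4x - 9 = 0 changes sign on [2, 3]
root = bisection(lambda x: x**3 - 4*x - 9, 2, 3, tol=0.5e-3)
print(round(root, 3))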
Newton – Raphson Method:

Let x0 be an approximate root of the equation f(x) = 0.

If ξ = x0 + h is the exact root, then f(ξ) = 0.

Now, expanding f(ξ) = f(x0 + h) = 0 by Taylor's series, we have

f(x0) + h f'(x0) + (h²/2!) f''(x0) + … = 0.

Since h is small, neglecting h² and higher powers, f(x0) + h f'(x0) ≈ 0, so that
h ≈ −f(x0)/f'(x0). A better approximation to the root is therefore
x1 = x0 − f(x0)/f'(x0), and in general the successive approximations are given by

x_{n+1} = x_n − f(x_n)/f'(x_n),   n = 0, 1, 2, …

Geometrical Interpretation of Newton – Raphson Method:

Let f(x)=0 be the given equation and ξ


be the exact root of it.
Let x0 be a point near the root ξ. Then the equation of the tangent at A0 [x0, f(x0)] is

y − f(x0) = f'(x0)(x − x0).

It cuts the x-axis at x1 = x0 − f(x0)/f'(x0), which is a first approximation to the root ξ.


If A1 is the point corresponding to x1 on the curve, then the tangent at A1 will cut the x-axis
at x2, which is nearer to ξ and is, therefore, a second approximation to the root, given by
x2 = x1 − f(x1)/f'(x1).

Repeating this process, we approach the root ξ quite rapidly. Hence the method consists
in replacing the part of the curve between the point A0 and the x-axis by the tangent
to the curve at A0.
Example 1: Find a root of 𝒆𝒙 𝐬𝐢𝐧 𝒙 = 𝟏 near x = 1 using Newton Raphson’s method.
Example 2. Find a real root of the equation 𝟑𝒙 − 𝐜𝐨𝐬 𝒙 − 𝟏 = 𝟎 using Newton
Raphson method.
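
The iteration is easily programmed. Below is a minimal Python sketch of the formula
x_{n+1} = x_n − f(x_n)/f'(x_n) (the function name newton_raphson and the tolerance are
illustrative choices), applied to Example 1, f(x) = eˣ sin x − 1, with starting value x0 = 1.

import math

def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration: repeatedly replace x by x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another starting point")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:          # successive approximations close enough
            return x_new
        x = x_new
    return x

# Example 1: e^x sin x = 1, i.e. f(x) = e^x sin x - 1, starting near x = 1
f  = lambda x: math.exp(x) * math.sin(x) - 1
df = lambda x: math.exp(x) * (math.sin(x) + math.cos(x))   # derivative of e^x sin x
print(newton_raphson(f, df, x0=1.0))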
Numerical Integration

The method of finding the value of an integral ∫_a^b f(x) dx by using numerical techniques is called
"Numerical Integration".

A definite integral of the form ∫_a^b f(x) dx represents the area under the curve y = f(x) enclosed between
the limits x = a and x = b. This integration is possible only if f(x) is explicitly given and if it is
integrable.

The problem of numerical integration can be stated as, "Given a set of (n+1) points (xi, yi), i = 0, 1, 2,
…, n for the function y = f(x), where f(x) is not known explicitly, it is required to evaluate
∫_{x0}^{xn} f(x) dx."

The problem of numerical integration is solved by replacing f(x) with an interpolating polynomial Pn(x)
and obtaining ∫_{x0}^{xn} Pn(x) dx, which is approximately taken as the value of ∫_{x0}^{xn} f(x) dx.

We have to understand that while analytical methods give exact answers, the numerical techniques
provide us only approximate answers.

The integrals of some functions, like sin x², 1/ln x and √(1 + x⁴), have no elementary formulas. When
we cannot find a workable antiderivative for a function ƒ that we have to integrate, we can partition the
interval of integration, replace ƒ by a closely fitting polynomial on each subinterval, integrate the
polynomials, and add the results to approximate the definite integral of ƒ. This procedure is an example
of numerical integration. In this section we study two such methods, the Trapezoidal Rule and
Simpson's Rule.

Trapezoidal Approximations
The Trapezoidal Rule for the value of a definite integral is based on approximating the region between
a curve and the x-axis with trapezoids instead of rectangles, as in the figure. It is not necessary for the
subdivision points x0, x1, x2, …, xn in the figure to be evenly spaced, but the resulting formula is
simpler if they are evenly spaced. We therefore assume that the length of each subinterval is
Δx = (b − a)/n.

FIGURE: The Trapezoidal Rule approximates short stretches of the curve y = ƒ(x) with line segments.
To approximate the integral of ƒ from a to b, we add the areas of the trapezoids made by joining the
ends of the segments to the x-axis.

The length Δx = (b − a)/n is called the step size or mesh size. The area of the trapezoid that lies above the
i-th subinterval is

Δx ((y_{i−1} + y_i)/2) = (Δx/2)(y_{i−1} + y_i).

The Trapezoidal Rule

To approximate ∫_a^b f(x) dx, we use

∫_a^b f(x) dx ≈ T = (Δx/2)(y0 + 2y1 + 2y2 + … + 2y_{n−1} + yn).

The y's are the values of ƒ at the partition points,
i.e. y0 = f(a), y1 = f(x1), y2 = f(x2), …, y_{n−1} = f(x_{n−1}), yn = f(b),
x0 = a, x1 = a + Δx, x2 = a + 2Δx, …, x_{n−1} = a + (n − 1)Δx, xn = b,
where Δx = (b − a)/n.
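
As a quick check of the formula, here is a minimal Python sketch of the Trapezoidal Rule (the
function name trapezoidal is an illustrative choice); it is applied to exercise 3 of the list
below, ∫_1^2 (1/s²) ds, whose exact value is 1/2.

def trapezoidal(f, a, b, n):
    """Trapezoidal Rule: T = (dx/2)*(y0 + 2*y1 + ... + 2*y_{n-1} + y_n)."""
    dx = (b - a) / n
    y = [f(a + i * dx) for i in range(n + 1)]        # y_i = f(x_i) at the partition points
    return (dx / 2) * (y[0] + 2 * sum(y[1:-1]) + y[-1])

# Exercise 3 below: integral of 1/s^2 from s = 1 to s = 2, with n = 4
print(trapezoidal(lambda s: 1 / s**2, 1, 2, 4))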
Simpson’s Rule: Approximations Using Parabolas
Another rule for approximating the definite integral of a continuous function results from using
parabolas instead of the straight-line segments that produced trapezoids. As before, we partition the
interval [a, b] into n subintervals of equal length h = Δx = (b - a)/n, but this time we require that n be
an even number. On each consecutive pair of intervals we approximate the curve y = ƒ(x) ≥ 0 by a
parabola, as shown in the figure. A typical parabola passes through three consecutive points
(x_{i−1}, y_{i−1}), (x_i, y_i), and (x_{i+1}, y_{i+1}) on the curve.

Simpson's Rule

To approximate ∫_a^b f(x) dx, we use

∫_a^b f(x) dx ≈ S = (Δx/3)(y0 + 4y1 + 2y2 + 4y3 + … + 2y_{n−2} + 4y_{n−1} + yn).

The y's are the values of ƒ at the partition points,
i.e. y0 = f(a), y1 = f(x1), y2 = f(x2), …, y_{n−1} = f(x_{n−1}), yn = f(b),
x0 = a, x1 = a + Δx, x2 = a + 2Δx, …, x_{n−1} = a + (n − 1)Δx, xn = b.
The number n is even and Δx = (b − a)/n.
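
A matching Python sketch of Simpson's Rule (again with an illustrative function name; n must
be even), applied to the same integral as the trapezoidal example above:

def simpson(f, a, b, n):
    """Simpson's Rule: S = (dx/3)*(y0 + 4y1 + 2y2 + 4y3 + ... + 2y_{n-2} + 4y_{n-1} + yn)."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's Rule")
    dx = (b - a) / n
    y = [f(a + i * dx) for i in range(n + 1)]
    odd  = sum(y[1:-1:2])    # y1, y3, ..., y_{n-1} carry weight 4
    even = sum(y[2:-1:2])    # y2, y4, ..., y_{n-2} carry weight 2
    return (dx / 3) * (y[0] + 4 * odd + 2 * even + y[-1])

# Exercise 3 below with n = 4; the exact value of the integral of 1/s^2 on [1, 2] is 0.5
print(simpson(lambda s: 1 / s**2, 1, 2, 4))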

Estimating Definite Integrals

Using Trapezoidal and Simpson’s rules estimate the following integrals with n = 4 and n = 6

1. ∫_{−1}^{1} (x² + 1) dx

2. ∫_{−2}^{0} (x² − 1) dx

3. ∫_{1}^{2} (1/s²) ds

4. ∫_{2}^{4} 1/(s − 1)² ds

5. ∫_{0} √(x + 1) dx

6. ∫_{0}^{3} 1/√(x + 1) dx

7. ∫_{0} sin(x + 1) dx

8. ∫_{−1} cos(x + π) dx
Application Questions on Numerical Integration:

1. A town wants to drain and fill a small polluted swamp (Figure). The swamp averages
5 ft deep. About how many cubic yards of dirt will it take to fill the area after the
swamp is drained?

2. Wing design: The design of a new airplane requires a gasoline tank of constant cross-
sectional area in each wing. A scale drawing of a cross-section is shown here. The tank
must hold 5000 lb of gasoline, which has a density of 42 lb/ft³. Estimate the length of
the tank by Simpson’s Rule.
NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS

A number of numerical methods are available for the first order differential
equations of the form dy/dx = f(x, y), given y(x0) = y0.

Euler’s Method:

Consider a first order differential equation dy/dx = f(x, y) with y(x0) = y0.

First we divide the interval ( x0 , xn ) into n subintervals each of width h so that

xn = x0 + nh . Now we wish to find the value of y at xn = x0 + nh . In this method, we use

the property that in a small interval, a curve is nearly a straight line. Thus at ( x0 , y0 ) , we

approximate the curve by a tangent at that point.

i.e., in the interval (x0, x1) we approximate the curve y(x) by the tangent line at
(x0, y0), whose slope is (dy/dx) at (x0, y0), i.e. f(x0, y0).

The equation of a line through ( x0 , y0 ) , whose slope is f ( x0 , y0 ) is given by

y − y0 = f ( x0 , y0 ) ( x − x0 ) .

If the ordinate corresponding to x1 meets this tangent line in ( x1 , y1 ) then


y1 − y0 = (x1 − x0) f(x0, y0)   (or)   y1 = y0 + (x1 − x0) f(x0, y0)

(or)   y1 = y0 + h f(x0, y0)   [since h = x1 − x0]

Again, in the interval (x1, x2) and through the point (x1, y1), we approximate the
curve y(x) by the tangent line at (x1, y1), whose slope is (dy/dx) at (x1, y1), i.e. f(x1, y1).

The equation of this tangent line is y − y1 = f ( x1 , y1 ) ( x − x1 )


If the ordinate corresponding to x2 meets this tangent line in ( x2 , y2 ) then
y2 − y1 = (x2 − x1) f(x1, y1)   (or)   y2 = y1 + (x2 − x1) f(x1, y1)

(or)   y2 = y1 + h f(x1, y1)   [since h = x2 − x1]

Continuing this process n times, we get in general

y_{n+1} = y_n + h f(x_n, y_n),   n = 0, 1, 2, …

This is Euler's method of finding the approximate solution of the equation
dy/dx = f(x, y).

Problems:
1) Solve by Euler’s method, y' = x + y, y(0) = 1 and find y(0.3) taking step size
h=0.1. Compare the result obtained by this method with the result obtained
analytically.
2) Using Euler's method, solve for y at x = 2 from dy/dx = 3x² + 1, y(1) = 2, taking step
size (i) h = 0.5 and (ii) h = 0.25.
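
A minimal Python sketch of Euler's recursion y_{n+1} = y_n + h f(x_n, y_n) (the function name
euler is an illustrative choice), applied to Problem 1 above: y' = x + y, y(0) = 1, with
h = 0.1 up to x = 0.3.

def euler(f, x0, y0, h, x_end):
    """Euler's method: advance y by y_{n+1} = y_n + h*f(x_n, y_n) until x reaches x_end."""
    x, y = x0, y0
    while x < x_end - 1e-12:        # small guard against floating-point round-off
        y = y + h * f(x, y)
        x = x + h
    return y

# Problem 1: y' = x + y, y(0) = 1, find y(0.3) with h = 0.1
print(euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 0.3))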

Modified Euler’s Method:

• This method is developed to reduce the error of approximation in Euler's method.

• Better approximations are obtained when h is small.

Consider the first order differential equation dy/dx = y′ = f(x, y) with the initial
condition y(x0) = y0.

To find y ( x1 ) = y1 at x = x1 = x0 + h :

We have y1^(0) = y0 + h f(x0, y0).

The first approximation of y1 is y1^(1) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(0))].

The second approximation of y1 is y1^(2) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(1))].

The third approximation of y1 is y1^(3) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(2))].

The nth approximation of y1 is y1^(n) = y0 + (h/2)[f(x0, y0) + f(x1, y1^(n−1))].

This process should be continued till two successive approximations are sufficiently
close to each other.

Now we have dy/dx = y′ = f(x, y) with the condition y(x1) = y1.

To find y(x2) = y2 at x = x2 = x1 + h:

We take the initial approximation as y2^(0) = y1 + h f(x1, y1).

The first approximation of y2 is y2^(1) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(0))].

The second approximation of y2 is y2^(2) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(1))].

The third approximation of y2 is y2^(3) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(2))].

The nth approximation of y2 is y2^(n) = y1 + (h/2)[f(x1, y1) + f(x2, y2^(n−1))].

This process should be continued till two successive approximations are


sufficiently close to each other.

Similarly we can find y3 , y4 , y5 ,...
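
A Python sketch of the Modified Euler scheme described above: each step starts from the Euler
predictor y^(0) and repeats the corrector averaging until two successive approximations are
sufficiently close. The function names and the tolerance value are illustrative choices.

def modified_euler_step(f, x0, y0, h, tol=1e-6, max_iter=50):
    """One Modified Euler step from (x0, y0) to x1 = x0 + h."""
    x1 = x0 + h
    y_prev = y0 + h * f(x0, y0)                              # initial (Euler) approximation y1^(0)
    for _ in range(max_iter):
        y_next = y0 + (h / 2) * (f(x0, y0) + f(x1, y_prev))  # next corrector approximation
        if abs(y_next - y_prev) < tol:                       # successive approximations close enough
            break
        y_prev = y_next
    return y_next

def modified_euler(f, x0, y0, h, x_end):
    """Repeat the step to march the solution from x0 to x_end."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        y = modified_euler_step(f, x, y, h)
        x += h
    return y

# Problem 1 again: y' = x + y, y(0) = 1, estimate y(0.3) with h = 0.1
print(modified_euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 0.3))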


Runge – Kutta Method:

Consider the first order differential equation dy/dx = y′ = f(x, y) with y(x0) = y0.

To find y(x1) = y1 at x = x1 = x0 + h:

y1 = y0 + (1/6)(K1 + 2K2 + 2K3 + K4),

where K1 = h f(x0, y0),

K2 = h f(x0 + h/2, y0 + K1/2),

K3 = h f(x0 + h/2, y0 + K2/2),

K4 = h f(x0 + h, y0 + K3).

Now we have dy/dx = y′ = f(x, y) with y(x1) = y1.

To find y(x2) = y2 at x = x2 = x1 + h:

y2 = y1 + (1/6)(K1 + 2K2 + 2K3 + K4),

where K1 = h f(x1, y1),

K2 = h f(x1 + h/2, y1 + K1/2),

K3 = h f(x1 + h/2, y1 + K2/2),

K4 = h f(x1 + h, y1 + K3).
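
A Python sketch of the Runge–Kutta step written exactly as the K1, …, K4 formulas above (the
function name rk4 is an illustrative choice), applied again to y' = x + y, y(0) = 1 from the
Euler problems.

def rk4(f, x0, y0, h, x_end):
    """Fourth-order Runge-Kutta method for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    while x < x_end - 1e-12:
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average of the four slopes
        x = x + h
    return y

# y' = x + y, y(0) = 1, estimate y(0.3) with h = 0.1
print(rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 0.3))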
