NUMERIC CHAPTER 1 2016
1.1 Introduction
Numerical Analysis is a subject that is concerned with devising methods for approximating the
solution of mathematically expressed problems.
Such problems may be formulated, for example, in terms of algebraic or transcendental equations, ordinary or partial differential equations, or integral equations.
More often than not, such mathematical problems cannot be solved by exact methods.
The mathematical models ordinarily do not describe the physical problems exactly, so it is often more appropriate to find an approximate solution.
Generally, numerical analysis does not give an exact solution; instead, it attempts to devise a method that will yield an approximation differing from the exact value by less than a specified tolerance.
The efficiency of the method used to solve the given problem depends both upon the accuracy
required of the method and the ease with which it can be implemented.
Numerical techniques are widely used by scientists and engineers to solve their problems.
A major advantage of numerical techniques is that a numerical answer can be obtained even when a problem has no analytical solution.
However, a result from numerical analysis is, in general, an approximation, which can be made as accurate as desired.
The reliability of a numerical result depends on an error estimate or bound; therefore, the analysis of errors and of their sources in numerical methods is also a critically important part of the study of numerical techniques.
Errors and Approximations in Computation
A computer has a finite word length and so only a fixed number of digits are stored and used during
computation.
This would mean that even in storing an exact decimal number in its converted form in the computer
memory, an error is introduced. This error is machine dependent and is called machine epsilon.
In general, we can say that 𝑬𝒓𝒓𝒐𝒓 = 𝑻𝒓𝒖𝒆 𝒗𝒂𝒍𝒖𝒆 − 𝑨𝒑𝒑𝒓𝒐𝒙𝒊𝒎𝒂𝒕𝒆 𝒗𝒂𝒍𝒖𝒆
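As a small illustration (a Python sketch, not part of the original notes), the following code estimates the machine epsilon of standard double-precision arithmetic by repeated halving and then applies the error formula to a simple approximation of 𝜋:

import math

# Estimate machine epsilon: the largest eps for which 1 + eps/2 is still
# indistinguishable from 1 in double precision (about 2.22e-16 for IEEE-754 doubles).
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print("machine epsilon:", eps)

# Error = true value - approximate value, e.g. approximating pi by 3.14.
true_value = math.pi
approx_value = 3.14
print("error:", true_value - approx_value)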
Exact and approximate numbers:
Exact number: a number with which no uncertainty is associated and no approximation is taken.
Example: 5, 21/6, 5/7, etc. are exact numbers.
Approximate number: a number that is not exact.
Example: 𝑒 = 2.7182 … , √2 = 1.41421 …, etc. They contain infinitely many non-recurring digits.
The numbers obtained by retaining only a few of these digits are therefore called approximate numbers.
Example: 𝑒 ≈ 2.718 𝑎𝑛𝑑 𝜋 ≈ 3.142
Significant digits (figures) are the digits used to express a number.
The digits 1, 2, 3, … , 9 are significant digits and ′0′ is also a significant figure except when it is
used to fix the decimal point or used to fill the place of discarded digits.
Example: 5879 and 0.4762 each contain four significant digits, 0.00486 and 0.000382 each contain three significant digits, and 2.0682 contains five significant digits.
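As an illustration only (the helper name round_sig is hypothetical, not from the notes), a number can be reduced to a chosen count of significant digits in Python as follows:

import math

def round_sig(x, n):
    # Round x to n significant digits (illustrative helper).
    if x == 0:
        return 0.0
    digits_before_point = math.ceil(math.log10(abs(x)))
    return round(x, n - digits_before_point)

print(round_sig(0.00486123, 3))   # 0.00486  (three significant digits)
print(round_sig(5879.21, 4))      # 5879.0   (four significant digits)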
1.2 Sources of Errors
Analysis of errors is the central concern in the study of numerical analysis and therefore we will
investigate the sources and types of errors that may occur in a given problem and the subsequent
propagation of errors.
Errors in the solution of a problem are due to the following reasons:
1. To solve physical problems, mathematical models are formulated to describe them; these models do not describe the problems exactly, and as a result errors are introduced.
2. The methods used to solve the mathematical models are often not exact, and as a consequence errors are introduced.
3. A computer has a finite word length, so only a fixed number of digits of a number are stored, and as a consequence errors are introduced.
1.3 Classification of Errors
The errors induced by the sources mentioned above are classified as:
a) Inherent Errors:
The inherent error is that quantity which is already present in the statement of the problem
before its solution.
The inherent error arises either due to the simplified assumptions in the mathematical
formulation of the problem or due to the errors in the physical measurements of the parameters
of the problem.
Inherent error can be minimized by obtaining better data, by using high precision computing
aids and by correcting obvious errors in the data.
These are errors that we cannot avoid: unless the mathematical models formulated to describe the physical problems are exact, such errors will always be induced. For this reason they are called inherent errors.
b) Truncation Errors:
The mathematical models may be formulated as algebraic, transcendental, or other types of equations. Such equations often cannot be solved analytically, so we use numerical methods to obtain their solutions. In the process, errors are induced; such errors are called truncation errors (errors due to the method).
These errors are caused by using approximate formulae in computation or by replacing an infinite process by a finite one.
Example: The Taylor series formula truncated to finitely many terms:
𝑓(𝑥) = 𝑓(𝑥₀) + (𝑥 − 𝑥₀)𝑓′(𝑥₀) + ½(𝑥 − 𝑥₀)²𝑓′′(𝑥₀) + ⋯
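As a rough illustration of a truncation error (a sketch with an assumed example, not from the notes), the code below keeps only the first four Taylor terms of 𝑒^𝑥 about 𝑥₀ = 0 and compares the result with the exact value:

import math

def taylor_exp(x, terms):
    # Sum of the first `terms` terms of the Taylor series of e^x about 0.
    return sum(x**k / math.factorial(k) for k in range(terms))

x = 0.5
approx = taylor_exp(x, 4)                     # 1 + x + x**2/2 + x**3/6
print("approximation   :", approx)
print("truncation error:", math.exp(x) - approx)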
c) Computational Errors: Computational tools have limited space to store digits, so a number with more digits than the tool can accommodate will be truncated; such errors are called computational errors.
Using symbols, we can express the classification of errors as follows:
Let 𝑥 denote the exact solution of the physical problem.
Let 𝑥̃ be the solution corresponding to the given mathematical description (model).
Let 𝑥̃𝑛 be the solution of the problem obtained from the numerical method on the assumption that rounding errors are absent.
Let 𝑥̃𝑛 ∗ be the approximation to the solution obtained in the actual computation. Then we have:
Inherent error: 𝑒𝐼 = |𝑥̃ − 𝑥|
Error due to the method: 𝑒𝑚 = |𝑥̃𝑛 − 𝑥̃|
Computational error: 𝑒𝑐 = |𝑥̃𝑛∗ − 𝑥̃𝑛|
Then actual error induced: 𝑒 = |𝑥̃𝑛 ∗ − 𝑥|
|𝑥̃𝑛∗ − 𝑥| ≤ |𝑥̃𝑛∗ − 𝑥̃𝑛| + |𝑥̃𝑛 − 𝑥̃| + |𝑥̃ − 𝑥| ⟹ 𝑒 ≤ 𝑒𝑐 + 𝑒𝑚 + 𝑒𝐼
In many cases, by the error we mean not the difference between the approximation and the exact value, but rather a certain measure of the distance between them.
Hence, if 𝑒 ≤ 𝑒𝑐 + 𝑒𝑚 + 𝑒𝐼 ≤ tolerance, then the numerical solution is accepted.
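The decomposition can be checked numerically with made-up values (purely illustrative; the numbers below are assumptions, not from the notes):

# x        : exact solution of the physical problem (assumed)
# x_tilde  : solution of the mathematical model
# x_n      : solution of the numerical method, rounding ignored
# x_n_star : solution actually computed, with rounding
x, x_tilde, x_n, x_n_star = 2.000, 1.990, 1.985, 1.984

e_I = abs(x_tilde - x)         # inherent error
e_m = abs(x_n - x_tilde)       # error due to the method
e_c = abs(x_n_star - x_n)      # computational error
e   = abs(x_n_star - x)        # actual error

print("actual error e       :", e)
print("bound e_c + e_m + e_I:", e_c + e_m + e_I)   # e never exceeds this bound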
d) Round-off error is the quantity that arises from the process of rounding off numbers.
It is sometimes also called numerical error.
The process of dropping unwanted digits is called round-off.
Example: The number 2/7 can be written as 0.29, 0.286, 0.2857, etc.
To round off a number to 𝑛 significant digits, discard all digits to the right of the 𝑛th digit, and if this discarded number is:
i. Less than 5 in the (𝑛 + 1)𝑡ℎ place, leave the (𝑛)𝑡ℎ digit unchanged.
Example: 8.893 to 8.89
ii. Greater than 5 in the (𝑛 + 1)𝑡ℎ place, increase the (𝑛)𝑡ℎ digit by unity.
Example: 5.3456 𝑡𝑜 5.346
iii. Exactly 5 in the (𝑛 + 1)𝑡ℎ place, increase the (𝑛)𝑡ℎ digit by unity if it is odd; otherwise, leave it unchanged.
Example: 11.675 𝑡𝑜 11.68 and 11.685 𝑡𝑜 11.68
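Rule (iii) is the "round half to even" convention, which Python's decimal module provides as ROUND_HALF_EVEN; the sketch below (not part of the notes) reproduces the examples above, using Decimal strings so the values are stored exactly before rounding:

from decimal import Decimal, ROUND_HALF_EVEN

print(Decimal("8.893").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))    # 8.89
print(Decimal("5.3456").quantize(Decimal("0.001"), rounding=ROUND_HALF_EVEN))  # 5.346
print(Decimal("11.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))   # 11.68
print(Decimal("11.685").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))   # 11.68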
e) Absolute error: is the numerical difference between the true value of a quantity and its
approximate value.
Thus if 𝑥 ′ is the approximate value of quantity 𝑥 then |𝒙 − 𝒙′ | is called the absolute error and
denoted by 𝐸𝑎 . Therefore, 𝑬𝒂 = |𝒙 − 𝒙′ |
The absolute error is expressed in the units of the exact (or approximate) value.
f) Relative Error: The relative error 𝐸𝑟 is defined by 𝐸𝑟 = |𝑥 − 𝑥′| / |𝑥| = 𝐸𝑎 / True value, where 𝑥′ is the approximate value of the quantity 𝑥.
Example: If 𝑥 = 0.51 is correct to two decimal places, then ∆𝑥 = 0.005 and the relative accuracy is given by (0.005 / 0.51) × 100 ≅ 0.98%.
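The same computation in Python (an illustrative sketch, not from the notes):

x       = 0.51
delta_x = 0.005                  # maximum absolute error when two decimals are correct
E_r     = delta_x / abs(x)       # relative error
print(round(E_r, 4))             # 0.0098
print(round(100 * E_r, 2), "%")  # 0.98 %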
Exercise:
1. An approximate value of 𝜋 is given by 𝑥′ = 22/7 = 3.1428571 and its true value is 𝑥 = 3.1415926; find the absolute and relative errors.
A floating point representation is normalized if the first digit of the mantissa is different from zero.
i.e., if 0 ≤ 𝑑𝑖 ≤ 9 and 𝑑1 ≠ 0
In computers, floating-point numbers have three parts: the sign, the fraction part (often called the mantissa), and the exponent part.
The three parts of the number have a fixed total length, often 32 or 64 bits (sometimes more).
The mantissa part uses most of these bits (23 to 52 bits) and determines the precision; the exponent part uses 7 to 11 bits and determines the range of the values.
[Figure: an 𝑛-bit floating-point word divided into sign, exponent, and mantissa fields, with 𝑛 = 𝑡 + 𝑟 + 𝑙 (𝑡 bits for the mantissa).]
Note: A number cannot be represented exactly if it contains more than 𝑡 bits in the mantissa.
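These quantities can be inspected directly in Python, where a float is an IEEE-754 double (an illustrative sketch, not from the notes):

import sys

print(sys.float_info.mant_dig)   # 53 significant bits (52 stored + 1 implicit) -> the precision
print(sys.float_info.max_exp)    # largest binary exponent -> sets the range of values
print(sys.float_info.epsilon)    # machine epsilon, about 2.22e-16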