UNIT-1

LINEAR SYSTEMS
Introduction:
A system of linear equations (or linear system) is a collection of two
or more linear equations involving the same variables.

Gaussian elimination
The Gaussian elimination method, also known as the row reduction
algorithm, solves systems of linear equations by performing a
sequence of operations on the corresponding matrix of coefficients.
We can also use this method to compute any of the following:
• The rank of the given matrix
• The determinant of a square matrix
• The inverse of an invertible matrix
To perform row reduction on a matrix, we apply a sequence of
elementary row operations until the entries below the main diagonal
are zero, i.e., until the matrix is in upper triangular form. There
are three types of elementary row operations:

• Swapping two rows, which can be expressed using the notation ↔,
  for example R2 ↔ R3
• Multiplying a row by a nonzero number, for example R1 → kR1,
  where k is some nonzero number
• Adding a multiple of one row to another row, for example
  R2 → R2 + 3R1
The obtained matrix will be in row echelon form. The matrix is said
to be in reduced row-echelon form when all of the leading coefficients
equal 1, and every column containing a leading coefficient has zeros
elsewhere. This final form is unique; that means it is independent of
the sequence of row operations used.
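
As a concrete illustration, here is a minimal sketch of Gaussian elimination
with partial pivoting in Python (the function name, the NumPy-based
implementation, and the example system are illustrative, not part of the
original notes):

import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination with partial pivoting,
    followed by back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: reduce A to upper triangular form
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot into row k
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]       # multiplier for the row operation R_i -> R_i - m*R_k
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))   # expected: [ 2.  3. -1.]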
UNIT-2
INTERPOLATION
Introduction:
Interpolation is a type of estimation, a method of constructing
(finding) new data points within the range of a discrete set of
known data points.

Newton’s Interpolation
• Forward Difference
Newton's forward interpolation formula is a method used to
approximate the value of a function near the beginning of a set
of values. It is used when the values of the independent variable
are at equal intervals and the value of the independent variable
is near the beginning of the set of values.
• Backward Difference
Newton's backward interpolation is a method for
approximating a function with a polynomial near the end that
passes through a set of equally spaced points.
• Newton Divided difference
Newton's divided difference interpolation method allows us to
create a polynomial function from a set of data points. This
polynomial is accurate at each of the data points and becomes
generally more accurate as we add more data points.
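
A minimal sketch of Newton's divided-difference interpolation in Python
(the function names and the sample data are illustrative, not taken from
the notes):

import numpy as np

def divided_difference_table(x, y):
    """Build Newton's divided-difference table; its first row holds the
    coefficients f[x0], f[x0,x1], f[x0,x1,x2], ... of the Newton polynomial."""
    n = len(x)
    table = np.zeros((n, n))
    table[:, 0] = y
    for j in range(1, n):
        for i in range(n - j):
            table[i, j] = (table[i + 1, j - 1] - table[i, j - 1]) / (x[i + j] - x[i])
    return table

def newton_polynomial(x, coeffs, t):
    """Evaluate the Newton-form polynomial at t using nested multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (t - x[i]) + coeffs[i]
    return result

x = np.array([1.0, 2.0, 4.0, 7.0])        # unequally spaced data (illustrative)
y = np.array([3.0, 5.0, 21.0, 201.0])
coeffs = divided_difference_table(x, y)[0, :]
print(newton_polynomial(x, coeffs, 3.0))  # interpolated value at t = 3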
Lagrange Interpolation
Interpolation is a method of deriving a simple function from a given
discrete data set such that the function passes through the provided data
points. This helps to determine values in between the given data points.
The method is needed whenever we must compute the value of a function at an
intermediate value of the independent variable. In short, interpolation is
a process of determining the unknown values that lie in between the known
data points. It is often used to predict unknown values for geographically
related data, such as noise level, rainfall, elevation, and so on.
The unknown value at an intermediate point can be found using linear
interpolation or Lagrange's interpolation formula.

Q: Using Lagrange interpolation, find the form of y(x) from the following
table and hence determine y(2).

x:   0    1    3    4
y: -12    0   12   24
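
A minimal sketch of Lagrange interpolation in Python applied to the table
above; evaluating the resulting cubic at x = 2 is shown purely as an
illustration:

def lagrange_interpolate(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Term ys[i] * L_i(t), where L_i is the i-th Lagrange basis polynomial
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (t - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

xs = [0, 1, 3, 4]
ys = [-12, 0, 12, 24]
print(lagrange_interpolate(xs, ys, 2))   # the cubic through these points gives 6.0 at x = 2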
UNIT-3
INTEGRATION
There are various numerical methods through which we may approximate the
definite integrals of complicated functions. The following methods are
covered:

1. Trapezoidal rule of integration


The trapezoidal rule is a numerical method for approximating the
definite integral of a function. It works by approximating the area
under the curve of the function by dividing it into trapezoids and
summing up their areas.
Divide the interval [a, b] into n subintervals of equal width. For
each subinterval, approximate the area under the curve using the
trapezoid formed by the function values at the endpoints of the
subinterval. Finally, sum up the areas of all the trapezoids to get an
approximation of the integral.
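
A minimal sketch of the composite trapezoidal rule in Python (the test
integrand sin(x) on [0, π], with exact value 2, is illustrative):

import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of equal width."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))        # end points carry weight 1/2
    for i in range(1, n):
        total += f(a + i * h)          # interior points carry weight 1
    return h * total

print(trapezoidal(math.sin, 0.0, math.pi, 100))   # ~1.99984 (exact value is 2)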

2. Simpson 1/3 method


The Simpson's 1/3 rule is another numerical method for
approximating definite integrals, similar to the trapezoidal rule,
but it uses quadratic approximations instead of linear ones. It's
particularly useful when the function being integrated is relatively
smooth. Divide the interval [a, b] into n subintervals of equal
width Δx = (b - a) / n. It's important to note that for Simpson's 1/3
rule, n must be an even number. Approximate the area under the curve for
each pair of subintervals using a quadratic: each pair of subintervals
forms a segment on which three function values are used to fit a quadratic
curve (the factor h/3 in the resulting formula gives the rule its name).
This quadratic approximation is typically obtained by Lagrange
interpolation. Calculate the integral over each pair of subintervals using
the quadratic approximation, then sum the integrals over all pairs of
subintervals to get an approximation of the total integral.
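
A minimal sketch of the composite Simpson's 1/3 rule in Python (same
illustrative test integral as above):

import math

def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of subintervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)   # weights 4 and 2 alternate inside
    return h / 3 * total

print(simpson_13(math.sin, 0.0, math.pi, 10))   # ~2.00011 (exact value is 2)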

3. Simpson 3/8 method


Simpson's 3/8 rule is an extension of Simpson's 1/3 rule, which
allows for the approximation of definite integrals using cubic
approximations instead of quadratic ones. This rule is particularly
useful for smooth functions and can provide more accurate results than
Simpson's 1/3 rule. It requires the number of subintervals n to be a
multiple of 3.
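
A minimal sketch of the composite Simpson's 3/8 rule in Python
(illustrative; n must be a multiple of 3):

import math

def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3 != 0:
        raise ValueError("Simpson's 3/8 rule needs n to be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)   # weight 2 at every third node, else 3
    return 3 * h / 8 * total

print(simpson_38(math.sin, 0.0, math.pi, 9))   # ~2.0004 (exact value is 2)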

4. Gauss Quadrature method


Gauss quadrature is a numerical method for approximating
definite integrals using weighted sums of function evaluations at
specific points. Unlike some other numerical integration
techniques such as the trapezoidal rule or Simpson's rule, Gauss
quadrature achieves higher accuracy by carefully choosing both
the points of evaluation and the weights associated with those
points. Gauss quadrature selects the points xi (called nodes) within
the interval of integration [a, b]. These points are chosen such that
they are optimal for integrating polynomials up to a certain
degree. Corresponding to each integration point xi , Gauss
quadrature assigns a weight wi . These weights are chosen such
that the quadrature formula gives exact results for polynomials up
to a certain degree.
The integral of a function f(x) over the interval [a, b] is
approximated by the weighted sum of function evaluations at the
selected points, ∫ f(x) dx over [a, b] ≈ w1 f(x1) + w2 f(x2) + … + wn f(xn).
Gauss Quadrature Integration Methods
The Gauss integration scheme is a very efficient method to perform
numerical integration over intervals.
Gauss Legendre Integration Method:
Gauss quadrature aims to find the “least” number of fixed points needed to
approximate the integral of a function f on [-1, 1] such that

∫ f(x) dx over [-1, 1] ≈ w1 f(x1) + w2 f(x2) + … + wn f(xn)

where, for 1 <= i <= n, the point xi lies in [-1, 1] and the weight wi is a
real number. Each xi is called an integration point and wi is called the
associated weight. With n integration points the rule is exact for
polynomials of degree up to 2n − 1, so the required number of integration
points depends on the degree of the polynomial being integrated.
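
A minimal sketch using NumPy's built-in Gauss-Legendre nodes and weights
(numpy.polynomial.legendre.leggauss); the change of variables below maps
the reference interval [-1, 1] onto a general interval [a, b]:

import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)             # map [-1, 1] onto [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# A 5-point rule already integrates sin(x) over [0, pi] essentially exactly (exact value 2)
print(gauss_legendre(np.sin, 0.0, np.pi, 5))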
Gauss Laguerre Method
Laguerre-Gauss quadrature, also called Gauss-Laguerre quadrature or
Laguerre quadrature, is a Gaussian quadrature over the interval [0,
infinity) with weighting function W(x) = e^(-x).
It is especially useful when the integrand has the form f(x)e^(-x), where
e^(-x) serves as a weight function. This approach is widely used in physics,
engineering, and probability, particularly for problems involving
exponentially decaying functions. Gauss-Laguerre quadrature is designed for
integrals over [0, ∞). The weight function e^(-x) is built into the method,
so f(x) should be adjusted accordingly if a different weight is needed.
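
A minimal sketch using NumPy's Gauss-Laguerre nodes and weights
(numpy.polynomial.laguerre.laggauss); the test integrand f(x) = x^2 is
illustrative:

import numpy as np

# Approximates the integral over [0, inf) of e^(-x) * f(x) dx by sum(w_i * f(x_i))
nodes, weights = np.polynomial.laguerre.laggauss(5)

# With f(x) = x^2 the exact value is 2! = 2, and a 5-point rule reproduces it
approx = np.sum(weights * nodes**2)
print(approx)   # ~2.0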

Gauss Hermite Integration


Evaluation of integrals of the form ∫ e^(-x^2) f(x) dx over (−∞, ∞) using
Gauss-Hermite quadrature involves calculating the weighted sum
w1 f(x1) + w2 f(x2) + … + wn f(xn). The Gauss-Hermite quadrature method is a
numerical integration technique specifically designed for approximating
integrals of functions over the interval (−∞, ∞) with a Gaussian (normal)
weight. This method is particularly useful when the integrand has the form
f(x)e^(-x^2), where e^(-x^2) serves as a weight function.
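
A minimal sketch using NumPy's Gauss-Hermite nodes and weights
(numpy.polynomial.hermite.hermgauss); the test integrand f(x) = x^2 is
illustrative:

import numpy as np

# Approximates the integral over (-inf, inf) of e^(-x^2) * f(x) dx by sum(w_i * f(x_i))
nodes, weights = np.polynomial.hermite.hermgauss(5)

# With f(x) = x^2 the exact value is sqrt(pi)/2
approx = np.sum(weights * nodes**2)
print(approx, np.sqrt(np.pi) / 2)   # the two numbers agree closely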
UNIT-4
IVP (INITIAL VALUE PROBLEMS)
Euler’s Method
Euler's method is a numerical technique for approximating the
solution of ordinary differential equations (ODEs). Consider the initial
value problem

dy/dt = f(t, y),   y(t0) = y0

First, we know the value of the solution at t = t0 from the initial
condition. Second, we also know the value of the derivative at t = t0,
which we get by plugging the initial condition into the differential
equation itself. So the derivative at this point is

dy/dt | t=t0 = f(t0, y0)

The tangent line at (t0, y0) is

y = y0 + f(t0, y0) (t − t0)

Stepping along successive tangent lines gives

y1 = y0 + f(t0, y0) (t1 − t0)
y2 = y1 + f(t1, y1) (t2 − t1)
y3 = y2 + f(t2, y2) (t3 − t2)
y4 = y3 + f(t3, y3) (t4 − t3), etc.

and, in general,

y_{n+1} = y_n + f(t_n, y_n) (t_{n+1} − t_n)
Euler’s Method using RC charging
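
A minimal sketch of Euler's method applied to an RC charging circuit, where
the capacitor voltage obeys dV/dt = (Vs - V)/(RC); the component values and
function names are illustrative assumptions, not taken from the notes:

import math

def euler(f, t0, y0, h, n):
    """Advance dy/dt = f(t, y) from (t0, y0) for n steps of size h with Euler's method."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)     # y_{k+1} = y_k + h * f(t_k, y_k)
        t = t + h
    return y

# RC charging circuit: dV/dt = (Vs - V) / (R*C)
R, C, Vs = 1e3, 1e-3, 5.0                     # 1 kOhm, 1 mF, 5 V supply (illustrative values)
f = lambda t, V: (Vs - V) / (R * C)

V_euler = euler(f, 0.0, 0.0, h=0.01, n=100)   # integrate up to t = 1 s (one time constant)
V_exact = Vs * (1 - math.exp(-1.0))           # analytical value, about 3.16 V
print(V_euler, V_exact)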
Euler’s Modified
The Euler Modified Method, also known as the Improved Euler's
Method or Heun's Method, is a numerical procedure for solving
ordinary differential equations (ODEs) and is more accurate than the
original Euler's method.

The Euler Modified Method uses the average of the slopes at two
points to estimate the value of the function at the next point. This
reduces the error and provides a more accurate estimate than the original
Euler's method, which only uses the slope at the current point.
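
A minimal sketch of the modified Euler (Heun) method in Python; the test
problem dy/dt = y with y(0) = 1 is illustrative (its exact value at t = 1
is e ≈ 2.71828):

def heun(f, t0, y0, h, n):
    """Modified Euler (Heun's) method: average the slope at the current point
    and the slope at the Euler-predicted next point."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)                    # slope at the current point
        k2 = f(t + h, y + h * k1)       # slope at the Euler-predicted next point
        y = y + h * (k1 + k2) / 2.0     # corrected step using the average slope
        t = t + h
    return y

f = lambda t, y: y
print(heun(f, 0.0, 1.0, h=0.1, n=10))   # noticeably closer to e than plain Euler's ~2.5937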

RK2
The Runge-Kutta 2nd order method is a numerical technique used to
solve an ordinary differential equation of the form
dy/dx = f(x, y),   y(x0) = y0

Euler's method is given by y_{i+1} = y_i + h f(x_i, y_i); the RK2 method
improves on this by combining slope information from two points within each
step (for example the start and the midpoint, or the start and the end of
the step), which makes the method second-order accurate.
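
Heun's method above is itself one second-order Runge-Kutta scheme; a
minimal sketch of the midpoint variant of RK2 is given below (same
illustrative test problem as before):

def rk2_midpoint(f, x0, y0, h, n):
    """Second-order Runge-Kutta (midpoint variant) for dy/dx = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2.0, y + k1 / 2.0)   # slope taken at the midpoint of the step
        y = y + k2
        x = x + h
    return y

f = lambda x, y: y
print(rk2_midpoint(f, 0.0, 1.0, h=0.1, n=10))   # close to e ~ 2.71828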
RK4 Method:
The Runge-Kutta RK4 method provides the approximate value of y at a
given point x using four slope evaluations per step. Only first-order
ODEs can be solved directly with the RK4 method (higher-order ODEs must
first be rewritten as systems of first-order equations).
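
A minimal sketch of the classical RK4 method in Python (same illustrative
test problem):

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method for dy/dx = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2.0, y + k1 / 2.0)
        k3 = h * f(x + h / 2.0, y + k2 / 2.0)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0   # weighted average of the four slopes
        x = x + h
    return y

f = lambda x, y: y
print(rk4(f, 0.0, 1.0, h=0.1, n=10))   # matches e to within about 1e-6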
Comparison between Euler modified, RK2 and RK4
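
As a rough comparison, the following self-contained sketch measures the
error of modified Euler, RK2 (midpoint) and RK4 on the illustrative test
problem dy/dt = y, y(0) = 1 over [0, 1] with step size 0.1 (exact answer e);
both second-order variants show similar errors, while RK4 is far more
accurate, as expected:

import math

f = lambda t, y: y                      # test ODE dy/dt = y, exact solution e^t
exact = math.e                          # exact value at t = 1

def modified_euler_step(f, t, y, h):    # Heun's method
    k1 = f(t, y); k2 = f(t + h, y + h * k1)
    return y + h * (k1 + k2) / 2

def rk2_step(f, t, y, h):               # midpoint variant
    k1 = f(t, y)
    return y + h * f(t + h / 2, y + h / 2 * k1)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, h=0.1, n=10):
    t, y = 0.0, 1.0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

for name, step in [("Modified Euler", modified_euler_step), ("RK2", rk2_step), ("RK4", rk4_step)]:
    print(f"{name:>15}: error = {abs(integrate(step) - exact):.2e}")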
UNIT-5
BVP (BOUNDARY VALUE PROBLEMS)
Two Point Boundary value problems
A two-point boundary value problem is a set of coupled first-order ordinary
differential equations (ODEs) with boundary conditions at two points.

Types of Boundary value problems


Dirichlet condition
Let us take a look at an ordinary differential equation du/dx + u = 0 in the
domain [a, b]. We have a Dirichlet boundary condition when the boundary
prescribes a value of the dependent variable(s) itself. A Dirichlet boundary
condition for the above ODE looks like u = g at a boundary point, for
example u(a) = g1, where g1 is a prescribed value.
Neumann condition
In the Neumann boundary condition, the derivative of the dependent variable
is prescribed on the boundary, for example du/dx = g at a boundary point.
Robin condition
The Robin boundary condition is a weighted combination of the Dirichlet
boundary condition and the Neumann boundary condition on the boundary, for
example

χ1 u + χ2 (du/dx) = g

where the χi's are constants representing the weights.

Finite Difference Method


Finite difference is often used as an approximation of the derivative, typically in
numerical differentiation.
Consider the linear two-point boundary value problem

y''(x) = c(x) y'(x) + d(x) y(x) + e(x),   a <= x <= b
y(a) = γ1,   y(b) = γ2

To apply the finite difference method, first discretize the domain
a <= x <= b into N − 1 interior grid points x_i, i = 1, 2, …, N − 1, and two
boundary points x_0 = a and x_N = b. The grid points are equispaced and can
be written as x_i = a + i h, where h = (b − a)/N is the spacing. The
differential equation is then written at each interior grid point x_i,
i = 1, 2, …, N − 1, with the derivatives replaced by the corresponding
finite differences:

y'(x_i) ≈ (y_{i+1} − y_{i−1}) / (2h)
y''(x_i) ≈ (y_{i+1} − 2 y_i + y_{i−1}) / h^2

That is, the ODE becomes a system of algebraic equations in the unknowns
y_1, y_2, …, y_{N−1}, which is solved together with the two known boundary
values.
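
A minimal sketch of the finite difference method for a linear two-point BVP,
using the illustrative problem y'' = -y on [0, π/2] with y(0) = 0 and
y(π/2) = 1 (whose exact solution is y = sin x); NumPy is used to assemble
and solve the tridiagonal system:

import numpy as np

# Illustrative BVP: y'' = -y on [0, pi/2], y(0) = 0, y(pi/2) = 1 (exact solution: sin x)
a, b = 0.0, np.pi / 2
ya, yb = 0.0, 1.0
N = 20                                     # number of subintervals
h = (b - a) / N
x = np.linspace(a, b, N + 1)

# Central differences turn y'' = -y into y_{i-1} + (h^2 - 2) y_i + y_{i+1} = 0
# at each interior point, i.e. a tridiagonal linear system for y_1 ... y_{N-1}.
A = np.diag((h**2 - 2.0) * np.ones(N - 1)) \
  + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1)
rhs = np.zeros(N - 1)
rhs[0] -= ya                               # known boundary value y(a) moved to the RHS
rhs[-1] -= yb                              # known boundary value y(b) moved to the RHS

y = np.zeros(N + 1)
y[0], y[-1] = ya, yb
y[1:-1] = np.linalg.solve(A, rhs)

print(np.max(np.abs(y - np.sin(x))))       # maximum error vs the exact solution (O(h^2))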


Shooting Method
The shooting method stands as a sophisticated numerical strategy devised for the
resolution of boundary value problems (BVPs) within the realm of ordinary
differential equations (ODEs). BVPs present an intriguing challenge as they call
for solutions to be sought across a prescribed interval, while distinct conditions
are prescribed at both extremities of this interval. This differentiates them from
initial value problems (IVPs), which entail merely a single condition at the outset.
The shooting method's central premise revolves around an intriguing
transformation of BVPs into IVPs. This transformation hinges on a clever initial
guess for a condition at one end of the interval, typically the left end, which is
then treated as an initial value. Subsequently, numerical integration techniques
are engaged to trace the ODE's trajectory from the guessed point towards the
opposite end of the interval, the right end. If, however, the computed numerical
solution fails to adhere to the desired boundary condition at the right end, the
procedure is adjusted iteratively, fostering the convergence toward an accurate
solution. The shooting method serves as a versatile tool, extensively harnessed in
an array of fields, spanning from physics to engineering and the sciences, enabling
the systematic and numerical exploration of intricate problems hinging on the
satisfaction of constraints at both extremities of a domain

This technique is called the shooting method, by analogy with the procedure
of firing projectiles at a stationary target. We start with a parameter t0
that determines the initial elevation at which the projectile is fired from
the point (a, alpha); the trajectory is the curve described by the solution
of the corresponding initial value problem, and the parameter is adjusted
until the trajectory hits the target value at the other end.

Python code:
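
A minimal sketch of the shooting method in Python, assuming RK4 as the
integrator and a secant iteration on the unknown initial slope; the example
BVP y'' = -y, y(0) = 0, y(π/2) = 1 and all function names are illustrative:

import numpy as np

def rk4_system(f, t, y, h, n):
    """Integrate the first-order system y' = f(t, y) with classical RK4."""
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return y

# BVP: y'' = -y, y(0) = 0, y(pi/2) = 1, rewritten as a system u = (y, y')
f = lambda t, u: np.array([u[1], -u[0]])
a, b, ya, yb = 0.0, np.pi / 2, 0.0, 1.0
n = 100
h = (b - a) / n

def miss(slope):
    """Shoot with the guessed initial slope y'(a) = slope; return how far
    the computed solution misses the required boundary value at x = b."""
    u = rk4_system(f, a, np.array([ya, slope]), h, n)
    return u[0] - yb

# Secant iteration on the unknown initial slope
s0, s1 = 0.0, 2.0
for _ in range(20):
    e0, e1 = miss(s0), miss(s1)
    if abs(e1) < 1e-10:
        break
    s0, s1 = s1, s1 - e1 * (s1 - s0) / (e1 - e0)

print("initial slope found:", s1)   # the exact solution y = sin x has y'(0) = 1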
