
Manalebish D

Numerical Analysis Math-3221


Chapter 3: Systems of linear equations

October 30, 2023


Chapter 1: Basic concepts in error estimation
Chapter 2: Non-linear equations

Chapter 3: The solution of systems of linear equations (SLE)
 
Let A = (aij) be an m x n matrix over R and let B = (b1, ..., bm)^T be a column vector in R^m. The system

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm

is called a system of m linear equations in n unknowns x1, x2, ..., xn. The system is said to be homogeneous if all the constants b1, b2, ..., bm are equal to 0, i.e. B = 0.
The matrix A = (aij), i = 1, ..., m, j = 1, ..., n, is called the coefficient matrix; the entry aij is located in the ith row and jth column.
An n-tuple u = (u1, u2, ..., un) of real numbers is a solution if it satisfies each of the above equations. If a system has a solution it is said to be consistent; otherwise it is inconsistent.

For example, the homogeneous system

3x1 + x2 - 2x3 = 0
x1 - x2 + x3 = 0
4x1 - x3 = 0

has the nontrivial solution (1, 5, 4): putting x1 = 1 forces x3 = 4 and x2 = 5. In contrast, the system

x - 2y = 1
-3x + 6y = 4

reduces to 0 = 7 (add 3 times the first equation to the second), so it has no solution and is inconsistent.

The system can be written in the form

(a11, ..., am1)^T x1 + ... + (a1n, ..., amn)^T xn = (b1, ..., bm)^T

i.e. A1 x1 + ... + An xn = B, or simply Ax = B, where Ai = (a1i, ..., ami)^T is the ith column vector of A and x = (x1, ..., xn)^T.
If n = m, i.e. A is a square matrix, and A is invertible, i.e. det A != 0, then x = A^(-1) B is the unique solution, where

A^(-1) = (1 / det A) (adj A)

If B = 0 and det A != 0, we have only the trivial solution x = 0.
If det A = 0, the system has either no solution or infinitely many solutions; in particular, a homogeneous system (B = 0) with det A = 0 has infinitely many (nontrivial) solutions.
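The invertible case can be checked numerically; a minimal sketch using NumPy (the library choice is mine, not part of the notes):

```python
import numpy as np

# Unique-solution case: square A with det A != 0
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([3.0, 5.0])

assert abs(np.linalg.det(A)) > 1e-12   # A is invertible
x = np.linalg.solve(A, B)              # solves Ax = B without forming A^(-1)
# here x = (4/5, 7/5), and A @ x reproduces B
```

In practice `np.linalg.solve` is preferred over computing A^(-1) explicitly, since forming the inverse costs more and is less accurate.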

3.1 Direct Methods for SLE


Direct Methods for solving System of Linear Equations:-

1. Cramer's rule

2. Gauss’s elimination method

3. Gaussian elimination with partial pivoting

4. Gauss Jordan’s Method

5. Matrix (LU) Decomposition

3.1.1 Cramer's rule

When the determinant of the coefficient matrix det A = D != 0 and B != 0, by Cramer's rule the system Ax = B has the solution given by the ratios

x1/D1 = x2/D2 = ... = xn/Dn = 1/D

so that xi = Di/D for each i = 1, ..., n, where Di is the determinant of the matrix obtained from A by replacing the ith column Ai by B.

Example 1 Solve

2x + z = 1
4x + 2y - 3z = -1
5x + 3y + z = 2

In order to use Cramer's rule, let us first write the system of equations in matrix form Ax = B:

[ 2  0  1 ] [x]   [  1 ]
[ 4  2 -3 ] [y] = [ -1 ]
[ 5  3  1 ] [z]   [  2 ]

Next we calculate the determinant of the coefficient matrix:

        | 2  0  1 |
det A = | 4  2 -3 | = 24 != 0
        | 5  3  1 |

Then, to find the value of each variable, calculate the determinant of the matrix in which the corresponding column is replaced by B, and divide by the determinant of the coefficient matrix:

    |  1  0  1 |                          | 2  1  1 |
x = | -1  2 -3 | / det A = 4/24 = 1/6, y = | 4 -1 -3 | / det A = 4/24 = 1/6
    |  2  3  1 |                          | 5  2  1 |

    | 2  0  1 |
z = | 4  2 -1 | / det A = 16/24 = 2/3
    | 5  3  2 |

Therefore the solution is (1/6, 1/6, 2/3).
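The rule as applied above can be sketched in a few lines; NumPy's determinant stands in for the cofactor expansions done by hand (a sketch, not part of the original notes):

```python
import numpy as np

# Cramer's rule for Example 1
A = np.array([[2.0, 0.0, 1.0],
              [4.0, 2.0, -3.0],
              [5.0, 3.0, 1.0]])
B = np.array([1.0, -1.0, 2.0])

D = np.linalg.det(A)             # D = 24
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = B                 # replace the ith column of A by B
    x[i] = np.linalg.det(Ai) / D
# x is approximately (1/6, 1/6, 2/3), matching the hand computation
```

Note that Cramer's rule costs one determinant per unknown, so it is only practical for very small systems.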

Gauss’s elimination method

In this method we transform the system into the form

a11 x1                           = b1
a21 x1 + a22 x2                  = b2
...
an1 x1 + an2 x2 + ... + ann xn   = bn

i.e. Lx = B, where L = (aij) with aij = 0 for i < j is a lower triangular matrix, and solve for the xi's by forward substitution; or into the form

a11 x1 + a12 x2 + ... + a1n xn = b1
         a22 x2 + ... + a2n xn = b2
                           ...
                      ann xn = bn

i.e. Ux = B, where U = (aij) with aij = 0 for i > j is an upper triangular matrix, and solve for the xi's by backward substitution. This is done by using the elementary row (column) operations:

1. Interchange the ith row (column) Ai (A^i) and the jth row (column) Aj (A^j): Ai <-> Aj (A^i <-> A^j)

2. Multiply the ith row (column) by a nonzero scalar K != 0: Ai <-> K Ai (A^i <-> K A^i)

3. Replace the ith row (column) by K' times the jth row plus K times the ith row: Ai <-> K' Aj + K Ai, K != 0

Apply these operations to the matrix

          [ a11 a12 ... a1n | b1 ]
(A \ B) = [ a21 a22 ... a2n | b2 ]
          [ ...             | .. ]
          [ an1 an2 ... ann | bn ]

called the augmented matrix of the system Ax = B, to reduce A to triangular (or, for the Jordan variant below, identity) form.

Example 2 Solve the system

x + 2y + z = 3
3x - 2y - 4z = -2
2x + 3y - z = -6

The augmented matrix is

          [ 1  2  1 |  3 ]
(A \ B) = [ 3 -2 -4 | -2 ]
          [ 2  3 -1 | -6 ]

A2 = 3A1 - A2, A3 = 2A1 - A3:

[ 1 2 1 |  3 ]
[ 0 8 7 | 11 ]
[ 0 1 3 | 12 ]

A3 = 8A3 - A2:

[ 1 2  1 |  3 ]
[ 0 8  7 | 11 ]
[ 0 0 17 | 85 ]

Thus the above system is (row) equivalent to the system

x + 2y + z = 3
8y + 7z = 11
17z = 85

so z = 85/17 = 5, then y = -3 and x = 4. The solution is (4, -3, 5).

Example 3 Solve the following system of linear equations

-x1 + 3x2 + 5x3 + 2x4 = 10
x1 + 9x2 + 8x3 + 4x4 = 15
x2 + x4 = 2
2x1 + x2 + x3 - x4 = -3

          [ -1 3 5  2 | 10 ]
(A \ B) = [  1 9 8  4 | 15 ]
          [  0 1 0  1 |  2 ]
          [  2 1 1 -1 | -3 ]

A2 = A1 + A2, A4 = 2A1 + A4:

[ -1  3  5 2 | 10 ]
[  0 12 13 6 | 25 ]
[  0  1  0 1 |  2 ]
[  0  7 11 3 | 17 ]

A3 = A2 - 12A3, A4 = A4 - 7A3:

[ -1  3  5  2 | 10 ]
[  0 12 13  6 | 25 ]
[  0  0 13 -6 |  1 ]
[  0  0 11 -4 |  3 ]

A4 = -11A3 + 13A4:

[ -1  3  5  2 | 10 ]
[  0 12 13  6 | 25 ]
[  0  0 13 -6 |  1 ]
[  0  0  0 14 | 28 ]

Thus

-x1 + 3x2 + 5x3 + 2x4 = 10
12x2 + 13x3 + 6x4 = 25
13x3 - 6x4 = 1
14x4 = 28

which gives x4 = 2, x3 = 1, x2 = 0, x1 = -1.

The solution is (-1, 0, 1, 2)
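The result can be cross-checked numerically; a quick sketch with NumPy (an independent check, not the elimination itself):

```python
import numpy as np

A = np.array([[-1.0, 3, 5, 2],
              [1, 9, 8, 4],
              [0, 1, 0, 1],
              [2, 1, 1, -1]])
B = np.array([10.0, 15, 2, -3])
x = np.linalg.solve(A, B)
# x agrees with the hand elimination: (-1, 0, 1, 2)
```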
function C = gausselimination(A, B) % defining the function
A = [-1 3 5 2; 1 9 8 4; 0 1 0 1; 2 1 1 -1] % inputting the coefficient matrix
B = [10; 15; 2; -3] % inputting the right-hand side vector
i = 1; % loop variable
X = [A B]; % forming the augmented matrix
[nX, mX] = size(X); % determining the size of the matrix
while i <= nX % start of loop
    if X(i,i) == 0 % checking whether the diagonal element is zero or not
        disp('Diagonal element zero') % displaying the result if a zero pivot exists
        return
    end
    X = elimination(X, i, i); % proceeding forward if the diagonal element is nonzero
    i = i + 1;
end
C = X(:, mX);

function X = elimination(X, i, j)
% Pivoting on the (i,j) element of matrix X and eliminating the other
% column elements to zero

[nX, mX] = size(X);
a = X(i, j);
X(i, :) = X(i, :) / a;
for k = 1:nX % eliminate column j from every other row
    if k == i
        continue
    end
    X(k, :) = X(k, :) - X(i, :) * X(k, j); % final result
end

Gaussian elimination with partial pivoting

The first row is called the pivotal row and the coefficients of the first unknown,
the pivotal coefficients. We choose the pivotal row as the equation which has
the numerically (absolutely) largest coefficient of the first variable. This is
called partial pivoting. While using partial pivoting, exchange only rows.
Exchanging rows does not affect the order of the variables (xi )
For increased numerical stability, make sure the largest possible pivot element
is used. This requires searching in the partial column below the pivot element.
To avoid division by zero, swap the row having the zero pivot with one of
the rows below it.
To minimize the effect of roundoff, always choose the row that puts the largest
pivot element on the diagonal.

Example 4

10x - 7y + 3z + 5w = 6     (3.1)
-6x + 8y - z - 4w = 5      (3.2)
3x + y + 4z + 11w = 2      (3.3)
5x - 9y - 2z + 4w = 7      (3.4)

Let us write the corresponding augmented matrix:

[ 10 -7  3  5 | 6 ]
[  5 -9 -2  4 | 7 ]
[  3  1  4 11 | 2 ]
[ -6  8 -1 -4 | 5 ]

Divide the first row by 10, A1 = (1/10)A1:

[  1 -0.7 0.3 0.5 | 0.6 ]
[  5 -9   -2  4   | 7   ]
[  3  1    4  11  | 2   ]
[ -6  8   -1 -4   | 5   ]

Use the first row to eliminate all the coefficients of x. A2 = A2 - 5A1, A3 = A3 - 3A1, A4 = A4 + 6A1:

[ 1 -0.7  0.3 0.5 | 0.6 ]
[ 0 -5.5 -3.5 1.5 | 4   ]
[ 0  3.1  3.1 9.5 | 0.2 ]
[ 0  3.8  0.8 -1  | 8.6 ]

Since the numerically largest entry in the second column is already in the second row, divide that row by -5.5, A2 = (1/-5.5)A2:

[ 1 -0.7 0.3    0.5     | 0.6     ]
[ 0  1   0.6364 -0.2727 | -0.7273 ]
[ 0  3.1 3.1    9.5     | 0.2     ]
[ 0  3.8 0.8    -1      | 8.6     ]

Use the second row to eliminate all the coefficients of y. A3 = A3 - 3.1A2, A4 = A4 - 3.8A2:

[ 1 -0.7 0.3     0.5     | 0.6     ]
[ 0  1   0.6364  -0.2727 | -0.7273 ]
[ 0  0   1.1273  10.3455 | 2.4545  ]
[ 0  0  -1.6182  0.0364  | 11.3636 ]

Since the numerically largest entry in the third column is in the fourth row, interchange the rows, A3 <-> A4:

[ 1 -0.7 0.3     0.5     | 0.6     ]
[ 0  1   0.6364  -0.2727 | -0.7273 ]
[ 0  0  -1.6182  0.0364  | 11.3636 ]
[ 0  0   1.1273  10.3455 | 2.4545  ]

Then divide the third row by -1.6182 and eliminate the coefficient of z, A3 = (1/-1.6182)A3 followed by A4 = A4 - 1.1273A3:

[ 1 -0.7 0.3    0.5     | 0.6     ]
[ 0  1   0.6364 -0.2727 | -0.7273 ]
[ 0  0   1      -0.0225 | -7.0225 ]
[ 0  0   0      10.3708 | 10.3708 ]

Dividing A4 by 10.3708:

[ 1 -0.7 0.3    0.5     | 0.6     ]
[ 0  1   0.6364 -0.2727 | -0.7273 ]
[ 0  0   1      -0.0225 | -7.0225 ]
[ 0  0   0       1      | 1       ]

Using backward substitution the solution is w = 1, z = -7, y = 4, x = 5.


This method minimizes round off errors which may affect the accuracy of
the solution.
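The solution found above can be verified against a library solver (a cross-check only; `numpy.linalg.solve` itself uses LU factorization with partial pivoting under the hood):

```python
import numpy as np

A = np.array([[10.0, -7, 3, 5],
              [-6, 8, -1, -4],
              [3, 1, 4, 11],
              [5, -9, -2, 4]])
B = np.array([6.0, 5, 2, 7])
x = np.linalg.solve(A, B)
# x agrees with the pivoted elimination above: (5, 4, -7, 1)
```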

Example 5 Consider the system

0.0004x + 1.402y = 1.406
0.4003x - 1.502y = 2.501

whose exact solution is y = 1 and x = 10. Using partial pivoting, interchange the rows so that the larger coefficient of x becomes the pivot, A1 <-> A2:

[ 0.4003 -1.502 | 2.501 ]
[ 0.0004  1.402 | 1.406 ]

A1 = (1/0.4003)A1:

[ 1      -3.7522 | 6.2478 ]
[ 0.0004  1.402  | 1.406  ]

A2 = A2 - 0.0004A1:

[ 1 -3.7522 | 6.2478 ]
[ 0  1.4035 | 1.4035 ]

A2 = (1/1.4035)A2:

[ 1 -3.7522 | 6.2478 ]
[ 0  1      | 1      ]

Using backward substitution the solution is y = 1 and x = 6.2478 + 3.7522 = 10.
function x = GaussPP(A, b)
%GaussPP(A,b) solves the n-by-n linear system of equations using partial
%pivoting
%A is the coefficient matrix
%b the right-hand column vector
% A = [10 -7 3 5; -6 8 -1 -4; 3 1 4 11; 5 -9 -2 4]
% b = [6; 5; 2; 7]
% A = [1 2 1; 3 -2 -4; 2 3 -1]
% b = [3; -2; -6]
A = [0.0004 1.402; 0.4003 -1.502]
b = [1.406; 2.501]
n = size(A, 1); % getting n
A = [A, b];     % produces the augmented matrix

% elimination process starts
for i = 1:n-1
    p = i;
    % comparison to select the pivot
    for j = i+1:n
        if abs(A(j,i)) > abs(A(i,i))
            U = A(i,:);
            A(i,:) = A(j,:);
            A(j,:) = U;
        end
    end
    % checking for nullity of the pivots
    while p <= n && A(p,i) == 0 % short-circuit order avoids indexing past row n
        p = p + 1;
    end
    if p == n+1
        disp('No unique solution');
        break
    else
        if p ~= i
            T = A(i,:);
            A(i,:) = A(p,:);
            A(p,:) = T;
        end
    end

    for j = i+1:n
        m = A(j,i)/A(i,i);
        for k = i+1:n+1
            A(j,k) = A(j,k) - m*A(i,k);
        end
    end
end

% checking that the last pivot is nonzero
if A(n,n) == 0
    disp('No unique solution');
    return
end
U = triu(A)
% backward substitution
x(n) = A(n,n+1)/A(n,n);
for i = n-1:-1:1
    sumax = 0;
    for j = i+1:n
        sumax = sumax + A(i,j)*x(j);
    end
    x(i) = (A(i,n+1) - sumax)/A(i,i);
end

Jordan’s Method

A modification of the Gauss elimination method whereby all the coefficients both above and below the main diagonal are eliminated is called Jordan's method (Gauss-Jordan).
If A is n x n with det A != 0 and we transform the augmented matrix (A \ B) to the form (In \ D), then the column vector D is the solution of the system.

Example 6 Solve the system

x + 2y + 5z = -9
x - y + 3z = 2
3x - 6y - z = 25

          [ 1  2  5 | -9 ]
(A \ B) = [ 1 -1  3 |  2 ]
          [ 3 -6 -1 | 25 ]

A2 = A1 - A2, A3 = 3A1 - A3:

[ 1  2  5 |  -9 ]
[ 0  3  2 | -11 ]
[ 0 12 16 | -52 ]

A1 = 3A1 - 2A2, A3 = A3 - 4A2:

[ 3 0 11 |  -5 ]
[ 0 3  2 | -11 ]
[ 0 0  8 |  -8 ]

A3 = (1/8)A3, then A2 = A2 - 2A3 and A1 = A1 - 11A3:

[ 3 0 0 |  6 ]
[ 0 3 0 | -9 ]
[ 0 0 1 | -1 ]

A1 = (1/3)A1, A2 = (1/3)A2:

[ 1 0 0 |  2 ]
[ 0 1 0 | -3 ]
[ 0 0 1 | -1 ]

The solution is (2, -3, -1)

function [x, err] = gaussjordanelim(A, b)
A = [1 1 1; 2 3 5; 4 0 5] % input for coefficient matrix A
b = [5; 8; 2] % input for right-hand side b
[n, m] = size(A); % finding the size of matrix A
err = 0; % error flag
x = zeros(n, 1); % preallocating the solution vector

if n ~= m
    disp('error: n~=m'); % displaying error if found
    err = 1;
end % end of the scope of if
if length(b) ~= n % finding the length of vector b
    disp('error: wrong size of b'); % displaying error, if found
    err = 2;
else
    if size(b, 2) ~= 1
        b = b';
    end % end of the scope of if-else
    if size(b, 2) ~= 1
        disp('error: b is a matrix'); % displaying error in matrix b
        err = 3;
    end
end
if err == 0
    Aa = [A, b];
    for i = 1:n
        [Aa(i:n, i:n+1), err] = gausspivot(Aa(i:n, i:n+1));
        if err == 0
            Aa(1:n, i:n+1) = gaussjordanstep(Aa(1:n, i:n+1), i);
        end
    end
    x = Aa(:, n+1);
end
A = 0;

function A1 = gaussjordanstep(A, i) % elimination step on column i

[n, m] = size(A); % determination of size of matrix A
A1 = A; % assigning A to A1
s = A1(i, 1);
A1(i,:) = A(i,:)/s;
k = [[1:i-1], [i+1:n]];
for j = k
    s = A1(j, 1);
    A1(j,:) = A1(j,:) - A1(i,:)*s;
end % end of for loop

function [A1, err] = gausspivot(A) % row exchange to obtain a nonzero pivot
[n, m] = size(A); % finding the size of matrix A
A1 = A; % process of assigning
err = 0; % error flag
if A1(1,1) == 0
    check = logical(1); % logical(1) - TRUE
    i = 1;
    while check
        i = i + 1;
        if i > n
            disp('error: matrix is singular');
            err = 1;
            check = logical(0);
        else
            if A(i,1) ~= 0 && check
                check = logical(0);
                b = A1(i,:); % process to change row 1 to i
                A1(i,:) = A1(1,:);
                A1(1,:) = b;
            end
        end
    end
end

Matrix Inversion (Using Jordan’s Method)

To find the inverse of a nonsingular square matrix A using Jordan's method, we apply the elementary row operations to the combined matrix (A \ In) and transform it to the form (In \ D). Then D = A^(-1).

Example 7 Find the inverse of the matrix

[ 1  2  5 ]
[ 1 -1  3 ]
[ 3 -6 -1 ]

           [ 1  2  5 | 1 0 0 ]
(A \ I3) = [ 1 -1  3 | 0 1 0 ]
           [ 3 -6 -1 | 0 0 1 ]

A2 = A2 - A1, A3 = A3 - 3A1:

[ 1   2   5 |  1 0 0 ]
[ 0  -3  -2 | -1 1 0 ]
[ 0 -12 -16 | -3 0 1 ]

A3 = A3 - 4A2:

[ 1  2  5 |  1  0 0 ]
[ 0 -3 -2 | -1  1 0 ]
[ 0  0 -8 |  1 -4 1 ]

A3 = (-1/8)A3:

[ 1  2  5 |  1    0   0    ]
[ 0 -3 -2 | -1    1   0    ]
[ 0  0  1 | -1/8  1/2 -1/8 ]

A1 = A1 - 5A3, A2 = A2 + 2A3:

[ 1  2 0 | 13/8 -5/2  5/8 ]
[ 0 -3 0 | -5/4  2   -1/4 ]
[ 0  0 1 | -1/8  1/2 -1/8 ]

A2 = (-1/3)A2, then A1 = A1 - 2A2:

[ 1 0 0 | 19/24 -7/6 11/24 ]
[ 0 1 0 | 5/12  -2/3  1/12 ]  = (I3 \ A^(-1))
[ 0 0 1 | -1/8   1/2 -1/8  ]

Hence

          [ 19/24 -7/6 11/24 ]
A^(-1) =  [ 5/12  -2/3  1/12 ]
          [ -1/8   1/2 -1/8  ]
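The computed inverse can be verified by multiplying back; a short NumPy check (not part of the notes):

```python
import numpy as np

A = np.array([[1.0, 2, 5],
              [1, -1, 3],
              [3, -6, -1]])
A_inv = np.array([[19/24, -7/6, 11/24],
                  [5/12, -2/3, 1/12],
                  [-1/8, 1/2, -1/8]])
# A times its computed inverse gives the identity matrix
assert np.allclose(A @ A_inv, np.eye(3))
```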
 
Example 8 Find the inverse of

    [ 1  0 2 ]
A = [ 2 -1 3 ]
    [ 4  1 8 ]

           [ 1  0 2 | 1 0 0 ]
(A \ I3) = [ 2 -1 3 | 0 1 0 ]
           [ 4  1 8 | 0 0 1 ]

A2 = 2A1 - A2, A3 = A3 - 4A1:

[ 1 0 2 |  1  0 0 ]
[ 0 1 1 |  2 -1 0 ]
[ 0 1 0 | -4  0 1 ]

A3 <-> A2:

[ 1 0 2 |  1  0 0 ]
[ 0 1 0 | -4  0 1 ]
[ 0 1 1 |  2 -1 0 ]

A3 = A3 - A2:

[ 1 0 2 |  1  0  0 ]
[ 0 1 0 | -4  0  1 ]
[ 0 0 1 |  6 -1 -1 ]

A1 = A1 - 2A3:

[ 1 0 0 | -11  2  2 ]
[ 0 1 0 |  -4  0  1 ]
[ 0 0 1 |   6 -1 -1 ]

             [ -11  2  2 ]
D = A^(-1) = [  -4  0  1 ]
             [   6 -1 -1 ]
 
Exercise 1 Find the inverse of the matrix

    [ 1 1 1 1 ]
M = [ 1 2 2 2 ]
    [ 1 2 3 3 ]
    [ 1 2 3 4 ]

           [ 1 1 1 1 | 1 0 0 0 ]
(M \ I4) = [ 1 2 2 2 | 0 1 0 0 ]
           [ 1 2 3 3 | 0 0 1 0 ]
           [ 1 2 3 4 | 0 0 0 1 ]

A2 = A2 - A1, A3 = A3 - A1, A4 = A4 - A1:

[ 1 1 1 1 |  1 0 0 0 ]
[ 0 1 1 1 | -1 1 0 0 ]
[ 0 1 2 2 | -1 0 1 0 ]
[ 0 1 2 3 | -1 0 0 1 ]

A1 = A1 - A2, A3 = A3 - A2, A4 = A4 - A2:

[ 1 0 0 0 |  2 -1 0 0 ]
[ 0 1 1 1 | -1  1 0 0 ]
[ 0 0 1 1 |  0 -1 1 0 ]
[ 0 0 1 2 |  0 -1 0 1 ]

A2 = A2 - A3, A4 = A4 - A3:

[ 1 0 0 0 |  2 -1  0 0 ]
[ 0 1 0 0 | -1  2 -1 0 ]
[ 0 0 1 1 |  0 -1  1 0 ]
[ 0 0 0 1 |  0  0 -1 1 ]

A3 = A3 - A4:

[ 1 0 0 0 |  2 -1  0  0 ]
[ 0 1 0 0 | -1  2 -1  0 ]  = (I4 \ M^(-1))
[ 0 0 1 0 |  0 -1  2 -1 ]
[ 0 0 0 1 |  0  0 -1  1 ]

function b = matinv2(a)
a = [1 0 2; 2 -1 3; 4 1 8]
% Find dimensions of input matrix
[r, c] = size(a);
% If input matrix is not square, stop function
if r ~= c
    disp('Only Square Matrices, please')
    b = [];
    return
end

% Target identity matrix to be transformed into the output
% inverse matrix
b = eye(r);

% The following code actually performs the matrix inversion by working
% on each element of the input
for j = 1:r
    for i = j:r
        if a(i,j) ~= 0
            % swap rows i and j in both a and b
            for k = 1:r
                s = a(j,k); a(j,k) = a(i,k); a(i,k) = s;
                s = b(j,k); b(j,k) = b(i,k); b(i,k) = s;
            end
            % scale the pivot row so that a(j,j) = 1
            t = 1/a(j,j);
            for k = 1:r
                a(j,k) = t*a(j,k);
                b(j,k) = t*b(j,k);
            end
            % eliminate column j from every other row
            for L = 1:r
                if L ~= j
                    t = -a(L,j);
                    for k = 1:r
                        a(L,k) = a(L,k) + t*a(j,k);
                        b(L,k) = b(L,k) + t*b(j,k);
                    end
                end
            end
            break % pivot processed; move on to the next column
        end
        % Display warning if no nonzero pivot exists in this column
        if i == r && a(i,j) == 0
            disp('Warning: Singular Matrix')
            b = 'error';
            return
        end
    end
end
% At this point a has been reduced to the identity matrix and b holds
% the inverse of the input.

Matrix Decomposition

Recall that for a square matrix A = (aij), the diagonal elements are the elements aii, i = 1, ..., n. If only the diagonal entries are nonzero, A is called a diagonal matrix. A diagonal matrix all of whose diagonal entries are equal to one is called the identity (or unit) matrix of order n.

Example 9 I2 = [ 1 0; 0 1 ], and A x I = I x A = A for any A.

If all the elements above the diagonal are zero the matrix is called lower triangular; if all the elements below the diagonal are zero it is called upper triangular.

Example 10

    [  1 0  0 ]                       [ 1 -3 3 ]
L = [  4 6  0 ]  lower triangular, U = [ 0 -1 0 ]  upper triangular
    [ -2 1 -4 ]                       [ 0  0 1 ]

Given the matrix A, decompose it into lower and upper triangular matrices:

    [ 1  2  1 ]
A = [ 3 -2 -4 ]
    [ 2  3 -1 ]

A2 = A2 - 3A1, A3 = A3 - 2A1:

[ 1  2  1 ]
[ 0 -8 -7 ]
[ 0 -1 -3 ]

A3 = A3 - (1/8)A2:

[ 1  2  1     ]
[ 0 -8 -7     ]  = U
[ 0  0 -17/8  ]

Thus we have reduced A to upper triangular form. To write the lower triangular matrix we record the multipliers used in the elimination (the entry Lij is the multiplier of row j that was subtracted from row i):

    [ 1  0   0 ]
L = [ 3  1   0 ]
    [ 2  1/8 1 ]

Therefore we have decomposed the given coefficient matrix into lower and upper triangular factors:

     [ 1 0   0 ] [ 1  2  1    ]   [ 1  2  1 ]
LU = [ 3 1   0 ] [ 0 -8 -7    ] = [ 3 -2 -4 ] = A
     [ 2 1/8 1 ] [ 0  0 -17/8 ]   [ 2  3 -1 ]
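The elimination-with-recorded-multipliers procedure above can be sketched as a small Doolittle factorization (a sketch assuming no zero pivots arise, as in this example; the function name `lu_decompose` is mine):

```python
import numpy as np

# Doolittle LU decomposition without pivoting
def lu_decompose(A):
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = U[j, i] / U[i, i]   # elimination multiplier
            L[j, i] = m             # record the multiplier in L
            U[j, :] -= m * U[i, :]  # zero out U[j, i]
    return L, U

A = np.array([[1.0, 2, 1], [3, -2, -4], [2, 3, -1]])
L, U = lu_decompose(A)
# L has multipliers 3, 2 and 1/8 below the diagonal; U[2, 2] = -17/8
assert np.allclose(L @ U, A)
```

With row pivoting the same idea yields PA = LU, as in the `LU_pivot` listing later in the chapter.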

Exercise 2 Solve the system using Gauss's method

4x - 2y + z = 15
-3x - y + 4z = 8
x - y + 3z = 13

    [  4 -2 1 ]
A = [ -3 -1 4 ]
    [  1 -1 3 ]

A2 = A2 + (3/4)A1, A3 = A3 - (1/4)A1:

[ 4 -2   1    ]
[ 0 -2.5 4.75 ]
[ 0 -0.5 2.75 ]

A3 = A3 - (1/5)A2:

[ 4 -2   1    ]
[ 0 -2.5 4.75 ]  = U
[ 0  0   1.8  ]

The corresponding lower triangular matrix is

    [ 1     0   0 ]
L = [ -0.75 1   0 ]
    [ 0.25  0.2 1 ]

AX = B
LUX = B, so for UX = Y,

LY = B:

[ 1     0   0 ] [y1]   [ 15 ]
[ -0.75 1   0 ] [y2] = [ 8  ]
[ 0.25  0.2 1 ] [y3]   [ 13 ]

Using forward substitution we have y1 = 15, y2 = 77/4, y3 = 27/5.

UX = Y:

[ 4 -2   1    ] [x]   [ 15   ]
[ 0 -2.5 4.75 ] [y] = [ 77/4 ]
[ 0  0   1.8  ] [z]   [ 27/5 ]

Solution x = 2, y = -2, z = 3

Note 1 To get the zeros below the diagonal we have used the coefficients of the leading terms. Furthermore, we can see that the original coefficient matrix A can be written as the product

    [  4 -2 1 ]   [ 1     0   0 ]   [ 4 -2   1    ]
A = [ -3 -1 4 ] = [ -0.75 1   0 ] x [ 0 -2.5 4.75 ]
    [  1 -1 3 ]   [ 0.25  0.2 1 ]   [ 0  0   1.8  ]
                        (L)               (U)

This procedure is called an LU decomposition of A, i.e. A = L x U, where L is the lower triangular and U the upper triangular matrix.

Exercise 3 Thus for the above example, we see that Gaussian elimination

1. Solves the system of equations

2. Finds the inverse of a square matrix (Gauss-Jordan)

3. Computes the determinant of a matrix:
   det A = det(LU) = det L * det U = det U, since det L = 1

4. Gives us an LU decomposition of the coefficient matrix (where L * U = A', with A' a permutation of A)

5. Finds the inverse of the coefficient matrix

Given a system Ax = B, if the coefficient matrix A can be written as a product A = LU, where L is a lower triangular matrix and U upper triangular, we have

Ax = B ⇔ LU x = B

Putting U x = y we get Ly = B which can be solved for y by forward


substitution. Then solve U x = y for x by backward substitution.
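The two triangular solves can be sketched as follows (a sketch; it assumes L is unit lower triangular, as produced by the elimination above, and the function name is mine):

```python
import numpy as np

def solve_lu(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]      # L[i, i] = 1, so no division
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward substitution: Ux = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Data from Exercise 2: A = LU with
L = np.array([[1, 0, 0], [-0.75, 1, 0], [0.25, 0.2, 1]])
U = np.array([[4, -2, 1], [0, -2.5, 4.75], [0, 0, 1.8]])
b = np.array([15.0, 8.0, 13.0])
x = solve_lu(L, U, b)   # approximately (2, -2, 3)
```

Once L and U are known, each new right-hand side costs only these two cheap substitutions, which is the main practical advantage of the decomposition.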

Example 11 1. For the system of linear equations

4x - y + 2z = 15
-x + 2y + 3z = 5
5x - 7y + 9z = 8

(a) Solve the system by Gauss elimination

(b) Decompose the coefficient matrix into lower and upper triangular factors

(c) Find the determinant

(d) Find the solution using the LU decomposition

(e) Find the inverse of the coefficient matrix

Solution

(a)
          [  4 -1 2 | 15 ]
(A \ B) = [ -1  2 3 |  5 ]
          [  5 -7 9 |  8 ]

A2 = (1/4)A1 + A2, A3 = -(5/4)A1 + A3:

[ 4 -1    2    | 15    ]
[ 0 7/4   7/2  | 35/4  ]
[ 0 -23/4 26/4 | -43/4 ]

A3 = (23/7)A2 + A3:

[ 4 -1  2   | 15   ]
[ 0 7/4 7/2 | 35/4 ]
[ 0 0   18  | 18   ]

Backward substitution gives x = 4, y = 3, z = 1.

(b) From (a) above we can write the L matrix as

    [ 1     0     0 ]
L = [ -0.25 1     0 ]
    [ 1.25  -23/7 1 ]

and the coefficient matrix A as

    [  4 -1 2 ]   [ 1     0     0 ]   [ 4 -1  2   ]
A = [ -1  2 3 ] = [ -0.25 1     0 ] * [ 0 7/4 7/2 ]
    [  5 -7 9 ]   [ 1.25  -23/7 1 ]   [ 0 0   18  ]
                        (L)                 (U)

(c) det A = det(LU) = det L * det U = det U = 4 * (7/4) * 18 = 126

(d) Now Ax = B ⇔ LUx = B. Putting Ux = y we get Ly = B, i.e.

[ 1     0     0 ] [y1]   [ 15 ]
[ -0.25 1     0 ] [y2] = [ 5  ]
[ 1.25  -23/7 1 ] [y3]   [ 8  ]

Using forward substitution

y1 = 15
-0.25(15) + y2 = 5, so y2 = 5 + 15/4 = 35/4
1.25(15) - (23/7)(35/4) + y3 = 8, so y3 = 8 - 75/4 + 115/4 = 18

Then Ux = y:

[ 4 -1  2   ] [x1]   [ 15   ]
[ 0 7/4 7/2 ] [x2] = [ 35/4 ]
[ 0 0   18  ] [x3]   [ 18   ]

Using backward substitution

18x3 = 18, so x3 = 1
(7/4)x2 + (7/2)(1) = 35/4, so x2 = 3
4x1 - 3 + 2 = 15, so x1 = 4

Example 12 Solve the system using LU decomposition and find the determinant

x1 + x2 + x3 + x4 = 0
x1 + 2x2 + 2x3 + 2x4 = 1
x1 + 2x2 + 3x3 + 3x4 = 1
x1 + 2x2 + 3x3 + 4x4 = 0

    [ 1 1 1 1 ]
A = [ 1 2 2 2 ]
    [ 1 2 3 3 ]
    [ 1 2 3 4 ]

A2 = A2 - A1, A3 = A3 - A1, A4 = A4 - A1:

[ 1 1 1 1 ]
[ 0 1 1 1 ]
[ 0 1 2 2 ]
[ 0 1 2 3 ]

A3 = A3 - A2, A4 = A4 - A2:

[ 1 1 1 1 ]
[ 0 1 1 1 ]
[ 0 0 1 1 ]
[ 0 0 1 2 ]

A4 = A4 - A3:

[ 1 1 1 1 ]
[ 0 1 1 1 ]  = U
[ 0 0 1 1 ]
[ 0 0 0 1 ]

The corresponding lower triangular matrix is

    [ 1 0 0 0 ]
L = [ 1 1 0 0 ]
    [ 1 1 1 0 ]
    [ 1 1 1 1 ]

so that A = LU, and det A = det U = 1.

First solve the system LY = B for Y = (y1, y2, y3, y4)^T:

[ 1 0 0 0 ] [y1]   [ 0 ]            [  0 ]
[ 1 1 0 0 ] [y2] = [ 1 ]   so Y =   [  1 ]
[ 1 1 1 0 ] [y3]   [ 1 ]            [  0 ]
[ 1 1 1 1 ] [y4]   [ 0 ]            [ -1 ]

Next solve UX = Y; we get the solution

    [ -1 ]
X = [  1 ]
    [  1 ]
    [ -1 ]

Exercise 4 1. Find the inverse of the following matrices

    [ 5 -2 3 ]       [ 1  0  1 0 ]
A = [ 0  1 7 ]   B = [ 1 -1  1 3 ]
    [ 2 -1 0 ]       [ 5  2  1 1 ]
                     [ 2  0 -3 9 ]

% LU Factorization MATLAB Program
% LU factorization with partial (row) pivoting

function [L, U, P] = LU_pivot(A) % declaration of function
% L, U, and P are outputs of the program
% A is the input matrix or argument to the function
A = [4 -2 1; -3 -1 4; 1 -1 3] % input matrix which is to be factored
[n, n] = size(A); % checking the size of the matrix
L = eye(n); P = L; U = A; % assigning process
for k = 1:n % start of loop
    [pivot m] = max(abs(U(k:n, k))); % pivoting process
    m = m + k - 1;
    if m ~= k % beginning of scope of if statement
        % interchange rows m and k in U
        temp = U(k,:); % assigning to temporary variable
        U(k,:) = U(m,:);
        U(m,:) = temp; % returning the value from temp
        % interchange rows m and k in P
        temp = P(k,:); % assigning to temp
        P(k,:) = P(m,:);
        P(m,:) = temp; % returning the value from temp
        if k >= 2 % checking the condition using if
            temp = L(k, 1:k-1);
            L(k, 1:k-1) = L(m, 1:k-1);
            L(m, 1:k-1) = temp;
        end % end of if scope
    end
    for j = k+1:n % elimination loop
        L(j,k) = U(j,k)/U(k,k)
        U(j,:) = U(j,:) - L(j,k)*U(k,:)
    end
end

Tridiagonal Matrices

A tridiagonal matrix is a matrix that has nonzero elements only on the diagonal and in the positions adjacent to the diagonal.

Example 13

[ -4  2  0  0  0 ]
[  1 -4  1  0  0 ]
[  0  1 -4  1  0 ]
[  0  0  1 -4  1 ]
[  0  0  0  2 -4 ]

An n x n tridiagonal matrix can be stored as an n x 3 matrix using only the nonzero entries, as

[ 0 -4 2 ]
[ 1 -4 1 ]
[ 1 -4 1 ]
[ 1 -4 1 ]
[ 2 -4 0 ]

(The zero entries can also be given any value x.)
Consider the following system of 5 equations in 5 unknowns with the above tridiagonal matrix as its matrix of coefficients, Ax = B:

-4x1 + 2x2 = 0
x1 - 4x2 + x3 = -4
x2 - 4x3 + x4 = -11
x3 - 4x4 + x5 = 5
2x4 - 4x5 = 6

The corresponding augmented matrix (A \ B) can be stored as the 5 x 4 matrix

[ 0 -4 2 | 0   ]
[ 1 -4 1 | -4  ]
[ 1 -4 1 | -11 ]
[ 1 -4 1 | 5   ]
[ 2 -4 0 | 6   ]

which can be easily stored in computers, and the system can be solved using an algorithm for Gaussian elimination or using LU decomposition.
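For tridiagonal systems the elimination simplifies to the well-known Thomas algorithm, which works directly on the three stored diagonals; a sketch (the function name and array layout are my choices):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]                       # forward sweep (elimination)
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i-1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]                            # backward substitution
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

a = np.array([0.0, 1, 1, 1, 2])       # sub-diagonal (a[0] unused)
b = np.array([-4.0, -4, -4, -4, -4])  # main diagonal
c = np.array([2.0, 1, 1, 1, 0])       # super-diagonal (c[-1] unused)
d = np.array([0.0, -4, -11, 5, 6])    # right-hand side
x = thomas(a, b, c, d)                # solution of the 5x5 system above
```

This costs O(n) operations instead of the O(n^3) of full Gaussian elimination.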

Remark 3.1.1 A system of linear equations is said to be well conditioned if small errors in the coefficients or in the solving process have only a small effect on the solution.

Example 14 The system

0.9999x - 1.0001y = 1
x - y = 1

has the solution x = 0.5, y = -0.5, and the system

0.9999x - 1.0001y = 1
x - y = 1 + ε

has the solution x = 0.5 + 5000.5ε and y = -0.5 + 4999.5ε.
Thus a change of magnitude ε produces a change in the solution of magnitude about 5000ε.

The system

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

is ill-conditioned if

a12/a11 ≈ a22/a21, i.e. a11 a22 ≈ a12 a21, i.e. a11 a22 - a12 a21 ≈ 0

that is, if det A = a11 a22 - a12 a21 is "near zero".
Thus ill-conditioning can be regarded as an approach towards singularity (i.e. difficulty in getting an inverse or a solution).

For an ill-conditioned system Ax = B with an initial approximate solution x' = (x'1, x'2, ..., x'n), there corresponds the residual R = B - Ax'. Then Ax' = B - R, and since B = Ax, we get A(x - x') = R, whose solution x - x' is the correction to be applied to x'. This procedure can be repeated iteratively.
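One step of this residual-correction procedure can be sketched as follows (the starting approximation chosen here is hypothetical):

```python
import numpy as np

A = np.array([[0.9999, -1.0001],
              [1.0, -1.0]])
B = np.array([1.0, 1.0])

x_approx = np.array([0.51, -0.49])   # a hypothetical rough approximation
R = B - A @ x_approx                 # residual of the approximation
e = np.linalg.solve(A, R)            # correction from A(x - x') = R
x_improved = x_approx + e            # close to the true solution (0.5, -0.5)
```

In practice the correction equation is solved with the already-computed LU factors, so each refinement step is cheap.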

3.2 Indirect Methods for SLE

Indirect (iterative) methods start from an approximate solution to a system of linear equations and obtain a better approximation from each computational cycle.

• Iterative methods are used when convergence is rapid, so that the solution is obtained with much less work than with the direct methods.

• They are not sensitive to round off errors, provided that the iteration converges.

• They are applicable to systems of large order with many zero coefficients.

Gauss Jacobi Method

This is a method of simultaneous corrections. Given the system Ax = B, i.e.

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

with aii != 0 (if not, rearrange the system so that the ith equation has a nonzero coefficient for xi), rewrite it in the form

xi = (1/aii) ( bi - Σ_{j=1, j!=i}^{n} aij xj )

for i = 1, 2, ..., n, i.e.

x1 = (b1 - a12 x2 - ... - a1n xn)/a11, ..., xn = (bn - an1 x1 - ... - a_{n,n-1} x_{n-1})/ann

Starting with an initial guess x^(0) = (x1^(0), ..., xn^(0)), define the iteration

xi^(m+1) = (1/aii) ( bi - Σ_{j=1, j!=i}^{n} aij xj^(m) )

for i = 1, 2, ..., n and m >= 0.

• This process takes many iterations if it converges at all, and is generally used only if direct methods fail.

• State a criterion for terminating the iteration, say after K (given) iterations, or when |xi^(m+1) - xi^(m)| < ε for i = 1, ..., n.

Example 15 Solve the following by the Gauss-Jacobi method

10x1 + 3x2 + x3 = 14
2x1 - 10x2 + 3x3 = -5      with x^(0) = (0, 0, 0)^T
x1 + 3x2 + 10x3 = 14

Here a11 = 10, b1 = 14, a12 = 3, a13 = 1; a22 = -10, b2 = -5, a21 = 2, a23 = 3; a33 = 10, b3 = 14, a31 = 1, a32 = 3. Hence

x1^(1) = (1/10)[14 - 3x2^(0) - x3^(0)] = 1.4 - 0.3x2^(0) - 0.1x3^(0)
x2^(1) = (1/-10)[-5 - 2x1^(0) - 3x3^(0)] = 0.5 + 0.2x1^(0) + 0.3x3^(0)
x3^(1) = (1/10)[14 - x1^(0) - 3x2^(0)] = 1.4 - 0.1x1^(0) - 0.3x2^(0)

Substituting the values of x^(0) = (0, 0, 0)^T we have x^(1) = (1.4, 0.5, 1.4); proceeding in a similar manner we have the following result table:

m    x1^(m)    x2^(m)    x3^(m)
0    0         0         0
1    1.4       0.5       1.4
2    1.11      1.2       1.11
3    0.929     1.055     0.929
4    0.9906    0.9645    0.9906
5    1.01159   0.9953    1.01159
6    1.00025   1.0058    1.00025

which converges to the exact solution x = (1, 1, 1)^T with an error of order 10^-2.

%% Jacobi Method
%% Solution of x in Ax=b using Jacobi Method
% * Initialize 'A', 'b' & initial guess 'x' *
%%
A = [6 -2 1; -2 7 2; 1 2 -5]
b = [11 5 -1]'
x = [2.1 0.8 0.8]'

n = size(x, 1);
normVal = Inf;
%%
% * Tolerance for method *

tol = 1e-5; itr = 0;

%% Algorithm: Jacobi Method
%%
while normVal > tol
    x_old = x;

    for i = 1:n
        sigma = 0;

        for j = 1:n

            if j ~= i
                sigma = sigma + A(i,j)*x_old(j); % Jacobi uses only the old iterate
            end

        end

        x(i) = (1/A(i,i))*(b(i) - sigma);
    end

    itr = itr + 1;
    normVal = norm(x_old - x);
end
fprintf('Solution of the system is:\n%f\n%f\n%f\nin %d iterations', x, itr)

Gauss-Seidel Method

• This is a method of successive corrections: starting from x^(0) = (x1^(0), x2^(0), . . . , xn^(0)),
define the iteration

xi^(m+1) = (1/aii) [ bi − Σ_{j=1}^{i−1} aij xj^(m+1) − Σ_{j=i+1}^{n} aij xj^(m) ],   i = 1, . . . , n

That is, each new component xi^(m+1) is immediately used in the
computation of the next component.

• Faster convergence than Gauss-Jacobi

For the example done above

⇒ x1^(1) = (1/a11)(b1 − a12 x2^(0) − a13 x3^(0)) = (1/10)(14 − 3x2^(0) − x3^(0)) = 1.4 − 0.3x2^(0) − 0.1x3^(0)
  x2^(1) = (1/a22)(b2 − a21 x1^(1) − a23 x3^(0)) = (1/−10)(−5 − 2x1^(1) − 3x3^(0)) = 0.5 + 0.2x1^(1) + 0.3x3^(0)
  x3^(1) = (1/a33)(b3 − a31 x1^(1) − a32 x2^(1)) = (1/10)(14 − x1^(1) − 3x2^(1)) = 1.4 − 0.1x1^(1) − 0.3x2^(1)

Substituting the values of x^(0) = (0, 0, 0)^T we have x^(1) = (1.4, 0.78, 1.026)^T.
Proceeding in a similar manner we have the following result table.

m    x1^m       x2^m       x3^m
0    0          0          0
1    1.4        0.78       1.026
2    0.9234     0.99248    1.1092
3    0.99134    1.0310     0.99159
4    0.99154    0.99578    1.0021

which converges to the exact solution x = (1, 1, 1)^T more quickly than the
Gauss-Jacobi method.

Remark 3.2.1 If the iteration diverges, a rearrangement of the equations may
produce convergence.
Solve different equations for different xi (aii ≠ 0).
Iterate in a different order.

Example 16 Solve the system

10x1 − 2x2 + x3 = 12
x1 + 9x2 − x3 = 10
2x1 − x2 + 11x3 = 20

with initial guess x1^(0) = x2^(0) = x3^(0) = 0. The iteration process is given by

x1^(j+1) = 1.2000 + 0.2000 x2^(j) − 0.1000 x3^(j)
x2^(j+1) = 1.1111 − 0.1111 x1^(j+1) + 0.1111 x3^(j)
x3^(j+1) = 1.8182 − 0.1818 x1^(j+1) + 0.0909 x2^(j+1)

For instance

x1^(1) = 1.2000 + 0.2000(0) − 0.1000(0) = 1.2000
x2^(1) = 1.1111 − 0.1111(1.2000) + 0.1111(0) = 0.9778
x3^(1) = 1.8182 − 0.1818(1.2000) + 0.0909(0.9778) = 1.6889

Thus

j      0    1        2        3        4        5
x1^j   0    1.2000   1.2267   1.2624   1.2625   1.2624
x2^j   0    0.9778   1.1624   1.1598   1.1591   1.1591
x3^j   0    1.6889   1.7008   1.6941   1.6940   1.6941

Thus the solution is x1 = 1.2624, x2 = 1.1591, x3 = 1.6941.
The convergence of the iteration depends on how the equations are ordered.
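The table above is easy to reproduce with a short script. A minimal Python sketch (not part of the lecture's MATLAB listings; the function name `gauss_seidel` is ours), applied to this system:

```python
# Gauss-Seidel sweeps: each updated component is reused immediately.
def gauss_seidel(A, b, x, sweeps):
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# The system of Example 16.
A = [[10.0, -2.0, 1.0], [1.0, 9.0, -1.0], [2.0, -1.0, 11.0]]
b = [12.0, 10.0, 20.0]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0], 5)
print([round(v, 4) for v in x])  # approaches (1.2624, 1.1591, 1.6941)
```

The first sweep gives (1.2000, 0.9778, 1.6889), exactly the hand-computed iterate above.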

Definition 1 A matrix is strictly diagonally dominant if each term on the
main diagonal of A is larger (in absolute value) than the sum of the absolute
values of all the other terms in the same row or column, i.e.

|aii| > Σ_{j=1, j≠i}^{n} |aij|   and   |aii| > Σ_{j=1, j≠i}^{n} |aji|   ∀ i = 1, 2, . . . , n
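The row form of this condition can be checked mechanically. A small Python sketch (the helper name `strictly_diagonally_dominant` is ours):

```python
def strictly_diagonally_dominant(A):
    # Row form of the condition: |aii| > sum of |aij| over j != i, for every i.
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

# Coefficient matrix of Example 15: each diagonal entry dominates its row.
A = [[10, 3, 1], [2, -10, 3], [1, 3, 10]]
print(strictly_diagonally_dominant(A))  # True
```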

Note 2 We arrange the equations to satisfy the theorem as closely as we
can, by making the terms on the main diagonal as large as we can.

This is a sufficient but not necessary condition for convergence of the indirect methods.

Theorem 3.2.2 If A is strictly diagonally dominant, then both the Gauss-Jacobi
and Gauss-Seidel methods converge.

Proof:-
Let x be the exact solution of Ax = B and x^(m) the mth-step iterated solution.
Let e^(m) = x − x^(m) for m ≥ 0.

Then ei^(m+1) = Σ_{j≠i} (aij/aii) ej^(m).  Let M = max_{1≤i≤n} Σ_{j≠i} |aij/aii|.  Then

∥e^(m+1)∥ ≤ M ∥e^(m)∥
          ≤ M^2 ∥e^(m−1)∥
          ..
          .
          ≤ M^(m+1) ∥e^(0)∥

That is, ∥e^(m+1)∥ ≤ M^(m+1) ∥e^(0)∥, which converges to 0 as m −→ ∞ provided
that M < 1.

But M < 1 if max_{1≤i≤n} Σ_{j≠i} |aij/aii| < 1, i.e. Σ_{j≠i} |aij| < |aii| for each i.

Therefore A must be diagonally dominant: the element on the diagonal
must be greater than the sum of all other entries in its row.
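For the matrix of Example 15 the contraction constant M in the proof can be computed directly (a quick Python check; the variable names are ours):

```python
# M = max over rows i of sum_{j != i} |aij| / |aii| for the Example 15 matrix.
A = [[10, 3, 1], [2, -10, 3], [1, 3, 10]]
n = len(A)
M = max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
        for i in range(n))
print(M)  # 0.5, so the error shrinks at least by half each iteration
```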

Example 17 Solve the system

−x1 + 3x2 + 5x3 + 2x4 = 10 (3.5)


x1 + 9x2 + 8x3 + 4x4 = 15 (3.6)
x2 + x4 = 2 (3.7)
2x1 + x2 + x3 − x4 = −3 (3.8)

with initial guess x0 = (0, 0, 0, 0)T

Since we cannot solve for x3 from (3.7), we rearrange the last two equations so
that we can solve for x3 from (3.8), and x4 from (3.7).

x1^(1) = (1/−1)(10 − 3x2^(0) − 5x3^(0) − 2x4^(0))
x2^(1) = (1/9)(15 − x1^(0) − 8x3^(0) − 4x4^(0))
x3^(1) = (1/1)(−3 − 2x1^(0) − x2^(0) + x4^(0))
x4^(1) = (1/1)(2 − x2^(0))

Substituting the initial guess x^(0) = (0, 0, 0, 0)^T we have x^(1) = (−10, 5/3, −3, 2)^T.
Continuing the iteration,

x1^(2) = (1/−1)(10 − 3x2^(1) − 5x3^(1) − 2x4^(1))
x2^(2) = (1/9)(15 − x1^(1) − 8x3^(1) − 4x4^(1))
x3^(2) = (1/1)(−3 − 2x1^(1) − x2^(1) + x4^(1))
x4^(2) = (1/1)(2 − x2^(1))

Substituting x^(1) we have x^(2) = (−16, 41/9, 52/3, 1/3)^T. But this diverges even
after 100 iterations.

But rearranging the equations as

2x1 + x2 + x3 − x4 = −3
x1 + 9x2 + 8x3 + 4x4 = 15
−x1 + 3x2 + 5x3 + 2x4 = 10
x2 + x4 = 2

we can solve for each variable as

x1^(1) = (1/2)(−3 − x2^(0) − x3^(0) + x4^(0))
x2^(1) = (1/9)(15 − x1^(0) − 8x3^(0) − 4x4^(0))
x3^(1) = (1/5)(10 + x1^(0) − 3x2^(0) − 2x4^(0))
x4^(1) = (1/1)(2 − x2^(0))

Substituting the initial guess x^(0) = (0, 0, 0, 0)^T we have x^(1) = (−3/2, 5/3, 2, 2)^T,
which converges after 73 iterations to the solution:
x^(73) = (−1.000001, 0.000001, 0.999999, 1.99999998), so x = (−1, 0, 1, 2).

Example 18 Solve using i) Gauss-Jacobi ii) Gauss-Seidel

6x1 − 2x2 + x3 = 11
x1 + 2x2 − 5x3 = −1
−2x1 + 7x2 + 2x3 = 5

with initial guess x0 = (0, 0, 0)

Solution i):-
Let us reorder the equation ii) and iii) so that

6x1 − 2x2 + x3 = 11
−2x1 + 7x2 + 2x3 = 5
x1 + 2x2 − 5x3 = −1

Then

x1^(1) = (1/6)(11 + 2x2^(0) − x3^(0))
x2^(1) = (1/7)(5 + 2x1^(0) − 2x3^(0))
x3^(1) = (1/−5)(−1 − x1^(0) − 2x2^(0))

Then for the iteration we take

x1^(n+1) = (1/6)(11 + 2x2^(n) − x3^(n))
x2^(n+1) = (1/7)(5 + 2x1^(n) − 2x3^(n))
x3^(n+1) = (1/−5)(−1 − x1^(n) − 2x2^(n))

Starting with initial guess x^(0) = (0, 0, 0) we get

j      0    1       2       3       4       5      ...   9
x1^j   0    1.833   2.038   2.085   2.004   1.994  ...   2.000
x2^j   0    0.714   1.181   1.053   1.001   0.990  ...   1.000
x3^j   0    0.200   0.832   1.080   1.038   1.001  ...   1.000
ii):- using Gauss-Seidel

x1^(n+1) = (1/6)(11 + 2x2^(n) − x3^(n))
x2^(n+1) = (1/7)(5 + 2x1^(n+1) − 2x3^(n))
x3^(n+1) = (1/−5)(−1 − x1^(n+1) − 2x2^(n+1))

Starting with initial guess x^(0) = (0, 0, 0) we get

j      0    1       2       3       4       5
x1^j   0    1.833   2.069   1.998   1.999   2.000
x2^j   0    1.238   1.002   0.995   1.000   1.000
x3^j   0    1.062   1.015   0.998   1.000   1.000

Hence faster convergence.
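The two iteration counts can be compared directly. A Python sketch (the function name, tolerance, and change-based stopping rule are our choices):

```python
def iterate(A, b, use_seidel, tol=1e-3, max_it=100):
    # Stop when the largest component change in a sweep drops below tol.
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_it + 1):
        old = list(x)
        for i in range(n):
            src = x if use_seidel else old  # Seidel reuses fresh values
            s = sum(A[i][j] * src[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - old[i]) for i in range(n)) < tol:
            return x, it
    return x, max_it

# Reordered (diagonally dominant) system of Example 18; solution (2, 1, 1).
A = [[6.0, -2.0, 1.0], [-2.0, 7.0, 2.0], [1.0, 2.0, -5.0]]
b = [11.0, 5.0, -1.0]
xj, kj = iterate(A, b, use_seidel=False)
xs, ks = iterate(A, b, use_seidel=True)
print(kj, ks)  # Gauss-Seidel stops in no more sweeps than Gauss-Jacobi
```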

Exercise 5 Solve using Gauss Jacobi method with error ε ≤ 0.01

1. Solve

x − 0.25y − 0.25z = 50
−0.25x + y − 0.25w = 50
−0.25x + z − 0.25w = 25
−0.25x − 0.25z + w = 25

with initial guess x0 = y 0 = z 0 = w0 = 100

2. Solve

x − 5y − z = 8
4x + 2y + 8z = 32
10x − 2y + z = 9

with initial guess x0 = y 0 = z 0 = 0

%% Gauss Seidel Method
%% Solution of x in Ax=b using Gauss Seidel Method
% **Initialize 'A', 'b' & initial guess 'x'**
%%
A=[10 -2 1; 1 9 -1; 2 -1 11]
b=[12 10 20]'
x=[0 0 0]'
n=size(x,1);
normVal=Inf;
%%
% **Tolerance for method**
tol=1e-5; itr=0;

%% Algorithm: Gauss Seidel Method
%%
while normVal>tol
    xold=x;
    for i=1:n
        sigma=0;
        for j=1:i-1
            sigma=sigma+A(i,j)*x(j);      % already-updated components
        end
        for j=i+1:n
            sigma=sigma+A(i,j)*xold(j);   % components from the old iterate
        end
        x(i)=(1/A(i,i))*(b(i)-sigma);
    end
    itr=itr+1;
    normVal=norm(xold-x);
end
%%
fprintf('Solution of the system is:\n%f\n%f\n%f\nin %d iterations',x,itr)

3.3 System of non linear equations

Nonlinear equations are equations which may

• involve powers of the variables

• involve products of different variables

• include non-algebraic expressions

• have many solutions

Several such equations to be solved simultaneously form a system of nonlinear
equations.

Iteration method
Consider two nonlinear equations in two unknowns x and y.

f1 (x, y) = 0
f2 (x, y) = 0

Express in the form

x = g1 (x, y)
y = g2 (x, y)

Start with an initial guess (x0, y0) and iterate using the equations

xi+1 = g1(xi, yi)
yi+1 = g2(xi, yi)

provided that this iteration converges to the actual solution (x, y).

Assuming an initial guess exists, the new estimates can be obtained in a manner
similar to either the Gauss-Jacobi method or the Gauss-Seidel method. The
corresponding stopping criteria are |xi+1 − xi| < δ or |xi+1 − xi| / |xi+1| < ε.
Consider the system

f1(x, y) = x^2 + y − 20x + 40 = 0
f2(x, y) = y^2 + x − 20y + 20 = 0

which can be written in the form

x = (x^2 + y)/20 + 2
y = (y^2 + x)/20 + 1

Choosing an initial guess x0 = 0, y0 = 0 we get

xn = (xn−1^2 + yn−1)/20 + 2
yn = (yn−1^2 + xn−1)/20 + 1

x1 = (0^2 + 0)/20 + 2 = 2          y1 = (0^2 + 0)/20 + 1 = 1
x2 = (2^2 + 1)/20 + 2 = 2.25       y2 = (1^2 + 2)/20 + 1 = 1.15
x3 = 2.31                          y3 = 1.18
x4 = 2.326                         y4 = 1.185
x5 = 2.33                          y5 = 1.187

The solution (with an accuracy < 0.01) is x = 2.33 and y = 1.187.
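The computation above is easy to reproduce. A Python sketch of the simultaneous (Jacobi-style) update, in which both new values come from the previous pair, exactly as in the worked values:

```python
# Fixed-point iteration x = g1(x, y), y = g2(x, y) for
#   x^2 + y - 20x + 40 = 0  and  y^2 + x - 20y + 20 = 0.
x, y = 0.0, 0.0
for _ in range(20):
    # Tuple assignment: both updates use the previous (x, y).
    x, y = (x * x + y) / 20 + 2, (y * y + x) / 20 + 1
print(round(x, 3), round(y, 3))  # about 2.33 and 1.187
```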

Example 19 Use iteration method to find the solution to the following non
linear system of equations

x1^2 + x1 x2 = 10
x2 + 3x1 x2^2 = 57

Let us rewrite as follows

x1 = √(10 − x1 x2)
x2 = √((57 − x2)/(3x1))

Using the initial values x1^(0) = 1.5, x2^(0) = 3.5, and using each new value as
soon as it is available,

x1^(1) = √(10 − x1^(0) x2^(0)) = √(10 − (1.5)(3.5)) = 2.1794
x2^(1) = √((57 − x2^(0))/(3x1^(1))) = √((57 − 3.5)/(3(2.1794))) = 2.8605
x1^(2) = √(10 − (2.1794)(2.8605)) = 1.9405
x2^(2) = √((57 − 2.8605)/(3(1.9405))) = 3.0495
x1^(3) = 2.020        x2^(3) = 2.9834
x1^(4) = 1.993        x2^(4) = 3.005

Therefore the solution is x1 = 2 and x2 = 3.
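The same sequence of iterates is produced by a few lines of Python (a sketch; the update order mirrors the hand computation, each new x1 being used at once):

```python
from math import sqrt

x1, x2 = 1.5, 3.5
for _ in range(10):
    x1 = sqrt(10 - x1 * x2)          # x1 = sqrt(10 - x1*x2)
    x2 = sqrt((57 - x2) / (3 * x1))  # uses the freshly updated x1
print(round(x1, 3), round(x2, 3))  # about 2.0 and 3.0
```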

Newton-Raphson Method
Consider two nonlinear equations in two unknowns x and y,

f1(x, y) = 0
f2(x, y) = 0

where the functions f1 and f2 are continuous and differentiable, so that they
may be expanded in a Taylor series.
Expand f1(xi+1, yi+1) and f2(xi+1, yi+1) at (xi, yi) as

f1(xi+1, yi+1) = f1(xi, yi) + ∂f1/∂x (xi+1 − xi) + ∂f1/∂y (yi+1 − yi) + . . .  and
f2(xi+1, yi+1) = f2(xi, yi) + ∂f2/∂x (xi+1 − xi) + ∂f2/∂y (yi+1 − yi) + . . .

where ∂f1/∂x, ∂f1/∂y, ∂f2/∂x, ∂f2/∂y are to be evaluated at x = xi and y = yi.
Now suppose xi+1 and yi+1 are very close to the actual root (x, y) and assume
that xi ≃ xi+1 and yi ≃ yi+1, so that the remaining terms of the Taylor series

may be omitted to get

0 = f1(xi, yi) + ∂f1/∂x (xi+1 − xi) + ∂f1/∂y (yi+1 − yi)
0 = f2(xi, yi) + ∂f2/∂x (xi+1 − xi) + ∂f2/∂y (yi+1 − yi)

Let △xi = xi+1 − xi and △yi = yi+1 − yi, so that xi+1 = xi + △xi and
yi+1 = yi + △yi.
Now let us find the values of △xi and △yi for each iteration from the equations

∂f1/∂x △xi + ∂f1/∂y △yi = −f1(xi, yi)
∂f2/∂x △xi + ∂f2/∂y △yi = −f2(xi, yi)

We solve for △xi and △yi by Cramer's rule, provided that the determinant of
the coefficient matrix

J = | ∂f1/∂x   ∂f1/∂y |
    | ∂f2/∂x   ∂f2/∂y |

known as the Jacobian determinant, satisfies J ≠ 0. Then

△xi = | −f1(xi, yi)   ∂f1/∂y | / J        and        △yi = | ∂f1/∂x   −f1(xi, yi) | / J
      | −f2(xi, yi)   ∂f2/∂y |                             | ∂f2/∂x   −f2(xi, yi) |

Using xi+1 = xi + △xi and yi+1 = yi + △yi we compute the next approximation.
This can be extended to an n × n system for n > 2, but it requires many
computations.
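For the 2 × 2 case the scheme fits in a few lines. A Python sketch (the function name `newton2` and the packaging of the four partials into one callable are ours), demonstrated on the system of Example 20 below:

```python
def newton2(f1, f2, partials, x, y, iters):
    # partials(x, y) returns (df1/dx, df1/dy, df2/dx, df2/dy) at (x, y).
    for _ in range(iters):
        f1x, f1y, f2x, f2y = partials(x, y)
        J = f1x * f2y - f1y * f2x                    # Jacobian determinant
        dx = (-f1(x, y) * f2y + f1y * f2(x, y)) / J  # Cramer's rule for dx
        dy = (-f1x * f2(x, y) + f1(x, y) * f2x) / J  # Cramer's rule for dy
        x, y = x + dx, y + dy
    return x, y

# f1 = x^2 + x*y - 10, f2 = y + 3*x*y^2 - 57, starting from (1.5, 3.5).
x, y = newton2(lambda x, y: x * x + x * y - 10,
               lambda x, y: y + 3 * x * y * y - 57,
               lambda x, y: (2 * x + y, x, 3 * y * y, 1 + 6 * x * y),
               1.5, 3.5, 6)
print(round(x, 6), round(y, 6))  # converges to (2, 3)
```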

Example 20 Use the Newton-Raphson method to find a solution to

x1^2 + x1 x2 = 10
x2 + 3x1 x2^2 = 57

using initial values x1^(0) = 1.5 and x2^(0) = 3.5.

Consider

f1(x1, x2) = x1^2 + x1 x2 − 10 = 0
f2(x1, x2) = x2 + 3x1 x2^2 − 57 = 0

∂f1/∂x1 = 2x1 + x2        ∂f2/∂x1 = 3x2^2
∂f1/∂x2 = x1              ∂f2/∂x2 = 1 + 6x1 x2

The corresponding Jacobian at (1.5, 3.5) is

J = | 2x1 + x2   x1         | = | 6.5     1.5  | = 156.125
    | 3x2^2      1 + 6x1 x2 |   | 36.75   32.5 |

△x1^(0) = | 10 − x1^2 − x1 x2    x1         | / J = | 2.5      1.5  | / 156.125 = 0.536
          | 57 − x2 − 3x1 x2^2   1 + 6x1 x2 |       | −1.625   32.5 |

and

△x2^(0) = | 2x1 + x2   10 − x1^2 − x1 x2   | / J = | 6.5     2.5    | / 156.125 = −0.6561
          | 3x2^2      57 − x2 − 3x1 x2^2  |       | 36.75   −1.625 |

so

x1^(1) = x1^(0) + △x1^(0) = 1.5 + 0.536 = 2.036
x2^(1) = x2^(0) + △x2^(0) = 3.5 − 0.6561 = 2.844

Next, evaluating the same determinants at (x1^(1), x2^(1)) gives

△x1^(1) = −0.03733   and   △x2^(1) = 0.15841

x1^(2) = x1^(1) + △x1^(1) = 2.036 − 0.03733 = 1.999
x2^(2) = x2^(1) + △x2^(1) = 2.844 + 0.15841 = 3.0022

Next, at (x1^(2), x2^(2)),

△x1^(2) = 0.00130   and   △x2^(2) = −0.00229

x1^(3) = x1^(2) + △x1^(2) = 1.999 + 0.00130 = 2.000
x2^(3) = x2^(2) + △x2^(2) = 3.0022 − 0.00229 = 2.999

On the fourth iteration we get x1^(4) = 2 and x2^(4) = 3; we say the Newton-Raphson
method converges faster than the iteration method.

Example 21 Using Newton-Raphson solve

x^2 + y = 4
2x − xy = −1

with initial guess x0 = 1.1 and y0 = 3.01.

Consider

f1(x, y) = x^2 + y − 4 = 0
f2(x, y) = 2x − xy + 1 = 0

∂f1/∂x = 2x      ∂f2/∂x = 2 − y      ∂f1/∂y = 1      ∂f2/∂y = −x

The corresponding Jacobian is

J = | 2x      1  | = −2x^2 + y − 2,
    | 2 − y   −x |

which is nonzero for the given initial guess. Then

△xi = | −(xi^2 + yi − 4)     1   | / J        and        △yi = | 2xi      −(xi^2 + yi − 4)    | / J
      | −(2xi − xi yi + 1)   −xi |                             | 2 − yi   −(2xi − xi yi + 1)  |

Starting with x0 = 1.1 and y0 = 3.01 we get

△x0 = −0.0929        △y0 = −0.0156
x1 = x0 + △x0 = 1.1 − 0.0929 = 1.0071
y1 = y0 + △y0 = 3.01 − 0.0156 = 2.9944
△x1 = −0.0070        △y1 = 0.0055
x2 = x1 + △x1 = 1.0071 − 0.0070 = 1.0001
y2 = y1 + △y1 = 2.9944 + 0.0055 = 2.9999
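As a sanity check, the limit suggested by these iterates, (x, y) = (1, 3), satisfies both equations exactly (a trivial Python verification):

```python
x, y = 1.0, 3.0  # the apparent limit of the Newton iterates above
print(x * x + y, 2 * x - x * y)  # 4.0 and -1.0, matching the right-hand sides
```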

Exercise 6 Solve the system using the Newton-Raphson method

1. y = cos x
   x = sin y
   with initial guess x0 = y0 = 1 (use radians not degrees)

2. x + xy = 4
   x + y = 3
   with initial guess x0 = 1.98 and y0 = 1.02

