
Chapter Three

SOLVING SYSTEMS OF LINEAR EQUATIONS

Contents of lecture:
a. Graphical method
b. Direct method
c. Indirect method

A Linear Equation: Graphical Representation

* For simple systems, the graphical solution is the point of intersection of the two lines.
  - Unique solution: the two lines intersect at a single point.
  - No solution: the lines are parallel, e.g. x + 2y = 1 and 2x + 4y = 4.
  - Infinite solutions: the lines coincide, e.g. x + 2y = 1 and 2x + 4y = 2.

System of linear algebraic equations
Such systems have the general form

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
an1x1 + an2x2 + ... + annxn = bn

where the a's are constant coefficients, the b's are constants, and n is the number of equations.
Representing systems of linear equations with matrices
• As depicted below, [A] is the shorthand notation for the
matrix and aij designates an individual element of the
matrix.
A horizontal set of elements = a row; a vertical set of elements = a column.
The 1st subscript i and the 2nd subscript j always designate the number of the row and column, respectively, in which the element lies.
        | a11  a12  ...  a1n |          | x1 |          | b1 |
  [A] = | a21  a22  ...  a2n |    {X} = | x2 |    {B} = | b2 |
        | ...  ...  ...  ... |          | .. |          | .. |
        | an1  an2  ...  ann |          | xn |          | bn |

(Row 2 of [A] is the horizontal set a21, a22, ..., a2n; Column 2 is the vertical set a12, a22, ..., an2. {X} and {B} are column vectors.)

Summary of operating rules that govern matrices

• Two n by m matrices are equal if and only if every element in the first equals the corresponding element in the second; that is, [A] = [B] if aij = bij for all i and j.
• Addition and subtraction of two matrices, say [A] and [B], are performed term by term on corresponding elements. The elements of the resulting matrices are computed as
  cij = aij + bij   (for [C] = [A] + [B])      dij = eij - fij   (for [D] = [E] - [F])
• Addition is commutative: [A] + [B] = [B] + [A]
• Addition and multiplication are also associative; that is, ([A] + [B]) + [C] = [A] + ([B] + [C])
• The multiplication of a matrix [A] by a scalar g is obtained by multiplying every element of [A] by g (see the sketch below).
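As a rough illustration of these rules (not part of the original slides), here is a small Python sketch using plain nested lists; the helper names mat_equal, mat_add, and scalar_mult are my own.

# Illustrative sketch of the matrix rules above (plain Python, no libraries).
def mat_equal(A, B):
    # [A] = [B] if and only if a_ij = b_ij for all i and j
    return all(row_a == row_b for row_a, row_b in zip(A, B))

def mat_add(A, B):
    # c_ij = a_ij + b_ij (element-by-element addition)
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(g, A):
    # multiply every element of [A] by the scalar g
    return [[g * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_equal(mat_add(A, B), mat_add(B, A)))   # True: addition is commutative
print(scalar_mult(2, A))                         # [[2, 4], [6, 8]]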
Determinant and rank
• Both quantities can be used to characterize a system, but for larger systems it is much easier to evaluate the rank of a matrix than its determinant.
• The rank of a matrix is defined as the maximum number of linearly independent column vectors in the matrix or, equivalently, the maximum number of linearly independent row vectors. For an r x c matrix, if r is less than c, then the maximum possible rank is r.
• The rank of a matrix equals the number of linearly independent columns, which can be identified easily by reducing the matrix to its upper triangular (row echelon) form.
• The maximum number of linearly independent vectors in a matrix is equal to the number of non-zero rows in its row echelon form.
• The first example matrix has rank 2: its first two columns are linearly independent, so the rank is at least 2, but since the third column is a linear combination of the first two (the first column minus the second), the three columns are linearly dependent and the rank must be less than 3.

• The second example matrix has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent.

• The final matrix (in row echelon form) has two non-zero rows, and thus the rank of matrix A is 2.
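A minimal Python sketch of this idea (my own addition, not from the slides): reduce a copy of the matrix to row echelon form with elementary row operations and count the non-zero rows. The test matrix is only illustrative; its third column equals the first column minus the second, so its rank is 2.

def rank_by_row_echelon(A, tol=1e-12):
    # Rank = number of non-zero rows after reduction to row echelon form.
    M = [row[:] for row in A]          # work on a copy
    rows, cols = len(M), len(M[0])
    rank = 0
    for col in range(cols):
        if rank == rows:
            break
        # choose a pivot row for this column (partial pivoting for stability)
        pivot = max(range(rank, rows), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < tol:
            continue                   # no usable pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]
        # eliminate the entries below the pivot
        for r in range(rank + 1, rows):
            factor = M[r][col] / M[rank][col]
            M[r] = [m - factor * p for m, p in zip(M[r], M[rank])]
        rank += 1
    return rank

# illustrative matrix: third column = first column - second column
A = [[1, 0, 1],
     [2, 1, 1],
     [3, 2, 1]]
print(rank_by_row_echelon(A))   # 2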

• In linear algebra, the determinant is a scalar value that can be computed
from the elements of a square matrix and encodes certain properties of
the linear transformation described by the matrix. The determinant of a
matrix A is denoted det(A), det A, or |A|.
• Determinant evaluation is very computationally intensive. Therefore, Cramer's rule, although it looks fairly straightforward to implement, is not very practical for large systems.

• An augmented matrix is a matrix obtained by appending the columns of two given matrices, usually for the purpose of performing the same elementary row operations on each of them.
Solution existence test
Given: M = coefficient matrix, Ma = augmented matrix, D = determinant of the coefficient matrix,
n = number of equations.

D ≠ 0: unique solution.      D = 0: no solution or infinitely many solutions.
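One common way to carry out this test in code is to compare the ranks of M and Ma against n (a sketch of my own, assuming NumPy is available); for square systems it agrees with the determinant rule above.

import numpy as np

def classify_system(M, b):
    # Classify Ax = b using the ranks of the coefficient and augmented matrices.
    M = np.asarray(M, dtype=float)
    Ma = np.hstack([M, np.asarray(b, dtype=float).reshape(-1, 1)])  # augmented matrix
    n = M.shape[1]                        # number of unknowns
    if np.linalg.matrix_rank(M) < np.linalg.matrix_rank(Ma):
        return "no solution"              # inconsistent system
    if np.linalg.matrix_rank(M) == n:     # full rank (for a square M this means D != 0)
        return "unique solution"
    return "infinitely many solutions"

print(classify_system([[1, 2], [2, 4]], [1, 4]))   # no solution (parallel lines)
print(classify_system([[1, 2], [2, 4]], [1, 2]))   # infinitely many solutions
print(classify_system([[1, 2], [3, 4]], [1, 2]))   # unique solution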

Linear equation solution methods
• There are several methods that solve the equations directly. Prominent among these direct methods are:
  Cramer's rule,
  Gaussian elimination,
  Gauss-Jordan method, and
  LU decomposition.
• Another class of methods for solving linear equations is known as indirect (iterative) methods.
• In this approach, we start from an initial guess, say x(0), and generate an improved estimate x(k+1) from the previous approximation x(k).
  Jacobi method
  Gauss-Seidel method, etc.
Cramer's Rule (The Determinant Method)

If D = det(A) and Di = det(Ai), where Ai is obtained by replacing the ith column of A with b, then each unknown is given by xi = Di / D.
• A has a unique solution if D ≠ 0 (equivalently, if A has full rank).
• A has either no solution or infinitely many solutions if D = 0.
Example:

D ≠ 0, so a unique solution exists.
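A minimal sketch of Cramer's rule (my own code, assuming NumPy for the determinants); the 2 x 2 system at the bottom is only an illustrative example.

import numpy as np

def cramer(A, b):
    # Solve Ax = b by Cramer's rule: x_i = D_i / D.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if abs(D) < 1e-12:
        raise ValueError("D = 0: no solution or infinitely many solutions")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace the i-th column of A with b
        x[i] = np.linalg.det(Ai) / D  # x_i = D_i / D
    return x

print(cramer([[2, 1], [1, 3]], [3, 5]))   # [0.8, 1.4]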


Direct method: Naïve Gaussian Elimination
A method to solve simultaneous linear equations of the
form [A][X]=[C]
Two steps
1. Forward Elimination
2. Back Substitution

Steps to apply the Gaussian elimination method

1. Create the augmented matrix

   | a11  a12  ...  a1n | b1 |
   | a21  a22  ...  a2n | b2 |
   | ...  ...  ...  ... | .. |
   | an1  an2  ...  ann | bn |

2. Elimination

1st round of elimination:
   l21 = a21/a11,  l31 = a31/a11,  l41 = a41/a11, ... , ln1 = an1/a11
   R2(1) = R2(0) - l21·R1(0),  R3(1) = R3(0) - l31·R1(0),  R4(1) = R4(0) - l41·R1(0), ... , Rn(1) = Rn(0) - ln1·R1(0)

2nd round of elimination:
   l32 = a32/a22,  l42 = a42/a22, ... , ln2 = an2/a22
   R3(2) = R3(1) - l32·R2(1),  R4(2) = R4(1) - l42·R2(1), ... , Rn(2) = Rn(1) - ln2·R2(1)

3rd round of elimination:
   l43 = a43/a33, ...
   R4(3) = R4(2) - l43·R3(2), ... , Rn(3) = Rn(2) - ln3·R3(2)

3. Then finally, back substitution.
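The procedure above can be sketched in a few lines of Python (my own code, not from the slides); it assumes every pivot a_kk is non-zero, exactly as the naïve method does. The 3 x 3 system used at the end is illustrative only.

def naive_gauss(A, b):
    # Naïve Gaussian elimination: forward elimination, then back substitution.
    n = len(b)
    # build the augmented matrix [A | b]
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    # forward elimination (no pivoting: assumes every a_kk is non-zero)
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = M[i][k] / M[k][k]                       # multiplier l_ik = a_ik / a_kk
            M[i] = [mij - l * mkj for mij, mkj in zip(M[i], M[k])]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(naive_gauss([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))  # [2.0, 3.0, -1.0]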

Example
x1  x2  x3 4
2 x1  x2  3x3 7
3x1  4 x2  2 x3 9

Are there any pitfalls of the Naïve Gauss Elimination Method?
Division by zero:
It is possible that division by zero may occur during the forward elimination steps, for example when a set of equations has a zero pivot element.

Round-off error:
The Naïve Gauss Elimination Method is prone to round-off errors. This is especially true when there is a large number of equations, as errors propagate. Also, subtraction of nearly equal numbers can create large errors.

As an exercise, repeat such a calculation using six and then five significant digits with chopping, and compare the results.
Gaussian Elimination with Partial Pivoting method

To solve simultaneous linear equations of the form [A][X] = [C]
Two steps
• 1. Forward Elimination
• 2. Back Substitution
Forward Elimination
• Same as the naïve Gauss elimination method, except that we switch rows before each of the (n - 1) steps of forward elimination.

Steps to apply the Gaussian elimination with pivoting method

1. Create the augmented matrix

   | a11  a12  ...  a1n | b1 |
   | a21  a22  ...  a2n | b2 |
   | ...  ...  ...  ... | .. |
   | an1  an2  ...  ann | bn |

2. Elimination

Check the column-1 entries of rows 1 through n, and interchange the row having the largest absolute value in column 1 with row 1.
1st round of elimination:
   l21 = a21/a11,  l31 = a31/a11,  l41 = a41/a11, ... , ln1 = an1/a11
   R2(1) = R2(0) - l21·R1(0),  R3(1) = R3(0) - l31·R1(0),  R4(1) = R4(0) - l41·R1(0), ... , Rn(1) = Rn(0) - ln1·R1(0)

Check the column-2 entries of rows 2 through n, and interchange the row having the largest absolute value in column 2 with row 2.
2nd round of elimination:
   l32 = a32/a22,  l42 = a42/a22, ... , ln2 = an2/a22
   R3(2) = R3(1) - l32·R2(1),  R4(2) = R4(1) - l42·R2(1), ... , Rn(2) = Rn(1) - ln2·R2(1)

Check the column-3 entries of rows 3 through n, and interchange the row having the largest absolute value in column 3 with row 3.
3rd round of elimination:
   l43 = a43/a33, ...
   R4(3) = R4(2) - l43·R3(2), ... , Rn(3) = Rn(2) - ln3·R3(2)

3. Then finally, back substitution.
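A sketch of the same procedure with the row-interchange step added before each round (my own code, not from the slides); the illustrative system is chosen so that its first pivot is zero, which would break the naïve method.

def gauss_partial_pivot(A, b):
    # Gaussian elimination with partial pivoting, then back substitution.
    n = len(b)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n - 1):
        # pivoting: swap in the row with the largest |entry| in column k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            l = M[i][k] / M[k][k]
            M[i] = [mij - l * mkj for mij, mkj in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# a system whose first pivot is zero, so the naïve method would divide by zero
print(gauss_partial_pivot([[0, 2, 1], [1, 1, 1], [2, 1, -1]], [7, 6, 1]))  # [1.0, 2.0, 3.0]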
Matrix form at the beginning of the 1st step of forward elimination:

   | a11  a12  ...  a1n | b1 |
   | a21  a22  ...  a2n | b2 |
   | ...  ...  ...  ... | .. |
   | an1  an2  ...  ann | bn |

Matrix form at the beginning of the 2nd step of forward elimination: the entries below a11 in the first column have been reduced to zero.

Matrix form at the end of forward elimination: the coefficient matrix is upper triangular, with zeros everywhere below the main diagonal.
Back Substitution Starting Eqns.

Back Substitution
Example
x1  x2  x3 4
2 x1  x2  3 x3 7
3 x1  4 x2  2 x3 9
Direct method: Gauss-Jordan

• The Gauss-Jordan method is a variation of Gauss elimination.
• The major difference is that when an unknown is
eliminated in the Gauss-Jordan method, it is eliminated
from all other equations rather than just the subsequent
ones.
• In addition, all rows are normalized by dividing them by
their pivot elements.
• Thus, the elimination step results in an identity matrix rather than a triangular matrix.
• Consequently, it is not necessary to employ back
substitution to obtain the solution.
Cont’d
Starting from a system Ax = b written in its general augmented form, we continue the process of elimination until the set of equations reduces to the identity-matrix form shown below:

   | 1  0  0  ...  0 |
   | 0  1  0  ...  0 |
   | .  .  .       . |
   | 0  0  ...  1  0 |
   | 0  0  0  ...  1 |
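A short Python sketch of the Gauss-Jordan idea (my own code, no pivoting, so it assumes non-zero pivots): each pivot row is normalized and the unknown is eliminated from all other rows, so the solution can be read directly from the last column with no back substitution. The 3 x 3 test system is illustrative only.

def gauss_jordan(A, b):
    # Gauss-Jordan: reduce [A | b] until A becomes the identity; x is the last column.
    n = len(b)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        # normalize the pivot row by dividing it by its pivot element
        pivot = M[k][k]
        M[k] = [mkj / pivot for mkj in M[k]]
        # eliminate x_k from all OTHER rows, not just the ones below
        for i in range(n):
            if i != k:
                factor = M[i][k]
                M[i] = [mij - factor * mkj for mij, mkj in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]   # no back substitution needed

print(gauss_jordan([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))  # [2.0, 3.0, -1.0]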
Example: Solve the following system by using:
a) Cramer's rule
b) Naïve Gauss elimination
c) Gauss-Jordan

x1  x2  x3 4
2 x1  x2  3 x3 7
3 x1  x2  6 x3 2

Iterative method (indirect method)
Indirect methods consist of:
 Jacobi method
 Gauss-Seidel method, etc.

Iterative Methods (Example)

E1: 10x1 -  x2 + 2x3       =   6
E2:  -x1 + 11x2 -  x3 + 3x4 =  25
E3:  2x1 -  x2 + 10x3 -  x4 = -11
E4:        3x2 -  x3 + 8x4  =  15


We rewrite the system in the form x = Tx + c:

x1 =  (1/10)x2 - (1/5)x3                 + 3/5
x2 =  (1/11)x1 + (1/11)x3 - (3/11)x4     + 25/11
x3 = -(1/5)x1  + (1/10)x2 + (1/10)x4     - 11/10
x4 =           -(3/8)x2  + (1/8)x3       + 15/8
Jacobi method iteration
We start the iterations with x(0) = (0, 0, 0, 0) and compute x(1):

x1(1) =  (1/10)x2(0) - (1/5)x3(0)                  + 3/5   =  0.6000
x2(1) =  (1/11)x1(0) + (1/11)x3(0) - (3/11)x4(0)   + 25/11 =  2.2727
x3(1) = -(1/5)x1(0)  + (1/10)x2(0) + (1/10)x4(0)   - 11/10 = -1.1000
x4(1) =              -(3/8)x2(0)  + (1/8)x3(0)     + 15/8  =  1.8750
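A small Python sketch of the Jacobi iteration for this example (my own code, not from the slides): with one iteration from x(0) = (0, 0, 0, 0) it reproduces the values above, and with more iterations it approaches the solution (1, 2, -1, 1).

def jacobi(A, b, x0, iterations=10):
    # Jacobi: every new component uses only values from the PREVIOUS iterate.
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

A = [[10, -1,  2,  0],
     [-1, 11, -1,  3],
     [ 2, -1, 10, -1],
     [ 0,  3, -1,  8]]
b = [6, 25, -11, 15]

print(jacobi(A, b, [0, 0, 0, 0], iterations=1))   # [0.6, 2.2727..., -1.1, 1.875]
print(jacobi(A, b, [0, 0, 0, 0], iterations=10))  # close to the solution (1, 2, -1, 1)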

Continuing the iterations, the results are shown in the table.
The Gauss-Seidel Iterative Method
The idea of Gauss-Seidel is to compute x(k) using the most recently calculated values. In our example:

x1(k) =  (1/10)x2(k-1) - (1/5)x3(k-1)                    + 3/5
x2(k) =  (1/11)x1(k)   + (1/11)x3(k-1) - (3/11)x4(k-1)   + 25/11
x3(k) = -(1/5)x1(k)    + (1/10)x2(k)   + (1/10)x4(k-1)   - 11/10
x4(k) =                -(3/8)x2(k)    + (1/8)x3(k)       + 15/8
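A matching sketch of Gauss-Seidel for the same system (my own code); the only change from Jacobi is that each new component overwrites the old one immediately, so it is reused within the same sweep.

def gauss_seidel(A, b, x0, iterations=10):
    # Gauss-Seidel: each new component is used immediately in the same sweep.
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place -> latest values are reused
    return x

A = [[10, -1,  2,  0],
     [-1, 11, -1,  3],
     [ 2, -1, 10, -1],
     [ 0,  3, -1,  8]]
b = [6, 25, -11, 15]

print(gauss_seidel(A, b, [0, 0, 0, 0], iterations=5))
# converges toward (1, 2, -1, 1) in fewer iterations than Jacobi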
Starting the iterations with x(0) = (0, 0, 0, 0), we obtain:
