
MODULE III

INTRODUCTION

Lesson 1  Gauss Elimination
Lesson 2  LU Decomposition and Matrix Inversion
Lesson 3  Special Matrices and Gauss-Seidel

MODULE III

LINEAR ALGEBRAIC EQUATIONS

 INTRODUCTION

Matrices can be used to compactly write and work with systems of
equations. Matrices can be manipulated in any way that a normal equation
can be. This is very helpful when we start to work with systems of equations,
so it is helpful to understand how to organize matrices to solve these systems.
This chapter presents methods for solving linear algebraic equations
through the use of matrices.

OBJECTIVES

After studying the module, you should be able to:

1. Know how to compute the determinant using Gauss elimination.
2. Understand the advantages of pivoting; realize the difference between
partial and complete pivoting.
3. Recognize how Gauss elimination can be formulated as an LU
decomposition.
4. Know how to incorporate pivoting and matrix inversion into an LU
decomposition algorithm.
5. Understand why the Gauss-Seidel method is particularly well suited for
large, sparse systems of equations.

 DIRECTIONS/ MODULE ORGANIZER


There are three lessons in the module. Read each lesson carefully then
answer the exercises/activities to find out how much you have benefited from
it. Work on these exercises carefully and submit your output to your
instructor.
In case you encounter difficulty, discuss this with your instructor during
the face-to-face meeting.

EEPC102 Module III

Lesson 1

 Gauss Elimination

A. Gauss Elimination

Gaussian elimination with back-substitution works well as an algorithmic
method for solving systems of linear equations. For this algorithm, the order
in which the elementary row operations are performed is important: move
from left to right by columns, changing all entries directly below the pivot
coefficients to zeros.

Example 1. Consider the three coupled linear equations

3x1 + 5x2 + 2x3 = 8
2x1 + 3x2 − 1x3 = 1
x1 − 2x2 − 3x3 = −1

Using the rules of matrix multiplication, we can represent the above
equations in matrix form:

[A]{X} = {B}        Eq'n 1.a

[ 3   5   2 ] [x1]   [ 8]
[ 2   3  -1 ] [x2] = [ 1]
[ 1  -2  -3 ] [x3]   [-1]

Let's rewrite the matrix equation above to get one matrix with both A and
B in it:
KEY: Whatever we do to the l.h.s. of an equation, we do to the r.h.s., so we
don't change the problem.
This is called augmenting the matrix.

[ 3   5   2 |  8]   <- pivot row
[ 2   3  -1 |  1]
[ 1  -2  -3 | -1]
  (l.h.s. = A)  (r.h.s. = B)
Step 1
New Row 2 = (Row 1)(-2/3)+(Row 2)
Row 1 is called pivot row for this step. Some multiple of it is added to another
equation, but the pivot row remains unchanged


[ 3(-2/3)  5(-2/3)  2(-2/3) | 8(-2/3) ]
[    2        3       -1    |    1    ]   add        Eq'n 1.b

[ 0  -1/3  -7/3 | -13/3 ]   <- New Row 2

[ 3    5     2  |   8   ]
[ 0  -1/3  -7/3 | -13/3 ]   <- New Row 2
[ 1   -2    -3  |  -1   ]

Step 2 – reduce A(3,1) to zero

New Row 3 = (Row 1)(-1/3) + (Row 3)
Row 1 is the pivot row again.
Expanding this instruction like we did in equation (1.b), the result is

[ 3    5      2   |   8   ]
[ 0  -1/3   -7/3  | -13/3 ]
[ 0  -11/3  -11/3 | -11/3 ]   <- New Row 3
Now we need to reduce A(3,2) to zero. If we added some multiple of Row 1,
then A(3,1) would become non-zero. Instead, we’ll need to add some multiple
of Row 2 to Row 3 to get a new Row 3.
Before we go on, let's consider error reduction.
Error reduction – swap Rows 2 and 3
• If there were some numerical error in the computer storage of any
coefficient, say the error from rounding off the -1/3 currently in spot
A(2,2), then when we multiply Row 2 by some factor and add it to Row
3, we also multiply the error by that factor.
• If we can always multiply by some small number (less than 1), we can
reduce the propagation of round-off error.
• We can enforce this by making sure the lead coefficient (the pivot
coefficient) in the pivot row has the largest absolute value among itself
and all the coefficients under it (the coefficients to be reduced to
zero).
• Since it does not matter what order we put the equations in, we will
rearrange rows when we find the current pivot coefficient has a smaller
absolute value than those beneath it. In the current example we have:

[ 3    5      2   |   8   ]
[ 0  -1/3   -7/3  | -13/3 ]
[ 0  -11/3  -11/3 | -11/3 ]

Row 2 will be the pivot row to eliminate A(3,2), which makes A(2,2) the pivot
coefficient. Since |-11/3| > |-1/3|, we'll swap Rows 2 and 3 so that the new
pivot coefficient is the largest:

[ 3    5      2   |   8   ]
[ 0  -11/3  -11/3 | -11/3 ]
[ 0  -1/3   -7/3  | -13/3 ]
Now we can continue with minimal round-off error propagation.
Step 3 – reduce A(3,2) to zero
New Row 3 = (Row 2)(-1/11) + (Row 3)
Row 2 is the pivot row now.

[ 3    5      2   |   8   ]
[ 0  -11/3  -11/3 | -11/3 ]
[ 0    0     -2   |  -4   ]   <- New Row 3

Now let's expand this to its full form with A, X, B in separate matrices:

[ 3    5      2   ] [x1]   [   8   ]
[ 0  -11/3  -11/3 ] [x2] = [ -11/3 ]
[ 0    0     -2   ] [x3]   [  -4   ]

Back substitution gives us

[x1]   [ 3]
[x2] = [-1]
[x3]   [ 2]
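The whole procedure — forward elimination with partial-pivoting row swaps, then back substitution — can be sketched in a few lines of Python. This is only an illustrative sketch; the function name `gauss_eliminate` is ours, not part of the module.

```python
def gauss_eliminate(A, b):
    """Solve A x = b by Gauss elimination with partial pivoting.

    A is a list of row-lists and b a list; both are copied so the
    caller's data is left untouched.
    """
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest |coefficient| in
        # column k (from row k down) into the pivot position.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The system of Example 1:
x = gauss_eliminate([[3, 5, 2], [2, 3, -1], [1, -2, -3]], [8, 1, -1])
```

Run on Example 1, the pivot search leaves Row 1 in place on the first pass and swaps Rows 2 and 3 on the second, exactly as done by hand above.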

B. Gauss-Jordan
The Gauss-Jordan method is a variation of Gauss elimination. The major
difference is that when an unknown is eliminated in the Gauss-Jordan method, it is
eliminated from all other equations rather than just the subsequent ones. In
addition, all rows are normalized by dividing them by their pivot elements. Thus, the
elimination step results in an identity matrix rather than a triangular matrix.
Consequently, it is not necessary to employ back substitution to obtain the solution.

EEPC102 Module III


6

Example 2. Use the Gauss-Jordan technique to solve the same system as in
Example 1:

3x1 + 5x2 + 2x3 = 8
x1 − 2x2 − 3x3 = −1
2x1 + 3x2 − 1x3 = 1

Solution:
First, express the coefficients and the right-hand side as an augmented matrix:

[ 3   5   2 |  8]
[ 1  -2  -3 | -1]
[ 2   3  -1 |  1]

Then normalize the first row by dividing it by the pivot element, 3, to yield

[ 1  5/3  2/3 | 8/3]
[ 1  -2   -3  | -1 ]
[ 2   3   -1  |  1 ]

New Row 2 = (Row 1)(-1) + (Row 2)
New Row 3 = (Row 1)(-2) + (Row 3)

[ 1   5/3    2/3  |  8/3  ]
[ 0  -11/3 -11/3  | -11/3 ]
[ 0  -1/3   -7/3  | -13/3 ]

Next, normalize the second row by dividing it by -11/3:

[ 1  5/3   2/3 |  8/3  ]        Eq'n 1.c
[ 0   1     1  |   1   ]
[ 0  -1/3 -7/3 | -13/3 ]
Reduction of the x2 terms from the first and third equations gives
New Row 1 = (Row 2)(-5/3) + (Row 1)
New Row 3 = (Row 2)(1/3) + (Row 3)

[ 1  0  -1 |  1]
[ 0  1   1 |  1]
[ 0  0  -2 | -4]
The third row is then normalized by dividing it by -2:

[ 1  0  -1 | 1]
[ 0  1   1 | 1]
[ 0  0   1 | 2]

Finally, the x3 terms can be reduced from the first and the second equations to give
New Row 1 = (Row 3)(1) + (Row 1)
New Row 2 = (Row 3)(-1) + (Row 2)

[ 1  0  0 |  3]
[ 0  1  0 | -1]
[ 0  0  1 |  2]

Thus, the coefficient matrix has been transformed to the identity matrix, and the
solution is obtained in the right-hand-side vector. Notice that no back substitution
was required to obtain the solution.
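The Gauss-Jordan steps — normalize each pivot row, then clear the pivot column both above and below the pivot — can be sketched as follows. This is an illustrative sketch (no pivoting, and the name `gauss_jordan` is ours):

```python
def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] and return x."""
    n = len(b)
    # Build the augmented matrix.
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Normalize the pivot row by its pivot element.
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]
        # Eliminate the k-th unknown from ALL other rows, not just
        # the ones below -- the Gauss-Jordan difference.
        for i in range(n):
            if i != k:
                f = aug[i][k]
                aug[i] = [vi - f * vk for vi, vk in zip(aug[i], aug[k])]
    # The last column now holds the solution; no back substitution needed.
    return [row[-1] for row in aug]

# The system of Example 2:
x = gauss_jordan([[3, 5, 2], [1, -2, -3], [2, 3, -1]], [8, -1, 1])
```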

Learning Activity

Given the equations

10x1 + 2x2 − x3 = 27
−3x1 − 6x2 + 2x3 = −61.5
x1 + x2 + 5x3 = −21.5

(a) Solve by using the Gauss elimination and Gauss-Jordan methods. Show all steps
of the computation.
(b) Substitute your results into the original equations to check your answers.


Lesson 2

 LU Decomposition and Matrix Inversion

A. LU Decomposition
LU decomposition methods separate the time-consuming elimination
of the matrix [A] from the manipulations of the right-hand side {B}. Thus,
once [A] has been “decomposed,” multiple right-hand-side vectors can be
evaluated in an efficient manner.
Interestingly, Gauss elimination itself can be expressed as an LU
decomposition.

Overview of LU Decomposition
Just as was the case with Gauss elimination, LU decomposition requires
pivoting to avoid division by zero. However, to simplify the following
description, we will defer the issue of pivoting until after the fundamental
approach is elaborated. In addition, the following explanation is limited to a
set of three simultaneous equations. The results can be directly extended to
n-dimensional systems.
Equation (1.a) can be arranged to give

[A]{X} − {B} = 0        Eq'n 2.a

Suppose that Eq. (2.a) could be expressed as an upper triangular system:

[ u11  u12  u13 ] [x1]   [d1]
[  0   u22  u23 ] [x2] = [d2]        Eq'n 2.b
[  0    0   u33 ] [x3]   [d3]

Recognize that this is similar to the manipulation that occurs in the first step
of Gauss elimination. That is, elimination is used to reduce the system to
upper triangular form. Equation (2.b) can also be expressed in matrix notation
and rearranged to give

[U]{X} − {D} = 0        Eq'n 2.c

Now, assume that there is a lower diagonal matrix with 1's on the diagonal,

      [ 1    0    0 ]
[L] = [ l21  1    0 ]        Eq'n 2.d
      [ l31  l32  1 ]

that has the property that when Eq. (2.c) is premultiplied by it, Eq. (2.a) is
the result. That is,

[L]{[U]{X} − {D}} = [A]{X} − {B}        Eq'n 2.e

If this equation holds, it follows from the rules for matrix multiplication that

[L][U] = [A]        Eq'n 2.f

and

[L]{D} = {B}        Eq'n 2.g

A two-step strategy for obtaining solutions can be based on Eqs. (2.c), (2.f),
and (2.g):
1. LU decomposition step. [A] is factored or "decomposed" into lower [L]
and upper [U] triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for
a right-hand side {B}. This step itself consists of two steps. First, Eq.
(2.g) is used to generate an intermediate vector {D} by forward
substitution. Then, the result is substituted into Eq. (2.c), which can
be solved by back substitution for {X}.
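The two-step strategy can be sketched as follows — decompose once, then reuse [L] and [U] for any right-hand side. This is an illustrative Python sketch without pivoting, as in the overview above; the function names are ours.

```python
def lu_decompose(A):
    """Doolittle-style LU decomposition without pivoting.

    Returns (L, U) with 1's on the diagonal of L, such that the
    matrix product of L and U reproduces A.
    """
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f              # save the factor, as in Eq'n 2.i
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Substitution step: forward for {D}, then backward for {X}."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):                        # forward: [L]{D} = {B}
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # backward: [U]{X} = {D}
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

L, U = lu_decompose([[3, 5, 2], [1, -2, -3], [2, 3, -1]])
x = lu_solve(L, U, [8, -1, 1])
```

Once `L` and `U` are in hand, `lu_solve` can be called again with any other right-hand side at the cost of the two substitutions only.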

LU Decomposition Version of Gauss Elimination

Gauss elimination can be used to decompose [A] into [L] and [U]. This
can be easily seen for [U], which is a direct product of the forward
elimination. Recall that the forward elimination step is intended to reduce
the original coefficient matrix [A] to the form

      [ a11  a12   a13  ]
[U] = [  0   a'22  a'23 ]        Eq'n 2.h
      [  0    0    a"33 ]

which is in the desired upper triangular format.
Though it might not be as apparent, the matrix [L] is also produced
during the step. This can be readily illustrated for a three-equation system,

[ a11  a12  a13 ] [x1]   [b1]
[ a21  a22  a23 ] [x2] = [b2]
[ a31  a32  a33 ] [x3]   [b3]

The first step in Gauss elimination is to multiply row 1 by the factor

f21 = a21/a11

and subtract the result from the second row to eliminate a21. Similarly, row
1 is multiplied by

f31 = a31/a11

and the result subtracted from the third row to eliminate a31. The final step
is to multiply the modified second row by

f32 = a'32/a'22

and subtract the result from the third row to eliminate a'32.
Now suppose that we merely perform all these manipulations on the
matrix [A]. Clearly, if we do not want to change the equation, we also have
to do the same to the right-hand side {B}. But there is absolutely no reason
that we have to perform the manipulations simultaneously. Thus, we could
save the f’s and manipulate {B} later.
Where do we store the factors f21, f31, and f32? Recall that the whole
idea behind the elimination was to create zeros in a21, a31, and a32. Thus, we
can store f21 in a21, f31 in a31, and f32 in a32. After elimination, the [A]
matrix can therefore be written as

[ a11  a12   a13  ]
[ f21  a'22  a'23 ]        Eq'n 2.i
[ f31  f32   a"33 ]

This matrix, in fact, represents an efficient storage of the LU decomposition
of [A],

[A] → [L][U]        Eq'n 2.j

where

      [ a11  a12   a13  ]
[U] = [  0   a'22  a'23 ]
      [  0    0    a"33 ]

and

      [ 1    0    0 ]
[L] = [ f21  1    0 ]
      [ f31  f32  1 ]

Example 3
Let's try to derive an LU decomposition based on the Gauss elimination
performed in Example 1.
Solution: (Remember we swapped Row 2 and Row 3)

      [ 3   5   2 ]
[A] = [ 1  -2  -3 ]
      [ 2   3  -1 ]

After forward elimination, the following upper triangular matrix was
obtained:


      [ 3    5      2   ]
[U] = [ 0  -11/3  -11/3 ]
      [ 0    0     -2   ]

The factors employed to obtain the upper triangular matrix can be
assembled into a lower triangular matrix. The elements a21 and a31 were
eliminated by using the factors

f21 = a21/a11 = 1/3
f31 = a31/a11 = 2/3

and the element a'32 was eliminated by using the factor

f32 = a'32/a'22 = (-1/3)/(-11/3) = 1/11

Thus, the lower triangular matrix is

      [ 1     0    0 ]
[L] = [ 1/3   1    0 ]
      [ 2/3  1/11  1 ]

Consequently, the LU decomposition is

               [ 1     0    0 ] [ 3    5      2   ]
[A] = [L][U] = [ 1/3   1    0 ] [ 0  -11/3  -11/3 ]
               [ 2/3  1/11  1 ] [ 0    0     -2   ]

This result can be verified by performing the multiplication [L][U] to give

         [ 3   5   2 ]
[L][U] = [ 1  -2  -3 ]
         [ 2   3  -1 ]

which reproduces the original matrix [A]; because exact fractions were used
here, there is no round-off discrepancy.

Example 4. Complete the problem by generating the final solution with
forward and back substitution.
Recall that the system being solved in Example 1 was


[ 3   5   2 ] [x1]   [ 8]
[ 1  -2  -3 ] [x2] = [-1]
[ 2   3  -1 ] [x3]   [ 1]

and that the forward-elimination phase of conventional Gauss elimination
resulted in

[ 3    5      2   ] [x1]   [   8   ]
[ 0  -11/3  -11/3 ] [x2] = [ -11/3 ]        Eq'n 2.k
[ 0    0     -2   ] [x3]   [  -4   ]

The forward-substitution phase is implemented by applying Eq. (2.g),
[L]{D} = {B}, to our problem,

[ 1     0    0 ] [d1]   [ 8]
[ 1/3   1    0 ] [d2] = [-1]
[ 2/3  1/11  1 ] [d3]   [ 1]

or, multiplying out the left-hand side,

d1                        = 8
(1/3)d1 + d2              = -1
(2/3)d1 + (1/11)d2 + d3   = 1

We can solve the first equation for d1,

d1 = 8

which can be substituted into the second equation to solve for

d2 = -1 - (1/3)(8) = -11/3

Both d1 and d2 can be substituted into the third equation to give

d3 = 1 - (2/3)(8) - (1/11)(-11/3) = -4

Thus,

      [   8   ]
{D} = [ -11/3 ]
      [  -4   ]
which is identical to the right-hand side of Eq. (2.k).
This result can then be substituted into Eq. (2.c), [U]{X} = {D}, to give

[ 3    5      2   ] [x1]   [   8   ]
[ 0  -11/3  -11/3 ] [x2] = [ -11/3 ]
[ 0    0     -2   ] [x3]   [  -4   ]

which can be solved by back substitution for the final solution,

      [ 3]
{X} = [-1]
      [ 2]

B. The Matrix Inverse

If a matrix [A] is square, there is another matrix, [A]^-1, called the inverse
of [A], for which

[A][A]^-1 = [A]^-1[A] = [I]

The inverse of a matrix can be computed in a column-by-column fashion
by generating solutions with unit vectors as the right-hand-side constants. For
example, if the right-hand-side constant has a 1 in the first position and zeros
elsewhere,

      [1]
{b} = [0]
      [0]

the resulting solution will be the first column of the matrix inverse. Similarly,
if a unit vector with a 1 at the second row is used,

      [0]
{b} = [1]
      [0]

the result will be the second column of the matrix inverse.
The best way to implement such a calculation is with the LU
decomposition algorithm described at the beginning of this chapter. Recall
that one of the great strengths of LU decomposition is that it provides a very
efficient means to evaluate multiple right hand-side vectors. Thus, it is ideal
for evaluating the multiple unit vectors needed to compute the inverse.
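The column-by-column strategy can be sketched in Python: decompose [A] once, then solve against each unit vector. This is an illustrative sketch (the name `inverse_via_lu` is ours, and no pivoting is used):

```python
def inverse_via_lu(A):
    """Invert a square matrix column by column: decompose [A] once,
    then solve [A]{x} = {e_j} for each unit vector {e_j}."""
    n = len(A)
    # Doolittle LU decomposition (no pivoting, for illustration).
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    inv_cols = []
    for j in range(n):
        e = [float(i == j) for i in range(n)]      # unit vector
        d = [0.0] * n
        for i in range(n):                         # forward substitution
            d[i] = e[i] - sum(L[i][m] * d[m] for m in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):             # back substitution
            s = sum(U[i][m] * x[m] for m in range(i + 1, n))
            x[i] = (d[i] - s) / U[i][i]
        inv_cols.append(x)
    # inv_cols holds the columns; transpose into a row-major matrix.
    return [[inv_cols[j][i] for j in range(n)] for i in range(n)]

Ainv = inverse_via_lu([[3, 5, 2], [1, -2, -3], [2, 3, -1]])
```

Note that the LU factorization is computed only once; each additional column costs just a forward and a back substitution.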

Matrix Inversion
Example 4. Employ LU decomposition to determine the matrix inverse for
the system from Example 3.

      [ 3   5   2 ]
[A] = [ 1  -2  -3 ]
      [ 2   3  -1 ]


Recall that the decomposition resulted in the following lower and upper
triangular matrices:

      [ 3    5      2   ]         [ 1     0    0 ]
[U] = [ 0  -11/3  -11/3 ]   [L] = [ 1/3   1    0 ]
      [ 0    0     -2   ]         [ 2/3  1/11  1 ]

Solution:
The first column of the matrix inverse can be determined by performing the
forward-substitution solution procedure with a unit vector (with 1 in the first
row) as the right-hand-side vector. Thus, Eq. (2.g), the lower-triangular
system, can be set up as

[ 1     0    0 ] [d1]   [1]
[ 1/3   1    0 ] [d2] = [0]
[ 2/3  1/11  1 ] [d3]   [0]

and solved with forward substitution for {D}^T = [1  -1/3  -7/11].
This vector can then be used as the right-hand side of Eq. (2.b),

[ 3    5      2   ] [x1]   [   1   ]
[ 0  -11/3  -11/3 ] [x2] = [ -1/3  ]
[ 0    0     -2   ] [x3]   [ -7/11 ]

which can be solved by back substitution for {X}^T = [1/2  -5/22  7/22],
which is the first column of the matrix,

         [  1/2   0  0 ]
[A]^-1 = [ -5/22  0  0 ]
         [  7/22  0  0 ]
To determine the second column, Eq. (2.g) is formulated as

[ 1     0    0 ] [d1]   [0]
[ 1/3   1    0 ] [d2] = [1]
[ 2/3  1/11  1 ] [d3]   [0]

This can be solved for {D}^T = [0  1  -1/11]. This vector can then be used as
the right-hand side of Eq. (2.b),

[ 3    5      2   ] [x1]   [   0   ]
[ 0  -11/3  -11/3 ] [x2] = [   1   ]
[ 0    0     -2   ] [x3]   [ -1/11 ]

which can be solved by back substitution for {X}^T = [1/2  -7/22  1/22],
which is the second column of the matrix,

         [  1/2    1/2   0 ]
[A]^-1 = [ -5/22  -7/22  0 ]
         [  7/22   1/22  0 ]
To determine the third column, Eq. (2.g) is formulated as

[ 1     0    0 ] [d1]   [0]
[ 1/3   1    0 ] [d2] = [0]
[ 2/3  1/11  1 ] [d3]   [1]

This can be solved for {D}^T = [0  0  1]. This vector can then be used as the
right-hand side of Eq. (2.b),

[ 3    5      2   ] [x1]   [0]
[ 0  -11/3  -11/3 ] [x2] = [0]
[ 0    0     -2   ] [x3]   [1]

which can be solved by back substitution for {X}^T = [-1/2  1/2  -1/2],
which is the third column of the matrix,

         [  1/2    1/2   -1/2 ]
[A]^-1 = [ -5/22  -7/22   1/2 ]
         [  7/22   1/22  -1/2 ]

The validity of this result can be checked by verifying that [A][A]^-1 = [I].


Example 5. Solve Example 1 using the inverse of a matrix.

3x1 + 5x2 + 2x3 = 8
x1 − 2x2 − 3x3 = −1
2x1 + 3x2 − 1x3 = 1

Solution:
First, express the system in matrix form:

[ 3   5   2 ] [x1]   [ 8]
[ 1  -2  -3 ] [x2] = [-1]
[ 2   3  -1 ] [x3]   [ 1]

We will find the inverse of A by augmenting with the identity:

[ 3   5   2 | 1  0  0 ]
[ 1  -2  -3 | 0  1  0 ]
[ 2   3  -1 | 0  0  1 ]

Normalize Row 1 by multiplying it by 1/3:

[ 1  5/3  2/3 | 1/3  0  0 ]
[ 1  -2   -3  |  0   1  0 ]
[ 2   3   -1  |  0   0  1 ]

New Row 2 = (Row 1)(-1) + (Row 2)
New Row 3 = (Row 1)(-2) + (Row 3)

[ 1   5/3    2/3  |  1/3  0  0 ]
[ 0  -11/3 -11/3  | -1/3  1  0 ]
[ 0  -1/3   -7/3  | -2/3  0  1 ]

Next, normalize the second row by dividing it by -11/3:

[ 1  5/3   2/3 |  1/3    0     0 ]
[ 0   1     1  |  1/11  -3/11  0 ]
[ 0  -1/3 -7/3 | -2/3    0     1 ]

Reduction of the x2 terms from the first and third equations gives
New Row 1 = (Row 2)(-5/3) + (Row 1)
New Row 3 = (Row 2)(1/3) + (Row 3)

[ 1  0  -1 |  2/11   5/11  0 ]
[ 0  1   1 |  1/11  -3/11  0 ]
[ 0  0  -2 | -7/11  -1/11  1 ]

The third row is then normalized by dividing it by -2:

[ 1  0  -1 |  2/11   5/11   0   ]
[ 0  1   1 |  1/11  -3/11   0   ]
[ 0  0   1 |  7/22   1/22  -1/2 ]

Finally, the x3 terms can be reduced from the first and the second equations to give
New Row 1 = (Row 3)(1) + (Row 1)
New Row 2 = (Row 3)(-1) + (Row 2)

[ 1  0  0 |  1/2    1/2   -1/2 ]
[ 0  1  0 | -5/22  -7/22   1/2 ]
[ 0  0  1 |  7/22   1/22  -1/2 ]

So,

         [  1/2    1/2   -1/2 ]
[A]^-1 = [ -5/22  -7/22   1/2 ]
         [  7/22   1/22  -1/2 ]

Multiply both sides of the equation by [A]^-1. We want [A]^-1[A]{X} = [A]^-1{B},
and since [A]^-1[A] = [I], this gives {X} directly:

      [  1/2    1/2   -1/2 ] [ 8]   [ 4 - 1/2 - 1/2       ]   [ 3]
{X} = [ -5/22  -7/22   1/2 ] [-1] = [ -20/11 + 7/22 + 1/2 ] = [-1]
      [  7/22   1/22  -1/2 ] [ 1]   [ 28/11 - 1/22 - 1/2  ]   [ 2]

The solution is x1 = 3, x2 = -1, and x3 = 2.


Learning Activity
1. Use LU decomposition to determine the matrix inverse for the
following system. Do not use a pivoting strategy, and check your results
by verifying that [A][A]^-1 = [I].

10x1 + 2x2 − x3 = 27
−3x1 − 6x2 + 2x3 = −61.5
x1 + x2 + 5x3 = −21.5

2. Solve the following system of equations using LU decomposition with
partial pivoting:

2x1 − 6x2 − x3 = −38
−3x1 − x2 + 7x3 = −34
−8x1 + x2 − 2x3 = −20

3. The following system of equations is designed to determine
concentrations (the c's in g/m3) in a series of coupled reactors as a
function of the amount of mass input to each reactor (the right-hand
sides in g/day):

15c1 − 3c2 − c3 = 3800
−3c1 + 18c2 − 6c3 = 1200
−4c1 − c2 + 12c3 = 2350

(a) Determine the matrix inverse.
(b) Use the inverse to determine the solution.
(c) Determine how much the rate of mass input to reactor 3 must be
increased to induce a 10 g/m3 rise in the concentration of reactor 1.
(d) How much will the concentration in reactor 3 be reduced if the rate
of mass input to reactors 1 and 2 is reduced by 500 and 250 g/day,
respectively?


Lesson 3

 Special Matrices and Gauss-Seidel

A. Special Matrices

A banded matrix is a square matrix that has all elements equal to zero,
with the exception of a band centered on the main diagonal. Banded systems
are frequently encountered in engineering and scientific practice. For
example, they typically occur in the solution of differential equations.

Although Gauss elimination or conventional LU decomposition can be
employed to solve banded equations, they are inefficient, because if pivoting
is unnecessary none of the elements outside the band would change from
their original values of zero. Thus, unnecessary space and time would be
expended on the storage and manipulation of these useless zeros. If it is
known beforehand that pivoting is unnecessary, very efficient algorithms can
be developed that do not involve the zero elements outside the band.

1. Tridiagonal Systems

A tridiagonal system — that is, a banded system with a bandwidth of 3 — can
be expressed generally as

[ f1  g1                 ] [x1]   [r1]
[ e2  f2  g2             ] [x2]   [r2]
[     e3  f3  g3         ] [x3] = [r3]        Eq'n 3a
[         .   .   .      ] [..]   [..]
[             en  fn     ] [xn]   [rn]

Notice that we have changed our notation for the coefficients from a’s
and b’s to e’s, f’s, g’s, and r’s. This was done to avoid storing large numbers
of useless zeros in the square matrix of a’s. This space-saving modification is
advantageous because the resulting algorithm requires less computer
memory.

As with conventional LU decomposition, the algorithm consists of three
steps: decomposition and forward and back substitution. Thus, all the
advantages of LU decomposition, such as convenient evaluation of multiple
right-hand-side vectors and the matrix inverse, can be accomplished by
proper application of this algorithm.

Tridiagonal Solution with the Thomas Algorithm

Example 6. Solve the following tridiagonal system with the Thomas algorithm:

[ 2.04  -1     0     0    ] [T1]   [  40.8 ]
[ -1    2.04  -1     0    ] [T2]   [   0.8 ]
[  0    -1    2.04  -1    ] [T3] = [   0.8 ]
[  0     0    -1    2.04  ] [T4]   [ 200.8 ]

Solution:

First, the decomposition is implemented as

e2 = -1/2.04 = -0.49
f2 = 2.04 - (-0.49)(-1) = 1.550
e3 = -1/1.550 = -0.645
f3 = 2.04 - (-0.645)(-1) = 1.395
e4 = -1/1.395 = -0.717
f4 = 2.04 - (-0.717)(-1) = 1.323

Thus, the matrix has been transformed to

[ 2.04   -1      0      0     ]
[ -0.49  1.550  -1      0     ]
[  0    -0.645  1.395  -1     ]
[  0     0     -0.717  1.323  ]

and the LU decomposition is

      [ 1      0      0      0 ]        [ 2.04  -1     0      0     ]
[L] = [ -0.49  1      0      0 ]  [U] = [ 0     1.550 -1      0     ]
      [ 0     -0.645  1      0 ]        [ 0     0     1.395  -1     ]
      [ 0      0     -0.717  1 ]        [ 0     0     0      1.323  ]

You can verify that this is correct by multiplying [L][U] to yield [A].
The forward substitution is implemented as

r2 = 0.8 - (-0.49)(40.8) = 20.8
r3 = 0.8 - (-0.645)(20.8) = 14.221
r4 = 200.8 - (-0.717)(14.221) = 210.996


Thus, the right-hand-side vector has been modified to

[  40.8   ]
[  20.8   ]
[  14.221 ]
[ 210.996 ]

which then can be used in conjunction with the [U] matrix to perform back
substitution and obtain the solution

T4 = 210.996/1.323 = 159.480
T3 = (14.221 - (-1)(159.480))/1.395 = 124.538
T2 = (20.800 - (-1)(124.538))/1.550 = 93.778
T1 = (40.800 - (-1)(93.778))/2.04 = 65.970
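The three Thomas-algorithm phases — decomposition, forward substitution, back substitution — operate directly on the e/f/g/r vectors, never touching the zeros outside the band. A minimal sketch (the function name `thomas` is ours; the data are the Example 6 system):

```python
def thomas(e, f, g, r):
    """Solve a tridiagonal system stored as four vectors:
    e = sub-diagonal (e[0] unused), f = main diagonal,
    g = super-diagonal (g[-1] unused), r = right-hand side."""
    n = len(f)
    f, r = f[:], r[:]
    # Decomposition: compute the factors and the new pivots.
    fac = [0.0] * n
    for k in range(1, n):
        fac[k] = e[k] / f[k - 1]
        f[k] -= fac[k] * g[k - 1]
    # Forward substitution on the right-hand side.
    for k in range(1, n):
        r[k] -= fac[k] * r[k - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

# The 2.04 / -1 tridiagonal system of Example 6:
T = thomas(e=[0, -1, -1, -1],
           f=[2.04, 2.04, 2.04, 2.04],
           g=[-1, -1, -1, 0],
           r=[40.8, 0.8, 0.8, 200.8])
```

Storage and work are both O(n), compared with O(n^2) storage and O(n^3) work for a full-matrix solve.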

2. Cholesky Decomposition

A symmetric matrix is one where a_ij = a_ji for all i and j. In other
words, [A] = [A]^T. Such systems occur commonly in both mathematical and
engineering problem contexts. They offer computational advantages because
only half the storage is needed and, in most cases, only half the computation
time is required for their solution.

One of the most popular approaches involves Cholesky decomposition.


This algorithm is based on the fact that a symmetric matrix can be
decomposed, as in

[A] = [L][L]^T        Eq'n 3.b

That is, the resulting triangular factors are the transpose of each other.
The terms of Eq. (3.b) can be multiplied out and set equal to each other. The
result can be expressed simply by recurrence relations. For the kth row,

l_ki = ( a_ki - Σ_{j=1..i-1} l_ij l_kj ) / l_ii    for i = 1, 2, ..., k-1        Eq'n 3.c

and

l_kk = sqrt( a_kk - Σ_{j=1..k-1} l_kj^2 )        Eq'n 3.d


Example 7. Apply Cholesky decomposition to the symmetric matrix

      [  6   15   55 ]
[A] = [ 15   55  225 ]
      [ 55  225  979 ]

Solution:
For the first row (k = 1), Eq. (3.c) is skipped and Eq. (3.d) is employed to
compute

l11 = sqrt(a11) = sqrt(6) = 2.4495

For the second row (k = 2), Eq. (3.c) gives

l21 = a21/l11 = 15/2.4495 = 6.1237

and Eq. (3.d) yields

l22 = sqrt(a22 - l21^2) = sqrt(55 - (6.1237)^2) = 4.1833

For the third row (k = 3), Eq. (3.c) gives (i = 1)

l31 = a31/l11 = 55/2.4495 = 22.454

and (i = 2)

l32 = (a32 - l21 l31)/l22 = (225 - 6.1237(22.454))/4.1833 = 20.917

and Eq. (3.d) yields

l33 = sqrt(a33 - l31^2 - l32^2) = sqrt(979 - (22.454)^2 - (20.917)^2) = 6.1101

Thus, the Cholesky decomposition yields

      [ 2.4495                ]
[L] = [ 6.1237  4.1833        ]
      [ 22.454  20.917  6.1101]

The validity of this decomposition can be verified by substituting it and its
transpose into Eq. (3.b) to see if their product yields the original matrix [A].
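The recurrences of Eqs. (3.c) and (3.d) translate almost line for line into code. An illustrative sketch (the function name `cholesky` is ours):

```python
import math

def cholesky(A):
    """Decompose a symmetric positive-definite matrix as A = L L^T.
    Returns the lower-triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(k):
            # Eq'n (3.c): off-diagonal terms of row k
            s = sum(L[i][j] * L[k][j] for j in range(i))
            L[k][i] = (A[k][i] - s) / L[i][i]
        # Eq'n (3.d): diagonal term
        s = sum(L[k][j] ** 2 for j in range(k))
        L[k][k] = math.sqrt(A[k][k] - s)
    return L

# The symmetric matrix of Example 7:
L = cholesky([[6, 15, 55], [15, 55, 225], [55, 225, 979]])
```

Only the lower triangle of [A] is referenced, which is how the half-storage saving for symmetric systems is realized in practice.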


B. Gauss-Seidel

The Gauss-Seidel method is the most commonly used iterative method.
Assume that we are given a set of n equations:

[A]{X} = {B}

Suppose that for conciseness we limit ourselves to a 3 x 3 set of equations. If
the diagonal elements are all nonzero, the first equation can be solved for
x1, the second for x2, and the third for x3 to yield

x1 = (b1 - a12 x2 - a13 x3) / a11        Eq'n 3.e
x2 = (b2 - a21 x1 - a23 x3) / a22        Eq'n 3.f
x3 = (b3 - a31 x1 - a32 x2) / a33        Eq'n 3.g

A positive definite matrix is one for which the product {X}^T [A]{X} is greater
than zero for all nonzero vectors {X}.

Now, we can start the solution process by choosing guesses for the x’s.
A simple way to obtain initial guesses is to assume that they are all zero.
These zeros can be substituted into Eq. (3.e), which can be used to calculate
a new value for 𝑥𝑥1 = 𝑏𝑏1 /𝑎𝑎11 . Then, we substitute this new value of 𝑥𝑥1 along
with the previous guess of zero for 𝑥𝑥3 into Eq. (3.f) to compute a new value
for 𝑥𝑥2 . The process is repeated for Eq. (3.g) to calculate a new estimate for
𝑥𝑥3 . Then we return to the first equation and repeat the entire procedure until
our solution converges closely enough to the true values. Convergence can be
checked using the criterion.

|ε_a,i| = | (x_i^j - x_i^(j-1)) / x_i^j | × 100% < ε_s

for all i, where j and j - 1 are the present and previous iterations.

Example 8. Use the Gauss-Seidel method to obtain the solution of the system

3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4


Solution:
First, express the system in matrix form:

[ 3    -0.1  -0.2 ] [x1]   [   7.85 ]
[ 0.1   7    -0.3 ] [x2] = [ -19.3  ]
[ 0.3  -0.2  10   ] [x3]   [  71.4  ]

Solve each of the equations for its unknown on the diagonal:

x1 = (7.85 + 0.1 x2 + 0.2 x3) / 3        Eq'n 3.h
x2 = (-19.3 - 0.1 x1 + 0.3 x3) / 7        Eq'n 3.i
x3 = (71.4 - 0.3 x1 + 0.2 x2) / 10        Eq'n 3.j

By assuming that x2 and x3 are zero, Eq. (3.h) can be used to compute

x1 = (7.85 + 0.1(0) + 0.2(0)) / 3 = 2.616667

This value, along with the assumed value of x3 = 0, can be substituted into Eq.
(3.i) to calculate

x2 = (-19.3 - 0.1(2.616667) + 0.3(0)) / 7 = -2.794524

The first iteration is completed by substituting the calculated values for x1
and x2 into Eq. (3.j) to yield

x3 = (71.4 - 0.3(2.616667) + 0.2(-2.794524)) / 10 = 7.005610

For the second and subsequent iterations, the same process is repeated
to compute new values of x1, x2, and x3:

x1 = (7.85 + 0.1(-2.794524) + 0.2(7.005610)) / 3 = 2.990557
x2 = (-19.3 - 0.1(2.990557) + 0.3(7.005610)) / 7 = -2.499625
x3 = (71.4 - 0.3(2.990557) + 0.2(-2.499625)) / 10 = 7.000291


As you can see, by iteration number 5 the values no longer change, so
these are taken as the solution of the algebraic equations. If we use
fraction form we arrive at the exact values:

[x1]   [  3  ]
[x2] ≈ [ -2.5]
[x3]   [  7  ]
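The iteration above, together with the percent-relative-error stopping criterion, can be sketched as follows. This is an illustrative sketch (the name `gauss_seidel` and the parameter `es`, the stopping tolerance ε_s in percent, are ours):

```python
def gauss_seidel(A, b, es=1e-5, max_iter=100):
    """Iterate x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii, always using
    the newest available values, until every |eps_a,i| < es (in %)."""
    n = len(b)
    x = [0.0] * n                      # initial guesses: all zero
    for _ in range(max_iter):
        worst = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
            if x[i] != 0.0:
                # Percent relative change of this unknown.
                worst = max(worst, abs((x[i] - old) / x[i]) * 100)
        if worst < es:
            break
    return x

# The system of Example 8:
x = gauss_seidel([[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]],
                 [7.85, -19.3, 71.4])
```

Because each row of this system is diagonally dominant, the iterates converge rapidly toward (3, -2.5, 7).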

Learning Activity

1. The following tridiagonal system must be solved as part of a larger
algorithm (Crank-Nicolson) for solving partial differential equations:

Use the Thomas algorithm to obtain a solution.

2. Perform a Cholesky decomposition of the following symmetric system
by hand:

[  8  20  15 ] [x1]   [ 50]
[ 20  80  50 ] [x2] = [250]
[ 15  50  60 ] [x3]   [100]

3. Use the Gauss-Seidel method to solve the following system until the
percent relative error falls below ε_s = 5%:

10x1 + 2x2 − x3 = 27
−3x1 − 6x2 + 2x3 = −61.5
x1 + x2 + 5x3 = −21.5

Summative Test

1. Given the equations

2x1 − 6x2 − x3 = −38
−3x1 − x2 + 7x3 = −34
−8x1 + x2 − 2x3 = −20

(a) Solve by Gauss elimination with partial pivoting. Show all steps of the
computation.
(b) Substitute your results into the original equations to check your answers.
(c) Also solve the equations using Gauss-Jordan.
(d) Solve the same system using LU decomposition with partial pivoting.

2. Given the equations

2x1 − 5x2 + x3 = 12
−x1 + 3x2 − x3 = −8
3x1 − 4x2 + 2x3 = 16

Solve the system using
(a) the inverse matrix
(b) the Gauss-Seidel method

 MODULE SUMMARY

In Module III, you have learned methods of solving linear algebraic
equations through matrices.

Lesson 1 covered Gaussian elimination and the Gauss-Jordan method
for solving linear algebraic equations.

Lesson 2 covered LU decomposition and the inverse-matrix method for
solving linear algebraic equations.

Lesson 3 dealt with special matrices and the algorithms suited to them:
the Thomas algorithm, Cholesky decomposition, and the Gauss-Seidel method.

Congratulations! You have just studied Module III.

