EPPC102 Module III

MODULE III
INTRODUCTION
OBJECTIVES
Lesson 1
Gauss Elimination
A. Gauss Elimination
To eliminate A(2,1), multiply Row 1 by (−2/3) and add the result to Row 2 (Eq'n 1.b):

  [ 3(−2/3)   5(−2/3)   2(−2/3)   8(−2/3) ]
+ [    2          3        −1         1    ]

New Row 2:
[ 0   −1/3   −7/3   −13/3 ]

so the augmented system becomes

[ 3     5       2       8    ]
[ 0   −1/3    −7/3    −13/3  ]
[ 1    −2      −3      −1    ]

Eliminating A(3,1) in the same way (Row 1 multiplied by −1/3 and added to Row 3) gives

[ 3     5       2       8    ]
[ 0   −1/3    −7/3    −13/3  ]
[ 0   −11/3   −11/3   −11/3  ]
Row 2 will be the pivot row used to eliminate A(3,2), which makes A(2,2) the pivot
coefficient. Since |−11/3| > |−1/3|, we swap Rows 2 and 3 so that the new pivot
coefficient is the largest in magnitude:
[ 3     5       2       8    ]
[ 0   −11/3   −11/3   −11/3  ]
[ 0   −1/3    −7/3    −13/3  ]
Now we can continue with minimal round-off error propagation.
Step 3 - reduce A(3,2) to zero. Row 2 is now the pivot row:

New Row 3 = (Row 2)(−1/11) + (Row 3)

[ 3     5       2       8    ]
[ 0   −11/3   −11/3   −11/3  ]
[ 0     0      −2      −4    ]
Now let's expand this to its full form with [A], {X}, and {B} as separate matrices:
[ 3     5       2    ] [ x1 ]   [   8    ]
[ 0   −11/3   −11/3  ] [ x2 ] = [ −11/3  ]
[ 0     0      −2    ] [ x3 ]   [  −4    ]
Back Substitution gives us,
[ x1 ]   [  3 ]
[ x2 ] = [ −1 ]
[ x3 ]   [  2 ]
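The whole procedure can be summarized in a short program. The following Python sketch (the function name gauss_eliminate is illustrative, not part of the module) applies forward elimination with partial pivoting and back substitution to the system above and returns approximately [3, −1, 2]:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by Gauss elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination
    for k in range(n - 1):
        # Partial pivoting: bring the largest |coefficient| into the pivot position
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, 5, 2], [1, -2, -3], [2, 3, -1]])
b = np.array([8.0, -1, 1])
print(gauss_eliminate(A, b))   # approximately [ 3. -1.  2.]
```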
B. Gauss-Jordan
The Gauss-Jordan method is a variation of Gauss elimination. The major
difference is that when an unknown is eliminated in the Gauss-Jordan method, it is
eliminated from all other equations rather than just the subsequent ones. In
addition, all rows are normalized by dividing them by their pivot elements. Thus, the
elimination step results in an identity matrix rather than a triangular matrix.
Consequently, it is not necessary to employ back substitution to obtain the solution.
Starting from the augmented system

[ 3    5    2    8 ]
[ 1   −2   −3   −1 ]
[ 2    3   −1    1 ]

normalize the first row by dividing it by the pivot element, 3, to yield

[ 1   5/3   2/3   8/3 ]
[ 1   −2    −3    −1  ]
[ 2    3    −1     1  ]
The x1 term is then eliminated from the second and third rows, each new pivot row is normalized, and the x2 term is eliminated from the first and third rows. After the third row is also normalized by its pivot element, the system is

[ 1   0   −1   1 ]
[ 0   1    1   1 ]
[ 0   0    1   2 ]
Finally, the x3 terms can be reduced from the first and the second equations to give
New Row 1 = (Row 3) (1) + (Row 1)
New Row 2 = (Row 3) (-1) + (Row 2)
[ 1   0   0    3 ]
[ 0   1   0   −1 ]
[ 0   0   1    2 ]
Thus, the coefficient matrix has been transformed to the identity matrix, and the
solution is obtained in the right-hand-side vector. Notice that no back substitution
was required to obtain the solution.
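For comparison, a minimal Gauss-Jordan sketch in Python is shown below (names are illustrative and, for clarity, no pivoting is included):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] (no pivoting, for clarity)."""
    aug = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        aug[k] /= aug[k, k]              # normalize the pivot row
        for i in range(n):
            if i != k:                   # eliminate the unknown from every other row
                aug[i] -= aug[i, k] * aug[k]
    return aug[:, -1]                    # the solution sits in the right-hand column

A = np.array([[3.0, 5, 2], [1, -2, -3], [2, 3, -1]])
b = np.array([8.0, -1, 1])
print(gauss_jordan(A, b))   # approximately [ 3. -1.  2.]
```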
Learning Activity
Lesson 2
A. LU Decomposition
LU decomposition methods separate the time-consuming elimination
of the matrix [A] from the manipulations of the right-hand side {B}. Thus,
once [A] has been “decomposed,” multiple right-hand-side vectors can be
evaluated in an efficient manner.
Interestingly, Gauss elimination itself can be expressed as an LU
decomposition.
Overview of LU Decomposition
Just as was the case with Gauss elimination, LU decomposition requires
pivoting to avoid division by zero. However, to simplify the following
description, we will defer the issue of pivoting until after the fundamental
approach is elaborated. In addition, the following explanation is limited to a
set of three simultaneous equations. The results can be directly extended to
n-dimensional systems.
Equation (1.a) can be arranged to give
[A]{X} − {B} = 0     Eq'n 2.a
Suppose that Eq. (2.a) could be expressed as an upper triangular system:
[ u11   u12   u13 ] [ x1 ]   [ d1 ]
[  0    u22   u23 ] [ x2 ] = [ d2 ]     Eq'n 2.b
[  0     0    u33 ] [ x3 ]   [ d3 ]
Recognize that this is similar to the manipulation that occurs in the first step
of Gauss elimination. That is, elimination is used to reduce the system to
upper triangular form. Equation (2.b) can also be expressed in matrix notation
and rearranged to give
[U]{X} − {D} = 0     Eq'n 2.c
Now, assume that there is a lower triangular matrix with 1's on the diagonal,
        [  1    0    0 ]
[L] =   [ l21   1    0 ]     Eq'n 2.d
        [ l31  l32   1 ]
that has the property that when Eq. (2.c) is premultiplied by it, Eq. (2.a) is
the result. That is,

[L]{[U]{X} − {D}} = [A]{X} − {B}     Eq'n 2.e

If this holds, it follows from the rules of matrix multiplication that

[L][U] = [A]     Eq'n 2.f

and

[L]{D} = {B}     Eq'n 2.g
A two-step strategy for obtaining solutions can be based on Eqs. (2.c), (2.f),
and (2.g):
1. LU decomposition step. [A] is factored or “decomposed” into lower [L]
and upper [U] triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for
a right-hand side {B}. This step itself consists of two steps. First, Eq.
(2.g) is used to generate an intermediate vector {D} by forward
substitution. Then, the result is substituted into Eq. (2.b), which can
be solved by back substitution for {X}. A sketch of this two-step strategy in code is given below.
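A minimal sketch of the two-step strategy in Python follows (the helper names lu_decompose and lu_solve are illustrative, and no pivoting is included, so a zero pivot would fail):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle-style LU decomposition without pivoting: A = L U, ones on L's diagonal."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # store the elimination factor f_ik
            U[i, k:] -= L[i, k] * U[k, k:]   # create the zero below the pivot
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = np.zeros(n)
    x = np.zeros(n)
    for i in range(n):                        # forward substitution: L d = b
        d[i] = b[i] - L[i, :i] @ d[:i]
    for i in range(n - 1, -1, -1):            # back substitution: U x = d
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[3.0, 5, 2], [1, -2, -3], [2, 3, -1]])
L, U = lu_decompose(A)
print(lu_solve(L, U, np.array([8.0, -1, 1])))   # approximately [ 3. -1.  2.]
```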
In the forward-elimination step of Gauss elimination, the first row is multiplied by
the factor f21 = a21/a11 and the result is subtracted from the second row to eliminate a21,
and by f31 = a31/a11 to eliminate a31 from the third row. Finally, the modified second
row is multiplied by

f32 = a'32 / a'22

and the result is subtracted from the third row to eliminate a'32.
Now suppose that we merely perform all these manipulations on the
matrix [A]. Clearly, if we do not want to change the equation, we also have
to do the same to the right-hand side {B}. But there is absolutely no reason
that we have to perform the manipulations simultaneously. Thus, we could
save the f’s and manipulate {B} later.
Where do we store the factors f21, f31, and f32? Recall that the whole
idea behind the elimination was to create zeros in a21, a31, and a32. Thus, we
can store f21 in a21, f31 in a31, and f32 in a32. After elimination, the [A]
matrix can therefore be written as

[ a11   a12    a13   ]
[ f21   a'22   a'23  ]     Eq'n 2.i
[ f31   f32    a''33 ]
This matrix, in fact, represents an efficient storage of the LU decomposition
of [A],
[A] → [L][U]     Eq'n 2.j

where

        [ a11   a12    a13   ]
[U] =   [  0    a'22   a'23  ]
        [  0     0     a''33 ]

and

        [  1    0    0 ]
[L] =   [ f21   1    0 ]
        [ f31  f32   1 ]
Example 3
Let's try to derive an LU decomposition based on the Gauss elimination
performed in Example 1.
Solution: (Remember that we swapped Row 2 and Row 3.)
        [ 3    5    2 ]
[A] =   [ 1   −2   −3 ]
        [ 2    3   −1 ]
After forward elimination, the following upper triangular matrix was
obtained:
        [ 3     5       2    ]
[U] =   [ 0   −11/3   −11/3  ]
        [ 0     0      −2    ]
The factors employed to obtain the upper triangular matrix can be
assembled into a lower triangular matrix. The elements a21 and a31 were
eliminated by using the factors

f21 = a21 / a11 = 1/3
f31 = a31 / a11 = 2/3

and the element a'32 was eliminated by using the factor

f32 = a'32 / a'22 = (−1/3) / (−11/3) = 1/11
Thus, the lower triangular matrix is
        [  1     0     0 ]
[L] =   [ 1/3    1     0 ]
        [ 2/3   1/11   1 ]
Consequently, the LU decomposition is
               [  1     0     0 ] [ 3     5       2    ]
[A] = [L][U] = [ 1/3    1     0 ] [ 0   −11/3   −11/3  ]
               [ 2/3   1/11   1 ] [ 0     0      −2    ]
This result can be verified by performing the multiplication [L][U], which recovers the original matrix:

         [ 3    5    2 ]
[L][U] = [ 1   −2   −3 ] = [A]
         [ 2    3   −1 ]

(In compact storage, the factors and the elements of [U] can be kept together in a single matrix, as in Eq'n 2.i:

[ 3      5       2    ]
[ 1/3  −11/3   −11/3  ]
[ 2/3   1/11    −2    ] )
Recall that the system being solved is

[ 3    5    2 ] [ x1 ]   [  8 ]
[ 1   −2   −3 ] [ x2 ] = [ −1 ]
[ 2    3   −1 ] [ x3 ]   [  1 ]
and that the forward-elimination phase of conventional Gauss elimination
resulted in
[ 3     5       2    ] [ x1 ]   [   8    ]
[ 0   −11/3   −11/3  ] [ x2 ] = [ −11/3  ]     Eq'n 2.k
[ 0     0      −2    ] [ x3 ]   [  −4    ]
The forward-substitution phase is implemented by applying Eq. (2.g), [L]{D} = {B}, to our
problem,
[  1     0     0 ] [ d1 ]   [  8 ]
[ 1/3    1     0 ] [ d2 ] = [ −1 ]
[ 2/3   1/11   1 ] [ d3 ]   [  1 ]

which can be solved by forward substitution for {D}ᵀ = [8  −11/3  −4]. This result is then
substituted into Eq. (2.b), [U]{X} = {D}:
[ 3     5       2    ] [ x1 ]   [   8    ]
[ 0   −11/3   −11/3  ] [ x2 ] = [ −11/3  ]
[ 0     0      −2    ] [ x3 ]   [  −4    ]
which can be solved by back substitution for the final solution,
      [  3 ]
{X} = [ −1 ]
      [  2 ]
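When a library is available, the same decompose-once, solve-many pattern can be used directly. The sketch below assumes SciPy's scipy.linalg.lu_factor and lu_solve routines; note that unit right-hand-side vectors give the columns of [A]⁻¹, which is the idea used in the next part:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 5, 2], [1, -2, -3], [2, 3, -1]])
lu, piv = lu_factor(A)                    # factor [A] once (with partial pivoting)

# Re-use the factorization for several right-hand-side vectors:
for b in ([8.0, -1, 1], [1.0, 0, 0], [0.0, 1, 0]):
    print(lu_solve((lu, piv), np.array(b)))
```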
Matrix Inversion
If a matrix [A] is square and nonsingular, there is another matrix [A]⁻¹, called the inverse
of [A], for which

[A][A]⁻¹ = [A]⁻¹[A] = [I]
Example 4. Employ LU decomposition to determine the matrix inverse for
the system from Example 3.
        [ 3    5    2 ]
[A] =   [ 1   −2   −3 ]
        [ 2    3   −1 ]
Recall that the decomposition resulted in the following lower and upper
triangular matrices:
        [ 3     5       2    ]           [  1     0     0 ]
[U] =   [ 0   −11/3   −11/3  ]    [L] =  [ 1/3    1     0 ]
        [ 0     0      −2    ]           [ 2/3   1/11   1 ]
Solution:
The first column of the matrix inverse can be determined by performing the
forward-substitution solution procedure with a unit vector (with 1 in the first
row) as the right-hand-side vector. Thus, Eq. (2.g), the lower-triangular
system, can be set up as
[  1     0     0 ] [ d1 ]   [ 1 ]
[ 1/3    1     0 ] [ d2 ] = [ 0 ]
[ 2/3   1/11   1 ] [ d3 ]   [ 0 ]
and solved with forward substitution (see Eq'n 1.c) for {D}ᵀ = [1  −1/3  −7/11].
This vector can then be used as the right-hand side of Eq. (2.b),
[ 3     5       2    ] [ x1 ]   [   1    ]
[ 0   −11/3   −11/3  ] [ x2 ] = [ −1/3   ]
[ 0     0      −2    ] [ x3 ]   [ −7/11  ]
which can be solved by back substitution for {X}ᵀ = [1/2  −5/22  7/22], the first
column of the matrix inverse. To determine the second column, Eq. (2.g) is formulated
with a unit vector that has 1 in the second row:
[  1     0     0 ] [ d1 ]   [ 0 ]
[ 1/3    1     0 ] [ d2 ] = [ 1 ]
[ 2/3   1/11   1 ] [ d3 ]   [ 0 ]
This can be solved by forward substitution for {D}ᵀ = [0  1  −1/11]. This vector can then be used as the
right-hand side of Eq. (2.b),
[ 3     5       2    ] [ x1 ]   [   0    ]
[ 0   −11/3   −11/3  ] [ x2 ] = [   1    ]
[ 0     0      −2    ] [ x3 ]   [ −1/11  ]
{X}ᵀ = [1/2  −7/22  1/22], which is the second column of the matrix,
          [  1/2    1/2    0 ]
[A]⁻¹ =   [ −5/22  −7/22   0 ]
          [  7/22   1/22   0 ]
To determine the third column, Eq. (2.g) is formulated as
[  1     0     0 ] [ d1 ]   [ 0 ]
[ 1/3    1     0 ] [ d2 ] = [ 0 ]
[ 2/3   1/11   1 ] [ d3 ]   [ 1 ]
This can be solved for {D}ᵀ = [0  0  1]. This vector can then be used as the right-
hand side of Eq. (2.b),
[ 3     5       2    ] [ x1 ]   [ 0 ]
[ 0   −11/3   −11/3  ] [ x2 ] = [ 0 ]
[ 0     0      −2    ] [ x3 ]   [ 1 ]
{X}ᵀ = [−1/2  1/2  −1/2], which is the third column of the matrix,
          [  1/2    1/2   −1/2 ]
[A]⁻¹ =   [ −5/22  −7/22   1/2 ]
          [  7/22   1/22  −1/2 ]
The validity of this result can be checked by verifying that [A][A]⁻¹ = [I].
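This check is easy to automate; the short sketch below (using NumPy, purely as an illustration) multiplies [A] by the inverse just obtained and confirms that the product is the identity to within round-off:

```python
import numpy as np

A = np.array([[3.0, 5, 2], [1, -2, -3], [2, 3, -1]])
A_inv = np.array([[ 1/2,   1/2, -1/2],
                  [-5/22, -7/22,  1/2],
                  [ 7/22,  1/22, -1/2]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True: [A][A]^-1 = [I]
```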
Alternatively, the inverse can be found by applying Gauss-Jordan elimination to the
augmented matrix [A | I]. After the first two unknowns have been eliminated and the
first two rows normalized, the augmented system is

[ 1   0   −1  |  2/11    5/11   0 ]
[ 0   1    1  |  1/11   −3/11   0 ]
[ 0   0   −2  | −7/11   −1/11   1 ]
The third row is then normalized by dividing it by -2:
[ 1   0   −1  |  2/11    5/11    0   ]
[ 0   1    1  |  1/11   −3/11    0   ]
[ 0   0    1  |  7/22    1/22   −1/2 ]
Finally, the x3 terms can be reduced from the first and the second equations to give
New Row 1 = (Row 3) (1) + (Row 1)
New Row 2 = (Row 3) (-1) + (Row 2)
[ 1   0   0  |   1/2     1/2   −1/2 ]
[ 0   1   0  |  −5/22   −7/22   1/2 ]
[ 0   0   1  |   7/22    1/22  −1/2 ]
So,
          [  1/2    1/2   −1/2 ]
[A]⁻¹ =   [ −5/22  −7/22   1/2 ]
          [  7/22   1/22  −1/2 ]
Multiply both sides of the equation [A]{X} = {B} by [A]⁻¹. We want [A]⁻¹[A]{X} = [A]⁻¹{B}:
[  1/2    1/2   −1/2 ] [ 3    5    2 ] [ x1 ]   [  1/2    1/2   −1/2 ] [  8 ]
[ −5/22  −7/22   1/2 ] [ 1   −2   −3 ] [ x2 ] = [ −5/22  −7/22   1/2 ] [ −1 ]
[  7/22   1/22  −1/2 ] [ 2    3   −1 ] [ x3 ]   [  7/22   1/22  −1/2 ] [  1 ]
Thus,
           [ 4 − 0.5 − 0.5        ]   [  3 ]
[A]⁻¹{B} = [ −20/11 + 7/22 + 1/2  ] = [ −1 ]
           [ 28/11 − 1/22 − 1/2   ]   [  2 ]
The solution is x1 = 3, x2 = −1, and x3 = 2.
Learning Activity
1. Use LU decomposition to determine the matrix inverse for the
following system. Do not use a pivoting strategy, and check your results
by verifying that [A][A]-1 = [I].
10x1 + 2x2 − x3 = 27
−3x1 − 6x2 + 2x3 = −61.5
x1 + x2 + 5x3 = −21.5
2. Solve the following system of equations using LU decomposition with
partial pivoting:
2x1 − 6x2 − x3 = −38
−3x1 − x2 + 7x3 = −34
−8x1 + x2 − 2x3 = −20
3. The following system of equations is designed to determine
concentrations (the c’s in g/m3) in a series of coupled reactors as a
function of the amount of mass input to each reactor (the right-hand
sides in g/day),
15c1 − 3c2 − c3 = 3800
−3c1 + 18c2 − 6c3 = 1200
−4c1 − c2 + 12c3 = 2350
(a) Determine the matrix inverse.
(b) Use the inverse to determine the solution.
(c) Determine how much the rate of mass input to reactor 3 must be
increased to induce a 10 g/m3 rise in the concentration of reactor 1.
(d) How much will the concentration in reactor 3 be reduced if the rate
of mass input to reactors 1 and 2 is reduced by 500 and 250 g/day,
respectively?
Lesson 3
A. Special Matrices
A banded matrix is a square matrix that has all elements equal to zero,
with the exception of a band centered on the main diagonal. Banded systems
are frequently encountered in engineering and scientific practice. For
example, they typically occur in the solution of differential equations.
1. Tridiagonal System
A tridiagonal system has nonzero elements only on the main diagonal and the two diagonals adjacent to it; for four equations it can be expressed generally as

[ f1   g1              ] [ x1 ]   [ r1 ]
[ e2   f2   g2         ] [ x2 ]   [ r2 ]
[      e3   f3   g3    ] [ x3 ] = [ r3 ]     Eq'n 3.a
[           e4   f4    ] [ x4 ]   [ r4 ]
Notice that we have changed our notation for the coefficients from a’s
and b’s to e’s, f’s, g’s, and r’s. This was done to avoid storing large numbers
of useless zeros in the square matrix of a’s. This space-saving modification is
advantageous because the resulting algorithm requires less computer
memory.
Example 6. Solve the following tridiagonal system with the Thomas algorithm:

[ 2.04   −1                  ] [ T1 ]   [  40.8 ]
[  −1    2.04   −1           ] [ T2 ]   [   0.8 ]
[         −1    2.04   −1    ] [ T3 ] = [   0.8 ]
[                −1    2.04  ] [ T4 ]   [ 200.8 ]

Solution:
Forward elimination (the decomposition) replaces the diagonal terms by
f2 = 1.550, f3 = 1.395, f4 = 1.323 (f1 = 2.04 is unchanged) and stores the factors
e2 = −0.490, e3 = −0.645, e4 = −0.717 in place of the sub-diagonal terms.
You can verify that this is correct by multiplying [L][U] to yield [A].
The forward substitution is implemented as

r2 = 0.8 − (−0.490)(40.8) = 20.8
r3 = 0.8 − (−0.645)(20.8) = 14.221
r4 = 200.8 − (−0.717)(14.221) = 210.996

so the transformed right-hand-side vector is

[  40.8   ]
[  20.8   ]
[  14.221 ]
[ 210.996 ]
which then can be used in conjunction with the [U] matrix to perform back
substitution and obtain the solution
T4 = 210.996 / 1.323 = 159.480

T3 = (14.221 − (−1)(159.480)) / 1.395 = 124.538

T2 = (20.800 − (−1)(124.538)) / 1.550 = 93.778

T1 = (40.800 − (−1)(93.778)) / 2.040 = 65.970
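A compact implementation of the Thomas algorithm operating on the banded storage vectors e, f, g, and r is sketched below (the function name thomas is illustrative, and the coefficient values are those reconstructed for Example 6):

```python
import numpy as np

def thomas(e, f, g, r):
    """Solve a tridiagonal system.  e: sub-diagonal (e[0] unused), f: diagonal,
    g: super-diagonal (g[-1] unused), r: right-hand side.  Arrays are overwritten."""
    n = len(f)
    # Decomposition and forward substitution
    for k in range(1, n):
        factor = e[k] / f[k - 1]
        f[k] -= factor * g[k - 1]
        r[k] -= factor * r[k - 1]
    # Back substitution
    x = np.zeros(n)
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

e = np.array([0.0, -1, -1, -1])
f = np.array([2.04, 2.04, 2.04, 2.04])
g = np.array([-1.0, -1, -1, 0])
r = np.array([40.8, 0.8, 0.8, 200.8])
print(thomas(e, f, g, r))   # approximately [65.97, 93.78, 124.54, 159.48]
```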
2. Cholesky Decomposition
A symmetric matrix is one where aij = aji for all i and j. In other
words, [A] = [A]T. Such systems occur commonly in both mathematical and
engineering problem contexts. They offer computational advantages because
only half the storage is needed and, in most cases, only half the computation
time is required for their solution.
Cholesky decomposition factors such a matrix into the product of a lower triangular
matrix and its transpose,

[A] = [L][L]ᵀ     Eq'n 3.b

That is, the resulting triangular factors are the transpose of each other.
The terms of Eq. (3.b) can be multiplied out and set equal to each other. The
result can be expressed simply by recurrence relations. For the kth row,

lki = (aki − Σ(j = 1 to i−1) lij lkj) / lii,   for i = 1, 2, …, k − 1     Eq'n 3.c

and

lkk = √(akk − Σ(j = 1 to k−1) lkj²)     Eq'n 3.d
Example 7. Apply Cholesky decomposition to the symmetric matrix

        [  6    15    55 ]
[A] =   [ 15    55   225 ]
        [ 55   225   979 ]
Solution:
For the first row (k = 1), Eq. (3.c) is skipped and Eq. (3.d) is employed to
compute

l11 = √a11 = √6 = 2.4495

For the second row (k = 2),

l21 = a21 / l11 = 15 / 2.4495 = 6.1237
l22 = √(a22 − l21²) = √(55 − (6.1237)²) = 4.1833

For the third row (k = 3), Eq. (3.c) gives (i = 1)

l31 = a31 / l11 = 55 / 2.4495 = 22.454

and (i = 2)

l32 = (a32 − l21 l31) / l22 = (225 − 6.1237(22.454)) / 4.1833 = 20.917

and Eq. (3.d) yields

l33 = √(a33 − l31² − l32²) = √(979 − (22.454)² − (20.917)²) = 6.1101

Thus, the Cholesky decomposition is

        [ 2.4495                    ]
[L] =   [ 6.1237   4.1833           ]
        [ 22.454   20.917   6.1101  ]
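The recurrence relations (3.c) and (3.d) translate almost directly into code. The following Python sketch (the function name cholesky is illustrative) reproduces the hand calculation above:

```python
import numpy as np

def cholesky(A):
    """Return the lower triangular [L] with A = L @ L.T (A symmetric, positive definite)."""
    A = A.astype(float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        for i in range(k):
            L[k, i] = (A[k, i] - L[i, :i] @ L[k, :i]) / L[i, i]   # Eq'n 3.c
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])          # Eq'n 3.d
    return L

A = np.array([[6.0, 15, 55], [15, 55, 225], [55, 225, 979]])
L = cholesky(A)
print(np.round(L, 4))            # matches the hand calculation above
print(np.allclose(L @ L.T, A))   # True
```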
A positive definite matrix is one for which the product {X}ᵀ[A]{X} is greater
than zero for all nonzero vectors {X}; Cholesky decomposition applies to symmetric,
positive definite matrices.

B. Gauss-Seidel
The Gauss-Seidel method is an iterative technique for solving the set of equations
[A]{X} = {B}. If the diagonal elements are all nonzero, each equation can be solved
for the unknown on its diagonal:

x1 = (b1 − a12 x2 − a13 x3) / a11     Eq'n 3.e
x2 = (b2 − a21 x1 − a23 x3) / a22     Eq'n 3.f
x3 = (b3 − a31 x1 − a32 x2) / a33     Eq'n 3.g
Now, we can start the solution process by choosing guesses for the x’s.
A simple way to obtain initial guesses is to assume that they are all zero.
These zeros can be substituted into Eq. (3.e), which can be used to calculate
a new value for 𝑥𝑥1 = 𝑏𝑏1 /𝑎𝑎11 . Then, we substitute this new value of 𝑥𝑥1 along
with the previous guess of zero for 𝑥𝑥3 into Eq. (3.f) to compute a new value
for 𝑥𝑥2 . The process is repeated for Eq. (3.g) to calculate a new estimate for
𝑥𝑥3 . Then we return to the first equation and repeat the entire procedure until
our solution converges closely enough to the true values. Convergence can be
checked using the criterion.
|εa,i| = | (xi^j − xi^(j−1)) / xi^j | × 100% < εs
for all 𝑖𝑖, where 𝑗𝑗 and 𝑗𝑗 − 1 are the present and previous iterations.
Example 8. Use the Gauss-Seidel method to obtain the solution of the system
3x1 − 0.1x2 − 0.2x3 = 7.85
0.1x1 + 7x2 − 0.3x3 = −19.3
0.3x1 − 0.2x2 + 10x3 = 71.4
Solution:
First, express the coefficients and the right-hand side in matrix form:

[  3    −0.1   −0.2 ] [ x1 ]   [   7.85 ]
[ 0.1    7     −0.3 ] [ x2 ] = [ −19.3  ]
[ 0.3   −0.2    10  ] [ x3 ]   [  71.4  ]
By assuming that x2 and x3 are zero, Eq. (3.e) can be used to compute

x1 = (7.85 + 0.1(0) + 0.2(0)) / 3 = 2.616667

This value, along with the assumed value of x3 = 0, can be substituted into Eq.
(3.f) to calculate

x2 = (−19.3 − 0.1(2.616667) + 0.3(0)) / 7 = −2.794524

The first iteration is completed by substituting the calculated values for x1
and x2 into Eq. (3.g) to yield

x3 = (71.4 − 0.3(2.616667) + 0.2(−2.794524)) / 10 = 7.005610

For the second and subsequent iterations, the same process is repeated
to compute new values of x1, x2, and x3:

x1 = (7.85 + 0.1(−2.794524) + 0.2(7.005610)) / 3 = 2.990557

x2 = (−19.3 − 0.1(2.990557) + 0.3(7.005610)) / 7 = −2.499625

x3 = (71.4 − 0.3(2.990557) + 0.2(−2.499625)) / 10 = 7.000291
By about the fifth iteration the computed values no longer change in the digits
shown, so the iteration has converged. Working in fraction form shows that the
iterates are approaching the exact solution of the system:

[ x1 ]   [  3   ]
[ x2 ] ≈ [ −2.5 ]
[ x3 ]   [  7   ]
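The iteration is easy to program. The sketch below (the function name gauss_seidel is illustrative; it assumes nonzero diagonal elements and nonzero iterates when forming the relative error) reproduces the behavior of Example 8:

```python
import numpy as np

def gauss_seidel(A, b, es=1e-5, max_iter=100):
    """Iterate x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii until the percent change < es."""
    n = len(b)
    x = np.zeros(n)                 # initial guesses of zero
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        # approximate percent relative error for every unknown
        if np.all(np.abs((x - x_old) / x) * 100 < es):
            break
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_seidel(A, b))   # approximately [ 3.  -2.5  7. ]
```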
Learning Activity
3. Use the Gauss-Seidel method to solve the following system until the
percent relative error falls below εs = 5%:
10x1 + 2x2 − x3 = 27
−3x1 − 6x2 + 2x3 = −61.5
x1 + x2 + 5x3 = −21.5
Summative Test
MODULE SUMMARY