Gauss Elimination and Gauss-Jordan Operations Count
CHAPTER OBJECTIVES
• Knowing how to solve small sets of linear
equations with the graphical method and
Cramer’s rule.
• Understanding how to implement forward
elimination and back substitution as in
Gauss elimination.
• Understanding how to count flops to
evaluate the efficiency of an algorithm.
• Recognizing how the banded structure of a
tridiagonal system can be exploited to
obtain extremely efficient solutions.
SOLVING SMALL NUMBERS OF EQUATIONS
Determinants and Cramer’s Rule
• For the second-order case, the determinant can be computed as
D = a11 a22 − a12 a21
or, for the third-order case, by expansion in terms of minors (the 2 × 2 determinants formed by deleting the row and column of a given element):
D = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)
Determinants and Cramer’s Rule
• EXAMPLE 9.1
• Compute values for the determinants of the systems represented in Figs. 9.1 and 9.2.
• In the foregoing example, the singular systems had zero determinants. Additionally, the
results suggest that the system that is almost singular (Fig. 9.2c) has a determinant that
is close to zero.
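• The point can be checked numerically. Below is a minimal MATLAB sketch using the built-in det function; the coefficients are hypothetical stand-ins, not the actual systems of Figs. 9.1 and 9.2: a singular system yields a zero determinant, and a nearly parallel pair of equations yields a determinant close to zero.
% Hypothetical 2 x 2 systems illustrating singular and
% near-singular determinants (coefficients are illustrative only)
Asing = [-0.5 1; -1   2];      % second row is a multiple of the first
Aill  = [-0.5 1; -0.52 1];     % slopes nearly equal: almost singular
det(Asing)                     % returns 0 (singular)
det(Aill)                      % returns 0.02 (close to zero)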
Determinants and Cramer’s Rule
• This rule states that each unknown in a system of linear algebraic equations may be
expressed as a fraction of two determinants with denominator D and with the
numerator obtained from D by replacing the column of coefficients of the unknown in
question by the constants b1, b2, . . . , bn. For example, for three equations, x1 would be computed as
        | b1  a12  a13 |
        | b2  a22  a23 |
        | b3  a32  a33 |
x1  =  ------------------
               D
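• As a concrete illustration, here is a minimal MATLAB sketch of Cramer's rule for three equations using the built-in det function. The matrix A and vector b are made-up test values (not from the text) chosen so the exact solution is x1 = x2 = x3 = 1.
% Cramer's rule for a 3-by-3 system
A = [1 2 3; 4 5 6; 7 8 10];   % illustrative, nonsingular (D = -3)
b = [6; 15; 25];              % chosen so the solution is all ones
D = det(A);
x = zeros(3,1);
for i = 1:3
    Ai = A;
    Ai(:,i) = b;              % replace ith column with the constants
    x(i) = det(Ai) / D;       % Cramer's rule
end
x                             % displays [1; 1; 1]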
NAIVE GAUSS ELIMINATION
• Forward Elimination of Unknowns. In each elimination step, the equation used to eliminate an unknown from the equations below it is called the pivot equation, and its leading coefficient is called the pivot element.
• The procedure can be continued using the remaining pivot equations. The final manipulation in the sequence is to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this point, the system will have been transformed to an upper triangular system:
a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
         a22' x2 + a23' x3 + . . . + a2n' xn = b2'
                  a33'' x3 + . . . + a3n'' xn = b3''
                                     . . .
                            ann^(n−1) xn = bn^(n−1)
where the primes and superscripts indicate how many times a coefficient has been modified.
• Back Substitution. The last of the triangularized equations involves only xn and can be solved directly:
xn = bn^(n−1) / ann^(n−1)
• This result can be back-substituted into the (n − 1)th equation to solve for xn−1. The procedure, which is repeated to evaluate the remaining x's, can be represented by the following formula:
xi = ( bi^(i−1) − Σ[j=i+1..n] aij^(i−1) xj ) / aii^(i−1)    for i = n − 1, n − 2, . . . , 1
NAIVE GAUSS ELIMINATION
• EXAMPLE 9.3
• Use Gauss elimination to solve
3 x1 − 0.1 x2 − 0.2 x3 = 7.85
0.1 x1 + 7 x2 − 0.3 x3 = −19.3
0.3 x1 − 0.2 x2 + 10 x3 = 71.4
• Although there is a slight round-off error, the results are very close to the exact solution
of x1 = 3, x2 = −2.5, and x3 = 7. This can be verified by substituting the results into the
original equation set.
MATLAB M-file: GaussNaive
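• The listing itself is not reproduced on this slide. A minimal sketch in the spirit of the text's GaussNaive function (the exact interface is assumed to be x = GaussNaive(A, b)) is:
function x = GaussNaive(A, b)
% GaussNaive: naive Gauss elimination without pivoting
%   x = GaussNaive(A, b): solves A*x = b by forward
%   elimination and back substitution
[m, n] = size(A);
if m ~= n, error('Matrix A must be square'); end
nb = n + 1;
Aug = [A b];                       % augmented matrix
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = Aug(i,k) / Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb) - factor * Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb) / Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n) * x(i+1:n)) / Aug(i,i);
end
• For instance, x = GaussNaive([3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10], [7.85; -19.3; 71.4]) reproduces the solution of Example 9.3.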
NAIVE GAUSS ELIMINATION
• Operation Counting
• The execution time of Gauss elimination depends on the number of floating-point
operations (or flops) involved in the algorithm. On modern computers using math
coprocessors, the time consumed to perform addition/subtraction and
multiplication/division is about the same. Therefore, totaling up these operations
provides insight into which parts of the algorithm are most time consuming and how
computation time increases as the system gets larger.
• For every one of these iterations of the inner loop, there is one division to calculate the factor. The next line then performs a multiplication and a subtraction for each column element from 2 to nb. Because nb = n + 1, going from 2 to nb results in n multiplications and n subtractions. Together with the single division, this amounts to n + 1 multiplication/divisions and n addition/subtractions for every iteration of the inner loop. The total for the first pass through the outer loop is therefore (n − 1)(n + 1) multiplication/divisions and (n − 1)(n) addition/subtractions. In general, the kth pass through the outer loop involves (n − k)(n + 2 − k) multiplication/divisions and (n − k)(n + 1 − k) addition/subtractions.
NAIVE GAUSS ELIMINATION
• Summing over all n − 1 passes of the outer loop, the total addition/subtraction flops for the elimination stage are
Σ[k=1..n−1] (n − k)(n + 1 − k)
or, after applying the identities Σk = n(n + 1)/2 and Σk^2 = n(n + 1)(2n + 1)/6,
n^3/3 + O(n)
NAIVE GAUSS ELIMINATION
• A similar analysis for the multiplication/division flops yields
Σ[k=1..n−1] (n − k)(n + 2 − k) = n^3/3 + O(n^2)
• Thus, the total number of flops for forward elimination is equal to 2n^3/3 plus an additional component proportional to terms of order n^2 and lower. The result is written in this way because as n gets large, the O(n^2) and lower terms become negligible. We are therefore justified in concluding that for large n, the effort involved in forward elimination converges on 2n^3/3.
NAIVE GAUSS ELIMINATION
• Because only a single loop is used, back substitution is much simpler to evaluate. The number of addition/subtraction flops is equal to n(n − 1)/2. Because of the extra division prior to the loop, the number of multiplication/division flops is n(n + 1)/2. These can be added to arrive at a total of
n(n − 1)/2 + n(n + 1)/2 = n^2
• Adding this to the elimination total shows that the overall effort in naive Gauss elimination grows as 2n^3/3 + O(n^2). Two useful general conclusions can be drawn from this analysis:
1. As the system gets larger, the computation time increases greatly. As shown in Table 9.1, the number of flops increases nearly three orders of magnitude for every order-of-magnitude increase in the number of equations (see the short script after this list).
2. Most of the effort is incurred in the elimination step. Thus, efforts to make the method
more efficient should probably focus on this step.
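• These conclusions can be verified by evaluating the operation counts derived above: summing the two elimination totals exactly gives 2n^3/3 + n^2/2 − 7n/6 flops, and back substitution adds n^2. The short MATLAB script below (a sketch, not reproduced from the text) tabulates these in the spirit of Table 9.1, along with the share contributed by the 2n^3/3 term.
% Tabulate flop counts for naive Gauss elimination
n = [10 100 1000]';
elim  = 2*n.^3/3 + n.^2/2 - 7*n/6;   % forward elimination
back  = n.^2;                        % back substitution
total = elim + back;
pct   = 100 * (2*n.^3/3) ./ total;   % share of the 2n^3/3 term
fprintf('%6d %14.0f %10.0f %14.0f %8.2f%%\n', [n elim back total pct]')
• For n = 10 the total is 805 flops; for n = 1000 it is roughly 6.68 × 10^8, with the 2n^3/3 term accounting for nearly all of it.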
PIVOTING
• The primary reason that the foregoing technique is called "naive" is that during both the
elimination and the back-substitution phases, it is possible that a division by zero can
occur. For example, if we use naive Gauss elimination to solve
• the normalization of the first row would involve division by a11 = 0. Problems may also
arise when the pivot element is close, rather than exactly equal, to zero because if the
magnitude of the pivot element is small compared to the other elements, then round-
off errors can be introduced.
PIVOTING
• Therefore, before each row is normalized, it is advantageous to determine the
coefficient with the largest absolute value in the column below the pivot element. The
rows can then be switched so that the largest element is the pivot element. This is
called partial pivoting.
• If columns as well as rows are searched for the largest element and then switched, the
procedure is called complete pivoting. Complete pivoting is rarely used because most
of the improvement comes from partial pivoting. In addition, switching columns
changes the order of the x’s and, consequently, adds significant and usually unjustified
complexity to the computer program.
PIVOTING
• EXAMPLE 9.4
• Use Gauss elimination to solve
0.0003 x1 + 3.0000 x2 = 2.0001
1.0000 x1 + 1.0000 x2 = 1.0000
• Note that in this form the first pivot element, a11 = 0.0003, is very close to zero. Then repeat the computation, but partial pivot by reversing the order of the equations. The exact solution is x1 = 1/3 and x2 = 2/3.
• Note how the solution for x1 is highly dependent on the number of significant figures.
This is because in Eq. (E9.4.1), we are subtracting two almost-equal numbers.
PIVOTING
• On the other hand, if the equations are solved in reverse order, the row with the larger
pivot element is normalized. The equations are
• This case is much less sensitive to the number of significant figures in the computation. Thus, a pivoting strategy is much more satisfactory.
MATLAB M-file: GaussPivot
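• The listing itself is not reproduced on this slide. A minimal sketch in the spirit of the text's GaussPivot function (interface assumed to match GaussNaive) is:
function x = GaussPivot(A, b)
% GaussPivot: Gauss elimination with partial pivoting
%   x = GaussPivot(A, b): solves A*x = b
[m, n] = size(A);
if m ~= n, error('Matrix A must be square'); end
nb = n + 1;
Aug = [A b];                       % augmented matrix
for k = 1:n-1
    % partial pivoting: find the row with the largest absolute
    % value in column k, on or below the diagonal, and swap it up
    [~, imax] = max(abs(Aug(k:n, k)));
    ipr = imax + k - 1;
    if ipr ~= k
        Aug([k, ipr], :) = Aug([ipr, k], :);
    end
    % forward elimination
    for i = k+1:n
        factor = Aug(i,k) / Aug(k,k);
        Aug(i,k:nb) = Aug(i,k:nb) - factor * Aug(k,k:nb);
    end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb) / Aug(n,n);
for i = n-1:-1:1
    x(i) = (Aug(i,nb) - Aug(i,i+1:n) * x(i+1:n)) / Aug(i,i);
end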
TRIDIAGONAL SYSTEMS
• Notice that we have changed our notation for the coefficients from a's and b's to e's, f's, g's, and r's. This was done to avoid storing large numbers of useless zeros in the square matrix of a's. This space-saving modification is advantageous because the resulting algorithm requires less computer memory.
TRIDIAGONAL SYSTEMS
• EXAMPLE 9.5
• Solve the following tridiagonal system:
• As with Gauss elimination, the first step involves transforming the matrix to upper triangular form. This is done by multiplying the first equation by the factor e2/f1 and subtracting the result from the second equation. This creates a zero in place of e2 and transforms the other coefficients to new values,
f2' = f2 − (e2/f1) g1        r2' = r2 − (e2/f1) r1
TRIDIAGONAL SYSTEMS
• Notice that g2 is unmodified because the element above it in the first row is zero. After
performing a similar calculation for the third and fourth rows, the system is transformed
to the upper triangular form.
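• The same forward-elimination and back-substitution pattern, written for the e, f, g, r notation, gives the very efficient tridiagonal (Thomas) algorithm. A minimal MATLAB sketch follows; the function name Tridiag and the convention that e, f, g, and r are column vectors are assumptions, not the text's exact listing.
function x = Tridiag(e, f, g, r)
% Tridiag: tridiagonal system solver via the Thomas algorithm
%   e = subdiagonal, f = diagonal, g = superdiagonal, r = right-hand side
n = length(f);
% forward elimination: zero out each subdiagonal element in turn
for k = 2:n
    factor = e(k) / f(k-1);
    f(k) = f(k) - factor * g(k-1);
    r(k) = r(k) - factor * r(k-1);
end
% back substitution
x = zeros(n,1);
x(n) = r(n) / f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k) * x(k+1)) / f(k);
end
• Because each pass handles a single subdiagonal element, this requires only O(n) flops, in contrast to the 2n^3/3 of full Gauss elimination, which is the extremely efficient banded solution promised in the chapter objectives.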