
Applied Engineering Analysis
- slides for class teaching*

Chapter 4
Linear Algebra and Matrices

* Based on the book "Applied Engineering Analysis" by Tai-Ran Hsu,
published by John Wiley & Sons, 2018 (ISBN 9781119071204).

© Tai-Ran Hsu
Chapter Learning Objectives

• Linear algebra and its applications

• Forms of linear functions and linear equations

• Expression of simultaneous linear equations in matrix forms

• Distinction between matrices and determinants

• Different forms of matrices for different applications

• Transposition of matrices

• Addition, subtraction and multiplication of matrices

• Inversion of matrices

• Solution of simultaneous equations using matrix inversion method

• Solution of large numbers of simultaneous equations using Gaussian elimination method

• Eigenvalues and eigenfunctions in engineering analysis


4.1 Introduction to Linear Algebra and Matrices

Linear algebra is concerned mainly with:

  Systems of linear equations,
  Matrices,
  Vector spaces,
  Linear transformations,
  Eigenvalues and eigenvectors.

Linear and Nonlinear Functions and Equations:

Linear functions and equations, for example:

  -4x1 + 3x2 - 2x3 + x4 = 0

where x1, x2, x3 and x4 are unknown quantities.

Examples of nonlinear equations:

  -4x1 + 3x2 - 2x3^2 + x3^4 = 0
  x^2 + y^2 = 1
  xy = 1
  sin x = y

Simultaneous linear equations:

  8x1 + 4x2 + x3 = 12
  2x1 + 6x2 - x3 = 3
  x1 - 2x2 + x3 = 2

where x1, x2 and x3 are unknown quantities.
4.2 Determinants and Matrices

Both determinants and matrices are logical and convenient representations of large sets of
real numbers, variables and vectors involved in engineering analyses.

These large sets of real numbers, variables and vector quantities are arranged in arrays of
rows and columns:

  a11  a12  a13  ...  a1n
  a21  a22  a23  ...  a2n
  a31  a32  a33  ...  a3n
  ...  ...  ...  ...  ...
  am1  am2  am3  ...  amn

in which a11, a12, ..., amn represent the data, with the first subscript i = 1,2,3,...,m
indicating the row number and the second subscript j = 1,2,3,...,n indicating the column number.
4.2 Determinants and Matrices – Cont’d

There are different ways to express determinants and matrices, as shown below:

Determinant A is expressed by placing its elements between two vertical bars:

            | a11  a12  a13  ...  a1n |
            | a21  a22  a23  ...  a2n |
  A = |aij| = | ...  ...  ...  ...  ... |
            | am1  am2  am3  ...  amn |

whereas matrix A is expressed by placing its elements in square brackets:

                [ a11  a12  a13  ...  a1n ]
                [ a21  a22  a23  ...  a2n ]
  [A] = [aij] = [ ...  ...  ...  ...  ... ]
                [ am1  am2  am3  ...  amn ]
Difference between determinants and matrices

The same data set arranged as a "determinant" can be evaluated to a single number, or a scalar
quantity.

Matrices cannot be evaluated to single numbers or variables.

Matrices represent arrays of data, and they remain arrays through all mathematical operations
in engineering analyses.

Evaluation of determinants:

A determinant is evaluated by sequential reduction in size. For example, a 2x2 determinant is
reduced to the size 2-1=1, a single number, as in Example 4.1, whereas a 3x3 determinant is
reduced by two consecutive reductions to reach a single value, as illustrated in Example 4.2.
A general rule for the size-reduction process is the following formula:

  C^n = [cij^n] = (-1)^(i+j) |cij^(n-1)|     (4.9a)

where the superscript n denotes the reduction step number. Determinant C^n is determinant A
after the n-th reduction in size. The elements cij^n are obtained from the determinants
that exclude the elements in the i-th row and j-th column of the previous form of the determinant.
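As a numerical illustration of this size-reduction rule, the following short Python sketch
(added to these notes for illustration, not part of the book) evaluates a determinant by
recursive cofactor expansion along the first row; numpy is used to hold the arrays and to
cross-check the result:

import numpy as np

def det_by_cofactors(A):
    """Evaluate |A| by recursive size reduction (cofactor expansion
    along the first row), in the spirit of Equation (4.9a)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]          # a 1x1 determinant is the element itself
    total = 0.0
    for j in range(n):
        # minor: exclude row 0 and column j of the previous determinant
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)
    return total

A = [[1, 2, 3], [0, -1, 4], [-2, 5, -3]]
print(det_by_cofactors(A))      # -39.0, the matrix of Example 4.8 later on
print(np.linalg.det(A))         # cross-check with numpy's built-in routine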

Matrices in engineering analysis:

As mentioned before, matrices cannot be evaluated to a single number. Rather, they
always remain in the form of matrices.
4.3 Different Forms of Matrices

4.3.1 Rectangular matrices:
The general form of rectangular matrices is shown below:

                [ a11  a12  a13  ...  a1n ]
                [ a21  a22  a23  ...  a2n ]
  [A] = [aij] = [ ...  ...  ...  ...  ... ]     (4.11)
                [ am1  am2  am3  ...  amn ]

with the "elements" of this matrix designated by aij, where the first subscript i indicates the
row number and the second subscript j indicates the column number.

Rectangular matrices have the number of rows m ≠ the number of columns n.

4.3.2 Square matrices:

This type of matrix has an equal number of rows and columns (m = n) and is common in
engineering analysis. Following is a typical square matrix of size 3x3:

        [ a11  a12  a13 ]
  [A] = [ a21  a22  a23 ]     (4.12)
        [ a31  a32  a33 ]


4.3.3 Row matrices:
In this case, the total number of rows is m = 1, with the total number of columns = n:

  [A] = [ a11  a12  a13  ...  a1n ]     (4.13)

4.3.4 Column matrices:
These matrices have only one column, i.e. n = 1, but m rows:

        { a11 }
        { a21 }
  {A} = { a31 }     (4.14)
        { ... }
        { am1 }

Column matrices are used to express the components of vector quantities, such as a force vector:

        { Fx }
  {F} = { Fy }
        { Fz }

with Fx, Fy and Fz being the components of the force vector along the x-, y- and z-coordinates
respectively.

4.3.5 Upper triangular matrices:
We realize that all SQUARE matrices have a "diagonal" line running through the elements,
starting from the element in the first row and first column. An upper triangular matrix has all
elements below this diagonal equal to zero, as illustrated below for a 3x3 matrix with
a21 = a31 = a32 = 0:

        [ a11  a12  a13 ]
  [A] = [ 0    a22  a23 ]
        [ 0    0    a33 ]
4.3.6 Lower triangular matrices:
This is the opposite case to the upper triangular matrix, in which all elements above the
diagonal line are zero, as shown below for a 3x3 square matrix:

        [ a11  0    0   ]
  [A] = [ a21  a22  0   ]     (4.16)
        [ a31  a32  a33 ]

4.3.7 Diagonal matrices:
In these matrices, the only non-zero elements are those on the diagonal. An example of a
diagonal matrix of size 4x4 is shown below:

        [ a11  0    0    0   ]
  [A] = [ 0    a22  0    0   ]     (4.17)
        [ 0    0    a33  0   ]
        [ 0    0    0    a44 ]

4.3.8 Unity matrices:
This type of matrix is similar to a diagonal matrix, except that the non-zero elements on the
diagonal all have the value of unity, i.e. "1.0":

        [ 1  0  0  ...  0 ]
  [I] = [ 0  1  0  ...  0 ]
        [ 0  0  1  ...  0 ]

Unity matrices have the following useful property:

  [A][I] = [I][A] = [A]     (4.19b)
4.4 Transposition of Matrices
Transposition of a matrix [A] is often required in engineering analysis. It is designated by [A]^T.

Transposition of matrix [A] is carried out by interchanging the subscripts that define the
locations of the elements in matrix [A], i.e. [aij]^T = [aji].

Following are three such examples:

Case 1: Transposing a column matrix into a row matrix:

  { a11 }^T
  { a21 }
  { a31 }  = [ a11  a21  a31  ...  am1 ]
  { ... }
  { am1 }

Case 2: Transposing a rectangular matrix into another rectangular matrix:

  [ a11  a12  a13 ]^T   [ a11  a21 ]
  [ a21  a22  a23 ]   = [ a12  a22 ]
                        [ a13  a23 ]

Case 3: Transposing a square matrix into another square matrix, by mirroring the elements
about the diagonal of the square matrix:

        [ a11  a12  a13 ]           [ a11  a21  a31 ]
  [A] = [ a21  a22  a23 ]   [A]^T = [ a12  a22  a32 ]
        [ a31  a32  a33 ]           [ a13  a23  a33 ]

  (a) Original matrix       (b) Transposed matrix
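The three cases can be checked in a few lines of numpy (an illustration added to these notes,
using the .T attribute; example values are arbitrary):

import numpy as np

col = np.array([[1.], [2.], [3.]])   # a 3x1 column matrix
print(col.T)                         # Case 1: 1x3 row matrix [[1. 2. 3.]]

R = np.array([[1., 2., 3.],
              [4., 5., 6.]])         # a 2x3 rectangular matrix
print(R.T.shape)                     # Case 2: (3, 2) - rows and columns interchange

S = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
# Case 3: elements mirror about the diagonal, i.e. [aij]^T = [aji]
assert S.T[0, 2] == S[2, 0]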
4.5 Matrix Algebra
We are aware that matrices are expressions of arrays of numbers or variables, not single
numbers. As such, addition/subtraction and multiplication of matrices need to
follow certain rules.

4.5.1 Addition and subtraction of matrices:

Addition or subtraction of two matrices requires that both matrices have the same size,
i.e., an equal number of rows and columns:

  [A] ± [B] = [C]   with   cij = aij ± bij     (4.20)

in which aij, bij and cij are the elements of the matrices [A], [B] and [C] respectively.

4.5.2 Multiplication of a matrix by a scalar quantity α:

  α [C] = [α cij]     (4.21)

4.5.3 Multiplication of two matrices:
Multiplication of two matrices requires the satisfaction of the following condition:
the total number of columns in the first matrix = the total number of rows in the second matrix.
Mathematically, we must have:

  [C] = [A] x [B]     (4.22)
  sizes: (m x p) = (m x n)(n x p)

in which the notation shown in parentheses below the matrices in Equation (4.22)
denotes the number of rows and columns in each of these matrices.
The recurrence relationship in Equation (4.23) may be used to determine the
elements in the product matrix [C]:

  cij = ai1 b1j + ai2 b2j + ... + ain bnj     (4.23)

where i = 1,2,...,m and j = 1,2,...,p.
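The recurrence relation (4.23) translates directly into triple-loop code. The sketch below
(an added illustration; a real analysis would call an optimized routine such as numpy's @
operator) implements cij = Σk aik bkj and checks the size condition of Equation (4.22):

import numpy as np

def matmul(A, B):
    """Multiply [C] = [A][B] using the recurrence cij = sum_k aik * bkj."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    m, n = A.shape
    n2, p = B.shape
    if n != n2:
        raise ValueError("columns of [A] must equal rows of [B]")
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):           # Equation (4.23)
                C[i, j] += A[i, k] * B[k, j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert np.allclose(matmul(A, B), np.array(A) @ np.array(B))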
4.5.3 Multiplication of two matrices - Cont’d
Following are four (4) examples of multiplication of matrices.

Example 4.4
Multiply two matrices [A] and [B] defined as:

        [ a11  a12  a13 ]         [ b11  b12  b13 ]
  [A] = [ a21  a22  a23 ]   [B] = [ b21  b22  b23 ]
        [ a31  a32  a33 ]         [ b31  b32  b33 ]

Solution:

  [C] = [A][B]

        [ a11 b11 + a12 b21 + a13 b31   a11 b12 + a12 b22 + a13 b32   a11 b13 + a12 b23 + a13 b33 ]
      = [ a21 b11 + a22 b21 + a23 b31   a21 b12 + a22 b22 + a23 b32   a21 b13 + a22 b23 + a23 b33 ]
        [ a31 b11 + a32 b21 + a33 b31   a31 b12 + a32 b22 + a33 b32   a31 b13 + a32 b23 + a33 b33 ]

Example 4.5:
Multiply the following rectangular matrix and a column matrix.

Solution:
We check that the number of columns of the first matrix equals the number of rows of the
second matrix. We may thus have the following multiplication:

                                    { x1 }
  {y} = [C]{x} = [ c11  c12  c13 ]  { x2 } = { c11 x1 + c12 x2 + c13 x3 } = { y1 }
                 [ c21  c22  c23 ]  { x3 }   { c21 x1 + c22 x2 + c23 x3 }   { y2 }
4.5.3 Multiplication of two matrices - Cont’d
Example 4.6:

This example shows the difference in the results of multiplying two matrices in
different ORDER.

Case A: Multiplying a row matrix by a column matrix results in a scalar quantity:

                      { b11 }
  [ a11  a12  a13 ]   { b21 } = a11 b11 + a12 b21 + a13 b31
                      { b31 }

Case B: Multiplying a column matrix by a row matrix results in a square matrix!

  { a11 }                       [ a11 b11  a11 b12  a11 b13 ]
  { a21 }  [ b11  b12  b13 ] =  [ a21 b11  a21 b12  a21 b13 ]   (a square matrix)
  { a31 }                       [ a31 b11  a31 b12  a31 b13 ]

Example 4.7:

We show that multiplying a square matrix by a column matrix results in another
column matrix:

  [ a11  a12  a13 ] { x }   { a11 x + a12 y + a13 z }
  [ a21  a22  a23 ] { y } = { a21 x + a22 y + a23 z }   (a column matrix)
  [ a31  a32  a33 ] { z }   { a31 x + a32 y + a33 z }
4.5.5 Additional rules on multiplication of matrices:

• Distributive law: [A]([B] + [C]) = [A][B] + [A][C]

• Associative law: [A]([B][C]) = ([A][B])[C]

• Caution: A different order of multiplication of two matrices will produce a
  different result, i.e. the order of matrices in a multiplication is very
  IMPORTANT. Always remember the following relation:

  [A][B] ≠ [B][A]

• Product of two transposed matrices: ([A][B])^T = [B]^T [A]^T
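Both the caution and the transposed-product rule are easy to verify numerically; a small
sketch with arbitrary example matrices (added for illustration):

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

print(np.allclose(A @ B, B @ A))          # False: [A][B] != [B][A]
print(np.allclose((A @ B).T, B.T @ A.T))  # True: ([A][B])^T = [B]^T [A]^T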
4.5.4 Matrix Representation of Simultaneous Equations
Matrix operations are powerful tools in modern-day engineering analysis. They are
widely used in solving large numbers of simultaneous equations using digital
computers. Following is how matrices may be used to develop algorithms for the
solution of n simultaneous equations:

  [ a11  a12  a13  ...  a1n ] { x1 }   { r1 }
  [ a21  a22  a23  ...  a2n ] { x2 }   { r2 }
  [ ...  ...  ...  ...  ... ] { .. } = { .. }
  [ an1  an2  an3  ...  ann ] { xn }   { rn }

from which we may conveniently express these simultaneous linear equations in the
following simplified form:

  [A]{x} = {r}     (4.25)

where matrix [A] is usually referred to as the "coefficient matrix," {x} is the "unknown
matrix," and {r} is the "resultant matrix."

Example: The matrix equation:

  [ 8   4   1 ] { x1 }   { 12 }
  [ 2   6  -1 ] { x2 } = {  3 }
  [ 1  -2   1 ] { x3 }   {  2 }

represents the 3 simultaneous equations:

  8x1 + 4x2 + x3 = 12
  2x1 + 6x2 - x3 = 3
  x1 - 2x2 + x3 = 2

where x1, x2 and x3 are the unknowns to be solved from these 3 simultaneous equations.
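In code, the compact form [A]{x} = {r} maps directly onto a coefficient array and a
right-hand-side vector. The sketch below (an added illustration) solves the example
system above with numpy's general-purpose solver:

import numpy as np

A = np.array([[8., 4., 1.],
              [2., 6., -1.],
              [1., -2., 1.]])
r = np.array([12., 3., 2.])

x = np.linalg.solve(A, r)   # solves [A]{x} = {r}
print(x)                    # [1.  0.5 2. ], i.e. x1 = 1, x2 = 0.5, x3 = 2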
4.6 Matrix Inversion [A]^-1
Since matrices are used to represent ARRAYS of numbers or variables in engineering
analysis (not single numbers or variables), there is no such thing as division of
two matrices. The closest thing to "division" in matrix algebra is matrix inversion. We
define the inverse of matrix [A], i.e. [A]^-1, by:

  [A][A]^-1 = [A]^-1[A] = [I]     (4.26)

where [I] is a unity matrix defined by Equation (4.18) on p. 125.

One must note that inversion of a matrix [A] is possible only if the equivalent
determinant of [A] is non-zero, i.e. |A| ≠ 0.
The matrix [A] is called a "singular matrix" if |A| = 0.

Following are the 4 steps to invert the matrix [A]:
Step 1: Evaluate the equivalent determinant of the matrix [A], and make sure that |A| ≠ 0.
Step 2: If the elements of matrix [A] are aij, we may determine the elements of the
        co-factor matrix [C] to be cij = (-1)^(i+j) |A'|, in which |A'| is the equivalent
        determinant of a matrix [A'] that has all the elements of [A] excluding those in
        the i-th row and j-th column.
Step 3: Transpose the co-factor matrix from [C] to [C]^T following the procedure outlined
        in Section 4.4 on p. 125.
Step 4: The inverse matrix [A]^-1 of matrix [A] may be established by the following
        expression:

  [A]^-1 = [C]^T / |A|     (4.28)
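The four steps translate into a short routine. The sketch below (an illustration added to
these notes, not the book's code) implements Steps 1-4 with numpy and checks
[A][A]^-1 = [I] on the matrix of Example 4.8 on the next slide:

import numpy as np

def invert(A):
    """Invert [A] by the 4-step co-factor method of Section 4.6."""
    A = np.asarray(A, dtype=float)
    detA = np.linalg.det(A)               # Step 1: equivalent determinant
    if np.isclose(detA, 0.0):
        raise ValueError("[A] is singular: |A| = 0")
    n = A.shape[0]
    C = np.zeros_like(A)                  # Step 2: co-factor matrix [C]
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / detA                     # Steps 3 and 4: [A]^-1 = [C]^T / |A|

A = np.array([[1., 2., 3.], [0., -1., 4.], [-2., 5., -3.]])
Ainv = invert(A)
assert np.allclose(A @ Ainv, np.eye(3))   # verifies Equation (4.26)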
Example 4.8 (p. 130)

We will invert the following 3x3 matrix [A] following the 4 steps specified in the preceding
slide:

        [  1   2   3 ]
  [A] = [  0  -1   4 ]
        [ -2   5  -3 ]

Step 1: Evaluate the equivalent determinant of [A]:

        |  1   2   3 |
  |A| = |  0  -1   4 | = 1 | -1  4 | - 2 |  0  4 | + 3 |  0  -1 | = -39 (≠ 0)
        | -2   5  -3 |     |  5 -3 |     | -2 -3 |     | -2   5 |

Step 2: Determine the elements of the co-factor matrix [C]:

  c11 = (-1)^(1+1) [(-1)(-3) - (4)(5)]  = -17
  c12 = (-1)^(1+2) [(0)(-3) - (4)(-2)]  = -8
  c13 = (-1)^(1+3) [(0)(5) - (-1)(-2)]  = -2
  c21 = (-1)^(2+1) [(2)(-3) - (3)(5)]   = 21
  c22 = (-1)^(2+2) [(1)(-3) - (3)(-2)]  = 3
  c23 = (-1)^(2+3) [(1)(5) - (2)(-2)]   = -9
  c31 = (-1)^(3+1) [(2)(4) - (3)(-1)]   = 11
  c32 = (-1)^(3+2) [(1)(4) - (3)(0)]    = -4
  c33 = (-1)^(3+3) [(1)(-1) - (2)(0)]   = -1

We thus have the co-factor matrix [C] in the form:

        [ -17  -8  -2 ]
  [C] = [  21   3  -9 ]
        [  11  -4  -1 ]
Step 3: The transposed co-factor matrix [C]^T is:

          [ -17  21  11 ]
  [C]^T = [  -8   3  -4 ]
          [  -2  -9  -1 ]

Step 4: Determine the inverse matrix [A]^-1 following Equation (4.28):

                                    [ -17  21  11 ]          [ 17  -21  -11 ]
  [A]^-1 = [C]^T / |A| = (1 / -39)  [  -8   3  -4 ] = (1/39) [  8   -3    4 ]
                                    [  -2  -9  -1 ]          [  2    9    1 ]

One may verify the correct inversion of matrix [A] by:

  [A][A]^-1 = [I]

where [I] is a unity matrix defined in Equation (4.18).
Solution of Large Numbers of
Simultaneous Equations
Using Matrix Algebra

A vital tool for solving very large numbers of
simultaneous equations in engineering analysis
using digital computers
Why are huge numbers of simultaneous equations needed in advanced engineering analyses?

● Numerical analyses, such as the finite element method (FEM) and the finite difference method
  (FDM), are two effective and powerful tools for engineering analysis of real but complex
  situations in:
  ● Mechanical stress and deformation analyses of machines and structures,
  ● Thermofluid analyses for temperature distributions in solids, and fluid flow behavior
    requiring solutions for pressure drops and local velocities, as well as fluid-induced forces.
● The essence of FEM and FDM is to DISCRETIZE the continua of "real structures" or "flow
  patterns" of complex configurations and loading/boundary conditions into a FINITE number of
  sub-components (called elements) inter-connected at common NODES.
● Analyses are performed on individual ELEMENTS instead of the entire continuum of a complex
  solid or flow pattern.
● An example of the discretization of a piston in an internal combustion engine, and the
  resulting stress distributions in the piston and connecting rod:

  [Figure: real piston; discretized piston and connecting rod for FEM analysis; FE analysis
  results showing the distribution of stresses. Source: http://www.npd-solutions.com/feaoverview.html]

● FEM or FDM analyses require the derivation of one algebraic equation for every NODE in the
  discretized model. One will readily appreciate the need for solving a huge number of
  simultaneous equations, in view of the huge number of elements (and nodes) involved in the
  analysis, as illustrated by the discretized model of the piston and connecting rod.
● Many analyses using FEM require solutions of tens of thousands of simultaneous equations;
  this is not unusual in advanced engineering analyses.
4.7 Solution of Simultaneous Linear Equations
We have demonstrated in Section 4.7.1 and in the preceding slides that commercial finite
element computer codes (see the detailed description of the finite element method and
commercial codes in Chapter 11) require the solution of very large numbers of simultaneous
equations (often hundreds or thousands of them).

The time and effort required to solve such huge numbers of simultaneous equations are
obviously far beyond human capability. These tasks require the use of digital computers
with proper algorithms. Matrix algebra is the only viable way of developing such algorithms
for digital computers.
There are generally two methods suitable for such applications:
(1) the inverse matrix technique, and (2) the Gaussian elimination technique, as
presented in the following formulations.

4.7.2 Solution of Large Numbers of Simultaneous Linear Equations Using the Inverse
Matrix Technique:
We have demonstrated how the following simultaneous equations may be expressed in
matrix form:

  a11 x1 + a12 x2 + a13 x3 + .......... + a1n xn = r1
  a21 x1 + a22 x2 + a23 x3 + .......... + a2n xn = r2
  a31 x1 + a32 x2 + a33 x3 + .......... + a3n xn = r3
  ..........................................................
  an1 x1 + an2 x2 + an3 x3 + .......... + ann xn = rn

or in the compact form: [A]{x} = {r}
4.7 Solution of Simultaneous Linear Equations – Cont’d
4.7.2 Solution of Large Numbers of Simultaneous Linear Equations Using the
Inverse Matrix Technique – Cont’d:
The unknown matrix {x} in the above equation may be found by multiplying both sides of
the equation by the inverse of matrix [A]:

  [A]^-1[A]{x} = [A]^-1{r}   or   [I]{x} = [A]^-1{r}

We may thus determine the unknown matrix {x} = [A]^-1{r} as the solution of the
simultaneous equations.

Example 4.9 (p. 134)
Solve the following simultaneous equations using the inverse matrix method:

  4x1 + x2 = 24      (a)
  x1 - 2x2 = -21     (b)

Solution:
We may express the above simultaneous equations in the matrix form [A]{x} = {r},
where the matrices are:

  [A] = [ 4   1 ]     {x} = { x1 }     {r} = {  24 }
        [ 1  -2 ]           { x2 }           { -21 }

We find the inverse of the [A] matrix to be:

  [A]^-1 = [C]^T / |A| = (1/-9) [ -2  -1 ] = (1/9) [ 2   1 ]
                                [ -1   4 ]         [ 1  -4 ]

which leads to the solution x1 = 3 and x2 = 12 as follows:

  { x1 } = [A]^-1 {r} = (1/9) [ 2   1 ] {  24 } = (1/9) { (2)(24) + (1)(-21)  } = (1/9) {  27 } = {  3 }
  { x2 }                      [ 1  -4 ] { -21 }         { (1)(24) + (-4)(-21) }         { 108 }   { 12 }

The use of the inverse matrix technique for solving simultaneous equations is usually limited
to moderate numbers of simultaneous equations in engineering analysis, say fewer than 100.
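A quick numerical check of Example 4.9 (an added illustration; numpy's inv routine stands in
for the hand inversion above):

import numpy as np

A = np.array([[4., 1.], [1., -2.]])
r = np.array([24., -21.])

x = np.linalg.inv(A) @ r    # {x} = [A]^-1 {r}
print(x)                    # [ 3. 12.], i.e. x1 = 3 and x2 = 12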
4.7.3 Solution of large numbers of simultaneous equations using the Gaussian
elimination method

The inventor of this method was Johann Carl Friedrich Gauss (1777–1855),

a German astronomer (planetary orbits),
physicist (molecular bond theory, magnetic theory, etc.), and
mathematician (differential geometry, the Gaussian distribution in statistics, etc.)

● The Gaussian elimination method and its derivatives, e.g., the Gauss-Jordan elimination
  method and the Gauss-Seidel iteration method, are widely used for solving the large numbers
  of simultaneous equations required in many modern-day numerical analyses, such as
  engineering analyses involving FEM and FDM as mentioned earlier.

● The principal reason the Gaussian elimination method is popular in this type of application
  is that its formulations are simple arithmetic expressions, so the solution procedure for a
  large number of unknown quantities can be readily programmed in languages such as FORTRAN
  for digital computers with enormous memory capacities and incredibly high computational
  efficiencies.
The essence of the Gaussian elimination method:

1) Convert the square coefficient matrix [A] of a set of simultaneous equations into an
   "upper triangular" matrix of the form in Equation (4.15) by using an "elimination procedure":

        [ a11  a12  a13 ]   via the                  [ a11  a12    a13   ]
  [A] = [ a21  a22  a23 ]   elimination   [A]upper = [ 0    a22'   a23'  ]
        [ a31  a32  a33 ]   process                  [ 0    0      a33'' ]

   The primes attached to the elements of the right-hand-side matrix designate the step number
   in the elimination process, for instance (') for step 1 and ('') for step 2 of the
   elimination process.

2) The last unknown quantity in the converted upper triangular coefficient matrix, with the
   corresponding changes in the resultant matrix of the simultaneous equations, becomes
   immediately available, as shown below:

  [ a11  a12    a13   ] { x1 }   { r1   }
  [ 0    a22'   a23'  ] { x2 } = { r2'  }     with x3 = r3'' / a33''
  [ 0    0      a33'' ] { x3 }   { r3'' }

3) The second-to-last unknown quantity x2 may be obtained by substituting the newly found
   numerical value of the last unknown quantity into the second-to-last equation:

  a22' x2 + a23' x3 = r2'   from which   x2 = (r2' - a23' x3) / a22'

4) The remaining unknown quantity (x1) may be obtained by the same procedure,
   which is termed "back substitution".
4.7.3 Mathematical formulation of the Gaussian elimination process:
We will present the mathematical formulation of this process through the solution of the
following three simultaneous equations:

  a11 x1 + a12 x2 + a13 x3 = r1
  a21 x1 + a22 x2 + a23 x3 = r2     (4.34 a,b,c)
  a31 x1 + a32 x2 + a33 x3 = r3

We express the simultaneous equations (4.34 a,b,c) in matrix form:

  [ a11  a12  a13 ] { x1 }   { r1 }
  [ a21  a22  a23 ] { x2 } = { r2 }     (4.35)
  [ a31  a32  a33 ] { x3 }   { r3 }

or in the simpler form: [A]{x} = {r}

Step 1: We express the unknown x1 in Equation (4.34a) in terms of x2 and x3 as follows:

  x1 = r1/a11 - (a12/a11) x2 - (a13/a11) x3
Now, if we substitute x1 in Equations (4.34b and c) with:

  x1 = r1/a11 - (a12/a11) x2 - (a13/a11) x3

we turn Equations (4.34 a,b,c) into the new form of Equations (4.36 a,b,c):

  a11 x1 + a12 x2 + a13 x3 = r1                                                (4.36a)

  0 + (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1      (4.36b)

  0 + (a32 - a31 a12/a11) x2 + (a33 - a31 a13/a11) x3 = r3 - (a31/a11) r1      (4.36c)

You will not see x1 in the new Equations (4.36b and c) anymore after this substitution,
so x1 is "eliminated" from Equations (4.36 a,b,c) by the Step 1 elimination.

The new matrix form of the simultaneous equations is:

  [ a11  a12    a13   ] { x1 }   { r1   }
  [ 0    a22^1  a23^1 ] { x2 } = { r2^1 }     (4.37)
  [ 0    a32^1  a33^1 ] { x3 }   { r3^1 }

where:

  a22^1 = a22 - a21 a12/a11     a23^1 = a23 - a21 a13/a11
  a32^1 = a32 - a31 a12/a11     a33^1 = a33 - a31 a13/a11
  r2^1 = r2 - (a21/a11) r1      r3^1 = r3 - (a31/a11) r1

The superscript index ("1") indicates "elimination step 1" in the above expressions.
Step 2: This elimination step involves expressing x2 in Equation (4.36b) in terms of x3:

from

  0 + (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1     (4.36b)

to

  x2 = [ r2 - (a21/a11) r1 - (a23 - a21 a13/a11) x3 ] / (a22 - a21 a12/a11)

and substituting it into Equation (4.36c), thereby eliminating x2 from that equation.

The matrix [A] of the original simultaneous equations now takes the form of an
upper triangular matrix, and we have thus accomplished the Gaussian elimination process:

  [ a11  a12    a13   ] { x1 }   { r1   }
  [ 0    a22^2  a23^2 ] { x2 } = { r2^2 }     (4.38)
  [ 0    0      a33^2 ] { x3 }   { r3^2 }

We notice that the coefficient matrix [A] has now become an "upper triangular matrix," from
which the last row of elements gives the solution:

  x3 = r3^2 / a33^2

The other two unknowns, x2 and x1, may be obtained by the "back substitution process" from
Equation (4.38), such that:

  x2 = (r2^2 - a23^2 x3) / a22^2     and     x1 = (r1 - a12 x2 - a13 x3) / a11
Recurrence relations for the Gaussian elimination process:
We have learned that the Gaussian elimination method requires the conversion of the
original square coefficient matrix [A] into an upper triangular matrix, with the
corresponding modifications of the resultant matrix {r} of the given simultaneous
equations.

For a set of 3 simultaneous equations, (3-1) = 2 elimination steps are sufficient for this
process, as we demonstrated in the previous case. In general, we need to perform (n-1)
elimination steps to solve n simultaneous equations.

In practice, we may for example need to perform (50,000 - 1) = 49,999 elimination steps to
solve 50,000 simultaneous equations in an engineering analysis. Such a task is by no means
realistic if all the computations are performed by human effort.

The realistic way to perform such tasks is to use digital computers, with their enormous
memory capacities and extremely fast arithmetic operations. Because the Gaussian elimination
of elements in the coefficient matrix [A], and the corresponding revisions of the resultant
matrix {r}, can all be done with simple arithmetic operations, as shown in the previous
example, the method is well suited for developing algorithms that can be programmed on most
available digital computers. The following slide shows the recurrence relations that achieve
this goal.
Recurrence relations for the Gaussian elimination process - Cont’d:
Given a general set of n simultaneous equations:

  a11 x1 + a12 x2 + a13 x3 + .......... + a1n xn = r1
  a21 x1 + a22 x2 + a23 x3 + .......... + a2n xn = r2
  a31 x1 + a32 x2 + a33 x3 + .......... + a3n xn = r3     (4.30)
  ..........................................................
  an1 x1 + an2 x2 + an3 x3 + .......... + ann xn = rn

the following recurrence relations can be used in the Gaussian elimination process:

For elimination:

  aij^n = aij^(n-1) - ain^(n-1) anj^(n-1) / ann^(n-1)     (4.39a)

  ri^n = ri^(n-1) - ain^(n-1) rn^(n-1) / ann^(n-1)        (4.39b)

with i > n and j > n, in which n = the elimination step number.

Solution of the unknowns by back substitution:

  xi = ( ri - Σ(j=i+1 to n) aij xj ) / aii     with i = n, n-1, n-2, ..., 1     (4.40)

where aij, ri and xi are the elements of the final matrices at the conclusion of the
elimination process.
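The recurrence relations (4.39 a,b) and the back substitution (4.40) can be coded almost
verbatim. The following sketch (an added illustration; it uses no pivoting, so it assumes
non-zero diagonal elements just as the slides do) carries out the (n-1) elimination steps
and the back substitution:

import numpy as np

def gauss_solve(A, r):
    """Gaussian elimination per Equations (4.39a,b) followed by
    back substitution per Equation (4.40). No pivoting."""
    A = np.asarray(A, float).copy()
    r = np.asarray(r, float).copy()
    N = len(r)
    for n in range(N - 1):                # elimination steps 1 .. N-1
        for i in range(n + 1, N):         # rows i > n
            factor = A[i, n] / A[n, n]
            A[i, n:] -= factor * A[n, n:]     # Equation (4.39a)
            r[i] -= factor * r[n]             # Equation (4.39b)
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):        # back substitution, Equation (4.40)
        x[i] = (r[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[8, 4, 1], [2, 6, -1], [1, -2, 1]]
print(gauss_solve(A, [12, 3, 2]))         # [1.  0.5 2. ], as in Example 4.11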
Example 4.10 (p. 138):
Solve the following simultaneous equations using the Gaussian elimination method:

  4x1 + x2 = 24       (a)
  x1 - 2x2 = -21      (b)

Solution:
We may express these simultaneous equations in matrix form as:

  [ 4   1 ] { x1 }   {  24 }
  [ 1  -2 ] { x2 } = { -21 }

Recognize that:

  a11^0 = a11 = 4,  a12^0 = a12 = 1,  a21^0 = a21 = 1,  a22^0 = a22 = -2,
  r1^0 = r1 = 24  and  r2^0 = r2 = -21

We are now ready to use the recurrence relations in Equations (4.39 a,b) for the Gaussian
elimination process. We realize that only 2-1 = one step is required to convert the
coefficient matrix for these 2 simultaneous equations.

Step 1 with n = 1, i = 2 and j = 2:

  a22^1 = a22^0 - a21^0 a12^0 / a11^0 = -2 - (1)(1)/4 = -9/4

and

  r2^1 = r2^0 - a21^0 r1^0 / a11^0 = -21 - (1)(24)/4 = -21 - 6 = -27

The matrix equation after this step of elimination becomes:

  [ 4    1   ] { x1 }   {  24 }
  [ 0  -9/4  ] { x2 } = { -27 }

We have the solution for x2 from the last (the 2nd) equation:

  -(9/4) x2 = -27   which gives   x2 = 12

and we use back substitution to compute the other unknown:

  x1 = (r1 - a12^0 x2) / a11^0 = (24 - (1)(12)) / 4 = 3
Example 4.11 (p. 139):
Solve the following 3 simultaneous equations using the Gaussian elimination method:

  8x1 + 4x2 + x3 = 12     (a)
  2x1 + 6x2 - x3 = 3      (b)
  x1 - 2x2 + x3 = 2       (c)

Solution:
We first express Equations (a, b and c) in the following matrix form:

  [ 8   4   1 ] { x1 }   { 12 }
  [ 2   6  -1 ] { x2 } = {  3 }     (d)
  [ 1  -2   1 ] { x3 }   {  2 }

Because there are 3 simultaneous equations in this example, we need to perform
3-1 = 2 steps of elimination for the solution:

Step 1 with n = 1, i > 1 and j > 1:

with i = 2, j = 2:   a22^1 = a22^0 - a21^0 a12^0 / a11^0 = 6 - (2)(4)/8 = 5

with i = 2, j = 3:   a23^1 = a23^0 - a21^0 a13^0 / a11^0 = -1 - (2)(1)/8 = -1.25

and                  r2^1 = r2^0 - a21^0 r1^0 / a11^0 = 3 - (2)(12)/8 = 0
Example 4.11 - Cont’d

with i = 3, j = 2:   a32^1 = a32^0 - a31^0 a12^0 / a11^0 = -2 - (1)(4)/8 = -2.5

with i = 3, j = 3:   a33^1 = a33^0 - a31^0 a13^0 / a11^0 = 1 - (1)(1)/8 = 0.875

and                  r3^1 = r3^0 - a31^0 r1^0 / a11^0 = 2 - (1)(12)/8 = 0.5

We may thus express Equation (d) after Step 1 of elimination in the form:

  [ 8    4     1     ] { x1 }   {  12 }
  [ 0    5    -1.25  ] { x2 } = {   0 }     (e)
  [ 0   -2.5   0.875 ] { x3 }   { 0.5 }

We are now ready to perform Step 2 (the last step) of the elimination process to convert
the coefficient matrix in Equation (e) into an upper triangular matrix:

Step 2 with n = 2, i = 3 and j = 3 (i > n, j > n):

We realize that a21^2 = a31^2 = a32^2 = 0, since these below-diagonal elements have
already been eliminated. Using the recurrence relations, we compute the following:

  a33^2 = a33^1 - a32^1 a23^1 / a22^1 = 0.875 - (-2.5)(-1.25)/5 = 0.25

and

  r3^2 = r3^1 - a32^1 r2^1 / a22^1 = 0.5 - (-2.5)(0)/5 = 0.5
Example 4.11 - Cont’d
We have completed the conversion of the matrix equation in Equation (e) to a new form
with an upper triangular coefficient matrix [A] and a modified resultant matrix {r} in
Equation (f) after the Step 2 elimination:

  [ 8   4    1    ] { x1 }   {  12 }
  [ 0   5   -1.25 ] { x2 } = {   0 }     (f)
  [ 0   0    0.25 ] { x3 }   { 0.5 }

One may readily see from the last line in Equation (f) that the solution for x3 is:

  x3 = 0.5/0.25 = 2

The values of the remaining two unknowns, x2 and x1, may be obtained by using the
recurrence relation for back substitution given in Equation (4.40):

  xi = ( ri - Σ(j=i+1 to 3) aij xj ) / aii     with i = 2, 1

Hence, with i = 2:

  x2 = (r2 - a23 x3) / a22 = [ 0 - (-1.25)(2) ] / 5 = 0.5

and, to determine x1, with i = 1:

  x1 = (r1 - a12 x2 - a13 x3) / a11 = [ 12 - (4)(0.5) - (1)(2) ] / 8 = 1
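The triangularized system (f) can be checked directly with a three-line back substitution
(an added aside, not part of the original slides):

import numpy as np

U = np.array([[8., 4., 1.],
              [0., 5., -1.25],
              [0., 0., 0.25]])
rp = np.array([12., 0., 0.5])       # modified resultant matrix of Equation (f)
x = np.zeros(3)
for i in (2, 1, 0):                 # back substitution per Equation (4.40)
    x[i] = (rp[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
print(x)                            # [1.  0.5 2. ]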
Additional Example:
Solve the following simultaneous equations using the Gaussian elimination method:

  x + z = 1
  2x + y + z = 0     (a)
  x + y + 2z = 1

Express the above equations in matrix form:

  [ 1  0  1 ] { x }   { 1 }
  [ 2  1  1 ] { y } = { 0 }     (b)
  [ 1  1  2 ] { z }   { 1 }

If we compare Equation (b) with the typical matrix expression of 3 simultaneous equations:

  [ a11  a12  a13 ] { x1 }   { r1 }
  [ a21  a22  a23 ] { x2 } = { r2 }
  [ a31  a32  a33 ] { x3 }   { r3 }

we have the following matrix elements at the zeroth step:

  a11 = 1   a12 = 0   a13 = 1        r1 = 1
  a21 = 2   a22 = 1   a23 = 1   and  r2 = 0
  a31 = 1   a32 = 1   a33 = 2        r3 = 1
Let us use the recurrence relations for the elimination process in Equations (4.39 a,b):

  aij^n = aij^(n-1) - ain^(n-1) anj^(n-1) / ann^(n-1)
  ri^n = ri^(n-1) - ain^(n-1) rn^(n-1) / ann^(n-1)      with i > n and j > n

Step 1: n = 1, so i = 2,3 and j = 2,3

For i = 2, j = 2 and 3:

  i = 2, j = 2:   a22^1 = a22^0 - a21^0 a12^0 / a11^0 = 1 - (2)(0)/1 = 1

  i = 2, j = 3:   a23^1 = a23^0 - a21^0 a13^0 / a11^0 = 1 - (2)(1)/1 = -1

  i = 2:          r2^1 = r2^0 - a21^0 r1^0 / a11^0 = 0 - (2)(1)/1 = -2

For i = 3, j = 2 and 3:

  i = 3, j = 2:   a32^1 = a32^0 - a31^0 a12^0 / a11^0 = 1 - (1)(0)/1 = 1

  i = 3, j = 3:   a33^1 = a33^0 - a31^0 a13^0 / a11^0 = 2 - (1)(1)/1 = 1

  i = 3:          r3^1 = r3^0 - a31^0 r1^0 / a11^0 = 1 - (1)(1)/1 = 0
So, the original simultaneous equations after the Step 1 elimination have the form:

  [ a11  a12    a13   ] { x1 }   { r1   }        [ 1  0   1 ] { x1 }   {  1 }
  [ 0    a22^1  a23^1 ] { x2 } = { r2^1 }   i.e. [ 0  1  -1 ] { x2 } = { -2 }
  [ 0    a32^1  a33^1 ] { x3 }   { r3^1 }        [ 0  1   1 ] { x3 }   {  0 }

We now have:

  a21^1 = 0   a22^1 = 1   a23^1 = -1
  a31^1 = 0   a32^1 = 1   a33^1 = 1
  r2^1 = -2   r3^1 = 0
Step 2: n = 2, so i = 3 and j = 3 (i > n, j > n)

  i = 3, j = 3:   a33^2 = a33^1 - a32^1 a23^1 / a22^1 = 1 - (1)(-1)/1 = 2

                  r3^2 = r3^1 - a32^1 r2^1 / a22^1 = 0 - (1)(-2)/1 = 2

The coefficient matrix [A] has now been triangularized, and the original simultaneous
equations have been transformed into the form:

  [ a11  a12    a13   ] { x1 }   { r1   }        [ 1  0   1 ] { x1 }   {  1 }
  [ 0    a22^1  a23^1 ] { x2 } = { r2^1 }   i.e. [ 0  1  -1 ] { x2 } = { -2 }
  [ 0    0      a33^2 ] { x3 }   { r3^2 }        [ 0  0   2 ] { x3 }   {  2 }

From the last equation, (0)x1 + (0)x2 + 2x3 = 2, we solve for x3 = 1. The other two
unknowns, x2 and x1, can be obtained by back substitution of x3 using Equation (4.40):

  x2 = ( r2 - Σ(j=3) a2j xj ) / a22 = (r2 - a23 x3) / a22 = [ -2 - (-1)(1) ] / 1 = -1

and

  x1 = ( r1 - Σ(j=2 to 3) a1j xj ) / a11 = (r1 - a12 x2 - a13 x3) / a11
     = [ 1 - (0)(-1) - (1)(1) ] / 1 = 0

We thus have the solution: x = x1 = 0; y = x2 = -1 and z = x3 = 1
4.8 Eigenvalues and Eigenfunctions (p. 141)
Eigenvalue – from the German term for "characteristic value," appearing in certain
mathematical operations.
Eigenfunction – a "characteristic function" present in a mathematical operation.

Both eigenvalues and eigenfunctions are generally used in the following two areas of
engineering analysis:

(1) in transforming a geometry from one space to another, with the desired enlarged or
reduced magnitudes and orientations, to simplify the analysis, and

(2) in the form of parameters appearing in, and characterizing, the solutions of certain
equations in the analysis.

We will introduce the application of eigenfunctions in geometric transformation first,
followed by the second type of application mentioned above.
4.8 Eigenvalues and Eigenfunctions in geometric transformation:

Geometric transformation is often performed in engineering analyses involving complex
geometries of solids or fluids. The purpose of this technique is to transform solids and
fluids of complex geometry into much simpler geometries that can be handled by available
analytical techniques.

Geometric transformation - Linear transformation:

A linear transformation is a rule for changing one geometric figure (or matrix or vector)
into another, using a formula with a specified format. This format is a linear combination,
in which the original components (e.g., the x and y coordinates of each point of the
original figure) are changed via the formula ax + by to produce the coordinates of the
transformed figure.

Under a linear transformation, a geometric figure such as a straight line can be stretched
or compressed, and rotated, depending on the values of the coefficients a and b in the
formula ax + by in the x-y plane, as will be seen in the following example. We also
recognize that some such transformations have an inverse, which undoes their effect.
Geometric Transformation - Linear transformation – Cont’d:
A simple example of the linear transformation of a straight line AB from the x-y plane to
the x'-y' plane:
This case involves the transformation of a straight line AB from a plane defined by the x-y
coordinate system to one in an x'-y' coordinate system via the linear functions x' = cx and
y' = dy, in which c = 2 and d = 3 units.

The coordinates of the terminal points of the line in the original x-y plane are xA = 0 and
yA = 0 at Point A(0,0), and xB = 2 and yB = 4 at Point B(2,4).

The coordinates of A and B in the new transformed plane (the x'-y' plane) are obtained from
this transformation: xA' = (2)(0) = 0 and yA' = (3)(0) = 0 for Point A, and
xB' = (2)(2) = 4 units and yB' = (3)(4) = 12 units for Point B.

  [Figure: the line AB from A(0,0) to B(2,4) in the x-y plane transforms into the line from
  A'(0,0) to B'(4,12) in the x'-y' plane]

We may also obtain the coordinates of A and B in the transformed plane by a matrix in the
form:

  { x' }   [ 2  0 ] { x }
  { y' } = [ 0  3 ] { y }

from which we get: xA' = 0, yA' = 0; xB' = 4, yB' = 12 units.
Example of Geometric Transformation - Nonlinear transformation:
A well-known nonlinear geometric transformation is the Joukowski transformation,
used in the design analysis of airfoils.
Joukowski was a Russian mathematician who invented this transformation. By using this
technique, the fluid flow around the geometry of an airfoil can be analyzed as the flow
around a rotating cylinder, whose geometric symmetry simplifies the computations needed
for the non-symmetric airfoil geometry.

  [Figure: aerodynamic analysis of an airfoil using the Joukowski transformation, which
  maps the flow around a cylinder onto the flow around an airfoil]
Eigenvalues and Eigenfunctions with Characteristic Functions:
"Characteristic functions" appear in "characteristic equations," which often arise in the
solutions of certain engineering analyses.
For example, in determining natural frequencies in the modal analysis of structures, the
natural frequencies of a structure, designated by ωn with mode number n = 1,2,3,..., are
important design parameters for its vibration under applied periodic excitation forces with
frequency ω. Uncontrollable, and often devastating, vibration called "resonant vibration"
of the structure can occur when the excitation force frequency ω matches any one of the
natural frequencies ωn of the structure. The governing differential equations used to
determine the natural frequencies of cable structures are homogeneous differential
equations, as will be presented in Chapter 9. The characteristic equation associated with
the solution for the amplitude of vibration y(x) of these equations has the form
sin βL = 0, in which L is the length of the cable and the values β are the eigenvalues of
the eigenfunction sin βL. We realize that there are a great many non-zero eigenvalues,
β = π/L, 2π/L, 3π/L, ..., nπ/L, and each of these β-values "characterizes" the way the
cable vibrates (we call these the modes of vibration). For instance:

  [Figure: mode shapes of the vibrating cable for mode 1 (β1 = π/L), mode 2 (β2 = 2π/L)
  and mode 3 (β3 = 3π/L)]

So, we can see that the eigenvalues β of the eigenfunction sin βL characterize the shapes
of the cable in its various modes of vibration.
4.8.1 Eigenvalues and Eigenvectors of Matrices (p. 142)
We mentioned at the beginning of this section that the vector quantities of Chapter 3 may
be transformed in 2-D or 3-D space via both linear and nonlinear transformations.

Such transformations can be accomplished by linear transformation functions in such forms
as ax + by for 2-D transformations, or ax + by + cz for 3-D transformations, where a, b, c
are real numbers.

Because vectors involve components, transformation functions are usually in the form of
matrices.

The vector quantity associated with the linear transformation may be expressed in matrix
form as:

  {x} = { x1 }     for a vector with 2 components along the x- and y-coordinates, or
        { x2 }

        { x1 }
  {x} = { x2 }     for a vector with 3 components along the respective x-, y- and
        { x3 }     z-coordinates

[A] = the linear transformation matrix, a square matrix with real-number elements.

The eigenvalues (λ) of the eigenfunction matrix [A] may be defined by:

  [A]{x} = λ{x}     (4.44)

We realize that Equation (4.44) may be expressed in the alternative form:

  ([A] - λ[I]){x} = {0}     (4.45)

The values of λ can be determined from:

  det([A] - λ[I]) = 0     (4.46)
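Equations (4.44)-(4.46) correspond directly to a standard numerical eigenvalue computation.
A minimal sketch (an added illustration, using numpy's eig routine on the matrix of
Example 4.13 below):

import numpy as np

A = np.array([[-1., 2.],
              [-7., 8.]])              # matrix of Example 4.13 below

# Equation (4.46): eigenvalues are the roots of det([A] - lambda*[I]) = 0
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                         # eigenvalues 1 and 6 (order may vary)

# Equation (4.44): check [A]{x} = lambda {x} for each eigenpair
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)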
Example 4.13 (p. 143)
Find the eigenvalues and eigenvectors of the matrix:

  [A] = [ -1  2 ]
        [ -7  8 ]

Solution:
We may use Equation (4.46) to obtain the following equation:

  |A - λI| = | -1-λ   2  | = 0
             | -7    8-λ |

from which we solve for the two eigenvalues: λ1 = 1 and λ2 = 6.
Next, we determine the eigenvectors corresponding to these two eigenvalues.

For eigenvalue λ1 = 1:

We use Equation (4.45), ([A] - λ[I]){x} = {0}, to determine the eigenvector {x}
corresponding to this eigenvalue λ1 = 1:

  ( [ -1  2 ]     [ 1  0 ] ) { x1 }   [ -1-1   2  ] { x1 }   { 0 }
  ( [ -7  8 ] - 1 [ 0  1 ] ) { x2 } = [ -7    8-1 ] { x2 } = { 0 }

leading to the following simultaneous equations:

  -2x1 + 2x2 = 0
  -7x1 + 7x2 = 0

Solving gives x1 = x2 = p, a real number, which leads to the eigenvector:

  { x1 } = { p } = p { 1 }
  { x2 }   { p }     { 1 }
Example 4.13 – Cont’d

For eigenvalue λ2 = 6:

We follow the same procedure as for eigenvalue λ1, with the following equation in matrix
form:

  [ -1-6   2  ] { x1 }   { 0 }
  [ -7    8-6 ] { x2 } = { 0 }

leading to the following simultaneous equations:

  -7x1 + 2x2 = 0
  -7x1 + 2x2 = 0

If we let x2 = p in the above, we get x1 = 2p/7. We thus obtain the eigenvector:

  { x1 } = { 2p/7 } = (p/7) { 2 }
  { x2 }   {  p   }         { 7 }

We thus conclude that the eigenvector corresponding to eigenvalue λ2 = 6 is {2/7, 1}^T,
or, scaled by 7, {2, 7}^T.

4.8.1 Eigenvalues and Eigenvectors of Matrices - Cont’d

A similar procedure is followed for geometric transformations in 3-D space. In such cases,
the transformation functions are 3x3 matrices, from which the eigenvalues and eigenvectors
may be determined in similar ways, as illustrated in Example 4.14.
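A quick numerical confirmation of both eigenvectors found above (an added aside; p is set
to 1 and 7 respectively so the components are whole numbers):

import numpy as np

A = np.array([[-1., 2.], [-7., 8.]])

v1 = np.array([1., 1.])   # eigenvector for lambda_1 = 1 (x1 = x2 = p = 1)
v2 = np.array([2., 7.])   # eigenvector for lambda_2 = 6 (x1 = 2p/7 with p = 7)
assert np.allclose(A @ v1, 1.0 * v1)   # [A]{x} = 1 {x}
assert np.allclose(A @ v2, 6.0 * v2)   # [A]{x} = 6 {x}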
4.8.3 Application of Eigenvalues and Eigenfunctions in Engineering Analysis (p. 146)
Engineering analyses often involve eigenvalues and eigenfunctions, as mentioned in
Section 4.8 and also in the subsequent Chapters 8 and 9 of this book.

Following is one example that illustrates how they may be used in solving a complex problem
involving "coupled" simultaneous differential equations, to determine the frequencies of
the movements of the two masses in the spring-mass system illustrated in Figure 4.6.

We may derive the following two simultaneous differential equations for the amplitudes of
both masses, y1(t) and y2(t), measured from their initial equilibrium positions:

  m d²y1(t)/dt² = -k y1(t) + k [y2(t) - y1(t)]     (4.47a)

  m d²y2(t)/dt² = -k [y2(t) - y1(t)] - k y2(t)     (4.47b)

We notice that the two unknowns, y1(t) and y2(t), appear in both Equations (4.47a) and
(4.47b). This "coupling" effect makes solving for both these quantities extremely difficult.
Fortunately, we realize that the motions of both masses m in the system follow a simple
harmonic motion pattern. Mathematically, this motion can be expressed by sine functions:
yi(t) = Yi sin(ωt) for i = 1,2, where Yi is the maximum amplitude of vibration of mass m,
and ω is the frequency of vibration.
4.8.3 Application of Eigenvalues and Eigenfunctions in Engineering Analysis – Cont’d

Upon substitution of the relationship yi(t) = Yi sin(ωt) into the simultaneous differential
equations in Equations (4.47 a,b), we get the following equations:

  (2k/m - ω²) Y1 - (k/m) Y2 = 0     (4.49a)

  -(k/m) Y1 + (2k/m - ω²) Y2 = 0    (4.49b)

We may express these equations in the following matrix form:

  [ 2k/m - ω²    -k/m       ] { Y1 }   { 0 }
  [ -k/m          2k/m - ω² ] { Y2 } = { 0 }     (4.50a)

or in a different form:

  ( [ 2k/m   -k/m ]      [ 1  0 ] ) { Y1 }   { 0 }
  ( [ -k/m   2k/m ] - ω² [ 0  1 ] ) { Y2 } = { 0 }     (4.50b)

Matching Equation (4.50b) with (4.45) results in the following relations:

  [A] = [ 2k/m   -k/m ],   {x} = { Y1 },   and   λ = ω²     (4.51)
        [ -k/m   2k/m ]          { Y2 }

We may thus obtain the frequencies of the vibrating masses m from the eigenvalues λ of the
matrix [A], with ω = √λ. This is a speedy way to get this critical solution.
