Advanced Engineering Mathematics
What Is A Matrix?
A matrix is a set of numbers arranged in a square or rectangular array enclosed by brackets; the plural of matrix is matrices.
Properties of matrices
1. It contains a specified number of rows and columns.
2. It has an order, expressed as the number of rows by the number of columns (rows x columns).
Advantages of matrices
1. Matrices reduce complicated systems of equations to compact expressions, as applied in solving simultaneous linear equations, for instance
a1x + b1y = c
and
a2x + b2y = d
2. They are adapted to systematic methods of mathematical treatment and are well suited to computers.
Elements of matrices
Elements are the numbers within the enclosing brackets, arranged in a square or rectangular array, for instance:

A2x2 = [a  c]
       [b  d]

where a, b, c, d are all real numbers.
A matrix is denoted by a bold capital letter and its elements by lower-case letters, for example matrix A with elements aij, where
i identifies the row,
j identifies the column,
as shown in the matrix below:

A2x2 = [a11  a12]
       [a21  a22]

A2x2 is matrix A, a 2x2 matrix, with the elements listed above in square form, where a11 is the element in row 1 and column 1.
Applications of matrices in the mechatronics and biomedical fields
Matrices are applied in the study of quantum mechanics, electrical circuits and optics; for instance, they help in calculating battery power output and the conversion of electrical energy into other useful forms by resistors, especially when solving circuit problems using Kirchhoff's voltage law (KVL).
Types of matrices
i. Column / vector matrix
This is a matrix which has any number of rows and only one column:

Anx1 = [a11]
       [a21]
       [a31]
       [...]
       [an1]

where n is the nth row.
v. Diagonal matrix
This is a square matrix in which all elements are zero except those on the main diagonal. Every diagonal matrix is therefore square, and it is characterised by the following conditions:
aij = 0 for all i ≠ j
aij ≠ 0 for some or all i = j
Example

A3x3 = [a11  0    0  ]
       [0    a22  0  ]
       [0    0    a33]

where each a is a scalar:
aij = 0 for i ≠ j,
aij = a for i = j.
Matrix operations
a. Equality of matrices
Two matrices are equal when all their corresponding elements are the same, for instance

A2x2 = [2  3],  B2x2 = [2  3]
       [5  1]          [5  1]

As seen above, A = B.
c. Scalar multiplication
Note: a scalar is any real number.
Let k be a scalar quantity and B a matrix of any order; then kB = Bk.
Properties
k(A + B) = kA + kB
(k + g)A = kA + gA
k(AB) = (kA)B = A(kB)
k(gA) = (gk)A
Example
Given a scalar k = 1/2 and B2x2 = [2  3; 5  1], find kB2x2.

kB2x2 = 1/2 [2  3]
            [5  1]

      = [1    3/2]
        [5/2  1/2]
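The scalar-multiplication example and the distributive property above can be checked numerically. This is a minimal sketch using Python with NumPy, which is an assumed tool here (the notes themselves use no software):

```python
import numpy as np

# k = 1/2 scales every element of B, as in the worked example.
k = 0.5
B = np.array([[2.0, 3.0],
              [5.0, 1.0]])
kB = k * B              # elementwise scalar multiplication
print(kB)               # [[1.  1.5] [2.5 0.5]]

# One of the listed properties, k(A + B) = kA + kB, also checks out:
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.allclose(k * (A + B), k * A + k * B)
```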
d. Matrix multiplication
For two matrices to be multiplied, they must be conformable/compatible: the number of columns of the first matrix must equal the number of rows of the second. For instance
A2x2 * B2x2 = [0  3 ] * [2  3]
              [5  10]   [5  1]

= [(0*2 + 3*5)   (0*3 + 3*1) ]
  [(5*2 + 10*5)  (5*3 + 10*1)]

= [15  3 ]
  [60  25]
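The same product can be verified with NumPy's matrix multiplication (a small illustrative check, not part of the original notes):

```python
import numpy as np

A = np.array([[0, 3],
              [5, 10]])
B = np.array([[2, 3],
              [5, 1]])
AB = A @ B   # each entry is a row of A dotted with a column of B
print(AB)    # [[15  3] [60 25]]
```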
Determinant of matrices
We can only find the determinant of a square matrix; non-square matrices have no determinant. The determinant is denoted det(A), where A is the matrix.
Method 1
1.1) Cover up the row and the column containing the element to form a submatrix, i.e. by partitioning. Consider the matrix below:

[a  b  c]
[d  e  f]
[g  h  i]

The first submatrix is formed by covering the row (a, b, c) and the column (a, d, g) containing a, leaving

[e  f]
[h  i]

Now find the determinant of this 2x2 submatrix, which is (ei - hf). This determinant is the minor of a.
1.2) Each element is then allocated a sign based on its position within the matrix; the signs for a 3x3 matrix are:

[+  -  +]
[-  +  -]
[+  -  +]

The minor of an element with this sign applied is called its cofactor; for instance the cofactor of a is +(ei - hf), and the element is then multiplied by its cofactor, giving +a(ei - hf).
1.3) The determinant of a 3x3 matrix is the sum of the products of the elements of one row (or column) with their corresponding cofactors.
Example
Find the determinant of the given matrix

A3x3 = [1  1   1]
       [1  -1  0]
       [1  1   1]

det(A) = 1((-1)(1) - (1)(0)) - 1((1)(1) - (1)(0)) + 1((1)(1) - (1)(-1))
det(A) = -1 - 1 + 2
det(A) = 0
Note: any matrix whose determinant is zero is a singular matrix.
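The determinant of the worked example can be confirmed numerically; this NumPy check (an added illustration, not from the notes) also shows why A is singular:

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, -1, 0],
              [1, 1, 1]])
det_A = np.linalg.det(A)
print(det_A)  # prints a value within floating-point rounding of zero:
              # rows 1 and 3 are identical, so A is singular
```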
Transpose
The transpose of a matrix is obtained by interchanging its rows and columns, i.e. rewriting the columns as rows and the rows as columns. Note that the transpose of a square matrix can also be obtained by reflecting the matrix across its main diagonal.
The transpose is denoted by an upper-case "T" in the superscript of the given matrix: if A is the given matrix, its transpose is written AT. A transpose may also be written A′.
Generally, if A=[aij]mxn, then AT=[aij]nxm.
Examples
A2x3 = [a  b  c]
       [d  e  f]

AT = [a  d]
     [b  e]
     [c  f]

B3x3 = [a  b  c]
       [d  e  f]
       [g  h  i]

BT = [a  d  g]
     [b  e  h]
     [c  f  i]
Hence for a square matrix, the main diagonal acts like the reflection line.
Properties of transpose
i. (AT)T=A
ii. (A+B)T=AT+BT
iii. (kA)T=kAT
iv. (AB)T=BT*AT
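Property (iv) is the least obvious: the transpose of a product reverses the order of the factors. A quick NumPy check (an added sketch, not part of the notes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2x3
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])           # 3x2
assert np.array_equal((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T
assert np.array_equal(A.T.T, A)               # (A^T)^T = A
```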
Inverse of a matrix
(Finding the inverse of a matrix applies almost all of the subtopics discussed above.)
Consider a scalar k: the inverse of k is k^-1, which is the reciprocal of k, i.e. one divided by the scalar.
Note that when a square matrix is multiplied by its inverse, we get an identity matrix: AA^-1 = A^-1A = I.
The inverse of a matrix can be calculated from the equation below:

A^-1 = (1/det(A)) (adjoint A)
Examples
1) Find the inverse of the following matrices.
a) A2x2 = [1  -3]
          [4  6 ]

The determinant of matrix A:
det(A) = (1 * 6) - (4 * -3)
det(A) = 18

A^-1 = (1/det(A)) (adjoint A)

A^-1 = (1/18) [6   3]
              [-4  1]

A^-1 = [1/3   1/6 ]
       [-2/9  1/18]
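NumPy can confirm this inverse and the defining property AA^-1 = I (an added numeric check, not part of the original worked solution):

```python
import numpy as np

A = np.array([[1.0, -3.0],
              [4.0, 6.0]])
A_inv = np.linalg.inv(A)
print(A_inv)   # approximately [[ 1/3  1/6], [-2/9  1/18]]
assert np.allclose(A @ A_inv, np.eye(2))   # A times its inverse is the identity
```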
b) A3x3 = [1  3  1]
          [0  2  0]
          [3  1  3]
Adjoint (A)
The minors, with the position signs applied, give the cofactors:

C11 = +|2 0; 1 3| = 6,  C12 = -|0 0; 3 3| = 0,  C13 = +|0 2; 3 1| = -6
C21 = -|3 1; 1 3| = -8, C22 = +|1 1; 3 3| = 0,  C23 = -|1 3; 3 1| = 8
C31 = +|3 1; 2 0| = -2, C32 = -|1 1; 0 0| = 0,  C33 = +|1 3; 0 2| = 2

Cofactor matrix = [6   0  -6]
                  [-8  0  8 ]
                  [-2  0  2 ]

Transpose of the cofactor matrix / adjoint (A) = [6   -8  -2]
                                                 [0   0   0 ]
                                                 [-6  8   2 ]
Determinant of A (by the rule of Sarrus, repeating the first two columns):

[1  3  1 | 1  3]
[0  2  0 | 0  2]
[3  1  3 | 3  1]

det(A) = (1*2*3 + 3*0*3 + 1*0*1) - (3*2*1 + 1*0*1 + 3*0*3)
det(A) = 0
(The matrix is a singular matrix.)
Therefore, the inverse of A:

A^-1 = (1/det(A)) (adjoint A)

A^-1 = (1/0) [6   -8  -2]
             [0   0   0 ]
             [-6  8   2 ]

Since division by zero is undefined, the inverse of a singular matrix does not exist; singular matrices have no inverses.
I. Linear dependence
Consider the sets of vectors

 [1]  [3]  [5 ]        [1]  [3]  [0]
{[2], [5], [9 ]}  and {[2], [5], [1]}
 [3]  [7]  [13]        [3]  [7]  [2]

Neither set spans R3. The problem is that

[5 ]     [1]   [3]
[9 ] = 2 [2] + [5]
[13]     [3]   [7]

and

[0]     [1]   [3]
[1] = 3 [2] - [5]
[2]     [3]   [7]

Therefore,

      [1]  [3]  [5 ]          [1]  [3]
span {[2], [5], [9 ]} = span {[2], [5]}
      [3]  [7]  [13]          [3]  [7]

and

      [1]  [3]  [0]          [1]  [3]
span {[2], [5], [1]} = span {[2], [5]}
      [3]  [7]  [2]          [3]  [7]
Spanning sets and linear dependence
Suppose that V is a vector space and that X1, X2, X3, ..., Xk are vectors in V.
The set of vectors {X1, X2, X3, ..., Xk} is linearly dependent if
R1X1 + R2X2 + R3X3 + ... + RkXk = 0
for scalars R1, R2, R3, ..., Rk ∈ R
where at least one of R1, R2, R3, ..., Rk is nonzero.
Example

  [1]   [3]   [5 ]   [0]
2 [2] + [5] - [9 ] = [0]
  [3]   [7]   [13]   [0]

  [1]   [3]   [0]   [0]
3 [2] - [5] - [1] = [0]
  [3]   [7]   [2]   [0]

So the two sets of vectors

 [1]  [3]  [5 ]        [1]  [3]  [0]
{[2], [5], [9 ]}  and {[2], [5], [1]}
 [3]  [7]  [13]        [3]  [7]  [2]

are linearly dependent.
Suppose that X, Y, Z ∈ V. To test whether X, Y and Z are linearly dependent, let

X = [1]    Y = [3]    Z = [0]
    [2],       [2],       [4]
    [3]        [1]        [8]

We look for numbers r, s, t, not all zero, such that rX + sY + tZ = 0.
This can be achieved by solving the augmented matrix equation, applying row operations on R2 and R3:

[1  3  0 | 0]
[2  2  4 | 0]
[3  1  8 | 0]

R2 = R2 - 2R1
R3 = R3 - 3R1

[1  3   0 | 0]
[0  -4  4 | 0]
[0  -8  8 | 0]

R3 = R3 - 2R2

[1  3   0 | 0]
[0  -4  4 | 0]
[0  0   0 | 0]

Therefore this system of equations has a nonzero solution, and hence {X, Y, Z} is a linearly dependent set of vectors.
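The row reduction above can be summarised by a rank computation; this NumPy sketch (an added check, not part of the notes) places X, Y, Z as the columns of a matrix, where a rank below 3 confirms dependence:

```python
import numpy as np

# Columns are X = (1,2,3), Y = (3,2,1), Z = (0,4,8) from the example.
M = np.array([[1, 3, 0],
              [2, 2, 4],
              [3, 1, 8]])
print(np.linalg.matrix_rank(M))   # 2 -> fewer than 3, so the set is dependent
```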
Consider the polynomials P(X) = 1 + 3X + 2X², Q(X) = 3 + X + 2X² and R(X) = 2X + X² in P2. Is {P(X), Q(X), R(X)} linearly independent?
Solution
We need to decide whether we can find real numbers r, s, t, not all zero, such that
rP(X) + sQ(X) + tR(X) = 0
r(1 + 3X + 2X²) + s(3 + X + 2X²) + t(2X + X²) = 0
X²(2r + 2s + t) + X(3r + s + 2t) + (r + 3s) = 0
This corresponds to solving the following system of linear equations:
r + 3s = 0
3r + s + 2t = 0
2r + 2s + t = 0
Extracting the coefficient matrix and computing:

[1  3  0]
[3  1  2]
[2  2  1]

R2 = R2 - 3R1
R3 = R3 - 2R1

[1  3   0]
[0  -8  2]
[0  -4  1]

Applying on R2:
R2 = R2 - 2R3

[1  3   0]
[0  0   0]
[0  -4  1]

A row of zeros appears, so nonzero solutions exist; hence {P(X), Q(X), R(X)} is linearly dependent.
II. Linear independence
Suppose that V is a vector space. The set of vectors {X1, X2, X3, ..., Xk} ⊆ V is linearly independent if and only if the only scalars R1, R2, R3, ..., Rk ∈ R such that R1X1 + R2X2 + R3X3 + ... + RkXk = 0 are R1 = R2 = ... = Rk = 0.
If {X1, X2, X3, ..., Xk} is linearly independent, then it is not possible to write any of these vectors as a linear combination of the remaining vectors. For if
X1 = R2X2 + R3X3 + ... + RkXk, then -X1 + R2X2 + R3X3 + ... + RkXk = 0, and independence requires all these coefficients to be zero, which is impossible since the coefficient of X1 is -1.
Example
Let

X = [1]    Y = [3]    Z = [5 ]
    [2],       [2],       [2 ]
    [3]        [9]        [-1]

Is the set {X, Y, Z} linearly independent?
We check whether rX + sY + tZ = 0 forces the real numbers r, s, t to all be zero.
Solving the corresponding system of linear equations using row reduction to echelon form:

[1  3  5  | 0]
[2  2  2  | 0]
[3  9  -1 | 0]

R2 = R2 - 2R1
R3 = R3 - 3R1

[1  3   5   | 0]
[0  -4  -8  | 0]
[0  0   -16 | 0]

R2 = -(1/4)R2
R3 = -(1/16)R3

[1  3  5 | 0]
[0  1  2 | 0]
[0  0  1 | 0]

Hence rX + sY + tZ = 0 only if r = s = t = 0. Therefore {X, Y, Z} is a linearly independent subset of R3.
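As with the dependent example, a rank computation summarises the row reduction; this added NumPy check shows the matrix has full rank:

```python
import numpy as np

# Columns are X = (1,2,3), Y = (3,2,9), Z = (5,2,-1) from the example.
M = np.array([[1, 3, 5],
              [2, 2, 2],
              [3, 9, -1]])
print(np.linalg.matrix_rank(M))   # 3 -> full rank, so the columns are independent
```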
Let X = {sin x, cos x} ⊆ F. Is X linearly dependent or linearly independent?
Suppose that s sin x + t cos x = 0.
Note: this equation holds for all x ∈ R, so
at x = 0:    s·0 + t·1 = 0
at x = π/2:  s·1 + t·0 = 0
Therefore we must have s = 0 = t.
Hence {sin x, cos x} is linearly independent.
Show that {e^x, e^2x, e^3x} is a linearly independent subset of F.
Suppose that re^x + se^2x + te^3x = 0 for all x; then
at x = 0:  r + s + t = 0
at x = 1:  re + se² + te³ = 0
at x = 2:  re² + se⁴ + te⁶ = 0
Solving the matrix equation:

[1   1   1 ]
[e   e²  e³]
[e²  e⁴  e⁶]

R2 = (1/e)R2
R3 = (1/e²)R3

[1  1   1 ]
[1  e   e²]
[1  e²  e⁴]

R2 = R2 - R1
R3 = R3 - R1

[1  1       1     ]
[0  e - 1   e² - 1]
[0  e² - 1  e⁴ - 1]

R2 = (1/(e - 1))R2
R3 = (1/(e² - 1))R3

[1  1  1     ]
[0  1  e + 1 ]
[0  1  e² + 1]

R3 = R3 - R2

[1  1  1     ]
[0  1  e + 1 ]
[0  0  e² - e]

Since e² - e ≠ 0, the only solution is r = s = t = 0. Therefore, {e^x, e^2x, e^3x} is a set of linearly independent functions in the vector space F.
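The sample-point argument above can be checked numerically: evaluating the three functions at x = 0, 1, 2 gives a coefficient matrix whose determinant must be nonzero. A small added NumPy sketch:

```python
import numpy as np

# Rows correspond to the sample points x = 0, 1, 2; columns to e^x, e^2x, e^3x.
e = np.e
M = np.array([[1, 1, 1],
              [e, e**2, e**3],
              [e**2, e**4, e**6]])
# A nonzero determinant means only r = s = t = 0 solves the system.
assert abs(np.linalg.det(M)) > 1e-6
```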
Rank of a matrix (it is denoted by 𝝆(𝑨))
This is the maximum number of linearly independent column (or row) vectors of the matrix, and it cannot exceed the number of rows or columns.
If we consider a square matrix, the columns/rows are linearly independent if the matrix is nonsingular, i.e. when its determinant is not equal to zero.
In general, the rank of a matrix is the dimension of the vector space generated by its columns/rows. For instance, if A is an mxn matrix and m ≥ n, then rank(A) ≤ n, but if m < n, then rank(A) ≤ m. This indicates that if a matrix is not square, either its columns or its rows must be linearly dependent.
For small square matrices, we perform row elimination to obtain an upper triangular matrix. If a row of zeros occurs, the rank of the matrix is less than n and the matrix is singular.
Nullity of a matrix
The nullity of a matrix is defined as the number of vectors present in the null space of a given
matrix (i.e.) rank + nullity = number of columns in the matrix.
Properties of the rank of the matrix
A zero matrix has no nonzero row, hence no independent row or column, implying that rank = 0 for such matrices.
When the rank equals the smaller of the two dimensions, the matrix is called a full-rank matrix.
Steps involved to find the rank of the matrix
Let A = (aij)mxn be a matrix. A positive integer r is said to be the rank of matrix A when r is the largest order for which a nonvanishing minor of A exists.
There are two methods one can use to determine the rank of a matrix namely;
i. Minor method
ii. Echelon form/ reduction to its normal form
Using the minor method
If a matrix contains at least one nonzero element, then ρ(A) ≥ 1.
The rank of the identity matrix In is n.
If the rank of a matrix A is r, then there exists at least one minor of order r which does not vanish, and every minor of order (r + 1) vanishes.
If A is an mxn matrix, then ρ(A) ≤ min(m, n).
A square matrix A of order n has an inverse if and only if ρ(A) = n.
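These rank properties can be illustrated numerically; this added NumPy sketch uses the identity matrix and the singular 3x3 matrix from the inverse example above:

```python
import numpy as np

# The identity I3 has full rank n = 3.
print(np.linalg.matrix_rank(np.eye(3)))      # 3

# The singular matrix from the inverse example (det = 0) has rank below 3:
A = np.array([[1, 3, 1],
              [0, 2, 0],
              [3, 1, 3]])                    # columns 1 and 3 are identical
print(np.linalg.matrix_rank(A))              # 2
```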
Partitioning of matrices
Literally, this is the division/separation of a main matrix into submatrices so that they become compatible for matrix multiplication and the multiplication task is eased. Matrix partitioning finds application in computing, amongst other fields.
A worked example of matrix partitioning.
Consider the matrices below:

A3x5 = [1  3   -2  3  -3]        B5x2 = [1  0]
       [2  4   -2  2  1 ]               [2  1]
       [0  -2  1   1  1 ]               [3  3]
                                        [2  2]
                                        [1  2]

Matrix A is subdivided into 4 blocks while B is subdivided into two blocks/submatrices. Note that while dividing these matrices, the column partition of the first matrix must match the row partition of the second matrix for compatibility; hence the matrices above are divided as follows: A is split after its second row and third column, and B is split after its third row.
For matrix A, let the first quadrant be A1 (rows 1-2, columns 1-3), quadrant 2 be A2 (rows 1-2, columns 4-5), quadrant 3 be A3 (row 3, columns 1-3) and quadrant 4 be A4 (row 3, columns 4-5). For matrix B, let the upper block be B1 (rows 1-3) and the lower block be B2 (rows 4-5).
The newly formed block matrices will be:

A = [A1  A2]  and  B = [B1]
    [A3  A4]           [B2]

At this point we carry out the normal multiplication of matrices:

AB = [A1B1 + A2B2]
     [A3B1 + A4B2]
Work out the different multiplications of each combination of blocks:

A1B1 = [1  3  -2] [1  0]
       [2  4  -2] [2  1]
                  [3  3]

A1B1 = [1  -3]
       [4  -2]

A2B2 = [3  -3] [2  2]
       [2  1 ] [1  2]

A2B2 = [3  0]
       [5  6]

A3B1 = [0  -2  1] [1  0]
                  [2  1]
                  [3  3]

A3B1 = [-1  1]

A4B2 = [1  1] [2  2]
              [1  2]

A4B2 = [3  4]
The next step is to add the blocks (following the principle of addition of matrices):

AB = [A1B1 + A2B2]
     [A3B1 + A4B2]

AB = [(1 + 3)   (-3 + 0)]
     [(4 + 5)   (-2 + 6)]
     [(-1 + 3)  (1 + 4) ]

AB = [4  -3]
     [9  4 ]
     [2  5 ]

Hence partitioning can be used to work out a complex/tedious matrix product through simplified steps.
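The whole partitioned computation can be checked in NumPy (an added sketch using the same split as above, not part of the original notes), confirming that the block product equals the direct product:

```python
import numpy as np

A = np.array([[1, 3, -2, 3, -3],
              [2, 4, -2, 2, 1],
              [0, -2, 1, 1, 1]])
B = np.array([[1, 0],
              [2, 1],
              [3, 3],
              [2, 2],
              [1, 2]])
# Same partition as the worked example: A split after row 2 / column 3, B after row 3.
A1, A2 = A[:2, :3], A[:2, 3:]
A3, A4 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]
AB_blocks = np.vstack([A1 @ B1 + A2 @ B2,
                       A3 @ B1 + A4 @ B2])
assert np.array_equal(AB_blocks, A @ B)   # block product equals the direct product
print(AB_blocks)                          # [[ 4 -3] [ 9  4] [ 2  5]]
```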