Linear Algebra: Learning Objectives
1 Linear Algebra
Learning Objectives
After reading this chapter, you will know:
1. Matrix Algebra, Types of Matrices, Determinant
2. Cramer’s rule, Rank of Matrix
3. Eigenvalues and Eigenvectors
Matrix
Definition
A matrix is a system of m·n numbers arranged along m rows and n columns. Conventionally, a matrix is represented by a single capital letter.

Thus, A = [a11 a12 … a1j … a1n]
          [a21 a22 … a2j … a2n]
          [ …   …  … aij …  ain]
          [am1 am2 …  …  …  amn]

A = (aij)m×n, where aij is the element in the ith row and jth column.

Associated terms: principal diagonal, trace, transpose.
Types of Matrices
1. Row and Column Matrices
Row Matrix: a matrix having a single row (also called a row vector).
Example: [2 7 8 9]

Column Matrix: a matrix having a single column (also called a column vector).
Example: [ 5]
         [10]
         [13]
         [ 1]
2. Square Matrix
Number of rows = Number of columns
Order of Square matrix → No. of rows or columns
Example: A = [1 2 3]
             [5 4 6] ; the order of this matrix is 3.
             [0 7 5]
Principal Diagonal (or Main Diagonal or Leading Diagonal)
The diagonal of a square matrix running from the top left to the bottom right is called the principal diagonal.
4. Diagonal Matrix: A square matrix in which all the elements except those on the leading diagonal are zero.
Example: [−4 0 0]
         [ 0 6 0]
         [ 0 0 8]
5. Scalar Matrix: A diagonal matrix in which all the leading diagonal elements are the same.
Example: [2 0 0]
         [0 2 0]
         [0 0 2]
6. Unit Matrix (or Identity Matrix): A Diagonal matrix in which all the leading diagonal elements
are ‘1’.
Example: I3 = [1 0 0]
              [0 1 0]
              [0 0 1]
7. Null Matrix (or Zero Matrix): A matrix is said to be Null Matrix if all the elements are zero.
Example: [0 0 0]
         [0 0 0]
14. Hermitian Matrix: A square matrix with complex entries that is equal to its own conjugate transpose.
Aθ = A, i.e., aij = a̅ji
For example: [ 5   1−i]
             [1+i   5 ]
Note: In Hermitian matrix, diagonal elements → Always real
15. Skew-Hermitian Matrix: A square matrix with complex entries that is equal to the negative of its conjugate transpose.
Aθ = −A, i.e., aij = −a̅ji
For example: [  i   1−i]
             [−1−i  2i ]
Note: In Skew-Hermitian matrix, diagonal elements → Either zero or Pure Imaginary.
18. Nilpotent Matrix: If A^k = 0 (the null matrix), then A is called a nilpotent matrix (where k is a +ve integer).
19. Periodic Matrix: If A^(k+1) = A (where k is a +ve integer), then A is called a periodic matrix.
If k = 1 (i.e., A^2 = A), then A is an idempotent matrix.
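As a quick numerical check of these two definitions, the sketch below (using NumPy; the matrices are only illustrative examples, not from the text) verifies a nilpotent and an idempotent matrix:

```python
import numpy as np

# A strictly upper-triangular matrix is nilpotent: some power of it is the zero matrix.
N = np.array([[0, 1],
              [0, 0]])
print(N @ N)          # N^2 is the 2x2 zero matrix, so N is nilpotent with k = 2

# A projection matrix is idempotent: P^2 = P (the k = 1 case of a periodic matrix).
P = np.array([[1, 0],
              [0, 0]])
print(np.array_equal(P @ P, P))   # True
```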
Equality of Matrices
Two matrices are equal if
(a) they are of the same order, and
(b) each pair of corresponding elements is equal.
Rules
1. Matrices of the same order can be added.
2. Addition is commutative: A + B = B + A
3. Addition is associative: (A + B) + C = A + (B + C) = B + (C + A)
Multiplication of Matrices
Condition: Two matrices can be multiplied only when the number of columns of the first matrix equals the number of rows of the second matrix. Multiplying an (m × n) matrix by an (n × p) matrix gives a matrix of dimension (m × p):
(m × n)(n × p) → (m × p)
Properties of multiplication
1. Let A be m × n and B be p × q; then AB (of size m × q) exists ⇔ n = p
2. BA (of size p × n) exists ⇔ q = m
3. AB ≠ BA (in general)
4. A(BC) = (AB)C
5. AB = 0 need not imply either A = 0 or B = 0
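Properties 3–5 are easy to check numerically. The sketch below (using NumPy; the matrices are illustrative) exhibits non-commutativity, associativity, and a product of two non-zero matrices that is zero:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[2, 0],
              [0, 2]])

# Property 3: multiplication is not commutative in general.
print(np.array_equal(A @ B, B @ A))              # False

# Property 4: multiplication is associative.
print(np.array_equal(A @ (B @ C), (A @ B) @ C))  # True

# Property 5: AB = 0 with A != 0 and B != 0.
A0 = np.array([[2, 2],
               [2, 2]])
B0 = np.array([[ 2, -2],
               [-2,  2]])
print(A0 @ B0)                                   # the 2x2 zero matrix
```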
Multiplication of Matrix by a Scalar: Every element of the matrix gets multiplied by that scalar.
Determinant
An nth order determinant is an expression associated with an n × n square matrix.
If A = [aij], the element aij belongs to the ith row and jth column.
For n = 2, D = det A = |a11 a12| = a11 a22 − a12 a21
                       |a21 a22|
Determinant of “Order n”
D = |A| = det A = |a11 a12 a13 … a1n|
                  |a21  …   …  … a2n|
                  | …   …   …  …  … |
                  |an1 an2  …  … ann|
Cofactor is the minor with “proper sign”. The sign is given by (−1)i+j (where the element
belongs to ith row, jth column).
A2 = cofactor of a2 = (−1)^(1+2) × |b1 b3|
                                   |c1 c3|

The cofactor matrix can be formed as |A1 A2 A3|
                                     |B1 B2 B3|
                                     |C1 C2 C3|
In General
ai Aj + bi Bj + ci Cj = ∆ if i = j
ai Aj + bi Bj + ci Cj = 0 if i ≠ j
{ai , bi , ci are the matrix elements and Ai , Bi , Ci are corresponding cofactors. }
Note: Singular matrix: If |A| = 0 then A is called singular matrix.
Non-singular matrix: If |A| ≠ 0, then A is called a non-singular matrix.
Properties of Determinants
1. A determinant remains unaltered by changing its rows into columns and columns into rows.
|a1 b1 c1|   |a1 a2 a3|
|a2 b2 c2| = |b1 b2 b3|   i.e., det A = det A^T
|a3 b3 c3|   |c1 c2 c3|
2. If two parallel lines of a determinant are interchanged, the determinant retains its numerical value but changes its sign. (A row or a column is generically referred to as a line.)
|a1 b1 c1|     |a1 c1 b1|   |c1 a1 b1|
|a2 b2 c2| = − |a2 c2 b2| = |c2 a2 b2|
|a3 b3 c3|     |a3 c3 b3|   |c3 a3 b3|
3. Determinant vanishes if two parallel lines are identical.
4. If each element of a line is multiplied by the same factor, the whole determinant is multiplied by that factor. (Note the difference from scalar multiplication of a matrix.)
|a1 P·b1 c1|     |a1 b1 c1|
|a2 P·b2 c2| = P |a2 b2 c2|
|a3 P·b3 c3|     |a3 b3 c3|
5. If each element of a line consists of m terms, the determinant can be expressed as a sum of m determinants.
|a1 b1 c1+d1−e1|   |a1 b1 c1|   |a1 b1 d1|   |a1 b1 e1|
|a2 b2 c2+d2−e2| = |a2 b2 c2| + |a2 b2 d2| − |a2 b2 e2|
|a3 b3 c3+d3−e3|   |a3 b3 c3|   |a3 b3 d3|   |a3 b3 e3|
6. If to each element of a line is added an equi-multiple of the corresponding elements of one or more parallel lines, the determinant is unaffected.
Example: The operation R2 → R2 + pR1 + qR3 leaves the determinant unchanged.
7. Determinant of an upper triangular/ lower triangular/diagonal/scalar matrix is equal to the
product of the leading diagonal elements of the matrix.
8. If A & B are square matrix of the same order, then |AB|=|BA|=|A||B|.
9. If A is a non-singular matrix, then |A^−1| = 1/|A|.
10. The determinant of a skew-symmetric matrix (i.e., A^T = −A) of odd order is zero.
11. If A is an orthogonal matrix (i.e., A^T = A^−1), then |A| = ±1; for a unitary matrix, |det A| = 1.
12. If A is a square matrix of order n, then |kA| = k^n |A|.
13. |In| = 1 (In is the identity matrix of order n).
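Several of these properties can be verified numerically. The sketch below (using NumPy; the matrices are illustrative, with a fixed random seed) checks properties 1, 8, 10, and 12:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.integers(-5, 5, (n, n)).astype(float)
B = rng.integers(-5, 5, (n, n)).astype(float)
det = np.linalg.det

print(np.isclose(det(A), det(A.T)))             # property 1: det A = det A^T
print(np.isclose(det(A @ B), det(A) * det(B)))  # property 8: |AB| = |A||B|

k = 2.0
print(np.isclose(det(k * A), k**n * det(A)))    # property 12: |kA| = k^n |A|

# property 10: an odd-order skew-symmetric matrix has zero determinant.
S = np.array([[ 0,  2, -3],
              [-2,  0,  5],
              [ 3, -5,  0]], dtype=float)
print(np.isclose(det(S), 0.0))
```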
Multiplication of Determinants
The product of two determinants of same order is itself a determinant of that order.
For determinants we multiply row by row (instead of row by column, as is done for matrices).
Determinant vs. Matrix
- Determinant: the number of rows and columns is always equal. Matrix: the number of rows and columns need not be equal (square or rectangular).
- Scalar multiplication of a determinant multiplies the elements of one line (i.e., one row or one column) by the constant; scalar multiplication of a matrix multiplies every element of the matrix by the constant.
- A determinant can be reduced to a single number; a matrix cannot.
- Interchanging rows and columns has no effect on a determinant; for a matrix it changes the meaning altogether.
- Two determinants are multiplied by multiplying rows of the first with rows of the second; two matrices are multiplied by multiplying rows of the first with columns of the second.
Transpose of Matrix
Matrix formed by interchanging rows & columns is called the transpose of a matrix and denoted
by AT.
Example: A = [1 2]
             [5 1]
             [4 6]

Transpose of A = Trans(A) = A′ = A^T = [1 5 4]
                                       [2 1 6]
Note:
A = (1/2)(A + A^T) + (1/2)(A − A^T) = symmetric matrix + skew-symmetric matrix.
If A & B are symmetric, then AB+BA is symmetric and AB−BA is skew symmetric.
If A is symmetric, then A^n is symmetric (n = 2, 3, 4, …).
If A is skew-symmetric, then A^n is symmetric when n is even and skew-symmetric when n is odd.
Adjoint of a Matrix
Adjoint of A is defined as the transposed matrix of the cofactors of A. In other words,
Adj (A) = Trans (cofactor matrix)
The determinant of the square matrix

A = [a1 b1 c1]       is   Δ = |a1 b1 c1|
    [a2 b2 c2]                |a2 b2 c2|
    [a3 b3 c3]                |a3 b3 c3|

The matrix formed by the cofactors of the elements of A is

[A1 B1 C1]
[A2 B2 C2]   → also called the cofactor matrix
[A3 B3 C3]

Then Adj(A) = transpose of the cofactor matrix = [A1 A2 A3]
                                                 [B1 B2 B3]
                                                 [C1 C2 C3]
Inverse of a Matrix
A^−1 = Adj(A) / |A|
|A| must be non-zero (i.e. A must be non-singular).
Inverse of a matrix, if exists, is always unique.
If A is a 2 × 2 matrix [a b], its inverse is
                       [c d]

A^−1 = 1/(ad − bc) [ d −b]
                   [−c  a]
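The 2 × 2 adjugate formula above can be coded directly. In the sketch below, the helper name `inverse_2x2` and the sample matrix are my own (hypothetical) choices; the result is compared against NumPy's general-purpose inverse:

```python
import numpy as np

def inverse_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula: (1/(ad - bc)) [[d, -b], [-c, a]]."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("a singular matrix has no inverse")
    return np.array([[d, -b], [-c, a]], dtype=float) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(inverse_2x2(A))
print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))   # True
```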
Important Points
1. IA = AI = A, (Here A is square matrix of the same order as that of I )
2. 0 A = A 0 = 0, (0 is null matrix)
3. AB = 0 does not necessarily mean that A or B is a null matrix; nor does it imply BA = 0.
Example: AB = [2 2] [ 2 −2] = [0 0]
              [2 2] [−2  2]   [0 0]
4. If the product of two non-zero square matrices A and B is a zero matrix, then A and B are both singular matrices.
5. If A is a non-singular matrix and AB = 0, then B is a null matrix.
6. AB ≠ BA (in general) → Commutative property is not applicable
7. A(BC) = (A B)C → Associative property holds.
8. A(B+C) = AB+ AC → Distributive property holds.
9. AC = AD , doesn’t imply C = D [Even when A ≠ 0].
10. (A + B)^T = A^T + B^T
11. (AB)^T = B^T A^T
12. (AB)^−1 = B^−1 A^−1
13. A A^−1 = A^−1 A = I
14. (kA)^T = k A^T (k is a scalar, A is a matrix)
15. (kA)^−1 = k^−1 A^−1 (k is a scalar, A is a matrix)
16. (A^−1)^T = (A^T)^−1
17. The conjugate of A^T equals the transpose of A̅ (conjugate of the transpose of a matrix = transpose of the conjugate of the matrix).
18. If a non-singular matrix A is symmetric, then A^−1 is also symmetric.
19. If A is an orthogonal matrix, then A^T and A^−1 are also orthogonal.
20. If A is a square matrix of order n then
(i) |adj A| = |A|^(n−1)
(ii) |adj (adj A)| = |A|^((n−1)^2)
(iii) adj (adj A) = |A|^(n−2) A
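These adjoint identities can be sanity-checked numerically. In the sketch below, the helper `adjugate` is my own and uses the relation adj(A) = |A| A^−1, which is valid only for non-singular A; the sample matrix is illustrative:

```python
import numpy as np

def adjugate(A):
    """adj(A) computed from adj(A) = |A| A^-1 (assumes A is non-singular)."""
    return np.linalg.det(A) * np.linalg.inv(A)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]
dA = np.linalg.det(A)

print(np.isclose(np.linalg.det(adjugate(A)), dA ** (n - 1)))                # (i)
print(np.isclose(np.linalg.det(adjugate(adjugate(A))), dA ** (n - 1)**2))   # (ii)
print(np.allclose(adjugate(adjugate(A)), dA ** (n - 2) * A))                # (iii)
```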
Example: Write the following matrix A as a sum of a symmetric and a skew-symmetric matrix.

A = [ 1 2 4]
    [−2 5 3]
    [−1 6 3]

Solution: The symmetric part is

(1/2)(A + A^T) = (1/2) [2  0 3]   [ 1    0   3/2]
                       [0 10 9] = [ 0    5   9/2]
                       [3  9 6]   [3/2  9/2   3 ]

and the skew-symmetric part is

(1/2)(A − A^T) = [  0    2    5/2]
                 [ −2    0   −3/2]
                 [−5/2  3/2    0 ]
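The decomposition in this worked example can be reproduced with a few lines of NumPy, using the same matrix A from the text:

```python
import numpy as np

A = np.array([[ 1, 2, 4],
              [-2, 5, 3],
              [-1, 6, 3]], dtype=float)

P = (A + A.T) / 2   # symmetric part
Q = (A - A.T) / 2   # skew-symmetric part

print(P)
print(Q)
print(np.allclose(P, P.T))    # True: P is symmetric
print(np.allclose(Q, -Q.T))   # True: Q is skew-symmetric
print(np.allclose(P + Q, A))  # True: the two parts sum back to A
```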
Note:
Elementary transformations do not change the rank of a matrix; however, they do change its Eigen values.
We call a linear system S1 "row equivalent" to a linear system S2 if S1 can be obtained from S2 by a finite number of elementary row operations.
Rank of Matrix
If we select any r rows and r columns from a matrix A, deleting all other rows and columns, then the determinant formed by these r × r elements is called a minor of A of order r.
5. A system of homogeneous equations such that the number of unknown variables exceeds the
number of equations, necessarily has non-zero solutions.
6. If A is a non-singular matrix, then all the row/column vectors are independent.
7. If A is a singular matrix, then vectors of A are linearly dependent.
8. r(A)=0 iff (if and only if) A is a null matrix.
Example: Find the rank of [2  3 −1 −1]
                          [1 −1 −2 −4]
                          [3  1  3 −2]
                          [6  3  0 −7]

Solution:

R1 ↔ R2:
[1 −1 −2 −4]
[2  3 −1 −1]
[3  1  3 −2]
[6  3  0 −7]

R2 → R2 − 2R1, R3 → R3 − 3R1, R4 → R4 − 6R1:
[1 −1 −2 −4]
[0  5  3  7]
[0  4  9 10]
[0  9 12 17]

R3 → R3 − (4/5)R2, R4 → R4 − (9/5)R2:
[1 −1  −2    −4  ]
[0  5   3     7  ]
[0  0  33/5  22/5]
[0  0  33/5  22/5]

R4 → R4 − R3:
[1 −1  −2    −4  ]
[0  5   3     7  ]
[0  0  33/5  22/5]
[0  0   0     0  ]

Hence, rank = 3 (the number of non-zero rows).
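The same rank can be obtained directly with NumPy, which counts the non-negligible singular values (equal to the number of non-zero rows in an echelon form), using the matrix from the example:

```python
import numpy as np

A = np.array([[2,  3, -1, -1],
              [1, -1, -2, -4],
              [3,  1,  3, -2],
              [6,  3,  0, -7]], dtype=float)

# matrix_rank counts singular values above a numerical tolerance.
print(np.linalg.matrix_rank(A))   # 3
```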
Example: The rank of the diagonal matrix
[−1 0 0 0 0 0]
[ 0 0 0 0 0 0]
[ 0 0 1 0 0 0]
[ 0 0 0 0 0 0]
[ 0 0 0 0 0 0]
[ 0 0 0 0 0 4]
(A) 1 (B) 2 (C) 3 (D) 4
Solution: Option (C) is correct, as the number of non-zero elements on the diagonal of a diagonal matrix gives its rank.
Example: The rank of the matrix [ 2 −4  6]
                                [−1  2 −3]  is
                                [ 3 −6  9]
(A) 3 (B) 2 (C) 0 (D) 1
Solution: By doing R3 → R3 + 3R2 and R1 → R1 + 2R2, we get

[ 0  0  0]             [−1  2 −3]
[−1  2 −3]  R1 ⟷ R2   [ 0  0  0]
[ 0  0  0]             [ 0  0  0]

The number of non-zero rows is 1. Hence the rank is 1.
Example: If the rank of [ µ −1  0]
                        [ 0  µ −1]  is 2, the value of µ is?
                        [−1  0  µ]
(A) 3 (B) 2 (C) 1 (D) 0
Solution: Option (C) is correct. (Hint: equate the determinant to zero; it expands to µ^3 − 1 = 0, giving µ = 1.)
Vector Space
It is a non-empty set V of vectors such that, for any two vectors a and b in V, every linear combination αa + βb (α, β real numbers) is also an element of V.
Dimension: The maximum number of linearly independent vectors in V is called the dimension of V
(and denoted as dim V).
Span: The set of all linear combinations of given vectors a(1), a(2), …, a(p) with the same number of components is called the span of these vectors. A span is itself a vector space.
Cramer’s Rule
Let the following two equations be there
a11 x1 + a12 x2 = b1 --------------------------------------- (i)
a21 x1 + a22 x2 = b2 -------------------------------------- (ii)
D = |a11 a12|
    |a21 a22|

D1 = |b1 a12|
     |b2 a22|

D2 = |a11 b1|
     |a21 b2|
Solution using Cramer’s rule:
x1 = D1/D and x2 = D2/D
In the above method, it is assumed that
1. the number of equations = the number of unknowns
2. D ≠ 0
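Cramer's rule for the 2 × 2 case above can be sketched in a few lines of plain Python. The helper names `det2` and `cramer_2x2` and the sample system are my own (hypothetical) choices:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 array [[a, b], [c, d]]."""
    return a * d - b * c

def cramer_2x2(a11, a12, b1, a21, a22, b2):
    """Solve a11*x1 + a12*x2 = b1 and a21*x1 + a22*x2 = b2 by Cramer's rule."""
    D = det2(a11, a12, a21, a22)
    if D == 0:
        raise ValueError("Cramer's rule requires D != 0")
    D1 = det2(b1, a12, b2, a22)   # replace the first column by (b1, b2)
    D2 = det2(a11, b1, a21, b2)   # replace the second column by (b1, b2)
    return D1 / D, D2 / D

# 2*x1 + 1*x2 = 5 and 1*x1 + 3*x2 = 5 gives x1 = 2, x2 = 1
print(cramer_2x2(2, 1, 5, 1, 3, 5))   # (2.0, 1.0)
```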
Eigenvalues
If A is a square matrix, the equation formed by equating the determinant of (A − λI) to zero is called the characteristic equation.
The roots of this equation are called the characteristic roots / latent roots / Eigen values of the
matrix A.
Properties of Eigenvalues
1. The sum of the Eigen values of a matrix is equal to the sum of its principal diagonal elements.
2. The product of the Eigen values of a matrix is equal to its determinant.
3. For a real symmetric matrix, the largest Eigen value is always greater than or equal to every diagonal element of the matrix.
4. If λ is an Eigen value of an orthogonal matrix, then 1/λ is also an Eigen value.
5. If A is real, then its Eigen values are either real or occur in complex conjugate pairs.
6. A matrix A and its transpose A^T have the same characteristic roots (Eigen values).
7. The Eigen values of a triangular matrix are just the diagonal elements of the matrix.
8. Zero is an Eigen value of a matrix if and only if the matrix is singular.
9. Eigen values of a unitary or orthogonal matrix have absolute value 1.
10. Eigen values of a Hermitian or real symmetric matrix are purely real.
11. Eigen values of a skew-Hermitian or real skew-symmetric matrix are zero or purely imaginary.
12. |A|/λ is an Eigen value of adj A (because adj A = |A| A^−1).
13. If λ is an Eigen value of the matrix A, then
i) the Eigen value of A^−1 is 1/λ
ii) the Eigen value of A^m is λ^m
iii) the Eigen values of kA are kλ (k is a scalar)
iv) the Eigen values of A + kI are λ + k
v) the Eigen values of (A − kI)^2 are (λ − k)^2
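Properties 1, 2, and 13 are straightforward to verify numerically. The sketch below (using NumPy; the matrix is illustrative, with Eigen values 5 and 2) checks the trace, determinant, shift, and power rules:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)                       # Eigen values of A

print(np.isclose(lam.sum(), np.trace(A)))        # property 1: sum = trace
print(np.isclose(lam.prod(), np.linalg.det(A)))  # property 2: product = determinant

k = 3.0
lam_shift = np.linalg.eigvals(A + k * np.eye(2))
print(np.allclose(np.sort(lam_shift), np.sort(lam + k)))  # 13(iv): A + kI has Eigen values lambda + k

lam_sq = np.linalg.eigvals(A @ A)
print(np.allclose(np.sort(lam_sq), np.sort(lam ** 2)))    # 13(ii): A^2 has Eigen values lambda^2
```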
Eigenvectors
[A − λI] X = 0, where X is a non-zero vector.
For each Eigen value λ, solving for X gives the Eigenvectors. The zero vector X = 0 is always one of the solutions, but it is of no practical interest.
Note:
1. The set of Eigenvalues is called SPECTRUM of A.
2. The largest of the absolute value of Eigenvalues is called spectral radius of A.
3. Multiplying the Eigenvector X by its Eigen value λ or by the matrix A gives the same result: AX = λX.
4. A given Eigen value can have several distinct Eigenvectors, but a given Eigenvector corresponds to only one Eigen value.
Properties of Eigenvectors
1. Eigenvector X of matrix A is not unique.
2. If Xi is an Eigenvector, then cXi is also an Eigenvector (c is a non-zero scalar constant).
3. If λ1 , λ2 , λ3 . . . . . λn are distinct, then X1 , X 2. . . . . X n are linearly independent .
4. If two or more Eigen values are equal, it may or may not be possible to get linearly independent
Eigenvectors corresponding to equal roots.
5. Two Eigenvectors X1 and X2 (column vectors) are called orthogonal if X1^T X2 = 0.
(Note: A matrix A is orthogonal if A^T = A^−1, i.e., A A^T = A A^−1 = I.)
6. Eigenvectors of a symmetric matrix corresponding to different Eigenvalues are orthogonal.
Example: Find the Eigen values and Eigenvectors of A = [5 4]
                                                       [1 2]

Solution: |A − λI| = 0

|5 − λ    4  |
|  1    2 − λ| = 0

λ^2 − 7λ + 6 = 0 ⇒ λ = 6, 1

By the definition of an Eigenvector, [A − λI] X = 0. Hence,

[5 − λ    4  ] [x1]
[  1    2 − λ] [x2] = 0

For λ = 6:  [−1  4] [x1]
            [ 1 −4] [x2] = 0

which gives the following two equations:
−x1 + 4x2 = 0 ---------------(i)
x1 − 4x2 = 0 ----------------(ii)

However, only one equation is independent:
x1/4 = x2/1, giving the Eigenvector (4, 1)

For λ = 1:  [4 4] [x1]
            [1 1] [x2] = 0

4x1 + 4x2 = 0
x1 + x2 = 0
x1/1 = x2/(−1), giving the Eigenvector (1, −1)
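The worked example can be confirmed with NumPy, using the same matrix A (note that NumPy normalises each Eigenvector to unit length, so they are scalar multiples of (4, 1) and (1, −1)):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))                     # the Eigen values 1 and 6

# each column of `vecs` satisfies A x = lambda x
for lam, x in zip(vals, vecs.T):
    print(np.allclose(A @ x, lam * x))   # True
```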
Example: If A = [1 2 2]
                [2 1 2] , find the value of A^2 − 4A − 5I.
                [2 2 1]

Solution: |A − λI| = 0 ⇒ −λ^3 + 3λ^2 + 9λ + 5 = 0
⇒ (−λ^2 + 4λ + 5)(λ + 1) = 0
i.e., λ^2 − 4λ − 5 = 0 or λ + 1 = 0
By the Cayley–Hamilton theorem, A satisfies its characteristic equation, so (A^2 − 4A − 5I)(A + I) = 0. Since A is symmetric with Eigen values 5, −1, −1, its minimal polynomial is (λ − 5)(λ + 1) = λ^2 − 4λ − 5.
Hence, A^2 − 4A − 5I = 0 (the null matrix).
Example: For the matrix A = [1 2 −3]
                            [0 3  2] , find the Eigen values of 3A^3 + 5A^2 − 6A + 2I.
                            [0 0 −2]

Solution: |A − λI| = 0

|1 − λ    2      −3  |
|  0    3 − λ     2  | = 0
|  0      0    −2 − λ|

(1 − λ)(3 − λ)(−2 − λ) = 0 ⟹ λ = 1, 3, −2
Eigen values of A = 1, 3, −2
Eigen values of A^3 = 1, 27, −8
Eigen values of A^2 = 1, 9, 4
Eigen values of I = 1, 1, 1
First Eigen value of 3A^3 + 5A^2 − 6A + 2I = 3 × 1 + 5 × 1 − 6 × 1 + 2 = 4
Second Eigen value of 3A^3 + 5A^2 − 6A + 2I = 3(27) + 5(9) − 6(3) + 2 = 81 + 45 − 18 + 2 = 110
Third Eigen value of 3A^3 + 5A^2 − 6A + 2I = 3(−8) + 5(4) − 6(−2) + 2 = −24 + 20 + 12 + 2 = 10
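The polynomial-of-a-matrix rule used in this example can be checked numerically with the same matrix A: the Eigen values of 3A^3 + 5A^2 − 6A + 2I equal the polynomial 3λ^3 + 5λ^2 − 6λ + 2 evaluated at each Eigen value of A.

```python
import numpy as np

A = np.array([[1.0, 2.0, -3.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0, -2.0]])
I = np.eye(3)

# Eigen values of the matrix polynomial 3A^3 + 5A^2 - 6A + 2I ...
M = 3 * np.linalg.matrix_power(A, 3) + 5 * np.linalg.matrix_power(A, 2) - 6 * A + 2 * I
poly_vals = np.sort(np.linalg.eigvals(M))

# ... equal the scalar polynomial applied to the Eigen values of A (here 1, 3, -2).
lam = np.array([1.0, 3.0, -2.0])
expected = np.sort(3 * lam**3 + 5 * lam**2 - 6 * lam + 2)

print(poly_vals)                          # 4, 10 and 110, as computed above
print(np.allclose(poly_vals, expected))   # True
```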
Example: Find the Eigen values of the matrix A = [a11   0    0    0 ]
                                                 [a21  a22   0    0 ]
                                                 [a31  a32  a33   0 ]
                                                 [a41  a42  a43  a44]

Solution: |A − λI| = |a11 − λ     0         0         0   |
                     |  a21    a22 − λ      0         0   | = 0
                     |  a31      a32     a33 − λ      0   |
                     |  a41      a42       a43     a44 − λ|

Expanding, (a11 − λ)(a22 − λ)(a33 − λ)(a44 − λ) = 0
λ = a11, a22, a33, a44 → which are just the diagonal elements
Note: Recall the property of Eigen values: "The Eigen values of a triangular matrix are just the diagonal elements of the matrix."