
Chapter 1

MATRICES AND SYSTEMS OF EQUATIONS

Class notes for Linear Algebra, Instructor: Prof. Chih-Chiang Cheng.
Textbook: Linear Algebra with Applications, by Steven J. Leon and Lisette de Pillis.

1.1 Systems of Linear Equations

(I) A linear system of m equations in n unknowns can be written as

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
                      :
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m                        (1.1.1)

where a_ij, b_i ∈ R, 1 ≤ i ≤ m, 1 ≤ j ≤ n, and the x_j are variables. We refer to (1.1.1) as an
m × n linear system.
Example 1.1.1

(a)   x_1 + 2x_2 = 5
     2x_1 + 3x_2 = 8
    ⇒ a 2 × 2 linear system.
    ⇒ (1, 2) is a solution.

(b)   x_1 - x_2 + x_3 = 2
     2x_1 + x_2 - x_3 = 4
    ⇒ a 2 × 3 linear system.
    ⇒ It has many solutions, which have the form (2, α, α).
(c)  x_1 + x_2 = 2
     x_1 - x_2 = 1
     x_1       = 4
    ⇒ a 3 × 2 linear system.
    ⇒ No solutions. ⇒ The system is inconsistent.


The set of all solutions to a linear system is called the solution set of the system. If a system is
inconsistent, its solution set is empty.
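
A quick numerical cross-check of Example 1.1.1(a) (a sketch that is not part of the original notes;
it assumes the NumPy library):

    import numpy as np

    # coefficient matrix and right-hand side of x1 + 2x2 = 5, 2x1 + 3x2 = 8
    A = np.array([[1.0, 2.0],
                  [2.0, 3.0]])
    b = np.array([5.0, 8.0])
    print(np.linalg.solve(A, b))   # [1. 2.]  -> the solution (1, 2)
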

(II) Geometric point of view (2 × 2 systems):


(a)  x_1 + x_2 = 2
     x_1 - x_2 = 2
    ⇒ (2, 0) is the only solution: the lines x_1 + x_2 = 2 and x_1 - x_2 = 2 intersect in exactly
    one point of the (x_1, x_2)-plane.

(b)  x_1 + x_2 = 2
     x_1 + x_2 = 1
    ⇒ no solution (two parallel lines); the system is inconsistent.

(c)   x_1 + x_2 =  2
     -x_1 - x_2 = -2
    ⇒ infinitely many solutions: both equations describe the same line x_1 + x_2 = 2.

These results can be extended to m × n systems:

1. exactly one solution ⇒ consistent

2. infinitely many solutions ⇒ consistent

3. no solution ⇔ inconsistent
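
These three cases can also be checked numerically. The sketch below (not from the notes) compares
the rank of the coefficient matrix with the rank of the augmented matrix; the rank of a matrix is
only introduced later in the course, so here it is used purely as a computational device:

    import numpy as np

    def classify(A, b):
        # compare rank(A) with rank(A|b) to decide which of the three cases applies
        Ab = np.column_stack([A, b])
        rA = np.linalg.matrix_rank(A)
        rAb = np.linalg.matrix_rank(Ab)
        n = A.shape[1]
        if rA < rAb:
            return "no solution (inconsistent)"
        return "exactly one solution" if rA == n else "infinitely many solutions"

    print(classify(np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([2.0, 2.0])))    # case (a)
    print(classify(np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([2.0, 1.0])))     # case (b)
    print(classify(np.array([[1.0, 1.0], [-1.0, -1.0]]), np.array([2.0, -2.0])))  # case (c)
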

(III) Equivalent Systems

Definition 1.1.1 Two systems of equations involving the same variables are said to be equivalent
if they have the same solution set.


Example 1.1.2

(a)  3x_1 + 2x_2 - x_3 = -2        (b)   3x_1 + 2x_2 - x_3 = -2
                x_2    =  3             -3x_1 -  x_2 + x_3 =  5
                  2x_3 =  4              3x_1 + 2x_2 + x_3 =  2

Solution: (-2, 3, 2)                Solution: (-2, 3, 2)
⇒ equivalent systems.

There are three operations that can be used on a system to obtain an equivalent system:

I. The order in which any two equations are written may be interchanged.

II. Both sides of an equation may be multiplied by the same nonzero real number.

III. A multiple of one equation may be added to another.

Definition 1.1.2 An n × n system is said to be in strict triangular form if, in the kth equation,
the coefficients of the first k - 1 variables are all zero and the coefficient of x_k is nonzero
(k = 1, 2, 3, ..., n).

Example 1.1.3
    3x_1 + 2x_2 + x_3 = 1
            x_2 - x_3 = 2
                 2x_3 = 4
Advantage: It is easy to find the solution set of this system by using back substitution.

Example 1.1.4 See EXAMPLE 2, page 20, for back substitution.
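
A minimal back-substitution routine, applied to the strictly triangular system of Example 1.1.3
(a sketch assuming NumPy; it presumes every diagonal coefficient is nonzero, as the definition of
strict triangular form requires):

    import numpy as np

    def back_substitution(U, b):
        # solve Ux = b for an upper triangular U with nonzero diagonal, last equation first
        n = len(b)
        x = np.zeros(n)
        for k in range(n - 1, -1, -1):
            x[k] = (b[k] - U[k, k+1:] @ x[k+1:]) / U[k, k]
        return x

    U = np.array([[3.0, 2.0,  1.0],
                  [0.0, 1.0, -1.0],
                  [0.0, 0.0,  2.0]])
    b = np.array([1.0, 2.0, 4.0])
    print(back_substitution(U, b))   # [-3.  4.  2.]
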

Note: If a system of equations is not triangular, we can use operations I and III to obtain an
equivalent system that is in triangular form.

Example 1.1.5
     x_1 + 2x_2 +  x_3 =  3
    3x_1 -  x_2 - 3x_3 = -1
    2x_1 + 3x_2 +  x_3 =  4

Coefficient matrix:
    [ 1   2   1 ]
    [ 3  -1  -3 ]
    [ 2   3   1 ]

Augmented matrix:
    [ 1   2   1 |  3 ]
    [ 3  -1  -3 | -1 ]
    [ 2   3   1 |  4 ]

⇒ x_3 = 4, x_2 = -2, x_1 = 3.
Elementary row operations:
(I) Interchange two rows.
(II) Multiply a row by a nonzero real number.
(III) Add a multiple of one row to another row.
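
The elimination in Example 1.1.5 can be replayed with explicit row operations (a NumPy sketch of
the hand computation; only operation III is needed for this particular system):

    import numpy as np

    Ab = np.array([[1.0,  2.0,  1.0,  3.0],
                   [3.0, -1.0, -3.0, -1.0],
                   [2.0,  3.0,  1.0,  4.0]])   # augmented matrix of Example 1.1.5

    Ab[1] -= 3 * Ab[0]                        # R2 <- R2 - 3 R1
    Ab[2] -= 2 * Ab[0]                        # R3 <- R3 - 2 R1
    Ab[2] -= (Ab[2, 1] / Ab[1, 1]) * Ab[1]    # eliminate x2 from R3
    print(Ab)                                 # strictly triangular augmented matrix

    x3 = Ab[2, 3] / Ab[2, 2]                  # back substitution
    x2 = (Ab[1, 3] - Ab[1, 2] * x3) / Ab[1, 1]
    x1 = (Ab[0, 3] - Ab[0, 1] * x2 - Ab[0, 2] * x3) / Ab[0, 0]
    print(x1, x2, x3)                         # 3.0 -2.0 4.0
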
Example 1.1.6 See EXAMPLE 4 on page 23.
In general, if A ∈ R^{m×n} and B ∈ R^{m×r}, the augmented matrix is denoted by (A|B), i.e.,

    (A|B) = [ a_11  a_12  ...  a_1n | b_11  b_12  ...  b_1r ]
            [ a_21  a_22  ...  a_2n | b_21  b_22  ...  b_2r ]
            [   :     :          :  |   :     :          :  ]
            [ a_m1  a_m2  ...  a_mn | b_m1  b_m2  ...  b_mr ]

See FIGURE 1.1.2 (page 24) for how to obtain a strict triangular form from a square matrix.

1.2 Row Echelon Form

(I)
Example 1.2.1

    [  1   1  1  1  1 |  1 ]
    [ -1  -1  0  0  1 | -1 ]
    [ -2  -2  0  0  1 |  1 ]
    [  0   0  1  1  1 | -1 ]
    [  1   1  2  2  2 |  1 ]

  ⇒ [ 1  1  1  1  1 |  1 ]
    [ 0  0  1  1  2 |  0 ]
    [ 0  0  2  2  3 |  3 ]
    [ 0  0  1  1  1 | -1 ]
    [ 0  0  1  1  1 |  0 ]

  ⇒ [ 1  1  1  1   1 |  1 ]
    [ 0  0  1  1   2 |  0 ]
    [ 0  0  0  0  -1 |  3 ]
    [ 0  0  0  0   0 | -4 ]
    [ 0  0  0  0   0 | -3 ]

  ⇒ inconsistent (the last two rows read 0 = -4 and 0 = -3).


Example 1.2.2

    [  1   1  1  1  1 |  1 ]
    [ -1  -1  0  0  1 | -1 ]
    [ -2  -2  0  0  1 |  1 ]
    [  0   0  1  1  1 |  3 ]
    [  1   1  2  2  2 |  4 ]

  ⇒ [ 1  1  1  1  1 |  1 ]
    [ 0  0  1  1  2 |  0 ]
    [ 0  0  0  0  1 | -3 ]
    [ 0  0  0  0  0 |  0 ]
    [ 0  0  0  0  0 |  0 ]

The nonzero rows correspond to the system

    x_1 + x_2 + x_3 + x_4 + x_5 = 1
               x_3 + x_4 + 2x_5 = 0
                            x_5 = -3

x_1, x_3, x_5: lead (dependent) variables; they correspond to the first nonzero entry in each
nonzero row of the augmented matrix.
x_2, x_4: free (independent) variables.
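
The lead and free variables can be read off with SymPy (a sketch; Matrix.rref() returns the reduced
row echelon form together with the pivot columns, and the augmented matrix below is the one
written above):

    import sympy as sp

    Ab = sp.Matrix([[ 1,  1, 1, 1, 1,  1],
                    [-1, -1, 0, 0, 1, -1],
                    [-2, -2, 0, 0, 1,  1],
                    [ 0,  0, 1, 1, 1,  3],
                    [ 1,  1, 2, 2, 2,  4]])
    R, pivots = Ab.rref()
    print(pivots)   # (0, 2, 4) -> columns of x1, x3, x5: the lead variables; x2, x4 are free
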

Definition 1.2.1 A matrix is said to be in row echelon form if

(1) The first nonzero entry in each row is 1.

(2) If row k does not consist entirely of zeros, then the number of leading zero entries in row
k + 1 is greater than the number of leading zero entries in row k.

(3) If there are rows whose entries are all zero, they are below the rows having nonzero entries.

Example 1.2.3
    [ 1  x  x  x  x  x  ...  x ]
    [ 0  0  1  x  x  x  ...  x ]
    [ 0  0  0  0  1  x  ...  x ]
    [ 0  0  0  0  0  0  ...  0 ]

Example 1.2.4 See EXAMPLE 2 and 3 on page 28.

Remark 1.2.1 The number of dependent variables in a system is equal to the number of nonzero
rows in its row echelon form.

Definition 1.2.2 The process of using row operations I, II, and III to transform a linear system
into one whose augmented matrix is in row echelon form is called Gaussian elimination.

(II) a. Overdetermined systems: more equations than unknowns (m > n), i.e., the number of rows is
greater than the number of columns.

Example 1.2.5 See EXAMPLE 4 on page 29.

b. Underdetermined systems: fewer equations than unknowns (m < n), i.e., the number of rows is
less than the number of columns.


⇒ An underdetermined system has either no solution or infinitely many solutions!

Example 1.2.6 See EXAMPLE 5 on page 30.

It is not possible for an underdetermined system to have a unique solution.


Explanation: Any row echelon form of the coefficient matrix of an underdetermined system
will have r nonzero rows, with r ≤ m < n. So if the system is consistent, it will always have
n - r > 0 free variables.
Note: number of free (independent) variables: n - r;
number of dependent variables: r.

Definition 1.2.3 A matrix is said to be in reduced row echelon form if:

(1) The matrix is in row echelon form.
(2) The first nonzero entry in each row is the only nonzero entry in its column.

Note: The process of using elementary row operations to transform a matrix into reduced
row echelon form is called Gauss-Jordan reduction.

Example 1.2.7 See EXAMPLE 6 on page 32.

Remark 1.2.2 The physical meaning of the dependent (or independent) variables may be easily
seen from the reduced row echelon form. For instance, the reduced row echelon form of the
system in Example 1.2.2 is

    [ 1  1  0  0  0 | -2 ]
    [ 0  0  1  1  0 |  6 ]
    [ 0  0  0  0  1 | -3 ]
    [ 0  0  0  0  0 |  0 ]
    [ 0  0  0  0  0 |  0 ]

It indicates

    x_1 + x_2 = -2,    x_3 + x_4 = 6,    x_5 = -3

If we treat x_2 = α and x_4 = β as independent variables, then

    x_1 = -2 - α,    x_3 = 6 - β,    x_5 = -3    are dependent variables.

However, we can also treat x_1 = α and x_3 = β as independent variables; then

    x_2 = -2 - α,    x_4 = 6 - β,    x_5 = -3    are dependent variables in this case.

Note that x_5 must be a dependent variable; you are not allowed to choose x_5 freely.
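
The same parametric description can be obtained with SymPy's linsolve (a sketch; the free
variables x2 and x4 reappear as the parameters of the solution set):

    import sympy as sp

    x1, x2, x3, x4, x5 = sp.symbols('x1:6')
    Ab = sp.Matrix([[1, 1, 0, 0, 0, -2],
                    [0, 0, 1, 1, 0,  6],
                    [0, 0, 0, 0, 1, -3]])     # nonzero rows of the reduced row echelon form above
    print(sp.linsolve(Ab, x1, x2, x3, x4, x5))
    # {(-x2 - 2, x2, 6 - x4, x4, -3)}
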


See APPLICATION 1 and 2.

c. Homogeneous systems:

A system of linear equations is said to be homogeneous if the constants on the right-hand
side are all zero. An m × n homogeneous system always has at least one solution, x = 0, so
it is always consistent.

Theorem 1.2.1 An m × n homogeneous system of linear equations has a nontrivial solution
if n > m.

How about the case when n ≤ m? (It does not always have a nontrivial solution.)

Example 1.2.8 Consider the following homogeneous systems (n ≤ m), where the matrix A of
each system is indicated below. Which system(s) has (have) nontrivial solutions?

         [ 1  0  0 ]                                           [ 1  0  0 ]
    (1)  [ 0  1  0 ] ;    (2)  Ax = 0, A^{-1} exists;    (3)   [ 0  1  1 ]
         [ 1  1  1 ]                                           [ 1  1  1 ]
         [ 1  1  1 ]                                           [ 1  1  1 ]

Solution: (1) The reduced row echelon form of A is

    [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]    ⇒ no nontrivial solutions; it only has the trivial solution.
    [ 0  0  0 ]

(2) No nontrivial solutions (x = A^{-1} 0 = 0).

(3) The reduced row echelon form of A is

    [ 1  0  0 ]
    [ 0  1  1 ]
    [ 0  0  0 ]    ⇒ it has many nontrivial solutions:  x = (0, -α, α)^T,  α ≠ 0.
    [ 0  0  0 ]
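
A quick SymPy check of Example 1.2.8 (a sketch; the matrices are entered as written above, and
nullspace() returns a basis of the solution set of Ax = 0, so an empty list means only the trivial
solution exists):

    import sympy as sp

    A1 = sp.Matrix([[1, 0, 0], [0, 1, 0], [1, 1, 1], [1, 1, 1]])   # system (1)
    A3 = sp.Matrix([[1, 0, 0], [0, 1, 1], [1, 1, 1], [1, 1, 1]])   # system (3)
    print(A1.nullspace())   # []                          -> only the trivial solution
    print(A3.nullspace())   # [Matrix([[0], [-1], [1]])]  -> x = (0, -a, a)^T
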

1.3 Matrix Arithmetic

Matrix and vector notations:

    A = [ a_11  a_12  ...  a_1n ]
        [ a_21  a_22  ...  a_2n ]  =  (a_ij)
        [   :     :          :  ]
        [ a_m1  a_m2  ...  a_mn ]


I. An element of R^n will be treated as an element of R^{n×1}, i.e., as a column vector; for example

    x = [ x_1 ]
        [ x_2 ]                                                       (1.3.1)
        [  :  ]
        [ x_n ]

The corresponding row vector is x^T.


II. The ith row vector of A ∈ R^{m×n} will be denoted by

    a(i,:) ≜ ā_i = ( a_i1  a_i2  ...  a_in ) ∈ R^{1×n},    i = 1, 2, ..., m

The jth column vector of A ∈ R^{m×n} will be denoted by

    a_j = [ a_1j ]
          [ a_2j ]  ,    j = 1, 2, ..., n
          [   :  ]
          [ a_mj ]

Therefore, the matrix A can be written as

    A = ( a_1  a_2  ...  a_n ) = [ ā_1 ]
                                 [ ā_2 ]
                                 [  :  ]
                                 [ ā_m ]

Definition 1.3.1 Two m × n matrices A and B are said to be equal if a_ij = b_ij for each i and j.

(I) Scalar Multiplication


αA: Multiply each of the entries of A by α.

(II) Matrix Addition

    A + B = (a_ij) + (b_ij) = (c_ij) = C,    A, B ∈ R^{m×n}

where c_ij = a_ij + b_ij, i = 1, 2, ..., m, j = 1, 2, ..., n.

Example 1.3.1
    [ 3  2  1 ]   [ 2  2  2 ]   [ 5  4  3 ]
    [ 4  5  6 ] + [ 1  2  3 ] = [ 5  7  9 ]

Note: the zero matrix 0 is the additive identity on the set of all m × n matrices, and -A is the
additive inverse of A.


Definition 1.3.2 If a_1, a_2, ..., a_n are vectors in R^m and c_1, c_2, ..., c_n are scalars, then a
sum of the form

    c_1 a_1 + c_2 a_2 + ... + c_n a_n

is said to be a linear combination of the vectors a_1, a_2, ..., a_n.


 
Suppose that A ≜ ( a_1  a_2  ...  a_n ) ∈ R^{m×n}, where the a_j ∈ R^m, 1 ≤ j ≤ n, are the column
vectors of A, and a vector x is given by (1.3.1). It is known that

    Ax = [ a_11 x_1 + a_12 x_2 + ... + a_1n x_n ]
         [ a_21 x_1 + a_22 x_2 + ... + a_2n x_n ]
         [                  :                   ]
         [ a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ]

       = x_1 [ a_11 ] + x_2 [ a_12 ] + ... + x_n [ a_1n ]
             [ a_21 ]       [ a_22 ]             [ a_2n ]
             [   :  ]       [   :  ]             [   :  ]
             [ a_m1 ]       [ a_m2 ]             [ a_mn ]

       ≜ x_1 a_1 + x_2 a_2 + ... + x_n a_n

The above equation indicates that the result of Ax is a linear combination of the column
vectors of A.
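
A small numerical confirmation that Ax equals the linear combination x_1 a_1 + ... + x_n a_n of the
columns of A (a NumPy sketch with an arbitrarily chosen A and x):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    x = np.array([2.0, -1.0, 3.0])
    combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))   # x1*a1 + x2*a2 + x3*a3
    print(np.allclose(A @ x, combo))                         # True
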

Remark 1.3.1 In fact, if A ∈ R^{m×n}, then Ax can also be written as

    Ax = [ ā_1 x ]
         [ ā_2 x ]  ∈ R^m.
         [   :   ]
         [ ā_m x ]

Theorem 1.3.1 (Consistency Theorem for Linear Systems) A linear system Ax = b is
consistent iff b can be written as a linear combination of the column vectors of A.

Example 1.3.2 See EXAMPLE 6, 7 on page 49.

(III) Matrix Multiplication

If A = (a_ij), B = (b_ij), and A ∈ R^{m×n}, B ∈ R^{n×r}, then AB = C = (c_ij), where

    c_ij = ā_i b_j = Σ_{k=1}^{n} a_ik b_kj    and    C ∈ R^{m×r}.

Example 1.3.3 See EXAMPLE 8 and 9 on page 50, 51.


Remark: In general AB ≠ BA, i.e., multiplication of matrices is not commutative.
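
Both the entrywise definition of the product and the failure of commutativity can be checked in a
few lines (a NumPy sketch; matmul below is a direct transcription of the formula for c_ij, not an
efficient implementation):

    import numpy as np

    def matmul(A, B):
        m, n = A.shape
        n2, r = B.shape
        assert n == n2                      # sizes must be compatible
        C = np.zeros((m, r))
        for i in range(m):
            for j in range(r):
                for k in range(n):
                    C[i, j] += A[i, k] * B[k, j]   # c_ij = sum_k a_ik b_kj
        return C

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(np.allclose(matmul(A, B), A @ B))   # True
    print(np.allclose(A @ B, B @ A))          # False: AB != BA in general
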

Definition 1.3.3 The transpose of an m × n matrix A is the n × m matrix B defined by

    b_ji = a_ij

for i = 1, ..., m and j = 1, ..., n. The transpose of A is denoted by A^T.

Example 1.3.4 See EXAMPLE 11 on page 56.

Definition 1.3.4 An n × n matrix A is said to be symmetric if A^T = A.

1.4 Matrix Algebra

Theorem 1.4.1 Each of the following statements is valid for any scalars α and β and for any
matrices A, B, C for which the indicated operations are defined.

(1) A + B = B + A                      (2) (A + B) + C = A + (B + C)
(3) (AB)C = A(BC)                      (4) A(B + C) = AB + AC
(5) (A + B)C = AC + BC                 (6) (αβ)A = α(βA)
(7) α(AB) = (αA)B = A(αB)              (8) (α + β)A = αA + βA
(9) α(A + B) = αA + αB

Proof:
(3) Let A ∈ R^{m×n}, B ∈ R^{n×r}, C ∈ R^{r×s}, D ≜ AB, E ≜ BC. Now we want to show DC = AE.

    d_il = Σ_{k=1}^{n} a_ik b_kl,    i = 1, ..., m,  l = 1, ..., r
    e_kj = Σ_{l=1}^{r} b_kl c_lj,    k = 1, ..., n,  j = 1, ..., s

⇒ The ijth entry of DC is

    Σ_{l=1}^{r} d_il c_lj = Σ_{l=1}^{r} ( Σ_{k=1}^{n} a_ik b_kl ) c_lj
                          = Σ_{k=1}^{n} a_ik ( Σ_{l=1}^{r} b_kl c_lj )
                          = Σ_{k=1}^{n} a_ik e_kj = the ijth entry of AE

(4) Let A = (a_ij) ∈ R^{m×n}, B = (b_ij) ∈ R^{n×r}, C = (c_ij) ∈ R^{n×r}, D ≜ A(B + C),
E ≜ AB + AC. Then

    d_ij = Σ_{k=1}^{n} a_ik (b_kj + c_kj) = Σ_{k=1}^{n} a_ik b_kj + Σ_{k=1}^{n} a_ik c_kj = e_ij

⇒ A(B + C) = AB + AC.  □


Example 1.4.1 See EXAMPLE 1 on page 63.

The Identity matrix:

    I = [ 1  0  0  ...  0 ]
        [ 0  1  0  ...  0 ]
        [ :  :  :       : ]  ∈ R^{n×n},    or  I = (δ_ij)  where  δ_ij = 1 if i = j, δ_ij = 0 if i ≠ j.
        [ 0  0  ...  1  0 ]
        [ 0  0  ...  0  1 ]

If A ∈ R^{n×n}, then AI = IA = A. In general, if B ∈ R^{m×n} and C ∈ R^{n×r}, then BI = B and IC = C.

Note: The notation for the jth column vector of I is e_j, and

    I = ( e_1  e_2  ...  e_n )

Diagonal and Triangular matrices:

A matrix A ∈ R^{n×n} is said to be upper triangular if a_ij = 0 for i > j, and lower triangular if
a_ij = 0 for i < j.

Example 1.4.2

    [ x  x  x  ...  x ]        [ x  0  0  ...  0 ]
    [ 0  x  x  ...  x ]        [ x  x  0  ...  0 ]
    [ 0  0  x  ...  x ]   ,    [ x  x  x  ...  0 ]
    [ :  :  :       : ]        [ :  :  :       : ]
    [ 0  0  0  ...  x ]        [ x  x  x  ...  x ]

A matrix A ∈ R^{n×n} is diagonal if a_ij = 0 whenever i ≠ j (⇔ it is both an upper and a lower
triangular matrix).

Matrix Inversion:

Definition 1.4.1 An n × n matrix A is said to be nonsingular or invertible if there exists a matrix
B such that AB = BA = I. The matrix B is said to be a multiplicative inverse of A.

Note: A matrix A can have at most one multiplicative inverse, which is denoted by A^{-1}.

Definition 1.4.2 An n × n matrix is said to be singular if it does not have a multiplicative inverse.

The following theorem shows that any product of nonsingular matrices is nonsingular.


Theorem 1.4.2 If A and B are nonsingular n × n matrices, then AB is also nonsingular and
(AB)^{-1} = B^{-1} A^{-1}.

Proof:
∵ A and B are nonsingular matrices
⇒ ∃ A^{-1} and B^{-1} such that A^{-1}A = AA^{-1} = I and B^{-1}B = BB^{-1} = I

    (B^{-1} A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}B = I
    (AB)(B^{-1} A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I    □
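
A numerical spot-check of Theorem 1.4.2 (a sketch; random Gaussian matrices are nonsingular with
probability 1, so invertibility is assumed rather than verified):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    lhs = np.linalg.inv(A @ B)
    rhs = np.linalg.inv(B) @ np.linalg.inv(A)
    print(np.allclose(lhs, rhs))   # True
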
Algebraic Rules for Transposes:

1. (A^T)^T = A.
2. (αA)^T = α A^T.
3. If A and B are both m × n matrices, then (A + B)^T = A^T + B^T.
4. If A ∈ R^{m×n} and B ∈ R^{n×r}, then (AB)^T = B^T A^T.

Proof:
4. Let C = AB, so that

    c_ij = Σ_{k=1}^{n} a_ik b_kj

The ijth entry of (AB)^T = C^T is

    c_ji = Σ_{k=1}^{n} a_jk b_ki

The ijth entry of B^T A^T is given by

    Σ_{k=1}^{n} (B^T)_ik (A^T)_kj = Σ_{k=1}^{n} b_ki a_jk = Σ_{k=1}^{n} a_jk b_ki = the ijth entry of C^T.

⇒ (AB)^T = B^T A^T.  □
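
Rule 4 can likewise be spot-checked numerically (a sketch with arbitrarily chosen A and B of
compatible sizes):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])    # 2 x 3
    B = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])  # 3 x 2
    print(np.allclose((A @ B).T, B.T @ A.T))            # True
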

Example 1.4.3 See EXAMPLE 6 on page 71.

1.5 Logic in the Proof of Theorems

In general, a mathematical theorem can be written in the form

    If <proposition A>, then <proposition B>,                         (1.5.1)

which means that if proposition A is satisfied, then proposition B is true. The statement (1.5.1) can
also be written as

    <proposition B> if <proposition A>.                               (1.5.2)

In (1.5.1) (or (1.5.2)), proposition A is often called the sufficient condition of the theorem. In
order to simplify the notation, we also write (1.5.1) (or (1.5.2)) as

    <proposition A> ⇒ <proposition B>.


On the other hand, some mathematical theorems are written as

    <proposition B> iff <proposition A>

or

    <proposition B> ⇔ <proposition A>.                                (1.5.3)

This means that if proposition A is satisfied, then proposition B is also fulfilled; in addition, if
proposition B is satisfied, then proposition A is also true. In this case proposition A is the sufficient
condition of the theorem, and proposition B is the necessary condition of the theorem. When
proving a theorem such as (1.5.3), one has to prove that

    <proposition A> ⇒ <proposition B>   and   <proposition B> ⇒ <proposition A>.

Sometimes we prove the theorem (1.5.1) (or (1.5.2)) by using the so-called "proof by contra-
diction". The main idea is based on the following truth table:

    p   q   p → q   ¬p   ¬q   ¬q → ¬p   ¬p → ¬q
    T   T     T      F    F       T         T
    T   F     F      F    T       F         T
    F   T     T      T    F       T         F
    F   F     T      T    T       T         T

The above truth table clearly indicates that

    p → q  ≡  ¬q → ¬p,    but    p → q  is not equivalent to  ¬p → ¬q.

Therefore one can prove (1.5.1) (or (1.5.2)) by using the following statement:

    not <proposition B> ⇒ not <proposition A>.

For example:
Suppose proposition A is true (according to the given condition); however, assume that proposition
B is not true (we initiate ¬q). Then ... (use the result or condition of ¬q and continue the proof)

    :

At the end of the proof, we find that proposition A is not true (i.e., we have proved ¬q ⇒ ¬p),
which contradicts the assumption we admitted at the beginning of this proof. Therefore, the
statement

    <proposition A> ⇒ <proposition B>

is correct (or is proved)!
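
The truth table can also be verified mechanically (a small Python sketch enumerating all four
assignments of p and q):

    from itertools import product

    def implies(a, b):
        return (not a) or b

    for p, q in product([True, False], repeat=2):
        assert implies(p, q) == implies(not q, not p)      # p -> q  ==  (not q) -> (not p)
    print(any(implies(p, q) != implies(not p, not q)
              for p, q in product([True, False], repeat=2)))   # True: the two differ somewhere
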


1.6 Elementary Matrices

(I) Equivalent Systems

Given an m × n linear system Ax = b, we can use elementary row operations to find the solutions
of this system. This procedure is equivalent to multiplying both sides of the equation Ax = b by a
sequence of nonsingular matrices E_1, E_2, ..., E_k to obtain a simpler system

    E_k E_{k-1} ... E_2 E_1 A x = E_k E_{k-1} ... E_2 E_1 b,

i.e., Ux = c, where U = E_k E_{k-1} ... E_2 E_1 A and c = E_k E_{k-1} ... E_2 E_1 b. The new system is
equivalent to the original provided that M = E_k E_{k-1} ... E_2 E_1 is nonsingular. The matrix M is
indeed nonsingular, since it is a product of nonsingular matrices E_i (Theorem 1.4.2).

(II) Elementary matrices: the matrices obtained from the identity matrix I by performing exactly one
elementary row operation. There are three types of elementary matrices:

a. Type I: obtained by interchanging two rows of I.

Premultiplying a matrix A by the identity matrix with its ith and jth rows interchanged
interchanges the ith row and the jth row of A.
Postmultiplying a matrix by the same elementary matrix interchanges the ith column and the
jth column of that matrix.

Example 1.6.1 See EXAMPLE 1 on page 76.

b. Type II: obtained by multiplying a row of I by a nonzero constant.

Premultiplying a matrix by the identity matrix whose ith row has been multiplied by a nonzero
constant multiplies the ith row of that matrix by the same constant.
Postmultiplying a matrix by the same elementary matrix multiplies the ith column of that matrix
by the same constant.

Example 1.6.2 See EXAMPLE 2 on page 77.

c. Type III: obtained by adding a multiple of one row of I to another row of I. A Type III
elementary matrix E is the identity matrix with one additional nonzero entry α in an off-diagonal
position, row i and column j (i ≠ j), i.e., E = I + α e_i e_j^T.

Premultiplying a matrix A by E adds α times the jth row of A to the ith row of A.
Postmultiplying a matrix A by E adds α times the ith column of A to the jth column of A.

Example 1.6.3 See EXAMPLE 3 on page 77.
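
The row/column behaviour of a Type III elementary matrix is easy to see numerically (a NumPy
sketch; type3 below is a small helper written for this illustration, not notation from the notes):

    import numpy as np

    def type3(n, i, j, alpha):
        E = np.eye(n)
        E[i, j] = alpha       # identity with one extra entry alpha in position (i, j)
        return E

    A = np.arange(9.0).reshape(3, 3)
    E = type3(3, 2, 0, 5.0)   # i = 2 (third row), j = 0 (first row/column), alpha = 5
    print(E @ A)              # row 2 of A becomes row 2 + 5 * row 0
    print(A @ E)              # column 0 of A becomes column 0 + 5 * column 2
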


Theorem 1.6.1 If E is an elementary matrix, then E is invertible and E^{-1} is an elementary matrix
of the same type.

Proof:
Type I: Let E interchange rows i and j. For any A, EA = B, and E(EA) = EB = A (interchanging the
same two rows twice restores A) ⇒ EE = I ⇒ E^{-1} = E.

Type II: If E is the elementary matrix of Type II formed by multiplying the ith row of I by a nonzero
scalar α, let Ē be the elementary matrix formed by multiplying the ith row of I by 1/α. Then

    Ē E = E Ē = I    ⇒    E^{-1} = Ē,

which is again a Type II elementary matrix.

Type III: Let EA = B, where E is the Type III elementary matrix with the extra entry α in position
(i, j). Then B is obtained from A by adding α × (the jth row of A) to the ith row of A. Now we
transform B back to A, which means subtracting α × (the jth row of B) from the ith row of B. So
let F be obtained from I by adding (-α) times the jth row of I to the ith row of I, i.e., F is the
identity matrix with the extra entry -α in position (i, j).

⇒ FEA = FB = A  ⇒  (FE - I)A = 0, where A is any matrix
⇒ FE = I (it can also be shown that EF = I), which means F is the inverse of E, and E^{-1} = F.  □

Definition 1.6.1 A matrix B is row equivalent to A if there exists a finite sequence E_1, E_2, ..., E_k
of elementary matrices such that

    B = E_k E_{k-1} ... E_2 E_1 A

Remark:
(i) Two augmented matrices (A|b) and (B|c) are row equivalent iff Ax = b and Bx = c are
equivalent systems.
(ii) If A is row equivalent to B, then B is row equivalent to A.
(iii) If A is row equivalent to B, and B is row equivalent to C, then A is row equivalent to C.

Theorem 1.6.2 (Equivalent Conditions for Nonsingularity) Let A ∈ R^{n×n}. The following are
equivalent:
(a) A is nonsingular.
(b) Ax = 0 has only the trivial solution 0.
(c) A is row equivalent to I.

Proof: (a) ⇒ (b): Suppose x̂ is a solution of Ax = 0; then

    x̂ = I x̂ = (A^{-1}A) x̂ = A^{-1}(A x̂) = A^{-1} 0 = 0

(b) ⇒ (c):
Using elementary row operations, the system can be transformed into Ux = 0, where U is in
row echelon form, and U must be a triangular matrix with u_ii = 1 (since A is a square matrix and
Ax = 0 has only the trivial solution 0). It follows then that I is the reduced row echelon form of A,
and hence A is row equivalent to I.
(c) ⇒ (a):
∵ A is row equivalent to I
⇒ ∃ elementary matrices E_1, E_2, ..., E_k such that A = E_k E_{k-1} ... E_2 E_1 I = E_k E_{k-1} ... E_2 E_1
∵ each E_i, 1 ≤ i ≤ k, is invertible
⇒ the product E_k E_{k-1} ... E_2 E_1 is also invertible ⇒ A is nonsingular
⇒ A^{-1} = (E_k E_{k-1} ... E_2 E_1)^{-1} = E_1^{-1} E_2^{-1} ... E_k^{-1}    □

Remark 1.6.1 In the proof of (b) ⇒ (c), suppose instead that some u_ii = 0 in U, for example

    [ 1  1  1 | 0 ]
    [ 0  0  2 | 0 ]
    [ 0  0  1 | 0 ]

which is equivalent to

    [ 1  1  0 | 0 ]
    [ 0  0  0 | 0 ]
    [ 0  0  1 | 0 ]

Then this system has many nontrivial solutions, since it only requires x_1 + x_2 = 0 and x_3 = 0.

Remark 1.6.2 If A is nonsingular, then A is row equivalent to I, i.e.,

    E_k E_{k-1} ... E_1 A = I    ⇒    E_k E_{k-1} ... E_1 I = A^{-1}

The above equation suggests that the same series of elementary row operations transforms the
augmented matrix (A | I) into (I | A^{-1}), which tells us that we are able to obtain A^{-1} by
using this method.

Example 1.6.4 see Example 4 and 5 on page 80, 81.
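
The method of Remark 1.6.2 in a few lines of SymPy (a sketch; rref() performs the Gauss-Jordan
reduction of the augmented matrix (A | I)):

    import sympy as sp

    A = sp.Matrix([[1, 2], [3, 4]])
    aug = A.row_join(sp.eye(2))      # the augmented matrix (A | I)
    R, _ = aug.rref()                # row reduce to (I | A^{-1})
    A_inv = R[:, 2:]                 # right half is the inverse
    print(A_inv)                     # Matrix([[-2, 1], [3/2, -1/2]])
    print(A * A_inv)                 # Matrix([[1, 0], [0, 1]])
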

Corollary 1.6.1 Let A ∈ R^{n×n}. The system of equations Ax = b has a unique solution iff A is
nonsingular.

Proof:
Sufficiency: if A is nonsingular ⇒ Ax = b has a unique solution:

    A^{-1} A x = A^{-1} b    ⇒    x = A^{-1} b

Necessity: Ax = b has a unique solution ⇒ A is nonsingular.

We prove it by contradiction, i.e., we show that if A is singular ⇒ Ax = b has many solutions!
∵ A is singular (¬q)
⇒ Ax = 0 has a solution z ≠ 0 (from Theorem 1.6.2, (a) ⇔ (b))
Suppose that x̂ is a solution of Ax = b, and let y = x̂ + z, y ≠ x̂. Then

    Ay = A(x̂ + z) = b + 0 = b

⇒ y is also a solution to Ax = b, i.e., Ax = b has many solutions (¬p).  □


LU Factorization

If an n × n matrix A can be reduced to strict upper triangular form using only row operation
III, then we can factor the matrix A into a product of a unit lower triangular matrix L times
a strict upper triangular matrix U. This process is often referred to as LU factorization.
Suppose that

    E_k E_{k-1} ... E_2 E_1 A = U

where the E_j, j = 1, 2, ..., k, are elementary matrices of Type III and U is a strict upper triangular
matrix. Then

    A = E_1^{-1} E_2^{-1} ... E_{k-1}^{-1} E_k^{-1} U ≜ LU

Note that if E is the Type III elementary matrix with the extra entry α in position (i, j), i.e.,
E = I + α e_i e_j^T, then

    E^{-1} = I - α e_i e_j^T,

i.e., E^{-1} has -α in the position where E has α. Since the reduction above adds multiples of a row
only to rows below it (each such α sits below the diagonal, i > j), every E_j^{-1}, and hence their
product L, is unit lower triangular.

Example 1.6.5 see Example 6 on page 82.
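
A minimal LU factorization without row interchanges (a sketch; as in the discussion above it assumes
that only Type III operations are needed, i.e. no zero pivot is encountered):

    import numpy as np

    def lu_no_pivot(A):
        n = A.shape[0]
        L, U = np.eye(n), A.astype(float)
        for j in range(n - 1):
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]   # multiplier of the type III operation
                U[i] -= L[i, j] * U[j]        # R_i <- R_i - l_ij * R_j
        return L, U

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    L, U = lu_no_pivot(A)
    print(np.allclose(L @ U, A))   # True; L is unit lower triangular, U is upper triangular
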

1.7 Partitioned Matrices

(I)

Example 1.7.1

    A = [ 1  2  4  1  3 ]
        [ 2  1  1  1  1 ]    ⇒    A = [ A_11  A_12 ]
        [ 3  3  2  1  2 ]             [ A_21  A_22 ]
        [ 4  6  2  2  4 ]

where A_11, A_12, A_21, A_22 are the four submatrices obtained by cutting A between two of its rows
and between two of its columns.

    B = ( b_1  b_2  b_3 ) = [ 1  2  1 ]
                            [ 2  3  1 ]
                            [ 1  4  1 ]

In general, if A ∈ R^{m×n} and B = ( b_1  b_2  ...  b_r ) ∈ R^{n×r} (i.e., b_i ∈ R^{n×1}), then

    AB = ( Ab_1  Ab_2  ...  Ab_r ) ∈ R^{m×r}

If B ∈ R^{n×r} and

    A = [ ā_1 ]
        [ ā_2 ]  ∈ R^{m×n},    ā_i ∈ R^{1×n},
        [  :  ]
        [ ā_m ]

then

    AB = [ ā_1 B ]
         [ ā_2 B ]  ∈ R^{m×r},    ā_i B ∈ R^{1×r},  i = 1, ..., m.
         [   :   ]
         [ ā_m B ]

(II) Block multiplication: let A ∈ R^{m×n}, B ∈ R^{n×r}.

Case 1: B = ( B_1  B_2 ) ∈ R^{n×r}, B_1 ∈ R^{n×t}, B_2 ∈ R^{n×(r-t)}  ⇒

    AB = A ( B_1  B_2 ) = ( AB_1  AB_2 )

Case 2:

    A = [ A_1 ] ∈ R^{m×n},    A_1 ∈ R^{k×n},  A_2 ∈ R^{(m-k)×n}
        [ A_2 ]

    ⇒  AB = [ A_1 ] B = [ A_1 B ]
            [ A_2 ]     [ A_2 B ]

Case 3:

    A = ( A_1  A_2 ),  B = [ B_1 ],   A_1 ∈ R^{m×s},  A_2 ∈ R^{m×(n-s)},  B_1 ∈ R^{s×r},  B_2 ∈ R^{(n-s)×r}
                           [ B_2 ]

    ⇒  AB = ( A_1  A_2 ) [ B_1 ] = A_1 B_1 + A_2 B_2
                         [ B_2 ]

Case 4:

    A = [ A_11  A_12 ]  } k rows         B = [ B_11  B_12 ]  } s rows
        [ A_21  A_22 ]  } m-k rows           [ B_21  B_22 ]  } n-s rows
          s    n-s  columns                    t    r-t  columns

    ⇒  AB = [ A_11 B_11 + A_12 B_21    A_11 B_12 + A_12 B_22 ]
            [ A_21 B_11 + A_22 B_21    A_21 B_12 + A_22 B_22 ]

Example 1.7.2 See EXAMPLE 1 on page 90.
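
Case 4 can be verified directly with NumPy slicing (a sketch with arbitrarily chosen block sizes
k, s, t):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 4))
    B = rng.standard_normal((4, 6))
    k, s, t = 2, 3, 2    # k: rows of A11; s: columns of A11 (= rows of B11); t: columns of B11
    A11, A12, A21, A22 = A[:k, :s], A[:k, s:], A[k:, :s], A[k:, s:]
    B11, B12, B21, B22 = B[:s, :t], B[:s, t:], B[s:, :t], B[s:, t:]
    top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
    bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
    print(np.allclose(np.vstack([top, bottom]), A @ B))   # True
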


Example 1.7.3 Let A ∈ R^{n×n},

    A = [ A_11    0  ],    A_11 ∈ R^{k×k},  k < n.
        [   0   A_22 ]

Show that A is nonsingular iff A_11 and A_22 are nonsingular.

Proof: (if)
∵ A_11 and A_22 are nonsingular
⇒ A_11^{-1} and A_22^{-1} exist
⇒
    [ A_11^{-1}     0     ] [ A_11    0  ]  =  [ I_k     0    ]  =  I
    [     0     A_22^{-1} ] [   0   A_22 ]     [  0   I_{n-k} ]

and

    [ A_11    0  ] [ A_11^{-1}     0     ]  =  [ I_k     0    ]  =  I
    [   0   A_22 ] [     0     A_22^{-1} ]     [  0   I_{n-k} ]

⇒
    A^{-1} = [ A_11^{-1}     0     ]
             [     0     A_22^{-1} ]

and A is nonsingular.

(only if)
∵ A is nonsingular ⇒ A^{-1} exists.
Let

    A^{-1} ≜ B = [ B_11  B_12 ]
                 [ B_21  B_22 ]

⇒ BA = AB = I:

    [ B_11  B_12 ] [ A_11    0  ]  =  [ B_11 A_11   B_12 A_22 ]  =  [ I_k     0    ]
    [ B_21  B_22 ] [   0   A_22 ]     [ B_21 A_11   B_22 A_22 ]     [  0   I_{n-k} ]

    [ A_11    0  ] [ B_11  B_12 ]  =  [ A_11 B_11   A_11 B_12 ]  =  [ I_k     0    ]
    [   0   A_22 ] [ B_21  B_22 ]     [ A_22 B_21   A_22 B_22 ]     [  0   I_{n-k} ]

⇒ A_11 B_11 = B_11 A_11 = I_k  ⇒  B_11 is the inverse of A_11.
⇒ A_22 B_22 = B_22 A_22 = I_{n-k}  ⇒  B_22 is the inverse of A_22.  □
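
A numerical illustration of Example 1.7.3 (a sketch; scipy.linalg.block_diag is used only to
assemble the block diagonal matrices):

    import numpy as np
    from scipy.linalg import block_diag

    A11 = np.array([[2.0, 1.0], [1.0, 1.0]])
    A22 = np.array([[3.0]])
    A = block_diag(A11, A22)
    expected = block_diag(np.linalg.inv(A11), np.linalg.inv(A22))
    print(np.allclose(np.linalg.inv(A), expected))   # True
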

