Ch-5 Gram-Schmidt Process for Orthonormal Basis(1)
Courtesy:- Material for this lecture is selected from Kolman's book and from handouts of Virtual University Lahore, Virtual COMSATS, and Imperial College London.
Objective of this Lecture:-
(A) The prime objective of this chapter is to convert a set of linearly independent vectors (a basis) into a set of orthonormal vectors (an orthonormal basis).
(B) To achieve this goal, we first need to define several building blocks: the dot product (inner product), the length (norm or magnitude) of a vector, unit vectors, the distance between vectors, the projection of one vector onto another, orthogonal vectors, and orthonormal vectors.
Section 5.1:
Definition 1: (Dot or inner or scalar or projection product of vectors in terms of
vector components)
Let $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$ and regard them as $n \times 1$ matrices (column vectors):
$$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}, \qquad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}.$$
Clearly $\mathbf{u}^T = [u_1 \;\; u_2 \;\; \cdots \;\; u_n]$.
Now the dot (or inner) product is a scalar quantity, denoted by $\mathbf{u} \cdot \mathbf{v}$ or $(\mathbf{u}, \mathbf{v})$, and is defined as the matrix product
$$\mathbf{u} \cdot \mathbf{v} = (\mathbf{u}, \mathbf{v}) = \mathbf{u}^T \mathbf{v} = [u_1 \;\; u_2 \;\; \cdots \;\; u_n] \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
Note: Although the notation $\mathbf{u} \cdot \mathbf{v}$ is more familiar, we will use the notation $(\mathbf{u}, \mathbf{v})$ for the dot/inner product to maintain consistency with Kolman's book.
Example 1: Let $\mathbf{u} = \begin{bmatrix} 6 \\ 4 \\ 2 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 4 \\ -1 \\ 8 \end{bmatrix}$; then
$$(\mathbf{u}, \mathbf{v}) = 6(4) + 4(-1) + 2(8) = 36 = (\mathbf{v}, \mathbf{u}) \quad \text{(the inner product is commutative)}.$$
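The notes contain no code, but the component formula above is easy to check mechanically. As a minimal sketch (not part of the original notes), the following Python function computes $(\mathbf{u}, \mathbf{v})$ and verifies Example 1 together with commutativity:

```python
# Inner product (u, v) = u1*v1 + u2*v2 + ... + un*vn, using plain lists.

def inner(u, v):
    """Dot/inner product of two vectors given as equal-length lists."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return sum(ui * vi for ui, vi in zip(u, v))

u = [6, 4, 2]
v = [4, -1, 8]
print(inner(u, v))                  # 36, as in Example 1
print(inner(u, v) == inner(v, u))   # True: the inner product is commutative
```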
Definition 2: (Length or norm or magnitude of vector)
Let $\mathbf{v} = [v_1, v_2, \dots, v_n]^T \in \mathbb{R}^n$. The length of this vector is a scalar quantity, denoted by $\|\mathbf{v}\|$, and is defined as
$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}.$$
When $n = 2$ or $3$, i.e., our vector is from $\mathbb{R}^2$ (2D) or $\mathbb{R}^3$ (3D), this coincides with the usual geometric length of the vector.
For example, if $\mathbf{u} = \left[\tfrac{1}{3}, \tfrac{2}{3}, -\tfrac{2}{3}\right]^T$, then
$$\|\mathbf{u}\| = \sqrt{\left(\tfrac{1}{3}\right)^2 + \left(\tfrac{2}{3}\right)^2 + \left(-\tfrac{2}{3}\right)^2} = \sqrt{\tfrac{1}{9} + \tfrac{4}{9} + \tfrac{4}{9}} = 1,$$
so $\mathbf{u}$ is a unit vector.
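As a quick numerical check of the norm definition (a sketch added here, not from the original notes), the computation above can be reproduced in Python; note that floating-point rounding makes the result only approximately 1:

```python
# Euclidean norm ||v|| = sqrt(v1^2 + ... + vn^2).
import math

def norm(v):
    """Length (norm) of a vector given as a list of components."""
    return math.sqrt(sum(vi * vi for vi in v))

u = [1/3, 2/3, -2/3]
print(norm(u))  # ≈ 1.0 (up to floating-point rounding): u is a unit vector
```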
Now we have enough tools and have finally arrived at our core problem.
Gram-Schmidt Process to obtain orthonormal basis:-
Life is much easier when working with orthonormal bases. We observed that with the natural/standard bases of $\mathbb{R}^2$ and $\mathbb{R}^3$ the computations are kept to a minimum. (See my previous recording.)
Recall (1): If $S = \{e_1, e_2, e_3\}$ is the standard basis of $\mathbb{R}^3$, where
$$e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$$
then any vector $\mathbf{v} = \begin{bmatrix} a \\ b \\ c \end{bmatrix} \in \mathbb{R}^3$ can directly be written as a linear combination of the basis vectors, i.e., $\mathbf{v} = a e_1 + b e_2 + c e_3$. (The coefficients of the basis vectors are the coordinates of $\mathbf{v}$.)
Recall (2): If the set $S = \{V_1, V_2, V_3\}$ is any basis of $\mathbb{R}^3$ and $\mathbf{v} \in \mathbb{R}^3$, then $\mathbf{v}$ can be written as $\mathbf{v} = a V_1 + b V_2 + c V_3$ (see Example 1 (LC)). Here we have to solve a non-homogeneous system to obtain the coordinates of $\mathbf{v}$.
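To make Recall (2) concrete, here is a hedged sketch (the basis below is a made-up example, not one from the notes) of the non-homogeneous system one must solve for a general basis; for brevity the $3 \times 3$ system is solved by Cramer's rule:

```python
# With an arbitrary (non-orthonormal) basis {V1, V2, V3} of R^3, the
# coordinates (a, b, c) of v satisfy a*V1 + b*V2 + c*V3 = v, a 3x3
# non-homogeneous linear system, solved here via Cramer's rule.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def coordinates(V1, V2, V3, v):
    """Solve a*V1 + b*V2 + c*V3 = v for (a, b, c)."""
    A = [[V1[r], V2[r], V3[r]] for r in range(3)]  # basis vectors as columns
    D = det3(A)
    coords = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for r in range(3):
            Aj[r][j] = v[r]                        # replace column j with v
        coords.append(det3(Aj) / D)
    return coords

# Hypothetical basis of R^3 (for illustration only):
V1, V2, V3 = [1, 1, 0], [0, 1, 1], [1, 0, 1]
print(coordinates(V1, V2, V3, [2, 3, 3]))          # [1.0, 2.0, 1.0]
```

Contrast this with Theorem 5.5 below: for an orthonormal basis no system needs to be solved at all.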
The following theorem shows the advantage of an orthonormal basis over an arbitrary basis of a vector space.
Theorem 5:- (Theorem 5.5 in the book)
Let $S = \{V_1, V_2, \dots, V_n\}$ be an orthonormal basis of a finite-dimensional Euclidean space $V$, and let $\mathbf{v}$ be any vector in $V$. Then
$$\mathbf{v} = c_1 V_1 + c_2 V_2 + c_3 V_3 + \cdots + c_n V_n,$$
where $c_i = (\mathbf{v}, V_i)$, $i = 1, 2, 3, \dots, n$. For instance, taking the inner product of both sides with $V_1$ gives
$$(\mathbf{v}, V_1) = c_1 (V_1, V_1) = c_1 \|V_1\|^2 = c_1,$$
since $S$ is orthonormal, so $(V_i, V_j) = 0$ for $i \neq j$ and $\|V_i\| = 1$.
Note:- With an orthonormal basis we only need to compute a few dot products.
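The theorem can be illustrated in a few lines of Python. This is a sketch with an orthonormal basis chosen for illustration (it is not an example from the notes): the coordinates are obtained purely from dot products, and the linear combination reconstructs the vector.

```python
# Theorem 5.5 in action: with an orthonormal basis, c_i = (v, V_i),
# and v = c1*V1 + ... + cn*Vn -- no linear system to solve.
import math

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

s = 1 / math.sqrt(2)
# An orthonormal basis of R^3 (illustrative choice):
V1, V2, V3 = [s, s, 0], [s, -s, 0], [0, 0, 1]

v = [3, 1, 5]
c = [inner(v, Vi) for Vi in (V1, V2, V3)]   # coordinates via dot products

# Reconstruct v from its coordinates: v = c1*V1 + c2*V2 + c3*V3.
recon = [sum(ci * Vi[k] for ci, Vi in zip(c, (V1, V2, V3))) for k in range(3)]
print(recon)  # ≈ [3.0, 1.0, 5.0], up to floating-point rounding
```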
Example 7:-
Solution:-
$$W = \{[a \;\; b \;\; c \;\; d] : a = b + 2c - d\}.$$
Every vector of $W$ has the form
$$[b + 2c - d \;\; b \;\; c \;\; d] = b[1 \;\; 1 \;\; 0 \;\; 0] + c[2 \;\; 0 \;\; 1 \;\; 0] + d[-1 \;\; 0 \;\; 0 \;\; 1].$$
Solution: Solve using the augmented matrix $[A \mid O]$ (do yourself). The general solution is
$$X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -4r \\ 5r \\ r \end{bmatrix} = r \begin{bmatrix} -4 \\ 5 \\ 1 \end{bmatrix},$$
so a basis of the solution (null) space is
$$S = \left\{ \begin{bmatrix} -4 \\ 5 \\ 1 \end{bmatrix} \right\}.$$
Applying the Gram-Schmidt process to this single basis vector, take
$$u_1 = \begin{bmatrix} -4 \\ 5 \\ 1 \end{bmatrix}, \qquad v_1 = u_1 = \begin{bmatrix} -4 \\ 5 \\ 1 \end{bmatrix},$$
and normalize:
$$w_1 = \frac{v_1}{\|v_1\|} = \frac{1}{\sqrt{42}} \begin{bmatrix} -4 \\ 5 \\ 1 \end{bmatrix}.$$
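The steps above (orthogonalize, then normalize) can be sketched as a general Gram-Schmidt routine in Python. This is a minimal illustration, not the book's presentation; applied to the single vector $u_1 = (-4, 5, 1)$ it reproduces $w_1 = \frac{1}{\sqrt{42}}(-4, 5, 1)$.

```python
# Gram-Schmidt: subtract projections onto earlier orthonormal vectors,
# then divide by the length, yielding an orthonormal list.
import math

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthonormal list."""
    ortho = []
    for u in vectors:
        v = list(u)
        for w in ortho:
            c = inner(v, w)                         # projection coefficient
            v = [vi - c * wi for vi, wi in zip(v, w)]
        n = math.sqrt(inner(v, v))                  # ||v||
        ortho.append([vi / n for vi in v])
    return ortho

w1 = gram_schmidt([[-4, 5, 1]])[0]
print(w1)  # a unit vector proportional to (-4, 5, 1), i.e. (1/sqrt(42))*(-4, 5, 1)
```

With a single vector the process reduces to normalization, exactly as in the worked example; with more vectors, each new one first has its components along the earlier ones removed.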