
≻≻≻≻—≻≻≻≻—Lecture Nine —≺≺≺≺—≺≺≺≺

Chapter Four: The Euclidean Space

Review. An n × 1 matrix over R is called an n-vector of R.

Given an n-vector u = (u1, u2, . . . , un)^T, we call ui the i-th component of u.

The set of all n-vectors of R is called the n-dimensional Euclidean space, denoted by Rn.

Linear Algebra lecture 9−1


   
Review. Let u = (u1, u2, . . . , un)^T and v = (v1, v2, . . . , vn)^T be vectors in Rn and c be a scalar (c ∈ R).
Then the sum of u and v is again an n-vector defined by u + v = (u1 + v1, u2 + v2, . . . , un + vn)^T, and
the scalar multiple of u by c is an n-vector defined by cu = (cu1, cu2, . . . , cun)^T. Note
(α) Rn is closed under the operation of vector addition and
(β) Rn is closed under the operation of scalar multiplication.
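The componentwise operations above can be spot-checked in plain Python (a minimal sketch; the helper names vec_add and scalar_mul are my own, not from the lecture):

```python
# Componentwise vector addition and scalar multiplication in R^n,
# with n-vectors represented as plain Python lists.

def vec_add(u, v):
    # u + v = (u1 + v1, ..., un + vn)^T
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(c, u):
    # cu = (c*u1, ..., c*un)^T
    return [c * ui for ui in u]

u = [1, 2, 3]
v = [4, 5, 6]
print(vec_add(u, v))     # [5, 7, 9]
print(scalar_mul(2, u))  # [2, 4, 6]
```

Closure under both operations is visible here: each result is again a list of n real numbers, i.e. an n-vector.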



Furthermore,
(a) for all u, v ∈ Rn, we have u + v = v + u.
(b) for all u, v, w ∈ Rn, we have (u + v) + w = u + (v + w).
(c) 0 is the (unique) vector in Rn satisfying u + 0 = u for all u ∈ Rn.
(d) For each u ∈ Rn, −u ≡ (−1)u is the (unique) vector in Rn satisfying u + (−u) = 0.
(e) for all u, v ∈ Rn and c ∈ R, we have c(u + v) = cu + cv.
(f) for all u ∈ Rn and c, d ∈ R, we have (c + d)u = cu + du.
(g) for all u ∈ Rn and c, d ∈ R, we have (cd)u = c(du).
(h) for all u ∈ Rn, we have 1u = u.



Definition. An inner product on Rn is a function ⟨·, ·⟩ from Rn × Rn to R satisfying
(a) (positive-definiteness) ⟨u, u⟩ ≥ 0 for all u ∈ Rn, and ⟨u, u⟩ = 0 if and only if u = 0;
(b) (symmetry) ⟨u, v⟩ = ⟨v, u⟩ for all u, v ∈ Rn;
(c) (linearity) ⟨cu + w, v⟩ = c⟨u, v⟩ + ⟨w, v⟩ for all u, v, w ∈ Rn and c ∈ R.

Example. The dot product (Euclidean inner product) of the n-vectors u and v, defined by
⟨u, v⟩ = u · v = u1v1 + u2v2 + · · · + unvn, is an inner product.
Proof. Skip.
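The omitted proof can be spot-checked numerically for sample vectors (this verifies instances of the axioms, not the axioms themselves; dot is my own helper name):

```python
# The dot product <u, v> = u1*v1 + ... + un*vn, with a numeric spot-check
# of symmetry and linearity on sample vectors.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v, w, c = [1, 2, 3], [4, -5, 6], [0, 1, -1], 3

# (b) symmetry: <u, v> = <v, u>
print(dot(u, v) == dot(v, u))  # True

# (c) linearity: <cu + w, v> = c<u, v> + <w, v>
cu_plus_w = [c * ui + wi for ui, wi in zip(u, w)]
print(dot(cu_plus_w, v) == c * dot(u, v) + dot(w, v))  # True
```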

Theorem 4.2.4. (Cauchy–Schwarz inequality) ⟨u, v⟩² ≤ ⟨u, u⟩⟨v, v⟩ for all u, v ∈ Rn.

Proof. Skip.
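A quick numeric check of the inequality for the dot product on sample vectors (an illustration, not a proof):

```python
# Spot-check of the Cauchy-Schwarz inequality <u, v>^2 <= <u, u><v, v>
# for the Euclidean inner product.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v = [1, 2, 3], [4, -5, 6]
lhs = dot(u, v) ** 2          # <u, v>^2  = 12^2 = 144
rhs = dot(u, u) * dot(v, v)   # <u, u><v, v> = 14 * 77 = 1078
print(lhs <= rhs)             # True
```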
Definition. A function || · || from Rn to R is called a norm (length) provided that
(a) ||u|| ≥ 0 for all u ∈ Rn and ||u|| = 0 if and only if u = 0.
(b) ||cu|| = |c| ||u|| for all u ∈ Rn and c ∈ R.
(c) (triangle inequality) ||u + v|| ≤ ||u|| + ||v|| for all u, v ∈ Rn.

Example. The Euclidean norm (L2 norm) defined by ||u|| = ||u||2 = √(u1² + · · · + un²) is a
norm in Rn.
Proof. Skip.

Example. The taxicab norm or Manhattan norm (L1 norm) defined by ||u|| = ||u||1 =
|u1| + |u2| + · · · + |un| is a norm in Rn.
Proof. Skip.
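The two norms can be sketched in Python on a 2-vector where both are easy to check by hand (norm2 and norm1 are my own helper names):

```python
# The Euclidean (L2) and taxicab (L1) norms of an n-vector.
import math

def norm2(u):
    # ||u||_2 = sqrt(u1^2 + ... + un^2)
    return math.sqrt(sum(ui * ui for ui in u))

def norm1(u):
    # ||u||_1 = |u1| + ... + |un|
    return sum(abs(ui) for ui in u)

u = [3, -4]
print(norm2(u))  # 5.0
print(norm1(u))  # 7
```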



Fact. Given an inner product ⟨·, ·⟩, we can define a norm by ||u|| = √⟨u, u⟩. This
norm || · || is said to be induced by the inner product ⟨·, ·⟩.

Proof. 1◦ "well-defined": Since ⟨u, u⟩ ≥ 0 for all u, ||u|| = √⟨u, u⟩ is well-defined.
2◦ "||u|| ≥ 0, and ||u|| = 0 if and only if u = 0":
First it is obvious that ||u|| ≥ 0.
Furthermore, since ⟨·, ·⟩ is an inner product,
it follows immediately that ||u|| = √⟨u, u⟩ = 0 if and only if u = 0.
3◦ "||cu|| = |c| ||u||":
By definition, we have ||cu|| = √⟨cu, cu⟩.
By symmetry and linearity of the inner product, we have
√⟨cu, cu⟩ = √(c²⟨u, u⟩) = |c|√⟨u, u⟩ = |c| ||u||.
4◦ "||u + v|| ≤ ||u|| + ||v||":
By symmetry and linearity of the inner product, we have
||u + v||² = ⟨u + v, u + v⟩ = ⟨u, u⟩ + 2⟨u, v⟩ + ⟨v, v⟩.
It follows from the Cauchy–Schwarz inequality that
⟨u, u⟩ + 2⟨u, v⟩ + ⟨v, v⟩ ≤ ⟨u, u⟩ + 2√(⟨u, u⟩⟨v, v⟩) + ⟨v, v⟩.
We obtain ||u + v||² ≤ (√⟨u, u⟩ + √⟨v, v⟩)² = (||u|| + ||v||)².



Example. The Euclidean norm ||u||2 = √(u1² + u2² + · · · + un²) is induced from the dot
product u · v = u1v1 + u2v2 + · · · + unvn.

Fact. (Parallelogram law) If || · || is a norm induced from an inner product ⟨·, ·⟩, then
one has ||u + v||² + ||u − v||² = 2(||u||² + ||v||²).

Proof. Skip.
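A numeric spot-check makes the point of this fact concrete: the law holds for the L2 norm (induced by the dot product) but fails for the L1 norm on the sample pair below, so the L1 norm cannot be induced by any inner product. The helper names are my own:

```python
# Parallelogram law check: ||u + v||^2 + ||u - v||^2 = 2(||u||^2 + ||v||^2).
import math

def norm2(u):
    return math.sqrt(sum(ui * ui for ui in u))

def norm1(u):
    return sum(abs(ui) for ui in u)

u, v = [1, 0], [0, 1]
upv, umv = [1, 1], [1, -1]  # u + v and u - v

# L2 norm: both sides equal 4 (up to floating-point rounding).
lhs2 = norm2(upv) ** 2 + norm2(umv) ** 2
rhs2 = 2 * (norm2(u) ** 2 + norm2(v) ** 2)
print(abs(lhs2 - rhs2) < 1e-12)  # True

# L1 norm: left side is 8, right side is 4 -- the law fails.
lhs1 = norm1(upv) ** 2 + norm1(umv) ** 2
rhs1 = 2 * (norm1(u) ** 2 + norm1(v) ** 2)
print(lhs1 == rhs1)              # False
```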



Definition. A function d : Rn × Rn −→ R is called a distance (metric) if d satisfies
(a) d(u, v) ≥ 0 for all u, v ∈ Rn, and d(u, v) = 0 if and only if u = v;
(b) d(u, v) = d(v, u) for all u, v ∈ Rn;
(c) (triangle inequality) d(u, v) ≤ d(u, w) + d(w, v) for all u, v, w ∈ Rn.
Also, we call (Rn, d) a metric space.

Example. The Euclidean distance (L2 distance, L2 metric) is defined as
d(u, v) = √((u1 − v1)² + (u2 − v2)² + · · · + (un − vn)²).

Example. The taxicab metric (Manhattan distance, L1 distance) is d(u, v) = |u1 − v1| +
|u2 − v2| + · · · + |un − vn|.
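Both distances are just the corresponding norm applied to u − v, which can be sketched as (dist2 and dist1 are my own helper names):

```python
# Euclidean (L2) and taxicab (L1) distances between two points of R^n.
import math

def dist2(u, v):
    # sqrt((u1 - v1)^2 + ... + (un - vn)^2)
    return math.sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))

def dist1(u, v):
    # |u1 - v1| + ... + |un - vn|
    return sum(abs(ui - vi) for ui, vi in zip(u, v))

u, v = [1, 1], [4, 5]
print(dist2(u, v))  # 5.0 (a 3-4-5 right triangle)
print(dist1(u, v))  # 7   (3 blocks east plus 4 blocks north)
```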



Fact. Given a norm || · ||, we can define a distance by d(u, v) = ||u − v||. The metric
d is said to be induced by the norm || · ||.
Proof. Skip.

Example. The Euclidean distance is induced by the Euclidean norm and the taxicab metric
is induced by the taxicab norm.

Fact. If a distance d(·, ·) is induced by a norm || · ||, then we have


(1) (translation invariance) d(u, v) = d(u + w, v + w);
(2) (homogeneity) d(cu, cv) = |c|d(u, v).

Proof. Skip.



Definition. Two nonzero vectors u and v in Rn are said to be parallel if u = tv for some
t ∈ R and they are parallel in the same direction if t > 0 and in the opposite direction if
t < 0.

Definition. A vector u in Rn is said to be a unit vector provided that ||u|| = 1.

Note. Given u ≠ 0 ∈ Rn, u/||u|| is the only unit vector parallel to u in the same direction.

Definition. Two vectors u and v in Rn are said to be orthogonal if u · v = u1v1 + · · · +
unvn = 0 (that is, ⟨u, v⟩ = 0).
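Normalization and the orthogonality test can be sketched together (normalize and dot are my own helper names):

```python
# Normalizing a nonzero vector to the unit vector u/||u||, and checking
# orthogonality of two sample vectors via the dot product.
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(u):
    # u/||u||: the unique unit vector parallel to u in the same direction
    n = math.sqrt(dot(u, u))
    return [ui / n for ui in u]

print(normalize([3, 4]))          # [0.6, 0.8], which has Euclidean norm 1
print(dot([1, 2], [-2, 1]) == 0)  # True: (1, 2) and (-2, 1) are orthogonal
```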



Definition. A function of Rn into Rm is a map assigning a unique vector (point) f (v) in
Rm for each vector (point) v in Rn. The vector f (v) is called the image of v. The set of all
images of the vectors in Rn is simply called the image (range) of f , Rf = {f (v) : v ∈ Rn}.
Given V ⊆ Rn, the image of V under f is defined as f (V ) = {f (v) : v ∈ V }. Given
W ⊆ Rm, the inverse image of W under f is f −1(W ) = {v : f (v) ∈ W }.

Definition. A linear transformation L of Rn into Rm is a function L : Rn −→ Rm


satisfying (a) L(v + u) = L(v) + L(u) for all v, u ∈ Rn.
(b) L(cv) = cL(v) for all v ∈ Rn and c ∈ R.

Remark 4.3.ET4. The above conditions (a) and (b) are equivalent to the condition (ab)
L(cv + u) = cL(v) + L(u) for all v, u ∈ Rn and c ∈ R

(or (ab′) L(cv + du) = cL(v) + dL(u) for all v, u ∈ Rn and c, d ∈ R).



" #!
v1
Example. (Reflection with respect to x-axis in R2) L : R2 −→ R2 defined by L =
v2
" #
v1
is linear.
−v2

Example. (Projection of points in R3 into the xy-plane) L : R3 −→ R2 defined by


 
v1 " #
v1
L  v2  = is linear.
 
v2
v3
Example. (Scalar multiplication, dilation, contraction) Given r ∈ R, L : Rn −→ Rn
defined by L(v) = rv is linear.

Example. Given r ∈ R and b 6= 0m ∈ Rn, L : Rn −→ Rn defined by L(v) = rv + b is


not linear because L(2v) = r(2v) + b 6= 2(rv + b) = 2L(v).
Fact. Given an m × n matrix A, the function L : Rn −→ Rm defined by L(v) = Av is a
linear transformation.
Proof. 1◦ From matrix multiplication properties,
we immediately have L(v + u) = A(v + u) = Av + Au = L(v) + L(u)
and L(cv) = A(cv) = c(Av) = cL(v) for all v, u ∈ Rn and c ∈ R.

Theorem 4.3.6&7. If L : Rn −→ Rm is a linear transformation, then


(a) L(c1v1 + c2v2 + · · · + ck vk ) = c1L(v1) + c2L(v2) + · · · + ck L(vk ) for any vectors
v1, v2, . . . , vk in Rn and any scalars c1, c2, . . . , ck .
(b) L(v − u) = L(v) − L(u) for all u, v in Rn.
(c) L(0n) = 0m.

Proof. Skip.



Corollary 4.2.1. Let L : Rn −→ Rm be a function. If L(0n) ≠ 0m, then L is not a
linear transformation.

Example. Given r ∈ R and b ≠ 0n ∈ Rn, L : Rn −→ Rn defined by L(v) = rv + b is
not linear because L(0n) = b ≠ 0n.

Notation. In Rn, ei, for i = 1, . . . , n, denotes the vector whose i-th component is 1
and whose other components are 0.

Corollary. If L : Rn −→ Rm is a linear transformation, then L(v) = v1L(e1) +
v2L(e2) + · · · + vnL(en).



Example 4.3.2. Let L : R3 −→ R2 be a linear transformation for which we know that
L((1, 0, 0)^T) = (2, −1)^T, L((0, 1, 0)^T) = (3, 1)^T, and L((0, 0, 1)^T) = (−1, 2)^T.
Please (a) find L((−3, 4, 2)^T) and (b) express L((v1, v2, v3)^T).

(a) L((−3, 4, 2)^T) = L((−3)(1, 0, 0)^T + 4(0, 1, 0)^T + 2(0, 0, 1)^T)
= (−3)L((1, 0, 0)^T) + 4L((0, 1, 0)^T) + 2L((0, 0, 1)^T)
= (−3)(2, −1)^T + 4(3, 1)^T + 2(−1, 2)^T = (4, 11)^T.
        
v1 1 0 0
(b) L  v2  = L v1  0  + v2  1  + v3  0 
        

v3 0 0 1
     
1 0 0
= v1L  0  + v2L  1  + v3L  0 
     

0 0 1
" # " # " # " #
2 3 −1 2v1 + 3v2 − v3
= v1 + v2 + v3 =
−1 1 2 −v1 + v2 + 2v3
 
" # v1
2 3 −1  
=  v2  .
−1 1 2
v3



Theorem 4.3.8. Let L : Rn −→ Rm be a linear transformation. Then there exists
a unique m × n matrix A such that L(v) = Av for all v ∈ Rn. Furthermore, A has
columns L(e1), L(e2), . . . , L(en).

Proof. 1◦ "existence"
Given any v ∈ Rn, note v = v1e1 + v2e2 + · · · + vnen.
Hence by linearity of L, we have L(v) = L(v1e1 + v2e2 + · · · + vnen)
= v1L(e1) + v2L(e2) + · · · + vnL(en) = [L(e1) L(e2) · · · L(en)] v = Av,
where A = [L(e1) L(e2) · · · L(en)], whose size is m × n.

2◦ "uniqueness"
Suppose we also have L(v) = Bv for all v ∈ Rn.
Then (A − B)v = 0m for all v ∈ Rn.
In particular, letting v = ei, i = 1, 2, . . . , n,
we obtain that the ith column of A − B is (A − B)ei = 0m.
Hence A and B agree.

Definition. The matrix A = [L(e1) L(e2) · · · L(en)] is called the standard matrix
representing L.
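The construction in the existence proof can be sketched directly: build A column by column from the images L(e1), . . . , L(en) and check that Av reproduces L(v). As a concrete L we reuse the map of Example 4.3.2 (the helper names are my own):

```python
# Standard matrix A = [L(e1) L(e2) ... L(en)] of the linear map
# L(v1, v2, v3) = (2*v1 + 3*v2 - v3, -v1 + v2 + 2*v3) from Example 4.3.2.

def L(v):
    return [2 * v[0] + 3 * v[1] - v[2], -v[0] + v[1] + 2 * v[2]]

n, m = 3, 2
# The columns of A are the images of the standard basis vectors e1, ..., en.
cols = [L([1 if j == i else 0 for j in range(n)]) for i in range(n)]
A = [[cols[j][i] for j in range(n)] for i in range(m)]  # stored row by row
print(A)  # [[2, 3, -1], [-1, 1, 2]]

def matvec(A, v):
    # Matrix-vector product Av
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

v = [-3, 4, 2]
print(matvec(A, v) == L(v))  # True; both equal [4, 11], as in Example 4.3.2(a)
```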

