

21-241: Matrix Algebra – Summer I, 2006

Practice Exam 2
   
   
1. Let v1 = (1, 0, −1)^T, v2 = (2, 1, 3)^T, v3 = (4, 2, 6)^T, and w = (3, 1, 2)^T.
(a) Is w in {v1 , v2 , v3 }? How many vectors are in {v1 , v2 , v3 }?
(b) How many vectors are in span {v1 , v2 , v3 }?
(c) Is w in the subspace spanned by {v1 , v2 , v3 }? Why?

Solution.

(a) No. {v1, v2, v3} is a set containing only the three vectors v1, v2, v3. Clearly, w equals none of these three, so w ∉ {v1, v2, v3}.
(b) span{v1, v2, v3} is the set containing ALL possible linear combinations of v1, v2, v3. In particular, every scalar multiple of v1, say 2v1, 3v1, 4v1, · · · , is in the span. This implies span{v1, v2, v3} contains infinitely many vectors.
(c) To determine whether w belongs to span{v1, v2, v3}, we try to write w as a linear combination of v1, v2, v3. For this purpose, we need to find three scalars c1, c2, c3 such that w = c1 v1 + c2 v2 + c3 v3. This amounts to solving the system Ac = w for c = (c1, c2, c3)^T, where A = (v1 v2 v3). Note that we only need to determine whether this system has a solution. Now apply Gaussian elimination to reduce the augmented matrix to echelon form:
     
[  1 3 4 | 3 ]
becomes, step by step,

[  1 2 4 | 3 ]   R3 + R1   [ 1 2 4  | 3 ]   R3 − 5R2   [ 1 2 4 | 3 ]
[  0 1 2 | 1 ]  -------->  [ 0 1 2  | 1 ]  --------->  [ 0 1 2 | 1 ]
[ −1 3 6 | 2 ]             [ 0 5 10 | 5 ]              [ 0 0 0 | 0 ]

The bottom row does not lead to an inconsistency, so the system has a solution (in fact, infinitely many). This shows that w is in the subspace spanned by {v1, v2, v3}.

2. Given subspaces H and K of a vector space V , the sum of H and K, written as H + K, is the set of
all vectors in V that can be written as the sum of two vectors, one in H and the other in K; that is,

H + K = {w|w = u + v for some u ∈ H and some v ∈ K}

(a) Show that H + K is a subspace of V .


(b) Show that H is a subspace of H + K and K is a subspace of H + K.

Proof.

(a) Since H and K are subspaces of V , the zero vector 0 belongs to both. Taking u = v = 0, we have 0 = 0 + 0, which, by definition, belongs to H + K. Next, we show that H + K is closed under both addition and scalar multiplication. Suppose w1, w2 are two vectors in H + K. By definition, they can be written as

w1 = u1 + v1 , w2 = u2 + v2 , for some u1 , u2 ∈ H and some v1 , v2 ∈ K.

Hence,
w1 + w2 = (u1 + v1 ) + (u2 + v2 ) = (u1 + u2 ) + (v1 + v2 ),


where u1 + u2 ∈ H because H is a subspace, hence closed under addition, and v1 + v2 ∈ K similarly. This shows that w1 + w2 can be written as the sum of two vectors, one in H and the other in K. So, again by definition, w1 + w2 ∈ H + K; that is, H + K is closed under addition.
For scalar multiplication, note that for any scalar c,

cw1 = c(u1 + v1 ) = cu1 + cv1 ,

where cu1 ∈ H because H is closed under scalar multiplication, and cv1 ∈ K by the same argument. Now that cw1 has been written as the sum of two vectors, one in H and the other in K, it is in H + K. That is, H + K is closed under scalar multiplication, and we are done.
(b) Since H is a subspace of V , it is nonempty and closed under addition and scalar multiplication. We only need to show that H is a subset of H + K. This follows from the fact that each vector u ∈ H can be written as u = u + 0, where u belongs to H and the zero vector 0 belongs to K. A similar argument shows that K is a subspace of H + K, too.

3. Let x and y be linearly independent elements of a vector space V . Show that u = ax + by and v = cx + dy are linearly independent if and only if ad − bc ≠ 0. Is the entire collection x, y, u, v linearly independent?
 )
a c
Proof. Let A = (x y), B = (u v), C = , then
b d
 )
a c
AC = (x y) = (ax + by cx + dy) = (u v) = B.
b d

Two key facts we will use are that u and v (respectively, x and y) are linearly independent if and only if the homogeneous system Br = 0 (respectively, Ar = 0) has only the trivial solution; call these Fact 1 and Fact 2. Now slow down, carefully work through the following chain of equivalences, and make sure you really understand each step involved.

u, v are linearly independent
⇐⇒ Br = 0 has only the trivial solution   (by Fact 1)
⇐⇒ (AC)r = 0 has only the trivial solution   (since AC = B)
⇐⇒ A(Cr) = 0 has only the trivial solution   (by associativity)
⇐⇒ Cr = 0 has only the trivial solution   (by Fact 2, with Cr in place of r)
⇐⇒ C is nonsingular
⇐⇒ det C ≠ 0
⇐⇒ ad − bc ≠ 0

The entire collection x, y, u, v is linearly dependent, since we have four scalars a, b, −1, 0, not all zero, such that the linear combination ax + by + (−1)u + 0v = 0.
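The criterion ad − bc ≠ 0 is easy to test numerically. A NumPy sketch (the vectors x, y and the scalars below are illustrative choices, not from the problem):

```python
import numpy as np

# Any linearly independent pair x, y works; these are illustrative.
x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, -1.0])

def independent(u, v):
    # u, v are linearly independent iff the matrix (u v) has rank 2.
    return np.linalg.matrix_rank(np.column_stack([u, v])) == 2

# a, b, c, d = 2, 3, 4, 6: ad - bc = 12 - 12 = 0, so u, v are
# dependent (indeed v = 2u here).
u, v = 2.0 * x + 3.0 * y, 4.0 * x + 6.0 * y
print(independent(u, v))   # False

# a, b, c, d = 2, 3, 4, 7: ad - bc = 14 - 12 = 2, nonzero.
u, v = 2.0 * x + 3.0 * y, 4.0 * x + 7.0 * y
print(independent(u, v))   # True
```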

 
4. Find bases for the column space (range) and null space (kernel) of the matrix A = [ −2 4 −2 −4 ; 2 −6 −3 1 ; −3 8 2 −3 ].
Solution. To nd the basis for column space, we need to nd pivot column(s). To nd the basis for
null space, we need to nd general solution to the homogeneous system Ax = 0. Both can be achieved


by reducing the matrix to echelon form.


     
[ −2  4 −2 −4 ]   R2 + R1        [ −2  4 −2 −4 ]   R3 + R2   [ −2  4 −2 −4 ]
[  2 −6 −3  1 ]   R3 − (3/2)R1   [  0 −2 −5 −3 ]  -------->  [  0 −2 −5 −3 ]
[ −3  8  2 −3 ]  ------------->  [  0  2  5  3 ]             [  0  0  0  0 ]

We see that the rst two columns are pivot columns, so the rst two column of the ORIGINAL
MATRIX A, namely, {(−2, 2, −3)T , (4, −6, 8)T }, form a basis for Col A. The last two columns are
free, and we can easily read the general solution from the echelon form:
5 3
x2 = − x3 − x4 , x1 = 2x2 − x3 − 2x4 = −6x3 − 5x4 , x3 , x4 free
2 2
Written in vector form,
           
x1 −6x3 − 5x4 −6x3 −5x4 −6 −5
 x2   − 5 x3 − 3 x4   − 5 x3   − 3 x4   5   − 32 
x=   2 2 = 2 + 2  = x3  2  + x4 
 x3  = 

x3   x3   0   1   0 
x4 x4 0 x4 0 1

Thus, {(−6, 52 , 1, 0)T , (−5, 32 , 0, 1)T } form a basis for Nul A.
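Both bases can be sanity-checked numerically. A NumPy sketch (not part of the original solution; arrays follow the computation above):

```python
import numpy as np

A = np.array([[-2.0,  4.0, -2.0, -4.0],
              [ 2.0, -6.0, -3.0,  1.0],
              [-3.0,  8.0,  2.0, -3.0]])

# The two null-space basis vectors found above.
n1 = np.array([-6.0, -2.5, 1.0, 0.0])
n2 = np.array([-5.0, -1.5, 0.0, 1.0])
print(np.allclose(A @ n1, 0.0), np.allclose(A @ n2, 0.0))  # True True

# rank A = 2, and the first two columns are themselves independent,
# confirming they form a basis for Col A.
print(np.linalg.matrix_rank(A))         # 2
print(np.linalg.matrix_rank(A[:, :2]))  # 2
```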

5. Show that {u1 = (3, −5)^T, u2 = (−4, 6)^T} is a basis for R^2. Let x = (2, −6)^T. Find the coordinate vector for x with respect to this basis.
Solution. First of all, u1 and u2 are linearly independent because they are not multiples of each other. Next, we characterize the vectors in span{u1, u2}. Suppose a vector b ∈ R^2 belongs to span{u1, u2}; then the linear system Ay = b is consistent, where A = (u1 u2). Applying Gaussian elimination to the augmented matrix, we get
[  3 −4 | b1 ]   R2 + (5/3)R1   [ 3   −4 | b1            ]
[ −5  6 | b2 ]  ------------->  [ 0 −2/3 | b2 + (5/3)b1 ]

The coefficient matrix has a pivot in each row, so the system is consistent for every b ∈ R^2. Therefore, span{u1, u2} = R^2, and {u1, u2} form a basis for R^2. To find the coordinate vector for x, we need to find the solution to Ay = x. Replacing b1, b2 by 2, −6 respectively in the echelon form obtained above, we can write out the solution y = (6, 4)^T. That is, x = 6u1 + 4u2, so the coordinate vector for x with respect to {u1, u2} is (6, 4)^T.
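A quick numerical check with NumPy (not part of the original solution):

```python
import numpy as np

u1, u2 = np.array([3.0, -5.0]), np.array([-4.0, 6.0])
A = np.column_stack([u1, u2])
x = np.array([2.0, -6.0])

# det A = 18 - 20 = -2, nonzero, so {u1, u2} is a basis for R^2.
print(abs(np.linalg.det(A)) > 1e-12)  # True

y = np.linalg.solve(A, x)       # coordinates of x in this basis
print(np.allclose(y, [6.0, 4.0]))     # True
print(np.allclose(6 * u1 + 4 * u2, x))  # True: x = 6 u1 + 4 u2
```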

6. Let V be an inner product space.

(a) Prove that 〈x, v〉 = 0 for all v ∈ V if and only if x = 0.


(b) Prove that 〈x, v〉 = 〈y, v〉 for all v ∈ V if and only if x = y.
(c) Let v1 , · · · , vn be a basis for V . Prove that 〈x, vi 〉 = 〈y, vi 〉, i = 1, · · · , n, if and only if x = y.

Proof.

(a) Suppose 〈x, v〉 = 0 for all v ∈ V . Simply let v = x; then 〈x, x〉 = 0, which implies x = 0. Conversely, if x = 0, then 〈x, v〉 = 〈0, v〉 = 0 for all v ∈ V by bilinearity.


(b) We reduce the equivalence as follows:

〈x, v〉 = 〈y, v〉, ∀ v ∈ V
⇐⇒ 〈x, v〉 − 〈y, v〉 = 0, ∀ v ∈ V
⇐⇒ 〈x − y, v〉 = 0, ∀ v ∈ V   (by bilinearity)
⇐⇒ x − y = 0   (by part (a))
⇐⇒ x = y

(c) If x = y, of course we have 〈x, vi 〉 = 〈y, vi 〉, i = 1, 2, · · · , n. Conversely, suppose 〈x, vi 〉 = 〈y, vi 〉,


i = 1, · · · , n. Since v1 , · · · , vn is a basis for V , any vector v ∈ V can be written as a linear
combination of the n vectors, say, v = c1 v1 + c2 v2 + · · · + cn vn . Linearity of inner product implies

〈x, v〉 = 〈x, c1 v1 + c2 v2 + · · · + cn vn 〉
= c1 〈x, v1 〉 + c2 〈x, v2 〉 + · · · + cn 〈x, vn 〉 (by linearity)
= c1 〈y, v1 〉 + c2 〈y, v2 〉 + · · · + cn 〈y, vn 〉 (since 〈x, vi 〉 = 〈y, vi 〉)
= 〈y, c1 v1 + c2 v2 + · · · + cn vn 〉 (by linearity)
= 〈y, v〉

Since this equality holds for all v ∈ V , part (b) tells us that x = y.
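Part (c) can be illustrated in R^n with the dot product: the n numbers 〈x, vi〉 determine x uniquely. A NumPy sketch (the basis matrix V and vector x below are illustrative choices):

```python
import numpy as np

# Basis vectors of R^3 as the columns of an invertible matrix V.
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
x = np.array([2.0, -1.0, 3.0])

# With the dot product, the vector of inner products <x, vi> is V^T x.
# Since V is invertible, these n numbers determine x uniquely.
g = V.T @ x
x_recovered = np.linalg.solve(V.T, g)
print(np.allclose(x_recovered, x))  # True
```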

7. Prove that
(a1 + a2 + · · · + an)^2 ≤ n(a1^2 + a2^2 + · · · + an^2)
for any real numbers a1 , · · · , an . When does equality hold?
Proof. Let u = (a1 , a2 , · · · , an )T , v = (1, 1, · · · , 1)T . Then,

u · v = a1 + a2 + · · · + an,   ‖u‖^2 = a1^2 + a2^2 + · · · + an^2,   ‖v‖^2 = n.

By the Cauchy–Schwarz inequality, |u · v| ≤ ‖u‖ ‖v‖. Squaring both sides, we obtain

(a1 + a2 + · · · + an)^2 ≤ n(a1^2 + a2^2 + · · · + an^2).

Equality holds in Cauchy–Schwarz exactly when u and v are parallel, that is, when a1 = a2 = · · · = an. This completes the proof.
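The inequality and its equality case are easy to test numerically. A NumPy sketch (the random values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.standard_normal(7)    # arbitrary real numbers a1, ..., an
n = a.size

# (a1 + ... + an)^2 <= n (a1^2 + ... + an^2)
print(a.sum()**2 <= n * (a**2).sum())   # True

# Equality when all ai are equal (u parallel to v = (1, ..., 1)).
b = np.full(5, 3.0)
print(np.isclose(b.sum()**2, b.size * (b**2).sum()))  # True
```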

8. Verify that the formula ‖v‖ = max{|v1 + v2 |, |v1 − v2 |} defines a norm on R^2. Establish the equivalence between this norm and the usual Euclidean norm ‖ · ‖2.
Proof. We need to verify positivity, homogeneity, and the triangle inequality one by one.

Positivity: Since |v1 + v2 | ≥ 0 and |v1 − v2 | ≥ 0, it is clear that ‖v‖ ≥ 0. Moreover, ‖v‖ = 0 if and only if |v1 + v2 | = |v1 − v2 | = 0, that is, v1 = v2 = 0, namely v = 0.
Homogeneity: ‖cv‖ = max{|cv1 + cv2 |, |cv1 − cv2 |} = |c| max{|v1 + v2 |, |v1 − v2 |} = |c| ‖v‖.
Triangle Inequality: We need the triangle inequality for absolute values,

|a + b| ≤ |a| + |b|,

and the fact (call it Fact 3) that the maximum of sums is at most the sum of maximums:

max{a1 + a2 , b1 + b2 } ≤ max{a1 , b1 } + max{a2 , b2 }.


Both inequalities are standard. With them, we can obtain the triangle inequality for the norm ‖ · ‖:

‖u + v‖ = max{|(u1 + v1 ) + (u2 + v2 )|, |(u1 + v1 ) − (u2 + v2 )|}   (by definition)
≤ max{|u1 + u2 | + |v1 + v2 |, |u1 − u2 | + |v1 − v2 |}   (by the triangle inequality)
≤ max{|u1 + u2 |, |u1 − u2 |} + max{|v1 + v2 |, |v1 − v2 |}   (by Fact 3)
= ‖u‖ + ‖v‖.   (by definition)

This directly proves that ‖ · ‖ defines a norm. Alternatively, it can be verified that

‖v‖ = max{|v1 + v2 |, |v1 − v2 |} = |v1 | + |v2 | = ‖v‖1,

namely, ‖ · ‖ is actually the 1-norm ‖ · ‖1. The proof is not difficult and left to you.
To show the equivalence of the two norms, we need to find two POSITIVE constants m, M such that

m‖v‖2 ≤ ‖v‖ ≤ M ‖v‖2, for all v ∈ R^2.

You may already find it convenient to compare squares of norms when the Euclidean norm is involved. So, let's square:

‖v‖^2 = (max{|v1 + v2 |, |v1 − v2 |})^2 = max{|v1 + v2 |^2, |v1 − v2 |^2}
= max{v1^2 + v2^2 + 2v1 v2 , v1^2 + v2^2 − 2v1 v2 } = v1^2 + v2^2 + max{2v1 v2 , −2v1 v2 }
= v1^2 + v2^2 + |2v1 v2 | ≥ v1^2 + v2^2 = ‖v‖2^2.

This allows us to take m = 1. On the other hand, since |2v1 v2 | ≤ v1^2 + v2^2,

‖v‖^2 = v1^2 + v2^2 + |2v1 v2 | ≤ 2(v1^2 + v2^2) = 2‖v‖2^2.

Therefore, we can choose M = √2. This completes the proof.
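Both claims (the norm coincides with the 1-norm, and it is sandwiched between ‖v‖2 and √2 ‖v‖2) can be tested numerically. A NumPy sketch (not part of the original proof):

```python
import numpy as np

def custom_norm(v):
    # The norm from the problem: max{|v1 + v2|, |v1 - v2|}.
    return max(abs(v[0] + v[1]), abs(v[0] - v[1]))

rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.standard_normal(2)
    # The max formula coincides with the 1-norm |v1| + |v2| ...
    assert np.isclose(custom_norm(v), abs(v[0]) + abs(v[1]))
    # ... and satisfies ||v||_2 <= ||v|| <= sqrt(2) ||v||_2.
    e = np.linalg.norm(v)
    assert e - 1e-12 <= custom_norm(v) <= np.sqrt(2) * e + 1e-12
print("all checks passed")
```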


9. Prove that the matrix A = [ 1 1 1 ; 1 2 −2 ; 1 −2 14 ] is positive definite. Find its Cholesky factorization.

Proof. We apply Gaussian elimination to show the matrix has all positive pivots and to find the LDL^T factorization.
     
[ 1  1  1 ]   R2 − R1    [ 1  1  1 ]   R3 + 3R2   [ 1 1  1 ]
[ 1  2 −2 ]   R3 − R1    [ 0  1 −3 ]  --------->  [ 0 1 −3 ]
[ 1 −2 14 ]  --------->  [ 0 −3 13 ]              [ 0 0  4 ]

Recording the multipliers used in the elimination (l21 = 1, l31 = 1, l32 = −3) gives

    [ 1  0 0 ]
L = [ 1  1 0 ]
    [ 1 −3 1 ]

Now we see the matrix is regular and has all positive pivots 1, 1, 4, thus is positive definite. Let D = diag(1, 1, 4) and S = diag(1, 1, 2); then we obtain the Cholesky factorization

A = LDL^T = LS^2 L^T = (LS)(LS)^T = M M^T,

where

         [ 1  0 0 ]
M = LS = [ 1  1 0 ]
         [ 1 −3 2 ]
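The factorization can be verified numerically; NumPy's built-in Cholesky routine returns the same lower-triangular factor. A short sketch (not part of the original solution):

```python
import numpy as np

A = np.array([[1.0,  1.0,  1.0],
              [1.0,  2.0, -2.0],
              [1.0, -2.0, 14.0]])

M = np.array([[1.0,  0.0, 0.0],
              [1.0,  1.0, 0.0],
              [1.0, -3.0, 2.0]])

print(np.allclose(M @ M.T, A))            # True: A = M M^T
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: A is positive definite

# The Cholesky factor with positive diagonal is unique, so NumPy
# recovers exactly M.
print(np.allclose(np.linalg.cholesky(A), M))  # True
```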
